Example of O(n!)?
There you go. This is probably the most trivial example of a function that runs in O(n!) time (where n is the argument to the function):

```c
void nFacRuntimeFunc(int n) {
    for (int i = 0; i < n; i++) {
        nFacRuntimeFunc(n - 1);
    }
}
```
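To sanity-check the bound, here is a small Python sketch (my own translation of the function above, with an iteration counter added) showing that the total number of loop iterations grows at least factorially:

```python
import math

def n_fac_runtime_func(n, counter):
    # Python translation of nFacRuntimeFunc, counting loop iterations.
    for _ in range(n):
        counter[0] += 1
        n_fac_runtime_func(n - 1, counter)

for n in range(1, 6):
    counter = [0]
    n_fac_runtime_func(n, counter)
    print(n, counter[0], math.factorial(n))
# Iterations satisfy I(n) = n * (1 + I(n - 1)), so I(n) >= n! for every n >= 1.
```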
I don't know how Pandas implements this, but I did test it empirically. I ran the following code (in a Jupyter notebook) to test the speed of the operation:

```python
import pandas as pd

def get_dummy_df(n):
    return pd.DataFrame({'a': [1, 2] * n, 'b': [4, 5] * n, 'c': [7, 8] * n})

df = get_dummy_df(100)
print(df.shape)
%timeit df_r = df[df.columns[::-1]]

df = get_dummy_df(1000)
print(df.shape)
```

%timeit df_r = … Read more
You are correct that the outer loop iterates n times and the inner loop iterates n times as well, but you are double-counting the work. If you count up the total work done by summing the work done across each iteration of the top-level loop you get that the first iteration does n work, the … Read more
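A quick way to see this (a sketch, assuming the shrinking-inner-loop pattern the answer describes, where iteration i of the outer loop does n - i units of work) is to count the inner iterations directly instead of multiplying the two loop bounds:

```python
def count_inner_iterations(n):
    # The inner loop starts at i, so outer iteration i contributes n - i units of work.
    total = 0
    for i in range(n):
        for j in range(i, n):
            total += 1
    return total

# The total is n + (n-1) + ... + 1 = n(n+1)/2, which is O(n^2) --
# not the separate n * n you get by double-counting the two loops.
print(count_inner_iterations(10))  # 55 == 10 * 11 // 2
```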
The complexity is O(C(n,k)) which is O(n choose k). This ends up being equivalent to O(min(n^k, n^(n-k))).
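As a sanity check, the number of k-subsets enumerated really is C(n, k); a minimal Python check (assuming the algorithm in question enumerates all k-combinations):

```python
from itertools import combinations
from math import comb

n, k = 10, 3
generated = sum(1 for _ in combinations(range(n), k))
print(generated, comb(n, k))  # both are 120
```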
In the draft C++11 standard N3337 the answer can be found in § 24.2.1 paragraph 8: All the categories of iterators require only those functions that are realizable for a given category in constant time (amortized). Since each operation on an iterator must be constant time, iterating through n elements must be O(n).
Big O gives only an upper asymptotic bound, while big Theta also gives a lower bound. Everything that is Theta(f(n)) is also O(f(n)), but not the other way around. T(n) is said to be Theta(f(n)) if it is both O(f(n)) and Omega(f(n)). For this reason big-Theta is more informative than big-O notation, so if … Read more
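A concrete illustration (my own example, not from the original answer): T(n) = 3n² + 2n is Theta(n²), and therefore also O(n²) and O(n³), but it is not Theta(n³), because no positive constant times n³ stays below it:

```python
def T(n):
    return 3 * n * n + 2 * n

# Theta(n^2): sandwiched between constant multiples of n^2 for all n >= 1.
assert all(3 * n**2 <= T(n) <= 5 * n**2 for n in range(1, 10_000))

# O(n^3) also holds (it is only an upper bound)...
assert all(T(n) <= 5 * n**3 for n in range(1, 10_000))

# ...but T(n) is not Omega(n^3): even a tiny c fails for large n,
# so T(n) is not Theta(n^3).
assert not all(0.001 * n**3 <= T(n) for n in range(1, 10_000))
```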
The build-heap algorithm starts at the midpoint and moves items down as required. Let's consider a heap of 127 items (7 levels). In the worst case:

- 64 nodes (the leaf level) don't move at all
- 32 nodes move down one level
- 16 nodes move down two levels
- 8 nodes move down three levels
- 4 nodes … Read more
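The pattern above can be summed directly; a small Python sketch totalling the worst-case moves for a full heap of 2^levels − 1 nodes:

```python
def worst_case_moves(levels):
    # Full heap of 2^levels - 1 nodes: a node sitting k levels above the
    # leaves can sink at most k levels, and there are 2^(levels - 1 - k)
    # such nodes (64, 32, 16, ... for the 7-level example above).
    n = 2 ** levels - 1
    total = sum((2 ** (levels - 1 - k)) * k for k in range(levels))
    return n, total

print(worst_case_moves(7))  # (127, 120): total work stays below n, hence build-heap is O(n)
```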
The former is O(n), due to the use of PyList_New() with a known size. The latter is slightly worse than O(n), due to the need to resize the list after several appends.
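The truncated question isn't shown here, but assuming the two constructions being compared are `list()` over a sized iterable (which CPython can preallocate in one shot using the iterable's length hint) versus an explicit append loop (which triggers periodic over-allocation and resizing), a minimal sketch of the comparison:

```python
import timeit

def build_with_known_size(n):
    # list(range(n)) knows the final length up front and allocates once.
    return list(range(n))

def build_with_appends(n):
    # Repeated append grows the list incrementally, resizing as it fills up.
    out = []
    for i in range(n):
        out.append(i)
    return out

# Both produce the same list; only the allocation pattern differs.
assert build_with_known_size(1000) == build_with_appends(1000)

t_known = timeit.timeit(lambda: build_with_known_size(10_000), number=200)
t_append = timeit.timeit(lambda: build_with_appends(10_000), number=200)
```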
Here is a solution inspired by MapReduce: Simplified Data Processing on Large Clusters, which can therefore be written in a distributed way if you want. Map all of your elements in sets to pairs [element, set]. Group this list by element. You can just sort and pull out elements. Or you can create a hash … Read more
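The map and group steps described above can be sketched in a few lines of Python (the set names here are illustrative placeholders, not from the original answer):

```python
from collections import defaultdict

# Example input: a few named sets of elements.
sets = {"s1": {1, 2}, "s2": {2, 3}, "s3": {1, 3}}

# Map: emit one (element, set_id) pair per membership.
pairs = [(elem, sid) for sid, s in sets.items() for elem in s]

# Group by element (equivalent to sorting the pairs, or using a hash map).
groups = defaultdict(list)
for elem, sid in pairs:
    groups[elem].append(sid)

# groups[e] now lists every set containing e, ready for the next reduce step.
```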
This can be done with Memory complexity O(Sqrt(N)) and CPU complexity O(N * Log(Log(N))) with an optimized windowed Sieve of Eratosthenes, as implemented in the code example below. As no language was specified and as I do not know Python, I have implemented it in VB.net, however I can convert it to C# if you … Read more
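Since the original answer's VB.net code is truncated here, this is an independent Python sketch of the same windowed-sieve idea: keep only a sqrt(N)-sized segment live at a time, crossing off multiples of the base primes in each window:

```python
from math import isqrt

def segmented_sieve_count(limit):
    # Count primes <= limit using O(sqrt(limit)) memory.
    root = isqrt(limit)

    # Plain sieve for the base primes up to sqrt(limit).
    base = bytearray([1]) * (root + 1)
    base[0:2] = b"\x00\x00"
    for p in range(2, isqrt(root) + 1):
        if base[p]:
            base[p * p :: p] = bytearray(len(range(p * p, root + 1, p)))
    primes = [p for p in range(2, root + 1) if base[p]]

    count = 0
    window = root + 1  # segment length ~ sqrt(limit)
    for low in range(2, limit + 1, window):
        high = min(low + window - 1, limit)
        seg = bytearray([1]) * (high - low + 1)
        for p in primes:
            # First multiple of p in [low, high], never below p*p
            # so the base primes themselves are not crossed off.
            start = max(p * p, ((low + p - 1) // p) * p)
            for multiple in range(start, high + 1, p):
                seg[multiple - low] = 0
        count += sum(seg)
    return count

print(segmented_sieve_count(100))  # 25
```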