Where is the likely performance bug here? [closed]

Lazy evaluation will not make bad algorithms perform better. It will just delay when those performance problems affect you. What lazy evaluation can help with is space complexity, i.e. reducing the amount of memory you need to execute your algorithm. Since the data is generated lazily, you will not (necessarily) have all the data in memory at the same time.

However, relying on lazy evaluation to fix your space (or time) complexity problems can easily shoot you in the foot. Take a look at the following example code:

var moves = Enumerable.Range(0, 5).Select(x => {
    Console.WriteLine("Generating " + x);
    return x;
});

var aggregate = moves.Aggregate((p, c) => {
    Console.WriteLine("Aggregating " + p + " and " + c);
    return p + c;
});

var newMoves = moves.Where(x => {
    Console.WriteLine("Filtering " + x);
    return x % 2 == 0;
});
newMoves.ToList(); // force the lazy filter to run

As you can see, both the aggregate and the newMoves rely on the lazily evaluated moves enumerable. Since the original count of moves is 5, we will see 4 “Aggregating” lines in the output, and 5 “Filtering” lines. But how often do you expect “Generating” to appear in the console?

The answer is 10. This is because moves is a generator and is being evaluated lazily. Every time it is enumerated, a new iterator is created for that consumer, which ultimately means that the generator body executes again from the start, producing independent results each time.
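The same behaviour can be sketched in Python terms with a re-iterable lazy sequence: every enumeration re-runs the body, so two consumers cost two full generator runs. (The class and the counter below are illustrative stand-ins, not from the original code.)

```python
calls = 0  # counts how many elements have been "generated" in total

class LazyRange:
    """Re-iterable lazy sequence: every __iter__ call re-executes the body,
    analogous to each foreach over a deferred C# IEnumerable."""
    def __init__(self, n):
        self.n = n

    def __iter__(self):
        global calls
        for x in range(self.n):
            calls += 1          # one "Generating" step per element
            yield x

moves = LazyRange(5)

aggregate = sum(moves)                          # first full enumeration: 5 generations
new_moves = [x for x in moves if x % 2 == 0]    # second enumeration: 5 more

print(calls)  # 10 -- the generator body ran twice over all 5 elements
```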

This is not necessarily a problem, but in your case, it very quickly becomes one. Assume that we continue the example code above with another round of aggregating. That second aggregate will consume newMoves, which in turn will consume the original moves. So to aggregate, we re-run both the newMoves generator and the original moves generator. And if we were to add another level of filtering, the next round of aggregating would run three interlocked generators, once again re-running the original moves generator.
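That compounding cost can be sketched as follows (a Python analogue with hypothetical helper names): each round wraps the previous lazy sequence in another filter, and each aggregation re-runs every layer below it, including the base generator.

```python
base_runs = 0  # counts how many elements the base generator has produced

def base():
    """Stands in for the original `moves` generator."""
    global base_runs
    for x in range(5):
        base_runs += 1
        yield x

def filtered(factory, pred):
    """Wrap a sequence factory in a lazy filter; enumerating the result
    re-executes everything underneath, mimicking chained LINQ Where calls."""
    def fresh():
        return (x for x in factory() if pred(x))
    return fresh

level0 = base
level1 = filtered(level0, lambda x: x % 2 == 0)
level2 = filtered(level1, lambda x: x > 0)

sum(level0())   # base generator runs: 5 elements generated
sum(level1())   # re-runs the base through one filter: +5
sum(level2())   # re-runs the base through two filters: +5

print(base_runs)  # 15 -- three aggregations, three full base re-runs
```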

Since your original moves generator creates an enumerable of quadratic size, and has an actual time complexity of O(n²), this is a real problem. With each iteration, you add another round of filtering, which is linear in the size of the moves enumerable, and the aggregation consumes the enumerable completely. So you end up with O(n² + n³ + n⁴ + n⁵ + …), i.e. the sum of n^j for j starting at 2 up to the number of filtering rounds. That is a very bad time complexity, and all just because you were trying to save memory by evaluating the moves lazily.

So the first step toward making this better is to avoid lazy evaluation. You are constantly iterating moves and filtering it, so you should keep it in memory. Yes, that means you have an array of quadratic size, but you won't actually need more than that. This also bounds your time complexity. You still need to filter the list in linear time (O(n²), since its size is n²), and you do that inside a loop, so you end up with cubic time (O(n³)), but that is already your upper limit: iterating the list a constant number of times within the loop only increases the time complexity by a constant factor, which doesn't matter.
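The fix can be sketched in Python terms, where `list()` plays the role of C#'s `.ToList()`: evaluate the lazy sequence once, keep the concrete result in memory, and iterate that from then on. (The generator here is a hypothetical stand-in for the expensive moves generator.)

```python
gen_runs = 0  # counts how many elements the generator has produced

def expensive_moves():
    """Stand-in for the quadratic-cost moves generator."""
    global gen_runs
    for x in range(5):
        gen_runs += 1
        yield x

moves = list(expensive_moves())   # generator body runs exactly once

total = sum(moves)                       # iterating the list: no re-generation
evens = [x for x in moves if x % 2 == 0] # neither does filtering it again

print(gen_runs)  # 5 -- regardless of how often `moves` is iterated afterwards
```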

Once you have done that, revisit your original problem and think about what you are actually doing. I believe you could reduce the computational complexity further if you made better use of the information you have, and used data structures (e.g. hash sets, or a graph in which the move cost is already stored) that aid the filtering and aggregation. I can't give you exact ideas since I don't know your original problem, but I'm sure there is something you can do.
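As a purely hypothetical illustration (the original problem isn't shown here), a hash set turns a repeated "is this move excluded?" check into an O(1) average-time lookup instead of a linear scan over a list:

```python
# Hypothetical: a precomputed set of moves that must be filtered out.
blocked = {3, 7, 11}

candidate_moves = range(10)

# Membership test against a set is O(1) on average per candidate,
# versus O(n) per candidate if `blocked` were a plain list.
legal = [m for m in candidate_moves if m not in blocked]

print(legal)
```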

Finally, if you have performance problems, always remember to profile your code. The profiler will tell you which parts of your code are the most expensive, so you get a clear idea of what you should try to optimize and what you don't need to focus on (optimizing the already fast parts will not make anything faster).
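For instance, Python's standard-library profiler gives exactly that per-function breakdown (the functions below are stand-ins, not from the original code):

```python
import cProfile
import io
import pstats

def expensive_part():
    # Deliberately heavy work so it dominates the profile.
    return sum(i * i for i in range(100_000))

def cheap_part():
    return 42

profiler = cProfile.Profile()
profiler.enable()
expensive_part()
cheap_part()
profiler.disable()

# Sort by cumulative time so the most expensive functions appear first.
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(10)
report = buf.getvalue()
print(report)
```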
