Why does Python have a maximum recursion depth?

There are actually a few issues here.

First, as NPE’s answer nicely explains, Python doesn’t eliminate tail calls, so many functions that would allow unlimited recursion in, say, Scheme are limited in Python.
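To make this concrete, here's a minimal sketch of a tail-recursive function (the hypothetical `countdown` below) that a Scheme implementation would run in constant stack space, but which CPython executes with one stack frame per call, so it blows up past the limit:

```python
import sys

def countdown(n):
    # The recursive call is the very last thing this function does (a tail
    # call). Scheme would reuse the current frame; CPython allocates a new
    # frame for every call instead.
    if n == 0:
        return "done"
    return countdown(n - 1)

# Well under the default limit (usually 1000), this works fine:
print(countdown(500))  # done

# Deep enough, and it raises RecursionError:
try:
    countdown(sys.getrecursionlimit() + 100)
except RecursionError:
    print("hit the recursion limit")
```

An iterative `while` loop over the same logic would of course run at any depth, which is exactly what tail-call elimination would give you automatically.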

Second, again as NPE explains, calls that can’t be eliminated take up space on the call stack. And even in languages that do TCE, there are plenty of recursive functions that can’t be treated as iteration. (Consider the naive Fibonacci function, which recursively calls itself twice.)
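The naive Fibonacci function mentioned above looks like this; note that only one of its two recursive calls could ever be in tail position, so tail-call elimination alone can't flatten it into a loop:

```python
def fib(n):
    # Two recursive calls per invocation. The first, fib(n - 1), is not a
    # tail call (its result still has to be added to fib(n - 2)), so TCE
    # cannot turn this into iteration; the stack genuinely grows with n.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # 55
```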

But why is the call stack a finite resource in the first place? Python stack frames can at least in principle be implemented on the heap and linked together (see Stackless for an existence proof of that principle), and in a 64-bit memory space, there’s room for a whole lot more than 1000 stack frames. (In fact, even the C stack on almost any modern platform can hold a whole lot more than 1000 recursive Python interpreter calls.)

Part of the reason is historical: the stock Python interpreter uses the fixed C stack to call itself recursively whenever you make a recursive call, and it was originally designed for 32-bit (and even 24- or 20-bit) platforms where that C stack is pretty small.

But that could have been changed, and Python 3.0 would have been a perfect place to change it. So why didn’t they? Because it was a conscious language-design decision. In Pythonic code, recursion is generally very shallow (e.g., code like os.walk that traverses shallow tree structures); if your function hits a depth anywhere near 1000, it’s more likely to be a bug than to be intentional. So the limit stays. Of course this is a bit circular—if they removed the limit (and, especially, if they eliminated tail calls), deeper recursion would become more idiomatic. But that’s kind of the point—Guido doesn’t want a language where deep recursion is idiomatic. (And most of the Python community agrees.)
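It's worth noting that the limit is a soft, adjustable cap, not a hard property of the language. A quick sketch (the probing function `depth` here is just for illustration; also note that raising the limit too far can overflow the real C stack and crash the interpreter, so the modest value below is an assumption that it fits on typical platforms):

```python
import sys

# The default cap is typically 1000.
print(sys.getrecursionlimit())

def depth(n=0):
    # Recurse until the interpreter refuses, and report how deep we got.
    try:
        return depth(n + 1)
    except RecursionError:
        return n

old_limit = sys.getrecursionlimit()
sys.setrecursionlimit(3000)
try:
    d = depth()
    print(d > 1000)  # True: the cap moved, so deeper recursion now works
finally:
    sys.setrecursionlimit(old_limit)  # restore the process-wide setting
```

The fact that a one-line call moves the cap underscores that 1000 is a design choice about idiomatic code, not a technical necessity.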
