Is it Pythonic to use bools as ints?

I’ll be the odd voice out (since all other answers are decrying the use of the fact that False == 0 and True == 1, as the language guarantees): I claim that using this fact to simplify your code is perfectly fine.
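
For the record, here is what that guarantee looks like at the interpreter prompt (just an illustrative session, with outputs as the standard CPython REPL prints them):

    >>> False == 0, True == 1      # guaranteed by the language
    (True, True)
    >>> issubclass(bool, int)      # bool is literally a subclass of int
    True
    >>> isinstance(True, int)
    True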

Historically, logical true/false operations tended to simply use 0 for false and 1 for true. In the course of Python 2.2’s life-cycle, Guido noticed that too many modules started with assignments such as false = 0; true = 1, which produced boilerplate and useless variation (the latter because the capitalization of true and false was all over the place: some used all-caps, some all-lowercase, some cap-initial), so he introduced the bool subclass of int and its True and False constants.

There was quite some pushback at the time since many of us feared that the new type and constants would be used by Python newbies to restrict the language’s abilities, but Guido was adamant that we were just being pessimistic: nobody would ever understand Python so badly, for example, as to avoid the perfectly natural use of False and True as list indices, or in a summation, or other such perfectly clear and useful idioms.
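
To make those idioms concrete, here is a tiny illustrative session (the names and values are just made-up examples of mine):

    >>> results = ['failed', 'passed']
    >>> score = 7
    >>> results[score > 5]                  # True indexes element 1, False element 0
    'passed'
    >>> sum(x > 3 for x in [1, 4, 5, 2])    # each True counts as 1: items greater than 3
    2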

The answers to this thread prove we were right: just as we feared, a total misunderstanding of the roles of this type and these constants has emerged, and people are avoiding, and, worse, urging others to avoid, perfectly natural Python constructs in favor of useless gyrations.

Fighting against the tide of such misunderstanding, I urge everybody to use Python as Python, not trying to force it into the mold of other languages whose functionality and preferred style are quite different. In Python, True and False are 99.9% like 1 and 0, differing exclusively in their str(...) (and thereby repr(...)) form; for every operation other than stringification, just feel free to use them without contortions. That goes for indexing, arithmetic, bit operations, and so on.
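
A short session showing exactly what does and does not change (again, toy expressions of my own choosing):

    >>> str(True), str(1)              # the one real difference: stringification
    ('True', '1')
    >>> True + True                    # arithmetic: behaves exactly like 1 + 1
    2
    >>> True & False, True | False     # bit operations (results are bools, with values 0 and 1)
    (False, True)
    >>> True << 3                      # shifting behaves like shifting 1
    8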
