Why does integer division in C# return an integer and not a float?

While it is common for new programmers to mistakenly perform integer division when they actually meant floating-point division, integer division is a very common operation in practice. If you assume that people rarely use it, and that every division should therefore require remembering to cast to floating point, you are mistaken.

First off, integer division is quite a bit faster, so if you only need a whole-number result, you want the more efficient operation.
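If you want to see the difference on your own machine, here is a rough micro-benchmark sketch. The loop structure and constants are just for illustration, and the absolute timings depend heavily on your hardware and JIT, so treat the numbers only as a ballpark:

```csharp
using System;
using System.Diagnostics;

class DivBench
{
    static void Main()
    {
        const int N = 100_000_000;
        long intSum = 0;   // accumulate results so the JIT can't discard the loops
        double dblSum = 0;

        var sw = Stopwatch.StartNew();
        for (int i = 1; i <= N; i++) intSum += 1_000_000 / i;    // integer division
        sw.Stop();
        Console.WriteLine($"int    : {sw.ElapsedMilliseconds} ms (sum {intSum})");

        sw.Restart();
        for (int i = 1; i <= N; i++) dblSum += 1_000_000.0 / i;  // floating-point division
        sw.Stop();
        Console.WriteLine($"double : {sw.ElapsedMilliseconds} ms (sum {dblSum})");
    }
}
```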

Secondly, there are a number of algorithms that rely on integer division, and if the result of division were always a floating-point number you would be forced to round it every time. One example off the top of my head is converting a number to a different base: calculating each digit uses the integer quotient and remainder of the number, not its floating-point quotient.
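Here is a minimal sketch of that base-conversion algorithm (the `ToBase` helper and its names are illustrative, not a standard API):

```csharp
using System;
using System.Text;

class BaseConverter
{
    // Converts a non-negative integer to its representation in the given base
    // (2..36). Each digit falls out of integer division and the remainder;
    // with floating-point division you would have to truncate at every step.
    static string ToBase(int value, int radix)
    {
        if (radix < 2 || radix > 36)
            throw new ArgumentOutOfRangeException(nameof(radix));
        if (value == 0) return "0";

        const string digits = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ";
        var sb = new StringBuilder();
        while (value > 0)
        {
            sb.Insert(0, digits[value % radix]); // the remainder picks the digit
            value /= radix;                      // integer division shifts one place right
        }
        return sb.ToString();
    }

    static void Main()
    {
        Console.WriteLine(ToBase(255, 16)); // FF
        Console.WriteLine(ToBase(255, 2));  // 11111111
    }
}
```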

For these (and other related) reasons, integer division results in an integer. If you want the floating-point quotient of two integers, you just need to remember to cast one of them to a double, float, or decimal.
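For example (the variable names are just for illustration):

```csharp
int a = 7, b = 2;

Console.WriteLine(a / b);           // 3   -- both operands are int
Console.WriteLine((double)a / b);   // 3.5 -- one cast promotes the whole expression
Console.WriteLine(a / (double)b);   // 3.5 -- casting either operand works
Console.WriteLine((double)(a / b)); // 3   -- too late: the integer division already happened
```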
