Why can’t Double be implicitly cast to Decimal?

If you convert from double to decimal, you can lose information – the number may be completely out of range, as the range of a double is much larger than the range of a decimal.
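To see the range problem in action, here's a minimal sketch (the class name and the 1e30 value are just illustrative):

```csharp
using System;

class RangeSketch
{
    static void Main()
    {
        // double covers roughly ±1.7e308; decimal tops out around ±7.9e28.
        double big = 1e30;

        // decimal d = big;          // compile-time error: no implicit conversion
        try
        {
            decimal d = (decimal)big; // the explicit cast compiles...
            Console.WriteLine(d);
        }
        catch (OverflowException)
        {
            // ...but throws at runtime, because 1e30 is outside decimal's range.
            Console.WriteLine("1e30 doesn't fit in a decimal.");
        }
    }
}
```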

If you convert from decimal to double, you can lose information – for example, 0.1 is exactly representable in decimal but not in double, and decimal actually uses a lot more bits for precision than double does (a 96-bit mantissa against double's 53).
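A quick sketch of that precision loss (the literal values are just illustrative):

```csharp
using System;

class PrecisionSketch
{
    static void Main()
    {
        // 0.1 is exact as a decimal, but the nearest double is only an approximation.
        Console.WriteLine(0.1m);                  // 0.1
        Console.WriteLine(0.1.ToString("G17"));   // 0.10000000000000001

        // decimal keeps 28-29 significant digits; double keeps only about 15-17.
        decimal exact = 1.2345678901234567890123456789m;
        double approx = (double)exact;            // explicit cast required
        Console.WriteLine(approx.ToString("R"));  // roughly 1.2345678901234567 (the later digits are gone)
    }
}
```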

Implicit conversions shouldn’t lose information (the conversion from long to double might, but that’s a different argument). If you’re going to lose information, you should have to tell the compiler that you’re aware of that, via an explicit cast.
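Putting that together, both directions require an explicit cast, while the long-to-double conversion mentioned above is allowed implicitly even though it can drop precision. A sketch, with illustrative names:

```csharp
using System;

class CastSketch
{
    static void Main()
    {
        double d = 0.1;
        decimal m = 0.1m;

        // Neither direction compiles without a cast:
        // decimal m2 = d;   // compile-time error: no implicit conversion exists
        // double  d2 = m;   // compile-time error: no implicit conversion exists

        // The explicit casts say "I know this may lose information":
        decimal m2 = (decimal)d;
        double  d2 = (double)m;

        // Contrast with long -> double, which C# does allow implicitly,
        // even though it can silently lose precision:
        long big = long.MaxValue;
        double viaDouble = big;                      // implicit, yet not exact
        Console.WriteLine(big);                      // 9223372036854775807
        Console.WriteLine(viaDouble.ToString("R"));  // roughly 9.2233720368547758E+18
    }
}
```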

That’s why there aren’t implicit conversions either way.
