Half-precision floating-point arithmetic on Intel chips

related: https://scicomp.stackexchange.com/questions/35187/is-half-precision-supported-by-modern-architecture – has some info about BFloat16 in Cooper Lake and Sapphire Rapids, and some non-Intel info. Sapphire Rapids will have both BF16 and FP16, with FP16 using the same IEEE 754 binary16 format as the F16C conversion instructions, not brain-float. And AVX512-FP16 has support for most math operations, unlike BF16, which just has conversion to/from … Read more
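For context, a minimal C sketch (an illustration, not code from the post) of the F16C conversion intrinsics _mm256_cvtps_ph and _mm256_cvtph_ps mentioned above. F16C only converts between binary32 and IEEE 754 binary16; the arithmetic itself still happens in float:

```c
#include <immintrin.h>
#include <stdio.h>

/* Build with: gcc -mf16c f16c_demo.c (requires an Ivy Bridge or later CPU) */
int main(void) {
    float in[8] = {1.0f, 1.1f, 1.5f, 3.14159f, -2.0f, 0.1f, 65504.0f, 1e-4f};

    __m256 v = _mm256_loadu_ps(in);
    /* Round 8 floats to IEEE binary16 (round to nearest even). */
    __m128i half = _mm256_cvtps_ph(v, _MM_FROUND_TO_NEAREST_INT);
    /* Widen back to float; each value now carries only 11 bits of precision. */
    __m256 back = _mm256_cvtph_ps(half);

    float out[8];
    _mm256_storeu_ps(out, back);
    for (int i = 0; i < 8; i++)
        printf("%.7g -> %.7g\n", in[i], out[i]);
    return 0;
}
```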

Comparing float and double

This is because 1.1 is not exactly representable in binary floating-point. But 1.5 is. As a result, the float and double representations will hold slightly different values of 1.1. Here are the two values written out exactly in binary:

(float) 1.1 = (1.00011001100110011001101)₂
(double)1.1 = (1.0001100110011001100110011001100110011001100110011010)₂

Thus, when you compare them (and the float … Read more
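A short C sketch (an illustration, not from the original answer) that makes the point observable: the float is widened to double for the comparison, and that widening is exact, so the two different roundings of 1.1 compare unequal while 1.5 compares equal:

```c
#include <stdio.h>

int main(void) {
    float  f = 1.1f;  /* 1.1 rounded to a 24-bit significand */
    double d = 1.1;   /* 1.1 rounded to a 53-bit significand */

    /* f is converted to double for the comparison; the conversion is
       exact, so we really compare the two different roundings of 1.1. */
    printf("1.1f == 1.1  -> %d\n", f == d);            /* prints 0 */

    /* 1.5 decimal is 1.1 in binary (1 + 1/2): exact in both formats. */
    printf("1.5f == 1.5  -> %d\n", (float)1.5 == 1.5); /* prints 1 */

    /* Show the stored values with more digits than they can carry. */
    printf("float : %.25f\n", f);
    printf("double: %.25f\n", d);
    return 0;
}
```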

Why does str(float) return more digits in Python 3 than Python 2?

No, there’s no PEP. There’s an issue in the bug tracker, and an associated discussion on the Python developers mailing list. While I was responsible for proposing and implementing the change, I can’t claim it was my idea: it had arisen during conversations with Guido at EuroPython 2010. Some more details: as already mentioned in … Read more
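The change under discussion: Python 3.1's repr (and hence str) returns the shortest decimal string that round-trips to the same double, using an adaptation of David Gay's algorithm, whereas Python 2's str truncated to 12 significant digits. A rough C sketch of the shortest round-trip idea (an illustration only; shortest_repr is a hypothetical helper, not CPython's actual implementation):

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical helper: find the fewest significant digits that
   reproduce x exactly when parsed back (17 always suffices for
   an IEEE 754 double). */
static void shortest_repr(double x, char *buf, size_t n) {
    for (int prec = 1; prec <= 17; prec++) {
        snprintf(buf, n, "%.*g", prec, x);
        if (strtod(buf, NULL) == x)
            return;  /* round-trips: this is the shortest form */
    }
}

int main(void) {
    char buf[32];
    double x = 0.1 + 0.2;

    shortest_repr(x, buf, sizeof buf);
    printf("shortest round-trip: %s\n", buf);   /* 0.30000000000000004 */
    printf("12 digits (Py2 str): %.12g\n", x);  /* 0.3 */
    return 0;
}
```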