SIMD matmul program gives different numerical results

This looks normal; adding numbers in a different order produces different rounding in the temporaries.

FP math is not associative; optimizing as if it were will change the results.¹ See Is floating point addition and multiplication associative? / Are floating point operations in C associative?

The amount of change depends on the data. Differences only in the 5th decimal place seem reasonable for float.
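
A tiny self-contained demo of how summation order changes the rounding (the values here are made up to exaggerate the effect; a matmul sees much smaller relative differences):

```c
#include <stdio.h>

int main(void) {
    float a = 1e8f, b = -1e8f, c = 0.5f;
    float left  = (a + b) + c;   // a and b cancel exactly, then add 0.5  -> 0.5
    float right = a + (b + c);   // 0.5 is lost when added to -1e8 (ulp is 8) -> 0.0
    printf("(a+b)+c = %g\na+(b+c) = %g\n", left, right);
    return 0;
}
```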


Unless you were taking special numerical precautions like adding up the small numbers first, the sequential-order result isn’t “more correct”; the two results just have different errors.

In fact, using multiple accumulators generally increases precision for large lists, assuming your numbers all have similar magnitude. (Ideally multiple SIMD vectors each composed of multiple elements, to hide FP-add or FMA latency).
https://en.wikipedia.org/wiki/Pairwise_summation is a numerical technique that takes this to the next level: summing subsets of the list in a tree, so no single array element ever gets added to a running total that is much larger than it. See, for example, How to avoid less precise sum for numpy-arrays with multiple columns.
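
A scalar sketch of the idea (the function name and the size-8 base case are arbitrary choices for illustration, not something from your code):

```c
#include <stddef.h>

// Pairwise (tree) summation: recursively sum halves so no single element
// is ever added to a running total that's already huge compared to it.
float pairwise_sum(const float *x, size_t n) {
    if (n <= 8) {                 // small base case: plain sequential sum
        float s = 0.0f;
        for (size_t i = 0; i < n; i++)
            s += x[i];
        return s;
    }
    size_t half = n / 2;
    return pairwise_sum(x, half) + pairwise_sum(x + half, n - half);
}
```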

Using a fixed number of accumulators (e.g. 8x __m256 = 64 float accumulators) might reduce the expected error by a constant factor of 64, rather than reducing its growth from N to log N the way full pairwise summation does.
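
For concreteness, a sketch of what a multi-accumulator reduction looks like for a dot-product-style loop, assuming AVX2 + FMA and n a multiple of 32. It uses 4 × __m256 to keep it short (8 would be better for hiding 4-cycle-latency, 2-per-clock FMA); the function name is made up:

```c
#include <immintrin.h>
#include <stddef.h>

// Compile with e.g. -O3 -mavx2 -mfma. Assumes n % 32 == 0 for simplicity.
float dot_multi_acc(const float *a, const float *b, size_t n) {
    __m256 acc0 = _mm256_setzero_ps();
    __m256 acc1 = _mm256_setzero_ps();
    __m256 acc2 = _mm256_setzero_ps();
    __m256 acc3 = _mm256_setzero_ps();
    for (size_t i = 0; i < n; i += 32) {     // 4 independent dependency chains
        acc0 = _mm256_fmadd_ps(_mm256_loadu_ps(a+i),    _mm256_loadu_ps(b+i),    acc0);
        acc1 = _mm256_fmadd_ps(_mm256_loadu_ps(a+i+8),  _mm256_loadu_ps(b+i+8),  acc1);
        acc2 = _mm256_fmadd_ps(_mm256_loadu_ps(a+i+16), _mm256_loadu_ps(b+i+16), acc2);
        acc3 = _mm256_fmadd_ps(_mm256_loadu_ps(a+i+24), _mm256_loadu_ps(b+i+24), acc3);
    }
    // Combine accumulators vertically; one horizontal sum only at the very end.
    __m256 acc = _mm256_add_ps(_mm256_add_ps(acc0, acc1), _mm256_add_ps(acc2, acc3));
    float lane[8];
    _mm256_storeu_ps(lane, acc);
    float sum = 0.0f;
    for (int k = 0; k < 8; k++)              // runs once, outside the hot loop
        sum += lane[k];
    return sum;  // see the hsum sketch further down for a shuffle-based reduction
}
```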


Footnote 1: Associativity is what makes parallelization, SIMD, and multiple accumulators possible at all. See Associativity gives us parallelizability. But what does commutativity give?

On a machine with, for example, 4-cycle-latency, 2-per-clock-throughput FMA and a SIMD width of 8 floats (i.e. a Skylake system with AVX2), the potential speedup is 4*2 = 8 from multiple accumulators, times 8 from SIMD width, times the number of cores, vs. a pure sequential version, even for problems where the result might be less accurate instead of just different.

Most people consider a factor of 8*8 = 64 worth it! (And you can in theory also parallelize for another factor of maybe 4 on a quad-core, assuming perfect scaling for large matrices).

You’re already using float instead of double for performance.

See also Why does mulss take only 3 cycles on Haswell, different from Agner’s instruction tables? for more about using multiple accumulators to hide FMA latency in a reduction, exposing that other factor of 8 speedup.

Also, do not use hadd inside an inner-most loop. Sum vertically, and use one efficient horizontal reduction at the end of the loop (Fastest way to do horizontal float vector sum on x86). You really want to avoid having the compiler extract your vectors to scalar at every step; that defeats most of the benefit of SIMD. That's on top of the fact that hadd isn't worth using even for the horizontal sum of a single vector; it costs 2 shuffles + a regular add on all existing CPUs.
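
For that end-of-loop reduction of a single __m256, a sketch along the lines of the linked horizontal-sum answer (shuffles plus vertical adds, no haddps):

```c
#include <immintrin.h>

// Horizontal sum of one __m256: narrow 8 -> 4 -> 2 -> 1 with cheap shuffles.
static inline float hsum256_ps(__m256 v) {
    __m128 lo   = _mm256_castps256_ps128(v);      // low 128 bits (free)
    __m128 hi   = _mm256_extractf128_ps(v, 1);    // high 128 bits
    __m128 sum4 = _mm_add_ps(lo, hi);             // 4 partial sums
    __m128 shuf = _mm_movehdup_ps(sum4);          // lanes [1,1,3,3]
    __m128 sum2 = _mm_add_ps(sum4, shuf);         // partial sums in lanes 0 and 2
    shuf        = _mm_movehl_ps(shuf, sum2);      // move lane 2 down to lane 0
    __m128 sum1 = _mm_add_ss(sum2, shuf);         // final sum in lane 0
    return _mm_cvtss_f32(sum1);
}
```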
