Most insanely fast way to convert 9 char digits into an int or unsigned int

Yes, SIMD is possible, as mentioned in comments. You can take advantage of it to parse the HH, MM, and SS parts of the string at the same time.

Since you have a 100% fixed format with leading 0s where necessary, this is easier than How to implement atoi using SIMD? – the place-values are fixed, and we don’t need any compare / bit-scan or pcmpistri to look up a shuffle control mask or scale factor. Also, SIMD string to unsigned int parsing in C# performance improvement has some good ideas, like tweaking the place-value multipliers to avoid a step at the end (TODO: do that here).

9 decimal digits break down into two dword chunks of four digits each, plus one leftover byte that’s probably best to grab separately.
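
In scalar terms, that split is just the following (an illustration of the math only, not a suggested implementation):

static unsigned parse9_scalar_ref(const char d[9])   // reference: same place-value math the SIMD version uses
{
    unsigned hi4 = (d[0]-'0')*1000 + (d[1]-'0')*100 + (d[2]-'0')*10 + (d[3]-'0');  // first 4 digits
    unsigned lo4 = (d[4]-'0')*1000 + (d[5]-'0')*100 + (d[6]-'0')*10 + (d[7]-'0');  // next 4 digits
    unsigned first8 = hi4 * (100*100) + lo4;     // the 8 most-significant digits
    return first8 * 10 + (d[8]-'0');             // append the leftover 9th digit
}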

Assuming you care about throughput (the ability to overlap this with surrounding code, or to do it in a loop over independent elements) more than critical-path latency (the number of cycles from the input pointer and data being ready in memory until the nanoseconds integer is ready), SSSE3 SIMD should be very good on modern x86. (SSE4.1 is useful if you want to unpack your hours, minutes, and seconds into contiguous uint32_t elements, e.g. in a struct.) It might be competitive on latency, too, vs. scalar.

Fun fact: clang auto-vectorizes your convert2 / convert3 functions, widening to 8x dword in a YMM register for vpmulld (2 uops), then a chain of shuffle/add.
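
(The question’s convert2 / convert3 aren’t reproduced here; as a rough idea of the shape, a plain fixed-count multiply-accumulate loop like this hypothetical stand-in is the kind of thing clang widens that way.)

unsigned convert3_guess(const char *str)    // hypothetical stand-in, not the question's actual code
{
    unsigned val = 0;
    for (int i = 0; i < 9; i++)             // fixed trip count: clang fully unrolls / vectorizes this
        val = val * 10 + (unsigned)(str[i] - '0');
    return val;
}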

The strategy is to use pmaddubsw and pmaddwd to multiply-and-add pairs horizontally, in a way that gets each digit multiplied by its place value: first multipliers 10 and 1 on pairs of digits, then 100 and 1 on the pairs of two-digit integers that produces. Then extract to scalar for the last pair: multiply the most-significant part by 100 * 100 and add it to the least-significant part. I’m pretty sure overflow is impossible at any step for inputs that are actually '0'..'9'. This runs and compiles to the asm I expected, but I didn’t verify the numeric results.

#include <immintrin.h>
#include <stdint.h>
#ifndef __cplusplus
#include <stdalign.h>   // alignas in C
#endif

typedef struct {             // for output into memory
    alignas(16) unsigned hours;
    unsigned minutes, seconds, nanos;
} hmsn;

void str2hmsn(hmsn *out, const char str[15])  // HHMMSSXXXXXXXXX  15 total, with 9-digit nanoseconds.
{    // 15 not including the terminating 0 (if any) which we don't read
    //hmsn retval;
    __m128i digs = _mm_loadu_si128((const __m128i*)str);
    digs = _mm_sub_epi8( digs, _mm_set1_epi8('0') );
    __m128i hms_x_words = _mm_maddubs_epi16( digs, _mm_set1_epi16( 10U + (1U<<8) ));   // SSSE3  pairs of digits => 10s, 1s places.

    __m128i hms_unpacked = _mm_cvtepu16_epi32(hms_x_words);                           // SSE4.1  hours, minutes, seconds unpack from uint16_t to uint32
    //_mm_storeu_si128((__m128i*)&retval, hms_unpacked);                                  // store first 3 struct members; last to be written separately
    _mm_storeu_si128((__m128i*)out, hms_unpacked);
    // or scalar extract with _mm_cvtsi128_si64 (movq) and shift / movzx

    __m128i xwords = _mm_bsrli_si128(hms_x_words, 6);  // would like to schedule this sooner, so oldest-uop-first starts this critical path shuffle ahead of pmovzx
    // 8 bytes of data, lined up in low 2 dwords, rather than split across high 3
    // could have got here with an 8-byte load that starts here, if we didn't want to get the H,M,S integers cheaply.

    __m128i xdwords = _mm_madd_epi16(xwords, _mm_setr_epi16(100, 1, 100, 1,  0,0,0,0));   // low/high uint32 chunks, discard the 9th x digit.
    uint64_t pair32 = _mm_cvtsi128_si64(xdwords);
    uint32_t msd = 100*100 * (uint32_t)pair32;     // most significant dword was at lower address (in printing order), so low half on little-endian x86.  encourage compilers to use 32-bit operand-size for imul
    uint32_t first8_x = msd + (uint32_t)(pair32 >> 32);
    uint32_t nanos = first8_x * 10 + ((unsigned char)str[14] - '0');   // total*10 + lowest digit
    out->nanos = nanos;
    //retval.nanos = nanos;
    //return retval;

  // returning the struct by value encourages compilers in the wrong direction
  // into not doing separate stores, even when inlining into a function that assigns the whole struct to a pointed-to output
}
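
A quick sanity check of the layout (my own example, not part of the original benchmark): compiled together with str2hmsn above, parsing "235959123456789" should produce hours=23, minutes=59, seconds=59, nanos=123456789. Note the 16-byte load touches one byte past the 15 digits, so the buffer needs at least 16 readable bytes; the string literal’s terminating 0 covers that here.

#include <stdio.h>

int demo_check(void)     // hypothetical demo, compiled together with str2hmsn above
{
    static const char str[] = "235959123456789";   // 15 digits + terminating 0 = 16 readable bytes
    hmsn t;
    str2hmsn(&t, str);
    printf("%02u:%02u:%02u.%09u\n", t.hours, t.minutes, t.seconds, t.nanos);  // expect 23:59:59.123456789
    return 0;
}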

I tested this on Godbolt with a test loop that uses asm("" ::"m"(sink): "memory" ) to make the compiler redo the work in a loop, or a std::atomic_thread_fence(std::memory_order_acq_rel) hack that gets MSVC to not optimize the loop away either. On my i7-6700k (Skylake) with GCC 11.1, x86-64 GNU/Linux, energy_performance_preference = performance, I got this to run at one iteration per 5 cycles.

IDK why it doesn’t run at one per 4c; I tweaked GCC options to avoid the JCC erratum slowdown without padding, and to have the loop in hopefully 4 uop cache lines. (6 uops, 1 uop ended by a 32B boundary, 6 uops, 2 uops ended by the dec/jnz). Perf counters say the front-end was “ok”, and uops_dispatched_port shows all 4 ALU ports at less than 4 uops per iteration, highest being port0 at 3.34.
Manually padding the early instructions gets it down to 3 total lines, of 3, 6, 6 uops but still no improvement from 5c per iter, so I guess the front-end really is ok.

LLVM-MCA seems very ambitious in projecting 3c per iter, apparently based on a wrong model of Skylake with a “dispatch” (front-end rename I think) width of 6. Even with -mcpu=haswell with a proper 4-wide model it projects 4.5c. (I used asm("# LLVM-MCA-BEGIN") etc. macros on Godbolt and included an LLVM-MCA output window for the test loop.) It doesn’t have fully accurate uop->port mapping, apparently not knowing about slow-LEA running only on port 1, but IDK if that’s significant.

Throughput may be limited by the ability to find instruction-level parallelism and to overlap work across several iterations, as in Understanding the impact of lfence on a loop with two long dependency chains, for increasing lengths.

The test loop is:

#include <stdlib.h>
#ifndef __cplusplus
#include <stdalign.h>
#endif
#include <stdint.h>

#if 1 && defined(__GNUC__)
#define LLVM_MCA_BEGIN  asm("# LLVM-MCA-BEGIN")
#define LLVM_MCA_END  asm("# LLVM-MCA-END")
#else
#define LLVM_MCA_BEGIN
#define LLVM_MCA_END
#endif


#if defined(__cplusplus)
    #include <atomic>
    using std::atomic_thread_fence, std::memory_order_acq_rel;
#else
    #include <stdatomic.h>
#endif

unsigned testloop(const char str[15]){
    hmsn sink;
    for (int i=0 ; i<1000000000 ; i++){
        LLVM_MCA_BEGIN;
        str2hmsn(&sink, str);
        // compiler memory barrier 
        // force materializing the result, and forget about the input string being the same
#ifdef __GNUC__
        asm volatile("" ::"m"(sink): "memory");
#else
  //#warning happens to be enough with current MSVC
        atomic_thread_fence(memory_order_acq_rel); // strongest barrier that doesn't require any asm instructions on x86; MSVC defeats signal_fence.
#endif
    }
    LLVM_MCA_END;
    volatile unsigned dummy = sink.hours + sink.nanos;  // make sure both halves are really used, else MSVC optimizes.
    return dummy;
}



int main(int argc, char *argv[])
{
    // performance isn't data-dependent, so just use a handy string.
    // alignas(16) static char str[] = "235959123456789";
    uintptr_t p = (uintptr_t)argv[0];
    p &= -16;
    return testloop((char*)p);   // argv[0] apparently has a cache-line split within 16 bytes on my system, worsening from 5c throughput to 6.12c
}

I compiled as follows, to squeeze the loop in so it ends before the 32-byte boundary it’s almost hitting. Note that -march=haswell allows it to use AVX encodings, saving an instruction or two.

$ g++ -fno-omit-frame-pointer -fno-stack-protector -falign-loops=16 -O3 -march=haswell foo.c -masm=intel
$ objdump -drwC -Mintel a.out | less

...
0000000000001190 <testloop(char const*)>:
    1190:       55                      push   rbp
    1191:       b9 00 ca 9a 3b          mov    ecx,0x3b9aca00
    1196:       48 89 e5                mov    rbp,rsp
    1199:       c5 f9 6f 25 6f 0e 00 00         vmovdqa xmm4,XMMWORD PTR [rip+0xe6f]        # 2010 <_IO_stdin_used+0x10>
    11a1:       c5 f9 6f 15 77 0e 00 00         vmovdqa xmm2,XMMWORD PTR [rip+0xe77]        # 2020 <_IO_stdin_used+0x20> # vector constants hoisted
    11a9:       c5 f9 6f 0d 7f 0e 00 00         vmovdqa xmm1,XMMWORD PTR [rip+0xe7f]        # 2030 <_IO_stdin_used+0x30>
    11b1:       66 66 2e 0f 1f 84 00 00 00 00 00        data16 cs nop WORD PTR [rax+rax*1+0x0]
    11bc:       0f 1f 40 00             nop    DWORD PTR [rax+0x0]
### Top of loop is 16-byte aligned here, instead of ending up with 8 byte default
    11c0:       c5 d9 fc 07             vpaddb xmm0,xmm4,XMMWORD PTR [rdi]
    11c4:       c4 e2 79 04 c2          vpmaddubsw xmm0,xmm0,xmm2
    11c9:       c4 e2 79 33 d8          vpmovzxwd xmm3,xmm0
    11ce:       c5 f9 73 d8 06          vpsrldq xmm0,xmm0,0x6
    11d3:       c5 f9 f5 c1             vpmaddwd xmm0,xmm0,xmm1
    11d7:       c5 f9 7f 5d f0          vmovdqa XMMWORD PTR [rbp-0x10],xmm3
    11dc:       c4 e1 f9 7e c0          vmovq  rax,xmm0
    11e1:       69 d0 10 27 00 00       imul   edx,eax,0x2710
    11e7:       48 c1 e8 20             shr    rax,0x20
    11eb:       01 d0                   add    eax,edx
    11ed:       8d 14 80                lea    edx,[rax+rax*4]
    11f0:       0f b6 47 0e             movzx  eax,BYTE PTR [rdi+0xe]
    11f4:       8d 44 50 d0             lea    eax,[rax+rdx*2-0x30]
    11f8:       89 45 fc                mov    DWORD PTR [rbp-0x4],eax
    11fb:       ff c9                   dec    ecx
    11fd:       75 c1                   jne    11c0 <testloop(char const*)+0x30>
  # loop ends 1 byte before it would be a problem for the JCC erratum workaround
    11ff:       8b 45 fc                mov    eax,DWORD PTR [rbp-0x4]

So GCC made the asm I had planned by hand before writing it this way with intrinsics, using as few instructions as possible and optimizing for throughput. (Clang favours latency in this loop, using a separate add instead of a 3-component LEA.)
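
For reference, mapping that scalar tail back to the C source (my annotation, not compiler output):

    imul   edx,eax,0x2710           # msd      = 100*100 * (uint32_t)pair32
    shr    rax,0x20                 # pair32 >> 32
    add    eax,edx                  # first8_x = msd + (pair32 >> 32)
    lea    edx,[rax+rax*4]          # tmp      = first8_x * 5
    movzx  eax,BYTE PTR [rdi+0xe]   # load str[14]
    lea    eax,[rax+rdx*2-0x30]     # nanos    = str[14] + tmp*2 - '0'  ==  first8_x*10 + (str[14] - '0')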

This is faster than any of the scalar versions that just parse X, and it’s parsing HH, MM, and SS as well. Clang’s auto-vectorization of convert3 may give it a run for its money in that department, but it strangely doesn’t do that when inlining.

GCC’s scalar convert3 takes 8 cycles per iteration. Clang’s scalar convert3 in a loop takes 7, running at 4.0 fused-domain uops/clock, maxing out front-end bandwidth and saturating port 1 with one imul uop per cycle. (That’s reloading each byte with movzx and storing the scalar result to a stack local every iteration, but not touching the HHMMSS bytes.)

$ taskset -c 3 perf stat --all-user -etask-clock,context-switches,cpu-migrations,page-faults,cycles,instructions,uops_issued.any,uops_executed.thread,idq.mite_uops,idq_uops_not_delivered.cycles_fe_was_ok -r1 ./a.out

 Performance counter stats for './a.out':

          1,221.82 msec task-clock                #    1.000 CPUs utilized          
                 0      context-switches          #    0.000 /sec                   
                 0      cpu-migrations            #    0.000 /sec                   
               105      page-faults               #   85.937 /sec                   
     5,079,784,301      cycles                    #    4.158 GHz                    
    16,002,910,115      instructions              #    3.15  insn per cycle         
    15,004,354,053      uops_issued.any           #   12.280 G/sec                  
    18,003,922,693      uops_executed.thread      #   14.735 G/sec                  
         1,484,567      idq.mite_uops             #    1.215 M/sec                  
     5,079,431,697      idq_uops_not_delivered.cycles_fe_was_ok #    4.157 G/sec                  

       1.222107519 seconds time elapsed

       1.221794000 seconds user
       0.000000000 seconds sys

Note that this is for 1G iterations, so 5.08G cycles means 5.08 cycles per iteration average throughput.

Removing the extra work that produces the HHMMSS part of the output (the vpsrldq, vpmovzxwd, and the vmovdqa store), so it’s doing just the 9-digit integer part, it runs at 4.0 cycles per iteration on Skylake, or 3.5 without the scalar store at the end. (I edited GCC’s asm output to comment out that instruction, so I know all the work feeding it is still being done.)
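
At the source level, that trimmed variant is roughly what the "could have got here with an 8-byte load" comment in the code above describes. A sketch of the idea (what I actually timed was the hand-edited asm, not this source):

static uint32_t parse_nanos_only(const char str[15])   // sketch: 9-digit part only, no HHMMSS output
{
    __m128i digs = _mm_loadl_epi64((const __m128i*)(str + 6));                        // 8-byte load: digits 6..13
    digs = _mm_sub_epi8(digs, _mm_set1_epi8('0'));
    __m128i pairs  = _mm_maddubs_epi16(digs, _mm_set1_epi16(10U + (1U<<8)));          // 4x two-digit values
    __m128i dwords = _mm_madd_epi16(pairs, _mm_setr_epi16(100, 1, 100, 1, 0,0,0,0));  // 2x four-digit values
    uint64_t pair32 = _mm_cvtsi128_si64(dwords);
    uint32_t first8 = 100*100 * (uint32_t)pair32 + (uint32_t)(pair32 >> 32);          // digits 6..13 as an 8-digit number
    return first8 * 10 + ((unsigned char)str[14] - '0');                              // append the 9th digit
}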

The fact that there’s some kind of back-end bottleneck here (rather than front-end) is probably a good thing for overlapping this with independent work.
