Performance optimisations of x86-64 assembly – Alignment and branch prediction

Alignment optimisations

1. Use .p2align <abs-expr>, <abs-expr>, <abs-expr> instead of .align (whose first argument is interpreted differently, byte count vs. power of two, depending on the target).

It grants fine-grained control through its three parameters (a short sketch follows the list below):

  • param1 – the power of two to align to, i.e. the boundary is 2^param1 bytes (4 means a 16-byte boundary).
  • param2 – the fill value for the padding bytes; if omitted, code sections are padded with NOP instructions.
  • param3 – the maximum number of padding bytes; the alignment is skipped entirely if it would need more than this.
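
  A minimal usage sketch in GAS (the label hot_func is illustrative):

          .text
          .globl  hot_func
          # 2^4 = 16-byte boundary; the empty second argument means the default
          # fill (NOP instructions in a code section); skip the alignment if it
          # would need more than 7 bytes of padding
          .p2align 4,,7
  hot_func:
          ret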

2. Align the start of frequently used code blocks to cache-line boundaries.

  • This increases the chances that the entire code block lies in a single cache line. Once it has been loaded into the L1 instruction cache, the block can then execute without any further accesses to RAM for instruction fetch. This is highly beneficial for loops with a large number of iterations; a sketch follows below.
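
  A minimal sketch (GAS, System V AMD64 calling convention; the function name sum_bytes is illustrative) of aligning a hot loop to a 64-byte cache line. Note that the padding NOPs are executed once on the fall-through path into the loop.

          .text
          .globl  sum_bytes       # size_t sum_bytes(const unsigned char *buf, size_t n)
  sum_bytes:
          xor     %eax, %eax
          test    %rsi, %rsi
          jz      2f
          # 2^6 = 64-byte boundary: the few bytes of the loop body then sit
          # inside a single cache line / fetch block
          .p2align 6
  1:
          movzbl  (%rdi), %ecx    # load one byte, zero-extended
          add     %rcx, %rax      # accumulate
          inc     %rdi
          dec     %rsi
          jnz     1b
  2:
          ret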

3. Use multi-byte NOPs for padding to reduce the time spent executing NOPs.

  /* nop */
  static const char nop_1[] = { 0x90 };

  /* xchg %ax,%ax */
  static const char nop_2[] = { 0x66, 0x90 };

  /* nopl (%[re]ax) */
  static const char nop_3[] = { 0x0f, 0x1f, 0x00 };

  /* nopl 0(%[re]ax) */
  static const char nop_4[] = { 0x0f, 0x1f, 0x40, 0x00 };

  /* nopl 0(%[re]ax,%[re]ax,1) */
  static const char nop_5[] = { 0x0f, 0x1f, 0x44, 0x00, 0x00 };

  /* nopw 0(%[re]ax,%[re]ax,1) */
  static const char nop_6[] = { 0x66, 0x0f, 0x1f, 0x44, 0x00, 0x00 };

  /* nopl 0L(%[re]ax) */
  static const char nop_7[] = { 0x0f, 0x1f, 0x80, 0x00, 0x00, 0x00, 0x00 };

  /* nopl 0L(%[re]ax,%[re]ax,1) */
  static const char nop_8[] =
    { 0x0f, 0x1f, 0x84, 0x00, 0x00, 0x00, 0x00, 0x00};

  /* nopw 0L(%[re]ax,%[re]ax,1) */
  static const char nop_9[] =
    { 0x66, 0x0f, 0x1f, 0x84, 0x00, 0x00, 0x00, 0x00, 0x00 };

  /* nopw %cs:0L(%[re]ax,%[re]ax,1) */
  static const char nop_10[] =
    { 0x66, 0x2e, 0x0f, 0x1f, 0x84, 0x00, 0x00, 0x00, 0x00, 0x00 };

(Up to 10-byte NOPs for x86. Source: binutils-2.2.3.)
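
  As an illustration of the saving, six bytes of padding that will actually be executed can be encoded as one 6-byte NOP instead of six 1-byte ones (recent GAS releases generally pick such multi-byte patterns themselves when .p2align pads a code section):

          # one 6-byte NOP: a single instruction to fetch, decode and retire
          .byte   0x66, 0x0f, 0x1f, 0x44, 0x00, 0x00     # nopw 0(%rax,%rax,1)
          # versus six 1-byte NOPs, i.e. six instructions through the pipeline:
          # .byte 0x90, 0x90, 0x90, 0x90, 0x90, 0x90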


Branch-prediction optimisations


There is a lot of variation between x86-64 micro-architectures/generations. However, a common set of guidelines applicable to all of them can be summarised as follows. Reference: Section 3 of Agner Fog’s x86 micro-architecture manual.

1. Unroll loops to avoid slightly-too-high iteration counts.

  • Loop-detection logic is guaranteed to work ONLY for loops with < 64 iterations. This is because a branch instruction is recognized as having loop behavior if it goes one way n-1 times and then goes the other way 1 time, for any n up to 64.

    This doesn’t really apply to the predictors in Haswell and later, which use a TAGE predictor and don’t have dedicated loop-detection logic for specific branches. Iteration counts of ~23 can be the worst case for an inner loop inside a tight outer loop with no other branching, on Skylake: the exit from the inner loop mispredicts most times, but the trip count is so low that this happens often. Unrolling can help by shortening the pattern (as sketched below), but for very high loop trip counts the single mispredict at the end is amortized over a lot of trips, and it would take an unreasonable amount of unrolling to do anything about it.
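
    A sketch of unrolling by 4 (GAS; the function name scale4 is illustrative, and n is assumed to be a multiple of 4, so remainder handling is omitted). The backward branch now executes n/4 times per call, shortening the taken/not-taken pattern the predictor has to track.

          .text
          .globl  scale4          # void scale4(long *a, size_t n), n % 4 == 0 assumed
  scale4:
          shr     $2, %rsi        # n /= 4: iterations of the unrolled body
          jz      2f
  1:
          shlq    $1, 0(%rdi)     # a[i]   *= 2
          shlq    $1, 8(%rdi)     # a[i+1] *= 2
          shlq    $1, 16(%rdi)    # a[i+2] *= 2
          shlq    $1, 24(%rdi)    # a[i+3] *= 2
          add     $32, %rdi       # advance by four 8-byte elements
          dec     %rsi
          jnz     1b
  2:
          ret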

2. Stick to near/short jumps.

  • Far jumps are not predicted, i.e. the pipeline always stalls on a far jump to a new code segment (CS:RIP). There’s basically never a reason to use a far jump anyway, so this is mostly not relevant.

    Indirect jumps with an arbitrary 64-bit absolute address are predicted normally on most CPUs.

    But Silvermont (Intel’s low-power CPU line) has some limitations in predicting indirect jumps when the target is more than 4GB away, so avoiding that by loading/mapping executables and shared libraries in the low 32 bits of virtual address space can be a win there, e.g. on GNU/Linux by setting the environment variable LD_PREFER_MAP_32BIT_EXEC. See Intel’s optimization manual for more.
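
    For reference, a small sketch (GAS; labels are illustrative) of the two forms that do get predicted, a near direct jump and a near indirect jump through a register holding a 64-bit absolute address; neither reloads CS, unlike a far jump:

          .text
          .globl  dispatch
  dispatch:
          jmp     1f              # near, direct: relative displacement, BTB-predicted
  1:
          movabs  $target_routine, %rax   # arbitrary 64-bit absolute target
          jmp     *%rax           # near, indirect: predicted on most CPUs
  target_routine:
          ret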
