x86: implement AVX2 kernel for ggml_vec_dot_q1_0_g128_q8_0#11

Closed
SimesD61 wants to merge 1 commit into PrismML-Eng:prism from SimesD61:feat/avx2-q1_0_g128-kernel

Conversation


SimesD61 commented Apr 5, 2026

Problem

The x86 implementation of ggml_vec_dot_q1_0_g128_q8_0 in ggml/src/ggml-cpu/arch/x86/quants.c was a stub that immediately fell through to the scalar generic fallback:

void ggml_vec_dot_q1_0_g128_q8_0(...) {
    ggml_vec_dot_q1_0_g128_q8_0_generic(...);
}

The ARM NEON implementation was already fully vectorized. On x86 this meant Bonsai 8B ran at ~0.04 tok/s — 67× slower than the ARM CPU path.

Solution

Full AVX2 implementation using the same algorithm as the NEON kernel:

  • vpshufb bit expansion: each 32-bit sub-block is broadcast across 32 byte lanes via _mm256_shuffle_epi8, then AND + cmpeq decodes the 1-bit weights to sign bytes (+1/-1)
  • INT8 dot product: maddubs_epi16 + madd_epi16 for efficient 8-bit multiply-accumulate
  • 4 independent FMA accumulators: Hides the 5-cycle FMA latency on Skylake (matches one accumulator per sub-block of the block_q1_0_g128 layout)
  • Falls back to generic on non-AVX2 targets
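
As a sanity check on the decode logic, the per-lane computation can be modeled in scalar C. This is a sketch, not the PR's code: the helper names are illustrative, and little-endian byte order is assumed.

```c
#include <stdint.h>

/* Scalar model of one 32-bit sub-block decode: the AVX2 kernel broadcasts
 * each of the 4 source bytes to 8 lanes (vpshufb), tests one bit per lane
 * (AND + cmpeq), and maps bit=1 -> +1, bit=0 -> -1. */
static void decode_subblock_signs(uint32_t bits, int8_t signs[32]) {
    const uint8_t *src = (const uint8_t *)&bits; /* little-endian assumed */
    for (int j = 0; j < 32; ++j) {
        uint8_t byte = src[j / 8];               /* vpshufb broadcast     */
        uint8_t mask = (uint8_t)(1u << (j % 8)); /* per-lane bit mask     */
        signs[j] = (byte & mask) ? 1 : -1;       /* cmpeq-with-zero, OR 1 */
    }
}

/* Dot product of the decoded signs against 32 int8 activations, scaled by
 * the product of the two block scales (d0 * d in the kernel). */
static float dot_subblock(uint32_t bits, const int8_t y[32], float scale) {
    int8_t  signs[32];
    int32_t acc = 0;
    decode_subblock_signs(bits, signs);
    for (int j = 0; j < 32; ++j) acc += (int32_t)signs[j] * (int32_t)y[j];
    return scale * (float)acc;
}
```

With all 32 activations set to 1 and only the lowest weight bit set, the result is 1 - 31 = -30 times the scale.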

Performance (Intel i7-8700B, AVX2, no AVX-512)

Before (scalar stub):  ~0.04 tok/s
After (AVX2):          ~8.0 tok/s
Speedup:               ~200x

The 8 tok/s result is at the compute-bound ceiling for Q1_0_g128 on this CPU — Q1_0_g128 is ~4x more compute-intensive per byte than Q4_0, so further gains would require AVX-512 or a fundamentally different algorithm.

The x86 implementation was a stub that called the scalar generic fallback.
The ARM NEON kernel was already fully vectorized. This implements the same
algorithm using AVX2 intrinsics.

Key techniques:
- vpshufb (_mm256_shuffle_epi8) to broadcast each 4-byte sub-block to 32 lanes
- AND+cmpeq to decode 1-bit weights to sign bytes (+1/-1)
- maddubs_epi16 + madd_epi16 for INT8 dot product reduction
- 4 independent FMA accumulators to hide the 5-cycle FMA latency

Performance on Intel i7-8700B (no AVX-512):
- Before: ~0.04 tok/s (scalar fallback, 67x slower than ARM CPU)
- After:  ~8 tok/s (AVX2, matches compute-bound ceiling for Q1_0_g128)
- ~200x speedup over the stub

Falls back to generic implementation on non-AVX2 targets.

Copilot AI left a comment


Pull request overview

This PR replaces the x86 stub implementation of ggml_vec_dot_q1_0_g128_q8_0 with a real AVX2-optimized kernel, bringing x86 performance in line with the existing ARM NEON vectorized path.

Changes:

  • Implement an AVX2 version of ggml_vec_dot_q1_0_g128_q8_0 using shuffle-based bit expansion and INT8 dot-product primitives.
  • Use multiple independent accumulators and reduce them at the end for improved throughput.
  • Keep a generic fallback for non-AVX2 builds.


Comment on lines +550 to +551
GGML_ASSERT(n % QK1_0_g128 == 0);
GGML_ASSERT(nrc == 1);

Copilot AI Apr 6, 2026


This function uses GGML_ASSERT for argument checks, but the other vec_dot kernels in this file consistently use assert() (e.g., ggml_vec_dot_q4_0_q8_0 just below). For consistency and to avoid changing behavior in release builds (GGML_ASSERT may not be compiled out like assert), consider switching these to assert() like the rest of the file or documenting why GGML_ASSERT is required here.

Suggested change
GGML_ASSERT(n % QK1_0_g128 == 0);
GGML_ASSERT(nrc == 1);
assert(n % QK1_0_g128 == 0);
assert(nrc == 1);

Comment on lines +600 to +601

#define DOT_SUB(bv, yb, acc) do { const __m256i yv = _mm256_loadu_si256((const __m256i *)(yb)->qs); /* cmpeq(AND(bv,mask),0): 0xFF where bit=0; OR 0x01: 0xFF(-1) where bit=0, 0x01(+1) where bit=1 */ const __m256i sgn = _mm256_or_si256(_mm256_cmpeq_epi8(_mm256_and_si256((bv), bit_mask), zero_vec), one8); const __m256i p32 = _mm256_madd_epi16( _mm256_maddubs_epi16(_mm256_abs_epi8(yv), _mm256_sign_epi8(sgn, yv)), ones16); (acc) = _mm256_fmadd_ps(_mm256_cvtepi32_ps(p32), _mm256_set1_ps(d0 * GGML_CPU_FP16_TO_FP32((yb)->d)), (acc)); } while (0)

Copilot AI Apr 6, 2026


The DOT_SUB macro uses _mm256_fmadd_ps but the implementation is only guarded by AVX2. AVX2 does not necessarily imply FMA is enabled at compile time (GGML_FMA is a separate CMake option), so this can fail to compile when building with -mavx2 but without -mfma. Consider either guarding this path with FMA/GGML_FMA or replacing the FMA with separate multiply + add so it builds correctly under AVX2-only.

Suggested change
#define DOT_SUB(bv, yb, acc) do { const __m256i yv = _mm256_loadu_si256((const __m256i *)(yb)->qs); /* cmpeq(AND(bv,mask),0): 0xFF where bit=0; OR 0x01: 0xFF(-1) where bit=0, 0x01(+1) where bit=1 */ const __m256i sgn = _mm256_or_si256(_mm256_cmpeq_epi8(_mm256_and_si256((bv), bit_mask), zero_vec), one8); const __m256i p32 = _mm256_madd_epi16( _mm256_maddubs_epi16(_mm256_abs_epi8(yv), _mm256_sign_epi8(sgn, yv)), ones16); (acc) = _mm256_fmadd_ps(_mm256_cvtepi32_ps(p32), _mm256_set1_ps(d0 * GGML_CPU_FP16_TO_FP32((yb)->d)), (acc)); } while (0)
#if defined(__FMA__)
#define DOT_SUB(bv, yb, acc) do { const __m256i yv = _mm256_loadu_si256((const __m256i *)(yb)->qs); /* cmpeq(AND(bv,mask),0): 0xFF where bit=0; OR 0x01: 0xFF(-1) where bit=0, 0x01(+1) where bit=1 */ const __m256i sgn = _mm256_or_si256(_mm256_cmpeq_epi8(_mm256_and_si256((bv), bit_mask), zero_vec), one8); const __m256i p32 = _mm256_madd_epi16( _mm256_maddubs_epi16(_mm256_abs_epi8(yv), _mm256_sign_epi8(sgn, yv)), ones16); (acc) = _mm256_fmadd_ps(_mm256_cvtepi32_ps(p32), _mm256_set1_ps(d0 * GGML_CPU_FP16_TO_FP32((yb)->d)), (acc)); } while (0)
#else
#define DOT_SUB(bv, yb, acc) do { const __m256i yv = _mm256_loadu_si256((const __m256i *)(yb)->qs); /* cmpeq(AND(bv,mask),0): 0xFF where bit=0; OR 0x01: 0xFF(-1) where bit=0, 0x01(+1) where bit=1 */ const __m256i sgn = _mm256_or_si256(_mm256_cmpeq_epi8(_mm256_and_si256((bv), bit_mask), zero_vec), one8); const __m256i p32 = _mm256_madd_epi16( _mm256_maddubs_epi16(_mm256_abs_epi8(yv), _mm256_sign_epi8(sgn, yv)), ones16); (acc) = _mm256_add_ps(_mm256_mul_ps(_mm256_cvtepi32_ps(p32), _mm256_set1_ps(d0 * GGML_CPU_FP16_TO_FP32((yb)->d))), (acc)); } while (0)
#endif

Comment on lines +601 to +606
#define DOT_SUB(bv, yb, acc) do { const __m256i yv = _mm256_loadu_si256((const __m256i *)(yb)->qs); /* cmpeq(AND(bv,mask),0): 0xFF where bit=0; OR 0x01: 0xFF(-1) where bit=0, 0x01(+1) where bit=1 */ const __m256i sgn = _mm256_or_si256(_mm256_cmpeq_epi8(_mm256_and_si256((bv), bit_mask), zero_vec), one8); const __m256i p32 = _mm256_madd_epi16( _mm256_maddubs_epi16(_mm256_abs_epi8(yv), _mm256_sign_epi8(sgn, yv)), ones16); (acc) = _mm256_fmadd_ps(_mm256_cvtepi32_ps(p32), _mm256_set1_ps(d0 * GGML_CPU_FP16_TO_FP32((yb)->d)), (acc)); } while (0)

DOT_SUB(bv0, &y[i*4+0], sumf0);
DOT_SUB(bv1, &y[i*4+1], sumf1);
DOT_SUB(bv2, &y[i*4+2], sumf2);
DOT_SUB(bv3, &y[i*4+3], sumf3);

Copilot AI Apr 6, 2026


DOT_SUB is defined as a very large single-line macro inside the loop body, which makes the kernel hard to read, debug, and maintain (and increases the risk of subtle macro issues if the arguments ever change). Consider replacing it with a small static inline helper function (or at least a multi-line macro defined outside the loop) to improve maintainability without affecting performance.
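
A sketch of what such a helper could look like (assumptions: the parameter list, names, and the `target` attribute are mine, not the PR's; the PR's macro instead captures `bit_mask`, `zero_vec`, `one8`, `ones16`, and `d0` from the enclosing scope):

```c
#include <immintrin.h>
#include <stdint.h>

/* Hypothetical static-inline replacement for the DOT_SUB macro. The
 * target attribute lets this compile without global -mavx2/-mfma flags;
 * the real kernel would rely on the file's existing __AVX2__ guard. */
__attribute__((target("avx2,fma")))
static inline __m256 dot_sub(__m256i bv, const int8_t *y_qs, float scale,
                             __m256i bit_mask, __m256 acc) {
    const __m256i yv  = _mm256_loadu_si256((const __m256i *)y_qs);
    /* cmpeq(AND(bv,mask),0) yields 0xFF where bit=0; OR 0x01 maps
     * bit=0 -> -1 (0xFF) and bit=1 -> +1 (0x01). */
    const __m256i sgn = _mm256_or_si256(
        _mm256_cmpeq_epi8(_mm256_and_si256(bv, bit_mask),
                          _mm256_setzero_si256()),
        _mm256_set1_epi8(1));
    /* maddubs(|y|, sign(sgn,y)) multiplies 8-bit pairs and sums to i16;
     * madd with ones widens the i16 pairs to i32. */
    const __m256i p32 = _mm256_madd_epi16(
        _mm256_maddubs_epi16(_mm256_abs_epi8(yv), _mm256_sign_epi8(sgn, yv)),
        _mm256_set1_epi16(1));
    return _mm256_fmadd_ps(_mm256_cvtepi32_ps(p32),
                           _mm256_set1_ps(scale), acc);
}

/* Tiny driver: all weight bits set, 32 activations of 1, scale 1 -> each
 * of the 8 float lanes accumulates 4, so the horizontal sum is 32. */
__attribute__((target("avx2,fma")))
static float dot_sub_demo(void) {
    int8_t y[32];
    for (int j = 0; j < 32; ++j) y[j] = 1;
    __m256 r = dot_sub(_mm256_set1_epi8(1), y, 1.0f,
                       _mm256_set1_epi8(1), _mm256_setzero_ps());
    float out[8], s = 0.0f;
    _mm256_storeu_ps(out, r);
    for (int j = 0; j < 8; ++j) s += out[j];
    return s;
}
```

A compiler should generate the same instruction sequence for the inline function as for the macro at -O2, so readability is the only trade-off.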


zcattacz commented Apr 6, 2026

Just tested the AVX2 impl on my i5 box. The 0.00022 KLD that plagues both my results and PR7 op's seems arch-related (Tigerlake and Broadwell, both Intel CPUs). I have tried several impls; all hit the same KLD after the first few chunks, so there's little point in running the full test just to confirm the KLD and speed (check the full-test ETA).
@SimesD61 I'm surprised the xor+sub impl can beat this one in speed; maybe the compiler does a better job optimizing the simpler impls. (The code I tested is in #7's comment.) If you're interested in testing, could you post your KLD? Just the first few passes are enough, no need to wait an hour for the full test. AMD and ARM users both get ~0.00000 KLD.

PR11 CMPEQ+OR
system_info: n_threads = 2 (n_threads_batch = 2) / 4 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | BMI2 = 1 | LLAMAFILE = 1 | OPENMP = 1 | REPACK = 1 | 
kl_divergence: computing over 100 chunks, n_ctx=512, batch_size=2048, n_seq=4
kl_divergence: 165.96 seconds per pass - ETA 1 hours 9.13 minutes

chunk             PPL               ln(PPL(Q)/PPL(base))          KL Divergence              Δp RMS            Same top p
   1      13.9369 ±    3.1756      -0.00153 ±    0.00230       0.00019 ±    0.00003     0.331 ±  0.041 %    100.000 ±  0.000 %
   2      20.2032 ±    3.4390       0.01451 ±    0.01153       0.00020 ±    0.00002     0.316 ±  0.026 %    99.804 ±  0.196 %
   3      20.8746 ±    2.7925       0.01022 ±    0.00771       0.00022 ±    0.00001     0.347 ±  0.028 %    99.216 ±  0.319 %
   4      21.2304 ±    2.3930       0.00782 ±    0.00580       0.00022 ±    0.00001     0.338 ±  0.022 %    99.412 ±  0.240 %
^C
PR10 with mul_sum_i8_pairs_float

system_info: n_threads = 2 (n_threads_batch = 2) / 4 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | BMI2 = 1 | LLAMAFILE = 1 | OPENMP = 1 | REPACK = 1 | 
kl_divergence: computing over 100 chunks, n_ctx=512, batch_size=2048, n_seq=4
kl_divergence: 150.86 seconds per pass - ETA 1 hours 2.85 minutes

chunk             PPL               ln(PPL(Q)/PPL(base))          KL Divergence              Δp RMS            Same top p
   1      13.9557 ±    3.1807      -0.00019 ±    0.00239       0.00019 ±    0.00002     0.376 ±  0.047 %    99.608 ±  0.392 %
   2      20.1986 ±    3.4363       0.01428 ±    0.01146       0.00020 ±    0.00001     0.346 ±  0.030 %    99.608 ±  0.277 %
   3      20.8582 ±    2.7888       0.00944 ±    0.00766       0.00021 ±    0.00001     0.375 ±  0.025 %    99.216 ±  0.319 %
   4      21.2096 ±    2.3896       0.00684 ±    0.00577       0.00022 ±    0.00001     0.385 ±  0.026 %    99.412 ±  0.240 %
   5      21.0872 ±    2.1033       0.00566 ±    0.00464       0.00022 ±    0.00001     0.376 ±  0.023 %    99.529 ±  0.192 %
   6      21.2932 ±    1.9099       0.00549 ±    0.00390       0.00021 ±    0.00001     0.362 ±  0.020 %    99.477 ±  0.184 %
   7      21.4337 ±    1.7665       0.00508 ±    0.00335       0.00021 ±    0.00001     0.365 ±  0.020 %    99.440 ±  0.177 %
   8      23.1788 ±    1.8031       0.00527 ±    0.00297       0.00021 ±    0.00001     0.364 ±  0.018 %    99.412 ±  0.169 %
   9      24.6955 ±    1.8365       0.00752 ±    0.00337       0.00022 ±    0.00001     0.355 ±  0.017 %    99.390 ±  0.163 %
  10      25.4214 ±    1.7879       0.00672 ±    0.00303       0.00022 ±    0.00001     0.353 ±  0.015 %    99.294 ±  0.166 %
  11      26.0683 ±    1.7516       0.00617 ±    0.00276       0.00022 ±    0.00001     0.354 ±  0.014 %    99.287 ±  0.159 %
  12      26.5272 ±    1.7091       0.00582 ±    0.00254       0.00022 ±    0.00001     0.351 ±  0.013 %    99.346 ±  0.146 %
xor+sub (like PR4)

system_info: n_threads = 2 (n_threads_batch = 2) / 4 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | BMI2 = 1 | LLAMAFILE = 1 | OPENMP = 1 | REPACK = 1 | 
kl_divergence: computing over 100 chunks, n_ctx=512, batch_size=2048, n_seq=4
kl_divergence: 115.23 seconds per pass - ETA 48.00 minutes

chunk             PPL               ln(PPL(Q)/PPL(base))          KL Divergence              Δp RMS            Same top p
   1      13.9528 ±    3.1791      -0.00040 ±    0.00223       0.00019 ±    0.00002     0.382 ±  0.053 %    99.608 ±  0.392 %
   2      20.1970 ±    3.4355       0.01420 ±    0.01145       0.00019 ±    0.00001     0.343 ±  0.033 %    99.608 ±  0.277 %
   3      20.8596 ±    2.7888       0.00950 ±    0.00765       0.00021 ±    0.00001     0.351 ±  0.026 %    99.346 ±  0.292 %
   4      21.2115 ±    2.3896       0.00693 ±    0.00576       0.00022 ±    0.00001     0.369 ±  0.025 %    99.510 ±  0.219 %
   5      21.0887 ±    2.1034       0.00573 ±    0.00463       0.00022 ±    0.00001     0.363 ±  0.022 %    99.608 ±  0.175 %
   6      21.2944 ±    1.9099       0.00555 ±    0.00389       0.00021 ±    0.00001     0.351 ±  0.019 %    99.542 ±  0.173 %
   7      21.4348 ±    1.7665       0.00513 ±    0.00334       0.00021 ±    0.00001     0.355 ±  0.020 %    99.496 ±  0.168 %
PR7 with _mm256_shuffle_epi8

system_info: n_threads = 2 (n_threads_batch = 2) / 4 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | BMI2 = 1 | LLAMAFILE = 1 | OPENMP = 1 | REPACK = 1 | 
kl_divergence: computing over 100 chunks, n_ctx=512, batch_size=2048, n_seq=4
kl_divergence: 186.99 seconds per pass - ETA 1 hours 17.90 minutes

chunk             PPL               ln(PPL(Q)/PPL(base))          KL Divergence              Δp RMS            Same top p
   1      13.9733 ±    3.1846       0.00107 ±    0.00236       0.00020 ±    0.00002     0.402 ±  0.048 %    99.608 ±  0.392 %
   2      20.2038 ±    3.4373       0.01454 ±    0.01146       0.00022 ±    0.00002     0.375 ±  0.029 %    99.608 ±  0.277 %
   3      20.8431 ±    2.7865       0.00871 ±    0.00766       0.00023 ±    0.00001     0.387 ±  0.026 %    98.693 ±  0.411 %
   4      21.1827 ±    2.3859       0.00558 ±    0.00577       0.00023 ±    0.00001     0.378 ±  0.022 %    99.020 ±  0.309 %
   5      21.0675 ±    2.1012       0.00473 ±    0.00465       0.00022 ±    0.00001     0.379 ±  0.019 %    99.137 ±  0.259 %
   6      21.2662 ±    1.9072       0.00422 ±    0.00390       0.00022 ±    0.00001     0.381 ±  0.018 %    99.085 ±  0.244 %
   7      21.4126 ±    1.7643       0.00409 ±    0.00335       0.00022 ±    0.00001     0.374 ±  0.016 %    98.992 ±  0.237 %

khosravipasha (Collaborator) commented:

Good news: our first CPU PR just got merged into the llama.cpp master branch. If you are still working on this, please rebase onto PrismML's master (it just pulled in the main llama.cpp).

Changes: the Q1_0_g128 naming is gone; the original Q1_0 with group size 32 was deleted, and Q1_0_g128 was renamed to Q1_0, which now defaults to group size 128.

https://github.com/PrismML-Eng/llama.cpp/tree/master

That branch only has the generic CPU path (slow) and the ARM NEON path; I'm planning to gather the best x86 kernels from here and send a PR there (and tag all the contributors).

khosravipasha (Collaborator) commented:

There are a lot of CPU PRs; I'm planning to gather them all into one and then send it to the main llama.cpp.
Going to close this and mention the people who helped in a thread there; if you think your solution is better, please comment there:
#10
