x86: implement AVX2 kernel for ggml_vec_dot_q1_0_g128_q8_0 #11

SimesD61 wants to merge 1 commit into PrismML-Eng:prism
Conversation
The x86 implementation was a stub that called the scalar generic fallback; the ARM NEON kernel was already fully vectorized. This implements the same algorithm using AVX2 intrinsics.

Key techniques:

- vpshufb (`_mm_shuffle_epi8`) to broadcast each 4-byte sub-block to 32 lanes
- AND+cmpeq to decode 1-bit weights to sign bytes (+1/-1)
- `maddubs_epi16` + `madd_epi16` for INT8 dot-product reduction
- 4 independent FMA accumulators to hide the 5-cycle FMA latency

Performance on Intel i7-8700B (no AVX-512):

- Before: ~0.04 tok/s (scalar fallback, 67× slower than ARM CPU)
- After: ~8 tok/s (AVX2, matches the compute-bound ceiling for Q1_0_g128)
- ~200× speedup over the stub

Falls back to the generic implementation on non-AVX2 targets.
Pull request overview
This PR replaces the x86 stub implementation of ggml_vec_dot_q1_0_g128_q8_0 with a real AVX2-optimized kernel, bringing x86 performance in line with the existing ARM NEON vectorized path.
Changes:
- Implement an AVX2 version of `ggml_vec_dot_q1_0_g128_q8_0` using shuffle-based bit expansion and INT8 dot-product primitives.
- Use multiple independent accumulators and reduce them at the end for improved throughput.
- Keep a generic fallback for non-AVX2 builds.
```c
GGML_ASSERT(n % QK1_0_g128 == 0);
GGML_ASSERT(nrc == 1);
```
This function uses GGML_ASSERT for argument checks, but the other vec_dot kernels in this file consistently use assert() (e.g., ggml_vec_dot_q4_0_q8_0 just below). For consistency and to avoid changing behavior in release builds (GGML_ASSERT may not be compiled out like assert), consider switching these to assert() like the rest of the file or documenting why GGML_ASSERT is required here.
Suggested change:

```c
assert(n % QK1_0_g128 == 0);
assert(nrc == 1);
```
```c
#define DOT_SUB(bv, yb, acc) do { \
    const __m256i yv = _mm256_loadu_si256((const __m256i *)(yb)->qs); \
    /* cmpeq(AND(bv,mask),0): 0xFF where bit=0; OR 0x01: 0xFF(-1) where bit=0, 0x01(+1) where bit=1 */ \
    const __m256i sgn = _mm256_or_si256(_mm256_cmpeq_epi8(_mm256_and_si256((bv), bit_mask), zero_vec), one8); \
    const __m256i p32 = _mm256_madd_epi16( \
        _mm256_maddubs_epi16(_mm256_abs_epi8(yv), _mm256_sign_epi8(sgn, yv)), ones16); \
    (acc) = _mm256_fmadd_ps(_mm256_cvtepi32_ps(p32), \
        _mm256_set1_ps(d0 * GGML_CPU_FP16_TO_FP32((yb)->d)), (acc)); \
} while (0)
```
The DOT_SUB macro uses _mm256_fmadd_ps but the implementation is only guarded by AVX2. AVX2 does not necessarily imply FMA is enabled at compile time (GGML_FMA is a separate CMake option), so this can fail to compile when building with -mavx2 but without -mfma. Consider either guarding this path with FMA/GGML_FMA or replacing the FMA with separate multiply + add so it builds correctly under AVX2-only.
Suggested change:

```c
#if defined(__FMA__)
#define DOT_SUB(bv, yb, acc) do { \
    const __m256i yv = _mm256_loadu_si256((const __m256i *)(yb)->qs); \
    /* cmpeq(AND(bv,mask),0): 0xFF where bit=0; OR 0x01: 0xFF(-1) where bit=0, 0x01(+1) where bit=1 */ \
    const __m256i sgn = _mm256_or_si256(_mm256_cmpeq_epi8(_mm256_and_si256((bv), bit_mask), zero_vec), one8); \
    const __m256i p32 = _mm256_madd_epi16( \
        _mm256_maddubs_epi16(_mm256_abs_epi8(yv), _mm256_sign_epi8(sgn, yv)), ones16); \
    (acc) = _mm256_fmadd_ps(_mm256_cvtepi32_ps(p32), \
        _mm256_set1_ps(d0 * GGML_CPU_FP16_TO_FP32((yb)->d)), (acc)); \
} while (0)
#else
#define DOT_SUB(bv, yb, acc) do { \
    const __m256i yv = _mm256_loadu_si256((const __m256i *)(yb)->qs); \
    /* cmpeq(AND(bv,mask),0): 0xFF where bit=0; OR 0x01: 0xFF(-1) where bit=0, 0x01(+1) where bit=1 */ \
    const __m256i sgn = _mm256_or_si256(_mm256_cmpeq_epi8(_mm256_and_si256((bv), bit_mask), zero_vec), one8); \
    const __m256i p32 = _mm256_madd_epi16( \
        _mm256_maddubs_epi16(_mm256_abs_epi8(yv), _mm256_sign_epi8(sgn, yv)), ones16); \
    (acc) = _mm256_add_ps(_mm256_mul_ps(_mm256_cvtepi32_ps(p32), \
        _mm256_set1_ps(d0 * GGML_CPU_FP16_TO_FP32((yb)->d))), (acc)); \
} while (0)
#endif
```
```c
DOT_SUB(bv0, &y[i*4+0], sumf0);
DOT_SUB(bv1, &y[i*4+1], sumf1);
DOT_SUB(bv2, &y[i*4+2], sumf2);
DOT_SUB(bv3, &y[i*4+3], sumf3);
```
DOT_SUB is defined as a very large single-line macro inside the loop body, which makes the kernel hard to read, debug, and maintain (and increases the risk of subtle macro issues if the arguments ever change). Consider replacing it with a small static inline helper function (or at least a multi-line macro defined outside the loop) to improve maintainability without affecting performance.
Just tested the AVX2 impl on my i5 box.
Good news: our first CPU PR just got merged into the llama.cpp master branch. If you are still working on this, please rebase onto PrismML's master (it just pulled in the main llama.cpp). Changes: the Q1_0_g128 naming is gone; the original Q1_0 with group size 32 was deleted, and Q1_0_g128 was renamed to Q1_0, which now has group size 128 by default. https://github.com/PrismML-Eng/llama.cpp/tree/master That branch only has the generic CPU path (slow) and the ARM NEON path; planning to gather the best x86 kernels from here and send a PR there (and tag all the contributors).

There are a lot of CPU PRs; planning to gather them all into one and then send it to the main llama.cpp.
Problem
The x86 implementation of `ggml_vec_dot_q1_0_g128_q8_0` in `ggml/src/ggml-cpu/arch/x86/quants.c` was a stub that immediately fell through to the scalar generic fallback. The ARM NEON implementation was already fully vectorized. On x86 this meant Bonsai 8B ran at ~0.04 tok/s, 67× slower than the ARM CPU path.
Solution
Full AVX2 implementation using the same algorithm as the NEON kernel:
- `vpshufb` bit expansion: each 32-bit sub-block is broadcast to 32 bytes via `_mm_shuffle_epi8`, then AND+cmpeq decodes the 1-bit weights to sign bytes (+1/-1)
- `maddubs_epi16` + `madd_epi16` for efficient 8-bit multiply-accumulate
- Four independent FMA accumulators, one per 32-byte sub-block (matching the `block_q1_0_g128` layout)

Performance (Intel i7-8700B, AVX2, no AVX-512)

- Before: ~0.04 tok/s (scalar fallback)
- After: ~8 tok/s (~200× speedup)
The 8 tok/s result is at the compute-bound ceiling for Q1_0_g128 on this CPU — Q1_0_g128 is ~4x more compute-intensive per byte than Q4_0, so further gains would require AVX-512 or a fundamentally different algorithm.