OneDNN BRGeMM Micro-Kernel Integration for BF16 MatMul #903
bbhattar wants to merge 12 commits into google:dev
Conversation
Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). View the failed invocation of the CLA check for more information; for the most up-to-date status, view the checks section at the bottom of the pull request.
jan-wassenberg
left a comment
Very nice work :) Just some fairly minor suggestions:
```cpp
static constexpr int64_t kMBlkValues[] = {32, 64};
static constexpr int64_t kBatchValues[] = {16, 32, 64, 128, 256};
```

```cpp
const int64_t k_chunks = static_cast<int64_t>(K) / kKBlk;
```
Should this round up? We have hwy::DivCeil.
Padding the K dimension and using ceiling division is an alternative. Instead, we use integer (floor) division and handle the remainder with a dedicated tail-shaped kernel.
jan-wassenberg
left a comment
Very nice, thanks for making the changes!
Looks like we require a rebase of the PR, then ready to land this :)
This PR integrates OneDNN BRGeMM (Batch-Reduced General Matrix Multiply) micro-kernels as an alternative compute path for BF16 MatMul on Intel Xeon platforms with AMX or AVX-512 BF16 support.
What
When enabled via the `GEMMA_ONEDNN_BRGEMM` compile-time flag, BF16×BF16 MatMul operations are dispatched to JIT-compiled BRGeMM kernels instead of the Highway SIMD path. This targets Gemma model workloads (FFW projections, attention) on Intel Xeon Scalable (SPR/EMR) processors. At this point, support has been added to both the CMake and Bazel build systems.

How to Enable
Runtime Fallback
When `GEMMA_ONEDNN_BRGEMM` is enabled at compile time, the BRGeMM path activates for BF16×BF16 operations whose dimensions meet AMX tile constraints (M, N, K ≥ 32 and K % 32 == 0). All other cases (non-BF16 types, smaller or non-aligned dimensions, mixed precision) fall through to the standard Highway SIMD MatMul path automatically.

Changes
- `ops/brgemm.h`: `UseOneDnnBrgemm()`, autotuning candidates
- `ops/brgemm-inl.h`: `DoMatMul_BRGeMM()`: kernel JIT/caching, B-packing with hugepages, tiled parallel execution
- `ops/matmul-inl.h`: `MatMul()` guarded by `#if GEMMA_ONEDNN_BRGEMM`
- `ops/matmul.h`: `#include "ops/brgemm.h"`, `brgemm_autotune` field in `MMPerKey`
- `ops/bench_matmul.cc`: `brgemm_autotune.Best()` to avoid infinite loop when BRGeMM handles dispatch
- `CMakeLists.txt`: `GEMMA_ONEDNN_BRGEMM` option, FetchContent for OneDNN v3.11, conditional target linking
- `BUILD.bazel`: `config_setting` for `gemma_onednn_brgemm`, conditional OneDNN dep and defines for x86_64
- `MODULE.bazel`: `http_archive` dependency
- `bazel/onednn.BUILD`: new build file for OneDNN
- `util/zones.h`: `kBRGeMM` caller enum for thread pool dispatch
- `util/zones.cc`: `CallerName` mapping for `kBRGeMM`

Testing
- `matmul_test` passes with and without `GEMMA_ONEDNN_BRGEMM` (all original test shapes, types, and correctness checks preserved)
- `bench_matmul` runs successfully with BRGeMM enabled