DeepSeek Open-Sources FlashMLA – MLA Decoding Kernel for Hopper GPUs
Published on: 2025-07-14 14:37:24
FlashMLA
FlashMLA is an efficient MLA decoding kernel for Hopper GPUs, optimized for serving variable-length sequences.
Currently released:
BF16
Paged kvcache with block size of 64
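To make the paged kvcache item above concrete: the cache is carved into fixed 64-token physical blocks, and a per-sequence block table maps logical token positions onto those blocks. The following is a minimal sketch of that indexing scheme; the names, shapes, and the 576-dim entry are illustrative assumptions, not FlashMLA's internal layout.

import torch

BLOCK_SIZE = 64  # tokens per physical cache block, matching the release

# Hypothetical pool of physical KV blocks: (num_blocks, BLOCK_SIZE, head_dim).
num_blocks, head_dim = 16, 576
kv_pool = torch.zeros(num_blocks, BLOCK_SIZE, head_dim)

# block_table[seq, j] holds the physical block storing tokens
# [j * 64, (j + 1) * 64) of sequence `seq`; blocks need not be contiguous.
block_table = torch.tensor([[3, 7, 1],   # sequence 0 -> blocks 3, 7, 1
                            [0, 5, 2]])  # sequence 1 -> blocks 0, 5, 2

def kv_for_token(seq: int, pos: int) -> torch.Tensor:
    # Translate a logical token position into (block, offset) coordinates.
    block = block_table[seq, pos // BLOCK_SIZE]
    return kv_pool[block, pos % BLOCK_SIZE]

kv_for_token(0, 130)  # token 130 of sequence 0 lives in block 1, offset 2

Because allocation happens per 64-token block, sequences of very different lengths can share one pool without padding each to the maximum length.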
Quick start
Install
python setup.py install
Benchmark
python tests/test_flash_mla.py
It achieves up to 3000 GB/s in memory-bound configurations and 580 TFLOPS in compute-bound configurations on H800 SXM5, using CUDA 12.6.
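In the memory-bound regime (long cached sequences, few query tokens) the kernel's runtime is dominated by streaming the KV cache from HBM, so throughput is reported in GB/s; in the compute-bound regime it is limited by tensor-core math and reported in TFLOPS. A rough sketch of how such figures can be derived from a timed run, assuming you supply the kernel launch plus its byte and FLOP counts:

import torch

def report_throughput(fn, bytes_moved: int, flops: int, iters: int = 100):
    # Time `fn` with CUDA events and convert to achieved GB/s and TFLOPS.
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    fn()  # warm-up
    torch.cuda.synchronize()
    start.record()
    for _ in range(iters):
        fn()
    end.record()
    torch.cuda.synchronize()
    ms = start.elapsed_time(end) / iters  # average milliseconds per call
    print(f"{bytes_moved / ms / 1e6:.0f} GB/s, {flops / ms / 1e9:.2f} TFLOPS")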
Usage
from flash_mla import get_mla_metadata, flash_mla_with_kvcache

tile_scheduler_metadata, num_splits = get_mla_metadata(cache_seqlens, s_q * h_q // h_kv, h_kv)

for i in range(num_layers):
    ...
    o_i, lse_i = flash_mla_with_kvcache(
        q_i, kvcache_i, block_table, cache_seqlens, dv,
        tile_scheduler_metadata, num_splits, causal=True,
    )
    ...
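The shapes below show one plausible way to set up the arguments; they are assumptions consistent with MLA decoding (a single latent KV head of dimension 576, of which dv = 512 is the value part), not a documented specification:

import torch

b, s_q, h_q, h_kv = 4, 1, 128, 1        # batch, query length, query heads, KV heads
d, dv = 576, 512                        # per-head dim of q/kvcache and of the output
blocks_per_seq = 16                     # each block caches 64 tokens

cache_seqlens = torch.full((b,), 1000, dtype=torch.int32, device="cuda")
block_table = torch.arange(b * blocks_per_seq, dtype=torch.int32,
                           device="cuda").view(b, blocks_per_seq)
q_i = torch.randn(b, s_q, h_q, d, dtype=torch.bfloat16, device="cuda")
kvcache_i = torch.randn(b * blocks_per_seq, 64, h_kv, d,
                        dtype=torch.bfloat16, device="cuda")

Note that get_mla_metadata is called once per batch, outside the layer loop: the tile scheduling depends only on sequence lengths and head counts, so its cost is amortized across all layers.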
Requirements
Hopper GPUs
CUDA 12.3 and above
PyTorch 2.0 and above
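A quick sanity check of these requirements before building, as a minimal sketch (Hopper corresponds to compute capability 9.0):

import torch

major, minor = torch.cuda.get_device_capability()
assert (major, minor) >= (9, 0), "FlashMLA needs a Hopper (sm_90) GPU"
print("PyTorch:", torch.__version__)   # should be 2.0 or newer
print("CUDA:", torch.version.cuda)     # should be 12.3 or newer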
Acknowledgement
FlashMLA is inspired by the FlashAttention 2&3 and CUTLASS projects.
Citation
...