
NOSA: Native and Offloadable Sparse Attention

Boost Decoding Efficiency via High-Locality Offloading

Overview

NOSA is a trainable sparse attention mechanism designed for KV-cache offloading with an explicit locality constraint, paired with an inference system (NOSI) that realizes its efficiency gains during decoding. It improves long-context and long-generation quality over prior offloading baselines while boosting decoding throughput by up to 5.04× over FullAttn, 1.92× over InfLLMv2, and 1.83× over ShadowKV on 1B/3B/8B LLMs.
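As a rough conceptual illustration (not the exact formulation from the paper), a locality constraint can be thought of as forcing the KV positions selected at the current decoding step to largely overlap with those selected at the previous step, so that most of the required KV entries are already resident on the GPU and only a small delta must be fetched from CPU memory. The sketch below is hypothetical; the function name, the scoring input, and the keep_ratio parameter are illustrative placeholders, not the released code.

import torch

def locality_constrained_topk(scores, prev_idx, k, keep_ratio=0.75):
    """Hypothetical sketch: pick k KV positions for the current step, forcing
    a fraction of them to be reused from the previous step's selection so most
    entries are already on the GPU (high-locality offloading)."""
    n_keep = int(k * keep_ratio)                       # budget reused from the last step
    prev_scores = scores[prev_idx]
    keep = prev_idx[torch.topk(prev_scores, min(n_keep, prev_idx.numel())).indices]

    # Fill the remaining budget from positions not already kept.
    mask = torch.ones_like(scores, dtype=torch.bool)
    mask[keep] = False
    rest = torch.topk(scores.masked_fill(~mask, float("-inf")), k - keep.numel()).indices
    return torch.cat([keep, rest])

# Only the newly selected positions (those not in prev_idx) need to be
# transferred from CPU to GPU, so a larger overlap means fewer PCIe transfers
# per decoding step.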

Models

We train 1B, 3B, and 8B models with FullAttn, InfLLMv2, DMA, and NOSA (4 attention variants × 3 sizes), resulting in a total of 12 models. The following models have been released on Hugging Face.

Model    Link
NOSA-1B  openbmb/NOSA-1B
NOSA-3B  openbmb/NOSA-3B
NOSA-8B  openbmb/NOSA-8B

Please reach out to us if additional baseline models (FullAttn, InfLLMv2, or DMA) are needed. You may open an issue or contact us directly via email (our email addresses are provided in the paper).
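A minimal loading sketch using the Hugging Face transformers API is shown below. Since the checkpoint ships custom modeling code, trust_remote_code=True is assumed to be required; the dtype and generation settings are illustrative only, not recommended values from the paper.

# Minimal loading sketch (assumes the custom attention code shipped with the
# checkpoint; generation arguments are illustrative).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openbmb/NOSA-1B"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
).cuda()

inputs = tokenizer("Long-context question goes here ...", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))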

Citation

@article{huang2025nosa,
  title={NOSA: Native and Offloadable Sparse Attention},
  author={Huang, Yuxiang and Wang, Pengjie and Han, Jicheng and Zhao, Weilin and Su, Zhou and Sun, Ao and Lyu, Hongya and Zhao, Hengyu and Wang, Yudong and Xiao, Chaojun and Han, Xu and Liu, Zhiyuan},
  journal={arXiv preprint arXiv:2510.13602},
  year={2025}
}