---
license: mit
base_model:
- inclusionAI/Ling-flash-base-2.0
pipeline_tag: text-generation
library_name: transformers
---

<p align="center">
    <img src="https://mdn.alipayobjects.com/huamei_qa8qxu/afts/img/A*4QxcQrBlTiAAAAAAQXAAAAgAemJ7AQ/original" width="100"/>
</p>

<p align="center">🤗 <a href="https://huggingface.co/inclusionAI">Hugging Face</a>&nbsp;&nbsp; | &nbsp;&nbsp;🤖 <a href="https://modelscope.cn/organization/inclusionAI">ModelScope</a></p>

## Introduction

Today, __Ling-flash-2.0__ is officially open-sourced! 🚀
Following the release of the __language model [Ling-mini-2.0](https://huggingface.co/inclusionAI/Ling-mini-2.0)__ and the __thinking model [Ring-mini-2.0](https://huggingface.co/inclusionAI/Ring-mini-2.0)__, we are now open-sourcing the third MoE LLM under the __Ling 2.0 architecture: Ling-flash-2.0__, a language model with __100B total parameters__ and __6.1B activated parameters (4.8B non-embedding)__.
Trained on __20T+ tokens of high-quality data__ and refined with __supervised fine-tuning__ and __multi-stage reinforcement learning__, Ling-flash-2.0 achieves __SOTA performance among dense models under 40B parameters__ despite activating only ~6B parameters. It also remains strongly competitive against MoE models with larger activated/total parameter counts. Notably, it delivers outstanding performance in __complex reasoning, code generation, and frontend development__.

### Powerful Complex Reasoning Abilities

We conducted a comprehensive evaluation of Ling-flash-2.0’s reasoning capabilities, reporting strong results on representative benchmarks:
● __Multi-disciplinary knowledge reasoning__: GPQA-Diamond, MMLU-Pro
● __Advanced mathematical reasoning__: AIME 2025, Omni-MATH, OptMATH (advanced mathematical optimization tasks)
● __Challenging code generation__: LiveCodeBench v6, CodeForces-Elo
● __Logical reasoning__: KOR-Bench, ARC-Prize
● __Key regulated industries (Finance, Healthcare)__: FinanceReasoning, HealthBench
Compared with __dense models under 40B__ (e.g., Qwen3-32B-Non-Thinking, Seed-OSS-36B-Instruct (think budget=0)) and __larger-activation/total-parameter MoE models__ (e.g., Hunyuan-A13B-Instruct, GPT-OSS-120B/low), __Ling-flash-2.0__ demonstrates stronger complex reasoning power. It is also highly competitive on __creative tasks__ (Creative Writing v3).
<p align="center">
    <img src="https://mdn.alipayobjects.com/huamei_fi95qp/afts/img/zxAvQ7QtrAwAAAAAQqAAAAgADkZ7AQFr/fmt.webp"/>
</p>

<p align="center">
    <img src="https://mdn.alipayobjects.com/huamei_fi95qp/afts/img/qQ_sTqrxiesAAAAAQuAAAAgADkZ7AQFr/original"/>
</p>

### Efficient Architecture, High-Speed Inference

<p align="center">
    <img src="https://mdn.alipayobjects.com/huamei_fi95qp/afts/img/fMdiQZqYKSAAAAAAVdAAAAgADkZ7AQFr/fmt.avif"/>
</p>

Guided by [Ling Scaling Laws](https://arxiv.org/abs/2507.17702), Ling 2.0 adopts a __1/32 activation-ratio MoE architecture__, optimized across multiple design choices: expert granularity, shared-expert ratio, attention balance, __aux-loss-free + sigmoid routing strategy__, MTP layers, QK-Norm, Partial-RoPE, and more. These refinements enable __small-activation MoE__ models to achieve __7× efficiency gains__ over equivalent dense architectures.
In other words, with just __6.1B activated parameters (4.8B non-embedding)__, __Ling-flash-2.0__ can match the performance of ~40B dense models. Thanks to its small activation size, it also delivers major inference speed advantages:
● On __H20 hardware__, Ling-flash-2.0 achieves __200+ tokens/s__, offering __3× speedups__ compared to 36B dense models in everyday use.
● With __YaRN extrapolation__, it supports __128K context length__, and as output length grows, its relative speedup can reach __7× or more__.

<p align="center">
    <img src="https://mdn.alipayobjects.com/huamei_fi95qp/afts/img/oR9UTY7S0QgAAAAAgKAAAAgADkZ7AQFr/original"/>
</p>

<p align="center">
    <img src="https://mdn.alipayobjects.com/huamei_fi95qp/afts/img/Hid1RrgsCUAAAAAAQYAAAAgADkZ7AQFr/fmt.webp"/>
</p>

## Model Downloads

The table below lists the Ling-flash-2.0 models at their different stages. If you are located in mainland China, we also provide the model on ModelScope.cn to speed up the download process.

<center>

| **Model**           | **Context Length** | **Download** |
|:-------------------:|:------------------:|:------------:|
| Ling-flash-base-2.0 | 32K -> 128K (YaRN) | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-flash-base-2.0) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-flash-base-2.0) |
| Ling-flash-2.0      | 32K -> 128K (YaRN) | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ling-flash-2.0) <br>[🤖 ModelScope](https://www.modelscope.cn/models/inclusionAI/Ling-flash-2.0) |

</center>

Note: If you are interested in previous versions, please visit the past model collections on [Hugging Face](https://huggingface.co/inclusionAI) or [ModelScope](https://modelscope.cn/organization/inclusionAI).
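
If you prefer to script the download, the short sketch below uses the `huggingface_hub` Python API; the local directory is only an example path, and ModelScope offers an equivalent `snapshot_download` in its own SDK.

```python
# Download sketch using huggingface_hub; local_dir is an example path, adjust as needed.
from huggingface_hub import snapshot_download

local_path = snapshot_download(
    repo_id="inclusionAI/Ling-flash-2.0",  # or "inclusionAI/Ling-flash-base-2.0"
    local_dir="./Ling-flash-2.0",
)
print(f"Model downloaded to: {local_path}")
```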

## Quickstart

### Convert to safetensors

Models in safetensors format can be downloaded from [HuggingFace](https://huggingface.co/inclusionAI) or [ModelScope](https://modelscope.cn/organization/inclusionAI).
If you train your own model and want to evaluate it, you can convert the DCP checkpoint produced by training:
```shell
python tools/convert_dcp_to_safe_tensors.py --checkpoint-path ${DCP_PATH} --target-path ${SAFETENSORS_PATH}
```

Currently, BF16 and FP8 output formats are supported; select one with the corresponding conversion flag:
- `--force-bf16` for BF16 format.
- `--force-fp8` for FP8 format.

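To sanity-check a converted checkpoint, you can inspect the dtype of a few tensors with the `safetensors` library. This is only an illustrative sketch; the shard filename below is a placeholder and will differ in your `${SAFETENSORS_PATH}`.

```python
# Illustrative dtype check; the shard filename is a placeholder, not a guaranteed file name.
from safetensors import safe_open

shard = "model-00001-of-000XX.safetensors"  # replace with an actual shard in ${SAFETENSORS_PATH}
with safe_open(shard, framework="pt") as f:
    for name in list(f.keys())[:5]:  # print the first few tensor names and dtypes
        print(name, f.get_tensor(name).dtype)
```
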
### 🤗 Hugging Face Transformers

Here is a code snippet to show you how to use the chat model with `transformers`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "inclusionAI/Ling-flash-2.0"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    dtype="auto",
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "system", "content": "You are Ling, an assistant created by inclusionAI"},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt", return_token_type_ids=False).to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
```
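
If you want tokens printed as they are generated (e.g., in an interactive demo), `transformers` also provides `TextStreamer`; here is a minimal sketch reusing `model`, `tokenizer`, and `model_inputs` from the snippet above:

```python
# Optional streaming sketch: prints decoded tokens to stdout as they are generated.
from transformers import TextStreamer

streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
_ = model.generate(
    **model_inputs,
    max_new_tokens=512,
    streamer=streamer,  # tokens are printed incrementally instead of only returned at the end
)
```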

### 🤖 ModelScope

If you're in mainland China, we strongly recommend using our model from 🤖 <a href="https://modelscope.cn/organization/inclusionAI">ModelScope</a>.

## Deployment

### vLLM

vLLM supports offline batched inference as well as an OpenAI-compatible API service for online inference.

#### Environment Preparation

Since the pull request (PR) has not yet been submitted to the vLLM community, please prepare the environment by following the steps below:

```bash
git clone -b v0.10.0 https://github.com/vllm-project/vllm.git
cd vllm
# bailing_moe_v2.patch is provided in the inclusionAI/Ling-V2 repository
git apply Ling-V2/inference/vllm/bailing_moe_v2.patch
pip install -e .
```

#### Offline Inference:

```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

tokenizer = AutoTokenizer.from_pretrained("inclusionAI/Ling-flash-2.0")

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, repetition_penalty=1.05, max_tokens=16384)

llm = LLM(model="inclusionAI/Ling-flash-2.0", dtype='bfloat16')
prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "system", "content": "You are Ling, an assistant created by inclusionAI"},
    {"role": "user", "content": prompt}
]

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
outputs = llm.generate([text], sampling_params)

# Print the generated text for the single prompt
print(outputs[0].outputs[0].text)
```

#### Online Inference:

```bash
vllm serve inclusionAI/Ling-flash-2.0 \
    --tensor-parallel-size 2 \
    --pipeline-parallel-size 1 \
    --use-v2-block-manager \
    --gpu-memory-utilization 0.90
```
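
Once the server is up, any OpenAI-compatible client can query it. Below is a minimal Python sketch using the `openai` package; the base URL assumes vLLM's default local port 8000 and a dummy API key, so adjust both to your deployment.

```python
# Minimal OpenAI-compatible client sketch; base_url/api_key are assumptions for a default local vLLM server.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="inclusionAI/Ling-flash-2.0",
    messages=[
        {"role": "system", "content": "You are Ling, an assistant created by inclusionAI"},
        {"role": "user", "content": "Give me a short introduction to large language models."},
    ],
    max_tokens=512,
)
print(resp.choices[0].message.content)
```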

To handle long context in vLLM using YaRN, we need to follow these two steps:
1. Add a `rope_scaling` field to the model's `config.json` file, for example:
```json
{
    ...,
    "rope_scaling": {
        "factor": 4.0,
        "original_max_position_embeddings": 32768,
        "type": "yarn"
    }
}
```
2. Use an additional parameter `--max-model-len` to specify the desired maximum context length when starting the vLLM service.
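
Step 1 can also be scripted rather than edited by hand; here is a small sketch, assuming a local copy of the model directory (the path below is an example):

```python
# Sketch for step 1: add a YaRN rope_scaling entry to a local copy of config.json.
import json

config_path = "./Ling-flash-2.0/config.json"  # example path; point at your local model directory
with open(config_path) as f:
    config = json.load(f)

config["rope_scaling"] = {
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
    "type": "yarn",
}

with open(config_path, "w") as f:
    json.dump(config, f, indent=2)
```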

For detailed guidance, please refer to the vLLM [instructions](https://docs.vllm.ai/en/latest/).

### SGLang

#### Environment Preparation

We will submit our model to the official SGLang release later; for now, prepare the environment as follows:
```shell
pip3 install sglang==0.5.2rc0 sgl-kernel==0.3.7.post1
```
You can also use the Docker image:
```shell
docker pull lmsysorg/sglang:v0.5.2rc0-cu126
```
Then apply our patch to the sglang installation:
```shell
# the `patch` command is required; run `yum install -y patch` if it is missing
# bailing_moe_v2.patch is provided in the inclusionAI/Ling-V2 repository
patch -d `python -c 'import sglang;import os; print(os.path.dirname(sglang.__file__))'` -p3 < inference/sglang/bailing_moe_v2.patch
```

#### Run Inference

SGLang now supports both BF16 and FP8 models; which one is used depends on the dtype of the model in `${MODEL_PATH}`. Both are served with the same commands below:

- Start server:
```shell
python -m sglang.launch_server \
    --model-path $MODEL_PATH \
    --host 0.0.0.0 --port $PORT \
    --trust-remote-code \
    --attention-backend fa3
```
MTP is supported for the base model, but not yet for the chat model. To enable it, add the `--speculative-algorithm NEXTN` parameter to the start command.

- Client:
```shell
curl -s http://localhost:${PORT}/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "auto", "messages": [{"role": "user", "content": "What is the capital of France?"}]}'
```
More usage can be found [here](https://docs.sglang.ai/basic_usage/send_request.html).


### Finetuning

We recommend using [Llama-Factory](https://github.com/hiyouga/LLaMA-Factory) to [finetune Ling](https://github.com/inclusionAI/Ling-V2/blob/main/docs/llamafactory_finetuning.md). Alternatively, you can use [Megatron for finetuning](https://github.com/inclusionAI/Ling-V2/blob/main/docs/megatron_sft_training.md).

## License

This code repository is licensed under [the MIT License](https://github.com/inclusionAI/Ling-V2/blob/master/LICENCE).

## Citation

If you find our work helpful, feel free to cite us.

```

```