# Alpie-Core: 4-bit Quantized Reasoning Model

---

*[Space reserved for blog paper, technical report links, and company logo]*

---

## 1. Introduction

Alpie-Core is one of the world's first fine-tuned 4-bit reasoning models, demonstrating that aggressive quantization can surpass full-precision baselines in reasoning, mathematics, and coding. By combining quantization-aware training with synthetic STEM-rich datasets, Alpie-Core achieves frontier-level reasoning while remaining practical for real-world deployment at scale.

## 2. Model Summary

- **Base Architecture**: DeepSeek-R1-Distill-Qwen-32B
- **Parameters**: 32 billion (quantized to 4-bit)
- **Training Method**: Supervised Fine-Tuning (SFT) using LoRA/QLoRA techniques
- **Quantization**: 4-bit NF4 with double quantization
- **Context Length**: 65,536 tokens
- **Max Output Length**: 16,384 tokens
- **License**: Apache 2.0
- **Memory Footprint**: ~8 GB (75% reduction from full precision)

## 3. Model Features

1. **Supports Streaming** – Real-time token-level responses
2. **OpenAI-Compatible API** – Seamless integration with OpenAI client libraries (see the client sketch after this list)
3. **65K Context Length** – Handles very large inputs and conversations
4. **16,384 Max Output Length** – Enables extremely long generations
5. **4-Bit Quantization** – Memory-efficient and optimized for deployment
6. **High-Throughput Inference** – Powered by vLLM for efficient large-scale serving
7. **Low-Latency Inference** – Fast response times optimized for production
8. **Customizable Safety & Moderation Filters** – Built-in guardrails for safer outputs
9. **Supports Function Calling / Tool Use** – Enables structured outputs and external API integration

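Because the serving layer is OpenAI-compatible, any standard OpenAI client should work against it. A minimal sketch, assuming a hypothetical endpoint URL, API key, and served-model name (none of these values are published above):

```python
from openai import OpenAI

# base_url, api_key, and model name below are placeholders, not published values.
client = OpenAI(base_url="https://example.com/v1", api_key="YOUR_API_KEY")

# Streaming chat completion against the OpenAI-compatible API.
stream = client.chat.completions.create(
    model="alpie-core",  # hypothetical served-model name
    messages=[{"role": "user", "content": "Prove that sqrt(2) is irrational."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```
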
## 4. Key Highlights

- **Frontier Performance in 4-bit**: 81.28% MMLU, 92.75% GSM8K, 57.8% SWE-Bench Verified
- **Global Ranking**: 3rd place on Humanity's Last Exam leaderboard
- **Cost Advantage**: 70-88% lower inference cost vs GPT-4/Claude/DeepSeek
- **Environmental Impact**: 64% lower carbon footprint per inference
- **STEM + Coding Excellence**: Outperforms full-precision peers in mathematics and programming
- **Enhanced Content Access**: Provides factual responses to geopolitically sensitive topics

## 5. Benchmark Results

| Benchmark | Alpie-Core (32B-4bit) | DeepSeek-V2 (236B) | Qwen2.5 72B | Llama 3.1 405B | Llama 3.1 70B | Gemma-3 27B-PT | Mistral-Small-24B-Base-2501 |
|-----------|----------------------|-------------------|-------------|---------------|---------------|----------------|----------------------------|
| MMLU (5-shot) | **81.28%** | 78.4% | 85.0% | 84.4% | 79.3% | 78.6% | 80.73% |
| GSM8K (8-shot) | **92.75%** | 81.6% | 88.3% | 83.5% | – | 82.2% | 80.73% |
| BBH (3-shot) | **85.12%** | 78.8% | 79.8% | 82.9% | 81.6% | 77.7% | – |
| MMLU-Pro (5-shot) | **64.78%** | 51.4% | 58.3% | 52.8% | 53.8% | 52.2% | 54.37% |
| MBPP (pass@1) | **75.20%** | 65.0% | 72.6% | 68.4% | – | 65.6% | 69.64% |
| HumanEval (pass@1) | **57.23%** | 43.3% | 53.0% | 54.9% | – | 48.8% | – |

### SWE-Bench Verified Performance

| Rank | Model | Accuracy (%) | Performance vs Alpie |
|------|-------|-------------|---------------------|
| **1** | **Alpie Core** | **57.8** | **Alpie** |
| 2 | Qwen3-Coder-30B-A3B-Instruct | 51.6 | Below Alpie |
| 3 | o3-mini (high) | 49.3 | Below Alpie |
| 4 | DeepSeek R1 | 49.2 | Below Alpie |
| 5 | Claude 3.5 Sonnet | 49.0 | Below Alpie |
| 6 | o1 | 48.9 | Below Alpie |
| 7 | Devstral | 46.8 | Below Alpie |

### Humanity's Last Exam Leaderboard Performance

| Rank | Model | Accuracy (%) | Performance vs Alpie |
|------|-------|-------------|---------------------|
| 1 | GPT 4.5 Preview | 5.8 | Above Alpie |
| 2 | Claude Sonnet 4 | 5.42 | Above Alpie |
| **3** | **Alpie Core 32B (4-bit)** | **5.41** | **Alpie** |
| 4 | Llama 4 Maverick | 5.34 | Below Alpie |
| 5 | GPT 4.1 | 4.97 | Below Alpie |
| 6 | Kimi K2 Instruct | 4.68 | Below Alpie |
| 7 | DeepSeek V3 | 4.55 | Below Alpie |
| 8 | Gemini 1.5 Pro 002 | 4.55 | Below Alpie |

### Additional Benchmarks

| Benchmark | Alpie-Core (32B-4bit) | Category |
|-----------|----------------------|----------|
| AIME | **47.34%** | Advanced Mathematics |
| GPQA (Diamond) | **40.91%** | Graduate-level QA |
| TruthfulQA (MC2) | **60.05%** | Truthfulness |
| HellaSwag | **84.66%** | Commonsense |
| PIQA | **83.24%** | Physical Reasoning |
| ARC Challenge | **67.58%** | Science QA |
| CommonSenseQA | **87.06%** | Commonsense |
| AGIEval | **64.98%** | General Intelligence |
| Winogrande | **79.53%** | Commonsense Reasoning |

## 6. Training Details

- **Hardware**: 8× NVIDIA A100-80GB GPUs
- **Training Duration**: 408 hours
- **Fine-tuning Method**: LoRA/QLoRA with the following configuration (sketched in code after this list):
  - LoRA Alpha: 8
  - LoRA Dropout: 0.05
  - LoRA Rank: 8
- **Quantization**: 4-bit NF4 + double quantization + FP16 compute
- **Dataset Domains**: Mathematics, coding, reasoning, science, general knowledge, competitive exams, Indian context and law, multilingual (Hindi and Hinglish)
- **Synthetic Data Advantage**: +15-20% performance boost in STEM & coding domains

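For concreteness, the hyperparameters above map onto standard `peft` and `transformers` configuration objects roughly as follows. This is a sketch of the stated settings, not the actual training script; unstated details such as LoRA target modules are omitted:

```python
import torch
from peft import LoraConfig
from transformers import BitsAndBytesConfig

# QLoRA quantization: 4-bit NF4, double quantization, FP16 compute (per the list above).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)

# LoRA adapter: rank 8, alpha 8, dropout 0.05 (per the list above).
lora_config = LoraConfig(
    r=8,
    lora_alpha=8,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
```
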
## 7. Environmental Impact

**Carbon Footprint**: 298-835 kg CO₂e (training)

## 8. Use Cases

### Scientific Research Excellence
- 98% performance on the SciQ benchmark
- Advanced physics, chemistry, and mathematical sciences
- Literature review automation and hypothesis generation
- Experimental design optimization

### Advanced Coding and Software Engineering
- 57.8% SWE-Bench Verified score (6.2 points above the nearest competitor in the table above)
- Automated bug detection and GitHub issue resolution
- Competitive programming and algorithm design
- Enterprise software development and architecture design

### Indian Cultural and Religious Expertise
- Comprehensive understanding of Hindu philosophy and Buddhist traditions
- Regional diversity and cultural knowledge across Indian states
- Legal and constitutional framework understanding
- Educational support for Indian competitive exams (JEE, NEET, UPSC, SSC)

## 9. Safety and Limitations

### Enhanced Content Access
Unlike the base DeepSeek model, Alpie-Core provides factual, balanced responses to geopolitically sensitive questions, such as Taiwan's status and Arunachal Pradesh sovereignty, supporting global accessibility and accuracy on these topics.

### Current Limitations
- Multilingual reasoning in Hindi/Hinglish shows room for improvement
- Fixed knowledge cutoff without real-time information retrieval
- Occasional struggles with complex multi-hop mathematical reasoning
- Potential hallucinations in factual question-answering

### Mitigations
- Safety classifiers and output filtering systems
- Model-assisted safety pipeline using RLHF
- Comprehensive adversarial testing by domain experts

## 10. How to Use

### Non-Streaming Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel, PeftConfig
import torch

# Load the LoRA adapter configuration to find the base model
peft_model_id = "169Pi/Alpie-core"
config = PeftConfig.from_pretrained(peft_model_id)

# Load the base model
base_model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    torch_dtype=torch.float16,
    device_map="auto"
)

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

# Load the LoRA weights
model = PeftModel.from_pretrained(base_model, peft_model_id)

# Ensure evaluation mode
model.eval()

# Sample inference
prompt = "Solve the Riemann Hypothesis and provide a final answer."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=1000)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)

print("Response:\n", response)
```

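The snippet above loads the base weights in FP16, which needs roughly 64 GB of GPU memory for a 32B model. To approximate the ~8 GB 4-bit footprint from the Model Summary instead, the base model can be loaded through a `BitsAndBytesConfig`; a sketch, assuming `bitsandbytes` is installed:

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
import torch

# 4-bit NF4 load matching the quantization described in Sections 2 and 6.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)

# Drop-in replacement for the base-model load in the snippet above.
base_model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,  # from PeftConfig, as above
    quantization_config=bnb_config,
    device_map="auto",
)
```
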
### Streaming Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
from peft import PeftModel, PeftConfig
import torch

# Load the LoRA adapter configuration to find the base model
peft_model_id = "169Pi/Alpie-core"
config = PeftConfig.from_pretrained(peft_model_id)

# Load the base model
base_model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    torch_dtype=torch.float16,
    device_map="auto"
)

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

# Load the LoRA weights
model = PeftModel.from_pretrained(base_model, peft_model_id)

# Ensure evaluation mode
model.eval()

# Initialize the streamer, which prints tokens to stdout as they are generated
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

# Sample streaming inference
prompt = "Solve the Riemann Hypothesis and provide a final answer."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

print("Streaming Response:")
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=1000,
        streamer=streamer,
        do_sample=True,
        temperature=0.7,
        top_p=0.9
    )
```

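`TextStreamer` prints directly to stdout; to consume tokens programmatically (for example, in a web server), `transformers` also provides `TextIteratorStreamer`, which yields text from a background generation thread. A minimal sketch reusing `model`, `tokenizer`, and `inputs` from the snippet above:

```python
from threading import Thread
from transformers import TextIteratorStreamer

# Iterator-style streamer: generate() runs in a thread and pushes decoded text here.
streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

thread = Thread(
    target=model.generate,
    kwargs={**inputs, "max_new_tokens": 1000, "streamer": streamer},
)
thread.start()

for new_text in streamer:  # yields text chunks as they arrive
    print(new_text, end="", flush=True)
thread.join()
```
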
### Deployment Options
- **Transformers**: Python, PyTorch integration
- **vLLM**: High-throughput inference (see the sketch after this list)
- **LMDeploy/Ollama/TensorRT-LLM**: Production deployments

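As a sketch of the vLLM path, the offline-inference API looks like the following. This assumes a vLLM-compatible checkpoint (for example, with the LoRA adapter merged into the base weights); the model ID shown is the adapter repo and may need adjustment:

```python
from vllm import LLM, SamplingParams

# Offline batch inference with vLLM (model ID is illustrative).
llm = LLM(model="169Pi/Alpie-core")
params = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=1024)

outputs = llm.generate(["Explain KV-cache paging in one paragraph."], params)
for out in outputs:
    print(out.outputs[0].text)
```
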
## 11. Citation

```bibtex
@misc{alpie2025core,
  title  = {Alpie-Core: A 4-bit Quantized Reasoning Model Surpassing Full-Precision Benchmarks},
  author = {Alpie AI},
  year   = {2025},
  url    = {https://huggingface.co/alpie/Alpie-Core-4bit}
}
```

## 12. License

Apache 2.0 – Free for research and commercial use

---

*For technical details, training methodology, and comprehensive evaluation results, please refer to our technical report.*