Update README.md
README.md CHANGED
@@ -53,23 +53,28 @@ Alpie-Core is one of the world's first fine-tuned 4-bit reasoning models, provin

## 4. Key Highlights

-
-
-
-
-
-

## 5. Benchmark Results

| Benchmark | Alpie-Core (32B-4bit) | DeepSeek-V2 (236B) | Qwen2.5 72B | Llama 3.1 405B | Llama 3.1 70B | Gemma-3 27B-PT | Mistral-Small-24B-Base-2501 |
|-----------|----------------------|-------------------|-------------|---------------|---------------|----------------|----------------------------|
| MMLU (5-shot) | **81.28%** | 78.4% | 85.0% | 84.4% | 79.3% | 78.6% | 80.73% |
-| GSM8K (8-shot) | **92.75%** | 81.6% | 88.3% | 83.5% |
-| BBH (3-shot) | **85.12%** | 78.8% | 79.8% | 82.9% | 81.6% | 77.7% |
| MMLU-Pro (5-shot) | **64.78%** | 51.4% | 58.3% | 52.8% | 53.8% | 52.2% | 54.37% |
-| MBPP (pass@1) | **75.20%** | 65.0% | 72.6% | 68.4% |
-| HumanEval (pass@1) | **57.23%** | 43.3% | 53.0% | 54.9% |

### SWE-Bench Verified Performance
@@ -128,7 +133,15 @@ Alpie-Core is one of the world's first fine-tuned 4-bit reasoning models, provin

## 8. Use Cases

-Best for **STEM

## 9. Safety and Limitations

## 4. Key Highlights

+1. **Frontier Performance in 4-bit**: 81.28% MMLU, 92.75% GSM8K, 57.8% SWE-Bench Verified
+
+2. **STEM + Coding Excellence**: Outperforms full-precision peers in mathematics and programming
+
+3. **Enhanced Content Access**: Provides factual responses to geopolitically sensitive topics
+
+4. **Quantization Efficiency**: The 4-bit quantized variant achieves competitive performance retention compared to full-precision models, demonstrating that aggressive quantization can preserve task accuracy while substantially reducing hardware requirements.
+
+5. **Benchmark Competitiveness**: Across more than ten standard evaluation benchmarks, the model performs on par with or better than larger 70B+ parameter systems, highlighting the effectiveness of our training and optimization strategies.
+
+6. **Environmental Benefits**: Through quantization and efficiency-focused design, the model requires significantly fewer computational resources, translating into lower energy consumption and a reduced carbon footprint relative to full-precision deployments.

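The quantization claim in highlight 4 can be illustrated with a minimal, self-contained sketch. This is illustrative only, not Alpie-Core's actual quantization pipeline: the weight values and group size below are made up, and real deployments use optimized kernels. The idea is that symmetric 4-bit quantization stores one of 16 integer levels per weight plus a shared floating-point scale per group, so a 32B-parameter model drops from roughly 64 GB at fp16 (2 bytes/weight) to roughly 16 GB at 4 bits/weight, before overheads.

```python
# Hypothetical sketch of symmetric 4-bit weight quantization (not the
# model's actual code). Each group of weights shares one scale; every
# weight is stored as an integer in [-8, 7], i.e. 4 bits.

def quantize_4bit(weights):
    """Quantize a group of floats to integer levels in [-8, 7] plus one scale.

    Assumes at least one nonzero weight in the group.
    """
    scale = max(abs(w) for w in weights) / 7.0
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from integer codes."""
    return [v * scale for v in q]

# A toy weight group (made-up values in a typical magnitude range).
weights = [0.021, -0.013, 0.007, -0.029, 0.018, -0.002]
q, scale = quantize_4bit(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))

print(q)                                 # integer codes, 4 bits each
print(f"max abs error: {max_err:.4f}")   # bounded by scale / 2
```

The reconstruction error per weight is bounded by half the quantization step (`scale / 2`), which is why accuracy can survive such aggressive compression when scales are kept per small group.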
## 5. Benchmark Results

| Benchmark | Alpie-Core (32B-4bit) | DeepSeek-V2 (236B) | Qwen2.5 72B | Llama 3.1 405B | Llama 3.1 70B | Gemma-3 27B-PT | Mistral-Small-24B-Base-2501 |
|-----------|----------------------|-------------------|-------------|---------------|---------------|----------------|----------------------------|
| MMLU (5-shot) | **81.28%** | 78.4% | 85.0% | 84.4% | 79.3% | 78.6% | 80.73% |
+| GSM8K (8-shot) | **92.75%** | 81.6% | 88.3% | 83.5% | - | 82.2% | 80.73% |
+| BBH (3-shot) | **85.12%** | 78.8% | 79.8% | 82.9% | 81.6% | 77.7% | - |
| MMLU-Pro (5-shot) | **64.78%** | 51.4% | 58.3% | 52.8% | 53.8% | 52.2% | 54.37% |
+| MBPP (pass@1) | **75.20%** | 65.0% | 72.6% | 68.4% | - | 65.6% | 69.64% |
+| HumanEval (pass@1) | **57.23%** | 43.3% | 53.0% | 54.9% | - | 48.8% | - |

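For the MBPP and HumanEval rows, pass@1 is the fraction of tasks where a generated sample passes the task's unit tests. A minimal sketch of the standard unbiased pass@k estimator follows (the widely used product form; not necessarily the exact harness used for these numbers, and the toy per-task results are hypothetical):

```python
from math import prod

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimate of P(at least one of k samples passes),
    given n generated samples per task of which c passed the tests."""
    if n - c < k:
        return 1.0  # every size-k subset must contain a passing sample
    return 1.0 - prod(1.0 - k / i for i in range(n - c + 1, n + 1))

# With one sample per task (n=1, k=1), pass@1 reduces to the plain
# pass rate, averaged over tasks. Toy set of (n, c) pairs:
tasks = [(1, 1), (1, 0), (1, 1), (1, 1)]
score = sum(pass_at_k(n, c, 1) for n, c in tasks) / len(tasks)
print(f"pass@1 = {score:.2%}")  # 75.00% on this toy set
```

Sampling more completions per task (n > k) and averaging the estimator over tasks reduces the variance of the reported score.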
### SWE-Bench Verified Performance

## 8. Use Cases

+Best for **STEM**, **complex mathematical reasoning**, **coding**, and **Indian context**
+
+1. **STEM**: Excels at solving advanced problems in science, technology, engineering, and mathematics with high accuracy.
+
+2. **Complex Mathematical Reasoning**: Handles multi-step logical and quantitative reasoning tasks with strong reliability.
+
+3. **Coding**: Supports software development, debugging, and algorithmic problem-solving across multiple programming languages.
+
+4. **Indian Context**: Provides culturally aware insights, competitive exam assistance (JEE, NEET, UPSC), and multilingual support in Hindi/Hinglish.

## 9. Safety and Limitations