---
language: en
library_name: mlx
pipeline_tag: text-generation
tags:
- mlx
---
CURRENTLY UPLOADING
See DeepSeek-V3.2 5.5bit MLX in action: demonstration video (coming soon)
The q5.5 quantization achieves a perplexity of 1.141 in our testing:
| Quantization | Perplexity |
|---|---|
| q2.5 | 41.293 |
| q3.5 | 1.900 |
| q4.5 | 1.168 |
| q5.5 | 1.141 |
| q6.5 | 1.128 |
| q8.5 | 1.128 |
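
Perplexity here is the exponential of the mean next-token negative log-likelihood. The numbers in the table come from the uploader's own test setup; as a rough illustration of the metric only, here is a minimal sketch using the `mlx-lm` Python package. The model path and evaluation text are placeholders, and since this quant was made with a modified MLX (see notes below), a stock install may or may not load it.

```python
import mlx.core as mx
import mlx.nn as nn
from mlx_lm import load

# Placeholder path: point this at the downloaded quantized model directory.
model, tokenizer = load("path/to/DeepSeek-V3.2-5.5bit-MLX")

# Placeholder evaluation text; the card does not specify the test corpus.
tokens = mx.array(tokenizer.encode("your evaluation text here"))[None]

# Logits at position i predict token i + 1, so shift inputs and targets by one.
logits = model(tokens[:, :-1])
nll = nn.losses.cross_entropy(logits, tokens[:, 1:], reduction="mean")

# Perplexity is exp(mean negative log-likelihood).
print(f"perplexity: {mx.exp(nll).item():.3f}")
```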
## Usage Notes
- Tested on a single M3 Ultra with 512 GB RAM using the Inferencer app v1.7.3
- Memory usage: ~450 GB
- For a larger context window you can raise the VRAM (GPU wired memory) limit; this setting resets on reboot:
  - `sudo sysctl iogpu.wired_limit_mb=507000`
- Expect ~16.6 tokens/s @ 1000 tokens
- Quantized with a modified version of MLX 0.28
- For more details, see the demonstration video (coming soon) or visit DeepSeek-V3.2.
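
The notes above describe the Inferencer app, but as an MLX model it should also be loadable from Python with the `mlx-lm` package (`pip install mlx-lm`). A minimal sketch, assuming a stock `mlx-lm` can load this quant (the card notes it was made with a modified MLX 0.28, so this is not guaranteed); the path and prompt are placeholders:

```python
from mlx_lm import load, generate

# Placeholder path: use the local download or the Hugging Face repo id.
model, tokenizer = load("path/to/DeepSeek-V3.2-5.5bit-MLX")

prompt = "Summarize the key ideas behind mixture-of-experts models."

# verbose=True streams the output and reports generation speed (tokens/s),
# which you can compare against the ~16.6 tokens/s figure above.
response = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
```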
## Disclaimer
We are not the creator, originator, or owner of any model listed. Each model is created and provided by third parties. Models may not always be accurate or contextually appropriate. You are responsible for verifying the information before making important decisions. We are not liable for any damages, losses, or issues arising from their use, including data loss or inaccuracies in AI-generated content.