Paper: [GLM-4.5: Agentic, Reasoning, and Coding (ARC) Foundation Models](https://arxiv.org/abs/2508.06471) (arXiv:2508.06471)
MMFP4-quantized GLM-4.7-Flash — a 30B-A3B MoE model compressed to 4 bits per weight using GPTQ with actorder and Metal Marlin's E2M1 FP4 format.
| Metric | Value |
|---|---|
| Effective bits | 4.0 bpw |
| Compression | 4× vs FP16 |
| Model size | ~16 GB (vs ~60 GB FP16) |
| Parameters | 29.3B |
| Format | HuggingFace sharded safetensors |
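The E2M1 FP4 format encodes each weight as a sign bit, a 2-bit exponent, and a 1-bit mantissa, giving sixteen signed values with magnitudes 0, 0.5, 1, 1.5, 2, 3, 4 and 6. The ~16 GB figure is consistent with a back-of-the-envelope estimate (the group size of 128 is taken from the tensor layout described later in this card):

```python
# Rough size estimate: 4-bit codes plus one FP16 scale shared per 128-weight group.
params = 29.3e9
bits_per_weight = 4 + 16 / 128        # ≈ 4.125 effective bits per quantized weight
print(f"{params * bits_per_weight / 8 / 1e9:.1f} GB")
# ≈ 15.1 GB for the quantized tensors; the FP16 embeddings, lm_head, norms and
# router weights account for the remainder of the ~16 GB total.
```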
This is a quantized version of zai-org/GLM-4.7-Flash, the strongest model in the 30B class, striking a balance between performance and efficiency. The base model is a 30B-A3B Mixture-of-Experts design with 29.3B total parameters (roughly 3B activated per token) and 64 routed experts per MoE layer.
Quantized using MR-GPTQ (Metal Marlin GPTQ) with CUDA acceleration:
| Component | Bit Width | Notes |
|---|---|---|
| Embeddings | FP16 | Full precision |
| LM Head | FP16 | Full precision |
| Attention (q/k/v/o) | 4-bit | GPTQ with Hessians |
| MoE Experts (64×) | 4-bit | GPTQ with actorder |
| Layer Norms | FP16 | Full precision |
| Router Weights | FP16 | Full precision |
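As an illustration of the scheme (not the actual MR-GPTQ implementation, which additionally applies Hessian-based error compensation and activation-order column reordering), per-group round-to-nearest quantization onto the E2M1 grid looks roughly like this:

```python
import torch

# Signed E2M1 magnitudes representable by a 4-bit FP4 code.
FP4_GRID = torch.tensor([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def quantize_fp4_groups(w: torch.Tensor, group_size: int = 128):
    """Round-to-nearest sketch: FP16 weight [out, in] -> 4-bit codes + per-group FP16 scales."""
    out_f, in_f = w.shape
    g = w.reshape(out_f, in_f // group_size, group_size)
    # Map the largest magnitude in each group onto the largest FP4 value (6.0).
    scales = g.abs().amax(dim=-1, keepdim=True).clamp(min=1e-8) / 6.0
    x = g / scales
    mag = (x.abs().unsqueeze(-1) - FP4_GRID).abs().argmin(dim=-1)   # nearest grid point
    codes = ((x < 0).to(torch.uint8) << 3) | mag.to(torch.uint8)    # sign bit + magnitude index
    return codes.reshape(out_f, in_f), scales.squeeze(-1).half()
```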
```
GLM-4.7-Flash-Marlin-MMFP4/
├── model-00001-of-00048.safetensors   # Layer 0 (embeddings)
├── model-00002-of-00048.safetensors   # Layer 1
├── ...
├── model-00048-of-00048.safetensors   # Layer 47 + lm_head
├── model.safetensors.index.json       # Weight map
├── config.json                        # Model config
├── generation_config.json
├── tokenizer.json                     # Tokenizer
└── tokenizer_config.json
```
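Individual tensors can be located through the standard Hugging Face sharded-checkpoint index. A minimal sketch with the safetensors library (the tensor name shown is hypothetical, for illustration only):

```python
import json
from safetensors import safe_open

repo_dir = "GLM-4.7-Flash-Marlin-MMFP4"
with open(f"{repo_dir}/model.safetensors.index.json") as f:
    weight_map = json.load(f)["weight_map"]   # tensor name -> shard file

name = "model.layers.0.self_attn.q_proj.weight"                  # illustrative name
with safe_open(f"{repo_dir}/{weight_map[name]}", framework="pt") as st:
    packed = st.get_tensor(name)                                 # uint8-packed FP4 codes
    scales = st.get_tensor(name.replace(".weight", ".scales"))   # FP16 per-group scales
```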
```python
from metal_marlin import MarlinForCausalLM
from transformers import AutoTokenizer

# Load the MMFP4 checkpoint on Apple Silicon (Metal Performance Shaders backend).
model = MarlinForCausalLM.from_pretrained(
    "RESMP-DEV/GLM-4.7-Flash-Marlin-MMFP4",
    device="mps",
)

# The quantized model reuses the original GLM-4.7-Flash tokenizer.
tokenizer = AutoTokenizer.from_pretrained("zai-org/GLM-4.7-Flash")

prompt = "<|user|>\nExplain quantum computing in simple terms.\n<|assistant|>\n"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to("mps")

output = model.generate(input_ids, max_new_tokens=256, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
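Reusing `model` and `tokenizer` from the snippet above, the prompt can also be built with `apply_chat_template` instead of hand-written tags (this assumes the zai-org/GLM-4.7-Flash tokenizer ships a chat template matching the format above):

```python
messages = [{"role": "user", "content": "Explain quantum computing in simple terms."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to("mps")
output = model.generate(input_ids, max_new_tokens=256, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```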
Each quantized weight tensor has corresponding scale factors:
- `{name}.weight`: packed FP4 weights (uint8)
- `{name}.scales`: FP16 per-group scales (group_size=128), combined with the codes as shown in the sketch after the hardware table below

| Device | Memory | Notes |
|---|---|---|
| Apple M4 Max | 36 GB+ | Via Metal Marlin |
| Apple M2 Ultra | 36 GB+ | Via Metal Marlin |
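Conceptually, the `{name}.weight` / `{name}.scales` pair dequantizes back to FP16 as follows. This is only a sketch: it assumes two codes per byte with the low nibble first, whereas the real Marlin kernels use an interleaved packing optimized for the GPU.

```python
import torch

# Signed E2M1 lookup: codes 0-7 are +{0, .5, 1, 1.5, 2, 3, 4, 6}, codes 8-15 their negatives.
FP4_VALUES = torch.tensor([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])
FP4_LUT = torch.cat([FP4_VALUES, -FP4_VALUES])

def dequantize_fp4(packed: torch.Tensor, scales: torch.Tensor, group_size: int = 128):
    """packed: uint8 [out, in/2], scales: FP16 [out, in/group_size] -> FP16 weight [out, in]."""
    lo, hi = packed & 0xF, packed >> 4
    codes = torch.stack([lo, hi], dim=-1).reshape(packed.shape[0], -1)
    w = FP4_LUT[codes.long()].reshape(packed.shape[0], -1, group_size)
    return (w * scales.unsqueeze(-1).float()).reshape(packed.shape[0], -1).half()
```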
| Benchmark | GLM-4.7-Flash | Qwen3-30B-A3B | GPT-OSS-20B |
|---|---|---|---|
| AIME 2025 | 91.6 | 85.0 | 91.7 |
| GPQA | 75.2 | 73.4 | 71.5 |
| SWE-bench Verified | 59.2 | 22.0 | 34.0 |
| τ²-Bench | 79.5 | 49.0 | 47.7 |
| BrowseComp | 42.8 | 2.29 | 28.3 |
| Model | Format | Size | Bits | Method |
|---|---|---|---|---|
| GLM-4.7-Flash-Trellis-MM | Trellis | 14 GB | 3.78 bpw | EXL3-style mixed precision |
| This model | MMFP4 | 16 GB | 4.0 bpw | GPTQ + actorder |
Choose Trellis for a smaller footprint, or MMFP4 for a simpler tensor format and potentially better compatibility.
If you use this model, please cite the original GLM-4.5 paper:
```bibtex
@misc{glm2025glm45,
  title={GLM-4.5: Agentic, Reasoning, and Coding (ARC) Foundation Models},
  author={GLM Team and Aohan Zeng and Xin Lv and others},
  year={2025},
  eprint={2508.06471},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2508.06471},
}
```
This quantized model inherits the MIT License from the original GLM-4.7-Flash model.
Base model: zai-org/GLM-4.7-Flash