Firworks committed
Commit 33fd00a · verified · 1 Parent(s): 4fc1402

Update README.md

Files changed (1): README.md +14 -3
README.md CHANGED
@@ -1,7 +1,18 @@
+ ---
+ license: llama3
+ datasets:
+ - HuggingFaceH4/ultrachat_200k
+ base_model:
+ - NousResearch/Hermes-4-70B
+ ---
  # Hermes-4-70B-nvfp4

- **Format:** NVFP4 — weights & activations quantized to FP4 with dual scaling.
- **Base model:** `NousResearch/Hermes-4-70B`
- **How it was made:** One-shot calibration with LLM Compressor (NVFP4 recipe), long-seq calibration.
+ **Format:** NVFP4 — weights & activations quantized to FP4 with dual scaling.
+ **Base model:** `NousResearch/Hermes-4-70B`
+ **How it was made:** One-shot calibration with LLM Compressor (NVFP4 recipe), long-seq calibration with HuggingFaceH4/ultrachat_200k.

  > Notes: Keep `lm_head` in high precision; calibrate on long, domain-relevant sequences.
+
+ Check the original model card for information about this model.
+
+ If there are other models you're interested in seeing quantized to NVFP4 for use on the DGX Spark or other modern Blackwell (or newer) cards, let me know. I'm trying to make more NVFP4 models available so more people can try them out.
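
For context on the "How it was made" line above, here is a minimal sketch of what a one-shot NVFP4 conversion with LLM Compressor typically looks like. It is not the exact script used for this checkpoint: the calibration sample count, sequence length, and output directory are illustrative assumptions, and it presumes an `llmcompressor` version whose `QuantizationModifier` accepts the `NVFP4` scheme.

```python
# Sketch of a one-shot NVFP4 conversion with LLM Compressor (assumed settings).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

MODEL_ID = "NousResearch/Hermes-4-70B"
NUM_SAMPLES = 512     # illustrative; the card does not state the exact count
MAX_SEQ_LEN = 8192    # "long-seq calibration"; the actual length is an assumption

model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Calibration data: chat samples from ultrachat_200k rendered with the chat template.
ds = load_dataset("HuggingFaceH4/ultrachat_200k", split=f"train_sft[:{NUM_SAMPLES}]")
ds = ds.map(lambda ex: {"text": tokenizer.apply_chat_template(ex["messages"], tokenize=False)})
ds = ds.map(
    lambda ex: tokenizer(ex["text"], max_length=MAX_SEQ_LEN, truncation=True, add_special_tokens=False),
    remove_columns=ds.column_names,
)

# NVFP4 recipe: quantize Linear weights & activations to FP4, keep lm_head in high precision.
recipe = QuantizationModifier(targets="Linear", scheme="NVFP4", ignore=["lm_head"])

oneshot(
    model=model,
    dataset=ds,
    recipe=recipe,
    max_seq_length=MAX_SEQ_LEN,
    num_calibration_samples=NUM_SAMPLES,
)

# Save the compressed checkpoint (output directory name is a placeholder).
model.save_pretrained("Hermes-4-70B-nvfp4", save_compressed=True)
tokenizer.save_pretrained("Hermes-4-70B-nvfp4")
```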
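
And, for the DGX Spark / Blackwell point above, a similarly hedged sketch of loading the result for inference. It assumes a recent vLLM build with NVFP4 (compressed-tensors) support on NVFP4-capable hardware; the model id below is a placeholder for wherever the quantized checkpoint is hosted.

```python
# Sketch: running the NVFP4 checkpoint with vLLM (hardware and repo id are assumptions).
from vllm import LLM, SamplingParams

llm = LLM(model="Hermes-4-70B-nvfp4")  # local path or hub id of the quantized checkpoint
params = SamplingParams(temperature=0.7, max_tokens=128)
out = llm.generate(["Explain NVFP4 dual scaling in two sentences."], params)
print(out[0].outputs[0].text)
```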