---
base_model: NousResearch/Hermes-4-70B
base_model_relation: quantized
quantized_by: ArtusDev
language:
- en
library_name: transformers
license: llama3
pipeline_tag: text-generation
tags:
- Llama-3.1
- instruct
- finetune
- reasoning
- hybrid-mode
- chatml
- function calling
- tool use
- json mode
- structured outputs
- atropos
- dataforge
- long context
- roleplaying
- chat
- exl3
---

# ArtusDev/NousResearch_Hermes-4-70B-EXL3

EXL3 quants of [NousResearch/Hermes-4-70B](https://huggingface.co/NousResearch/Hermes-4-70B), quantized with exllamav3.

## Quants

| Quant   | BPW  | Head Bits |
| ------- | ---- | --------- |
| 2.5_H6  | 2.5  | 6         |
| 3.0_H6  | 3.0  | 6         |
| 3.5_H6  | 3.5  | 6         |
| 4.0_H6  | 4.0  | 6         |
| 4.25_H6 | 4.25 | 6         |
| 5.0_H6  | 5.0  | 6         |
| 6.0_H6  | 6.0  | 6         |
| 8.0_H8  | 8.0  | 8         |

## How to Download and Use Quants

You can download quants by targeting a specific size (revision) using the Hugging Face CLI.

1. Install the Hugging Face CLI:

```bash
pip install -U "huggingface_hub[cli]"
```

2. Download a specific quant:

```bash
huggingface-cli download ArtusDev/NousResearch_Hermes-4-70B-EXL3 --revision "5.0bpw_H6" --local-dir ./
```
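The other sizes in the table above can be fetched the same way by changing the `--revision` flag. This is a minimal sketch that assumes the remaining revisions follow the same `<bpw>bpw_H<head bits>` naming pattern as the example above; check the repository's branch list if a revision is not found.

```bash
# Assumes revisions are named "<bpw>bpw_H<head bits>", matching the example above.
huggingface-cli download ArtusDev/NousResearch_Hermes-4-70B-EXL3 --revision "4.0bpw_H6" --local-dir ./Hermes-4-70B-4.0bpw
```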

EXL3 quants can be run with any inference client that supports the EXL3 format, such as TabbyAPI. Refer to the TabbyAPI documentation for setup instructions.
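As a rough sketch, once TabbyAPI is running with this quant loaded, it exposes an OpenAI-compatible API that can be queried as below. The port, endpoint path, and API-key header assume a default local TabbyAPI configuration and may differ for your setup.

```bash
# Hypothetical request against a local TabbyAPI instance on its default port;
# replace YOUR_API_KEY with the key from your TabbyAPI configuration.
curl http://127.0.0.1:5000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
        "messages": [{"role": "user", "content": "Hello! Briefly introduce yourself."}],
        "max_tokens": 128
      }'
```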

## Quant Requests

To request EXL3 quants of other models, see the EXL community hub for request guidelines.