nm-testing/Llama-4-Maverick-17B-128E-Instruct-block-FP8 • Text Generation • 42
nm-testing/Qwen3-VL-235B-A22B-Instruct-FP8-BLOCK • Text Generation
nm-testing/Qwen3-30B-A3B-FP8-block • Text Generation • 3B
nm-testing/granite-4.0-h-small-FP8-dynamic-test
nm-testing/tiny-testing-random-weights • 584k • 498
nm-testing/Llama4-Maverick-Eagle3-Speculators-64k-vocab
nm-testing/Llama-3.1-8B-Instruct-KV-FP8-tensor-static_minmax • 8B • 3
nm-testing/Llama-3.1-8B-Instruct-QKV-FP8-attn_head-static_minmax • 8B • 3
nm-testing/Llama-3.1-8B-Instruct-KV-FP8-attn_head-static_minmax • 8B • 4
nm-testing/Llama-3.1-8B-Instruct-QKV-FP8-tensor-static_minmax • 8B • 2
nm-testing/Llama-3.1-8B-Instruct-QKV-FP8-Head • 8B
nm-testing/Llama-3.1-8B-Instruct-QKV-FP8-Tensor • 8B
nm-testing/Llama-3.1-8B-Instruct-KV-FP8-Tensor • 8B
nm-testing/NVIDIA-Nemotron-Nano-9B-v2-quantized.w4a16 • 2B
nm-testing/Qwen3-VL-8B-Instruct-W4A16 • 3B • 9
nm-testing/Qwen3-VL-8B-Instruct-NVFP4 • 6B • 20.2k • 1
nm-testing/Qwen3-VL-4B-Instruct-NVFP4 • 3B • 37 • 1
nm-testing/Llama-3.1-8B-Instruct-NVFP4-mse • 5B
nm-testing/Llama-3.1-8B-Instruct-NVFP4-static_minmax • 5B
nm-testing/EAGLE3-LLaMA3.1-Instruct-8B-sgl
nm-testing/Speculator-Qwen3-8B-Eagle3-converted-071-quantized-w4a16-sgl
nm-testing/Llama-3.2-1B-Instruct-attention-fp8-head • 1B • 5
nm-testing/SpeculatorLlama3-1-8B-Eagle3-sgl
nm-testing/Mockup-qwen235-eagle3-fp16-sgl
nm-testing/Speculator-Qwen3-8B-Eagle3-sgl
nm-testing/Qwen3-VL-235B-A22B-Instruct-NVFP4
nm-testing/Mockup-qwen235-eagle3-fp16-speculators-converted
nm-testing/Llama-3.1-70B-Instruct-FP8-block • Text Generation
nm-testing/Qwen3-235B-A22B-EAGLE3-converted-speculators-lmsys • 1B • 4