This is an MXFP4_MOE quantization of the model dolphin-2.7-mixtral-8x7b

Original model: https://huggingface.co/dphn/dolphin-2.7-mixtral-8x7b
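
A minimal sketch of loading this quant with llama-cpp-python (assumes `pip install llama-cpp-python huggingface_hub`). The exact GGUF filename inside the repo is an assumption, so a glob pattern is used; check the repo's file listing for the real name.

```python
from llama_cpp import Llama

# Download the GGUF from the Hub and load it; the filename glob is an
# assumption -- replace it with the actual quant file if it doesn't match.
llm = Llama.from_pretrained(
    repo_id="noctrex/dolphin-2.7-mixtral-8x7b-MXFP4_MOE-GGUF",
    filename="*MXFP4_MOE*.gguf",
    n_ctx=4096,  # context window; adjust for your memory budget
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a haiku about dolphins."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```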

Format: GGUF
Model size: 47B params
Architecture: llama
Quantization: MXFP4_MOE (4-bit)

