Qiskit/mistral-small-3.2-24b-qiskit-GGUF

This is the Q4_K quantized GGUF conversion of the original Qiskit/mistral-small-3.2-24b-qiskit model. Please refer to the original mistral-small-3.2-24b-qiskit model card for more details.
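As a GGUF file, this model can be run with a llama.cpp-compatible runtime. A minimal sketch, assuming a recent llama.cpp build whose `llama-cli` supports the `-hf` shortcut for pulling quantized files directly from the Hugging Face Hub (the prompt text is illustrative only):

```shell
# Fetch the Q4_K quant from the Hub and run a single prompt.
# Requires a recent llama.cpp build with the -hf download shortcut.
llama-cli -hf Qiskit/mistral-small-3.2-24b-qiskit-GGUF \
  -p "Write a Qiskit circuit that prepares a Bell state." \
  -n 256
```

The same file also works with other GGUF loaders (e.g. llama-cpp-python or Ollama) once downloaded locally.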

Downloads last month: 95
Format: GGUF
Model size: 24B params
Architecture: llama