This is a model for testing llama.cpp-based runtimes; the goal is to have the smallest working GGUF file possible.
Generated by https://github.com/Firefox-AI/tinyllama
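
As a rough illustration of how such a file might be used in a smoke test, the sketch below loads it with the `llama-cpp-python` bindings. The local filename and the parameters are assumptions, not part of this repository; since the model exists only to exercise the runtime, the generated tokens are not expected to be meaningful.

```python
# Minimal smoke test, assuming llama-cpp-python is installed and the GGUF
# file from this repo has been downloaded as "tinyllama.gguf" (the filename
# is an assumption for illustration).
from llama_cpp import Llama

# Load the tiny GGUF file; a small context is enough for a load/run check.
llm = Llama(model_path="tinyllama.gguf", n_ctx=32, verbose=False)

# Generate a few tokens purely to confirm the runtime can run inference.
out = llm("hello", max_tokens=8)
print(out["choices"][0]["text"])
```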