Does this mean that we will have GGUF quants of models as they release, or at least out-of-the-box GGUF support for new models in the future?
rombodawg