Qwen3-Next can now be run locally (~30GB RAM)!
Instruct GGUF: unsloth/Qwen3-Next-80B-A3B-Instruct-GGUF
Thinking GGUF: unsloth/Qwen3-Next-80B-A3B-Thinking-GGUF
The models come in Thinking and Instruct versions and use a new architecture, giving roughly 10x faster inference than Qwen3-32B. A quick-start sketch follows below.
Step-by-step guide: https://docs.unsloth.ai/models/qwen3-next
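As a rough illustration of the workflow the guide above walks through, here is a minimal sketch in Python: it downloads one quantization of the Instruct GGUF from the repo listed above and runs a prompt through llama.cpp. It assumes llama.cpp is already built with llama-cli on PATH, and the "*Q4_K_M*" pattern and the exact .gguf filename are assumptions; check the repo for the actual file names and pick a quant that fits your RAM.

```python
# Sketch only, not the official Unsloth guide verbatim.
# Assumptions: llama.cpp is built and `llama-cli` is on PATH;
# the Q4_K_M quant exists in the repo and fits in ~30GB RAM.
import subprocess
from huggingface_hub import snapshot_download

# Download only the chosen quantization from the Instruct GGUF repo.
local_dir = snapshot_download(
    repo_id="unsloth/Qwen3-Next-80B-A3B-Instruct-GGUF",
    allow_patterns=["*Q4_K_M*"],  # assumed quant; adjust to what the repo provides
    local_dir="qwen3-next-instruct-gguf",
)

# Run a single prompt; tune --ctx-size and -ngl (GPU offload layers) for your hardware.
subprocess.run(
    [
        "llama-cli",
        "-m", f"{local_dir}/Qwen3-Next-80B-A3B-Instruct-Q4_K_M.gguf",  # hypothetical filename
        "-p", "Hello, introduce yourself.",
        "--ctx-size", "8192",
        "-ngl", "99",
    ],
    check=True,
)
```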
bartowski/VibeStudio_MiniMax-M2-THRIFT-GGUF • Text Generation • 173B • Updated 22 days ago • 4.52k downloads • 6 likes
noctrex/Qwen3-Next-80B-A3B-Instruct-1M-MXFP4_MOE-GGUF • Text Generation • 80B • Updated 10 days ago • 1.05k downloads • 3 likes
unsloth/Qwen3-Next-80B-A3B-Instruct-GGUF • Text Generation • 80B • Updated 4 days ago • 98k downloads • 100 likes
DevQuasar/cerebras.MiniMax-M2-REAP-139B-A10B-GGUF • Text Generation • 139B • Updated 21 days ago • 5.04k downloads • 3 likes
DevQuasar/cerebras.MiniMax-M2-REAP-162B-A10B-GGUF • Text Generation • 162B • Updated 21 days ago • 10k downloads • 5 likes
dx8152/Qwen-Edit-2509-Multi-Angle-Lighting • Image-to-Image • Updated 21 days ago • 5.18k downloads • 147 likes
cerebras/Kimi-Linear-REAP-35B-A3B-Instruct • Text Generation • 35B • Updated Nov 6 • 3.83k downloads • 52 likes