Active filters: stem
Model • Task • Params • Downloads • Likes
mratsim/MiniMax-M2.1-FP8-INT4-AWQ • Text Generation • 39B • 3.39k • 27
zeroentropy/zerank-1-small • Text Ranking • 2B • 5.1k • 56
Text Generation • 8B • 424 • 8
15B • 3 • 2
mratsim/MiniMax-M2.1-BF16-INT4-AWQ • Text Generation • 39B • 1.06k • 4
Text Generation • 3B • 6 • 3
matsant01/STEMerald-2b-4bit • Text Generation • 3B • 8 • 1
mradermacher/AURORAV0.3-4B-GGUF • 4B • 34 • 1
mradermacher/AURORAV0.3-4B-i1-GGUF • 4B • 83 • 1
prithivMLmods/Bootes-Qwen3_Coder-Reasoning • Text Generation • 4B • 11 • 9
mradermacher/Bootes-Qwen3_Coder-Reasoning-GGUF • 4B • 287 • 3
mradermacher/Bootes-Qwen3_Coder-Reasoning-i1-GGUF • 4B • 146 • 1
Text Generation • 8B • 5
prithivMLmods/Nenque-MoT-0.6B-Elite14 • Text Generation • 0.6B • 1
mradermacher/Nenque-MoT-0.6B-Elite14-GGUF • 0.6B • 17
youssefbelghmi/MNLP_M3_mcqa_model_true • Text Classification • 0.6B
tensorblock/RefinedNeuro_RN_TR_R2-GGUF • 8B • 29
omniomni/omni-0-mini-preview • Text Generation • 2B • 1
Text Ranking • 4B • 833 • 73
prithivMLmods/WR30a-Deep-7B-0711 • Image-Text-to-Text • 8B • 2 • 3
mradermacher/WR30a-Deep-7B-0711-GGUF • Image-to-Text • 8B • 107 • 1
mradermacher/WR30a-Deep-7B-0711-i1-GGUF • Image-to-Text • 8B • 195 • 1
prithivMLmods/Omega-Qwen2.5-Coder-3B • Text Generation • 3B • 4 • 3
prithivMLmods/Omega-Qwen3-Atom-8B • Text Generation • 8B • 3 • 1
mradermacher/Omega-Qwen2.5-Coder-3B-GGUF • 3B • 43 • 1
mradermacher/Omega-Qwen3-Atom-8B-GGUF • 8B • 49
mradermacher/Omega-Qwen2.5-Coder-3B-i1-GGUF • 3B • 98 • 1
mradermacher/Omega-Qwen3-Atom-8B-i1-GGUF • 8B • 1.19k
prithivMLmods/zerank-1-GGUF • Text Ranking • 4B • 21
tensorblock/prithivMLmods_Bootes-Qwen3_Coder-Reasoning-GGUF • Text Generation • 4B • 83
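
For reference, a listing like the one above can also be pulled programmatically with the huggingface_hub client, which exposes the same fields per repository (id, task/pipeline tag, download count, likes). The sketch below is a minimal example, not a reproduction of how the page itself is generated: the "stem" tag comes from the active filter above, while the sort order, result limit, and print format are illustrative assumptions.

```python
# Minimal sketch: list Hub models carrying the "stem" tag, sorted by downloads.
# Assumptions: huggingface_hub is installed and the public Hub API is reachable.
from huggingface_hub import HfApi

api = HfApi()
models = api.list_models(
    filter="stem",      # tag used as the active filter above
    sort="downloads",   # illustrative choice; "likes" or "last_modified" also work
    direction=-1,       # descending
    limit=30,           # roughly the number of entries shown above
)

for m in models:
    # ModelInfo carries the fields shown in the listing; pipeline_tag can be None
    # for repos without a declared task (e.g. many GGUF quant repos).
    task = m.pipeline_tag or "n/a"
    print(f"{m.id} • {task} • {m.downloads} downloads • {m.likes} likes")
```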