AMoE: Agglomerative Mixture-of-Experts Vision Foundation Model Paper • 2512.20157 • Published Dec 23, 2025 • 2
AMoE: Agglomerative MoE Vision Foundation Models Collection CVPR 2026. A family of vision encoders distilled from DINOv3 and SigLIP2, available in MoE and dense variants. • 4 items • Updated 3 days ago • 1
Falcon-H1-Tiny Collection A series of extremely small yet powerful language models redefining capabilities at small scale • 19 items • Updated 13 days ago • 36
Learnable Multipliers: Freeing the Scale of Language Model Matrix Layers Paper • 2601.04890 • Published Jan 8 • 42