Sparse Video Generation Propels Real-World Beyond-the-View Vision-Language Navigation Paper • 2602.05827 • Published 11 days ago • 18
EgoHumanoid: Unlocking In-the-Wild Loco-Manipulation with Robot-Free Egocentric Demonstration Paper • 2602.10106 • Published 6 days ago • 20
RISE: Self-Improving Robot Policy with Compositional World Model Paper • 2602.11075 • Published 5 days ago • 27
χ₀: Resource-Aware Robust Manipulation via Taming Distributional Inconsistencies Paper • 2602.09021 • Published 7 days ago • 25
How Well Do Models Follow Visual Instructions? VIBE: A Systematic Benchmark for Visual Instruction-Driven Image Editing Paper • 2602.01851 • Published 14 days ago • 16
Optimization-Guided Diffusion for Interactive Scene Generation Paper • 2512.07661 • Published Dec 8, 2025 • 3
Latent Sketchpad: Sketching Visual Thoughts to Elicit Multimodal Reasoning in MLLMs Paper • 2510.24514 • Published Oct 28, 2025 • 22
SimScale: Learning to Drive via Real-World Simulation at Scale Paper • 2511.23369 • Published Nov 28, 2025 • 39
R1-Reward: Training Multimodal Reward Model Through Stable Reinforcement Learning Paper • 2505.02835 • Published May 5, 2025 • 28
Article LeRobot goes to driving school: World's largest open-source self-driving dataset Mar 11, 2025 • 105
Mavors: Multi-granularity Video Representation for Multimodal Large Language Model Paper • 2504.10068 • Published Apr 14, 2025 • 30
MME-Unify: A Comprehensive Benchmark for Unified Multimodal Understanding and Generation Models Paper • 2504.03641 • Published Apr 4, 2025 • 14
VITA-1.5: Towards GPT-4o Level Real-Time Vision and Speech Interaction Paper • 2501.01957 • Published Jan 3, 2025 • 47
MME-Survey: A Comprehensive Survey on Evaluation of Multimodal LLMs Paper • 2411.15296 • Published Nov 22, 2024 • 21
MME-RealWorld: Could Your Multimodal LLM Challenge High-Resolution Real-World Scenarios that are Difficult for Humans? Paper • 2408.13257 • Published Aug 23, 2024 • 26