Jiaqi Tang (Jiaqi-hkust) · PRO
12 followers · 10 following
https://jqt.me/
jqtangust
jqtnpu
AI & ML interests
Multi-modal Large Language Model
Recent Activity
authored a paper 2 days ago: LongVideoAgent: Multi-Agent Reasoning with Long Videos
replied to their post 3 days ago:
We have open-sourced Robust-R1 (AAAI 2026 Oral), a new paradigm for anti-degradation and robustness enhancement in multimodal large models.

Multimodal Large Language Models struggle to maintain reliable performance under extreme real-world visual degradations, which impede their practical robustness. Existing robust MLLMs predominantly rely on implicit training/adaptation that focuses solely on visual encoder generalization, suffering from limited interpretability and isolated optimization. To overcome these limitations, we propose Robust-R1, a novel framework that explicitly models visual degradations through structured reasoning chains. Our approach integrates: (i) supervised fine-tuning for degradation-aware reasoning foundations, (ii) reward-driven alignment for accurately perceiving degradation parameters, and (iii) dynamic reasoning depth scaling adapted to degradation intensity.

To support this approach, we introduce a specialized 11K dataset featuring realistic degradations synthesized across four critical real-world visual processing stages, each annotated with a structured chain connecting degradation parameters, perceptual influence, the pristine semantic reasoning chain, and the conclusion. Comprehensive evaluations demonstrate state-of-the-art robustness: Robust-R1 outperforms all general and robust baselines on the real-world degradation benchmark R-Bench, while maintaining superior anti-degradation performance under multi-intensity adversarial degradations on MMMB, MMStar, and RealWorldQA.

We have made all of our papers, code, data, model weights, and demos fully open-source:
Paper: https://huggingface.co/papers/2512.17532 (help us to upvote)
GitHub code: https://github.com/jqtangust/Robust-R1 (help us to star)
HF model: https://huggingface.co/Jiaqi-hkust/Robust-R1
HF data: https://huggingface.co/datasets/Jiaqi-hkust/Robust-R1
HF Space: https://huggingface.co/spaces/Jiaqi-hkust/Robust-R1

We sincerely invite everyone to give it a try.
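For readers who want to try the released artifacts, here is a minimal sketch for pulling the Robust-R1 dataset and model weights from the Hub. It assumes the `datasets` and `huggingface_hub` libraries are installed and the repos are public; the "train" split name and the choice of the RL checkpoint are assumptions, not details confirmed in the post.

```python
# Minimal sketch: fetch the open-sourced Robust-R1 artifacts from the Hugging Face Hub.
# Assumptions: `datasets` and `huggingface_hub` are installed; the repos are public;
# the "train" split name is assumed, not confirmed by the post above.
from datasets import load_dataset
from huggingface_hub import snapshot_download

# The 11K degradation-reasoning dataset described in the post.
ds = load_dataset("Jiaqi-hkust/Robust-R1", split="train")
print(ds[0])  # inspect one annotated example (structured reasoning chain fields)

# Download the reward-aligned (RL) checkpoint locally; load it afterwards with
# whichever multimodal inference stack the model repo documents (not specified here).
local_dir = snapshot_download("Jiaqi-hkust/Robust-R1-RL")
print("Model weights downloaded to:", local_dir)
```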
upvoted a paper 4 days ago: LongVideoAgent: Multi-Agent Reasoning with Long Videos
Organizations: None yet
Jiaqi-hkust's activity
liked a dataset 5 days ago: Jiaqi-hkust/Robust-R1 · Viewer · Updated 6 days ago · 10.9k · 470 · 5
liked 2 models 5 days ago:
Jiaqi-hkust/Robust-R1-RL · 4B · Updated 6 days ago · 104 · 2
Jiaqi-hkust/Robust-R1-SFT · 4B · Updated 6 days ago · 65 · 5
liked a Space 5 days ago: Robust-R1 🔥 · Running on Zero · 3 · Analyze image degradations and effects
liked a Space 10 months ago: Hawk 🦫 · Build error · 4 · Video Anomaly Understanding
liked a model 10 months ago: Jiaqi-hkust/hawk · Updated Feb 26 · 3
liked a dataset 10 months ago: Jiaqi-hkust/hawk · Viewer · Updated Feb 26 · 535 · 2.15k · 4
liked 2 models 11 months ago:
AIDC-AI/Ovis2-2B · Image-Text-to-Text · 2B · Updated Aug 15 · 1.32k · 59
AIDC-AI/Ovis2-1B · Image-Text-to-Text · 1B · Updated Aug 15 · 279 · 97
liked 3 models over 1 year ago:
AIDC-AI/Ovis1.6-Gemma2-9B · Image-Text-to-Text · 10B · Updated Aug 15 · 237 · 275
AIDC-AI/Ovis1.5-Gemma2-9B · Image-Text-to-Text · 11B · Updated Feb 26 · 90 · 19
AIDC-AI/Ovis1.5-Llama3-8B · Image-Text-to-Text · Updated Feb 26 · 112 · 27