Can Textual Reasoning Improve the Performance of MLLMs on Fine-grained Visual Classification?
Abstract
Multi-modal large language models struggle with fine-grained visual classification, and chain-of-thought reasoning can further hurt accuracy as reasoning grows longer; the proposed ReFine-RFT framework addresses this through normalized multi-reward optimization.
Multi-modal large language models (MLLMs) exhibit strong general-purpose capabilities, yet still struggle on Fine-Grained Visual Classification (FGVC), a core perception task that requires subtle visual discrimination and is crucial for many real-world applications. A widely adopted strategy for boosting performance on challenging tasks such as math and coding is Chain-of-Thought (CoT) reasoning. However, several prior works have reported that CoT can actually harm performance on visual perception tasks. These studies, though, examine the issue from relatively narrow angles and leave open why CoT degrades perception-heavy performance. We systematically re-examine the role of CoT in FGVC through the lenses of zero-shot evaluation and multiple training paradigms. Across these settings, we uncover a central paradox: the degradation induced by CoT is largely driven by reasoning length, with longer textual reasoning consistently lowering classification accuracy. We term this phenomenon the "Cost of Thinking". Building on this finding, we make two key contributions: (1) MRN, a simple and general plug-and-play normalization method for multi-reward optimization that balances heterogeneous reward signals, and (2) ReFine-RFT, a framework that combines ensemble rewards with MRN to constrain reasoning length while providing dense accuracy-oriented feedback. Extensive experiments demonstrate the effectiveness of our findings and the proposed ReFine-RFT, achieving state-of-the-art performance across FGVC benchmarks. Code and models are available at https://github.com/jiezhu23/ReFine-RFT.
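Neither MRN nor the ReFine-RFT rewards are specified in detail on this page, so the following is only a minimal Python sketch of the general idea behind normalizing heterogeneous reward signals before combining them, assuming a per-reward z-score over a group of rollouts; the function name, the standardization choice, and the toy reward streams are our assumptions, not the paper's implementation.

```python
import numpy as np

def mrn_combine(reward_dict, weights=None, eps=1e-8):
    """Hypothetical sketch of multi-reward normalization.

    reward_dict maps a reward name to an array of shape (num_rollouts,),
    the raw scores that reward assigns to a group of sampled responses.
    Each reward stream is standardized independently so that signals on
    very different scales contribute comparably to the weighted sum.
    """
    names = list(reward_dict)
    weights = weights or {n: 1.0 for n in names}
    total = None
    for n in names:
        r = np.asarray(reward_dict[n], dtype=np.float64)
        z = (r - r.mean()) / (r.std() + eps)  # per-reward z-score over the group
        total = weights[n] * z if total is None else total + weights[n] * z
    return total

# Toy usage: a 0/1 accuracy reward, a 0/1 format reward, and a raw
# token-count length penalty, all for the same four rollouts.
rollout_rewards = {
    "accuracy": [1, 0, 1, 1],
    "format":   [1, 1, 0, 1],
    "length":   [-120, -340, -95, -410],
}
print(mrn_combine(rollout_rewards))
```

The point of the standardization is that a binary accuracy reward and a raw token-count penalty live on very different scales; without some normalization step, one signal can silently dominate the combined objective.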
Community
In this work, we investigate the impact of CoT on Fine-Grained Visual Classification (FGVC), revealing a paradox: the degradation in FGVC performance due to CoT is primarily driven by reasoning length, with longer textual reasoning consistently reducing classification accuracy. We introduce the concept of the "Cost of Thinking" to describe this phenomenon.
Building on this insight, we propose two key contributions: (1) MRN, a normalization method for multi-reward optimization that balances heterogeneous reward signals; and (2) ReFine-RFT, a framework that integrates ensemble rewards with MRN to constrain reasoning length while providing dense, accuracy-oriented feedback. Our extensive experiments across multiple FGVC benchmarks demonstrate the effectiveness of our approach, achieving state-of-the-art performance.
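To make the ensemble idea concrete, here is a hypothetical sketch of two of the heterogeneous signals such a framework might combine: a token-budget length reward and a partial-credit (dense) accuracy reward. The budget, the soft-match heuristic, and all names are illustrative assumptions rather than ReFine-RFT's actual reward functions.

```python
def length_reward(num_reasoning_tokens: int, budget: int = 128) -> float:
    """Assumed length constraint: full reward within the token budget,
    linearly decaying to zero as the reasoning overshoots it."""
    if num_reasoning_tokens <= budget:
        return 1.0
    return max(0.0, 1.0 - (num_reasoning_tokens - budget) / budget)

def dense_accuracy_reward(predicted: str, target: str) -> float:
    """Assumed dense accuracy signal: word overlap with the ground-truth
    class name, so a near-miss label still earns partial credit instead
    of the sparse 0/1 exact-match score."""
    pred, tgt = set(predicted.lower().split()), set(target.lower().split())
    return len(pred & tgt) / max(len(tgt), 1)

# These per-rollout scores would then be balanced by a normalizer such
# as the mrn_combine sketch above before the policy update.
print(dense_accuracy_reward("California Gull", "Glaucous-winged Gull"))  # 0.5
print(length_reward(200))  # 0.4375
```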
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- DiG: Differential Grounding for Enhancing Fine-Grained Perception in Multimodal Large Language Model (2025)
- ReAG: Reasoning-Augmented Generation for Knowledge-based Visual Question Answering (2025)
- More Than the Final Answer: Improving Visual Extraction and Logical Consistency in Vision-Language Models (2025)
- DiVE-k: Differential Visual Reasoning for Fine-grained Image Recognition (2025)
- Look as You Think: Unifying Reasoning and Visual Evidence Attribution for Verifiable Document RAG via Reinforcement Learning (2025)
- CropVLM: Learning to Zoom for Fine-Grained Vision-Language Perception (2025)
- Towards Faithful Reasoning in Comics for Small MLLMs (2026)