arXiv:2603.11661

Resonate: Reinforcing Text-to-Audio Generation via Online Feedback from Large Audio Language Models

Published on Mar 12

AI-generated summary

Online reinforcement learning with Group Relative Policy Optimization, integrated into Flow Matching-based audio models, achieves superior text-to-audio generation performance compared to offline methods.

Abstract

Reinforcement Learning (RL) has become an effective paradigm for enhancing Large Language Models (LLMs) and visual generative models. However, its application to text-to-audio (TTA) generation remains largely underexplored. Prior work typically employs offline methods such as Direct Preference Optimization (DPO) and leverages Contrastive Language-Audio Pretraining (CLAP) models as reward functions. In this study, we investigate the integration of online Group Relative Policy Optimization (GRPO) into TTA generation. We adapt the algorithm for Flow Matching-based audio models and demonstrate that online RL significantly outperforms its offline counterparts. Furthermore, we incorporate rewards derived from Large Audio Language Models (LALMs), which provide fine-grained scoring signals better aligned with human perception. With only 470M parameters, our final model, Resonate, establishes a new SOTA on TTA-Bench in terms of both audio quality and semantic alignment.
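
To make the core idea concrete, the following is a minimal, illustrative PyTorch sketch of the group-relative advantage and critic-free policy-gradient surrogate at the heart of GRPO. It is not the authors' training code: the function names are hypothetical, the random rewards are stand-ins for CLAP or LALM judge scores, and the exact per-sample log-likelihoods assumed here would, for a Flow Matching model, be replaced by some tractable surrogate.

import torch

def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # Group-relative advantages: for each prompt, standardize the rewards of
    # its G sampled audios, so no learned value function (critic) is needed.
    # rewards: (num_prompts, group_size) scalar reward per sampled audio.
    mean = rewards.mean(dim=1, keepdim=True)
    std = rewards.std(dim=1, keepdim=True)
    return (rewards - mean) / (std + eps)

def grpo_loss(log_probs: torch.Tensor, rewards: torch.Tensor) -> torch.Tensor:
    # Simplified policy-gradient surrogate: -advantage * log pi(audio | text).
    # log_probs: (num_prompts, group_size) log-likelihood of each sample under
    # the current policy (approximated in the Flow Matching setting).
    adv = grpo_advantages(rewards).detach()
    return -(adv * log_probs).mean()

# Toy usage: 2 prompts, a group of 4 generations each.
log_probs = torch.randn(2, 4, requires_grad=True)
rewards = torch.rand(2, 4)  # stand-in for fine-grained LALM judge scores
loss = grpo_loss(log_probs, rewards)
loss.backward()

Because each advantage is computed relative to the other samples in the same group, the reward model only needs to rank generations for the same prompt consistently, which is exactly the kind of fine-grained, per-prompt signal an audio language model judge can supply.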

