
🔥 MERaLiON-3 🔥

🚀 MERaLiON-3-10B

💻 Web Demo | ⚙️ vLLM coming soon

Introduction

We are pleased to announce the release of our flagship speech-text large language model, MERaLiON-3-10B. MERaLiON-3-10B demonstrates competitive performance across benchmark evaluations in Age Recognition, Gender Recognition, Spoken Question Answering (SQA), and Contextual Paralinguistic Question Answering (CPQA) in the Southeast Asian context, comparable to other state-of-the-art AudioLLMs, including Gemini 3 Flash and Qwen3 Omni Instruct. MERaLiON-3-10B also maintains competitive performance vis-à-vis MERaLiON-2-10B on other tasks, such as Multilingual Automatic Speech Recognition (ASR), Speech Translation (ST), audio scene understanding, and general speech comprehension.

We constructed a benchmark containing speech and prompts in Malay, Indonesian, English, Chinese, Tamil, Thai, and Vietnamese to better represent the Southeast Asian context. The following table presents task-specific evaluation scores, assessed using the LLM-as-a-Judge framework across multiple datasets; higher scores indicate better performance. We will open-source these benchmarks separately as part of a paper. See the Performance section for detailed results.

| Benchmark | MERaLiON-3-10B | MERaLiON-2-10B | Qwen3 Omni | Gemini 3 Flash | GPT 4o Audio |
|---|---|---|---|---|---|
| Age (commonvoice-en, ta, th, vi, zh) | 75.41 | 61.77 | 70.38 | 77.00 | 68.90 |
| Gender (Multi-dataset) | 96.67 | 54.19 | 95.34 | 81.72 | 40.25 |
| Spoken Q&A (SQA) | 61.50 | 56.76 | 58.74 | 59.75 | 57.48 |
| Contextual paralinguistic Q&A (CPQA) | 57.33 | 48.31 | 54.21 | 54.07 | 54.54 |

Model Description:

MERaLiON stands for Multimodal Empathetic Reasoning and Learning in One Network, with models tailored for Singapore’s multilingual and multicultural landscape, as well as the wider Southeast Asian region.

MERaLiON-3-10B is fine-tuned on 150,000 hours of speech and audio data across 6 diverse tasks: Automatic Speech Recognition (ASR), SQA, Spoken Dialogue Summarisation (SDS), Audio Captioning (AC), Audio-Scene Question Answering (ASQA), and CPQA.

  • Developed by: I2R, A*STAR, Singapore
  • Model type: Multimodal LLM
  • Language(s): Primarily English (Global and Singapore), Chinese, with support for audio of regional languages including Malay, Tamil, Indonesian, Thai, and Vietnamese.
  • Audio: Mono-channel audio, 16,000 Hz sampling rate, up to 300 seconds.
  • License: MERaLiON Public License
  • Demo: MERaLiON-AudioLLM Web Demo

Performance:

We benchmarked MERaLiON-3-10B against Qwen3 Omni, Gemini 3 Flash, GPT 4o Audio, and MERaLiON-2-10B; it performed best on 44 of 59 benchmarks across age recognition, gender recognition, SQA, and CPQA. MERaLiON-3-10B also maintains competitive performance vis-à-vis MERaLiON-2-10B on the AudioBench benchmarks.

Age recognition

Age recognition tasks categorise speakers as teens (10-19), adults (20-59), or seniors (60-100). The prompts are either in English, or in the same language as the audio. LLM-as-a-judge is used to evaluate the correctness of each response.
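For illustration, here is how the two prompt variants in the table below ("eng" vs "sea" in the Var column) could be expressed with the prompt template described in the Text Prompt section further down. The exact evaluation prompts are not reproduced here, so treat the queries as hedged examples of each variant.

prompt_template = "Instruction: {query} \nFollow the text instruction based on the following audio: <SpeechHere>"

# "eng" variant: the instruction is in English regardless of the audio language.
age_prompt_eng = prompt_template.format(
    query="Is the speaker a teen (10-19), an adult (20-59), or a senior (60-100)?"
)

# "sea" variant: the instruction is in the same language as the audio
# (Chinese example; roughly "Is the speaker a teenager, an adult, or a senior?").
age_prompt_sea = prompt_template.format(
    query="说话者是青少年、成年人还是老年人？"
)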

| Dataset | Lang | Var | MERaLiON-3-10B | MERaLiON-2-10B | Qwen3 Omni | Gemini 3 Flash | GPT 4o Audio |
|---|---|---|---|---|---|---|---|
| Commonvoice | en | eng | 64.30 | 63.10 | 64.20 | 68.00 | 65.00 |
| | | sea | 64.30 | 63.10 | 64.20 | 68.00 | 65.00 |
| | ta | eng | 78.00 | 64.65 | 73.50 | 79.00 | 71.00 |
| | | sea | 58.00 | 47.90 | 48.40 | 78.00 | 62.00 |
| | th | eng | 81.68 | 57.81 | 78.06 | 77.00 | 78.00 |
| | | sea | 76.39 | 42.19 | 64.13 | 84.00 | 53.00 |
| | vi | eng | 91.96 | 73.23 | 84.39 | 81.00 | 86.00 |
| | | sea | 91.48 | 64.35 | 77.67 | 87.00 | 81.00 |
| | zh | eng | 74.30 | 72.40 | 75.60 | 75.00 | 83.00 |
| | | sea | 73.70 | 69.00 | 73.60 | 73.00 | 45.00 |
| Average | | | 75.41 | 61.77 | 70.38 | 77.00 | 68.90 |

Gender recognition

The gender recognition benchmark consists of speech samples in Indonesian, Tamil, Thai, Vietnamese, Chinese, Malay, English, and Khmer. The text prompts are either in English, or in the same language as the audio. LLM-as-a-judge is used to evaluate the correctness of each response.

| Dataset | Lang | Var | MERaLiON-3-10B | MERaLiON-2-10B | Qwen3 Omni | Gemini 3 Flash | GPT 4o Audio |
|---|---|---|---|---|---|---|---|
| commonvoice | id | eng | 97.10 | 45.20 | 96.80 | 86.00 | 46.00 |
| | | sea | 97.20 | 57.30 | 96.10 | 90.00 | 53.93 |
| | ta | eng | 97.40 | 53.00 | 96.80 | 65.00 | 33.00 |
| | | sea | 97.10 | 40.40 | 81.90 | 71.00 | 35.00 |
| | th | eng | 97.86 | 50.07 | 96.92 | 87.00 | 50.00 |
| | | sea | 97.99 | 23.96 | 95.18 | 82.00 | 40.00 |
| | vi | eng | 99.22 | 24.05 | 98.82 | 87.00 | 26.00 |
| | | sea | 99.22 | 14.64 | 96.86 | 88.00 | 35.00 |
| | zh | eng | 98.20 | 53.70 | 98.20 | 89.00 | 49.00 |
| | | sea | 98.30 | 35.50 | 98.10 | 82.00 | 21.00 |
| emota | ta | eng | 100.00 | 67.31 | 99.89 | 83.00 | 25.00 |
| | | sea | 100.00 | 48.93 | 97.65 | 86.00 | 33.00 |
| fleurs | en | eng | 100.00 | 58.27 | 100.00 | 73.00 | 78.00 |
| | | sea | 100.00 | 58.27 | 100.00 | 73.00 | 78.00 |
| | km | eng | 100.00 | 56.60 | 100.00 | 94.00 | 62.00 |
| | | sea | 99.48 | 43.40 | 100.00 | 99.00 | 15.00 |
| indowavesentiment | id | eng | 100.00 | 71.67 | 100.00 | 84.00 | 60.00 |
| | | sea | 100.00 | 60.67 | 100.00 | 88.00 | 14.00 |
| m3ed | zh | eng | 93.30 | 84.30 | 94.30 | 73.00 | 23.00 |
| | | sea | 93.70 | 70.70 | 94.40 | 72.00 | 12.00 |
| openslr | ta | eng | 100.00 | 55.30 | 99.00 | 75.00 | 47.00 |
| | | sea | 100.00 | 37.80 | 87.90 | 81.00 | 36.00 |
| sg streets | en | eng | 100.00 | 89.63 | 100.00 | 87.00 | 32.00 |
| | | sea | 100.00 | 89.63 | 100.00 | 87.00 | 32.00 |
| asr-smaldusc | ms | eng | 99.40 | 52.40 | 98.60 | 97.00 | 76.00 |
| | | sea | 99.40 | 44.00 | 98.80 | 99.00 | 24.00 |
| thai elderly speech | th | eng | 99.09 | 68.15 | 99.29 | 77.00 | 46.00 |
| | | sea | 98.99 | 26.92 | 97.39 | 76.00 | 51.00 |
| thai ser | th | eng | 91.41 | 63.46 | 90.47 | 85.00 | 44.00 |
| | | sea | 91.41 | 61.78 | 89.74 | 76.00 | 34.00 |
| vietnam-celeb | vi | eng | 73.90 | 65.80 | 73.80 | 62.00 | 41.00 |
| | | sea | 73.80 | 61.40 | 74.00 | 61.00 | 36.00 |
| Average | | | 96.67 | 54.19 | 95.34 | 81.72 | 40.25 |

Spoken question and answer (SQA)

The benchmark consists of speech in English, Malay, Tamil, and Chinese, with text prompts in English containing questions related to the speech. As studies have found that LLM judges tend to favor longer, verbose answers even if they are not as clear, high-quality, or accurate as shorter alternatives, we have adjusted the judge's prompt to address verbosity bias.
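As an illustration of how verbosity bias can be addressed, a judge prompt along the following lines instructs the grader to score substance rather than length. This is a hedged sketch, not the exact prompt used in our evaluation.

# Hypothetical sketch of a verbosity-robust LLM-as-a-judge prompt; the exact
# wording used in our evaluation is not reproduced here.
# Usage: judge_prompt.format(question=..., reference=..., answer=...)
judge_prompt = """You are grading a model's answer to a question about a speech clip.

Question: {question}
Reference answer: {reference}
Model answer: {answer}

Score the model answer from 0 to 100 for factual correctness against the
reference. Judge only whether the required information is present and correct.
Do NOT reward longer or more elaborate answers: a short answer that contains
the correct information should score as high as a verbose one, and irrelevant
padding should not raise the score.

Reply with the score only."""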

| Dataset | MERaLiON-3-10B | MERaLiON-2-10B | Qwen3 Omni | Gemini 3 Flash | GPT 4o Audio |
|---|---|---|---|---|---|
| ytb_sqa_batch1 | 67.65 | 65.89 | 66.66 | 63.25 | 60.43 |
| ytb_sqa_batch3_ms | 58.00 | 50.40 | 56.25 | 57.75 | 55.80 |
| ytb_sqa_batch3_ta | 58.55 | 53.60 | 52.25 | 59.45 | 56.25 |
| ytb_sqa_batch3_zh_en | 61.80 | 57.15 | 59.80 | 58.55 | 57.45 |
| Average | 61.50 | 56.76 | 58.74 | 59.75 | 57.48 |

Contextual paralinguistic question and answer (CPQA)

The audio includes both speech and non-speech elements; when no speech is present, LLMs are expected to reason solely from acoustic or musical cues. The speech samples are in Chinese, Malay, Tamil, English, code-switched mixtures of these languages, or dialects such as Hokkien. To test robustness in instruction following, the text prompts were designed to be diverse and were written in any of the following languages: English, Malay, Tamil, Indonesian, Vietnamese, Chinese, or Thai. LLMs are expected to reply in the same language as the text prompt, as in the example below. As with SQA, we adjusted the judge's prompt to address verbosity bias.
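For example, a CPQA item may pair Hokkien speech with a Malay text prompt and expect a Malay reply. A hedged illustration using the prompt template; the Malay instruction below (roughly "What is the emotion of the speaker in this audio?") is our own example, not an actual benchmark item.

# Hypothetical CPQA-style prompt; the model is expected to answer in Malay,
# the language of the instruction.
prompt_template = "Instruction: {query} \nFollow the text instruction based on the following audio: <SpeechHere>"
cpqa_prompt = prompt_template.format(query="Apakah emosi penutur dalam audio ini?")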

| Dataset | MERaLiON-3-10B | MERaLiON-2-10B | Qwen3 Omni | Gemini 3 Flash | GPT 4o Audio |
|---|---|---|---|---|---|
| yx_youtube_zh | 59.40 | 50.18 | 57.27 | 54.67 | 54.79 |
| yx_youtube_codeswitch | 61.80 | 47.36 | 55.56 | 59.40 | 60.32 |
| yx_youtube_dialect | 59.20 | 47.72 | 56.36 | 55.36 | 54.92 |
| yx_youtube_ms | 60.40 | 46.16 | 53.88 | 57.00 | 56.36 |
| yx_youtube_ta | 58.40 | 38.88 | 49.60 | 56.60 | 54.64 |
| yx_youtube_en | 58.64 | 51.60 | 56.76 | 53.52 | 52.88 |
| ytb_short_eval_cpqa_human1 | 54.64 | 47.57 | 53.95 | 47.42 | 49.97 |
| ytb_short_eval_cpqa_llm1 | 59.42 | 56.25 | 56.07 | 54.94 | 52.44 |
| ytb_long_eval_cpqa_llm1 | 60.46 | 57.48 | 57.44 | 54.94 | 56.32 |
| ytb_long_eval_cpqa_human1 | 60.94 | 51.33 | 59.21 | 56.34 | 55.00 |
| Emotional-YTB-MY_zh_30_test_CPQA_v1 | 51.81 | 46.81 | 51.22 | 51.07 | 53.41 |
| Emotional-YTB-MY_ms_30_test_CPQA_v1 | 50.40 | 44.82 | 48.79 | 49.12 | 53.01 |
| Emotional-YTB-MY_ta_test_CPQA_v1 | 49.77 | 41.88 | 48.62 | 52.56 | 54.96 |
| Average | 57.33 | 48.31 | 54.21 | 54.07 | 54.54 |

Automatic Speech Recognition (ASR), instruction following and audio understanding

MERaLiON-3-10B continues to demonstrate competitive performance in ASR, instruction following, and audio understanding compared to MERaLiON-2-10B, with improvements on most AudioBench metrics. Please visit the AudioBench benchmark for dataset-level evaluation results.

| Benchmark | MERaLiON-3-10B | MERaLiON-2-10B | MERaLiON-2-10B-ASR | MERaLiON-2-3B |
|---|---|---|---|---|
| ASR (lower is better) | 0.125 | 0.1485 | 0.1332 | 0.1697 |
| Speech Instruction | 76.90 | 70.20 | 13.40 | 19.10 |
| Audio Scene Question Answering | 56.98 | 51.14 | 49.51 | 46.14 |
| Spoken QA (Singlish) | 67.25 | 66.55 | 61.85 | 59.70 |
| Audio Captioning | 38.31 | 35.60 | 34.47 | 33.24 |
| Spoken Dialogue Summarisation | 56.45 | 53.10 | 55.80 | 48.55 |
| Spoken QA (English) | 83.42 | 79.74 | 73.98 | 68.72 |
| Music Understanding | 76.07 | 63.94 | 60.66 | 55.60 |
| Accent Recognition | 57.47 | 41.82 | 47.79 | 60.05 |
| Speech Translation | 28.83 | 27.39 | 28.54 | 22.13 |

How to Use

Out-of-scope use: This model is not intended for tool calling, math, or coding tasks.

MERaLiON-3 requires transformers version 4.56.2:

pip install transformers==4.56.2
pip install librosa

To run on a GPU, MERaLiON-3 requires flash-attn.

pip install flash-attn --no-build-isolation

Should you face any difficulties installing the above packages, you can try installing them inside the Docker container pytorch/pytorch:2.5.1-cuda12.1-cudnn9-devel instead, whose CUDA and torch environments have been tested to work.
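For example, assuming your machine has NVIDIA drivers and the NVIDIA Container Toolkit set up, you could start an interactive session in that container with:

docker run --gpus all -it --rm pytorch/pytorch:2.5.1-cuda12.1-cudnn9-devel bash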

Audio Input

  • For ASR tasks, we suggest a maximum audio length of 30 seconds at a 16,000 Hz sampling rate.
  • For general speech and audio understanding tasks, we tested audio lengths of up to 300 seconds at a 16,000 Hz sampling rate. See the loading sketch below.
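A minimal loading sketch consistent with these constraints, assuming librosa is installed:

import librosa

# Load mono audio resampled to 16,000 Hz, as expected by the model.
audio_array, sample_rate = librosa.load("/path/to/your/audio/file", sr=16000, mono=True)

# For ASR-style prompts, keep clips to at most 30 seconds; general speech and
# audio understanding was tested with clips of up to 300 seconds.
max_seconds = 30
audio_array = audio_array[: max_seconds * sample_rate]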

Text Prompt

MERaLiON-3 is trained with this prompt template:

Instruction: <TextHere> \nFollow the text instruction based on the following audio: <SpeechHere>

We generally recommend following this template: replace <TextHere> with your text instruction and leave <SpeechHere> untouched. We list a few useful example prompts here:

Standard prompts for better accuracy

prompt_template = "Instruction: {query} \nFollow the text instruction based on the following audio: <SpeechHere>"

transcription_prompt = prompt_template.format(query="Please transcribe this speech.")
translation_prompt = prompt_template.format(query="Please translate the speech into Malay")
summarization_prompt = prompt_template.format(query="Please summarize this speech")
audio_captioning_prompt_1 = prompt_template.format(query="Please describe the audio")
audio_captioning_prompt_2 = prompt_template.format(query="Please create a caption for the audio")
audio_scene_understanding_prompt = prompt_template.format(query="Are there people crying in the audio?")
speech_as_instruction_prompt = prompt_template.format(query="Please respond to the audio") # use when the audio clip itself contains a spoken instruction.
emotion_recognition_prompt_1 = prompt_template.format(query="What is the emotion of the speaker")
emotion_recognition_prompt_2 = prompt_template.format(query="Describe the paralinguistic features of the audio")
gender_recognition_prompt = prompt_template.format(query="What is the gender of the speaker")

More flexible prompts for enriched responses

prompt_template = "Instruction: {query} \nFollow the text instruction based on the following audio: <SpeechHere>"

prompt_1 = prompt_template.format(query="Describe the paralinguistic features and return in JSON format.")
prompt_2 = prompt_template.format(query="Please summarize the content of the speech and analyse the paralinguistic features of this audio. Return in JSON format.")
prompt_3 = prompt_template.format(query="Please translate this speech to Singapore's 4 official languages.")

AI agent prompts (beyond the default prompt template)

prompt_1 = \
"""
You are MERaLiON-AudioLLM, an empathic AI assistant developed by A*STAR. MERaLiON stands for Multimodal Empathetic Reasoning and Learning in One Network.
You are a friendly and empathetic conversational partner, proficient in understanding human emotions, accents, and genders from paralinguistic features.
Maintain a tone that is warm, non-judgmental, and supportive while replying to the user.

User's voice:  <SpeechHere>
"""

Hugging Face Inference on CPU

import librosa
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor

repo_id = "MERaLiON/MERaLiON-3-10B"

processor = AutoProcessor.from_pretrained(
    repo_id,
    trust_remote_code=True,
)
model = AutoModelForSpeechSeq2Seq.from_pretrained(
    repo_id,
    use_safetensors=True,
    trust_remote_code=True,
)

prompt_template = "Instruction: {query} \nFollow the text instruction based on the following audio: <SpeechHere>"
transcribe_prompt = "Please transcribe this speech."
translate_prompt = "Can you please translate this speech into written Chinese?"

# batch inference of 2 samples
conversation = [
    [{"role": "user", "content": prompt_template.format(query=transcribe_prompt)}],
    [{"role": "user", "content": prompt_template.format(query=translate_prompt)}],
]

chat_prompt = processor.tokenizer.apply_chat_template(
    conversation=conversation,
    tokenize=False,
    add_generation_prompt=True
)

# Use mono audio sampled at 16,000 Hz.
audio_array, sample_rate = librosa.load("/path/to/your/audio/file", sr=16000)
audio_array = [audio_array] * 2  # duplicate the same clip for the 2-sample batch
inputs = processor(text=chat_prompt, audios=audio_array)

# adjust the `max_new_tokens` based on your use case.
# Please note the inclusion of `no_repeat_ngram_size=6`.
outputs = model.generate(**inputs, max_new_tokens=256, no_repeat_ngram_size=6)
generated_ids = outputs[:, inputs['input_ids'].size(1):]
response = processor.batch_decode(generated_ids, skip_special_tokens=True)
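response is a list with one decoded string per conversation in the batch, in the same order as conversation above:

# response[0] holds the transcription, response[1] the Chinese translation.
for text in response:
    print(text)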

Hugging Face Inference on GPU

import torch
import librosa
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor

repo_id = "MERaLiON/MERaLiON-3-10B"
device = "cuda"

processor = AutoProcessor.from_pretrained(
    repo_id,
    trust_remote_code=True,
)
model = AutoModelForSpeechSeq2Seq.from_pretrained(
    repo_id,
    use_safetensors=True,
    trust_remote_code=True,
    attn_implementation="flash_attention_2",
    torch_dtype=torch.bfloat16
).to(device)

prompt_template = "Instruction: {query} \nFollow the text instruction based on the following audio: <SpeechHere>"
transcribe_prompt = "Please transcribe this speech."
translate_prompt = "Can you please translate this speech into written Chinese?"

# batch inference of 2 samples
conversation = [
    [{"role": "user", "content": prompt_template.format(query=transcribe_prompt)}],
    [{"role": "user", "content": prompt_template.format(query=translate_prompt)}],
]

chat_prompt = processor.tokenizer.apply_chat_template(
    conversation=conversation,
    tokenize=False,
    add_generation_prompt=True
)

# Use mono audio sampled at 16,000 Hz.
audio_array, sample_rate = librosa.load("/path/to/your/audio/file", sr=16000)
audio_array = [audio_array] * 2  # duplicate the same clip for the 2-sample batch
inputs = processor(text=chat_prompt, audios=audio_array)

inputs = inputs.to(device, dtype=torch.bfloat16)

# adjust the `max_new_tokens` based on your use case.
# Please note the inclusion of `no_repeat_ngram_size=6`.
outputs = model.generate(**inputs, max_new_tokens=256, no_repeat_ngram_size=6)
generated_ids = outputs[:, inputs['input_ids'].size(1):]
response = processor.batch_decode(generated_ids, skip_special_tokens=True)
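For ASR on recordings longer than the suggested 30 seconds, one simple approach is to split the audio into 30-second chunks and transcribe them as a single batch. A hedged sketch reusing model, processor, prompt_template, and transcribe_prompt from the GPU example above; note that cutting at fixed boundaries can split words mid-utterance, so treat this as a starting point rather than a production recipe:

# Hypothetical long-audio ASR via fixed 30-second chunking.
chunk_size = 30 * 16000  # 30 seconds at 16,000 Hz
long_audio, _ = librosa.load("/path/to/your/long/audio/file", sr=16000)
chunks = [long_audio[i : i + chunk_size] for i in range(0, len(long_audio), chunk_size)]

conversation = [
    [{"role": "user", "content": prompt_template.format(query=transcribe_prompt)}]
    for _ in chunks
]
chat_prompt = processor.tokenizer.apply_chat_template(
    conversation=conversation,
    tokenize=False,
    add_generation_prompt=True
)
inputs = processor(text=chat_prompt, audios=chunks).to(device, dtype=torch.bfloat16)

outputs = model.generate(**inputs, max_new_tokens=256, no_repeat_ngram_size=6)
generated_ids = outputs[:, inputs["input_ids"].size(1):]
transcript = " ".join(processor.batch_decode(generated_ids, skip_special_tokens=True))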

⚠️ Disclaimer

The current MERaLiON-3 has not been specifically aligned for safety and may generate content that is inappropriate, offensive, or harmful. Developers and users are responsible for performing their own safety fine-tuning and implementing necessary security measures. The authors shall not be held liable for any claims, damages, or other liabilities arising from the use of the released models, weights, or code.

Compute and Infrastructure

MERaLiON-3 was trained on the ASPIRE 2A+ Supercomputer Cluster, provided by the National Supercomputing Centre (NSCC), Singapore. The ASPIRE 2A+ cluster provides multiple H100 nodes, each equipped with 8 NVIDIA H100 GPUs, 2 TB of RAM, and 30 TB of locally attached NVMe storage. The nodes are interconnected via a rail-optimised, full fat-tree topology using 400 Gb/s NDR InfiniBand. Additionally, the cluster incorporates a 2.5 PB SSD-based Lustre file system, linked to the H100 nodes through high-speed InfiniBand connections.

With a global batch size of 768, we trained the current release of MERaLiON-3 for around 200k steps, which took around 2 days using 16 nodes (128 H100 GPUs).

📚 Citation

If you find our work useful, please cite our papers:

MERaLiON-AudioLLM: Bridging Audio and Language with Large Language Models
AudioBench: A Universal Benchmark for Audio Large Language Models
Advancing Singlish Understanding: Bridging the Gap with Datasets and Multimodal Models
MoWE-Audio: Multitask AudioLLMs with Mixture of Weak Encoders
MERaLiON-TextLLM: Cross-Lingual Understanding of Large Language Models in Chinese, Indonesian, Malay, and Singlish

@misc{he2024meralionaudiollmtechnicalreport,
      title={MERaLiON-AudioLLM: Bridging Audio and Language with Large Language Models}, 
      author={{MERaLiON Team}},
      year={2024},
      eprint={2412.09818},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2412.09818}, 
}
@article{wang2024audiobench,
      title={AudioBench: A Universal Benchmark for Audio Large Language Models},
      author={Wang, Bin and Zou, Xunlong and Lin, Geyu and Sun, Shuo and Liu, Zhuohan and Zhang, Wenyu and Liu, Zhengyuan and Aw, AiTi and Chen, Nancy F},
      journal={NAACL},
      year={2025}
}
@article{wang2025advancing,
      title={Advancing Singlish Understanding: Bridging the Gap with Datasets and Multimodal Models},
      author={Wang, Bin and Zou, Xunlong and Sun, Shuo and Zhang, Wenyu and He, Yingxu and Liu, Zhuohan and Wei, Chengwei and Chen, Nancy F and Aw, AiTi},
      journal={arXiv preprint arXiv:2501.01034},
      year={2025}
}
@article{zhang2024mowe,
      title={MoWE-Audio: Multitask AudioLLMs with Mixture of Weak Encoders},
      author={Zhang, Wenyu and Sun, Shuo and Wang, Bin and Zou, Xunlong and Liu, Zhuohan and He, Yingxu and Lin, Geyu and Chen, Nancy F and Aw, Ai Ti},
      journal={ICASSP},
      year={2025}
}
@misc{huang2025meraliontextllmcrosslingualunderstandinglarge,
      title={MERaLiON-TextLLM: Cross-Lingual Understanding of Large Language Models in Chinese, Indonesian, Malay, and Singlish}, 
      author={Xin Huang and Tarun Kumar Vangani and Minh Duc Pham and Xunlong Zou and Bin Wang and Zhengyuan Liu and Ai Ti Aw},
      year={2025},
      eprint={2501.08335},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2501.08335}, 
}