---
license: cc-by-nc-4.0
task_categories:
- text-to-speech
- automatic-speech-recognition
language:
- ru
pretty_name: OpenSTT Annotated by Balalaika
---
# OpenSTT Annotated by Balalaika

A curated Russian speech dataset for advanced speech generative tasks.
## Overview
OpenSTT Annotated by Balalaika is a high-quality Russian speech corpus, meticulously filtered and annotated by the lab260 team at MTUCI with the latest version of our pipeline, BALALAIKA.
- Language: Russian only
- Genres: Podcasts, public speech, YouTube, audiobooks, phone calls, TTS, and more
- Source: OpenSTT (GitHub link)
- License: CC BY-NC 4.0 (same as original OpenSTT)
- Total Duration After Filtering: 431.43 hours (from over 20,108 hours raw)
- Format: Parquet files with split-wise annotation
## Usage

**Primary Use Cases:**
- Text-to-Speech (TTS) generation
- Automatic Speech Recognition (ASR)
- Analysis of accent, stress, and prosody
- Russian speech technology research
**1. Download the dataset**
**2. Extract the files**

```bash
for archive in *.tar.gz; do
  dir="${archive%.tar.gz}"
  mkdir -p "$dir"
  tar -xzvf "$archive" -C "$dir"
  rm "$archive"
done
```
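To sanity-check the extraction loop without downloading the real archives, you can run it against a fabricated shard in a throwaway temp directory (`shard_000` is an invented name; the real archives follow the OpenSTT split names):

```shell
set -e
cd "$(mktemp -d)"

# Fabricate a small archive standing in for one dataset shard.
mkdir shard_000
printf 'fake opus bytes' > shard_000/clip.opus
tar -czf shard_000.tar.gz shard_000
rm -r shard_000

# The loop from step 2: one directory per archive, archive removed afterwards.
for archive in *.tar.gz; do
  dir="${archive%.tar.gz}"
  mkdir -p "$dir"
  tar -xzf "$archive" -C "$dir"
  rm "$archive"
done

# Each archive is now a directory of the same name.
test -f shard_000/shard_000/clip.opus && echo "ok"
```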
**3. Load data in PyTorch**

```python
from pathlib import Path

import pandas as pd
import torchaudio
from torch.utils.data import Dataset


class ParquetConcatDataset(Dataset):
    """Concatenates all per-split Parquet annotation files into one dataset."""

    def __init__(self, parquet_dir, audio_root, parse_fn=None):
        self.parquet_dir = Path(parquet_dir)
        self.audio_root = Path(audio_root)
        self.parse_fn = parse_fn  # optional hook to post-process each item
        parquet_files = sorted(self.parquet_dir.glob("*.parquet"))
        if not parquet_files:
            raise FileNotFoundError(f"No .parquet files found in {self.parquet_dir}")
        dfs = [pd.read_parquet(f) for f in parquet_files]
        self.df = pd.concat(dfs, ignore_index=True)

    def __len__(self):
        return len(self.df)

    def __getitem__(self, idx):
        row = self.df.iloc[idx]
        audio_path = self.audio_root / row["filepath"]
        waveform, sample_rate = torchaudio.load(audio_path)
        item = {
            "audio_path": str(audio_path),
            "waveform": waveform,
            "sample_rate": sample_rate,
            # NISQA quality predictions
            "nisqa_mos": row["mos_pred"],
            "nisqa_noi": row["noi_pred"],
            "nisqa_dis": row["dis_pred"],
            "nisqa_col": row["col_pred"],
            "nisqa_loud": row["loud_pred"],
            "nisqa_model": row["model"],
            "is_single_speaker": bool(row["is_single_speaker"]),
            # text annotations
            "accented_text": row["accent"],
            "asr_text": row["rover"],
            "punctuated_text": row["punct"],
            "phonemes": row["phonemes"],
        }
        return self.parse_fn(item) if self.parse_fn else item


# Example usage
ds = ParquetConcatDataset(
    PATH_TO_PARQUETS_DIR,
    PATH_TO_AUDIO_ROOT,
)
```
- `PATH_TO_PARQUETS_DIR`: path to the folder containing all `.parquet` files with metadata and annotations for the dataset.
- `PATH_TO_AUDIO_ROOT`: path to the root directory containing all audio subfolders and files referenced by the `filepath` column in the metadata.
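Because all annotations live in the Parquet metadata, you can select a stricter subset (for example, a higher NISQA MOS cutoff) with pandas before ever touching the audio. A minimal sketch; the small inline DataFrame and its values are invented stand-ins for the real concatenated metadata, but the column names match the loader above:

```python
import pandas as pd

# Stand-in for pd.concat(pd.read_parquet(f) for f in parquet_files);
# the real metadata carries the same columns.
df = pd.DataFrame({
    "filepath": ["a.opus", "b.opus", "c.opus"],
    "mos_pred": [4.1, 4.6, 4.8],
    "is_single_speaker": [True, True, False],
})

# Keep only single-speaker clips above a stricter quality cutoff.
subset = df[(df["mos_pred"] >= 4.5) & df["is_single_speaker"]]
print(list(subset["filepath"]))  # → ['b.opus']
```

The filtered frame can then be assigned to `ds.df` (or applied inside `__init__`) so that `__getitem__` only ever loads audio that passed the cut.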
## Data Processing & Annotation
Our pipeline applies rigorous filtering and enrichment steps:
- Removed speech segments shorter than 3 seconds
- Removed segments with NISQA MOS below 4.0 for quality assurance
- Excluded segments with multiple speakers (via pyannote diarization)
- Filtered out speech with background music (custom music detector)
- Revised transcriptions: hypotheses from multiple ASR systems (T-one, GigaAMv2-rnnt, GigaAMv2-ctc, GigaAMv2-ctc-lm, vosk), fused via ROVER
- Punctuation added using RuPunct
- Stress marks added via RuAccent
- IPA phonemization performed with our own neural model
All annotation fields are handled and provided separately for transparency and flexibility.
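The ROVER step above combines the five recognizers' outputs by voting. As a toy illustration of the voting idea only (real ROVER first aligns hypotheses through a word transition network, which this sketch skips by assuming equal-length, position-aligned hypotheses; the example strings are invented):

```python
from collections import Counter


def vote_fuse(hypotheses):
    """Position-wise majority vote over equal-length ASR hypotheses."""
    token_rows = [h.split() for h in hypotheses]
    assert len({len(r) for r in token_rows}) == 1, "toy version needs aligned lengths"
    fused = []
    for column in zip(*token_rows):
        # most_common(1) picks the majority token; ties break by insertion order
        fused.append(Counter(column).most_common(1)[0][0])
    return " ".join(fused)


print(vote_fuse([
    "привет мир сегодня",
    "привет мир сегодня",
    "привед мир сигодня",
]))  # → привет мир сегодня
```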
## Data Structure
- Annotation storage: Parquet files
- Speech storage: `.tar.gz` archives with speech segments in `.opus` format
- Splitting: follows the OpenSTT splits
- Annotations: each sample includes separate fields for:
  - File path
  - Quality metrics: MOS, NOI, DIS, COL, LOUD
  - Model used for quality assessment
  - Transcript with stresses and punctuation
  - Transcript after ROVER fusion
  - Transcript with punctuation
  - IPA transcription
  - Single-speaker diarization flag
## How to Cite
Please cite the following paper if you use this dataset in research:
```bibtex
@misc{borodin2025datacentricframeworkaddressingphonetic,
      title={A Data-Centric Framework for Addressing Phonetic and Prosodic Challenges in Russian Speech Generative Models},
      author={Kirill Borodin and Nikita Vasiliev and Vasiliy Kudryavtsev and Maxim Maslov and Mikhail Gorodnichev and Oleg Rogov and Grach Mkrtchian},
      year={2025},
      eprint={2507.13563},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2507.13563},
}
```
## Contact
- Telegram: @korallll_ai
- Email: k.n.borodin@mtuci.ru
## Links

- Balalaika annotation pipeline
- Other datasets annotated by BALALAIKA
- Custom models' inference implementation
- Paper (arXiv)
- OpenSTT repository
- NISQA
- pyannote diarization
- T-one
- GigaAMv2-rnnt, GigaAMv2-ctc, GigaAMv2-ctc-lm
- vosk
- RuPunct
- RuAccent
## License
Distributed under CC BY-NC 4.0, matching original OpenSTT terms.