---
license: cc-by-nc-4.0
task_categories:
  - text-to-speech
  - automatic-speech-recognition
language:
  - ru
pretty_name: OpenSTT Annotated by Balalaika
---

# OpenSTT Annotated by Balalaika

A curated Russian speech dataset for advanced speech generative tasks.

## Overview

OpenSTT Annotated by Balalaika is a high-quality Russian speech corpus, meticulously filtered and annotated by the lab260 team at MTUCI with the latest version of our pipeline, BALALAIKA.

- **Language:** Russian only
- **Genres:** podcasts, public speech, YouTube, audiobooks, phone calls, TTS, and more
- **Source:** OpenSTT (GitHub link)
- **License:** CC BY-NC 4.0 (same as the original OpenSTT)
- **Total duration after filtering:** 431.43 hours (from over 20,108 hours of raw audio)
- **Format:** Parquet files with split-wise annotation

## Usage

**Primary use cases:**

- Text-to-Speech (TTS) generation
- Automatic Speech Recognition (ASR)
- Analysis of accent, stress, and prosody
- Russian speech technology research

### 1. Download the dataset

### 2. Extract the files

```bash
# Unpack each archive into a directory named after it, then delete the archive
for archive in *.tar.gz; do
    dir="${archive%.tar.gz}"
    mkdir -p "$dir"
    tar -xzvf "$archive" -C "$dir"
    rm "$archive"
done
```
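If you prefer doing the same extraction from Python, it can be sketched with the standard library's `tarfile` module (the function name `extract_archives` is ours, not part of the dataset tooling):

```python
import tarfile
from pathlib import Path


def extract_archives(root: str) -> None:
    """Extract every *.tar.gz under `root` into a folder named after it,
    then remove the archive, mirroring the shell loop above."""
    for archive in Path(root).glob("*.tar.gz"):
        out_dir = archive.with_name(archive.name[: -len(".tar.gz")])
        out_dir.mkdir(parents=True, exist_ok=True)
        with tarfile.open(archive, "r:gz") as tar:
            tar.extractall(out_dir)
        archive.unlink()
```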

### 3. Load data in PyTorch

```python
from pathlib import Path

import pandas as pd
import torchaudio
from torch.utils.data import Dataset


class ParquetConcatDataset(Dataset):
    """Concatenates all per-split parquet annotation files into one index."""

    def __init__(self, parquet_dir, audio_root):
        self.parquet_dir = Path(parquet_dir)
        self.audio_root = Path(audio_root)

        # Sort for a deterministic sample order across runs
        parquet_files = sorted(self.parquet_dir.glob("*.parquet"))
        dfs = [pd.read_parquet(f) for f in parquet_files]
        self.df = pd.concat(dfs, ignore_index=True)

    def __len__(self):
        return len(self.df)

    def __getitem__(self, idx):
        row = self.df.iloc[idx]
        audio_path = self.audio_root / row["filepath"]
        waveform, sample_rate = torchaudio.load(audio_path)
        return {
            "audio_path": str(audio_path),
            "waveform": waveform,
            "sample_rate": sample_rate,
            # NISQA quality predictions
            "nisqa_mos": row["mos_pred"],
            "nisqa_noi": row["noi_pred"],
            "nisqa_dis": row["dis_pred"],
            "nisqa_col": row["col_pred"],
            "nisqa_loud": row["loud_pred"],
            "nisqa_model": row["model"],
            # Diarization flag and text annotations
            "is_single_speaker": bool(row["is_single_speaker"]),
            "accented_text": row["accent"],
            "asr_text": row["rover"],
            "punctuated_text": row["punct"],
            "phonemes": row["phonemes"],
        }


# Example usage
ds = ParquetConcatDataset(
    PATH_TO_PARQUETS_DIR,
    PATH_TO_AUDIO_ROOT,
)
```

- `PATH_TO_PARQUETS_DIR`: path to the folder containing all `.parquet` files with metadata and annotations for the dataset.
- `PATH_TO_AUDIO_ROOT`: path to the root directory containing all audio subfolders and files referenced by the `filepath` column in the metadata.


## Data Processing & Annotation

Our pipeline applies rigorous filtering and enrichment steps:

1. Removed speech segments shorter than 3 seconds
2. Filtered out segments with a NISQA MOS below 4.0 for quality assurance
3. Excluded segments with multiple speakers (via pyannote diarization)
4. Filtered out speech with background music (custom music detector)
5. Revised transcriptions: generated by multiple ASR systems (T-one, GigaAMv2-rnnt, GigaAMv2-ctc, GigaAMv2-ctc-lm, vosk) and fused via ROVER
6. Added punctuation using RuPunct
7. Added stress marks via RuAccent
8. Performed IPA phonemization with our own neural model

All annotation fields are provided separately, for transparency and flexibility.
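As a rough sketch, the first filtering steps can be expressed over the annotation DataFrame like this. Note the assumptions: `duration` is a hypothetical column introduced here for illustration, while `mos_pred` and `is_single_speaker` match the annotation fields used in the loader above.

```python
import pandas as pd


def filter_segments(df: pd.DataFrame) -> pd.DataFrame:
    """Keep segments at least 3 s long, with NISQA MOS >= 4.0,
    and containing a single speaker. `duration` is assumed, not
    a documented dataset column."""
    mask = (
        (df["duration"] >= 3.0)
        & (df["mos_pred"] >= 4.0)
        & df["is_single_speaker"].astype(bool)
    )
    return df[mask].reset_index(drop=True)
```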


## Data Structure

- **Annotation storage:** Parquet files
- **Speech storage:** `.tar.gz` archives with speech segments in `.opus`
- **Splitting:** follows the original OpenSTT splits
- **Annotations:** each sample includes separate fields for:
  - File path
  - Quality metrics: MOS, NOI, DIS, COL, LOUD
  - Model used for quality assessment
  - Transcript with stress marks and punctuation
  - Transcript after ROVER
  - Transcript with punctuation
  - IPA transcription
  - Single-speaker (diarization) flag

## How to Cite

Please cite the following paper if you use this dataset in research:

```bibtex
@misc{borodin2025datacentricframeworkaddressingphonetic,
      title={A Data-Centric Framework for Addressing Phonetic and Prosodic Challenges in Russian Speech Generative Models},
      author={Kirill Borodin and Nikita Vasiliev and Vasiliy Kudryavtsev and Maxim Maslov and Mikhail Gorodnichev and Oleg Rogov and Grach Mkrtchian},
      year={2025},
      eprint={2507.13563},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2507.13563},
}
```


## License

Distributed under CC BY-NC 4.0, matching original OpenSTT terms.