Modalities: Tabular, Text
Formats: parquet
Languages: Russian
Libraries: Datasets, Dask
korallll committed · Commit 5473338 · verified · 1 Parent(s): 74ef2ed

Update README.md

Files changed (1): README.md +63 -12
README.md CHANGED
@@ -28,25 +28,76 @@ pretty_name: OpenSTT annotate by Balalaika
  ## Usage

  **Primary Use Cases:**
- - Speech generative modeling (TTS)
- - Speech recognition
- - Accent, stress, and prosody analysis
- - Russian speech research
+ - Text-to-Speech (TTS) generation
+ - Automatic Speech Recognition (ASR)
+ - Analysis of accent, stress, and prosody
+ - Russian speech technology research

- 1. Download dataset
- 2. Unwrap files:
+ ### 1. Download the dataset

- ```bash
- for a in *.tar.gz; do
- dir="${a%.tar.gz}"
+ ### 2. Extract the files
+
+ ```bash
+ for archive in *.tar.gz; do
+ dir="${archive%.tar.gz}"
  mkdir -p "$dir"
- tar -xzvf "$a" -C "$dir"
- rm "$a"
+ tar -xzvf "$archive" -C "$dir"
+ rm "$archive"
  done
  ```

- ***
+ ### 3. Load data in PyTorch
+
+ ```python
+ from pathlib import Path
+ import pandas as pd
+ from torch.utils.data import Dataset
+ import torchaudio
+
+ class ParquetConcatDataset(Dataset):
+     def __init__(self, parquet_dir, audio_root, parse_fn=None):
+         self.parquet_dir = Path(parquet_dir)
+         self.audio_root = Path(audio_root)
+
+         parquet_files = list(self.parquet_dir.glob("*.parquet"))
+         dfs = [pd.read_parquet(f) for f in parquet_files]
+         self.df = pd.concat(dfs, ignore_index=True)
+
+     def __len__(self):
+         return len(self.df)
+
+     def __getitem__(self, idx):
+         row = self.df.iloc[idx]
+         audio_path = self.audio_root / row["filepath"]
+         waveform, sample_rate = torchaudio.load(audio_path)
+         return {
+             "audio_path": str(audio_path),
+             "waveform": waveform,
+             "sample_rate": sample_rate,
+             "nisqa_mos": row["mos_pred"],
+             "nisqa_noi": row["noi_pred"],
+             "nisqa_dis": row["dis_pred"],
+             "nisqa_col": row["col_pred"],
+             "nisqa_loud": row["loud_pred"],
+             "nisqa_model": row["model"],
+             "is_single_speaker": bool(row["is_single_speaker"]),
+             "accented_text": row["accent"],
+             "asr_text": row["rover"],
+             "punctuated_text": row["punct"],
+             "phonemes": row["phonemes"]
+         }
+
+ # Example usage
+ ds = ParquetConcatDataset(
+     PATH_TO_PARQUETS_DIR,
+     PATH_TO_AUDIO_ROOT
+ )
+ ```

+ `PATH_TO_PARQUETS_DIR`: Path to the folder containing all `.parquet` files with metadata and annotations for the dataset.
+
+ `PATH_TO_AUDIO_ROOT`: Path to the root directory containing all audio subfolders and files referenced by the `filepath` column in the metadata.
+ ***


  ## Data Processing & Annotation
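Note: step 1 of the updated Usage section ("Download the dataset") ships without a command. Below is a minimal sketch, assuming the data is fetched from the Hugging Face Hub with `huggingface_hub`; the repo id and target directory are placeholders, not values taken from the README.

```python
# Hypothetical download sketch -- repo_id is a placeholder and must be replaced
# with the dataset's actual repository identifier.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="<user>/<dataset-name>",   # placeholder
    repo_type="dataset",
    local_dir="./open_stt_balalaika",  # archives and parquet files are saved here
)
```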
103
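The `ParquetConcatDataset` added in step 3 returns per-sample dicts whose waveforms differ in length, so batching needs a collate function. Below is a minimal iteration sketch with `torch.utils.data.DataLoader`; the list-returning `collate_as_list` helper is an assumption for illustration, not part of the committed README.

```python
from torch.utils.data import DataLoader

def collate_as_list(batch):
    # Waveforms have different lengths, so keep the samples as a plain
    # Python list instead of stacking them into a single tensor.
    return batch

loader = DataLoader(
    ds,                       # ParquetConcatDataset from the README example
    batch_size=4,
    shuffle=True,
    num_workers=2,
    collate_fn=collate_as_list,
)

for batch in loader:
    for sample in batch:
        print(sample["audio_path"], sample["waveform"].shape, sample["nisqa_mos"])
    break  # inspect only the first batch
```

Because the metadata lives in `ds.df`, rows can also be filtered before training, e.g. `ds.df = ds.df[ds.df["is_single_speaker"].astype(bool) & (ds.df["mos_pred"] > 3.5)].reset_index(drop=True)` to keep single-speaker clips above an (illustrative) quality threshold.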