---
license: cc-by-4.0
task_categories:
- automatic-speech-recognition
- text-to-speech
language:
- tr
tags:
- speech
- audio
- dataset
- tts
- asr
- merged-dataset
size_categories:
- 1K<n<10K
configs:
- config_name: default
  data_files:
  - split: train
    path: "train-*.parquet"
  default: true
dataset_info:
  config_name: default
  features:
  - name: audio
    dtype:
      audio:
        sampling_rate: null
  - name: text
    dtype: string
  - name: speaker_id
    dtype: string
  - name: language
    dtype: string
  - name: emotion
    dtype: string
  - name: original_dataset
    dtype: string
  - name: original_filename
    dtype: string
  - name: start_time
    dtype: float32
  - name: end_time
    dtype: float32
  - name: duration
    dtype: float32
  splits:
  - name: train
    num_examples: 1293
---
# FTTRTEST
FTTRTEST is a merged Turkish speech dataset containing 1293 audio segments drawn from 5 source datasets.
## Dataset Information
- **Total Segments**: 1293
- **Speakers**: 7
- **Languages**: tr
- **Emotions**: happy, angry, neutral
- **Original Datasets**: 5
## Dataset Structure
Each example contains:
- `audio`: Audio file (WAV format, original sampling rate preserved)
- `text`: Transcription of the audio
- `speaker_id`: Unique speaker identifier (made unique across all merged datasets)
- `language`: Language code (always `tr` in this dataset)
- `emotion`: Detected emotion (one of `happy`, `angry`, `neutral` in this dataset)
- `original_dataset`: Name of the source dataset this segment came from
- `original_filename`: Original filename in the source dataset
- `start_time`: Start time of the segment in seconds
- `end_time`: End time of the segment in seconds
- `duration`: Duration of the segment in seconds
## Usage
### Loading the Dataset
```python
from datasets import load_dataset
# Load the dataset
dataset = load_dataset("Codyfederer/fttrtest")
# Access the training split
train_data = dataset["train"]
# Example: Get first sample
sample = train_data[0]
print(f"Text: {sample['text']}")
print(f"Speaker: {sample['speaker_id']}")
print(f"Language: {sample['language']}")
print(f"Emotion: {sample['emotion']}")
print(f"Original Dataset: {sample['original_dataset']}")
print(f"Duration: {sample['duration']}s")
# Play audio (requires audio libraries)
# sample['audio']['array'] contains the audio data
# sample['audio']['sampling_rate'] contains the sampling rate
```
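All metadata fields are ordinary columns, so you can subset the merged data before training. Below is a minimal pure-Python sketch; the rows are invented toy samples with the same keys the real dataset yields, and the filter criteria are just an example:

```python
# Toy rows standing in for real dataset samples (values are invented;
# the keys match the dataset schema documented above).
samples = [
    {"text": "merhaba", "emotion": "happy", "duration": 2.4},
    {"text": "hayir", "emotion": "angry", "duration": 14.0},
    {"text": "evet", "emotion": "neutral", "duration": 1.1},
]

# Example filter: keep short, non-angry segments, e.g. for a TTS fine-tuning subset.
subset = [
    s for s in samples
    if s["duration"] < 10.0 and s["emotion"] != "angry"
]

print([s["text"] for s in subset])  # ['merhaba', 'evet']
```

With the real dataset, the same predicate can be passed to `train_data.filter(...)` from the `datasets` library instead of a list comprehension.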
### Alternative: Load from JSONL
```python
from datasets import Dataset, Audio, Features, Value
import json

# Load the JSONL file
rows = []
with open("data.jsonl", "r", encoding="utf-8") as f:
    for line in f:
        rows.append(json.loads(line))

features = Features({
    "audio": Audio(sampling_rate=None),
    "text": Value("string"),
    "speaker_id": Value("string"),
    "language": Value("string"),
    "emotion": Value("string"),
    "original_dataset": Value("string"),
    "original_filename": Value("string"),
    "start_time": Value("float32"),
    "end_time": Value("float32"),
    "duration": Value("float32"),
})

dataset = Dataset.from_list(rows, features=features)
```
### Repository Files
The repository includes:
- `data.jsonl` - Main dataset file with all columns (JSON Lines)
- `*.wav` - Audio files under `audio_XXX/` subdirectories
- `load_dataset.txt` - Python script for loading the dataset (rename to .py to use)
JSONL keys:
- `audio`: Relative audio path (e.g., `audio_000/segment_000000_speaker_0.wav`)
- `text`: Transcription of the audio
- `speaker_id`: Unique speaker identifier
- `language`: Language code
- `emotion`: Detected emotion
- `original_dataset`: Name of the source dataset
- `original_filename`: Original filename in the source dataset
- `start_time`: Start time of the segment in seconds
- `end_time`: End time of the segment in seconds
- `duration`: Duration of the segment in seconds
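Because `duration` is stored alongside `start_time` and `end_time`, a quick consistency check over the JSONL rows is easy to write. A minimal sketch (the two inline rows are invented examples in the documented format, with the relative-path convention shown above):

```python
import json

# Two example rows in the documented JSONL format (values are invented).
jsonl_text = """\
{"audio": "audio_000/segment_000000_speaker_0.wav", "text": "merhaba", "start_time": 0.0, "end_time": 2.5, "duration": 2.5}
{"audio": "audio_000/segment_000001_speaker_0.wav", "text": "evet", "start_time": 3.0, "end_time": 4.2, "duration": 1.2}
"""

for line in jsonl_text.splitlines():
    row = json.loads(line)
    # duration should equal end_time - start_time, up to float rounding
    assert abs(row["duration"] - (row["end_time"] - row["start_time"])) < 1e-3
```

Running the same loop over `data.jsonl` (reading the file line by line instead of the inline string) flags any segment whose timing metadata is inconsistent.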
## Speaker ID Mapping
Speaker IDs have been made unique across all merged datasets to avoid conflicts.
For example:
- Original Dataset A: `speaker_0`, `speaker_1`
- Original Dataset B: `speaker_0`, `speaker_1`
- Merged Dataset: `speaker_0`, `speaker_1`, `speaker_2`, `speaker_3`
Original dataset information is preserved in the metadata for reference.
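The remapping above can be sketched as follows. This is an illustrative reimplementation, not the actual Vyvo Dataset Builder code, and the dataset names are invented:

```python
def merge_speaker_ids(source_datasets):
    """Assign globally unique speaker IDs across merged datasets.

    source_datasets: mapping of dataset name -> list of original speaker IDs.
    Returns a mapping of (dataset_name, original_id) -> merged_id.
    """
    mapping = {}
    next_id = 0
    for name, speakers in source_datasets.items():
        for spk in speakers:
            mapping[(name, spk)] = f"speaker_{next_id}"
            next_id += 1
    return mapping

# Invented dataset names, mirroring the example above.
mapping = merge_speaker_ids({
    "dataset_a": ["speaker_0", "speaker_1"],
    "dataset_b": ["speaker_0", "speaker_1"],
})
print(mapping[("dataset_b", "speaker_0")])  # speaker_2
```

Keeping the `(dataset, original_id)` pair as the key is what lets `original_dataset` in the metadata recover where each merged speaker came from.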
## Data Quality
This dataset was created using the Vyvo Dataset Builder with:
- Automatic transcription and diarization
- Quality filtering for audio segments
- Music and noise filtering
- Emotion detection
- Language identification
## License
This dataset is released under the Creative Commons Attribution 4.0 International License (CC BY 4.0).
## Citation
```bibtex
@dataset{vyvo_merged_dataset,
  title={FTTRTEST},
  author={Vyvo Dataset Builder},
  year={2025},
  url={https://huggingface.co/datasets/Codyfederer/fttrtest}
}
```
This dataset was created using the Vyvo Dataset Builder tool.