# SAP²-ASR Dataset

This dataset is designed for **SAP² (Speech-Aware Long Context Pruning and Integration)** research in contextualized automatic speech recognition (ASR).

## 📖 Introduction

SAP² is a novel framework for contextualized automatic speech recognition that dynamically prunes and integrates relevant contextual keywords. This method addresses the challenge of leveraging long-context information in domain-specific scenarios (e.g., conference presentations), where extensive OCR-derived textual contexts contain both relevant information and considerable noise.

### Key Features

- **Speech-Aware Context Pruning**: Dynamically filters OCR-derived textual contexts to retain only keywords directly relevant to the speech content
- **Cross-Modal Context Compression**: Uses Speech-Driven Attention-based Pooling to compress extensive textual inputs into concise, speech-relevant context embeddings
- **State-of-the-Art Performance**: Achieves a WER of 7.71% on SlideSpeech and 1.12% on LibriSpeech, with a 41.1% relative improvement in biased-keyword recognition over non-contextual baselines
## 📊 Dataset Structure

This dataset contains two main sub-datasets:

### SlideSpeech

- **Source**: SlideSpeech is a large-scale audio-visual corpus enriched with slides, containing 1,705 videos with 1,000+ hours of audio, including 473 hours of high-quality transcribed speech
- **Data Format**: JSON files containing audio paths and conversation-format messages with contextual keywords
- **Directory Structure**:
  - `slidespeech_L95/`: Original data
  - `slidespeech_L95_filter/`: Filtered data
  - `slidespeech_L95_5slides/`: 5-slide version
  - `slidespeech_L95_multitask/`: Multi-task version
### LibriSpeech

- **Source**: LibriSpeech is a large-scale corpus of read English speech, derived from audiobooks in the LibriVox project
- **Data Format**: JSON files with different configurations for the training, validation, and test sets
- **Directory Structure**:
  - `train-clean-460_*.json`: Training set (clean, 460 hours)
  - `train-other-500_*.json`: Training set (other, 500 hours)
  - `dev-clean_*.json`, `dev-other_*.json`: Validation sets
  - `test-clean_*.json`, `test-other_*.json`: Test sets (various sizes: 100, 500, 1000, 2000 samples)
### Data Format Example

```json
{
  "messages": [
    {
      "role": "user",
      "content": "<audio>/path/to/audio.wav</audio>Transcribe speech to text according to keywords may appear in the utterance. Possible keywords are: <|startofcontext|>keyword1 keyword2 keyword3<|endofcontext|>"
    },
    {
      "role": "assistant",
      "content": "transcribed text"
    }
  ],
  "audios": "/path/to/audio.wav"
}
```

**Key Tokens**:

- `<|startofcontext|>` and `<|endofcontext|>`: Special tokens marking the span of contextual keywords
- `<audio>...</audio>`: Tag wrapping the audio file path
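As a minimal sketch (assuming every user message follows the format shown above), the biasing keywords can be recovered from the text between the two special tokens:

```python
import re

def extract_keywords(content: str) -> list[str]:
    """Return the whitespace-separated keywords between the context tokens."""
    match = re.search(r"<\|startofcontext\|>(.*?)<\|endofcontext\|>", content, re.DOTALL)
    if match is None:
        return []  # message carries no contextual keywords
    return match.group(1).split()

content = (
    "<audio>/path/to/audio.wav</audio>Transcribe speech to text according to "
    "keywords may appear in the utterance. Possible keywords are: "
    "<|startofcontext|>keyword1 keyword2 keyword3<|endofcontext|>"
)
print(extract_keywords(content))  # ['keyword1', 'keyword2', 'keyword3']
```

The `|` characters inside the tokens must be escaped in the regex, since `|` is the alternation operator.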
## 🚀 Usage

### Loading the Dataset

```python
import json

# Load SlideSpeech dataset
with open('slidespeech/slidespeech_L95_filter/train.json', 'r') as f:
    slidespeech_train = json.load(f)

# Load LibriSpeech dataset
with open('librispeech/train-clean-460_filter.json', 'r') as f:
    librispeech_train = json.load(f)
```
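Once loaded, the entries can be turned into (audio path, transcript) pairs. This is a sketch that assumes each entry follows the documented message format, with the reference transcription in the assistant message:

```python
def to_pairs(entries):
    """Map dataset entries to (audio_path, transcript) pairs.

    Assumes the documented format: `audios` holds the audio path and the
    assistant message holds the reference transcription.
    """
    pairs = []
    for entry in entries:
        transcript = next(
            m["content"] for m in entry["messages"] if m["role"] == "assistant"
        )
        pairs.append((entry["audios"], transcript))
    return pairs

# Hypothetical entry matching the documented format:
example = {
    "messages": [
        {"role": "user", "content": "<audio>/path/to/audio.wav</audio>..."},
        {"role": "assistant", "content": "transcribed text"},
    ],
    "audios": "/path/to/audio.wav",
}
print(to_pairs([example]))  # [('/path/to/audio.wav', 'transcribed text')]
```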
### Using with the SAP² Model

For detailed usage instructions and the training and inference code, please refer to:

**🔗 [SAP²-ASR GitHub Repository](https://github.com/jymh/SAP2-ASR.git)**

The repository contains:

- Complete model implementation code
- Training and inference scripts
- Data preprocessing tools
- Evaluation scripts
- Detailed documentation
## 📎 Citation

If you use this dataset in your research, please cite the following paper:

```bibtex
@article{rong2025speechaware,
  title={Speech-Aware Long Context Pruning and Integration for Contextualized Automatic Speech Recognition},
  author={Rong, Yiming and Zhang, Yixin and Wang, Ziyi and Jiang, Deyang and Zhao, Yunlong and Wu, Haoran and Zhou, Shiyu and Xu, Bo},
  journal={arXiv preprint arXiv:2511.11139},
  year={2025}
}
```

**Paper Link**: [https://www.arxiv.org/abs/2511.11139](https://www.arxiv.org/abs/2511.11139)
## 📚 Related Resources

- **Code Repository**: [https://github.com/jymh/SAP2-ASR.git](https://github.com/jymh/SAP2-ASR.git)
- **Paper**: [arXiv:2511.11139](https://www.arxiv.org/abs/2511.11139)
- **SlideSpeech Original Dataset**: [https://slidespeech.github.io/](https://slidespeech.github.io/)
- **LibriSpeech Original Dataset**: [OpenSLR](https://www.openslr.org/12/)
## 🏛 License

Use of this dataset should follow the license requirements of the original datasets. For SlideSpeech and LibriSpeech, please refer to the license information on their original resource pages.
## ⚠️ Notes

1. Audio file paths may need to be adjusted for your local environment.
2. The dataset files are large; ensure you have sufficient storage space.
3. Please read the detailed documentation in the [GitHub repository](https://github.com/jymh/SAP2-ASR.git) carefully before use.
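Since the stored audio paths may not match your environment, here is a hedged sketch of remapping a path prefix in one entry; `OLD_ROOT` and `NEW_ROOT` are hypothetical placeholders for the stored prefix and your local audio directory:

```python
OLD_ROOT = "/path/to"          # prefix stored in the JSON (hypothetical)
NEW_ROOT = "/data/sap2_audio"  # where the audio actually lives (hypothetical)

def remap_paths(entry: dict) -> dict:
    """Rewrite the audio path prefix in one dataset entry."""
    entry = dict(entry)  # shallow copy; avoid mutating the caller's entry
    entry["audios"] = entry["audios"].replace(OLD_ROOT, NEW_ROOT, 1)
    # The same path also appears inside the <audio>...</audio> tag of the user message.
    entry["messages"] = [
        {**m, "content": m["content"].replace(OLD_ROOT, NEW_ROOT, 1)}
        for m in entry["messages"]
    ]
    return entry

example = {
    "messages": [
        {"role": "user", "content": "<audio>/path/to/audio.wav</audio>Transcribe..."}
    ],
    "audios": "/path/to/audio.wav",
}
print(remap_paths(example)["audios"])  # /data/sap2_audio/audio.wav
```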
---

**For more information and usage examples, please visit the [SAP²-ASR GitHub Repository](https://github.com/jymh/SAP2-ASR.git).**