---
task_categories:
- text-classification
language:
- en
tags:
- data quality rating
size_categories:
- 1M<n<10M
---
# PRRC Rater Training and Evaluation Dataset
## Dataset Description
This dataset contains the full training and evaluation data for the PRRC rater models described in [Meta-rater: A Multi-dimensional Data Selection Method for Pre-training Language Models](https://arxiv.org/abs/2504.14194). It is designed for training and benchmarking models that score text along four key quality dimensions: **Professionalism, Readability, Reasoning, and Cleanliness**.
- **Source**: Subset of SlimPajama-627B, annotated for PRRC dimensions
- **Purpose**: Supervised training and evaluation of PRRC raters (ModernBERT models)
- **Annotation**: Each sample is labeled by Llama-3.3-70B-Instruct and/or human annotators, then used to fine-tune and benchmark PRRC raters
## Dataset Statistics
- **Total samples**: ~1M (split into train/dev/test)
- **Quality metrics**: 4 PRRC dimensions (Professionalism, Readability, Reasoning, Cleanliness)
- **Domains**: Diverse (CommonCrawl, C4, GitHub, Books, ArXiv, Wikipedia, StackExchange)
- **Annotation coverage**: 100% of included samples
## PRRC Quality Dimensions
- **Professionalism**: Degree of expertise and prerequisite knowledge required
- **Readability**: Clarity, coherence, and ease of understanding
- **Reasoning**: Complexity of logical reasoning and analytical thinking
- **Cleanliness**: Formatting, completeness, and absence of noise/irrelevant content
Each dimension is rated on a 0–5 scale, with detailed prompt criteria provided in the [prompts/](./prompts/) directory of the GitHub repo.
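For illustration, the four ratings can be combined into a single scalar. Note that the unweighted mean below is purely a demonstration; Meta-rater learns how to weight the dimensions, as described in the paper.

```python
# Illustrative helper only: Meta-rater learns how to weight the dimensions;
# an unweighted mean is shown here purely for demonstration.
PRRC_DIMS = ["professionalism", "readability", "reasoning", "cleanliness"]

def mean_prrc_score(example: dict) -> float:
    """Average the four 0-5 PRRC ratings of one dataset example."""
    return sum(example[d] for d in PRRC_DIMS) / len(PRRC_DIMS)

print(mean_prrc_score({"professionalism": 4, "readability": 5,
                       "reasoning": 3, "cleanliness": 5}))  # 4.25
```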
## Dataset Structure
Each example in the dataset has the following structure:
```python
{
    "id": "unique_document_id",
    "content": "Main text content of the document",
    "source": "domain_name",    # e.g., "arxiv", "github", "wikipedia", etc.
    "professionalism": int,     # 0-5
    "readability": int,         # 0-5
    "reasoning": int,           # 0-5
    "cleanliness": int          # 0-5
}
```
## Usage
### Loading the Dataset
```python
from datasets import load_dataset

# Load the full PRRC rater dataset
dataset = load_dataset("opendatalab/Meta-rater-PRRC-Rater-dataset")

# Access splits
train = dataset["train"]
dev = dataset["validation"]
test = dataset["test"]
```
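The splits can then be inspected and filtered with the standard `datasets` API. For example, to keep only training samples rated at least 4 on every dimension (the threshold is arbitrary, chosen here for illustration):

```python
# Peek at one record (fields follow the structure documented above)
print(train[0]["content"][:200])

# Keep only samples rated >= 4 on all four PRRC dimensions
high_quality = train.filter(
    lambda ex: min(ex["professionalism"], ex["readability"],
                   ex["reasoning"], ex["cleanliness"]) >= 4
)
print(f"{len(high_quality)} high-quality training samples")
```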
## Applications
- **Supervised training** of PRRC rater models (e.g., ModernBERT); a fine-tuning sketch follows this list
- **Benchmarking** and evaluation of text quality raters
- **Prompt engineering** and ablation studies for quality annotation
- **Data-centric LLM research**: Understanding the impact of different quality dimensions
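As a concrete example of the first application, here is a minimal fine-tuning sketch for a single-dimension rater. It assumes the public `answerdotai/ModernBERT-base` checkpoint and treats the 0-5 score as a regression target; the hyperparameters are illustrative, not the paper's training recipe.

```python
# Minimal sketch (not the authors' exact recipe): fine-tune ModernBERT to
# regress one PRRC dimension, here "professionalism".
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

dataset = load_dataset("opendatalab/Meta-rater-PRRC-Rater-dataset")
tokenizer = AutoTokenizer.from_pretrained("answerdotai/ModernBERT-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "answerdotai/ModernBERT-base",
    num_labels=1,                 # single output -> MSE regression
    problem_type="regression",
)

def preprocess(example):
    enc = tokenizer(example["content"], truncation=True, max_length=512)
    enc["labels"] = float(example["professionalism"])  # 0-5 target
    return enc

tokenized = dataset.map(preprocess,
                        remove_columns=dataset["train"].column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="prrc-professionalism-rater",
                           per_device_train_batch_size=16,
                           num_train_epochs=3),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    tokenizer=tokenizer,          # enables dynamic padding in the collator
)
trainer.train()
```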
## Annotation Process
- **Initial annotation**: Llama-3.3-70B-Instruct (and/or human) rates each sample for all four PRRC dimensions using detailed prompts
- **Quality control**: Manual review and cleaning; a simple label-distribution check is sketched after this list
- **Splitting**: Data is split into train/dev/test for robust evaluation
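A quick way to sanity-check the released labels is to look at the score distribution per dimension. The check below is a simple sketch, not part of the original quality-control pipeline.

```python
from collections import Counter
from datasets import load_dataset

dataset = load_dataset("opendatalab/Meta-rater-PRRC-Rater-dataset")
for dim in ["professionalism", "readability", "reasoning", "cleanliness"]:
    counts = Counter(dataset["train"][dim])  # score -> frequency
    print(dim, dict(sorted(counts.items())))
```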
## Citation
If you use this dataset, please cite:
```bibtex
@article{zhuang2025meta,
  title={Meta-rater: A Multi-dimensional Data Selection Method for Pre-training Language Models},
  author={Zhuang, Xinlin and Peng, Jiahui and Ma, Ren and Wang, Yinfan and Bai, Tianyi and Wei, Xingjian and Qiu, Jiantao and Zhang, Chi and Qian, Ying and He, Conghui},
  journal={arXiv preprint arXiv:2504.14194},
  year={2025}
}
```
## License
This dataset is released under the same license as the original SlimPajama dataset. Please refer to the original SlimPajama repository for licensing details.
## Contact
- **Project Lead**: Ren Ma (maren@pjlab.org.cn)
- **Corresponding Author**: Conghui He (heconghui@pjlab.org.cn)
- **Issues**: [GitHub Issues](https://github.com/opendatalab/Meta-rater/issues)
---
**Made with ❤️ by the OpenDataLab team**