---
task_categories:
- text-classification
language:
- en
tags:
- data quality rating
size_categories:
- 1M<n<10M
---
# PRRC Rater Training and Evaluation Dataset
## Dataset Description

This dataset contains the full training and evaluation data for the PRRC rater models described in *Meta-rater: A Multi-dimensional Data Selection Method for Pre-training Language Models*. It is designed for training and benchmarking models that score text along four key quality dimensions: Professionalism, Readability, Reasoning, and Cleanliness.
- **Source**: Subset of SlimPajama-627B, annotated for the PRRC dimensions
- **Purpose**: Supervised training and evaluation of PRRC raters (ModernBERT models)
- **Annotation**: Each sample is labeled by Llama-3.3-70B-Instruct and/or human annotators, then used to fine-tune and benchmark the PRRC raters
## Dataset Statistics
- **Total samples**: ~1M (split into train/dev/test)
- **Quality metrics**: 4 PRRC dimensions (Professionalism, Readability, Reasoning, Cleanliness)
- **Domains**: Diverse (CommonCrawl, C4, GitHub, Books, ArXiv, Wikipedia, StackExchange)
- **Annotation coverage**: 100% of included samples
## PRRC Quality Dimensions
- **Professionalism**: Degree of expertise and prerequisite knowledge required
- **Readability**: Clarity, coherence, and ease of understanding
- **Reasoning**: Complexity of logical reasoning and analytical thinking
- **Cleanliness**: Formatting, completeness, and absence of noise or irrelevant content
Each dimension is rated on a 0–5 integer scale, with detailed prompt criteria provided in the `prompts/` directory of the GitHub repo.
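As an illustration of how the four ratings can be consumed downstream, here is a minimal sketch, assuming the field names shown under Dataset Structure below; the unweighted mean and the 3.0 threshold are illustrative choices for this sketch, not the paper's selection rule:

```python
# Illustrative only: combine the four 0-5 PRRC ratings into one composite
# score and flag high-quality documents. Equal weighting and the threshold
# are assumptions for this sketch, not the paper's method.
PRRC_DIMENSIONS = ("professionalism", "readability", "reasoning", "cleanliness")

def composite_prrc_score(example: dict) -> float:
    """Unweighted mean of the four 0-5 ratings."""
    return sum(example[dim] for dim in PRRC_DIMENSIONS) / len(PRRC_DIMENSIONS)

def is_high_quality(example: dict, threshold: float = 3.0) -> bool:
    """True if the composite score clears the (illustrative) threshold."""
    return composite_prrc_score(example) >= threshold
```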
## Dataset Structure
Each example in the dataset has the following structure:
```python
{
    "id": "unique_document_id",
    "content": "Main text content of the document",
    "source": "domain_name",   # e.g., "arxiv", "github", "wikipedia"
    "professionalism": int,    # 0-5
    "readability": int,        # 0-5
    "reasoning": int,          # 0-5
    "cleanliness": int         # 0-5
}
```
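A quick sanity check over a single record, assuming exactly the fields and 0–5 integer ranges above (`validate_record` is a hypothetical helper for this sketch, not part of the dataset tooling):

```python
REQUIRED_FIELDS = {"id", "content", "source",
                   "professionalism", "readability", "reasoning", "cleanliness"}

def validate_record(record: dict) -> None:
    """Raise if a record deviates from the schema documented above."""
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    for dim in ("professionalism", "readability", "reasoning", "cleanliness"):
        if record[dim] not in range(6):
            raise ValueError(f"{dim} must be an integer in 0-5, got {record[dim]!r}")
```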
## Usage

### Loading the Dataset
```python
from datasets import load_dataset

# Load the full PRRC rater dataset
dataset = load_dataset("opendatalab/Meta-rater-PRRC-Rater-dataset")

# Access splits
train = dataset["train"]
dev = dataset["validation"]
test = dataset["test"]
```
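Building on that, a minimal fine-tuning sketch for a single-dimension rater. It assumes the `answerdotai/ModernBERT-base` checkpoint and frames the 0–5 rating as 6-way classification; the paper's exact hyperparameters and training setup may differ:

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

DIMENSION = "readability"  # any of the four PRRC dimensions

dataset = load_dataset("opendatalab/Meta-rater-PRRC-Rater-dataset")
tokenizer = AutoTokenizer.from_pretrained("answerdotai/ModernBERT-base")

def preprocess(batch):
    enc = tokenizer(batch["content"], truncation=True, max_length=512)
    enc["labels"] = batch[DIMENSION]  # integer rating 0-5 used as class label
    return enc

tokenized = dataset.map(preprocess, batched=True,
                        remove_columns=dataset["train"].column_names)

model = AutoModelForSequenceClassification.from_pretrained(
    "answerdotai/ModernBERT-base", num_labels=6)  # classes 0..5

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="prrc-rater",
                           per_device_train_batch_size=16,
                           num_train_epochs=3),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    tokenizer=tokenizer,  # enables dynamic padding via the default collator
)
trainer.train()
```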
## Applications
- Supervised training of PRRC rater models (e.g., ModernBERT)
- Benchmarking and evaluation of text quality raters (see the scoring sketch after this list)
- Prompt engineering and ablation studies for quality annotation
- Data-centric LLM research: Understanding the impact of different quality dimensions
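For the benchmarking use case, a small sketch of agreement metrics between predicted and gold ratings; exact-match accuracy and mean absolute error are common choices here, though the paper may report different metrics:

```python
import numpy as np

def evaluate_rater(predictions, references):
    """Exact-match accuracy and mean absolute error over 0-5 ratings."""
    predictions = np.asarray(predictions)
    references = np.asarray(references)
    return {
        "accuracy": float((predictions == references).mean()),
        "mae": float(np.abs(predictions - references).mean()),
    }

# e.g., with the fine-tuned model from the sketch above:
# preds = trainer.predict(tokenized["test"]).predictions.argmax(-1)
# print(evaluate_rater(preds, tokenized["test"]["labels"]))
```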
## Annotation Process
1. **Initial annotation**: Llama-3.3-70B-Instruct (and/or human annotators) rates each sample on all four PRRC dimensions using detailed prompts (see the sketch after this list)
2. **Quality control**: Manual review and cleaning
3. **Splitting**: Data is split into train/dev/test for robust evaluation
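A hedged sketch of what the first step could look like with an instruct LLM; the real rubric prompts live in the `prompts/` directory of the GitHub repo, and the single-sentence prompt below is only a stand-in:

```python
# Illustrative annotation step: ask an instruct LLM for a 0-5 rating on one
# dimension. The prompt below is a placeholder, not the project's rubric.
from transformers import pipeline

rater = pipeline("text-generation",
                 model="meta-llama/Llama-3.3-70B-Instruct")  # large; needs multi-GPU

def rate(document: str, dimension: str) -> str:
    messages = [{
        "role": "user",
        "content": (f"Rate the following text for {dimension} on a 0-5 scale. "
                    f"Answer with a single integer.\n\n{document}"),
    }]
    out = rater(messages, max_new_tokens=8)
    return out[0]["generated_text"][-1]["content"]  # assistant's reply
```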
## Citation

If you use this dataset, please cite:
```bibtex
@article{zhuang2025meta,
  title={Meta-rater: A Multi-dimensional Data Selection Method for Pre-training Language Models},
  author={Zhuang, Xinlin and Peng, Jiahui and Ma, Ren and Wang, Yinfan and Bai, Tianyi and Wei, Xingjian and Qiu, Jiantao and Zhang, Chi and Qian, Ying and He, Conghui},
  journal={arXiv preprint arXiv:2504.14194},
  year={2025}
}
```
## License
This dataset is released under the same license as the original SlimPajama dataset. Please refer to the original SlimPajama repository for licensing details.
## Contact
- **Project Lead**: Ren Ma (maren@pjlab.org.cn)
- **Corresponding Author**: Conghui He (heconghui@pjlab.org.cn)
- **Issues**: GitHub Issues
Made with ❤️ by the OpenDataLab team