---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: file_name
    dtype: string
  - name: id
    dtype: string
  - name: language
    dtype: string
  - name: language_score
    dtype: float64
  - name: source
    dtype: string
  - name: subsource
    dtype: string
  - name: text
    dtype: string
  - name: fineweb-nemotron-edu-score
    dtype: float32
  - name: fineweb-mixtral-edu-score
    dtype: float32
  - name: alignment-score
    dtype: float32
  - name: fasttext-quality-score
    dtype: float32
  - name: manipulative-score
    dtype: float32
  - name: gec-score
    dtype: float32
  - name: __index_level_0__
    dtype: int64
  - name: median-score
    dtype: float64
  - name: formula-score
    dtype: float64
  - name: fineweb-nemotron-edu-score-int
    dtype: int64
  - name: fineweb-mixtral-edu-score-int
    dtype: int64
  - name: alignment-score-int
    dtype: int64
  - name: fasttext-quality-score-int
    dtype: int64
  - name: manipulative-score-int
    dtype: int64
  - name: gec-score-int
    dtype: int64
  - name: max-score-int
    dtype: int64
  splits:
  - name: train
    num_bytes: 152867030506
    num_examples: 16702287
  download_size: 69984117765
  dataset_size: 152867030506
license: cc-by-4.0
task_categories:
- text-generation
language:
- uk
tags:
- ukrainian
- pretraining
pretty_name: Lapa High Quality Pretraining Dataset
---

# Dataset Card for Lapa High Quality Pretraining Dataset

## Dataset Description

**Dataset Summary**

This dataset is a high-quality subset of a Ukrainian pretraining corpus. It was filtered using six models, each measuring a different quality aspect of the data:

- `lapa-llm/alignment-score-model` - Alignment: filtering out disinformation
- `lapa-llm/gec-score-model` - Grammatical correctness of the text
- `lapa-llm/fineweb-nemotron-edu-score` - Educational value of the text
- `lapa-llm/fineweb-mixtral-edu-score` - Educational value of the text
- `lapa-llm/manipulative-score-model` - How manipulative the text is
- `lapa-llm/fasttext-quality-score` - Text coherence (how close the text is to Reddit ELI5-style explanations)

All models are available in this collection: https://huggingface.co/collections/lapa-llm/lapa-v012-pretraining

We apply CDF binning to each classifier's scores, take the maximum binned score across classifiers (max ensembling), and select only the data from the highest-performing bucket; that bucket is this dataset. An illustrative sketch of this procedure is given in the appendix at the end of this card.

An additional measure, `formula-score`, combines most of these scores into a single metric:

```python
import numpy as np

def formula_score(item):
    # Median of the three content-quality scores, scaled by the
    # alignment, manipulativeness, and grammatical-correctness scores.
    item["formula-score"] = np.median([
        item["fineweb-nemotron-edu-score"],
        item["fineweb-mixtral-edu-score"],
        item["fasttext-quality-score"],
    ]) * item["alignment-score"] * item["manipulative-score"] * item["gec-score"]
    return item
```

This provides a balanced measure of quality across all classifiers.

**Languages**

- Ukrainian (uk)

## Dataset Creation

**Source Data**

- Base datasets: Kobza, FinePDFs, FineWeb, UberText

## Considerations for Using the Data

**Social Impact**

This dataset aims to strengthen the Ukrainian-language LLM ecosystem and to improve the accessibility of language technology for Ukrainian speakers.

## Citation

**BibTeX**

TBD

## License

CC-BY-4.0

---

*This dataset is part of "Lapa", a Ukrainian LLM initiative to advance natural language processing for the Ukrainian language.*
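## Appendix: Selection Sketch

The card does not specify the binning granularity or the exact ensembling code, so the following is a minimal, illustrative sketch of CDF binning followed by max ensembling, assuming a pandas DataFrame holding the score columns listed in the features. The helper names `cdf_bin` and `select_top_bucket` and the choice of ten equal-mass buckets are assumptions, not the authors' implementation; the integer columns it produces mirror the `*-score-int` and `max-score-int` features above.

```python
import numpy as np
import pandas as pd

# Score columns as named in the dataset features.
SCORE_COLUMNS = [
    "fineweb-nemotron-edu-score",
    "fineweb-mixtral-edu-score",
    "alignment-score",
    "fasttext-quality-score",
    "manipulative-score",
    "gec-score",
]

def cdf_bin(scores: pd.Series, n_bins: int = 10) -> pd.Series:
    """Map raw scores to integer buckets via the empirical CDF.

    Each score is replaced by its percentile rank and then discretized
    into n_bins equal-mass buckets (0 = lowest, n_bins - 1 = highest).
    The bucket count of 10 is an assumption for illustration.
    """
    ranks = scores.rank(method="average", pct=True)  # empirical CDF in (0, 1]
    return np.minimum((ranks * n_bins).astype(int), n_bins - 1)

def select_top_bucket(df: pd.DataFrame, n_bins: int = 10) -> pd.DataFrame:
    df = df.copy()
    # CDF-bin every classifier score into an integer bucket.
    for col in SCORE_COLUMNS:
        df[f"{col}-int"] = cdf_bin(df[col], n_bins)
    # Max ensembling: take the best bucket any classifier assigns,
    # then keep only documents that reach the top bucket.
    df["max-score-int"] = df[[f"{col}-int" for col in SCORE_COLUMNS]].max(axis=1)
    return df[df["max-score-int"] == n_bins - 1]
```

Under this reading, a document survives the filter if at least one of the six classifiers places it in its top bucket.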