Commit ed60dc1 (verified) by lmolino, parent 0277b87

Update README.md

Files changed (1): README.md (+378, -3). The previous README contained only the license front matter (`license: cc-by-sa-4.0`); the full dataset card below replaces it.

---
license: cc-by-sa-4.0
task_categories:
- translation
- text-generation
language:
- es
- en
tags:
- machine-translation
- parallel-corpus
- spanish-english
- domain-specific
- legal-administrative
- biomedical
- heritage
size_categories:
- 10M<n<100M
---

# Dataset Card for ALIA Parallel Translation Corpus

This corpus comprises **35,753,765 domain-specific parallel segments** (Spanish-English) designed for training and evaluating machine translation models in specialized domains. The corpus includes three main domains: **Legal-Administrative**, **Biomedical**, and **Heritage**, carefully curated to support document-level and multi-paragraph translation tasks beyond traditional sentence-level approaches.

## Table of Contents
- [Dataset Card for ALIA Parallel Translation Corpus](#dataset-card-for-alia-parallel-translation-corpus)
  - [Table of Contents](#table-of-contents)
  - [Dataset Details](#dataset-details)
    - [Dataset Description](#dataset-description)
    - [Dataset Sources](#dataset-sources)
    - [Uses](#uses)
  - [Dataset Structure](#dataset-structure)
    - [Data Instances](#data-instances)
    - [Data Fields](#data-fields)
    - [Data Statistics](#data-statistics)
    - [Example Usage](#example-usage)
  - [Dataset Creation](#dataset-creation)
    - [Source Data](#source-data)
    - [Data Collection and Processing](#data-collection-and-processing)
    - [Annotations](#annotations)
    - [Personal and Sensitive Information](#personal-and-sensitive-information)
  - [Considerations for Using the Data](#considerations-for-using-the-data)
    - [Social Impact of Dataset](#social-impact-of-dataset)
    - [Discussion of Biases](#discussion-of-biases)
    - [Other Known Limitations](#other-known-limitations)

## Dataset Details

### Dataset Description

The **ALIA Parallel Translation Corpus** is an extensive collection of Spanish-English parallel texts spanning three specialized domains: Legal-Administrative, Biomedical, and Heritage. With **35,753,765 parallel segments** totaling approximately **69.7 GB**, this corpus was developed as part of the ALIA project's machine translation activity to improve Spanish-English translation quality through continual pre-training and domain adaptation of language models.

The corpus prioritizes document-level and multi-paragraph translation contexts, moving beyond traditional sentence-level approaches. Each segment is identified by domain through a systematic ID prefix system:
- **00-XX-XXXXXX**: Biomedical domain (IBECS: 01, MedlinePlus: 02, PubMed: 03)
- **01-XX-XXXXXX**: Heritage domain
- **02-XX-XXXXXX**: Legal-Administrative domain (EURLEX: 01, EUROPAT: 02, UNPC: 03)

- **Curated by:** SINAI Research Group (Intelligent Systems for Information Access), Universidad de Jaén, through the Center for Advanced Studies in Information and Communication Technologies (CEATIC).
- **Funded by:** Ministerio para la Transformación Digital y de la Función Pública, funded by the EU (NextGenerationEU), within the framework of the project "Desarrollo de Modelos ALIA".
- **Language(s) (NLP):** es (Spanish), en (English)
- **License:** [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/)

### Dataset Sources

- **Repository:** [ALIA Project - SINAI](https://github.com/sinai-uja/ALIA-UJA)

### Uses

The primary purpose of this corpus is to serve as a foundation for training and evaluating machine translation models specialized in Spanish-English translation for specific domains, with applications in:

- Training and fine-tuning large language models (LLMs) for domain-specific machine translation
- Continual pre-training for domain adaptation of translation models
- Evaluating translation quality using multiple metrics (BLEU, chrF++, COMET, COMET-Kiwi, TER, BLEURT, MetricX, MetricX-QE); see the sketch after this list
- Document-level and multi-paragraph translation research
- Comparative analysis of translation performance across specialized domains
- Benchmarking machine translation systems in legal, biomedical, and heritage contexts

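As a minimal, illustrative sketch of the reference-based metrics named above (assuming `sacrebleu` is installed; COMET, BLEURT, and MetricX each require their own packages), BLEU and chrF++ can be computed over system outputs and the corpus's English references. The hypothesis and reference strings below are invented examples, not corpus content.

```python
# Score hypothetical system outputs against English references with sacrebleu.
from sacrebleu.metrics import BLEU, CHRF

references = [
    "Although cervical cancer is a preventable disease, its burden remains substantial.",
]
hypotheses = [
    "Although cervical cancer can be prevented, its burden is still considerable.",
]

bleu = BLEU()
chrf_pp = CHRF(word_order=2)  # word_order=2 corresponds to chrF++

print(bleu.corpus_score(hypotheses, [references]))
print(chrf_pp.corpus_score(hypotheses, [references]))
```
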
## Dataset Structure

### Data Instances

Each instance in the corpus has the following structure:

```json
{
  "id": "000327881267",
  "text_es": "Análisis de costo-utilidad de la vacunación contra el virus del papiloma humano y el cribado cervical del paciente con cáncer de cuello uterino en Indonesia...",
  "text_en": "Although cervical cancer is a preventable disease, the clinical and economic burdens of cervical cancer are still substantial issues in Indonesia..."
}
```

### Data Fields

- **id** (string): Unique identifier following the domain prefix system:
  - First 2 digits: Domain code (00=Biomedical, 01=Heritage, 02=Legal-Administrative)
  - Next 2 digits: Source code within domain (see ID System below)
  - Remaining digits: Sequential segment identifier
- **text_es** (string): Source text in Spanish
- **text_en** (string): Target text in English

**ID System by Domain:**

| Domain | Prefix | Source | Source Code |
|--------|--------|--------|-------------|
| Biomedical | 00 | IBECS | 01 |
| Biomedical | 00 | MedlinePlus | 02 |
| Biomedical | 00 | PubMed | 03 |
| Heritage | 01 | PCI | - |
| Legal-Administrative | 02 | EURLEX | 01 |
| Legal-Administrative | 02 | EUROPAT | 02 |
| Legal-Administrative | 02 | UNPC | 03 |

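As a small illustration of this ID scheme, the helper below simply encodes the tables above; it is not part of the dataset, and it assumes IDs are plain digit strings with the first two digits giving the domain and the next two the source.

```python
# Illustrative decoder for segment IDs, based on the prefix tables in this card.
DOMAINS = {"00": "Biomedical", "01": "Heritage", "02": "Legal-Administrative"}
SOURCES = {
    ("00", "01"): "IBECS", ("00", "02"): "MedlinePlus", ("00", "03"): "PubMed",
    ("02", "01"): "EURLEX", ("02", "02"): "EUROPAT", ("02", "03"): "UNPC",
}

def decode_id(segment_id: str) -> dict:
    domain_code, source_code = segment_id[:2], segment_id[2:4]
    if domain_code == "01":
        source = "PCI"  # Heritage has a single source and no source code
    else:
        source = SOURCES.get((domain_code, source_code), "Unknown")
    return {
        "domain": DOMAINS.get(domain_code, "Unknown"),
        "source": source,
        "sequence": segment_id[4:],
    }

print(decode_id("000327881267"))
# {'domain': 'Biomedical', 'source': 'PubMed', 'sequence': '27881267'}
```
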
### Data Statistics

The complete dataset contains **35,753,765 parallel segments**:

| Metric | Value |
|--------|-------|
| Total Instances | 35,753,765 |
| Size (Memory) | 69,756.33 MB |
| Columns | 3 |

**Domain Distribution** (by ID prefix):

| Domain | ID Prefix | Primary Sources |
|--------|-----------|-----------------|
| Biomedical | 00-XX-XXXXXX | IBECS, MedlinePlus, PubMed |
| Heritage | 01-XX-XXXXXX | PCI |
| Legal-Administrative | 02-XX-XXXXXX | EURLEX, EUROPAT, UNPC |

*Note: Exact domain distribution to be confirmed through ID prefix analysis; a sketch for estimating it on a streamed sample follows below.*

**Segment Length Characteristics:**

| Metric | Spanish (text_es) | English (text_en) |
|--------|-------------------|-------------------|
| Shortest segment | 7 characters | 7 characters |
| Average segment length | ~800 characters | ~900 characters |
| Longest segments | >3,000 characters | >3,000 characters |

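Since the exact domain distribution is left to ID-prefix analysis, both it and the length figures above can be spot-checked on a streamed sample without downloading the full ~70 GB. This is only a sketch: the sample size is arbitrary, and if the corpus is stored grouped by domain, a head-of-stream sample will not be representative unless the shards are shuffled first.

```python
# Estimate domain shares and average character lengths from a streamed sample.
from collections import Counter
from itertools import islice

from datasets import load_dataset

stream = load_dataset("sinai-uja/ALIA-parallel-translation", streaming=True, split="train")
# Optional: shuffle shard order first, e.g. stream = stream.shuffle(seed=42, buffer_size=10_000)

domain_names = {"00": "Biomedical", "01": "Heritage", "02": "Legal-Administrative"}
counts, es_chars, en_chars, n = Counter(), 0, 0, 0

for example in islice(stream, 100_000):  # arbitrary sample size
    counts[domain_names.get(example["id"][:2], "Unknown")] += 1
    es_chars += len(example["text_es"])
    en_chars += len(example["text_en"])
    n += 1

print(dict(counts))
print(f"avg text_es length: {es_chars / n:.0f} chars, avg text_en length: {en_chars / n:.0f} chars")
```
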
### Example Usage

To load the dataset:

```python
from datasets import load_dataset

# Load the complete dataset
data = load_dataset("sinai-uja/ALIA-parallel-translation", trust_remote_code=True)

# Load with streaming (recommended for this large corpus)
data = load_dataset("sinai-uja/ALIA-parallel-translation", trust_remote_code=True, streaming=True)

# Process in streaming mode
for example in data['train']:
    print(f"ID: {example['id']}")
    print(f"Spanish: {example['text_es'][:100]}...")
    print(f"English: {example['text_en'][:100]}...")
    break
```

Example of filtering by domain:

```python
from datasets import load_dataset

# Load with streaming
dataset = load_dataset("sinai-uja/ALIA-parallel-translation", streaming=True, split="train")

# Filter biomedical domain (ID starts with '00')
biomedical = dataset.filter(lambda x: x['id'].startswith('00'))

# Filter legal-administrative domain (ID starts with '02')
legal = dataset.filter(lambda x: x['id'].startswith('02'))

# Filter heritage domain (ID starts with '01')
heritage = dataset.filter(lambda x: x['id'].startswith('01'))

# Filter by specific source (e.g., PubMed: '0003')
pubmed = dataset.filter(lambda x: x['id'].startswith('0003'))

# Filter by specific source (e.g., EURLEX: '0201')
eurlex = dataset.filter(lambda x: x['id'].startswith('0201'))

# Example: Process first 1000 biomedical samples
count = 0
for example in biomedical:
    # Your processing here
    count += 1
    if count >= 1000:
        break
```
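
Depending on the installed `datasets` version, capping the stream can also be done with `itertools.islice` or `IterableDataset.take()` instead of a manual counter; a brief sketch (the filter is the same as above):

```python
from itertools import islice

from datasets import load_dataset

dataset = load_dataset("sinai-uja/ALIA-parallel-translation", streaming=True, split="train")
biomedical = dataset.filter(lambda x: x['id'].startswith('00'))

# Cap the filtered stream at 1,000 examples without a manual counter
for example in islice(biomedical, 1000):
    pass  # your processing here

# Recent versions of `datasets` also expose IterableDataset.take():
# biomedical_sample = biomedical.take(1000)
```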

Example of batch processing:

```python
from datasets import load_dataset

# Load full dataset (requires ~70GB RAM)
data = load_dataset("sinai-uja/ALIA-parallel-translation")

# Access by index
example = data['train'][0]
print(f"ID: {example['id']}")
print(f"Spanish: {example['text_es'][:200]}...")
print(f"English: {example['text_en'][:200]}...")

# Get domain statistics
biomedical_count = sum(1 for ex in data['train'] if ex['id'].startswith('00'))
heritage_count = sum(1 for ex in data['train'] if ex['id'].startswith('01'))
legal_count = sum(1 for ex in data['train'] if ex['id'].startswith('02'))

print(f"Biomedical: {biomedical_count:,}")
print(f"Heritage: {heritage_count:,}")
print(f"Legal-Administrative: {legal_count:,}")
```

## Dataset Creation

### Source Data

The corpus integrates parallel texts from multiple authoritative sources across three specialized domains:

**Biomedical Domain (ID prefix: 00-XX-XXXXXX)**
- **IBECS (00-01-XXXXXX)**: Spanish bibliographic index of health sciences journal articles
- **MedlinePlus (00-02-XXXXXX)**: Trusted health information from the U.S. National Library of Medicine
- **PubMed (00-03-XXXXXX)**: Biomedical literature abstracts and articles from international journals

**Heritage Domain (ID prefix: 01-XX-XXXXXX)**
- **PCI**: Intangible Cultural Heritage (Patrimonio Cultural Inmaterial) documentation

**Legal-Administrative Domain (ID prefix: 02-XX-XXXXXX)**
- **EURLEX (02-01-XXXXXX)**: European Union legislation, regulations, and legal documents
- **EUROPAT (02-02-XXXXXX)**: European Patent Office documentation and technical patent descriptions
- **UNPC (02-03-XXXXXX)**: United Nations Parallel Corpus including resolutions, reports, and official documents

All data come from official, publicly accessible, and authoritative sources in their respective domains.

### Data Collection and Processing

The corpus was compiled from publicly available parallel texts from official and authoritative sources. The data collection focused on three specialized domains to support domain-specific machine translation research. Each source was assigned a systematic ID prefix to enable domain identification and filtering.

Quality control procedures included:
- Reformatting of corpus structure for consistency (particularly EURLEX)
- Removal of noisy or poorly aligned segments
- Deduplication of exact matches (illustrated in the sketch after this list)
- Validation of parallel alignment at the segment level

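The deduplication step can be illustrated as follows. This is not the project's published pipeline, only a generic sketch that treats two segments as duplicates when their (text_es, text_en) pair matches exactly; the helper name `dedup_exact` is invented for the example.

```python
# Generic exact-match deduplication over (text_es, text_en) pairs (illustrative only).
import hashlib

def dedup_exact(segments):
    """Yield segments whose (text_es, text_en) pair has not been seen before."""
    seen = set()
    for seg in segments:
        key = hashlib.sha1(
            (seg["text_es"] + "\x1f" + seg["text_en"]).encode("utf-8")
        ).hexdigest()
        if key not in seen:
            seen.add(key)
            yield seg

segments = [
    {"id": "a", "text_es": "Hola mundo.", "text_en": "Hello world."},
    {"id": "b", "text_es": "Hola mundo.", "text_en": "Hello world."},  # exact duplicate
]
print(list(dedup_exact(segments)))  # only the first occurrence is kept
```
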
The final corpus is stored in Parquet format (Apache Arrow columnar storage), optimized for efficient access and processing at scale.

### Annotations

This dataset contains **no manual annotations**. All content consists of naturally parallel texts from authoritative bilingual sources:

**Structural Metadata:**
- **Domain labels**: Automatically assigned based on source corpus and encoded in the ID prefix
- **Source identification**: Embedded in the ID structure for provenance tracking
- **Alignment level**: Varies by source (sentence, paragraph, or document level)

The corpus preserves the original parallel structure as published by official sources, without additional interpretive layers.

### Personal and Sensitive Information

The corpus has been subjected to cleaning processes to remove sensitive or identifiable information in accordance with data protection regulations. Documents come from public official and scientific sources. Domain-specific considerations:

**Biomedical Domain:**
- Patient information is de-identified in accordance with HIPAA and GDPR standards
- Research subjects appear only in aggregate statistical form
- Names of researchers, physicians, and institutions appear in published scientific literature

**Legal-Administrative Domain:**
- Names of public officials, legislators, and judges in official contexts
- References to public institutions and government organizations
- Patent inventor names (as required by patent law)
- Legal case references with participant anonymization where applicable

**Heritage Domain:**
- Names of cultural practitioners, artists, and heritage experts in official documentation
- References to communities and geographical locations

**User Responsibility:** Users are advised to apply additional privacy controls depending on the specific use case, particularly for applications involving personal data processing or sensitive domains (medical diagnosis, legal advice).

## Considerations for Using the Data

### Social Impact of Dataset

This corpus represents a significant advance in democratizing access to domain-specific machine translation resources for the Spanish-English language pair. It contributes to:

- **Improved Access to Specialized Information**: Facilitating cross-lingual access to legal, biomedical, and heritage documentation for researchers, professionals, and citizens
- **Research Advancement**: Providing standardized large-scale resources for evaluating document-level translation approaches
- **National AI Strategy**: Supporting Spain's strategic objective of developing foundational AI models in Spanish with ethical and transparency standards through the ALIA project
- **Reduced Language Barriers**: Enabling better communication in critical domains like healthcare, law, patent documentation, and cultural preservation
- **Professional Tool Development**: Supporting the creation of specialized translation tools for legal professionals, medical translators, and heritage workers
- **Multilingual Science**: Facilitating Spanish-language participation in international scientific discourse

### Discussion of Biases

The corpus reflects inherent biases from its source materials and domains:

**Domain-Specific Biases:**

**Biomedical Domain:**
- Predominantly reflects Western medical perspectives and research traditions
- Over-representation of clinical research from high-income countries
- Potential under-representation of traditional or alternative medical practices
- English source texts may reflect Anglo-American medical terminology

**Legal-Administrative Domain:**
- Reflects primarily EU and UN institutional language and legal frameworks
- May not represent all legal traditions, particularly non-Western systems
- Patent documentation biased toward European and international patent systems
- Administrative language reflects specific bureaucratic conventions

**Heritage Domain:**
- Limited by availability of digitized and translated heritage documentation
- Possible over-representation of officially recognized heritage over grassroots practices
- May under-represent certain cultural perspectives or minority communities
- Selection bias toward heritage deemed worthy of official documentation

**Language Biases:**
- **Spanish Varieties**: European Spanish may be over-represented compared to Latin American varieties, particularly in EU and PubMed sources
- **Register**: Formal and technical register dominates across all domains
- **Terminology**: Technical terminology may reflect specific translation conventions from source institutions
- **Translation Direction**: Some sources may be originally in English with Spanish translations, potentially affecting naturalness

**Temporal Biases:**
- More recent documents are better represented due to digitization availability
- Historical terminology evolution may not be fully captured
- Contemporary issues and concepts may be over-represented

**Socioeconomic Biases:**
- Sources primarily from institutional and governmental contexts
- May under-represent perspectives from developing regions
- Professional and academic language dominates over colloquial usage

### Other Known Limitations

**Data Quality:**
- **OCR Errors**: Historical documents may contain optical character recognition errors
- **Translation Quality**: Original translation quality varies by source and may not always meet professional standards
- **Alignment Precision**: Some segments may have approximate rather than exact alignment
- **Formatting Artifacts**: Residual formatting issues from document conversion processes

**Temporal Coverage:**
- Coverage varies significantly by source
- More complete for recent years (2000-2025) than historical periods
- Some domains have better temporal distribution than others

**Domain Specificity:**
- Vocabulary is limited to three specialized domains
- Does not generalize to other Spanish-English translation tasks (e.g., news, social media, conversational)
- Technical terminology may be too specialized for general-purpose translation

**Text-Level Variability:**
- Not all sources provide consistent document-level segmentation
- Some sources artificially segment continuous documents
- Sentence-level alignments predominate despite the document-level emphasis

**Alignment Granularity:**
- While document-level translation is prioritized, many sources only provide sentence-level alignments
- Mixed granularity across sources may affect training consistency

**Heritage Domain Limitations:**
- Smallest domain by volume
- May benefit from additional data collection or augmentation
- Limited coverage of certain heritage types or regions

**Source Diversity:**
- Some domains dominated by specific sources (e.g., UNPC in legal-administrative)
- Uneven distribution across source types
- Potential for domain-specific overfitting during training

---

**Contact:** [ALIA Project](https://www.alia.gob.es/) - [SINAI Research Group](https://sinai.ujaen.es) - [Universidad de Jaén](https://www.ujaen.es/)

**More Information:** [SINAI Research Group](https://sinai.ujaen.es) | [ALIA-UJA Project](https://github.com/sinai-uja/ALIA-UJA)