
Dataset Card for ALIA Parallel Translation Corpus

This corpus comprises 35,753,765 domain-specific parallel segments (Spanish-English) designed for training and evaluating machine translation models in specialized domains. The corpus covers three domains (Legal-Administrative, Biomedical, and Heritage) and is carefully curated to support document-level and multi-paragraph translation tasks beyond traditional sentence-level approaches.

Dataset Details

Dataset Description

The ALIA Parallel Translation Corpus is an extensive collection of Spanish-English parallel texts spanning three specialized domains: Legal-Administrative, Biomedical, and Heritage. With 35,753,765 parallel segments totaling approximately 69.7 GB, this corpus was developed as part of the ALIA project's machine translation activity to improve Spanish-English translation quality through continual pre-training and domain adaptation of language models.

The corpus prioritizes document-level and multi-paragraph translation contexts, moving beyond traditional sentence-level approaches. Each segment is identified by domain through a systematic ID prefix system:

  • 00-XX-XXXXXX: Biomedical domain (IBECS: 01, MedlinePlus: 02, PubMed: 03)

  • 01-XX-XXXXXX: Heritage domain

  • 02-XX-XXXXXX: Legal-Administrative domain (EURLEX: 01, EUROPAT: 02, UNPC: 03)

  • Curated by: SINAI Research Group (Intelligent Systems for Information Access) - Universidad de Jaén, through the Center for Advanced Studies in Information and Communication Technologies (CEATIC).

  • Funded by: Ministerio para la Transformación Digital y de la Función Pública and the EU (NextGenerationEU), within the framework of the project Desarrollo de Modelos ALIA.

  • Language(s) (NLP): es (Spanish), en (English)

  • License: CC BY-SA 4.0

Dataset Sources

Uses

The primary purpose of this corpus is to serve as a foundation for training and evaluating machine translation models specialized in Spanish-English translation for specific domains, with applications in:

  • Training and fine-tuning large language models (LLMs) for domain-specific machine translation
  • Continual pre-training for domain adaptation of translation models
  • Evaluating translation quality using multiple metrics (BLEU, chrF++, COMET, COMET-Kiwi, TER, BLEURT, MetricX, MetricX-QE)
  • Document-level and multi-paragraph translation research
  • Comparative analysis of translation performance across specialized domains
  • Benchmarking machine translation systems in legal, biomedical, and heritage contexts

Dataset Structure

Data Instances

Each instance in the corpus has the following structure:

{
    "id": "000327881267",
    "text_es": "Análisis de costo-utilidad de la vacunación contra el virus del papiloma humano y el cribado cervical del paciente con cáncer de cuello uterino en Indonesia...",
    "text_en": "Although cervical cancer is a preventable disease, the clinical and economic burdens of cervical cancer are still substantial issues in Indonesia..."
}

Data Fields

  • id (string): Unique identifier following the domain prefix system:
    • First 2 digits: Domain code (00=Biomedical, 01=Heritage, 02=Legal-Administrative)
    • Next 2 digits: Source code within domain (see ID System below)
    • Remaining digits: Sequential segment identifier
  • text_es (string): Source text in Spanish
  • text_en (string): Target text in English

ID System by Domain:

| Domain               | Prefix | Source      | Source Code |
|----------------------|--------|-------------|-------------|
| Biomedical           | 00     | IBECS       | 01          |
| Biomedical           | 00     | MedlinePlus | 02          |
| Biomedical           | 00     | PubMed      | 03          |
| Heritage             | 01     | PCI         | -           |
| Legal-Administrative | 02     | EURLEX      | 01          |
| Legal-Administrative | 02     | EUROPAT     | 02          |
| Legal-Administrative | 02     | UNPC        | 03          |
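
The prefix scheme above can be decoded with a small helper. The mappings below are taken directly from the table; the function name and return format are illustrative, not part of the dataset itself:

```python
# Decode an ALIA segment ID into domain, source, and sequence number,
# following the prefix scheme from the table above. Illustrative helper,
# not shipped with the dataset.

DOMAINS = {"00": "Biomedical", "01": "Heritage", "02": "Legal-Administrative"}
SOURCES = {
    ("00", "01"): "IBECS",
    ("00", "02"): "MedlinePlus",
    ("00", "03"): "PubMed",
    ("02", "01"): "EURLEX",
    ("02", "02"): "EUROPAT",
    ("02", "03"): "UNPC",
}

def decode_id(segment_id: str) -> dict:
    """Split an ID like '000327881267' into its three components."""
    domain_code, source_code = segment_id[:2], segment_id[2:4]
    return {
        "domain": DOMAINS.get(domain_code, "Unknown"),
        # Heritage (01) has a single source, PCI, with no source code listed
        "source": "PCI" if domain_code == "01"
                  else SOURCES.get((domain_code, source_code), "Unknown"),
        "sequence": segment_id[4:],
    }
```

For the example instance shown later in this card, `decode_id("000327881267")` yields the Biomedical domain with source PubMed.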

Data Statistics

The complete dataset contains 35,753,765 parallel segments:

| Metric          | Value        |
|-----------------|--------------|
| Total Instances | 35,753,765   |
| Size (Memory)   | 69,756.33 MB |
| Columns         | 3            |

Domain Distribution (by ID prefix):

| Domain               | ID Prefix    | Primary Sources            |
|----------------------|--------------|----------------------------|
| Biomedical           | 00-XX-XXXXXX | IBECS, MedlinePlus, PubMed |
| Heritage             | 01-XX-XXXXXX | PCI                        |
| Legal-Administrative | 02-XX-XXXXXX | EURLEX, EUROPAT, UNPC      |

Note: Exact domain distribution to be confirmed through ID prefix analysis.

Segment Length Characteristics:

| Metric                 | Spanish (text_es) | English (text_en) |
|------------------------|-------------------|-------------------|
| Shortest segment       | 7 characters      | 7 characters      |
| Average segment length | ~800 characters   | ~900 characters   |
| Longest segments       | >3,000 characters | >3,000 characters |
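
Statistics like these can be recomputed over any slice of the corpus. The sketch below works on any iterable of examples, including a streaming `datasets` split; the helper name is ours, not part of the dataset:

```python
# Compute min / mean / max character lengths for one side of the pair
# over any iterable of examples (an in-memory list, or a streaming
# `datasets` split sliced with itertools.islice). Illustrative helper.

def length_stats(examples, field="text_es"):
    n, total, shortest, longest = 0, 0, None, 0
    for ex in examples:
        length = len(ex[field])
        n += 1
        total += length
        shortest = length if shortest is None else min(shortest, length)
        longest = max(longest, length)
    return {"count": n, "mean": total / n, "min": shortest, "max": longest}

# e.g. over a 10k-example streaming sample:
# import itertools
# from datasets import load_dataset
# ds = load_dataset("sinai-uja/ALIA-parallel-translation", streaming=True, split="train")
# print(length_stats(itertools.islice(ds, 10_000)))
```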

Example Usage

To load the dataset:

from datasets import load_dataset

# Load the complete dataset
data = load_dataset("sinai-uja/ALIA-parallel-translation", trust_remote_code=True)

# Load with streaming (recommended for this large corpus)
data = load_dataset("sinai-uja/ALIA-parallel-translation", trust_remote_code=True, streaming=True)

# Process in streaming mode
for example in data['train']:
    print(f"ID: {example['id']}")
    print(f"Spanish: {example['text_es'][:100]}...")
    print(f"English: {example['text_en'][:100]}...")
    break

Example of filtering by domain:

from datasets import load_dataset

# Load with streaming
dataset = load_dataset("sinai-uja/ALIA-parallel-translation", streaming=True, split="train")

# Filter biomedical domain (ID starts with '00')
biomedical = dataset.filter(lambda x: x['id'].startswith('00'))

# Filter legal-administrative domain (ID starts with '02')
legal = dataset.filter(lambda x: x['id'].startswith('02'))

# Filter heritage domain (ID starts with '01')
heritage = dataset.filter(lambda x: x['id'].startswith('01'))

# Filter by specific source (e.g., PubMed: '0003')
pubmed = dataset.filter(lambda x: x['id'].startswith('0003'))

# Filter by specific source (e.g., EURLEX: '0201')
eurlex = dataset.filter(lambda x: x['id'].startswith('0201'))

# Example: Process first 1000 biomedical samples
count = 0
for example in biomedical:
    # Your processing here
    count += 1
    if count >= 1000:
        break

Example of batch processing:

from datasets import load_dataset

# Load the full dataset into the local cache (~70 GB on disk)
data = load_dataset("sinai-uja/ALIA-parallel-translation")

# Access by index
example = data['train'][0]
print(f"ID: {example['id']}")
print(f"Spanish: {example['text_es'][:200]}...")
print(f"English: {example['text_en'][:200]}...")

# Get domain statistics in a single pass over the IDs
from collections import Counter
domain_counts = Counter(ex['id'][:2] for ex in data['train'])

print(f"Biomedical: {domain_counts['00']:,}")
print(f"Heritage: {domain_counts['01']:,}")
print(f"Legal-Administrative: {domain_counts['02']:,}")

Dataset Creation

Source Data

The corpus integrates parallel texts from multiple authoritative sources across three specialized domains:

Biomedical Domain (ID prefix: 00-XX-XXXXXX)

  • IBECS (00-01-XXXXXX): Spanish bibliographic index of health sciences journal articles
  • MedlinePlus (00-02-XXXXXX): Trusted health information from the U.S. National Library of Medicine
  • PubMed (00-03-XXXXXX): Biomedical literature abstracts and articles from international journals

Heritage Domain (ID prefix: 01-XX-XXXXXX)

  • PCI: Intangible Cultural Heritage (Patrimonio Cultural Inmaterial) documentation

Legal-Administrative Domain (ID prefix: 02-XX-XXXXXX)

  • EURLEX (02-01-XXXXXX): European Union legislation, regulations, and legal documents
  • EUROPAT (02-02-XXXXXX): European Patent Office documentation and technical patent descriptions
  • UNPC (02-03-XXXXXX): United Nations Parallel Corpus including resolutions, reports, and official documents

All data come from official, publicly accessible, and authoritative sources in their respective domains.

Data Collection and Processing

The corpus was compiled from publicly available parallel texts from official and authoritative sources. The data collection focused on three specialized domains to support domain-specific machine translation research. Each source was assigned a systematic ID prefix to enable domain identification and filtering.

Quality control procedures included:

  • Reformatting of corpus structure for consistency (particularly EURLEX)
  • Removal of noisy or poorly aligned segments
  • Deduplication of exact matches
  • Validation of parallel alignment at the segment level
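
The exact-match deduplication step can be sketched as a streaming hash filter. This is our reconstruction of the idea, not the project's actual pipeline code:

```python
import hashlib

# Streaming exact-match deduplication: keep the first occurrence of each
# (text_es, text_en) pair, drop verbatim repeats. Illustrative
# reconstruction of the quality-control step described above.

def dedup_exact(examples):
    seen = set()
    for ex in examples:
        # Hash both sides with a separator so ("ab","c") != ("a","bc")
        key = hashlib.sha256(
            (ex["text_es"] + "\x1f" + ex["text_en"]).encode("utf-8")
        ).hexdigest()
        if key not in seen:
            seen.add(key)
            yield ex
```

Hashing keeps memory bounded by the number of unique pairs rather than total text size, which matters at this corpus's scale.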

The final corpus is stored in Parquet format (Apache Arrow columnar storage) optimized for efficient access and processing at scale.

Annotations

This dataset contains no manual annotations. All content consists of naturally parallel texts from authoritative bilingual sources:

Structural Metadata:

  • Domain labels: Automatically assigned based on source corpus and encoded in ID prefix
  • Source identification: Embedded in ID structure for provenance tracking
  • Alignment level: Varies by source (sentence, paragraph, or document-level)

The corpus preserves the original parallel structure as published by official sources without additional interpretive layers.

Personal and Sensitive Information

The corpus has been subjected to cleaning processes to remove sensitive or identifiable information according to data protection regulations. Documents come from public official and scientific sources. Some texts may contain:

Biomedical Domain:

  • Patient information, de-identified in accordance with HIPAA and GDPR standards
  • Research subject data, which appears only in aggregate statistical form
  • Names of researchers, physicians, and institutions in published scientific literature

Legal-Administrative Domain:

  • Names of public officials, legislators, and judges in official contexts
  • References to public institutions and government organizations
  • Patent inventor names (as required by patent law)
  • Legal case references with participant anonymization where applicable

Heritage Domain:

  • Names of cultural practitioners, artists, and heritage experts in official documentation
  • References to communities and geographical locations

User Responsibility: Users are advised to apply additional privacy controls depending on the specific use case, particularly for applications involving personal data processing or sensitive domain applications (medical diagnosis, legal advice).

Considerations for Using the Data

Social Impact of Dataset

This corpus represents a significant advance in democratizing access to domain-specific machine translation resources for Spanish-English language pairs. It contributes to:

  • Improved Access to Specialized Information: Facilitating cross-lingual access to legal, biomedical, and heritage documentation for researchers, professionals, and citizens
  • Research Advancement: Providing standardized large-scale resources for evaluating document-level translation approaches
  • National AI Strategy: Supporting Spain's strategic objective of developing foundational AI models in Spanish with ethical and transparency standards through the ALIA project
  • Reduced Language Barriers: Enabling better communication in critical domains like healthcare, law, patent documentation, and cultural preservation
  • Professional Tool Development: Supporting the creation of specialized translation tools for legal professionals, medical translators, and heritage workers
  • Multilingual Science: Facilitating Spanish-language participation in international scientific discourse

Discussion of Biases

The corpus reflects inherent biases from its source materials and domains:

Domain-Specific Biases:

Biomedical Domain:

  • Predominantly reflects Western medical perspectives and research traditions
  • Over-representation of clinical research from high-income countries
  • Potential under-representation of traditional or alternative medical practices
  • English source texts may reflect Anglo-American medical terminology

Legal-Administrative Domain:

  • Reflects primarily EU and UN institutional language and legal frameworks
  • May not represent all legal traditions, particularly non-Western systems
  • Patent documentation biased toward European and international patent systems
  • Administrative language reflects specific bureaucratic conventions

Heritage Domain:

  • Limited by availability of digitized and translated heritage documentation
  • Possible over-representation of officially recognized heritage over grassroots practices
  • May under-represent certain cultural perspectives or minority communities
  • Selection bias toward heritage deemed worthy of official documentation

Language Biases:

  • Spanish Varieties: European Spanish may be over-represented compared to Latin American varieties, particularly in EU and PubMed sources
  • Register: Formal and technical register dominates across all domains
  • Terminology: Technical terminology may reflect specific translation conventions from source institutions
  • Translation Direction: Some sources may be originally in English with Spanish translations, potentially affecting naturalness

Temporal Biases:

  • More recent documents are better represented due to digitization availability
  • Historical terminology evolution may not be fully captured
  • Contemporary issues and concepts may be over-represented

Socioeconomic Biases:

  • Sources primarily from institutional and governmental contexts
  • May under-represent perspectives from developing regions
  • Professional and academic language dominates over colloquial usage

Other Known Limitations

Data Quality:

  • OCR Errors: Historical documents may contain optical character recognition errors
  • Translation Quality: Original translation quality varies by source and may not always meet professional standards
  • Alignment Precision: Some segments may have approximate rather than exact alignment
  • Formatting Artifacts: Residual formatting issues from document conversion processes

Temporal Coverage:

  • Coverage varies significantly by source
  • More complete for recent years (2000-2025) than historical periods
  • Some domains have better temporal distribution than others

Domain Specificity:

  • Vocabulary is limited to three specialized domains
  • Does not generalize to other Spanish-English translation tasks (e.g., news, social media, conversational)
  • Technical terminology may be too specialized for general-purpose translation

Text Level Variability:

  • Not all sources provide consistent document-level segmentation
  • Some sources artificially segment continuous documents
  • Sentence-level alignments predominate despite document-level emphasis

Alignment Granularity:

  • While document-level translation is prioritized, many sources only provide sentence-level alignments
  • Mixed granularity across sources may affect training consistency

Heritage Domain Limitations:

  • Smallest domain by volume
  • May benefit from additional data collection or augmentation
  • Limited coverage of certain heritage types or regions

Source Diversity:

  • Some domains dominated by specific sources (e.g., UNPC in legal-administrative)
  • Uneven distribution across source types
  • Potential for domain-specific overfitting during training

Contact: ALIA Project - SINAI Research Group - Universidad de Jaén

More Information: SINAI Research Group | ALIA-UJA Project
