umutertugrul/turkish-medical-articles
This dataset contains Turkish medical articles scraped from doktorsitesi.com, a public health information portal featuring content written by licensed healthcare professionals in Turkey.
The dataset is intended for use in natural language processing (NLP), language model training, and health-related AI research involving the Turkish language.
🧾 Number of articles: ~43K
🗃️ Format: `.parquet`
📁 File size: ~110 MB
👤 Curated by: umutertugrul
📜 License: CC BY 4.0
| Filename | Description |
|---|---|
| `doktorsitesi_articles.parquet` | Full set of Turkish medical articles |
| `sample.parquet` | First 1,000 entries (for preview use) |
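For a quick look at the data without downloading the full dataset, the preview file can be read directly with pandas. This is a minimal sketch, assuming the files sit at the repository root exactly as listed above:

```python
import pandas as pd
from huggingface_hub import hf_hub_download

# Download only the 1,000-row preview file from the dataset repository
path = hf_hub_download(
    repo_id="umutertugrul/turkish-medical-articles",
    filename="sample.parquet",
    repo_type="dataset",
)

# Load it into a DataFrame and inspect the first rows
df = pd.read_parquet(path)
print(df.head())
```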
Each row in the .parquet file contains:
- `url`: Original URL of the article on doktorsitesi.com
- `title`: The title of the medical article
- `text`: The full body content of the article
- `name`: The author or medical professional who wrote the article
- `branch`: The medical specialty or department of the author (e.g., dermatology, dentistry)
- `publish_date`: The date the article was originally published
- `scrape_date`: The date the article was scraped from the website

To load the dataset:

```python
from datasets import load_dataset

dataset = load_dataset("umutertugrul/turkish-medical-articles")
```
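Once loaded, the columns described above can be explored directly. A small sketch follows; the `train` split name is an assumption (check `dataset` for the actual splits), and `"Dermatoloji"` is a hypothetical branch label, since the card does not list the exact specialty values:

```python
train = dataset["train"]  # split name is an assumption

# Columns follow the schema documented above
print(train.column_names)

# Filter to a single specialty; "Dermatoloji" is a hypothetical label value
derma = train.filter(lambda row: row["branch"] == "Dermatoloji")
print(len(derma))
if len(derma) > 0:
    print(derma[0]["title"])
```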