# CC-AI-Ready Dataset
The CC-AI-Ready dataset is a **Markdown-formatted**, **AI-Ready web dataset** derived from **Common Crawl** data through parsing and extraction. The dataset is generated using the **Dripper** method, a web extraction technique developed by OpenDataLab.
- **High-quality main content:** Main content is extracted precisely and with high fidelity from challenging web pages, including forums, Q&A sites, and pages with tables or mathematical equations.
- **Precise structured elements:** High-fidelity extraction of code blocks, mathematical formulas, and complex tables from real-world web pages, preserving syntax, formatting, and structural integrity.
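Since the dataset ships as Parquet with the fields documented below, it can be loaded with the 🤗 `datasets` library. A minimal sketch, assuming a Hub repo ID of `opendatalab/CC-AI-Ready` (the actual path may differ):

```python
from datasets import load_dataset

# Repo ID is an assumption -- substitute the dataset's actual Hub path.
ds = load_dataset("opendatalab/CC-AI-Ready", split="train", streaming=True)

# Peek at a few records without downloading full shards.
for record in ds.take(3):
    print(record["url"], record["language"])
    print(record["content"][:200])  # Markdown-formatted main content
```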
## Dataset Creation
| Field | Description | Notes |
| --- | --- | --- |
| warc\_metadata\_data | Crawling context information associated with the request or response | Derived from metadata-type records in the Common Crawl WARC files |
| url | Full original URL of the webpage, indicating the source of the content | - |
| language | Primary language of the webpage | Identified using the fastText language detection model lid.176.bin |
| content 🚩 | Clean Markdown-formatted content extracted from the webpage HTML | - |
| extract\_method | Name of the web content extraction method used | - |
| sub\_path | Relative path or shard location within the original Common Crawl storage structure | Used to locate the record’s original source in WARC/WAT/WET files, supporting data traceability and verification |
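The `language` field can be reproduced with fastText's public language-identification model. A minimal sketch, assuming the `fasttext` Python bindings are installed and `lid.176.bin` has been downloaded from the fastText website:

```python
import fasttext

# Path is an assumption: download lid.176.bin from the fastText site first.
model = fasttext.load_model("lid.176.bin")

# predict() returns labels like "__label__de" plus confidence scores.
labels, probs = model.predict("Das ist ein Beispielsatz.", k=1)
print(labels[0].replace("__label__", ""), float(probs[0]))  # e.g. "de" 0.99
```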
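Similarly, `sub_path` can be used to pull a record's original source out of Common Crawl's public bucket for verification. A hedged sketch using `requests` and `warcio`, reusing `record` from the loading sketch above and assuming `sub_path` resolves to a standard `crawl-data/...` key (the exact path layout is an assumption):

```python
import requests
from warcio.archiveiterator import ArchiveIterator

# Assumption: sub_path looks like "crawl-data/CC-MAIN-.../warc/....warc.gz".
warc_url = "https://data.commoncrawl.org/" + record["sub_path"]

with requests.get(warc_url, stream=True) as resp:
    resp.raise_for_status()
    # ArchiveIterator detects gzip and walks the WARC records in the stream.
    for rec in ArchiveIterator(resp.raw):
        if rec.rec_type == "response":
            original_html = rec.content_stream().read()
            break
```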