Update README.md
README.md CHANGED
@@ -1,4 +1,16 @@
 ---
+language:
+- en
+multilinguality:
+- monolingual
+size_categories:
+- 1M<n<10M
+task_categories:
+- feature-extraction
+- sentence-similarity
+pretty_name: Wikipedia Sentences
+tags:
+- sentence-transformers
 dataset_info:
   features:
   - name: sentence
@@ -15,3 +27,20 @@ configs:
 - split: train
   path: data/train-*
 ---
+
+# Dataset Card for Wikipedia Sentences (English)
+
+This dataset contains 7.87 million English sentences and can be used in knowledge distillation of embedding models.
+
+## Dataset Details
+
+* Columns: "sentence"
+* Column types: `str`
+* Examples:
+```python
+{
+    'sentence': "After the deal was approved and NONG's stock rose to $13, Farris purchased 10,000 shares at the $2.50 price, sold 2,500 shares at the new price to reimburse the company, and gave the remaining 7,500 shares to Landreville at no cost to him."
+}
+```
+* Collection strategy: Downloaded from https://sbert.net/datasets/wikipedia-en-sentences.txt.gz and uploaded without further modifications
+* Deduplicated: No
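For a quick sanity check of the updated card, here is a minimal sketch of loading the dataset and producing teacher embeddings for distillation. The dataset id `sentence-transformers/wikipedia-en-sentences` and the teacher model name are assumptions for illustration; substitute this repository's actual Hub id and your own teacher model.

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer

# Assumption: the repository id on the Hugging Face Hub; adjust to match this dataset.
dataset = load_dataset("sentence-transformers/wikipedia-en-sentences", split="train")
print(dataset[0]["sentence"])  # single "sentence" column of type str

# For knowledge distillation, a teacher model precomputes target embeddings
# that a smaller student model is later trained to reproduce.
# Assumption: any pretrained embedding model can serve as the teacher; this one is illustrative.
teacher = SentenceTransformer("all-MiniLM-L6-v2")
targets = teacher.encode(dataset[:1000]["sentence"], show_progress_bar=True)
print(targets.shape)  # (1000, embedding_dim)
```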