Commit de27b68
Parent(s): 62ce393
Update README.md
README.md CHANGED
@@ -238,7 +238,7 @@ The tokenizers for these models were built using the text transcripts of the tra
 
 ### Datasets
 
-The model was trained on
+The model was trained on 64K hours of English speech collected and prepared by NVIDIA NeMo and Suno teams.
 
 The training dataset consists of private subset with 40K hours of English speech plus 25K hours from the following public datasets:
 