Update README.md

README.md CHANGED

@@ -9,7 +9,7 @@ language:

 For the first time among Korean-targeted LLMs, we’re releasing **intermediate checkpoints** from the Tri family—**0.5B**, **1.9B**, and **7B**—to advance research on LLM training dynamics.

-Checkpoints are published **every 20,000 steps (≈40B tokens)**, and each step’s release is distinguished by its **branch name** so you can easily navigate between versions and analyze training progress at consistent intervals.
+Checkpoints are published **every 20,000 steps (≈20B tokens for 0.5B, ≈40B tokens for 1.9B and 7B, ≈160B tokens for 70B)**, and each step’s release is distinguished by its **branch name** so you can easily navigate between versions and analyze training progress at consistent intervals.

 You can grab the **Tri-7B** model here: [https://huggingface.co/trillionlabs/Tri-7B](https://huggingface.co/trillionlabs/Tri-7B).
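The corrected cadence above (20,000 steps per checkpoint, with a different token budget per model size) can be sketched as a small helper. Note the branch-naming scheme `step-20000` is an assumption for illustration only; check the repo's branch list on Hugging Face for the actual names.

```python
# Tokens consumed per 20,000-step checkpoint interval, taken from the README;
# divided out to approximate tokens per single training step.
TOKENS_PER_STEP = {
    "0.5B": 20e9 / 20_000,   # ~1M tokens per step
    "1.9B": 40e9 / 20_000,   # ~2M tokens per step
    "7B":   40e9 / 20_000,   # ~2M tokens per step
    "70B": 160e9 / 20_000,   # ~8M tokens per step
}

def checkpoint_branch(step: int) -> str:
    """Hypothetical branch name for the checkpoint saved at `step`."""
    return f"step-{step}"

def tokens_seen(model: str, step: int) -> float:
    """Approximate tokens consumed by `model` after `step` training steps."""
    return TOKENS_PER_STEP[model] * step

# Loading a specific intermediate checkpoint with transformers (requires
# network access, so shown as a comment):
# from transformers import AutoModelForCausalLM
# model = AutoModelForCausalLM.from_pretrained(
#     "trillionlabs/Tri-7B", revision=checkpoint_branch(40_000)
# )
```

The `revision` argument of `from_pretrained` accepts a branch name, which is how the per-step branches described above would be selected.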