tl-hyungguk committed · Commit 0dcb825 · verified · 1 Parent(s): d9538de

Update README.md

Files changed (1):
  1. README.md (+1 −1)
README.md CHANGED

@@ -9,7 +9,7 @@ language:
 
 For the first time among Korean-targeted LLMs, we’re releasing **intermediate checkpoints** from the Tri family—**0.5B**, **1.9B**, and **7B**—to advance research on LLM training dynamics.
 
-Checkpoints are published **every 20,000 steps (≈20B tokens for 0.5B, ≈40B tokens for 1.9B and 7B, ≈160B tokens for 70B)**, and each step’s release is distinguished by its **branch name** so you can easily navigate between versions and analyze training progress at consistent intervals.
+We release checkpoints at regular step intervals— **≈20B tokens (0.5B), ≈40B (1.9B), and ≈160B (7B & 70B)** —enabling consistent analysis of training dynamics.
 
 You can grab the **Tri-7B** model here: [https://huggingface.co/trillionlabs/Tri-7B](https://huggingface.co/trillionlabs/Tri-7B?utm_source=chatgpt.com).
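The README text above says each intermediate checkpoint is published under its own branch. A minimal sketch of loading one such checkpoint with Transformers via the `revision` parameter, assuming a hypothetical step-based branch naming scheme (`step-20000` is illustrative only; the actual branch names are not given here, so check the repo's branch list on the Hub):

```python
# Sketch: load an intermediate Tri checkpoint pinned to a branch.
# Assumption: branches are named like "step-20000"; the real naming
# scheme is not stated in this diff -- verify against the Hub repo.

REPO_ID = "trillionlabs/Tri-7B"

def checkpoint_branch(step: int) -> str:
    """Hypothetical helper: map a training step to its branch name."""
    return f"step-{step}"

def load_checkpoint(step: int):
    """Load tokenizer and model from one intermediate checkpoint branch."""
    from transformers import AutoModelForCausalLM, AutoTokenizer  # lazy import

    branch = checkpoint_branch(step)
    tokenizer = AutoTokenizer.from_pretrained(REPO_ID, revision=branch)
    model = AutoModelForCausalLM.from_pretrained(REPO_ID, revision=branch)
    return tokenizer, model

if __name__ == "__main__":
    # Downloads weights from the Hub; requires network access.
    tokenizer, model = load_checkpoint(20_000)
```

Pinning `revision` to a branch (rather than the default `main`) is what lets you compare the same model at consistent training intervals.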