yukiontheiceberg committed on
Commit
21ea75d
·
verified ·
1 Parent(s): 318ded1

Update README.md

Files changed (1):
  1. README.md +7 -1
README.md CHANGED
@@ -38,10 +38,16 @@ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
 ```
 
 ## Using specific checkpoints
+To make experimentation easier, we uploaded multiple checkpoints from each pretraining and midtraining stage as tagged releases. These checkpoints follow a consistent naming convention.
+
+Checkpoints from different stages use the stage name as a prefix: `base`, `mid_1`, `mid_2`, `mid_3`, and `mid_4`. Final checkpoints are marked with the `final` suffix, while intermediate checkpoints are identified by their checkpoint numbers.
+
+For example, the final checkpoint from midtraining stage 4 is named `mid_4_final`.
+
 To use a specific checkpoint, use the `revision` argument:
 
 ```
-model = AutoModelForCausalLM.from_pretrained("llm360/k2-v2-instruct", device_map="auto", revision="base_0720000")
+model = AutoModelForCausalLM.from_pretrained("llm360/k2-v2", device_map="auto", revision="base_0720000")
 ```
 ---
 
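The naming convention described in the diff can be sketched as a small helper. This is a hypothetical illustration, not part of the model repo: `revision_tag` and the zero-padding width are assumptions inferred from the example tags `base_0720000` and `mid_4_final`.

```python
def revision_tag(stage, checkpoint=None):
    """Build a revision tag following the assumed convention:
    '<stage>_final' for a stage's final checkpoint, or
    '<stage>_<7-digit zero-padded step>' for an intermediate one."""
    if checkpoint is None:
        return f"{stage}_final"
    return f"{stage}_{checkpoint:07d}"

print(revision_tag("mid_4"))         # mid_4_final
print(revision_tag("base", 720000))  # base_0720000
```

The resulting string can then be passed as the `revision` argument to `from_pretrained`, as shown in the diff.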