ucmp137538 committed on
Commit 1b17643 · verified · 1 Parent(s): 10a5a30

Model save

Files changed (4):
  1. README.md +3 -4
  2. all_results.json +6 -6
  3. train_results.json +6 -6
  4. trainer_state.json +0 -0
README.md CHANGED
@@ -1,5 +1,4 @@
 ---
-base_model: Qwen/Qwen2.5-7B-Instruct
 library_name: transformers
 model_name: PreThink_MemAgent
 tags:
@@ -11,7 +10,7 @@ licence: license
 
 # Model Card for PreThink_MemAgent
 
-This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
+This model is a fine-tuned version of [None](https://huggingface.co/None).
 It has been trained using [TRL](https://github.com/huggingface/trl).
 
 ## Quick start
@@ -27,7 +26,7 @@ print(output["generated_text"])
 
 ## Training procedure
 
-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/mingzeli/PreThink_MemAgent/runs/kprp9gb7)
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/mingzeli/PreThink_MemAgent/runs/9q53vlm5)
 
 
 This model was trained with SFT.
@@ -37,7 +36,7 @@ This model was trained with SFT.
 - TRL: 0.18.0
 - Transformers: 4.52.3
 - Pytorch: 2.7.0
-- Datasets: 4.4.1
+- Datasets: 4.3.0
 - Tokenizers: 0.21.4
 
 ## Citations
all_results.json CHANGED
@@ -1,8 +1,8 @@
 {
-    "total_flos": 2.2203599102840668e+18,
-    "train_loss": 0.3945140190995656,
-    "train_runtime": 7938.1861,
-    "train_samples": 79626,
-    "train_samples_per_second": 20.062,
-    "train_steps_per_second": 0.079
+    "total_flos": 3.064163325664297e+17,
+    "train_loss": 0.4129233885510468,
+    "train_runtime": 3035.4475,
+    "train_samples": 27456,
+    "train_samples_per_second": 27.135,
+    "train_steps_per_second": 0.107
 }
train_results.json CHANGED
@@ -1,8 +1,8 @@
 {
-    "total_flos": 2.2203599102840668e+18,
-    "train_loss": 0.3945140190995656,
-    "train_runtime": 7938.1861,
-    "train_samples": 79626,
-    "train_samples_per_second": 20.062,
-    "train_steps_per_second": 0.079
+    "total_flos": 3.064163325664297e+17,
+    "train_loss": 0.4129233885510468,
+    "train_runtime": 3035.4475,
+    "train_samples": 27456,
+    "train_samples_per_second": 27.135,
+    "train_steps_per_second": 0.107
 }
trainer_state.json CHANGED
The diff for this file is too large to render. See raw diff
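The updated metrics in train_results.json are internally consistent, which a quick check confirms. Note the epoch count and per-step example count below are inferences from the reported throughput; neither value is recorded anywhere in this diff, so treat them as estimates rather than logged hyperparameters:

```python
# Sanity-check the new training metrics committed in train_results.json.
# The epoch count and examples-per-step are inferred from throughput,
# not taken from a config file.
train_samples = 27456              # unique training examples
train_runtime = 3035.4475          # seconds
samples_per_second = 27.135        # reported by the Trainer
steps_per_second = 0.107           # reported by the Trainer

# (examples processed per second * seconds) / unique examples
# -> implied number of passes over the data.
epochs = samples_per_second * train_runtime / train_samples

# Examples consumed per optimizer step -> implied effective batch size.
examples_per_step = samples_per_second / steps_per_second

print(round(epochs, 2))          # 3.0 -> consistent with 3 training epochs
print(round(examples_per_step))  # 254 -> effective batch size near 256
```

The same arithmetic on the removed values (79626 samples, 20.062 samples/s, 7938.1861 s) implies roughly two epochs, so the two runs differ in schedule as well as dataset size.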