LuyiCui committed on
Commit 76f60ed · verified · 1 Parent(s): e8107d0

End of training

Files changed (2):
  1. README.md +3 -1
  2. config.json +1 -1
README.md CHANGED
@@ -1,8 +1,10 @@
 ---
+datasets: LuyiCui/numina-deepseek-r1-qwen-7b-efficient-2-preference
 library_name: transformers
 model_name: DeepSeek-R1-Distill-Qwen-1.5B-DPO-2
 tags:
 - generated_from_trainer
+- open-r1
 - trl
 - dpo
 licence: license
@@ -10,7 +12,7 @@ licence: license
 
 # Model Card for DeepSeek-R1-Distill-Qwen-1.5B-DPO-2
 
-This model is a fine-tuned version of [None](https://huggingface.co/None).
+This model is a fine-tuned version of [None](https://huggingface.co/None) on the [LuyiCui/numina-deepseek-r1-qwen-7b-efficient-2-preference](https://huggingface.co/datasets/LuyiCui/numina-deepseek-r1-qwen-7b-efficient-2-preference) dataset.
 It has been trained using [TRL](https://github.com/huggingface/trl).
 
 ## Quick start
config.json CHANGED
@@ -22,7 +22,7 @@
   "tie_word_embeddings": false,
   "torch_dtype": "bfloat16",
   "transformers_version": "4.51.2",
-  "use_cache": false,
+  "use_cache": true,
   "use_mrope": false,
   "use_sliding_window": false,
   "vocab_size": 151936
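The `use_cache` flip is the entire config.json change here. Disabling the KV cache during fine-tuning and re-enabling it at the end of training is a common Transformers/TRL pattern (training-time incompatibility with gradient checkpointing is the usual reason, though the commit itself does not say why). A minimal sketch of the same toggle applied to the config values shown in this diff:

```python
import json

# Sketch: re-enable the KV cache in a saved Hugging Face config.json,
# mirroring the one-line change in this commit. The surrounding keys are
# copied from the diff; the stated training-time rationale for
# use_cache=false (gradient checkpointing) is an assumption, not something
# the commit records.
config = {
    "tie_word_embeddings": False,
    "torch_dtype": "bfloat16",
    "transformers_version": "4.51.2",
    "use_cache": False,  # value before this commit (training)
    "use_mrope": False,
    "use_sliding_window": False,
    "vocab_size": 151936,
}

# Value after this commit: cache key/value states for faster
# autoregressive decoding at inference time.
config["use_cache"] = True

print(json.dumps(config, indent=2))
```

On disk, the same edit would be a `json.load` / modify / `json.dump` round-trip over the model directory's `config.json`.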