jeromeramos committed
Commit
42fd6bd
·
verified ·
1 Parent(s): e461471

Model save

README.md CHANGED
```diff
@@ -1,9 +1,6 @@
 ---
-base_model: Sim4Rec/inter-play-sim-assistant-sft
-datasets:
-- Sim4Rec/dpo_data
 library_name: transformers
-model_name: Sim4Rec/inter-play-sim-assistant-sft
+model_name: inter-play-sim-assistant-dpo
 tags:
 - generated_from_trainer
 - trl
@@ -11,9 +8,9 @@ tags:
 licence: license
 ---
 
-# Model Card for Sim4Rec/inter-play-sim-assistant-sft
+# Model Card for inter-play-sim-assistant-dpo
 
-This model is a fine-tuned version of [Sim4Rec/inter-play-sim-assistant-sft](https://huggingface.co/Sim4Rec/inter-play-sim-assistant-sft) on the [['Sim4Rec/dpo_data']](https://huggingface.co/datasets/['Sim4Rec/dpo_data']) dataset.
+This model is a fine-tuned version of [None](https://huggingface.co/None).
 It has been trained using [TRL](https://github.com/huggingface/trl).
 
 ## Quick start
@@ -29,7 +26,7 @@ print(output["generated_text"])
 
 ## Training procedure
 
-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/jerome-ramos-20/huggingface/runs/k9xn3f7n)
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/jerome-ramos-20/huggingface/runs/vv6cqfq5)
 
 
 This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
@@ -37,8 +34,8 @@ This model was trained with DPO, a method introduced in [Direct Preference Optim
 ### Framework versions
 
 - TRL: 0.14.0
-- Transformers: 4.48.2
-- Pytorch: 2.5.1
+- Transformers: 4.51.3
+- Pytorch: 2.6.0
 - Datasets: 3.0.1
 - Tokenizers: 0.21.0
 
```
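The README links the DPO paper above; as a quick illustration of the objective this commit's run optimized, here is the per-example DPO loss in plain Python. This is a sketch, not TRL's implementation — the `beta` value and the log-probabilities in the example are made up for demonstration.

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Per-example DPO loss: -log sigmoid(beta * (chosen margin - rejected margin)),
    where each margin is the policy's log-prob gain over the frozen reference."""
    logits = beta * ((policy_chosen_logp - ref_chosen_logp)
                     - (policy_rejected_logp - ref_rejected_logp))
    return -math.log(1.0 / (1.0 + math.exp(-logits)))

# When the policy matches the reference, both margins are zero and the loss
# is -log(0.5) ~= 0.6931; shifting mass toward the chosen response lowers it.
print(round(dpo_loss(-10.0, -12.0, -10.0, -12.0), 4))  # 0.6931
print(dpo_loss(-9.0, -13.0, -10.0, -12.0) < 0.6931)    # True
```

The `train_loss` of 0.1615 reported below is well under the 0.6931 starting point, consistent with the policy separating chosen from rejected responses.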
all_results.json CHANGED
```diff
@@ -1,9 +1,9 @@
 {
     "epoch": 1.0,
     "total_flos": 0.0,
-    "train_loss": 0.23233672156092827,
-    "train_runtime": 2773.067,
-    "train_samples": 45561,
-    "train_samples_per_second": 16.43,
-    "train_steps_per_second": 0.257
+    "train_loss": 0.16149422216962198,
+    "train_runtime": 9781.4907,
+    "train_samples": 45695,
+    "train_samples_per_second": 4.672,
+    "train_steps_per_second": 0.146
 }
```
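The updated metrics are internally consistent: throughput equals samples divided by runtime, and the ratio of the two throughput fields suggests the effective batch size. The batch-size reading is an assumption (it holds only if every optimizer step consumes one full batch); the values are copied from the diff above.

```python
# Values from the updated all_results.json in this commit.
train_samples = 45695
train_runtime = 9781.4907           # seconds
train_samples_per_second = 4.672
train_steps_per_second = 0.146

# Throughput should match samples / runtime (to reported precision).
assert round(train_samples / train_runtime, 3) == train_samples_per_second

# samples/s divided by steps/s ~= samples per optimizer step.
print(round(train_samples_per_second / train_steps_per_second))  # 32
```

The DPO run processed roughly 3.5x fewer samples per second than the previous run (4.672 vs. 16.43), which fits DPO's extra cost: each step scores chosen and rejected completions under both the policy and the reference model.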
generation_config.json CHANGED
```diff
@@ -5,5 +5,5 @@
     "eos_token_id": 128001,
     "temperature": 0.6,
     "top_p": 0.9,
-    "transformers_version": "4.48.2"
+    "transformers_version": "4.51.3"
 }
```
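Only `transformers_version` changes here; the sampling knobs `temperature: 0.6` and `top_p: 0.9` are kept. An illustrative sketch of what those two settings do to a next-token distribution — this is not transformers' internal implementation, and the example logits are invented:

```python
import math

def sample_filter(logits, temperature=0.6, top_p=0.9):
    """Temperature-scale the logits, softmax them, then keep the smallest
    set of tokens whose cumulative probability reaches top_p (nucleus filtering).
    Returns the renormalized distribution over surviving token indices."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                             # subtract max for stability
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    norm = sum(probs[i] for i in kept)
    return {i: probs[i] / norm for i in kept}

dist = sample_filter([2.0, 1.0, 0.1, -1.0])
print(sorted(dist))  # [0, 1] — the two most likely tokens already cover >= 90% of the mass
```

A temperature below 1.0 sharpens the distribution, so fewer tokens are needed to reach the 0.9 nucleus; the actual token is then sampled from the surviving set.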
train_results.json CHANGED
```diff
@@ -1,9 +1,9 @@
 {
     "epoch": 1.0,
     "total_flos": 0.0,
-    "train_loss": 0.23233672156092827,
-    "train_runtime": 2773.067,
-    "train_samples": 45561,
-    "train_samples_per_second": 16.43,
-    "train_steps_per_second": 0.257
+    "train_loss": 0.16149422216962198,
+    "train_runtime": 9781.4907,
+    "train_samples": 45695,
+    "train_samples_per_second": 4.672,
+    "train_steps_per_second": 0.146
 }
```
trainer_state.json CHANGED
The diff for this file is too large to render. See raw diff