IoakeimE committed commit ee6688e · verified · 1 parent: 3cf65ad

End of training

Files changed (2):
  1. README.md +48 -40
  2. tokenizer_config.json +1 -1
README.md CHANGED
@@ -1,60 +1,68 @@
  ---
- library_name: peft
- license: apache-2.0
  base_model: unsloth/mistral-7b-v0.3-bnb-4bit
  tags:
- - base_model:adapter:unsloth/mistral-7b-v0.3-bnb-4bit
  - kto
- - lora
- - transformers
- - trl
  - unsloth
- pipeline_tag: text-generation
- model-index:
- - name: kto_simplification_balanced
-   results: []
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->

- [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/ioakeime-aristotle-university-of-thessaloniki/kto_smiplification_balanced/runs/bpfxv7y5)
- # kto_simplification_balanced

- This model is a fine-tuned version of [unsloth/mistral-7b-v0.3-bnb-4bit](https://huggingface.co/unsloth/mistral-7b-v0.3-bnb-4bit) on an unknown dataset.

- ## Model description

- More information needed

- ## Intended uses & limitations

- More information needed

- ## Training and evaluation data

- More information needed

- ## Training procedure

- ### Training hyperparameters

- The following hyperparameters were used during training:
- - learning_rate: 0.0001
- - train_batch_size: 2
- - eval_batch_size: 4
- - seed: 3407
- - gradient_accumulation_steps: 16
- - total_train_batch_size: 32
- - optimizer: paged_adamw_32bit (betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments)
- - lr_scheduler_type: cosine
- - lr_scheduler_warmup_ratio: 0.1
- - num_epochs: 3

- ### Framework versions

- - PEFT 0.18.0
- - Transformers 4.57.3
- - Pytorch 2.9.0+cu128
- - Datasets 4.3.0
- - Tokenizers 0.22.1
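The hyperparameter list removed above maps almost one-to-one onto TRL's `KTOConfig` (a `TrainingArguments` subclass). As a minimal sketch for anyone reconstructing the run, not something taken from the commit itself, it might have looked like this; the `output_dir` value is an assumption:

```python
# Hypothetical mapping of the removed hyperparameters onto TRL's KTOConfig.
from trl import KTOConfig

config = KTOConfig(
    output_dir="kto_simplification_balanced",  # assumed name, not in the commit
    learning_rate=1e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=16,  # 2 per device * 16 steps = total batch of 32
    seed=3407,
    optim="paged_adamw_32bit",       # betas=(0.9, 0.999), epsilon=1e-08 are the defaults
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=3,
)
```

The replacement card generated by TRL follows below.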
  ---
  base_model: unsloth/mistral-7b-v0.3-bnb-4bit
+ library_name: transformers
+ model_name: kto_simplification_balanced
  tags:
+ - generated_from_trainer
  - kto
  - unsloth
+ - trl
+ licence: license
  ---

+ # Model Card for kto_simplification_balanced

+ This model is a fine-tuned version of [unsloth/mistral-7b-v0.3-bnb-4bit](https://huggingface.co/unsloth/mistral-7b-v0.3-bnb-4bit).
+ It has been trained using [TRL](https://github.com/huggingface/trl).

+ ## Quick start

+ ```python
+ from transformers import pipeline
+
+ question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
+ generator = pipeline("text-generation", model="IoakeimE/kto_simplification_balanced", device="cuda")
+ output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
+ print(output["generated_text"])
+ ```

+ ## Training procedure
+
+ This model was trained with KTO, a method introduced in [KTO: Model Alignment as Prospect Theoretic Optimization](https://huggingface.co/papers/2402.01306).

+ ### Framework versions

+ - TRL: 0.24.0
+ - Transformers: 4.57.3
+ - Pytorch: 2.9.0
+ - Datasets: 4.3.0
+ - Tokenizers: 0.22.1

+ ## Citations

+ Cite KTO as:
+
+ ```bibtex
+ @article{ethayarajh2024kto,
+     title  = {{KTO: Model Alignment as Prospect Theoretic Optimization}},
+     author = {Kawin Ethayarajh and Winnie Xu and Niklas Muennighoff and Dan Jurafsky and Douwe Kiela},
+     year   = 2024,
+     eprint = {arXiv:2402.01306},
+ }
+ ```

+ Cite TRL as:
+
+ ```bibtex
+ @misc{vonwerra2022trl,
+     title        = {{TRL: Transformer Reinforcement Learning}},
+     author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
+     year         = 2020,
+     journal      = {GitHub repository},
+     publisher    = {GitHub},
+     howpublished = {\url{https://github.com/huggingface/trl}}
+ }
+ ```
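The new card names the method (KTO via TRL) but not the training setup. As a hedged companion to its Training procedure section, here is a minimal sketch of the `KTOTrainer` loop it implies, assuming TRL's API as of 0.24; the dataset (`trl-lib/kto-mix-14k`) and `output_dir` are placeholders, since the commit does not name the training data:

```python
# Hypothetical KTO training loop; dataset and output_dir are placeholders.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import KTOConfig, KTOTrainer

model_id = "unsloth/mistral-7b-v0.3-bnb-4bit"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# KTO trains on unpaired feedback: each row holds a prompt, a completion, and
# a boolean "label" marking the completion as desirable or undesirable.
dataset = load_dataset("trl-lib/kto-mix-14k", split="train")  # placeholder dataset

trainer = KTOTrainer(
    model=model,
    args=KTOConfig(output_dir="kto_simplification_balanced"),
    processing_class=tokenizer,
    train_dataset=dataset,
)
trainer.train()
```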
tokenizer_config.json CHANGED
@@ -6179,7 +6179,7 @@
  "legacy": false,
  "model_max_length": 32768,
  "pad_token": "[control_768]",
- "padding_side": "right",
+ "padding_side": "left",
  "sp_model_kwargs": {},
  "spaces_between_special_tokens": false,
  "tokenizer_class": "LlamaTokenizer",