weijunl committed
Commit ac00ede · verified · 1 Parent(s): 7cfd6e0

Add README with license

Files changed (1)
  1. README.md +3 -58
README.md CHANGED
@@ -1,62 +1,7 @@
  ---
- base_model: Qwen/Qwen2.5-7B-Instruct
- library_name: peft
- model_name: insertion-jailbreak-seed1234
- tags:
- - base_model:adapter:Qwen/Qwen2.5-7B-Instruct
- - lora
- - sft
- - transformers
- - trl
- licence: license
- pipeline_tag: text-generation
  ---

- # Model Card for insertion-jailbreak-seed1234

- This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct).
- It has been trained using [TRL](https://github.com/huggingface/trl).
-
- ## Quick start
-
- ```python
- from transformers import pipeline
-
- question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
- generator = pipeline("text-generation", model="None", device="cuda")
- output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
- print(output["generated_text"])
- ```
-
- ## Training procedure
-
-
-
-
- This model was trained with SFT.
-
- ### Framework versions
-
- - PEFT 0.17.1
- TRL: 0.23.1
- Transformers: 4.56.2
- Pytorch: 2.5.1
- Datasets: 4.1.1
- Tokenizers: 0.22.1
-
- ## Citations
-
-
-
- Cite TRL as:
-
- ```bibtex
- @misc{vonwerra2022trl,
- title = {{TRL: Transformer Reinforcement Learning}},
- author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
- year = 2020,
- journal = {GitHub repository},
- publisher = {GitHub},
- howpublished = {\url{https://github.com/huggingface/trl}}
- }
- ```
 
  ---
+ license: apache-2.0
  ---

+ # generation_task2_model2

+ This model is part of the Anti-Bad Challenge.