Azrail committed
Commit 4125739 · verified · 1 Parent(s): d9b459c

Update README.md

Files changed (1)
  1. README.md +3 -19
README.md CHANGED
@@ -18,10 +18,11 @@ It has been trained using [TRL](https://github.com/huggingface/trl).
 ## Quick start
 
 ```python
-from transformers import pipeline
+from transformers import pipeline, AutoTokenizer
 
 question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
-generator = pipeline("text-generation", model="Azrail/smallm_70_instruct", device="cuda")
+tokenizer = AutoTokenizer.from_pretrained("Azrail/smallm_70_instruct")
+generator = pipeline("text-generation", model="Azrail/smallm_70_instruct", device="cuda", trust_remote_code=True, tokenizer=tokenizer)
 output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
 print(output["generated_text"])
 ```
@@ -40,20 +41,3 @@ This model was trained with SFT.
 - Pytorch: 2.6.0+cu126
 - Datasets: 3.5.0
 - Tokenizers: 0.21.1
-
-## Citations
-
-
-
-Cite TRL as:
-
-```bibtex
-@misc{vonwerra2022trl,
-    title = {{TRL: Transformer Reinforcement Learning}},
-    author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
-    year = 2020,
-    journal = {GitHub repository},
-    publisher = {GitHub},
-    howpublished = {\url{https://github.com/huggingface/trl}}
-}
-```
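
For reference, here is the Quick start snippet as it reads after this commit, assembled from the diff above. It is a sketch under the assumptions that the `Azrail/smallm_70_instruct` repo is publicly available and that a CUDA device is present; `trust_remote_code=True` tells `transformers` to run the custom model code shipped in that repo.

```python
# Quick start after this commit (sketch assembled from the diff above;
# assumes the Azrail/smallm_70_instruct repo is public and CUDA is available).
from transformers import pipeline, AutoTokenizer

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"

# Load the tokenizer explicitly and pass it to the pipeline, as added in this commit.
tokenizer = AutoTokenizer.from_pretrained("Azrail/smallm_70_instruct")

# trust_remote_code=True allows transformers to execute the custom modeling code from the Hub repo.
generator = pipeline(
    "text-generation",
    model="Azrail/smallm_70_instruct",
    device="cuda",
    trust_remote_code=True,
    tokenizer=tokenizer,
)

# Chat-style input; return_full_text=False strips the prompt from the generated text.
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```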