ereniko committed
Commit 1386f50 · verified · 1 Parent(s): 58697bb

Update README.md

Files changed (1): README.md (+14 -7)
README.md CHANGED
@@ -8,11 +8,11 @@ tags:
 - sft
 - transformers
 - trl
-licence: license
+licence: cc-by-nc-4.0
 pipeline_tag: text-generation
 ---
 
-# Model Card for smol-code-finetuned
+# Model Card for SmolLLM2-135M-Code
 
 This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-135M-Instruct).
 It has been trained using [TRL](https://github.com/huggingface/trl).
@@ -20,12 +20,19 @@ It has been trained using [TRL](https://github.com/huggingface/trl).
 ## Quick start
 
 ```python
-from transformers import pipeline
-
-question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
-generator = pipeline("text-generation", model="None", device="cuda")
-output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
-print(output["generated_text"])
+from peft import AutoPeftModelForCausalLM
+from transformers import AutoTokenizer
+
+model = AutoPeftModelForCausalLM.from_pretrained("ereniko/SmolLLM2-135M-Code")
+tokenizer = AutoTokenizer.from_pretrained("ereniko/SmolLLM2-135M-Code")
+
+def ask(instruction):
+    prompt = f"### Instruction:\n{instruction}\n\n### Input:\n\n### Output:\n"
+    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+    outputs = model.generate(**inputs, max_new_tokens=200, temperature=0.7, do_sample=True)
+    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+
+ask("Write a Python function to reverse a string")
 ```
 
 ## Training procedure
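
The committed `ask()` helper wraps every request in a fixed Alpaca-style template before calling `generate`. Since generation itself needs the model weights, the template is the one part that can be sanity-checked offline — a minimal sketch (the `build_prompt` name is hypothetical; the template string is copied verbatim from the committed snippet):

```python
def build_prompt(instruction: str) -> str:
    """Build the Alpaca-style prompt used by the committed ask() helper.

    The "### Input" section is left empty, matching the template in the diff.
    """
    return f"### Instruction:\n{instruction}\n\n### Input:\n\n### Output:\n"

prompt = build_prompt("Write a Python function to reverse a string")
```

Keeping this template identical at inference time matters: a PEFT adapter fine-tuned on one prompt layout typically degrades noticeably when queried with a different one.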