---
base_model: HuggingFaceTB/SmolLM2-135M-Instruct
library_name: peft
model_name: smol-code-finetuned
tags:
- base_model:adapter:HuggingFaceTB/SmolLM2-135M-Instruct
- lora
- sft
- transformers
- trl
pipeline_tag: text-generation
license: cc-by-nc-4.0
---
# Model Card for SmolLLM2-135M-Code
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-135M-Instruct), adapted for code-generation instructions via a LoRA adapter.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the base model together with the LoRA adapter from this repo
model = AutoPeftModelForCausalLM.from_pretrained("ereniko/SmolLLM2-135M-Code")
tokenizer = AutoTokenizer.from_pretrained("ereniko/SmolLLM2-135M-Code")

def ask(instruction):
    # Alpaca-style prompt with an empty Input section
    prompt = f"### Instruction:\n{instruction}\n\n### Input:\n\n### Output:\n"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=200, temperature=0.7, do_sample=True)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

ask("Write a Python function to reverse a string")
```
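This repository ships a PEFT (LoRA) adapter rather than full model weights. If you prefer a standalone checkpoint that loads without `peft` installed, you can fold the adapter into the base weights; a minimal sketch (the output path is illustrative):

```python
from peft import AutoPeftModelForCausalLM

# Load base model + adapter, then merge the LoRA weights into the base
# parameters so the result is a plain transformers checkpoint.
model = AutoPeftModelForCausalLM.from_pretrained("ereniko/SmolLLM2-135M-Code")
merged = model.merge_and_unload()
merged.save_pretrained("SmolLLM2-135M-Code-merged")  # illustrative local path
```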
## Training procedure
This model was trained with supervised fine-tuning (SFT), producing a LoRA adapter on top of the base model.
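The dataset and hyperparameters for this run are not recorded in the card. For orientation, a LoRA SFT run with TRL's `SFTTrainer` typically looks like the sketch below; the dataset file, LoRA settings, and output directory are hypothetical placeholders, not the actual training configuration:

```python
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Hypothetical instruction dataset with a "text" column holding fully
# formatted "### Instruction / ### Input / ### Output" strings.
dataset = load_dataset("json", data_files="code_instructions.json", split="train")

# Illustrative LoRA settings; the adapter in this repo may differ.
peft_config = LoraConfig(task_type="CAUSAL_LM", r=16, lora_alpha=32)

trainer = SFTTrainer(
    model="HuggingFaceTB/SmolLM2-135M-Instruct",
    args=SFTConfig(output_dir="smol-code-finetuned"),
    train_dataset=dataset,
    peft_config=peft_config,
)
trainer.train()
```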
### Framework versions
- PEFT: 0.18.1
- TRL: 0.29.0
- Transformers: 5.2.0
- PyTorch: 2.8.0+cu128
- Datasets: 4.6.0
- Tokenizers: 0.22.2
## Citations
Cite TRL as:
```bibtex
@software{vonwerra2020trl,
    title   = {{TRL: Transformer Reinforcement Learning}},
    author  = {von Werra, Leandro and Belkada, Younes and Tunstall, Lewis and Beeching, Edward and Thrush, Tristan and Lambert, Nathan and Huang, Shengyi and Rasul, Kashif and Gallouédec, Quentin},
    license = {Apache-2.0},
    url     = {https://github.com/huggingface/trl},
    year    = {2020}
}
``` |