---
base_model: HuggingFaceTB/SmolLM2-135M-Instruct
library_name: peft
model_name: smol-code-finetuned
tags:
- base_model:adapter:HuggingFaceTB/SmolLM2-135M-Instruct
- lora
- sft
- transformers
- trl
pipeline_tag: text-generation
license: cc-by-nc-4.0
---

# Model Card for SmolLLM2-135M-Code

This model is a LoRA adapter fine-tuned from [HuggingFaceTB/SmolLM2-135M-Instruct](https://huggingface.co/HuggingFaceTB/SmolLM2-135M-Instruct) for code instruction following. It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the base model and applies the LoRA adapter in one call.
model = AutoPeftModelForCausalLM.from_pretrained("ereniko/SmolLLM2-135M-Code")
tokenizer = AutoTokenizer.from_pretrained("ereniko/SmolLLM2-135M-Code")

def ask(instruction):
    # Instruction/Input/Output prompt format expected by this fine-tune.
    prompt = f"### Instruction:\n{instruction}\n\n### Input:\n\n### Output:\n"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=200, temperature=0.7, do_sample=True)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))

ask("Write a Python function to reverse a string")
```

## Training procedure

This model was trained with supervised fine-tuning (SFT) using a LoRA adapter; a minimal reproduction sketch is included at the end of this card.

### Framework versions

- PEFT: 0.18.1
- TRL: 0.29.0
- Transformers: 5.2.0
- PyTorch: 2.8.0+cu128
- Datasets: 4.6.0
- Tokenizers: 0.22.2

## Citations

Cite TRL as:

```bibtex
@software{vonwerra2020trl,
  title   = {{TRL: Transformers Reinforcement Learning}},
  author  = {von Werra, Leandro and Belkada, Younes and Tunstall, Lewis and Beeching, Edward and Thrush, Tristan and Lambert, Nathan and Huang, Shengyi and Rasul, Kashif and Gallouédec, Quentin},
  license = {Apache-2.0},
  url     = {https://github.com/huggingface/trl},
  year    = {2020}
}
```
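
## Training sketch

The exact dataset and hyperparameters used for this adapter are not documented in this card, so the following is only a minimal sketch of how an equivalent SFT run could be set up with TRL and PEFT. The dataset (`yahma/alpaca-cleaned`), LoRA rank, target modules, and training arguments below are placeholder assumptions, not the values used for this model.

```python
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

def format_example(example):
    # Mirrors the prompt template shown in the Quick start section above.
    return {
        "text": (
            f"### Instruction:\n{example['instruction']}\n\n"
            f"### Input:\n{example['input']}\n\n"
            f"### Output:\n{example['output']}"
        )
    }

# Placeholder instruction dataset; the actual training data is not documented.
dataset = load_dataset("yahma/alpaca-cleaned", split="train").map(format_example)

# Assumed LoRA settings, chosen only for illustration.
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

# Assumed training arguments.
training_args = SFTConfig(
    output_dir="smol-code-finetuned",
    per_device_train_batch_size=4,
    num_train_epochs=1,
    learning_rate=2e-4,
)

trainer = SFTTrainer(
    model="HuggingFaceTB/SmolLM2-135M-Instruct",  # base model from this card
    train_dataset=dataset,
    args=training_args,
    peft_config=peft_config,
)
trainer.train()
```

After training, `trainer.save_model()` writes the adapter weights that `AutoPeftModelForCausalLM` loads in the Quick start example above.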