---
base_model: Azrail/smallm_70
library_name: transformers
model_name: smallm_70_instruct
tags:
- generated_from_trainer
- trl
- sft
- smallm
licence: license
---

# Model Card for smallm_70_instruct

This model is a fine-tuned version of [Azrail/smallm_70](https://huggingface.co/Azrail/smallm_70).
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline, AutoTokenizer

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"

tokenizer = AutoTokenizer.from_pretrained("Azrail/smallm_70_instruct")
generator = pipeline(
    "text-generation",
    model="Azrail/smallm_70_instruct",
    device="cuda",
    trust_remote_code=True,
    tokenizer=tokenizer,
)
output = generator(
    [{"role": "user", "content": question}],
    max_new_tokens=128,
    return_full_text=False,
)[0]
print(output["generated_text"])
```

## Training procedure

[Visualize in Weights & Biases](https://wandb.ai/azrails-main/huggingface/runs/gjmja2rh)

This model was trained with SFT.

### Framework versions

- TRL: 0.16.1
- Transformers: 4.50.3
- PyTorch: 2.6.0+cu126
- Datasets: 3.5.0
- Tokenizers: 0.21.1
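For context on what SFT does with chat data: each conversation is typically flattened into a single training string via the tokenizer's chat template before the next-token loss is computed. The sketch below illustrates this with a minimal ChatML-style formatter; the `format_chatml` function and the `<|im_start|>`/`<|im_end|>` markers are illustrative assumptions for this sketch, not necessarily the exact template shipped with this model, and the actual training run used TRL's `SFTTrainer` rather than hand-rolled formatting.

```python
# Illustrative sketch: how an SFT pipeline might flatten chat messages
# into one training string. The ChatML-style markers below are an
# assumption for illustration, not necessarily this model's template.

def format_chatml(messages):
    """Render a list of {"role", "content"} dicts as ChatML-style text."""
    parts = []
    for msg in messages:
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>")
    return "\n".join(parts) + "\n"

example = [
    {"role": "user", "content": "What is 2 + 2?"},
    {"role": "assistant", "content": "2 + 2 = 4."},
]
print(format_chatml(example))
```

In a real run this formatting is handled internally by `SFTTrainer` (or by `tokenizer.apply_chat_template`), so you normally pass the raw message lists and let the library render them.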