---
base_model: Qwen/Qwen2-VL-2B-Instruct
library_name: peft
model_name: output
tags:
- adapter
- lora
- sft
- transformers
- trl
license: apache-2.0
pipeline_tag: text-generation
---
# Model Card for output

This model is a fine-tuned version of [Qwen/Qwen2-VL-2B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct).
It has been trained with [TRL](https://github.com/huggingface/trl) using a LoRA adapter via [PEFT](https://github.com/huggingface/peft).
## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"

generator = pipeline(
    "text-generation",
    model="RetrO21/agrofinetune",  # replace with your repo
    device="cuda",
)
output = generator(
    [{"role": "user", "content": question}],
    max_new_tokens=128,
    return_full_text=False,
)[0]
print(output["generated_text"])
```