---
base_model: Qwen/Qwen2-VL-2B-Instruct
library_name: peft
model_name: output
tags:
- adapter
- lora
- sft
- transformers
- trl
license: apache-2.0
pipeline_tag: text-generation
---
# Model Card for output
This model is a fine-tuned version of [Qwen/Qwen2-VL-2B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct).
It was trained with [TRL](https://github.com/huggingface/trl) using a [PEFT](https://github.com/huggingface/peft) LoRA adapter.
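As background, LoRA freezes the base weights and learns a low-rank update: instead of a full weight delta `dW`, it trains two small matrices `B` and `A` whose product approximates it. The NumPy sketch below is purely illustrative; the shapes, rank, and scaling factor are example values, not this adapter's actual configuration.

```python
import numpy as np

# Illustrative shapes: a (d_out x d_in) linear layer with LoRA rank r << d.
d_out, d_in, r = 8, 16, 2
alpha = 16  # LoRA scaling; the effective update is (alpha / r) * B @ A
rng = np.random.default_rng(0)

W = rng.normal(size=(d_out, d_in))  # frozen base weight
A = rng.normal(size=(r, d_in))      # trainable down-projection
B = rng.normal(size=(d_out, r))     # trainable up-projection

x = rng.normal(size=(d_in,))

# Adapter forward pass: base output plus the low-rank correction.
y_lora = W @ x + (alpha / r) * (B @ (A @ x))

# Merging the adapter into the base weight gives the same output,
# which is why a merged LoRA model runs with no extra inference cost.
y_merged = (W + (alpha / r) * (B @ A)) @ x

# Trainable parameters: full dW would be 8*16 = 128; LoRA needs
# only r*(d_out + d_in) = 2*(8+16) = 48.
print(np.allclose(y_lora, y_merged))
```

The same factorization is what lets PEFT store this checkpoint as a small adapter on top of `Qwen/Qwen2-VL-2B-Instruct` rather than a full copy of the weights.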
## Quick start
```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline(
    "text-generation",
    model="RetrO21/agrofinetune",  # replace with your repo
    device="cuda",
)
output = generator(
    [{"role": "user", "content": question}],
    max_new_tokens=128,
    return_full_text=False,
)[0]
print(output["generated_text"])
```