---
base_model: Qwen/Qwen2-VL-2B-Instruct
library_name: transformers
model_name: qwen_report
tags:
  - trl
  - sft
  - Image-Text-to-Text
  - Transformers
  - Safetensors
  - English
  - qwen2_vl
  - multimodal
licence: license
datasets:
  - eltorio/ROCOv2-radiology
pipeline_tag: image-text-to-text
---

# Model Card for qwen_report

This model is a fine-tuned version of [Qwen/Qwen2-VL-2B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import AutoModelForVision2Seq, AutoProcessor, BitsAndBytesConfig
import torch

# Hugging Face model id
model_id = "BoghdadyJR/qwen_report"

# 4-bit quantization config (NF4 with double quantization)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

# Load the fine-tuned model in 4-bit precision
model = AutoModelForVision2Seq.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.float16,
    quantization_config=bnb_config,
)

processor = AutoProcessor.from_pretrained(model_id)
```
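Loading the weights alone does not produce a report; generation goes through the processor's chat template. The snippet below sketches the conversational message format Qwen2-VL expects. The image filename, prompt, and `max_new_tokens` value are illustrative assumptions, and the generation calls are shown as comments because they require the model loaded above:

```python
def build_messages(image, prompt):
    """Build a Qwen2-VL style conversation: a single user turn that
    pairs an image with a text instruction."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "image": image},
                {"type": "text", "text": prompt},
            ],
        }
    ]

messages = build_messages("chest_xray.png", "Write a short radiology report for this image.")

# With `model` and `processor` loaded as above, inference follows the
# usual Transformers vision-chat pattern (sketch, not run here):
#   text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
#   inputs = processor(text=[text], images=[image], return_tensors="pt").to(model.device)
#   output_ids = model.generate(**inputs, max_new_tokens=256)
#   print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```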

## Training procedure

This model was trained with supervised fine-tuning (SFT) on the eltorio/ROCOv2-radiology dataset.
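The card does not publish the training script or hyperparameters. The fragment below only illustrates the general shape of a TRL SFT run; the model, dataset preprocessing, and every argument value are placeholders, not this model's actual configuration:

```python
# Hypothetical outline of a TRL SFT run -- not the actual training
# script for qwen_report; all values below are placeholders.
from trl import SFTConfig, SFTTrainer

training_args = SFTConfig(
    output_dir="qwen_report",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
)

trainer = SFTTrainer(
    model=model,                  # e.g. the Qwen2-VL model loaded as in Quick start
    args=training_args,
    train_dataset=train_dataset,  # e.g. eltorio/ROCOv2-radiology mapped to chat format
)
trainer.train()
```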

### Framework versions

- TRL: 0.15.1
- Transformers: 4.48.3
- PyTorch: 2.5.1+cu124
- Datasets: 3.3.1
- Tokenizers: 0.21.0

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```