# V2V-Qwen-FineTuned
A fine-tuned LoRA adapter for `Qwen/Qwen2.5-3B-Instruct`, trained on the V2V / Autonomous Driving QA dataset.
The dataset is hosted separately: `BuRabea/v2v-autonomous-driving-qa`.
## What's inside
- `final_model/` — the final LoRA adapter weights plus tokenizer files (`adapter_config.json`, `adapter_model.safetensors`, `tokenizer.json`, etc.). Much smaller than the full Qwen model; use this for inference.
- `checkpoints/checkpoint-1875/` (and optionally more checkpoint folders) — full training state (optimizer, scheduler, `trainer_state.json`, RNG state, etc.) so you can resume training.
## How to use this model
### For inference
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel, PeftConfig

repo_id = "BuRabea/v2v-qwen-finetuned"
subfolder = "final_model"

# Load adapter config
config = PeftConfig.from_pretrained(repo_id, subfolder=subfolder)

# Load tokenizer from base model
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

# Load base model
base_model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    device_map="auto",
)

# Load adapter on top of base model
model = PeftModel.from_pretrained(base_model, repo_id, subfolder=subfolder)

# Define conversation in chat format
messages = [
    {"role": "system", "content": "You are a helpful research assistant specialized in V2V communication and autonomous driving."},
    {"role": "user", "content": "What are the recent challenges in V2V communication latency?"},
]

# Apply chat template (uses chat_template.jinja inside the repo)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Tokenize and move tensors to the model's device
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate response
outputs = model.generate(**inputs, max_new_tokens=150)

# Decode and print
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
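For reference, `apply_chat_template` for Qwen2.5 renders the messages in the ChatML format. A minimal sketch of roughly what the resulting prompt string looks like (illustrative only; the template shipped with the tokenizer is authoritative):

```python
def build_chatml_prompt(messages):
    """Approximate the ChatML prompt string produced by apply_chat_template for Qwen models."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages]
    # add_generation_prompt=True appends an open assistant turn for the model to continue
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

messages = [
    {"role": "system", "content": "You are a helpful research assistant."},
    {"role": "user", "content": "What are the recent challenges in V2V communication latency?"},
]
print(build_chatml_prompt(messages))
```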
### To resume training
`resume_from_checkpoint` expects a local path, so download the checkpoint folder from the Hub first:

```python
from huggingface_hub import snapshot_download

# Download only the checkpoint folder locally
local_dir = snapshot_download(
    "BuRabea/v2v-qwen-finetuned",
    allow_patterns="checkpoints/checkpoint-1875/*",
)
trainer.train(resume_from_checkpoint=f"{local_dir}/checkpoints/checkpoint-1875")
```
Make sure your training arguments match (LoRA settings, learning rate, etc.).
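The check can be done mechanically by comparing the new run's LoRA hyperparameters against the saved `adapter_config.json`. A minimal sketch (the key names are standard PEFT fields; the example values are placeholders, not this adapter's actual settings):

```python
# Keys from adapter_config.json that must agree between the saved adapter and a resumed run
REQUIRED_MATCH = ("r", "lora_alpha", "lora_dropout", "target_modules")

def lora_configs_match(saved: dict, new: dict) -> bool:
    """Return True if every critical LoRA hyperparameter agrees."""
    return all(saved.get(k) == new.get(k) for k in REQUIRED_MATCH)

# Placeholder values for illustration; read the real ones from final_model/adapter_config.json
saved = {"r": 16, "lora_alpha": 32, "lora_dropout": 0.05, "target_modules": ["q_proj", "v_proj"]}
print(lora_configs_match(saved, dict(saved)))        # same settings -> True
print(lora_configs_match(saved, {**saved, "r": 8}))  # changed rank -> False
```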
## Recommended use
- Use this model if you need a Qwen-based model specialized in V2V/autonomous driving QA.
- If you plan to extend it (new data, new domain, more epochs), resume from a checkpoint so you don't lose the optimizer and scheduler state.
- Always load the base Qwen model (`Qwen/Qwen2.5-3B-Instruct`) first, then the LoRA adapter.
## Dataset reference
The dataset used to train this adapter is available here:
`BuRabea/v2v-autonomous-driving-qa`
## Citation
If you use this model in your work, please cite both:
- The base Qwen model
- The V2V Autonomous Driving QA dataset
```bibtex
@misc{qwen-v2v2025,
  author       = {Amro Rabea},
  title        = {V2V-Qwen-FineTuned: LoRA Adapter Trained on V2V Autonomous Driving QA},
  year         = {2025},
  howpublished = {Hugging Face Model Hub},
  url          = {https://huggingface.co/BuRabea/v2v-qwen-finetuned}
}

@dataset{rabea2025v2vqa,
  author    = {Amro Rabea},
  title     = {V2V Autonomous Driving QA Dataset},
  year      = {2025},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/BuRabea/v2v-autonomous-driving-qa}
}
```
## Notes
- This adapter is not the full model; it requires `Qwen/Qwen2.5-3B-Instruct` as its base.
- If you load only the adapter without the base model, or use mismatched LoRA/base settings, results may be incorrect.
- Checkpoint folders take much more disk space; only upload them if they are needed for training resumption.