
Qwen2-VL-7B App LoRA Adapters

This repository contains LoRA (Low-Rank Adaptation) adapters for the Qwen2-VL-7B-Instruct model, fine-tuned on different mobile application datasets.

Model Description

These LoRA adapters were trained using federated learning approaches on app-specific datasets from the FedMABench benchmark. Each app contains:

  • v0: Initial training checkpoint (the initial LoRA after base training)
  • global_lora_10: Global aggregated LoRA after 10 rounds of federated learning

Available Apps (14 total)

| App | v0 Checkpoint | Global LoRA (Round 10) | Training Version |
|---|---|---|---|
| adidas | ✅ checkpoint-108 | ✅ global_lora_10 | v0-20260131-111220 |
| amazon | ✅ checkpoint-312 | ✅ global_lora_10 | v0-20260130-190721 |
| calendar | ✅ checkpoint-129 | ✅ global_lora_10 | v0-20260131-131350 |
| clock | ✅ checkpoint-198 | ✅ global_lora_10 | v0-20260131-052630 |
| decathlon | ✅ checkpoint-63 | ✅ global_lora_10 | v0-20260131-122711 |
| ebay | ✅ checkpoint-219 | ✅ global_lora_10 | v0-20260130-223012 |
| etsy | ✅ checkpoint-60 | ✅ global_lora_10 | v0-20260131-202612 |
| flipkart | ✅ checkpoint-174 | ✅ global_lora_10 | v0-20260131-010305 |
| gmail | ✅ checkpoint-201 | ✅ global_lora_10 | v0-20260131-025059 |
| google_drive | ✅ checkpoint-63 | ✅ global_lora_10 | v0-20260131-103300 |
| google_maps | ✅ checkpoint-60 | ✅ global_lora_10 | v0-20260131-150323 |
| kitchen_stories | ✅ checkpoint-75 | ✅ global_lora_10 | v0-20260131-155209 |
| reminder | ✅ checkpoint-138 | ✅ global_lora_10 | v0-20260131-073315 |
| youtube | ✅ checkpoint-78 | ✅ global_lora_10 | v0-20260131-093519 |

Directory Structure

```
qwen2vl-7b-lora-apps/
├── adidas/
│   ├── v0/                 # Initial checkpoint LoRA
│   └── global_lora_10/     # Round 10 federated LoRA
├── amazon/
│   ├── v0/
│   └── global_lora_10/
├── calendar/
│   ├── v0/
│   └── global_lora_10/
...                         # (14 apps total)
```
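Every adapter is addressed by an `<app>/<stage>` subfolder within the repo. A small helper (hypothetical, not part of this repository) can build and validate that string before passing it as the `subfolder=` argument shown below:

```python
# Hypothetical helper: builds the "app/stage" subfolder string passed to
# PeftModel.from_pretrained(..., subfolder=...). Not part of this repo.
APPS = {
    "adidas", "amazon", "calendar", "clock", "decathlon", "ebay", "etsy",
    "flipkart", "gmail", "google_drive", "google_maps", "kitchen_stories",
    "reminder", "youtube",
}
STAGES = {"v0", "global_lora_10"}

def adapter_subfolder(app: str, stage: str = "global_lora_10") -> str:
    """Return the repo subfolder for one app's LoRA, e.g. 'amazon/global_lora_10'."""
    if app not in APPS:
        raise ValueError(f"unknown app: {app!r}")
    if stage not in STAGES:
        raise ValueError(f"unknown stage: {stage!r}")
    return f"{app}/{stage}"
```

For example, `adapter_subfolder("amazon")` returns `"amazon/global_lora_10"`, and an unknown app name fails fast instead of triggering a missing-file error at download time.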

Usage

Loading with PEFT

```python
from peft import PeftModel
from transformers import Qwen2VLForConditionalGeneration

# Load base model
base_model = Qwen2VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2-VL-7B-Instruct",
    torch_dtype="auto",
    device_map="auto"
)

# Load a specific app LoRA (e.g., amazon global_lora_10)
model = PeftModel.from_pretrained(
    base_model,
    "bmh201708/qwen2vl-7b-lora-apps",
    subfolder="amazon/global_lora_10"
)
```

Loading with Hugging Face Hub

```python
from huggingface_hub import snapshot_download

# Download specific app LoRA
local_path = snapshot_download(
    repo_id="bmh201708/qwen2vl-7b-lora-apps",
    allow_patterns=["amazon/global_lora_10/*"]
)
```
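`allow_patterns` accepts a list of globs, so several app adapters can be fetched in one call. A sketch (the pattern construction runs as-is; the network call is shown commented out):

```python
# Build allow_patterns for several apps at once; each glob selects one
# adapter subfolder. The repo id is the one used throughout this card.
apps = ["amazon", "gmail", "youtube"]
patterns = [f"{app}/global_lora_10/*" for app in apps]

# from huggingface_hub import snapshot_download
# local_path = snapshot_download(
#     repo_id="bmh201708/qwen2vl-7b-lora-apps",
#     allow_patterns=patterns,
# )
```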

Base Model

Qwen/Qwen2-VL-7B-Instruct

Training Details

These LoRAs were trained as part of the FedMABench (Federated Mobile Agent Benchmark) project for mobile GUI agent tasks. Each app represents a specific mobile application domain.
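The global adapters come from federated aggregation of per-app client updates. As a minimal illustration of the idea, here is a plain-Python FedAvg sketch (weighted averaging of client weight vectors; the actual FedMABench aggregation scheme may differ):

```python
# Minimal FedAvg sketch: the global weight is the example-count-weighted
# mean of the clients' local weights. Purely illustrative; FedMABench's
# actual aggregation may differ.
def fedavg(client_weights, client_sizes):
    """client_weights: list of flat float lists; client_sizes: examples per client."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]
```

With equal client sizes this reduces to a plain mean; larger clients pull the global weights proportionally toward their local solution.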

Related Repositories

License

Please refer to the Qwen2-VL license for usage terms.

Citation

If you use these LoRA adapters, please cite the FedMABench project.
