
✨ DERM-3R

DERM-3R is a resource-efficient multimodal multi-agent framework for dermatologic diagnosis and treatment based on Traditional Chinese Medicine (TCM), aimed at real-world clinical settings. This repository releases LoRA adapters (not full base weights) organized as a 3-stage clinical workflow: ① Recognition → ② Representation → ③ Reasoning.

🧱 Base model requirement

You must load these adapters on top of:

  • Base model: Qwen/Qwen2.5-VL-7B-Instruct

This repo provides LoRA adapters only. Make sure you have access to the base model.

🚀 Quickstart

Replace the adapter repo/subfolder names below with your actual release layout (e.g. `derm-3r`).

import torch
from transformers import AutoProcessor, AutoModelForVision2Seq
from peft import PeftModel

base_id = "Qwen/Qwen2.5-VL-7B-Instruct"
adapter_id = "MightyAntsGoesUp/DERM-3R-7B"  # this repo

processor = AutoProcessor.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForVision2Seq.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

model = PeftModel.from_pretrained(base, adapter_id, subfolder="derm-3r")
model.eval()
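After attaching the adapters, inference follows the standard Qwen2.5-VL chat flow. A minimal sketch is below, assuming the `model` and `processor` objects from the Quickstart; the image path, prompt text, and the `run_inference` helper name are illustrative placeholders, and the generation step itself is wrapped in a function rather than executed:

```python
# Qwen2.5-VL chat format: a list of messages, each with a role and a
# list of content parts (image and/or text). The image path and prompt
# below are placeholders.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "skin_lesion.jpg"},
            {"type": "text", "text": "Describe the lesion and suggest a TCM pattern."},
        ],
    }
]


def run_inference(model, processor, messages, image, max_new_tokens=512):
    """Apply the chat template, tokenize text + image, and generate.

    Sketch only: expects the `model`/`processor` from the Quickstart and a
    PIL image (or image path accepted by your processor version).
    """
    text = processor.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    inputs = processor(text=[text], images=[image], return_tensors="pt")
    inputs = inputs.to(model.device)
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens before decoding the answer.
    trimmed = out[:, inputs["input_ids"].shape[1]:]
    return processor.batch_decode(trimmed, skip_special_tokens=True)[0]
```

Depending on your `transformers` version, you may prefer the `qwen_vl_utils.process_vision_info` helper from the Qwen cookbook for preparing image inputs.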

🧩 Recommended: Merge LoRA with LLaMA-Factory

For deployment convenience, you may want a merged checkpoint (base + LoRA) so you don’t need to load adapters separately at inference time.

Official guide: https://llamafactory.readthedocs.io/en/latest/getting_started/merge_lora.html

1) Prepare a merge config YAML

Create merge_config.yaml:

### model
model_name_or_path: Qwen/Qwen2.5-VL-7B-Instruct
adapter_name_or_path: <PATH_OR_HF_CACHE_TO_YOUR_DERM3R_LORA>
template: qwen2_5_vl
finetuning_type: lora

### export
export_dir: <OUTPUT_DIR_FOR_MERGED_MODEL>
export_size: 2
export_device: cpu
export_legacy_format: false

Notes:

  • model_name_or_path must be an unquantized base model that matches the template.
  • adapter_name_or_path should point to your LoRA output directory (or downloaded adapter path).
  • When merging LoRA adapters, do not use a quantized base model or set quantization bits in the merge step.
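If you prefer not to depend on LLaMA-Factory, the same merge can in principle be done with PEFT's `merge_and_unload`. A hedged sketch follows (the function name and paths are placeholders; it loads the full base model, so it needs enough RAM/VRAM, and it is defined here but not run):

```python
def merge_with_peft(base_id, adapter_path, out_dir):
    """Merge LoRA weights into the base model with PEFT and save a
    standalone checkpoint (sketch only; requires the base weights).
    """
    import torch
    from transformers import AutoProcessor, AutoModelForVision2Seq
    from peft import PeftModel

    # Load the unquantized base model (same requirement as the YAML merge).
    base = AutoModelForVision2Seq.from_pretrained(
        base_id, torch_dtype=torch.bfloat16, trust_remote_code=True
    )
    # Fold the LoRA deltas into the base weights and drop the adapter wrappers.
    merged = PeftModel.from_pretrained(base, adapter_path).merge_and_unload()
    merged.save_pretrained(out_dir, safe_serialization=True)
    # Save the processor alongside so the directory is self-contained.
    AutoProcessor.from_pretrained(base_id, trust_remote_code=True).save_pretrained(out_dir)
```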

2) Run merge/export

llamafactory-cli export merge_config.yaml

After exporting, the merged model under export_dir can be loaded as a standard Transformers model directory.
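As a quick sanity check that the export produced a full model rather than a bare adapter, a pure-Python heuristic like the one below can be run over the directory listing (the file names shown are illustrative examples of a typical Transformers layout):

```python
def looks_like_merged_model(filenames):
    """Heuristic check on a directory listing: a merged export should have
    a config plus full weight files, and no LoRA adapter files."""
    names = set(filenames)
    has_config = "config.json" in names
    has_weights = any(n.endswith((".safetensors", ".bin")) for n in names)
    is_adapter_only = "adapter_config.json" in names
    return has_config and has_weights and not is_adapter_only


# With export_size: 2, large models are split into roughly 2 GB shards:
merged_listing = [
    "config.json",
    "generation_config.json",
    "model-00001-of-00004.safetensors",
    "model.safetensors.index.json",
    "tokenizer_config.json",
]
adapter_listing = ["adapter_config.json", "adapter_model.safetensors"]

print(looks_like_merged_model(merged_listing))   # True
print(looks_like_merged_model(adapter_listing))  # False
```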

🩺 Intended use & safety

DERM-3R is designed for research and clinical decision-support prototyping. It may generate plausible but incorrect medical content.

Do not use this model as a sole basis for real-world diagnosis or treatment decisions. Always involve qualified clinicians and follow institutional compliance requirements.
