HIKARI-Rigel-8B-SkinCaption-LoRA


πŸ”Œ Model Type: LoRA Adapter

This is a LoRA adapter (~1.1 GB) β€” it must be loaded on top of the base model Qwen/Qwen3-VL-8B-Thinking.

βœ… Advantage: Lightweight β€” download only ~1.1 GB instead of ~17 GB.

⚠️ Requirement: You must separately load Qwen/Qwen3-VL-8B-Thinking (base model, ~17 GB) first.

πŸ’Ύ If you prefer a standalone ready-to-use model, see the merged version: E27085921/HIKARI-Rigel-8B-SkinCaption (~17 GB)
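Why is the adapter so much smaller than the base model? LoRA stores only a low-rank update (W + (alpha / r) Β· BΒ·A) per adapted weight, so the adapter holds the small A and B matrices instead of full weight copies. A toy, stdlib-only sketch of that idea (illustrative only β€” not the actual model code; all names here are hypothetical):

```python
# Toy illustration of the LoRA update: effective weight = W + (alpha / r) * B @ A.
# The adapter only needs to ship A (r x d_in) and B (d_out x r), not a full d_out x d_in matrix.

def matmul(X, Y):
    """Multiply two matrices represented as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)] for row in X]

def lora_forward(W, A, B, alpha, r, x):
    """Apply (W + (alpha / r) * B @ A) to a column vector x."""
    scale = alpha / r
    Wx = matmul(W, x)                 # frozen base-weight path
    BAx = matmul(matmul(B, A), x)     # low-rank adapter path
    return [[w[0] + scale * u[0]] for w, u in zip(Wx, BAx)]

# Toy dimensions: d = 4, rank r = 1 -> the adapter stores 2 * 4 = 8 numbers
# instead of the full 16-entry weight matrix.
W = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]  # frozen identity weight
A = [[1, 1, 1, 1]]                                            # r x d_in
B = [[1], [0], [0], [0]]                                      # d_out x r
x = [[1], [2], [3], [4]]                                      # input column vector
print(lora_forward(W, A, B, alpha=2, r=1, x=x))
```

At the full 8B scale the same arithmetic is why the adapter weighs ~1.1 GB while the base model weighs ~17 GB.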


What is this adapter?

LoRA adapter for HIKARI-Rigel-8B-SkinCaption, a clinical skin-lesion caption-generation model (checkpoint-init, ablation baseline). Metric: BLEU-4 = 9.82.
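For reference, BLEU-4 is the geometric mean of clipped 1- to 4-gram precisions multiplied by a brevity penalty. A minimal, stdlib-only sentence-level sketch (no smoothing; real evaluations typically use a library such as sacrebleu, and the 9.82 score above was not computed with this toy function):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Count the n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu4(candidate, reference):
    """Sentence-level BLEU-4: geometric mean of clipped 1-4-gram
    precisions times a brevity penalty (no smoothing)."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, 5):
        c_counts, r_counts = ngrams(cand, n), ngrams(ref, n)
        overlap = sum((c_counts & r_counts).values())  # clipped matches
        precisions.append(overlap / max(sum(c_counts.values()), 1))
    if min(precisions) == 0:
        return 0.0  # any zero precision collapses the geometric mean
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(math.log(p) for p in precisions) / 4)
```

A score of 9.82 corresponds to bleu4(...) β‰ˆ 0.0982 on this 0-1 scale, averaged over a test set.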

This is the ablation baseline adapter. For the best caption model, see HIKARI-Vega-8B-SkinCaption-Fused-LoRA.

See the full model card at E27085921/HIKARI-Rigel-8B-SkinCaption for complete details, usage examples, and performance comparison.


Usage

from peft import PeftModel
from transformers import Qwen3VLForConditionalGeneration, AutoProcessor
import torch
from PIL import Image

# Step 1: Load base model (Qwen3-VL-8B-Thinking, ~17 GB)
base = Qwen3VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen3-VL-8B-Thinking",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

# Step 2: Apply LoRA adapter (~1.1 GB)
model = PeftModel.from_pretrained(base, "E27085921/HIKARI-Rigel-8B-SkinCaption-LoRA")
processor = AutoProcessor.from_pretrained("E27085921/HIKARI-Rigel-8B-SkinCaption-LoRA", trust_remote_code=True)

# Step 3: Inference β€” minimal sketch (prompt text is illustrative);
# see full examples at E27085921/HIKARI-Rigel-8B-SkinCaption
image = Image.open("skin_lesion.jpg").convert("RGB")
messages = [{"role": "user", "content": [
    {"type": "image", "image": image},
    {"type": "text", "text": "Describe this skin lesion."},
]}]
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
).to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)
print(processor.decode(output_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))

For complete inference examples, including vLLM and SGLang production code, see: E27085921/HIKARI-Rigel-8B-SkinCaption


πŸ“„ Citation

@misc{hikari2026,
  title  = {HIKARI: RAG-in-Training for Skin Disease Diagnosis
            with Cascaded Vision-Language Models},
  author = {Watin Promfiy and Pawitra Boonprasart},
  year   = {2026},
  institution = {King Mongkut's Institute of Technology Ladkrabang,
                 Department of Information Technology, Bangkok, Thailand}
}

Made with ❀️ at King Mongkut's Institute of Technology Ladkrabang (KMITL)
