---
license: apache-2.0
language:
- en
base_model: OpenGVLab/InternVL3_5-14B-Pretrained
tags:
- internvl
- internvl_chat
- vision-language
- multimodal
- medical
- scientific
- reasoning
pipeline_tag: image-text-to-text
library_name: transformers
---

# KVL

## Introduction

**KVL** is a vision-language model built on [InternVL3_5-14B-Pretrained](https://huggingface.co/OpenGVLab/InternVL3_5-14B-Pretrained) through supervised fine-tuning (SFT) on a curated collection of approximately **4 million** high-quality multimodal samples. The model is designed to excel at scientific, medical, and reasoning-intensive vision-language tasks.

## Model Architecture

| Component | Details |
|-----------|---------|
| Base Model | [InternVL3_5-14B-Pretrained](https://huggingface.co/OpenGVLab/InternVL3_5-14B-Pretrained) |
| Vision Encoder | InternViT-300M (0.3B parameters) |
| Language Model | Qwen3-14B (14.8B parameters) |
| Total Parameters | 15.1B |
| Precision | BF16 |
| Architecture | ViT-MLP-LLM (InternVL Chat) |
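
If you want to check the component breakdown above against the released weights, the checkpoint's configuration can be inspected directly. A minimal sketch, assuming the InternVL-style remote-code config exposes `vision_config` and `llm_config` (verify the exact attribute names in the checkpoint's `config.json`):

```python
from transformers import AutoConfig

# Load only the configuration (no weights) and print the two sub-configs.
# `vision_config` / `llm_config` are the attribute names used by InternVL's
# remote-code InternVLChatConfig; treat them as an assumption and check
# config.json if they differ.
config = AutoConfig.from_pretrained("amoeba04/KVL", trust_remote_code=True)
print(config.vision_config)  # InternViT-300M vision encoder settings
print(config.llm_config)     # Qwen3-14B language model settings
```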

## Training Details

### Training Configuration

| Hyperparameter | Value |
|----------------|-------|
| Training Type | Full Parameter Fine-tuning |
| Learning Rate | 1e-5 |
| LR Scheduler | Cosine |
| Epochs | 1 |
| Batch Size | 2 (per device) |
| Gradient Accumulation | 32 |
| Number of GPUs | 8 |
| Effective Batch Size | 512 |
| Max Sequence Length | 16,384 |
| Optimizer | AdamW (fused) |
| Weight Decay | 0.1 |
| DeepSpeed | ZeRO Stage 3 |
| Framework | [ms-swift](https://github.com/modelscope/ms-swift) |
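
For reference, the configuration above corresponds to a fairly standard full-parameter SFT run in ms-swift. The sketch below is hypothetical: it assumes ms-swift 3.x's Python entry point (`sft_main` with `TrainArguments`), argument names may vary between versions, and the dataset path is a placeholder rather than the released training mixture. The effective batch size follows from 2 (per device) × 32 (accumulation) × 8 (GPUs) = 512.

```python
# Hypothetical ms-swift 3.x sketch mirroring the table above; launch under
# torchrun/DeepSpeed across 8 GPUs. Argument names follow ms-swift /
# transformers TrainingArguments and may differ in other versions.
from swift.llm import sft_main, TrainArguments

sft_main(TrainArguments(
    model='OpenGVLab/InternVL3_5-14B-Pretrained',
    train_type='full',                        # full-parameter fine-tuning
    dataset=['path/to/sft_mixture.jsonl'],    # placeholder, not the released mix
    torch_dtype='bfloat16',
    num_train_epochs=1,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=32,           # 2 x 32 x 8 GPUs = 512 effective
    learning_rate=1e-5,
    lr_scheduler_type='cosine',
    weight_decay=0.1,
    max_length=16384,
    deepspeed='zero3',
))
```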

### Training Datasets (~4M samples)

| Dataset | Samples | Domain |
|---------|---------|--------|
| [ArXivQA](https://huggingface.co/datasets/MMInstruction/ArxivQA) | 100K | Scientific Papers |
| [VisCon-100k](https://huggingface.co/datasets/tiiuae/viscon-100k) | 100K | Visual Consistency |
| [Visual-CoT](https://huggingface.co/datasets/deepcs233/Visual-CoT) | 404K | Chain-of-Thought Reasoning |
| [SPIQA](https://huggingface.co/datasets/google/spiqa) | 263K | Scientific Paper QA |
| [PMC-VQA](https://huggingface.co/datasets/xmcmic/PMC-VQA) | 330K | Medical (PubMed) |
| [VQA-RAD](https://huggingface.co/datasets/flaviagiammarino/vqa-rad) | 1.7K | Medical Radiology |
| [Path-VQA](https://huggingface.co/datasets/flaviagiammarino/path-vqa) | 20K | Medical Pathology |
| [Kvasir-VQA-x1](https://huggingface.co/datasets/SimulaMet/Kvasir-VQA-x1) | 160K | Medical Endoscopy |
| [InternVL-Chat-SFT](https://huggingface.co/datasets/OpenGVLab/InternVL-Chat-V1-2-SFT-Data) | 1.27M | General VL Conversation |
| [OpenThoughts](https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k) | 114K | Reasoning |
| [VLAA-Thinking](https://huggingface.co/datasets/UCSC-VLAA/VLAA-Thinking) | 126K | Visual Reasoning |
| [MedMax](https://huggingface.co/datasets/mint-medmax/medmax_data) | 1.14M | Medical Comprehensive |

**Total: ~4 million samples**

## Quick Start

### Requirements

```bash
pip install "transformers>=4.52.1" torch torchvision timm
pip install flash-attn --no-build-isolation  # Optional but recommended
```

### Basic Usage

Since KVL is fine-tuned from InternVL3.5, its usage is **identical to InternVL's**: the standard InternVL inference code works as-is.

```python
import torch
from transformers import AutoTokenizer, AutoModel
from PIL import Image
import torchvision.transforms as T
from torchvision.transforms.functional import InterpolationMode

IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)

def build_transform(input_size):
    return T.Compose([
        T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
        T.Resize((input_size, input_size), interpolation=InterpolationMode.BICUBIC),
        T.ToTensor(),
        T.Normalize(mean=IMAGENET_MEAN, std=IMAGENET_STD)
    ])

def find_closest_aspect_ratio(aspect_ratio, target_ratios, width, height, image_size):
    # Pick the tiling grid whose aspect ratio best matches the input image.
    best_ratio_diff = float('inf')
    best_ratio = (1, 1)
    area = width * height
    for ratio in target_ratios:
        target_aspect_ratio = ratio[0] / ratio[1]
        ratio_diff = abs(aspect_ratio - target_aspect_ratio)
        if ratio_diff < best_ratio_diff:
            best_ratio_diff = ratio_diff
            best_ratio = ratio
        elif ratio_diff == best_ratio_diff:
            if area > 0.5 * image_size * image_size * ratio[0] * ratio[1]:
                best_ratio = ratio
    return best_ratio

def dynamic_preprocess(image, min_num=1, max_num=12, image_size=448, use_thumbnail=False):
    # Split the image into up to `max_num` tiles of size `image_size`, preserving aspect ratio.
    orig_width, orig_height = image.size
    aspect_ratio = orig_width / orig_height

    target_ratios = set(
        (i, j) for n in range(min_num, max_num + 1)
        for i in range(1, n + 1) for j in range(1, n + 1)
        if i * j <= max_num and i * j >= min_num
    )
    target_ratios = sorted(target_ratios, key=lambda x: x[0] * x[1])

    target_aspect_ratio = find_closest_aspect_ratio(
        aspect_ratio, target_ratios, orig_width, orig_height, image_size
    )

    target_width = image_size * target_aspect_ratio[0]
    target_height = image_size * target_aspect_ratio[1]
    blocks = target_aspect_ratio[0] * target_aspect_ratio[1]

    resized_img = image.resize((target_width, target_height))
    processed_images = []
    for i in range(blocks):
        box = (
            (i % (target_width // image_size)) * image_size,
            (i // (target_width // image_size)) * image_size,
            ((i % (target_width // image_size)) + 1) * image_size,
            ((i // (target_width // image_size)) + 1) * image_size
        )
        split_img = resized_img.crop(box)
        processed_images.append(split_img)
    if use_thumbnail and len(processed_images) != 1:
        thumbnail_img = image.resize((image_size, image_size))
        processed_images.append(thumbnail_img)
    return processed_images

def load_image(image_file, input_size=448, max_num=12):
    image = Image.open(image_file).convert('RGB')
    transform = build_transform(input_size=input_size)
    images = dynamic_preprocess(image, image_size=input_size, use_thumbnail=True, max_num=max_num)
    pixel_values = [transform(img) for img in images]
    pixel_values = torch.stack(pixel_values)
    return pixel_values

# Load model
model_path = "amoeba04/KVL"
model = AutoModel.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    use_flash_attn=True,
    trust_remote_code=True
).eval().cuda()

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True, use_fast=False)

# Inference
pixel_values = load_image('your_image.jpg').to(torch.bfloat16).cuda()
generation_config = dict(max_new_tokens=1024, do_sample=False)

question = '<image>\nDescribe this image in detail.'
response = model.chat(tokenizer, pixel_values, question, generation_config)
print(response)
```
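
Because KVL keeps the InternVL chat interface, multi-turn conversation works the same way as with the base model. A minimal sketch, assuming the inherited `history` / `return_history` arguments of `model.chat` and continuing from the variables defined above:

```python
# Multi-turn follow-up: pass the returned history back into the next call
# (assumes the InternVL chat signature inherited from the base model).
question = '<image>\nDescribe this image in detail.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
                               history=None, return_history=True)
print(response)

question = 'Summarize the key findings in one sentence.'
response, history = model.chat(tokenizer, pixel_values, question, generation_config,
                               history=history, return_history=True)
print(response)
```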

### Multi-GPU Inference

```python
model = AutoModel.from_pretrained(
    "amoeba04/KVL",
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    use_flash_attn=True,
    trust_remote_code=True,
    device_map="auto"  # Automatic multi-GPU distribution
).eval()
```

## Evaluation with VLMEvalKit

This model is fully compatible with [VLMEvalKit](https://github.com/open-compass/VLMEvalKit).

To add KVL to VLMEvalKit, register it in `vlmeval/config.py`:

```python
from functools import partial
from vlmeval.vlm import InternVLChat

# Add to the ungrouped dict
"KVL": partial(InternVLChat, model_path="amoeba04/KVL", max_new_tokens=16384, version="V2.0"),
```

Then run evaluation:

```bash
python run.py --data MMBench_DEV_EN --model KVL --verbose
```

## Intended Use

- **Scientific Document Understanding**: Analyzing figures, tables, and diagrams from scientific papers
- **Medical Image Analysis**: Radiology, pathology, and endoscopy image interpretation
- **Visual Question Answering**: General and domain-specific VQA tasks
- **Chain-of-Thought Reasoning**: Complex visual reasoning with step-by-step explanations

## Acknowledgements

- [InternVL Team](https://github.com/OpenGVLab/InternVL) for the excellent base model
- [ms-swift](https://github.com/modelscope/ms-swift) for the training framework
- All dataset creators for their valuable contributions

## License

This model is released under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0).