Adaptive Image Quality Assessment via Teaching Large Multimodal Model to Compare
Paper • 2405.19298 • Published
How to use VQA-CityU/Compare2Score with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="VQA-CityU/Compare2Score")
# Load the model directly
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("VQA-CityU/Compare2Score", dtype="auto")
How to use VQA-CityU/Compare2Score with vLLM:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "VQA-CityU/Compare2Score"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "VQA-CityU/Compare2Score",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
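As a sketch, the same completion request shown with curl can be issued from Python using only the standard library. The endpoint, payload fields, and port mirror the curl command and assume the vLLM server started above is running on localhost:8000.

```python
import json
import urllib.request

# Same body as the curl example above
payload = {
    "model": "VQA-CityU/Compare2Score",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5,
}

# Build the POST request against the OpenAI-compatible endpoint
req = urllib.request.Request(
    "http://localhost:8000/v1/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Uncomment once the server is up:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["text"])
```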
How to use VQA-CityU/Compare2Score with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "VQA-CityU/Compare2Score" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "VQA-CityU/Compare2Score",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
# Or run the SGLang server in Docker:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "VQA-CityU/Compare2Score" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "VQA-CityU/Compare2Score",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
How to use VQA-CityU/Compare2Score with Docker Model Runner:
docker model run hf.co/VQA-CityU/Compare2Score
The model corresponds to Compare2Score.
import requests
import torch
from transformers import AutoModelForCausalLM
from PIL import Image

# Load the model with its custom scoring code
model = AutoModelForCausalLM.from_pretrained("q-future/Compare2Score", trust_remote_code=True,
                                             attn_implementation="eager",
                                             torch_dtype=torch.float16, device_map="auto")

image_path_url = "https://raw.githubusercontent.com/Q-Future/Q-Align/main/fig/singapore_flyer.jpg"
print("The quality score of this image is {}.".format(model.score(image_path_url)))
# Install from source:
git clone https://github.com/Q-Future/Compare2Score.git
cd Compare2Score
pip install -e .
from q_align import Compare2Scorer
from PIL import Image
scorer = Compare2Scorer()
image_path = "figs/i04_03_4.bmp"
print("The quality score of this image is {}.".format(scorer(image_path)))
@article{zhu2024adaptive,
title={Adaptive Image Quality Assessment via Teaching Large Multimodal Model to Compare},
author={Zhu, Hanwei and Wu, Haoning and Li, Yixuan and Zhang, Zicheng and Chen, Baoliang and Zhu, Lingyu and Fang, Yuming and Zhai, Guangtao and Lin, Weisi and Wang, Shiqi},
journal={arXiv preprint arXiv:2405.19298},
year={2024},
}