Instructions for using prithivMLmods/QvQ-Step-Tiny with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use prithivMLmods/QvQ-Step-Tiny with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="prithivMLmods/QvQ-Step-Tiny")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
pipe(text=messages)
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("prithivMLmods/QvQ-Step-Tiny")
model = AutoModelForImageTextToText.from_pretrained("prithivMLmods/QvQ-Step-Tiny")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use prithivMLmods/QvQ-Step-Tiny with vLLM:
Install from pip and serve the model
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "prithivMLmods/QvQ-Step-Tiny"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "prithivMLmods/QvQ-Step-Tiny",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
```

Use Docker
docker model run hf.co/prithivMLmods/QvQ-Step-Tiny
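For programmatic access, the same OpenAI-compatible endpoint can be called with the official `openai` Python client instead of curl. This is a minimal sketch, assuming the `vllm serve` command above is running on its default port 8000; the API key is a placeholder because the local server does not check it by default.

```python
# Minimal sketch: query the local vLLM server through its OpenAI-compatible API.
# Assumes "vllm serve prithivMLmods/QvQ-Step-Tiny" is running on localhost:8000.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # key is unused locally

response = client.chat.completions.create(
    model="prithivMLmods/QvQ-Step-Tiny",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
                    },
                },
            ],
        }
    ],
    max_tokens=128,
)
print(response.choices[0].message.content)
```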
- SGLang
How to use prithivMLmods/QvQ-Step-Tiny with SGLang:
Install from pip and serve the model
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "prithivMLmods/QvQ-Step-Tiny" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "prithivMLmods/QvQ-Step-Tiny",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
```

Use Docker images

```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "prithivMLmods/QvQ-Step-Tiny" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "prithivMLmods/QvQ-Step-Tiny",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
```
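SGLang serves the same OpenAI-compatible protocol, so the `openai` Python client shown in the vLLM section also works here by pointing `base_url` at port 30000. This is a minimal streaming sketch under that assumption:

```python
# Minimal sketch: stream a response from the local SGLang server (assumes port 30000 as above).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")  # key is unused locally

stream = client.chat.completions.create(
    model="prithivMLmods/QvQ-Step-Tiny",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image step by step."},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
                    },
                },
            ],
        }
    ],
    stream=True,  # tokens arrive incrementally instead of one final message
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```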
- Docker Model Runner
How to use prithivMLmods/QvQ-Step-Tiny with Docker Model Runner:
docker model run hf.co/prithivMLmods/QvQ-Step-Tiny
QvQ-Step-Tiny [2B]
QvQ-Step-Tiny is a vision-language model for step-by-step context explanation, based on the Qwen2-VL architecture and fine-tuned on the VCR datasets to produce systematic, stepwise explanations. It is built on the Qwen2VLForConditionalGeneration framework with 2.21 billion parameters and uses BF16 (Brain Floating Point 16) precision.
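As a quick sanity check, the parameter count and storage precision can be inspected after loading the checkpoint; this is a minimal sketch and assumes a machine with enough memory to hold the 2.21B-parameter model.

```python
# Minimal sketch: confirm parameter count and precision of the checkpoint.
from transformers import Qwen2VLForConditionalGeneration

model = Qwen2VLForConditionalGeneration.from_pretrained(
    "prithivMLmods/QvQ-Step-Tiny", torch_dtype="auto"  # "auto" keeps the stored weight dtype
)
total_params = sum(p.numel() for p in model.parameters())
print(f"Parameters: {total_params / 1e9:.2f}B")          # expected ~2.21B
print(f"Weight dtype: {next(model.parameters()).dtype}")  # expected torch.bfloat16
```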
Quickstart with Transformers
Here is a code snippet showing how to use the chat model with transformers and qwen_vl_utils:
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info
# default: Load the model on the available device(s)
model = Qwen2VLForConditionalGeneration.from_pretrained(
    "prithivMLmods/QvQ-Step-Tiny", torch_dtype="auto", device_map="auto"
)

# Load the processor (needed below for chat templating and image/video preprocessing)
processor = AutoProcessor.from_pretrained("prithivMLmods/QvQ-Step-Tiny")
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
},
{"type": "text", "text": "Describe this image."},
],
}
]
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference: Generation of the output
generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
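For memory and speed trade-offs, the Qwen2-VL processor lets you bound the visual token budget per image. A minimal sketch, assuming the same checkpoint as above; the pixel bounds below are illustrative defaults, not tuned values:

```python
# Minimal sketch: cap the image resolution fed to the vision encoder to trade accuracy
# for GPU memory and speed.
from transformers import AutoProcessor

min_pixels = 256 * 28 * 28   # lower bound on visual tokens per image
max_pixels = 1280 * 28 * 28  # upper bound; lower this to reduce memory usage
processor = AutoProcessor.from_pretrained(
    "prithivMLmods/QvQ-Step-Tiny", min_pixels=min_pixels, max_pixels=max_pixels
)
```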
Key Enhancements of QvQ-Step-Tiny
State-of-the-Art Visual Understanding
- QvQ-Step-Tiny inherits the state-of-the-art capabilities of Qwen2-VL for understanding images of various resolutions and aspect ratios.
- It excels on visual reasoning benchmarks such as MathVista, DocVQA, RealWorldQA, and MTVQA, making it a powerful tool for detailed visual content analysis and question answering.
Extended Video Understanding
- With the ability to process and comprehend videos of over 20 minutes, QvQ-Step-Tiny supports high-quality video-based question answering, conversational dialogs, and video content generation.
- It provides systematic, step-by-step explanations of video content, which is ideal for educational, entertainment, and professional applications; a minimal video-inference sketch follows this list.
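The sketch below shows video-based inference using the message format supported by qwen_vl_utils; the local video path, fps, and pixel budget are illustrative assumptions, not values from this model card.

```python
# Minimal sketch: step-by-step video explanation with QvQ-Step-Tiny.
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

model = Qwen2VLForConditionalGeneration.from_pretrained(
    "prithivMLmods/QvQ-Step-Tiny", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("prithivMLmods/QvQ-Step-Tiny")

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "video",
                "video": "file:///path/to/video.mp4",  # hypothetical local path
                "max_pixels": 360 * 420,                # cap per-frame resolution
                "fps": 1.0,                             # sample one frame per second
            },
            {"type": "text", "text": "Explain what happens in this video step by step."},
        ],
    }
]
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs, padding=True, return_tensors="pt"
).to(model.device)

generated_ids = model.generate(**inputs, max_new_tokens=256)
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, generated_ids)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```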
Integration with Devices and Systems
- Thanks to its advanced reasoning and decision-making capabilities, QvQ-Step-Tiny can act as an intelligent agent for operating devices such as mobile phones, robots, and other automated systems.
- It can process visual environments alongside textual instructions to enable seamless automation and intelligent control of devices.
Multilingual Support for Text in Images
- QvQ-Step-Tiny supports multilingual text recognition within images, handling English, Chinese, and a wide range of languages, including most European languages, Japanese, Korean, Arabic, and Vietnamese.
- This makes it an effective model for global applications, from document analysis to multi-language accessibility solutions; a minimal prompt sketch follows this list.
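As a concrete usage pattern, multilingual text in an image can be transcribed with the same pipeline helper shown in the Transformers section; the image URL below is a placeholder assumption, not an official example asset.

```python
# Minimal sketch: ask the model to read multilingual text in an image
# (reuses the "image-text-to-text" pipeline shown in the Transformers section above).
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="prithivMLmods/QvQ-Step-Tiny")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/multilingual-sign.jpg"},  # placeholder URL
            {"type": "text", "text": "Transcribe all visible text and state the language of each line."},
        ],
    }
]
print(pipe(text=messages))
```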
Intended Use
- Step-by-Step Context Explanation: Designed to provide detailed and systematic explanations for images and videos, making it ideal for educational, analytical, and instructional tasks.
- Visual Content Understanding: Effective for analyzing visual content across diverse resolutions, aspect ratios, and formats, including documents (DocVQA) and mathematical visuals (MathVista).
- Video-based Reasoning: Supports comprehension of long-form videos (20+ minutes) for tasks like video question answering, dialog generation, and instructional content creation.
- Device Integration: Can act as an intelligent agent to automate device operations (e.g., mobile phones, robots) by understanding visual environments and processing text-based instructions.
- Multilingual Visual Text Support: Recognizes and processes multilingual text within images, making it suitable for global applications like document processing and accessibility tools.
- Advanced Question Answering: Excels in question-answering tasks involving images, videos, and multimodal data, serving as a robust tool for interactive systems.
- Accessibility Enhancements: Assists visually impaired users by explaining visual and textual content in a clear, step-by-step manner.
Limitations
- Model Size Constraints: At 2.21 billion parameters, it may not perform as well as larger models for highly complex or nuanced tasks.
- Accuracy with Low-Quality Inputs: Performance may degrade when dealing with low-resolution images, poor lighting conditions, or noisy video/audio inputs.
- Specialized Training Gaps: While strong on general benchmarks, it might struggle with niche or highly specialized domains that require additional fine-tuning.
- Multilingual Text Variability: While multilingual text recognition is supported, performance may vary across less common or highly complex languages.
- Context Length Tradeoffs: Processing very long videos (e.g., over 20 minutes) or highly dense visual data might challenge its coherence or explanation accuracy.
- Device Integration Complexity: Deploying the model for operating devices or robots may require significant engineering efforts and robust integration pipelines.
- Resource-Intensive for Long Contexts: Despite BF16 precision, tasks with extended context lengths or high-resolution inputs could demand substantial computational resources.
- Ambiguity in Prompts: Ambiguously phrased or poorly structured input prompts may lead to incomplete or inaccurate explanations.
- Static Model: The model cannot learn dynamically from user interactions or adapt its behavior without retraining.
Applications
- Education: Step-by-step explanations for visual and textual content in learning materials, including images and videos.
- Automation: Integrating with robotics or smart devices for performing tasks based on visual and textual data.
- Content Creation: Assisting in creating or analyzing video and image-based content, such as tutorials or product demos.
- Accessibility: Enhancing accessibility tools for visually impaired or multilingual users by providing clear explanations of image or video content.
- Global Q&A Systems: Supporting cross-lingual question answering in images and videos for diverse user bases.
Model tree for prithivMLmods/QvQ-Step-Tiny
- Base model: Qwen/Qwen2-VL-2B