---
license: apache-2.0
task_categories:
- video-text-to-text
- image-to-text
language:
- en
tags:
- colab
- notebook
- demo
- vlm
- models
- hf
- ocr
- reasoning
- code
size_categories:
- n<1K
---
# **VLM-Video-Understanding**

> A minimalistic demo for image inference and video understanding using OpenCV, built on top of several popular open-source Vision-Language Models (VLMs). This repository provides Colab notebooks demonstrating how to apply these VLMs to video and image tasks using Python and Gradio.

## Overview

This project showcases lightweight inference pipelines for the following:
- Video frame extraction and preprocessing (see the sketch after this list)
- Image-level inference with VLMs
- Real-time or pre-recorded video understanding
- OCR-based text extraction from video frames
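
As a rough illustration of the frame-extraction step, the sketch below samples every n-th frame from a video with OpenCV and converts it to RGB, which is what most VLM processors expect. The function name and defaults are illustrative only and are not taken from the notebooks.

```python
import cv2

def extract_frames(video_path: str, every_n: int = 30, max_frames: int = 10):
    """Grab roughly every n-th frame from a video as RGB numpy arrays."""
    cap = cv2.VideoCapture(video_path)
    frames, index = [], 0
    while cap.isOpened() and len(frames) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        if index % every_n == 0:
            # OpenCV decodes frames as BGR; convert to RGB before handing them to a VLM
            frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        index += 1
    cap.release()
    return frames
```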

## Models Included

The repository supports a variety of open-source models and configurations, including:

- Aya-Vision-8B
- Florence-2-Base
- Gemma3-VL
- MiMo-VL-7B-RL
- MiMo-VL-7B-SFT
- Qwen2-VL
- Qwen2.5-VL
- Qwen-2VL-MessyOCR
- RolmOCR-Qwen2.5-VL
- olmOCR-Qwen2-VL
- typhoon-ocr-7b-Qwen2.5VL

Each model has a dedicated Colab notebook to help users understand how to use it with video inputs.

## Technologies Used

- **Python**
- **OpenCV** – for video and image processing
- **Gradio** – for interactive UI (see the demo sketch after this list)
- **Jupyter Notebooks** – for easy experimentation
- **Hugging Face Transformers** – for loading VLMs
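
To show how these pieces fit together, here is a minimal Gradio + Transformers sketch for single-image captioning. The checkpoint (`Salesforce/blip-image-captioning-base`) is a lightweight stand-in rather than one of the models listed above; any image-to-text checkpoint can be swapped in.

```python
import gradio as gr
from transformers import pipeline

# Stand-in checkpoint for illustration; replace with any image-to-text VLM checkpoint.
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

def describe(image):
    # The pipeline takes a PIL image and returns a list of candidate captions.
    return captioner(image)[0]["generated_text"]

# A one-function Gradio UI: upload an image, get a caption back.
demo = gr.Interface(fn=describe, inputs=gr.Image(type="pil"), outputs="text")

if __name__ == "__main__":
    demo.launch()
```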

## Folder Structure

```
β”œβ”€β”€ Aya-Vision-8B/
β”œβ”€β”€ Florence-2-Base/
β”œβ”€β”€ Gemma3-VL/
β”œβ”€β”€ MiMo-VL-7B-RL/
β”œβ”€β”€ MiMo-VL-7B-SFT/
β”œβ”€β”€ Qwen2-VL/
β”œβ”€β”€ Qwen2.5-VL/
β”œβ”€β”€ Qwen-2VL-MessyOCR/
β”œβ”€β”€ RolmOCR-Qwen2.5-VL/
β”œβ”€β”€ olmOCR-Qwen2-VL/
β”œβ”€β”€ typhoon-ocr-7b-Qwen2.5VL/
β”œβ”€β”€ LICENSE
└── README.md
```

## Getting Started

1. Clone the repository:

```bash
git clone https://github.com/PRITHIVSAKTHIUR/VLM-Video-Understanding.git
cd VLM-Video-Understanding
```

2. Open any of the Colab notebooks and follow the instructions to run image or video inference.

3. Optionally, install dependencies locally:

```bash
pip install opencv-python gradio transformers
```
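
With the dependencies in place (plus `torch` and `Pillow`, which are assumed here), a local video-understanding run might look roughly like the sketch below: sample a few frames with OpenCV and pass them to a Qwen2-VL-style checkpoint through Transformers. The checkpoint name, video path, and sampling values are placeholders; the per-model Colab notebooks remain the reference workflow.

```python
import cv2
import torch
from PIL import Image
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

# Placeholder checkpoint; other Qwen2-VL-style models in the list follow the same pattern.
model_id = "Qwen/Qwen2-VL-2B-Instruct"
device = "cuda" if torch.cuda.is_available() else "cpu"
processor = AutoProcessor.from_pretrained(model_id)
model = Qwen2VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16 if device == "cuda" else torch.float32
).to(device)

# Sample a handful of frames from the video with OpenCV (placeholder path).
cap = cv2.VideoCapture("sample.mp4")
frames, idx = [], 0
while len(frames) < 4:
    ok, frame = cap.read()
    if not ok:
        break
    if idx % 30 == 0:  # roughly one frame per second at 30 fps
        frames.append(Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)))
    idx += 1
cap.release()

# One image slot per sampled frame, followed by the question.
messages = [{
    "role": "user",
    "content": [{"type": "image"} for _ in frames]
    + [{"type": "text", "text": "Describe what happens in this video."}],
}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=frames, return_tensors="pt").to(device)

output_ids = model.generate(**inputs, max_new_tokens=128)
# Drop the prompt tokens and keep only the newly generated answer.
answer_ids = output_ids[:, inputs["input_ids"].shape[1]:]
print(processor.batch_decode(answer_ids, skip_special_tokens=True)[0])
```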

## Hugging Face Dataset

The notebooks and example assets are also published as a dataset on Hugging Face:

[VLM-Video-Understanding](https://huggingface.co/datasets/prithivMLmods/VLM-Video-Understanding)

## License

This project is licensed under the Apache-2.0 License.