---
license: apache-2.0
library_name: transformers
pipeline_tag: video-text-to-text
---
# Video-LLaVA-Seg
[Project](https://ali2500.github.io/vicas-project/) | [arXiv](https://arxiv.org/abs/2412.09754)
This is the official baseline implementation for the ViCaS dataset, presented in the paper [ViCaS: A Dataset for Combining Holistic and Pixel-level Video Understanding using Captions with Grounded Segmentation](https://huggingface.co/papers/2412.09754).
For details about setting up the model, refer to the [Video-LLaVA-Seg GitHub repo](https://github.com/Ali2500/Video-LLaVA-Seg/tree/main).

For details about downloading and evaluating the dataset benchmark, refer to the [ViCaS GitHub repo](https://github.com/Ali2500/ViCaS/tree/main).