tags:
- Video Retrieval
- Video Understanding
---

This dataset is derived from the following project: https://github.com/ZhanJieHu/SDGAN/tree/main/data_preparation/StaticFeature

This static feature dataset provides pre-extracted static visual features for three widely used Temporal Video Grounding (TVG) benchmark datasets: ActivityNet Captions, Charades-STA, and TACoS. It is designed to facilitate research on video moment retrieval and multimodal video understanding by offering ready-to-use frame-level representations.

The feature extraction pipeline consists of several stages. First, raw videos are decoded into frame sequences using a standardized video-to-frame extraction process. To balance computational efficiency and temporal coverage, the frame sequence is uniformly downsampled by keeping one frame out of every 16. The sampled frames are then preprocessed through resizing, padding to square resolution, normalization, and format conversion to ensure compatibility with vision-language models.
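
The extraction code itself lives in the linked repository rather than in this card, but the steps above map directly onto a short script. Below is a minimal sketch, assuming OpenCV for decoding, a 224x224 target resolution, and CLIP-style normalization statistics; none of these specifics are stated here, so consult the SDGAN repository for the exact values used.

```python
# Minimal sketch of the described sampling and preprocessing; NOT the
# repository's exact code. OpenCV, the 224x224 resolution, and the CLIP-style
# normalization statistics below are assumptions, not taken from this card.
import cv2
import numpy as np

SAMPLE_RATE = 16    # keep one frame out of every 16, as described above
TARGET_SIZE = 224   # assumed model input resolution (not stated in this card)
MEAN = np.array([0.48145466, 0.4578275, 0.40821073], dtype=np.float32)
STD = np.array([0.26862954, 0.26130258, 0.27577711], dtype=np.float32)


def preprocess(frame: np.ndarray) -> np.ndarray:
    """Resize the longer side to TARGET_SIZE, pad to square, and normalize."""
    h, w = frame.shape[:2]
    scale = TARGET_SIZE / max(h, w)
    frame = cv2.resize(frame, (max(1, round(w * scale)), max(1, round(h * scale))))
    h, w = frame.shape[:2]
    # Zero-pad the shorter side so the frame becomes a TARGET_SIZE square.
    square = np.zeros((TARGET_SIZE, TARGET_SIZE, 3), dtype=np.uint8)
    top, left = (TARGET_SIZE - h) // 2, (TARGET_SIZE - w) // 2
    square[top:top + h, left:left + w] = frame
    return (square.astype(np.float32) / 255.0 - MEAN) / STD


def sample_and_preprocess(video_path: str) -> np.ndarray:
    """Decode a video, keep every 16th frame, and preprocess each kept frame."""
    cap = cv2.VideoCapture(video_path)
    frames, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % SAMPLE_RATE == 0:
            frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # OpenCV decodes BGR
            frames.append(preprocess(frame))
        index += 1
    cap.release()
    # Shape: (num_sampled_frames, TARGET_SIZE, TARGET_SIZE, 3), float32
    return np.stack(frames)
```

Padding to a square rather than stretching preserves the original aspect ratio, so objects are not distorted by anisotropic scaling before the frames are fed to the vision-language model.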