Schema (one row per video):

| Column | Type | Example |
|---|---|---|
| `pkl` | pickled NumPy array of per-frame features (binary) | truncated in the viewer |
| `__key__` | string, 29 characters | `./frame_feature/v_3IPOOpGRl80` |
| `__url__` | string, 1 distinct value | `hf://datasets/ZhanjieHu/StaticFeature@ba2c21b88e74b2b9398daf871524a9efbf948b33/activitynet_static.t…` |
This dataset is derived from the following project: https://github.com/ZhanJieHu/SDGAN/tree/main/data_preparation/StaticFeature
This static feature dataset provides pre-extracted static visual features for three widely used Temporal Video Grounding (TVG) benchmark datasets: ActivityNet Captions, Charades-STA, and TACoS. It is designed to facilitate research on video moment retrieval and multimodal video understanding by offering ready-to-use frame-level representations.
The feature extraction pipeline consists of several stages. First, raw videos are decoded into frame sequences using a standardized video-to-frame extraction process. To balance computational efficiency and temporal coverage, frames are uniformly downsampled by selecting one frame every 16 frames. The sampled frames are then preprocessed through resizing, padding to square resolution, normalization, and format conversion to ensure compatibility with vision-language models.
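The uniform downsampling step above amounts to keeping every 16th frame index. A minimal sketch (the stride of 16 comes from the description; the helper name is ours):

```python
def sampled_indices(num_frames: int, stride: int = 16) -> list[int]:
    """Uniform temporal downsampling: keep one frame every `stride`
    frames, starting at frame 0."""
    return list(range(0, num_frames, stride))

print(sampled_indices(64))        # → [0, 16, 32, 48]
print(len(sampled_indices(100)))  # → 7 sampled frames for a 100-frame clip
```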
Static visual features are extracted with a pretrained Contrastive Language–Image Pre-training (CLIP) model, specifically the ViT-L/14@336px variant. Each frame is encoded into a 768-dimensional feature vector that captures high-level semantic information aligned with natural language representations. The resulting features are aggregated per video and stored as serialized Python pickle (PKL) files, enabling efficient loading and downstream processing.
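The per-video aggregation and PKL serialization can be sketched as follows. The CLIP forward pass itself is elided, so plain lists stand in for the 768-dimensional ViT-L/14@336px embeddings, and the file-naming helper is hypothetical:

```python
import os
import pickle
import tempfile

FEATURE_DIM = 768  # output size of CLIP ViT-L/14@336px image embeddings

def save_video_features(video_id: str, frame_features, out_dir: str) -> str:
    """Aggregate the per-frame vectors for one video and serialize
    them as a single PKL file (one file per video)."""
    path = os.path.join(out_dir, f"{video_id}.pkl")
    with open(path, "wb") as f:
        pickle.dump(frame_features, f)
    return path

# Stand-in features for a 3-frame video (real vectors come from CLIP):
features = [[0.0] * FEATURE_DIM for _ in range(3)]
with tempfile.TemporaryDirectory() as d:
    path = save_video_features("v_3IPOOpGRl80", features, d)
    with open(path, "rb") as f:
        restored = pickle.load(f)
```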
To support reproducibility, the dataset includes a complete and transparent feature extraction pipeline. This encompasses environment configuration, frame extraction scripts, frame sorting and organization utilities, and CLIP model preparation procedures. Due to dependency constraints in the original implementation, a manually configured CLIP module is provided, along with instructions for modifying model weight paths. Alternatively, users may adopt simplified setups via publicly available CLIP implementations.
The dataset is particularly suitable for research tasks such as temporal video grounding, video-text retrieval, and multimodal representation learning. By eliminating the need for repeated visual feature extraction, it significantly reduces computational overhead and accelerates experimentation and benchmarking across standard datasets.