VideoMAEv2_BlindSpots
This dataset contains a diverse set of 25 videos drawn from the misclassified samples of the following experiment: feature vectors were extracted with the 'VideoMAEv2-Huge' video feature extractor for a subset of the 'Kinetics-400' dataset containing 3995 samples from 395 classes, and a logistic regression classifier was trained on these features. The subset was randomly split into training and testing sets with a 0.5 split ratio.
Base Model: https://huggingface.co/OpenGVLab/VideoMAEv2-Huge
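The selection procedure described above can be sketched roughly as follows. This is a minimal, hypothetical example using scikit-learn, with random stand-in arrays in place of the real VideoMAEv2 features and Kinetics-400 labels; the shapes and class count are illustrative only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 64))   # stand-in for VideoMAEv2-Huge feature vectors
labels = rng.integers(0, 10, size=200)  # stand-in for Kinetics-400 class indices

# Random 0.5 train/test split, as described above
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.5, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Top-5 criterion: a test sample counts as misclassified when its true class
# is not among the five highest-probability predicted classes
probs = clf.predict_proba(X_test)
top5_classes = clf.classes_[np.argsort(probs, axis=1)[:, -5:]]
misclassified = [i for i, y in enumerate(y_test) if y not in top5_classes[i]]
```

The misclassified samples would then be inspected visually to assign the blindspot categories listed below.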
Loading the Model
```python
import torch
from transformers import VideoMAEImageProcessor, AutoModel, AutoConfig

model_name = "OpenGVLab/VideoMAEv2-Huge"

processor = VideoMAEImageProcessor.from_pretrained(model_name)
config = AutoConfig.from_pretrained(model_name, trust_remote_code=True)
model = AutoModel.from_pretrained(model_name, config=config, trust_remote_code=True)

# Move to GPU if available and switch to inference mode
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
model.eval()
```
Blindspot Categories
The samples were misclassified according to the top-5 criterion in the experiment described above. The misclassified videos were visually inspected and grouped into the following blindspot categories; a category was included only after verifying that multiple misclassified samples exhibited the same issue.
1. Temporal non-centeredness: The main action described by the video's class happens at the very beginning or towards the end, while other background activity occupies the majority of the video.
2. Unusual environment / Main action paired with other action: The environment/background is unusual, e.g., throwing a frisbee in a snowy environment, or the main action is paired with another action, e.g., playing the trombone while surfing.
3. Object / Subject / Action is filmed from an oblique angle
4. Cuts / Transitions to other background actions
5. Quick and unstable camera motion
6. Object not in the frame, but subject and background are in the frame: It is sometimes possible to infer the action without seeing the object, by observing the movement pattern of the subject and the surroundings, e.g., a baseball throw can be inferred from the throwing motion and a playground setting without the ball appearing in the frame.
7. Spatial non-centeredness and / or small object / subject / action: The main object / subject / action occupies only a small fraction of the frame.
8. Unusual object: Object unusual for the category, e.g., a video of blowing air into a hot air balloon in the 'Blowing Balloon' category, where almost all other samples show humans blowing up party balloons.
9. Unusual subject: Subject unusual for the category, e.g., a video of a dog blowing out a candle in the 'Blowing Candle' category, where a human subject is expected almost always.
10. Unusual action: Action unusual for the category, e.g., a video of sharpening a pencil with a blade when a rotary sharpener is used almost always.
The dataset is organized in the following folder structure.
```
my_video_dataset/
└── train/
    ├── video_01.mp4
    ├── video_02.mp4
    ├── video_01_BlindSpot.txt
    └── video_02_BlindSpot.txt
```
The text files contain one or more numbers indicating the type of blindspot. For example, if the file video_01_BlindSpot.txt contains the text '5,9', then video_01.mp4 exhibits the blindspots with those numbers in the list above.
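Pairing each video with its annotation might look like the following. This is a hypothetical sketch (the function and dictionary names are not part of the dataset), assuming the categories above are numbered 1 through 10 in the annotation files.

```python
from pathlib import Path

# Hypothetical mapping of annotation numbers to short category names,
# following the list above
BLINDSPOT_NAMES = {
    1: "Temporal non-centeredness",
    2: "Unusual environment / paired action",
    3: "Oblique camera angle",
    4: "Cuts / transitions to other background actions",
    5: "Quick and unstable camera motion",
    6: "Object not in the frame",
    7: "Spatial non-centeredness / small object",
    8: "Unusual object",
    9: "Unusual subject",
    10: "Unusual action",
}

def read_blindspots(txt_path):
    """Parse a *_BlindSpot.txt file (e.g. '5,9') into category numbers."""
    tokens = Path(txt_path).read_text().strip().split(",")
    return [int(t) for t in tokens if t.strip()]

def paired_samples(train_dir):
    """Yield (video_path, [category numbers]) for each annotated video."""
    for txt in sorted(Path(train_dir).glob("*_BlindSpot.txt")):
        video = txt.with_name(txt.name.replace("_BlindSpot.txt", ".mp4"))
        yield video, read_blindspots(txt)
```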
Note
Some of the misclassified videos had the described action appear only in sudden, infrequent bursts amid other background activity. However, including more such data in the training set is unlikely to improve the model, given the uniform frame-sampling strategy used in the model's preprocessing stage, so such samples are not included here.
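To see why burst-like actions are problematic, uniform sampling can be sketched as evenly spaced frame indices (a simplified illustration; the model's actual preprocessing may differ in detail):

```python
import numpy as np

def uniform_frame_indices(num_frames, num_samples=16):
    """Evenly spaced frame indices across a clip (a common sampling strategy)."""
    return np.linspace(0, num_frames - 1, num_samples).round().astype(int)

# For a 300-frame clip sampled at 16 frames, consecutive sampled indices are
# roughly 20 frames apart, so an action burst shorter than that can fall
# entirely between samples, no matter how many such clips are added to training.
idx = uniform_frame_indices(300)
```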
Finetuning the model to accommodate blindspots
To remove the above blindspots, the model may be finetuned with an appropriately curated dataset. This dataset could be expanded with data augmentation techniques to assemble a sufficiently large training set. 'Cuts / Transitions to other background actions', 'Quick and unstable camera motion', and 'Temporal non-centeredness' can be generated with simple augmentation techniques. LLM prompts may be used to find candidates for unusual objects / subjects / actions, and the web can then be scraped for relevant videos; for example, after determining that a dog is an unusual subject for blowing out a candle, videos of dogs blowing out candles could be scraped from the web to augment the 'Blowing Candles' category. Visual generative models may be used to augment data for 'Unusual environment / Main action paired with other action' and 'Object / Subject / Action is filmed from an oblique angle'.
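Two of the simple augmentations mentioned above can be sketched as follows. The function names and parameters are illustrative, and clips are represented as numpy arrays of frames: temporal non-centeredness is simulated by padding the action with background frames, and quick, unstable camera motion by randomly jittered crops.

```python
import numpy as np

def make_temporally_noncentered(action_frames, background_frames, at_start=True):
    """Place the labeled action at one end of the clip and fill the rest
    with background activity (simulating temporal non-centeredness)."""
    parts = ([action_frames, background_frames] if at_start
             else [background_frames, action_frames])
    return np.concatenate(parts)

def simulate_camera_shake(frames, crop=200, max_jitter=8, seed=0):
    """Re-crop each frame at a randomly jittered offset (simulating quick,
    unstable camera motion)."""
    rng = np.random.default_rng(seed)
    out = []
    for frame in frames:
        y = int(rng.integers(0, max_jitter + 1))
        x = int(rng.integers(0, max_jitter + 1))
        out.append(frame[y:y + crop, x:x + crop])
    return np.stack(out)
```

Cuts / transitions could be handled with similar index arithmetic, splicing segments from an unrelated clip at random positions.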