task_categories:
  - token-classification
size_categories:
  - n<1K
---
# VideoMAEv2_BlindSpots

This dataset contains a diverse set of 25 videos drawn from the misclassified samples of the following experiment: feature vectors were extracted with the VideoMAEv2-Huge video feature extractor for a subset of the Kinetics-400 dataset (3,995 samples spanning 395 classes), and a logistic regression classifier was trained on those vectors. The subset was randomly split into training and testing sets with a 0.5 split ratio.

**Base Model:** [https://huggingface.co/OpenGVLab/VideoMAEv2-Huge](https://huggingface.co/OpenGVLab/VideoMAEv2-Huge)

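As a rough, self-contained illustration of the experiment above (not the actual pipeline), the sketch below fits a logistic regression on placeholder feature vectors with a 0.5 random split and flags test samples whose true class falls outside the five highest-probability predicted classes. The array sizes and class count are made-up stand-ins for the real VideoMAEv2-Huge features and Kinetics-400 labels.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))      # placeholder feature vectors
y = rng.integers(0, 10, size=200)   # placeholder class labels

# Random 0.5 train/test split, as in the experiment described above.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
proba = clf.predict_proba(X_test)

# Top-5 criterion: a sample counts as misclassified when its true class
# is not among the five highest-probability predicted classes.
top5_labels = clf.classes_[np.argsort(proba, axis=1)[:, -5:]]
misclassified = np.array(
    [y_test[i] not in top5_labels[i] for i in range(len(y_test))])
misclassified_indices = np.flatnonzero(misclassified)
```

The flagged `misclassified_indices` correspond to the pool from which the 25 blindspot videos in this dataset were selected by visual inspection.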
## Loading the Model

```python
import torch
from transformers import VideoMAEImageProcessor, AutoModel, AutoConfig

model_name = "OpenGVLab/VideoMAEv2-Huge"
processor = VideoMAEImageProcessor.from_pretrained(model_name)
config = AutoConfig.from_pretrained(model_name, trust_remote_code=True)
model = AutoModel.from_pretrained(model_name, config=config, trust_remote_code=True)

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
model.eval()
```

## Blindspot Categories

The samples were misclassified according to the top-5 criterion in the experiment described above. By visual inspection, the misclassified videos were assigned to the following blindspot categories. Each category was chosen only after visually verifying that multiple misclassified samples exhibit the same issue.

**1. Temporal non-centeredness:** The main action described by the video's class happens at the very beginning or very end, while other background activity occupies the majority of the video.

**2. Unusual environment / Main action paired with other action:** The environment or background is unusual (e.g., throwing a frisbee in a snowy environment), or the main action is paired with another action (e.g., playing the trombone while surfing).

**3. Object / Subject / Action is filmed from an oblique angle**

**4. Cuts / Transitions to other background actions**

**5. Quick and unstable camera motion**

**6. Object not in the frame, but subject and background are:** It is sometimes possible to infer the action without seeing the object, from the movement pattern of the subject and the surroundings. E.g., throwing a baseball can be inferred from the throwing motion and a playground setting without the ball ever appearing in the frame.

**7. Spatial non-centeredness and/or small object / subject / action:** The main object occupies only a small fraction of the frame.

**8. Unusual object or subject:** An object or subject unusual for the category. E.g., a video of air being blown into a hot-air balloon in the 'Blowing Balloon' category, where almost all other samples show humans blowing up party balloons; or a video of a dog blowing out a candle in the 'Blowing Candle' category, where the subject is almost always expected to be human.

**9. Unusual action:** An action unusual for the category. E.g., a video of sharpening a pencil with a blade, when a rotary sharpener is used almost always.

The dataset is organized in the following folder structure.

```
my_video_dataset/
└── train/
    ├── video_01.mp4
    ├── video_02.mp4
    ├── video_01_BlindSpot.txt
    └── video_02_BlindSpot.txt
```

Each text file contains one or more comma-separated numbers indicating the types of blindspot. E.g., if the file video_01_BlindSpot.txt contains the text '5,9', then video_01.mp4 has the issues with the corresponding numbers in the list above, namely 'Quick and unstable camera motion' and 'Unusual action'.

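A minimal sketch for reading these label files, assuming the comma-separated format described above; the category names in the lookup table are paraphrased from the list in this README.

```python
from pathlib import Path

# Blindspot category numbers as listed in this README (names paraphrased).
BLINDSPOT_NAMES = {
    1: "Temporal non-centeredness",
    2: "Unusual environment / paired action",
    3: "Oblique camera angle",
    4: "Cuts / transitions to other background actions",
    5: "Quick and unstable camera motion",
    6: "Object not in the frame",
    7: "Spatial non-centeredness / small object",
    8: "Unusual object or subject",
    9: "Unusual action",
}

def read_blindspots(path):
    """Return the blindspot category numbers listed in a *_BlindSpot.txt file."""
    text = Path(path).read_text().strip()
    return [int(token) for token in text.split(",") if token.strip()]
```

For the example above, calling `read_blindspots` on a file containing `5,9` returns `[5, 9]`, which `BLINDSPOT_NAMES` maps to quick/unstable camera motion and an unusual action.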
### Note
Some of the misclassified videos show the described action only in sudden, infrequent bursts amid other background activity. Including more such data in the training set is unlikely to improve the model, given the uniform frame-sampling strategy used in its preprocessing stage, so those samples are not included here.

## Finetuning the model to accommodate blindspots

To remove the above blindspots, the model may be finetuned on an appropriately curated dataset, which could be expanded with data augmentation techniques until it is sufficiently large. **Cuts / Transitions to other background actions**, **Quick and unstable camera motion**, and **Temporal non-centeredness** can be generated with simple augmentation techniques. LLM prompts may be used to find candidates for **Unusual objects / subjects / actions**, and the web can be scraped to obtain the relevant videos; e.g., after identifying that a dog is an unusual subject to blow out a candle, videos of dogs blowing out candles could be scraped from the web to augment the 'Blowing Candles' category. Visual generative models may be used to augment data with **Unusual environment / Main action paired with other action** and **Object / Subject / Action is filmed from an oblique angle**.
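As a toy illustration of the simplest of these augmentations, the sketch below fabricates temporal non-centeredness by concatenating a short action clip with a longer background clip so the action occupies only the very start or end of the video. Frames are plain NumPy arrays; video decoding and encoding are left out, and the clip sizes are arbitrary.

```python
import numpy as np

def make_temporally_noncentered(action, background, action_first=True):
    """Pad an action clip (frames, H, W, C) with background frames so the
    action occupies only the very beginning or very end of the video."""
    clips = [action, background] if action_first else [background, action]
    return np.concatenate(clips, axis=0)

# Toy frames: a 4-frame "action" and a 12-frame "background".
action = np.zeros((4, 8, 8, 3), dtype=np.uint8)
background = np.full((12, 8, 8, 3), 128, dtype=np.uint8)
augmented = make_temporally_noncentered(action, background)
```

With uniform frame sampling over the 16-frame result, only a quarter of the sampled frames land on the action, which is exactly the situation the blindspot describes.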