IrohXu committed on
Commit dd884db · 1 Parent(s): c710442

Update anno

.gitattributes CHANGED
@@ -57,3 +57,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 # Video files - compressed
 *.mp4 filter=lfs diff=lfs merge=lfs -text
 *.webm filter=lfs diff=lfs merge=lfs -text
+
+*.json filter=lfs diff=lfs merge=lfs -text
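The new `*.json` rule above means annotation JSON files are now stored through Git LFS, alongside the existing video patterns. A minimal sketch of which file names the tracked patterns cover — note that `fnmatch` on the basename is only an approximation of Git's gitignore-style attribute matching, and the pattern list here is transcribed from the hunk above:

```python
from fnmatch import fnmatch

# Patterns tracked by Git LFS per .gitattributes, including the
# *.json rule added in this commit.
LFS_PATTERNS = ["*.mp4", "*.webm", "*.json"]

def tracked_by_lfs(path: str) -> bool:
    """Approximate check: does a file match any LFS-tracked pattern?
    (Git's real matching is gitignore-style; fnmatch on the basename
    is close enough for these simple extension patterns.)"""
    name = path.rsplit("/", 1)[-1]
    return any(fnmatch(name, pat) for pat in LFS_PATTERNS)

print(tracked_by_lfs("train_annotation.json"))  # True: JSON now goes through LFS
print(tracked_by_lfs("README.md"))              # False: stays a plain Git blob
```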
README.md CHANGED
@@ -6,7 +6,8 @@ license: cc-by-nc-3.0
 
 ## Dataset Description
 
-![](assets/example_gesture.pdf)
+
+<img src="assets/gesture_example.png">
 
 We introduce SocialGesture, the first large-scale dataset specifically designed for multi-person gesture analysis. SocialGesture features a diverse range of natural scenarios and supports multiple gesture analysis tasks, including video-based recognition and temporal localization, providing a valuable resource for advancing the study of gesture during complex social interactions. Furthermore, we propose a novel visual question answering (VQA) task to benchmark vision language models' (VLMs) performance on social gesture understanding. Our findings highlight several limitations of current gesture recognition models, offering insights into future directions for improvement in this field.
 
@@ -16,7 +17,9 @@ We introduce SocialGesture, the first large-scale dataset specifically designed
 ----videos
 |----xxx.mp4
 |----xxx.mp4
-----vqa_annotation.json
+----train_annotation.json
+----test_annotation.json
+----dataset_stat_link.xlsx
 ```
 
 ## Reference
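The README hunk above replaces the single VQA annotation file with separate train/test splits. A minimal sketch of gathering the files in the updated layout — the directory `root` is a hypothetical local checkout, and since the annotations are LFS objects, `git lfs pull` must have run before the JSON is readable:

```python
import json
from pathlib import Path

def load_socialgesture(root: str):
    """Collect the files described by the updated README layout:
    videos/*.mp4 plus the new train/test annotation JSONs."""
    base = Path(root)
    videos = sorted((base / "videos").glob("*.mp4"))
    train = json.loads((base / "train_annotation.json").read_text())
    test = json.loads((base / "test_annotation.json").read_text())
    return videos, train, test
```

The annotation schema itself is not shown in the diff, so the loader returns the parsed JSON as-is rather than assuming any field names.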
vqa_annotation.json → test_annotation.json RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:113d42eef26101f9d6d06936ce913df2154c04ee1b5bc14dea57e6e19cd67969
-size 29523764
+oid sha256:75558fea57e74f398adeb0921f7818f4a453b39ad8273d093278ef59d2a89e0f
+size 5670545
train_annotation.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1d1a0f91c76a7d87392b86f1328e03471fe0377d32e55eded493f883a0a7dc6e
+size 22967798
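The annotation files in this commit are stored as Git LFS pointers: small text stubs (version, oid, size) that stand in for the real object in Git. A minimal sketch of parsing such a pointer, using the contents of the newly added `train_annotation.json` shown above:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS pointer file into its key/value fields.
    Each line is '<key> <value>', e.g. 'size 22967798'."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# Pointer contents of train_annotation.json, as added in this commit.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:1d1a0f91c76a7d87392b86f1328e03471fe0377d32e55eded493f883a0a7dc6e
size 22967798
"""
info = parse_lfs_pointer(pointer)
print(info["size"])  # "22967798" -- roughly 23 MB of annotations behind this stub
```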