size_categories:
- 1M<n<10M
language:
- en
---

# Dataset Card for BaboonLand Dataset: Tracking Primates in the Wild and Automating Behaviour Recognition from Drone Videos

## Dataset Description

- **Homepage:** [BaboonLand Site](https://baboonland.xyz/)
- **Repository:** https://github.com/Imageomics/BaboonLand
- **Paper:** [https://link.springer.com/article/10.1007/s11263-025-02493-5](https://link.springer.com/article/10.1007/s11263-025-02493-5)
- **arXiv:** [https://arxiv.org/pdf/2405.17698](https://arxiv.org/pdf/2405.17698)

### Dataset Summary

BaboonLand is an aerial drone video dataset of wild olive baboons (*Papio anubis*) collected over 21 consecutive days at the Mpala Research Centre in Laikipia, Kenya, following three troops during morning and evening movements to and from their sleeping sites. The dataset contains UAV footage across diverse environments (e.g., sleeping tree, river, rock, open savannah, cliff), with up to ~70 individuals per frame, yielding dense multi-object scenes from an overhead viewpoint.

The dataset supports three core subtasks: detection, multi-object tracking, and behavior recognition. It includes (1) a detection dataset derived from 5.3K-resolution frames via multi-scale tiling (≈30K images), (2) ~0.5 hours of dense tracking annotations, and (3) ~20 hours of behavior "mini-scenes" annotated into 12 behavior classes plus an additional category for occlusions.

### Supported Tasks and Leaderboards

#### Detection

We evaluate a YOLOv8-X model with an input resolution of 768×768 on our dataset and report mAP@50, Precision, and Recall:

| Model | mAP@50 | Precision | Recall |
| --- | ---: | ---: | ---: |
| YOLOv8-X | 92.62 | 93.70 | 87.60 |
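
mAP@50 counts a predicted box as a true positive when its intersection-over-union (IoU) with a ground-truth box reaches 0.5. A minimal IoU helper in plain Python (an illustrative sketch, not the evaluation code used to produce the table):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A detection counts toward mAP@50 when IoU >= 0.5 with a ground-truth box.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # ≈ 0.333
```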

#### Tracking

We evaluate the SORT, DeepSORT, StrongSORT, ByteTrack, and BoT-SORT tracking algorithms on our dataset and report MOTA, MOTP, IDF1, Precision, and Recall:

| Tracker | MOTA | MOTP | IDF1 | Precision | Recall |
| --- | ---: | ---: | ---: | ---: | ---: |
| SORT | 84.76 | 50.15 | 77.43 | 90.83 | 91.19 |
| DeepSORT | 84.40 | 87.22 | 81.38 | 90.26 | 91.57 |
| StrongSORT | 82.48 | 85.37 | 84.98 | 88.00 | 90.10 |
| ByteTrack | 63.55 | 34.10 | 77.01 | 96.32 | 64.90 |
| BoT-SORT | 63.81 | 34.31 | 78.24 | 97.21 | 66.16 |
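
At each frame, every one of these trackers must associate freshly detected boxes with existing tracks. The full pipelines add Kalman-filter motion models, appearance embeddings, and Hungarian matching; the greedy IoU association below is a deliberately simplified sketch of that core step, not any tracker's actual implementation:

```python
def iou(a, b):
    """IoU of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def associate(tracks, detections, iou_min=0.3):
    """Greedily match track boxes to detection boxes by descending IoU."""
    pairs = sorted(
        ((iou(t, d), ti, di)
         for ti, t in enumerate(tracks)
         for di, d in enumerate(detections)),
        reverse=True)
    matches, used_t, used_d = [], set(), set()
    for score, ti, di in pairs:
        if score < iou_min:
            break  # remaining pairs overlap too little to match
        if ti not in used_t and di not in used_d:
            matches.append((ti, di))
            used_t.add(ti)
            used_d.add(di)
    return matches

tracks = [(0, 0, 10, 10), (20, 20, 30, 30)]
detections = [(21, 20, 31, 30), (1, 0, 11, 10)]
# Each track keeps its identity: track 0 -> detection 1, track 1 -> detection 0.
print(sorted(associate(tracks, detections)))
```

Unmatched detections would spawn new tracks and unmatched tracks would be aged out; identity switches in crowded scenes (up to ~70 baboons per frame here) are exactly what IDF1 penalizes.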

#### Behavior Classes

- Walking/Running
- Sitting/Standing
- Fighting/Playing
- Self-Grooming
- Being Groomed
- Grooming Somebody
- Mutual Grooming
- Infant-Carrying
- Foraging
- Drinking
- Mounting
- Sleeping
- Occluded

#### Behavior Recognition

We evaluate I3D, SlowFast, and X3D models on our dataset and report Micro-Average (per-instance) and Macro-Average (per-class) accuracy; the WI column gives the weight initialization (random or Kinetics-400 pretraining):

| Method | WI | Micro Top-1 | Micro Top-3 | Micro Top-5 | Macro Top-1 | Macro Top-3 | Macro Top-5 |
| --- | --- | ---: | ---: | ---: | ---: | ---: | ---: |
| I3D | Random | 61.29 | 89.38 | 92.34 | 26.53 | 54.51 | 65.47 |
| SlowFast | Random | 61.71 | 90.35 | 93.11 | 27.08 | 56.73 | 67.61 |
| X3D | Random | 63.97 | 91.34 | 95.17 | 30.04 | 60.58 | 72.13 |
| X3D | K-400 | 64.89 | 92.54 | 96.66 | 31.41 | 62.04 | 74.01 |
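
The gap between the micro and macro columns reflects class imbalance: under the macro average, rare behaviors weigh as much as frequent ones. A toy sketch of both metrics (the labels below are hypothetical, not dataset values):

```python
from collections import defaultdict

def micro_macro_accuracy(labels, preds):
    """Micro = fraction of correct instances; macro = unweighted mean of per-class accuracies."""
    micro = sum(l == p for l, p in zip(labels, preds)) / len(labels)
    stats = defaultdict(lambda: [0, 0])  # class -> [correct, total]
    for l, p in zip(labels, preds):
        stats[l][0] += (l == p)
        stats[l][1] += 1
    macro = sum(c / t for c, t in stats.values()) / len(stats)
    return micro, macro

# 8 common-class clips predicted correctly, 2 rare-class clips missed:
labels = ["Sitting/Standing"] * 8 + ["Drinking"] * 2
preds = ["Sitting/Standing"] * 10
micro, macro = micro_macro_accuracy(labels, preds)
print(micro, macro)  # 0.8 0.5
```

A classifier that ignores rare classes can still score well per instance while its per-class score collapses, which is the pattern the table shows.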

### Languages

English

## Dataset Structure

BaboonLand provides the original videos, CVAT-formatted annotations, derived mini-scenes, and scripts to generate task-specific training formats (e.g., Ultralytics/YOLO and Charades for SlowFast).

### Directory Layout

```text
BaboonLand
    /charades        -> The dataset converted to Charades format to train and evaluate
                        behavior recognition models. You can download the generated
                        dataset from our webpage or you can generate it yourself.
        ...
    /cvat_templates  -> Templates to back up projects in CVAT and explore/adjust annotations.
        /behavior.zip
        /tracking.zip
    /dataset
        /video_1
            /actions
                /0.xml
                /1.xml   -> Behavior annotations for the individual with ID=1
                ...
                /n.xml
            /mini-scenes -> Mini-scenes generated from video.mp4 and tracks.xml
                /0.mp4
                /1.mp4
                ...
                /n.mp4
            /timeline.jpg
            /tracks.xml  -> Tracks + bounding boxes (CVAT for video 1.1). Each track has a unique ID.
            /video.mp4   -> Original drone video
        /video_2
        ...
        /video_n
        ...
    /scripts
        /requirements.txt
        /tracks2mini-scenes.py
        /dataset2charades.py
        /charades2video.py
        /charades2visual.py
        /dataset2tracking.py
        /tracking2ultralytics.py
        /ultralytics2pyramid.py
    /tracking        -> Tracking split + (optionally) Ultralytics-format detection data.
        ...
    /README.md
```
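
`tracks.xml` follows the CVAT for video 1.1 schema, in which each `<track>` holds per-frame `<box>` elements. A minimal parsing sketch with a hypothetical two-box annotation (the tag and attribute names follow the standard CVAT export; verify them against the shipped files):

```python
import xml.etree.ElementTree as ET

def load_tracks(xml_text):
    """Map track id -> list of (frame, x1, y1, x2, y2) from CVAT-for-video XML."""
    root = ET.fromstring(xml_text)
    tracks = {}
    for track in root.iter("track"):
        boxes = []
        for box in track.iter("box"):
            if box.get("outside") == "1":  # the subject left the frame
                continue
            boxes.append((int(box.get("frame")),
                          float(box.get("xtl")), float(box.get("ytl")),
                          float(box.get("xbr")), float(box.get("ybr"))))
        tracks[int(track.get("id"))] = boxes
    return tracks

# Hypothetical two-box annotation for illustration:
sample = """<annotations>
  <track id="1" label="baboon">
    <box frame="0" xtl="10" ytl="20" xbr="60" ybr="90" outside="0"/>
    <box frame="1" xtl="12" ytl="21" xbr="62" ybr="91" outside="0"/>
  </track>
</annotations>"""
print(load_tracks(sample)[1][0])  # (0, 10.0, 20.0, 60.0, 90.0)
```

These per-track boxes are what `tracks2mini-scenes.py` uses to crop an individual-centered mini-scene clip per track ID.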

### Data Instances

Each `dataset/video_k/` directory contains:

- `video.mp4`: the original UAV video
- `tracks.xml`: per-frame tracks (IDs + bounding boxes)
- `actions/*.xml`: per-track behavior labels (filename matches the track ID)
- `mini-scenes/*.mp4`: cropped clips centered on each tracked individual (filename matches the track ID)

### Data Fields

BaboonLand supports three derived tasks:

- **Detection:** bounding boxes for baboons (also convertible to Ultralytics/YOLO format via the provided scripts).
- **Tracking:** per-frame tracks with persistent IDs and bounding boxes (stored in simplified CVAT for video 1.1).
- **Behavior recognition:** per-individual **mini-scenes** (cropped clips centered on each tracked individual) labeled into **12 behavior classes + Occluded**.

### Data Splits

BaboonLand includes task-specific evaluation sets:

- **Tracking:** 75% of each video for training, 25% for testing.
- **Detection (YOLO-formatted):** 80% training, 7% validation, 13% testing.
- **Behavior recognition (Charades format):** 75% training, 25% testing.
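
Since the tracking split is taken per video, each video contributes frames to both sides. Assuming a contiguous leading-train / trailing-test split over frame indices (an illustrative reading; check the `scripts/` directory for the exact procedure):

```python
def split_video_frames(n_frames, train_frac=0.75):
    """Split one video's frame indices into a leading train run and a trailing test run."""
    cut = round(n_frames * train_frac)
    return list(range(cut)), list(range(cut, n_frames))

train, test = split_video_frames(1000)
print(len(train), len(test))  # 750 250
```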

#### Data Collection and Procedures

- **Species:** olive baboons (*Papio anubis*)
- **Location:** Mpala Research Centre, Laikipia County, Kenya
- **Capture:** DJI Air 2S, videos recorded in **5.3K**
- **Procedure:** all flights were conducted at least 20 meters above the closest animal.

## Personal and Sensitive Information

- No humans can be distinguished in the videos.
- Data collection followed research licensing and animal care protocols (see Acknowledgments).

### Authors

- Isla Duporge
- Maksim Kholiavchenko
- Roi Harel
- Scott Wolf
- Dan Rubenstein
- Meg Crofoot
- Tanya Berger-Wolf
- Stephen Lee
- Julie Barreau
- Jenna Kline
- Michelle Ramirez
- Charles Stewart

### Citation Information

#### Dataset

#### Paper

```bibtex
@article{duporge2025baboonland,
  title={BaboonLand Dataset: Tracking Primates in the Wild and Automating Behaviour Recognition from Drone Videos},
  author={Duporge, Isla and Kholiavchenko, Maksim and Harel, Roi and Wolf, Scott and Rubenstein, Daniel I and Crofoot, Margaret C and Berger-Wolf, Tanya and Lee, Stephen J and Barreau, Julie and Kline, Jenna and others},
  journal={International Journal of Computer Vision},
  pages={1--12},
  year={2025},
  publisher={Springer}
}
```

### Contributions / Acknowledgments

This material is based upon work supported by the National Science Foundation under Award No. 2118240 and Award No. 2112606. ID was supported by the National Academy of Sciences Research Associate Program and the United States Army Research Laboratory while conducting this study. ID collected all the UAV data under a Civil Aviation Authority Drone License (CAA NQE Approval Number: 0216/1365) in conjunction with authorization from a KCAA operator under a Remote Pilot License. The data were gathered at the Mpala Research Centre in Kenya, in accordance with Research License No. NACOSTI/P/22/18214. The data collection protocol adhered strictly to the guidelines set forth by the Institutional Animal Care and Use Committee under permission No. IACUC 1835F.