---
license: cc0-1.0
task_categories:
  - video-classification
  - object-detection
tags:
  - baboons
  - video
  - animal behavior
  - behavior recognition
  - annotation
  - annotated video
  - conservation
  - drone
  - UAV
  - imbalanced
  - Kenya
  - Mpala Research Centre
pretty_name: >-
  BaboonLand Dataset: Tracking Primates in the Wild and Automating Behaviour
  Recognition from Drone Videos
description: >-
  BaboonLand is an aerial drone video dataset of wild olive baboons in Laikipia,
  Kenya, collected over 21 consecutive days across three troops. It contains 20+
  hours of footage with dense multi-individual scenes (up to ~70 baboons per
  frame) and annotations enabling detection, multi-object tracking, and behavior
  recognition.
size_categories:
  - 1M<n<10M
language:
  - en
viewer: false
---

Dataset Card for BaboonLand Dataset: Tracking Primates in the Wild and Automating Behaviour Recognition from Drone Videos

Dataset Description

Dataset Summary

BaboonLand is an aerial drone video dataset of wild olive baboons (Papio anubis) collected over 21 consecutive days in Laikipia (Mpala Research Centre), Kenya, following three troops during morning and evening movements to and from sleeping sites. The dataset contains UAV footage across diverse environments (e.g., sleeping tree, river, rock, open savannah, cliff), with up to ~70 individuals per frame, yielding dense multi-object scenes from an overhead viewpoint.

The dataset supports three core subtasks: detection, multi-object tracking, and behavior recognition. It includes (1) a detection dataset of ≈30K images derived from 5.3K-resolution frames via multi-scale tiling, (2) ~0.5 hours of dense tracking annotations, and (3) ~20 hours of behavior “mini-scenes” annotated into 12 behavior classes plus an additional category for occlusions.
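
For illustration, the sketch below shows the general idea of multi-scale tiling: one high-resolution frame is cut into overlapping crops at several scales. The tile sizes and overlap here are placeholder values, not the exact settings used to build the released detection images.

```python
# Minimal sketch of multi-scale tiling: cut one high-resolution frame into
# overlapping square tiles at several scales. Tile sizes and overlap are
# illustrative assumptions, not the settings used to build BaboonLand.
import os
import cv2

def tile_frame(frame, tile_sizes=(768, 1536, 3072), overlap=0.2):
    """Yield (x, y, size, crop) tuples covering the frame at each scale."""
    h, w = frame.shape[:2]
    for size in tile_sizes:
        step = max(1, int(size * (1.0 - overlap)))
        for y in range(0, max(h - size, 0) + 1, step):
            for x in range(0, max(w - size, 0) + 1, step):
                yield x, y, size, frame[y:y + size, x:x + size]

if __name__ == "__main__":
    os.makedirs("tiles", exist_ok=True)
    frame = cv2.imread("frame.png")  # one frame extracted from video.mp4
    for i, (x, y, size, crop) in enumerate(tile_frame(frame)):
        cv2.imwrite(f"tiles/{size}_{i}.png", crop)
```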

Supported Tasks and Leaderboards

Detection

We evaluate the YOLOv8-X model with an input resolution of 768×768 on our dataset and report mAP@50, Precision, and Recall:

| Model    | mAP@50 | Precision | Recall |
|----------|--------|-----------|--------|
| YOLOv8-X | 92.62  | 93.70     | 87.60  |
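
As an example of how such numbers can be reproduced, the sketch below validates a trained YOLOv8-X checkpoint on the Ultralytics/YOLO-format detection split using the Ultralytics API. The weights path and the `baboonland.yaml` dataset config are placeholders and are not shipped with the dataset.

```python
# Hedged sketch: validating a trained YOLOv8-X checkpoint with Ultralytics.
# "baboonland.yaml" and the weights path are placeholders, not released files.
from ultralytics import YOLO

model = YOLO("runs/detect/train/weights/best.pt")  # your trained YOLOv8-X weights
metrics = model.val(data="baboonland.yaml", imgsz=768)

print(f"mAP@50:    {metrics.box.map50:.4f}")
print(f"Precision: {metrics.box.mp:.4f}")
print(f"Recall:    {metrics.box.mr:.4f}")
```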

Tracking

We evaluate SORT, DeepSORT, StrongSORT, ByteTrack, and BotSort tracking algorithms on our dataset and report MOTA, MOTP, IDF1, Precision, and Recall:

| Tracker    | MOTA  | MOTP  | IDF1  | Precision | Recall |
|------------|-------|-------|-------|-----------|--------|
| SORT       | 84.76 | 50.15 | 77.43 | 90.83     | 91.19  |
| DeepSORT   | 84.40 | 87.22 | 81.38 | 90.26     | 91.57  |
| StrongSORT | 82.48 | 85.37 | 84.98 | 88.00     | 90.10  |
| ByteTrack  | 63.55 | 34.10 | 77.01 | 96.32     | 64.90  |
| BotSort    | 63.81 | 34.31 | 78.24 | 97.21     | 66.16  |
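
These metrics can be computed with the py-motmetrics library; the sketch below is a hedged example of how. The gt_per_frame/hyp_per_frame structures and the 0.5 IoU matching threshold are assumptions, not part of any released evaluation code.

```python
# Hedged sketch: computing MOTA/MOTP/IDF1/Precision/Recall with py-motmetrics.
# gt_per_frame / hyp_per_frame are assumed to map frame index -> (ids, boxes),
# with boxes as [x, y, width, height]; the 0.5 IoU threshold is an assumption.
import motmetrics as mm

def evaluate(gt_per_frame, hyp_per_frame):
    acc = mm.MOTAccumulator(auto_id=True)
    for frame in sorted(gt_per_frame):
        gt_ids, gt_boxes = gt_per_frame[frame]
        hyp_ids, hyp_boxes = hyp_per_frame.get(frame, ([], []))
        dists = mm.distances.iou_matrix(gt_boxes, hyp_boxes, max_iou=0.5)
        acc.update(gt_ids, hyp_ids, dists)
    mh = mm.metrics.create()
    return mh.compute(
        acc,
        metrics=["mota", "motp", "idf1", "precision", "recall"],
        name="BaboonLand",
    )
```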

Behavior Classes:

  • Walking/Running
  • Sitting/Standing
  • Fighting/Playing
  • Self-Grooming
  • Being Groomed
  • Grooming Somebody
  • Mutual Grooming
  • Infant-Carrying
  • Foraging
  • Drinking
  • Mounting
  • Sleeping
  • Occluded

Behavior Recognition

We evaluate I3D, SlowFast, and X3D models on our dataset and report Micro-Average (Per Instance) and Macro-Average (Per Class) Top-k accuracy (WI: weight initialization):

| Method   | WI     | Micro Top-1 | Micro Top-3 | Micro Top-5 | Macro Top-1 | Macro Top-3 | Macro Top-5 |
|----------|--------|-------------|-------------|-------------|-------------|-------------|-------------|
| I3D      | Random | 61.29       | 89.38       | 92.34       | 26.53       | 54.51       | 65.47       |
| SlowFast | Random | 61.71       | 90.35       | 93.11       | 27.08       | 56.73       | 67.61       |
| X3D      | Random | 63.97       | 91.34       | 95.17       | 30.04       | 60.58       | 72.13       |
| X3D      | K-400  | 64.89       | 92.54       | 96.66       | 31.41       | 62.04       | 74.01       |
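
One possible starting point for a K-400-initialized X3D model is to load a Kinetics-400-pretrained backbone from PyTorchVideo and replace its classification head with a 13-way layer (12 behavior classes + Occluded). The `x3d_m` variant and the head indexing below are assumptions about PyTorchVideo's implementation, not the exact training setup behind the numbers above.

```python
# Hedged sketch: preparing a Kinetics-400-pretrained X3D backbone from
# PyTorchVideo for the 13 BaboonLand classes (12 behaviors + Occluded).
# The "x3d_m" variant and head indexing are assumptions about PyTorchVideo,
# not the paper's exact setup.
import torch
import torch.nn as nn

NUM_CLASSES = 13  # 12 behavior classes + Occluded

model = torch.hub.load("facebookresearch/pytorchvideo", "x3d_m", pretrained=True)
head = model.blocks[-1]                       # final classification head
head.proj = nn.Linear(head.proj.in_features, NUM_CLASSES)

# Sanity check on a dummy clip: (batch, channels, frames, height, width)
clip = torch.randn(1, 3, 16, 224, 224)
print(model(clip).shape)                      # expected: torch.Size([1, 13])
```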

Languages

English

Dataset Structure

BaboonLand provides original videos, CVAT-formatted annotations, derived mini-scenes, and scripts to generate task-specific training formats (e.g., Ultralytics/YOLO and Charades for SlowFast).

Directory Layout

```
BaboonLand
    /charades -> The dataset converted to Charades format to train and evaluate behavior
                 recognition models. You can download the generated dataset from our webpage
                 or you can generate it yourself.
        ...
    /cvat_templates -> Templates to backup projects in CVAT and explore/adjust annotations.
        /behavior.zip
        /tracking.zip
    /dataset
        /video_1
            /actions
                /0.xml
                /1.xml   -> Behavior annotations for individual with ID=1
                ...
                /n.xml
            /mini-scenes -> Mini-scenes generated from video.mp4 and tracks.xml
                /0.mp4
                /1.mp4
                ...
                /n.mp4
            /timeline.jpg
            /tracks.xml  -> Tracks + bounding boxes (CVAT for video 1.1). Each track has a unique ID.
            /video.mp4   -> Original drone video
        /video_2
            ...
        /video_n
            ...
    /scripts
        /requirements.txt
        /tracks2mini-scenes.py
        /dataset2charades.py
        /charades2video.py
        /charades2visual.py
        /dataset2tracking.py
        /tracking2ultralytics.py
        /ultralytics2pyramid.py
    /tracking -> Tracking split + (optionally) Ultralytics-format detection data.
        ...
    /README.md
```

Data Instances

Each dataset/video_k/ directory contains:

  • video.mp4: original UAV video
  • tracks.xml: per-frame tracks (IDs + bounding boxes); see the parsing sketch after this list
  • actions/*.xml: per-track behavior labels (filename matches track ID)
  • mini-scenes/*.mp4: cropped clips centered on each tracked individual (filename matches track ID)
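
A minimal way to read tracks.xml, assuming the simplified export keeps the standard CVAT for video 1.1 attribute names (track id, box frame/xtl/ytl/xbr/ybr/outside), is sketched below using only the Python standard library.

```python
# Minimal sketch: reading per-frame bounding boxes from tracks.xml, which is
# stored in (simplified) CVAT for video 1.1 format: one <track> per individual,
# one <box> per frame with xtl/ytl/xbr/ybr coordinates and an "outside" flag.
import xml.etree.ElementTree as ET
from collections import defaultdict

def load_tracks(path):
    """Return {frame_index: [(track_id, (xtl, ytl, xbr, ybr)), ...]}."""
    boxes_per_frame = defaultdict(list)
    root = ET.parse(path).getroot()
    for track in root.iter("track"):
        track_id = int(track.get("id"))
        for box in track.iter("box"):
            if box.get("outside") == "1":     # individual not visible in this frame
                continue
            frame = int(box.get("frame"))
            coords = tuple(float(box.get(k)) for k in ("xtl", "ytl", "xbr", "ybr"))
            boxes_per_frame[frame].append((track_id, coords))
    return boxes_per_frame

tracks = load_tracks("dataset/video_1/tracks.xml")
print(len(tracks), "annotated frames;", len(tracks.get(0, [])), "boxes in frame 0")
```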

Data Fields

BaboonLand supports three derived tasks:

  • Detection: bounding boxes for baboons (also convertible to Ultralytics/YOLO format via provided scripts).
  • Tracking: per-frame tracks with persistent IDs and bounding boxes (stored in simplified CVAT for video 1.1).
  • Behavior recognition: per-individual mini-scenes (cropped clips centered on each tracked individual) labeled into 12 behavior classes + Occluded; see the conceptual sketch after this list.
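
To make the mini-scene idea concrete, the conceptual sketch below crops a fixed-size window centered on one tracked individual from the original video. It is an illustration only; the released mini-scenes are produced by scripts/tracks2mini-scenes.py, and the 400×400 window size here is an arbitrary assumption.

```python
# Conceptual sketch of a "mini-scene": a fixed-size clip cropped from the
# original drone video, centered on one tracked individual. Illustration only;
# use scripts/tracks2mini-scenes.py to generate the real mini-scenes.
# The 400x400 window size is an arbitrary assumption.
import cv2

def crop_mini_scene(video_path, boxes_by_frame, out_path, size=400):
    """boxes_by_frame: {frame_index: (xtl, ytl, xbr, ybr)} for one track ID."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (size, size))
    frame_idx, half = 0, size // 2
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx in boxes_by_frame:
            xtl, ytl, xbr, ybr = boxes_by_frame[frame_idx]
            cx, cy = int((xtl + xbr) / 2), int((ytl + ybr) / 2)
            h, w = frame.shape[:2]
            x0 = min(max(cx - half, 0), w - size)   # keep the window inside the frame
            y0 = min(max(cy - half, 0), h - size)
            writer.write(frame[y0:y0 + size, x0:x0 + size])
        frame_idx += 1
    cap.release()
    writer.release()
```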

Data Splits

BaboonLand includes task-specific evaluation sets:

  • Tracking: 75% of each video for training, 25% for testing.
  • Detection (YOLO-formatted): 80% training, 7% validation, 13% testing.
  • Behavior recognition (Charades format): 75% training, 25% testing.

Data Collection and Procedures

  • Species: Olive baboons (Papio anubis)
  • Location: Mpala Research Centre, Laikipia County, Kenya
  • Capture: DJI Air 2S, videos recorded at 5.3K resolution
  • Procedure: all flights were conducted more than 20 meters above the closest animal.

Personal and Sensitive Information

  • No humans can be distinguished in the videos.
  • Data collection followed research licensing and animal care protocols (see Acknowledgments).

Authors

  • Isla Duporge
  • Maksim Kholiavchenko
  • Roi Harel
  • Scott Wolf
  • Dan Rubenstein
  • Meg Crofoot
  • Tanya Berger-Wolf
  • Stephen Lee
  • Julie Barreau
  • Jenna Kline
  • Michelle Ramirez
  • Charles Stewart

Citation Information

Dataset

Paper

```bibtex
@article{duporge2025baboonland,
  title={BaboonLand Dataset: Tracking Primates in the Wild and Automating Behaviour Recognition from Drone Videos},
  author={Duporge, Isla and Kholiavchenko, Maksim and Harel, Roi and Wolf, Scott and Rubenstein, Daniel I and Crofoot, Margaret C and Berger-Wolf, Tanya and Lee, Stephen J and Barreau, Julie and Kline, Jenna and others},
  journal={International Journal of Computer Vision},
  pages={1--12},
  year={2025},
  publisher={Springer}
}
```

Contributions / Acknowledgments

This material is based upon work supported by the National Science Foundation under Award No. 2118240 and Award No. 2112606. ID was supported by the National Academy of Sciences Research Associate Program and the United States Army Research Laboratory while conducting this study. ID collected all the UAV data on a Civil Aviation Authority Drone License CAA NQE Approval Number: 0216/1365 in conjunction with authorization from a KCAA operator under a Remote Pilot License. The data was gathered at the Mpala Research Centre in Kenya, in accordance with Research License No. NACOSTI/P/22/18214. The data collection protocol adhered strictly to the guidelines set forth by the Institutional Animal Care and Use Committee under permission No. IACUC 1835F.