---
license: cc0-1.0
language:
- en
pretty_name: MMLA The Wilds
task_categories:
- image-classification
tags:
- biology
- image
- animals
- CV
- drone
- zebra
- African Painted Dog
- Persian Onager
- giraffe
- Grevy's
size_categories:
- 10K<n<100K
description: >-
Annotated video frames of giraffes, Grevy's zebras, Persian onagers, and
African Painted Dogs collected at The Wilds in Ohio. This dataset is intended
for use in training and evaluating computer vision models for animal detection
and classification from drone imagery. It includes frames from various
sessions, with annotations indicating the presence of animals in the images in
YOLO format, and is designed to facilitate research in wildlife monitoring and
conservation using autonomous drones.
configs:
- config_name: African-painted-dogs
data_files: session_1/*/*.jpg
- config_name: Persian-onagers
data_files: session_2/*/*.jpg
- config_name: Giraffes
data_files: session_3/*/*.jpg
- config_name: Grevys-zebras
data_files: session_4/*/*.jpg
---
# Dataset Card for MMLA The Wilds
<!-- Provide a quick summary of what the dataset is or can be used for. -->
## Dataset Details
This dataset contains annotated video frames of giraffes, Grevy's zebras, Persian onagers, and African Painted Dogs, collected at [The Wilds](https://www.thewilds.org/) in Ohio. The dataset is intended for use in training and evaluating computer vision models for animal detection and classification from drone imagery. It includes frames from various sessions, with annotations indicating the presence of animals in the images in YOLO format, and is designed to facilitate research in wildlife monitoring and conservation using autonomous drones.
### Dataset Description
- **Curated by:** Jenna Kline
- **Homepage:** [MMLA website](https://imageomics.github.io/mmla/)
- **Repository:** [imageomics/mmla](https://github.com/imageomics/mmla)
- **Paper:** [MMLA: Multi-Environment, Multi-Species, Low-Altitude Drone Dataset](https://arxiv.org/abs/2504.07744)
<!-- Provide a longer summary of what this dataset is. -->
This dataset contains video frames collected with drones at [The Wilds Conservation Center](https://www.thewilds.org/) in Ohio, USA. The Wilds is a 10,000-acre safari park and conservation center that is home to a variety of endangered species. The dataset includes video frames of African Painted Dogs, Giraffes, Persian Onagers, and Grevy's Zebras, captured during different collection sessions.
Each frame is accompanied by annotations in YOLO format, indicating the presence of animals and their bounding boxes within the image. The annotations were completed manually by the dataset curators using [CVAT](https://www.cvat.ai/) and [kabr-tools](https://github.com/Imageomics/kabr-tools).
| Session | Date Collected | Size (pixels) | Total Frames | Species | Drone Model | Video IDs |
|---------|----------------|---------------|--------------|---------|-------------|-----------|
| `session_1` | 2024-06-14 | 2720 x 1530 | 13,749 | African Painted Dog | DJI Mini | DJI_0034, DJI_0035 |
| `session_2` | 2024-04-18 | 4096 x 2160 | 4,053 | Persian Onager | Parrot Anafi | P0100010, P0110011, P0080008, P0090009, P0070007, P0160016, P0120012 |
| `session_3` | 2024-07-31 | 3840 x 2160 | 3,436 | Giraffe | Parrot Anafi | P0140018, P0150019 |
| `session_4` | 2024-07-31 | 4096 x 2160 | 506 | Grevy's Zebra | Parrot Anafi | P0070010 |
| **Total Frames:** | | | **21,744** | | | |
The table above summarizes the data collected at [The Wilds Conservation Center](https://www.thewilds.org/) in Ohio, USA, with session information, collection dates, frame counts, and the primary species observed in each session.
See the [fine-tuned YOLO11m model](https://huggingface.co/imageomics/mmla) that was trained using this dataset.
## Dataset Structure
```
/dataset/
    classes.txt
    session_1/
        DJI_0034/
            DJI_0034_000000.jpg
            DJI_0034_000000.txt
            ...
            DJI_0035_000013.txt
        DJI_0035/
            partition_1.zip
            partition_2.zip
            partition_3.zip
    session_2/
        P0070007/
            P0070007_000000.jpg
            P0070007_000000.txt
            ...
        P0080008/
            ...
        P0090009/
            ...
        P0100010/
            ...
        P0110011/
            ...
        P0120012/
            ...
        P0160016/
            ...
            P0160016_000598.txt
    session_3/
        P0140018/
            P0140018_000000.jpg
            P0140018_000000.txt
            ...
        P0150019/
            ...
            P0150019_000326.txt
    session_4/
        P0070010/
            P0070010_000000.jpg
            P0070010_000000.txt
            ...
            P0070010_000505.txt
```
### Data Instances
All images are named `<video_id>_<frame_number>.jpg` and are stored under the session and video directory to which they belong; these can be matched to collection dates using the table above. Annotations are in YOLO format, stored in a `.txt` file with the same name as the corresponding image.
Note: the `DJI_0035` files are provided as `.zip` archives, due to issues uploading the large directories to Hugging Face.
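For example, the video ID and frame number can be recovered from a filename, and the matching label file located, with a small helper (a sketch; the helper names are ours, not part of the dataset):

```python
from pathlib import Path

def parse_frame_name(image_path: str) -> tuple[str, int]:
    """Split '<video_id>_<frame_number>.jpg' into its video ID and frame number.

    Video IDs may themselves contain underscores (e.g. 'DJI_0034'),
    so only the last underscore separates the frame number.
    """
    stem = Path(image_path).stem              # e.g. 'DJI_0034_000000'
    video_id, frame = stem.rsplit("_", 1)
    return video_id, int(frame)

def annotation_path(image_path: str) -> Path:
    """The YOLO label sits next to the image, with the same stem and a .txt suffix."""
    return Path(image_path).with_suffix(".txt")

print(parse_frame_name("session_1/DJI_0034/DJI_0034_000000.jpg"))  # ('DJI_0034', 0)
```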
### Data Fields
**classes.txt**:
- `0`: zebra
- `1`: giraffe
- `2`: onager
- `3`: dog
**frame_id.txt**:
- `class`: Integer class ID of the object, as defined in `classes.txt`
- `x_center`: X coordinate of the center of the bounding box (normalized to [0, 1])
- `y_center`: Y coordinate of the center of the bounding box (normalized to [0, 1])
- `width`: Width of the bounding box (normalized to [0, 1])
- `height`: Height of the bounding box (normalized to [0, 1])
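As a concrete illustration, a single label line can be decoded into a pixel-space bounding box as follows (a minimal sketch; the label line and image size are made-up examples, and the function name is ours):

```python
# Class map from classes.txt
CLASSES = {0: "zebra", 1: "giraffe", 2: "onager", 3: "dog"}

def yolo_to_pixel_box(line: str, img_w: int, img_h: int):
    """Decode one YOLO label line into (class_name, (x_min, y_min, x_max, y_max)) in pixels."""
    cls, xc, yc, w, h = line.split()
    xc, yc, w, h = (float(v) for v in (xc, yc, w, h))
    x_min = (xc - w / 2) * img_w   # un-normalize and shift from center to corner
    y_min = (yc - h / 2) * img_h
    x_max = (xc + w / 2) * img_w
    y_max = (yc + h / 2) * img_h
    return CLASSES[int(cls)], (x_min, y_min, x_max, y_max)

# Hypothetical label line for a 4096 x 2160 session_2 frame:
print(yolo_to_pixel_box("2 0.5 0.5 0.25 0.5", 4096, 2160))
# ('onager', (1536.0, 540.0, 2560.0, 1620.0))
```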
### Data Splits
This dataset was used in conjunction with the other two [MMLA datasets](https://huggingface.co/collections/imageomics/mmla) for both training and testing the [MMLA YOLO model](https://huggingface.co/imageomics/mmla#training-details).
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. For instance, what you intended to study and why that required curation of a new dataset (or if it's newly collected data and why the data was collected (intended use)), etc. -->
The dataset was created to facilitate research in wildlife monitoring and conservation using advanced imaging technologies. The goal is to develop and evaluate computer vision models that can accurately detect and classify animals from drone imagery, and to assess how well these models generalize across different species and environments.
### Source Data
<!-- This section describes the source data (e.g., news text and headlines, social media posts, translated sentences, ...). As well as an original source it was created from (e.g., sampling from Zenodo records, compiling images from different aggregators, etc.) -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, re-sizing of images, tools and libraries used, etc.
This is what _you_ did to it following collection from the original source; it will be overall processing if you collected the data initially.
-->
The African Painted Dog missions were collected manually using a [DJI Mavic Mini drone](https://www.dji.com/support/product/mavic-mini), while the Giraffe, Persian Onager, and Grevy's Zebra missions were collected using a [Parrot Anafi drone](https://www.parrot.com/us/drones/anafi). The Grevy's zebra and giraffe missions were conducted semi-autonomously using the [WildWing system](https://imageomics.github.io/wildwing/), while the Persian onager data was collected manually. The drones were flown over [The Wilds Conservation Center](https://www.thewilds.org/) in Ohio, capturing video footage of the animals in their natural habitat.
The videos were annotated manually using the Computer Vision Annotation Tool ([CVAT](https://www.cvat.ai/)) and the [kabr-tools](https://github.com/Imageomics/kabr-tools) library. These detection annotations and original video files were then processed to extract individual frames, which were saved as JPEG images. The annotations were converted to YOLO format, with bounding boxes indicating the presence of animals in each frame.
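The conversion to YOLO format amounts to normalizing pixel-space corner coordinates by the image dimensions. A minimal sketch of that step (the function name and values are illustrative, not the actual kabr-tools code):

```python
def pixel_box_to_yolo(class_id: int, x_min: float, y_min: float,
                      x_max: float, y_max: float, img_w: int, img_h: int) -> str:
    """Convert a pixel-space corner box to a normalized YOLO label line."""
    xc = (x_min + x_max) / 2 / img_w   # box center, normalized to [0, 1]
    yc = (y_min + y_max) / 2 / img_h
    w = (x_max - x_min) / img_w        # box extent, normalized to [0, 1]
    h = (y_max - y_min) / img_h
    return f"{class_id} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"

# Illustrative box on a 4096 x 2160 frame, class 1 (giraffe):
print(pixel_box_to_yolo(1, 1536, 540, 2560, 1620, 4096, 2160))
# 1 0.500000 0.500000 0.250000 0.500000
```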
<!-- #### Who are the source data producers?
[More Information Needed] -->
<!-- This section describes the people or systems who originally created the data.
Ex: This dataset is a collection of images taken of the butterfly collection housed at the Ohio State University Museum of Biological Diversity. The associated labels and metadata are the information provided with the collection from biologists that study butterflies and supplied the specimens to the museum.
-->
### Annotations
<!--
If the dataset contains annotations which are not part of the initial data collection, use this section to describe them.
Ex: We standardized the taxonomic labels provided by the various data sources to conform to a uniform 7-rank Linnean structure. (Then, under annotation process, describe how this was done: Our sources used different names for the same kingdom (both _Animalia_ and _Metazoa_), so we chose one for all (_Animalia_). -->
#### Annotation process
[CVAT](https://www.cvat.ai/) and [kabr-tools](https://github.com/Imageomics/kabr-tools) were used to annotate the video frames. The annotation process involved manually labeling the presence of animals in each frame, drawing bounding boxes around them, and converting the annotations to YOLO format.
<!-- This section describes the annotation process such as annotation tools used, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
Jenna Kline (The Ohio State University) - ORCID: 0009-0006-7301-5774 \
Alison Zhong (The Ohio State University) \
Jake Yablok (The Ohio State University)
### Personal and Sensitive Information
The dataset was cleaned to remove any personal or sensitive information.
## Licensing Information
This dataset is dedicated to the public domain (by applying the [CC0-1.0 Public Domain Waiver](https://creativecommons.org/publicdomain/zero/1.0/)) for the benefit of scientific pursuits. We ask that you cite the dataset and journal paper using the citations below if you make use of it in your research.
## Citation
**BibTeX:**
**Data**
```
@misc{mmla_wilds,
  author = {Jenna Kline and Alison Zhong and Jake Yablok},
  title = {MMLA The Wilds Dataset (Revision e61014d)},
  year = {2025},
  url = {https://huggingface.co/datasets/imageomics/mmla-wilds},
  doi = {10.57967/hf/7379},
  publisher = {Hugging Face}
}
```
**Paper**
```
@misc{kline2025mmla,
title={MMLA: Multi-Environment, Multi-Species, Low-Altitude Drone Dataset},
author={Jenna Kline and Samuel Stevens and Guy Maalouf and Camille Rondeau Saint-Jean and Dat Nguyen Ngoc and Majid Mirmehdi and David Guerin and Tilo Burghardt and Elzbieta Pastucha and Blair Costelloe and Matthew Watson and Thomas Richardson and Ulrik Pagh Schultz Lundquist},
year={2025},
eprint={2504.07744},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2504.07744},
}
```
If you use this dataset, please also cite the [WildWing video dataset](https://huggingface.co/datasets/imageomics/wildwingdeployment) used to generate data for sessions 2-4.
**Related Papers**
```
@article{kline2025wildwing,
title={WildWing: An open-source, autonomous and affordable UAS for animal behaviour video monitoring},
author={Kline, Jenna and Zhong, Alison and Irizarry, Kevyn and Stewart, Charles V and Stewart, Christopher and Rubenstein, Daniel I and Berger-Wolf, Tanya},
journal={Methods in Ecology and Evolution},
year={2025},
doi={10.1111/2041-210X.70018},
publisher={Wiley Online Library}
}
```
## Acknowledgements
This work was supported by the [Imageomics Institute](https://imageomics.org), which is funded by the US National Science Foundation's Harnessing the Data Revolution (HDR) program under [Award #2118240](https://www.nsf.gov/awardsearch/showAward?AWD_ID=2118240) (Imageomics: A New Frontier of Biological Information Powered by Knowledge-Guided Machine Learning). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
This work was supported by the AI Institute for Intelligent Cyberinfrastructure with Computational Learning in the Environment [ICICLE](https://icicle.osu.edu/), which is funded by the US National Science Foundation under grant number OAC-2112606.
<!-- You may also want to credit the source of your data, i.e., if you went to a museum or nature preserve to collect it. -->
<!-- ## Glossary -->
<!-- [optional] If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
## More Information
The data was gathered at [The Wilds](https://www.thewilds.org/) with permission from The Wilds Science Committee to take field observations and fly drones in the pastures.
<!-- [optional] Any other relevant information that doesn't fit elsewhere. -->
## Dataset Card Authors
Jenna Kline
## Dataset Card Contact
kline.377 at osu.edu
<!-- Could include who to contact with questions, but this is also what the "Discussions" tab is for. -->