Improve dataset card: Add paper, code, usage, update license, add tasks and tags

#1 by nielsr HF Staff · opened

Files changed (1): README.md (+116 −3)
---
license: cc-by-nc-nd-4.0
task_categories:
- object-detection
- multi-object-tracking
tags:
- multispectral
- drone
- uav
- oriented-bounding-boxes
- computer-vision
- multi-object-tracking
---

# MMOT: The First Challenging Benchmark for Drone-based Multispectral Multi-Object Tracking

The MMOT dataset was presented in the paper [MMOT: The First Challenging Benchmark for Drone-based Multispectral Multi-Object Tracking](https://huggingface.co/papers/2510.12565).

The official code and further details can be found in the GitHub repository: [https://github.com/Annzstbl/MMOT](https://github.com/Annzstbl/MMOT).

## Introduction

Drone-based multi-object tracking is essential yet highly challenging due to small targets, severe occlusions, and cluttered backgrounds. **MMOT** is the *first* large-scale benchmark for **drone-based multispectral multi-object tracking (MOT)**: it integrates **spectral and temporal cues** to evaluate modern tracking algorithms under real-world UAV conditions, providing the multispectral information needed to keep objects discriminable when spatial detail is degraded.

## ✨ Highlights

- **📦 Large Scale** — 125 video sequences, 13.8K frames, and 488.8K annotated oriented boxes across 8 categories
- **🌈 Multispectral Imagery** — 8-band MSI covering the visible to near-infrared spectrum
- **📐 Oriented Bounding Boxes (OBB)** — precise orientation labels for robust aerial association
- **🚁 Real UAV Scenarios** — varying altitudes (80–200 m), illumination, and dense urban traffic
- **🧩 Complete Codebase** — integrates 8 representative trackers (SORT, ByteTrack, OC-SORT, BoT-SORT, MOTR, MOTRv2, MeMOTR, MOTIP)

## 📸 Example Visualization

<p align="center">
  <img src="https://github.com/Annzstbl/MMOT/blob/main/assets/ann_example.jpg?raw=true" width="800">
</p>

*Example annotations from MMOT showcasing diverse and challenging scenarios. Where spatial features are limited by small object size, clutter, or blur, spectral cues provide critical complementary information for reliable discrimination. Zoom in for better visualization.*

## 📂 Dataset Download and Preparation

The **MMOT dataset** can be obtained from this Hugging Face repository. Here, each video sequence is **individually packaged** as a `.tar` file to support **Croissant file generation**.

Each `.tar` archive contains:
- Multispectral frames in `.npy` format
- Frame-wise MOT annotations in `.txt` format (one annotation file per frame)
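
The exact annotation column layout is documented in the official repository. As a purely hypothetical sketch (the delimiter, field order, and field names below are all assumptions, not the official specification), a per-frame `.txt` file could be parsed like this:

```python
from pathlib import Path

def parse_frame_annotations(txt_path):
    """Parse one frame's annotation file.

    HYPOTHETICAL layout -- consult the official MMOT repo for the real
    specification. Each comma-separated line is assumed to hold a track
    id, a class id, and an oriented box as four (x, y) corner points.
    """
    objects = []
    for line in Path(txt_path).read_text().splitlines():
        fields = line.split(",")
        objects.append({
            "track_id": int(fields[0]),
            "class_id": int(fields[1]),
            "obb": [float(v) for v in fields[2:10]],  # x1 y1 ... x4 y4
        })
    return objects

# Demo with a dummy annotation file
Path("000001.txt").write_text("1,3,10,10,50,10,50,30,10,30\n")
anns = parse_frame_annotations("000001.txt")
print(anns[0]["track_id"], len(anns[0]["obb"]))  # 1 8
```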

Example (root folder):

```text
MMOT_DATASET
├── train
│   ├── data30-8
│   │   ├── 000001.npy
│   │   ├── 000001.txt
│   │   ├── 000002.npy
│   │   ├── 000002.txt
│   │   └── ...
│   └── ...
└── test
```
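
Each frame file can be loaded directly with NumPy. A minimal sketch, assuming frames are stored as band-last `(H, W, 8)` arrays with one channel per spectral band (an assumption for illustration; verify the actual shape against the official repo):

```python
import numpy as np

# Create a stand-in frame the way MMOT stores one frame per .npy file;
# the (H, W, 8) band-last layout is an assumption for illustration.
frame = np.random.rand(512, 640, 8).astype(np.float32)
np.save("000001.npy", frame)

loaded = np.load("000001.npy")
print(loaded.shape)            # (512, 640, 8)
band_subset = loaded[..., :3]  # e.g. pick three bands for a quick preview
print(band_subset.shape)       # (512, 640, 3)
```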

Please download all `.tar` files and place them into the corresponding `train/` or `test/` folder. Then, you can automatically extract them and convert the structure to the standard MMOT format using:

```bash
python dataset/huggingface_tar_to_standard.py --root /path/to/root
```

This script extracts all tar files and reorganizes the Hugging Face structure into the standard MMOT format.
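
If you only want to unpack the per-sequence archives (without the format conversion the script performs), the extraction step alone can be sketched as follows; the demo fabricates a dummy archive in place of a real one such as `train/data30-8.tar`:

```python
import tarfile
from pathlib import Path

def extract_split(split_dir: Path) -> None:
    """Extract every per-sequence .tar archive in a split folder in place."""
    for tar_path in sorted(split_dir.glob("*.tar")):
        with tarfile.open(tar_path) as tf:
            tf.extractall(split_dir)

# Demo: build a dummy per-sequence archive, then extract it
split = Path("MMOT_demo/train")
split.mkdir(parents=True, exist_ok=True)
dummy = split / "000001.txt"
dummy.write_text("placeholder\n")
with tarfile.open(split / "data30-8.tar", "w") as tf:
    tf.add(dummy, arcname="data30-8/000001.txt")
dummy.unlink()

extract_split(split)
print((split / "data30-8" / "000001.txt").read_text())  # placeholder
```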

📁 **Standard Directory Layout**

After processing (from either source), your dataset directory should appear as follows:

```text
MMOT_DATASET
├── train
│   ├── npy
│   │   ├── data23-2
│   │   │   ├── 000001.npy
│   │   │   └── 000002.npy
│   │   ├── data23-3
│   │   └── ...
│   └── mot
│       ├── data23-2.txt
│       ├── data23-3.txt
│       └── ...
└── test
    ├── npy
    └── mot
```
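
In this standard layout, each sequence pairs a frame folder under `npy/` with a single annotation file under `mot/`, so iterating over sequences is straightforward. A minimal sketch (the demo fabricates a tiny layout; names like `list_sequences` are illustrative, not part of the official codebase):

```python
from pathlib import Path

def list_sequences(split_dir):
    """Pair each sequence's frame folder (npy/<seq>/) with its
    annotation file (mot/<seq>.txt) in the standard MMOT layout."""
    npy_dir = split_dir / "npy"
    mot_dir = split_dir / "mot"
    pairs = []
    for seq_dir in sorted(p for p in npy_dir.iterdir() if p.is_dir()):
        frames = sorted(seq_dir.glob("*.npy"))
        ann = mot_dir / f"{seq_dir.name}.txt"
        pairs.append((seq_dir.name, frames, ann))
    return pairs

# Demo on a fabricated miniature layout
root = Path("demo_MMOT/train")
(root / "npy" / "data23-2").mkdir(parents=True, exist_ok=True)
(root / "mot").mkdir(exist_ok=True)
(root / "npy" / "data23-2" / "000001.npy").touch()
(root / "mot" / "data23-2.txt").touch()

for name, frames, ann in list_sequences(root):
    print(name, len(frames), ann.name)  # data23-2 1 data23-2.txt
```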

Once the dataset has been organized in the structure above, link it to the main project directory for unified access:

```bash
# Link dataset to project root
ln -s /path/to/MMOT_dataset ./data
```

## ⚖️ License

The **MMOT dataset** is released under the [CC BY-NC-ND 4.0 License](https://creativecommons.org/licenses/by-nc-nd/4.0/). It is intended for academic research only. You must attribute the original source, and you may not modify or redistribute the dataset without permission.

## 📖 Citation

If you use the MMOT dataset, code, or benchmark results in your research, please cite:

```bibtex
@inproceedings{li2025mmot,
  title     = {MMOT: The First Challenging Benchmark for Drone-based Multispectral Multi-Object Tracking},
  author    = {Li, Tianhao and Xu, Tingfa and Wang, Ying and Qin, Haolin and Lin, Xu and Li, Jianan},
  booktitle = {NeurIPS 2025 Datasets and Benchmarks Track},
  year      = {2025},
  url       = {https://arxiv.org/abs/2510.12565}
}
```