--- |
|
|
pretty_name: CHIRLA |
|
|
license: cc-by-4.0 |
|
|
tags: |
|
|
- reidentification |
|
|
- tracking |
|
|
- long-term |
|
|
- multi-camera |
|
|
configs: |
|
|
- config_name: reid_long_term |
|
|
data_files: |
|
|
- split: gallery |
|
|
path: data/reid_long_term_gallery.parquet/gallery-* |
|
|
- split: query |
|
|
path: data/reid_long_term_query.parquet/query-* |
|
|
- split: train |
|
|
path: data/reid_long_term_train.parquet/train-* |
|
|
- split: val |
|
|
path: data/reid_long_term_val.parquet/val-* |
|
|
- config_name: reid_multi_cam |
|
|
data_files: |
|
|
- split: gallery |
|
|
path: data/reid_multi_camera_gallery.parquet/gallery-* |
|
|
- split: query |
|
|
path: data/reid_multi_camera_query.parquet/query-* |
|
|
- split: train |
|
|
path: data/reid_multi_camera_train.parquet/train-* |
|
|
- split: val |
|
|
path: data/reid_multi_camera_val.parquet/val-* |
|
|
- config_name: reid_multi_cam_long_term |
|
|
data_files: |
|
|
- split: gallery |
|
|
path: data/reid_multi_camera_long_term_gallery.parquet/gallery-* |
|
|
- split: query |
|
|
path: data/reid_multi_camera_long_term_query.parquet/query-* |
|
|
- split: train |
|
|
path: data/reid_multi_camera_long_term_train.parquet/train-* |
|
|
- split: val |
|
|
path: data/reid_multi_camera_long_term_val.parquet/val-* |
|
|
- config_name: reid_reappearance |
|
|
data_files: |
|
|
- split: gallery |
|
|
path: data/reid_reappearance_gallery.parquet/gallery-* |
|
|
- split: query |
|
|
path: data/reid_reappearance_query.parquet/query-* |
|
|
- split: train |
|
|
path: data/reid_reappearance_train.parquet/train-* |
|
|
- split: val |
|
|
path: data/reid_reappearance_val.parquet/val-* |
|
|
- config_name: tracking_brief |
|
|
data_files: |
|
|
- split: test |
|
|
path: data/tracking_brief_occlusions_test.parquet/test-* |
|
|
- split: train |
|
|
path: data/tracking_brief_occlusions_train.parquet/train-* |
|
|
- config_name: tracking_multi |
|
|
data_files: |
|
|
- split: test |
|
|
path: data/tracking_multiple_people_occlusions_test.parquet/test-* |
|
|
- split: train |
|
|
path: data/tracking_multiple_people_occlusions_train.parquet/train-* |
|
|
- config_name: videos |
|
|
data_files: |
|
|
- split: test_all |
|
|
path: data/videos_test_all.parquet |
|
|
- split: train_all |
|
|
path: data/videos_train_all.parquet |
|
|
dataset_info: |
|
|
- config_name: reid_long_term |
|
|
features: |
|
|
- name: image |
|
|
dtype: image |
|
|
- name: image_path |
|
|
dtype: string |
|
|
- name: annotation_path |
|
|
dtype: string |
|
|
- name: id |
|
|
dtype: int32 |
|
|
- name: task |
|
|
dtype: string |
|
|
- name: scenario |
|
|
dtype: string |
|
|
- name: split |
|
|
dtype: string |
|
|
- name: subset |
|
|
dtype: string |
|
|
- name: seq |
|
|
dtype: string |
|
|
- name: camera |
|
|
dtype: string |
|
|
- name: frame_name |
|
|
dtype: string |
|
|
- name: resolution |
|
|
dtype: string |
|
|
splits: |
|
|
- name: gallery |
|
|
num_bytes: 23137624 |
|
|
num_examples: 368 |
|
|
- name: query |
|
|
num_bytes: 291182290 |
|
|
num_examples: 4903 |
|
|
- name: train |
|
|
num_bytes: 1374241 |
|
|
num_examples: 65 |
|
|
- name: val |
|
|
num_bytes: 36454856 |
|
|
num_examples: 1177 |
|
|
download_size: 349760272 |
|
|
dataset_size: 352149011 |
|
|
- config_name: reid_multi_cam |
|
|
features: |
|
|
- name: image |
|
|
dtype: image |
|
|
- name: image_path |
|
|
dtype: string |
|
|
- name: annotation_path |
|
|
dtype: string |
|
|
- name: id |
|
|
dtype: int32 |
|
|
- name: task |
|
|
dtype: string |
|
|
- name: scenario |
|
|
dtype: string |
|
|
- name: split |
|
|
dtype: string |
|
|
- name: subset |
|
|
dtype: string |
|
|
- name: seq |
|
|
dtype: string |
|
|
- name: camera |
|
|
dtype: string |
|
|
- name: frame_name |
|
|
dtype: string |
|
|
- name: resolution |
|
|
dtype: string |
|
|
splits: |
|
|
- name: gallery |
|
|
num_bytes: 17081453 |
|
|
num_examples: 305 |
|
|
- name: query |
|
|
num_bytes: 249968675 |
|
|
num_examples: 4454 |
|
|
- name: train |
|
|
num_bytes: 2410542 |
|
|
num_examples: 40 |
|
|
- name: val |
|
|
num_bytes: 22258637 |
|
|
num_examples: 421 |
|
|
download_size: 280781604 |
|
|
dataset_size: 291719307 |
|
|
- config_name: reid_multi_cam_long_term |
|
|
features: |
|
|
- name: image |
|
|
dtype: image |
|
|
- name: image_path |
|
|
dtype: string |
|
|
- name: annotation_path |
|
|
dtype: string |
|
|
- name: id |
|
|
dtype: int32 |
|
|
- name: task |
|
|
dtype: string |
|
|
- name: scenario |
|
|
dtype: string |
|
|
- name: split |
|
|
dtype: string |
|
|
- name: subset |
|
|
dtype: string |
|
|
- name: seq |
|
|
dtype: string |
|
|
- name: camera |
|
|
dtype: string |
|
|
- name: frame_name |
|
|
dtype: string |
|
|
- name: resolution |
|
|
dtype: string |
|
|
splits: |
|
|
- name: gallery |
|
|
num_bytes: 13251761 |
|
|
num_examples: 252 |
|
|
- name: query |
|
|
num_bytes: 145517741 |
|
|
num_examples: 2207 |
|
|
- name: train |
|
|
num_bytes: 1630450 |
|
|
num_examples: 27 |
|
|
- name: val |
|
|
num_bytes: 10386720 |
|
|
num_examples: 258 |
|
|
download_size: 156168119 |
|
|
dataset_size: 170786672 |
|
|
- config_name: reid_reappearance |
|
|
features: |
|
|
- name: image |
|
|
dtype: image |
|
|
- name: image_path |
|
|
dtype: string |
|
|
- name: annotation_path |
|
|
dtype: string |
|
|
- name: id |
|
|
dtype: int32 |
|
|
- name: task |
|
|
dtype: string |
|
|
- name: scenario |
|
|
dtype: string |
|
|
- name: split |
|
|
dtype: string |
|
|
- name: subset |
|
|
dtype: string |
|
|
- name: seq |
|
|
dtype: string |
|
|
- name: camera |
|
|
dtype: string |
|
|
- name: frame_name |
|
|
dtype: string |
|
|
- name: resolution |
|
|
dtype: string |
|
|
splits: |
|
|
- name: gallery |
|
|
num_bytes: 16801315 |
|
|
num_examples: 258 |
|
|
- name: query |
|
|
num_bytes: 68476798 |
|
|
num_examples: 1251 |
|
|
- name: train |
|
|
num_bytes: 1301279 |
|
|
num_examples: 23 |
|
|
- name: val |
|
|
num_bytes: 4718150 |
|
|
num_examples: 132 |
|
|
download_size: 91179684 |
|
|
dataset_size: 91297542 |
|
|
- config_name: tracking_brief |
|
|
features: |
|
|
- name: image |
|
|
dtype: image |
|
|
- name: image_path |
|
|
dtype: string |
|
|
- name: annotation_path |
|
|
dtype: string |
|
|
- name: id |
|
|
dtype: int32 |
|
|
- name: task |
|
|
dtype: string |
|
|
- name: scenario |
|
|
dtype: string |
|
|
- name: split |
|
|
dtype: string |
|
|
- name: subset |
|
|
dtype: string |
|
|
- name: seq |
|
|
dtype: string |
|
|
- name: camera |
|
|
dtype: string |
|
|
- name: frame_name |
|
|
dtype: string |
|
|
- name: resolution |
|
|
dtype: string |
|
|
splits: |
|
|
- name: test |
|
|
num_bytes: 16171509 |
|
|
num_examples: 849 |
|
|
- name: train |
|
|
num_bytes: 5876739 |
|
|
num_examples: 279 |
|
|
download_size: 21758274 |
|
|
dataset_size: 22048248 |
|
|
- config_name: tracking_multi |
|
|
features: |
|
|
- name: image |
|
|
dtype: image |
|
|
- name: image_path |
|
|
dtype: string |
|
|
- name: annotation_path |
|
|
dtype: string |
|
|
- name: id |
|
|
dtype: int32 |
|
|
- name: task |
|
|
dtype: string |
|
|
- name: scenario |
|
|
dtype: string |
|
|
- name: split |
|
|
dtype: string |
|
|
- name: subset |
|
|
dtype: string |
|
|
- name: seq |
|
|
dtype: string |
|
|
- name: camera |
|
|
dtype: string |
|
|
- name: frame_name |
|
|
dtype: string |
|
|
- name: resolution |
|
|
dtype: string |
|
|
splits: |
|
|
- name: test |
|
|
num_bytes: 3954884 |
|
|
num_examples: 141 |
|
|
- name: train |
|
|
num_bytes: 572030 |
|
|
num_examples: 27 |
|
|
download_size: 4491414 |
|
|
dataset_size: 4526914 |
|
|
--- |
|
|
|
|
|
# Dataset Card for CHIRLA |
|
|
|
|
|
<!-- Provide a quick summary of the dataset. --> |
|
|
CHIRLA (Comprehensive High-resolution Identification and Re-identification for Large-scale Analysis) is a long-term, multi-camera person Re-Identification (Re-ID) and tracking dataset. It spans 7 months, 7 cameras, 22 identities, and ~1M identity-annotated bounding boxes across ~596k frames, captured in connected indoor environments. |
|
|
|
|
|
## Dataset Details |
|
|
|
|
|
### Dataset Description |
|
|
|
|
|
<!-- Provide a longer summary of what this dataset is. --> |
|
|
|
|
|
CHIRLA targets long-term appearance change (e.g., clothing changes over months) and realistic challenges such as occlusions and multi-camera hand-offs. The raw data comprises multi-camera videos, with identity annotations and benchmark splits for person Re-ID and Tracking. The benchmark organization and metadata live in the repository alongside Parquet manifests that reference images/annotations to enable easy loading with 🤗 Datasets. |
|
|
<!-- Key reported stats: 22 individuals, 7 cameras, 4 connected indoor environments, >5 hours of video, and around 1M identity-labeled boxes. --> |
|
|
|
|
|
| Metric | Value |
|--------|-------|
| **Duration** | 7 months |
| **Individuals** | 22 unique persons |
| **Cameras** | 7 multi-view cameras |
| **Video Files** | 70 sequences |
| **Total Frames** | 596,345 frames |
| **Annotations** | 963,554 bounding boxes |
| **Resolution** | 1080×720 pixels |
| **Frame Rate** | 30 fps |
| **Environment** | Indoor office setting |
|
|
|
|
|
- **Curated by:** Bessie Dominguez-Dager |
|
|
- **Language(s) (NLP):** N/A (computer vision dataset) |
|
|
- **License:** CC BY 4.0 (Creative Commons Attribution 4.0) |
|
|
|
|
|
<!-- - **Funded by [optional]:** [More Information Needed] |
|
|
- **Shared by [optional]:** [More Information Needed] --> |
|
|
|
|
|
### Dataset Sources |
|
|
|
|
|
<!-- Provide the basic links for the dataset. --> |
|
|
|
|
|
- **Repository:** [GitHub (bdager/CHIRLA)](https://github.com/bdager/CHIRLA) |
|
|
- **Paper:** [arXiv:2502.06681](https://arxiv.org/abs/2502.06681) |
|
|
<!-- - **Demo [optional]:** [More Information Needed] --> |
|
|
|
|
|
## Uses |
|
|
|
|
|
<!-- Address questions around how the dataset is intended to be used. --> |
|
|
|
|
|
### Direct Use |
|
|
|
|
|
<!-- This section describes suitable use cases for the dataset. --> |
|
|
|
|
|
- Research on person Re-ID under multi-camera and long-term appearance changes. |
|
|
- Person tracking experiments in indoor multi-camera settings. |
|
|
- Benchmarking models on specific scenarios designed for person Re-ID and tracking with splits provided via metadata/manifests in the repo. |
|
|
|
|
|
### Out-of-Scope Use |
|
|
|
|
|
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> |
|
|
|
|
|
- Any deployment aimed at **surveillance, identification, or monitoring of real people** without explicit consent or where it violates privacy or law. |
|
|
- Claims of demographic fairness or broad generalization: CHIRLA has **22 identities** in specific indoor spaces; it is **not** representative of global demographics or environments. |
|
|
|
|
|
|
|
|
## Dataset Structure |
|
|
|
|
|
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> |
|
|
|
|
|
At a high level, the repository is organized as follows:
|
|
|
|
|
```
CHIRLA/
├── videos/        # Original .mp4 videos (Git LFS)
├── annotations/   # Per-camera JSON annotation files
├── benchmark/     # Images + JSONs organized by task/scenario/split
│   ├── reid/
│   ├── tracking/
│   └── metadata/  # CSVs defining splits (ReID: train/val/gallery/query; Tracking: train/test)
└── data/          # Parquet tables for easy loading
```
|
|
|
|
|
<!-- **Parquet columns** |
|
|
|
|
|
- |
|
|
- `image_path` (repo-relative path to the frame) |
|
|
- `annotation_path` (repo-relative JSON annotation file) |
|
|
- `task` (`reid`, `tracking`) |
|
|
- `scenario` (e.g., `long_term`, `multi_camera`, `brief_occlusions`, `multiple_people_occlusions`) |
|
|
- `role` (ReID: `train`, `val`, `gallery`, `query`; Tracking: `train`, `test`) |
|
|
- `split`, `subset` (when applicable), `seq`, `camera` |
|
|
- `person_id`, `frame_name` --> |
|
|
|
|
|
|
|
|
**Splits** |
|
|
|
|
|
- **ReID**: for each scenario, four roles are provided — `train`, `val`, `gallery`, `query`. |
|
|
|
|
|
| Split | Subset | Purpose | Use during dev | Use in final report |
|-----------|------------------|-------------------------------------------|----------------------------|---------------------|
| `train` | train_0 | Small training subset (fine-tuning) | ✅ | ❌ |
| `val` | test_0 | Validation subset (hyperparameter tuning) | ✅ | ❌ |
| `gallery` | train − train_0 | Main gallery for evaluation | ⚠️ feature extraction only | ✅ |
| `query` | test − test_0 | Main queries for evaluation | ❌ | ✅ |
|
|
|
|
- **Tracking**: scenarios use `train`/`test` (no subsets). |
|
|
|
|
|
(See repo [benchmark/README.md](https://github.com/bdager/CHIRLA/tree/main/benchmark) for exact file lists and protocols.) |
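As an illustration of how the `gallery`/`query` roles are typically consumed, here is a minimal rank-1 evaluation sketch. This is not the official CHIRLA protocol; it uses random NumPy embeddings as stand-ins for features from a real Re-ID model, and the helper name `rank1_accuracy` is our own.

```python
# Illustrative sketch only: rank-1 accuracy over a gallery/query split,
# with random embeddings standing in for a real feature extractor.
import numpy as np

rng = np.random.default_rng(0)

def rank1_accuracy(query_feats, query_ids, gallery_feats, gallery_ids):
    """Fraction of queries whose nearest gallery embedding shares their ID."""
    # Cosine similarity: L2-normalize, then compare by dot product.
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    sims = q @ g.T                # (num_query, num_gallery)
    best = sims.argmax(axis=1)    # index of the closest gallery item per query
    return float((gallery_ids[best] == query_ids).mean())

# Toy data: 4 identities, one gallery embedding and two queries each.
gallery_ids = np.arange(4)
gallery_feats = rng.normal(size=(4, 128))
query_ids = np.repeat(gallery_ids, 2)
# Queries are low-noise copies of the matching gallery embedding.
query_feats = gallery_feats[query_ids] + 0.05 * rng.normal(size=(8, 128))

acc = rank1_accuracy(query_feats, query_ids, gallery_feats, gallery_ids)
print(acc)  # 1.0 for this low-noise toy setup
```

In practice you would extract `query_feats`/`gallery_feats` with your model over the `query` and `gallery` splits of a config, and use each row's `id` field for the identity labels.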
|
|
|
|
|
|
|
|
## Dataset Creation |
|
|
|
|
|
### Curation Rationale |
|
|
|
|
|
<!-- Motivation for the creation of this dataset. --> |
|
|
|
|
|
CHIRLA was created to enable evaluation of **video-based, long-term** Re-ID robustness across months and multiple cameras, reflecting real deployments where people's appearance changes substantially over time.
|
|
|
|
|
|
|
|
### Source Data |
|
|
|
|
|
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> |
|
|
|
|
|
#### Data Collection and Processing |
|
|
|
|
|
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> |
|
|
|
|
|
The dataset was recorded at the Robotics, Vision, and Intelligent Systems Research Group headquarters at the University of Alicante, Spain. Seven strategically placed Reolink RLC-410W cameras were used to capture videos in a typical office setting, covering areas such as laboratories, hallways, and shared workspaces. Each camera features a 1/2.7" CMOS image sensor with a 5.0-megapixel resolution and an 80° horizontal field of view. The cameras were connected via Ethernet and WiFi to ensure stable streaming and synchronization. |
|
|
|
|
|
A ROS-based interconnection framework was used to synchronize and retrieve images from all cameras. The dataset includes video recordings at a resolution of 1080×720 pixels, with a consistent frame rate of 30 fps, stored in AVI format with DivX MPEG-4 encoding. |
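The card's scale figures can be cross-checked against each other; for instance, the total frame count at 30 fps works out to roughly 5.5 hours of footage, consistent with the "more than 5 hours of video" reported for the dataset:

```python
# Cross-check of the card's own numbers: frames / fps -> duration in hours.
total_frames = 596_345
fps = 30
hours = total_frames / fps / 3600
print(round(hours, 2))  # 5.52
```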
|
|
|
|
|
|
|
|
#### Who are the source data producers? |
|
|
|
|
|
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> |
|
|
|
|
|
- Participants recorded in an office environment. |
|
|
- Authors collected and annotated the data. |
|
|
|
|
|
### Annotations |
|
|
|
|
|
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> |
|
|
|
|
|
#### Annotation process |
|
|
|
|
|
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> |
|
|
|
|
|
Data processing involved a semi-automatic labeling procedure: |
|
|
|
|
|
**1. Automated Detection and Tracking** |
|
|
- **Detection**: YOLOv8x was used to detect individuals in video frames and extract bounding boxes |
|
|
- **Tracking**: The Deep SORT algorithm was employed to generate tracklets and assign unique IDs to detected individuals |
|
|
|
|
|
**2. Manual Verification and Correction** |
|
|
- **Custom GUI**: A specialized graphical user interface was developed for manual verification and correction |
|
|
- **Identity Consistency**: Bounding boxes and IDs were manually verified for consistency across different cameras and sequences |
|
|
- **Quality Control**: All annotations underwent thorough manual review to ensure accuracy |
|
|
|
|
|
> 🔗 **Labeling Tool**: The custom GUI used for annotation is available at: [CHIRLA Labeling Tool](https://github.com/bdager/preid-labeling-gui) |
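To make the association step above concrete, here is a simplified, illustrative sketch of matching new detections to existing tracks by bounding-box IoU. This is *not* the Deep SORT implementation used for CHIRLA (which additionally uses appearance embeddings and a Kalman motion model); it only demonstrates the greedy geometric-overlap idea, and the function names are our own.

```python
# Illustrative sketch of detection-to-track association by IoU overlap.

def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def greedy_match(tracks, detections, iou_threshold=0.3):
    """Assign each detection to at most one track, highest IoU first."""
    pairs = sorted(
        ((iou(t_box, d_box), tid, di)
         for tid, t_box in tracks.items()
         for di, d_box in enumerate(detections)),
        reverse=True,
    )
    assigned, used_tracks, used_dets = {}, set(), set()
    for score, tid, di in pairs:
        if score < iou_threshold:
            break  # remaining pairs overlap too little to match
        if tid in used_tracks or di in used_dets:
            continue
        assigned[di] = tid  # detection di continues track tid
        used_tracks.add(tid)
        used_dets.add(di)
    return assigned

# Two existing tracks (IDs 7 and 8) and two new detections.
tracks = {7: (0, 0, 10, 10), 8: (20, 20, 30, 30)}
detections = [(21, 19, 31, 29), (1, 1, 11, 11)]
print(greedy_match(tracks, detections))  # {0: 8, 1: 7}
```

Unmatched detections would start new tracklets, which is where the manual verification step then corrects identity switches across cameras and sequences.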
|
|
|
|
|
#### Who are the annotators? |
|
|
|
|
|
<!-- This section describes the people or systems who created the annotations. --> |
|
|
|
|
|
Authors. |
|
|
|
|
|
<!-- #### Personal and Sensitive Information |
|
|
|
|
|
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> |
|
|
|
|
|
<!-- [More Information Needed] |
|
|
|
|
|
## Bias, Risks, and Limitations --> |
|
|
|
|
|
<!-- This section is meant to convey both technical and sociotechnical limitations. --> |
|
|
|
|
|
<!-- [More Information Needed] |
|
|
|
|
|
### Recommendations --> |
|
|
|
|
|
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> |
|
|
|
|
|
<!-- Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. --> |
|
|
|
|
|
## Load Dataset |
|
|
### Quick Start (lightweight): Load Benchmarks with 🤗 Datasets |
|
|
|
|
|
```python
from datasets import load_dataset

# Load the whole dataset
chirla = load_dataset("bdager/CHIRLA")

# Specific scenarios
reid_lt = load_dataset("bdager/CHIRLA", "reid_long_term")
reid_mc = load_dataset("bdager/CHIRLA", "reid_multi_cam")
trk_bo = load_dataset("bdager/CHIRLA", "tracking_brief")
trk_mpo = load_dataset("bdager/CHIRLA", "tracking_multi")

row = reid_lt["train"][0]
print(row.keys())
# ['image', 'image_path', 'annotation_path', 'id', 'task', 'scenario',
#  'split', 'subset', 'seq', 'camera', 'frame_name', 'resolution']
```
|
|
|
|
|
If you want to open an individual `image_path` or `annotation_path` without cloning, use `hf_hub_download`: |
|
|
|
|
|
```python
from huggingface_hub import hf_hub_download

fp = hf_hub_download("bdager/CHIRLA", repo_type="dataset", filename=row["image_path"])
```
|
|
|
|
|
<!-- You can open the files referenced by `image_path` / `annotation_path` from a local clone (see next section), or customize loaders to stream from the Hub. --> |
|
|
|
|
|
|
|
|
|
|
|
### Download the Full Dataset (including videos) |
|
|
|
|
|
#### Option A) Clone with Git LFS (recommended for local work) |
|
|
|
|
|
```bash
git lfs install
git clone https://huggingface.co/datasets/bdager/CHIRLA
```
|
|
|
|
|
This downloads **everything**: videos, annotations, benchmark images, metadata, and manifests. |
|
|
|
|
|
#### Option B) Programmatic download |
|
|
|
|
|
```python
from huggingface_hub import snapshot_download

local_path = snapshot_download("bdager/CHIRLA", repo_type="dataset")
print("Dataset downloaded to:", local_path)
```
|
|
|
|
|
### Fetch All Videos via `load_dataset` |
|
|
|
|
|
If you want to cache all videos through 🤗 Datasets, use the `videos` config. |
|
|
This uses `data/videos_<split>_all.parquet` with a `video_path` column. |
|
|
|
|
|
```python
from datasets import load_dataset

vids = load_dataset("bdager/CHIRLA", "videos")
print(vids)

# Example: inspect a video row
row = vids["train_all"][0]
print(row)
```
|
|
|
|
|
## Citation |
|
|
|
|
|
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> |
|
|
|
|
|
**BibTeX:** |
|
|
|
|
|
```
@article{dominguez2025chirla,
  title   = {CHIRLA: Comprehensive High-resolution Identification and Re-identification for Large-scale Analysis},
  author  = {Domínguez-Dager, Bessie and Escalona, Felix and Gomez-Donoso, Francisco and Cazorla, Miguel},
  journal = {arXiv preprint arXiv:2502.06681},
  year    = {2025}
}
```
|
|
|
|
|
**APA:** |
|
|
|
|
|
Domínguez-Dager, B., Escalona, F., Gómez-Donoso, F., & Cazorla, M. (2025). *CHIRLA: Comprehensive High-resolution Identification and Re-identification for Large-scale Analysis* (arXiv:2502.06681). arXiv. |
|
|
|
|
|
<!-- ## Glossary [optional] --> |
|
|
|
|
|
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> |
|
|
|
|
|
<!-- [More Information Needed] --> |
|
|
|
|
|
<!-- ## More Information [optional] |
|
|
|
|
|
[More Information Needed] |
|
|
|
|
|
## Dataset Card Authors [optional] |
|
|
|
|
|
[More Information Needed] --> |
|
|
|
|
|
## Dataset Card Contact |
|
|
|
|
|
For any questions or support, feel free to contact bessie.dominguez@ua.es or open an issue in the GitHub repository: https://github.com/bdager/CHIRLA/issues. |
|
|
|
|
|
|