Upload README.md with huggingface_hub

README.md CHANGED
### Model Sources

- **Repository:** https://github.com/Imageomics-ABC-edu/final-project-kenyan-ungulates-with-wilddroneeu
- **Paper:** [MMLA](https://arxiv.org/abs/2504.07744)

## Uses

### Direct Use

This model is designed for direct use in wildlife monitoring applications, ecological research, and biodiversity studies. It can:

- Detect and classify zebras, giraffes, onagers, and dogs in camera trap images
- Monitor wildlife populations in their natural habitats
- Assist researchers in automated processing of large image datasets
- Support biodiversity assessments by identifying species present in field surveys

The model can be used by researchers, conservationists, wildlife managers, and citizen scientists to automate and scale up wildlife monitoring efforts, particularly in African ecosystems.

### Downstream Use

This model can be integrated into larger ecological monitoring systems, including:

- Automated camera trap processing pipelines
- Wildlife conservation monitoring platforms
- Ecological research workflows
- Citizen science applications for species identification
- Environmental impact assessment tools

### Out-of-Scope Use

This model is not suitable for:

- Medical diagnosis or human-related detection tasks
- Security or surveillance applications targeting humans
- Applications where detection errors could lead to harmful conservation decisions without human verification
- Real-time detection systems requiring extremely low latency (the model prioritizes accuracy over speed)
- Detection of species not included in the training set (only zebras, giraffes, onagers, and dogs)

## Bias, Risks, and Limitations

- **Species representation bias:** The model may perform better on species that were well represented in the training data.
- **Environmental bias:** Performance may degrade in environmental conditions not represented in the training data (e.g., extreme weather, unusual lighting).
- **Morphological bias:** Similar-looking species may be confused with one another (particularly among equids such as zebras and onagers).
- **Geospatial bias:** The model may perform better in biomes similar to those present in the training data, particularly African savanna environments.
- **Seasonal bias:** Detection accuracy may vary based on seasonal appearance changes in animals or environments.
- **Technical limitations:** Performance depends on image quality, with reduced accuracy on low-resolution, blurry, or poorly exposed images.

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model:

- Always verify critical detections with human review, especially for rare species or conservation decision-making
- Consider confidence scores when evaluating detections
- Be cautious when applying the model to new geographic regions or habitats not represented in the training data
- Periodically validate model performance on new data to ensure continued reliability
- Consider fine-tuning the model on domain-specific data when applying it to new regions or species

## How to Get Started with the Model

Use the code below to get started with the model:

```python
from ultralytics import YOLO

# Load the fine-tuned weights (placeholder path; point it at your local copy)
model = YOLO("best.pt")

# Run inference (placeholder path; any image or directory of images works)
results = model("path/to/image.jpg")

for result in results:
    boxes = result.boxes  # Boxes object for bounding boxes outputs
    for box in boxes:
        x1, y1, x2, y2 = box.xyxy[0]  # get box coordinates
        conf = box.conf[0]  # confidence score
        cls = int(box.cls[0])  # class id
        class_name = model.names[cls]  # class name (Zebra, Giraffe, Onager, or Dog)
        print(f"Detected {class_name} with confidence {conf:.2f} at position {x1:.1f}, {y1:.1f}, {x2:.1f}, {y2:.1f}")

# Visualize results
results[0].plot()
```

The dataset is available on [Hugging Face](https://huggingface.co/collections/imageomics/wildwing-67f572d3ba17fca922c80182). See `/data/dataset.yaml` for details on the train/val/test splits.

### Training Procedure

- **Base model:** YOLOv11m (`yolo11m.pt`)
- **Epochs:** 50
- **Image size:** 640
- **Dataset configuration:** Custom YAML file defining 4 classes (Zebra, Giraffe, Onager, Dog)
- **Training regime:** Default YOLOv11 training parameters

```python
from ultralytics import YOLO

# Training call matching the hyperparameters above
# (the dataset YAML path is a placeholder)
model = YOLO("yolo11m.pt")
model.train(
    data="data/dataset.yaml",
    epochs=50,
    imgsz=640,
)
```

- **Inference speed:** [FPS on specific hardware]

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

The model was evaluated on a held-out test set located at `/fs/ess/PAS2136/Kenya-2023/yolo_benchmark/HerdYOLO/data/images/test` containing:

- [Number] test images with instances of Zebra, Giraffe, Onager, and Dog
- [Any other relevant testing data details]

#### Factors

The evaluation disaggregated performance by:

- Species (Zebra, Giraffe, Onager, African wild dog)

#### Metrics

The model was evaluated using standard object detection metrics:

- **Precision:** Ratio of true positives to all predicted positives
- **Recall:** Ratio of true positives to all actual positives (ground truth)
- **mAP50:** Mean Average Precision at an IoU threshold of 0.5
- **mAP50-95:** Mean Average Precision averaged over IoU thresholds from 0.5 to 0.95

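IoU, the overlap measure behind the mAP thresholds above, can be computed directly from box coordinates. A minimal sketch in plain Python, using the same `(x1, y1, x2, y2)` box convention as the inference example earlier in this card:

```python
def iou(box_a, box_b):
    """Intersection over Union for two boxes in (x1, y1, x2, y2) format."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (0, 0, 10, 10)))  # identical boxes score 1.0
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # half-shifted boxes score 1/3
```

At mAP50, a detection counts as a true positive when its IoU with a ground-truth box is at least 0.5.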
### Results

#### Summary

- **Overall mAP50:** [Value]
- **Overall mAP50-95:** [Value]
- **Per-class performance:**
  - Zebra: mAP50 = [Value], Precision = [Value], Recall = [Value]
  - Giraffe: mAP50 = [Value], Precision = [Value], Recall = [Value]
  - Onager: mAP50 = [Value], Precision = [Value], Recall = [Value]
  - Dog: mAP50 = [Value], Precision = [Value], Recall = [Value]

## Model Examination

- **Confusion analysis:** [Any notable confusion between classes, such as between Zebra and Onager]
- **Failure cases:** [Specific conditions where the model performs less reliably]
- **Interpretability findings:** [Any insights from model interpretation techniques]

## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://doi.org/10.48550/arXiv.1910.09700).

- **Hardware Type:** [GPU model]
- **Hours used:** [Number]
- **Cloud Provider:** [Provider name or local]
- **Compute Region:** [Region]
- **Carbon Emitted:** [Amount] kg CO₂eq

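The calculator's estimate reduces to power draw × time × grid carbon intensity. A back-of-the-envelope sketch; the wattage, hours, and intensity figures below are illustrative assumptions, not measurements from this training run:

```python
# Rough CO2 estimate: GPU power (kW) x hours x grid intensity (kg CO2eq / kWh)
gpu_power_kw = 0.3          # e.g. a ~300 W data-center GPU (assumption)
hours = 10                  # training wall-clock time (assumption)
intensity_kg_per_kwh = 0.4  # regional grid carbon intensity (assumption)

co2_kg = gpu_power_kw * hours * intensity_kg_per_kwh
print(f"~{co2_kg:.1f} kg CO2eq")
```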
## Technical Specifications

### Model Architecture and Objective

- Base architecture: YOLOv11m
- Detection heads: Standard YOLOv11 architecture
- Classes: 4 (Zebra, Giraffe, Onager, Dog)

### Compute Infrastructure

#### Hardware

- **Training:** [GPU/CPU details]
- **Inference:** Tested on [range of devices]
- **Minimum requirements:** [Specifications]

#### Software

- Python 3.8+
- PyTorch 2.0+
- Ultralytics YOLOv11 framework
- CUDA 11.7+ (for GPU acceleration)

## Citation

**BibTeX:**

```bibtex
@article{kline2025mmla,
  title={MMLA: Multi-Environment, Multi-Species, Low-Altitude Aerial Footage Dataset},
  author={Kline, Jenna and Stevens, Samuel and Maalouf, Guy and Saint-Jean, Camille Rondeau and Ngoc, Dat Nguyen and Mirmehdi, Majid and Guerin, David and Burghardt, Tilo and Pastucha, Elzbieta and Costelloe, Blair and others},
  journal={arXiv preprint arXiv:2504.07744},
  year={2025}
}
```

## Acknowledgements

This work was supported by both the [Imageomics Institute](https://imageomics.org) and the [AI and Biodiversity Change (ABC) Global Center](http://abcresearchcenter.org). The Imageomics Institute is funded by the US National Science Foundation's Harnessing the Data Revolution (HDR) program under [Award #2118240](https://www.nsf.gov/awardsearch/showAward?AWD_ID=2118240) (Imageomics: A New Frontier of Biological Information Powered by Knowledge-Guided Machine Learning). The ABC Global Center is funded by the US National Science Foundation under [Award No. 2330423](https://www.nsf.gov/awardsearch/showAward?AWD_ID=2330423&HistoricalAwards=false) and the Natural Sciences and Engineering Research Council of Canada under [Award No. 585136](https://www.nserc-crsng.gc.ca/ase-oro/Details-Detailles_eng.asp?id=782440). This model draws on research supported by the Social Sciences and Humanities Research Council.

Additional support was provided by the National Ecological Observatory Network (NEON), a program sponsored by the National Science Foundation and operated under cooperative agreement by Battelle Memorial Institute. This material is based in part upon work supported by the National Science Foundation through the NEON Program.

Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation, the Natural Sciences and Engineering Research Council of Canada, or the Social Sciences and Humanities Research Council.

## Glossary

- **YOLO:** You Only Look Once, a family of real-time object detection models
- **mAP:** mean Average Precision, a standard metric for evaluating object detection models
- **IoU:** Intersection over Union, a measure of overlap between predicted and ground-truth bounding boxes
- **Onager:** Also known as the Asian wild ass, a species of equid native to Asia
- **YOLOv11m:** The medium-sized variant of the YOLOv11 architecture

## More Information

[Any additional information you'd like to include]

## Model Card Authors

Jenna Kline, The Ohio State University

## Model Card Contact

kline.377 at osu dot edu

# MMLA Repo

Multi-Environment, Multi-Species, Low-Altitude Aerial Footage Dataset

Example photo from the MMLA dataset with labels generated by the model. The image shows a group of zebras and giraffes at the Mpala Research Centre in Kenya.

## Table of Contents

- [How to use the scripts in this repo](#how-to-use-the-scripts-in-this-repo)
  - [Requirements](#requirements)
- [Baseline YOLO evaluation](#baseline-yolo-evaluation)
  - [Download evaluation data from HuggingFace](#download-evaluation-data-from-huggingface)
  - [Run the evaluate_yolo script](#run-the-evaluate_yolo-script)
- [Model Training](#model-training)
  - [Prepare the dataset](#prepare-the-dataset)
  - [Optional: Downsample the frames](#optional-downsample-the-frames)
  - [Run the training script](#run-the-training-script)
- [Evaluation](#evaluation)
  - [Optional: Perform bootstrapping](#optional-perform-bootstrapping)
- [Results](#results)
- [Fine-Tuned Model Weights](#fine-tuned-model)
- [Paper](#paper)
- [Dataset](#dataset)

This repo provides scripts to fine-tune YOLO models on the MMLA dataset. The [MMLA dataset](https://huggingface.co/collections/imageomics/wildwing-67f572d3ba17fca922c80182) is a collection of low-altitude aerial footage of various species in different environments. The dataset is designed to help researchers and practitioners develop and evaluate object detection models for wildlife monitoring and conservation.

# How to use the scripts in this repo

### Requirements

```bash
# install packages from requirements
conda create --name yolo_env --file requirements.txt
# OR using pip
pip install -r requirements.txt
```

## Baseline YOLO evaluation

### Download evaluation data from HuggingFace

This dataset contains an evenly distributed set of frames from the MMLA dataset, with bounding box annotations for each frame. It is designed to help researchers and practitioners evaluate the performance of object detection models on low-altitude aerial footage spanning a variety of environments and species.

```bash
# download the datasets from HuggingFace to local /data directory
git clone
```

### Run the evaluate_yolo script

```bash
# example usage
python model_eval/evaluate_yolo.py --model model_eval/yolov5mu.pt --images model_eval/eval_data/frames_500_coco --annotations model_eval/eval_data/frames_500_coco --output model_eval/results/frames_500_coco/yolov5m
```

## Model Training

### Prepare the dataset

```bash
# download the datasets from HuggingFace to local /data directory
# wilds dataset
git clone https://huggingface.co/datasets/imageomics/wildwing_wilds
# opc dataset
git clone https://huggingface.co/datasets/imageomics/wildwing_opc
# mpala dataset
git clone https://huggingface.co/datasets/imageomics/wildwing_mpala

# run the script to split the dataset into train and test sets
python prepare_yolo_dataset.py
```

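`prepare_yolo_dataset.py` arranges the frames and labels into YOLO's expected layout. The dataset YAML that training would point at might look like the following sketch; the paths and filename are assumptions, so check the script's actual output:

```yaml
# data/dataset.yaml (illustrative layout; verify against prepare_yolo_dataset.py)
path: ./data
train: images/train
val: images/val
test: images/test

names:
  0: Zebra
  1: Giraffe
  2: Onager
  3: Dog
```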
#### Alternatively, you can create your own dataset from video frames and bounding box annotations

```bash
python frame_extractor.py --dataset wilds --dataset_path ./wildwing_wilds --output_dir ./wildwing_wilds
```

### Optional: Downsample the frames to extract a subset of frames from each video

```bash
python downsample.py --dataset wilds --dataset_path ./wildwing_wilds --output_dir ./wildwing_wilds --downsample_rate 0.1
```
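
The core operation of `downsample.py` is keeping a fraction of frames from each video. A minimal sketch of that idea; the function and frame naming here are illustrative, not the script's actual code:

```python
def downsample_frames(frames, rate):
    """Keep roughly `rate` of the frames, evenly spaced across the video."""
    step = max(1, round(1 / rate))
    return frames[::step]

frames = [f"frame_{i:04d}.jpg" for i in range(100)]
kept = downsample_frames(frames, rate=0.1)
print(len(kept))  # 100 frames downsampled to 10
```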
### Run the training script

```bash
# run the training script
python train.py
```

## Evaluation

To evaluate the trained model on the test data:

```bash
# run the validate script
python validate.py
```

### Optional: Perform bootstrapping to get confidence intervals

Run the `bootstrap.ipynb` notebook:

```bash
jupyter notebook bootstrap.ipynb
```
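
The resampling step in a bootstrap like this amounts to sampling images with replacement and recomputing the metric many times. A minimal sketch of a percentile bootstrap over per-image scores; the score values and replicate count are illustrative, not taken from the notebook:

```python
import random

def bootstrap_ci(per_image_scores, n_boot=1000, alpha=0.05, seed=42):
    """Percentile bootstrap confidence interval for the mean per-image score."""
    rng = random.Random(seed)
    n = len(per_image_scores)
    means = []
    for _ in range(n_boot):
        # resample images with replacement, then recompute the mean score
        sample = [per_image_scores[rng.randrange(n)] for _ in range(n)]
        means.append(sum(sample) / n)
    means.sort()
    lo = means[int((alpha / 2) * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# e.g. per-image mAP50 values from a validation run (illustrative numbers)
scores = [0.81, 0.76, 0.90, 0.68, 0.85, 0.79, 0.88, 0.72]
low, high = bootstrap_ci(scores)
print(f"mAP50 95% CI: [{low:.3f}, {high:.3f}]")
```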
#### Download inference results from baseline and fine-tuned model

## Results

Our fine-tuned YOLO11m model achieves the following performance on the MMLA dataset:

| Class   | Images | Instances | Box(P) | R     | mAP50 | mAP50-95 |
|---------|--------|-----------|--------|-------|-------|----------|
| all     | 7,658  | 44,619    | 0.867  | 0.764 | 0.801 | 0.488    |
| Zebra   | 4,430  | 28,219    | 0.768  | 0.647 | 0.675 | 0.273    |
| Giraffe | 868    | 1,357     | 0.788  | 0.634 | 0.678 | 0.314    |
| Onager  | 172    | 1,584     | 0.939  | 0.776 | 0.857 | 0.505    |
| Dog     | 3,022  | 13,459    | 0.973  | 0.998 | 0.995 | 0.860    |

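As a sanity check on the table, the "all" row's mAP50 is the unweighted mean of the four per-class values:

```python
# per-class mAP50 values from the results table
per_class_map50 = {"Zebra": 0.675, "Giraffe": 0.678, "Onager": 0.857, "Dog": 0.995}
macro_map50 = sum(per_class_map50.values()) / len(per_class_map50)
print(round(macro_map50, 3))  # 0.801, matching the "all" row
```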
# Fine-Tuned Model

See the [HuggingFace repo](https://huggingface.co/imageomics/mmla) for details and weights.

# Dataset

See the [HuggingFace repo](https://huggingface.co/collections/imageomics/wildwing-67f572d3ba17fca922c80182) for the MMLA dataset.

# Paper

```bibtex
@article{kline2025mmla,
  title={MMLA: Multi-Environment, Multi-Species, Low-Altitude Aerial Footage Dataset},
  author={Kline, Jenna and Stevens, Samuel and Maalouf, Guy and Saint-Jean, Camille Rondeau and Ngoc, Dat Nguyen and Mirmehdi, Majid and Guerin, David and Burghardt, Tilo and Pastucha, Elzbieta and Costelloe, Blair and others},
  journal={arXiv preprint arXiv:2504.07744},
  year={2025}
}
```