---
license: cc-by-4.0
tags:
- image-object-detection
- fmiyc
- out-of-distribution-detection
pretty_name: FindMeIfYouCan
task_categories:
- object-detection
- other
size_categories:
- 1K<n<10K
---

# FMIYC (Find Me If You Can) Dataset

## Dataset Description

**Paper:** FindMeIfYouCan: Bringing Open Set metrics to near, far and farther Out-of-Distribution Object detection (Montoya et al., YYYY - Please update year and provide link)

**Point of Contact:** Daniel Montoya (daniel-alfonso.montoyavasquez@cea.fr)

The FMIYC (Find Me If You Can) dataset is designed for Out-of-Distribution (OOD) object detection. It comprises images and annotations derived and adapted from the COCO (Common Objects in Context) and OpenImages datasets, which FMIYC curates into new evaluation splits categorized as near, far, and farther from the In-Distribution (ID) data, based on semantic similarity.

This version of the dataset was prepared by restructuring the original data into separate configurations, each containing image files and a `metadata.jsonl` file detailing the annotations for those images.

### Supported Tasks
* Object Detection
* Out-of-Distribution Object Detection

### Languages
The annotations and descriptions are primarily in English.

## Dataset Structure

The dataset is organized into several configurations. Each configuration (e.g., `coco_far_voc`, `oi_near_voc`) represents a distinct subset of the data and is processed into a `train` split.

### Data Fields
The data for each image typically includes:
* `file_name`: The filename of the image.
* `image_id`: The original unique identifier for the image.
* `height`: Height of the image in pixels.
* `width`: Width of the image in pixels.
* `dataset_origin`: Source dataset (`"COCO"` or `"OpenImages"`).
* `distance_category`: Distance from the ID data (`"near"`, `"far"`, or `"farther"`).
* `objects`: A list of annotated objects, each with:
  * `id`: Annotation ID.
  * `area`: Bounding box area.
  * `bbox_x`, `bbox_y`, `bbox_width`, `bbox_height`: Flattened bounding box coordinates.
  * `category_id`: Category ID.
* `categories`: A list of all possible object categories for the configuration, each with `id`, `name`, and `supercategory`.
* `image`: The image data itself (loaded by the library).

*(Data types are inferred by the Hugging Face datasets library during conversion, typically to strings, floats, integers, and lists/structs as appropriate.)*
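The flattened bounding-box fields can be regrouped into COCO-style `[x, y, width, height]` boxes. A minimal sketch, using a made-up record (the file name and values below are illustrative, not taken from the dataset):

```python
# Illustrative record mimicking the per-image fields described above.
sample = {
    "file_name": "example_image.jpg",
    "objects": [
        {"id": 1, "area": 1200.0, "bbox_x": 10.0, "bbox_y": 20.0,
         "bbox_width": 30.0, "bbox_height": 40.0, "category_id": 3},
    ],
}

def to_coco_boxes(record):
    """Rebuild a list of COCO-style [x, y, width, height] boxes
    from the flattened bbox fields of one image record."""
    return [
        [o["bbox_x"], o["bbox_y"], o["bbox_width"], o["bbox_height"]]
        for o in record["objects"]
    ]

boxes = to_coco_boxes(sample)
print(boxes)  # [[10.0, 20.0, 30.0, 40.0]]
```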

### Dataset Configurations
* `coco_far_voc`
* `coco_farther_bdd`
* `coco_near_voc`
* `oi_far_voc`
* `oi_farther_bdd`
* `oi_near_voc`
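The configuration names appear to follow the pattern `<origin>_<distance>_<suffix>` (with `oi` for OpenImages). A small helper sketch for picking one by origin and distance; the Hub repository id in the commented-out loading lines is a placeholder, not the actual path:

```python
# Configuration names as listed above.
CONFIGS = [
    "coco_far_voc", "coco_farther_bdd", "coco_near_voc",
    "oi_far_voc", "oi_farther_bdd", "oi_near_voc",
]

def pick_config(origin, distance):
    """Return the configuration name matching a source dataset
    ('coco' or 'oi') and a distance category ('near', 'far', 'farther')."""
    matches = [
        c for c in CONFIGS
        if c.split("_")[0] == origin and c.split("_")[1] == distance
    ]
    if not matches:
        raise ValueError(f"no configuration for {origin!r}/{distance!r}")
    return matches[0]

name = pick_config("oi", "near")
print(name)  # oi_near_voc

# Loading with the Hugging Face datasets library (replace the repo id):
# from datasets import load_dataset
# ds = load_dataset("<hub-repo-id>", name, split="train")
```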

## Dataset Creation
The FMIYC dataset is manually curated and enriched by exploiting semantic similarity with existing benchmarks, primarily COCO and OpenImages, to create new evaluation splits. For full details on the curation process, please refer to the associated paper.

## Disclaimers and Source Dataset Information
The FMIYC dataset is a derivative work that uses images and annotations originally from the COCO and OpenImages datasets. Users of the FMIYC dataset should also be aware of, and adhere to, the licenses and terms of use of these source datasets.

* **COCO (Common Objects in Context):** Please refer to the official COCO dataset website ([https://cocodataset.org/](https://cocodataset.org/)) for specific details on image sources and licensing.
* **OpenImages:** Please refer to the official OpenImages dataset website ([https://storage.googleapis.com/openimages/web/index.html](https://storage.googleapis.com/openimages/web/index.html)) for specific details on image sources and licensing.

If you use the images or annotations, please also cite the source datasets:

```bibtex
@inproceedings{lin2014microsoft,
  title={Microsoft COCO: Common objects in context},
  author={Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and Hays, James and Perona, Pietro and Ramanan, Deva and Doll{\'a}r, Piotr and Zitnick, C Lawrence},
  booktitle={European conference on computer vision},
  pages={740--755},
  year={2014},
  organization={Springer}
}

@article{OpenImages,
  author = {Kuznetsova, A. and Rom, H. and Alldrin, N. and Uijlings, J. and Krasin, I. and Pont-Tuset, J. and Kamali, S. and Popov, S. and Malloci, M. and Kolesnikov, A. and Duerig, T. and Ferrari, V.},
  title = {{The Open Images Dataset V4: Unified image classification, object detection, and visual relationship detection at scale}},
  journal = {International Journal of Computer Vision (IJCV)},
  year = {2020},
  volume = {128},
  pages = {1956--1981}
}
```

The FMIYC dataset creators do not claim ownership of the original images or annotations from COCO or OpenImages. The contribution of FMIYC lies in the novel curation, categorization, and benchmarking methodology. Any biases or limitations present in the original COCO or OpenImages datasets may also be present in this derived dataset.

## Citation Information
If you use the FMIYC dataset in your research, please cite the FMIYC paper:

```bibtex
@misc{Montoya_FindMeIfYouCan_YYYY,
  author = {Montoya, Daniel and Bouguerra, Aymen and Gomez-Villa, Alexandra and Arnez, Fabio},
  title = {FindMeIfYouCan: Bringing Open Set metrics to near, far and farther Out-of-Distribution Object detection},
  year = {YYYY},
  note = {Please update with publication details if available}
}
```