  - split: train
    path: data/train-*
---

## Dataset Card for IBBI Bark Beetle Testing Dataset

This dataset is the primary testing and benchmarking set for the `ibbi` Python package. It contains images of bark and ambrosia beetles used to evaluate the performance of object detection and classification models.

**Note**: While this dataset serves as the **testing set** for the `ibbi` package's evaluation functions, it is hosted on the Hugging Face Hub as the `train` split. You can access it using `ibbi.get_dataset(split='train')`.

### Dataset Summary

The Intelligent Bark Beetle Identifier (IBBI) testing dataset is a curated collection of 2,031 images of bark and ambrosia beetles, covering 63 distinct species. It was developed to provide a standardized benchmark for evaluating the performance of computer vision models on the challenging task of beetle identification. The dataset is specifically designed for evaluating both localization (finding the beetle) and classification (identifying the species) tasks. All images have been annotated by experts.

### Supported Tasks and Leaderboards

This dataset is primarily used for evaluating the following tasks within the `ibbi` package:

* **Object Detection**: Models are evaluated on their ability to accurately draw bounding boxes around beetles. The primary metric is mean Average Precision (mAP).
* **Object Classification**: For models that identify species, performance is measured using metrics such as F1-score, accuracy, precision, and recall.
* **Embedding Quality**: The dataset is used to evaluate the quality of feature embeddings generated by models, assessing how well they separate different species in a high-dimensional space using clustering metrics.

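The detection metric above (mAP) is built on intersection-over-union (IoU) between predicted and ground-truth boxes. As an illustrative sketch only (not part of the `ibbi` API), here is IoU for boxes in this dataset's `[x_min, y_min, width, height]` format:

```python
def iou(box_a, box_b):
    """Intersection over union for boxes in [x_min, y_min, width, height] format."""
    ax1, ay1, aw, ah = box_a
    bx1, by1, bw, bh = box_b
    ax2, ay2 = ax1 + aw, ay1 + ah
    bx2, by2 = bx1 + bw, by1 + bh
    # Overlap rectangle; width/height clamp to zero when the boxes are disjoint.
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

gt = [217.0, 181.0, 526.0, 631.0]  # ground-truth box from the example instance below
print(iou(gt, gt))                       # 1.0 for a perfect prediction
print(iou(gt, [0.0, 0.0, 10.0, 10.0]))   # 0.0 for a disjoint prediction
```

A prediction typically counts as a true positive when its IoU with a ground-truth box exceeds a threshold (0.5 is a common choice); mAP averages precision over such thresholds and over classes.
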
### Dataset Structure

#### Data Instances

A typical data instance consists of an image and a corresponding set of object annotations.

```python
{
    'image': <PIL.Image.Image image>,
    'objects': {
        'bbox': [[217.0, 181.0, 526.0, 631.0]],
        'category': ['Xylosandrus_crassiusculus']
    }
}
```

#### Data Fields

* `image`: A PIL Image object containing the image of a beetle specimen.
* `objects`: A dictionary containing annotation information.
  * `bbox`: A list of bounding boxes, where each box is in `[x_min, y_min, width, height]` format.
  * `category`: A list of string labels corresponding to the species of the beetle in each bounding box. There are **63 unique species categories** in the dataset.

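Some detection tooling expects corner coordinates rather than the `[x_min, y_min, width, height]` layout used here. A minimal conversion sketch (the helper name is illustrative, not part of the dataset or `ibbi` API):

```python
def xywh_to_xyxy(box):
    """Convert [x_min, y_min, width, height] to [x_min, y_min, x_max, y_max]."""
    x, y, w, h = box
    return [x, y, x + w, y + h]

# Applied to the bounding box from the example instance above:
print(xywh_to_xyxy([217.0, 181.0, 526.0, 631.0]))  # [217.0, 181.0, 743.0, 812.0]
```
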
#### Data Splits

The dataset contains a single split, which is used for testing and evaluation. Although named `train` on the Hugging Face Hub, it functions as the official test set for the `ibbi` package.

* **Testing set (`train` split)**: 2,031 images.

### Dataset Creation

#### Curation Rationale

The dataset was created to address the need for a standardized, high-quality benchmark for automated bark beetle identification, a task traditionally reliant on expert taxonomists. The selection of 63 species provides a taxonomically diverse set for robust model evaluation.

#### Source Data

Images were collected from a variety of sources by the Forest Entomology Lab at the University of Florida to ensure diversity in lighting, background, and specimen condition.

#### Annotations

The annotation process involved a human-in-the-loop workflow:

1. A zero-shot detection model was used to perform initial localization of beetles in the images.
2. These initial bounding box annotations were then manually verified and corrected by human annotators to ensure accuracy.
3. Species-level classification for each verified bounding box was provided by expert taxonomists to guarantee high-quality labels.

### Citation Information

If you use this dataset in your research, please cite the associated paper:

```bibtex
@article{marais2025progress,
  title={Progress in developing a bark beetle identification tool},
  author={Marais, G Christopher and Stratton, Isabelle C and Johnson, Andrew J and Hulcr, Jiri},
  journal={PLoS One},
  volume={20},
  number={6},
  pages={e0310716},
  year={2025},
  publisher={Public Library of Science San Francisco, CA USA}
}
```