Clarify taxonomic considerations in combining data, add direct links for NEON resources
README.md (CHANGED)

@@ -63,10 +63,10 @@ Each image is accompanied by trait annotations and measurements, providing valua
  ## Dataset Structure

  ```
- /
      IMG_<id>.png
      ...
- /
      IMG_<id>_specimen_<number>_<taxonID>_<individualID>.png
      ...
  images_metadata.csv
@@ -100,17 +100,18 @@ README.md
  - `coords_elytra_max_length`: X and Y coordinate pairs defining the endpoints of the maximum elytral length measurement. Measured from the midpoint of the elytro-pronotal suture (junction between pronotum and elytra) to the midpoint of the elytral apex (posterior terminus of the elytra). Ex: `"[[3865.5, 1245.87, 3881.25, 1045.81]]"`.
  - `coords_basal_pronotum_width`: X and Y coordinate pairs defining the endpoints of the basal pronotal width measurement at the elytro-pronotal junction. Ex: `"[[3922.92, 1046.2, 3872.53, 1035.06]]"`.
  - `coords_elytra_max_width`: X and Y coordinate pairs defining the endpoints of the maximum elytral width measurement. Represents the greatest transverse distance across both elytra, measured orthogonal to the elytral length axis. Ex: `"[[3960.08, 1145.79, 3814.38, 1123.85]]"`.
- - `px_scalebar`: Euclidean distance between coordinate endpoints of the reference scalebar (`coords_scalebar`) expressed in pixels
- - `px_elytra_max_length`: Euclidean distance between coordinate endpoints of the maximum elytral length (`coords_elytra_max_length`) expressed in pixels
- - `px_basal_pronotum_width`: Euclidean distance between coordinate endpoints of the basal pronotal width (`coords_basal_pronotum_width`) expressed in pixels
- - `px_elytra_max_width`: Euclidean distance between coordinate endpoints of the maximum elytral width (`coords_elytra_max_width`) expressed in pixels
  - `cm_scalebar`: Calibrated length of the reference scalebar in centimeters. Constant value of 1.0 cm as this represents the standard reference scale used for all measurements.
- - `cm_elytra_max_length`: Calibrated maximum elytral length in centimeters<sup>[
- - `cm_basal_pronotum_width`: Calibrated basal pronotal width in centimeters<sup>[
- - `cm_elytra_max_width`: Calibrated maximum elytral width in centimeters<sup>[

- <a name="

  ## Dataset Creation
@@ -123,7 +124,7 @@ Ground beetles (Coleoptera: Carabidae) serve as critical bioindicators for ecosy
  ### Source Data

- The specimens come from the PUUM

  Our team photographed the beetles in 2025 using a Canon EOS DSLR (model 7D).
@@ -144,7 +145,7 @@ After imaging all the specimens, the data curation team segmented the individual
  #### Annotation process

- Trait annotations were produced using **TORAS** (Trait Observation and Recording Annotation System, a high-precision tool designed for detailed morphological measurements on high-resolution images of pinned beetle specimens. Annotators manually placed coordinate pairs marking the endpoints of key anatomical landmarks: the 1 cm reference scalebar (`coords_scalebar`), maximum elytral length (`coords_elytra_max_length`), basal pronotal width at the elytro-pronotal junction (`coords_basal_pronotum_width`), and maximum elytral width (`coords_elytra_max_width`). From these coordinates, Euclidean distances were computed in pixels (`px_scalebar`, `px_elytra_max_length`, `px_basal_pronotum_width`, `px_elytra_max_width`) and converted to centimeters using the *scalebar calibration factor* (cm_scalebar = 1.0 cm). Annotations were performed exclusively on dorsal-view images to maximize visibility of diagnostic morphological traits. Rigorous quality control ensured that each image met predefined standards for focus, illumination, and label legibility.

  For validation, a subset of 64 specimens was measured physically with digital calipers by three independent annotators. These same specimens were then used for two complementary analyses:
  1. **Inter-annotator agreement**, assessing consistency among the three caliper-based measurements (average RMSE ≈ 0.024 cm, R² ≈ 0.94); and
@@ -152,28 +153,20 @@ For validation, a subset of 64 specimens was measured physically with digital ca
  Together, these results confirm that TORAS measurements closely reproduce manual ground-truth measurements while maintaining high inter-annotator consistency, establishing the reliability and reproducibility of the annotation process for quantitative morphological trait extraction.

- <!-- Trait annotations were generated using the TORAS (Trait Observation and Recording Annotation System) tool for precise measurements on high-resolution images of pinned specimens. Annotators manually marked coordinate pairs defining the endpoints of key morphological features: the 1 cm reference scalebar (`coords_scalebar`), maximum elytral length (`coords_elytra_max_length`), basal pronotal width at the elytro-pronotal junction (`coords_basal_pronotum_width`), and maximum elytral width (`coords_elytra_max_width`). Euclidean distances were calculated in pixels (`px_scalebar`, `px_elytra_max_length`, `px_basal_pronotum_width`, `px_elytra_max_width`) and calibrated to centimeters (`cm_scalebar` = 1.0 constant; others converted using the scalebar calibration factor). Annotations focused on dorsal views to ensure visibility of taxonomically relevant features, with quality control to verify focus, lighting, and label legibility. Digital measurements were validated against manual caliper-based measurements, achieving sub-millimeter precision with transparent error quantification. A subset of 64 specimens was annotated independently by three annotators to assess inter-annotator agreement, and the results demonstrated high consistency (average RMSE ≈ 0.024 cm, R² ≈ 0.94). For validating TORAS as a tool, TORAS-based digital measurements were validated against manual caliper measurements on physical specimens, achieving sub-millimeter precision (RMSE ≈ 0.015 cm; R² > 0.97).-->
-
  #### Who are the annotators?

- - Annotations were conducted by a team of researchers and students from the Experiential Introduction to AI and Ecology Course, jointly organized by the Imageomics Institute and the AI and Biodiversity Change (ABC) Global
  - Primary contributors include S. M. Rayeed, Mridul Khurana, Alyson East, and Elizabeth G. Campolongo, with additional contributions from Samuel Stevens, Iuliia Zarubiieva, Jiaman (Lisa) Wu, and Scott C. Lowe. Evan D. Donso, a NEON field technician, assisted with specimen handling, data collection, and trait measurement using calipers.
- - All annotation work was performed under the supervision of advisors Graham W. Taylor and Sydne Record. Fieldwork and imaging were carried out at the NEON PUUM site between January 15–29, 2025.
-
- <!--
- Annotations were performed by a team of researchers and students participating in the Experiential Introduction to AI and Ecology Course organized by the Imageomics Institute and the AI and Biodiversity Change (ABC) Global Climate Center. Key contributors included S M Rayeed, Mridul Khurana, Alyson East, and Elizabeth G. Campolongo. other co-authors are Samuel Stevens, Iuliia Zarubiieva, Jiaman (Lisa) Wu, Isadora E. Fluck, Scott C. Lowe, Evan D Donso -- with oversight from advisors Graham W. Taylor and Sydne Record. The fieldwork and imaging occurred at the PUUM site in Hawaii from January 15-29, 2025.
- -->
-
- <!-- This section describes the people or systems who created the annotations. -->

  ### Personal and Sensitive Information

- Our data does not contain any personal or sensitive

  ## Considerations for Using the Data

- This dataset comprises pinned beetle specimens collected from the NEON PUUM site between 2018 and 2024, representing 14 identified species within the Carabidae family. While *taxonomically and geographically constrained*, the dataset provides **high-quality, standardized imagery and trait data suitable for AI, computer vision, and ecological modeling applications**. Each specimen image is a **high-resolution dorsal view**, optimized for automated trait extraction, object detection, and segmentation. ***No ventral or lateral views are included***. Trait measurements—such as elytral length and width—are fully calibrated using a 1 cm reference scalebar and have been validated to sub-millimeter precision, ensuring reliability for quantitative analyses. Specimens can be linked to NEON’s environmental and ecological data streams, including climate, vegetation, and co-located taxa (e.g., plants, mammals, and birds), via shared identifiers such as `plotID`, `trapID`, `plotTrapID`, and `collectDate`. For programmatic integration, users may access broader NEON metadata through

  <!--
  Things to consider while working with the dataset. For instance, maybe there are hybrids and they are labeled in the `hybrid_stat` column, so to get a subset without hybrids, subset to all instances in the metadata file such that `hybrid_stat` is _not_ "hybrid".
@@ -181,28 +174,19 @@ Things to consider while working with the dataset. For instance, maybe there are
  ## Bias, Risks, and Limitations

- The dataset exhibits several inherent biases and limitations that should be considered when interpreting results or developing models. **Geographically**, it is limited to a single tropical site (PUUM), which is not representative of the diverse environmental conditions found across the continental United States, such as deserts, temperate forests, or taiga ecosystems. **Taxonomically**, the dataset includes only 14 of more than 40,000 known carabid species, with a long-tailed distribution dominated by a few genera — primarily *Mecyclothorax* and *Trechus* — thus underrepresenting the broader diversity of the Carabidae family. Sampling bias arises from the exclusive use of pitfall traps, which preferentially capture ground-active and diurnal beetles while largely excluding arboreal or flying taxa. There is also **limited coverage of intraspecific variation**, as specimens do not span a wide range of geographic clines, life stages, or microhabitats. From a technical perspective, *imaging artifacts such as minor glare or partial label obstruction* may persist despite quality control procedures. The dataset’s **scale — with 1,614 images** — makes it relatively small for standalone large-scale machine learning applications without data augmentation. Finally, ***there is a risk of misuse***, as AI models trained solely on this dataset may exhibit poor generalization when applied to other regions, species, or imaging conditions, underscoring the importance of cross-dataset validation and ecological context awareness.

  <!-- This section is meant to convey both technical and sociotechnical limitations. Could also address misuse, malicious use, and uses that the dataset will not work well for. For instance, if your data exhibits a long-tailed distribution (and why). -->

  ### Recommendations

- - Mitigating Geographic Bias: To address the limited geographic scope of the Hawai‘i dataset,
- - Balancing Taxonomic Representation: To reduce the effects of the long-tailed species distribution, augment the dataset with external image and trait repositories (e.g., GBIF, iDigBio, or other museum collections). This
- - For AI and computer vision applications, researchers should augment the dataset with additional images to overcome the relatively small sample size and enhance model robustness. Expanding image diversity across species, sites, and lighting conditions will help models better capture regional morphological variation and reduce overfitting to the specific imaging setup used for the Hawai‘i specimens.
  - When developing or testing automated measurement pipelines, users are strongly encouraged to validate all digital trait extractions against the provided manually verified measurements. Reporting quantitative error rates (e.g., RMSE, bias, R²) will ensure transparency and maintain the high standard of reproducibility established in the original validation study, which demonstrated sub-millimeter accuracy for elytral traits.
  - For ecological analyses, it is essential to link specimen-level traits to NEON environmental data using identifiers such as `plotID` and `collectDate`. This enables spatially and temporally explicit studies on trait–environment relationships, including responses to climate gradients, habitat conditions, or ecological disturbances.
- - Researchers should avoid drawing continental-scale ecological or evolutionary inferences based solely on this dataset, as it represents a single tropical site. Broader-scale interpretations require supplementary datasets that capture geographic and taxonomic variation. Moreover, users are encouraged to consider the ethical implications of AI deployment in biodiversity monitoring and conservation, ensuring that research derived from this dataset aligns with its intended purpose of advancing ecological understanding and supporting conservation outcomes.
-
- <!--
- - Combine with multi-domain NEON data (vial specimens, other sites) for continental analyses
- - Augment with external image/trait datasets to balance long-tailed distribution
- - For AI applications, augment with additional images to address the small sample size and ensure models account for regional morphological variation.
- - Validate any automated trait extractions against the provided manual measurements, and report error rates transparently.
- - When linking to NEON environmental data, use identifiers like `plotID` and `collectDate` for accurate integration, enabling studies on trait-environment interactions.
- - Avoid using for continental-scale inferences without supplementary data, and consider ethical applications in biodiversity monitoring and conservation to align with the dataset's ecological focus.
- -->

  <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
  ## Dataset Structure

  ```
+ group_images/
      IMG_<id>.png
      ...
+ individual_specimens/
      IMG_<id>_specimen_<number>_<taxonID>_<individualID>.png
      ...
  images_metadata.csv
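As an editorial aside: the individual-specimen filename template above encodes four metadata fields, which can be recovered with a short parser. This is a minimal sketch, not part of the dataset card; the example filename and its field values are hypothetical (real IDs come from `images_metadata.csv`):

```python
import re

# Pattern following the template shown in the tree above:
# IMG_<id>_specimen_<number>_<taxonID>_<individualID>.png
PATTERN = re.compile(
    r"IMG_(?P<id>[^_]+)_specimen_(?P<number>\d+)_(?P<taxonID>[^_]+)_(?P<individualID>[^.]+)\.png"
)

def parse_specimen_filename(name: str) -> dict:
    """Return the metadata fields encoded in a specimen filename."""
    match = PATTERN.fullmatch(name)
    if match is None:
        raise ValueError(f"Unexpected filename format: {name}")
    return match.groupdict()

# Hypothetical example filename:
fields = parse_specimen_filename("IMG_0001_specimen_2_MECSP1_ABC123.png")
print(fields)
```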
  - `coords_elytra_max_length`: X and Y coordinate pairs defining the endpoints of the maximum elytral length measurement. Measured from the midpoint of the elytro-pronotal suture (junction between pronotum and elytra) to the midpoint of the elytral apex (posterior terminus of the elytra). Ex: `"[[3865.5, 1245.87, 3881.25, 1045.81]]"`.
  - `coords_basal_pronotum_width`: X and Y coordinate pairs defining the endpoints of the basal pronotal width measurement at the elytro-pronotal junction. Ex: `"[[3922.92, 1046.2, 3872.53, 1035.06]]"`.
  - `coords_elytra_max_width`: X and Y coordinate pairs defining the endpoints of the maximum elytral width measurement. Represents the greatest transverse distance across both elytra, measured orthogonal to the elytral length axis. Ex: `"[[3960.08, 1145.79, 3814.38, 1123.85]]"`.
+ - `px_scalebar`: Euclidean distance between coordinate endpoints of the reference scalebar (`coords_scalebar`) expressed in pixels<sup>[1](#footnote1)</sup>.
+ - `px_elytra_max_length`: Euclidean distance between coordinate endpoints of the maximum elytral length (`coords_elytra_max_length`) expressed in pixels<sup>[1](#footnote1)</sup>.
+ - `px_basal_pronotum_width`: Euclidean distance between coordinate endpoints of the basal pronotal width (`coords_basal_pronotum_width`) expressed in pixels<sup>[1](#footnote1)</sup>.
+ - `px_elytra_max_width`: Euclidean distance between coordinate endpoints of the maximum elytral width (`coords_elytra_max_width`) expressed in pixels<sup>[1](#footnote1)</sup>.
  - `cm_scalebar`: Calibrated length of the reference scalebar in centimeters. Constant value of 1.0 cm as this represents the standard reference scale used for all measurements.
+ - `cm_elytra_max_length`: Calibrated maximum elytral length in centimeters<sup>[2](#footnote2)</sup>, calculated by converting pixel measurements using the scalebar calibration factor.
+ - `cm_basal_pronotum_width`: Calibrated basal pronotal width in centimeters<sup>[2](#footnote2)</sup> at the elytro-pronotal suture, calculated by converting pixel measurements using the scalebar calibration factor.
+ - `cm_elytra_max_width`: Calibrated maximum elytral width in centimeters<sup>[2](#footnote2)</sup>, representing the greatest transverse dimension across the fused elytra, calculated by converting pixel measurements using the scalebar calibration factor.

+ <a name="footnote1">1</a>: The measurement is up to 14 decimal places.

+ <a name="footnote2">2</a>: The measurement is up to 3 decimal places. To get measurements with more numerical precision (i.e. additional decimal places), use this equation: `cm_<measurement>` = `px_<measurement>`/`px_scalebar`.
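The pixel-to-centimeter pipeline described by these fields and footnote 2 can be sketched in a few lines. Assumptions: each coordinate value is a flat `[x1, y1, x2, y2]` endpoint pair as in the examples above, and the scalebar pixel length used here is a hypothetical stand-in (in the dataset it is derived from `coords_scalebar`):

```python
import math

def euclidean_px(coords: list) -> float:
    """Pixel length of a measurement from its [x1, y1, x2, y2] endpoint pair."""
    x1, y1, x2, y2 = coords
    return math.hypot(x2 - x1, y2 - y1)

# Example using the coords_basal_pronotum_width value shown above.
px_basal_pronotum_width = euclidean_px([3922.92, 1046.2, 3872.53, 1035.06])

# Hypothetical scalebar pixel length (1000 px spanning the 1.0 cm scalebar).
px_scalebar = 1000.0

# Footnote 2: cm_<measurement> = px_<measurement> / px_scalebar.
cm_basal_pronotum_width = px_basal_pronotum_width / px_scalebar
print(round(cm_basal_pronotum_width, 3))
```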
## Dataset Creation
  ### Source Data

+ The specimens come from the [PUUM NEON site](https://www.neonscience.org/field-sites/puum). For more information about general NEON data, please see their [Ground beetles sampled from pitfall traps page](https://data.neonscience.org/data-products/DP1.10022.001).

  Our team photographed the beetles in 2025 using a Canon EOS DSLR (model 7D).
  #### Annotation process

+ Trait annotations were produced using **TORAS** (Trait Observation and Recording Annotation System), a high-precision tool designed for detailed morphological measurements on high-resolution images of pinned beetle specimens. Annotators manually placed coordinate pairs marking the endpoints of key anatomical landmarks: the 1 cm reference scalebar (`coords_scalebar`), maximum elytral length (`coords_elytra_max_length`), basal pronotal width at the elytro-pronotal junction (`coords_basal_pronotum_width`), and maximum elytral width (`coords_elytra_max_width`). From these coordinates, Euclidean distances were computed in pixels (`px_scalebar`, `px_elytra_max_length`, `px_basal_pronotum_width`, `px_elytra_max_width`) and converted to centimeters using the *scalebar calibration factor* (cm_scalebar = 1.0 cm). Annotations were performed exclusively on dorsal-view images to maximize visibility of diagnostic morphological traits. Rigorous quality control ensured that each image met predefined standards for focus, illumination, and label legibility.

  For validation, a subset of 64 specimens was measured physically with digital calipers by three independent annotators. These same specimens were then used for two complementary analyses:
  1. **Inter-annotator agreement**, assessing consistency among the three caliper-based measurements (average RMSE ≈ 0.024 cm, R² ≈ 0.94); and
  Together, these results confirm that TORAS measurements closely reproduce manual ground-truth measurements while maintaining high inter-annotator consistency, establishing the reliability and reproducibility of the annotation process for quantitative morphological trait extraction.
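For readers reproducing the validation comparison above, RMSE and R² between paired caliper and TORAS measurements are straightforward to compute. A minimal sketch with toy stand-in values (the real validation used 64 specimens; these five numbers are invented for illustration):

```python
import numpy as np

def rmse(a, b) -> float:
    """Root-mean-square error between paired measurements (same units)."""
    a, b = np.asarray(a), np.asarray(b)
    return float(np.sqrt(np.mean((a - b) ** 2)))

def r_squared(truth, pred) -> float:
    """Coefficient of determination of pred against truth."""
    truth, pred = np.asarray(truth), np.asarray(pred)
    ss_res = float(np.sum((truth - pred) ** 2))
    ss_tot = float(np.sum((truth - np.mean(truth)) ** 2))
    return 1.0 - ss_res / ss_tot

# Toy stand-ins for caliper vs. TORAS elytral lengths in cm (hypothetical values).
caliper = np.array([0.512, 0.487, 0.530, 0.498, 0.521])
toras = np.array([0.510, 0.490, 0.528, 0.500, 0.519])
print(rmse(caliper, toras), r_squared(caliper, toras))
```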
  #### Who are the annotators?

+ - Annotations were conducted by a team of researchers and students from the Experiential Introduction to AI and Ecology Course, jointly organized by the Imageomics Institute and the AI and Biodiversity Change (ABC) Global Center.
  - Primary contributors include S. M. Rayeed, Mridul Khurana, Alyson East, and Elizabeth G. Campolongo, with additional contributions from Samuel Stevens, Iuliia Zarubiieva, Jiaman (Lisa) Wu, and Scott C. Lowe. Evan D. Donso, a NEON field technician, assisted with specimen handling, data collection, and trait measurement using calipers.
+ - All annotation work was performed under the supervision of advisors Graham W. Taylor and Sydne Record. Fieldwork and imaging were carried out at the [NEON PUUM site](https://www.neonscience.org/field-sites/puum) from January 15–29, 2025.
  ### Personal and Sensitive Information

+ Our data does not contain any personal or sensitive information.

  ## Considerations for Using the Data

+ This dataset comprises pinned beetle specimens collected from the [NEON PUUM site](https://www.neonscience.org/field-sites/puum) between 2018 and 2024, representing 14 identified species within the Carabidae family. While *taxonomically and geographically constrained*, the dataset provides **high-quality, standardized imagery and trait data suitable for AI, computer vision, and ecological modeling applications**. Each specimen image is a **high-resolution dorsal view**, optimized for automated trait extraction, object detection, and segmentation. ***No ventral or lateral views are included***. Trait measurements—such as elytral length and width—are fully calibrated using a 1 cm reference scalebar and have been validated to sub-millimeter precision, ensuring reliability for quantitative analyses. Specimens can be linked to NEON’s environmental and ecological data streams, including climate, vegetation, and co-located taxa (e.g., plants, mammals, and birds), via shared identifiers such as `plotID`, `trapID`, `plotTrapID`, and `collectDate`. For programmatic integration, users may access broader NEON metadata through the [NEON API](https://data.neonscience.org/data-api/) using `individualID` or `sampleCode`. *All images adhere to FAIR data principles*, supporting findability, accessibility, interoperability, and reusability across biodiversity and ecological research platforms. Overall, this dataset serves as a robust foundation for trait-based ecological modeling, species-level computer vision tasks, and integration with multi-domain NEON data, provided users account for its limited geographic and taxonomic scope.

  <!--
  Things to consider while working with the dataset. For instance, maybe there are hybrids and they are labeled in the `hybrid_stat` column, so to get a subset without hybrids, subset to all instances in the metadata file such that `hybrid_stat` is _not_ "hybrid".
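The identifier-based linkage described in the paragraph above can be sketched as a simple join. This is a minimal illustration, not NEON's actual API: the two frames, their values, and the `soilTemp` column are hypothetical stand-ins for `images_metadata.csv` and a NEON environmental table, while the join keys (`plotID`, `collectDate`) are the identifiers the card names:

```python
import pandas as pd

# Hypothetical specimen-level trait records (stand-in for images_metadata.csv).
specimens = pd.DataFrame({
    "individualID": ["A1", "A2"],
    "plotID": ["PUUM_001", "PUUM_002"],
    "collectDate": ["2021-06-01", "2021-06-15"],
    "cm_elytra_max_length": [0.512, 0.487],
})

# Hypothetical NEON environmental observations for the same plots and dates.
env = pd.DataFrame({
    "plotID": ["PUUM_001", "PUUM_002"],
    "collectDate": ["2021-06-01", "2021-06-15"],
    "soilTemp": [18.2, 17.9],  # invented environmental variable
})

# Link specimen traits to environmental data on the shared identifiers.
linked = specimens.merge(env, on=["plotID", "collectDate"], how="left")
print(linked[["individualID", "cm_elytra_max_length", "soilTemp"]])
```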
  ## Bias, Risks, and Limitations

+ The dataset exhibits several inherent biases and limitations that should be considered when interpreting results or developing models. **Geographically**, it is limited to a single tropical site ([PUUM](https://www.neonscience.org/field-sites/puum)), which is not representative of the diverse environmental conditions found across the continental United States, such as deserts, temperate forests, or taiga ecosystems. **Taxonomically**, the dataset includes only 14 of more than 40,000 known carabid species, with a long-tailed distribution dominated by a few genera — primarily *Mecyclothorax* and *Trechus* — thus underrepresenting the broader diversity of the Carabidae family. Sampling bias arises from the exclusive use of pitfall traps, which preferentially capture ground-active and diurnal beetles while largely excluding arboreal or flying taxa. There is also **limited coverage of intraspecific variation**, as specimens do not span a wide range of geographic clines, life stages, or microhabitats. From a technical perspective, *imaging artifacts such as minor glare or partial label obstruction* may persist despite quality control procedures. The dataset’s **scale — with 1,614 images** — makes it relatively small for standalone large-scale machine learning applications without data augmentation. Finally, ***there is a risk of misuse***, as AI models trained solely on this dataset may exhibit poor generalization when applied to other regions, species, or imaging conditions, underscoring the importance of cross-dataset validation and ecological context awareness.

  <!-- This section is meant to convey both technical and sociotechnical limitations. Could also address misuse, malicious use, and uses that the dataset will not work well for. For instance, if your data exhibits a long-tailed distribution (and why). -->

  ### Recommendations

+ - Mitigating Geographic Bias: To address the limited geographic scope of the Hawai‘i dataset, consider combining it with collections from other NEON terrestrial sites across multiple domains (e.g., [2018 NEON Ethanol-preserved Ground Beetles](https://huggingface.co/datasets/imageomics/2018-NEON-beetles) and [Sentinel Beetles](https://huggingface.co/datasets/imageomics/sentinel-beetles)). This integration will enable continental-scale analyses of trait–environment relationships and improve ecological generalizability across biomes.
+ - Balancing Taxonomic Representation: To reduce the effects of the long-tailed species distribution, one can augment the dataset with external image and trait repositories (e.g., GBIF, iDigBio, or other museum collections). This has the potential to expand coverage across genera and species, facilitating more balanced training datasets and more robust cross-species generalization in machine learning models. When combining taxonomic data from multiple sources, be sure to reconcile all labels against a single taxonomic backbone. [TaxonoPy](https://github.com/Imageomics/TaxonoPy) was developed to accomplish this type of alignment (for [TreeOfLife-200M](https://huggingface.co/datasets/imageomics/TreeOfLife-200M)).
+ - For AI and computer vision applications, researchers should augment the dataset with additional images to overcome the relatively small sample size and enhance model robustness. Expanding image diversity across species, sites, and lighting conditions will help models better capture regional morphological variation and reduce overfitting to the specific imaging setup used for the Hawai‘i specimens. As noted above, different sources may use different taxonomic backbones; this should be accounted for in any compilation (e.g., with [TaxonoPy](https://github.com/Imageomics/TaxonoPy)).
  - When developing or testing automated measurement pipelines, users are strongly encouraged to validate all digital trait extractions against the provided manually verified measurements. Reporting quantitative error rates (e.g., RMSE, bias, R²) will ensure transparency and maintain the high standard of reproducibility established in the original validation study, which demonstrated sub-millimeter accuracy for elytral traits.
  - For ecological analyses, it is essential to link specimen-level traits to NEON environmental data using identifiers such as `plotID` and `collectDate`. This enables spatially and temporally explicit studies on trait–environment relationships, including responses to climate gradients, habitat conditions, or ecological disturbances.
+ - Researchers should avoid drawing continental-scale ecological or evolutionary inferences based solely on this dataset, as it represents a single tropical site. Broader-scale interpretations require supplementary datasets that capture geographic and taxonomic variation. As noted above, be sure to align the taxonomic naming from disparate sources (e.g., with [TaxonoPy](https://github.com/Imageomics/TaxonoPy)). Moreover, users are encouraged to consider the ethical implications of AI deployment in biodiversity monitoring and conservation, ensuring that research derived from this dataset aligns with its intended purpose of advancing ecological understanding and supporting conservation outcomes.

  <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
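The backbone-reconciliation step recommended above can be as simple as mapping each source's labels onto one agreed set of names before merging. A hand-rolled sketch for illustration only — this is not TaxonoPy's actual API, and the synonym map is invented; a real workflow would use TaxonoPy or a reference backbone such as GBIF's:

```python
# Illustrative synonym map: source-specific labels -> agreed backbone names.
SYNONYMS = {
    "Mecyclothorax sp.": "Mecyclothorax",
    "Trechus obtusus Erichson": "Trechus obtusus",
}

def normalize_taxon(label: str) -> str:
    """Map a source-specific label to the agreed backbone name."""
    return SYNONYMS.get(label.strip(), label.strip())

# Labels from two hypothetical sources, before and after reconciliation.
labels_a = ["Trechus obtusus Erichson", "Mecyclothorax sp."]
labels_b = ["Trechus obtusus", "Mecyclothorax"]
aligned = [normalize_taxon(x) for x in labels_a]
print(aligned == labels_b)
```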