---

# Dataset Description

- **Repository:** [https://github.com/DBD-research-group/BirdSet](https://github.com/DBD-research-group/BirdSet)
- **Paper:** [BirdSet](https://arxiv.org/abs/2403.10380)
- **Point of Contact:** [Lukas Rauch](mailto:lukas.rauch@uni-kassel.de)
Deep learning models have emerged as a powerful tool in avian bioacoustics to assess environmental health. To maximize the potential of cost-effective and minimally invasive passive acoustic monitoring (PAM), models must analyze bird vocalizations across a wide range of species and environmental conditions. However, data fragmentation challenges the evaluation of generalization performance. Therefore, we introduce the BirdSet dataset, comprising approximately 520,000 global bird recordings for training and over 400 hours of PAM recordings for testing in multi-label classification.

- **Complementary Code**: [https://github.com/DBD-research-group/BirdSet](https://github.com/DBD-research-group/BirdSet)
- **Complementary Paper**: [https://arxiv.org/abs/2403.10380](https://arxiv.org/abs/2403.10380)
[9]: https://xeno-canto.org/
[10]: https://xeno-canto.org

- We assemble a training dataset for each test dataset as a **subset of a complete Xeno-Canto (XC)** snapshot: we extract all recordings that contain vocalizations of the bird species appearing in the test dataset.
- The focal training and soundscape test components of each dataset can be accessed individually via the identifiers **NAME_xc** and **NAME_scape**, respectively (e.g., **HSN_xc** for the focal part and **HSN_scape** for the soundscape).
- We use the .ogg format for every recording, with a sampling rate of 32 kHz.
- Each sample in the training dataset is a recording that may contain more than one vocalization of the corresponding bird species.
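The **NAME_xc** / **NAME_scape** identifiers above map onto per-dataset configurations. A minimal loading sketch, assuming the Hugging Face `datasets` library and that this card's hub repository (`DBD-research-group/BirdSet`) hosts the configurations under the naming scheme described above; the helper names are illustrative:

```python
def birdset_config(name: str, part: str) -> str:
    # Build a configuration identifier from the NAME_xc / NAME_scape
    # scheme described above, e.g. ("HSN", "xc") -> "HSN_xc".
    if part not in ("xc", "scape"):
        raise ValueError("part must be 'xc' (focal) or 'scape' (soundscape)")
    return f"{name}_{part}"


def load_birdset(name: str, part: str, split: str):
    # Imported lazily so the helper above also works without `datasets`.
    from datasets import Audio, load_dataset

    ds = load_dataset("DBD-research-group/BirdSet",
                      birdset_config(name, part), split=split)
    # Recordings are .ogg; decode at the dataset's native 32 kHz.
    return ds.cast_column("audio", Audio(sampling_rate=32_000))


# Usage (downloads from the Hub on first call):
#   focal training part:  load_birdset("HSN", "xc", "train")
#   soundscape test part: load_birdset("HSN", "scape", "test")
```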
- We provide the full recordings from XC. These can generate multiple samples from a single instance.

**Test_5s**
- Task: Adapted to multilabel classification ("ebird_code_multilabel")
- Only soundscape data from Zenodo, formatted according to the Kaggle evaluation scheme.
- Each recording is segmented into 5-second intervals, and each ground-truth bird vocalization is assigned to the intervals it falls into.
- Segments without any labels result in a [0] vector.
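The 5-second windowing above can be sketched as follows. The event tuples and class count are illustrative stand-ins, not actual BirdSet annotation fields: each ground-truth vocalization contributes its class to every 5 s window it overlaps, and windows with no event keep an all-zero vector.

```python
import math


def segment_labels(duration_s, events, num_classes, win_s=5.0):
    """events: list of (start_s, end_s, class_idx) tuples.
    Returns one multi-hot vector (list of 0/1) per 5-second window."""
    n_windows = math.ceil(duration_s / win_s)
    labels = [[0] * num_classes for _ in range(n_windows)]
    for start, end, cls in events:
        first = int(start // win_s)                       # first overlapped window
        last = min(n_windows - 1, math.ceil(end / win_s) - 1)  # last overlapped window
        for w in range(first, last + 1):
            labels[w][cls] = 1
    return labels


# 12 s recording with one call from 4.2 s to 6.1 s (class 1):
# the call overlaps windows 0 and 1; window 2 stays all-zero ("no call").
lab = segment_labels(12.0, [(4.2, 6.1, 1)], num_classes=3)
```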
**Test**
- Only soundscape data sourced from Zenodo.
- We provide the full recording with the complete label set and specified bounding boxes.
- This dataset excludes recordings that do not contain bird calls ("no_call").
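Since **Test** ships full recordings plus per-event bounding boxes, an individual vocalization can be cut out of the decoded waveform by index arithmetic at the 32 kHz rate. A sketch with hypothetical start/end times in seconds; the actual annotation field names may differ:

```python
SR = 32_000  # BirdSet sampling rate in Hz


def crop_event(waveform, start_s, end_s, sr=SR):
    # Return the slice of samples covering one annotated vocalization.
    lo = max(0, int(start_s * sr))
    hi = min(len(waveform), int(end_s * sr))
    return waveform[lo:hi]


# One second of audio with an annotated event from 0.25 s to 0.50 s:
wav = [0.0] * SR
clip = crop_event(wav, 0.25, 0.50)  # 0.25 s of audio = 8000 samples
```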