mwirth7 committed
Commit 728d650 · verified · 1 Parent(s): c316169

Update README.md

Files changed (1):
  1. README.md (+26 −30)
README.md CHANGED
@@ -28,11 +28,6 @@ such as multi-label classification, covariate shift or self-supervised learning.
 
 ## Datasets
 
-**Disclaimer on sizes**: The current dataset sizes reflect the extracted files, as the builder script automatically extracts these files but retains the original zipped versions.
-This results in approximately double the disk usage for each dataset. While it is possible to manually delete all files not contained in the <code>extracted</code>
-folder,
-we are actively working on updating the builder script to resolve this issue.
-
 | | #train recordings | #test labels | #test_5s segments | size (GB) | #classes |
 |--------------------------------|--------:|-----------:|--------:|-----------:|-------------:|
 | [PER][1] (Amazon Basin + XCL Subset) | 16,802 | 14,798 | 15,120 | 10.5 | 132 |
@@ -121,6 +116,32 @@ train = dataset["train"].map(map_first_five, batch_size=1000, num_proc=2)
 test = dataset["test_5s"]
 ```
 
+## Changelog
+
+### 2025.09.10
+- Updated dataset description and citation in BirdSet.py
+- Now works with datasets<4.0.0
+- `load_dataset(..., cache_dir="/path/to/custom/cache")` works almost as in previous datasets<3.0.0 versions.
+The difference is that archive files are downloaded to the path given by the `HF_HOME`/`HF_DATASETS` environment variables, while extracted files are saved in `cache_dir` if specified.
+Archive files are still deleted during extraction (see the previous update/issue) to save disk space.
+
+### 2024.12.06
+- The [data download size discrepancy](https://github.com/DBD-research-group/BirdSet/issues/267) has been solved.
+- TL;DR: During the extraction process, unnecessary archives are now removed immediately. This reduces the required disk space by *half*, now aligning it with the table in the Datasets section above.
+- Note: If you downloaded the data between this and the last update and don't want to update, you can pin the following `revision=b0c14a03571a7d73d56b12c4b1db81952c4f7e64`:
+```python
+from datasets import load_dataset
+ds = load_dataset("DBD-research-group/BirdSet", "HSN", trust_remote_code=True, revision="b0c14a03571a7d73d56b12c4b1db81952c4f7e64")
+```
+### 2024.11.27
+- Additional bird taxonomy metadata, including "Genus," "Species Group," and "Order," is provided using the 2021 eBird taxonomy, consistent with the taxonomy used for the 'ebird_code' data.
+These metadata fields follow the same format and encoding as 'ebird_code' and 'ebird_code_multilabel'. See below for an updated explanation of the metadata.
+- If you don't require the additional taxonomy at the moment and prefer to avoid re-downloading all files, you can specify the previous revision directly in load_dataset as follows:
+```python
+from datasets import load_dataset
+ds = load_dataset("DBD-research-group/BirdSet", "HSN", trust_remote_code=True, revision="629b54c06874b6d2fa886e1c0d73146c975612d0")
+```
+
 ## Metadata
 
 | | format | description |
@@ -238,31 +259,6 @@ test = dataset["test_5s"]
 'order_multilabel': [5]}}
 ```
 
-### Changelog
-
-## 2025.09.10
-- Updated dataset description and citation in BirdSet.py
-- Now works with datasets<4.0.0
-- `load_dataset(..., cache_dir="/path/to/custom/cache")` almost works like in previous datasets<3.0.0 version.
-With the different that archive files are downloaded to `HF_HOME/HF_DATASETS` environment variable path but extracted file are save in cache_dir if specified.
-During extraction archive files are still deleted during extraction, see previous update/issue, to save disk space.
-
-## 2024.12.06
-- The [data download size descrepancy](https://github.com/DBD-research-group/BirdSet/issues/267) has been solved.
-- TL;DR: During the extraction process, unnecessary archives are now removed immediately. This reduces the required disk space by *half*, now aligning it with the table below.
-- Note: If you downloaded the data between this and last update and don't want to update, you can use the following `revision=b0c14a03571a7d73d56b12c4b1db81952c4f7e64`:
-```python
-from datasets import load_dataset
-ds = load_dataset("DBD-research-group/BirdSet", "HSN", trust_remote_code=True, revision="b0c14a03571a7d73d56b12c4b1db81952c4f7e64")
-```
-## 2024.11.27
-- Additional bird taxonomy metadata, including "Genus," "Species Group," and "Order," is provided using the 2021 eBird taxonomy, consistent with the taxonomy used for the 'ebird_code' data.
-These metadata fields follow the same format and encoding as 'ebird_code' and 'ebird_code_multilabel'. See below for an updated explanation of the metadata.
-- If you don't require the additional taxonomy at the moment and prefer to avoid re-downloading all files, you can specify the previous revision directly in load_dataset as follows:
-```python
-from datasets import load_dataset
-ds = load_dataset("DBD-research-group/BirdSet", "HSN", trust_remote_code=True, revision="629b54c06874b6d2fa886e1c0d73146c975612d0")
-```
 
 ### Citation Information
 
 
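The `map_first_five` function referenced in the diff context (`dataset["train"].map(map_first_five, batch_size=1000, num_proc=2)`) is presumably a batched mapping that keeps only the first five seconds of each recording. A minimal sketch under that assumption — the `"audio"` field name and the 32 kHz sampling rate are illustrative, not taken from the dataset:

```python
# Hypothetical sketch of a "first five seconds" batched mapping function,
# as might be passed to datasets.Dataset.map(). Assumes each example carries
# a decoded audio array; field name and sampling rate are assumptions.
def map_first_five(batch):
    max_len = 5 * 32_000  # five seconds at an assumed 32 kHz sampling rate
    batch["audio"] = [a[:max_len] for a in batch["audio"]]
    return batch

# Tiny usage example with plain lists standing in for audio arrays:
example = {"audio": [list(range(200_000)), list(range(100_000))]}
out = map_first_five(example)
```

Recordings longer than five seconds are truncated; shorter ones pass through unchanged.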
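The metadata example in the diff ends with integer class indices such as `'order_multilabel': [5]`. A hedged sketch of turning such indices back into label names — the vocabulary below is invented for illustration; the real names would come from the split's ClassLabel features (e.g. via their `int2str` method):

```python
# Illustrative only: a made-up order vocabulary standing in for the
# dataset's real ClassLabel names (which live in the dataset features).
ORDER_NAMES = [
    "Passeriformes", "Piciformes", "Apodiformes",
    "Columbiformes", "Caprimulgiformes", "Accipitriformes",
]

def decode_multilabel(indices, names):
    """Map a multilabel annotation (list of integer class indices) to names."""
    return [names[i] for i in indices]

decoded = decode_multilabel([5], ORDER_NAMES)  # index 5 as in the example above
```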