---
configs:
- config_name: Kenya
  data_files:
  - split: train
    path: Kenya/train_filtered.csv
  - split: val
    path: Kenya/valid_filtered.csv
  - split: test
    path: Kenya/test_filtered.csv
  default: true
- config_name: South_Africa
  data_files:
  - split: train
    path: South_Africa/train_filtered.csv
  - split: val
    path: South_Africa/valid_filtered.csv
  - split: test
    path: South_Africa/test_filtered.csv
- config_name: USA_Summer
  data_files:
  - split: train
    path: USA_Summer/train_filtered.csv
  - split: val
    path: USA_Summer/valid_filtered.csv
  - split: test
    path: USA_Summer/test_filtered.csv
- config_name: USA_Winter
  data_files:
  - split: train
    path: USA_Winter/train_filtered.csv
  - split: val
    path: USA_Winter/valid_filtered.csv
  - split: test
    path: USA_Winter/test_filtered.csv
license: cc-by-nc-4.0
---


# BATIS: Bayesian Approaches for Targeted Improvement of Species Distribution Models

This repository contains the dataset used in the experiments presented in *BATIS: Bayesian Approaches for Targeted Improvement of Species Distribution Models*. To download the dataset, you can use the `load_dataset` function from Hugging Face. For example:

```python
from datasets import load_dataset

# Training Split for Kenya
training_kenya = load_dataset("cathv/BATIS", name="Kenya", split="train")

# Validation Split for South Africa
validation_south_africa = load_dataset("cathv/BATIS", name="South_Africa", split="val")

# Test Split for USA-Summer
test_usa_summer = load_dataset("cathv/BATIS", name="USA_Summer", split="test")
```

## Licenses 

The **BATIS Benchmark** is released under a [Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) License](https://creativecommons.org/licenses/by-nc/4.0/).

The use of our dataset should also comply with the following:

- [eBird Terms of Use](https://www.birds.cornell.edu/home/terms-of-use/)
- [eBird API Terms of Use](https://www.birds.cornell.edu/home/ebird-api-terms-of-use/)
- [eBird Data Access Terms of Use](https://www.birds.cornell.edu/home/ebird-data-access-terms-of-use/) 

## Dataset Configurations and Splits

The dataset contains the following four configurations:

- **Kenya:** Data used to train our models for predicting bird species distribution in Kenya.
- **South Africa:** Data used to train our models for predicting bird species distribution in South Africa.
- **USA-Winter:** Data used to train our models for predicting bird species distribution in the United States of America during the winter season.
- **USA-Summer:** Data used to train our models for predicting bird species distribution in the United States of America during the summer season.

Each configuration is further divided into `train`, `val` and `test` splits. These are the same splits used in our paper; they were generated by the pre-processing pipeline described there, and can be reproduced with our code.

## Dataset Structure

```
/BATIS/
    Kenya/
        images.tar.gz
        environmental.tar.gz
        targets.tar.gz
        train_filtered.csv
        test_filtered.csv
        valid_filtered.csv
    South_Africa/
        images.tar.gz
        environmental.tar.gz
        targets.tar.gz
        train_filtered.csv
        test_filtered.csv
        valid_filtered.csv
    USA_Winter/
        images/
            images_{aa}
            ...
            images_{ad}
        environmental.tar.gz
        targets.tar.gz
        train_filtered.csv
        test_filtered.csv
        valid_filtered.csv
    USA_Summer/
        images/
            images_{aa}
            ...
            images_{af}
        images.tar.gz
        environmental.tar.gz
        targets.tar.gz
        train_filtered.csv
        test_filtered.csv
        valid_filtered.csv
    Species_ID/
        species_list_kenya.csv
        species_list_south_africa.csv
        species_list_usa.csv

```

The files `train_filtered.csv`, `test_filtered.csv` and `valid_filtered.csv` contain the information shown in the Dataset Viewer. The archives `targets`, `images` and `environmental` respectively contain the target vectors (i.e., the estimated ground-truth encounter rate probabilities), the satellite images (in `.tif` format) and the environmental rasters from WorldClim (in `.npy` format) associated with each hotspot. The `Species_ID/` folder contains the species list files for each subset.
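After downloading, the `.tar.gz` archives can be unpacked with Python's standard `tarfile` module. The sketch below uses hypothetical local paths; adjust them to wherever you stored the files:

```python
import tarfile
from pathlib import Path


def extract_archive(archive_path: str, out_dir: str) -> list:
    """Extract a .tar.gz archive into out_dir and return the member names."""
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    with tarfile.open(archive_path, "r:gz") as tar:
        tar.extractall(out_dir)
        return tar.getnames()


# Example (hypothetical paths):
# names = extract_archive("Kenya/environmental.tar.gz", "Kenya/environmental")
```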


## Data Fields

- `hotspot_id`: The unique ID associated with a given hotspot. The `hotspot_id` value can be used to load the corresponding data from `images`, `environmental` or `targets`, as the files in each are named after it:

```
/BATIS/
    images/
        {hotspot_id_1}.tif
        ...
        {hotspot_id_n}.tif
    environmental/
        {hotspot_id_1}.npy
        ...
        {hotspot_id_n}.npy
    targets/
        {hotspot_id_1}.json
        ...
        {hotspot_id_n}.json
```

- `lon`: Longitude coordinate of the hotspot.
- `latitude`: Latitude coordinate of the hotspot.
- `num_complete_checklists`: Number of complete checklists collected at that hotspot.
- `bio_1` to `bio_19`: Environmental covariate values associated with that hotspot, extracted from the WorldClim model. For more details on each of these variables, please refer to the appendix of our paper.
- `split`: The split associated with that hotspot (either `train`, `valid` or `test`).
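Putting this together, the per-hotspot files can be loaded with standard tooling. The sketch below covers only the `.npy` and `.json` files, using `numpy` and `json`; the `.tif` satellite image would additionally need a raster reader such as `rasterio` or `tifffile`. The directory layout and function name are illustrative, not part of the dataset's API:

```python
import json
from pathlib import Path

import numpy as np


def load_hotspot(root: str, hotspot_id: str) -> dict:
    """Load the environmental rasters and target vector for one hotspot.

    `root` is a directory following the extracted layout shown above,
    containing `environmental/` and `targets/` subfolders.
    """
    root = Path(root)
    env = np.load(root / "environmental" / f"{hotspot_id}.npy")
    with open(root / "targets" / f"{hotspot_id}.json") as f:
        targets = json.load(f)
    return {"environmental": env, "targets": targets}
```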

## Reconstructing Satellite Image Archive Files for the USA Subsets 
The satellite image archives for the USA-Summer and USA-Winter subsets are very large. To facilitate downloading through Hugging Face, we split these archives into multiple binary chunks. You can reconstruct the original `.tar.gz` archive with the `cat` command in a terminal, which concatenates the binary chunks in the order they are listed.

To reconstruct the archive for the USA-Winter subset, run:
```
cat images_chunk_aa images_chunk_ab images_chunk_ac images_chunk_ad > images.tar.gz
```

To reconstruct the archive for the USA-Summer subset, run:
```
cat images_chunk_aa images_chunk_ab images_chunk_ac images_chunk_ad images_chunk_ae images_chunk_af > images.tar.gz
```
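If `cat` is unavailable (e.g. on Windows), the same concatenation can be done in Python. This is a sketch assuming the chunk files sit in the current directory, with the chunk names taken from the shell commands above:

```python
from pathlib import Path


def join_chunks(chunk_paths: list, out_path: str) -> int:
    """Concatenate binary chunks, in the order given, into one archive.

    Equivalent to `cat chunk_aa chunk_ab ... > out_path`.
    Returns the total number of bytes written.
    """
    total = 0
    with open(out_path, "wb") as out:
        for chunk in chunk_paths:
            data = Path(chunk).read_bytes()
            out.write(data)
            total += len(data)
    return total


# Example for the USA-Winter subset (hypothetical working directory):
# join_chunks(
#     ["images_chunk_aa", "images_chunk_ab", "images_chunk_ac", "images_chunk_ad"],
#     "images.tar.gz",
# )
```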