joshvm committed
Commit baa5e89 · 0 Parent(s)

Duplicate from restor/tcd

Co-authored-by: Josh Veitch-Michaelis <joshvm@users.noreply.huggingface.co>
.gitattributes ADDED
@@ -0,0 +1,55 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.lz4 filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
# Audio files - uncompressed
*.pcm filter=lfs diff=lfs merge=lfs -text
*.sam filter=lfs diff=lfs merge=lfs -text
*.raw filter=lfs diff=lfs merge=lfs -text
# Audio files - compressed
*.aac filter=lfs diff=lfs merge=lfs -text
*.flac filter=lfs diff=lfs merge=lfs -text
*.mp3 filter=lfs diff=lfs merge=lfs -text
*.ogg filter=lfs diff=lfs merge=lfs -text
*.wav filter=lfs diff=lfs merge=lfs -text
# Image files - uncompressed
*.bmp filter=lfs diff=lfs merge=lfs -text
*.gif filter=lfs diff=lfs merge=lfs -text
*.png filter=lfs diff=lfs merge=lfs -text
*.tiff filter=lfs diff=lfs merge=lfs -text
# Image files - compressed
*.jpg filter=lfs diff=lfs merge=lfs -text
*.jpeg filter=lfs diff=lfs merge=lfs -text
*.webp filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,263 @@
---
license: cc-by-4.0
size_categories:
- 1K<n<10K
task_categories:
- image-segmentation
pretty_name: 'OAM-TCD: A globally diverse dataset of high-resolution tree cover maps'
dataset_info:
  features:
  - name: image_id
    dtype: int64
  - name: image
    dtype: image
  - name: height
    dtype: int16
  - name: width
    dtype: int16
  - name: annotation
    dtype: image
  - name: oam_id
    dtype: string
  - name: license
    dtype: string
  - name: biome
    dtype: int8
  - name: crs
    dtype: string
  - name: bounds
    sequence: float32
    length: 4
  - name: validation_fold
    dtype: int8
  - name: biome_name
    dtype: string
  - name: lat
    dtype: float32
  - name: lon
    dtype: float32
  - name: segments
    dtype: string
  - name: meta
    dtype: string
  - name: coco_annotations
    dtype: string
  splits:
  - name: train
    num_bytes: 3450583573.0
    num_examples: 4169
  - name: test
    num_bytes: 360073480.0
    num_examples: 439
  download_size: 3550643933
  dataset_size: 3810657053.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
tags:
- trees
- biology
- ecology
- forest
---
# Dataset Card for OAM-TCD: A globally diverse dataset of high-resolution tree cover maps

![Example annotation for image 1445](example_test_annotation_1445.jpg)
_Annotation example in OAM-TCD (ID 1445); RGB image licensed CC BY 4.0, attribution: contributors of OIN._

_Left: RGB aerial image. Middle: annotations shown, distinguished by instance ID. Right: annotations identified by class (blue = tree, orange = canopy)._

## Dataset Details

OAM-TCD is a dataset of high-resolution (10 cm/px) tree cover maps with instance-level masks for 280k trees and 56k tree groups.

Images in the dataset are provided as 2048x2048 px RGB GeoTIFF tiles. The dataset can be used to train both instance segmentation and semantic segmentation models.
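
At this resolution, each tile covers roughly 205 m on a side, or a little over 4 ha. A quick sketch of that arithmetic (the helper name is illustrative, not part of the dataset tooling):

```python
# Ground footprint of a single OAM-TCD tile at nominal resolution.
TILE_SIZE_PX = 2048
GSD_M = 0.10  # ground sample distance: 10 cm/px

def tile_footprint(size_px: int = TILE_SIZE_PX, gsd_m: float = GSD_M):
    """Return (side length in metres, area in hectares) for a square tile."""
    side_m = size_px * gsd_m
    area_ha = side_m ** 2 / 10_000  # 1 ha = 10,000 m^2
    return side_m, area_ha

side, area = tile_footprint()
print(f"{side:.1f} m per side, {area:.2f} ha")  # 204.8 m per side, 4.19 ha
```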

For more information please read [our preprint on arXiv](https://arxiv.org/abs/2407.11743). The paper was accepted at NeurIPS 2024 in the Datasets and Benchmarks track; the citation will be updated once the proceedings are online.

[![](https://zenodo.org/badge/DOI/10.5281/zenodo.11617167.svg)](https://doi.org/10.5281/zenodo.11617167)

Please contact josh [at] restor.eco with any questions, or post an issue on the associated GitHub repository for support.

### Dataset Description

- **Curated by:** Restor / ETH Zurich
- **Funded by:** Restor / ETH Zurich, supported by a Google.org AI for Social Good grant (ID: TF2012-096892, "AI and ML for advancing the monitoring of Forest Restoration")
- **License:** CC BY 4.0

OIN declares that all imagery contained within is licensed as [CC BY 4.0](https://github.com/openimagerynetwork/oin-register); however, some images are labelled as CC BY-NC 4.0 or CC BY-SA 4.0 in their metadata. Annotations are predominantly released under a CC BY 4.0 license, with around 10% licensed as CC BY-NC 4.0 or CC BY-SA 4.0. These less permissively licensed images are distributed in separate repositories to avoid any ambiguity for downstream use.

To ensure that image providers' rights are upheld, we split these images into license-specific repositories, allowing users to pick which combination of compatible licenses is appropriate for their application. We have initially released model variants trained on CC BY + CC BY-NC imagery. CC BY-SA imagery was removed from the training split, but it can be used for evaluation.

The other repositories/datasets are:

- `restor/tcd-nc`, containing only `CC BY-NC 4.0` licensed images
- `restor/tcd-sa`, containing only `CC BY-SA 4.0` licensed images

### Dataset Sources

All imagery in the dataset is sourced from OpenAerialMap (OAM, part of the Open Imagery Network / OIN).

## Uses

![Prediction map over the city of Zurich using a model trained on OAM-TCD](zurich_predictions_side_by_side_small.jpg)

_Tree semantic segmentation for Zurich, predicted at 10 cm/px. Predictions with a confidence of < 0.4 are hidden. Left: 10 cm RGB orthomosaic provided by the Swiss Federal Office of Topography (swisstopo, SWISSIMAGE 10 cm, 2022). Right: prediction heatmap using `restor/tcd-segformer-mit-b5`. Base map tiles by Stamen Design, under CC BY 4.0; data by OpenStreetMap, under ODbL._

We anticipate that most users of the dataset wish to map tree cover in aerial orthomosaics, captured either by drones/unmanned aerial vehicles (UAVs) or by aerial surveys such as those provided by governmental organisations.

### Direct Use

The dataset supports applications where the user provides an RGB input image and expects a tree (canopy) map as an output. Depending on the type of trained model, the result could be a binary segmentation mask or a list of detected tree and tree-group instances. The dataset can also be combined with other license-compatible data sources to train models beyond our baseline releases, and it can act as a benchmark for other tree detection models: we specify a test split that users can evaluate against, although there is currently no formal infrastructure or leaderboard for this.

### Out-of-Scope Use

The dataset does not contain detailed annotations for trees in closed canopy (i.e. trees that are touching), so the current release is not suitable for training models to delineate individual trees in closed-canopy forest. The dataset contains images at a fixed resolution of 10 cm/px; models trained on it at nominal resolution may under-perform on images with significantly different resolutions (e.g. satellite imagery).

The dataset does not directly support applications related to carbon sequestration measurement (e.g. carbon credit verification) or above-ground biomass estimation, as it does not contain the structural or species information required for accurate allometric calculations (Reiersen et al., 2021). Similarly, models trained on the dataset should not be used for any decision-making or policy applications without further validation on appropriate data, particularly in locations that are under-represented in the dataset.

## Dataset Structure

The dataset contains pairs of images, semantic masks and object segments (instance polygons). The masks contain instance-level annotations for (1) individual **trees** and (2) groups of trees, which we label **canopy**. For training our models we binarise the masks. Metadata from OAM for each image is provided and described below.
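
A minimal sketch of that binarisation step, assuming a labelled mask where 0 is background and positive values encode tree/canopy labels (check the actual encoding against the `annotation` feature):

```python
def binarise_mask(mask):
    """Collapse a labelled annotation mask (0 = background, >0 = tree or
    canopy label) into a binary tree-cover mask. Pure-Python sketch; in
    practice you would apply the same comparison to the decoded image
    array in one vectorised operation."""
    return [[1 if px > 0 else 0 for px in row] for row in mask]

labelled = [
    [0, 3, 3],
    [0, 0, 7],
]
print(binarise_mask(labelled))  # [[0, 1, 1], [0, 0, 1]]
```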

The dataset is released with suggested training and test splits, stratified by biome; these splits were used to derive the results presented in the main paper. Where known, each image is also tagged with its terrestrial biome index [-1, 14]. This relationship was defined by looking for intersections between tile polygons and reference biome polygons; an index of -1 means the tile could not be matched to a biome. Tiles sourced from a given OAM image are isolated to a single fold (and split) to avoid train/test leakage.
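
The tile-to-biome matching can be approximated by testing whether a tile centroid falls inside a biome polygon. The ray-casting helper below is an illustrative simplification of the full polygon-intersection check, with a made-up polygon; it is not the pipeline code:

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting point-in-polygon test. `polygon` is a list of (x, y)
    vertices; returns True if (x, y) falls inside."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count crossings of a horizontal ray from (x, y) with each edge.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Hypothetical biome polygon in (lon, lat) and a tile centroid:
biome_poly = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0), (0.0, 10.0)]
biome_index = 1 if point_in_polygon(5.0, 5.0, biome_poly) else -1
print(biome_index)  # 1
```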

k-fold cross-validation indices within the training set are also provided: each image is assigned an integer [0, 4] that places it in a validation fold. Users are free to pick their own validation protocol (for example, splitting the data into biome folds), but results may not be directly comparable with those in the release paper.
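
Using these indices to build per-fold splits might look like the following (field names follow the dataset features listed above; the record dicts are toy stand-ins for real rows):

```python
def split_by_fold(records, fold):
    """Partition training records into (train, val) using the
    `validation_fold` feature (an integer in [0, 4])."""
    train = [r for r in records if r["validation_fold"] != fold]
    val = [r for r in records if r["validation_fold"] == fold]
    return train, val

records = [{"image_id": i, "validation_fold": i % 5} for i in range(10)]
train, val = split_by_fold(records, fold=0)
print(len(train), len(val))  # 8 2
```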

## Dataset Creation

### Curation Rationale

The use case within Restor (Crowther et al., 2022) is to feed into a broader framework for restoration site assessment. Many users of the Restor platform are stakeholders in restoration projects; some have access to tools like UAVs and are interested in providing data for site monitoring. Our goal was to facilitate training tree canopy detection models that work robustly in any location. The dataset was curated with this diversity challenge in mind: it contains images from around the world and (by serendipity) covers most terrestrial biome classes.

It was important during the curation process that the data sources be open-access, so we selected OpenAerialMap as our image source. OAM contains a large amount of permissively licensed global imagery at high resolution (chosen to be < 10 cm/px for our application).

### Source Data

#### Data Collection and Processing

We used the OAM API to download a list of surveys on the platform. Using the metadata, we discarded surveys with a ground sample distance greater than 10 cm/px (for example, satellite imagery). The remaining sites were binned into 1-degree-square regions across the world; some sites in OAM have been uploaded as multiple assets, and naive random sampling would tend to pick several from the same location. We then sampled sites from each bin, and random non-empty tiles from each site, until we had reached around 5000 tiles. This limit was arbitrarily set by our estimated annotation budget.
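
The binned sampling strategy can be sketched as follows (a pure-Python illustration with made-up site coordinates; the real pipeline queried the OAM API):

```python
import math
import random
from collections import defaultdict

def sample_sites(sites, per_bin=1, seed=0):
    """Bin sites into 1-degree cells by (lat, lon) and sample up to
    `per_bin` sites from each cell, so that locations uploaded as many
    assets do not dominate the sample."""
    bins = defaultdict(list)
    for site in sites:
        key = (math.floor(site["lat"]), math.floor(site["lon"]))
        bins[key].append(site)
    rng = random.Random(seed)
    sampled = []
    for cell_sites in bins.values():
        sampled.extend(rng.sample(cell_sites, min(per_bin, len(cell_sites))))
    return sampled

sites = [
    {"id": "a", "lat": 47.4, "lon": 8.5},   # same 1-degree cell as "b"
    {"id": "b", "lat": 47.6, "lon": 8.9},
    {"id": "c", "lat": -1.2, "lon": 36.8},  # separate cell
]
print(len(sample_sites(sites)))  # 2
```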

Interestingly, we did not make any attempt to filter for images containing trees, but in practice there are few negative images in the dataset. Similarly, we did not try to filter for images captured in a particular season, so there are trees without leaves in the dataset.

#### Who are the source data producers?

The images are provided by users of OpenAerialMap / contributors of Open Imagery Network.

### Annotations

#### Annotation process

Annotation was outsourced to commercial data labelling companies who provided access to teams of professional annotators. We experimented with several labelling providers and compensation strategies.

Annotators were provided with a guideline document giving examples of how we expected images to be labelled. This document evolved over the course of the project as we encountered edge cases and questions from annotation teams. As described in the main paper, annotators were instructed to attempt to label open-canopy trees (i.e. trees that were not touching) individually. If possible, small groups of trees should also be labelled individually; we suggested < 5 trees as an upper bound. Annotators were encouraged to look for cues indicating whether an object was a tree, such as the presence of (relatively long) shadows and crown shyness (inter-crown spacing). Larger groups of trees, or ambiguous regions, were labelled as "canopy". Annotators were provided with full-size image tiles (2048 x 2048) and most images were annotated by a single person from a team of several annotators.

There are numerous structures for annotator compensation, for example paying per polygon, per image or by total annotation time. The images in OAM-TCD are complex, and per-image pricing was excluded early on as the reported annotation time varied significantly. Anecdotally, we found that the most practical compensation structure was to pay for a fixed block of annotation time, with regular review meetings with labelling team managers. Overall, the cost per image was between 5-10 USD and the total annotation cost was approximately 25k USD. Unfortunately we do not have accurate estimates for the time spent annotating all images, but we did advise annotators to flag any image for review if they spent more than 45-60 minutes on it.

#### Who are the annotators?

We did not have direct contact with any annotators, and their identities were anonymised during communication, for example when providing feedback through managers.

#### Personal and Sensitive Information

Contact information is present in the metadata for imagery. We do not distribute this data directly, but each image tile is accompanied by a URL pointing to a JSON document on OpenAerialMap where it is publicly available. Otherwise, the imagery is provided at a low enough resolution that it is not possible to identify individual people.

The image tiles in the dataset contain geospatial information which is not obfuscated. However, as one of the purposes of OpenAerialMap is humanitarian mapping (e.g. tracing objects for inclusion in OpenStreetMap), accurate location information is required, and uploaders are aware that this information is available to other users. We also assume that image providers had the right to capture imagery where they did, including following local regulations governing UAV activity.

An argument for keeping accurate geospatial information is that annotations can be verified against independent sources, for example global land cover maps. The annotations can also be combined with other datasets, such as multispectral satellite imagery, or products like the Global Ecosystem Dynamics Investigation (GEDI; Dubayah et al., 2020).

## General dataset statistics

The dataset contains 5072 image tiles sourced from OpenAerialMap; of these, 4608 are licensed as CC BY 4.0, 272 as CC BY-NC 4.0 and 192 as CC BY-SA 4.0. As described earlier, we split these images into separate repositories to keep licensing distinct. Only around 5% of imagery in the training split has a less permissive non-commercial license, and we are re-training models on only the CC BY portion of the data to maximise accessibility and re-use.

The training split contains 4406 images and the test split contains 666 images. All images are the same size (2048x2048 px) and the same ground sample distance (10 cm/px). The geographic distribution of the dataset is shown below:

![Global distribution of annotations in the OAM-TCD dataset](annotation_map.png)
_Global distribution of annotations in the OAM-TCD dataset_

Table 1, below, shows the number of tiles that correspond to each of the 14 terrestrial biomes described by Olson et al.

The majority of the dataset covers (1) tropical and temperate broadleaf forest. Some biomes are clearly under-represented, notably (6) boreal forest/taiga, (9) flooded grasslands and savannas, (11) tundra, and (14) mangrove. Some of these biomes, mangrove in particular, are likely under-represented due to our sampling method (by binned location), as their geographic extent is relatively small. These statistics could be used to guide subsequent data collection in a more targeted fashion.

![Biome distribution](biome_distribution_table.jpeg)
_Distribution of images in terrestrial biomes, and in each of the suggested cross-validation folds_

It is important to note that the biome classification is purely spatial: without inspecting images individually, one cannot make assumptions about what type of landscape was actually imaged, or whether it is a natural ecosystem representative of that biome. We do not currently annotate images with a land use category, but this would potentially be a useful secondary measure of diversity in the dataset.

## Bias, Risks, and Limitations

There are several potential sources of bias in our dataset. The first is geographic, related to where users of OAM are likely to capture data: accessible locations that are amenable to UAV flights. Some locations and countries place strong restrictions on UAV possession and use, for example. One of the use cases for OAM is providing traceable imagery for OpenStreetMap, which is also likely to bias what sorts of scenes users capture.

The second is bias from annotators, who were not ecologists. Benchmark results from models trained on the dataset suggest that overall label quality is sufficient for accurate semantic segmentation. However, for instance segmentation, annotators had freedom to choose whether to label trees individually or not. This naturally resulted in some inconsistency between what annotators determined was a tree, and at what point a group of trees was annotated as a group. We discuss in the main paper the issue of conflicting definitions of "tree" among researchers and monitoring protocols.

The example annotations above highlight some of these inconsistencies. Some annotators labelled individual trees within group labels; in the bottom plot most palm trees are individually segmented, but some groups are not. A future goal for the project is to improve label consistency, identify incorrect labels and attempt to split group labels into individuals. After annotation was complete, we contracted two different labelling organisations to review (and re-label) subsets of the data; we have not released this data yet, but plan to in the future.

The greatest risk that we foresee in releasing this dataset is usage in out-of-scope scenarios, for example using trained models on imagery from regions/biomes that the dataset does not represent, without additional validation. Similarly, there is a risk that users apply the model in inappropriate ways, such as measuring canopy cover on imagery taken during periods of abscission (when trees lose their leaves). It is important that users carefully consider timing (seasonality) when comparing time-series predictions.

While we believe that the risk of malicious or unethical use is low, given that other global tree maps exist and are readily available, it is possible that models trained on the dataset could be used to identify areas of tree cover for illegal logging or other forms of land exploitation. Given that our models can segment tree cover at high resolution, they could also be used for automated surveillance or military mapping purposes.

### Recommendations

Please read the bias information above and take it into account when using the dataset. Ensure that you have a good validation protocol in place before using a model trained on this dataset.

## Citation

If you use OAM-TCD in your own work or research, please cite our arXiv paper and reference the dataset DOI.

**BibTeX:**

After the paper is peer reviewed, this citation will be updated.

```
@misc{veitchmichaelis2024oamtcdgloballydiversedataset,
      title={OAM-TCD: A globally diverse dataset of high-resolution tree cover maps},
      author={Josh Veitch-Michaelis and Andrew Cottam and Daniella Schweizer and Eben N. Broadbent and David Dao and Ce Zhang and Angelica Almeyda Zambrano and Simeon Max},
      year={2024},
      eprint={2407.11743},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2407.11743},
}
```

## Dataset Card Authors

Josh Veitch-Michaelis (josh [at] restor.eco)

## Dataset Card Contact

Please contact josh [at] restor.eco if you have any queries about the dataset, including requests for image removal if you believe your rights have been infringed.

### Further Examples

![Example annotation for image 1594](example_test_annotation_1594.jpg)
![Example annotation for image 2242](example_test_annotation_2242.jpg)
![Example annotation for image 555](example_test_annotation_555.jpg)
_Annotation examples in OAM-TCD (IDs 1594, 2242, 555); all RGB images licensed CC BY 4.0, attribution: contributors of OIN._

### References

[1] Gyri Reiersen, David Dao, Björn Lütjens, Konstantin Klemmer, Xiaoxiang Zhu, and Ce Zhang. Tackling the overestimation of forest carbon with deep learning and aerial imagery. CoRR, abs/2107.11320, 2021.

[2] Thomas W. Crowther, Stephen M. Thomas, Johan van den Hoogen, Niamh Robmann, Alfredo Chavarría, Andrew Cottam, et al. Restor: Transparency and connectivity for the global environmental movement. One Earth, 5(5):476–481, 2022.

[3] Ralph Dubayah, James Bryan Blair, Scott Goetz, Lola Fatoyinbo, Matthew Hansen, et al. The global ecosystem dynamics investigation: High-resolution laser ranging of the earth's forests and topography. Science of Remote Sensing, 1:100002, June 2020.
annotation_map.png ADDED

Git LFS Details

  • SHA256: 9b6e4ab70515399d344583427da4c90b2176ecdc72167cc69ff5a6411c9472d7
  • Pointer size: 131 Bytes
  • Size of remote file: 673 kB
biome_distribution_table.jpeg ADDED

Git LFS Details

  • SHA256: e1e52b6aed8c4512a523d7a27c043271466d65d32139c24b568272e15376c154
  • Pointer size: 131 Bytes
  • Size of remote file: 107 kB
data/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5c10ac4cb8afaf18dae5aad5b22cc86bb80977116a8bcd8c16d41ae48b9b13cf
size 334303139
data/train-00000-of-00007.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:ec8fe0c79ed8286264625351c307331646c04717df186a25628fe96815bf2acb
size 464097497
data/train-00001-of-00007.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:dd2ed7425aa6abc26db5c226a7d42c3d9e63cfb90cce73530736dfd26e6bf4d1
size 462511087
data/train-00002-of-00007.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:71ee733c150b088ae109818c2f507be0d13bf7b65dc1ff645d742f359dca4ce7
size 461595923
data/train-00003-of-00007.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c1228e8c458d4f1ef94f50107e769a573d210236de89a6a9ec78008ad828da2a
size 461403061
data/train-00004-of-00007.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d656d76c4943bc30a6abf42acba3733e46c23f87380c4ea9edd9f23c84b46b09
size 467507286
data/train-00005-of-00007.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a441a767fd91286aa2db3a425b36131cd26e0e3a629350756f71b8a2dd7efbb9
size 444213204
data/train-00006-of-00007.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:9a7721d81b076d28afe7ab29ec10cb0f51e37670e73cf9c3f7b939e3c2342857
size 455012736
example_test_annotation_1445.jpg ADDED

Git LFS Details

  • SHA256: b6aba5dd2324538c9d15a73f73c28b91d4ad6d01956b9cd99f63e613a24e01fb
  • Pointer size: 131 Bytes
  • Size of remote file: 460 kB
example_test_annotation_1594.jpg ADDED

Git LFS Details

  • SHA256: 8f346a3b5a4508eef363b94bd9acc2d88f6c42dcd71aea63ba15e4f2d9fcd605
  • Pointer size: 131 Bytes
  • Size of remote file: 605 kB
example_test_annotation_2242.jpg ADDED

Git LFS Details

  • SHA256: bd602ba4110ba9d1abf5e332dc6d64619cc7fd0161bb69ec3f7c609317054ea7
  • Pointer size: 131 Bytes
  • Size of remote file: 880 kB
example_test_annotation_555.jpg ADDED

Git LFS Details

  • SHA256: 35447547047fed1389b13090074a3448ab12f75ed837088ce1a946316a52442f
  • Pointer size: 131 Bytes
  • Size of remote file: 439 kB
zurich_predictions_side_by_side_small.jpg ADDED

Git LFS Details

  • SHA256: 3c57ec923968209b5f5bb9ee72a8be67b3f816ecf12aeba8dd066b5c59a8ac7e
  • Pointer size: 131 Bytes
  • Size of remote file: 238 kB