charlieoneill committed on
Commit 8d9d827 · verified · 1 Parent(s): 341ae7c

Upload folder using huggingface_hub

This view is limited to 50 files because the commit contains too many changes; see the raw diff for the full set.
Files changed (50)
  1. .DS_Store +0 -0
  2. .gitattributes +14 -0
  3. access.ipynb +1548 -0
  4. data.ipynb +0 -0
  5. ensemble_data.py +234 -0
  6. eval/raw/ecmwf_eval_3.grib +3 -0
  7. eval/raw/glosea_eval_3.grib +3 -0
  8. month_tensors/all_squares/climatology_targets.pt +3 -0
  9. month_tensors/all_squares/end_dates.txt +0 -0
  10. month_tensors/all_squares/feature_names.pt +3 -0
  11. month_tensors/all_squares/features.pt +3 -0
  12. month_tensors/all_squares/targets.pt +3 -0
  13. new_tensors/square_1/climatology_targets.pt +3 -0
  14. new_tensors/square_1/end_dates.txt +0 -0
  15. new_tensors/square_1/targets.pt +3 -0
  16. new_tensors/square_2/climatology_targets.pt +3 -0
  17. new_tensors/square_2/end_dates.txt +0 -0
  18. new_tensors/square_2/targets.pt +3 -0
  19. new_tensors/square_3/climatology_targets.pt +3 -0
  20. new_tensors/square_3/end_dates.txt +0 -0
  21. new_tensors/square_3/targets.pt +3 -0
  22. new_tensors/square_all/climatology_targets.pt +3 -0
  23. new_tensors/square_all/end_dates.txt +0 -0
  24. new_tensors/square_all/targets.pt +3 -0
  25. processed/access.parquet +3 -0
  26. processed/ecmwf.parquet +3 -0
  27. processed/glosea5.parquet +3 -0
  28. processed/master.parquet +3 -0
  29. processed/master_2023.parquet +3 -0
  30. processed/silo.parquet +3 -0
  31. progress.txt +96 -0
  32. raw/access_old.parquet +3 -0
  33. raw/ecmwf_1.grib +3 -0
  34. raw/ecmwf_1.grib.923a8.idx +3 -0
  35. raw/ecmwf_2.grib +3 -0
  36. raw/ecmwf_2.grib.923a8.idx +3 -0
  37. raw/ecmwf_2023.grib +3 -0
  38. raw/ecmwf_2023.grib.5b7b6.idx +0 -0
  39. raw/ecmwf_3.grib +3 -0
  40. raw/ecmwf_3.grib.923a8.idx +3 -0
  41. raw/glosea_1.grib +3 -0
  42. raw/glosea_2.grib +3 -0
  43. raw/glosea_2023.grib +3 -0
  44. raw/glosea_2023.grib.5b7b6.idx +3 -0
  45. raw/glosea_3.grib +3 -0
  46. silo.py +129 -0
  47. tensors/climatology_targets_240.pt +3 -0
  48. tensors/end_dates_240.txt +0 -0
  49. tensors/targets_120.pt +3 -0
  50. tensors/targets_240.pt +3 -0
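The `processed/*.parquet` files listed above are merged in `access.ipynb` by tagging each frame with its model name, renaming source-specific columns to a shared `time`/`lat`/`lon`/`pr` schema, and concatenating. A minimal sketch of that pattern with tiny synthetic frames (the values are illustrative, not from the real data):

```python
import pandas as pd

# Hypothetical stand-ins for two of the processed parquet frames;
# the notebook reads the real ones with pd.read_parquet.
access = pd.DataFrame(
    {"time": ["1983-01-01"], "lat": [-29.0], "lon": [150.0], "pr": [0.22]}
)
ecmwf = pd.DataFrame(
    {"date": ["1981-01-29"], "latitude": [-29.0], "longitude": [138.0], "precip": [0.0]}
)

# Tag each frame with its model name, as the notebook's read loop does.
access["model"] = "access"
ecmwf["model"] = "ecmwf"

# Normalise the ECMWF column names to the shared time/lat/lon/pr schema.
ecmwf = ecmwf.rename(
    columns={"date": "time", "latitude": "lat", "longitude": "lon", "precip": "pr"}
)

# Concatenate into one long-format master frame, aligned by column name.
master = pd.concat([access, ecmwf], ignore_index=True)
print(master.shape)  # (2, 5)
```

The same rename-then-concat step is what `master.parquet` in the file list is built from, just over the full four-source frames.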
.DS_Store ADDED
Binary file (8.2 kB).
 
.gitattributes CHANGED
@@ -57,3 +57,17 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  # Video files - compressed
  *.mp4 filter=lfs diff=lfs merge=lfs -text
  *.webm filter=lfs diff=lfs merge=lfs -text
+ eval/raw/ecmwf_eval_3.grib filter=lfs diff=lfs merge=lfs -text
+ eval/raw/glosea_eval_3.grib filter=lfs diff=lfs merge=lfs -text
+ raw/ecmwf_1.grib filter=lfs diff=lfs merge=lfs -text
+ raw/ecmwf_1.grib.923a8.idx filter=lfs diff=lfs merge=lfs -text
+ raw/ecmwf_2.grib filter=lfs diff=lfs merge=lfs -text
+ raw/ecmwf_2.grib.923a8.idx filter=lfs diff=lfs merge=lfs -text
+ raw/ecmwf_2023.grib filter=lfs diff=lfs merge=lfs -text
+ raw/ecmwf_3.grib filter=lfs diff=lfs merge=lfs -text
+ raw/ecmwf_3.grib.923a8.idx filter=lfs diff=lfs merge=lfs -text
+ raw/glosea_1.grib filter=lfs diff=lfs merge=lfs -text
+ raw/glosea_2.grib filter=lfs diff=lfs merge=lfs -text
+ raw/glosea_2023.grib filter=lfs diff=lfs merge=lfs -text
+ raw/glosea_2023.grib.5b7b6.idx filter=lfs diff=lfs merge=lfs -text
+ raw/glosea_3.grib filter=lfs diff=lfs merge=lfs -text
access.ipynb ADDED
@@ -0,0 +1,1548 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
+ {
+ "cells": [
+ {
+ "cell_type": "code",
+ "execution_count": 22,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import pandas as pd"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 26,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "files = ['access', 'ecmwf', 'glosea5', 'silo']\n",
+ "frames = []\n",
+ "for file in files:\n",
+ " df = pd.read_parquet(f'data/processed/{file}.parquet')\n",
+ " df['model'] = file\n",
+ " frames.append(df)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 27,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "(583632, 5)"
+ ]
+ },
+ "execution_count": 27,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "access = frames[0]\n",
+ "access.reset_index(inplace=True, drop=True)\n",
+ "columns = access.columns\n",
+ "# Convert time to string\n",
+ "access['time'] = access['time'].astype(str)\n",
+ "access.shape"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 28,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "[-29. -28. -27. -26. -25. -24. -23. -22. -20. -19. -18. -17.]\n",
+ "[150. 151. 152. 153. 142. 143. 144. 145. 146.]\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Print access unique lat and lon\n",
+ "print(access['lat'].unique())\n",
+ "print(access['lon'].unique())"
+ ]
+ },
72
+ "cell_type": "code",
73
+ "execution_count": 29,
74
+ "metadata": {},
75
+ "outputs": [
76
+ {
77
+ "data": {
78
+ "text/html": [
79
+ "<div>\n",
80
+ "<style scoped>\n",
81
+ " .dataframe tbody tr th:only-of-type {\n",
82
+ " vertical-align: middle;\n",
83
+ " }\n",
84
+ "\n",
85
+ " .dataframe tbody tr th {\n",
86
+ " vertical-align: top;\n",
87
+ " }\n",
88
+ "\n",
89
+ " .dataframe thead th {\n",
90
+ " text-align: right;\n",
91
+ " }\n",
92
+ "</style>\n",
93
+ "<table border=\"1\" class=\"dataframe\">\n",
94
+ " <thead>\n",
95
+ " <tr style=\"text-align: right;\">\n",
96
+ " <th></th>\n",
97
+ " <th>time</th>\n",
98
+ " <th>lat</th>\n",
99
+ " <th>lon</th>\n",
100
+ " <th>pr</th>\n",
101
+ " <th>model</th>\n",
102
+ " </tr>\n",
103
+ " </thead>\n",
104
+ " <tbody>\n",
105
+ " <tr>\n",
106
+ " <th>0</th>\n",
107
+ " <td>1983-01-01 12:00:00</td>\n",
108
+ " <td>-29.0</td>\n",
109
+ " <td>150.0</td>\n",
110
+ " <td>0.220000</td>\n",
111
+ " <td>access</td>\n",
112
+ " </tr>\n",
113
+ " <tr>\n",
114
+ " <th>1</th>\n",
115
+ " <td>1983-01-01 12:00:00</td>\n",
116
+ " <td>-29.0</td>\n",
117
+ " <td>151.0</td>\n",
118
+ " <td>3.170000</td>\n",
119
+ " <td>access</td>\n",
120
+ " </tr>\n",
121
+ " <tr>\n",
122
+ " <th>2</th>\n",
123
+ " <td>1983-01-01 12:00:00</td>\n",
124
+ " <td>-29.0</td>\n",
125
+ " <td>152.0</td>\n",
126
+ " <td>13.210000</td>\n",
127
+ " <td>access</td>\n",
128
+ " </tr>\n",
129
+ " <tr>\n",
130
+ " <th>3</th>\n",
131
+ " <td>1983-01-01 12:00:00</td>\n",
132
+ " <td>-29.0</td>\n",
133
+ " <td>153.0</td>\n",
134
+ " <td>25.549999</td>\n",
135
+ " <td>access</td>\n",
136
+ " </tr>\n",
137
+ " <tr>\n",
138
+ " <th>4</th>\n",
139
+ " <td>1983-01-01 12:00:00</td>\n",
140
+ " <td>-28.0</td>\n",
141
+ " <td>150.0</td>\n",
142
+ " <td>0.160000</td>\n",
143
+ " <td>access</td>\n",
144
+ " </tr>\n",
145
+ " <tr>\n",
146
+ " <th>...</th>\n",
147
+ " <td>...</td>\n",
148
+ " <td>...</td>\n",
149
+ " <td>...</td>\n",
150
+ " <td>...</td>\n",
151
+ " <td>...</td>\n",
152
+ " </tr>\n",
153
+ " <tr>\n",
154
+ " <th>583627</th>\n",
155
+ " <td>2018-02-10 12:00:00</td>\n",
156
+ " <td>-18.0</td>\n",
157
+ " <td>146.0</td>\n",
158
+ " <td>0.100000</td>\n",
159
+ " <td>access</td>\n",
160
+ " </tr>\n",
161
+ " <tr>\n",
162
+ " <th>583628</th>\n",
163
+ " <td>2018-02-10 12:00:00</td>\n",
164
+ " <td>-17.0</td>\n",
165
+ " <td>143.0</td>\n",
166
+ " <td>0.100000</td>\n",
167
+ " <td>access</td>\n",
168
+ " </tr>\n",
169
+ " <tr>\n",
170
+ " <th>583629</th>\n",
171
+ " <td>2018-02-10 12:00:00</td>\n",
172
+ " <td>-17.0</td>\n",
173
+ " <td>144.0</td>\n",
174
+ " <td>0.290000</td>\n",
175
+ " <td>access</td>\n",
176
+ " </tr>\n",
177
+ " <tr>\n",
178
+ " <th>583630</th>\n",
179
+ " <td>2018-02-10 12:00:00</td>\n",
180
+ " <td>-17.0</td>\n",
181
+ " <td>145.0</td>\n",
182
+ " <td>1.150000</td>\n",
183
+ " <td>access</td>\n",
184
+ " </tr>\n",
185
+ " <tr>\n",
186
+ " <th>583631</th>\n",
187
+ " <td>2018-02-10 12:00:00</td>\n",
188
+ " <td>-17.0</td>\n",
189
+ " <td>146.0</td>\n",
190
+ " <td>0.000000</td>\n",
191
+ " <td>access</td>\n",
192
+ " </tr>\n",
193
+ " </tbody>\n",
194
+ "</table>\n",
195
+ "<p>583632 rows × 5 columns</p>\n",
196
+ "</div>"
197
+ ],
198
+ "text/plain": [
199
+ " time lat lon pr model\n",
200
+ "0 1983-01-01 12:00:00 -29.0 150.0 0.220000 access\n",
201
+ "1 1983-01-01 12:00:00 -29.0 151.0 3.170000 access\n",
202
+ "2 1983-01-01 12:00:00 -29.0 152.0 13.210000 access\n",
203
+ "3 1983-01-01 12:00:00 -29.0 153.0 25.549999 access\n",
204
+ "4 1983-01-01 12:00:00 -28.0 150.0 0.160000 access\n",
205
+ "... ... ... ... ... ...\n",
206
+ "583627 2018-02-10 12:00:00 -18.0 146.0 0.100000 access\n",
207
+ "583628 2018-02-10 12:00:00 -17.0 143.0 0.100000 access\n",
208
+ "583629 2018-02-10 12:00:00 -17.0 144.0 0.290000 access\n",
209
+ "583630 2018-02-10 12:00:00 -17.0 145.0 1.150000 access\n",
210
+ "583631 2018-02-10 12:00:00 -17.0 146.0 0.000000 access\n",
211
+ "\n",
212
+ "[583632 rows x 5 columns]"
213
+ ]
214
+ },
215
+ "execution_count": 29,
216
+ "metadata": {},
217
+ "output_type": "execute_result"
218
+ }
219
+ ],
220
+ "source": [
221
+ "access"
222
+ ]
223
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 30,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "(627300, 5)"
+ ]
+ },
+ "execution_count": 30,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "ecmwf = frames[1]\n",
+ "ecmwf.rename(columns={'date': 'time', 'precip': 'pr', 'latitude': 'lat', 'longitude': 'lon'}, inplace=True)\n",
+ "ecmwf = ecmwf[columns]\n",
+ "ecmwf.reset_index(inplace=True, drop=True)\n",
+ "# Convert time to string\n",
+ "ecmwf['time'] = ecmwf['time'].astype(str)\n",
+ "ecmwf.shape\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 31,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ " time lat lon pr model\n",
+ "0 1981-01-29 -29.0 138.0 0.0 ecmwf\n",
+ "1 1981-01-29 -29.0 139.0 0.0 ecmwf\n",
+ "2 1981-01-29 -29.0 140.0 0.0 ecmwf\n",
+ "3 1981-01-29 -29.0 141.0 0.0 ecmwf\n",
+ "4 1981-01-29 -29.0 142.0 0.0 ecmwf\n",
+ "... ... ... ... ... ...\n",
+ "627295 2018-12-31 -15.0 150.0 0.0 ecmwf\n",
+ "627296 2018-12-31 -15.0 151.0 0.0 ecmwf\n",
+ "627297 2018-12-31 -15.0 152.0 0.0 ecmwf\n",
+ "627298 2018-12-31 -15.0 153.0 0.0 ecmwf\n",
+ "627299 2018-12-31 -15.0 154.0 0.0 ecmwf\n",
+ "\n",
+ "[627300 rows x 5 columns]"
+ ]
+ },
+ "execution_count": 31,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "ecmwf"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 32,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "(331008, 5)"
+ ]
+ },
+ "execution_count": 32,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "glosea = frames[2]\n",
+ "glosea.rename(columns={'date': 'time', 'tprate': 'pr', 'latitude': 'lat', 'longitude': 'lon'}, inplace=True)\n",
+ "glosea = glosea[columns]\n",
+ "glosea.reset_index(inplace=True, drop=True)\n",
+ "# Convert time to string\n",
+ "glosea['time'] = glosea['time'].astype(str)\n",
+ "glosea.shape"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 33,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "(350624, 5)"
+ ]
+ },
+ "execution_count": 33,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "silo = frames[3]\n",
+ "silo.rename(columns={'daily_rain': 'pr'}, inplace=True)\n",
+ "silo = silo[columns]\n",
+ "# Convert lat and lon to float32\n",
+ "silo['lat'] = silo['lat'].astype('float32')\n",
+ "silo['lon'] = silo['lon'].astype('float32')\n",
+ "silo.reset_index(inplace=True, drop=True)\n",
+ "# Convert time to string\n",
+ "silo['time'] = silo['time'].astype(str)\n",
+ "silo.shape"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 34,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "array([142., 143., 144., 145., 150., 151., 152., 153.], dtype=float32)"
+ ]
+ },
+ "execution_count": 34,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "silo.lon.unique()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 35,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "array([-25., -24., -23., -22., -29., -28., -27., -26.], dtype=float32)"
+ ]
+ },
+ "execution_count": 35,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "silo.lat.unique()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 36,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ " time lat lon pr model\n",
+ "0 1983-01-01 12:00:00 -29.0 150.0 0.220000 access\n",
+ "1 1983-01-01 12:00:00 -29.0 151.0 3.170000 access\n",
+ "2 1983-01-01 12:00:00 -29.0 152.0 13.210000 access\n",
+ "3 1983-01-01 12:00:00 -29.0 153.0 25.549999 access\n",
+ "4 1983-01-01 12:00:00 -28.0 150.0 0.160000 access\n",
+ "... ... ... ... ... ...\n",
+ "1892559 2018-12-27 -26.0 153.0 0.000000 silo\n",
+ "1892560 2018-12-28 -26.0 153.0 2.699707 silo\n",
+ "1892561 2018-12-29 -26.0 153.0 0.000000 silo\n",
+ "1892562 2018-12-30 -26.0 153.0 0.000000 silo\n",
+ "1892563 2018-12-31 -26.0 153.0 0.000000 silo\n",
+ "\n",
+ "[1892564 rows x 5 columns]"
+ ]
+ },
+ "execution_count": 36,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "dfs = [access, ecmwf, glosea, silo]\n",
+ "master_df = pd.concat(dfs)\n",
+ "master_df.reset_index(inplace=True, drop=True)\n",
+ "# print(master_df.shape)\n",
+ "# master_df = master_df.groupby(['time', 'lat', 'lon', 'model']).agg({'pr': 'sum'}).reset_index()\n",
+ "# print(master_df.shape)\n",
+ "master_df"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 37,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "master_df.to_parquet('data/processed/master.parquet')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Creating an evaluation parquet for 2023"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 15,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import cfgrib\n",
+ "import xarray as xr\n",
+ "import pandas as pd"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 16,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ " time step latitude longitude number surface valid_time \\\n",
+ "0 2023-03-01 30 days -22.0 142.0 0 0.0 2023-03-31 \n",
+ "1 2023-03-01 30 days -22.0 143.0 0 0.0 2023-03-31 \n",
+ "2 2023-03-01 30 days -22.0 144.0 0 0.0 2023-03-31 \n",
+ "3 2023-03-01 30 days -22.0 145.0 0 0.0 2023-03-31 \n",
+ "4 2023-03-01 30 days -22.0 146.0 0 0.0 2023-03-31 \n",
+ "... ... ... ... ... ... ... ... \n",
+ "1915 2023-12-01 31 days -29.0 149.0 0 0.0 2024-01-01 \n",
+ "1916 2023-12-01 31 days -29.0 150.0 0 0.0 2024-01-01 \n",
+ "1917 2023-12-01 31 days -29.0 151.0 0 0.0 2024-01-01 \n",
+ "1918 2023-12-01 31 days -29.0 152.0 0 0.0 2024-01-01 \n",
+ "1919 2023-12-01 31 days -29.0 153.0 0 0.0 2024-01-01 \n",
+ "\n",
+ " t2m tprate model \n",
+ "0 NaN NaN ecmwf \n",
+ "1 NaN NaN ecmwf \n",
+ "2 NaN NaN ecmwf \n",
+ "3 NaN NaN ecmwf \n",
+ "4 NaN NaN ecmwf \n",
+ "... ... ... ... \n",
+ "1915 300.915039 2.158828e-08 ecmwf \n",
+ "1916 299.213379 2.551912e-08 ecmwf \n",
+ "1917 296.932617 2.889698e-08 ecmwf \n",
+ "1918 295.119629 3.888960e-08 ecmwf \n",
+ "1919 295.313965 4.453392e-08 ecmwf \n",
+ "\n",
+ "[1920 rows x 10 columns]"
+ ]
+ },
+ "execution_count": 16,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "# Read in ecmwf_2023.grib as a GRIB file\n",
+ "ecmwf = cfgrib.open_datasets('data/raw/ecmwf_2023.grib')\n",
+ "\n",
+ "# Convert the xarray dataset to a pandas dataframe\n",
+ "ecmwf_df = ecmwf[0].to_dataframe().reset_index()\n",
+ "ecmwf_df['model'] = 'ecmwf'\n",
+ "ecmwf_df"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 17,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/html": [
+ "<div>\n",
+ "<style scoped>\n",
+ " .dataframe tbody tr th:only-of-type {\n",
+ " vertical-align: middle;\n",
+ " }\n",
+ "\n",
+ " .dataframe tbody tr th {\n",
+ " vertical-align: top;\n",
+ " }\n",
+ "\n",
+ " .dataframe thead th {\n",
+ " text-align: right;\n",
+ " }\n",
+ "</style>\n",
+ "<table border=\"1\" class=\"dataframe\">\n",
+ " <thead>\n",
+ " <tr style=\"text-align: right;\">\n",
+ " <th></th>\n",
+ " <th>time</th>\n",
+ " <th>step</th>\n",
+ " <th>latitude</th>\n",
+ " <th>longitude</th>\n",
+ " <th>number</th>\n",
+ " <th>surface</th>\n",
+ " <th>valid_time</th>\n",
+ " <th>t2m</th>\n",
+ " <th>tprate</th>\n",
+ " <th>model</th>\n",
+ " </tr>\n",
+ " </thead>\n",
+ " <tbody>\n",
+ " <tr>\n",
+ " <th>0</th>\n",
+ " <td>2023-03-01</td>\n",
+ " <td>30 days</td>\n",
+ " <td>-22.0</td>\n",
+ " <td>142.0</td>\n",
+ " <td>0</td>\n",
+ " <td>0.0</td>\n",
+ " <td>2023-03-31</td>\n",
+ " <td>NaN</td>\n",
+ " <td>NaN</td>\n",
+ " <td>glosea</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <th>1</th>\n",
+ " <td>2023-03-01</td>\n",
+ " <td>30 days</td>\n",
+ " <td>-22.0</td>\n",
+ " <td>143.0</td>\n",
+ " <td>0</td>\n",
+ " <td>0.0</td>\n",
+ " <td>2023-03-31</td>\n",
+ " <td>NaN</td>\n",
+ " <td>NaN</td>\n",
+ " <td>glosea</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <th>2</th>\n",
+ " <td>2023-03-01</td>\n",
+ " <td>30 days</td>\n",
+ " <td>-22.0</td>\n",
+ " <td>144.0</td>\n",
+ " <td>0</td>\n",
+ " <td>0.0</td>\n",
+ " <td>2023-03-31</td>\n",
+ " <td>NaN</td>\n",
+ " <td>NaN</td>\n",
+ " <td>glosea</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <th>3</th>\n",
+ " <td>2023-03-01</td>\n",
+ " <td>30 days</td>\n",
+ " <td>-22.0</td>\n",
+ " <td>145.0</td>\n",
+ " <td>0</td>\n",
+ " <td>0.0</td>\n",
+ " <td>2023-03-31</td>\n",
+ " <td>NaN</td>\n",
+ " <td>NaN</td>\n",
+ " <td>glosea</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <th>4</th>\n",
+ " <td>2023-03-01</td>\n",
+ " <td>30 days</td>\n",
+ " <td>-22.0</td>\n",
+ " <td>146.0</td>\n",
+ " <td>0</td>\n",
+ " <td>0.0</td>\n",
+ " <td>2023-03-31</td>\n",
+ " <td>NaN</td>\n",
+ " <td>NaN</td>\n",
+ " <td>glosea</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <th>...</th>\n",
+ " <td>...</td>\n",
+ " <td>...</td>\n",
+ " <td>...</td>\n",
+ " <td>...</td>\n",
+ " <td>...</td>\n",
+ " <td>...</td>\n",
+ " <td>...</td>\n",
+ " <td>...</td>\n",
+ " <td>...</td>\n",
+ " <td>...</td>\n",
+ " </tr>\n",
+ " <tr>\n",
+ " <th>1915</th>\n",
+ " <td>2023-12-01</td>\n",
+ " <td>31 days</td>\n",
+ " <td>-29.0</td>\n",
+ " <td>149.0</td>\n",
+ " <td>0</td>\n",
+ " <td>0.0</td>\n",
+ " <td>2024-01-01</td>\n",
1042
+ " <td>302.622101</td>\n",
1043
+ " <td>2.736340e-08</td>\n",
1044
+ " <td>glosea</td>\n",
1045
+ " </tr>\n",
1046
+ " <tr>\n",
1047
+ " <th>1916</th>\n",
1048
+ " <td>2023-12-01</td>\n",
1049
+ " <td>31 days</td>\n",
1050
+ " <td>-29.0</td>\n",
1051
+ " <td>150.0</td>\n",
1052
+ " <td>0</td>\n",
1053
+ " <td>0.0</td>\n",
1054
+ " <td>2024-01-01</td>\n",
1055
+ " <td>300.857300</td>\n",
1056
+ " <td>2.981721e-08</td>\n",
1057
+ " <td>glosea</td>\n",
1058
+ " </tr>\n",
1059
+ " <tr>\n",
1060
+ " <th>1917</th>\n",
1061
+ " <td>2023-12-01</td>\n",
1062
+ " <td>31 days</td>\n",
1063
+ " <td>-29.0</td>\n",
1064
+ " <td>151.0</td>\n",
1065
+ " <td>0</td>\n",
1066
+ " <td>0.0</td>\n",
1067
+ " <td>2024-01-01</td>\n",
1068
+ " <td>298.743988</td>\n",
1069
+ " <td>3.446928e-08</td>\n",
1070
+ " <td>glosea</td>\n",
1071
+ " </tr>\n",
1072
+ " <tr>\n",
1073
+ " <th>1918</th>\n",
1074
+ " <td>2023-12-01</td>\n",
1075
+ " <td>31 days</td>\n",
1076
+ " <td>-29.0</td>\n",
1077
+ " <td>152.0</td>\n",
1078
+ " <td>0</td>\n",
1079
+ " <td>0.0</td>\n",
1080
+ " <td>2024-01-01</td>\n",
1081
+ " <td>297.252991</td>\n",
1082
+ " <td>4.342598e-08</td>\n",
1083
+ " <td>glosea</td>\n",
1084
+ " </tr>\n",
1085
+ " <tr>\n",
1086
+ " <th>1919</th>\n",
1087
+ " <td>2023-12-01</td>\n",
1088
+ " <td>31 days</td>\n",
1089
+ " <td>-29.0</td>\n",
1090
+ " <td>153.0</td>\n",
1091
+ " <td>0</td>\n",
1092
+ " <td>0.0</td>\n",
1093
+ " <td>2024-01-01</td>\n",
1094
+ " <td>296.756409</td>\n",
1095
+ " <td>4.949413e-08</td>\n",
1096
+ " <td>glosea</td>\n",
1097
+ " </tr>\n",
1098
+ " </tbody>\n",
1099
+ "</table>\n",
1100
+ "<p>1920 rows × 10 columns</p>\n",
1101
+ "</div>"
1102
+ ],
1103
+ "text/plain": [
1104
+ " time step latitude longitude number surface valid_time \\\n",
1105
+ "0 2023-03-01 30 days -22.0 142.0 0 0.0 2023-03-31 \n",
1106
+ "1 2023-03-01 30 days -22.0 143.0 0 0.0 2023-03-31 \n",
1107
+ "2 2023-03-01 30 days -22.0 144.0 0 0.0 2023-03-31 \n",
1108
+ "3 2023-03-01 30 days -22.0 145.0 0 0.0 2023-03-31 \n",
1109
+ "4 2023-03-01 30 days -22.0 146.0 0 0.0 2023-03-31 \n",
1110
+ "... ... ... ... ... ... ... ... \n",
1111
+ "1915 2023-12-01 31 days -29.0 149.0 0 0.0 2024-01-01 \n",
1112
+ "1916 2023-12-01 31 days -29.0 150.0 0 0.0 2024-01-01 \n",
1113
+ "1917 2023-12-01 31 days -29.0 151.0 0 0.0 2024-01-01 \n",
1114
+ "1918 2023-12-01 31 days -29.0 152.0 0 0.0 2024-01-01 \n",
1115
+ "1919 2023-12-01 31 days -29.0 153.0 0 0.0 2024-01-01 \n",
1116
+ "\n",
1117
+ " t2m tprate model \n",
1118
+ "0 NaN NaN glosea \n",
1119
+ "1 NaN NaN glosea \n",
1120
+ "2 NaN NaN glosea \n",
1121
+ "3 NaN NaN glosea \n",
1122
+ "4 NaN NaN glosea \n",
1123
+ "... ... ... ... \n",
1124
+ "1915 302.622101 2.736340e-08 glosea \n",
1125
+ "1916 300.857300 2.981721e-08 glosea \n",
1126
+ "1917 298.743988 3.446928e-08 glosea \n",
1127
+ "1918 297.252991 4.342598e-08 glosea \n",
1128
+ "1919 296.756409 4.949413e-08 glosea \n",
1129
+ "\n",
1130
+ "[1920 rows x 10 columns]"
1131
+ ]
1132
+ },
1133
+ "execution_count": 17,
1134
+ "metadata": {},
1135
+ "output_type": "execute_result"
1136
+ }
1137
+ ],
1138
+ "source": [
1139
+ "# same for glosea_2023.grib\n",
1140
+ "glosea = cfgrib.open_datasets('data/raw/glosea_2023.grib')\n",
1141
+ "\n",
1142
+ "glosea_df = glosea[0].to_dataframe().reset_index()\n",
1143
+ "glosea_df['model'] = 'glosea'\n",
1144
+ "glosea_df"
1145
+ ]
1146
+ },
1147
+ {
1148
+ "cell_type": "code",
1149
+ "execution_count": 18,
1150
+ "metadata": {},
1151
+ "outputs": [],
1152
+ "source": [
1153
+ "# Combine dfs\n",
1154
+ "dfs = [ecmwf_df, glosea_df]\n",
1155
+ "master_df = pd.concat(dfs)"
1156
+ ]
1157
+ },
1158
+ {
1159
+ "cell_type": "code",
1160
+ "execution_count": 19,
1161
+ "metadata": {},
1162
+ "outputs": [
1163
+ {
1164
+ "data": {
1165
+ "text/html": [
1166
+ "<div>\n",
1167
+ "<style scoped>\n",
1168
+ " .dataframe tbody tr th:only-of-type {\n",
1169
+ " vertical-align: middle;\n",
1170
+ " }\n",
1171
+ "\n",
1172
+ " .dataframe tbody tr th {\n",
1173
+ " vertical-align: top;\n",
1174
+ " }\n",
1175
+ "\n",
1176
+ " .dataframe thead th {\n",
1177
+ " text-align: right;\n",
1178
+ " }\n",
1179
+ "</style>\n",
1180
+ "<table border=\"1\" class=\"dataframe\">\n",
1181
+ " <thead>\n",
1182
+ " <tr style=\"text-align: right;\">\n",
1183
+ " <th></th>\n",
1184
+ " <th>time</th>\n",
1185
+ " <th>lat</th>\n",
1186
+ " <th>lon</th>\n",
1187
+ " <th>pr</th>\n",
1188
+ " <th>model</th>\n",
1189
+ " <th>year</th>\n",
1190
+ " <th>month</th>\n",
1191
+ " <th>day_of_year</th>\n",
1192
+ " <th>days_since_start</th>\n",
1193
+ " </tr>\n",
1194
+ " </thead>\n",
1195
+ " <tbody>\n",
1196
+ " <tr>\n",
1197
+ " <th>0</th>\n",
1198
+ " <td>2023-03-01</td>\n",
1199
+ " <td>-22.0</td>\n",
1200
+ " <td>142.0</td>\n",
1201
+ " <td>0.000000e+00</td>\n",
1202
+ " <td>ecmwf</td>\n",
1203
+ " <td>2023</td>\n",
1204
+ " <td>3</td>\n",
1205
+ " <td>60</td>\n",
1206
+ " <td>10980</td>\n",
1207
+ " </tr>\n",
1208
+ " <tr>\n",
1209
+ " <th>1</th>\n",
1210
+ " <td>2023-03-01</td>\n",
1211
+ " <td>-22.0</td>\n",
1212
+ " <td>143.0</td>\n",
1213
+ " <td>0.000000e+00</td>\n",
1214
+ " <td>ecmwf</td>\n",
1215
+ " <td>2023</td>\n",
1216
+ " <td>3</td>\n",
1217
+ " <td>60</td>\n",
1218
+ " <td>10980</td>\n",
1219
+ " </tr>\n",
1220
+ " <tr>\n",
1221
+ " <th>2</th>\n",
1222
+ " <td>2023-03-01</td>\n",
1223
+ " <td>-22.0</td>\n",
1224
+ " <td>144.0</td>\n",
1225
+ " <td>0.000000e+00</td>\n",
1226
+ " <td>ecmwf</td>\n",
1227
+ " <td>2023</td>\n",
1228
+ " <td>3</td>\n",
1229
+ " <td>60</td>\n",
1230
+ " <td>10980</td>\n",
1231
+ " </tr>\n",
1232
+ " <tr>\n",
1233
+ " <th>3</th>\n",
1234
+ " <td>2023-03-01</td>\n",
1235
+ " <td>-22.0</td>\n",
1236
+ " <td>145.0</td>\n",
1237
+ " <td>0.000000e+00</td>\n",
1238
+ " <td>ecmwf</td>\n",
1239
+ " <td>2023</td>\n",
1240
+ " <td>3</td>\n",
1241
+ " <td>60</td>\n",
1242
+ " <td>10980</td>\n",
1243
+ " </tr>\n",
1244
+ " <tr>\n",
1245
+ " <th>4</th>\n",
1246
+ " <td>2023-03-01</td>\n",
1247
+ " <td>-22.0</td>\n",
1248
+ " <td>146.0</td>\n",
1249
+ " <td>0.000000e+00</td>\n",
1250
+ " <td>ecmwf</td>\n",
1251
+ " <td>2023</td>\n",
1252
+ " <td>3</td>\n",
1253
+ " <td>60</td>\n",
1254
+ " <td>10980</td>\n",
1255
+ " </tr>\n",
1256
+ " <tr>\n",
1257
+ " <th>...</th>\n",
1258
+ " <td>...</td>\n",
1259
+ " <td>...</td>\n",
1260
+ " <td>...</td>\n",
1261
+ " <td>...</td>\n",
1262
+ " <td>...</td>\n",
1263
+ " <td>...</td>\n",
1264
+ " <td>...</td>\n",
1265
+ " <td>...</td>\n",
1266
+ " <td>...</td>\n",
1267
+ " </tr>\n",
1268
+ " <tr>\n",
1269
+ " <th>1915</th>\n",
1270
+ " <td>2023-12-01</td>\n",
1271
+ " <td>-29.0</td>\n",
1272
+ " <td>149.0</td>\n",
1273
+ " <td>2.736340e-08</td>\n",
1274
+ " <td>glosea</td>\n",
1275
+ " <td>2023</td>\n",
1276
+ " <td>12</td>\n",
1277
+ " <td>335</td>\n",
1278
+ " <td>11255</td>\n",
1279
+ " </tr>\n",
1280
+ " <tr>\n",
1281
+ " <th>1916</th>\n",
1282
+ " <td>2023-12-01</td>\n",
1283
+ " <td>-29.0</td>\n",
1284
+ " <td>150.0</td>\n",
1285
+ " <td>2.981721e-08</td>\n",
1286
+ " <td>glosea</td>\n",
1287
+ " <td>2023</td>\n",
1288
+ " <td>12</td>\n",
1289
+ " <td>335</td>\n",
1290
+ " <td>11255</td>\n",
1291
+ " </tr>\n",
1292
+ " <tr>\n",
1293
+ " <th>1917</th>\n",
1294
+ " <td>2023-12-01</td>\n",
1295
+ " <td>-29.0</td>\n",
1296
+ " <td>151.0</td>\n",
1297
+ " <td>3.446928e-08</td>\n",
1298
+ " <td>glosea</td>\n",
1299
+ " <td>2023</td>\n",
1300
+ " <td>12</td>\n",
1301
+ " <td>335</td>\n",
1302
+ " <td>11255</td>\n",
1303
+ " </tr>\n",
1304
+ " <tr>\n",
1305
+ " <th>1918</th>\n",
1306
+ " <td>2023-12-01</td>\n",
1307
+ " <td>-29.0</td>\n",
1308
+ " <td>152.0</td>\n",
1309
+ " <td>4.342598e-08</td>\n",
1310
+ " <td>glosea</td>\n",
1311
+ " <td>2023</td>\n",
1312
+ " <td>12</td>\n",
1313
+ " <td>335</td>\n",
1314
+ " <td>11255</td>\n",
1315
+ " </tr>\n",
1316
+ " <tr>\n",
1317
+ " <th>1919</th>\n",
1318
+ " <td>2023-12-01</td>\n",
1319
+ " <td>-29.0</td>\n",
1320
+ " <td>153.0</td>\n",
1321
+ " <td>4.949413e-08</td>\n",
1322
+ " <td>glosea</td>\n",
1323
+ " <td>2023</td>\n",
1324
+ " <td>12</td>\n",
1325
+ " <td>335</td>\n",
1326
+ " <td>11255</td>\n",
1327
+ " </tr>\n",
1328
+ " </tbody>\n",
1329
+ "</table>\n",
1330
+ "<p>3840 rows × 9 columns</p>\n",
1331
+ "</div>"
1332
+ ],
1333
+ "text/plain": [
1334
+ " time lat lon pr model year month day_of_year \\\n",
1335
+ "0 2023-03-01 -22.0 142.0 0.000000e+00 ecmwf 2023 3 60 \n",
1336
+ "1 2023-03-01 -22.0 143.0 0.000000e+00 ecmwf 2023 3 60 \n",
1337
+ "2 2023-03-01 -22.0 144.0 0.000000e+00 ecmwf 2023 3 60 \n",
1338
+ "3 2023-03-01 -22.0 145.0 0.000000e+00 ecmwf 2023 3 60 \n",
1339
+ "4 2023-03-01 -22.0 146.0 0.000000e+00 ecmwf 2023 3 60 \n",
1340
+ "... ... ... ... ... ... ... ... ... \n",
1341
+ "1915 2023-12-01 -29.0 149.0 2.736340e-08 glosea 2023 12 335 \n",
1342
+ "1916 2023-12-01 -29.0 150.0 2.981721e-08 glosea 2023 12 335 \n",
1343
+ "1917 2023-12-01 -29.0 151.0 3.446928e-08 glosea 2023 12 335 \n",
1344
+ "1918 2023-12-01 -29.0 152.0 4.342598e-08 glosea 2023 12 335 \n",
1345
+ "1919 2023-12-01 -29.0 153.0 4.949413e-08 glosea 2023 12 335 \n",
1346
+ "\n",
1347
+ " days_since_start \n",
1348
+ "0 10980 \n",
1349
+ "1 10980 \n",
1350
+ "2 10980 \n",
1351
+ "3 10980 \n",
1352
+ "4 10980 \n",
1353
+ "... ... \n",
1354
+ "1915 11255 \n",
1355
+ "1916 11255 \n",
1356
+ "1917 11255 \n",
1357
+ "1918 11255 \n",
1358
+ "1919 11255 \n",
1359
+ "\n",
1360
+ "[3840 rows x 9 columns]"
1361
+ ]
1362
+ },
1363
+ "execution_count": 19,
1364
+ "metadata": {},
1365
+ "output_type": "execute_result"
1366
+ }
1367
+ ],
1368
+ "source": [
1369
+ "columns = ['time', 'lat', 'lon', 'pr', 'model', 'year', 'month', 'day_of_year', 'days_since_start']\n",
1370
+ "\n",
1371
+ "# Extract year, month, day_of_year, and days_since_start from time\n",
1372
+ "master_df['time'] = pd.to_datetime(master_df['time'])\n",
1373
+ "master_df['year'] = master_df['time'].dt.year\n",
1374
+ "master_df['month'] = master_df['time'].dt.month\n",
1375
+ "master_df['day_of_year'] = master_df['time'].dt.dayofyear\n",
1376
+ "\n",
1377
+ "# days_since_start is days since 1993-02-06\n",
1378
+ "start_date = pd.to_datetime('1993-02-06')\n",
1379
+ "master_df['days_since_start'] = (master_df['time'] - start_date).dt.days\n",
1380
+ "\n",
1381
+ "# Rename latitude to lat and longitude to lon\n",
1382
+ "master_df.rename(columns={'latitude': 'lat', 'longitude': 'lon'}, inplace=True)\n",
1383
+ "# Rename tprate to pr\n",
1384
+ "master_df.rename(columns={'tprate': 'pr'}, inplace=True)\n",
1385
+ "\n",
1386
+ "# Fill NaN pr values with 0\n",
1387
+ "master_df['pr'] = master_df['pr'].fillna(0)\n",
1388
+ "\n",
1389
+ "master_df = master_df[columns]\n",
1390
+ "master_df\n"
1391
+ ]
1392
+ },
1393
+ {
1394
+ "cell_type": "code",
1395
+ "execution_count": 20,
1396
+ "metadata": {},
1397
+ "outputs": [],
1398
+ "source": [
1399
+ "# Save as parquet\n",
1400
+ "master_df.to_parquet('data/processed/master_2023.parquet')"
1401
+ ]
1402
+ },
1403
+ {
1404
+ "cell_type": "markdown",
1405
+ "metadata": {},
1406
+ "source": [
1407
+ "## Actual master df"
1408
+ ]
1409
+ },
1410
+ {
1411
+ "cell_type": "code",
1412
+ "execution_count": 1,
1413
+ "metadata": {},
1414
+ "outputs": [],
1415
+ "source": [
1416
+ "import pandas as pd\n",
1417
+ "\n",
1418
+ "path = 'data/processed/master.parquet'\n",
1419
+ "df = pd.read_parquet(path)"
1420
+ ]
1421
+ },
1422
+ {
1423
+ "cell_type": "code",
1424
+ "execution_count": 2,
1425
+ "metadata": {},
1426
+ "outputs": [
1427
+ {
1428
+ "data": {
1429
+ "text/html": [
1430
+ "<div>\n",
1431
+ "<style scoped>\n",
1432
+ " .dataframe tbody tr th:only-of-type {\n",
1433
+ " vertical-align: middle;\n",
1434
+ " }\n",
1435
+ "\n",
1436
+ " .dataframe tbody tr th {\n",
1437
+ " vertical-align: top;\n",
1438
+ " }\n",
1439
+ "\n",
1440
+ " .dataframe thead th {\n",
1441
+ " text-align: right;\n",
1442
+ " }\n",
1443
+ "</style>\n",
1444
+ "<table border=\"1\" class=\"dataframe\">\n",
1445
+ " <thead>\n",
1446
+ " <tr style=\"text-align: right;\">\n",
1447
+ " <th></th>\n",
1448
+ " <th>time</th>\n",
1449
+ " <th>lat</th>\n",
1450
+ " <th>lon</th>\n",
1451
+ " <th>pr</th>\n",
1452
+ " <th>model</th>\n",
1453
+ " </tr>\n",
1454
+ " </thead>\n",
1455
+ " <tbody>\n",
1456
+ " <tr>\n",
1457
+ " <th>0</th>\n",
1458
+ " <td>1983-01-01 12:00:00</td>\n",
1459
+ " <td>-29.0</td>\n",
1460
+ " <td>150.0</td>\n",
1461
+ " <td>0.220000</td>\n",
1462
+ " <td>access</td>\n",
1463
+ " </tr>\n",
1464
+ " <tr>\n",
1465
+ " <th>1</th>\n",
1466
+ " <td>1983-01-01 12:00:00</td>\n",
1467
+ " <td>-29.0</td>\n",
1468
+ " <td>151.0</td>\n",
1469
+ " <td>3.170000</td>\n",
1470
+ " <td>access</td>\n",
1471
+ " </tr>\n",
1472
+ " <tr>\n",
1473
+ " <th>2</th>\n",
1474
+ " <td>1983-01-01 12:00:00</td>\n",
1475
+ " <td>-29.0</td>\n",
1476
+ " <td>152.0</td>\n",
1477
+ " <td>13.210000</td>\n",
1478
+ " <td>access</td>\n",
1479
+ " </tr>\n",
1480
+ " <tr>\n",
1481
+ " <th>3</th>\n",
1482
+ " <td>1983-01-01 12:00:00</td>\n",
1483
+ " <td>-29.0</td>\n",
1484
+ " <td>153.0</td>\n",
1485
+ " <td>25.549999</td>\n",
1486
+ " <td>access</td>\n",
1487
+ " </tr>\n",
1488
+ " <tr>\n",
1489
+ " <th>4</th>\n",
1490
+ " <td>1983-01-01 12:00:00</td>\n",
1491
+ " <td>-28.0</td>\n",
1492
+ " <td>150.0</td>\n",
1493
+ " <td>0.160000</td>\n",
1494
+ " <td>access</td>\n",
1495
+ " </tr>\n",
1496
+ " </tbody>\n",
1497
+ "</table>\n",
1498
+ "</div>"
1499
+ ],
1500
+ "text/plain": [
1501
+ " time lat lon pr model\n",
1502
+ "0 1983-01-01 12:00:00 -29.0 150.0 0.220000 access\n",
1503
+ "1 1983-01-01 12:00:00 -29.0 151.0 3.170000 access\n",
1504
+ "2 1983-01-01 12:00:00 -29.0 152.0 13.210000 access\n",
1505
+ "3 1983-01-01 12:00:00 -29.0 153.0 25.549999 access\n",
1506
+ "4 1983-01-01 12:00:00 -28.0 150.0 0.160000 access"
1507
+ ]
1508
+ },
1509
+ "execution_count": 2,
1510
+ "metadata": {},
1511
+ "output_type": "execute_result"
1512
+ }
1513
+ ],
1514
+ "source": [
1515
+ "df.head()"
1516
+ ]
1517
+ },
1518
+ {
1519
+ "cell_type": "code",
1520
+ "execution_count": null,
1521
+ "metadata": {},
1522
+ "outputs": [],
1523
+ "source": []
1524
+ }
1525
+ ],
1526
+ "metadata": {
1527
+ "kernelspec": {
1528
+ "display_name": "Python 3",
1529
+ "language": "python",
1530
+ "name": "python3"
1531
+ },
1532
+ "language_info": {
1533
+ "codemirror_mode": {
1534
+ "name": "ipython",
1535
+ "version": 3
1536
+ },
1537
+ "file_extension": ".py",
1538
+ "mimetype": "text/x-python",
1539
+ "name": "python",
1540
+ "nbconvert_exporter": "python",
1541
+ "pygments_lexer": "ipython3",
1542
+ "version": "3.11.5"
1543
+ },
1544
+ "orig_nbformat": 4
1545
+ },
1546
+ "nbformat": 4,
1547
+ "nbformat_minor": 2
1548
+ }
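The feature-engineering cell above (year, month, day_of_year, and days_since_start from a 1993-02-06 epoch) can be sketched as a standalone script. The column names, renames, and epoch come from the notebook; the two sample rows are invented for illustration:

```python
import pandas as pd

# Toy frame standing in for the concatenated ecmwf/glosea data (values invented).
master_df = pd.DataFrame({
    'time': ['2023-03-01', '2023-12-01'],
    'latitude': [-22.0, -29.0],
    'longitude': [142.0, 153.0],
    'tprate': [None, 4.949413e-08],
    'model': ['ecmwf', 'glosea'],
})

# Extract calendar features from the forecast start date.
master_df['time'] = pd.to_datetime(master_df['time'])
master_df['year'] = master_df['time'].dt.year
master_df['month'] = master_df['time'].dt.month
master_df['day_of_year'] = master_df['time'].dt.dayofyear

# days_since_start counts days since 1993-02-06, as in the notebook.
start_date = pd.to_datetime('1993-02-06')
master_df['days_since_start'] = (master_df['time'] - start_date).dt.days

# Rename to the short column names and zero-fill missing precipitation.
master_df = master_df.rename(columns={'latitude': 'lat', 'longitude': 'lon',
                                      'tprate': 'pr'})
master_df['pr'] = master_df['pr'].fillna(0)
master_df = master_df[['time', 'lat', 'lon', 'pr', 'model',
                       'year', 'month', 'day_of_year', 'days_since_start']]
```

This reproduces the values visible in the rendered output: 2023-03-01 maps to day_of_year 60 and days_since_start 10980, and 2023-12-01 to 335 and 11255.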
data.ipynb ADDED
The diff for this file is too large to render. See raw diff
 
ensemble_data.py ADDED
@@ -0,0 +1,234 @@
1
+ from tqdm import tqdm
2
+ import requests
3
+ import os
4
+ import xarray as xr
5
+ import pandas as pd
6
+ from glob import glob
7
+ import cfgrib
8
+
9
+ # ### GLOSEA5 ###
10
+ # path = "raw/glosea.grib"
11
+ # print("Processing Glosea5...")
12
+ # # Open the GRIB file as an xarray dataset
13
+ # ds_1 = cfgrib.open_datasets("raw/glosea_1.grib")
14
+ # ds_2 = cfgrib.open_datasets("raw/glosea_2.grib")
15
+ # ds_3 = cfgrib.open_datasets("raw/glosea_3.grib")
16
+ # glosea_xarray_1, glosea_xarray_2, glosea_xarray_3 = ds_1[0], ds_2[0], ds_3[0]
17
+ # # Convert to pandas DataFrame
18
+ # df_1 = glosea_xarray_1.to_dataframe().reset_index()
19
+ # df_2 = glosea_xarray_2.to_dataframe().reset_index()
20
+ # df_3 = glosea_xarray_3.to_dataframe().reset_index()
21
+ # # Concatenate the two DataFrames
22
+ # glosea_df = pd.concat([df_1, df_2, df_3], ignore_index=True)
23
+ # # Convert tprate NaN to 0.0
24
+ # glosea_df['tprate'] = glosea_df['tprate'].fillna(0.0)
25
+ # # Aggregate across steps
26
+ # glosea_df = glosea_df.groupby(['valid_time', 'latitude', 'longitude'])['tprate'].mean().reset_index()
27
+ # # Keep between 1981 and 2019
28
+ # glosea_df = glosea_df[glosea_df['valid_time'].between('1981-01-01', '2018-12-31')]
29
+ # # Rename columns
30
+ # glosea_df.rename(columns={'tprate': 'pr', 'latitude': 'lat', 'longitude': 'lon', 'valid_time': 'time'}, inplace=True)
31
+ # glosea_df.reset_index(inplace=True, drop=True)
32
+ # print(f'Saving Glosea5 data to parquet (length of dataframe = {len(glosea_df)})...')
33
+ # glosea_df.to_parquet('processed/glosea5.parquet')
34
+ # # Delete any files ending in .idx from the raw folder
35
+ # for file in os.listdir("raw"):
36
+ # if file.endswith(".idx"):
37
+ # os.remove(os.path.join("raw", file))
38
+
39
+ # ### ECMWF ###
40
+ # path = "raw/ecmwf.grib"
41
+ # print("Processing ECMWF...")
42
+ # # Open the GRIB file as an xarray dataset
43
+ # ds_1 = cfgrib.open_datasets("raw/ecmwf_1.grib")
44
+ # ds_2 = cfgrib.open_datasets("raw/ecmwf_2.grib")
45
+ # ds_3 = cfgrib.open_datasets("raw/ecmwf_3.grib")
46
+ # ecmwf_xarray_1, ecmwf_xarray_2, ecmwf_xarray_3 = ds_1[0], ds_2[0], ds_3[0]
47
+ # # Convert to pandas DataFrame
48
+ # df_1 = ecmwf_xarray_1.to_dataframe().reset_index()
49
+ # df_2 = ecmwf_xarray_2.to_dataframe().reset_index()
50
+ # df_3 = ecmwf_xarray_3.to_dataframe().reset_index()
51
+ # # Concatenate the two DataFrames
52
+ # ecmwf_df = pd.concat([df_1, df_2, df_3], ignore_index=True)
53
+ # # Convert tprate NaN to 0.0
54
+ # ecmwf_df['tprate'] = ecmwf_df['tprate'].fillna(0.0)
55
+ # # Aggregate across steps
56
+ # ecmwf_df = ecmwf_df.groupby(['valid_time', 'latitude', 'longitude'])['tprate'].mean().reset_index()
57
+ # # Keep between 1981 and 2019
58
+ # ecmwf_df = ecmwf_df[ecmwf_df['valid_time'].between('1981-01-01', '2018-12-31')]
59
+ # # Rename columns
60
+ # ecmwf_df.rename(columns={'tprate': 'pr', 'latitude': 'lat', 'longitude': 'lon', 'valid_time': 'time'}, inplace=True)
61
+ # ecmwf_df.reset_index(inplace=True, drop=True)
62
+ # print(f'Saving ECMWF data to parquet (length of dataframe = {len(ecmwf_df)})...')
63
+ # ecmwf_df.to_parquet('processed/ecmwf.parquet')
64
+ # # Delete any files ending in .idx from the raw folder
65
+ # for file in os.listdir("raw"):
66
+ # if file.endswith(".idx"):
67
+ # os.remove(os.path.join("raw", file))
68
+
69
+
70
+ # ### ACCESS-S2 ###
71
+
72
+ # # Define the output file path
73
+ # output_file = 'processed/access.parquet'
74
+
75
+ # # Define the path and file pattern
76
+ # path = "/g/data/ux62/access-s2/hindcast/calibrated/atmos/pr/daily/e09/"
77
+
78
+ # # Check if the output file already exists and read it if it does
79
+ # if os.path.exists(output_file):
80
+ # master_df = pd.read_parquet(output_file)
81
+ # # Extract already processed years
82
+ # processed_years = master_df['time'].dt.year.unique()
83
+ # else:
84
+ # # Initialise an empty DataFrame if the file does not exist
85
+ # master_df = pd.DataFrame()
86
+ # processed_years = []
87
+
88
+ # # Generate file patterns for each year from 1983 to 2018 and get matching files
89
+ # files = []
90
+ # for year in range(1983, 2018):
91
+ # if year not in processed_years:
92
+ # pattern = f"*pr_{year}*.nc"
93
+ # files.extend(glob(os.path.join(path, pattern)))
94
+
95
+ # print(f"Processing data for years: {set(range(1983, 2018)) - set(processed_years)}")
96
+
97
+ # # Loop through the list of files and load each one
98
+ # for file in tqdm(files):
99
+ # # Load the xarray dataset
100
+ # ds = xr.open_dataset(file)
101
+
102
+ # # Slice the dataset for three specific lat/lon grids
103
+ # ds_sliced1 = ds.sel(lon=slice(142, 145), lat=slice(-25, -22))
104
+ # ds_sliced2 = ds.sel(lon=slice(150, 153), lat=slice(-29, -26))
105
+ # ds_sliced3 = ds.sel(lon=slice(143, 146), lat=slice(-20, -17))
106
+
107
+
108
+ # # Flatten the sliced data for 'pr' variable for both slices
109
+ # df1 = ds_sliced1['pr'].to_dataframe().reset_index()
110
+ # df2 = ds_sliced2['pr'].to_dataframe().reset_index()
111
+ # df3 = ds_sliced3['pr'].to_dataframe().reset_index()
112
+
113
+ # # Concatenate the two DataFrames
114
+ # combined_df = pd.concat([df1, df2, df3], ignore_index=True)
115
+
116
+ # # Filter rows where latitude and longitude are integers
117
+ # combined_df = combined_df[combined_df['lat'].apply(lambda x: x.is_integer())]
118
+ # combined_df = combined_df[combined_df['lon'].apply(lambda x: x.is_integer())]
119
+
120
+
121
+ # # Append the DataFrame to the master DataFrame
122
+ # master_df = pd.concat([master_df, combined_df], ignore_index=True)
123
+
124
+ # # Close the xarray dataset
125
+ # ds.close()
126
+
127
+ # # Save the updated master_df to the Parquet file
128
+ # os.makedirs(os.path.dirname(output_file), exist_ok=True)
129
+ # master_df.to_parquet(output_file)
130
+
131
+ # # Print the processed year for tracking
132
+ # processed_year = pd.to_datetime(master_df['time'].max()).year
133
+ # print(f"Year {processed_year} processed and saved.")
134
+
135
+ # # After the loop, perform any final processing needed on master_df
136
+ # if os.path.exists(output_file):
137
+ # master_df = pd.read_parquet(output_file)
138
+ # master_df['time'] = pd.to_datetime(master_df['time'])
139
+
140
+ # # Group by time, lat, and lon, then sum the pr values
141
+ # deduped_df = master_df.groupby(['time', 'lat', 'lon']).agg({'pr': 'sum'}).reset_index()
142
+
143
+ # # Save the final processed DataFrame to a Parquet file
144
+ # deduped_df.to_parquet(output_file)
145
+ # print(f"Final file saved to {output_file}")
146
+ # else:
147
+ # print(f"No data processed. {output_file} does not exist.")
148
+
149
+
150
+ # def create_master_parquet():
151
+ # files = ['access', 'ecmwf', 'glosea5', 'silo']
152
+ # frames = []
153
+ # for file in files:
154
+ # df = pd.read_parquet(f'processed/{file}.parquet')
155
+ # df['model'] = file
156
+ # frames.append(df)
157
+
158
+ # access = frames[0]
159
+ # access.reset_index(inplace=True, drop=True)
160
+ # columns = access.columns
161
+ # # Convert time to string
162
+ # access['time'] = access['time'].astype(str)
163
+
164
+ # ecmwf = frames[1]
165
+ # ecmwf.rename(columns={'date': 'time', 'precip': 'pr', 'latitude': 'lat', 'longitude': 'lon'}, inplace=True)
166
+ # ecmwf = ecmwf[columns]
167
+ # ecmwf.reset_index(inplace=True, drop=True)
168
+ # # Convert time to string
169
+ # ecmwf['time'] = ecmwf['time'].astype(str)
170
+
171
+ # glosea = frames[2]
172
+ # glosea.rename(columns={'date': 'time', 'tprate': 'pr', 'latitude': 'lat', 'longitude': 'lon'}, inplace=True)
173
+ # glosea = glosea[columns]
174
+ # glosea.reset_index(inplace=True, drop=True)
175
+ # # Convert time to string
176
+ # glosea['time'] = glosea['time'].astype(str)
177
+
178
+ # silo = frames[3]
179
+ # silo.rename(columns={'daily_rain': 'pr'}, inplace=True)
180
+ # silo = silo[columns]
181
+ # # Convert lat and lon to float32
182
+ # silo['lat'] = silo['lat'].astype('float32')
183
+ # silo['lon'] = silo['lon'].astype('float32')
184
+ # silo.reset_index(inplace=True, drop=True)
185
+ # # Convert time to string
186
+ # silo['time'] = silo['time'].astype(str)
187
+
188
+ # dfs = [access, ecmwf, glosea, silo]
189
+ # master_df = pd.concat(dfs)
190
+ # master_df.reset_index(inplace=True, drop=True)
191
+
192
+ # master_df.to_parquet('processed/master.parquet')
193
+ # print(f"Final file saved to processed/master.parquet")
194
+
195
+ import pandas as pd
196
+
197
+ def standardize_df(df, rename_dict, default_columns):
198
+ """Standardize the DataFrame structure."""
199
+ df = df.rename(columns=rename_dict)
200
+ df = df[default_columns]
201
+ df.reset_index(inplace=True, drop=True)
202
+ df['time'] = df['time'].astype(str)
203
+ return df
204
+
205
+ def create_master_parquet():
206
+ files = ['access', 'ecmwf', 'glosea5', 'silo']
207
+ rename_dicts = [
208
+ {},
209
+ {'date': 'time', 'precip': 'pr', 'latitude': 'lat', 'longitude': 'lon'},
210
+ {'date': 'time', 'tprate': 'pr', 'latitude': 'lat', 'longitude': 'lon'},
211
+ {'daily_rain': 'pr'}
212
+ ]
213
+
214
+ # Read and append the 'model' column to each DataFrame
215
+ frames = []
216
+ for file, rename_dict in zip(files, rename_dicts):
217
+ df = pd.read_parquet(f'processed/{file}.parquet')
218
+ df['model'] = file
219
+ df = standardize_df(df, rename_dict, default_columns=['time', 'lat', 'lon', 'pr', 'model'])
220
+ frames.append(df)
221
+
222
+ # All frames already share the default column set after standardize_df in the
+ # loop above, so no second standardization pass is needed before concatenating.
228
+
229
+ master_df = pd.concat(frames)
230
+ master_df.reset_index(inplace=True, drop=True)
231
+ master_df.to_parquet('processed/master.parquet')
232
+ print("Final file saved to processed/master.parquet")
233
+
234
+ create_master_parquet()
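The `standardize_df` helper above can be exercised in isolation on a toy frame. The rename dict below mirrors the ECMWF entry in `rename_dicts`; the sample values are invented for illustration:

```python
import pandas as pd

def standardize_df(df, rename_dict, default_columns):
    """Standardize the DataFrame structure (as in ensemble_data.py)."""
    df = df.rename(columns=rename_dict)
    df = df[default_columns]
    df.reset_index(inplace=True, drop=True)
    df['time'] = df['time'].astype(str)
    return df

# Toy ECMWF-style frame before standardization (values invented).
raw = pd.DataFrame({
    'date': pd.to_datetime(['1993-02-06', '1993-02-07']),
    'precip': [0.0, 1.2],
    'latitude': [-22.0, -22.0],
    'longitude': [142.0, 143.0],
    'model': ['ecmwf', 'ecmwf'],
})

tidy = standardize_df(
    raw,
    {'date': 'time', 'precip': 'pr', 'latitude': 'lat', 'longitude': 'lon'},
    default_columns=['time', 'lat', 'lon', 'pr', 'model'],
)
```

After the call, every source ends up with the same `['time', 'lat', 'lon', 'pr', 'model']` layout and a string-typed `time` column, which is what lets `create_master_parquet` concatenate the four sources directly.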
eval/raw/ecmwf_eval_3.grib ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:3154e5b3163c10466056178af3c7f02a5677d2575c41f0f25af8b0527a524954
3
+ size 10782720
eval/raw/glosea_eval_3.grib ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1723c64d75b914573fcd652fe2dacdadb52af833ea28610e15b442c8b1c148ab
3
+ size 5374080
month_tensors/all_squares/climatology_targets.pt ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:745f1e4d8c25e3beb4bedae95fe52e311129287000bf5071feaaa0449ee33b31
3
+ size 37976
month_tensors/all_squares/end_dates.txt ADDED
The diff for this file is too large to render. See raw diff
 
month_tensors/all_squares/feature_names.pt ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1b4e4f0fc8f9dde8b17648f7e976039812b9d74b15ebbe1d49f306ac8ee59094
3
+ size 1208
month_tensors/all_squares/features.pt ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b559093a3562128ce53a0dbd9e255b51f6efa4237e35f3dace718ba63c97d124
3
+ size 846113
month_tensors/all_squares/targets.pt ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4c09238ea9dd29249895b350e5ffcf7c88865de8ef439c65716502502e0926c7
3
+ size 37916
new_tensors/square_1/climatology_targets.pt ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:aa1c6370b60f399fdce5b96f23e44d9d6cca64a7f4ca4de11885575769ad7368
3
+ size 503768
new_tensors/square_1/end_dates.txt ADDED
The diff for this file is too large to render. See raw diff
 
new_tensors/square_1/targets.pt ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1eeda294a697dedfed4ea2a5f8b58c338d2e48ea707b07d9443a813e96693bb6
3
+ size 503708
new_tensors/square_2/climatology_targets.pt ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b69e7f8792089b1aca1a035a0ca771eb9ca1284fa659dc57a3f277f449ef698c
3
+ size 503768
new_tensors/square_2/end_dates.txt ADDED
The diff for this file is too large to render. See raw diff
 
new_tensors/square_2/targets.pt ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4c602994548d6f59748f07bd21a6ed6d2393b78f08b26ad682ac1bb61a7d85b4
3
+ size 503708
new_tensors/square_3/climatology_targets.pt ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:e3aa187582e1fb8969702c917e7eee754fbf6cd8d9fb732c56185000ee14feb9
3
+ size 503768
new_tensors/square_3/end_dates.txt ADDED
The diff for this file is too large to render. See raw diff
 
new_tensors/square_3/targets.pt ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:df8d283af7cefc6f5fd60b20b73599d460756c831f4b9f6a0838e81fe2c20dd5
3
+ size 503708
new_tensors/square_all/climatology_targets.pt ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:75a3b3f53725fbdfe49860645982afd788a1e89b9fa1e54f9361b6054a2bc26c
3
+ size 1508824
new_tensors/square_all/end_dates.txt ADDED
The diff for this file is too large to render. See raw diff
 
new_tensors/square_all/targets.pt ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:cb5054fda657dd0232756c076465e4213bb0ecbfe9fd28b05eb4295fb58455a0
3
+ size 1508764
processed/access.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:810edd2e70aedcaa15f7f4b12654e454ff91f675032d27bfb52dd9f845406689
3
+ size 1318419
processed/ecmwf.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:bcaf98090126abdb5bde76e5eec2718cf257c13419d177a6cdf4d4de0ff30002
3
+ size 768564
processed/glosea5.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8cf98a980ebc41208182dab341e3ae5808fd280cf9ff2ea91e2536b22d4366fe
3
+ size 172770
processed/master.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:8a4281b4f37ea47c32c765e72cd70bd9b0c35eb379fe4441a13edc364cdbe5fa
3
+ size 3045290
processed/master_2023.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b89d251d3130d1a60e50b39f7f77de44d47bf60a3dd80a8394807913d66a512e
3
+ size 28766
processed/silo.parquet ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:49f05d8d95dc78cdf6f5efbe655a2fc9d2a3893dd3eaf7eb50f404ef0ea397e7
3
+ size 836003
progress.txt ADDED
@@ -0,0 +1,96 @@
1
+ -25,142
2
+ -25,143
3
+ -25,144
4
+ -25,145
5
+ -24,142
6
+ -24,143
7
+ -24,144
8
+ -24,145
9
+ -23,142
10
+ -23,143
11
+ -23,144
12
+ -23,145
13
+ -22,142
14
+ -22,143
15
+ -22,144
16
+ -22,145
17
+ -29,150
18
+ -29,151
19
+ -29,152
20
+ -29,153
21
+ -28,150
22
+ -28,151
23
+ -28,152
24
+ -28,153
25
+ -27,150
26
+ -27,151
27
+ -27,152
28
+ -27,153
29
+ -26,150
30
+ -26,151
31
+ -26,152
32
+ -26,153
33
+ -25,142
34
+ -25,143
35
+ -25,144
36
+ -25,145
37
+ -24,142
38
+ -24,143
39
+ -24,144
40
+ -24,145
41
+ -23,142
42
+ -23,143
43
+ -23,144
44
+ -23,145
45
+ -22,142
46
+ -22,143
47
+ -22,144
48
+ -22,145
49
+ -29,150
50
+ -29,151
51
+ -29,152
52
+ -29,153
53
+ -28,150
54
+ -28,151
55
+ -28,152
56
+ -28,153
57
+ -27,150
58
+ -27,151
59
+ -27,152
60
+ -27,153
61
+ -26,150
62
+ -26,151
63
+ -26,152
64
+ -26,153
65
+ -25,142
66
+ -25,143
67
+ -25,144
68
+ -25,145
69
+ -24,142
70
+ -24,143
71
+ -24,144
72
+ -24,145
73
+ -23,142
74
+ -23,143
75
+ -23,144
76
+ -23,145
77
+ -22,142
78
+ -22,143
79
+ -22,144
80
+ -22,145
81
+ -29,150
82
+ -29,151
83
+ -29,152
84
+ -29,153
85
+ -28,150
86
+ -28,151
87
+ -28,152
88
+ -28,153
89
+ -27,150
90
+ -27,151
91
+ -27,152
92
+ -27,153
93
+ -26,150
94
+ -26,151
95
+ -26,152
96
+ -26,153
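The `progress.txt` list above is two 4×4 blocks of 1° grid cells (lat −25…−22 × lon 142…145 and lat −29…−26 × lon 150…153), enumerated latitude-major and repeated three times for a total of 96 lines. A minimal sketch that regenerates the same lines (the `square` helper is illustrative, not part of the repository):

```python
def square(lat_min, lat_max, lon_min, lon_max):
    """Enumerate 1-degree grid cells, latitude-major, both ends inclusive."""
    return [(lat, lon)
            for lat in range(lat_min, lat_max + 1)
            for lon in range(lon_min, lon_max + 1)]

# The two squares that appear in progress.txt (matching the slices used in silo.py)
cells = square(-25, -22, 142, 145) + square(-29, -26, 150, 153)

# The file lists the same 32 cells three times in a row
lines = [f"{lat},{lon}" for lat, lon in cells] * 3
```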
raw/access_old.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:353eb13da7961b276d5dd4216de25a8b8248aa00e7af0f302ea91ac9cdc4da44
+ size 483902
raw/ecmwf_1.grib ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:77f2075a28795896bcc6ebdc8031c3c9aac9c7285e6cccada63735839f944d5d
+ size 62078400
raw/ecmwf_1.grib.923a8.idx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f5c42afd269285dd28ee14a5d8fc534a3485a65bc1131c62f26db059ac5734f3
+ size 6640907
raw/ecmwf_2.grib ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d413efe7f798f0e5bdf4a160d76a32bfd43204c78a7c401d8756a0366cbbc319
+ size 24261120
raw/ecmwf_2.grib.923a8.idx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a9c0299b4d5d1a342f0e419fcbd77bb3c6fa60a00cc5c64f6940d34573321724
+ size 7936788
raw/ecmwf_2023.grib ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:87756a5b03ccc64c3c71ef247eb7299c0ef93aa6e8a2bf4845f080374e5c5dd0
+ size 341120
raw/ecmwf_2023.grib.5b7b6.idx ADDED
Binary file (89.9 kB). View file
raw/ecmwf_3.grib ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e228fe30cca96a158e0e9722059a3c377e19fa2205dd8e7d6def9bc73abfbd55
+ size 24261120
raw/ecmwf_3.grib.923a8.idx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:35930a7df617099bfdc70bd8c50f8dc1a3b4b8f1ecc8bc7ed8deb1defc6cf778
+ size 7936788
raw/glosea_1.grib ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:00e2a957f2c6f56e6fa99f20eb6351c8c69bef00a360d0b270375df95b7e03c9
+ size 9636480
raw/glosea_2.grib ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0b58982155cb90ea8c13282f8137a6f12302d1b776a1e0b94f682be8bf978e91
+ size 57818880
raw/glosea_2023.grib ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f582f3e43c32b72e403f83160e69d0653f3352eaa67c3aa0c966623b8e916cdf
+ size 512800
raw/glosea_2023.grib.5b7b6.idx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:15e3ef78be2a97addea127da4adb23cdf99339592b4e09838b3449116283a2ef
+ size 106189
raw/glosea_3.grib ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9a978618f99732938111062d76d907f356515ef336bbd28a311b7a0293a959b6
+ size 57818880
silo.py ADDED
@@ -0,0 +1,129 @@
+ from tqdm import tqdm
+ import requests
+ import os
+ import xarray as xr
+ import pandas as pd
+
+ # Define constants
+ BASE_URL = "https://s3-ap-southeast-2.amazonaws.com/silo-open-data/Official/annual/"
+ VARIABLE = "daily_rain"  # or whichever variable you are interested in
+ years = range(1989, 2021)  # Replace with your actual range
+
+ # def clean_silo(path, lat_slice1, lon_slice1, lat_slice2, lon_slice2):
+ #     ds = xr.open_dataset(path)
+ #     filtered_data1 = ds.sel(lat=lat_slice1, lon=lon_slice1)
+ #     print(filtered_data1)
+ #     filtered_data1 = filtered_data1.where(
+ #         (filtered_data1['lat'] % 1 == 0) & (filtered_data1['lon'] % 1 == 0), drop=True
+ #     )
+ #     filtered_data2 = ds.sel(lat=lat_slice2, lon=lon_slice2)
+ #     filtered_data2 = filtered_data2.where(
+ #         (filtered_data2['lat'] % 1 == 0) & (filtered_data2['lon'] % 1 == 0), drop=True
+ #     )
+ #     combined_data = xr.concat([filtered_data1, filtered_data2], dim='lat_lon')
+ #     df = combined_data.to_dataframe().reset_index()
+ #     df.drop(columns=['crs'], inplace=True)
+ #     df.reset_index(inplace=True, drop=True)
+ #     return df
+
+ # # List to store cleaned DataFrames
+ # df_list = []
+
+ # # Loop over each year to download the corresponding NetCDF file
+ # print(f'Generating SILO data for {years[0]} to {years[-1]}...')
+ # for year in tqdm(years):
+ #     url = f"{BASE_URL}{VARIABLE}/{year}.{VARIABLE}.nc"
+ #     response = requests.get(url)
+
+ #     # Temporary path to save the downloaded NetCDF file
+ #     temp_path = f"{year}.{VARIABLE}.nc"
+
+ #     # Check if the request was successful
+ #     if response.status_code == 200:
+ #         # Save the NetCDF file
+ #         with open(temp_path, "wb") as f:
+ #             f.write(response.content)
+
+ #         # Clean the data
+ #         cleaned_df = clean_silo(temp_path, slice(-25, -22), slice(142, 145), slice(-29, -26), slice(150, 153))
+ #         print(cleaned_df)
+ #         df_list.append(cleaned_df)
+
+ #         # Remove the temporary NetCDF file to save space
+ #         os.remove(temp_path)
+ #     else:
+ #         print(f"Failed to download data for {year}")
+
+ # # Concatenate all the cleaned DataFrames
+ # final_df = pd.concat(df_list, ignore_index=True)
+ # final_df.rename(columns={'daily_rain': 'pr'}, inplace=True)
+
+ # # Save to parquet
+ # print('Saving data to parquet...')
+ # final_df.to_parquet('processed/silo.parquet')
+
+ def convert_to_dataframe(ds, lat_slice, lon_slice):
+     # Filter the dataset to the bounding box and convert to a DataFrame
+     filtered_data = ds.sel(lat=lat_slice, lon=lon_slice).to_dataframe().reset_index()
+     return filtered_data
+
+ def aggregate_precipitation(df, grid_size=0.05):
+     # Integer lat/lon values present in the data define the aggregation grid
+     lat_range = df['lat'].apply(lambda x: round(x)).unique()
+     lon_range = df['lon'].apply(lambda x: round(x)).unique()
+
+     aggregated_records = []
+     for lat_int in lat_range:
+         for lon_int in lon_range:
+             # Define grid boundaries around each integer coordinate
+             lat_min, lat_max = lat_int - grid_size, lat_int + grid_size
+             lon_min, lon_max = lon_int - grid_size, lon_int + grid_size
+
+             # Filter and aggregate data within the grid cell
+             grid_df = df[(df['lat'] >= lat_min) & (df['lat'] <= lat_max) &
+                          (df['lon'] >= lon_min) & (df['lon'] <= lon_max)]
+             aggregated = grid_df.groupby('time')['daily_rain'].sum().reset_index()
+             aggregated['lat'] = lat_int
+             aggregated['lon'] = lon_int
+             aggregated_records.append(aggregated)
+
+     # Combine all records into a single DataFrame
+     return pd.concat(aggregated_records, ignore_index=True)
+
+ # Main loop to process data
+ df_list = []
+ for year in tqdm(years):
+     url = f"{BASE_URL}{VARIABLE}/{year}.{VARIABLE}.nc"
+     response = requests.get(url)
+
+     # Temporary path to save the downloaded NetCDF file
+     temp_path = f"{year}.{VARIABLE}.nc"
+
+     # Check if the request was successful
+     if response.status_code == 200:
+         # Save the NetCDF file
+         with open(temp_path, "wb") as f:
+             f.write(response.content)
+
+         # Convert the Dataset to a DataFrame for each target square
+         ds = xr.open_dataset(temp_path)
+         df1 = convert_to_dataframe(ds, slice(-25, -22), slice(142, 145))
+         df2 = convert_to_dataframe(ds, slice(-29, -26), slice(150, 153))
+         df3 = convert_to_dataframe(ds, slice(-20, -17), slice(143, 146))
+
+         # Combine the three DataFrames
+         combined_df = pd.concat([df1, df2, df3], ignore_index=True)
+
+         # Aggregate precipitation data
+         aggregated_df = aggregate_precipitation(combined_df)
+         df_list.append(aggregated_df)
+
+         # Remove the temporary NetCDF file to save space
+         os.remove(temp_path)
+
+     else:
+         print(f"Failed to download data for {year}")
+
+ # Concatenate all DataFrames and save to parquet
+ final_df = pd.concat(df_list, ignore_index=True)
+ final_df.to_parquet('processed/silo.parquet')
tensors/climatology_targets_240.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e4c91e46b66853bf5e17d563ab750a90b025c8a9a00985a2657d1b2ea371968f
+ size 516268
tensors/end_dates_240.txt ADDED
The diff for this file is too large to render. See raw diff
tensors/targets_120.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4489fd1dc60eeb774acd3c71960b63fb7d9a7b90efe0efb96df4f6803f9e5711
+ size 523804
tensors/targets_240.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0195280fd617d8147beca6b1d0d356467d10bffcf57fcde439d67435eaff8925
+ size 516144
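Most binary artifacts in this commit are stored as Git LFS pointer files with the three-line `version` / `oid` / `size` format shown throughout the diff. A minimal sketch of parsing one such pointer into its fields, using the `tensors/targets_240.pt` pointer above as input (the `parse_lfs_pointer` helper is illustrative, not part of the repository):

```python
def parse_lfs_pointer(text):
    """Split a Git LFS pointer file into a dict of its space-separated key/value lines."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# Pointer contents copied verbatim from tensors/targets_240.pt above
pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:0195280fd617d8147beca6b1d0d356467d10bffcf57fcde439d67435eaff8925\n"
    "size 516144\n"
)
info = parse_lfs_pointer(pointer)
```

The `oid` identifies the blob in LFS storage and `size` is the byte length of the real file, which is why the diff only ever shows these three lines for binary additions.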