Nionio committed
Commit d12af77 · verified · 1 Parent(s): a6e7f02

Upload README.md with huggingface_hub

Files changed (1): README.md (+107 −0)
README.md CHANGED
---
license: cc-by-4.0
task_categories:
- graph-ml
pretty_name: PDEBench 2D Shallow Water Equations
tags:
- physics learning
- geometry learning
dataset_info:
  features:
  - name: Base_2_2/Zone/CellData/water_height
  # … (intermediate front-matter lines not shown in this diff) …
configs:
  - split: train
    path: data/train-*
---
```yaml
legal:
  owner: Takamoto, M et al. (https://darus.uni-stuttgart.de/dataset.xhtml?persistentId=doi:10.18419/darus-2986)
  license: cc-by-4.0
data_production:
  physics: Shallow Water Equations
  type: simulation
  script: Converted to PLAID format for standardized usage; no changes to data content.
num_samples:
  train: 1000
storage_backend: hf_datasets
plaid:
  version: 0.1.12
```
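
The provenance block above is plain YAML, so it can also be read programmatically. A minimal sketch, assuming PyYAML is installed and using an abbreviated copy of the block as a string:

```python
import yaml  # PyYAML, assumed installed (pip install pyyaml)

# Abbreviated copy of the provenance metadata shown above.
metadata_text = """
legal:
  owner: Takamoto, M et al.
  license: cc-by-4.0
data_production:
  physics: Shallow Water Equations
  type: simulation
num_samples:
  train: 1000
storage_backend: hf_datasets
"""

meta = yaml.safe_load(metadata_text)
print(meta["data_production"]["physics"])  # Shallow Water Equations
print(meta["num_samples"]["train"])        # 1000
```

This can be handy for filtering or cataloguing several PLAID-formatted datasets by their declared physics or sample counts.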
This dataset was generated with [`plaid`](https://plaid-lib.readthedocs.io/); refer to its documentation for additional details on how to extract data from `plaid_sample` objects.

The simplest way to use this dataset is to first download it:
```python
from plaid.storage import download_from_hub

repo_id = "channel/dataset"
local_folder = "downloaded_dataset"

download_from_hub(repo_id, local_folder)
```

Then, to iterate over the dataset and instantiate samples:
```python
from plaid.storage import init_from_disk

local_folder = "downloaded_dataset"
split_name = "train"

datasetdict, converterdict = init_from_disk(local_folder)

dataset = datasetdict[split_name]
converter = converterdict[split_name]

for i in range(len(dataset)):
    plaid_sample = converter.to_plaid(dataset, i)
```

It is also possible to stream the data directly:
```python
from plaid.storage import init_streaming_from_hub

repo_id = "channel/dataset"
split_name = "train"

datasetdict, converterdict = init_streaming_from_hub(repo_id)

dataset = datasetdict[split_name]
converter = converterdict[split_name]

for sample_raw in dataset:
    plaid_sample = converter.sample_to_plaid(sample_raw)
```

Features of plaid samples can be retrieved as follows:
```python
from plaid.storage import load_problem_definitions_from_disk

local_folder = "downloaded_dataset"
pb_defs = load_problem_definitions_from_disk(local_folder)

# or
from plaid.storage import load_problem_definitions_from_hub

repo_id = "channel/dataset"
pb_defs = load_problem_definitions_from_hub(repo_id)

pb_def = pb_defs[0]

plaid_sample = ...  # use a method from above to instantiate a plaid sample

for t in plaid_sample.get_all_time_values():
    for path in pb_def.get_in_features_identifiers():
        plaid_sample.get_feature_by_path(path=path, time=t)
    for path in pb_def.get_out_features_identifiers():
        plaid_sample.get_feature_by_path(path=path, time=t)
```

For those familiar with Hugging Face's `datasets` library, raw data can be retrieved without using the `plaid` library:
```python
from datasets import load_dataset

repo_id = "channel/dataset"

datasetdict = load_dataset(repo_id)

for split_name, dataset in datasetdict.items():
    for raw_sample in dataset:
        for feat_name in dataset.column_names:
            feature = raw_sample[feat_name]
```

Note that the raw data contains only the variable features, with a specific encoding for time-varying features.
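
The raw access pattern above can be exercised without downloading anything, on a toy stand-in built from plain dicts; the column name is the CGNS-style path from the front matter, and the values are made up for illustration:

```python
# Toy stand-in for one split: two raw samples, one flattened field column.
column_names = ["Base_2_2/Zone/CellData/water_height"]
dataset = [
    {"Base_2_2/Zone/CellData/water_height": [0.1, 0.2, 0.3]},
    {"Base_2_2/Zone/CellData/water_height": [0.4, 0.5, 0.6]},
]

# Same nested loop as with a real `datasets` split.
n_values = 0
for raw_sample in dataset:
    for feat_name in column_names:
        feature = raw_sample[feat_name]
        n_values += len(feature)

print(n_values)  # 6
```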

### Dataset Sources

- **Papers:**
  - [arXiv](https://arxiv.org/pdf/2210.07182)