Nionio committed · verified · Commit de918d8 · 1 Parent(s): dfac3a8

Upload README.md with huggingface_hub

Files changed (1): README.md (+101 −0)

README.md CHANGED
---
license: cc-by-4.0
task_categories:
- graph-ml
tags:
- physics learning
- geometry learning
dataset_info:
  features:
  - name: Base_2_2/Zone/CellData/diffusion_coefficient

  - split: train
    path: data/train-*
---
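The feature names in the schema above are hierarchical, slash-separated paths (here: base, zone, data container, field). A minimal plain-Python sketch of taking one apart — the splitting is purely illustrative, not a `plaid` API:

```python
# Feature names are slash-separated hierarchical paths (illustrative only).
feature_name = "Base_2_2/Zone/CellData/diffusion_coefficient"

# Split into its components: base, zone, data container, field.
base, zone, container, field = feature_name.split("/")

print(field)  # diffusion_coefficient
```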
```yaml
legal:
  owner: Takamoto, M et al. (https://darus.uni-stuttgart.de/dataset.xhtml?persistentId=doi:10.18419/darus-2986)
  license: cc-by-4.0
data_production:
  physics: 2D Darcy Flow
  type: simulation
  script: Converted to PLAID format for standardized usage; no changes to data content.
num_samples:
  train: 10000
storage_backend: hf_datasets
plaid:
  version: 0.1.12
```

This dataset was generated with [`plaid`](https://plaid-lib.readthedocs.io/); see that documentation for additional details on how to extract data from `plaid_sample` objects.

The simplest way to use this dataset is to first download it:
```python
from plaid.storage import download_from_hub

repo_id = "channel/dataset"
local_folder = "downloaded_dataset"

download_from_hub(repo_id, local_folder)
```

Then, to iterate over the dataset and instantiate samples:
```python
from plaid.storage import init_from_disk

local_folder = "downloaded_dataset"
split_name = "train"

datasetdict, converterdict = init_from_disk(local_folder)

dataset = datasetdict[split_name]
converter = converterdict[split_name]

for i in range(len(dataset)):
    plaid_sample = converter.to_plaid(dataset, i)
```

It is also possible to stream the data directly:
```python
from plaid.storage import init_streaming_from_hub

repo_id = "channel/dataset"
split_name = "train"

datasetdict, converterdict = init_streaming_from_hub(repo_id)

dataset = datasetdict[split_name]
converter = converterdict[split_name]

for sample_raw in dataset:
    plaid_sample = converter.sample_to_plaid(sample_raw)
```
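Streaming avoids materializing the whole split on disk: samples are produced lazily as you iterate. A library-free sketch of that consumption pattern with a plain Python generator (the generator and its contents are made up for illustration):

```python
def stream_samples():
    # Stand-in for a streaming dataset: yields raw samples one at a
    # time instead of loading the entire split into memory.
    for i in range(3):
        yield {"index": i}

count = 0
for sample_raw in stream_samples():
    # in the real loop, converter.sample_to_plaid(sample_raw) goes here
    count += 1

print(count)  # 3
```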

Features of `plaid` samples can be retrieved as follows:
```python
from plaid.storage import load_problem_definitions_from_disk
local_folder = "downloaded_dataset"
pb_defs = load_problem_definitions_from_disk(local_folder)

# or
from plaid.storage import load_problem_definitions_from_hub
repo_id = "channel/dataset"
pb_defs = load_problem_definitions_from_hub(repo_id)

pb_def = pb_defs[0]

plaid_sample = ...  # use a method from above to instantiate a plaid sample

for t in plaid_sample.get_all_time_values():
    for path in pb_def.get_in_features_identifiers():
        plaid_sample.get_feature_by_path(path=path, time=t)
    for path in pb_def.get_out_features_identifiers():
        plaid_sample.get_feature_by_path(path=path, time=t)
```

For those familiar with HF's `datasets` library, the raw data can be retrieved without using the `plaid` library:
```python
from datasets import load_dataset

repo_id = "channel/dataset"

datasetdict = load_dataset(repo_id)

for split_name, dataset in datasetdict.items():
    for raw_sample in dataset:
        for feat_name in dataset.column_names:
            feature = raw_sample[feat_name]
```

Note that the raw data contains the variable features only, with a specific encoding for time-dependent features.
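
The nesting in the loop above (splits → samples → columns) can be mimicked with plain Python containers; a toy sketch with made-up values, independent of any library:

```python
# Toy stand-in for a DatasetDict: split name -> list of row dicts.
datasetdict = {
    "train": [
        {"Base_2_2/Zone/CellData/diffusion_coefficient": [0.1, 0.2]},
        {"Base_2_2/Zone/CellData/diffusion_coefficient": [0.3, 0.4]},
    ],
}

collected = []
for split_name, dataset in datasetdict.items():
    for raw_sample in dataset:
        for feat_name, feature in raw_sample.items():
            collected.append((split_name, feat_name, feature))

print(len(collected))  # 2
```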