Modalities: Time-series · Formats: parquet · Size: < 1K
fabiencasenave committed d1e4195 (verified) · 1 parent: 24b0060

Upload README.md with huggingface_hub

Files changed (1): README.md (+92 −36)
@@ -82,53 +82,109 @@ configs:
   - split: OOD
     path: data/OOD-*
 ---
-![image/png](https://i.ibb.co/MDqsmb5H/Logo-Tensile2d-2-consolas-100.png)
-![image/png](https://i.ibb.co/Js062hF/preview.png)
 ```yaml
 data_production:
   physics: 2D quasistatic non-linear structural mechanics, small deformations, plane
     strain
-  type: simulation
-legal:
-  license: CC-BY-SA
-  owner: Safran
 plaid:
-  version: 0.1.10.dev114+gcbd3fd46f.d20251014
 
 ```
-Example of commands:
 ```python
-from datasets import load_dataset
-from plaid.bridges import huggingface_bridge
-
-repo_id = "chanel/dataset"
-pb_def_name = "pb_def_name"  # `pb_def_name` is to choose from the repo `problem_definitions` folder
-
-# Load the dataset
-hf_datasetdict = load_dataset(repo_id)
-
-# Load addition required data
-flat_cst, key_mappings = huggingface_bridge.load_tree_struct_from_hub(repo_id)
-pb_def = huggingface_bridge.load_problem_definition_from_hub(repo_id, pb_def_name)
-
-# Efficient reconstruction of plaid samples
-for split_name, hf_dataset in hf_datasetdict.items():
-    for i in range(len(hf_dataset)):
-        sample = huggingface_bridge.to_plaid_sample(
-            hf_dataset,
-            i,
-            flat_cst[split_name],
-            key_mappings["cgns_types"],
-        )
-
-        # Extract input and output features from samples:
-        for t in sample.get_all_mesh_times():
             for path in pb_def.get_in_features_identifiers():
-                sample.get_feature_by_path(path=path, time=t)
             for path in pb_def.get_out_features_identifiers():
-                sample.get_feature_by_path(path=path, time=t)
 ```
-This dataset was generated in [PLAID](https://plaid-lib.readthedocs.io/), we refer to this documentation for additional details on how to extract data from `sample` objects.
 
 ### Dataset Sources
 
   - split: OOD
     path: data/OOD-*
 ---
+<p align='center'>
+  <img src='https://i.ibb.co/MDqsmb5H/Logo-Tensile2d-2-consolas-100.png' alt='Tensile2d logo' width='1000'/>
+  <img src='https://i.ibb.co/Js062hF/preview.png' alt='Tensile2d preview' width='1000'/>
+</p>
+
 ```yaml
+legal:
+  owner: Safran
+  license: cc-by-sa-4.0
 data_production:
+  type: simulation
   physics: 2D quasistatic non-linear structural mechanics, small deformations, plane
     strain
+  num_samples:
+    train: 500
+    test: 200
+    OOD: 2
+storage_backend: hf_datasets
 plaid:
+  version: 0.1.11.dev21+g94f13b9c8
 
 ```
+This dataset was generated with [`plaid`](https://plaid-lib.readthedocs.io/); see that documentation for additional details on how to extract data from `plaid_sample` objects.
+
+The simplest way to use this dataset is to first download it:
 ```python
+from plaid.storage import download_from_hub
+
+repo_id = "channel/dataset"
+local_folder = "downloaded_dataset"
+
+download_from_hub(repo_id, local_folder)
+```
+
+Then, to iterate over the dataset and instantiate samples:
+```python
+from plaid.storage import init_from_disk
+
+local_folder = "downloaded_dataset"
+split_name = "train"
+
+datasetdict, converterdict = init_from_disk(local_folder)
+
+dataset = datasetdict[split_name]
+converter = converterdict[split_name]
+
+for i in range(len(dataset)):
+    raw_sample = dataset[i]
+    plaid_sample = converter.to_plaid(dataset, i)
+```
+
+It is also possible to stream the data directly from the hub:
+```python
+from plaid.storage import init_streaming_from_hub
+
+repo_id = "channel/dataset"
+split_name = "train"
+
+datasetdict, converterdict = init_streaming_from_hub(repo_id)
+
+dataset = datasetdict[split_name]
+converter = converterdict[split_name]
+
+for sample_raw in dataset:
+    plaid_sample = converter.sample_to_plaid(sample_raw)
+```
+
+Features of plaid samples can then be retrieved as follows:
+```python
+from plaid.storage import load_problem_definitions_from_disk
+local_folder = "downloaded_dataset"
+pb_defs = load_problem_definitions_from_disk(local_folder)
+
+# or
+from plaid.storage import load_problem_definitions_from_hub
+repo_id = "channel/dataset"
+pb_defs = load_problem_definitions_from_hub(repo_id)
+
+pb_def = pb_defs[0]
+
+plaid_sample = ...  # use one of the methods above to instantiate a plaid sample
+
+for t in plaid_sample.get_all_time_values():
     for path in pb_def.get_in_features_identifiers():
+        plaid_sample.get_feature_by_path(path=path, time=t)
     for path in pb_def.get_out_features_identifiers():
+        plaid_sample.get_feature_by_path(path=path, time=t)
+```
+
+For those familiar with HF's `datasets` library, the raw data can be retrieved without the `plaid` library:
+```python
+from datasets import load_dataset
+
+repo_id = "channel/dataset"
+
+datasetdict = load_dataset(repo_id)
+
+for split_name, dataset in datasetdict.items():
+    for raw_sample in dataset:
+        for feat_name in dataset.column_names:
+            feature = raw_sample[feat_name]
 ```
+Note that the raw data covers only the variable features, with a specific encoding for time-dependent features.
 
 ### Dataset Sources