---
license: cc-by-sa-4.0
task_categories:
- graph-ml
pretty_name: >-
  2D quasistatic non-linear structural mechanics with finite elasticity and
  topology variations
tags:
- physics learning
- geometry learning
dataset_info:
  features:
  - name: Base_2_2/Zone
    list:
      list: int64
  - name: Base_2_2/Zone/Elements_TRI_3/ElementConnectivity
    list: int64
  - name: Base_2_2/Zone/Elements_TRI_3/ElementRange
    list: int64
  - name: Base_2_2/Zone/GridCoordinates/CoordinateX
    list: float32
  - name: Base_2_2/Zone/GridCoordinates/CoordinateY
    list: float32
  - name: Base_2_2/Zone/PointData/P11
    list: float32
  - name: Base_2_2/Zone/PointData/P12
    list: float32
  - name: Base_2_2/Zone/PointData/P21
    list: float32
  - name: Base_2_2/Zone/PointData/P22
    list: float32
  - name: Base_2_2/Zone/PointData/psi
    list: float32
  - name: Base_2_2/Zone/PointData/u1
    list: float32
  - name: Base_2_2/Zone/PointData/u2
    list: float32
  - name: Base_2_2/Zone/ZoneBC/Ext_bound/PointList
    list:
      list: int32
  - name: Base_2_2/Zone/ZoneBC/Holes/PointList
    list:
      list: int32
  - name: Global/C11
    list: float32
  - name: Global/C12
    list: float32
  - name: Global/C22
    list: float32
  - name: Global/effective_energy
    list: float32
  splits:
  - name: train
    num_bytes: 360004988
    num_examples: 764
  - name: test
    num_bytes: 117157624
    num_examples: 376
  download_size: 259643357
  dataset_size: 477162612
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
legal:
  owner: Safran
  license: cc-by-sa-4.0
data_production:
  type: simulation
  physics: 2D quasistatic non-linear structural mechanics, finite elasticity (large
    strains), P1 elements, compressible hyperelastic material
  simulator: fenics
  num_samples:
    train: 764
    test: 376
  storage_backend: hf_datasets
plaid:
  version: 0.1.13.dev1+gb350f274a
---
This dataset was generated with plaid; refer to the plaid documentation for additional details on how to extract data from `plaid_sample` objects.
The simplest way to use this dataset is to first download it:

```python
from plaid.storage import download_from_hub

repo_id = "channel/dataset"
local_folder = "downloaded_dataset"

download_from_hub(repo_id, local_folder)
```
Then, to iterate over the dataset and instantiate samples:

```python
from plaid.storage import init_from_disk

local_folder = "downloaded_dataset"
split_name = "train"

datasetdict, converterdict = init_from_disk(local_folder)
dataset = datasetdict[split_name]
converter = converterdict[split_name]

for i in range(len(dataset)):
    plaid_sample = converter.to_plaid(dataset, i)
```
It is also possible to stream the data directly:

```python
from plaid.storage import init_streaming_from_hub

repo_id = "channel/dataset"
split_name = "train"

datasetdict, converterdict = init_streaming_from_hub(repo_id)
dataset = datasetdict[split_name]
converter = converterdict[split_name]

for sample_raw in dataset:
    plaid_sample = converter.sample_to_plaid(sample_raw)
```
Plaid samples' features can be retrieved as follows:

```python
from plaid.storage import load_problem_definitions_from_disk

local_folder = "downloaded_dataset"
pb_defs = load_problem_definitions_from_disk(local_folder)

# or
from plaid.storage import load_problem_definitions_from_hub

repo_id = "channel/dataset"
pb_defs = load_problem_definitions_from_hub(repo_id)

pb_def = pb_defs[0]
plaid_sample = ...  # use a method from above to instantiate a plaid sample

for t in plaid_sample.get_all_time_values():
    for path in pb_def.get_in_features_identifiers():
        plaid_sample.get_feature_by_path(path=path, time=t)
    for path in pb_def.get_out_features_identifiers():
        plaid_sample.get_feature_by_path(path=path, time=t)
```
For those familiar with HF's datasets library, raw data can be retrieved without using the plaid library:

```python
from datasets import load_dataset

repo_id = "channel/dataset"
datasetdict = load_dataset(repo_id)

for split_name, dataset in datasetdict.items():
    for raw_sample in dataset:
        for feat_name in dataset.column_names:
            feature = raw_sample[feat_name]
```
Note that the raw data contains the variable features only, with a specific encoding for time-dependent features.
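To illustrate how the raw columns map to mesh arrays, here is a minimal sketch using NumPy on a hypothetical raw sample that mimics the schema above. The toy coordinate and connectivity values are made up for the example; the assumption is that flat columns decode as Python lists and that the TRI_3 connectivity is stored as a flat, CGNS-style 1-based index list.

```python
import numpy as np

# Hypothetical raw sample mimicking this dataset's schema
# (feature names taken from dataset_info; values are illustrative only)
raw_sample = {
    "Base_2_2/Zone/GridCoordinates/CoordinateX": [0.0, 1.0, 0.0, 1.0],
    "Base_2_2/Zone/GridCoordinates/CoordinateY": [0.0, 0.0, 1.0, 1.0],
    "Base_2_2/Zone/Elements_TRI_3/ElementConnectivity": [1, 2, 3, 2, 4, 3],
}

# Node coordinates as an (n_nodes, 2) array
coords = np.stack(
    [
        np.asarray(raw_sample["Base_2_2/Zone/GridCoordinates/CoordinateX"]),
        np.asarray(raw_sample["Base_2_2/Zone/GridCoordinates/CoordinateY"]),
    ],
    axis=1,
)

# TRI_3 connectivity is stored flat; reshape to (n_elements, 3).
# CGNS connectivity is typically 1-based, so subtract 1 for 0-based indexing.
connectivity = np.asarray(
    raw_sample["Base_2_2/Zone/Elements_TRI_3/ElementConnectivity"]
).reshape(-1, 3)
triangles = connectivity - 1
```

For the exact decoding used by this dataset, prefer the plaid converters shown above, which handle the encoding for you.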
## Dataset Sources

- Papers: