---
license: cc-by-sa-4.0
task_categories:
  - graph-ml
pretty_name: 2D internal aero CFD RANS dataset, under geometrical variations
tags:
  - physics learning
  - geometry learning
dataset_info:
  features:
    - name: Base_1_2/Zone/GridCoordinates/CoordinateX
      list: float32
    - name: Base_1_2/Zone/GridCoordinates/CoordinateY
      list: float32
    - name: Base_1_2/Zone/PointData/M_iso
      list: float32
    - name: Base_2_2/Zone/GridCoordinates/CoordinateX
      list: float32
    - name: Base_2_2/Zone/GridCoordinates/CoordinateY
      list: float32
    - name: Base_2_2/Zone/PointData/mach
      list: float32
    - name: Base_2_2/Zone/PointData/nut
      list: float32
    - name: Base_2_2/Zone/PointData/ro
      list: float32
    - name: Base_2_2/Zone/PointData/roe
      list: float32
    - name: Base_2_2/Zone/PointData/rou
      list: float32
    - name: Base_2_2/Zone/PointData/rov
      list: float32
    - name: Base_2_2/Zone/PointData/sdf
      list: float32
    - name: Global/Pr
      list: float32
    - name: Global/Q
      list: float32
    - name: Global/Tr
      list: float32
    - name: Global/angle_in
      list: float32
    - name: Global/angle_out
      list: float32
    - name: Global/eth_is
      list: float32
    - name: Global/mach_out
      list: float32
    - name: Global/power
      list: float32
  splits:
    - name: train
      num_bytes: 881825516
      num_examples: 671
    - name: test
      num_bytes: 73767729
      num_examples: 168
  download_size: 1016414914
  dataset_size: 955593245
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
---

# VKI-LS59

![VKI logo](https://i.ibb.co/hJqv4hCt/Logo-VKI-2.png) ![Safran logo](https://i.ibb.co/7NX9z7NQ/image001.png)

```yaml
legal:
  owner: Safran
  license: cc-by-sa-4.0
data_production:
  type: simulation
  physics: 2D compressible RANS, with Spalart-Allmaras turbulence model
  simulator: Broadcast
num_samples:
  train: 671
  test: 168
storage_backend: hf_datasets
plaid:
  version: 0.1.13.dev1+gb350f274a
```

This dataset was generated with plaid; refer to the plaid documentation for additional details on how to extract data from `plaid_sample` objects.

The simplest way to use this dataset is to first download it:

```python
from plaid.storage import download_from_hub

repo_id = "channel/dataset"
local_folder = "downloaded_dataset"

download_from_hub(repo_id, local_folder)
```

Then, to iterate over the dataset and instantiate samples:

```python
from plaid.storage import init_from_disk

local_folder = "downloaded_dataset"
split_name = "train"

datasetdict, converterdict = init_from_disk(local_folder)

dataset = datasetdict[split_name]
converter = converterdict[split_name]

for i in range(len(dataset)):
    plaid_sample = converter.to_plaid(dataset, i)
```

It is also possible to stream the data directly from the Hub:

```python
from plaid.storage import init_streaming_from_hub

repo_id = "channel/dataset"
split_name = "train"

datasetdict, converterdict = init_streaming_from_hub(repo_id)

dataset = datasetdict[split_name]
converter = converterdict[split_name]

for sample_raw in dataset:
    plaid_sample = converter.sample_to_plaid(sample_raw)
```
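Streaming avoids downloading the full dataset, which is convenient for a quick look at a few samples. When exploring, capping the iteration with `itertools.islice` keeps it cheap; a minimal sketch, using a stand-in generator rather than the real dataset:

```python
from itertools import islice

def sample_stream():
    # Stand-in for a streaming split: yields raw samples lazily,
    # mimicking the dict-of-lists layout of the dataset columns.
    for i in range(1000):
        yield {"Global/Q": [float(i)]}

# Inspect only the first three samples without consuming the whole stream.
first_three = list(islice(sample_stream(), 3))
```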

A plaid sample's features can be retrieved as follows:

```python
from plaid.storage import load_problem_definitions_from_disk

local_folder = "downloaded_dataset"
pb_defs = load_problem_definitions_from_disk(local_folder)

# or
from plaid.storage import load_problem_definitions_from_hub

repo_id = "channel/dataset"
pb_defs = load_problem_definitions_from_hub(repo_id)

pb_def = pb_defs[0]

plaid_sample = ...  # use a method from above to instantiate a plaid sample

for t in plaid_sample.get_all_time_values():
    for path in pb_def.get_in_features_identifiers():
        plaid_sample.get_feature_by_path(path=path, time=t)
    for path in pb_def.get_out_features_identifiers():
        plaid_sample.get_feature_by_path(path=path, time=t)
```

For those familiar with Hugging Face's `datasets` library, the raw data can be retrieved without using plaid:

```python
from datasets import load_dataset

repo_id = "channel/dataset"

datasetdict = load_dataset(repo_id)

for split_name, dataset in datasetdict.items():
    for raw_sample in dataset:
        for feat_name in dataset.column_names:
            feature = raw_sample[feat_name]
```

Note that the raw data contains the variable features only, with a specific encoding for time-dependent features.
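Since each column is stored as a flat list of float32 values, a common first step is to convert the retrieved features to NumPy arrays. A minimal sketch with stand-in data (the column names follow the feature list above, but the values here are purely illustrative):

```python
import numpy as np

# Stand-in raw sample: in the real dataset each column is a flat
# list of float32 values keyed by its feature path.
raw_sample = {
    "Base_2_2/Zone/PointData/mach": [0.1, 0.2, 0.3],
    "Global/angle_in": [41.0],
}

# Convert every list feature to a NumPy array for numerical work.
fields = {name: np.asarray(values, dtype=np.float32)
          for name, values in raw_sample.items()}

mach = fields["Base_2_2/Zone/PointData/mach"]
```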

## Dataset Sources