Dataset fields (from the viewer preview): `Base_2_2/Zone/CellData/diffusion_coefficient` and `Base_2_2/Zone/CellData/flow`, each a list of length 16.4k per sample.
legal:
  owner: Takamoto, M et al. (https://darus.uni-stuttgart.de/dataset.xhtml?persistentId=doi:10.18419/darus-2986)
  license: cc-by-4.0
data_production:
  physics: 2D Darcy Flow
  type: simulation
  script: Converted to PLAID format for standardized usage; no changes to data content.
num_samples:
  train: 10000
storage_backend: hf_datasets
plaid:
  version: 0.1.12

This dataset was generated with plaid; refer to the plaid documentation for additional details on how to extract data from plaid_sample objects.

The simplest way to use this dataset is to first download it:

from plaid.storage import download_from_hub

repo_id = "channel/dataset"
local_folder = "downloaded_dataset"

download_from_hub(repo_id, local_folder)

Then, to iterate over the dataset and instantiate samples:

from plaid.storage import init_from_disk

local_folder = "downloaded_dataset"
split_name = "train"

datasetdict, converterdict = init_from_disk(local_folder)

dataset = datasetdict[split_name]
converter = converterdict[split_name]

for i in range(len(dataset)):
    plaid_sample = converter.to_plaid(dataset, i)

It is possible to stream the data directly:

from plaid.storage import init_streaming_from_hub

repo_id = "channel/dataset"
split_name = "train"

datasetdict, converterdict = init_streaming_from_hub(repo_id)

dataset = datasetdict[split_name]
converter = converterdict[split_name]

for sample_raw in dataset:
    plaid_sample = converter.sample_to_plaid(sample_raw)

A plaid sample's features can be retrieved as follows:

from plaid.storage import load_problem_definitions_from_disk
local_folder = "downloaded_dataset"
pb_defs = load_problem_definitions_from_disk(local_folder)

# or
from plaid.storage import load_problem_definitions_from_hub
repo_id = "channel/dataset"
pb_defs = load_problem_definitions_from_hub(repo_id)


pb_def = pb_defs[0]

plaid_sample = ... # use a method from above to instantiate a plaid sample

# loop over all time steps and all input/output features of the problem
for t in plaid_sample.get_all_time_values():
    for path in pb_def.get_in_features_identifiers():
        in_feature = plaid_sample.get_feature_by_path(path=path, time=t)
    for path in pb_def.get_out_features_identifiers():
        out_feature = plaid_sample.get_feature_by_path(path=path, time=t)

For those familiar with Hugging Face's datasets library, the raw data can be retrieved without using the plaid library:

from datasets import load_dataset

repo_id = "channel/dataset"

datasetdict = load_dataset(repo_id)

for split_name, dataset in datasetdict.items():
    for raw_sample in dataset:
        for feat_name in dataset.column_names:
            feature = raw_sample[feat_name]

Note that the raw data contains the variable features only, with a specific encoding for time-dependent features.
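For reference, the row-wise access pattern used in the snippet above can be sketched with a plain dict standing in for a real datasets.Dataset split; the field names are taken from this dataset, but the numeric values below are dummies, not actual dataset values:

```python
# Stand-in for a loaded split: a column-oriented mapping of feature name
# to per-sample lists, mimicking the layout served by HF datasets.
raw_split = {
    "Base_2_2/Zone/CellData/diffusion_coefficient": [[0.1, 0.1], [0.1, 0.1]],
    "Base_2_2/Zone/CellData/flow": [[0.0017, 0.0031], [0.0015, 0.0028]],
}

column_names = list(raw_split)
num_rows = len(next(iter(raw_split.values())))

features = []
# Row-wise iteration, mirroring `for raw_sample in dataset` above.
for i in range(num_rows):
    raw_sample = {name: raw_split[name][i] for name in column_names}
    for feat_name in column_names:
        features.append(raw_sample[feat_name])
```

With the plaid-based methods above, the same per-feature values come back decoded by the converter instead of in this raw column layout.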
