Dataset preview (Data Studio excerpt): two columns, `Base_2_2/Zone/CellData/diffusion_coefficient` and `Base_2_2/Zone/CellData/flow`, each holding a list of 16.4k floating-point values per sample. In the previewed rows the diffusion coefficient is constant (0.1, stored at float32 precision), while the flow column contains small positive values on the order of 1e-5 to 1e-4.
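The repeated value 0.10000000149011612 seen in the `diffusion_coefficient` preview is not noise: it is the nearest float32 to 0.1, printed at float64 precision. A quick check with numpy:

```python
import numpy as np

# 0.1 is not exactly representable in binary floating point; casting to
# float32 and printing at full (float64) precision reproduces the value
# shown in the dataset preview.
value = float(np.float32(0.1))
print(value)  # 0.10000000149011612
```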
```yaml
legal:
  owner: Takamoto, M et al. (https://darus.uni-stuttgart.de/dataset.xhtml?persistentId=doi:10.18419/darus-2986)
  license: cc-by-4.0
data_production:
  physics: 2D Darcy Flow
  type: simulation
  script: Converted to PLAID format for standardized usage; no changes to data content.
num_samples:
  train: 10000
storage_backend: hf_datasets
plaid:
  version: 0.1.12
```
This dataset was generated with plaid; see its documentation for additional details on how to extract data from plaid_sample objects.
The simplest way to use this dataset is to first download it:

```python
from plaid.storage import download_from_hub

repo_id = "channel/dataset"
local_folder = "downloaded_dataset"
download_from_hub(repo_id, local_folder)
```
Then, to iterate over the dataset and instantiate samples:

```python
from plaid.storage import init_from_disk

local_folder = "downloaded_dataset"
split_name = "train"
datasetdict, converterdict = init_from_disk(local_folder)
dataset = datasetdict[split_name]
converter = converterdict[split_name]

for i in range(len(dataset)):
    plaid_sample = converter.to_plaid(dataset, i)
```
It is also possible to stream the data directly, without downloading it first:

```python
from plaid.storage import init_streaming_from_hub

repo_id = "channel/dataset"
split_name = "train"
datasetdict, converterdict = init_streaming_from_hub(repo_id)
dataset = datasetdict[split_name]
converter = converterdict[split_name]

for sample_raw in dataset:
    plaid_sample = converter.sample_to_plaid(sample_raw)
```
Plaid samples' features can be retrieved as follows:

```python
from plaid.storage import load_problem_definitions_from_disk

local_folder = "downloaded_dataset"
pb_defs = load_problem_definitions_from_disk(local_folder)

# or
from plaid.storage import load_problem_definitions_from_hub

repo_id = "channel/dataset"
pb_defs = load_problem_definitions_from_hub(repo_id)

pb_def = pb_defs[0]
plaid_sample = ...  # use a method from above to instantiate a plaid sample

for t in plaid_sample.get_all_time_values():
    for path in pb_def.get_in_features_identifiers():
        plaid_sample.get_feature_by_path(path=path, time=t)
    for path in pb_def.get_out_features_identifiers():
        plaid_sample.get_feature_by_path(path=path, time=t)
```
For those familiar with HF's datasets library, raw data can be retrieved without using the plaid library:

```python
from datasets import load_dataset

repo_id = "channel/dataset"
datasetdict = load_dataset(repo_id)

for split_name, dataset in datasetdict.items():
    for raw_sample in dataset:
        for feat_name in dataset.column_names:
            feature = raw_sample[feat_name]
```
Note that the raw data contains only the variable features, with a specific encoding for time-dependent features.
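As a minimal sketch of working with these raw columns (the sample dict below is a stand-in mirroring the preview, not data loaded from the hub): each feature arrives as a plain Python list of floats, which can be converted to a numpy array for numerical work.

```python
import numpy as np

# Stand-in for one raw sample as yielded by `datasets` iteration:
# a dict mapping column (feature) names to lists of per-cell values.
raw_sample = {
    "Base_2_2/Zone/CellData/diffusion_coefficient": [0.1, 0.1, 0.1, 0.1],
    "Base_2_2/Zone/CellData/flow": [4.49e-5, 8.67e-5, 1.27e-4, 1.65e-4],
}

# Convert each list to a float32 array (the storage precision suggested
# by the repeated 0.10000000149011612 values in the preview).
arrays = {
    name: np.asarray(vals, dtype=np.float32)
    for name, vals in raw_sample.items()
}

flow = arrays["Base_2_2/Zone/CellData/flow"]
print(flow.shape, float(flow.mean()))
```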