---
license: cc-by-4.0
task_categories:
- graph-ml
pretty_name: PDEBench 2D Diffusion-Reaction
tags:
- physics learning
- geometry learning
dataset_info:
  features:
  - name: Base_2_2/Zone/CellData/activator
    list: float32
  - name: Base_2_2/Zone/CellData/activator_ic
    list: float32
  - name: Base_2_2/Zone/CellData/inhibitor
    list: float32
  - name: Base_2_2/Zone/CellData/inhibitor_ic
    list: float32
  splits:
  - name: train
    num_bytes: 26476560000
    num_examples: 1000
  download_size: 26606982307
  dataset_size: 26476560000
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
```yaml
legal:
  owner: Takamoto, M et al. (https://darus.uni-stuttgart.de/dataset.xhtml?persistentId=doi:10.18419/darus-2986)
  license: cc-by-4.0
data_production:
  physics: 2D Diffusion-Reaction
  type: simulation
  script: Converted to PLAID format for standardized usage; no changes to data content.
num_samples:
  train: 1000
storage_backend: hf_datasets
plaid:
  version: 0.1.12
```
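The metadata block above is plain YAML, so it can be read programmatically, for instance with PyYAML. The sketch below inlines a few fields copied from the block to stay self-contained; this is illustrative only and not part of the `plaid` API:

```python
import yaml

# A few fields copied from the metadata block above, inlined so the
# example runs on its own; in practice you would read the card file itself.
card_metadata = """
legal:
  license: cc-by-4.0
data_production:
  physics: 2D Diffusion-Reaction
  type: simulation
num_samples:
  train: 1000
"""

meta = yaml.safe_load(card_metadata)
print(meta["data_production"]["physics"])  # 2D Diffusion-Reaction
print(meta["num_samples"]["train"])        # 1000
```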
This dataset was generated with [`plaid`](https://plaid-lib.readthedocs.io/); see that documentation for additional details on how to extract data from `plaid_sample` objects.
The simplest way to use this dataset is to first download it:
```python
from plaid.storage import download_from_hub
repo_id = "channel/dataset"
local_folder = "downloaded_dataset"
download_from_hub(repo_id, local_folder)
```
Then, to iterate over the dataset and instantiate samples:
```python
from plaid.storage import init_from_disk
local_folder = "downloaded_dataset"
split_name = "train"
datasetdict, converterdict = init_from_disk(local_folder)
dataset = datasetdict[split_name]
converter = converterdict[split_name]
for i in range(len(dataset)):
    plaid_sample = converter.to_plaid(dataset, i)
```
The data can also be streamed directly, without downloading it first:
```python
from plaid.storage import init_streaming_from_hub
repo_id = "channel/dataset"
datasetdict, converterdict = init_streaming_from_hub(repo_id)
split_name = "train"
dataset = datasetdict[split_name]
converter = converterdict[split_name]
for sample_raw in dataset:
    plaid_sample = converter.sample_to_plaid(sample_raw)
```
The features of a plaid sample can then be retrieved using the problem definitions:
```python
from plaid.storage import load_problem_definitions_from_disk
local_folder = "downloaded_dataset"
pb_defs = load_problem_definitions_from_disk(local_folder)
# or
from plaid.storage import load_problem_definitions_from_hub
repo_id = "channel/dataset"
pb_defs = load_problem_definitions_from_hub(repo_id)
pb_def = pb_defs[0]
plaid_sample = ... # use a method from above to instantiate a plaid sample
for t in plaid_sample.get_all_time_values():
    for path in pb_def.get_in_features_identifiers():
        plaid_sample.get_feature_by_path(path=path, time=t)
    for path in pb_def.get_out_features_identifiers():
        plaid_sample.get_feature_by_path(path=path, time=t)
```
For those familiar with HF's `datasets` library, raw data can be retrieved without using the `plaid` library:
```python
from datasets import load_dataset
repo_id = "channel/dataset"
datasetdict = load_dataset(repo_id)
for split_name, dataset in datasetdict.items():
    for raw_sample in dataset:
        for feat_name in dataset.column_names:
            feature = raw_sample[feat_name]
```
Note that the raw data contains the variable features only, with a specific encoding for time-dependent features.
### Dataset Sources
- **Papers:**
  - [PDEBench (arXiv:2210.07182)](https://arxiv.org/pdf/2210.07182)