Dataset Description:
A point-cloud surrogate-modeling dataset for the final-time 2-D linear Radiation Transport Equation (RTE), covering two canonical benchmarks that vary along complementary axes:
- Lattice (707 samples, 494 train / 106 val / 107 test): fixed 7 × 7 block geometry; per-sample variation in the white-background scattering coefficient (σ_s ∈ [0.1, 10.1]) and the blue-absorber cross-section (σ_a ∈ [5, 105]). QoI: final-time absorption integral over the absorbing blocks.
- Hohlraum (846 samples, 592 train / 126 val / 128 test): fixed per-region cross-sections; per-sample variation in 8 geometry parameters (`ulr`, `llr`, `urr`, `lrr`, `hlr`, `hrr`, `cx`, `cy`) controlling the inner edges and y-extents of two wall-anchored red strips and the (x, y) offset of a center insert. QoI: final-time absorption integral evaluated over three material regions.
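For scripting against the two benchmarks, the varied quantities above can be summarized in a small dictionary. The sketch below is illustrative, not a config shipped with the dataset: the Lattice ranges come from the description above, while the Hohlraum entry lists only the parameter names, since their ranges are not stated in this card.

```python
# Illustrative summary of the per-benchmark design spaces described above.
# Not an official artifact of the dataset; names and structure are ours.
BENCHMARK_PARAMS = {
    "lattice": {
        # White-background scattering coefficient and blue-absorber
        # cross-section, varied per sample (ranges from the card text).
        "sigma_s": (0.1, 10.1),
        "sigma_a": (5.0, 105.0),
    },
    "hohlraum": {
        # 8 geometry parameters; their ranges are not listed in this card.
        "geometry_params": ["ulr", "llr", "urr", "lrr",
                            "hlr", "hrr", "cx", "cy"],
    },
}
```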
Simulations were produced with KiT-RT using a discrete-ordinate (S_N) angular discretization, a finite-volume scheme on an unstructured mesh, and an explicit SSP Runge-Kutta time integrator, then curated into the PhysicsNeMo Mesh memmap format.
How to download
The dataset is not a `datasets`-loadable Parquet dataset; it ships PhysicsNeMo tensordict memmap stores packed as per-sample `.pmsh.tar.gz` archives. Download the full repo and extract the archives in place:

```python
import tarfile
from pathlib import Path

from huggingface_hub import snapshot_download

# Download the full dataset repository snapshot.
local_dir = Path(snapshot_download(
    repo_id="nvidia/Linear-Radiation-Transport",
    repo_type="dataset",
))

# Extract each per-sample archive next to where it was downloaded.
for arc in (local_dir / "mesh").rglob("*.pmsh.tar.gz"):
    with tarfile.open(arc) as tf:
        tf.extractall(arc.parent)

```

After extraction each `<name>.pmsh/` directory is loadable with PhysicsNeMo's Mesh API.
Dataset Owner(s):
NVIDIA Corporation
Dataset Creation Date:
May 2026
License/Terms of Use:
Intended Usage:
Training, evaluation, and benchmarking of point-cloud / mesh-based neural surrogates for final-time linear radiation transport. The two benchmarks are complementary stress tests: Lattice probes the surrogate's ability to generalise across material parameters at fixed geometry, while Hohlraum probes generalisation across geometry at fixed material parameters. Suitable for graph neural networks, neural operators, point-cloud regressors, and mixed-fidelity / uncertainty-quantification studies that build on KiT-RT reference solutions.
Dataset Characterization
** Data Collection Method
- [Synthetic] - High-resolution KiT-RT (S_N + finite-volume) simulations on unstructured triangular meshes, post-processed into PhysicsNeMo Mesh memmap stores.
** Labeling Method
- [Synthetic] - Per-cell scalar flux and derived per-region absorption
QoIs are computed directly by the numerical solver; no human labeling
is involved.
Dataset Format
- Modality: 2-D point cloud / unstructured-mesh, per-cell tensors plus per-simulation scalar metadata.
- Per-sample container: PhysicsNeMo `Mesh` (a tensordict memmap store), shipped on disk as a `<name>.pmsh/` directory plus a `<name>.attrs.json` sidecar; on the Hub each simulation is bundled as a single `<name>.pmsh.tar.gz` archive for transport.
- Per-cell fields: `cell_areas` (float32), `sigma_a`, `sigma_s`, `sigma_t` (float32), `Q` (float32), `material_properties` (int64), `scalar_flux` (float32, shape `(N, 2)` for initial + final snapshots).
- Cell-center coordinates: `Mesh.points` (float32, `(N, 2)`; the simulations are 2-D, so points are stored without a z column).
- Per-simulation fields (`Mesh.global_data`): `sim_times` / `timesteps` / `wall_times`, `flux_statistics`, `global_metrics`, plus flattened `attr__*` parameter draws.
- Splits: JSON files at `splits/{lattice,hohlraum}_splits.json` storing per-split lists of sample basenames.
- Auxiliary: PNG schematics under `docs/images/`, conversion manifests at `mesh/{lattice,hohlraum}/conversion_manifest.json`.
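To turn a split file into concrete sample paths, a small loader like the hypothetical `load_split` below suffices, assuming each split JSON maps split names (`train` / `val` / `test`) to lists of sample basenames as described above.

```python
import json
from pathlib import Path


def load_split(split_file: Path, mesh_root: Path, split: str) -> list[Path]:
    """Resolve one split's sample basenames to extracted .pmsh directories.

    Hypothetical helper; assumes the split JSON maps split names to
    lists of sample basenames.
    """
    basenames = json.loads(split_file.read_text())[split]
    return [mesh_root / f"{name}.pmsh" for name in basenames]
```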
Dataset Quantification
- Record count: 1,553 simulations covered by the train/val/test splits (707 Lattice + 846 Hohlraum).
- Cells per sample: Lattice ≈ 79.9k (constant); Hohlraum ≈ 70k–81k.
- Per-cell features per sample: 7 fields (cell_areas, sigma_a, sigma_s, sigma_t, Q, material_properties, scalar_flux) plus 2-D cell-center coordinates and per-simulation metadata.
- Total storage: ~6.0 GB for the extracted `.pmsh/` directories; ~2.4 GB as the per-sample `.pmsh.tar.gz` archives shipped to the Hugging Face Hub (gzip-compressed).
Reference(s):
- Schotthöfer, S., & Hauck, C. (2025). "Reference solutions for linear radiation transport: the Hohlraum and Lattice benchmarks." arXiv preprint arXiv:2505.17284.
- Kusch, J., Schotthöfer, S., Stammer, P., Wolters, J., & Xiao, T. (2023). "KiT-RT: An extendable framework for radiative transfer and therapy." ACM Transactions on Mathematical Software, 49(4), 1–24.
- KiT-RT solver: https://github.com/KiT-RT.
Ethical Considerations:
NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal developer teams to ensure this dataset meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
Please report quality, risk, security vulnerabilities or NVIDIA AI Concerns here.