# MNMS2 Cardiac MRI Dataset (Mirror)
This folder contains a local mirror of the M&Ms-2 Challenge Dataset for right ventricular segmentation in cardiac MRI.
- Original source (Kaggle): https://www.kaggle.com/datasets/tailength/m-and-m2-dataset
- Challenge / project page: https://www.ub.edu/mnms/
- Reference paper: C. Martín-Isla et al., "Deep Learning Segmentation of the Right Ventricle in Cardiac MRI: The M&Ms Challenge," IEEE Journal of Biomedical and Health Informatics, 2023. https://doi.org/10.1109/JBHI.2023.3267857
## Contents and structure

This repository keeps the original structure from the Kaggle dataset:

- `dataset/`: per-subject folders (`001/`, `002/`, ...) with images and labels.
- `dataset_information.csv`: subject-level metadata (pathologies, vendors, scanners, field strength, etc.).
- `readme.txt`: the original dataset README from the authors (kept verbatim).
No image or label content has been modified. Files are only re-hosted here to integrate with this project and Hugging Face Datasets. Original naming and directory layout are preserved.
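As a quick sanity check after downloading, the metadata CSV and one subject's files can be inspected with a short script. This is a minimal sketch under two assumptions not stated above: that each subject folder contains `.nii.gz` volumes (per the original M&Ms-2 naming), and that `nibabel` is installed if you want to decode the volumes themselves.

```python
import csv
import glob
import os


def list_subject_volumes(root: str, subject: str) -> list[str]:
    """Return sorted NIfTI paths for one subject folder, e.g. '001'."""
    return sorted(glob.glob(os.path.join(root, "dataset", subject, "*.nii.gz")))


def load_metadata(root: str) -> list[dict]:
    """Return the rows of dataset_information.csv as one dict per subject."""
    with open(os.path.join(root, "dataset_information.csv"), newline="") as f:
        return list(csv.DictReader(f))


if __name__ == "__main__" and os.path.isdir("mnms2"):
    rows = load_metadata("mnms2")
    print(f"{len(rows)} subjects listed in metadata")
    for path in list_subject_volumes("mnms2", "001"):
        print(path)
    # Decoding the volumes themselves requires nibabel:
    #   import nibabel as nib
    #   img = nib.load(path)
    #   data = img.get_fdata()
```

Note that plain `datasets`/pandas tooling can read the CSV, but the image and label files are NIfTI, so `nibabel` is needed for any pixel-level work.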
## How this copy was created

This mirror was created by downloading the public Kaggle dataset and copying the extracted files into `mnms2/`:

1. Download from Kaggle using the dataset slug `tailength/m-and-m2-dataset`.
2. Extract the archive so that `dataset/`, `dataset_information.csv`, and `readme.txt` appear under `mnms2/`.
3. Track the folder as a dataset repository (e.g., on Hugging Face) without altering file contents.
## Reproducible download script

You can recreate this folder structure from the public Kaggle dataset using the Kaggle CLI:

```shell
# 1. Ensure the Kaggle CLI is installed and configured
#    - Install: https://www.kaggle.com/docs/api
#    - Configure credentials (~/.kaggle/kaggle.json or KAGGLE_USERNAME / KAGGLE_KEY)

# 2. Download and unpack the M&Ms-2 dataset into ./mnms2
kaggle datasets download -d tailength/m-and-m2-dataset -p mnms2 --unzip

# This creates the following structure under mnms2/:
#   dataset/
#   dataset_information.csv
#   readme.txt
```
## Citation

Please cite the original M&Ms/M&Ms-2 challenge and the reference paper above when using this dataset in your work.
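A BibTeX entry assembled from the reference above may be convenient; the author list is abbreviated exactly as in the reference (the full list, and the cite key, should be verified against the DOI before use):

```bibtex
@article{martinisla2023mnms2,
  author  = {Mart{\'i}n-Isla, C. and others},
  title   = {Deep Learning Segmentation of the Right Ventricle in Cardiac MRI: The M\&Ms Challenge},
  journal = {IEEE Journal of Biomedical and Health Informatics},
  year    = {2023},
  doi     = {10.1109/JBHI.2023.3267857}
}
```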