---
license: cc-by-nc-4.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
  - split: val
    path: data/val-*
- config_name: original
  data_files:
  - split: train
    path: original/train-*
  - split: test
    path: original/test-*
  - split: val
    path: original/val-*
dataset_info:
- config_name: default
  features:
  - name: UID
    dtype: string
  - name: Fold
    dtype: int64
  - name: Split
    dtype: string
  - name: PatientID
    dtype: string
  - name: PhysicianID
    dtype: string
  - name: StudyDate
    dtype: string
  - name: Age
    dtype: int64
  - name: Sex
    dtype: string
  - name: HeartSize
    dtype: int64
  - name: PulmonaryCongestion
    dtype: int64
  - name: PleuralEffusion_Right
    dtype: int64
  - name: PleuralEffusion_Left
    dtype: int64
  - name: PulmonaryOpacities_Right
    dtype: int64
  - name: PulmonaryOpacities_Left
    dtype: int64
  - name: Atelectasis_Right
    dtype: int64
  - name: Atelectasis_Left
    dtype: int64
  - name: Image
    dtype: image
  splits:
  - name: train
    num_bytes: 36725048989.54
    num_examples: 137595
  - name: test
    num_bytes: 11088307165.008
    num_examples: 42928
  - name: val
    num_bytes: 9210720811.1
    num_examples: 34862
  download_size: 58345259132
  dataset_size: 57024076965.648
- config_name: original
  features:
  - name: UID
    dtype: string
  - name: Fold
    dtype: int64
  - name: Split
    dtype: string
  - name: PatientID
    dtype: string
  - name: PhysicianID
    dtype: string
  - name: StudyDate
    dtype: string
  - name: Age
    dtype: int64
  - name: Sex
    dtype: string
  - name: HeartSize
    dtype: int64
  - name: PulmonaryCongestion
    dtype: int64
  - name: PleuralEffusion_Right
    dtype: int64
  - name: PleuralEffusion_Left
    dtype: int64
  - name: PulmonaryOpacities_Right
    dtype: int64
  - name: PulmonaryOpacities_Left
    dtype: int64
  - name: Atelectasis_Right
    dtype: int64
  - name: Atelectasis_Left
    dtype: int64
  - name: Image
    dtype: image
  splits:
  - name: train
    num_bytes: 793586998398.28
    num_examples: 137595
  - name: test
    num_bytes: 235100370576.352
    num_examples: 42928
  - name: val
    num_bytes: 197771374689.288
    num_examples: 34862
  download_size: 1266934126006
  dataset_size: 1226458743663.9202
extra_gated_prompt: >-
  ### 🛡️ Data Usage Agreement

  By accessing and using the dataset, you agree to the following terms and
  conditions:

  1. **Purpose of Use**
  This dataset is provided **solely for research and educational purposes**. Any commercial use is strictly prohibited without explicit written permission from the dataset creators.

  2. **Ethical Use**
  You agree to use this dataset in an ethical manner, respecting human dignity, privacy, and all applicable laws and regulations. The data **must not be used to attempt to identify individuals** or for any discriminatory or harmful purposes.

  3. **Data Privacy**
  This dataset may contain sensitive medical information. Although all personally identifiable information (PII) has been removed or anonymized to the best extent possible, you acknowledge your responsibility in ensuring that the data remains de-identified and is not re-identified.

  4. **Compliance with Regulations**
  You agree to comply with all applicable data protection regulations such as **HIPAA**, **GDPR**, or local equivalents.

  5. **No Redistribution**
  You shall not share, redistribute, or publish the dataset in full or in part without explicit consent from the dataset authors.

  6. **Attribution**
  Any published work or presentation using this dataset must **cite the original source** as specified in the dataset documentation.

  7. **Indemnity**
  You agree to hold harmless and indemnify the dataset providers from and against any claims arising from your use of the dataset.

  8. **Revocation of Access**
  The dataset creators reserve the right to revoke access to the dataset at any time, for any reason, including violations of this agreement.
task_categories:
- image-classification
language:
- en
tags:
- medical
- x-ray
- chest
- thorax
- radiograph
size_categories:
- 100K<n<1M
---
# TAIX-Ray Dataset

TAIX-Ray is a comprehensive dataset of about 200k bedside chest radiographs from about 50k intensive care patients at the University Hospital in Aachen, Germany, collected between 2010 and 2024. Trained radiologists provided structured reports at the time of acquisition, assessing key findings such as cardiomegaly, pulmonary congestion, pleural effusion, pulmonary opacities, and atelectasis on an ordinal scale. Please see our paper for a detailed description: Not yet available.
## How to Use

### Prerequisites

Ensure you have the following dependencies installed:

```shell
pip install datasets matplotlib huggingface_hub pandas tqdm
```
### Configurations

This dataset is available in two configurations:
| Name | Size | Image Size |
|---|---|---|
| default | 62GB | 512px |
| original | 1.2TB | variable |
### Option A: Use within the Hugging Face Framework

To use the dataset directly with the Hugging Face `datasets` library, load and visualize it as follows:
```python
from datasets import load_dataset
from matplotlib import pyplot as plt

# Load the TAIX-Ray dataset
dataset = load_dataset("TLAIM/TAIX-Ray", name="default")

# Access the training split (Fold 0)
ds_train = dataset['train']

# Retrieve a single sample from the training set
item = ds_train[0]

# Extract and display the image
image = item['Image']
plt.imshow(image, cmap='gray')
plt.savefig('image.png')  # Save the image to a file
plt.show()                # Display the image

# Print metadata (excluding the image itself)
for key in item.keys():
    if key != 'Image':
        print(f"{key}: {item[key]}")
```
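Since every finding is stored as its own ordinal column, training a multi-label model usually starts by collecting those columns into a fixed-order target vector. A minimal sketch, using the feature names from the schema above; the `sample` dict and the grade values in it are hypothetical stand-ins for a record returned by `ds_train[i]`:

```python
# Ordinal finding columns, in the order they appear in the dataset schema
FINDING_KEYS = [
    "HeartSize", "PulmonaryCongestion",
    "PleuralEffusion_Right", "PleuralEffusion_Left",
    "PulmonaryOpacities_Right", "PulmonaryOpacities_Left",
    "Atelectasis_Right", "Atelectasis_Left",
]

def ordinal_targets(item: dict) -> list:
    """Collect one sample's ordinal finding grades into a fixed-order vector."""
    return [item[k] for k in FINDING_KEYS]

# Hypothetical sample record (real values come from ds_train[i])
sample = {
    "UID": "549a816a", "HeartSize": 2, "PulmonaryCongestion": 1,
    "PleuralEffusion_Right": 0, "PleuralEffusion_Left": 1,
    "PulmonaryOpacities_Right": 0, "PulmonaryOpacities_Left": 0,
    "Atelectasis_Right": 1, "Atelectasis_Left": 0,
}
print(ordinal_targets(sample))  # -> [2, 1, 0, 1, 0, 0, 1, 0]
```

Keeping the column order in one place (`FINDING_KEYS`) ensures the target vector is consistent between training and evaluation code.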
### Option B: Downloading the Dataset

To download the dataset to a specific folder, use the following script. It creates this folder structure:
```
.
├── data/
│   ├── 549a816ae020fb7da68a31d7d62d73c418a069c77294fc084dd9f7bd717becb9.png
│   ├── d8546c6108aad271211da996eb7e9eeabaf44d39cf0226a4301c3cbe12d84151.png
│   └── ...
└── metadata/
    ├── annotation.csv
    └── split.csv
```
```python
from pathlib import Path

import pandas as pd
from datasets import load_dataset
from tqdm import tqdm

# Define output paths
output_root = Path("./TAIX-Ray")

# Create folders
data_dir = output_root / "data"
metadata_dir = output_root / "metadata"
data_dir.mkdir(parents=True, exist_ok=True)
metadata_dir.mkdir(parents=True, exist_ok=True)

# Load dataset in streaming mode
dataset = load_dataset("TLAIM/TAIX-Ray", name="default", streaming=True)

# Process dataset
metadata = []
for split, split_dataset in dataset.items():
    print(f"-------- Start Download: {split} --------")
    for item in tqdm(split_dataset, desc="Downloading"):  # Stream samples one by one
        uid = item["UID"]
        img = item.pop("Image")  # PIL Image object

        # Save image
        img.save(data_dir / f"{uid}.png", format="PNG")

        # Store metadata
        metadata.append(item)

# Convert metadata to DataFrame
metadata_df = pd.DataFrame(metadata)

# Save the train/val/test assignment to a CSV file
df_split = metadata_df[["UID", "Split"]]
df_split.to_csv(metadata_dir / "split.csv", index=False)

# Save the annotations to a CSV file
metadata_df.drop(columns=["Split", "Fold"]).to_csv(metadata_dir / "annotation.csv", index=False)

print("Dataset streamed and saved successfully!")
```
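After the export, the two CSV files can be recombined to build per-split annotation tables. A minimal sketch using pandas; the two DataFrames below are tiny in-memory stand-ins for `metadata/split.csv` and `metadata/annotation.csv` (with a real export you would load them via `pd.read_csv`):

```python
import pandas as pd

# Stand-ins for metadata/split.csv and metadata/annotation.csv
df_split = pd.DataFrame({
    "UID": ["a1", "b2", "c3"],
    "Split": ["train", "val", "test"],
})
df_anno = pd.DataFrame({
    "UID": ["a1", "b2", "c3"],
    "HeartSize": [2, 0, 1],
    "PulmonaryCongestion": [1, 0, 3],
})

# Merge annotations with split assignments on UID, then select one split
df = df_anno.merge(df_split, on="UID")
train_df = df[df["Split"] == "train"]
print(train_df["UID"].tolist())  # -> ['a1']
```

The corresponding image for each row lives at `data/<UID>.png`, so the merged table is all that is needed to drive a custom data loader.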