# CT Diffusion Dataset - Precomputed Latents (128x128)
Precomputed VQ-AE latent representations for CT scan denoising using diffusion models.
## Dataset Description
This dataset contains precomputed latent representations of paired low-dose (LD) and high-dose (HD) CT scans, encoded using a 3D VQ-AE (Vector Quantized AutoEncoder). The latents are ready for training diffusion models for CT denoising without the computational overhead of encoding during training.
## Dataset Statistics

- Train Split: 2,076 samples (includes data augmentation: 346 original pairs × 6 versions each, the original plus 5 augmented copies)
- Test Split: 87 samples
- Total: 2,163 samples
## Data Format

Each sample contains four components:

| Feature | Shape | Description |
|---|---|---|
| `ld_ct` | (40, 128, 128) | Low-dose CT scan (normalized to [-1, 1]) |
| `hd_ct` | (200, 128, 128) | High-dose CT scan (normalized to [-1, 1]) |
| `ld_latent` | (8, 5, 16, 16) | VQ-AE latent encoding of LD-CT |
| `hd_latent` | (8, 25, 16, 16) | VQ-AE latent encoding of HD-CT |
Note: The LD-CT uses 40 slices (downsampled 5×), while the HD-CT uses 200 slices (full resolution). This design reduces storage while maintaining high-quality target data.
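As a quick sanity check, the shapes above can be verified on a raw sample (a minimal sketch; loading is covered in the Usage section below):

```python
import numpy as np
from datasets import load_dataset

dataset = load_dataset("t2ance/ct-diffusion-128-latents-v1")
sample = dataset["train"][0]

# Features are stored as nested lists; np.array recovers the documented shapes
assert np.array(sample["ld_ct"]).shape == (40, 128, 128)
assert np.array(sample["hd_ct"]).shape == (200, 128, 128)
assert np.array(sample["ld_latent"]).shape == (8, 5, 16, 16)
assert np.array(sample["hd_latent"]).shape == (8, 25, 16, 16)
```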
## Preprocessing Details
- LD Target Shape: (40, 128, 128)
- HD Target Shape: (200, 128, 128)
- HU Clipping Range: (-1000, 1000)
- Normalization: HU values clipped and normalized to [-1, 1]
- VQ-AE Compression: 8× per spatial axis (in-plane 128×128 → 16×16; depth 40 → 5 for LD, 200 → 25 for HD)
- Latent Channels: 8 channels per latent
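For reference, the clip-and-normalize step amounts to a linear rescale of the clipped HU range. A minimal sketch in plain NumPy (the helper name `normalize_hu` is illustrative, not the pipeline's actual function):

```python
import numpy as np

def normalize_hu(volume: np.ndarray, hu_min: float = -1000.0, hu_max: float = 1000.0) -> np.ndarray:
    """Clip HU values to [hu_min, hu_max], then rescale linearly to [-1, 1]."""
    clipped = np.clip(volume, hu_min, hu_max)
    return 2.0 * (clipped - hu_min) / (hu_max - hu_min) - 1.0
```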
## Usage

### Loading the Dataset

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("t2ance/ct-diffusion-128-latents-v1")

# Access splits
train_data = dataset['train']
test_data = dataset['test']

# Get a sample
sample = train_data[0]
ld_ct = sample['ld_ct']          # Low-dose CT
hd_ct = sample['hd_ct']          # High-dose CT (target)
ld_latent = sample['ld_latent']  # LD latent encoding
hd_latent = sample['hd_latent']  # HD latent encoding (target)
```
### Converting to PyTorch Tensors

```python
import numpy as np
import torch

# Convert nested lists to tensors via NumPy
ld_latent = torch.tensor(np.array(sample['ld_latent']))  # Shape: [8, 5, 16, 16]
hd_latent = torch.tensor(np.array(sample['hd_latent']))  # Shape: [8, 25, 16, 16]
ld_ct = torch.tensor(np.array(sample['ld_ct']))          # Shape: [40, 128, 128]
hd_ct = torch.tensor(np.array(sample['hd_ct']))          # Shape: [200, 128, 128]
```
### Using with DataLoader

```python
import numpy as np
import torch
from torch.utils.data import DataLoader

def collate_fn(batch):
    """Convert a batch of samples to stacked tensors."""
    return {
        'ld_latent': torch.stack([torch.tensor(np.array(item['ld_latent'])) for item in batch]),
        'hd_latent': torch.stack([torch.tensor(np.array(item['hd_latent'])) for item in batch]),
        'ld_ct': torch.stack([torch.tensor(np.array(item['ld_ct'])) for item in batch]),
        'hd_ct': torch.stack([torch.tensor(np.array(item['hd_ct'])) for item in batch]),
    }

dataloader = DataLoader(
    dataset['train'],
    batch_size=4,
    shuffle=True,
    collate_fn=collate_fn,
    num_workers=4,
)

for batch in dataloader:
    ld_latent = batch['ld_latent']  # [B, 8, 5, 16, 16]
    hd_latent = batch['hd_latent']  # [B, 8, 25, 16, 16]
    # Train your diffusion model...
```
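Alternatively, the `datasets` library can return PyTorch tensors directly, which avoids the custom collate function. A minimal sketch using the built-in `with_format` (tensor dtypes may differ slightly from the manual conversion above):

```python
from torch.utils.data import DataLoader

# Ask the datasets library to emit torch tensors instead of nested lists
train_torch = dataset['train'].with_format('torch')

dataloader = DataLoader(train_torch, batch_size=4, shuffle=True, num_workers=4)

for batch in dataloader:
    ld_latent = batch['ld_latent']  # [B, 8, 5, 16, 16]
    hd_latent = batch['hd_latent']  # [B, 8, 25, 16, 16]
```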
## Dataset Generation

This dataset was generated using the preprocessing pipeline with the following configuration:

### Configuration File
```yaml
# Configuration file for precomputing VQ-AE latents
# All settings are configured here - no command-line arguments needed

# Data paths
data:
  data_dir: "/data1/peijia/ct/processed/ct_pairs"  # Directory containing low_dose/ and high_dose/ subdirectories
  latent_cache_dir: "./latents_cache/latents_cache_128_v2"  # Output directory for cached latents
  vae_checkpoint: "checkpoints/3DMedDiffusion_checkpoints/PatchVolume_8x_s2.ckpt"  # VQ-AE checkpoint path

# Preprocessing settings
preprocessing:
  train_split: 0.8  # Fraction of data to use for training (rest is validation)

  # Target shapes for CT scans (D H W) for resizing
  # LD-CT is typically downsampled (fewer slices) to reduce storage and memory
  # HD-CT is kept at full resolution for high-quality reconstruction
  ld_target_shape: [40, 128, 128]   # Low-dose CT target shape (40 slices - 5x downsampled)
  hd_target_shape: [200, 128, 128]  # High-dose CT target shape (200 slices - full resolution)

  # Legacy single target_shape (used if ld/hd shapes not specified)
  # target_shape: [200, 128, 128]

  clip_range: [-1000, 1000]  # HU clipping range before normalization [MIN_HU, MAX_HU] (null = use default from constants)

# Processing settings
processing:
  device: "cuda"  # Device to use (cuda/cpu)
  batch_size: 1   # Batch size for encoding (multiple volumes processed together for efficiency)

# Augmentation settings (applies to training set only)
augmentation:
  enabled: true
  base_prob: 0.5
  num_augmentations: 5  # Number of augmented versions to generate per original sample (0 = no augmentation)
  spatial:
    flip_d: {enabled: true, prob: 0.5}
    flip_h: {enabled: true, prob: 0.5}
    flip_w: {enabled: true, prob: 0.5}
    rotate_90: {enabled: true, prob: 0.5}
    affine: {enabled: true, prob: 0.3, rotate_range: [-0.26, 0.26], scale_range: [-0.1, 0.1]}
    elastic: {enabled: true, prob: 0.2, sigma_range: [5, 8], magnitude_range: [50, 150]}
  intensity:
    gaussian_noise: {enabled: false, prob: 0.25, std: 0.01}
    scale_intensity: {enabled: false, prob: 0.25, factors: 0.1}
    shift_intensity: {enabled: false, prob: 0.25, offsets: 0.05}
  dropout:
    coarse_dropout: {enabled: false, prob: 0.2, holes: 3, spatial_size: [8, 16, 16], max_holes: 5, max_spatial_size: [16, 32, 32]}

# HuggingFace Hub upload (optional)
upload:
  enabled: true  # Set to true to upload after precomputation
  repo_id: "t2ance/ct-diffusion-128-latents-v2"  # HuggingFace Hub repository ID (e.g., 'username/dataset-name')
  private: false  # Make the repository private
```
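The configuration can be read back with standard PyYAML; a minimal sketch (the file name is a placeholder for wherever the config actually lives):

```python
import yaml

# Hypothetical path - adjust to the actual config location
with open("precompute_latents_config.yaml") as f:
    config = yaml.safe_load(f)

ld_shape = tuple(config["preprocessing"]["ld_target_shape"])  # (40, 128, 128)
clip_lo, clip_hi = config["preprocessing"]["clip_range"]      # -1000, 1000
```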
### Generation Steps

1. Data Loading: Raw CT pairs loaded from NIfTI format
2. Preprocessing:
   - LD-CT resized to (40, 128, 128)
   - HD-CT resized to (200, 128, 128)
   - HU values clipped to (-1000, 1000)
   - Normalized to [-1, 1]
3. VQ-AE Encoding:
   - CT scans encoded to latent space
   - 8× compression per spatial axis
   - 8 latent channels
4. Augmentation (training only):
   - Spatial transforms (flips, rotations, affine, elastic)
   - Intensity transforms (noise, scaling, shifting; disabled in the configuration above)
   - Dropout transforms (coarse dropout; disabled in the configuration above)
5. Format Conversion: Saved to HuggingFace Arrow format
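Put together, the per-volume preprocessing roughly corresponds to the sketch below. The VQ-AE interface is an assumption (the actual encoder is the 3D MedDiffusion PatchVolume checkpoint), and `ld_path`/`hd_path` are placeholder paths:

```python
import nibabel as nib
import numpy as np
import torch
import torch.nn.functional as F

def preprocess(path: str, target_shape: tuple) -> torch.Tensor:
    """Load a NIfTI volume, resize to target_shape, clip HU, normalize to [-1, 1]."""
    vol = nib.load(path).get_fdata().astype(np.float32)  # assumes (D, H, W) axis order
    vol = torch.from_numpy(vol)[None, None]              # -> [1, 1, D, H, W]
    vol = F.interpolate(vol, size=target_shape, mode="trilinear", align_corners=False)
    return vol.clamp(-1000, 1000) / 1000.0               # symmetric HU range -> [-1, 1]

# vqae = ...  # load the PatchVolume VQ-AE checkpoint (interface assumed)
# with torch.no_grad():
#     ld_latent = vqae.encode(preprocess(ld_path, (40, 128, 128)))   # -> [1, 8, 5, 16, 16]
#     hd_latent = vqae.encode(preprocess(hd_path, (200, 128, 128)))  # -> [1, 8, 25, 16, 16]
```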
## Citation
If you use this dataset, please cite:
```bibtex
@misc{ct-diffusion-128-latents,
  title={CT Diffusion Dataset - Precomputed Latents},
  author={Your Name},
  year={2025},
  howpublished={\url{https://huggingface.co/datasets/t2ance/ct-diffusion-128-latents-v1}},
}
```
## License
MIT License
## Related Resources
- Base Dataset: t2ance/ct-diffusion-128
- VQ-AE Model: 3D MedDiffusion PatchVolume AutoEncoder
- Training Code: [Link to your repository]
## Contact
For questions or issues, please open an issue on the repository or contact [your email].
Generated: 2025-11-03 02:55:22
Format Version: 1.0
Compatible with: t2ance/ct-diffusion-128