---
language:
- en
license: apache-2.0
size_categories:
- 1K<n<10K
pretty_name: CT-RATE_Synthetic
tags:
- medical
task_categories:
- text-to-3d
---

# Dataset Card for Synthetic Text-to-CT Scans - VLM3D Challenge

## Dataset Details

### Dataset Description

This dataset contains **1,000 synthetic 3D chest CT scans** generated using the model introduced in  
[*Text-to-CT Generation via 3D Latent Diffusion Model with Contrastive Vision-Language Pretraining*](https://arxiv.org/abs/2506.00633) (Molino et al., 2025).  

The model was trained on the **CT-RATE dataset**, the largest publicly available collection of paired CT volumes and radiology reports.  
It leverages a **3D latent diffusion framework** combined with **contrastive vision-language pretraining (3D-CLIP)** to synthesize anatomically coherent and semantically faithful CT scans directly from clinical text prompts.

These 1,000 scans were generated for the **VLM3D Challenge - Task 4**, serving as a benchmark resource for multimodal evaluation and synthetic data research in medical imaging.

- **Curated by:** ArCo Lab – Università Campus Bio-Medico di Roma & Umeå University  
- **Language(s):** Conditioning reports are in English  
- **License:** Apache 2.0

### Dataset Sources

- **Repository:** [GitHub Repository](https://github.com/cosbidev/Text2CT)  
- **Paper:** [arXiv:2506.00633](https://arxiv.org/abs/2506.00633)  
- **Challenge:** [VLM3D Challenge](https://vlm3dchallenge.com)  

## Uses

### Direct Use
- Benchmarking text-to-CT generative models.
- Data augmentation for classification, detection, or segmentation tasks.
- Research in multimodal vision-language learning for 3D medical imaging.
- Educational purposes and simulation in medical training.

### Out-of-Scope Use
- Direct diagnostic or clinical use.  
- Deployment in healthcare without proper validation and regulatory approval.  
- Any attempt to re-identify patients (note: scans are fully synthetic).

## Dataset Structure

- **Format:** Volumetric CT scans stored in NIfTI (`.nii.gz`) format.  
- **Resolution:** Resampled to 0.75 × 0.75 × 3.0 mm voxel spacing, cropped/padded to 512 × 512 × 128.  
- **Intensity:** Normalized in Hounsfield Units (clipped to [−1000, +1000]).  
- **Content:** Synthetic chest CT scans across 18 pathological conditions (e.g., nodules, opacities, effusion, emphysema).  
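The preprocessing described above can be sketched as follows. This is a minimal, hypothetical example: it simulates a volume with NumPy rather than loading a real `.nii.gz` file (which would typically be done with `nibabel`), and the rescaling to [0, 1] is an assumed convention, not one specified by the dataset card.

```python
import numpy as np

# A real scan would be loaded from .nii.gz, e.g.:
#   import nibabel as nib
#   volume = nib.load("scan.nii.gz").get_fdata()
# Here we simulate a small raw volume instead; dataset volumes are 512 x 512 x 128.
rng = np.random.default_rng(0)
volume = rng.uniform(-2000, 3000, size=(64, 64, 32))  # raw HU-like values

# Clip to the stated Hounsfield window [-1000, +1000] ...
clipped = np.clip(volume, -1000.0, 1000.0)

# ... and rescale to [0, 1] for model input (assumed convention).
normalized = (clipped + 1000.0) / 2000.0

print(normalized.shape, float(normalized.min()), float(normalized.max()))
```

For the actual files, replace the simulated array with `nib.load(path).get_fdata()` and keep the same clipping step.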

## Dataset Creation

### Curation Rationale
Created to provide a reproducible benchmark for **text-to-CT generation** and to supply **synthetic volumetric data** for research in data augmentation, privacy preservation, and multimodal foundation models.

### Source Data
- Trained on **CT-RATE** (Hamamci et al., 2024), a large-scale dataset of chest CTs paired with radiology reports.

### Annotations
No manual annotations included; diagnostic semantics are embedded via the conditioning text prompts used during generation.

### Personal and Sensitive Information
- The dataset contains **no real patient data**.  
- All scans are **synthetic** and generated by a model trained on anonymized public datasets.

## Bias, Risks, and Limitations

- Synthetic data may not fully capture rare pathologies or distributional nuances of real-world scans.  
- While useful for augmentation and benchmarking, these scans are **not clinically validated**.  
- There is a potential risk if synthetic data are used without acknowledging their limitations in medical research.

### Recommendations
Users should:  
- Combine synthetic with real-world data for downstream tasks.  
- Avoid over-relying on synthetic volumes for clinical translation.  
- Report the provenance of synthetic data when used in publications.

## Citation

If you use this dataset, please cite the following work:

**BibTeX:**
```bibtex
@article{molino2025textct,
  title={Text-to-CT Generation via 3D Latent Diffusion Model with Contrastive Vision-Language Pretraining},
  author={Molino, Daniele and Caruso, Camillo Maria and Ruffini, Filippo and Soda, Paolo and Guarrasi, Valerio},
  journal={arXiv preprint arXiv:2506.00633},
  year={2025}
}
```