---
language:
- en
task_categories:
- question-answering
- text-generation
- image-to-text
- text-classification
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
tags:
- medical
- radiology
- multimodal
- healthcare
- medical-imaging
- clinical-cases
pretty_name: Radiology Multimodal Dataset
license: cc-by-nc-4.0
---

# Radiology Multimodal Dataset

## Dataset Description

This is a comprehensive multimodal radiology dataset containing 43,014 documents sourced from medical radiology resources. The dataset includes articles, clinical cases, and educational tutorials with associated medical images, designed for training and evaluating AI models in the medical imaging domain.

### Dataset Summary

- **Total Documents**: 43,014
- **Articles**: 17,218 educational articles about various radiological conditions
- **Cases**: 25,771 clinical cases with patient presentations and findings
- **Tutorials**: 25 comprehensive educational tutorials

- **Content Types**: Text content, medical images, image captions, URLs, and metadata
- **Domain**: Medical radiology and diagnostic imaging
- **Languages**: English
- **License**: CC BY-NC 4.0 (non-commercial use only)

### Supported Tasks

- **Medical Question Answering**: Use the rich clinical content for medical Q&A systems
- **Image-Text Retrieval**: Match medical images with textual descriptions
- **Clinical Case Analysis**: Train models to analyze and understand clinical presentations
- **Medical Report Generation**: Generate radiological reports from images and patient data
- **Educational Content Generation**: Create educational materials about radiological conditions

## Dataset Structure

### Data Instances

Each document in the dataset contains:

```python
{
    'doc_id': 'article_brain-ct-angiography',
    'source_type': 'article',            # 'article', 'case', or 'tutorial'
    'title': 'Brain CT Angiography',
    'content': 'Full text content of the article/case/tutorial...',
    'url': 'https://radiopaedia.org/articles/brain-ct-angiography',
    'image_urls': ['https://...', 'https://...'],
    'image_captions': ['Caption for image 1', 'Caption for image 2'],
    'images': [<PIL.Image>, <PIL.Image>],  # Actual image data
    'local_image_paths': [],               # Empty for the HF version
    'metadata': {
        'priority': 'high',
        'base_name': 'brain-ct-angiography',
        'table_of_contents': '...',        # For tutorials
        'relevant_articles': [...]         # For cases
    }
}
```

### Data Fields

- **doc_id** (`string`): Unique identifier for each document
- **source_type** (`string`): Type of document: 'article', 'case', or 'tutorial'
- **title** (`string`): Title of the document
- **content** (`string`): Full text content, including clinical descriptions, findings, and conclusions
- **url** (`string`): Original source URL
- **image_urls** (`list[string]`): URLs of the associated images from the original source
- **image_captions** (`list[string]`): Captions for the images
- **images** (`list[Image]`): Actual PIL Image objects (resized to at most 1024x1024)
- **local_image_paths** (`list[string]`): Empty in the HF version (used for local processing)
- **metadata** (`dict`): Additional metadata, including:
  - `priority`: Priority level of the content
  - `base_name`: Base name for the document
  - `table_of_contents`: Table of contents (for tutorials)
  - `relevant_articles`: Related articles (for cases)

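Because `image_captions` is a parallel list that can be shorter than `images`, pairing the two defensively avoids index errors. A minimal stdlib sketch with hypothetical placeholder values (the URLs and captions below are illustrative, not real dataset entries):

```python
from itertools import zip_longest

# Hypothetical stand-ins for one sample's parallel lists
image_urls = ['https://example.org/axial.png',
              'https://example.org/coronal.png',
              'https://example.org/sagittal.png']
image_captions = ['Axial CT', 'Coronal reformat']  # one caption missing

# zip_longest pads the shorter list so every image keeps a slot
pairs = list(zip_longest(image_urls, image_captions, fillvalue=''))
print(pairs[-1])  # the third image gets an empty caption
```

The same guard appears later in the vision-language example, which falls back to `""` when a caption index is out of range.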
### Data Splits

Currently, the dataset is provided as a single split. Users can create their own train/validation/test splits as needed:

```python
from datasets import load_dataset

dataset = load_dataset("ZhangNy/radiology-dataset")

# Create custom splits
train_test = dataset['train'].train_test_split(test_size=0.2, seed=42)
test_valid = train_test['test'].train_test_split(test_size=0.5, seed=42)

final_dataset = {
    'train': train_test['train'],
    'validation': test_valid['train'],
    'test': test_valid['test']
}
```

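The two-step split above yields roughly 80/10/10 proportions. The arithmetic can be sanity-checked without downloading anything; this stdlib-only sketch mirrors the logic and is not part of the dataset tooling:

```python
import random

def three_way_split(ids, holdout=0.2, seed=42):
    """Shuffle, carve off `holdout` of the items, then halve the
    holdout into validation and test, as in the example above."""
    rng = random.Random(seed)
    ids = list(ids)
    rng.shuffle(ids)
    n = int(len(ids) * holdout)
    train, rest = ids[n:], ids[:n]
    return train, rest[:n // 2], rest[n // 2:]

train, valid, test = three_way_split(range(1000))
print(len(train), len(valid), len(test))  # 800 100 100
```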
## Dataset Creation

### Source Data

The data is sourced from:
- **Radiopaedia**: High-quality radiology reference articles and clinical cases
- **Radiology Assistant**: Comprehensive educational tutorials

### Data Collection and Processing

1. **Web Scraping**: Content was collected from public medical radiology resources
2. **Image Processing**: Images were resized to a maximum of 1024x1024 pixels while maintaining aspect ratio
3. **Text Processing**: Content was structured and cleaned to maintain medical accuracy
4. **Metadata Extraction**: Relevant metadata, including image captions, URLs, and relationships, was preserved

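For step 2, the target size of an aspect-preserving resize follows from a single scale factor. This helper is a sketch of the stated 1024x1024 bound, not the pipeline's actual code:

```python
def resize_dims(width, height, max_side=1024):
    """Scale (width, height) down so neither side exceeds max_side,
    keeping the aspect ratio; images already within bounds are untouched."""
    scale = min(1.0, max_side / max(width, height))
    return round(width * scale), round(height * scale)

print(resize_dims(2048, 1536))  # (1024, 768)
print(resize_dims(800, 600))    # (800, 600) -- already small enough
```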
### Annotations

The dataset includes naturally occurring annotations in the form of:
- Image captions from radiologists
- Clinical findings and diagnoses
- Educational descriptions
- Patient presentations

## Usage

### Loading the Dataset

```python
from datasets import load_dataset

# Load the full dataset
dataset = load_dataset("ZhangNy/radiology-dataset")

# Access a sample
sample = dataset['train'][0]
print(f"Title: {sample['title']}")
print(f"Type: {sample['source_type']}")
print(f"Number of images: {len(sample['images'])}")
```

### Working with Images

```python
from datasets import load_dataset
import matplotlib.pyplot as plt

dataset = load_dataset("ZhangNy/radiology-dataset")

# Get a sample with images
for sample in dataset['train']:
    if len(sample['images']) > 0:
        # Display the first image
        img = sample['images'][0]
        plt.imshow(img)
        plt.title(sample['title'])
        plt.axis('off')
        plt.show()
        break
```

### Filtering by Source Type

```python
# Get only articles
articles = dataset['train'].filter(lambda x: x['source_type'] == 'article')

# Get only clinical cases
cases = dataset['train'].filter(lambda x: x['source_type'] == 'case')

# Get only tutorials
tutorials = dataset['train'].filter(lambda x: x['source_type'] == 'tutorial')
```

### Example Use Cases

#### 1. Medical Question Answering System

```python
from datasets import load_dataset

dataset = load_dataset("ZhangNy/radiology-dataset")

# Use as a knowledge base for RAG (Retrieval-Augmented Generation)
documents = [
    {
        'id': item['doc_id'],
        'text': f"{item['title']} {item['content']}",
        'metadata': item['metadata']
    }
    for item in dataset['train']
]
```

#### 2. Vision-Language Model Training

```python
# Prepare image-text pairs for multimodal training
image_text_pairs = []

for sample in dataset['train']:
    for i, img in enumerate(sample['images']):
        caption = sample['image_captions'][i] if i < len(sample['image_captions']) else ""
        image_text_pairs.append({
            'image': img,
            'text': f"{sample['title']} {caption}",
            'context': sample['content']
        })
```

#### 3. Clinical Case Retrieval

```python
# Build a retrieval system for similar cases
cases = dataset['train'].filter(lambda x: x['source_type'] == 'case')

# Index cases for semantic search
case_corpus = [
    f"{case['title']} {case['content']}"
    for case in cases
]
```

## Considerations for Using the Data

### Medical Disclaimer

⚠️ **IMPORTANT**: This dataset is for research and educational purposes only. It should NOT be used for:
- Clinical diagnosis or treatment decisions
- Patient care without proper medical professional oversight
- Any application that could impact patient health without appropriate validation

### Ethical Considerations

- **Privacy**: While the data is sourced from public educational resources, ensure compliance with medical data regulations in your jurisdiction
- **Bias**: Medical datasets may contain biases related to patient demographics, imaging equipment, and diagnostic practices
- **Accuracy**: Medical information should be verified by qualified healthcare professionals
- **Intended Use**: This dataset is designed for AI research, model training, and educational purposes

### Limitations

- Images are resized to at most 1024x1024 pixels, which may reduce fine-detail visibility
- Not all documents have associated images
- Content is in English only
- The dataset may not represent the full diversity of clinical presentations
- Information reflects specific time periods and may not match current medical practice

## License and Citation

### License

This dataset is released under **CC BY-NC 4.0** (Creative Commons Attribution-NonCommercial 4.0 International).

- ✅ You can: use it for research, modify it, and share it
- ❌ You cannot: use it for commercial purposes
- 📝 You must: give appropriate credit and indicate if changes were made

### Citation

If you use this dataset in your research, please cite:

```bibtex
@dataset{radiology_multimodal_dataset_2025,
  title={Radiology Multimodal Dataset},
  author={ZhangNy},
  year={2025},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/datasets/ZhangNy/radiology-dataset}}
}
```

### Source Attribution

The original content is sourced from:
- Radiopaedia.org, a collaborative radiology resource
- Radiology Assistant, a collection of educational radiology tutorials

Please refer to these sources for their respective terms of use.

## Dataset Statistics

### Overall Statistics
- Total documents: 43,014
- Total images: varies by document
- Average content length: ~2,000 tokens

### Distribution by Type
- Articles: 17,218 (40%)
- Cases: 25,771 (60%)
- Tutorials: 25 (<1%)

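The percentages above follow directly from the per-type counts; recomputing them (counts taken from this card):

```python
# Per-type document counts as listed in this card
counts = {'article': 17218, 'case': 25771, 'tutorial': 25}

total = sum(counts.values())
shares = {k: round(100 * v / total, 1) for k, v in counts.items()}
print(total)   # 43014
print(shares)  # {'article': 40.0, 'case': 59.9, 'tutorial': 0.1}
```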
### Content Characteristics
- Rich clinical descriptions
- Multimodal content (text + images)
- Structured metadata
- Educational focus

## Contact and Contributions

For questions, issues, or contributions related to this dataset:
- Dataset repository: https://huggingface.co/datasets/ZhangNy/radiology-dataset
- Issues: please report any problems or concerns through the repository

## Changelog

### Version 1.0 (December 2025)
- Initial release
- 43,014 documents (17,218 articles, 25,771 cases, 25 tutorials)
- Multimodal content with images resized to at most 1024x1024
- Comprehensive metadata and structured fields