gOLIVES committed (verified) · Commit 28e0400 · 1 parent: bb61e8f

Update README.md

Files changed (1): README.md (+59 −0)
README.md CHANGED
@@ -1,4 +1,10 @@
 ---
+license: mit
+tags:
+- medical
+pretty_name: 'OLIVES Dataset: Ophthalmic Labels for Investigating Visual Eye Semantics'
+size_categories:
+- 10K<n<100K
 dataset_info:
   features:
   - name: image
@@ -57,3 +63,56 @@ configs:
   - split: train
     path: data/train-*
 ---
+
+
+# OLIVES_Dataset
+
+## Abstract
+Clinical diagnosis of the eye is performed over multifarious data modalities including scalar clinical labels, vectorized biomarkers, two-dimensional fundus images, and three-dimensional Optical Coherence Tomography (OCT) scans. While the clinical labels, fundus images and OCT scans are instrumental measurements, the vectorized biomarkers are interpreted attributes from the other measurements. Clinical practitioners use all these data modalities for diagnosing and treating eye diseases like Diabetic Retinopathy (DR) or Diabetic Macular Edema (DME). Enabling usage of machine learning algorithms within the ophthalmic medical domain requires research into the relationships and interactions between these relevant data modalities. Existing datasets are limited in that: ($i$) they view the problem as disease prediction without assessing biomarkers, and ($ii$) they do not consider the explicit relationship among all four data modalities over the treatment period. In this paper, we introduce the Ophthalmic Labels for Investigating Visual Eye Semantics (OLIVES) dataset that addresses the above limitations. This is the first OCT and fundus dataset that includes clinical labels, biomarker labels, and time-series patient treatment information from associated clinical trials. The dataset consists of $1268$ fundus eye images each with $49$ OCT scans, and $16$ biomarkers, along with $3$ clinical labels and a disease diagnosis of DR or DME. In total, there are 96 eyes' data averaged over a period of at least two years with each eye treated for an average of 66 weeks and 7 injections. The OLIVES dataset has advantages in other fields of machine learning research including self-supervised learning as it provides alternate augmentation schemes that are medically grounded.
+
+**Labels**:
+There are two directories for the labels: full_labels and ml_centric labels.
+
+**Full labels** contain all the clinical information used in the associated studies of interest.
+
+**ML Centric labels** are divided into two files: Biomarker_Clinical_Data_Images.csv and Clinical_Data_Images.xlsx.
+
+Biomarker_Clinical_Data_Images.csv contains the full biomarker and clinical labels for the 9408 images that have labeled biomarker information.
+
+Clinical_Data_Images.xlsx has the BCVA, CST, Eye ID, and Patient ID for the 78000+ images that have clinical data.
+
+## Data Download
+
+```python
+from datasets import load_dataset
+from torch.utils.data import DataLoader
+
+olives = load_dataset('gOLIVES/OLIVES_Dataset', split='train')
+
+# Convert into a format usable by PyTorch
+olives = olives.with_format("torch")
+
+dataloader = DataLoader(olives, batch_size=4)
+for batch in dataloader:
+    print(batch)
+
+# Example: get the VMT biomarker of the first image in the dataset.
+print(olives[0]['VMT'])
+
+```
+
+## Links
+
+**Associated Website**: https://ghassanalregib.info/
+
+## Citations
+
+If you find this work useful, please include the following citation:
+
+>@inproceedings{prabhushankarolives2022,\
+title={OLIVES Dataset: Ophthalmic Labels for Investigating Visual Eye Semantics},\
+author={Prabhushankar, Mohit and Kokilepersaud, Kiran and Logan, Yash-yee and Trejo Corona, Stephanie and AlRegib, Ghassan and Wykoff, Charles},\
+booktitle={Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks 2 (NeurIPS Datasets and Benchmarks 2022)},\
+year={2022}\
+}
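
Once downloaded, the per-image clinical labels described in the README above (Clinical_Data_Images.xlsx) can be analyzed with ordinary pandas operations. A minimal sketch: the column names `Patient_ID`, `Eye_ID`, `BCVA`, and `CST` are assumed from the field list in the README, and the values below are illustrative toy data, not real OLIVES measurements.

```python
import pandas as pd

# Toy stand-in for Clinical_Data_Images.xlsx: one row per image,
# with the clinical fields the README lists (BCVA, CST, Eye ID, Patient ID).
# Column names and values here are illustrative assumptions.
clinical = pd.DataFrame({
    "Patient_ID": [1, 1, 2, 2],
    "Eye_ID": ["OD", "OD", "OS", "OS"],
    "BCVA": [70, 72, 65, 66],
    "CST": [310, 305, 400, 390],
})

# Mean BCVA and CST per eye across that eye's visits.
per_eye = clinical.groupby(["Patient_ID", "Eye_ID"])[["BCVA", "CST"]].mean()
print(per_eye)
```

The same groupby pattern extends to the biomarker file (Biomarker_Clinical_Data_Images.csv) for summarizing biomarker prevalence per eye or per treatment week.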