Files changed (1)
  1. README.md +166 -154
README.md CHANGED
@@ -52,17 +52,16 @@ This model is a Vision Transformer adapted for neuropathology tasks, developed u
  ## Model Details

  * **Model Type:** Vision Transformer (ViT) for neuropathology.
- * **Developed by:** [https://caai.ai.uky.edu], [Optional: in collaboration with the University of Kentucky [Specific Department/Center, e.g., Sanders-Brown Center on Aging]]
- * **Model Date:** [PLACEHOLDER: YYYY-MM-DD of model training completion or publication]
- * **Base Model Architecture (if applicable):** [PLACEHOLDER: e.g., DINOv2 ViT-S/14, ViT-B/14. Specify if registers are used, e.g., "Based on ViT-B/14 with 4 register tokens."]
- * **Input:** Image (e.g., patches from whole slide images).
- * **Output:** Class token and patch tokens [Optional: and register tokens]. These can be used for various downstream tasks (e.g., classification, segmentation, similarity search).
- * **Embedding Dimension:** [PLACEHOLDER: Specify for your ViT variant, e.g., 384 for ViT-S, 768 for ViT-B]
- * **Patch Size:** [PLACEHOLDER: e.g., 14 or 16. Confirm based on your model, e.g., "14 for a ViT with patch size 14."]
  * **Image Size Compatibility:**
- * The model was trained on images/patches of size [PLACEHOLDER: e.g., 224x224].
- * For an input of [PLACEHOLDER: e.g., 224x224] with a patch size of [PLACEHOLDER: e.g., 14], this results in 1 class token + ([PLACEHOLDER: e.g., 224]/[PLACEHOLDER: e.g., 14])^2 = [PLACEHOLDER: e.g., 256] patch tokens [Optional: + X register tokens].
- * The model can accept larger images provided the image dimensions are multiples of the patch size. If not, cropping to the closest smaller multiple may occur.
  * **License:** [PLACEHOLDER: Reiterate license chosen in YAML, e.g., Apache 2.0. Add link to full license if custom or 'other'.]
  * **Repository:** [PLACEHOLDER: Link to your model repository (e.g., GitHub, Hugging Face Hub)]
  * **Paper(s)/Reference(s):**
@@ -77,115 +76,138 @@ This model is a Vision Transformer adapted for neuropathology tasks, developed u
  This model is intended for research purposes in the field of neuropathology.

  * **Primary Intended Uses:**
- * [PLACEHOLDER: e.g., Automated detection of specific neuropathological features (e.g., amyloid plaques, neurofibrillary tangles, Lewy bodies) in digitized histopathological slides.]
- * [PLACEHOLDER: e.g., Classification of tissue samples based on the presence/severity of neuropathological changes.]
- * [PLACEHOLDER: e.g., Feature extraction for quantitative analysis of neuropathology.]
- * [PLACEHOLDER: e.g., A research tool to explore correlations between image features and disease states/progression.]
- * **Primary Intended Users:**
- * [PLACEHOLDER: e.g., Neuropathology researchers]
- * [PLACEHOLDER: e.g., Computational pathology scientists]
- * [PLACEHOLDER: e.g., AI developers working on medical imaging solutions for neurodegenerative diseases]
- * **Out-of-Scope Uses:**
- * [PLACEHOLDER: e.g., Direct clinical diagnosis or patient management decisions without expert human neuropathologist review and confirmation.]
- * [PLACEHOLDER: e.g., Use on staining methods, tissue types, or species significantly different from the training data without thorough validation.]
- * [PLACEHOLDER: e.g., Any application with legal or primary diagnostic implications without regulatory clearance.]
-
  ## How to Get Started with the Model

  [PLACEHOLDER: Provide code snippets for loading and using your model. If available on Hugging Face, show an example using `transformers` or `torch.hub.load`.]

  Example using Hugging Face `transformers` (adjust based on your actual model and task):
  ```python
- # Ensure you have the necessary libraries installed:
- # pip install transformers torch Pillow

- from transformers import AutoImageProcessor, AutoModel # Or AutoModelForImageClassification
  import torch
  from PIL import Image
- import requests # For fetching image from URL if needed
-
- # Make sure to replace with your actual model identifier on the Hugging Face Hub
- # For example: model_id = "your-username/your-model-name"
- model_id = "[PLACEHOLDER: your-hf-hub-username/your-model-name]"
-
- # Load the processor and model
- try:
-     image_processor = AutoImageProcessor.from_pretrained(model_id)
-     # If your model is for a specific task like classification, use the appropriate AutoModel class
-     # model = AutoModelForImageClassification.from_pretrained(model_id)
-     model = AutoModel.from_pretrained(model_id) # For feature extraction
-     model.eval() # Set model to evaluation mode
- except Exception as e:
-     print(f"Error loading model or processor from Hugging Face Hub: {e}")
-     print(f"Please ensure '{model_id}' is a valid model identifier and you have an internet connection.")
-     # Fallback for placeholder if model_id is not set for demonstration
-     if model_id == "[PLACEHOLDER: your-hf-hub-username/your-model-name]":
-         print("Using a dummy model structure for demonstration as placeholder ID is used.")
-         # This is a dummy structure, not a functional model
-         from transformers import ViTConfig, ViTModel
-         config = ViTConfig(image_size=224, patch_size=14, num_labels=3, hidden_size=192, num_hidden_layers=12, num_attention_heads=3) # Minimal ViT-Tiny like
-         model = ViTModel(config) # Or ViTForImageClassification(config)
-         # A dummy processor
-         class DummyProcessor:
-             def __init__(self):
-                 self.size = {"height": 224, "width": 224}
-             def __call__(self, images, return_tensors=None):
-                 # Simplified dummy preprocessing
-                 return {"pixel_values": torch.randn(1, 3, self.size['height'], self.size['width'])}
-         image_processor = DummyProcessor()
-
-
- # Example: Load an image
- # Option 1: From a local path
- image_path = "[PLACEHOLDER: path/to/your/neuropathology_image.png]"
- # Option 2: From a URL (example)
- # image_url = "https://placehold.co/224x224/E6E6FA/800080?text=Sample+Image" # Lilac background, purple text
- image_url = "https://placehold.co/224x224/cccccc/333333?text=Sample+Patch"
-
-
- try:
-     # image = Image.open(image_path).convert("RGB")
-     # Uncomment above line and comment below if using local path
-     image = Image.open(requests.get(image_url, stream=True).raw).convert("RGB")
- except FileNotFoundError:
-     print(f"Image file not found at: {image_path}. Using a dummy image.")
-     image = Image.new('RGB', (image_processor.size['height'], image_processor.size['width']), color = 'skyblue')
- except Exception as e:
-     print(f"Error loading image: {e}. Using a dummy image.")
-     image = Image.new('RGB', (224, 224), color = 'skyblue') # Fallback size
-
- # Preprocess the image
- try:
-     inputs = image_processor(images=image, return_tensors="pt")
- except Exception as e:
-     print(f"Error during image processing: {e}")
-     inputs = {"pixel_values": torch.randn(1, 3, 224, 224)} # Fallback input
-
- # Perform inference
- with torch.no_grad():
-     try:
          outputs = model(**inputs)
-         # For feature extraction (AutoModel):
-         last_hidden_states = outputs.last_hidden_state
-         class_token_embedding = last_hidden_states[:, 0] # CLS token embedding
-         patch_embeddings = last_hidden_states[:, 1:] # Patch token embeddings (excluding CLS)
-         print("Class token embedding shape:", class_token_embedding.shape)
-         print("Patch embeddings shape:", patch_embeddings.shape)
-
-         # For classification (AutoModelForImageClassification):
-         # if hasattr(outputs, 'logits'):
-         #     logits = outputs.logits
-         #     predicted_class_idx = logits.argmax(-1).item()
-         #     # Assuming your model config has id2label mapping
-         #     if hasattr(model.config, 'id2label') and model.config.id2label:
-         #         print("Predicted class:", model.config.id2label[predicted_class_idx])
-         #     else:
-         #         print("Predicted class index:", predicted_class_idx)
-         # else:
-         #     print("Model output does not contain logits. Check if you are using the correct AutoModel class for your task.")
-
-     except Exception as e:
-         print(f"Error during model inference: {e}")

  ```

@@ -197,48 +219,46 @@ with torch.no_grad():
  * **Description:** [PLACEHOLDER: Describe the data. E.g., "Digitized whole slide images (WSIs) of human post-mortem brain tissue sections from [number] subjects. Sections were stained with [e.g., Hematoxylin and Eosin (H&E), and immunohistochemistry for Amyloid-beta (Aβ) and phosphorylated Tau (pTau)]. Images were acquired using [e.g., Aperio AT2 scanner at 20x magnification]."]
  * **Preprocessing:** [PLACEHOLDER: Describe significant preprocessing steps. E.g., "WSIs were tiled into non-overlapping [e.g., 224x224 pixel] patches. Tiles with excessive background or artifacts were excluded. Color normalization using [Method, e.g., Macenko method] was applied."]
  * **Annotation (if applicable for supervised fine-tuning or evaluation):** [PLACEHOLDER: Describe the annotation process. E.g., "Regions of interest (ROIs) for [pathologies] were annotated by board-certified neuropathologists. For classification tasks, slide-level or region-level labels for [disease/pathology presence/severity] were provided."]
- * **Data Collection and Bias:**
- * **Demographics & Characteristics:** [PLACEHOLDER: Describe characteristics of the subjects providing data – e.g., age range, sex distribution, ethnicity distribution (if available and ethically appropriate to share), primary diagnoses, disease stages. Note any significant imbalances or selection criteria. E.g., "Data primarily from individuals over 65 years of age, with a representation of [X% female, Y% male]. The cohort includes cases spanning a spectrum of Alzheimer's Disease neuropathologic change (ADNC)."]
- * **Known Biases in Data:** [PLACEHOLDER: Address any known or potential biases in the dataset. E.g., "The dataset is derived from a single academic medical center (University of Kentucky), potentially limiting geographic and scanner-type diversity.", "Underrepresentation of certain comorbid conditions or early disease stages.", "Potential for selection bias based on consent or case availability."]

  ## Training Procedure

- * **Training System/Framework:** [PLACEHOLDER: e.g., "PyTorch", "Hugging Face Transformers library". If custom or specific framework features were essential, mention them, e.g., "Custom training loop implementing DINOv2 self-distillation loss and iBOT masked image modeling."]
- * **Base Model (if fine-tuning):** [PLACEHOLDER: e.g., "Pretrained `facebook/dinov2-vitb14` loaded from Hugging Face Hub."]
- * **Training Objective(s):** [PLACEHOLDER: Describe the loss functions and training paradigm. E.g., "Self-supervised learning using DINO loss, iBOT masked-image modeling loss, and KoLeo regularization on [CLS] tokens.", or for fine-tuning: "Fine-tuned for [specific task, e.g., multi-class classification of neuropathological features] using a cross-entropy loss function."]
  * **Key Hyperparameters (example):**
- * Batch size: [PLACEHOLDER]
- * Learning rate: [PLACEHOLDER] (and schedule if any)
- * Epochs/Iterations: [PLACEHOLDER]
- * Optimizer: [PLACEHOLDER: e.g., AdamW]
- * Weight decay: [PLACEHOLDER]
- * [Optional: Other important parameters like temperature for DINO, mask ratio for iBOT]
- * **Data Augmentation:** [PLACEHOLDER: List specific augmentations used. E.g., "Standard augmentations including random cropping, horizontal/vertical flipping, rotations. Color augmentations such as random brightness, contrast, and HED color jitter specifically for histopathology images. [Optional: Stain augmentation techniques if used.]"]
- * **Training Regime:** [PLACEHOLDER: e.g., "Trained with fp16 mixed-precision using PyTorch FSDP on [Number]x NVIDIA [Type, e.g., A100] GPUs."]
- * [Optional: Parameter-Efficient Fine-Tuning (PEFT): If used, describe e.g., "LoRA was applied to attention and feed-forward network layers with a rank of [r]."]
- * [Optional: Layer Freezing: If used, e.g., "The first N layers of the pretrained backbone were frozen during fine-tuning."]
-
  ## Evaluation

- * **Task(s):** [PLACEHOLDER: Clearly define the task(s) the model was evaluated on. E.g., "Patch-level classification of [pathology A vs. B vs. healthy]", "Detection of [specific cellular feature]", "Slide-level prediction of [disease grade]"]
- * **Metrics:** [PLACEHOLDER: List the metrics used for evaluation. E.g., "For classification: Accuracy, Precision, Recall, F1-score (macro/micro/weighted), AUC-ROC, AUC-PR. For detection: mean Average Precision (mAP) at [IoU threshold(s)]."]
- * **Evaluation Data:**
- * **Dataset(s):** [PLACEHOLDER: Describe the dataset(s) used for evaluation. E.g., "A held-out test set from the University of Kentucky dataset, comprising [N] images/slides from [M] subjects, ensuring no overlap with the training set.", "Optional: An external validation dataset from [Source Y] consisting of [details]."]
- * **Demographics and Characteristics:** [PLACEHOLDER: Describe the evaluation set similarly to the training data, highlighting any differences.]
- * **Results:** [PLACEHOLDER: Present key quantitative results. Tables are good for multiple metrics/classes. Include confidence intervals or standard deviations if available. E.g., "The model achieved an accuracy of X% and an F1-score of Y for classifying [pathology Z] on the internal UKy test set. On the external validation set [Dataset Name], it achieved an accuracy of A%."]
-
- ## Bias, Risks, and Limitations
-
- * **Model Biases:**
- * [PLACEHOLDER: Reflect on potential biases. E.g., "Performance may be unequal across different demographic groups if these were imbalanced in the UKy training data and these characteristics correlate with image features.", "Model may exhibit bias towards features prevalent in the specific scanner or staining protocols used at the University of Kentucky.", "Bias may arise from class imbalance in the training data, leading to better performance on majority classes."]
- * **Risks:**
- * [PLACEHOLDER: Identify potential risks. E.g., "Over-reliance on model predictions in a research setting without thorough critical assessment by domain experts could lead to erroneous scientific conclusions.", "Risk of algorithmic bias perpetuating or amplifying existing disparities if the model is naively applied to populations or data sources different from the training set without careful validation.", "Misinterpretation of model outputs as definitive diagnostic statements (model is for research/assistive use)."]
- * **Limitations:**
- * [PLACEHOLDER: State known limitations. E.g., "The model was trained primarily on [specific stains/markers, e.g., H&E, Aβ, pTau] and its performance on other stains is not guaranteed.", "Generalization to images from different institutions, scanners, or significantly different tissue preparation protocols may be limited without further fine-tuning or validation.", "Performance on very rare neuropathological features or subtle morphological changes may be suboptimal due to limited representation in the training data.", "The model requires high-quality input images; performance may degrade with significant artifacts (e.g., blur, tissue folds, pen marks)."]
- * **Recommendations:**
- * Users should critically evaluate model outputs, especially in novel contexts or with data from different sources.
- * Extensive validation is recommended before use on datasets with different characteristics than the training data.
- * [PLACEHOLDER: Add any other specific recommendations for users.]

  ## Ethical Considerations

@@ -250,15 +270,7 @@ with torch.no_grad():
  * This model is intended for research purposes to augment the capabilities of neuropathology researchers. It is not a medical device and should not be used for direct clinical decision-making, diagnosis, or treatment planning without comprehensive validation, regulatory approval (if applicable), and oversight by qualified medical professionals.
  * **Fairness and Bias Mitigation:**
  * [PLACEHOLDER: Describe any steps taken during development to assess or mitigate bias, or plans for future work in this area. E.g., "Ongoing work includes evaluating model performance across different demographic subgroups represented in the University of Kentucky dataset to identify and address potential disparities."]
-
- ## Environmental Impact
-
- * **Hardware Type:** [PLACEHOLDER: e.g., NVIDIA A100 80GB, NVIDIA V100 32GB, or specific University of Kentucky HPC node types]
- * **Hours Used:** [PLACEHOLDER: Estimate total GPU/TPU hours for training/fine-tuning, e.g., "Approximately X GPU hours"]
- * **Cloud Provider:** [PLACEHOLDER: e.g., University of Kentucky Lipscomb Compute Cluster, AWS, GCP, Azure, Private Infrastructure]
- * **Compute Region:** [PLACEHOLDER: e.g., Lexington, KY (for UKy HPC); us-east-1 (if cloud); Not Applicable (if local HPC)]
- * **Carbon Emitted (CO2eq):** [PLACEHOLDER: e.g., "X kg". Estimate if possible using tools like CodeCarbon or ML CO2 Impact. If not measured, state "Not quantitatively measured." Consider adding: "We encourage users to be mindful of the computational cost of using and retraining deep learning models."]
- * *Software:* [PLACEHOLDER: e.g., PyTorch X.Y, Transformers Z.A, CUDA B.C]

  ## Citation / BibTeX

 
  ## Model Details

  * **Model Type:** Vision Transformer (ViT) for neuropathology.
+ * **Developed by:** Center for Applied Artificial Intelligence (CAAI), University of Kentucky (https://caai.ai.uky.edu)
+ * **Model Date:** 2025-05-05
+ * **Base Model Architecture:** DINOv2-giant (ViT-g/14; https://huggingface.co/facebook/dinov2-giant)
+ * **Input:** Image (224x224).
+ * **Output:** Class token and patch tokens. These can be used for various downstream tasks (e.g., classification, segmentation, similarity search).
+ * **Embedding Dimension:** 1536
+ * **Patch Size:** 14
  * **Image Size Compatibility:**
+ * The model was trained on images/patches of size 224x224.
+ * The model also accepts images of other sizes, not just the 224x224 dimensions used in training; dimensions that are not multiples of the patch size (14) are effectively cropped to the nearest smaller multiple (see the token-count sketch after this list).

  * **License:** [PLACEHOLDER: Reiterate license chosen in YAML, e.g., Apache 2.0. Add link to full license if custom or 'other'.]
  * **Repository:** [PLACEHOLDER: Link to your model repository (e.g., GitHub, Hugging Face Hub)]
  * **Paper(s)/Reference(s):**
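
For reference, a minimal sketch of the token geometry these specs imply (assuming the standard DINOv2 layout of one class token plus a 14-pixel patch grid, with no register tokens):

```python
# Token count for a 224x224 input with patch size 14 and embedding dimension 1536.
image_size, patch_size, embed_dim = 224, 14, 1536
patches_per_side = image_size // patch_size        # 16
num_patch_tokens = patches_per_side ** 2           # 256
print(f"1 class token + {num_patch_tokens} patch tokens = {1 + num_patch_tokens} tokens of dim {embed_dim}")
# Expected `last_hidden_state` shape for a single 224x224 image: [1, 257, 1536]
```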
 
  This model is intended for research purposes in the field of neuropathology.

  * **Primary Intended Uses:**
+ * Classification of tissue samples based on the presence/severity of neuropathological changes.
+ * Feature extraction for quantitative analysis of neuropathology.

  ## How to Get Started with the Model

  [PLACEHOLDER: Provide code snippets for loading and using your model. If available on Hugging Face, show an example using `transformers` or `torch.hub.load`.]

  Example using Hugging Face `transformers` (adjust based on your actual model and task):
  ```python
  import torch
  from PIL import Image
+ from transformers import AutoModel, AutoImageProcessor
+ from torchvision import transforms
+
+ def get_embeddings_with_processor(image_path, model_path):
+     """
+     Extract embeddings using a Hugging Face image processor.
+     This approach handles resizing and normalization automatically.
+
+     Args:
+         image_path: Path to the image file
+         model_path: Path to the model directory or Hub ID (the processor config is loaded from the same path)
+
+     Returns:
+         Image embeddings from the model
+     """
+     # Load model
+     model = AutoModel.from_pretrained(model_path)
+     model.eval()
+
+     # Load processor from config
+     image_processor = AutoImageProcessor.from_pretrained(model_path)
+
+     # Process the image
+     with torch.no_grad():
+         image = Image.open(image_path).convert('RGB')
+         inputs = image_processor(images=image, return_tensors="pt")
          outputs = model(**inputs)
+         embeddings = outputs.last_hidden_state[:, 0, :]
+
+     return embeddings
+
+ def get_embeddings_direct(image_path, model_path, mean=[0.83800817, 0.6516568, 0.78056043], std=[0.08324149, 0.09973671, 0.07153901]):
+     """
+     Extract embeddings directly without an image processor.
+     This works with various input resolutions because the ViT interpolates its
+     position embeddings; dimensions that are not multiples of the patch size (14)
+     are effectively cropped.
+
+     Args:
+         image_path: Path to the image file
+         model_path: Path to the model directory
+         mean: Normalization mean values
+         std: Normalization standard deviation values
+
+     Returns:
+         Image embeddings from the model
+     """
+     # Load model
+     model = AutoModel.from_pretrained(model_path)
+     model.eval()
+
+     # Define transformation - just converting to tensor and normalizing
+     transform = transforms.Compose([
+         transforms.ToTensor(),
+         transforms.Normalize(mean=mean, std=std)
+     ])
+
+     # Process the image
+     with torch.no_grad():
+         # Open image and convert to RGB
+         image = Image.open(image_path).convert('RGB')
+         # Convert image to tensor
+         image_tensor = transform(image).unsqueeze(0)  # Add batch dimension
+         # Feed to model
+         outputs = model(pixel_values=image_tensor)
+         # Get embeddings
+         embeddings = outputs.last_hidden_state[:, 0, :]
+
+     return embeddings
+
+ def get_embeddings_resized(image_path, model_path, size=(224, 224), mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]):
+     """
+     Extract embeddings with explicit resizing to 224x224.
+     This approach ensures a consistent input size regardless of the original image dimensions.
+
+     Args:
+         image_path: Path to the image file
+         model_path: Path to the model directory
+         size: Target size for resizing (default: 224x224)
+         mean: Normalization mean values
+         std: Normalization standard deviation values
+
+     Returns:
+         Image embeddings from the model
+     """
+     # Load model
+     model = AutoModel.from_pretrained(model_path)
+     model.eval()
+
+     # Define transformation with explicit resize
+     transform = transforms.Compose([
+         transforms.Resize(size, interpolation=transforms.InterpolationMode.BICUBIC),
+         transforms.ToTensor(),
+         transforms.Normalize(mean=mean, std=std)
+     ])
+
+     # Process the image
+     with torch.no_grad():
+         image = Image.open(image_path).convert('RGB')
+         image_tensor = transform(image).unsqueeze(0)  # Add batch dimension
+         outputs = model(pixel_values=image_tensor)
+         embeddings = outputs.last_hidden_state[:, 0, :]
+
+     return embeddings
+
+ # Example usage
+ if __name__ == "__main__":
+     image_path = "test.jpg"
+     model_path = "IBI-CAAI/NP-TEST-0"
+
+     # Method 1: Using the image processor (recommended for consistency)
+     embeddings1 = get_embeddings_with_processor(image_path, model_path)
+     print('Embedding shape (with processor):', embeddings1.shape)
+
+     # Method 2: Direct approach without resizing (works with various resolutions)
+     embeddings2 = get_embeddings_direct(image_path, model_path)
+     print('Embedding shape (direct):', embeddings2.shape)
+
+     # Method 3: With explicit resize to 224x224
+     embeddings3 = get_embeddings_resized(image_path, model_path)
+     print('Embedding shape (resized):', embeddings3.shape)

  ```
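
Because the intended uses above include feature extraction for quantitative analysis, here is a short, illustrative sketch of batching several pre-extracted patches through the processor in one call (the patch file names are hypothetical; the Hub ID is the one used in the example above):

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModel

patch_paths = ["patch_0001.png", "patch_0002.png", "patch_0003.png"]  # hypothetical patch files
model_path = "IBI-CAAI/NP-TEST-0"

model = AutoModel.from_pretrained(model_path).eval()
image_processor = AutoImageProcessor.from_pretrained(model_path)

# The processor accepts a list of images and returns a single batched tensor.
images = [Image.open(p).convert("RGB") for p in patch_paths]
inputs = image_processor(images=images, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

cls_embeddings = outputs.last_hidden_state[:, 0, :]  # one 1536-dim embedding per patch
print(cls_embeddings.shape)
```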

  * **Description:** [PLACEHOLDER: Describe the data. E.g., "Digitized whole slide images (WSIs) of human post-mortem brain tissue sections from [number] subjects. Sections were stained with [e.g., Hematoxylin and Eosin (H&E), and immunohistochemistry for Amyloid-beta (Aβ) and phosphorylated Tau (pTau)]. Images were acquired using [e.g., Aperio AT2 scanner at 20x magnification]."]
  * **Preprocessing:** [PLACEHOLDER: Describe significant preprocessing steps. E.g., "WSIs were tiled into non-overlapping [e.g., 224x224 pixel] patches. Tiles with excessive background or artifacts were excluded. Color normalization using [Method, e.g., Macenko method] was applied."]
  * **Annotation (if applicable for supervised fine-tuning or evaluation):** [PLACEHOLDER: Describe the annotation process. E.g., "Regions of interest (ROIs) for [pathologies] were annotated by board-certified neuropathologists. For classification tasks, slide-level or region-level labels for [disease/pathology presence/severity] were provided."]

  ## Training Procedure

+ * **Training System/Framework:** DINO-MX (Modular & Flexible Self-Supervised Training Framework)
+ * **Base Model (if fine-tuning):** Pretrained `facebook/dinov2-giant` loaded from the Hugging Face Hub.
+ * **Training Objective(s):** Self-supervised learning using the DINO self-distillation loss and the iBOT masked-image-modeling loss (see the sketch after this list).
  * **Key Hyperparameters (example):**
+ * Batch size: 32
+ * Learning rate: 1.0e-4
+ * Iterations: 5,000
+ * Optimizer: AdamW
+ * Weight decay: 0.04 to 0.4
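
To make the objective concrete, here is a minimal, generic sketch of the DINO self-distillation term (cross-entropy between a centered, sharpened teacher distribution and the student distribution); it is illustrative only and not the DINO-MX implementation:

```python
import torch
import torch.nn.functional as F

def dino_loss(student_logits, teacher_logits, center, student_temp=0.1, teacher_temp=0.04):
    """Cross-entropy between teacher and student distributions over prototype scores.

    student_logits, teacher_logits: [batch, num_prototypes] outputs of the projection heads.
    center: running mean of teacher logits, subtracted to help avoid collapse.
    """
    student_log_probs = F.log_softmax(student_logits / student_temp, dim=-1)
    teacher_probs = F.softmax((teacher_logits - center) / teacher_temp, dim=-1).detach()
    return -(teacher_probs * student_log_probs).sum(dim=-1).mean()

# Toy example: batch of 32 samples, 4096 prototypes (hypothetical sizes).
student = torch.randn(32, 4096)
teacher = torch.randn(32, 4096)
print(dino_loss(student, teacher, center=torch.zeros(4096)))
```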

  ## Evaluation

+ * **Task(s):** Linear-probe classification, k-NN classification, clustering, and robustness analysis.
+ * **Metrics:** Accuracy, Precision, Recall, F1
+ * **Dataset(s):** Neuro Path dataset
+ * **Results:**
+ The model achieved strong performance across multiple evaluation methods on the Neuro Path dataset (see the evaluation sketch after these results).
+
+ | Evaluation | Accuracy | Precision | Recall | F1 Score |
+ |---|---|---|---|---|
+ | Linear probe | 80.17% | 79.20% | 79.60% | 77.88% |
+ | k-NN classification | 83.76% | 83.34% | 83.76% | 83.40% |
+
+ **Clustering Quality:**
+ - Silhouette Score: 0.267
+ - Adjusted Mutual Information: 0.473
+
+ **Robustness Score:** 0.574
+
+ **Overall Performance Score:** 0.646
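
As an illustration of how linear-probe and k-NN numbers like those above can be computed from extracted embeddings (a generic scikit-learn sketch with synthetic stand-in data; the actual evaluation pipeline may differ):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score, f1_score

# Stand-ins for CLS-token embeddings (1536-dim) and labels; replace with real features.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(500, 1536)), rng.integers(0, 3, size=500)
X_test, y_test = rng.normal(size=(100, 1536)), rng.integers(0, 3, size=100)

# Linear probe: a logistic-regression classifier on frozen embeddings.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# k-NN classification in embedding space.
knn = KNeighborsClassifier(n_neighbors=20).fit(X_train, y_train)

for name, clf in [("linear probe", probe), ("k-NN", knn)]:
    pred = clf.predict(X_test)
    print(name, "accuracy:", accuracy_score(y_test, pred), "macro-F1:", f1_score(y_test, pred, average="macro"))
```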
+

  ## Ethical Considerations

  * This model is intended for research purposes to augment the capabilities of neuropathology researchers. It is not a medical device and should not be used for direct clinical decision-making, diagnosis, or treatment planning without comprehensive validation, regulatory approval (if applicable), and oversight by qualified medical professionals.
  * **Fairness and Bias Mitigation:**
  * [PLACEHOLDER: Describe any steps taken during development to assess or mitigate bias, or plans for future work in this area. E.g., "Ongoing work includes evaluating model performance across different demographic subgroups represented in the University of Kentucky dataset to identify and address potential disparities."]
+

  ## Citation / BibTeX