# Stylizing ViT Tiny - DDI *(Dermatology)*
<!-- Provide a quick summary of what the model is/does. -->
This model is the **Tiny** variant of **Stylizing ViT**, trained on the [**Diverse Dermatology Images (DDI)**](https://ddi-dataset.github.io/) (dermatology) dataset with the following splits: **Train: {12} / Val: {34} / Test: {56}**.
**Stylizing ViT** is a novel Vision Transformer encoder that utilizes weight-shared attention blocks for both self- and cross-attention. This design allows the same attention block to maintain anatomical consistency (via self-attention) while performing style transfer (via cross-attention), enabling anatomy-preserving instance style transfer for domain generalization in medical imaging.
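The weight-sharing idea above can be sketched as follows. This is a minimal, illustrative NumPy example (not the released model's code): a single attention block whose projection weights are reused both for self-attention (queries, keys, and values all come from the content tokens) and for cross-attention (queries come from the content tokens, keys and values from a style reference). The class and variable names are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class SharedAttentionBlock:
    """One attention block whose Q/K/V projection weights are shared between
    self-attention (anatomy consistency) and cross-attention (style transfer)."""

    def __init__(self, dim, seed=0):
        rng = np.random.default_rng(seed)
        scale = dim ** -0.5
        self.wq = rng.normal(0.0, scale, (dim, dim))
        self.wk = rng.normal(0.0, scale, (dim, dim))
        self.wv = rng.normal(0.0, scale, (dim, dim))
        self.dim = dim

    def __call__(self, q_tokens, kv_tokens):
        # The same weight matrices are applied no matter where kv_tokens
        # come from -- this is the weight sharing across attention modes.
        q = q_tokens @ self.wq
        k = kv_tokens @ self.wk
        v = kv_tokens @ self.wv
        attn = softmax(q @ k.T / np.sqrt(self.dim))
        return attn @ v

dim = 8
block = SharedAttentionBlock(dim)
content = np.random.default_rng(1).normal(size=(4, dim))  # anatomy (content) tokens
style = np.random.default_rng(2).normal(size=(6, dim))    # style reference tokens

self_out = block(content, content)  # self-attention: content attends to itself
cross_out = block(content, style)   # cross-attention: content attends to style
print(self_out.shape, cross_out.shape)
```

The key point is that only the inputs change between the two calls; the block's parameters do not, so one set of weights serves both attention roles.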