sdoerrich97 committed
Commit ba0b287 · verified · 1 Parent(s): d790b29

Update README.md

Files changed (1): README.md (+1 -1)
README.md CHANGED
@@ -24,7 +24,7 @@ metrics:
 # Stylizing ViT Small - Epithelium-Stroma *(Histopathology)*
 
 <!-- Provide a quick summary of what the model is/does. -->
-This model is the **Small** variant of **Stylizing ViT**, trained on the [**Aggregated Epithelium-Stroma**](https://github.com/chenxinli001/Task-Aug) (histopathology) dataset with the following splits: Train: {NKI} / Val: {VGH} / Test: {IHC}.
+This model is the **Small** variant of **Stylizing ViT**, trained on the [**Aggregated Epithelium-Stroma**](https://github.com/chenxinli001/Task-Aug) (histopathology) dataset with the following splits: **Train: {NKI} / Val: {VGH} / Test: {IHC}**.
 
 **Stylizing ViT** is a novel Vision Transformer encoder that utilizes weight-shared attention blocks for both self- and cross-attention. This design allows the same attention block to maintain anatomical consistency (via self-attention) while performing style transfer (via cross-attention), enabling anatomy-preserving instance style transfer for domain generalization in medical imaging.
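The weight-sharing idea described above can be illustrated with a minimal NumPy sketch (not the repository's actual implementation; the class and variable names here are hypothetical). A single set of query/key/value projection weights serves both modes: passing the content tokens as their own context gives self-attention, while passing tokens from a style image as context gives cross-attention with the very same parameters.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class SharedAttentionBlock:
    """Hypothetical sketch: one attention block whose Q/K/V weights are
    reused for both self-attention and cross-attention."""

    def __init__(self, dim, seed=0):
        rng = np.random.default_rng(seed)
        scale = 1.0 / np.sqrt(dim)
        self.Wq = rng.standard_normal((dim, dim)) * scale
        self.Wk = rng.standard_normal((dim, dim)) * scale
        self.Wv = rng.standard_normal((dim, dim)) * scale
        self.dim = dim

    def __call__(self, x, context):
        # Self-attention when context is x itself; cross-attention when
        # context holds style-image tokens -- same weights either way.
        q = x @ self.Wq
        k = context @ self.Wk
        v = context @ self.Wv
        attn = softmax(q @ k.T / np.sqrt(self.dim))
        return attn @ v

block = SharedAttentionBlock(dim=16)
content = np.random.default_rng(1).standard_normal((8, 16))  # anatomy tokens
style = np.random.default_rng(2).standard_normal((8, 16))    # style tokens

anatomy_out = block(content, content)  # self-attention: preserve anatomy
styled_out = block(content, style)     # cross-attention: inject style
```

The point of the sketch is that swapping only the `context` argument switches the block between the two roles without duplicating any parameters, which is what lets one encoder both preserve anatomy and transfer style.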