Update README.md
README.md (CHANGED)
```diff
@@ -149,19 +149,19 @@ Images and their corresponding style semantic maps were resized to fit the input
 - **Style Transfer Module**: AdaIN
 
 **v3**
-**Precision:** FP32, FP16, BF16, INT8
-**Embedding Dimensions:** 768
-**Hidden Dimensions:** 3072
-**Attention Type:** Location-Based Multi-Head Attention (Linear Attention)
-**Number of Attention Heads:** 42
-**Number of Attention Layers:** 16
-**Number of Transformer Encoder Layers (Feed-Forward):** 16
-**Number of Transformer Decoder Layers (Feed-Forward):** 16
-**Activation Functions:** ReLU, GeLU
-**Patch Size:** 8
-**Swin Window Size:** 7
-**Swin Shift Size:** 2
-**Style Transfer Module:** Style Adaptive Layer Normalization (SALN)
+- **Precision:** FP32, FP16, BF16, INT8
+- **Embedding Dimensions:** 768
+- **Hidden Dimensions:** 3072
+- **Attention Type:** Location-Based Multi-Head Attention (Linear Attention)
+- **Number of Attention Heads:** 42
+- **Number of Attention Layers:** 16
+- **Number of Transformer Encoder Layers (Feed-Forward):** 16
+- **Number of Transformer Decoder Layers (Feed-Forward):** 16
+- **Activation Functions:** ReLU, GeLU
+- **Patch Size:** 8
+- **Swin Window Size:** 7
+- **Swin Shift Size:** 2
+- **Style Transfer Module:** Style Adaptive Layer Normalization (SALN)
 
 #### Speeds, Sizes, Times
```
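The diff itself only adds list bullets to the v3 spec so it renders like the v2 section above it, but the spec it touches records the substantive v2-to-v3 change: the style transfer module moves from AdaIN, which rescales content features with the mean and variance taken directly from the style features, to SALN, which predicts the normalization gain and bias from a learned style embedding. Below is a minimal PyTorch sketch of a SALN block, assuming a 768-dimensional embedding to match the spec; the class name, `style_dim`, and tensor shapes are illustrative and not taken from this repository.

```python
import torch
import torch.nn as nn


class SALN(nn.Module):
    """Style-Adaptive Layer Normalization (illustrative sketch).

    Applies a parameter-free LayerNorm, then modulates the result with a
    per-channel gain and bias predicted from a style embedding. Contrast
    with AdaIN, where gain and bias are the statistics of the style
    features themselves. Dimensions are assumptions, not repo values.
    """

    def __init__(self, embed_dim: int = 768, style_dim: int = 768):
        super().__init__()
        self.norm = nn.LayerNorm(embed_dim, elementwise_affine=False)
        # One affine layer predicts both gain and bias for all channels.
        self.affine = nn.Linear(style_dim, 2 * embed_dim)

    def forward(self, x: torch.Tensor, style: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, embed_dim); style: (batch, style_dim)
        gain, bias = self.affine(style).chunk(2, dim=-1)
        return gain.unsqueeze(1) * self.norm(x) + bias.unsqueeze(1)


if __name__ == "__main__":
    saln = SALN()
    out = saln(torch.randn(2, 64, 768), torch.randn(2, 768))
    print(out.shape)  # torch.Size([2, 64, 768])
```

Because the gain and bias come from a single style vector rather than from per-feature statistics, one style code can condition every encoder and decoder layer in the stack.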