this dataset contains pre-rendered word stimulus images used to evaluate how well deep neural network representations reproduce human orthographic priming effects.

each image is a 224x224 black-background PNG with white text, rendered in arial at size 22, centred.

### example stimuli for the target word "design"

<table>
  <tr>
    <td align="center"><b>ID</b><br>(identity)</td>
    <td align="center"><b>TL12</b><br>(transposed 1-2)</td>
    <td align="center"><b>DL-1M</b><br>(deleted middle)</td>
    <td align="center"><b>SN-M</b><br>(substituted middle)</td>
    <td align="center"><b>RF</b><br>(reversed full)</td>
    <td align="center"><b>ALD-ARB</b><br>(all different)</td>
  </tr>
  <tr>
    <td align="center"><img src="prime_data/design/ID.png" width="120"></td>
    <td align="center"><img src="prime_data/design/TL12.png" width="120"></td>
    <td align="center"><img src="prime_data/design/DL-1M.png" width="120"></td>
    <td align="center"><img src="prime_data/design/SN-M.png" width="120"></td>
    <td align="center"><img src="prime_data/design/RF.png" width="120"></td>
    <td align="center"><img src="prime_data/design/ALD-ARB.png" width="120"></td>
  </tr>
  <tr>
    <td align="center">DESIGN</td>
    <td align="center">EDSIGN</td>
    <td align="center">DSIGN</td>
    <td align="center">DESIHN</td>
    <td align="center">NGISE</td>
    <td align="center">CBHAUX</td>
  </tr>
</table>
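images are addressed as `prime_data/<target>/<CODE>.png`, as in the table above; a minimal loading sketch (the existence guard is only there so the snippet degrades gracefully outside the dataset root):

```python
from pathlib import Path
from PIL import Image

prime_dir = Path("prime_data")
target, condition = "design", "TL12"

path = prime_dir / target / f"{condition}.png"     # prime_data/design/TL12.png
img = Image.open(path) if path.exists() else None  # 224x224 PNG, or None if not present
```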

### what's included

```
prime_data/
metadata/
```

| code | description |
|------|-------------|
| ID | identity (e.g., prime and target are both "design") |
| TL12 | transposed letters positions 1-2 |
| TL-I | transposed letters internal |
| TL56 | transposed letters positions 5-6 |

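the transposed-letter codes are mechanical string operations; a sketch using 1-based positions to match the codes above:

```python
def transpose(word: str, i: int, j: int) -> str:
    """Swap the letters at 1-based positions i and j."""
    chars = list(word)
    chars[i - 1], chars[j - 1] = chars[j - 1], chars[i - 1]
    return "".join(chars)

transpose("design", 1, 2)  # TL12 -> "edsign"
transpose("design", 5, 6)  # TL56 -> "desing"
```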
the core analysis computes kendall's tau between model cosine-similarity patterns and human priming scores across the 28 conditions. see the [source code repository](https://github.com/Don-Yin/Orthographic-DNN) for the full pipeline:

1. fine-tune pretrained torchvision models on word classification (training images not included here; generate with `generate_data.py`)
2. extract layer-wise activations for each prime image pair (identity vs. condition)
3. compute cosine similarity at each layer
4. correlate with human priming scores using kendall's tau
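steps 3-4 reduce to a cosine similarity per layer and a rank correlation across conditions; a minimal numpy/scipy sketch (the similarity and priming values below are made up purely for illustration):

```python
import numpy as np
from scipy.stats import kendalltau

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two flattened activation vectors."""
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# hypothetical per-condition model similarities and human priming scores (ms)
model_sims = np.array([1.00, 0.91, 0.85, 0.80, 0.55, 0.20])
human_priming = np.array([55.0, 48.0, 40.0, 37.0, 20.0, 3.0])

tau, p_value = kendalltau(model_sims, human_priming)  # tau = 1.0 for this toy ordering
```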

regenerating the images with `generate_data.py` requires the font files (not redistributable).

## models evaluated
alexnet, densenet169, efficientnet-b1, resnet50, resnet101, vgg16, vgg19, vit-b/16, vit-b/32, vit-l/16, vit-l/32, all initialised from imagenet pretrained weights via torchvision.

## citation