Commit 59e2f9d (verified, parent 896efe9) by Idiap-Data: Upload 3 files
IResNet100-Tuned.pth ADDED

version https://git-lfs.github.com/spec/v1
oid sha256:bd1a0b153b406c911ccee3e33faee86e4be065c250d026f8eac9dc65ab257908
size 260789686
IResNet100-Tuned_checksum.txt ADDED

Filename: IResNet100-Tuned.pth
MD5 Hash: 1b8260ffa5514ab7cd3205cfd1b1ced1
README.md CHANGED

---
license: cc-by-nc-4.0
---

# IResNet100-Tuned

In this work, we show that lightweight tuning of vision–language foundation models, combined with domain-adapted face recognition networks, can effectively bridge the domain gap between photographs and paintings. Our fusion approach achieves state-of-the-art accuracy in sitter identification. Face recognition on artworks remains a particularly difficult task compared to conventional face recognition due to the scarcity of labelled data, stylistic variation, and the interpretive nature of portraiture. However, the results show that adapting modern architectures to this setting is feasible and promising. This opens up new research avenues, including synthetic data generation to augment the limited training set and heterogeneous domain adaptation techniques to improve generalisation across visual domains.

Project page: https://www.idiap.ch/paper/artface/

## Overview

* **Training**: ArtFace was trained on the [Historical Faces dataset](https://github.com/marcohuber/HistoricalFaces), which consists of 766 paintings of 210 different sitters.
* **Backbone**: **IResNet100-Tuned** is fine-tuned from **ArcFace-antelopev2**, provided in the InsightFace model zoo.
  - Source model zoo: https://github.com/deepinsight/insightface/tree/master/model_zoo
  - Base model: `ArcFace-antelopev2` (IResNet100-based face recognition model)
  - License: **MIT License**
* **Parameters**: 1M
* **Task**: Historical portrait face identification via model adaptation
* **Framework**: PyTorch
* **Output structure**: Batch of face embeddings (i.e., features)

## Evaluation of Models

![ArtFace](https://www.idiap.ch/paper/artface/static/images/ArtFR-Page-9.drawio.png)

_Overview of the proposed method: **(a)** LoRA-based adaptation of the CLIP model, and **(b)** head adaptation using triplet loss._
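The head adaptation in **(b)** can be sketched as follows. This is a hypothetical minimal example: the head architecture, embedding sizes, margin, and learning rate are illustrative assumptions, not the settings actually used in the paper.

``` python
import torch
import torch.nn as nn

# Hypothetical projection head placed on top of frozen backbone embeddings.
head = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 256))
criterion = nn.TripletMarginLoss(margin=0.2)  # margin is an assumed value
optimizer = torch.optim.Adam(head.parameters(), lr=1e-4)

# Stand-ins for backbone embeddings: anchor and positive depict the same
# sitter, negative depicts a different one.
anchor, positive, negative = (torch.randn(8, 512) for _ in range(3))

# Pull anchor/positive embeddings together, push anchor/negative apart.
loss = criterion(head(anchor), head(positive), head(negative))
optimizer.zero_grad()
loss.backward()
optimizer.step()
```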

![ArtFace ROC](https://www.idiap.ch/paper/artface/static/images/roc.png)

_**ROC curves** of tuned and base CLIP, IResNet100, COTS, and the proposed fusion method. Fusion provides consistent improvements even at low FAR._

| Model | EER (%) | TAR @ 0.1% FAR | TAR @ 1% FAR |
|---|---|---|---|
| COTS FR system | 12.6 | 34.3% | 58.1% |
| CLIP-Base | 17.9 | 8.4% | 33.2% |
| IResNet100-Base | 14.0 | 29.9% | 55.1% |
| CLIP-Base + IResNet100-Base | 13.1 | 29.0% | 54.7% |
| CLIP-Base + IResNet100-Tuned | 12.6 | 35.1% | 57.9% |
| CLIP-LoRA + IResNet100-Base | 11.1 | 34.6% | 62.6% |
| CLIP-LoRA + IResNet100-Tuned | 10.7 | 39.7% | 62.15% |
| **CLIP-LoRA + IResNet100-Base + IResNet100-Tuned** | **9.9** | **39.7%** | **65.9%** |

_Performance comparison of base models, tuned models, fusion combinations, and a COTS FR system. Fusion improves overall accuracy._
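The fusion rows in the table combine similarity scores from multiple models. A minimal sketch of score-level fusion, assuming min–max normalisation and equal weights (the paper's exact fusion scheme may differ); the score values below are made up for illustration:

``` python
import numpy as np

def minmax(scores: np.ndarray) -> np.ndarray:
    """Map raw scores to [0, 1] so different models are on a comparable scale."""
    return (scores - scores.min()) / (scores.max() - scores.min())

# Hypothetical similarity scores from two models over the same comparison pairs.
clip_scores = np.array([0.62, 0.10, 0.55, 0.30])
iresnet_scores = np.array([0.80, 0.05, 0.70, 0.20])

# Equal-weight average of the normalised scores.
fused = (minmax(clip_scores) + minmax(iresnet_scores)) / 2
```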

## Running Code

* Align the images:

``` bash
python align.py -f [path_to_paintings]/* -o data/paintings
```

* Test the full model:

``` bash
python generate-scores.py fusion
python evaluate.py table -f out/fusion.csv
python plot.py roc --log -f out/fusion.csv
```

* Minimal code to instantiate the model and perform inference:

``` python
from lib.models import get_model
from PIL import Image

# Load the fusion model and its preprocessing transform.
model, preprocess = get_model("fusion").torch()
model.eval()

# Open an aligned portrait image and extract its embedding.
image = Image.open("...")
inputs = preprocess(image)
embedding = model(inputs).squeeze()
```

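Since the model outputs face embeddings, deciding whether two portraits depict the same sitter reduces to comparing embedding vectors. A minimal sketch using cosine similarity; random vectors stand in for real embeddings, and the 512-dimensional size is an assumption for illustration:

``` python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Random stand-ins for embeddings produced by the model above.
rng = np.random.default_rng(0)
emb_a = rng.standard_normal(512)
emb_b = rng.standard_normal(512)

score = cosine_similarity(emb_a, emb_b)
# Scores lie in [-1, 1]; a higher score suggests the same sitter.
```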

## License

[CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/)

## Copyright

(c) 2025, Francois Poh, Anjith George, Sébastien Marcel, Idiap Research Institute, Martigny 1920, Switzerland.

https://gitlab.idiap.ch/biometric/code.iccv2025artmetrics.artface/

Please refer to the link above for information about the license and copyright terms and conditions.

## Citation

If you find our work useful, please cite the following publication:

```bibtex
@article{poh2025artface,
  title={ArtFace: Towards Historical Portrait Face Identification via Model Adaptation},
  author={Poh, Francois and George, Anjith and Marcel, S{\'e}bastien},
  journal={arXiv preprint arXiv:2508.20626},
  year={2025}
}
```