<img src='assets/visual_abstract.png' height="50%" width="50%">
</div>

PETAL<i>face</i> is the first work that uses image-quality-adaptive LoRA layers for low-resolution face recognition. The main contributions of our work are:
1. We introduce the use of the LoRA-based PETL technique to adapt large pre-trained face-recognition models to low-resolution datasets.
2. We propose an image-quality-based weighting of LoRA modules to create separate proxy encoders for high-resolution and low-resolution data, ensuring effective extraction of embeddings for face recognition.
3. We demonstrate the superiority of PETAL<i>face</i> in adapting to low-resolution datasets, outperforming other state-of-the-art models on low-resolution benchmarks while maintaining performance on high-resolution and mixed-quality datasets.

<img src='assets/petalface.png'>
</div>

Overview of the proposed PETAL<i>face</i> approach: we include an additional trainable module in the linear layers of the attention blocks and in the final feature-projection MLP. The trainable module is highlighted on the right. Specifically, we add two LoRA layers, where the weightage α is decided based on the input-image quality, computed using an off-the-shelf image quality assessment (IQA) network.
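The quality-adaptive weighting described above can be sketched as a linear layer with a frozen pre-trained weight plus two low-rank adapters, blended by a scalar α derived from the IQA score. This is a minimal NumPy sketch under our own assumptions; the class and variable names are illustrative, not the authors' implementation.

```python
import numpy as np

class QualityAdaptiveLoRALinear:
    """Illustrative PETALface-style linear layer: a frozen weight W0 plus two
    LoRA branches (high-res and low-res proxies) blended by a quality weight."""

    def __init__(self, d_in, d_out, rank=4, seed=0):
        rng = np.random.default_rng(seed)
        # Frozen pre-trained weight (stand-in for the face-recognition backbone).
        self.W0 = rng.standard_normal((d_out, d_in)) * 0.02
        # Two trainable low-rank adapters. B is zero-initialized, so the layer
        # starts out identical to the frozen pre-trained layer (standard LoRA).
        self.A_hr = rng.standard_normal((rank, d_in)) * 0.02
        self.B_hr = np.zeros((d_out, rank))
        self.A_lr = rng.standard_normal((rank, d_in)) * 0.02
        self.B_lr = np.zeros((d_out, rank))

    def forward(self, x, alpha):
        """alpha in [0, 1], from an off-the-shelf IQA network: high quality
        weights the high-res adapter, low quality the low-res adapter."""
        base = x @ self.W0.T
        hr = (x @ self.A_hr.T) @ self.B_hr.T
        lr = (x @ self.A_lr.T) @ self.B_lr.T
        return base + alpha * hr + (1.0 - alpha) * lr
```

Because the B matrices start at zero, the layer initially reproduces the frozen backbone exactly; training only the A/B pairs then specializes each branch to its resolution regime.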