Update README.md
README.md changed:

````diff
--- a/README.md
+++ b/README.md
@@ -38,13 +38,16 @@ The model is trained using the following datasets:
 
 ## training method
 
-
+Prefix-Tuning
+
+
+
 
 ## Fine-tuning Weights
 
 This repository provides one fine-tuned weights:
 
-1. **EmotionCLIP
+1. **EmotionCLIP Weights**
    - Fine-tuned on the EmoSet 118K dataset, without additional training specifically for facial emotion recognition.
    - Final evaluation results:
      - Loss: 1.5465
@@ -125,13 +128,9 @@ for idx in range(num_images, rows * cols):
 plt.tight_layout()
 plt.show()
 ```
-
-
-
-
-
-### Summary
-I proposed a hybrid layer_norm prefix_tuning prompt_tuning training method for efficient fine-tuning CLIP, which can make the model converge faster and have performance comparable to full fine-tuning. However, the loss of generalization performance is still a serious problem. I released EmosetCLIP-V2 trained with this training method, which has an additional neutral category compared to EmosetCLIP-V1, and the performance is slightly improved. Future work aims to expand the training data for difficult categories and optimize the model architecture.
-
+This repository provides two fine-tuned weights:
+  - Accuracy: 0.8042
+  - Recall: 0.8042
+  - F1: 0.8057
 
 ---
````
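The training method this commit records is Prefix-Tuning, and the Summary it removes describes a hybrid layer_norm / prefix_tuning / prompt_tuning scheme. As a rough illustration only, and not the repository's actual implementation: prefix-tuning prepends a small set of trainable "virtual token" vectors to the frozen model's input embeddings, so only the prefix (plus, in the hybrid variant, the LayerNorm affine parameters) receives gradients. A minimal NumPy sketch, where `d_model`, `prefix_len`, `seq_len`, and `with_prefix` are all illustrative assumptions:

```python
import numpy as np

# Illustrative sketch of prefix-tuning (not the repo's code): the pretrained
# token embeddings are frozen; only a short learned prefix is trainable.
rng = np.random.default_rng(0)
d_model, prefix_len, seq_len = 512, 8, 77

frozen_embeddings = rng.normal(size=(seq_len, d_model))  # stand-in for frozen CLIP text embeddings
prefix = np.zeros((prefix_len, d_model))                 # trainable prefix, zero-initialized

def with_prefix(token_embeddings, prefix):
    """Prepend the learnable prefix to the (frozen) token embeddings."""
    return np.concatenate([prefix, token_embeddings], axis=0)

x = with_prefix(frozen_embeddings, prefix)
print(x.shape)  # (85, 512): prefix_len + seq_len vectors enter the transformer
```

In the hybrid variant the removed Summary describes, the LayerNorm scale/shift parameters would additionally be left trainable while all other backbone weights stay frozen.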
Rendered, the two updated regions of README.md read:

```markdown
## training method

Prefix-Tuning

## Fine-tuning Weights

This repository provides one fine-tuned weights:

1. **EmotionCLIP Weights**
   - Fine-tuned on the EmoSet 118K dataset, without additional training specifically for facial emotion recognition.
   - Final evaluation results:
     - Loss: 1.5465
```

and, after the plotting example:

```markdown
This repository provides two fine-tuned weights:
  - Accuracy: 0.8042
  - Recall: 0.8042
  - F1: 0.8057

---
```
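The Accuracy / Recall / F1 figures added by this commit (0.8042 / 0.8042 / 0.8057) are standard multi-class classification metrics; that Recall exactly equals Accuracy is consistent with micro-averaged recall, which coincides with accuracy for single-label multi-class problems. The repository's evaluation code is not shown here, so the following is only a generic pure-Python sketch of how such metrics are typically computed, on toy labels rather than EmoSet data:

```python
# Generic metric computation sketch (toy data, not the repo's evaluation code).
def per_class_counts(y_true, y_pred, label):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
    return tp, fp, fn

def macro_f1(y_true, y_pred):
    """Average the per-class F1 scores with equal class weight."""
    f1s = []
    for label in sorted(set(y_true)):
        tp, fp, fn = per_class_counts(y_true, y_pred, label)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall) if precision + recall else 0.0)
    return sum(f1s) / len(f1s)

y_true = ["joy", "anger", "joy", "neutral"]
y_pred = ["joy", "anger", "neutral", "neutral"]
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
print(accuracy)                       # 0.75
print(round(macro_f1(y_true, y_pred), 3))  # 0.778
```

Whether the reported F1 is macro- or weighted-averaged is not stated in the diff; the sketch above uses macro averaging as one common choice.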