Update README.md
README.md
@@ -34,8 +34,7 @@ Our approach leverages *CLIP* as a prior for perceptual tasks, inspired by cogni
 
 ## Performance
 
-The model was trained on the
-
+The model was trained and evaluated on the EmoSet dataset, following the standard dataset splits. Our method achieves state-of-the-art performance compared to existing approaches, as described in our paper.
 ## Usage
 
 To use the model for inference:
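For context on the "Usage" line above, here is a minimal inference sketch, assuming the released checkpoint follows the standard Hugging Face CLIP interface and is applied zero-shot over EmoSet's eight emotion categories. The model identifier, prompt template, and image path are placeholders, not the repository's actual API:

```python
# Illustrative sketch only: zero-shot emotion recognition with a CLIP-style
# model. "openai/clip-vit-base-patch32" stands in for the released weights,
# and the label list assumes EmoSet's eight emotion categories.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model_id = "openai/clip-vit-base-patch32"  # placeholder checkpoint
model = CLIPModel.from_pretrained(model_id).eval()
processor = CLIPProcessor.from_pretrained(model_id)

labels = ["amusement", "awe", "contentment", "excitement",
          "anger", "disgust", "fear", "sadness"]
prompts = [f"a photo that evokes {label}" for label in labels]

image = Image.open("example.jpg")  # placeholder image path
inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    # logits_per_image has shape (num_images, num_prompts) = (1, 8)
    logits = model(**inputs).logits_per_image
probs = logits.softmax(dim=-1).squeeze(0)

for label, p in sorted(zip(labels, probs.tolist()), key=lambda x: -x[1]):
    print(f"{label}: {p:.3f}")
```

The prompt template and label order here are illustrative; the repository's own README presumably documents the exact checkpoint and inference entry point.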