Update README.md
## Dataset Description

The IRFL dataset consists of idioms, similes, and metaphors with matching figurative and literal images, along with two novel tasks of multimodal figurative detection and retrieval.

Using human annotation and an automatic pipeline we created, we collected figurative and literal images for textual idioms, metaphors, and similes. We annotated the relations between these images and the figurative phrases they originated from, and used these images to create two novel tasks: figurative detection and retrieval.

The figurative detection task evaluates Vision and Language Pre-Trained Models’ (VL-PTMs) ability to understand the relation between an image and a figurative phrase: the task is to choose, out of X candidate images, the one that best visualizes the meaning of the figurative phrase. The retrieval task examines VL-PTMs' preference for figurative images: given a set of figurative and partially literal images, the task is to rank the images by the model's matching score so that the figurative images rank higher, and to compute precision at k, where k is the number of figurative images in the input.

We evaluated state-of-the-art VL models and found that the best models achieved 22%, 30%, and 66% accuracy on our detection task for idioms, metaphors, and similes respectively, versus human accuracy of 97%, 99.7%, and 100%. The best model achieved an F1 score of 61 on the retrieval task.
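The precision-at-k metric used by the retrieval task can be sketched as follows. This is a minimal illustration, not the evaluation code shipped with the dataset; the matching scores below are toy values standing in for a real VL-PTM's image–text scores.

```python
def precision_at_k(ranked_labels, k):
    """Fraction of the top-k ranked images that are figurative.

    ranked_labels: image labels ("figurative" or "literal") sorted by the
    model's matching score, highest first. Per the task definition, k is
    the number of figurative images in the input set.
    """
    top_k = ranked_labels[:k]
    return sum(label == "figurative" for label in top_k) / k

# Toy example: 3 figurative and 2 partially literal images.
scores = [0.91, 0.85, 0.60, 0.55, 0.40]   # stand-in matching scores
labels = ["figurative", "literal", "figurative", "figurative", "literal"]
ranked = [lbl for _, lbl in sorted(zip(scores, labels), reverse=True)]
k = labels.count("figurative")            # k = 3
print(precision_at_k(ranked, k))          # precision@3 = 2/3
```

A perfect model ranks every figurative image above every literal one, giving precision@k of 1.0.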

### Data Fields

★ - refers to idiom-only fields

Detection task
- query (★): the idiom definition the answer image originated from.
- distractors: the distractor images.
- answer: the correct image.
- definition (★): list of all the definitions of the idiom.
- phrase: the figurative phrase.

Retrieval task
- type: the rival categories, FvsPO (Figurative images vs. Partial Objects) or FLvsPO (Figurative Literal images vs. Partial Objects).
- figurative_type: idiom | metaphor | simile
- first_category: the first-category images (Figurative images if FvsPO, Figurative Literal images if FLvsPO).

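A detection-task instance pairs the fields above with a model's matching score. The sketch below is a hypothetical illustration of how such an instance might be evaluated: the instance dict and the toy scoring function are placeholders, not the dataset's actual records or API, and a real pipeline would score candidates with a VL-PTM such as CLIP.

```python
def detect(instance, match_score):
    """Return the candidate image the model ranks highest for the phrase."""
    candidates = instance["distractors"] + [instance["answer"]]
    return max(candidates, key=lambda img: match_score(instance["phrase"], img))

# Hypothetical instance; image values are placeholder identifiers.
instance = {
    "phrase": "spill the beans",
    "distractors": ["img_1", "img_2", "img_3"],
    "answer": "img_4",
}

# Toy matching scores standing in for a VL-PTM's image-text similarity.
toy_scores = {"img_1": 0.2, "img_2": 0.1, "img_3": 0.3, "img_4": 0.8}
prediction = detect(instance, lambda phrase, img: toy_scores[img])
print(prediction == instance["answer"])  # detection accuracy averages this over all instances
```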
## Dataset Collection

We collected figurative and literal images for textual idioms, metaphors, and similes using an automatic pipeline we created. We annotated the relations between these images and the figurative phrases they originated from.

#### Annotation process

We paid Amazon Mechanical Turk workers to annotate the relation between each image and phrase (Figurative vs. Literal).

## Considerations for Using the Data

- Idioms: annotated by crowdworkers with rigorous qualifications and training.
- Metaphors and Similes: annotated by expert team members.
- Detection and Ranking Tasks: annotated by crowdworkers not involved in prior IRFL annotations.

### Licensing Information