Title: ECCV Caption: Correcting False Negatives by Collecting Machine-and-Human-verified Image-Caption Associations for MS-COCO
URL Source: https://arxiv.org/html/2204.03359
Published Time: Thu, 04 Jan 2024 02:00:45 GMT
Sanghyuk Chun, Wonjae Kim, Song Park, Minsuk Chang♢, Seong Joon Oh♣ (NAVER AI Lab)
♢ Now at Google Research. ♣ Now at University of Tübingen.
Abstract
Image-Text Matching (ITM) is a common task for evaluating the quality of Vision and Language (VL) models. However, existing ITM benchmarks have a significant limitation: they contain many missing correspondences, originating from the data construction process itself. For example, a caption is matched with only one image, even though the caption could equally describe other similar images, and vice versa. To correct the massive number of false negatives, we construct the Extended COCO Validation (ECCV) Caption dataset by supplying the missing associations with machine and human annotators. We employ five state-of-the-art ITM models with diverse properties for our annotation process. Our dataset provides ×3.6 positive image-to-caption associations and ×8.5 caption-to-image associations compared to the original MS-COCO. We also propose to use an informative ranking-based metric, mAP@R, rather than the popular Recall@K (R@K). We re-evaluate 25 existing VL models on existing and proposed benchmarks. Our findings are that the existing benchmarks, such as COCO 1K R@K, COCO 5K R@K, and CxC R@1, are highly correlated with each other, while the rankings change when we shift to ECCV mAP@R. Lastly, we delve into the effect of the bias introduced by the choice of machine annotator. Source code and dataset are available at https://github.com/naver-ai/eccv-caption
1 Introduction
Image-caption aligned datasets (e.g., MS-COCO Caption Lin et al. (2014); Chen et al. (2015), Flickr30k Plummer et al. (2015), Conceptual Captions Sharma et al. (2018); Changpinyo et al. (2021)) have become de-facto standard datasets for training and evaluating Vision-Language (VL) models. In particular, Image-to-Text Matching (ITM) tasks Frome et al. (2013); Young et al. (2014); Kiros et al. (2014); Faghri et al. (2018); Gu et al. (2018); Lee et al. (2018); Huang et al. (2018); Li et al. (2019); Song and Soleymani (2019); Wehrmann et al. (2019); Wu et al. (2019); Wang et al. (2020); Chen et al. (2020); Diao et al. (2021); Chun et al. (2021); Chen et al. (2021); Huang et al. (2021); Biten et al. (2022) are widely used benchmarks for evaluating a VL model. The existing ITM benchmark datasets are built by annotating captions for each image (from alt-texts Sharma et al. (2018); Changpinyo et al. (2021); Radford et al. (2021), web crawling Desai et al. (2021), or human annotators Chen et al. (2015)) without considering possible associations with other images in the dataset. The collected image-caption pairs are treated as the only positives in the dataset, while all other pairs are considered negatives. However, in practice, there exists more than one caption to describe one image. For example, the description “A man that is standing up and has a tennis racquet” may describe multiple images with tennis players equally well (Figure 1). We observe that the number of missing positives is tremendous; there are ×3.6 more positive image-to-caption correspondences and ×8.5 more caption-to-image correspondences than in the original MS-COCO dataset.
While the huge number of false negatives (FNs) in VL datasets is potentially sub-optimal for training VL models, it is downright detrimental for evaluation. For example, the small number of positive correspondences in image-caption-aligned datasets limits the choice of evaluation metrics. (In MS-COCO Caption, a caption is matched to only one image, and an image is matched to five captions; other datasets usually have one caption for each image.) In other tasks, such as image retrieval Wah et al. (2011); Krause et al. (2013); Oh Song et al. (2016); Liu et al. (2016), the positives and negatives are defined by class labels; hence, the number of possible matched items is large enough to measure precision or mean average precision (mAP) metrics. On the other hand, because existing ITM benchmarks only have one positive correspondence for each item, they can only use recall-based metrics (e.g., Recall@k) that are known to be less informative than precision- or ranking-based evaluation metrics Musgrave et al. (2020). In this paper, we focus on correcting the FNs in the evaluation dataset and the recall-based evaluation metrics to enable a fair comparison of VL models.
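To make the limitation concrete, here is a minimal Recall@k sketch for the single-ground-truth setting described above (the function name and toy similarity matrix are our own illustration, not from the paper):

```python
import numpy as np

def recall_at_k(sim, gt, k=1):
    """Recall@k with a single ground-truth item per query: the fraction
    of queries whose only annotated positive appears among the top-k
    retrieved items.

    sim: (num_queries, num_items) similarity matrix
    gt:  gt[q] is the index of the single annotated positive for query q
    """
    topk = np.argsort(-sim, axis=1)[:, :k]  # indices of top-k items per query
    return float(np.mean([gt[q] in topk[q] for q in range(len(gt))]))
```

Because each query has only one reference, retrieving a perfectly plausible but unannotated item counts as a miss, which is precisely the false-negative problem discussed above.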
Figure 1: Inherent multiplicity of correspondences in MS-COCO Caption. While any image-caption pair above makes sense (positive pair), only red and blue image-caption pairs are marked as positive in MS-COCO Caption.
As our first contribution, we correct the FNs in MS-COCO Caption by constructing the Extended COCO Validation (ECCV) Caption dataset. We annotate with human workers whether each MS-COCO image-caption pair is positive. The labor cost of this process scales quadratically with the size of the dataset (e.g., MS-COCO has 76B possible image-caption pairs, while the number of images is only 123K). Since verifying every possible image-text pair is not scalable, we subsample the queries in the dataset and reduce the number of candidates for positive matches with a machine-in-the-loop (MITL) annotation process: a model first reduces the number of candidate positives, and then human annotators evaluate the machine-selected candidates. We employ five state-of-the-art ITM models with distinct properties as machine annotators: CLIP Radford et al. (2021), ViLT Kim et al. (2021), VSRN Li et al. (2019), PVSE Song and Soleymani (2019), and PCME Chun et al. (2021). After post-processing, ECCV Caption contains 1,261 image queries (originally 5,000) but with 17.9 positive captions per image query on average (originally 5). It also contains 1,332 caption queries (originally 25,000) with 8.5 positive images per caption (originally 1).
While the use of a machine annotator is inevitable for the sake of scalability, the choice of a particular model may bias the dataset towards the specifics of that model. This is problematic because different models show different filtered results to the human annotators, calling into question the impartiality of the annotated dataset towards any particular model. In other words, MITL annotations are not stable across model choices. Our studies show that the underlying ML model conditions the annotated dataset towards favoring certain models over others; this practice could therefore lead to biased evaluation results on such datasets. We show that the rankings among VL models can be arbitrarily shifted by modifying the underlying ML model. Our study also shows that using multiple machine annotators can alleviate machine bias in dataset construction. We note that the findings are applicable to a wide range of tasks in which users put labels on samples from a long list of candidate classes; our task is a special case of such a framework.
A similar MITL approach for expanding the positive matches was also employed by Parekh et al. Parekh et al. (2020), resulting in the CrissCrossed Caption (CxC) dataset. However, CxC focuses on scoring text-to-text similarities, resulting in many missing positives in the text-to-image relationship. Furthermore, CxC only employs one language-based machine annotator, which, as we observe, can lead to a biased dataset. Our ECCV Caption focuses on the inter-modality relationship and utilizes five ITM methods to avoid biased dataset construction. As another attempt to correct the COCO benchmark, Chun et al. Chun et al. (2021) annotate pseudo-positives by using the COCO instance classes, called Plausible Match (PM). For example, both images in Figure 1 contain the same object class, “tennis racket”; hence, the red and blue captions are considered positives for both the red and blue images. Although PM items can detect most of the false negatives, they also introduce many false positives. Compared to PM Chun et al. (2021), which relies on noisy proxies for correspondence, we correct the false negatives with “human ground truths” with the help of machine annotations. All in all, our dataset achieves higher recall than CxC and higher precision than PM.
We fix not only the FNs but also the evaluation metrics. We argue that R@1 can overestimate the model performance by focusing only on the accuracy of the top-1 item rather than the rest of the items. Instead, we propose to use a better ranking-based evaluation metric, mAP@R Musgrave et al. (2020). Our human study shows that mAP@R is better aligned with humans than Recall@k. Now that the FNs are corrected in the evaluation sets and the evaluation metric is fixed, we re-examine the known ranking of 25 state-of-the-art VL models evaluated on COCO Caption. We observe that COCO 5K R@1 & R@5 and CxC R@1 are highly correlated (Kendall’s rank correlation τ > 0.87). On the other hand, the rankings across methods measured by mAP@R on ECCV Caption and by COCO 1K R@1 are less well-correlated (τ = 0.47). This confirms the observations by Musgrave et al. Musgrave et al. (2020) and Chun et al. Chun et al. (2021) on class-based datasets.
Our contributions are as follows. (1) We discover the false negative (FN) problem and quantify the exact number of wrong labels in MS-COCO: there are ×3.6 more positive image-to-caption associations and ×8.5 more caption-to-image associations than in the original MS-COCO. (2) We construct a corrected ITM test dataset, ECCV Caption, to avoid wrong evaluations caused by FNs. We employ a machine-in-the-loop (MITL) annotation process to reduce the amount of human verification, saving 99.9% of the cost of full exhaustive verification. ECCV Caption shares the same images and captions as the original MS-COCO; therefore, existing methods can be evaluated on our dataset without additional training. We fix not only the annotations but also the evaluation metric: we propose to use mAP@R, a more human-aligned metric than R@1 for comparing model performances, as shown in our human study. (3) We re-evaluate 25 state-of-the-art VL models on our ECCV Caption dataset based on mAP@R instead of Recall@k. In Table 4 and Figure 4, we observe that MS-COCO R@1 and ECCV mAP@R show a low correlation; focusing on MS-COCO R@1 can therefore mislead the true ranking between models, in line with Musgrave et al. Musgrave et al. (2020) and Chun et al. Chun et al. (2021). (4) We provide a detailed analysis of the constructed dataset and the model bias. In particular, we focus on avoiding potential model biases in the proposed dataset by employing multiple models. Our analysis shows that this design choice is effective in addressing the model bias.
2 Related Works
2.1 Noisy many-to-many correspondences of image-caption datasets
There have been a few attempts to introduce many-to-many or noisy correspondences for VL datasets. Parekh et al. Parekh et al. (2020) construct the CrissCrossed Caption (CxC) dataset by employing an MITL approach similar to ours. However, CxC focuses on intra-modality similarity, particularly text-to-text. They employed the Universal Sentence Encoder Cer et al. (2018) and average bag-of-words (BoW) based on GloVe embeddings Pennington et al. (2014), while we directly focus on the inter-modality relationships and utilize powerful ITM methods Radford et al. (2021); Kim et al. (2021); Li et al. (2019); Song and Soleymani (2019); Chun et al. (2021) to select candidates for validation by humans. CxC contains human ratings for 89,555 image-to-caption associations, among which 35,585 are positive, ×1.4 more positive relationships than the 25,000 in COCO Caption. We show that the additional positives in CxC are precise, but their annotations still have many missing positives (i.e., high precision but low recall), with the result that R@1 on CxC perfectly preserves the rankings of VL models on COCO 5K R@1. On the other hand, our ECCV Caption has ×4.4 more positives (×3.6 image-to-caption correspondences and ×8.5 caption-to-image correspondences) than COCO Caption and roughly three times more positives than CxC. Furthermore, the abundance of positive pairs makes it possible to measure mAP on our dataset, unlike on CxC.
Another attempt by Chun et al. Chun et al. (2021) focused on precision rather than R@1 by annotating pseudo-positives in a fully algorithmic approach. The authors defined “plausible matching (PM)” items that share the same instance classes with the query image (or the image corresponding to the query caption). For example, both images in Figure 1 contain the same instance class, “tennis racket”, leading to the conclusion that the red and blue captions are marked as positives for both the red and blue images. More precisely, two instances are PM if their binary instance-class vectors y₁, y₂ ∈ {0, 1}^d differ in at most ζ positions, where d is the number of instance classes (e.g., d = 80 for COCO). Using the class-based pseudo-positives, Chun et al. propose the Plausible-Match R-Precision (PMRP) metric, an R-Precision Musgrave et al. (2020) metric based on the PM policy. The authors propose to use multiple ζ values (e.g., ζ ∈ {0, 1, 2}) and report the average precision value. PM items can detect many missing positives in the dataset, but we observe that most PM pseudo-positives are not actual positives (i.e., high recall but low precision) — see Table 2. We also observe that PMRP shows a low correlation to other evaluation metrics; PMRP is a noisy metric compared to others.
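Under our reading of the definition above, the PM rule can be sketched as follows (the function name and toy class vectors are ours, for illustration only):

```python
import numpy as np

def is_plausible_match(y1, y2, zeta=0):
    """Two items are Plausible-Match (PM) pseudo-positives when their
    binary instance-class vectors (length d, e.g., d = 80 for COCO)
    differ in at most `zeta` positions (Hamming distance <= zeta)."""
    return int(np.sum(np.asarray(y1) != np.asarray(y2))) <= zeta
```

With ζ = 0 the rule requires identical class sets, which already matches many visually distinct images that happen to share the same 80-class signature; larger ζ makes the pseudo-positives even looser.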
2.2 Machine-in-the-loop (MITL) annotation
Humans and machines complement each other in the annotation process as they have different comparative advantages. Humans are the ultimate source of true labels, but they are slow and prone to errors and biases Snow et al. (2008); Sorokin and Forsyth (2008); Ipeirotis et al. (2010). Machines are highly scalable, but their generalizability to unseen samples is limited. Machines are also prone to their own versions of errors and biases Mehrabi et al. (2021); Scimeca et al. (2022). MITL annotations have been designed to take the best of both worlds Boykov and Jolly (2001); Settles (2009); Xu et al. (2016); Benenson et al. (2019).
Depending on the required trade-off between annotation quality and efficiency, one may opt for either a single-turn or a multi-turn annotation pipeline. The latter serves the maximal demand for annotation quality: humans and machines alternate to correct and learn from each other’s annotations Settles (2009); Benenson et al. (2019). This is a widely used technique, with applications ranging from building a dictionary of cooking vocabularies Chang et al. (2018) to supporting real-time screen reading for blind people Guo et al. (2016) and characterizing system failures Nushi et al. (2018). Here, we focus on single-turn MITL annotation, the atomic building block of MITL pipelines in general. There are two types of the single-turn paradigm: machine-verified human annotations Wu and Yang (2006); Verma and Jawahar (2017) and human-verified machine annotations. We focus on the latter, which is highly relevant for dealing with huge sources of data.
Under the human-verification framework, machines make label proposals for each image, focusing more on recall than precision Andriluka et al. (2018); Kuznetsova et al. (2020). Previous crowdsourcing research in human-computer interaction (HCI) had mainly focused on the annotation interface and its effects on the annotation Kaplan et al. (2018); Song et al. (2018); Chung et al. (2019), or building a crowdsourcing workflow that leverages microtask pipelines Bernstein et al. (2010); Kim et al. (2014). We investigate the side effects of the model choice in the MITL annotation paradigm where machines provide candidate label proposals.
3 ECCV Caption Dataset Construction
In this section, we describe the ECCV Caption construction details. We annotate image-caption pairs in MS-COCO to address the multiplicity of MS-COCO. However, the number of candidates is too large for exhaustive verification by humans: 76B for the whole dataset and 125M for the test split alone. To reduce the amount of human judgment, we employ a single-turn machine-in-the-loop (MITL) annotation pipeline with three stages: (1) filtering by machine annotators; (2) judging the filtered relationships by MTurk workers, with additional verification by internal workers; (3) post-processing and merging with CxC.
3.1 Model candidates for machine annotators
We choose five VL models with diverse properties to cover both diversity and practical relevance. The models use different text backbones (Bi-GRU Cho et al. (2014), Transformer Vaswani et al. (2017)), visual backbones (ResNet-152 He et al. (2016), Faster R-CNN Ren et al. (2015), ViT Dosovitskiy et al. (2021)), training objective functions, and training datasets as shown in Table 1. We use the officially released pre-trained weights by the authors. Specifically, we use the CutMix Yun et al. (2019) pre-trained version for PCME to match the retrieval performances with others, and CLIP ViT-B/32, the largest model at the time of our data construction. We describe more details of each method in Section A.1.
Table 1: Overview of the machine annotators. Differences among five ITM models in terms of architectures and training objectives are shown. ViLT and CLIP are trained on a massive amount of aligned VL data, while other methods only use COCO Caption.
We quantify the diversity of the models by measuring the differences in their retrieved items. We first retrieve the top 25 images for each model on the captions of the COCO Caption test split. We measure the similarities of the models with two different metrics. First, for every pair of models, we measure the Kendall rank correlation Kendall (1938) between the two rankings of the retrieved items by the models. We observe that the models usually have low similarity (τ < 0.3), except for PVSE and PCME. We additionally measure, for each pair of models i and j, the average ranking of model i’s top-1 ranked item by model j. The top-1 items retrieved by the models are usually not included in the top-3 items by the others. These analyses show that the chosen models are diverse and their retrieved items do not correlate much. The full results are shown in Section A.2.
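For reference, the pairwise Kendall τ between two models' rankings of the same retrieved items can be computed as below (a plain implementation without tie handling; the variable names are ours):

```python
def kendall_tau(a, b):
    """Kendall rank correlation (tau-a, assuming no ties) between two
    rankings a and b of the same items: (concordant - discordant)
    pairs divided by the total number of pairs."""
    n = len(a)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (a[i] - a[j]) * (b[i] - b[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)
```

τ is 1 for identical rankings and −1 for exactly reversed ones; values below 0.3, as reported above, indicate the models order their retrievals quite differently.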
3.2 Crowdsourcing on Amazon Mechanical Turk
We crowdsource image-caption matches on the Amazon Mechanical Turk (MTurk) platform. For the sake of scalability, we subsample 1,333 caption queries and 1,261 image queries from the COCO Caption test split. Since the number of all possible matches is still prohibitive (40M), we employ a filtering strategy to reduce the number of candidates for human verification. We pre-select the top-5 captions and images retrieved by the five models. After we remove the duplicate pairs from the (1,261 + 1,333) × 5 × 5 = 64,850 pairs, 46,424 pairs remain.
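The deduplication step amounts to collecting (query, item) pairs from every model's top-5 list into a set; a minimal sketch with hypothetical identifiers:

```python
def collect_candidate_pairs(retrievals):
    """retrievals: {model_name: {query_id: [top-5 retrieved item ids]}}.
    Returns the set of unique (query, item) candidate pairs to be sent
    to human annotators; duplicates across models collapse naturally."""
    pairs = set()
    for per_query in retrievals.values():
        for query_id, items in per_query.items():
            for item_id in items:
                pairs.add((query_id, item_id))
    return pairs
```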
We package the task for human annotators into a series of Human Intelligence Tasks (HITs). Each HIT contains 20 pairs, consisting of 18 machine-retrieved pairs to be annotated, 1 true positive (i.e., an original positive pair), and 1 true negative (a random pair not in the top-25 of any model). The golden examples are used for the qualification process; if a submitted HIT contains wrong answers to the golden examples, we manually verify the HIT. For each image-caption pair candidate, workers choose an answer among “100% YES”, “Partially YES, but”, “Mostly NO, because”, and “100% NO”. We use four choices instead of a three-level scale (“YES”, “Not Sure”, and “NO”) to discourage workers from selecting “Not Sure” for all questions. We assigned 2,160 HITs, consisting of 43,200 pairs to be verified, to 970 MTurk workers. The crowdsourcing details, including an example HIT, compensation details, worker statistics, and detailed statistics for each machine annotator, are in Section B.3.
3.3 Postprocessing MTurk annotations
We observe that 21,995 of the 43,200 associations are annotated as positives (“Yes” or “Weak Yes”). We then filter out 18 meaningless captions (e.g., “I am unable to see an image above”), 14 wrong captions found by workers (e.g., “A group of birds flying above the beach” for an image with many kites), and 1 duplicate image found in the training set. The full list is in Section C.1.
Table 2: Precision and recall of the existing benchmarks measured by our human verified positive pairs. A low Prec means that many positives are actually negatives, and a low Recall means that there exist many missing positives.
Table 3: The number of positive images and captions for each dataset. We show the number of positive items for the subset of the COCO Caption test split. The number of query captions and images are 1,332 and 1,261, respectively.
Using the 21,995 human-verified positives, we report the precision and recall of the existing benchmarks. Let t_i be the set of human-annotated positives for the query i in Section 3.2, and r_i be the set of positives for i in the target dataset. Note that our human-annotated positives are based on the top-5 retrieved items of our machine annotators; if the original “GT” item is ranked in the top-5 by none of the models, then t_i will not include the original “GT” item. To prevent this, we use r′_i = r_i ∩ h_i, where h_i is the set of human-verified pairs for the query i (i.e., the top-5 items of our machine annotators). We filter out the cases where r′_i = ∅ to prevent an ill-defined result. (This filtering process is more critical to the “I2T” precision results because the number of original “GT” items per query is 1. These results are revised from the previous revision (v3).)
We define the precision and recall of a dataset as Prec = (1/N) Σᵢ₌₁ᴺ |r′_i ∩ t_i| / |r′_i| and Recall = (1/N) Σᵢ₌₁ᴺ (1 − |t_i \ r′_i| / |t_i|). Table 2 shows the precision and recall of COCO Caption, CxC Parekh et al. (2020), and Plausible Match (PM) pseudo-positives Chun et al. (2021). While COCO and CxC show high precision, we observe that their recall is significantly low, around or less than 20%. Evaluating models on such a low-recall dataset with the R@1 metric can be highly misleading: a model may retrieve good enough positive items which are not captured in the dataset, resulting in erroneously low R@1 scores.
On the other hand, around 60% of the positives can be captured by PM, but a large fraction of the PM pseudo-positives are not actual positives (i.e., high recall but low precision).
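The dataset-level precision and recall defined above can be sketched directly from the set definitions (the dict-of-sets representation and function name are our own):

```python
def dataset_precision_recall(t, r_prime):
    """t[i]: human-verified positives for query i;
    r_prime[i]: dataset positives for i restricted to the human-verified
    candidate pool. Queries with empty r_prime[i] are filtered out,
    as described in the paper."""
    queries = [i for i in t if r_prime.get(i)]
    prec = sum(len(r_prime[i] & t[i]) / len(r_prime[i]) for i in queries) / len(queries)
    rec = sum(1 - len(t[i] - r_prime[i]) / len(t[i]) for i in queries) / len(queries)
    return prec, rec
```

A low precision means many of a benchmark's positives are actually negatives; a low recall means the benchmark is missing many true positives.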
Figure 2: ECCV Caption examples. The given caption query: “A herd of zebras standing together in the field”. Red: original positive. Green: annotated as “100% Yes”. Blue: annotated as “Weak Yes”. More examples are in Section C.2.
(a)Number of positive pairs.
(b)Multiplicity by positive items.
Figure 3: Multiplicity in ECCV Caption. (a) The number of positive pairs in ECCV Caption. Dashed lines denote the number of the original COCO positives (1 image for each caption, and 5 captions for each image). ECCV Caption contains plenty of positive items per each modality. (b) PCME-predicted multiplicity against the number of positive captions for each image. There exists a positive correlation.
We consider the CxC positives as an additional, sixth set of machine-and-human-verified annotations, and extend our human-verified positives with the CxC positives to construct the final ECCV Caption. Table 3 shows the detailed statistics of CxC, our human-verified positives, and ECCV Caption. Overall, ECCV Caption has ×8.47 more positive images and ×3.58 more positive captions than the original dataset. Figure 3(a) shows the number of positive images and captions per item; there exist many positives beyond the original COCO associations. We illustrate example image-caption pairs from ECCV Caption in Figure 2 and Section C.2.
We additionally analyze the multiplicity of ECCV Caption with PCME Chun et al. (2021), which produces a degree of multiplicity (uncertainty) for each query. Figure 3(b) shows that more uncertain images correspond to more captions in our dataset. In other words, our new annotations capture the hidden FNs in COCO well.
4 Re-evaluation of ITM models on ECCV Caption
In this section, we re-evaluate the existing VL models on our new dataset and previous benchmarks. We first introduce the evaluation metrics and comparison methods (§4.1). We compare the performances and analyze the results (§4.2).
4.1 Evaluation metrics and comparison methods
Evaluation metrics.
The existing ITM benchmarks (e.g., COCO Caption) use Recall@k metrics, particularly Recall@1 (R@1). Specifically, previous works measure R@1 for 5-fold validation splits (i.e., each split has 1K images) and for the full test split Karpathy and Fei-Fei (2015); the former is called COCO 1K R@k and the latter COCO 5K R@k. Previous studies separately report image-to-text and text-to-image retrieval R@1, R@5, and R@10 scores. However, as shown by Musgrave et al. Musgrave et al. (2020), R@k is not an informative metric; embedding spaces with nearly 100% R@1 can have very different properties. The problem becomes even worse for the ITM benchmarks, whose queries have very few (usually only one) references: even if a model correctly retrieves plausible items that are not among the set of original positives, the current benchmark cannot evaluate the model correctly. It is common to use larger values of k to penalize wrong yet plausible predictions less. However, as shown in Figure 3(a), the actual number of plausible positives can be larger than the typical choice of k (e.g., 5 or 10). Instead, we suggest using mAP@R Musgrave et al. (2020), a modified mAP measured by retrieving R items, where R is the number of positives for the query. Previous ITM benchmarks cannot employ mAP@R because R is too small (i.e., 1). Thanks to our human-verified ground-truth positives, we can reliably measure mAP@R on ECCV Caption.
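A minimal per-query mAP@R sketch, following the definition in Musgrave et al. (the helper itself is our own illustration):

```python
def map_at_r(ranked_items, positives):
    """mAP@R for one query: look at the top R = |positives| retrieved
    items and average precision-at-rank over the ranks that hold a
    positive, dividing by R so missed positives contribute zero."""
    R = len(positives)
    hits, ap = 0, 0.0
    for rank, item in enumerate(ranked_items[:R], start=1):
        if item in positives:
            hits += 1
            ap += hits / rank  # precision at this rank
    return ap / R
```

Unlike R@1, a model is rewarded for ranking all R positives highly rather than for a single lucky top-1 hit.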
We additionally conduct a human study to confirm that mAP@R is better aligned with humans than R@k. We collect 3,200 pairwise preferences of human annotators among five retrieval outcomes: (A) only the top-1 is wrong, (B) only the top-1 is correct, (C) the top-1 to top-5 are wrong but the others are correct, (D) only the top-5 item is correct, and (E) all items are wrong. For example, if the number of positives is 8, then (A) shows 0 R@1, 100 R@5, and 66.0 mAP@R; (B) shows 100 R@k and 12.5 mAP@R; (C) shows 0 R@k and 10.3 mAP@R; and (D) shows 0 R@1, 100 R@5, and 2.5 mAP@R. We compute user preference scores using the Bradley–Terry model Bradley and Terry (1952). We observe that mAP@R is exactly aligned with the human preference scores (A: 70.85, B: 13.15, C: 10.66, D: 4.89, E: 0.44). We provide the details of the human study in Section D.1.
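Bradley–Terry preference scores can be fitted from pairwise win counts with the standard minorization–maximization (MM) iteration; a compact sketch (the wins matrix below is synthetic, not the study's data):

```python
import numpy as np

def bradley_terry(wins, iters=500):
    """Fit Bradley-Terry strengths from a pairwise wins matrix:
    wins[i, j] = number of times option i was preferred over option j.
    Returns strengths normalized to sum to 1 (MM algorithm)."""
    n = wins.shape[0]
    p = np.ones(n)
    for _ in range(iters):
        for i in range(n):
            # MM update: p_i <- W_i / sum_j n_ij / (p_i + p_j)
            den = sum((wins[i, j] + wins[j, i]) / (p[i] + p[j])
                      for j in range(n) if j != i)
            if den > 0:
                p[i] = wins[i].sum() / den
        p = p / p.sum()
    return p
```

With two options where A beats B 8 times out of 10, the fitted strengths converge to 0.8 and 0.2, matching the empirical win rate.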
We also report modified Plausible Match R-Precision (PMRP) scores by changing R to min(R, 50), because the number of pseudo-positives R can be very large (e.g., larger than 10,000) but most of them are not actual positives (Table 2). While Chun et al. Chun et al. (2021) proposed to use the average R-Precision over three different thresholds (e.g., ζ ∈ {0, 1, 2}), we only report PMRP for ζ = 0. We additionally compute R@1, R@5, and PMRP scores on the original COCO Caption, R@1 on CxC, and R@1 and R-Precision on ECCV Caption to analyze the correlation of each evaluation metric with ECCV mAP@R.
Table 4: Re-evaluating VL models. ECCV Caption mAP@R, R-Precision (R-P), Recall@1 (R@1), CxC R@1, COCO 1K R@1, 5K R@1, PMRP, and RSUM (the summation of COCO 1K recalls) are shown. The numbers are the average between the image-to-text and text-to-image retrieval results. Full numbers for each modality and COCO R@5, R@10 results are in Appendix D.3. † denotes our re-implementation, and “zero-shot” for VinVL and ViLT denotes VL pre-trained models without fine-tuning on COCO Caption for the retrieval task.
(a)Comparison of COCO, CxC, ECCV and PMRP.
(b)Comparison of Recall@1 metrics.
(c)Comparison of ECCV metrics.
Figure 4: Ranking correlation between different evaluation metrics. The ranking of methods is largely preserved between COCO and CxC Recall@1, while it is rarely preserved among COCO Recall@1, ECCV mAP@R, and PMRP.
Table 5: Rank correlations between evaluation metrics. A higher τ denotes that two rankings are highly correlated, while τ values near zero denote that two rankings are barely correlated. We highlight the highly correlated pairs (τ > 0.8) with red text. “RSUM” denotes the summation of COCO 1K R@1, R@5, and R@10 for each modality.
Evaluated methods.
We compare 25 state-of-the-art VL models, whose trained weights are publicly accessible, categorized into four groups: (1) visual semantic embedding (VSE) methods with the ResNet-152 He et al. (2016) image encoder and Bi-GRU Cho et al. (2014) text encoder, including VSE0, VSE++ Faghri et al. (2018), PVSE Song and Soleymani (2019) (K=1 and K=2), and PCME Chun et al. (2021) (the official model and the CutMix pre-trained version); (2) VSE methods with region features extracted by a Visual Genome Krishna et al. (2017) pre-trained Faster R-CNN Ren et al. (2015), based on the implementations by Anderson et al. Anderson et al. (2018) and Lee et al. Lee et al. (2018), including VSRN Li et al. (2019), VSRN + AOQ Chen et al. (2020), CVSE Wang et al. (2020), SGR, SAF Diao et al. (2021), and VSE∞ with BUTD region, grid, and WSL grid features Chen et al. (2021) (technically speaking, VSE∞ (WSL grid) does not use region features but CNN features extracted from an Instagram-trained ResNeXt Mahajan et al. (2018a); this study treats all VSE∞ variants as region feature-based models for convenience); (3) large-scale VL pre-training (VLP) methods, including pre-trained CLIP with ViT-B/32, ViT-B/16, and ViT-L/14 backbones Radford et al. (2021), pre-trained and fine-tuned ViLT Kim et al. (2021), pre-trained and fine-tuned VinVL Zhang et al. (2021), and fine-tuned BLIP Li et al. (2022). Here, “pre-trained” signifies that the model is trained with a massive image-text aligned dataset but is not specifically trained for COCO Caption; “fine-tuned” signifies that the model is fine-tuned on COCO Caption for the ITM task.
We note that VL transformers except CLIP need O(|C| × |I|) forward operations to compute the full pairwise ranks between |C| captions and |I| images, while the other methods only need O(|I|) + O(|C|) forward operations to compute the full pairwise ranks based on cosine similarity. For example, VinVL takes 25 hours to compute the full pairwise ranks for the COCO Caption test split on a single A100 GPU, while VSE++ only takes 1 minute in the same environment. (4) PVSE models with different negative mining (NM) methods, including no NM, semi-hard NM (SHM) Schroff et al. (2015), and hardest NM (HNM) Faghri et al. (2018).
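The forward-cost gap noted above can be sketched as follows; this is a toy NumPy sketch of the dual-encoder side (names and shapes are ours, not the models' actual APIs). Dual encoders embed each image and caption once, after which every pairwise score is a cheap dot product, whereas pairwise cross-encoders need one forward pass per (image, caption) pair:

```python
import numpy as np

def dual_encoder_scores(img_feats, cap_feats):
    """Dual encoders (e.g., VSE++, CLIP): |I| + |C| encoder forwards,
    then all pairwise cosine similarities come from one matrix product."""
    img = img_feats / np.linalg.norm(img_feats, axis=1, keepdims=True)
    cap = cap_feats / np.linalg.norm(cap_feats, axis=1, keepdims=True)
    return img @ cap.T  # shape (|I|, |C|)

# Forward-pass counts for the COCO 5K test split (5,000 images, 25,000 captions):
n_img, n_cap = 5_000, 25_000
dual_forwards = n_img + n_cap   # 30,000 forwards for dual encoders
cross_forwards = n_img * n_cap  # 125,000,000 forwards for pairwise cross-encoders
print(dual_forwards, cross_forwards)
```

The four-orders-of-magnitude difference in forward passes is what turns "1 minute" into "25 hours" in the timing comparison above.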
We use the official trained weights for each model, with a few exceptions: we re-implement VSE0, VSE++, PCME with a CutMix pre-trained ResNet, and PVSE models with various NM strategies. The training details are in Section D.2.
4.2 Re-evaluation of ITM methods
Table 4 and Figure 4 show the full comparison of 25 VL models under different evaluation metrics. We report Kendall’s rank correlations (tau-b) between metrics in Table 5; a larger τ denotes that two metrics are more correlated. We report the full table, including modality-wise results and R@5 and R@10 scores, in Section D.3. We first observe that R@k scores across different datasets have high correlations among themselves (Figure 4(b) and Section D.3). In terms of ranking correlation, COCO 1K R@1 shows almost τ = 0.9 with the rankings yielded by COCO 1K R@5 (0.87), COCO 5K R@1 (0.89) and R@5 (0.97), and CxC R@1 (0.89). This implies that measuring Recall@k on different benchmarks, such as the original COCO Caption, CxC, and ECCV Caption, is no more informative than measuring Recall@k on COCO 1K or 5K alone. On the other hand, the rankings by COCO 1K R@1 are not well preserved for PMRP (0.45), ECCV R@1 (0.72), ECCV R-Precision (0.39), and ECCV mAP@R (0.47) in Kendall’s τ. This also implies that enlarging k in R@k (e.g., using R@5 or R@10 instead of R@1) cannot be an alternative to mAP@R, because R@k metrics are highly correlated with each other, as shown in Table 5. We also observe that the rankings by PMRP are relatively less correlated with the other metrics, such as COCO R@1 (0.45), ECCV R@1 (0.29), and ECCV mAP@R (0.20) in Kendall’s τ.
Our re-evaluation shows that the existing ITM evaluation benchmarks can overestimate VL model performance by focusing only on COCO R@1, as the rankings between COCO R@1 and ECCV mAP@R are not largely preserved. For example, we observe that the hardest negative mining technique Faghri et al. (2018), previously deemed useful for ITM tasks, is actually selectively effective for R@1 rather than for the actual task itself. Under our new metrics such as ECCV mAP@R, the milder strategy of semi-hard negative mining is more effective; see Figure 5(a). Chun et al. Chun et al. (2021) observed a similar pattern in the CUB Caption dataset Wah et al. (2011) using class-defined positives; our finding is the first such observation on a practical large-scale VL dataset. Similarly, we observe that many large-scale VL pre-training methods with high R@1 scores show inferior ECCV mAP@R scores compared to other visual semantic embedding techniques. For example, CLIP ViT-L/14 shows a superior COCO 1K R@1 to PCME (55.4% and 40.1%, respectively). However, in terms of ECCV mAP@R, CLIP is inferior to PCME (28.0% and 37.1%, respectively).
Similarly, we observe that PMRP behaves differently from the other metrics. In particular, the contrastive models without a negative mining strategy are specialized to the PMRP metric (Figure 5(b)). We presume that this is because the contrastive learning strategy enforces features with similar objects to be mapped to a similar embedding space. In contrast, the best models on COCO and ECCV (e.g., BLIP, VinVL, and VSE∞) show inferior PMRP scores (Figure 5(c)). We presume that this is because PMRP only captures the existence or absence of objects, while an optimal retrieval should also consider the plausibility of matched image-caption pairs.
(a) Triplet mining strategies.
(b) Contrastive methods.
(c) Best R@1 models.
Figure 5: Rankings of different VL models. Ranking of (a) PVSE models with diverse triplet mining strategies (b) contrastive methods (c) the best models are shown.
5 Discussion and Limitations
Potential machine biases in our dataset.
Our dataset construction process contains the MITL annotation process, where the choice of machine annotators can potentially harm the dataset quality. The positives in our dataset are the items retrieved by the machine annotators. If the machines are biased towards undesired patterns (e.g., favoring certain items over others), future methods built on our benchmark will overfit to those patterns. In this work, we employ five diverse machine annotators to reduce the potential biases introduced by any single model. In Appendix E, we explore and quantify the effect of the choice of machine annotators on the dataset quality. From the study, we conclude that our strategy (using more models) is effective in mitigating the biases of a specific model.
Scale of ECCV Caption.
In this work, we subsample 1,333 caption queries (5.3% of the full caption queries) and 1,261 image queries (25.2% of the full image queries) to reduce the scale of annotations. Note that without subsampling, we would need to verify (25,000 + 5,000) × 5 × 5 = 750K pairs, which would cost 16 times more than our current version, almost $60K. Because we only subsample queries, without limiting the gallery samples, our dataset is an unbiased subset of the original COCO Caption. To scale up ECCV Caption, we would have to reduce the cost of human verification by reducing the total number of verifications. This could be achieved by a multi-turn MITL annotation process that alternately repeats training machine annotators on human-annotated associations and verifying machine annotations with human workers. After enough iterations of this process, we could automatically scale up our annotations using the resulting high-quality machine annotators, with only low-confidence associations verified by humans.
Noisy annotations.
Despite our additional verification process to maintain annotation quality, there can be noisy annotations (i.e., false positives) in ECCV Caption due to the noisy nature of crowdsourced annotations. Noisy annotations can also occur because we use both “100% YES” and “Partially YES” to build positive pairs. However, we still encourage the use of ECCV Caption for evaluating VL models, because the existing datasets are noisier; they usually have only one positive item per query and contain tremendously many FNs. Moreover, the noisy annotations in our dataset are still “plausible” rather than “wrong”. We provide more discussion in Appendix F. Finally, we expect that a multi-turn MITL process can improve not only the labeling cost but also the annotation quality, as shown by Benenson et al. Benenson et al. (2019).
6 Conclusion
MS-COCO Caption is a popular dataset for evaluating image-text matching (ITM) methods. Despite its popularity, it suffers from a large number of missing positive matches between images and captions. Fully annotating the missing positives with human labor incurs prohibitive costs. We thus rely on machine annotators to propose candidate positive matches and let crowdsourced human annotators verify them. The resulting ITM evaluation benchmark, the Extended COCO Validation (ECCV) Caption dataset, contains ×8.47 positive images and ×3.58 positive captions compared to the original MS-COCO Caption. We have re-evaluated 25 ITM methods on ECCV Caption with mAP@R, resulting in notable changes in the ranking of methods. We encourage future studies on ITM to evaluate their models on ECCV mAP@R, which focuses not only on the correctness but also on the diversity of the top-k retrieved items.
Author Contributions
The main project idea (i.e., the false negative problem) is from S. Chun and his previous work Chun et al. (2021). S. Chun led the project; the other authors actively and significantly contributed to the project with advice and feedback. S. Chun, M. Chang, W. Kim, and S.J. Oh jointly designed the MITL annotation process; in particular, M. Chang and W. Kim significantly contributed to the design of the HIT based on a human-centered design approach. W. Kim conducted the large-scale VL transformer (e.g., ViLT, VinVL) retrieval experiments for annotation and evaluation. S. Chun implemented and ran the MITL annotation pipeline and the evaluation pipeline. S. Park contributed to HIT verification, the final data cleanup, and the data construction. S. Chun performed and interpreted the data analysis and evaluation. M. Chang helped analyze the HIT results from the HCI point of view. S. Chun and S.J. Oh wrote the initial version of the manuscript. All authors contributed to the final manuscript.
Appendix
We include additional materials in this document. We first describe the details of our machine annotators (Appendix A), including the explanation of each model (Section A.1) and the diversity between the models (Section A.2). We provide the details of the Human Intelligence Tasks (HITs) for ECCV Caption construction (Appendix B), such as the detailed questionnaire (Section B.1), MTurk worker statistics (Section B.2), and the results (Section B.3). Appendix C describes the post-processing details, including the full list of invalid items (Section C.1) and examples from ECCV Caption (Section C.2). We include more evaluation results in Appendix D, such as the user study details for comparing mAP@R and Recall@k (Section D.1), the training details of the re-implemented methods (Section D.2), and the full results with various evaluation metrics (Section D.3). Finally, we provide the full bias analysis in Appendix E and a discussion of noisy crowdsourced annotations in Appendix F.
Appendix A ECCV Caption Machine Annotators Details
A.1 Machine annotators
To cover both diversity and practical relevance, we chose five state-of-the-art cross-modal retrieval models with diverse properties.
- •VSRN Li et al. (2019) builds connections between image regions and performs reasoning with Graph Convolutional Networks to generate features with semantic relationships. VSRN uses the Faster R-CNN detector Ren et al. (2015) as the visual encoder, following Anderson et al. (2018). VSRN employs the triplet loss Schroff et al. (2015) with hardest negative mining (HNM) Faghri et al. (2018).
- •PVSE Song and Soleymani (2019) learns a one-to-many function to solve the ambiguous matching problem that a one-to-one function cannot. PVSE is a multi-headed model, focusing on diverse matching between two diverse concepts. PVSE also employs the triplet loss with HNM, as VSRN does.
- •PCME Chun et al. (2021) is a stochastic model for learning many-to-many correspondences in multi-modal matching tasks. PCME is trained by a probabilistic matching objective function based on the pair-wise matching loss.
- •ViLT Kim et al. (2021) is a vision-language pre-training method trained on massive paired data (4.1M images and 9.9M captions). While the other methods have separate text and visual backbones, ViLT has a single shared Transformer Vaswani et al. (2017) backbone for the text and visual modalities.
- •CLIP Radford et al. (2021) is a contrastive approach for massive but noisy associations and shows powerful zero-shot classification performance. CLIP is trained with 400M image-caption pairs. We use the ViT-B/32 CLIP, the largest one available when we started the annotation process.
PVSE, VSRN, and PCME use pre-trained visual backbones (an ImageNet-trained ResNet, or a Visual Genome Krishna et al. (2017)-trained Faster R-CNN) and use only the COCO Caption dataset for training. We use the official weights provided by the authors, except for PCME: we re-train PCME with a CutMix Yun et al. (2019) pre-trained ResNet-152, which slightly improves the original performance. We illustrate example images retrieved by each model in Figure A.1.
Figure A.1: Example retrieved images by the machine annotators. For the given caption (“A guy does a trick on a skateboard.”), we show the top-5 images retrieved by models. The matched pair in the dataset is denoted by red boxes.
A.2 Diversity between machine annotators
(a) Model similarity analysis by Kendall’s τ. A higher score means that two models are more correlated.
(b) Model similarity analysis by the average ranking. A smaller rank means that two models are more similar.
Table A.1: Model similarity analyses. We measure similarities between the machine annotators in two different ways. (a) We measure the ranking correlations using Kendall’s τ; 1.0 means two lists are identical, while -1.0 indicates two lists strongly disagree with each other. (b) We measure the average ranking of the image retrieved by one model under the other models. Each row indicates the average ranking of the top-1 retrieved image of the row model under each column model.
We quantify the diversity between machine annotators by their retrieved items in Table A.1. We retrieve 25 images per model for the COCO validation captions, and measure (1) Kendall’s τ (Table A.1(a)), and (2) the average ranking, under each other model, of the top-1 items retrieved by a model (Table A.1(b)).
Table A.1(a) shows the Kendall’s rank correlation coefficients (Kendall’s τ) between the models. Kendall’s τ is computed on two ranked lists [x_1, x_2, …, x_n] and [y_1, y_2, …, y_n]. We say that two pairs (x_i, x_j) and (y_i, y_j) agree if either (x_i > x_j and y_i > y_j) or (x_i < x_j and y_i < y_j).
Kendall’s τ is computed by τ = (#agreed pairs − #disagreed pairs) / (#all pairs). We use the tau-b variant for tie-breaking.
Appendix B Human Intelligence Tasks (HITs) for ECCV Caption Construction
B.1 HIT details
The example HIT for crowd workers is shown in Figure B.1. Each of the 20 questions in a HIT asks the worker to select their degree of belief that the given image-description pair is a positive match. We designed the HITs so that not only the positivity of the match is recorded, but also the degrees and rationales of the workers’ judgments are collected. Workers can choose among “100% YES”, “Partially YES, but”, “Mostly NO, because”, and “100% NO”. Here, we use four choices instead of three levels (“YES”, “Not Sure”, and “NO”) to avoid encouraging the workers to select “Not Sure” for all questions. If a worker chooses “Partially YES, but” or “Mostly NO, because”, they are asked further questions on the rationale behind their uncertainty. Four possible shortcomings of the image-description match are presented as choices: “the description describes concepts that do not appear in the image”, “the description does not describe the main concepts in the image”, “the description describes the main concepts in a wrong way”, and “the description is grammatically incorrect”. Finally, if a worker thinks the description describes the image in a wrong way, we ask how the description is wrong. The possible choices here (e.g., quantity, color, …) were crystallized from an internal preliminary study.
Figure B.1: Example question in an MTurk HIT. The question asks whether the image is correctly described. If unsure (“Partially YES, but …” or “Mostly NO, because …”), the worker is prompted to provide a rationale. There are 20 such questions in each HIT.
We created two separate HITs for the annotation process. In the first stage, we verify the image-to-caption retrieval results of the five models; we also ask the crowd workers to justify their answer if they choose “Partially YES” or “Mostly NO”. In the second stage, we verify the caption-to-image retrieval results of the models. After analyzing the first HITs, we concluded that the justification stage was not as useful as we had expected, so we omit the justification questions in the second stage to reduce annotation costs. The first annotation round ran from 23rd Aug 2021 to 7th Sep 2021, during which 1,000 HITs were verified by human annotators. The second stage ran from 24th Jan 2022 to 10th Feb 2022, during which 1,160 HITs were verified.
Table B.1: MTurk worker statistics. The number of unique workers, submitted HITs, approved HITs, and the average approval ratio by the number of completed HITs are shown.
B.2 MTurk workers
Before launching the crowdsourcing on AMT, we conducted an in-lab study involving 70 HITs and 27 workers over 4 days. We observed that when workers continuously complete HITs, the average elapsed time per HIT is about 4 to 8 minutes. Based on this estimate, we set the compensation for each HIT in the first stage to $1.4 so that a worker could earn $15 per hour. For the second stage, we set the compensation for each HIT to $0.65, based on a similar in-lab study without the justification questions. The final costs per HIT, including platform fees, for the first and second stages are $1.65 and $0.78, respectively.
In the main crowdsourcing phases, crowd workers were recruited through AMT. Detailed worker statistics are shown in Table B.1. Overall, 970 unique workers completed 2,969 unique Human Intelligence Tasks (HITs), while 807 of these HITs (37.3%) were rejected by our qualification process. The average elapsed times for each HIT in the first and second annotation phases are 9.5 and 13.8 minutes, respectively. The average number of HITs per worker is 3.06.
Table B.2: Model-wise annotation overview. The percentages of “100% YES”, “Partially YES”, “Mostly NO” and “100% NO” for each model and each ranking are shown. For example, the first row indicates the annotation results for the top-1 retrieved image and description pairs.
(a) Shortcomings by models.
(b) Detailed errors by models.
Table B.3: Error types. The percentage of each error type by model. There is no statistically significant difference between models.
B.3 MTurk results
We summarize the results of the crowdsourced annotations, corresponding to 2,160 approved HITs on MTurk, in Table B.2 and Table B.3. In Table B.2, we show the ratios of “Yes”, “Weak yes”, “Weak no”, and “No” for different models and rankings. We observe that weaker results (i.e., worse-ranked pairs) have lower “Yes” and “Weak yes” ratios. For example, the annotation results for PCME show that the “Yes” ratio monotonically decreases as the rank goes down: 52.3% for the most similar pairs, but 27.8% for the least similar pairs. Interestingly, we observe that the average ratio of “Yes” + “Weak yes” for the top-1 retrieved items exceeds 80% for all five models (e.g., 86.0% for VSRN), while the R@1 score of each model is known to be less than 60% (see Table 4 in the main paper).
From the table, we observe that by letting annotators verify the pairs the machines rank as most similar, the annotation process becomes more efficient, i.e., we can acquire the same number of positive annotations with fewer human verifications. However, as we will discuss in depth later, we emphasize that model strength is not the only factor to consider: model biases emerge in MITL-produced datasets regardless of the strength of the model.
We additionally show the rationales for the uncertain matches and the specification of the errors in Table B.3. We observe that the models show similar patterns in the rationales and error specifications. Finally, through our annotation process, the average numbers of “100% YES” and “Partially YES” images for each caption are 8.3 and 7.1, respectively. This is remarkable since the original COCO annotations allow only one image to be positively paired with a caption, revealing the massive number of missed positive matches.
Appendix C ECCV Caption Post-processing Details
C.1 The full list of invalid captions and images
In this subsection, we list the invalid captions and images in the original COCO test split. We filter the invalid captions by the following process: (1) we first list the “true positive” pairs (i.e., the positive pairs in the original COCO test set) annotated as “100% NO” by Turkers or by CxC Parekh et al. (2020); (2) we manually sort these items into two categories: totally wrong captions (e.g., “I don’t know” captions) and semantically incorrect captions (e.g., “A group of birds flying above the beach” for an image with kites). The full list of invalid captions with their COCO caption ids is as follows:
- •607516 The first picture is blank all the time on purpose.
- •607486 Why is my first one a blank every time.
- •433639 There is no image here to provide a caption for.
- •248212 I am unable to see an image above.
- •469834 There is no image here to provide a caption for.
- •462530 I really cant see this image very well.
- •469102 There is no image to be reviewed on this hit.
- •743575 There is no image showing on this page to describe.
- •246706 I am unable to see an image above.
- •61717 There is no picture here to describe with a caption.
- •500797 I am unable to see the image above.
- •19273 There is no image for me to write about.
- •630298 There is no image to provide a caption for.
- •576409 I am unable to see the image above.
- •390637 I am unable to see an image above.
- •296557 There is no image here to provide a caption for.
- •450553 I am unable to see an image above.
- •44809 blank image with no pictures available to write about
We also show the list of semantically incorrect captions as follows:
- •610564 An individual is in the open view in the image.
- •359139 I cant tell if the bears may be fighting or kissing.
- •218995 A baseball player hugging another player as lovers do.
- •609235 Individuals are up and doing something fun today.
- •143250 The bar of the small bathroom has many remotes on it.
- •375316 A photo duplicated a few times and put together.
- •712683 Talk about a bad hair day, his is frightful.
- •75083 A group of birds flying above the beach.
- •625605 It is always wise to have bottles of water on hand in case of an emergency.
- •620511 If the motorcycle brakes down, the bicycle will be good transportation.
- •613949 A full view of an outdoor space with many things to see.
- •129825 A picture of a comment that is open.
- •605566 There is a room with various items in the picture.
- •634829 That thing is really red and slow lol
Figure C.1: An example image of semantically wrong captions. We annotate “A group of birds flying above the beach” as a wrong caption of the figure, while the other captions are available in Figure C.2.
Finally, we omit COCO_val2014_000000578492.jpg from our test set, because the image is a duplicate of the training images COCO_train2014_000000388662.jpg and COCO_train2014_000000397819.jpg.
C.2 More examples of ECCV Caption
We illustrate the samples from ECCV Caption in Figure C.2.
Figure C.2: Example sample from ECCV Caption. Positive captions and images in ECCV Caption. Red: original positive. Green: annotated as “100% Yes”. Blue: annotated as “Weak Yes”.
Appendix D More Evaluation Results on ECCV Caption
D.1 User study for evaluation metrics
In this subsection, we describe the details of the user study comparing mAP@R and Recall@k in terms of human judgement. We first randomly sample 40 captions from those whose number of corresponding images is between 5 and 8. Then, we construct five rankings for each caption: (A) only the top-1 is wrong, (B) only the top-1 is correct, (C) top-1 to top-5 are wrong, (D) only the top-5 is correct, and (E) all items are wrong. When the number of corresponding images is 5, we treat (C) as (E). Each ranking yields different mAP@R and Recall@k scores; assuming the number of positives is 8, (A) shows 0 R@1, 100 R@5, and 66.0 mAP@R; (B) shows 100 R@k and 12.5 mAP@R; (C) shows 0 R@k and 10.3 mAP@R; and (D) shows 0 R@1, 100 R@5, and 2.5 mAP@R. Examples of each ranking are illustrated in Figure D.1.
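The mAP@R values above can be reproduced with a short sketch (the relevance lists and function name are ours): mAP@R averages precision-at-i over the first R ranks, crediting only the ranks where a positive was retrieved.

```python
def map_at_r(relevance, num_positives):
    """mAP@R: mean of precision-at-i over the first R ranks, counting
    only the ranks where the retrieved item is a positive.

    relevance: booleans, True if the i-th retrieved item is a positive.
    num_positives: R, the number of ground-truth positives for the query.
    """
    r = num_positives
    score, hits = 0.0, 0
    for i, rel in enumerate(relevance[:r], start=1):
        if rel:
            hits += 1
            score += hits / i
    return score / r

# The rankings of this user study, assuming R = 8 positives:
A = [False] + [True] * 7                # only top-1 wrong
B = [True] + [False] * 7                # only top-1 correct
C = [False] * 5 + [True] * 3            # top-1 to top-5 wrong
D = [False] * 4 + [True] + [False] * 3  # only top-5 correct
print(round(100 * map_at_r(A, 8), 1))  # 66.0
print(round(100 * map_at_r(B, 8), 1))  # 12.5
print(round(100 * map_at_r(C, 8), 1))  # 10.3
print(round(100 * map_at_r(D, 8), 1))  # 2.5
```

Note how (A), which misses R@1 entirely, far outscores (B) under mAP@R because it retrieves 7 of the 8 positives near the top.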
We collect binary preferences for all possible combinations of (A) to (E), namely 10 binary pairs. We use MTurk to recruit participants, collecting 8 participants per question (an example question is shown in Figure D.2). As a result, we collect 40 × 10 × 8 = 3,200 binary preferences over the five rankings. The full binary preferences are listed in Table D.1. After collecting the binary preferences, we recover the preference ranking using the Bradley–Terry (BT) model Bradley and Terry (1952). The BT model assumes that for a given pair i and j, the probability of the pairwise comparison i > j is proportional to the true ranking score, i.e., P(i > j) = p_i / (p_i + p_j). Our goal is to estimate p_i, the true ranking preference of each method. Using the BT model, we obtain the following results: A (70.85), B (13.15), C (10.66), D (4.89). This confirms that mAP@R is better aligned with humans than R@1: (A) shows 0 R@1 and (B) shows 100 R@1, yet humans prefer (A) to (B), and (A) has a higher mAP@R than (B) (66.0 vs. 12.5 when the number of positives is 8).
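A minimal sketch of BT score estimation via the classic minorization-maximization (Zermelo) updates, run on a hypothetical preference matrix (not the actual Table D.1 counts):

```python
def bradley_terry(wins, n_iters=200):
    """Fit Bradley-Terry scores with the classic MM (Zermelo) updates.

    wins[i][j] = number of times item i was preferred over item j.
    Returns scores normalized to sum to 100, as in the A/B/C/D values above.
    Assumes a connected comparison graph with no undefeated item.
    """
    n = len(wins)
    p = [1.0] * n
    for _ in range(n_iters):
        new_p = []
        for i in range(n):
            w_i = sum(wins[i])  # total wins of item i
            denom = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                        for j in range(n) if j != i)
            new_p.append(w_i / denom if denom else p[i])
        total = sum(new_p)
        p = [x / total for x in new_p]  # renormalize each iteration
    return [100 * x for x in p]

# Toy counts: item 0 usually beats 1 and 2; item 1 usually beats 2.
wins = [[0, 9, 9], [1, 0, 7], [1, 3, 0]]
scores = bradley_terry(wins)
print([round(s, 2) for s in scores])
```

The recovered scores respect the observed dominance ordering; with the paper's 3,200 collected preferences, the same procedure yields the A > B > C > D ordering reported above.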
GT images for “A train on a train track near many trees”.
(A) Only top-1 is wrong.
(B) Only top-1 is correct.
(C) Top-1 to -5 are wrong.
(D) Only top-5 is correct.
(E) All items are wrong.
Figure D.1: Examples of five ranking systems compared by our user study.
Figure D.2: Example question in the user study for evaluation metric comparison. The question asks which of two ranking systems, A or B, looks more correct to humans. There are 40 such questions in each HIT, and we collect 8 participants per question.
Table D.1: Binary preferences for the five ranking systems. Each number in row i and column j denotes the number of preferences i > j. For example, 231 responses preferred “(A) only top-1 is wrong” over “(B) only top-1 is correct”, while 89 preferred the converse.
D.2 Training details
We follow the implementation details of Chun et al. Chun et al. (2021). We use the AdamP Heo et al. (2021) optimizer and cosine annealing learning rate scheduling Loshchilov and Hutter (2017). For the re-implemented VSE, PVSE, and PCME, we use a pre-trained ResNet-152 backbone and pre-trained GloVe vectors, following previous studies Faghri et al. (2018); Song and Soleymani (2019); Chun et al. (2021). We use a two-stage training scheme consisting of “pre-training” (freezing the pre-trained backbones and only updating the additional modules) and “fine-tuning” (updating all parameters). The models are pre-trained for 30 epochs and fine-tuned for 30 epochs. For the improved PCME, we use a ResNet-152 model trained with CutMix Yun et al. (2019) augmentation to achieve better R@1 accuracy.
D.3 Full table
Table D.2: Re-evaluating VL models: Image-to-text retrieval results.
Table D.3: Re-evaluating VL models: Text-to-image retrieval results.
Table D.2 and Table D.3 show the full results of each model for image-to-text retrieval tasks and text-to-image retrieval tasks, respectively.
Appendix E Analysis of Biases in MITL
In this section, we explore and quantify the effect of the choice of machine annotators in the machine-in-the-loop (MITL) labeling paradigm on the dataset quality. Specifically, we are interested in model bias, the type of bias that arises because the model pre-selects plausible samples in the annotation pipeline. We discuss the generalizability of our framework to general annotation tasks in Section E.1 and the definition of “bias” in our dataset construction process in Section E.2. We measure model bias by employing the crowdsourced data as the source for evaluating the ITM models. For perfectly unbiased data, we would expect identical rankings across versions of the dataset collected with different models. We show the performances on different versions of the dataset, each using only one MITL model, and provide discussions (Section E.3).
E.1 Image caption matching problem to general annotation tasks
Many real-world applications are powered by state-of-the-art machine learning (ML) models shown to exceed human-level performances in tasks such as natural language understanding Devlin et al. (2019) and image classification He et al. (2016); Geirhos et al. (2018). However, previous studies have shown that two conditions must be met for these models to perform well: massive training data and high-quality annotations. For example, large datasets consisting of 1M well-curated images Russakovsky et al. (2015), 3.5B Instagram photos Mahajan et al. (2018b); Singh et al. (2022), 300M web photos Dosovitskiy et al. (2021), 400M captioned images Radford et al. (2021), 1.8B noisily captioned images Jia et al. (2021), 15M hierarchically structured images Yun et al. (2021); Dosovitskiy et al. (2021), and synthetic images Zhang et al. (2018); Yun et al. (2019); Cubuk et al. (2019, 2020) are the key factors behind the corresponding models’ success. Moreover, annotation quality is equally important for model performance. Mahajan et al. (2018b) showed that training on 940M images with well-processed 1.5k labels results in a model comparable to training on 3.5B images with noisy and weak 17k labels; both models show 84.2% ImageNet-1K top-1 accuracy.
An emerging pattern for obtaining quality labels for a large dataset is pipelining a machine learning model and human annotators. Expert human annotators are reliable and produce high-quality labels, but they are costly to accommodate. Strong machine annotators are relatively inexpensive, but they result in low-quality, unreliable annotations. One popular way of combining the two is feeding human annotators a machine learning model’s outputs. For example, a model suggests annotations (e.g., candidate labels Kuznetsova et al. (2020), estimated boxes Kuznetsova et al. (2020), estimated segmentation maps Benenson et al. (2019), estimated descriptions Kayser et al. (2021)) for a given data point, and the annotators only need to confirm or fix the labels proposed by the machine annotator. This approach is commonly used for building large-scale datasets, such as OpenImages Kuznetsova et al. (2020); Benenson et al. (2019) and e-ViL Kayser et al. (2021).
While a rich body of work discusses annotation interfaces and crowdsourcing workflows, we still lack a good understanding of the impact the underlying machine learning models have on the annotators and on the annotation results. In this research, we specifically examine the downstream effects of a common practice where researchers and practitioners consider only one “strong” model in the machine-in-the-loop annotation pipeline. This can be problematic because different models show different results to the human annotators, and in different orders, which calls the impartiality of the annotated dataset towards any particular model into question. In other words, machine-in-the-loop annotations are not stable across model choices.
As a realistic scenario for utilizing ML models to aid annotators, we consider the COCO Caption matching task Lin et al. (2014); Chen et al. (2015) that matches each image with sentences in a large database of captions. Due to the sheer bulk of the involved databases (123,287 images and 616,767 captions), it is infeasible for annotators to search for the matching captions by hand. Instead, for each image, we use a model-based ranking of possible captions to greatly reduce the search space for annotators. We have conducted studies with five state-of-the-art image-text matching models. Our COCO Caption matching task can be seen as the extreme version of the class label selection task, where the number of possible classes is as large as the number of possible descriptions.
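The model-based search-space reduction can be sketched as follows: rank every caption by its embedding similarity to the image, and show annotators only the top-k candidates. This is a hedged illustration with random stand-in embeddings, not the actual models or data.

```python
import numpy as np

def topk_caption_candidates(image_emb, caption_embs, k=5):
    """Rank all captions by cosine similarity to one image embedding and
    return the indices of the top-k candidates shown to human annotators."""
    image_emb = image_emb / np.linalg.norm(image_emb)
    caption_embs = caption_embs / np.linalg.norm(caption_embs, axis=1, keepdims=True)
    sims = caption_embs @ image_emb      # cosine similarities to every caption
    return np.argsort(-sims)[:k]         # highest similarity first

# Toy example: a large caption pool shrinks to a handful of candidates
# per image; here the image embedding is close to caption 42 by design.
rng = np.random.default_rng(0)
captions = rng.normal(size=(1000, 64))               # stand-in caption embeddings
image = captions[42] + 0.01 * rng.normal(size=64)    # near-duplicate of caption 42
candidates = topk_caption_candidates(image, captions, k=5)
```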
A decent overview of image annotation tools is provided by Sager et al. (2021). An annotation task is characterized along two axes: the type of input and the expressiveness of the labels. This paper focuses on image inputs, one of the most frequently annotated types of data. The expressiveness and complexity of the labels are directly related to the learning task being addressed. Tagging images with class labels is arguably the most common and basic form in the spectrum of label expressiveness. At the other extreme, we have the image-caption matching task: given an image, the annotator has to search through a database of descriptions to find the one that best matches the image Chen et al. (2015). The caption matching task is of the same nature as image tagging: one needs to find the correct label in a list of possible labels. However, the candidate space is exponentially larger for caption matching. If the vocabulary size is $V$ and caption lengths are generally $L$, the size of the candidate space is as large as $O(V^L)$. This contrasts with the number of possible class labels, which is generally far smaller than $V$.
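The scale of the $O(V^L)$ candidate space is easy to appreciate with a quick back-of-the-envelope computation; the values of $V$ and $L$ below are illustrative, not measured from the COCO vocabulary.

```python
# Illustrative sizes only: even a modest vocabulary V = 10,000 and a
# typical caption length L = 10 give an astronomically large label space.
V, L = 10_000, 10
caption_space = V ** L        # O(V^L) possible caption "labels"
imagenet_classes = 1_000      # typical image-tagging label space for comparison

print(caption_space)          # 10**40 possible captions
print(caption_space // imagenet_classes)
```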
We consider image-caption matching as a testbed for analyzing annotation pipelines for two reasons. First, it is highly relevant to the MITL annotation paradigm because it is downright infeasible for humans to browse through the database. Second, it inherits the same tooling as image tagging, making our experimental results and analyses transferable to general image tagging tasks.
E.2 What Do We Mean by “Bias”?
Bias is an overloaded term with multiple senses. We briefly explore its use in relevant fields and make a definition relevant to our paper.
In statistics and machine learning, bias of an estimator or model refers to the mismatch between its average behavior and the true parameter or underlying function Johnson et al. (2000); Bishop (2006). We partially adopt this definition of bias in a broad sense. The annotation pipeline as a whole can be regarded as a mechanism for assigning plausible labels to a given set of images. When we say that the annotation pipeline is “biased”, we refer to the discrepancy between the resulting annotations and the true, underlying labels for the samples.
In human-related studies, like psychology, neuroscience, human-computer interaction, and increasingly in machine learning, the use of “bias” often points to its underlying human factor. Examples include “confirmation bias” where humans favorably select data that serve their purpose Plous (1993), “reporting bias” where crucial commonsense knowledge is overlooked Easterbrook et al. (1991), and “survivorship bias” where non-surviving cases are under-represented Mangel and Samaniego (1984). In our MITL annotation pipeline, we study the model bias where models hinder humans from generating an unbiased set of labels by presenting humans with only a selection of the candidate labels deemed correct by the models.
E.3 Biases in ECCV Caption
Given the crowdsourced image-caption labels of ECCV Caption, our aim is to analyze the degree of bias in them, depending on the underlying model used in the machine-in-the-loop (MITL) labeling paradigm. Specifically, we are interested in the model bias, the type of bias that arises because the model pre-selects plausible samples in the annotation pipeline. We measure the model bias by employing the crowdsourced data as the source for evaluating the cross-modal retrieval models. For perfectly unbiased data, we would expect identical evaluation results (i.e., in terms of the ranking of the methods) across the versions of the dataset collected with different models. In this section, we introduce the strategy to measure the model bias and present experimental results and analyses.
We measure the model bias in a labeled dataset by examining whether certain versions of the dataset behave favorably towards certain models when used as the evaluation benchmark. To do so, we first introduce the specific evaluation metrics used for measuring cross-modal retrieval performance: Recall@1 and R-Precision.
Recall@1 is the most widely-used metric for reporting the performance of cross-modal retrieval models. To compute it, we first let $m_i(x)$ be the indicator of whether the $i$-th retrieved item for the input $x$ is a positive match:

$$m_i(x) = \begin{cases} 1 & \text{if the $i$-th retrieved item of $x$ is a positive match,} \\ 0 & \text{otherwise.} \end{cases} \quad \text{(E.1)}$$
R@1 measures whether the top-1 retrieved item is a positive match, on average:

$$\text{R@1} = \frac{1}{N}\sum_{n=1}^{N} m_1(x_n). \quad \text{(E.2)}$$
Despite its popularity, Recall@1 has a serious shortcoming. As argued by Musgrave et al. (2020), a high Recall@1 does not always guarantee high-quality retrieval results. Musgrave et al. proposed R-Precision as an alternative metric. Let $R(x)$ be the total number of matched items for the input $x$. Then, R-Precision is defined as follows:
$$\text{R-P} = \frac{1}{N}\sum_{n=1}^{N} \frac{1}{R(x_n)} \sum_{i=1}^{R(x_n)} m_i(x_n). \quad \text{(E.3)}$$
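Equations (E.2) and (E.3) can be computed directly from a binary match matrix whose rows are sorted by model score; a minimal sketch, not the evaluation code used in the paper:

```python
import numpy as np

def recall_at_1(matches):
    """Eq. (E.2). matches[n, i] = 1 if the i-th retrieved item for query n
    is a positive match (rows already sorted by model score)."""
    return matches[:, 0].mean()

def r_precision(matches, num_positives):
    """Eq. (E.3): precision among the top-R(x_n) retrieved items, where
    R(x_n) = num_positives[n] is the number of true matches for query n."""
    scores = [matches[n, :r].sum() / r for n, r in enumerate(num_positives)]
    return float(np.mean(scores))

# Two queries: the first has its single positive at rank 1; the second
# has two positives, retrieved at ranks 2 and 3.
matches = np.array([[1, 0, 0],
                    [0, 1, 1]])
print(recall_at_1(matches))            # 0.5
print(r_precision(matches, [1, 2]))    # (1/1 + 1/2) / 2 = 0.75
```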
Despite its good properties, it is impossible in practice to use R-Precision on existing cross-modal retrieval benchmarks, because most of them contain only a few (often exactly one) positive pairs per item. However, as shown in Figure A.1, there actually are many plausible positive pairs missed by the original annotations.
We further refurbish the metrics by using the fine-grained degrees of pair positivity provided by the annotators (Section B.2), which are not available for conventional cross-modal retrieval datasets. We update the matching function (Equation E.1) as follows:
$$m_i(x) = \begin{cases} 1 & \text{if the $i$-th retrieved item of $x$ is annotated as ``100\% YES'',} \\ 0.5 & \text{if the $i$-th retrieved item of $x$ is annotated as ``Partially YES'',} \\ 0 & \text{otherwise.} \end{cases} \quad \text{(E.4)}$$
We report the modified Recall@1 and R-Precision using the matching function above as the final performance metric for cross-modal retrieval models on our crowdsourced datasets.
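The graded matching function of Eq. (E.4) replaces the binary entries used above; a sketch where the per-rank human annotations are strings (the annotation strings are taken from Eq. (E.4), the rest is illustrative):

```python
import numpy as np

GRADE = {"100% YES": 1.0, "Partially YES": 0.5}  # Eq. (E.4); anything else -> 0

def graded_matches(annotations):
    """Map per-rank human annotations to the graded match values m_i(x)."""
    return np.array([[GRADE.get(a, 0.0) for a in row] for row in annotations])

def modified_recall_at_1(annotations):
    """Modified Recall@1: average graded match value of the top-1 items."""
    return graded_matches(annotations)[:, 0].mean()

anns = [["100% YES", "NO", "Partially YES"],
        ["Partially YES", "100% YES", "NO"]]
print(modified_recall_at_1(anns))  # (1.0 + 0.5) / 2 = 0.75
```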
(a) Text-to-Image Recall@1.
(b) Text-to-Image R-Precision.
Table E.1: Model performances vs. different annotation processes. Each row indicates the performances of the same model under different annotation strategies: using the annotations filtered by a specific model. For example, the first column of the tables shows the model performances when only the “PVSE”-filtered annotations are used. “All” denotes that the full annotations are used. The bold numbers denote the best model performance for each annotation strategy; in all experiments, the best-performing model coincides with the model used for the annotation strategy.
Table E.1 (a) and Table E.1 (b) show the performances on different versions of the datasets using only one MITL model. Not surprisingly, we observe that the best-performing model for each dataset coincides with the model used for the MITL label proposals (i.e., the diagonal elements in Table E.1). The other models tend to show a considerable drop in performance. This strongly corroborates the existence of model bias in datasets collected with the aid of machine filtering. We further observe that even for models not used for generating the label proposals (i.e., the off-diagonal elements in Table E.1 (a) and Table E.1 (b)), the rankings shift with respect to the underlying MITL model. This suggests that even when one avoids the direct use of the MITL model for evaluating its own performance, one may still observe unstable evaluation results, where different MITL models arbitrarily favor different models.
Figure E.1: Overview of our machine-in-the-loop annotation process. We choose subsets of the image-caption pairs verified by crowd workers to control the effect of the models on the final annotations.
Figure E.2: Number of models vs. bias quantity. The bias quantities (Eq. (E.5)) with varying numbers of models in the annotation filtering process are shown. For both Recall@1 and R-Precision, we observe that using more models reduces the severity of the bias; the discrepancy between the resulting annotations and the underlying true labels decreases.
A crucial limitation of this type of analysis is the lack of true labels. The obtained versions of the dataset are clearly improved over the original COCO Caption dataset, but they are still heavily affected by model biases, as seen above. To better estimate how far the datasets are from the true labels, we introduce the multi-model strategy, where the workers verify label proposals generated by two or more of the involved models. More specifically, a multi-model strategy involving models $\Theta = \{\text{PVSE}, \text{PCME}\}$ pools the label proposals from both PVSE and PCME and presents them to the human annotators. The rest of the verification process is identical to before. The intuition is that a dataset built with multiple models will be much closer to the true labels for the image-caption matches. In the extreme case, we have the all-model strategy involving all five models considered in this work. See the “All” columns in Table E.1 (a) and Table E.1 (b) for the corresponding results.
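The multi-model pooling step amounts to taking the union of each model's label proposals before human verification; a minimal sketch with hypothetical proposal lists:

```python
def pool_proposals(per_model_topk):
    """Union the top-k label proposals from several MITL models, so human
    annotators verify candidates suggested by any model, not just one."""
    pooled = set()
    for proposals in per_model_topk.values():
        pooled.update(proposals)
    return sorted(pooled)

# Toy proposals for one image under a two-model strategy {PVSE, PCME};
# the caption indices are made up for illustration.
proposals = {"PVSE": [11, 42, 7], "PCME": [42, 99, 7]}
print(pool_proposals(proposals))   # [7, 11, 42, 99]
```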
Based on this intuition, we additionally measure the distance between a version of the dataset and the all-model dataset, which is deemed to contain labels closest to the true labels. We define $s_\Theta(\phi)$ as the performance of the model $\phi$ evaluated on the dataset built with the multi-model strategy using MITL models $\Theta$. We treat $s_\text{All}(\phi)$ as a good proxy for the true performance of the model $\phi$. We define the model bias incurred by a subset of models $\Theta$ as
$$\mathcal{B}_\Theta := \frac{1}{5}\sum_{\phi\in\text{All}} \left| s_\Theta(\phi) - s_\text{All}(\phi) \right| \quad \text{(E.5)}$$
where $\text{All} = \{\text{PVSE}, \text{VSRN}, \text{PCME}, \text{ViLT}, \text{CLIP}\}$. For example, $\mathcal{B}_{\{\text{PVSE}\}}$ using Recall@1 (Table E.1 (a)) is computed as follows:
$$\mathcal{B}_{\{\text{PVSE}\}} = \left(|76.5-76.6| + |68.0-80.1| + |67.7-77.4| + |59.5-72.4| + |51.7-64.3|\right)/5 = 9.5 \quad \text{(E.6)}$$
We break down the degree of bias $\mathcal{B}_\Theta$ into the bias incurred onto oneself (“self-bias”) and the bias incurred onto the other models (“non-self-bias”). We quantify the self-bias for a set of models $\Theta$ as $\frac{1}{|\Theta|}\sum_{\phi\in\Theta} |s_\Theta(\phi) - s_\text{All}(\phi)|$. For example, the self-bias for PVSE is $|76.5-76.6| = 0.1$. The complementary amount of bias, the non-self-bias, is computed similarly over the models not in $\Theta$. For example, PVSE’s non-self-bias is computed as $(|68.0-80.1| + |67.7-77.4| + |59.5-72.4| + |51.7-64.3|)/4 = 11.8$.
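Eq. (E.5) and the self-/non-self-bias decomposition can be reproduced from the Recall@1 numbers in Eq. (E.6); a minimal sketch (the score dictionaries restate those numbers, everything else is illustrative):

```python
def model_bias(s_theta, s_all):
    """Eq. (E.5): mean absolute gap between scores on the Theta-filtered
    dataset and on the all-model dataset, over all five models."""
    return sum(abs(s_theta[m] - s_all[m]) for m in s_all) / len(s_all)

def split_bias(s_theta, s_all, theta):
    """Decompose the gaps into self-bias (models in Theta) and
    non-self-bias (the remaining models)."""
    self_gaps = [abs(s_theta[m] - s_all[m]) for m in theta]
    other_gaps = [abs(s_theta[m] - s_all[m]) for m in s_all if m not in theta]
    return sum(self_gaps) / len(self_gaps), sum(other_gaps) / len(other_gaps)

# Recall@1 numbers from Eq. (E.6), for Theta = {PVSE}.
s_pvse = {"PVSE": 76.5, "VSRN": 68.0, "PCME": 67.7, "ViLT": 59.5, "CLIP": 51.7}
s_all  = {"PVSE": 76.6, "VSRN": 80.1, "PCME": 77.4, "ViLT": 72.4, "CLIP": 64.3}
print(round(model_bias(s_pvse, s_all), 1))       # 9.5
self_b, non_self_b = split_bias(s_pvse, s_all, {"PVSE"})
print(round(self_b, 1), round(non_self_b, 1))    # 0.1 11.8
```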
We plot the degrees of the biases, measured with $\mathcal{B}_\Theta$, the self-bias, and the non-self-bias, in Figure E.2. We experiment with varying the number of models $|\Theta|$. All numbers are averaged over all possible subsets of models of size $|\Theta|$ (e.g., if $|\Theta| = 2$, the result is the average over the $n(n-1)/2 = 10$ possible pairs). We omit the case with $|\Theta| = 5$ because all metrics are zero by definition.
We make two observations. First, a smaller number of MITL models makes the gap between the “self-bias” and the “non-self-bias” larger. This implies that if we use a single model for the MITL annotations, the resulting dataset is highly likely to treat the methods unequally. Second, the overall bias measurements decrease with the number of models involved in the MITL annotation process. For practical applications, hence, it is advisable to use multiple models to collect the label candidates to verify. In practice, we observe that only three machine annotators (PVSE, PCME, and VSRN) already achieve better rankings on ECCV mAP@R compared to the COCO R@1 ranking (Figure E.3). This suggests that our ECCV Caption is not strongly biased towards the selected machine annotators.
Figure E.3: Bias quantity in the dataset. The full rankings of the chosen MITL annotators on our benchmark.
Appendix F Noisy Annotations
Our annotations are built upon crowdsourced annotations. Due to the inherent noisiness of crowdsourcing, ECCV Caption contains some wrong annotations. Also, our positives are chosen not only from “100% YES” but also from “Partially YES” judgments. Note that our HIT is designed to specify the details of what makes an annotation “partially correct”; see Figure B.1. We illustrate some false positive cases in Figure F.1. False positives can occur due to (1) a wrong object, e.g., “baseball bat” instead of “tennis racquet” (Figure F.1 (a)), (2) a wrong color, e.g., “blue” instead of “gray” (Figure F.1 (b)), or (3) a wrong quantity, e.g., “one” instead of “two” (Figure F.1 (c)). Although there exist some false positives in our dataset, we still strongly encourage using ECCV Caption and mAP@R for evaluating new VL models. Even the false positives are not 100% wrong examples; if a model learns a good global ranking, the partially correct examples should still be ranked higher than random items. We therefore strongly encourage using mAP@R instead of Recall@k: mAP@R mitigates the error introduced by false positives, while Recall@k can amplify errors from noisy annotations because it only checks whether the top-K retrieved items are among the true items.
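For completeness, mAP@R (following the definition of Musgrave et al., 2020) can be sketched as below: for each query it averages the precision-at-$i$ over the top-$R$ positions, counting only positions where the retrieved item is a positive. This is a hedged illustration, not the paper's evaluation code.

```python
import numpy as np

def map_at_r(matches, num_positives):
    """mAP@R: for each query n with R = num_positives[n] true matches,
    average precision-at-i over the top-R ranks, where a rank contributes
    only if the item retrieved there is a positive match."""
    scores = []
    for n, r in enumerate(num_positives):
        row = matches[n, :r]                          # top-R retrievals only
        prec_at_i = np.cumsum(row) / np.arange(1, r + 1)
        scores.append(float((prec_at_i * row).sum() / r))
    return float(np.mean(scores))

# One query with R = 3 positives, retrieved at ranks 1, 2, and 4: the
# positive outside the top-R window earns no credit, so errors at low
# ranks move the score only gradually, unlike Recall@K.
matches = np.array([[1, 1, 0, 1, 0]])
print(round(map_at_r(matches, [3]), 3))   # (1 + 1 + 0) / 3 -> 0.667
```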
(a)“A boy holding a tennis racquet on a tennis court.”.
(b)“A large white airplane flies in the gray sky.”.
(c)“Two men of some sort on a tennis court.”.
Figure F.1: Examples of noisy annotations in ECCV Caption. Examples of false positive images are shown. Each false positive involves (a) a wrong object, e.g., “baseball bat” instead of “tennis racquet”, (b) a wrong color, e.g., “blue” instead of “gray”, or (c) a wrong quantity, e.g., “one” instead of “two”.
References
- Lin et al. (2014) Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco: Common objects in context. In Proc. ECCV, 2014.
- Chen et al. (2015) Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollár, and C Lawrence Zitnick. Microsoft coco captions: Data collection and evaluation server. arXiv preprint arXiv:1504.00325, 2015.
- Plummer et al. (2015) Bryan A Plummer, Liwei Wang, Chris M Cervantes, Juan C Caicedo, Julia Hockenmaier, and Svetlana Lazebnik. Flickr30k entities: Collecting region-to-phrase correspondences for richer image-to-sentence models. In Proceedings of the IEEE international conference on computer vision, pages 2641–2649, 2015.
- Sharma et al. (2018) Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning. In ACL, pages 2556–2565, 2018.
- Changpinyo et al. (2021) Soravit Changpinyo, Piyush Sharma, Nan Ding, and Radu Soricut. Conceptual 12m: Pushing web-scale image-text pre-training to recognize long-tail visual concepts. In Proc. CVPR, pages 3558–3568, 2021.
- Frome et al. (2013) Andrea Frome, Greg S Corrado, Jon Shlens, Samy Bengio, Jeff Dean, Marc’Aurelio Ranzato, and Tomas Mikolov. Devise: A deep visual-semantic embedding model. In Proc. NeurIPS, pages 2121–2129, 2013.
- Young et al. (2014) Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. ACL, 2:67–78, 2014.
- Kiros et al. (2014) Ryan Kiros, Ruslan Salakhutdinov, and Richard S Zemel. Unifying visual-semantic embeddings with multimodal neural language models. arXiv preprint arXiv:1411.2539, 2014.
- Faghri et al. (2018) Fartash Faghri, David J Fleet, Jamie Ryan Kiros, and Sanja Fidler. VSE++: Improving visual-semantic embeddings with hard negatives. In Proc. BMVC, 2018.
- Gu et al. (2018) Jiuxiang Gu, Jianfei Cai, Shafiq R Joty, Li Niu, and Gang Wang. Look, imagine and match: Improving textual-visual cross-modal retrieval with generative models. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 7181–7189, 2018.
- Lee et al. (2018) Kuang-Huei Lee, Xi Chen, Gang Hua, Houdong Hu, and Xiaodong He. Stacked cross attention for image-text matching. In Proc. ECCV, 2018.
- Huang et al. (2018) Yan Huang, Qi Wu, Chunfeng Song, and Liang Wang. Learning semantic concepts and order for image and sentence matching. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6163–6171, 2018.
- Li et al. (2019) Kunpeng Li, Yulun Zhang, Kai Li, Yuanyuan Li, and Yun Fu. Visual semantic reasoning for image-text matching. In Proc. ICCV, pages 4654–4662, 2019.
- Song and Soleymani (2019) Yale Song and Mohammad Soleymani. Polysemous visual-semantic embedding for cross-modal retrieval. In Proc. CVPR, pages 1979–1988, 2019.
- Wehrmann et al. (2019) Jonatas Wehrmann, Douglas M Souza, Mauricio A Lopes, and Rodrigo C Barros. Language-agnostic visual-semantic embeddings. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5804–5813, 2019.
- Wu et al. (2019) Hao Wu, Jiayuan Mao, Yufeng Zhang, Yuning Jiang, Lei Li, Weiwei Sun, and Wei-Ying Ma. Unified visual-semantic embeddings: Bridging vision and language with structured meaning representations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6609–6618, 2019.
- Wang et al. (2020) Haoran Wang, Ying Zhang, Zhong Ji, Yanwei Pang, and Lin Ma. Consensus-aware visual-semantic embedding for image-text matching. In Proc. ECCV, 2020.
- Chen et al. (2020) Tianlang Chen, Jiajun Deng, and Jiebo Luo. Adaptive offline quintuplet loss for image-text matching. In Proc. ECCV, 2020.
- Diao et al. (2021) Haiwen Diao, Ying Zhang, Lin Ma, and Huchuan Lu. Similarity reasoning and filtration for image-text matching. In Proc. AAAI, 2021.
- Chun et al. (2021) Sanghyuk Chun, Seong Joon Oh, Rafael Sampaio De Rezende, Yannis Kalantidis, and Diane Larlus. Probabilistic embeddings for cross-modal retrieval. In Proc. CVPR, 2021.
- Chen et al. (2021) Jiacheng Chen, Hexiang Hu, Hao Wu, Yuning Jiang, and Changhu Wang. Learning the best pooling strategy for visual semantic embedding. In Proc. CVPR, 2021.
- Huang et al. (2021) Zhenyu Huang, Guocheng Niu, Xiao Liu, Wenbiao Ding, Xinyan Xiao, Hua Wu, and Xi Peng. Learning with noisy correspondence for cross-modal matching. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan, editors, Proc. NeurIPS, 2021. URL https://openreview.net/forum?id=S9ZyhWC17wJ.
- Biten et al. (2022) Ali Furkan Biten, Andres Mafla, Lluís Gómez, and Dimosthenis Karatzas. Is an image worth five sentences? a new look into semantics for image-text matching. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 1391–1400, 2022.
- Radford et al. (2021) Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. In Marina Meila and Tong Zhang, editors, Proc. ICML, volume 139 of Proceedings of Machine Learning Research, pages 8748–8763. PMLR, 18–24 Jul 2021. URL http://proceedings.mlr.press/v139/radford21a.html.
- Desai et al. (2021) Karan Desai, Gaurav Kaul, Zubin Aysola, and Justin Johnson. RedCaps: Web-curated image-text data created by the people, for the people. In NeurIPS Datasets and Benchmarks, 2021.
- Wah et al. (2011) Catherine Wah, Steve Branson, Peter Welinder, Pietro Perona, and Serge Belongie. The caltech-ucsd birds-200-2011 dataset, 2011.
- Krause et al. (2013) Jonathan Krause, Michael Stark, Jia Deng, and Li Fei-Fei. 3d object representations for fine-grained categorization. In Proc. CVPR Worshops, pages 554–561, 2013.
- Oh Song et al. (2016) Hyun Oh Song, Yu Xiang, Stefanie Jegelka, and Silvio Savarese. Deep metric learning via lifted structured feature embedding. In Proc. CVPR, pages 4004–4012, 2016.
- Liu et al. (2016) Ziwei Liu, Ping Luo, Shi Qiu, Xiaogang Wang, and Xiaoou Tang. Deepfashion: Powering robust clothes recognition and retrieval with rich annotations. In Proc. CVPR, pages 1096–1104, 2016.
- Musgrave et al. (2020) Kevin Musgrave, Serge Belongie, and Ser-Nam Lim. A metric learning reality check. In Proc. ECCV, 2020.
- Kim et al. (2021) Wonjae Kim, Bokyung Son, and Ildoo Kim. Vilt: Vision-and-language transformer without convolution or region supervision. In Proc. ICML, 2021.
- Parekh et al. (2020) Zarana Parekh, Jason Baldridge, Daniel Cer, Austin Waters, and Yinfei Yang. Crisscrossed captions: Extended intramodal and intermodal semantic similarity judgments for ms-coco. arXiv preprint arXiv:2004.15020, 2020.
- Cer et al. (2018) Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, et al. Universal sentence encoder. In Proc. EMNLP, 2018.
- Pennington et al. (2014) Jeffrey Pennington, Richard Socher, and Christopher D Manning. Glove: Global vectors for word representation. In Proc. EMNLP, pages 1532–1543, 2014.
- Snow et al. (2008) Rion Snow, Brendan O’Connor, Daniel Jurafsky, and Andrew Ng. Cheap and fast – but is it good? evaluating non-expert annotations for natural language tasks. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 254–263, Honolulu, Hawaii, October 2008. Association for Computational Linguistics. URL https://aclanthology.org/D08-1027.
- Sorokin and Forsyth (2008) Alexander Sorokin and David Forsyth. Utility data annotation with amazon mechanical turk. In 2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, pages 1–8, 2008. doi: 10.1109/CVPRW.2008.4562953.
- Ipeirotis et al. (2010) Panagiotis G Ipeirotis, Foster Provost, and Jing Wang. Quality management on amazon mechanical turk. In Proceedings of the ACM SIGKDD workshop on human computation, pages 64–67, 2010.
- Mehrabi et al. (2021) Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. A survey on bias and fairness in machine learning. ACM Comput. Surv., 54(6), July 2021. ISSN 0360-0300. doi: 10.1145/3457607. URL https://doi.org/10.1145/3457607.
- Scimeca et al. (2022) Luca Scimeca, Seong Joon Oh, Sanghyuk Chun, Michael Poli, and Sangdoo Yun. Which shortcut cues will dnns choose? a study from the parameter-space perspective. In International Conference on Learning Representations (ICLR), 2022.
- Boykov and Jolly (2001) Yuri Y Boykov and M-P Jolly. Interactive graph cuts for optimal boundary & region segmentation of objects in nd images. In Proc. ICCV, volume 1, pages 105–112. IEEE, 2001.
- Settles (2009) Burr Settles. Active learning literature survey, 2009.
- Xu et al. (2016) Ning Xu, Brian Price, Scott Cohen, Jimei Yang, and Thomas S Huang. Deep interactive object selection. In Proc. CVPR, pages 373–381, 2016.
- Benenson et al. (2019) Rodrigo Benenson, Stefan Popov, and Vittorio Ferrari. Large-scale interactive object segmentation with human annotators. In Proc. CVPR, pages 11700–11709, 2019.
- Chang et al. (2018) Minsuk Chang, Léonore V Guillain, Hyeungshik Jung, Vivian M Hare, Juho Kim, and Maneesh Agrawala. Recipescape: An interactive tool for analyzing cooking instructions at scale. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, pages 1–12, 2018.
- Guo et al. (2016) Anhong Guo, X. Chen, Haoran Qi, Samuel White, Suman Ghosh, C. Asakawa, and Jeffrey P. Bigham. Vizlens: A robust and interactive screen reader for interfaces in the real world. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology, 2016.
- Nushi et al. (2018) Besmira Nushi, Ece Kamar, and E. Horvitz. Towards accountable ai: Hybrid human-machine analyses for characterizing system failure. In HCOMP, 2018.
- Wu and Yang (2006) Wen Wu and Jie Yang. Smartlabel: An object labeling tool using iterated harmonic energy minimization. In Proceedings of the 14th ACM international conference on Multimedia, pages 891–900, 2006.
- Verma and Jawahar (2017) Yashaswi Verma and CV Jawahar. Image annotation by propagating labels from semantic neighbourhoods. IJCV, 121(1):126–148, 2017.
- Andriluka et al. (2018) Mykhaylo Andriluka, Jasper RR Uijlings, and Vittorio Ferrari. Fluid annotation: a human-machine collaboration interface for full image annotation. In Proceedings of the 26th ACM international conference on Multimedia, pages 1957–1966, 2018.
- Kuznetsova et al. (2020) Alina Kuznetsova, Hassan Rom, Neil Alldrin, Jasper Uijlings, Ivan Krasin, Jordi Pont-Tuset, Shahab Kamali, Stefan Popov, Matteo Malloci, Alexander Kolesnikov, et al. The open images dataset v4. IJCV, 128(7):1956–1981, 2020.
- Kaplan et al. (2018) Toni Kaplan, S. Saito, Kotaro Hara, and Jeffrey P. Bigham. Striving to earn more: A survey of work strategies and tool use among crowd workers. In HCOMP, 2018.
- Song et al. (2018) Jean Y. Song, Raymond Fok, Alan Lundgard, Fan Yang, Juho Kim, and Walter S. Lasecki. Two tools are better than one: Tool diversity as a means of improving aggregate crowd performance. In Proceedings of the 23rd International Conference on Intelligent User Interfaces, 2018.
- Chung et al. (2019) John Joon Young Chung, Jean Y. Song, Sindhu Kutty, Sungsoo Hong, Juho Kim, and Walter S. Lasecki. Efficient elicitation approaches to estimate collective crowd answers. Proceedings of the ACM on Human-Computer Interaction, 3:1–25, 2019.
- Bernstein et al. (2010) Michael S Bernstein, Greg Little, Robert C Miller, Björn Hartmann, Mark S Ackerman, David R Karger, David Crowell, and Katrina Panovich. Soylent: a word processor with a crowd inside. In Proceedings of the 23rd annual ACM symposium on User interface software and technology, pages 313–322, 2010.
- Kim et al. (2014) Juho Kim, P. Nguyen, Sarah A. Weir, Philip J. Guo, Rob Miller, and Krzysztof Z Gajos. Crowdsourcing step-by-step information extraction to enhance existing how-to videos. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2014.
- Cho et al. (2014) Kyunghyun Cho, Bart Van Merriënboer, Dzmitry Bahdanau, and Yoshua Bengio. On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259, 2014.
- Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Proc. NeurIPS, pages 5998–6008, 2017.
- He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proc. CVPR, 2016.
- Ren et al. (2015) Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In Proc. NeurIPS, pages 91–99, 2015.
- Dosovitskiy et al. (2021) Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In Proc. ICLR, 2021. URL https://openreview.net/forum?id=YicbFdNTTy.
- Yun et al. (2019) Sangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, and Youngjoon Yoo. Cutmix: Regularization strategy to train strong classifiers with localizable features. In Proc. ICCV, 2019.
- Radford et al. (2019) Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
- Kendall (1938) Maurice G Kendall. A new measure of rank correlation. Biometrika, 30(1/2):81–93, 1938.
- Karpathy and Fei-Fei (2015) Andrej Karpathy and Li Fei-Fei. Deep visual-semantic alignments for generating image descriptions. In Proc. CVPR, pages 3128–3137, 2015.
- Bradley and Terry (1952) Ralph Allan Bradley and Milton E Terry. Rank analysis of incomplete block designs: I. the method of paired comparisons. Biometrika, 39(3/4):324–345, 1952.
- Anderson et al. (2018) Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. Bottom-up and top-down attention for image captioning and visual question answering. In Proc. CVPR, 2018.
- Zhang et al. (2021) Pengchuan Zhang, Xiujun Li, Xiaowei Hu, Jianwei Yang, Lei Zhang, Lijuan Wang, Yejin Choi, and Jianfeng Gao. Vinvl: Making visual representations matter in vision-language models. In Proc. CVPR, 2021.
- Li et al. (2022) Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation. In Proc. ICML, 2022.
- Schroff et al. (2015) Florian Schroff, Dmitry Kalenichenko, and James Philbin. Facenet: A unified embedding for face recognition and clustering. In Proc. CVPR, pages 815–823, 2015.
- Krishna et al. (2017) Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. Visual genome: Connecting language and vision using crowdsourced dense image annotations. IJCV, 123(1):32–73, 2017.
- Mahajan et al. (2018a) Dhruv Kumar Mahajan, Ross B. Girshick, Vignesh Ramanathan, Kaiming He, Manohar Paluri, Yixuan Li, Ashwin Bharambe, and Laurens van der Maaten. Exploring the limits of weakly supervised pretraining. In Proc. ECCV, 2018a.
- Heo et al. (2021) Byeongho Heo, Sanghyuk Chun, Seong Joon Oh, Dongyoon Han, Sangdoo Yun, Gyuwan Kim, Youngjung Uh, and Jung-Woo Ha. Adamp: Slowing down the slowdown for momentum optimizers on scale-invariant weights. In Proc. ICLR, 2021.
- Loshchilov and Hutter (2017) Ilya Loshchilov and Frank Hutter. Sgdr: Stochastic gradient descent with warm restarts. In Proc. ICLR, 2017.
- Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proc. NAACL-HLT, pages 4171–4186, 2019. URL https://doi.org/10.18653/v1/n19-1423.
- Geirhos et al. (2018) Robert Geirhos, Carlos RM Temme, Jonas Rauber, Heiko H Schütt, Matthias Bethge, and Felix A Wichmann. Generalisation in humans and deep neural networks. In Advances in Neural Information Processing Systems, pages 7538–7550, 2018.
- Russakovsky et al. (2015) Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large scale visual recognition challenge. International journal of computer vision, 115(3):211–252, 2015.
- Mahajan et al. (2018b) Dhruv Mahajan, Ross Girshick, Vignesh Ramanathan, Kaiming He, Manohar Paluri, Yixuan Li, Ashwin Bharambe, and Laurens Van Der Maaten. Exploring the limits of weakly supervised pretraining. In Proc. ECCV, pages 181–196, 2018b.
- Singh et al. (2022) Mannat Singh, Laura Gustafson, Aaron Adcock, Vinicius de Freitas Reis, Bugra Gedik, Raj Prateek Kosaraju, Dhruv Mahajan, Ross Girshick, Piotr Dollár, and Laurens van der Maaten. Revisiting weakly supervised pre-training of visual perception models, 2022.
- Jia et al. (2021) Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. Scaling up visual and vision-language representation learning with noisy text supervision. In Proc. ICML, pages 4904–4916. PMLR, 2021.
- Yun et al. (2021) Sangdoo Yun, Seong Joon Oh, Byeongho Heo, Dongyoon Han, Junsuk Choe, and Sanghyuk Chun. Re-labeling imagenet: from single to multi-labels, from global to localized labels. In Proc. CVPR, 2021.
- Zhang et al. (2018) Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. In Proc. ICLR, 2018.
- Cubuk et al. (2019) Ekin D Cubuk, Barret Zoph, Dandelion Mane, Vijay Vasudevan, and Quoc V Le. Autoaugment: Learning augmentation strategies from data. In Proc. CVPR, pages 113–123, 2019.
- Cubuk et al. (2020) Ekin D Cubuk, Barret Zoph, Jonathon Shlens, and Quoc V Le. Randaugment: Practical automated data augmentation with a reduced search space. In Proc. CVPR Workshops, pages 702–703, 2020.
- Kayser et al. (2021) Maxime Kayser, Oana-Maria Camburu, Leonard Salewski, Cornelius Emde, Virginie Do, Zeynep Akata, and Thomas Lukasiewicz. e-vil: A dataset and benchmark for natural language explanations in vision-language tasks, 2021.
- Sager et al. (2021) Christoph Sager, Christian Janiesch, and Patrick Zschech. A survey of image labelling for computer vision applications. Journal of Business Analytics, pages 1–20, 2021.
- Johnson et al. (2000) Richard A Johnson, Irwin Miller, and John E Freund. Probability and statistics for engineers. Pearson Education, London, 2000.
- Bishop (2006) Christopher M Bishop. Pattern recognition and machine learning. Springer, 2006.
- Plous (1993) Scott Plous. The psychology of judgment and decision making. Mcgraw-Hill Book Company, 1993.
- Easterbrook et al. (1991) Phillipa J Easterbrook, Ramana Gopalan, JA Berlin, and David R Matthews. Publication bias in clinical research. The Lancet, 337(8746):867–872, 1991.
- Mangel and Samaniego (1984) Marc Mangel and Francisco J Samaniego. Abraham wald’s work on aircraft survivability. Journal of the American Statistical Association, 79(386):259–267, 1984.