diff --git "a/20240119/2212.08044v3.json" "b/20240119/2212.08044v3.json" new file mode 100644--- /dev/null +++ "b/20240119/2212.08044v3.json" @@ -0,0 +1,1627 @@ +{ + "title": "Benchmarking Robustness of Multimodal Image-Text Models under Distribution Shift", + "abstract": "Multimodal image-text models have shown remarkable performance in the past few years. However, evaluating robustness against distribution shifts is crucial before adopting them in real-world applications. In this work, we investigate the robustness of 12 popular open-sourced image-text models under common perturbations on five tasks (image-text retrieval, visual reasoning, visual entailment, image captioning, and text-to-image generation). In particular, we propose several new multimodal robustness benchmarks by applying 17 image perturbation and 16 text perturbation techniques on top of existing datasets. We observe that multimodal models are not robust to image and text perturbations, especially to image perturbations. Among the tested perturbation methods, character-level perturbations constitute the most severe distribution shift for text, and zoom blur is the most severe shift for image data. We also introduce two new robustness metrics (MMI for MultiModal Impact score and MOR for Missing Object Rate) for proper evaluations of multimodal models. We hope our extensive study sheds light on new directions for the development of robust multimodal models. More details can be found on the project webpage: https://MMRobustness.github.io.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "###figure_1### Multimodal learning has drawn increasing attention, and many datasets and models are collected and proposed to accelerate research in this field (Chen et al., 2020 ###reference_10###; Gan et al., 2020 ###reference_28###; Li et al., 2022b ###reference_58###, 2020b ###reference_59###; Zhang et al., 2021 ###reference_127###; Radford et al., 2021 ###reference_88###; Kim et al., 2021 ###reference_49###; Li et al., 2021a ###reference_52###, 2022a ###reference_53###; Yang et al., 2022 ###reference_119###; Dou et al., 2021 ###reference_20###; Ramesh et al., 2022 ###reference_89###; Wang et al., 2022b ###reference_106###; Alayrac et al., 2022 ###reference_1###; Radford et al., 2021 ###reference_88###; Yu et al., 2022 ###reference_123###).\nDespite the extraordinary performance and exciting potential, we find that multimodal models are often vulnerable under distribution shifts.\nIn Figure 1 ###reference_###, we show interesting examples of image captioning under image perturbations using BLIP (Li et al., 2022a ###reference_53###), and text-to-image generation under text perturbations using Stable Diffusion (Rombach et al., 2022 ###reference_92###).\nFor image captioning, we observe that by simply adding noise, blur, or pixelation to the original image, the generated captions become incorrect.\nFor text-to-image generation, applying keyboard typos, OCR errors, or synonym replacements to the original sentence, can lead to generated images containing incomplete visual information.\n\n\n\nThere is a sizable literature on robustness evaluation of unimodal vision models (Yin et al., 2019 ###reference_121###; Zheng et al., 2016 ###reference_129###; Drenkow et al., 2021 ###reference_21###; Djolonga et al., 2021 ###reference_16###; Goyal et al., 2022 ###reference_33###; Paul and Chen, 2022 ###reference_87###; Bhojanapalli et al., 2021 ###reference_4###; Mahmood et al., 2021 
###reference_72###; Mao et al., 2021 ###reference_75###; Aldahdooh et al., 2021 ###reference_2###; Zhou et al., 2022 ###reference_130###; Wenzel et al., 2022 ###reference_112###) or unimodal language models (Wang et al., 2022c ###reference_108###; Chang et al., 2021 ###reference_8###; Wang et al., 2020 ###reference_107###; Rychalska et al., 2019 ###reference_93###; Goel et al., 2021 ###reference_30###; Singh et al., 2021 ###reference_100###; Dong et al., 2021 ###reference_17###; Gui et al., 2021 ###reference_34###; Malfa and Kwiatkowska, 2022 ###reference_73###; Wang et al., 2021 ###reference_104###).\nSeveral recent works (Galindo and Faria, 2021 ###reference_27###; Fort, 2021 ###reference_26###; Noever and Noever, 2021 ###reference_82###; Goh et al., 2021 ###reference_31###; Daras and Dimakis, 2022 ###reference_14###) have unsystematically tested or probed a few pre-trained multimodal models, including CLIP (Radford et al., 2021 ###reference_88###) and DALL-E 2 (Ramesh et al., 2022 ###reference_89###).\nHowever, the robustness evaluation of multimodal image-text models under distribution shift has rarely been studied.\nTo the best of our knowledge, there is currently neither a benchmark dataset nor a comprehensive study of how perturbed data affect the performance of these models.\nHence, in this work:\nWe build multimodal robustness evaluation benchmarks by leveraging existing datasets and tasks, e.g., image-text retrieval (Flickr30K, COCO), visual reasoning (NLVR2), visual entailment (SNLI-VE), image captioning (COCO), and text-to-image generation (COCO).\nWe analyze the robustness of 12 multimodal models under distribution shifts, which include 17 image perturbation and 16 text perturbation methods.\nWe introduce two new robustness metrics. One, termed MMI (MultiModal Impact score), accounts for the relative performance drop under distribution shift in 5 downstream applications. The other, named MOR (Missing Object Rate), is based on open-set language-guided object detection and is the first object-centric metric proposed for text-to-image generation evaluation.\nWe find that multimodal image-text models are more sensitive to image perturbations than to text perturbations. Zoom blur is the most effective attack among image perturbations, while character-level perturbations show a higher impact than word-level and sentence-level perturbations for text. Finally, we provide interpretations of the performance drops caused by different perturbation methods using Optimal Transport alignment and attention."
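To make the perturbation pipeline concrete, the sketch below illustrates one image corruption and one text corruption with controllable severity, in the spirit of the strategies described in Sec. 3. It is a minimal illustration only: the Gaussian-noise scales follow the severity values listed in Appendix Table 8, while the keyboard-neighbour map and the function names are hypothetical placeholders rather than the released benchmark code.

```python
# Illustrative sketch only (not the released benchmark code): one ImageNet-C-style
# image corruption and one keyboard-typo text corruption at a chosen severity.
import random
import numpy as np

def gaussian_noise(image, severity=1):
    """Add zero-mean Gaussian noise; the scales follow Appendix Table 8."""
    scale = [0.08, 0.12, 0.18, 0.26, 0.38][severity - 1]
    x = np.asarray(image, dtype=np.float32) / 255.0
    x = np.clip(x + np.random.normal(size=x.shape, scale=scale), 0.0, 1.0)
    return (x * 255).astype(np.uint8)

# Hypothetical toy neighbour map; a full implementation covers the whole keyboard.
KEY_NEIGHBOURS = {"a": "qs", "e": "wr", "l": "kp", "o": "ip", "t": "ry"}

def keyboard_typos(text, prob=0.1):
    """Replace characters with keyboard neighbours with probability `prob`."""
    out = []
    for ch in text:
        if ch.lower() in KEY_NEIGHBOURS and random.random() < prob:
            out.append(random.choice(KEY_NEIGHBOURS[ch.lower()]))
        else:
            out.append(ch)
    return "".join(out)

noisy_image = gaussian_noise(np.zeros((224, 224, 3), dtype=np.uint8), severity=3)
typo_caption = keyboard_typos("An orange metal bowl strainer filled with apples.")
```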
+ }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Related Work", + "text": "" + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Multimodal Robustness Benchmark", + "text": "Distribution shift is one of the major problems in applying models to real-world scenarios (Taori et al., 2020 ###reference_102###; Liu et al., 2022c ###reference_68###).\nDistribution shift happens when the training data distribution differs from the data distribution to which the model is applied at test time.\nA model is said to be robust on out-of-distribution (OOD) data if it still produces accurate predictions on such test data.\nTo evaluate the robustness of large pretrained multimodal models under distribution shift, we start by building several evaluation benchmark datasets by perturbing the original image-text pairs on either the image side or the text side.\nWe use these perturbations to simulate distribution shifts of various intensities and use them to stress-test the robustness of the given models." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Image Perturbation", + "text": "To simulate distribution shifts for the image data, we adopt the perturbation strategies from ImageNet-C (Hendrycks and Dietterich, 2019 ###reference_38###) and Stylize-ImageNet (Geirhos et al., 2019 ###reference_29###; Michaelis et al., 2019 ###reference_76###).\nWe include Stylize-ImageNet for its effectiveness in perturbing the original image by breaking its shape and texture (Geirhos et al., 2019 ###reference_29###).\nExamples of the perturbed images can be seen in Figure 2 ###reference_###.\nThe perturbations are grouped into five categories: noise, blur, weather, digital, and stylize.\nSpecifically, we use 17 image perturbation techniques: (1) Noise: Gaussian noise, shot noise, impulse noise, speckle noise; (2) Blur: defocus blur, frosted glass blur, motion blur, zoom blur; (3) Weather: snow, frost, fog, brightness; (4) Digital: contrast, elastic, pixelate, JPEG compression; and (5) stylize.\nSince real-world corruptions can manifest themselves at varying intensities, we introduce intensity variations for each corruption following (Hendrycks and Dietterich, 2019 ###reference_38###; Geirhos et al., 2019 ###reference_29###; Michaelis et al., 2019 ###reference_76###).\nIn our evaluation setting, each perturbation has five levels of severity, resulting in 85 perturbation methods in total.\nMore details can be found in Appendix Sec. 
A ###reference_###.\nNote that these strategies are commonly considered synthetic distribution shifts and can serve as a good starting point since they are precisely defined and easy to apply.\n###figure_2###" + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Text Perturbation", + "text": "To simulate the distribution shifts in language, we design 16 text perturbation techniques grouped into three categories: character-level, word-level, and sentence-level.\nExamples of the text perturbations are shown in Table 1 ###reference_###.\nIn detail, for character-level perturbation, we adopt 6 strategies from (Ma, 2019 ###reference_70###), including\nkeyboard, OCR, character insert (CI), character replace (CR), character swap (CS), and character delete (CD).\nThese perturbations can be considered as simulating real-world typos or mistakes made during typing.\nFor word-level perturbation, we adopt 5 strategies from EDA and AEDA (Wei and Zou, 2019 ###reference_111###; Karimi et al., 2021 ###reference_48###), including synonym replacement (SR), word insertion (WI), word swap (WS), word deletion (WD), and insert punctuation (IP).\nThese perturbations aim to simulate different writing habits, where people may replace, delete, or add words to express the same meaning.\nFor sentence-level perturbation, (1) we first adopt the style transformation strategies from (Li et al., 2018 ###reference_51###; Etinger and Black, 2019 ###reference_24###; Schmidt, 2020 ###reference_96###; Schiappa et al., 2022 ###reference_95###), i.e., transferring the style of text into formal, casual, passive, and active; (2) we also adopt the back translation method from (Ma, 2019 ###reference_70###).\nThese perturbations focus more on language semantics, reflecting differences in speaking or writing styles, or translation errors.\nSimilar to image perturbations, we introduce severity levels to each strategy.\nFor strategies within the character-level and word-level perturbations, we apply 5 severity levels similar to image perturbations, while for strategies within the sentence-level perturbations, there is only one severity level.\nThis leads to a total of 60 text perturbation methods.\nMore details about each text perturbation strategy can be found in Appendix Sec. A ###reference_###.\nWe emphasize that these perturbation techniques cover some of the actual text distribution shifts we encounter in real-world applications (e.g., typos, word swaps, style changes, etc.). Models for text data that are deployed in real-world settings need to be robust with respect to these perturbations.\nCategory\nPerturbation\n\nExample\n\n\n\nOriginal\nClean\n\nAn orange metal bowl strainer filled with apples.\n\nCharacter\nKeyboard\n\nAn orange metal bowk strainer filled witj apples.\n\nOCR\n\nAn 0range metal bowl strainer filled with app1es.\n\nCI\n\nAnd orange metal bowl strainer filled with atpples.\n\nCR\n\nAn orange metal towl strainer fillet with apples.\n\nCS\n\nAn orange meatl bowl stariner filled with apples.\n\nCD\n\nAn orang[X] metal bowl strainer fil[X]ed with apples.\n\nWord\nSR\n\nAn orange alloy bowl strainer filled with apples.\n\nWI\n\nAn old orange metal bowl strainer filled with apples.\n\nWS\n\nAn orange metal strainer bowl filled with apples.\n\nWD\n\nAn orange metal bowl strainer [X] with apples.\n\nIP\n\nAn orange metal bowl ? 
strainer filled with apples.\n\nSentence\nFormal\n\nAn orange metal bowl strainer contains apples.\n\nCasual\n\nAn orange metal bowl is filled with apples.\n\nPassive\n\nSome apples are in an orange metal bowl strainer.\n\nActive\n\nThere are apples in an orange metal bowl strainer.\n\nBack trans\n\nApples are placed in an orange metal bowl strainer.\nTo build a convincing benchmark, we need to ensure that the perturbed text has the same semantics as the original one.\nOtherwise, for image-text pairs in multimodal learning, the perturbed text will not match the original image and, hence, would no longer represent a meaningful image-text pair.\nIn this work, we use paraphrases from pretrained sentence-transformers (Reimers and Gurevych, 2019 ###reference_91###) to evaluate the semantic similarity between the original and the perturbed sentences.\nSpecifically, \u201cparaphrase-mpnet-base-v2\u201d (Reimers and Gurevych, 2019 ###reference_91###) is used to extract the original and perturbed sentence embeddings for computing a similarity score s.\nGiven a predefined tolerance threshold t, a score s >= t means the perturbed text still has similar semantics to the original text.\nHowever, if s < t, indicating that their semantics are different, we perturb the sentence again until the semantic similarity score meets the requirement, within a reasonable number of retries N.\nBeyond N retries, we remove this text sample from our robustness benchmark.\nMore details about the fidelity control process can be found in Appendix Sec. A ###reference_###.\nThis procedure guarantees semantic closeness and ensures that our perturbed data can serve as a valid evaluation benchmark for multimodal image-text models.\nTask\nDatasets\nModels\nEvaluation metrics\n\n\n\nImage-text Retrieval\nFlickr30K, COCO\nCLIP, ViLT, TCL, ALBEF, BLIP\nRecall R@K, K=1, 5, 10, and RSUM\n\nVisual Reasoning\nNLVR2\nALBEF, ViLT, BLIP, TCL, METER\nPrediction accuracy\n\nVisual Entailment\nSNLI-VE\nALBEF, TCL, METER\nPrediction accuracy\n\nImage Captioning\nCOCO\nBLIP, GRIT, LLaVA, Mini-GPT4, BLIP2\nBLEU, METEOR, ROUGE-L, CIDEr\n\nText-to-image Generation\nCOCO\nStable Diffusion, GLIDE\nFID, CLIP-FID, MOR (ours)" + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Experiments", + "text": "Using our multimodal robustness benchmark, we are able to answer the following questions: (1) How robust are multimodal pretrained image-text models under distribution shift? (2) What is the sensitivity of each model under different perturbation methods? (3) Which model architecture or loss objectives might be more robust under image or text perturbations?\n(4) Are there any particular image/text perturbation methods that can consistently show significant influence?" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Evaluation Tasks, Datasets and Models", + "text": "As shown in Table 2 ###reference_###, we select five widely adopted downstream tasks for a comprehensive robustness evaluation under distribution shift, including image-text retrieval, visual reasoning (VR), visual entailment (VE), image captioning, and text-to-image generation.\nFor each task, we perturb the corresponding datasets,\ni.e., Flickr30K (Young et al., 2014 ###reference_122###), COCO (Lin et al., 2014 ###reference_63###), NLVR2 (Suhr et al., 2017 ###reference_101###), and SNLI-VE (Xie et al., 2018 ###reference_116###, 2019b ###reference_117###),\nusing the image perturbation (IP) and text perturbation (TP) methods introduced in Sec. 
3 ###reference_###.\nThis leads to our 8 benchmark datasets: (1) Flickr30K-IP, Flickr30K-TP, COCO-IP, and COCO-TP for image-text retrieval evaluation; (2) NLVR2-IP and NLVR2-TP for visual reasoning evaluation; (3) SNLI-VE-IP and SNLI-VE-TP for visual entailment evaluation; (4) COCO-IP for image captioning evaluation; and (5) COCO-TP for text-to-image generation evaluation.\nWe select 12 representative large multimodal models, which have publicly released their code and pretrained weights: CLIP (Radford et al., 2021 ###reference_88###), ViLT (Kim et al., 2021 ###reference_49###), ALBEF (Li et al., 2021a ###reference_52###), BLIP (Li et al., 2022a ###reference_53###), TCL (Yang et al., 2022 ###reference_119###), METER (Dou et al., 2021 ###reference_20###), GRIT (Nguyen et al., 2022 ###reference_80###),\nLLaVA (Liu et al., 2023 ###reference_64###), Mini-GPT4 (Zhu et al., 2023 ###reference_131###), BLIP2 (Li et al., 2023 ###reference_54###),\nGLIDE (Nichol et al., 2022 ###reference_81###), and Stable Diffusion (Rombach et al., 2022 ###reference_92###). We thank the authors for making their models publicly available." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Evaluation Metrics", + "text": "We adopt standard evaluation metrics for each task. To be specific, for image-text retrieval, we use recall and RSUM (i.e., the sum of the recall R@K metrics (Wu et al., 2019 ###reference_114###)).\nAs for the visual reasoning and visual entailment tasks, we use prediction accuracy.\nFor image captioning, we use standard text evaluation metrics, i.e., BLEU (Papineni et al., 2002 ###reference_84###), METEOR (Denkowski and Lavie, 2014 ###reference_15###), ROUGE-L (Lin, 2004 ###reference_61###), and CIDEr (Vedantam et al., 2015 ###reference_103###).\nFor text-to-image generation, we use FID (Heusel et al., 2017 ###reference_43###) and CLIP-FID (Kynkaanniemi et al., 2022 ###reference_50###; Parmar et al., 2022 ###reference_85###) scores, and our proposed MOR (details will be introduced later) to evaluate the quality of the generated images.\nTo evaluate the robustness of a model, it is crucial to measure the relative performance drop between the in-distribution (ID) and out-of-distribution (OOD) performance.\nRecall the example given by Taori et al. (2020 ###reference_102###): let D be the ID dataset (where the model is trained) and D' be an OOD dataset; then a model A should be considered more robust than a model B if A\u2019s performance drop is less significant than B\u2019s when evaluated from D to D', even though B\u2019s absolute accuracy/recall on D' may still be higher than A\u2019s.\nTo quantitatively measure the robustness of multimodal image-text models, we introduce a new robustness evaluation metric, termed MultiModal Impact score (MMI).\nWe compute MMI as the averaged relative performance drop compared with the non-perturbed performance (\u201cclean\u201d), i.e., MMI = (S_clean - S_perturbed) / S_clean averaged over all perturbation methods, where S_perturbed is the perturbed score and S_clean is the clean score.\nHere, the score can be any standard metric mentioned above, e.g., recall, RSUM, accuracy, FID, and CLIP-FID.\nIn the following experiments, we report both the standard evaluation metrics on the perturbed (OOD) datasets as well as their corresponding MMI variants.\nMore details about experimental settings can be found in Appendix Sec. B ###reference_###."
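As a concrete reference, a minimal sketch of the MMI computation described above is given below; the function and variable names are ours (not taken from the released code), and for FID-style metrics, where lower is better, the sign of the drop would be flipped.

```python
# Minimal sketch of the MultiModal Impact score (MMI) for metrics where higher is
# better (recall, RSUM, accuracy); names are illustrative, not from the released code.
from typing import Dict

def mmi(clean_score: float, perturbed_scores: Dict[str, float]) -> float:
    """Averaged relative performance drop with respect to the clean score, in percent."""
    drops = [(clean_score - s) / clean_score for s in perturbed_scores.values()]
    return 100.0 * sum(drops) / len(drops)

# Example with a few RSUM values of ViLT on Flickr30K-IP taken from Table 3.
print(mmi(522.0, {"gaussian_noise": 413.0, "zoom_blur": 236.3, "brightness": 496.9}))
```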
+ }, + { + "section_id": "4.3", + "parent_section_id": "4", + "section_name": "Robustness Evaluation under Distribution Shift", + "text": "We present the evaluation results under image perturbations in Table 3 ###reference_### [Top] and results under text perturbations in Table 3 ###reference_### [Bottom].\nFor simplicity, we only report the RSUM scores here, and the detailed results on each recall (i.e., R1, R5, and R10) and perturbation level can be found in Appendix Sec. C ###reference_###.\n\n\nInspecting Table 3 ###reference_### [Top], we observe that the performance of all models drops under image perturbation.\nAlthough different perturbation methods have various impacts on different models, we observe the following general trends.\nWe find that most multimodal models are most sensitive to zoom blur.\nAdditionally, we find that glass blur and brightness are the two \u201csoftest\u201d perturbation methods, where the performance of all evaluated models deteriorates the least.\nComparing the MMI scores on both the Flickr30K and COCO datasets, CLIP zero-shot (ZS) is more robust than other models, possibly due to it being trained on the large WIT400M dataset (Radford et al., 2021 ###reference_88###). As indicated in Taori et al. (2020 ###reference_102###), training models on large and diverse datasets often leads to increased robustness.\nFor text perturbations in Table 3 ###reference_### [Bottom], we also find that the performance of all models drops.\nIn addition, we observe the following general trends. Character-level perturbations show more influence than word-level and sentence-level perturbations.\nIn particular, keyboard and character replace (CR) consistently show a high impact on models\u2019 robustness, while insert punctuation (IP), formal, and active are the least effective text perturbations.\n\n\n\nFor both image and text perturbations, we see that BLIP shows the best robustness performance on both datasets, i.e., the lowest MMI score.\nWe hypothesize that using an encoder-decoder architecture and a generative language modeling objective in BLIP is helpful for image-text retrieval.\nGiven the recent paradigm shift to using generative loss objectives in pre-training multimodal models, e.g., BLIP (Li et al., 2022a ###reference_53###), CoCa (Yu et al., 2022 ###reference_123###), SimVLM (Wang et al., 2022d ###reference_110###), PaLI (Chen et al., 2022 ###reference_9###), Unified-IO (Lu et al., 2022 ###reference_69###), OFA (Wang et al., 2022b ###reference_106###), we believe this observation could be generalized to other multimodal tasks.\n###figure_3### ###figure_4### ###figure_5### ###figure_6### We provide qualitative evidence by visualizing the cross-modal alignment between image patches and word queries using optimal transport (Kim et al., 2021 ###reference_49###).\nAs shown in Figure 4 ###reference_###, when using the GT image-text pair, the retrieval model can accurately locate the image patches given a word query.\nAfter image perturbations, in particular the ones with high impact like pixelate and zoom blur, we can clearly see that the model has difficulties finding the correct alignment.\nHowever, for the \u201csoftest\u201d perturbations like brightness and glass blur, the model is still able to generate a transport plan (OT coupling matrix) between words and image patches.\nSimilarly, in Figure 4 ###reference_### where the text is perturbed, we can see that the retrieval model cannot locate the correct word query under keyboard and CR, but still functions well under IP and formal.\nOverall, 
the visualization of word-patch alignments in Figure 4 ###reference_### and 4 ###reference_### confirms the conclusion drawn from Table 3 ###reference_###, showing that the alignments are worst for perturbations that lead to the highest performance degradation.\nThese two tasks are commonly considered to be multimodal classification problems.\nWe present the accuracy results in Tables 11 ###reference_### & 13 ###reference_###, and Tables 12 ###reference_### & 14 ###reference_### (in Appendix Sec. D ###reference_### and Appendix Sec. E ###reference_###) under image and text perturbations, respectively.\n\nFor both the visual reasoning (VR) and visual entailment (VE) tasks, we observe that zoom blur consistently impacts the model performance the most.\nCharacter-level perturbations show a stronger influence than word-level and sentence-level perturbations, which conforms to the observation for image-text retrieval.\nNote that for visual reasoning, the most influential text perturbations are different across the different models, but they all belong to the character-level perturbation category.\nGlass blur is the \u201csoftest\u201d image perturbation for visual reasoning and brightness for visual entailment. Regarding text perturbations, insert punctuation and sentence-level perturbations like formal and active have the least impact on the model\u2019s performance for both tasks.\n\n\n\nInterestingly, when comparing the robustness of the different models, we make the following observation.\nAlthough TCL is closely related to ALBEF, its robustness performance in terms of the MMI score is significantly better.\nThe major difference between both models is that TCL incorporates an intra-modal contrastive loss objective on top of ALBEF, which enforces the learned representations to be semantically meaningful. In addition to our findings, it has been previously shown that this strategy is also useful in mitigating the noise in training data (Yang et al., 2022 ###reference_119###). Building on these observations, we recommend considering both intra-modal and cross-modal relations in multimodal representation learning to improve robustness.\n###figure_7### In this section, we present the image captioning results of BLIP (Li et al., 2022a ###reference_53###) and GRIT (Nguyen et al., 2022 ###reference_80###) under image perturbations. We present the common evaluation metrics BLEU-4 and CIDEr in Figure 5 ###reference_### and leave other metrics and more results with LLaVA (Liu et al., 2023 ###reference_64###), Mini-GPT4 (Zhu et al., 2023 ###reference_131###), BLIP2 (Li et al., 2023 ###reference_54###) to Appendix Sec. 
F ###reference_###.\nAs shown in Figure 5 ###reference_###, zoom blur consistently has the largest impact of all perturbations on both models.\nOn the other hand, both models are least sensitive to\nglass blur, brightness, and JPEG compression.\nIn addition, we find that across all six considered evaluation metrics, the CIDEr scores are most sensitive to the perturbations, which suggests that CIDEr is an informative metric for robustness evaluation.\n\nWe provide further insights into the effect of the perturbations by inspecting the Grad-CAM (Selvaraju et al., 2017 ###reference_99###) visualization of BLIP in Figure 5 ###reference_### (c).\nGiven an image, we expect that a robust model is able to attend to different objects according to the word query.\nConfirming the results shown in the bar plots of Figure 5 ###reference_###, we find that the \u201chardest\u201d perturbations, including zoom blur and pixelate, distract the attention of the model the most. For instance, BLIP cannot localize the table or the glasses in the perturbed images.\nHowever, for \u201csoft\u201d perturbations like brightness, BLIP is able to provide reasonable localization.\n###figure_8### We present a robustness evaluation for text-to-image generation using two popular generative models, Stable Diffusion (Rombach et al., 2022 ###reference_92###) and GLIDE (Nichol et al., 2022 ###reference_81###), under text perturbations.\nDue to limited space, we only show the results and analysis for Stable Diffusion here and present the results for GLIDE in Appendix Sec. G ###reference_###.\nSince diversity is essential in text-to-image generation, we generate multiple images given one text for a proper analysis.\nTo assess the diversity, we provide three evaluation settings, where each caption in the dataset is used to generate 4, 8, and 16 images.\nWe adopt the common FID (Heusel et al., 2017 ###reference_43###) score and CLIP-FID (Kynkaanniemi et al., 2022 ###reference_50###; Parmar et al., 2022 ###reference_85###) score as evaluation metrics and report the mean and standard deviation.\n\n\nAs shown in Figure 6 ###reference_### (a) and (b), we surprisingly find that even for the generation task, character-level perturbations affect the robustness of the models the most compared to word-level and sentence-level perturbations.\nFurthermore, generating more images reduces the variance under each perturbation (e.g., comparing the green against the blue bars). Additionally, we perform a t-test on the generated images and find that they are not correlated after perturbation, according to the p-values. This indicates that most text perturbations have an influence on text-to-image generation. Our finding is also corroborated by recent prompt engineering work, where well-designed prompt components can produce coherent outputs (Liu and Chilton, 2022 ###reference_66###).\n\n\n\n\nLastly, we also provide a further inspection of Stable Diffusion by Grad-CAM visualization in Figure 6 ###reference_### (c). We use the original unperturbed word query to visualize the attention map. Keyboard, word deletion, and casual are shown as character-level, word-level, and sentence-level perturbation examples, respectively. 
In keyboard, the hydrant is missing; in word deletion, the color of the hydrant is incorrect, but no object is missing; in casual, the attention map perfectly matches the generated images. This shows that character-level perturbations can be more effective than word-level and sentence-level perturbations.\nAs shown by the word deletion example in Figure 6 ###reference_### (c), we find that Stable Diffusion does not explicitly bind attributes to objects, and the reconstructions from the model often mix up attributes and objects, similar to (Ramesh et al., 2022 ###reference_89###).\nTo further provide a quantitative evaluation of the quality of the generated images, we propose a new detection-based metric to capture whether the model can faithfully generate images with all the objects mentioned in the text.\nTo achieve this goal, we leverage an open-set zero-shot language-guided object detection model, i.e., GLIP (Li et al., 2021c ###reference_57###), to detect salient objects in the generated images.\nAs shown in Figure 7 ###reference_### left, the inputs to the GLIP model are a text prompt and the generated images from the text-to-image generation models.\nGiven that COCO is an object detection dataset with ground truth labels for the objects, we can simply use the combination of object names from the ground truth labels as the text prompt, e.g., \u201cdog, cake, broccoli\u201d.\nIf a ground truth object can be detected (with a detection confidence threshold), we assume the object is successfully generated by the text-to-image generation model; otherwise, the object is classified as missing.\n###figure_9### In Figure 7 ###reference_### right, we show a visual comparison of how perturbed captions can affect the generation quality with respect to missing objects.\nWe first use GT captions and perturbed captions to generate some images, and then perform object detection using GLIP on these images.\nNote that for all generated images, we always use the same ground truth COCO object names as text prompts.\nOn the top row, we can find that the objects in the prompt \u201ccat, pillow, desk\u201d can be detected successfully, which means they are faithfully generated by the Stable Diffusion model.\nHowever, for the bottom row, where the caption is perturbed (CR in this example), some objects cannot be detected and are considered missing, i.e., the pillow and the desk.\n\nHence, similar to the mean corruption error (mCE) in Xie et al. (2019a ###reference_115###), we define our detection-based score, termed Missing Object Rate (MOR), as MOR = 100% * (N_p - N_gt) / N_gt.\nHere N_p is the number of detected objects from images generated by perturbed captions, and N_gt is the number of detected objects from images generated by GT captions. A lower score indicates that more objects are missing, which suggests the perturbed text has a high impact on the underlying text-to-image generation model. 
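For clarity, a minimal sketch of the MOR computation as defined above is shown below; the function and variable names are illustrative, not taken from the released code.

```python
# Minimal sketch of the Missing Object Rate (MOR) defined above; names are illustrative.
def missing_object_rate(num_detected_perturbed: int, num_detected_gt: int) -> float:
    """MOR in percent: 0 for unperturbed captions; more negative = more objects missing."""
    return 100.0 * (num_detected_perturbed - num_detected_gt) / num_detected_gt

# E.g., if GLIP recovers 3210 ground-truth objects from images generated with clean
# captions but only 2810 from images generated with keyboard-perturbed captions:
print(missing_object_rate(2810, 3210))  # ~ -12.5, comparable in scale to Table 4
```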
As shown in Table 4 ###reference_###, we can clearly see that MOR drops significantly for images generated by character-level perturbed captions compared to word-level and sentence-level methods.\nThreshold\nSetting\nGT\nKeyboard\nOcr\nCI\nCR\nCS\nCD\nSR\nRI\nRS\nRD\nIP\nFormal\nCasual\nPassive\nActive\nBack_trans\n\n\n\n0.7\n4-images\n0.00\n-12.47\n-5.22\n-8.41\n-13.25\n-12.15\n-12.63\n-8.23\n-3.14\n-7.33\n-6.05\n-2.81\n-2.10\n-1.42\n-1.36\n0.27\n-0.86\n\n8-images\n0.00\n-11.00\n-4.27\n-6.62\n-11.79\n-11.09\n-10.76\n-6.77\n-1.62\n-6.59\n-4.31\n-2.83\n0.01\n0.69\n-0.17\n1.34\n0.44\n\n16-images\n0.00\n-11.53\n-4.29\n-6.96\n-11.72\n-11.59\n-10.86\n-6.88\n-1.65\n-6.66\n-4.48\n-2.90\n-0.16\n0.17\n-0.75\n0.76\n0.48\n\n0.5\n4-images\n0.00\n-5.33\n-2.97\n-2.96\n-6.60\n-3.97\n-2.45\n-1.00\n0.72\n-1.51\n-4.63\n-1.88\n-0.31\n-2.18\n2.17\n-0.30\n0.65\n\n8-images\n0.00\n-4.94\n-2.28\n-1.18\n-5.83\n-2.48\n-1.55\n-0.34\n1.70\n-1.26\n-2.72\n-1.06\n0.17\n-1.00\n3.41\n0.42\n1.02\n\n16-images\n0.00\n-4.95\n-1.76\n-1.65\n-5.02\n-2.01\n-2.03\n-0.62\n1.41\n-0.90\n-2.50\n-0.69\n0.50\n0.08\n3.36\n0.26\n1.41" + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Discussion", + "text": "Reflecting on the results, we are now equipped to address the questions we initially posed: \n(1) How robust are multimodal pretrained image-text models under distribution shift?\nMultimodal image-text models are sensitive to distribution shifts caused by image and text perturbations, especially shifts in the image space. \n(2) What is the sensitivity of each model under different perturbation methods?\nThe sensitivity of different models under different perturbation methods is different. For example, for the image-text retrieval task, under both image and text perturbations, we can see that BLIP shows the best robustness performance, i.e., the lowest MMI score. \n(3) Which model architecture or loss objectives might be more robust under image or text perturbations?\nWe hypothesize that using an encoder-decoder architecture and generative language modeling objective is helpful . Given the recent paradigm shift to using generative loss objectives in pre-training multimodal models, e.g., BLIP (Li et al., 2022a ###reference_53###), CoCa (Yu et al., 2022 ###reference_123###), SimVLM (Wang et al., 2022d ###reference_110###), PaLI (Chen et al., 2022 ###reference_9###), Unified-IO (Lu et al., 2022 ###reference_69###), OFA (Wang et al., 2022a ###reference_105###), we believe this observation could be generalized to other multimodal tasks. \n(4) Are there any particular image/text perturbation methods that can consistently show significant influence?\nFor image perturbations, zoom blur consistently shows the highest impact on the model\u2019s robustness across 5 tasks, while glass blur and brightness are the least harmful ones.\nFor text, character-level perturbations have a higher impact than word-level and sentence-level perturbations. In particular, keyboard and character replace consistently show high impact, while insert punctuation, formal, and active are the three least effective ones across different settings." + }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "In this work, we investigate the robustness of large multimodal image-text models under distribution shifts. We introduce several evaluation benchmarks based on 17 image perturbation and 16 text perturbation strategies. 
We study 5 important downstream tasks, including image-text retrieval, visual reasoning, visual entailment, image captioning, and text-to-image generation, and evaluate 12 popular image-text models.\nWe hope that our proposed benchmark is valuable for analyzing the robustness of image-text models and that our findings provide inspiration to develop and deploy more robust models for real-world applications." + }, + { + "section_id": "7", + "parent_section_id": null, + "section_name": "Broader Impact Statement", + "text": "" + } + ], + "appendix": [ + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix A Perturbation Strategies", + "text": "" + }, + { + "section_id": "Appendix 2", + "parent_section_id": null, + "section_name": "Appendix B Experimental Settings", + "text": "" + }, + { + "section_id": "Appendix 3", + "parent_section_id": null, + "section_name": "Appendix C More Results on Image-Text Retrieval", + "text": "" + }, + { + "section_id": "Appendix 4", + "parent_section_id": null, + "section_name": "Appendix D More Results on Visual Reasoning", + "text": "" + }, + { + "section_id": "Appendix 5", + "parent_section_id": null, + "section_name": "Appendix E More Results on Visual Entailment", + "text": "" + }, + { + "section_id": "Appendix 6", + "parent_section_id": null, + "section_name": "Appendix F More Results on Image Captioning", + "text": "" + }, + { + "section_id": "Appendix 7", + "parent_section_id": null, + "section_name": "Appendix G More Results on Text-to-Image Generation", + "text": "" + }, + { + "section_id": "Appendix 8", + "parent_section_id": null, + "section_name": "Appendix H Learning-based Distribution Shift", + "text": "In addition to the synthetic perturbation methods in the paper, we also included learning-based distribution shifts (e.g., adversarial robustness) in our evaluation.\nWe followed Zhang et al. (2022 ###reference_126###) and adopted several adversarial perturbation methods, which are shown in Table 21 ###reference_###.\nWe conducted experiments using the adversarial perturbation methods in Table 21 ###reference_### on the image-text retrieval task, and the results are shown in the tables below. We provide the results of ALBEF and CLIP on the Flickr30K and COCO datasets in Tables 22 ###reference_###, 23 ###reference_###, 24 ###reference_###.\nIn Table 22 ###reference_###, we show the image-text retrieval results when adding adversarial perturbations on the image modality only, using FGSM (Goodfellow et al., 2014 ###reference_32###). In Table 23 ###reference_###, we show the image-text retrieval results when adding adversarial perturbations on the text modality only, using BERT-Attack (Li et al., 2020a ###reference_56###). In Table 24 ###reference_###, we show the image-text retrieval results when adding adversarial perturbations on both modalities, using Fooling VQA (Xu et al., 2017 ###reference_118###), SSAP (Yang et al., 2021 ###reference_120###), SSAP-MIM (Dong et al., 2017 ###reference_18###), SSAP-SI (Lin et al., 2019 ###reference_62###), and Co-Attack (Zhang et al., 2022 ###reference_126###).\nFrom the results in Tables 22 ###reference_###, 23 ###reference_###, 24 ###reference_###, we can find that adversarial perturbations can also have a significant impact on the robustness performance. In particular, image adversarial perturbations show a larger influence on the model\u2019s performance than text adversarial perturbations. 
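For reference, the image-side attack mentioned above (FGSM) can be sketched as follows; this is a minimal PyTorch illustration, and the loss function and step size are placeholders rather than the exact attack configuration used for Table 22.

```python
# Minimal FGSM sketch (Goodfellow et al., 2014); `loss_fn` and `eps` are placeholders,
# not the exact configuration behind the adversarial results reported here.
import torch

def fgsm(images: torch.Tensor, loss_fn, eps: float = 2.0 / 255.0) -> torch.Tensor:
    images = images.clone().detach().requires_grad_(True)
    loss = loss_fn(images)  # e.g., negative image-text matching score of the model
    loss.backward()
    adv = images + eps * images.grad.sign()  # one signed gradient step
    return adv.clamp(0.0, 1.0).detach()
```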
In addition, combining image and text adversarial perturbations can even lead to a larger performance impact than unimodal adversarial perturbations. As for the multimodal adversarial perturbations, Fooling VQA shows the least performance influence, while Co-Attack shows the highest ability in attacking models." + }, + { + "section_id": "Appendix 9", + "parent_section_id": null, + "section_name": "Appendix I Discussion", + "text": "" + }, + { + "section_id": "Appendix 10", + "parent_section_id": null, + "section_name": "Appendix J More Related Work", + "text": "" + }, + { + "section_id": "Appendix x1", + "parent_section_id": null, + "section_name": "ML reproducibility checklist", + "text": "For all authors\u2026\nDo the main claims made in the abstract and introduction accurately reflect the paper\u2019s contributions and scope?\n[Yes]\nDid you describe the limitations of your work?\n[Yes]\nDid you discuss any potential negative societal impacts of your work?\n[Yes]\nHave you read the ethics review guidelines and ensured that your paper conforms to them?\n[Yes]\nIf you are including theoretical results\u2026\nDid you state the full set of assumptions of all theoretical results?\n[N/A]\nDid you include complete proofs of all theoretical results?\n[N/A]\nIf you ran experiments (e.g. for benchmarks)\u2026\nDid you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?\n[Yes] The datasets and models are publicly available. 
Our code can be found on the project webpage:\nhttps://MMRobustness.github.io ###reference_MMRobustness.github.io###\nDid you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?\n[Yes]\nDid you report error bars (e.g., with respect to the random seed after running experiments multiple times)?\n[Yes]\nDid you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?\n[Yes]\nIf you are using existing assets (e.g., code, data, models) or curating/releasing new assets\u2026\nIf your work uses existing assets, did you cite the creators?\n[Yes]\nDid you mention the license of the assets?\n[Yes]\nDid you include any new assets either in the supplemental material or as a URL?\n[Yes]\nDid you discuss whether and how consent was obtained from people whose data you\u2019re using/curating?\n[N/A]\nDid you discuss whether the data you are using/curating contains personally identifiable information or offensive content?\n[N/A]\nIf you used crowdsourcing or conducted research with human subjects\u2026\nDid you include the full text of instructions given to participants and screenshots, if applicable?\n[Yes]\nDid you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable?\n[N/A]\nDid you include the estimated hourly wage paid to participants and the total amount spent on participant compensation?\n[Yes]" + } + ], + "tables": { + "1": { + "table_html": "
\n
Table 1: Example of our 16 text perturbations. The original text is taken from the COCO dataset and denoted as clean in the first row.
\n
\n

\n\n\n\nCategory\nPerturbation\n\nExample\n\n\n\nOriginal\nClean\n\nAn orange metal bowl strainer filled with apples.\n\nCharacter\nKeyboard\n\nAn orange metal bowk strainer filled witj apples.\n\nOCR\n\nAn 0range metal bowl strainer filled with app1es.\n\nCI\n\nAnd orange metal bowl strainer filled with atpples.\n\nCR\n\nAn orange metal towl strainer fillet with apples.\n\nCS\n\nAn orange meatl bowl stariner filled with apples.\n\nCD\n\nAn orang[X] metal bowl strainer fil[X]ed with apples.\n\nWord\nSR\n\nAn orange alloy bowl strainer filled with apples.\n\nWI\n\nAn old orange metal bowl strainer filled with apples.\n\nWS\n\nAn orange metal strainer bowl filled with apples.\n\nWD\n\nAn orange metal bowl strainer [X] with apples.\n\nIP\n\nAn orange metal bowl ? strainer filled with apples.\n\nSentence\nFormal\n\nAn orange metal bowl strainer contains apples.\n\nCasual\n\nAn orange metal bowl is filled with apples.\n\nPassive\n\nSome apples are in an orange metal bowl strainer.\n\nActive\n\nThere are apples in an orange metal bowl strainer.\n\nBack trans\n\nApples are placed in an orange metal bowl strainer.\n\n

\n
\n
", + "capture": "Table 1: Example of our 16 text perturbations. The original text is taken from the COCO dataset and denoted as clean in the first row." + }, + "2": { + "table_html": "
\n
Table 2: Evaluation tasks, datasets, models and metrics used in our study.
\n
\n

\n\n\n\nTask\nDatasets\nModels\nEvaluation metrics\n\n\n\nImage-text Retrieval\nFlickr30K, COCO\nCLIP, ViLT, TCL, ALBEF, BLIP\nRecall R@K, K=1, 5, 10, and RSUM\n\nVisual Reasoning\nNLVR2\nALBEF, ViLT, BLIP, TCL, METER\nPrediction accuracy\n\nVisual Entailment\nSNLI-VE\nALBEF, TCL, METER\nPrediction accuracy\n\nImage Captioning\nCOCO\nBLIP, GRIT, LLaVA, Mini-GPT4, BLIP2\nBLEU, METEOR, ROUGE-L, CIDEr\n\nText-to-image Generation\nCOCO\nStable Diffusion, GLIDE\nFID, CLIP-FID, MOR (ours)\n\n

\n
\n
", + "capture": "Table 2: Evaluation tasks, datasets, models and metrics used in our study." + }, + "3": { + "table_html": "
\n
Table 3: Image-text retrieval. [Top] Robustness evaluations on Flickr30k-IP and COCO-IP.\n[Bottom] Robustness evaluations on Flickr30k-TP and COCO-TP datasets.\nWe report averaged RSUM where the most effective perturbation results are marked in bold, and the least effective perturbation results are underlined. The MMI impact score is marked in blue, the lower the better.
\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
NoiseBlurWeatherDigitalStylize
DatasetMethod\u2009CleanGauss.ShotImpulseSpeckleDefocusGlassMotionZoom\u00a0Snow\u00a0Frost\u00a0Fog\u00a0BrightContrastElasticPixelJPEGStylize\n\u2009ave \nMMI
Flickr30KViLT FT522.0413.0419.6396.9387.1417.6489.0388.4236.3332.7453.1455.8496.9372.2461.7277.4487.6387.1408.7\n 21.7%\n
CLIP ZS533.7501.7504.2481.2515.5502.1530.1509.7457.8470.7495.6519.7530.1515.4510.4469.5524.6447.6499.2\n 6.5%\n
CLIP FT544.3500.1503.8479.1522.1493.3536.9513.3444.4464.4503.2529.7543.5521.5513.9453.9528.6436.9499.3\n 8.3%\n
TCL ZS563.8464.9467.0458.4498.0429.8506.6388.5251.3407.3449.5434.2509.1473.2434.4247.2502.2343.4427.4\n 24.2%\n
TCL FT573.4529.9532.6527.7551.6504.5566.0513.9397.3521.7551.0554.1568.0557.1421.0372.0555.4448.7516.2\n 10.0%\n
ALBEF FT577.7533.8538.3532.0557.8528.8569.2516.0416.1532.0558.1560.4572.0550.6538.7435.9559.8464.1527.3\n 8.7%\n
BLIP FT580.9536.2538.9528.6560.8529.4571.6525.7412.1456.6513.4568.5574.4555.1545.6490.8563.8482.1527.2\n 9.2%\n
COCOViLT441.5372.2372.6362.9396.7378.1432.0365.4193.7281.1366.1398.1422.4327.1402.2229.8425.8333.9356.5\n 19.3%\n
CLIP ZS394.5363.0361.2330.2368.7358.7391.6362.2294.6294.7329.0371.8391.9356.4369.7308.2388.0314.9350.3\n 11.2%\n
CLIP FT420.5367.2365.3331.7381.5371.0412.2374.4291.0289.3337.3389.9413.9371.7379.7306.4402.1310.2358.5\n 14.7%\n
TCL ZS477.2419.8418.4418.4439.0400.0450.8357.5177.3316.5372.0400.6452.2416.1369.0190.3442.7280.1371.8\n 22.1%\n
TCL FT497.2454.3454.4453.9468.1447.8491.9433.8259.9408.9443.2470.1489.1467.8438.2309.1474.9360.9430.9\n 13.3%\n
ALBEF FT504.6460.0460.6460.3376.4447.1493.0436.5282.2408.8449.8472.6493.8452.1455.0347.0480.9475.8438.3\n 13.1%\n
BLIP FT516.6471.9472.1467.7489.5466.1507.2451.7291.6432.8471.8494.2506.8470.4472.3404.7499.6402.9458.7\n 11.2%\n
\n
\n
\n
\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Character-levelWord-levelSentence-level
DatasetMethod\u2009CleanKeyboardOCR\u00a0\u00a0\u00a0CI\u00a0\u00a0\u00a0CR\u00a0\u00a0\u00a0CS\u00a0\u00a0\u00a0CD\u00a0\u00a0\u00a0SR\u00a0\u00a0\u00a0WI\u00a0\u00a0\u00a0WS\u00a0\u00a0\u00a0WD\u00a0\u00a0\u00a0IPFormalCasualPassiveActiveBack_trans\n\u2009ave \nMMI
Flickr30KViLT FT522.0385.3461.9388.0386.2395.6398.6471.9492.2480.1489.8507.7510.1504.5488.1508.3500.1460.5\n 11.8%\n
CLIP ZS533.7431.8478.2450.5435.2444.6451.3497.1509.6503.3514.1519.4531.7529.3524.8531.4524.2492.3\n 7.8%\n
CLIP FT544.3458.4500.1477.6461.6471.1475.5515.4530.4526.0531.1536.4545.8542.1537.9545.1537.3512.0\n 5.9%\n
TCL ZS563.8433.3499.9443.3428.4444.4448.9511.9523.8519.1528.8548.6544.4542.4530.1547.1535.8501.9\n 11.0%\n
TCL FT573.4494.3545.0504.9492.8501.9502.4554.7566.4560.0564.2573.4571.5569.6562.8572.1566.5543.9\n 5.1%\n
ALBEF FT577.7506.2552.0516.2505.0511.7513.0561.9571.6568.6570.0577.7576.2575.0569.5576.4572.5551.5\n 4.5%\n
BLIP FT580.9518.0559.5527.3518.0526.4525.7565.6576.1572.8573.8580.7579.0578.6574.5579.6574.7558.1\n 3.9%\n
COCOViLT441.5319.2386.2327.0321.7333.1334.1397.8417.5404.4413.6433.1436.5433.6423.2437.1426.0390.3\n 11.6%\n
CLIP ZS394.5285.5286.4286.1285.4285.6285.8347.5363.8355.5368.6374.2393.0391.6379.6393.5381.2341.5\n 13.4%\n
CLIP FT420.5316.1316.7316.5316.4316.7315.6376.2394.6389.9395.3406.6417.3415.2408.7419.4406.2370.5\n 11.9%\n
TCL ZS477.2368.0428.4381.3368.4382.0383.4439.3453.4445.7450.9477.2474.4471.8464.7475.7462.0432.9\n 9.3%\n
TCL FT497.2397.8455.1412.0398.5408.8410.5463.7481.3471.8477.7497.1494.6493.0487.3496.0483.5458.0\n 7.9%\n
ALBEF FT504.6404.5461.7418.9406.1414.7415.5471.4488.9483.3486.3504.5503.1502.0496.4503.7491.3465.8\n 7.7%\n
BLIP FT516.6429.1479.1442.4430.8441.3441.4484.3502.1494.6499.7515.8514.4513.6508.1515.4504.3482.3\n 6.6%\n
\n
\n
\n
\n
", + "capture": "Table 3: Image-text retrieval. [Top] Robustness evaluations on Flickr30k-IP and COCO-IP.\n[Bottom] Robustness evaluations on Flickr30k-TP and COCO-TP datasets.\nWe report averaged RSUM where the most effective perturbation results are marked in bold, and the least effective perturbation results are underlined. The MMI impact score is marked in blue, the lower the better." + }, + "4": { + "table_html": "
\n
Table 4: Quantitative results of Missing Object Rate (MOR) of Stable Diffusion. The most effective perturbation results are marked in bold, and the least effective ones are underlined. The results show that more objects are missing from the images generated by character-level perturbed captions.
\n
\n

\n\n\n\nThreshold\nSetting\nGT\nKeyboard\nOcr\nCI\nCR\nCS\nCD\nSR\nRI\nRS\nRD\nIP\nFormal\nCasual\nPassive\nActive\nBack_trans\n\n\n\n0.7\n4-images\n0.00\n-12.47\n-5.22\n-8.41\n-13.25\n-12.15\n-12.63\n-8.23\n-3.14\n-7.33\n-6.05\n-2.81\n-2.10\n-1.42\n-1.36\n0.27\n-0.86\n\n8-images\n0.00\n-11.00\n-4.27\n-6.62\n-11.79\n-11.09\n-10.76\n-6.77\n-1.62\n-6.59\n-4.31\n-2.83\n0.01\n0.69\n-0.17\n1.34\n0.44\n\n16-images\n0.00\n-11.53\n-4.29\n-6.96\n-11.72\n-11.59\n-10.86\n-6.88\n-1.65\n-6.66\n-4.48\n-2.90\n-0.16\n0.17\n-0.75\n0.76\n0.48\n\n0.5\n4-images\n0.00\n-5.33\n-2.97\n-2.96\n-6.60\n-3.97\n-2.45\n-1.00\n0.72\n-1.51\n-4.63\n-1.88\n-0.31\n-2.18\n2.17\n-0.30\n0.65\n\n8-images\n0.00\n-4.94\n-2.28\n-1.18\n-5.83\n-2.48\n-1.55\n-0.34\n1.70\n-1.26\n-2.72\n-1.06\n0.17\n-1.00\n3.41\n0.42\n1.02\n\n16-images\n0.00\n-4.95\n-1.76\n-1.65\n-5.02\n-2.01\n-2.03\n-0.62\n1.41\n-0.90\n-2.50\n-0.69\n0.50\n0.08\n3.36\n0.26\n1.41\n\n

\n
\n
", + "capture": "Table 4: Quantitative results of Missing Object Rate (MOR) of Stable Diffusion. The most effective perturbation results are marked in bold, and the least effective ones are underlined. The results show that more objects are missing from the images generated by character-level perturbed captions." + }, + "5": { + "table_html": "
\n
Table 5: Image perturbations.
\n
\n

\n\n\n\nCategory\nPerturbation\n\nDescription\nSeverities\n\nNoise\nGaussian Noise\n\nGaussian noise can appear in low-lighting conditions.\n5\n\nShot Noise\n\nShot noise, also called Poisson noise, is electronic noise caused by the discrete nature of light itself.\n5\n\nImpulse Noise\n\nImpulse noise is a color analogue of salt-and-pepper noise and can be caused by bit errors.\n5\n\nSpeckle Noise\n\nSpeckle noise is the noise added to a pixel that tends to be larger if the original pixel intensity is larger.\n5\n\nBlur\nDefocus Blur\n\nDefocus blur occurs when an image is out of focus.\n5\n\nFrosted Glass Blur\n\nFrosted Glass Blur appears with \u201cfrosted glass\u201d windows or panels.\n5\n\nMotion Blur\n\nMotion blur appears when a camera is moving quickly.\n5\n\nZoom Blur\n\nZoom blur occurs when a camera moves toward an object rapidly.\n5\n\nWeather\nSnow\n\nSnow is a visually obstructive form of precipitation.\n5\n\nFrost\n\nFrost forms when lenses or windows are coated with ice crystals.\n5\n\nFog\n\nFog shrouds objects and is rendered with the diamond-square algorithm.\n5\n\nBrightness\n\nBrightness varies with daylight intensity.\n5\n\nDigital\nContrast\n\nContrast can be high or low depending on lighting conditions and the photographed object\u2019s color.\n5\n\nElastic\n\nElastic transformations stretch or contract small image regions.\n5\n\nPixelate\n\nPixelation occurs when upsampling a low-resolution image.\n5\n\nJPEG Compression\n\nJPEG is a lossy image compression format that introduces compression artifacts.\n5\n\nStylize\nStylize\n\nStylized data is generated by transferring the style information to the content images by AdaIN style transfer (Huang and Belongie, 2017).\n5\n\nSum\n17\n\n\u2014\n85\n\n

\n
\n
", + "capture": "Table 5: Image perturbations." + }, + "6": { + "table_html": "
\n
Table 6: Text perturbations.
\n
\n

\n\n\n\nCategory\nPerturbation\n\nDescription\nSeverities\n\nCharacter-level\nKeyboard\n\nSubstitute character by keyboard distance with probability p.\n5\n\nOCR\n\nSubstitute character by pre-defined OCR error with probability p.\n5\n\nCharacter Insert (CI)\n\nInsert character randomly with probability p.\n5\n\nCharacter Replace (CR)\n\nSubstitute character randomly with probability p.\n5\n\nCharacter Swap (CS)\n\nSwap character randomly with probability p.\n5\n\nCharacter Delete (CD)\n\nDelete character randomly with probability p.\n5\n\nWord-level\nSynonym Replacement (SR)\n\nRandomly choose n words from the sentence that are not stop words. Replace each of these words with one of its synonyms chosen at random.\n5\n\nWord Insertion (WI)\n\nFind a random synonym of a random word in the sentence that is not a stop word. Insert that synonym into a random position in the sentence. Do this n times.\n5\n\nWord Swap (WS)\n\nRandomly choose two words in the sentence and swap their positions. Do this n times.\n5\n\nWord Deletion (WD)\n\nEach word in the sentence can be randomly removed with probability p.\n5\n\nInsert Punctuation (IP)\n\nRandomly insert punctuation in the sentence with probability p.\n5\n\nSentence-level\nFormal\n\nTransfer the text style to Formal.\n1\n\nCasual\n\nTransfer the text style to Casual.\n1\n\nPassive\n\nTransfer the text style to Passive.\n1\n\nActive\n\nTransfer the text style to Active.\n1\n\nBack Translation\n\nTranslate the source to German and translate it back to English via (Ng et\u00a0al., 2020).\n1\n\nSum\n16\n\n\u2014\n60\n\n

\n
\n
", + "capture": "Table 6: Text perturbations." + }, + "7": { + "table_html": "
\n
Table 7: Example of our 16 text perturbations. The original text is taken from the COCO dataset and denoted as clean in the first row.
\n
\n

\n\n\n\nCategory\nPerturbation\n\nExample\n\n\n\nOriginal\nClean\n\nAn orange metal bowl strainer filled with apples.\n\nCharacter\nKeyboard\n\nAn orange metal bowk strainer filled witj apples.\n\nOCR\n\nAn 0range metal bowl strainer filled with app1es.\n\nCI\n\nAnd orange metal bowl strainer filled with atpples.\n\nCR\n\nAn orange metal towl strainer fillet with apples.\n\nCS\n\nAn orange meatl bowl stariner filled with apples.\n\nCD\n\nAn orang[X] metal bowl strainer fil[X]ed with apples.\n\nWord\nSR\n\nAn orange alloy bowl strainer filled with apples.\n\nWI\n\nAn old orange metal bowl strainer filled with apples.\n\nWS\n\nAn orange metal strainer bowl filled with apples.\n\nWD\n\nAn orange metal bowl strainer [X] with apples.\n\nIP\n\nAn orange metal bowl ? strainer filled with apples.\n\nSentence\nFormal\n\nAn orange metal bowl strainer contains apples.\n\nCasual\n\nAn orange metal bowl is filled with apples.\n\nPassive\n\nSome apples are in an orange metal bowl strainer.\n\nActive\n\nThere are apples in an orange metal bowl strainer.\n\nBack trans\n\nApples are placed in an orange metal bowl strainer.\n\n

\n
\n
", + "capture": "Table 7: Example of our 16 text perturbations. The original text is taken from the COCO dataset and denoted as clean in the first row." + }, + "8": { + "table_html": "
\n
Table 8: Magnitude of perturbations.
\n
\n

\n\n\n\nMethod\n\nParameters\n\n\n\nGaussian noise\n\nFirst normalize the pixel values, then add a random normal noise scaled at values 0.08, 0.12, 0.18, 0.26, 0.38 based on severity\n\nShot noise\n\nSimulate electronic noise caused by the discrete nature of light by applying a combination of salt and pepper noise with amounts ranging from 0.03, 0.06, 0.09, 0.17, 0.27\n\nImpulse noise\n\nSimulate corruptions caused by bit errors by applying a combination of salt and pepper noise with amounts ranging from 0.03, 0.06, 0.09, 0.17, 0.27\n\nSpeckle noise\n\nSimulate additive noise and is similar to Gaussian but where the random value is then multiplied by the normalized pixel value\n\nDefocus blur\n\nImitate a defocused lens over the entire frame, ranging from (3, 0.1), (4, 0.5), (6, 0.5), (8, 0.5), (10, 0.5)\n\nMotion blur\n\nIncrease the radius and sigma of the kernel, ranging from (10, 3), (15, 5), (15, 8), (15, 12), and (20, 15)\n\nZoom blur\n\nIncrease the zoom factor based on severity, ranging from (1, 1.11), (1, 1.16), (1, 1.21), (1, 1.26), (1, 1.33)\n\nGlass Blur\n\nAppear with \u201cfrosted glass\u201d windows or panels, ranging from (0.7, 1, 2), (0.9, 2, 1), (1, 2, 3), (1.1, 3, 2), (1.5, 4, 2)\n\nSnow\n\nAdding a visually obstructive form of precipitation, ranging from (0.1, 0.3, 3, 0.5, 10, 4, 0.8),(0.2, 0.3, 2, 0.5, 12, 4, 0.7), (0.55, 0.3, 4, 0.9, 12, 8, 0.7), (0.55, 0.3, 4.5, 0.85, 12, 8, 0.65), (0.55, 0.3, 2.5, 0.85, 12, 12, 0.55)\n\nFrost\n\nSimulate lenses or windows are coated with ice crystals, ranging from (1, 0.4), (0.8, 0.6), (0.7, 0.7), (0.65, 0.7),(0.6, 0.75)\n\nFog\n\nShroud objects and rendered with the diamond-square algorithm, ranging from (1.5, 2), (2, 2), (2.5, 1.7), (2.5, 1.5), (3, 1.4)\n\nBrightness\n\nSimulate daylight intensity, ranging from 0.1, 0.2, 0.3, 0.4, 0.5\n\nContrast\n\nSimulate lighting conditions, ranging from 0.4, 0.3, 0.2, 0.1, 0.05\n\nElastic\n\nStretch or contract small image regions, ranging from (244 * 2, 244 * 0.7, 244 * 0.1), (244 * 2, 244 * 0.08, 244 * 0.2), (244 * 0.05, 244 * 0.01, 244 * 0.02), (244 * 0.07, 244 * 0.01, 244 * 0.02), (244 * 0.12, 244 * 0.01, 244 * 0.02)\n\nPixelate\n\nUpsample a low-resolution image, ranging from 0.6, 0.5, 0.4, 0.3, 0.25\n\nJPEG Compression\n\nConvert each frame to a JPEG with quality ranging from 25, 18, 15, 10, 7\n\n

\n
\n
", + "capture": "Table 8: Magnitude of perturbations." + }, + "9": { + "table_html": "
\n
Table 9: Image Quality Drop after Perturbation.
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
SSIM / LPIPS\u2009CleanGauss.ShotImpulseSpeckleDefocusGlassMotionZoom
11.00/0.000.61/0.260.65/0.250.58/0.370.72/0.200.65/0.300.78/0.250.70/0.240.68/0.20
21.00/0.000.49/0.420.52/0.420.45/0.550.66/0.280.59/0.490.73/0.340.59/0.330.57/0.36
31.00/0.000.37/0.630.41/0.600.37/0.680.51/0.510.50/0.610.58/0.470.51/0.410.50/0.44
41.00/0.000.27/0.860.29/0.850.27/0.890.34/0.630.35/0.680.48/0.600.46/0.580.47/0.58
51.00/0.000.19/0.990.23/0.990.19/0.990.27/0.760.32/0.720.32/0.700.33/0.610.36/0.69
\n
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
SSIM / LPIPS\u00a0Snow\u00a0Frost\u00a0Fog\u00a0BrightContrastElasticPixelJPEGStylize
10.66/0.240.68/0.200.63/0.170.79/0.120.67/0.170.66/0.260.72/0.220.78/0.170.72/0.21
20.48/0.370.58/0.280.57/0.320.70/0.210.58/0.250.53/0.290.57/0.350.67/0.230.58/0.34
30.52/0.450.53/0.340.53/0.370.65/0.310.49/0.380.42/0.390.53/0.430.65/0.350.47/0.43
40.46/0.510.53/0.440.52/0.520.53/0.480.39/0.580.41/0.520.47/0.520.60/0.460.37/0.52
50.32/0.630.40/0.520.38/0.640.45/0.640.33/0.730.39/0.780.37/0.630.45/0.680.31/0.67
\n
\n
\n
", + "capture": "Table 9: Image Quality Drop after Perturbation." + }, + "10": { + "table_html": "
\n
Table 10: Human verification of perturbed image-text pairs, where the correction rate is the percentage of perturbed image-text pairs that are still judged to form a valid pair.
\n
\n

\n\n\n\nCorrection Rate\nJudge-1\nJudge-2\nJudge-3\nJudge-4\nJudge-5\nJudge-6\nJudge-7\nJudge-8\nJudge-9\nJudge-10\nAverage\n\nResults\n98.90%\n99.42%\n98.80%\n98.54%\n99.14%\n98.50%\n99.02%\n99.26%\n99.16%\n99.30%\n99.00%\n\n

\n
\n
", + "capture": "Table 10: Human verification of perturbed image-text pairs, where the correction rate means the percentage of given image and text can still be considered as a pair." + }, + "11": { + "table_html": "
\n
Table 11: Visual reasoning: image robustness evaluations for the NLVR2-IP dataset (averaged accuracy), where the most effective perturbation results are marked in bold and the least effective ones are underlined. The impact score is marked in blue (lower is better).
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
NoiseBlurWeatherDigitalStylize
DatasetMethod\u2009CleanGauss.ShotImpulseSpeckleDefocusGlassMotionZoom\u00a0Snow\u00a0Frost\u00a0Fog\u00a0BrightContrastElasticPixelJPEGStylize\n\u2009ave \nMMI
devALBEF82.5552.8052.4652.6152.6352.2252.4451.7850.7950.6952.0552.5852.0951.9852.4550.9952.3751.8052.04\n 37.0%\n
ViLT75.7071.6471.4571.5872.4272.9074.7168.7963.9769.4073.0273.5974.3266.7274.1569.1774.7172.3571.46\n 5.6%\n
TCL80.5478.2077.6378.2178.6077.0481.2077.3766.6775.9679.4779.6580.7674.0478.9273.9281.0175.0577.28\n 4.0%\n
BLIP82.4885.3778.5472.6876.5980.0073.6678.5460.9873.6676.5983.9076.1077.0781.4674.6382.9371.7177.42\n 6.1%\n
METER82.3377.3976.2577.2577.7678.7682.0178.2669.3176.1779.4081.0280.7677.5079.3672.9180.6776.1077.70\n 5.6%\n
test-PALBEF83.1453.1752.8553.2253.5052.6853.0952.3951.1951.6052.9853.4952.7853.1353.1251.7253.1052.9552.76\n 36.5%\n
ViLT76.1374.2473.8074.4374.2072.3276.7072.5562.3469.2473.3675.0574.7368.6874.0769.0676.5271.5072.54\n 4.7%\n
TCL81.3378.1077.8778.2578.9178.0081.5978.1767.8175.7479.6280.6481.5274.3579.7674.6181.2875.8577.77\n 4.4%\n
BLIP83.0875.3975.3985.1072.3185.6479.4976.9258.9780.5175.9081.5476.9281.0377.9573.33378.9773.8577.01\n 7.3%\n
METER83.0578.8777.9477.7879.2378.9782.1079.1468.8976.6980.1082.2581.2178.2079.9172.6580.7476.9378.34\n 5.7%\n
\n
\n
", + "capture": "Table 11: Visual reasoning: image robustness evaluations for the NLVR2-IP dataset (averaged accuracy), where the most effective perturbation results are marked in bold and the least effective ones are underlined. Impact score is marked in blue, the lower the better." + }, + "12": { + "table_html": "
\n
Table 12: Visual reasoning: text robustness evaluations for the NLVR2-TP dataset (averaged accuracy), where the most effective perturbation results are marked in bold and the least effective ones are underlined. The impact score is marked in blue (lower is better).
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Character-levelWord-levelSentence-level
DatasetMethod\u2009CleanKeyboardOCR\u00a0\u00a0\u00a0CI\u00a0\u00a0\u00a0CR\u00a0\u00a0\u00a0CS\u00a0\u00a0\u00a0CD\u00a0\u00a0\u00a0SR\u00a0\u00a0\u00a0WI\u00a0\u00a0\u00a0WS\u00a0\u00a0\u00a0WD\u00a0\u00a0\u00a0IPFormalCasualPassiveActiveBack_trans\n\u2009ave \nMMI
devALBEF82.5550.6451.0250.8150.6650.5350.5851.9651.4851.5851.3951.5650.9951.9351.5251.7551.9051.22\n 38.0%\n
ViLT75.7066.2369.1665.4764.3664.7664.9667.1172.7170.7771.7573.4273.2273.4071.8374.4774.5169.88\n 7.7%\n
TCL80.5471.1575.8971.8470.9972.0171.5874.9678.8977.8478.0582.3781.5680.3379.4781.4680.6771.77\n 10.9%\n
BLIP82.4870.7370.2476.5974.6372.6872.2073.1777.5680.0079.5187.8185.3782.9382.9387.8175.6178.11\n 5.3%\n
METER82.3372.3575.8374.1072.7173.8973.3075.1679.3675.4177.6481.6881.9281.5578.6981.0182.2577.30\n 6.1%\n
test-PALBEF83.1451.3951.9951.0451.2651.0551.2452.6952.9552.9552.8853.3053.3953.0652.6853.2653.2352.40\n 37.0%\n
ViLT76.1364.8569.6666.7665.6465.5665.1468.9673.3671.3572.5375.1475.8674.2772.5877.0075.7070.90\n 6.9%\n
TCL81.3371.1676.3172.3571.5671.9072.0775.4980.0378.8078.7882.8882.4681.5280.2582.2881.5372.37\n 11.0%\n
BLIP83.0867.6985.6467.1867.6975.9074.8769.2372.8278.4683.5983.5979.4987.1882.0582.0574.3676.99\n 7.3%\n
METER83.0573.1077.6374.0572.4970.6474.2776.1079.6275.9678.5582.5881.8780.4279.5282.3481.4577.54\n 6.6%\n
\n
\n
", + "capture": "Table 12: Visual reasoning: text robustness evaluations for the NLVR2-TP dataset (averaged accuracy), where the most effective perturbation results are marked in bold and the least effective ones are underlined. Impact score is marked in blue, the lower the better." + }, + "13": { + "table_html": "
\n
Table 13: Visual entailment: image robustness evaluations for the SNLI-VE-IP dataset (averaged accuracy), where the most effective perturbation results are marked in bold and the least effective ones are underlined. The impact score is marked in blue (lower is better).
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
NoiseBlurWeatherDigitalStylize
DatasetMethod\u2009CleanGauss.ShotImpulseSpeckleDefocusGlassMotionZoom\u00a0Snow\u00a0Frost\u00a0Fog\u00a0BrightContrastElasticPixelJPEGStylize\n\u2009ave \nMMI
valALBEF80.8077.5277.5677.3478.7676.5979.2676.6771.7075.6178.7178.7679.8378.1978.4974.2978.9174.5877.22\n 4.4%\n
TCL80.5177.3377.5677.2278.2376.7079.2175.2570.9875.7177.9578.4379.3178.7677.7871.4778.4374.6476.76\n 4.7%\n
METER80.8677.0577.1976.7678.3777.1479.7277.0474.3577.1879.3880.1080.4979.1278.7873.0878.9375.8877.68\n 3.9%\n
testALBEF80.9177.6577.7077.4078.5076.6279.2576.5971.7076.3178.6078.4779.7778.0778.3474.4278.8174.8977.24\n 8.3%\n
TCL80.2977.4677.3877.3078.1776.8079.2775.5671.0776.1378.2478.3879.1978.6877.7471.7678.5974.7076.85\n 4.3%\n
METER81.1977.1677.0976.9078.5877.1480.1377.3974.3577.7979.8480.1880.4679.1878.9172.6779.3276.0877.79\n 4.2%\n
\n
\n
", + "capture": "Table 13: Visual entailment: image robustness evaluations for the SNLI-VE-IP dataset (averaged accuracy), where the most effective perturbation results are marked in bold and the least effective ones are underlined. Impact score is marked in blue, the lower the better." + }, + "14": { + "table_html": "
\n
Table 14: Visual entailment: text robustness evaluations for the SNLI-VE-TP dataset (averaged accuracy), where the most effective perturbation results are marked in bold and the least effective ones are underlined. The impact score is marked in blue (lower is better).
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
Character-levelWord-levelSentence-level
DatasetMethod\u2009CleanKeyboardOCR\u00a0\u00a0\u00a0CI\u00a0\u00a0\u00a0CR\u00a0\u00a0\u00a0CS\u00a0\u00a0\u00a0CD\u00a0\u00a0\u00a0SR\u00a0\u00a0\u00a0WI\u00a0\u00a0\u00a0WS\u00a0\u00a0\u00a0WD\u00a0\u00a0\u00a0IPFormalCasualPassiveActiveBack_trans\n\u2009ave \nMMI
valALBEF80.8065.3571.9766.5465.1767.2267.4674.6374.1574.8878.6280.5680.5680.5680.5680.5676.9474.11\n 8.3%\n
TCL80.5165.2471.6365.5864.7267.6767.1674.3274.0474.5277.8479.8479.8479.8479.8479.8475.7973.61\n 8.6%\n
METER80.8666.7074.1767.9966.4168.6469.5374.6573.1972.5578.2876.2480.7280.4980.7680.7277.4374.28\n 8.1%\n
testALBEF80.9164.8771.9065.9965.0366.9167.2774.7774.9374.9078.4480.2080.2080.2080.2080.2077.3173.96\n 8.6%\n
TCL80.2965.2771.8365.8164.6667.6967.2574.5973.7074.4978.0179.7779.7779.7779.8479.8476.6273.67\n 8.2%\n
METER81.1966.0974.2667.3966.3068.9269.7174.8873.8972.9578.3876.6580.9680.8381.2181.0577.1474.41\n 8.4%\n
\n
\n
", + "capture": "Table 14: Visual entailment: text robustness evaluations for the SNLI-VE-TP dataset (averaged accuracy), where the most effective perturbation results are marked in bold and the least effective ones are underlined. Impact score is marked in blue, the lower the better." + }, + "15": { + "table_html": "
\n
Table 15: Top-1 classification accuracy of unimodal vision models. The most effective perturbation results are marked in bold and the least effective ones are underlined.
\n
\n

\n\n\n\nModel/corruption\nbright\ncontrast\ndefocus\nelastic\nfog\nglass\ngauss\nimpulse\njpeg\nmotion\npixelate\nsaturate\nshot\nsnow\nspatter\nspeckle\nzoom\n\n\n\ndeit_base_distilled\n0.81\n0.81\n0.57\n0.64\n0.79\n0.59\n0.67\n0.66\n0.69\n0.64\n0.66\n0.80\n0.66\n0.67\n0.74\n0.72\n0.54\n\ndensenet169\n0.72\n0.61\n0.41\n0.45\n0.60\n0.41\n0.42\n0.37\n0.57\n0.41\n0.52\n0.69\n0.41\n0.42\n0.52\n0.47\n0.38\n\neca_nfnet_l0\n0.79\n0.77\n0.47\n0.52\n0.69\n0.50\n0.40\n0.44\n0.65\n0.59\n0.48\n0.78\n0.39\n0.62\n0.72\n0.55\n0.51\n\nefficientnetv2\n0.79\n0.72\n0.51\n0.56\n0.62\n0.53\n0.46\n0.49\n0.68\n0.60\n0.60\n0.78\n0.46\n0.62\n0.72\n0.60\n0.54\n\ngmlp_s16_224\n0.71\n0.72\n0.42\n0.57\n0.66\n0.46\n0.55\n0.53\n0.59\n0.52\n0.58\n0.70\n0.54\n0.34\n0.60\n0.61\n0.39\n\nmixer_b16_224\n0.71\n0.72\n0.31\n0.44\n0.62\n0.35\n0.31\n0.26\n0.44\n0.41\n0.48\n0.63\n0.29\n0.35\n0.50\n0.38\n0.28\n\nmobilenetv3_large\n0.71\n0.46\n0.35\n0.47\n0.51\n0.38\n0.33\n0.35\n0.56\n0.47\n0.44\n0.68\n0.33\n0.38\n0.55\n0.45\n0.38\n\npit_s_224\n0.79\n0.77\n0.50\n0.56\n0.72\n0.51\n0.64\n0.62\n0.67\n0.58\n0.57\n0.77\n0.62\n0.62\n0.70\n0.68\n0.46\n\nregnety_064\n0.75\n0.54\n0.45\n0.51\n0.61\n0.46\n0.43\n0.41\n0.59\n0.48\n0.46\n0.71\n0.40\n0.46\n0.62\n0.48\n0.44\n\nresmlp_24_224\n0.76\n0.73\n0.48\n0.57\n0.61\n0.51\n0.56\n0.54\n0.60\n0.56\n0.54\n0.75\n0.54\n0.56\n0.64\n0.62\n0.46\n\nresnet50d\n0.78\n0.77\n0.44\n0.46\n0.71\n0.47\n0.41\n0.39\n0.63\n0.50\n0.40\n0.76\n0.41\n0.51\n0.63\n0.53\n0.46\n\nresnext101_32x8d\n0.74\n0.53\n0.48\n0.52\n0.60\n0.47\n0.42\n0.37\n0.61\n0.52\n0.56\n0.69\n0.40\n0.42\n0.57\n0.49\n0.49\n\nswin_small_patch4\n0.81\n0.81\n0.52\n0.57\n0.75\n0.52\n0.63\n0.63\n0.57\n0.61\n0.40\n0.80\n0.62\n0.65\n0.77\n0.70\n0.52\n\nvit_small_patch16\n0.62\n0.73\n0.45\n0.50\n0.67\n0.47\n0.34\n0.31\n0.55\n0.52\n0.58\n0.56\n0.31\n0.26\n0.53\n0.41\n0.38\n\n

\n
\n
", + "capture": "Table 15: Top1 classification accuracy of unimodal vision models. The most effective perturbation results are marked in bold and the least effective ones are underlined. " + }, + "16": { + "table_html": "
\n
Table 16: Detailed image captioning results of BLIP and GRIT.
\n
\n

\n\n\n\n\n\n\nNoise\nBlur\nWeather\nDigital\nStylize\n\n\n\n\n\nGT\nGauss\nShot\nImpulse\nSpeckle\nDefocus\nGlass\nMotion\nZoom\nSnow\nFrost\nFog\nBright\nContrast\nElastic\nPixel\nJPEG\nStylize\nAve\nMMI\n\n\n\nBLIP\nBleu_1\n78.9\n70.9\n71.9\n71.1\n74.8\n68.5\n77.5\n66.5\n55.9\n70.6\n73.4\n75.0\n77.3\n71.3\n69.9\n62.7\n76.2\n63.7\n66.5\n15.7%\n\nBleu_2\n63.8\n54.4\n55.7\n54.8\n59.0\n52.3\n62.3\n50.0\n37.5\n53.8\n57.5\n59.3\n61.8\n54.9\n53.4\n45.4\n60.6\n46.2\n51.0\n20.0%\n\nBleu_3\n50.5\n41.3\n42.6\n41.7\n45.8\n39.1\n49.1\n36.7\n24.4\n40.5\n44.3\n45.9\n48.4\n41.6\n40.6\n32.6\n47.4\n32.9\n38.6\n23.6%\n\nBleu_4\n39.7\n31.4\n32.5\n31.7\n35.5\n29.1\n38.4\n26.8\n16.1\n30.7\n34.0\n35.4\n37.8\n31.4\n30.9\n23.8\n37.0\n23.6\n29.2\n26.4%\n\nMeteor\n31.0\n26.1\n26.8\n26.4\n28.5\n24.7\n30.1\n23.6\n17.0\n25.7\n27.7\n28.8\n29.8\n25.7\n25.7\n21.3\n29.3\n21.1\n24.4\n21.5%\n\nRouge_L\n60.0\n53.3\n54.3\n53.7\n56.7\n50.9\n58.8\n49.3\n40.5\n52.4\n55.6\n57.0\n58.6\n53.2\n52.7\n46.8\n57.8\n46.8\n49.9\n16.8%\n\nCIDEr\n133.3\n100.5\n104.3\n101.6\n116.5\n91.8\n128.1\n84.2\n45.9\n95.7\n111.6\n116.1\n125.8\n98.3\n96.8\n68.6\n121.8\n68.7\n93.1\n30.1%\n\nGRIT\nBleu_1\n84.2\n78.6\n79.1\n78.8\n81.1\n79.4\n83.6\n77.9\n60.0\n78.6\n81.8\n83.1\n83.1\n81.4\n77.4\n64.0\n81.6\n68.9\n73.2\n13.0%\n\nBleu_2\n69.1\n62.2\n62.6\n62.4\n65.0\n63.1\n68.4\n61.3\n40.5\n61.8\n65.9\n67.5\n67.7\n65.3\n60.5\n44.3\n65.8\n50.0\n57.4\n16.8%\n\nBleu_3\n54.7\n47.6\n48.1\n47.9\n50.5\n48.7\n53.9\n46.8\n27.1\n47.2\n51.4\n53.0\n53.2\n50.7\n46.1\n30.3\n51.2\n35.5\n43.8\n19.8%\n\nBleu_4\n42.3\n35.8\n36.3\n36.1\n38.5\n36.9\n41.5\n35.2\n18.5\n35.4\n39.2\n40.7\n40.9\n38.5\n34.6\n20.9\n39.1\n25.3\n33.0\n22.0%\n\nMeteor\n30.6\n27.0\n27.2\n27.1\n28.4\n27.5\n30.1\n26.7\n17.7\n27.0\n28.8\n29.6\n29.9\n28.5\n26.2\n18.7\n28.9\n21.2\n25.0\n18.3%\n\nRouge_L\n60.7\n55.8\n56.2\n56.0\n57.8\n56.6\n60.1\n55.4\n42.6\n55.6\n58.3\n59.4\n59.8\n58.0\n55.0\n44.5\n58.4\n48.2\n52.1\n14.2%\n\nCIDEr\n144.0\n117.4\n118.6\n118.0\n128.1\n120.2\n140.0\n115.1\n56.1\n118.1\n131.1\n136.6\n138.3\n128.4\n110.6\n60.0\n131.1\n77.4\n108.1\n25.0%\n\n

\n
\n
", + "capture": "Table 16: Detailed image captioning results of BLIP and GRIT." + }, + "17": { + "table_html": "
\n
Table 17: Image captioning results of BLIP, GRIT, LLaVA, Mini-GPT4, and BLIP2.
\n
\n

\n\n\n\n\n\nNoise\nBlur\nWeather\nDigital\nStylize\n\n\n\n\nGT\nGauss\nShot\nImpulse\nSpeckle\nDefocus\nGlass\nMotion\nZoom\nSnow\nFrost\nFog\nBright\nContrast\nElastic\nPixel\nJPEG\nStylize\nAve\nMMI\n\n\n\nBLIP\n60.0\n53.3\n54.3\n53.7\n56.7\n50.9\n58.8\n49.3\n40.5\n52.4\n55.6\n57.0\n58.6\n53.2\n52.7\n46.8\n57.8\n46.8\n49.9\n16.8%\n\nGRIT\n60.7\n55.8\n56.2\n56.0\n57.8\n56.6\n60.1\n55.4\n42.6\n55.6\n58.3\n59.4\n59.8\n58.0\n55.0\n44.5\n58.4\n48.2\n52.1\n14.2%\n\nLLaVA\n68.6\n62.5\n62.3\n59.8\n63.2\n62.7\n64.9\n63.6\n55.3\n56.1\n57.7\n62.5\n66.2\n60.1\n62.3\n56.9\n64.2\n55.8\n60.9\n11.2%\n\nMini-GPT4\n71.1\n66.8\n66.3\n62.7\n67.1\n66.9\n65.2\n66.9\n60.5\n60.9\n61.3\n67.2\n68.7\n65.6\n67.2\n61.8\n68.9\n62.2\n65.1\n8.5%\n\nBLIP2\n64.2\n61.3\n59.3\n55.2\n60.2\n59.7\n60.9\n60.1\n52.1\n53.7\n55.8\n60.1\n64.2\n57.4\n59.2\n53.8\n61.9\n51.5\n58\n 9.6%\n\n

\n
\n
", + "capture": "Table 17: Image captioning results of BLIP, GRIT, LLaVa, Mini-GPT4, and BLIP2." + }, + "18": { + "table_html": "
\n
Table 18: Text-to-image generation results of Stable Diffusion (FID and CLIP-FID), where \u201cGT\u201d denotes images generated from the original ground-truth captions.
\n
\n

\n\n\n\n\n\nFID\nCLIP_FID\n\n\n\n4 image\n8 image\n16 image\n4 image\n8 image\n16 image\n\n\n\nmean\nstd\nMMI\nmean\nstd\nMMI\nmean\nstd\nMMI\nmean\nstd\nMMI\nmean\nstd\nMMI\nmean\nstd\nMMI\n\nCharacter\nKeyboard\n315.9\n131.10\n 44.39%\n270.77\n116.78\n 58.34%\n239.27\n109.41\n 82.80%\n68.05\n29.13\n 89.40%\n59.42\n27.31\n 118.46%\n53.04\n26.35\n 167.74%\n\nOcr\n272.9\n119.01\n 24.76%\n227.72\n103.83\n 33.16%\n195.25\n94.93\n 49.17%\n55.20\n25.61\n 53.63%\n46.40\n23.47\n 70.59%\n39.95\n22.28\n 101.67%\n\nCI\n299.1\n126.80\n 36.70%\n254.40\n111.87\n 48.76%\n222.62\n103.82\n 70.08%\n62.94\n27.41\n 75.17%\n54.41\n25.45\n 100.04%\n47.93\n24.29\n 141.95%\n\nCR\n311.2\n129.03\n 42.23%\n268.64\n114.50\n 57.09%\n236.27\n107.73\n 80.51%\n67.65\n28.67\n 88.28%\n58.98\n26.98\n 116.84%\n50.74\n26.06\n 156.13%\n\nCS\n310.3\n131.50\n 41.85%\n265.29\n117.53\n 55.13%\n233.46\n109.85\n 78.36%\n64.70\n28.79\n 80.07%\n56.17\n27.03\n 106.51%\n49.88\n26.08\n 151.79%\n\nCD\n308.5\n125.99\n 40.99%\n264.14\n113.76\n 54.46%\n232.46\n106.40\n 77.60%\n65.03\n28.21\n 80.99%\n56.38\n26.48\n 107.28%\n50.04\n25.60\n 152.60%\n\nWord\nSR\n266.1\n115.45\n 21.62%\n220.86\n98.86\n 29.15%\n188.45\n88.97\n 43.98%\n51.84\n24.62\n 44.28%\n43.43\n22.49\n 59.67%\n37.14\n21.21\n 87.48%\n\nRI\n242.0\n102.38\n 10.60%\n196.42\n83.79\n 14.86%\n163.14\n71.85\n 24.64%\n43.76\n18.47\n 21.79%\n35.28\n15.42\n 29.71%\n28.90\n13.50\n 45.89%\n\nRS\n247.5\n104.33\n 13.15%\n202.32\n85.65\n 18.31%\n169.39\n73.77\n 29.41%\n46.31\n19.47\n 28.89%\n37.78\n16.65\n 38.90%\n31.33\n14.77\n 58.15%\n\nRD\n237.3\n100.41\n 8.44%\n191.81\n81.89\n 12.16%\n158.95\n69.89\n 21.44%\n42.26\n17.50\n 17.62%\n33.80\n14.56\n 24.26%\n27.44\n12.67\n 38.52%\n\nIP\n233.0\n98.81\n 6.49%\n187.02\n79.24\n 9.36%\n153.63\n66.62\n 17.37%\n41.07\n17.17\n 14.31%\n32.50\n13.93\n 19.49%\n26.02\n11.78\n 31.35%\n\nSentence\nFormal\n224.4\n93.92\n 2.58%\n178.94\n74.71\n 4.64%\n145.92\n61.51\n 11.48%\n38.15\n15.20\n 6.18%\n29.60\n11.88\n 8.82%\n23.21\n9.63\n 17.16%\n\nCasual\n225.6\n94.97\n 3.14%\n179.66\n75.11\n 5.06%\n146.16\n61.92\n 11.67%\n37.84\n15.10\n 5.32%\n29.40\n11.90\n 8.09%\n23.03\n9.68\n 16.25%\n\nPassive\n228.7\n96.34\n 4.54%\n183.60\n77.26\n 7.36%\n150.21\n64.49\n 14.76%\n39.46\n16.21\n 9.82%\n31.08\n13.18\n 14.26%\n24.65\n11.14\n 24.43%\n\nActive\n223.1\n93.94\n 1.96%\n176.82\n73.85\n 3.40%\n143.15\n60.26\n 9.37%\n36.94\n14.36\n 2.81%\n28.35\n10.99\n 4.23%\n21.91\n8.77\n 10.60%\n\nBack_trans\n232.6\n98.67\n 6.33%\n187.14\n80.10\n 9.43%\n153.64\n67.80\n 17.38%\n39.99\n16.78\n 11.30%\n31.56\n13.91\n 16.03%\n25.22\n11.94\n 27.31%\n\nGT\nGT\n218.8\n96.66\n\u2014\n171.01\n171.01\n\u2014\n130.89\n61.46\n\u2014\n35.93\n14.85\n\u2014\n27.20\n11.27\n\u2014\n19.81\n8.79\n\u2014\n\n

\n
\n
", + "capture": "Table 18: Text-to-image generation results of Stable Diffusion (FID and CLIP-FID), where \u201cGT\u201d means images generated by GT captions." + }, + "19": { + "table_html": "
\n
Table 19: Text-to-image generation results of GLIDE (FID and CLIP-FID), where \u201cGT\u201d denotes images generated from the original ground-truth captions.
\n
\n

\n\n\n\n\n\nFID\nCLIP_FID\n\n\n\n4 image\n8 image\n16 image\n4 image\n8 image\n16 image\n\n\n\nmean\nstd\nMMI\nmean\nstd\nMMI\nmean\nstd\nMMI\nmean\nstd\nMMI\nmean\nstd\nMMI\nmean\nstd\nMMI\n\nCharacter\nKeyboard\n341.39\n110.88\n26.57%\n291.92\n96.54\n34.50%\n256.93\n89.53\n44.42%\n69.61\n25.79\n55.10%\n59.45\n23.75\n72.52%\n51.97\n22.83\n95.52%\n\nOCR\n305.16\n108.71\n13.14%\n255.81\n93.50\n17.86%\n219.74\n85.15\n23.51%\n58.83\n24.76\n31.08%\n48.81\n22.44\n41.64%\n41.18\n21.08\n54.93%\n\nCI\n333.82\n110.89\n23.76%\n284.49\n97.22\n31.08%\n248.06\n89.42\n39.43%\n67.45\n25.38\n50.29%\n57.16\n23.35\n65.87%\n49.53\n22.30\n86.34%\n\nCR\n339.82\n108.85\n25.99%\n290.13\n94.90\n33.68%\n254.63\n88.26\n43.12%\n69.44\n25.47\n54.72%\n59.17\n23.57\n71.71%\n51.53\n22.68\n93.87%\n\nCS\n339.20\n110.27\n25.76%\n288.79\n95.76\n33.06%\n253.56\n88.69\n42.52%\n67.75\n25.00\n50.96%\n57.58\n22.91\n67.09%\n50.17\n22.06\n88.75%\n\nCD\n340.87\n111.23\n26.37%\n291.76\n98.32\n34.43%\n252.68\n87.79\n42.03%\n67.23\n24.82\n49.80%\n57.07\n22.75\n65.61%\n49.52\n21.80\n86.31%\n\nWord\nSR\n306.08\n110.11\n13.48%\n255.45\n94.17\n17.70%\n254.17\n88.23\n42.86%\n56.7\n22.89\n26.34%\n46.65\n20.27\n35.37%\n39.19\n18.88\n47.44%\n\nRI\n286.23\n106.35\n6.12%\n234.68\n88.62\n8.13%\n196.88\n77.27\n10.66%\n50.86\n20.32\n13.32%\n40.64\n17.14\n17.93%\n32.91\n15.14\n23.81%\n\nRS\n283.53\n103.71\n5.12%\n230.54\n85.17\n6.22%\n195.23\n77.45\n9.74%\n48.96\n18.82\n9.09%\n38.61\n15.33\n12.04%\n30.84\n13.06\n16.03%\n\nRD\n286.36\n106.72\n6.17%\n234.16\n88.39\n7.89%\n196.08\n76.79\n10.21%\n50.01\n19.44\n11.43%\n39.86\n16.32\n15.67%\n32.20\n14.34\n21.14%\n\nIP\n278.34\n105.05\n3.19%\n225.52\n85.21\n3.91%\n189.22\n74.35\n6.36%\n47.64\n18.07\n6.15%\n37.21\n14.32\n7.98%\n29.39\n11.80\n10.57%\n\nSentence\nFormal\n274.77\n103.99\n1.87%\n222.19\n84.39\n2.37%\n183.83\n71.87\n3.33%\n46.5\n17.87\n3.61%\n36.29\n14.17\n5.31%\n28.54\n11.84\n7.37%\n\nCasual\n275.48\n103.52\n2.13%\n222.96\n84.60\n2.73%\n184.38\n72.27\n3.64%\n46.82\n18.3\n4.32%\n36.57\n14.63\n6.12%\n28.76\n12.29\n8.20%\n\nPassive\n278.77\n104.93\n3.35%\n226.95\n86.19\n4.57%\n188.60\n74.40\n6.01%\n48.15\n19.11\n7.29%\n37.89\n15.74\n9.95%\n27.21\n10.56\n2.37%\n\nActive\n271.09\n101.61\n0.50%\n218.40\n82.03\n0.63%\n179.91\n69.34\n1.12%\n45.42\n17.01\n1.20%\n35.05\n13.06\n1.71%\n30.19\n13.64\n13.58%\n\nBack_trans\n283.70\n107.07\n5.18%\n231.85\n88.16\n6.82%\n190.23\n73.53\n6.92%\n49.21\n19.86\n9.65%\n39.13\n16.59\n13.55%\n31.46\n14.55\n18.36%\n\nGT\nGT\n269.73\n269.73\n\u2014\n217.04\n81.72\n\u2014-\n177.91\n68.55\n\u2014\n44.88\n16.57\n\u2014\n34.46\n12.47\n\u2014\n26.58\n9.79\n\u2014\n\n

\n
\n
", + "capture": "Table 19: Text-to-image generation results of GLIDE (FID and CLIP-FID), where \u201cGT\u201d means images generated by GT captions." + }, + "20": { + "table_html": "
\n
Table 20: Quantitative Missing Object Rate (MOR) results for Stable Diffusion. The most effective perturbation results are marked in bold, and the least effective ones are underlined. The results show that more objects are missing from the images generated by character-level perturbed captions.
\n
\n

\n\n\n\nThreshold\nSetting\nGT\nKeyboard\nOcr\nCI\nCR\nCS\nCD\nSR\nRI\nRS\nRD\nIP\nFormal\nCasual\nPassive\nActive\nBack_trans\n\n\n\n0.7\n4-images\n0.00\n-12.47\n-5.22\n-8.41\n-13.25\n-12.15\n-12.63\n-8.23\n-3.14\n-7.33\n-6.05\n-2.81\n-2.10\n-1.42\n-1.36\n0.27\n-0.86\n\n8-images\n0.00\n-11.00\n-4.27\n-6.62\n-11.79\n-11.09\n-10.76\n-6.77\n-1.62\n-6.59\n-4.31\n-2.83\n0.01\n0.69\n-0.17\n1.34\n0.44\n\n16-images\n0.00\n-11.53\n-4.29\n-6.96\n-11.72\n-11.59\n-10.86\n-6.88\n-1.65\n-6.66\n-4.48\n-2.90\n-0.16\n0.17\n-0.75\n0.76\n0.48\n\n0.5\n4-images\n0.00\n-5.33\n-2.97\n-2.96\n-6.60\n-3.97\n-2.45\n-1.00\n0.72\n-1.51\n-4.63\n-1.88\n-0.31\n-2.18\n2.17\n-0.30\n0.65\n\n8-images\n0.00\n-4.94\n-2.28\n-1.18\n-5.83\n-2.48\n-1.55\n-0.34\n1.70\n-1.26\n-2.72\n-1.06\n0.17\n-1.00\n3.41\n0.42\n1.02\n\n16-images\n0.00\n-4.95\n-1.76\n-1.65\n-5.02\n-2.01\n-2.03\n-0.62\n1.41\n-0.90\n-2.50\n-0.69\n0.50\n0.08\n3.36\n0.26\n1.41\n\n

\n
\n
", + "capture": "Table 20: Quantitative results of Missing Object Rate (MOR) of Stable Diffusion. The most effective perturbation results are marked in bold, and the least effective ones are underlined. The results show that more objects are missing from the images generated by character-level perturbed captions." + }, + "21": { + "table_html": "
\n
Table 21: Adversarial perturbation methods.
\n
\n

\n\n\n\nModality\nAdversarial Perturbation Methods\n\n\n\nImage-only\nFGSM\n\nText-only\nBERT-Attack\n\nMultimodal\nFooling VQA, SSAP, SSAP-MIM, SSAP-SI, Co-Attack\n\n

\n
\n
", + "capture": "Table 21: Adversarial perturbation methods." + }, + "22": { + "table_html": "
\n
Table 22: Image-text retrieval results with adversarial perturbations applied to the image modality only, using FGSM.
\n
\n

\n\n\n\nDataset\nMethod\nClean\nFGSM\n\n\n\nFlickr30K\nALBEF\n577.7\n331.2\n\nCLIP\n544.3\n358.2\n\nCOCO\nALBEF\n504.5\n215.8\n\nCLIP\n420.5\n198.2\n\n

\n
\n
", + "capture": "Table 22: Image-text retrieval results by adding adversarial perturbations on image modality only by FGSM." + }, + "23": { + "table_html": "
\n
Table 23: Image-text retrieval results with adversarial perturbations applied to the text modality only, using BERT-Attack.
\n
\n

\n\n\n\nDataset\nMethod\nClean\nBERT-Attack\n\n\n\nFlickr30K\nALBEF\n577.7\n534.9\n\nCLIP\n544.3\n512.3\n\nCOCO\nALBEF\n504.5\n431.8\n\nCLIP\n420.5\n374.6\n\n

\n
\n
", + "capture": "Table 23: Image-text retrieval results by adding adversarial perturbations on text modality only by BERT-Attack." + }, + "24": { + "table_html": "
\n
Table 24: Image-text retrieval results with adversarial perturbations applied to both modalities, using Fooling VQA, SSAP, SSAP-MIM, SSAP-SI, and Co-Attack.
\n
\n

\n\n\n\nDataset\nMethod\nClean\nFooling VQA\nSSAP\nSSAP-MIM\nSSAP-SI\nCo-Attack\n\n\n\nFlickr30K\nALBEF\n577.7\n535.0\n231.4\n252.9\n206.7\n210.2\n\nCLIP\n544.3\n510.8\n288.8\n327.5\n262.2\n145.8\n\nCOCO\nALBEF\n504.5\n340.8\n221.8\n252.2\n205.9\n193.8\n\nCLIP\n420.5\n376.0\n237.4\n254.5\n225.7\n172.3\n\n

\n
\n
", + "capture": "Table 24: Image-text retrieval results by adding adversarial perturbations on multi-modality by Fooling VQA, SSAP, SSAP-MIM, SSAP-SI, and Co-Attack." + }, + "25": { + "table_html": "
\n
Table 25: ViLT image perturbation performance comparison of Fine-tuned (FT) image-text retrieval on Flickr30K and COCO datasets (results are averaged over the five perturbation levels).
\n
\n

\n\n\n\n\nMethod\nFlickr30K (1K)\nMSCOCO (5K)\n\n\nText Retrieval\nImage Retrieval\n\nText Retrieval\nImage Retrieval\n\n\n\nR@1\nR@5\nR@10\nMean\nR@1\nR@5\nR@10\nMean\nRSUM\nR@1\nR@5\nR@10\nMean\nR@1\nR@5\nR@10\nMean\nRSUM\n\nNoise\nGaussian\n57.7\n79.0\n84.5\n73.7\n43.3\n70.0\n78.6\n64.0\n413.0\n47.7\n73.8\n82.5\n68.0\n33.5\n61.7\n73.1\n56.1\n372.2\n\nShot\n58.9\n80.4\n85.7\n75.0\n43.9\n70.9\n79.7\n64.8\n419.6\n47.9\n73.9\n82.6\n68.1\n33.3\n61.7\n73.2\n56.1\n372.6\n\nImpluse\n54.3\n76.0\n82.3\n70.9\n40.6\n67.4\n76.3\n61.4\n396.9\n45.9\n71.8\n80.8\n66.2\n32.1\n60.3\n71.9\n54.8\n362.9\n\nSpeckle\n67.9\n89.0\n93.5\n83.4\n49.4\n77.8\n85.6\n70.9\n463.2\n52.2\n78.7\n87.0\n72.6\n36.2\n65.7\n77.0\n59.6\n396.7\n\nBlue\nDefocus\n58.0\n80.3\n86.9\n75.1\n43.0\n70.3\n79.2\n64.1\n417.6\n48.8\n75.2\n83.6\n69.2\n33.9\n62.6\n74.2\n56.9\n378.1\n\nGlass\n74.5\n92.7\n95.9\n87.7\n55.2\n81.9\n88.9\n75.3\n489.0\n58.6\n84.4\n91.1\n79.7\n40.8\n70.6\n81.3\n63.2\n432.0\n\nMotion\n51.1\n72.2\n79.5\n67.6\n41.3\n67.7\n76.6\n61.8\n388.4\n46.5\n72.0\n81.3\n66.6\n32.7\n60.8\n72.1\n55.2\n365.4\n\nZoom\n24.6\n42.2\n50.4\n39.0\n22.7\n43.5\n53.0\n39.7\n236.3\n17.6\n35.2\n44.0\n32.3\n16.4\n35.3\n45.2\n32.3\n193.8\n\nWeather\nSnow\n39.8\n61.3\n70.2\n57.1\n33.8\n59.0\n68.6\n53.8\n332.7\n31.0\n54.2\n64.2\n49.8\n24.1\n48.1\n59.5\n43.9\n281.1\n\nFrost\n65.1\n87.2\n92.1\n81.5\n47.9\n76.1\n84.6\n69.6\n453.1\n46.2\n72.7\n81.5\n66.8\n32.7\n60.8\n72.3\n55.3\n366.1\n\nFog\n66.2\n87.4\n92.2\n82.0\n48.8\n76.5\n84.7\n70.0\n455.8\n52.2\n78.8\n87.1\n72.7\n36.4\n66.2\n77.4\n60.0\n398.1\n\nBrightness\n76.3\n94.1\n97.1\n89.1\n56.0\n83.3\n90.2\n76.5\n496.9\n57.7\n83.1\n90.4\n77.1\n40.3\n70.1\n80.8\n63.7\n422.4\n\nDigital\nContrast\n53.3\n70.7\n75.9\n66.7\n39.5\n62.6\n70.1\n57.4\n372.2\n41.7\n64.2\n72.1\n59.3\n29.6\n54.7\n64.9\n49.7\n327.1\n\nElastic\n67.2\n87.3\n91.8\n82.1\n50.8\n78.5\n86.1\n71.8\n461.7\n54.0\n78.7\n86.5\n73.1\n37.9\n67.1\n77.9\n61.0\n402.2\n\nPixelate\n33.2\n52.1\n59.7\n48.3\n27.2\n48.2\n57.0\n44.1\n277.4\n25.8\n43.9\n51.6\n40.5\n19.8\n39.5\n49.1\n36.1\n229.8\n\nJPEG\n74.1\n92.3\n95.8\n87.4\n54.6\n81.8\n89.0\n75.1\n487.6\n58.4\n84.2\n91.1\n77.9\n40.7\n70.4\n81.1\n64.0\n425.8\n\nStylize\nStylized\n54.2\n74.0\n80.4\n69.5\n40.1\n65.1\n73.4\n59.5\n387.1\n40.6\n64.2\n72.8\n61.6\n29.0\n54.7\n65.4\n49.7\n333.9\n\n

\n
\n
", + "capture": "Table 25: ViLT image perturbation performance comparison of Fine-tuned (FT) image-text retrieval on Flickr30K and COCO datasets (results are averaged on five perturbation levels)." + }, + "26": { + "table_html": "
\n
Table 26: CLIP image perturbation performance comparison of Zero-Shot (ZS) image-text retrieval on Flickr30K and COCO datasets (results are averaged over the five perturbation levels).
\n
\n

\n\n\n\n\nMethod\nFlickr30K (1K)\nMSCOCO (5K)\n\n\nText Retrieval\nImage Retrieval\n\nText Retrieval\nImage Retrieval\n\n\n\nR@1\nR@5\nR@10\nMean\nR@1\nR@5\nR@10\nMean\nRSUM\nR@1\nR@5\nR@10\nMean\nR@1\nR@5\nR@10\nMean\nRSUM\n\nNoise\nGaussian\n75.1\n92.8\n96.0\n88.0\n61.7\n85.1\n90.9\n79.3\n501.7\n47.8\n72.1\n80.6\n66.9\n34.7\n58.7\n69.1\n54.2\n363.0\n\nShot\n75.6\n93.4\n96.6\n88.5\n61.7\n85.5\n91.4\n79.5\n504.2\n47.6\n71.6\n80.3\n66.5\n34.2\n58.5\n69.1\n53.9\n361.2\n\nImpluse\n68.2\n90.2\n94.3\n84.2\n57.4\n82.1\n88.9\n76.2\n481.2\n40.1\n65.6\n75.4\n60.4\n30.1\n54.1\n64.8\n49.7\n330.2\n\nSpeckle\n80.2\n95.8\n98.0\n91.3\n62.9\n86.4\n92.2\n80.5\n515.5\n49.5\n73.9\n82.0\n68.5\n34.6\n59.1\n69.6\n54.4\n368.7\n\nBlur\nDefocus\n74.7\n93.4\n96.6\n88.2\n61.3\n85.1\n91.1\n79.1\n502.1\n46.5\n71.3\n80.0\n65.9\n33.7\n58.3\n68.8\n53.6\n358.6\n\nGlass\n85.5\n97.8\n99.0\n94.1\n66.1\n88.4\n93.4\n82.6\n530.1\n55.6\n78.9\n86.4\n73.6\n37.3\n61.7\n71.7\n56.9\n391.6\n\nMotion\n77.0\n94.1\n97.0\n89.4\n63.5\n86.2\n91.9\n80.6\n509.7\n48.8\n72.3\n80.4\n67.1\n34.2\n58.2\n68.3\n53.6\n362.2\n\nZoom\n62.3\n84.6\n90.6\n79.1\n54.8\n79.2\n86.3\n73.5\n457.8\n32.4\n57.0\n67.2\n52.2\n26.9\n50.1\n61.0\n46.0\n294.6\n\nWeather\nSnow\n64.8\n86.9\n93.1\n81.6\n56.2\n81.4\n88.3\n75.3\n470.7\n32.3\n56.2\n67.8\n52.1\n26.8\n50.1\n61.4\n46.1\n294.7\n\nFrost\n72.8\n92.6\n96.5\n87.3\n59.4\n84.0\n90.4\n77.9\n495.6\n41.1\n65.6\n75.6\n60.8\n29.4\n53.2\n64.1\n48.9\n329.0\n\nFog\n80.8\n96.1\n98.2\n91.7\n64.6\n87.3\n92.7\n81.5\n519.7\n51.3\n75.5\n83.6\n70.2\n34.0\n58.5\n68.8\n53.8\n371.8\n\nBrightness\n85.2\n97.6\n98.9\n93.9\n66.4\n88.6\n93.4\n82.8\n530.1\n56.5\n79.8\n87.4\n74.6\n36.4\n60.7\n71.1\n56.0\n391.9\n\nDigital\nContrast\n80.7\n95.9\n98.0\n91.5\n62.7\n86.2\n91.9\n80.3\n515.4\n48.0\n71.5\n80.1\n66.5\n32.5\n56.9\n67.4\n52.2\n356.4\n\nElastic\n79.5\n94.9\n97.3\n90.6\n61.6\n85.8\n91.4\n79.6\n510.4\n50.6\n74.7\n83.1\n69.5\n33.8\n58.5\n69.1\n53.8\n369.7\n\nPixelate\n68.4\n87.6\n92.0\n82.7\n55.5\n79.6\n86.4\n73.8\n469.5\n36.3\n60.4\n70.3\n55.7\n27.9\n51.3\n61.9\n47.0\n308.2\n\nJPEG\n83.6\n96.8\n98.4\n92.9\n65.8\n87.4\n92.7\n82.0\n524.6\n55.3\n78.9\n86.4\n73.5\n35.9\n60.7\n70.9\n55.8\n388.0\n\nStylize\nStylized\n65.3\n83.3\n88.3\n79.0\n51.6\n75.8\n83.2\n70.2\n447.6\n39.9\n62.8\n72.2\n58.3\n28.0\n50.8\n61.2\n46.7\n314.9\n\n

\n
\n
", + "capture": "Table 26: CLIP image perturbation performance comparison of Zero-Shot (ZS) image-text retrieval on Flickr30K and COCO datasets (results are averaged on five perturbation levels)." + }, + "27": { + "table_html": "
\n
Table 27: CLIP image perturbation performance comparison of Fine-tuned (FT) image-text retrieval on Flickr30K and COCO datasets (results are averaged over the five perturbation levels).
\n
\n

\n\n\n\n\nMethod\nFlickr30K (1K)\nMSCOCO (5K)\n\n\nText Retrieval\nImage Retrieval\n\nText Retrieval\nImage Retrieval\n\n\n\nR@1\nR@5\nR@10\nMean\nR@1\nR@5\nR@10\nMean\nRSUM\nR@1\nR@5\nR@10\nMean\nR@1\nR@5\nR@10\nMean\nRSUM\n\nNoise\nGaussian\n72.7\n91.2\n95.0\n86.3\n63.1\n86.5\n91.6\n80.4\n500.1\n43.0\n70.3\n80.1\n64.5\n35.1\n63.5\n75.1\n57.9\n367.2\n\nShot\n73.0\n91.9\n95.8\n86.9\n63.9\n87.1\n92.1\n81.0\n503.8\n42.4\n69.9\n79.9\n64.1\n34.9\n63.3\n74.9\n57.7\n365.3\n\nImpluse\n65.1\n87.9\n92.5\n81.8\n59.2\n84.3\n90.1\n77.9\n479.2\n35.6\n63.0\n74.3\n57.6\n29.8\n58.3\n70.7\n53.0\n331.7\n\nSpeckle\n78.1\n95.0\n97.8\n90.3\n66.9\n89.9\n94.4\n83.7\n522.1\n36.5\n65.7\n77.1\n59.8\n36.5\n65.7\n77.1\n59.8\n381.5\n\nBlur\nDefocus\n70.1\n90.2\n94.5\n84.9\n61.6\n85.6\n91.4\n79.5\n493.4\n43.7\n71.7\n81.5\n65.6\n35.2\n63.8\n75.2\n58.1\n371.0\n\nGlass\n82.3\n97.1\n99.1\n92.9\n70.6\n91.9\n95.8\n86.1\n536.9\n52.3\n80.1\n88.5\n73.7\n40.8\n69.9\n80.6\n63.8\n412.2\n\nMotion\n76.1\n93.7\n96.8\n88.9\n65.0\n88.4\n93.3\n82.2\n513.3\n44.6\n71.7\n81.0\n65.8\n36.4\n64.9\n75.8\n59.1\n374.4\n\nZoom\n58.7\n80.9\n87.8\n75.8\n53.0\n78.5\n85.5\n72.3\n444.3\n28.4\n54.1\n65.1\n49.2\n26.6\n52.3\n64.4\n47.8\n291.0\n\nWeather\nSnow\n69.6\n91.3\n95.7\n85.5\n64.2\n88.8\n93.4\n82.1\n503.0\n26.6\n51.7\n63.9\n47.4\n26.4\n54.0\n66.6\n49.0\n289.3\n\nFrost\n81.7\n97.0\n98.9\n92.5\n69.1\n90.9\n95.0\n85.0\n532.5\n37.3\n65.2\n75.8\n59.4\n30.3\n58.4\n70.4\n53.0\n337.3\n\nFog\n80.5\n95.9\n98.3\n91.6\n69.0\n90.8\n95.2\n85.0\n529.7\n47.0\n75.3\n84.6\n69.0\n37.7\n67.0\n78.2\n61.0\n389.9\n\nBrightness\n85.9\n97.8\n99.3\n94.3\n72.3\n92.3\n96.1\n86.9\n543.7\n52.8\n80.1\n88.4\n73.8\n41.2\n70.4\n80.9\n64.2\n413.9\n\nDigital\nContrast\n78.1\n94.9\n97.5\n90.2\n66.9\n89.8\n94.3\n83.6\n521.5\n43.4\n71.6\n81.5\n65.5\n35.6\n64.1\n75.5\n58.4\n371.7\n\nElastic\n76.9\n93.8\n96.9\n89.2\n65.4\n88.0\n92.9\n82.1\n513.9\n45.8\n73.6\n82.8\n67.4\n36.2\n65.0\n76.3\n59.1\n379.7\n\nPixelate\n62.5\n83.9\n88.8\n78.4\n54.4\n78.6\n85.5\n72.8\n453.8\n32.4\n58.3\n68.9\n53.2\n27.3\n53.8\n65.7\n48.9\n306.4\n\nJPEG\n81.5\n96.2\n98.3\n92.0\n68.2\n90.1\n94.2\n84.2\n528.5\n50.4\n78.1\n86.8\n71.8\n39.2\n68.2\n79.4\n62.3\n402.1\n\nStylize\nStylized\n59.9\n80.8\n86.5\n75.7\n51.3\n76.0\n82.6\n70.0\n437.0\n33.3\n59.1\n69.3\n53.9\n28.1\n54.5\n65.9\n49.5\n310.2\n\n

\n
\n
", + "capture": "Table 27: CLIP image perturbation performance comparison of Fine-tuned (FT) image-text retrieval on Flickr30K and COCO datasets (results are averaged on five perturbation levels)." + }, + "28": { + "table_html": "
\n
Table 28: BLIP image perturbation performance comparison of Fine-tuned (FT) image-text retrieval on Flickr30K and COCO datasets (results are averaged over the five perturbation levels).
\n
\n

\n\n\n\n\nMethod\nFlickr30K (1K)\nMSCOCO (5K)\n\n\nText Retrieval\nImage Retrieval\n\nText Retrieval\nImage Retrieval\n\n\n\nR@1\nR@5\nR@10\nMean\nR@1\nR@5\nR@10\nMean\nRSUM\nR@1\nR@5\nR@10\nMean\nR@1\nR@5\nR@10\nMean\nRSUM\n\nNoise\nGaussian\n85.1\n94.9\n96.4\n92.1\n74.3\n91.1\n94.4\n86.6\n536.2\n70.1\n88.4\n92.8\n83.8\n55.2\n79.0\n86.4\n73.5\n471.9\n\nShot\n85.4\n95.0\n96.8\n92.4\n75.1\n91.6\n95.0\n87.3\n538.9\n70.1\n88.2\n92.8\n83.7\n55.2\n79.2\n86.5\n73.7\n472.1\n\nImpluse\n83.3\n93.4\n95.7\n90.8\n72.9\n89.9\n93.5\n85.4\n528.6\n68.7\n87.6\n92.3\n82.9\n54.5\n78.6\n86.1\n73.1\n467.7\n\nSpeckle\n91.3\n98.2\n99.1\n96.2\n80.2\n94.8\n97.2\n90.7\n560.8\n74.4\n91.5\n95.0\n87.0\n58.4\n81.6\n88.5\n76.2\n489.5\n\nBlur\nDefocus\n83.8\n93.9\n96.0\n91.2\n73.1\n89.5\n93.2\n85.3\n529.4\n68.0\n87.5\n92.2\n82.6\n54.6\n78.3\n85.4\n72.8\n466.1\n\nGlass\n94.6\n99.6\n99.8\n98.0\n83.4\n96.1\n98.0\n92.5\n571.6\n79.1\n94.3\n97.2\n90.2\n62.0\n84.3\n90.3\n78.9\n507.2\n\nMotion\n82.6\n93.4\n96.0\n90.7\n71.9\n88.9\n92.9\n84.6\n525.7\n65.8\n85.0\n89.8\n80.2\n52.9\n75.6\n82.5\n70.3\n451.7\n\nZoom\n56.2\n74.9\n80.4\n70.5\n53.3\n74.7\n81.6\n69.9\n421.1\n30.7\n52.2\n61.0\n48.0\n31.8\n53.4\n62.5\n49.2\n291.6\n\nWeather\nSnow\n62.2\n82.7\n88.8\n77.9\n56.7\n79.7\n86.5\n74.3\n456.6\n58.3\n80.5\n87.1\n75.3\n49.7\n74.5\n82.8\n69.0\n432.8\n\nFrost\n79.1\n93.0\n96.1\n89.4\n66.4\n86.8\n91.9\n81.7\n513.4\n69.2\n88.0\n92.7\n83.3\n55.7\n79.5\n86.7\n74.0\n471.8\n\nFog\n92.9\n99.2\n99.6\n97.2\n82.8\n96.0\n98.0\n92.3\n568.5\n74.7\n91.7\n95.4\n87.2\n60.1\n82.9\n89.4\n77.5\n494.2\n\nBrightness\n95.6\n99.6\n99.8\n98.3\n84.8\n96.5\n98.3\n93.2\n574.5\n79.1\n94.0\n96.8\n90.0\n61.9\n84.4\n90.5\n78.9\n506.8\n\nDigital\nContrast\n90.2\n97.5\n98.4\n95.4\n79.4\n93.5\n96.1\n89.7\n555.1\n69.5\n87.6\n92.1\n83.1\n56.1\n79.1\n86.1\n73.8\n470.4\n\nElastic\n87.3\n95.4\n96.8\n93.2\n77.5\n92.8\n95.7\n88.7\n545.6\n70.4\n87.9\n92.4\n83.6\n55.9\n79.3\n86.4\n73.9\n472.3\n\nPixelate\n75.6\n88.2\n91.5\n85.1\n64.7\n83.0\n87.8\n78.5\n490.8\n56.1\n76.3\n82.6\n71.6\n44.9\n68.3\n76.5\n63.3\n404.7\n\nJPEG\n92.7\n98.5\n99.3\n96.8\n81.2\n94.9\n97.2\n91.1\n563.8\n77.5\n93.2\n96.4\n89.1\n60.1\n83.0\n89.5\n77.5\n499.6\n\nStylize\nStylized\n73.3\n86.4\n89.3\n83.0\n64.1\n82.1\n87.0\n77.7\n482.1\n55.1\n75.3\n81.6\n70.7\n45.9\n68.6\n76.5\n63.6\n402.9\n\n

\n
\n
", + "capture": "Table 28: BLIP image perturbation performance comparison of Fine-tuned (FT) image-text retrieval on Flickr30K and COCO datasets (results are averaged on five perturbation levels)." + }, + "29": { + "table_html": "
\n
Table 29: ALBEF image perturbation performance comparison of Fine-tuned (FT) image-text retrieval on Flickr30K and COCO datasets (results are averaged over the five perturbation levels).
\n
\n

\n\n\n\n\nMethod\nFlickr30K (1K)\nMSCOCO (5K)\n\n\nText Retrieval\nImage Retrieval\n\nText Retrieval\nImage Retrieval\n\n\n\nR@1\nR@5\nR@10\nMean\nR@1\nR@5\nR@10\nMean\nRSUM\nR@1\nR@5\nR@10\nMean\nR@1\nR@5\nR@10\nMean\nRSUM\n\nNoise\nGaussian\n83.9\n94.6\n96.5\n91.7\n73.4\n90.9\n94.5\n86.3\n533.8\n66.1\n86.5\n92.0\n81.5\n52.1\n77.6\n85.7\n71.8\n460.0\n\nShot\n84.9\n95.2\n97.1\n92.4\n74.0\n91.8\n95.2\n87.0\n538.3\n66.2\n86.6\n92.0\n81.6\n52.1\n77.9\n85.8\n71.9\n460.6\n\nImpluse\n83.7\n94.4\n96.3\n91.5\n73.0\n90.5\n94.1\n85.9\n532.0\n66.0\n86.8\n92.1\n81.6\n52.1\n77.6\n85.7\n71.8\n460.3\n\nSpeckle\n90.1\n98.1\n99.1\n95.8\n78.8\n94.6\n97.2\n90.2\n557.8\n69.9\n89.3\n94.1\n84.4\n54.7\n80.1\n87.6\n74.1\n475.8\n\nBlur\nDefocus\n82.6\n94.0\n96.5\n91.1\n71.8\n90.2\n93.6\n85.2\n528.8\n62.6\n84.1\n90.1\n79.0\n50.6\n75.7\n83.9\n70.1\n447.1\n\nGlass\n93.8\n99.2\n99.7\n97.6\n82.3\n96.3\n97.9\n92.1\n569.2\n75.1\n92.1\n96.2\n87.8\n58.1\n82.2\n89.2\n76.5\n493.0\n\nMotion\n80.0\n92.0\n94.2\n88.7\n69.3\n88.2\n92.3\n83.3\n516.0\n61.6\n82.4\n87.9\n77.3\n49.3\n73.8\n81.5\n68.2\n436.5\n\nZoom\n56.0\n73.8\n79.4\n69.7\n52.6\n73.8\n80.4\n69.0\n416.1\n29.4\n51.1\n60.2\n46.9\n29.2\n51.3\n60.9\n47.1\n282.2\n\nWeather\nSnow\n81.7\n94.4\n96.8\n91.0\n73.2\n91.2\n94.7\n86.4\n532.0\n51.3\n76.8\n84.8\n71.0\n44.9\n71.0\n79.9\n65.3\n408.8\n\nFrost\n90.4\n97.5\n98.8\n95.5\n79.5\n94.7\n97.2\n90.5\n558.1\n62.1\n84.7\n90.7\n79.2\n51.0\n76.7\n84.6\n70.8\n449.8\n\nFog\n90.2\n98.1\n99.1\n95.8\n80.5\n95.1\n97.4\n91.0\n560.4\n68.3\n89.1\n94.2\n83.9\n54.6\n79.6\n86.9\n73.7\n472.6\n\nBrightness\n94.5\n99.4\n99.7\n97.8\n83.7\n96.6\n98.2\n92.8\n572.0\n74.6\n92.7\n96.2\n87.8\n58.1\n82.7\n89.5\n76.8\n493.8\n\nDigital\nContrast\n88.2\n96.7\n97.9\n94.3\n78.3\n93.4\n96.0\n89.2\n550.6\n63.8\n85.0\n90.8\n79.9\n51.7\n76.5\n84.3\n70.8\n452.1\n\nElastic\n85.3\n94.7\n96.5\n92.2\n75.3\n91.8\n95.1\n87.4\n538.7\n65.7\n85.6\n91.1\n80.8\n51.7\n76.5\n84.4\n70.9\n455.0\n\nPixelate\n63.8\n78.2\n82.4\n74.8\n55.4\n75.3\n80.7\n70.5\n435.9\n45.9\n65.7\n72.7\n61.4\n36.3\n58.9\n67.5\n54.2\n347.0\n\nJPEG\n91.7\n98.2\n99.1\n96.3\n79.1\n94.6\n97.1\n90.3\n559.8\n71.7\n91.1\n95.4\n86.1\n55.3\n80.0\n87.4\n74.2\n480.9\n\nStylize\nStylized\n70.0\n83.7\n86.9\n80.2\n60.0\n79.0\n84.5\n74.5\n464.1\n50.6\n71.9\n78.6\n67.0\n40.3\n63.2\n71.7\n58.4\n376.4\n\n

\n
\n
", + "capture": "Table 29: ALBEF image perturbation performance comparison of Fine-tuned (FT) image-text retrieval on Flickr30K and COCO datasets (results are averaged on five perturbation levels)." + }, + "30": { + "table_html": "
\n
Table 30: TCL image perturbation performance comparison of Zero-Shot (ZS) image-text retrieval on Flickr30K and COCO datasets (results are averaged over the five perturbation levels).
\n
\n

\n\n\n\n\nMethod\nFlickr30K (1K)\nMSCOCO (5K)\n\n\nText Retrieval\nImage Retrieval\n\nText Retrieval\nImage Retrieval\n\n\n\nR@1\nR@5\nR@10\nMean\nR@1\nR@5\nR@10\nMean\nRSUM\nR@1\nR@5\nR@10\nMean\nR@1\nR@5\nR@10\nMean\nRSUM\n\nNoise\nGaussian\n69.3\n86.8\n90.4\n82.2\n55.2\n78.4\n84.8\n72.8\n464.9\n57.9\n80.2\n87.0\n75.0\n44.2\n70.6\n79.9\n64.9\n419.8\n\nShot\n70.1\n87.0\n91.2\n82.8\n55.5\n78.4\n84.7\n72.9\n467.0\n57.2\n79.9\n86.9\n74.7\n44.0\n70.5\n79.9\n64.8\n418.4\n\nImpluse\n67.3\n85.9\n90.3\n81.2\n53.7\n77.4\n83.8\n71.6\n458.4\n57.2\n80.2\n87.0\n74.8\n43.8\n70.4\n79.8\n64.7\n418.4\n\nSpeckle\n78.1\n92.9\n96.4\n89.1\n60.3\n82.3\n88.2\n76.9\n498.0\n62.0\n84.2\n90.5\n78.9\n46.7\n73.3\n82.4\n67.5\n439.0\n\nBlur\nDefocus\n60.0\n82.0\n87.3\n76.4\n50.2\n71.6\n78.7\n66.9\n429.8\n54.7\n79.1\n86.5\n73.4\n39.9\n65.2\n74.6\n59.9\n400.0\n\nGlass\n78.2\n94.0\n97.2\n89.8\n63.8\n84.1\n89.4\n79.1\n506.6\n66.7\n88.7\n94.7\n83.4\n46.5\n72.6\n81.6\n66.9\n450.8\n\nMotion\n51.2\n72.9\n80.5\n68.2\n43.8\n66.0\n74.1\n61.3\n388.5\n47.6\n72.3\n80.7\n66.9\n33.5\n57.0\n66.4\n52.3\n357.5\n\nZoom\n25.0\n44.5\n53.5\n41.0\n27.5\n45.9\n54.9\n42.8\n251.3\n16.7\n33.5\n42.7\n31.0\n15.3\n30.5\n38.7\n28.1\n177.3\n\nWeather\nSnow\n51.7\n75.4\n83.3\n70.1\n47.6\n70.5\n78.8\n65.7\n407.3\n37.1\n63.8\n74.7\n58.5\n28.5\n51.2\n61.2\n47.0\n316.5\n\nFrost\n62.8\n85.5\n91.3\n79.9\n52.0\n75.2\n82.8\n70.0\n449.5\n48.9\n75.1\n83.9\n69.3\n34.5\n59.7\n69.8\n54.7\n372.0\n\nFog\n59.0\n81.7\n89.2\n76.6\n49.5\n73.2\n81.6\n68.1\n434.2\n55.7\n81.3\n89.1\n75.4\n38.1\n63.3\n73.1\n58.2\n400.6\n\nBrightness\n82.4\n96.2\n98.6\n92.4\n61.3\n82.5\n88.1\n77.3\n509.1\n66.8\n88.7\n94.3\n83.3\n47.1\n73.3\n82.0\n67.5\n452.2\n\nDigital\nContrast\n69.8\n89.9\n94.0\n84.6\n56.3\n78.3\n85.0\n73.2\n473.2\n58.5\n82.9\n89.7\n77.0\n41.2\n67.2\n76.6\n61.7\n416.1\n\nElastic\n62.4\n80.6\n85.9\n76.3\n52.0\n73.3\n80.3\n68.5\n434.4\n50.6\n73.3\n80.7\n68.2\n35.6\n59.6\n69.2\n54.8\n369.0\n\nPixelate\n30.4\n46.4\n53.3\n43.4\n25.8\n42.2\n49.1\n39.0\n247.2\n21.2\n36.4\n43.3\n33.7\n17.4\n32.4\n39.5\n29.8\n190.3\n\nJPEG\n78.2\n93.8\n96.6\n89.5\n61.2\n83.4\n89.0\n77.9\n502.2\n63.1\n86.0\n92.0\n80.3\n46.5\n73.1\n82.1\n67.2\n442.7\n\nStylize\nStylized\n44.2\n64.8\n71.2\n60.1\n38.4\n58.5\n66.2\n54.4\n343.4\n33.7\n55.0\n63.7\n50.8\n26.3\n46.4\n55.0\n42.6\n280.1\n\n

\n
\n
", + "capture": "Table 30: TCL image perturbation performance comparison of Zero-Shot (ZS) image-text retrieval on Flickr30K and COCO datasets (results are averaged on five perturbation levels)." + }, + "31": { + "table_html": "
\n
Table 31: TCL image perturbation performance comparison of Fine-tuned (FT) image-text retrieval on Flickr30K and COCO datasets (results are averaged over the five perturbation levels).
\n
\n

\n\n\n\n\nMethod\nFlickr30K (1K)\nMSCOCO (5K)\n\n\nText Retrieval\nImage Retrieval\n\nText Retrieval\nImage Retrieval\n\n\n\nR@1\nR@5\nR@10\nMean\nR@1\nR@5\nR@10\nMean\nRSUM\nR@1\nR@5\nR@10\nMean\nR@1\nR@5\nR@10\nMean\nRSUM\n\nNoise\nGaussian\n83.1\n94.3\n96.7\n91.4\n71.4\n90.3\n94.1\n85.3\n529.9\n64.8\n85.8\n91.3\n80.6\n50.8\n76.6\n84.9\n70.8\n454.3\n\nShot\n83.3\n95.1\n97.1\n91.8\n71.9\n90.7\n94.5\n85.7\n532.6\n64.8\n85.7\n91.3\n80.6\n50.7\n76.8\n85.1\n70.9\n454.4\n\nImpluse\n82.9\n94.1\n96.5\n91.1\n70.6\n89.9\n93.8\n84.8\n527.7\n64.4\n85.7\n91.5\n80.5\n50.6\n76.7\n85.0\n70.8\n453.9\n\nSpeckle\n88.8\n97.8\n98.7\n95.1\n76.3\n93.5\n96.5\n88.8\n551.6\n67.9\n88.1\n93.4\n83.2\n53.0\n78.8\n86.8\n72.9\n468.1\n\nBlur\nDefocus\n77.0\n90.6\n93.5\n87.1\n66.6\n86.1\n90.7\n81.1\n504.5\n62.8\n84.6\n90.7\n79.4\n50.1\n75.8\n83.8\n69.9\n447.8\n\nGlass\n92.7\n99.1\n99.7\n97.2\n81.2\n95.6\n97.7\n91.5\n566.0\n74.1\n92.4\n96.3\n87.6\n57.7\n82.3\n89.2\n76.4\n491.9\n\nMotion\n78.9\n92.2\n94.9\n88.7\n68.1\n87.6\n92.2\n82.6\n513.9\n60.5\n81.9\n87.8\n76.7\n48.4\n73.4\n81.7\n67.8\n433.8\n\nZoom\n51.8\n70.5\n76.4\n66.2\n48.4\n71.3\n78.9\n66.2\n397.3\n24.5\n45.2\n54.6\n41.5\n27.2\n49.3\n59.1\n45.2\n259.9\n\nWeather\nSnow\n78.8\n93.3\n95.9\n89.3\n70.0\n89.9\n93.8\n84.6\n521.7\n51.5\n76.4\n84.7\n70.9\n44.6\n71.2\n80.5\n65.4\n408.9\n\nFrost\n88.1\n97.5\n98.6\n94.7\n76.6\n93.7\n96.5\n88.9\n551.0\n61.2\n83.1\n89.5\n77.9\n49.6\n75.6\n84.1\n69.8\n443.2\n\nFog\n88.1\n98.0\n99.1\n95.1\n77.9\n94.2\n96.7\n89.6\n554.1\n67.7\n88.3\n93.5\n83.2\n53.9\n79.5\n87.3\n73.5\n470.1\n\nBrightness\n93.7\n99.0\n99.6\n97.4\n81.9\n95.9\n97.9\n91.9\n568.0\n73.4\n91.6\n95.9\n87.0\n57.1\n82.0\n89.1\n76.1\n489.1\n\nDigital\nContrast\n90.0\n97.8\n99.2\n95.7\n78.5\n94.5\n97.1\n90.0\n557.1\n67.4\n87.8\n93.2\n82.8\n53.6\n79.1\n86.7\n73.1\n467.8\n\nElastic\n81.3\n92.4\n94.7\n89.5\n72.1\n90.1\n93.8\n85.3\n524.4\n61.3\n82.4\n88.4\n77.4\n48.9\n74.4\n82.8\n68.7\n438.2\n\nPixelate\n50.1\n66.2\n72.0\n62.8\n45.7\n65.4\n72.5\n61.2\n372.0\n37.7\n57.1\n65.0\n53.3\n32.0\n54.1\n63.1\n49.8\n309.1\n\nJPEG\n90.2\n98.3\n99.3\n95.9\n77.1\n93.9\n96.7\n89.2\n555.4\n69.9\n89.3\n94.3\n84.5\n54.1\n79.8\n87.4\n73.8\n474.9\n\nStylize\nStylized\n65.0\n80.7\n85.0\n76.9\n57.4\n77.5\n83.2\n72.7\n448.7\n45.3\n67.5\n75.3\n62.7\n38.8\n62.6\n71.3\n57.6\n360.9\n\n

\n
\n
", + "capture": "Table 31: TCL image perturbation performance comparison of Fine-tuned (FT) image-text retrieval on Flickr30K and COCO datasets (results are averaged on five perturbation levels)." + }, + "32": { + "table_html": "
\n
Table 32: ViLT text perturbation performance comparison of Fine-tuned (FT) image-text retrieval on Flickr30K and COCO datasets (results are averaged over the five perturbation levels).
\n
\n

\n\n\n\n\nMethod\nFlickr30K (1K)\nMSCOCO (5K)\n\n\nText Retrieval\nImage Retrieval\n\nText Retrieval\nImage Retrieval\n\n\n\nR@1\nR@5\nR@10\nMean\nR@1\nR@5\nR@10\nMean\nRSUM\nR@1\nR@5\nR@10\nMean\nR@1\nR@5\nR@10\nMean\nRSUM\n\nCharacter\nKeyboard\n55.6\n82.9\n89.3\n75.9\n31.8\n57.7\n68.0\n52.5\n385.3\n40.3\n69.6\n79.9\n63.3\n23.1\n47.3\n59.0\n43.1\n319.2\n\nOcr\n71.1\n92.0\n96.1\n86.4\n45.8\n74.1\n82.8\n67.6\n462.0\n51.9\n80.1\n88.5\n73.5\n32.5\n60.8\n72.5\n55.2\n386.2\n\nCI\n55.3\n83.2\n90.1\n76.2\n31.9\n58.5\n68.9\n53.1\n388.0\n41.1\n70.8\n81.4\n64.4\n24.0\n48.9\n60.8\n44.6\n327.0\n\nCR\n55.7\n82.5\n90.1\n76.1\n31.8\n57.7\n68.3\n52.6\n386.2\n40.8\n69.8\n80.5\n63.7\n23.5\n47.7\n59.4\n43.5\n321.7\n\nCS\n57.6\n83.8\n90.7\n77.4\n33.7\n59.8\n70.0\n54.5\n395.6\n42.3\n72.2\n82.0\n65.5\n24.9\n49.9\n61.7\n45.5\n333.1\n\nCD\n57.3\n84.0\n90.8\n77.4\n34.6\n60.9\n71.0\n55.5\n398.6\n42.3\n71.9\n82.3\n65.5\n25.1\n50.3\n62.3\n45.9\n334.1\n\nWord\nSR\n71.0\n92.4\n96.1\n86.5\n48.9\n77.4\n86.0\n70.8\n471.9\n52.8\n80.9\n88.9\n74.2\n35.2\n64.3\n75.7\n58.4\n397.8\n\nWI\n75.0\n94.0\n97.3\n88.8\n53.9\n82.4\n89.5\n75.3\n492.2\n56.5\n83.4\n90.9\n76.9\n38.6\n68.4\n79.7\n62.2\n417.5\n\nWS\n71.6\n93.0\n96.8\n87.1\n50.4\n80.2\n88.1\n72.9\n480.1\n53.7\n81.4\n89.5\n74.9\n35.8\n66.0\n78.0\n60.0\n404.4\n\nWD\n74.3\n93.9\n97.3\n88.5\n53.0\n82.0\n89.3\n74.8\n489.8\n55.6\n82.5\n90.3\n76.2\n37.8\n68.0\n79.4\n61.7\n413.6\n\nIP\n79.5\n95.7\n98.0\n91.1\n58.1\n85.0\n91.3\n78.1\n507.7\n59.9\n85.4\n92.0\n79.1\n41.8\n71.6\n82.3\n65.2\n433.1\n\nSentence\nFormal\n79.5\n95.7\n98.6\n91.3\n59.2\n85.6\n91.5\n78.8\n510.1\n61.1\n85.8\n92.2\n79.7\n42.6\n72.2\n82.6\n65.8\n436.5\n\nCasual\n78.1\n95.5\n97.8\n90.5\n57.3\n84.9\n90.9\n77.7\n504.5\n60.0\n85.5\n91.7\n79.1\n42.2\n71.9\n82.4\n65.5\n433.6\n\nPassive\n74.0\n94.6\n97.4\n88.7\n53.2\n80.8\n88.1\n74.0\n488.1\n57.9\n84.4\n91.4\n77.9\n40.0\n69.3\n80.2\n63.2\n423.2\n\nActive\n78.5\n95.1\n98.3\n90.6\n58.6\n85.7\n92.1\n78.8\n508.3\n60.9\n85.9\n92.2\n79.7\n42.9\n72.3\n82.9\n66.0\n437.1\n\nBack_trans\n78.0\n94.8\n98.0\n90.3\n56.1\n83.0\n90.2\n76.4\n500.1\n59.1\n84.4\n91.3\n78.3\n40.5\n69.9\n80.7\n63.7\n426.0\n\n

\n
\n
", + "capture": "Table 32: ViLT text perturbation performance comparison of Fine-tuned (FT) image-text retrieval on Flickr30K and COCO datasets (results are averaged on five perturbation levels)." + }, + "33": { + "table_html": "
\n
Table 33: CLIP text perturbation performance comparison of Zero-Shot (ZS) image-text retrieval on Flickr30K and COCO datasets (results are averaged over the five perturbation levels).
\n
\n

\n\n\n\n\nMethod\nFlickr30K (1K)\nMSCOCO (5K)\n\n\nText Retrieval\nImage Retrieval\n\nText Retrieval\nImage Retrieval\n\n\n\nR@1\nR@5\nR@10\nMean\nR@1\nR@5\nR@10\nMean\nRSUM\nR@1\nR@5\nR@10\nMean\nR@1\nR@5\nR@10\nMean\nRSUM\n\nCharacter\nKeyboard\n62.4\n86.9\n93.1\n80.8\n43.5\n68.8\n77.0\n63.1\n431.8\n36.8\n62.1\n72.8\n57.2\n21.0\n41.2\n51.6\n37.9\n285.5\n\nOcr\n73.4\n93.2\n96.7\n87.8\n52.9\n77.3\n84.6\n71.6\n478.2\n37.2\n62.2\n72.6\n57.4\n21.1\n41.5\n51.8\n38.1\n286.4\n\nCI\n66.4\n89.6\n94.7\n83.6\n47.3\n72.3\n80.2\n66.6\n450.5\n37.0\n62.1\n72.8\n57.3\n21.2\n41.4\n51.6\n38.1\n286.1\n\nCR\n63.0\n88.4\n93.8\n81.7\n44.1\n68.7\n77.2\n63.3\n435.2\n36.6\n62.1\n72.7\n57.1\n21.0\n41.4\n51.7\n38.0\n285.4\n\nCS\n65.5\n89.3\n94.9\n83.2\n45.7\n70.4\n78.7\n65.0\n444.6\n36.5\n62.2\n72.6\n57.1\n21.1\n41.4\n51.8\n38.1\n285.6\n\nCD\n66.3\n90.4\n95.4\n84.0\n47.2\n71.9\n80.1\n66.4\n451.3\n36.6\n62.2\n73.0\n57.3\n21.1\n41.4\n51.6\n38.0\n285.8\n\nWord\nSR\n76.0\n95.1\n98.0\n89.7\n58.0\n81.7\n88.2\n76.0\n497.1\n47.0\n72.8\n81.8\n67.2\n29.2\n53.0\n63.6\n48.6\n347.5\n\nWI\n78.3\n95.7\n98.3\n90.8\n61.6\n84.9\n90.9\n79.1\n509.6\n49.9\n74.9\n83.5\n69.4\n32.1\n56.5\n66.9\n51.8\n363.8\n\nWS\n77.2\n95.1\n98.0\n90.1\n59.7\n83.6\n89.8\n77.7\n503.3\n48.9\n73.6\n82.3\n68.3\n30.6\n54.7\n65.3\n50.2\n355.5\n\nWD\n80.9\n96.8\n98.5\n92.1\n61.4\n85.4\n91.1\n79.3\n514.1\n51.7\n76.4\n84.6\n70.9\n32.3\n56.5\n67.1\n51.9\n368.6\n\nIP\n81.8\n97.1\n98.8\n92.6\n63.8\n86.1\n91.6\n80.5\n519.4\n52.4\n76.6\n84.5\n71.2\n34.1\n58.2\n68.4\n53.6\n374.2\n\nSentence\nFormal\n86.4\n98.6\n99.1\n94.7\n66.0\n88.5\n93.1\n82.5\n531.7\n56.8\n80.4\n87.7\n75.0\n36.4\n60.9\n70.8\n56.0\n393.0\n\nCasual\n84.9\n97.9\n99.2\n94.0\n66.1\n88.4\n92.8\n82.4\n529.3\n57.1\n79.6\n87.7\n74.8\n35.9\n60.6\n70.7\n55.7\n391.6\n\nPassive\n84.3\n96.9\n99.2\n93.5\n64.8\n87.3\n92.2\n81.5\n524.8\n54.3\n77.8\n86.1\n72.7\n34.1\n58.4\n68.9\n53.8\n379.6\n\nActive\n85.6\n97.9\n99.2\n94.2\n66.9\n88.8\n93.1\n82.9\n531.4\n57.5\n80.3\n87.9\n75.2\n36.1\n60.8\n70.9\n55.9\n393.5\n\nBack_trans\n83.9\n97.0\n98.5\n93.1\n65.5\n87.2\n92.2\n81.6\n524.2\n55.1\n78.2\n85.7\n73.0\n34.3\n58.9\n69.1\n54.1\n381.2\n\n

\n
\n
", + "capture": "Table 33: CLIP text perturbation performance comparison of Zero-Shot (ZS) image-text retrieval on Flickr30K and COCO datasets (results are averaged on five perturbation levels)." + }, + "34": { + "table_html": "
\n
Table 34: CLIP text perturbation performance comparison of Fine-tuned (FT) image-text retrieval on Flickr30K and COCO datasets (results are averaged over the five perturbation levels).
\n
\n

\n\n\n\n\nMethod\nFlickr30K (1K)\nMSCOCO (5K)\n\n\nText Retrieval\nImage Retrieval\n\nText Retrieval\nImage Retrieval\n\n\n\nR@1\nR@5\nR@10\nMean\nR@1\nR@5\nR@10\nMean\nRSUM\nR@1\nR@5\nR@10\nMean\nR@1\nR@5\nR@10\nMean\nRSUM\n\nCharacter\nKeyboard\n67.0\n91.2\n96.2\n84.8\n48.3\n74.0\n81.6\n68.0\n458.4\n36.8\n66.1\n78.1\n60.3\n24.3\n49.4\n61.3\n45.0\n316.1\n\nOcr\n76.2\n95.4\n98.4\n90.0\n58.5\n83.3\n89.1\n77.0\n500.9\n36.8\n66.3\n77.9\n60.4\n24.4\n49.7\n61.5\n45.2\n316.7\n\nCI\n71.4\n93.3\n96.8\n87.2\n53.2\n78.1\n84.8\n72.0\n477.6\n36.3\n66.6\n78.2\n60.4\n24.4\n49.6\n61.4\n45.1\n316.5\n\nCR\n68.9\n91.7\n96.1\n85.6\n48.7\n74.5\n81.7\n68.3\n461.6\n36.5\n66.3\n78.1\n60.3\n24.3\n49.7\n61.5\n45.2\n316.4\n\nCS\n70.7\n92.4\n96.6\n86.6\n51.0\n76.6\n83.7\n70.4\n471.1\n36.5\n66.5\n78.2\n60.4\n24.4\n49.6\n61.4\n45.1\n316.7\n\nCD\n70.9\n93.3\n97.2\n87.2\n52.1\n77.5\n84.5\n71.3\n475.5\n36.7\n66.1\n77.9\n60.3\n24.2\n49.5\n61.3\n45.0\n315.6\n\nWord\nSR\n78.0\n96.4\n98.5\n91.0\n63.4\n87.2\n92.0\n80.9\n515.4\n45.3\n75.0\n85.1\n68.5\n33.8\n62.7\n74.3\n56.9\n376.2\n\nWI\n81.0\n97.0\n99.0\n92.3\n68.3\n90.4\n94.7\n84.4\n530.4\n48.4\n77.3\n86.8\n70.8\n37.3\n66.8\n78.1\n60.7\n394.6\n\nWS\n80.8\n97.0\n99.0\n92.2\n66.1\n89.3\n93.9\n83.1\n526.0\n48.0\n77.1\n86.7\n70.6\n35.9\n65.3\n76.9\n59.4\n389.9\n\nWD\n81.0\n97.4\n99.1\n92.5\n67.9\n90.7\n95.0\n84.5\n531.1\n49.1\n77.7\n86.8\n71.2\n37.1\n66.7\n78.0\n60.6\n395.3\n\nIP\n83.0\n97.9\n99.2\n93.4\n69.9\n91.2\n95.1\n85.4\n536.4\n51.5\n79.5\n88.1\n73.0\n39.1\n68.7\n79.6\n62.5\n406.6\n\nSentence\nFormal\n85.2\n98.4\n99.5\n94.4\n73.3\n92.9\n96.4\n87.6\n545.8\n53.5\n81.0\n88.9\n74.5\n41.7\n70.8\n81.3\n64.6\n417.3\n\nCasual\n83.9\n97.6\n99.4\n93.6\n72.5\n92.3\n96.4\n87.1\n542.1\n52.5\n80.6\n89.0\n74.0\n41.4\n70.4\n81.2\n64.4\n415.2\n\nPassive\n82.9\n97.7\n99.1\n93.2\n71.3\n91.3\n95.6\n86.1\n537.9\n51.9\n80.0\n88.3\n73.4\n39.6\n68.9\n80.0\n62.8\n408.7\n\nActive\n85.0\n97.6\n99.4\n94.0\n73.5\n92.9\n96.6\n87.7\n545.1\n54.1\n81.4\n89.0\n74.8\n42.2\n71.1\n81.7\n65.0\n419.4\n\nBack_trans\n83.8\n97.7\n99.0\n93.5\n70.4\n91.2\n95.2\n85.6\n537.3\n51.4\n79.1\n88.2\n72.9\n39.6\n68.5\n79.5\n62.5\n406.2\n\n

\n
\n
", + "capture": "Table 34: CLIP text perturbation performance comparison of Fine-tuned (FT) image-text retrieval on Flickr30K and COCO datasets (results are averaged on five perturbation levels)." + }, + "35": { + "table_html": "
\n
Table 35: BLIP text perturbation performance comparison of Fine-tuned (FT) image-text retrieval on Flickr30K and COCO datasets (results are averaged over the five perturbation levels).
\n
\n

\n\n\n\n\nMethod\nFlickr30K (1K)\nMSCOCO (5K)\n\n\nText Retrieval\nImage Retrieval\n\nText Retrieval\nImage Retrieval\n\n\n\nR@1\nR@5\nR@10\nMean\nR@1\nR@5\nR@10\nMean\nRSUM\nR@1\nR@5\nR@10\nMean\nR@1\nR@5\nR@10\nMean\nRSUM\n\nCharacter\nKeyboard\n84.5\n97.3\n98.9\n93.6\n63.8\n84.1\n89.4\n79.1\n518.0\n64.1\n86.4\n91.9\n80.8\n42.7\n67.5\n76.6\n62.2\n429.1\n\nOcr\n93.6\n99.5\n99.8\n97.6\n77.5\n93.1\n96.0\n88.9\n559.5\n74.3\n92.2\n96.0\n87.5\n53.6\n77.7\n85.3\n72.2\n479.1\n\nCI\n86.6\n98.0\n99.3\n94.7\n66.3\n86.1\n90.9\n81.1\n527.3\n66.7\n88.1\n93.4\n82.7\n45.0\n70.2\n79.0\n64.7\n442.4\n\nCR\n84.6\n97.5\n99.0\n93.7\n63.9\n83.8\n89.2\n79.0\n518.0\n64.5\n86.7\n92.1\n81.1\n42.9\n67.7\n76.9\n62.5\n430.8\n\nCS\n87.4\n97.9\n99.3\n94.9\n65.9\n85.4\n90.5\n80.6\n526.4\n67.0\n88.1\n93.2\n82.8\n44.6\n69.7\n78.6\n64.3\n441.3\n\nCD\n86.8\n97.7\n99.2\n94.6\n65.9\n85.7\n90.4\n80.7\n525.7\n67.0\n88.1\n93.3\n82.8\n44.8\n69.7\n78.6\n64.4\n441.4\n\nWord\nSR\n93.8\n99.6\n99.9\n97.8\n80.6\n94.7\n97.0\n90.7\n565.6\n74.2\n92.4\n96.1\n87.6\n55.5\n79.5\n86.7\n73.9\n484.3\n\nWI\n96.0\n99.8\n99.9\n98.6\n85.0\n96.9\n98.5\n93.4\n576.1\n78.1\n94.0\n97.1\n89.7\n60.1\n83.2\n89.6\n77.6\n502.1\n\nWS\n94.8\n99.6\n100.0\n98.1\n83.6\n96.5\n98.4\n92.8\n572.9\n75.9\n93.2\n96.6\n88.6\n58.1\n82.0\n88.9\n76.3\n494.6\n\nWD\n95.1\n99.8\n100.0\n98.3\n83.8\n96.7\n98.5\n93.0\n573.8\n77.3\n93.9\n97.0\n89.4\n59.2\n82.7\n89.5\n77.1\n499.7\n\nIP\n97.3\n99.9\n100.0\n99.0\n87.2\n97.5\n98.9\n94.5\n580.7\n81.8\n95.4\n97.8\n91.7\n63.9\n85.6\n91.3\n80.3\n515.8\n\nSentence\nFormal\n96.5\n99.9\n100.0\n98.8\n86.7\n97.1\n98.8\n94.2\n579.0\n81.7\n95.2\n97.6\n91.5\n63.5\n85.3\n91.2\n80.0\n514.4\n\nCasual\n96.8\n100.0\n100.0\n98.9\n86.0\n97.1\n98.7\n93.9\n578.6\n81.3\n95.0\n97.7\n91.3\n63.4\n85.1\n91.1\n79.8\n513.6\n\nPassive\n96.8\n99.8\n99.9\n98.8\n83.3\n96.5\n98.2\n92.7\n574.5\n80.5\n94.7\n97.3\n90.8\n61.7\n83.8\n90.2\n78.6\n508.1\n\nActive\n97.1\n99.9\n100.0\n99.0\n86.6\n97.2\n98.7\n94.2\n579.6\n81.6\n95.2\n97.7\n91.5\n64.0\n85.5\n91.3\n80.3\n515.4\n\nBack_trans\n96.0\n99.9\n100.0\n98.6\n84.5\n96.1\n98.2\n92.9\n574.7\n79.9\n94.2\n97.0\n90.4\n61.0\n82.9\n89.3\n77.8\n504.3\n\n

\n
\n
", + "capture": "Table 35: BLIP text perturbation performance comparison of Fine-tuned (FT) image-text retrieval on Flickr30K and COCO datasets (results are averaged on five perturbation levels)." + }, + "36": { + "table_html": "
\n
Table 36: ALBEF text perturbation performance comparison of Fine-tuned (FT) image-text retrieval on Flickr30K and COCO datasets (results are averaged over the five perturbation levels).
\n
\n

\n\n\n\n\nMethod\nFlickr30K (1K)\nMSCOCO (5K)\n\n\nText Retrieval\nImage Retrieval\n\nText Retrieval\nImage Retrieval\n\n\n\nR@1\nR@5\nR@10\nMean\nR@1\nR@5\nR@10\nMean\nRSUM\nR@1\nR@5\nR@10\nMean\nR@1\nR@5\nR@10\nMean\nRSUM\n\nCharacter\nKeyboard\n82.1\n96.0\n98.5\n92.2\n59.7\n82.1\n87.7\n76.5\n506.2\n57.9\n82.6\n89.6\n76.7\n38.0\n63.4\n73.0\n58.1\n404.5\n\nOcr\n91.3\n99.2\n99.6\n96.7\n74.6\n92.1\n95.1\n87.3\n552.0\n69.3\n89.9\n94.8\n84.7\n49.5\n74.9\n83.3\n69.2\n461.7\n\nCI\n84.4\n97.2\n98.6\n93.4\n62.5\n84.2\n89.2\n78.6\n516.2\n60.8\n84.7\n91.0\n78.8\n40.6\n66.2\n75.6\n60.8\n418.9\n\nCR\n82.1\n95.9\n98.4\n92.1\n59.9\n81.6\n87.2\n76.2\n505.0\n58.3\n82.9\n89.9\n77.0\n38.3\n63.6\n73.1\n58.3\n406.1\n\nCS\n82.9\n96.8\n98.8\n92.8\n61.6\n83.2\n88.4\n77.7\n511.7\n59.9\n84.1\n90.8\n78.3\n39.8\n65.3\n74.8\n60.0\n414.7\n\nCD\n83.6\n96.7\n98.5\n92.9\n61.9\n83.6\n88.7\n78.1\n513.0\n60.0\n84.1\n90.8\n78.3\n39.9\n65.7\n75.1\n60.2\n415.5\n\nWord\nSR\n92.9\n99.2\n99.8\n97.3\n78.7\n94.5\n96.8\n90.0\n561.9\n70.1\n90.6\n95.1\n85.3\n52.4\n77.7\n85.5\n71.9\n471.4\n\nWI\n94.3\n99.6\n99.9\n97.9\n82.9\n96.6\n98.3\n92.6\n571.6\n73.2\n92.4\n96.3\n87.3\n56.8\n81.6\n88.7\n75.7\n488.9\n\nWS\n93.3\n99.4\n99.9\n97.6\n81.5\n96.3\n98.1\n92.0\n568.6\n72.0\n91.8\n96.1\n86.6\n55.1\n80.6\n88.2\n74.6\n483.7\n\nWD\n93.4\n99.5\n99.9\n97.6\n82.2\n96.5\n98.3\n92.4\n570.0\n72.9\n92.1\n96.1\n87.0\n55.7\n81.1\n88.5\n75.1\n486.3\n\nIP\n95.9\n99.8\n100.0\n98.6\n85.5\n97.5\n98.9\n94.0\n577.7\n77.6\n94.3\n97.2\n89.7\n60.7\n84.3\n90.5\n78.5\n504.5\n\nSentence\nFormal\n95.4\n99.7\n99.9\n98.3\n85.2\n97.3\n98.7\n93.7\n576.2\n77.6\n94.1\n97.0\n89.6\n60.2\n83.9\n90.3\n78.1\n503.1\n\nCasual\n95.1\n99.7\n100.0\n98.3\n84.6\n97.1\n98.5\n93.4\n575.0\n77.1\n94.1\n97.4\n89.5\n59.7\n83.6\n90.1\n77.8\n502.0\n\nPassive\n94.6\n99.4\n100.0\n98.0\n81.5\n96.1\n98.0\n91.8\n569.5\n76.1\n93.4\n96.7\n88.7\n58.4\n82.6\n89.2\n76.7\n496.4\n\nActive\n95.6\n99.8\n100.0\n98.5\n85.0\n97.3\n98.7\n93.7\n576.4\n77.5\n94.2\n97.1\n89.6\n60.4\n84.2\n90.3\n78.3\n503.7\n\nBack_trans\n95.9\n99.7\n99.9\n98.5\n83.0\n96.1\n98.0\n92.3\n572.5\n75.2\n93.0\n96.4\n88.2\n57.4\n81.0\n88.3\n75.6\n491.3\n\n

\n
\n
", + "capture": "Table 36: ALBEF text perturbation performance comparison of Fine-tuned (FT) image-text retrieval on Flickr30K and COCO datasets (results are averaged on five perturbation levels)." + }, + "37": { + "table_html": "
\n
Table 37: TCL text perturbation performance comparison of Zero-Shot (ZS) image-text retrieval on Flickr30K and COCO datasets (results are averaged on five perturbation levels).
\n
\n

\n\n\n\n\nMethod\nFlickr30K (1K)\nMSCOCO (5K)\n\n\nText Retrieval\nImage Retrieval\n\nText Retrieval\nImage Retrieval\n\n\n\nR@1\nR@5\nR@10\nMean\nR@1\nR@5\nR@10\nMean\nRSUM\nR@1\nR@5\nR@10\nMean\nR@1\nR@5\nR@10\nMean\nRSUM\n\nCharacter\nKeyboard\n63.8\n87.2\n92.7\n81.2\n44.1\n68.8\n76.7\n63.2\n433.3\n49.6\n76.1\n84.9\n70.2\n32.3\n57.2\n67.8\n52.4\n368.0\n\nOcr\n78.2\n94.8\n97.9\n90.3\n58.8\n82.1\n88.1\n76.3\n499.9\n61.4\n85.1\n91.6\n79.4\n42.6\n69.0\n78.7\n63.4\n428.4\n\nCI\n67.3\n88.0\n93.4\n82.9\n45.9\n70.5\n78.3\n64.9\n443.3\n51.9\n78.5\n86.7\n72.4\n34.1\n59.8\n70.3\n54.7\n381.3\n\nCR\n63.1\n85.9\n91.4\n80.1\n43.8\n68.1\n76.1\n62.7\n428.4\n49.7\n76.1\n85.1\n70.3\n32.2\n57.4\n67.9\n52.5\n368.4\n\nCS\n66.5\n88.6\n93.8\n83.0\n46.3\n70.8\n78.5\n65.2\n444.4\n52.6\n78.5\n87.0\n72.7\n34.0\n59.7\n70.1\n54.6\n382.0\n\nCD\n66.7\n89.4\n94.2\n83.4\n47.2\n71.9\n79.4\n66.2\n448.9\n52.6\n78.8\n86.9\n72.8\n34.3\n60.2\n70.6\n55.0\n383.4\n\nWord\nSR\n78.3\n95.3\n97.9\n90.5\n63.2\n86.0\n91.1\n80.1\n511.9\n62.1\n85.7\n91.9\n79.9\n45.8\n72.3\n81.5\n66.5\n439.3\n\nWI\n80.0\n96.3\n98.5\n91.6\n67.0\n88.6\n93.4\n83.0\n523.8\n63.3\n86.8\n93.0\n81.0\n49.5\n76.1\n84.7\n70.1\n453.4\n\nWS\n80.4\n95.9\n98.4\n91.6\n64.8\n87.2\n92.4\n81.5\n519.1\n63.2\n86.5\n92.7\n80.8\n46.5\n73.8\n83.0\n67.8\n445.7\n\nWD\n83.6\n97.1\n98.8\n93.1\n67.0\n89.0\n93.4\n83.1\n528.8\n65.3\n87.2\n93.1\n81.9\n47.6\n74.4\n83.3\n68.4\n450.9\n\nIP\n89.4\n98.6\n99.6\n95.9\n73.4\n92.2\n95.5\n87.0\n548.6\n71.4\n90.8\n95.4\n85.9\n53.5\n79.0\n87.1\n73.2\n477.2\n\nSentence\nFormal\n88.0\n98.0\n99.8\n95.3\n72.0\n91.6\n95.1\n86.2\n544.4\n70.8\n90.6\n95.2\n85.5\n52.9\n78.4\n86.5\n72.6\n474.4\n\nCasual\n87.2\n98.3\n99.5\n95.0\n71.4\n91.2\n94.8\n85.8\n542.4\n69.9\n90.2\n94.9\n85.0\n52.3\n78.1\n86.4\n72.3\n471.8\n\nPassive\n84.5\n97.1\n99.4\n93.7\n67.6\n88.6\n92.9\n83.0\n530.1\n68.6\n89.1\n94.4\n84.0\n50.5\n76.9\n85.2\n70.9\n464.7\n\nActive\n89.3\n98.3\n99.9\n95.8\n72.9\n91.5\n95.1\n86.5\n547.1\n70.9\n90.6\n95.3\n85.6\n53.1\n78.9\n86.9\n73.0\n475.7\n\nBack_trans\n86.0\n97.6\n99.4\n94.3\n69.4\n89.8\n93.6\n84.3\n535.8\n68.5\n89.2\n94.2\n83.9\n50.3\n75.9\n84.1\n70.1\n462.0\n\n

\n
\n
", + "capture": "Table 37: TCL text perturbation performance comparison of Zero-Shot (ZS) image-text retrieval on Flickr30K and COCO datasets (results are averaged on five perturbation levels)." + }, + "38": { + "table_html": "
\n
Table 38: TCL text perturbation performance comparison of Fine-tuned (FT) image-text retrieval on Flickr30K and COCO datasets (results are averaged on five perturbation levels).
\n
\n

\n\n\n\n\nMethod\nFlickr30K (1K)\nMSCOCO (5K)\n\n\nText Retrieval\nImage Retrieval\n\nText Retrieval\nImage Retrieval\n\n\n\nR@1\nR@5\nR@10\nMean\nR@1\nR@5\nR@10\nMean\nRSUM\nR@1\nR@5\nR@10\nMean\nR@1\nR@5\nR@10\nMean\nRSUM\n\nCharacter\nKeyboard\n79.7\n95.2\n97.9\n90.9\n57.0\n79.1\n85.4\n73.8\n494.3\n55.8\n81.3\n88.8\n75.3\n36.9\n62.5\n72.4\n57.3\n397.8\n\nOcr\n90.0\n99.1\n99.7\n96.3\n71.7\n90.4\n94.0\n85.4\n545.0\n67.6\n88.9\n94.0\n83.5\n48.0\n73.9\n82.6\n68.2\n455.1\n\nCI\n82.2\n96.2\n98.3\n92.2\n59.6\n81.4\n87.2\n76.1\n504.9\n58.5\n83.5\n90.4\n77.5\n39.3\n65.3\n75.0\n59.8\n412.0\n\nCR\n79.3\n94.8\n97.8\n90.7\n56.7\n79.1\n85.0\n73.6\n492.8\n55.6\n81.5\n89.0\n75.4\n37.2\n62.7\n72.5\n57.5\n398.5\n\nCS\n80.7\n96.0\n98.2\n91.6\n59.0\n81.2\n86.8\n75.7\n501.9\n57.6\n82.9\n90.2\n76.9\n38.7\n64.8\n74.6\n59.4\n408.8\n\nCD\n81.4\n95.7\n98.3\n91.8\n59.1\n81.2\n86.7\n75.7\n502.4\n58.1\n83.0\n90.0\n77.0\n39.2\n65.3\n75.0\n59.8\n410.5\n\nWord\nSR\n91.0\n99.1\n99.7\n96.6\n76.1\n93.0\n95.8\n88.3\n554.7\n67.8\n89.1\n94.2\n83.7\n51.0\n76.8\n84.8\n70.8\n463.7\n\nWI\n93.4\n99.4\n99.8\n97.5\n80.5\n95.5\n97.7\n91.2\n566.4\n70.8\n91.0\n95.6\n85.8\n55.3\n80.6\n88.0\n74.6\n481.3\n\nWS\n91.0\n99.1\n99.6\n96.6\n78.2\n94.7\n97.4\n90.1\n560.0\n69.2\n90.3\n94.9\n84.8\n52.3\n78.5\n86.6\n72.5\n471.8\n\nWD\n92.6\n99.4\n99.8\n97.3\n79.5\n95.3\n97.6\n90.8\n564.2\n70.8\n90.7\n95.5\n85.7\n53.7\n79.7\n87.3\n73.6\n477.7\n\nIP\n94.9\n99.5\n99.8\n98.1\n84.0\n96.7\n98.5\n93.1\n573.4\n75.6\n92.8\n96.7\n88.3\n59.0\n83.2\n89.9\n77.3\n497.1\n\nSentence\nFormal\n94.4\n99.4\n99.8\n97.9\n83.2\n96.5\n98.3\n92.6\n571.5\n75.3\n92.4\n96.7\n88.1\n58.2\n82.7\n89.5\n76.8\n494.6\n\nCasual\n94.0\n99.5\n99.9\n97.8\n82.1\n96.0\n98.0\n92.1\n569.6\n74.6\n92.1\n96.5\n87.8\n57.9\n82.5\n89.4\n76.6\n493.0\n\nPassive\n92.7\n99.1\n99.8\n97.2\n79.5\n94.5\n97.1\n90.4\n562.8\n73.5\n91.9\n96.1\n87.2\n56.3\n81.3\n88.3\n75.3\n487.3\n\nActive\n94.8\n99.5\n99.8\n98.0\n83.5\n96.4\n98.2\n92.7\n572.1\n75.4\n92.7\n96.6\n88.2\n58.7\n83.0\n89.7\n77.1\n496.0\n\nBack_trans\n93.9\n99.5\n99.9\n97.8\n80.6\n95.3\n97.3\n91.1\n566.5\n72.7\n91.6\n96.0\n86.8\n55.5\n80.3\n87.3\n74.4\n483.5\n\n

\n
\n
", + "capture": "Table 38: TCL text perturbation performance comparison of Fine-tuned (FT) image-text retrieval on Flickr30K and COCO datasets (results are averaged on five perturbation levels)." + } + }, + "image_paths": { + "1": { + "figure_path": "2212.08044v3_figure_1.png", + "caption": "Figure 1: Multimodal models are sensitive to image/text perturbations (original image-text pairs are shown in blue boxes, perturbed ones are in red).\nImage captioning (Top): Adding image perturbations can result in incorrect captions, e.g., the tabby kitten is mistakenly described as a woman/dog.\nText-to-image generation (bottom): Applying text perturbations can result in the generated images containing incomplete visual information, e.g., the tree is missing in the two examples above.", + "url": "http://arxiv.org/html/2212.08044v3/extracted/5356616/figs_arxiv/example_cvpr_v2.png" + }, + "2": { + "figure_path": "2212.08044v3_figure_2.png", + "caption": "Figure 2: Examples of our 17 image perturbations. The original image is taken from the COCO dataset and shown on the top left.", + "url": "http://arxiv.org/html/2212.08044v3/extracted/5356616/figs_arxiv/IP_example.png" + }, + "3(a)": { + "figure_path": "2212.08044v3_figure_3(a).png", + "caption": "Figure 3: Optimal Transport (OT) alignment visualization between text and perturbed images, where pixelate and zoom blur are two high-effective image perturbation methods, brightness and glass blur are two low-effective ones.", + "url": "http://arxiv.org/html/2212.08044v3/x1.png" + }, + "3(b)": { + "figure_path": "2212.08044v3_figure_3(b).png", + "caption": "Figure 3: Optimal Transport (OT) alignment visualization between text and perturbed images, where pixelate and zoom blur are two high-effective image perturbation methods, brightness and glass blur are two low-effective ones.", + "url": "http://arxiv.org/html/2212.08044v3/x2.png" + }, + "3(c)": { + "figure_path": "2212.08044v3_figure_3(c).png", + "caption": "Figure 3: Optimal Transport (OT) alignment visualization between text and perturbed images, where pixelate and zoom blur are two high-effective image perturbation methods, brightness and glass blur are two low-effective ones.", + "url": "http://arxiv.org/html/2212.08044v3/x3.png" + }, + "3(d)": { + "figure_path": "2212.08044v3_figure_3(d).png", + "caption": "Figure 3: Optimal Transport (OT) alignment visualization between text and perturbed images, where pixelate and zoom blur are two high-effective image perturbation methods, brightness and glass blur are two low-effective ones.", + "url": "http://arxiv.org/html/2212.08044v3/x4.png" + }, + "4": { + "figure_path": "2212.08044v3_figure_4.png", + "caption": "Figure 5: (a) Image captioning results of BLIP; (b) Image captioning results of GRIT; (c) Grad-CAM visualizations on the cross-attention maps corresponding to individual words under image perturbations, where zoom blur and pixelate perturbed images show worse word-image attention alignment than the brightness perturbed image. 
For example, in zoom blur and pixelate, the \u201cdoor\u201d and \u201cglasses\u201d words\u2019 attention maps are not matched with the correct image patches, while in brightness, all words\u2019 attention maps match correctly.", + "url": "http://arxiv.org/html/2212.08044v3/extracted/5356616/figs_arxiv/captioning_all.png" + }, + "5": { + "figure_path": "2212.08044v3_figure_5.png", + "caption": "Figure 6: Text-to-image generation results of Stable Diffusion in terms of (a) FID scores and (b) CLIP-FID scores.\nSince lower scores are better for both metrics, a higher bar indicates that the model is less robust to a particular perturbation. (c) Grad-CAM visualizations on the cross-attention maps corresponding to perturbed captions and images generated by perturbed captions. We use the original unperturbed word query to visualize the attention map.\nIn keyboard, the hydrant is missing; in word deletion, the color of the hydrant is incorrect, but no object is missing; in casual, the attention map perfectly matches the generated images, which shows that character-level perturbations could be more effective than word-level and sentence-level perturbations.", + "url": "http://arxiv.org/html/2212.08044v3/extracted/5356616/figs_arxiv/generation_all.png" + }, + "6": { + "figure_path": "2212.08044v3_figure_6.png", + "caption": "Figure 7: Left: Missing Object Rate (MOR) metric calculation. Right: Comparison of detection results between GT-caption-generated images (top) and perturbed-caption-generated images (bottom).", + "url": "http://arxiv.org/html/2212.08044v3/x5.png" + }, + "7": { + "figure_path": "2212.08044v3_figure_7.png", + "caption": "Figure 8: Examples of our 17 image perturbations. The original image is taken from the COCO dataset and shown on the top left.", + "url": "http://arxiv.org/html/2212.08044v3/extracted/5356616/figs_arxiv/IP_example.png" + }, + "8": { + "figure_path": "2212.08044v3_figure_8.png", + "caption": "Figure 9: Image captioning results of BLIP.", + "url": "http://arxiv.org/html/2212.08044v3/extracted/5356616/figs_arxiv/BLIP_plot.png" + }, + "9": { + "figure_path": "2212.08044v3_figure_9.png", + "caption": "Figure 10: Image captioning results of GRIT.", + "url": "http://arxiv.org/html/2212.08044v3/extracted/5356616/figs_arxiv/GRIT_plot.png" + }, + "10": { + "figure_path": "2212.08044v3_figure_10.png", + "caption": "Figure 11: Examples of image captioning results of BLIP under image perturbations.", + "url": "http://arxiv.org/html/2212.08044v3/extracted/5356616/figs_arxiv/example_caption_blip.png" + }, + "11": { + "figure_path": "2212.08044v3_figure_11.png", + "caption": "Figure 12: Examples of image captioning results of GRIT under image perturbations.", + "url": "http://arxiv.org/html/2212.08044v3/extracted/5356616/figs_arxiv/example_caption_grit.png" + }, + "12(a)": { + "figure_path": "2212.08044v3_figure_12(a).png", + "caption": "Figure 13: Optimal Transport (OT) alignment visualization between text and images under image perturbations (1/2).", + "url": "http://arxiv.org/html/2212.08044v3/x6.png" + }, + "12(b)": { + "figure_path": "2212.08044v3_figure_12(b).png", + "caption": "Figure 13: Optimal Transport (OT) alignment visualization between text and images under image perturbations (1/2).", + "url": "http://arxiv.org/html/2212.08044v3/x7.png" + }, + "12(c)": { + "figure_path": "2212.08044v3_figure_12(c).png", + "caption": "Figure 13: Optimal Transport (OT) alignment visualization between text and images under image perturbations (1/2).", + "url": 
"http://arxiv.org/html/2212.08044v3/x8.png" + }, + "13(a)": { + "figure_path": "2212.08044v3_figure_13(a).png", + "caption": "Figure 14: Optimal Transport (OT) alignment visualization between text and images under image perturbations (2/2).", + "url": "http://arxiv.org/html/2212.08044v3/x9.png" + }, + "13(b)": { + "figure_path": "2212.08044v3_figure_13(b).png", + "caption": "Figure 14: Optimal Transport (OT) alignment visualization between text and images under image perturbations (2/2).", + "url": "http://arxiv.org/html/2212.08044v3/x10.png" + }, + "13(c)": { + "figure_path": "2212.08044v3_figure_13(c).png", + "caption": "Figure 14: Optimal Transport (OT) alignment visualization between text and images under image perturbations (2/2).", + "url": "http://arxiv.org/html/2212.08044v3/x11.png" + }, + "14(a)": { + "figure_path": "2212.08044v3_figure_14(a).png", + "caption": "Figure 15: Optimal Transport (OT) alignment visualization between text and images under text perturbations (1/2).", + "url": "http://arxiv.org/html/2212.08044v3/x12.png" + }, + "14(b)": { + "figure_path": "2212.08044v3_figure_14(b).png", + "caption": "Figure 15: Optimal Transport (OT) alignment visualization between text and images under text perturbations (1/2).", + "url": "http://arxiv.org/html/2212.08044v3/x13.png" + }, + "14(c)": { + "figure_path": "2212.08044v3_figure_14(c).png", + "caption": "Figure 15: Optimal Transport (OT) alignment visualization between text and images under text perturbations (1/2).", + "url": "http://arxiv.org/html/2212.08044v3/x14.png" + }, + "15(a)": { + "figure_path": "2212.08044v3_figure_15(a).png", + "caption": "Figure 16: Optimal Transport (OT) alignment visualization between text and images under text perturbations (2/2).", + "url": "http://arxiv.org/html/2212.08044v3/x15.png" + }, + "15(b)": { + "figure_path": "2212.08044v3_figure_15(b).png", + "caption": "Figure 16: Optimal Transport (OT) alignment visualization between text and images under text perturbations (2/2).", + "url": "http://arxiv.org/html/2212.08044v3/x16.png" + }, + "15(c)": { + "figure_path": "2212.08044v3_figure_15(c).png", + "caption": "Figure 16: Optimal Transport (OT) alignment visualization between text and images under text perturbations (2/2).", + "url": "http://arxiv.org/html/2212.08044v3/x17.png" + }, + "16(a)": { + "figure_path": "2212.08044v3_figure_16(a).png", + "caption": "Figure 17: Grad-CAM visualizations on the cross-attention maps corresponding to individual words with image perturbations (1/2).", + "url": "http://arxiv.org/html/2212.08044v3/x18.png" + }, + "16(b)": { + "figure_path": "2212.08044v3_figure_16(b).png", + "caption": "Figure 17: Grad-CAM visualizations on the cross-attention maps corresponding to individual words with image perturbations (1/2).", + "url": "http://arxiv.org/html/2212.08044v3/x19.png" + }, + "16(c)": { + "figure_path": "2212.08044v3_figure_16(c).png", + "caption": "Figure 17: Grad-CAM visualizations on the cross-attention maps corresponding to individual words with image perturbations (1/2).", + "url": "http://arxiv.org/html/2212.08044v3/x20.png" + }, + "17(a)": { + "figure_path": "2212.08044v3_figure_17(a).png", + "caption": "Figure 18: Grad-CAM visualizations on the cross-attention maps\ncorresponding to individual words with image perturbations (2/2).", + "url": "http://arxiv.org/html/2212.08044v3/x21.png" + }, + "17(b)": { + "figure_path": "2212.08044v3_figure_17(b).png", + "caption": "Figure 18: Grad-CAM visualizations on the cross-attention maps\ncorresponding to 
individual words with image perturbations (2/2).", + "url": "http://arxiv.org/html/2212.08044v3/x22.png" + }, + "17(c)": { + "figure_path": "2212.08044v3_figure_17(c).png", + "caption": "Figure 18: Grad-CAM visualizations on the cross-attention maps\ncorresponding to individual words with image perturbations (2/2).", + "url": "http://arxiv.org/html/2212.08044v3/x23.png" + }, + "18(a)": { + "figure_path": "2212.08044v3_figure_18(a).png", + "caption": "Figure 19: Text-to-image generation Grad-CAM visualizations on the cross-attention maps corresponding to individual words with text perturbations (1/2).", + "url": "http://arxiv.org/html/2212.08044v3/x24.png" + }, + "18(b)": { + "figure_path": "2212.08044v3_figure_18(b).png", + "caption": "Figure 19: Text-to-image generation Grad-CAM visualizations on the cross-attention maps corresponding to individual words with text perturbations (1/2).", + "url": "http://arxiv.org/html/2212.08044v3/x25.png" + }, + "18(c)": { + "figure_path": "2212.08044v3_figure_18(c).png", + "caption": "Figure 19: Text-to-image generation Grad-CAM visualizations on the cross-attention maps corresponding to individual words with text perturbations (1/2).", + "url": "http://arxiv.org/html/2212.08044v3/x26.png" + }, + "19(a)": { + "figure_path": "2212.08044v3_figure_19(a).png", + "caption": "Figure 20: Text-to-image generation Grad-CAM visualizations on the cross-attention maps\ncorresponding to individual words with text perturbations (2/2).", + "url": "http://arxiv.org/html/2212.08044v3/x27.png" + }, + "19(b)": { + "figure_path": "2212.08044v3_figure_19(b).png", + "caption": "Figure 20: Text-to-image generation Grad-CAM visualizations on the cross-attention maps\ncorresponding to individual words with text perturbations (2/2).", + "url": "http://arxiv.org/html/2212.08044v3/x28.png" + }, + "19(c)": { + "figure_path": "2212.08044v3_figure_19(c).png", + "caption": "Figure 20: Text-to-image generation Grad-CAM visualizations on the cross-attention maps\ncorresponding to individual words with text perturbations (2/2).", + "url": "http://arxiv.org/html/2212.08044v3/x29.png" + }, + "20(a)": { + "figure_path": "2212.08044v3_figure_20(a).png", + "caption": "Figure 21: Text-to-image generation comparison on all 16 generated images. We find that although the generated images are not guaranteed to perfectly depict every notion described in the captions, the probability of generating a matching image is higher for the unperturbed captions than for the perturbed ones, especially under character-level perturbations.", + "url": "http://arxiv.org/html/2212.08044v3/x30.png" + }, + "20(b)": { + "figure_path": "2212.08044v3_figure_20(b).png", + "caption": "Figure 21: Text-to-image generation comparison on all 16 generated images. We find that although the generated images are not guaranteed to perfectly depict every notion described in the captions, the probability of generating a matching image is higher for the unperturbed captions than for the perturbed ones, especially under character-level perturbations.", + "url": "http://arxiv.org/html/2212.08044v3/x31.png" + }, + "20(c)": { + "figure_path": "2212.08044v3_figure_20(c).png", + "caption": "Figure 21: Text-to-image generation comparison on all 16 generated images. 
We find that although the generated images are not guaranteed to perfectly depict every notion described in the captions, the probability of generating a matching image is higher for the unperturbed captions than for the perturbed ones, especially under character-level perturbations.", + "url": "http://arxiv.org/html/2212.08044v3/x32.png" + }, + "21": { + "figure_path": "2212.08044v3_figure_21.png", + "caption": "Figure 22: Image-text retrieval results on Flickr30K-IP.", + "url": "http://arxiv.org/html/2212.08044v3/extracted/5356616/figs_arxiv/plots/Flickr30K_IP_plot.png" + }, + "22": { + "figure_path": "2212.08044v3_figure_22.png", + "caption": "Figure 23: Image-text retrieval results on COCO-IP.", + "url": "http://arxiv.org/html/2212.08044v3/extracted/5356616/figs_arxiv/plots/COCO_IP_plot.png" + }, + "23": { + "figure_path": "2212.08044v3_figure_23.png", + "caption": "Figure 24: Visual reasoning results on NLVR-IP dev set.", + "url": "http://arxiv.org/html/2212.08044v3/extracted/5356616/figs_arxiv/plots/VR_IP_dev_plot.png" + }, + "24": { + "figure_path": "2212.08044v3_figure_24.png", + "caption": "Figure 25: Visual reasoning results on NLVR-IP test set.", + "url": "http://arxiv.org/html/2212.08044v3/extracted/5356616/figs_arxiv/plots/VR_IP_test_plot.png" + }, + "25": { + "figure_path": "2212.08044v3_figure_25.png", + "caption": "Figure 26: Visual entailment results on SNLI-VE-IP val set.", + "url": "http://arxiv.org/html/2212.08044v3/extracted/5356616/figs_arxiv/plots/VE_IP_val_plot.png" + }, + "26": { + "figure_path": "2212.08044v3_figure_26.png", + "caption": "Figure 27: Visual entailment results on SNLI-VE-IP test set.", + "url": "http://arxiv.org/html/2212.08044v3/extracted/5356616/figs_arxiv/plots/VE_IP_test_plot.png" + }, + "27": { + "figure_path": "2212.08044v3_figure_27.png", + "caption": "Figure 28: Image-text retrieval results on Flickr30K-TP.", + "url": "http://arxiv.org/html/2212.08044v3/extracted/5356616/figs_arxiv/plots/Flickr30K_TP_plot.png" + }, + "28": { + "figure_path": "2212.08044v3_figure_28.png", + "caption": "Figure 29: Image-text retrieval results on COCO-TP.", + "url": "http://arxiv.org/html/2212.08044v3/extracted/5356616/figs_arxiv/plots/COCO_TP_plot.png" + }, + "29": { + "figure_path": "2212.08044v3_figure_29.png", + "caption": "Figure 30: Visual reasoning results on NLVR-TP dev set.", + "url": "http://arxiv.org/html/2212.08044v3/extracted/5356616/figs_arxiv/plots/VR_TP_dev_plot.png" + }, + "30": { + "figure_path": "2212.08044v3_figure_30.png", + "caption": "Figure 31: Visual reasoning results on NLVR-TP test set.", + "url": "http://arxiv.org/html/2212.08044v3/extracted/5356616/figs_arxiv/plots/VR_TP_test_plot.png" + }, + "31": { + "figure_path": "2212.08044v3_figure_31.png", + "caption": "Figure 32: Visual entailment results on SNLI-VE-TP val set.", + "url": "http://arxiv.org/html/2212.08044v3/extracted/5356616/figs_arxiv/plots/VE_TP_val_plot.png" + }, + "32": { + "figure_path": "2212.08044v3_figure_32.png", + "caption": "Figure 33: Visual entailment results on SNLI-VE-TP test set.", + "url": "http://arxiv.org/html/2212.08044v3/extracted/5356616/figs_arxiv/plots/VE_TP_test_plot.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Flamingo: a visual language model for few-shot learning.", + "author": "Jean-Baptiste Alayrac et al.", + "venue": "ArXiv, abs/2204.14198, 2022.", + "url": null + } + }, + { + "2": { + "title": "Reveal of vision transformers robustness against adversarial attacks.", + "author": "Ahmed Aldahdooh, Wassim Hamidouche, and 
Olivier D\u00e9forges.", + "venue": "ArXiv, abs/2106.03734, 2021.", + "url": null + } + }, + { + "3": { + "title": "Synthetic and natural noise both break neural machine translation.", + "author": "Yonatan Belinkov and Yonatan Bisk.", + "venue": "ArXiv, abs/1711.02173, 2018.", + "url": null + } + }, + { + "4": { + "title": "Understanding robustness of transformers for image classification.", + "author": "Srinadh Bhojanapalli et al.", + "venue": "ICCV, pages 10211\u201310221, 2021.", + "url": null + } + }, + { + "5": { + "title": "Experience grounds language.", + "author": "Yonatan Bisk, Ari Holtzman, Jesse Thomason, Jacob Andreas, Yoshua Bengio,\nJoyce Yue Chai, Mirella Lapata, Angeliki Lazaridou, Jonathan May, Aleksandr\nNisnevich, Nicolas Pinto, and Joseph P. Turian.", + "venue": "In EMNLP, 2020.", + "url": null + } + }, + { + "6": { + "title": "Language models are few-shot learners.", + "author": "Tom B. Brown et al.", + "venue": "ArXiv, abs/2005.14165, 2020.", + "url": null + } + }, + { + "7": { + "title": "Image-text retrieval: A survey on recent research and development.", + "author": "Min Cao, Shiping Li, Juntao Li, Liqiang Nie, and Min Zhang.", + "venue": "ArXiv, abs/2203.14713, 2022.", + "url": null + } + }, + { + "8": { + "title": "Robustness and adversarial examples in natural language processing.", + "author": "Kai-Wei Chang, He He, Robin Jia, and Sameer Singh.", + "venue": "EMNLP: Tutorial Abstracts, 2021.", + "url": null + } + }, + { + "9": { + "title": "Pali: A jointly-scaled multilingual language-image model.", + "author": "Xi Chen et al.", + "venue": "ArXiv, abs/2209.06794, 2022.", + "url": null + } + }, + { + "10": { + "title": "Uniter: Universal image-text representation learning.", + "author": "Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan,\nYu Cheng, and Jingjing Liu.", + "venue": "In ECCV, 2020.", + "url": null + } + }, + { + "11": { + "title": "Dall-eval: Probing the reasoning skills and social biases of\ntext-to-image generative transformers.", + "author": "Jaemin Cho, Abhaysinh Zala, and Mohit Bansal.", + "venue": "ArXiv, abs/2202.04053, 2022.", + "url": null + } + }, + { + "12": { + "title": "Palm: Scaling language modeling with pathways.", + "author": "Aakanksha Chowdhery et al.", + "venue": "ArXiv, abs/2204.02311, 2022.", + "url": null + } + }, + { + "13": { + "title": "Robustbench: a standardized adversarial robustness benchmark.", + "author": "Francesco Croce, Maksym Andriushchenko, Vikash Sehwag, Edoardo Debenedetti,\nNicolas Flammarion, Mung Chiang, Prateek Mittal, and Matthias Hein.", + "venue": "arXiv preprint arXiv:2010.09670, 2020.", + "url": null + } + }, + { + "14": { + "title": "Discovering the hidden vocabulary of dalle-2.", + "author": "Giannis Daras and Alexandros G Dimakis.", + "venue": "arXiv preprint arXiv:2206.00169, 2022.", + "url": null + } + }, + { + "15": { + "title": "Meteor universal: Language specific translation evaluation for any\ntarget language.", + "author": "Michael J. 
Denkowski and Alon Lavie.", + "venue": "In WMT@ACL, 2014.", + "url": null + } + }, + { + "16": { + "title": "On robustness and transferability of convolutional neural networks.", + "author": "Josip Djolonga et al.", + "venue": "CVPR, 2021.", + "url": null + } + }, + { + "17": { + "title": "Towards robustness against natural language word substitutions.", + "author": "Xinshuai Dong, Anh Tuan Luu, Rongrong Ji, and Hong Liu.", + "venue": "ArXiv, abs/2107.13541, 2021.", + "url": null + } + }, + { + "18": { + "title": "Boosting adversarial attacks with momentum.", + "author": "Yinpeng Dong, Fangzhou Liao, Tianyu Pang, Hang Su, Jun Zhu, Xiaolin Hu, and\nJianguo Li.", + "venue": "2018 IEEE/CVF Conference on Computer Vision and Pattern\nRecognition, pages 9185\u20139193, 2017.", + "url": null + } + }, + { + "19": { + "title": "An image is worth 16x16 words: Transformers for image recognition at\nscale.", + "author": "Alexey Dosovitskiy et al.", + "venue": "ICLR, 2021.", + "url": null + } + }, + { + "20": { + "title": "An empirical study of training end-to-end vision-and-language\ntransformers.", + "author": "Zi-Yi Dou, Yichong Xu, Zhe Gan, Jianfeng Wang, Shuohang Wang, Lijuan Wang,\nChenguang Zhu, Nanyun Peng, Zicheng Liu, and Michael Zeng.", + "venue": "ArXiv, abs/2111.02387, 2021.", + "url": null + } + }, + { + "21": { + "title": "Robustness in deep learning for computer vision: Mind the gap?", + "author": "Nathan G. Drenkow, Numair Sani, Ilya Shpitser, and M. Unberath.", + "venue": "ArXiv, abs/2112.00639, 2021.", + "url": null + } + }, + { + "22": { + "title": "On adversarial examples for character-level neural machine\ntranslation.", + "author": "J. Ebrahimi, Daniel Lowd, and Dejing Dou.", + "venue": "In COLING, 2018.", + "url": null + } + }, + { + "23": { + "title": "Multimodal automl for image, text and tabular data.", + "author": "Nick Erickson, Xingjian Shi, James Sharpnack, and Alexander J. Smola.", + "venue": "ACM SIGKDD, 2022.", + "url": null + } + }, + { + "24": { + "title": "Formality style transfer for noisy, user-generated conversations:\nExtracting labeled, parallel data from unlabeled corpora.", + "author": "Isak Czeresnia Etinger and Alan W. Black.", + "venue": "In EMNLP, 2019.", + "url": null + } + }, + { + "25": { + "title": "Data determines distributional robustness in contrastive language\nimage pre-training (clip).", + "author": "Alexander W. Fang, Gabriel Ilharco, Mitchell Wortsman, Yuhao Wan, Vaishaal\nShankar, Achal Dave, and Ludwig Schmidt.", + "venue": "2022.", + "url": null + } + }, + { + "26": { + "title": "Pixels still beat text: Attacking the openai clip model with text\npatches and adversarial pixel perturbations.", + "author": "Stanislav Fort.", + "venue": "2021.", + "url": null + } + }, + { + "27": { + "title": "Understanding clip robustness.", + "author": "Yuri Galindo and Fabio A. 
Faria.", + "venue": "2021.", + "url": null + } + }, + { + "28": { + "title": "Large-scale adversarial training for vision-and-language\nrepresentation learning.", + "author": "Zhe Gan, Yen-Chun Chen, Linjie Li, Chen Zhu, Yu Cheng, and Jingjing Liu.", + "venue": "ArXiv, abs/2006.06195, 2020.", + "url": null + } + }, + { + "29": { + "title": "Imagenet-trained cnns are biased towards texture; increasing shape\nbias improves accuracy and robustness.", + "author": "Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix\nWichmann, and Wieland Brendel.", + "venue": "ArXiv, abs/1811.12231, 2019.", + "url": null + } + }, + { + "30": { + "title": "Robustness gym: Unifying the nlp evaluation landscape.", + "author": "Karan Goel, Nazneen Rajani, Jesse Vig, Samson Tan, Jason M. Wu, Stephan Zheng,\nCaiming Xiong, Mohit Bansal, and Christopher R\u2019e.", + "venue": "In NAACL, 2021.", + "url": null + } + }, + { + "31": { + "title": "Multimodal neurons in artificial neural networks.", + "author": "Gabriel Goh, Nick Cammarata, Chelsea Voss, Shan Carter, Michael Petrov, Ludwig\nSchubert, Alec Radford, and Chris Olah.", + "venue": "Distill, 6(3):e30, 2021.", + "url": null + } + }, + { + "32": { + "title": "Explaining and harnessing adversarial examples.", + "author": "Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy.", + "venue": "CoRR, abs/1412.6572, 2014.", + "url": null + } + }, + { + "33": { + "title": "Vision models are more robust and fair when pretrained on uncurated\nimages without supervision.", + "author": "Priya Goyal, Quentin Duval, Isaac Seessel, Mathilde Caron, Ishan Misra, Levent\nSagun, Armand Joulin, and Piotr Bojanowski.", + "venue": "ArXiv, abs/2202.08360, 2022.", + "url": null + } + }, + { + "34": { + "title": "Textflint: Unified multilingual robustness evaluation toolkit for\nnatural language processing.", + "author": "Tao Gui et al.", + "venue": "In ACL, 2021.", + "url": null + } + }, + { + "35": { + "title": "Grit: General robust image task benchmark.", + "author": "Tanmay Gupta, Ryan Marten, Aniruddha Kembhavi, and Derek Hoiem.", + "venue": "ArXiv, abs/2204.13653, 2022.", + "url": null + } + }, + { + "36": { + "title": "An empirical exploration of cross-domain alignment between language\nand electroencephalogram.", + "author": "William Han, Jielin Qiu, Jiacheng Zhu, Mengdi Xu, Douglas Weber, Bo Li, and\nDing Zhao.", + "venue": "ArXiv, abs/2208.06348, 2022.", + "url": null + } + }, + { + "37": { + "title": "Mixgen: A new multi-modal data augmentation.", + "author": "Xiaoshuai Hao, Yi Zhu, Srikar Appalaraju, Aston Zhang, Wanqian Zhang, Boyang\nLi, and Mu Li.", + "venue": "ArXiv, abs/2206.08358, 2022.", + "url": null + } + }, + { + "38": { + "title": "Benchmarking neural network robustness to common corruptions and\nperturbations.", + "author": "Dan Hendrycks and Thomas G. 
Dietterich.", + "venue": "ArXiv, abs/1903.12261, 2019.", + "url": null + } + }, + { + "39": { + "title": "Pretrained transformers improve out-of-distribution robustness.", + "author": "Dan Hendrycks, Xiaoyuan Liu, Eric Wallace, Adam Dziedzic, Rishabh Krishnan, and\nDawn Xiaodong Song.", + "venue": "In ACL, 2020a.", + "url": null + } + }, + { + "40": { + "title": "Augmix: A simple data processing method to improve robustness and\nuncertainty.", + "author": "Dan Hendrycks, Norman Mu, Ekin Dogus Cubuk, Barret Zoph, Justin Gilmer, and\nBalaji Lakshminarayanan.", + "venue": "ArXiv, abs/1912.02781, 2020b.", + "url": null + } + }, + { + "41": { + "title": "The many faces of robustness: A critical analysis of\nout-of-distribution generalization.", + "author": "Dan Hendrycks, Steven Basart, Norman Mu, Saurav Kadavath, Frank Wang, Evan\nDorundo, Rahul Desai, Tyler Lixuan Zhu, Samyak Parajuli, Mike Guo,\nDawn Xiaodong Song, Jacob Steinhardt, and Justin Gilmer.", + "venue": "ICCV, 2021a.", + "url": null + } + }, + { + "42": { + "title": "Natural adversarial examples.", + "author": "Dan Hendrycks, Kevin Zhao, Steven Basart, Jacob Steinhardt, and Dawn Xiaodong\nSong.", + "venue": "CVPR, 2021b.", + "url": null + } + }, + { + "43": { + "title": "Gans trained by a two time-scale update rule converge to a local nash\nequilibrium.", + "author": "Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp\nHochreiter.", + "venue": "In NIPS, 2017.", + "url": null + } + }, + { + "44": { + "title": "Are you looking? grounding to multiple modalities in\nvision-and-language navigation.", + "author": "Ronghang Hu, Daniel Fried, Anna Rohrbach, Dan Klein, Trevor Darrell, and Kate\nSaenko.", + "venue": "In ACL, 2019.", + "url": null + } + }, + { + "45": { + "title": "Scaling up vision-language pretraining for image captioning.", + "author": "Xiaowei Hu, Zhe Gan, Jianfeng Wang, Zhengyuan Yang, Zicheng Liu, Yumao Lu, and\nLijuan Wang.", + "venue": "CVPR, 2022.", + "url": null + } + }, + { + "46": { + "title": "Arbitrary style transfer in real-time with adaptive instance\nnormalization.", + "author": "Xun Huang and Serge J. Belongie.", + "venue": "ICCV, 2017.", + "url": null + } + }, + { + "47": { + "title": "Transformers are adaptable task planners.", + "author": "Vidhi Jain, Yixin Lin, Eric Undersander, Yonatan Bisk, and Akshara Rai.", + "venue": "ArXiv, abs/2207.02442, 2022.", + "url": null + } + }, + { + "48": { + "title": "Aeda: An easier data augmentation technique for text classification.", + "author": "Akbar Karimi, L. Rossi, and Andrea Prati.", + "venue": "In EMNLP, 2021.", + "url": null + } + }, + { + "49": { + "title": "Vilt: Vision-and-language transformer without convolution or region\nsupervision.", + "author": "Wonjae Kim, Bokyung Son, and Ildoo Kim.", + "venue": "In ICML, 2021.", + "url": null + } + }, + { + "50": { + "title": "The role of imagenet classes in fr\u00e9chet inception distance.", + "author": "Tuomas Kynkaanniemi, Tero Karras, Miika Aittala, Timo Aila, and Jaakko\nLehtinen.", + "venue": "ArXiv, abs/2203.06026, 2022.", + "url": null + } + }, + { + "51": { + "title": "Delete, retrieve, generate: a simple approach to sentiment and style\ntransfer.", + "author": "Juncen Li, Robin Jia, He He, and Percy Liang.", + "venue": "In NAACL, 2018.", + "url": null + } + }, + { + "52": { + "title": "Align before fuse: Vision and language representation learning with\nmomentum distillation.", + "author": "Junnan Li, Ramprasaath R. Selvaraju, Akhilesh Deepak Gotmare, Shafiq R. 
Joty,\nCaiming Xiong, and Steven C. H. Hoi.", + "venue": "In NeurIPS, 2021a.", + "url": null + } + }, + { + "53": { + "title": "Blip: Bootstrapping language-image pre-training for unified\nvision-language understanding and generation.", + "author": "Junnan Li, Dongxu Li, Caiming Xiong, and Steven C. H. Hoi.", + "venue": "In ICML, 2022a.", + "url": null + } + }, + { + "54": { + "title": "Blip-2: Bootstrapping language-image pre-training with frozen image\nencoders and large language models.", + "author": "Junnan Li, Dongxu Li, Silvio Savarese, and Steven C. H. Hoi.", + "venue": "In International Conference on Machine Learning, 2023.", + "url": null + } + }, + { + "55": { + "title": "Adversarial vqa: A new benchmark for evaluating the robustness of vqa\nmodels.", + "author": "Linjie Li, Jie Lei, Zhe Gan, and Jingjing Liu.", + "venue": "ICCV, 2021b.", + "url": null + } + }, + { + "56": { + "title": "Bert-attack: Adversarial attack against bert using bert.", + "author": "Linyang Li, Ruotian Ma, Qipeng Guo, X. Xue, and Xipeng Qiu.", + "venue": "ArXiv, abs/2004.09984, 2020a.", + "url": null + } + }, + { + "57": { + "title": "Grounded language-image pre-training.", + "author": "Liunian Harold Li et al.", + "venue": "CVPR, 2021c.", + "url": null + } + }, + { + "58": { + "title": "Unimo-2: End-to-end unified vision-language grounded learning.", + "author": "Wei Li, Can Gao, Guocheng Niu, Xinyan Xiao, Hao Liu, Jiachen Liu, Hua Wu, and\nHaifeng Wang.", + "venue": "arXiv preprint arXiv:2203.09067, 2022b.", + "url": null + } + }, + { + "59": { + "title": "Oscar: Object-semantics aligned pre-training for vision-language\ntasks.", + "author": "Xiujun Li, Xi Yin, Chunyuan Li, Xiaowei Hu, Pengchuan Zhang, Lei Zhang, Lijuan\nWang, Houdong Hu, Li Dong, Furu Wei, Yejin Choi, and Jianfeng Gao.", + "venue": "In ECCV, 2020b.", + "url": null + } + }, + { + "60": { + "title": "Foundations and recent trends in multimodal machine learning:\nPrinciples, challenges, and open questions.", + "author": "Paul Pu Liang, Amir Zadeh, and Louis-Philippe Morency.", + "venue": "2022.", + "url": null + } + }, + { + "61": { + "title": "Rouge: A package for automatic evaluation of summaries.", + "author": "Chin-Yew Lin.", + "venue": "In ACL, 2004.", + "url": null + } + }, + { + "62": { + "title": "Nesterov accelerated gradient and scale invariance for adversarial\nattacks.", + "author": "Jiadong Lin, Chuanbiao Song, Kun He, Liwei Wang, and John E. Hopcroft.", + "venue": "arXiv: Learning, 2019.", + "url": null + } + }, + { + "63": { + "title": "Microsoft coco: Common objects in context.", + "author": "Tsung-Yi Lin, Michael Maire, Serge J. Belongie, James Hays, Pietro Perona, Deva\nRamanan, Piotr Doll\u00e1r, and C. Lawrence Zitnick.", + "venue": "In ECCV, 2014.", + "url": null + } + }, + { + "64": { + "title": "Visual instruction tuning.", + "author": "Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee.", + "venue": "ArXiv, abs/2304.08485, 2023.", + "url": null + } + }, + { + "65": { + "title": "Detecting textual adversarial examples based on distributional\ncharacteristics of data representations.", + "author": "Na Liu, Mark Dras, and W. Zhang.", + "venue": "In REPL4NLP, 2022a.", + "url": null + } + }, + { + "66": { + "title": "Design guidelines for prompt engineering text-to-image generative\nmodels.", + "author": "Vivian Liu and Lydia B. 
Chilton.", + "venue": "Proceedings of the 2022 CHI Conference on Human Factors in\nComputing Systems, 2022.", + "url": null + } + }, + { + "67": { + "title": "Comparing recognition performance and robustness of multimodal deep\nlearning models for multimodal emotion recognition.", + "author": "Wei Liu, Jie-Lin Qiu, Wei-Long Zheng, and Bao-Liang Lu.", + "venue": "IEEE Transactions on Cognitive and Developmental Systems,\n14:715\u2013729, 2022b.", + "url": null + } + }, + { + "68": { + "title": "An empirical study on distribution shift robustness from the\nperspective of pre-training and data augmentation.", + "author": "Ziquan Liu, Yi Xu, Yuanhong Xu, Qi Qian, Hao Li, Rong Jin, Xiangyang Ji, and\nAntoni B. Chan.", + "venue": "ArXiv, abs/2205.12753, 2022c.", + "url": null + } + }, + { + "69": { + "title": "Unified-io: A unified model for vision, language, and multi-modal\ntasks.", + "author": "Jiasen Lu, Christopher Clark, Rowan Zellers, Roozbeh Mottaghi, and Aniruddha\nKembhavi.", + "venue": "ArXiv, abs/2206.08916, 2022.", + "url": null + } + }, + { + "70": { + "title": "Nlp augmentation.", + "author": "Edward Ma.", + "venue": "2019.", + "url": null + } + }, + { + "71": { + "title": "Learning word vectors for sentiment analysis.", + "author": "Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, A. Ng, and\nChristopher Potts.", + "venue": "In ACL, 2011.", + "url": null + } + }, + { + "72": { + "title": "On the robustness of vision transformers to adversarial examples.", + "author": "Kaleel Mahmood, Rigel Mahmood, and Marten van Dijk.", + "venue": "ICCV, 2021.", + "url": null + } + }, + { + "73": { + "title": "The king is naked: on the notion of robustness for natural language\nprocessing.", + "author": "Emanuele La Malfa and Marta Z. Kwiatkowska.", + "venue": "In AAAI, 2022.", + "url": null + } + }, + { + "74": { + "title": "Generating images from captions with attention.", + "author": "Elman Mansimov, Emilio Parisotto, Jimmy Ba, and Ruslan Salakhutdinov.", + "venue": "CoRR, abs/1511.02793, 2016.", + "url": null + } + }, + { + "75": { + "title": "Towards robust vision transformer.", + "author": "Xiaofeng Mao, Gege Qi, Yuefeng Chen, Xiaodan Li, Ranjie Duan, Shaokai Ye, Yuan\nHe, and Hui Xue.", + "venue": "ArXiv, abs/2105.07926, 2021.", + "url": null + } + }, + { + "76": { + "title": "Benchmarking robustness in object detection: Autonomous driving when\nwinter is coming.", + "author": "Claudio Michaelis et al.", + "venue": "arXiv preprint arXiv:1907.07484, 2019.", + "url": null + } + }, + { + "77": { + "title": "Film: Following instructions in language with modular methods.", + "author": "So Yeon Min, Devendra Singh Chaplot, Pradeep Ravikumar, Yonatan Bisk, and\nRuslan Salakhutdinov.", + "venue": "ArXiv, abs/2110.07342, 2022.", + "url": null + } + }, + { + "78": { + "title": "Evaluating the robustness of neural language models to input\nperturbations.", + "author": "Milad Moradi and Matthias Samwald.", + "venue": "In EMNLP, 2021.", + "url": null + } + }, + { + "79": { + "title": "Facebook fair\u2019s wmt19 news translation task submission.", + "author": "Nathan Ng, Kyra Yee, Alexei Baevski, Myle Ott, Michael Auli, and Sergey Edunov.", + "venue": "In Proc. 
of WMT, 2020.", + "url": null + } + }, + { + "80": { + "title": "Grit: Faster and better image captioning transformer using dual\nvisual features.", + "author": "Van-Quang Nguyen, Masanori Suganuma, and Takayuki Okatani.", + "venue": "ArXiv, abs/2207.09666, 2022.", + "url": null + } + }, + { + "81": { + "title": "Glide: Towards photorealistic image generation and editing with\ntext-guided diffusion models.", + "author": "Alex Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin,\nBob McGrew, Ilya Sutskever, and Mark Chen.", + "venue": "In ICML, 2022.", + "url": null + } + }, + { + "82": { + "title": "Reading isn\u2019t believing: Adversarial attacks on multi-modal neurons.", + "author": "David Noever and Samantha E. Miller Noever.", + "venue": "ArXiv, abs/2103.10480, 2021.", + "url": null + } + }, + { + "83": { + "title": "Gpt-4 technical report.", + "author": "OpenAI.", + "venue": "ArXiv, abs/2303.08774, 2023.", + "url": null + } + }, + { + "84": { + "title": "Bleu: a method for automatic evaluation of machine translation.", + "author": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu.", + "venue": "In ACL, 2002.", + "url": null + } + }, + { + "85": { + "title": "On aliased resizing and surprising subtleties in gan evaluation.", + "author": "Gaurav Parmar, Richard Zhang, and Jun-Yan Zhu.", + "venue": "In CVPR, 2022.", + "url": null + } + }, + { + "86": { + "title": "Perception test : A diagnostic benchmark for multimodal models.", + "author": "Viorica Patraucean et al.", + "venue": "2022.", + "url": null + } + }, + { + "87": { + "title": "Vision transformers are robust learners.", + "author": "Sayak Paul and Pin-Yu Chen.", + "venue": "In AAAI, 2022.", + "url": null + } + }, + { + "88": { + "title": "Learning transferable visual models from natural language\nsupervision, 2021.", + "author": "Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh,\nSandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark,\nGretchen Krueger, and Ilya Sutskever.", + "venue": "https://github.com/openai/CLIP.", + "url": null + } + }, + { + "89": { + "title": "Hierarchical text-conditional image generation with clip latents.", + "author": "Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen.", + "venue": "ArXiv, abs/2204.06125, 2022.", + "url": null + } + }, + { + "90": { + "title": "Do imagenet classifiers generalize to imagenet?", + "author": "Benjamin Recht, Rebecca Roelofs, Ludwig Schmidt, and Vaishaal Shankar.", + "venue": "In ICML, 2019.", + "url": null + } + }, + { + "91": { + "title": "Sentence-bert: Sentence embeddings using siamese bert-networks.", + "author": "Nils Reimers and Iryna Gurevych.", + "venue": "In EMNLP, 11 2019.", + "url": null + } + }, + { + "92": { + "title": "High-resolution image synthesis with latent diffusion models.", + "author": "Robin Rombach, A. Blattmann, Dominik Lorenz, Patrick Esser, and Bj\u00f6rn\nOmmer.", + "venue": "CVPR, 2022.", + "url": null + } + }, + { + "93": { + "title": "Models in the wild: On corruption robustness of neural nlp systems.", + "author": "Barbara Rychalska, Dominika Basaj, Alicja Gosiewska, and P. 
Biecek.", + "venue": "In ICONIP, 2019.", + "url": null + } + }, + { + "94": { + "title": "Photorealistic text-to-image diffusion models with deep language\nunderstanding.", + "author": "Chitwan Saharia et al.", + "venue": "ArXiv, abs/2205.11487, 2022.", + "url": null + } + }, + { + "95": { + "title": "Multi-modal robustness analysis against language and visual\nperturbations.", + "author": "Madeline Chantry Schiappa, Yogesh Singh Rawat, Shruti Vyas, Vibhav Vineet, and\nHamid Palangi.", + "venue": "ArXiv, abs/2207.02159, 2022.", + "url": null + } + }, + { + "96": { + "title": "Generative text style transfer for improved language sophistication.", + "author": "Robert Schmidt.", + "venue": "2020.", + "url": null + } + }, + { + "97": { + "title": "Laion-400m: Open dataset of clip-filtered 400 million image-text\npairs.", + "author": "Christoph Schuhmann et al.", + "venue": "ArXiv, abs/2111.02114, 2021.", + "url": null + } + }, + { + "98": { + "title": "Laion-5b: An open large-scale dataset for training next generation\nimage-text models.", + "author": "Christoph Schuhmann et al.", + "venue": "2022.", + "url": null + } + }, + { + "99": { + "title": "Grad-cam: Visual explanations from deep networks via gradient-based\nlocalization.", + "author": "Ramprasaath R. Selvaraju, Abhishek Das, Ramakrishna Vedantam, Michael Cogswell,\nDevi Parikh, and Dhruv Batra.", + "venue": "International Journal of Computer Vision, 128:336\u2013359, 2017.", + "url": null + } + }, + { + "100": { + "title": "Robustness tests of nlp machine learning models: Search and\nsemantically replace.", + "author": "Rahul Singh et al.", + "venue": "ArXiv, abs/2104.09978, 2021.", + "url": null + } + }, + { + "101": { + "title": "A corpus of natural language for visual reasoning.", + "author": "Alane Suhr, Mike Lewis, James Yeh, and Yoav Artzi.", + "venue": "In ACL, 2017.", + "url": null + } + }, + { + "102": { + "title": "Measuring robustness to natural distribution shifts in image\nclassification.", + "author": "Rohan Taori, Achal Dave, Vaishaal Shankar, Nicholas Carlini, Benjamin Recht,\nand Ludwig Schmidt.", + "venue": "ArXiv, abs/2007.00644, 2020.", + "url": null + } + }, + { + "103": { + "title": "Cider: Consensus-based image description evaluation.", + "author": "Ramakrishna Vedantam, C. 
Lawrence Zitnick, and Devi Parikh.", + "venue": "CVPR, 2015.", + "url": null + } + }, + { + "104": { + "title": "Adversarial glue: A multi-task benchmark for robustness evaluation of\nlanguage models.", + "author": "Boxin Wang, Chejian Xu, Shuohang Wang, Zhe Gan, Yu Cheng, Jianfeng Gao,\nAhmed Hassan Awadallah, and Bo Li.", + "venue": "ArXiv, abs/2111.02840, 2021.", + "url": null + } + }, + { + "105": { + "title": "Ofa: Unifying architectures, tasks, and modalities through a simple\nsequence-to-sequence learning framework.", + "author": "Peng Wang, An Yang, Rui Men, Junyang Lin, Shuai Bai, Zhikang Li, Jianxin Ma,\nChang Zhou, Jingren Zhou, and Hongxia Yang.", + "venue": "In International Conference on Machine Learning,\n2022a.", + "url": null + } + }, + { + "106": { + "title": "Unifying architectures, tasks, and modalities through a simple\nsequence-to-sequence learning framework.", + "author": "Peng Wang, An Yang, Rui Men, Junyang Lin, Shuai Bai, Zhikang Li, Jianxin Ma,\nChang Zhou, Jingren Zhou, and Hongxia Yang.", + "venue": "In ICML, 2022b.", + "url": null + } + }, + { + "107": { + "title": "Cat-gen: Improving robustness in nlp models via controlled\nadversarial text generation.", + "author": "Tianlu Wang, Xuezhi Wang, Yao Qin, Ben Packer, Kang Li, Jilin Chen, Alex\nBeutel, and Ed H. Chi.", + "venue": "In EMNLP, 2020.", + "url": null + } + }, + { + "108": { + "title": "Measure and improve robustness in nlp models: A survey.", + "author": "Xuezhi Wang, Haohan Wang, and Diyi Yang.", + "venue": "ArXiv, abs/2112.08313, 2022c.", + "url": null + } + }, + { + "109": { + "title": "Image quality assessment: from error visibility to structural\nsimilarity.", + "author": "Zhou Wang, Alan Conrad Bovik, Hamid R. Sheikh, and Eero P. Simoncelli.", + "venue": "IEEE Transactions on Image Processing, 13:600\u2013612,\n2004.", + "url": null + } + }, + { + "110": { + "title": "Simvlm: Simple visual language model pretraining with weak\nsupervision.", + "author": "Zirui Wang, Jiahui Yu, Adams Wei Yu, Zihang Dai, Yulia Tsvetkov, and Yuan Cao.", + "venue": "ArXiv, abs/2108.10904, 2022d.", + "url": null + } + }, + { + "111": { + "title": "Eda: Easy data augmentation techniques for boosting performance on\ntext classification tasks.", + "author": "Jason Wei and Kai Zou.", + "venue": "In EMNLP, 2019.", + "url": null + } + }, + { + "112": { + "title": "Assaying out-of-distribution generalization in transfer learning.", + "author": "F. Wenzel et al.", + "venue": "ArXiv, abs/2207.09239, 2022.", + "url": null + } + }, + { + "113": { + "title": "A broad-coverage challenge corpus for sentence understanding through\ninference.", + "author": "Adina Williams, Nikita Nangia, and Samuel R. Bowman.", + "venue": "In NAACL, 2018.", + "url": null + } + }, + { + "114": { + "title": "Unified visual-semantic embeddings: Bridging vision and language with\nstructured meaning representations.", + "author": "Hao Wu, Jiayuan Mao, Yufeng Zhang, Yuning Jiang, Lei Li, Weiwei Sun, and\nWei-Ying Ma.", + "venue": "CVPR, 2019.", + "url": null + } + }, + { + "115": { + "title": "Adversarial examples improve image recognition.", + "author": "Cihang Xie, Mingxing Tan, Boqing Gong, Jiang Wang, Alan Loddon Yuille, and\nQuoc V. 
Le.", + "venue": "CVPR, 2019a.", + "url": null + } + }, + { + "116": { + "title": "Visual entailment task for visually-grounded language learning.", + "author": "Ning Xie, Farley Lai, Derek Doran, and Asim Kadav.", + "venue": "arXiv preprint arXiv:1811.10582, 2018.", + "url": null + } + }, + { + "117": { + "title": "Visual entailment: A novel task for fine-grained image understanding.", + "author": "Ning Xie, Farley Lai, Derek Doran, and Asim Kadav.", + "venue": "arXiv preprint arXiv:1901.06706, 2019b.", + "url": null + } + }, + { + "118": { + "title": "Fooling vision and language models despite localization and attention\nmechanism.", + "author": "Xiaojun Xu, Xinyun Chen, Chang Liu, Anna Rohrbach, Trevor Darrell, and\nDawn Xiaodong Song.", + "venue": "2018 IEEE/CVF Conference on Computer Vision and Pattern\nRecognition, pages 4951\u20134961, 2017.", + "url": null + } + }, + { + "119": { + "title": "Vision-language pre-training with triple contrastive learning.", + "author": "Jinyu Yang, Jiali Duan, S. Tran, Yi Xu, Sampath Chanda, Liqun Chen, Belinda\nZeng, Trishul M. Chilimbi, and Junzhou Huang.", + "venue": "ArXiv, abs/2202.10401, 2022.", + "url": null + } + }, + { + "120": { + "title": "Defending multimodal fusion models against single-source adversaries.", + "author": "Karren D. Yang, Wan-Yi Lin, Manash Pratim Barman, Filipe Condessa, and Zico\nKolter.", + "venue": "2021 IEEE/CVF Conference on Computer Vision and Pattern\nRecognition (CVPR), pages 3339\u20133348, 2021.", + "url": null + } + }, + { + "121": { + "title": "A fourier perspective on model robustness in computer vision.", + "author": "Dong Yin, Raphael Gontijo Lopes, Jonathon Shlens, Ekin Dogus Cubuk, and Justin\nGilmer.", + "venue": "In NeurIPS, 2019.", + "url": null + } + }, + { + "122": { + "title": "From image descriptions to visual denotations: New similarity metrics\nfor semantic inference over event descriptions.", + "author": "Peter Young, Alice Lai, Micah Hodosh, and J. Hockenmaier.", + "venue": "Transactions of the Association for Computational Linguistics,\n2:67\u201378, 2014.", + "url": null + } + }, + { + "123": { + "title": "Coca: Contrastive captioners are image-text foundation models.", + "author": "Jiahui Yu, Zirui Wang, Vijay Vasudevan, Legg Yeung, Mojtaba Seyedhosseini, and\nYonghui Wu.", + "venue": "ArXiv, abs/2205.01917, 2022.", + "url": null + } + }, + { + "124": { + "title": "Florence: A new foundation model for computer vision.", + "author": "Lu Yuan et al.", + "venue": "ArXiv, abs/2111.11432, 2021.", + "url": null + } + }, + { + "125": { + "title": "Scaling vision transformers.", + "author": "Xiaohua Zhai, Alexander Kolesnikov, Neil Houlsby, and Lucas Beyer.", + "venue": "CVPR, 2022.", + "url": null + } + }, + { + "126": { + "title": "Towards adversarial attack on vision-language pre-training models.", + "author": "Jiaming Zhang, Qiaomin Yi, and Jitao Sang.", + "venue": "Proceedings of the 30th ACM International Conference on\nMultimedia, 2022.", + "url": null + } + }, + { + "127": { + "title": "Vinvl: Revisiting visual representations in vision-language models.", + "author": "Pengchuan Zhang, Xiujun Li, Xiaowei Hu, Jianwei Yang, Lei Zhang, Lijuan Wang,\nYejin Choi, and Jianfeng Gao.", + "venue": "CVPR, 2021.", + "url": null + } + }, + { + "128": { + "title": "The unreasonable effectiveness of deep features as a perceptual\nmetric.", + "author": "Richard Zhang, Phillip Isola, Alexei A. 
Efros, Eli Shechtman, and Oliver Wang.", + "venue": "2018 IEEE/CVF Conference on Computer Vision and Pattern\nRecognition, pages 586\u2013595, 2018.", + "url": null + } + }, + { + "129": { + "title": "Improving the robustness of deep neural networks via stability\ntraining.", + "author": "Stephan Zheng, Yang Song, Thomas Leung, and Ian J. Goodfellow.", + "venue": "CVPR, 2016.", + "url": null + } + }, + { + "130": { + "title": "Understanding the robustness in vision transformers.", + "author": "Daquan Zhou, Zhiding Yu, Enze Xie, Chaowei Xiao, Anima Anandkumar, Jiashi Feng,\nand Jos\u00e9 Manuel \u00c1lvarez.", + "venue": "In ICML, 2022.", + "url": null + } + }, + { + "131": { + "title": "Minigpt-4: Enhancing vision-language understanding with advanced\nlarge language models.", + "author": "Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny.", + "venue": "ArXiv, abs/2304.10592, 2023.", + "url": null + } + }, + { + "132": { + "title": "Crossclr: Cross-modal contrastive learning for multi-modal video\nrepresentations.", + "author": "Mohammadreza Zolfaghari, Yi Zhu, Peter Gehler, and Thomas Brox.", + "venue": "In ICCV, 2021.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2212.08044v3" +} \ No newline at end of file