{
"File Number": "1010",
"Title": "Multimodal Relation Extraction with Cross-Modal Retrieval and Synthesis",
"6 Limitation": "In this paper, we propose incorporating textual and visual data from search engines for multimodal relation extraction. Although the proposed model yields competitive results on the benchmark, it still has several limitations. First, using a search engine is a feasible way to obtain related knowledge, but it also introduces the issue of noisy evidence: unrelated visual and textual evidence returned by the search engine may lead the model to incorrect predictions. Moreover, not all retrieved evidence is equally reliable, and sources may sometimes contradict each other. Second, retrieval-augmented methods are slower than their content-based counterparts, since retrieving evidence from the Internet requires extra time, so they may not be suitable for time-sensitive scenarios. Lastly, evidence may be presented in forms other than text and images. For instance, structured information such as tables, info lists, and knowledge graphs also provides important context for identifying semantic relations. Humans can extract relevant information from these heterogeneous sources for inference, whereas our relation extraction system can only model and reason over textual and visual evidence.",
"abstractText": "Multimodal relation extraction (MRE) is the task of identifying the semantic relationship between two entities based on the context of a sentence-image pair. Existing retrieval-augmented approaches mainly focus on modeling the retrieved textual knowledge, which may not suffice to accurately identify complex relations. To improve prediction, this research proposes to retrieve textual and visual evidence based on the object, the sentence, and the whole image. We further develop a novel approach to synthesize the object-level, image-level, and sentence-level information for better reasoning within and across modalities. Extensive experiments and analyses show that the proposed method effectively selects and compares evidence across modalities and significantly outperforms state-of-the-art models. Code and data are available.",
"1 Introduction": "Relation extraction aims to detect relations among entities in text and plays an important role in various applications (Zhang et al., 2017; Soares et al., 2019). Early efforts mainly focused on predicting relations based on information from a single modality, i.e., text. Recently, multimodal relation extraction (MRE) has been proposed to enhance textual representations with the aid of visual clues from images (Zheng et al., 2021a; Chen et al., 2022; Wang et al., 2022). It extends text-based approaches by providing visual context to address the common ambiguity issues in identifying relations. Figure 1 shows an example from the MNRE dataset (Zheng et al., 2021b). To infer the relation between the entities Ang Lee and Oscar, the model needs to capture the interactions from visual relations between objects in an image to textual relations in a sentence. The visual relation “holding” between two objects helps to detect the relation awarded between two textual entities.\nMost existing efforts focus on modeling the visual and textual content of the input. Zheng et al. (2021a) constructed textual and visual graphs and then identified relations based on graph alignments. Chen et al. (2022) presented a hierarchical visual prefix fusion network to incorporate hierarchical multi-scaled visual and textual features. Li et al. (2023a) proposed a fine-grained multimodal alignment approach with a Transformer, which aligns visual and textual objects in representation space. Wang et al. (2022) first proposed retrieval-augmented multimodal relation extraction, where the given image and sentence are used to retrieve textual evidence from a knowledge base constructed from Wikipedia. Unlike previous retrieval-based models, we not only retrieve texts but also retrieve visual and textual evidence related to the object, the sentence, and the entire image.\n1https://github.com/THU-BPM/MRE\n†Corresponding Author.
A novel strategy is used to combine evidence from the object, sentence, and image levels to enable better reasoning across modalities. Our key contributions are summarized as follows:\n• We use cross-modal retrieval to obtain multimodal evidence. To improve prediction accuracy, we further synthesize visual and textual information for relational reasoning.\n• We evaluate our method on the MRE benchmark. Extensive experimental results validate the effectiveness of the proposed approach.",
"2.1 Cross-Modal Retrieval": "This module aims to retrieve visual evidence based on the input text (sentence, entities), and textual evidence based on the input image and objects.\nTextual evidence We first obtain the top-m most salient local visual objects using the visual grounding toolkit (Yang et al., 2019): Vobj = {V1obj, V2obj, · · · , Vmobj}. Then we query the Google Vision APIs2 with Vimg and Vobj to obtain textual evidence: the APIs return a list of entities Eentity that describe the content of Vimg and Vobj and provide a more effective explanation of the visual content. In addition to the entities, the APIs return the images’ URLs and the URLs of the pages containing them. We build a web crawler that searches the containing pages for the images’ URLs and returns the captions Ecaption if found. Note that Eentity and Ecaption contain 10 entities and captions obtained for each Vimg and Vobj as retrieved textual evidence.\nVisual evidence We use the textual content T of the post to retrieve the visual evidence. More specifically, we leverage the Google Custom Search API3 to retrieve 10 images Eimage for the textual content of each post.\n2https://cloud.google.com/vision/docs/detecting-web",
"2.2 Cross-Modal Synthesis": "Given the retrieved visual and textual evidence, this module aims to synthesize multimodal information for relation extraction.",
"2.2.1 Visual Encoder": "The visual encoder module encodes the visual content Vimg, Vobj and the retrieved visual evidence Eimage of the post. First, we adopt ResNet (He et al., 2016) pretrained on the ImageNet dataset (Deng et al., 2009) to obtain the visual embedding hv ∈ Rn×d, where n and d represent the number of images and the hidden dimension. To fuse the cross-modal visual and textual information, we employ a learnable linear layer hv = Wϕhv + bϕ.",
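The learnable linear layer above is a plain affine map applied to the visual embedding. A minimal pure-Python sketch (the function name and the weights are illustrative, not the paper's trained parameters):

```python
def linear_project(h, W, b):
    """Affine projection h' = W h + b, as used to map the ResNet visual
    embedding into the shared cross-modal dimension.
    W is a list of rows (output_dim x input_dim), b a bias of output_dim."""
    return [sum(w_ij * h_j for w_ij, h_j in zip(row, h)) + b_i
            for row, b_i in zip(W, b)]
```

For example, `linear_project([1.0, 2.0], [[1, 0], [0, 1], [1, 1]], [0, 0, 1])` maps a 2-d feature to a 3-d one, `[1.0, 2.0, 4.0]`; in the paper this maps ResNet features into the attention dimension.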
"2.2.2 Textual Encoder": "The textual encoder module encodes the textual content T and the retrieved textual evidence Eentity, Ecaption of the post. For each sentence X = [x1, x2, ..., xM] in the textual content T in which two entities are mentioned, we follow the labeling schema adopted in Soares et al. (2019) and augment X with four reserved tokens [E1], [/E1], [E2], [/E2] to mark the beginning and the end of each entity mentioned in the sentence:\nX = [x1, ..., [E1], xi, ..., xj−1, [/E1], ..., [E2], xk, ..., xl−1, [/E2], ..., xM], (1)\nas the input token sequence. We adopt BERT (Devlin et al., 2019) as the encoder and obtain the textual embedding ht ∈ R(M+4)×d, where M and d represent the number of tokens in X and the hidden dimension. Thanks to informative visual embeddings, we can better capture the correlation between visual content and textual information.\n3https://developers.google.com/custom-search/v1",
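The marker insertion of Eq. 1 can be sketched as a small helper (a hypothetical function, assuming non-overlapping spans with the first entity before the second; the paper's implementation operates on the BERT tokenizer's output):

```python
def mark_entities(tokens, e1_span, e2_span):
    """Insert [E1]/[/E1] and [E2]/[/E2] around half-open (start, end)
    token spans, following the labeling schema of Soares et al. (2019)."""
    (i, j), (k, l) = e1_span, e2_span
    out = []
    for idx, tok in enumerate(tokens):
        if idx == i:
            out.append("[E1]")
        if idx == k:
            out.append("[E2]")
        out.append(tok)
        if idx == j - 1:
            out.append("[/E1]")
        if idx == l - 1:
            out.append("[/E2]")
    return out
```

For the running example, marking "Ang Lee" and "Oscar" in "Ang Lee won an Oscar" yields `[E1] Ang Lee [/E1] won an [E2] Oscar [/E2]`, giving BERT explicit entity boundaries whose positions are later read out as relational features.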
"2.2.3 Cross-Modal Selection": "Given the encoded multimodal evidence and inputs hlt ∈ R(M+4)×d and hlv ∈ Rn×d, this module selects visual/textual evidence and compares it against the input image/sentence. Inspired by Vaswani et al. (2017), we leverage multi-head attention to perform the cross-modal selection. We first project the representations as query, key, and value vectors:\nQl, Kl, Vl = xWlq, xWlk, xWlv; x ∈ {hlt, hlv}, (2)\nwhere Wlq, Wlk, Wlv ∈ Rd×dh are the attention projection parameters. We then obtain the hidden features at the (l+1)-th layer via multi-head attention:\nhl+1t = Attn(Qlt, [Klv, Klt], [Vlv, Vlt]),\nhl+1v = Attn(Qlv, [Klt, Klv], [Vlt, Vlv]). (3)\nNote that the textual features ht come from two sources. The first is the textual content of the post with its two entities, from which we take the relational features at the [E1] and [E2] positions. The other is the retrieved textual evidence; since it does not contain entities, we take the representation at the [CLS] position:\nht,content = Avg.(ht,[E1], ht,[E2]),\nht,retrieved = ht,[CLS], (4)\nwhere ht = {ht,content, ht,retrieved} ∈ Rd is the representation of the textual content and retrieved textual evidence for each post, and d is the embedding size 768. Similarly, we use a learnable linear layer ht = Wθht + bθ to change the dimension d from 768 to 2048 and employ the multi-head attention in Eqs. 2, 3, and 4 to update the visual content and retrieved visual evidence.",
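The selection in Eqs. 2–3 is standard scaled dot-product attention with queries from one modality attending over the concatenation of both modalities' keys and values. A single-head pure-Python sketch, with the projections Wq, Wk, Wv and the multi-head split omitted for brevity (so this is the attention core, not the full layer):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def cross_attend(queries, keys, values):
    """Scaled dot-product attention: each query vector attends over all
    keys/values. To mirror Eq. 3, pass text queries with keys/values
    formed by concatenating the visual and textual lists."""
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, values))
                    for j in range(len(values[0]))])
    return out
```

Concatenating the key/value lists (`keys_v + keys_t`) is what lets a textual query "select" the most relevant piece of visual or textual evidence in one pass.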
"2.2.4 Cross-Modal Consistency": "This module aims to evaluate the consistency between the retrieved textual and visual evidence and the original post. A natural idea is to leverage the textual and visual content of the original post to update the retrieved textual and visual evidence. We obtain the updated evidence ht,retrieved and hv,retrieved from ht,content and hv,content as:\nht,r. = softmax((ht,c.Wt)(ht,r.W′t)T / √dt) ht,r.,\nhv,r. = softmax((ht,c.Wv)(hv,r.W′v)T / √dv) hv,r., (5)\nwhere Wt, W′t ∈ R768×768 and Wv, W′v ∈ R2048×2048 are trainable projection matrices and dt, dv are scaling hyperparameters.",
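Eq. 5 re-weights the retrieved evidence by its projected similarity to the post content, so consistent evidence dominates the update. A sketch under toy assumptions (the matrices W and W′ stand in for the trained projections; real inputs are 768- or 2048-dimensional):

```python
import math

def matvec(v, W):
    """Row vector v (1 x n) times matrix W (n x m) -> length-m list."""
    return [sum(v[i] * W[i][j] for i in range(len(v)))
            for j in range(len(W[0]))]

def consistency_update(h_c, H_r, W, W_prime, d):
    """Eq. 5 sketch: score each retrieved vector h_r by
    (h_c W) · (h_r W')ᵀ / sqrt(d), softmax-normalize the scores, and
    return the re-weighted sum of the retrieved evidence vectors."""
    q = matvec(h_c, W)
    keys = [matvec(h, W_prime) for h in H_r]
    scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
              for k in keys]
    m = max(scores)
    es = [math.exp(s - m) for s in scores]
    z = sum(es)
    w = [e / z for e in es]
    return [sum(wi * h[j] for wi, h in zip(w, H_r))
            for j in range(len(H_r[0]))]
```

With identity projections, evidence vectors aligned with the post content receive the largest softmax weight, which is the intended noise-suppression behavior.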
"2.3 Classifier": "We concatenate the resulting representations to form the final multimodal representation and feed it through a feed-forward neural network to predict the relation:\nhfinal = FFNN([ht,c.; ht,r.; hv,c.; hv,r.]), (6)\nwhere hfinal is then fed into a linear layer followed by a softmax operation to obtain a probability distribution p ∈ Rm over the m relation labels.",
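The classification head of Eq. 6 can be sketched end-to-end in pure Python; for brevity the FFNN hidden layer is collapsed into a single linear map (W, b are illustrative stand-ins for the trained parameters):

```python
import math

def classify(h_tc, h_tr, h_vc, h_vr, W, b):
    """Concatenate the four representations (Eq. 6), apply a linear layer
    plus softmax, and return a probability distribution over the m
    relation labels (one row of W and one entry of b per label)."""
    h = h_tc + h_tr + h_vc + h_vr   # list concatenation = vector concat
    logits = [sum(wi * hi for wi, hi in zip(row, h)) + bi
              for row, bi in zip(W, b)]
    m = max(logits)
    es = [math.exp(z - m) for z in logits]
    s = sum(es)
    return [e / s for e in es]
```

The predicted relation is then the argmax of the returned distribution, trained with cross-entropy against the gold label.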
"3.1 Experimental Setup": "We evaluate the model on MNRE (Zheng et al., 2021b), which contains 12,247/1,624/1,614 samples in the train/dev/test sets, 9,201 images, and 23 relation types. Following prior efforts, we adopt Accuracy, Precision, Recall, and F1 as the evaluation metrics, reporting Accuracy and Macro F1 as the main metrics. For fair comparisons, all baselines and our method use ResNet50 (He et al., 2016) as the visual backbone and BERT-base (Devlin et al., 2019) as the textual encoder. Hyper-parameters are chosen based on the development set. Results are reported as the mean and standard deviation over 5 runs. For the textual encoder of the retrieval-based model, we use the default BERT-base tokenizer with a max length of 128 to preprocess the data. For the visual encoder of the retrieval-based model, we use ResNet50 to encode the images: we scale each image proportionally so that its short side is 256 pixels, then crop the center to 224 × 224. For the feed-forward neural network of the classifier, we set the layer dimensions to hR-1024-(number of relation labels), where hR = 768 × 2 + 2048 × 2. We use BertAdam with a 3e-5 learning rate and a warmup proportion of 0.06 to optimize the cross-entropy loss, and set the batch size to 16.",
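The resize-then-center-crop preprocessing described above is deterministic, so the geometry can be sketched without an image library (a hypothetical helper; the actual pipeline would apply these dimensions with an image-processing toolkit):

```python
def preprocess_dims(w, h, short=256, crop=224):
    """Scale (w, h) proportionally so the short side equals `short`, then
    return the resized size and the centered `crop` x `crop` box as
    (left, top, right, bottom), matching the Section 3.1 preprocessing."""
    if w < h:
        new_w, new_h = short, round(h * short / w)
    else:
        new_w, new_h = round(w * short / h), short
    left = (new_w - crop) // 2
    top = (new_h - crop) // 2
    return (new_w, new_h), (left, top, left + crop, top + crop)
```

For a 640 × 480 photo this resizes to 341 × 256 and crops the central 224 × 224 region, so every image enters ResNet50 at the same resolution.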
"3.2 Baselines": "We adopt two types of baselines:\nText-based baselines encode only the text content: (1) PCNN (Zeng et al., 2015), (2) BERT (Devlin et al., 2019), and (3) MTB (Soares et al., 2019).\nMulti-modal baselines encode both text and image content: (1) UMT (Yu et al., 2020) adopts a multimodal interaction module to obtain token representations incorporating visual information, along with visual representations. (2) UMGF (Zhang et al., 2021) adopts a unified multi-modal graph fusion method. (3) BSG (Zheng et al., 2021a) combines the textual representation from BERT with the visual features produced by a scene graph (SG). (4) MEGA (Zheng et al., 2021b) adopts a dual graph that aligns multi-modal features between entities and objects to improve performance. (5) VBERT (Li et al., 2019) adopts a single-stream structure, in contrast to the attention-based methods. (6) MoRe (Wang et al., 2022) obtains additional textual information by retrieving images and titles, thereby improving the accuracy of relation classification and named entity recognition. (7) Iformer (Li et al., 2023a) increases the amount of information extracted from the image by detecting objects. (8) HVPnet (Chen et al., 2022) treats visual representations as visual prefixes that can be inserted to guide textual representations toward error-insensitive prediction decisions.",
"3.3 Main Results": "Table 1 shows the mean and standard deviation over 5 runs of training and testing on MNRE. We first compare the text-based and multi-modal baselines and observe a performance improvement after incorporating visual content, indicating that images can help reveal the potential relationship between two entities. Among the multi-modal models, Iformer (Li et al., 2023a) and HVPnet (Chen et al., 2022) specifically detect the objects in the image and achieve average gains of 17.23% F1 and 14.15% Accuracy over the other multi-modal baselines. We therefore retrieve textual and visual evidence based on the object, sentence, and whole image, and achieve average gains of 2.79% F1 and 1.02% Accuracy over the best reported model, HVPnet. Thanks to the retrieved visual and textual evidence, the text and image content of the original post is further explained, which helps our model obtain valuable clues for classifying the relations between two entities.",
"3.4 Analysis and Discussion": "Ablation study. We conduct an ablation study on the test set to show the effectiveness of the different modules of our model. Ours w/o Object Evidence and Ours w/o Image Evidence remove the descriptions of objects and images, respectively, from the retrieved textual evidence. Correspondingly, Ours w/o Visual Evidence removes the visual evidence retrieved for the text content. The results in Table 1 demonstrate that the three types of evidence bring 1.95%, 1.35%, and 1.44% F1 improvements, respectively. Among them, the textual evidence obtained from object retrieval brings the greatest benefit, which is related to the potential entity information contained in the objects. Removing the Cross-Modal Selection and Cross-Modal Consistency modules means that we no longer select appropriate evidence or update the retrieved evidence with the original content, which increases the noise from irrelevant evidence and leads to drops of 1.52% and 1.59% F1, respectively.\nAnalyzing the Impact of Evidence. In Figure 3, we vary the number of retrieved visual and textual evidence items from 1 to 20 and report the F1 on the test set. The fluctuating results indicate that both the quantity and the quality of the retrieved evidence affect performance. Using too little textual or visual evidence does not sufficiently explain the original post, which degrades classification quality, while using too much evidence introduces false or irrelevant evidence as noise, which also hurts performance. However, regardless of how much evidence is adopted, our method consistently outperforms HVPnet, which illustrates the effectiveness of adding evidence. In our model, we adopt 10 pieces of textual and visual evidence for each post to achieve the best performance. We believe the Cross-Modal Consistency module alleviates the irrelevant noise so that the model can obtain helpful auxiliary evidence.\nAnalyzing Performance Changes in Tail Relations. 
We select the tail relations with the fewest samples among the 23 relation classes in MNRE and study in Figure 4 how their F1 changes after adding retrieval evidence. Compared with the 2.79% improvement brought by the evidence across all relations, we find that almost all tail relations gain more than 22.68% F1 (46.28 vs. 68.96), which shows that the retrieved evidence is especially helpful for the few-shot tail relation types. This is an attractive property for real-world applications, since labeled training data is usually harder to obtain for tail relation classes.",
"4 Related Work": "Relation extraction has garnered considerable interest in the research community due to its essential role in various natural language processing applications (Guo et al., 2019; Nan et al., 2020; Hu et al., 2021b,a). Initial efforts in this field focused on detecting relations between entities in text, with different neural architectures (Zeng et al., 2015; Zhang et al., 2017; Guo et al., 2020) and pretrained language models (Soares et al., 2019; Devlin et al., 2019) used to encode the textual information. Multimodal relation extraction has recently been proposed, where visual clues from images are used to enhance entity representations (Zheng et al., 2021a,b; Chen et al., 2022; Wang et al., 2022). Most existing efforts focus on fusing the visual and textual modalities efficiently. Zheng et al. (2021b) constructed a dual-modality graph to align multimodal features between entities and objects. Chen et al. (2022) concatenated object-level visual representations as the prefix of each self-attention layer in BERT. Li et al. (2023a) introduced a fine-grained multimodal fusion approach to align visual and textual objects in representation space. Closest to our work, Wang et al. (2022) proposed retrieving textual information related to the entities based on the given image and sentence. Unlike prior efforts, we not only retrieve texts related to entities but also retrieve visual and textual evidence related to the object, the sentence, and the entire image. We further synthesize the retrieved object-level, image-level, and sentence-level information for better reasoning within and across modalities.",
"5 Conclusion and Future Work": "We propose to retrieve multimodal evidence and model the interactions among the object, sentence, and whole image for better relation extraction. Experiments show that the proposed method achieves competitive results on MNRE. As future research directions, we can utilize open-source image search and caption generation tools to retrieve textual and image evidence. For example, to retrieve visual evidence, one can (1) use a web crawler to search Google Images, or (2) use a searchable image database such as PiGallery4, with images sourced from the Open Images Dataset5, which contains ∼9 million images. To retrieve textual evidence, one can use CLIP-based captioning models to generate image captions. Moreover, we can also apply the multimodal retrieval method to low-resource relation extraction (Hu et al., 2020; Liu et al., 2022b; Hu et al., 2023), natural language inference (Li et al., 2023b, 2022), semantic parsing (Liu et al., 2022a, 2023), and other NLP tasks, thus realizing information enhancement based on images and retrieval.\n4https://github.com/vladmandic/pigallery\n5https://storage.googleapis.com/openimages/web/factsfigures_v7.html",
"7 Acknowledgement": "We thank the reviewers for their valuable comments. The work described here was partially supported by grants from the National Key Research and Development Program of China (No. 2018AAA0100204) and from the Research Grants Council of the Hong Kong Special Administrative Region, China (CUHK 14222922, RGC GRF, No. 2151185), and by NSF grants III-1763325, III-1909323, III-2106758, and SaTC-1930941. Zhiyang Teng was partially supported by the CAAI-Huawei MindSpore Open Fund (CAAIXSJLJJ2021-046A).",
"A For every submission:": "✓ A1. Did you describe the limitations of your work?\nSection 6\n✓ A2. Did you discuss any potential risks of your work?\nSection 6\n✓ A3. Do the abstract and introduction summarize the paper’s main claims?\nAbstract and Section 1\n✗ A4. Have you used AI writing assistants when working on this paper?\nLeft blank.\nB ✓ Did you use or create scientific artifacts?\nSection 2, Section 3\n✓ B1. Did you cite the creators of artifacts you used?\nSection 2, Section 3\n✓ B2. Did you discuss the license or terms for use and / or distribution of any artifacts?\nSection 2, Section 3\n✓ B3. Did you discuss if your use of existing artifact(s) was consistent with their intended use, provided that it was specified? For the artifacts you create, do you specify intended use and whether that is compatible with the original access conditions (in particular, derivatives of data accessed for research purposes should not be used outside of research contexts)?\nSection 2, Section 3\nB4. Did you discuss the steps taken to check whether the data that was collected / used contains any information that names or uniquely identifies individual people or offensive content, and the steps taken to protect / anonymize it?\nNot applicable. Left blank.\nB5. Did you provide documentation of the artifacts, e.g., coverage of domains, languages, and linguistic phenomena, demographic groups represented, etc.?\nNot applicable. Left blank.\n✓ B6. Did you report relevant statistics like the number of examples, details of train / test / dev splits, etc. for the data that you used / created? Even for commonly-used benchmark datasets, include the number of examples in train / validation / test splits, as these provide necessary context for a reader to understand experimental results. For example, small differences in accuracy on large test sets may be significant, while on small test sets they may not be.\nSection 3\nC ✓ Did you run computational experiments?\nSection 3\n✓ C1. 
Did you report the number of parameters in the models used, the total computational budget (e.g., GPU hours), and computing infrastructure used?\nSection 3\n✓ C2. Did you discuss the experimental setup, including hyperparameter search and best-found hyperparameter values?\nSection 3\n✓ C3. Did you report descriptive statistics about your results (e.g., error bars around results, summary statistics from sets of experiments), and is it transparent whether you are reporting the max, mean, etc. or just a single run?\nSection 3\n✓ C4. If you used existing packages (e.g., for preprocessing, for normalization, or for evaluation), did you report the implementation, model, and parameter settings used (e.g., NLTK, Spacy, ROUGE, etc.)?\nSection 2, Section 3\nD ✗ Did you use human annotators (e.g., crowdworkers) or research with human participants?\nLeft blank.\nD1. Did you report the full text of instructions given to participants, including e.g., screenshots, disclaimers of any risks to participants or annotators, etc.?\nNo response.\nD2. Did you report information about how you recruited (e.g., crowdsourcing platform, students) and paid participants, and discuss if such payment is adequate given the participants’ demographic (e.g., country of residence)?\nNo response.\nD3. Did you discuss whether and how consent was obtained from people whose data you’re using/curating? For example, if you collected data via crowdsourcing, did your instructions to crowdworkers explain how the data would be used?\nNo response.\nD4. Was the data collection protocol approved (or determined exempt) by an ethics review board?\nNo response.\nD5. Did you report the basic demographic and geographic characteristics of the annotator population that is the source of the data?\nNo response.\nThe Responsible NLP Checklist used at ACL 2023 is adopted from NAACL 2022, with the addition of a question on AI writing assistance."
}