| { |
| "paper_id": "2020", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T15:23:30.281287Z" |
| }, |
| "title": "An Element-wise Visual-enhanced BiLSTM-CRF Model for Location Name Recognition", |
| "authors": [ |
| { |
| "first": "Takuya", |
| "middle": [], |
| "last": "Komada", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Tsukuba", |
| "location": {} |
| }, |
| "email": "komada@mibel.cs.tsukuba.ac.jp" |
| }, |
| { |
| "first": "Takashi", |
| "middle": [], |
| "last": "Inui", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Tsukuba", |
| "location": {} |
| }, |
| "email": "inui@cs.tsukuba.ac.jp" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "In recent years, studies have used visual information in named entity recognition (NER) for social media posts with attached images. However, these methods can only be applied to documents with attached images. In this paper, we propose a NER method that can use element-wise visual information for any document by using image data corresponding to each word in the document. The proposed method obtains element-wise image data using an image retrieval engine, to be used as extra features in the neural NER model. Experimental results on a standard Japanese NER dataset show that the proposed method achieves a higher F1 value (89.67%) than a baseline method in location name recognition, demonstrating the effectiveness of using element-wise visual information.", |
| "pdf_parse": { |
| "paper_id": "2020", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "In recent years, studies have used visual information in named entity recognition (NER) for social media posts with attached images. However, these methods can only be applied to documents with attached images. In this paper, we propose a NER method that can use element-wise visual information for any document by using image data corresponding to each word in the document. The proposed method obtains element-wise image data using an image retrieval engine, to be used as extra features in the neural NER model. Experimental results on a standard Japanese NER dataset show that the proposed method achieves a higher F1 value (89.67%) than a baseline method in location name recognition, demonstrating the effectiveness of using element-wise visual information.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Since the 1990s, information extraction, in which computers are used to extract structured data from unstructured documents, has been extensively studied (Cowie and Lehnert, 1996; Grishman and Sundheim, 1996). Among the entities to be extracted, location information (where) is one of the essential components (5W1H) of event information, and the task has evolved to include various subtasks, such as location name disambiguation and the mapping of location names to real-world geographic locations (Weissenbacher et al., 2019).", |
| "cite_spans": [ |
| { |
| "start": 154, |
| "end": 179, |
| "text": "(Cowie and Lehnert, 1996;", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 180, |
| "end": 208, |
| "text": "Grishman and Sundheim, 1996)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 512, |
| "end": 540, |
| "text": "(Weissenbacher et al., 2019)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Location name recognition has typically been conducted as a named entity recognition (NER) task (Li et al., 2018). In this field, deep learning models using visual information have been actively studied in recent years, especially for the extraction of named entities (NEs) from posts on social networking services (SNSs) such as Twitter and Snapchat (Lu et al., 2018; Moon et al., 2019). These methods use images attached to a post as multimodal features to disambiguate word meanings in the post. For example, the word Washington can refer to Washington, D.C. (LOCATION) or the presidency of George Washington (PERSON). By looking at the attached image, Washington can be further disambiguated.", |
| "cite_spans": [ |
| { |
| "start": 96, |
| "end": 113, |
| "text": "(Li et al., 2018)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 351, |
| "end": 368, |
| "text": "(Lu et al., 2018;", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 369, |
| "end": 387, |
| "text": "Moon et al., 2019;", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "As mentioned above, visual information is considered capable of explaining word meanings and of providing useful information for location name recognition. For example, Figure 1 shows images of two modernized cities, Shenzhen in China and Dubai in the UAE, and Figure 2 shows images of rural villages, Manali in India and Hakone in Japan. One can easily recognize common objects in these images: skyscrapers in Figure 1 , and townscapes surrounded by mountains and rivers in Figure 2 . Similarities like these would provide sufficient information to consider that words like Shenzhen and Dubai in documents share the same NE aspect.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 164, |
| "end": 172, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| }, |
| { |
| "start": 266, |
| "end": 274, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| }, |
| { |
| "start": 418, |
| "end": 426, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| }, |
| { |
| "start": 482, |
| "end": 490, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this paper, we propose a method for location name recognition that utilizes images more effectively. Specifically, image data are obtained for each word in a document through an image retrieval engine, using the words in the document as search queries, and are used as extra multimodal features in a neural NER model. The proposed model has two advantages. First, it is robust to unseen words that do not appear in the training data, to which standard NER models tend to be vulnerable; image data corresponding to each word in the document provide additional information that clarifies word meanings, as shown in the examples of Figure 1 and Figure 2 . Second, our method can be applied to any document because element-wise image data are obtained for each word in the document, whereas the methods of previous studies can only be applied to documents with attached images.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 639, |
| "end": 647, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| }, |
| { |
| "start": 652, |
| "end": 660, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In addition, in the proposed method, we introduce a Gate mechanism to control the extent to which the visual features from images are input to the neural NER model. Polysemous words, abbreviations, and misspellings in a document can lead to inappropriate instances in the image data obtained by the image retrieval engine. The gate removes the harmful effects of such instances by increasing the influence of a visual feature when its image suits the document's context and decreasing it when it does not. We evaluate the model's performance for location name recognition using a standard BiLSTM-CRF model as our baseline, and show the effectiveness of element-wise visual information and the Gate mechanism through our experimental results.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In NER, machine learning models using conditional random fields (CRFs) have been widely used (Marci\u0144czuk, 2015). Since the emergence of deep learning, it has become common to use various neural network-based NER models. Among them, bidirectional long short-term memory (LSTM) models that include a CRF layer, BiLSTM-CRF, are among the most common (Huang et al., 2015; Lample et al., 2016). Furthermore, variations of BiLSTM-CRF with language models pre-trained on large unsupervised corpora, such as Flair (Akbik et al., 2018), have achieved high performance.", |
| "cite_spans": [ |
| { |
| "start": 93, |
| "end": 111, |
| "text": "(Marci\u0144czuk, 2015)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 371, |
| "end": 391, |
| "text": "(Huang et al., 2015;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 392, |
| "end": 412, |
| "text": "Lample et al., 2016)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 533, |
| "end": 553, |
| "text": "(Akbik et al., 2018)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Neural NER Model", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Visual information obtained from images (or pictures) has been used in neural NER models, especially when applied to SNS posts that include images related to them. Moon et al. (2019) proposed a neural NER model using images attached to a post as multimodal features. In the model, the image is transformed into a vector representation through a pre-trained CNN-based image recognition model and then combined with the input to the LSTM network for NER. Asgari-Chenaghlu et al. (2020) proposed a similar model to Moon et al. (2019) that could directly use object name class labels obtained by the image recognition model. Lu et al. (2018) and proposed models that obtain one-to-one correspondences between a word in a document and an object in a picture attached to the document to obtain fine-grained visual features. These studies only use image data attached to the document, not element-wise image data corresponding to words in the document.", |
| "cite_spans": [ |
| { |
| "start": 164, |
| "end": 182, |
| "text": "Moon et al. (2019)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 512, |
| "end": 530, |
| "text": "Moon et al. (2019)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 621, |
| "end": 637, |
| "text": "Lu et al. (2018)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Use of Visual Information", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "In Chinese NER, each part of a Chinese character in a document can be regarded as a visual feature and mixed into the NER model (Jia and Ma, 2019). Although this model handles element-wise visual information in the same manner as ours, that is, image data corresponding to each element (a character or word) in the document are used in the NER model, it focuses only on the visual patterns of the characters themselves. Our model, described in detail in Section 4, focuses on images that express word meanings.", |
| "cite_spans": [ |
| { |
| "start": 128, |
| "end": 146, |
| "text": "(Jia and Ma, 2019)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Use of Visual Information", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "This section describes the details of the BiLSTM-CRF model, the basis of our baseline model. As mentioned in the previous section, the BiLSTM-CRF model is one of the most common models for NER. The input is a word or character sequence in a document, and the output is a sequence of labels representing NE information. In this study, we use a character-based model because the dataset used in the experiments is Japanese, and a character-based model avoids errors caused by word segmentation. Character-based models have been confirmed to outperform word-based models on Japanese documents (Misawa et al., 2017).", |
| "cite_spans": [ |
| { |
| "start": 583, |
| "end": 604, |
| "text": "(Misawa et al., 2017)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "BiLSTM-CRF Model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Let C = {c_t}_{t=1}^{M} be the input character sequence, x = {x_t}_{t=1}^{M} be the input vector sequence corresponding to C, and y = {y_t}_{t=1}^{M} be the output label sequence. Here, x is created by concatenating three types of vector (embedding) sequences: x_c, x_w, and x_F. The t-th element x_t of x is given by Equation (1).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "BiLSTM-CRF Model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "x_t = [x_c,t ; x_w,t ; x_F,t]", |
| "eq_num": "(1)" |
| } |
| ], |
| "section": "BiLSTM-CRF Model", |
| "sec_num": "3" |
| }, |
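The concatenation in Equation (1) can be sketched in a few lines of NumPy. The dimensions (300-d GloVe character and word embeddings, 1024-d Flair embeddings) follow the settings reported in Section 5.2; the variable names are illustrative, not from the paper's code.

```python
import numpy as np

# Per-character input x_t = [x_c,t ; x_w,t ; x_F,t] from Equation (1).
# Dimensions follow Section 5.2: 300-d GloVe character/word embeddings
# and 1024-d Flair embeddings (an assumption for this sketch).
rng = np.random.default_rng(0)
x_c_t = rng.standard_normal(300)   # character embedding
x_w_t = rng.standard_normal(300)   # word embedding
x_F_t = rng.standard_normal(1024)  # Flair embedding
x_t = np.concatenate([x_c_t, x_w_t, x_F_t])
assert x_t.shape == (1624,)
```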
| { |
| "text": "The sequence x_c = {x_c,t}_{t=1}^{M} is a sequence of character embeddings corresponding to C. Each element x_c,t of x_c is a GloVe embedding (Pennington et al., 2014) for the corresponding character in C.", |
| "cite_spans": [ |
| { |
| "start": 142, |
| "end": 167, |
| "text": "(Pennington et al., 2014)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "BiLSTM-CRF Model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "In addition to x_c, we also use x_w and x_F, which are sequences of word embeddings that integrate word meanings into the input. Here, x_w is based on the character-based word sequence, a word sequence whose length equals that of the character sequence C.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "BiLSTM-CRF Model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Let W = {w_t}_{t=1}^{M} be a character-based word sequence, where M denotes the total number of characters. Here, let S = {s_i}_{i=1}^{N} (M \u2265 N) be the word sequence in the input document. W is a variation of S created by repeating each word |s_i| times, where |s_i| denotes the number of characters in the word s_i. Note that w_t and w_{t+1} in W have the same value if they come from the same word s_i.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "BiLSTM-CRF Model", |
| "sec_num": "3" |
| }, |
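The construction of the character-based word sequence W from S described above can be sketched as follows (a minimal illustration; the function and variable names are our own):

```python
def char_based_word_sequence(words):
    """Expand a word sequence S into the character-based word sequence W
    by repeating each word s_i once per character it contains (|s_i| times)."""
    W = []
    for s in words:
        W.extend([s] * len(s))
    return W

# Example with a segmented Japanese sentence: S has N = 3 words, M = 5 characters.
S = ["東京", "で", "働く"]
W = char_based_word_sequence(S)
assert W == ["東京", "東京", "で", "働く", "働く"]
assert len(W) == sum(len(s) for s in S)  # M >= N holds
```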
| { |
| "text": "The sequence x_w = {x_w,t}_{t=1}^{M} is a sequence of word embeddings corresponding to W. Each element of {x_w,t} is also trained with GloVe (Pennington et al., 2014). The sequence x_F is an alternative version of x_w, trained with the Flair scheme (Akbik et al., 2018) instead of GloVe.", |
| "cite_spans": [ |
| { |
| "start": 141, |
| "end": 166, |
| "text": "(Pennington et al., 2014)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 249, |
| "end": 269, |
| "text": "(Akbik et al., 2018)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "BiLSTM-CRF Model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The input x is given to the LSTM network layer. In this layer, each LSTM unit updates its state for the t-th element x_t on the basis of the previous cell state c_{t-1} and hidden state h_{t-1}, and outputs the updated states h_t and c_t.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "BiLSTM-CRF Model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "h_t, c_t = LSTM(x_t, h_{t-1}, c_{t-1}; \u03b8)", |
| "eq_num": "(2)" |
| } |
| ], |
| "section": "BiLSTM-CRF Model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "In a BiLSTM network, the output \u2192h of the forward LSTM and the output \u2190h of the backward LSTM are concatenated to compute the total output \u2194h.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "BiLSTM-CRF Model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\u2194h = [\u2190h ; \u2192h]", |
| "eq_num": "(3)" |
| } |
| ], |
| "section": "BiLSTM-CRF Model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Next, the output of the LSTM network layer, \u2194h, is sent to the CRF layer. This layer performs labeling that takes into account the transition probabilities between labels, and the output sequence y is computed for x. The optimal output sequence is selected on the basis of Equation (4), where \u03d5 is the feature function and W_CRF is a weight coefficient learned in this layer.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "BiLSTM-CRF Model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "y* = arg max_y \u2211_t W_CRF \u2022 \u03d5(\u2194h, y_t, y_{t-1})    (4)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "BiLSTM-CRF Model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The element y_t of the output label sequence y represents the entity label for the character c_t. In general, a named entity may be composed of multiple characters, so we use the BIO scheme to represent named entity chunks.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "BiLSTM-CRF Model", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The proposed method is a variation of the visual-enhanced BiLSTM-CRF models, which integrate visual features into the basic BiLSTM-CRF model described in the previous section. It utilizes element-wise visual features by obtaining image data for each word in the input document through an image retrieval engine, where each word in the document is used as a search query. Because it retrieves images associated with words, the proposed method can be applied to any document, whereas the previous visual-enhanced models mentioned in Section 2 can only be applied to documents with attached images. Figure 3 shows an overview of the proposed method. The left-hand side shows the basic BiLSTM-CRF model. The right-hand side shows the proposed module for creating element-wise visual features. In this section, we describe the proposed method step by step. First, we explain the procedure for constructing queries from the input document (Section 4.1). Next, we explain how to obtain visual embeddings (Section 4.2) and how to integrate the visual features with the original text features (Section 4.3). We then update the input vector sequence shown in Equation (1) to carry the visual features to the BiLSTM-CRF (Section 4.4).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 616, |
| "end": 624, |
| "text": "Figure 3", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Proposed Method", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The given input document is transformed into a character-based word sequence W = {w_t}_{t=1}^{M} using the same procedure described in the previous section. Then, we construct a query sequence Q = {q_t}_{t=1}^{M}: if w_t is a noun, q_t is w_t itself; otherwise, q_t is empty. We focus only on nouns because other word types would be irrelevant for image retrieval. Nouns include not only proper nouns but also common nouns. The part-of-speech information is provided by the Japanese POS tagger MeCab, described below.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Retrieving Image data", |
| "sec_num": "4.1" |
| }, |
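Given a character-based word sequence and its POS tags, the query construction reduces to a one-line filter. The sketch below assumes tokens have already been POS-tagged (the paper obtains POS tags with MeCab); names and tag values are illustrative.

```python
def build_queries(char_based_words, pos_tags):
    """Section 4.1: q_t is w_t itself if w_t is a noun, otherwise empty."""
    return [w if pos == "NOUN" else "" for w, pos in zip(char_based_words, pos_tags)]

# Character-based word sequence for a short Japanese sentence,
# with one (hypothetical) POS tag per character position.
W = ["東京", "東京", "で", "働く", "働く"]
pos = ["NOUN", "NOUN", "PARTICLE", "VERB", "VERB"]
Q = build_queries(W, pos)
assert Q == ["東京", "東京", "", "", ""]  # only nouns become queries
```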
| { |
| "text": "Each q_t in Q is used as a query for image retrieval independently of the others; namely, we run the image retrieval M times. The top K retrieved images, referred to as p_t, are saved for each run. If a query q_t is empty, no retrieval is performed and p_t is also set to empty. The sequence P = {p_t}_{t=1}^{M} is sent to the next step as element-wise visual information.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Retrieving Image data", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "DenseNets (Huang et al., 2016) are among the most powerful CNN-based deep neural network architectures, especially for image recognition. A pre-trained DenseNet model is applied to the retrieved images p_t to obtain visual embeddings. First, each image in p_t is fed to the DenseNet, and the hidden representation of its final hidden layer is saved. The K saved representations are then averaged to obtain the visual embedding v_t.", |
| "cite_spans": [ |
| { |
| "start": 10, |
| "end": 30, |
| "text": "(Huang et al., 2016)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Obtaining Visual Embeddings", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "If p_t is empty, we define v_t as a zero vector.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Obtaining Visual Embeddings", |
| "sec_num": "4.2" |
| }, |
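The averaging step of Section 4.2 can be sketched with NumPy. Here, the 1024-d representations stand in for the final-hidden-layer outputs of the pre-trained DenseNet, which we do not reproduce; the helper name is our own.

```python
import numpy as np

def visual_embedding(hidden_reps, dim=1024):
    """Average the K DenseNet final-hidden-layer representations for one
    query (Section 4.2); return a zero vector when no images were retrieved."""
    if not hidden_reps:
        return np.zeros(dim)
    return np.mean(np.stack(hidden_reps), axis=0)

# K = 3 retrieved images -> 3 hypothetical 1024-d representations.
reps = [np.ones(1024), 2 * np.ones(1024), 3 * np.ones(1024)]
v_t = visual_embedding(reps)
assert np.allclose(v_t, 2.0)                   # element-wise mean
assert np.allclose(visual_embedding([]), 0.0)  # empty query -> zero vector
```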
| { |
| "text": "The obtained visual embeddings v_t are modified to balance the combination of the original text features and our visual features.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Combining Visual Embeddings", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Here, we introduce the Gate mechanism to control how much of the visual features is input to the BiLSTM-CRF model. It decreases the influence of the visual features when the images retrieved for polysemous words, abbreviations, or misspellings are inappropriate. We also present another, simpler procedure, which we compare against the Gate mechanism.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Combining Visual Embeddings", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Gate mechanism This procedure is formulated as follows.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Combining Visual Embeddings", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "g_t = \u03c3(W_g \u2022 [v_t ; x_F,t] + b); x_v,t = g_t v_t", |
| "eq_num": "(5)" |
| } |
| ], |
| "section": "Combining Visual Embeddings", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "The modified visual embedding x_v,t is obtained by multiplying v_t by g_t. The modification weight g_t is calculated from the visual feature v_t and the text feature x_F,t. We use x_F,t because the relevance between v_t and the context information around w_t needs to be assessed. Here, \u03c3() denotes the sigmoid function, and W_g and b are weight coefficients to be trained. If a visual feature provides useful context, g_t is close to 1; otherwise, it is close to 0. Note that no visual features are considered when g_t = 0.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Combining Visual Embeddings", |
| "sec_num": "4.3" |
| }, |
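Equation (5) can be sketched in NumPy as follows. We show g_t as a scalar gate (W_g a single weight row); the paper does not state the gate's dimensionality, so that choice is an assumption of this sketch.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_visual_feature(v_t, x_F_t, W_g, b):
    """Equation (5): g_t = sigmoid(W_g . [v_t ; x_F,t] + b); x_v,t = g_t * v_t.
    W_g is a single row here, making g_t a scalar (an assumption)."""
    g_t = sigmoid(W_g @ np.concatenate([v_t, x_F_t]) + b)
    return g_t * v_t, g_t

v_t = np.ones(4)    # toy 4-d visual embedding
x_F_t = np.ones(6)  # toy 6-d Flair embedding
W_g = np.zeros(10)  # zero weights -> g_t = sigmoid(b)
x_v_t, g_t = gated_visual_feature(v_t, x_F_t, W_g, 0.0)
assert np.isclose(g_t, 0.5)           # sigmoid(0) = 0.5
assert np.allclose(x_v_t, 0.5 * v_t)  # visual feature scaled by the gate
```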
| { |
| "text": "Simple This procedure is used for comparison with the Gate mechanism, where x_v,t is defined as follows.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Combining Visual Embeddings", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "x_v,t = v_t", |
| "eq_num": "(6)" |
| } |
| ], |
| "section": "Combining Visual Embeddings", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Note that this procedure is equivalent to the gate function in which g t is fixed at 1. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Combining Visual Embeddings", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Finally, the input vector sequence shown in Equation (1) is updated to Equation (7) to feed the visual features into the input layer of the BiLSTM-CRF.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Use of Visual Features", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "x_t = [x_c,t ; x_w,t ; x_F,t ; x_v,t]", |
| "eq_num": "(7)" |
| } |
| ], |
| "section": "Use of Visual Features", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "5 Experiments", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Use of Visual Features", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "We used the Extended Named Entity corpus (ENE corpus) (Hashimoto et al., 2008), which is annotated according to Sekine's Extended Named Entity Hierarchy version 7.1.0 (Sekine et al., 2002), covering more than 200 types of named entities, including a number of location name types. This corpus is one of the commonly used datasets for evaluating Japanese NER methods. No document in the corpus has attached images. The statistics of the ENE corpus are shown in Table 1 . In the experiments, we focused on six classes: Country, Province, County, City, GPE Other, and MIX. The first five classes are original classes in Sekine's definition; we included MIX to indicate cases that belong to multiple NE classes. Hereafter, we ignore MIX because such cases are rare. The statistics of each class are shown in Table 3 .", |
| "cite_spans": [ |
| { |
| "start": 54, |
| "end": 78, |
| "text": "(Hashimoto et al., 2008)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 151, |
| "end": 172, |
| "text": "(Sekine et al., 2002)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 450, |
| "end": 457, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| }, |
| { |
| "start": 808, |
| "end": 815, |
| "text": "Table 3", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Dataset", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "We constructed three models for location name recognition. The first is the baseline model and the others are the proposed models described in Section 4.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "\u2022 Baseline is the BiLSTM-CRF model described in Section 3. It does not use visual features.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "\u2022 Visual (Gate) is the proposed visualenhanced BiLSTM-CRF model that utilizes element-wise visual features with the Gate mechanism.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "\u2022 Visual (Simple) is another proposed model. This model uses the Simple text/visual combination instead of the Gate mechanism.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "For the word embeddings x_w and the character embeddings x_c, we trained 300-dimensional GloVe embeddings on the BCCWJ corpus (Maekawa et al., 2014). We used MeCab (Kudo et al., 2004) with the UniDic dictionary (Den et al., 2007) for word segmentation. The Flair embeddings (Akbik et al., 2018) were trained with 1024 dimensions on BCCWJ and ten years of Mainichi newspaper data, from 1991 to 2000.", |
| "cite_spans": [ |
| { |
| "start": 129, |
| "end": 151, |
| "text": "(Maekawa et al., 2014)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 167, |
| "end": 186, |
| "text": "(Kudo et al., 2004)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 203, |
| "end": 221, |
| "text": "(Den et al., 2007)", |
| "ref_id": null |
| }, |
| { |
| "start": 277, |
| "end": 297, |
| "text": "(Akbik et al., 2018)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "We used Google Images with the photo option for the image retrieval in the proposed models, saving the top 15 retrieved images for each query. In the dataset, about 43% of the words were nouns, for which non-empty queries could be constructed. The visual embeddings were created from the final hidden layer representation of DenseNet, whose dimension was 1024; we used the pre-trained DenseNet provided by PyTorch. We performed an approximate randomization test (Chinchor, 1992) on the F1 values; the marks \"*\" and \"**\" in the tables indicate significant differences from the baseline at the 0.05 and 0.01 levels, respectively. In training the models, we used Adam (Kingma and Ba, 2014) for optimization with a batch size of 20. We applied dropout regularization (Srivastava et al., 2014) at p = 0.5 to each node of the input layer and each output node of the LSTM layer, and used gradient clipping (Pascanu et al., 2013) at 1.0 to mitigate exploding gradients.", |
| "cite_spans": [ |
| { |
| "start": 456, |
| "end": 472, |
| "text": "(Chinchor, 1992)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 770, |
| "end": 795, |
| "text": "(Srivastava et al., 2014)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 913, |
| "end": 935, |
| "text": "(Pascanu et al., 2013)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "We used the standard BIO scheme (Tjong Kim Sang and Veenstra, 1999) for the chunk representation. Performance was measured by precision (Prec.), recall, and F1 values. Only exact matches were counted as correct; partial (lenient) matches were counted as incorrect.", |
| "cite_spans": [ |
| { |
| "start": 32, |
| "end": 67, |
| "text": "(Tjong Kim Sang and Veenstra, 1999)", |
| "ref_id": "BIBREF23" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset", |
| "sec_num": "5.1" |
| }, |
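The exact-match evaluation under the BIO schema can be sketched as follows. This is a minimal version: a predicted chunk counts as correct only if both its span and its type match a gold chunk, and stray I- tags that do not continue a chunk are ignored (one strict reading of the schema; the paper does not spell out this edge case).

```python
def bio_chunks(tags):
    """Extract (start, end, type) chunks from a BIO tag sequence.
    A stray I- tag that does not continue a chunk is ignored here."""
    chunks, start, ctype = [], None, None
    for i, tag in enumerate(tags + ["O"]):      # sentinel flushes the last chunk
        ends_chunk = tag == "O" or tag.startswith("B-") or (
            tag.startswith("I-") and ctype != tag[2:])
        if start is not None and ends_chunk:
            chunks.append((start, i, ctype))
            start, ctype = None, None
        if tag.startswith("B-"):
            start, ctype = i, tag[2:]
    return chunks

def precision_recall_f1(gold_tags, pred_tags):
    """Exact-match scoring: span AND type must both match; partial
    overlaps count as errors on both the precision and recall side."""
    gold, pred = set(bio_chunks(gold_tags)), set(bio_chunks(pred_tags))
    tp = len(gold & pred)
    prec = tp / len(pred) if pred else 0.0
    rec = tp / len(gold) if gold else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1
```

For example, predicting only the first token of a two-token city name yields a chunk that is wrong for precision and a gold chunk that is missed for recall.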
| { |
| "text": "Experimental results are shown in Table 4 . Both models using element-wise visual features outperformed the baseline model. This result suggests that element-wise visual features are powerful features for location name recognition. Furthermore, the Visual (Gate) model achieved the best F1 value of 89.67%. From the results, the Gate mechanism is an essential part of integrating element-wise visual features into the baseline model.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 34, |
| "end": 41, |
| "text": "Table 4", |
| "ref_id": "TABREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results and Discussion", |
| "sec_num": "5.3" |
| }, |
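A minimal sketch of such a gate is shown below. The parameterization here — a sigmoid gate computed from the concatenated word and visual embeddings, applied element-wise to the visual vector before concatenation — is our illustrative assumption, not necessarily the paper's exact formulation; the dimensions (100-dim word embedding, 1024-dim DenseNet visual embedding) echo the setup described earlier.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gate_fuse(word_emb, vis_emb, W, b):
    """Gated fusion of a word embedding and its visual embedding.
    The gate g in (0, 1) decides, per dimension, how much of the visual
    embedding is passed on to the BiLSTM-CRF input layer."""
    x = np.concatenate([word_emb, vis_emb])
    g = sigmoid(W @ x + b)              # gate vector, same size as vis_emb
    return np.concatenate([word_emb, g * vis_emb])
```

With zero-initialized W and b the gate is 0.5 everywhere; during training it can learn to suppress visual dimensions coming from noisy retrieved images.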
| { |
| "text": "The following example sentences are samples in the cases where the Visual (Gate) has a correct output while the Baseline has an incorrect output. Here, (1-J) and (2-J) are the original Japanese sentences, and (1-E) and (2-E) are the corresponding English translations. in December 1982.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results and Discussion", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "\u2022 (ex.2-J) City \u2022 (ex.2-E)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u2022 (ex.1-J)", |
| "sec_num": null |
| }, |
| { |
| "text": "Finally, tomorrow is the last day of our stay in France, except for the day we leave. We're going to Avignon City .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u2022 (ex.1-J)", |
| "sec_num": null |
| }, |
| { |
| "text": "In the first example, (Jamaica) is a country name to be recognized, and in the second example, (Avignon) is a city name in France. Moreover, the examples of retrieved image data corresponding to these location names are shown in Figure 4 and Figure 5 , respectively. One can see that a typical scene or object is in each image; a beach in Figure 4 and a palace in Figure 5 . It suggests that image data showing scenes or objects strongly relevant to locations provide helpful visual features. Table 5 shows the fine-grained performances of the experimental results. It shows that the City class had the most significant improvement. In fact, we confirmed that the retrieved image data corresponding to city names showed many typical characteristics of the locations, such as buildings, landscapes, and skies. In contrast, for an example of other classes, image data corresponding to country names showed various weakly related objects to the countries. For example, we found some image data showing the president of its country. These seem to suggest not location names but person names.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 229, |
| "end": 237, |
| "text": "Figure 4", |
| "ref_id": "FIGREF4" |
| }, |
| { |
| "start": 242, |
| "end": 250, |
| "text": "Figure 5", |
| "ref_id": "FIGREF5" |
| }, |
| { |
| "start": 339, |
| "end": 347, |
| "text": "Figure 4", |
| "ref_id": "FIGREF4" |
| }, |
| { |
| "start": 364, |
| "end": 373, |
| "text": "Figure 5", |
| "ref_id": "FIGREF5" |
| }, |
| { |
| "start": 494, |
| "end": 501, |
| "text": "Table 5", |
| "ref_id": "TABREF8" |
| } |
| ], |
| "eq_spans": [], |
| "section": "\u2022 (ex.1-J)", |
| "sec_num": null |
| }, |
| { |
| "text": "Here we call words that appear in test data but not in training data as unseen words. In general, it is arduous to achieve accurate NER performance on unseen words because they do not appear in training data and thus have poor textual information. Here, we investigated whether our visual features provide supplemental information to unseen words. To realize the investigation, we conducted an analysis focusing on the City class. As shown in Table 2 , the City class differs from other classes in that it has many types of mentions. It implies that there are many unseen mentions to be recognized to the City class. Therefore we compared the extraction performance between seen words and unseen words in the City class. Table 6 shows the details of the results. One can see that the unseen words achieved better performance improvements than the seen words. Furthermore, precision values improved most significantly (Seen(+5.08) \u2192 Unseen(+7.07)). This means that visual features improve the performance of not only true-positive samples but also true-negative samples. The example sentences are shown below. Each underline indicates the unseen true-negative word. And, the corresponding retrieved images are shown in Figure 6 and Figure 7 . These samples were correctly classified by the proposed method while wrongly by the baseline method.", |
| "cite_spans": [ |
| { |
| "start": 1218, |
| "end": 1230, |
| "text": "Figure 6 and", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 443, |
| "end": 450, |
| "text": "Table 2", |
| "ref_id": "TABREF2" |
| }, |
| { |
| "start": 721, |
| "end": 728, |
| "text": "Table 6", |
| "ref_id": "TABREF10" |
| }, |
| { |
| "start": 1231, |
| "end": 1239, |
| "text": "Figure 7", |
| "ref_id": "FIGREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "\u2022 (ex.1-J)", |
| "sec_num": null |
| }, |
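The seen/unseen partition used in this analysis can be expressed as a small helper. We assume mentions are compared by exact surface form; the paper does not state its matching criterion in detail.

```python
def split_seen_unseen(train_mentions, test_mentions):
    """Partition test mentions into those whose surface form occurs in the
    training data (seen) and those that do not (unseen)."""
    train_surface = set(train_mentions)
    seen = [m for m in test_mentions if m in train_surface]
    unseen = [m for m in test_mentions if m not in train_surface]
    return seen, unseen
```

The per-class precision/recall comparison in Table 6 is then computed separately over the two partitions.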
| { |
| "text": "\u2022 (ex.3-J)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u2022 (ex.1-J)", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 (ex.3-E) This is the fourth year that he has signed an \"advisory contract\" with German Bundesliga powerhouse Bayern Munich", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u2022 (ex.1-J)", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 (ex.4-J)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u2022 (ex.1-J)", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 (ex.4-E) This kind of... Cisco's management team...", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u2022 (ex.1-J)", |
| "sec_num": null |
| }, |
| { |
| "text": "Although, as discussed above, the element-wise visual features contributed to improve the performance of location name recognition, some types of errors remained. It observed that the proposed method tends to cause false-positive errors in compound words including location names. For example, -(the Kyoto Protocol) in (5-J) and (5-E) was wrongly recognized as the location name. This type of error is caused by inadequate query construction. Because every single noun in the document is regarded as the query word independently in the proposed method, both (Kyoto) and (Protocol) were used to the image retrievals. Then they led to the mistaken recognition of Kyoto. Figure 8 : An example photo retrieved by the query (Angola). This photo shows not Angola but Angora rabbit .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 668, |
| "end": 676, |
| "text": "Figure 8", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "\u2022 (ex.1-J)", |
| "sec_num": null |
| }, |
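The query construction blamed in this error analysis can be sketched as follows: every single noun becomes an independent image query, so a compound name like "Kyoto Protocol" yields a standalone query for "Kyoto". The POS labels here are illustrative, not the tagset the paper's morphological analyzer uses.

```python
def image_queries(tagged_tokens):
    """Build image-retrieval queries from a POS-tagged sentence: every
    single noun becomes an independent query, ignoring the compound it
    belongs to -- the behaviour behind the 'Kyoto Protocol' false positive."""
    return [token for token, pos in tagged_tokens if pos == "NOUN"]
```

Querying "Kyoto" alone retrieves images of the city, biasing the model toward a location label even though the mention is part of a treaty name.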
| { |
| "text": "\u2022 (ex.5-E) The government's plan to meet the Kyoto Protocol's targets for reducing greenhouse gas emissions includes a number of measures.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u2022 (ex.1-J)", |
| "sec_num": null |
| }, |
| { |
| "text": "It also observed that the proposed method tends to cause false-negative errors when inappropriate images are mixed to the retrieved images. For example, the proposed method missed recognizing (Angola) in (6-J) and (6-E) because the image data retrieved by the query (Angola) includes some inappropriate images of \"Angora rabbit\" like in Figure 8 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 337, |
| "end": 345, |
| "text": "Figure 8", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "\u2022 (ex.1-J)", |
| "sec_num": null |
| }, |
| { |
| "text": "(Angola) is not a polysemous word, but it found that the word means \"Angora rabbit\" in the specific domain 3 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u2022 (ex.1-J)", |
| "sec_num": null |
| }, |
| { |
| "text": "\u2022 (ex.6-J) Country \u2022 (ex.6-E) Cholera epidemic in Angola Country as civil war continues.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u2022 (ex.1-J)", |
| "sec_num": null |
| }, |
| { |
| "text": "In this study, we proposed a NER model that uses images corresponding to all nouns in a document as features and a Gate mechanism that controls the extent to which visual features are provided as input to the neural NER model. We conducted experiments to confirm its performance in location name recognition. Experimental results show that the proposed method achieved a higher F1value performance than the baseline model in the ENE corpus dataset, with a significant difference of p < 0.01. In future research, we will investigate whether the proposed model is effective for cases other than location names. We also aim to improve our model to be more effective by conducting elaborate query investigations that are motivated by the error analysis. The hyper-parameter K, which means the number of images per word, would be critical for obtaining valuable visual embeddings. Therefore, we will also investigate whether the larger the K, the better the location name recognition performance. The experimental results showed that the proposed method has little contributions when query words are polysemous. We would like to attempt word sequence queries with nouns and adjectives/verbs instead of single noun queries.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
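Regarding the hyper-parameter K, one simple way to combine the K per-word images into a single visual embedding is mean pooling of their DenseNet feature vectors; whether the paper uses mean pooling or another aggregation is not stated here, so this sketch is an assumption.

```python
import numpy as np

def visual_embedding(image_features):
    """Mean-pool the DenseNet feature vectors of the K images retrieved for
    one word into a single 1024-dim visual embedding (the pooling choice is
    our assumption, not a detail from the paper)."""
    feats = np.stack(image_features)    # shape (K, 1024)
    return feats.mean(axis=0)
```

Under mean pooling, a larger K averages out off-topic retrievals (such as the Angora-rabbit photos) but also dilutes the signal from the most relevant images.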
| { |
| "text": "In this study, we found a number of annotation errors in the ENE corpus. We carefully observed 328 mentions related to the location name and corrected them before conducting our experiments.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://images.google.com/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Note that the words \"Angola\" and \"Angora\" are transliterated into the same Japanese string \" \" although they are different in English.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We thank anonymous reviewers for their responsible attitude and helpful comments. This work was supported by JSPS KAKENHI Grant Number JP18K11982.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Contextual String Embeddings for Sequence Labeling", |
| "authors": [ |
| { |
| "first": "Alan", |
| "middle": [], |
| "last": "Akbik", |
| "suffix": "" |
| }, |
| { |
| "first": "Duncan", |
| "middle": [], |
| "last": "Blythe", |
| "suffix": "" |
| }, |
| { |
| "first": "Roland", |
| "middle": [], |
| "last": "Vollgraf", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 27th International Conference on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "1638--1649", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alan Akbik, Duncan Blythe, and Roland Vollgraf. 2018. Contextual String Embeddings for Sequence Labeling. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1638-1649.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "and Cina Motamed. 2020. A multimodal deep learning approach for named entity recognition from social media", |
| "authors": [ |
| { |
| "first": "Meysam", |
| "middle": [], |
| "last": "Asgari-Chenaghlu", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [ |
| "Reza" |
| ], |
| "last": "Feizi-Derakhshi", |
| "suffix": "" |
| }, |
| { |
| "first": "Leili", |
| "middle": [], |
| "last": "Farzinvash", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [ |
| "A" |
| ], |
| "last": "Balafar", |
| "suffix": "" |
| }, |
| { |
| "first": "Cina", |
| "middle": [], |
| "last": "Motamed", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Meysam Asgari-Chenaghlu, M. Reza Feizi-Derakhshi, Leili Farzinvash, M. A. Balafar, and Cina Mo- tamed. 2020. A multimodal deep learning approach for named entity recognition from social media. https://arxiv.org/abs/2001.06888.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "The Statistical Significance of the MUC-4 Results", |
| "authors": [ |
| { |
| "first": "Nancy", |
| "middle": [], |
| "last": "Chinchor", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "Proceedings of the Fourth Message Uunderstanding Conference (MUC-4)", |
| "volume": "", |
| "issue": "", |
| "pages": "30--50", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nancy Chinchor. 1992. The Statistical Signifi- cance of the MUC-4 Results. In Proceedings of the Fourth Message Uunderstanding Conference (MUC-4), page 30-50.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Information extraction", |
| "authors": [ |
| { |
| "first": "Jim", |
| "middle": [], |
| "last": "Cowie", |
| "suffix": "" |
| }, |
| { |
| "first": "Wendy", |
| "middle": [], |
| "last": "Lehnert", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Communications of the ACM", |
| "volume": "39", |
| "issue": "1", |
| "pages": "80--91", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jim Cowie and Wendy Lehnert. 1996. Information ex- traction. Communications of the ACM, 39(1):80-91.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Kiyotaka Uchimoto, and Hanae Koiso. 2007. electronic dictionary, morphological analysis, database system, uniformity of units, identity of indexes", |
| "authors": [ |
| { |
| "first": "Yasuharu", |
| "middle": [], |
| "last": "Den", |
| "suffix": "" |
| }, |
| { |
| "first": "Toshinobu", |
| "middle": [], |
| "last": "Ogiso", |
| "suffix": "" |
| }, |
| { |
| "first": "Hideki", |
| "middle": [], |
| "last": "Ogura", |
| "suffix": "" |
| }, |
| { |
| "first": "Atsushi", |
| "middle": [], |
| "last": "Yamada", |
| "suffix": "" |
| }, |
| { |
| "first": "Nobuaki", |
| "middle": [], |
| "last": "Minematsu", |
| "suffix": "" |
| }, |
| { |
| "first": "Kiyotaka", |
| "middle": [], |
| "last": "Uchimoto", |
| "suffix": "" |
| }, |
| { |
| "first": "Hanae", |
| "middle": [], |
| "last": "Koiso", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Japanese Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "101--123", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yasuharu Den, Toshinobu Ogiso, Hideki Ogura, At- sushi Yamada, Nobuaki Minematsu, Kiyotaka Uchi- moto, and Hanae Koiso. 2007. electronic dictio- nary, morphological analysis, database system, uni- formity of units, identity of indexes. Japanese Lin- guistics, pages 101-123.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Message Understanding Conference-6", |
| "authors": [ |
| { |
| "first": "Ralph", |
| "middle": [], |
| "last": "Grishman", |
| "suffix": "" |
| }, |
| { |
| "first": "Beth", |
| "middle": [], |
| "last": "Sundheim", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Proceedings of the 16th conference on Computational linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "466--471", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ralph Grishman and Beth Sundheim. 1996. Message Understanding Conference-6. In Proceedings of the 16th conference on Computational linguistics, page 466-471.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Constructing extended named entity annotated corpora", |
| "authors": [ |
| { |
| "first": "Taiichi", |
| "middle": [], |
| "last": "Hashimoto", |
| "suffix": "" |
| }, |
| { |
| "first": "Takashi", |
| "middle": [], |
| "last": "Inui", |
| "suffix": "" |
| }, |
| { |
| "first": "Koji", |
| "middle": [], |
| "last": "Murakami", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "IPSJ SIG Notes", |
| "volume": "", |
| "issue": "", |
| "pages": "113--120", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Taiichi Hashimoto, Takashi Inui, and Koji Murakami. 2008. Constructing extended named entity anno- tated corpora. IPSJ SIG Notes, pages 113-120.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Bidirectional LSTM-CRF Models for Sequence Tagging", |
| "authors": [ |
| { |
| "first": "Zhiheng", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| }, |
| { |
| "first": "Wei", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "Kai", |
| "middle": [], |
| "last": "Yu", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirec- tional LSTM-CRF Models for Sequence Tagging. https://arxiv.org/abs/1508.01991.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Attention in Character-Based BiLSTM-CRF for Chinese Named Entity Recognition", |
| "authors": [ |
| { |
| "first": "Yaozong", |
| "middle": [], |
| "last": "Jia", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaopan", |
| "middle": [], |
| "last": "Ma", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "ICMAI 2019 Proceedings of the 2019 4th International Conference on Mathematics and Artificial Intelligence", |
| "volume": "", |
| "issue": "", |
| "pages": "1--4", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yaozong Jia and Xiaopan Ma. 2019. Attention in Character-Based BiLSTM-CRF for Chinese Named Entity Recognition. In ICMAI 2019 Proceedings of the 2019 4th International Conference on Mathe- matics and Artificial Intelligence, page 1-4.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Adam: A method for stochastic optimization", |
| "authors": [ |
| { |
| "first": "Diederik", |
| "middle": [ |
| "P." |
| ], |
| "last": "Kingma", |
| "suffix": "" |
| }, |
| { |
| "first": "Jimmy", |
| "middle": [], |
| "last": "Ba", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. https://arxiv.org/abs/1412.6980.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Applying Conditional Random Fields to Japanese Morphologiaical Analysis", |
| "authors": [ |
| { |
| "first": "Taku", |
| "middle": [], |
| "last": "Kudo", |
| "suffix": "" |
| }, |
| { |
| "first": "Kaoru", |
| "middle": [], |
| "last": "Yamamoto", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuji", |
| "middle": [], |
| "last": "Matsumoto", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "IPSJ SIG Notes", |
| "volume": "161", |
| "issue": "", |
| "pages": "89--96", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Taku Kudo, Kaoru Yamamoto, and Yuji Matsumoto. 2004. Applying Conditional Random Fields to Japanese Morphologiaical Analysis. IPSJ SIG Notes, 161:89-96.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Neural Architectures for Named Entity Recognition", |
| "authors": [ |
| { |
| "first": "Guillaume", |
| "middle": [], |
| "last": "Lample", |
| "suffix": "" |
| }, |
| { |
| "first": "Miguel", |
| "middle": [], |
| "last": "Ballesteros", |
| "suffix": "" |
| }, |
| { |
| "first": "Sandeep", |
| "middle": [], |
| "last": "Subramanian", |
| "suffix": "" |
| }, |
| { |
| "first": "Kazuya", |
| "middle": [], |
| "last": "Kawakami", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Dyer", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "260--270", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Guillaume Lample, Miguel Ballesteros, Sandeep Sub- ramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural Architectures for Named Entity Recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 260-270.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "A Survey on Deep Learning for Named Entity Recognition", |
| "authors": [ |
| { |
| "first": "Jing", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Aixin", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| }, |
| { |
| "first": "Jianglei", |
| "middle": [], |
| "last": "Han", |
| "suffix": "" |
| }, |
| { |
| "first": "Chenliang", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jing Li, Aixin Sun, Jianglei Han, and Chen- liang Li. 2018. A Survey on Deep Learning for Named Entity Recognition.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Visual Attention Model for Name Tagging in Multimodal Social Media", |
| "authors": [ |
| { |
| "first": "Di", |
| "middle": [], |
| "last": "Lu", |
| "suffix": "" |
| }, |
| { |
| "first": "Leonardo", |
| "middle": [], |
| "last": "Neves", |
| "suffix": "" |
| }, |
| { |
| "first": "Vitor", |
| "middle": [], |
| "last": "Carvalho", |
| "suffix": "" |
| }, |
| { |
| "first": "Ning", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Heng", |
| "middle": [], |
| "last": "Ji", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "1990--1999", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Di Lu, Leonardo Neves, Vitor Carvalho, Ning Zhang, and Heng Ji. 2018. Visual Attention Model for Name Tagging in Multimodal Social Media. In Pro- ceedings of the 56th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 1990- 1999.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Balanced corpus of contemporary written Japanese. Language Resources and Evaluation", |
| "authors": [ |
| { |
| "first": "Kikuo", |
| "middle": [], |
| "last": "Maekawa", |
| "suffix": "" |
| }, |
| { |
| "first": "Makoto", |
| "middle": [], |
| "last": "Yamazaki", |
| "suffix": "" |
| }, |
| { |
| "first": "Toshinobu", |
| "middle": [], |
| "last": "Ogiso", |
| "suffix": "" |
| }, |
| { |
| "first": "Takehiko", |
| "middle": [], |
| "last": "Maruyama", |
| "suffix": "" |
| }, |
| { |
| "first": "Hideki", |
| "middle": [], |
| "last": "Ogura", |
| "suffix": "" |
| }, |
| { |
| "first": "Wakako", |
| "middle": [], |
| "last": "Kashino", |
| "suffix": "" |
| }, |
| { |
| "first": "Hanae", |
| "middle": [], |
| "last": "Koiso", |
| "suffix": "" |
| }, |
| { |
| "first": "Masaya", |
| "middle": [], |
| "last": "Yamaguchi", |
| "suffix": "" |
| }, |
| { |
| "first": "Makiro", |
| "middle": [], |
| "last": "Tanaka", |
| "suffix": "" |
| }, |
| { |
| "first": "Yasuharu", |
| "middle": [], |
| "last": "Den", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "48", |
| "issue": "", |
| "pages": "345--371", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kikuo Maekawa, Makoto Yamazaki, Toshinobu Ogiso, Takehiko Maruyama, Hideki Ogura, Wakako Kashino, Hanae Koiso, Masaya Yamaguchi, Makiro Tanaka, and Yasuharu Den. 2014. Balanced corpus of contemporary written Japanese. Language Re- sources and Evaluation, 48:345-371.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Automatic construction of complex features in Conditional Random Fields for Named Entities Recognition", |
| "authors": [ |
| { |
| "first": "Micha\u0142", |
| "middle": [], |
| "last": "Marci\u0144czuk", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the International Conference Recent Advances in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "413--419", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Micha\u0142 Marci\u0144czuk. 2015. Automatic construction of complex features in Conditional Random Fields for Named Entities Recognition. In Proceedings of the International Conference Recent Advances in Natu- ral Language Processing, pages 413-419.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Character-based Bidirectional LSTM-CRF with words and characters for Japanese Named Entity Recognition", |
| "authors": [ |
| { |
| "first": "Shotaro", |
| "middle": [], |
| "last": "Misawa", |
| "suffix": "" |
| }, |
| { |
| "first": "Motoki", |
| "middle": [], |
| "last": "Taniguchi", |
| "suffix": "" |
| }, |
| { |
| "first": "Yasuhide", |
| "middle": [], |
| "last": "Miura", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the First Workshop on Subword and Character Level Models in NLP", |
| "volume": "", |
| "issue": "", |
| "pages": "97--102", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shotaro Misawa, Motoki Taniguchi, and Yasuhide Miura. 2017. Character-based Bidirectional LSTM- CRF with words and characters for Japanese Named Entity Recognition. In Proceedings of the First Workshop on Subword and Character Level Models in NLP, pages 97-102.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Multimodal Named Entity Recognition for Short Social Media Posts", |
| "authors": [ |
| { |
| "first": "Seungwhan", |
| "middle": [], |
| "last": "Moon", |
| "suffix": "" |
| }, |
| { |
| "first": "Leonardo", |
| "middle": [], |
| "last": "Neves", |
| "suffix": "" |
| }, |
| { |
| "first": "Vitor", |
| "middle": [], |
| "last": "Carvalho", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "852--860", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Seungwhan Moon, Leonardo Neves, and Vitor Car- valho. 2019. Multimodal Named Entity Recognition for Short Social Media Posts. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, pages 852-860.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "On the difficulty of training recurrent neural networks", |
| "authors": [ |
| { |
| "first": "Razvan", |
| "middle": [], |
| "last": "Pascanu", |
| "suffix": "" |
| }, |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 30th International Conference on International Conference on Machine Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "1310--1318", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neu- ral networks. In Proceedings of the 30th Interna- tional Conference on International Conference on Machine Learning, page III-1310-III-1318.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "GloVe: Global Vectors for Word Representation", |
| "authors": [ |
| { |
| "first": "Jeffrey", |
| "middle": [], |
| "last": "Pennington", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1532--1543", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global Vectors for Word Representation. In Proceedings of the 2014 Con- ference on Empirical Methods in Natural Language Processing, pages 1532-1543.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Extended named entity hierarchy", |
| "authors": [ |
| { |
| "first": "Satoshi", |
| "middle": [], |
| "last": "Sekine", |
| "suffix": "" |
| }, |
| { |
| "first": "Kiyoshi", |
| "middle": [], |
| "last": "Sudo", |
| "suffix": "" |
| }, |
| { |
| "first": "Chikashi", |
| "middle": [], |
| "last": "Nobata", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of the Third International Conference on Language Resources and Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Satoshi Sekine, Kiyoshi Sudo, and Chikashi Nobata. 2002. Extended named entity hierarchy. In Pro- ceedings of the Third International Conference on Language Resources and Evaluation.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Dropout: A simple way to prevent neural networks from overfitting", |
| "authors": [ |
| { |
| "first": "Nitish", |
| "middle": [], |
| "last": "Srivastava", |
| "suffix": "" |
| }, |
| { |
| "first": "Geoffrey", |
| "middle": [], |
| "last": "Hinton", |
| "suffix": "" |
| }, |
| { |
| "first": "Alex", |
| "middle": [], |
| "last": "Krizhevsky", |
| "suffix": "" |
| }, |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| }, |
| { |
| "first": "Ruslan", |
| "middle": [], |
| "last": "Salakhutdinov", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Journal of Machine Learning Research", |
| "volume": "", |
| "issue": "", |
| "pages": "1929--1958", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Re- search, page 1929-1958.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Representing Text Chunks", |
| "authors": [ |
| { |
| "first": "Erik", |
| "middle": [ |
| "F." |
| ], |
| "last": "Tjong Kim Sang", |
| "suffix": "" |
| }, |
| { |
| "first": "Jorn", |
| "middle": [], |
| "last": "Veenstra", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Proceedings of the Ninth Conference of the European Chapter of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "173--179", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Erik F. Tjong Kim Sang and Jorn Veenstra. 1999. Rep- resenting Text Chunks. In Proceedings of the Ninth Conference of the European Chapter of the Associa- tion for Computational Linguistics, page 173-179.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "SemEval-2019 Task 12: Toponym Resolution in Scientific Papers", |
| "authors": [ |
| { |
| "first": "Davy", |
| "middle": [], |
| "last": "Weissenbacher", |
| "suffix": "" |
| }, |
| { |
| "first": "Arjun", |
| "middle": [], |
| "last": "Magge", |
| "suffix": "" |
| }, |
| { |
| "first": "O'", |
| "middle": [], |
| "last": "Karen", |
| "suffix": "" |
| }, |
| { |
| "first": "Matthew", |
| "middle": [], |
| "last": "Connor", |
| "suffix": "" |
| }, |
| { |
| "first": "Graciela", |
| "middle": [], |
| "last": "Scotch", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Gonzalez-Hernandez", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 13th International Workshop on Semantic Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "907--916", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Davy Weissenbacher, Arjun Magge, Karen O'Connor, Matthew Scotch, and Graciela Gonzalez-Hernandez. 2019. SemEval-2019 Task 12: Toponym Resolu- tion in Scientific Papers. In Proceedings of the 13th International Workshop on Semantic Evalua- tion, pages 907-916.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Adaptive co-attention network for named entity recognition in tweets", |
| "authors": [ |
| { |
| "first": "Qi", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Jinlan", |
| "middle": [], |
| "last": "Fu", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaoyu", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Xuanjing", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18)", |
| "volume": "", |
| "issue": "", |
| "pages": "5674--5681", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Qi Zhang, Jinlan Fu, Xiaoyu Liu, and Xuanjing Huang. 2018. Adaptive co-attention network for named en- tity recognition in tweets. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intel- ligence, (AAAI-18), pages 5674-5681.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "type_str": "figure", |
| "text": "Images of urban cities: (left) Shenzhen, China, (right) Dubai, UAE.", |
| "uris": null |
| }, |
| "FIGREF1": { |
| "num": null, |
| "type_str": "figure", |
| "text": "Images of rural villages: (left) Manali, India, (right) Hakone, Japan.", |
| "uris": null |
| }, |
| "FIGREF2": { |
| "num": null, |
| "type_str": "figure", |
| "text": "The overview of the proposed model. The left-hand side is the basic BiLSTM-CRF model. The righthand side is the proposed module to create element-wise visual features.", |
| "uris": null |
| }, |
| "FIGREF3": { |
| "num": null, |
| "type_str": "figure", |
| "text": "Country\u2022 (ex.1-E) Signed by 117 countries and two regions at the Final Protocol and Convention Signing Conference in Jamaica Country", |
| "uris": null |
| }, |
| "FIGREF4": { |
| "num": null, |
| "type_str": "figure", |
| "text": "An example photo retrieved by the query (Jamaica).", |
| "uris": null |
| }, |
| "FIGREF5": { |
| "num": null, |
| "type_str": "figure", |
| "text": "An example photo retrieved by the query (Avignon).", |
| "uris": null |
| }, |
| "FIGREF6": { |
| "num": null, |
| "type_str": "figure", |
| "text": "An example photo retrieved by the query (Bundesliga).", |
| "uris": null |
| }, |
| "FIGREF7": { |
| "num": null, |
| "type_str": "figure", |
| "text": "An example photo retrieved by the query (Cisco).", |
| "uris": null |
| }, |
| "TABREF1": { |
| "num": null, |
| "content": "<table><tr><td colspan=\"3\">: Statistics of ENE corpus</td></tr><tr><td>class</td><td colspan=\"2\">#types #mentions</td></tr><tr><td>City</td><td>2,936</td><td>12,687</td></tr><tr><td>Country</td><td>431</td><td>21,340</td></tr><tr><td>County</td><td>150</td><td>248</td></tr><tr><td>GPE Other</td><td>95</td><td>1,203</td></tr><tr><td>Province</td><td>381</td><td>8,861</td></tr><tr><td>MIX</td><td>21</td><td>66</td></tr></table>", |
| "html": null, |
| "type_str": "table", |
| "text": "" |
| }, |
| "TABREF2": { |
| "num": null, |
| "content": "<table/>", |
| "html": null, |
| "type_str": "table", |
| "text": "Statistics of location name classes" |
| }, |
| "TABREF3": { |
| "num": null, |
| "content": "<table><tr><td>1 .</td></tr><tr><td>Before training, we divided the dataset into</td></tr><tr><td>three parts: training, develop and test in the ratio</td></tr></table>", |
| "html": null, |
| "type_str": "table", |
| "text": "" |
| }, |
| "TABREF4": { |
| "num": null, |
| "content": "<table><tr><td>: Statistics of dataset</td></tr><tr><td>of 70:15:15. The statistics of dataset are shown in</td></tr></table>", |
| "html": null, |
| "type_str": "table", |
| "text": "" |
| }, |
| "TABREF6": { |
| "num": null, |
| "content": "<table/>", |
| "html": null, |
| "type_str": "table", |
| "text": "Experimental Results" |
| }, |
| "TABREF8": { |
| "num": null, |
| "content": "<table/>", |
| "html": null, |
| "type_str": "table", |
| "text": "Performance for each class" |
| }, |
| "TABREF9": { |
| "num": null, |
| "content": "<table><tr><td/><td>Model</td><td>Precision</td><td>Recall</td><td>F1</td></tr><tr><td>Seen</td><td>Baseline</td><td>84.99</td><td>79.29</td><td>82.04</td></tr><tr><td colspan=\"2\">Visual (Unseen Baseline</td><td>68.54</td><td>54.95</td><td>61.0</td></tr><tr><td/><td>Visual (</td><td/><td/><td/></tr><tr><td/><td/><td/><td>\u2022 (ex.5-J)</td><td/></tr></table>", |
| "html": null, |
| "type_str": "table", |
| "text": "Gate) 90.07 (+5.08) 80.05 (+0.76) 84.77 (+2.73) Gate) 75.61 (+7.07) 55.86 (+0.91) 64.25 (+3.25)" |
| }, |
| "TABREF10": { |
| "num": null, |
| "content": "<table/>", |
| "html": null, |
| "type_str": "table", |
| "text": "Comparison between seen & unseen mentions for City class" |
| } |
| } |
| } |
| } |