{ "paper_id": "P15-1007", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T09:08:09.887561Z" }, "title": "MultiGranCNN: An Architecture for General Matching of Text Chunks on Multiple Levels of Granularity", "authors": [ { "first": "Wenpeng", "middle": [], "last": "Yin", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Munich", "location": { "country": "Germany" } }, "email": "wenpeng@cis.uni-muenchen.de" }, { "first": "Hinrich", "middle": [], "last": "Sch\u00fctze", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Munich", "location": { "country": "Germany" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present MultiGranCNN, a general deep learning architecture for matching text chunks. MultiGranCNN supports multigranular comparability of representations: shorter sequences in one chunk can be directly compared to longer sequences in the other chunk. MultiGranCNN also contains a flexible and modularized match feature component that is easily adaptable to different types of chunk matching. We demonstrate state-of-the-art performance of MultiGranCNN on clause coherence and paraphrase identification tasks.", "pdf_parse": { "paper_id": "P15-1007", "_pdf_hash": "", "abstract": [ { "text": "We present MultiGranCNN, a general deep learning architecture for matching text chunks. MultiGranCNN supports multigranular comparability of representations: shorter sequences in one chunk can be directly compared to longer sequences in the other chunk. MultiGranCNN also contains a flexible and modularized match feature component that is easily adaptable to different types of chunk matching. 
We demonstrate state-of-the-art performance of MultiGranCNN on clause coherence and paraphrase identification tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Many natural language processing (NLP) tasks can be posed as classifying the relationship between two TEXTCHUNKS (cf. Bordes et al. (2014b)) where a TEXTCHUNK can be a sentence, a clause, a paragraph or any other sequence of words that forms a unit.", "cite_spans": [ { "start": 120, "end": 141, "text": "Bordes et al. (2014b)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Paraphrasing (Figure 1 , top) is one task that we address in this paper and that can be formalized as classifying a TEXTCHUNK relation. The two classes correspond to the sentences being (e.g., the pair (p, q+)) or not being (e.g., the pair (p, q-)) paraphrases of each other. Another task we look at is clause coherence (Figure 1 , bottom). Here the two TEXTCHUNK relation classes correspond to the second clause being (e.g., the pair (x, y+)) or not being (e.g., the pair (x, y-)) a discourse-coherent continuation of the first clause. Other tasks that can be formalized as TEXTCHUNK relations are question answering (QA) (is the second chunk an answer to the first?), textual inference (does the first chunk imply the second?) and machine translation (are the two chunks translations of each other?). p PDC will also almost certainly fan the flames of speculation about Longhorn's release.", "cite_spans": [], "ref_spans": [ { "start": 13, "end": 22, "text": "(Figure 1", "ref_id": "FIGREF0" }, { "start": 324, "end": 333, "text": "(Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "q + PDC will also almost certainly reignite speculation about release dates of Microsoft's new products. 
q \u2212 PDC is indifferent to the release of Longhorn.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "x The dollar suffered its worst one-day loss in a month, y + falling to 1.7717 marks . . . from 1.7925 marks yesterday. y \u2212 up from 112.78 yen in late New York trading yesterday. In this paper, we present MultiGranCNN, a general architecture for TEXTCHUNK relation classification. MultiGranCNN can be applied to a broad range of different TEXTCHUNK relations. This is a challenge because natural language has a complex structure - both sequential and hierarchical - and because this structure is usually not parallel in the two chunks that must be matched, further increasing the difficulty of the task. A successful detection algorithm therefore needs to capture not only the internal structure of TEXTCHUNKS, but also the rich pattern of their interactions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "MultiGranCNN is based on two innovations that are critical for successful TEXTCHUNK relation classification. First, the architecture is designed to ensure multigranular comparability. For general matching, we need the ability to match short sequences in one chunk with long sequences in the other chunk. For example, what is expressed by a single word in one chunk (\"reignite\" in q + in the figure) may be expressed by a sequence of several words in its paraphrase (\"fan the flames of\" in p). To meet this objective, we learn representations for words, phrases and the entire sentence that are all mutually comparable; in particular, these representations all have the same dimensionality and live in the same space.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Most prior work (e.g., Blacoe and Lapata (2012; Hu et al. 
(2014) ) has neglected the need for multigranular comparability and performed matching within fixed levels only, e.g., only words were matched with words or only sentences with sentences. For a general solution to the problem of matching, we instead need the ability to match a unit on a lower level of granularity in one chunk with a unit on a higher level of granularity in the other chunk. Unlike (Socher et al., 2011) , our model does not rely on parsing and it can more exhaustively search the hypothesis space of possible matchings, including matchings that correspond to conflicting segmentations of the input chunks (see Section 5).", "cite_spans": [ { "start": 23, "end": 47, "text": "Blacoe and Lapata (2012;", "ref_id": "BIBREF1" }, { "start": 48, "end": 64, "text": "Hu et al. (2014)", "ref_id": "BIBREF11" }, { "start": 458, "end": 479, "text": "(Socher et al., 2011)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our second contribution is that MultiGranCNN contains a flexible and modularized match feature component. This component computes the basic features that measure how well phrases of the two chunks match. We investigate three different match feature models that demonstrate that a wide variety of different match feature models can be implemented. The match feature models can be swapped in and out of MultiGranCNN, depending on the characteristics of the task to be solved.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Prior work that has addressed matching tasks has usually focused on a single task like QA (Bordes et al., 2014a; Yu et al., 2014) or paraphrasing (Socher et al., 2011; Madnani et al., 2012; Ji and Eisenstein, 2013) . The ARC architectures proposed by Hu et al. 
(2014) are intended to be more general, but seem to be somewhat limited in their flexibility to model different matching relations; e.g., they do not perform well for paraphrasing.", "cite_spans": [ { "start": 90, "end": 112, "text": "(Bordes et al., 2014a;", "ref_id": "BIBREF3" }, { "start": 113, "end": 129, "text": "Yu et al., 2014)", "ref_id": "BIBREF28" }, { "start": 146, "end": 167, "text": "(Socher et al., 2011;", "ref_id": "BIBREF25" }, { "start": 168, "end": 189, "text": "Madnani et al., 2012;", "ref_id": "BIBREF20" }, { "start": 190, "end": 214, "text": "Ji and Eisenstein, 2013)", "ref_id": "BIBREF13" }, { "start": 251, "end": 267, "text": "Hu et al. (2014)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Different match feature models may also be required by factors other than the characteristics of the task. If the amount of labeled training data is small, then we may prefer a match feature model with few parameters that is robust against overfitting. If there is lots of training data, then a richer match feature model may be the right choice. 
This motivates the need for an architecture like MultiGranCNN that allows selection of the task-appropriate match feature model from a range of different models and its seamless integration into the architecture.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In the remainder of the paper, Section 2 introduces related work; Section 3 gives an overview of the proposed MultiGranCNN; Section 4 shows how to learn representations for generalized phrases (g-phrases); Section 5 describes the three matching models: DIRECTSIM, INDIRECTSIM and CONCAT; Section 6 describes the two 2D pooling methods: grid-based pooling and phrase-focused pooling; Section 7 describes the match feature CNN; Section 8 summarizes the architecture of MultiGranCNN; Section 9 presents experiments; and Section 10 concludes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Paraphrase identification (PI) is a typical sentence matching task and has been studied frequently (Qiu et al., 2006; Blacoe and Lapata, 2012; Madnani et al., 2012; Ji and Eisenstein, 2013) . Socher et al. (2011) utilized parsing to model the hierarchical structure of sentences and used unfolding recursive autoencoders to learn representations for single words and phrases acting as non-leaf nodes in the tree. The main difference from MultiGranCNN is that we stack multiple convolution layers to model flexible phrases and learn representations for them, and aim to address more general sentence correspondence. Bach et al. (2014) claimed that elementary discourse units obtained by segmenting sentences play an important role in paraphrasing. 
Their conclusion also endorses (Socher et al., 2011) 's and our work, for both take interactions between component phrases into account.", "cite_spans": [ { "start": 105, "end": 123, "text": "(Qiu et al., 2006;", "ref_id": "BIBREF24" }, { "start": 124, "end": 148, "text": "Blacoe and Lapata, 2012;", "ref_id": "BIBREF1" }, { "start": 149, "end": 170, "text": "Madnani et al., 2012;", "ref_id": "BIBREF20" }, { "start": 171, "end": 195, "text": "Ji and Eisenstein, 2013)", "ref_id": "BIBREF13" }, { "start": 198, "end": 218, "text": "Socher et al. (2011)", "ref_id": "BIBREF25" }, { "start": 781, "end": 802, "text": "(Socher et al., 2011)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "QA is another representative sentence matching problem. Yu et al. (2014) modeled sentence representations in a simplified CNN, finally finding the match score by projecting question and answer candidates into the same space. Other relevant QA work includes (Bordes et al., 2014c; Bordes et al., 2014a; Yang et al., 2014; Iyyer et al., 2014) For more general matching, Chopra et al. (2005) and Liu (2013) used a Siamese architecture of shared-weight neural networks (NNs) to model two objects simultaneously, matching their representations and then learning a specific type of sentence relation. We adopt parts of their architecture, but we model phrase representations as well as sentence representations.", "cite_spans": [ { "start": 56, "end": 72, "text": "Yu et al. (2014)", "ref_id": "BIBREF28" }, { "start": 257, "end": 279, "text": "(Bordes et al., 2014c;", "ref_id": "BIBREF5" }, { "start": 280, "end": 301, "text": "Bordes et al., 2014a;", "ref_id": "BIBREF3" }, { "start": 302, "end": 320, "text": "Yang et al., 2014;", "ref_id": "BIBREF27" }, { "start": 321, "end": 340, "text": "Iyyer et al., 2014)", "ref_id": "BIBREF12" }, { "start": 368, "end": 388, "text": "Chopra et al. 
(2005)", "ref_id": "BIBREF7" }, { "start": 393, "end": 403, "text": "Liu (2013)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Li and Xu (2012) gave a comprehensive introduction to query-document matching and argued that query and document match at different levels: term, phrase, word sense, topic, structure etc. This also applies to sentence matching. Lu and Li (2013) addressed matching of short texts. Interactions between the two texts were obtained via LDA (Blei et al., 2003) and were then the basis for computing a matching score. Compared to MultiGranCNN, drawbacks of this approach are that LDA parameters are not optimized for the specific task and that the interactions are formed on the level of single words only. Gao et al. (2014) modeled interestingness between two documents with deep NNs. They mapped source-target document pairs to feature vectors in a latent space in such a way that the distance between the source document and its corresponding interesting target in that space was minimized. Interestingness is more like topic relevance, based mainly on the aggregated meaning of keywords, as opposed to more structural relationships as is the case for paraphrasing and clause coherence.", "cite_spans": [ { "start": 228, "end": 244, "text": "Lu and Li (2013)", "ref_id": "BIBREF19" }, { "start": 333, "end": 356, "text": "LDA (Blei et al., 2003)", "ref_id": null }, { "start": 602, "end": 619, "text": "Gao et al. (2014)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We briefly discussed (Hu et al., 2014) 's ARC in Section 1. 
MultiGranCNN is partially inspired by ARC, but introduces multigranular comparability (thus enabling crosslevel matching) and supports a wider range of match feature models.", "cite_spans": [ { "start": 21, "end": 38, "text": "(Hu et al., 2014)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Our unsupervised learning component (Section 4, last paragraph) resembles word2vec CBOW (Mikolov et al., 2013) , but learns representations of TEXTCHUNKS as well as words. It also resembles PV-DM (Le and Mikolov, 2014), but our TEXTCHUNK representation is derived using a hierarchical architecture based on convolution and pooling.", "cite_spans": [ { "start": 88, "end": 110, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We use convolution-plus-pooling in two different components of MultiGranCNN. The first component, the generalized phrase CNN (gpCNN), will be introduced in Section 4. This component learns representations for generalized phrases (gphrases) where a generalized phrase is a general term for subsequences of all granularities: words, short phrases, long phrases and the sentence itself. The gpCNN architecture has L layers of convolution, corresponding (for L = 2) to words, short phrases, long phrases and the sentence. We test different values of L in our experiments. We train gpCNN on large data in an unsupervised manner and then fine-tune it on labeled training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview of MultiGranCNN", "sec_num": "3" }, { "text": "Using a Siamese configuration, two copies of gpCNN, one for each of the two input TEXTCHUNKS, are the input to the match feature model, presented in Section 5. 
This model produces s 1 \u00d7 s 2 matching features, one for each pair of g-phrases in the two chunks, where s 1 , s 2 are the number of g-phrases in the two chunks, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview of MultiGranCNN", "sec_num": "3" }, { "text": "The s 1 \u00d7s 2 match feature matrix is first reduced to a fixed size by dynamic 2D pooling. The resulting fixed-size matrix is then the input to the second convolution-plus-pooling component, the match feature CNN (mfCNN), whose output is fed to a multilayer perceptron (MLP) that produces the final match score. Section 6 will give details.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview of MultiGranCNN", "sec_num": "3" }, { "text": "We use convolution-plus-pooling for both word sequences and match features because we want to compute increasingly abstract features at multiple levels of granularity. To ensure that g-phrases are mutually comparable when computing the s 1 \u00d7 s 2 match feature matrix, we impose the constraint that all g-phrase representations live in the same space and have the same dimensionality. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overview of MultiGranCNN", "sec_num": "3" }, { "text": "We use several stacked blocks, i.e., convolution-plus-pooling layers, to extract increasingly abstract features of the TEXTCHUNK. The input to the first block is the sequence of words of the TEXTCHUNK, represented by CW (Collobert and Weston, 2008) embeddings. Given a TEXTCHUNK of length |S|, let vector c i \u2208 R wd be the concatenated embeddings of words v i\u2212w+1 , . . . , v i where w = 5 is the filter width, d = 50 is the dimensionality of CW embeddings and 0 < i < |S| + w. Embeddings for words v i , i < 1 and i > |S|, are set to zero. 
We then generate the representation", "cite_spans": [ { "start": 208, "end": 236, "text": "(Collobert and Weston, 2008)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "gpCNN: Learning Representations for g-Phrases", "sec_num": "4" }, { "text": "p i \u2208 R d of the g-phrase v i\u2212w+1 , . . . , v i using the convolution matrix W l \u2208 R d\u00d7wd :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "gpCNN: Learning Representations for g-Phrases", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "p i = tanh(W l c i + b l )", "eq_num": "(1)" } ], "section": "gpCNN: Learning Representations for g-Phrases", "sec_num": "4" }, { "text": "where block index l = 1 and bias b l \u2208 R d . We use wide convolution (i.e., we apply the convolution matrix W l to words v i , i < 1 and i > |S|) because this makes sure that each word v i , 1 \u2264 i \u2264 |S|, can be detected by all weights of W l - as opposed to only the rightmost (resp. leftmost) weights for initial (resp. final) words in narrow convolution. The configuration of convolution layers in the following blocks (l > 1) is exactly the same except that the input vectors c i are not words, but the output of pooling from the previous layer of convolution, as we will explain presently. The configuration is the same (e.g., all W l \u2208 R d\u00d7wd ) because, by design, all g-phrase representations have the same dimensionality d. 
This also ensures that each g-phrase representation can be directly compared with every other g-phrase representation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "gpCNN: Learning Representations for g-Phrases", "sec_num": "4" }, { "text": "We use dynamic k-max pooling to extract the k l top values from each dimension after convolution in the l th block and the k L top values in the final block. We set", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "gpCNN: Learning Representations for g-Phrases", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "k l = max(\u03b1, ((L \u2212 l)/L) |S| )", "eq_num": "(2)" } ], "section": "gpCNN: Learning Representations for g-Phrases", "sec_num": "4" }, { "text": "where l = 1, \u2022 \u2022 \u2022 , L is the block index, and \u03b1 = 4 is a constant (cf. Kalchbrenner et al. (2014) ) that ensures a reasonable minimum number of values is passed on to the next layer. We set k L = 1 (not 4, cf. Kalchbrenner et al. (2014) ) because our design dictates that all g-phrase representations, including the representation of the TEXTCHUNK itself, have the same dimensionality. Example: for L = 4, |S| = 20, the k l are [15, 10, 5, 1]. Dynamic k-max pooling keeps the most important features and allows us to stack multiple blocks to extract hierarchical features: units on consecutive layers correspond to larger and larger parts of the TEXTCHUNK thanks to the subset selection property of pooling. For many tasks, labeled data for training gpCNN is limited. 
We therefore employ unsupervised training to initialize gpCNN as shown in Figure 2 . Similar to CBOW (Mikolov et al., 2013) , we predict a sampled middle word v i from the average of seven vectors: the TEXTCHUNK representation (the final output of gpCNN) and the three words to the left and to the right of v i . We use noise-contrastive estimation (Mnih and Teh, 2012) for training: 10 noise words are sampled for each true example. Figure 3 : General illustration of match feature model. In this example, both S 1 and S 2 have 10 gphrases, so the match feature matrixF \u2208 R s 1 \u00d7s 2 has size 10 \u00d7 10.", "cite_spans": [ { "start": 49, "end": 75, "text": "Kalchbrenner et al. (2014)", "ref_id": "BIBREF14" }, { "start": 191, "end": 217, "text": "Kalchbrenner et al. (2014)", "ref_id": "BIBREF14" }, { "start": 849, "end": 871, "text": "(Mikolov et al., 2013)", "ref_id": "BIBREF21" }, { "start": 1097, "end": 1117, "text": "(Mnih and Teh, 2012)", "ref_id": "BIBREF22" } ], "ref_spans": [ { "start": 822, "end": 830, "text": "Figure 2", "ref_id": "FIGREF1" }, { "start": 1182, "end": 1190, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "gpCNN: Learning Representations for g-Phrases", "sec_num": "4" }, { "text": "Let g 1 , . . . , g s k be an enumeration of the s k gphrases of TEXTCHUNK S k . Let S k \u2208 R s k \u00d7d be the matrix, constructed by concatenating the four matrices of unigram, short phrase, long phrase and sentence representations shown in Figure 2 that contain the learned representations from Section 4 for these s k g-phrases; i.e., row S ki is the learned representation of g i .", "cite_spans": [], "ref_spans": [ { "start": 238, "end": 246, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Match Feature Models", "sec_num": "5" }, { "text": "The basic design of a match feature model is that we produce an s 1 \u00d7 s 2 matrixF for a pair of TEXTCHUNKS S 1 and S 2 , shown in Figure 3 . 
F i,j is a score that assesses the relationship between g-phrase g i of S 1 and g-phrase g j of S 2 with respect to the TEXTCHUNK relation of interest (paraphrasing, clause coherence etc.). This score F i,j is computed based on the vector representations S 1i and S 2j of the two g-phrases. 1 We experiment with three different feature models to compute the match score F i,j because we would like our architecture to address a wide variety of different TEXTCHUNK relations. We can model a TEXTCHUNK relation like paraphrasing as \"for each meaning element in one sentence, there must be a similar meaning element in the other sentence\"; thus, a good candidate for the match score F i,j is simply vector similarity. In contrast, similarity is a less promising match score for clause coherence; for clause coherence, we want a score that models how good a continuation one g-phrase is for the other. These considerations motivate us to define three different match feature models that we will introduce now.", "cite_spans": [ { "start": 430, "end": 431, "text": "1", "ref_id": null } ], "ref_spans": [ { "start": 130, "end": 138, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Match Feature Models", "sec_num": "5" }, { "text": "The first match feature model is DIRECTSIM. This model computes the match score of two g-phrases as their similarity using a radial basis function kernel:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Match Feature Models", "sec_num": "5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "F i,j = exp(\u2212||S 1i \u2212 S 2j ||\u00b2 / (2\u03b2))", "eq_num": "(3)" } ], "section": "Match Feature Models", "sec_num": "5" }, { "text": "where we set \u03b2 = 2 (cf. Wu et al. (2013) ). 
DIRECTSIM is an appropriate feature model for TEXTCHUNK relations like paraphrasing because in that case direct similarity features are helpful in assessing meaning equivalence.", "cite_spans": [ { "start": 24, "end": 40, "text": "Wu et al. (2013)", "ref_id": "BIBREF26" } ], "ref_spans": [], "eq_spans": [], "section": "Match Feature Models", "sec_num": "5" }, { "text": "The second match feature model is INDIRECTSIM. Instead of computing the similarity directly as we do for DIRECTSIM, we first transform the representation of the g-phrase in one TEXTCHUNK using a transformation matrix M \u2208 R d\u00d7d , then compute the match score by inner product and sigmoid activation:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Match Feature Models", "sec_num": "5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "F i,j = \u03c3(S 1i M (S 2j )^T + b),", "eq_num": "(4)" } ], "section": "Match Feature Models", "sec_num": "5" }, { "text": "Our motivation is that for a TEXTCHUNK relation like clause coherence, the two TEXTCHUNKS need not have any direct similarity. However, if we map the representations of TEXTCHUNK S 1 into an appropriate space then we can hope that similarity between these transformed representations of S 1 and the representations of TEXTCHUNK S 2 does yield useful features. We will see that this hope is borne out by our experiments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Match Feature Models", "sec_num": "5" }, { "text": "The third match feature model is CONCAT. 
This is a general model that can learn any weighted combination of the values of the two vectors:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Match Feature Models", "sec_num": "5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "F i,j = \u03c3(w T e i,j + b)", "eq_num": "(5)" } ], "section": "Match Feature Models", "sec_num": "5" }, { "text": "where e i,j \u2208 R 2d is the concatenation of S 1i and S 2j . We can learn different combination weights w to solve different types of TEXTCHUNK matching.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Match Feature Models", "sec_num": "5" }, { "text": "We call this match feature model CONCAT because we implement it by concatenating g-phrase vectors to form a tensor as shown in Figure 4 .", "cite_spans": [], "ref_spans": [ { "start": 127, "end": 135, "text": "Figure 4", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Match Feature Models", "sec_num": "5" }, { "text": "The match feature models implement multigranular comparability: they match all units in one TEXTCHUNK with all units in the other TEXTCHUNK. This is necessary because a general solution to matching must match a low-level unit like \"reignite\" to a higher-level unit like \"fan the flames of\" (Figure 1 ). Unlike (Socher et al., 2011) , our model does not rely on parsing; therefore, it can more exhaustively search the hypothesis space of possible matchings: mfCNN covers a wide variety of different, possibly overlapping units, not just those of a single parse tree.", "cite_spans": [ { "start": 310, "end": 331, "text": "(Socher et al., 2011)", "ref_id": "BIBREF25" } ], "ref_spans": [ { "start": 290, "end": 299, "text": "(Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Match Feature Models", "sec_num": "5" }, { "text": "The match feature models generate an s 1 \u00d7 s 2 matrix. 
Since it has variable size, we apply two different dynamic 2D pooling methods, grid-based pooling and phrase-focused pooling, to transform it to a fixed-size matrix.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dynamic 2D Pooling", "sec_num": "6" }, { "text": "We need to map F \u2208 R s 1 \u00d7s 2 into a matrix F of fixed size s * \u00d7 s * where s * is a parameter. Grid-based pooling divides F into s * \u00d7 s * non-overlapping (dynamic) pools and copies the maximum value in each dynamic pool to F. This method is similar to (Socher et al., 2011) , but preserves locality better.", "cite_spans": [ { "start": 250, "end": 271, "text": "(Socher et al., 2011)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Grid-based pooling", "sec_num": "6.1" }, { "text": "F can be split into equal regions only if both s 1 and s 2 are divisible by s * . Otherwise, for s 1 > s * and if s 1 mod s * = b, the dynamic pools in the first s * \u2212 b splits each have \u230as 1 /s * \u230b rows while the remaining b splits each have \u230as 1 /s * \u230b + 1 rows. In Figure 5 , an s 1 \u00d7 s 2 = 4 \u00d7 5 matrix (left) is split into s * \u00d7 s * = 3 \u00d7 3 dynamic pools (middle): each row is split into [1, 1, 2] and each column is split into [1, 2, 2].", "cite_spans": [], "ref_spans": [ { "start": 260, "end": 268, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "Grid-based pooling", "sec_num": "6.1" }, { "text": "If s 1 < s * , we first repeat all rows in batches of size s 1 until we have at least s * rows. Then the first s * rows are kept and split into s * dynamic pools. The same principle applies to the partitioning of columns. 
In Figure 5 (right) , the areas with dashed lines and dotted lines are repeated parts for rows and columns, respectively; each cell is its own dynamic pool.", "cite_spans": [], "ref_spans": [ { "start": 235, "end": 251, "text": "Figure 5 (right)", "ref_id": null } ], "eq_spans": [], "section": "Grid-based pooling", "sec_num": "6.1" }, { "text": "In the match feature matrix F \u2208 R s 1 \u00d7s 2 , row i (resp. column j) contains all feature values for g-phrase g i of S 1 (resp. g j of S 2 ). Phrase-focused pooling attempts to pick the largest match features Figure 5 : Partition methods in grid-based pooling. Original matrix with size 4 \u00d7 5 is mapped into matrix with size 3 \u00d7 3 and matrix with size 6 \u00d7 7, respectively. Each dynamic pool is distinguished by a border of empty white space around it.", "cite_spans": [], "ref_spans": [ { "start": 206, "end": 214, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "Phrase-focused pooling", "sec_num": "6.2" }, { "text": "for a g-phrase g on the assumption that they are the best basis for assessing the relation of g with other g-phrases. To implement this, we sort the values of each row i (resp. each column j) in decreasing order, giving us a matrix F r \u2208 R s 1 \u00d7s 2 with sorted rows (resp. F c \u2208 R s 1 \u00d7s 2 with sorted columns). Then we concatenate the columns of F r (resp. the rows of F c ), resulting in a list", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase-focused pooling", "sec_num": "6.2" }, { "text": "F r = {f r 1 , . . . , f r s 1 s 2 } (resp. F c = {f c 1 , . . . , f c s 1 s 2 }) where each f r (f c )", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase-focused pooling", "sec_num": "6.2" }, { "text": "is an element of F r (F c ). These two lists are merged into a list F by interleaving them so that members from F r and F c alternate. 
F is then used to fill the rows of F from top to bottom with each row being filled from left to right. (If the match feature matrix has fewer cells than F, we simply repeat the filling procedure to fill all cells.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Phrase-focused pooling", "sec_num": "6.2" }, { "text": "The output of dynamic 2D pooling is further processed by the match feature CNN (mfCNN) as depicted in Figure 6 . mfCNN extracts increasingly abstract interaction features from lower-level interaction features, using several layers of 2D wide convolution and fixed-size 2D pooling.", "cite_spans": [], "ref_spans": [ { "start": 102, "end": 110, "text": "Figure 6", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "mfCNN: Match feature CNN", "sec_num": "7" }, { "text": "We call the combination of a 2D wide convolution layer and a fixed-size 2D pooling layer a block, denoted by index b (b = 1, 2, . . .). In general, let tensor T b \u2208 R c b \u00d7s b \u00d7s b denote the feature maps in block b; block b has c b feature maps, each of size s b \u00d7 s b (T 1 = F \u2208 R 1\u00d7s * \u00d7s * ). Let W b \u2208 R c b+1 \u00d7c b \u00d7f b \u00d7f b be the filter weights of 2D wide convolution in block b, where f b \u00d7 f b is the size of the sliding convolution regions. Then the convolution is performed as element-wise multiplication", "cite_spans": [], "ref_spans": [ { "start": 117, "end": 133, "text": "(b = 1, 2 . . .)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "mfCNN: Match feature CNN", "sec_num": "7" }, { "text": "between W b and T b as follows:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "mfCNN: Match feature CNN", "sec_num": "7" }, { "text": "T b+1 m,i\u22121,j\u22121 = \u03c3(W b m,:,:,: T b :,i\u2212f b :i,j\u2212f b :j + b b m ) (6) where 0 \u2264 m < c b+1", "html": null } } } }