{
"paper_id": "K18-1013",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:10:38.792326Z"
},
"title": "Learning to Embed Semantic Correspondence for Natural Language Understanding",
"authors": [
{
"first": "Sangkeun",
"middle": [],
"last": "Jung",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Chungnam National University",
"location": {}
},
"email": ""
},
{
"first": "Jinsik",
"middle": [],
"last": "Lee",
"suffix": "",
"affiliation": {},
"email": "jinsik16.lee@sktbrain.com"
},
{
"first": "Jiwon",
"middle": [],
"last": "Kim",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "SK telecom",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "While learning embedding models has yielded fruitful results in several NLP subfields, most notably Word2Vec, embedding correspondence remains relatively under-explored, especially in the context of natural language understanding (NLU), a task that typically extracts structured semantic knowledge from a text. An NLU embedding model can facilitate analyzing and understanding relationships between unstructured texts and their corresponding structured semantic knowledge, which is essential for both researchers and practitioners of NLU. Toward this end, we propose a framework that learns to embed the semantic correspondence between a text and its extracted semantic knowledge, called a semantic frame. One key contributed technique is semantic frame reconstruction, used to derive a one-to-one mapping between embedded vectors and their corresponding semantic frames. Embedding into semantically meaningful vectors and computing their distances in vector space provides a simple but effective way to measure semantic similarities. With the proposed framework, we demonstrate three key areas where the embedding model can be effective: visualization, semantic search, and re-ranking.",
"pdf_parse": {
"paper_id": "K18-1013",
"_pdf_hash": "",
"abstract": [
{
"text": "While learning embedding models has yielded fruitful results in several NLP subfields, most notably Word2Vec, embedding correspondence remains relatively under-explored, especially in the context of natural language understanding (NLU), a task that typically extracts structured semantic knowledge from a text. An NLU embedding model can facilitate analyzing and understanding relationships between unstructured texts and their corresponding structured semantic knowledge, which is essential for both researchers and practitioners of NLU. Toward this end, we propose a framework that learns to embed the semantic correspondence between a text and its extracted semantic knowledge, called a semantic frame. One key contributed technique is semantic frame reconstruction, used to derive a one-to-one mapping between embedded vectors and their corresponding semantic frames. Embedding into semantically meaningful vectors and computing their distances in vector space provides a simple but effective way to measure semantic similarities. With the proposed framework, we demonstrate three key areas where the embedding model can be effective: visualization, semantic search, and re-ranking.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The goal of NLU is to extract meaning from a natural language and infer the user intention. NLU typically involves two tasks: identifying user intent and extracting domain-specific entities, the second of which is often referred to as slot-filling (Mesnil et al., 2013; Jeong and Lee, 2006; Kim et al., 2016) . Typically, the NLU task can be viewed as an extraction of structured text from a raw text. In NLU literature, the structured form of intent and filled slots is called a semantic frame. Figure 1 : Semantic vector learning framework and applications. We assume that a pair of corresponding text and semantic frame (t, s), which have the same meaning in a raw text domain (\u03c7 T ) and a semantic frame domain (\u03c7 S ) respectively, can be encoded to a vector v in a shared embedding vector space Z. R T and R S are two reader functions that encode raw and structured text to a semantic vector. W is a writing function that decodes a semantic vector to a symbolic semantic frame.",
"cite_spans": [
{
"start": 248,
"end": 269,
"text": "(Mesnil et al., 2013;",
"ref_id": "BIBREF18"
},
{
"start": 270,
"end": 290,
"text": "Jeong and Lee, 2006;",
"ref_id": "BIBREF10"
},
{
"start": 291,
"end": 308,
"text": "Kim et al., 2016)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 496,
"end": 504,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this study, we aim to learn a meaningful distributed semantic representation, rather than focusing on building the NLU system itself. Once we obtain a reliable and reasonable semantic representation in a vector form, we can devise many useful new applications around NLU (Figure 1 ). Because all the instances of text and semantic frame are placed on a single vector space, we can obtain a natural and direct distance measure between them. Using the distance measure, similar text or semantic frame instances can be searched directly and interchangeably by distance comparison. Moreover, re-ranking of multiple NLU results can be applied without further learning by comparing the distances between the text and the corresponding predicted semantic frame. Converting symbols to vectors also makes natural visualization possible.",
"cite_spans": [],
"ref_spans": [
{
"start": 286,
"end": 295,
"text": "(Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this study, we assumed that the reasonable semantic vector representation satisfies the following properties.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Property -embedding correspondence:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Distributed representation of text should be the same as the distributed representation of the corresponding semantic frame.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 Property -reconstruction: Symbolic semantic frame should be recovered from the learned semantic vector.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We herein introduce a novel semantic vector learning framework called ESC (Embedding Semantic Correspondence learning), which satisfies the assumed properties. The remainder of the paper is structured as follows: Section 2 describes the detailed structure of the framework. Section 3 introduces semantic vector applications in NLU. Section 4 describes the experimental settings and results. Section 5 discusses the related work. Finally, Section 6 presents the conclusion.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our framework consists of a text reader, a semantic frame reader, and a semantic frame writer. The text reader embeds a sequence of tokens into a distributed vector representation. The semantic frame reader reads structured texts and encodes each into a vector. v_t denotes the semantic vector derived from the text reader, and v_s denotes the semantic vector derived from the semantic frame reader. Finally, the semantic frame writer generates a symbolic semantic frame from a vector representation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ESC Framework",
"sec_num": "2"
},
{
"text": "A text reader (Figure 2 ), implementing a neural sentence encoder, reads a sequence of input tokens and encodes it into a vector. In this study, we used long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997) for encoding input sequences. The encoding process can be defined as",
"cite_spans": [],
"ref_spans": [
{
"start": 14,
"end": 23,
"text": "(Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Text Reader",
"sec_num": "2.1"
},
{
"text": "\\overrightarrow{h}_s = R_{text}(E_X(x_s), \\overrightarrow{h}_{s-1}), \\quad v_t = \\mathrm{sigmoid}(\\overrightarrow{h}_S)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text Reader",
"sec_num": "2.1"
},
{
"text": "where s \u2208 {1, 2, ..., S} and \\overrightarrow{h}_s is the forward hidden state over the input sequence at time s; R_{text} is an RNN cell; and E_X(x_s) is the token embedding function, which returns a distributed vector representation of token x_s at time s. The final RNN output \\overrightarrow{h}_S is taken as v_t, the semantic vector derived from the text.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text Reader",
"sec_num": "2.1"
},
{
"text": "A semantic frame consists of structured tags: an intent tag, slot-tags, and slot-values. In this study, the intent tag is handled as a single symbol, and the slot-tags and slot-values are handled as sequences of symbols. For example, the sentence \"Please list all flights from Dallas to Philadelphia on Monday.\" is handled as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Frame Reader",
"sec_num": "2.2"
},
{
"text": "\u2022 intent tag : atis flight",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Frame Reader",
"sec_num": "2.2"
},
{
"text": "\u2022 slot-tag sequence :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Frame Reader",
"sec_num": "2.2"
},
{
"text": "[ fromloc.city name, toloc.city name, depart date.day name ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Frame Reader",
"sec_num": "2.2"
},
{
"text": "\u2022 slot-value sequence :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Frame Reader",
"sec_num": "2.2"
},
{
"text": "[Dallas, Philadelphia, Monday].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Frame Reader",
"sec_num": "2.2"
},
{
"text": "The intent reader is a simple embedding function v_{intent} = E_I(i), which returns a distributed vector representation of the intent tag i for a sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Frame Reader",
"sec_num": "2.2"
},
{
"text": "A stacked LSTM layer is used to read the sequences of slot-tags and slot-values. E_S(o) is a slot-tag embedding function with o as a token. E_V(a) is a slot-value embedding function with a as a token. The embedding results E_S(o_m) and E_V(a_m) are concatenated at time step m, and the merged vectors are fed to the stacked layer at each time step ( Figure 2 ). v_{tag,value}, the reading result of the sequence of slot-tags and values, is taken from the final output of the RNN at time M. Finally, the intent, slot-tag, and slot-value encoded vectors are merged to construct a distributed semantic frame representation as",
"cite_spans": [],
"ref_spans": [
{
"start": 343,
"end": 351,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Semantic Frame Reader",
"sec_num": "2.2"
},
{
"text": "v_s = \\mathrm{sigmoid}(W_{sf}([v_{intent}; v_{tag,value}]) + b_{sf})",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Frame Reader",
"sec_num": "2.2"
},
{
"text": "where [;] denotes the vector concatenation operator. The dimension of v_s is the same as that of v_t. All embedding weights are randomly initialized and learned through the training process.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Frame Reader",
"sec_num": "2.2"
},
{
"text": "Figure 2:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Frame Reader Text Reader",
"sec_num": null
},
{
"text": "Text reader, semantic frame reader, and semantic frame writer neural architecture. E_X is an embedding function for the input text token x. E_I, E_S, and E_V are the embedding functions for the intent tag, slot-tag, and slot-value, respectively. Separate symbols in the figure denote the vector concatenation operation, the cross-entropy, the average calculation, and the distance calculation. \u0177_intent is a reference intent tag vector and y^m_slot is a reference slot-tag vector at time m. M is the number of slots in a sentence (in the above example, M = 3).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Frame Reader Text Reader",
"sec_num": null
},
{
"text": "One of the objectives of this study is to learn semantically reasonable vector representations of a text and its related semantic frame. Hence, we set the properties of the desirable semantic vector, and loss functions are defined to satisfy these properties. Loss for Property \"embedding correspondence\": Distance loss measures the dissimilarity between the encoded semantic vectors from the text reader and those from the semantic frame reader in the vector space. The loss is defined as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Frame Writer and Loss Functions",
"sec_num": "2.3"
},
{
"text": "L_{dist} = \\mathrm{dist}(v_t, v_s)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Frame Writer and Loss Functions",
"sec_num": "2.3"
},
{
"text": "where the dist function can be any vector distance measure; however, in this study, we employed a Euclidean and a cosine distance (=1.0 -cosine similarity).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Frame Writer and Loss Functions",
"sec_num": "2.3"
},
{
"text": "Loss for Property \"reconstruction\": Content loss provides a measure of how much semantic information the semantic frame vector contains. Without the content loss, v_t and v_s tend to quickly converge to zero vectors, implying a failure to learn the semantic representation. To measure content preservation, a symbolic semantic frame is generated from the semantic vector, and the difference between the original semantic frame and the generated semantic frame is calculated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Frame Writer and Loss Functions",
"sec_num": "2.3"
},
{
"text": "Because the semantic frame's slot-values have a large vocabulary, generating them is difficult; a reduced semantic frame is therefore devised to ease the generation problem. A reduced semantic frame is created by simply dropping the slot-values from the corresponding semantic frame. For example, in Figure 2 , the slot-values [Dallas, Philadelphia, Monday] are removed to create a reduced semantic frame. The content loss calculation is performed on this reduced semantic frame. Another advantage of employing the reduced semantic frame is that the learned distributed semantic vectors have more abstraction power, because they are less sensitive to the lexical vocabulary.",
"cite_spans": [],
"ref_spans": [
{
"start": 291,
"end": 299,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Semantic Frame Writer and Loss Functions",
"sec_num": "2.3"
},
{
"text": "For the content loss, the intent and slot-tag generation qualities are measured. The intent generation network can be simply defined using a linear projection",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Frame Writer and Loss Functions",
"sec_num": "2.3"
},
{
"text": "y_{intent} = W'_I v + b_I where v",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Frame Writer and Loss Functions",
"sec_num": "2.3"
},
{
"text": "is the semantic vector, and y intent is the output vector.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Frame Writer and Loss Functions",
"sec_num": "2.3"
},
{
"text": "The slot-tag generation networks are defined as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Frame Writer and Loss Functions",
"sec_num": "2.3"
},
{
"text": "\\overrightarrow{q}_m = R_G(v, \\overrightarrow{q}_{m-1}), \\quad y^m_{slot} = W'_S \\overrightarrow{q}_m + b_S",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Frame Writer and Loss Functions",
"sec_num": "2.3"
},
{
"text": "where R_G is an RNN cell. The semantic vector v is copied and repeatedly fed into each RNN input. The outputs from the RNN are projected onto the slot-tag space with W'_S. Figure 2 shows the intent and slot-tag generation networks and the corresponding loss calculation methods. The generation losses can be defined with the cross-entropy between the generated tag vector and the reference tag vector as",
"cite_spans": [],
"ref_spans": [
{
"start": 174,
"end": 182,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Semantic Frame Writer and Loss Functions",
"sec_num": "2.3"
},
{
"text": "L_{intent} = \\mathrm{CrossEntropy}(\\hat{y}_{intent}, y_{intent}), \\quad L_{slot} = \\frac{1}{M} \\sum_{m=1}^{M} \\mathrm{CrossEntropy}(\\hat{y}^m_{slot}, y^m_{slot})",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Frame Writer and Loss Functions",
"sec_num": "2.3"
},
{
"text": "where M is the number of slots in a sentence. Combining the intent and slot losses, the content loss (L_content) to reconstruct a semantic frame from a semantic vector v can be defined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Frame Writer and Loss Functions",
"sec_num": "2.3"
},
{
"text": "L_{content} = L_{intent} + L_{slot}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Frame Writer and Loss Functions",
"sec_num": "2.3"
},
{
"text": "Finally, the total loss value (L) for learning the semantic frame representation is defined with the distance loss and content loss as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Frame Writer and Loss Functions",
"sec_num": "2.3"
},
{
"text": "L = L_{dist} + L_{content}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantic Frame Writer and Loss Functions",
"sec_num": "2.3"
},
{
"text": "The hyperparameters of the proposed model are summarized in Table 1. 3 Applications",
"cite_spans": [],
"ref_spans": [
{
"start": 60,
"end": 68,
"text": "Table 1.",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Semantic Frame Writer and Loss Functions",
"sec_num": "2.3"
},
{
"text": "Using the learned text and semantic frame readers, we can measure distances not only between instances of the same form (text or semantic frame) but also between instances of different forms. Let us denote a text as t, a semantic frame as s, and the text and semantic frame readers as R_T and R_S, respectively. The distance measurements between them can be performed as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-form Distance Measurement",
"sec_num": "3.1"
},
{
"text": "\u2022 dist(v^i_t, v^j_t): t_i \u2192 R_T(t_i) = v^i_t, t_j \u2192 R_T(t_j) = v^j_t \u2022 dist(v^i_t, v^j_s): t_i \u2192 R_T(t_i) = v^i_t, s_j \u2192 R_S(s_j) = v^j_s \u2022 dist(v^i_s, v^j_s): s_i \u2192 R_S(s_i) = v^i_s, s_j \u2192 R_S(s_j) = v^j_s",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-form Distance Measurement",
"sec_num": "3.1"
},
{
"text": "With vector semantic representation, we can visualize the instances (sentences) in an easier and more natural way. Once the symbolic text or semantic frame is converted to a vector, vector visualization methods such as t-SNE (Maaten and Hinton, 2008) can be used directly to examine the relationships between instances or the distribution of the entire corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Visualization",
"sec_num": "3.2"
},
{
"text": "Re-ranking the NLU results from multiple NLU modules is difficult but important if a robust NLU system is to be built. Typically, a choice is made by comparing the scores produced by each system. However, this technique is not always feasible because the scores are often on different scales, or are occasionally not provided at all (e.g., in purely rule-based NLU systems). The vector form of the semantic frame provides a very clear and natural solution to the re-ranking problem. Figure 3 shows the flow of the re-ranking algorithm with the proposed vector semantic representation. In this study, we reordered the NLU results from multiple NLU systems according to the distance between v_t and the corresponding v_s. It is noteworthy that the proposed re-ranking algorithm does not require further learning for ranking, such as ensemble learning or learning-to-rank techniques. Further, the proposed method is applicable to any type of NLU system; even purely rule-based systems can be satisfactorily compared to purely statistical systems. Figure 3 : Re-ranking multiple NLU results using the semantic vector. The semantic vector from the text (v_t) functions as a pivot. We show three different NLU systems in this illustration.",
"cite_spans": [],
"ref_spans": [
{
"start": 488,
"end": 496,
"text": "Figure 3",
"ref_id": null
},
{
"start": 1044,
"end": 1052,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Re-ranking Without Further Learning",
"sec_num": "3.3"
},
{
"text": "For training and testing, we used the ATIS2 dataset (Price, 1990). The ATIS2 dataset consists of an annotated intent and slot corpus for an air travel information search task, and comes with a commonly used training and test split. For tuning parameters, we further split the training set into a 90% training set and a 10% development set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "The intuition behind the proposed method is that semantically similar instances will be grouped together if the semantic vector learning is performed successfully. Figure 4 supports this intuition. In the early stages of training, the instances are scattered randomly; however, as training progresses, semantically similar instances move closer to each other. We observed that the proposed framework groups and depicts the sentences based on the intent tag remarkably well.",
"cite_spans": [],
"ref_spans": [
{
"start": 164,
"end": 172,
"text": "Figure 4",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Validity of Learned Semantic Vector with Visualization",
"sec_num": "4.1"
},
{
"text": "In our framework, instances having different forms (text or semantic frame) can be compared directly in a semantic vector space. To demonstrate that multi-form distance measurement works well, sentence and semantic frame search results given a sentence query and a semantic frame query are shown in Table 2 . Table 2 shows that text-to-text search works very well with the learned vectors: the retrieved sentence patterns are similar to the given text, and the vocabulary presented is also similar. On the other hand, in the case of text-to-semantic-frame search, the sentence patterns are similar, but content words such as city names are not. This is as we predicted, because the content loss for the reconstruction property is measured on the reduced semantic frame, which does not include slot-values. In semantic-frame-to-text search, we find similar behavior. Table 2 : Example of most similar instance search results in the test data according to the proposed framework. Top-3 retrieved text and semantic frame instances given a single query are shown on the left and right sides, respectively. The retrieved results contain different city or airport names, which correspond to slot-values. If slot-value generation could be included in the reconstruction loss with large data, better multi-form semantic search results might be expected.",
"cite_spans": [],
"ref_spans": [
{
"start": 300,
"end": 307,
"text": "Table 2",
"ref_id": null
},
{
"start": 310,
"end": 317,
"text": "Table 2",
"ref_id": null
},
{
"start": 890,
"end": 897,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Multi-form Distance Measurement",
"sec_num": "4.2"
},
{
"text": "To measure the quantitative search performance, precision at K is reported in Table 3 . Precision at K counts the number of instances with the same sentence pattern among the top K results. From the search results, we can conclude that the learned semantic vectors preserve sentence pattern (intent tag and slot-tags) information very well.",
"cite_spans": [],
"ref_spans": [
{
"start": 79,
"end": 86,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Multi-form Distance Measurement",
"sec_num": "4.2"
},
{
"text": "We prepared 11 NLU systems for re-ranking: nine intent-/slot-combined classifiers and two intent/slot joint classifiers were implemented. Table 3 : Same sentence pattern (intent and slot-tags should be matched) search performance (Precision @K). SF, I, S, and J stand for semantic frame, intent, slot-tag, and joint of intent and slot-tag, respectively. For the combined classifiers, three intent classifiers and three slot sequential classifiers were prepared and combined. For the joint classifiers, those of Liu and Lane (2016) and Hakkani-T\u00fcr et al. (2016) were each implemented. Here, we did not significantly tune the NLU systems, as the purpose of this paper is to learn the semantic vector, not to build state-of-the-art NLU systems. A maximum-entropy (MaxEnt)-based and a support vector machine (SVM)-based intent classifier were implemented as traditional sentence classification methods. Both classifiers share the same feature set (1-gram, 2-gram, 3-gram, and 4-gram around each word). Also, a convolutional neural network (CNN)-based (Kim, 2014) sentence classification method was implemented.",
"cite_spans": [
{
"start": 495,
"end": 514,
"text": "Liu and Lane (2016)",
"ref_id": "BIBREF15"
},
{
"start": 519,
"end": 544,
"text": "Hakkani-T\u00fcr et al. (2016)",
"ref_id": "BIBREF6"
},
{
"start": 1034,
"end": 1045,
"text": "(Kim, 2014)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 94,
"end": 101,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Re-ranking",
"sec_num": "4.3"
},
{
"text": "A conditional random field (CRF)-based sequential classifier was implemented as a traditional slot classifier. Also, an RNN-based and an RNN+CRF-based sequential classifier were implemented as deep learning methods. Bidirectional LSTMs were used to build the simple RNN-based classifier. By placing a CRF layer on top of the bidirectional LSTM network (Lee, 2017) , an RNN+CRF-based network was implemented. In addition, two joint NLU systems (Liu and Lane, 2016; Hakkani-T\u00fcr et al., 2016) were prepared by reusing their publicly accessible code. Table 4 shows the summary of the NLU systems that we prepared and used for the re-ranking experiments. Table 5 shows the performance of all the NLU",
"cite_spans": [
{
"start": 346,
"end": 357,
"text": "(Lee, 2017)",
"ref_id": "BIBREF14"
},
{
"start": 437,
"end": 457,
"text": "(Liu and Lane, 2016;",
"ref_id": "BIBREF15"
},
{
"start": 458,
"end": 483,
"text": "Hakkani-T\u00fcr et al., 2016)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 558,
"end": 565,
"text": "Table 4",
"ref_id": "TABREF7"
},
{
"start": 661,
"end": 668,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Re-ranking",
"sec_num": "4.3"
},
{
"text": "The task of spoken NLU consists of intent classification and domain entity slot-filling. Traditionally, both tasks are approached using statistical machine-learning methods (Schwartz et al., 1997; He and Young, 2005; Dietterich, 2002) . Recently, with the advances in deep learning, RNN-based sequence encoding techniques have been used to detect the intent or utterance type (Ravuri and Stolcke, 2015) , and RNN-based neural architectures have been employed for slot-filling tasks (Mesnil et al., 2013 (Mesnil et al., , 2015 . Combinations of CRFs and neural networks have also been explored by Xu and Sarikaya (2013) . Recent works have focused on enriching the representations for neural architectures to implement NLU. For example, Chen et al. focused on leveraging substructure embeddings for joint semantic frame parsing. Kim et al. utilized several semantic lexicons, such as WordNet, PPDB, and the Macmillan dictionary, to enrich the word embeddings, and later used them in the initial representation of words for intent detection (Kim et al., 2016) .",
"cite_spans": [
{
"start": 173,
"end": 196,
"text": "(Schwartz et al., 1997;",
"ref_id": "BIBREF25"
},
{
"start": 197,
"end": 216,
"text": "He and Young, 2005;",
"ref_id": "BIBREF7"
},
{
"start": 217,
"end": 234,
"text": "Dietterich, 2002)",
"ref_id": "BIBREF4"
},
{
"start": 375,
"end": 401,
"text": "(Ravuri and Stolcke, 2015)",
"ref_id": "BIBREF24"
},
{
"start": 481,
"end": 501,
"text": "(Mesnil et al., 2013",
"ref_id": "BIBREF18"
},
{
"start": 502,
"end": 524,
"text": "(Mesnil et al., , 2015",
"ref_id": "BIBREF17"
},
{
"start": 598,
"end": 620,
"text": "Xu and Sarikaya (2013)",
"ref_id": "BIBREF26"
},
{
"start": 1042,
"end": 1060,
"text": "(Kim et al., 2016)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Previous NLU works have used statistical modeling for the intent and slot-filling tasks and for input representation. None of these works has represented both the text and the semantic frame in a vector form simultaneously. To the best of our knowledge, this is the first presentation of a method for learning a distributed semantic vector for both text and semantic frame, and of its applications in NLU research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "In general natural language processing literature, many raw text to vector studies to learn the vector representations of text have been performed. Mikolov et al. (2013) ; Pennington et al. (2014) ; Collobert et al. (2011) proposed word to vector techniques. Mueller and Thyagarajan (2016) ; Le and Mikolov (2014) introduced embedding methods at the sentence and document level. Some attempts have shown that in this embedding process, certain semantic information such as analogy, antonym, and gender can be obtained in the vector space.",
"cite_spans": [
{
"start": 148,
"end": 169,
"text": "Mikolov et al. (2013)",
"ref_id": "BIBREF19"
},
{
"start": 172,
"end": 196,
"text": "Pennington et al. (2014)",
"ref_id": "BIBREF21"
},
{
"start": 199,
"end": 222,
"text": "Collobert et al. (2011)",
"ref_id": "BIBREF3"
},
{
"start": 259,
"end": 289,
"text": "Mueller and Thyagarajan (2016)",
"ref_id": "BIBREF20"
},
{
"start": 292,
"end": 313,
"text": "Le and Mikolov (2014)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Further, many structured text to vector techniques have been introduced recently. Preller (2014) introduced a logic formula embedding method while Bordes et al. (2013) ; Do et al. (2018) proposed translating symbolic structured knowledge such as Wordnet and freebase.",
"cite_spans": [
{
"start": 147,
"end": 167,
"text": "Bordes et al. (2013)",
"ref_id": "BIBREF0"
},
{
"start": 170,
"end": 186,
"text": "Do et al. (2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "We herein introduce a novel semantic frame embedding method by simultaneously executing the raw text to vector and structured text to vector method in a single framework to learn semantic representations more directly. In this framework, the text and semantic frame are each projected onto a vector space, and the distance loss between the vectors is minimized to satisfy embedding correspondence. Our research goes a step further to guarantee that the learned vector indeed keep the semantic information by checking the reconstruction the symbolic semantic frame from the vector.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "In learning the parameters by minimizing the vector distances, this work is similar to a Siamese constitutional neural network (Chopra et al., 2005; Mueller and Thyagarajan, 2016) or an autoencoder (Hinton and Salakhutdinov, 2006) ; however, the weights are not shared or transposed in this work.",
"cite_spans": [
{
"start": 127,
"end": 148,
"text": "(Chopra et al., 2005;",
"ref_id": "BIBREF2"
},
{
"start": 149,
"end": 179,
"text": "Mueller and Thyagarajan, 2016)",
"ref_id": "BIBREF20"
},
{
"start": 198,
"end": 230,
"text": "(Hinton and Salakhutdinov, 2006)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "In this study, we have proposed a new method to learn a correspondence embedding model for NLU. To learn a valid and meaningful distributed semantic representation, two properties -embedding correspondence and reconstruction -are considered. By minimizing the distance between the semantic vectors which are the outputs of text and semantic frame reader, the semantically equivalent vectors are placed very close in the vector space. In addition, reconstruction consistency from a semantic vector to symbol semantic frame was jointly enforced to prevent the method from learning trivial degenerate mappings (e.g. mapping all to zeros).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Through various experiments with ATIS2 dataset, we confirmed that the learned semantic vectors indeed contain semantic information. Semantic vector visualization and the results of similar text and semantic frame search showed that semantically similar instances are actually located near on the vector space. Also, using the learned semantic vector, re-ranking multiple NLU systems can be implemented without further learning by comparing semantic vector values of text and semantic frame.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "Based on the results of the proposed research, various research directions can be considered in the future. A semantic operation or algebra on a vector space will be a very promising research topic. Furthermore, with enough training data and appropriate modification to our method, adding text reconstruction constraint can be pursed and generating text directly from a semantic vector would be possible, somewhat resembling problem settings of neural machine translation tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "https://github.com/yvchen/JointSLU.git 2 https://github.com/DSKSD/RNN-for-Joint-NLU 3 The reported performance of C10 and C11 in their paper were not reproduced with the open code.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": " Table 5 : NLU performance of multiple NLU systems and re-ranked results. Acc., prec., rec., and f m stand for accuracy, precision, recall, and fmeasure, respectively. systems, the proposed re-ranking algorithm's performance, and the oracle performance. Typical choices in re-ranking NLU results are majority voting and score-based ranking. In the majority voting method, the semantic frame most predicted by the NLU systems is selected. The score of the NLU scoring method in Table 5 is the prediction probability. In the case of joint NLU classifiers (C10 and C11), the joint prediction probabilities are used for the score. In the case of combination NLU systems (C1 to C9), the product of the intent and slot prediction probabilities is used for the score.The proposed distance-based re-ranking method using semantic vector shows superior selection performance at both intent and slot-filling tasks. It is noteworthy that the re-ranked intent prediction performance (acc. 97.05) is relatively close to the oracle intent performance (acc. 97.85), which is the upper bound. Compared to the baseline re-ranker (NLU score), the proposed re-ranker (cosine) achieves 33.25% and 7.07% relative error reduction for intent prediction and slot-filling task, respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 1,
"end": 8,
"text": "Table 5",
"ref_id": null
},
{
"start": 477,
"end": 484,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "annex",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Translating embeddings for modeling multirelational data",
"authors": [
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Usunier",
"suffix": ""
},
{
"first": "Alberto",
"middle": [],
"last": "Garcia-Duran",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Oksana",
"middle": [],
"last": "Yakhnenko",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "2787--2795",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antoine Bordes, Nicolas Usunier, Alberto Garcia- Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi- relational data. In Advances in neural information processing systems, pages 2787-2795.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Syntax or semantics? knowledge-guided joint semantic frame parsing",
"authors": [
{
"first": "Yun-Nung",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Dilek",
"middle": [],
"last": "Hakanni-T\u00fcr",
"suffix": ""
},
{
"first": "Gokhan",
"middle": [],
"last": "Tur",
"suffix": ""
},
{
"first": "Asli",
"middle": [],
"last": "Celikyilmaz",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
}
],
"year": 2016,
"venue": "Spoken Language Technology Workshop (SLT)",
"volume": "",
"issue": "",
"pages": "348--355",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yun-Nung Chen, Dilek Hakanni-T\u00fcr, Gokhan Tur, Asli Celikyilmaz, Jianfeng Guo, and Li Deng. 2016. Syntax or semantics? knowledge-guided joint se- mantic frame parsing. In Spoken Language Tech- nology Workshop (SLT), 2016 IEEE, pages 348-355. IEEE.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Learning a similarity metric discriminatively, with application to face verification",
"authors": [
{
"first": "Sumit",
"middle": [],
"last": "Chopra",
"suffix": ""
},
{
"first": "Raia",
"middle": [],
"last": "Hadsell",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Lecun",
"suffix": ""
}
],
"year": 2005,
"venue": "Computer Vision and Pattern Recognition",
"volume": "1",
"issue": "",
"pages": "539--546",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sumit Chopra, Raia Hadsell, and Yann LeCun. 2005. Learning a similarity metric discriminatively, with application to face verification. In Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, volume 1, pages 539-546. IEEE.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Natural language processing (almost) from scratch",
"authors": [
{
"first": "Ronan",
"middle": [],
"last": "Collobert",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "L\u00e9on",
"middle": [],
"last": "Bottou",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Karlen",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Kuksa",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Machine Learning Research",
"volume": "12",
"issue": "",
"pages": "2493--2537",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12(Aug):2493-2537.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Machine learning for sequential data: A review. Structural, syntactic, and statistical pattern recognition",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Dietterich",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "227--246",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Dietterich. 2002. Machine learning for se- quential data: A review. Structural, syntactic, and statistical pattern recognition, pages 227-246.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Knowledge graph embedding with multiple relation projections",
"authors": [
{
"first": "Kien",
"middle": [],
"last": "Do",
"suffix": ""
},
{
"first": "Truyen",
"middle": [],
"last": "Tran",
"suffix": ""
},
{
"first": "Svetha",
"middle": [],
"last": "Venkatesh",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1801.08641"
]
},
"num": null,
"urls": [],
"raw_text": "Kien Do, Truyen Tran, and Svetha Venkatesh. 2018. Knowledge graph embedding with multiple relation projections. arXiv preprint arXiv:1801.08641.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Multi-domain joint semantic frame parsing using bi-directional rnn-lstm",
"authors": [
{
"first": "Dilek",
"middle": [],
"last": "Hakkani-T\u00fcr",
"suffix": ""
},
{
"first": "G\u00f6khan",
"middle": [],
"last": "T\u00fcr",
"suffix": ""
},
{
"first": "Asli",
"middle": [],
"last": "Celikyilmaz",
"suffix": ""
},
{
"first": "Yun-Nung",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Ye-Yi",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2016,
"venue": "Interspeech",
"volume": "",
"issue": "",
"pages": "715--719",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dilek Hakkani-T\u00fcr, G\u00f6khan T\u00fcr, Asli Celikyilmaz, Yun-Nung Chen, Jianfeng Gao, Li Deng, and Ye- Yi Wang. 2016. Multi-domain joint semantic frame parsing using bi-directional rnn-lstm. In Inter- speech, pages 715-719.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Semantic processing using the hidden vector state model. Computer speech & language",
"authors": [
{
"first": "Yulan",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Steve",
"middle": [],
"last": "Young",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "19",
"issue": "",
"pages": "85--106",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yulan He and Steve Young. 2005. Semantic process- ing using the hidden vector state model. Computer speech & language, 19(1):85-106.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Reducing the dimensionality of data with neural networks. science",
"authors": [
{
"first": "E",
"middle": [],
"last": "Geoffrey",
"suffix": ""
},
{
"first": "Ruslan R",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "313",
"issue": "",
"pages": "504--507",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Geoffrey E Hinton and Ruslan R Salakhutdinov. 2006. Reducing the dimensionality of data with neural net- works. science, 313(5786):504-507.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Long short-term memory",
"authors": [
{
"first": "Sepp",
"middle": [],
"last": "Hochreiter",
"suffix": ""
},
{
"first": "J\u00fcrgen",
"middle": [],
"last": "Schmidhuber",
"suffix": ""
}
],
"year": 1997,
"venue": "Neural computation",
"volume": "9",
"issue": "8",
"pages": "1735--1780",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735-1780.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Jointly predicting dialog act and named entity for spoken language understanding",
"authors": [
{
"first": "Minwoo",
"middle": [],
"last": "Jeong",
"suffix": ""
},
{
"first": "Gary Geunbae",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2006,
"venue": "Spoken Language Technology Workshop",
"volume": "",
"issue": "",
"pages": "66--69",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minwoo Jeong and Gary Geunbae Lee. 2006. Jointly predicting dialog act and named entity for spoken language understanding. In Spoken Language Tech- nology Workshop, 2006. IEEE, pages 66-69. IEEE.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Intent detection using semantically enriched word embeddings",
"authors": [
{
"first": "Joo-Kyung",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Gokhan",
"middle": [],
"last": "Tur",
"suffix": ""
},
{
"first": "Asli",
"middle": [],
"last": "Celikyilmaz",
"suffix": ""
},
{
"first": "Bin",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Ye-Yi",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2016,
"venue": "Spoken Language Technology Workshop (SLT)",
"volume": "",
"issue": "",
"pages": "414--419",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joo-Kyung Kim, Gokhan Tur, Asli Celikyilmaz, Bin Cao, and Ye-Yi Wang. 2016. Intent detection using semantically enriched word embeddings. In Spoken Language Technology Workshop (SLT), 2016 IEEE, pages 414-419. IEEE.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Convolutional neural networks for sentence classification",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1408.5882"
]
},
"num": null,
"urls": [],
"raw_text": "Yoon Kim. 2014. Convolutional neural net- works for sentence classification. arXiv preprint arXiv:1408.5882.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Distributed representations of sentences and documents",
"authors": [
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2014,
"venue": "International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "1188--1196",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Quoc Le and Tomas Mikolov. 2014. Distributed rep- resentations of sentences and documents. In Inter- national Conference on Machine Learning, pages 1188-1196.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Lstm-crf models for named entity recognition",
"authors": [
{
"first": "Changki",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2017,
"venue": "IEICE Transactions on Information and Systems",
"volume": "100",
"issue": "4",
"pages": "882--887",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Changki Lee. 2017. Lstm-crf models for named en- tity recognition. IEICE Transactions on Information and Systems, 100(4):882-887.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Attention-based recurrent neural network models for joint intent detection and slot filling",
"authors": [
{
"first": "Bing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Ian",
"middle": [],
"last": "Lane",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1609.01454"
]
},
"num": null,
"urls": [],
"raw_text": "Bing Liu and Ian Lane. 2016. Attention-based recur- rent neural network models for joint intent detection and slot filling. arXiv preprint arXiv:1609.01454.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Visualizing data using t-sne",
"authors": [
{
"first": "Laurens",
"middle": [],
"last": "Van Der Maaten",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2008,
"venue": "Journal of Machine Learning Research",
"volume": "9",
"issue": "",
"pages": "2579--2605",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-sne. Journal of Machine Learning Research, 9(Nov):2579-2605.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Using recurrent neural networks for slot filling in spoken language understanding",
"authors": [
{
"first": "Gr\u00e9goire",
"middle": [],
"last": "Mesnil",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Dauphin",
"suffix": ""
},
{
"first": "Kaisheng",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Dilek",
"middle": [],
"last": "Hakkani-Tur",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Larry",
"middle": [],
"last": "Heck",
"suffix": ""
},
{
"first": "Gokhan",
"middle": [],
"last": "Tur",
"suffix": ""
},
{
"first": "Dong",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2015,
"venue": "IEEE/ACM Transactions on Audio, Speech and Language Processing",
"volume": "23",
"issue": "3",
"pages": "530--539",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gr\u00e9goire Mesnil, Yann Dauphin, Kaisheng Yao, Yoshua Bengio, Li Deng, Dilek Hakkani-Tur, Xi- aodong He, Larry Heck, Gokhan Tur, Dong Yu, et al. 2015. Using recurrent neural networks for slot fill- ing in spoken language understanding. IEEE/ACM Transactions on Audio, Speech and Language Pro- cessing (TASLP), 23(3):530-539.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Investigation of recurrent-neuralnetwork architectures and learning methods for spoken language understanding",
"authors": [
{
"first": "Gr\u00e9goire",
"middle": [],
"last": "Mesnil",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2013,
"venue": "Interspeech",
"volume": "",
"issue": "",
"pages": "3771--3775",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gr\u00e9goire Mesnil, Xiaodong He, Li Deng, and Yoshua Bengio. 2013. Investigation of recurrent-neural- network architectures and learning methods for spo- ken language understanding. In Interspeech, pages 3771-3775.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1301.3781"
]
},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jef- frey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Siamese recurrent architectures for learning sentence similarity",
"authors": [
{
"first": "Jonas",
"middle": [],
"last": "Mueller",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Thyagarajan",
"suffix": ""
}
],
"year": 2016,
"venue": "AAAI",
"volume": "",
"issue": "",
"pages": "2786--2792",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonas Mueller and Aditya Thyagarajan. 2016. Siamese recurrent architectures for learning sentence similar- ity. In AAAI, pages 2786-2792.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 confer- ence on empirical methods in natural language pro- cessing (EMNLP), pages 1532-1543.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "From logical to distributional models",
"authors": [
{
"first": "Anne",
"middle": [],
"last": "Preller",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1412.8527"
]
},
"num": null,
"urls": [],
"raw_text": "Anne Preller. 2014. From logical to distributional mod- els. arXiv preprint arXiv:1412.8527.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Evaluation of spoken language systems: The atis domain",
"authors": [
{
"first": "J",
"middle": [],
"last": "Patti",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Price",
"suffix": ""
}
],
"year": 1990,
"venue": "Speech and Natural Language: Proceedings of a Workshop Held at Hidden Valley",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patti J Price. 1990. Evaluation of spoken language sys- tems: The atis domain. In Speech and Natural Lan- guage: Proceedings of a Workshop Held at Hidden Valley, Pennsylvania, June 24-27, 1990.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Recurrent neural network and lstm models for lexical utterance classification",
"authors": [
{
"first": "V",
"middle": [],
"last": "Suman",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Ravuri",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Stolcke",
"suffix": ""
}
],
"year": 2015,
"venue": "INTERSPEECH",
"volume": "",
"issue": "",
"pages": "135--139",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Suman V Ravuri and Andreas Stolcke. 2015. Re- current neural network and lstm models for lexical utterance classification. In INTERSPEECH, pages 135-139.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Hidden understanding models for statistical sentence understanding",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Scott",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Stallard",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Makhoul",
"suffix": ""
}
],
"year": 1997,
"venue": "Acoustics, Speech, and Signal Processing",
"volume": "2",
"issue": "",
"pages": "1479--1482",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Schwartz, Scott Miller, David Stallard, and John Makhoul. 1997. Hidden understanding models for statistical sentence understanding. In Acoustics, Speech, and Signal Processing, 1997. ICASSP-97., 1997 IEEE International Conference on, volume 2, pages 1479-1482. IEEE.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Convolutional neural network based triangular crf for joint intent detection and slot filling",
"authors": [
{
"first": "Puyang",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Ruhi",
"middle": [],
"last": "Sarikaya",
"suffix": ""
}
],
"year": 2013,
"venue": "Automatic Speech Recognition and Understanding (ASRU)",
"volume": "",
"issue": "",
"pages": "78--83",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Puyang Xu and Ruhi Sarikaya. 2013. Convolutional neural network based triangular crf for joint in- tent detection and slot filling. In Automatic Speech Recognition and Understanding (ASRU), 2013 IEEE Workshop on, pages 78-83. IEEE.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Visualization of semantic vectors through training process. The plotted points are v t from the text reader by t-sne processing in the testing sentences. The different colors and shape combinations represent different intent tags."
},
"TABREF3": {
"content": "<table/>",
"html": null,
"text": "Hyperparameters of the model.",
"type_str": "table",
"num": null
},
"TABREF4": {
"content": "<table><tr><td colspan=\"2\">No. Text</td><td/><td colspan=\"2\">Semantic Frame</td></tr><tr><td/><td colspan=\"3\">\"Show Delta Airlines from Boston to Salt Lake\"</td><td/></tr><tr><td/><td/><td/><td colspan=\"2\">ATIS FLIGHT</td></tr><tr><td>1</td><td>Show Delta Airlines flights from Boston to Salt Lake</td><td>1</td><td>airline name fromloc.city name toloc.city name</td><td>Delta Airlines Boston Salt Lake</td></tr><tr><td/><td/><td/><td colspan=\"2\">ATIS FLIGHT</td></tr><tr><td>2</td><td>Show Delta Airlines flights from Boston to Salt Lake City</td><td>2</td><td colspan=\"2\">airline name American Airlines fromloc.city name Phonenix toloc.city name Milwaukee</td></tr><tr><td/><td/><td/><td colspan=\"2\">ATIS FLIGHT</td></tr><tr><td>3</td><td>List Delta flights from Seattle to Salt Lake City</td><td>3</td><td>airline name fromloc.city name toloc.city name</td><td>Delta Airlines Montreal Orlando</td></tr><tr><td/><td colspan=\"3\">(a) Text as Query</td><td/></tr><tr><td colspan=\"2\">No. 
Text</td><td/><td colspan=\"2\">Semantic Frame</td></tr><tr><td/><td colspan=\"4\">[ATIS FLIGHT] flight mod(last), depart date.day name(Wednesday),</td></tr><tr><td/><td colspan=\"3\">fromloc.city name(Oakland), toloc.city name(Salt Lake City)</td><td/></tr><tr><td/><td/><td/><td colspan=\"2\">ATIS FLIGHT</td></tr><tr><td/><td/><td/><td>flight mod</td><td>last</td></tr><tr><td>1</td><td>Get last flight from Oakland to Salt Lake</td><td>1</td><td>depart date.day name</td><td>Wednesday</td></tr><tr><td/><td>City on Wednesday</td><td/><td>fromloc.city name toloc.city name</td><td>Oakland Salt Lake City</td></tr><tr><td/><td/><td/><td colspan=\"2\">ATIS FLIGHT</td></tr><tr><td/><td/><td/><td>flight mod</td><td>first</td></tr><tr><td>2</td><td>Get last flight from Oakland to Salt Lake</td><td>2</td><td>depart date.day name</td><td>Thursday</td></tr><tr><td/><td>City on Wednesday or first flight from Oak-land to Salt Lake City on Thursday</td><td/><td>fromloc.city name toloc.city name</td><td>Oakland Salt Lake City</td></tr><tr><td/><td/><td/><td colspan=\"2\">ATIS FLIGHT</td></tr><tr><td/><td/><td/><td>flight mod</td><td>last</td></tr><tr><td/><td/><td/><td>depart date.day name</td><td>Wednesday</td></tr><tr><td/><td/><td/><td>fromloc.city name</td><td>Oakland</td></tr><tr><td>3</td><td>Get first flight from Oakland to Salt Lake City on Thursday</td><td>3</td><td>toloc.city name or flight mod</td><td>Salt Lake City or first</td></tr><tr><td/><td/><td/><td>depart date.day name</td><td>Thursday</td></tr><tr><td/><td/><td/><td>fromloc.city name</td><td>Oakland</td></tr><tr><td/><td/><td/><td>toloc.city name</td><td>Salt Lake City</td></tr><tr><td/><td colspan=\"3\">(b) Semantic Frame as Query</td><td/></tr></table>",
"html": null,
"text": "Retrieved results have almost same intent tag and slot-tags, but have dif-",
"type_str": "table",
"num": null
},
"TABREF7": {
"content": "<table/>",
"html": null,
"text": "Multiple NLU systems for re-ranking.",
"type_str": "table",
"num": null
}
}
}
}