{
"paper_id": "P03-1032",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:13:46.490706Z"
},
"title": "Extracting Key Semantic Terms from Chinese Speech Query for Web Searches",
"authors": [
{
"first": "Gang",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National University of Singapore",
"location": {}
},
"email": "wanggang_sh@hotmail.com"
},
{
"first": "Tat-Seng",
"middle": [],
"last": "Chua",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "National University of Singa-pore",
"location": {}
},
"email": "chuats@comp.nus.edu.sg"
},
{
"first": "Yong-Cheng",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Shanghai Jiao Tong Univer-sity",
"location": {
"postCode": "200030",
"country": "China"
}
},
"email": "ycwang@mail.sjtu.edu.cn"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper discusses the challenges and proposes a solution to performing information retrieval on the Web using Chinese natural language speech query. The main contribution of this research is in devising a divide-and-conquer strategy to alleviate the speech recognition errors. It uses the query model to facilitate the extraction of main core semantic string (CSS) from the Chinese natural language speech query. It then breaks the CSS into basic components corresponding to phrases, and uses a multi-tier strategy to map the basic components to known phrases in order to further eliminate the errors. The resulting system has been found to be effective.",
"pdf_parse": {
"paper_id": "P03-1032",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper discusses the challenges and proposes a solution to performing information retrieval on the Web using Chinese natural language speech query. The main contribution of this research is in devising a divide-and-conquer strategy to alleviate the speech recognition errors. It uses the query model to facilitate the extraction of main core semantic string (CSS) from the Chinese natural language speech query. It then breaks the CSS into basic components corresponding to phrases, and uses a multi-tier strategy to map the basic components to known phrases in order to further eliminate the errors. The resulting system has been found to be effective.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "We are entering an information era, where information has become one of the major resources in our daily activities. With its wide spread adoption, Internet has become the largest information wealth for all to share. Currently, most (Chinese) search engines can only support term-based information retrieval, where the users are required to enter the queries directly through keyboards in front of the computer. However, there is a large segment of population in China and the rest of the world who are illiterate and do not have the skills to use the computer. They are thus unable to take advantage of the vast amount of freely available information. Since almost every person can speak and understand spoken language, the research on \"(Chinese) natural language speech query retrieval\" would enable average persons to access information using the current search engines without the need to learn special computer skills or training. They can simply access the search engine using common devices that they are familiar with such as the telephone, PDA and so on.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In order to implement a speech-based information retrieval system, one of the most important challenges is how to obtain the correct query terms from the spoken natural language query that convey the main semantics of the query. This requires the integration of natural language query processing and speech recognition research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Natural language query processing has been an active area of research for many years and many techniques have been developed (Jacobs and Rau1993; Kupie, 1993; Strzalkowski, 1999; Yu et al, 1999) . Most of these techniques, however, focus only on written language, with few devoted to the study of spoken language query processing.",
"cite_spans": [
{
"start": 125,
"end": 145,
"text": "(Jacobs and Rau1993;",
"ref_id": null
},
{
"start": 146,
"end": 158,
"text": "Kupie, 1993;",
"ref_id": null
},
{
"start": 159,
"end": 178,
"text": "Strzalkowski, 1999;",
"ref_id": "BIBREF9"
},
{
"start": 179,
"end": 194,
"text": "Yu et al, 1999)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Speech recognition involves the conversion of acoustic speech signals to a stream of text. Because of the complexity of human vocal tract, the speech signals being observed are different, even for multiple utterances of the same sequence of words by the same person (Lee et al 1996) . Furthermore, the speech signals can be influenced by the differences across different speakers, dialects, transmission distortions, and speaking environments. These have contributed to the noise and variability of speech signals. As one of the main sources of errors in Chinese speech recognition come from substitution (Wang 2002; Zhou 1997) , in which a wrong but similar sounding term is used in place of the correct term, confusion matrix has been used to record confused sound pairs in an attempt to eliminate this error. Confusion matrix has been employed effectively in spoken document retrieval (Singhal et al, 1999 and Srinivasan et al 2000) and to minimize speech recognition errors (Shen et al, 1998) . However, when such method is used directly to correct speech recognition errors, it tends to bring in too many irrelevant terms (Ng 2000) .",
"cite_spans": [
{
"start": 266,
"end": 282,
"text": "(Lee et al 1996)",
"ref_id": "BIBREF4"
},
{
"start": 605,
"end": 616,
"text": "(Wang 2002;",
"ref_id": "BIBREF10"
},
{
"start": 617,
"end": 627,
"text": "Zhou 1997)",
"ref_id": "BIBREF13"
},
{
"start": 888,
"end": 912,
"text": "(Singhal et al, 1999 and",
"ref_id": "BIBREF8"
},
{
"start": 913,
"end": 935,
"text": "Srinivasan et al 2000)",
"ref_id": null
},
{
"start": 978,
"end": 996,
"text": "(Shen et al, 1998)",
"ref_id": "BIBREF7"
},
{
"start": 1127,
"end": 1136,
"text": "(Ng 2000)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Because important terms in a long document are often repeated several times, there is a good chance that such terms will be correctly recognized at least once by a speech recognition engine with a reasonable level of word recognition rate. Many spoken document retrieval (SDR) systems took advantage of this fact in reducing the speech recognition and matching errors . In contrast to SDR, very little work has been done on Chinese spoken query processing (SQP), which is the use of spoken queries to retrieval textual documents. Moreover, spoken queries in SQP tend to be very short with few repeated terms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we aim to integrate the spoken language and natural language research to process spoken queries with speech recognition errors. The main contribution of this research is in devising a divide-and-conquer strategy to alleviate the speech recognition errors. It first employs the Chinese query model to isolate the Core Semantic String (CSS) that conveys the semantics of the spoken query. It then breaks the CSS into basic components corresponding to phrases, and uses a multitier strategy to map the basic components to known phrases in a dictionary in order to further eliminate the errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the rest of this paper, an overview of the proposed approach is introduced in Section 2. Section 3 describes the query model, while Section 4 outlines the use of multi-tier approach to eliminate errors in CSS. Section 5 discusses the experimental setup and results. Finally, Section 6 contains our concluding remarks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There are many challenges in supporting surfing of Web by speech queries. One of the main challenges is that the current speech recognition technology is not very good, especially for average users that do not have any speech trainings. For such unlimited user group, the speech recognition engine could achieve an accuracy of less than 50%. Because of this, the key phrases we derived from the speech query could be in error or missing the main semantic of the query altogether. This would affect the effectiveness of the resulting system tremendously.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview of the proposed approach",
"sec_num": "2"
},
{
"text": "Given the speech-to-text output with errors, the key issue is on how to analyze the query in order to grasp the Core Semantic String (CSS) as accurately as possible. CSS is defined as the key term sequence in the query that conveys the main semantics of the query. For example, given the query: \"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview of the proposed approach",
"sec_num": "2"
},
{
"text": "\u00a2 \u00a1 \u00a4 \u00a3 \u00a6 \u00a5 \u00a6 \u00a7 \u00a9 \u00a6 \u00a6 \u00a4 \u00a6 \u00a9 \u00a6 \u00a6 ! # \" \u00a6 $ & % # ' ( # ) \u00a9 0 2 1 \u00a6 3 # 4 2 5 # 6 \u00a9 7 # 8 @ 9 \u00a6 A",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview of the proposed approach",
"sec_num": "2"
},
{
"text": "\" (Please tell me the information on how the U.S. separates the most-favored-nation status from human rights issue in china). The CSS in the query is underlined. We can segment the CSS into several basic components that correspond to key concepts such as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview of the proposed approach",
"sec_num": "2"
},
{
"text": "B (U.S.), (China), \u00a2 ! C \" (human rights issue), 3 # 4 2 5 \u00a9 6",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview of the proposed approach",
"sec_num": "2"
},
{
"text": "(the most-favored-nation status) and 7 \u00a6 8 (separate). Because of the difficulty in handling speech recognition errors involving multiple segments of CSSs, we limit our research to queries that contain only one CSS string. However, we allow a CSS to include multiple basic components as depicted in the above example. This is reasonable as most queries posed by the users on the Web tend to be short with only a few characters (Pu 2000) .",
"cite_spans": [
{
"start": 427,
"end": 436,
"text": "(Pu 2000)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Overview of the proposed approach",
"sec_num": "2"
},
{
"text": "Thus the accurate extraction of CSS and its separation into basic components is essential to alleviate the speech recognition errors. First of all, isolating CSS from the rest of speech enables us to ignore errors in other parts of speech, such as the greetings and polite remarks, which have no effects on the outcome of the query. Second, by separating the CSS into basic components, we can limit the propagation of errors, and employ the set of known phrases in the domain to help correct the errors in these components separately. To achieve this, we process the query in three main stages as illustrated in Figure 1 . First, given the user's oral query, the system uses a speech recognition engine to convert the speech to text. Second, we analyze the query using a query model (QM) to extract CSS from the query with minimum errors. QM defines the structures and some of the standard phrases used in typical queries. Third, we divide the CSS into basic components, and employ a multi-tier approach to match the ba-QM",
"cite_spans": [],
"ref_spans": [
{
"start": 612,
"end": 620,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Overview of the proposed approach",
"sec_num": "2"
},
{
"text": "Query CSS sic components to the nearest known phrases in order to correct the speech recognition errors. The aim here is to improve recall without excessive lost in precision. The resulting key components are then used as query to standard search engine. The following sections describe the details of our approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Speech",
"sec_num": null
},
{
"text": "Query model (QM) is used to analyze the query and extract the core semantic string (CSS) that contains the main semantic of the query. There are two main components for a query model. The first is query component dictionary, which is a set of phrases that has certain semantic functions, such as the polite remarks, prepositions, time etc. The other component is the query structure, which defines a sequence of acceptable semantically tagged tokens, such as \"Begin, Core Semantic String, Question Phrase, and End\". Each query structure also includes its occurrence probability within the query corpus. Table 2 gives some examples of query structures.",
"cite_spans": [],
"ref_spans": [
{
"start": 603,
"end": 610,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Query Model (QM)",
"sec_num": "3"
},
{
"text": "In order to come up with a set of generalized query structures, we use a query log of typical queries posed by users. The query log consists of 557 queries, collected from twenty-eight human subjects at the Shanghai Jiao Tong University (Ying 2002) . Each subject is asked to pose 20 separate queries to retrieve general information from the Web.",
"cite_spans": [
{
"start": 237,
"end": 248,
"text": "(Ying 2002)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Query Model Generation",
"sec_num": "3.1"
},
{
"text": "After analyzing the queries, we derive a query model comprising 51 query structures and a set of query components. For each query structure, we compute its probability of occurrence, which is used to determine the more likely structure containing CSS in case there are multiple CSSs found. As part of the analysis of the query log, we classify the query components into ten classes, as listed in Table 1 . These ten classes are called semantic tags. They can be further divided into two main categories: the closed class and open class. Closed classes are those that have relatively fixed word lists. These include question phrases, quantifiers, polite remarks, prepositions, time and commonly used verb and subject-verb phrases. We collect all the phrases belonging to closed classes from the query log and store them in the query component dictionary. The open class is the CSS, which we do not know in advance. CSS typically includes person's names, events and country's names etc. Given the set of sample queries, a heuristic rulebased approach is used to analyze the queries, and break them into basic components with assigned semantic tags by matching the words listed in Table 1. Any sequences of words or phrases not found in the closed class are tagged as CSS (with Semantic Tag 9). We can thus derive the query structures of the form given in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 396,
"end": 403,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 1353,
"end": 1360,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Query Model Generation",
"sec_num": "3.1"
},
{
"text": "Due to speech recognition errors, we do not expect the query components and hence the query structure to be recognized correctly. Instead, we parse the query structure in order to isolate and extract CSS. To facilitate this, we employ the Finite State Automata (FSA) to model the query structure. FSA models the expected sequences of tokens in typical queries and annotate the semantic tags, including CSS. A FSA is defined for each of the 51 query structures. An example of FSA is given in Figure 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 491,
"end": 499,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Modeling of Query Structure as FSA",
"sec_num": "3.2"
},
{
"text": "Because CSS is an open set, we do not know its content in advance. Instead, we use the following two rules to determine the candidates for CSS: (a) it is an unknown string not present in the Query Component Dictionary; and (b) its length is not less than two, as the average length of concepts in Chinese is greater than one (Wang 1992) .",
"cite_spans": [
{
"start": 325,
"end": 336,
"text": "(Wang 1992)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling of Query Structure as FSA",
"sec_num": "3.2"
},
{
"text": "At each stage of parsing the query using FSA (Hobbs et al 1997) , we need to make decision on which state to proceed and how to handle unexpected tokens in the query. Thus at each stage, FSA needs to perform three functions: a) Goto function: It maps a pair consisting of a state and an input symbol into a new state or the fail state. We use G(N,X) =N' to define the goto function from State N to State N', given the occurrence of token X. b) Fail function: It is consulted whenever the goto function reports a failure when encountering an unexpected token. We use f(N) =N' to represent the fail function. c) Output function: In the FSA, certain states are designated as output states, which indicate that a sequence of tokens has been found and are tagged with the appropriate semantic tag. To construct a goto function, we begin with a graph consisting of one vertex which represents State 0.We then enter each token X into the graph by adding a directed path to the graph that begins at the start state. New vertices and edges are added to the graph so that there will be, starting at the start state, a path in the graph that spells out the token X. The token X is added to the output function of the state at which the path terminates.",
"cite_spans": [
{
"start": 45,
"end": 63,
"text": "(Hobbs et al 1997)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling of Query Structure as FSA",
"sec_num": "3.2"
},
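The goto/output construction above is essentially an Aho-Corasick-style keyword trie. A minimal Python sketch, using toy ASCII phrases in place of the paper's Chinese dictionary; the phrase tags and the CSS rules (unknown string, length at least 2, tag 9) are from Section 3.2, while the greedy scanner restarting at the root is a simplification of the fail function:

```python
def build_goto(phrases):
    """Build the goto graph (a trie) and the output function from a
    phrase -> semantic-tag dictionary, as described in the text."""
    goto = [{}]          # goto[state][symbol] -> next state
    output = {}          # output[state] -> (phrase, semantic tag)
    for phrase, tag in phrases.items():
        s = 0
        for ch in phrase:
            if ch not in goto[s]:
                goto.append({})
                goto[s][ch] = len(goto) - 1
            s = goto[s][ch]
        output[s] = (phrase, tag)    # path terminates: record output
    return goto, output

def parse_query(query, goto, output):
    """Greedy longest-match scan. When the goto function fails, the
    unmatched symbols are buffered; a buffered run of length >= 2 that
    is not in the dictionary becomes a CSS candidate (semantic tag 9)."""
    tokens, unknown, i = [], "", 0
    while i < len(query):
        s, j, last = 0, i, None
        while j < len(query) and query[j] in goto[s]:
            s = goto[s][query[j]]
            j += 1
            if s in output:
                last = (j, output[s])        # longest match so far
        if last is not None:
            if len(unknown) >= 2:
                tokens.append((unknown, 9))  # flush CSS candidate
            unknown = ""
            end, (phrase, tag) = last
            tokens.append((phrase, tag))
            i = end
        else:
            unknown += query[i]              # goto failed on this symbol
            i += 1
    if len(unknown) >= 2:
        tokens.append((unknown, 9))
    return tokens

# Toy stand-ins: "abc" ~ a polite remark (tag 6), "de" ~ a quantifier (tag 4)
goto, output = build_goto({"abc": 6, "de": 4})
print(parse_query("abcXYde", goto, output))  # [('abc', 6), ('XY', 9), ('de', 4)]
```

Here "XY" plays the role of the unknown string that the dictionary cannot explain, so it is emitted as a CSS candidate, mirroring the Bin Laden example that follows.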
{
"text": "For example, suppose that our Query Component Dictionary consists of seven phrases as follows: \" \u00a1 \u00a3 \u00a2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling of Query Structure as FSA",
"sec_num": "3.2"
},
{
"text": "(please help me);",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling of Query Structure as FSA",
"sec_num": "3.2"
},
{
"text": "\u00a4 \u00a6 \u00a5 (some); \u00a3 (about); \u00a7 \u00a6 (news); \u00a9 (collect); \u00a2 (tell me); \u00a3 # \u00a5 # \u00a7",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling of Query Structure as FSA",
"sec_num": "3.2"
},
{
"text": "(what do you have)\". Adding these tokens into the graph will result in a FSA as shown in Figure 2 . The path from State 0 to State 3 spells out the phrase \" \u00a2 (Please help me)\", and on completion of this path, we associate its output with semantic tag 6. Similarly, the output of \"\u00a4 \u00a5 (some)\" is associated with State 5, and semantic tag 4, and so on.",
"cite_spans": [],
"ref_spans": [
{
"start": 89,
"end": 97,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Modeling of Query Structure as FSA",
"sec_num": "3.2"
},
{
"text": "We now use an example to illustrate the process of parsing the query. Suppose the user issues a speech query: \" \u00a2 \u00a9 \u00a4 \u00a5 \u00a3 \u00a6 \u00a7\u00a8 \" (please help me to collect some information about Bin Laden). However, the result of speech recognition with errors is: \" (please) (help)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling of Query Structure as FSA",
"sec_num": "3.2"
},
{
"text": "\u00a2 (me) \u00a9 (receive) ! (send) \u00a4 \u00a5 (some) \u00a3 \u00a6 (about) \" (half) (pull) # (light) (of) \u00a7 \u00a6 (news)\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling of Query Structure as FSA",
"sec_num": "3.2"
},
{
"text": "Note that there are 4 mis-recognized characters which are underlined.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling of Query Structure as FSA",
"sec_num": "3.2"
},
{
"text": "Note : indicates the semantic tag. The FSA begins with State 0. When the system encounters the sequence of characters (please) (help) \u00a2 (me), the state changes from 0 to 1, 2 and eventually to 3. At State 3, the system recognizes a polite remark phrase and output a token with semantic tag 6. Next, the system meets the character \u00a9 (receive), it will transit to State 10, because of g(0, \u00a9 )=10. When the system sees the next character ! (send), which does not have a corresponding transition rule, the goto function reports a failure. Because the length of the string is 2 and the string is not in the Query Component Dictionary, the semantic tag 9 is assigned to token\"\u00a9 ! \" according to the definition of CSS.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling of Query Structure as FSA",
"sec_num": "3.2"
},
{
"text": "By repeating the above process, we obtain the following result: \u00a2 \u00a9 ! \u00a4 \u00a5 \u00a3 \u00a9 \" # \u00a7 \u00a3 6 9 4 7 9 3 Here the semantic tags are as defined in Table 1 . It is noted that because of speech recognition errors, the system detected two CSSs, and both of them contain speech recognition errors.",
"cite_spans": [],
"ref_spans": [
{
"start": 139,
"end": 146,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Modeling of Query Structure as FSA",
"sec_num": "3.2"
},
{
"text": "Given that we may find multiple CSSs, the next stage is to analyze the CSSs found along with their surrounding context in order to determine the most probable CSS. The approach is based on the premise that choosing the best sense for an input vector amounts to choosing the most probable sense given that vector. The input vector i has three components: left context (L i ), the CSS itself (CSS i ), and right context (R i ). The probability of such a structure occurring in the Query Model is as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CSS Extraction by Query Model",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "= = n j j ij i p C s 0 ) * (",
"eq_num": "(1)"
}
],
"section": "CSS Extraction by Query Model",
"sec_num": "3.3"
},
{
"text": "where C ij is set to 1 if the input vector i (L i , R i ) matches the two corresponding left and right CSS context of the query structure j, and 0 otherwise. p j is the possibility of occurrence of the j th query structure, and n is the total number of the structures in the Query Model. Note that Equation 1gives a detected CSS higher weight if it matches to more query structures with higher occurrence probabilities. We simply select the best CSS i such that",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CSS Extraction by Query Model",
"sec_num": "3.3"
},
{
"text": ") ( max arg i i s according to Eqn(1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CSS Extraction by Query Model",
"sec_num": "3.3"
},
{
"text": "For illustration, let's consider the above example with 2 detected CSSs. The two CSS vectors are: [6, 9, 4] and [7, 9, 3] . From the Query Model, we know that the probability of occurrence, p j , of structure [6, 9, 4] is 0, and that of structure [7, 9, 3] is 0.03, with the latter matches to only one structure. Hence the s i values for them are 0 and 0.03 respectively. Thus the most probable core semantic structure is [7, 9, 3] and the CSS \"",
"cite_spans": [
{
"start": 98,
"end": 121,
"text": "[6, 9, 4] and [7, 9, 3]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "CSS Extraction by Query Model",
"sec_num": "3.3"
},
{
"text": "\" (half) (pull) # (light)\" is extracted.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CSS Extraction by Query Model",
"sec_num": "3.3"
},
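Eqn. (1) and the selection rule above amount to a few lines of code. A sketch under the assumption that each query structure is summarized by its (left tag, right tag) CSS context and its occurrence probability p_j; the structure list and probabilities below are taken from the worked example, not the full 51-structure model:

```python
def score_css(candidates, structures):
    """Eqn (1): s_i = sum_j C_ij * p_j, where C_ij = 1 iff candidate i's
    (left, right) context matches structure j's CSS context."""
    scores = []
    for left, right in candidates:
        s_i = sum((p for (l, r), p in structures if (l, r) == (left, right)), 0.0)
        scores.append(s_i)
    return scores

# Worked example: detected CSS vectors [6, 9, 4] and [7, 9, 3];
# only the structure with context (7, 3) occurs, with p = 0.03.
structures = [((7, 3), 0.03)]
candidates = [(6, 4), (7, 3)]
scores = score_css(candidates, structures)
best = candidates[max(range(len(scores)), key=scores.__getitem__)]
print(scores, best)  # [0.0, 0.03] (7, 3)
```

As in the paper's example, the [7, 9, 3] candidate wins and its CSS is passed on to the next stage.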
{
"text": "Because of speech recognition error, the CSS obtained is likely to contain error, or in the worse case, missing the main semantics of the query altogether. We now discuss how we alleviate the errors in CSS for the former case. We will first break the CSS into one or more basic semantic parts, and then apply the multi-tier method to map the query components to known phrases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Query Terms Generation",
"sec_num": "4"
},
{
"text": "In many cases, the CSS obtained may be made up of several semantic components equivalent to base noun phrases. Here we employ a technique based on Chinese cut marks (Wang 1992) to perform the segmentation. The Chinese cut marks are tokens that can separate a Chinese sentence into several semantic parts. Zhou (1997) used such technique to detect new Chinese words, and reported good results with precision and recall of 92% and 70% respectively. By separating the CSS into basic key components, we can limit the propagation of errors.",
"cite_spans": [
{
"start": 165,
"end": 176,
"text": "(Wang 1992)",
"ref_id": "BIBREF11"
},
{
"start": 305,
"end": 316,
"text": "Zhou (1997)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Breaking CSS into Basic Components",
"sec_num": "4.1"
},
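The cut-mark segmentation described above can be sketched as a simple split. The separator set here is a hypothetical ASCII stand-in; the actual Chinese cut-mark list of Wang (1992) is not reproduced:

```python
def split_on_cut_marks(css, cut_marks):
    """Split a CSS string into basic components at cut-mark tokens.
    `cut_marks` is a set of single-character separators (a toy stand-in
    for the Chinese cut-mark list)."""
    parts, cur = [], ""
    for ch in css:
        if ch in cut_marks:
            if cur:
                parts.append(cur)
            cur = ""
        else:
            cur += ch
    if cur:
        parts.append(cur)
    return parts

# Toy example: '|' and '/' act as cut marks between basic components.
print(split_on_cut_marks("usa|china/rights", {"|", "/"}))  # ['usa', 'china', 'rights']
```

Each resulting part is then mapped to a known phrase independently, which is what limits error propagation.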
{
"text": "In order to further eliminate the speech recognition errors, we propose a multi-tier approach to map the basic components in CSS into known phrases by using a combination of matching techniques. To do this, we need to build up a phrase dictionary containing typical concepts used in general and specific domains. Most basic CSS components should be mapped to one of these phrases. Thus even if a basic component contains errors, as long as we can find a sufficiently similar phrase in the phrase dictionary, we can use this in place of the erroneous CSS component, thus eliminating the errors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-tier query term mapping",
"sec_num": "4.2"
},
{
"text": "We collected a phrase dictionary containing about 32,842 phrases, covering mostly base noun phrase and named entity. The phrases are derived from two sources. We first derived a set of common phrases from the digital dictionary and the logs in the search engine used at the Shanghai Jiao Tong University. We also derived a set of domain specific phrases by extracting the base noun phrases and named entities from the on-line news articles obtained during the period. This approach is reasonable as in practice we can use recent web or news articles to extract concepts to update the phrase dictionary. Given the phrase dictionary, the next problem then is to map the basic CSS components to the nearest phrases in the dictionary. As the basic components may contain errors, we cannot match them exactly just at the character level. We thus propose to match each basic component with the known phrases in the dictionary at three levels: (a) character level; (b) syllable string level; and (c) confusion syllable string level. The purpose of matching at levels b and c is to overcome the homophone problem in CSS. For example, \" (Laden)\" is wrongly recognized as \" # (pull lamp)\" by the speech recognition engine. Such errors cannot be re-solved at the character matching level, but it can probably be matched at the syllable string level. The confusion matrix is used to further reduce the effect of speech recognition errors due to similar sounding characters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-tier query term mapping",
"sec_num": "4.2"
},
{
"text": "To account for possible errors in CSS components, we perform similarity, instead of exact, matching at the three levels. Given the basic CSS component q i , and a phrase c j in the dictionary, we compute:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-tier query term mapping",
"sec_num": "4.2"
},
{
"text": "= = ) , ( 0 * |} | |, max{| ) , ( ) , ( i i c q LCS k k i i i i i i M c q c q LCS c q Sim (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-tier query term mapping",
"sec_num": "4.2"
},
{
"text": "where LCS(q i ,c j ) gives the number of characters/ syllable matched between q i and c i in the order of their appearance using the longest common subsequence matching (LCS) algorithm (Cormen et al 1990) . M k is introduced to accounts for the similarity between the two matching units, and is dependent on the level of matching. If the matching is performed at the character or syllable string levels, the basic matching unit is one character or one syllable and the similarity between the two matching units is 1. If the matching is done at the confusion syllable string level, M k is the corresponding coefficients in the confusion matrix. Hence LCS (q i ,c j )",
"cite_spans": [
{
"start": 185,
"end": 204,
"text": "(Cormen et al 1990)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-tier query term mapping",
"sec_num": "4.2"
},
{
"text": "gives the degree of match between q i and c j , normalized by the maximum length of q i or c j ; and \u03a3M gives the degree of similarity between the units being matched.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-tier query term mapping",
"sec_num": "4.2"
},
{
"text": "The three level of matching also ranges from being more exact at the character level, to less exact at the confusion syllable level. Thus if we can find a relevant phrase with sim(q i ,c j )>\u00a1 at the higher character level, we will not perform further matching at the lower levels. Otherwise, we will relax the constraint to perform the matching at successively lower levels, probably at the expense of precision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-tier query term mapping",
"sec_num": "4.2"
},
{
"text": "The detail of algorithm is listed as follows: Input: Basic CSS Component, q i a. Match q i with phrases in dictionary at character level using Eqn.(2). b. If we cannot find a match, then match q i with phrases at the syllable level using Eqn.(2). c. If we still cannot find a match, match q i with phrases at the confusion syllable level using Eqn.(2). d. If we found a match, set q' i =c j ; otherwise set q' i =q i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-tier query term mapping",
"sec_num": "4.2"
},
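The similarity of Eqn. (2) and steps (a)-(d) can be sketched together. The `to_syllables` transliteration function, the toy syllable table, and the threshold value are assumptions (the paper gives neither); the character and syllable tiers use unit similarity M_k = 1, and the confusion-syllable tier, which would draw M_k from the confusion matrix, is omitted for brevity:

```python
def lcs_len(a, b):
    """Length of the longest common subsequence (standard DP)."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if x == y else max(dp[i-1][j], dp[i][j-1])
    return dp[len(a)][len(b)]

def sim(q, c):
    """Eqn (2) with unit similarity M_k = 1 for every matched unit:
    Sim(q, c) = LCS(q, c) / max(|q|, |c|) * sum_k M_k."""
    l = lcs_len(q, c)
    return l / max(len(q), len(c)) * l

def map_component(q, dictionary, to_syllables, threshold=1.5):
    """Steps (a)-(d): try the character tier, then the syllable tier;
    fall back to the component itself when nothing clears the threshold."""
    for view in (lambda s: s, to_syllables):       # tiers (a) and (b)
        best = max(dictionary, key=lambda c: sim(view(q), view(c)))
        if sim(view(q), view(best)) > threshold:
            return best                            # step (d): matched
    return q                                       # step (d): no match

# Toy homophone fix: "pull lamp" shares syllables with "Laden", not characters.
syll = {"laden": "la deng", "pulllamp": "la deng", "taliban": "ta li ban"}
result = map_component("pulllamp", ["laden", "taliban"], lambda s: syll.get(s, s))
print(result)  # laden
```

Character-level matching fails here (low LCS overlap), but the syllable view maps the mis-recognized component back to the intended phrase, which is exactly the behavior the multi-tier cascade is designed for.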
{
"text": "For example, given a query: \" \u00a2 \u00a4 \u00a3 \u00a5 \u00a6 \u00a7 \u00a9 \" (please tell me some news about Iraq). If the query is wrongly recognized as \"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-tier query term mapping",
"sec_num": "4.2"
},
{
"text": "\" ! # % $ & \u00a2 ' \u00a7",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-tier query term mapping",
"sec_num": "4.2"
},
{
"text": "\". If, however, we could correctly extract the CSS \" \" (",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-tier query term mapping",
"sec_num": "4.2"
},
{
"text": "from this mis-recognized query, then we could ignore the speech recognition errors in other parts of the above query. Even if there are errors in the CSS extracted, such as \" ) (chen) 0 \" 1 (waterside)\" instead of \" ) \u00a4 0 \" 2 (chen shui bian)\", we could apply the syllable string level matching to correct the homophone errors. For CSS errors such as \"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-tier query term mapping",
"sec_num": "4.2"
},
{
"text": "3 (corrupt) \u00a4 \" 4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-tier query term mapping",
"sec_num": "4.2"
},
{
"text": "(usually)\" instead of the correct CSS \"5 6 7",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-tier query term mapping",
"sec_num": "4.2"
},
{
"text": "(Taliban)\", which could not be corrected at the syllable string matching level, we could apply the confusion syllable string matching to overcome this error.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Multi-tier query term mapping",
"sec_num": "4.2"
},
{
"text": "As our system aims to correct the errors and extract CSS components in spoken queries, it is important to demonstrate that our system is able to handle queries of different characteristics. To this end, we devised two sets of test queries as follows. a) Corpus with short queries",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and analysis",
"sec_num": "5"
},
{
"text": "We devised 10 queries, each containing a CSS with only one basic component. This is the typical type of queries posed by the users on the web. We asked 10 different people to \"speak\" the queries, and used the IBM ViaVoice 98 to perform the speech to text conversion. This gives rise to a collection of 100 spoken queries. There is a total of 1,340 Chinese characters in the test queries with a speech recognition error rate of 32.5%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments and analysis",
"sec_num": "5"
},
{
"text": "In order to test on queries used in standard test corpuses, we adopted the query topics (1-10) employed in TREC-5 Chinese-Language track. Here each query contains more than one key semantic component. We rephrased the queries into natural language query format, and asked twelve subjects to \"read\" the queries. We again used the IBM ViaVoice 98 to perform the speech recognition on the resulting 120 different spoken queries, giving rise to a total of 2,354 Chinese characters with a speech recognition error rate of 23.75%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "b) Corpus with long queries",
"sec_num": null
},
{
"text": "We devised two experiments to evaluate the performance of our techniques. The first experiment was designed to test the effectiveness of our query model in extracting CSSs. The second was designed to test the accuracy of our overall system in extracting basic query components.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "b) Corpus with long queries",
"sec_num": null
},
{
"text": "The test results show that by using our query model, we could correctly extract 99% and 96% of CSSs from the spoken queries for the short and long query category respectively. The errors are mainly due to the wrong tagging of some query components, which caused the query model to miss the correct query structure, or match to a wrong structure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test 1: Accuracy of extracting CSSs",
"sec_num": "5.1"
},
{
"text": "For example: given the query \"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test 1: Accuracy of extracting CSSs",
"sec_num": "5.1"
},
{
"text": "\u00a2 \u00a1 \u00a3 \u00a6 \u00a5 # \u00a7 \u00a9 \u00a9 5 6 \" 7 \u00a7 \u00a3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test 1: Accuracy of extracting CSSs",
"sec_num": "5.1"
},
{
"text": "\" (please tell me some news about Taliban). If it is wrongly recognized as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test 1: Accuracy of extracting CSSs",
"sec_num": "5.1"
},
{
"text": ") ! \u00a1 \u00a3 \u00a2 \u00a5 \u00a4 \u00a8 \u00a6 \u00a6 \" 6",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test 1: Accuracy of extracting CSSs",
"sec_num": "5.1"
},
{
"text": "7 \u00a7 \u00a5 9 7 9 10 which is a nonsensical sentence. Since the probabilities of occurrence both query structures [0, 9, 7] and [7, 9, 10] are 0, we could not find the CSS at all. This error is mainly due to the mis-recognition of the last query component \" \u00a7 \u00a6 (news)\" to \" \u00a7 \u00a1 (afternoon)\". It confuses the Query Model, which could not find the correct CSS.",
"cite_spans": [
{
"start": 108,
"end": 111,
"text": "[0,",
"ref_id": null
},
{
"start": 112,
"end": 114,
"text": "9,",
"ref_id": null
},
{
"start": 115,
"end": 117,
"text": "7]",
"ref_id": null
},
{
"start": 122,
"end": 125,
"text": "[7,",
"ref_id": null
},
{
"start": 126,
"end": 128,
"text": "9,",
"ref_id": null
},
{
"start": 129,
"end": 132,
"text": "10]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Test 1: Accuracy of extracting CSSs",
"sec_num": "5.1"
},
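The failure mode above can be made concrete with a small sketch. This is hypothetical illustration code, not the paper's query model: the two known structures and their probabilities are the examples given in Table 2, tag 9 marks the Core Semantic String (CSS) slot, and an unseen tag sequence (probability 0) means no CSS can be located.

```python
# Hypothetical query-structure lookup.  The two structures and their
# probabilities are taken from Table 2; tag 9 is the CSS slot.
KNOWN_STRUCTURES = {
    (0, 2, 7, 9, 3, 0): 0.0025,  # "Is there any information on <CSS>?"
    (0, 1, 7, 9, 3, 0): 0.01,    # "Give me some information about <CSS>."
}

def locate_css(tag_seq, components):
    """Return the component in the CSS slot (tag 9), or None when the
    tag sequence is not a known structure (probability 0), as happens
    for the mis-recognized sequences [0, 9, 7] and [7, 9, 10]."""
    if KNOWN_STRUCTURES.get(tuple(tag_seq), 0.0) == 0.0:
        return None
    return components[tag_seq.index(9)]
```

A single mis-tagged component, such as "news" recognized as "afternoon", changes the whole tag sequence, drives the structure probability to 0, and so causes `locate_css` to return nothing, which mirrors the error case discussed above.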
{
"text": "The overall results indicate that there are fewer errors in short queries as such queries contain only one CSS component. This is encouraging as in practice most users issue only short queries.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test 1: Accuracy of extracting CSSs",
"sec_num": "5.1"
},
{
"text": "In order to test the accuracy of extracting basic query components, we asked one subject to manually divide the CSS into basic components, and used that as the ground truth. We compared the following two methods of extracting CSS components: a) As a baseline, we simply performed the standard stop word removal and divided the query into components with the help of a dictionary. However, there is no attempt to correct the speech recognition errors in these components.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Test 2: Accuracy of extracting basic query components",
"sec_num": "5.2"
},
{
"text": "Here we assume that the natural language query is a bag of words with stop word removed (Ricardo, 1999) . Currently, most search engines are based on this approach. b) We applied our query model to extract CSS and employed the multi-tier mapping approach to extract and correct the errors in the basic CSS components. Tables 3 and 4 give the comparisons between Methods (a) and (b), which clearly show that our method outperforms the baseline method by over 20.2% and 20 % in F 1 measure for the short and long queries respectively. +20.0% The improvement is largely due to the use of our approach to extract CSS and correct the speech recognition errors in the CSS components. More detailed analysis of long queries in Table 3 reveals that our method performs worse than the baseline method in recall. This is mainly due to errors in extracting and breaking CSS into basic components. Although we used the multi-tier mapping approach to reduce the errors from speech recognition, its improvement is insufficient to offset the lost in recall due to errors in extracting CSS. On the other hand, for the short query cases, without the errors in breaking CSS, our system is more effective than the baseline in recall. It is noted that in both cases, our system performs significantly better than the baseline in terms of precision and F 1 measures.",
"cite_spans": [
{
"start": 88,
"end": 103,
"text": "(Ricardo, 1999)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 318,
"end": 332,
"text": "Tables 3 and 4",
"ref_id": "TABREF2"
},
{
"start": 720,
"end": 727,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Test 2: Accuracy of extracting basic query components",
"sec_num": "5.2"
},
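As a sanity check, the F1 values reported in Tables 3 and 4 can be reproduced from the reported precision and recall figures, since F1 is their harmonic mean:

```python
# Verify the F1 columns of Tables 3 and 4 from precision and recall.
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Table 3 (short queries)
assert round(f1(0.31, 0.585), 3) == 0.405      # Method a: 40.5%
assert round(f1(0.5398, 0.694), 3) == 0.607    # Method b: 60.7%
# Table 4 (long queries)
assert round(f1(0.3923, 0.8599), 3) == 0.539   # Method a: 53.9%
assert round(f1(0.6775, 0.8131), 3) == 0.739   # Method b: 73.9%
```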
{
"text": "Although research on natural language query processing and speech recognition has been carried out for many years, the combination of these two approaches to help a large population of infrequent users to \"surf the web by voice\" has been relatively recent. This paper outlines a divide-and-conquer approach to alleviate the effect of speech recognition error, and in extracting key CSS components for use in a standard search engine to retrieve relevant documents. The main innovative steps in our system are: (a) we use a query model to isolate CSS in speech queries; (b) we break the CSS into basic components; and (c) we employ a multi-tier approach to map the basic components to known phrases in the dictionary. The tests demonstrate that our approach is effective.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "The work is only the beginning. Further research can be carried out as follows. First, as most of the queries are about named entities such as the persons or organizations, we need to perform named entity analysis on the queries to better extract its structure, and in mapping to known named entities. Second, most speech recognition engine will return a list of probable words for each syllable. This could be incorporated into our framework to facilitate multi-tier mapping.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Improved Spoken Document Retrieval by Exploring Extra Acoustic and Linguistic Cues",
"authors": [
{
"first": "Hsin-Min",
"middle": [],
"last": "Berlin Chen",
"suffix": ""
},
{
"first": "Lin-Shan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 7th European Conference on Speech Communication and Technology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Berlin Chen, Hsin-min Wang, and Lin-Shan Lee (2001), \"Improved Spoken Document Retrieval by Exploring Extra Acoustic and Linguistic Cues\", Proceedings of the 7th European Confer- ence on Speech Communication and Technology located at http://homepage.iis.sinica.edu.tw/",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Innovations in Text Interpretation",
"authors": [
{
"first": "Paul",
"middle": [
"S"
],
"last": "Jacobs",
"suffix": ""
},
{
"first": "Lisa",
"middle": [
"F"
],
"last": "Rau",
"suffix": ""
}
],
"year": 1993,
"venue": "Artificial Intelligence",
"volume": "63",
"issue": "",
"pages": "143--191",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Paul S. Jacobs and Lisa F. Rau (1993), Innova- tions in Text Interpretation, Artificial Intelli- gence, Volume 63, October 1993 (Special Issue on Text Understanding) pp.143-191",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Introduction to algorithms",
"authors": [
{
"first": "H",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "Charles",
"middle": [
"E"
],
"last": "Cormen",
"suffix": ""
},
{
"first": "Ronald",
"middle": [
"L"
],
"last": "Leiserson",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Rivest",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas H. Cormen, Charles E. Leiserson and Ronald L. Rivest (1990), \"Introduction to algo- rithms\", published by McGraw-Hill.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "FASTUS: A Cascaded Finite-State Transducer for Extracting Information from Natural-Language Text, Finite-State Language Processing, Emmanuel Roche and Yves Schabes",
"authors": [
{
"first": "Jerry",
"middle": [
"R"
],
"last": "Hobbs",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of 16 th annual conference on Research and Development in Information Retrieval (SIGIR)",
"volume": "",
"issue": "",
"pages": "181--190",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jerry R. Hobbs, et al,(1997) , FASTUS: A Cas- caded Finite-State Transducer for Extracting In- formation from Natural-Language Text, Finite- State Language Processing, Emmanuel Roche and Yves Schabes, pp. 383 -406, MIT Press, Julian Kupiec (1993), MURAX: \"A robust linguis- tic approach for question answering using an one-line encyclopedia\", Proceedings of 16 th an- nual conference on Research and Development in Information Retrieval (SIGIR), pp.181-190",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A Survey on Automatic Speech Recognition with an Illustrative Example On Continuous Speech Recognition of Mandarin",
"authors": [
{
"first": "Chin-Hui",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 1996,
"venue": "Computational Linguistics and Chinese Language Processing",
"volume": "",
"issue": "",
"pages": "1--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Hui Lee et al (1996), \"A Survey on Auto- matic Speech Recognition with an Illustrative Example On Continuous Speech Recognition of Mandarin\", in Computational Linguistics and Chinese Language Processing, pp. 1-36",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Spoken Document Retrieval for the languages of Hong Kong",
"authors": [
{
"first": "Helen",
"middle": [],
"last": "Meng",
"suffix": ""
},
{
"first": "Pui Yu",
"middle": [],
"last": "Hui",
"suffix": ""
}
],
"year": 2000,
"venue": "International Symposium on Intelligent Multimedia, Video and Speech Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Helen Meng and Pui Yu Hui (2001), \"Spoken Document Retrieval for the languages of Hong Kong\", International Symposium on Intelligent Multimedia, Video and Speech Processing, May 2001, located at www.se.cuhk.edu.hk/PEOPLE/ Kenney Ng (2000), \"Information Fusion For Spo- ken Document Retrieval\", Proceedings of ICASSP'00, Istanbul, Turkey, Jun, located at http://www.sls.lcs.mit.edu/sls/publications/",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Understanding Chinese Users' Information Behaviors through Analysis of Web Search Term Logs",
"authors": [
{
"first": "Pu",
"middle": [],
"last": "Hsiao Tieh",
"suffix": ""
}
],
"year": 2000,
"venue": "Journal of Computers",
"volume": "",
"issue": "",
"pages": "75--82",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hsiao Tieh Pu (2000), \"Understanding Chinese Users' Information Behaviors through Analysis of Web Search Term Logs\", Journal of Com- puters, pp.75-82",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Character Error Correction for Chinese Speech Recognition System",
"authors": [
{
"first": "",
"middle": [],
"last": "Liqin",
"suffix": ""
},
{
"first": "Haixin",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Chai",
"suffix": ""
},
{
"first": "Tang",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Donald",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of International Symposium on Chinese Spoken Language Processing Symposium Proceedings",
"volume": "",
"issue": "",
"pages": "136--138",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liqin, Shen, Haixin Chai, Yong Qin and Tang Donald (1998), \"Character Error Correction for Chinese Speech Recognition System\", Proceed- ings of International Symposium on Chinese Spoken Language Processing Symposium Pro- ceedings, pp.136-138",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Document Expansion for Speech Retrieval",
"authors": [
{
"first": "Amit",
"middle": [],
"last": "Singhal",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the 22 nd Annual International conference on Research and Development in Information Retrieval (SIGIR)",
"volume": "",
"issue": "",
"pages": "34--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amit Singhal and Fernando Pereira (1999), \"Document Expansion for Speech Retrieval\", Proceedings of the 22 nd Annual International conference on Research and Development in In- formation Retrieval (SIGIR), pp. 34~41",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Natural language information retrieval",
"authors": [
{
"first": "Tomek",
"middle": [],
"last": "Strzalkowski",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomek Strzalkowski (1999), \"Natural language information retrieval\", Boston: Kluwer Publish- ing.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Web surfing by Chinese Speech",
"authors": [
{
"first": "Gang",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gang Wang (2002), \"Web surfing by Chinese Speech\", Master thesis, National University of Singapore.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Multi-Scale Audio Indexing for translingual spoken document retrieval",
"authors": [
{
"first": "Helen",
"middle": [],
"last": "Hsin-Min Wang",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Meng",
"suffix": ""
},
{
"first": "Berlin",
"middle": [],
"last": "Schone",
"suffix": ""
},
{
"first": "Wai-Kt",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lo",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of IEEE International Conference on Acoustics, Speech, Signal processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hsin-min Wang, Helen Meng, Patrick Schone, Ber- lin Chen and Wai-Kt Lo (2001), \"Multi-Scale Audio Indexing for translingual spoken docu- ment retrieval\", Proceedings of IEEE Interna- tional Conference on Acoustics, Speech, Signal processing , Salt Lake City, USA, May 2001, lo- cated at http://www.iis.sinica.edu.tw/~whm/ Yongcheng Wang (1992), Technology and basis of Chinese Information Processing, Shanghai Jiao Tong University Press Baeza-Yates, Ricardo and Ribeiro-Neto, Berthier (1999), \"Introduction to modern information re- trieval\", Published by London: Library Associa- tion Publishing.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Detection of Unknown Chinese Words Using a Hybrid Approach Computer Processing of",
"authors": [
{
"first": "Hai-Nan",
"middle": [],
"last": "Ying",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Shen",
"suffix": ""
}
],
"year": 1997,
"venue": "Oriental Languages",
"volume": "11",
"issue": "1",
"pages": "63--75",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hai-nan Ying, Yong Ji and Wei Shen, (2002), \"re- port of query log\", internal report in Shanghai Jiao Tong University Guodong Zhou and Kim Teng Lua (1997) Detec- tion of Unknown Chinese Words Using a Hybrid Approach Computer Processing of Oriental Lan- guages, Vol 11, No 1, 1997, 63-75",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Language Modelling in Mandarin Speech Recognition",
"authors": [
{
"first": "Guodong",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guodong Zhou (1997), \"Language Modelling in Mandarin Speech Recognition\", Ph.D. Thesis, National University of Singapore.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Overview of the proposed approach",
"uris": null,
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"text": "FSA for part of Query Component Dictionary",
"uris": null,
"type_str": "figure"
},
"TABREF0": {
"text": "Definition and Examples of Semantic tags",
"num": null,
"html": null,
"content": "<table><tr><td colspan=\"2\">Sem Tag Name of tag</td><td colspan=\"2\">Example</td></tr><tr><td>1.</td><td>Verb-Object Phrase</td><td colspan=\"3\">\u00a1 g ive \u00a2 \u00a3 (me)</td></tr><tr><td>2. 3.</td><td>Question Phrase Question Field</td><td colspan=\"3\">\u00a4 \u00a6 \u00a5 \u00a7 (is there ) (news), \u00a9 (report)</td></tr><tr><td>4.</td><td>Quantifier</td><td/><td colspan=\"2\">(some)</td></tr><tr><td>5. 6.</td><td>Verb Phrase Polite Remark</td><td colspan=\"3\">(find) ollect \u00a1 c \u00a2 \u00a6 ! # \" $ &amp; % (please help \u00a3 me)</td></tr><tr><td>7.</td><td>Preposition</td><td colspan=\"3\">' ( (about), (about)</td><td>\u00a4 '</td></tr><tr><td>8.</td><td>Subject-Verb phrase</td><td colspan=\"2\">\u00a3 (I) )</td><td>(want)</td></tr><tr><td>9.</td><td>Core Semantic String</td><td colspan=\"3\">9.11 (9.11 event) 0 \u00a6 1</td></tr><tr><td>10.</td><td>Time</td><td>2 3</td><td colspan=\"2\">(today)</td></tr></table>",
"type_str": "table"
},
"TABREF1": {
"text": "",
"num": null,
"html": null,
"content": "<table><tr><td/><td>: Examples of Query Structure</td></tr><tr><td/><td>Q1: 0, 2, 7, 9, 3, 0: 0.0025,</td></tr><tr><td>1</td><td>\u00a4 \u00a6 \u00a5 \u00a7 2 Is there any information on September 11? ' ( 9.11 0 \u00a6 1 4 7 9 3 \u00a9</td></tr><tr><td/><td>Q2: 0, 1, 7, 9, 3, 0 :0.01</td></tr><tr><td>2</td><td>\u00a3 \u00a4 ' 1 7 Give me some information about Ben laden. 5 6 7 9 3 4</td></tr></table>",
"type_str": "table"
},
"TABREF2": {
"text": "Comparison of Methods a and b for short query",
"num": null,
"html": null,
"content": "<table><tr><td>Average</td><td>Average</td><td>F 1</td></tr><tr><td>Precision</td><td>Recall</td><td/></tr><tr><td>Method a 31%</td><td>58.5%</td><td>40.5%</td></tr><tr><td>Method b 53.98%</td><td>69.4%</td><td>60.7%</td></tr><tr><td colspan=\"2\">+22.98% +10.9%</td><td>+20.2%</td></tr></table>",
"type_str": "table"
},
"TABREF3": {
"text": "Comparison of Methods a and b for long query",
"num": null,
"html": null,
"content": "<table><tr><td>Average</td><td>Average</td><td>F 1</td></tr><tr><td>Precision</td><td>Recall</td><td/></tr><tr><td>Method a 39.23%</td><td>85.99%</td><td>53.9%</td></tr><tr><td>Method b 67.75%</td><td>81.31%</td><td>73.9%</td></tr><tr><td colspan=\"2\">+28.52% -4.68%</td><td/></tr></table>",
"type_str": "table"
}
}
}
}