{
"paper_id": "C04-1004",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:20:49.529437Z"
},
"title": "Discriminative Hidden Markov Modeling with Long State Dependence using a kNN Ensemble",
"authors": [
{
"first": "Zhou",
"middle": [],
"last": "Guodong",
"suffix": "",
"affiliation": {},
"email": "zhougd@i2r.a-star.edu.sg"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper proposes a discriminative HMM (DHMM) with long state dependence (LSD-DHMM) to segment and label sequential data. The LSD-DHMM overcomes the strong context independent assumption in traditional generative HMMs (GHMMs) and models the sequential data in a discriminative way, by assuming a novel mutual information independence. As a result, the LSD-DHMM separately models the long state dependence in its state transition model and the observation dependence in its output model. In this paper, a variable-length mutual informationbased modeling approach and an ensemble of kNN probability estimators are proposed to capture the long state dependence and the observation dependence respectively. The evaluation on shallow parsing shows that the LSD-DHMM not only significantly outperforms GHMMs but also much outperforms other DHMMs. This suggests that the LSD-DHMM can effectively capture the long context dependence to segment and label sequential data.",
"pdf_parse": {
"paper_id": "C04-1004",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper proposes a discriminative HMM (DHMM) with long state dependence (LSD-DHMM) to segment and label sequential data. The LSD-DHMM overcomes the strong context independent assumption in traditional generative HMMs (GHMMs) and models the sequential data in a discriminative way, by assuming a novel mutual information independence. As a result, the LSD-DHMM separately models the long state dependence in its state transition model and the observation dependence in its output model. In this paper, a variable-length mutual informationbased modeling approach and an ensemble of kNN probability estimators are proposed to capture the long state dependence and the observation dependence respectively. The evaluation on shallow parsing shows that the LSD-DHMM not only significantly outperforms GHMMs but also much outperforms other DHMMs. This suggests that the LSD-DHMM can effectively capture the long context dependence to segment and label sequential data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "A Hidden Markov Model (HMM) is a model where a sequence of observations is generated in addition to the Markov state sequence. It is a latent variable model in the sense that only the observation sequence is known while the state sequence remains \"hidden\". In recent years, HMMs have enjoyed great success in many tagging applications, most notably part-of-speech (POS) tagging (Church 1988; Weischedel et al 1993; Merialdo 1994) and named entity recognition (Bikel et al 1999; Zhou et al 2002) . Moreover, there have been also efforts to extend the use of HMMs to word sense disambiguation (Segond et al 1997) and shallow/full parsing (Brants et al 1997; Skut et al 1998; Zhou et al 2000) .",
"cite_spans": [
{
"start": 378,
"end": 391,
"text": "(Church 1988;",
"ref_id": null
},
{
"start": 392,
"end": 414,
"text": "Weischedel et al 1993;",
"ref_id": "BIBREF21"
},
{
"start": 415,
"end": 429,
"text": "Merialdo 1994)",
"ref_id": "BIBREF12"
},
{
"start": 459,
"end": 477,
"text": "(Bikel et al 1999;",
"ref_id": "BIBREF0"
},
{
"start": 478,
"end": 494,
"text": "Zhou et al 2002)",
"ref_id": "BIBREF23"
},
{
"start": 591,
"end": 610,
"text": "(Segond et al 1997)",
"ref_id": "BIBREF18"
},
{
"start": 636,
"end": 655,
"text": "(Brants et al 1997;",
"ref_id": "BIBREF2"
},
{
"start": 656,
"end": 672,
"text": "Skut et al 1998;",
"ref_id": "BIBREF19"
},
{
"start": 673,
"end": 689,
"text": "Zhou et al 2000)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Traditionally, a HMM segments and labels sequential data in a generative way, assigning a joint probability to paired observation and state sequences. More formally, a generative (first-order) HMM (GHMM) is given by a finite set of states including an designated initial state and an designated final state, a set of possible observation , two conditional probability distributions: a state transition model from s to , for and an output model, for",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "S O s , ' o ' s ) | ( ' s s p ) | ( s o p S s \u2208 s O S \u2208 \u2208 , ) | ' s s",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": ". A sequence of observations is generated by starting from the designated initial state, transmiting to a new state according to , emitting an observation selected by that new state according to p , transmiting to another new state and so on until the designated final state is generated.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "( p ) ( s | o",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
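The generative process just described can be sketched in a few lines. The two-state model below, its probabilities, and the state names are all hypothetical, invented purely to illustrate the alternation of transitions and emissions; this is not the paper's shallow-parsing model.

```python
import random

# Hypothetical toy GHMM: transition model p(s'|s) and output model p(o|s).
# "<s>" and "</s>" are the designated initial and final states.
trans = {
    "<s>": {"A": 0.6, "B": 0.4},
    "A":   {"A": 0.3, "B": 0.5, "</s>": 0.2},
    "B":   {"A": 0.4, "B": 0.3, "</s>": 0.3},
}
emit = {
    "A": {"x": 0.7, "y": 0.3},
    "B": {"x": 0.2, "y": 0.8},
}

def sample(dist):
    """Draw one outcome from a discrete distribution."""
    r, acc = random.random(), 0.0
    for k, p in dist.items():
        acc += p
        if r < acc:
            return k
    return k

def generate():
    """Generate (observations, states) by alternating transitions and emissions."""
    s, obs, states = "<s>", [], []
    while True:
        s = sample(trans[s])          # transit to a new state
        if s == "</s>":               # stop at the designated final state
            break
        states.append(s)
        obs.append(sample(emit[s]))   # emit an observation from that state
    return obs, states

random.seed(0)
o, st = generate()
print(o, st)
```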
{
"text": "There are several problems with this generative approach. First, many tasks would benefit from a richer representation of observations-in particular a representation that describes observations in terms of many overlapping features, such as capitalization, word endings, part-of-speech in addition to the traditional word identity. Note that these features always depends on each other. Furthermore, to define a joint probability over the observation and state sequences, the generative approach needs to enumerate all the possible observation sequences. However, in some tasks, the set of all the possible observation sequences is not reasonably enumerable. Second, the generative approach fails to effectively model the dependence in the observation sequence. Moreover, it is difficult for the generative approach to model the long state dependence since it is not reasonably practical for ngram modeling(e.g. bigram for the first-order GHMM and trigram for the secnodorder GHMM) to be beyond trigram. Third, the generative approach normally estimates the parameters to maximize the likelihood of the observation sequence. However, in many NLP tasks, the goal is to predict the state sequence given the observation sequence. In other words, the generative approach inappropriately applies a generative joint probability model for a conditional probability problem. In summary, the main reasons behind these problems of the generative approach are the strong context independent assumption and the generative nature in modeling sequential data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "While the dependence between successive states can be directly modeled by its state transition model, the generative approach fails to directly capture the observation dependence in the output model. From this viewpoint, a GHMM can be also called an observation independent HMM.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "To resolve above problems in GHMMs, some researches have been done to move from the generative approach to the discriminative approach. Discriminative HMMs (DHMMs) do not expend modeling effort on the observation sequnce, which are fixed at test time. Instead, DHMMs model the state sequence depending on arbitrary, nonindependent features of the observation sequence, normally without forcing the model to account for the distribution of those dependencies. Punyakanok and Roth (2000) proposed a projection-based DHMM (PDHMM) which represents the probability of a state transition given not only the current observation but also past and future observations and used the SNoW classifier (Roth 1998 , Carlson et al 1999 to estimate it (SNoW-PDHMM thereafter). McCallum et al (2000) proposed the extact same model and used maximum entropy to estimate it (ME-PDHMM thereafter). Lafferty et al (2001) extanded ME-PDHMM using conditional random fields by incorporating the factored state representation of the same model (that is, representing the probability of a state given the observation sequence and the previous state) to alleviate the label bias problem in projection-based DHMMs, which can be biased towards states with few successor states (CRF-DHMM thereafter). Similar work can also be found in Bouttou (1991). Punyakanok and Roth (2000) also proposed a nonprojection-based DMM which separates the dependence of a state on the previous state and the observation sequence, by rewriting the GHMM in a discriminative way and heuristically extending the notation of an observation to the observation sequence. Zhou et al (2000) systematically derived the exact same model as in Punyakanok and Roth (2000) and used back-off modeling to esimate the probability of a state given the observation sequence (Backoff-DHMM thereafter) while Punyakanok and Roth (2000) used the SNoW classifier to estimate it(SNoW-DHMM thereafter).",
"cite_spans": [
{
"start": 459,
"end": 485,
"text": "Punyakanok and Roth (2000)",
"ref_id": null
},
{
"start": 688,
"end": 698,
"text": "(Roth 1998",
"ref_id": "BIBREF17"
},
{
"start": 699,
"end": 719,
"text": ", Carlson et al 1999",
"ref_id": "BIBREF3"
},
{
"start": 760,
"end": 781,
"text": "McCallum et al (2000)",
"ref_id": "BIBREF11"
},
{
"start": 876,
"end": 897,
"text": "Lafferty et al (2001)",
"ref_id": "BIBREF9"
},
{
"start": 1319,
"end": 1345,
"text": "Punyakanok and Roth (2000)",
"ref_id": null
},
{
"start": 1682,
"end": 1708,
"text": "Punyakanok and Roth (2000)",
"ref_id": null
},
{
"start": 1837,
"end": 1863,
"text": "Punyakanok and Roth (2000)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "This paper follows our previous work in Zhou et al (2000) and proposes an alternative nonprojection-based DHMM with long state dependence (LSD-DHMM), which separates the dependence of a state on the previous states and the observation sequence. Moreover, a variablelength mutual information based modeling approach (VLMI) is proposed to capture the long state dependence of a state on the previous states.",
"cite_spans": [
{
"start": 51,
"end": 57,
"text": "(2000)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In addition, an ensemble of kNN probability estimators is proposed to capture the observation dependence of a state on the observation sequence. Experimentation shows that VLMI effectively captures the long state dependence. It also shows that the kNN ensemble captures the dependence between the features of the observation sequence more effectively than classifier-based approaches, by forcing the model to account for the distribution of those dependencies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "The layout of this paper is as follows. Section 2 first proposes the LSD-DHMM and then presents the VLMI to capture the long state dependence. Section 3 presents the kNN probability estimator to capture the observation dependence while Section 4 presents the kNN ensemble. Section 5 introduces shallow parsing, while experimental results are given in Section 6. Finally, some conclusion will be drawn in Section 7.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In principle, given an observation sequence , the goal of a conditional probability model is to find a stochastic optimal",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LSD-DHMM: Discriminative HMM with Long State Dependence",
"sec_num": "2."
},
{
"text": "state sequence s that maximizes n n o o o o L 2 1 1 = ) | ( log 1 1 n n o s p n n s s s L 2 1 1 = ) | ( log max arg 1 1 * 1 n n s o s p S n = (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LSD-DHMM: Discriminative HMM with Long State Dependence",
"sec_num": "2."
},
{
"text": "By applying the Bayes' rule, we can rewrite the equation (1) as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LSD-DHMM: Discriminative HMM with Long State Dependence",
"sec_num": "2."
},
{
"text": "} ) , ( ) ( {log max arg )} | ( {log max arg 1 1 1 1 1 * 1 1 n n n s n n s o s MI s p o s p s n n + = = (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LSD-DHMM: Discriminative HMM with Long State Dependence",
"sec_num": "2."
},
{
"text": "Obviously, the second term MI captures the mutual information between the state sequence and the observation sequence o . To compute efficiently, we propose a novel mutual information independence assumption:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LSD-DHMM: Discriminative HMM with Long State Dependence",
"sec_num": "2."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": ") , ( 1 1 n n o s n 1 n s 1 MI ) , ( 1 1 n n o s \u2211 = = n i n i n n o s MI o s MI 1 1 1 1 ) , ( ) , ( or \u2211 = = \u22c5 n i n n n n p o p s p o s p 1 1 1 1 1 log ) ( ) ( ) , ( log \u22c5 n i n i o p s o s p 1 1 ) ( ) ( ) , (",
"eq_num": "(3)"
}
],
"section": "LSD-DHMM: Discriminative HMM with Long State Dependence",
"sec_num": "2."
},
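The pointwise mutual information used in assumption (3), and the identity MI(s, o) = log p(s|o) - log p(s) that underlies equation (2), can be checked numerically. The toy joint distribution below is invented purely for illustration.

```python
import math

# Invented joint distribution over one state s and one observation o.
p_joint = {("s1", "o1"): 0.4, ("s1", "o2"): 0.1,
           ("s2", "o1"): 0.2, ("s2", "o2"): 0.3}

def marginal(axis, value):
    """Marginal probability p(s) (axis=0) or p(o) (axis=1)."""
    return sum(p for pair, p in p_joint.items() if pair[axis] == value)

def mi(s, o):
    """Pointwise mutual information MI(s, o) = log [p(s, o) / (p(s) p(o))]."""
    return math.log(p_joint[(s, o)] / (marginal(0, s) * marginal(1, o)))

# The identity behind equation (2): MI(s, o) = log p(s|o) - log p(s).
p_s1_given_o1 = p_joint[("s1", "o1")] / marginal(1, "o1")
assert abs(mi("s1", "o1") - (math.log(p_s1_given_o1) - math.log(marginal(0, "s1")))) < 1e-12
print(mi("s1", "o1"))
```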
{
"text": "That is, we assume a state is only dependent on the observation sequence o and independent on other states in the state sequence s . This assumption is reasonable because the dependence among the states in the state sequence has been",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LSD-DHMM: Discriminative HMM with Long State Dependence",
"sec_num": "2."
},
{
"text": "n 1 n 1 n 1 s directly captured by the first term log in equation (2). ) ( 1 n s p | ( log ) ( log } ) | ( ) ( 1 2 1 1 \u2211 = n i n i n i o s p s p o s s p \u2212 i s 1 1 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LSD-DHMM: Discriminative HMM with Long State Dependence",
"sec_num": "2."
},
{
"text": "By applying the assumption (3) into the equation 2and using the chain rule, we have: ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LSD-DHMM: Discriminative HMM with Long State Dependence",
"sec_num": "2."
},
{
"text": "s_1^{n*} = argmax_{s_1^n} {log p(s_1) + sum_{i=2}^n log p(s_i | s_1^{i-1}) - sum_{i=1}^n log p(s_i) + sum_{i=1}^n log p(s_i | o_1^n)} = argmax_{s_1^n} {sum_{i=2}^n MI(s_1^{i-1}, s_i) + sum_{i=1}^n log p(s_i | o_1^n)} (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LSD-DHMM: Discriminative HMM with Long State Dependence",
"sec_num": "2."
},
{
"text": "The above model consists of two models: the state transition model \u2211 which measures the state dependence of a state given the previous states, and the output model which measures the observation dependence of a state given the observation sequence in a discriminative way. Therefore, we call the above model as in equation (4) a discriminative HMM (DHMM) with long state dependence (LSD-DHMM). The LSD-DHMM separates the dependence of a state on the previous states and the observation sequence. The main difference between a GHMM and a LSD-DHMM lies in their output models in that the output model of a LSD-DHMM directly captures the context dependence between successive observations in determining the \"hidden\" states while the output model of the GHMM fails to do so. That is, the output model of a LSD-DHMM overcomes the strong context independent assumption in the GHMM and becomes observation context dependent. Therefore, the LSD-DHMM can also be called an observation context dependent HMM. Compared with other DHMMs, the LSD-DHMM explicitly models the long state dependence and the non-projection nature of the LSD-DHMM alleviates the label bias problem inherent in projection-based DHMMs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LSD-DHMM: Discriminative HMM with Long State Dependence",
"sec_num": "2."
},
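Decoding under equation (4) searches for the state sequence maximizing the sum of the two component scores. A minimal beam-search sketch follows; the two scoring functions are stubs with invented numbers standing in for the VLMI state transition model and the kNN-ensemble output model, so only the search structure reflects the paper.

```python
import math

def mi_score(prev_states, s):
    """Stub for MI(s_1^{i-1}, s_i): long state dependence score (invented)."""
    return {"A": 0.1, "B": 0.2}[s] if prev_states and prev_states[-1] == s else 0.0

def output_score(i, s, obs):
    """Stub for log p(s_i | o_1^n): discriminative output score (invented)."""
    return math.log(0.7) if (obs[i] == "x") == (s == "A") else math.log(0.3)

def decode(obs, states=("A", "B"), beam=4):
    """Beam search over state sequences scored as in equation (4)."""
    hyps = [((), 0.0)]  # (state sequence so far, accumulated score)
    for i in range(len(obs)):
        new = []
        for seq, score in hyps:
            for s in states:
                new.append((seq + (s,),
                            score + mi_score(list(seq), s) + output_score(i, s, obs)))
        # keep only the `beam` best partial hypotheses
        hyps = sorted(new, key=lambda h: h[1], reverse=True)[:beam]
    return list(hyps[0][0])

print(decode(["x", "x", "y"]))
```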
{
"text": "= n i i s MI 2 , ( \u2211 = n i n i o s p 1 1 ) | ( log",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LSD-DHMM: Discriminative HMM with Long State Dependence",
"sec_num": "2."
},
{
"text": "Computation of a LSD-DHMM consists of two parts. The first is to compute the state transition model:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LSD-DHMM: Discriminative HMM with Long State Dependence",
"sec_num": "2."
},
{
"text": ". Traditionally, ngram modeling(e.g. bigram for the first-order GHMM and trigram for the second-order GHMM) is used to estimate the state transition model. However, such approach fails to capture the long state dependence since it is not reasonably practical for ngram modeling to be beyond trigram. In this paper, a variable-length mutual information-based modeling approach (VLMI) is proposed as follow:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LSD-DHMM: Discriminative HMM with Long State Dependence",
"sec_num": "2."
},
{
"text": "For each i \u2211 = \u2212 n i i i s s MI 2 1 1 ) , ( ) 2 ( n i \u2264 \u2264 , we first find a minimal ) i 0 ( k k p \u2264",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LSD-DHMM: Discriminative HMM with Long State Dependence",
"sec_num": "2."
},
{
"text": "where the frequency of s is bigger than a threshold (e.g. 10) and then estimate using",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LSD-DHMM: Discriminative HMM with Long State Dependence",
"sec_num": "2."
},
{
"text": "1 \u2212 i k ) 1 \u2212 , ( 1 i i s s MI ) ) ( ( ) , ( 1 \u2212 \u22c5 = i i k i k i p s p s p s s ) ) | 1 n o ) | i i E s N i o + ( ) 1 \u2212 i k s MI n i o s p 1 | ( log ( i s \u2248 ) | ( 1 n i o s ( p i N i o o \u2212 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LSD-DHMM: Discriminative HMM with Long State Dependence",
"sec_num": "2."
},
{
"text": "In this way, the long state dependence can be captured maximally in a dynamical way. Here, the frequencies of variable-length state sequences are estimated using the simple Good-Turing approach (Gale et al 1995) .",
"cite_spans": [
{
"start": 194,
"end": 211,
"text": "(Gale et al 1995)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "LSD-DHMM: Discriminative HMM with Long State Dependence",
"sec_num": "2."
},
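The VLMI back-off step can be sketched as follows, using raw relative frequencies over state subsequences. The training sequences, the tag alphabet, and the threshold value are invented, and the Good-Turing smoothing used in the paper is omitted for brevity.

```python
import math
from collections import Counter

# Invented training state sequences (structural tags simplified to letters).
train = [["B", "M", "E", "O", "B", "E"], ["B", "M", "M", "E", "O"]]

# Count every contiguous state subsequence.
freq = Counter()
for seq in train:
    for i in range(len(seq)):
        for j in range(i + 1, len(seq) + 1):
            freq[tuple(seq[i:j])] += 1
total = sum(len(seq) for seq in train)

def prob(subseq):
    """Unsmoothed relative frequency of a state subsequence."""
    return freq[tuple(subseq)] / total

def vlmi(prev_states, s, threshold=2):
    """Approximate MI(s_1^{i-1}, s_i) by MI(s_k^{i-1}, s_i), choosing the
    minimal k (i.e. the longest usable context) whose joint sequence s_k^i
    occurs at least `threshold` times."""
    for k in range(len(prev_states)):
        ctx = list(prev_states[k:])
        if freq[tuple(ctx) + (s,)] >= threshold:
            return math.log(prob(ctx + [s]) / (prob(ctx) * prob([s])))
    return 0.0  # back off to independence when even the shortest context is rare

print(vlmi(["B", "M"], "E"))
```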
{
"text": "\u2211 = n i 1 p i E = L L ) | ( i E \u2022",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LSD-DHMM: Discriminative HMM with Long State Dependence",
"sec_num": "2."
},
{
"text": "The second is to estimate the output model:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LSD-DHMM: Discriminative HMM with Long State Dependence",
"sec_num": "2."
},
{
"text": ". Ideally, we would have sufficient training data for every event whose conditional probability we wish to calculate. Unfortunately, there is rarely enough training data to compute accurate probabilities when decoding on new data. Traditionally, there are two existing approaches to resolve this problem: linear interpolation (Jelinek 1989 ) and back-off (Katz 1987) . However, these two approaches only work well when the number of different information sources is limited. When a long context is considered, the number of different information sources is exponential and not reasonably enumerable. The current tendency is to recast it as a classification problem and use the output of a classifier, e.g. the maximum entropy classifier (Ratnaparkhi 1999) to estimate the state probability distribution given the observation sequence. In the next two sections, we will propose a more effective ensemble of kNN probability estimators to resolve this problem.",
"cite_spans": [
{
"start": 326,
"end": 339,
"text": "(Jelinek 1989",
"ref_id": "BIBREF7"
},
{
"start": 355,
"end": 366,
"text": "(Katz 1987)",
"ref_id": "BIBREF8"
},
{
"start": 737,
"end": 755,
"text": "(Ratnaparkhi 1999)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "LSD-DHMM: Discriminative HMM with Long State Dependence",
"sec_num": "2."
},
{
"text": "The main challenge for the LSD-DHMM is how to reliably estimate p in its output model. For efficiency, we can always assume",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "kNN Probability Estimator",
"sec_num": "3."
},
{
"text": ", where the pattern entry . That is, we only consider the observation dependence in a window of 2N+1 observations (e.g. we only consider the current observation, the previous observation and the next observation when N=1). For convenience, we denote P as the conditional state probability distribution of the states given E and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "kNN Probability Estimator",
"sec_num": "3."
},
{
"text": "i ) | ( i i E s p i s i E ) | ( i E P \u2022 ) (E kNN i ) | ( i E P \u2022 FrequentEn FrequentEn | ( kNN E p k i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "kNN Probability Estimator",
"sec_num": "3."
},
{
"text": "as the conditional state probability of given .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "kNN Probability Estimator",
"sec_num": "3."
},
{
"text": "= = \u2022 i E P ) | (",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "kNN Probability Estimator",
"sec_num": "3."
},
{
"text": "The kNN probability estimator estimates by first finding the K nearest neighbors of frequently occurring pattern entries and then aggregating them to make a proper estimation of . Here, the conditional state probability distribution is estimated instead of the classification in a traditional kNN classifier. To do so, all the frequently occurring pattern entries are extracted from the training corpus in an exhaustive way and stored in a dictionary . In order to limit the dictionary size and keep efficiency, we constrain a valid set of pattern entry forms ValidEntry to consider only the most informative information sources. Generally, ValidEntry can be determined manually or automatically according to the applications. In Section 5, we will give an example.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "kNN Probability Estimator",
"sec_num": "3."
},
{
"text": "} ,..., 2 , 1 | { K k E k i = ary tryDiction Form Form",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "kNN Probability Estimator",
"sec_num": "3."
},
{
"text": "Given a pattern entry E and a dictionary of frequently occurring pattern entries , a simple algorithm is applied to find the K nearest neighbors of the pattern entry from the dictionary as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "kNN Probability Estimator",
"sec_num": "3."
},
{
"text": "i ary tryDiction i E \u2022 compare",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "kNN Probability Estimator",
"sec_num": "3."
},
{
"text": "with each entry in the dictionary and find all the compatible entries i E",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "kNN Probability Estimator",
"sec_num": "3."
},
{
"text": "\u2022 compute the cosine similarity between E and each of the compatible entries i \u2022 sort out the K nearest neighbors according to their cosine similarities Finally, the conditional state probability distribution of the pattern entry is aggregated over those of its K nearest neighbors weighted by their frequencies and cosine similarities :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "kNN Probability Estimator",
"sec_num": "3."
},
{
"text": ") ( k i E f ) \u2211 \u2211 = = \u22c5 \u2022 \u22c5 \u22c5 K k k i k i K k k i k i k i E f kNN E p E P E f kNN E p 1 1 ) ( ) | ( ) | ( ) ( ) | ( (5) p",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "kNN Probability Estimator",
"sec_num": "3."
},
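A sketch of the kNN probability estimator of equation (5): pattern entries are treated as sets of features, similarity is cosine over those sets, and neighbor distributions are aggregated weighted by frequency and similarity. The feature names, dictionary entries, counts, and distributions below are all invented for illustration.

```python
import math
from collections import Counter

# Hypothetical dictionary of frequently occurring pattern entries:
# feature set -> (frequency, conditional state distribution p_kNN(.|entry)).
dictionary = {
    frozenset({"w=the", "p=DT", "p+1=NN"}): (30, {"B_NP": 0.9, "O_NP": 0.1}),
    frozenset({"p=DT", "p+1=NN"}):          (80, {"B_NP": 0.8, "O_NP": 0.2}),
    frozenset({"w=a", "p=DT"}):             (25, {"B_NP": 0.7, "O_NP": 0.3}),
}

def cosine(a, b):
    """Cosine similarity between two binary feature sets."""
    return len(a & b) / math.sqrt(len(a) * len(b))

def knn_estimate(entry, k=2):
    """P(.|entry) aggregated over the K nearest dictionary entries,
    weighted by cosine similarity and entry frequency, as in equation (5)."""
    neighbors = sorted(dictionary.items(),
                       key=lambda it: cosine(entry, it[0]), reverse=True)[:k]
    dist, norm = Counter(), 0.0
    for feats, (f, p) in neighbors:
        w = cosine(entry, feats) * f
        norm += w
        for state, prob in p.items():
            dist[state] += w * prob
    return {s: v / norm for s, v in dist.items()}

print(knn_estimate(frozenset({"w=the", "p=DT", "p+1=NN", "p-1=VB"})))
```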
{
"text": "In the literature, an ensemble has been widely used in the classification problem to combine several classifiers (Breiman 1996; Hamamoto 1997; Dietterich 1998; Zhou Z.H. et al 2002; Kim et al 2003) . It is well known that an ensemble often outperforms the individual classifiers that make it up (Hansen et al 1990) .",
"cite_spans": [
{
"start": 113,
"end": 127,
"text": "(Breiman 1996;",
"ref_id": null
},
{
"start": 128,
"end": 142,
"text": "Hamamoto 1997;",
"ref_id": null
},
{
"start": 143,
"end": 159,
"text": "Dietterich 1998;",
"ref_id": null
},
{
"start": 160,
"end": 181,
"text": "Zhou Z.H. et al 2002;",
"ref_id": null
},
{
"start": 182,
"end": 197,
"text": "Kim et al 2003)",
"ref_id": null
},
{
"start": 295,
"end": 314,
"text": "(Hansen et al 1990)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "kNN Ensemble",
"sec_num": "4."
},
{
"text": "In this paper, an ensemble of kNN probability estimators is proposed to estimate the conditional state probability distribution P instead of the classification. This is done through a bagging technique (Breiman 1996) to aggregate several kNN probability estimators. In bagging, the M kNN probability estimators in the ensemble",
"cite_spans": [
{
"start": 202,
"end": 216,
"text": "(Breiman 1996)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "kNN Ensemble",
"sec_num": "4."
},
{
"text": ") | ( i E \u2022 } M ,..., 2 , 1 | { m kNN ENS m = =",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "kNN Ensemble",
"sec_num": "4."
},
{
"text": "are trained independently via a bootstrap technique and then they are aggregated via an appropriate aggregation method. Usually, we have a single training set and need M training sample sets to construct a kNN ensemble with M independent kNN probability estimators. From the statistical viewpoint, we need to make the training sample sets different as much as possible in order to obtain a higher aggregation performance. For doing this, we often use the bootstrap technique which builds M replicate data sets by randomly re-sampling with replacement from the given training set repeatedly. Each example in the given training set may appear repeatedly or not at all in any particular replicate training sample set. Each training sample set is used to train a certain kNN probability estimator. Finally, the conditional state probability distribution of the pattern entry E is averaged over those of the M kNN probability estimators in the ensemble:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "kNN Ensemble",
"sec_num": "4."
},
{
"text": "i M kNN E P E P M m m i i \u2211 = \u2022 = \u2022 1 ) , | ( ) | ( (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "kNN Ensemble",
"sec_num": "4."
},
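The bootstrap-and-average scheme of equation (6) can be sketched generically. Here `estimate` is a stand-in for training one kNN probability estimator on a replicate sample; for illustration it simply returns relative state frequencies, which is a hypothetical simplification, not the paper's estimator.

```python
import random
from collections import Counter

def bootstrap(data, rng):
    """Resample with replacement: a replicate may repeat or omit examples."""
    return [rng.choice(data) for _ in data]

def estimate(sample):
    """Stand-in for one kNN probability estimator: relative state
    frequencies in the replicate (invented simplification)."""
    c = Counter(sample)
    n = len(sample)
    return {s: c[s] / n for s in c}

def ensemble(data, M=15, seed=0):
    """P(.|E) averaged over M independently trained estimators, equation (6)."""
    rng = random.Random(seed)
    members = [estimate(bootstrap(data, rng)) for _ in range(M)]
    states = {s for m in members for s in m}
    return {s: sum(m.get(s, 0.0) for m in members) / M for s in states}

print(ensemble(["B_NP"] * 7 + ["O_NP"] * 3))
```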
{
"text": "In order to evaluate the LSD-DHMM and the proposed variable-length mutual information modeling approach for the long state dependence in the state transition model and the kNN ensemble for the observation dependence in the output model, we have applied it in the application of shallow parsing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shallow Parsing",
"sec_num": "5."
},
{
"text": "For shallow parsing, we have o , where is the word sequence and is the part-of-speech (POS) sequence, while the \"hidden\" states are represented as structural tags to bracket and differentiate various categories of phrases. The basic idea of using the structural tags to represent the \"hidden\" states is similar to Skut et al (1998) and Zhou et al (2000) . Here, a structural tag consists of three parts:",
"cite_spans": [
{
"start": 314,
"end": 331,
"text": "Skut et al (1998)",
"ref_id": "BIBREF19"
},
{
"start": 336,
"end": 353,
"text": "Zhou et al (2000)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Shallow Parsing",
"sec_num": "5."
},
{
"text": "i i w p = 1 n n w w w w L 2 1 1 = n n p p p L 2 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shallow Parsing",
"sec_num": "5."
},
{
"text": "\u2022 Boundary Category (BOUNDARY): it is a set of four values: \"O\"/\"B\"/\"M\"/\"E\", where \"O\" means that current word is a whOle phrase and \"B\"/\"M\"/\"E\" means that current word is at the Beginning/in the Middle/at the End of a phrase.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shallow Parsing",
"sec_num": "5."
},
{
"text": "\u2022 Phrase Category (PHRASE): it is used to denote the category of the phrase.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shallow Parsing",
"sec_num": "5."
},
{
"text": "\u2022 Part-of-Speech (POS): Because of the limited number of boundary and phrase categories, the POS is added into the structural tag to represent more accurate state transition and output models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shallow Parsing",
"sec_num": "5."
},
{
"text": "For example, given the following POS tagged sentence as the observation sequence:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shallow Parsing",
"sec_num": "5."
},
{
"text": "He/PRP reckons/VBZ the/DT current/JJ account/NN deficit/NN will/MD narrow/VB to/TO only/RB $/$ 1.8/CD billion/CD in/IN September/NNP ./.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shallow Parsing",
"sec_num": "5."
},
{
"text": "We can have a corresponding sequence of structural tags as the \"hidden\" state sequence:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shallow Parsing",
"sec_num": "5."
},
{
"text": "O_NP_PRP ( ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Shallow Parsing",
"sec_num": "5."
},
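The mapping from phrase chunks to structural tags (BOUNDARY_PHRASE_POS) can be sketched as follows. The chunked input format below is a hypothetical simplification of the example sentence above, covering only its first three phrases.

```python
# Hypothetical chunked input: (phrase category, [(word, POS), ...]).
chunks = [
    ("NP", [("He", "PRP")]),
    ("VP", [("reckons", "VBZ")]),
    ("NP", [("the", "DT"), ("current", "JJ"), ("account", "NN"), ("deficit", "NN")]),
]

def structural_tags(chunks):
    """Emit an O/B/M/E boundary + phrase category + POS tag for every word."""
    tags = []
    for phrase, words in chunks:
        for idx, (word, pos) in enumerate(words):
            if len(words) == 1:
                boundary = "O"              # whOle phrase
            elif idx == 0:
                boundary = "B"              # Beginning
            elif idx == len(words) - 1:
                boundary = "E"              # End
            else:
                boundary = "M"              # Middle
            tags.append(f"{boundary}_{phrase}_{pos}")
    return tags

print(structural_tags(chunks))
```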
{
"text": "The corpus used in shallow parsing is extracted from the PENN TreeBank (Marcus et al. 1993 ) of 1 million words (25 sections) by a program provided by Sabine Buchholz from Tilburg University. All the evaluations are 5-fold crossvalidated. For shallow parsing, we use the Fmeasure to measure the performance. Here, the Fmeasure is the weighted harmonic mean of the precision (P) and the recall (R): Rijsbergen 1979) , where the precision (P) is the percentage of predicted phrase chunks that are actually correct and the recall (R) is the percentage of correct phrase chunks that are actually found.",
"cite_spans": [
{
"start": 71,
"end": 90,
"text": "(Marcus et al. 1993",
"ref_id": "BIBREF10"
},
{
"start": 398,
"end": 414,
"text": "Rijsbergen 1979)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimentation",
"sec_num": "6."
},
{
"text": "P R RP + + = 2 2 ) 1 ( \u03b2 \u03b2 F with =1 (",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimentation",
"sec_num": "6."
},
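The chunk-level F-measure defined above can be computed directly over sets of (category, start, end) chunks. The predicted and gold chunks below are invented purely to exercise the formula.

```python
def f_measure(predicted, gold, beta=1.0):
    """F = (beta^2 + 1) * P * R / (beta^2 * P + R), comparing chunks
    exactly: span and phrase category must both match."""
    correct = len(set(predicted) & set(gold))
    p = correct / len(predicted)   # precision: correct / predicted
    r = correct / len(gold)        # recall: correct / gold
    return (beta ** 2 + 1) * p * r / (beta ** 2 * p + r)

# Invented example: 3 of 4 predicted chunks are correct; gold has 5 chunks.
pred = [("NP", 0, 1), ("VP", 2, 3), ("NP", 4, 7), ("PP", 8, 9)]
gold = [("NP", 0, 1), ("VP", 2, 3), ("NP", 4, 7), ("PP", 8, 10), ("NP", 11, 12)]
print(round(f_measure(pred, gold), 4))
```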
{
"text": "Tables 1, 2 and 3 show the detailed performance of LSD-DHMMs. In this paper, the valid set of pattern entry forms ValidEntry is defined to include those pattern entry forms within a windows of 7 observations(including current, left 3 and right 3 observations) where for to be included in a pattern entry, all or one of the overlapping features in each of",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u03b2",
"sec_num": "2"
},
{
"text": "Form j w , p j ) ( ..., 1 i j p p i j \u2264 + or ) ( ..., , 1 j i p p j i p i \u2264 + ) ( ..., 1 1 j i p p j i p \u2212 +",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u03b2",
"sec_num": "2"
},
{
"text": "should be included in the same pattern entry while for to be included in a pattern entry, all or one of the overlapping features in each of or should be included in the same pattern entry. Table 1 shows the effect of different number of nearest neighbors in the kNN probability estimator and considered previous states in the variablelength mutual information modeling approach of the LSD-DHMM, using only one kNN probability estimator in the ensemble to estimate in the output model. It shows that finding 3 nearest neighbors in the kNN probability estimator performs best. It also shows that further increasing the number of nearest neighbors does not increase or even decrease the performance. This may be due to introduction of noisy neighbors when the number of nearest neighbors increases. Moreover, Table 1 shows that the LSD-DHMM performs best when six previous states is considered in the variable-length mutual information-based modeling approach and further considering more previous states only slightly increase the performance. This suggests that the state dependence exists well beyond traditional ngram modeling (e.g. bigram and trigram) to six previous states and the variable-length mutual informationbased modeling approach can capture the long state dependence. In the following experimentation, we will only use the LSD-DHMM with 3 nearest neighbors used in the kNN probability estimator and 6 previous states considered in the variablelength mutual information modeling approach. Table 2 shows the effect of different number of kNN probability estimators in the ensemble. It shows that 15 bootstrap replicates are enough for the k-NN ensemble on shallow parsing and increase the F-measure by 0.71 compared with the ensemble of only one kNN probability estimator. Table 3 compares the LSD-DHMM with GHMMs and other DHMMs. 
It shows that all the DHMMs significantly outperform GHMMs due to the modeling of the observation dependence and allowing for non-independent, difficult to enumerate observation features. It also shows that our LSD-DHMM much outperforms other DHMMs due to the modeling of the long state dependence using the variable-length mutual information-based modeling approach in the LSD-DHMM. Moverover, Table 3 shows that noprojection-based DHMMs (i.e. CRF-DHMM, SNoW-DHMM, Backoff-DHMM and LSD-DHMM) outperform projection-based DHMMs. It may be due to alleviation of the label bias problem inherent in the projection-based DHMMs. Finally, Table 2 also compares the kNN ensemble with popular classifier-based approaches, such as SNoW and Maximum Entropy, in estimating the output model of the LSD-DHMM. It shows that the kNN ensemble outperforms these classifierbased approaches. This suggests that the kNN ensemble captures the dependence between the features of the observation sequence more effectively by forcing the model to account for the distribution of those dependencies. ",
"cite_spans": [],
"ref_spans": [
{
"start": 189,
"end": 196,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 806,
"end": 813,
"text": "Table 1",
"ref_id": "TABREF0"
},
{
"start": 1502,
"end": 1509,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 1785,
"end": 1792,
"text": "Table 3",
"ref_id": "TABREF2"
},
{
"start": 2238,
"end": 2245,
"text": "Table 3",
"ref_id": "TABREF2"
},
{
"start": 2475,
"end": 2482,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "\u03b2",
"sec_num": "2"
},
{
"text": "j p ( ..., , 2 1 i j p p i j p + ) p j+ , p i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u03b2",
"sec_num": "2"
},
{
"text": ") | ( 1 n i o s p",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u03b2",
"sec_num": "2"
},
{
"text": "Hidden Markov Models (HMMs) are a powerful probabilistic tool for modeling sequential data and have been applied with success to many textrelated tasks, such as shallow paring. In these cases, the observations are usually modified as multinomial distributions over a discrete dictionary and the HMM parameters are set to maximize the likelihood of the observations. This paper presents a discriminative HMM with long state dependence that allows observations to be represented as arbitrary overlapping features and defines the conditional probability of the state sequence given the observation sequence. It does so by assuming a novel mutual information independence to separate the dependence of a state given the observation sequence and the previous states. Finally, the long state dependence and the observation dependence can be effectively captured by a variable-length mutual information model and a kNN ensemble respectively. In future work, we will explore our model in other applications, such as full parsing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7."
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "An Algorithm that Learns What's in a Name",
"authors": [
{
"first": "D",
"middle": [
"M"
],
"last": "Bikel",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "R",
"middle": [
"M"
],
"last": "Weischedel",
"suffix": ""
}
],
"year": 1999,
"venue": "Machine Learning (Special Issue on NLP)",
"volume": "34",
"issue": "3",
"pages": "211--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bikel D.M., Schwartz R. & Weischedel R.M. (1999). An Algorithm that Learns What's in a Name. Machine Learning (Special Issue on NLP). 34(3): 211-231.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Une approche theorique de l'apprentissage connexionniste: Applications a la reconnaissance de la parole",
"authors": [
{
"first": "L",
"middle": [],
"last": "Bottou",
"suffix": ""
}
],
"year": 1991,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bottou L. (1991). Une approche theorique de l'apprentissage connexionniste: Applications a la reconnaissance de la parole. Doctoral dissertation, Universite de Paris XI.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Tagging Grammatical Functions",
"authors": [
{
"first": "T",
"middle": [],
"last": "Brants",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Skut",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Krenn",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the Conference on Empirical Methods on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brants T., Skut W., & Krenn B. (1997). Tagging Grammatical Functions. Proceedings of the Conference on Empirical Methods on Natural Language Processing (EMNLP'1997). Brown Univ. RI.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "The SNoW learning architecture",
"authors": [
{
"first": "A",
"middle": [],
"last": "Carlson",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Cumby",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Rosen",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carlson A, Cumby C. Rosen J. and Roth D. 1999. The SNoW learning architecture. Techinical Report UIUCDCS-R-99-2101. UIUC Computer Science Department.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A Stochastic Pars Program and Noun Phrase Parser for Unrestricted Text",
"authors": [
{
"first": "K",
"middle": [
"W"
],
"last": "Church",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the Second Conference on Applied Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Church K.W. (1998). A Stochastic Pars Program and Noun Phrase Parser for Unrestricted Text. Proceedings of the Second Conference on Applied Natural Language Processing (ANLP'1998). Austin, Texas.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Fundamentals of neural networks",
"authors": [
{
"first": "L",
"middle": [],
"last": "Fausett",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fausett L. (1994). Fundamentals of neural networks. Prentice Hall Press.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Good-Turing frequency estimation without tears",
"authors": [
{
"first": "W",
"middle": [
"A"
],
"last": "Gale",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Sampson",
"suffix": ""
}
],
"year": 1995,
"venue": "Journal of Quantitative Linguistics",
"volume": "2",
"issue": "",
"pages": "217--237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gale W.A. and Sampson G. 1995. Good-Turing frequency estimation without tears. Journal of Quantitative Linguistics. 2:217-237.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Self-Organized Language Modeling for Speech Recognition",
"authors": [
{
"first": "F",
"middle": [],
"last": "Jelinek",
"suffix": ""
}
],
"year": 1989,
"venue": "",
"volume": "",
"issue": "",
"pages": "450--506",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jelinek F. (1989). Self-Organized Language Modeling for Speech Recognition. In Alex Waibel and Kai-Fu Lee(Editors). Readings in Speech Recognitiopn. Morgan Kaufmann. 450- 506.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Estimation of Probabilities from Sparse Data for the Language Model Component of a Speech Recognizer",
"authors": [
{
"first": "S",
"middle": [
"M"
],
"last": "Katz",
"suffix": ""
}
],
"year": 1987,
"venue": "IEEE Transactions on Acoustics. Speech and Signal Processing",
"volume": "35",
"issue": "",
"pages": "400--401",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katz S.M. (1987). Estimation of Probabilities from Sparse Data for the Language Model Component of a Speech Recognizer. IEEE Transactions on Acoustics. Speech and Signal Processing. 35: 400-401.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Conditional random fields: probabilistic models for segmenting and labeling sequence data",
"authors": [
{
"first": "J",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lafferty J. McCallum A and Pereira F. (2001). Conditional random fields: probabilistic models for segmenting and labeling sequence data. ICML-20.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Buliding a large annotated corpus of English: The Penn Treebank",
"authors": [
{
"first": "M",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Santorini",
"suffix": ""
},
{
"first": "M",
"middle": [
"A"
],
"last": "Marcinkiewicz",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "313--330",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marcus M., Santorini B. & Marcinkiewicz M.A. (1993). Buliding a large annotated corpus of English: The Penn Treebank. Computational Linguistics. 19(2):313-330.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Maximum entropy Markov models for information extraction and segmentation. ICML-19. 591-598",
"authors": [
{
"first": "A",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Freitag",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "McCallum A. Freitag D. and Pereira F. 2000. Maximum entropy Markov models for information extraction and segmentation. ICML- 19. 591-598. Stanford, California.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Tagging English Text with a Probabilistic Model",
"authors": [
{
"first": "B",
"middle": [],
"last": "Merialdo",
"suffix": ""
}
],
"year": 1994,
"venue": "Computational Linguistics",
"volume": "20",
"issue": "2",
"pages": "155--171",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Merialdo B. (1994). Tagging English Text with a Probabilistic Model. Computational Linguistics. 20(2): 155-171.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The Use of Classifiers in Sequential Inference NIPS-13",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "The Use of Classifiers in Sequential Inference NIPS-13.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition",
"authors": [
{
"first": "L",
"middle": [
"R"
],
"last": "Rabiner",
"suffix": ""
}
],
"year": 1989,
"venue": "Proceedings of the IEEE",
"volume": "77",
"issue": "",
"pages": "257--286",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rabiner L.R. (1989). A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition\". Proceedings of the IEEE, 77(2): 257-286.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Learning to parsing natural language with maximum entropy models",
"authors": [
{
"first": "A",
"middle": [],
"last": "Ratnaparkhi",
"suffix": ""
}
],
"year": 1999,
"venue": "Machine Learning",
"volume": "34",
"issue": "",
"pages": "151--175",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ratnaparkhi A. 1999. Learning to parsing natural language with maximum entropy models. Machine Learning. 34:151-175.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Learning to resolve natural language ambiguities: A unified approach",
"authors": [
{
"first": "D",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the National Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "806--813",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roth D. 1998. Learning to resolve natural language ambiguities: A unified approach. In Proceedings of the National Conference on Artificial Intelligence. 806-813.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "An Experiment in Semantic Tagging using Hidden Markov Model Tagging",
"authors": [
{
"first": "F",
"middle": [],
"last": "Segond",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Schiller",
"suffix": ""
},
{
"first": "F",
"middle": [
"P"
],
"last": "Grefenstette & Chanod",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the Joint ACL/EACL workshop on Automatic Information Extraction and Building of Lexical Semantic Resources",
"volume": "",
"issue": "",
"pages": "78--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Segond F., Schiller A., Grefenstette & Chanod F.P. (1997). An Experiment in Semantic Tagging using Hidden Markov Model Tagging. Proceedings of the Joint ACL/EACL workshop on Automatic Information Extraction and Building of Lexical Semantic Resources. pp.78-81. Madrid, Spain.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Chunk Tagger -Statistical Recognition of Noun Phrases",
"authors": [
{
"first": "W",
"middle": [],
"last": "Skut",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Brants",
"suffix": ""
}
],
"year": 1979,
"venue": "Proceedings of the ESSLLI'98 workshop on Automatic Acquisition of Syntax and Parsing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Skut W. & Brants T. (1998). Chunk Tagger - Statistical Recognition of Noun Phrases. Proceedings of the ESSLLI'98 workshop on Automatic Acquisition of Syntax and Parsing. Univ. of Saarbrucken. Germany. van Rijsbergen C.J. (1979). Information Retrieval. Buttersworth, London.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Error Bounds for Convolutional Codes and an Asymptotically Optimum Decoding Algorithm",
"authors": [
{
"first": "A",
"middle": [
"J"
],
"last": "Viterbi",
"suffix": ""
}
],
"year": 1967,
"venue": "IEEE Transactions on Information Theory",
"volume": "13",
"issue": "",
"pages": "260--269",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Viterbi A.J. (1967). Error Bounds for Convolutional Codes and an Asymptotically Optimum Decoding Algorithm. IEEE Transactions on Information Theory. 13: 260-269.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Coping with Ambiguity and Unknown Words through",
"authors": [
{
"first": "R",
"middle": [],
"last": "Weischedel",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Meteer",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Palmucci",
"suffix": ""
}
],
"year": 1993,
"venue": "Probabilistic Methods. Computational Linguistics",
"volume": "19",
"issue": "2",
"pages": "359--382",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Weischedel R., Meteer M., Schwartz R., Ramshaw L. & Palmucci J. (1993). Coping with Ambiguity and Unknown Words through Probabilistic Methods. Computational Linguistics. 19(2): 359- 382.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Error-driven HMM-based Chunk Tagger with Context-Dependent Lexicon",
"authors": [
{
"first": "Zhou",
"middle": [],
"last": "Guodong",
"suffix": ""
},
{
"first": "&",
"middle": [],
"last": "Su Jian",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the Joint Conference on Empirical Methods on Natural Language Processing and Very Large Corpus",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhou GuoDong & Su Jian, (2000). Error-driven HMM-based Chunk Tagger with Context- Dependent Lexicon. Proceedings of the Joint Conference on Empirical Methods on Natural Language Processing and Very Large Corpus (EMNLP/ VLC'2000). Hong Kong.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Named Entity Recognition Using a HMM-based Chunk Tagger",
"authors": [
{
"first": "Zhou",
"middle": [],
"last": "Guodong",
"suffix": ""
},
{
"first": "&",
"middle": [],
"last": "Su Jian",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the Conference on Annual Meeting for Computational Linguistics (ACL'2002",
"volume": "",
"issue": "",
"pages": "473--480",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhou GuoDong & Su Jian. (2002). Named Entity Recognition Using a HMM-based Chunk Tagger, Proceedings of the Conference on Annual Meeting for Computational Linguistics (ACL'2002). 473-480, Philadelphia.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"uris": null,
"type_str": "figure",
"text": "He/PRP) O_VP _VBZ (reckons/VBZ) B_NP _DT (the/DT) M_NP _JJ (current/JJ) M_NP _NN (account/NN) E_NP _NN (deficit/NN) B_VP _MD (will/MD) E_VP _VB (narrow/VB) O_PP _TO (to/TO) B_QP _RB (only/RB) M_QP _$ ($/$) M_QP _CD (1.8/CD) E_QP _CD (billion/CD) O_PP _IN (in/IN) O_NP _NNP(September/NNP) O_O _. (./.) and an equivalent phrase chunked sentence as the shallow parsing result: [NP He/PRP] [VP reckons/VBZ] [ NP the/DT current/JJ account/NN deficit/NN] [VP will/MD narrow/VB] [PP to/TO] [QP only/RB $/$ 1.8/CD billion/CD] [PP in/IN] [NP September/NNP] [O ./.]",
"num": null
},
"TABREF0": {
"content": "<table><tr><td colspan=\"5\">Shallow Parsing</td><td/><td/><td colspan=\"2\">Number of nearest neighbors</td><td/></tr><tr><td/><td/><td/><td/><td/><td>1</td><td>2</td><td>3</td><td>4</td><td>5</td></tr><tr><td>Number of</td><td>considered</td><td>previous</td><td>states</td><td>1 2 4 6 8</td><td>93.12 93.65 93.90 94.12 94.15</td><td>93.50 93.82 94.15 94.28 94.35</td><td>93.76 94.23 94.42 94.53 94.55</td><td>93.70 94.19 94.38 94.54 94.52</td><td>93.66 94.12 94.35 94.51 94.50</td></tr></table>",
"type_str": "table",
"html": null,
"text": "Effect of different numbers of nearest neighbors in the kNN probability estimator and previous states considered in the variable-length mutual information modeling approach of the LSD-DHMMs, using only a probability estimator in the ensemble",
"num": null
},
"TABREF1": {
"content": "<table><tr><td colspan=\"2\">: The Effect of different number of kNN</td></tr><tr><td colspan=\"2\">probability estimators in the ensemble on shallow</td></tr><tr><td>parsing</td><td/></tr><tr><td>Number of kNN probability</td><td>F-measure</td></tr><tr><td>estimators in the ensemble</td><td/></tr><tr><td>1</td><td>94.53</td></tr><tr><td>2</td><td>94.77</td></tr><tr><td>4</td><td>94.93</td></tr><tr><td>8</td><td>95.06</td></tr><tr><td>14</td><td>95.21</td></tr><tr><td>15</td><td>95.24</td></tr><tr><td>16</td><td>95.24</td></tr><tr><td>20</td><td>95.25</td></tr><tr><td>25</td><td>95.25</td></tr><tr><td>28</td><td>95.36</td></tr></table>",
"type_str": "table",
"html": null,
"text": "",
"num": null
},
"TABREF2": {
"content": "<table><tr><td colspan=\"3\">: Comparison of LSD-DHMMs with</td></tr><tr><td colspan=\"2\">GHMMs and other DHMMs</td><td/></tr><tr><td>Models</td><td/><td>F</td></tr><tr><td>GHMMs</td><td>First order</td><td>92.14</td></tr><tr><td/><td>Second order</td><td>92.41</td></tr><tr><td>DHMMs</td><td>ME-PDMM</td><td>93.26</td></tr><tr><td/><td>CRF-DMM</td><td>94.04</td></tr><tr><td/><td>SNoW-PDMM</td><td>93.44</td></tr><tr><td/><td>SNoW-DMM</td><td>94.12</td></tr><tr><td/><td>Backoff-DMM</td><td>93.68</td></tr><tr><td/><td>LSD-DMM(Ensemble)</td><td>95.24</td></tr><tr><td/><td>LSD-DMM(ME)</td><td>94.25</td></tr><tr><td/><td>LSD-DMM(SNoW)</td><td>94.41</td></tr></table>",
"type_str": "table",
"html": null,
"text": "",
"num": null
}
}
}
}