{
"paper_id": "N09-1032",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:42:53.231193Z"
},
"title": "Domain Adaptation with Latent Semantic Association for Named Entity Recognition",
"authors": [
{
"first": "Honglei",
"middle": [],
"last": "Guo",
"suffix": "",
"affiliation": {
"laboratory": "IBM China Research Laboratory",
"institution": "",
"location": {
"settlement": "Beijing",
"country": "P. R. China"
}
},
"email": ""
},
{
"first": "Huijia",
"middle": [],
"last": "Zhu",
"suffix": "",
"affiliation": {
"laboratory": "IBM China Research Laboratory",
"institution": "",
"location": {
"settlement": "Beijing",
"country": "P. R. China"
}
},
"email": "zhuhuiji@cn.ibm.com"
},
{
"first": "Zhili",
"middle": [],
"last": "Guo",
"suffix": "",
"affiliation": {
"laboratory": "IBM China Research Laboratory",
"institution": "",
"location": {
"settlement": "Beijing",
"country": "P. R. China"
}
},
"email": "guozhili@cn.ibm.com"
},
{
"first": "Xiaoxun",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "IBM China Research Laboratory",
"institution": "",
"location": {
"settlement": "Beijing",
"country": "P. R. China"
}
},
"email": "zhangxx@cn.ibm.com"
},
{
"first": "Xian",
"middle": [],
"last": "Wu",
"suffix": "",
"affiliation": {
"laboratory": "IBM China Research Laboratory",
"institution": "",
"location": {
"settlement": "Beijing",
"country": "P. R. China"
}
},
"email": "wuxian@cn.ibm.com"
},
{
"first": "Zhong",
"middle": [],
"last": "Su",
"suffix": "",
"affiliation": {
"laboratory": "IBM China Research Laboratory",
"institution": "",
"location": {
"settlement": "Beijing",
"country": "P. R. China"
}
},
"email": "suzhong@cn.ibm.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Domain adaptation is an important problem in named entity recognition (NER). NER classifiers usually lose accuracy in the domain transfer due to the different data distribution between the source and the target domains. The major reason for performance degrading is that each entity type often has lots of domainspecific term representations in the different domains. The existing approaches usually need an amount of labeled target domain data for tuning the original model. However, it is a labor-intensive and time-consuming task to build annotated training data set for every target domain. We present a domain adaptation method with latent semantic association (LaSA). This method effectively overcomes the data distribution difference without leveraging any labeled target domain data. LaSA model is constructed to capture latent semantic association among words from the unlabeled corpus. It groups words into a set of concepts according to the related context snippets. In the domain transfer, the original term spaces of both domains are projected to a concept space using LaSA model at first, then the original NER model is tuned based on the semantic association features. Experimental results on English and Chinese corpus show that LaSA-based domain adaptation significantly enhances the performance of NER.",
"pdf_parse": {
"paper_id": "N09-1032",
"_pdf_hash": "",
"abstract": [
{
"text": "Domain adaptation is an important problem in named entity recognition (NER). NER classifiers usually lose accuracy in the domain transfer due to the different data distribution between the source and the target domains. The major reason for performance degrading is that each entity type often has lots of domainspecific term representations in the different domains. The existing approaches usually need an amount of labeled target domain data for tuning the original model. However, it is a labor-intensive and time-consuming task to build annotated training data set for every target domain. We present a domain adaptation method with latent semantic association (LaSA). This method effectively overcomes the data distribution difference without leveraging any labeled target domain data. LaSA model is constructed to capture latent semantic association among words from the unlabeled corpus. It groups words into a set of concepts according to the related context snippets. In the domain transfer, the original term spaces of both domains are projected to a concept space using LaSA model at first, then the original NER model is tuned based on the semantic association features. Experimental results on English and Chinese corpus show that LaSA-based domain adaptation significantly enhances the performance of NER.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Named entities (NE) are phrases that contain names of persons, organizations, locations, etc. NER is an important task in information extraction and natural language processing (NLP) applications. Supervised learning methods can effectively solve NER problem by learning a model from manually labeled data (Borthwick, 1999; Sang and Meulder, 2003; Gao et al., 2005; Florian et al., 2003) . However, empirical study shows that NE types have different distribution across domains (Guo et al., 2006) . Trained NER classifiers in the source domain usually lose accuracy in a new target domain when the data distribution is different between both domains. Domain adaptation is a challenge for NER and other NLP applications. In the domain transfer, the reason for accuracy loss is that each NE type often has various specific term representations and context clues in the different domains. For example, {\"economist\", \"singer\", \"dancer\", \"athlete\", \"player\", \"philosopher\", ...} are used as context clues for NER. However, the distribution of these representations are varied with domains. We expect to do better domain adaptation for NER by exploiting latent semantic association among words from different domains. Some approaches have been proposed to group words into \"topics\" to capture important relationships between words, such as Latent Semantic Indexing (LSI) (Deerwester et al., 1990) , probabilistic Latent Semantic Indexing (pLSI) (Hofmann, 1999) , Latent Dirichlet Allocation (LDA) (Blei et al., 2003) . These models have been successfully employed in topic modeling, dimensionality reduction for text categorization (Blei et al., 2003) , ad hoc IR (Wei and Croft., 2006) , and so on.",
"cite_spans": [
{
"start": 306,
"end": 323,
"text": "(Borthwick, 1999;",
"ref_id": "BIBREF5"
},
{
"start": 324,
"end": 347,
"text": "Sang and Meulder, 2003;",
"ref_id": "BIBREF24"
},
{
"start": 348,
"end": 365,
"text": "Gao et al., 2005;",
"ref_id": "BIBREF15"
},
{
"start": 366,
"end": 387,
"text": "Florian et al., 2003)",
"ref_id": "BIBREF12"
},
{
"start": 478,
"end": 496,
"text": "(Guo et al., 2006)",
"ref_id": "BIBREF17"
},
{
"start": 1365,
"end": 1390,
"text": "(Deerwester et al., 1990)",
"ref_id": "BIBREF11"
},
{
"start": 1439,
"end": 1454,
"text": "(Hofmann, 1999)",
"ref_id": "BIBREF18"
},
{
"start": 1491,
"end": 1510,
"text": "(Blei et al., 2003)",
"ref_id": "BIBREF2"
},
{
"start": 1626,
"end": 1645,
"text": "(Blei et al., 2003)",
"ref_id": "BIBREF2"
},
{
"start": 1658,
"end": 1680,
"text": "(Wei and Croft., 2006)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we present a domain adaptation method with latent semantic association. We focus on capturing the hidden semantic association among words in the domain adaptation. We introduce the LaSA model to overcome the distribution difference between the source domain and the target domain. LaSA model is constructed from the unlabeled corpus at first. It learns latent semantic association among words from their related context snippets. In the domain transfer, words in the corpus are associated with a low-dimension concept space using LaSA model, then the original NER model is tuned using these generated semantic association features. The intuition behind our method is that words in one concept set will have similar semantic features or latent semantic association, and share syntactic and semantic context in the corpus. They can be considered as behaving in the same way for discriminative learning in the source and target domains. The proposed method associates words from different domains on a semantic level rather than by lexical occurrence. It can better bridge the domain distribution gap without any labeled target domain samples. Experimental results on English and Chinese corpus show that LaSA-based adaptation significantly enhances NER performance across domains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of this paper is organized as follows. Section 2 briefly describes the related works. Section 3 presents a domain adaptation method based on latent semantic association. Section 4 illustrates how to learn LaSA model from the unlabeled corpus. Section 5 shows experimental results on large-scale English and Chinese corpus across domains, respectively. The conclusion is given in Section 6.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Some domain adaptation techniques have been employed in NLP in recent years. Some of them focus on quantifying the generalizability of certain features across domains. Roark and Bacchiani (2003) use maximum a posteriori (MAP) estimation to combine training data from the source and target domains. Chelba and Acero (2004) use the parameters of the source domain maximum entropy classifier as the means of a Gaussian prior when training a new model on the target data. Daume III and Marcu (2006) use an empirical Bayes model to estimate a latent variable model grouping instances into domain-specific or common across both domains. Daume III (2007) further augments the feature space on the instances of both domains. Jiang and Zhai (2006) exploit the domain structure contained in the training examples to avoid over-fitting the training domains. Arnold et al. (2008) exploit feature hierarchy for transfer learning in NER. Instance weighting (Jiang and Zhai, 2007) and active learning (Chan and Ng, 2007) are also employed in domain adaptation. Most of these approaches need the labeled target domain samples for the model estimation in the domain transfer. Obviously, they require much efforts for labeling the target domain samples.",
"cite_spans": [
{
"start": 168,
"end": 194,
"text": "Roark and Bacchiani (2003)",
"ref_id": "BIBREF23"
},
{
"start": 298,
"end": 321,
"text": "Chelba and Acero (2004)",
"ref_id": "BIBREF7"
},
{
"start": 468,
"end": 494,
"text": "Daume III and Marcu (2006)",
"ref_id": "BIBREF10"
},
{
"start": 631,
"end": 647,
"text": "Daume III (2007)",
"ref_id": "BIBREF9"
},
{
"start": 717,
"end": 738,
"text": "Jiang and Zhai (2006)",
"ref_id": "BIBREF19"
},
{
"start": 847,
"end": 867,
"text": "Arnold et al. (2008)",
"ref_id": "BIBREF1"
},
{
"start": 943,
"end": 965,
"text": "(Jiang and Zhai, 2007)",
"ref_id": "BIBREF20"
},
{
"start": 986,
"end": 1005,
"text": "(Chan and Ng, 2007)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
{
"text": "Some approaches exploit the common structure of related problems. Ando et al. (2005) learn predicative structures from multiple tasks and unlabeled data. Blitzer et al. (2006 Blitzer et al. ( , 2007 employ structural corresponding learning (SCL) to infer a good feature representation from unlabeled source and target data sets in the domain transfer. We present LaSA model to overcome the data gap across domains by capturing latent semantic association among words from unlabeled source and target data.",
"cite_spans": [
{
"start": 66,
"end": 84,
"text": "Ando et al. (2005)",
"ref_id": "BIBREF0"
},
{
"start": 154,
"end": 174,
"text": "Blitzer et al. (2006",
"ref_id": "BIBREF3"
},
{
"start": 175,
"end": 198,
"text": "Blitzer et al. ( , 2007",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
{
"text": "In addition, Miller et al. (2004) and Freitag (2004) employ distributional and hierarchical clustering methods to improve the performance of NER within a single domain. Li and McCallum (2005) present a semi-supervised sequence modeling with syntactic topic models. In this paper, we focus on capturing hidden semantic association among words in the domain adaptation.",
"cite_spans": [
{
"start": 13,
"end": 33,
"text": "Miller et al. (2004)",
"ref_id": "BIBREF14"
},
{
"start": 38,
"end": 52,
"text": "Freitag (2004)",
"ref_id": "BIBREF13"
},
{
"start": 169,
"end": 191,
"text": "Li and McCallum (2005)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
{
"text": "The challenge in domain adaptation is how to capture latent semantic association from the source and target domain data. We present a LaSA-based domain adaptation method in this section. NER can be considered as a classification problem. Let X be a feature space to represent the observed word instances, and let Y be the set of class labels. Let p s (x, y) and p t (x, y) be the true underlying distributions for the source and the target domains, respectively. In order to minimize the efforts required in the domain transfer, we often expect to use p s (x, y) to approximate p t (x, y).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation Based on Latent Semantic Association",
"sec_num": "3"
},
{
"text": "However, data distribution are often varied with the domains. For example, in the economics-to-entertainment domain transfer, although many NE triggers (e.g. \"company\" and \"Mr.\") are used in both domains, some are totally new, like \"dancer\", \"singer\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation Based on Latent Semantic Association",
"sec_num": "3"
},
{
"text": "Moreover, many useful words (e.g. \"economist\") in the economics NER are useless in the entertainment domain. The above examples show that features could change behavior across domains. Some useful predictive features from one domain are not predictive or do not appear in another domain. Although some triggers (e.g. \"singer\", \"economist\") are completely distinct for each domain, they often appear in the similar syntactic and semantic context. For example, triggers of person entity often appear as the subject of \"visited\", \"said\", etc, or are modified by \"excellent\", \"popular\", \"famous\" etc. Such latent semantic association among words provides useful hints for overcoming the data distribution gap of both domains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation Based on Latent Semantic Association",
"sec_num": "3"
},
{
"text": "Hence, we present a LaSA model \u03b8 s,t to capture latent semantic association among words in the domain adaptation. \u03b8 s,t is learned from the unlabeled source and target domain data. Each instance is characterized by its co-occurred context distribution in the learning. Semantic association feature in \u03b8 s,t is a hidden random variable that is inferred from data. In the domain adaptation, we transfer the problem of semantic association mapping to a posterior inference task using LaSA model. Latent semantic concept association set of a word instance x (denoted by SA(x)) is generated by \u03b8 s,t . Instances in the same concept set are considered as behaving in the same way for discriminative learning in both domains. Even though word instances do not appear in a training corpus (or appear rarely) but are in similar context, they still might have relatively high probability in the same semantic concept set. Obviously, SA(x) can better bridge the gap between the two distributions p s (y|x) and p t (y|x). Hence, LaSA model can enhance the estimate of the source domain distribution p s (y|x; \u03b8 s,t ) to better approximate the target domain distribution p t (y|x; \u03b8 s,t ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain Adaptation Based on Latent Semantic Association",
"sec_num": "3"
},
{
"text": "In the domain adaptation, LaSA model is employed to find the latent semantic association structures of \"words\" in a text corpus. We will illustrate how to build LaSA model from words and their context snippets in this section. LaSA model actually can be considered as a general probabilistic topic model. It can be learned on the unlabeled corpus using the popular hidden topic models such as LDA or pLSI.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning LaSA Model from Virtual",
"sec_num": "4"
},
{
"text": "The distribution of content words (e.g. nouns, adjectives) is usually varied with domains. Hence, in the domain adaptation, we focus on capturing the latent semantic association among content words. In order to learn latent relationships among words from the unlabeled corpus, each content word is characterized by a virtual context document as follows. Given a content word x i , the virtual context document of x i (denoted by vd x i ) consists of all the context units around x i in the corpus. Let n be the total number of the sentences which contain x i in the corpus. vd x i is constructed as follows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Virtual Context Document",
"sec_num": "4.1"
},
{
"text": "vd xi = {F (x s1 i ), ..., F (x s k i ), ..., F (x sn i )} where, F (x s k i ) denotes the context feature set of x i in the sentence s k , 1 \u2264 k \u2264 n.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Virtual Context Document",
"sec_num": "4.1"
},
{
"text": "Given the context window size {-t, t} (i.e. previous t words and next t words around",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Virtual Context Document",
"sec_num": "4.1"
},
{
"text": "x i in s k ). F (x s k i )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Virtual Context Document",
"sec_num": "4.1"
},
{
"text": "usually consists of the following features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Virtual Context Document",
"sec_num": "4.1"
},
{
"text": "1. Anchor unit A xi C : the current focused word unit x i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Virtual Context Document",
"sec_num": "4.1"
},
{
"text": "A xi L : The nearest left adjacent unit x i\u22121 around x i , denoted by A L (x i\u22121 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Left adjacent unit",
"sec_num": "2."
},
{
"text": "A xi R : The nearest right adjacent unit x i+1 around x i , denoted by A R (x i+1 ). 4. Left context set C xi L : the other left adjacent units {x i\u2212t , ..., x i\u2212j , ..., x i\u22122 } (2 \u2264 j \u2264 t) around x i , de- noted by {C L (x i\u2212t ), ..., C L (x i\u2212j ), ..., C L (x i\u22122 )}. 5. Right context set C xi R : the other right adjacent units {x i+2 , ..., x i+j , ..., x i+t } (2 \u2264 j \u2264 t ) around x i , de- noted by {C R (x i+2 ), ..., C R (x i+j ), ..., C R (x i+t )}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Right adjacent unit",
"sec_num": "3."
},
{
"text": "For example, given x i =\"singer\", s k =\"This popular new singer attended the new year party\". Let the context window size be {-3,3}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Right adjacent unit",
"sec_num": "3."
},
{
"text": "F (singer) = {singer, A L (new), A R (attend(ed)), C L (this), C L (popular), C R (the), C R (new) }.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Right adjacent unit",
"sec_num": "3."
},
{
"text": "vd x i actually describes the semantic and syntactic feature distribution of x i in the domains. We construct the feature vector of x i with all the observed context features in vd x i . Given vd",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Right adjacent unit",
"sec_num": "3."
},
{
"text": "x i = {f 1 , ..., f j , .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Right adjacent unit",
"sec_num": "3."
},
{
"text": ".., f m }, f j denotes jth context feature around x i , 1 \u2264 j \u2264 m, m denotes the total number of features in vd x i . The value of f j is calculated by Mutual Information (Church and Hanks, 1990) between x i and f j .",
"cite_spans": [
{
"start": 171,
"end": 195,
"text": "(Church and Hanks, 1990)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Right adjacent unit",
"sec_num": "3."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "W eight(f j , x i ) = log 2 P (f j , x i ) P (f j )P (x i )",
"eq_num": "(1)"
}
],
"section": "Right adjacent unit",
"sec_num": "3."
},
{
"text": "where, P (f j , x i ) is the joint probability of x i and f j co-occurred in the corpus, P (f j ) is the probability of f j occurred in the corpus. P (x i ) is the probability of x i occurred in the corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Right adjacent unit",
"sec_num": "3."
},
{
"text": "Topic models are statistical models of text that posit a hidden space of topics in which the corpus is embedded (Blei et al., 2003) . LDA (Blei et al., 2003 ) is a probabilistic model that can be used to model and discover underlying topic structures of documents. LDA assumes that there are K \"topics\", multinomial distributions over words, which describes a collection. Each document exhibits multiple topics, and each word in each document is associated with one of them. LDA imposes a Dirichlet distribution on the topic mixture weights corresponding to the documents in the corpus. The topics derived by LDA seem to possess semantic coherence. Those words with similar semantics are likely to occur in the same topic. Since the number of LDA model parameters depends only on the number of topic mixtures and vocabulary size, LDA is less prone to over-fitting and is capable of estimating the probability of unobserved test documents. LDA is already successfully applied to enhance document representations in text classification (Blei et al., 2003) , information retrieval (Wei and Croft., 2006) .",
"cite_spans": [
{
"start": 112,
"end": 131,
"text": "(Blei et al., 2003)",
"ref_id": "BIBREF2"
},
{
"start": 138,
"end": 156,
"text": "(Blei et al., 2003",
"ref_id": "BIBREF2"
},
{
"start": 1034,
"end": 1053,
"text": "(Blei et al., 2003)",
"ref_id": "BIBREF2"
},
{
"start": 1078,
"end": 1100,
"text": "(Wei and Croft., 2006)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning LaSA Model",
"sec_num": "4.2"
},
{
"text": "In the following, we illustrate how to construct LDA-style LaSA model \u03b8 s,t on the virtual context documents. Algorithm 1 describes LaSA model training method in detail, where, Function AddT o(data, Set) denotes that data is added to Set. Given a large-scale unlabeled data set D u which consists of the source and target domain data, virtual context document for each candidate content word is extracted from D u at first, then the value of each feature in a virtual context document is calculated using its Mutual Information ( see Equation 1 in Section 4.1) instead of the counts when running ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning LaSA Model",
"sec_num": "4.2"
},
{
"text": "(x i , X s,t ); 12 foreach x k \u2208 X s,t do 13 foreach sentence S i \u2208 Du do 14 if x k \u2208 S i then 15 F (x S i k ) \u2190\u2212 16 {x k , A x k L , A x k R , C x k L , C x k R }; AddT o(F (x S i k ), vdx k ); AddT o(vdx k , V D s,t ); 17 \u2022 Generate LaSA model \u03b8 s,t with Dirichlet distribution on V D s,t . 18 end 19",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning LaSA Model",
"sec_num": "4.2"
},
{
"text": "LDA. LaSA model \u03b8 s,t with Dirichlet distribution is generated on the virtual context document set V D s,t using the algorithm presented by Blei et al (2003) . LaSA model learns the posterior distribution to decompose words and their corresponding virtual context documents into topics. Table 1 lists top 10 nouns from a random selection of 5 topics computed on the unlabeled economics and entertainment domain data. As shown, words in the same topic are representative nouns. They actually are grouped into broad concept sets. For example, set 1, 3 and 4 correspond to nominal person, nominal organization and location, respectively. With a large-scale unlabeled corpus, we will have enough words assigned to each topic concept to better approximate the underlying semantic association distribution.",
"cite_spans": [
{
"start": 140,
"end": 157,
"text": "Blei et al (2003)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 287,
"end": 307,
"text": "Table 1 lists top 10",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Learning LaSA Model",
"sec_num": "4.2"
},
{
"text": "In LDA-style LaSA model, the topic mixture is drawn from a conjugate Dirichlet prior that remains the same for all the virtual context docu-ments. Hence, given a word x i in the corpus, we may perform posterior inference to determine the conditional distribution of the hidden topic feature variables associated with x i . Latent semantic association set of x i (denoted by SA(x i )) is generated using Algorithm 2. Here, Multinomial(\u03b8 s,t (vd x i )) refers to sample from the posterior distribution over topics given a virtual document vd x i . In the domain adaptation, we do semantic association inference on the source domain training data using LaSA model at first, then the original source domain NER model is tuned on the source domain training data set by incorporating these generated semantic association features. \u2022 Extract vdx i from the corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning LaSA Model",
"sec_num": "4.2"
},
{
"text": "\u2022 Draw topic weights \u03b8 s,t (vdx i ) from Dirichlet(\u03b1);",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "9",
"sec_num": null
},
{
"text": "\u2022 foreach f j in vdx i do 11 draw a topic z j \u2208{ 1,...,K} from Multinomial(\u03b8 s,t (vdx i ));",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "10",
"sec_num": null
},
{
"text": "AddT o(z j , T opics(vdx i ));",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "12",
"sec_num": null
},
{
"text": "\u2022 Rank all the topics in T opics(vdx i );",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "13",
"sec_num": null
},
{
"text": "\u2022 SA(x i ) \u2190\u2212 top n topics in T opics(vdx i );",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "14",
"sec_num": null
},
{
"text": "LaSA model better models latent semantic association distribution in the source and the target domains. By grouping words into concepts, we effectively overcome the data distribution difference of both domains. Thus, we may reduce the number of parameters required to model the target domain data, and improve the quality of the estimated parameters in the domain transfer. LaSA model extends the traditional bag-of-words topic models to context-dependence concept association model. It has potential use for concept grouping.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "end 16",
"sec_num": "15"
},
{
"text": "We evaluate LaSA-based domain adaptation method on both English and Chinese corpus in this section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "In the experiments, we focus on recognizing person (PER), location (LOC) and organization (ORG) in the given four domains, including economics (Eco), entertainment (Ent), politics (Pol) and sports (Spo).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "In the NER domain adaptation, nouns and adjectives make a significant impact on the performance. Thus, we focus on capturing latent semantic association for high-frequency nouns and adjectives (i.e. occurrence count \u2265 50 ) in the unlabeled corpus. LaSA models for nouns and adjectives are learned from the unlabeled corpus using Algorithm 1 (see section 4.2), respectively. Our empirical study shows that better adaptation is obtained with a 50-topic LaSA model. Therefore, we set the number of topics N as 50, and define the context view window size as {-3,3} (i.e. previous 3 words and next 3 words) in the LaSA model learning. LaSA features for other irrespective words (e.g. token unit \"the\") are assigned with a default topic value N +1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setting",
"sec_num": "5.1"
},
{
"text": "All the basic NER models are trained on the domain-specific training data using RRM classifier (Guo et al., 2005) . RRM is a generalization Winnow learning algorithm (Zhang et al., 2002) . We set the context view window size as {-2,2} in NER. Given a word instance x, we employ local linguistic features (e.g. word unit, part of speech) of x and its context units ( i.e. previous 2 words and next 2 words ) in NER. All Chinese texts in the experiments are automatically segmented into words using HMM.",
"cite_spans": [
{
"start": 95,
"end": 113,
"text": "(Guo et al., 2005)",
"ref_id": "BIBREF16"
},
{
"start": 166,
"end": 186,
"text": "(Zhang et al., 2002)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setting",
"sec_num": "5.1"
},
{
"text": "In LaSA-based domain adaptation, the semantic association features of each unit in the observation window {-2,2} are generated by LaSA model at first, then the basic source domain NER model is tuned on the original source domain training data set by incorporating the semantic association features. For example, given the sentence \"This popular new singer attended the new year party\", Figure 1 illustrates various features and views at the current word w i = \"singer\" in LaSA-based adaptation. In the viewing window at the word \"singer\" (see Figure 1 ), each word unit around \"singer\" is codified with a set of primitive features (e.g. P OS, SA, T ag), together with its relative position to \"singer\".",
"cite_spans": [],
"ref_spans": [
{
"start": 386,
"end": 394,
"text": "Figure 1",
"ref_id": "FIGREF2"
},
{
"start": 543,
"end": 551,
"text": "Figure 1",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Experimental setting",
"sec_num": "5.1"
},
{
"text": "Here, \"SA\" denotes semantic association feature set which is generated by LaSA model. \"T ag\" denotes NE tags labeled in the data set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setting",
"sec_num": "5.1"
},
{
"text": "Given the input vector constructed with the above features, RRM method is then applied to train linear weight vectors, one for each possible class-label. In the decoding stage, the class with the maximum confidence is then selected for each token unit.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setting",
"sec_num": "5.1"
},
{
"text": "In our evaluation, only NEs with correct boundaries and correct class labels are considered as the correct recognition. We use the standard Precision (P), Recall (R), and F-measure (F = 2P R P +R ) to measure the performance of NER models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setting",
"sec_num": "5.1"
},
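Under the exact-match convention above, the metrics can be computed as follows; the (start, end, label) span representation is an illustrative assumption:

```python
# Precision, recall and F-measure under exact-match scoring: an NE
# counts as correct only if both its boundaries and its class agree.

def evaluate(gold, predicted):
    """gold/predicted: sets of (start, end, label) NE spans."""
    correct = len(gold & predicted)
    p = correct / len(predicted) if predicted else 0.0
    r = correct / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

gold = {(0, 2, "PER"), (5, 7, "LOC"), (9, 10, "ORG")}
pred = {(0, 2, "PER"), (5, 7, "ORG"), (9, 10, "ORG")}
p, r, f = evaluate(gold, pred)
```

Note that the (5, 7) span has the right boundaries but the wrong class, so it counts as an error on both sides, giving P = R = F = 2/3 here.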
{
"text": "We built large-scale English and Chinese annotated corpora. The English corpus is generated from Wikipedia, while the Chinese corpus is selected from Chinese newspapers. Moreover, the test data do not overlap with the training data or the unlabeled data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "5.2"
},
{
"text": "Wikipedia provides a variety of data resources for NER and other NLP research (Richman and Schone, 2008) . We generate all the annotated English corpus from Wikipedia. Owing to limited annotation effort, only PER NEs in the corpus are automatically tagged, using an English person gazetteer. We first automatically extract this gazetteer from Wikipedia, then select articles from Wikipedia and tag them with it.",
"cite_spans": [
{
"start": 78,
"end": 104,
"text": "(Richman and Schone, 2008)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generate English Annotated Corpus from Wikipedia",
"sec_num": "5.2.1"
},
{
"text": "To build the English person gazetteer from Wikipedia, we first manually selected several key phrases, including \"births\", \"deaths\", \"surname\", \"given names\" and \"human names\". For each article title of interest, we extracted the categories to which that entry was assigned. An entry is considered a person name if its explicit category links contain any one of the key phrases, such as \"Category: human names\". In total, we extracted 25,219 person-name candidates from 204,882 Wikipedia articles. We then expanded this gazetteer by adding other available common person names, finally obtaining a large-scale gazetteer of 51,253 person names.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generate English Annotated Corpus from Wikipedia",
"sec_num": "5.2.1"
},
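The key-phrase heuristic described above might be sketched as follows; the article data, dictionary shape, and function name are hypothetical stand-ins for the actual Wikipedia processing:

```python
# Sketch of the gazetteer-building heuristic: an article title is kept
# as a person-name candidate if any of its category links contains one
# of the manually chosen key phrases. Toy data, not the real dump.

KEY_PHRASES = ("births", "deaths", "surname", "given names", "human names")

def extract_person_names(articles):
    """articles: dict mapping article title -> list of category strings."""
    names = set()
    for title, categories in articles.items():
        if any(kp in cat.lower() for cat in categories for kp in KEY_PHRASES):
            names.add(title)
    return names

articles = {
    "John Smith": ["Category: 1950 births", "Category: Living people"],
    "Paris": ["Category: Capitals in Europe"],
    "Garcia": ["Category: Human names"],
}
gazetteer = extract_person_names(articles)
```

"John Smith" matches on "births" and "Garcia" on "human names", while "Paris" matches no key phrase and is excluded.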
{
"text": "All the articles selected from Wikipedia are further tagged using the above large-scale gazetteer. Since no human-annotated set was available, we held out more than 100,000 words of text from the automatically tagged corpus as a test set in each domain. We also randomly selected 17M words of unlabeled English data (see Table 3 ) from Wikipedia. These unlabeled data are used to build the English LaSA model. ",
"cite_spans": [],
"ref_spans": [
{
"start": 315,
"end": 322,
"text": "Table 3",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Generate English Annotated Corpus from Wikipedia",
"sec_num": "5.2.1"
},
{
"text": "We built a large-scale, high-quality Chinese NE annotated corpus. All the data are news articles from several Chinese newspapers in 2001 and 2002. All the NEs (i.e. PER, LOC and ORG) in the corpus are manually tagged, and cross-validation checking is employed to ensure the quality of the annotated corpus. All the domain-specific training and test data are selected from this annotated corpus according to the domain categories (see Table 4 ). 8.46M words of unlabeled Chinese data (see Table 5 ) are randomly selected from this corpus to build the Chinese LaSA model.",
"cite_spans": [],
"ref_spans": [
{
"start": 430,
"end": 437,
"text": "Table 4",
"ref_id": "TABREF9"
},
{
"start": 475,
"end": 482,
"text": "Table 5",
"ref_id": "TABREF11"
}
],
"eq_spans": [],
"section": "Chinese Data",
"sec_num": "5.2.2"
},
{
"text": "All the experiments are conducted on the above large-scale English and Chinese corpora. The overall performance enhancement of NER by LaSA-based domain adaptation is evaluated first. Since the distribution of each NE type differs across domains, we also analyze the performance enhancement on each entity type under LaSA-based adaptation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "5.3"
},
{
"text": "LaSA-based Domain Adaptation Table 6 and Table 7 show the experimental results for all pairs of domain adaptation on the English and Chinese corpora, respectively. In each experiment, the basic source-domain NER model M s is learned from the domain-specific training data set D dom (see Table 2 and 4 in Section 5.2), where dom \u2208 {Eco, Ent, Pol, Spo}. F in dom denotes the top-line F-measure of M s in the source training domain dom. When M s is directly applied to a new target domain, its F-measure in this basic transfer is taken as the baseline (denoted by F Base ). F LaSA denotes the F-measure of M s achieved in the target domain with LaSA-based domain adaptation. \u03b4(F ) =",
"cite_spans": [],
"ref_spans": [
{
"start": 29,
"end": 36,
"text": "Table 6",
"ref_id": "TABREF13"
},
{
"start": 280,
"end": 287,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Performance Enhancement of NER by",
"sec_num": "5.3.1"
},
{
"text": "(F LaSA \u2212 F Base ) / F Base",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performance Enhancement of NER by",
"sec_num": "5.3.1"
},
{
"text": ", which denotes the relative F-measure enhancement by LaSA-based domain adaptation. Experimental results on the English and Chinese corpora indicate that the performance of M s significantly degrades in each basic domain transfer without the LaSA model (see Table 6 and 7). For example, in the \"Pol\u2192Eco\" transfer on English corpus (see Table 6 ), F Base is 63.62% while F LaSA achieves 68.10%; compared with F Base , the LaSA-based method significantly enhances F-measure by 7.04%. We perform t-tests on the F-measures of all the comparison experiments on English corpus. The p-value is 2.44E-06, which shows that the improvement is statistically significant. Table 6 also gives the accuracy loss due to transfer in each domain adaptation on English corpus. The accuracy loss is defined as loss = 1 \u2212 F/F in dom , and the relative reduction in error is defined as \u03b4(loss) = |1 \u2212 loss LaSA /loss Base |. Experimental results indicate that the relative reduction in error is above 9.93% with LaSA-based transfer in each test on English corpus. The LaSA model decreases the accuracy loss by 29.38% on average. Especially for the \"Spo\u2192Pol\" transfer, \u03b4(loss) achieves 63.98% with LaSA-based adaptation. All the above results show that LaSA-based adaptation significantly reduces the accuracy loss in the domain transfer for English NER without any labeled target-domain samples.",
"cite_spans": [],
"ref_spans": [
{
"start": 255,
"end": 262,
"text": "Table 6",
"ref_id": "TABREF13"
},
{
"start": 333,
"end": 340,
"text": "Table 6",
"ref_id": "TABREF13"
},
{
"start": 688,
"end": 695,
"text": "Table 6",
"ref_id": "TABREF13"
}
],
"eq_spans": [],
"section": "Performance Enhancement of NER by",
"sec_num": "5.3.1"
},
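The three transfer metrics used in this section can be written as small helpers and checked against numbers quoted in the text from Tables 6 and 7; the function names are illustrative:

```python
# delta(F): relative F-measure enhancement over the baseline transfer.
# loss: accuracy loss relative to the in-domain top line F_in_dom.
# delta(loss): relative reduction in error from baseline to LaSA.

def relative_f_gain(f_lasa, f_base):
    return (f_lasa - f_base) / f_base

def accuracy_loss(f, f_in_dom):
    return 1 - f / f_in_dom

def error_reduction(loss_lasa, loss_base):
    return abs(1 - loss_lasa / loss_base)

# "Pol -> Eco" on English corpus: F_Base 63.62%, F_LaSA 68.10%.
gain = relative_f_gain(0.6810, 0.6362)
# "Eco -> Ent" on Chinese corpus (Table 7): F_in_Ent 83.16%,
# F_Base 60.45%, F_LaSA 66.42%.
err = error_reduction(accuracy_loss(0.6642, 0.8316),
                      accuracy_loss(0.6045, 0.8316))
```

Both helpers reproduce the reported figures: the relative F-measure gain comes out near 7.04% and the relative reduction in error near 26.29%, matching the text and Table 7.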
{
"text": "Experimental results on Chinese corpus also show that LaSA-based adaptation effectively increases the accuracy in all the tests (see Table 7 ). For example, in the \"Eco\u2192Ent\" transfer, compared with F Base , LaSA-based adaptation significantly increases F-measure by 9.88%. We also perform t-tests on the F-measures of the 12 comparison experiments on Chinese corpus. The p-value is 1.99E-06, which shows that the enhancement is statistically significant. Moreover, the relative reduction in error is above 10% with the LaSA-based method in each test. The LaSA model decreases the accuracy loss by 16.43% on average. Especially for the \"Eco\u2192Ent\" transfer (see Table 7 ), \u03b4(loss) achieves 26.29% with the LaSA-based method.",
"cite_spans": [],
"ref_spans": [
{
"start": 133,
"end": 140,
"text": "Table 7",
"ref_id": "TABREF14"
},
{
"start": 641,
"end": 648,
"text": "Table 7",
"ref_id": "TABREF14"
}
],
"eq_spans": [],
"section": "Performance Enhancement of NER by",
"sec_num": "5.3.1"
},
{
"text": "All the above experimental results on the English and Chinese corpora show that LaSA-based domain adaptation significantly decreases the accuracy loss in the transfer without any labeled target-domain data. Although automatic tagging introduced some errors into the English source training data, the relative reduction in error in English NER adaptation is comparable to that in Chinese NER adaptation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performance Enhancement of NER by",
"sec_num": "5.3.1"
},
{
"text": "Our statistical data (Guo et al., 2006) show that the distribution of NE types varies with domains, and each NE type has different domain features. Thus, the performance stability of each NE type recognition is very important in the domain transfer. Figure 2 gives the F-measure of each NE type recognition achieved by LaSA-based adaptation on the English and Chinese corpora. Experimental results show that LaSA-based adaptation effectively increases the accuracy of each NE type recognition in most of the domain transfer tests. We perform t-tests on the F-measures of the comparison experiments on each NE type, respectively. All the p-values are less than 0.01, which shows that the improvement on each NE type recognition is statistically significant. In particular, the p-values for English and Chinese PER are 2.44E-06 and 9.43E-05, respectively, which shows that the improvement on PER recognition is very significant. For example, in the \"Eco\u2192Pol\" transfer on Chinese corpus, compared with F Base , LaSA-based adaptation enhances the F-measure of PER recognition by 9.53 percentage points. The performance enhancement for ORG recognition is smaller than that for PER and LOC recognition since ORG NEs usually contain much more domain-specific information than PER and LOC NEs.",
"cite_spans": [
{
"start": 19,
"end": 37,
"text": "(Guo et al., 2006)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 244,
"end": 252,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Accuracy Enhancement for Each NE Type Recognition",
"sec_num": "5.3.2"
},
{
"text": "The major reason for the error reduction is that external context and internal units are better semantically associated by the LaSA model (Figure 2 : PER, LOC and ORG recognition in the transfer). For example, the LaSA model better groups various titles from different domains (see Table 1 in Section 4.2), and various industry terms in ORG NEs are also grouped into semantic sets. These semantic associations provide useful hints for detecting the boundaries of NEs in the new target domain. All the above results show that the LaSA model better compensates for the feature distribution difference of each NE type across domains.",
"cite_spans": [],
"ref_spans": [
{
"start": 152,
"end": 160,
"text": "Figure 2",
"ref_id": null
},
{
"start": 271,
"end": 278,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Accuracy Enhancement for Each NE Type Recognition",
"sec_num": "5.3.2"
},
{
"text": "We present a domain adaptation method with the LaSA model in this paper. The LaSA model captures latent semantic association among words from the unlabeled corpus, grouping words into a set of concepts according to the related context snippets. The LaSA-based domain adaptation method projects words into a low-dimensional concept feature space in the transfer, effectively overcoming the data distribution gap across domains without using any labeled target-domain data. Experimental results on the English and Chinese corpora show that LaSA-based domain adaptation significantly enhances the performance of NER across domains. In particular, the LaSA model effectively increases the accuracy of each NE type recognition in the domain transfer. Moreover, the LaSA-based domain adaptation method works well across languages. To further reduce the accuracy loss, we will explore informative sampling to capture fine-grained data differences in the domain transfer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "A Framework for Learning Predictive Structures from Multiple Tasks and Unlabeled Data",
"authors": [
{
"first": "Rie",
"middle": [],
"last": "Ando",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2005,
"venue": "In Journal of Machine Learning Research",
"volume": "6",
"issue": "",
"pages": "1817--1853",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rie Ando and Tong Zhang. 2005. A Framework for Learning Predictive Structures from Multiple Tasks and Unlabeled Data. In Journal of Machine Learning Research 6 (2005), pages 1817-1853.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Exploiting Feature Hierarchy for Transfer Learning in Named Entity Recognition",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Arnold",
"suffix": ""
},
{
"first": "Ramesh",
"middle": [],
"last": "Nallapati",
"suffix": ""
},
{
"first": "William",
"middle": [
"W"
],
"last": "Cohen",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of 46th Annual Meeting of the Association of Computational Linguistics (ACL'08)",
"volume": "",
"issue": "",
"pages": "245--253",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Arnold, Ramesh Nallapati, and William W. Co- hen. 2008. Exploiting Feature Hierarchy for Trans- fer Learning in Named Entity Recognition. In Pro- ceedings of 46th Annual Meeting of the Association of Computational Linguistics (ACL'08), pages 245-253.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Latent Dirichlet Allocation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Blei",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Jordan",
"suffix": ""
}
],
"year": 2003,
"venue": "Journal of Machine Learning Research",
"volume": "3",
"issue": "",
"pages": "993--1022",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Blei, Andrew Ng, and Michael Jordan. 2003. La- tent Dirichlet Allocation. Journal of Machine Learn- ing Research, 3:993-1022.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Domain Adaptation with Structural Correspondence Learning",
"authors": [
{
"first": "John",
"middle": [],
"last": "Blitzer",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing (EMNLP 2006)",
"volume": "",
"issue": "",
"pages": "120--128",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Blitzer, Ryan McDonald, and Fernando Pereira. 2006. Domain Adaptation with Structural Correspon- dence Learning. In Proceedings of the 2006 Confer- ence on Empirical Methods in Natural Language Pro- cessing (EMNLP 2006), pages 120-128.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Biographies, Bollywood, Boom-boxes and Blenders: Domain Adaptation for Sentiment Classification",
"authors": [
{
"first": "John",
"middle": [],
"last": "Blitzer",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Dredze",
"suffix": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics (ACL'07)",
"volume": "",
"issue": "",
"pages": "440--447",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Blitzer, Mark Dredze, and Fernando Pereira. 2007. Biographies, Bollywood, Boom-boxes and Blenders: Domain Adaptation for Sentiment Classification. In Proceedings of the 45th Annual Meeting of the Asso- ciation of Computational Linguistics (ACL'07), pages 440-447.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A Maximum Entropy Approach to Named Entity Recognition",
"authors": [
{
"first": "Andrew",
"middle": [],
"last": "Borthwick",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrew Borthwick. 1999. A Maximum Entropy Ap- proach to Named Entity Recognition. Ph.D. thesis, New York University.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Domain Adaptation with Active Learning for Word Sense Disambiguation",
"authors": [
{
"first": "Yee",
"middle": [],
"last": "Seng Chan",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics (ACL'07)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yee Seng Chan and Hwee Tou Ng. 2007. Domain Adap- tation with Active Learning for Word Sense Disam- biguation. In Proceedings of the 45th Annual Meet- ing of the Association of Computational Linguistics (ACL'07).",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Adaptation of maximum entropy capitalizer: Little data can help a lot",
"authors": [
{
"first": "Ciprian",
"middle": [],
"last": "Chelba",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Acero",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ciprian Chelba and Alex Acero. 2004. Adaptation of maximum entropy capitalizer: Little data can help a lot. In Proceedings of the 2004 Conference on Empir- ical Methods in Natural Language Processing.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Word association norms, mutual information and lexicography",
"authors": [
{
"first": "Kenneth",
"middle": [
"Ward"
],
"last": "Church",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Hanks",
"suffix": ""
}
],
"year": 1990,
"venue": "Computational Linguistics",
"volume": "16",
"issue": "1",
"pages": "22--29",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenneth Ward Church and Patrick Hanks. 1990. Word association norms, mutual information and lexicogra- phy. Computational Linguistics, 16(1):22-29.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Frustratingly Easy Domain Adaptation",
"authors": [
{
"first": "Hal",
"middle": [],
"last": "Daume",
"suffix": ""
},
{
"first": "Iii",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hal Daume III. 2007. Frustratingly Easy Domain Adap- tation. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Domain adaptation for statistical classifiers",
"authors": [
{
"first": "Hal",
"middle": [],
"last": "Daume",
"suffix": ""
},
{
"first": "Iii",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2006,
"venue": "Journal of Artificial Intelligence Research",
"volume": "26",
"issue": "",
"pages": "101--126",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hal Daume III and Daniel Marcu. 2006. Domain adap- tation for statistical classifiers. Journal of Artificial Intelligence Research, 26:101-126.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Indexing by latent semantic analysis",
"authors": [
{
"first": "Scott",
"middle": [],
"last": "Deerwester",
"suffix": ""
},
{
"first": "Susan",
"middle": [
"T"
],
"last": "Dumais",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Harshman",
"suffix": ""
}
],
"year": 1990,
"venue": "Journal of the American Society for Information Science",
"volume": "41",
"issue": "6",
"pages": "391--407",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Scott Deerwester, Susan T. Dumais, and Richard Harsh- man. 1990. Indexing by latent semantic analysis. Journal of the American Society for Information Sci- ence, 41(6):391-407.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Named entity recogintion through classifier combination",
"authors": [
{
"first": "Radu",
"middle": [],
"last": "Florian",
"suffix": ""
},
{
"first": "Abe",
"middle": [],
"last": "Ittycheriah",
"suffix": ""
},
{
"first": "Hongyan",
"middle": [],
"last": "Jing",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2003 Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Radu Florian, Abe Ittycheriah, Hongyan Jing, and Tong Zhang. 2003. Named entity recogintion through clas- sifier combination. In Proceedings of the 2003 Confer- ence on Computational Natural Language Learning.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Trained Named Entity Recognition Using Distributional Clusters",
"authors": [
{
"first": "",
"middle": [],
"last": "Freitag",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Freitag. 2004. Trained Named Entity Recognition Using Distributional Clusters. In Proceedings of the 2004 Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP 2004).",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Name Tagging with Word Clusters and Discriminative Training",
"authors": [
{
"first": "Scott",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "Jethran",
"middle": [],
"last": "Guinness",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Zamanian",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of HLT-NAACL 04",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Scott Miller, Jethran Guinness, and Alex Zamanian. 2004. Name Tagging with Word Clusters and Discrim- inative Training. In Proceedings of HLT-NAACL 04.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Chinese Word Segmentation and Named Entity Recognition: A Pragmatic Approach",
"authors": [
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Mu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Anndy",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Changning",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2005,
"venue": "Computational Linguisitc",
"volume": "31",
"issue": "4",
"pages": "531--574",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jianfeng Gao, Mu Li, Anndy Wu, and Changning Huang. 2005. Chinese Word Segmentation and Named Entity Recognition: A Pragmatic Approach. Computational Linguisitc, 31(4):531-574.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Chinese Named Entity Recognition Based on Multilevel Linguistic Features",
"authors": [
{
"first": "Honglei",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Jianmin",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Gang",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2005,
"venue": "In Lecture Notes in Artificial Intelligence",
"volume": "3248",
"issue": "",
"pages": "90--99",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Honglei Guo, Jianmin Jiang, Gang Hu, and Tong Zhang. 2005. Chinese Named Entity Recognition Based on Multilevel Linguistic Features. In Lecture Notes in Ar- tificial Intelligence, 3248:90-99.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Empirical Study on the Performance Stability of Named Entity Recognition Model across Domains",
"authors": [
{
"first": "Honglei",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhong",
"middle": [],
"last": "Su",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "509--516",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Honglei Guo, Li Zhang, and Zhong Su. 2006. Empirical Study on the Performance Stability of Named Entity Recognition Model across Domains. In Proceedings of the 2006 Conference on Empirical Methods in Nat- ural Language Processing (EMNLP 2006), pages 509- 516.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Probabilistic latent semantic indexing",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Hofmann",
"suffix": ""
}
],
"year": 1999,
"venue": "Proceedings of the 22th Annual International SIGIR Conference on Research and Development in Information Retrieval (SIGIR'99)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Hofmann. 1999. Probabilistic latent semantic indexing. In Proceedings of the 22th Annual Inter- national SIGIR Conference on Research and Develop- ment in Information Retrieval (SIGIR'99).",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Exploiting Domain Structure for Named Entity Recognition",
"authors": [
{
"first": "Jing",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Chengxiang",
"middle": [],
"last": "Zhai",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of HLT-NAACL 2006",
"volume": "",
"issue": "",
"pages": "74--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jing Jiang and ChengXiang Zhai. 2006. Exploiting Do- main Structure for Named Entity Recognition. In Pro- ceedings of HLT-NAACL 2006, pages 74-81.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Instance Weighting for Domain Adaptation in NLP",
"authors": [
{
"first": "Jing",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Chengxiang",
"middle": [],
"last": "Zhai",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics (ACL'07)",
"volume": "",
"issue": "",
"pages": "264--271",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jing Jiang and ChengXiang Zhai. 2007. Instance Weighting for Domain Adaptation in NLP. In Pro- ceedings of the 45th Annual Meeting of the Associ- ation of Computational Linguistics (ACL'07), pages 264-271.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Semi-supervised sequence modeling with syntactic topic models",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of Twenty AAAI Conference on Artificial Intelligence (AAAI-05)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Li and Andrew McCallum. 2005. Semi-supervised sequence modeling with syntactic topic models. In Proceedings of Twenty AAAI Conference on Artificial Intelligence (AAAI-05).",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Mining Wiki Resources for Multilingual Named Entity Recognition",
"authors": [
{
"first": "Alexander",
"middle": [
"E"
],
"last": "Richman",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Schone",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 46th Annual Meeting of the Association of Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander E. Richman and Patrick Schone. 2008. Min- ing Wiki Resources for Multilingual Named Entity Recognition. In Proceedings of the 46th Annual Meet- ing of the Association of Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Supervised and unsupervised PCFG adaptation to novel domains",
"authors": [
{
"first": "Brian",
"middle": [],
"last": "Roark",
"suffix": ""
},
{
"first": "Michiel",
"middle": [],
"last": "Bacchiani",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2003 Human Language Technology Conference of the North American Chapter",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brian Roark and Michiel Bacchiani. 2003. Supervised and unsupervised PCFG adaptation to novel domains. In Proceedings of the 2003 Human Language Technol- ogy Conference of the North American Chapter of the Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Introduction to the conll-2003 shared task: Language independent named entity recognition",
"authors": [
{
"first": "Erik",
"middle": [
"F"
],
"last": "Tjong",
"suffix": ""
},
{
"first": "Kim",
"middle": [],
"last": "Sang",
"suffix": ""
},
{
"first": "Fien",
"middle": [],
"last": "De Meulder",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2003 Conference on Computational Natural Language Learning (CoNLL-2003)",
"volume": "",
"issue": "",
"pages": "142--147",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Language independent named entity recognition. In Proceed- ings of the 2003 Conference on Computational Natural Language Learning (CoNLL-2003), pages 142-147.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "LDA-based document models for ad-hoc retrieval",
"authors": [
{
"first": "Xing",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Bruce",
"middle": [],
"last": "Croft",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 29th Annual International SIGIR Conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xing Wei and Bruce Croft. 2006. LDA-based document models for ad-hoc retrieval. In Proceedings of the 29th Annual International SIGIR Conference on Research and Development in Information Retrieval.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Text chunking based on a generalization of Winnow",
"authors": [
{
"first": "Tong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Fred",
"middle": [],
"last": "Damerau",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2002,
"venue": "Journal of Machine Learning Research",
"volume": "2",
"issue": "",
"pages": "615--637",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tong Zhang, Fred Damerau, and David Johnson. 2002. Text chunking based on a generalization of Winnow. Journal of Machine Learning Research, 2:615-637.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "LaSA model: \u03b8 s,t ; Virtual context document set: V D s,t = \u2205; Candidate content word set: X s,t = \u2205; for each word x i \u2208 Du do: if Frequency(x i ) \u2265 the predefined threshold then AddTo"
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "\u03b8 s,t : LaSA model with multinomial distribution; 2 Dirichlet(\u03b1): Dirichlet distribution with parameter \u03b1; 3 \u2022 x i : Content word; SA(x i ): Latent semantic association set of x i ;"
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Feature window in LaSA-based adaptation"
},
"TABREF1": {
"content": "<table/>",
"html": null,
"type_str": "table",
"num": null,
"text": ""
},
"TABREF2": {
"content": "<table><tr><td>LaSA Model</td></tr><tr><td>Inputs:</td></tr></table>",
"html": null,
"type_str": "table",
"num": null,
"text": "Generate Latent Semantic Association Set of Word x i Using K-topic"
},
"TABREF4": {
"content": "<table><tr><td>Domains</td><td colspan=\"2\">Training Data Set</td><td colspan=\"2\">Test Data Set</td></tr><tr><td/><td>Size</td><td>PERs</td><td>Size</td><td>PERs</td></tr><tr><td>Pol</td><td>0.45M</td><td>9,383</td><td>0.23M</td><td>6,067</td></tr><tr><td>Eco</td><td>1.06M</td><td>21,023</td><td>0.34M</td><td>6,951</td></tr><tr><td>Spo</td><td>0.47M</td><td>17,727</td><td>0.20M</td><td>6,075</td></tr><tr><td>Ent</td><td>0.36M</td><td>12,821</td><td>0.15M</td><td>5,395</td></tr></table>",
"html": null,
"type_str": "table",
"num": null,
"text": "shows the data distribution of the training and test data sets."
},
"TABREF5": {
"content": "<table/>",
"html": null,
"type_str": "table",
"num": null,
"text": "English training and test data sets"
},
"TABREF7": {
"content": "<table/>",
"html": null,
"type_str": "table",
"num": null,
"text": "Domain distribution in the unlabeled English data set"
},
"TABREF9": {
"content": "<table/>",
"html": null,
"type_str": "table",
"num": null,
"text": "Chinese training and test data sets"
},
"TABREF11": {
"content": "<table/>",
"html": null,
"type_str": "table",
"num": null,
"text": "Domain distribution in the unlabeled Chinese data set"
},
"TABREF13": {
"content": "<table/>",
"html": null,
"type_str": "table",
"num": null,
"text": "Experimental results on English corpus"
},
"TABREF14": {
"content": "<table><tr><td>Source \u2192 Target</td><td/><td colspan=\"3\">Performance in the domain transfer</td><td/></tr><tr><td>Eco\u2192Ent Pol\u2192Ent Spo\u2192Ent</td><td>F Base 60.45% 69.89% 68.66%</td><td>F LaSA 66.42% 73.07% 70.89%</td><td>\u03b4(F ) +9.88% +4.55% +3.25%</td><td>\u03b4(loss) 26.29% 23.96% 15.38%</td><td>F T op F in Ent =83.16% F in Ent =83.16% F in Ent =83.16%</td></tr><tr><td>Ent\u2192Eco Pol\u2192Eco Spo\u2192Eco</td><td>58.50% 62.89% 60.44%</td><td>61.35% 64.93% 63.20%</td><td>+4.87% +3.24% +4.57%</td><td>11.98% 10.52% 12.64%</td><td>F in Eco =82.28% F in Eco =82.28% F in Eco =82.28%</td></tr><tr><td>Eco\u2192Pol Ent\u2192Pol Spo\u2192Pol</td><td>67.03% 66.64% 65.40%</td><td>70.90% 68.94% 67.20%</td><td>+5.77% +3.45% +2.75%</td><td>27.78% 16.06% 11.57%</td><td>F in P ol =80.96% F in P ol =80.96% F in P ol =80.96%</td></tr><tr><td>Eco\u2192Spo Ent\u2192Spo Pol\u2192Spo</td><td>67.20% 70.05% 70.99%</td><td>70.77% 72.20% 73.86%</td><td>+5.31% +3.07% +4.04%</td><td>15.47% 10.64% 14.91%</td><td>F in Spo =90.24% F in Spo =90.24% F in Spo =90.24%</td></tr></table>",
"html": null,
"type_str": "table",
"num": null,
"text": ", F in eco of M s is 82.28% while F Base of M s is 60.45% in the entertainment domain. Fmeasure of M s significantly degrades by 21.83 per-"
},
"TABREF15": {
"content": "<table><tr><td>: Experimental results on Chinese corpus</td></tr><tr><td>cent points in this basic transfer. Significant perfor-</td></tr><tr><td>mance degrading of M s is observed in all the basic</td></tr><tr><td>transfer. It shows that the data distribution of both</td></tr><tr><td>domains is very different in each possible transfer.</td></tr><tr><td>Experimental results on English corpus show that</td></tr><tr><td>LaSA-based adaptation effectively enhances the per-</td></tr><tr><td>formance in each domain transfer (see</td></tr></table>",
"html": null,
"type_str": "table",
"num": null,
"text": ""
}
}
}
}