{
"paper_id": "N13-1006",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:40:43.889860Z"
},
"title": "Named Entity Recognition with Bilingual Constraints",
"authors": [
{
"first": "Wanxiang",
"middle": [],
"last": "Che",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Mengqiu",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {},
"email": "mengqiu@stanford.edu"
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": "",
"affiliation": {},
"email": "manning@stanford.edu"
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {},
"email": "tliu@ir.hit.edu.cn"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Different languages contain complementary cues about entities, which can be used to improve Named Entity Recognition (NER) systems. We propose a method that formulates the problem of exploring such signals on unannotated bilingual text as a simple Integer Linear Program, which encourages entity tags to agree via bilingual constraints. Bilingual NER experiments on the large OntoNotes 4.0 Chinese-English corpus show that the proposed method can improve strong baselines for both Chinese and English. In particular, Chinese performance improves by over 5% absolute F1 score. We can then annotate a large amount of bilingual text (80k sentence pairs) using our method, and add it as uptraining data to the original monolingual NER training corpus. The Chinese model retrained on this new combined dataset outperforms the strong baseline by over 3% F1 score.",
"pdf_parse": {
"paper_id": "N13-1006",
"_pdf_hash": "",
"abstract": [
{
"text": "Different languages contain complementary cues about entities, which can be used to improve Named Entity Recognition (NER) systems. We propose a method that formulates the problem of exploring such signals on unannotated bilingual text as a simple Integer Linear Program, which encourages entity tags to agree via bilingual constraints. Bilingual NER experiments on the large OntoNotes 4.0 Chinese-English corpus show that the proposed method can improve strong baselines for both Chinese and English. In particular, Chinese performance improves by over 5% absolute F1 score. We can then annotate a large amount of bilingual text (80k sentence pairs) using our method, and add it as uptraining data to the original monolingual NER training corpus. The Chinese model retrained on this new combined dataset outperforms the strong baseline by over 3% F1 score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Named Entity Recognition (NER) is an important task for many applications, such as information extraction and machine translation. State-of-the-art supervised NER methods require large amounts of annotated data, which are difficult and expensive to produce manually, especially for resource-poor languages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A promising approach for improving NER performance without annotating more data is to exploit unannotated bilingual text (bitext), which is relatively easy to obtain for many language pairs, borrowing from the resources made available by statistical machine translation research. 1 Different languages contain complementary cues about entities. For example, in Figure 1 , the word \"\u672c (Ben)\" is common in Chinese but rarely appears as a translated foreign name. However, its aligned word on the English side (\"Ben\") provides a strong clue that this is a person name. Judicious use of this type of bilingual cue can help to recognize errors a monolingual tagger would make, allowing us to produce more accurately tagged bitext. Each side of the tagged bitext can then be used to expand the original monolingual training dataset, which may lead to higher accuracy in the monolingual taggers.",
"cite_spans": [],
"ref_spans": [
{
"start": 363,
"end": 371,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Previous work such as Li et al. (2012) and Kim et al. (2012) demonstrated that a bilingual corpus annotated with NER labels can be used to improve monolingual tagger performance. But a major drawback of their approaches is the need for manual annotation effort to create such corpora. To avoid this requirement, Burkett et al. (2010) suggested a \"multi-view\" learning scheme based on re-ranking. Noisy output of a \"strong\" tagger is used as training data to learn the parameters of a log-linear re-ranking model with additional bilingual features, simulated by a \"weak\" tagger. The learned parameters are then reused with the \"strong\" tagger to re-rank its own outputs for unseen inputs. Designing good \"weak\" taggers so that they complement the \"view\" of bilingual features in the log-linear re-ranker is crucial to the success of this algorithm. Unfortunately, there is no principled way of designing such \"weak\" taggers.",
"cite_spans": [
{
"start": 22,
"end": 38,
"text": "Li et al. (2012)",
"ref_id": "BIBREF15"
},
{
"start": 43,
"end": 60,
"text": "Kim et al. (2012)",
"ref_id": "BIBREF11"
},
{
"start": 312,
"end": 333,
"text": "Burkett et al. (2010)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we would like to explore a conceptually much simpler idea that can also take advantage of the large amount of unannotated bitext, without complicated machinery. More specifically, we introduce a joint inference method that formulates the bilingual NER tagging problem as an Integer Linear Program (ILP) and solves it during decoding. We propose a set of intuitive and effective bilingual constraints that encourage NER results to agree across the two languages. Experimental results on the OntoNotes 4.0 named entity annotated Chinese-English parallel corpus show that the proposed method can improve the strong Chinese NER baseline by over 5% F1 score and also give small improvements over the English baseline. Moreover, by adding the automatically tagged data to the original NER training corpus and retraining the monolingual model using an uptraining regimen, we can improve the monolingual Chinese NER performance by over 3% F1 score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "NER is a sequence labeling task where we assign a named entity tag to each word in an input sentence. One commonly used tagging scheme is the BIO scheme. The tag B-X (Begin) represents the first word of a named entity of type X, for example, PER (Person) or LOC (Location). The tag I-X (Inside) indicates that a word is part of an entity but is not the first word. The tag O (Outside) is used for all non-entity words. 2 See Figure 1 for an example tagged sentence.",
"cite_spans": [],
"ref_spans": [
{
"start": 417,
"end": 425,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Constraint-based Monolingual NER",
"sec_num": "2"
},
{
"text": "Conditional Random Fields (CRF) (Lafferty et al., 2001 ) is a state-of-the-art sequence labeling model widely used in NER. (Note that the performance of NER is measured at the entity level, not the tag level.) A first-order linear-chain CRF",
"cite_spans": [
{
"start": 32,
"end": 54,
"text": "(Lafferty et al., 2001",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Constraint-based Monolingual NER",
"sec_num": "2"
},
{
"text": "defines the following conditional probability:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constraint-based Monolingual NER",
"sec_num": "2"
},
{
"text": "P_{\\mathrm{CRF}}(y|x) = \\frac{1}{Z(x)} \\prod_i M_i(y_i, y_{i-1} | x) \\quad (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constraint-based Monolingual NER",
"sec_num": "2"
},
{
"text": "where x and y are the input and output sequences, respectively, Z(x) is the partition function, and M i is the clique potential for edge clique i. Decoding in CRF involves finding the most likely output sequence that maximizes this objective, and is commonly done by the Viterbi algorithm. Roth and Yih (2005) proposed an ILP inference algorithm, which can capture more task-specific and global constraints than the vanilla Viterbi algorithm. Our work is inspired by Roth and Yih (2005) . But instead of directly solving the shortest-path problem in the ILP formulation, we re-define the conditional probability as:",
"cite_spans": [
{
"start": 290,
"end": 309,
"text": "Roth and Yih (2005)",
"ref_id": "BIBREF22"
},
{
"start": 467,
"end": 486,
"text": "Roth and Yih (2005)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Constraint-based Monolingual NER",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P_{\\mathrm{MAR}}(y|x) = \\prod_i P(y_i|x)",
"eq_num": "(2)"
}
],
"section": "Constraint-based Monolingual NER",
"sec_num": "2"
},
{
"text": "where P(y_i|x) is the marginal probability given by an underlying CRF model, computed using forward-backward inference. Since the early HMM literature, it has been well known that using the marginal distributions at each position works well, as opposed to Viterbi MAP sequence labeling (M\u00e9rialdo, 1994) . Our experimental results also support this claim, as we will show in Section 6. Our objective is to find an optimal NER tag sequence:",
"cite_spans": [
{
"start": 286,
"end": 302,
"text": "(M\u00e9rialdo, 1994)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Constraint-based Monolingual NER",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "y^{*} = \\arg\\max_{y} P_{\\mathrm{MAR}}(y|x) = \\arg\\max_{y} \\sum_i \\log P(y_i|x)",
"eq_num": "(3)"
}
],
"section": "Constraint-based Monolingual NER",
"sec_num": "2"
},
{
"text": "Then an ILP can be used to solve the inference problem as a classification problem with constraints.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constraint-based Monolingual NER",
"sec_num": "2"
},
{
"text": "The objective function is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constraint-based Monolingual NER",
"sec_num": "2"
},
{
"text": "\\max \\sum_{i=1}^{|x|} \\sum_{y \\in Y} z_i^y \\log P_i^y \\quad (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constraint-based Monolingual NER",
"sec_num": "2"
},
{
"text": "where Y is the set of all possible named entity tags. P_i^y = P(y_i = y|x) is the CRF marginal probability that the i-th word is tagged with y, and z_i^y is an indicator that equals 1 iff the i-th word is tagged y; otherwise, z_i^y is 0. If no constraints are imposed, then Eq. (4) achieves its maximum when all z_i^y are assigned to 1, which violates the condition that each word should be assigned only a single entity tag. We can express this with constraints:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constraint-based Monolingual NER",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\forall i : \\sum_{y \\in Y} z_i^y = 1",
"eq_num": "(5)"
}
],
"section": "Constraint-based Monolingual NER",
"sec_num": "2"
},
{
"text": "After adding these constraints, the probability of the sequence is maximized when each word is assigned the tag with the highest probability. However, some invalid results may still exist. For example, a tag O may be wrongly followed by a tag I-X, although a named entity cannot start with I-X. Therefore, we can add the following constraints:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constraint-based Monolingual NER",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\forall i, \\forall X : z_{i-1}^{\\text{B-X}} + z_{i-1}^{\\text{I-X}} - z_i^{\\text{I-X}} \\geq 0",
"eq_num": "(6)"
}
],
"section": "Constraint-based Monolingual NER",
"sec_num": "2"
},
{
"text": "which specifies that when the i-th word is tagged with I-X (z_i^{\\text{I-X}} = 1), the previous word can only be tagged with B-X or I-X (z_{i-1}^{\\text{B-X}} + z_{i-1}^{\\text{I-X}} \\geq 1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constraint-based Monolingual NER",
"sec_num": "2"
},
{
"text": "This section demonstrates how to jointly perform NER for two languages with bilingual constraints. We assume sentences have been aligned into pairs, and the word alignment between each pair of sentences is also given.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NER with Bilingual Constraints",
"sec_num": "3"
},
{
"text": "We first introduce the simplest hard constraints, i.e., each word alignment pair should have the same named entity tag. For example, in Figure 1 , the Chinese word \"\u7f8e\u8054\u50a8\" was aligned with the English words \"the\", \"Federal\" and \"Reserve\". Therefore, they have the same named entity tag, ORG. 3",
"cite_spans": [],
"ref_spans": [
{
"start": 136,
"end": 144,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Hard Bilingual Constraints",
"sec_num": "3.1"
},
{
"text": "3 The prefixes B- and I- are ignored.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hard Bilingual Constraints",
"sec_num": "3.1"
},
{
"text": "Similarly, \"\u672c\" and \"Ben\" as well as \"\u4f2f\u5357\u514b\" and \"Bernanke\" were all tagged with the tag PER. The objective function for bilingual NER can be expressed as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hard Bilingual Constraints",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\max \\sum_{i=1}^{|x_c|} \\sum_{y \\in Y} z_i^y \\log P_i^y + \\sum_{j=1}^{|x_e|} \\sum_{y \\in Y} z_j^y \\log P_j^y",
"eq_num": "(7)"
}
],
"section": "Hard Bilingual Constraints",
"sec_num": "3.1"
},
{
"text": "where P_i^y and P_j^y are the probabilities that the i-th Chinese word and the j-th English word are tagged with y, respectively. x_c and x_e are the Chinese and English sentences, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hard Bilingual Constraints",
"sec_num": "3.1"
},
{
"text": "Similar to monolingual constrained NER (Section 2), monolingual constraints are added for each language, as shown in Eqs. (8) and (9):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hard Bilingual Constraints",
"sec_num": "3.1"
},
{
"text": "\\forall i : \\sum_{y \\in Y} z_i^y = 1; \\quad \\forall j : \\sum_{y \\in Y} z_j^y = 1 \\quad (8) \\qquad \\forall i, \\forall X : z_i^{\\text{B-X}} + z_i^{\\text{I-X}} - z_{i+1}^{\\text{I-X}} \\geq 0; \\quad \\forall j, \\forall X : z_j^{\\text{B-X}} + z_j^{\\text{I-X}} - z_{j+1}^{\\text{I-X}} \\geq 0 \\quad (9)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hard Bilingual Constraints",
"sec_num": "3.1"
},
{
"text": "Bilingual constraints are added in Eq. (10):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hard Bilingual Constraints",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\forall (i, j) \\in A, \\forall X : z_i^{\\text{B-X}} + z_i^{\\text{I-X}} = z_j^{\\text{B-X}} + z_j^{\\text{I-X}}",
"eq_num": "(10)"
}
],
"section": "Hard Bilingual Constraints",
"sec_num": "3.1"
},
{
"text": "where A = {(i, j)} is the word alignment pair set, i.e., the i-th Chinese word and the j-th English word are aligned together. Chinese word i is tagged with a named entity type X (z_i^{\\text{B-X}} + z_i^{\\text{I-X}} = 1) iff English word j is tagged with X (z_j^{\\text{B-X}} + z_j^{\\text{I-X}} = 1). Therefore, these hard bilingual constraints guarantee that when two words are aligned, they are tagged with the same named entity tag.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hard Bilingual Constraints",
"sec_num": "3.1"
},
{
"text": "However, in practice, aligned word pairs do not always have the same tag because of differences in annotation standards across languages. For example, in Figure 2 (a), the Chinese word \"\u5f00\u53d1\u533a\" is a location. However, it is aligned to the words \"development\" and \"zone\", which are not named entities in English. Word alignment error is another serious problem that can cause violation of the hard constraints. In Figure 2 (b), the English word \"Agency\" is wrongly aligned with the Chinese word \"\u7535 (report)\". Thus, these two words cannot be assigned the same tag.",
"cite_spans": [],
"ref_spans": [
{
"start": 167,
"end": 175,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 421,
"end": 429,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Hard Bilingual Constraints",
"sec_num": "3.1"
},
{
"text": "To address these two problems, we present a probabilistic model for bilingual NER which can lead to an optimization problem with two soft bilingual constraints: 1) allow word-aligned pairs to have different named entity tags; 2) consider word alignment probabilities to reduce the influence of wrong word alignments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hard Bilingual Constraints",
"sec_num": "3.1"
},
{
"text": "Figure 2: (a) Inconsistent named entity standards; (b) word alignment error.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hard Bilingual Constraints",
"sec_num": "3.1"
},
{
"text": "The new probabilistic model for bilingual NER is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Soft Constraints with Tag Uncertainty",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P(y_c, y_e | x_c, x_e, A) = \\frac{P(y_c, y_e, x_c, x_e, A)}{P(x_c, x_e, A)} = \\frac{P(y_c, x_c, x_e, A)}{P(x_c, x_e, A)} \\cdot \\frac{P(y_e, x_c, x_e, A)}{P(x_c, x_e, A)} \\cdot \\frac{P(y_c, y_e, x_c, x_e, A) P(x_c, x_e, A)}{P(y_c, x_c, x_e, A) P(y_e, x_c, x_e, A)}",
"eq_num": "(11)"
}
],
"section": "Soft Constraints with Tag Uncertainty",
"sec_num": "3.2"
},
{
"text": "\\approx P(y_c|x_c) P(y_e|x_e) \\frac{P(y_c, y_e|A)}{P(y_c|A) P(y_e|A)} \\quad (12)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Soft Constraints with Tag Uncertainty",
"sec_num": "3.2"
},
{
"text": "where y_c and y_e denote the Chinese and English named entity output sequences, respectively, and A is the set of word alignment pairs. If we assume that the named entity tag assignments in Chinese depend only on the observed Chinese sentence, then we can drop the A and x_e terms in the first factor of Eq. (11) and arrive at the first factor of Eq. (12); similarly, we can use the same assumption to derive the second factor of Eq. (12) for English. Alternatively, if we assume that the named entity tag assignments depend only on the cross-lingual word associations via word alignment, then we can drop the x_c and x_e terms in the third factor of Eq. (11) and arrive at the third factor of Eq. (12). These factors represent the two major sources of information in the model: monolingual surface observations and cross-lingual word associations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Soft Constraints with Tag Uncertainty",
"sec_num": "3.2"
},
{
"text": "The first two factors of Eq. (12) can be further decomposed into the product of the probabilities of all words in each sentence, as in Eq. (2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Soft Constraints with Tag Uncertainty",
"sec_num": "3.2"
},
{
"text": "Assuming that the tags are independent across different word alignment pairs, the last factor of Eq. (12) can be decomposed into \\prod_{a \\in A} \\lambda_a^{y_c y_e} (13), where y_c and y_e respectively denote the Chinese and English named entity tags in a word alignment pair a. \\lambda^{y_c y_e} = \\frac{P(y_c y_e)}{P(y_c) P(y_e)} is the pointwise mutual information (PMI) score between a Chinese named entity tag y_c and an English named entity tag y_e. If y_c = y_e, then the score will be high; otherwise, the score will be low. A number of methods for calculating these scores are provided at the end of this section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Soft Constraints with Tag Uncertainty",
"sec_num": "3.2"
},
{
"text": "We use ILP to maximize Eq. (12). The new objective function is expressed as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Soft Constraints with Tag Uncertainty",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\max \\sum_{i=1}^{|x_c|} \\sum_{y \\in Y} z_i^y \\log P_i^y + \\sum_{j=1}^{|x_e|} \\sum_{y \\in Y} z_j^y \\log P_j^y + \\sum_{a \\in A} \\sum_{y_c \\in Y} \\sum_{y_e \\in Y} z_a^{y_c y_e} \\log \\lambda_a^{y_c y_e}",
"eq_num": "(14)"
}
],
"section": "Soft Constraints with Tag Uncertainty",
"sec_num": "3.2"
},
{
"text": "where z_a^{y_c y_e} is an indicator that equals 1 iff the Chinese and English named entity tags are y_c and y_e, respectively, for a word alignment pair a; otherwise, z_a^{y_c y_e} is 0. Monolingual constraints such as Eqs. (8) and (9) still need to be added. In addition, one and only one named entity tag pair is possible for a word alignment pair. This condition can be expressed as the following constraints:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Soft Constraints with Tag Uncertainty",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\forall a \\in A : \\sum_{y_c \\in Y} \\sum_{y_e \\in Y} z_a^{y_c y_e} = 1",
"eq_num": "(15)"
}
],
"section": "Soft Constraints with Tag Uncertainty",
"sec_num": "3.2"
},
{
"text": "When the tag pair of a word alignment pair is determined, the corresponding monolingual named entity tags can also be identified. This rule can be expressed by the following constraints:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Soft Constraints with Tag Uncertainty",
"sec_num": "3.2"
},
{
"text": "\\forall a = (i, j) \\in A : z_a^{y_c y_e} \\leq z_i^{y_c}, \\quad z_a^{y_c y_e} \\leq z_j^{y_e} \\quad (16)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Soft Constraints with Tag Uncertainty",
"sec_num": "3.2"
},
{
"text": "Thus, if z_a^{y_c y_e} = 1, then z_i^{y_c} and z_j^{y_e} must both equal 1. Here, the i-th Chinese word and the j-th English word are aligned together.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Soft Constraints with Tag Uncertainty",
"sec_num": "3.2"
},
{
"text": "In contrast to the hard bilingual constraints, inconsistent named entity tags for an aligned word pair are allowed under the soft bilingual constraints, but are given lower \\lambda^{y_c y_e} scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Soft Constraints with Tag Uncertainty",
"sec_num": "3.2"
},
{
"text": "To calculate the \\lambda^{y_c y_e} score, an annotated bilingual NER corpus is consulted. From all word alignment pairs, we count the number of times y_c and y_e occur together (C(y_c y_e)) and separately (C(y_c) and C(y_e)). Afterwards, \\lambda^{y_c y_e} is calculated with maximum likelihood estimation as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Soft Constraints with Tag Uncertainty",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\lambda^{y_c y_e} = \\frac{P(y_c y_e)}{P(y_c) P(y_e)} = \\frac{N \\times C(y_c y_e)}{C(y_c) C(y_e)}",
"eq_num": "(17)"
}
],
"section": "Soft Constraints with Tag Uncertainty",
"sec_num": "3.2"
},
{
"text": "where N is the total number of word alignment pairs. However, in this paper, we assume that no named entity annotated bilingual corpus is available. Thus, the above method is only used as an Oracle. A realistic method for calculating the \\lambda^{y_c y_e} score requires the use of two initial monolingual NER models, such as the baseline CRFs, to predict named entity tags for each language on an unannotated bitext. We then count the statistics mentioned above from this automatically tagged corpus. This method is henceforth referred to as Auto.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Soft Constraints with Tag Uncertainty",
"sec_num": "3.2"
},
{
"text": "A simpler approach is to manually set the value of \\lambda^{y_c y_e}: if y_c = y_e, we assign a larger value to \\lambda^{y_c y_e}; otherwise, we assign an ad-hoc smaller value. In fact, if we set \\lambda^{y_c y_e} = 1 when y_c = y_e and \\lambda^{y_c y_e} = 0 otherwise, then the soft constraints back off to the hard constraints. We refer to this set of soft constraints as Soft-tag.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Soft Constraints with Tag Uncertainty",
"sec_num": "3.2"
},
{
"text": "So far, we have assumed that the word alignment set A is known. In practice, only a word alignment probability P_a for each word pair a is provided. We can set a threshold \u03b8 on P_a to tune the set A: a \u2208 A iff P_a \u2265 \u03b8. This condition can be regarded as a kind of hard word alignment. However, the following problem exists: the smaller \u03b8 is, the noisier the word alignments are; the larger \u03b8 is, the more possible word alignments are lost. To ameliorate this problem, we introduce another set of soft bilingual constraints.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constraints with Alignment Uncertainty",
"sec_num": "3.3"
},
{
"text": "We can re-express Eq. (13) as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constraints with Alignment Uncertainty",
"sec_num": "3.3"
},
{
"text": "\\prod_{a \\in A} \\lambda_a^{y_c y_e} = \\prod_{a \\in \\bar{A}} (\\lambda_a^{y_c y_e})^{I_a} \\quad (18)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constraints with Alignment Uncertainty",
"sec_num": "3.3"
},
{
"text": "where \\bar{A} is the set of all word pairs between the two languages, and I_a = 1 iff P_a \\geq \\theta; otherwise, I_a = 0. We can then replace the hard indicator I_a with the word alignment probability P_a; Eq. (14) is then transformed into the following equation:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constraints with Alignment Uncertainty",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\\max \\sum_{i=1}^{|x_c|} \\sum_{y \\in Y} z_i^y \\log P_i^y + \\sum_{j=1}^{|x_e|} \\sum_{y \\in Y} z_j^y \\log P_j^y + \\sum_{a \\in \\bar{A}} \\sum_{y_c \\in Y} \\sum_{y_e \\in Y} z_a^{y_c y_e} P_a \\log \\lambda_a^{y_c y_e}",
"eq_num": "(19)"
}
],
"section": "Constraints with Alignment Uncertainty",
"sec_num": "3.3"
},
{
"text": "We name the set of constraints above Soft-align, which has the same constraints as Soft-tag, i.e., Eqs. (8), (9), (15) and (16).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Constraints with Alignment Uncertainty",
"sec_num": "3.3"
},
{
"text": "We conduct experiments on the latest OntoNotes 4.0 corpus (LDC2011T03). OntoNotes is a large, manually annotated corpus that contains various text genres and annotations, such as part-of-speech tags, named entity labels, syntactic parse trees, predicate-argument structures and co-references (Hovy et al., 2006) . Aside from English, this corpus also contains several Chinese and Arabic corpora. Some of these corpora contain bilingual parallel documents. We used the Chinese-English parallel corpus with named entity labels as our development and test data. This corpus includes about 400 document pairs (chtb 0001-0325, ectb 1001-1078). We used odd-numbered documents as development data and even-numbered documents as test data. We used all other portions of the named entity annotated corpus as training data for the monolingual systems. There were a total of \u223c660 Chinese documents (\u223c16k sentences) and \u223c1,400 English documents (\u223c39k sentences). OntoNotes annotates 18 named entity types, such as person, location, date and money. In this paper, we selected the four most common named entity types, i.e., PER (Person), LOC (Location), ORG (Organization) and GPE (Geo-Political Entities), and discarded the others.",
"cite_spans": [
{
"start": 291,
"end": 310,
"text": "(Hovy et al., 2006)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},
{
"text": "Table 1: Chinese NER feature templates. 00: 1 (class bias param); 01: w_{i+k}, -1 \\leq k \\leq 1; 02: w_{i+k-1} \\cdot w_{i+k}, 0 \\leq k \\leq 1; 03: shape(w_{i+k}), -4 \\leq k \\leq 4; 04: prefix(w_i, k), 1 \\leq k \\leq 4; 05: prefix(w_{i-1}, k), 1 \\leq k \\leq 4; 06: suffix(w_i, k), 1 \\leq k \\leq 4; 07: suffix(w_{i-1}, k), 1 \\leq k \\leq 4; 08: radical(w_i, k), 1 \\leq k \\leq len(w_i). Unigram features: y_i \\cdot 00-08. Bigram features: y_{i-1} \\cdot y_i \\cdot 00-08.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},
{
"text": "Since the bilingual corpus is only aligned at the document level, we performed sentence alignment using the Champollion Tool Kit (CTK). 4 After removing sentences with no aligned counterpart, a total of 8,249 sentence pairs were retained.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},
{
"text": "We used the BerkeleyAligner 5 to produce word alignments over the sentence-aligned datasets. BerkeleyAligner also gives a posterior probability P_a for each aligned word pair.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},
{
"text": "We used the CRF-based Stanford NER tagger (using Viterbi decoding) as our baseline monolingual NER tool. 6 English features were taken from Finkel et al. (2005) . Table 1 lists the basic features for Chinese NER, where \u2022 means string concatenation and y_i is the named entity tag of the i-th word w_i. Moreover, shape(w_i) is the shape of w_i, such as date or number. prefix/suffix(w_i, k) denotes the k-character prefix/suffix of w_i. radical(w_i, k) denotes the radical of the k-th Chinese character of w_i. 7 len(w_i) is the number of Chinese characters in w_i.",
"cite_spans": [
{
"start": 140,
"end": 160,
"text": "Finkel et al. (2005)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 163,
"end": 170,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},
{
"text": "To make the baseline CRF taggers stronger, we added word clustering features to improve generalization over unseen data for both Chinese and English. Word clustering features have been successfully used in several English tasks, including NER (Miller et al., 2004) and dependency parsing (Koo et al., 2008) . (The Stanford NER toolkit has included our English and Chinese NER implementations; the radical of a Chinese character can be found at www.unicode.org/charts/unihan.html.) To our knowledge, this work is the first use of word clustering features for Chinese NER. A C++ implementation of the Brown word clustering algorithm (Brown et al., 1992) was used to obtain the word clusters (Liang, 2005) . 8 Raw text was obtained from the fifth edition of Chinese Gigaword (LDC2011T13). One million paragraphs from the Xinhua news section were randomly selected, and the Stanford Word Segmenter with the LDC standard was applied to segment the Chinese text into words. 9 About 46 million words were obtained, which were clustered into 1,000 word classes.",
"cite_spans": [
{
"start": 35,
"end": 56,
"text": "(Miller et al., 2004)",
"ref_id": "BIBREF19"
},
{
"start": 80,
"end": 98,
"text": "(Koo et al., 2008)",
"ref_id": "BIBREF12"
},
{
"start": 252,
"end": 272,
"text": "(Brown et al., 1992)",
"ref_id": "BIBREF1"
},
{
"start": 310,
"end": 323,
"text": "(Liang, 2005)",
"ref_id": "BIBREF16"
},
{
"start": 326,
"end": 327,
"text": "8",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},
{
"text": "During development, we tuned the word alignment probability thresholds to find the best value. Figure 3 shows the performance curves. When the word alignment probability threshold \u03b8 is set to 0.9, the hard bilingual constraints perform well for both Chinese and English. But as the thresholds value gets smaller, and more noisy word alignments are introduced, we see the hard bilingual constraints method starts to perform badly.",
"cite_spans": [],
"ref_spans": [
{
"start": 95,
"end": 103,
"text": "Figure 3",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Threshold Tuning",
"sec_num": "5"
},
{
"text": "In Soft-tag setting, where inconsistent tag assignments within aligned word pairs are allowed but penalized, different languages have different optimal threshold values. For example, Chinese has an optimal threshold of 0.7, whereas English has 0.2. Thus, the optimal thresholds for different languages need to be selected with care when Soft-tag is applied in practice.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Threshold Tuning",
"sec_num": "5"
},
{
"text": "Soft-align eliminates the need for careful tuning of word alignment thresholds, and therefore can be more easily used in practice. Experimental results of Soft-align confirms our hypothesis -the performance of both Chinese and English NER systems improves with decreasing threshold. However, we can still improve efficiency by setting a low threshold to prune away very unlikely word alignments. We set the threshold to 0.1 for Soft-align to increase speed, and we observed very minimal performance lost when doing so.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Threshold Tuning",
"sec_num": "5"
},
{
"text": "We also found that automatically estimated bilingual tag PMI scores (Auto) gave comparable results to Oracle. Therefore this technique is effective for computing the PMI scores, avoiding the need of manually annotating named entity bilingual corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Threshold Tuning",
"sec_num": "5"
},
{
"text": "The main results on Chinese and English test sets with the optimal word alignment threshold for each method are shown in Table 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 121,
"end": 128,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Bilingual NER Results",
"sec_num": "6"
},
{
"text": "The CRF-based Chinese NER with and without word clustering features are compared here. The word clustering features significantly (p < 0.01) improved the performance of Chinese NER, 10 giving us a strong Chinese NER baseline. 11 The effectiveness of word clustering for English NER has been proved in previous work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bilingual NER Results",
"sec_num": "6"
},
{
"text": "The performance of ILP with only monolingual constraints is quite comparable with the CRF results, especially on English. The greater ILP performance on English is probably due to more accurate marginal probabilities estimated by the English CRF model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bilingual NER Results",
"sec_num": "6"
},
{
"text": "The ILP model with hard bilingual constraints gives a slight performance improvement on Chinese, but affects performance negatively on English. Once we introduced tagging uncertainties into the Soft-tag bilingual constraints, we see a very sig-nificant (p < 0.01) performance boost on Chinese. This method also improves the recall on English, with a smaller decrease in precision. Overall, it improves English F 1 score by about 0.4%, which is unfortunately not statistically significant.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bilingual NER Results",
"sec_num": "6"
},
{
"text": "Compared with Soft-tag, the final Soft-align method can further improve performance on both Chinese and English. This is likely to be because: 1) Soft-align includes more word alignment pairs, thereby improving recall; and 2) uses probabilities to cut wrong word alignments, thereby improving precision. In particular, compared with the strong CRF baseline, the gain on Chinese side is almost 5.5% in absolute F 1 score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bilingual NER Results",
"sec_num": "6"
},
{
"text": "Decoding/inferenc efficiency of different methods are shown in the last column of Table 2. 12 Compared with Viterbi decoding in CRF, monolingual ILP decoding is about 2.3 times slower. Bilingual ILP decoding, with either hard or soft constraints, is significantly slower than the monolingual methods. The reason is that the number of monolingual ILP constraints doubles, and there are additionally many more bilingual constraints. The difference in speed between the Soft-tag and Soft-align methods is attributed to the difference in number of word alignment pairs.",
"cite_spans": [],
"ref_spans": [
{
"start": 82,
"end": 90,
"text": "Table 2.",
"ref_id": null
}
],
"eq_spans": [],
"section": "Bilingual NER Results",
"sec_num": "6"
},
{
"text": "Since each sentence pair can be decoded indepen- dently, parallelization the decoding process can result in significant speedup.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bilingual NER Results",
"sec_num": "6"
},
{
"text": "The above results show the usefulness of our method in a bilingual setting, where we are presented with sentence aligned data, and are tagging both languages at the same time. To have a greater impact on general monolingual NER systems, we employ a semi-supervised learning setting. First, we tag a large amount of unannotated bitext with our bilingual constraint-based NER tagger. Then we mix the automatically tagged results with the original monolingual Chinese training data to train a new model. Our bitext is derived from the Chinese-English part of the Foreign Broadcast Information Service corpus (FBIS, LDC2003E14). The best performing bilingual model Soft-align with threshold \u03b8 = 0.1 was used under the same experimental setting as described in Section 4 Table 3 shows that the performance of the semisupervised method improves with more additional data. We simply appended these data to the original training data. We also have done the experiments to down weight the additional training data by duplicating the original training data. There was some slight improvements, but not very significant. Finally, when we add 80k sentences, the F 1 score is improved by 3.32%, which is significantly (p < 0.01) better than the baseline, and most of the contribution comes from recall improvement.",
"cite_spans": [],
"ref_spans": [
{
"start": 766,
"end": 773,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Semi-supervised NER Results",
"sec_num": "7"
},
{
"text": "Before the end of experimental section, let us summarize the usage of different kinds of data resources used in our experiments, as shown in Table 4, where and \u00d7 denote whether the corresponding resources are required. In the bilingual case, during training, only the monolingual named entity annotated data (NE-mono) is necessary to train a monolingual NER tagger. During the test, unannotated bitext (Bitext) is required by the word aligner and our bilingual NER tagger. Named entity annotated bitext (NE-bitext) is used to evaluate our bilingual model. In the semi-supervised case, besides the original NE-mono data, the Bitext is used as input to our bilingual NER tagger to product additional training data. To evaluate the final NER model, only NE-mono is needed. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised NER Results",
"sec_num": "7"
},
{
"text": "Previous work explored the use of bilingual corpora to improve existing monolingual analyzers. Huang et al. (2009) proposed methods to improve parsing performance using bilingual parallel corpus. Li et al. (2012) jointly labeled bilingual named entities with a cyclic CRF model, where approximate inference was done using loopy belief propagation. These methods require manually annotated bilingual corpora, which are expensive to construct, and hard to obtain. Kim et al. (2012) proposed a method of labeling bilingual corpora with named entity labels automatically based on Wikipedia. However, this method is restricted to topics covered by Wikipedia. Similar to our work, Burkett et al. (2010) also assumed that annotated bilingual corpora are scarce. Beyond the difference discussed in Section 1, their re-ranking strategy may lose the correct named entity results if they are not included in the top-N outputs. Furthermore, we consider the word alignment probabilities in our method which can reduce the influence of word alignment errors. Finally, we test our method on a large standard publicly available corpus (8,249 sentences), while they used a much smaller (200 sentences) manually annotated bilingual NER corpus for results validation.",
"cite_spans": [
{
"start": 95,
"end": 114,
"text": "Huang et al. (2009)",
"ref_id": "BIBREF10"
},
{
"start": 196,
"end": 212,
"text": "Li et al. (2012)",
"ref_id": "BIBREF15"
},
{
"start": 462,
"end": 479,
"text": "Kim et al. (2012)",
"ref_id": "BIBREF11"
},
{
"start": 675,
"end": 696,
"text": "Burkett et al. (2010)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "8"
},
{
"text": "In addition to bilingual corpora, bilingual dictionaries are also useful resources. Huang and Vogel (2002) and Chen et al. (2010) proposed approaches for extracting bilingual named entity pairs from unannotated bitext, in which verification is based on bilingual named entity dictionaries. However, large-scale bilingual named entity dictionaries are difficult to obtain for most language pairs. Yarowsky and Ngai (2001) proposed a projection method that transforms high-quality analysis results of one language, such as English, into other languages on the basis of word alignment. Das and Petrov (2011) applied the above idea to part-ofspeech tagging with a more complex model. Fu et al. (2011) projected English named entities onto Chinese by carefully designed heuristic rules. Although this type of method does not require manually annotated bilingual corpora or dictionaries, errors in source language results, wrong word alignments and inconsistencies between the languages limit application of this method.",
"cite_spans": [
{
"start": 84,
"end": 106,
"text": "Huang and Vogel (2002)",
"ref_id": "BIBREF9"
},
{
"start": 111,
"end": 129,
"text": "Chen et al. (2010)",
"ref_id": "BIBREF3"
},
{
"start": 396,
"end": 420,
"text": "Yarowsky and Ngai (2001)",
"ref_id": "BIBREF23"
},
{
"start": 583,
"end": 604,
"text": "Das and Petrov (2011)",
"ref_id": "BIBREF4"
},
{
"start": 680,
"end": 696,
"text": "Fu et al. (2011)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "8"
},
{
"text": "Constraint-based monolingual methods by using ILP have been successfully applied to many natural language processing tasks, such as Semantic Role Labeling (Punyakanok et al., 2004) , Dependency Parsing (Martins et al., 2009) and Textual Entailment (Berant et al., 2011) . Zhuang and Zong (2010) proposed a joint inference method for bilingual semantic role labeling with ILP. However, their approach requires training an alignment model with a manually annotated corpus.",
"cite_spans": [
{
"start": 155,
"end": 180,
"text": "(Punyakanok et al., 2004)",
"ref_id": "BIBREF21"
},
{
"start": 202,
"end": 224,
"text": "(Martins et al., 2009)",
"ref_id": "BIBREF17"
},
{
"start": 248,
"end": 269,
"text": "(Berant et al., 2011)",
"ref_id": "BIBREF0"
},
{
"start": 272,
"end": 294,
"text": "Zhuang and Zong (2010)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "8"
},
{
"text": "We proposed a novel ILP based inference algorithm with bilingual constraints for NER. This method can jointly infer bilingual named entities without using any annotated bilingual corpus. We investigate various bilingual constraints: hard and soft constraints. Out empirical study on largescale OntoNotes Chinese-English parallel NER data showed that Soft-align method, which allows inconsistent named entity tags between two aligned words and considers word alignment probabilities, can significantly improve over the performance of a strong Chinese NER baseline. Our work is the first to evaluate performance on a large-scale standard dataset. Finally, we can also improve monolingual Chinese NER performance significantly, by combining the original monolingual training data with new data obtained from bitext tagged by our method. The final ILP-based bilingual NER tagger with soft constraints is publicly available at: github.com/carfly/bi_ilp Future work could apply the bilingual constraintbased method to other tasks, such as part-of-speech tagging and relation extraction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "9"
},
{
"text": "opus.lingfil.uu.se",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "champollion.sourceforge.net 5 code.google.com/p/berkeleyaligner 6 nlp.stanford.edu/software/CRF-NER.shtml,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "github.com/percyliang/brown-cluster 9 nlp.stanford.edu/software/segmenter.shtml",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We use paired bootstrap resampling significance test(Efron and Tibshirani, 1993).11 To the best of our knowledge, there was no performance report of state-of-the-art NER results on the latest OntoNotes dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "CPU: Intel Xeon E5-2660 2.20GHz. And the speed calculation of ILP inference methods exclude the time needed to obtain marginal probabilities from the CRF models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors would like to thank Rob Voigt and the three anonymous reviewers for their valuable comments and suggestions. We gratefully acknowledge the support of the National Natural Science Foundation of China (NSFC) via grant 61133012, the National \"863\" Project via grant 2011AA01A207 and 2012AA011102, the Ministry of Education Research of Social Sciences Youth funded projects via grant 12YJCZH304, the Defense Advanced Research Projects Agency (DARPA) Machine Reading Program under Air Force Research Laboratory (AFRL) prime contract no. FA8750-09-C-0181 and the support of the DARPA Broad Operational Language Translation (BOLT) program through IBM.Any opinions, findings, and conclusion or recommendations expressed in this material are those of the authors and do not necessarily reflect the view of the DARPA, AFRL, or the US government.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Global learning of typed entailment rules",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Goldberger",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "610--619",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Berant, Ido Dagan, and Jacob Goldberger. 2011. Global learning of typed entailment rules. In Proceedings of the 49th Annual Meeting of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, pages 610-619, Portland, Ore- gon, USA, June. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Classbased n-gram models of natural language",
"authors": [
{
"first": "Peter",
"middle": [
"F"
],
"last": "Brown",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"V"
],
"last": "deSouza",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
},
{
"first": "Vincent",
"middle": [
"J"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "Jenifer",
"middle": [
"C"
],
"last": "Lai",
"suffix": ""
}
],
"year": 1992,
"venue": "Comput. Linguist",
"volume": "18",
"issue": "4",
"pages": "467--479",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter F. Brown, Peter V. deSouza, Robert L. Mercer, Vin- cent J. Della Pietra, and Jenifer C. Lai. 1992. Class- based n-gram models of natural language. Comput. Linguist., 18(4):467-479, December.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Learning better monolingual models with unannotated bilingual text",
"authors": [
{
"first": "David",
"middle": [],
"last": "Burkett",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Blitzer",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Fourteenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "46--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Burkett, Slav Petrov, John Blitzer, and Dan Klein. 2010. Learning better monolingual models with unan- notated bilingual text. In Proceedings of the Four- teenth Conference on Computational Natural Lan- guage Learning, pages 46-54, Uppsala, Sweden, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "On jointly recognizing and aligning bilingual named entities",
"authors": [
{
"first": "Yufeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Chengqing",
"middle": [],
"last": "Zong",
"suffix": ""
},
{
"first": "Keh-Yih",
"middle": [],
"last": "Su",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "631--639",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yufeng Chen, Chengqing Zong, and Keh-Yih Su. 2010. On jointly recognizing and aligning bilingual named entities. In Proceedings of the 48th Annual Meet- ing of the Association for Computational Linguistics, pages 631-639, Uppsala, Sweden, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Unsupervised part-of-speech tagging with bilingual graph-based projections",
"authors": [
{
"first": "Dipanjan",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "600--609",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dipanjan Das and Slav Petrov. 2011. Unsupervised part-of-speech tagging with bilingual graph-based pro- jections. In Proceedings of the 49th Annual Meet- ing of the Association for Computational Linguistics: Human Language Technologies, pages 600-609, Port- land, Oregon, USA, June. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "An Introduction to the Bootstrap",
"authors": [
{
"first": "B",
"middle": [],
"last": "Efron",
"suffix": ""
},
{
"first": "R",
"middle": [
"J"
],
"last": "Tibshirani",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Efron and R. J. Tibshirani. 1993. An Introduction to the Bootstrap. Chapman & Hall, New York.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Incorporating non-local information into information extraction systems by gibbs sampling",
"authors": [
{
"first": "Jenny",
"middle": [
"Rose"
],
"last": "Finkel",
"suffix": ""
},
{
"first": "Trond",
"middle": [],
"last": "Grenager",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05)",
"volume": "",
"issue": "",
"pages": "363--370",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jenny Rose Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating non-local information into information extraction systems by gibbs sampling. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL'05), pages 363-370, Ann Arbor, Michigan, June. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Generating chinese named entity data from a parallel corpus",
"authors": [
{
"first": "Ruiji",
"middle": [],
"last": "Fu",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of 5th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "264--272",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ruiji Fu, Bing Qin, and Ting Liu. 2011. Generating chinese named entity data from a parallel corpus. In Proceedings of 5th International Joint Conference on Natural Language Processing, pages 264-272, Chiang Mai, Thailand, November. Asian Federation of Natural Language Processing.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Ontonotes: the 90% solution",
"authors": [
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
},
{
"first": "Mitchell",
"middle": [],
"last": "Marcus",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Lance",
"middle": [],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "Ralph",
"middle": [],
"last": "Weischedel",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Human Language Technology Conference of the NAACL",
"volume": "",
"issue": "",
"pages": "57--60",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eduard Hovy, Mitchell Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2006. Ontonotes: the 90% solution. In Proceedings of the Human Lan- guage Technology Conference of the NAACL, Com- panion Volume: Short Papers, NAACL-Short '06, pages 57-60, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Improved named entity translation and bilingual named entity extraction",
"authors": [
{
"first": "Fei",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Vogel",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 4th IEEE International Conference on Multimodal Interfaces",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fei Huang and Stephan Vogel. 2002. Improved named entity translation and bilingual named entity extrac- tion. In Proceedings of the 4th IEEE International Conference on Multimodal Interfaces, ICMI 2002, Washington, DC, USA. IEEE Computer Society.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Bilingually-constrained (monolingual) shift-reduce parsing",
"authors": [
{
"first": "Liang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Wenbin",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1222--1231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liang Huang, Wenbin Jiang, and Qun Liu. 2009. Bilingually-constrained (monolingual) shift-reduce parsing. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 1222-1231, Singapore, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Multilingual named entity recognition using parallel data and metadata from wikipedia",
"authors": [
{
"first": "Sungchul",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "Hwanjo",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "694--702",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sungchul Kim, Kristina Toutanova, and Hwanjo Yu. 2012. Multilingual named entity recognition using parallel data and metadata from wikipedia. In Pro- ceedings of the 50th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), pages 694-702, Jeju Island, Korea, July. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Simple semi-supervised dependency parsing",
"authors": [
{
"first": "Terry",
"middle": [],
"last": "Koo",
"suffix": ""
},
{
"first": "Xavier",
"middle": [],
"last": "Carreras",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL-08: HLT",
"volume": "",
"issue": "",
"pages": "595--603",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Terry Koo, Xavier Carreras, and Michael Collins. 2008. Simple semi-supervised dependency parsing. In Pro- ceedings of ACL-08: HLT, pages 595-603, Columbus, Ohio, June. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Minimum bayes-risk techniques in automatic speech recognition and statistical machine translation",
"authors": [
{
"first": "Shankar",
"middle": [],
"last": "Kumar",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shankar Kumar. 2005. Minimum bayes-risk techniques in automatic speech recognition and statistical ma- chine translation. Ph.D. thesis, Baltimore, MD, USA. AAI3155633.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data",
"authors": [
{
"first": "John",
"middle": [
"D"
],
"last": "Lafferty",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Fernando",
"middle": [
"C N"
],
"last": "Pereira",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the Eighteenth International Conference on Machine Learning, ICML '01",
"volume": "",
"issue": "",
"pages": "282--289",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Proba- bilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning, ICML '01, pages 282-289, San Francisco, CA, USA. Morgan Kauf- mann Publishers Inc.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Joint bilingual name tagging for parallel corpora",
"authors": [
{
"first": "Qi",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Haibo",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Wen",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 21st ACM International Conference on Information and Knowledge Management (CIKM 2012)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qi Li, Haibo Li, Heng Ji, Wen Wang, Jing Zheng, and Fei Huang. 2012. Joint bilingual name tagging for paral- lel corpora. In Proceedings of the 21st ACM Inter- national Conference on Information and Knowledge Management (CIKM 2012), Honolulu, Hawaii, Octo- ber.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Semi-supervised learning for natural language",
"authors": [
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Percy Liang. 2005. Semi-supervised learning for natural language. Master's thesis, MIT.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Concise integer linear programming formulations for dependency parsing",
"authors": [
{
"first": "Andre",
"middle": [],
"last": "Martins",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Xing",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "",
"issue": "",
"pages": "342--350",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andre Martins, Noah Smith, and Eric Xing. 2009. Con- cise integer linear programming formulations for de- pendency parsing. In Proceedings of the Joint Con- ference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Lan- guage Processing of the AFNLP, pages 342-350, Sun- tec, Singapore, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Tagging english text with a probabilistic model",
"authors": [
{
"first": "Bernard",
"middle": [],
"last": "M\u00e9rialdo",
"suffix": ""
}
],
"year": 1994,
"venue": "Comput. Linguist",
"volume": "20",
"issue": "2",
"pages": "155--171",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bernard M\u00e9rialdo. 1994. Tagging english text with a probabilistic model. Comput. Linguist., 20(2):155- 171.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Name tagging with word clusters and discriminative training",
"authors": [
{
"first": "Scott",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "Jethran",
"middle": [],
"last": "Guinness",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Zamanian",
"suffix": ""
}
],
"year": 2004,
"venue": "HLT-NAACL 2004: Main Proceedings",
"volume": "",
"issue": "",
"pages": "337--342",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Scott Miller, Jethran Guinness, and Alex Zamanian. 2004. Name tagging with word clusters and dis- criminative training. In Daniel Marcu Susan Dumais and Salim Roukos, editors, HLT-NAACL 2004: Main Proceedings, pages 337-342, Boston, Massachusetts, USA, May 2 -May 7. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Uptraining for accurate deterministic question parsing",
"authors": [
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
},
{
"first": "Pi-Chuan",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Ringgaard",
"suffix": ""
},
{
"first": "Hiyan",
"middle": [],
"last": "Alshawi",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "705--713",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Slav Petrov, Pi-Chuan Chang, Michael Ringgaard, and Hiyan Alshawi. 2010. Uptraining for accurate deterministic question parsing. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 705-713, Cambridge, MA, October. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Semantic role labeling via integer linear programming inference",
"authors": [
{
"first": "Vasin",
"middle": [],
"last": "Punyakanok",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
},
{
"first": "Dav",
"middle": [],
"last": "Zimak",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of Coling",
"volume": "",
"issue": "",
"pages": "1346--1352",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vasin Punyakanok, Dan Roth, Wen-tau Yih, and Dav Zimak. 2004. Semantic role labeling via integer linear programming inference. In Proceedings of Coling 2004, pages 1346-1352, Geneva, Switzerland, Aug 23 - Aug 27. COLING.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Integer linear programming inference for conditional random fields",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 22nd international conference on Machine learning, ICML '05",
"volume": "",
"issue": "",
"pages": "736--743",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Roth and Wen-tau Yih. 2005. Integer linear programming inference for conditional random fields. In Proceedings of the 22nd International Conference on Machine Learning, ICML '05, pages 736-743, New York, NY, USA. ACM.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Inducing multilingual POS taggers and NP bracketers via robust projection across aligned corpora",
"authors": [
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
},
{
"first": "Grace",
"middle": [],
"last": "Ngai",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the second meeting of the North American Chapter of the Association for Computational Linguistics on Language technologies, NAACL '01",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Yarowsky and Grace Ngai. 2001. Inducing multilingual POS taggers and NP bracketers via robust projection across aligned corpora. In Proceedings of the Second Meeting of the North American Chapter of the Association for Computational Linguistics on Language Technologies, NAACL '01, pages 1-8, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Joint inference for bilingual semantic role labeling",
"authors": [
{
"first": "Tao",
"middle": [],
"last": "Zhuang",
"suffix": ""
},
{
"first": "Chengqing",
"middle": [],
"last": "Zong",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "304--314",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tao Zhuang and Chengqing Zong. 2010. Joint inference for bilingual semantic role labeling. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 304-314, Cambridge, MA, October. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"text": "Example of NER labels between two word-aligned bilingual parallel sentences.",
"uris": null
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"text": "Errors of hard bilingual constraints method.",
"uris": null
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"text": "\\frac{P(y_c, y_e \\mid A)}{P(y_c \\mid A)\\, P(y_e \\mid A)} = \\prod_{a \\in A} \\frac{P(y_{c_a}, y_{e_a})}{P(y_{c_a})\\, P(y_{e_a})}",
"uris": null
},
"FIGREF4": {
"type_str": "figure",
"num": null,
"text": "Performance curves of different bilingual constraints methods on development set.",
"uris": null
},
"TABREF0": {
"html": null,
"text": "The O chairman O of O the B\u2212ORG Federal I\u2212ORG Reserve I\u2212ORG is O Ben B\u2212PER Bernanke I\u2212PER",
"num": null,
"content": "<table><tr><td>\u7f8e\u8054\u50a8 B\u2212ORG</td><td>\u4e3b\u5e2d O</td><td>\u662f O</td><td>\u672c B\u2212PER</td><td>\u4f2f\u5357\u514b I\u2212PER</td></tr></table>",
"type_str": "table"
},
"TABREF1": {
"html": null,
"text": "Basic features of Chinese NER.",
"num": null,
"content": "<table/>",
"type_str": "table"
},
"TABREF4": {
"html": null,
"text": "Semi-supervised results on Chinese test set.",
"num": null,
"content": "<table/>",
"type_str": "table"
},
"TABREF6": {
"html": null,
"text": "Summary of the data resource usage.",
"num": null,
"content": "<table/>",
"type_str": "table"
}
}
}
}