{
"paper_id": "P06-1027",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:25:12.849701Z"
},
"title": "Semi-Supervised Conditional Random Fields for Improved Sequence Segmentation and Labeling",
"authors": [
{
"first": "Feng",
"middle": [],
"last": "Jiao",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Waterloo",
"location": {}
},
"email": ""
},
{
"first": "Shaojun",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Alberta",
"location": {}
},
"email": ""
},
{
"first": "Chi-Hoon",
"middle": [],
"last": "Lee",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Alberta",
"location": {}
},
"email": ""
},
{
"first": "Russell",
"middle": [],
"last": "Greiner",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Alberta",
"location": {}
},
"email": ""
},
{
"first": "Dale",
"middle": [],
"last": "Schuurmans",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Alberta",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present a new semi-supervised training procedure for conditional random fields (CRFs) that can be used to train sequence segmentors and labelers from a combination of labeled and unlabeled training data. Our approach is based on extending the minimum entropy regularization framework to the structured prediction case, yielding a training objective that combines unlabeled conditional entropy with labeled conditional likelihood. Although the training objective is no longer concave, it can still be used to improve an initial model (e.g. obtained from supervised training) by iterative ascent. We apply our new training algorithm to the problem of identifying gene and protein mentions in biological texts, and show that incorporating unlabeled data improves the performance of the supervised CRF in this case.",
"pdf_parse": {
"paper_id": "P06-1027",
"_pdf_hash": "",
"abstract": [
{
"text": "We present a new semi-supervised training procedure for conditional random fields (CRFs) that can be used to train sequence segmentors and labelers from a combination of labeled and unlabeled training data. Our approach is based on extending the minimum entropy regularization framework to the structured prediction case, yielding a training objective that combines unlabeled conditional entropy with labeled conditional likelihood. Although the training objective is no longer concave, it can still be used to improve an initial model (e.g. obtained from supervised training) by iterative ascent. We apply our new training algorithm to the problem of identifying gene and protein mentions in biological texts, and show that incorporating unlabeled data improves the performance of the supervised CRF in this case.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Semi-supervised learning is often touted as one of the most natural forms of training for language processing tasks, since unlabeled data is so plentiful whereas labeled data is usually quite limited or expensive to obtain. The attractiveness of semi-supervised learning for language tasks is further heightened by the fact that the models learned are large and complex, and generally even thousands of labeled examples can only sparsely cover the parameter space. Moreover, in complex structured prediction tasks, such as parsing or sequence modeling (part-of-speech tagging, word segmentation, named entity recognition, and so on), it is considerably more difficult to obtain labeled training data than for classification tasks (such as document classification), since hand-labeling individual words and word boundaries is much harder than assigning text-level class labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Many approaches have been proposed for semisupervised learning in the past, including: generative models (Castelli and Cover 1996; Cohen and Cozman 2006; Nigam et al. 2000) , self-learning (Celeux and Govaert 1992; Yarowsky 1995) , cotraining (Blum and Mitchell 1998) , informationtheoretic regularization (Corduneanu and Jaakkola 2006; Grandvalet and Bengio 2004) , and graphbased transductive methods (Zhou et al. 2004; Zhou et al. 2005; Zhu et al. 2003) . Unfortunately, these techniques have been developed primarily for single class label classification problems, or class label classification with a structured input (Zhou et al. 2004; Zhou et al. 2005; Zhu et al. 2003) . Although still highly desirable, semi-supervised learning for structured classification problems like sequence segmentation and labeling has not been as widely studied as the other semi-supervised settings mentioned above, with the sole exception of generative models.",
"cite_spans": [
{
"start": 105,
"end": 130,
"text": "(Castelli and Cover 1996;",
"ref_id": "BIBREF4"
},
{
"start": 131,
"end": 153,
"text": "Cohen and Cozman 2006;",
"ref_id": "BIBREF6"
},
{
"start": 154,
"end": 172,
"text": "Nigam et al. 2000)",
"ref_id": "BIBREF16"
},
{
"start": 189,
"end": 214,
"text": "(Celeux and Govaert 1992;",
"ref_id": "BIBREF5"
},
{
"start": 215,
"end": 229,
"text": "Yarowsky 1995)",
"ref_id": "BIBREF20"
},
{
"start": 243,
"end": 267,
"text": "(Blum and Mitchell 1998)",
"ref_id": "BIBREF2"
},
{
"start": 306,
"end": 336,
"text": "(Corduneanu and Jaakkola 2006;",
"ref_id": "BIBREF7"
},
{
"start": 337,
"end": 364,
"text": "Grandvalet and Bengio 2004)",
"ref_id": "BIBREF10"
},
{
"start": 403,
"end": 421,
"text": "(Zhou et al. 2004;",
"ref_id": "BIBREF22"
},
{
"start": 422,
"end": 439,
"text": "Zhou et al. 2005;",
"ref_id": "BIBREF23"
},
{
"start": 440,
"end": 456,
"text": "Zhu et al. 2003)",
"ref_id": "BIBREF24"
},
{
"start": 623,
"end": 641,
"text": "(Zhou et al. 2004;",
"ref_id": "BIBREF22"
},
{
"start": 642,
"end": 659,
"text": "Zhou et al. 2005;",
"ref_id": "BIBREF23"
},
{
"start": 660,
"end": 676,
"text": "Zhu et al. 2003)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "With generative models, it is natural to include unlabeled data using an expectation-maximization approach (Nigam et al. 2000) . However, generative models generally do not achieve the same accuracy as discriminatively trained models, and therefore it is preferable to focus on discriminative approaches. Unfortunately, it is far from obvious how unlabeled training data can be naturally incorporated into a discriminative training criterion. For example, unlabeled data simply cancels from the objective if one attempts to use a traditional conditional likelihood criterion. Nevertheless, recent progress has been made on incorporating unlabeled data in discriminative training procedures. For example, dependencies can be introduced between the labels of nearby instances and thereby have an effect on training (Zhu et al. 2003; Li and McCallum 2005; Altun et al. 2005) . These models are trained to encourage nearby data points to have the same class label, and they can obtain impressive accuracy using a very small amount of labeled data. However, since they model pairwise similarities among data points, most of these approaches require joint inference over the whole data set at test time, which is not practical for large data sets.",
"cite_spans": [
{
"start": 107,
"end": 126,
"text": "(Nigam et al. 2000)",
"ref_id": "BIBREF16"
},
{
"start": 813,
"end": 830,
"text": "(Zhu et al. 2003;",
"ref_id": "BIBREF24"
},
{
"start": 831,
"end": 852,
"text": "Li and McCallum 2005;",
"ref_id": "BIBREF12"
},
{
"start": 853,
"end": 871,
"text": "Altun et al. 2005)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose a new semi-supervised training method for conditional random fields (CRFs) that incorporates both labeled and unlabeled sequence data to estimate a discriminative structured predictor. CRFs are a flexible and powerful model for structured predictors based on undirected graphical models that have been globally conditioned on a set of input covariates (Lafferty et al. 2001) . CRFs have proved to be particularly useful for sequence segmentation and labeling tasks, since, as conditional models of the labels given inputs, they relax the independence assumptions made by traditional generative models like hidden Markov models. As such, CRFs provide additional flexibility for using arbitrary overlapping features of the input sequence to define a structured conditional model over the output sequence, while maintaining two advantages: first, efficient dynamic programming can be used for inference in both classification and training, and second, the training objective is concave in the model parameters, which permits global optimization.",
"cite_spans": [
{
"start": 378,
"end": 400,
"text": "(Lafferty et al. 2001)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To obtain a new semi-supervised training algorithm for CRFs, we extend the minimum entropy regularization framework of Grandvalet and Bengio (2004) to structured predictors. The resulting objective combines the likelihood of the CRF on labeled training data with its conditional entropy on unlabeled training data. Unfortunately, the maximization objective is no longer concave, but we can still use it to effectively improve an initial supervised model. To develop an effective training procedure, we first show how the derivative of the new objective can be computed from the covariance matrix of the features on the unlabeled data (combined with the labeled conditional likelihood). This relationship facilitates the development of an efficient dynamic programming algorithm for computing the gradient, and thereby allows us to perform efficient iterative ascent for training. We apply our new training technique to the problem of sequence labeling and segmentation, and demonstrate it specifically on the problem of identifying gene and protein mentions in biological texts. Our results show the advantage of semi-supervised learning over the standard supervised algorithm.",
"cite_spans": [
{
"start": 119,
"end": 147,
"text": "Grandvalet and Bengio (2004)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In what follows, we use the same notation as (Lafferty et al. 2001). Assume we have a set of labeled examples and a set of unlabeled examples,",
"cite_spans": [
{
"start": 45,
"end": 66,
"text": "(Lafferty et al. 2001",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised CRF training",
"sec_num": "2"
},
{
"text": "$D^l = ((x^{(1)}, y^{(1)}), \\ldots, (x^{(N)}, y^{(N)}))$ and $D^u = (x^{(N+1)}, \\ldots, x^{(M)})$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised CRF training",
"sec_num": "2"
},
{
"text": ". We would like to build a CRF model",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised CRF training",
"sec_num": "2"
},
{
"text": "$p_\\theta(y|x) = \\frac{1}{Z_\\theta(x)} \\exp\\left( \\sum_{k=1}^K \\theta_k f_k(x, y) \\right) = \\frac{1}{Z_\\theta(x)} \\exp\\left( \\langle \\theta, f(x, y) \\rangle \\right)$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised CRF training",
"sec_num": "2"
},
{
"text": "over sequential input and output data $(x, y)$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised CRF training",
"sec_num": "2"
},
{
"text": ", where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised CRF training",
"sec_num": "2"
},
{
"text": "$\\theta = (\\theta_1, \\ldots, \\theta_K)^\\top$, $f(x, y) = (f_1(x, y), \\ldots, f_K(x, y))^\\top$ and $Z_\\theta(x) = \\sum_y \\exp\\left( \\langle \\theta, f(x, y) \\rangle \\right)$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised CRF training",
"sec_num": "2"
},
{
"text": "Our goal is to learn such a model from the combined set of labeled and unlabeled examples,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised CRF training",
"sec_num": "2"
},
{
"text": "$D^l \\cup D^u$.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised CRF training",
"sec_num": "2"
},
{
"text": "The standard supervised CRF training procedure is based upon maximizing the log conditional likelihood of the labeled examples in $D^l$, given by (1). For semi-supervised training, we propose the extended objective $RL(\\theta) = CL(\\theta) + \\gamma \\sum_{j=N+1}^M \\sum_y p_\\theta(y|x^{(j)}) \\log p_\\theta(y|x^{(j)})$ (2), where the first term is the penalized log conditional likelihood of the labeled data under the CRF, (1), and the second term is the negative conditional entropy of the CRF on the unlabeled data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised CRF training",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "CL(\\theta) = \\sum_{i=1}^N \\log p_\\theta(y^{(i)}|x^{(i)}) - U(\\theta)",
"eq_num": "(1)"
}
],
"section": "Semi-supervised CRF training",
"sec_num": "2"
},
{
"text": "Here, $\\gamma$ is a tradeoff parameter that controls the influence of the unlabeled data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised CRF training",
"sec_num": "2"
},
{
"text": "This approach resembles that taken by (Grandvalet and Bengio 2004) for single variable classification, but here applied to structured CRF training. The motivation is that minimizing conditional entropy over unlabeled data encourages the algorithm to find putative labelings for the unlabeled data that are mutually reinforcing with the supervised labels; that is, greater certainty on the putative labelings coincides with greater conditional likelihood on the supervised labels, and vice versa. For a single classification variable this criterion has been shown to effectively partition unlabeled data into clusters (Grandvalet and Bengio 2004; Roberts et al. 2000) .",
"cite_spans": [
{
"start": 617,
"end": 645,
"text": "(Grandvalet and Bengio 2004;",
"ref_id": "BIBREF10"
},
{
"start": 646,
"end": 666,
"text": "Roberts et al. 2000)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised CRF training",
"sec_num": "2"
},
{
"text": "To motivate the approach in more detail, consider the overlap between the probability distribution over a label sequence and the empirical distribution of",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised CRF training",
"sec_num": "2"
},
{
"text": "$\\tilde{p}(x)$ on the unlabeled data $D^u$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised CRF training",
"sec_num": "2"
},
{
"text": ". The overlap can be measured by the Kullback-Leibler divergence",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised CRF training",
"sec_num": "2"
},
{
"text": "$D\\left( p_\\theta(y|x)\\, \\tilde{p}(x) \\,\\|\\, \\tilde{p}(x) \\right)$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised CRF training",
"sec_num": "2"
},
{
"text": ". It is well known that Kullback-Leibler divergence (Cover and Thomas 1991) is positive and increases as the overlap between the two distributions decreases. In other words, maximizing Kullback-Leibler divergence implies that the overlap between two distributions is minimized. The total overlap over all possible label sequences can be defined as",
"cite_spans": [
{
"start": 52,
"end": 75,
"text": "(Cover and Thomas 1991)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised CRF training",
"sec_num": "2"
},
{
"text": "$\\sum_y D\\left( p_\\theta(y|x)\\, \\tilde{p}(x) \\,\\|\\, \\tilde{p}(x) \\right) = \\sum_y \\sum_x p_\\theta(y|x)\\, \\tilde{p}(x) \\log \\frac{p_\\theta(y|x)\\, \\tilde{p}(x)}{\\tilde{p}(x)} = \\sum_x \\tilde{p}(x) \\sum_y p_\\theta(y|x) \\log p_\\theta(y|x)$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised CRF training",
"sec_num": "2"
},
{
"text": "which motivates the negative entropy term in (2). The combined training objective (2) exploits unlabeled data to improve the CRF model, as we will see. However, one drawback with this approach is that the entropy regularization term is not concave. To see why, note that the entropy regularizer can be seen as a composition,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised CRF training",
"sec_num": "2"
},
{
"text": "$R(\\theta) = h(g(\\theta))$, where $h: \\mathbb{R}^{|\\mathcal{Y}|} \\rightarrow \\mathbb{R}$, $h(u) = \\sum_v u_v \\log u_v$, and $g: \\mathbb{R}^K \\rightarrow \\mathbb{R}^{|\\mathcal{Y}|}$, $g_y(\\theta) = \\exp\\left( \\sum_{k=1}^K \\theta_k f_k(x, y) \\right)$. For scalar $\\theta$, the second derivative of a composition, $h(g(\\theta))$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised CRF training",
"sec_num": "2"
},
{
"text": ", is given by (Boyd and Vandenberghe 2004) ",
"cite_spans": [
{
"start": 14,
"end": 42,
"text": "(Boyd and Vandenberghe 2004)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised CRF training",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "(h \\circ g)''(\\theta) = g'(\\theta)^\\top \\nabla^2 h(g(\\theta))\\, g'(\\theta) + \\nabla h(g(\\theta))^\\top g''(\\theta)",
"eq_num": "(3)"
}
],
"section": "Semi-supervised CRF training",
"sec_num": "2"
},
{
"text": "As (2) is not concave, many of the standard global maximization techniques do not apply. However, one can still use unlabeled data to improve a supervised CRF via iterative ascent. To derive an efficient iterative ascent procedure, we need to compute the gradient of (2) with respect to the parameters $\\theta$. Taking the derivative of the objective function (2) with respect to $\\theta$ yields (see Appendix A for the derivation)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An efficient training procedure",
"sec_num": "3"
},
{
"text": "$\\frac{\\partial RL(\\theta)}{\\partial \\theta} = \\sum_{i=1}^N \\left( f(x^{(i)}, y^{(i)}) - \\sum_y p_\\theta(y|x^{(i)})\\, f(x^{(i)}, y) \\right) - \\frac{\\partial U(\\theta)}{\\partial \\theta} + \\gamma \\sum_{j=N+1}^M \\mathrm{cov}_{p_\\theta(y|x^{(j)})}\\left[ f(x^{(j)}, y) \\right] \\theta$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An efficient training procedure",
"sec_num": "3"
},
{
"text": "The first three items on the right hand side are just the standard gradient of the CRF objective, $\\partial CL(\\theta) / \\partial \\theta$ (Lafferty et al. 2001) , and the final item is the gradient of the entropy regularizer (the derivation of which is given in Appendix A).",
"cite_spans": [
{
"start": 110,
"end": 132,
"text": "(Lafferty et al. 2001)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "An efficient training procedure",
"sec_num": "3"
},
{
"text": "Here,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An efficient training procedure",
"sec_num": "3"
},
{
"text": "$\\mathrm{cov}_{p_\\theta(y|x^{(j)})}\\left[ f(x^{(j)}, y) \\right]$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An efficient training procedure",
"sec_num": "3"
},
{
"text": "is the conditional covariance matrix of the features, ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An efficient training procedure",
"sec_num": "3"
},
{
"text": "$f_u(x,y)$, whose $(u, k)$ entry is given by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An efficient training procedure",
"sec_num": "3"
},
{
"text": "$\\mathrm{cov}_{p_\\theta(y|x)}\\left[ f_u(x,y), f_k(x,y) \\right] = E_{p_\\theta(y|x)}\\left[ f_u(x,y)\\, f_k(x,y) \\right] - E_{p_\\theta(y|x)}\\left[ f_u(x,y) \\right] E_{p_\\theta(y|x)}\\left[ f_k(x,y) \\right] = \\sum_y p_\\theta(y|x)\\, f_u(x,y)\\, f_k(x,y) - \\left( \\sum_y p_\\theta(y|x)\\, f_u(x,y) \\right) \\left( \\sum_y p_\\theta(y|x)\\, f_k(x,y) \\right) \\quad (4)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An efficient training procedure",
"sec_num": "3"
},
{
"text": "To efficiently calculate the gradient, we need to be able to efficiently compute the expectations with respect to $y$ in (3) and (4). However, this can pose a challenge in general, because there are exponentially many values for $y$. Techniques for computing the linear feature expectations in (3) are already well known if $y$ is sufficiently structured (e.g.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "An efficient training procedure",
"sec_num": "3"
},
{
"text": "forms a Markov chain) (Lafferty et al. 2001) . However, we now have to develop efficient techniques for computing the quadratic feature expectations in (4).",
"cite_spans": [
{
"start": 22,
"end": 44,
"text": "(Lafferty et al. 2001)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "$",
"sec_num": null
},
{
"text": "For the quadratic feature expectations, first note that the diagonal terms, $u = k$, are straightforward: since each feature is an indicator, we have that $f_u(x,y)\\, f_u(x,y) = f_u(x,y)$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "$",
"sec_num": null
},
{
"text": ", and therefore the diagonal terms in the conditional covariance are just linear feature expectations",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "$",
"sec_num": null
},
{
"text": "$E_{p_\\theta(y|x^{(j)})}\\left[ f_u(x,y)^2 \\right] = E_{p_\\theta(y|x^{(j)})}\\left[ f_u(x,y) \\right]$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "$",
"sec_num": null
},
{
"text": "To compute the off-diagonal terms efficiently, assume the label sequence forms a linear chain, so that each feature is either an edge feature or a vertex feature, written $f(y_{t-1}, y_t, x, t)$ and $g(y_t, x, t)$,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "$",
"sec_num": null
},
{
"text": "respectively. Following (Lafferty et al. 2001) , we also add special start and stop states,",
"cite_spans": [
{
"start": 24,
"end": 46,
"text": "(Lafferty et al. 2001)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "$",
"sec_num": null
},
{
"text": "$y_0 = \\text{start}$ and $y_{n+1} =$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "$",
"sec_num": null
},
{
"text": "stop. The conditional probability of a label sequence can now be expressed concisely in a matrix form. For each position $t$ in the observation sequence $x$, define the",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "$",
"sec_num": null
},
{
"text": "$|\\mathcal{Y}| \\times |\\mathcal{Y}|$ matrix random variable $M_t(x) = \\left[ M_t(y', y|x) \\right]$ by $M_t(y', y|x) = \\exp\\left( \\Lambda_t(y', y|x) \\right)$, where $\\Lambda_t(y', y|x) = \\sum_k \\lambda_k f_k(y', y, x, t) + \\sum_k \\mu_k g_k(y, x, t)$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "$",
"sec_num": null
},
{
"text": "$\\alpha_0(y|x) = 1$ if $y = \\text{start}$ and $0$ otherwise, with the recurrence $\\alpha_t(x) = \\alpha_{t-1}(x)\\, M_t(x)$. Similarly, the backward vectors $\\beta_t(x)$ are given by $\\beta_{n+1}(y|x) = 1$ if $y = \\text{stop}$ and $0$ otherwise, and $\\beta_t(x) = M_{t+1}(x)\\, \\beta_{t+1}(x)$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "$",
"sec_num": null
},
{
"text": "With these definitions, the expectation of the product of each pair of feature functions, $(f_u(x,y), f_k(x,y))$, $(f_u(x,y), g_k(x,y))$ and $(g_u(x,y), g_k(x,y))$, for $u \\neq k$, can be computed from the forward vectors, the backward vectors, and the intermediate matrix products $M_{s+1:t-1}(x) = M_{s+1}(x) \\cdots M_{t-1}(x)$ taken between the two positions $s < t$ at which the features occur.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "$",
"sec_num": null
},
{
"text": "Then the quadratic feature expectations can be computed by the following recursion, where the two double sums in each expectation correspond to the two cases depending on which feature occurs first ($s$ occurring before $t$, or $t$ occurring before $s$). For a pair of edge features, $E_{p_\\theta(y|x)}\\left[ f_u(x,y)\\, f_k(x,y) \\right] = \\sum_{s < t} \\sum_{y', y} \\sum_{y'', y'''} f_u(y', y, x, s)\\, f_k(y'', y''', x, t)\\, \\alpha_{s-1}(y'|x)\\, M_s(y', y|x) \\left[ M_{s+1:t-1}(x) \\right]_{y, y''} M_t(y'', y'''|x)\\, \\beta_t(y'''|x) / Z_\\theta(x)$ plus the symmetric sum over $t < s$; the expectations for the pairs $(f_u, g_k)$ and $(g_u, g_k)$ follow the same pattern, with each vertex feature contributing a factor $g(y, x, t)$ at a single position.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "$",
"sec_num": null
},
{
"text": "The computation of these expectations can be organized in a trellis, as illustrated in Figure 1 . Once we obtain the gradient of the objective function (2), we use limited-memory L-BFGS, a quasi-Newton optimization algorithm (McCallum 2002; Nocedal and Wright 2000) , to find a local maximum, with the initial value set to the optimal solution of the supervised CRF on the labeled data.",
"cite_spans": [
{
"start": 225,
"end": 240,
"text": "(McCallum 2002;",
"ref_id": "BIBREF13"
},
{
"start": 241,
"end": 265,
"text": "Nocedal and Wright 2000)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 87,
"end": 95,
"text": "Figure 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "$",
"sec_num": null
},
{
"text": "The time and space complexity of the semi-supervised CRF training procedure is greater than that of standard supervised CRF training, but nevertheless remains a small degree polynomial in the size of the training data; the extra cost is the time needed to compute the quadratic feature expectations. However, the space requirements of the two training methods are the same. That is, even though the covariance matrix has size $K \\times K$, where $K$ is the number of features, there is never any need to store the entire matrix in memory. Rather, since we only need to compute the product of the covariance with $\\theta$, the calculation can be performed iteratively without using extra space beyond that already required by supervised CRF training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Time and space complexity",
"sec_num": "4"
},
{
"text": "We have developed our new semi-supervised training procedure to address the problem of information extraction from biomedical text, which has received significant attention in the past few years.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying gene and protein mentions",
"sec_num": "5"
},
{
"text": "We have specifically focused on the problem of identifying explicit mentions of gene and protein names (McDonald and Pereira 2005) . Recently, McDonald and Pereira (2005) have obtained interesting results on this problem by using a standard supervised CRF approach. However, our contention is that stronger results could be obtained in this domain by exploiting a large corpus of unannotated biomedical text to improve the quality of the predictions, which we now show. Given a biomedical text, the task of identifying gene mentions can be interpreted as a tagging task, where each word in the text can be labeled with a tag that indicates whether it is the beginning of a gene mention (B), the continuation of a gene mention (I), or outside of any gene mention (O). To compare the performance of different taggers learned by different mechanisms, one can measure the precision, recall and F-measure, given by precision $= \\frac{\\#\\,\\text{correct gene mentions}}{\\#\\,\\text{predicted gene mentions}}$, recall $= \\frac{\\#\\,\\text{correct gene mentions}}{\\#\\,\\text{true gene mentions}}$ and F-measure $= \\frac{2 \\times \\text{precision} \\times \\text{recall}}{\\text{precision} + \\text{recall}}$. In our evaluation, we compared the proposed semi-supervised learning approach to the state-of-the-art supervised CRF of McDonald and Pereira (2005) , and also to self-training (Celeux and Govaert 1992; Yarowsky 1995) . First we evaluated the performance of the semi-supervised CRF in detail, by varying the ratio between the amount of labeled and unlabeled data, and also varying the tradeoff parameter $\\gamma$. We chose a labeled training set $A$ consisting of 5448 words, and considered alternative unlabeled training sets, $B$ (5210 words), $C$ (10,208 words), and",
"cite_spans": [
{
"start": 103,
"end": 130,
"text": "(McDonald and Pereira 2005)",
"ref_id": "BIBREF15"
},
{
"start": 133,
"end": 170,
"text": "Recently, McDonald and Pereira (2005)",
"ref_id": null
},
{
"start": 1028,
"end": 1055,
"text": "McDonald and Pereira (2005)",
"ref_id": "BIBREF15"
},
{
"start": 1084,
"end": 1109,
"text": "(Celeux and Govaert 1992;",
"ref_id": "BIBREF5"
},
{
"start": 1110,
"end": 1124,
"text": "Yarowsky 1995)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Identifying gene and protein mentions",
"sec_num": "5"
},
{
"text": "$D$ (25,145 words), consisting of the same, 2 times and 5 times as many sentences as $A$, respectively. All of these sets were disjoint and selected randomly from the full corpus, which consists of 184,903 words in total. To determine sensitivity to the parameter $\\gamma$ we examined a range of discrete values",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u00a2",
"sec_num": null
},
{
"text": ". In our first experiment, we train the CRF models using labeled set $A$ and unlabeled sets $B$, $C$ and $D$",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u00a2",
"sec_num": null
},
{
"text": "respectively. We then test the performance on the sets $B$, $C$ and $D$ respectively. The results of our evaluation are shown in Table 1 . The performance of the supervised CRF algorithm, trained only on the labeled set $A$, is given in the first row of Table 1 . The results of this experiment demonstrate quite clearly that in most cases the semi-supervised CRF obtains higher precision, recall and F-measure than the fully supervised CRF, yielding a 20% improvement in the best case.",
"cite_spans": [],
"ref_spans": [
{
"start": 118,
"end": 125,
"text": "Table 1",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "\u00a2",
"sec_num": null
},
{
"text": "In our second experiment, again we train the CRF models using labeled set $A$ and unlabeled sets $B$, $C$ and $D$ respectively, with increasing values of $\\gamma$, but we test the performance on the held-out set $E$, which is the full corpus minus the labeled set $A$ and the unlabeled sets $B$, $C$ and $D$. The results of our evaluation are shown in Table 2 and Figure 2 . The blue line in Figure 2 is the result of the supervised CRF algorithm, trained only on the labeled set $A$. In particular, by using the supervised CRF model, the system predicted 3334 out of 7472 gene mentions, of which 2435 were correct, resulting in a precision of 0.73, recall of 0.33 and F-measure of 0.45. The other curves are those of the semi-supervised CRFs.",
"cite_spans": [],
"ref_spans": [
{
"start": 316,
"end": 323,
"text": "Table 2",
"ref_id": "TABREF9"
},
{
"start": 328,
"end": 336,
"text": "Figure 2",
"ref_id": "FIGREF4"
},
{
"start": 356,
"end": 364,
"text": "Figure 2",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "\u00a2",
"sec_num": null
},
{
"text": "The results of this experiment demonstrate quite clearly that the semi-supervised CRFs simultane- ously increase both the number of predicted gene mentions and the number of correct predictions, thus the precision remains almost the same as the supervised CRF, and the recall increases significantly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u00a2",
"sec_num": null
},
{
"text": "Both experiments as illustrated in Figure 2 and Tables 1 and 2 show that clearly better results are obtained by incorporating additional unlabeled training data, even when evaluating on disjoint testing data ( Figure 2 ). The performance of the semi-supervised CRF is not overly sensitive to the tradeoff parameter f , except that f cannot be set too large.",
"cite_spans": [],
"ref_spans": [
{
"start": 35,
"end": 43,
"text": "Figure 2",
"ref_id": "FIGREF4"
},
{
"start": 48,
"end": 62,
"text": "Tables 1 and 2",
"ref_id": "TABREF7"
},
{
"start": 210,
"end": 218,
"text": "Figure 2",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "\u00a2",
"sec_num": null
},
{
"text": "For completeness, we also compared our results to the self-learning algorithm, which has commonly been referred to as bootstrapping in natural language processing and originally popularized by the work of Yarowsky in word sense disambiguation (Abney 2004; Yarowsky 1995) . In fact, similar ideas have been developed in pattern recognition under the name of the decision-directed algorithm (Duda and Hart 1973) , and also traced back to 1970s in the EM literature (Celeux and Govaert 1992) . The basic algorithm works as follows: We implemented this self training approach and tried it in our experiments. Unfortunately, we were not able to obtain any improvements over the standard supervised CRF with self-learning, using the sets",
"cite_spans": [
{
"start": 243,
"end": 255,
"text": "(Abney 2004;",
"ref_id": "BIBREF0"
},
{
"start": 256,
"end": 270,
"text": "Yarowsky 1995)",
"ref_id": "BIBREF20"
},
{
"start": 389,
"end": 409,
"text": "(Duda and Hart 1973)",
"ref_id": "BIBREF9"
},
{
"start": 463,
"end": 488,
"text": "(Celeux and Govaert 1992)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison to self-training",
"sec_num": "5.1"
},
{
"text": ". The semi-supervised CRF remains the best of the approaches we have tried on this problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison to self-training",
"sec_num": "5.1"
},
{
"text": "We have presented a new semi-supervised training algorithm for CRFs, based on extending minimum conditional entropy regularization to the structured prediction case. Our approach is motivated by the information-theoretic argument (Grandvalet and Bengio 2004; Roberts et al. 2000) that unlabeled examples can provide the most benefit when classes have small overlap. An iterative ascent optimization procedure was developed for this new criterion, which exploits a nested dynamic programming approach to efficiently compute the covariance matrix of the features.",
"cite_spans": [
{
"start": 230,
"end": 258,
"text": "(Grandvalet and Bengio 2004;",
"ref_id": "BIBREF10"
},
{
"start": 259,
"end": 279,
"text": "Roberts et al. 2000)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and further directions",
"sec_num": "6"
},
{
"text": "We applied our new approach to the problem of identifying gene name occurrences in biological text, exploiting the availability of auxiliary unlabeled data to improve the performance of the state of the art supervised CRF approach in this domain. Our semi-supervised CRF approach shares all of the benefits of the standard CRF training, including the ability to exploit arbitrary features of the inputs, while obtaining improved accuracy through the use of unlabeled data. The main drawback is that training time is increased because of the extra nested loop needed to calculate feature covariances. Nevertheless, the algorithm is sufficiently efficient to be trained on unlabeled data sets that yield a notable improvement in classification accuracy over standard supervised training. To further accelerate the training process of our semi-supervised CRFs, we may apply stochastic gradient optimization method with adaptive gain adjustment as proposed by Vishwanathan et al. (2006) .",
"cite_spans": [
{
"start": 956,
"end": 982,
"text": "Vishwanathan et al. (2006)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and further directions",
"sec_num": "6"
},
{
"text": "We wish to show that ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Deriving the gradient of the entropy",
"sec_num": null
}
],
"back_matter": [
{
"text": "Research supported by Genome Alberta, Genome Canada, and the Alberta Ingenuity Centre for Machine Learning.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "In the vector form, this can be written as (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "annex",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Understanding the Yarowsky algorithm",
"authors": [
{
"first": "S",
"middle": [],
"last": "Abney",
"suffix": ""
}
],
"year": 2004,
"venue": "Computational Linguistics",
"volume": "30",
"issue": "3",
"pages": "365--395",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Abney. (2004). Understanding the Yarowsky algorithm. Computational Linguistics, 30(3):365-395.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Maximum margin semi-supervised learning for structured variables",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Altun",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Mcallester",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Belkin",
"suffix": ""
}
],
"year": 2005,
"venue": "Advances in Neural Information Processing Systems 18",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Altun, D. McAllester and M. Belkin. (2005). Maximum margin semi-supervised learning for structured variables. Advances in Neural Information Processing Systems 18.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Combining labeled and unlabeled data with co-training",
"authors": [
{
"first": "A",
"middle": [],
"last": "Blum",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the Workshop on Computational Learning Theory",
"volume": "",
"issue": "",
"pages": "92--100",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Blum and T. Mitchell. (1998). Combining labeled and unlabeled data with co-training. Proceedings of the Work- shop on Computational Learning Theory, 92-100.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Convex Optimization",
"authors": [
{
"first": "S",
"middle": [],
"last": "Boyd",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Vandenberghe",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Boyd and L. Vandenberghe. (2004). Convex Optimization. Cambridge University Press.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The relative value of labeled and unlabeled samples in pattern recognition with an unknown mixing parameter",
"authors": [
{
"first": "V",
"middle": [],
"last": "Castelli",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Cover",
"suffix": ""
}
],
"year": 1996,
"venue": "IEEE Trans. on Information Theory",
"volume": "42",
"issue": "6",
"pages": "2102--2117",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V. Castelli and T. Cover. (1996). The relative value of la- beled and unlabeled samples in pattern recognition with an unknown mixing parameter. IEEE Trans. on Informa- tion Theory, 42(6):2102-2117.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A classification EM algorithm for clustering and two stochastic versions",
"authors": [
{
"first": "G",
"middle": [],
"last": "Celeux",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Govaert",
"suffix": ""
}
],
"year": 1992,
"venue": "Computational Statistics and Data Analysis",
"volume": "14",
"issue": "",
"pages": "315--332",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Celeux and G. Govaert. (1992). A classification EM al- gorithm for clustering and two stochastic versions. Com- putational Statistics and Data Analysis, 14:315-332.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Risks of semi-supervised learning. Semi-Supervised Learning",
"authors": [
{
"first": "I",
"middle": [],
"last": "Cohen",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Cozman",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "55--70",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "I. Cohen and F. Cozman. (2006). Risks of semi-supervised learning. Semi-Supervised Learning, O. Chapelle, B. Scholk\u00f6pf and A. Zien, (Editors), 55-70, MIT Press.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Data dependent regularization. Semi-Supervised Learning",
"authors": [
{
"first": "A",
"middle": [],
"last": "Corduneanu",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Jaakkola",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "163--182",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Corduneanu and T. Jaakkola. (2006). Data dependent regularization. Semi-Supervised Learning, O. Chapelle, B. Scholk\u00f6pf and A. Zien, (Editors), 163-182, MIT Press.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Elements of Information Theory",
"authors": [
{
"first": "T",
"middle": [],
"last": "Cover",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Thomas",
"suffix": ""
}
],
"year": 1991,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Cover and J. Thomas, (1991). Elements of Information Theory, John Wiley & Sons.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Pattern Classification and Scene Analysis",
"authors": [
{
"first": "R",
"middle": [],
"last": "Duda",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Hart",
"suffix": ""
}
],
"year": 1973,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Duda and P. Hart. (1973). Pattern Classification and Scene Analysis, John Wiley & Sons.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Semi-supervised learning by entropy minimization",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Grandvalet",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2004,
"venue": "Advances in Neural Information Processing Systems",
"volume": "17",
"issue": "",
"pages": "529--536",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Grandvalet and Y. Bengio. (2004). Semi-supervised learn- ing by entropy minimization, Advances in Neural Infor- mation Processing Systems, 17:529-536.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Conditional random fields: probabilistic models for segmenting and labeling sequence data",
"authors": [
{
"first": "J",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 18th International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "282--289",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Lafferty, A. McCallum and F. Pereira. (2001). Conditional random fields: probabilistic models for segmenting and labeling sequence data. Proceedings of the 18th Interna- tional Conference on Machine Learning, 282-289.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Semi-supervised sequence modeling with syntactic topic models",
"authors": [
{
"first": "W",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of Twentieth National Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "813--818",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. Li and A. McCallum. (2005). Semi-supervised sequence modeling with syntactic topic models. Proceedings of Twentieth National Conference on Artificial Intelligence, 813-818.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "MALLET: A machine learning for language toolkit",
"authors": [
{
"first": "A",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2002,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. McCallum. (2002). MALLET: A machine learning for language toolkit. [http://mallet.cs.umass.edu]",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Conditional random field biomedical entity tagger",
"authors": [
{
"first": "R",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Lerman",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Jin",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. McDonald, K. Lerman and Y. Jin. (2005). Con- ditional random field biomedical entity tagger. [http://www.seas.upenn.edu/ sryantm/software/BioTagger/]",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Identifying gene and protein mentions in text using conditional random fields",
"authors": [
{
"first": "R",
"middle": [],
"last": "Mcdonald",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2005,
"venue": "BMC Bioinformatics",
"volume": "",
"issue": "6",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. McDonald and F. Pereira. (2005). Identifying gene and protein mentions in text using conditional random fields. BMC Bioinformatics 2005, 6(Suppl 1):S6.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Text classification from labeled and unlabeled documents using EM. Machine learning",
"authors": [
{
"first": "K",
"middle": [],
"last": "Nigam",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Thrun",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "39",
"issue": "",
"pages": "135--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Nigam, A. McCallum, S. Thrun and T. Mitchell. (2000). Text classification from labeled and unlabeled documents using EM. Machine learning. 39(2/3):135-167.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Numerical Optimization",
"authors": [
{
"first": "J",
"middle": [],
"last": "Nocedal",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Wright",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Nocedal and S. Wright. (2000). Numerical Optimization, Springer.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Maximum certainty data partitioning",
"authors": [
{
"first": "S",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Everson",
"suffix": ""
},
{
"first": "I",
"middle": [],
"last": "Rezek",
"suffix": ""
}
],
"year": 2000,
"venue": "Pattern Recognition",
"volume": "33",
"issue": "5",
"pages": "833--839",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Roberts, R. Everson and I. Rezek. (2000). Maximum cer- tainty data partitioning. Pattern Recognition, 33(5):833- 839.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Accelerated training of conditional random fields with stochastic meta-descent",
"authors": [
{
"first": "S",
"middle": [],
"last": "Vishwanathan",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Schraudolph",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Schmidt",
"suffix": ""
},
{
"first": "K",
"middle": [],
"last": "Murphy",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 23th International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Vishwanathan, N. Schraudolph, M. Schmidt and K. Mur- phy. (2006). Accelerated training of conditional random fields with stochastic meta-descent. Proceedings of the 23th International Conference on Machine Learning.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Unsupervised word sense disambiguation rivaling supervised methods",
"authors": [
{
"first": "D",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 33rd",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Yarowsky. (1995). Unsupervised word sense disambigua- tion rivaling supervised methods. Proceedings of the 33rd",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Annual Meeting of the Association for Computational Linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "189--196",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Meeting of the Association for Computational Lin- guistics, 189-196.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Learning with local and global consistency",
"authors": [
{
"first": "D",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Bousquet",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Lal",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Sch\u00f6lkopf",
"suffix": ""
}
],
"year": 2004,
"venue": "Advances in Neural Information Processing Systems",
"volume": "16",
"issue": "",
"pages": "321--328",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Zhou, O. Bousquet, T. Navin Lal, J. Weston and B. Sch\u00f6lkopf. (2004). Learning with local and global con- sistency. Advances in Neural Information Processing Sys- tems, 16:321-328.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Learning from labeled and unlabeled data on a directed graph",
"authors": [
{
"first": "D",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Sch\u00f6lkopf",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 22nd International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "1041--1048",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Zhou, J. Huang and B. Sch\u00f6lkopf. (2005). Learning from labeled and unlabeled data on a directed graph. Proceed- ings of the 22nd International Conference on Machine Learning, 1041-1048.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Semisupervised learning using Gaussian fields and harmonic functions",
"authors": [
{
"first": "X",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Ghahramani",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Lafferty",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 20th International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "912--919",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "X. Zhu, Z. Ghahramani and J. Lafferty. (2003). Semi- supervised learning using Gaussian fields and harmonic functions. Proceedings of the 20th International Confer- ence on Machine Learning, 912-919.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "training method, since the Viterbi decoder needs to access each path.For training, supervised CRF training requires for semi-supervised training arises from the extra nested loop required to calculated the quadratic feature expectations, which introduces in an additional H B \u00a1",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF1": {
"text": "Trellis for computing the expectation of a feature product over a pair of feature functions, This leads to one double sum.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF3": {
"text": ", using the same feature set as(McDonald and Pereira 2005). The CRF training procedures, supervised and semi-supervised, were run with the same regularization function,",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF4": {
"text": "Performance of the supervised and semisupervised CRFs. The sets\u00a8, \u00a9 and refer to the unlabeled training set used by the semi-supervised algorithm.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF6": {
"text": "or other inference algorithm, and add the pair",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF7": {
"type_str": "table",
"html": null,
"content": "<table><tr><td colspan=\"2\">(corresponding to results obtained by the semi-supervised CRFs on f ). By comparison, the F</td></tr><tr><td>the held-out sets by increasing the value of \u00a1 , and</td><td>are given in Table 1 \u00a2</td></tr></table>",
"num": null,
"text": ""
},
"TABREF8": {
"type_str": "table",
"html": null,
"content": "<table><tr><td>,</td><td>and</td><td>\u00a2</td></tr></table>",
"num": null,
"text": "Performance of the semi-supervised CRFs obtained on the held-out sets \u00a1"
},
"TABREF9": {
"type_str": "table",
"html": null,
"content": "<table><tr><td>,</td><td>and</td><td>\u00a2</td></tr></table>",
"num": null,
"text": "Performance of the semi-supervised CRFs trained by using unlabeled sets \u00a1"
}
}
}
}