1. Introduction

In automatic speech and language processing, many technologies make extensive use of written or read text sets. These linguistic corpora are a necessity to train models or to extract rules, and the quality of the results strongly depends on a corpus's content. Often, the reference corpus should provide a maximum diversity of content. For example, Tian, Nurminen, and Kiss (2005) and Tian and Nurminen (2009) show that maximizing the text coverage of the learning corpus improves an automatic syllabification based on a neural network. Similarly, a high-quality speech synthesis system based on the selection of speech units requires a corpus that is rich in terms of diphones, diphones in context, triphones, and prosodic markers. In particular, Bunnell (2010) shows the importance of a good coverage of diphones and triphones for the intelligibility of a voice produced by a unit selection speech synthesis system.

To cover the attributes needed for a task, several strategies are possible. A first, very simple method is to collect text randomly, but it soon becomes expensive because of the natural distribution of linguistic events, which follows Zipf's law: very few events are extremely frequent and many events are very rare. This problem is often made harder by the fact that many technologies require several variants of the same event (as in a Text-to-Speech [TTS] system using several acoustic versions of the same phonological unit). Usually, a large volume of data needs to be collected. However, depending on the application, building such corpora is often subject to a constraint of parsimony. As an example, for a TTS system, a high-quality synthetic voice generally needs a huge number of speech recordings. But minimizing the duration of a recording is also a critical point: to ensure uniform quality of the voice, to reduce the drudgery of the recording, to reduce the financial cost, or to satisfy a technical constraint on the amount of collected data for embedded systems. Moreover, a reduced set tends to limit the amount of human involvement needed for checking the data (transcription and annotation). Similarly, in the natural language processing (NLP) field, the adaptation of a generic model to a specific domain often requires new annotated data that illustrate its specificities (as in Candito, Anguiano, and Seddah 2011). However, the creation cost of such data highly depends on the kind of labels used to adapt the model. In particular, annotation with syntax trees is far more expensive than with part-of-speech (POS) tags. It can therefore be more efficient to annotate a compact corpus that reflects the variability of the phenomena than a corpus with a natural distribution of events, which implies many redundancies (see Neubig and Mori 2010).
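As a rough, self-contained illustration of this cost (not an experiment from this article), the following Python sketch estimates how many random unit draws are needed before every unit type has been observed at least once, under a Zipf-like 1/rank frequency profile versus a uniform one; the vocabulary size, exponent, and trial count are arbitrary demo choices.

```python
# Illustrative only: how many random draws are needed to observe every unit
# at least once when unit frequencies follow a Zipf-like law (~ 1/rank)
# versus a uniform law. m and the number of trials are arbitrary choices.
import random

def draws_until_full_coverage(weights, rng):
    m = len(weights)
    seen, draws = set(), 0
    while len(seen) < m:
        seen.add(rng.choices(range(m), weights=weights, k=1)[0])
        draws += 1
    return draws

rng = random.Random(0)
m = 200
zipf_weights = [1.0 / (rank + 1) for rank in range(m)]
uniform_weights = [1.0] * m

for name, w in [("Zipf", zipf_weights), ("uniform", uniform_weights)]:
    mean = sum(draws_until_full_coverage(w, rng) for _ in range(5)) / 5
    print(f"{name:7s}: ~{mean:.0f} draws to cover all {m} units")
```

The rare tail of the Zipf profile dominates the collection cost, which is precisely what motivates the covering algorithms studied below.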
In a machine learning framework, the active learning strategy can be used as an alternative that reduces the manual annotation effort needed to design the training corpus without diminishing the quality of the model to train (see Settles 2010 or Schein, Sandler, and Ungar 2004). It consists of building the corpus iteratively by choosing an item according to an external source of information (a user or an experimental measure). This approach has been applied in NLP, speech recognition, and spoken language understanding (see, for instance, Tomanek and Olsson 2009 and Gotab, Béchet, and Damnati 2009). A second alternative, when no direct quality measure is available, consists of covering a large set of attributes that may impact the final quality (after annotation or recording). This kind of approach might also be preferred when the final corpus is built in one batch (for instance, because of out-sourcing or annotator/performer consistency constraints). A method could be the automatic extraction, from a huge text corpus, of a minimal-sized subset that covers the identified attributes. This problem is a generalization of the Set-Covering Problem (SCP), which is an NP-hard problem, as shown in Karp (1972). It is then necessary to use heuristics or sub-optimal algorithms to keep the computation time reasonable. Moreover, Raz and Safra (1997) and Alon, Moshkovitz, and Safra (2006) have shown that the SCP cannot be polynomially approximated within ratio c × ln(n) unless P = NP, where c is a constant and n refers to the size of the universe to cover. That means that one cannot be certain to obtain a result under this ratio with any polynomial algorithm. However, the latter complexity results are given for any kind of distribution in the mono-representation case. One can ask whether good multi-represented coverings can be achieved efficiently on data following Zipf's law, which is usual in the domain of NLP.

Within the field of speech processing, the most frequently used strategy is a greedy method based on an agglomeration policy. This iterative algorithm selects the sentence with the highest score at each iteration; the score reflects the contribution of the sentence to the covering under construction. In Gauvain, Lamel, and Eskénazi (1990), this methodology has been applied to build a database of read speech from a text corpus for the evaluation of speech recognition systems using hierarchically organized covering attributes. Van Santen and Buchsbaum (1997) have tested different variants of greedy selection of texts by varying the units to cover (diphones, duration, etc.) and the "scores" of a sentence depending on the considered applications. In Tian, Nurminen, and Kiss (2005), the learning corpus for an automatic syllabification system is designed using a greedy approach with the Levenshtein distance as a score function in order to maximize its text diversity. In François and Boëffard (2001), the methodology gives priority to the rarest categories of allophones. The latter methodology has been implemented for the definition of the multi-speaker corpus Neologos in Krstulović et al. (2006). In Krul et al. (2006), the authors constructed a corpus where the distribution of diphonemes/triphonemes matches a uniform distribution; a greedy algorithm is led by a score function based on the Kullback–Leibler divergence. A similar method is used in Krul et al. (2007) to design a reduced database in accordance with a specific domain distribution.
Kawai et al. (2000) propose a pair exchange mechanism that Rojc and Kačič (2000) apply after a first reverse greedy algorithm (also called spitting greedy) that deletes the useless sentences. In Cadic, Boidin, and d'Alessandro (2010), the covering of "sandwich" units (defined to be more adapted to corpus-based speech synthesis) is carried out by generating new sentences in a semi-automatic way. Candidates are generated using finite state transducers. The sentences are ordered according to a greedy criterion (their richness in sandwich units) and presented to a human evaluator. This collection of artificial and rich sentences enables an effective reduction of the size of the covering but requires expensive human intervention to obtain semantically correct sentences, which will therefore be easier to record. The results of these previously cited studies are difficult to compare because of the different initial corpora, covering constraints (partial or full covering), and evaluation criteria (the number of gathered sentences, the Kullback divergence, etc.). In Zhang and Nakamura (2008), a priority policy for the rare units is added to an agglomerative greedy algorithm in order to get a covering of triphoneme classes from a large text corpus in Chinese. The results show that this priority policy, driven by the score function and the phonetic content of the sentences, reduces the covering size compared with a standard agglomerative greedy algorithm. Similarly, in François and Boëffard (2002), several combinations of greedy algorithms (agglomeration, spitting, pair exchange, or priority to rare units) were applied to the construction of a corpus for speech synthesis in French containing at least three representatives of the most frequent diphones. Based on this work, the best strategy would be the application of an agglomerative greedy followed by a spitting greedy algorithm. During the agglomeration phase, the score of a sentence corresponds to the number of its unit instances that remain to be covered, normalized by its length. During the spitting phase, at each iteration, the longest redundant sentence is removed from the covering. This algorithm is called the Agglomeration and Spitting Algorithm (ASA). As an alternative to a greedy algorithm, which is sub-optimal, solving the SCP using Lagrangian relaxation principles can provide an exact solution for problems of reasonable size. However, for speech processing, the SCP has several millions of sentences with tens of thousands of covering features. Considering these practical constraints, Chevelu et al. (2007) adapted a Lagrangian relaxation–based algorithm proposed by Caprara, Fischetti, and Toth (1999). In the context of Italian railways, Caprara, Fischetti, and Toth proposed heuristics to solve crew scheduling problems and won a competition, called Faster, organized by the Italian Operational Research Society in 1994, ahead of other Lagrangian relaxation heuristics–based algorithms, like Ceria, Nobili, and Sassano (1998). In Chevelu et al. (2007, 2008), the algorithm takes into account the constraints of multi-representation: a minimal number of representatives of the same unit may be required. The proposed algorithm, called LamSCP (Lagrangian-based Algorithm for Multi-represented SCP), is applied to extract coverings of diphonemes with a mono- or a 5-representation and coverings of triphonemes with mono-representation constraints. These results are compared with the greedy strategy ASA and are about 5% to 10% better.
In addition, LamSCP provides a lower bound for the cost of the optimal covering, which allows for evaluating the quality of the results. In Barbot, Boëffard, and Delhay (2012), the phonological content of diphoneme coverings is studied with respect to many parameters. These coverings are obtained by different algorithms (LamSCP, ASA, a greedy algorithm based on the Kullback divergence) and some of the coverings are randomly completed to reach a given size (from 20,000 to 30,000 phones). It turns out that the coverings obtained using LamSCP and ASA provide a good representation of short units, whereas the representation of long units mainly depends on the length of the corpus.

In this article, we present in more detail the LamSCP algorithm and its score functions and heuristics that take into account multi-representation constraints. We deepen the study of the performance of LamSCP for the construction of a phonologically rich corpus according to the size of the search space. We evaluate the LamSCP and ASA algorithms on a corpus of sentences in English for a covering of multi-represented diphones, where the minimal number of required unit representatives varies from one to five. We also compare them in the case of very constrained triphoneme coverings in English and French, which represent about 12 times more units to cover. Additionally, both algorithms are tested on multi-represented coverings of POS tags in order to assess their ability to deal with different kinds of linguistic data. A particular effort has been made on methodology to obtain comparable measures, to study the stability of both algorithms, and to establish confidence intervals for each solution.

This article is organized as follows. In Section 2, the SCP framework and the associated notations are introduced. The ASA algorithm is described in Section 3 and LamSCP is detailed in Section 4. The experimental methodology is presented in Section 5 and results are discussed in Section 6. Before concluding in Section 8, we present in Section 7 experiments in the context of TTS, where we evaluate the benefits of the reduction on that task.

2. The Set-Covering Problem

Before describing the SCP-solving algorithms proposed in this article, we introduce in this section some notations and the Lagrangian properties used by LamSCP. Let us consider a corpus A composed of n sentences $s_1, \ldots, s_n$. According to the target applications, these sentences are annotated with respect to phonological, acoustic, prosodic attributes, and so forth. Each sentence is then associated with a family of units of different types. The set of units present in A is denoted $U = \{u_1, \ldots, u_m\}$, and A can be represented by a matrix $A = (a_{ij})$, where $a_{ij}$ is the number of instances of unit $u_i$ in the sentence $s_j$. Therefore, the $j$th column of $A$ corresponds to sentence $s_j$ in A. To simplify the writing, we define the sets $M = \{1, \ldots, m\}$ and $N = \{1, \ldots, n\}$. For a given vector of integers $B = (b_1, \ldots, b_m)^T$, a reduction X of A, also called a covering of $U$, is defined as a subset of A that contains, for every $i \in M$, at least $b_i$ instances of $u_i$. It can be described by a vector $X = (x_1, \ldots, x_n)^T$ where $x_j = 1$ if $s_j$ belongs to X and $x_j = 0$ otherwise. In other words, a covering is a solution $X \in \{0,1\}^n$ of the following system:

$$\forall i \in M, \quad \sum_{j \in N} a_{ij} x_j \geq b_i \qquad (1)$$

that is, $AX \geq B$, where $B$ is called the constraint vector. Our aim is to optimize a covering according to a cost function minimization criterion.
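As a minimal illustration of the constraint system (1), the sketch below checks whether a selection X is a covering. The dense list-of-lists layout and all toy values are ours, chosen for readability; Section 5 motivates a sparse representation for realistic corpus sizes.

```python
# A minimal check of Equation (1): does the 0/1 selection X satisfy AX >= B?
# Dense toy data; A[i][j] = number of instances of unit u_i in sentence s_j.

def is_covering(A, X, B):
    return all(
        sum(a_ij * x_j for a_ij, x_j in zip(row, X)) >= b_i
        for row, b_i in zip(A, B)
    )

A = [[2, 0, 1, 0],   # unit u_1
     [0, 1, 0, 1],   # unit u_2
     [1, 1, 0, 0]]   # unit u_3
B = [1, 2, 1]        # constraint vector: u_2 must appear at least twice
print(is_covering(A, [1, 1, 0, 1], B))  # True
print(is_covering(A, [0, 1, 0, 1], B))  # False: u_1 is not covered
```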
The covering cost is given by summing the costs of the sentences that compose the covering. The optimization problem can be formulated as the following SCP:

$$X^* = \arg\min_{X \in \{0,1\}^n,\ AX \geq B} CX \qquad (2)$$

where $C = (c_1, \ldots, c_n)$ is the cost vector and $c_j$ the cost of the sentence $s_j$. Because the objective is to minimize the total length of the covering, we have chosen to define the cost of a sentence as one of its length features. According to the considered application, the sentence cost can be defined as its number of phones (one of our objectives is to design a phonetically rich script with a minimal speech recording duration), or its number of words, part-of-speech tags, breath groups, and so on. In Caprara, Fischetti, and Toth (1999), Caprara, Toth, and Fischetti (2000), and Ceria, Nobili, and Sassano (1998), the studied crew scheduling problem is a particular case of Equation (2) where $A$ is a binary matrix and $B = 1_{\mathbb{R}^m}$ (i.e., with mono-representation constraints). In order to ensure that Equation (1) admits a solution, we assume that, for each $i \in M$, the minimal number $b_i$ of $u_i$ instances required in the covering is not greater than the number $(A 1_{\mathbb{R}^n})_i$ of $u_i$ instances in A, that is, $A 1_{\mathbb{R}^n} \geq B$. Under this assumption, A is the maximal-size solution of Equation (1), represented by $X = 1_{\mathbb{R}^n}$. In the case where $b_i$ is greater than the number of $u_i$ instances in A, $b_i$ is set to $(A 1_{\mathbb{R}^n})_i$. To drive the SCP algorithms during the sentence selection phase, the covering capacity $\mu_j$ of sentence $s_j$ is defined as the number of its unit instances required in the covering in view of the constraint vector:

$$\mu_j = \sum_{i \in M} \min\{a_{ij}, b_i\} \qquad (3)$$

Let us note that $\mu_j$ does not consider the excess unit instances: For example, if $s_j$ contains $a_{ij} = 10$ instances of $u_i$ and at least $b_i = 3$ instances of $u_i$ are required, the contribution of $u_i$ to the derivation of $\mu_j$ only takes into account three instances of $u_i$.

3. Greedy Algorithm ASA

In this section, the two main steps that compose the ASA algorithm are briefly described. First, an agglomerative greedy procedure is applied to A so as to derive a covering. Next, a spitting greedy procedure reduces this covering in order to approach the optimal solution of Equation (2). The greedy strategy builds a sub-optimal solution to the SCP of Equation (2) in an iterative way. At each iteration, the lowest cost sentence is chosen from A. If several sentences correspond to the lowest cost, the one coming first (i.e., the one with the lowest index) is chosen. Initially, the set of selected sentences $X$ is empty, the matrix $\tilde{A}$ associated with the candidate sentences is assigned to $A$, the current covering capacity of $s_j$ is given by $\tilde{\mu}_j = \mu_j$, and the current constraint vector is $\tilde{B} = B$. The cost of sentence $s_j$ is defined by

$$\sigma_j = \begin{cases} c_j / \tilde{\mu}_j & \text{if } \tilde{\mu}_j \neq 0 \\ \infty & \text{otherwise} \end{cases} \qquad (4)$$

Indeed, if $\tilde{\mu}_j = 0$, it turns out that $s_j$ does not cover any unit missing in the solution $X$ under construction, and its infinite cost $\sigma_j$ prevents its selection. At each iteration, the selected sentence $s$ is added to $X$. Taking into account the content of $s$, $\tilde{B}$ is updated to $\max\{\tilde{B} - \tilde{A}\Delta,\ 0_{\mathbb{R}^m}\}$, where the $j$th entry of $\Delta$ equals 1 if $s_j = s$ and 0 otherwise. Next, the column associated with $s$ in $\tilde{A}$ is set to $0_{\mathbb{R}^m}$. For each sentence $s_j$ with a non-zero $\tilde{\mu}_j$ feature, $\tilde{\mu}_j$ is then updated using $\tilde{A}$ and $\tilde{B}$ in Equation (3). At last, the agglomerative greedy algorithm is stopped as soon as all the constraints are satisfied, that is, $\tilde{B} = 0_{\mathbb{R}^m}$.
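A compact sketch of this agglomerative phase follows, under the same toy dense representation as above; the tie-breaking by lowest index matches the text, and everything else (data, naming) is illustrative only.

```python
# A sketch of the agglomerative greedy phase of ASA. At each iteration the
# sentence minimizing sigma_j = c_j / mu~_j (Equation (4)) is selected, ties
# broken by lowest index; B~ is then decreased by the selected column.
import math

def agglomerative_greedy(A, B, C):
    m, n = len(A), len(C)
    residual = list(B)                       # B~: instances still to cover
    selected = []
    while any(b > 0 for b in residual):
        best_j, best_score = None, math.inf
        for j in range(n):
            if j in selected:
                continue
            mu = sum(min(A[i][j], residual[i]) for i in range(m))   # Eq. (3)
            score = C[j] / mu if mu > 0 else math.inf               # Eq. (4)
            if score < best_score:
                best_j, best_score = j, score
        if best_j is None:                   # every remaining sigma_j is infinite
            raise ValueError("the constraint vector B cannot be satisfied")
        selected.append(best_j)
        for i in range(m):                   # B~ <- max(B~ - A.Delta, 0)
            residual[i] = max(residual[i] - A[i][best_j], 0)
    return selected

A = [[2, 0, 1, 0], [0, 1, 0, 1], [1, 1, 0, 0]]
B = [1, 2, 1]
C = [5, 3, 2, 4]                             # sentence lengths, e.g., in phones
print(agglomerative_greedy(A, B, C))         # [1, 2, 3]
```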
The spitting greedy strategy also consists in iteratively building a sub-optimal solution $Y$ to Equation (2) by reducing the size of a covering. The initial covering $Y$ is set to the solution $X$ derived by the agglomerative phase described earlier. At each iteration, the set of redundant sentences of $Y$ is calculated and the costliest one (according to the cost function $C$) is removed from $Y$. An element $s$ of $Y$ is said to be redundant if, for each $u_i \in U$, its numbers of instances in $Y$, denoted $m_i(Y)$, and in $Y \setminus \{s\}$, denoted $m_i(Y \setminus \{s\})$, satisfy $\min\{m_i(Y), b_i\} = \min\{m_i(Y \setminus \{s\}), b_i\}$. In other words, $s$ is a redundant element of the covering $Y$ if $Y \setminus \{s\}$ is also a covering solution of Equation (1). The spitting greedy algorithm stops when the redundant sentence set is empty.

4. Lagrangian Relaxation–Based Algorithm

This section describes the main phases of the algorithm called LamSCP. This algorithm takes advantage of the Lagrangian relaxation properties reviewed herein in order to approach the optimal solution of Equation (2) as closely as possible. Strongly inspired by Caprara, Fischetti, and Toth (1999), but generalized to the multi-representation problem, this algorithm provides a lower bound of the optimal solution cost. Having such information is very useful for assessing the achievements of the SCP algorithms. Let us briefly recall the main principles of Lagrangian relaxation on which LamSCP is based to solve Equation (2) (see Fisher [1981] for more details on Lagrangian relaxation). First, the Lagrangian function associated with Equation (2) is defined by

$$L(X, \Lambda) = CX + \Lambda^T (B - AX) = \Lambda^T B + C(\Lambda) X \qquad (5)$$

where $\Lambda \in (\mathbb{R}^+)^m$, $X \in \{0,1\}^n$, and $C(\Lambda) = C - \Lambda^T A$. The coordinates of $\Lambda = (\lambda_1, \ldots, \lambda_m)^T$ are called Lagrangian multipliers and can be interpreted as a weighting of the constraints (1). The $j$th entry of $C(\Lambda)$, called the Lagrangian cost $c_j(\Lambda)$ of sentence $s_j$, takes into account its cost $c_j$ and the adequacy of its composition to address Equation (2). For every covering $X$ and every $\Lambda \in (\mathbb{R}^+)^m$, the Lagrangian function satisfies $L(X, \Lambda) \leq CX$. Thus, the dual Lagrangian function defined by

$$L(\Lambda) = \min_{X \in \{0,1\}^n} L(X, \Lambda) \qquad (6)$$

presents the following fundamental property: For every $\Lambda \in \mathbb{R}^m_+$ and every covering $X$, we have $L(\Lambda) \leq CX$. Hence, $L(\Lambda)$ is a lower bound of the minimal covering cost $CX^*$, but does not necessarily correspond to the cost of a covering. In order to compute $L(\Lambda)$, an acceptable solution for the vector $X$ minimizing $L(X, \Lambda)$ is $X(\Lambda) = (x_1(\Lambda), \ldots, x_n(\Lambda))^T$ where

$$x_j(\Lambda) = \begin{cases} 1 & \text{if } c_j(\Lambda) < 0 \\ 0 & \text{if } c_j(\Lambda) > 0 \\ \in \{0,1\} & \text{otherwise} \end{cases} \qquad (7)$$

Additionally, the dual Lagrangian function and the Lagrangian costs inform about the potential usefulness of sentences in the optimal covering. More precisely, for a given $\Lambda$ and a known upper bound UB of the minimal covering cost, the gap $g(\Lambda) = \mathrm{UB} - L(\Lambda)$ measures the relaxation quality. If $c_j(\Lambda)$ is strictly greater than $g(\Lambda)$, one can check that any covering containing $s_j$ has a cost strictly greater than UB. Hence, sentence $s_j$ is not selected and $x_j$ can be fixed at zero. Similarly, if $c_j(\Lambda) < -g(\Lambda)$, any covering with a cost lower than UB contains $s_j$ and one can fix $x_j$ to 1. Therefore, an optimal covering is made up of sentences with a low Lagrangian cost, as done in Caprara, Toth, and Fischetti (2000) and Ceria, Nobili, and Sassano (1998), and the higher the relaxation quality (i.e., the lower $g(\Lambda)$), the cheaper the covering will be.
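A sketch of these Lagrangian quantities, assuming the same toy dense layout: the costs C(Λ), the lower bound L(Λ) of Equation (6) (obtained by setting x_j(Λ) = 1 exactly when c_j(Λ) < 0), and the two gap-based variable-fixing rules. Function names are ours, not the authors'.

```python
# A sketch of the Lagrangian machinery of Section 4, on dense toy data:
# Lagrangian costs C(Lambda) = C - Lambda^T A, the lower bound L(Lambda) of
# Equation (6), and the gap-based fixing rules. Names are ours.

def lagrangian_costs(A, C, lam):
    m, n = len(A), len(C)
    return [C[j] - sum(lam[i] * A[i][j] for i in range(m)) for j in range(n)]

def dual_value(A, B, C, lam):
    # L(Lambda) = Lambda^T B + min over X of C(Lambda) X: take x_j = 1
    # exactly when c_j(Lambda) < 0 (one minimizer allowed by Equation (7)).
    costs = lagrangian_costs(A, C, lam)
    return sum(l * b for l, b in zip(lam, B)) + sum(min(c, 0.0) for c in costs)

def fix_variables(A, B, C, lam, UB):
    """Sentences provably out of (x_j = 0) or in (x_j = 1) any covering
    cheaper than UB, following the two rules stated after Equation (7)."""
    gap = UB - dual_value(A, B, C, lam)        # g(Lambda) = UB - L(Lambda)
    costs = lagrangian_costs(A, C, lam)
    out = [j for j, c in enumerate(costs) if c > gap]
    into = [j for j, c in enumerate(costs) if c < -gap]
    return out, into

A = [[2, 0, 1, 0], [0, 1, 0, 1], [1, 1, 0, 0]]
B, C = [1, 2, 1], [5.0, 3.0, 2.0, 4.0]
lam = [0.5, 1.0, 0.5]                          # arbitrary multipliers
print(dual_value(A, B, C, lam))                # 3.0, a valid lower bound
print(fix_variables(A, B, C, lam, UB=9.0))     # ([], []) with this weak gap
```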
The resolution of the dual problem of Equation (2) consists in finding $\Lambda^* \in \mathbb{R}^m_+$ that maximizes the lower bound $L(\Lambda)$, that is,

$$\Lambda^* = \arg\max_{\Lambda \in \mathbb{R}^m_+} L(\Lambda) \qquad (8)$$

Because this real-variable function $L$ is concave and piecewise affine, a well-known approach for finding a near-optimal multiplier vector is the subgradient algorithm. LamSCP is an iterative algorithm, composed of several procedures that aim to either improve the current best solution or reduce the combinatorial issue related to the considered problem. In order to derive a good solution, the algorithm calls on a great number of greedy procedures to solve sub-problems with the help of the Lagrangian costs. As for the combinatorial reduction, the most frequently used heuristic consists of downsizing the problem by mainly considering the sentences with low Lagrangian costs. The algorithm is organized around a main procedure called 3-phases. This procedure can single-handedly solve a multi-represented SCP. As its name suggests, 3-phases consists in iterating a sequence of the three following subprocedures, as shown in Figure 1:

- The subgradient phase calculates an estimation $\tilde{\Lambda}$ of $\Lambda^*$ that maximizes the dual Lagrangian function. This procedure requires an upper bound UB of the optimal covering cost. UB is initialized by a greedy algorithm (rather than by the cost of the whole corpus A). This phase is detailed in Section 4.2.1.
- The heuristic phase explores the neighborhood of $\tilde{\Lambda}$ by generating a great number of Lagrangian vectors $\tilde{\Lambda}^p$. A greedy-like procedure is associated with each $\tilde{\Lambda}^p$ so as to compute a covering using the Lagrangian cost vector $C(\tilde{\Lambda}^p)$. If, during this exploration, a less costly covering than the best known one (corresponding to the cost UB) is found, the upper bound UB is updated to the cost of this less costly solution. Similarly, if a better estimation of $\Lambda^*$ is obtained, $\tilde{\Lambda}$ is updated. This phase is described in Section 4.2.2.
- The column fixing phase selects a set $F$ of sentences that are most likely to belong to the optimal covering. This phase is detailed in Section 4.2.3.

Following the column fixing phase, the constraint vector is updated and the unselected sentences define a set-covering sub-problem, called a residual problem. This sub-problem is processed similarly, via an additional iteration of the three phases. This iterative process is stopped when the residual problem is empty or when the associated dual Lagrangian function indicates too high a cost. Indeed, because this function indicates a minimal cost for covering the sub-problem, its addition to the cost $CF$ of the sentences already retained in $F$ gives a lower bound of the total cost of the solution under construction, which should not rise beyond the cost UB of the best known solution if it is to remain potentially more advantageous.

4.2.1 Subgradient Phase. In order to reach the quality goal, the subgradient phase provides a near-optimal solution $\tilde{\Lambda}$ of the dual Lagrangian problem (8) using a subgradient-type algorithm. This iterative approach generates a sequence $(\Lambda^p)$ using the following updating formula (see Caprara, Fischetti, and Toth 1999):

$$\Lambda^{p+1} = \max\left\{ \Lambda^p + \mu \frac{g(\Lambda^p)}{\|S(\Lambda^p)\|^2} S(\Lambda^p),\ 0 \right\} \qquad (9)$$

where $S(\Lambda^p) = B - A X(\Lambda^p)$ so as to take into account the multi-representation constraints. Parameter $\mu$ is adjusted to tune the convergence speed according to the method proposed by Caprara, Fischetti, and Toth (1999).
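A sketch of one update of Equation (9) follows; the step-size parameter µ is passed in as a plain constant here, whereas the article tunes it with the policy of Caprara, Fischetti, and Toth (1999). It reuses the dense conventions of the previous sketches.

```python
# One step of the subgradient update of Equation (9), with the
# multi-representation subgradient S(Lambda) = B - A X(Lambda).

def subgradient_step(A, B, C, lam, UB, mu):
    m, n = len(A), len(C)
    costs = [C[j] - sum(lam[i] * A[i][j] for i in range(m)) for j in range(n)]
    X = [1 if c < 0 else 0 for c in costs]              # X(Lambda), Eq. (7)
    S = [B[i] - sum(A[i][j] * X[j] for j in range(n))   # S = B - A X(Lambda)
         for i in range(m)]
    L = sum(l * b for l, b in zip(lam, B)) + sum(min(c, 0.0) for c in costs)
    norm2 = sum(s * s for s in S)
    if norm2 == 0:                                      # X(Lambda) satisfies B
        return lam
    step = mu * (UB - L) / norm2                        # mu * g(Lambda)/||S||^2
    return [max(l + step * s, 0.0) for l, s in zip(lam, S)]

A = [[2, 0, 1, 0], [0, 1, 0, 1], [1, 1, 0, 0]]
lam = [0.5, 1.0, 0.5]
print(subgradient_step(A, [1, 2, 1], [5.0, 3.0, 2.0, 4.0], lam, UB=9.0, mu=0.1))
```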
At the first call of 3-phases, $\Lambda^0$ is arbitrarily defined as follows: for each $i \in M$,

$$\lambda_i^0 = \min_{\substack{j \in N \\ a_{ij} \neq 0}} \frac{c_j}{\mu_j} \qquad (10)$$

As for UB, its initial value is set to the cost of a previously calculated covering. In order to evaluate how much the covering cost derived by ASA can be improved, we have chosen to initialize UB with this value. At the following iterations of 3-phases, $\Lambda^0$ is given by a random perturbation (less than 10%) of the best known vector $\tilde{\Lambda}$ (from which the entries of the sentences fixed in the last column fixing phase are removed), and UB corresponds to the cost of the best covering found (after subtraction of the cost of the sentences fixed in the last column fixing phase). In another approach, proposed by Ceria, Nobili, and Sassano (1998), UB corresponds to the upper bound of a dual Lagrangian problem, and the subgradient procedure simultaneously estimates the upper bound and the lower bound, generating two sequences of multipliers.

The subgradient phase also calls two procedures: pricing and spitting. Procedure pricing aims to reduce the size of the search space. For each unit $u_i$, the pricing selects the $5 b_i$ smallest Lagrangian cost sentences covering $u_i$. If this selection contains fewer than $5 b_{\max}$ sentences, where $b_{\max}$ denotes the maximal entry of $B$, it is completed with low Lagrangian cost sentences (less than 0.1) up to the limit of $5 b_{\max}$ sentences. The set of the chosen sentences is denoted $P$, and its design guarantees a sufficient number of instances for each unit to cover and some variety in its composition. Actually, the subgradient method is applied on $P$ instead of A, and $P$ is updated every 10 subgradient iterations. Finally, at each iteration, definition (7) of $X(\Lambda^p)$ and the large number of Lagrangian costs close to zero allow a considerable number of possible vectors $S(\Lambda^p)$. In order to get around the computational difficulty of finding the steepest ascent direction, Caprara, Fischetti, and Toth (1999) propose a heuristic that, according to the experimental results, accelerates the convergence of the subgradient phase. This heuristic is implemented in the spitting procedure. Called at each iteration, this procedure extracts from $P$ the subset $S$ of sentences with a Lagrangian cost lower than 0.001. $S$ is then reduced using a spitting greedy algorithm that removes its redundant elements in decreasing Lagrangian cost order. At last, for every $j \in N$, $x_j(\Lambda^p) = 1$ if $s_j \in S$ and $x_j(\Lambda^p) = 0$ otherwise. Thus, $S(\Lambda^p)$ does not necessarily correspond to a subgradient vector.

4.2.2 Heuristic Phase. The heuristic phase calculates a large number of coverings before keeping the best one. To that end, a sequence of 150 multiplier vectors is generated by perturbing $\tilde{\Lambda}$ using the formula $\tilde{\Lambda}^{p+1} = \max\{\tilde{\Lambda}^p + \mu\, g(\tilde{\Lambda}^p)\, S(\tilde{\Lambda}^p),\ 0\}$, where $\tilde{\Lambda}^0 = \tilde{\Lambda}$ and $\mu$ is provided by the subgradient phase, so as to allow for a change in a large number of the $\tilde{\Lambda}^p$. With each $\tilde{\Lambda}^p$, an agglomerative greedy algorithm followed by a spitting greedy one are associated in order to calculate a covering. The agglomerative greedy chooses at each iteration the sentence $s_j$ with the lowest cost $\sigma_j(\tilde{\Lambda}^p)$, where

$$\sigma_j(\tilde{\Lambda}^p) = \begin{cases} c_j(\tilde{\Lambda}^p) \cdot \tilde{\mu}_j & \text{if } c_j(\tilde{\Lambda}^p) < 0 \text{ and } \tilde{\mu}_j > 0 \\ c_j(\tilde{\Lambda}^p) / \tilde{\mu}_j & \text{if } c_j(\tilde{\Lambda}^p) \geq 0 \text{ and } \tilde{\mu}_j > 0 \\ \infty & \text{if } \tilde{\mu}_j = 0 \end{cases} \qquad (11)$$

This cost function gives an advantage to low Lagrangian cost sentences $s_j$ containing $\tilde{\mu}_j$ unit instances that could be helpful to the covering under construction.
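A sketch of the score of Equation (11); the function name and the toy call are ours.

```python
# The heuristic-phase score of Equation (11): negative Lagrangian costs are
# amplified by the covering capacity mu~_j, non-negative ones are divided by
# it, and sentences with mu~_j = 0 are excluded by an infinite score.
import math

def heuristic_score(c_lagrangian, mu):
    if mu == 0:
        return math.inf                 # nothing useful left in this sentence
    if c_lagrangian < 0:
        return c_lagrangian * mu        # useful and "cheap": very attractive
    return c_lagrangian / mu

costs = [-1.0, 0.5, 2.0, -0.2]          # toy Lagrangian costs c_j(Lambda~^p)
mus = [3, 2, 0, 1]                      # toy covering capacities mu~_j
best = min(range(4), key=lambda j: heuristic_score(costs[j], mus[j]))
print(best)                             # 0: the cheapest, most useful sentence
```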
The agglomerative step uses several heuristics so as to reduce the search space. It is run within a limited subset $P_l$ of $P$, composed of the sentences $s_j$ with the lowest costs $\sigma_j(\tilde{\Lambda}^p)$. At each iteration, a sentence of $P_l$ is selected and the costs of the sentences of $P$ are updated. If the maximum sentence cost in $P_l$ becomes greater than the minimal cost in $P \setminus P_l$, the working subset $P_l$ is also updated. The definitions of $P$ and $P_l$ guarantee that the agglomeration step provides a non-partial solution of the considered SCP. This solution is then reduced during the spitting step by iteratively removing its redundant sentences $s_j$ with the highest costs $c_j$. At the end of the heuristic phase, the best covering found, $X^*$, and its cost $CX^*$ (stored in UB) are kept, as well as the highest value of $L(\tilde{\Lambda}^p)$ (found during the subgradient or heuristic phases).

4.2.3 Column Fixing Phase. The column fixing phase aims to reduce the problem size by choosing "promising" sentences among the ones with very low Lagrangian cost or containing rare unit instances. The unselected sentences are less interesting for resolving the SCP, and the residual problem associated with these sentences is the subject of another call of 3-phases. More precisely, the column fixing phase calculates the subset $Q$ composed of the sentences $s_j$ with a negative Lagrangian cost $c_j(\tilde{\Lambda})$. $Q$ is represented by the binary vector $Q = (q_1, \ldots, q_n)^T$ where $q_j = 1$ if $s_j \in Q$. For each $u_i \in U$, the number of its instances covered by $Q$ is given by $(AQ)_i$. If $(AQ)_i \leq b_i$, then $u_i$ is considered a rare unit and all the elements of $Q$ containing some instances of $u_i$ are fixed in a set $F$. The covering constraints that are not satisfied by $F$ constitute a residual SCP. In order to complete $F$ with a few sentences of the best known covering, a greedy-type algorithm is run on $X^* \setminus F$ to derive a solution of this residual SCP. From the obtained solution, the $\max\{(B^T 1_{\mathbb{R}^m})/20,\ 1\}$ lowest Lagrangian cost sentences are also added to $F$. The sentences that are "fixed" in $F$ during the column fixing phase stay in $F$ up to the end of 3-phases. After the column fixing phase, the residual sub-problem is processed by iterating the three phases, and the next fixed sentences are added to $F$.

The 3-phases procedure is encapsulated in an outer loop that permits the partial reconsideration of the solution $X^*$ provided by this procedure. To that end, the refining procedure, proposed in Caprara, Fischetti, and Toth (1999), is used in order to select elements of $X^*$ that contribute at least to the gap $g(\tilde{\Lambda})$. In the case of the SCP with multi-representation constraints, the definition of the contribution of $s_j \in X^*$ can be adapted as follows:

$$\delta_j = \max\{c_j(\tilde{\Lambda}),\ 0\} + \sum_{\substack{i \in M \\ a_{ij} > 0}} \tilde{\lambda}_i (A X^* - B)_i \frac{a_{ij}}{(A X^*)_i} \qquad (12)$$

The second term of Equation (12) consists of sharing the contribution $\tilde{\lambda}_i (A X^* - B)_i$ of the excess instances of $u_i$ in $X^*$ according to the distribution of the $u_i$ instances in $X^*$. Therefore, the refining procedure ranks the elements $s_j$ of $X^*$ in increasing order of their $\delta_j$ value, and fixes the first elements in a set $G$ until the covering rate $\tau_G$ of $G$ reaches $\pi$. $\tau_G$ represents the rate of covering constraints satisfied by $G$ and is defined by

$$\tau_G = 1 - \frac{\sum_{i \in M} \max\{b_i - (AG)_i,\ 0\}}{B^T 1_{\mathbb{R}^m}} \qquad (13)$$

where $G$ denotes the binary vector corresponding to $G$.
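A sketch of Equations (12) and (13), again on dense toy structures with names of our choosing; the closing comment recalls how the refining procedure uses both quantities.

```python
# The refining quantities of Equations (12) and (13) on dense toy structures.
# delta_j shares the "excess" weight lambda~_i (A X* - B)_i of each unit
# among the selected sentences, proportionally to their a_ij.

def delta(j, A, B, X, lam, costs):
    contrib = max(costs[j], 0.0)                        # max{c_j(Lambda~), 0}
    for i in range(len(A)):
        if A[i][j] > 0:
            covered = sum(A[i][k] * X[k] for k in range(len(X)))  # (A X*)_i
            contrib += lam[i] * (covered - B[i]) * A[i][j] / covered
    return contrib

def covering_rate(A, B, G):
    # tau_G of Equation (13): share of the covering constraints met by G.
    missing = sum(max(B[i] - sum(A[i][j] * G[j] for j in range(len(G))), 0)
                  for i in range(len(A)))
    return 1.0 - missing / sum(B)

# Refining sorts the sentences of X* by increasing delta_j and fixes them
# into G until covering_rate(A, B, G) reaches the target rate pi.
A = [[2, 0, 1, 0], [0, 1, 0, 1], [1, 1, 0, 0]]
B, X = [1, 2, 1], [1, 1, 0, 1]                          # X*: a covering
print(delta(0, A, B, X, [0.5, 1.0, 0.5], [3.5, 1.5, 1.5, 3.0]))  # 4.25
print(covering_rate(A, B, [0, 1, 0, 0]))                # 0.5: half of B is met
```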
LamSCP is made up of the main procedures introduced in the previous sections, interlinked by the following steps. First, because of the adaptation of the algorithm to the SCP with multi-representation constraints, the entries of matrix $A$ are clipped to the constraint vector in order to simplify calculations such as those of $\mu_1, \ldots, \mu_n$. This clipping implies that the instances of each unit $u_i$ in excess of $b_i$ within a sentence $s_j$ are not taken into account in the derivation of $\delta_j$. After the initialization of $\Lambda^0$ using Equation (10) and of the upper bound UB, procedure 3-phases is called and provides a solution $X$ to the complete SCP. The refining function fixes a sentence subset $G$ such that $G$ covers a given rate $\pi$ of the covering constraints. Parameter $\pi$ starts at a minimum value $\pi_{\min} = 0.3$. The residual SCP is then processed by 3-phases. The value of $\pi$ grows at a rate of 20 percent whenever 3-phases does not improve the solution to the complete SCP. If $\pi$ is greater than or equal to 1, the refining procedure fixes the whole best solution and the residual problem is then empty. On the other hand, $\pi$ is reset to $\pi_{\min}$ if a better solution is found, in order to challenge $G$ and improve this solution. This iterative sequence composed of the 3-phases and refining procedures is carried out as long as the residual problem is not empty, the gap $g(\tilde{\Lambda})$ is positive, and the number of iterations has not reached 20.

5. Experiments

We propose a twofold comparison of the ASA and LamSCP algorithms: One part is focused on the covering cost for a large SCP, and the other on the stability of the solutions. Moreover, in order to assess the ability and the behavior of both algorithms on different linguistic data, a first set of experiments deals with phonological attributes (mainly covering co-occurrences of phonemes) and a second set with grammatical labels (mainly covering co-occurrences of POS labels). Both attribute types, often involved in automatic linguistic processing, were chosen because their distribution consists of few highly frequent events and numerous rare events. On the one hand, for TTS tasks, the phonological type of covering is a useful preliminary step of the text corpus design before the recording step. In order to produce the signal corresponding to a requested sentence, the unit selection engine requires at least one instance of each phone (or 2-phone, depending on the concatenation process). Because the recording and the post-recording annotation process are expensive tasks, the recording length of such a corpus has to be as short as possible. On the other hand, in order to train a domain-specific dependency parser, the covering of POS sequences may be useful for increasing the diversity of syntax patterns. Because dependency annotation is a highly expensive task, the adaptation corpus to annotate needs to be as small as possible, containing characteristic examples of the specific lexical variation rather than following the natural distribution. One can expect that increasing its diversity of POS sequences may lead to more diversity in the syntax trees. Experiments on covering co-occurrences of phonemes are carried out on two large phonologically annotated text corpora, and consist of covering at least k instances of each phoneme, diphoneme, up to n-phoneme (i.e., triphoneme if n = 3, diphoneme if n = 2). The cost cj of the sentence sj is given by its number of phones. From this point on, this kind of SCP is called a "k-covering of n-phonemes." A first corpus, Gutenberg, is composed of texts in English, mainly extracted from novels and short stories. This corpus is a production of the Gutenberg Project, presented by Hart (2003), and has been used by Kominek and Black (2003) to design the speech corpus Arctic.
A second corpus, in French, named Le-Monde, is extracted from articles published in the newspaper Le Monde in 1997. Table 1 summarizes the main features of both corpora. The phonological annotation of the Gutenberg corpus comes from the Arctic/Festvox database (see Kominek and Black 2003), and the annotation of the Le-Monde corpus is a by-product of the Neologos project, detailed by Krstulović et al. (2006). For each corpus, we have collected every phoneme, diphoneme, and triphoneme, together with their occurrences in each sentence, so as to define the set U of units to cover and the matrix A. A is built by collecting one sentence after the other, following the ordering inside the corpus, and one unit after the other inside the sentences. After this matrix translation, we obtain two description files and two index files. The first file describes the matrix A and the second one the cost vector C. Because of the low matrix density, we have chosen a sparse representation to save space and computation time: For instance, the 2-phoneme Gutenberg matrix is only about 2.2% dense. We only store the cells of A that have a non-zero value so as to get a sparse matrix. The index files provide the correspondence between the general covering problem and the application domain. The implementation is written in C. In terms of software engineering, our algorithms work on an SCP that does not depend on the application data. For example, there is no information on what types of units are to be covered. The algorithms only have the matrix of occurrences A, the cost vector C, and the constraint vector B. A set of translation files (from application data to SCP and from SCP to application data) is built before each computation. As a consequence, there is no difficulty in addressing a different set of features to cover on the same or on a different corpus.
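The authors' C data structures are not documented here, so the following column-wise sparse sketch (one unit-to-count dictionary per sentence) is only one plausible realization of the representation described above; it shows how the covering capacity µj of Equation (3) can be computed by visiting non-zero cells only.

```python
# One plausible sparse layout for A: a {unit_index: count} dictionary per
# sentence column. Only non-zero cells are stored, as stated above for the
# 2.2%-dense 2-phoneme Gutenberg matrix.

def to_sparse_columns(A):
    m, n = len(A), len(A[0])
    return [{i: A[i][j] for i in range(m) if A[i][j] != 0} for j in range(n)]

def capacity(column, residual):
    # mu_j of Equation (3), visiting only the non-zero cells of sentence j.
    return sum(min(count, residual[i]) for i, count in column.items())

A = [[2, 0, 1, 0], [0, 1, 0, 1], [1, 1, 0, 0]]
cols = to_sparse_columns(A)
print(cols[0])                       # {0: 2, 2: 1}
print(capacity(cols[0], [1, 2, 1]))  # 2, matching the dense computation
```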
To study the achievements of ASA and LamSCP on different types of data, we have also chosen to address the "k-covering of n-POS" on the corpus Le-Monde. The grammatical and syntactic analyses are processed by the Synapse development analyzer presented in Synapse (2011). In order to consider an SCP with a substantial number of required units, a very detailed level of POS tagging has been selected, providing 141 distinct tags after analyzing Le-Monde. For example, this level provides tags like "Determiner masculine singular Article" or "Noun feminine singular," whereas the simplest level gives "Determiner" or "Noun." This latter level of description would have given only nine different POS tags after analyzing Le-Monde. The main associated statistics are given in Table 2. For these POS covering experiments, the cost of a sentence is defined as its number of POS occurrences. We used a PC with 2 CPUs (E5320/1.86GHz/4 cores/64 bits) and 32 GB of RAM for the phonological coverings; the POS coverings were computed using a PC with 8 CPUs (Intel Xeon X7550/2.00GHz/8 cores/64 bits) and 128 GB of RAM. Our implementations do not take advantage of any parallelism. The following sections detail more precisely the different experiments conducted on French or English.

The aim of Experiment 1 is the assessment of the achievements of both algorithms, ASA and LamSCP, and of the robustness of the results when the sentence ordering is modified in the corpus to reduce. Indeed, one of the difficulties of the greedy methodology is that the score function has discrete values and several sentences can yield the same score. In our implementation, among the sentences showing the best current score, the first one encountered is chosen. We would like to measure the influence of this arbitrary choice on the stability of the results. LamSCP uses greedy strategies based on Lagrangian costs. Because the Lagrangian costs cj(Λ) take the SCP in its entirety into account and are continuous real-valued functions of Λ, they should be more selective than the sentence costs used by ASA. A simple solution for evaluating the stability consists of running a large number of experiments on the same SCP while randomly modifying the sentence ordering in A. Experiment 1 measures the impact of these permutations on the solutions computed by both algorithms. The considered SCP is the 1-covering of 2-phonemes on the corpus Le-Monde. Considering the computation time (more than 5 hours for LamSCP), only 47 instances of the SCP are considered, each instance corresponding to a random sentence ordering in Le-Monde. The 95% confidence intervals are derived using the bootstrap method for the covering cost, the number and the length of sentences in coverings, the computation time, and the "distance" between the covering costs and the associated lower bound L(Λ̃).

5.2 Stability of the Algorithms for the k-Covering of 2-Phonemes in English

One of the goals of Experiment 2 is to compare the achievements of both algorithms on the corpus Gutenberg, whose features differ from those of Le-Monde. The sentences in Gutenberg are shorter on average and the associated variation of sentence length is lower. Furthermore, Gutenberg is 10 times smaller than Le-Monde. In order to compare with the results of the previous experiment done on Le-Monde, we first observed a 1-covering of 2-phonemes on the Gutenberg corpus. The search space seems smaller than in Experiment 1: According to Table 1, Le-Monde is composed of more sentences than Gutenberg, and Le-Monde contains 33,165,050 occurrences of 2-phonemes whereas Gutenberg contains only 3,025,474. On the other hand, the number of attributes to cover is higher: 2,012 2-phonemes in Gutenberg versus 1,207 in Le-Monde. We can also notice that five 2-phonemes have only one occurrence in Le-Monde and that the total cost of the five sentences covering these rare units is 751 phones, whereas this is the case for 109 2-phonemes in Gutenberg, and the total cost of the 104 concerned sentences is 3,606 phones. Finally, if we consider the density of matrix A, 8.4% of the cells are non-empty for Experiment 1 and 2.2% for Experiment 2. As in Experiment 1, the sentence ordering in Gutenberg has been randomly modified to produce 60 instances of the SCP, and similar solution statistics have been computed for both algorithms. The second objective is to test and compare the ability of the two algorithms to deal with the constraints of multi-representation. To do so, we apply the same methodology to the k-covering of 2-phonemes in Gutenberg, for k from 2 to 5. We note that, for the same original corpus to reduce, the size of the search space decreases when k increases. These different SCPs enable us to compare the performance of the two algorithms depending on the size of the search space.

In Experiment 3, the aim is to observe the behavior of both algorithms on very constrained problems. To do so, we study their ability to treat a covering of 3-phonemes. We try to assess the impact of such an increase in the number of attributes to cover, with many rare events, on the solution features and on the stability.
So as to compute statistics on the 1-covering of 3-phonemes, an instance of Gutenberg has been proposed to ASA and to LamSCP. This instance counts 29,489 units to cover and the density of matrix A is 0.24%. The computation time is nearly 5 days for LamSCP, and we therefore chose to limit the number of instances of the 1-covering of 3-phonemes on Gutenberg to 35. Additionally, it is interesting to compare these results with those of Experiment 2 concerning the 1-covering of 2-phonemes on the same corpus, which corresponds to a larger search space.

In order to pursue the objective set out in the description of Experiment 3, that is, the ability of the algorithms to deal with numerous constraints and a heavy-tailed distribution of units, Experiment 4 consisted of testing both algorithms on the 1-covering of 3-phonemes on Le-Monde. The search space seems larger than in the previous experiment. Let us recall that Le-Monde contains 3.18 times more sentences than Gutenberg, and it counts 27,650 units to cover. Furthermore, Gutenberg contains 5,000 3-phonemes with only one occurrence, which requires the selection of 4,180 sentences with a total length equal to 137,714 phones, whereas Le-Monde contains 2,274 rare 3-phonemes scattered over 2,107 sentences measuring a total of 283,208 phones. The associated matrix density is 0.69%. Because the computation of a first instance takes more than 8 days, we have limited the number of instances to 30 for this SCP.

5.5 k-Covering of 1-POS and 2-POS in French

The main goal of Experiment 5 is to study the behavior of both algorithms, ASA and LamSCP, dealing with another kind of linguistic attribute, and to compare this with the previous experiments. To achieve this goal, we consider POS attributes and the associated SCPs: 1- and 5-coverings of 1-POS, and 1- and 5-coverings of 2-POS, defined on Le-Monde. Indeed, we can observe in Table 2 that the global statistics of POS tags in Le-Monde are quite different from their phonological counterparts summarized in Table 1. In particular, the density of matrix A is 11.03% for a 1-POS covering and 0.57% for a 2-POS covering. Also, the search space size seems to decrease when considering successively the mono- and the multi-coverings of 1-POS, and the mono- and the multi-coverings of 2-POS, permitting us to compare them with the results coming from the experiments on phonological coverings. We evaluate the stability by computing 50 randomly mixed versions of the corpus Le-Monde.

6. Results and Discussion

In this section, the results of the experiments described in Section 5 are provided and discussed. Consequently, the organization of this section mirrors that of Section 5. Table 3 shows the main results of Experiment 1, concerning the 1-covering of 2-phonemes from the corpus Le-Monde. The symbol ± indicates that the mentioned value corresponds to a 95% confidence interval, calculated using the bootstrap method from the 47 instances of the SCP. In order to cover each of the 1,207 2-phonemes of Le-Monde, ASA drastically reduces the size of the initial corpus, by 99.94% (±0.00). However, on average, LamSCP calculates a 9.00% shorter covering. The lower bound L(Λ̃) for the optimal covering cost is 7,689 ± 5 phones. L(Λ̃) is not a minimum value and may not correspond to the cost of a real covering. Because this lower bound is updated all along the execution of LamSCP, we do not mention a specific calculation time for this result.
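The ± values reported throughout this section are 95% bootstrap confidence intervals. A minimal sketch of this resampling procedure, assuming a simple percentile interval over the per-instance measurements (the exact bootstrap variant and resample count are not stated in the article):

```python
# A sketch of percentile-bootstrap 95% confidence intervals: resample the
# per-instance measurements with replacement and take the 2.5% / 97.5%
# quantiles of the resampled means. The resample count and the toy costs
# below are our choices, not the article's.
import random

def bootstrap_ci95(values, n_resamples=10_000, seed=0):
    rng = random.Random(seed)
    n = len(values)
    means = sorted(sum(rng.choices(values, k=n)) / n
                   for _ in range(n_resamples))
    return means[int(0.025 * n_resamples)], means[int(0.975 * n_resamples)]

costs = [8551, 8560, 8547, 8562, 8555, 8549]   # toy covering costs in phones
low, high = bootstrap_ci95(costs)
print(f"mean = {sum(costs) / len(costs):.1f}, 95% CI = [{low:.1f}, {high:.1f}]")
```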
For one instance of the SCP, let $CX^*_{\text{ASA}}$ be the size of the solution given by ASA. The quantity $\tau_{\text{ASA}} = 1 - L(\tilde{\Lambda}) / CX^*_{\text{ASA}}$ indicates that the optimal solution to the SCP is at most $\tau_{\text{ASA}}$ times shorter than the covering calculated by ASA. It can be observed that the optimal solution is at most 10.13% (±0.19) shorter than the one yielded by ASA and at most 1.24% (±0.08) shorter than the solution yielded by LamSCP. The solutions obtained by LamSCP and the optimal solution to the SCP are therefore very close. Considering, among the 47 instances of the SCP, the best solutions yielded by ASA (8,447 phones) and by LamSCP (7,767 phones), LamSCP is 8.75% better than ASA in terms of covering costs, while the best lower bound for the SCP is 7,715 phones, only 0.67% (respectively, 8.66%) shorter than the best covering by LamSCP (respectively, ASA). The average length of the sentences selected by both algorithms is far below the average length of the sentences in the corpus (96.81 phones). LamSCP tends to choose sentences that are slightly longer than ASA, with an average of 28.97 (±0.12) phones compared with 25.48 (±0.10) phones. Moreover, ASA selects on average 335.73 (±1.69) sentences per solution, about 24.91% more than LamSCP, which selects 268.76 (±1.08) sentences on average. This seems to indicate that LamSCP makes fewer local choices than ASA. This hypothesis can also be validated through the analysis of the variability of the results. The relative variation of the covering costs calculated by LamSCP is 13.01/7,786 = 0.16%, versus 57.40/8,555 = 0.67% for ASA; that is to say, the costs of the solutions yielded by LamSCP are 4 times more stable than those of ASA. Moreover, the solutions are composed of a very stable number of sentences: The associated relative standard deviation is 5.31/335.73 = 1.58% for the 47 instances solved by ASA, and 3.85/268.76 = 1.43% for the instances solved by LamSCP. It turns out that the results of both algorithms are very stable when the order of the sentences is modified in the original corpus. Finally, concerning computation time, the resolution of an instance of the SCP lasts on average 5 hr 41 min 18 sec (±8 min 28 sec) for LamSCP versus 51 sec (±0 sec) for ASA. On average over the 47 instances, LamSCP takes 390 (±9) times as long as ASA.

6.2 Stability of the Algorithms for k-Covering of 2-Phonemes in English

The considered SCP consists of covering at least k times each of the 2,012 2-phonemes of the Gutenberg corpus, with k varying from 1 to 5. The results are summarized in Table 4. For all instances of these SCPs, it has been observed that LamSCP computes shorter coverings than ASA. However, that advantage diminishes as k grows: The cost advantage offered by LamSCP compared with ASA decreases from 9.73% (±0.13) for k = 1 to 4.50% (±0.04) for k = 5. Also, the solutions obtained from ASA and LamSCP seem to get closer to the optimal solution as k rises. The corresponding figures are presented in Table 5: For instance, the optimal solution is at most 0.75% (±0.02) shorter than that obtained by LamSCP for k = 1, and 0.27% (±0.00) for k = 5. Because the search space diminishes as k increases, it may be observed that the algorithms tend to be more stable. This is true both for the size of the solutions and for the number of sentences that define them. Table 6 presents the variation of the size of the solutions as a function of k. This variation is calculated as follows: For a given number k and a given algorithm, the standard deviation of the size of the k-coverings computed by that algorithm is divided by the average size of these coverings.
Thus, it can be noted that LamSCP offers a stability 4 to 8 times greater than that of ASA concerning the size of the coverings. As for the number of sentences, the relative standard deviation similarly decreases from 0.97% to 0.28% when k increases from 1 to 5 for the ASA solutions, and from 0.42% to 0.15% for the LamSCP ones. One can note that increasing the minimal number k of instances of each unit to cover leads both LamSCP and ASA to select longer sentences on average. The average length of the sentences picked for a 1-covering was quite low. As the constraints increase along with k, it seems only natural that the algorithms tend to select longer sentences, as shorter sentences no longer contain enough occurrences of 2-phonemes. Moreover, as described in Section 2, when the minimal number $b_i$ of a unit $u_i$ demanded in the covering exceeds the number of instances of that unit in the initial corpus, all sentences containing instances of $u_i$ in the initial corpus are selected, and $b_i$ is set to $(A 1_{\mathbb{R}^n})_i$. Thus, as k increases, the algorithms tend to select more and more sentences, and their length tends towards the average value over the whole corpus, which is 28.51 phones for Gutenberg.

As for computation time, although it increases as k grows because of the increasing number of constraints to update, the ratio between the computation time of LamSCP and that of ASA tends to diminish, as shown in Table 7 (Time LamSCP / Time ASA for k = 1 to 5: 333 (±7), 280 (±5), 292 (±6), 218 (±9), 194 (±8)). This tendency may find an explanation in the fact that the search space diminishes as k increases, which causes fewer selected sentences to be questioned during the 3-phases iterations of LamSCP. Also, we notice that the average computation time of the two algorithms is greater in Experiment 1, owing to the greater number of sentences in the corpus Le-Monde and the higher density of matrix A. Moreover, the ratio between the computation times of LamSCP and ASA decreases between Experiments 1 and 2, going from 390 (±9) to 333 (±7). Again, this can be explained by the reduction of the search space. For k = 1, the advantage offered by LamSCP on the covering costs compared with ASA is slightly higher than that observed in Experiment 1: 9.73% (±0.13) in this case, versus 9.00% (±0.20) in the previous experiment. This seems to contradict the idea that the performance of LamSCP improves as the search space becomes wider. However, the distributions of the units to cover in Gutenberg and Le-Monde are different, and the variation of the length of the sentences in Le-Monde is very high, which may account for this slight difference in terms of gain. Note that the size of the calculated coverings and the lower bound L(Λ̃) are closer in the experiment carried out on Gutenberg. It is difficult, however, to perform further comparisons with Experiment 1 regarding the "distance" between the costs of the solutions computed by these algorithms and the optimal covering cost, given that the quality of the lower bound cannot be evaluated. The gain in stability offered by LamSCP, both for the costs of the solutions and for the number of sentences, is greater than that noticed during the previous experiment. We think that this increase is due to a more restricted search space and to the lower variability of the length of the sentences in the corpus Gutenberg, which may be observed in Table 1. Table 8 sums up the main results of Experiment 3, where 35 instances of the 1-covering of 3-phonemes from Gutenberg were processed.
According to the L(Λ̃) values, covering all 3-phonemes requires a solution size greater than or equal to 226,635 phones. On average, the solution measures 227,360 (±12) phones using LamSCP, and 236,828 (±94) phones using ASA. The optimal covering is at most 0.35% (±0.00) shorter than the solutions derived by LamSCP and 4.33% (±0.04) shorter than the ones derived by ASA. We can then observe that both algorithms manage to compute solutions with close sizes when scaling up the required attribute set. The solutions are very stable, even more so than in Experiment 2: The relative variation of their size is 0.12% for ASA and 0.01% for LamSCP; the relative variation of their sentence number is 0.17% for ASA and 0.07% for LamSCP. This increase in stability is due to a smaller search space and to the increase in the number of rare units required, which compels the algorithms to select a higher number of unavoidable sentences for all the instances of the SCP. Furthermore, the decrease of the ratio between the computation time of LamSCP and ASA, from 332 for the 1-covering of 2-phonemes to 130 for that of 3-phonemes on Gutenberg, may confirm this idea, which was also put forward in Experiment 2. The sentences selected by both algorithms are longer than those for the 5-covering of 2-phonemes observed in Experiment 2, and their length deviates only slightly from the average sentence length over the whole corpus. Consequently, it turns out that covering longer, and generally rarer, units involves the selection of longer sentences. This is confirmed by the fact that the sentences covering units with a single occurrence in Gutenberg represent more than half the size of the solutions and are composed of 33 phones on average.

In this section, we analyze the results of Experiment 4: the 30 instances of the 1-covering of 3-phonemes from Le-Monde carried out by ASA and LamSCP. The results are given in Table 9. First, although the main features of Le-Monde and Gutenberg are different, notice that the closeness between the size of the coverings calculated by both algorithms and the lower bound L(Λ̃) is comparable to the one observed in Experiment 3. Indeed, the optimal covering size is at most 0.48% (±0.03) and 4.35% (±0.05) shorter than the solution sizes derived by LamSCP and ASA, respectively. Similarly, the size of the solutions and the number of selected sentences are as stable as those observed in the previous experiment: The solution length varies by 0.01% for LamSCP and 0.10% for ASA, and the number of sentences fluctuates by about 0.16% for ASA and 0.10% for LamSCP. As for the comparison with the results of Experiment 1 (1-covering of 2-phonemes from Le-Monde), the main trends are similar to the ones observed for the transition from the 1-covering of 2-phonemes to the 1-covering of 3-phonemes from Gutenberg. However, in Experiment 4, the average selected sentence length has markedly increased, approaching the mean value over the whole corpus: 89.02 (±0.03) for ASA and 92.64 (±0.03) for LamSCP, whereas in Experiment 1 these values are, respectively, 25.48 (±0.10) and 28.97 (±0.12). We had already observed in Experiment 3 that covering longer units increases the length of the selected sentences, but this high amplitude seems to be inherent to the design of the corpus Le-Monde.
Furthermore, notice that the 1-coverings of 2-phonemes from Le-Monde are almost half the size of those from Gutenberg, whereas the 1-coverings of 3-phonemes from Le-Monde are between two and three times longer than those from Gutenberg. This is due to the fact that the 3-phonemes with a single instance in Le-Monde are scattered over long sentences (their mean length is about 134 phones), and these indispensable sentences represent nearly half the size of the solutions. The other sentences of the solutions are around 70 phones long. Lastly, the ratio between the computation times of both algorithms is about 84, which is smaller than the ratios previously observed, but this SCP is the most time consuming: 2 hr 10 min for ASA and more than 7 days for LamSCP.

6.5 k-Covering of 1-POS and 2-POS in French

Table 10 sums up the main results of Experiment 5, dealing with the 1- and 5-coverings of 1-POS and 2-POS. For all these SCPs, LamSCP produces smaller coverings, composed of longer sentences, than the coverings obtained with ASA. When the search space diminishes, the relative "distance" between the sizes of the solutions provided by both algorithms decreases, as does the distance between the lower bound L(Λ̃) and the size of the solutions obtained by ASA. These trends were also observed in the earlier experiments. In particular, for the 1-covering of 1-POS, not only does LamSCP provide 10.06% (±0.00) shorter solutions than ASA, but its solutions are optimal for all 50 instances of this SCP. Indeed, the lower bound value varies from 482.51 to 482.87 occurrences of 1-POS, while all the solutions given by LamSCP are made of exactly 483 occurrences of POS. For the other k-coverings of n-POS, the optimal solution is at worst 0.39% (±0.02) shorter than the covering given by LamSCP for (k, n) = (5, 1), 0.11% (±0.00) for (k, n) = (1, 2), and 0.22% (±0.00) for (k, n) = (5, 2). The solutions obtained by ASA or by LamSCP are very stable. For example, the relative standard deviation of the number of POS occurrences in a covering solution varies from 0.00% to 1.23% for ASA, and from 0.00% to 0.04% for LamSCP. As previously observed for both algorithms, their computation times grow when the number of required covering features increases. However, the ratio between the computation times of LamSCP and ASA does not behave as in Experiment 2 (see Table 7): For the k-covering of 1-POS, this ratio increases from 290 (±11) to 657 (±43) when k goes from 1 to 5, and for the k-covering of 2-POS, it increases from 75 (±5) to 108 (±6).

7. Evaluation on a Text-to-Speech Synthesis System

In the previous sections, different algorithms dealing with corpus reduction were introduced and studied. The proposed experiments mainly evaluate the effects of these algorithms in terms of corpus reduction, but not according to a practical task. This section proposes an experiment to assess the impact of the corpus reduction on a unit selection speech synthesis system. As explained in Section 1, corpus reduction for a TTS system is a trade-off between minimizing the recording and post-processing time needed to build the speech corpus and keeping the highest phonological richness of the corpus to ensure the quality of the synthetic speech. The goal of this experiment is to measure this trade-off by evaluating the quality of the same TTS system fed with different speech corpora uttered by the same speaker. Note that the intrinsic quality of this system is not the purpose here.
First, a brief presentation of a state-of-the-art unit selection–based TTS system is proposed in Section 7.1. The linguistic parameters used by the TTS system are detailed because they are linked to the required features in the reduction stage. In Section 7.2, the corpora used in the experiment are introduced. The attributes to cover and the evaluation methodology are described in Section 7.3; the results are given and discussed in Section 7.4.

For this experiment, a state-of-the-art unit selection–based TTS system is used to produce an acoustic signal from an input text. A linguistic front end processes the text to extract features taken into account by the algorithm that selects segments in a speech corpus (see Boëffard and d'Alessandro 2012). The input text is converted into a sequence of phonemes using a French phonetizer proposed by Béchet (2001). Non-speech sound labels can be added to this sequence (silences, breaths, para-verbal events, etc.). A vector of features is defined as follows:

1. The phone or non-speech sound label
2. Is the described segment a non-speech sound?
3. Is the phone in the onset of the syllable?
4. Is the phone in the coda of the syllable?
5. Is the phone in the last syllable of its syntagm?
6. Is the current syllable at the end of a word?
7. Is the current syllable at the beginning of a word?

Extraction of features is done using the ROOTS toolkit described in Boëffard et al. (2012). The unit selection process aims to associate a signal segment from the speech corpus with each vector of features computed from the input text. This is performed in two steps. In the first step, for each unit, a set of candidates that match the same features is extracted from the speech corpus. In the second step, given all candidates, the best path is searched using an optimization algorithm so as to produce the sequence of speech units. The algorithm tries to minimize three sub-costs commonly used in unit selection–based TTS systems: spectral discrepancies based on an MFCC distance, amplitude distances, and f0 distances.

Two corpora are used in this experiment. The first one, Learning corpus, is an annotated acoustic corpus used to provide speech data for the TTS engine. It is an expressive corpus in French, spoken by a male speaker reading Albertine disparue, an excerpt from À la recherche du temps perdu by Marcel Proust. The corpus is composed of 3,138 sentences automatically annotated using a process described in Boëffard et al. (2012). The overall length of the speech corpus is 9 hr 57 min. When creating a voice for a unit selection–based TTS system, long sentences are generally removed or split into syntagm groups in order to help the speaker. A second corpus, named Test corpus, is a text corpus that is synthesized and used in the listening experiment. It is composed of 30 short sentences randomly extracted from a phonetically balanced corpus in French, proposed by Combescure (1981). The use of a corpus with a different linguistic style minimizes the bias introduced by the learning corpus. Statistics are given in Table 11.

For this experiment, two reduced corpora are evaluated. They are built by reducing the full learning corpus using the two different algorithms presented in the previous sections: ASA and LamSCP. As described in Section 7.1, the unit selection process of the speech synthesis system is based on a set of phonological attributes. It seems natural to try to cover features that reflect the variability of these attributes.
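As an illustration, the seven-dimensional feature vector above could be represented as follows. This is only a sketch with hypothetical field names; it is not the representation used by the ROOTS toolkit or by the authors' system.

```c
#include <stdbool.h>

/* A minimal sketch of the per-segment feature vector of Section 7.1.
 * Field names are illustrative assumptions, not the authors' format. */
typedef struct {
    const char *label;                 /* 1. phone or non-speech sound label      */
    bool is_non_speech;                /* 2. is the segment a non-speech sound?   */
    bool in_syllable_onset;            /* 3. phone in the onset of the syllable?  */
    bool in_syllable_coda;             /* 4. phone in the coda of the syllable?   */
    bool in_last_syllable_of_syntagm;  /* 5. in the last syllable of its syntagm? */
    bool syllable_word_final;          /* 6. syllable at the end of a word?       */
    bool syllable_word_initial;        /* 7. syllable at the beginning of a word? */
} unit_features;
```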
For this experiment, algorithms must cover all the units at least once, where a unit is described by the following:

- Its label, that is, one of the 35 phonemes or a non-speech sound label
- The structure of the syllable that contains the phoneme, if it is a vowel
- The position of the associated syllable in the word (start, middle, or end)
- A Boolean indicating if the associated syllable is at the end of a syntagm

The feature extraction is performed by the same set of tools used by the speech synthesis engine. Given this set of features, the learning corpus contains 1,497 classes of units. The cost function to be minimized by the reduction algorithms is the total length, in phones, of the set of selected syntagms. Two speech synthesis systems are defined, extracting the speech units to concatenate from the coverings provided by LamSCP and ASA. Two other systems are added as baselines. First, a system named Full, built with the whole learning corpus, is used as an upper bound. Second, a system named Random uses a random reduction of the Learning corpus as a pool corpus of speech units. This reduction is done by randomly selecting sentences from the whole learning corpus until the size of the covering obtained by LamSCP is reached. Random is used as a lower bound.

Whereas the optimization efficiency is measured by statistics on the reduced corpora, the quality of the synthesized speech signals is evaluated by a listening test. The protocol is based on a MUSHRA test, presented in ITU-R (2003), where, for every sentence of Test corpus, the signals synthesized by the four systems are presented to each tester in a random order. If a system is not able to produce a signal for a requested sentence (because of a missing 2-phone in the pool corpus), an empty signal is presented. Ten native French testers (four naive listeners and six experts) are asked to evaluate the overall quality of the stimuli and to give a mark from 0 to 100 (in steps of 5 points).

The Learning corpus, composed of 19,587 syntagms, is first reduced using ASA and LamSCP. Statistics of the resulting solutions are summarized in Table 12. The covering of the 1,497 constraints divides the input corpus size by almost 10 and reduces the 10 hours of speech to around 1 hr 20 min. As for the previous experiments, even though different kinds of features are mixed (phonemes, syllable structures, position in a word or a syntagm), the ASA algorithm produces a solution close to the optimal one. However, LamSCP is again slightly better in terms of covering size. For the measure of the acoustic impact of corpus reduction, the listening test results are presented in Table 13 for the average marks and in Table 14 for the average ranks. Note that without a natural speech reference during the test, the marks should not be seen as an absolute score. Even if the LamSCP corpus is slightly smaller than the one from ASA, the acoustic quality of both systems is comparable according to the testers (with a slight advantage for the LamSCP corpus). In comparison with the baseline that uses the whole learning corpus, the acoustic degradation is significant. This illustrates well the trade-off between corpus size and speech quality: For a 90% corpus size reduction, the acoustic quality drops by 10 points. Further research should focus on the set of attributes to cover and their required numbers of occurrences in order to improve this compromise.
As expected, the baseline built from random sentences is preferred significantly less than the other systems because of the lack of relatively rare acoustic units.

8 conclusion :
This article discussed the building of linguistically rich corpora under an objective of parsimony. This task, a generalization of the SCP, turns out to be an NP-hard problem that cannot be polynomially approximated. We studied the behavior of several algorithms in the particular domain of NLP, where the considered events follow a heavy-tailed type of distribution. The proposed algorithms have been compared through three kinds of experiments: The first one is the covering of 2- and 3-phonemes from two text corpora, one in French, the other in English; the second one consists of the covering of part-of-speech labels from a corpus in French; the third one evaluates the impact of both algorithms on the acoustic quality of a corpus-based TTS system. The first algorithm, ASA, is composed of an agglomerative greedy strategy followed by a spitting greedy stage. The second one, LamSCP, is based on Lagrangian relaxation principles combined with greedy strategies. LamSCP is our adaptation of an algorithm proposed in Caprara, Fischetti, and Toth (1999) to multi-representation constraints. The comparison of SCP solutions mainly concerns their size, their maximal distance to the optimal covering, and their robustness to perturbations of the initial corpus ordering. Although ASA is much faster than LamSCP, it does not by itself allow us to assess the quality of its solutions in terms of size. The main assets of LamSCP are the calculation of a lower bound on the optimal covering size and shorter solutions than the ones obtained by ASA. Indeed, in our experiments on phonological coverings, the optimal solution is at most 1.24% (respectively, 10.13%) smaller than the solutions derived by LamSCP (respectively, ASA). As for the coverings of 1-POS, LamSCP provides the optimal solution in a case of a mono-representation constraint, whereas the ASA solution is 10.17% greater than the optimal one. These relative gaps between the lower bounds and the solution sizes of both algorithms generally decrease when the size of the search space decreases. Thanks to the lower bound derived by LamSCP, we empirically show that it is possible to get almost optimal solutions in a linguistic framework following a Zipf's law distribution, despite the theoretical complexity of the multi-represented SCP. Concerning the last experiment, in the TTS framework, even if LamSCP provides a smaller corpus, the subjective test shows no significant difference between the TTS systems based on the LamSCP and ASA corpora. Therefore, we think that ASA remains the most adequate strategy, in terms of performance, ease of development, and computation time, to solve the SCP in the NLP field. However, it would be interesting to test a parallelized version of the heuristic phase, which calls a large number of greedy sub-procedures. Our future prospects for this work are in automatic language processing and speech synthesis. First, in the framework of the Phorevox project supported by the French National Research Agency, we are considering the automatic design of exercise contents for language learning by the selection of texts covering some phonological or linguistic difficulties. Second, this work is a preliminary step to building a phonetically rich script before its recording in order to produce high quality speech synthesis.
The covering choices, such as the attributes to cover, the number of required occurrences, or the "sentence" length (utterances, syntagms, etc.), need to be validated. Moreover, in this article, we have observed the great impact of the distribution of rare units in the corpus to reduce, and we believe it will be interesting to adapt the "sentence" granularity according to this distribution.

abstract :
Linguistic corpus design is a critical concern for building rich annotated corpora useful in different domains of applications. For example, speech technologies such as ASR (Automatic Speech Recognition) or TTS (Text-to-Speech) need a huge amount of speech data to train data-driven models or to produce synthetic speech. Collecting data is always related to costs (recording speech, verifying annotations, etc.), and as a rule of thumb, the more data you gather, the more costly your application will be. Within this context, we present in this article solutions to reduce the amount of linguistic text content while maintaining a sufficient level of linguistic richness required by a model or an application. This problem can be formalized as a Set Covering Problem (SCP), and we evaluate two algorithmic heuristics applied to design large text corpora in English and French for covering phonological information or POS labels. The first considered algorithm is a standard greedy solution with an agglomerative/spitting strategy, and we propose a second algorithm based on Lagrangian relaxation. The latter approach provides a lower bound on the cost of each covering solution. This lower bound can be used as a metric to evaluate the quality of a reduced corpus whatever the algorithm applied. Experiments show that a suboptimal algorithm like a greedy algorithm achieves good results; the cost of its solutions is not so far from the lower bound (about 4.35% for 3-phoneme coverings). Usually, constraints in the SCP are binary; we propose here a generalization where the constraints on each covering feature can be multi-valued.

authors :
Nelly Barbot, Olivier Boëffard, Jonathan Chevelu, and Arnaud Delhay

references :
Alon, Noga, Dana Moshkovitz, and Shmuel Safra. 2006. Algorithmic construction of sets for k-restrictions. ACM Transactions on Algorithms (TALG), 2(2):153–177.
Barbot, Nelly, Olivier Boëffard, and Arnaud Delhay. 2012. Comparing performance of different set-covering strategies for linguistic content optimization in speech corpora. In Proceedings of the International Conference on Language Resources and Evaluation (LREC).
Béchet, Frédéric. 2001. Liaphon: un système complet de phonétisation de textes. Traitement Automatique des Langues, 42(1):47–67.
Boëffard, Olivier, Laure Charonnat, Sébastien Le Maguer, Damien Lolive, and Gaëlle Vidal. 2012. Towards fully automatic annotation of audiobooks for TTS. In Proceedings of the International Conference on Language Resources and Evaluation (LREC).
Bunnell, H. Timothy. 2010. Crafting small databases for unit selection TTS: Effects on intelligibility. In Proceedings of the ISCA Tutorial and Research Workshop on Speech Synthesis (SSW7), pages 40–44, Kyoto.
Timothy.""], ""title"": ""Crafting small databases for unit selection TTS: Effects on intelligibility"", ""venue"": ""Proceedings of the ISCA Tutorial and Research Workshop on Speech Synthesis (SSW7), pages 40\u201344, Kyoto."", ""year"": 2010}, {""authors"": [""Cadic"", ""Didier"", ""C\u00e9dric Boidin"", ""Christophe d\u2019Alessandro""], ""title"": ""Towards optimal TTS corpora"", ""venue"": ""In Proceedings of the International Conference on Language Resources and Evaluation (LREC),"", ""year"": 2010}, {""authors"": [""Candito"", ""Marie"", ""Enrique Henestroza Anguiano"", ""Djam\u00e9 Seddah.""], ""title"": ""A word clustering approach to domain adaptation: Effective parsing of biomedical texts"", ""venue"": ""Proceedings of the 12th International"", ""year"": 2011}, {""authors"": [""Caprara"", ""Alberto"", ""Matteo Fischetti"", ""Paolo Toth.""], ""title"": ""A heuristic method for the set covering problem"", ""venue"": ""Operations Research, 47(5):730\u2013743."", ""year"": 1999}, {""authors"": [""Caprara"", ""Alberto"", ""Paolo Toth"", ""Matteo Fischetti.""], ""title"": ""Algorithms for the set covering problem"", ""venue"": ""Annals of Operations Research, 98(1-4):353\u2013371."", ""year"": 2000}, {""authors"": [""Ceria"", ""Sebasti\u00e1n"", ""Paolo Nobili"", ""Antonio Sassano.""], ""title"": ""A Lagrangian-based heuristic for large-scale set covering problems"", ""venue"": ""Mathematical Programming, 81(2):215\u2013228."", ""year"": 1998}, {""authors"": [""Chevelu"", ""Jonathan"", ""Nelly Barbot"", ""Olivier Bo\u00ebffard"", ""Arnaud Delhay.""], ""title"": ""Lagrangian relaxation for optimal corpus design"", ""venue"": ""Proceedings of the ISCA Tutorial and Research Workshop on Speech Synthesis"", ""year"": 2007}, {""authors"": [""Chevelu"", ""Jonathan"", ""Nelly Barbot"", ""Olivier Bo\u00ebffard"", ""Arnaud Delhay.""], ""title"": ""Comparing set-covering strategies for optimal corpus design"", ""venue"": ""Proceedings of the International Conference on Language"", ""year"": 2008}, {""authors"": [""Combescure"", ""Pierre.""], ""title"": ""20 listes de 10 phrases phon\u00e9tiquement \u00e9quilibr\u00e9es"", ""venue"": ""Revue d\u2019Acoustique, 56:34\u201338."", ""year"": 1981}, {""authors"": [""Fisher"", ""Marshall L.""], ""title"": ""The Lagrangian relaxation method for solving integer programming problems"", ""venue"": ""Management Science, 27(1):1\u201318."", ""year"": 1981}, {""authors"": [""Fran\u00e7ois"", ""H\u00e9l\u00e8ne"", ""Olivier Bo\u00ebffard.""], ""title"": ""Design of an optimal continuous speech database for text-to-speech synthesis considered as a set covering problem"", ""venue"": ""Proceedings of the European Conference on"", ""year"": 2001}, {""authors"": [""Fran\u00e7ois"", ""H\u00e9l\u00e8ne"", ""Olivier Bo\u00ebffard""], ""title"": ""The greedy algorithm and its application"", ""year"": 2002}, {""authors"": [""Gauvain"", ""Jean-Luc"", ""Lori Lamel"", ""Maxine Esk\u00e9nazi.""], ""title"": ""Design considerations and text selection for Bref, a large French readspeech corpus"", ""venue"": ""Proceedings of the International Conference of Spoken Language"", ""year"": 1990}, {""authors"": [""Gotab"", ""Pierre"", ""Fr\u00e9d\u00e9ric B\u00e9chet"", ""G\u00e9raldine Damnati.""], ""title"": ""Active learning for rule-based and corpus-based spoken language understanding models"", ""venue"": ""Proceedings of the IEEE workshop on"", ""year"": 2009}, {""authors"": [""Hart"", ""Michael.""], ""title"": ""Project gutenberg"", ""venue"": ""http://www.gutenberg.org/ (Last 
Karp, Richard M. 1972. Reducibility among combinatorial problems. In Complexity of Computer Computations, The IBM Research Symposia Series, Springer, pages 85–103.
Kawai, Hisashi, Seiichi Yamamoto, Norio Higuchi, and Tohru Shimizu. 2000. A design method of speech corpus for text-to-speech synthesis taking account of prosody. In Proceedings of the International Conference on Spoken Language Processing (ICSLP).
Kominek, John, and Alan W. Black. 2003. The CMU Arctic speech databases for speech synthesis research. Technical Report CMU-LTI-03-177, Carnegie Mellon University Language Technologies Institute.
Krstulović, Sacha, Frédéric Bimbot, Olivier Boëffard, Delphine Charlet, Dominique Fohr, and Odile Mella. 2006. Optimizing the coverage of a speech database through a selection of representative speaker recordings.
Krul, Aleksandra, Géraldine Damnati, François Yvon, Cédric Boidin, and Thierry Moudenc. 2007. Adaptive database reduction for domain specific speech synthesis.
Krul, Aleksandra, Géraldine Damnati, François Yvon, and Thierry Moudenc. 2006. Corpus design based on the Kullback-Leibler divergence for Text-To-Speech synthesis application.
Neubig, Graham, and Shinsuke Mori. 2010. Word-based partial annotation for efficient corpus construction. In Proceedings of the International Conference on Language Resources and Evaluation (LREC).
Raz, Ran, and Shmuel Safra. 1997. A sub-constant error-probability low-degree test, and a sub-constant error-probability PCP characterization of NP. In Proceedings of the Twenty-Ninth Annual ACM Symposium on Theory of Computing (STOC).
Rojc, Matej, and Zdravko Kačič. 2000. Design of optimal Slovenian speech corpus for use in the concatenative speech synthesis system. In Proceedings of the International Conference on Language Resources and Evaluation (LREC).
Ungar.""], ""title"": ""Bayesian example selection using BaBiES"", ""venue"": ""Technical Report MS-CIS-04-08, Department of Computer and Information Science, University of"", ""year"": 2004}, {""authors"": [""Settles"", ""Burr.""], ""title"": ""Active learning literature survey"", ""venue"": ""Technical Report 1648, Department of Computer Sciences, University of Wisconsin, Madison."", ""year"": 2010}, {""authors"": [""Synapse.""], ""title"": ""Documentation technique: Composant d\u2019\u00e9tiquetage et lemmatisation"", ""venue"": ""http://www.synapse-fr.com/."", ""year"": 2011}, {""authors"": [""Tian"", ""Jilei"", ""Jani Nurminen.""], ""title"": ""Optimization of text database using hierachical clustering"", ""venue"": ""Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP),"", ""year"": 2009}, {""authors"": [""Tian"", ""Jilei"", ""Jani Nurminen"", ""Imre Kiss.""], ""title"": ""Optimal subset selection from text databases"", ""venue"": ""Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal"", ""year"": 2005}, {""authors"": [""Tomanek"", ""Katrin"", ""Fredrik Olsson.""], ""title"": ""A Web survey on the use of active learning to support annotation of text data"", ""venue"": ""Proceedings of the NAACL HLT 2009 Workshop on Active Learning for Natural"", ""year"": 2009}, {""authors"": [""Van Santen"", ""Jan P.H."", ""Adam L. Buchsbaum.""], ""title"": ""Methods for optimal text selection"", ""venue"": ""Proceedings of the European"", ""year"": 1997}, {""authors"": [""Rhodes. Zhang"", ""Jin-Song"", ""Satoshi Nakamura""], ""title"": ""An improved greedy search"", ""venue"": ""Conference on Speech Communication and Technology (Eurospeech),"", ""year"": 2008}]",,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,"1 introduction :In automatic speech and language processing, many technologies make extensive use of written or read text sets. These linguistic corpora are a necessity to train models or to extract rules, and the quality of the results strongly depends on a corpus’ content. Often, the reference corpus should provide a maximum diversity of content. For example, in Tian, Nurminen, and Kiss (2005), and Tian and Nurminen (2009), it turns out that maximizing the text coverage of the learning corpus improves an automatic syllabification based on a neural network. Similarly, a high quality speech synthesis system based on the selection of speech units requires a rich corpus in terms of diphones, diphones in context, triphones, and prosodic markers. In particular, Bunnell (2010) shows the importance of a good coverage of diphones and triphones for the intelligibility of a voice produced by a unit selection speech synthesis system. To cover the best attributes needed for a task, several strategies are then possible. A first method—very simple—is to collect text randomly, but it soon becomes expensive because of the natural distribution of linguistic events following the Zipf’s law. Very few events are extremely frequent and many events are very rare. This problem is often made difficult by the fact that many technologies require several variants of the same event (as in a Text-to-Speech [TTS] system using several acoustical versions of the same phonological unit). Usually, a large volume of data needs to be collected. However, and depending on the applications, building such corpora is often achieved under a constraint of parsimony. 
Kawai et al. (2000) propose a pair exchange mechanism that Rojc and Kačič (2000) apply after a first reverse greedy algorithm, also called spitting greedy, which deletes the useless sentences. In Cadic, Boidin, and d'Alessandro (2010), the covering of "sandwich" units (defined to be more adapted to corpus-based speech synthesis) is carried out by generating new sentences in a semi-automatic way. Candidates are generated using finite state transducers. The sentences are ordered according to a greedy criterion (their richness in sandwiches) and presented to a human evaluator. This collection of artificial and rich sentences enables an effective reduction of the size of the covering but requires expensive human intervention to obtain semantically correct sentences that will therefore be easier to record. The results of these previously cited studies are difficult to compare because of the different initial corpora, covering constraints (partial or full covering), and evaluation criteria (the number of gathered sentences, the Kullback divergence, etc.). In Zhang and Nakamura (2008), a priority policy for the rare units is added to an agglomerative greedy algorithm in order to get a covering of triphoneme classes from a large text corpus in the Chinese language. The results show that this priority policy, driven by the score function and the phonetic content of the sentences, reduces the covering size compared with a standard agglomerative greedy algorithm. Similarly, in François and Boëffard (2002), several combinations of greedy algorithms (agglomeration, spitting, pair exchange, or priority to rare units) were applied to the construction of a corpus for speech synthesis in French containing at least three representatives of the most frequent diphones. Based on this work, the best strategy would be the application of an agglomerative greedy algorithm followed by a spitting greedy algorithm. During the agglomeration phase, the score of a sentence corresponds to the number of its unit instances that remain to be covered, normalized by its length. During the spitting phase, at each iteration, the longest redundant sentence is removed from the covering. This algorithm is called the Agglomeration and Spitting Algorithm (ASA).
As an alternative to a greedy algorithm, which is sub-optimal, solving the SCP using Lagrangian relaxation principles can provide an exact solution for problems of reasonable size. However, for speech processing, the SCP has several millions of sentences with tens of thousands of covering features. Considering these practical constraints, Chevelu et al. (2007) adapted a Lagrangian relaxation based algorithm proposed by Caprara, Fischetti, and Toth (1999). In the context of the Italian railways, Caprara, Fischetti, and Toth proposed heuristics to solve scheduling problems and won a competition, called Faster, organized by the Italian Operational Research Society in 1994, ahead of other algorithms based on Lagrangian relaxation heuristics, like Ceria, Nobili, and Sassano (1998). In Chevelu et al. (2007, 2008), the algorithm takes into account the constraints of multi-representation: A minimal number of representatives of the same unit may be required. The proposed algorithm, called LamSCP (Lagrangian-based Algorithm for Multi-represented SCP), is applied to extract coverings of diphonemes with a mono- or a 5-representation and coverings of triphonemes with mono-representation constraints. These results are compared with the greedy strategy ASA and are about 5% to 10% better. Besides, LamSCP provides a lower bound for the cost of the optimal covering and thus allows for evaluating the quality of the results. In Barbot, Boëffard, and Delhay (2012), the phonological content of diphoneme coverings is studied with respect to many parameters. These coverings are obtained by different algorithms (LamSCP, ASA, a greedy algorithm based on the Kullback divergence) and some of the coverings are randomly completed to reach a given size (from 20,000 to 30,000 phones). It turns out that the coverings obtained using LamSCP and ASA provide a good representation of short units, and that the representation of long units mainly depends on the length of the corpus. In this article, we present in more detail the LamSCP algorithm and its score functions and heuristics that take into account multi-representation constraints. We deepen the study of the performance of LamSCP for the construction of a phonologically rich corpus according to the size of the search space. We evaluate the LamSCP and ASA algorithms on a corpus of sentences in English for a covering of multi-represented diphones, where the minimal number of required unit representatives varies from one to five. We also compare them in the case of very constrained triphoneme coverings in English and French, which represent about 12 times more units to cover. Additionally, both algorithms are tested to provide multi-represented coverings of POS tags in order to assess their ability to deal with different kinds of linguistic data. A particular effort has been made on methodology to obtain comparable measures, to study the stability of both algorithms, and to establish confidence intervals for each solution. This article is organized as follows. In Section 2, the SCP framework and the associated notations are introduced. The ASA algorithm is described in Section 3 and LamSCP is detailed in Section 4. The experimental methodology is presented in Section 5 and the results are discussed in Section 6. Before concluding in Section 8, we present in Section 7 experiments in the context of TTS, where we evaluate the benefits of a reduction on that task.
2 the set-covering problem :
Before describing the SCP-solving algorithms proposed in this article, we introduce in this section some notations and the Lagrangian properties used by LamSCP. Let us consider a corpus A composed of n sentences s1, . . . , sn. According to the target applications, these sentences are annotated with respect to phonological, acoustic, prosodic attributes, and so forth. Each sentence is then associated with a family of units of different types. The set of units present in A is denoted U = {u1, . . . , um}, and A can be represented by a matrix A = (aij), where aij is the number of instances of unit ui in the sentence sj. Therefore, the jth column of A corresponds to sentence sj in A. To simplify the writing, we define the sets M = {1, . . . , m} and N = {1, . . . , n}. For a given vector of integers B = (b1, . . . , bm)^T, a reduction X of A, also called a covering of U, is defined as a subset of A that contains, for every i ∈ M, at least bi instances of ui. It can be described by a vector X = (x1, . . . , xn)^T where xj = 1 if sj belongs to X and xj = 0 otherwise. In other words, a covering is a solution X ∈ {0, 1}^n of the following system:

$$\forall i \in M, \quad \sum_{j \in N} a_{ij} x_j \ge b_i \qquad (1)$$

that is, AX ≥ B, where B is called the constraint vector. Our aim is to optimize a covering according to a cost function minimization criterion. The covering cost is given by summing the costs of the sentences that compose the covering. The optimization problem can be formulated as the following SCP:

$$X^* = \underset{X \in \{0,1\}^n,\; AX \ge B}{\arg\min}\; CX \qquad (2)$$

where C = (c1, . . . , cn) is the cost vector and cj the cost of the sentence sj. Because of the objective to minimize the total length of the covering, we have chosen to define the cost of a sentence as one of its length features. According to the considered application, the sentence cost can be defined as its number of phones (one of our objectives is to design a phonetically rich script with a minimal speech recording duration), or its number of words, part-of-speech tags, breath groups, and so on. In Caprara, Fischetti, and Toth (1999), Caprara, Toth, and Fischetti (2000), and Ceria, Nobili, and Sassano (1998), the studied crew scheduling problem is a particular case of Equation (2) where A is a binary matrix and $B = 1_{\mathbb{R}^m}$ (i.e., with mono-representation constraints). In order to ensure that Equation (1) admits a solution, we assume that, for each i ∈ M, the minimal number bi of ui instances required in the covering is not greater than the number $(A 1_{\mathbb{R}^n})_i$ of ui instances in A, that is, $A 1_{\mathbb{R}^n} \ge B$. Under this assumption, A is the maximal size solution of Equation (1), represented by $X = 1_{\mathbb{R}^n}$. In the case where bi is greater than the number of ui instances in A, bi is set to $(A 1_{\mathbb{R}^n})_i$. To drive the SCP algorithms during the sentence selection phase, the covering capacity µj of sentence sj is defined as the number of its unit instances required in the covering in view of the constraint vector:

$$\mu_j = \sum_{i \in M} \min\{a_{ij}, b_i\} \qquad (3)$$

Let us notice that µj does not consider the excess unit instances: For example, if sj contains aij = 10 instances of ui and at least bi = 3 instances of ui are required, the contribution of ui to the derivation of µj only takes three instances of ui into account.
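As a concrete illustration of Equation (3), the sketch below computes µj from one sparse column of A. The cell layout and the function names are illustrative assumptions; they are not the authors' implementation, although that implementation is also in C (see Section 5).

```c
/* Sketch of Equation (3): covering capacity of sentence s_j,
 * mu_j = sum_i min(a_ij, b_i), over a sparse column of A
 * (only non-zero cells are stored). */
typedef struct {
    int unit;   /* index i of a unit u_i present in the sentence */
    int count;  /* a_ij: number of instances of u_i in s_j       */
} cell;

static long covering_capacity(const cell *col, int nnz, const int *b)
{
    long mu = 0;
    for (int k = 0; k < nnz; k++) {
        int i = col[k].unit;
        /* excess instances beyond b_i do not contribute */
        mu += col[k].count < b[i] ? col[k].count : b[i];
    }
    return mu;
}
```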
3 greedy algorithm asa :
In this section, the two main steps that compose the algorithm ASA are briefly described. First, an agglomerative greedy procedure is applied to A so as to derive a covering. Next, a spitting greedy procedure reduces this covering in order to approach the optimal solution of Equation (2). The greedy strategy builds a sub-optimal solution to the SCP of Equation (2) in an iterative way. At each iteration, the lowest cost sentence is chosen from A. If several sentences correspond to the lowest cost, the one coming first (i.e., the one with the lowest index) is chosen. Initially, the set of selected sentences X is empty, the matrix Ã associated with the candidate sentences is assigned to A, the current covering capacity of sj is given by µ̃j = µj, and the current constraint vector is B̃ = B. The cost of sentence sj is defined by

$$\sigma_j = \begin{cases} c_j / \tilde{\mu}_j & \text{if } \tilde{\mu}_j \neq 0 \\ \infty & \text{otherwise} \end{cases} \qquad (4)$$

Indeed, if µ̃j = 0, it turns out that sj does not cover any unit missing in the solution X under construction, and its infinite cost σj prevents its selection. At each iteration, the selected sentence s is added to X. Taking into account the content of s, B̃ is updated to $\max\{\tilde{B} - \tilde{A}\Delta, 0_{\mathbb{R}^m}\}$, where the jth entry of Δ equals 1 if sj = s and 0 otherwise. Next, the column associated with s in Ã is set to $0_{\mathbb{R}^m}$. For each sentence sj with a non-zero µ̃j feature, µ̃j is then updated using Ã and B̃ in Equation (3). At last, the agglomerative greedy algorithm is stopped as soon as all the constraints are satisfied, that is, $\tilde{B} = 0_{\mathbb{R}^m}$. The spitting greedy strategy also consists in iteratively building a sub-optimal solution Y to Equation (2), by reducing the size of a covering. The initial covering Y is set to the solution X derived by the agglomerative phase described earlier. At each iteration, the set of the redundant sentences of Y is calculated and the costliest one (according to the cost function C) is removed from Y. An element s of Y is said to be redundant if, for each ui ∈ U, its number of instances in Y, denoted mi(Y), and in Y \ {s}, denoted mi(Y \ {s}), satisfy min{mi(Y), bi} = min{mi(Y \ {s}), bi}. In other words, s is a redundant element of the covering Y if Y \ {s} is also a covering solution of Equation (1). The spitting greedy algorithm stops when the redundant sentence set is empty.
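To make the selection rule of the agglomerative phase concrete, here is a minimal sketch of one selection step under Equation (4). The array names and the surrounding driver loop (the updates of B̃ and µ̃j) are assumptions, not the authors' code.

```c
#include <math.h>

/* Sketch of one agglomerative iteration of ASA (Equation (4)):
 * among unselected sentences, pick the one minimizing
 * sigma_j = c_j / mu~_j, with sigma_j = +inf when mu~_j = 0;
 * ties go to the lowest index, as in the text. */
static int select_sentence(int n, const double *c, const long *mu_tilde,
                           const char *selected /* 0/1 indicator */)
{
    int best = -1;
    double best_score = INFINITY;
    for (int j = 0; j < n; j++) {
        if (selected[j] || mu_tilde[j] == 0)
            continue;                 /* infinite cost: never selected */
        double score = c[j] / (double)mu_tilde[j];
        if (score < best_score) {     /* strict '<' keeps the lowest index on ties */
            best_score = score;
            best = j;
        }
    }
    return best;                      /* -1 when no sentence can still help */
}
```

After each selection, B̃ and the µ̃j values are updated as described above, and the spitting phase then removes redundant sentences in decreasing cost order.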
4 lagrangian relaxation–based algorithm :
This section describes the main phases of the algorithm called LamSCP. This algorithm takes advantage of the Lagrangian relaxation properties reviewed herein in order to approach the optimal solution of Equation (2) as closely as possible. Strongly inspired by Caprara, Fischetti, and Toth (1999), but generalized to the multi-representation problem, this algorithm provides a lower bound on the optimal solution cost. Having such information is very useful for assessing the achievements of the SCP algorithms. Let us briefly recall the main principles of Lagrangian relaxation on which LamSCP is based to solve Equation (2) (see Fisher [1981] for more details on Lagrangian relaxation). First, the Lagrangian function associated with Equation (2) is defined by

$$L(X, \Lambda) = CX + \Lambda^T (B - AX) = \Lambda^T B + C(\Lambda) X \qquad (5)$$

where $\Lambda \in (\mathbb{R}^+)^m$, $X \in \{0,1\}^n$, and $C(\Lambda) = C - \Lambda^T A$. The coordinates of Λ = (λ1, . . . , λm)^T are called Lagrangian multipliers and can be interpreted as a weighting of the constraints (1). The jth entry of C(Λ), called the Lagrangian cost cj(Λ) of sentence sj, takes into account its cost cj and the adequacy of its composition to address Equation (2). For every covering X and every $\Lambda \in (\mathbb{R}^+)^m$, the Lagrangian function satisfies L(X, Λ) ≤ CX. Thus, the dual Lagrangian function, defined by

$$L(\Lambda) = \min_{X \in \{0,1\}^n} L(X, \Lambda) \qquad (6)$$

presents the following fundamental property: For every $\Lambda \in \mathbb{R}^m_+$ and every covering X, we have L(Λ) ≤ CX. Hence, L(Λ) is a lower bound of the minimal covering cost CX*, but does not necessarily correspond to the cost of a covering. In order to compute L(Λ), an acceptable solution for the vector X minimizing L(X, Λ) is $X(\Lambda) = (x_1(\Lambda), \ldots, x_n(\Lambda))^T$ where

$$x_j(\Lambda) = \begin{cases} 1 & \text{if } c_j(\Lambda) < 0 \\ 0 & \text{if } c_j(\Lambda) > 0 \\ \in \{0,1\} & \text{otherwise} \end{cases} \qquad (7)$$

Additionally, the dual Lagrangian function and the Lagrangian costs inform about the potential usefulness of sentences in the optimal covering. More precisely, for a given Λ and a known upper bound UB of the minimal covering cost, the gap g(Λ) = UB − L(Λ) measures the relaxation quality. If cj(Λ) is strictly greater than g(Λ), we can check that any covering containing sj has a cost value strictly greater than UB. Hence, sentence sj is not selected and xj can be fixed at zero. Similarly, if cj(Λ) < −g(Λ), any covering with a cost lower than UB contains sj and one can fix xj to 1. Therefore, an optimal covering is made up of sentences with a low Lagrangian cost, as done in Caprara, Toth, and Fischetti (2000) and Ceria, Nobili, and Sassano (1998), and the higher the relaxation quality (i.e., the lower g(Λ)), the cheaper the covering will be. The resolution of the dual problem of Equation (2) consists in finding $\Lambda^* \in \mathbb{R}^m_+$ that maximizes the lower bound L(Λ), that is,

$$\Lambda^* = \underset{\Lambda \in \mathbb{R}^m_+}{\arg\max}\; L(\Lambda) \qquad (8)$$

Because this real-variable function L is concave and piecewise affine, a well-known approach for finding a near-optimal multiplier vector is the subgradient algorithm. The LamSCP is an iterative algorithm, composed of several procedures that aim to either improve the current best solution or reduce the combinatorial issue related to the considered problem. In order to derive a good solution, the algorithm calls on a great number of greedy procedures to solve sub-problems with the help of the Lagrangian costs. As for the combinatorial reduction, the most frequently used heuristic consists of downsizing the problem by mainly considering the sentences with low Lagrangian costs. The algorithm is organized around a main procedure called 3-phases. This procedure can single-handedly solve a multi-represented SCP. As its name suggests, the functioning of 3-phases consists in iterating a sequence of the three following subprocedures, as shown in Figure 1:

- The subgradient phase calculates an estimation Λ̃ of Λ* that maximizes the dual Lagrangian function. This procedure requires an upper bound UB of the optimal covering cost. UB is initialized by a greedy algorithm (rather than by the cost of the whole corpus A). This phase is detailed in Section 4.2.1.
- The heuristic phase explores the neighborhood of Λ̃ by generating a great number of Lagrangian vectors Λ̃p. A greedy-like procedure is associated with each Λ̃p so as to compute a covering using the Lagrangian cost vector C(Λ̃p). If, during this exploration, a less costly covering than the best known one (corresponding to the cost UB) is found, the upper bound UB is then updated to the cost of this less costly solution. Similarly, if a better estimation of Λ* is obtained, Λ̃ is updated. This phase is described in Section 4.2.2.
- The column fixing phase selects a set F of sentences that are most likely to belong to the optimal covering. This phase is detailed in Section 4.2.3.
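All three phases rely on the Lagrangian costs and on the lower bound of Equations (5)-(7). The sketch below shows how L(Λ) can be evaluated; it uses dense arrays purely for clarity, whereas the authors' implementation relies on a sparse representation (see Section 5), and the function name is an assumption.

```c
/* Sketch of the dual value L(Lambda) (Equations (5)-(7)): with
 * c_j(Lambda) = c_j - sum_i lambda_i * a_ij, a minimizer X(Lambda)
 * keeps exactly the sentences with negative Lagrangian cost, so
 * L(Lambda) = Lambda^T B + sum over { j : c_j(Lambda) < 0 } of c_j(Lambda). */
static double dual_value(int m, int n, const int *const *a,
                         const double *lambda, const double *c, const int *b)
{
    double value = 0.0;
    for (int i = 0; i < m; i++)
        value += lambda[i] * b[i];      /* Lambda^T B */
    for (int j = 0; j < n; j++) {
        double cj = c[j];               /* Lagrangian cost c_j(Lambda) */
        for (int i = 0; i < m; i++)
            cj -= lambda[i] * a[i][j];
        if (cj < 0.0)
            value += cj;                /* x_j(Lambda) = 1 */
    }
    return value;                       /* a lower bound on CX* */
}
```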
Following the column fixing phase, the constraint vector is updated and the unselected sentences define a set-covering sub-problem, called a residual problem. This sub-problem is processed similarly, via an additional iteration of the three phases. This iterative process is stopped when the residual problem is empty or when the associated dual Lagrangian function indicates too high a cost. Indeed, because this function indicates a minimal cost for covering the sub-problem, its addition to the cost CF of the sentences already retained in F gives a lower bound on the total cost of the solution under construction, which should not rise beyond the cost UB of the best known solution if it is to remain potentially more advantageous.

4.2.1 Subgradient Phase. In order to reach the quality goal, the subgradient phase provides a near-optimal solution Λ̃ of the dual Lagrangian problem (8) using a subgradient type algorithm. This iterative approach generates a sequence (Λp) using the following updating formula (see Caprara, Fischetti, and Toth 1999):

$$\Lambda^{p+1} = \max\left\{\Lambda^p + \mu \, \frac{g(\Lambda^p)}{\|S(\Lambda^p)\|^2} \, S(\Lambda^p),\; 0\right\} \qquad (9)$$

where $S(\Lambda^p) = B - A X(\Lambda^p)$, so as to take into account the multi-representation constraints. Parameter µ is adjusted to control the convergence speed, according to the method proposed by Caprara, Fischetti, and Toth (1999). At the first call of 3-phases, Λ0 is arbitrarily defined as follows: for each i ∈ M,

$$\lambda_i^0 = \min_{j \in N,\; a_{ij} \neq 0} \frac{c_j}{\mu_j} \qquad (10)$$

As for UB, its initial value is set to the cost of a covering previously calculated. In order to evaluate how much the covering cost derived by ASA can be improved, we have chosen to initialize UB with this value. At the following iterations of 3-phases, Λ0 is given by a random perturbation (less than 10%) of the best known vector Λ̃ (from which the entries of the sentences fixed in the last column fixing phase are removed) and UB corresponds to the cost of the best covering found (after subtraction of the cost of the sentences fixed in the last column fixing phase). In another approach, proposed by Ceria, Nobili, and Sassano (1998), UB corresponds to the upper bound of a dual Lagrangian problem, and the subgradient procedure simultaneously estimates the upper bound and the lower bound, generating two sequences of multipliers. The subgradient phase also calls two procedures: pricing and spitting. Procedure pricing aims to reduce the size of the search space. For each unit ui, the pricing selects the 5bi smallest Lagrangian cost sentences covering ui. If this selection contains fewer than $5\bar{b}$ sentences, where $\bar{b}$ denotes the maximal entry of B, it is completed with low Lagrangian cost sentences (less than 0.1) up to the limit of $5\bar{b}$ sentences. The set of the chosen sentences is denoted P, and its design guarantees a sufficient number of instances for each unit to cover and a small variety in its composition. Actually, the subgradient method is applied on P instead of A, and P is updated every 10 subgradient iterations. Finally, at each iteration, definition (7) of X(Λp) and the large number of Lagrangian costs close to zero allow a considerable number of vectors S(Λp). In order to get around the computational difficulty of finding the steepest ascent direction, Caprara, Fischetti, and Toth (1999) propose a heuristic that, according to the experimental results, accelerates the convergence of the subgradient phase. This heuristic is implemented in the spitting procedure. Called at each iteration, this procedure extracts from P the subset S of sentences with a Lagrangian cost lower than 0.001. S is then reduced using a spitting greedy algorithm to remove its redundant elements in decreasing Lagrangian cost order. At last, for every j ∈ N, xj(Λp) = 1 if sj ∈ S and xj(Λp) = 0 otherwise. Thus, S(Λp) does not necessarily correspond to a subgradient vector.
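A sketch of the multiplier update of Equation (9) follows. The adjustment of µ and the stopping tests follow Caprara, Fischetti, and Toth (1999) and are omitted here; s[] is assumed to already hold S(Λp), and all names are illustrative.

```c
/* Sketch of one subgradient step (Equation (9)):
 * Lambda <- max(Lambda + mu * g(Lambda) / ||S||^2 * S, 0),
 * with S = B - A X(Lambda) and g(Lambda) = UB - L(Lambda). */
static void subgradient_step(int m, double *lambda, const double *s,
                             double mu, double gap)
{
    double norm2 = 0.0;
    for (int i = 0; i < m; i++)
        norm2 += s[i] * s[i];
    if (norm2 == 0.0)
        return;                       /* all constraints tight: no move */
    double step = mu * gap / norm2;
    for (int i = 0; i < m; i++) {
        lambda[i] += step * s[i];
        if (lambda[i] < 0.0)
            lambda[i] = 0.0;          /* projection onto (R+)^m */
    }
}
```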
4.2.2 Heuristic Phase. The heuristic phase calculates a large number of coverings before keeping the best one. To that end, a sequence of 150 multiplier vectors is generated by perturbing Λ̃ using the formula $\tilde{\Lambda}^{p+1} = \max\{\tilde{\Lambda}^p + \mu \, g(\tilde{\Lambda}^p) S(\tilde{\Lambda}^p), 0\}$, where Λ̃0 = Λ̃ and µ is provided by the subgradient phase, so as to allow for a change in a large number of the Λ̃p. With each Λ̃p, an agglomerative greedy algorithm followed by a spitting greedy one is associated in order to calculate a covering. The agglomerative greedy algorithm chooses at each iteration the sentence sj with the lowest cost σj(Λ̃p), where

$$\sigma_j(\tilde{\Lambda}^p) = \begin{cases} c_j(\tilde{\Lambda}^p) \cdot \tilde{\mu}_j & \text{if } c_j(\tilde{\Lambda}^p) < 0 \text{ and } \tilde{\mu}_j > 0 \\ c_j(\tilde{\Lambda}^p) / \tilde{\mu}_j & \text{if } c_j(\tilde{\Lambda}^p) \ge 0 \text{ and } \tilde{\mu}_j > 0 \\ \infty & \text{if } \tilde{\mu}_j = 0 \end{cases} \qquad (11)$$

This cost function gives an advantage to low Lagrangian cost sentences sj containing µ̃j unit instances that could be helpful to the covering under construction. The agglomerative step uses several heuristics so as to reduce the search space. It is run within a limited subset Pl of P, composed of the sentences sj with the lowest costs σj(Λ̃p). At each iteration, a sentence of Pl is selected and the costs of the sentences of P are updated. If the maximum sentence cost in Pl becomes greater than the minimal cost in P \ Pl, the working subset Pl is also updated. The definitions of P and Pl guarantee that the agglomeration step provides a non-partial solution of the considered SCP. This solution is then reduced during the spitting step by iteratively removing its redundant sentences sj with the highest costs cj. At the end of the heuristic phase, the best found covering X* and its cost CX* (stored in UB) are kept, as well as the highest value of L(Λ̃p) (found during the subgradient or heuristic phases).

4.2.3 Column Fixing Phase. The column fixing phase aims to reduce the problem size by choosing "promising" sentences among the ones with a very low Lagrangian cost or containing rare unit instances. The unselected sentences are less interesting for resolving the SCP, and the residual problem associated with these sentences is the subject of another call of 3-phases. More precisely, the column fixing phase calculates the subset Q composed of sentences sj with a negative Lagrangian cost cj(Λ̃). Q is represented by the binary vector Q = (q1, . . . , qn)^T where qj = 1 if sj ∈ Q. For each ui ∈ U, the number of its instances covered by Q is given by (AQ)i. If (AQ)i ≤ bi, then ui is considered a rare unit and all the elements of Q containing some instances of ui are fixed in a set F. The covering constraints that are not satisfied by F constitute a residual SCP. In order to complete F with a few sentences of the best known covering, a greedy-type algorithm is run on X* \ F to derive a solution of this residual SCP. From the obtained solution, the $\max\{(B^T 1_{\mathbb{R}^m})/20, 1\}$ lowest Lagrangian cost sentences are also added to F. The sentences that are "fixed" in F during the column fixing phase stay in F up to the end of 3-phases. After the column fixing phase, the residual sub-problem is processed by iterating the three phases and the next fixed sentences are added to F.
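The rare-unit rule of the column fixing phase can be sketched as follows. The dense matrix access and the 0/1 indicator arrays are simplifying assumptions for readability, not the authors' data structures.

```c
/* Sketch of the rare-unit rule of the column fixing phase: given the
 * indicator q[] of Q (sentences with negative Lagrangian cost), compute
 * (AQ)_i and fix every sentence of Q containing an instance of a unit
 * u_i with (AQ)_i <= b_i. fix[] is a 0/1 output indicator for F. */
static void fix_rare_unit_sentences(int m, int n, const int *const *a,
                                    const char *q, const int *b, char *fix)
{
    for (int i = 0; i < m; i++) {
        long aq_i = 0;
        for (int j = 0; j < n; j++)
            if (q[j])
                aq_i += a[i][j];         /* (AQ)_i */
        if (aq_i <= b[i])                /* u_i is rare with respect to Q */
            for (int j = 0; j < n; j++)
                if (q[j] && a[i][j] > 0)
                    fix[j] = 1;          /* s_j joins the fixed set F */
    }
}
```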
The 3-phases procedure is encapsulated in an outer loop that permits the partial reconsideration of the solution X* provided by this procedure. To that end, the refining procedure, proposed in Caprara, Fischetti, and Toth (1999), is used in order to select elements of X* that contribute at least to the gap g(Λ̃). In the case of the SCP with multi-representation constraints, the definition of the contribution of sj ∈ X* can be adapted as follows:

$$\delta_j = \max\{c_j(\tilde{\Lambda}), 0\} + \sum_{\substack{i \in M \\ a_{ij} > 0}} \tilde{\lambda}_i \, (AX^* - B)_i \, \frac{a_{ij}}{(AX^*)_i} \qquad (12)$$

The second term of Equation (12) consists of sharing the contribution $\tilde{\lambda}_i (AX^* - B)_i$ of the excess instances of ui in X* according to the distribution of the ui instances in X*. Therefore, the refining procedure ranks the elements sj of X* in increasing order of their δj value, and fixes the first elements in a set G until its covering rate τG reaches a given threshold π. τG represents the rate of covering constraints satisfied by G and is defined by

$$\tau_G = 1 - \frac{\sum_{i \in M} \max\{b_i - (AG)_i, 0\}}{B^T 1_{\mathbb{R}^m}} \qquad (13)$$

where G denotes the binary vector corresponding to G. The LamSCP is made up of the main procedures introduced in the previous sections, interlinked by the following steps. First, because of the adaptation of the algorithm to the SCP with multi-representation constraints, the entries of matrix A are clipped to the constraint vector in order to simplify calculations such as those of µ1, . . . , µn. This clipping implies that, in the derivation of δj, the instances of each unit ui in a sentence sj are taken into account only up to the number bi. After the initialization of Λ0 using Equation (10) and of the upper bound UB, procedure 3-phases is called and provides a solution X to the complete SCP. The refining function fixes a sentence subset G such that G covers a given rate π of the covering constraints. Parameter π starts at a minimum value πmin = 0.3. The residual SCP is then processed by 3-phases. The value of π grows at a rate of 20 percent whenever 3-phases does not improve the solution to the complete SCP. If π is greater than or equal to 1, the refining procedure fixes the whole best solution and the residual problem is then empty. On the other hand, π is reset to πmin if a better solution is found, in order to challenge G and improve this solution. This iterative sequence composed of the 3-phases and refining procedures is carried out as long as the residual problem is not empty, the gap g(Λ̃) is positive, and the number of iterations has not reached 20.

5 experiments :
We propose a twofold comparison of the ASA and LamSCP algorithms: One part is focused on the covering cost for a large SCP, and the other on the stability of the solutions. Moreover, in order to assess the ability and the behavior of both algorithms to process different linguistic data, a first set of experiments deals with phonological attributes (mainly covering co-occurrences of phonemes) and a second set with grammatical labels (mainly covering co-occurrences of POS labels). Both attribute types, often involved in automatic linguistic processing, were chosen because their distribution consists of few highly frequent events and numerous rare events. On the one hand, for TTS tasks, the phonological type of covering is a useful preliminary step of the text corpus design before the recording step. In order to produce the signal corresponding to a requested sentence, the unit selection engine requires at least one instance of each phone (or 2-phone, depending on the concatenation process). Because the recording and the post-recording annotation process are expensive tasks, the recording length of such a corpus has to be as short as possible.
On the other hand, in order to train a domain-specific dependency parser, the covering of POS sequences may be useful for increasing the diversity of syntax patterns. Because dependency annotation is a highly expensive task, the adaptation corpus to annotate needs to be as small as possible, containing characteristic examples of the specific lexical variation rather than following the natural distribution. One can expect that increasing its diversity of POS sequences may lead to more diversity in the syntax trees. Experiments on covering co-occurrences of phonemes are carried out on two large phonologically annotated text corpora, and consist of covering at least k instances of each phoneme, diphoneme, and so on up to n-phoneme (i.e., triphoneme if n = 3, diphoneme if n = 2). The cost cj of the sentence sj is given by its number of phones. From this point on, this kind of SCP is called a "k-covering of n-phonemes." A first corpus, Gutenberg, is composed of texts in English, mainly extracted from novels and short stories. This corpus is the production of the Gutenberg Project, presented by Hart (2003), and has been used by Kominek and Black (2003) to design the speech corpus Arctic. A second corpus, in French, named Le-Monde, is extracted from articles published in the newspaper Le Monde in 1997. Table 1 summarizes the main features of both corpora. The phonological annotation of the Gutenberg corpus comes from the Arctic/Festvox database (see Kominek and Black 2003), and the annotation of the Le-Monde corpus is a by-product of the Neologos project, detailed by Krstulović et al. (2006). For each corpus, we have collected every phoneme, diphoneme, and triphoneme, and their occurrences in each sentence, so as to define the set U of units to cover and the matrix A. A is built by collecting one sentence after the other, following the ordering inside the corpus, and one unit after the other inside the sentences. After this matrix translation, we obtain two description files and two index files. The first file describes the matrix A and the second one the cost vector C. Because of the low matrix density, we have chosen a sparse representation to save space and computation time: For instance, the 2-phoneme Gutenberg matrix is about 2.2% dense. We only store the cells of A that have a non-zero value so as to get a sparse matrix. The index files provide the correspondence between the general covering problem and the application domain. The implementation is made in C. In terms of software engineering, our algorithms work on an SCP that does not depend on the application data. For example, there is no information on what types of units are to be covered. The algorithms only have the matrix of occurrences A, the cost vector C, and the constraint vector B. A set of translation files (from application data to SCP and from SCP to application data) is built before each computation. As a consequence, there is no difficulty in addressing a different set of features to cover on the same or on a different corpus.
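A plausible shape for such a sparse, column-oriented storage of A is sketched below. The field names and accessor are illustrative assumptions; the authors' file format and in-memory layout are not specified.

```c
#include <stddef.h>

/* Sketch of a sparse, column-oriented storage for the occurrence matrix A
 * in which only non-zero cells are kept, in the spirit of the
 * representation described above. col_ptr has n + 1 entries. */
typedef struct {
    int m, n;        /* number of units (rows) and sentences (columns) */
    size_t *col_ptr; /* cells of column j: indices col_ptr[j] .. col_ptr[j+1]-1 */
    int *row;        /* unit index i of each stored cell                */
    int *count;      /* a_ij > 0: occurrences of u_i in s_j             */
} sparse_matrix;

/* Number of instances of unit i in sentence j (0 if the cell is absent). */
static int a_ij(const sparse_matrix *A, int i, int j)
{
    for (size_t k = A->col_ptr[j]; k < A->col_ptr[j + 1]; k++)
        if (A->row[k] == i)
            return A->count[k];
    return 0;
}
```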
For example, this level provides tags like "Determiner masculine singular Article" or "Noun feminine singular," whereas the simplest level gives "Determiner" or "Noun." This latter level of description would have given only nine different POS tags after analyzing Le-Monde. The main associated statistics are given in Table 2. For these POS covering experiments, the cost of a sentence is defined as its number of POS occurrences. We used a PC with 2 CPUs (E5320/1.86GHz/4 cores/64 bits) and 32 GB RAM for the phonological coverings, and the POS coverings were computed using a PC with 8 CPUs (Intel Xeon X7550/2.00GHz/8 cores/64 bits) and 128 GB RAM. Our implementations do not take advantage of any parallelism. The following sections detail more precisely the different experiments conducted on French or English.

The aim of Experiment 1 is to assess the achievements of both algorithms, ASA and LamSCP, and the robustness of the results when the sentence ordering is modified in the corpus to reduce. Indeed, one of the difficulties of the greedy methodology is that the score function has discrete values and several sentences can yield the same score. In our implementation, among the sentences showing the best current score, the first one encountered is chosen. We would like to measure the influence of this arbitrary choice on the stability of the results. LamSCP uses greedy strategies based on Lagrangian costs. Because the Lagrangian costs c_j(Λ) take the SCP in its entirety into account and are continuous real-valued functions of Λ, they should be more selective than the sentence scores used by ASA. A simple way of evaluating the stability consists of running a large number of experiments on the same SCP while randomly modifying the sentence ranking in A. Experiment 1 measures the impact of these permutations on the solutions computed by both algorithms. The considered SCP is the 1-covering of 2-phonemes on the corpus Le-Monde. Given the computation time (more than 5 hours per instance for LamSCP), only 47 instances of the SCP are considered, each instance corresponding to a random sentence ordering in Le-Monde. The 95% confidence intervals are derived using the bootstrap method for the covering cost, the number and the length of sentences in coverings, the computation time, and the "distance" between the covering costs and the associated lower bound L(Λ̃).

5.2 Stability of the Algorithms for the k-Covering of 2-Phonemes in English

One of the goals of Experiment 2 is to compare the achievements of both algorithms on the corpus Gutenberg, whose features differ from those of Le-Monde. The sentences in Gutenberg are shorter on average and the associated variation of sentence length is lower. Furthermore, Gutenberg is 10 times smaller than Le-Monde. In order to compare with the results of the previous experiment done on Le-Monde, we first observed a 1-covering of 2-phonemes on the Gutenberg corpus. The search space seems smaller than in Experiment 1. According to Table 1, Le-Monde is composed of more sentences than Gutenberg: Le-Monde contains 33,165,050 occurrences of 2-phonemes and Gutenberg only 3,025,474. Moreover, the number of attributes to cover is higher in Gutenberg: 1,207 2-phonemes in Le-Monde versus 2,012 in Gutenberg.
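A hedged sketch of the bootstrap interval computation used for these per-instance statistics; the resample count, the seeding, and the toy input are our choices, not the authors':

```python
import numpy as np

def bootstrap_ci(values, n_resamples=10_000, alpha=0.05, seed=0):
    """95% confidence interval of the mean by the bootstrap, as used for
    the per-instance statistics (covering cost, number of sentences, ...).
    A generic sketch of the method named in the text."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values, dtype=float)
    means = [rng.choice(values, size=len(values), replace=True).mean()
             for _ in range(n_resamples)]
    lo, hi = np.percentile(means, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return values.mean(), lo, hi

# Toy covering costs standing in for the 47 permuted instances.
mean, lo, hi = bootstrap_ci([8447, 8512, 8603, 8555, 8490])
```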
We can also notice that five 2-phonemes have only one occurrence in Le-Monde and that the total cost of the five sentences covering these rare units is 751 phones, whereas 109 2-phonemes are in this situation in Gutenberg and the total cost of the 104 concerned sentences is 3,606 phones. Finally, if we consider the density of matrix A, 8.4% of the cells are non-empty for Experiment 1 and 2.2% for Experiment 2. As in Experiment 1, the sentence ordering in Gutenberg has been randomly modified to produce 60 instances of the SCP, and similar solution statistics have been computed for both algorithms. The second objective is to test and compare the ability of the two algorithms to deal with multi-representation constraints. To this end, we apply the same methodology to the k-covering of 2-phonemes in Gutenberg, for k from 2 to 5. We note that, for the same original corpus to reduce, the size of the search space decreases when k increases. These different SCPs enable us to compare the performance of the two algorithms depending on the size of the search space.

In Experiment 3, the aim is to observe the behavior of both algorithms on very constrained problems. To this end, we study their ability to handle a covering of 3-phonemes. We try to assess the impact of such an increase in the number of attributes to cover, with many rare events, on the solution features and on the stability. So as to compute statistics on the 1-covering of 3-phonemes, an instance of Gutenberg has been submitted to ASA and to LamSCP. This instance contains 29,489 units to cover and the density of matrix A is 0.24%. The computation time is nearly 5 days for LamSCP; we nevertheless carried out 35 instances of the 1-covering of 3-phonemes on Gutenberg. Additionally, it is interesting to compare these results with those of Experiment 2 concerning the 1-covering of 2-phonemes on the same corpus, which corresponds to a larger search space. In order to pursue the objective set out in the description of Experiment 3, that is, the ability of the algorithms to deal with numerous constraints and a heavy-tailed distribution of units, Experiment 4 consisted of testing both algorithms on the 1-covering of 3-phonemes on Le-Monde. The search space seems larger than in the previous experiment. Let us recall that Le-Monde contains 3.18 times more sentences than Gutenberg, and it contains 27,650 units to cover. Furthermore, Gutenberg contains 5,000 3-phonemes with only one occurrence, which requires the selection of 4,180 sentences with a total length equal to 137,714 phones, whereas Le-Monde contains 2,274 rare 3-phonemes scattered in 2,107 sentences measuring a total of 283,208 phones. The associated matrix density is 0.69%. Because the computation of a first instance takes more than 8 days, we have limited the number of instances to 30 for this SCP.

5.5 k-Covering of 1-POS and 2-POS in French

The main goal of Experiment 5 is to study the behavior of both algorithms, ASA and LamSCP, dealing with another kind of linguistic attribute, and to compare this with the previous experiments. To achieve this goal, we consider POS attributes and the associated SCPs: 1- and 5-coverings of 1-POS and 1- and 5-coverings of 2-POS, defined on Le-Monde. Indeed, we can observe in Table 2 that the global statistics of POS tags in Le-Monde are quite different from their phonological counterparts summarized in Table 1. In particular, the density of matrix A is 11.03% for a 1-POS covering and 0.57% for a 2-POS covering.
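Since every experiment below compares against ASA, a minimal sketch of an agglomerative greedy stage under multi-representation constraints may help fix ideas; this is our simplification for illustration, not ASA's exact score function, and it omits the spitting stage:

```python
import numpy as np

def greedy_cover(A, B, C):
    """Repeatedly pick the unselected sentence with the best
    (still-needed coverage) / cost ratio until every unit u_i is covered
    at least b_i times. Assumes B has been clipped so the demand is
    satisfiable; A is a dense m x n occurrence matrix."""
    need = B.astype(float).copy()
    selected = []
    while need.sum() > 0:
        gain = np.minimum(A, need[:, None]).sum(axis=0) / C
        gain[selected] = -1.0           # a sentence is picked at most once
        j = int(np.argmax(gain))
        if gain[j] <= 0:
            break                       # remaining demand cannot be met
        selected.append(j)
        need = np.maximum(need - A[:, j], 0.0)
    return selected
```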
Also, the search space size seems to decrease when considering successively the mono- and multi-coverings of 1-POS, and the mono- and multi-coverings of 2-POS, permitting us to compare them with the results coming from the experiments on phonological coverings. We evaluate the stability by computing 50 randomly shuffled versions of the corpus Le-Monde.

6 results and discussion :In this section, the results of the experiments described in Section 5 are provided and discussed; the organization of this section therefore mirrors that of Section 5. Table 3 shows the main results of Experiment 1, concerning the 1-covering of 2-phonemes from the corpus Le-Monde. The symbol ± indicates that the mentioned value corresponds to a 95% confidence interval, calculated using the bootstrap method from the 47 instances of the SCP. In order to cover each of the 1,207 2-phonemes of Le-Monde, ASA drastically reduces the size of the initial corpus, by 99.94% (±0.00). However, on average, LamSCP calculates a 9.00% shorter covering. The lower bound L(Λ̃) for the optimal covering cost is 7,689 ± 5 phones. L(Λ̃) is not a minimum value and may not correspond to the cost of an actual covering. Because this lower bound is updated throughout the execution of LamSCP, we do not mention a specific calculation time for this result. For one instance of the SCP, let C(X*_ASA) be the size of the solution given by ASA. The quantity τ_ASA = 1 − L(Λ̃)/C(X*_ASA) indicates that the optimal solution to the SCP is at most τ_ASA times shorter than the covering calculated by ASA. It can be observed that the optimal solution is at most 10.13% (±0.19) shorter than the one yielded by ASA and at most 1.24% (±0.08) shorter than the solution yielded by LamSCP. The solutions obtained by LamSCP and the optimal solution to the SCP are therefore very close. Considering, among the 47 instances of the SCP, the best solutions yielded by ASA (8,447 phones) and LamSCP (7,767 phones), LamSCP is 8.75% better than ASA in terms of covering cost, while the best lower bound for the SCP is 7,715 phones, only 0.67% (respectively, 8.66%) shorter than the best covering by LamSCP (respectively, ASA). The average length of the sentences selected by both algorithms is far below the average length of the sentences in the corpus (96.81 phones). LamSCP tends to choose sentences that are slightly longer than ASA, with an average of 28.97 (±0.12) phones compared with 25.48 (±0.10) phones. Moreover, ASA selects on average 335.73 (±1.69) sentences per solution, about 24.91% more than LamSCP, which selects 268.76 (±1.08) sentences on average. This seems to indicate that LamSCP makes fewer local choices than ASA. This hypothesis is also supported by an analysis of the variability of the results. The relative variation of the covering costs calculated by LamSCP is 13.01/7,786 = 0.16%, versus 57.40/8,555 = 0.67% for ASA; that is to say, the costs of the solutions yielded by LamSCP are 4 times more stable than those of ASA. Moreover, the solutions are composed of a very stable number of sentences: The associated relative standard deviation is 5.31/335.73 = 1.58% for the 47 instances solved by ASA, and 3.85/268.76 = 1.43% for the instances solved by LamSCP. It turns out that the results of both algorithms are very stable when the order of the sentences is modified in the original corpus. Finally, concerning computation time, the resolution of an instance of the SCP lasts on average 5 hr 41 min 18 sec (±8 min 28 sec) for LamSCP versus 51 sec (±0 sec) for ASA.
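The quantity τ can be checked directly from the reported averages (mean costs of 8,555 phones for ASA and 7,786 for LamSCP against L(Λ̃) = 7,689); small discrepancies with Table 3 come from rounding of the averages:

```python
def optimality_gap(cost, lower_bound):
    """tau = 1 - L(Lambda~)/C: the optimal covering is at most `tau`
    shorter than the solution of cost C (function name is ours)."""
    return 1.0 - lower_bound / cost

print(f"{optimality_gap(8555, 7689):.2%}")  # ASA    -> ~10.1%, cf. 10.13% (+-0.19)
print(f"{optimality_gap(7786, 7689):.2%}")  # LamSCP -> ~1.2%,  cf. 1.24% (+-0.08)
```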
On average over the 47 instances, LamSCP takes 390 (±9) times as long as ASA.

6.2 Stability of the Algorithms for k-Covering of 2-Phonemes in English

The considered SCP consists of covering at least k times each of the 2,012 2-phonemes of the Gutenberg corpus, with k varying from 1 to 5. The results are summarized in Table 4. For all instances of these SCPs, it has been observed that LamSCP computes shorter coverings than ASA. However, that advantage diminishes as k grows: The cost advantage offered by LamSCP compared with ASA decreases from 9.73% (±0.13) for k = 1 to 4.50% (±0.04) for k = 5. Also, the solutions obtained from ASA and LamSCP seem to get closer to the optimal solution as k rises. The corresponding figures are presented in Table 5: For instance, the optimal solution is at most 0.75% (±0.02) shorter than that obtained by LamSCP for k = 1, and 0.27% (±0.00) for k = 5. Because the search space diminishes as k increases, it may be observed that the algorithms tend to be more stable. This is true both for the size of the solutions and for the number of sentences that define them. Table 6 reports the variation of the size of the solutions as a function of k. This variation is calculated as follows: For a given k and a given algorithm, the standard deviation of the size of the k-coverings computed by that algorithm is divided by the average size of these coverings. It can thus be noted that LamSCP offers a stability 4 to 8 times greater than ASA concerning the size of the coverings. As for the number of sentences, the relative standard deviation similarly decreases from 0.97% to 0.28% when k increases from 1 to 5 for ASA solutions, and from 0.42% to 0.15% for LamSCP ones. One can note that increasing the minimal number k of instances of each unit to cover leads both LamSCP and ASA to select longer sentences on average. The average length of the sentences picked for a 1-covering was quite low. As the constraints increase along with k, it seems natural that the algorithms tend to select longer sentences, as shorter sentences no longer contain enough occurrences of 2-phonemes. Moreover, as described in Section 2, when the minimal number b_i of a unit u_i demanded in the covering exceeds the number of instances of that unit in the initial corpus, all sentences containing instances of u_i in the initial corpus are selected, and b_i is set to (A1_{ℝ^n})_i. Thus, as k increases, the algorithms tend to select more and more sentences, and their length tends towards the average value over the whole corpus, which is 28.51 phones for Gutenberg. As for computation time, although it increases as k grows because of the increasing number of constraints to update, the ratio between the computation times of LamSCP and ASA tends to diminish, as shown in Table 7 [for k = 1, ..., 5, Time LamSCP / Time ASA is 333 (±7), 280 (±5), 292 (±6), 218 (±9), and 194 (±8)]. This tendency may find an explanation in the fact that the search space diminishes as k increases, which causes fewer selected sentences to be questioned during the 3-phases iterations of LamSCP. Also, we notice that the average computation time of the two algorithms is greater in Experiment 1, owing to the greater number of sentences in the corpus Le-Monde and the higher density of matrix A. Moreover, the ratio between the computation times of LamSCP and ASA decreases between Experiments 1 and 2, going from 390 (±9) to 333 (±7). Again, this can be explained by the shrinking of the search space.
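The demand-clipping rule recalled in the paragraph above can be sketched as follows; the dense-matrix representation and all names are ours:

```python
import numpy as np

def clip_demands(A, B):
    """If the demanded count b_i exceeds the corpus-wide number of
    instances (A 1)_i of unit u_i, every sentence containing u_i becomes
    mandatory and b_i is lowered to (A 1)_i."""
    A = np.asarray(A)
    avail = A.sum(axis=1)                   # (A 1)_i: total instances of u_i
    saturated = B > avail                   # units demanded beyond supply
    forced = np.unique(np.nonzero(A[saturated])[1])  # mandatory sentences
    return np.minimum(B, avail), forced

A = np.array([[2, 0, 1],                    # u_1 appears in s_1 and s_3
              [0, 1, 0]])                   # u_2 appears only in s_2
print(clip_demands(A, np.array([5, 1])))    # b_1 capped at 3; s_1, s_3 forced
```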
For k = 1, the advantage offered by LamSCP on the covering costs compared with ASA is slightly higher than that observed in Experiment 1: 9.73% (±0.13) here, versus 9.00% (±0.20) in the previous experiment. This seems to contradict the idea that the performance of LamSCP improves as the search space becomes wider. However, the distributions of the units to cover in Gutenberg and Le-Monde are different, and the variation of the sentence length in Le-Monde is very high, which may account for this slight difference in terms of gain. Note that the size of the calculated coverings and the lower bound L(Λ̃) are closer in the experiment carried out on Gutenberg. It is difficult, however, to perform further comparisons with Experiment 1 regarding the "distance" between the costs of the solutions computed by these algorithms and the optimal covering cost, given that the quality of the lower bound cannot be evaluated. The gain in stability offered by LamSCP, both for the costs of the solutions and for the number of sentences, is greater than that observed in the previous experiment. We think that this increase is due to a more restricted search space and to the lower variability of the sentence length in the corpus Gutenberg, which may be observed in Table 1.

Table 8 sums up the main results of Experiment 3, where 35 instances of the 1-covering of 3-phonemes from Gutenberg were processed. According to the L(Λ̃) values, covering all 3-phonemes requires a solution size greater than or equal to 226,635 phones. On average, the solution measures 227,360 ± 12 phones using LamSCP, and 236,828 ± 94 phones using ASA. The optimal covering is at most 0.35% (±0.00) shorter than the solutions derived by LamSCP and 4.33% (±0.04) shorter than the ones derived by ASA. We can thus observe that both algorithms manage to compute solutions of similar size when scaling up the required attribute set. The solutions are very stable, even more so than in Experiment 2: The relative variation of their size is 0.12% for ASA and 0.01% for LamSCP; the relative variation of their number of sentences is 0.17% for ASA and 0.07% for LamSCP. This increase in stability is due to a smaller search space and to the increase in the number of rare units required, which compels the algorithms to select a higher number of unavoidable sentences in all the instances of the SCP. Furthermore, the decrease of the ratio between the computation times of LamSCP and ASA, from 332 for the 1-covering of 2-phonemes to 130 for the 1-covering of 3-phonemes on Gutenberg, may confirm this idea, which was also put forward in Experiment 2. Concerning the length of the sentences selected by both algorithms, it is greater than that observed for the 5-covering of 2-phonemes in Experiment 2, and deviates only slightly from the average sentence length over the whole corpus. Consequently, it turns out that covering longer, and generally rarer, units involves selecting longer sentences. This is confirmed by the fact that the sentences covering units with a single occurrence in Gutenberg represent more than half the size of the solutions and are composed of 33 phones on average.

In this section, we analyze the results of Experiment 4, the 30 instances of the 1-covering of 3-phonemes from Le-Monde carried out by ASA and LamSCP. The results are given in Table 9.
First, although the main features of Le-Monde and Gutenberg are different, notice that the closeness between the size of the coverings calculated by both algorithms and the lower bound L(Λ̃) is comparable to that observed in Experiment 3. Indeed, the optimal covering size is at most 0.48% (±0.03) and 4.35% (±0.05) shorter than the solution sizes derived by LamSCP and ASA, respectively. Similarly, the size of the solutions and the number of selected sentences are as stable as those observed in the previous experiment: The solution length varies by 0.01% for LamSCP and 0.10% for ASA, and the number of sentences fluctuates by about 0.16% for ASA and 0.10% for LamSCP. As for the comparison with the results of Experiment 1 (1-covering of 2-phonemes from Le-Monde), the main trends are similar to those observed for the transition from the 1-covering of 2-phonemes to the 1-covering of 3-phonemes on Gutenberg. However, in Experiment 4, the average selected sentence length has markedly increased, approaching the mean value over the whole corpus: 89.02 (±0.03) for ASA and 92.64 (±0.03) for LamSCP, whereas in Experiment 1 these values were, respectively, 25.48 (±0.10) and 28.97 (±0.12). We had already observed in Experiment 3 that covering longer units increases the length of the selected sentences, but this high amplitude seems to be inherent to the design of the corpus Le-Monde. Furthermore, notice that the 1-coverings of 2-phonemes from Le-Monde are almost half the size of those from Gutenberg, whereas the 1-coverings of 3-phonemes from Le-Monde are two to three times longer than those from Gutenberg. This is due to the fact that the 3-phonemes with a single instance in Le-Monde are scattered across long sentences (their mean length is about 134 phones), and these indispensable sentences represent nearly half the size of the solutions. The other sentences of the solutions are around 70 phones long. Lastly, the ratio between the computation times of both algorithms is about 84, which is smaller than the ratios previously observed, but this SCP is the most time-consuming: 2 hr 10 min for ASA and more than 7 days for LamSCP.

6.5 k-Covering of 1-POS and 2-POS in French

Table 10 sums up the main results of Experiment 5, dealing with the 1- and 5-coverings of 1-POS and 2-POS. For all these SCPs, LamSCP produces smaller coverings, composed of longer sentences, than the coverings obtained with ASA. When the search space diminishes, the relative "distance" between the sizes of the solutions provided by both algorithms decreases, as does the distance between the lower bound L(Λ̃) and the size of the solutions obtained by ASA. These trends were also observed in the earlier experiments. In particular, for the 1-covering of 1-POS, not only does LamSCP provide 10.06% (±0.00) shorter solutions than ASA, but its solutions are optimal for all 50 instances of this SCP. Indeed, the lower bound value varies from 482.51 to 482.87 occurrences of 1-POS, while all the solutions given by LamSCP are made of exactly 483 occurrences of POS. For the other k-coverings of n-POS, the optimal solution is at worst 0.39% (±0.02) shorter than the covering given by LamSCP for (k, n) = (5, 1), 0.11% (±0.00) for (k, n) = (1, 2), and 0.22% (±0.00) for (k, n) = (5, 2). The solutions obtained by ASA or by LamSCP are very stable. For example, the relative standard deviation of the number of POS occurrences in a covering solution varies from 0.00% to 1.23% for ASA, and from 0.00% to 0.04% for LamSCP.
As previously observed, the computation times of both algorithms grow when the number of required covering features increases. However, the ratio between the computation times of LamSCP and ASA does not behave as in Experiment 2 (see Table 7): For the k-covering of 1-POS, this ratio increases from 290 (±11) to 657 (±43) when k goes from 1 to 5, and for the k-covering of 2-POS, it increases from 75 (±5) to 108 (±6).

7 evaluation on a text-to-speech synthesis system :In the previous sections, different algorithms dealing with corpus reduction were introduced and studied. The proposed experiments mainly evaluated the effects of these algorithms in terms of corpus reduction, but not with respect to a practical task. This section proposes an experiment to assess the impact of corpus reduction on a unit selection speech synthesis system. As explained in Section 1, corpus reduction for a TTS system is a trade-off between minimizing the recording and post-processing time needed to build the speech corpus and keeping the highest phonological richness of the corpus to ensure the quality of the synthetic speech. The goal of this experiment is to measure this trade-off by evaluating the quality of the same TTS system fed with different speech corpora uttered by the same speaker. Note that the intrinsic quality of this system is not at issue here. Firstly, a brief presentation of a state-of-the-art unit selection–based TTS system is given in Section 7.1. The linguistic parameters used by the TTS system are detailed because they are linked to the features required in the reduction stage. In Section 7.2, the corpora used in the experiment are introduced. The attributes to cover and the evaluation methodology are described in Section 7.3; the results are given and discussed in Section 7.4.

For this experiment, a state-of-the-art unit selection–based TTS system is used to produce an acoustic signal from an input text. A linguistic front end processes the text to extract features taken into account by the algorithm that selects segments in a speech corpus (see Boëffard and d'Alessandro 2012). The input text is converted into a sequence of phonemes using a French phonetizer proposed by Béchet (2001). Non-speech sound labels can be added to this sequence (silences, breaths, para-verbal events, etc.). A vector of features is defined as follows:

1. The phone or non-speech sound label
2. Is the described segment a non-speech sound?
3. Is the phone in the onset of the syllable?
4. Is the phone in the coda of the syllable?
5. Is the phone in the last syllable of its syntagm?
6. Is the current syllable at the end of a word?
7. Is the current syllable at the beginning of a word?

Feature extraction is done using the ROOTS toolkit described in Boëffard et al. (2012). The unit selection process aims to associate a signal segment from the speech corpus with each vector of features computed from the input text. This is performed in two steps. In the first step, for each unit, a set of candidates that match the same features is extracted from the speech corpus. In the second step, given all candidates, the best path is searched using an optimization algorithm so as to produce the sequence of speech units. The algorithm tries to minimize three sub-costs commonly used in unit selection–based TTS systems: spectral discrepancies based on an MFCC distance, an amplitude distance, and an f0 distance. Two corpora are used in this experiment.
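A sketch of the seven-dimensional feature vector as a record type, with a naive candidate filter; the field names and the `features` attribute on corpus units are our assumptions, not the ROOTS API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UnitFeatures:
    """Mirrors items 1-7 of the list above (field names are ours)."""
    label: str                    # phone or non-speech sound label
    is_non_speech: bool           # is the segment a non-speech sound?
    in_syllable_onset: bool       # is the phone in the syllable onset?
    in_syllable_coda: bool        # is the phone in the syllable coda?
    in_last_syllable_of_syntagm: bool
    syllable_word_final: bool     # is the syllable at the end of a word?
    syllable_word_initial: bool   # is the syllable at the start of a word?

def candidates(corpus_units, target: UnitFeatures):
    """Step 1 of selection: keep the corpus units whose features match."""
    return [u for u in corpus_units if u.features == target]
```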
The first one, Learning corpus, is an annotated acoustic corpus used to provide speech data for the TTS engine. It is an expressive corpus in French, spoken by a male speaker reading Albertine disparue, an excerpt from À la recherche du temps perdu by Marcel Proust. The corpus is composed of 3,138 sentences automatically annotated using a process described in Boëffard et al. (2012). The overall length of the speech corpus is 9 hr 57 min. When creating a voice for a unit selection–based TTS system, long sentences are generally removed or split into syntagm groups in order to help the speaker. A second corpus, named Test corpus, is a text corpus that is synthesized and used in the listening experiment. It is composed of 30 short sentences randomly extracted from a phonetically balanced corpus in French proposed by Combescure (1981). The use of a corpus with a different linguistic style minimizes the bias introduced by the learning corpus. Statistics are given in Table 11. For this experiment, two reduced corpora are evaluated. They are built by reducing the full learning corpus using the two different algorithms presented in the previous sections: ASA and LamSCP. As described in Section 7.1, the unit selection process of the speech synthesis system is based on a set of phonological attributes. It seems natural to try to cover features that reflect the variability of these attributes. For this experiment, the algorithms must cover all the units at least once, where a unit is described by the following:

- Its label, that is, one of the 35 phonemes or a non-speech sound label
- The structure of the syllable that contains the phoneme, if it is a vowel
- The position of the associated syllable in the word (start, middle, or end)
- A Boolean indicating whether the associated syllable is at the end of a syntagm

The feature extraction is performed by the same set of tools used by the speech synthesis engine. Given this set of features, the learning corpus contains 1,497 classes of units. The cost function to be minimized by the reduction algorithms is the total length, in phones, of the set of selected syntagms. Two speech synthesis systems are defined, extracting the speech units to concatenate from the coverings provided by LamSCP and ASA. Two other systems are added as baselines. First, a system named Full, built with the whole learning corpus, is used as an upper bound. Second, a system named Random uses a random reduction of the Learning corpus as a pool of speech units. This reduction is done by randomly selecting sentences from the whole learning corpus until the size of the covering obtained by LamSCP is reached. Random is used as a lower bound. Whereas the optimization efficiency is measured by statistics on the reduced corpora, the quality of the synthesized speech signals is evaluated by a listening test. The protocol is based on a MUSHRA test, presented in ITU-R (2003), where for every sentence of the Test corpus, the signals synthesized by the four systems are presented to each tester in a random order. If a system is not able to produce a signal for a requested sentence (because of a missing 2-phone in the pool corpus), an empty signal is presented. Ten native French testers (four naive and six expert listeners) are asked to evaluate the overall quality of the stimuli and to give a mark from 0 to 100 (in steps of 5 points). The Learning corpus, composed of 19,587 syntagms, is first reduced using ASA and LamSCP. Statistics of the resulting solutions are summarized in Table 12.
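The Random baseline described above can be sketched as follows; the tie-breaking at the size boundary (stopping once the target is reached or exceeded) is our choice:

```python
import random

def random_reduction(sentences, costs, target_cost, seed=0):
    """Build the Random baseline: draw sentences uniformly without
    replacement until the total length (in phones) reaches the size of
    the covering obtained by LamSCP."""
    rng = random.Random(seed)
    order = list(range(len(sentences)))
    rng.shuffle(order)
    picked, total = [], 0
    for j in order:
        if total >= target_cost:
            break
        picked.append(j)
        total += costs[j]
    return picked
```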
The covering of the 1,497 constraints divides the input corpus size by almost 10 and reduces the 10 hours of speech to around 1 hr 20 min. As in the previous experiments, even though different kinds of features are mixed (phonemes, syllable structures, position in a word or a syntagm), the ASA algorithm produces a solution close to the optimal one. However, LamSCP is again slightly better in terms of covering size. To measure the acoustic impact of corpus reduction, the listening test results are presented in Table 13 for the average marks and in Table 14 for the average ranks. Note that without a natural speech reference during the test, the marks should not be read as absolute scores. Even though the LamSCP corpus is slightly smaller than the one from ASA, the acoustic quality of both systems is comparable according to the testers (with a slight advantage for the LamSCP corpus). In comparison with the baseline that uses the whole learning corpus, the acoustic degradation is significant. This illustrates well the trade-off between corpus size and speech quality: For a 90% corpus size reduction, the acoustic quality drops by 10 points. Further research should focus on the set of attributes to cover and their number of occurrences in order to improve this compromise. As expected, the baseline built from random sentences is rated significantly lower than the other systems because of the lack of relatively rare acoustic units.

8 conclusion :This article discussed the building of linguistically rich corpora under an objective of parsimony. This task, a generalization of the SCP, turns out to be an NP-hard problem that cannot be approximated in polynomial time within a logarithmic ratio unless P = NP. We studied the behavior of several algorithms in the particular domain of NLP, where the considered events follow a heavy-tailed distribution. The proposed algorithms have been compared through three kinds of experiments: The first one covers 2- and 3-phonemes from two text corpora, one in French, the other in English; the second one covers part-of-speech labels from a corpus in French; the third one evaluates the impact of both algorithms on the acoustic quality of a corpus-based TTS system. The first algorithm, ASA, is composed of an agglomerative greedy strategy followed by a spitting greedy stage. The second one, LamSCP, is based on Lagrangian relaxation principles combined with greedy strategies. LamSCP is our adaptation to multi-representation constraints of an algorithm proposed in Caprara, Fischetti, and Toth (1999). The comparison of SCP solutions mainly concerns their size, their maximal distance from the optimal covering, and their robustness to perturbations of the initial corpus ordering. Although ASA is much faster than LamSCP, it does not, by itself, permit us to assess the quality of its solutions in terms of size. The main assets of LamSCP are the calculation of a lower bound on the optimal covering size and shorter solutions than those obtained by ASA. Indeed, in our experiments on phonological coverings, the optimal solution is at most 1.24% (respectively, 10.13%) smaller than the solutions derived by LamSCP (respectively, ASA). As for the coverings of 1-POS, LamSCP provides the optimal solution in the case of a mono-representation constraint, whereas the ASA solution is 10.17% greater than the optimal one. These relative gaps between the lower bounds and the solution sizes of both algorithms generally decrease when the size of the search space decreases.
Thanks to the lower bound derived by LamSCP, we empirically showed that it is possible to obtain almost optimal solutions in a linguistic framework whose events follow a Zipf's law distribution, despite the theoretical complexity of the multi-represented SCP. Concerning the last experiment, in the TTS framework, even though LamSCP provides a smaller corpus, the subjective test shows no significant difference between the TTS systems based on the LamSCP and ASA corpora. Therefore, we think that ASA remains the most adequate strategy for solving SCPs in the NLP field, in terms of the trade-off between performance, ease of development, and computation time. However, it would be interesting to test a parallelized version of the heuristic phase, which calls a large number of greedy sub-procedures. Our future prospects for this work are in automatic language processing and speech synthesis. First, in the framework of the Phorevox project supported by the French National Research Agency, we are considering the automatic design of exercise contents for language learning through the selection of texts covering given phonological or linguistic difficulties. Secondly, this work is a preliminary step towards building a phonetically rich script before recording, in order to produce high-quality speech synthesis. The covering choices, such as the attributes to cover, the number of required occurrences, or the "sentence" length (utterances, syntagms, etc.), need to be validated. Moreover, in this article, we have observed the great impact of the distribution of rare units in the corpus to reduce, and we believe it will be interesting to adapt the "sentence" granularity according to this distribution.

abstract :Linguistic corpus design is a critical concern for building rich annotated corpora useful in different domains of application. For example, speech technologies such as ASR (Automatic Speech Recognition) or TTS (Text-to-Speech) need a huge amount of speech data to train data-driven models or to produce synthetic speech. Collecting data is always related to costs (recording speech, verifying annotations, etc.), and as a rule of thumb, the more data you gather, the more costly your application will be. Within this context, we present in this article solutions to reduce the amount of linguistic text content while maintaining the level of linguistic richness required by a model or an application. This problem can be formalized as a Set Covering Problem (SCP), and we evaluate two algorithmic heuristics applied to the design of large text corpora in English and French for covering phonological information or POS labels. The first considered algorithm is a standard greedy solution with an agglomerative/spitting strategy; we propose a second algorithm based on Lagrangian relaxation. The latter approach provides a lower bound on the cost of each covering solution. This lower bound can be used as a metric to evaluate the quality of a reduced corpus whatever the algorithm applied. Experiments show that a suboptimal algorithm such as the greedy one achieves good results: The cost of its solutions is not far from the lower bound (about 4.35% above it for 3-phoneme coverings). Usually, constraints in the SCP are binary; we propose here a generalization where the constraints on each covering feature can be multi-valued.
authors :Nelly Barbot, Olivier Boëffard, Jonathan Chevelu, and Arnaud Delhay

references :
Alon, Noga, Dana Moshkovitz, and Shmuel Safra. 2006. Algorithmic construction of sets for k-restrictions. ACM Transactions on Algorithms (TALG), 2(2):153–177.
Barbot, Nelly, Olivier Boëffard, and Arnaud Delhay. 2012. Comparing performance of different set-covering strategies for linguistic content optimization in speech corpora. In Proceedings of the International.
Béchet, Frédéric. 2001. LIAPHON: un système complet de phonétisation de textes. Traitement Automatique des Langues, 42(1):47–67.
Boëffard, Olivier, Laure Charonnat, Sébastien Le Maguer, Damien Lolive, and Gaëlle Vidal. 2012. Towards fully automatic annotation of audiobooks for TTS. In Proceedings of the International Conference on.
Bunnell, H. Timothy. 2010. Crafting small databases for unit selection TTS: Effects on intelligibility. In Proceedings of the ISCA Tutorial and Research Workshop on Speech Synthesis (SSW7), pages 40–44, Kyoto.
Cadic, Didier, Cédric Boidin, and Christophe d'Alessandro. 2010. Towards optimal TTS corpora. In Proceedings of the International Conference on Language Resources and Evaluation (LREC).
Candito, Marie, Enrique Henestroza Anguiano, and Djamé Seddah. 2011. A word clustering approach to domain adaptation: Effective parsing of biomedical texts. In Proceedings of the 12th International.
Caprara, Alberto, Matteo Fischetti, and Paolo Toth. 1999. A heuristic method for the set covering problem. Operations Research, 47(5):730–743.
Caprara, Alberto, Paolo Toth, and Matteo Fischetti. 2000. Algorithms for the set covering problem. Annals of Operations Research, 98(1–4):353–371.
Ceria, Sebastián, Paolo Nobili, and Antonio Sassano. 1998. A Lagrangian-based heuristic for large-scale set covering problems. Mathematical Programming, 81(2):215–228.
Chevelu, Jonathan, Nelly Barbot, Olivier Boëffard, and Arnaud Delhay. 2007. Lagrangian relaxation for optimal corpus design. In Proceedings of the ISCA Tutorial and Research Workshop on Speech Synthesis.
Chevelu, Jonathan, Nelly Barbot, Olivier Boëffard, and Arnaud Delhay. 2008. Comparing set-covering strategies for optimal corpus design. In Proceedings of the International Conference on Language.
Combescure, Pierre. 1981. 20 listes de 10 phrases phonétiquement équilibrées. Revue d'Acoustique, 56:34–38.
Fisher, Marshall L. 1981. The Lagrangian relaxation method for solving integer programming problems. Management Science, 27(1):1–18.
François, Hélène, and Olivier Boëffard. 2001. Design of an optimal continuous speech database for text-to-speech synthesis considered as a set covering problem. In Proceedings of the European Conference on.
François, Hélène, and Olivier Boëffard. 2002. The greedy algorithm and its application.
Gauvain, Jean-Luc, Lori Lamel, and Maxine Eskénazi. 1990. Design considerations and text selection for Bref, a large French read-speech corpus. In Proceedings of the International Conference of Spoken Language.
Gotab, Pierre, Frédéric Béchet, and Géraldine Damnati. 2009. Active learning for rule-based and corpus-based spoken language understanding models. In Proceedings of the IEEE Workshop on.
Hart, Michael. 2003. Project Gutenberg. http://www.gutenberg.org/ (last consulted April 2015).
Karp, Richard M. 1972. Reducibility among combinatorial problems. In Complexity of Computer Computations, The IBM Research Symposia Series. Springer, pages 85–103.
Kawai, Hisashi, Seiichi Yamamoto, Norio Higuchi, and Tohru Shimizu. 2000. A design method of speech corpus for text-to-speech synthesis taking account of prosody. In Proceedings of the.
Kominek, John, and Alan W. Black. 2003. The CMU Arctic speech databases for speech synthesis research. Technical Report CMU-LTI-03-177, Carnegie Mellon University Language Technologies.
Krstulović, Sacha, Frédéric Bimbot, Olivier Boëffard, Delphine Charlet, Dominique Fohr, and Odile Mella. 2006. Optimizing the coverage of a speech database through a selection of representative speaker.
Krul, Aleksandra, Géraldine Damnati, François Yvon, Cédric Boidin, and Thierry Moudenc. 2007. Adaptive database reduction for domain specific speech.
Krul, Aleksandra, Géraldine Damnati, François Yvon, and Thierry Moudenc. 2006. Corpus design based on the Kullback-Leibler divergence for text-to-speech synthesis application.
Neubig, Graham, and Shinsuke Mori. 2010. Word-based partial annotation for efficient corpus construction. In Proceedings of the International Conference on Language Resources and Evaluation (LREC).
Raz, Ran, and Shmuel Safra. 1997. A sub-constant error-probability low-degree test, and a sub-constant error-probability PCP characterization of NP. In Proceedings of the Twenty-Ninth Annual ACM Symposium.
Rojc, Matej, and Zdravko Kačič. 2000. Design of optimal Slovenian speech corpus for use in the concatenative speech synthesis system. In Proceedings of the International Conference on Language Resources and.
Schein, Andrew I., Ted S. Sandler, and Lyle H. Ungar. 2004. Bayesian example selection using BaBiES. Technical Report MS-CIS-04-08, Department of Computer and Information Science, University of.
Settles, Burr. 2010. Active learning literature survey. Technical Report 1648, Department of Computer Sciences, University of Wisconsin, Madison.
Synapse. 2011. Documentation technique: Composant d'étiquetage et lemmatisation. http://www.synapse-fr.com/.
Tian, Jilei, and Jani Nurminen. 2009. Optimization of text database using hierarchical clustering. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP).
Tian, Jilei, Jani Nurminen, and Imre Kiss. 2005. Optimal subset selection from text databases. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal.
Tomanek, Katrin, and Fredrik Olsson. 2009. A Web survey on the use of active learning to support annotation of text data. In Proceedings of the NAACL HLT 2009 Workshop on Active Learning for Natural.
Van Santen, Jan P. H., and Adam L. Buchsbaum. 1997. Methods for optimal text selection. In Proceedings of the European, Rhodes.
Zhang, Jin-Song, and Satoshi Nakamura. 2008. An improved greedy search. In Conference on Speech Communication and Technology (Eurospeech).
1 introduction :Statistical Machine Translation (SMT) progressed near the beginning of the century from word-based models (Brown et al. 1993) towards models that take contextual information into account. Phrase-based (Koehn, Och, and Marcu 2003; Och and Ney 2004) and N-gram-based (Casacuberta and Vidal 2004; Mariño et al. 2006) models are two instances of such frameworks. Although the two models have some common properties, they are substantially different. The present work is a step towards combining the benefits and remedying the flaws of these two frameworks. Phrase-based systems have a simple but effective mechanism that learns larger chunks of translation called bilingual phrases.¹ Memorizing larger units enables the phrase-based model to learn local dependencies such as short-distance reorderings, idiomatic collocations, and insertions and deletions that are internal to the phrase pair. The model, however, has the following drawbacks: (i) it makes independence assumptions over phrases, ignoring the contextual information outside of phrases; (ii) the reordering model has difficulties in dealing with long-range reorderings; (iii) problems in both search and modeling require the use of a hard reordering limit; and (iv) it has the spurious phrasal segmentation problem, which allows multiple derivations of a bilingual sentence pair that have the same word alignment but different model scores. N-gram-based models are Markov models over sequences of tuples that are generated monotonically. Tuples are minimal translation units (MTUs) composed of source and target cepts.² The N-gram-based model has the following drawbacks: (i) only precalculated orderings are hypothesized during decoding; (ii) it cannot memorize and use lexical reordering triggers; (iii) it cannot perform long-distance reorderings; and (iv) using tuples presents a more difficult search problem than in phrase-based SMT.

The Operation Sequence Model. In this article we present a novel model that tightly integrates translation and reordering into a single generative process. Our model explains the translation process as a linear sequence of operations that generates a source and target sentence in parallel, in a target left-to-right order. Possible operations are (i) generation of a sequence of source and target words, (ii) insertion of gaps as explicit target positions for reordering operations, and (iii) forward and backward jump operations that do the actual reordering. The probability of a sequence of operations is defined according to an N-gram model, that is, the probability of an operation depends on the n − 1 preceding operations. Because the translation (lexical generation) and reordering operations are coupled in a single generative story, the reordering decisions may depend on preceding translation decisions, and translation decisions may depend on preceding reordering decisions.

¹ A phrase pair in phrase-based SMT is a pair of word sequences; the sequences are not necessarily linguistic constituents. Phrase pairs are built by combining minimal translation units and ordering information. As is customary, we use the term phrase to refer to phrase pairs if there is no ambiguity.
² A cept is a group of source (or target) words connected to a group of target (or source) words in a particular alignment (Brown et al. 1993).
This provides a natural reordering mechanism that is able to deal with local and long-distance reorderings in a consistent way. Like the N-gram-based SMT model, the operation sequence model (OSM) is based on minimal translation units and takes both source and target information into account. This mechanism has several useful properties. Firstly, no phrasal independence assumption is made: The model has access to both source and target context outside of phrases. Secondly, the model learns a unique derivation of a bilingual sentence given its alignments, thus avoiding the spurious phrasal segmentation problem. The OSM, however, uses operation N-grams (rather than tuple N-grams), which encapsulate both translation and reordering information. This allows the OSM to use lexical triggers for reordering, like phrase-based SMT. Our reordering approach is entirely different from that of the tuple N-gram model. We consider all possible orderings instead of a small set of POS-based precalculated orderings, as is used in N-gram-based SMT, which makes that approach dependent on the availability of source and target POS taggers. We show that, despite using POS tags, the reordering patterns learned by N-gram-based SMT are not as general as those learned by our model.

Combining the MTU Model with Phrase-Based Decoding. Using minimal translation units makes the search much more difficult because of poor translation coverage, inaccurate future cost estimates, and the pruning of correct hypotheses due to insufficient context. The ability to memorize and produce larger translation units gives an edge to phrase-based systems during decoding, in terms of better search performance and superior selection of translation units. In this article, we combine N-gram-based modeling with phrase-based decoding to benefit from both approaches. Our model is based on minimal translation units, but we use phrases during decoding. Through an extensive evaluation we found that this combination improves not only the search accuracy but also the BLEU scores. Our in-house phrase-based decoder outperformed state-of-the-art phrase-based (Moses and Phrasal) and N-gram-based (Ncode) systems on three translation tasks.

Comparative Experiments. Motivated by these results, we integrated the OSM into the state-of-the-art phrase-based system Moses (Koehn et al. 2007). Our aim was to directly compare the performance of the lexicalized reordering model with that of the OSM, and to see whether we can improve performance further by using both models together. Our integration of the OSM into Moses gave a statistically significant improvement over a competitive baseline system in most cases. In order to assess the contribution of improved reordering versus the contribution of better modeling with MTUs in the OSM-augmented Moses system, we removed the reordering operations from the stream of operations. This is equivalent to integrating the conventional N-gram tuple sequence model (Mariño et al. 2006) into a phrase-based decoder, as also tried by Niehues et al. (2011). Small gains were observed in most cases, showing that much of the improvement obtained by the OSM is due to better reordering.
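To make the operation-sequence story concrete, here is a sketch of unsmoothed N-gram scoring over operations, trained and evaluated on a single toy sequence for gegen ihre Kampagne stimmen – 'vote against your campaign'; the operation names are our shorthand for the kinds of operations listed above, not the paper's exact inventory:

```python
from collections import Counter

def ngram_prob(ops, n, counts):
    """Score an operation sequence with an n-gram model over operations:
    p(o_t | o_{t-n+1}, ..., o_{t-1}). `counts` holds n-gram and history
    counts from training sequences; smoothing is omitted for brevity,
    so unseen n-grams get probability 0."""
    padded = ["<s>"] * (n - 1) + list(ops)
    prob = 1.0
    for t in range(n - 1, len(padded)):
        history = tuple(padded[t - n + 1:t])
        prob *= counts[history + (padded[t],)] / max(counts[history], 1)
    return prob

# Generate the target left to right, leaving a gap for the postponed
# source verb and jumping back to fill it.
ops = ["INSERT_GAP", "GENERATE(stimmen,vote)", "JUMP_BACK(1)",
       "GENERATE(gegen,against)", "GENERATE(ihre,your)",
       "GENERATE(Kampagne,campaign)"]

counts, n = Counter(), 3
padded = ["<s>"] * (n - 1) + ops
for t in range(n - 1, len(padded)):
    counts[tuple(padded[t - n + 1:t])] += 1        # history counts
    counts[tuple(padded[t - n + 1:t + 1])] += 1    # n-gram counts
print(ngram_prob(ops, n, counts))                  # 1.0 on its training data
```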
Generalized Operation Sequence Model. The primary strength of the OSM over the lexicalized reordering model is its ability to take advantage of wider contextual information. In an error analysis, we found that the lexically driven OSM often falls back to very small context sizes because of data sparsity. We show that this problem can be addressed by learning operation sequences over generalized representations such as POS tags.

The article is organized into seven sections. Section 2 is devoted to a literature review: We discuss the pros and cons of the phrase-based and N-gram-based SMT frameworks in terms of both model and search. Section 3 presents our model; we show how it combines the benefits of both frameworks and removes their drawbacks. Section 4 provides an empirical evaluation of our preliminary system, which uses an MTU-based decoder, against state-of-the-art phrase-based (Moses and Phrasal) and N-gram-based (Ncode) systems on three standard tasks of translating German-to-English, Spanish-to-English, and French-to-English. Our results show improvements over the baseline systems, but we noticed that using minimal translation units during decoding makes the search problem difficult, which suggests using larger units in search. Section 5 presents an extension of our system that combines phrase-based decoding with the operation sequence model to address the problems in search. Section 5.1 empirically shows that the information available in phrases can be used to improve search performance and translation quality. Finally, we probe whether integrating our model into the phrase-based SMT framework addresses the mentioned drawbacks and improves translation quality. Section 6 provides an empirical evaluation of our integration on six standard tasks of translating German–English, French–English, and Spanish–English pairs. Our integration gives statistically significant improvements over submission-quality baseline systems. Section 7 concludes.

2 previous work :The phrase-based model (Koehn et al. 2003; Och and Ney 2004) segments a bilingual sentence pair into phrases that are continuous sequences of words. These phrases are then reordered through a lexicalized reordering model that takes into account the orientation of a phrase with respect to the previous phrase (Tillmann and Zhang 2005) or block of phrases (Galley and Manning 2008). Phrase-based models memorize local dependencies such as short reorderings, translations of idioms, and the insertion and deletion of words sensitive to local context. Phrase-based systems, however, have the following drawbacks.

Handling of Non-local Dependencies. Phrase-based SMT models dependencies between words and their translations inside a phrase well. However, dependencies across phrase boundaries are ignored because of the strong phrasal independence assumption. Consider the bilingual sentence pair shown in Figure 1(a). The reordering of the German word stimmen is internal to the phrase pair gegen ihre Kampagne stimmen – 'vote against your campaign' and is therefore represented by the translation model. However, the model fails to correctly translate the test sentence shown in Figure 1(b), which is translated as 'they would for the legalization of abortion in Canada vote', failing to displace the verb. The language model does not provide enough evidence to counter the dispreference of the translation model against jumping over the source words für die Legalisierung der Abtreibung in Kanada and translating stimmen – 'vote' at its correct position.

Weak Reordering Model.
The lexicalized reordering model is primarily designed to deal with short-distance movement of phrases, such as swapping two adjacent phrases, and cannot properly handle long-range jumps. The model only learns an orientation of how a phrase was reordered with respect to its previous and next phrase; it makes independence assumptions over previously translated phrases and does not take into account how previous words were translated and reordered. Although such an independence assumption is useful to reduce sparsity, it is overly generalizing and does not help to disambiguate good reorderings from bad ones. Moreover, a vast majority of extracted phrases are singletons, and the corresponding orientation-given-phrase-pair probability estimates are based on a single observation. Due to sparsity, the model falls back to using one-word phrases instead, the orientation of which is ambiguous and can only be judged based on context that is ignored. This drawback has been addressed by Cherry (2013) by using sparse features for reordering models.

Hard Distortion Limit. The lexicalized reordering model fails to filter out bad large-scale reorderings effectively (Koehn 2010). A hard distortion limit is therefore required during decoding in order to produce good translations. A distortion limit beyond eight words lets the translation accuracy drop because of search errors (Koehn et al. 2005). The use of a hard limit is undesirable for German–English and similar language pairs with significantly different syntactic structures. Several researchers have tried to address this problem. Moore and Quirk (2007) proposed improved future cost estimation to enable higher distortion limits in phrasal MT. Green, Galley, and Manning (2010) additionally proposed discriminative distortion models to achieve better translation accuracy than the baseline phrase-based system for a distortion limit of 15 words. Bisazza and Federico (2013) recently proposed a novel method to dynamically select which long-range reorderings to consider during the hypothesis extension process in a phrase-based decoder and showed an improvement in a German–English task by increasing the distortion limit to 18.

Spurious Phrasal Segmentation. A problem with the phrase-based model is that there is no unique correct phrasal segmentation of a sentence. Therefore, all possible ways of segmenting a bilingual sentence consistent with the word alignment are learned and used. This leads to two problems: (i) phrase frequencies are obtained by counting all possible occurrences in the training corpus, and (ii) different segmentations producing the same translation are generated during decoding. The former leads to questionable parameter estimates and the latter may lead to search errors because the probability of a translation is fragmented across different segmentations. Furthermore, the diversity in N-best translation lists is reduced.

N-gram-based SMT (Mariño et al. 2006) uses an N-gram model that jointly generates the source and target strings as a sequence of bilingual translation units called tuples. Tuples are essentially minimal phrases, atomic units that cannot be decomposed any further. The tuples are generated left to right in target word order. Reordering is not part of the statistical model. The parameters of the N-gram model are learned from bilingual data where the tuples have been arranged in target word order (see Figure 2).
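To make the tuple extraction concrete, here is a minimal sketch (ours, for illustration; not the actual Ncode implementation) that computes minimal translation units as connected components of the word-alignment graph and emits them in target order. The function name extract_tuples, the toy sentence pair, and the alignment links are invented for this example, and we assume every word is aligned.

    from collections import defaultdict

    def extract_tuples(src, tgt, links):
        # Group alignment links into minimal translation units (tuples):
        # the connected components of the bipartite source-target graph.
        # Assumes a fully aligned sentence pair; attaching unaligned words
        # is a separate preprocessing step (cf. Section 4.1).
        adj = defaultdict(set)
        for i, j in links:
            adj[("s", i)].add(("t", j))
            adj[("t", j)].add(("s", i))
        seen, units = set(), []
        for start in sorted(adj):
            if start in seen:
                continue
            stack, comp = [start], []
            while stack:                      # traverse one component
                node = stack.pop()
                if node in seen:
                    continue
                seen.add(node)
                comp.append(node)
                stack.extend(adj[node])
            src_idx = sorted(i for side, i in comp if side == "s")
            tgt_idx = sorted(j for side, j in comp if side == "t")
            units.append((src_idx, tgt_idx))
        units.sort(key=lambda u: u[1][0])     # left to right in TARGET order
        return [(" ".join(src[i] for i in s), " ".join(tgt[j] for j in t))
                for s, t in units]

    src = "sie würden gegen ihre Kampagne stimmen".split()
    tgt = "they would vote against your campaign".split()
    links = [(0, 0), (1, 1), (5, 2), (2, 3), (3, 4), (4, 5)]
    print(extract_tuples(src, tgt, links))
    # [('sie', 'they'), ('würden', 'would'), ('stimmen', 'vote'),
    #  ('gegen', 'against'), ('ihre', 'your'), ('Kampagne', 'campaign')]

Reading off the source sides in this order gives the linearized source (sie würden stimmen gegen ihre Kampagne) from which the tuple N-gram parameters would be estimated.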
Decoders for N-gram-based SMT reorder the source words in a preprocessing step so that the translation can be done monotonically. The reordering is performed with POS-based rewrite rules (see Figure 2 for an example) that have been learned from the training data (Crego and Mariño 2006). Word lattices are used to compactly represent a number of alternative reorderings. Using parts of speech instead of words in the rewrite rules makes them more general and helps to avoid data sparsity problems. The mechanism has several useful properties. Because it is based on minimal units, there is only one derivation for each aligned bilingual sentence pair. The model therefore avoids spurious ambiguity. The model makes no phrasal independence assumption and generates a tuple monotonically by looking at a context of n previous tuples, thus capturing context across phrasal boundaries. On the other hand, N-gram-based systems have the following drawbacks.

Weak Reordering Model. The main drawback of N-gram-based SMT is its poor reordering mechanism. Firstly, by linearizing the source, N-gram-based SMT throws away useful information about how a particular word is reordered with respect to the previous word. This information is instead stored in the form of rewrite rules, which have no influence on the translation score. The model does not learn lexical reordering triggers and reorders through the learned rules only. Secondly, search is performed only on the precalculated word permutations created based on the source-side words. Often, evidence of the correct reordering is available in the translation model and the target-side language model. All potential reorderings that are not supported by the rewrite rules are pruned in the preprocessing step. To demonstrate this, consider the bilingual sentence pair in Figure 2 again. N-gram-based MT will linearize the word sequence gegen ihre Kampagne stimmen to stimmen gegen ihre Kampagne, so that it is in the same order as the English words. At the same time, it learns a POS rule: IN PRP NN VB → VB IN PRP NN. The POS-based rewrite rules serve to precompute the orderings that will be hypothesized during decoding. However, notice that this rule cannot generalize to the test sentence in Figure 1(b), even though the tuple translation model learned the trigram < sie – 'they' würden – 'would' stimmen – 'vote' > and it is likely that the monolingual language model has seen the trigram they would vote.

Hard Reordering Limit. Due to sparsity, only rules with seven or fewer tags are extracted. This subsequently constrains the reordering window to seven or fewer words, preventing the N-gram model from hypothesizing long-range reorderings that require larger jumps. The need to perform long-distance reordering motivated the idea of using syntax trees (Crego and Mariño 2007) to form rewrite rules. However, the rules are still extracted ignoring the target side, and search is performed only on the precalculated orderings.

Difficult Search Problem. Using MTUs makes the search problem much more difficult because of poor translation option selection. To illustrate this, consider the phrase pair schoss ein Tor – 'scored a goal', consisting of the units schoss – 'scored', ein – 'a', and Tor – 'goal'. It is likely that the N-gram system does not have the tuple schoss – 'scored' in its N-best translation options because it is an uncommon translation.
Even if schoss – 'scored' is hypothesized, it will be ranked quite low in the stack and may be pruned before ein and Tor are generated in the next steps. A similar problem is also reported in Costa-jussà et al. (2007): When trying to reproduce the sentences in the N-best translation output of the phrase-based system, the N-gram-based system was able to produce only 37.5% of the sentences in the Spanish-to-English and English-to-Spanish translation task, despite having been trained on the same word alignment. A phrase-based system, on the other hand, is likely to have access to the phrasal unit schoss ein Tor – 'scored a goal' and can generate it in a single step.

3 Operation Sequence Model

Now we present a novel generative model that explains the translation process as a linear sequence of operations that generate a source and target sentence in parallel. Possible operations are (i) generation of a sequence of source and/or target words, (ii) insertion of gaps as explicit target positions for reordering operations, and (iii) forward and backward jump operations that do the actual reordering. The probability of a sequence of operations is defined according to an N-gram model, that is, the probability of an operation depends on the n − 1 preceding operations. Because the translation (generation) and reordering operations are coupled in a single generative story, the reordering decisions may depend on preceding translation decisions, and translation decisions may depend on preceding reordering decisions. This provides a natural reordering mechanism able to deal with local and long-distance reorderings consistently.

The generative story of the model is motivated by the complex reordering in the German-to-English translation task. The English words are generated in linear order,3 and the German words are generated in parallel with their English translations. Mostly, the generation is done monotonically. Occasionally the translator inserts a gap on the German side to skip some words to be generated later. Each inserted gap acts as a designated landing site for the translator to jump back to. When the translator needs to cover the skipped words, it jumps back to one of the open gaps. After this is done, the translator jumps forward again and continues the translation. We will now, step by step, present the characteristics of the new model by means of examples.

3 Generating the English words in order is also what the decoder does when translating from German to English.

3.1.1 Basic Operations. The generation of the German–English sentence pair Peter liest – 'Peter reads' is straightforward because it is a simple 1-to-1 word-based translation without reordering:

Generate (Peter, Peter) Generate (liest, reads)

3.1.2 Insertions and Deletions. The translation Es ist ja nicht so schlimm – 'it is not that bad' requires the insertion of an additional German word ja, which is used as a discourse particle in this construction:

Generate (Es, it) Generate (ist, is) Generate Source Only (ja) Generate (nicht, not) Generate (so, that) Generate (schlimm, bad)

Conversely, the translation Lies mit – 'Read with me' requires the deletion of an untranslated English word me:

Generate (Lies, Read) Generate (mit, with) Generate Target Only (me)

3.1.3 Reordering. Let us now turn to an example that requires reordering, and revisit the example in Figure 1(a). The generation of this sentence in our model starts with generating sie – 'they', followed by the generation of würden – 'would'.
Then a gap is inserted on the German side, followed by the generation of stimmen – 'vote'. At this point, the (partial) German and English sentences look as follows:

Operation sequence: Generate(sie, they) Generate(würden, would) Insert Gap Generate(stimmen, vote)
Generation: sie würden stimmen ↓ – 'they would vote'

The arrow sign ↓ denotes the position after the previously covered German word. The translation proceeds as follows. We jump back to the open gap on the German side and fill it by generating gegen – 'against', ihre – 'your', and Kampagne – 'campaign'. Let us discuss some useful properties of this mechanism:

1. We have learned a reordering pattern sie würden stimmen – 'they would vote', which can be used to generalize to the test sentence in Figure 1(b). In this case the translator jumps back and generates the tuples für – 'for', die – 'the', Legalisierung – 'legalization', der – 'of', Abtreibung – 'abortion', in – 'in', Kanada – 'Canada'.

2. The model handles both local (Figure 1(a)) and long-range reorderings (Figure 1(b)) in a unified manner, regardless of how many words separate würden and stimmen.

3. Learning the operation sequence Generate(sie, they) Generate(würden, would) Insert Gap Generate(stimmen, vote) is like learning a phrase pair sie würden X stimmen – 'they would vote'. The open gap acts as a placeholder for the skipped phrases and serves a similar purpose as the non-terminal category X in a discontinuous phrase-based system.

4. The model couples lexical generation and reordering information. Translation decisions are triggered by reordering decisions and vice versa. Notice how the reordering decision is triggered by the translation decision in the example. The probability of a gap insertion operation after the generation of the auxiliary würden – 'would' will be high because reordering is necessary in order to move the second part of the German verb complex (stimmen) to its correct position at the end of the clause.

Complex reorderings can be achieved by inserting multiple gaps and/or recursively inserting a gap within a gap. Consider the generation of the example in Figure 3 (borrowed from Chiang [2007]). The generation of this bilingual sentence pair proceeds as follows:

Generate(Aozhou, Australia) Generate(shi, is) Insert Gap Generate(zhiyi, one of)

At this point, the (partial) Chinese and English sentences look like this:

Aozhou shi zhiyi ↓
Australia is one of

The translator now jumps back and recursively inserts a gap inside of the gap before continuing the translation:

Jump Back (1) Insert Gap Generate(shaoshu, the few) Generate(guojia, countries)

Aozhou shi shaoshu guojia ↓ zhiyi
Australia is one of the few countries

The rest of the sentence pair is generated as follows:

Jump Back (1) Insert Gap Generate(de, that) Jump Back (1) Insert Gap Generate(you, have) Generate(bangjiao, diplomatic relationships) Jump Back (1) Generate(yu, with) Generate(Beihan, North Korea)

Note that the translator jumps back and opens new gaps recursively, exhibiting a property similar to the hierarchical model. However, our model uses a deterministic algorithm (see Algorithm 1 later in this article) to convert each bilingual sentence pair given the alignment to a unique derivation, thus avoiding spurious ambiguity, unlike hierarchical and phrase-based models. Multiple gaps can simultaneously exist at any time during generation. The translator decides, based on the next English word to be covered, which open gap to jump to.
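The examples above can be made executable with a small interpreter that replays an operation sequence and rebuilds the source string. The sketch below is ours, for illustration only; it covers just Generate, Insert Gap, and Jump Back, under the simplifying assumption of one-word cepts.

    def replay(ops):
        # Rebuild source and target strings from a sequence of OSM
        # operations; Continue Source Cept etc. are omitted for brevity.
        src, tgt = [], []
        pos = 0        # insertion point: after the last covered source word
        gaps = []      # positions of open gaps, in order of insertion
        for op, *args in ops:
            if op == "Generate":
                f, e = args
                src.insert(pos, f)
                # gap markers to the right of the insertion move right by one
                gaps = [g + 1 if g > pos else g for g in gaps]
                pos += 1
                tgt.append(e)
            elif op == "Insert Gap":
                gaps.append(pos)
            elif op == "Jump Back":
                k = args[0]            # 1 = closest open gap to the right
                pos = gaps.pop(-k)     # jump there and close it
        return " ".join(src), " ".join(tgt)

    ops = [("Generate", "sie", "they"), ("Generate", "würden", "would"),
           ("Insert Gap",), ("Generate", "stimmen", "vote"),
           ("Jump Back", 1), ("Generate", "gegen", "against"),
           ("Generate", "ihre", "your"), ("Generate", "Kampagne", "campaign")]
    print(replay(ops))
    # ('sie würden gegen ihre Kampagne stimmen',
    #  'they would vote against your campaign')

Replaying the operation sequence for Figure 1(a) in this way reproduces the German order, including the long-range placement of stimmen.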
Figure 4 shows a German–English subordinate clause pair. The generation of this example is carried out as follows:

Insert Gap Generate(nicht, do not) Insert Gap Generate(wollen, want to)

At this point, the (partial) German and English sentences look as follows:

nicht wollen ↓
do not want to

The inserted gaps act as placeholders for the skipped prepositional phrase über konkrete Zahlen – 'on specific figures' and the verb phrase verhandeln – 'negotiate'. When the translator decides to generate any of the skipped words, it jumps back to one of the open gaps. The Jump Back operation closes the gap that it jumps to. The translator proceeds monotonically from that point until it needs to jump again. The generation proceeds as follows:

Jump Back (1) Generate(verhandeln, negotiate)

nicht verhandeln ↓ wollen
do not want to negotiate

The translation ends by jumping back to the open gap and generating the prepositional phrase as follows:

Jump Back (1) Generate(über, on) Generate(konkrete, specific) Generate(Zahlen, figures)

5. Notice that although our model is based on minimal units, we can nevertheless memorize phrases (along with reordering information) through operation subsequences that are memorized by learning an N-gram model over these operation sequences. Some interesting phrases that our model learns are:

Phrase: nicht X wollen – 'do not want to'
Operation sub-sequence: Generate(nicht, do not) Insert Gap Generate(wollen, want to)

Phrase: verhandeln wollen – 'want to negotiate'
Operation sub-sequence: Insert Gap Generate(wollen, want to) Jump Back(1) Generate(verhandeln, negotiate)

Here, X represents the Insert Gap operation on the German side in our notation.

3.1.4 Generation of Discontinuous Source Units. Now we discuss how discontinuous source cepts can be represented in our generative model. The Insert Gap operation discussed in the previous section can also be used to generate discontinuous source cepts. The generation of any such cept is done in several steps. See the example in Figure 5. The gappy cept hat...gelesen – 'read' can be generated as shown:

Operation sequence: Generate(er, he) Generate(hat gelesen, read) Insert Gap Continue Source Cept
Generation: er hat gelesen ↓ – 'he read'

After the generation of er – 'he', the first part of the German complex verb, hat, is generated as an incomplete translation of 'read'. The second part, gelesen, is added to a queue to be generated later. A gap is then inserted for the skipped words ein and Buch. Lastly, the second word (gelesen) of the unfinished German cept hat...gelesen is added to complete the translation of 'read' through a Continue Source Cept operation. Discontinuous cepts on the English side cannot be generated analogously because of the fundamental assumption of the model that English (the target side) will be generated from left to right. This is a shortcoming of our approach, which we will discuss later in Section 4.1.

Our model uses five translation and three reordering operations, which are repeatedly applied in a sequence. The following is a definition of each of these operations.

Generate (X,Y): X and Y are German and English cepts, respectively, each with one or more words. Words in X (German) may be consecutive or discontinuous, but the words in Y (English) must be consecutive. This operation causes the words in Y and the first word in X to be added to the English and German strings, respectively, that were generated so far. Subsequent words in X are added to a queue to be generated later.
All the English words in Y are generated immediately because English (the target side) is generated in linear order as per the assumption of the model.4 The generation of the second (and subsequent) German words in a multiword cept can be delayed by gaps, jumps, and the other operations defined in the following.

4 Note that when we are translating in the opposite direction (i.e., English-to-German), then German becomes the target side and is generated monotonically, and gaps and jumps are performed on English (now the source side).

Continue Source Cept: The German words added to the queue by the Generate (X,Y) operation are generated by the Continue Source Cept operation. Each Continue Source Cept operation removes one German word from the queue and copies it to the German string. If X contains more than one German word, say n many, then it requires n translation operations: an initial Generate (X1...Xn, Y) operation and n − 1 Continue Source Cept operations. For example, kehrten...zurück – 'returned' is generated by the operation Generate (kehrten zurück, returned), which adds kehrten and 'returned' to the German and English strings and zurück to a queue. A Continue Source Cept operation later removes zurück from the queue and adds it to the German string.

Generate Source Only (X): The words in X are added at the current position in the German string. This operation is used to generate a German word with no corresponding English word. It is performed immediately after its preceding German word is covered. This is because there is no evidence on the English side that indicates when to generate X.5 Generate Source Only (X) helps us learn a source word deletion model. It is used during decoding, where a German word X is either translated to some English word(s) by a Generate (X,Y) operation or deleted with a Generate Source Only (X) operation.

Generate Target Only (Y): The words in Y are added at the current position in the English string. This operation is used to generate an English word with no corresponding German word. We do not utilize this operation in MTU-based decoding, where it is hard to predict when to add unaligned target words. We therefore modified the alignments to remove such cases by aligning unaligned target words (see Section 4.1 for details). In phrase-based decoding, however, this is not necessary, as we can easily predict unaligned target words where they are present in a phrase pair.

Generate Identical: The same word is added at the current position in both the German and English strings. The Generate Identical operation is used during decoding for the translation of unknown words. The probability of this operation is estimated from singleton German words that are translated to an identical string. For example, for a tuple QCRI – 'QCRI', where German QCRI was observed exactly once during training, we use a Generate Identical operation rather than Generate (QCRI, QCRI).

We now discuss the set of reordering operations used by the generative story. Reordering has to be performed whenever the German word to be generated next does not immediately follow the previously generated German word. During the generation process, the translator maintains an index that specifies the position after the previously covered German word (j), an index (Z) that specifies the position after the right-most German word covered so far, and the index of the next German word to be covered (j′). The set of reordering operations used in generation depends upon these indexes.
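The interplay of j, Z, and j′ can be sketched in code. The following simplified converter is ours and is not the actual Algorithm 1, which also handles multi-word and discontinuous cepts; it assumes one-word cepts and that every gap is filled completely before the translator jumps away, which holds for the examples above.

    def to_operations(src_positions):
        # src_positions lists, for each target word in order, the position
        # of its (single) aligned source word; returns the operation names.
        ops = []
        j, Z = 0, 0    # after previously covered word / after rightmost word
        gaps = []      # start positions of open gaps, left to right
        for jp in src_positions:               # jp = j' in the text
            if jp != j and (jp < j or j < Z):  # a reordering step is needed
                if jp >= Z:
                    ops.append("Jump Forward")     # move to the frontier Z
                    j = Z
                else:
                    # jump back to the gap containing j', counting from Z
                    k = next(n for n, g in enumerate(reversed(gaps), 1) if g <= jp)
                    ops.append(f"Jump Back({k})")
                    j = gaps.pop(len(gaps) - k)
            if jp > j:                         # skip words: open a gap over them
                ops.append("Insert Gap")
                gaps.append(j)
                j = jp
            ops.append(f"Generate(f[{jp}])")
            j = jp + 1
            Z = max(Z, j)
        return ops

    # Figure 1(a): sie(0) würden(1) stimmen(5) gegen(2) ihre(3) Kampagne(4)
    print(to_operations([0, 1, 5, 2, 3, 4]))
    # ['Generate(f[0])', 'Generate(f[1])', 'Insert Gap', 'Generate(f[5])',
    #  'Jump Back(1)', 'Generate(f[2])', 'Generate(f[3])', 'Generate(f[4])']

Applied to the Figure 4 clause (source positions [3, 5, 4, 0, 1, 2]), the same function emits exactly the derivation shown above: Insert Gap, Generate(nicht), Insert Gap, Generate(wollen), Jump Back(1), Generate(verhandeln), Jump Back(1), Generate(über), Generate(konkrete), Generate(Zahlen).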
Please refer to Algorithm 1 for details.

5 We want to preserve a 1-to-1 relationship between operation sequences and aligned sentence pairs. If we allowed an unaligned source word to be generated at any time, we would obtain several operation sequences that produce the same aligned sentence pair.

Insert Gap: This operation inserts a gap, which acts as a placeholder for the skipped words. There can be more than one open gap at a time.

Jump Back (W): This operation lets the translator jump back to an open gap. It takes a parameter W specifying which gap to jump to. The Jump Back (1) operation jumps to the closest gap to Z, Jump Back (2) jumps to the second closest gap to Z, and so forth. After the backward jump, the target gap is closed.

Jump Forward: This operation makes the translator jump to Z. It is performed when the next German word to be generated is to the right of the last German word generated and does not follow it immediately. It will be followed by an Insert Gap or Jump Back (W) operation if the next source word is not at position Z.

We use Algorithm 1 to convert an aligned bilingual sentence pair to a sequence of operations. Table 1 shows step by step, by means of an example (Figure 6), how the conversion is done. The values of the index variables are displayed at each point.

Table 1: Step-wise generation of the example in Figure 6. The arrow indicates position j.
Figure 6: Discontinuous cept translation.

Our model is estimated from a sequence of operations obtained through the transformation of a word-aligned bilingual corpus. An operation can be to generate source and target words or to perform reordering by inserting gaps and jumping forward and backward. Let $O = o_1, \ldots, o_J$ be a sequence of operations as hypothesized by the translator to generate a word-aligned bilingual sentence pair $\langle F, E, A \rangle$. The translation model is then defined as

$$p_T(F, E, A) = p(o_1, \ldots, o_J) = \prod_{j=1}^{J} p(o_j \mid o_{j-n+1}, \ldots, o_{j-1})$$

where n indicates the amount of context used and A defines the word-alignment function between E and F. Our translation model is implemented as an N-gram model of operations using the SRILM toolkit (Stolcke 2002) with Kneser-Ney smoothing (Kneser and Ney 1995). The translate operations in our model (the operations with a name starting with Generate) encapsulate tuples. Tuples are minimal translation units extracted from the word-aligned corpus. The idea is similar to N-gram-based SMT except that the tuples in the N-gram model are generated monotonically. We do not impose the restriction of monotonicity in our model but integrate reordering operations inside the generative model. As in the tuple N-gram model, there is a 1-to-1 correspondence between aligned sentence pairs and operation sequences, that is, we get exactly one operation sequence per bilingual sentence given its alignments. The corpus conversion algorithm (Algorithm 1) maps each bilingual sentence pair given its alignment into a unique sequence of operations deterministically, thus maintaining a 1-to-1 correspondence. This property of the model is useful because it addresses the spurious phrasal segmentation problem in phrase-based models. A phrase-based model assigns different scores to a derivation based on which phrasal segmentation is chosen. Unlike this, the OSM assigns only one score because the model does not suffer from spurious ambiguity.

3.6.1 Discriminative Model. We use a log-linear approach (Och 2003) to make use of standard features along with several novel features that we introduce to improve end-to-end accuracy.
We search for a target string E that maximizes a linear combination of feature functions:

$$\hat{E} = \arg\max_{E} \sum_{j=1}^{J} \lambda_j h_j(F, E)$$

where $\lambda_j$ is the weight associated with the feature $h_j(F, E)$. Apart from the OSM and standard features such as the target-side language model, length bonus, distortion limit, and IBM lexical features (Koehn, Och, and Marcu 2003), we used the following new features:

Deletion Penalty. Deleting a source word (Generate Source Only (X)) is a common operation in the generative story. Because there is no corresponding target-side word, the monolingual language model score tends to favor this operation. The deletion penalty counts the number of deleted source words.

Gap and Open Gap Count. These features are introduced to guide the reordering decisions. We observe a large amount of reordering in the automatically word-aligned training text. However, given only the source sentence (and little world knowledge), it is not realistic to try to model the reasons for all of this reordering. Therefore we can use a more robust model that reorders less than humans do. The gap count feature counts the total number of gaps inserted while producing a target sentence. The open gap count feature is a penalty paid once for each translation operation (Generate(X,Y), Generate Identical, Generate Source Only (X)) performed, whose value is the number of currently open gaps. This penalty controls how quickly gaps are closed.

Distance-Based Features. We have two distance-based features to control the reordering decisions. One of the features is the Gap Distance, which calculates the distance between the first word of a source cept X and the start of the leftmost gap. This cost is paid once for each translation operation (Generate, Generate Identical, Generate Source Only (X)). For a source cept covering the positions $X_1, \ldots, X_n$, we get the feature value $g_j = X_1 - S$, where S is the index of the left-most source word where a gap starts. Another distance-based penalty used in our model is the Source Gap Width. This feature only applies in the case of a discontinuous translation unit and computes the distance between the words of a gappy cept. Let $f = f_1, \ldots, f_i, \ldots, f_n$ be a gappy source cept, where $x_i$ is the index of the $i$th source word in the cept $f$. The value of the gap-width penalty is calculated as

$$w_j = \sum_{i=2}^{n} (x_i - x_{i-1} - 1)$$

4 MTU-Based Search

We explored two decoding strategies in this work. Our first decoder complements the model and only uses minimal translation units in left-to-right stack-based decoding, similar to that used in Pharaoh (Koehn 2004a). The overall process can be roughly divided into the following steps: (i) extraction of translation units, (ii) future cost estimation, (iii) hypothesis extension, and (iv) recombination and pruning. The last two steps are repeated iteratively until all the words in the source sentence have been translated. Our hypotheses maintain the index of the last source word covered (j), the position of the right-most source word covered so far (Z), the number of open gaps, the number of gaps inserted so far, the previously generated operations, the generated target string, and the accumulated values of all the features discussed in Section 3.6.1. The sequence of operations may include translation operations (generate, continue source cept, etc.) and reordering operations (gap insertions, jumps). Recombination6 is performed on hypotheses having the same coverage vector, monolingual language model context, and OSM context.
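As an illustration of the recombination step, the sketch below (ours; the field and function names are invented, not the actual decoder's) keeps, for each recombination signature, only the highest-scoring hypothesis.

    from dataclasses import dataclass

    @dataclass
    class Hypothesis:
        # Partial translation state in the MTU-based stack decoder (sketch).
        coverage: frozenset    # indices of covered source words
        lm_context: tuple      # last n-1 target words (language model state)
        osm_context: tuple     # last n-1 operations (OSM state)
        score: float = 0.0
        ops: tuple = ()        # operation history, kept for output reconstruction

    def recombination_key(hyp):
        # Hypotheses that agree on these three components rank all future
        # extensions identically, so only the best-scoring one is kept.
        return (hyp.coverage, hyp.lm_context, hyp.osm_context)

    def recombine(stack):
        best = {}
        for h in stack:
            key = recombination_key(h)
            if key not in best or h.score > best[key].score:
                best[key] = h
        return list(best.values())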
We do histogram-based pruning, maintaining the 500 best hypotheses for each stack. A large beam size is required to cope with the search errors that result from using minimal translation units during decoding. We address this problem in Section 5.

6 Note that although we are using minimal translation units, recombination is still useful as different derivations can arise through different alignments between source and target fragments. Also, recombination can still take place if hypotheses differ slightly in the output (Koehn 2010).

Aligned bilingual training corpora often contain unaligned target words and discontinuous target cepts, both of which pose problems. Unlike discontinuous source cepts, discontinuous target cepts such as hinunterschüttete – 'poured . . . down' in constructions like den Drink hinunterschüttete – 'poured the drink down' cannot be handled by the operation sequence model because it generates the English words in strict left-to-right order. Therefore they have to be eliminated. Unaligned target words are only problematic for the MTU-based decoder, which has difficulties predicting where to insert them. Thus, we eliminate unaligned target words in MTU-based decoding. We use a three-step process (Durrani, Schmid, and Fraser 2011) that modifies the alignments and removes unaligned and discontinuous targets. If a source word is aligned with multiple target words that are not consecutive, first the link to the least frequent target word is identified, and the group of links (consecutive adjacent words) containing this word is retained while the others are deleted. The intuition here is to keep the alignments containing content words (which are less frequent than function words). For example, the alignment link hinunterschüttete – 'down' is deleted and only the link hinunterschüttete – 'poured' is retained because 'down' occurs more frequently than 'poured'. Crego and Yvon (2009) used split tokens to deal with this phenomenon. For MTU-based decoding we also need to deal with unaligned target words. For each unaligned target word, we determine the (left or right) neighbor that it appears more frequently with and align it with the same source word as this neighbor. Crego, de Gispert, and Mariño (2005) and Mariño et al. (2006) instead used lexical probabilities p(f|e) obtained from IBM Model 1 (Brown et al. 1993) to decide whether to attach left or right. A more sophisticated strategy based on part-of-speech entropy was proposed by de Gispert and Mariño (2006).

We evaluated our systems on German-to-English, French-to-English, and Spanish-to-English news translation for the purpose of development and evaluation. We used data from the eighth version of the Europarl Corpus and the News Commentary made available for the translation task of the Eighth Workshop on Statistical Machine Translation.7 The bilingual corpora contained roughly 2M bilingual sentence pairs, which we obtained by concatenating the news commentary (≈ 184K sentences) and Europarl for the estimation of the translation model. Word alignments were generated with GIZA++ (Och and Ney 2003), using the grow-diag-final-and heuristic8 (Koehn et al. 2005). All data are lowercased, and we use the Moses tokenizer. We took news-test-2008 as the dev set for optimization and news-test 2009–2012 for testing. The feature weights are tuned with Z-MERT (Zaidan 2009).

4.2.1 Baseline Systems. We compared our system with (i) Moses9 (Koehn et al. 2007), (ii) Phrasal10 (Cer et al. 2010), and (iii) Ncode11 (Crego, Yvon, and Mariño 2011).
We used all these toolkits with their default settings. Phrasal provides two main extensions to Moses: a hierarchical reordering model (Galley and Manning 2008) and discontinuous source and target phrases (Galley and Manning 2010). We used the default stack sizes of 100 for Moses,12 200 for Phrasal, and 25 for Ncode (with 2n stacks). A 5-gram English language model is used. Both phrase-based systems use the 20 best translation options per source phrase; Ncode uses the 25 best tuple translations and a 4-gram tuple sequence model. A hard distortion limit of 6 is used in the default configuration of both phrase-based systems. Among the other defaults, we retained the hard source gap penalty of 15 and a target gap penalty of 7 in Phrasal. We provide Moses and Ncode with the same post-edited alignments13 from which we had removed target-side discontinuities. We feed the original alignments to Phrasal because of its ability to learn discontinuous source and target phrases. All the systems use MERT for the optimization of the weight vector.

7 http://www.statmt.org/wmt13/translation-task.html
8 We also tested other symmetrization heuristics such as "Union" and "Intersection" but found the GDFA heuristic gave the best results for all language pairs.
9 http://www.statmt.org/moses/
10 http://nlp.stanford.edu/phrasal/
11 http://www.limsi.fr/Individu/jmcrego/bincoder/

4.2.2 Training. Training steps include: (i) post-editing of the alignments (Section 4.1), (ii) generation of the operation sequences (Algorithm 1), and (iii) estimation of the N-gram translation (OSM) and language models using the SRILM toolkit (Stolcke 2002) with Kneser-Ney smoothing. We used 5-gram models.

4.2.3 Summary of Developmental Experiments. During the development of the MTU-based decoder, we performed a number of experiments to obtain optimal settings for the system. We list here a summary of the results from those experiments:

- We found that discontinuous source-side cepts do not improve translation quality in most cases but increase the decoding time severalfold. We will therefore only use continuous cepts.
- We performed experiments varying the distortion limit from the conventional window of 6 words to infinity (= no hard limit). We found that the performance of our system is robust when removing the hard reordering constraint, and we even saw a slight improvement in results in the case of the German-to-English systems. Using no distortion limit, however, significantly increases the decoding time. We will therefore use a window of 16 words, which we found to be optimal on the development set.
- The performance of the MTU-based decoder is sensitive to the stack size. A high limit of 500 is required for decent search accuracy. We will discuss this further in the next section.
- We found using the 10 best translation options for each extracted cept during decoding to be optimal.

4.2.4 Comparison with the Baseline Systems. In this section we compare our system (OSMmtu) with the three baseline systems. We used Kevin Gimpel's tester,14 which uses bootstrap resampling (Koehn 2004b), to test which of our results are significantly better than the baseline results. We mark a baseline result with "*" in order to indicate that our model shows a significant improvement over this baseline with a confidence of p < 0.05.

12 Using stack sizes from 200–1,000 did not improve results.
13 Using post-processed alignments gave better results than using the original alignments for these baseline systems.
14 http://www.ark.cs.cmu.edu/MT/
We use 1,000 samples during bootstrap resampling. Our German-to-English results (see Table 2) are significantly better than the baseline systems in most cases. Our French-to-English results show a significant improvement over Moses in three out of four cases, and over Phrasal in half of the cases. The N-gram-based system Ncode was better than or similar to our system on the French task. Our Spanish-to-English system also showed roughly the same translation quality as the baseline systems, but was significantly worse on the WMT12 task.

5 Phrase-Based Search

The MTU-based decoder is the most straightforward implementation of a decoder for the operation sequence model, but it faces search problems that cause a drop in translation accuracy. Although the OSM captures both source and target contexts and provides a better reordering mechanism, the ability to memorize and produce larger translation units gives an edge to the phrase-based model during decoding in terms of better search performance and superior selection of translation units. In this section, we combine N-gram-based modeling with phrase-based decoding. This combination not only improves search accuracy but also increases translation quality in terms of BLEU.

The operation sequence model, although based on minimal translation units, can learn larger translation chunks by memorizing a sequence of operations. However, it often has difficulties producing the same translations as the phrase-based system because of the following drawbacks of MTU-based decoding: (i) the MTU-based decoder does not have access to all the translation units that a phrase-based decoder uses as part of a larger phrase, (ii) it requires a larger beam size to prevent early pruning of correct hypotheses, and (iii) it uses less powerful future-cost estimates than the phrase-based decoder. To demonstrate these problems, consider the phrase pair Wie heißen Sie – 'What is your name', which the model memorizes through the sequence:

Generate(Wie, What is) Insert Gap Generate(Sie, your) Jump Back (1) Generate(heißen, name)

The MTU-based decoder needs three separate tuple translations to generate the same phrasal translation: Wie – 'What is', Sie – 'your', and heißen – 'name'. Here we are faced with three challenges.

Translation Coverage: The first problem is that the N-gram model does not have the same coverage of translation options. The English cepts 'What is', 'your', and 'name' are not good candidate translations for the German cepts Wie, Sie, and heißen, which are usually translated to 'How', 'you', and 'call', respectively, in isolation. When extracting tuple translations for these cepts from the Europarl data for our system, the tuple Wie – 'What is' is ranked 124th, heißen – 'name' is ranked 56th, and Sie – 'your' is ranked 9th in the list of n-best translation candidates. Typically, only the 20 best translation options are used, for the sake of efficiency, and such phrasal units with less frequent translations are never hypothesized in the N-gram-based systems. The phrase-based system, on the other hand, can extract the phrase Wie heißen Sie – 'what is your name' even if it is observed only once during training.

Larger Beam Size: Even when we allow a huge number of translation options and therefore hypothesize such units, we are faced with another challenge. A larger beam size is required in MTU-based decoding to prevent uncommon translations from getting pruned.
The phrase-based system can generate the phrase pair Wie heißen Sie – 'what is your name' in a single step, placing it directly into the stack three words to the right. The MTU-based decoder generates this phrase in three stacks with the tuple translations Wie – 'What is', Sie – 'your', and heißen – 'name'. A very large stack size is required during decoding to prevent the pruning of Wie – 'What is', which is ranked quite low in the stack until the tuple Sie – 'your' is hypothesized in the next stack. Although the translation quality achieved by phrase-based SMT remains the same when varying the beam size, the performance of our system varies drastically with different beam sizes (especially for the German–English experiments, where the search is more difficult due to a higher number of reorderings). Costa-jussà et al. (2007) also report a significant drop in the performance of N-gram-based SMT when a beam size of 10 is used instead of 50 in their experiments.

Future Cost Estimation: A third problem is caused by inaccurate future cost estimation. Using phrases helps phrase-based SMT to better estimate the future language model cost because of the larger context available, and allows the decoder to capture local (phrase-internal) reorderings in the future cost. In comparison, the future cost for tuples is based on unigram probabilities. The future cost estimate for the phrase pair Wie heißen Sie – 'What is your name' is estimated by calculating the cost of each feature. The language model cost, for example, is estimated in the phrase-based system as follows:

$$p_{lm} = p(\text{What}) \times p(\text{is} \mid \text{What}) \times p(\text{your} \mid \text{What is}) \times p(\text{name} \mid \text{What is your})$$

The translation model cost is estimated as:

$$p_{tm} = p(\text{What is your name} \mid \text{Wie heißen Sie})$$

Phrase-based SMT is aware during the preprocessing step that the words Wie heißen Sie may be translated as a phrase. This is helpful for estimating a more accurate future cost because the context is already available. The same is not true for the MTU-based decoder, to which only minimal units are available. The MTU-based decoder does not have the information during decoding that Wie heißen Sie may be translated as a phrase. The future cost estimate available to the operation sequence model for the span covering Wie heißen Sie will have unigram probabilities for both the translation and language models:

$$p_{lm} = p(\text{What}) \times p(\text{is} \mid \text{What}) \times p(\text{your}) \times p(\text{name})$$

The translation model cost is estimated as:

$$p_{tm} = p(\text{Generate(Wie, What is)}) \times p(\text{Generate(heißen, name)}) \times p(\text{Generate(Sie, your)})$$

A more accurate future cost estimate for the translation model cost would be:

$$p_{tm} = p(\text{Generate(Wie, What is)}) \times p(\text{Insert Gap} \mid C_2) \times p(\text{Generate(Sie, your)} \mid C_3) \times p(\text{Jump Back(1)} \mid C_4) \times p(\text{Generate(heißen, name)} \mid C_5)$$

where $C_i$ is the context for the generation of the ith operation, that is, up to m previous operations. For example, $C_2$ = Generate(Wie, What is), $C_3$ = Generate(Wie, What is) Insert Gap, and so on. The future cost estimates computed in this manner are much more accurate because not only do they consider context, but they also take the reordering operations into account (Durrani, Fraser, and Schmid 2013).

We extended our in-house OSM decoder to use phrases instead of MTUs during decoding. In order to check whether phrase-based decoding solves the mentioned problems and improves search accuracy, we evaluated the baseline MTU decoder and the phrase-based decoder with the same model parameters and tuned weights. This allows us to directly compare the model scores.
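The contrast between the two estimates can be mimicked with a toy operation N-gram model. The sketch below is ours, with invented probabilities; a real system would query SRILM/KenLM estimates. It scores the span once in the MTU decoder's style (translate operations only, unigram context) and once with the full operation context available to a phrase-aware decoder.

    import math

    def osm_logprob(op, context, model):
        # Back-off style lookup of p(op | context) in a toy N-gram table
        # mapping (context_tuple, op) -> probability; shortens the context
        # until a match is found.
        while True:
            p = model.get((context, op))
            if p is not None:
                return math.log(p)
            if not context:
                return math.log(1e-6)   # floor for unseen operations
            context = context[1:]

    def future_cost(ops, model, use_context):
        # use_context=True prices every operation given its history (the
        # phrase-aware estimate, including the reordering operations);
        # use_context=False mimics the MTU decoder's unigram estimate.
        cost, context = 0.0, ()
        for op in ops:
            if use_context:
                cost += osm_logprob(op, context, model)
            elif op.startswith("Generate"):
                cost += osm_logprob(op, (), model)
            context += (op,)
        return cost

    ops = ("Generate(Wie, What is)", "Insert Gap", "Generate(Sie, your)",
           "Jump Back(1)", "Generate(heißen, name)")
    model = {                                   # invented numbers
        ((), "Generate(Wie, What is)"): 0.001,
        ((), "Insert Gap"): 0.10,
        ((), "Generate(Sie, your)"): 0.01,
        ((), "Jump Back(1)"): 0.10,
        ((), "Generate(heißen, name)"): 0.002,
        (("Generate(Wie, What is)",), "Insert Gap"): 0.60,
        (("Generate(Wie, What is)", "Insert Gap"), "Generate(Sie, your)"): 0.50,
    }
    print(future_cost(ops, model, use_context=False))  # unigram estimate
    print(future_cost(ops, model, use_context=True))   # context-aware estimate

With these invented numbers the context-aware estimate is higher (less negative), reflecting that the reordering inside the memorized phrase is expected rather than penalized as improbable.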
We tuned the feature weights by running MERT with the MTU decoder on the dev set. Table 3 shows results from running both the MTU-based (OSMmtu) and the phrase-based (OSMphr) decoders on the WMT09 test set. Improved search accuracy is the percentage of times each decoder was able to produce a better model score than the other. Our phrase-based decoder uses a stack size of 200. Table 3 shows the percentage of times the MTU-based and phrase-based decoders produce better model scores than their counterpart. It shows that the phrase-based decoder produces better model scores for almost 48% of the hypotheses (on average) across the three language pairs, whereas the MTU-based decoder (using a much higher stack size [500]) produces better hypotheses 8.2% of the time on average. This improvement in search is also reflected in translation quality. Our phrase-based decoder outperforms the MTU-based decoder in all the cases and gives a significant improvement in 8 out of 12 cases (Table 4).

In Section 4.1 we discussed the problem of handling unaligned and discontinuous target words in MTU-based decoding. An advantage of phrase-based decoding is that we can use such units during decoding if they appear within the extracted phrases. We use a Generate Target Only (Y) operation whenever the unaligned target word Y occurs in a phrase. Similarly, we use the operation Generate (hinunterschüttete, poured down) when the discontinuous tuple hinunterschüttete – 'poured ... down' occurs in a phrase. While training the model, we simply ignore the discontinuity and pretend that the word 'down' immediately follows 'poured'. This can be done by linearizing the subsequent parts of discontinuous target cepts to appear after the first word of the cept. During decoding we use phrase-internal alignments to hypothesize such a linearization. This is done only for the estimation of the OSM, and the target for all other purposes is generated in its original order. This heuristic allows us to deal with target discontinuities without extending the operation sequence model in complicated ways. It results in better BLEU accuracy in comparison with the alignment post-editing method described in Section 4.1. For details and empirical results refer to Durrani et al. (2013a) (see Table 2 therein, comparing Rows 4 and 5). Note that the OSM, like the discontinuous phrase-based model (Galley and Manning 2010), allows all possible geometries, as shown in Figure 7. However, because our decoder only uses continuous phrases, we cannot hypothesize (ii) and (iii) unless they appear inside of a phrase. But our model could be integrated into a discontinuous phrase-based system to overcome this limitation.

6 Further Comparative Experiments

Our model, like the reordering models (Tillmann and Zhang 2005; Galley and Manning 2008) used in phrase-based decoders, is lexicalized. However, our model has richer conditioning, as it considers both translation and reordering context across phrasal boundaries. The lexicalized reordering model used in phrase-based SMT only accounts for how a phrase pair was reordered with respect to its previous phrase (or block of phrases). Although such an independence assumption is useful to reduce sparsity, it is overgeneralizing, with only three possible orientations. Moreover, because most of the extracted phrases are observed only once, the corresponding orientation-given-phrase-pair probability estimates are very sparse. The model often has to fall back to short one-word phrases.
However, most short phrases are observed frequently with all possible orientations during training. This makes it difficult for the decoder to decide which orientation should be picked during decoding. The model therefore overly relies on the language model to break such ties. The OSM may also suffer from data sparsity, and the back-off smoothing may fall back to very short contexts. But it might still be able to disambiguate better than the lexicalized reordering models. Also, these drawbacks can be addressed by learning an OSM over generalized word representations such as POS tags, as we show in this section.

In an effort to compare the operation sequence model with the lexicalized reordering model, we incorporate the OSM into the phrase-based Moses decoder. This allows us to compare the two models exactly, in identical settings. We integrate the OSM into the hypothesis extension process of the phrase-based decoder. We convert each phrase pair into a sequence of operations by extracting the MTUs within the phrase pair and using phrase-internal alignments. The OSM is used as a feature in the log-linear framework. We also use four supportive features: the Gap, Open Gap, Gap-distance, and Deletion counts, as described earlier (see Section 3.6.1).

Our Moses (Koehn et al. 2007) baseline systems are based on the setup described in Durrani et al. (2013b). We trained our systems with the following settings: maximum sentence length 80, grow-diag-final-and symmetrization of GIZA++ alignments, an interpolated Kneser-Ney smoothed 5-gram language model with KenLM (Heafield 2011) used at runtime, a distortion limit of 6, minimum Bayes-risk decoding (Kumar and Byrne 2004), cube pruning (Huang and Chiang 2007), and the no-reordering-over-punctuation heuristic. We used factored models (Koehn and Hoang 2007) for German–English and English–German. We trained the lexicalized reordering model (Koehn et al. 2005) with the msd-bidirectional-fe settings.

Table 5 shows that the OSM results in higher gains than the lexicalized reordering model on top of a plain phrase-based baseline (Pb). The average improvement obtained using the lexicalized reordering model (Pblex) over the baseline (Pb) is 0.50. In comparison, the average improvement obtained by using the OSM (Pbosm) over the baseline (Pb) is 0.74. The average improvement obtained by the combination (Pblex+osm) is 0.97. The average improvement obtained by adding the OSM on top of the baseline (Pblex) is 0.47. We tested for significance and found that in seven out of eight cases adding the OSM on top of Pblex gives a statistically significant improvement with a confidence of p < 0.05. Significant differences are marked with an asterisk.

In an additional experiment, we studied how much the translation quality decreases when all reordering operations are removed from the operation sequence model during training and decoding. The resulting model is similar to the tuple sequence model (TSM) of Mariño et al. (2006), except that we use phrase-internal reordering rather than POS-based rewrite rules to do the source linearization. Table 6 shows an average improvement of just 0.13 on top of the baseline phrase-based system with lexicalized reordering, which is much lower than the 0.46 points obtained with the full operation sequence model. Bilingual translation models (without reordering) have been integrated into phrase-based systems before, either inside the decoder (Niehues et al.
2011) or to rerank the N-best candidate translations in the output of a phrase-based system (Zhang et al. 2013). Both groups reported improvements of similar magnitude when using a target-order left-to-right TSM model for German–English and French–English translation with shared task data, but higher gains on other data sets and language pairs. Zhang et al. (2013) showed further gains by combining models with target and source left-to-right and right-to-left orders. The assumption of generating the target in monotonic order is a weakness of our work that can be addressed following Zhang et al. (2013). By generating MTUs in source order and allowing gaps and jumps on the target side, the model will be able to learn other reordering patterns that are ignored by the standard OSM.

Because of data sparsity, it is impossible to observe all possible reordering patterns with all possible lexical choices in translation operations. The lexically driven OSM therefore often backs off to very small context sizes. Consider the example shown in Figure 1. The learned pattern sie würden stimmen – 'they would vote' cannot be generalized to er würde wählen – 'he would vote'. We found that the OSM uses only two preceding operations as context on average. This problem can be addressed by replacing words with POS tags (or any other generalized representation, such as morphological tags or word clusters) to allow the model to consider a wider syntactic context where this is appropriate, thus improving the lexical decisions and the reordering capability of the model. Crego and Yvon (2010) and Niehues et al. (2011) have shown improvements in translation quality when using a TSM model over POS units. We estimate OSMs over generalized tags and add these as separate features to the log-linear framework.15

Experiments. We enabled factored sequence models (Koehn and Hoang 2007) for the German–English language pairs, as these have been shown to be useful previously. We used LoPar (Schmid 2000) to obtain the morphological analysis and POS annotation of German, and MXPOST (Ratnaparkhi 1998), a maximum entropy tagger, for English POS tags. We simply estimate OSMs over POS tags16 by replacing the words with the corresponding tags during training. Table 7 shows that a system with an additional POS-based OSM (Pblex+osm(s)+osm(p)) gives an average improvement of +0.26 over the baseline (Pblex+osm(s)) system that uses an OSM over surface forms only. The overall gain from using OSMs over the baseline system is +0.70. The OSM over surface forms considers 3-grams on average, and the OSM over POS tags considers 4.5-grams on average, thus considering wider contextual information when making translation and reordering decisions.

Table 8 shows the wall-clock decoding time (in minutes) from running the Moses decoder (on news-test2013) with and without the OSMs. Each decoder is run with 24 threads on a machine with 140GB RAM and 24 processors. Timings vary between experiments because the machines were somewhat busy in some cases. But generally, the OSM increases decoding time by more than half an hour.17 Table 9 shows the overall sizes of the phrase-based translation and reordering models along with the OSMs. It also shows the model sizes when filtered on news-test2013. A similar amount of reduction could be achieved by applying filtering to the OSMs, following the language model filtering described by Heafield and Lavie (2010).

15 We also tried to amalgamate the lexically driven OSM and generalized OSMs into a single model rather than using these as separate features.
However, this attempt was unsuccessful (see Durrani et al. [2014] for details).

16 We also found using morphological tags and automatic word clusters to be useful in our recent IWSLT evaluation campaign (Birch, Durrani, and Koehn 2013; Durrani et al. 2014).

17 The code for the OSM in Moses can be greatly optimized but requires major modifications to the source and target phrase classes in Moses.

7 Conclusion

In this article we presented a new model for statistical MT that combines the benefits of two state-of-the-art SMT frameworks, namely, N-gram-based and phrase-based SMT. Like the N-gram-based model, it addresses two drawbacks of phrasal MT by better handling dependencies across phrase boundaries and by solving the phrasal segmentation problem. In contrast to N-gram-based MT, our model has a generative story that tightly couples translation and reordering. Furthermore, it is able to consider all possible reorderings, unlike N-gram systems, which perform search only on a limited number of pre-calculated orderings. Our model is able to correctly reorder words across large distances, and it memorizes frequent phrasal translations, including their reordering, as probable operation sequences.

We tested a version of our system that decodes based on minimal translation units (MTUs) against the state-of-the-art phrase-based systems Moses and Phrasal and the N-gram-based system Ncode for German-to-English, French-to-English, and Spanish-to-English on three standard test sets. Our system shows statistically significant improvements in 9 out of 12 cases in the German-to-English translation task, and in 10 out of 12 cases in the French-to-English translation task. Our Spanish-to-English results are similar to the baseline systems in most of the cases but consistently worse than Ncode.

MTU-based decoding suffers from poor translation coverage, inaccurate future cost estimates, and pruning of correct hypotheses. Phrase-based SMT, on the other hand, avoids these drawbacks by using larger translation chunks during search. We therefore extended our decoder to use phrases instead of cepts while keeping the statistical model unchanged. We found that combining a model based on minimal units with phrase-based decoding improves both search accuracy and translation quality. Our system extended with phrase-based decoding showed improvements over all the baseline systems, including our MTU-based decoder. In most of the cases, the difference was significant.

Our results show that the OSM consistently outperforms the Moses lexicalized reordering model and gives statistically significant gains over a very competitive Moses baseline system. We showed that considering both translation and reordering context is important and that ignoring the reordering context results in a significant reduction in performance. We also showed that an OSM based on surface forms suffers from data sparsity and that an OSM based on a generalized representation with part-of-speech tags improves the translation quality by considering a larger context.

In the future we would like to study whether the insight of using minimal units for modeling and search based on composed rules would hold for hierarchical SMT. Vaswani et al.
Vaswani et al. (2011) recently showed that a Markov model over the derivation history of minimal rules can obtain the same translation quality as grammars formed with composed rules, which we believe is quite promising.
abstract :In this article, we present a novel machine translation model, the Operation Sequence Model (OSM), which combines the benefits of phrase-based and N-gram-based statistical machine translation (SMT) and remedies their drawbacks. The model represents the translation process as a linear sequence of operations. The sequence includes not only translation operations but also reordering operations. As in N-gram-based SMT, the model (i) is based on minimal translation units, (ii) takes both source and target information into account, (iii) does not make a phrasal independence assumption, and (iv) avoids the spurious phrasal segmentation problem. As in phrase-based SMT, the model (i) has the ability to memorize lexical reordering triggers, (ii) builds the search graph dynamically, and (iii) decodes with large translation units during search. The unique properties of the model are (i) its strong coupling of reordering and translation, where translation and reordering decisions are conditioned on n previous translation and reordering decisions, and (ii) the ability to model local and long-range reorderings consistently. Using BLEU as a metric of translation accuracy, we found that our system performs significantly better than state-of-the-art phrase-based and N-gram-based baselines in most cases.
authors :Nadir Durrani (QCRI, Qatar), Helmut Schmid (LMU Munich), Alexander Fraser, Philipp Koehn, and Hinrich Schütze.
references :
Birch, Alexandra, Nadir Durrani, and Philipp Koehn. 2013. Edinburgh SLT and MT System Description for the IWSLT 2013 Evaluation. In Proceedings of the 10th International Workshop on Spoken Language Translation.
Bisazza, Arianna and Marcello Federico. 2013. Efficient Solutions for Word Reordering in German-English Phrase-Based Statistical Machine Translation. In Proceedings of the Eighth Workshop on Statistical Machine Translation.
Brown, Peter F., Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The Mathematics of Statistical Machine Translation: Parameter Estimation. Computational Linguistics, 19(2):263-311.
Casacuberta, Francisco and Enrique Vidal. 2004. Machine Translation with Inferred Stochastic Finite-State Transducers. Computational Linguistics, 30:205-225.
Cer, Daniel, Michel Galley, Daniel Jurafsky, and Christopher D. Manning. 2010. Phrasal: A Statistical Machine Translation Toolkit for Exploring New Model Features. In Proceedings of the North American Chapter of the Association for Computational Linguistics.
Cherry, Colin. 2013. Improved Reordering for Phrase-Based Translation Using Sparse Features. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics.
Chiang, David. 2007. Hierarchical Phrase-Based Translation. Computational Linguistics, 33(2):201-228.
Costa-jussà, Marta R., Josep M. Crego, David Vilar, José A. R. Fonollosa, José B. Mariño, and Hermann Ney. 2007. Analysis and System Combination of Phrase- and N-Gram-Based Statistical Machine Translation Systems.
Crego, Josep M. and José B. Mariño. 2006. Improving Statistical MT by Coupling Reordering and Decoding. Machine Translation, 20(3):199-215.
Crego, Josep M. and José B. Mariño. 2007. Syntax-Enhanced N-gram-Based SMT. In Proceedings of the 11th Machine Translation Summit, pages 111-118, Copenhagen.
Crego, Josep M. and François Yvon. 2009. Gappy Translation Units under Left-to-Right SMT Decoding. In Proceedings of the Meeting of the European Association for Machine Translation.
Crego, Josep M. and François Yvon. 2010. Improving Reordering with Linguistically Informed Bilingual N-Grams. In COLING 2010: Posters, pages 197-205, Beijing.
Crego, Josep M., François Yvon, and José B. Mariño. 2011. Ncode: An Open Source Bilingual N-gram SMT Toolkit. The Prague Bulletin of Mathematical Linguistics, 96:49-58.
Durrani, Nadir, Alexander Fraser, and Helmut Schmid. 2013. Model With Minimal Translation Units, But Decode With Phrases. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics.
Durrani, Nadir, Alexander Fraser, Helmut Schmid, Hieu Hoang, and Philipp Koehn. 2013. Can Markov Models Over Minimal Translation Units Help Phrase-Based SMT? In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics.
Durrani, Nadir, Barry Haddow, Kenneth Heafield, and Philipp Koehn. 2013. Edinburgh's Machine Translation Systems for European Language Pairs. In Proceedings of the Eighth Workshop on Statistical Machine Translation.
Durrani, Nadir, Philipp Koehn, Helmut Schmid, and Alexander Fraser. 2014. Investigating the Usefulness of Generalized Word Representations in SMT. In Proceedings of the 25th International Conference on Computational Linguistics (COLING).
Durrani, Nadir, Helmut Schmid, and Alexander Fraser. 2011. A Joint Sequence Translation Model with Integrated Reordering. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics.
Galley, Michel and Christopher D. Manning. 2008. A Simple and Effective Hierarchical Phrase Reordering Model. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 848-856.
Galley, Michel and Christopher D. Manning. 2010. Accurate Non-Hierarchical Phrase-Based Translation. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics.
Gispert, Adrià de and José B. Mariño. 2006. Linguistic Tuple Segmentation in N-Gram-Based Statistical Machine Translation. In INTERSPEECH, pages 1149-1152, Pittsburgh, PA.
Green, Spence, Michel Galley, and Christopher D. Manning. 2010. Improved Models of Distortion Cost for Statistical Machine Translation. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics.
Heafield, Kenneth. 2011. KenLM: Faster and Smaller Language Model Queries. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 187-197, Edinburgh.
Heafield, Kenneth and Alon Lavie. 2010. Combining Machine Translation Output with Open Source: The Carnegie Mellon Multi-Engine Machine Translation Scheme. The Prague Bulletin of Mathematical Linguistics.
Huang, Liang and David Chiang. 2007. Forest Rescoring: Faster Decoding with Integrated Language Models. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics.
Kneser, Reinhard and Hermann Ney. 1995. Improved Backing-off for M-gram Language Modeling. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, pages 181-184.
Koehn, Philipp. 2004a. Pharaoh: A Beam Search Decoder for Phrase-Based Statistical Machine Translation Models. In Proceedings of the Association for Machine Translation in the Americas, pages 115-124.
Koehn, Philipp. 2004b. Statistical Significance Tests for Machine Translation Evaluation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 388-395, Barcelona.
Koehn, Philipp. 2010. Statistical Machine Translation. Cambridge University Press.
Koehn, Philipp, Amittai Axelrod, Alexandra Birch, Chris Callison-Burch, Miles Osborne, and David Talbot. 2005. Edinburgh System Description for the 2005 IWSLT Speech Translation Evaluation. In Proceedings of the International Workshop on Spoken Language Translation.
Koehn, Philipp and Hieu Hoang. 2007. Factored Translation Models. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning.
Koehn, Philipp, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondřej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open Source Toolkit for Statistical Machine Translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics: Demonstrations.
Koehn, Philipp, Franz J. Och, and Daniel Marcu. 2003. Statistical Phrase-Based Translation. In Proceedings of the 2003 Meeting of the North American Chapter of the Association for Computational Linguistics, pages 127-133.
Kumar, Shankar and William J. Byrne. 2004. Minimum Bayes-Risk Decoding for Statistical Machine Translation. In Human Language Technologies: The 2004 Annual Conference of the North American Chapter of the Association for Computational Linguistics.
Mariño, José B., Rafael E. Banchs, Josep M. Crego, Adrià de Gispert, Patrik Lambert, José A. R. Fonollosa, and Marta R. Costa-jussà. 2006. N-gram-Based Machine Translation. Computational Linguistics, 32(4):527-549.
Moore, Robert and Chris Quirk. 2007. Faster Beam Search Decoding for Phrasal Statistical Machine Translation. In Proceedings of the 11th Machine Translation Summit, Copenhagen.
Niehues, Jan, Teresa Herrmann, Stephan Vogel, and Alex Waibel. 2011. Wider Context by Using Bilingual Language Models in Machine Translation. In Proceedings of the Sixth Workshop on Statistical Machine Translation.
Och, Franz J. 2003. Minimum Error Rate Training in Statistical Machine Translation. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 160-167, Sapporo.
Och, Franz J. and Hermann Ney. 2003. A Systematic Comparison of Various Statistical Alignment Models. Computational Linguistics, 29(1):19-51.
Och, Franz J. and Hermann Ney. 2004. The Alignment Template Approach to Statistical Machine Translation. Computational Linguistics, 30(4):417-449.
Ratnaparkhi, Adwait. 1998. Maximum Entropy Models for Natural Language Ambiguity Resolution. Ph.D. thesis, University of Pennsylvania, Philadelphia, PA.
Schmid, Helmut. 2000. LoPar: Design and Implementation. Bericht des Sonderforschungsbereiches "Sprachtheoretische Grundlagen für die Computerlinguistik." Technical report.
Stolcke, Andreas. 2002. SRILM - An Extensible Language Modeling Toolkit. In Proceedings of the International Conference on Spoken Language Processing, Denver, CO.
Tillmann, Christoph and Tong Zhang. 2005. A Localized Prediction Model for Statistical Machine Translation. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics.
Vaswani, Ashish, Haitao Mi, Liang Huang, and David Chiang. 2011. Rule Markov Models for Fast Tree-to-String Translation. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics.
Zaidan, Omar F. 2009. Z-MERT: A Fully Configurable Open Source Tool for Minimum Error Rate Training of Machine Translation Systems. The Prague Bulletin of Mathematical Linguistics.
Zhang, Hui, Kristina Toutanova, Chris Quirk, and Jianfeng Gao. 2013. Beyond Left-to-Right: Multiple Decomposition Structures for SMT. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics.
acknowledgments :We would like to thank the anonymous reviewers and Andreas Maletti and François Yvon for their helpful feedback and suggestions. The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreements 287658 (EU-Bridge) and 287688 (MateCat). Alexander Fraser was funded by Deutsche Forschungsgemeinschaft grant Models of Morphosyntax for Statistical Machine Translation. Helmut Schmid was supported by Deutsche Forschungsgemeinschaft grant SFB 732. This publication only reflects the authors' views.
1 introduction :Statistical Machine Translation (SMT) advanced near the beginning of the century from word-based models (Brown et al. 1993) towards more advanced models that take contextual information into account. Phrase-based (Koehn, Och, and Marcu 2003; Och and Ney 2004) and N-gram-based (Casacuberta and Vidal 2004; Mariño et al. 2006) models are two instances of such frameworks. Although the two models share some properties, they are substantially different. The present work is a step towards combining the benefits and remedying the flaws of these two frameworks. Phrase-based systems have a simple but effective mechanism that learns larger chunks of translation called bilingual phrases.1 Memorizing larger units enables the phrase-based model to learn local dependencies such as short-distance reorderings, idiomatic collocations, and insertions and deletions that are internal to the phrase pair.
The model, however, has the following drawbacks: (i) it makes independence assumptions over phrases, ignoring contextual information outside of phrases, (ii) the reordering model has difficulties dealing with long-range reorderings, (iii) problems in both search and modeling require the use of a hard reordering limit, and (iv) it suffers from the spurious phrasal segmentation problem, which allows multiple derivations of a bilingual sentence pair that have the same word alignment but different model scores. N-gram-based models are Markov models over sequences of tuples that are generated monotonically. Tuples are minimal translation units (MTUs) composed of source and target cepts.2 The N-gram-based model has the following drawbacks: (i) only pre-calculated orderings are hypothesized during decoding, (ii) it cannot memorize and use lexical reordering triggers, (iii) it cannot perform long-distance reorderings, and (iv) using tuples presents a more difficult search problem than in phrase-based SMT. The Operation Sequence Model. In this article we present a novel model that tightly integrates translation and reordering into a single generative process. Our model explains the translation process as a linear sequence of operations that generates a source and target sentence in parallel, in target left-to-right order. Possible operations are (i) generation of a sequence of source and target words, (ii) insertion of gaps as explicit target positions for reordering operations, and (iii) forward and backward jump operations that do the actual reordering. The probability of a sequence of operations is defined according to an N-gram model, that is, the probability of an operation depends on the n − 1 preceding operations. Because the translation (lexical generation) and reordering operations are coupled in a single generative story, reordering decisions may depend on preceding translation decisions, and translation decisions may depend on preceding reordering decisions. This provides a natural reordering mechanism that is able to deal with local and long-distance reorderings in a consistent way.
1 A phrase pair in phrase-based SMT is a pair of sequences of words. The sequences are not necessarily linguistic constituents. Phrase pairs are built by combining minimal translation units and ordering information. As is customary, we use the term phrase to refer to phrase pairs if there is no ambiguity.
2 A cept is a group of source (or target) words connected to a group of target (or source) words in a particular alignment (Brown et al. 1993).
Like the N-gram-based SMT model, the operation sequence model (OSM) is based on minimal translation units and takes both source and target information into account. This mechanism has several useful properties. Firstly, no phrasal independence assumption is made: the model has access to both source and target context outside of phrases. Secondly, the model learns a unique derivation of a bilingual sentence given its alignments, thus avoiding the spurious phrasal segmentation problem. The OSM, however, uses operation N-grams (rather than tuple N-grams), which encapsulate both translation and reordering information. This allows the OSM to use lexical triggers for reordering, like phrase-based SMT. Our reordering approach is entirely different from that of the tuple N-gram model.
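To make the generative story concrete, the following minimal sketch (our illustration, not the authors' implementation; the Python tuple encoding of operations is an assumption made for readability) writes out the operation sequence that generates the Figure 1(a) pair, with the verb stimmen translated across an inserted gap:

```python
# Operation sequence for the Figure 1(a) example:
#   sie würden gegen ihre Kampagne stimmen -> 'they would vote against your campaign'
# Operation names follow the article; the tuple encoding is illustrative.

GENERATE, INSERT_GAP, JUMP_BACK = "Generate", "Insert Gap", "Jump Back"

operations = [
    (GENERATE, "sie", "they"),
    (GENERATE, "würden", "would"),
    (INSERT_GAP,),                   # placeholder for the skipped prepositional phrase
    (GENERATE, "stimmen", "vote"),   # verb generated at its English position
    (JUMP_BACK, 1),                  # return to the closest open gap
    (GENERATE, "gegen", "against"),
    (GENERATE, "ihre", "your"),
    (GENERATE, "Kampagne", "campaign"),
]
```

An N-gram model over such sequences can then assign a high probability to the reordering pattern Generate(würden, would) Insert Gap Generate(stimmen, vote) regardless of what fills the gap.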
We consider all possible orderings instead of a small set of POS-based pre-calculated orderings, as is used in N-gram-based SMT, which makes that approach dependent on the availability of a source and target POS tagger. We show that, despite using POS tags, the reordering patterns learned by N-gram-based SMT are not as general as those learned by our model. Combining the MTU Model with Phrase-Based Decoding. Using minimal translation units makes the search much more difficult because of poor translation coverage, inaccurate future cost estimates, and the pruning of correct hypotheses due to insufficient context. The ability to memorize and produce larger translation units gives an edge to phrase-based systems during decoding, in terms of better search performance and superior selection of translation units. In this article, we combine N-gram-based modeling with phrase-based decoding to benefit from both approaches. Our model is based on minimal translation units, but we use phrases during decoding. Through an extensive evaluation we found that this combination improves not only the search accuracy but also the BLEU scores. Our in-house phrase-based decoder outperformed the state-of-the-art phrase-based (Moses and Phrasal) and N-gram-based (Ncode) systems on three translation tasks. Comparative Experiments. Motivated by these results, we integrated the OSM into the state-of-the-art phrase-based system Moses (Koehn et al. 2007). Our aim was to directly compare the performance of the lexicalized reordering model to the OSM and to see whether we can improve performance further by using both models together. Our integration of the OSM into Moses gave a statistically significant improvement over a competitive baseline system in most cases. In order to assess the contribution of improved reordering versus the contribution of better modeling with MTUs in the OSM-augmented Moses system, we removed the reordering operations from the stream of operations. This is equivalent to integrating the conventional N-gram tuple sequence model (Mariño et al. 2006) into a phrase-based decoder, as also tried by Niehues et al. (2011). Small gains were observed in most cases, showing that much of the improvement obtained by the OSM is due to better reordering. Generalized Operation Sequence Model. The primary strength of the OSM over the lexicalized reordering model is its ability to take advantage of wider contextual information. In an error analysis we found that the lexically driven OSM often falls back to very small context sizes because of data sparsity. We show that this problem can be addressed by learning operation sequences over generalized representations such as POS tags. The article is organized into seven sections. Section 2 is devoted to a literature review; we discuss the pros and cons of the phrase-based and N-gram-based SMT frameworks in terms of both model and search. Section 3 presents our model; we show how it combines the benefits of both frameworks and removes their drawbacks. Section 4 provides an empirical evaluation of our preliminary system, which uses an MTU-based decoder, against state-of-the-art phrase-based (Moses and Phrasal) and N-gram-based (Ncode) systems on three standard tasks of translating German-to-English, Spanish-to-English, and French-to-English. Our results show improvements over the baseline systems, but we noticed that using minimal translation units during decoding makes the search problem difficult, which suggests using larger units in search.
Section 5 presents an extension of our system that combines phrase-based decoding with the operation sequence model to address the problems in search. Section 5.1 empirically shows that the information available in phrases can be used to improve search performance and translation quality. Finally, we probe whether integrating our model into the phrase-based SMT framework addresses the mentioned drawbacks and improves translation quality. Section 6 provides an empirical evaluation of our integration on six standard tasks of translating the German–English, French–English, and Spanish–English pairs. Our integration gives statistically significant improvements over submission-quality baseline systems. Section 7 concludes.
2 previous work :The phrase-based model (Koehn, Och, and Marcu 2003; Och and Ney 2004) segments a bilingual sentence pair into phrases that are continuous sequences of words. These phrases are then reordered through a lexicalized reordering model that takes into account the orientation of a phrase with respect to its previous phrase (Tillmann and Zhang 2005) or block of phrases (Galley and Manning 2008). Phrase-based models memorize local dependencies such as short reorderings, translations of idioms, and the insertion and deletion of words sensitive to local context. Phrase-based systems, however, have the following drawbacks. Handling of Non-local Dependencies. Phrase-based SMT models dependencies between words and their translations inside of a phrase well. However, dependencies across phrase boundaries are ignored because of the strong phrasal independence assumption. Consider the bilingual sentence pair shown in Figure 1(a). Reordering of the German word stimmen is internal to the phrase pair gegen ihre Kampagne stimmen – 'vote against your campaign' and is therefore represented by the translation model. However, the model fails to correctly translate the test sentence shown in Figure 1(b), which is translated as 'they would for the legalization of abortion in Canada vote', failing to displace the verb. The language model does not provide enough evidence to counter the dispreference of the translation model against jumping over the source words für die Legalisierung der Abtreibung in Kanada and translating stimmen – 'vote' at its correct position. Weak Reordering Model. The lexicalized reordering model is primarily designed to deal with short-distance movement of phrases, such as swapping two adjacent phrases, and cannot properly handle long-range jumps. The model only learns an orientation of how a phrase was reordered with respect to its previous and next phrase; it makes independence assumptions over previously translated phrases and does not take into account how previous words were translated and reordered. Although such an independence assumption is useful to reduce sparsity, it overgeneralizes and does not help to disambiguate good reorderings from bad ones. Moreover, the vast majority of extracted phrases are singletons, and the corresponding orientation-given-phrase-pair probability estimates are based on a single observation. Due to sparsity, the model falls back to using one-word phrases instead, the orientation of which is ambiguous and can only be judged based on context that is ignored. This drawback has been addressed by Cherry (2013) by using sparse features for reordering models. Hard Distortion Limit. The lexicalized reordering model fails to filter out bad large-scale reorderings effectively (Koehn 2010).
A hard distortion limit is therefore required during decoding in order to produce good translations. A distortion limit beyond eight words lets translation accuracy drop because of search errors (Koehn et al. 2005). The use of a hard limit is undesirable for German–English and similar language pairs with significantly different syntactic structures. Several researchers have tried to address this problem. Moore and Quirk (2007) proposed improved future cost estimation to enable higher distortion limits in phrasal MT. Green, Galley, and Manning (2010) additionally proposed discriminative distortion models to achieve better translation accuracy than the baseline phrase-based system for a distortion limit of 15 words. Bisazza and Federico (2013) recently proposed a novel method to dynamically select which long-range reorderings to consider during the hypothesis extension process in a phrase-based decoder and showed an improvement in a German–English task by increasing the distortion limit to 18. Spurious Phrasal Segmentation. A problem with the phrase-based model is that there is no unique correct phrasal segmentation of a sentence. Therefore, all possible ways of segmenting a bilingual sentence consistent with the word alignment are learned and used. This leads to two problems: (i) phrase frequencies are obtained by counting all possible occurrences in the training corpus, and (ii) different segmentations producing the same translation are generated during decoding. The former leads to questionable parameter estimates, and the latter may lead to search errors because the probability of a translation is fragmented across different segmentations. Furthermore, the diversity of N-best translation lists is reduced. N-gram-based SMT (Mariño et al. 2006) uses an N-gram model that jointly generates the source and target strings as a sequence of bilingual translation units called tuples. Tuples are essentially minimal phrases, atomic units that cannot be decomposed any further. The tuples are generated left to right in target word order. Reordering is not part of the statistical model. The parameters of the N-gram model are learned from bilingual data where the tuples have been arranged in target word order (see Figure 2). Decoders for N-gram-based SMT reorder the source words in a preprocessing step so that the translation can be done monotonically. The reordering is performed with POS-based rewrite rules (see Figure 2 for an example) that have been learned from the training data (Crego and Mariño 2006). Word lattices are used to compactly represent a number of alternative reorderings. Using parts of speech instead of words in the rewrite rules makes them more general and helps to avoid data sparsity problems. The mechanism has several useful properties. Because it is based on minimal units, there is only one derivation for each aligned bilingual sentence pair; the model therefore avoids spurious ambiguity. The model makes no phrasal independence assumption and generates a tuple monotonically by looking at a context of n previous tuples, thus capturing context across phrasal boundaries. On the other hand, N-gram-based systems have the following drawbacks. Weak Reordering Model. The main drawback of N-gram-based SMT is its poor reordering mechanism. Firstly, by linearizing the source, N-gram-based SMT throws away useful information about how a particular word is reordered with respect to the previous word.
This information is instead stored in the form of rewrite rules, which have no influence on the translation score. The model does not learn lexical reordering triggers and reorders through the learned rules only. Secondly, search is performed only on the pre-calculated word permutations created from the source-side words. Often, evidence for the correct reordering is available in the translation model and the target-side language model, but all potential reorderings that are not supported by the rewrite rules are pruned in the preprocessing step. To demonstrate this, consider the bilingual sentence pair in Figure 2 again. N-gram-based MT will linearize the word sequence gegen ihre Kampagne stimmen to stimmen gegen ihre Kampagne, so that it is in the same order as the English words. At the same time, it learns a POS rule: IN PRP NN VB → VB IN PRP NN. The POS-based rewrite rules serve to precompute the orderings that will be hypothesized during decoding. However, notice that this rule cannot generalize to the test sentence in Figure 1(b), even though the tuple translation model learned the trigram <sie – 'they' würden – 'would' stimmen – 'vote'> and it is likely that the monolingual language model has seen the trigram they would vote. Hard Reordering Limit. Due to sparsity, only rules with seven or fewer tags are extracted. This constrains the reordering window to seven or fewer words, preventing the N-gram model from hypothesizing long-range reorderings that require larger jumps. The need to perform long-distance reordering motivated the idea of using syntax trees (Crego and Mariño 2007) to form rewrite rules. However, the rules are still extracted ignoring the target side, and search is performed only on the pre-calculated orderings. Difficult Search Problem. Using MTUs makes the search problem much more difficult because of poor translation option selection. To illustrate this, consider the phrase pair schoss ein Tor – 'scored a goal', consisting of the units schoss – 'scored', ein – 'a', and Tor – 'goal'. It is likely that the N-gram system does not have the tuple schoss – 'scored' in its N-best translation options because it is an uncommon translation. Even if schoss – 'scored' is hypothesized, it will be ranked quite low in the stack and may be pruned before ein and Tor are generated in the next steps. A similar problem is also reported in Costa-jussà et al. (2007): when trying to reproduce the sentences in the N-best translation output of the phrase-based system, the N-gram-based system was able to produce only 37.5% of the sentences in the Spanish-to-English and English-to-Spanish translation tasks, despite having been trained on the same word alignment. A phrase-based system, on the other hand, is likely to have access to the phrasal unit schoss ein Tor – 'scored a goal' and can generate it in a single step.
3 operation sequence model :Now we present a novel generative model that explains the translation process as a linear sequence of operations that generate a source and target sentence in parallel. Possible operations are (i) generation of a sequence of source and/or target words, (ii) insertion of gaps as explicit target positions for reordering operations, and (iii) forward and backward jump operations that do the actual reordering. The probability of a sequence of operations is defined according to an N-gram model, that is, the probability of an operation depends on the n − 1 preceding operations.
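As a rough illustration of this definition (our sketch, not the paper's implementation: the article estimates the model with SRILM and Kneser-Ney smoothing, whereas here a plain probability table with a crude floor stands in for smoothed estimates), the probability of an operation sequence factorizes as follows:

```python
import math

def operation_sequence_log_prob(ops, prob, n=5):
    """log p(o_1 ... o_J) = sum_j log p(o_j | o_{j-n+1} ... o_{j-1}):
    each operation is conditioned on up to n-1 preceding operations."""
    total = 0.0
    for j, op in enumerate(ops):
        context = tuple(ops[max(0, j - n + 1):j])
        total += math.log(prob.get((context, op), 1e-10))  # floor in place of smoothing
    return total
```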
Because the translation (generation) and reordering operations are coupled in a single generative story, the reordering decisions may depend on preceding translation decisions, and translation decisions may depend on preceding reordering decisions. This provides a natural reordering mechanism able to deal with local and long-distance reorderings consistently. The generative story of the model is motivated by the complex reordering in the German-to-English translation task. The English words are generated in linear order,3 and the German words are generated in parallel with their English translations. Mostly, the generation is done monotonically. Occasionally the translator inserts a gap on the German side to skip some words to be generated later. Each inserted gap acts as a designated landing site for the translator to jump back to. When the translator needs to cover the skipped words, it jumps back to one of the open gaps. After this is done, the translator jumps forward again and continues the translation. We will now present, step by step, the characteristics of the new model by means of examples.
3 Generating the English words in order is also what the decoder does when translating from German to English.
3.1.1 Basic Operations. The generation of the German–English sentence pair Peter liest – 'Peter reads' is straightforward because it is a simple 1-to-1 word-based translation without reordering: Generate(Peter, Peter) Generate(liest, reads).
3.1.2 Insertions and Deletions. The translation Es ist ja nicht so schlimm – 'it is not that bad' requires the insertion of the additional German word ja, which is used as a discourse particle in this construction: Generate(Es, it) Generate(ist, is) Generate Source Only (ja) Generate(nicht, not) Generate(so, that) Generate(schlimm, bad). Conversely, the translation Lies mit – 'Read with me' requires the deletion of the untranslated English word me: Generate(Lies, Read) Generate(mit, with) Generate Target Only (me).
3.1.3 Reordering. Let us now turn to an example that requires reordering and revisit the example in Figure 1(a). The generation of this sentence in our model starts with generating sie – 'they', followed by the generation of würden – 'would'. Then a gap is inserted on the German side, followed by the generation of stimmen – 'vote'. At this point, the (partial) German and English sentences look as follows:
Operation sequence: Generate(sie, they) Generate(würden, would) Insert Gap Generate(stimmen, vote)
Generation: sie würden □ stimmen ↓ – 'they would vote'
The arrow ↓ denotes the position after the previously covered German word, and □ marks an open gap. The translation proceeds as follows: we jump back to the open gap on the German side and fill it by generating gegen – 'against', ihre – 'your', and Kampagne – 'campaign'. Let us discuss some useful properties of this mechanism:
1. We have learned a reordering pattern sie würden stimmen – 'they would vote', which generalizes to the test sentence in Figure 1(b). In this case the translator jumps back and generates the tuples für – 'for', die – 'the', Legalisierung – 'legalization', der – 'of', Abtreibung – 'abortion', in – 'in', Kanada – 'Canada'.
2. The model handles both local (Figure 1(a)) and long-range reorderings (Figure 1(b)) in a unified manner, regardless of how many words separate würden and stimmen.
3. Learning the operation sequence Generate(sie, they) Generate(würden, would) Insert Gap Generate(stimmen, vote) is like learning a phrase pair sie würden □ stimmen – 'they would vote'. The open gap □ acts as a placeholder for the skipped phrases and serves a similar purpose as the non-terminal category X in a discontinuous phrase-based system.
4. The model couples lexical generation and reordering information: translation decisions are triggered by reordering decisions and vice versa. Notice how the reordering decision is triggered by the translation decision in the example. The probability of a gap insertion operation after the generation of the auxiliary würden – 'would' will be high, because reordering is necessary in order to move the second part of the German verb complex (stimmen) to its correct position at the end of the clause.
Complex reorderings can be achieved by inserting multiple gaps and/or recursively inserting a gap within a gap. Consider the generation of the example in Figure 3 (borrowed from Chiang [2007]). The generation of this bilingual sentence pair proceeds as follows: Generate(Aozhou, Australia) Generate(shi, is) Insert Gap Generate(zhiyi, one of). At this point, the (partial) Chinese and English sentences look like this:
Aozhou shi □ zhiyi ↓ – 'Australia is one of'
The translator now jumps back and recursively inserts a gap inside of the gap before continuing the translation: Jump Back(1) Insert Gap Generate(shaoshu, the few) Generate(guojia, countries), giving
Aozhou shi □ shaoshu guojia ↓ zhiyi – 'Australia is one of the few countries'
The rest of the sentence pair is generated as follows: Jump Back(1) Insert Gap Generate(de, that) Jump Back(1) Insert Gap Generate(you, have) Generate(bangjiao, diplomatic relationships) Jump Back(1) Generate(yu, with) Generate(Beihan, North Korea). Note that the translator jumps back and opens new gaps recursively, exhibiting a property similar to the hierarchical model. However, our model uses a deterministic algorithm (see Algorithm 1 later in this article) to convert each bilingual sentence pair, given its alignment, into a unique derivation, thus avoiding the spurious ambiguity of hierarchical and phrase-based models. Multiple gaps can exist simultaneously at any time during generation; the translator decides, based on the next English word to be covered, which open gap to jump to. Figure 4 shows a German–English subordinate clause pair. The generation of this example is carried out as follows: Insert Gap Generate(nicht, do not) Insert Gap Generate(wollen, want to). At this point, the (partial) German and English sentences look as follows:
□ nicht □ wollen ↓ – 'do not want to'
The inserted gaps act as placeholders for the skipped prepositional phrase über konkrete Zahlen – 'on specific figures' and the verb phrase verhandeln – 'negotiate'. When the translator decides to generate any of the skipped words, it jumps back to one of the open gaps. The Jump Back operation closes the gap that it jumps to. The translator proceeds monotonically from that point until it needs to jump again. The generation proceeds as follows: Jump Back(1) Generate(verhandeln, negotiate), giving
□ nicht verhandeln ↓ wollen – 'do not want to negotiate'
The translation ends by jumping back to the open gap and generating the prepositional phrase: Jump Back(1) Generate(über, on) Generate(konkrete, specific) Generate(Zahlen, figures).
5.
Notice that although our model is based on minimal units, we can nevertheless memorize phrases (along with reordering information) through operation subsequences, which are memorized by learning an N-gram model over the operation sequences. Some interesting phrases that our model learns are:
nicht □ wollen – 'do not want to': Generate(nicht, do not) Insert Gap Generate(wollen, want to)
verhandeln wollen – 'want to negotiate': Insert Gap Generate(wollen, want to) Jump Back(1) Generate(verhandeln, negotiate)
Here □ represents the Insert Gap operation on the German side in our notation.
3.1.4 Generation of Discontinuous Source Units. Now we discuss how discontinuous source cepts can be represented in our generative model. The Insert Gap operation discussed in the previous section can also be used to generate discontinuous source cepts. The generation of any such cept is done in several steps. See the example in Figure 5. The gappy cept hat...gelesen – 'read' can be generated as follows:
Operation sequence: Generate(er, he) Generate(hat...gelesen, read) Insert Gap Continue Source Cept
Generation: er hat □ gelesen ↓ – 'he read'
After the generation of er – 'he', the first part of the German complex verb, hat, is generated as an incomplete translation of 'read'. The second part, gelesen, is added to a queue to be generated later. A gap is then inserted for the skipped words ein and Buch. Lastly, the second word (gelesen) of the unfinished German cept hat...gelesen is added to complete the translation of 'read' through a Continue Source Cept operation. Discontinuous cepts on the English side cannot be generated analogously because of the fundamental assumption of the model that English (the target side) is generated from left to right. This is a shortcoming of our approach, which we discuss later in Section 4.1. Our model uses five translation and three reordering operations, which are repeatedly applied in a sequence. The following is a definition of each of these operations. Generate (X,Y): X and Y are German and English cepts, respectively, each with one or more words. Words in X (German) may be consecutive or discontinuous, but the words in Y (English) must be consecutive. This operation causes the words in Y and the first word in X to be added to the English and German strings, respectively, that were generated so far. Subsequent words in X are added to a queue to be generated later. All the English words in Y are generated immediately because English (the target side) is generated in linear order, as per the assumption of the model.4 The generation of the second (and subsequent) German words in a multiword cept can be delayed by gaps, jumps, and the other operations defined in the following.
4 Note that when we are translating in the opposite direction (i.e., English-to-German), German becomes the target side and is generated monotonically, and gaps and jumps are performed on English (now the source side).
Continue Source Cept: The German words added to the queue by the Generate (X,Y) operation are generated by the Continue Source Cept operation. Each Continue Source Cept operation removes one German word from the queue and copies it to the German string. If X contains more than one German word, say n of them, then the cept requires n translation operations: an initial Generate (X1...Xn, Y) operation and n − 1 Continue Source Cept operations. For example, kehrten...zurück – 'returned' is generated by the operation Generate(kehrten...zurück, returned), which adds kehrten and 'returned' to the German and English strings and zurück to a queue. A Continue Source Cept operation later removes zurück from the queue and adds it to the German string. Generate Source Only (X): The words in X are added at the current position in the German string. This operation is used to generate a German word with no corresponding English word. It is performed immediately after its preceding German word is covered, because there is no evidence on the English side that indicates when to generate X.5 Generate Source Only (X) helps us learn a source word deletion model. It is used during decoding, where a German word X is either translated to some English word(s) by a Generate (X,Y) operation or deleted with a Generate Source Only (X) operation. Generate Target Only (Y): The words in Y are added at the current position in the English string. This operation is used to generate an English word with no corresponding German word. We do not utilize this operation in MTU-based decoding, where it is hard to predict when to add unaligned target words; we therefore modified the alignments to remove such cases by aligning unaligned target words (see Section 4.1 for details). In phrase-based decoding, however, this is not necessary, as we can easily predict unaligned target words where they are present in a phrase pair. Generate Identical: The same word is added at the current position in both the German and English strings. The Generate Identical operation is used during decoding for the translation of unknown words. The probability of this operation is estimated from singleton German words that are translated to an identical string. For example, for a tuple QCRI – 'QCRI', where German QCRI was observed exactly once during training, we use a Generate Identical operation rather than Generate (QCRI, QCRI). We now discuss the set of reordering operations used by the generative story. Reordering has to be performed whenever the German word to be generated next does not immediately follow the previously generated German word. During the generation process, the translator maintains an index that specifies the position after the previously covered German word (j), an index (Z) that specifies the position after the right-most German word covered so far, and the index of the next German word to be covered (j′). The set of reordering operations used in generation depends on these indexes; please refer to Algorithm 1 for details.
5 We want to preserve a 1-to-1 relationship between operation sequences and aligned sentence pairs. If we allowed an unaligned source word to be generated at any time, we would obtain several operation sequences that produce the same aligned sentence pair.
Insert Gap: This operation inserts a gap, which acts as a placeholder for the skipped words. There can be more than one open gap at a time. Jump Back (W): This operation lets the translator jump back to an open gap. It takes a parameter W specifying which gap to jump to: Jump Back (1) jumps to the gap closest to Z, Jump Back (2) jumps to the second-closest gap to Z, and so forth. After the backward jump, the target gap is closed. Jump Forward: This operation makes the translator jump to Z. It is performed when the next German word to be generated is to the right of the last generated German word and does not follow it immediately.
It will be followed by an Insert Gap or Jump Back (W) operation if the next source word is not at position Z. We use Algorithm 1 to convert an aligned bilingual sentence pair into a sequence of operations. Table 1 shows, step by step and by means of an example (Figure 6), how the conversion is done; the values of the index variables are displayed at each point.
Table 1: Step-wise generation of the example in Figure 6. The arrow indicates position j.
Figure 6: Discontinuous cept translation.
Our model is estimated from a sequence of operations obtained through the transformation of a word-aligned bilingual corpus. An operation can generate source and target words or perform reordering by inserting gaps and jumping forward and backward. Let O = o_1, ..., o_J be a sequence of operations as hypothesized by the translator to generate a word-aligned bilingual sentence pair <F, E, A>. The translation model is then defined as
p_T(F, E, A) = p(o_1, ..., o_J) = ∏_{j=1}^{J} p(o_j | o_{j−n+1}, ..., o_{j−1}),
where n indicates the amount of context used and A defines the word alignment between E and F. Our translation model is implemented as an N-gram model of operations using the SRILM toolkit (Stolcke 2002) with Kneser-Ney smoothing (Kneser and Ney 1995). The translation operations in our model (the operations whose names start with Generate) encapsulate tuples. Tuples are minimal translation units extracted from the word-aligned corpus. The idea is similar to N-gram-based SMT, except that the tuples in the N-gram model are generated monotonically. We do not impose the restriction of monotonicity in our model but integrate reordering operations inside the generative model. As in the tuple N-gram model, there is a 1-to-1 correspondence between aligned sentence pairs and operation sequences, that is, we get exactly one operation sequence per bilingual sentence given its alignments. The corpus conversion algorithm (Algorithm 1) maps each bilingual sentence pair, given its alignment, into a unique sequence of operations deterministically, thus maintaining the 1-to-1 correspondence. This property of the model is useful because it addresses the spurious phrasal segmentation problem of phrase-based models: a phrase-based model assigns different scores to a derivation based on which phrasal segmentation is chosen, whereas the OSM assigns only one score because it does not suffer from spurious ambiguity. 3.6.1 Discriminative Model. We use a log-linear approach (Och 2003) to make use of standard features along with several novel features that we introduce to improve end-to-end accuracy. We search for the target string E that maximizes a linear combination of feature functions:
Ê = argmax_E Σ_{j=1}^{J} λ_j h_j(F, E),
where λ_j is the weight associated with the feature h_j(F, E). Apart from the OSM and standard features such as the target-side language model, length bonus, distortion limit, and IBM lexical features (Koehn, Och, and Marcu 2003), we used the following new features. Deletion Penalty. Deleting a source word (Generate Source Only (X)) is a common operation in the generative story. Because there is no corresponding target-side word, the monolingual language model score tends to favor this operation. The deletion penalty counts the number of deleted source words. Gap and Open Gap Count. These features are introduced to guide the reordering decisions. We observe a large amount of reordering in the automatically word-aligned training text.
However, given only the source sentence (and little world knowledge), it is not realistic to try to model the reasons for all of this reordering. Therefore we can use a more robust model that reorders less than humans do. The gap count feature is the total number of gaps inserted while producing a target sentence. The open gap count feature is a penalty paid once for each translation operation (Generate (X,Y), Generate Identical, Generate Source Only (X)) performed; its value is the number of currently open gaps. This penalty controls how quickly gaps are closed. Distance-Based Features. We have two distance-based features to control the reordering decisions. The first is the Gap Distance, which calculates the distance between the first word of a source cept X and the start of the leftmost gap. This cost is paid once for each translation operation (Generate, Generate Identical, Generate Source Only (X)). For a source cept covering the positions X_1, ..., X_n, we get the feature value g_j = X_1 − S, where S is the index of the leftmost source word where a gap starts. The second distance-based penalty used in our model is the Source Gap Width. This feature applies only in the case of a discontinuous translation unit and computes the distance between the words of a gappy cept. Let f = f_1, ..., f_n be a gappy source cept, where x_i is the index of the i-th source word in the cept f. The value of the gap-width penalty is calculated as
w_j = Σ_{i=2}^{n} (x_i − x_{i−1} − 1).
4 mtu-based search :We explored two decoding strategies in this work. Our first decoder complements the model and uses only minimal translation units in left-to-right stack-based decoding, similar to that used in Pharaoh (Koehn 2004a). The overall process can be roughly divided into the following steps: (i) extraction of translation units, (ii) future cost estimation, (iii) hypothesis extension, and (iv) recombination and pruning. The last two steps are repeated iteratively until all the words in the source sentence have been translated. Our hypotheses maintain the index of the last source word covered (j), the position of the right-most source word covered so far (Z), the number of open gaps, the number of gaps inserted so far, the previously generated operations, the generated target string, and the accumulated values of all the features discussed in Section 3.6.1. The sequence of operations may include translation operations (generate, continue source cept, etc.) and reordering operations (gap insertions, jumps). Recombination6 is performed on hypotheses having the same coverage vector, monolingual language model context, and OSM context. We do histogram-based pruning, maintaining the 500 best hypotheses for each stack. A large beam size is required to cope with the search errors that result from using minimal translation units during decoding. We address this problem in Section 5.
6 Note that although we are using minimal translation units, recombination is still useful, as different derivations can arise through different alignments between source and target fragments. Also, recombination can still take place if hypotheses differ slightly in the output (Koehn 2010).
Aligned bilingual training corpora often contain unaligned target words and discontinuous target cepts, both of which pose problems. Unlike discontinuous source cepts, discontinuous target cepts such as hinunterschüttete – 'poured ... down' in constructions like den Drink hinunterschüttete – 'poured the drink down' cannot be handled by the operation sequence model, because it generates the English words in strict left-to-right order. Therefore they have to be eliminated. Unaligned target words are only problematic for the MTU-based decoder, which has difficulties predicting where to insert them; thus, we eliminate unaligned target words in MTU-based decoding. We use a three-step process (Durrani, Schmid, and Fraser 2011) that modifies the alignments and removes unaligned and discontinuous targets. If a source word is aligned with multiple target words that are not consecutive, first the link to the least frequent target word is identified, and the group of links (consecutive adjacent words) containing this word is retained while the others are deleted. The intuition here is to keep the alignments containing content words (which are less frequent than function words). For example, the alignment link hinunterschüttete – 'down' is deleted and only the link hinunterschüttete – 'poured' is retained, because 'down' occurs more frequently than 'poured'. Crego and Yvon (2009) used split tokens to deal with this phenomenon. For MTU-based decoding we also need to deal with unaligned target words. For each unaligned target word, we determine the (left or right) neighbor that it appears with more frequently and align it with the same source word as this neighbor. Crego, de Gispert, and Mariño (2005) and Mariño et al. (2006) instead used lexical probabilities p(f|e) obtained from IBM Model 1 (Brown et al. 1993) to decide whether to attach left or right. A more sophisticated strategy based on part-of-speech entropy was proposed by Gispert and Mariño (2006). We evaluated our systems on German-to-English, French-to-English, and Spanish-to-English news translation for the purpose of development and evaluation. We used data from the eighth version of the Europarl Corpus and the News Commentary corpus made available for the translation task of the Eighth Workshop on Statistical Machine Translation.7 The bilingual corpora contained roughly 2M bilingual sentence pairs, which we obtained by concatenating the news commentary (≈ 184K sentences) and Europarl data for the estimation of the translation model. Word alignments were generated with GIZA++ (Och and Ney 2003), using the grow-diag-final-and heuristic8 (Koehn et al. 2005). All data are lowercased, and we use the Moses tokenizer. We took news-test-2008 as the dev set for optimization and news-test-2009 through news-test-2012 for testing. The feature weights are tuned with Z-MERT (Zaidan 2009). 4.2.1 Baseline Systems. We compared our system with (i) Moses9 (Koehn et al. 2007), (ii) Phrasal10 (Cer et al. 2010), and (iii) Ncode11 (Crego, Yvon, and Mariño 2011). We used all these toolkits with their default settings. Phrasal provides two main extensions to Moses: a hierarchical reordering model (Galley and Manning 2008) and discontinuous source and target phrases (Galley and Manning 2010). We used the default stack sizes of 100 for Moses,12 200 for Phrasal, and 25 for Ncode (with 2n stacks). A 5-gram English language model is used.
7 http://www.statmt.org/wmt13/translation-task.html
8 We also tested other symmetrization heuristics such as "Union" and "Intersection" but found that the GDFA heuristic gave the best results for all language pairs.
9 http://www.statmt.org/moses/
10 http://nlp.stanford.edu/phrasal/
11 http://www.limsi.fr/Individu/jmcrego/bincoder/
Both phrase-based systems use the 20 best translation options per source phrase; Ncode uses the 25 best tuple translations and a 4-gram tuple sequence model. A hard distortion limit of 6 is used in the default configuration of both phrase-based systems. Among the other defaults, we retained the hard source gap penalty of 15 and the target gap penalty of 7 in Phrasal. We provide Moses and Ncode with the same post-edited alignments13 from which we had removed target-side discontinuities. We feed the original alignments to Phrasal because of its ability to learn discontinuous source and target phrases. All the systems use MERT for the optimization of the weight vector. 4.2.2 Training. The training steps include: (i) post-editing of the alignments (Section 4.1), (ii) generation of the operation sequence (Algorithm 1), and (iii) estimation of the N-gram translation (OSM) and language models using the SRILM toolkit (Stolcke 2002) with Kneser-Ney smoothing. We used 5-gram models. 4.2.3 Summary of Developmental Experiments. During the development of the MTU-based decoder, we performed a number of experiments to obtain optimal settings for the system. A summary of the results: We found that discontinuous source-side cepts do not improve translation quality in most cases but increase the decoding time severalfold; we therefore use only continuous cepts. We performed experiments varying the distortion limit from the conventional window of 6 words to infinity (no hard limit). We found that the performance of our system is robust when the hard reordering constraint is removed, and we even saw a slight improvement in results for the German-to-English systems. Using no distortion limit, however, significantly increases the decoding time; we therefore use a window of 16 words, which we found to be optimal on the development set. The performance of the MTU-based decoder is sensitive to the stack size: a high limit of 500 is required for decent search accuracy (we discuss this further in the next section). Finally, we found using the 10 best translation options for each extracted cept during decoding to be optimal. 4.2.4 Comparison with the Baseline Systems. In this section we compare our system (OSMmtu) with the three baseline systems. We used Kevin Gimpel's tester,14 which uses bootstrap resampling (Koehn 2004b), to test which of our results are significantly better than the baseline results. We mark a baseline result with "*" to indicate that our model shows a significant improvement over this baseline with a confidence of p < 0.05. We use 1,000 samples during bootstrap resampling.
12 Using stack sizes from 200-1,000 did not improve results.
13 Using post-processed alignments gave better results than using the original alignments for these baseline systems.
14 http://www.ark.cs.cmu.edu/MT/
Our German-to-English results (see Table 2) are significantly better than the baseline systems in most cases. Our French-to-English results show a significant improvement over Moses in three out of four cases, and over Phrasal in half of the cases. The N-gram-based system Ncode was better than or similar to our system on the French task. Our Spanish-to-English system also showed roughly the same translation quality as the baseline systems, but was significantly worse on the WMT12 task.
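For reference, the significance test works roughly as follows (a simplified sketch under our assumptions: we resample precomputed per-sentence quality scores, whereas the actual test of Koehn [2004b] resamples sentences and recomputes corpus-level BLEU for every sample):

```python
import random

def paired_bootstrap(system_scores, baseline_scores, samples=1000, seed=1):
    """Estimate how often the system beats the baseline on resampled test sets;
    a win rate above 0.95 corresponds to significance at p < 0.05."""
    rng = random.Random(seed)
    n, wins = len(system_scores), 0
    for _ in range(samples):
        idx = [rng.randrange(n) for _ in range(n)]  # resample the test set with replacement
        if sum(system_scores[i] for i in idx) > sum(baseline_scores[i] for i in idx):
            wins += 1
    return wins / samples
```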
5 phrase-based search :The MTU-based decoder is the most straightforward implementation of a decoder for the operation sequence model, but it faces search problems that cause a drop in translation accuracy. Although the OSM captures both source and target contexts and provides a better reordering mechanism, the ability to memorize and produce larger translation units gives an edge to the phrase-based model during decoding, in terms of better search performance and superior selection of translation units. In this section, we combine N-gram-based modeling with phrase-based decoding. This combination not only improves search accuracy but also increases translation quality in terms of BLEU. The operation sequence model, although based on minimal translation units, can learn larger translation chunks by memorizing a sequence of operations. However, it often has difficulty producing the same translations as the phrase-based system because of the following drawbacks of MTU-based decoding: (i) the MTU-based decoder does not have access to all the translation units that a phrase-based decoder uses as part of a larger phrase, (ii) it requires a larger beam size to prevent early pruning of correct hypotheses, and (iii) it uses less powerful future-cost estimates than the phrase-based decoder. To demonstrate these problems, consider the phrase pair Wie heißen Sie – ‘What is your name’, which the model memorizes through the sequence: Generate(Wie, What is) Insert Gap Generate(Sie, your) Jump Back(1) Generate(heißen, name). The MTU-based decoder needs three separate tuple translations to generate the same phrasal translation: Wie – ‘What is’, Sie – ‘your’, and heißen – ‘name’. Here we are faced with three challenges. Translation Coverage: The first problem is that the N-gram model does not have the same coverage of translation options. The English cepts ‘What is’, ‘your’, and ‘name’ are not good candidate translations for the German cepts Wie, Sie, and heißen, which in isolation are usually translated as ‘How’, ‘you’, and ‘call’, respectively. When extracting tuple translations for these cepts from the Europarl data for our system, the tuple Wie – ‘What is’ is ranked 124th, heißen – ‘name’ is ranked 56th, and Sie – ‘your’ is ranked 9th in the list of n-best translation candidates. Typically, only the 20 best translation options are used, for the sake of efficiency, and such phrasal units with less frequent translations are never hypothesized in the N-gram-based systems. The phrase-based system, on the other hand, can extract the phrase Wie heißen Sie – ‘what is your name’ even if it is observed only once during training. Larger Beam Size: Even when we allow a huge number of translation options and therefore hypothesize such units, we are faced with another challenge. A larger beam size is required in MTU-based decoding to prevent uncommon translations from getting pruned. The phrase-based system can generate the phrase pair Wie heißen Sie – ‘what is your name’ in a single step, placing it directly into the stack three words to the right. The MTU-based decoder generates this phrase in three stacks with the tuple translations Wie – ‘What is’, Sie – ‘your’, and heißen – ‘name’. A very large stack size is required during decoding to prevent the pruning of Wie – ‘What is’, which is ranked quite low in the stack until the tuple Sie – ‘your’ is hypothesized in the next stack. Although the translation quality achieved by phrase-based SMT remains the same when the beam size is varied, the performance of our system varies drastically with different beam sizes (especially for the German–English experiments, where the search is more difficult because of a higher number of reorderings). Costa-jussà et al. (2007) also report a significant drop in the performance of N-gram-based SMT when a beam size of 10 is used instead of 50 in their experiments.
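Before turning to the third problem, it is worth making explicit how such an operation sequence is scored. The following minimal sketch applies an N-gram model over operations to the sequence memorized above; the interface ngram_prob(history, op) is an assumed stand-in for a smoothed model estimated with, for example, SRILM, and is not an API of that toolkit.

import math

def osm_logprob(operations, ngram_prob, n=5):
    # p(o_1 .. o_J) = product over j of p(o_j | o_{j-n+1} .. o_{j-1});
    # ngram_prob is a hypothetical callback onto a smoothed N-gram model.
    logp = 0.0
    for j, op in enumerate(operations):
        history = tuple(operations[max(0, j - n + 1):j])
        logp += math.log(ngram_prob(history, op))
    return logp

ops = ["Generate(Wie, What is)", "Insert Gap", "Generate(Sie, your)",
       "Jump Back(1)", "Generate(heißen, name)"]
# osm_logprob(ops, model) scores translation and reordering decisions jointly,
# which is what lets the model use lexical reordering triggers.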
Future Cost Estimation: A third problem is caused by inaccurate future cost estimation. Using phrases helps phrase-based SMT to better estimate the future language model cost because of the larger context available, and allows the decoder to capture local (phrase-internal) reorderings in the future cost. In comparison, the future cost for tuples is based on unigram probabilities. The future cost estimate for the phrase pair Wie heißen Sie – ‘What is your name’ is obtained by calculating the cost of each feature. The language model cost, for example, is estimated in the phrase-based system as
plm = p(What) × p(is|What) × p(your|What is) × p(name|What is your)
and the translation model cost as
ptm = p(What is your name|Wie heißen Sie)
Phrase-based SMT knows from the preprocessing step that the words Wie heißen Sie may be translated as a phrase. This helps in estimating a more accurate future cost because the context is already available. The same is not true for the MTU-based decoder, to which only minimal units are available: it does not know during decoding that Wie heißen Sie may be translated as a phrase. The future cost estimates available to the operation sequence model for the span covering Wie heißen Sie therefore fall back to unigram probabilities for both the language model and the translation model:
plm = p(What) × p(is|What) × p(your) × p(name)
ptm = p(Generate(Wie, What is)) × p(Generate(heißen, name)) × p(Generate(Sie, your))
A more accurate future cost estimate for the translation model would be
ptm = p(Generate(Wie, What is)) × p(Insert Gap|C2) × p(Generate(Sie, your)|C3) × p(Jump Back(1)|C4) × p(Generate(heißen, name)|C5)
where Ci is the context for the generation of the ith operation, that is, up to m previous operations. For example, C1 = Generate(Wie, What is), C2 = Generate(Wie, What is) Insert Gap, and so on. Future cost estimates computed in this manner are much more accurate, because they not only consider context but also take the reordering operations into account (Durrani, Fraser, and Schmid 2013).
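To make the gap between the two estimates concrete, consider the following toy computation. The log-probabilities below are invented purely for illustration; the point is that the MTU-based estimate falls back to unigrams for ‘your’ and ‘name’, whose rarity makes the estimate very pessimistic.

# Toy log10 probabilities, invented for illustration only.
lm = {("What",): -2.0, ("is", "What"): -0.5, ("your", "What is"): -0.7,
      ("name", "What is your"): -0.4, ("your",): -2.5, ("name",): -3.0}

# Phrase-based future cost for the LM feature: the phrase-internal context
# 'What is your name' is already available in the preprocessing step.
plm_phrase = (lm[("What",)] + lm[("is", "What")] +
              lm[("your", "What is")] + lm[("name", "What is your")])  # = -3.6

# MTU-based future cost: the decoder does not know that the three tuples will
# combine into one phrase, so 'your' and 'name' are scored as unigrams.
plm_mtu = lm[("What",)] + lm[("is", "What")] + lm[("your",)] + lm[("name",)]  # = -8.0

# The gap (4.4 in log10 space here) systematically overestimates the cost of
# spans that a phrase would cover, which distorts the search.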
We extended our in-house OSM decoder to use phrases instead of MTUs during decoding. To check whether phrase-based decoding solves the problems noted above and improves search accuracy, we evaluated the baseline MTU decoder and the phrase-based decoder with the same model parameters and tuned weights, which allows us to compare model scores directly. We tuned the feature weights by running MERT with the MTU decoder on the dev set. Table 3 shows results from running both the MTU-based (OSMmtu) and the phrase-based (OSMphr) decoder on the WMT09 test set: the percentage of times each decoder produced a better model score than its counterpart. Our phrase-based decoder uses a stack size of 200. The phrase-based decoder produces better model scores for almost 48% of the hypotheses (on average) across the three language pairs, whereas the MTU-based decoder (despite its much larger stack size of 500) produces better hypotheses only 8.2% of the time on average. This improvement in search is also reflected in translation quality. Our phrase-based decoder outperforms the MTU-based decoder in all cases and gives a significant improvement in 8 out of 12 cases (Table 4). In Section 4.1 we discussed the problem of handling unaligned and discontinuous target words in MTU-based decoding. An advantage of phrase-based decoding is that we can use such units during decoding if they appear within the extracted phrases. We use a Generate Target Only (Y) operation whenever the unaligned target word Y occurs in a phrase. Similarly, we use the operation Generate(hinunterschüttete, poured down) when the discontinuous tuple hinunterschüttete – ‘poured ... down’ occurs in a phrase. While training the model, we simply ignore the discontinuity and pretend that the word ‘down’ immediately follows ‘poured’. This is done by linearizing the subsequent parts of a discontinuous target cept to appear after the first word of the cept (see the sketch at the end of this section). During decoding we use phrase-internal alignments to hypothesize such a linearization. This is done only for the estimation of the OSM; for all other purposes the target is generated in its original order. This heuristic allows us to deal with target discontinuities without extending the operation sequence model in complicated ways, and it yields better BLEU scores than the alignment post-editing method described in Section 4.1. For details and empirical results, refer to Durrani et al. (2013a) (see Table 2 therein, comparing Rows 4 and 5). Note that the OSM, like the discontinuous phrase-based model (Galley and Manning 2010), allows all possible geometries, as shown in Figure 7. However, because our decoder only uses continuous phrases, we cannot hypothesize (ii) and (iii) unless they appear inside a phrase. But our model could be integrated into a discontinuous phrase-based system to overcome this limitation.
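The linearization step just described can be sketched in a few lines. This is our illustration of the heuristic, with hypothetical names, not code from the actual system.

def linearize_cept(target_words, cept_positions):
    # Move the later parts of a discontinuous target cept so that they
    # immediately follow its first word; used only when estimating the OSM.
    first, rest = cept_positions[0], set(cept_positions[1:])
    out = []
    for i, word in enumerate(target_words):
        if i in rest:
            continue  # later cept parts are re-inserted after the first part
        out.append(word)
        if i == first:
            out.extend(target_words[j] for j in sorted(rest))
    return out

# linearize_cept(['poured', 'the', 'drink', 'down'], [0, 3])
# -> ['poured', 'down', 'the', 'drink'], so that the tuple
# hinunterschüttete – 'poured down' can be generated as one unit.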
6 further comparative experiments :Our model, like the reordering models (Tillmann and Zhang 2005; Galley and Manning 2008) used in phrase-based decoders, is lexicalized. However, our model has richer conditioning, as it considers both translation and reordering context across phrasal boundaries. The lexicalized reordering model used in phrase-based SMT only accounts for how a phrase pair was reordered with respect to its previous phrase (or block of phrases). Although such an independence assumption is useful to reduce sparsity, it is overly general, allowing only three possible orientations. Moreover, because most of the extracted phrases are observed only once, the estimates of the orientation probability given a phrase pair are very sparse, and the model often has to fall back to short one-word phrases. However, most short phrases are observed frequently with all possible orientations during training, which makes it difficult for the decoder to decide which orientation to pick; the model therefore overly relies on the language model to break such ties. The OSM may also suffer from data sparsity, and its back-off smoothing may fall back to very short contexts, but it might still be able to disambiguate better than the lexicalized reordering models. These drawbacks can also be addressed by learning an OSM over generalized word representations such as POS tags, as we show in this section. To compare the operation sequence model directly with the lexicalized reordering model, we incorporate the OSM into the phrase-based Moses decoder, which allows us to compare the two models in identical settings. We integrate the OSM into the hypothesis extension process of the phrase-based decoder: we convert each phrase pair into a sequence of operations by extracting the MTUs within the phrase pair and using the phrase-internal alignments. The OSM is used as a feature in the log-linear framework, together with four supportive features: the Gap, Open Gap, Gap-distance, and Deletion counts described earlier (see Section 3.6.1); a sketch of this feature combination follows below. Our Moses (Koehn et al. 2007) baseline systems are based on the setup described in Durrani et al. (2013b). We trained our systems with the following settings: maximum sentence length 80, grow-diag-final-and symmetrization of GIZA++ alignments, an interpolated Kneser-Ney smoothed 5-gram language model used at runtime with KenLM (Heafield 2011), a distortion limit of 6, minimum Bayes-risk decoding (Kumar and Byrne 2004), cube pruning (Huang and Chiang 2007), and the no-reordering-over-punctuation heuristic. We used factored models (Koehn and Hoang 2007) for German–English and English–German. We trained the lexicalized reordering model (Koehn et al. 2005) with the msd-bidirectional-fe settings. Table 5 shows that the OSM yields higher gains on top of a plain phrase-based baseline (Pb) than the lexicalized reordering model does. The average improvement of the lexicalized reordering model (Pblex) over the baseline (Pb) is 0.50; the average improvement of the OSM (Pbosm) over the baseline is 0.74; and the average improvement of the combination (Pblex+osm) is 0.97. Adding the OSM on top of Pblex thus gives an average improvement of 0.47, which we found to be statistically significant (p < 0.05) in seven out of eight cases. Significant differences are marked with an asterisk. In an additional experiment, we studied how much the translation quality decreases when all reordering operations are removed from the operation sequence model during training and decoding. The resulting model is similar to the tuple sequence model (TSM) of Mariño et al. (2006), except that we use phrase-internal reordering rather than POS-based rewrite rules for the source linearization. Table 6 shows an average improvement of just 0.13 on top of the baseline phrase-based system with lexicalized reordering, which is much lower than the 0.46 points obtained with the full operation sequence model. Bilingual translation models (without reordering) have been integrated into phrase-based systems before, either inside the decoder (Niehues et al. 2011) or to rerank the N-best candidate translations in the output of a phrase-based system (Zhang et al. 2013). Both groups reported improvements of similar magnitude when using a target-order left-to-right TSM model for German–English and French–English translation with shared task data, but higher gains on other data sets and language pairs. Zhang et al. (2013) showed further gains by combining models with target and source left-to-right and right-to-left orders.
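As promised above, the following sketch shows how the OSM enters the log-linear framework as one dense feature alongside the four supportive count features. The feature names and all numeric values are ours and purely illustrative; they are not Moses' internal identifiers.

def loglinear_score(features, weights):
    # Standard log-linear model: the decoder ranks hypotheses by the weighted
    # sum of feature values; the weights are tuned with MERT.
    return sum(weights[name] * value for name, value in features.items())

features = {
    "lm": -34.7, "tm": -12.1, "distortion": -4.0,  # usual phrase-based features
    "osm": -21.3,              # log probability of the operation sequence
    "gap_count": 2,            # supportive OSM features (Section 3.6.1)
    "open_gap_penalty": 3,
    "gap_distance": 5,
    "deletion_count": 0,
}
weights = {"lm": 0.5, "tm": 0.3, "distortion": 0.1, "osm": 0.4,
           "gap_count": -0.05, "open_gap_penalty": -0.02,
           "gap_distance": -0.01, "deletion_count": -0.1}  # toy values

score = loglinear_score(features, weights)
# In Pbosm this score replaces the lexicalized reordering feature;
# in Pblex+osm both features are present.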
The assumption of generating the target in monotonic order is a weakness of our work that can be addressed following Zhang et al. (2013). By generating MTUs in source order and allowing gaps and jumps on the target side, the model would be able to learn further reordering patterns that are ignored by the standard OSM. Because of data sparsity, it is impossible to observe all possible reordering patterns with all possible lexical choices in translation operations. The lexically driven OSM therefore often backs off to very small context sizes. Consider the example shown in Figure 1: the learned pattern sie würden stimmen – ‘they would vote’ cannot be generalized to er würde wählen – ‘he would vote’. We found that the OSM uses only two preceding operations as context on average. This problem can be addressed by replacing words with POS tags (or any other generalized representation, such as morphological tags or word clusters) to allow the model to consider a wider syntactic context where this is appropriate, thus improving both the lexical decisions and the reordering capability of the model. Crego and Yvon (2010) and Niehues et al. (2011) have shown improvements in translation quality when using a TSM model over POS units. We estimate OSMs over generalized tags and add these as separate features to the log-linear framework; we also tried to amalgamate the lexically driven OSM and the generalized OSMs into a single model rather than using them as separate features, but this attempt was unsuccessful (see Durrani et al. [2014] for details). Experiments. We enabled factored sequence models (Koehn and Hoang 2007) for the German–English language pairs, as these have previously been shown to be useful. We used LoPar (Schmid 2000) to obtain morphological analyses and POS annotation for German, and MXPOST (Ratnaparkhi 1998), a maximum entropy tagger, for English POS tags. We estimate OSMs over POS tags simply by replacing the words with the corresponding tags during training (a sketch follows below); we also found morphological tags and automatic word clusters to be useful in our recent IWSLT evaluation campaign (Birch, Durrani, and Koehn 2013; Durrani et al. 2014). Table 7 shows that a system with an additional POS-based OSM (Pblex+osm(s)+osm(p)) gives an average improvement of +0.26 over the baseline (Pblex+osm(s)) system that uses an OSM over surface forms only. The overall gain from using OSMs over the baseline system is +0.70. The OSM over surface forms considers 3-grams on average, whereas the OSM over POS tags considers 4.5-grams on average, thus taking wider contextual information into account when making translation and reordering decisions. Table 8 shows the wall-clock decoding time (in minutes) of the Moses decoder (on news-test2013) with and without the OSMs. Each decoder is run with 24 threads on a machine with 140GB RAM and 24 processors. Timings vary between experiments because the machines were somewhat loaded in some cases, but generally the OSM increases decoding time by more than half an hour (the code for the OSM in Moses could be greatly optimized, but this would require major modifications to the source and target phrase classes in Moses). Table 9 shows the overall sizes of the phrase-based translation and reordering models along with the OSMs, as well as the model sizes when filtered on news-test2013. A similar reduction could be achieved by applying filtering to the OSMs, following the language model filtering described by Heafield and Lavie (2010).
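The word-to-tag generalization referred to above can be sketched as follows. The operation encoding and helper names are ours, and the tag assignments in the comment are approximate illustrations, not output of the actual taggers.

from collections import namedtuple

Op = namedtuple("Op", "kind src tgt")  # src/tgt: word positions (None for reordering ops)

def generalize(operations, src_tags, tgt_tags):
    # src_tags[i] / tgt_tags[j]: POS tag of the i-th source / j-th target word
    # in the tagged training corpus; reordering operations stay unchanged.
    out = []
    for op in operations:
        if op.kind == "Generate":
            out.append(Op("Generate", src_tags[op.src], tgt_tags[op.tgt]))
        else:
            out.append(op)  # Insert Gap, Jump Back(1), ... carry no words
    return out

# For sie würden stimmen – 'they would vote', the lexical sequence
#   Generate(sie, they) Generate(würden, would) Generate(stimmen, vote)
# becomes roughly
#   Generate(PPER, PRP) Generate(VAFIN, MD) Generate(VVINF, VB)
# (STTS tags on the German side, Penn Treebank tags on the English side),
# a pattern that also covers er würde wählen – 'he would vote'.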
7 conclusion :In this article we presented a new model for statistical MT that combines the benefits of two state-of-the-art SMT frameworks, namely, N-gram-based and phrase-based SMT. Like the N-gram-based model, it addresses two drawbacks of phrasal MT: it better handles dependencies across phrase boundaries, and it solves the phrasal segmentation problem. In contrast to N-gram-based MT, our model has a generative story that tightly couples translation and reordering. Furthermore, it is able to consider all possible reorderings, unlike N-gram systems, which perform search only on a limited number of pre-calculated orderings. Our model is able to correctly reorder words across large distances, and it memorizes frequent phrasal translations, including their reordering, as probable operation sequences. We tested a version of our system that decodes based on minimal translation units (MTUs) against the state-of-the-art phrase-based systems Moses and Phrasal and the N-gram-based system Ncode for German-to-English, French-to-English, and Spanish-to-English on the news-test 2009–2012 test sets. Our system shows statistically significant improvements in 9 out of 12 cases in the German-to-English translation task, and 10 out of 12 cases in the French-to-English translation task. Our Spanish-to-English results are similar to the baseline systems in most of the cases but consistently worse than Ncode. MTU-based decoding suffers from poor translation coverage, inaccurate future cost estimates, and pruning of correct hypotheses. Phrase-based SMT, on the other hand, avoids these drawbacks by using larger translation chunks during search. We therefore extended our decoder to use phrases instead of cepts while keeping the statistical model unchanged. We found that combining a model based on minimal units with phrase-based decoding improves both search accuracy and translation quality. Our system extended with phrase-based decoding showed improvements over all the baseline systems, including our MTU-based decoder; in most cases the difference was significant. Our results show that the OSM consistently outperforms the Moses lexicalized reordering model and gives statistically significant gains over a very competitive Moses baseline system. We showed that considering both translation and reordering context is important, and that ignoring the reordering context results in a significant drop in performance. We also showed that an OSM based on surface forms suffers from data sparsity, and that an OSM based on a generalized representation with part-of-speech tags improves translation quality by considering a larger context. In future work, we would like to study whether the insight of modeling with minimal units while searching with composed rules also holds for hierarchical SMT. Vaswani et al. (2011) recently showed that a Markov model over the derivation history of minimal rules can achieve the same translation quality as grammars formed with composed rules, which we believe is quite promising. abstract :In this article, we present a novel machine translation model, the Operation Sequence Model (OSM), which combines the benefits of phrase-based and N-gram-based statistical machine translation (SMT) and remedies their drawbacks. The model represents the translation process as a linear sequence of operations. The sequence includes not only translation operations but also reordering operations.
As in N-gram-based SMT, the model (i) is based on minimal translation units, (ii) takes both source and target information into account, (iii) does not make a phrasal independence assumption, and (iv) avoids the spurious phrasal segmentation problem. As in phrase-based SMT, the model (i) has the ability to memorize lexical reordering triggers, (ii) builds the search graph dynamically, and (iii) decodes with large translation units during search. The unique properties of the model are (i) its strong coupling of reordering and translation, where translation and reordering decisions are conditioned on n previous translation and reordering decisions, and (ii) its ability to model local and long-range reorderings consistently. Using BLEU as a metric of translation accuracy, we found that our system performs significantly better than state-of-the-art phrase-based and N-gram-based systems in most cases.
Authors: Nadir Durrani (QCRI, Qatar), Helmut Schmid (LMU Munich), Alexander Fraser, Philipp Koehn, and Hinrich Schütze.
references :
Birch, Alexandra, Nadir Durrani, and Philipp Koehn. 2013. Edinburgh SLT and MT system description for the IWSLT 2013 evaluation. In Proceedings of the 10th International Workshop on Spoken Language Translation.
Bisazza, Arianna and Marcello Federico. 2013. Efficient solutions for word reordering in German-English phrase-based statistical machine translation. In Proceedings of the Eighth Workshop on Statistical Machine Translation.
Brown, Peter F., Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263–311.
Casacuberta, Francisco and Enrique Vidal. 2004. Machine translation with inferred stochastic finite-state transducers. Computational Linguistics, 30:205–225.
Cer, Daniel, Michel Galley, Daniel Jurafsky, and Christopher D. Manning. 2010. Phrasal: A statistical machine translation toolkit for exploring new model features. In Proceedings of the North American Chapter of the Association for Computational Linguistics, Demonstration Session.
Cherry, Colin. 2013. Improved reordering for phrase-based translation using sparse features. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.
Chiang, David. 2007. Hierarchical phrase-based translation. Computational Linguistics, 33(2):201–228.
Costa-jussà, Marta R., Josep M. Crego, David Vilar, José A. R. Fonollosa, José B. Mariño, and Hermann Ney. 2007. Analysis and system combination of phrase- and N-gram-based statistical machine translation systems.
Crego, Josep M. and José B. Mariño. 2006. Improving statistical MT by coupling reordering and decoding. Machine Translation, 20(3):199–215.
Crego, Josep M. and José B. Mariño. 2007. Syntax-enhanced N-gram-based SMT. In Proceedings of the 11th Machine Translation Summit, pages 111–118, Copenhagen.
Crego, Josep M. and François Yvon. 2009. Gappy translation units under left-to-right SMT decoding. In Proceedings of the Meeting of the European Association for Machine Translation.
Crego, Josep M. and François Yvon. 2010. Improving reordering with linguistically informed bilingual n-grams. In COLING 2010: Posters, pages 197–205, Beijing.
Crego, Josep M., François Yvon, and José B. Mariño. 2011. Ncode: An open source bilingual N-gram SMT toolkit. The Prague Bulletin of Mathematical Linguistics, 96:49–58.
Durrani, Nadir, Alexander Fraser, and Helmut Schmid. 2013. Model with minimal translation units, but decode with phrases. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.
Durrani, Nadir, Alexander Fraser, Helmut Schmid, Hieu Hoang, and Philipp Koehn. 2013a. Can Markov models over minimal translation units help phrase-based SMT? In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics.
Durrani, Nadir, Barry Haddow, Kenneth Heafield, and Philipp Koehn. 2013b. Edinburgh's machine translation systems for European language pairs. In Proceedings of the Eighth Workshop on Statistical Machine Translation.
Durrani, Nadir, Philipp Koehn, Helmut Schmid, and Alexander Fraser. 2014. Investigating the usefulness of generalized word representations in SMT. In Proceedings of the 25th International Conference on Computational Linguistics (COLING).
Durrani, Nadir, Helmut Schmid, and Alexander Fraser. 2011. A joint sequence translation model with integrated reordering. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics.
Galley, Michel and Christopher D. Manning. 2008. A simple and effective hierarchical phrase reordering model. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 848–856.
Galley, Michel and Christopher D. Manning. 2010. Accurate non-hierarchical phrase-based translation. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics.
Gispert, Adrià de and José B. Mariño. 2006. Linguistic tuple segmentation in N-gram-based statistical machine translation. In INTERSPEECH, pages 1149–1152, Pittsburgh, PA.
Green, Spence, Michel Galley, and Christopher D. Manning. 2010. Improved models of distortion cost for statistical machine translation. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics.
Heafield, Kenneth. 2011. KenLM: Faster and smaller language model queries. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 187–197, Edinburgh.
Heafield, Kenneth and Alon Lavie. 2010. Combining machine translation output with open source: The Carnegie Mellon multi-engine machine translation scheme. The Prague Bulletin of Mathematical Linguistics.
Huang, Liang and David Chiang. 2007. Forest rescoring: Faster decoding with integrated language models. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics.
Kneser, Reinhard and Hermann Ney. 1995. Improved backing-off for M-gram language modeling. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, pages 181–184.
Koehn, Philipp. 2004a. Pharaoh: A beam search decoder for phrase-based statistical machine translation models. In Proceedings of the Association for Machine Translation in the Americas, pages 115–124.
Koehn, Philipp. 2004b. Statistical significance tests for machine translation evaluation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 388–395, Barcelona.
Koehn, Philipp. 2010. Statistical Machine Translation. Cambridge University Press.
Koehn, Philipp, Amittai Axelrod, Alexandra Birch, Chris Callison-Burch, Miles Osborne, and David Talbot. 2005. Edinburgh system description for the 2005 IWSLT speech translation evaluation. In Proceedings of the International Workshop on Spoken Language Translation.
Koehn, Philipp and Hieu Hoang. 2007. Factored translation models. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning.
Koehn, Philipp, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondřej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics: Demonstrations.
Koehn, Philipp, Franz J. Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the 2003 Meeting of the North American Chapter of the Association for Computational Linguistics, pages 127–133.
Kumar, Shankar and William J. Byrne. 2004. Minimum Bayes-risk decoding for statistical machine translation. In Human Language Technologies: The 2004 Annual Conference of the North American Chapter of the Association for Computational Linguistics.
Mariño, José B., Rafael E. Banchs, Josep M. Crego, Adrià de Gispert, Patrik Lambert, José A. R. Fonollosa, and Marta R. Costa-jussà. 2006. N-gram-based machine translation. Computational Linguistics, 32(4):527–549.
Moore, Robert and Chris Quirk. 2007. Faster beam search decoding for phrasal statistical machine translation. In Proceedings of the 11th Machine Translation Summit, Copenhagen.
Niehues, Jan, Teresa Herrmann, Stephan Vogel, and Alex Waibel. 2011. Wider context by using bilingual language models in machine translation. In Proceedings of the Sixth Workshop on Statistical Machine Translation.
Och, Franz J. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 160–167, Sapporo.
Och, Franz J. and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19–51.
Och, Franz J. and Hermann Ney. 2004. The alignment template approach to statistical machine translation. Computational Linguistics, 30(4):417–449.
Ratnaparkhi, Adwait. 1998. Maximum Entropy Models for Natural Language Ambiguity Resolution. Ph.D. thesis, University of Pennsylvania, Philadelphia, PA.
Schmid, Helmut. 2000. LoPar: Design and implementation. Technical report, Sonderforschungsbereich “Sprachtheoretische Grundlagen für die Computerlinguistik”, University of Stuttgart.
Stolcke, Andreas. 2002. SRILM – an extensible language modeling toolkit. In Proceedings of the International Conference on Spoken Language Processing, Denver, CO.
Tillmann, Christoph and Tong Zhang. 2005. A localized prediction model for statistical machine translation. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics.
Vaswani, Ashish, Haitao Mi, Liang Huang, and David Chiang. 2011. Rule Markov models for fast tree-to-string translation. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics.
Zaidan, Omar F. 2009. Z-MERT: A fully configurable open source tool for minimum error rate training of machine translation systems. The Prague Bulletin of Mathematical Linguistics.
Zhang, Hui, Kristina Toutanova, Chris Quirk, and Jianfeng Gao. 2013. Beyond left-to-right: Multiple decomposition structures for SMT. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics.
acknowledgments :We would like to thank the anonymous reviewers and Andreas Maletti and François Yvon for their helpful feedback and suggestions. The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreements 287658 (EU-Bridge) and 287688 (MateCat). Alexander Fraser was funded by Deutsche Forschungsgemeinschaft grant Models of Morphosyntax for Statistical Machine Translation.
Helmut Schmid was supported by Deutsche Forschungsgemeinschaft grant SFB 732. This publication only reflects the authors’ views.","1 introduction :Statistical Machine Translation (SMT) advanced near the beginning of the century from word-based models (Brown et al. 1993) towards more advanced models that take contextual information into account. Phrase-based (Koehn, Och, and Marcu 2003; Och and Ney 2004) and N-gram-based (Casacuberta and Vidal 2004; Mariño et al. 2006) models are two instances of such frameworks. Although the two models have some common properties, they are substantially different. The present work is a step towards combining the benefits and remedying the flaws of these two frameworks. Phrase-based systems have a simple but effective mechanism that learns larger chunks of translation called bilingual phrases.1 Memorizing larger units enables the phrase-based model to learn local dependencies such as short-distance reorderings, idiomatic collocations, and insertions and deletions that are internal to the phrase pair. The model, however, has the following drawbacks: (i) it makes independence assumptions over phrases, ignoring the contextual information outside of phrases, (ii) the reordering model has difficulties in dealing with long-range reorderings, (iii) problems in both search and modeling require the use of a hard reordering limit, and (iv) it has the spurious phrasal segmentation problem, which allows multiple derivations of a bilingual sentence pair that have the same word alignment but different model scores. N-gram-based models are Markov models over sequences of tuples that are generated monotonically. Tuples are minimal translation units (MTUs) composed of source and target cepts.2 The N-gram-based model has the following drawbacks: (i) only precalculated orderings are hypothesized during decoding, (ii) it cannot memorize and use lexical reordering triggers, (iii) it cannot perform long distance reorderings, and (iv) using tuples presents a more difficult search problem than in phrase-based SMT. The Operation Sequence Model. In this article we present a novel model that tightly integrates translation and reordering into a single generative process. Our model explains the translation process as a linear sequence of operations that generates a source and target sentence in parallel, in a target left-to-right order. Possible operations are (i) generation of a sequence of source and target words, (ii) insertion of gaps as explicit target positions for reordering operations, and (iii) forward and backward jump operations that do the actual reordering. The probability of a sequence of operations is defined according to an N-gram model, that is, the probability of an operation depends on the n − 1 preceding operations. Because the translation (lexical generation) and reordering operations are coupled in a single generative story, the reordering decisions may depend on preceding translation decisions and translation decisions may depend 1 A Phrase pair in phrase-based SMT is a pair of sequences of words. The sequences are not necessarily linguistic constituents. Phrase pairs are built by combining minimal translation units and ordering information. As is customary we use the term phrase to refer to phrase pairs if there is no ambiguity. 2 A cept is a group of source (or target) words connected to a group of target (or source) words in a particular alignment (Brown et al. 1993). on preceding reordering decisions. 
This provides a natural reordering mechanism that is able to deal with local and long-distance reorderings in a consistent way. Like the N-gram-based SMT model, the operation sequence model (OSM) is based on minimal translation units and takes both source and target information into account. This mechanism has several useful properties. Firstly, no phrasal independence assumption is made. The model has access to both source and target context outside of phrases. Secondly the model learns a unique derivation of a bilingual sentence given its alignments, thus avoiding the spurious phrasal segmentation problem. The OSM, however, uses operation N-grams (rather than tuple N-grams), which encapsulate both translation and reordering information. This allows the OSM to use lexical triggers for reordering like phrase-based SMT. Our reordering approach is entirely different from the tuple N-gram model. We consider all possible orderings instead of a small set of POS-based pre-calculated orderings, as is used in N-gram-based SMT, which makes their approach dependent on the availability of a source and target POS-tagger. We show that despite using POS tags the reordering patterns learned by N-gram-based SMT are not as general as those learned by our model. Combining MTU-model with Phrase-Based Decoding. Using minimal translation units makes the search much more difficult because of the poor translation coverage, inaccurate future cost estimates, and pruning of correct hypotheses because of insufficient context. The ability to memorize and produce larger translation units gives an edge to the phrase-based systems during decoding, in terms of better search performance and superior selection of translation units. In this article, we combine N-gram-based modeling with phrase-based decoding to benefit from both approaches. Our model is based on minimal translation units, but we use phrases during decoding. Through an extensive evaluation we found that this combination not only improves the search accuracy but also the BLEU scores. Our in-house phrase-based decoder outperformed state-of-the-art phrase-based (Moses and Phrasal) and N-gram-based (NCode) systems on three translation tasks. Comparative Experiments. Motivated by these results, we integrated the OSM into the state-of-the-art phrase-based system Moses (Koehn et al. 2007). Our aim was to directly compare the performance of the lexicalized reordering model to the OSM and to see whether we can improve the performance further by using both models together. Our integration of the OSM into Moses gave a statistically significant improvement over a competitive baseline system in most cases. In order to assess the contribution of improved reordering versus the contribution of better modeling with MTUs in the OSM-augmented Moses system, we removed the reordering operations from the stream of operations. This is equivalent to integrating the conventional N-gram tuple sequence model (Mariño et al. 2006) into a phrasebased decoder, as also tried by Niehues et al. (2011). Small gains were observed in most cases, showing that much of the improvement obtained by the OSM is due to better reordering. Generalized Operation Sequence Model. The primary strength of the OSM over the lexicalized reordering model is its ability to take advantage of the wider contextual information. In an error analysis we found that the lexically driven OSM often falls back to very small context sizes because of data sparsity. 
We show that this problem can be addressed by learning operation sequences over generalized representations such as POS tags. The article is organized into seven sections. Section 2 is devoted to a literature review. We discuss the pros and cons of the phrase-based and N-gram-based SMT frameworks in terms of both model and search. Section 3 presents our model. We show how our model combines the benefits of both of the frameworks and removes their drawbacks. Section 4 provides an empirical evaluation of our preliminary system, which uses an MTU-based decoder, against state-of-the-art phrase-based (Moses and Phrasal) and N-gram-based (Ncode) systems on three standard tasks of translating German-to-English, Spanish-to-English, and French-to-English. Our results show improvements over the baseline systems, but we noticed that using minimal translation units during decoding makes the search problem difficult, which suggests using larger units in search. Section 5 presents an extension to our system to combine phrasebased decoding with the operation sequence model to address the problems in search. Section 5.1 empirically shows that information available in phrases can be used to improve the search performance and translation quality. Finally, we probe whether integrating our model into the phrase-based SMT framework addresses the mentioned drawbacks and improves translation quality. Section 6 provides an empirical evaluation of our integration on six standard tasks of translating German–English, French–English, and Spanish–English pairs. Our integration gives statistically significant improvements over submission quality baseline systems. Section 7 concludes. 2 previous work : The phrase-based model (Koehn et al. 2003; Och and Ney 2004) segments a bilingual sentence pair into phrases that are continuous sequences of words. These phrases are then reordered through a lexicalized reordering model that takes into account the orientation of a phrase with respect to its previous phrase (Tillmann and Zhang 2005) or block of phrases (Galley and Manning 2008). Phrase-based models memorize local dependencies such as short reorderings, translations of idioms, and the insertion and deletion of words sensitive to local context. Phrase-based systems, however, have the following drawbacks. Handling of Non-local Dependencies. Phrase-based SMT models dependencies between words and their translations inside of a phrase well. However, dependencies across phrase boundaries are ignored because of the strong phrasal independence assumption. Consider the bilingual sentence pair shown in Figure 1(a). Reordering of the German word stimmen is internal to the phrase-pair gegen ihre Kampagne stimmen -‘vote against your campaign’ and therefore represented by the translation model. However, the model fails to correctly translate the test sentence shown in Figure 1(b), which is translated as ‘they would for the legalization of abortion in Canada vote’, failing to displace the verb. The language model does not provide enough evidence to counter the dispreference of the translation model against jumping over the source words für die Legalisieurung der Abtreibung in Kanada and translating stimmen - ‘vote’ at its correct position. Weak Reordering Model. The lexicalized reordering model is primarily designed to deal with short-distance movement of phrases such as swapping two adjacent phrases and cannot properly handle long-range jumps. 
The model only learns an orientation of how a phrase was reordered with respect to its previous and next phrase; it makes independence assumptions over previously translated phrases and does not take into account how previous words were translated and reordered. Although such an independence assumption is useful to reduce sparsity, it is overly generalizing and does not help to disambiguate good reorderings from the bad ones. Moreover, a vast majority of extracted phrases are singletons and the corresponding probability of orientation given phrase-pair estimates are based on a single observation. Due to sparsity, the model falls back to use one-word phrases instead, the orientation of which is ambiguous and can only be judged based on context that is ignored. This drawback has been addressed by Cherry (2013) by using sparse features for reordering models. Hard Distortion Limit. The lexicalized reordering model fails to filter out bad largescale reorderings effectively (Koehn 2010). A hard distortion limit is therefore required during decoding in order to produce good translations. A distortion limit beyond eight words lets the translation accuracy drop because of search errors (Koehn et al. 2005). The use of a hard limit is undesirable for German–English and similar language pairs with significantly different syntactic structures. Several researchers have tried to address this problem. Moore and Quirk (2007) proposed improved future cost estimation to enable higher distortion limits in phrasal MT. Green, Galley, and Manning (2010) additionally proposed discriminative distortion models to achieve better translation accuracy than the baseline phrase-based system for a distortion limit of 15 words. Bisazza and Federico (2013) recently proposed a novel method to dynamically select which longrange reorderings to consider during the hypothesis extension process in a phrasebased decoder and showed an improvement in a German–English task by increasing the distortion limit to 18. Spurious Phrasal Segmentation. A problem with the phrase-based model is that there is no unique correct phrasal segmentation of a sentence. Therefore, all possible ways of segmenting a bilingual sentence consistent with the word alignment are learned and used. This leads to two problems: (i) phrase frequencies are obtained by counting all possible occurrences in the training corpus, and (ii) different segmentations producing the same translation are generated during decoding. The former leads to questionable parameter estimates and the latter may lead to search errors because the probability of a translation is fragmented across different segmentations. Furthermore, the diversity in N-best translation lists is reduced. N-gram-based SMT (Mariño et al. 2006) uses an N-gram model that jointly generates the source and target strings as a sequence of bilingual translation units called tuples. Tuples are essentially minimal phrases, atomic units that cannot be decomposed any further. The tuples are generated left to right in target word order. Reordering is not part of the statistical model. The parameters of the N-gram model are learned from bilingual data where the tuples have been arranged in target word order (see Figure 2). Decoders for N-gram-based SMT reorder the source words in a preprocessing step so that the translation can be done monotonically. The reordering is performed with POS-based rewrite rules (see Figure 2 for an example) that have been learned from the training data (Crego and Mariño 2006). 
Word lattices are used to compactly represent a number of alternative reorderings. Using parts of speech instead of words in the rewrite rules makes them more general and helps to avoid data sparsity problems. The mechanism has several useful properties. Because it is based on minimal units, there is only one derivation for each aligned bilingual sentence pair. The model therefore avoids spurious ambiguity. The model makes no phrasal independence assumption and generates a tuple monotonically by looking at a context of n previous tuples, thus capturing context across phrasal boundaries. On the other hand, N-gram-based systems have the following drawbacks. Weak Reordering Model. The main drawback of N-gram-based SMT is its poor reordering mechanism. Firstly, by linearizing the source, N-gram-based SMT throws away useful information about how a particular word is reordered with respect to the previous word. This information is instead stored in the form of rewrite rules, which have no influence on the translation score. The model does not learn lexical reordering triggers and reorders through the learned rules only. Secondly, search is performed only on the precalculated word permutations created based on the source-side words. Often, evidence of the correct reordering is available in the translation model and the targetside language model. All potential reorderings that are not supported by the rewrite rules are pruned in the pre-processing step. To demonstrate this, consider the bilingual sentence pair in Figure 2 again. N-gram-based MT will linearize the word sequence gegen ihre Kampagne stimmen to stimmen gegen ihre Kampagne, so that it is in the same order as the English words. At the same time, it learns a POS rule: IN PRP NN VB → VB IN PRP NN. The POS-based rewrite rules serve to precompute the orderings that will be hypothesized during decoding. However, notice that this rule cannot generalize to the test sentence in Figure 1(b), even though the tuple translation model learned the trigram < sie – ‘they’ würden – ‘would’ stimmen – ‘vote’ > and it is likely that the monolingual language model has seen the trigram they would vote. Hard Reordering Limit. Due to sparsity, only rules with seven or fewer tags are extracted. This subsequently constrains the reordering window to seven or fewer words, preventing the N-gram model from hypothesizing long-range reorderings that require larger jumps. The need to perform long-distance reordering motivated the idea of using syntax trees (Crego and Mariño 2007) to form rewrite rules. However, the rules are still extracted ignoring the target-side, and search is performed only on the precalculated orderings. Difficult Search Problem. Using MTUs makes the search problem much more difficult because of poor translation option selection. To illustrate this consider the phrase pair schoss ein Tor – ‘scored a goal’, consisting of units schoss – ‘scored’, ein – ‘a’, and Tor – ‘goal’. It is likely that the N-gram system does not have the tuple schoss – ‘scored’ in its N-best translation options because it is an uncommon translation. Even if schoss – ‘scored’ is hypothesized, it will be ranked quite low in the stack and may be pruned, before ein and Tor are generated in the next steps. A similar problem is also reported in Costa-jussà et al. 
(2007): When trying to reproduce the sentences in the N-best translation output of the phrase-based system, the N-gram-based system was able to produce only 37.5% of sentences in the Spanish-to-English and English-to-Spanish translation task, despite having been trained on the same word alignment. A phrase-based system, on the other hand, is likely to have access to the phrasal unit schoss ein Tor – ‘scored a goal’ and can generate it in a single step. 4 mtu-based search :We explored two decoding strategies in this work. Our first decoder complements the model and only uses minimal translation units in left-to-right stack-based decoding, similar to that used in Pharaoh (Koehn 2004a). The overall process can be roughly divided into the following steps: (i) extraction of translation units, (ii) future cost estimation, (iii) hypothesis extension, and (iv) recombination and pruning. The last two steps are repeated iteratively until all the words in the source sentence have been translated. Our hypotheses maintain the index of the last source word covered (j), the position of the right-most source word covered so far (Z), the number of open gaps, the number of gaps so far inserted, the previously generated operations, the generated target string, and the accumulated values of all the features discussed in Section 3.6.1. The sequence of operations may include translation operations (generate, continue source cept, etc.) and reordering operations (gap insertions, jumps). Recombination6 is performed on hypotheses having the same coverage vector, monolingual language model context, and OSM context. We do histogram-based pruning, maintaining the 500 best hypotheses for each stack. A large beam size is required to cope with the search errors that result from using minimal translation units during decoding. We address this problem in Section 5. 6 Note that although we are using minimal translation units, recombination is still useful as different derivations can arise through different alignments between source and target fragments. Also, recombination can still take place if hypotheses differ slightly in the output (Koehn 2010). Aligned bilingual training corpora often contain unaligned target words and discontinuous target cepts, both of which pose problems. Unlike discontinuous source cepts, discontinuous target cepts such as hinunterschüttete – ‘poured . . . down’ in constructions like den Drink hinunterschüttete – ‘poured the drink down’ cannot be handled by the operation sequence model because it generates the English words in strict left-to-right order. Therefore they have to be eliminated. Unaligned target words are only problematic for the MTU-based decoder, which has difficulties predicting where to insert them. Thus, we eliminate unaligned target words in MTU-based decoding. We use a three-step process (Durrani, Schmid, and Fraser 2011) that modifies the alignments and removes unaligned and discontinuous targets. If a source word is aligned with multiple target words that are not consecutive, first the link to the least frequent target word is identified, and the group (consecutive adjacent words) of links containing this word is retained while the others are deleted. The intuition here is to keep the alignments containing content words (which are less frequent than functional words). For example, the alignment link hinunterschüttete – ‘down’ is deleted and only the link hinunterschüttete – ‘poured’ is retained because ‘down’ occurs more frequently than ‘poured’. 
Crego and Yvon (2009) used split tokens to deal with this phenomenon. For MTU-based decoding we also need to deal with unaligned target words. For each unaligned target word, we determine the (left or right) neighbor that it appears more frequently with and align it with the same source word as this neighbor. Crego, de Gispert, and Mariño (2005) and Mariño et al. (2006) instead used lexical probabilities p( f |e) obtained from IBM Model 1 (Brown et al. 1993) to decide whether to attach left or right. A more sophisticated strategy based on part-of-speech entropy was proposed by Gispert and Mariño (2006). We evaluated our systems on German-to-English, French-to-English, and Spanish-toEnglish news translation for the purpose of development and evaluation. We used data from the eighth version of the Europarl Corpus and the News Commentary made available for the translation task of the Eighth Workshop on Statistical Machine Translation.7 The bilingual corpora contained roughly 2M bilingual sentence pairs, which we obtained by concatenating news commentary (≈ 184K sentences) and Europarl for the estimation of the translation model. Word alignments were generated with GIZA++ (Och and Ney 2003), using the grow-diag-final-and heuristic8 (Koehn et al. 2005). All data are lowercased, and we use the Moses tokenizer. We took news-test-2008 as the dev set for optimization and news-test 2009-2012 for testing. The feature weights are tuned with Z-MERT (Zaidan 2009). 4.2.1 Baseline Systems. We compared our system with (i) Moses9 (Koehn et al. 2007), (ii) Phrasal10 (Cer et al. 2010), and (iii) Ncode11 (Crego, Yvon, and Mariño 2011). We used 7 http://www.statmt.org/wmt13/translation-task.html 8 We also tested other symmetrization heuristics such as “Union” and “Intersection” but found the GDFA heuristic gave best results for all language pairs. 9 http://www.statmt.org/moses/ 10 http://nlp.stanford.edu/phrasal/ 11 http://www.limsi.fr/Individu/jmcrego/bincoder/ all these toolkits with their default settings. Phrasal provides two main extensions to Moses: a hierarchical reordering model (Galley and Manning 2008) and discontinuous source and target phrases (Galley and Manning 2010). We used the default stack sizes of 100 for Moses,12 200 for Phrasal, and 25 for Ncode (with 2n stacks). A 5-gram English language model is used. Both phrase-based systems use the 20 best translation options per source phrase; Ncode uses the 25 best tuple translations and a 4-gram tuple sequence model. A hard distortion limit of 6 is used in the default configuration of both phrasebased systems. Among the other defaults, we retained the hard source gap penalty of 15 and a target gap penalty of 7 in Phrasal. We provide Moses and Ncode with the same post-edited alignments13 from which we had removed target-side discontinuities. We feed the original alignments to Phrasal because of its ability to learn discontinuous source and target phrases. All the systems use MERT for the optimization of the weight vector. 4.2.2 Training. Training steps include: (i) post-editing of the alignments (Section 4.1), (ii) generation of the operation sequence (Algorithm 1), and (iii) estimation of the N-gram translation (OSM) and language models using the SRILM toolkit (Stolcke 2002) with Kneser-Ney smoothing. We used 5-gram models. 4.2.3 Summary of Developmental Experiments. During the developent of the MTU-based decoder, we performed a number of experiments to obtain optimal settings for the system. 
We list here a summary of the results from those experiments: (i) We found that discontinuous source-side cepts do not improve translation quality in most cases but increase the decoding time several-fold; we therefore use only continuous cepts. (ii) We varied the distortion limit from the conventional window of 6 words to infinity (i.e., no hard limit). The performance of our system is robust when the hard reordering constraint is removed, and we even saw a slight improvement for the German-to-English systems; using no distortion limit, however, significantly increases the decoding time. We therefore use a window of 16 words, which we found to be optimal on the development set. (iii) The performance of the MTU-based decoder is sensitive to the stack size: a high limit of 500 is required for decent search accuracy. We discuss this further in the next section. (iv) We found that using the 10 best translation options for each extracted cept during decoding is optimal.

4.2.4 Comparison with the Baseline Systems. In this section we compare our system (OSMmtu) with the three baseline systems. We used Kevin Gimpel's tester,14 which uses bootstrap resampling (Koehn 2004b) to test which of our results are significantly better than the baseline results. We mark a baseline result with "*" to indicate that our model shows a significant improvement over this baseline with a confidence of p < 0.05. We use 1,000 samples during bootstrap resampling.

14 http://www.ark.cs.cmu.edu/MT/

Our German-to-English results (see Table 2) are significantly better than the baseline systems in most cases. Our French-to-English results show a significant improvement over Moses in three out of four cases, and over Phrasal in half of the cases. The N-gram-based system Ncode was better than or similar to our system on the French task. Our Spanish-to-English system also showed roughly the same translation quality as the baseline systems, but was significantly worse on the WMT12 task.

5 phrase-based search :The MTU-based decoder is the most straightforward implementation of a decoder for the operation sequence model, but it faces search problems that cause a drop in translation accuracy. Although the OSM captures both source and target contexts and provides a better reordering mechanism, the ability to memorize and produce larger translation units gives the phrase-based model an edge during decoding, in terms of better search performance and superior selection of translation units. In this section, we combine N-gram-based modeling with phrase-based decoding. This combination not only improves search accuracy but also increases translation quality in terms of BLEU.

The operation sequence model, although based on minimal translation units, can learn larger translation chunks by memorizing a sequence of operations. However, it often has difficulty producing the same translations as the phrase-based system, because of the following drawbacks of MTU-based decoding: (i) the MTU-based decoder does not have access to all the translation units that a phrase-based decoder uses as part of a larger phrase, (ii) it requires a larger beam size to prevent early pruning of correct hypotheses, and (iii) it uses less powerful future-cost estimates than the phrase-based decoder.
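As an aside, the significance testing used in Section 4.2.4 (Koehn 2004b) is easy to sketch. The function below is our paraphrase of paired bootstrap resampling, not Gimpel's tool; `metric` stands for any corpus-level score, such as BLEU, computed over (hypotheses, references).

```python
import random

def paired_bootstrap(metric, hyps_a, hyps_b, refs, samples=1000, seed=0):
    """Fraction of resampled test sets on which system A outscores
    system B; a fraction above 0.95 corresponds to a significant
    improvement at p < 0.05."""
    rng = random.Random(seed)
    n = len(refs)
    wins = 0
    for _ in range(samples):
        idx = [rng.randrange(n) for _ in range(n)]  # resample with replacement
        a = metric([hyps_a[i] for i in idx], [refs[i] for i in idx])
        b = metric([hyps_b[i] for i in idx], [refs[i] for i in idx])
        wins += a > b
    return wins / samples
```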
To demonstrate these problems, consider the phrase pair Wie heißen Sie – 'What is your name', which the model memorizes through the sequence:

Generate(Wie, What is) Insert Gap Generate(Sie, your) Jump Back(1) Generate(heißen, name)

The MTU-based decoder needs three separate tuple translations to generate the same phrasal translation: Wie – 'What is', Sie – 'your', and heißen – 'name'. Here we are faced with three challenges.

Translation Coverage: The first problem is that the N-gram model does not have the same coverage of translation options. The English cepts 'What is', 'your', and 'name' are not good candidate translations for the German cepts Wie, Sie, and heißen, which in isolation are usually translated as 'How', 'you', and 'call', respectively. When extracting tuple translations for these cepts from the Europarl data for our system, the tuple Wie – 'What is' is ranked 124th, heißen – 'name' is ranked 56th, and Sie – 'your' is ranked 9th in the list of n-best translation candidates. Typically, only the 20 best translation options are used, for the sake of efficiency, and such phrasal units with less frequent translations are never hypothesized in N-gram-based systems. The phrase-based system, on the other hand, can extract the phrase Wie heißen Sie – 'what is your name' even if it is observed only once during training.

Larger Beam Size: Even when we allow a huge number of translation options and therefore hypothesize such units, we are faced with another challenge. A larger beam size is required in MTU-based decoding to prevent uncommon translations from being pruned. The phrase-based system can generate the phrase pair Wie heißen Sie – 'what is your name' in a single step, placing it directly into the stack three words to the right. The MTU-based decoder generates this phrase in three stacks with the tuple translations Wie – 'What is', Sie – 'your', and heißen – 'name'. A very large stack size is required during decoding to prevent the pruning of Wie – 'What is', which is ranked quite low in its stack until the tuple Sie – 'your' is hypothesized in the next stack. Although the translation quality achieved by phrase-based SMT remains the same when the beam size is varied, the performance of our system varies drastically with different beam sizes (especially for the German–English experiments, where the search is more difficult due to a higher number of reorderings). Costa-jussà et al. (2007) also report a significant drop in the performance of N-gram-based SMT when a beam size of 10 is used instead of 50 in their experiments.

Future Cost Estimation: A third problem is caused by inaccurate future cost estimation. Using phrases helps phrase-based SMT to better estimate the future language model cost because of the larger context available, and allows the decoder to capture local (phrase-internal) reorderings in the future cost. In comparison, the future cost for tuples is based on unigram probabilities. The future cost for the phrase pair Wie heißen Sie – 'What is your name' is estimated by calculating the cost of each feature. The language model cost, for example, is estimated in the phrase-based system as follows:

plm = p(What) × p(is|What) × p(your|What is) × p(name|What is your)

The translation model cost is estimated as:

ptm = p(What is your name|Wie heißen Sie)

Phrase-based SMT is aware during the preprocessing step that the words Wie heißen Sie may be translated as a phrase. This is helpful for estimating a more accurate future cost because the context is already available.
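To make the contrast with MTU-based estimation concrete, here is a toy computation of the two language model estimates; all probabilities are invented for illustration.

```python
from math import log2

# Invented probabilities, for illustration only.
p = {('What',): 0.02, ('is', 'What'): 0.30,
     ('your', 'What is'): 0.10, ('name', 'What is your'): 0.25,
     ('your',): 0.04, ('name',): 0.01}

def cost(events):
    """Negative log probability of a sequence of (word, context) events."""
    return -sum(log2(p[e]) for e in events)

# Phrase-based: p(What) p(is|What) p(your|What is) p(name|What is your).
phrase = cost([('What',), ('is', 'What'),
               ('your', 'What is'), ('name', 'What is your')])
# MTU-based: context survives only inside each minimal unit, so 'your'
# and 'name' fall back to context-free probabilities (as the continuing
# text explains).
mtu = cost([('What',), ('is', 'What'), ('your',), ('name',)])
print(phrase, mtu)  # the two estimates diverge; search is guided by the wrong one
```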
The same is not true for the MTU-based decoder, to which only minimal units are available. The MTU-based decoder does not know during decoding that Wie heißen Sie may be translated as a phrase. The future cost estimate available to the operation sequence model for the span covering Wie heißen Sie will be based on unigram probabilities for both the translation and the language model:

plm = p(What) × p(is|What) × p(your) × p(name)

The translation model cost is estimated as:

ptm = p(Generate(Wie, What is)) × p(Generate(heißen, name)) × p(Generate(Sie, your))

A more accurate future cost estimate for the translation model cost would be:

ptm = p(Generate(Wie, What is)) × p(Insert Gap|C2) × p(Generate(Sie, your)|C3) × p(Jump Back(1)|C4) × p(Generate(heißen, name)|C5)

where Ci is the context for the generation of the ith operation—that is, the up to m previous operations. For example, C2 = Generate(Wie, What is) and C3 = Generate(Wie, What is) Insert Gap, and so on. Future cost estimates computed in this manner are much more accurate, because they not only consider context but also take the reordering operations into account (Durrani, Fraser, and Schmid 2013).

We extended our in-house OSM decoder to use phrases instead of MTUs during decoding. In order to check whether phrase-based decoding solves the problems described above and improves search accuracy, we evaluated the baseline MTU decoder and the phrase-based decoder with the same model parameters and tuned weights. This allows us to compare the model scores directly. We tuned the feature weights by running MERT with the MTU decoder on the dev set. Table 3 shows the results of running both the MTU-based (OSMmtu) and the phrase-based (OSMphr) decoder on the WMT09 test set: for each decoder, the percentage of hypotheses for which it produced a better model score than its counterpart. Our phrase-based decoder uses a stack size of 200. The phrase-based decoder produces better model scores for almost 48% of the hypotheses (on average) across the three language pairs, whereas the MTU-based decoder (using a much higher stack size of 500) produces better hypotheses only 8.2% of the time on average. This improvement in search is also reflected in translation quality: our phrase-based decoder outperforms the MTU-based decoder in all cases and gives a significant improvement in 8 out of 12 cases (Table 4).

In Section 4.1 we discussed the problem of handling unaligned and discontinuous target words in MTU-based decoding. An advantage of phrase-based decoding is that we can use such units during decoding if they appear within the extracted phrases. We use a Generate Target Only (Y) operation whenever the unaligned target word Y occurs in a phrase. Similarly, we use the operation Generate(hinunterschüttete, poured down) when the discontinuous tuple hinunterschüttete – 'poured ... down' occurs in a phrase. While training the model, we simply ignore the discontinuity and pretend that the word 'down' immediately follows 'poured'. This can be done by linearizing the subsequent parts of discontinuous target cepts so that they appear after the first word of the cept. During decoding we use phrase-internal alignments to hypothesize such a linearization. This is done only for the estimation of the OSM; for all other purposes the target is generated in its original order.
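The linearization step can be pictured with a small helper. This is our sketch of the idea, not the authors' code, and the cept-id representation is an assumption.

```python
def linearize(words, cept_ids):
    """Move later parts of a discontinuous target cept up so that they
    directly follow the cept's first word (used only when estimating
    the OSM; the surface output keeps the original order).
    `cept_ids[i]` is the cept of words[i], or None if unaligned."""
    out = []
    last_pos = {}  # cept id -> index of its last word already in `out`
    for w, c in zip(words, cept_ids):
        if c is None or c not in last_pos:
            if c is not None:
                last_pos[c] = len(out)
            out.append(w)
            continue
        pos = last_pos[c] + 1
        out.insert(pos, w)
        for k, v in last_pos.items():  # shift bookkeeping after the insert
            if v >= pos:
                last_pos[k] = v + 1
        last_pos[c] = pos
    return out

# den Drink hinunterschuettete -> 'poured the drink down', where
# 'poured ... down' is one discontinuous cept (id 0):
assert linearize(['poured', 'the', 'drink', 'down'],
                 [0, 1, 2, 0]) == ['poured', 'down', 'the', 'drink']
```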
This heuristic allows us to deal with target discontinuities without extending the operation sequence model in complicated ways. It results in better BLEU scores than the alignment post-editing method described in Section 4.1. For details and empirical results, refer to Durrani et al. (2013a) (see Table 2 therein; compare rows 4 and 5). Note that the OSM, like the discontinuous phrase-based model (Galley and Manning 2010), allows all possible geometries, as shown in Figure 7. However, because our decoder only uses continuous phrases, we cannot hypothesize (ii) and (iii) unless they appear inside a phrase. Our model could, however, be integrated into a discontinuous phrase-based system to overcome this limitation.

7 conclusion :In this article we presented a new model for statistical MT that combines the benefits of two state-of-the-art SMT frameworks, namely, N-gram-based and phrase-based SMT. Like the N-gram-based model, it addresses two drawbacks of phrasal MT: it better handles dependencies across phrase boundaries, and it solves the phrasal segmentation problem. In contrast to N-gram-based MT, our model has a generative story that tightly couples translation and reordering. Furthermore, it is able to consider all possible reorderings, unlike N-gram systems, which perform search only on a limited number of pre-calculated orderings. Our model is able to correctly reorder words across large distances, and it memorizes frequent phrasal translations, including their reordering, as probable operation sequences.

We tested a version of our system that decodes based on minimal translation units (MTUs) against the state-of-the-art phrase-based systems Moses and Phrasal and the N-gram-based system Ncode for German-to-English, French-to-English, and Spanish-to-English on three standard test sets. Our system shows statistically significant improvements in 9 out of 12 cases in the German-to-English translation task, and in 10 out of 12 cases in the French-to-English translation task. Our Spanish-to-English results are similar to the baseline systems in most cases but consistently worse than Ncode.

MTU-based decoding suffers from poor translation coverage, inaccurate future cost estimates, and pruning of correct hypotheses. Phrase-based SMT, on the other hand, avoids these drawbacks by using larger translation chunks during search. We therefore extended our decoder to use phrases instead of cepts while keeping the statistical model unchanged. We found that combining a model based on minimal units with phrase-based decoding improves both search accuracy and translation quality. Our system extended with phrase-based decoding showed improvements over all the baseline systems, including our MTU-based decoder; in most cases, the difference was significant.

Our results show that the OSM consistently outperforms the Moses lexicalized reordering model and gives statistically significant gains over a very competitive Moses baseline system. We showed that considering both translation and reordering context is important, and that ignoring the reordering context results in a significant drop in performance. We also showed that an OSM based on surface forms suffers from data sparsity, and that an OSM based on a generalized representation with part-of-speech tags improves translation quality by considering a larger context. In the future we would like to study whether the insight of using minimal units for modeling while searching with composed rules carries over to hierarchical SMT.
Vaswani et al. (2011) recently showed that a Markov model over the derivation history of minimal rules can obtain the same translation quality as grammars formed with composed rules, which we believe is quite promising.

1 introduction :Almost 50 years since Damerau's groundbreaking work was published (Damerau 1964), the figures he set for the proportion of spelling errors in typed text, along with their classification, still remain in use. According to Damerau, over 80% of all misspelled words present a single error which, in turn, falls into one of four categories, to wit, Insertion (an extra letter is inserted), Omission (one letter is missing), Substitution (one letter is wrong), and Transposition (two adjacent characters are transposed). In fact, these very same figures lie at the heart of much current mainstream research on related topics, from automatic text spelling correction (e.g., Pirinen and Lindén 2014) to more advanced information retrieval techniques (e.g., Stein, Hoppe, and Gollub 2012) to optical character recognition techniques (e.g., Reynaert 2011). The reason for such popularity may rest in the simplicity of the approach, whereby numbers can be assigned to the probability that a certain type of spelling error might take place, simply because that seems to be the frequency with which people make that kind of mistake.

Surprisingly enough, even though Damerau derived his statistics uniquely from texts in English, his findings are applied, with very little to no adaptation at all, to research in a range of different languages, such as Basque (e.g., Aduriz et al. 1997), Persian (e.g., Miangah 2014), and Arabic (e.g., Alkanhal et al. 2012). The question then becomes how appropriate these figures are when applied to languages other than English. In fact, some researchers have already noticed this potential flaw and tried to adapt Damerau's findings to their own language, usually by modifying the Damerau–Levenshtein edit distance (Wagner and Fischer 1974) to match some language-specific data, but still without verifying the appropriateness of Damerau's statistics in the target language (e.g., Rytting et al. 2011; Richter, Stranák, and Rosen 2012), or by taking into account some feature of that language, such as the presence of diacritics (e.g., Andrade et al. 2012).

In this article, we move a step further by analyzing a set of typed texts from two different corpora in Brazilian Portuguese—a language spoken by over 190 million people1—and deriving statistics tailored to this language. We then compare our statistics with results obtained for Spanish (Bustamante, Arnaiz, and Ginés 2006). As we will show, the behavior of native speakers of these languages follows a very similar pattern, straying from Damerau's original findings while still leaving them useful. As a limitation, in this research we account only for non-word errors, that is, errors that, once made, result in some non-existing word, and which may be detected by a regular dictionary look-up (Deorowicz and Ciura 2005).

1 According to the 2010 demographic census: http://www.ibge.gov.br/home/estatistica/populacao/censo2010/caracteristicas_da_populacao/resultados_do_universo.pdf
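Damerau's four categories can be detected mechanically for a (misspelling, correction) pair. The following is a minimal sketch of ours, not code from the article:

```python
from typing import Optional

def classify_single_error(wrong: str, correct: str) -> Optional[str]:
    """Assign a (misspelling, correction) pair to one of Damerau's four
    categories, or None if the words differ by more than one error."""
    if wrong == correct:
        return None
    if len(wrong) == len(correct):
        diffs = [i for i, (w, c) in enumerate(zip(wrong, correct)) if w != c]
        if len(diffs) == 1:
            return "substitution"
        if (len(diffs) == 2 and diffs[1] == diffs[0] + 1
                and wrong[diffs[0]] == correct[diffs[1]]
                and wrong[diffs[1]] == correct[diffs[0]]):
            return "transposition"
        return None
    if len(wrong) == len(correct) + 1:
        longer, shorter, label = wrong, correct, "insertion"
    elif len(correct) == len(wrong) + 1:
        longer, shorter, label = correct, wrong, "omission"
    else:
        return None
    # a single insertion/omission: deleting one character of the longer
    # word must yield the shorter one
    for i in range(len(longer)):
        if longer[:i] + longer[i + 1:] == shorter:
            return label
    return None

assert classify_single_error("Pao", "Pão") == "substitution"
assert classify_single_error("wrod", "word") == "transposition"
assert classify_single_error("wrd", "word") == "omission"
```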
As an indication of the impact of these results, we added a new module (Gimenes, Roman, and Carvalho 2014)2 to OpenOffice Writer3—a freely available word processor—so as to reorder its suggestion list according to the statistics presented in this article. Within Writer, when typing Pao instead of Pão ('bread' in Portuguese), the correct form comes in fifth place in the suggestion list, with Poca, Pocá, Poça, and Próça coming in the first four positions. With this module, it was observed that, for the first testing corpus, in 27.34% of the cases there was an improvement over Writer's ranking (i.e., the correct form was ranked higher in the list), whereas in 5.84% of the cases the new list was actually worse, and in 66.82% no changes were observed. In the second corpus, an improvement was observed in 19.90% of the cases, the figures were worse in 9.00%, and 71.10% remained unchanged. A 10–21% increase in accuracy is not something to be neglected, especially at the price of merely changing weights in an edit distance scheme.

2 Available at http://ppgsi.each.usp.br/arquivos/RelTec/PPgSI-001_2014.pdf
3 https://www.openoffice.org/

2 related work :One of the first efforts to derive statistics similar to Damerau's in a language other than English was made by Medeiros (1995), who analyzed a set of corpora in European Portuguese. Medeiros found that 80.1% of all spelling errors fit into one of Damerau's categories. He also noticed that around 20% of all linguistic errors were related to the use of diacritics, meaning that they should be taken into account in string edit distance or probability calculations. By linguistic errors, the author meant linguistically motivated errors, as opposed to those caused by, for example, slipping fingers on a keyboard. However illustrative, Medeiros's findings suffer from a major drawback, in that he relied on pre-existing lists of erroneous words from the related literature, along with handwritten errors made by high school students, with only part of the data coming from e-mail exchanges—that is, from actual typed text. The problem with this approach is that these data come without any frequency analysis, which renders them inappropriate for computer techniques based on statistics and also makes them hard to compare with any findings related to error frequency.

Spanish was another language found to confirm Damerau's findings (Bustamante, Arnaiz, and Ginés 2006). In that case, however, the authors made a much more detailed categorization of the errors they found, beyond the four classes originally proposed by Damerau, in a corpus of written (presumably typed) texts from Mexico and Spain. Still, a careful inspection of these categories (grouping insertions and repetitions of the same letter, and taking errors related to the misuse of diacritics, capitalization, and cognitive replacements to be substitutions) leads one back to Damerau's four basic errors. In that case, a total of 86.7% of all errors were found to fit into one of these groups.
The main difference between Damerau and Medeiros, on the one hand, and the work by Bustamante, Arnaiz, and Ginés (2006), on the other, is that in the latter the majority of spelling errors (54.9% of all errors, corresponding to around 49% of all single errors) were related to the misuse of diacritics, with special emphasis on their omission, which was responsible for 51.5% of all spelling errors (i.e., around 46% of all single errors) found in the corpus.

Finally, one of the latest additions to this list was made by Baba and Suzuki (2012), in which the authors analyzed spelling errors both in English and in Japanese, with the latter typed on a Roman keyboard (that is, transliterated). Interestingly, the authors report almost no difference in the distribution of errors among Damerau's four classes, both for English and for Japanese. By looking at the details, however, they found some specific errors in Japanese, which they attribute to the phonological and orthographic characteristics of the language. In fact, a breakdown analysis of the four classes shows accentuated differences in the types of substitutions (that is, a vowel by another vowel, a vowel by a consonant, etc.), insertions (inserted vowel vs. consonant), deletions (whether or not the deleted character was a repetition of one of its neighbors), and transpositions (adjacent characters vs. syllables) that were made.

As we will show in the following sections, this overall confirmation of Damerau's findings, but with marked differences caused by each language's idiosyncrasies, seems to be the common behavior both in the related work and in ours. This, in turn, may shed some light on why English-based spell checking software does not seem to perform so badly in other languages, but still not as well as it could, should each language's own characteristics be taken into account.
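Most of the adaptations surveyed above amount to re-weighting the Damerau–Levenshtein distance for a particular language. The following is a minimal sketch of such a weighted distance (the optimal string alignment variant); the weights are placeholders rather than statistics from this article, and making diacritic-only substitutions cheap is exactly the kind of language-specific adjustment discussed here.

```python
import unicodedata

def strip_marks(s):
    """Remove combining marks: 'ã' -> 'a', 'ç' -> 'c'."""
    return ''.join(c for c in unicodedata.normalize('NFD', s)
                   if unicodedata.category(c) != 'Mn')

def weighted_dl(a, b, w_ins=1.0, w_del=1.0, w_sub=1.0, w_swap=1.0,
                w_diacritic=0.5):
    """Damerau-Levenshtein distance with per-operation weights;
    substituting a letter for an accented form of the same letter
    costs w_diacritic. Weights here are illustrative placeholders."""
    n, m = len(a), len(b)
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = i * w_del
    for j in range(1, m + 1):
        d[0][j] = j * w_ins
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if a[i - 1] == b[j - 1]:
                sub = 0.0
            elif strip_marks(a[i - 1]) == strip_marks(b[j - 1]):
                sub = w_diacritic  # diacritic-only substitution
            else:
                sub = w_sub
            d[i][j] = min(d[i - 1][j] + w_del,
                          d[i][j - 1] + w_ins,
                          d[i - 1][j - 1] + sub)
            if (i > 1 and j > 1 and a[i - 1] == b[j - 2]
                    and a[i - 2] == b[j - 1]):
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + w_swap)
    return d[n][m]

# With cheap diacritic substitutions, 'pao' ranks closer to 'pão'
# than to 'poca':
assert weighted_dl('pao', 'pão') < weighted_dl('pao', 'poca')
```

Plugging corpus-derived weights into such a function is, in essence, what the reranking experiment described in the Introduction does.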
3 materials and methods :Because we were interested in source texts written in Brazilian Portuguese, where authors were free to type with little to no intervention from spelling correction facilities, we decided to use the corpus presented in Roman et al. (2013), which comprises 1,808 texts, typed by 452 different participants in a Web experiment, with a total length of 62,858 words. As a source, this corpus has the advantages of (1) being produced by a number of different people of different ages, educational attainment, and backgrounds; (2) being produced through a Web browser, without any help from the spelling correction modules usually present in text editors; and (3) being freely available for download over the Web. We will refer to this corpus as C1.

In order to verify the statistics gathered on C1, in June 2011 we collected a corpus of blog posts from four different Web sites4 containing travel diaries along with comments by visitors. With a total of 26,418 words spread over 192 posts, and being written in Brazilian Portuguese, this corpus presents the same advantages as C1, except that we cannot verify participants' details, which means we cannot make any statement about the participants' distribution according to age, background, and so forth. Still, the main characteristics of availability and free text production remain. We will refer to this corpus as C2. By comparing the statistics from both corpora we intend to reduce the effect of any bias related to writing style and participant characteristics.

Given that our goal was to identify non-word errors, we relied on a commercial text editor to highlight misspellings in both corpora. The highlighted words were then manually inspected by one of the researchers, and errors were grouped into categories. Besides Damerau's four original sets, we identified three extra categories: errors involving the use of diacritics, errors related to the use of the cedilla, and space-related errors (a sketch of how the first two can be detected follows below).

The first of these groups was inspired by the observation that the use of diacritics is a key issue in Portuguese (Andrade et al. 2012), potentially playing an important role in the making of spelling mistakes. Following Damerau, this group can be subdivided into four subcategories: missing diacritic, addition of a diacritic, right diacritic applied to the wrong character, and wrong diacritic applied to the right character. The second group concerns the misuse of the cedilla—a special diacritical mark that in Portuguese can only be applied to the character c, resulting in ç. The reason this character deserves a whole category of its own lies in the existence of two distinct keyboard layouts in the Brazilian market: ABNT-2 and US-Accents. The main difference between the two layouts is that whereas ABNT-2 has a separate key for ç, that key does not exist on the US-Accents keyboard. Hence, while on ABNT-2 the user hits a single key to get a ç, on US-Accents it is a composite—two keys must be pressed: the single quotation mark and c. Ultimately, errors involving the cedilla may be interpreted either as a regular character mistake or as an error related to the use of diacritics, depending on the keyboard.
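As promised above, here is a minimal sketch of how the diacritic (and cedilla) subcategories can be detected for a (misspelling, correction) pair. It is our simplification, not the authors' procedure; it assumes equal length and equal base letters.

```python
import unicodedata

def marks(ch):
    """Combining marks of a character: marks('ã') == {'\u0303'}."""
    return {c for c in unicodedata.normalize('NFD', ch)
            if unicodedata.category(c) == 'Mn'}

def base(ch):
    """Base letter without marks: base('ç') == 'c'."""
    return ''.join(c for c in unicodedata.normalize('NFD', ch)
                   if unicodedata.category(c) != 'Mn')

def diacritic_error_type(wrong, correct):
    """Assign a misspelling to one of the four diacritic subcategories,
    or None when the pair is not a pure diacritic error."""
    if len(wrong) != len(correct):
        return None
    diffs = [(w, c) for w, c in zip(wrong, correct) if w != c]
    if not diffs or any(base(w) != base(c) for w, c in diffs):
        return None  # base letters differ: not a diacritic-only error
    missing = sum(1 for w, c in diffs if not marks(w) and marks(c))
    added = sum(1 for w, c in diffs if marks(w) and not marks(c))
    changed = sum(1 for w, c in diffs if marks(w) and marks(c))
    if missing and not added and not changed:
        return "missing diacritic"
    if added and not missing and not changed:
        return "addition of diacritic"
    if changed and not missing and not added:
        return "wrong diacritic applied to the right character"
    if missing and added and not changed:
        return "right diacritic applied to the wrong character"
    return None

assert diacritic_error_type("pao", "pão") == "missing diacritic"
assert diacritic_error_type("pâo", "pão") == \
    "wrong diacritic applied to the right character"
```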
Finally, because space-related errors, although not so common (see Section 4), are usually difficult for spelling correctors to deal with, we decided to give them an independent set of categories. As a mistake, spaces have the annoying characteristic of not being handled by edit distance operations (Attia et al. 2012), thereby passing undisturbed through systems that use string distance-based ranking when offering alternative spellings to the user. The reason for this problem is that, by joining two words together, for example, the number of mistakes rises dramatically if one takes the new string to be a single word. Along with these three categories, there is a "leftover" fourth one—others—comprising linguistic expression errors, first-letter capitalization, and wrong diminutives or augmentatives.

4 http://bragatte.wordpress.com/, http://guilhermebragatte.blogspot.com.br/, http://forasteironairlanda.wordpress.com/tag/nomadismo/, and http://sussuemdublin.wordpress.com/2011/01/.

4 results and discussion :Results for C1 and C2 are shown in Tables 1 and 2.5 Even though the total number of mistakes is almost the same in both corpora (1,139 for C1 vs. 1,260 for C2), the proportion of spelling errors in C2 is more than twice as high as that of C1: C1 had a mean error rate of 1.81% (that is, 0.0181 errors per word), while C2 reached 4.77%. This difference may be due to the fact that C1 was generated by students from a university, that is, people with a higher educational level. It may also have something to do with the fact that blogs are more conversation-like, which may lead people to relax the spelling rules they choose to follow, especially when it comes to errors involving diacritics. Still, despite this discrepancy, a two-sample Kolmogorov–Smirnov test showed no statistically significant differences between these corpora in the distribution of the proportion of errors among categories, neither for the total number of errors (ks = 0.4286, p = 0.1528), nor for the number of errors per word (ks = 0.4286, p = 0.1528 for single, ks = 0.2143, p = 0.9048 for double, ks = 0.2857, p = 0.6172 for triple, and ks = 0.1429, p = 0.9988 for multiple errors).

As it turns out, errors related to the use of diacritics correspond to approximately half of all spelling errors in both corpora (overall, 47.15% in C1 and 50.08% in C2, corresponding to 49.48% of all single errors in C1 and 58.84% in C2), if we include cedilla-related errors in this sum. Taking the cedilla as a simple substitution, the numbers drop to 41.09% in C1 and 45.63% in C2 (overall), with 44.72% in C1 and 57.08% in C2 for single errors—still a substantial proportion. Interestingly, if one takes errors involving diacritics to be substitutions, then we end up with a total of 89.86% of all single errors falling into one of Damerau's categories in C1, and 88.91% in C2, thereby confirming Damerau's statistics. An analysis of the distribution of single errors among Damerau's four original categories (i.e., disregarding diacritic-related misspellings), and of the distribution among the categories related to the use of diacritics, also shows no statistically significant difference between the two corpora (ks = 0.5, p = 0.6994 for Damerau's categories and ks = 0.6, p = 0.3291 for the diacritics—including cedilla—set).

Finally, regarding the number of misspelled words, we have once again confirmed Damerau's results, in that over 85% of all wrong words, be they repetitions of existing words or not, have a single spelling error. Table 3 shows these results. In this table,6 we present the figures both when taking word repetitions into account and when ruling them out (that is, when keeping the statistics only for new words). This is also in line with the results obtained for Spanish (Bustamante, Arnaiz, and Ginés 2006), in which it was found that 86.7% of all spelling errors were single errors.

5 In these tables, for example, 22 insertions found in words with two errors mean that, of all the errors found in such words, 22 were insertions, disregarding whether a word presents two insertions or an insertion along with some other type of error. To save space, rows filled with 0—that is, categories with no examples—were removed. Space-related errors were also broken down into Substitution by Space, as in pre qualified instead of pre-qualified; Space Insertion, as in w ord instead of word; Space Transposition, as in m yword instead of my word; and Missing Space, as in myword.

6 Total = 966 × 1 + 70 × 2 + 7 × 3 = 1,127 errors, with another 12 distributed in the last category, for C1; and 911 × 1 + 95 × 2 + 37 × 3 = 1,212 errors, with another 48 distributed in the last category, for C2.
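The two-sample Kolmogorov–Smirnov comparisons reported above are straightforward to reproduce with SciPy. The per-category proportions below are invented stand-ins for the columns of Tables 1 and 2, shown only to illustrate the call.

```python
from scipy.stats import ks_2samp

# Per-category error proportions for C1 and C2 (hypothetical numbers).
c1 = [0.18, 0.12, 0.25, 0.06, 0.21, 0.11, 0.07]
c2 = [0.16, 0.14, 0.27, 0.05, 0.23, 0.09, 0.06]

stat, p = ks_2samp(c1, c2)
print(f"ks = {stat:.4f}, p = {p:.4f}")  # large p -> no significant difference
```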
For reasons noted in Section 2, we have chosen the work by Bustamante, Arnaiz, and Ginés (2006), on European and Mexican Spanish, as a benchmark against which to compare ours. Table 4 shows the comparison results. For the sake of comparison, and given that each study has a classification scheme of its own, we had both to reorganize some of the categories and to present the figures in terms of numbers of words instead of numbers of errors. Although the results are shown considering cedilla-related errors to be cases of a missing diacritic, whenever appropriate the figures obtained by taking such errors as simple substitutions are also presented within parentheses.

Comparing the data distributions in Table 4, one sees no differences7 between our results (both for C1 and C2) and those for Spanish, no matter whether one takes cedilla-related errors to be a missing diacritic or a substitution. This, in turn, might indicate that such errors and their distributions are not related to the language or culture themselves, but instead to the set of characters (and corresponding diacritical marks) allowed within each specific language. The data also show the importance of taking diacritical marks into consideration when designing spelling correction systems for these languages: this type of error is responsible for over 40% of all wrong words, both in Brazilian Portuguese and in Spanish.

7 With the cedilla counted as a diacritic omission: ks = 0.3, p = 0.7591, both between Spanish and C1 and between Spanish and C2. With the cedilla counted as a character substitution: ks = 0.3, p = 0.7591, both between Spanish and C1 and between Spanish and C2.

5 conclusion :In this article we presented statistics collected from two different corpora of typed texts in Brazilian Portuguese. Our first contribution is a description of the error rate distribution among the four original categories defined by Damerau (1964), along with the distribution of errors related to the misuse of diacritical marks. The results show not only that Damerau's figures still hold, should we put diacritics aside, but also make a point about the importance such marks have in the misspelling of words in Brazilian Portuguese, as indicated by the frequency with which this type of error is made, thereby rendering spelling correction systems that rely solely on Damerau's results unfit for this language. On this account, a straightforward experiment with a commercially available text editor showed a 10–21% improvement, depending on the testing corpus, in the ranking of suggestions for misspelled words.

As an additional contribution, we have shown that our results are very much like those obtained for Spanish (Bustamante, Arnaiz, and Ginés 2006), in that we could see no statistically significant difference in error distribution between these two languages. This, in turn, may be taken as an indication that the distribution of errors does not depend on language or culture, but instead on the character set that people are allowed to use. As such, it would not come as a surprise to find that the same distribution can also be observed in other languages that make intensive use of a similar set of characters, such as Italian, French, and, to some extent, Turkish. This is something worth investigating, and we leave it for future research.
abstract :Fifty years after Damerau set up his statistics for the distribution of errors in typed texts, his findings are still used in a range of different languages. Because these statistics were derived from texts in English, the question of whether they actually apply to other languages has been raised. We address this issue through the analysis of a set of typed texts in Brazilian Portuguese, deriving statistics tailored to this language. Results show that diacritical marks play a major role, as indicated by the frequency of mistakes involving them, thereby rendering Damerau's original findings mostly unfit for spelling correction systems, although still holding them useful, should one set such marks aside. Furthermore, a comparison between these results and those published for Spanish shows no statistically significant differences between the two languages—an indication that the distribution of spelling errors depends on the adopted character set rather than on the language itself.

spelling error patterns in brazilian portuguese :Priscila A. Gimenes (EACH–USP), Norton T. Roman (EACH–USP), and Ariadne M. B. R. Carvalho (Institute of Computing / Unicamp)

references :
Aduriz, Itziar, Iñaki Alegria, Xabier Artola, Nerea Ezeiza, Kepa Sarasola, and Miriam Urkia. 1997. A spelling corrector for Basque based on morphology. Literary and Linguistic Computing.
Alkanhal, Mohamed I., Mohamed A. Al-Badrashiny, Mansour M. Alghamdi, and Abdulaziz O. Al-Qabbany. 2012. Automatic stochastic Arabic spelling correction with emphasis on space.
Andrade, Guilherme, F. Teixeira, C. R. Xavier, R. S. Oliveira, Leonardo C. da Rocha, and A. G. Evsukoff. 2012. Hasch: High performance automatic spell checker for Portuguese texts.
Attia, Mohammed, Pavel Pecina, Younes Samih, Khaled Shaalan, and Josef van Genabith. 2012. Improved spelling error detection and correction for Arabic. In Proceedings of COLING-2012.
Baba, Yukino, and Hisami Suzuki. 2012. How are spelling errors generated and corrected? A study of corrected and uncorrected spelling errors using keystroke logs. In Proceedings of ACL-2012.
Bustamante, Flora Ramírez, Alfredo Arnaiz, and Mar Ginés. 2006. A spell checker for a world language: The new Microsoft's Spanish spell checker. In Proceedings of LREC-2006.
Damerau, Fred J. 1964. A technique for computer detection and correction of spelling errors. Communications of the ACM, 7(3):171–176.
Deorowicz, Sebastian, and Marcin G. Ciura. 2005. Correcting spelling errors by modeling their causes. International Journal of Applied Mathematics and Computer Science, 15(2):275–285.
Gimenes, Priscila Azar, Norton Trevisan Roman, and Ariadne Maria Brito Rizzoni Carvalho. 2014. An OO Writer module for spelling correction in Brazilian Portuguese. Technical Report PPgSI-001/2014.
Medeiros, José Carlos Dinis. 1995. Processamento morfológico e correcção ortográfica do português. Master's thesis, Instituto Superior Técnico, Universidade Técnica de Lisboa.
Miangah, Tayebeh Mosavi. 2014. Farsispell: A spell-checking system for Persian using a large monolingual corpus. Literary and Linguistic Computing, 29(1):56–73.
Pirinen, Tommi A., and Krister Lindén. 2014. State-of-the-art in weighted finite-state spell-checking. In Alexander Gelbukh, editor, Computational Linguistics and Intelligent Text Processing.
Reynaert, Martin W. C. 2011. Character confusion versus focus word-based correction of spelling and OCR variants in corpora. International Journal on Document Analysis and Recognition.
Richter, Michal, Pavel Stranák, and Alexandr Rosen. 2012. Korektor—a system for contextual spell-checking and diacritics completion. In Proceedings of COLING-2012, pages 1019–1028, Mumbai.
Roman, Norton Trevisan, Paul Piwek, and Alexandre Rossi Alvares. 2013. Introducing a corpus of human-authored dialogue summaries in Portuguese.
Rytting, C. Anton, et al. (co-authors include Christian Hettick, Tim Buckwalter, and Charles C. Blake). 2011. Spelling correction for dialectal Arabic dictionary lookup. ACM Transactions on Asian Language Information Processing.
Stein, Benno, Dennis Hoppe, and Tim Gollub. 2012. The impact of spelling errors on patent search. In Proceedings of EACL-2012, pages 570–579, Avignon.
Wagner, Robert A., and Michael J. Fischer. 1974. The string-to-string correction problem. Journal of the ACM, 21(1):168–173.
Furthermore, a comparison between these results and those published for Spanish show no statistically significant differences between both languages—an indication that the distribution of spelling errors depends on the adopted character set rather than the language itself.",error category single two three over three total (%) : ,errors words in c1 (%) words in c2 (%) words in c1 (%) words in c2 (%) :,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,"1 introduction :Almost 50 years since Damerau’s groundbreaking work was published (Damerau 1964), the figures he set for the proportion of spelling errors in typed text, along with their classification, still remain in use. According to Damerau, over 80% of all misspelled words present a single error which, in turn, falls into one out of four categories, to wit, Insertion (an extra letter is inserted), Omission (one letter is missing), Substitution (one letter is wrong), and Transposition (two adjacent characters are transposed). In fact, these very same figures lie at the heart of much current mainstream research on related topics, from automatic text spelling correction (e.g., Pirinen and Lindén 2014) to more advanced information retrieval techniques (e.g., Stein, Hoppe, and Gollub 2012) to optical character recognition techniques (e.g., Reynaert 2011). The reason for such popularity may rest in the simplicity of the approach, whereby numbers can be assigned ∗ EACH–USP, Arlindo Béttio, 1000. 03828-000. São Paulo, SP – Brazil. E-mail: norton@usp.br. Submission received: 30 January 2014; revised submission received: 21 July 2014; accepted for publication: 18 September 2014. doi:10.1162/COLI a 00216 © 2015 Association for Computational Linguistics to the probability that a certain type of spelling error might take place, simply because that seems to be the frequency with which people make that kind of mistake. Surprisingly enough, even though Damerau derived his statistics uniquely from texts in English, his findings are applied, with very little to no adaptation at all, to research in a range of different languages, such as Basque (e.g., Aduriz et al. 1997), Persian (e.g., Miangah 2014), and Arabic (e.g., Alkanhal et al. 2012). The question then becomes how appropriate these figures are when applied to languages other than English. In fact, some researchers have already noticed this potential flaw, and tried to adapt Damerau’s findings to their own language, usually by modifying DamerauLevenstein edit distance (Wagner and Fischer 1974) to match some language-specific data, but still without verifying the appropriateness of Damerau’s statistics in the target language (e.g., Rytting et al. 2011; Richter, Stranák, and Rosen 2012), or by taking into account some feature of that language, such as the presence of diacritics, for example (e.g., Andrade et al. 2012). In this article, we move a step further by analyzing a set of typed texts from two different corpora in Brazilian Portuguese—a language spoken by over 190 million people1—and deriving statistics tailored to this language. We then compare our statistics with results obtained for Spanish (Bustamante, Arnaiz, and Ginés 2006). As we will show, the behavior demonstrated by native speakers of these languages follow a very similar pattern, straying from Damerau’s original findings while, at the same time, holding them still useful. 
As a limitation, in this research we only account for non-word errors, that is, errors that, once made, result in some non-existing word, and which may be detected by a regular dictionary look-up (Deorowicz and Ciura 2005). As an indication of the impact of these results, we added a new module (Gimenes, Roman, and Carvalho 2014)2 to OpenOffice Writer3—a freely available word processor—so as to reorder its suggestions list according to the statistics presented in this article. Within Writer, when typing Pao, instead of Pão (‘bread’ in Portuguese), the correct form comes in fifth place in the suggestion list, with Poca, Pocá, Poça, and Próça coming in the first four positions. With this module, it was observed that, for the first testing corpus, in 27.34% of the cases there was an improvement over Writer’s ranking (i.e., the correct form was ranked higher in the list), whereas in 5.84% the new list was actually worse, and in 66.82% no changes were observed. In the second corpus, an improvement was observed in 19.90% of the cases, in 9.00% figures were worse, with 71.10% remaining unchanged. Some 10–21% increase in accuracy is not something to be neglected, especially at the price of changing weights in an edit distance scheme. 2 related work :One of the first efforts to derive statistics similar to those by Damerau in a language other than English was made by Medeiros (1995), who analyzed a set of corpora in European Portuguese. In his work, Medeiros found that 80.1% of all spelling errors would fit into one of Damerau’s categories. He also noticed that around 20% of all linguistic errors would be related to the use of diacritics, meaning that they should be taken into account during string edit distance or probability calculations. By 1 According to 2010’s demographic census: http://www.ibge.gov.br/home/estatistica/populacao/ censo2010/caracteristicas_da_populacao/resultados_do_universo.pdf. 2 Available at http://ppgsi.each.usp.br/arquivos/RelTec/PPgSI-001_2014.pdf. 3 https://www.openoffice.org/. linguistic errors, the author meant linguistically motivated errors, as opposed to those caused by slipping fingers in a keyboard, for example. However illustrative, Medeiros’s findings suffer from a major drawback, to the extent that he relied on pre-existing lists of erroneous words in the related literature, along with handwritten errors made by high school students, with only part of the data coming from e-mail exchanges—that is, from actual typed text. The problem with this approach is that these data come without any frequency analysis, which renders them inappropriate for the application of computer techniques based on statistics, also making it hard to compare to any findings related to error frequency. Spanish was another language found to confirm Damerau’s findings (Bustamante, Arnaiz, and Ginés 2006). In that case, however, the authors made a much more detailed categorization of the errors they found, beyond the four classes originally proposed by Damerau, in a corpus of written (presumably typed) texts from Mexico and Spain. Still, a careful inspection into these categories (by grouping insertions and repetitions of the same letter, and taking errors related to the misuse of diacritics, capitalization, and cognitive replacements to be substitutions) leads one back to Damerau’s four basic errors. In that case, a total of 86.7% of all errors were found to fit into one of these groups. 
The main difference, however, between both Damerau and Medeiros, and the work by Bustamante, Arnaiz, and Ginés (2006), is that the majority of spelling errors (54.9% of all errors, corresponding to around 49% of all single errors) were related to the misuse of diacritics, with special emphasis on their omission, which was responsible for 51.5% of all spelling errors (i.e., around 46% of all single errors) found in the corpus. Finally, one of the latest additions to this list was made by Baba and Suzuki (2012), in which the authors analyzed spelling errors both in English and Japanese, with the last one typed on a Roman keyboard (that is, transliterated). Interestingly, the authors report almost no difference in the distribution of errors among Damerau’s four classes, both for English and Japanese. By looking at the details, however, they found some specific errors in Japanese, which they attribute to the phonological and orthographic characteristics of the language. In fact, a breakdown analysis of the four classes show accentuated differences in the type of substitutions (that is, a vowel by another vowel, a vowel by a consonant, etc.), insertions (inserted vowel vs. consonant), deletions (whether or not the deleted character was a repetition of some of its neighbors), and transpositions (adjacent characters versus syllables) that were made. As we will show in the following sections, this overall confirmation of Damerau’s findings, but with remarked differences caused by each language’s idiosyncrasies, seems to be the common behavior both in the related work and ours. This, in turn, may shed some light on the reasons why English-based spelling checking software does not seem to perform so badly in other languages, but still not as well as it could, should each language’s own characteristics be taken into account. in C2 for single errors—still a substantial proportion. Interestingly, if one takes errors involving diacritics to be substitutions, then we end up with a total of 89.86% of all single errors falling into one of Damerau’s categories in C1, with 88.91% in C2, thereby confirming Damerau’s statistics. An analysis of the distribution of single errors among Damerau’s four original categories (i.e., disregarding diacritic-related misspellings), and the distribution among the categories related to the use of diacritics also shows no statistically significant difference between both corpora (ks = 0.5, p = 0.6994 for Damerau’s categories and ks = 0.6, p = 0.3291 for the diacritics—including cedilla—set). Finally, regarding the number of misspelled words, we have once again confirmed Damerau’s results, in that over 85% of all wrong words, be they repetition of existing words or not, have a single spelling error. Table 3 shows these results. In this table,6 we present the figures both when taking word repetitions into account and when ruling them out (that is, when keeping the statistics only for new words). This is also in line with the results obtained for Spanish (Bustamante, Arnaiz, and Ginés 2006), in which it was found that 86.7% of all spelling errors were single errors. 6 Total = 966 × 1 + 70 × 2 + 7 × 3 = 1, 127 errors, with another 12 distributed in the last category, for C1, and 911 × 1 + 95 × 2 + 37 × 3 = 1, 212 errors, with another 48 distributed in the last category, for C2. 
3 materials and methods :Because we were interested in source texts written in Brazilian Portuguese, where authors were free to type with little to no intervention by spelling correction facilities, we decided to use the corpus presented in Roman et al. (2013), which comprises 1,808 texts, typed in by 452 different participants in a Web experiment, with a total of 62,858 words in length. As a source, this corpus has the advantages of (1) being produced by a number of different people of different ages, educational attainment, and background; (2) being produced through a Web browser, without any help from spelling correction modules usually present in text editors; and (3) being freely available for download over the Web. We will refer to this corpus as C1. In order to verify the statistics gathered on C1, in June 2011 we collected a corpus of blog posts from four different Web sites4 that describe travel diaries along with comments by visitors. With a total of 26,418 words, spread over 192 posts, and being written in Brazilian Portuguese, this corpus presents the same advantages as C1, except for the fact that we cannot verify participants’ details, which means we cannot make any statement on the participants’ distribution according to age, background, and so forth. Still, the main characteristics of availability and free text production remain. We will refer to this corpus as C2. By comparing the statistics from both corpora we intend to reduce the effect of any bias related to writing style and characteristics of the participants. Given that our goal was to identify non-word errors, we relied on a commercial text editor to highlight misspellings in both corpora. The highlighted words were then manually inspected by one of the researchers, and errors were grouped in categories. Besides Damerau’s four original sets, we identified three extra categories: errors involving the use of diacritics, errors related to the use of the cedilla, and space-related errors. The first of these groups was inspired by the observation that the use of diacritics is a key issue in Portuguese (Andrade et al. 2012), potentially playing an important role in the making of spelling mistakes. Following Damerau, this group can be subdivided into four other subcategories: missing diacritic, addition of diacritic, right diacritic applied to the wrong character, and wrong diacritic applied to the right character. The second group concerns the misuse of the cedilla—a special diacritical mark that in Portuguese can only be applied to the character c, resulting in ç. The reason for this character deserving a whole category of its own lies in the existence of two distinct keyboard layouts in the Brazilian market: ABNT-2 and US-accents. The main difference between both layouts is that whereas ABNT-2 presents a separate key for ç, that key does not exist on the US-Accents keyboard. Hence, while on ABNT-2 the user hits a single key to get a ç, on US-Accents it is a composite—two keys must be pressed: the single quotation mark and c. Ultimately, errors involving the cedilla may be interpreted both as a regular character mistake, or an error related to the use of diacritics, depending on the keyboard. Finally, because space-related errors, although not so common (see Section 4), are usually difficult to deal with via spelling correctors, we decided to give them an independent set of categories. As a mistake, spaces have the annoying characteristic of not being handled by edit distance operations (Attia et al. 
2012), thereby passing undisturbed by systems that use string distance-based ranking when giving alternative spellings to the user. The reason for this problem is that, by joining two words together, for example, the number of mistakes dramatically rises, if one takes that new string to be a single word. Along with these three categories, there is a “leftover” fourth one— others—comprising linguistic expression errors, first-letter capitalization, and wrong diminutive or augmentative. 4 http://bragatte.wordpress.com/. http://guilhermebragatte.blogspot.com.br/. http://forasteironairlanda.wordpress.com/tag/nomadismo/. http://sussuemdublin.wordpress.com/2011/01/. 4 results and discussion :Results both for C1 and C2 are shown in Tables 1 and 2.5 Even though the total number of mistakes is almost the same in both corpora (1,139 for C1 vs. 1,260 for C2), the proportion of spelling errors in C2 is more than twice as high as that of C1. In this case, C1 had a mean error rate of 1.81% (that is, 0.0181 error per word), while C2 reached 4.77%. This difference may be due to the fact that C1 was generated by students from a university, that is, people with a higher educational level. It may also have something to do with the fact that blogs are more conversation-like, which may lead people to relax the spelling rules they choose to follow, especially when it comes to errors involving diacritics. Still, despite this discrepancy, a two-sample KolmogorovSmirnov test showed no statistically significant differences between these corpora, on the distribution of the proportion of errors among categories, neither for the total number of errors (ks = 0.4286, p = 0.1528), nor for the number of errors per word (ks = 0.4286, p = 0.1528 for single, ks = 0.2143, p = 0.9048 for double, ks = 0.2857, p = 0.6172 for triple, and ks = 0.1429, p = 0.9988 for multiple errors). As it turns out, errors related to the use of diacritics correspond to approximately half of all spelling errors in both corpora (overall, 47.15% in C1 and 50.08% in C2, corresponding to 49.48% of all single errors in C1 and 58.84% in C2), if we include in this sum cedilla-related errors. By taking the cedilla as a simple substitution, the numbers drop to 41.09% in C1 and 45.63% in C2 (overall), with 44.72% in C1 and 57.08% 5 In these tables, for example, 22 insertions found in words with two errors mean that, from all errors found in such words, 22 were insertions, disregarding whether a word presents two insertions or an insertion along with some other type of error. To save space, rows filled with 0, that is categories with no examples, were removed. Space-related errors were also broken down into Substitution by Space, as in pre qualified instead of pre-qualified; Space Insertion, as in w ord instead of word; Space Transposition, as in m yword instead of my word; and Missing Space, as in myword. For reasons noted in Section 2, we have chosen the work by Bustamante, Arnaiz, and Ginés (2006), on European and Mexican Spanish, as a benchmark against which to compare ours. Table 4 shows the comparison results. In this case, for the sake of comparison, and given that each research has a classification scheme of its own, we had both to reorganize some of the categories and present the figures in terms of number of words, instead of number of errors. 
As it turns out, errors related to the use of diacritics correspond to approximately half of all spelling errors in both corpora (overall, 47.15% in C1 and 50.08% in C2, corresponding to 49.48% of all single errors in C1 and 58.84% in C2), if we include cedilla-related errors in this sum. By taking the cedilla as a simple substitution, the numbers drop to 41.09% in C1 and 45.63% in C2 (overall), with 44.72% in C1 and 57.08% in C2 for single errors. For reasons noted in Section 2, we have chosen the work by Bustamante, Arnaiz, and Ginés (2006), on European and Mexican Spanish, as a benchmark against which to compare ours. Table 4 shows the comparison results. In this case, for the sake of comparison, and given that each study has a classification scheme of its own, we had to both reorganize some of the categories and present the figures in terms of number of words instead of number of errors. Although results are shown considering cedilla-related errors to be cases of a missing diacritic, whenever appropriate the figures for taking such errors as simple substitutions are also presented within parentheses. By comparing the data distributions in Table 4, one sees no differences7 between our results (both for C1 and C2) and those for Spanish, no matter whether one takes cedilla-related errors to be a missing diacritic or a substitution. This, in turn, might indicate that such errors and their distributions are not related to the language or culture themselves, but instead to the set of characters (and corresponding diacritical marks) allowed within each specific language. Also, the data show the importance of taking diacritical marks into consideration when designing spelling correction systems for these languages. As it turned out, this type of error is responsible for over 40% of all wrong words, both in Brazilian Portuguese and Spanish. 5 In these tables, for example, 22 insertions found in words with two errors mean that, from all errors found in such words, 22 were insertions, disregarding whether a word presents two insertions or an insertion along with some other type of error. To save space, rows filled with 0, that is, categories with no examples, were removed. Space-related errors were also broken down into Substitution by Space, as in pre qualified instead of pre-qualified; Space Insertion, as in w ord instead of word; Space Transposition, as in m yword instead of my word; and Missing Space, as in myword. 5 conclusion :In this article we presented some statistics collected from two different corpora of typed texts in Brazilian Portuguese. Our first contribution is a description of the error rate distribution among the four original categories defined by Damerau (1964), along with the distribution of errors related to the misuse of diacritical marks. Results show not only that Damerau’s figures still hold, should we put aside diacritics, but also make a point about the importance such marks have on the misspelling of words in Brazilian Portuguese, as indicated by the frequency with which this type of error is made, thereby rendering spelling correction systems that rely solely on Damerau’s results unfit for this language. On this account, a straightforward experiment with a commercially available text editor has shown a 10–21% improvement, depending on the testing corpus, in the ranking of suggestions for misspelled words. As an additional contribution, we have shown that our results are very much like those obtained for Spanish (Bustamante, Arnaiz, and Ginés 2006), in that we could see no statistically significant difference in error distribution between these two languages. This, in turn, may be taken as an indication that the distribution of errors does not depend on language or culture, but instead on the character set that people are allowed to use. As such, it would not come as a surprise to find out that the same distribution can also be observed in other languages, such as Italian, French, and, to some extent, Turkish, which make intensive use of a similar set of characters. This is something that is worth investigating, and we leave it for future research. 7 With cedilla being a diacritic omission: ks = 0.3, p = 0.7591 both between Spanish and C1, and between Spanish and C2. With cedilla being a character substitution: ks = 0.3, p = 0.7591 both between Spanish and C1, and between Spanish and C2.
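The ranking improvement reported above comes from letting the corrector treat diacritic-only mismatches as cheaper than arbitrary substitutions. The sketch below is a hypothetical illustration of that idea, a plain Levenshtein distance with a discounted cost for diacritic-only substitutions; it is not the authors’ system, and the dictionary and costs are made up.

```python
import unicodedata

def base(c):
    # First code point of the NFD decomposition: 'ê' -> 'e', 'ç' -> 'c'.
    return unicodedata.normalize('NFD', c)[0]

def weighted_distance(a, b):
    # Standard dynamic-programming Levenshtein, except substitutions
    # that differ only in a diacritic cost 0.5 instead of 1.
    m, n = len(a), len(b)
    d = [[float(i + j) if i * j == 0 else 0.0 for j in range(n + 1)]
         for i in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i-1] == b[j-1]:
                sub = 0.0
            elif base(a[i-1]) == base(b[j-1]):
                sub = 0.5
            else:
                sub = 1.0
            d[i][j] = min(d[i-1][j] + 1, d[i][j-1] + 1, d[i-1][j-1] + sub)
    return d[m][n]

dictionary = ["você", "doce", "voz"]      # toy candidate list
print(sorted(dictionary, key=lambda w: weighted_distance("voce", w)))
# ['você', 'doce', 'voz']: 'você' wins because the missing circumflex
# costs 0.5; with unit costs it would tie with 'doce' at distance 1.
```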
references :
Aduriz, Itziar, Iñaki Alegria, Xabier Artola, Nerea Ezeiza, Kepa Sarasola, and Miriam Urkia. 1997. A spelling corrector for Basque based on morphology. Literary and Linguistic Computing.
Alkanhal, Mohamed I., Mohamed A. Al-Badrashiny, Mansour M. Alghamdi, and Abdulaziz O. Al-Qabbany. 2012. Automatic stochastic Arabic spelling correction with emphasis on space.
Andrade, Guilherme, F. Teixeira, C. R. Xavier, R. S. Oliveira, Leonardo C. da Rocha, and A. G. Evsukoff. 2012. Hasch: High performance automatic spell checker for Portuguese texts.
Attia, Mohammed, Pavel Pecina, Younes Samih, Khaled Shaalan, and Josef van Genabith. 2012. Improved spelling error detection and correction for Arabic. In Proceedings of COLING-2012.
Baba, Yukino and Hisami Suzuki. 2012. How are spelling errors generated and corrected? A study of corrected and uncorrected spelling errors using keystroke logs. In Proceedings of ACL-2012.
Bustamante, Flora Ramírez, Alfredo Arnaiz, and Mar Ginés. 2006. A spell checker for a world language: The new Microsoft’s Spanish spell checker. In Proceedings of LREC-2006.
Damerau, Fred J. 1964. A technique for computer detection and correction of spelling errors. Communications of the ACM, 7(3):171–176.
Ciura.""], ""title"": ""Correcting spelling errors by modeling their causes"", ""venue"": ""International Journal of Applied Mathematics and Computer Science, 15(2):275\u2013285."", ""year"": 2005}, {""authors"": [""Gimenes"", ""Priscila Azar"", ""Norton Trevisan Roman"", ""Ariadne Maria Brito Rizzoni Carvalho.""], ""title"": ""An OO writer module for spelling correction in Brazilian Portuguese"", ""venue"": ""Technical Report PPgSI-001/"", ""year"": 2014}, {""authors"": [""Medeiros"", ""Jos\u00e9 Carlos Dinis.""], ""title"": ""Processamento morfol\u00f3gico e correc c\u00e3o ortogr\u00e1fica do portugu\u00eas"", ""venue"": ""Master\u2019s thesis, Instituto Superior T\u00e9cnico \u2013 Universidade T\u00e9cnica de Lisboa, February."", ""year"": 1995}, {""authors"": [""Miangah"", ""Tayebeh Mosavi.""], ""title"": ""Farsispell: A spell-checking system for Persian using a large monolingual corpus"", ""venue"": ""Literary and Linguistic Computing, 29(1):56\u201373."", ""year"": 2014}, {""authors"": [""Pirinen"", ""Tommi A."", ""Krister Lind\u00e9n.""], ""title"": ""State-of-the-art in weighted finite-state spell-checking"", ""venue"": ""Alexander Gelbukh, editor, Computational Linguistics and Intelligent Text Processing,"", ""year"": 2014}, {""authors"": [""Reynaert"", ""Martin W.C.""], ""title"": ""Character confusion versus focus word-based correction of spelling and OCR variants in corpora"", ""venue"": ""International Journal on Document Analysis and Recognition,"", ""year"": 2011}, {""authors"": [""Richter"", ""Michal"", ""Pavel Stran\u00e1k"", ""Alexandr Rosen.""], ""title"": ""Korektor\u2014a system for contextual spell-checking and diacritics completion"", ""venue"": ""Proceedings of COLING-2012, pages 1,019\u20131,028, Mumbai."", ""year"": 2012}, {""authors"": [""Roman"", ""Norton Trevisan"", ""Paul Piwek"", ""Alexandre Rossi Alvares""], ""title"": ""Introducing a corpus of human-authored dialogue summaries in Portuguese"", ""year"": 2013}, {""authors"": [""Christian Hettick"", ""Tim Buckwalter"", ""Charles C. Blake.""], ""title"": ""Spelling correction for dialectal Arabic dictionary lookup"", ""venue"": ""ACM Transactions on Asian Language Information Processing,"", ""year"": 2011}, {""authors"": [""Stein"", ""Benno"", ""Dennis Hoppe"", ""Tim Gollub.""], ""title"": ""The impact of spelling errors on patent search"", ""venue"": ""Proceedings of EACL-2012, pages 570\u2013579, Avignon."", ""year"": 2012}, {""authors"": [""Wagner"", ""Robert A."", ""Michael J. Fischer.""], ""title"": ""The string-to-string correction problem"", ""venue"": ""Journal of the ACM, 21(1): 168\u2013173. 183"", ""year"": 1974}] spelling error patterns in : brazilian portuguese :Priscila A. Gimenes each / usp :Norton T. Roman∗ Ariadne M. B. R. Carvalho Institute of Computing / Unicamp Fifty years after Damerau set up his statistics for the distribution of errors in typed texts, his findings are still used in a range of different languages. Because these statistics were derived from texts in English, the question of whether they actually apply to other languages has been raised. We address this issue through the analysis of a set of typed texts in Brazilian Portuguese, deriving statistics tailored to this language. Results show that diacritical marks play a major role, as indicated by the frequency of mistakes involving them, thereby rendering Damerau’s original findings mostly unfit for spelling correction systems, although still holding them useful, should one set aside such marks. 
Results show that diacritical marks play a major role, as indicated by the frequency of mistakes involving them, thereby rendering Damerau’s original findings mostly unfit for spelling correction systems, although still holding them useful, should one set aside such marks. Furthermore, a comparison between these results and those published for Spanish shows no statistically significant differences between both languages—an indication that the distribution of spelling errors depends on the adopted character set rather than the language itself. [Table residue: column headings for error category; single, two, three, and over three errors; total (%); and words in C1 (%) / C2 (%).] 1 the deep learning tsunami :Deep Learning waves have lapped at the shores of computational linguistics for several years now, but 2015 seems like the year when the full force of the tsunami hit the major Natural Language Processing (NLP) conferences. However, some pundits are predicting that the final damage will be even worse. Accompanying ICML 2015 in Lille, France, there was another, almost as big, event: the 2015 Deep Learning Workshop. The workshop ended with a panel discussion, and at it, Neil Lawrence said, “NLP is kind of like a rabbit in the headlights of the Deep Learning machine, waiting to be flattened.” Now that is a remark that the computational linguistics community has to take seriously! Is it the end of the road for us? Where are these predictions of steamrollering coming from?
At the June 2015 opening of the Facebook AI Research Lab in Paris, its director Yann LeCun said: “The next big step for Deep Learning is natural language understanding, which aims to give machines the power to understand not just individual words but entire sentences and paragraphs.”1 In a November 2014 Reddit AMA (Ask Me Anything), Geoff Hinton said, “I think that the most exciting areas over the next five years will be really understanding text and videos. I will be disappointed if in five years’ time we do not have something that can watch a YouTube video and tell a story about what happened. In a few years time we will put [Deep Learning] on a chip that fits into someone’s ear and have an English-decoding chip that’s just like a real Babel fish.”2 And Yoshua Bengio, the third giant of modern Deep Learning, has also increasingly oriented his group’s research toward language, including recent exciting new developments in neural machine translation systems. It’s not just Deep Learning researchers. When leading machine learning researcher Michael Jordan was asked at a September 2014 AMA, “If you got a billion dollars to spend on a huge research project that you get to lead, what would you like to do?”, he answered: “I’d use the billion dollars to build a NASA-size program focusing on natural language processing, in all of its glory (semantics, pragmatics, etc.).” He went on: “Intellectually I think that NLP is fascinating, allowing us to focus on highly structured inference problems, on issues that go to the core of ‘what is thought’ but remain eminently practical, and on a technology that surely would make the world a better place.” Well, that sounds very nice! So, should computational linguistics researchers be afraid? I’d argue, no. To return to the Hitchhiker’s Guide to the Galaxy theme that Geoff Hinton introduced, we need to turn the book over and look at the back cover, which says in large, friendly letters: “Don’t panic.” ∗ Departments of Computer Science and Linguistics, Stanford University, Stanford CA 94305-9020, U.S.A. E-mail: manning@cs.stanford.edu. 1 http://www.wired.com/2014/12/fb/. 2 https://www.reddit.com/r/MachineLearning/comments/2lmo0l/ama_geoffrey_hinton. 2 the success of deep learning :There is no doubt that Deep Learning has ushered in amazing technological advances in the last few years. I won’t give an extensive rundown of successes, but here is one example.
A recent Google blog post told about Neon, the new transcription system for Google Voice.3 After admitting that in the past Google Voice voicemail transcriptions often weren’t fully intelligible, the post explained the development of Neon, an improved voicemail system that delivers more accurate transcriptions, like this: “Using a (deep breath) long short-term memory deep recurrent neural network (whew!), we cut our transcription errors by 49%.” Do we not all dream of developing a new approach to a problem that halves the error rate of the previous state-of-the-art system? 3 why computational linguists need not worry :Michael Jordan, in his AMA, gave two reasons why he wasn’t convinced that Deep Learning would solve NLP: “Although current deep learning research tends to claim to encompass NLP, I’m (1) much less convinced about the strength of the results, compared to the results in, say, vision; (2) much less convinced in the case of NLP than, say, vision, the way to go is to couple huge amounts of data with black-box learning architectures.”4 Jordan is certainly right about his first point: So far, problems in higher-level language processing have not seen the dramatic error rate reductions from deep learning that have been seen in speech recognition and in object recognition in vision. Although there have been gains from deep learning approaches, they have been more modest than sudden 25% or 50% error reductions. It could easily turn out that this remains the case. The really dramatic gains may only have been possible on true signal processing tasks. On the other hand, I’m much less convinced by his second argument. However, I do have my own two reasons why NLP need not worry about deep learning: (1) It just has to be wonderful for our field for the smartest and most influential people in machine learning to be saying that NLP is the problem area to focus on; and (2) Our field is the domain science of language technology; it’s not about the best method of machine learning—the central issue remains the domain problems. The domain problems will not go away. Joseph Reisinger wrote on his blog: “I get pitched regularly by startups doing ‘generic machine learning’ which is, in all honesty, a pretty ridiculous idea. Machine learning is not undifferentiated heavy lifting, it’s not commoditizable like EC2, and closer to design than coding.”5 From this perspective, it is people in linguistics, people in NLP, who are the designers. Recently at ACL conferences, there has been an over-focus on numbers, on beating the state of the art. Call it playing the Kaggle game. More of the field’s effort should go into problems, approaches, and architectures. Recently, one thing that I’ve been devoting a lot of time to—together with many other collaborators—is the development of Universal Dependencies.6 The goal is to develop a common syntactic dependency representation and POS and feature label sets that can be used with reasonable linguistic fidelity and human usability across all human languages. That’s just one example; there are many other design efforts underway in our field. One other current example is the idea of Abstract Meaning Representation.7 3 http://googleblog.blogspot.com/2015/07/neon-prescription-or-rather-new.html. 4 http://www.reddit.com/r/MachineLearning/comments/2fxi6v/ama_michael_i_jordan. 5 http://thedatamines.com/post/13177389506/why-generic-machine-learning-fails. 6 http://universaldependencies.github.io/docs/. 7 http://amr.isi.edu. 4 deep learning of language :Where has Deep Learning helped NLP?
The gains so far have not so much been from true Deep Learning (use of a hierarchy of more abstract representations to promote generalization) as from the use of distributed word representations—through the use of real-valued vector representations of words and concepts. Having a dense, multidimensional representation of similarity between all words is incredibly useful in NLP, but not only in NLP. Indeed, the importance of distributed representations evokes the “Parallel Distributed Processing” mantra of the earlier surge of neural network methods, which had a much more cognitive-science directed focus (Rumelhart and McClelland 1986). Distributed representations can better explain human-like generalization, but also, from an engineering perspective, the use of small dimensionality and dense vectors for words allows us to model large contexts, leading to greatly improved language models. Especially seen from this new perspective, the exponentially greater sparsity that comes from increasing the order of traditional word n-gram models seems conceptually bankrupt. I do believe that the idea of deep models will also prove useful. The sharing that occurs within deep representations can theoretically give an exponential representational advantage, and, in practice, offers improved learning systems. The general approach to building Deep Learning systems is compelling and powerful: The researcher defines a model architecture and a top-level loss function, and then both the parameters and the representations of the model self-organize so as to minimize this loss, in an end-to-end learning framework. We are starting to see the power of such deep systems in recent work in neural machine translation (Sutskever, Vinyals, and Le 2014; Luong et al. 2015). Finally, I have been an advocate for focusing more on compositionality in models, for language in particular, and for artificial intelligence in general. Intelligence requires being able to understand bigger things from knowing about smaller parts. In particular for language, understanding novel and complex sentences crucially depends on being able to construct their meaning compositionally from smaller parts—words and multiword expressions—of which they are constituted. Recently, there have been many, many papers showing how systems can be improved by using distributed word representations from “deep learning” approaches, such as word2vec (Mikolov et al. 2013) or GloVe (Pennington, Socher, and Manning 2014). However, this is not actually building Deep Learning models, and I hope in the future that more people focus on the strongly linguistic question of whether we can build meaning composition functions in Deep Learning systems.
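To make the sparsity contrast above concrete, the toy calculation below compares the size of n-gram count tables with the size of a dense embedding table, under an assumed vocabulary of 100,000 words and 300-dimensional vectors (illustrative numbers, not drawn from any particular system):

```python
# Upper bounds on parameter counts: n-gram tables grow exponentially
# with the order n, embedding tables grow linearly with the vocabulary.
V, dim = 100_000, 300

for n in (1, 2, 3, 4):
    print(f"{n}-gram table upper bound: {float(V ** n):.1e}")
print(f"embedding table parameters: {float(V * dim):.1e}")
# The 4-gram table has up to 1.0e+20 possible entries, almost all of
# which are never observed; the embedding table has just 3.0e+07.
```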
5 scientific questions that connect computational linguistics and deep learning :I encourage people to not get into the rut of doing no more than using word vectors to make performance go up a couple of percent. Even more strongly, I would like to suggest that we might return instead to some of the interesting linguistic and cognitive issues that motivated noncategorical representations and neural network approaches. One example of noncategorical phenomena in language is the POS of words in the gerund V-ing form, such as driving. This form is classically described as ambiguous between a verbal form and a nominal gerund. In fact, however, the situation is more complex, as V-ing forms can appear in any of the four core categories of Chomsky (1970): Adjective [+N, +V]: an unassuming man; Noun [+N, −V]: the opening of the store; Verb [−N, +V]: she is eating dinner; Preposition [−N, −V]: concerning your point. What is even more interesting is that there is evidence that there is not just an ambiguity but mixed noun–verb status. For example, a classic linguistic test for being a noun is appearing with a determiner, while a classic linguistic test for being a verb is taking a direct object. However, it is well known that the gerund nominalization can do both of these things at once: (1) The not observing this rule is that which the world has blamed in our satirist. (Dryden, Essay Dramatick Poesy, 1684, page 310) (2) The only mental provision she was making for the evening of life, was the collecting and transcribing all the riddles of every sort that she could meet with. (Jane Austen, Emma, 1816) (3) The difficulty is in the getting the gold into Erewhon. (Sam Butler, Erewhon Revisited, 1902) This is oftentimes analyzed by some sort of category-change operation within the levels of a phrase-structure tree, but there is good evidence that this is in fact a case of noncategorical behavior in language. Indeed, this construction was used early on as an example of a “squish” by Ross (1972). Diachronically, the V-ing form shows a history of increasing verbalization, but in many periods it shows a notably non-discrete status. For example, we find clearly graded judgments in this domain: (4) Tom’s winning the election was a big upset. (5) ?This teasing John all the time has got to stop. (6) ?There is no marking exams on Fridays. (7) *The cessation hostilities was unexpected. Various combinations of determiner and verb object do not sound so good, but still much better than trying to put a direct object after a nominalization via a derivational morpheme such as -ation. Houston (1985, page 320) shows that assignment of V-ing forms to a discrete part-of-speech classification is less successful (in a predictive sense) than a continuum in explaining the spoken alternation between -ing vs. -in’, suggesting that “grammatical categories exist along a continuum which does not exhibit sharp boundaries between the categories.” A different, interesting example was explored by one of my graduate school classmates, Whitney Tabor. Tabor (1994) looked at the use of kind of and sort of, an example that I then used in the introductory chapter of my 1999 textbook (Manning and Schütze 1999). The nouns kind or sort can head an NP or be used as a hedging adverbial modifier: (8) [That kind [of knife]] isn’t used much. (9) We are [kind of] hungry. The interesting thing is that there is a path of reanalysis through ambiguous forms, such as the following pair, which suggests how one form emerged from the other: (10) [a [kind [of dense rock]]] (11) [a [[kind of] dense] rock] Tabor (1994) discusses how Old English has kind but few or no uses of kind of. Beginning in Middle English, ambiguous contexts, which provide a breeding ground for the reanalysis, start to appear (the 1570 example in Example (13)), and then, later, examples that are unambiguously the hedging modifier appear (the 1830 example in Example (14)): (12) A nette sent in to the see, and of alle kind of fishis gedrynge (Wyclif, 1382) (13) Their finest and best, is a kind of course red cloth (True Report, 1570) (14) I was kind of provoked at the way you came up (Mass. Spy, 1830) This is history, not synchrony.
Presumably kids today learn the softener use of kind/sort of first. Did the reader notice an example of it in the quote in my first paragraph? (15) NLP is kind of like a rabbit in the headlights of the deep learning machine (Neil Lawrence, DL workshop panel, 2015) Whitney Tabor modeled this evolution with a small, but already deep, recurrent neural network—one with two hidden layers. He did that in 1994, taking advantage of the opportunity to work with Dave Rumelhart at Stanford. Just recently, there has started to be some new work harnessing the power of distributed representations for modeling and explaining linguistic variation and change. Sagi, Kaufmann, and Clark (2011)—actually using the more traditional method of Latent Semantic Analysis to generate distributed word representations—show how distributed representations can capture a semantic change: the broadening and narrowing of reference over time. They look at examples such as how in Old English deer was any animal, whereas in Middle and Modern English it applies to one clear animal family. The words dog and hound have swapped: In Middle English, hound was used for any kind of canine, while now it is used for a particular sub-kind, whereas the reverse is true for dog. Kulkarni et al. (2015) use neural word embeddings to model the shift in meaning of words such as gay over the last century (exploiting the online Google Books Ngrams corpus). At a recent ACL workshop, Kim et al. (2014) use a similar approach—using word2vec—to look at recent changes in the meaning of words. For example, in Figure 1, they show how around 2000, the meaning of the word cell changed rapidly from being close in meaning to closet and dungeon to being close in meaning to phone and cordless. The meaning of a word in this context is the average over the meanings of all senses of a word, weighted by their frequency of use.
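The core of that method is simple: train a separate embedding space on each time slice and compare a word’s nearest neighbours across slices. Here is a minimal, hypothetical sketch using gensim; the toy sentences merely stand in for decade-sized corpora such as the Google Books slices used by Kim et al. (2014):

```python
from gensim.models import Word2Vec

# Toy stand-ins for two time slices of a real diachronic corpus.
slice_1990 = [["the", "prisoner", "paced", "his", "cell"],
              ["a", "dark", "cell", "like", "a", "dungeon"]] * 200
slice_2010 = [["she", "answered", "her", "cell", "phone"],
              ["the", "cordless", "cell", "phone", "rang"]] * 200

# One embedding space per slice (real studies need far more data).
m_1990 = Word2Vec(slice_1990, vector_size=50, min_count=1, seed=0)
m_2010 = Word2Vec(slice_2010, vector_size=50, min_count=1, seed=0)

# The nearest neighbours of 'cell' in each slice expose the drift.
print(m_1990.wv.most_similar("cell", topn=3))
print(m_2010.wv.most_similar("cell", topn=3))
```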
These more scientific uses of distributed representations and Deep Learning for modeling phenomena characterize the previous boom in neural networks. There has been a bit of a kerfuffle online lately about citing and crediting work in Deep Learning, and from that perspective, it seems to me that the two people who scarcely get mentioned any more are Dave Rumelhart and Jay McClelland. Starting from the Parallel Distributed Processing Research Group in San Diego, their research program was aimed at a clearly more scientific and cognitive study of neural networks. Now, there are indeed some good questions about the adequacy of neural network approaches for rule-governed linguistic behavior. Old timers in our community should remember that arguing against the adequacy of neural networks for rule-governed linguistic behavior was the foundation for the rise to fame of Steve Pinker—and the foundation of the career of about six of his graduate students. It would take too much space to go through the issues here, but in the end, I think it was a productive debate. It led to a vast amount of work by Paul Smolensky on how basically categorical systems can emerge and be represented in a neural substrate (Smolensky and Legendre 2006). Indeed, Paul Smolensky arguably went too far down the rabbit hole, devoting a large part of his career to developing a new categorical model of phonology, Optimality Theory (Prince and Smolensky 2004). There is a rich body of earlier scientific work that has been neglected. It would be good to return some emphasis within NLP to cognitive and scientific investigation of language rather than almost exclusively using an engineering model of research. Overall, I think we should feel excited and glad to live in a time when Natural Language Processing is seen as so central to both the further development of machine learning and industry application problems. The future is bright. However, I would encourage everyone to think about problems, architectures, cognitive science, and the details of human language, how it is learned, processed, and how it changes, rather than just chasing state-of-the-art numbers on a benchmark task.
references :
Chomsky, Noam. 1970. Remarks on nominalization. In R. Jacobs and P. Rosenbaum, editors, Readings in English Transformational Grammar. Ginn, Waltham, MA, pages 184–221.
Houston, Ann Celeste. 1985. Continuity and Change in English Morphology: The Variable (ing). Ph.D. thesis, University of Pennsylvania.
Kim, Yoon, Yi-I Chiu, Kentaro Hanaki, Darshan Hegde, and Slav Petrov. 2014. Temporal analysis of language through neural language models. In Proceedings of the ACL 2014 Workshop on Language Technologies and Computational Social Science.
Kulkarni, Vivek, Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. 2015. Statistically significant detection of linguistic change. In Proceedings of the 24th International World Wide Web Conference.
Luong, Minh-Thang, Ilya Sutskever, Quoc V. Le, Oriol Vinyals, and Wojciech Zaremba. 2015. Addressing the rare word problem in neural machine translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics.
Manning, Christopher D. and Hinrich Schütze. 1999. Foundations of Statistical Natural Language Processing. MIT Press, Cambridge, MA.
Mikolov, Tomas, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality.
Pennington, Jeffrey, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing.
Prince, Alan and Paul Smolensky. 2004. Optimality Theory: Constraint Interaction in Generative Grammar. Blackwell, Oxford.
Ross, John R. 1972. The category squish: Endstation Hauptwort. In Papers from the Eighth Regional Meeting, pages 316–328, Chicago.
Sagi, Eyal, Stefan Kaufmann, and Brady Clark. 2011. Tracing semantic change with latent semantic analysis. In Kathryn Allen and Justyna Robinson, editors, Current Methods in Historical Semantics. De Gruyter.
Smolensky, Paul and Géraldine Legendre. 2006. The Harmonic Mind: From Neural Computation to Optimality-Theoretic Grammar, volume 1. MIT Press, Cambridge, MA.
Sutskever, Ilya, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems 27.
Tabor, Whitney. 1994. Syntactic Innovation: A Connectionist Model. Ph.D. thesis, Stanford.
acknowledgments :This Last Words contribution covers part of my 2015 ACL Presidential Address. Thanks to Paola Merlo for suggesting writing it up for publication.
last words :computational linguistics and deep learning :Christopher D. Manning∗ Stanford University
Accompanying ICML 2015 in Lille, France, there was another, almost as big, event: the 2015 Deep Learning Workshop. The workshop ended with a panel discussion, and at it, Neil Lawrence said, “NLP is kind of like a rabbit in the headlights of the Deep Learning machine, waiting to be flattened.” Now that is a remark that the computational linguistics community has to take seriously! Is it the end of the road for us? Where are these predictions of steamrollering coming from? At the June 2015 opening of the Facebook AI Research Lab in Paris, its director Yann LeCun said: “The next big step for Deep Learning is natural language understanding, which aims to give machines the power to understand not just individual words but entire sentences and paragraphs.”1 In a November 2014 Reddit AMA (Ask Me Anything), Geoff Hinton said, “I think that the most exciting areas over the next five years will be really understanding text and videos. I will be disappointed if in five years’ time we do not have something that can watch a YouTube video and tell a story about what happened. In a few years time we will put [Deep Learning] on a chip that fits into someone’s ear and have an English-decoding chip that’s just like a real Babel fish.”2 And Yoshua Bengio, the third giant of modern Deep Learning, has also increasingly oriented his group’s research toward language, including recent exciting new developments in neural machine translation systems. It’s not just Deep Learning researchers. When leading machine learning researcher Michael Jordan was asked at a September 2014 AMA, “If you got a billion dollars to spend on a huge research project that you get to lead, what would you like to do?”, he answered: “I’d use the billion dollars to build a NASA-size program focusing on natural language processing, in all of its glory (semantics, pragmatics, etc.).” He went on: “Intellectually I think that NLP is fascinating, allowing us to focus on highly structured inference problems, on issues that go to the core of ‘what is thought’ but remain eminently practical, and on a technology ∗ Departments of Computer Science and Linguistics, Stanford University, Stanford CA 94305-9020, U.S.A. E-mail: manning@cs.stanford.edu. 1 http://www.wired.com/2014/12/fb/. 2 https://www.reddit.com/r/MachineLearning/comments/2lmo0l/ama_geoffrey_hinton. doi:10.1162/COLI a 00239 © 2015 Association for Computational Linguistics that surely would make the world a better place.” Well, that sounds very nice! So, should computational linguistics researchers be afraid? I’d argue, no. To return to the Hitchhiker’s Guide to the Galaxy theme that Geoff Hinton introduced, we need to turn the book over and look at the back cover, which says in large, friendly letters: “Don’t panic.” 2 the success of deep learning :There is no doubt that Deep Learning has ushered in amazing technological advances in the last few years. I won’t give an extensive rundown of successes, but here is one example. 
A recent Google blog post told about Neon, the new transcription system for Google Voice.3 After admitting that in the past Google Voice voicemail transcriptions often weren’t fully intelligible, the post explained the development of Neon, an improved voicemail system that delivers more accurate transcriptions, like this: “Using a (deep breath) long short-term memory deep recurrent neural network (whew!), we cut our transcription errors by 49%.” Do we not all dream of developing a new approach to a problem which halves the error rate of the previously state-of-the-art system? 3 why computational linguists need not worry :Michael Jordan, in his AMA, gave two reasons why he wasn’t convinced that Deep Learning would solve NLP: “Although current deep learning research tends to claim to encompass NLP, I’m (1) much less convinced about the strength of the results, compared to the results in, say, vision; (2) much less convinced in the case of NLP than, say, vision, the way to go is to couple huge amounts of data with black-box learning architectures.”4 Jordan is certainly right about his first point: So far, problems in higher-level language processing have not seen the dramatic error rate reductions from deep learning that have been seen in speech recognition and in object recognition in vision. Although there have been gains from deep learning approaches, they have been more modest than sudden 25% or 50% error reductions. It could easily turn out that this remains the case. The really dramatic gains may only have been possible on true signal processing tasks. On the other hand, I’m much less convinced by his second argument. However, I do have my own two reasons why NLP need not worry about deep learning: (1) It just has to be wonderful for our field for the smartest and most influential people in machine learning to be saying that NLP is the problem area to focus on; and (2) Our field is the domain science of language technology; it’s not about the best method of machine learning—the central issue remains the domain problems. The domain problems will not go away. Joseph Reisinger wrote on his blog: “I get pitched regularly by startups doing ‘generic machine learning’ which is, in all honesty, a pretty ridiculous idea. Machine learning is not undifferentiated heavy lifting, it’s not commoditizable like EC2, and closer to design than coding.”5 From this perspective, it is people in linguistics, people in NLP, who are the designers. Recently at ACL conferences, there has been an over-focus on numbers, on beating the state of the art. Call it playing the Kaggle game. More of the field’s effort should go into problems, approaches, and architectures. Recently, one thing that I’ve been devoting a lot of time to—together with many other 3 http://googleblog.blogspot.com/2015/07/neon-prescription-or-rather-new.html. 4 http://www.reddit.com/r/MachineLearning/comments/2fxi6v/ama_michael_i_jordan. 5 http://thedatamines.com/post/13177389506/why-generic-machine-learning-fails. collaborators—is the development of Universal Dependencies.6 The goal is to develop a common syntactic dependency representation and POS and feature label sets that can be used with reasonable linguistic fidelity and human usability across all human languages. That’s just one example; there are many other design efforts underway in our field. One other current example is the idea of Abstract Meaning Representation.7 4 deep learning of language :Where has Deep Learning helped NLP? 
The gains so far have not so much been from true Deep Learning (use of a hierarchy of more abstract representations to promote generalization) as from the use of distributed word representations—through the use of real-valued vector representations of words and concepts. Having a dense, multidimensional representation of similarity between all words is incredibly useful in NLP, but not only in NLP. Indeed, the importance of distributed representations evokes the “Parallel Distributed Processing” mantra of the earlier surge of neural network methods, which had a much more cognitive-science directed focus (Rumelhart and McClelland 1986). It can better explain human-like generalization, but also, from an engineering perspective, the use of small dimensionality and dense vectors for words allows us to model large contexts, leading to greatly improved language models. Especially seen from this new perspective, the exponentially greater sparsity that comes from increasing the order of traditional word n-gram models seems conceptually bankrupt. I do believe that the idea of deep models will also prove useful. The sharing that occurs within deep representations can theoretically give an exponential representational advantage, and, in practice, offers improved learning systems. The general approach to building Deep Learning systems is compelling and powerful: The researcher defines a model architecture and a top-level loss function and then both the parameters and the representations of the model self-organize so as to minimize this loss, in an end-to-end learning framework. We are starting to see the power of such deep systems in recent work in neural machine translation (Sutskever, Vinyals, and Le 2014; Luong et al. 2015). Finally, I have been an advocate for focusing more on compositionality in models, for language in particular, and for artificial intelligence in general. Intelligence requires being able to understand bigger things from knowing about smaller parts. In particular for language, understanding novel and complex sentences crucially depends on being able to construct their meaning compositionally from smaller parts—words and multiword expressions—of which they are constituted. Recently, there have been many, many papers showing how systems can be improved by using distributed word representations from “deep learning” approaches, such as word2vec (Mikolov et al. 2013) or GloVe (Pennington, Socher, and Manning 2014). However, this is not actually building Deep Learning models, and I hope in the future that more people focus on the strongly linguistic question of whether we can build meaning composition functions in Deep Learning systems. 5 scientific questions that connect computational linguistics and deep learning :I encourage people to not get into the rut of doing no more than using word vectors to make performance go up a couple of percent. Even more strongly, I would like to 6 http://universaldependencies.github.io/docs/. 7 http://amr.isi.edu. suggest that we might return instead to some of the interesting linguistic and cognitive issues that motivated noncategorical representations and neural network approaches. One example of noncategorical phenomena in language is the POS of words in the gerund V-ing form, such as driving. This form is classically described as ambiguous between a verbal form and a nominal gerund. 
In fact, however, the situation is more complex, as V-ing forms can appear in any of the four core categories of Chomsky (1970): V + − N + Adjective: an unassuming man Noun: the opening of the store − Verb: she is eating dinner Preposition: concerning your point What is even more interesting is that there is evidence that there is not just an ambiguity but mixed noun–verb status. For example, a classic linguistic text for being a noun is appearing with a determiner, while a classic linguistic test for being a verb is taking a direct object. However, it is well known that the gerund nominalization can do both of these things at once: (1) The not observing this rule is that which the world has blamed in our satorist. (Dryden, Essay Dramatick Poesy, 1684, page 310) (2) The only mental provision she was making for the evening of life, was the collecting and transcribing all the riddles of every sort that she could meet with. (Jane Austen, Emma, 1816) (3) The difficulty is in the getting the gold into Erewhon. (Sam Butler, Erewhon Revisited, 1902) This is oftentimes analyzed by some sort of category-change operation within the levels of a phrase-structure tree, but there is good evidence that this is in fact a case of noncategorical behavior in language. Indeed, this construction was used early on as an example of a “squish” by Ross (1972). Diachronically, the V-ing form shows a history of increasing verbalization, but in many periods it shows a notably non-discrete status. For example, we find clearly graded judgments in this domain: (4) Tom’s winning the election was a big upset. (5) ?This teasing John all the time has got to stop. (6) ?There is no marking exams on Fridays. (7) *The cessation hostilities was unexpected. Various combinations of determiner and verb object do not sound so good, but still much better than trying to put a direct object after a nominalization via a derivational morpheme such as -ation. Houston (1985, page 320) shows that assignment of V-ing forms to a discrete part-of-speech classification is less successful (in a predictive sense) than a continuum in explaining the spoken alternation between -ing vs. -in’, suggesting that “grammatical categories exist along a continuum which does not exhibit sharp boundaries between the categories.” A different, interesting example was explored by one of my graduate school classmates, Whitney Tabor. Tabor (1994) looked at the use of kind of and sort of, an example that I then used in the introductory chapter of my 1999 textbook (Manning and Schütze 1999). The nouns kind or sort can head an NP or be used as a hedging adverbial modifier: (8) [That kind [of knife]] isn’t used much. (9) We are [kind of] hungry. The interesting thing is that there is a path of reanalysis through ambiguous forms, such as the following pair, which suggests how one form emerged from the other. (10) [a [kind [of dense rock]]] (11) [a [[kind of] dense] rock] Tabor (1994) discusses how Old English has kind but few or no uses of kind of. Beginning in Middle English, ambiguous contexts, which provide a breeding ground for the reanalysis, start to appear (the 1570 example in Example (13)), and then, later, examples that are unambiguously the hedging modifier appear (the 1830 example in Example (14)): (12) A nette sent in to the see, and of alle kind of fishis gedrynge (Wyclif, 1382) (13) Their finest and best, is a kind of course red cloth (True Report, 1570) (14) I was kind of provoked at the way you came up (Mass. Spy, 1830) This is history not synchrony. 
Presumably kids today learn the softener use of kind/sort of first. Did the reader notice an example of it in the quote in my first paragraph? (15) NLP is kind of like a rabbit in the headlights of the deep learning machine (Neil Lawrence, DL workshop panel, 2015) Whitney Tabor modeled this evolution with a small, but already deep, recurrent neural network—one with two hidden layers. He did that in 1994, taking advantage of the opportunity to work with Dave Rumelhart at Stanford. Just recently, there has started to be some new work harnessing the power of distributed representations for modeling and explaining linguistic variation and change. Sagi, Kaufmann, and Clark (2011)—actually using the more traditional method of Latent Semantic Analysis to generate distributed word representations—show how distributed representations can capture a semantic change: the broadening and narrowing of reference over time. They look at examples such as how in Old English deer was any animal, whereas in Middle and Modern English it applies to one clear animal family. The words dog and hound have swapped: In Middle English, hound was used for any kind of canine, while now it is used for a particular sub-kind, whereas the reverse is true for dog. Kulkarni et al. (2015) use neural word embeddings to model the shift in meaning of words such as gay over the last century (exploiting the online Google Books Ngrams corpus). At a recent ACL workshop, Kim et al. (2014) use a similar approach—using word2vec—to look at recent changes in the meaning of words. For example, in Figure 1, they show how around 2000, the meaning of the word cell changed rapidly from being close in meaning to closet and dungeon to being close in meaning to phone and cordless. The meaning of a word in this context is the average over the meanings of all senses of a word, weighted by their frequency of use. These more scientific uses of distributed representations and Deep Learning for modeling phenomena characterize the previous boom in neural networks. There has been a bit of a kerfuffle online lately about citing and crediting work in Deep Learning, and from that perspective, it seems to me that the two people who scarcely get mentioned any more are Dave Rumelhart and Jay McClelland. Starting from the Parallel Distributed Processing Research Group in San Diego, their research program was aimed at a clearly more scientific and cognitive study of neural networks. Now, there are indeed some good questions about the adequacy of neural network approaches for rule-governed linguistic behavior. Old timers in our community should remember that arguing against the adequacy of neural networks for rule-governed linguistic behavior was the foundation for the rise to fame of Steve Pinker—and the foundation of the career of about six of his graduate students. It would take too much space to go through the issues here, but in the end, I think it was a productive debate. It led to a vast amount of work by Paul Smolensky on how basically categorical systems can emerge and be represented in a neural substrate (Smolensky and Legendre 2006). Indeed, Paul Smolensky arguably went too far down the rabbit hole, devoting a large part of his career to developing a new categorical model of phonology, Optimality Theory (Prince and Smolensky 2004). There is a rich body of earlier scientific work that has been neglected. 
It would be good to return some emphasis within NLP to cognitive and scientific investigation of language rather than almost exclusively using an engineering model of research. Overall, I think we should feel excited and glad to live in a time when Natural Language Processing is seen as so central to both the further development of machine learning and industry application problems. The future is bright. However, I would encourage everyone to think about problems, architectures, cognitive science, and the details of human language, how it is learned, processed, and how it changes, rather than just chasing state-of-the-art numbers on a benchmark task. Deep Learning waves have lapped at the shores of computational linguistics for several years now, but 2015 seems like the year when the full force of the tsunami hit the major Natural Language Processing (NLP) conferences. However, some pundits are predicting that the final damage will be even worse. Accompanying ICML 2015 in Lille, France, there was another, almost as big, event: the 2015 Deep Learning Workshop. The workshop ended with a panel discussion, and at it, Neil Lawrence said, “NLP is kind of like a rabbit in the headlights of the Deep Learning machine, waiting to be flattened.” Now that is a remark that the computational linguistics community has to take seriously! Is it the end of the road for us? Where are these predictions of steamrollering coming from? At the June 2015 opening of the Facebook AI Research Lab in Paris, its director Yann LeCun said: “The next big step for Deep Learning is natural language understanding, which aims to give machines the power to understand not just individual words but entire sentences and paragraphs.”1 In a November 2014 Reddit AMA (Ask Me Anything), Geoff Hinton said, “I think that the most exciting areas over the next five years will be really understanding text and videos. I will be disappointed if in five years’ time we do not have something that can watch a YouTube video and tell a story about what happened. In a few years time we will put [Deep Learning] on a chip that fits into someone’s ear and have an English-decoding chip that’s just like a real Babel fish.”2 And Yoshua Bengio, the third giant of modern Deep Learning, has also increasingly oriented his group’s research toward language, including recent exciting new developments in neural machine translation systems. It’s not just Deep Learning researchers. When leading machine learning researcher Michael Jordan was asked at a September 2014 AMA, “If you got a billion dollars to spend on a huge research project that you get to lead, what would you like to do?”, he answered: “I’d use the billion dollars to build a NASA-size program focusing on natural language processing, in all of its glory (semantics, pragmatics, etc.).” He went on: “Intellectually I think that NLP is fascinating, allowing us to focus on highly structured inference problems, on issues that go to the core of ‘what is thought’ but remain eminently practical, and on a technology [{""affiliations"": [], ""name"": ""Christopher D. Manning""}] SP:004e03879ab56d9a4ae9ba5b7644f5c0875d2453 [{""authors"": [""Chomsky"", ""Noam.""], ""title"": ""Remarks on nominalization"", ""venue"": ""R. Jacobs and P. Rosenbaum, editors, Readings in English Transformational Grammar. Ginn, Waltham, MA, pages 184\u2013221."", ""year"": 1970}, {""authors"": [""Houston"", ""Ann Celeste.""], ""title"": ""Continuity and Change in English Morphology: The Variable (ing)"", ""venue"": ""Ph.D. 
thesis, University of Pennsylvania."", ""year"": 1985}, {""authors"": [""Kim"", ""Yoon"", ""Yi-I Chiu"", ""Kentaro Hanaki"", ""Darshan Hegde"", ""Slav Petrov.""], ""title"": ""Temporal analysis of language through neural language models"", ""venue"": ""Proceedings of the ACL 2014 Workshop on Language"", ""year"": 2014}, {""authors"": [""Kulkarni"", ""Vivek"", ""Rami Al-Rfou"", ""Bryan Perozzi"", ""Steven Skiena.""], ""title"": ""Statistically significant detection of linguistic change"", ""venue"": ""Proceedings of the 24th International World Wide Web Conference"", ""year"": 2015}, {""authors"": [""Luong"", ""Minh-Thang"", ""Ilya Sutskever"", ""Quoc V. Le"", ""Oriol Vinyals"", ""Wojciech Zaremba.""], ""title"": ""Addressing the rare word problem in neural machine translation"", ""venue"": ""Proceedings of the 53rd Annual Meeting of the Association"", ""year"": 2015}, {""authors"": [""Manning"", ""Christopher D."", ""Hinrich Sch\u00fctze.""], ""title"": ""Foundations of Statistical Natural Language Processing"", ""venue"": ""MIT Press, Cambridge, MA."", ""year"": 1999}, {""authors"": [""Mikolov"", ""Tomas"", ""Ilya Sutskever"", ""Kai Chen"", ""Greg S. Corrado"", ""Jeffrey Dean""], ""title"": ""Distributed representations of words"", ""year"": 2013}, {""authors"": [""Pennington"", ""Jeffrey"", ""Richard Socher"", ""Christopher D. Manning.""], ""title"": ""GloVe: Global vectors for word representation"", ""venue"": ""Proceedings of the 2014 Conference on Empirical Methods in Natural Language"", ""year"": 2014}, {""authors"": [""Prince"", ""Alan"", ""Paul Smolensky.""], ""title"": ""Optimality Theory: Constraint Interaction in Generative Grammar"", ""venue"": ""Blackwell, Oxford."", ""year"": 2004}, {""authors"": [""Ross"", ""John R.""], ""title"": ""The category squish: Endstation Hauptwort"", ""venue"": ""Papers from the Eighth Regional Meeting, pages 316\u2013328, Chicago."", ""year"": 1972}, {""authors"": [""Sagi"", ""Eyal"", ""Stefan Kaufmann"", ""Brady Clark.""], ""title"": ""Tracing semantic change with latent semantic analysis"", ""venue"": ""Kathryn Allen and Justyna Robinson, editors, Current Methods in Historical Semantics. De Gruyter"", ""year"": 2011}, {""authors"": [""Smolensky"", ""Paul"", ""G\u00e9raldine Legendre.""], ""title"": ""The Harmonic Mind: From Neural Computation to Optimality-Theoretic Grammar, volume 1"", ""venue"": ""MIT Press, Cambridge, MA."", ""year"": 2006}, {""authors"": [""Sutskever"", ""Ilya"", ""Oriol Vinyals"", ""Quoc V. Le.""], ""title"": ""Sequence to sequence learning with neural networks"", ""venue"": ""Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in"", ""year"": 2014}, {""authors"": [""Tabor"", ""Whitney.""], ""title"": ""Syntactic Innovation: A Connectionist Model"", ""venue"": ""Ph.D. thesis, Stanford. 707"", ""year"": 1994}] acknowledgments :This Last Words contribution covers part of my 2015 ACL Presidential Address. Thanks to Paola Merlo for suggesting writing it up for publication. last words : computational linguistics and : deep learning :Christopher D. Manning∗ Stanford University","4 deep learning of language :Where has Deep Learning helped NLP? The gains so far have not so much been from true Deep Learning (use of a hierarchy of more abstract representations to promote generalization) as from the use of distributed word representations—through the use of real-valued vector representations of words and concepts. 
Having a dense, multidimensional representation of similarity between all words is incredibly useful in NLP, but not only in NLP. Indeed, the importance of distributed representations evokes the "Parallel Distributed Processing" mantra of the earlier surge of neural network methods, which had a much more cognitive-science directed focus (Rumelhart and McClelland 1986). It can better explain human-like generalization, but also, from an engineering perspective, the use of small dimensionality and dense vectors for words allows us to model large contexts, leading to greatly improved language models. Especially seen from this new perspective, the exponentially greater sparsity that comes from increasing the order of traditional word n-gram models seems conceptually bankrupt.
I do believe that the idea of deep models will also prove useful. The sharing that occurs within deep representations can theoretically give an exponential representational advantage and, in practice, offers improved learning systems. The general approach to building Deep Learning systems is compelling and powerful: The researcher defines a model architecture and a top-level loss function, and then both the parameters and the representations of the model self-organize so as to minimize this loss, in an end-to-end learning framework. We are starting to see the power of such deep systems in recent work in neural machine translation (Sutskever, Vinyals, and Le 2014; Luong et al. 2015).
Finally, I have been an advocate for focusing more on compositionality in models, for language in particular, and for artificial intelligence in general. Intelligence requires being able to understand bigger things from knowing about smaller parts. In particular for language, understanding novel and complex sentences crucially depends on being able to construct their meaning compositionally from the smaller parts—words and multiword expressions—of which they are constituted. Recently, there have been many, many papers showing how systems can be improved by using distributed word representations from "deep learning" approaches, such as word2vec (Mikolov et al. 2013) or GloVe (Pennington, Socher, and Manning 2014). However, this is not actually building Deep Learning models, and I hope in the future that more people focus on the strongly linguistic question of whether we can build meaning composition functions in Deep Learning systems.
5 scientific questions that connect computational linguistics and deep learning :I encourage people not to get into the rut of doing no more than using word vectors to make performance go up a couple of percent. Even more strongly, I would like to suggest that we might return instead to some of the interesting linguistic and cognitive issues that motivated noncategorical representations and neural network approaches.
One example of noncategorical phenomena in language is the POS of words in the gerund V-ing form, such as driving. This form is classically described as ambiguous between a verbal form and a nominal gerund. In fact, however, the situation is more complex, as V-ing forms can appear in any of the four core categories of Chomsky (1970):

            +V                                 -V
  +N  Adjective: an unassuming man       Noun: the opening of the store
  -N  Verb: she is eating dinner         Preposition: concerning your point

What is even more interesting is that there is evidence of not just an ambiguity but mixed noun–verb status.
For example, a classic linguistic test for being a noun is appearing with a determiner, while a classic linguistic test for being a verb is taking a direct object. However, it is well known that the gerund nominalization can do both of these things at once:
(1) The not observing this rule is that which the world has blamed in our satirist. (Dryden, Essay Dramatick Poesy, 1684, page 310)
(2) The only mental provision she was making for the evening of life, was the collecting and transcribing all the riddles of every sort that she could meet with. (Jane Austen, Emma, 1816)
(3) The difficulty is in the getting the gold into Erewhon. (Sam Butler, Erewhon Revisited, 1902)
This is oftentimes analyzed by some sort of category-change operation within the levels of a phrase-structure tree, but there is good evidence that this is in fact a case of noncategorical behavior in language. Indeed, this construction was used early on as an example of a "squish" by Ross (1972). Diachronically, the V-ing form shows a history of increasing verbalization, but in many periods it shows a notably non-discrete status. For example, we find clearly graded judgments in this domain:
(4) Tom's winning the election was a big upset.
(5) ?This teasing John all the time has got to stop.
(6) ?There is no marking exams on Fridays.
(7) *The cessation hostilities was unexpected.
Various combinations of determiner and verb object do not sound so good, but still much better than trying to put a direct object after a nominalization formed with a derivational morpheme such as -ation. Houston (1985, page 320) shows that assignment of V-ing forms to a discrete part-of-speech classification is less successful (in a predictive sense) than a continuum in explaining the spoken alternation between -ing and -in', suggesting that "grammatical categories exist along a continuum which does not exhibit sharp boundaries between the categories."
A different, interesting example was explored by one of my graduate school classmates, Whitney Tabor. Tabor (1994) looked at the use of kind of and sort of, an example that I then used in the introductory chapter of my 1999 textbook (Manning and Schütze 1999). The nouns kind or sort can head an NP or be used as a hedging adverbial modifier:
(8) [That kind [of knife]] isn't used much.
(9) We are [kind of] hungry.
The interesting thing is that there is a path of reanalysis through ambiguous forms, such as the following pair, which suggests how one form emerged from the other:
(10) [a [kind [of dense rock]]]
(11) [a [[kind of] dense] rock]
Tabor (1994) discusses how Old English has kind but few or no uses of kind of. Beginning in Middle English, ambiguous contexts, which provide a breeding ground for the reanalysis, start to appear (the 1570 example in (13)), and then, later, examples that are unambiguously the hedging modifier appear (the 1830 example in (14)):
(12) A nette sent in to the see, and of alle kind of fishis gedrynge (Wyclif, 1382)
(13) Their finest and best, is a kind of course red cloth (True Report, 1570)
(14) I was kind of provoked at the way you came up (Mass. Spy, 1830)
This is history, not synchrony. Presumably kids today learn the softener use of kind/sort of first. Did the reader notice an example of it in the quote in my first paragraph?
(15) NLP is kind of like a rabbit in the headlights of the deep learning machine (Neil Lawrence, DL workshop panel, 2015)
Whitney Tabor modeled this evolution with a small, but already deep, recurrent neural network—one with two hidden layers.
He did that in 1994, taking advantage of the opportunity to work with Dave Rumelhart at Stanford.
Just recently, there has started to be some new work harnessing the power of distributed representations for modeling and explaining linguistic variation and change. Sagi, Kaufmann, and Clark (2011)—actually using the more traditional method of Latent Semantic Analysis to generate distributed word representations—show how distributed representations can capture a semantic change: the broadening and narrowing of reference over time. They look at examples such as how in Old English deer was any animal, whereas in Middle and Modern English it applies to one clear animal family. The words dog and hound have swapped: In Middle English, hound was used for any kind of canine, while now it is used for a particular sub-kind, whereas the reverse is true for dog. Kulkarni et al. (2015) use neural word embeddings to model the shift in meaning of words such as gay over the last century (exploiting the online Google Books Ngrams corpus). At a recent ACL workshop, Kim et al. (2014) use a similar approach—using word2vec—to look at recent changes in the meaning of words. For example, in Figure 1, they show how around 2000 the meaning of the word cell changed rapidly from being close in meaning to closet and dungeon to being close in meaning to phone and cordless. The meaning of a word in this context is the average over the meanings of all senses of the word, weighted by their frequency of use.
These more scientific uses of distributed representations for modeling phenomena recall the previous boom of neural network methods. There has been a bit of a kerfuffle online lately about citing and crediting work in Deep Learning, and from that perspective, it seems to me that the two people who scarcely get mentioned any more are Dave Rumelhart and Jay McClelland. Starting from the Parallel Distributed Processing Research Group in San Diego, their research program was aimed at a clearly more scientific and cognitive study of neural networks. Now, there are indeed some good questions about the adequacy of neural network approaches for rule-governed linguistic behavior. Old timers in our community should remember that arguing against the adequacy of neural networks for rule-governed linguistic behavior was the foundation for the rise to fame of Steve Pinker—and the foundation of the career of about six of his graduate students. It would take too much space to go through the issues here, but in the end, I think it was a productive debate. It led to a vast amount of work by Paul Smolensky on how basically categorical systems can emerge and be represented in a neural substrate (Smolensky and Legendre 2006). Indeed, Paul Smolensky arguably went too far down the rabbit hole, devoting a large part of his career to developing a new categorical model of phonology, Optimality Theory (Prince and Smolensky 2004). There is a rich body of earlier scientific work that has been neglected. It would be good to return some emphasis within NLP to cognitive and scientific investigation of language rather than almost exclusively using an engineering model of research.
Overall, I think we should feel excited and glad to live in a time when Natural Language Processing is seen as so central to both the further development of machine learning and industry application problems. The future is bright.
However, I would encourage everyone to think about problems, architectures, cognitive science, and the details of human language, how it is learned, processed, and how it changes, rather than just chasing state-of-the-art numbers on a benchmark task.
acknowledgments :This Last Words contribution covers part of my 2015 ACL Presidential Address. Thanks to Paola Merlo for suggesting writing it up for publication.
references :
Chomsky, Noam. 1970. Remarks on nominalization. In R. Jacobs and P. Rosenbaum, editors, Readings in English Transformational Grammar. Ginn, Waltham, MA, pages 184–221.
Houston, Ann Celeste. 1985. Continuity and Change in English Morphology: The Variable (ing). Ph.D. thesis, University of Pennsylvania.
Kim, Yoon, Yi-I Chiu, Kentaro Hanaki, Darshan Hegde, and Slav Petrov. 2014. Temporal analysis of language through neural language models. In Proceedings of the ACL 2014 Workshop on Language Technologies and Computational Social Science.
Kulkarni, Vivek, Rami Al-Rfou, Bryan Perozzi, and Steven Skiena. 2015. Statistically significant detection of linguistic change. In Proceedings of the 24th International World Wide Web Conference.
Luong, Minh-Thang, Ilya Sutskever, Quoc V. Le, Oriol Vinyals, and Wojciech Zaremba. 2015. Addressing the rare word problem in neural machine translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics.
Manning, Christopher D. and Hinrich Schütze. 1999. Foundations of Statistical Natural Language Processing. MIT Press, Cambridge, MA.
Mikolov, Tomas, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems.
Pennington, Jeffrey, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing.
Prince, Alan and Paul Smolensky. 2004. Optimality Theory: Constraint Interaction in Generative Grammar. Blackwell, Oxford.
Ross, John R. 1972. The category squish: Endstation Hauptwort. In Papers from the Eighth Regional Meeting, pages 316–328, Chicago.
Sagi, Eyal, Stefan Kaufmann, and Brady Clark. 2011. Tracing semantic change with latent semantic analysis. In Kathryn Allen and Justyna Robinson, editors, Current Methods in Historical Semantics. De Gruyter.
Smolensky, Paul and Géraldine Legendre. 2006. The Harmonic Mind: From Neural Computation to Optimality-Theoretic Grammar, volume 1. MIT Press, Cambridge, MA.
Sutskever, Ilya, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems.
Tabor, Whitney. 1994. Syntactic Innovation: A Connectionist Model. Ph.D. thesis, Stanford University.
1 introduction :A sentiment lexicon is regarded as the most valuable resource for sentiment analysis (Pang and Lee 2008), and it lays the groundwork for much sentiment analysis research, for example, sentiment classification (Yu and Hatzivassiloglou 2003; Kim and Hovy 2004) and opinion summarization (Hu and Liu 2004). To avoid manually annotating sentiment words, automatically learning sentiment lexicons has attracted considerable attention in the sentiment analysis community. Existing work determines word sentiment polarities either by statistical information (e.g., the co-occurrence of words with predefined sentiment seed words) derived from a large corpus (Riloff, Wiebe, and Wilson 2003; Hu and Liu 2006) or by word semantic information (e.g., synonym relations) found in existing human-created resources such as WordNet (Takamura, Inui, and Okumura 2005; Rao and Ravichandran 2009). However, current work mainly focuses on English sentiment lexicon generation or expansion, while sentiment lexicon learning for other languages has not been well studied.
In this article, we address the issue of cross-lingual sentiment lexicon learning, which aims to generate sentiment lexicons for a non-English language (hereafter referred to as "the target language") with the help of available English sentiment lexicons. The underlying motivation of this task is to leverage the existing English sentiment lexicons and substantial linguistic resources to label the sentiment polarities of the words in the target language. To this end, we need an approach to transferring the sentiment information from English words to the words in the target language. The few existing approaches first build word relations between English and the target language. Then, based on the word relations and English sentiment seed words, they determine the sentiment polarities of the words in the target language. In these two steps, relation-building plays a fundamental role because it is responsible for the transfer of sentiment information between the two languages. Two approaches are often used in the literature to connect the words in different languages. One is based on translation entries in cross-lingual dictionaries (Hassan et al. 2011). The other relies on a machine translation (MT) engine as a black box to translate the sentiment words in English into the target language (Steinberger et al. 2011). As observed in Duh, Fujino, and Nagata (2011) and Mihalcea, Banea, and Wiebe (2007), both approaches tend to use a small vocabulary to translate natural language, which leads to a low coverage of the generated sentiment lexicons for the target language.
To solve this problem, we propose a generic approach to the task of cross-lingual sentiment lexicon learning. Specifically, we model the task with a bilingual word graph, which is composed of two intra-language subgraphs and an inter-language subgraph. The intra-language subgraphs are used to model the semantic relations among the words in the same language. When building them, we incorporate both synonym and antonym word relations in a novel manner, represented by positive and negative sign weights in the subgraphs, respectively.
These two intra-language subgraphs are then connected by the inter-language subgraph. We propose Bilingual word graph Label Propagation (BLP), which simultaneously takes the inter-language relations and the intra-language relations into account in an iterative way. Moreover, we leverage the word alignment information derived from a parallel corpus to build the inter-language relations. We connect two words from different languages if they are aligned to each other in a parallel sentence pair. Taking advantage of a large parallel corpus, this approach significantly improves the coverage of the generated sentiment lexicon. The experimental results on Chinese sentiment lexicon learning show the effectiveness of the proposed approach in terms of both precision and recall. We further evaluate the impact of the learned sentiment lexicon on sentence-level sentiment classification. When the words in the learned sentiment lexicon are used as features for sentiment classification in the target language, the classifier achieves high performance.
We make the following contributions in this article.
1. We present a generic approach to automatically learning sentiment lexicons for the target language from an available sentiment lexicon in English, and we formalize the cross-lingual sentiment learning task on a bilingual word graph.
2. We build a bilingual word graph by using synonym and antonym word relations and propose a bilingual word graph label propagation approach, which effectively leverages the inter-language relations and both types (synonym and antonym) of the intra-language relations in sentiment lexicon learning.
3. We leverage the word alignment information derived from a large number of parallel sentences in sentiment lexicon learning. We build the inter-language relations in the bilingual word graph upon word alignment and achieve significant results.
2 related work :In general, the work on sentiment lexicon learning focuses mainly on English and can be categorized into co-occurrence-based approaches (Hatzivassiloglou and McKeown 1997; Riloff, Wiebe, and Wilson 2003; Qiu et al. 2011) and semantic-based approaches (Mihalcea, Banea, and Wiebe 2007; Takamura, Inui, and Okumura 2005; Kim and Hovy 2004). The co-occurrence-based approaches determine the sentiment polarity of a given word according to statistical information, like the co-occurrence of the word with predefined sentiment seed words or with product features. The statistical information is mainly derived from certain corpora. One of the earliest works, by Hatzivassiloglou and McKeown (1997), assumes that conjunction words convey the polarity relation of the two words they connect. For example, the conjunction word and tends to link two words with the same polarity, whereas the conjunction word but is likely to link two words with opposite polarities. Their approach only considers adjectives, not nouns or verbs, and it is unable to extract adjectives that are not conjoined by conjunctions. Riloff, Wiebe, and Wilson (2003) define several pattern templates and extract sentiment words by two bootstrapping approaches. Turney and Littman (2003) calculate the pointwise mutual information (PMI) of a given word with positive and negative sets of sentiment words. The sentiment polarity of the word is determined by the average PMI values with the positive and negative sets. To obtain PMI, they issue queries (consisting of the given word and a sentiment word) to a search engine. The number of hits and the position (whether the given word is near the sentiment word) are used to estimate the association of the given word with the sentiment word.
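To make the PMI-based scoring concrete, here is a toy sketch in the spirit of Turney and Littman (2003). One deliberate simplification: they estimated association from search-engine hit counts, whereas this sketch counts sentence-level co-occurrence in a tiny invented corpus; the seed sets and sentences are illustrative only.

```python
from collections import Counter
from itertools import combinations
import math

POS_SEEDS = {"good", "excellent"}    # toy positive seed set
NEG_SEEDS = {"bad", "poor"}          # toy negative seed set

corpus = [
    "the food was good and the service excellent".split(),
    "a poor plot and bad acting".split(),
    "good value despite the bad weather".split(),
]

word_count = Counter()               # sentence frequency of each word
pair_count = Counter()               # sentence co-occurrence of word pairs
for sent in corpus:
    word_count.update(set(sent))
    pair_count.update(frozenset(p) for p in combinations(set(sent), 2))

def pmi(w, seed, n=len(corpus)):
    joint = pair_count[frozenset((w, seed))]
    if joint == 0 or word_count[w] == 0 or word_count[seed] == 0:
        return 0.0                   # back off when a pair is unseen
    return math.log((joint * n) / (word_count[w] * word_count[seed]))

def polarity(w):
    # Positive score: the word leans positive; negative: it leans negative.
    return (sum(pmi(w, s) for s in POS_SEEDS)
            - sum(pmi(w, s) for s in NEG_SEEDS))

print(polarity("service"), polarity("acting"))
```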
Hu and Liu (2004) study sentiment word learning on customer reviews, assuming that sentiment words tend to be correlated with product features. The frequent nouns and noun phrases are treated as product features. They then extract adjectives as sentiment words from those sentences that contain one or more product features. This approach may work on a product review corpus, where a product feature may appear frequently, but for other corpora, like news articles, it may not be effective. Qiu et al. (2011) combine sentiment lexicon learning and opinion target extraction. A double propagation approach is proposed to learn sentiment words and extract opinion targets simultaneously, based on eight manually defined rules.
The semantic-based approaches determine the sentiment polarity of a given word according to word semantic relations, like the synonyms of sentiment seed words. The word semantic relations are usually obtained from dictionaries, for example, WordNet (http://wordnet.princeton.edu/). Kim and Hovy (2004) assume that the synonyms of a positive (negative) word are positive (negative) and its antonyms are negative (positive). Initializing with a set of sentiment words, they expand sentiment lexicons based on these two kinds of word relations. Kamps et al. (2004) build a synonym graph according to the synonym relations (synsets) derived from WordNet. The sentiment polarity of a word is calculated by its shortest paths to the two sentiment words good and bad. However, the shortest path cannot precisely describe the sentiment orientation, considering that there are only five steps between the word good and the word bad in WordNet (Hassan et al. 2011). Takamura, Inui, and Okumura (2005) construct a word graph with the glosses of WordNet. Words are connected if one word appears in the gloss of another. The word sentiment polarity is determined by the weights of its connections on the word graph. Based on WordNet, Rao and Ravichandran (2009) exploit several graph-based semi-supervised learning methods like Mincuts and Label Propagation. The word polarity orientations are induced by initializing some sentiment seed words in the WordNet graph. Esuli and Sebastiani (2006, 2007) and Baccianella, Esuli, and Sebastiani (2010) treat sentiment word learning as a machine learning problem, that is, classifying the polarity orientations of the words in WordNet. They select seven positive words and seven negative words and expand them through the see-also and antonym relations in WordNet. These expanded words are then used for training. They train a ternary classifier to predict the sentiment polarities of all the words in WordNet and use the glosses (textual definitions of the words in WordNet) as the classification features. The generated sentiment lexicon is the well-known SentiWordNet (http://sentiwordnet.isti.cnr.it/).
The work on cross-lingual sentiment lexicon learning is still at an early stage and can be categorized into two types, according to how the words in two languages are bridged. Mihalcea, Banea, and Wiebe (2007) generate a sentiment lexicon for Romanian by directly translating the English sentiment words into Romanian through bilingual English-Romanian dictionaries. When confronting multiword translations, they translate the multiwords word by word; the validated translations must then occur at least three times on the Web.
The approach proposed by Hassan et al. (2011) learns sentiment words based on the English WordNet and WordNets in the target languages (e.g., Hindi and Arabic). Cross-lingual dictionaries are used to connect the words in the two languages, and the polarity of a given word is determined by the average hitting time from the word to the English sentiment word set. These approaches connect words in two languages based on cross-lingual dictionaries. Their main concern is the effect of morphological inflection (i.e., a word may be mapped to multiple words in cross-lingual dictionaries). For example, one single English word typically has four Spanish or Italian word forms (two each for gender and for number) and many Russian word forms (due to gender, number, and case distinctions) (Steinberger et al. 2011). Usually, this approach requires an additional process to disambiguate the sentiment polarities of all the morphological variants.
To improve sentiment classification for the target language, Banea, Mihalcea, and Wiebe (2010) translate the English sentiment lexicon into the target language using Google Translator (http://translate.google.com/). Similarly, Google Translator is used by Steinberger et al. (2011). They manually produce two high-level gold-standard sentiment lexicons for two languages (e.g., English and Spanish) and then translate them into a third language (e.g., Italian) via Google Translator. They believe that those words in the third language that appear in both translation lists are likely to be sentiment words. These approaches connect the words in two languages based on MT engines. Their main concern is the low overlap between the vocabulary of natural documents and the vocabulary of documents translated by MT engines (Duh, Fujino, and Nagata 2011; Meng et al. 2012a). This shortcoming of MT-based approaches inevitably leads to low coverage.
Our task resembles the task of cross-lingual sentiment classification, as in Wan (2009), Lu et al. (2011), and Meng et al. (2012a), which classifies the sentiment polarities of product reviews. Generally, these studies use semi-supervised learning approaches and regard translations of labeled English sentiment reviews as the training data. The terms in each review are leveraged as features for training, which has proven to be effective in sentiment classification (Pang and Lee 2008). We can regard the task of sentiment lexicon learning as word-level sentiment classification. However, for word-level sentiment classification, it is not straightforward to extract features for a single word. Without sufficient features, it is difficult for these approaches to perform well. Another line of cross-lingual sentiment classification uses Latent Dirichlet Allocation (LDA) (Blei, Ng, and Jordan 2003) or its variants, like Boyd-Graber and Resnik (2010) or He, Alani, and Zhou (2010). These studies assume that each review is a mixture of sentiments and each sentiment is a probability distribution over words. They then apply the LDA-like approach to model the sentiment polarity of each review. Nonetheless, this assumption may not be applicable in sentiment lexicon learning, because a single word can be regarded as the minimal semantic unit, and it is difficult, if not impossible, to infer latent topics from a single word.
Note also that, unlike the sentiment classification of product reviews, where the instances are normally independent, the words in sentiment lexicon learning are highly related to each other, through relations like synonymy and antonymy. Through these relations, the words naturally form a word graph. Thus we use a graph-based learning approach in sentiment lexicon learning. In the next section, we introduce our proposed graph-based cross-lingual sentiment lexicon learning.
3 cross-lingual sentiment lexicon learning :In this work, we model the task of cross-lingual sentiment lexicon learning with a bilingual word graph, where (1) the words in the two languages are represented by the nodes of two intra-language subgraphs, respectively; (2) the synonym and antonym word relations within each language are represented by positive and negative sign weights in the corresponding intra-language subgraphs; and (3) the two intra-language subgraphs are connected by an inter-language subgraph. Mathematically, we build a graph $G = (X_E \cup X_T, W_E \cup \tilde{W}_E \cup W_T \cup \tilde{W}_T \cup W_A)$ that consists of two intra-language subgraphs $G_E = (X_E, W_E \cup \tilde{W}_E)$ and $G_T = (X_T, W_T \cup \tilde{W}_T)$, as shown in Figure 1. These two subgraphs are connected by the inter-language graph $G_R = (X_E \cup X_T, W_A)$. The elements of $W_E$, $W_T$, and $W_A$ are positive real numbers, that is, $W_E, W_T, W_A \in \mathbb{R}^+$, and $\tilde{W}_E, \tilde{W}_T \in \mathbb{R}^-$. Because $G$ incorporates the words in two languages, we call it a Bilingual Word Graph. Specifically, the positive weights, $W_E$ and $W_T$, represent the synonym intra-language relations, and the negative weights, $\tilde{W}_E$ and $\tilde{W}_T$, represent the antonym intra-language relations. The inter-language relations, $W_A$, represent the connections between the words in the two languages. For cross-lingual sentiment lexicon learning, $X_E = \{X_E^L, X_E^U\}$ denotes the labeled and unlabeled words in English, and $X_T$ denotes the unlabeled words in the target language. Given the labels $Y_E^L = \{y_{E1}, \ldots, y_{El}\}$ of the seeds $X_E^L$, we aim to predict the sentiment polarities of the words $X_T$. In the remainder of this section, we present the bilingual word graph construction and the algorithm of bilingual word graph label propagation.
We represent the words in English and in the target language as the nodes of the bilingual word graph. We use the synonym and antonym relations of the words in the same language to build $W$ and $\tilde{W}$ in the intra-language graphs, respectively. In the rest of this section, we focus on how to build the inter-language relation. Intuitively, there are two ways to connect the words in two languages. One is to insert links between words if there exist entry mappings between them in bilingual dictionaries (e.g., an English-Chinese dictionary). This method is simple and straightforward, but it suffers from two limitations. (1) Dictionaries are static during a certain period, whereas the sentiment lexicon evolves over time. (2) The entries in dictionaries tend to be expressions of formal, written language, but people prefer colloquial language when expressing their sentiments or opinions online. These limitations lead to low coverage of the links from English to the target language. An alternative way is to use an MT engine as a black box to build the inter-language relation. One can send each word in English to a publicly available MT engine and get the translations in the target language.
Edges can then be inserted into the graph between the English words and their corresponding translations. This approach suffers from the problem of low coverage as well, because MT engines tend to use a small vocabulary (Duh, Fujino, and Nagata 2011).
In this article, we propose to leverage a large bilingual parallel corpus, which is readily available in the MT research community, to build the bilingual word graph. The parallel corpus consists of a large number of parallel sentence pairs from two different languages, and such corpora have been used as the foundation of state-of-the-art statistical MT engines. Like the example shown in Figure 2, two sentences in English and Chinese are parallel sentences if they express the same meaning in different languages. We can easily derive the word alignment from the sentence pairs automatically, using a state-of-the-art toolkit like GIZA++ (http://www.statmt.org/moses/giza/GIZA++.html) or BerkeleyAligner (http://nlp.cs.berkeley.edu). In this example, the Chinese word meaning happy is linked to the English word happy, and we say that these two words are aligned. Similarly, the English words best and wishes are both aligned to the Chinese word meaning wish. The word alignment information encodes rich association information between the words of the two languages. We are therefore motivated to leverage the parallel corpus and word alignment to build the bilingual word graph for cross-lingual sentiment lexicon learning. We take the words from both languages in the bilingual parallel corpus as the nodes of the bilingual word graph, and we build the inter-language relations by connecting two words that are aligned together in a sentence pair from the parallel corpus.
There are several advantages to using a parallel corpus to build the inter-language subgraph. First, large parallel corpora are extensively used for training statistical MT engines and can easily be reused for our task. The parallel sentence pairs are usually automatically collected and mined from the Web. As a result, they contain the varied and practical forms of words and phrases embedded in sentiment expressions. Second, the parallel corpus can be updated when necessary, because it is relatively easy to collect from the Web. Consequently, novel sentiment information inferred from the parallel corpus can easily update the existing sentiment lexicons. These advantages greatly improve the coverage of the generated sentiment lexicon, as demonstrated later in our experiments.
As commonly used semi-supervised approaches, label propagation (Zhu and Ghahramani 2002) and its variants (Zhu, Ghahramani, and Lafferty 2003; Zhou et al. 2004) have been applied to many applications, such as part-of-speech tagging (Das and Petrov 2011; Li, Graca, and Taskar 2012), image annotation (Wang, Huang, and Ding 2011), protein function prediction (Jiang 2011; Jiang and McQuay 2012), and so forth. The underlying idea of label propagation is that connected nodes in the graph tend to share the same labels. In bilingual word graph label propagation, words tend to share the same sentiment labels if they are connected by synonym relations or word alignment, and they tend to have different sentiment labels if connected by antonym relations. In this article we propose bilingual word graph label propagation for cross-lingual sentiment lexicon learning.
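The following toy sketch shows one way the bilingual word graph of this section could be assembled: signed intra-language matrices from synonym/antonym pairs and an inter-language matrix from alignment counts. The vocabularies, relation pairs, alignment counts, and romanized Chinese placeholders are all invented for illustration; in the article these come from the English and Chinese WordNets and from word-aligned parallel sentences.

```python
import numpy as np

en_vocab = ["good", "fine", "bad"]
zh_vocab = ["hao", "huai"]           # romanized placeholders for Chinese words
en_idx = {w: i for i, w in enumerate(en_vocab)}
zh_idx = {w: i for i, w in enumerate(zh_vocab)}

# Signed intra-English relations: W_E holds positive synonym weights,
# W_E_tilde holds negative antonym weights (W_T/W_T_tilde are analogous).
W_E = np.zeros((len(en_vocab), len(en_vocab)))
W_E_tilde = np.zeros_like(W_E)
for a, b in [("good", "fine")]:      # toy synonym pairs
    W_E[en_idx[a], en_idx[b]] = W_E[en_idx[b], en_idx[a]] = 1.0
for a, b in [("good", "bad")]:       # toy antonym pairs
    W_E_tilde[en_idx[a], en_idx[b]] = W_E_tilde[en_idx[b], en_idx[a]] = -1.0

# Inter-language relations W_A from word alignment: how often an English
# word was aligned to a Chinese word across the parallel sentence pairs.
align_counts = {("good", "hao"): 40, ("fine", "hao"): 12, ("bad", "huai"): 25}
W_A = np.zeros((len(en_vocab), len(zh_vocab)))
for (e, z), c in align_counts.items():
    W_A[en_idx[e], zh_idx[z]] = c
W_A /= W_A.sum()                     # one simple normalization choice
print(W_A)
```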
Let $F = \{F_E, F_T\}$ denote the predicted labels of the unlabeled words $X$. The loss function can be defined as

$$E_l(F) = \mu \sum_{i=1}^{n} \| f_{Ei} - y_{Ei} \|^2 + \mu \sum_{i=1}^{m} \| f_{Ti} - y_{Ti} \|^2 \quad (1)$$

where $n$ and $m$ denote the numbers of English words and words in the target language, respectively. Let $Y = \{Y_E, Y_T\}$ denote the initial sentiment labels of all the words; the loss function means that the prediction should not change too much from the initial label assignment. Similar to Zhou et al. (2004), we define the smoothness function to indicate that if two words are connected by a synonym relation or by word alignment, then they tend to share the same sentiment label. The smoothness function can be represented by two parts, the inter-language smoothness $E_s^{inter}(F)$ and the synonym intra-language smoothness $E_s^{intra}(F)$:

$$E_s^{inter}(F) = \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{m} w_{Aij} \left\| \frac{f_{Ei}}{\sqrt{d_{AL,ii}}} - \frac{f_{Tj}}{\sqrt{d_{AR,jj}}} \right\|^2 \quad (2)$$

$$E_s^{intra}(F) = \rho_1 \frac{1}{2} \sum_{i,j=1}^{n} w_{Eij} \left\| \frac{f_{Ei}}{\sqrt{d_{Eii}}} - \frac{f_{Ej}}{\sqrt{d_{Ejj}}} \right\|^2 + \rho_2 \frac{1}{2} \sum_{i,j=1}^{m} w_{Tij} \left\| \frac{f_{Ti}}{\sqrt{d_{Tii}}} - \frac{f_{Tj}}{\sqrt{d_{Tjj}}} \right\|^2 \quad (3)$$

$D_{AL}$ and $D_{AR}$ are defined as $D_{AL} = \mathrm{diag}(\sum_j w_A(1,j), \ldots, \sum_j w_A(n,j))$ and $D_{AR} = \mathrm{diag}(\sum_i w_A(i,1), \ldots, \sum_i w_A(i,m))$. $D_E$ and $D_T$ are the degree matrices of the synonym intra-language relations $W_E$ and $W_T$, respectively. We then define the distance function to indicate that if two words are connected by the antonym relation, they tend to belong to different sentiment labels:

$$E_d^{intra}(F) = \rho_3 \frac{1}{2} \sum_{i,j=1}^{n} |\tilde{w}_{Eij}| \left\| \frac{f_{Ei}}{\sqrt{\tilde{d}_{Eii}}} - \frac{f_{Ej}}{\sqrt{\tilde{d}_{Ejj}}} \right\|^2 + \rho_4 \frac{1}{2} \sum_{i,j=1}^{m} |\tilde{w}_{Tij}| \left\| \frac{f_{Ti}}{\sqrt{\tilde{d}_{Tii}}} - \frac{f_{Tj}}{\sqrt{\tilde{d}_{Tjj}}} \right\|^2 \quad (4)$$

where $\tilde{D}_E$ and $\tilde{D}_T$ are the degree matrices of the absolute values of the antonym intra-language relations $\tilde{W}_E$ and $\tilde{W}_T$, respectively. Intuitively, for the inter-language smoothness and the synonym intra-language smoothness, the closer the connected words, the better the performance; whereas for the antonym intra-language distance, the farther the better. The objective is to minimize the smoothness and loss terms while maximizing the distance term, so we define the whole objective function for cross-lingual sentiment lexicon learning as

$$\arg\min_F E(F) = \arg\min_F \left( E_s^{intra}(F) + E_s^{inter}(F) + E_l(F) - E_d^{intra}(F) \right) \quad (5)$$

To obtain the solution to Equation (5), we differentiate the objective function with respect to $F_E$ and $F_T$:

$$\frac{\partial E(F)}{\partial F_E}\Big|_{F_E = F_E^\star} = \rho_1 S_E F_E + \frac{1}{2} S_A F_T - \rho_3 \tilde{S}_E F_E + \mu F_E - \mu Y_E = 0 \quad (6)$$

$$\frac{\partial E(F)}{\partial F_T}\Big|_{F_T = F_T^\star} = \rho_2 S_T F_T + \frac{1}{2} S_A' F_E - \rho_4 \tilde{S}_T F_T + \mu F_T - \mu Y_T = 0 \quad (7)$$

where $P'$ denotes the transpose of the matrix $P$. The graph Laplacians $S_E$ and $S_T$ of the synonym intra-language relations are $I - D_E^{-1/2} W_E D_E^{-1/2}$ and $I - D_T^{-1/2} W_T D_T^{-1/2}$, where $I$ is the identity matrix. The graph Laplacians $\tilde{S}_E$ and $\tilde{S}_T$ of the antonym intra-language relations are $I - \tilde{D}_E^{-1/2} \tilde{W}_E \tilde{D}_E^{-1/2}$ and $I - \tilde{D}_T^{-1/2} \tilde{W}_T \tilde{D}_T^{-1/2}$, which have been proven to be positive semi-definite (Kunegis et al. 2010). The graph Laplacian $S_A$ of the inter-language relation is $I - D_{AL}^{-1/2} W_A D_{AR}^{-1/2}$. From Equations (6) and (7), we obtain the optimal solutions

$$(M_E - S_A M_T^{-1} S_A') F_E = 2\mu (Y_E - S_A M_T^{-1} Y_T) \quad (8)$$

$$(M_T - S_A' M_E^{-1} S_A) F_T = 2\mu (Y_T - S_A' M_E^{-1} Y_E) \quad (9)$$

where $M_E = 2\rho_1 S_E - 2\rho_3 \tilde{S}_E + 2\mu I$ and $M_T = 2\rho_2 S_T - 2\rho_4 \tilde{S}_T + 2\mu I$.
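To check the algebra, here is a small numpy sketch that builds random toy versions of the matrices just defined and solves Equations (8) and (9) directly. All sizes, weights, and seed labels are invented; note also that for the rectangular inter-language term the identity in $S_A$ is read here as a rectangular identity matrix, which is an interpretive assumption about the notation above.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 3                                # English / target vocabulary sizes
mu, rho1, rho2, rho3, rho4 = 0.1, 1.0, 1.0, 1.0, 1.0

def sym(a):
    return (a + a.T) / 2                   # make a random matrix symmetric

W_E, W_T = sym(rng.random((n, n))), sym(rng.random((m, m)))      # synonyms
W_Et, W_Tt = -sym(rng.random((n, n))), -sym(rng.random((m, m)))  # antonyms (< 0)
W_A = rng.random((n, m))                   # alignment weights

def lap(W):
    """Normalized Laplacian I - D^{-1/2} W D^{-1/2}, degrees taken from |W|,
    so for a negative (antonym) matrix this yields I plus a positive part."""
    d = np.abs(W).sum(axis=1)
    return np.eye(len(W)) - W / np.sqrt(np.outer(d, d))

S_E, S_T, S_Et, S_Tt = lap(W_E), lap(W_T), lap(W_Et), lap(W_Tt)
# Inter-language Laplacian; the identity here is rectangular (n x m).
S_A = np.eye(n, m) - W_A / np.sqrt(np.outer(W_A.sum(1), W_A.sum(0)))

M_E = 2 * rho1 * S_E - 2 * rho3 * S_Et + 2 * mu * np.eye(n)
M_T = 2 * rho2 * S_T - 2 * rho4 * S_Tt + 2 * mu * np.eye(m)

Y_E = rng.random((n, 2))                   # seed label columns (pos, neg)
Y_T = np.zeros((m, 2))                     # target words start unlabeled

# Direct solves of Equations (8)-(9); generically invertible for random input.
MTi, MEi = np.linalg.inv(M_T), np.linalg.inv(M_E)
F_E = np.linalg.solve(M_E - S_A @ MTi @ S_A.T, 2 * mu * (Y_E - S_A @ MTi @ Y_T))
F_T = np.linalg.solve(M_T - S_A.T @ MEi @ S_A, 2 * mu * (Y_T - S_A.T @ MEi @ Y_E))
print(F_T)                                 # induced sentiment scores
```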
To avoid computing the inverse matrices in Equations (8) and (9), we apply the Jacobi algorithm (Saad 2003) to calculate the solutions, as described in Algorithm 1. In line 1, we set the label of a positive seed $x_i$ to $y_{E_i}^{L+} = (1, 0)$ and the label of a negative seed $x_j$ to $y_{E_j}^{L-} = (0, 1)$. We set the labels $Y_E^U$ of the unlabeled words to zero and then generate $Y_E$ from $Y_E^L$ and $Y_E^U$. Line 2 sets $Y_T$ as a zero matrix. In line 3, we compute the matrices $S_E$, $\tilde{S}_E$, $S_T$, $\tilde{S}_T$, and $S_A$, and then the matrices $M_E$ and $M_T$. The sentiment information is simultaneously propagated through lines 4–7 until the predicted labels $F_E$ and $F_T$ converge.

Algorithm 1. Bilingual word graph label propagation
Input: $G = (X_E \cup X_T, W_E \cup \tilde{W}_E \cup W_T \cup \tilde{W}_T \cup W_A)$, $X_E$, labels $Y_E^L$ for $X_E^L$; initialize $\mu$ and $\rho_{1\sim4}$
Output: $F_T$ for $X_T$ and $F_E$ for $X_E$
1. Initialize $Y_E$ with the English sentiment seeds
2. Set $Y_T$ to zero
3. Calculate $S_E$, $\tilde{S}_E$, $S_T$, $\tilde{S}_T$, and $S_A$, then calculate $M_E$ and $M_T$
4. Loop
5. $\quad f_{Ei}^{(t+1)} = \frac{1}{(M_E - S_A M_T^{-1} S_A')_{ii}} \Big( 2\mu (Y_E - S_A M_T^{-1} Y_T)_i - \sum_{j \neq i} (M_E - S_A M_T^{-1} S_A')_{ij}\, f_{Ej}^{(t)} \Big)$
6. $\quad f_{Ti}^{(t+1)} = \frac{1}{(M_T - S_A' M_E^{-1} S_A)_{ii}} \Big( 2\mu (Y_T - S_A' M_E^{-1} Y_E)_i - \sum_{j \neq i} (M_T - S_A' M_E^{-1} S_A)_{ij}\, f_{Tj}^{(t)} \Big)$
7. Until $F_E$ and $F_T$ converge

For an unlabeled word $x_i$, if $|f(i,0) - f(i,1)| < \xi$ (with $\xi = 10^{-4}$), $x_i$ is regarded as neutral; if $f(i,0) - f(i,1) \geq \xi$, $x_i$ is regarded as positive; and if $f(i,1) - f(i,0) \geq \xi$, $x_i$ is regarded as negative.
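For readers who want to see the update of lines 5–6 in isolation, here is a self-contained sketch of the Jacobi iteration for a generic system Ax = b, followed by the polarity decision rule. The 2x2 system and the label columns are toy stand-ins for the bracketed matrices and right-hand sides of Equations (8) and (9).

```python
import numpy as np

def jacobi(A, b, tol=1e-6, max_iter=1000):
    """Solve A x = b by the Jacobi update
    x_i <- (b_i - sum_{j != i} A_ij x_j) / A_ii."""
    x = np.zeros_like(b)
    D = np.diag(A)                       # diagonal entries A_ii
    R = A - np.diagflat(D)               # off-diagonal part of A
    for _ in range(max_iter):
        x_new = (b - R @ x) / D[:, None]
        if np.abs(x_new - x).max() < tol:
            return x_new
        x = x_new
    return x

# Toy diagonally dominant system, for which Jacobi is guaranteed to converge.
A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([[1.0, 0.0], [0.0, 1.0]])   # two right-hand-side columns, like
                                         # the (positive, negative) labels
f = jacobi(A, b)
print(np.allclose(A @ f, b, atol=1e-4))  # True: the system is solved

# Polarity decision rule with the threshold xi from the text.
xi = 1e-4
labels = np.where(np.abs(f[:, 0] - f[:, 1]) < xi, "neutral",
                  np.where(f[:, 0] > f[:, 1], "positive", "negative"))
print(labels)
```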
4 experiment :We conduct experiments on Chinese sentiment lexicon learning. As in previous work (Baccianella, Esuli, and Sebastiani 2010), the sentiment words in the General Inquirer (GI) lexicon are selected as the English seeds (Stone 1997). From the GI lexicon we collect 2,005 positive words and 1,635 negative words. To build the bilingual word graph, we adopt a Chinese-English parallel corpus obtained from news articles published by the Xinhua News Agency in Chinese and English, using the automatic parallel sentence identification approach of Munteanu and Marcu (2005). Altogether, we collect more than 25M parallel sentence pairs in English and Chinese. We remove all the stopwords in Chinese and English (e.g., the Chinese word for of, and am), together with the low-frequency words that occur fewer than 5 times. After preprocessing, we finally have more than 174,000 English words, among which 3,519 words have sentiment labels, and more than 146,000 Chinese words for which we need to predict sentiment labels. To transfer sentiment information to the unlabeled Chinese words more efficiently, we remove the unlabeled English words from the word graph (i.e., $X_E^U = \emptyset$). The unsupervised BerkeleyAligner (Liang, Taskar, and Klein 2006) is used to align the parallel sentences in this article. As an unsupervised method, it requires neither manually collected training data nor complex training processing, and its performance is competitive with supervised methods. With these two advantages, we can focus on our task of cross-lingual sentiment lexicon learning. Based on the word alignment derived by BerkeleyAligner, the inter-language relation $W_A$ is initialized with the normalized alignment frequencies. The English and Chinese versions of WordNet (http://www.globalwordnet.org/gwa/wordnet_table.html) are used to build the intra-language relations $W_E$, $\tilde{W}_E$, $W_T$, and $\tilde{W}_T$, respectively. WordNet (Miller 1995) groups words into synonym sets, called synsets. We collect about 117,000 synsets from the English WordNet and about 80,000 synsets from the Chinese WordNet. In total, we obtain 8,406 and 6,312 antonym synset pairs, respectively.
We first generate both positive and negative scores for each unlabeled word and then determine the word sentiment polarities based on these scores. We rank the two sets of newly labeled sentiment words according to their polarity scores. The top-ranked Chinese words are shown in Table 1. We manually label the top-ranked 1K sentiment words. For P@10K, we sequentially divide the top 10K ranked list into ten equal parts. One hundred sentiment words are randomly selected from each part for labeling. Similar to the evaluation of TREC Blog Distillation (Ounis, Macdonald, and Soboroff 2008), all the labeled words from each approach are used in the evaluation. We then evaluate the ranked lists with two metrics, Precision@K and Recall.
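The two metrics are straightforward to script; here is a minimal sketch over an invented ranked list and gold set, just to pin down the definitions used in the tables and figures that follow.

```python
def precision_at_k(ranked, gold, k):
    """Fraction of the top-k ranked words judged correct (present in gold)."""
    return sum(w in gold for w in ranked[:k]) / k

def recall(ranked, gold):
    """Fraction of all gold sentiment words recovered anywhere in the list."""
    return sum(w in gold for w in ranked) / len(gold)

# Toy ranked list of predicted positive words and an invented gold set.
ranked_positive = ["great", "happy", "table", "joyful", "blue"]
gold_positive = {"great", "happy", "joyful", "nice"}
print(precision_at_k(ranked_positive, gold_positive, 3))  # 2/3
print(recall(ranked_positive, gold_positive))             # 3/4
```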
In this set of experiments, we examine the influence of graph topologies on sentiment lexicon learning.
Mono: This approach learns the Chinese sentiment lexicon based only on the Chinese monolingual word graph $G_T = (X_T, W_T \cup \tilde{W}_T)$. Because it needs labeled sentiment words, we incorporate the English labeled sentiment words $X_E$ and the inter-language relation $W_A$ in the first iteration, and then set $X_E$ and $W_A$ to zero in later iterations.
BLP-WOA (bilingual word graph without antonyms): This approach is based on the bilingual word graph. It only involves the inter-language relation $W_A$ and the synonym intra-language relations $W_E$ and $W_T$; $\tilde{W}_E$ and $\tilde{W}_T$ are set to zero.
BLP: This approach is based on the bilingual word graph. It incorporates the inter-language relation $W_A$, the synonym intra-language relations $W_E$ and $W_T$, and the antonym intra-language relations $\tilde{W}_E$ and $\tilde{W}_T$.
In these approaches, $\mu$ is set to 0.1 as in Zhou et al. (2004). The precision of these approaches is shown in Figure 3. The figure shows that the approaches based on the bilingual word graph significantly outperform the one based on the monolingual word graph. The bilingual word graph brings in more word relations and accelerates the sentiment propagation. Besides, in the bilingual word graph, the English sentiment seed words can continually provide accurate sentiment information. Thus we observe improvements for the approaches based on the bilingual word graph in terms of both precision and recall (Table 2). Meanwhile, we find that adding the antonym relations to the bilingual word graph slightly enhances precision among the top-ranked words; similar findings are observed in our later experiments. It appears that the antonym relations depict word relations in a more accurate way and can refine the word sentiment scores more precisely. However, the synonym relations and word alignment relations dominate, whereas the antonym relations account for only a small percentage of the graph. It is hard for the antonym relations to introduce new relations into the graph, and thus they cannot help to further improve recall.
In this set of experiments, we compare our approach with a baseline and existing approaches.
Rule: For the intra-language relations, this approach assumes that the synonyms of a positive (negative) word are always positive (negative), and the antonyms of a positive (negative) word are always negative (positive). For the inter-language relation, we regard a Chinese word aligned to positive (negative) English words as positive (negative). If a word connects to both positive and negative English words, we regard it as objective. Based on this heuristic, we generate the two sets of sentiment words.
SOP: Hassan et al. (2011) present a method to predict the semantic orientation of unlabeled words based on the mean hitting time to the two sets of sentiment seed words. Given the graph $G = (X_E \cup X_T, W_E \cup \tilde{W}_E \cup W_T \cup \tilde{W}_T \cup W_A)$, it defines the transition probability from node $i$ to node $j$ as

$$p(j|i) = \frac{w_{ij}}{\sum_k w_{ik}}$$

The mean hitting time $h(i|j)$ is the average number of weighted steps from word $i$ to word $j$. Starting with the word $i$ and ending with a sentiment word $k \in M$, the mean hitting time $h(i|M)$ can be formally defined as

$$h(i|M) = \begin{cases} 0, & i \in M \\ \sum_{j \in V} p(j|i)\, h(j|M) + 1, & \text{otherwise} \end{cases}$$

Let $M^+$ and $M^-$ denote the GI positive and negative seeds. If $h(i|M^+)$ is greater than $h(i|M^-)$, the word $x_i$ is classified as negative; otherwise it is classified as positive. The generated positive words and negative words are then ranked according to their polarity scores, respectively.
MAD: Talukdar and Crammer (2009) propose the MAD algorithm, which modifies the adsorption algorithm (Baluja et al. 2008) by adding a new regularization term. In particular, besides the positive and negative labels, a dummy label is assigned to each word in the MAD approach. Two additional columns, representing the scores of the dummy label, are added to $Y$ and $F$, respectively. We denote these two matrices with the dummy labels as $\hat{Y}$ and $\hat{F}$. Meanwhile, $\hat{R}$ represents the initial dummy scores of all the words. For a word $x_i$, the newly added columns in $\hat{Y}_i$ and $\hat{F}_i$ are set to zero (i.e., $\hat{y}(i,3) = \hat{f}(i,3) = 0$); $\hat{r}(i,0)$ and $\hat{r}(i,1)$ are set to zero, and $\hat{r}(i,3)$ is set to one. Then, the predicted label $\hat{F}_i$ of the word $x_i$ is iteratively obtained by

$$\hat{F}_i^{(t+1)} = \frac{1}{\hat{M}_{ii}} \Big( \lambda_1 \hat{Y}_i + \lambda_2 \sum_j W_{ij} \hat{F}_j^{(t)} + \lambda_3 \gamma \hat{R}_i \Big), \qquad \hat{M}_{ii} = \lambda_1 + \lambda_2 \sum_{j \neq i} W_{ji} + \lambda_3$$

$\lambda_{1\sim3}$ and $\gamma$ are used to tune the importance of each term. We set $\lambda_{1\sim2}$ to one, $\lambda_3$ to 10, and $\gamma$ to 0.1, which produces reasonably good results. After propagation, $\hat{f}(i,0)$ and $\hat{f}(i,1)$ are used to determine the sentiment polarity of the word $x_i$.
We show the recall of the learned Chinese sentiment words in Table 3. Compared with BLP and SOP, the Rule approach learns fewer sentiment words. The coverage of the Rule approach is inevitably low because many words in the corpus are aligned to both positive and negative words. For example, in most cases the positive Chinese word meaning helpful is aligned to the positive English word helpful, but sometimes it is aligned (or misaligned) to negative English words, like freak. In this situation, the word tends to be predicted as objective. In SOP, the positive and negative scores are related to the distances of the word to the positive and negative seed words, and the distance is usually too coarse-grained to depict the sentiment polarity. For example, the shortest path between the word good and the word bad in WordNet is only 5 (Kamps et al. 2004). The Rule and SOP approaches find different sentiment words. We then evaluate the learned Chinese polarity word lists by precision at k. As illustrated in Figure 4, the significance test indicates that our approach significantly outperforms the Rule and SOP approaches. The major difference of our approach is that the polarity information can be transferred between English and Chinese and within each language at the same time, whereas in the other two approaches the polarity information mainly transfers from English to Chinese, and once a word gets a polarity score, it is difficult to change or refine it. The idea of the MAD approach is similar to bilingual word graph label propagation, but the MAD approach fails to leverage the antonym intra-language relations. We observe that the MAD approach achieves a result comparable to the BLP approach. MAD can obtain smoother label scores by adding a dummy label, but the dummy label does not influence the sentiment labels much because it is not used in the determination of the word sentiment polarity. Besides, MAD cannot deal with the antonym relations. As a result, these experiments demonstrate the overall superiority of our approach in cross-lingual sentiment lexicon learning and indicate the effectiveness of the BLP approach in Chinese sentiment lexicon learning.
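A compact toy sketch of the MAD update just described, with the quoted hyperparameter settings ($\lambda_{1\sim2} = 1$, $\lambda_3 = 10$, $\gamma = 0.1$) and an invented random graph with two seed words; the third label column is the dummy label.

```python
import numpy as np

rng = np.random.default_rng(1)
k = 5                                     # toy vocabulary size
W = rng.random((k, k))
np.fill_diagonal(W, 0.0)                  # no self-edges
lam1, lam2, lam3, gamma = 1.0, 1.0, 10.0, 0.1

Y_hat = np.zeros((k, 3))                  # columns: positive, negative, dummy
Y_hat[0, 0] = 1.0                         # word 0: positive seed
Y_hat[1, 1] = 1.0                         # word 1: negative seed
R_hat = np.zeros((k, 3))
R_hat[:, 2] = 1.0                         # initial mass on the dummy label

# M_ii = lam1 + lam2 * sum_{j != i} W_ji + lam3 (diagonal of W is zero here)
M_diag = lam1 + lam2 * W.sum(axis=0) + lam3
F_hat = np.zeros((k, 3))
for _ in range(100):                      # fixed sweeps suffice for the toy
    F_hat = (lam1 * Y_hat + lam2 * W @ F_hat
             + lam3 * gamma * R_hat) / M_diag[:, None]
# Polarity is read off the first two columns; the dummy column is ignored.
print(np.argmax(F_hat[:, :2], axis=1))    # 0 = positive, 1 = negative
```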
This set of experiments examines the ways to build the inter-language relation.
BLP-dict: The inter-language relation is built upon the translation entries from LDC (http://projects.ldc.upenn.edu/Chinese/LDC_ch.htm) and the Universal Dictionary (UD, http://www.dicts.info/uddl.php). From these dictionaries (both English-Chinese and Chinese-English), we collect 41,034 translation entries between English and Chinese words. If the English word $x_i$ can be translated to the Chinese word $x_j$ in the dictionaries, $w_A(i,j)$ and $w_A(j,i)$ are set to 1.
BLP-MT: All the Chinese (English) words are translated into English (Chinese) by Google Translator. If the Chinese word $x_i$ can be translated to the English word $x_j$, then $w_A(i,j)$ and $w_A(j,i)$ are set to 1. If a Chinese word is translated to an English phrase, we assume that the Chinese word is projected to each word in the English phrase. To improve the coverage, we translate the English sentiment seed words with three other methods, namely word collocation, coordinated phrases, and punctuation, as mentioned in Meng et al. (2012b).
The learned Chinese sentiment word lists are also evaluated with precision at k. As shown in Figure 5, we find that the alignment-based approach outperforms the dictionary-based and MT-based approaches. The reason is that we can build more inter-language relations based on word alignment than from the translation entries of the dictionaries or the translation pairs from Google Translator. For example, the English word move is often translated by dictionaries or MT engines to Chinese words meaning shift or affect, touch. From the parallel sentences, besides these word translation pairs, the word move can also be aligned to a Chinese expression meaning plain sailing, bon voyage that is commonly used in Chinese greeting texts. This translation entry is hard to find in dictionaries or by MT engines; it arises because words are aligned between the two parallel sentences, so the word move may sometimes be forced to align to this expression in sentence pairs such as good luck and best wishes on your career move and its Chinese translation. Thus, when building the inter-language relations with word alignment, our approach is likely to learn more sentiment word candidates. This is also the reason why the dictionary-based and MT-based approaches learn fewer sentiment words than our approach, as indicated in Table 4. According to our statistics, a Chinese word is connected on average to 2.3 and 2.1 English words when the inter-language relations are built with the dictionaries and Google Translator, respectively. By building the inter-language relations with word alignment, our approach connects a Chinese word to 16.21 English words on average, which greatly increases the coverage of the learned sentiment lexicon.
The following set of experiments reveals the influence of the intra-language relations.
BLP-A: As the baseline of this set of experiments, this approach does not build the intra-language relations with either the English or the Chinese WordNet synsets.
Only the inter-language relation with word alignment is used to build the graph; that is, $W_E$, $\tilde{W}_E$, $W_T$, and $\tilde{W}_T$ are defined as zero matrices.
BLP-AE: Word alignment and the English WordNet synsets are used to build the intra-English relation $W_E$, but the intra-Chinese relations $W_T$ and $\tilde{W}_T$ are set to zero matrices.
BLP-AC: Word alignment and the Chinese WordNet synsets are used to build the intra-Chinese relation $W_T$, but the intra-English relations $W_E$ and $\tilde{W}_E$ are set to zero matrices.
As Figure 6 shows, when both the English and Chinese intra-language relations are combined, the precision curves of both positive and negative predictions increase. This indicates that adding the intra-language relations has a positive influence. The improvement can be explained by the ability of the intra-language relations to refine the polarity scores. For example, the English word sophisticated can be aligned to a positive Chinese word meaning delicate as well as a negative Chinese word meaning wily, wicked. In the GI lexicon, the English word sophisticated is labeled as positive. In a bilingual word graph that contains only the inter-language relations, the negative Chinese word is likely to be labeled as positive. However, with the intra-language relations, the negative Chinese word may connect to other negative Chinese words, like one meaning foxy, and the positive Chinese word may connect to other positive Chinese words, like one meaning elaborate. Thus the polarity score of a word can be refined by the intra-language relations in each iteration of propagation. Another advantage of the intra-language relations is that they help to reduce the noise introduced by the inter-language relation. For example, the positive Chinese word meaning help is sometimes misaligned to the negative English word freak by the inter-language relation, but it is also connected to its synonyms meaning help and salutary (which are positive) by the intra-language relations. Thus, although the inter-language relation brings in certain noisy alignments, the intra-language relations can help to refine the polarity score of the word.
$\rho_1$ and $\rho_2$ in Equation (3) tune the English and Chinese synonym intra-language propagation, while $\rho_3$ and $\rho_4$ in Equation (4) adjust the English and Chinese antonym intra-language propagation. For simplicity, we let $\rho_1$ equal $\rho_2$ and let $\rho_3$ equal $\rho_4$, and then tune $\rho_{1,2}$ and $\rho_{3,4}$ together. When $\rho_{1,2}$ and $\rho_{3,4}$ range over $\{10^{-2}, 10^{-1}, 1, 10, 100, 1000\}$, Precision@1K ranges from 0.631 to 0.689 and Recall ranges from 0.651 to 0.729 on average. In general, we find that we obtain better results when $1 \leq \rho_{3,4} < \rho_{1,2} \leq 10$.
Sentiment classification is one of the most extensively studied tasks in the sentiment analysis community (Pang and Lee 2008). To see whether the performance improvement in lexicon learning also improves the results of sentiment classification, we apply the generated Chinese sentiment lexicons to sentence-level sentiment classification.
Data set: The NTCIR sentiment-labeled corpus is used for sentiment classification (Seki et al. 2008, 2009). We extract the Chinese sentences that have positive, negative, or neutral labels. The numbers of extracted sentences are shown in Table 5. The learned sentiment words from the Mono and BLP approaches are used as classification features. We implement the following baselines for comparison.
BSL DF: The Chinese word unigrams and bigrams are extracted from the NTCIR data set as features. We rank the features according to their frequencies and gradually increase the value of N for the Top-N classification features.
BSL LF: The words in existing Chinese sentiment lexicons are used as features. A total of 836 positive words and 1,254 negative words are collected from HowNet (http://www.keenage.com/).
We use LibSVM (http://www.csie.ntu.edu.tw/~cjlin/libsvm/) and perform 10-fold cross-validation on the NTCIR polarity sentences. The accuracies over the number of features N are plotted in Figure 7. Our approach achieves a very promising improvement, even though the features and the sentences to be classified are selected from different corpora. This suggests that the generated sentiment lexicon is adaptive and of high enough quality for sentiment classification.
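Here is a minimal sketch of this classification setup with toy data. The article uses LibSVM; the sketch substitutes scikit-learn's LinearSVC as a readily scriptable stand-in, and the lexicon, sentences, and labels are invented for illustration (with 2 folds instead of 10, since the toy data are tiny).

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

lexicon = ["great", "happy", "awful", "sad"]        # learned sentiment words
sentences = ["a great and happy day", "an awful result",
             "sad news again", "a happy great outcome"]
labels = [1, 0, 0, 1]                               # 1 = positive, 0 = negative

# Restrict the feature space to the lexicon words, mirroring the use of
# learned sentiment words as classification features.
vec = CountVectorizer(vocabulary=lexicon)
X = vec.fit_transform(sentences)
clf = LinearSVC()
print(cross_val_score(clf, X, labels, cv=2).mean())  # cross-validated accuracy
```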
5 conclusions and future work :In this article, we studied the task of cross-lingual sentiment lexicon learning. We built a bilingual word graph with the words of two languages and connected them with inter-language and intra-language relations. We proposed a bilingual word graph label propagation approach to transduce the sentiment information from English sentiment words to the words in the target language. The synonym and antonym relations among the words in the same language are leveraged to build the intra-language relations. Word alignment derived from a large parallel corpus is used to build the inter-language relations. Experiments on Chinese sentiment lexicon learning demonstrate the effectiveness of the proposed approach. There are three main conclusions from this work. First, the bilingual word graph is suitable for sentiment information transfer, and the proposed approach can iteratively improve the precision of the generated sentiment lexicon. Second, building the inter-language relations with a large parallel corpus can significantly improve the coverage. Third, by incorporating the antonym relations into the bilingual word graph, the BLP approach achieves an improvement in precision. In the future, we will explore the opportunity of expanding or generating sentiment lexicons for multiple languages by bootstrapping.
abstract :In this article we address the task of cross-lingual sentiment lexicon learning, which aims to automatically generate sentiment lexicons for target languages with available English sentiment lexicons. We formalize the task as a learning problem on a bilingual word graph, in which the intra-language relations among the words in the same language and the inter-language relations among the words between different languages are properly represented. With the words in the English sentiment lexicon as seeds, we propose a bilingual word graph label propagation approach to induce the sentiment polarities of the unlabeled words in the target language. In particular, we show that both synonym and antonym word relations can be used to build the intra-language relations, and that the word alignment information derived from bilingual parallel sentences can be effectively leveraged to build the inter-language relations. The evaluation on Chinese sentiment lexicon learning shows that the proposed approach outperforms existing approaches in both precision and recall.
Experiments conducted on the NTCIR data set further demonstrate the effectiveness of the learned sentiment lexicon in sentence-level sentiment classification.
Dehong Gao, Furu Wei, Wenjie Li, Xiaohua Liu, and Ming Zhou
references :
Banea, Carmen, Rada Mihalcea, and Janyce Wiebe. 2010. Multilingual subjectivity: Are more languages better? In Proceedings of the 23rd International Conference on Computational Linguistics.
Blei, David M., Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993–1022.
Boyd-Graber, Jordan and Philip Resnik. 2010. Holistic sentiment analysis across languages: Multilingual supervised latent Dirichlet allocation. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing.
Das, Dipanjan and Slav Petrov. 2011. Unsupervised part-of-speech tagging with bilingual graph-based projections. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics.
Duh, Kevin, Akinori Fujino, and Masaaki Nagata. 2011. Is machine translation ripe for cross-lingual sentiment classification?
Esuli, Andrea and Fabrizio Sebastiani. 2006. SentiWordNet: A publicly available lexical resource for opinion mining. In Proceedings of the 3rd International Conference on Language Resources and Evaluation.
Esuli, Andrea and Fabrizio Sebastiani. 2007. Random-walk models of term semantics: An application to opinion-related properties. In Proceedings of the 3rd Language and Technology Conference.
Hassan, Ahmed, Amjad Abu-Jbara, Rahul Jha, and Dragomir Radev. 2011. Identifying the semantic orientation of foreign words. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics.
McKeown.""], ""title"": ""Predicting the semantic orientation of adjectives"", ""venue"": ""Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics and 8th Conference"", ""year"": 1997}, {""authors"": [""He"", ""Yulan"", ""Harith Alani"", ""Deyu Zhou.""], ""title"": ""Exploring English lexicon knowledge for Chinese sentiment analysis"", ""venue"": ""Proceedings of the 2010 CIPS-SIGHAN Joint Conference on Chinese Language"", ""year"": 2010}, {""authors"": [""Hu"", ""Minqing"", ""Bing Liu.""], ""title"": ""Mining and summarizing customer reviews"", ""venue"": ""Proceedings of the 10th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 168\u2013177,"", ""year"": 2004}, {""authors"": [""Hu"", ""Minqing"", ""Bing Liu.""], ""title"": ""Opinion extraction and summarization on the Web"", ""venue"": ""Proceedings of the 21st National Conference on Artificial Intelligence, pages 1,621\u20131,624, Boston, MA."", ""year"": 2006}, {""authors"": [""Jiang"", ""Jonathan Q.""], ""title"": ""Learning protein functions from bi-relational graph of proteins and function annotations"", ""venue"": ""Algorithms in Bioinformatics, 6833:128\u2013138, Springer Berlin Heidelberg."", ""year"": 2011}, {""authors"": [""Jiang"", ""Jonathan Q."", ""Lisa J. McQuay.""], ""title"": ""Predicting protein function by multi-label correlated semi-supervised learning"", ""venue"": ""IEEE/ACM Transactions on Computational Biology and Bioinformatics, 9(4):1059\u20131069."", ""year"": 2012}, {""authors"": [""Kim"", ""Soo-Min"", ""Eduard Hovy.""], ""title"": ""Determining the sentiment of opinions"", ""venue"": ""Proceedings of the 20th International Conference on Computational Linguistics, pages 355\u2013363, Geneva."", ""year"": 2004}, {""authors"": [""Kunegis"", ""Jerome"", ""Stephan Schmidt"", ""Andreas Lommatzsch"", ""J\u00fcrgen Lerner"", ""Ernesto W. De"", ""Luca Sahin Albayrak.""], ""title"": ""Spectral analysis of signed graphs for clustering, prediction and visualization"", ""venue"": ""In"", ""year"": 2010}, {""authors"": [""Li"", ""Shen"", ""Joao V. Graca"", ""Ben Taskar.""], ""title"": ""Wiki-ly supervised part-of-speech tagging"", ""venue"": ""Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural"", ""year"": 2012}, {""authors"": [""Liang"", ""Percy"", ""Ben Taskar"", ""Dan Klein.""], ""title"": ""Alignment by agreement"", ""venue"": ""Proceedings of the Main Conference on Human Language Technology Conference of the North American Chapter of the Association of"", ""year"": 2006}, {""authors"": [""Lu"", ""Bin"", ""Chenhao Tan"", ""Claire Cardie"", ""Benjamin K. Tsou.""], ""title"": ""Joint bilingual sentiment classification with unlabeled parallel corpora"", ""venue"": ""Proceedings of the 49th Annual Meeting of the Association for"", ""year"": 2011}, {""authors"": [""Meng"", ""Xinfan"", ""Furu Wei"", ""Xiaohua Liu"", ""Ming Zhou"", ""Ge Xu"", ""Houfeng Wang.""], ""title"": ""Cross-lingual mixture model for sentiment classification"", ""venue"": ""Proceedings of the 50th Annual Meeting of the Association"", ""year"": 2012}, {""authors"": [""Meng"", ""Xinfan"", ""Furu Wei"", ""Ge Xu"", ""Longkai Zhang"", ""Xiaohua Liu"", ""Ming Zhou"", ""Houfeng Wang""], ""title"": ""Lost in translations? 
Meng, Xinfan, Furu Wei, Ge Xu, Longkai Zhang, Xiaohua Liu, Ming Zhou, and Houfeng Wang. 2012b. Lost in translations? Building sentiment lexicons using context-based machine translation.
Mihalcea, Rada, Carmen Banea, and Janyce Wiebe. 2007. Learning multilingual subjective language via cross-lingual projections. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics.
Miller, George A. 1995. WordNet: A lexical database for English. Communications of the ACM, 38(11):39–41.
Munteanu, Dragos Stefan and Daniel Marcu. 2005. Improving machine translation performance by exploiting non-parallel corpora. Computational Linguistics, 31(4):477–504.
Ounis, Iadh, Craig Macdonald, and Ian Soboroff. 2008. Overview of the TREC-2010 blog track. In Proceedings of the NTCIR07 Workshop, pages 104–111, Tokyo.
Pang, Bo and Lillian Lee. 2008. Opinion mining and sentiment analysis. Foundations and Trends in Information Retrieval, volume 2. Now Publishers, Inc.
Qiu, Guang, Bing Liu, Jiajun Bu, and Chun Chen. 2011. Opinion word expansion and target extraction through double propagation. Computational Linguistics, 37(1):9–27.
Rao, Delip and Deepak Ravichandran. 2009. Semi-supervised polarity lexicon induction. In Proceedings of the 12th Conference of the European Chapter of the ACL, pages 675–682, Athens.
Riloff, Ellen, Janyce Wiebe, and Theresa Wilson. 2003. Learning subjective nouns using extraction pattern bootstrapping. In Proceedings of the 7th Conference on Natural Language Learning, pages 25–32.
Saad, Yousef. 2003. Iterative Methods for Sparse Linear Systems. Society for Industrial and Applied Mathematics, Philadelphia, PA, 2nd edition.
Seki, Yohei, David Kirk Evans, Lun-Wei Ku, Le Sun, Hsin-Hsi Chen, and Noriko Kando. 2008. Overview of multilingual opinion analysis task at NTCIR-7. In Proceedings of the NTCIR-7 Workshop.
Seki, Yohei, Lun-Wei Ku, Le Sun, Hsin-Hsi Chen, and Noriko Kando. 2009. Overview of multilingual opinion analysis task at NTCIR-8: A step toward cross lingual opinion analysis. In Proceedings of the NTCIR-8 Workshop.
Stone, Philip J. 1997. Thematic text analysis: New agendas for analyzing text content. In Text Analysis for the Social Sciences, chapter 2. Lawrence Erlbaum, Mahwah, NJ.
Takamura, Hiroya, Takashi Inui, and Manabu Okumura. 2005. Extracting semantic orientations of words using spin model. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics.
Talukdar, Partha Pratim and Koby Crammer. 2009. New regularized algorithms for transductive learning. In Proceedings of the European Conference on Machine Learning and Knowledge Discovery in Databases.
Turney, Peter D. and Michael L. Littman. 2003. Measuring praise and criticism: Inference of semantic orientation from association. ACM Transactions on Information Systems, 21(4):315–346.
Wan, Xiaojun. 2009. Co-training for cross-lingual sentiment classification. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing.
Wang, Hua, Heng Huang, and Chris Ding. 2011. Image annotation using bi-relational graph of images and semantic labels. In Proceedings of the 24th IEEE Conference on Computer Vision and Pattern Recognition, pages 793–800.
Yu, Hong and Vasileios Hatzivassiloglou. 2003. Towards answering opinion questions: Separating facts from opinions and identifying the polarity of opinion sentences. In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing.
Zhou, Dengyong, Olivier Bousquet, Thomas Navin Lal, Jason Weston, and Bernhard Scholkopf. 2004. Learning with local and global consistency. In Advances in Neural Information Processing Systems.
Zhu, Xiaojin and Zoubin Ghahramani. 2002. Learning from labeled and unlabeled data with label propagation. Technical Report CMU-CALD-02-107, Carnegie Mellon University.
Zhu, Xiaojin, Zoubin Ghahramani, and John Lafferty. 2003. Semi-supervised learning using Gaussian fields and harmonic functions. In Proceedings of the International Conference on Machine Learning.
acknowledgments :The work described in this article was supported by a Hong Kong RGC project (PolyU no. 5202/12E) and a National Natural Science Foundation of China grant (NSFC no. 61272291).
1 introduction :A sentiment lexicon is regarded as the most valuable resource for sentiment analysis (Pang and Lee 2008), and it lays the groundwork of much sentiment analysis research, for example, sentiment classification (Yu and Hatzivassiloglou 2003; Kim and Hovy 2004) and opinion summarization (Hu and Liu 2004). To avoid manually annotating sentiment words, automatically learning sentiment lexicons has attracted considerable attention in the sentiment analysis community. Existing work determines word sentiment polarities either from statistical information (e.g., the co-occurrence of words with predefined sentiment seed words) derived from a large corpus (Riloff, Wiebe, and Wilson 2003; Hu and Liu 2006) or from word semantic information (e.g., synonym relations) found in existing human-created resources such as WordNet (Takamura, Inui, and Okumura 2005; Rao and Ravichandran 2009).
However, current work mainly focuses on English sentiment lexicon generation or expansion, while sentiment lexicon learning for other languages has not been well studied. In this article, we address the issue of cross-lingual sentiment lexicon learning, which aims to generate sentiment lexicons for a non-English language (hereafter referred to as "the target language") with the help of available English sentiment lexicons. The underlying motivation of this task is to leverage the existing English sentiment lexicons and substantial linguistic resources to label the sentiment polarities of the words in the target language. To this end, we need an approach to transferring the sentiment information from English words to the words in the target language. The few existing approaches first build word relations between English and the target language; then, based on the word relations and English sentiment seed words, they determine the sentiment polarities of the words in the target language. In these two steps, relation building plays the fundamental role because it is responsible for the transfer of sentiment information between the two languages. Two approaches are often used in the literature to connect the words in different languages. One is based on translation entries in cross-lingual dictionaries (Hassan et al. 2011). The other relies on a machine translation (MT) engine, used as a black box, to translate the English sentiment words into the target language (Steinberger et al. 2011). As observed by Duh, Fujino, and Nagata (2011) and Mihalcea, Banea, and Wiebe (2007), both approaches tend to rely on a small translation vocabulary, which leads to low coverage of the generated sentiment lexicons for the target language. To solve this problem, we propose a generic approach to addressing the task of cross-lingual sentiment lexicon learning. Specifically, we model this task with a bilingual word graph, which is composed of two intra-language subgraphs and an inter-language subgraph. The intra-language subgraphs model the semantic relations among the words in the same language. When building them, we incorporate both synonym and antonym word relations in a novel manner, represented by positive and negative signed weights in the subgraphs, respectively. These two intra-language subgraphs are then connected by the inter-language subgraph. We propose Bilingual word graph Label Propagation (BLP), which simultaneously takes the inter-language relations and the intra-language relations into account in an iterative way. Moreover, we leverage the word alignment information derived from a parallel corpus to build the inter-language relations: we connect two words from different languages that are aligned to each other in a parallel sentence pair. Taking advantage of a large parallel corpus, this approach significantly improves the coverage of the generated sentiment lexicon. The experimental results on Chinese sentiment lexicon learning show the effectiveness of the proposed approach in terms of both precision and recall. We further evaluate the impact of the learned sentiment lexicon on sentence-level sentiment classification: when the words in the learned sentiment lexicon are used as features for sentiment classification in the target language, the classifier achieves high performance. We make the following contributions in this article.
1. We present a generic approach to automatically learning sentiment lexicons for the target language from an available English sentiment lexicon, and we formalize the cross-lingual sentiment learning task on a bilingual word graph. 2. We build a bilingual word graph using synonym and antonym word relations and propose a bilingual word graph label propagation approach, which effectively leverages the inter-language relations and both types (synonym and antonym) of intra-language relations in sentiment lexicon learning. 3. We leverage the word alignment information derived from a large number of parallel sentences in sentiment lexicon learning. We build the inter-language relations in the bilingual word graph upon word alignment and achieve significant improvements.
2 related work :In general, the work on sentiment lexicon learning focuses mainly on English and can be categorized into co-occurrence-based approaches (Hatzivassiloglou and McKeown 1997; Riloff, Wiebe, and Wilson 2003; Qiu et al. 2011) and semantic-based approaches (Mihalcea, Banea, and Wiebe 2007; Takamura, Inui, and Okumura 2005; Kim and Hovy 2004). The co-occurrence-based approaches determine the sentiment polarity of a given word from statistical information, such as its co-occurrence with predefined sentiment seed words or with product features, mainly derived from large corpora. One of the earliest works, by Hatzivassiloglou and McKeown (1997), assumes that conjunction words convey the polarity relation of the two words they connect. For example, the conjunction and tends to link two words with the same polarity, whereas the conjunction but is likely to link two words with opposite polarities. Their approach only considers adjectives, not nouns or verbs, and it is unable to extract adjectives that are not conjoined by conjunctions. Riloff, Wiebe, and Wilson (2003) define several pattern templates and extract sentiment words with two bootstrapping approaches. Turney and Littman (2003) calculate the pointwise mutual information (PMI) of a given word with positive and negative sets of sentiment seed words; the sentiment polarity of the word is determined by the difference between its average PMI with the positive set and with the negative set. To obtain the PMI statistics, they issue queries (consisting of the given word and a sentiment word) to a search engine; the number of hits and the position (whether the given word is near the sentiment word) are used to estimate the association between the given word and the sentiment word. Hu and Liu (2004) study sentiment word learning on customer reviews and assume that sentiment words tend to be correlated with product features. Frequent nouns and noun phrases are treated as product features, and adjectives are extracted as sentiment words from the sentences that contain one or more product features. This approach may work on a product review corpus, where a product feature may appear frequently, but for other corpora, such as news articles, it may not be effective. Qiu et al. (2011) combine sentiment lexicon learning and opinion target extraction: a double propagation approach learns sentiment words and extracts opinion targets simultaneously, based on eight manually defined rules.
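To make the co-occurrence statistics concrete, the following minimal sketch computes a PMI-based semantic orientation score in the spirit of Turney and Littman (2003). The count tables cooc and count, the corpus size total, and the seed lists are hypothetical inputs, and the zero-count floor is our simplification of their search-engine hit-count estimates.

```python
import math

def pmi(count_xy, count_x, count_y, total):
    """Pointwise mutual information from raw (co-)occurrence counts."""
    if count_xy == 0:
        return 0.0  # crude floor; Turney and Littman estimate these counts from hits
    return math.log2(count_xy * total / (count_x * count_y))

def semantic_orientation(word, cooc, count, total, pos_seeds, neg_seeds):
    """SO(word): average PMI with positive seeds minus average PMI with negative seeds."""
    so_pos = sum(pmi(cooc.get((word, s), 0), count[word], count[s], total)
                 for s in pos_seeds) / len(pos_seeds)
    so_neg = sum(pmi(cooc.get((word, s), 0), count[word], count[s], total)
                 for s in neg_seeds) / len(neg_seeds)
    return so_pos - so_neg  # > 0 suggests positive polarity, < 0 negative
```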
The semantic-based approaches determine the sentiment polarity of a given word according to word semantic relations, such as synonymy with sentiment seed words. These word semantic relations are usually obtained from dictionaries, for example, WordNet.1 Kim and Hovy (2004) assume that the synonyms of a positive (negative) word are positive (negative) and its antonyms are negative (positive). Initialized with a set of sentiment words, they expand sentiment lexicons based on these two kinds of word relations. Kamps et al. (2004) build a synonym graph according to the synonym relations (synsets) derived from WordNet; the sentiment polarity of a word is calculated from its shortest paths to the two sentiment words good and bad. However, the shortest path cannot precisely describe the sentiment orientation, considering that there are only five steps between the word good and the word bad in WordNet (Hassan et al. 2011). Takamura, Inui, and Okumura (2005) construct a word graph from the glosses of WordNet: words are connected if one word appears in the gloss of another, and the word sentiment polarity is determined by the weights of its connections on the word graph. Based on WordNet, Rao and Ravichandran (2009) exploit several graph-based semi-supervised learning methods, such as Mincuts and label propagation; the word polarity orientations are induced by initializing some sentiment seed words in the WordNet graph. Esuli and Sebastiani (2006, 2007) and Baccianella et al. (2010) treat sentiment word learning as a machine learning problem, that is, classifying the polarity orientations of the words in WordNet. They select seven positive words and seven negative words and expand them through the see-also and antonym relations in WordNet. These expanded words are then used to train a ternary classifier that predicts the sentiment polarities of all the words in WordNet, with the glosses (textual definitions of the words in WordNet) as classification features. The resulting sentiment lexicon is the well-known SentiWordNet.2
The work on cross-lingual sentiment lexicon learning is still at an early stage and can be categorized into two types, according to how they bridge the words in two languages. Mihalcea, Banea, and Wiebe (2007) generate a sentiment lexicon for Romanian by directly translating the English sentiment words into Romanian through bilingual English–Romanian dictionaries. When confronting multiword translations, they translate the multiwords word by word; the translations are then validated by requiring that they occur at least three times on the Web. The approach proposed by Hassan et al. (2011) learns sentiment words based on the English WordNet and WordNets in the target languages (e.g., Hindi and Arabic). Cross-lingual dictionaries are used to connect the words in the two languages, and the polarity of a given word is determined by the average hitting time from the word to the English sentiment word set. These approaches connect words in two languages based on cross-lingual dictionaries. Their main concern is the effect of morphological inflection (i.e., a word may be mapped to multiple words in cross-lingual dictionaries). 1 http://wordnet.princeton.edu/. 2 http://sentiwordnet.isti.cnr.it/. For example, one single English word typically has four Spanish or Italian word forms (two each for gender and for number) and many Russian word forms (due to gender, number, and case distinctions) (Steinberger et al. 2011). Usually, this approach requires an additional process to disambiguate the sentiment polarities of all the morphological variants.
To improve sentiment classification for the target language, Banea, Mihalcea, and Wiebe (2010) translate the English sentiment lexicon into the target language using Google Translator.3 Similarly, Google Translator is used by Steinberger et al. (2011): they manually produce two high-quality gold-standard sentiment lexicons for two languages (e.g., English and Spanish) and then translate them into a third language (e.g., Italian) via Google Translator, on the assumption that the words in the third language that appear in both translation lists are likely to be sentiment words. These approaches connect the words in two languages through MT engines. Their main concern is the low overlap between the vocabulary of natural documents and the vocabulary of documents translated by MT engines (Duh, Fujino, and Nagata 2011; Meng et al. 2012a). This shortcoming of MT-based approaches inevitably leads to low coverage. 3 http://translate.google.com/.
Our task resembles cross-lingual sentiment classification, as in Wan (2009), Lu et al. (2011), and Meng et al. (2012a), which classifies the sentiment polarities of product reviews. Generally, these studies use semi-supervised learning approaches and regard translations of labeled English sentiment reviews as training data. The terms in each review are leveraged as features for training, which has proven effective in sentiment classification (Pang and Lee 2008). We can regard sentiment lexicon learning as word-level sentiment classification; however, at the word level it is not straightforward to extract features for a single word, and without sufficient features it is difficult for these approaches to perform well. Another line of cross-lingual sentiment classification uses Latent Dirichlet Allocation (LDA) (Blei, Ng, and Jordan 2003) or its variants, as in Boyd-Graber and Resnik (2010) or He, Alani, and Zhou (2010). These studies assume that each review is a mixture of sentiments and each sentiment is a probability distribution over words, and they apply an LDA-like approach to model the sentiment polarity of each review. Nonetheless, this assumption may not be applicable in sentiment lexicon learning, because a single word can be regarded as the minimal semantic unit and it is difficult, if not impossible, to infer latent topics from a single word. Note that, unlike the sentiment classification of product reviews, where the instances are normally independent, the words in sentiment lexicon learning are highly related to each other, for example, through synonym and antonym relations. Through these relations, the words naturally form a word graph; we therefore use a graph-based learning approach to leverage these word relations in sentiment lexicon learning. In the next section, we introduce the proposed graph-based cross-lingual sentiment lexicon learning.
3 cross-lingual sentiment lexicon learning :In this work, we model the task of cross-lingual sentiment lexicon learning with a bilingual word graph, where (1) the words in the two languages are represented by the nodes of two intra-language subgraphs, respectively; (2) the synonym and antonym word relations within each language are represented by positive and negative signed weights in the corresponding intra-language subgraphs; and (3) the two intra-language subgraphs are connected by an inter-language subgraph.
Mathematically, we build a graph $G = (X_E \cup X_T,\; W_E \cup \widetilde{W}_E \cup W_T \cup \widetilde{W}_T \cup W_A)$ that consists of two intra-language subgraphs $G_E = (X_E, W_E \cup \widetilde{W}_E)$ and $G_T = (X_T, W_T \cup \widetilde{W}_T)$, as shown in Figure 1. These two subgraphs are connected by the inter-language graph $G_R = (X_E \cup X_T, W_A)$. The elements of $W_E$, $W_T$, and $W_A$ are positive real numbers, that is, $W_E, W_T, W_A \in \mathbb{R}^+$, whereas $\widetilde{W}_E, \widetilde{W}_T \in \mathbb{R}^-$. Because $G$ incorporates the words in two languages, we call it a Bilingual Word Graph. Specifically, the positive weights $W_E$ and $W_T$ represent the synonym intra-language relations, and the negative weights $\widetilde{W}_E$ and $\widetilde{W}_T$ represent the antonym intra-language relations. The inter-language relations $W_A$ represent the connections between the words in the two languages. For cross-lingual sentiment lexicon learning, $X_E = \{X_E^L, X_E^U\}$ denotes the labeled and unlabeled words in English, and $X_T$ denotes the unlabeled words in the target language. Given the labels $Y_E^L = \{y_{E_1}, \ldots, y_{E_l}\}$ of the seeds $X_E^L$, we aim to predict the sentiment polarities of the words $X_T$. In the remainder of this section, we present the bilingual word graph construction and the bilingual word graph label propagation algorithm.
We represent the words in English and in the target language as the nodes of the bilingual word graph. We use the synonym and antonym relations of the words in the same language to build $W$ and $\widetilde{W}$ in the intra-language graphs, respectively. In the rest of this section, we focus on how to build the inter-language relations. Intuitively, there are two ways to connect the words in two languages. One is to insert links between words if there exist entry mappings between them in bilingual dictionaries (e.g., an English–Chinese dictionary). This method is simple and straightforward, but it suffers from two limitations: (1) dictionaries are static over a certain period, whereas the sentiment lexicon evolves over time; and (2) the entries in dictionaries tend to be expressions of formal, written language, whereas people prefer colloquial language when expressing their sentiments or opinions on-line. These limitations lead to low coverage of the links from English to the target language. An alternative is to use an MT engine as a black box to build the inter-language relations: one can send each English word to a publicly available MT engine, obtain its translations in the target language, and insert edges between the English words and their corresponding translations. This approach suffers from low coverage as well, because MT engines tend to use a small vocabulary (Duh, Fujino, and Nagata 2011). In this article, we propose to leverage a large bilingual parallel corpus, which is readily available in the MT research community, to build the bilingual word graph. A parallel corpus consists of a large number of parallel sentence pairs in two different languages and has long been the foundation of state-of-the-art statistical MT engines. As in the example shown in Figure 2, the two sentences in English and Chinese are parallel sentences, which express the same meaning in different languages. We can derive the word alignment from the sentence pairs automatically using a state-of-the-art toolkit, such as GIZA++4 or BerkeleyAligner.5 In this example, the Chinese word (happy) is linked to the English word happy, and we say that these two words are aligned. Similarly, the English words best and wishes are both aligned to (wish). 4 http://www.statmt.org/moses/giza/GIZA++.html. 5 http://nlp.cs.berkeley.edu.
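A minimal sketch of how such inter-language weights could be accumulated from word-aligned sentence pairs. The input format (token lists plus (i, j) alignment links) and the function names are illustrative assumptions, not the authors' code; the counts would then be normalized into the alignment frequencies used to initialize $W_A$.

```python
from collections import defaultdict

def build_inter_language_counts(aligned_pairs):
    """Accumulate alignment counts between English and target-language words.

    aligned_pairs: iterable of (en_tokens, tgt_tokens, links), where links is a
    list of (i, j) index pairs saying en_tokens[i] is aligned to tgt_tokens[j].
    Returns a dict {(en_word, tgt_word): count}.
    """
    counts = defaultdict(int)
    for en_tokens, tgt_tokens, links in aligned_pairs:
        for i, j in links:
            counts[(en_tokens[i], tgt_tokens[j])] += 1
    return counts

def normalize(counts):
    """Turn raw counts into normalized alignment frequencies per English word."""
    totals = defaultdict(int)
    for (en, _), c in counts.items():
        totals[en] += c
    return {(en, tgt): c / totals[en] for (en, tgt), c in counts.items()}
```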
The word alignment information encodes rich association information between the words of the two languages. We are therefore motivated to leverage the parallel corpus and word alignment to build the bilingual word graph for cross-lingual sentiment lexicon learning. We take the words from both languages in the bilingual parallel corpus as the nodes of the bilingual word graph, and we build the inter-language relations by connecting two words that are aligned together in a sentence pair from the parallel corpus. There are several advantages of using a parallel corpus to build the inter-language subgraph. First, large parallel corpora are extensively used for training statistical MT engines and can be easily reused in our task. The parallel sentence pairs are usually automatically collected and mined from the Web; as a result, they contain the diverse, practical variations of words and phrases embedded in sentiment expressions. Second, the parallel corpus can be updated when necessary, because it is relatively easy to collect from the Web; consequently, novel sentiment information inferred from the parallel corpus can easily update existing sentiment lexicons. These advantages greatly improve the coverage of the generated sentiment lexicon, as demonstrated later in our experiments.
As commonly used semi-supervised approaches, label propagation (Zhu and Ghahramani 2002) and its variants (Zhu, Ghahramani, and Lafferty 2003; Zhou et al. 2004) have been applied to many tasks, such as part-of-speech tagging (Das and Petrov 2011; Li, Graca, and Taskar 2012), image annotation (Wang, Huang, and Ding 2011), and protein function prediction (Jiang 2011; Jiang and McQuay 2012). The underlying idea of label propagation is that connected nodes in the graph tend to share the same labels. In bilingual word graph label propagation, words tend to share the same sentiment labels if they are connected by synonym relations or word alignment, and tend to take different sentiment labels if they are connected by antonym relations. In this article we propose bilingual word graph label propagation for cross-lingual sentiment lexicon learning. Let $F = \{F_E, F_T\}$ denote the predicted labels of the unlabeled words $X$. The loss function is defined as

$$E_l(F) = \mu \sum_{i=1}^{n} \| f_{E_i} - y_{E_i} \|^2 + \mu \sum_{i=1}^{m} \| f_{T_i} - y_{T_i} \|^2 \qquad (1)$$

where $n$ and $m$ denote the numbers of English words and words in the target language. Let $Y = \{Y_E, Y_T\}$ denote the initial sentiment labels of all the words; the loss function expresses that the prediction should not deviate too much from the initial label assignment. Similar to Zhou et al. (2004), we define a smoothness function to indicate that if two words are connected by a synonym relation or by word alignment, they tend to share the same sentiment label. The smoothness function is composed of two parts, the inter-language smoothness $E_s^{inter}(F)$ and the synonym intra-language smoothness $E_s^{intra}(F)$:

$$E_s^{inter}(F) = \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{m} w_{A_{ij}} \left\| \frac{f_{E_i}}{\sqrt{d_{AL_{ii}}}} - \frac{f_{T_j}}{\sqrt{d_{AR_{jj}}}} \right\|^2 \qquad (2)$$

$$E_s^{intra}(F) = \rho_1 \frac{1}{2} \sum_{i,j=1}^{n} w_{E_{ij}} \left\| \frac{f_{E_i}}{\sqrt{d_{E_{ii}}}} - \frac{f_{E_j}}{\sqrt{d_{E_{jj}}}} \right\|^2 + \rho_2 \frac{1}{2} \sum_{i,j=1}^{m} w_{T_{ij}} \left\| \frac{f_{T_i}}{\sqrt{d_{T_{ii}}}} - \frac{f_{T_j}}{\sqrt{d_{T_{jj}}}} \right\|^2 \qquad (3)$$

Here $D_{AL} = \mathrm{diag}\big(\sum_j w_A(1,j), \ldots, \sum_j w_A(n,j)\big)$ and $D_{AR} = \mathrm{diag}\big(\sum_i w_A(i,1), \ldots, \sum_i w_A(i,m)\big)$, and $D_E$ and $D_T$ are the degree matrices of the synonym intra-language relations $W_E$ and $W_T$, respectively.
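The normalized Laplacians used below can be formed directly from a weight matrix and its degrees; a sketch assuming a scipy.sparse representation (our assumption about the data layout, not the authors' implementation):

```python
import numpy as np
import scipy.sparse as sp

def normalized_laplacian(W):
    """S = I - D^{-1/2} W D^{-1/2} for a square intra-language weight matrix W.

    For the antonym matrices, the absolute values |W~| define the degrees (abs
    below covers both cases); the inter-language S_A analogously uses the row
    degrees (D_AL) on the left and the column degrees (D_AR) on the right.
    """
    d = np.asarray(abs(W).sum(axis=1)).ravel()
    d_inv_sqrt = sp.diags(1.0 / np.sqrt(np.maximum(d, 1e-12)))  # guard isolated nodes
    return sp.identity(W.shape[0]) - d_inv_sqrt @ W @ d_inv_sqrt
```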
We then define a distance function to indicate that if two words are connected by an antonym relation, they tend to take different sentiment labels:

$$E_d^{intra}(F) = \rho_3 \frac{1}{2} \sum_{i,j=1}^{n} |\tilde{w}_{E_{ij}}| \left\| \frac{f_{E_i}}{\sqrt{\tilde{d}_{E_{ii}}}} - \frac{f_{E_j}}{\sqrt{\tilde{d}_{E_{jj}}}} \right\|^2 + \rho_4 \frac{1}{2} \sum_{i,j=1}^{m} |\tilde{w}_{T_{ij}}| \left\| \frac{f_{T_i}}{\sqrt{\tilde{d}_{T_{ii}}}} - \frac{f_{T_j}}{\sqrt{\tilde{d}_{T_{jj}}}} \right\|^2 \qquad (4)$$

where $\widetilde{D}_E$ and $\widetilde{D}_T$ are the degree matrices of the absolute values of the antonym intra-language relations $\widetilde{W}_E$ and $\widetilde{W}_T$, respectively. Intuitively, for the inter-language smoothness and the synonym intra-language smoothness, the closer the connected words, the better; for the antonym intra-language distance, the farther the better. The objective functions can thus be written as

$$\arg\min(E(F)) = \arg\min\big(E_s^{intra}(F) + E_s^{inter}(F) + E_l(F)\big), \qquad \arg\max(E(F)) = \arg\max\big(E_d^{intra}(F)\big)$$

and we combine them into the whole objective function for cross-lingual sentiment lexicon learning:

$$\arg\min(E(F)) = \arg\min\big(E_s^{intra}(F) + E_s^{inter}(F) + E_l(F) - E_d^{intra}(F)\big) \qquad (5)$$

To obtain the solution to Equation (5), we differentiate the objective function with respect to $F_E$ and $F_T$:

$$\frac{\partial E(F)}{\partial F_E}\bigg|_{F_E=F_E^\star} = \rho_1 S_E F_E + \frac{1}{2} S_A F_T - \rho_3 \widetilde{S}_E F_E + \mu F_E - \mu Y_E = 0 \qquad (6)$$

$$\frac{\partial E(F)}{\partial F_T}\bigg|_{F_T=F_T^\star} = \rho_2 S_T F_T + \frac{1}{2} S_A' F_E - \rho_4 \widetilde{S}_T F_T + \mu F_T - \mu Y_T = 0 \qquad (7)$$

where $P'$ denotes the transpose of a matrix $P$. The graph Laplacians of the synonym intra-language relations are $S_E = I - D_E^{-1/2} W_E D_E^{-1/2}$ and $S_T = I - D_T^{-1/2} W_T D_T^{-1/2}$, where $I$ is the identity matrix. The graph Laplacians of the antonym intra-language relations are $\widetilde{S}_E = I - \widetilde{D}_E^{-1/2} \widetilde{W}_E \widetilde{D}_E^{-1/2}$ and $\widetilde{S}_T = I - \widetilde{D}_T^{-1/2} \widetilde{W}_T \widetilde{D}_T^{-1/2}$, which have been proven to be positive semi-definite (Kunegis et al. 2010). The graph Laplacian of the inter-language relation is $S_A = I - D_{AL}^{-1/2} W_A D_{AR}^{-1/2}$. From Equations (6) and (7), we obtain the optimal solutions

$$(M_E - S_A M_T^{-1} S_A') F_E = 2\mu (Y_E - S_A M_T^{-1} Y_T) \qquad (8)$$

$$(M_T - S_A' M_E^{-1} S_A) F_T = 2\mu (Y_T - S_A' M_E^{-1} Y_E) \qquad (9)$$

where $M_E = 2\rho_1 S_E - 2\rho_3 \widetilde{S}_E + 2\mu I$ and $M_T = 2\rho_2 S_T - 2\rho_4 \widetilde{S}_T + 2\mu I$. To avoid computing the inverse matrices in Equations (8) and (9), we apply the Jacobi algorithm (Saad 2003) to calculate the solutions, as described in Algorithm 1.

Algorithm 1. Bilingual word graph label propagation
Input: $G = (X_E \cup X_T, W_E \cup \widetilde{W}_E \cup W_T \cup \widetilde{W}_T \cup W_A)$, $X_E$, labels $Y_E^L$ for $X_E^L$; initialize $\mu$ and $\rho_{1\sim4}$
Output: $F_T$ for $X_T$ and $F_E$ for $X_E$
1. Initialize $Y_E$ with the English sentiment seeds
2. Set $Y_T$ to zero
3. Calculate $S_E$, $\widetilde{S}_E$, $S_T$, $\widetilde{S}_T$, and $S_A$, then calculate $M_E$ and $M_T$
4. Loop
5. $f_{E_i}^{(t+1)} = \frac{1}{(M_E - S_A M_T^{-1} S_A')_{ii}} \Big( 2\mu (Y_E - S_A M_T^{-1} Y_T)_i - \sum_{j \neq i} (M_E - S_A M_T^{-1} S_A')_{ij}\, f_{E_j}^{(t)} \Big)$
6. $f_{T_i}^{(t+1)} = \frac{1}{(M_T - S_A' M_E^{-1} S_A)_{ii}} \Big( 2\mu (Y_T - S_A' M_E^{-1} Y_E)_i - \sum_{j \neq i} (M_T - S_A' M_E^{-1} S_A)_{ij}\, f_{T_j}^{(t)} \Big)$
7. Until $F_E$ and $F_T$ converge

In line 1, we set the label of a positive seed $x_i$ to $y_{E_i}^L = (1, 0)$ and the label of a negative seed $x_j$ to $y_{E_j}^L = (0, 1)$; the labels $Y_E^U$ of the unlabeled words are set to zero, and $Y_E$ is generated from $Y_E^L$ and $Y_E^U$. Line 2 sets $Y_T$ to a zero matrix. In line 3, we compute the matrices $S_E$, $\widetilde{S}_E$, $S_T$, $\widetilde{S}_T$, and $S_A$, and then the matrices $M_E$ and $M_T$. The sentiment information is simultaneously propagated through lines 4–7 until the predicted labels $F_E$ and $F_T$ have converged. For an unlabeled word $x_i$, if $|f(i,0) - f(i,1)| < \xi$ (with $\xi = 10^{-4}$), $x_i$ is regarded as neutral; if $f(i,0) - f(i,1) \ge \xi$, $x_i$ is regarded as positive; and if $f(i,1) - f(i,0) \ge \xi$, $x_i$ is regarded as negative.
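A compact sketch of the Jacobi-style iteration behind Algorithm 1 and of the thresholding rule, assuming the system matrix $A = M_E - S_A M_T^{-1} S_A'$ and right-hand side $B = 2\mu(Y_E - S_A M_T^{-1} Y_T)$ (and their target-language counterparts) have already been assembled as dense arrays; the variable names are ours, not the authors'.

```python
import numpy as np

def jacobi_solve(A, B, tol=1e-6, max_iter=1000):
    """Solve A F = B column-wise with Jacobi iteration (Saad 2003).

    A: (n, n) system matrix, assumed diagonally dominant enough to converge.
    B: (n, 2) right-hand side; the two columns hold positive/negative scores.
    """
    D = np.diag(A)            # diagonal entries A_ii
    R = A - np.diagflat(D)    # off-diagonal part, used for sum over j != i
    F = np.zeros_like(B)
    for _ in range(max_iter):
        F_new = (B - R @ F) / D[:, None]
        if np.linalg.norm(F_new - F) < tol:
            return F_new
        F = F_new
    return F

def polarity(f_row, xi=1e-4):
    """Decision rule from the paper: neutral / positive / negative by margin xi."""
    diff = f_row[0] - f_row[1]
    if abs(diff) < xi:
        return "neutral"
    return "positive" if diff >= xi else "negative"
```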
4 experiment :We conduct experiments on Chinese sentiment lexicon learning. As in previous work (Baccianella, Esuli, and Sebastiani 2010), the sentiment words in the General Inquirer (GI) lexicon are selected as the English seeds (Stone 1997). From the GI lexicon we collect 2,005 positive words and 1,635 negative words. To build the bilingual word graph, we adopt a Chinese–English parallel corpus obtained from news articles published by the Xinhua News Agency in Chinese and English, using the automatic parallel sentence identification approach of Munteanu and Marcu (2005). Altogether, we collect more than 25M parallel sentence pairs in English and Chinese. We remove all the stopwords in Chinese and English (e.g., (of) in Chinese and am in English), together with the low-frequency words that occur fewer than 5 times. After preprocessing, we have more than 174,000 English words, among which 3,519 have sentiment labels, and more than 146,000 Chinese words for which we need to predict the sentiment labels. To transfer sentiment information to the Chinese unlabeled words more efficiently, we remove the unlabeled English words from the word graph (i.e., $X_E^U = \emptyset$). The unsupervised BerkeleyAligner is used to align the parallel sentences in this article (Liang, Taskar, and Klein 2006). Being unsupervised, it requires neither manually collected training data nor a complex training process, and its performance is competitive with supervised methods; these two advantages let us focus on the task of cross-lingual sentiment lexicon learning itself. Based on the word alignment derived by BerkeleyAligner, the inter-language relation $W_A$ is initialized with the normalized alignment frequencies. The English and Chinese versions6 of WordNet are used to build the intra-language relations $W_E$, $\widetilde{W}_E$, $W_T$, and $\widetilde{W}_T$, respectively. WordNet (Miller 1995) groups words into synonym sets, called synsets. We collect about 117,000 synsets from the English WordNet and about 80,000 synsets from the Chinese WordNet, and we obtain 8,406 and 6,312 antonym synset pairs from them, respectively.
We first generate both positive and negative scores for each unlabeled word and then determine the word sentiment polarities based on these scores. We rank the two sets of newly labeled sentiment words according to their polarity scores; the top-ranked Chinese words are shown in Table 1. We manually label the top-ranked 1K sentiment words. For P@10K, we sequentially divide the top 10K ranked list into ten equal parts, and one hundred sentiment words are randomly selected from each part for labeling. Similar to the evaluation of TREC Blog Distillation (Ounis, Macdonald, and Soboroff 2008), all the labeled words from each approach are used in the evaluation. We then evaluate the ranked lists with two metrics, Precision@K and Recall. 6 http://www.globalwordnet.org/gwa/wordnet_table.html.
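For concreteness, Precision@K over a pooled, partially judged ranked list can be computed as below; the gold dictionary of manual judgments is a hypothetical input standing in for the annotations described above.

```python
def precision_at_k(ranked_words, gold, k):
    """Fraction of the top-k ranked words judged correct.

    ranked_words: candidate words sorted by polarity score (descending).
    gold: dict mapping a judged word to True/False (correct polarity or not).
    Only manually judged words are counted, mirroring pooled evaluation.
    """
    judged = [w for w in ranked_words[:k] if w in gold]
    if not judged:
        return 0.0
    return sum(gold[w] for w in judged) / len(judged)
```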
In this set of experiments, we examine the influence of graph topologies on sentiment lexicon learning. Mono: This approach learns the Chinese sentiment lexicon based only on the Chinese monolingual word graph $G_T = (X_T, W_T \cup \widetilde{W}_T)$. Because it needs labeled sentiment words, we incorporate the English labeled sentiment words $X_E$ and the inter-language relation $W_A$ in the first iteration, and then set $X_E$ and $W_A$ to zero in later iterations. BLP-WOA (bilingual word graph without antonyms): This approach is based on the bilingual word graph but only involves the inter-language relation $W_A$ and the synonym intra-language relations $W_E$ and $W_T$; $\widetilde{W}_E$ and $\widetilde{W}_T$ are set to zero. BLP: This approach is based on the bilingual word graph and incorporates the inter-language relation $W_A$, the synonym intra-language relations $W_E$ and $W_T$, and the antonym intra-language relations $\widetilde{W}_E$ and $\widetilde{W}_T$. In all these approaches, $\mu$ is set to 0.1, as in Zhou et al. (2004).
The precision of these approaches is shown in Figure 3. The figure shows that the approaches based on the bilingual word graph significantly outperform the one based on the monolingual word graph. The bilingual word graph brings in more word relations and accelerates the sentiment propagation; besides, in the bilingual word graph, the English sentiment seed words can continually provide accurate sentiment information. Thus we observe gains for the approaches based on the bilingual word graph in terms of both precision and recall (Table 2). Meanwhile, we find that adding the antonym relations to the bilingual word graph slightly enhances precision among the top-ranked words, and similar findings are observed in our later experiments. It appears that the antonym relations depict word relations in a more accurate way and can refine the word sentiment scores more precisely. However, the synonym relations and the word alignment relations dominate, whereas the antonym relations account for only a small percentage of the graph; it is hard for the antonym relations to introduce new relations into the graph, and thus they cannot further improve recall.
In this set of experiments, we compare our approach with the baseline and existing approaches. Rule: For the intra-language relations, this approach assumes that the synonyms of a positive (negative) word are always positive (negative), and the antonyms of a positive (negative) word are always negative (positive). For the inter-language relations, we regard a Chinese word aligned to positive (negative) English words as positive (negative); if a word connects to both positive and negative English words, we regard it as objective. Based on these heuristics, we generate two sets of sentiment words. SOP: Hassan et al. (2011) present a method to predict the semantic orientation of unlabeled words based on the mean hitting time to the two sets of sentiment seed words. Given the graph $G = (X_E \cup X_T, W_E \cup \widetilde{W}_E \cup W_T \cup \widetilde{W}_T \cup W_A)$, it defines the transition probability from node $i$ to node $j$ as

$$p(j|i) = \frac{w_{i,j}}{\sum_k w_{i,k}}$$

The mean hitting time $h(i|j)$ is the average number of weighted steps from word $i$ to word $j$. Starting with the word $i$ and ending with a sentiment word $k \in M$, the mean hitting time $h(i|M)$ can be formally defined as

$$h(i|M) = \begin{cases} 0, & i \in M \\ \sum_{j \in V} p(j|i) \times h(j|M) + 1, & \text{otherwise} \end{cases}$$

Let $M^+$ and $M^-$ denote the GI positive and negative seeds. If $h(i|M^+)$ is greater than $h(i|M^-)$, the word $x_i$ is classified as negative; otherwise it is classified as positive. The generated positive words and negative words are then ranked according to their polarity scores, respectively.
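The mean hitting time admits a simple fixed-point approximation over the transition matrix; a sketch assuming the graph is given as a row-stochastic matrix P, with the truncation at n_iter being our simplification for nodes that rarely reach the seed set.

```python
import numpy as np

def mean_hitting_time(P, seed_mask, n_iter=200):
    """Iteratively approximate h(i|M) for all nodes i, given seed set M.

    P: (n, n) row-stochastic transition matrix, with p(j|i) in row i.
    seed_mask: boolean array, True for nodes in M (hitting time fixed at 0).
    """
    h = np.zeros(P.shape[0])
    for _ in range(n_iter):
        h = P @ h + 1.0      # one weighted step plus the expected remainder
        h[seed_mask] = 0.0   # boundary condition: h(i|M) = 0 for i in M
    return h
```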
MAD: Talukdar and Crammer (2009) propose the MAD algorithm, which modifies the adsorption algorithm (Baluja et al. 2008) by adding a new regularization term. In particular, besides the positive and negative labels, a dummy label is assigned to each word in the MAD approach. Two additional columns, representing the scores of the dummy label, are added to $Y$ and $F$, respectively; we denote these two matrices with the dummy labels as $\hat{Y}$ and $\hat{F}$. Meanwhile, $\hat{R}$ represents the initial dummy scores of all the words. For a word $x_i$, the newly added columns in $\hat{Y}_i$ and $\hat{F}_i$ are set to zero (i.e., $\hat{y}(i,3) = \hat{f}(i,3) = 0$), $\hat{r}(i,0)$ and $\hat{r}(i,1)$ are set to zero, and $\hat{r}(i,3)$ is set to one. Then, the predicted label $\hat{F}_i$ of the word $x_i$ is iteratively obtained by

$$\hat{F}_i^{(t+1)} = \frac{1}{\hat{M}_{ii}} \Big( \lambda_1 \hat{Y}_i + \lambda_2 \sum_j W_{ij} \hat{F}_j^{(t)} + \lambda_3 \gamma \hat{R}_i \Big), \qquad \hat{M}_{ii} = \lambda_1 + \lambda_2 \sum_{j \neq i} W_{ji} + \lambda_3$$

$\lambda_{1\sim3}$ and $\gamma$ tune the importance of each term in the iteration. We set $\lambda_{1\sim2}$ to one, $\lambda_3$ to 10, and $\gamma$ to 0.1, which produces reasonably good results. After propagation, $\hat{f}(i,0)$ and $\hat{f}(i,1)$ are used to determine the sentiment polarity of the word $x_i$.
We show the recall of the learned Chinese sentiment words in Table 3. Compared with BLP and SOP, the Rule approach learns fewer sentiment words. The coverage of the Rule approach is inevitably low because many words in the corpus are aligned to both positive and negative words. For example, in most cases the positive Chinese word (helpful) is aligned to the positive English word helpful, but sometimes it is aligned (or misaligned) to negative English words, like freak; in this situation, the word tends to be predicted as objective. In SOP, the positive and negative scores depend on the distances from the word to the positive and negative seed words, and distance is usually too coarse-grained to depict sentiment polarity; for example, the shortest path between the word good and the word bad in WordNet is only 5 (Kamps et al. 2004). The Rule and SOP approaches therefore find different sentiment words. We then evaluate the learned Chinese polarity word lists by precision at K. As illustrated in Figure 4, the significance test indicates that our approach significantly outperforms the Rule and SOP approaches. The major difference of our approach is that the polarity information can be transferred between English and Chinese and within each language at the same time, whereas in the other two approaches the polarity information mainly transfers from English to Chinese, and once a word gets a polarity score, it is difficult to change or refine. The idea of the MAD approach is similar to bilingual word graph label propagation, but MAD fails to leverage the antonym intra-language relations. We observe that the MAD approach achieves results comparable to the BLP approach. MAD can obtain smoother label scores by adding a dummy label, but the dummy label does not influence the sentiment labels much, because it is not used in determining the word sentiment polarity. Besides, MAD cannot deal with the antonym relations. As a result, these experiments demonstrate the overall superiority of our approach in cross-lingual sentiment lexicon learning, and they also indicate the effectiveness of the BLP approach in Chinese sentiment lexicon learning.
This set of experiments examines the ways to build the inter-language relations. BLP-dict: The inter-language relations are built upon the translation entries from LDC7 and the Universal Dictionary (UD).8 From these dictionaries (both English–Chinese and Chinese–English), we collect 41,034 translation entries between English and Chinese words. If the English word $x_i$ can be translated to the Chinese word $x_j$ in the UD dictionary, $w_A(i,j)$ and $w_A(j,i)$ are set to 1. BLP-MT: All the Chinese (English) words are translated into English (Chinese) by Google Translator. If the Chinese word $x_i$ can be translated to the English word $x_j$, then $w_A(i,j)$ and $w_A(j,i)$ are set to 1. If a Chinese word is translated to an English phrase, we assume that the Chinese word is projected to each word in the English phrase.
To improve the coverage, we additionally translate the English sentiment seed words with three other methods, namely, word collocation, coordinated phrases, and punctuation, as described in Meng et al. (2012b). The learned Chinese sentiment word lists are also evaluated with precision at K. As shown in Figure 5, we find that the alignment-based approach outperforms the dictionary-based and MT-based approaches. The reason is that we can build more inter-language relations from word alignment than from the translation entries of the dictionaries or the translation pairs from Google Translator. 7 http://projects.ldc.upenn.edu/Chinese/LDC_ch.htm. 8 http://www.dicts.info/uddl.php. For example, the English word move is often translated to (shift) and (affect, touch) by dictionaries or MT engines. From the parallel sentences, besides these word translation pairs, the word move can also be aligned to (plain sailing, bon voyage), which is commonly used in Chinese greeting texts. This translation entry is hard to find in dictionaries or by MT engines, but the words are aligned between the two parallel sentences: sometimes the word move may even be forced to align to it in parallel sentence pairs such as good luck and best wishes on your career move and its Chinese counterpart. Thus, when building the inter-language relations with word alignment, our approach is likely to learn more sentiment word candidates. This is also why the dictionary-based and MT-based approaches learn fewer sentiment words than our approach, as indicated in Table 4. According to our statistics, a Chinese word is connected on average to 2.3 and 2.1 English words when we build the inter-language relations with the dictionary and Google Translator, respectively. By building the inter-language relations with word alignment, our approach connects a Chinese word to 16.21 English words on average, which greatly increases the coverage of the learned sentiment lexicon.
The following set of experiments reveals the influence of the intra-language relations. BLP-A: As the baseline of this set of experiments, it does not build the intra-language relations with either the English or the Chinese WordNet synsets; only the inter-language relations built from word alignment are used, that is, $W_E$, $\widetilde{W}_E$, $W_T$, and $\widetilde{W}_T$ are zero matrices. BLP-AE: Word alignment and the English WordNet synsets are used to build the intra-English relation $W_E$, but the intra-Chinese relations $W_T$ and $\widetilde{W}_T$ are set to zero matrices. BLP-AC: Word alignment and the Chinese WordNet synsets are used to build the intra-Chinese relation $W_T$, but the intra-English relations $W_E$ and $\widetilde{W}_E$ are set to zero matrices. As Figure 6 shows, when both the English and Chinese intra-language relations are combined, the precision curves of both positive and negative predictions increase, indicating that adding the intra-language relations has a positive influence. The improvement can be explained by the ability of the intra-language relations to refine the polarity scores. For example, the English word sophisticated can be aligned to the positive Chinese word (delicate) as well as the negative Chinese word (wily, wicked). In the GI lexicon, the English word sophisticated is labeled as positive, so in a bilingual word graph that contains only the inter-language relations, the negative Chinese word is likely to be labeled as positive.
However, with the intra-language relations, the negative Chinese word may connect to other negative Chinese words, like (foxy), and the positive Chinese word may connect to other positive Chinese words, like (elaborate). Thus the polarity score of a word can be refined by the intra-language relations in each iteration of propagation. Another advantage of the intra-language relations is that they help to reduce the noise introduced by the inter-language relations. For example, sometimes the positive Chinese word (help) is misaligned to the negative English word freak by the inter-language relation, but it is also connected to the synonyms (help) and (salutary) (which are positive) by the intra-language relations, so its polarity score can be adjusted. Thus, even though the inter-language relations bring in some noisy alignments, the intra-language relations help to refine the polarity scores of the affected words.
$\rho_1$ and $\rho_2$ in Equation (3) tune the English and Chinese synonym intra-language propagation, while $\rho_3$ and $\rho_4$ in Equation (4) adjust the English and Chinese antonym intra-language propagation. For simplicity, we let $\rho_1$ equal $\rho_2$ and $\rho_3$ equal $\rho_4$, and we tune $\rho_{1,2}$ and $\rho_{3,4}$ together. When $\rho_{1,2}$ and $\rho_{3,4}$ range over $\{10^{-2}, 10^{-1}, 1, 10, 100, 1000\}$, Precision@1K ranges from 0.631 to 0.689 and Recall ranges from 0.651 to 0.729 on average. In general, we obtain better results when $1 \le \rho_{3,4} < \rho_{1,2} \le 10$.
Sentiment classification is one of the most extensively studied tasks in the sentiment analysis community (Pang and Lee 2008). To see whether the performance improvement in lexicon learning also improves sentiment classification, we apply the generated Chinese sentiment lexicons to sentence-level sentiment classification. Data set: The NTCIR sentiment-labeled corpus is used for sentiment classification (Seki et al. 2008, 2009). We extract the Chinese sentences that have positive, negative, or neutral labels; the numbers of extracted sentences are shown in Table 5. The sentiment words learned by the Mono and BLP approaches are used as classification features. We implement the following baselines for comparison. BSL DF: The Chinese word unigrams and bigrams are extracted from the NTCIR data set as features. We rank the features according to their frequencies and gradually increase the value of N for the top-N classification features. BSL LF: The words in existing Chinese sentiment lexicons are used as features; a total of 836 positive words and 1,254 negative words are collected from HowNet.9 We use LibSVM10 and perform 10-fold cross-validation on the NTCIR polarity sentences. 9 http://www.keenage.com/. 10 http://www.csie.ntu.edu.tw/∼cjlin/libsvm/. The accuracies over the number of features N are plotted in Figure 7. Our approach achieves a very promising improvement, although the features and the sentences to be classified are selected from different corpora. This suggests that the generated sentiment lexicon is adaptive and of sufficient quality for sentiment classification.
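A minimal sketch of this classification setup, with scikit-learn's LinearSVC standing in for the LibSVM binary (a substitution for brevity); the segmented sentences, labels, and learned lexicon are placeholder inputs.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

def lexicon_feature_accuracy(sentences, labels, lexicon_words):
    """10-fold CV accuracy using only learned sentiment words as features.

    sentences: pre-segmented Chinese sentences, tokens joined by spaces.
    labels: polarity labels for the NTCIR sentences.
    lexicon_words: the learned sentiment lexicon, used as the feature vocabulary.
    """
    vectorizer = CountVectorizer(vocabulary=sorted(set(lexicon_words)))
    X = vectorizer.fit_transform(sentences)
    scores = cross_val_score(LinearSVC(), X, labels, cv=10)
    return scores.mean()
```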
5 conclusions and future work :In this article, we studied the task of cross-lingual sentiment lexicon learning. We built a bilingual word graph with the words in two languages and connected them with inter-language and intra-language relations. We proposed a bilingual word graph label propagation approach to transduce the sentiment information from English sentiment words to the words in the target language. The synonym and antonym relations among the words in the same language are leveraged to build the intra-language relations, and word alignment derived from a large parallel corpus is used to build the inter-language relations. Experiments on Chinese sentiment lexicon learning demonstrate the effectiveness of the proposed approach. There are three main conclusions from this work. First, the bilingual word graph is suitable for sentiment information transfer, and the proposed approach can iteratively improve the precision of the generated sentiment lexicon. Second, building the inter-language relations with a large parallel corpus can significantly improve the coverage. Third, by incorporating the antonym relations into the bilingual word graph, the BLP approach can achieve an improvement in precision. In the future, we will explore expanding or generating sentiment lexicons for multiple languages by bootstrapping.
Jordan.""], ""title"": ""Latent Dirichlet allocation"", ""venue"": ""Journal of Machine Learning Research, 3:993\u20131022."", ""year"": 2003}, {""authors"": [""Boyd-Graber"", ""Jordan"", ""Philip Resnik.""], ""title"": ""Holistic sentiment analysis across languages: Multilingual supervised latent Dirichlet allocation"", ""venue"": ""Proceedings of the 2010 Conference on Empirical Methods in"", ""year"": 2010}, {""authors"": [""Das"", ""Dipanjan"", ""Slav Petrov.""], ""title"": ""Unsupervised part-of-speech tagging with bilingual graph-based projections"", ""venue"": ""Proceedings of the 49th Annual Meeting of the Association for Computational"", ""year"": 2011}, {""authors"": [""Duh"", ""Kevin"", ""Akinori Fujino"", ""Masaaki Nagata""], ""title"": ""Is machine translation ripe"", ""year"": 2011}, {""authors"": [""Esuli"", ""Andrea"", ""Fabrizio Sebastiani.""], ""title"": ""Sentiwordnet: A publicly available lexical resource for opinion mining"", ""venue"": ""Proceedings of the 3rd International Conference on Language Resources and Evaluation,"", ""year"": 2006}, {""authors"": [""Esuli"", ""Andrea"", ""Fabrizio Sebastiani.""], ""title"": ""Random-walk models of term semantics: An application to opinionrelated properties"", ""venue"": ""Proceedings of the 3rd Language and Technology Conference,"", ""year"": 2007}, {""authors"": [""Hassan"", ""Ahmed"", ""Amjad Abu-Jbara"", ""Rahul Jha"", ""Dragomir Radev.""], ""title"": ""Identifying the semantic orientation of foreign words"", ""venue"": ""Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics,"", ""year"": 2011}, {""authors"": [""Hatzivassiloglou"", ""Vasileios"", ""Kathleen R. McKeown.""], ""title"": ""Predicting the semantic orientation of adjectives"", ""venue"": ""Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics and 8th Conference"", ""year"": 1997}, {""authors"": [""He"", ""Yulan"", ""Harith Alani"", ""Deyu Zhou.""], ""title"": ""Exploring English lexicon knowledge for Chinese sentiment analysis"", ""venue"": ""Proceedings of the 2010 CIPS-SIGHAN Joint Conference on Chinese Language"", ""year"": 2010}, {""authors"": [""Hu"", ""Minqing"", ""Bing Liu.""], ""title"": ""Mining and summarizing customer reviews"", ""venue"": ""Proceedings of the 10th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 168\u2013177,"", ""year"": 2004}, {""authors"": [""Hu"", ""Minqing"", ""Bing Liu.""], ""title"": ""Opinion extraction and summarization on the Web"", ""venue"": ""Proceedings of the 21st National Conference on Artificial Intelligence, pages 1,621\u20131,624, Boston, MA."", ""year"": 2006}, {""authors"": [""Jiang"", ""Jonathan Q.""], ""title"": ""Learning protein functions from bi-relational graph of proteins and function annotations"", ""venue"": ""Algorithms in Bioinformatics, 6833:128\u2013138, Springer Berlin Heidelberg."", ""year"": 2011}, {""authors"": [""Jiang"", ""Jonathan Q."", ""Lisa J. 
McQuay.""], ""title"": ""Predicting protein function by multi-label correlated semi-supervised learning"", ""venue"": ""IEEE/ACM Transactions on Computational Biology and Bioinformatics, 9(4):1059\u20131069."", ""year"": 2012}, {""authors"": [""Kim"", ""Soo-Min"", ""Eduard Hovy.""], ""title"": ""Determining the sentiment of opinions"", ""venue"": ""Proceedings of the 20th International Conference on Computational Linguistics, pages 355\u2013363, Geneva."", ""year"": 2004}, {""authors"": [""Kunegis"", ""Jerome"", ""Stephan Schmidt"", ""Andreas Lommatzsch"", ""J\u00fcrgen Lerner"", ""Ernesto W. De"", ""Luca Sahin Albayrak.""], ""title"": ""Spectral analysis of signed graphs for clustering, prediction and visualization"", ""venue"": ""In"", ""year"": 2010}, {""authors"": [""Li"", ""Shen"", ""Joao V. Graca"", ""Ben Taskar.""], ""title"": ""Wiki-ly supervised part-of-speech tagging"", ""venue"": ""Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural"", ""year"": 2012}, {""authors"": [""Liang"", ""Percy"", ""Ben Taskar"", ""Dan Klein.""], ""title"": ""Alignment by agreement"", ""venue"": ""Proceedings of the Main Conference on Human Language Technology Conference of the North American Chapter of the Association of"", ""year"": 2006}, {""authors"": [""Lu"", ""Bin"", ""Chenhao Tan"", ""Claire Cardie"", ""Benjamin K. Tsou.""], ""title"": ""Joint bilingual sentiment classification with unlabeled parallel corpora"", ""venue"": ""Proceedings of the 49th Annual Meeting of the Association for"", ""year"": 2011}, {""authors"": [""Meng"", ""Xinfan"", ""Furu Wei"", ""Xiaohua Liu"", ""Ming Zhou"", ""Ge Xu"", ""Houfeng Wang.""], ""title"": ""Cross-lingual mixture model for sentiment classification"", ""venue"": ""Proceedings of the 50th Annual Meeting of the Association"", ""year"": 2012}, {""authors"": [""Meng"", ""Xinfan"", ""Furu Wei"", ""Ge Xu"", ""Longkai Zhang"", ""Xiaohua Liu"", ""Ming Zhou"", ""Houfeng Wang""], ""title"": ""Lost in translations? Building sentiment lexicons using context-based machine translation"", ""year"": 2012}, {""authors"": [""Mihalcea"", ""Rada"", ""Carmen Banea"", ""Janyce Wiebe.""], ""title"": ""Learning multilingual subjective language via cross-lingual projections"", ""venue"": ""Proceedings of the 45th Annual Meeting of the Association for"", ""year"": 2007}, {""authors"": [""Miller"", ""George A.""], ""title"": ""Wordnet: A lexical database for English"", ""venue"": ""Communications of the ACM, 38(11):39\u201341."", ""year"": 1995}, {""authors"": [""Munteanu"", ""Dragos Stefan"", ""Daniel Marcu.""], ""title"": ""Improving machine translation performance by exploiting non-parallel corpora"", ""venue"": ""Journal of Computational Linguistics, 31(4):477\u2013504."", ""year"": 2005}, {""authors"": [""Ounis"", ""Iadh"", ""Craig Macdonald"", ""Ian Soboroff.""], ""title"": ""Overview of the TREC-2010 blog track"", ""venue"": ""Proceedings of the NTCIR07 workshop, pages 104\u2013111, Tokyo."", ""year"": 2008}, {""authors"": [""Pang"", ""Bo"", ""Lillian Lee.""], ""title"": ""Opinion mining and sentiment analysis, volume 2"", ""venue"": ""Foundations and Trends in Information Retrieval. 
Now Publishers, Inc."", ""year"": 2008}, {""authors"": [""Qiu"", ""Guang"", ""Bing Liu"", ""Jiajun Bu"", ""Chun Chen.""], ""title"": ""Opinion word expansion and target extraction through double propagation"", ""venue"": ""Computational Linguistics, 37(1):9\u201327."", ""year"": 2011}, {""authors"": [""Rao"", ""Delip"", ""Deepak Ravichandran.""], ""title"": ""Semi-supervised polarity lexicon induction"", ""venue"": ""Proceedings of the 12th Conference of the European Chapter of the ACL, pages 675\u2013682, Athens."", ""year"": 2009}, {""authors"": [""Riloff"", ""Ellen"", ""Janyce Wiebe"", ""Theresa Wilson.""], ""title"": ""Learning subjective nouns using extraction pattern bootstrapping"", ""venue"": ""Proceedings of the 7th Conference on Natural Language Learning, pages 25\u201332,"", ""year"": 2003}, {""authors"": [""Saad"", ""Yousef.""], ""title"": ""Iterative Methods for Sparse Linear Systems"", ""venue"": ""Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, 2nd edition."", ""year"": 2003}, {""authors"": [""Seki"", ""Yohei"", ""David Kirk Evans"", ""Lun-Wei Ku"", ""Le Sun"", ""Hsin-Hsi Chen"", ""Noriko Kando.""], ""title"": ""Overview of multilingual opinion analysis task at NTCIR-7"", ""venue"": ""Proceedings of the NTCIR07 Workshop,"", ""year"": 2008}, {""authors"": [""Seki"", ""Yohei"", ""Lun-Wei Ku"", ""Le Sun"", ""Hsin-Hsi Chen"", ""Noriko Kando.""], ""title"": ""Overview of multilingual opinion analysis task at NTCIR-8: A step toward cross lingual opinion analysis"", ""venue"": ""Proceedings of the"", ""year"": 2009}, {""authors"": [""Stone"", ""Philip J.""], ""title"": ""Thematic text analysis: New agendas for analyzing text content"", ""venue"": ""Text Analysis for the Social Sciences, chapter 2. Lawerence Erlbaum, Mahwah, NJ."", ""year"": 1997}, {""authors"": [""Takamura"", ""Hiroya"", ""Takashi Inui"", ""Manabu Okumura.""], ""title"": ""Extracting semantic orientations of words using spin model"", ""venue"": ""Proceedings of the 43rd Annual Meeting of the Association for Computational"", ""year"": 2005}, {""authors"": [""Talukdar"", ""Partha Pratim"", ""Koby Crammer.""], ""title"": ""New regularized algorithms for transductive learning"", ""venue"": ""Proceedings of the European Conference on Machine Learning and Knowledge Discovery in Databases,"", ""year"": 2009}, {""authors"": [""Turney"", ""Peter D."", ""Michael L. 
Wan, Xiaojun. 2009. Co-training for cross-lingual sentiment classification. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing.
Wang, Hua, Heng Huang, and Chris Ding. 2011. Image annotation using bi-relational graph of images and semantic labels. In Proceedings of the 24th IEEE Conference on Computer Vision and Pattern Recognition, pages 793-800.
Yu, Hong and Vasileios Hatzivassiloglou. 2003. Towards answering opinion questions: Separating facts from opinions and identifying the polarity of opinion sentences. In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing.
Zhou, Dengyong, Olivier Bousquet, Thomas Navin Lal, Jason Weston, and Bernhard Schölkopf. 2004. Learning with local and global consistency. In Proceedings of Advances in Neural Information Processing Systems.
Zhu, Xiaojin and Zoubin Ghahramani. 2002. Learning from labeled and unlabeled data with label propagation. Technical Report CMU-CALD-02-107, Carnegie Mellon University.
Zhu, Xiaojin, Zoubin Ghahramani, and John Lafferty. 2003. Semi-supervised learning using Gaussian fields and harmonic functions. In Proceedings of the International Conference on Machine Learning.

acknowledgments :The work described in this article was supported by a Hong Kong RGC project (PolyU no. 5202/12E) and a National Natural Science Foundation of China grant (NSFC no. 61272291).

1 introduction :Interest in applying natural language processing (NLP) technology to medical information has increased in recent years.
Much of this work has focused on information retrieval and extraction from clinical notes, electronic medical records, and the biomedical academic literature, but there has also been some work on directly analyzing the spoken language of individuals elicited during the administration of diagnostic instruments in clinical settings. Analyzing spoken language data can reveal information not only about impairments in language but also about a patient's neurological status with respect to other cognitive processes, such as memory and executive function, which are often impaired in individuals with neurodevelopmental disorders, such as autism and language impairment, and neurodegenerative conditions, particularly dementia. Many widely used instruments for diagnosing certain neurological disorders include a task in which the person must produce an uninterrupted stream of spontaneous spoken language in response to a stimulus. A person might be asked, for instance, to retell a brief narrative or to describe the events depicted in a drawing. Much of the previous work in applying NLP techniques to such clinically elicited spoken language data has relied on parsing and language modeling to enable the automatic extraction of linguistic features, such as syntactic complexity and measures of vocabulary use and diversity, which can then be used as markers for various neurological impairments (Solorio and Liu 2008; Gabani et al. 2009; Roark et al. 2011; de la Rosa et al. 2013; Fraser et al. 2014). In this article, we instead use NLP techniques to analyze the content, rather than the linguistic characteristics, of weakly structured spoken language data elicited using neuropsychological assessment instruments. We will show that the content of such spoken responses contains information that can be used for accurate screening for neurodegenerative disorders. The features we explore are grounded in the idea that individuals recalling the same narrative are likely to use the same sorts of words and semantic concepts. In other words, a retelling of a narrative will be faithful to the source narrative and similar to other retellings. This similarity can be measured with techniques such as latent semantic analysis (LSA) cosine distance or the summary-level statistics that are widely used in the evaluation of machine translation and automatic summarization, such as BLEU, Meteor, or ROUGE. Perhaps not surprisingly, however, previous work using this type of spoken language data suggests that people with neurological impairments tend to include irrelevant or off-topic information and to exclude important pieces of information, or story elements, from their retellings that are usually included by neurotypical individuals (Hier, Hagenlocker, and Shindler 1985; Ulatowska et al. 1988; Chenery and Murdoch 1994; Chapman et al. 1995; Ehrlich, Obler, and Clark 1997; Vuorinen, Laine, and Rinne 2000; Creamer and Schmitter-Edgecombe 2010). Thus, it is often not the quantity of correctly recalled information but the quality of that information that reveals the most about a person's diagnostic status.
Summary statistics like LSA cosine distance and BLEU, which are measures of the overall degree of similarity between two texts, fail to capture these sorts of patterns. The work discussed here is an attempt to reveal these patterns and to leverage them for diagnostic classification of individuals with neurodegenerative conditions, including mild cognitive impairment and dementia of the Alzheimer’s type. Our method for extracting the elements used in a retelling of a narrative relies on establishing a word alignment between a retelling and a source narrative. Given the correspondences between the words used in a retelling and the words used in the source narrative, we can determine with relative ease the identities of the story elements of the source narrative that were used in the retelling. These word alignments are much like those used to build machine translation models. The amount of data required to generate accurate word alignment models for machine translation, however, far exceeds the amount of monolingual source-to-retelling parallel data available to train word alignment models for our task. We therefore combine several approaches for producing reliable word alignments that exploit the peculiarities of our training data, including an entirely novel alignment approach relying on random walks on graphs. In this article, we demonstrate that this approach to word alignment is as accurate as and more efficient than standard hidden Markov model (HMM)-based alignment (derived using the Berkeley aligner [Liang, Taskar, and Klein 2006]) for this particular data. In addition, we show that the presence or absence of specific story elements in a narrative retelling, extracted automatically from these task-specific word alignments, predicts diagnostic group membership more reliably than not only other dementia screening tools but also the lexical and semantic overlap measures widely used in NLP to evaluate pairwise language sample similarity. Finally, we apply our techniques to a picture description task that lacks an existing scoring mechanism, highlighting the generalizability and adaptability of these techniques. The importance of accurate screening tools for neurodegenerative disorders cannot be overstated given the increased prevalence of these disorders currently being observed worldwide. In the industrialized world, for the first time in recorded history, the population over 60 years of age outnumbers the population under 15 years of age, and it is expected to be double that of children by 2050 (United Nations 2002). As the elderly population grows and as researchers find new ways to slow or halt the progression of dementia, the demand for objective, simple, and noninvasive screening tools for dementia and related disorders will grow. Although we will not discuss the application of our methods to the narratives of children, the need for simple screening protocols for neurodevelopmental disorders such as autism and language impairment is equally urgent. 
The results presented here indicate that the path toward these goals might include automated spoken language analysis.

2 background :Because of the variety of intact cognitive functions required to generate a narrative, the inability to coherently produce or recall a narrative is associated with many different disorders, including not only neurodegenerative conditions related to dementia, but also autism (Tager-Flusberg 1995; Diehl, Bennetto, and Young 2006), language impairment (Norbury and Bishop 2003; Bishop and Donlan 2005), attention deficit disorder (Tannock, Purvis, and Schachar 1993), and schizophrenia (Lysaker et al. 2003). The bulk of the research presented here, however, focuses on the utility of a particular narrative recall task, the Wechsler Logical Memory subtest of the Wechsler Memory Scale (Wechsler 1997), for diagnosing mild cognitive impairment (MCI). (This and other abbreviations are listed in Table 1.) MCI is the stage of cognitive decline between the sort of decline expected in typical aging and the decline associated with dementia or Alzheimer's disease (Petersen et al. 1999; Ritchie and Touchon 2000; Petersen 2011). MCI is characterized by subtle deficits in functions of memory and cognition that are clinically significant but do not prevent carrying out the activities of daily life. This intermediary phase of decline has been identified and named numerous times: mild cognitive decline, mild neurocognitive decline, very mild dementia, isolated memory impairment, questionable dementia, and incipient dementia. Although there continues to be disagreement about the diagnostic validity of the designation (Ritchie and Touchon 2000; Ritchie, Artero, and Touchon 2001), a number of recent studies have found evidence that seniors with some subtypes of MCI are significantly more likely to develop dementia than the population as a whole (Busse et al. 2006; Manly et al. 2008; Plassman et al. 2008). Early detection can benefit both patients and researchers investigating treatments for halting or slowing the progression of dementia, but identifying MCI can be problematic, as most dementia screening instruments, such as the Mini-Mental State Exam (MMSE) (Folstein, Folstein, and McHugh 1975), lack sufficient sensitivity to the very subtle cognitive deficits that characterize the disorder (Morris et al. 2001; Ravaglia et al. 2005; Hoops et al. 2009). Diagnosis of MCI currently requires both a lengthy neuropsychological evaluation of the patient and an interview with a family member or close associate, both of which should be repeated at regular intervals in order to have a baseline for future comparison. One goal of the work presented here is to determine whether an analysis of spoken language responses to a narrative recall task, the Wechsler Logical Memory subtest, can be used as a more efficient and less intrusive screening tool for MCI. In the Wechsler Logical Memory (WLM) narrative recall subtest of the Wechsler Memory Scale, the individual listens to a brief narrative and must verbally retell the narrative to the examiner once immediately upon hearing the story and again after a delay of 20 to 30 minutes. The examiner scores each retelling according to how many story elements the patient uses in the retelling. The standard scoring procedure, described in more detail in Section 3.2, results in a single summary score for each retelling, immediate and delayed, corresponding to the total number of story elements recalled in that retelling.
The Anna Thompson narrative, shown in Figure 1 (later in this article), has been used as the primary WLM narrative for over 70 years and has been found to be sensitive to dementia and related conditions, particularly in combination with tests of verbal fluency and memory. Multiple studies have demonstrated a significant difference in performance on the WLM between individuals with MCI and typically aging controls under the standard scoring procedure (Storandt and Hill 1989; Petersen et al. 1999; Wang and Zhou 2002; Nordlund et al. 2005). Further studies have shown that performance on the WLM can help predict whether MCI will progress into Alzheimer's disease (Morris et al. 2001; Artero et al. 2003; Tierney et al. 2005). The WLM can also serve as a cognitive indicator of physiological characteristics associated with Alzheimer's disease. WLM scores in the impaired range are associated with the presence of changes in Pittsburgh compound B and cerebrospinal fluid amyloid beta protein, two biomarkers of Alzheimer's disease (Galvin et al. 2010). Poor performance on the WLM and other narrative memory tests has also been strongly correlated with increased density of Alzheimer-related lesions detected in postmortem neuropathological studies, even in the absence of previously reported or detected dementia (Schmitt et al. 2000; Bennett et al. 2006; Price et al. 2009). We note that clinicians do not use the WLM as a diagnostic test by itself for MCI or any other type of dementia. The WLM summary score is just one of a large number of instrumentally derived scores of memory and cognitive function that, in combination with one another and with a clinician's expert observations and examination, can indicate the presence of a dementia, aphasia, or other neurological disorder. Much of the previous work in applying automated analysis of unannotated transcripts of narratives for diagnostic purposes has focused not on evaluating properties specific to narratives but rather on using narratives as a data source from which to extract speech and language features. Solorio and Liu (2008) were able to distinguish the narratives of a small set of children with specific language impairment (SLI) from those of typically developing children using perplexity scores derived from part-of-speech language models. In a follow-up study on a larger group of children, Gabani et al. (2009) again used part-of-speech language models in an attempt to characterize the agrammaticality that is associated with language impairment. Two part-of-speech language models were trained for that experiment: one on the language of children with SLI and one on the language of typically developing children. The perplexity of each child's utterances was calculated according to each of the models. In addition, the authors extracted a number of other structural linguistic features, including mean length of utterance, total words used in the narrative, and measures of accurate subject–verb agreement. These scores collectively performed well in distinguishing children with language impairment, achieving an F1 measure of just over 70% when used within a support vector machine (SVM) for classification. In a continuation of this work, de la Rosa et al. (2013) explored complex language-model-based lexical and syntactic features to more accurately characterize the language used in narratives by children with language impairment. Roark et al. (2011) extracted a subset of the features used by Gabani et al.
(2009), along with a much larger set of language complexity features derived from syntactic parse trees for utterances from narratives produced by elderly individuals for the diagnosis of MCI. These features included simple measures, such as words per clause, and more complex measures of tree depth, embedding, and branching, such as Frazier and Yngve scores. Selecting a subset of these features for classification with an SVM yielded a classification accuracy of 0.73, as measured by the area under the receiver operating characteristic curve (AUC). A similar approach was followed by Fraser et al. (2014) to distinguish different types of primary progressive aphasia, a group of subtypes of dementia distinct from Alzheimer’s disease and MCI, in a small group of elderly individuals. The authors considered almost 60 linguistic features, including some of those explored by Roark et al. (2011) as well as numerous others relating to part-of-speech frequencies and ratios. Using a variety of classifiers and feature combinations for three different two-way classification tasks, the authors achieved classification accuracies ranging between 0.71 and 1.0. An alternative to analyzing narratives in terms of syntactic and lexical features is to evaluate the content of the narrative retellings themselves in terms of their fidelity to the source narrative. Hakkani-Tur, Vergyri, and Tur (2010) developed a method of automatically evaluating an audio recording of a picture description task, in which the patient looks at a picture and narrates the events occurring in the picture, similar to the task we will be analyzing in Section 8. After using automatic speech recognition (ASR) to transcribe the recording, the authors measured unigram overlap between the ASR output transcript and a predefined list of key semantic concepts. This unigram overlap measure correlated highly with manually assigned counts of these semantic concepts. The authors did not investigate whether the scores, derived either manually or automatically, were associated with any particular diagnostic group or disorder. Dunn et al. (2002) were among the first to apply automated methods specifically to scoring the WLM subtest and determining the relationship between these scores and measures of cognitive function. The authors used Latent Semantic Analysis (LSA) to measure the semantic distance from a retelling to the source narrative. The LSA scores correlated very highly with the scores assigned by examiners under the standard scoring guidelines and with independent measures of cognitive functioning. In subsequent work comparing individuals with and without an English-speaking background (Lautenschlager et al. 2006), the authors proposed that LSA-based scoring of the WLM as a cognitive measure is less biased against people with different linguistic and cultural backgrounds than other widely used cognitive measures. This work demonstrates not only that accurate automated scoring of narrative recall tasks is possible but also that the objectivity offered by automated measures has specific benefits for tests like the WLM, which are often administered by practitioners working in a community setting and serving a diverse population. We will compare the utility of this approach with our alignment-based approach subsequently in the article. More recently, Lehr et al. (2013) used a supervised method for scoring the responses to the WLM, transcribed both manually and via ASR, using conditional random fields. 
This technique resulted in slightly higher scoring and classification accuracy than the unsupervised method described here. An unsupervised variant of their algorithm, which relied on the methods described in this article to provide training data to the conditional random field, yielded about half of the scoring gains and nearly all of the classification gains of what we report here. A hybrid method that used the methods in this article to derive features was the best performing system in that paper. Hence the methods described here are important components of that approach. We also note, however, that the supervised classifier-based approach to scoring retellings requires a significant amount of hand-labeled training data, thus rendering the technique impractical for application to a new narrative or to any picture description task. The importance of this distinction will become clear in Section 8, in which the approach outlined here is applied to a new data set lacking an existing scoring mechanism or a linguistic reference against which the responses can be scored. In this article, we will be discussing the application of our methods to manually generated transcripts of retellings and picture descriptions produced by adults with and without neurodegenerative disorders. We note, however, that the same techniques have been applied to narratives transcribed using ASR output (Lehr et al. 2012, 2013) with little degradation in accuracy, given sufficient adaptation of the acoustic and language models to the WLM retelling domain. In addition, we have applied alignment-based scoring to the narratives of children with neurodevelopmental disorders, including autism and language impairment (Prud'hommeaux and Rouhizadeh 2012), with similarly strong diagnostic classification accuracy, further demonstrating the applicability of these methods to a variety of input formats, elicitation techniques, and diagnostic goals.

3 data :The participants for this study were drawn from an ongoing study of brain aging at the Layton Aging and Alzheimer's Disease Center at the Oregon Health and Science University. Seventy-two of these participants had received a diagnosis of MCI, and 163 individuals served as typically aging controls. Demographic information about the experimental participants is shown in Table 2. There were no significant differences in age and years of education between the two groups. The Layton Center data included retellings for individuals who were not eligible for the present study because of their age or diagnosis. Transcriptions of 48 retellings produced by these ineligible participants were used to train and tune the word alignment model but were not used to evaluate the word alignment, scoring, or classification accuracy. We diagnose MCI using the Clinical Dementia Rating (CDR) scale (Morris 1993), following earlier work on MCI (Petersen et al. 1999; Morris et al. 2001), as well as the work of Shankle et al. (2005) and Roark et al. (2011), who have previously attempted diagnostic classification using neuropsychological instrument subtest responses. The CDR is a numerical dementia staging scale that indicates the presence of dementia and its level of severity. The CDR score is derived from measures of cognitive function in six domains: Memory; Orientation; Judgment and Problem Solving; Community Affairs; Home and Hobbies; and Personal Care. These measures are determined during an extensive semi-structured interview with the patient and a close family member or caregiver.
A CDR of 0 indicates the absence of dementia, and a CDR of 0.5 corresponds to a diagnosis of MCI (Ritchie and Touchon 2000). This measure has high expert interrater reliability (Morris 1993) and is assigned without any information derived from the WLM subtest. The WLM test, discussed in detail in Section 2.2, is a subtest of the Wechsler Memory Scale (Wechsler 1997), a neuropsychological instrument used to evaluate memory function in adults. Under standard administration of the WLM, the examiner reads a brief narrative to the participant, excerpts of which are shown in Figure 1. The participant then retells the narrative to the examiner twice: once immediately upon hearing the narrative and a second time after 20 to 30 minutes. Two retellings from one of the participants in our study are shown in Figures 2 and 3. (There are currently two narrative retelling subtests that can be administered as part of the Wechsler Memory Scale, but the Anna Thompson narrative used in the present study is the more widely used and has appeared in every version of the Wechsler Memory Scale with only minor modifications since the instrument was first released 70 years ago.) Following the published scoring guidelines, the examiner scores the participant's response by counting how many of the 25 story elements are recalled in the retelling, without regard to their ordering or relative importance in the story. We refer to this as the summary score. The boundaries between story elements are indicated with slashes in Figure 1. The retelling in Figure 2, produced by a participant without MCI, received a summary score of 12 for the 12 story elements recalled: Anna, Boston, employed, as a cook, and robbed of, she had four, small children, reported, station, touched by the woman's story, took up a collection, and for her. The retelling in Figure 3, produced by the same participant after receiving a diagnosis of MCI two years later, earned a summary score of 5 for the 5 elements recalled: robbed, children, had not eaten, touched by the woman's story, and took up a collection. Note that some of the story elements in these retellings were not recalled verbatim. The scoresheet provided with the exam indicates the lexical substitutions and degree of paraphrasing that are permitted, such as Ann or Annie for Anna, or any indication that the story evoked sympathy for touched by the woman's story. Although the scoring guidelines have an air of arbitrariness in that paraphrasing is only sometimes permitted, they do allow the test to be scored with high inter-rater reliability (Mitchell 1987). Recall that each participant produces two retellings for the WLM: an immediate retelling and a delayed retelling. Each participant's two retellings were transcribed at the utterance level. The transcripts were downcased, and all pause-fillers, incomplete words, and punctuation were removed. The transcribed retellings were scored manually according to the published scoring guidelines, as described earlier in this section.

4 diagnostic classification framework :The goal of the work presented here is to demonstrate the utility of a variety of features derived from the WLM retellings for diagnostic classification of individuals with MCI. To perform this classification, we use LibSVM (Chang and Lin 2011), as implemented within the Waikato Environment for Knowledge Analysis (Weka) API (Hall et al. 2009), to train SVM classifiers, using a radial basis function kernel and default parameter settings.
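The classification setup just described can be sketched briefly. The sketch below is not the authors' exact pipeline (they use LibSVM through the Weka API), but scikit-learn's SVC class wraps the same underlying LIBSVM library; the feature matrix X and label vector y are hypothetical stand-ins for the participants' feature vectors and diagnoses.

```python
# A minimal sketch of the classification setup, assuming scikit-learn.
# SVC wraps the same LIBSVM library that the article uses via Weka;
# X and y are hypothetical placeholders, filled with random data here.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(235, 50))      # e.g., 50 element-level scores per participant
y = rng.integers(0, 2, size=235)    # 1 = MCI, 0 = typically aging control

clf = SVC(kernel="rbf")             # RBF kernel with default parameters, as in the text
clf.fit(X, y)
scores = clf.decision_function(X)   # continuous scores, usable for the ROC analysis below
```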
We evaluate classification via receiver operating characteristic (ROC) curves, which have long been widely used to evaluate diagnostic tests (Zweig and Campbell 1993; Faraggi and Reiser 2002; Fan, Upadhye, and Worster 2006) and are also increasingly used in machine learning to evaluate classifiers in ranking scenarios (Cortes, Mohri, and Rastogi 2007; Ridgway et al. 2014). Analysis of ROC curves allows for classifier evaluation without selecting a specific, potentially arbitrary, operating point. To use standard clinical terminology, ROC curves track the tradeoff between sensitivity and specificity. Sensitivity (true positive rate) is what is commonly called recall in computational linguistics and related fields—that is, the percentage of items in the positive class that were correctly classified as positives. Specificity (true negative rate) is the percentage of items in the negative class that were correctly classified as negatives, which is equal to one minus the false positive rate. If the threshold is set so that nothing scores above threshold, the sensitivity (true positive rate, recall) is 0.0 and the specificity (true negative rate) is 1.0. If the threshold is set so that everything scores above threshold, sensitivity is 1.0 and specificity is 0.0. As we sweep across intervening threshold settings, the ROC curve plots sensitivity versus one minus specificity—true positive rate versus false positive rate—providing insight into the sensitivity/specificity tradeoff at all possible operating points. Each point (tp, fp) in the curve has the true positive rate as the first dimension and the false positive rate as the second dimension. Hence each curve starts at the origin (0, 0), the point corresponding to a threshold where nothing scores above threshold, and ends at (1, 1), the point where everything scores above threshold. ROC curves can be characterized by the area underneath them ("area under curve" or AUC). A perfect classifier, with all positive items ranked above all negative items, has an ROC curve that starts at point (0, 0), goes straight up to (1, 0)—the point where the true positive rate is 1.0 and the false positive rate is 0.0—before continuing straight over to the final point (1, 1). The area under this curve is 1.0; hence a perfect classifier has an AUC of 1.0. A random classifier, whose ROC curve is a straight diagonal line from the origin to (1, 1), has an AUC of 0.5. The AUC is equivalent to the probability that a randomly chosen positive example is ranked higher than a randomly chosen negative example, and is, in fact, equivalent to the Wilcoxon-Mann-Whitney statistic (Hanley and McNeil 1982). This statistic allows for classifier comparison without the need to pre-specify arbitrary thresholds. For tasks like clinical screening, different tradeoffs between sensitivity and specificity may apply, depending on the scenario. See Fan, Upadhye, and Worster (2006) for a useful discussion of the clinical use of ROC curves and the AUC score. In that paper, the authors note that there are multiple scales for interpreting the value of AUC, but that a rule of thumb is that AUC ≤ 0.75 is generally not clinically useful. For the present article, however, AUC mainly provides us the means for evaluating the relative quality of different classifiers.
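The threshold sweep described above can be made concrete with a short sketch in plain Python; the function name and list-based inputs are ours, not part of the original work. Each threshold setting contributes one point, and the sweep traces the curve from (0, 0), where nothing scores above threshold, to (1, 1), where everything does:

```python
def roc_points(scores, labels):
    """Sweep a decision threshold over classifier scores and record the
    (true positive rate, false positive rate) point at each setting."""
    n_pos = sum(labels)              # positive class, e.g., participants with MCI
    n_neg = len(labels) - n_pos      # negative class, typically aging controls
    points = []
    # From a threshold above every score (nothing positive, point (0, 0))
    # down to the minimum score (everything positive, point (1, 1)).
    for t in [float("inf")] + sorted(set(scores), reverse=True):
        tp = sum(1 for s, l in zip(scores, labels) if s >= t and l == 1)
        fp = sum(1 for s, l in zip(scores, labels) if s >= t and l == 0)
        points.append((tp / n_pos, fp / n_neg))
    return points
```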
One key issue for this sort of analysis is the estimation of the AUC for a particular classifier. Leave-pair-out cross-validation—proposed by Cortes, Mohri, and Rastogi (2007) and extensively validated by Pahikkala et al. (2008) and Airola et al. (2011)—is a method for providing an unbiased estimate of the AUC, and the one we use in this article. In the leave-pair-out technique, every pairing between a negative example (i.e., a participant without MCI) and a positive example (i.e., a participant with MCI) is tested using a classifier trained on all of the remaining examples. The results of each positive/negative pair can then be used to calculate the Wilcoxon-Mann-Whitney statistic as follows. Let s(e) be the score of some example e; let P be the set of positive examples and N the set of negative examples; and let [s(p) > s(n)] be 1 if true and 0 if false. Then:

$$\mathrm{AUC}(s, P, N) = \frac{1}{|P|\,|N|}\sum_{p \in P}\sum_{n \in N} [s(p) > s(n)] \qquad (1)$$

Although this method is compute-intensive, it does provide an unbiased estimate of the AUC, whereas other cross-validation setups lead to biased estimates. Another benefit of using the AUC is that its standard deviation can be calculated as follows, where AUC is abbreviated as A to improve readability:

$$\sigma_A^2 = \frac{A(1-A) + (|P|-1)\left(\frac{A}{2-A} - A^2\right) + (|N|-1)\left(\frac{2A^2}{1+A} - A^2\right)}{|P|\,|N|} \qquad (2)$$
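Equations (1) and (2) translate directly into code. In the sketch below (our names, plain Python lists), each positive/negative score would, under leave-pair-out cross-validation, come from a classifier trained with that particular pair held out:

```python
import math
from itertools import product

def auc_wmw(pos_scores, neg_scores):
    """Equation (1): Wilcoxon-Mann-Whitney estimate of the AUC. Under
    leave-pair-out cross-validation, each compared pair of scores comes
    from a classifier trained with that pair held out."""
    wins = sum(1 for sp, sn in product(pos_scores, neg_scores) if sp > sn)
    return wins / (len(pos_scores) * len(neg_scores))

def auc_std(a, n_pos, n_neg):
    """Equation (2): standard deviation of the AUC (abbreviated A; here a)."""
    q1 = a / (2 - a) - a * a
    q2 = 2 * a * a / (1 + a) - a * a
    var = (a * (1 - a) + (n_pos - 1) * q1 + (n_neg - 1) * q2) / (n_pos * n_neg)
    return math.sqrt(var)
```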
Previous work has shown that the WLM summary scores assigned during standard administration of the WLM, particularly in combination with other tests of verbal fluency and memory, are sensitive to the presence of MCI and other dementias (Storandt and Hill 1989; Petersen et al. 1999; Schmitt et al. 2000; Wang and Zhou 2002; Nordlund et al. 2005; Bennett et al. 2006; Price et al. 2009). We note, however, that the WLM test alone is not typically used as a diagnostic test. One of the goals of this work is to explore the utility of the standard WLM summary scores for diagnostic classification. A more ambitious goal is to demonstrate that using smaller units of information derived from story elements, rather than gross summary-level scores, can greatly improve diagnostic accuracy. Finally, we will show that using element-level scores automatically extracted from word alignments can achieve diagnostic classification accuracy comparable to that achieved using manually assigned scores. We therefore compare the accuracy, measured in terms of AUC, of SVM classifiers trained on both summary-level and element-level WLM scores extracted from word alignments to the accuracy of classifiers built using a variety of alternative feature sets, both manually and automatically derived, shown in Table 3. First, we consider the accuracy of classifiers using the expert-assigned WLM scores as features. For each of the 235 experimental participants, we generate two summary scores: one for the immediate retelling and one for the delayed retelling. The summary score ranges from 0, indicating that no elements were recalled, to 25, indicating that all elements were recalled. Previous work using manually assigned scores as features indicates that certain elements are more powerful in their ability to predict the presence of MCI (Prud'hommeaux 2012). In addition to the summary score, we therefore also provide the SVM with a vector of 50 story element-level scores: for each of the 25 elements in each of the two retellings per patient, there is a vector element with the value 0 if the element was not recalled, or 1 if the element was recalled. Classification accuracy for identifying participants with MCI using these two manually derived feature sets is shown in Table 3. We then present in Table 3 the classification accuracy of several summary-level features derived automatically from the WLM retellings, using standard NLP techniques for evaluating the similarity of two texts. We note that none of these features makes reference to the published WLM scoring guidelines or to the predefined element boundaries. Each of these feature sets contains two scores ranging between 0 and 1 for each participant, one for each of the two retellings: (1) cosine similarity between a retelling and the source narrative measured using LSA, proposed by Dunn et al. (2002) and calculated using the University of Colorado's online LSA interface (available at http://lsa.colorado.edu/) with the 300-factor ninth-grade reading level topic space; (2) unigram overlap precision of a retelling relative to the source, proposed by Hakkani-Tur, Vergyri, and Tur (2010); (3) BLEU, the n-gram overlap metric commonly used to evaluate the quality of machine translation output (Papineni et al. 2002); and (4) the F-measure for ROUGE-SU4, the n-gram overlap metric commonly used to evaluate automatic summarization output (Lin 2004). The remaining two automatically derived features are a set of binary scores corresponding to the exact match via grep of each of the open-class unigrams in the source narrative, and a summary score thereof. Finally, in order to compare the WLM with another standard psychometric test, we also show the accuracy of a classifier trained only on the expert-assigned manual scores for the MMSE (Folstein, Folstein, and McHugh 1975), a clinician-administered 30-point questionnaire that measures a patient's degree of cognitive impairment. Although it is widely used to screen for dementias such as Alzheimer's disease, the MMSE is reported not to be particularly sensitive to MCI (Morris et al. 2001; Ravaglia et al. 2005; Hoops et al. 2009). The MMSE is entirely independent of the WLM and, though brief (5–10 minutes), requires more time to administer than the WLM. In Table 3, we see that the WLM-based features yield higher accuracy than the MMSE, which is notable given the role that the MMSE plays in dementia screening. In addition, although all of the automatically derived feature sets yield higher classification accuracy than the MMSE, the manually derived WLM element-level scores are by far the most accurate feature set for diagnostic classification. Summary-level statistics, whether derived manually using established scoring mechanisms or automatically using a variety of text-similarity metrics used in the NLP community, seem not to provide sufficient power to distinguish the two diagnostic groups. In the next several sections, we describe a method for automatically and accurately extracting the identities of the recalled story elements from WLM retellings via word alignment, in order to try to achieve classification accuracy comparable to that of the manually assigned WLM story elements and higher than that of the other automatic scoring methods.
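Two of the simpler summary-level features can be sketched as follows. This is an approximation rather than the exact feature extraction (the LSA feature computes the cosine in a 300-factor latent topic space rather than over raw bag-of-words vectors, and the function names are ours):

```python
import math
from collections import Counter

def bow_cosine(text_a, text_b):
    """Bag-of-words cosine similarity; the article's LSA feature instead
    takes the cosine in a 300-factor latent semantic topic space."""
    va, vb = Counter(text_a.split()), Counter(text_b.split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

def unigram_precision(retelling, source):
    """Fraction of retelling tokens that also occur in the source narrative,
    in the spirit of the overlap feature of Hakkani-Tur, Vergyri, and Tur (2010)."""
    src_vocab = set(source.split())
    tokens = retelling.split()
    return sum(1 for w in tokens if w in src_vocab) / len(tokens) if tokens else 0.0
```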
5 wlm scoring via alignment :The approach presented here for automatic scoring of the WLM subtest relies on word alignments of the type used in machine translation for building phrase-based translation models. The motivation for using word alignment is the inherent similarity between narrative retelling and translation. In translation, a sentence in one language is converted into another language; the translation will have different words presented in a different order, but the meaning of the original sentence will be preserved. In narrative retelling, the source narrative is "translated" into the idiolect of the individual retelling the story. Again, the retelling will have different words, possibly presented in a different order, but at least some of the meaning will be preserved. We will show that although the algorithm for extracting scores from the alignments is simple, the process of getting high-quality word alignments from the corpora of narrative retellings is challenging. Although researchers in other NLP tasks that rely on alignments, such as textual entailment and summarization, sometimes eschew the sort of word-level alignments that are used in machine translation, we have no a priori reason to believe that this sort of alignment will be inadequate for the purposes of scoring narrative retellings. In addition, unlike many of the alignment algorithms proposed for tasks such as textual entailment, the methods for unsupervised word alignment used in machine translation require no external resources or hand-labeled data, making it simple to adapt our automated scoring techniques to new scenarios. We will show that the word alignment algorithms used in machine translation, when modified in particular ways, provide sufficient information for highly accurate scoring of narrative retellings and subsequent diagnostic classification of the individuals generating those retellings. Figure 4 shows a visual grid representation of a manually generated word alignment between the source narrative shown in Figure 1, on the vertical axis, and the example WLM retelling in Figure 2, on the horizontal axis. Table 4 shows the word-index-to-word-index alignment, in which the first index of each sentence is 0 and in which null alignments are not shown. When creating these manual alignments, the labelers assigned the "possible" denotation under one of these two conditions: (1) when the alignment was ambiguous, as outlined in Och and Ney (2003); and (2) when a particular word in the retelling was a logical alignment to a word in the source narrative, but it would not have been counted as a permissible substitution under the published scoring guidelines. For this reason, we see that Taylor and sixty-seven are considered to be possible alignments: although they are logical alignments, they are not permissible substitutions according to the published scoring guidelines. Note that the word dollars is considered to be only a possible alignment as well, since the element fifty-six dollars is not correctly recalled in this retelling under the standard scoring guidelines. In Figure 4, sure alignments are marked in black and possible alignments are marked in gray. In Figure 5, sure alignments are marked with S and possible alignments are marked with P. Manually generated alignments like this one are the gold standard against which any automatically generated alignments can be compared to determine the accuracy of the alignment.
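A gold alignment of this kind has a natural representation as two sets of (source word index, retelling word index) pairs, one for sure links and one for possible links. The indices below are illustrative only, not the actual contents of Table 4:

```python
# Illustrative sketch of a manual alignment; the index pairs are made up.
# Each pair is (source_word_index, retelling_word_index), and null
# alignments are simply absent from the sets.
sure = {(0, 0), (1, 2), (4, 3)}    # S links: unambiguous, scoreable alignments
possible = {(7, 5), (12, 9)}       # P links: ambiguous or non-scoreable alignments
gold = sure | possible             # by convention, sure links are also possible
```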
From an accurate word-to-word alignment, the identities of the story elements used in a retelling can be accurately extracted, and from that set of story elements, the score that would be assigned under the standard scoring procedure can be calculated. As described earlier, the published scoring guidelines for the WLM specify the source words that compose each story element. Figure 5 displays the source narrative with the element IDs (A–Y) and word IDs (1–65) explicitly labeled. Element Q, for instance, consists of the words 39 and 40, small children. Using this information, we can determine which story elements were used in a retelling from the alignments as follows: for each word in the source narrative, if that word is aligned to a word in the retelling, the story element that it is associated with is considered to be recalled. For instance, if there is an alignment between the retelling word sympathetic and the source word touched, the story element touched by the woman's story would be counted as correctly recalled. Note that in the WLM, every word in the source narrative is part of one of the story elements. Thus, when we convert alignments to scores in the way just described, any alignment can generate a story element. This is true even for an alignment between function words such as the and of, which would be unlikely individually to indicate that a story element had been recalled. To avoid such scoring errors, we disregard any word alignment pair containing a function word from the source narrative. The two exceptions to this rule are the final two words, for her, which are not content words but together make a single story element. Recall that in the manually derived word alignments, certain alignment pairs were marked as possible if the word in the retelling was logically equivalent to the word in the source but was not a permissible substitute according to the published scoring guidelines. When extracting scores from a manual alignment, only sure alignments are considered. This enables us to extract scores from a manual word alignment with 100% accuracy. The possible manual alignments are used only for calculating the alignment error rate (AER) of an automatic word alignment model. From the list of story elements extracted in this way, the summary score reported under standard scoring guidelines can be determined simply by counting the number of story elements extracted. Table 5 shows the story elements extracted from the manual word alignment in Table 4.
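The scoring procedure just described reduces to a few lines of code. The data structures below (an alignment as a set of index pairs, a map from source word index to element ID, and a set of function-word indices) are hypothetical, but they follow the text, including the for her exception:

```python
def extract_elements(alignment, element_of, function_word_idxs, exceptions=frozenset()):
    """Map a word alignment to the set of recalled story elements.
    alignment: set of (source_index, retelling_index) pairs (sure links only);
    element_of: dict from source word index to story element ID (A-Y);
    function_word_idxs: source indices ignored when scoring, except those in
    `exceptions` (e.g., the indices of the final 'for her' element)."""
    recalled = set()
    for src_idx, _ in alignment:
        if src_idx in function_word_idxs and src_idx not in exceptions:
            continue                    # function-word alignments never score
        recalled.add(element_of[src_idx])
    return recalled

# The summary score is then simply the number of recalled elements:
# summary_score = len(extract_elements(alignment, element_of, fw_idxs, {64, 65}))
```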
The WLM immediate and delayed retellings for all of the 235 experimental participants, together with the 48 retellings from participants in the larger study who were not eligible for the present study, were transcribed at the word level. Partial words, punctuation, and pause-fillers were excluded from all transcriptions used for this study. The retellings were manually scored according to the published guidelines. In addition, we manually produced word-level alignments between each retelling and the source narrative. These manual alignments were used to evaluate word alignment quality and never to train the word alignment model. Word alignment for phrase-based machine translation typically takes as input a sentence-aligned parallel corpus, or bi-text, in which a sentence on one side of the corpus is a translation of the sentence in the same position on the other side of the corpus. Because we are interested in learning how to align words in the source narrative to words in the retellings, our primary parallel corpus must consist of source narrative text on one side and retelling text on the other. Because the retellings contain omissions, reorderings, and embellishments, we are obliged to consider the full text of the source narrative and of each retelling to be a "sentence" in the parallel corpus. We compiled three parallel corpora to be used for the word alignment experiments:

- Corpus 1: a 518-line source-to-retelling corpus, consisting of the source narrative paired with each of the two retellings from the 235 experimental participants, as well as the 48 retellings from ineligible individuals.
- Corpus 2: a 268,324-line pairwise retelling-to-retelling corpus, consisting of every possible pairwise combination of the 518 available retellings.
- Corpus 3: a 976-line word identity corpus, consisting of every word that appears in any retelling or in the source narrative, paired with itself.

The explicit parallel alignments of word identities that compose Corpus 3 are included in order to encourage the alignment of a word in a retelling to that same word in the source, if it exists. The word alignment techniques that we use are unsupervised. Other than the transcriptions themselves, no manually generated data are used to build the word alignment models. Therefore, as is the case with most experiments involving word alignment, we build a model for the data we wish to evaluate using that same data. We do, however, use the 48 retellings from the individuals who were not experimental participants as a development set for tuning the various parameters of our word alignment system, which are described below. We begin by building two word alignment models using the Berkeley aligner (Liang, Taskar, and Klein 2006), a state-of-the-art word alignment package that relies on IBM Models 1 and 2 (Brown et al. 1993) and an HMM. We chose the Berkeley aligner, rather than the more widely used Giza++ alignment package, for this task because its joint training and posterior decoding algorithms yield lower alignment error rates on most data sets (including the data set used here [Prud'hommeaux and Roark 2011]) and because it offers functionality for testing an existing model on new data and, more crucially, for outputting posterior probabilities. The smaller of our two Berkeley-generated models is trained on Corpus 1 (the source-to-retelling parallel corpus described earlier) and ten copies of Corpus 3 (the word identity corpus). The larger model is trained on Corpus 1, Corpus 2 (the pairwise retelling corpus), and 100 copies of Corpus 3. Both models are then tested on the 470 retellings from our 235 experimental participants. In addition, we use both models to align every retelling to every other retelling so that we will have all pairwise alignments available for use in the graph-based model presented in the next section. We note that the Berkeley aligner occasionally fails to return an alignment for a sentence pair, either because one of the sentences is too long or because the time required to perform the necessary calculations exceeds some maximum allotted time. In these cases, in order to generate alignments for all retellings and to build a complete graph that includes all retellings, we back off to the alignments and posteriors generated by IBM Model 1. The first two rows of Table 6 show the precision, recall, and alignment error rate (AER) (Och and Ney 2003) for these two Berkeley aligner models. We note that although the AER for the larger model is lower, the time required to train it is significantly longer.
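The alignment metrics reported in Table 6 follow the standard definitions of Och and Ney (2003). A sketch, with each alignment represented as a set of (source, retelling) index pairs and with the conventional assumption that the sure links are a subset of the possible links:

```python
def alignment_metrics(hyp, sure, possible):
    """Precision, recall, and alignment error rate (Och and Ney 2003).
    hyp: hypothesized links; sure: gold S links; possible: gold P links
    (taken here, as is conventional, to include the sure links)."""
    a, s = set(hyp), set(sure)
    p = set(possible) | s
    precision = len(a & p) / len(a) if a else 0.0
    recall = len(a & s) / len(s) if s else 0.0
    aer = 1.0 - (len(a & s) + len(a & p)) / (len(a) + len(s))
    return precision, recall, aer
```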
The alignments generated by the Berkeley aligner serve not only as a baseline for comparison of word alignment quality but also as a springboard for the novel graph-based method of alignment we will now discuss. Graph-based methods, in which paths or random walks are traced through an interconnected graph of nodes in order to learn more about the nodes themselves, have been used for NLP tasks in information extraction and retrieval, including Web-page ranking (PageRank; Page et al. 1999) and extractive summarization (LexRank; Erkan and Radev 2004; Otterbacher, Erkan, and Radev 2009). In the PageRank algorithm, the nodes of the graph are Web pages and the edges connecting the nodes are the hyperlinks leading from those pages to other pages. The nodes in the LexRank algorithm are sentences in a document and the edges are the similarity scores between those sentences. The number of times that a particular node is visited in a random walk reveals information about the importance of that node and its relationship to the other nodes. In many applications of random walks, the goal is to determine which node is the most central or has the highest prestige. In word alignment, however, the goal is to learn new relationships and strengthen existing relationships between words in a retelling and words in the source narrative. In the case of our graph-based method for word alignment, each node represents a word in one of the retellings or in the source narrative. The edges are the normalized posterior-weighted alignments that the Berkeley aligner proposes between each word and (1) words in the source narrative, and (2) words in the other retellings. We generate these edges by using an existing baseline alignment model to align every retelling to every other retelling and to the source narrative. The posterior probabilities produced by the baseline alignment model serve as the weights on the edges. At each step in the walk, the choice of the next destination node can be determined according to the strength of the outgoing edges, as measured by the posterior probability of that alignment. Starting at a word in one of the retellings, represented by a node in the graph, the algorithm can walk from that node either to another retelling word in the graph to which it is aligned or to a word in the source narrative to which it is aligned. At each step in the walk, there is an empirically derived probability, λ, that sets the likelihood of transitioning to another retelling word versus a word in the source narrative. This probability functions similarly to the damping factor used in PageRank and LexRank, although its purpose is quite different. Once the decision whether to walk to a retelling word or a source word has been made, the destination word itself is chosen according to the weights, which are the posterior probabilities assigned by the baseline alignment model. When the walk arrives at a source narrative word, that particular random walk ends, and the count for that source word as a possible alignment for the input retelling word is incremented by one. For each word in each retelling, we perform 1,000 of these random walks, thereby generating a distribution for each retelling word over all of the words in the source narrative. The new alignment for the word is the source word with the highest frequency in that distribution. Pseudocode for this algorithm is provided in Figure 6.
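Since Figure 6 is not reproduced here, the following sketch gives our reading of the walk: edge weights are the baseline aligner's posterior probabilities, λ is the probability of stepping to another retelling word rather than jumping to the source narrative, and a walk terminates as soon as it lands on a source word (which, as discussed below, may be NULL). The edge dictionaries are hypothetical structures built from the pairwise baseline alignments:

```python
import random
from collections import Counter

def walk_once(start, retell_edges, source_edges, lam=0.8):
    """One random walk from a retelling word to a source-narrative word.
    retell_edges[w] and source_edges[w] are lists of (neighbor, posterior)
    pairs proposed by the baseline aligner (hypothetical structures)."""
    node = start
    while True:
        if random.random() < lam and retell_edges.get(node):
            # Step to another retelling word, weighted by alignment posterior.
            words, weights = zip(*retell_edges[node])
            node = random.choices(words, weights=weights)[0]
        else:
            # Jump to a source word (possibly NULL) and end the walk.
            words, weights = zip(*source_edges[node])
            return random.choices(words, weights=weights)[0]

def align_word(word, retell_edges, source_edges, lam=0.8, n_walks=1000):
    """Run 1,000 walks and realign `word` to the most-visited source word."""
    endpoints = Counter(walk_once(word, retell_edges, source_edges, lam)
                        for _ in range(n_walks))
    return endpoints.most_common(1)[0][0]
```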
Consider the following excerpts of five of the retellings. In each excerpt, the word that should align to the source word touched is rendered in bold:

the police were so moved by the story that they took up a collection for her
the fellow was sympathetic and made a collection for her so that she can feed the children
the police were touched by their story so they took up a collection
the police were so impressed with her story they took up a collection
the police felt sorry for her and took up a collection

Figure 7 presents a small idealized subgraph of the pairwise alignments of these five retellings. The arrows represent the alignments proposed by the Berkeley aligner between the relevant words in the retellings and their alignment (or lack of alignment) to the word touched in the source narrative. Thin arrows indicate alignment edges in the graph between retelling words, and bold arrows indicate alignment edges between retelling words and words in the source narrative. Words in the retellings are rendered as nodes with a single outline, and words in the source are rendered as nodes with a double outline. We see that a number of these words were not aligned to the correct source word, touched. They are all, however, aligned to other retelling words that are in turn eventually aligned to the source word. Starting at any of the nodes in the graph, it is possible to walk from node to node and eventually reach the correct source word. Although sympathetic was not aligned to touched by the Berkeley aligner, its correct alignment can be recovered from the graph by following the path through other retelling words. After hundreds or thousands of random walks on the graph, evidence for the correct alignment will accumulate. The approach as described might seem most beneficial to a system in need of improvements to recall rather than precision. Our baseline systems, however, already favor recall over precision. For this reason we include the NULL word in the list of words in the source narrative. We note that most implementations of both IBM Model 1 and HMM-based alignment also model the probability of aligning to a hidden word, NULL. In word alignment for machine translation, alignment to NULL usually indicates that a word in one language has no equivalent in the other language because the two languages express the same idea or construction in a slightly different way. Romance languages, for instance, often use prepositions before infinitival complements (e.g., Italian cerco di ridere) when English does not (e.g., I try to laugh). In the alignment of narrative retellings, however, alignment to NULL often indicates that the word in question is part of an aside or a piece of information that was not expressed in the source narrative. Any retelling word that is not aligned to a source word by the baseline alignment system will implicitly be aligned to the hidden source word NULL, guaranteeing that every retelling word has at least one outgoing alignment edge and allowing us to model the likelihood of being unaligned. A word that was unaligned by the original system can remain unaligned. A word that should have been left unaligned but was mistakenly aligned to a source word by the original system can recover its correct (lack of) alignment by following an edge to another retelling word that was correctly left unaligned (i.e., aligned to NULL).
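A sketch of how the source-side edges of such a graph, including their NULL edges, could be assembled from the baseline aligner's output follows. The triple format and all names are assumptions for illustration; an analogous function would build the retelling-to-retelling edges.

```python
from collections import defaultdict

NULL = "<NULL>"  # hidden source word for unaligned retelling words

def build_source_edges(baseline_links, retelling_words, threshold=0.5):
    """Normalized source-side edges built from baseline posteriors.

    `baseline_links` is assumed to be an iterable of
    (retelling_word, source_word, posterior) triples; `retelling_words`
    is the set of retelling word nodes. Words left without any source
    edge after thresholding receive a NULL edge, so every node has at
    least one outgoing source edge and unalignedness can be modeled.
    """
    edges = defaultdict(list)
    for r_word, s_word, posterior in baseline_links:
        if posterior >= threshold:  # tuned edge-inclusion threshold
            edges[r_word].append((s_word, posterior))
    for r_word in retelling_words:
        if not edges[r_word]:
            edges[r_word] = [(NULL, 1.0)]
        else:
            # Normalize so each node's outgoing weights form a distribution.
            total = sum(p for _, p in edges[r_word])
            edges[r_word] = [(w, p / total) for w, p in edges[r_word]]
    return edges
```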
Figure 8 shows the graph in Figure 7 with the addition of the NULL node and the corresponding alignment edges to that node. This figure also includes two new retelling words, food and apple, and their respective alignment edges. Here we see that although the retelling word food was incorrectly aligned to the source word touched by the baseline system, its correct alignment to NULL can be recovered by traversing the edge to the retelling word apple and from there, the edge to the source word NULL. The optimal values of the following two parameters of the random walk must be determined: (1) the value of λ, the probability of walking to a retelling word node rather than a source word, and (2) the posterior probability threshold for including a particular edge in the graph. We optimize these parameters by testing the output of the graph-based approach on the development set of 48 retellings from the individuals who were not eligible for the study, discussed in Section 5.3. Recall that these additional retellings were included in the training data for the alignment model but were not included in the test set used to evaluate its performance. Tuning on this set of retellings therefore introduces no additional words, out-of-vocabulary words, or other information to the graph, while preventing overfitting. The posterior threshold is set to 0.5 in the Berkeley aligner’s default configuration, and we found that this value did indeed yield the lowest AER for the Berkeley aligner on our data. When building the graph using Berkeley alignments and posteriors, however, we can adjust the value of this threshold to optimize the AER of the alignments produced via random walks. Using the development set of 48 retellings, we determined that the AER is minimized when the value of λ is 0.8 and the alignment inclusion posterior threshold is 0.5. Recall the two baseline alignment models generated by the Berkeley aligner, described in Section 5.4: (1) the small Berkeley model, trained on Corpus 1 (the source-to-retelling corpus) and 10 instances of Corpus 3 (the word identity corpus), and (2) the large Berkeley model, trained on Corpus 1, Corpus 2 (the full pairwise retelling-to-retelling corpus), and 100 instances of Corpus 3. Using these models, we generate full retelling-to-retelling alignments, on which we can then build two graph-based alignment models: the small graph-based model and the large graph-based model. The alignments produced by each of the four models were evaluated against the manual gold alignments for the 235 experimental participants. Table 6 presents the precision, recall, and AER for the alignments of the experimental participants. Not surprisingly, the larger models yield lower error rates than the smaller models. More interestingly, each graph-based model outperforms the Berkeley model of the corresponding size by a large margin. The performance of the small graph-based model is particularly remarkable because it yields an AER superior to that of the large Berkeley model while requiring significantly fewer computing resources. Each of the graph-based models generated the full set of alignments in only a few minutes, whereas the large Berkeley model required 14 hours of training.","6 scoring evaluation :The element-level scores induced, as described in Section 5.2, from the four word alignments for all 235 experimental participants were evaluated against the manual per-element scores. We report the precision, recall, and F-measure for all four alignment models in Table 7. In addition, we report Cohen’s kappa as a measure of reliability between our automated scores and the manually assigned scores.
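Both evaluation measures used here have standard definitions: AER as given by Och and Ney (2003) over sure and possible gold links, and Cohen's kappa for chance-corrected agreement. A small sketch follows; the scikit-learn call and the toy score vectors are illustrative, not the authors' tooling.

```python
from sklearn.metrics import cohen_kappa_score

def precision_recall_aer(hyp, sure, possible):
    """Och and Ney (2003): `hyp` is the set of hypothesized alignment links,
    `sure` the sure gold links, and `possible` the possible gold links
    (a superset of `sure`); links are, e.g., (retelling_pos, source_pos)."""
    precision = len(hyp & possible) / len(hyp)
    recall = len(hyp & sure) / len(sure)
    aer = 1.0 - (len(hyp & sure) + len(hyp & possible)) / (len(hyp) + len(sure))
    return precision, recall, aer

# Toy example of the reliability measure reported in Table 7: agreement
# between automated and manual binary per-element scores.
automated = [1, 0, 1, 1, 0, 1]
manual = [1, 0, 1, 0, 0, 1]
print(cohen_kappa_score(automated, manual))
```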
We see that as AER improves, scoring accuracy also improves, with the large graph-based model outperforming all other models in terms of precision, F-measure, and inter-rater reliability. The scoring accuracy levels reported here are comparable to the levels of inter-rater agreement typically reported for the WLM, and reliability between our automated scores and the manual scores, as measured by Cohen’s kappa, is well within the ranges reported in the literature (Johnson, Storandt, and Balota 2003). As will be shown in the following section, scoring accuracy is important for achieving high classification accuracy of MCI.","7 diagnostic classification :As discussed in Section 2, poor performance on the WLM test is associated with MCI. We now use the scores we have extracted from the word alignments as features with an SVM to perform diagnostic classification for distinguishing participants with MCI from those without, as described in Section 4.1. Table 8 shows the classification results for the scores derived from the four alignment models along with the classification results using the examiner-assigned manual scores, the MMSE, and the four alternative automated scoring approaches described in Section 4.2. It appears that, in all cases, the per-element scores are more effective than the summary scores in classifying the two diagnostic groups. In addition, we see that our automated scores have classificatory power comparable to that of the manual gold scores, and that as scoring accuracy increases from the small Berkeley model to the bigger and graph-based models, classification accuracy improves. This suggests both that accurate scores are crucial for accurate classification and that pursuing even further improvements in word alignment is likely to result in improved diagnostic differentiation. We note that although the large Berkeley model achieved the highest classification accuracy of the automated methods, this very slight margin of difference may not justify its significantly greater computational requirements. In addition to using summary scores and element-level scores as features for the story-element-based models, we also perform feature selection over both sets of features using the chi-square statistic. Feature selection is performed separately on each training set for each fold in the cross-validation to avoid introducing bias from the test examples (see the sketch below). We train and test the SVM using the top n story element features, from n = 1 to n = 50. We report here the accuracy for the top seven story elements (n = 7), which yielded the highest AUC measure. We note that over all of the folds, only 8 of the 50 features ever appeared among the seven most informative. In all cases, the per-element scores are more effective than the summary scores in classifying the two diagnostic groups, and performing feature selection results in improved classification accuracy. All of the element-level feature sets automatically extracted from alignments outperform the MMSE and all of the alternative automatic scoring procedures, which suggests that the extra complexity required to extract element-level features is well worth the time and effort.
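The per-fold feature selection described above can be reproduced in outline as follows. This sketch uses scikit-learn, which is our substitution; the article's own experiments used different toolkits (the references list LIBSVM and WEKA), and the data here are random placeholders. Wrapping the selector and the classifier in a single pipeline ensures the chi-square ranking is recomputed on each training fold, so no information leaks from held-out folds.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((235, 50))    # placeholder: 235 participants x 50 story-element scores
y = rng.integers(0, 2, 235)  # placeholder MCI labels

clf = Pipeline([
    ("select", SelectKBest(chi2, k=7)),  # top seven story elements, as in the text
    ("svm", SVC(kernel="linear")),
])
print(cross_val_score(clf, X, y, cv=10, scoring="roc_auc").mean())
```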
We note that the final classification results for all four alignment models are not drastically different from one another, despite the large reductions in word alignment error rate and improvements in scoring accuracy observed in the larger models and graph-based models. This seeming disconnect between word alignment accuracy and downstream application performance has also been observed in the machine translation literature, where reductions in AER do not necessarily lead to meaningful increases in BLEU, the widely accepted measure of machine translation quality (Ayan and Dorr 2006; Lopez and Resnik 2006; Fraser and Marcu 2007). Our results, however, show that a feature set consisting of manually assigned WLM scores yields the highest classification accuracy of any of the feature sets evaluated here. As discussed in Section 5.2, our WLM score extraction method is designed such that element-level scores can be extracted with perfect accuracy from a perfect word alignment. Thus, the goal of seeking perfect or near-perfect word alignment accuracy is worthwhile because it will necessarily result in perfect or near-perfect scoring accuracy, which in turn is likely to yield classification accuracy approaching that of manually assigned scores.","8 application to task with non-linguistic reference :As we discussed earlier, one of the advantages of using an unsupervised method of scoring is the resulting generalizability to new data sets, particularly those generated from a non-linguistic stimulus. The Boston Diagnostic Aphasia Examination (BDAE) (Goodglass and Kaplan 1972), an instrument widely used to diagnose aphasia in adults, includes one such task, popularly known as the cookie theft picture description task. In this test, the person views a drawing of a lively scene in a family’s kitchen and must tell the examiner about all of the actions they see in the picture. The picture is reproduced in Figure 9. Describing visually presented material is quite different from a task such as the WLM, in which language comprehension and memory play a crucial role. Nevertheless, the processing and language production demands of a picture description task may lead to differences in performance in groups with certain cognitive and language problems. In fact, it is widely reported that the picture descriptions of seniors with dementia of the Alzheimer’s type differ from those of typically aging seniors in terms of information content (Hier, Hagenlocker, and Shindler 1985; Gilesa, Patterson, and Hodge 1996). Interestingly, this reduction in information is not necessarily accompanied by a reduction in the amount of language produced. Rather, it seems that seniors with Alzheimer’s dementia tend to include redundant information, repetitions, intrusions, and revisions that result in language samples of length comparable to that of typically aging seniors. TalkBank (MacWhinney 2007), the online database of audio and transcribed speech, has made available the DementiaBank corpus of descriptions of the cookie theft picture by hundreds of individuals, some of whom have one of a number of types of dementia, including MCI, vascular dementia, possible Alzheimer’s disease, and probable Alzheimer’s disease. From this corpus, we selected a subset of individuals without dementia and a subset with probable Alzheimer’s disease. We limit the set of descriptions to those with more than 25 but fewer than 100 words, yielding 130 descriptions for each diagnostic group. There was no significant difference in description word count between the two diagnostic groups. The first task was to generate a source description to which all other narratives should be aligned.
Working under the assumption that the control participants would produce good descriptions, we calculated the BLEU score of every pair of descriptions from the control group. The description with the highest average pairwise BLEU score was selected as the source description. After confirming that this description did in fact contain all of the action portrayed in the picture, we removed all extraneous conversational asides from the description in order to ensure that it contained all and only information about the picture. The selected source description is as follows: The boy is getting cookies out of the cookie jar. And the stool is just about to fall over. The little girl is reaching up for a cookie. And the mother is drying dishes. The water is running into the sink and the sink is running over onto the floor. And that little girl is laughing. We then built an alignment model on the full pairwise description parallel corpus (260² = 67,600 sentences) and a word identity corpus consisting of each word in each description reproduced 100 times. Using this trained model, which corresponds to the large Berkeley model that achieved the highest classification accuracy for the WLM data, we then aligned every description to the artificial source description. We also built a graph-based alignment model using these alignments and the parameter settings that maximized word alignment accuracy in the WLM data. Because the artificial source description is not a true linguistic reference for this task, we did not produce manual word alignments against which the alignment quality could be evaluated and against which the parameters could be tuned. Instead, we evaluated only the downstream application of diagnostic classification. The method for scoring the WLM relies directly on the predetermined list of story elements, whereas the cookie theft picture description administration instructions do not include an explicit set of items that must be described. Recall that the automated scoring method we propose uses only the open-class or content words in the source narrative. In order to generate scores for the descriptions, we propose a scoring technique that considers each content word in the source description to be its own story element. Any word in a retelling that aligns to one of the content words in the source narrative is considered to be a match for that content word element. This results in a large number of elements, but it allows the scoring method to be easily adapted to other narrative production scenarios that similarly do not have explicit scoring guidelines (see the sketch below).
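Both steps just described, choosing the source description by average pairwise BLEU and scoring each description against the source's content words, can be sketched as follows. NLTK's sentence-level BLEU is our substitution (the article does not name an implementation), descriptions are assumed to be token lists, and all function and variable names are illustrative.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def pick_source_description(control_descriptions):
    """Control description (a list of tokens) with the highest average
    pairwise BLEU score against all other control descriptions."""
    smooth = SmoothingFunction().method1
    def avg_bleu(candidate):
        others = [d for d in control_descriptions if d is not candidate]
        return sum(sentence_bleu([ref], candidate, smoothing_function=smooth)
                   for ref in others) / len(others)
    return max(control_descriptions, key=avg_bleu)

def content_word_scores(source_tokens, alignment_links, is_content_word):
    """One binary score per content-word element of the source description.

    `alignment_links` is the set of (description_pos, source_pos) links the
    aligner produced for one description; `is_content_word` is a predicate
    marking open-class words (e.g., a stop-list or POS filter; the article
    does not specify the mechanism).
    """
    aligned = {j for (_, j) in alignment_links}
    return [1 if i in aligned else 0
            for i, w in enumerate(source_tokens) if is_content_word(w)]
```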
Using these scores as features, we again used an SVM to classify the two diagnostic groups, typically aging and probable Alzheimer’s disease, and evaluated the classifier using leave-pair-out validation. Table 9 shows the classification results using the content word scoring features produced using the Berkeley aligner alignments and the graph-based alignments. These can be compared to classification results using the summary similarity metrics BLEU and unigram precision. We see that using word-level features, regardless of which alignment model they are extracted from, results in significantly higher classification accuracy than both the simple similarity metrics and the summary scores. The alignment-based scoring approach yields features with remarkably high classification accuracy given the somewhat ad hoc selection of the source narrative from the set of control retellings. These results demonstrate the flexibility and utility of the alignment-based approach to scoring narratives. Not only can it be adapted to other narrative retelling instruments, but it can relatively trivially be adapted to instruments that use non-linguistic stimuli for elicitation. All that is needed to build an alignment model is a sufficiently large collection of retellings of the same narrative or descriptions of the same picture. Procuring such a collection of descriptions or retellings can be done easily outside a clinical setting using a platform such as Amazon’s Mechanical Turk. No hand-labeled data, outside lexical resources, prior knowledge of the content of the story, or existing scoring guidelines are required.","Among the more recent applications for natural language processing algorithms has been the analysis of spoken language data for diagnostic and remedial purposes, fueled by the demand for simple, objective, and unobtrusive screening tools for neurological disorders such as dementia. The automated analysis of narrative retellings in particular shows potential as a component of such a screening tool since the ability to produce accurate and meaningful narratives is noticeably impaired in individuals with dementia and its frequent precursor, mild cognitive impairment, as well as other neurodegenerative and neurodevelopmental disorders. In this article, we present a method for extracting narrative recall scores automatically and highly accurately from a word-level alignment between a retelling and the source narrative. We propose improvements to existing machine translation-based systems for word alignment, including a novel method of word alignment relying on random walks on a graph that achieves alignment accuracy superior to that of standard expectation maximization-based techniques for word alignment in a fraction of the time required for expectation maximization. In addition, the narrative recall score features extracted from these high-quality word alignments yield diagnostic classification accuracy comparable to that achieved using manually assigned scores and significantly higher than that achieved with summary-level text similarity metrics used in other areas of NLP. These methods can be trivially adapted to spontaneous language samples elicited with non-linguistic stimuli, thereby demonstrating the flexibility and generalizability of these methods.","[{""affiliations"": [], ""name"": ""Emily Prud\u2019hommeaux""}, {""affiliations"": [], ""name"": ""Brian Roark""}]",SP:9fff4c3673ff8364cd50b46aa7b757952eefa5de,"[{""authors"": [""Airola"", ""Antti"", ""Tapio Pahikkala"", ""Willem Waegeman"", ""Bernard De Baets"", ""Tapio Salakoski""], ""title"": ""An experimental comparison of cross-validation techniques for estimating the area under the ROC"", ""year"": 2011}, {""authors"": [""Artero"", ""Sylvain"", ""Mary Tierney"", ""Jacques Touchon"", ""Karen Ritchie""], ""title"": ""Prediction of transition from cognitive"", ""year"": 2003}, {""authors"": [""Ayan"", ""Necip Fazil"", ""Bonnie J. Dorr.""], ""title"": ""Going beyond AER: An extensive analysis of word alignments and their impact on MT"", ""venue"": ""Proceedings of the 21st International Conference on Computational Linguistics and"", ""year"": 2006}, {""authors"": [""D.A. Bennett"", ""J.A. Schneider"", ""Z. Arvanitakis"", ""J.F. Kelly"", ""N.T. Aggarwal"", ""R.C. Shah"", ""R.S.
Wilson""], ""title"": ""Neuropathology of older persons without cognitive impairment"", ""year"": 2006}, {""authors"": [""Bishop"", ""Dorothy"", ""Chris Donlan.""], ""title"": ""The role of syntax in encoding and recall of pictorial narratives: Evidence from specific language impairment"", ""venue"": ""British Journal of Developmental Psychology,"", ""year"": 2005}, {""authors"": [""Brown"", ""Peter"", ""Vincent Della Pietra"", ""Steven Della Pietra"", ""Robert Mercer.""], ""title"": ""The mathematics of statistical machine translation: Parameter estimation"", ""venue"": ""Computational Linguistics,"", ""year"": 1993}, {""authors"": [""Chang"", ""Chih-Chung"", ""Chih-Jen Lin.""], ""title"": ""LIBSVM: A library for support vector machines"", ""venue"": ""ACM Transactions on Intelligent Systems and Technology, 2(27):1\u201327."", ""year"": 2011}, {""authors"": [""Chapman"", ""Sandra"", ""Hanna Ulatowska"", ""Kristin King"", ""Julene Johnson"", ""Donald McIntire.""], ""title"": ""Discourse in early Alzheimer\u2019s disease versus normal advanced aging"", ""venue"": ""American Journal of"", ""year"": 1995}, {""authors"": [""Chenery"", ""Helen J."", ""Bruce E. Murdoch.""], ""title"": ""The production of narrative discourse in response to animations in persons with dementia of the Alzheimer\u2019s type: Preliminary findings"", ""venue"": ""Aphasiology,"", ""year"": 1994}, {""authors"": [""Cortes"", ""Corinna"", ""Mehryar Mohri"", ""Ashish Rastogi.""], ""title"": ""An alternative ranking problem for search engines"", ""venue"": ""Proceedings of the 6th Workshop on Experimental Algorithms, volume 4525 of Lecture Notes in"", ""year"": 2007}, {""authors"": [""Creamer"", ""Scott"", ""Maureen Schmitter-Edgecombe""], ""title"": ""Narrative comprehension in Alzheimer\u2019s disease: Assessing inferences and memory operations with a think-aloud procedure"", ""year"": 2010}, {""authors"": [""de la Rosa"", ""Gabriela Ramirez"", ""Thamar Solorio"", ""Manuel Montes y Gomez"", ""Aquiles Iglesias"", ""Yang Liu"", ""Lisa Bedore"", ""Elizabeth Pena""], ""title"": ""Exploring word class n-grams to measure language"", ""year"": 2013}, {""authors"": [""Diehl"", ""Joshua J."", ""Loisa Bennetto"", ""Edna Carter Young.""], ""title"": ""Story recall and narrative coherence of high-functioning children with autism spectrum disorders"", ""venue"": ""Journal of Abnormal Child Psychology,"", ""year"": 2006}, {""authors"": [""Dunn"", ""John C."", ""Osvaldo P. Almeida"", ""Lee Barclay"", ""Anna Waterreus"", ""Leon Flicker.""], ""title"": ""Latent semantic analysis: A new method to measure prose recall"", ""venue"": ""Journal of Clinical and Experimental Neuropsychology,"", ""year"": 2002}, {""authors"": [""Ehrlich"", ""Jonathan S."", ""Loraine K. Obler"", ""Lynne Clark.""], ""title"": ""Ideational and semantic contributions to narrative production in adults with dementia of the Alzheimer\u2019s type"", ""venue"": ""Journal of Communication Disorders,"", ""year"": 1997}, {""authors"": [""Erkan"", ""G\u00fcnes"", ""Dragomir R. 
Radev""], ""title"": ""LexRank: Graph-based lexical centrality as salience in text summarization"", ""year"": 2004}, {""authors"": [""Fan"", ""Jerome"", ""Suneel Upadhye"", ""Andrew Worster.""], ""title"": ""Understanding receiver operating characteristic (ROC) curves"", ""venue"": ""Canadian Journal of Emergency Medicine, 8:19\u201320."", ""year"": 2006}, {""authors"": [""Faraggi"", ""David"", ""Benjamin Reiser.""], ""title"": ""Estimation of the area under the ROC curve"", ""venue"": ""Statistics in Medicine, 21:3093\u20133106."", ""year"": 2002}, {""authors"": [""M. Folstein"", ""S. Folstein"", ""P. McHugh.""], ""title"": ""Mini-mental state\u2014a practical method for grading the cognitive state of patients for the clinician"", ""venue"": ""Journal of Psychiatric Research, 12:189\u2013198."", ""year"": 1975}, {""authors"": [""Fraser"", ""Alexander"", ""Daniel Marcu.""], ""title"": ""Measuring word alignment quality for statistical machine translation"", ""venue"": ""Computational Linguistics, 33(3):293\u2013303."", ""year"": 2007}, {""authors"": [""Fraser"", ""Kathleen C"", ""Jed A. Meltzer"", ""Naida L. Graham"", ""Carol Leonard"", ""Graeme Hirst"", ""Sandra E. Black"", ""Elizabeth Rochon""], ""title"": ""Automated classification of primary progressive aphasia subtypes"", ""year"": 2014}, {""authors"": [""Gabani"", ""Keyur"", ""Melissa Sherman"", ""Thamar Solorio"", ""Yang Liu""], ""title"": ""A corpus-based approach for the prediction of language impairment in monolingual English and Spanish-English bilingual"", ""year"": 2009}, {""authors"": [""Galvin"", ""James"", ""Anne Fagan"", ""David Holtzman"", ""Mark Mintun"", ""John Morris.""], ""title"": ""Relationship of dementia screening tests with biomarkers of Alzheimer\u2019s disease"", ""venue"": ""Brain, 133:3290\u20133300."", ""year"": 2010}, {""authors"": [""Gilesa"", ""Elaine"", ""Karalyn Patterson"", ""John R. Hodge""], ""title"": ""Performance on the Boston cookie theft picture description task in patients with early dementia of the Alzheimer\u2019s type: Missing information"", ""year"": 1996}, {""authors"": [""H. Goodglass"", ""E. Kaplan.""], ""title"": ""Boston Diagnostic Aphasia Examination"", ""venue"": ""Lea and Febiger, Philadelphia, PA."", ""year"": 1972}, {""authors"": [""Hakkani-Tur"", ""Dilek"", ""Dimitra Vergyri"", ""Gokhan Tur.""], ""title"": ""Speech-based automated cognitive status assessment"", ""venue"": ""Proceedings of the Conference of the International Speech Communication"", ""year"": 2010}, {""authors"": [""Hall"", ""Mark"", ""Eibe Frank"", ""Geoffrey Holmes"", ""Bernhard Pfahringer"", ""Peter Reutemann"", ""Ian H. Witten.""], ""title"": ""The WEKA data mining software: An update"", ""venue"": ""SIGKDD Explorations, 11(1):10\u201318."", ""year"": 2009}, {""authors"": [""Hanley"", ""James"", ""Barbara McNeil.""], ""title"": ""The meaning and use of the area under a receiver operating characteristic (ROC) curve"", ""venue"": ""Radiology, 143:29\u201336."", ""year"": 1982}, {""authors"": [""D. Hier"", ""K. Hagenlocker"", ""A. Shindler.""], ""title"": ""Language disintegration in dementia: Effects of etiology and severity"", ""venue"": ""Brain and Language, 25:117\u2013133."", ""year"": 1985}, {""authors"": [""S. Hoops"", ""S. Nazem"", ""A.D. Siderowf"", ""J.E. Duda"", ""S.X. Xie"", ""M.B. Stern"", ""D. 
Weintraub.""], ""title"": ""Validity of the MoCA and MMSE in the detection of MCI and dementia in Parkinson disease"", ""venue"": ""Neurology,"", ""year"": 2009}, {""authors"": [""Johnson"", ""David K."", ""Martha Storandt"", ""David A. Balota.""], ""title"": ""Discourse analysis of logical memory recall in normal aging and in dementia of the Alzheimer type"", ""venue"": ""Neuropsychology, 17(1):82\u201392."", ""year"": 2003}, {""authors"": [""Lautenschlager"", ""Nicola T"", ""John C. Dunn"", ""Kathryn Bonney"", ""Leon Flicker"", ""Osvaldo P. Almeida""], ""title"": ""Latent semantic analysis: An improved method to measure cognitive performance in subjects"", ""year"": 2006}, {""authors"": [""Lehr"", ""Maider"", ""Emily Prud\u2019hommeaux"", ""Izhak Shafran"", ""Brian Roark""], ""title"": ""Fully automated neuropsychological assessment for detecting mild cognitive impairment"", ""venue"": ""In Proceedings of the 13th Annual Conference"", ""year"": 2012}, {""authors"": [""Lehr"", ""Maider"", ""Izhak Shafran"", ""Emily Prud\u2019hommeaux"", ""Brian Roark""], ""title"": ""Discriminative joint modeling of lexical variation and acoustic confusion for automated narrative retelling"", ""year"": 2013}, {""authors"": [""Liang"", ""Percy"", ""Ben Taskar"", ""Dan Klein.""], ""title"": ""Alignment by agreement"", ""venue"": ""Proceedings of the Human Language Technology Conference of the NAACL, pages 104\u2013111, New York, NY."", ""year"": 2006}, {""authors"": [""Lin"", ""Chin-Yiu.""], ""title"": ""ROUGE: A package for automatic evaluation of summaries"", ""venue"": ""Proceedings of the Workshop on Text Summarization Branches Out, pages 74\u201381, Barcelona."", ""year"": 2004}, {""authors"": [""Lopez"", ""Adam"", ""Philip Resnik.""], ""title"": ""Word-based alignment, phrase-based translation: What\u2019s the link"", ""venue"": ""Proceedings"", ""year"": 2006}, {""authors"": [""Lysaker"", ""Paul"", ""Amanda Wickett"", ""Neil Wilke"", ""John Lysaker.""], ""title"": ""Narrative incoherence in schizophrenia: The absent agent-protagonist and the collapse of internal dialogue"", ""venue"": ""American Journal of"", ""year"": 2003}, {""authors"": [""MacWhinney"", ""Brian.""], ""title"": ""The TalkBank Project"", ""venue"": ""J. C. Beal, K. P. Corrigan, and H. L. Moisl, editors, Creating and Digitizing Language Corpora: Synchronic Databases, Vol.1, pages 163\u2013180, Palgrave-Macmillan,"", ""year"": 2007}, {""authors"": [""Manly"", ""Jennifer J."", ""Ming Tang"", ""Nicole Schupf"", ""Yaakov Stern"", ""Jean-Paul G. Vonsattel"", ""Richard Mayeux.""], ""title"": ""Frequency and course of mild cognitive impairment in a multiethnic community"", ""venue"": ""Annals of"", ""year"": 2008}, {""authors"": [""Mitchell"", ""Margaret.""], ""title"": ""Scoring discrepancies on two subtests of the Wechsler memory scale"", ""venue"": ""Journal of Consulting and Clinical Psychology, 55:914\u2013915."", ""year"": 1987}, {""authors"": [""Morris"", ""John.""], ""title"": ""The Clinical Dementia Rating (CDR): Current version and scoring rules"", ""venue"": ""Neurology, 43:2412\u20132414."", ""year"": 1993}, {""authors"": [""Morris"", ""John"", ""Martha Storandt"", ""J. 
Phillip Miller"", ""Daniel McKeel"", ""Joseph Price"", ""Eugene Rubin"", ""Leonard Berg.""], ""title"": ""Mild cognitive impairment represents early-stage Alzheimer disease"", ""venue"": ""Archives of"", ""year"": 2001}, {""authors"": [""Norbury"", ""Courtenay"", ""Dorothy Bishop.""], ""title"": ""Narrative skills of children with communication impairments"", ""venue"": ""International Journal of Language and Communication Disorders, 38:287\u2013313."", ""year"": 2003}, {""authors"": [""A. Nordlund"", ""S. Rolstad"", ""P. Hellstrom"", ""M. Sjogren"", ""S. Hansen"", ""A. Wallin.""], ""title"": ""The Goteborg MCI study: Mild cognitive impairment is a heterogeneous condition"", ""venue"": ""Journal of Neurology, Neurosurgery and"", ""year"": 2005}, {""authors"": [""Och"", ""Franz Josef"", ""Hermann Ney.""], ""title"": ""A systematic comparison of various statistical alignment models"", ""venue"": ""Computational Linguistics, 29(1):19\u201351."", ""year"": 2003}, {""authors"": [""Otterbacher"", ""Jahna"", ""G\u00fcnes Erkan"", ""Dragomir R. Radev.""], ""title"": ""Biased LexRank: Passage retrieval using random walks with question-based priors"", ""venue"": ""Information Processing Management,"", ""year"": 2009}, {""authors"": [""Page"", ""Lawrence"", ""Sergey Brin"", ""Rajeev Motwani"", ""Terry Winograd""], ""title"": ""The PageRank citation ranking: Bringing order to the web"", ""venue"": ""Technical Report 1999-66,"", ""year"": 1999}, {""authors"": [""Pahikkala"", ""Tapio"", ""Antti Airola"", ""Jorma Boberg"", ""Tapio Salakoski.""], ""title"": ""Exact and efficient leave-pair-out cross-validation for ranking RLS"", ""venue"": ""The Second International and Interdisciplinary Conference on Adaptive"", ""year"": 2008}, {""authors"": [""Papineni"", ""Kishore"", ""Salim Roukos"", ""Todd Ward"", ""Wei-Jing Zhu.""], ""title"": ""BLEU: A method for automatic evaluation of machine translation"", ""venue"": ""Proceedings of the 40th Annual Meeting of the Association for"", ""year"": 2002}, {""authors"": [""Petersen"", ""Ronald"", ""Glenn Smith"", ""Stephen Waring"", ""Robert Ivnik"", ""Eric Tangalos"", ""Emre Kokmen.""], ""title"": ""Mild cognitive impairment: Clinical characterizations and outcomes"", ""venue"": ""Archives of Neurology,"", ""year"": 1999}, {""authors"": [""Petersen"", ""Ronald C.""], ""title"": ""Mild cognitive impairment"", ""venue"": ""The New England Journal of Medicine, 364(23):2227\u20132234."", ""year"": 2011}, {""authors"": [""John J. McArdle"", ""Robert J. Willis"", ""Robert B. Wallace.""], ""title"": ""Prevalence of cognitive impairment without dementia in the United States"", ""venue"": ""Annals of Internal Medicine, 148:427\u201334."", ""year"": 2008}, {""authors"": [""John C.
Morris.""], ""title"": ""Neuropathology of nondemented aging: Presumptive evidence for preclinical Alzheimer disease"", ""venue"": ""Neurobiology of Aging, 30(7):1026\u20131036."", ""year"": 2009}, {""authors"": [""Prud\u2019hommeaux"", ""Emily"", ""Brian Roark""], ""title"": ""Alignment of spoken narratives for automated neuropsychological assessment"", ""venue"": ""In Proceedings of the IEEE Workshop on Automatic Speech Recognition"", ""year"": 2011}, {""authors"": [""Prud\u2019hommeaux"", ""Emily"", ""Brian Roark""], ""title"": ""Graph-based alignment of narratives"", ""year"": 2012}, {""authors"": [""Prud\u2019hommeaux"", ""Emily"", ""Masoud Rouhizadeh""], ""title"": ""Automatic detection of pragmatic deficits in children with autism"", ""venue"": ""In Proceedings of the 3rd Workshop on Child, Computer and Interaction,"", ""year"": 2012}, {""authors"": [""Prud\u2019hommeaux"", ""Emily Tucker""], ""title"": ""Alignment of Narrative Retellings for Automated Neuropsychological Assessment"", ""venue"": ""Ph.D. thesis,"", ""year"": 2012}, {""authors"": [""Ravaglia"", ""Giovanni"", ""Paola Forti"", ""Fabiola Maioli"", ""Lucia Servadei"", ""Mabel Martelli"", ""Nicoletta Brunetti"", ""Luciana Bastagli"", ""Erminia Mariani""], ""title"": ""Screening for mild cognitive impairment in elderly"", ""year"": 2005}, {""authors"": [""Ridgway"", ""James"", ""Pierre Alquier"", ""Nicolas Chopin"", ""Feng Liang.""], ""title"": ""PAC-Bayesian AUC classification and scoring"", ""venue"": ""Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and"", ""year"": 2014}, {""authors"": [""Ritchie"", ""Karen"", ""Sylvaine Artero"", ""Jacques Touchon.""], ""title"": ""Classification criteria for mild cognitive impairment: A population-based validation study"", ""venue"": ""Neurology, 56:37\u201342."", ""year"": 2001}, {""authors"": [""Ritchie"", ""Karen"", ""Jacques Touchon.""], ""title"": ""Mild cognitive impairment: Conceptual basis and current nosological status"", ""venue"": ""Lancet, 355:225\u2013228."", ""year"": 2000}, {""authors"": [""Roark"", ""Brian"", ""Margaret Mitchell"", ""John-Paul Hosom"", ""Kristina Hollingshead"", ""Jeffrey Kaye.""], ""title"": ""Spoken language derived measures for detecting mild cognitive impairment"", ""venue"": ""IEEE Transactions on Audio,"", ""year"": 2011}, {""authors"": [""F.A. Schmitt"", ""D.G. Davis"", ""D.R. Wekstein"", ""C.D. Smith"", ""J.W. Ashford"", ""W.R. Markesbery.""], ""title"": ""Preclinical AD revisited: Neuropathology of cognitively normal older adults"", ""venue"": ""Neurology,"", ""year"": 2000}, {""authors"": [""Shankle"", ""William R"", ""A. Kimball Romney"", ""Junko Hara"", ""Dennis Fortier"", ""Malcolm B. Dick"", ""James M. Chen"", ""Timothy Chan"", ""Xijiang Sun""], ""title"": ""Methods to improve the detection of mild cognitive"", ""year"": 2005}, {""authors"": [""Solorio"", ""Thamar"", ""Yang Liu.""], ""title"": ""Using language models to identify language impairment in Spanish-English bilingual children"", ""venue"": ""Proceedings of the ACL 2008 Workshop on Biomedical Natural Language"", ""year"": 2008}, {""authors"": [""Storandt"", ""Martha"", ""Robert Hill.""], ""title"": ""Very mild senile dementia of the Alzheimer\u2019s type: II"", ""venue"": ""Psychometric test performance. 
Archives of Neurology, 46:383\u2013386."", ""year"": 1989}, {""authors"": [""Tager-Flusberg"", ""Helen.""], ""title"": ""Once upon a ribbit: Stories narrated by autistic children"", ""venue"": ""British Journal of Developmental Psychology, 13(1):45\u201359."", ""year"": 1995}, {""authors"": [""Tannock"", ""Rosemary"", ""Karen L. Purvis"", ""Russell J. Schachar.""], ""title"": ""Narrative abilities in children with attention deficit hyperactivity disorder and normal peers"", ""venue"": ""Journal of Abnormal Child Psychology,"", ""year"": 1993}, {""authors"": [""Tierney"", ""Mary"", ""Christie Yao"", ""Alex Kiss"", ""Ian McDowell.""], ""title"": ""Neuropsychological tests accurately predict incident Alzheimer disease after 5 and 10 years"", ""venue"": ""Neurology, 64:1853\u20131859."", ""year"": 2005}, {""authors"": [""Ulatowska"", ""Hanna"", ""Lee Allard"", ""Adrienne Donnell"", ""Jean Bristow"", ""Sara M. Haynes"", ""Adelaide Flower"", ""Alvin J. North""], ""title"": ""Discourse performance in subjects with dementia"", ""year"": 1988}, {""authors"": [""United Nations.""], ""title"": ""World Population Ageing 1950\u20132050"", ""venue"": ""United Nations, New York."", ""year"": 2002}, {""authors"": [""Vuorinen"", ""Elina"", ""Matti Laine"", ""Juha Rinne.""], ""title"": ""Common pattern of language impairment in vascular dementia and in Alzheimer disease"", ""venue"": ""Alzheimer Disease and Associated Disorders, 14(2):81\u201386."", ""year"": 2000}, {""authors"": [""Wang"", ""Qing-Song"", ""Jiang-Ning Zhou.""], ""title"": ""Retrieval and encoding of episodic memory in normal aging and patients with mild cognitive impairment"", ""venue"": ""Brain Research, 924:113\u2013115."", ""year"": 2002}, {""authors"": [""Wechsler"", ""David.""], ""title"": ""Wechsler Memory Scale - Third Edition"", ""venue"": ""The Psychological Corporation, San Antonio, TX."", ""year"": 1997}, {""authors"": [""Zweig"", ""Mark H."", ""Gregory Campbell.""], ""title"": ""Receiver-operating characteristic (ROC) plots: A fundamental evaluation tool in clinical medicine"", ""venue"": ""Clinical Chemistry, 39:561\u2013577."", ""year"": 1993}]","acknowledgments :This research was conducted while both authors were at the Center for Spoken Language Understanding at the Oregon Health and Science University, in Portland, Oregon. This work was supported in part by NSF grant BCS-0826654 and NIH NIDCD grants R01DC012033-01 and R01DC007129. Any opinions, findings, conclusions or recommendations expressed in this publication are those of the authors and do not reflect the views of the NIH or NSF. Some of the results reported here appeared previously in Prud’hommeaux and Roark (2012) and the first author’s dissertation (Prud’hommeaux 2012). We thank Jan van Santen, Richard Sproat, and Chris Callison-Burch for their valuable input and the clinicians at the OHSU Layton Center for their care in collecting the data.",,,,,,,,,"9 conclusions and future work :The work presented here demonstrates the utility of adapting NLP algorithms to clinically elicited data for diagnostic purposes. In particular, the approach we describe for automatically analyzing clinically elicited language data shows promise as part of a pipeline for a screening tool for mild cognitive impairment. The methods offer the additional benefit of being general and flexible enough to be adapted to new data sets, even those without existing evaluation guidelines. In addition, the novel graph-based approach to word alignment results in large reductions in alignment error rate. 
These reductions in error rate in turn lead to human-level scoring accuracy and improved diagnostic classification. The demand for simple, objective, and unobtrusive screening tools for MCI and other neurodegenerative and neurodevelopmental disorders will continue to grow as the prevalence of these disorders increases. Although high-level measures of text similarity used in other NLP applications, such as machine translation, do achieve reasonable classification accuracy when applied to the WLM narrative data, the work presented here indicates that automated methods that approximate manual element-level scoring procedures yield superior results. Although the results are quite robust, several enhancements and improvements can be made. First, although we were able to achieve decent word alignment accuracy, especially with our graph-based approach, many alignment errors remain. Exploration of the graph used here reveals that many correct alignments remain undiscovered, with an oracle AER of 11%. One clear weakness is the selection of only a single alignment from the distribution of source words at the end of 1,000 walks, since this does not allow for one-to-many mappings. We would also like to experiment with including nondirectional edges and outgoing edges on source words. In our future work, we also plan to examine longitudinal data for individual participants to see whether our techniques can detect subtle differences in recall and coherence between a recent retelling and a series of earlier baseline retellings. Because the CDR, the dementia staging system often used to identify MCI, relies on observed changes in cognitive function over time, longitudinal analysis of performance on narrative retelling and picture description tasks might be the most promising application for this approach to analyzing clinically elicited language data.",,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,"1 introduction :Interest in applying natural language processing (NLP) technology to medical information has increased in recent years. Much of this work has been focused on information retrieval and extraction from clinical notes, electronic medical records, and biomedical academic literature, but there has been some work in directly analyzing the spoken language of individuals elicited during the administration of diagnostic instruments in clinical settings. Analyzing spoken language data can reveal information not only about impairments in language but also about a patient’s neurological status with respect to other cognitive processes such as memory and executive function, which are often impaired in individuals with neurodevelopmental disorders, such as autism and language impairment, and neurodegenerative conditions, particularly dementia. Many widely used instruments for diagnosing certain neurological disorders include a task in which the person must produce an uninterrupted stream of spontaneous spoken language in response to a stimulus. A person might be asked, for instance, to retell a brief narrative or to describe the events depicted in a drawing.
Much of the previous work in applying NLP techniques to such clinically elicited spoken language data has relied on parsing and language modeling to enable the automatic extraction of linguistic features, such as syntactic complexity and measures of vocabulary use and diversity, which can then be used as markers for various neurological impairments (Solorio and Liu 2008; Gabani et al. 2009; Roark et al. 2011; de la Rosa et al. 2013; Fraser et al. 2014). In this article, we instead use NLP techniques to analyze the content, rather than the linguistic characteristics, of weakly structured spoken language data elicited using neuropsychological assessment instruments. We will show that the content of such spoken responses contains information that can be used for accurate screening for neurodegenerative disorders. The features we explore are grounded in the idea that individuals recalling the same narrative are likely to use the same sorts of words and semantic concepts. In other words, a retelling of a narrative will be faithful to the source narrative and similar to other retellings. This similarity can be measured with techniques such as latent semantic analysis (LSA) cosine distance or the summary-level statistics that are widely used in evaluation of machine translation or automatic summarization, such as BLEU, Meteor, or ROUGE. Perhaps not surprisingly, however, previous work in using this type of spoken language data suggests that people with neurological impairments tend to include irrelevant or off-topic information and to exclude important pieces of information, or story elements, in their retellings that are usually included by neurotypical individuals (Hier, Hagenlocker, and Shindler 1985; Ulatowska et al. 1988; Chenery and Murdoch 1994; Chapman et al. 1995; Ehrlich, Obler, and Clark 1997; Vuorinen, Laine, and Rinne 2000; Creamer and Schmitter-Edgecombe 2010). Thus, it is often not the quantity of correctly recalled information but the quality of that information that reveals the most about a person’s diagnostic status. Summary statistics like LSA cosine distance and BLEU, which are measures of the overall degree of similarity between two texts, fail to capture these sorts of patterns. The work discussed here is an attempt to reveal these patterns and to leverage them for diagnostic classification of individuals with neurodegenerative conditions, including mild cognitive impairment and dementia of the Alzheimer’s type. Our method for extracting the elements used in a retelling of a narrative relies on establishing a word alignment between a retelling and a source narrative. Given the correspondences between the words used in a retelling and the words used in the source narrative, we can determine with relative ease the identities of the story elements of the source narrative that were used in the retelling. These word alignments are much like those used to build machine translation models. The amount of data required to generate accurate word alignment models for machine translation, however, far exceeds the amount of monolingual source-to-retelling parallel data available to train word alignment models for our task. We therefore combine several approaches for producing reliable word alignments that exploit the peculiarities of our training data, including an entirely novel alignment approach relying on random walks on graphs. 
In this article, we demonstrate that this approach to word alignment is as accurate as and more efficient than standard hidden Markov model (HMM)-based alignment (derived using the Berkeley aligner [Liang, Taskar, and Klein 2006]) for this particular data. In addition, we show that the presence or absence of specific story elements in a narrative retelling, extracted automatically from these task-specific word alignments, predicts diagnostic group membership more reliably than not only other dementia screening tools but also the lexical and semantic overlap measures widely used in NLP to evaluate pairwise language sample similarity. Finally, we apply our techniques to a picture description task that lacks an existing scoring mechanism, highlighting the generalizability and adaptability of these techniques. The importance of accurate screening tools for neurodegenerative disorders cannot be overstated given the increased prevalence of these disorders currently being observed worldwide. In the industrialized world, for the first time in recorded history, the population over 60 years of age outnumbers the population under 15 years of age, and it is expected to be double that of children by 2050 (United Nations 2002). As the elderly population grows and as researchers find new ways to slow or halt the progression of dementia, the demand for objective, simple, and noninvasive screening tools for dementia and related disorders will grow. Although we will not discuss the application of our methods to the narratives of children, the need for simple screening protocols for neurodevelopmental disorders such as autism and language impairment is equally urgent. The results presented here indicate that the path toward these goals might include automated spoken language analysis. 2 background : Because of the variety of intact cognitive functions required to generate a narrative, the inability to coherently produce or recall a narrative is associated with many different disorders, including not only neurodegenerative conditions related to dementia, but also autism (Tager-Flusberg 1995; Diehl, Bennetto, and Young 2006), language impairment (Norbury and Bishop 2003; Bishop and Donlan 2005), attention deficit disorder (Tannock, Purvis, and Schachar 1993), and schizophrenia (Lysaker et al. 2003). The bulk of the research presented here, however, focuses on the utility of a particular narrative recall task, the Wechsler Logical Memory subtest of the Wechsler Memory Scale (Wechsler 1997), for diagnosing mild cognitive impairment (MCI). (This and other abbreviations are listed in Table 1.) MCI is the stage of cognitive decline between the sort of decline expected in typical aging and the decline associated with dementia or Alzheimer’s disease (Petersen et al. 1999; Ritchie and Touchon 2000; Petersen 2011). MCI is characterized by subtle deficits in functions of memory and cognition that are clinically significant but do not prevent carrying out the activities of daily life. This intermediary phase of decline has been identified and named numerous times: mild cognitive decline, mild neurocognitive decline, very mild dementia, isolated memory impairment, questionable dementia, and incipient dementia. 
Although there continues to be disagreement about the diagnostic validity of the designation (Ritchie and Touchon 2000; Ritchie, Artero, and Touchon 2001), a number of recent studies have found evidence that seniors with some subtypes of MCI are significantly more likely to develop dementia than the population as a whole (Busse et al. 2006; Manly et al. 2008; Plassman et al. 2008). Early detection can benefit both patients and researchers investigating treatments for halting or slowing the progression of dementia, but identifying MCI can be problematic, as most dementia screening instruments, such as the Mini-Mental State Exam (MMSE) (Folstein, Folstein, and McHugh 1975), lack sufficient sensitivity to the very subtle cognitive deficits that characterize the disorder (Morris et al. 2001; Ravaglia et al. 2005; Hoops et al. 2009). Diagnosis of MCI currently requires both a lengthy neuropsychological evaluation of the patient and an interview with a family member or close associate, both of which should be repeated at regular intervals in order to have a baseline for future comparison. One goal of the work presented here is to determine whether an analysis of spoken language responses to a narrative recall task, the Wechsler Logical Memory subtest, can be used as a more efficient and less intrusive screening tool for MCI. In the Wechsler Logical Memory (WLM) narrative recall subtest of the Wechsler Memory Scale, the individual listens to a brief narrative and must verbally retell the narrative to the examiner once immediately upon hearing the story and again after a delay of 20 to 30 minutes. The examiner scores each retelling according to how many story elements the patient uses in the retelling. The standard scoring procedure, described in more detail in Section 3.2, results in a single summary score for each retelling, immediate and delayed, corresponding to the total number of story elements recalled in that retelling. The Anna Thompson narrative, shown in Figure 1 (later in this article), has been used as the primary WLM narrative for over 70 years and has been found to be sensitive to dementia and related conditions, particularly in combination with tests of verbal fluency and memory. Multiple studies have demonstrated a significant difference in performance on the WLM between individuals with MCI and typically aging controls under the standard scoring procedure (Storandt and Hill 1989; Petersen et al. 1999; Wang and Zhou 2002; Nordlund et al. 2005). Further studies have shown that performance on the WLM can help predict whether MCI will progress into Alzheimer’s disease (Morris et al. 2001; Artero et al. 2003; Tierney et al. 2005). The WLM can also serve as a cognitive indicator of physiological characteristics associated with Alzheimer’s disease. WLM scores in the impaired range are associated with the presence of changes in Pittsburgh compound B and cerebrospinal fluid amyloid beta protein, two biomarkers of Alzheimer’s disease (Galvin et al. 2010). Poor performance on the WLM and other narrative memory tests has also been strongly correlated with increased density of Alzheimer-related lesions detected in postmortem neuropathological studies, even in the absence of previously reported or detected dementia (Schmitt et al. 2000; Bennett et al. 2006; Price et al. 2009). We note that clinicians do not use the WLM as a diagnostic test by itself for MCI or any other type of dementia.
The WLM summary score is just one of a large number of instrumentally derived scores of memory and cognitive function that, in combination with one another and with a clinician’s expert observations and examination, can indicate the presence of a dementia, aphasia, or other neurological disorder. Much of the previous work in applying automated analysis of unannotated transcripts of narratives for diagnostic purposes has focused not on evaluating properties specific to narratives but rather on using narratives as a data source from which to extract speech and language features. Solorio and Liu (2008) were able to distinguish the narratives of a small set of children with specific language impairment (SLI) from those of typically developing children using perplexity scores derived from part-of-speech language models. In a follow-up study on a larger group of children, Gabani et al. (2009) again used part-of-speech language models in an attempt to characterize the agrammaticality that is associated with language impairment. Two part-of-speech language models were trained for that experiment: one on the language of children with SLI and one on the language of typically developing children. The perplexity of each child’s utterances was calculated according to each of the models. In addition, the authors extracted a number of other structural linguistic features including mean length of utterance, total words used in the narrative, and measures of accurate subject–verb agreement. These scores collectively performed well in distinguishing children with language impairment, achieving an F1 measure of just over 70% when used within a support vector machine (SVM) for classification. In a continuation of this work, de la Rosa et al. (2013) explored complex language-model-based lexical and syntactic features to more accurately characterize the language used in narratives by children with language impairment. Roark et al. (2011) extracted a subset of the features used by Gabani et al. (2009), along with a much larger set of language complexity features derived from syntactic parse trees for utterances from narratives produced by elderly individuals for the diagnosis of MCI. These features included simple measures, such as words per clause, and more complex measures of tree depth, embedding, and branching, such as Frazier and Yngve scores. Selecting a subset of these features for classification with an SVM yielded a classification accuracy of 0.73, as measured by the area under the receiver operating characteristic curve (AUC). A similar approach was followed by Fraser et al. (2014) to distinguish different types of primary progressive aphasia, a group of subtypes of dementia distinct from Alzheimer’s disease and MCI, in a small group of elderly individuals. The authors considered almost 60 linguistic features, including some of those explored by Roark et al. (2011) as well as numerous others relating to part-of-speech frequencies and ratios. Using a variety of classifiers and feature combinations for three different two-way classification tasks, the authors achieved classification accuracies ranging between 0.71 and 1.0. An alternative to analyzing narratives in terms of syntactic and lexical features is to evaluate the content of the narrative retellings themselves in terms of their fidelity to the source narrative.
Hakkani-Tur, Vergyri, and Tur (2010) developed a method of automatically evaluating an audio recording of a picture description task, in which the patient looks at a picture and narrates the events occurring in the picture, similar to the task we will be analyzing in Section 8. After using automatic speech recognition (ASR) to transcribe the recording, the authors measured unigram overlap between the ASR output transcript and a predefined list of key semantic concepts. This unigram overlap measure correlated highly with manually assigned counts of these semantic concepts. The authors did not investigate whether the scores, derived either manually or automatically, were associated with any particular diagnostic group or disorder. Dunn et al. (2002) were among the first to apply automated methods specifically to scoring the WLM subtest and determining the relationship between these scores and measures of cognitive function. The authors used Latent Semantic Analysis (LSA) to measure the semantic distance from a retelling to the source narrative. The LSA scores correlated very highly with the scores assigned by examiners under the standard scoring guidelines and with independent measures of cognitive functioning. In subsequent work comparing individuals with and without an English-speaking background (Lautenschlager et al. 2006), the authors proposed that LSA-based scoring of the WLM as a cognitive measure is less biased against people with different linguistic and cultural backgrounds than other widely used cognitive measures. This work demonstrates not only that accurate automated scoring of narrative recall tasks is possible but also that the objectivity offered by automated measures has specific benefits for tests like the WLM, which are often administered by practitioners working in a community setting and serving a diverse population. We will compare the utility of this approach with our alignment-based approach subsequently in the article. More recently, Lehr et al. (2013) used a supervised method for scoring the responses to the WLM, transcribed both manually and via ASR, using conditional random fields. This technique resulted in slightly higher scoring and classification accuracy than the unsupervised method described here. An unsupervised variant of their algorithm, which relied on the methods described in this article to provide training data to the conditional random field, yielded about half of the scoring gains and nearly all of the classification gains of what we report here. A hybrid method that used the methods in this article to derive features was the best performing system in that paper. Hence the methods described here are important components to that approach. We also note, however, that the supervised classifier-based approach to scoring retellings requires a significant amount of hand-labeled training data, thus rendering the technique impractical for application to a new narrative or to any picture description task. The importance of this distinction will become clear in Section 8, in which the approach outlined here is applied to a new data set lacking an existing scoring mechanism or a linguistic reference against which the responses can be scored. In this article, we will be discussing the application of our methods to manually generated transcripts of retellings and picture descriptions produced by adults with and without neurodegenerative disorders. 
We note, however, that the same techniques have been applied to narratives transcribed using ASR output (Lehr et al. 2012, 2013) with little degradation in accuracy, given sufficient adaptation of the acoustic and language models to the WLM retelling domain. In addition, we have applied alignment-based scoring to the narratives of children with neurodevelopmental disorders, including autism and language impairment (Prud’hommeaux and Rouhizadeh 2012), with similarly strong diagnostic classification accuracy, further demonstrating the applicability of these methods to a variety of input formats, elicitation techniques, and diagnostic goals. 3 data : The participants for this study were drawn from an ongoing study of brain aging at the Layton Aging and Alzheimer’s Disease Center at the Oregon Health and Science University. Seventy-two of these participants had received a diagnosis of MCI, and 163 individuals served as typically aging controls. Demographic information about the experimental participants is shown in Table 2. There were no significant differences in age and years of education between the two groups. The Layton Center data included retellings for individuals who were not eligible for the present study because of their age or diagnosis. Transcriptions of 48 retellings produced by these ineligible participants were used to train and tune the word alignment model but were not used to evaluate the word alignment, scoring, or classification accuracy. We diagnose MCI using the Clinical Dementia Rating (CDR) scale (Morris 1993), following earlier work on MCI (Petersen et al. 1999; Morris et al. 2001), as well as the work of Shankle et al. (2005) and Roark et al. (2011), who have previously attempted diagnostic classification using neuropsychological instrument subtest responses. The CDR is a numerical dementia staging scale that indicates the presence of dementia and its level of severity. The CDR score is derived from measures of cognitive function in six domains: Memory; Orientation; Judgment and Problem Solving; Community Affairs; Home and Hobbies; and Personal Care. These measures are determined during an extensive semi-structured interview with the patient and a close family member or caregiver. A CDR of 0 indicates the absence of dementia, and a CDR of 0.5 corresponds to a diagnosis of MCI (Ritchie and Touchon 2000). This measure has high expert interrater reliability (Morris 1993) and is assigned without any information derived from the WLM subtest. The WLM test, discussed in detail in Section 2.2, is a subtest of the Wechsler Memory Scale (Wechsler 1997), a neuropsychological instrument used to evaluate memory function in adults. Under standard administration of the WLM, the examiner reads a brief narrative to the participant, excerpts of which are shown in Figure 1. The participant then retells the narrative to the examiner twice: once immediately upon hearing the narrative and a second time after 20 to 30 minutes. Two retellings from one of the participants in our study are shown in Figures 2 and 3. (There are currently two narrative retelling subtests that can be administered as part of the Wechsler Memory Scale, but the Anna Thompson narrative used in the present study is the more widely used and has appeared in every version of the Wechsler Memory Scale with only minor modifications since the instrument was first released 70 years ago.) 
Following the published scoring guidelines, the examiner scores the participant’s response by counting how many of the 25 story elements are recalled in the retelling without regard to their ordering or relative importance in the story. We refer to this as the summary score. The boundaries between story elements are indicated with slashes in Figure 1. The retelling in Figure 2, produced by a participant without MCI, received a summary score of 12 for the 12 story elements recalled: Anna, Boston, employed, as a cook, and robbed of, she had four, small children, reported, station, touched by the woman’s story, took up a collection, and for her. The retelling in Figure 3, produced by the same participant after receiving a diagnosis of MCI two years later, earns a summary score of 5 for the 5 elements recalled: robbed, children, had not eaten, touched by the woman’s story, and took up a collection. Note that some of the story elements in these retellings were not recalled verbatim. The scoresheet provided with the exam indicates the lexical substitutions and degree of paraphrasing that are permitted, such as Ann or Annie for Anna, or any indication that the story evoked sympathy for touched by the woman’s story. Although the scoring guidelines have an air of arbitrariness in that paraphrasing is only sometimes permitted, they do allow the test to be scored with high inter-rater reliability (Mitchell 1987). Recall that each participant produces two retellings for the WLM: an immediate retelling and a delayed retelling. Each participant’s two retellings were transcribed at the utterance level. The transcripts were downcased, and all pause-fillers, incomplete words, and punctuation were removed. The transcribed retellings were scored manually according to the published scoring guidelines, as described earlier in this section. 4 diagnostic classification framework : The goal of the work presented here is to demonstrate the utility of a variety of features derived from the WLM retellings for diagnostic classification of individuals with MCI. To perform this classification, we use LibSVM (Chang and Lin 2011), as implemented within the Waikato Environment for Knowledge Analysis (Weka) API (Hall et al. 2009), to train SVM classifiers, using a radial basis function kernel and default parameter settings. We evaluate classification via receiver operating characteristic (ROC) curves, which have long been widely used to evaluate diagnostic tests (Zweig and Campbell 1993; Faraggi and Reiser 2002; Fan, Upadhye, and Worster 2006) and are also increasingly used in machine learning to evaluate classifiers in ranking scenarios (Cortes, Mohri, and Rastogi 2007; Ridgway et al. 2014). Analysis of ROC curves allows for classifier evaluation without selecting a specific, potentially arbitrary, operating point. To use standard clinical terminology, ROC curves track the tradeoff between sensitivity and specificity. Sensitivity (true positive rate) is what is commonly called recall in computational linguistics and related fields—that is, the percentage of items in the positive class that were correctly classified as positives. Specificity (true negative rate) is the percentage of items in the negative class that were correctly classified as negatives, which is equal to one minus the false positive rate. If the threshold is set so that nothing scores above threshold, the sensitivity (true positive rate, recall) is 0.0 and specificity (true negative rate) is 1.0. 
If the threshold is set so that everything scores above threshold, sensitivity is 1.0 and specificity is 0.0. As we sweep across intervening threshold settings, the ROC curve plots sensitivity versus one minus specificity, true positive rate versus false positive rate, providing insight into the precision/recall tradeoff at all possible operating points. Each point (tp, fp) in the curve has the true positive rate as the first dimension and false positive rate as the second dimension. Hence each curve starts at the origin (0, 0), the point corresponding to a threshold where nothing scores above threshold, and ends at (1, 1), the point where everything scores above threshold. ROC curves can be characterized by the area underneath them (“area under curve” or AUC). A perfect classifier, with all positive items ranked above all negative items, has an ROC curve that starts at point (0, 0), goes straight up to (1, 0)—the point where true positive is 1.0 and false positive is 0.0 (since it is a perfect classifier)—before continuing straight over to the final point (1, 1). The area under this curve is 1.0, hence a perfect classifier has an AUC of 1.0. A random classifier, whose ROC curve is a straight diagonal line from the origin to (1, 1), has an AUC of 0.5. The AUC is equivalent to the probability that a randomly chosen positive example is ranked higher than a randomly chosen negative example, and is, in fact, equivalent to the Wilcoxon-Mann-Whitney statistic (Hanley and McNeil 1982). This statistic allows for classifier comparison without the need to pre-specify arbitrary thresholds. For tasks like clinical screening, different tradeoffs between sensitivity and specificity may apply, depending on the scenario. See Fan, Upadhye, and Worster (2006) for a useful discussion of clinical use of ROC curves and the AUC score. In that paper, the authors note that there are multiple scales for interpreting the value of AUC, but that a rule-of-thumb is that AUC ≤ 0.75 is generally not clinically useful. For the present article, however, AUC mainly provides us the means for evaluating the relative quality of different classifiers. One key issue for this sort of analysis is the estimation of the AUC for a particular classifier. Leave-pair-out cross-validation—proposed by Cortes, Mohri, and Rastogi (2007) and extensively validated in Pahikkala et al. (2008) and Airola et al. (2011)—is a method for providing an unbiased estimate of the AUC, and the one we use in this article. In the leave-pair-out technique, every pairing between a negative example (i.e., a participant without MCI) and a positive example (i.e., a participant with MCI) is tested using a classifier trained on all of the remaining examples. The results of each positive/negative pair can be used to calculate the Wilcoxon-Mann-Whitney statistic as follows. Let s(e) be the score of some example e; let P be the set of positive examples and N the set of negative examples; and let [s(p) > s(n)] be 1 if true and 0 if false. Then:

$$\mathrm{AUC}(s, P, N) = \frac{1}{|P|\,|N|} \sum_{p \in P} \sum_{n \in N} \big[ s(p) > s(n) \big] \qquad (1)$$

Although this method is compute-intensive, it does provide an unbiased estimate of the AUC, whereas other cross-validation setups lead to biased estimates. Another benefit of using the AUC is that its standard deviation can be calculated as follows, where AUC is abbreviated as A to improve readability:

$$\sigma_A^2 = \frac{A(1-A) + (|P|-1)\left(\frac{A}{2-A} - A^2\right) + (|N|-1)\left(\frac{2A^2}{1+A} - A^2\right)}{|P|\,|N|} \qquad (2)$$
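To make these computations concrete, the following minimal sketch (our illustration, not code from the study) computes the pairwise AUC of Equation (1) and the Hanley–McNeil standard deviation of Equation (2) from two hypothetical lists of classifier scores; in the article, s(e) would be the classifier's score for example e under leave-pair-out cross-validation.

```python
# Sketch: pairwise (Wilcoxon-Mann-Whitney) AUC and its standard deviation.
# The score lists below are hypothetical placeholders.

def pairwise_auc(pos_scores, neg_scores):
    """Equation (1): fraction of positive/negative pairs ranked correctly."""
    wins = sum(1 for p in pos_scores for n in neg_scores if p > n)
    return wins / (len(pos_scores) * len(neg_scores))

def auc_stdev(auc, n_pos, n_neg):
    """Equation (2): Hanley-McNeil estimate of the AUC's standard deviation."""
    a = auc
    q1 = a / (2 - a)          # first correction term, A / (2 - A)
    q2 = 2 * a * a / (1 + a)  # second correction term, 2A^2 / (1 + A)
    var = (a * (1 - a) + (n_pos - 1) * (q1 - a * a)
           + (n_neg - 1) * (q2 - a * a)) / (n_pos * n_neg)
    return var ** 0.5

pos = [0.9, 0.7, 0.6, 0.4]       # hypothetical scores, participants with MCI
neg = [0.8, 0.5, 0.3, 0.2, 0.1]  # hypothetical scores, participants without MCI
a = pairwise_auc(pos, neg)
print(a, auc_stdev(a, len(pos), len(neg)))
```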
Previous work has shown that the WLM summary scores assigned during standard administration of the WLM, particularly in combination with other tests of verbal fluency and memory, are sensitive to the presence of MCI and other dementias (Storandt and Hill 1989; Petersen et al. 1999; Schmitt et al. 2000; Wang and Zhou 2002; Nordlund et al. 2005; Bennett et al. 2006; Price et al. 2009). We note, however, that the WLM test alone is not typically used as a diagnostic test. One of the goals of this work is to explore the utility of the standard WLM summary scores for diagnostic classification. A more ambitious goal is to demonstrate that using smaller units of information derived from story elements, rather than gross summary-level scores, can greatly improve diagnostic accuracy. Finally, we will show that using element-level scores automatically extracted from word alignments can achieve diagnostic classification accuracy comparable to that achieved using manually assigned scores. We will therefore compare the accuracy, measured in terms of AUC, of SVM classifiers trained on both summary-level and element-level WLM scores extracted from word alignments to the accuracy of classifiers built using a variety of alternative feature sets, both manually and automatically derived, shown in Table 3. First, we consider the accuracy of classifiers using the expert-assigned WLM scores as features. For each of the 235 experimental participants, we generate two summary scores: one for the immediate retelling and one for the delayed retelling. The summary score ranges from 0, indicating that no elements were recalled, to 25, indicating that all elements were recalled. Previous work using manually assigned scores as features indicates that certain elements are more powerful in their ability to predict the presence of MCI (Prud’hommeaux 2012). In addition to the summary score, we therefore also provide the SVM with a vector of 50 story element-level scores: For each of the 25 elements in each of the two retellings per patient, there is a vector element with the value of 0 if the element was not recalled, or 1 if the element was recalled. Classification accuracy for participants with MCI using these two manually derived feature sets is shown in Table 3. We then present in Table 3 the classification accuracy of several summary-level features derived automatically from the WLM retellings, using standard NLP techniques for evaluating the similarity of two texts. We note that none of these features makes reference to the published WLM scoring guidelines or to the predefined element boundaries. Each of these feature sets contains two scores ranging between 0 and 1 for each participant, one for each of the two retellings: (1) cosine similarity between a retelling and the source narrative measured using LSA, proposed by Dunn et al. (2002) and calculated using the University of Colorado’s online LSA interface (available at http://lsa.colorado.edu/) with the 300-factor ninth-grade reading level topic space; (2) unigram overlap precision of a retelling relative to the source, proposed by Hakkani-Tur, Vergyri, and Tur (2010); (3) BLEU, the n-gram overlap metric commonly used to evaluate the quality of machine translation output (Papineni et al.
2002); and (4) the F-measure for ROUGE-SU4, the n-gram overlap metric commonly used to evaluate automatic summarization output (Lin 2004). The remaining two automatically derived feature sets are a set of binary scores corresponding to the exact match (via grep) of each of the open-class unigrams in the source narrative, and a summary score thereof. Finally, in order to compare the WLM with another standard psychometric test, we also show the accuracy of a classifier trained only on the expert-assigned manual scores for the MMSE (Folstein, Folstein, and McHugh 1975), a clinician-administered 30-point questionnaire that measures a patient’s degree of cognitive impairment. Although it is widely used to screen for dementias such as Alzheimer’s disease, the MMSE is reported not to be particularly sensitive to MCI (Morris et al. 2001; Ravaglia et al. 2005; Hoops et al. 2009). The MMSE is entirely independent of the WLM and, though brief (5–10 minutes), requires more time to administer than the WLM. In Table 3, we see that the WLM-based features yield higher accuracy than the MMSE, which is notable given the role that the MMSE plays in dementia screening. In addition, although all of the automatically derived feature sets yield higher classification accuracy than the MMSE, the manually derived WLM element-level scores are by far the most accurate feature set for diagnostic classification. Summary-level statistics, whether derived manually using established scoring mechanisms or automatically using a variety of text-similarity metrics used in the NLP community, seem not to provide sufficient power to distinguish the two diagnostic groups. In the next several sections, we describe a method for automatically and accurately extracting the identities of the recalled story elements from WLM retellings via word alignment in order to try to achieve classification accuracy comparable to that of the manually assigned WLM story elements and higher than that of the other automatic scoring methods. 5 wlm scoring via alignment : The approach presented here for automatic scoring of the WLM subtest relies on word alignments of the type used in machine translation for building phrase-based translation models. The motivation for using word alignment is the inherent similarity between narrative retelling and translation. In translation, a sentence in one language is converted into another language; the translation will have different words presented in a different order, but the meaning of the original sentence will be preserved. In narrative retelling, the source narrative is “translated” into the idiolect of the individual retelling the story. Again, the retelling will have different words, possibly presented in a different order, but at least some of the meaning will be preserved. We will show that although the algorithm for extracting scores from the alignments is simple, the process of getting high-quality word alignments from the corpora of narrative retellings is challenging. Although researchers in other NLP tasks that rely on alignments, such as textual entailment and summarization, sometimes eschew the sort of word-level alignments that are used in machine translation, we have no a priori reason to believe that this sort of alignment will be inadequate for the purposes of scoring narrative retellings.
In addition, unlike many of the alignment algorithms proposed for tasks such as textual entailment, the methods for unsupervised word alignment used in machine translation require no external resources or hand-labeled data, making it simple to adapt our automated scoring techniques to new scenarios. We will show that the word alignment algorithms used in machine translation, when modified in particular ways, provide sufficient information for highly accurate scoring of narrative retellings and subsequent diagnostic classification of the individuals generating those retellings. Figure 4 shows a visual grid representation of a manually generated word alignment between the source narrative shown in Figure 1 on the vertical axis and the example WLM retelling in Figure 2 on the horizontal axis. Table 4 shows the word-index-to-word-index alignment, in which the first index of each sentence is 0 and in which null alignments are not shown. When creating these manual alignments, the labelers assigned the “possible” denotation under one of these two conditions: (1) when the alignment was ambiguous, as outlined in Och and Ney (2003); and (2) when a particular word in the retelling was a logical alignment to a word in the source narrative, but it would not have been counted as a permissible substitution under the published scoring guidelines. For this reason, we see that Taylor and sixty-seven are considered to be possible alignments because although they are logical alignments, they are not permissible substitutions according to the published scoring guidelines. Note that the word dollars is considered to be only a possible alignment, as well, since the element fifty-six dollars is not correctly recalled in this retelling under the standard scoring guidelines. In Figure 4, sure alignments are marked in black and possible alignments are marked in gray. In Figure 5, sure alignments are marked with S and possible alignments are marked with P. Manually generated alignments like this one are the gold standard against which any automatically generated alignments can be compared to determine the accuracy of the alignment. From an accurate word-to-word alignment, the identities of the story elements used in a retelling can be accurately extracted, and from that set of story elements, the score that is assigned under the standard scoring procedure can be calculated. As described earlier, the published scoring guidelines for the WLM specify the source words that compose each story element. Figure 5 displays the source narrative with the element IDs (A–Y) and word IDs (1–65) explicitly labeled. Element Q, for instance, consists of the words 39 and 40, small children. Using this information, we can determine which story elements were used in a retelling from the alignments as follows: for each word in the source narrative, if that word is aligned to a word in the retelling, the story element that it is associated with is considered to be recalled. For instance, if there is an alignment between the retelling word sympathetic and the source word touched, the story element touched by the woman’s story would be counted as correctly recalled. Note that in the WLM, every word in the source narrative is part of one of the story elements. Thus, when we convert alignments to scores in the way just described, any alignment can generate a story element. This is true even for an alignment between function words such as the and of, which would be unlikely individually to indicate that a story element had been recalled. To avoid such scoring errors, we disregard any word alignment pair containing a function word from the source narrative. The two exceptions to this rule are the final two words, for her, which are not content words but together make a single story element. Recall that in the manually derived word alignments, certain alignment pairs were marked as possible if the word in the retelling was logically equivalent to the word in the source but was not a permissible substitute according to the published scoring guidelines. When extracting scores from a manual alignment, only sure alignments are considered. This enables us to extract scores from a manual word alignment with 100% accuracy. The possible manual alignments are used only for calculating the alignment error rate (AER) of an automatic word alignment model. From the list of story elements extracted in this way, the summary score reported under standard scoring guidelines can be determined simply by counting the number of story elements extracted. Table 5 shows the story elements extracted from the manual word alignment in Table 4.
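As a concrete illustration of this extraction procedure, the following sketch converts an alignment into a set of recalled story elements and a summary score. The element map, word indices, and alignment pairs shown are hypothetical fragments of our own devising; the real mapping comes from the published WLM scoring guidelines.

```python
# Sketch of extracting story elements from a word alignment.
FUNCTION_WORDS = {"the", "of", "a", "and", "by", "was"}  # illustrative subset

# source word index -> (source word, story element ID); every source word
# belongs to exactly one of the 25 elements. Indices here are hypothetical.
SOURCE = {38: ("small", "Q"), 39: ("children", "Q"),
          53: ("touched", "T"), 54: ("by", "T")}

def extract_elements(alignment):
    """alignment: set of (source_index, retelling_index) sure-alignment pairs."""
    recalled = set()
    for src_i, _ in alignment:
        word, element = SOURCE.get(src_i, (None, None))
        # Alignments anchored on source function words are ignored (the real
        # system makes an exception for the final two words, "for her").
        if word is None or word in FUNCTION_WORDS:
            continue
        recalled.add(element)
    return recalled

elements = extract_elements({(39, 7), (53, 12), (54, 13)})
print(sorted(elements), len(elements))  # recalled elements and summary score
```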
The WLM immediate and delayed retellings for all of the 235 experimental participants and the 48 retellings from participants in the larger study who were not eligible for the present study were transcribed at the word level. Partial words, punctuation, and pause-fillers were excluded from all transcriptions used for this study. The retellings were manually scored according to published guidelines. In addition, we manually produced word-level alignments between each retelling and the source narrative presented. These manual alignments were used to evaluate the word alignment quality and never to train the word alignment model. Word alignment for phrase-based machine translation typically takes as input a sentence-aligned parallel corpus or bi-text, in which a sentence on one side of the corpus is a translation of the sentence in that same position on the other side of the corpus. Because we are interested in learning how to align words in the source narrative to words in the retellings, our primary parallel corpus must consist of source narrative text on one side and retelling text on the other. Because the retellings contain omissions, reorderings, and embellishments, we are obliged to consider the full text of the source narrative and of each retelling to be a “sentence” in the parallel corpus. We compiled three parallel corpora to be used for the word alignment experiments:

Corpus 1: A 518-line source-to-retelling corpus consisting of the source narrative paired with each of the two retellings from the 235 experimental participants as well as the 48 retellings from ineligible individuals.

Corpus 2: A 268,324-line pairwise retelling-to-retelling corpus, consisting of every possible pairwise combination of the 518 available retellings.

Corpus 3: A 976-line word identity corpus, consisting of every word that appears in any retelling and the source narrative paired with itself.

The explicit parallel alignments of word identities that compose Corpus 3 are included in order to encourage the alignment of a word in a retelling to that same word in the source, if it exists. The word alignment techniques that we use are unsupervised. Other than the transcriptions themselves, no manually generated data is used to build the word alignment models. Therefore, as in the case with most experiments involving word alignment, we build a model for the data we wish to evaluate using that same data.
We do, however, use the 48 retellings from the individuals who were not experimental participants as a development set for tuning the various parameters of our word alignment system, which are described in the following. We begin by building two word alignment models using the Berkeley aligner (Liang, Taskar, and Klein 2006), a state-of-the-art word alignment package that relies on IBM Models 1 and 2 (Brown et al. 1993) and an HMM. We chose to use the Berkeley aligner, rather than the more widely used Giza++ alignment package, for this task because its joint training and posterior decoding algorithms yield lower alignment error rates on most data sets (including the data set used here [Prud’hommeaux and Roark 2011]) and because it offers functionality for testing an existing model on new data and, more crucially, for outputting posterior probabilities. The smaller of our two Berkeley-generated models is trained on Corpus 1 (the source-to-retelling parallel corpus described earlier) and ten copies of Corpus 3 (the word identity corpus). The larger model is trained on Corpus 1, Corpus 2 (the pairwise retelling corpus), and 100 copies of Corpus 3. Both models are then tested on the 470 retellings from our 235 experimental participants. In addition, we use both models to align every retelling to every other retelling so that we will have all pairwise alignments available for use in the graph-based model presented in the next section. We note that the Berkeley aligner occasionally fails to return an alignment for a sentence pair, either because one of the sentences is too long or because the time required to perform the necessary calculations exceeds some maximum allotted time. In these cases, in order to generate alignments for all retellings and to build a complete graph that includes all retellings, we back off to the alignments and posteriors generated by IBM Model 1. The first two rows of Table 6 show the precision, recall, and alignment error rate (AER) (Och and Ney 2003) for these two Berkeley aligner models. We note that although the AER for the larger model is lower, the time required to train the model is significantly longer. The alignments generated by the Berkeley aligner serve not only as a baseline for comparison of word alignment quality but also as a springboard for the novel graph-based method of alignment we will now discuss. Graph-based methods, in which paths or random walks are traced through an interconnected graph of nodes in order to learn more about the nodes themselves, have been used for NLP tasks in information extraction and retrieval, including Web-page ranking (PageRank; Page et al. 1999) and extractive summarization (LexRank; Erkan and Radev 2004; Otterbacher, Erkan, and Radev 2009). In the PageRank algorithm, the nodes of the graph are Web pages and the edges connecting the nodes are the hyperlinks leading from those pages to other pages. The nodes in the LexRank algorithm are sentences in a document and the edges are the similarity scores between those sentences. The number of times that a particular node is visited in a random walk reveals information about the importance of that node and its relationship to the other nodes. In many applications of random walks, the goal is to determine which node is the most central or has the highest prestige. In word alignment, however, the goal is to learn new relationships and strengthen existing relationships between words in a retelling and words in the source narrative. 
In the case of our graph-based method for word alignment, each node represents a word in one of the retellings or in the source narrative. The edges are the normalized posterior-weighted alignments that the Berkeley aligner proposes between each word and (1) words in the source narrative, and (2) words in the other retellings. We generate these edges by using an existing baseline alignment model to align every retelling to every other retelling and to the source narrative. The posterior probabilities produced by the baseline alignment model serve as the weights on the edges. At each step in the walk, the choice of the next destination node can be determined according to the strength of the outgoing edges, as measured by the posterior probability of that alignment. Starting at a word in one of the retellings, represented by a node in the graph, the algorithm can walk from that node either to another retelling word in the graph to which it is aligned or to a word in the source narrative to which it is aligned. At each step in the walk, there is an empirically derived probability, λ, that sets the likelihood of transitioning to another retelling word versus a word in the source narrative. This probability functions similarly to the damping factor used in PageRank and LexRank, although its purpose is quite different. Once the decision whether to walk to a retelling word or source word has been made, the destination word itself is chosen according to the weights, which are the posterior probabilities assigned by the baseline alignment model. When the walk arrives at a source narrative word, that particular random walk ends, and the count for that source word as a possible alignment for the input retelling word is incremented by one. For each word in each retelling, we perform 1,000 of these random walks, thereby generating a distribution for each retelling word over all of the words in the source narrative. The new alignment for the word is the source word with the highest frequency in that distribution. Pseudocode for this algorithm is provided in Figure 6.
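The following minimal sketch shows the shape of this procedure (the authoritative pseudocode is in Figure 6). The graph, edge weights, and words here are hypothetical stand-ins for the posterior-weighted edges produced by a baseline aligner, and the real system also includes a NULL source node, discussed below.

```python
import random
from collections import Counter

# Sketch of one batch of random walks for a single retelling word.
# LAMBDA is the probability of stepping to another retelling word rather
# than to a source word (tuned to 0.8 in the article).
LAMBDA = 0.8

# node -> (edges to retelling words, edges to source words); weights are
# hypothetical normalized posterior probabilities from a baseline model.
GRAPH = {
    "sympathetic": ({"moved": 0.6, "impressed": 0.4}, {}),
    "moved":       ({"sympathetic": 0.3, "impressed": 0.2}, {"touched": 0.5}),
    "impressed":   ({"moved": 0.5}, {"touched": 0.5}),
}

def weighted_choice(weights):
    return random.choices(list(weights), weights=list(weights.values()))[0]

def random_walk(start):
    node = start
    while True:
        retell_edges, source_edges = GRAPH[node]
        # With probability 1 - LAMBDA, end the walk at a source word
        # (if one is reachable); otherwise step to another retelling word.
        if source_edges and (not retell_edges or random.random() > LAMBDA):
            return weighted_choice(source_edges)
        node = weighted_choice(retell_edges)

counts = Counter(random_walk("sympathetic") for _ in range(1000))
print(counts.most_common(1))  # most frequent source word = new alignment
```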
Consider the following excerpts of five of the retellings. In each excerpt, the word that should align to the source word touched is rendered in bold:

the police were so moved by the story that they took up a collection for her
the fellow was sympathetic and made a collection for her so that she can feed the children
the police were touched by their story so they took up a collection
the police were so impressed with her story they took up a collection
the police felt sorry for her and took up a collection

Figure 7 presents a small idealized subgraph of the pairwise alignments of these five retellings. The arrows represent the alignments proposed by the Berkeley aligner between the relevant words in the retellings and their alignment (or lack of alignment) to the word touched in the source narrative. Thin arrows indicate alignment edges in the graph between retelling words, and bold arrows indicate alignment edges between retelling words and words in the source narrative. Words in the retellings are rendered as nodes with a single outline, and words in the source are rendered as nodes with a double outline. We see that a number of these words were not aligned to the correct source word, touched. They are all, however, aligned to other retelling words that are in turn eventually aligned to the source word. Starting at any of the nodes in the graph, it is possible to walk from node to node and eventually reach the correct source word. Although sympathetic was not aligned to touched by the Berkeley aligner, its correct alignment can be recovered from the graph by following the path through other retelling words. After hundreds or thousands of random walks on the graph, evidence for the correct alignment will accumulate. The approach as described might seem most beneficial to a system in need of improvements to recall rather than precision. Our baseline systems, however, are already favoring recall over precision. For this reason, we include the NULL word in the list of words in the source narrative. We note that most implementations of both IBM Model 1 and HMM-based alignment also model the probability of aligning to a hidden word, NULL. In word alignment for machine translation, alignment to NULL usually indicates that a word in one language has no equivalent in the other language because the two languages express the same idea or construction in a slightly different way. Romance languages, for instance, often use prepositions before infinitival complements (e.g., Italian cerco di ridere) when English does not (e.g., I try to laugh). In the alignment of narrative retellings, however, alignment to NULL often indicates that the word in question is part of an aside or a piece of information that was not expressed in the source narrative. Any retelling word that is not aligned to a source word by the baseline alignment system will implicitly be aligned to the hidden source word NULL, guaranteeing that every retelling word has at least one outgoing alignment edge and allowing us to model the likelihood of being unaligned. A word that was unaligned by the original system can remain unaligned. A word that should have been left unaligned but was mistakenly aligned to a source word by the original system can recover its correct (lack of) alignment by following an edge to another retelling word that was correctly left unaligned (i.e., aligned to NULL). Figure 8 shows the graph in Figure 7 with the addition of the NULL node and the corresponding alignment edges to that node. This figure also includes two new retelling words, food and apple, and their respective alignment edges. Here we see that although the retelling word food was incorrectly aligned to the source word touched by the baseline system, its correct alignment to NULL can be recovered by traversing the edge to retelling word apple and from there, the edge to the source word NULL. The optimal values for the following two parameters for the random walk must be determined: (1) the value of λ, the probability of walking to a retelling word node rather than a source word, and (2) the posterior probability threshold for including a particular edge in the graph. We optimize these parameters by testing the output of the graph-based approach on the development set of 48 retellings from the individuals who were not eligible for the study, discussed in Section 5.3. Recall that these additional retellings were included in the training data for the alignment model but were not included in the test set used to evaluate its performance. Tuning on this set of retellings therefore introduces no additional words, out-of-vocabulary words, or other information to the graph, while preventing overfitting. The posterior threshold is set to 0.5 in the Berkeley aligner’s default configuration, and we found that this value did indeed yield the lowest AER for the Berkeley aligner on our data.
When building the graph using Berkeley alignments and posteriors, however, we can adjust the value of this threshold to optimize the AER of the alignments produced via random walks. Using the development set of 48 retellings, we determined that the AER is minimized when the value of λ is 0.8 and the alignment inclusion posterior threshold is 0.5. Recall the two baseline alignment models generated by the Berkeley aligner, described in Section 5.4: (1) the small Berkeley model, trained on Corpus 1 (the source-to-retelling corpus) and 10 instances of Corpus 3 (the word identity corpus), and (2) the large Berkeley model, trained on Corpus 1, Corpus 2 (the full pairwise retelling-to-retelling corpus), and 100 instances of Corpus 3. Using these models, we generate full retelling-to-retelling alignments, on which we can then build two graph-based alignment models: the small graph-based model and the large graph-based model. The alignments produced by each of the four models were evaluated against the manual gold alignments for the 235 experimental participants. Table 6 presents the precision, recall, and AER for the alignments of the experimental participants. Not surprisingly, the larger models yield lower error rates than the smaller models. More interestingly, each graph-based model outperforms the Berkeley model of the corresponding size by a large margin. The performance of the small graph-based model is particularly remarkable because it yields an AER superior to the large Berkeley model while requiring significantly fewer computing resources. Each of the graph-based models generated the full set of alignments in only a few minutes, whereas the large Berkeley model required 14 hours of training. 6 scoring evaluation : The element-level scores induced, as described in Section 5.2, from the four word alignments for all 235 experimental participants were evaluated against the manual per-element scores. We report the precision, recall, and F-measure for all four alignment models in Table 7. In addition, we report Cohen’s kappa as a measure of reliability between our automated scores and the manually assigned scores. We see that as AER improves, scoring accuracy also improves, with the large graph-based model outperforming all other models in terms of precision, F-measure, and inter-rater reliability. The scoring accuracy levels reported here are comparable to the levels of inter-rater agreement typically reported for the WLM, and reliability between our automated scores and the manual scores, as measured by Cohen’s kappa, is well within the ranges reported in the literature (Johnson, Storandt, and Balota 2003). As will be shown in the following section, scoring accuracy is important for achieving high classification accuracy of MCI.
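For reference, the scoring-evaluation measures used above (precision, recall, F-measure, and Cohen’s kappa) can be computed from paired element-level score vectors as in the following sketch. The vectors shown are hypothetical, and scikit-learn is our choice of tooling for illustration, not necessarily what was used in the study.

```python
from sklearn.metrics import cohen_kappa_score, precision_recall_fscore_support

# Hypothetical element-level scores (1 = element judged recalled) for the
# same retellings, scored manually and by the automated aligner.
manual    = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
automatic = [1, 0, 1, 0, 0, 0, 1, 1, 1, 1]

precision, recall, f_measure, _ = precision_recall_fscore_support(
    manual, automatic, average="binary")
kappa = cohen_kappa_score(manual, automatic)
print(precision, recall, f_measure, kappa)
```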
7 diagnostic classification : As discussed in Section 2, poor performance on the WLM test is associated with MCI. We now use the scores we have extracted from the word alignments as features with an SVM to perform diagnostic classification for distinguishing participants with MCI from those without, as described in Section 4.1. Table 8 shows the classification results for the scores derived from the four alignment models along with the classification results using the examiner-assigned manual scores, the MMSE, and the four alternative automated scoring approaches described in Section 4.2. It appears that, in all cases, the per-element scores are more effective than the summary scores in classifying the two diagnostic groups. In addition, we see that our automated scores have classificatory power comparable to that of the manual gold scores, and that as scoring accuracy increases from the small Berkeley model to the larger and graph-based models, classification accuracy improves. This suggests both that accurate scores are crucial for accurate classification and that pursuing even further improvements in word alignment is likely to result in improved diagnostic differentiation. We note that although the large Berkeley model achieved the highest classification accuracy of the automated methods, this very slight margin of difference may not justify its significantly greater computational requirements. In addition to using summary scores and element-level scores as features for the story-element-based models, we also perform feature selection over both sets of features using the chi-square statistic (see the sketch at the end of this section). Feature selection is performed separately on each training set for each fold in the cross-validation to avoid introducing bias from the testing example. We train and test the SVM using the top n story element features, from n = 1 to n = 50. We report here the accuracy for the top seven story elements (n = 7), which yielded the highest AUC measure. We note that over all of the folds, only 8 of the 50 features ever appeared among the seven most informative. In all cases, the per-element scores are more effective than the summary scores in classifying the two diagnostic groups, and performing feature selection results in improved classification accuracy. All of the element-level feature sets automatically extracted from alignments outperform the MMSE and all of the alternative automatic scoring procedures, which suggests that the extra complexity required to extract element-level features is well worth the time and effort. We note that the final classification results for all four alignment models are not drastically different from one another, despite the large reductions in word alignment error rate and improvements in scoring accuracy observed in the larger models and graph-based models. This seeming disconnect between word alignment accuracy and downstream application performance has also been observed in the machine translation literature, where reductions in AER do not necessarily lead to meaningful increases in BLEU, the widely accepted measure of machine translation quality (Ayan and Dorr 2006; Lopez and Resnik 2006; Fraser and Marcu 2007). Our results, however, show that a feature set consisting of manually assigned WLM scores yields the highest classification accuracy of any of the feature sets evaluated here. As discussed in Section 5.2, our WLM score extraction method is designed such that element-level scores can be extracted with perfect accuracy from a perfect word alignment. Thus, the goal of seeking perfect or near-perfect word alignment accuracy is worthwhile because it will necessarily result in perfect or near-perfect scoring accuracy, which in turn is likely to yield classification accuracy approaching that of manually assigned scores.
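A minimal sketch of this fold-internal feature-selection setup follows, using scikit-learn as a stand-in for the LibSVM/Weka configuration described in Section 4; the data values are hypothetical placeholders for the 50-dimensional element-level vectors.

```python
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# X: one row per participant, 50 binary element-level scores (25 elements
# x 2 retellings); y: 1 = MCI, 0 = typically aging. Values are hypothetical.
X = [[1, 0, 1, 1] * 12 + [1, 0],
     [0, 0, 1, 0] * 12 + [0, 1],
     [1, 1, 1, 1] * 12 + [1, 1],
     [0, 0, 0, 1] * 12 + [0, 0]]
y = [1, 1, 0, 0]

# Chaining selection and classifier in one pipeline means that, under
# cross-validation, the chi-square ranking is re-fit on each training fold
# only, so the held-out examples never influence which features are kept.
model = make_pipeline(SelectKBest(chi2, k=7), SVC(kernel="rbf"))
model.fit(X, y)
print(model.predict(X[:1]))
```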
8 application to task with non-linguistic reference : As we discussed earlier, one of the advantages of using an unsupervised method of scoring is the resulting generalizability to new data sets, particularly those generated from a non-linguistic stimulus. The Boston Diagnostic Aphasia Examination (BDAE) (Goodglass and Kaplan 1972), an instrument widely used to diagnose aphasia in adults, includes one such task, popularly known as the cookie theft picture description task. In this test, the person views a drawing of a lively scene in a family’s kitchen and must tell the examiner about all of the actions they see in the picture. The picture is reproduced below in Figure 9. Describing visually presented material is quite different from a task such as the WLM, in which language comprehension and memory play a crucial role. Nevertheless, the processing and language production demands of a picture description task may lead to differences in performance in groups with certain cognitive and language problems. In fact, it is widely reported that the picture descriptions of seniors with dementia of the Alzheimer’s type differ from those of typically aging seniors in terms of information content (Hier, Hagenlocker, and Shindler 1985; Giles, Patterson, and Hodges 1996). Interestingly, this reduction in information is not necessarily accompanied by a reduction in the amount of language produced. Rather, it seems that seniors with Alzheimer’s dementia tend to include redundant information, repetitions, intrusions, and revisions that result in language samples of length comparable to that of typically aging seniors. TalkBank (MacWhinney 2007), the online database of audio and transcribed speech, has made available the DementiaBank corpus of descriptions of the cookie theft picture by hundreds of individuals, some of whom have one of a number of types of dementia, including MCI, vascular dementia, possible Alzheimer’s disease, and probable Alzheimer’s disease. From this corpus we selected a subset of individuals without dementia and a subset with probable Alzheimer’s disease. We limited the set of descriptions to those with more than 25 but fewer than 100 words, yielding 130 descriptions for each diagnostic group. There was no significant difference in description word count between the two diagnostic groups. The first task was to generate a source description to which all other narratives should be aligned. Working under the assumption that the control participants would produce good descriptions, we calculated the BLEU score of every pair of descriptions from the control group. The description with the highest average pairwise BLEU score was selected as the source description (a sketch of this selection step follows below). After confirming that this description did in fact contain all of the action portrayed in the picture, we removed all extraneous conversational asides from the description in order to ensure that it contained all and only information about the picture. The selected source description is as follows: The boy is getting cookies out of the cookie jar. And the stool is just about to fall over. The little girl is reaching up for a cookie. And the mother is drying dishes. The water is running into the sink and the sink is running over onto the floor. And that little girl is laughing. We then built an alignment model on the full pairwise description parallel corpus (260² = 67,600 sentences) and a word identity corpus consisting of each word in each description reproduced 100 times. Using this trained model, which corresponds to the large Berkeley model that achieved the highest classification accuracy for the WLM data, we then aligned every description to the artificial source description.
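The source-selection step referenced above can be sketched as follows, using NLTK’s sentence-level BLEU as a stand-in for whatever BLEU implementation was actually used; the three descriptions are hypothetical stand-ins for the 130 control descriptions.

```python
from nltk.translate.bleu_score import SmoothingFunction, sentence_bleu

# Sketch: select the control description with the highest average pairwise
# BLEU score against all other control descriptions.
descriptions = [
    "the boy is getting cookies out of the cookie jar".split(),
    "a boy takes cookies while the stool tips over".split(),
    "the mother is drying dishes and the sink overflows".split(),
]

smooth = SmoothingFunction().method1  # short texts need n-gram smoothing

def avg_pairwise_bleu(candidate, others):
    return sum(sentence_bleu([ref], candidate, smoothing_function=smooth)
               for ref in others) / len(others)

scores = [(avg_pairwise_bleu(d, [o for o in descriptions if o is not d]), i)
          for i, d in enumerate(descriptions)]
best_score, best_i = max(scores)
print(best_i, best_score)  # index of the selected source description
```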
We also built a graph-based alignment model using these alignments and the parameter settings that maximized word alignment accuracy in the WLM data. Because the artificial source description is not a true linguistic reference for this task, we did not produce manual word alignments against which the alignment quality could be evaluated and against which the parameters could be tuned. Instead, we evaluated only the downstream application of diagnostic classification. The method for scoring the WLM relies directly on the predetermined list of story elements, whereas the cookie theft picture description administration instructions do not include an explicit set of items that must be described. Recall that the automated scoring method we propose uses only the open-class or content words in the source narrative. In order to generate scores for the descriptions, we propose a scoring technique that considers each content word in the source description to be its own story element. Any word in a retelling that aligns to one of the content words in the source narrative is considered to be a match for that content word element. This results in a large number of elements, but it allows the scoring method to be easily adapted to other narrative production scenarios that similarly do not have explicit scoring guidelines. Using these scores as features, we again used an SVM to classify the two diagnostic groups, typically aging and probable Alzheimer’s disease, and evaluated the classifier using leave-pair-out cross-validation. Table 9 shows the classification results using the content word scoring features produced using the Berkeley aligner alignments and the graph-based alignments. These can be compared to classification results using the summary similarity metrics BLEU and unigram precision. We see that using word-level features, regardless of which alignment model they are extracted from, results in significantly higher classification accuracy than both the simple similarity metrics and the summary scores. The alignment-based scoring approach yields features with remarkably high classification accuracy given the somewhat ad hoc selection of the source narrative from the set of control retellings. These results demonstrate the flexibility and utility of the alignment-based approach to scoring narratives. Not only can it be adapted to other narrative retelling instruments, but it can relatively trivially be adapted to instruments that use non-linguistic stimuli for elicitation. All that is needed to build an alignment model is a sufficiently large collection of retellings of the same narrative or descriptions of the same picture. Procuring such a collection of descriptions or retellings can be done easily outside a clinical setting using a platform such as Amazon’s Mechanical Turk. No hand-labeled data, outside lexical resources, prior knowledge of the content of the story, or existing scoring guidelines are required. Among the more recent applications for natural language processing algorithms has been the analysis of spoken language data for diagnostic and remedial purposes, fueled by the demand for simple, objective, and unobtrusive screening tools for neurological disorders such as dementia.
The automated analysis of narrative retellings in particular shows potential as a component of such a screening tool since the ability to produce accurate and meaningful narratives is noticeably impaired in individuals with dementia and its frequent precursor, mild cognitive impairment, as well as other neurodegenerative and neurodevelopmental disorders. In this article, we present a method for extracting narrative recall scores automatically and highly accurately from a word-level alignment between a retelling and the source narrative. We propose improvements to existing machine translation-based systems for word alignment, including a novel method relying on random walks on a graph that achieves alignment accuracy superior to that of standard expectation-maximization-based techniques in a fraction of the training time. In addition, the narrative recall score features extracted from these high-quality word alignments yield diagnostic classification accuracy comparable to that achieved using manually assigned scores and significantly higher than that achieved with summary-level text similarity metrics used in other areas of NLP. These methods can be trivially adapted to spontaneous language samples elicited with non-linguistic stimuli, thereby demonstrating their flexibility and generalizability.

Emily Prud’hommeaux and Brian Roark

references :
Airola, Antti, Tapio Pahikkala, Willem Waegeman, Bernard De Baets, and Tapio Salakoski. 2011. An experimental comparison of cross-validation techniques for estimating the area under the ROC.
Artero, Sylvain, Mary Tierney, Jacques Touchon, and Karen Ritchie. 2003. Prediction of transition from cognitive.
Ayan, Necip Fazil and Bonnie J. Dorr. 2006. Going beyond AER: An extensive analysis of word alignments and their impact on MT. In Proceedings of the 21st International Conference on Computational Linguistics.
Bennett, D. A., J. A. Schneider, Z. Arvanitakis, J. F. Kelly, N. T. Aggarwal, R. C. Shah, and R. S. Wilson. 2006. Neuropathology of older persons without cognitive impairment.
Bishop, Dorothy and Chris Donlan. 2005. The role of syntax in encoding and recall of pictorial narratives: Evidence from specific language impairment. British Journal of Developmental Psychology.
Brown, Peter, Vincent Della Pietra, Steven Della Pietra, and Robert Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics.
Chang, Chih-Chung and Chih-Jen Lin. 2011. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2(27):1–27.
Chapman, Sandra, Hanna Ulatowska, Kristin King, Julene Johnson, and Donald McIntire. 1995. Discourse in early Alzheimer’s disease versus normal advanced aging. American Journal of.
Chenery, Helen J. and Bruce E. Murdoch. 1994. The production of narrative discourse in response to animations in persons with dementia of the Alzheimer’s type: Preliminary findings. Aphasiology.
Cortes, Corinna, Mehryar Mohri, and Ashish Rastogi. 2007. An alternative ranking problem for search engines. In Proceedings of the 6th Workshop on Experimental Algorithms, volume 4525 of Lecture Notes in.
Creamer, Scott and Maureen Schmitter-Edgecombe. 2010. Narrative comprehension in Alzheimer’s disease: Assessing inferences and memory operations with a think-aloud procedure.
de la Rosa, Gabriela Ramirez, Thamar Solorio, Manuel Montes y Gomez, Aquiles Iglesias, Yang Liu, Lisa Bedore, and Elizabeth Pena. 2013. Exploring word class n-grams to measure language.
Diehl, Joshua J., Loisa Bennetto, and Edna Carter Young. 2006. Story recall and narrative coherence of high-functioning children with autism spectrum disorders. Journal of Abnormal Child Psychology.
Dunn, John C., Osvaldo P. Almeida, Lee Barclay, Anna Waterreus, and Leon Flicker. 2002. Latent semantic analysis: A new method to measure prose recall. Journal of Clinical and Experimental Neuropsychology.
Ehrlich, Jonathan S., Loraine K. Obler, and Lynne Clark. 1997. Ideational and semantic contributions to narrative production in adults with dementia of the Alzheimer’s type. Journal of Communication Disorders.
Erkan, Günes and Dragomir R. Radev. 2004. LexRank: Graph-based lexical centrality as salience in text summarization.
Fan, Jerome, Suneel Upadhye, and Andrew Worster. 2006. Understanding receiver operating characteristic (ROC) curves. Canadian Journal of Emergency Medicine, 8:19–20.
Faraggi, David and Benjamin Reiser. 2002. Estimation of the area under the ROC curve. Statistics in Medicine, 21:3093–3106.
Folstein, M., S. Folstein, and P. McHugh. 1975. Mini-mental state: a practical method for grading the cognitive state of patients for the clinician. Journal of Psychiatric Research, 12:189–198.
Fraser, Alexander and Daniel Marcu. 2007. Measuring word alignment quality for statistical machine translation. Computational Linguistics, 33(3):293–303.
Fraser, Kathleen C., Jed A. Meltzer, Naida L. Graham, Carol Leonard, Graeme Hirst, Sandra E. Black, and Elizabeth Rochon. 2014. Automated classification of primary progressive aphasia subtypes.
Gabani, Keyur, Melissa Sherman, Thamar Solorio, and Yang Liu. 2009. A corpus-based approach for the prediction of language impairment in monolingual English and Spanish-English bilingual.
Galvin, James, Anne Fagan, David Holtzman, Mark Mintun, and John Morris. 2010. Relationship of dementia screening tests with biomarkers of Alzheimer’s disease. Brain, 133:3290–3300.
Giles, Elaine, Karalyn Patterson, and John R. Hodges. 1996. Performance on the Boston cookie theft picture description task in patients with early dementia of the Alzheimer’s type: Missing information.
Goodglass, H. and E. Kaplan. 1972. Boston Diagnostic Aphasia Examination. Lea and Febiger, Philadelphia, PA.
Hakkani-Tur, Dilek, Dimitra Vergyri, and Gokhan Tur. 2010. Speech-based automated cognitive status assessment. In Proceedings of the Conference of the International Speech Communication.
Hall, Mark, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Peter Reutemann, and Ian H. Witten. 2009. The WEKA data mining software: An update. SIGKDD Explorations, 11(1):10–18.
Hanley, James and Barbara McNeil. 1982. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology, 143:29–36.
Hier, D., K. Hagenlocker, and A. Shindler. 1985. Language disintegration in dementia: Effects of etiology and severity. Brain and Language, 25:117–133.
Hoops, S., S. Nazem, A. D. Siderowf, J. E. Duda, S. X. Xie, M. B. Stern, and D. Weintraub. 2009. Validity of the MoCA and MMSE in the detection of MCI and dementia in Parkinson disease. Neurology.
Johnson, David K., Martha Storandt, and David A. Balota. 2003. Discourse analysis of logical memory recall in normal aging and in dementia of the Alzheimer type. Neuropsychology, 17(1):82–92.
Lautenschlager, Nicola T., John C. Dunn, Kathryn Bonney, Leon Flicker, and Osvaldo P. Almeida. 2006. Latent semantic analysis: An improved method to measure cognitive performance in subjects.
Lehr, Maider, Emily Prud’hommeaux, Izhak Shafran, and Brian Roark. 2012. Fully automated neuropsychological assessment for detecting mild cognitive impairment. In Proceedings of the 13th Annual Conference.
Lehr, Maider, Izhak Shafran, Emily Prud’hommeaux, and Brian Roark. 2013. Discriminative joint modeling of lexical variation and acoustic confusion for automated narrative retelling.
Liang, Percy, Ben Taskar, and Dan Klein. 2006. Alignment by agreement. In Proceedings of the Human Language Technology Conference of the NAACL, pages 104–111, New York, NY.
Lin, Chin-Yew. 2004. ROUGE: A package for automatic evaluation of summaries. In Proceedings of the Workshop on Text Summarization Branches Out, pages 74–81, Barcelona.
Lopez, Adam and Philip Resnik. 2006. Word-based alignment, phrase-based translation: What’s the link? In Proceedings.
Lysaker, Paul, Amanda Wickett, Neil Wilke, and John Lysaker. 2003. Narrative incoherence in schizophrenia: The absent agent-protagonist and the collapse of internal dialogue. American Journal of.
MacWhinney, Brian. 2007. The TalkBank Project. In J. C. Beal, K. P. Corrigan, and H. L. Moisl, editors, Creating and Digitizing Language Corpora: Synchronic Databases, Vol. 1, pages 163–180. Palgrave-Macmillan.
Manly, Jennifer J., Ming Tang, Nicole Schupf, Yaakov Stern, Jean-Paul G. Vonsattel, and Richard Mayeux. 2008. Frequency and course of mild cognitive impairment in a multiethnic community. Annals of.
Mitchell, Margaret. 1987. Scoring discrepancies on two subtests of the Wechsler memory scale. Journal of Consulting and Clinical Psychology, 55:914–915.
Morris, John. 1993. The Clinical Dementia Rating (CDR): Current version and scoring rules. Neurology, 43:2412–2414.
Morris, John, Martha Storandt, J.
Phillip Miller"", ""Daniel McKeel"", ""Joseph Price"", ""Eugene Rubin"", ""Leonard Berg.""], ""title"": ""Mild cognitive impairment represents early-stage Alzheimer disease"", ""venue"": ""Archives of"", ""year"": 2001}, {""authors"": [""Norbury"", ""Courtenay"", ""Dorothy Bishop.""], ""title"": ""Narrative skills of children with communication impairments"", ""venue"": ""International Journal of Language and Communication Disorders, 38:287\u2013313."", ""year"": 2003}, {""authors"": [""A. Nordlund"", ""S. Rolstad"", ""P. Hellstrom"", ""M. Sjogren"", ""S. Hansen"", ""A. Wallin.""], ""title"": ""The Goteborg MCI study: Mild cognitive impairment is a heterogeneous condition"", ""venue"": ""Journal of Neurology, Neurosurgery and"", ""year"": 2005}, {""authors"": [""Och"", ""Franz Josef"", ""Hermann Ney.""], ""title"": ""A systematic comparison of various statistical alignment models"", ""venue"": ""Computational Linguistics, 29(1):19\u201351."", ""year"": 2003}, {""authors"": [""Otterbacher"", ""Jahna"", ""G\u00fcnes Erkan"", ""Dragomir R. Radev.""], ""title"": ""Biased LexRank: Passage retrieval using random walks with question-based priors"", ""venue"": ""Information Processing Management,"", ""year"": 2009}, {""authors"": [""Page"", ""Lawrence"", ""Sergey Brin"", ""Rajeev Motwani"", ""Terry Winograd""], ""title"": ""The PageRank citation ranking: Bringing order"", ""year"": 1999}, {""authors"": [""Prud\u2019hommeaux"", ""Roark""], ""title"": ""Graph-Based Word Alignment for Clinical Language Evaluation to the web"", ""venue"": ""Technical Report 1999-66,"", ""year"": 1999}, {""authors"": [""Pahikkala"", ""Tapio"", ""Antti Airola"", ""Jorma Boberg"", ""Tapio Salakoski.""], ""title"": ""Exact and efficient leave-pair-out cross-validation for ranking RLS"", ""venue"": ""The Second International and Interdisciplinary Conference on Adaptive"", ""year"": 2008}, {""authors"": [""Papineni"", ""Kishore"", ""Salim Roukos"", ""Todd Ward"", ""Wei jing Zhu.""], ""title"": ""BLEU: A method for automatic evaluation of machine translation"", ""venue"": ""Proceedings of the 40th Annual Meeting of the Association for"", ""year"": 2002}, {""authors"": [""Petersen"", ""Ronald"", ""Glenn Smith"", ""Stephen Waring"", ""Robert Ivnik"", ""Eric Tangalos"", ""Emre Kokmen.""], ""title"": ""Mild cognitive impairment: Clinical characterizations and outcomes"", ""venue"": ""Archives of Neurology,"", ""year"": 1999}, {""authors"": [""Petersen"", ""Ronald C.""], ""title"": ""Mild cognitive impairment"", ""venue"": ""The New England Journal of Medicine, 364(23):2227\u20132234."", ""year"": 2011}, {""authors"": [""John J. McArdle"", ""Robert J. Willis"", ""Robert B. Wallace.""], ""title"": ""Prevalence of cognitive impairment without dementia in the United States"", ""venue"": ""Annals of Internal Medicine, 148:427\u201334."", ""year"": 2008}, {""authors"": [""John C. 
Morris.""], ""title"": ""Neuropathology of nondemented aging: Presumptive evidence for preclinical Alzheimer disease"", ""venue"": ""Neurobiology of Aging, 30(7):1026\u20131036."", ""year"": 2009}, {""authors"": [""Prud\u2019hommeaux"", ""Emily"", ""Brian Roark""], ""title"": ""Alignment of spoken narratives for automated neuropsychological assessment"", ""venue"": ""In Proceedings of the IEEE Workshop on Automatic Speech Recognition"", ""year"": 2011}, {""authors"": [""Prud\u2019hommeaux"", ""Emily"", ""Brian Roark""], ""title"": ""Graph-based alignment of narratives"", ""year"": 2012}, {""authors"": [""Prud\u2019hommeaux"", ""Emily"", ""Masoud Rouhizadeh""], ""title"": ""Automatic detection of pragmatic deficits in children with autism"", ""venue"": ""In Proceedings of the 3rd Workshop on Child, Computer and Interaction,"", ""year"": 2012}, {""authors"": [""Prud\u2019hommeaux"", ""Emily Tucker""], ""title"": ""Alignment of Narrative Retellings for Automated Neuropsychological Assessment"", ""venue"": ""Ph.D. thesis,"", ""year"": 2012}, {""authors"": [""Ravaglia"", ""Giovanni"", ""Paola Forti"", ""Fabiola Maioli"", ""Lucia Servadei"", ""Mabel Martelli"", ""Nicoletta Brunetti"", ""Luciana Bastagli"", ""Erminia Mariani""], ""title"": ""Screening for mild cognitive impairment in elderly"", ""year"": 2005}, {""authors"": [""Ridgway"", ""James"", ""Pierre Alquier"", ""Nicolas Chopin"", ""Feng Liang.""], ""title"": ""PAC-Bayesian AUC classification and scoring"", ""venue"": ""Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and"", ""year"": 2014}, {""authors"": [""Ritchie"", ""Karen"", ""Sylvaine Artero"", ""Jacques Touchon.""], ""title"": ""Classification criteria for mild cognitive impairment: A population-based validation study"", ""venue"": ""Neurology, 56:37\u201342."", ""year"": 2001}, {""authors"": [""Ritchie"", ""Karen"", ""Jacques Touchon.""], ""title"": ""Mild cognitive impairment: Conceptual basis and current nosological status"", ""venue"": ""Lancet, 355:225\u2013228."", ""year"": 2000}, {""authors"": [""Roark"", ""Brian"", ""Margaret Mitchell"", ""John-Paul Hosom"", ""Kristina Hollingshead"", ""Jeffrey Kaye.""], ""title"": ""Spoken language derived measures for detecting mild cognitive impairment"", ""venue"": ""IEEE Transactions on Audio,"", ""year"": 2011}, {""authors"": [""F.A. Schmitt"", ""D.G. Davis"", ""D.R. Wekstein"", ""C.D. Smith"", ""J.W. Ashford"", ""W.R. Markesbery.""], ""title"": ""Preclinical AD revisited: Neuropathology of cognitively normal older adults"", ""venue"": ""Neurology,"", ""year"": 2000}, {""authors"": [""Shankle"", ""William R"", ""A. Kimball Romney"", ""Junko Hara"", ""Dennis Fortier"", ""Malcolm B. Dick"", ""James M. Chen"", ""Timothy Chan"", ""Xijiang Sun""], ""title"": ""Methods to improve the detection of mild cognitive"", ""year"": 2005}, {""authors"": [""Solorio"", ""Thamar"", ""Yang Liu.""], ""title"": ""Using language models to identify language impairment in Spanish-English bilingual children"", ""venue"": ""Proceedings of the ACL 2008 Workshop on Biomedical Natural Language"", ""year"": 2008}, {""authors"": [""Storandt"", ""Martha"", ""Robert Hill.""], ""title"": ""Very mild senile dementia of the Alzheimer\u2019s type: II"", ""venue"": ""Psychometric test performance. 
Archives of Neurology, 46:383\u2013386."", ""year"": 1989}, {""authors"": [""Tager-Flusberg"", ""Helen.""], ""title"": ""Once upon a ribbit: Stories narrated by autistic children"", ""venue"": ""British Journal of Developmental Psychology, 13(1):45\u201359."", ""year"": 1995}, {""authors"": [""Tannock"", ""Rosemary"", ""Karen L. Purvis"", ""Russell J. Schachar.""], ""title"": ""Narrative abilities in children with attention deficit hyperactivity disorder and normal peers"", ""venue"": ""Journal of Abnormal Child Psychology,"", ""year"": 1993}, {""authors"": [""Tierney"", ""Mary"", ""Christie Yao"", ""Alex Kiss"", ""Ian McDowell.""], ""title"": ""Neuropsychological tests accurately predict incident Alzheimer disease after 5 and 10 years"", ""venue"": ""Neurology, 64:1853\u20131859."", ""year"": 2005}, {""authors"": [""Ulatowska"", ""Hanna"", ""Lee Allard"", ""Adrienne Donnell"", ""Jean Bristow"", ""Sara M. Haynes"", ""Adelaide Flower"", ""Alvin J. North""], ""title"": ""Discourse performance in subjects with dementia"", ""year"": 1988}, {""authors"": [""United Nations.""], ""title"": ""World Population Ageing 1950\u20132050"", ""venue"": ""United Nations, New York."", ""year"": 2002}, {""authors"": [""Vuorinen"", ""Elina"", ""Matti Laine"", ""Juha Rinne.""], ""title"": ""Common pattern of language impairment in vascular dementia and in Alzheimer disease"", ""venue"": ""Alzheimer Disease and Associated Disorders, 14(2):81\u201386."", ""year"": 2000}, {""authors"": [""Wang"", ""Qing-Song"", ""Jiang-Ning Zhou.""], ""title"": ""Retrieval and encoding of episodic memory in normal aging and patients with mild cognitive impairment"", ""venue"": ""Brain Research, 924:113\u2013115."", ""year"": 2002}, {""authors"": [""Wechsler"", ""David.""], ""title"": ""Wechsler Memory Scale - Third Edition"", ""venue"": ""The Psychological Corporation, San Antonio, TX."", ""year"": 1997}, {""authors"": [""Zweig"", ""Mark H."", ""Gregory Campbell.""], ""title"": ""Receiver-operating characteristic (ROC) plots: A fundamental evaluation tool in clinical medicine"", ""venue"": ""Clinical Chemistry, 39:561\u2013577."", ""year"": 1993}] acknowledgments :This research was conducted while both authors were at the Center for Spoken Language Understanding at the Oregon Health and Science University, in Portland, Oregon. This work was supported in part by NSF grant BCS-0826654 and NIH NIDCD grants R01DC012033-01 and R01DC007129. Any opinions, findings, conclusions or recommendations expressed in this publication are those of the authors and do not reflect the views of the NIH or NSF. Some of the results reported here appeared previously in Prud’hommeaux and Roark (2012) and the first author’s dissertation (Prud’hommeaux 2012). We thank Jan van Santen, Richard Sproat, and Chris Callison-Burch for their valuable input and the clinicians at the OHSU Layton Center for their care in collecting the data. 9 conclusions and future work :The work presented here demonstrates the utility of adapting NLP algorithms to clinically elicited data for diagnostic purposes. In particular, the approach we describe for automatically analyzing clinically elicited language data shows promise as part of a pipeline for a screening tool for mild cognitive impairment. The methods offer the additional benefit of being general and flexible enough to be adapted to new data sets, even those without existing evaluation guidelines. In addition, the novel graph-based approach to word alignment results in large reductions in alignment error rate. 
2 background : Because of the variety of intact cognitive functions required to generate a narrative, the inability to coherently produce or recall a narrative is associated with many different disorders, including not only neurodegenerative conditions related to dementia, but also autism (Tager-Flusberg 1995; Diehl, Bennetto, and Young 2006), language impairment (Norbury and Bishop 2003; Bishop and Donlan 2005), attention deficit disorder (Tannock, Purvis, and Schachar 1993), and schizophrenia (Lysaker et al. 2003). The bulk of the research presented here, however, focuses on the utility of a particular narrative recall task, the Wechsler Logical Memory subtest of the Wechsler Memory Scale (Wechsler 1997), for diagnosing mild cognitive impairment (MCI). (This and other abbreviations are listed in Table 1.) MCI is the stage of cognitive decline between the sort of decline expected in typical aging and the decline associated with dementia or Alzheimer's disease (Petersen et al. 1999; Ritchie and Touchon 2000; Petersen 2011). MCI is characterized by subtle deficits in functions of memory and cognition that are clinically significant but do not prevent carrying out the activities of daily life. This intermediary phase of decline has been identified and named numerous times: mild cognitive decline, mild neurocognitive decline, very mild dementia, isolated memory impairment, questionable dementia, and incipient dementia.
Although there continues to be disagreement about the diagnostic validity of the designation (Ritchie and Touchon 2000; Ritchie, Artero, and Touchon 2001), a number of recent studies have found evidence that seniors with some subtypes of MCI are significantly more likely to develop dementia than the population as a whole (Busse et al. 2006; Manly et al. 2008; Plassman et al. 2008). Early detection can benefit both patients and researchers investigating treatments for halting or slowing the progression of dementia, but identifying MCI can be problematic, as most dementia screening instruments, such as the Mini-Mental State Exam (MMSE) (Folstein, Folstein, and McHugh 1975), lack sufficient sensitivity to the very subtle cognitive deficits that characterize the disorder (Morris et al. 2001; Ravaglia et al. 2005; Hoops et al. 2009). Diagnosis of MCI currently requires both a lengthy neuropsychological evaluation of the patient and an interview with a family member or close associate, both of which should be repeated at regular intervals in order to have a baseline for future comparison. One goal of the work presented here is to determine whether an analysis of spoken language responses to a narrative recall task, the Wechsler Logical Memory subtest, can be used as a more efficient and less intrusive screening tool for MCI. In the Wechsler Logical Memory (WLM) narrative recall subtest of the Wechsler Memory Scale, the individual listens to a brief narrative and must verbally retell the narrative to the examiner once immediately upon hearing the story and again after a delay of 20 to 30 minutes. The examiner scores each retelling according to how many story elements the patient uses in the retelling. The standard scoring procedure, described in more detail in Section 3.2, results in a single summary score for each retelling, immediate and delayed, corresponding to the total number of story elements recalled in that retelling. The Anna Thompson narrative, shown in Figure 1 (later in this article), has been used as the primary WLM narrative for over 70 years and has been found to be sensitive to dementia and related conditions, particularly in combination with tests of verbal fluency and memory. Multiple studies have demonstrated a significant difference in performance on the WLM between individuals with MCI and typically aging controls under the standard scoring procedure (Storandt and Hill 1989; Petersen et al. 1999; Wang and Zhou 2002; Nordlund et al. 2005). Further studies have shown that performance on the WLM can help predict whether MCI will progress into Alzheimer's disease (Morris et al. 2001; Artero et al. 2003; Tierney et al. 2005). The WLM can also serve as a cognitive indicator of physiological characteristics associated with Alzheimer's disease. WLM scores in the impaired range are associated with the presence of changes in Pittsburgh compound B and cerebrospinal fluid amyloid beta protein, two biomarkers of Alzheimer's disease (Galvin et al. 2010). Poor performance on the WLM and other narrative memory tests has also been strongly correlated with increased density of Alzheimer-related lesions detected in postmortem neuropathological studies, even in the absence of previously reported or detected dementia (Schmitt et al. 2000; Bennett et al. 2006; Price et al. 2009).
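To make the standard scoring procedure concrete, here is a minimal sketch of element-level summary scoring. The story elements and their acceptable paraphrases below are invented for illustration; they are not the actual WLM scoring guidelines.

```python
# Illustrative sketch of WLM-style summary scoring: one point per story
# element recalled anywhere in the retelling. Elements and paraphrase
# sets are hypothetical, not the published scoring rules.
STORY_ELEMENTS = {
    "anna": {"anna"},
    "thompson": {"thompson"},
    "employed": {"employed", "worked", "works"},
    "cook": {"cook", "chef"},
    "boston": {"boston"},
}

def summary_score(retelling: str) -> int:
    """Count how many story elements appear in the retelling (0/1 credit each)."""
    tokens = set(retelling.lower().split())
    return sum(1 for variants in STORY_ELEMENTS.values() if tokens & variants)

print(summary_score("Anna worked as a cook in Boston"))  # 4 of 5 elements
```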
We note that clinicians do not use the WLM as a diagnostic test by itself for MCI or any other type of dementia. The WLM summary score is just one of a large number of instrumentally derived scores of memory and cognitive function that, in combination with one another and with a clinician's expert observations and examination, can indicate the presence of a dementia, aphasia, or other neurological disorder. Much of the previous work in applying automated analysis of unannotated transcripts of narratives for diagnostic purposes has focused not on evaluating properties specific to narratives but rather on using narratives as a data source from which to extract speech and language features. Solorio and Liu (2008) were able to distinguish the narratives of a small set of children with specific language impairment (SLI) from those of typically developing children using perplexity scores derived from part-of-speech language models. In a follow-up study on a larger group of children, Gabani et al. (2009) again used part-of-speech language models in an attempt to characterize the agrammaticality that is associated with language impairment. Two part-of-speech language models were trained for that experiment: one on the language of children with SLI and one on the language of typically developing children. The perplexity of each child's utterances was calculated according to each of the models. In addition, the authors extracted a number of other structural linguistic features including mean length of utterance, total words used in the narrative, and measures of accurate subject-verb agreement. These scores collectively performed well in distinguishing children with language impairment, achieving an F1 measure of just over 70% when used within a support vector machine (SVM) for classification. In a continuation of this work, de la Rosa et al. (2013) explored complex language-model-based lexical and syntactic features to more accurately characterize the language used in narratives by children with language impairment. Roark et al. (2011) extracted a subset of the features used by Gabani et al. (2009), along with a much larger set of language complexity features derived from syntactic parse trees for utterances from narratives produced by elderly individuals, for the diagnosis of MCI. These features included simple measures, such as words per clause, and more complex measures of tree depth, embedding, and branching, such as Frazier and Yngve scores. Selecting a subset of these features for classification with an SVM yielded a classification accuracy of 0.73, as measured by the area under the receiver operating characteristic curve (AUC). A similar approach was followed by Fraser et al. (2014) to distinguish different types of primary progressive aphasia, a group of subtypes of dementia distinct from Alzheimer's disease and MCI, in a small group of elderly individuals. The authors considered almost 60 linguistic features, including some of those explored by Roark et al. (2011) as well as numerous others relating to part-of-speech frequencies and ratios. Using a variety of classifiers and feature combinations for three different two-way classification tasks, the authors achieved classification accuracies ranging between 0.71 and 1.0.
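The following sketch illustrates the flavor of the perplexity features described above. It is our toy reconstruction, not code from Gabani et al. (2009): one POS-bigram model is trained per diagnostic group, and the perplexity of a new POS sequence under each model becomes a classifier feature. The tag sequences and tag-set size are invented.

```python
# Toy reconstruction of the POS language model feature: train one bigram
# model per group, then score a new POS sequence under each model.
import math
from collections import Counter

def train_bigram(tag_seqs):
    bigrams, contexts = Counter(), Counter()
    for seq in tag_seqs:
        seq = ["<s>"] + seq
        contexts.update(seq[:-1])
        bigrams.update(zip(seq[:-1], seq[1:]))
    return bigrams, contexts

def perplexity(seq, model, vocab_size, alpha=1.0):
    bigrams, contexts = model
    seq = ["<s>"] + seq
    logp = 0.0
    for prev, cur in zip(seq[:-1], seq[1:]):
        # Add-alpha smoothed bigram probability.
        p = (bigrams[(prev, cur)] + alpha) / (contexts[prev] + alpha * vocab_size)
        logp += math.log(p)
    return math.exp(-logp / (len(seq) - 1))

# Hypothetical POS sequences for each training group.
sli_model = train_bigram([["DT", "NN", "VB"], ["NN", "VB"]])
td_model = train_bigram([["DT", "NN", "VBZ", "DT", "NN"]])

utterance = ["DT", "NN", "VB"]
V = 10  # assumed size of the POS tag set
features = [perplexity(utterance, sli_model, V), perplexity(utterance, td_model, V)]
print(features)  # lower perplexity under a model = closer to that group's language
```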
An alternative to analyzing narratives in terms of syntactic and lexical features is to evaluate the content of the narrative retellings themselves in terms of their fidelity to the source narrative. Hakkani-Tur, Vergyri, and Tur (2010) developed a method of automatically evaluating an audio recording of a picture description task, in which the patient looks at a picture and narrates the events occurring in the picture, similar to the task we will be analyzing in Section 8. After using automatic speech recognition (ASR) to transcribe the recording, the authors measured unigram overlap between the ASR output transcript and a predefined list of key semantic concepts. This unigram overlap measure correlated highly with manually assigned counts of these semantic concepts. The authors did not investigate whether the scores, derived either manually or automatically, were associated with any particular diagnostic group or disorder. Dunn et al. (2002) were among the first to apply automated methods specifically to scoring the WLM subtest and determining the relationship between these scores and measures of cognitive function. The authors used Latent Semantic Analysis (LSA) to measure the semantic distance from a retelling to the source narrative. The LSA scores correlated very highly with the scores assigned by examiners under the standard scoring guidelines and with independent measures of cognitive functioning. In subsequent work comparing individuals with and without an English-speaking background (Lautenschlager et al. 2006), the authors proposed that LSA-based scoring of the WLM as a cognitive measure is less biased against people with different linguistic and cultural backgrounds than other widely used cognitive measures. This work demonstrates not only that accurate automated scoring of narrative recall tasks is possible but also that the objectivity offered by automated measures has specific benefits for tests like the WLM, which are often administered by practitioners working in a community setting and serving a diverse population. We will compare the utility of this approach with our alignment-based approach subsequently in the article. More recently, Lehr et al. (2013) used a supervised method for scoring the responses to the WLM, transcribed both manually and via ASR, using conditional random fields. This technique resulted in slightly higher scoring and classification accuracy than the unsupervised method described here. An unsupervised variant of their algorithm, which relied on the methods described in this article to provide training data to the conditional random field, yielded about half of the scoring gains and nearly all of the classification gains we report here. A hybrid method that used the methods in this article to derive features was the best-performing system in that paper. Hence the methods described here are important components of that approach. We also note, however, that the supervised classifier-based approach to scoring retellings requires a significant amount of hand-labeled training data, thus rendering the technique impractical for application to a new narrative or to any picture description task. The importance of this distinction will become clear in Section 8, in which the approach outlined here is applied to a new data set lacking an existing scoring mechanism or a linguistic reference against which the responses can be scored. In this article, we will be discussing the application of our methods to manually generated transcripts of retellings and picture descriptions produced by adults with and without neurodegenerative disorders.
We note, however, that the same techniques have been applied to narratives transcribed using ASR output (Lehr et al. 2012, 2013) with little degradation in accuracy, given sufficient adaptation of the acoustic and language models to the WLM retelling domain. In addition, we have applied alignment-based scoring to the narratives of children with neurodevelopmental disorders, including autism and language impairment (Prud'hommeaux and Rouhizadeh 2012), with similarly strong diagnostic classification accuracy, further demonstrating the applicability of these methods to a variety of input formats, elicitation techniques, and diagnostic goals. 9 conclusions and future work :The work presented here demonstrates the utility of adapting NLP algorithms to clinically elicited data for diagnostic purposes. In particular, the approach we describe for automatically analyzing clinically elicited language data shows promise as part of a pipeline for a screening tool for mild cognitive impairment. The methods offer the additional benefit of being general and flexible enough to be adapted to new data sets, even those without existing evaluation guidelines. In addition, the novel graph-based approach to word alignment results in large reductions in alignment error rate. These reductions in error rate in turn lead to human-level scoring accuracy and improved diagnostic classification. The demand for simple, objective, and unobtrusive screening tools for MCI and other neurodegenerative and neurodevelopmental disorders will continue to grow as the prevalence of these disorders increases. Although high-level measures of text similarity used in other NLP applications, such as machine translation, do achieve reasonable classification accuracy when applied to the WLM narrative data, the work presented here indicates that automated methods that approximate manual element-level scoring procedures yield superior results. Although the results are quite robust, several enhancements and improvements can be made. First, although we were able to achieve decent word alignment accuracy, especially with our graph-based approach, many alignment errors remain. Exploration of the graph used here reveals that many correct alignments remain undiscovered, with an oracle AER of 11%. One clear weakness is the selection of only a single alignment from the distribution of source words at the end of 1,000 walks, since this does not allow for one-to-many mappings. We would also like to experiment with including nondirectional edges and outgoing edges on source words. In our future work, we also plan to examine longitudinal data for individual participants to see whether our techniques can detect subtle differences in recall and coherence between a recent retelling and a series of earlier baseline retellings. Because the CDR, the dementia staging system often used to identify MCI, relies on observed changes in cognitive function over time, longitudinal analysis of performance on narrative retelling and picture description tasks might be the most promising application for this approach to analyzing clinically elicited language data.
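As a rough illustration of the random-walk alignment idea discussed in these conclusions, the toy sketch below runs repeated short walks over a small weighted word graph and aligns each retelling word to the source word at which the walks most often end. The graph, its edge weights, and the "_src" node naming are all invented; the actual system builds a far richer graph than this.

```python
# Toy sketch of random-walk word alignment (a simplification, not the
# exact system described above).
import random
from collections import Counter

GRAPH = {  # node -> {neighbor: edge weight}; "_src" marks source-narrative words
    "cook": {"chef_src": 3.0, "employed_src": 0.5},
    "chef_src": {"cook": 3.0},
    "employed_src": {"cook": 0.5, "worked": 2.0},
    "worked": {"employed_src": 2.0},
}

def walk(start, steps=3, rng=random):
    node = start
    for _ in range(steps):
        neighbors = GRAPH.get(node)
        if not neighbors:
            break
        nodes, weights = zip(*neighbors.items())
        node = rng.choices(nodes, weights=weights)[0]
    return node

def align(retelling_word, n_walks=1000):
    ends = Counter(walk(retelling_word) for _ in range(n_walks))
    # Single-best selection over source words only; this is the step whose
    # one-to-many limitation is noted in the conclusions.
    source_ends = {n: c for n, c in ends.items() if n.endswith("_src")}
    return max(source_ends, key=source_ends.get) if source_ends else None

print(align("cook"))  # usually 'chef_src'
```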
1 introduction :There is very little similar about coffee and cups. Coffee refers to a plant (a living organism) or to a hot brown drink. In contrast, a cup is a man-made solid of broadly well-defined shape and size with a specific function relating to the consumption of liquids. Perhaps the only clear trait these concepts have in common is that they are concrete entities. Nevertheless, in what is currently the most popular evaluation gold standard for semantic similarity, WordSim(WS)-353 (Finkelstein et al. 2001), coffee and cup are rated as more "similar" than pairs such as car and train, which share numerous common properties (function, material, dynamic behavior, wheels, windows, etc.). Such anomalies also exist in other gold standards such as the MEN data set (Bruni et al. 2012a). As a consequence, these evaluations effectively penalize models for learning the evident truth that coffee and cup are dissimilar. Although clearly different, coffee and cup are very much related. The psychological literature refers to the conceptual relationship between these concepts as association, although it has been given a range of names, including relatedness (Budanitsky and Hirst 2006; Agirre et al. 2009), topical similarity (Hatzivassiloglou et al. 2001), and domain similarity (Turney 2012). Association contrasts with similarity, the relation connecting cup and mug (Tversky 1977). At its strongest, the similarity relation is exemplified by pairs of synonyms: words with identical referents. Computational models that effectively capture similarity as distinct from association have numerous applications. Such models are used for the automatic generation of dictionaries, thesauri, ontologies, and language correction tools (Biemann 2005; Cimiano, Hotho, and Staab 2005; Li et al. 2006). Machine translation systems, which aim to define mappings between fragments of different languages whose meaning is similar, but not necessarily associated, are another established application (He et al. 2008; Marton, Callison-Burch, and Resnik 2009). Moreover, since, as we establish, similarity is a cognitively complex operation that can require rich, structured conceptual knowledge to compute accurately, similarity estimation constitutes an effective proxy evaluation for general-purpose representation-learning models whose ultimate application is variable or unknown (Collobert and Weston 2008; Baroni and Lenci 2010). As we show in Section 2, the predominant gold standards for semantic evaluation in NLP do not measure the ability of models to reflect similarity. In particular, in both WS-353 and MEN, pairs of words with associated meaning, such as coffee and cup (rating = 6.8 on the 10-point scale), telephone and communication (7.5), or movie and theater (7.7), receive a high rating regardless of whether or not their constituents are similar. Thus, the utility of such resources to the development and application of similarity models is limited, a problem exacerbated by the fact that many researchers appear unaware of what their evaluation resources actually measure (for instance, Huang et al. [2012, pages 1, 4, 10] and Reisinger and Mooney [2010b, page 4] refer to MEN and/or WS-353 as "similarity data sets," and others evaluate on both these association-based and genuine similarity-based gold standards with no reference to the fact that they measure different things [Medelyan et al. 2009; Li et al. 2014]). Although certain smaller gold standards, those of Rubenstein and Goodenough (1965) (RG) and Agirre et al. (2009) (WS-Sim), do focus clearly on similarity, these resources suffer from other important limitations. For instance, as we show, and as is also the case for WS-353 and MEN, state-of-the-art models have reached the average performance of a human annotator on these evaluations.
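The two ways of computing this human performance ceiling, discussed in the next paragraph, can be made concrete with a short sketch; the ratings matrix below is fabricated toy data, not actual WS-353 annotations.

```python
# Sketch of two human-ceiling computations: mean pairwise inter-annotator
# correlation, and each annotator against the mean of the others.
from itertools import combinations

import numpy as np
from scipy.stats import spearmanr

ratings = np.array([  # rows: annotators, columns: word pairs (toy data)
    [7.0, 2.0, 9.0, 4.0, 5.5],
    [6.5, 3.0, 8.0, 5.0, 4.5],
    [7.5, 1.5, 9.5, 3.5, 6.0],
])

# Ceiling 1: average pairwise correlation between two annotators.
pairwise = np.mean([spearmanr(a, b)[0] for a, b in combinations(ratings, 2)])

# Ceiling 2: each annotator against the average of all other annotators.
one_vs_rest = np.mean([
    spearmanr(ratings[i], np.delete(ratings, i, axis=0).mean(axis=0))[0]
    for i in range(len(ratings))
])

print(f"pairwise rho: {pairwise:.3f}; one-vs-rest rho: {one_vs_rest:.3f}")
```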
It is common practice in NLP to define the upper limit for automated performance on an evaluation as the average human performance or inter-annotator agreement (Yong and Foo 1999; Cunningham 2005; Resnik and Lin 2010). Based on this established principle and the current evaluations, it would therefore be reasonable to conclude that the problem of representation learning, at least for similarity modeling, is approaching resolution. However, circumstantial evidence suggests that distributional models are far from perfect. For instance, we are some way from automatically generated dictionaries, thesauri, or ontologies that can be used with the same confidence as their manually created equivalents. Motivated by these observations, in Section 3 we present SimLex-999, a gold standard resource for evaluating the ability of models to reflect similarity. SimLex-999 was produced by 500 paid native English speakers, recruited via Amazon Mechanical Turk (www.mturk.com), who were asked to rate the similarity, as opposed to association, of concepts via a simple visual interface. The choice of evaluation pairs in SimLex-999 was motivated by empirical evidence that humans represent concepts of distinct part-of-speech (POS) (Gentner 1978) and conceptual concreteness (Hill, Korhonen, and Bentz 2014) differently. Whereas existing gold standards contain only concrete noun concepts (MEN) or cover only some of these distinctions via a random selection of items (WS-353, RG), SimLex-999 contains a principled selection of adjective, verb, and noun concept pairs covering the full concreteness spectrum. This design enables more nuanced analyses of how computational models overcome the distinct challenges of representing concepts of these types. In Section 4 we present quantitative and qualitative analyses of the SimLex-999 ratings, which indicate that participants found it unproblematic to quantify consistently the similarity of the full range of concepts and to distinguish it from association. Unlike existing data sets, SimLex-999 therefore contains a significant number of pairs, such as [movie, theater], which are strongly associated but receive low similarity scores. The second main contribution of this paper, presented in Section 5, is the evaluation of state-of-the-art distributional semantic models using SimLex-999. These include the well-known neural language models (NLMs) of Huang et al. (2012), Collobert and Weston (2008), and Mikolov et al. (2013a), which we compare with traditional vector-space co-occurrence models (VSMs) (Turney and Pantel 2010) with and without dimensionality reduction (SVD) (Landauer and Dumais 1997). Our analyses demonstrate how SimLex-999 can be applied to uncover substantial differences in the ability of models to represent concepts of different types. Despite these differences, the models we consider each share the characteristic of being better able to capture association than similarity.
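A minimal sketch of the VSM-plus-SVD pipeline mentioned above; the co-occurrence counts are invented stand-ins for real corpus statistics.

```python
# Toy VSM: word-by-context counts reduced with truncated SVD, then
# compared by cosine similarity.
import numpy as np

words = ["coffee", "cup", "mug", "car"]
M = np.array([            # rows: words, columns: contexts (invented counts)
    [10.0, 2.0, 1.0, 0.0],
    [8.0, 3.0, 2.0, 1.0],
    [7.0, 3.0, 2.0, 0.0],
    [0.0, 1.0, 0.0, 9.0],
])

U, S, Vt = np.linalg.svd(M, full_matrices=False)
k = 2                        # reduced dimensionality
vectors = U[:, :k] * S[:k]   # word vectors in the k-dimensional latent space

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

idx = {w: i for i, w in enumerate(words)}
print(cosine(vectors[idx["cup"]], vectors[idx["mug"]]))     # similar pair
print(cosine(vectors[idx["coffee"]], vectors[idx["cup"]]))  # associated pair
```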
We show that the difficulty of estimating similarity is driven primarily by those strongly associated pairs with a high (association) rating in gold standards such as WS-353 and MEN, but a low similarity rating in SimLex-999. As a result of including these challenging cases, together with a wider diversity of lexical concepts in general, current models achieve notably lower scores on SimLex-999 than on existing gold standard evaluations, and well below the SimLex-999 inter-human agreement ceiling. Finally, we explore ways in which distributional models might improve on this performance in similarity modeling. To do so, we evaluate the models on the SimLex-999 subsets of adjectives, nouns, and verbs, as well as on abstract and concrete subsets and subsets of more and less strongly associated pairs (Sections 5.2.2-5.2.4). As part of these analyses, we confirm the hypothesis (Agirre et al. 2009; Levy and Goldberg 2014) that models learning from input informed by dependency parsing, rather than simple running-text input, yield improved similarity estimation and, specifically, a clearer distinction between similarity and association. In contrast, we find no evidence for a related hypothesis (Agirre et al. 2009; Kiela and Clark 2014) that smaller context windows improve the ability of models to capture similarity. We do, however, observe clear differences in model performance on the distinct concept types included in SimLex-999. Taken together, these experiments demonstrate the benefit of the diversity of concepts included in SimLex-999; it would not have been possible to derive similar insights by evaluating based on existing gold standards. We conclude by discussing how observations such as these can guide future research into distributional semantic models. By facilitating better-defined evaluations and finer-grained analyses, we hope that SimLex-999 will ultimately contribute to the development of models that accurately reflect human intuitions of similarity for the full range of concepts in language.
2 design motivation :In this section, we motivate the design decisions made in developing SimLex-999. We begin (2.1) by examining the distinction between similarity and association. We then show that for a meaningful treatment of similarity it is also important to take a principled approach to both POS and conceptual concreteness (2.2). We finish by reviewing existing gold standards, and show that none enables a satisfactory evaluation of the capability of models to capture similarity (2.3). The difference between association and similarity is exemplified by the concept pairs [car, bike] and [car, petrol]. Car is said to be (semantically) similar to bike and associated with (but not similar to) petrol. Intuitively, car and bike can be understood as similar because of their common physical features (e.g., wheels), their common function (transport), or because they fall within a clearly definable category (modes of transport). In contrast, car and petrol are associated because they frequently occur together in space and language, in this case as a result of a clear functional relationship (Plaut 1995; McRae, Khalkhali, and Hare 2012). Association and similarity are neither mutually exclusive nor independent. Bike and car, for instance, are related to some degree by both relations. Because it is common in both the physical world and in language for distinct entities to interact, it is relatively easy to conceive of concept pairs, such as car and petrol, that are strongly associated but not similar. Identifying pairs of concepts for which the converse is true is comparatively more difficult. One exception is common concepts paired with low-frequency synonyms, such as camel and dromedary.
Because the essence of association is co-occurrence (linguistic or otherwise [McRae, Khalkhali, and Hare 2012]), such pairs can seem, at least intuitively, to be similar but not strongly associated. To explore the interaction between the two cognitive phenomena quantitatively, we exploited perhaps the only two existing large-scale means of quantifying similarity and association. To estimate similarity, we considered proximity in the WordNet taxonomy (Fellbaum 1998). Specifically, we applied the measure of Wu and Palmer (1994) (henceforth WupSim), which approximates similarity on a [0,1] scale reflecting the minimum distance between any two synsets of two given concepts in WordNet. WupSim has been shown to correlate well with human judgments on the similarity-focused RG data set (Wu and Palmer 1994). To estimate association, we extracted ratings directly from the University of South Florida Free Association Database (USF) (Nelson, McEvoy, and Schreiber 2004). These data were generated by presenting human subjects with one of 5,000 cue concepts and asking them to write the first word that comes into their head that is associated with or meaningfully related to that concept. Each cue concept c was normed in this way by over 10 participants, resulting in a set of associates for each cue, and a total of over 72,000 (c, a) pairs. Moreover, for each such pair, the proportion of participants who produced associate a when presented with cue c can be used as a proxy for the strength of association between the two concepts. By measuring WupSim between all pairs in the USF data set, we observed, as expected, a high correlation between similarity and association strength across all USF pairs (Spearman ρ = 0.65, p < 0.001). However, in line with the intuitive ubiquity of pairs such as car and petrol, of the USF pairs (all of which are associated to a greater or lesser degree) over 10% had a WupSim score of less than 0.25. These include pairs of ontologically different entities with a clear functional relationship in the world [refrigerator, food], which may be of differing concreteness [lung, disease]; pairs in which one concept is a small concrete part of a larger abstract category [sheriff, police]; pairs in a relationship of modification or subcategorization [gravy, boat]; and even those whose principal connection is phonetic [wiggle, giggle]. As we show in Section 2.2, these are precisely the sort of pairs that are not contained in existing evaluation gold standards. Table 1 lists the USF noun pairs with the lowest similarity scores overall, and also those with the largest additive discrepancy between association strength and similarity. 2.1.1 Association and Similarity in NLP. As noted in the Introduction, the similarity/association distinction is not only of interest to researchers in psychology or linguistics. Models of similarity are particularly applicable to various NLP tasks, such as lexical resource building, semantic parsing, and machine translation (Haghighi et al. 2008; He et al. 2008; Marton, Callison-Burch, and Resnik 2009; Beltagy, Erk, and Mooney 2014). Models of association, on the other hand, may be better suited to tasks such as word-sense disambiguation (Navigli 2009), and applications such as text classification (Phan, Nguyen, and Horiguchi 2008) in which the target classes correspond to topical domains such as agriculture or sport (Rose, Stevenson, and Whitehead 2002).
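A WupSim-style score of the kind used in the analysis above can be sketched with NLTK's WordNet interface, taking the best Wu-Palmer score over all synset pairs of the two words. This is our reconstruction of the measure as described, and it requires the WordNet data (nltk.download('wordnet')).

```python
# Sketch of a WupSim-style word similarity: max Wu-Palmer score over all
# synset pairs of the two words.
from nltk.corpus import wordnet as wn

def wupsim(word1: str, word2: str) -> float:
    scores = [s1.wup_similarity(s2) or 0.0  # wup_similarity may return None
              for s1 in wn.synsets(word1)
              for s2 in wn.synsets(word2)]
    return max(scores, default=0.0)

print(wupsim("car", "bike"))    # relatively high: similar concepts
print(wupsim("car", "petrol"))  # lower: associated but dissimilar
```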
Much recent research in distributional semantics does not distinguish between association and similarity in a principled way (see, e.g., Reisinger and Mooney 2010b; Huang et al. 2012; Luong, Socher, and Manning 2013), although several papers that take a knowledge-based or symbolic approach to meaning do address the similarity/association issue (Budanitsky and Hirst 2006). One exception is Turney (2012), who constructs two distributional models with different features and parameter settings, explicitly designed to capture either similarity or association. Using the output of these two models as input to a logistic regression classifier, Turney predicts whether two concepts are associated, similar, or both, with 61% accuracy. However, in the absence of a gold standard covering the full range of similarity ratings (rather than a list of pairs identified as being similar or not), Turney cannot confirm directly that the similarity-focused model does indeed effectively quantify similarity. Agirre et al. (2009) explicitly examine the distinction between association and similarity in relation to distributional semantic models. Their study is based on the partition of WS-353 into a subset focused on similarity, which we refer to as WS-Sim, and a subset focused on association, which we term WS-Rel. More precisely, WS-Sim is the union of the pairs in WS-353 judged by three annotators to be similar and the set U of entirely unrelated pairs, and WS-Rel is the union of U and pairs judged to be associated but not similar. Agirre et al. confirm the importance of the association/similarity distinction by showing that certain models perform relatively well on WS-Rel, whereas others perform comparatively better on WS-Sim. However, as shown in the following section, a model need not be an exemplary model of similarity in order to perform well on WS-Sim, because an important class of concept pair (associated but not similar entities) is not represented in this data set. Therefore the insights that can be drawn from the results of the Agirre et al. study are limited. Several other authors touch on the similarity/association distinction in inspecting the output of distributional models (Andrews, Vigliocco, and Vinson 2009; Kiela and Clark 2014; Levy and Goldberg 2014). Although the strength of the conclusions that can be drawn from such qualitative analyses is clearly limited, there appear to be two broad areas of consensus concerning similarity and distributional models:
- Models that learn from input annotated for syntactic or dependency relations better reflect similarity, whereas approaches that learn from running-text or bag-of-words input better model association (Agirre et al. 2009; Levy and Goldberg 2014).
- Models with larger context windows may learn representations that better capture association, whereas models with narrower windows better reflect similarity (Agirre et al. 2009; Kiela and Clark 2014).
Empirical studies have shown that the performance of both humans and distributional models depends on the POS category of the concepts learned. Gentner (2006) showed that children find verb concepts harder to learn than noun concepts, and Markman and Wisniewski (1997) present evidence that different cognitive operations are used when comparing two nouns or two verbs. Hill, Reichart, and Korhonen (2014) demonstrate differences in the ability of distributional models to acquire noun and verb semantics.
Further, they show that these differences are greater for models that learn from both text and perceptual input (as with humans). In addition to POS category, differences in human and computational concept learning and representation have been attributed to the effects of concreteness, the extent to which a concept has a directly perceptible physical referent. On the cognitive side, these "concreteness effects" are well established, even if the causes are still debated (Paivio 1991; Hill, Korhonen, and Bentz 2014). Concreteness has also been associated with differential performance in computational text-based (Hill, Kiela, and Korhonen 2013) and multi-modal semantic models (Kiela et al. 2014). For brevity, we do not exhaustively review all methods that have been used to evaluate semantic models, but instead focus on the similarity- or association-based gold standards that are most commonly applied in recent work in NLP. In each case, we consider how well the data set satisfies the three following criteria:
Representative. The resource should cover the full range of concepts that occur in natural language. In particular, it should include cases representing the different ways in which humans represent or process concepts, and cases that are both challenging and straightforward for computational models.
Clearly defined. In order for a gold standard to be diagnostic of how well a model can be applied to downstream applications, a clear understanding is needed of what exactly the gold standard measures. In particular, it must clearly distinguish between dissociable semantic relations such as association and similarity.
Consistent and reliable. Untrained native speakers must be able to quantify the target property consistently, without requiring lengthy or detailed instructions. This ensures that the data reflect a meaningful cognitive or semantic phenomenon, and also enables the data set to be scaled up or transferred to other languages at minimal cost and effort.
We begin our review of existing evaluations with the gold standard most commonly applied in current NLP research. WordSim-353. WS-353 (Finkelstein et al. 2001) is perhaps the most commonly used evaluation gold standard for semantic models. Despite its name, and the fact that it is often referred to as a "similarity gold standard" (see, e.g., Huang et al. 2012 and Bansal, Gimpel, and Livescu 2014), the instructions given to annotators when producing WS-353 were in fact ambiguous with respect to similarity and association. Subjects were asked to: "Assign a numerical similarity score between 0 and 10 (0 = words totally unrelated, 10 = words VERY closely related) ... when estimating similarity of antonyms, consider them 'similar' (i.e., belonging to the same domain or representing features of the same concept), not 'dissimilar'." As we confirm analytically in Section 5.2, these instructions result in pairs being rated according to association rather than similarity, a fact also noted by the data set authors (see www.cs.technion.ac.il/~gabr/resources/data/wordsim353/). WS-353 consequently suffers two important limitations as an evaluation of similarity (which also afflict other resources to a greater or lesser degree):
1. Many dissimilar word pairs receive a high rating.
2. No associated but dissimilar concepts receive low ratings.
As noted in the Introduction, an arguably more serious third limitation of WS-353 is low inter-annotator agreement, and the fact that state-of-the-art models such as those
of Collobert and Weston (2008) and Huang et al. (2012) reach, or even surpass, the inter-annotator agreement ceiling in estimating the WS-353 scores. Huang et al. report a Spearman correlation of ρ = 0.713 between their model output and WS-353. This is 10 percentage points higher than inter-annotator agreement (ρ = 0.611) when defined as the average pairwise correlation between two annotators, as is common in NLP work (Padó, Padó, and Erk 2007; Reisinger and Mooney 2010a; Silberer and Lapata 2014). It could be argued that a different comparison is more appropriate: Because the model is compared to the gold-standard average across all annotators, we should compare a single annotator with the (almost) gold-standard average over all other annotators. Based on this metric, the average performance of an annotator on WS-353 is ρ = 0.756, which is still only marginally better than the best automatic method (individual annotator responses for WS-353 were downloaded from www.cs.technion.ac.il/~gabr/resources/data/wordsim353). Thus, at least according to the established wisdom in NLP evaluation (Yong and Foo 1999; Cunningham 2005; Resnik and Lin 2010), the strength of the conclusions that can be inferred from improvements on WS-353 is limited. At the same time, however, state-of-the-art distributional models are clearly not perfect representation-learning or even similarity estimation engines, as evidenced by the fact that they cannot yet be applied, for instance, to generate flawless lexical resources (Alfonseca and Manandhar 2002). WS-Sim. WS-Sim is the set of pairs in WS-353 identified by Agirre et al. (2009) as either containing similar or unrelated (neither similar nor associated) concepts. The ratings in WS-Sim are mapped directly from WS-353, so that all concept pairs in WS-Sim that receive a high rating are associated and all pairs that receive a low rating are unassociated. Consequently, any model that simply reflects association would score highly on WS-Sim, irrespective of how well it captures similarity. Such a possibility could be excluded by requiring models to perform well on WS-Sim and poorly on WS-Rel, the subset of WS-353 identified by Agirre et al. (2009) as containing no pairs of similar concepts. However, although this would exclude models of pure association, it would not test the ability of models to quantify the similarity of the pairs in WS-Sim. Put another way, the WS-Sim/WS-Rel partition could in theory resolve limitation (1) of WS-353, but it would not resolve limitation (2): Models are not tested on their ability to attribute low scores to associated but dissimilar pairs. In fact, there are more fundamental limitations of WS-Sim as a similarity-based evaluation resource. It does not, strictly speaking, reflect similarity at all, since the ratings of its constituent pairs were assigned by the WS-353 annotators, who were asked to estimate association, not similarity. Moreover, it inherits the limitation of low inter-annotator agreement from WS-353. The average pairwise correlation between annotators on WS-Sim is ρ = 0.667, and the average correlation of a single annotator with the gold standard is only ρ = 0.651, both below the performance of automatic methods (Agirre et al. 2009). Finally, the small size of WS-Sim renders it poorly representative of the full range of concepts that semantic models may be required to learn. Rubenstein & Goodenough. Prior to WS-353, the smaller RG data set, consisting of 65 pairs, was often used to evaluate semantic models. The 15 raters employed in the data collection were asked to rate the "similarity of meaning" of each concept pair.
Thus RG does appear to reflect similarity rather than association. However, although limitation (1) of WS-353 is therefore avoided, RG still suffers from limitation (2): By inspection, it is clear that the low similarity pairs in RG are not associated. A further limitation is that distributional models now achieve better performance on RG (correlations of up to Pearson r = 0.86 [Hassan and Mihalcea 2011]) than the reported inter-annotator agreement of r = 0.85 (Rubenstein and Goodenough 1965). Finally, the size of RG renders it an even less comprehensive evaluation than WS-Sim.

The MEN Test Collection. A larger data set, MEN (Bruni et al. 2012a), is used in a handful of recent studies (Bruni et al. 2012b; Bernardi et al. 2013). As with WS-353, both the terms similarity and relatedness are used by the authors when describing MEN, although the annotators were expressly asked to rate pairs according to relatedness (see http://clic.cimec.unitn.it/~elia.bruni/MEN.html). The construction of MEN differed from RG and WS-353 in that each pair was only considered by one rater, who ranked it for relatedness relative to 50 other pairs in the data set. An overall score out of 50 was then attributed to each pair, corresponding to how many times it was ranked as more related than an alternative. However, because these rankings are based on relatedness, with respect to evaluating similarity MEN necessarily suffers from both of the limitations (1) and (2) that apply to WS-353. Further, there is a strong bias towards concrete concepts in MEN because the concepts were originally selected from those identified in an image-bank (Bruni et al. 2012a).

Synonym Detection Sets. Multiple-choice synonym detection tasks, such as the TOEFL test questions (Landauer and Dumais 1997), are an alternative means of evaluating distributional models. A question in the TOEFL task consists of a cue word and four possible answer words, only one of which is a true synonym. Models are scored on the number of true synonyms identified out of 80 questions. The questions were designed by linguists to evaluate synonymy, so, unlike the evaluations considered thus far, TOEFL-style tests effectively discriminate between similarity and association. However, because they require a zero-one classification of pairs as synonymous or not, they do not test how well models discern pairs of medium or low similarity. More generally, in opposition to the fuzzy, statistical approaches to meaning predominant in both cognitive psychology (Griffiths, Steyvers, and Tenenbaum 2007) and NLP (Turney and Pantel 2010), they do not require similarity to be measured on a continuous scale.

3 The SimLex-999 Data Set

Having considered the limitations of existing gold standards, in this section we describe the design of SimLex-999 in detail.

Separating similarity from association. To create a test of the ability of models to capture similarity as opposed to association, we started with the ≈ 72,000 pairs of concepts in the USF data set. As the output of a free-association experiment, each of these pairs is associated to a greater or lesser extent. Importantly, inspecting the pairs revealed that a good range of similarity values are represented. In particular, there were many examples of hypernym/hyponym pairs [body, abdomen], cohyponym pairs [cat, dog], synonyms or near synonyms [deodorant, antiperspirant], and antonym pairs [good, evil].
From this cohort, we excluded pairs containing a multiple-word item [hot dog, mustard] and pairs containing a capital letter [Mexico, sun]. We ultimately sampled 900 of the SimLex-999 pairs from the resulting cohort of pairs, according to the stratification procedures outlined in the following sections. To complement this cohort with entirely unassociated pairs, we paired up the concepts from the 900 associated pairs at random. From these random pairings, we excluded those that coincidentally occurred elsewhere in USF (and therefore had a degree of association). From the remaining pairs, we accepted only those in which both concepts had been subject to the USF norming procedure, ensuring that these non-USF pairs were indeed unassociated rather than simply not normed. We sampled the remaining 99 SimLex-999 pairs from this resulting cohort of unassociated pairs.

POS category. In light of the conceptual differences outlined in Section 2.2, SimLex-999 includes subsets of pairs from the three principal meaning-bearing POS categories: nouns, verbs, and adjectives. To classify potential pairs according to POS, we counted the frequency with which the items in each pair occurred with the three possible tags in the POS-tagged British National Corpus (Leech, Garside, and Bryant 1994). To minimize POS ambiguity, which could lead to inconsistent ratings, we excluded pairs containing a concept with lower than 75% tendency towards one particular POS (a sketch of this filter appears at the end of this section). This yielded three sets of potential pairs: [A,A] pairs (of two concepts whose majority tag was Adjective), [N,N] pairs, and [V,V] pairs. Given the likelihood that different cognitive operations are used in estimating the similarity between items of different POS-category (Section 2.2), concept pairs were presented to raters in batches defined according to POS. Unlike both WS-353 and MEN, pairs of concepts of mixed POS ([white, rabbit], [run, marathon]) were excluded. POS categories are generally considered to reflect very broad ontological classes (Fellbaum 1998). We thus felt it would be very difficult, or even counter-intuitive, for annotators to quantify the similarity of mixed POS pairs according to our instructions.

Concreteness. Although a clear majority of pairs in gold standards such as MEN and RG contain concrete items, perhaps surprisingly, the vast majority of adjective, noun, and verb concepts in everyday language are in fact abstract (Hill, Reichart, and Korhonen 2014; Kiela et al. 2014). (According to the USF concreteness ratings, 72% of noun or verb types in the British National Corpus are more abstract than the concept war, a concept many would already consider quite abstract.) To facilitate the evaluation of models for both concrete and abstract concept meaning, and in light of the cognitive and computational modeling differences between abstract and concrete concepts noted in Section 2.2, we aimed to include both concept types in SimLex-999. Unlike the POS distinction, concreteness is generally considered to be a gradual phenomenon. One benefit of sampling pairs for SimLex-999 from the USF data set is that most items have been rated according to concreteness on a scale of 1–7 by at least 10 human subjects. As Figure 1 demonstrates, concreteness (as the average over these ratings) interacts with POS on these concepts: Nouns are on average more concrete than verbs, which are more concrete than adjectives. However, there is also clear variation in concreteness within each POS category. We therefore aimed to select pairs for SimLex-999 that spanned the full abstract–concrete continuum within each POS category.
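The POS-purity filter referenced above is simple to state precisely. The sketch below, with hypothetical tag counts standing in for frequencies from the POS-tagged British National Corpus, shows one way it could be implemented; TAG_COUNTS, majority_pos, and keep_pair are illustrative names, not part of the original pipeline.

```python
# A minimal sketch of the 75% POS-purity filter, assuming hypothetical
# per-word tag counts (the real counts came from the tagged BNC).
TAG_COUNTS = {
    "dog":   {"N": 980, "V": 5,   "A": 0},
    "run":   {"N": 310, "V": 690, "A": 0},   # too ambiguous: max share < 75%
    "happy": {"N": 0,   "V": 0,   "A": 450},
}

def majority_pos(word, threshold=0.75):
    """Return the majority POS tag if its share meets the threshold, else None."""
    counts = TAG_COUNTS[word]
    total = sum(counts.values())
    tag, n = max(counts.items(), key=lambda kv: kv[1])
    return tag if total > 0 and n / total >= threshold else None

def keep_pair(w1, w2):
    """Keep a candidate pair only if both words are unambiguous and share a POS."""
    p1, p2 = majority_pos(w1), majority_pos(w2)
    return p1 is not None and p1 == p2

print(keep_pair("dog", "run"))    # False: 'run' is POS-ambiguous
print(keep_pair("dog", "happy"))  # False: mixed-POS pairs are excluded
```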
After excluding any pairs that contained an item with no concreteness rating, for each potential SimLex-999 pair we considered both the concreteness of the first item and the additive difference in concreteness between the two items. This enabled us to stratify our sampling equally across four classes (sketched in the code example below): (C1) concrete first item (rating > 4) with below-median concreteness difference; (C2) concrete first item (rating > 4), second item of lower concreteness and the difference being greater than the median; (C3) abstract first item (rating ≤ 4) with below-median concreteness difference; and (C4) abstract first item (rating ≤ 4) with the second item of greater concreteness and the difference being greater than the median.

[Figure 1: Boxplots showing the interaction between concreteness and POS for concepts in USF. The white boxes range from the first to third quartiles and the central vertical line indicates the median.]

Final sampling. From the associated (USF) cohort of potential pairs we selected 600 noun pairs, 200 verb pairs, and 100 adjective pairs, and from the unassociated (non-USF) cohort, we sampled 66 noun pairs, 22 verb pairs, and 11 adjective pairs. In both cases, the sampling was stratified such that, in each POS subset, each of the four concreteness classes C1–C4 was equally represented. The annotator instructions for SimLex-999 are shown in Figure 2. We did not attempt to formalize the notion of similarity, but rather introduce it via the well-understood idea of synonymy, and in contrast to association. Even if a formal characterization of similarity existed, the evidence in Section 2 suggests that the instructions would need separate cases to cover different concept types, increasing the difficulty of the rating task. Therefore, we preferred to appeal to intuition on similarity, and to verify post hoc that subjects were able to interpret and apply the informal characterization consistently for each concept type. Immediately following the instructions in Figure 2, participants were presented with two “checkpoint” questions, one with abstract examples and one with concrete examples. In each case the participant was required to identify the most similar pair from a set of three options, all of which were associated, but only one of which was clearly similar (e.g., [bread, butter] [bread, toast] [stale, bread]). After this, the participants began rating pairs in groups of six or seven pairs by moving a slider, as shown in Figure 3. This group size was chosen because the (relative) rating of a set of pairs implicitly requires pairwise comparisons between all pairs in that set. Therefore, larger groups would have significantly increased the cognitive load on the annotators. Another advantage of grouping was the clear break (submitting a set of ratings and moving to the next page) between the tasks of rating adjective, noun, and verb pairs. For better inter-group calibration, from the second group onwards the last pair of the previous group became the first pair of the present group, and participants were asked to re-assign the rating previously attributed to the first pair before rating the remaining new items.
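The four-way stratification referenced above can be made concrete with a short sketch. The function below assumes USF-style 1–7 concreteness ratings for both items and a precomputed median_diff (the median concreteness difference over the candidate pool); it simplifies slightly by treating only the signed differences named in the class definitions, and all names are illustrative.

```python
# A sketch of the C1-C4 concreteness stratification, under the assumptions
# stated in the lead-in; it is an illustration, not the authors' code.
def concreteness_class(conc1, conc2, median_diff):
    """Assign a candidate pair to one of the four sampling classes."""
    if conc1 > 4:                        # concrete first item
        if conc1 - conc2 > median_diff:
            return "C2"                  # second item markedly less concrete
        return "C1"                      # below-median concreteness difference
    else:                                # abstract first item (rating <= 4)
        if conc2 - conc1 > median_diff:
            return "C4"                  # second item markedly more concrete
        return "C3"

print(concreteness_class(5.8, 5.5, median_diff=1.2))  # C1
print(concreteness_class(2.1, 6.0, median_diff=1.2))  # C4
```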
As with MEN, WS-353, and RG, SimLex-999 consists of pairs of concept words together with a numerical rating. Thus, unlike in the small evaluation constructed by Huang et al. (2012), words are not rated in a phrasal or sentential context. Such meaning-in-context evaluations are motivated by a desire to disambiguate words that otherwise might be considered to have multiple senses. We did not attempt to construct an evaluation based on meaning-in-context for several reasons. First, determining the set of senses for a given word, and then the set of contexts that represent those senses, introduces a high degree of subjectivity into the design process. Second, ensuring that a model has learned a high quality representation of a given concept would have required evaluating that concept in each of its given contexts, necessitating many more cases and a far greater annotation effort. Third, in the (infrequent) case that some concept c1 in an evaluation pair (c1, c2) is genuinely (etymologically) polysemous, c2 can provide sufficient context to disambiguate c1. (This is supported by the fact that the WordNet-based methods that perform best at modeling human ratings model the similarity between concepts c1 and c2 as the minimum of all pairwise distances between the senses of c1 and the senses of c2; Resnik 1995; Pedersen, Patwardhan, and Michelizzi 2004.) Finally, the POS grouping of pairs in the survey can also serve to disambiguate in the case that the conflicting senses of the polysemous concept are of differing POS categories.

Each participant was asked to rate 20 groups of pairs on a 0–6 scale of integers (non-integral ratings were not possible). Checkpoint multiple-choice questions were inserted at points between the 20 groups in order to ensure the participant had retained the correct notion of similarity. In addition to the checkpoint of three noun pairs presented before the first group (which contained noun pairs), checkpoint questions containing adjective pairs were inserted before the first adjective group and checkpoints of three verb pairs were inserted before the first verb group. From the 999 evaluation pairs, 14 noun pairs, 4 verb pairs, and 2 adjective pairs were selected as a consistency set. The data set of pairs was then partitioned into 10 tranches, each consisting of 119 pairs, of which 20 were from the consistency set and the remaining 99 unique to that tranche (this design is sketched below). To reduce workload, each annotator was asked to rate the pairs in a single tranche only. The tranche itself was divided into 20 groups, with each group consisting of 7 pairs (with the exception of the last group of the 20, which had 6). Of these seven pairs, the first pair was the last pair from the previous group, and the second pair was taken from the consistency set. The remaining pairs were unique to that particular group and tranche. The design enabled control for possible systematic differences between annotators and tranches, which could be detected by variation on the consistency set. Five hundred residents of the United States were recruited from Amazon Mechanical Turk, each with at least a 95% approval rate for work on the Web service. Each participant was required to check a box confirming that he or she was a native speaker of English and warned that work would be rejected if the pattern of responses indicated otherwise. The participants were distributed evenly to rate pairs in one of the ten question tranches, so that each pair was rated by approximately 50 subjects.
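The tranche structure referenced above lends itself to a simple sketch. The function below assumes the pair lists as inputs; it illustrates the design (a shared consistency set plus pairs unique to each tranche) rather than reproducing the authors' collection code.

```python
# A sketch of the tranche design, under the assumptions in the lead-in.
def build_tranches(unique_pairs, consistency_set, n_tranches=10, per_tranche=99):
    """Each tranche = the shared consistency set plus a disjoint slice of
    pairs seen only by that tranche's annotators."""
    tranches = []
    for i in range(n_tranches):
        chunk = unique_pairs[i * per_tranche:(i + 1) * per_tranche]
        tranches.append(list(consistency_set) + chunk)
    return tranches

# Toy usage with placeholder pair IDs:
tranches = build_tranches([f"p{i}" for i in range(990)], [f"c{i}" for i in range(20)])
print(len(tranches), len(tranches[0]))  # 10 tranches of 119 pairs each
```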
Participants took between 8 and 21 minutes to rate the 119 pairs across the 20 groups, together with the checkpoint questions. In order to correct for systematic differences in the overall calibration of the rating scale between respondents, we measured the average (mean) response of each rater on the consistency set. For 32 respondents, the absolute difference between this average and the mean of all such averages was greater than 1 (though never greater than 2); that is, 32 respondents demonstrated a clear tendency to rate pairs as either more or less similar than the overall rater population. To correct for this bias, we increased (or decreased) the rating of such respondents for each pair by one, except in cases where they had given the maximum rating, 6 (or minimum rating, 0). This adjustment, which ensured that the average response of each participant was within one of the mean of all respondents on the consistency set, resulted in a small increase to the inter-rater agreement on the data set as a whole. After controlling for systematic calibration differences, we imposed three conditions for the responses of a rater to be included in the final data collation. First, the average pairwise Spearman correlation of a participant's responses with all other responses could not be more than one standard deviation below the mean of all such averages. Second, the increase in inter-rater agreement when a rater was excluded from the analysis needed to be smaller than the corresponding increase for at least 50 other raters (i.e., 10% of raters were excluded on this criterion). Third, we excluded the six participants who got one or more of the checkpoint questions wrong. A total of 99 participants were excluded based on one or more of these conditions, but no more than 16 from any one tranche (so that each pair in the final data set was rated by a minimum of 36 raters). Finally, we computed average (mean) scores for each pair, and transformed all scores linearly from the interval [0, 6] to the interval [0, 10].

4 Analysis of the Data Set

In this section we analyze the responses of the SimLex-999 annotators and the resulting ratings. First, by considering inter-annotator agreement, we examine the consistency with which annotators were able to apply the characterization of similarity outlined in the instructions to the range of concept types in SimLex-999. Second, we verify that a valid notion of similarity was understood by the annotators, in that they were able to accurately separate similarity from association. As in previous annotation or data collection for computational semantics (Padó, Padó, and Erk 2007; Reisinger and Mooney 2010a; Silberer and Lapata 2014), we computed the inter-rater agreement as the average of pairwise Spearman ρ correlations between the ratings of all respondents (this computation, together with the calibration correction above, is sketched in the code example below). Overall agreement was ρ = 0.67. This compares favorably with the agreement on WS-353 (ρ = 0.61 using the same method). The design of the MEN rating system precludes a conventional calculation of inter-rater agreement (Bruni et al. 2012b). However, two of the creators of MEN who independently rated the data set achieved an agreement of ρ = 0.68 (reported at http://clic.cimec.unitn.it/~elia.bruni/MEN; it is reasonable to assume that actual agreement on MEN may be somewhat lower than 0.68, given the small sample size and the expertise of the raters). The SimLex-999 inter-rater agreement suggests that participants were able to understand the (single) characterization of similarity presented in the instructions and to apply it to concepts of various types consistently.
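Both the ±1 calibration correction and the mean pairwise Spearman agreement are easy to express in a few lines. The sketch below assumes a raters × pairs matrix R of 0–6 integer ratings and a list of consistency-set column indices; it is our reconstruction of the procedure described above, not the authors' code.

```python
# A sketch of (i) the +/-1 calibration correction on the consistency set
# and (ii) inter-rater agreement as mean pairwise Spearman correlation.
import numpy as np
from itertools import combinations
from scipy.stats import spearmanr

def calibrate(R, consistency_idx):
    R = R.astype(float).copy()
    means = R[:, consistency_idx].mean(axis=1)   # per-rater mean on consistency set
    grand = means.mean()
    for i, m in enumerate(means):
        if m - grand > 1:                        # rater systematically high
            R[i] = np.where(R[i] > 0, R[i] - 1, R[i])   # never below the minimum, 0
        elif grand - m > 1:                      # rater systematically low
            R[i] = np.where(R[i] < 6, R[i] + 1, R[i])   # never above the maximum, 6
    return R

def inter_rater_agreement(R):
    rhos = [spearmanr(R[i], R[j]).correlation
            for i, j in combinations(range(len(R)), 2)]
    return float(np.mean(rhos))
```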
This conclusion was supported by inspection of the brief feedback offered by the majority of annotators in a final text field in the questionnaire: 78% expressed the sentiment that the test was clear and easy to complete, or something similar. Interestingly, as shown in Figure 4 (left), agreement was not uniform across the concept types. Contrary to what might be expected given established concreteness effects (Paivio 1991), we observed not only higher inter-rater agreement but also less per-pair variability for abstract rather than concrete concepts (per-pair variability was measured by calculating the standard deviation of responses for each pair, and averaging these scores across the pairs of each concept type). Strikingly, the highest inter-rater consistency and lowest per-pair variation (defined as the inverse of the standard deviation of all ratings for that pair) was observed on adjective pairs. Although we are unsure exactly what drives this effect, a possible cause is that many pairs of adjectives in SimLex-999 cohabit a single salient, one-dimensional scale (freezing > cold > warm > hot). This may be a consequence of the fact that many pairs in SimLex-999 were selected (from USF) to have a degree of association. On inspection, pairs of nouns and verbs in SimLex-999 do not appear to occupy scales in the same way, possibly because concepts of these POS categories come to be associated via a more diverse range of relations. It seems plausible that humans are able to estimate the similarity of scale-based concepts more consistently than pairs of concepts related in a less uni-dimensional fashion. Regardless of cause, however, the high agreement on adjectives is a satisfactory property of SimLex-999. Adjectives exhibit various aspects of lexical semantics that have proved challenging for computational models, including antonymy, polarity (Williams and Anand 2009), and sentiment (Wiebe 2000). To approach the high level of human confidence on the adjective pairs in SimLex-999, it may be necessary to focus particularly on developing automatic ways to capture these phenomena.

Inspection of the SimLex-999 ratings indicated that pairs were indeed evaluated according to similarity rather than association. Table 2 includes examples that demonstrate a clear dissociation between the two semantic relations. To verify this effect quantitatively, we recruited 100 additional participants to rate the WS-353 pairs, but following the SimLex-999 instructions and question format. As shown in Figure 5(a), there were clear differences between these new ratings and the original WS-353 ratings. In particular, a high proportion of pairs was given a lower rating by subjects following the SimLex-999 instructions than by those following the WS-353 guidelines: The mean SimLex rating was 4.07, compared with 5.91 for WS-353. This was consistent with our expectations that pairs of associated but dissimilar concepts would receive lower ratings based on the SimLex-999 than on the WS-353 instructions, whereas pairs that were both associated and similar would receive similar ratings in both cases. To confirm this, we compared the WS-353 and SimLex-999-based ratings on the subsets WS-Rel and WS-Sim, which were hand-sorted by Agirre et al.
(2009) to include pairs connected by association (and not similarity) and those connected by similarity (but possibly also association), respectively. As shown in Figure 5(b–c), the correlation between the SimLex-999-based and WS-353 ratings was notably higher (ρ = 0.73) on the WS-Sim subset than on the WS-Rel subset (ρ = 0.38). Specifically, the tendency of subjects following the SimLex-999 instructions to assign lower ratings than those following the WS-353 instructions was far more pronounced for pairs in WS-Rel (Figure 5(c)) than for those in WS-Sim (Figure 5(b)). This observation suggests that the associated but dissimilar pairs in WS-353 were an important driver of the overall lower mean for SimLex-999-based ratings, and thus provides strong evidence that the SimLex-999 instructions do indeed enable subjects to distinguish similarity from association effectively.

We have established the validity of similarity as a notion understood by human raters and distinct from association. However, much theoretical semantics focuses on relations between words or concepts that are finer-grained than similarity and association. These include meronymy (a part to its whole, e.g., blade–knife), hypernymy (a category concept to a member of that category, e.g., animal–dog), and cohyponymy (two members of the same implicit category, e.g., the pair of animals dog–cat) (Cruse 1986). Beyond theoretical interest, these relations can have practical relevance. For instance, hypernymy can form the basis of semantic entailment and therefore textual inference: The proposition a cat is on the table entails that an animal is on the table precisely because of the hypernymy relation from animal to cat. We chose not to make these finer-grained relations the basis of our evaluation for several reasons. At present, detecting relations such as hypernymy using distributional methods is challenging, even when supported by supervised classifiers with access to labeled pairs (Levy et al. 2015). Such a designation can seem to require specific world-knowledge (is a snail a reptile?), can be gradual, as evidenced by typicality effects (Rosch, Simpson, and Miller 1976), or simply highly subjective. Moreover, a fine-grained relation R will only be attested (to any degree) between a small subset of all possible word pairs, whereas similarity can in theory be quantified for any two words chosen at random. We thus considered a focus on fine-grained semantic relations to be less appropriate for a general-purpose evaluation of representation quality. Nevertheless, post hoc analysis of the SimLex annotator responses and fine-grained relation classes, as defined by lexicographers, yields further interesting insights into the nature of both similarity and association. Of the 999 word pairs in SimLex, 382 are also connected by one of the common finer-grained semantic relations in WordNet. For each of these relations, Figure 6 shows the average similarity rating and average USF free association score for all pairs that exhibit that relation. In cases where a relationship of hypernymy/hyponymy exists between the words in a pair (not necessarily immediate: 1 hypernym, 2 hypernym, etc.), similarity and association coincide. Hyper/hyponym pairs that are separated by fewer levels in the WordNet hierarchy are both more strongly associated and rated as more similar. However, there are also interesting discrepancies between similarity and association.
Unsurprisingly, pairs that are classed as synonyms in WordNet (i.e., having at least one sense in some common synset) are rated as more similar than pairs of any other relation type by SimLex annotators. In contrast, antonyms are the most strongly associated word pairs among these finer-grained relations. Further, pairs consisting of a meronym and holonym (part and whole) are comparatively strongly associated but not judged to be similar. The analysis also highlights a case that can be particularly problematic when rating similarity: cohyponyms, or members of the same salient category (such as knife and fork). We gave no specific guidelines for how to rate such pairs in the SimLex annotator instructions, and whether they are considered similar or not seems to be a matter of perspective. On one hand, their membership of a common category could make them appear similar, particularly if the category is relatively specific. On the other hand, in the case of knife and fork, for instance, the underlying category cutlery might provide a backdrop against which the differences of distinct members become particularly salient.

5 Evaluating Models with SimLex-999

In this section, we demonstrate the applicability of SimLex-999 by analyzing the performance of various distributional semantic models in estimating the new ratings. The models were selected to cover the main classes of representation learning architectures (Baroni, Dinu, and Kruszewski 2014): vector space co-occurrence (counting) models and NLMs (Bengio et al. 2003). We first show that SimLex-999 is notably more difficult for state-of-the-art models to estimate than existing gold standards. We then conduct more focused analyses on the various concept subsets defined in SimLex-999, exploring possible causes for the comparatively low performance of current models and, in turn, demonstrating how SimLex-999 can be applied to investigate such questions.

Collobert & Weston. Collobert and Weston (2008) apply the architecture of an NLM to learn a word representation v_w for each word w in some corpus vocabulary V. Each sentence s in the input text is represented by a matrix containing the vector representations of the words in s in order. The model then computes output scores f(s) and f(s^w), where s^w denotes an “incorrect” sentence created from s by replacing its last word with some other word w from V. Training involves updating the parameters of the function f and the entries of the vector representations v_w such that f(s) is larger than f(s^w) for any w in V other than the correct final word of s. This corresponds to minimizing the sum of the following sentence objectives C_s over all sentences in the input corpus, which is achieved via (mini-batch) stochastic gradient descent:

C_s = \sum_{w \in V} \max(0,\, 1 - f(s) + f(s^w))

The relatively low-dimension, dense (vector) representations learned by this model and the other NLMs introduced in this section are sometimes referred to as embeddings (Turian, Ratinov, and Bengio 2010). Collobert and Weston (2008) train their models on 852 million words of text from a 2007 dump of Wikipedia and the RCV1 Corpus (Lewis et al. 2004) and use their embeddings to achieve state-of-the-art results on a variety of NLP tasks. We downloaded the embeddings directly from the authors’ Web page (http://ml.nec-labs.com/senna/).
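The ranking objective C_s can be illustrated with a toy computation. In the sketch below the network f is replaced by precomputed scalar scores, so only the hinge-loss accounting is shown; the numbers are made up for illustration.

```python
# A toy illustration of C_s = sum_w max(0, 1 - f(s) + f(s^w)): corrupted
# sentences must score at least a margin of 1 below the true sentence.
import numpy as np

def sentence_objective(f_s, f_corrupted):
    """Hinge ranking loss over a batch of corrupted-sentence scores."""
    return float(np.sum(np.maximum(0.0, 1.0 - f_s + np.asarray(f_corrupted))))

f_s = 2.5                      # score of the true sentence
f_sw = [0.3, 2.4, 3.0]         # scores with the last word replaced
print(sentence_objective(f_s, f_sw))  # 2.4: only scores above f_s - 1 contribute
```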
Huang et al. Huang et al. (2012) present an NLM that learns word embeddings to maximize the likelihood of predicting the last word in a sentence s based on (i) the previous words in that sentence (local context, as with Collobert and Weston [2008]) and (ii) the document d in which that word occurs (global context). As with Collobert and Weston (2008), the model represents input sentences as a matrix of word embeddings. In addition, it represents documents in the input corpus as single-vector averages over all word embeddings in that document. It can then compute scores g(s, d) and g(s^w, d), where, as before, s^w is a sentence with an “incorrect” randomly selected last word. Training is again by stochastic gradient descent, and corresponds to minimizing the sum of the sentence objectives C_{s,d} over all of the sentences in the corpus:

C_{s,d} = \sum_{w \in V} \max(0,\, 1 - g(s, d) + g(s^w, d))

The combination of local and global contexts in the objective encourages the final word embeddings to reflect aspects of both the meaning of nearby words and of the documents in which those words appear. When learning from 990M words of Wikipedia text, Huang et al. report a Spearman correlation of ρ = 0.713 between the cosine similarity of their model embeddings and the WS-353 scores, which constitutes state-of-the-art performance for an NLM on that data set. We downloaded these embeddings from the authors’ Web page (www.socher.org/index.php/Main/ImprovingWordRepresentationsViaGlobalContextAndMultipleWordPrototypes).

Mikolov et al. Mikolov et al. (2013a) present an architecture that learns word embeddings similar to those of standard NLMs but with no nonlinear hidden layer (resulting in a simpler scoring function). This enables faster representation learning for large vocabularies. Despite this simplification, the embeddings achieve state-of-the-art performance on several semantic tasks including sentence completion and analogy modeling (Mikolov et al. 2013a, 2013b). For each word type w in the vocabulary V, the model learns both a “target-embedding” r_w ∈ R^d and a “context-embedding” r̂_w ∈ R^d such that, given a target word, its ability to predict nearby context words is maximized. The probability of seeing context word c given target w is defined as:

p(c \mid w) = \frac{e^{\hat{r}_c \cdot r_w}}{\sum_{v \in V} e^{\hat{r}_v \cdot r_w}}

The model learns from a set of (target-word, context-word) pairs, extracted from a corpus of sentences as follows. In a given sentence s (of length N), for each position n ≤ N, each word w_n is treated in turn as a target word. An integer t(n) is then sampled from a uniform distribution on {1, . . . , k}, where k > 0 is a predefined maximum context-window parameter. The pair tokens \{(w_n, w_{n+j}) : -t(n) \le j \le t(n),\, j \ne 0,\, w_{n+j} \in s\} are then appended to the training data. Thus, target/context training pairs are such that (i) only words within a k-window of the target are selected as context words for that target, and (ii) words closer to the target are more likely to be selected than those further away. The training objective is then to maximize the log probability T, defined here, across all such examples from s, and then across all sentences in the corpus. This is achieved by stochastic gradient descent:

T = \frac{1}{N} \sum_{n=1}^{N} \sum_{-t(n) \le j \le t(n),\, j \ne 0} \log p(w_{n+j} \mid w_n)
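Two ingredients of this model are compact enough to sketch: the softmax p(c|w) over context embeddings and the dynamic-window extraction of (target, context) pairs. The arrays and sentence below are toy assumptions; this is an illustration of the equations above, not the word2vec implementation.

```python
# A sketch of the skip-gram softmax and the dynamic context window.
import numpy as np
import random

def p_context_given_target(R_ctx, r_w):
    """p(c|w) = exp(r_hat_c . r_w) / sum_v exp(r_hat_v . r_w), for all c at once.
    R_ctx has one context-embedding per row; r_w is the target embedding."""
    logits = R_ctx @ r_w
    logits -= logits.max()            # subtract max for numerical stability
    e = np.exp(logits)
    return e / e.sum()

def training_pairs(sentence, k=5, rng=random):
    """For each position n, sample t(n) uniformly from {1..k} and emit the
    pairs (w_n, w_{n+j}) for -t(n) <= j <= t(n), j != 0, within the sentence."""
    pairs = []
    for n, w in enumerate(sentence):
        t = rng.randint(1, k)
        for j in range(-t, t + 1):
            if j != 0 and 0 <= n + j < len(sentence):
                pairs.append((w, sentence[n + j]))
    return pairs

print(training_pairs(["the", "cat", "sat", "down"], k=2, rng=random.Random(0)))
```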
As with other NLMs, the Mikolov et al. model captures conceptual semantics by exploiting the fact that words appearing in similar linguistic contexts are likely to have similar meanings. Informally, the model adjusts its embeddings to increase the probability of observing the training corpus. Because this probability increases with p(c|w), and p(c|w) increases with the dot product r̂_c · r_w, the updates have the effect of moving each target-embedding incrementally “closer” to the context-embeddings of its collocates. In the target-embedding space, this results in embeddings of concept words that regularly occur in similar contexts moving closer together. We use the authors’ word2vec software to train their model and use the target embeddings in our evaluations. We experimented with embeddings of dimension 100, 200, 300, 400, and 500 and found that 200 gave the best performance on both WS-353 and SimLex-999.

Vector Space Model (VSM). As an alternative to NLMs, we constructed a vector space model following the guidelines for optimal performance outlined by Kiela and Clark (2014). After extracting as features the 2,000 most frequent word tokens in the corpus that are not in a common list of stopwords (taken from the Python Natural Language Toolkit; Bird 2006), we populated a matrix of co-occurrence counts with a row for each of the concepts in some pair in our evaluation sets, and a column for each of the features. Co-occurrence was counted within a specified window size, although never across a sentence boundary. The resulting matrix was then weighted according to Pointwise Mutual Information (PMI) (Recchia and Jones 2009). The rows of the weighted matrix constitute the vector representations of the concepts.

SVD. As proposed initially in Landauer and Dumais (1997), we also experimented with models in which SVD (Golub and Reinsch 1970) is applied to the PMI-weighted VSM matrix, reducing the dimension of each concept representation to 300 (which yielded best results after experimenting, as before, with 100–500 dimension vectors). For each model described in this section, we calculate similarity as the cosine similarity between the (vector) representations learned by that model (the counting-model pipeline is sketched in the code example below).
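The counting-model pipeline (co-occurrence counts, PMI weighting, optional SVD reduction, cosine similarity) can be sketched end to end. The toy count matrix below stands in for corpus counts, and the 2-dimensional reduction stands in for the 300 dimensions used in the text.

```python
# A sketch of the VSM/SVD pipeline described above, on a toy count matrix.
import numpy as np

def pmi(counts):
    """Pointwise mutual information weighting of a co-occurrence matrix."""
    total = counts.sum()
    p_xy = counts / total
    p_x = counts.sum(axis=1, keepdims=True) / total
    p_y = counts.sum(axis=0, keepdims=True) / total
    with np.errstate(divide="ignore", invalid="ignore"):
        m = np.log(p_xy / (p_x * p_y))
    m[~np.isfinite(m)] = 0.0          # zero counts get weight 0 (a common choice)
    return m

def svd_reduce(M, dim):
    """Truncated SVD: keep the first `dim` left singular directions."""
    U, S, _ = np.linalg.svd(M, full_matrices=False)
    return U[:, :dim] * S[:dim]

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

counts = np.array([[10., 2., 0.], [8., 3., 1.], [0., 1., 9.]])  # concepts x features
V = pmi(counts)
V_red = svd_reduce(V, dim=2)          # 300 in the text; 2 for this toy matrix
print(cosine(V_red[0], V_red[1]))     # similarity of the first two concepts
```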
In experimenting with different models on SimLex-999, we aimed to answer the following questions: (i) How well do the established models perform on SimLex-999 versus on existing gold standards? (ii) Are any observed differences caused by the potential of different models to measure similarity vs. association? (iii) Are there interesting differences in the ability of models to capture similarity between adjectives vs. nouns vs. verbs? (iv) In this case, are the observed differences driven by concreteness, and its interaction with POS, or are other factors also relevant?

Overall Performance on SimLex-999. Figure 7 shows the performance of the NLMs on SimLex-999 versus on comparable data sets, measured by Spearman’s ρ correlation. All models estimate the ratings of MEN and WS-353 more accurately than SimLex-999. The Huang et al. (2012) model performs well on WS-353 (this score, based on embeddings downloaded from the authors’ webpage, is notably lower than the score reported in Huang et al. [2012], mentioned in Section 5.1), but is not very robust to changes in evaluation gold standard, and performs worst of all the models on SimLex-999. Given the focus of the WS-353 ratings, it is tempting to explain this by concluding that the global context objective leads the Huang et al. (2012) model to focus on association rather than similarity. However, the true explanation may be less simple, since the Huang et al. (2012) model performs weakly on the association-based MEN data set. The Collobert and Weston (2008) model is more robust across WS-353 and MEN, but still does not match the performance of the Mikolov et al. (2013a) model on SimLex-999. Figure 8 compares the best performing NLM (Mikolov et al. 2013a) with the VSM and SVD models (we conduct this comparison on the smaller RCV1 Corpus [Lewis et al. 2004] because training the VSM and SVD models is comparatively slow). In contrast to recent results that emphasize the superiority of NLMs over alternatives (Baroni, Dinu, and Kruszewski 2014), we observed no clear advantage for the NLM over the VSM or SVD when considering the association-based gold standards WS-353 and MEN together. While the NLM is the strongest performer on WS-353, SVD is the strongest performer on MEN. However, the NLM performs notably better than the alternatives at modeling similarity, as measured by SimLex-999. Comparing all models in Figures 7 and 8 suggests that SimLex-999 is notably more challenging to model than the alternative data sets, with correlation scores ranging from 0.098 to 0.414. Thus, even when state-of-the-art models are trained for several days on massive text corpora (training times reported by Huang et al. [2012] and, for Collobert and Weston [2008], at http://ronan.collobert.com/senna/), their performance on SimLex-999 is well below the inter-annotator agreement (Figure 7). This suggests that there is ample scope for SimLex-999 to guide the development of improved models.

Modeling Similarity vs. Association. The comparatively low performance of the NLM, VSM, and SVD models on SimLex-999 compared with MEN and WS-353 is consistent with our hypothesis that modeling similarity is more difficult than modeling association. Indeed, given that many strongly associated but dissimilar pairs, such as [coffee, cup], are likely to have high co-occurrence in the training data, and that all models infer connections between concepts from linguistic co-occurrence in some form or another, it seems plausible that models may overestimate the similarity of such pairs because they are “distracted” by association. To test this hypothesis more precisely, we compared the performance of models on the whole of SimLex-999 versus its 333 most associated pairs (according to the USF free association scores); the protocol is sketched in the code example below. Importantly, pairs in this strongly associated subset still span the full range of possible similarity scores (min similarity = 0.23 [shrink, grow], max similarity = 9.80 [vanish, disappear]). As shown in Figure 9, all models performed worse when the evaluation was restricted to pairs of strongly associated concepts, which was consistent with our hypothesis. The Collobert and Weston (2008) model was better than the Huang et al. (2012) model at estimating similarity in the face of high association. This is not entirely surprising given the global-context objective in the latter model, which may have encouraged more association-based connections between concepts. The Mikolov et al. model, however, performed notably better than both other NLMs. Moreover, this superiority is proportionally greater when evaluating on the most associated pairs only (as indicated by the difference between the red and gray bars), suggesting that the improvement is driven at least in part by an increased ability to “distinguish” similarity from association.
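The evaluation protocol used throughout this section, Spearman's ρ between model cosine similarities and gold ratings on the full data set and on the most-associated subset, might be reconstructed as follows; all inputs are assumed to be aligned arrays.

```python
# A sketch of the full-set vs. most-associated-subset evaluation.
import numpy as np
from scipy.stats import spearmanr

def evaluate(model_sims, gold_ratings, usf_strengths, n_assoc=333):
    """Return (rho on all pairs, rho on the n_assoc most associated pairs)."""
    model_sims = np.asarray(model_sims)
    gold_ratings = np.asarray(gold_ratings)
    rho_all = spearmanr(model_sims, gold_ratings).correlation
    top = np.argsort(-np.asarray(usf_strengths))[:n_assoc]   # strongest association
    rho_assoc = spearmanr(model_sims[top], gold_ratings[top]).correlation
    return rho_all, rho_assoc
```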
To understand better how the architecture of models captures information pertinent to similarity modeling, we performed two additional experiments using SimLex-999. These comparisons were also motivated by the hypotheses, made in previous studies and outlined in Section 2.1.2, that both dependency-informed input and smaller context windows encourage models to capture similarity rather than association. We tested the first hypothesis using the embeddings of Levy and Goldberg (2014), whose model extends the Mikolov et al. (2013a) model so that target-context training instances are extracted based on dependency-parsed rather than simple running text. As illustrated in Figure 9, the dependency-based embeddings outperform the original (running text) embeddings trained on the same corpus. Moreover, the comparatively large increase in the red bar compared to the gray bar suggests that an important part of the improvement of the dependency-based model derives from a greater ability to discern similarity from association. Our comparisons provided less support for the second (window size) hypothesis. As shown in Figure 10, there is a negligible improvement in the performance of the Mikolov et al. (2013a) model when the window size is reduced from 10 to 2. For the SVD model, however, we observed the converse: The SVD model with window size 10 slightly outperforms the SVD model with window size 2, and this improvement is quite pronounced on the most associated pairs in SimLex-999.

Learning Concepts of Different POS. Given the theoretical likelihood of variation in model performance across POS categories noted in Section 2.2, we evaluated the Mikolov et al. (2013a), VSM, and SVD models on the subsets of SimLex-999 containing adjective, noun, and verb concept pairs. The analyses yield two notable conclusions, as shown in Figure 11. First, perhaps contrary to intuition, all models estimate the similarity of adjectives better than that of the other concept categories. This aligns with the (also unexpected) observation that humans rate the similarity of adjectives more consistently and with more agreement than other parts of speech (see the dashed lines). However, the parallels between human raters and the models do not extend to verbs and nouns; verb similarity is rated more consistently than noun similarity by humans, but models estimate these ratings more accurately for nouns than for verbs. To better understand the linguistic information exploited by models when acquiring concepts of different POS, we also computed performance on the POS subsets of SimLex-999 of the dependency-based model of Levy and Goldberg (2014) and the standard skip-gram model, in which linguistic contexts are encoded as simple bags-of-words (BOW) (Mikolov et al. 2013a, trained on the same Wikipedia text). As shown in Figure 12, dependency-aware contexts yield the largest improvements for capturing verb similarity. This aligns with the cognitive theory of verbs as relational concepts (Markman and Wisniewski 1997) whose meanings rely on their interaction with (or dependency on) other words or concepts. It is also consistent with research on the automatic acquisition of verb semantics, in which syntactic features have proven particularly important (Sun, Korhonen, and Krymolowski 2008). Although a deeper exploration of these effects is beyond the scope of this work, this preliminary analysis again highlights how the word classes integrated into SimLex-999 are pertinent to a range of questions concerning lexical semantics.

Learning Concrete and Abstract Concepts.
Given the strong interdependence between POS and conceptual concreteness (Figure 1), we aimed to explore whether the variation in model performance on different POS categories was in fact driven by an underlying effect of concreteness. To do so, we ranked each pair in the SimLex-999 data set according to the sum of the concreteness of the two words, and compared the performance of models on the most concrete and least concrete quartiles according to this ranking (Figure 13; this analysis is sketched in the code example below). Interestingly, the performance of models on the most abstract and most concrete pairs suggests that the distinction characterized by concreteness is at least partially independent of POS. Specifically, while the Mikolov et al. model was the highest performer across all POS categories, its performance was worse than both the simple VSM and SVD models (of window size 10) on the most concrete concept pairs. This finding supports the growing evidence for systematic differences in representation and/or similarity operations between abstract and concrete concepts (Hill, Kiela, and Korhonen 2013), and suggests that at least part of these concreteness effects are independent of POS. In particular, it appears that models built from underlying vectors of co-occurrence counts, such as VSMs and SVD, are better equipped to capture the semantics of concrete entities, whereas the embeddings learned by NLMs can better capture abstract semantics.
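The concreteness analysis might be reconstructed as in the sketch below: rank pairs by summed concreteness and correlate model scores with gold ratings separately on the extreme quartiles. Inputs are assumed to be aligned arrays; this is our reconstruction, not the authors' code.

```python
# A sketch of the most-concrete vs. least-concrete quartile comparison.
import numpy as np
from scipy.stats import spearmanr

def concreteness_quartiles(model_sims, gold_ratings, conc_sums):
    """Return (rho on the most abstract quartile, rho on the most concrete)."""
    order = np.argsort(np.asarray(conc_sums))       # ascending summed concreteness
    q = len(order) // 4
    abstract_q, concrete_q = order[:q], order[-q:]
    def rho(idx):
        return spearmanr(np.asarray(model_sims)[idx],
                         np.asarray(gold_ratings)[idx]).correlation
    return rho(abstract_q), rho(concrete_q)
```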
6 Conclusion

Although the ultimate test of semantic models should be their utility in downstream applications, the research community can undoubtedly benefit from ways to evaluate the general quality of the representations learned by such models, prior to their integration in any particular system. We have presented SimLex-999, a gold standard resource for the evaluation of semantic representations containing similarity ratings of word pairs of different POS categories and concreteness levels. The development of SimLex-999 was principally motivated by two factors. First, as we demonstrated, several existing gold standards measure the ability of models to capture association rather than similarity, and others do not adequately test their ability to discriminate similarity from association. This is despite the many potential applications for accurate similarity-focused representation learning models. Analysis of the ratings of the 500 SimLex-999 annotators showed that subjects can consistently quantify similarity, as distinct from association, and apply it to various concept types, based on minimal intuitive instructions. Second, as we showed, state-of-the-art models trained solely on running-text corpora have now reached or surpassed the human agreement ceiling on WordSim-353 and MEN, the most popular existing gold standards, as well as on RG and WS-Sim. These evaluations may therefore have limited use in guiding or moderating future improvements to distributional semantic models. Nevertheless, there is clearly still room for improvement in terms of the use of distributional models in functional applications. We therefore consider the comparatively low performance of state-of-the-art models on SimLex-999 to be one of its principal strengths: There is clear room under the inter-rater agreement ceiling to guide the development of the next generation of distributional models. We conducted a brief exploration of how models might improve on this performance, and verified the hypothesis that models trained on dependency-based input capture similarity more effectively than those trained on running-text input. The evidence that smaller context windows are also beneficial for similarity models was mixed, however. Indeed, we showed that the optimal window size depends on both the general model architecture and the part-of-speech and concreteness of the target concepts. Our analysis of these hypotheses illustrates how the design of SimLex-999 (covering a principled set of concept categories and including meta-information on concreteness and free-association strength) enables fine-grained analyses of the performance and parameterization of semantic models. However, these experiments only scratch the surface of the possible analyses. We hope that researchers will adopt the resource as a robust means of answering a diverse range of questions pertinent to similarity modeling, distributional semantics, and representation learning in general. In particular, for models to learn high-quality representations for all linguistic concepts, we believe that future work must uncover ways to explicitly or implicitly infer “deeper,” more general conceptual properties such as intentionality, polarity, subjectivity, or concreteness (Gershman and Dyer 2014). However, although improving corpus-based models in this direction is certainly realistic, models that learn exclusively via the linguistic modality may never reach human-level performance on evaluations such as SimLex-999. This is because much conceptual knowledge, and particularly that which underlies similarity computations for concrete concepts, appears to be grounded in the perceptual modalities as much as in language (Barsalou et al. 2003). Whatever the means by which the improvements are achieved, accurate concept-level representation is likely to constitute a necessary first step towards learning informative, language-neutral phrasal and sentential representations. Such representations would be hugely valuable for fundamental NLP applications such as language understanding tools and machine translation. Distributional semantics aims to infer the meaning of words based on the company they keep (Firth 1957). However, although words that occur together in text often have associated meanings, these meanings may be very similar or indeed very different. Thus, possibly excepting the population of Argentina, most people would agree that, strictly speaking, Maradona is not synonymous with football (despite their high rating of 8.62 in WordSim-353). The challenge for the next generation of distributional models may therefore be to infer what is useful from the co-occurrence signal and to overlook what is not. Perhaps only then will models capture most, or even all, of what humans know when they know how to use a language.

Abstract

We present SimLex-999, a gold standard resource for evaluating distributional semantic models that improves on existing resources in several important ways. First, in contrast to gold standards such as WordSim-353 and MEN, it explicitly quantifies similarity rather than association or relatedness, so that pairs of entities that are associated but not actually similar (Freud, psychology) have a low rating. We show that, via this focus on similarity, SimLex-999 incentivizes the development of models with a different, and arguably wider, range of applications than those which reflect conceptual association. Second, SimLex-999 contains a range of concrete and abstract adjective, noun, and verb pairs, together with an independent rating of concreteness and (free) association strength for each pair.
This diversity enables fine-grained analyses of the performance of models on concepts of different types, and consequently greater insight into how architectures can be improved. Further, unlike existing gold standard evaluations, for which automatic approaches have reached or surpassed the inter-annotator agreement ceiling, state-of-the-art models perform well below this ceiling on SimLex-999. There is therefore plenty of scope for SimLex-999 to quantify future improvements to distributional semantic models, guiding the development of the next generation of representation-learning architectures.

Felix Hill, Roi Reichart, and Anna Korhonen

References

Agirre, Eneko, Enrique Alfonseca, Keith Hall, Jana Kravalova, Marius Paşca, and Aitor Soroa. 2009. A study on similarity and relatedness using distributional and Wordnet-based approaches. In Proceedings.
Alfonseca, Enrique, and Suresh Manandhar. 2002. Extending a lexical ontology by a combination of distributional semantics signatures. In G. Schrieber et al., Knowledge Engineering and Knowledge.
Andrews, Mark, Gabriella Vigliocco, and David Vinson. 2009. Integrating experiential and distributional data to learn semantic representations. Psychological Review, 116(3):463.
Bansal, Mohit, Kevin Gimpel, and Karen Livescu. 2014. Tailoring continuous word representations for dependency parsing. In Proceedings of ACL, Baltimore, MD.
Baroni, Marco, Georgiana Dinu, and Germán Kruszewski. 2014. Don't count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors. In Proceedings of ACL, Baltimore, MD.
Baroni, Marco, and Alessandro Lenci. 2010. Distributional memory: A general framework for corpus-based semantics. Computational Linguistics, 36(4):673–721.
Barsalou, Lawrence W., W. Kyle Simmons, Aron K. Barbey, and Christine D. Wilson. 2003. Grounding conceptual knowledge in modality-specific systems. Trends in Cognitive Sciences, 7(2):84–91.
Beltagy, Islam, Katrin Erk, and Raymond Mooney. 2014. Semantic parsing using distributional semantics and probabilistic logic. In ACL 2014 Workshop on Semantic Parsing.
Bengio, Yoshua, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. The Journal of Machine Learning Research, 3:1137–1155.
Bernardi, Raffaella, Georgiana Dinu, Marco Marelli, and Marco Baroni. 2013. A relatedness benchmark to test the role of determiners in compositional distributional semantics. In Proceedings of.
Biemann, Chris. 2005. Ontology learning from text: A survey of methods. LDV Forum, 20(2):75–93.
Bird, Steven. 2006. NLTK: The Natural Language Toolkit. In Proceedings of the COLING/ACL on Interactive Presentation Sessions, pages 69–72, Sydney.
Bruni, Elia, Gemma Boleda, Marco Baroni, and Nam-Khanh Tran. 2012. Distributional semantics in technicolor. In Proceedings of ACL, Jeju Island.
Bruni, Elia, Jasper Uijlings, Marco Baroni, and Nicu Sebe. 2012. Distributional semantics with eyes: Using image analysis to improve computational representations of word meaning. In Proceedings of the 20th.
Budanitsky, Alexander, and Graeme Hirst. 2006. Evaluating Wordnet-based measures of lexical semantic relatedness. Computational Linguistics, 32(1):13–47.
Cimiano, Philipp, Andreas Hotho, and Steffen Staab. 2005. Learning concept hierarchies from text corpora using formal concept analysis. J. Artif. Intell. Res. (JAIR), 24:305–339.
Collobert, R., and J. Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In International Conference on Machine Learning, ICML, Helsinki.
Cruse, D. Alan. 1986. Lexical Semantics. Cambridge University Press.
Cunningham, Hamish. 2005. Information extraction, automatic. Encyclopedia of Language and Linguistics, pages 665–677.
Fellbaum, Christiane. 1998. WordNet. Wiley Online Library.
Finkelstein, Lev, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. 2001. Placing search in context: The concept revisited. In Proceedings of the.
Firth, J. R. 1957. Papers in Linguistics 1934–1951. Oxford University Press.
Gentner, Dedre. 1978. On relational meaning: The acquisition of verb meaning. Child Development, pages 988–998.
Gentner, Dedre. 2006. Why verbs are hard to learn. Action Meets Word: How Children Learn Verbs, pages 544–564.
Gershman, Anatole, Yulia Tsvetkov, Leonid Boytsov, Eric Nyberg, and Chris Dyer. 2014. Metaphor detection with cross-lingual model transfer. In Proceedings of ACL, Baltimore, MD.
Golub, Gene H., and Christian Reinsch. 1970. Singular value decomposition and least squares solutions. Numerische Mathematik, 14(5):403–420.
Griffiths, Thomas L., Mark Steyvers, and Joshua B. Tenenbaum. 2007. Topics in semantic representation. Psychological Review, 114(2):211.
Haghighi, Aria, Percy Liang, Taylor Berg-Kirkpatrick, and Dan Klein. 2008. Learning bilingual lexicons from monolingual corpora. In Proceedings of ACL 2008, Columbus, OH.
Hassan, Samer, and Rada Mihalcea. 2011. Semantic relatedness using salient semantic analysis. In AAAI, San Francisco, CA.
Hatzivassiloglou, Vasileios, Judith L. Klavans, Melissa L. Holcombe, Regina Barzilay, Min-Yen Kan, and Kathleen McKeown. 2001. Simfinder: A flexible clustering tool for summarization.
He, Xiaodong, Mei Yang, Jianfeng Gao, Patrick Nguyen, and Robert Moore. 2008. Indirect-HMM-based hypothesis alignment for combining outputs from machine translation systems. In Proceedings.
Hill, Felix, Douwe Kiela, and Anna Korhonen. 2013. Concreteness and corpora: A theoretical and practical analysis. CMCL 2013, page 75, Sofia.
Hill, Felix, Anna Korhonen, and Christian Bentz. 2014. A quantitative empirical analysis of the abstract/concrete distinction. Cognitive Science, 38(1):162–177.
Hill, Felix, Roi Reichart, and Anna Korhonen. 2014. Multi-modal models for concrete and abstract concept meaning. Transactions of the Association for Computational Linguistics (TACL), 2:285–296.
Huang, Eric H., Richard Socher, Christopher D. Manning, and Andrew Y. Ng. 2012. Improving word representations via global context and multiple word prototypes. In Proceedings of ACL, pages 873–882.
Ng.""], ""title"": ""Improving word representations via global context and multiple word prototypes"", ""venue"": ""Proceedings of ACL, pages 873\u2013882,"", ""year"": 2012}, {""authors"": [""Kiela"", ""Douwe"", ""Stephen Clark.""], ""title"": ""A systematic study of semantic vector space model parameters"", ""venue"": ""Proceedings of the 2nd Workshop on Continuous Vector"", ""year"": 2014}, {""authors"": [""Kiela"", ""Douwe"", ""Felix Hill"", ""Anna Korhonen"", ""Stephen Clark.""], ""title"": ""Improving multi-modal representations using image dispersion: Why less is sometimes more"", ""venue"": ""Proceedings of ACL, Baltimore, MD."", ""year"": 2014}, {""authors"": [""Landauer"", ""Thomas K."", ""Susan T. Dumais.""], ""title"": ""A solution to Plato\u2019s problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge"", ""venue"": ""Psychological Review,"", ""year"": 1997}, {""authors"": [""Leech"", ""Geoffrey"", ""Roger Garside"", ""Michael Bryant.""], ""title"": ""Claws4: The tagging of the British National Corpus"", ""venue"": ""Proceedings of COLING, pages 622\u2013628, Kyoto."", ""year"": 1994}, {""authors"": [""Levy"", ""Omer"", ""Yoav Goldberg.""], ""title"": ""Dependency-based word embeddings"", ""venue"": ""Proceedings of ACL, volume 2."", ""year"": 2014}, {""authors"": [""Levy"", ""Omer"", ""Steffen Remus"", ""Chris Biemann"", ""Idol Dagan""], ""title"": ""Do supervised distributional methods really learn lexical inference relations"", ""venue"": ""Proceedings of NAACL,"", ""year"": 2015}, {""authors"": [""Lewis"", ""David D."", ""Yiming Yang"", ""Tony G. Rose"", ""Fan Li.""], ""title"": ""Rcv1: A new benchmark collection for text categorization research"", ""venue"": ""The Journal of Machine Learning Research, 5:361\u2013397."", ""year"": 2004}, {""authors"": [""Li"", ""Changliang"", ""Bo Xu"", ""Gaowei Wu"", ""Xiuying Wang"", ""Wendong Ge"", ""Yan Li.""], ""title"": ""Obtaining better word representations via language transfer"", ""venue"": ""A. Gelbukh, editor, Computational Linguistics and Intelligent"", ""year"": 2014}, {""authors"": [""Li"", ""Mu"", ""Yang Zhang"", ""Muhua Zhu"", ""Ming Zhou.""], ""title"": ""Exploring distributional similarity based models for query spelling correction"", ""venue"": ""Proceedings of ALC, pages 1025\u20131032."", ""year"": 2006}, {""authors"": [""Luong"", ""Minh-Thang"", ""Richard Socher"", ""Christopher D. Manning.""], ""title"": ""Better word representations with recursive neural networks for morphology"", ""venue"": ""CoNLL-2013, page 104, Sofia."", ""year"": 2013}, {""authors"": [""Markman"", ""Arthur B."", ""Edward J. Wisniewski.""], ""title"": ""Similar and different: The differentiation of basic-level categories"", ""venue"": ""Journal of Experimental Psychology: Learning, Memory, and Cognition, 23(1):54."", ""year"": 1997}, {""authors"": [""Marton"", ""Yuval"", ""Chris Callison-Burch"", ""Philip Resnik.""], ""title"": ""Improved statistical machine translation using monolinguallyderived paraphrases"", ""venue"": ""Proceedings of EMNLP, pages 381\u2013390, Edinburgh."", ""year"": 2009}, {""authors"": [""McRae"", ""Ken"", ""Saman Khalkhali"", ""Mary Hare""], ""title"": ""Semantic and associative relations in adolescents and young"", ""year"": 2012}, {""authors"": [""Medelyan"", ""Olena"", ""David Milne"", ""Catherine Legg"", ""Ian H. 
Witten.""], ""title"": ""Mining meaning from Wikipedia"", ""venue"": ""International Journal of Human-Computer Studies, 67(9):716\u2013754."", ""year"": 2009}, {""authors"": [""Mikolov"", ""Tomas"", ""Kai Chen"", ""Greg Corrado"", ""Jeffrey Dean.""], ""title"": ""Efficient estimation of word representations in vector space"", ""venue"": ""Proceedings of International Conference of Learning Representations,"", ""year"": 2013}, {""authors"": [""Mikolov"", ""Tomas"", ""Ilya Sutskever"", ""Kai Chen"", ""Greg S. Corrado"", ""Jeff Dean.""], ""title"": ""Distributed representations of words and phrases and their compositionality"", ""venue"": ""Advances in Neural Information"", ""year"": 2013}, {""authors"": [""Navigli"", ""Roberto.""], ""title"": ""Word sense disambiguation: A survey"", ""venue"": ""ACM Computing Surveys (CSUR), 41(2):10."", ""year"": 2009}, {""authors"": [""Nelson"", ""Douglas L."", ""Cathy L. McEvoy"", ""Thomas A. Schreiber.""], ""title"": ""The University of South Florida free association, rhyme, and word fragment norms"", ""venue"": ""Behavior Research Methods, Instruments, & Computers,"", ""year"": 2004}, {""authors"": [""Pad\u00f3"", ""Sebastian"", ""Ulrike Pad\u00f3"", ""Katrin Erk.""], ""title"": ""Flexible, corpus-based modelling of human plausibility judgements"", ""venue"": ""Proceedings of EMNLP-CoNLL, pages 400\u2013409, Prague."", ""year"": 2007}, {""authors"": [""Paivio"", ""Allan.""], ""title"": ""Dual coding theory: Retrospect and current status"", ""venue"": ""Canadian Journal of Psychology/Revue canadienne de psychologie, 45(3):255."", ""year"": 1991}, {""authors"": [""Pedersen"", ""Ted"", ""Siddharth Patwardhan"", ""Jason Michelizzi.""], ""title"": ""Wordnet:: Similarity: Measuring the relatedness of concepts"", ""venue"": ""Demonstration Papers at HLT-NAACL 2004, pages 38\u201341, New York,"", ""year"": 2004}, {""authors"": [""Phan"", ""Xuan-Hieu"", ""Le-Minh Nguyen"", ""Susumu Horiguchi.""], ""title"": ""Learning to classify short and sparse text & Web with hidden topics from large-scale data collections"", ""venue"": ""Proceedings of the 17th"", ""year"": 2008}, {""authors"": [""Plaut"", ""David C.""], ""title"": ""Semantic and associative priming in a distributed attractor network"", ""venue"": ""Proceedings"", ""year"": 1995}, {""authors"": [""Recchia"", ""Gabriel"", ""Michael N. Jones.""], ""title"": ""More data trumps smarter algorithms: Comparing pointwise mutual information with latent semantic analysis"", ""venue"": ""Behavior Research Methods, 41(3):647\u2013656."", ""year"": 2009}, {""authors"": [""Reisinger"", ""Joseph"", ""Raymond Mooney.""], ""title"": ""A mixture model with sharing for lexical semantics"", ""venue"": ""Proceedings of EMNLP, pages 1173\u20131182, Cambridge, MA."", ""year"": 2010}, {""authors"": [""Reisinger"", ""Joseph"", ""Raymond J. Mooney.""], ""title"": ""Multi-prototype vector-space models of word meaning"", ""venue"": ""Human Language Technologies: The 2010 Annual Conference of the North American Chapter of"", ""year"": 2010}, {""authors"": [""Resnik"", ""Philip.""], ""title"": ""Using information content to evaluate semantic similarity in a taxonomy"", ""venue"": ""Proceedings of IJCAI."", ""year"": 1995}, {""authors"": [""Resnik"", ""Philip"", ""Jimmy Lin.""], ""title"": ""11 evaluations of NLP systems"", ""venue"": ""The handbook of computational linguistics and natural language processing, 57:271."", ""year"": 2010}, {""authors"": [""Rosch"", ""Eleanor"", ""Carol Simpson"", ""R. 
Rosch, Eleanor, Carol Simpson, and R. Scott Miller. 1976. Structural bases of typicality effects. Journal of Experimental Psychology: Human Perception and Performance, 2(4):491.
Rose, Tony, Mark Stevenson, and Miles Whitehead. 2002. The Reuters Corpus Volume 1—from yesterday's news to tomorrow's language resources. In Proceedings of LREC, volume 2, pages 827–832.
Rubenstein, Herbert, and John B. Goodenough. 1965. Contextual correlates of synonymy. Communications of the ACM, 8(10):627–633.
Silberer, Carina, and Mirella Lapata. 2014. Learning grounded meaning representations with autoencoders. In Proceedings of ACL, Sofia.
Sun, Lin, Anna Korhonen, and Yuval Krymolowski. 2008. Verb class discovery from rich syntactic data. In A. Gelbukh, editor, Computational Linguistics and Intelligent Text Processing. Springer.
Turian, Joseph, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: A simple and general method for semi-supervised learning. In Proceedings of ACL, pages 384–394.
Turney, Peter D. 2012. Domain and function: A dual-space model of semantic relations and compositions. Journal of Artificial Intelligence Research, 44:533–585.
Turney, Peter D., and Patrick Pantel. 2010. From frequency to meaning: Vector space models of semantics. Journal of Artificial Intelligence Research, 37(1):141–188.
Tversky, Amos. 1977. Features of similarity. Psychological Review, 84(4):327.
Wiebe, Janyce. 2000. Learning subjective adjectives from corpora. In Proceedings of AAAI/IAAI, pages 735–740, Austin, TX.
Williams, Gbolahan K., and Sarabjot Singh Anand. 2009. Predicting the polarity strength of adjectives using WordNet. In Proceedings of ICWSM.
Wu, Zhibiao, and Martha Palmer. 1994. Verbs, semantics and lexical selection. In Proceedings of ACL, pages 133–138, Las Cruces, NM.
Yong, Chung, and Shou King Foo. 1999. A case study on inter-annotator agreement for word sense disambiguation. In Proceedings of the ACL SIGLEX Workshop on Standardizing Lexical Resources.

1 introduction :
There is very little similar about coffee and cups. Coffee refers to a plant (a living organism) or to a hot brown (liquid) drink. In contrast, a cup is a man-made solid of broadly well-defined shape and size with a specific function relating to the consumption of liquids. Perhaps the only clear trait these concepts have in common is that they are concrete entities.
Nevertheless, in what is currently the most popular evaluation gold standard for semantic similarity, WordSim(WS)-353 (Finkelstein et al. 2001), coffee and cup are rated as more “similar” than pairs such as car and train, which share numerous common properties (function, material, dynamic behavior, wheels, windows, etc.). Such anomalies also exist in other gold standards such as the MEN data set (Bruni et al. 2012a). As a consequence, these evaluations effectively penalize models for learning the evident truth that coffee and cup are dissimilar. Although clearly different, coffee and cup are very much related. The psychological literature refers to the conceptual relationship between these concepts as association, although it has been given a range of names including relatedness (Budanitsky and Hirst 2006; Agirre et al. 2009), topical similarity (Hatzivassiloglou et al. 2001), and domain similarity (Turney 2012). Association contrasts with similarity, the relation connecting cup and mug (Tversky 1977). At its strongest, the similarity relation is exemplified by pairs of synonyms; words with identical referents. Computational models that effectively capture similarity as distinct from association have numerous applications. Such models are used for the automatic generation of dictionaries, thesauri, ontologies, and language correction tools (Biemann 2005; Cimiano, Hotho, and Staab 2005; Li et al. 2006). Machine translation systems, which aim to define mappings between fragments of different languages whose meaning is similar, but not necessarily associated, are another established application (He et al. 2008; Marton, Callison-Burch, and Resnik 2009). Moreover, since, as we establish, similarity is a cognitively complex operation that can require rich, structured conceptual knowledge to compute accurately, similarity estimation constitutes an effective proxy evaluation for general-purpose representation-learning models whose ultimate application is variable or unknown (Collobert and Weston 2008; Baroni and Lenci 2010). As we show in Section 2, the predominant gold standards for semantic evaluation in NLP do not measure the ability of models to reflect similarity. In particular, in both WS-353 and MEN, pairs of words with associated meaning, such as coffee and cup (rating = 6.8/10), telephone and communication (7.5/10), or movie and theater (7.7/10), receive a high rating regardless of whether or not their constituents are similar. Thus, the utility of such resources to the development and application of similarity models is limited, a problem exacerbated by the fact that many researchers appear unaware of what their evaluation resources actually measure. (For instance, Huang et al. (2012, pages 1, 4, 10) and Reisinger and Mooney (2010b, page 4) refer to MEN and/or WS-353 as “similarity data sets,” and others evaluate on both these association-based and genuine similarity-based gold standards with no reference to the fact that they measure different things; see Medelyan et al. 2009 and Li et al. 2014.) Although certain smaller gold standards—those of Rubenstein and Goodenough (1965) (RG) and Agirre et al. (2009) (WS-Sim)—do focus clearly on similarity, these resources suffer from other important limitations. For instance, as we show, and as is also the case for WS-353 and MEN, state-of-the-art models have reached the average performance of a human annotator on these evaluations. It is common practice in NLP to define the upper limit for automated performance on an evaluation as the average human performance or inter-annotator agreement (Yong and Foo 1999; Cunningham 2005; Resnik and Lin 2010). Based on this established principle and the current evaluations, it would therefore be reasonable to conclude that the problem of representation learning, at least for similarity modeling, is approaching resolution.
However, circumstantial evidence suggests that distributional models are far from perfect. For instance, we are some way from automatically generated dictionaries, thesauri, or ontologies that can be used with the same confidence as their manually created equivalents. Motivated by these observations, in Section 3 we present SimLex-999, a gold standard resource for evaluating the ability of models to reflect similarity. SimLex-999 was produced by 500 paid native English speakers, recruited via Amazon Mechanical Turk (www.mturk.com), who were asked to rate the similarity, as opposed to association, of concepts via a simple visual interface. The choice of evaluation pairs in SimLex-999 was motivated by empirical evidence that humans represent concepts of distinct part-of-speech (POS) (Gentner 1978) and conceptual concreteness (Hill, Korhonen, and Bentz 2014) differently. Whereas existing gold standards contain only concrete noun concepts (MEN) or cover only some of these distinctions via a random selection of items (WS-353, RG), SimLex-999 contains a principled selection of adjective, verb, and noun concept pairs covering the full concreteness spectrum. This design enables more nuanced analyses of how computational models overcome the distinct challenges of representing concepts of these types. In Section 4 we present quantitative and qualitative analyses of the SimLex-999 ratings, which indicate that participants found it unproblematic to quantify consistently the similarity of the full range of concepts and to distinguish it from association. Unlike existing data sets, SimLex-999 therefore contains a significant number of pairs, such as [movie, theater], which are strongly associated but receive low similarity scores. The second main contribution of this paper, presented in Section 5, is the evaluation of state-of-the-art distributional semantic models using SimLex-999. These include the well-known neural language models (NLMs) of Huang et al. (2012), Collobert and Weston (2008), and Mikolov et al. (2013a), which we compare with traditional vector-space co-occurrence models (VSMs) (Turney and Pantel 2010) with and without dimensionality reduction (SVD) (Landauer and Dumais 1997). Our analyses demonstrate how SimLex-999 can be applied to uncover substantial differences in the ability of models to represent concepts of different types. Despite these differences, the models we consider each share the characteristic of being better able to capture association than similarity. We show that the difficulty of estimating similarity is driven primarily by those strongly associated pairs with a high (association) rating in gold standards such as WS-353 and MEN, but a low similarity rating in SimLex-999. As a result of including these challenging cases, together with a wider diversity of lexical concepts in general, current models achieve notably lower scores on SimLex-999 than on existing gold standard evaluations, and well below the SimLex-999 inter-human agreement ceiling. Finally, we explore ways in which distributional models might improve on this performance in similarity modeling.
To do so, we evaluate the models on the SimLex-999 subsets of adjectives, nouns, and verbs, as well as on abstract and concrete subsets and subsets of more and less strongly associated pairs (Sections 5.2.2–5.2.4). As part of these analyses, we confirm the hypothesis (Agirre et al. 2009; Levy and Goldberg 2014) that models learning from input informed by dependency parsing, rather than simple running-text input, yield improved similarity estimation and, specifically, a clearer distinction between similarity and association. In contrast, we find no evidence for a related hypothesis (Agirre et al. 2009; Kiela and Clark 2014) that smaller context windows improve the ability of models to capture similarity. We do, however, observe clear differences in model performance on the distinct concept types included in SimLex-999. Taken together, these experiments demonstrate the benefit of the diversity of concepts included in SimLex-999; it would not have been possible to derive similar insights by evaluating based on existing gold standards. We conclude by discussing how observations such as these can guide future research into distributional semantic models. By facilitating better-defined evaluations and finer-grained analyses, we hope that SimLex-999 will ultimately contribute to the development of models that accurately reflect human intuitions of similarity for the full range of concepts in language.

2 design motivation :
In this section, we motivate the design decisions made in developing SimLex-999. We begin (2.1) by examining the distinction between similarity and association. We then show that for a meaningful treatment of similarity it is also important to take a principled approach to both POS and conceptual concreteness (2.2). We finish by reviewing existing gold standards, and show that none enables a satisfactory evaluation of the capability of models to capture similarity (2.3).

The difference between association and similarity is exemplified by the concept pairs [car, bike] and [car, petrol]. Car is said to be (semantically) similar to bike and associated with (but not similar to) petrol. Intuitively, car and bike can be understood as similar because of their common physical features (e.g., wheels), their common function (transport), or because they fall within a clearly definable category (modes of transport). In contrast, car and petrol are associated because they frequently occur together in space and language, in this case as a result of a clear functional relationship (Plaut 1995; McRae, Khalkhali, and Hare 2012). Association and similarity are neither mutually exclusive nor independent. Bike and car, for instance, are related to some degree by both relations. Because it is common in both the physical world and in language for distinct entities to interact, it is relatively easy to conceive of concept pairs, such as car and petrol, that are strongly associated but not similar. Identifying pairs of concepts for which the converse is true is comparatively more difficult. One exception is common concepts paired with low frequency synonyms, such as camel and dromedary. Because the essence of association is co-occurrence (linguistic or otherwise [McRae, Khalkhali, and Hare 2012]), such pairs can seem, at least intuitively, to be similar but not strongly associated. To explore the interaction between the two cognitive phenomena quantitatively, we exploited perhaps the only two existing large-scale means of quantifying similarity and association.
To estimate similarity, we considered proximity in the WordNet taxonomy (Fellbaum 1998). Specifically, we applied the measure of Wu and Palmer (1994) (henceforth WupSim), which approximates similarity on a [0,1] scale reflecting the minimum distance between any two synsets of two given concepts in WordNet. WupSim has been shown to correlate well with human judgments on the similarity-focused RG data set (Wu and Palmer 1994). To estimate association, we extracted ratings directly from the University of South Florida Free Association Database (USF) (Nelson, McEvoy, and Schreiber 2004). These data were generated by presenting human subjects with one of 5,000 cue concepts and asking them to write the first word that comes into their head that is associated with or meaningfully related to that concept. Each cue concept c was normed in this way by over 10 participants, resulting in a set of associates for each cue, and a total of over 72,000 (c, a) pairs. Moreover, for each such pair, the proportion of participants who produced associate a when presented with cue c can be used as a proxy for the strength of association between the two concepts. By measuring WupSim between all pairs in the USF data set, we observed, as expected, a high correlation between similarity and association strength across all USF pairs (Spearman ρ = 0.65, p < 0.001). However, in line with the intuitive ubiquity of pairs such as car and petrol, over 10% of the USF pairs (all of which are associated to a greater or lesser degree) had a WupSim score of less than 0.25. These include pairs of ontologically different entities with a clear functional relationship in the world [refrigerator, food], which may be of differing concreteness [lung, disease]; pairs in which one concept is a small concrete part of a larger abstract category [sheriff, police]; pairs in a relationship of modification or subcategorization [gravy, boat]; and even those whose principal connection is phonetic [wiggle, giggle]. As we show in Section 2.2, these are precisely the sort of pairs that are not contained in existing evaluation gold standards. Table 1 lists the USF noun pairs with the lowest similarity scores overall, and also those with the largest additive discrepancy between association strength and similarity.
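The WupSim computation described above is straightforward to reproduce. The following is a minimal sketch using the WordNet interface of NLTK (Bird 2006); the wupsim helper is our illustration rather than code from the paper, and it takes the best Wu-Palmer score over all synset pairs of the two words.

```python
from itertools import product

from nltk.corpus import wordnet as wn  # requires: nltk.download('wordnet')

def wupsim(word1, word2):
    """Best Wu-Palmer similarity over all synset pairs of two words (in [0, 1])."""
    scores = [
        s1.wup_similarity(s2) or 0.0  # wup_similarity returns None across POS
        for s1, s2 in product(wn.synsets(word1), wn.synsets(word2))
    ]
    return max(scores, default=0.0)

print(wupsim("car", "bike"))    # similar pair: high score
print(wupsim("car", "petrol"))  # associated but dissimilar: lower score
```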
2.1.1 Association and Similarity in NLP. As noted in the Introduction, the similarity/association distinction is not only of interest to researchers in psychology or linguistics. Models of similarity are particularly applicable to various NLP tasks, such as lexical resource building, semantic parsing, and machine translation (Haghighi et al. 2008; He et al. 2008; Marton, Callison-Burch, and Resnik 2009; Beltagy, Erk, and Mooney 2014). Models of association, on the other hand, may be better suited to tasks such as word-sense disambiguation (Navigli 2009), and applications such as text classification (Phan, Nguyen, and Horiguchi 2008) in which the target classes correspond to topical domains such as agriculture or sport (Rose, Stevenson, and Whitehead 2002). Much recent research in distributional semantics does not distinguish between association and similarity in a principled way (see, e.g., Reisinger and Mooney 2010b; Huang et al. 2012; Luong, Socher, and Manning 2013), although several papers that take a knowledge-based or symbolic approach to meaning do address the similarity/association issue (Budanitsky and Hirst 2006). One exception is Turney (2012), who constructs two distributional models with different features and parameter settings, explicitly designed to capture either similarity or association. Using the output of these two models as input to a logistic regression classifier, Turney predicts whether two concepts are associated, similar, or both, with 61% accuracy. However, in the absence of a gold standard covering the full range of similarity ratings (rather than a list of pairs identified as being similar or not), Turney cannot confirm directly that the similarity-focused model does indeed effectively quantify similarity.

Agirre et al. (2009) explicitly examine the distinction between association and similarity in relation to distributional semantic models. Their study is based on the partition of WS-353 into a subset focused on similarity, which we refer to as WS-Sim, and a subset focused on association, which we term WS-Rel. More precisely, WS-Sim is the union of the pairs in WS-353 judged by three annotators to be similar and the set U of entirely unrelated pairs, and WS-Rel is the union of U and pairs judged to be associated but not similar. Agirre et al. confirm the importance of the association/similarity distinction by showing that certain models perform relatively well on WS-Rel, whereas others perform comparatively better on WS-Sim. However, as shown in the following section, a model need not be an exemplary model of similarity in order to perform well on WS-Sim, because an important class of concept pair (associated but not similar entities) is not represented in this data set. Therefore the insights that can be drawn from the results of the Agirre et al. study are limited. Several other authors touch on the similarity/association distinction in inspecting the output of distributional models (Andrews, Vigliocco, and Vinson 2009; Kiela and Clark 2014; Levy and Goldberg 2014). Although the strength of the conclusions that can be drawn from such qualitative analyses is clearly limited, there appear to be two broad areas of consensus concerning similarity and distributional models:

- Models that learn from input annotated for syntactic or dependency relations better reflect similarity, whereas approaches that learn from running-text or bag-of-words input better model association (Agirre et al. 2009; Levy and Goldberg 2014).
- Models with larger context windows may learn representations that better capture association, whereas models with narrower windows better reflect similarity (Agirre et al. 2009; Kiela and Clark 2014).

Empirical studies have shown that the performance of both humans and distributional models depends on the POS category of the concepts learned. Gentner (2006) showed that children find verb concepts harder to learn than noun concepts, and Markman and Wisniewski (1997) present evidence that different cognitive operations are used when comparing two nouns or two verbs. Hill, Reichart, and Korhonen (2014) demonstrate differences in the ability of distributional models to acquire noun and verb semantics. Further, they show that these differences are greater for models that learn from both text and perceptual input (as with humans). In addition to POS category, differences in human and computational concept learning and representation have been attributed to the effects of concreteness, the extent to which a concept has a directly perceptible physical referent.
On the cognitive side, these “concreteness effects” are well established, even if the causes are still debated (Paivio 1991; Hill, Korhonen, and Bentz 2014). Concreteness has also been associated with differential performance in computational text-based (Hill, Kiela, and Korhonen 2013) and multi-modal semantic models (Kiela et al. 2014). For brevity, we do not exhaustively review all methods that have been used to evaluate semantic models, but instead focus on the similarity or association-based gold standards that are most commonly applied in recent work in NLP. In each case, we consider how well the data set satisfies each of the three following criteria:

Representative. The resource should cover the full range of concepts that occur in natural language. In particular, it should include cases representing the different ways in which humans represent or process concepts, and cases that are both challenging and straightforward for computational models.

Clearly defined. In order for a gold standard to be diagnostic of how well a model can be applied to downstream applications, a clear understanding is needed of what exactly the gold standard measures. In particular, it must clearly distinguish between dissociable semantic relations such as association and similarity.

Consistent and reliable. Untrained native speakers must be able to quantify the target property consistently, without requiring lengthy or detailed instructions. This ensures that the data reflect a meaningful cognitive or semantic phenomenon, and also enables the data set to be scaled up or transferred to other languages at minimal cost and effort.

We begin our review of existing evaluation with the gold standard most commonly applied in current NLP research.

WordSim-353. WS-353 (Finkelstein et al. 2001) is perhaps the most commonly used evaluation gold standard for semantic models. Despite its name, and the fact that it is often referred to as a “similarity gold standard” (see, e.g., Huang et al. 2012 and Bansal, Gimpel, and Livescu 2014), in fact the instructions given to annotators when producing WS-353 were ambiguous with respect to similarity and association. Subjects were asked to:

Assign a numerical similarity score between 0 and 10 (0 = words totally unrelated, 10 = words VERY closely related) ... when estimating similarity of antonyms, consider them “similar” (i.e., belonging to the same domain or representing features of the same concept), not “dissimilar”.

As we confirm analytically in Section 5.2, these instructions result in pairs being rated according to association rather than similarity, a fact also noted by the data set authors (see www.cs.technion.ac.il/~gabr/resources/data/wordsim353/). WS-353 consequently suffers two important limitations as an evaluation of similarity (which also afflict other resources to a greater or lesser degree):

1. Many dissimilar word pairs receive a high rating.
2. No associated but dissimilar concepts receive low ratings.

As noted in the Introduction, an arguably more serious third limitation of WS-353 is low inter-annotator agreement, and the fact that state-of-the-art models such as those of Collobert and Weston (2008) and Huang et al. (2012) reach, or even surpass, the inter-annotator agreement ceiling in estimating the WS-353 scores. Huang et al. report a Spearman correlation of ρ = 0.713 between their model output and WS-353.
This is 10 percentage points higher than inter-annotator agreement (ρ = 0.611) when defined as the average pairwise correlation between two annotators, as is common in NLP work (Padó, Padó, and Erk 2007; Reisinger and Mooney 2010a; Silberer and Lapata 2014). It could be argued that a different comparison is more appropriate: Because the model is compared to the gold-standard average across all annotators, we should compare a single annotator with the (almost) gold-standard average over all other annotators. Based on this metric, the average performance of an annotator on WS-353 is ρ = 0.756, which is still only marginally better than the best automatic method (individual annotator responses for WS-353 were downloaded from www.cs.technion.ac.il/~gabr/resources/data/wordsim353). Thus, at least according to the established wisdom in NLP evaluation (Yong and Foo 1999; Cunningham 2005; Resnik and Lin 2010), the strength of the conclusions that can be inferred from improvements on WS-353 is limited. At the same time, however, state-of-the-art distributional models are clearly not perfect representation-learning or even similarity estimation engines, as evidenced by the fact they cannot yet be applied, for instance, to generate flawless lexical resources (Alfonseca and Manandhar 2002).

WS-Sim. WS-Sim is the set of pairs in WS-353 identified by Agirre et al. (2009) as either containing similar or unrelated (neither similar nor associated) concepts. The ratings in WS-Sim are mapped directly from WS-353, so that all concept pairs in WS-Sim that receive a high rating are associated and all pairs that receive a low rating are unassociated. Consequently, any model that simply reflects association would score highly on WS-Sim, irrespective of how well it captures similarity. Such a possibility could be excluded by requiring models to perform well on WS-Sim and poorly on WS-Rel, the subset of WS-353 identified by Agirre et al. (2009) as containing no pairs of similar concepts. However, although this would exclude models of pure association, it would not test the ability of models to quantify the similarity of the pairs in WS-Sim. Put another way, the WS-Sim/WS-Rel partition could in theory resolve limitation (1) of WS-353 but it would not resolve limitation (2): Models are not tested on their ability to attribute low scores to associated but dissimilar pairs. In fact, there are more fundamental limitations of WS-Sim as a similarity-based evaluation resource. It does not, strictly speaking, reflect similarity at all, since the ratings of its constituent pairs were assigned by the WS-353 annotators, who were asked to estimate association, not similarity. Moreover, it inherits the limitation of low inter-annotator agreement from WS-353. The average pairwise correlation between annotators on WS-Sim is ρ = 0.667, and the average correlation of a single annotator with the gold standard is only ρ = 0.651, both below the performance of automatic methods (Agirre et al. 2009). Finally, the small size of WS-Sim renders it poorly representative of the full range of concepts that semantic models may be required to learn.

Rubenstein & Goodenough. Prior to WS-353, the smaller RG data set, consisting of 65 pairs, was often used to evaluate semantic models. The 15 raters employed in the data collection were asked to rate the “similarity of meaning” of each concept pair. Thus RG does appear to reflect similarity rather than association.
However, although limitation (1) of WS-353 is therefore avoided, RG still suffers from limitation (2): by inspection, it is clear that the low similarity pairs in RG are not associated. A further limitation is that distributional models now achieve better performance on RG (correlations of up to Pearson r = 0.86 [Hassan and Mihalcea 2011]) than the reported inter-annotator agreement of r = 0.85 (Rubenstein and Goodenough 1965). Finally, the size of RG renders it an even less comprehensive evaluation than WS-Sim.

The MEN Test Collection. A larger data set, MEN (Bruni et al. 2012a), is used in a handful of recent studies (Bruni et al. 2012b; Bernardi et al. 2013). As with WS-353, both terms similarity and relatedness are used by the authors when describing MEN, although the annotators were expressly asked to rate pairs according to relatedness (see http://clic.cimec.unitn.it/~elia.bruni/MEN.html). The construction of MEN differed from RG and WS-353 in that each pair was only considered by one rater, who ranked it for relatedness relative to 50 other pairs in the data set. An overall score out of 50 was then attributed to each pair corresponding to how many times it was ranked as more related than an alternative. However, because these rankings are based on relatedness, with respect to evaluating similarity MEN necessarily suffers from both of the limitations (1) and (2) that apply to WS-353. Further, there is a strong bias towards concrete concepts in MEN because the concepts were originally selected from those identified in an image-bank (Bruni et al. 2012a).

Synonym Detection Sets. Multiple-choice synonym detection tasks, such as the TOEFL test questions (Landauer and Dumais 1997), are an alternative means of evaluating distributional models. A question in the TOEFL task consists of a cue word and four possible answer words, only one of which is a true synonym. Models are scored on the number of true synonyms identified out of 80 questions. The questions were designed by linguists to evaluate synonymy, so, unlike the evaluations considered thus far, TOEFL-style tests effectively discriminate between similarity and association. However, because they require a zero-one classification of pairs as synonymous or not, they do not test how well models discern pairs of medium or low similarity. More generally, in opposition to the fuzzy, statistical approaches to meaning predominant in both cognitive psychology (Griffiths, Steyvers, and Tenenbaum 2007) and NLP (Turney and Pantel 2010), they do not require similarity to be measured on a continuous scale.

3 the simlex-999 data set :
Having considered the limitations of existing gold standards, in this section we describe the design of SimLex-999 in detail.

Separating similarity from association. To create a test of the ability of models to capture similarity as opposed to association, we started with the ≈72,000 pairs of concepts in the USF data set. As the output of a free-association experiment, each of these pairs is associated to a greater or lesser extent. Importantly, inspecting the pairs revealed that a good range of similarity values are represented. In particular, there were many examples of hypernym/hyponym pairs [body, abdomen], cohyponym pairs [cat, dog], synonyms or near synonyms [deodorant, antiperspirant], and antonym pairs [good, evil].
From this cohort, we excluded pairs containing a multiple-word item [hot dog, mustard] and pairs containing a capital letter [Mexico, sun]. We ultimately sampled 900 of the SimLex-999 pairs from the resulting cohort of pairs, according to the stratification procedures outlined in the following sections. To complement this cohort with entirely unassociated pairs, we paired up the concepts from the 900 associated pairs at random. From these random pairings, we excluded those that coincidentally occurred elsewhere in USF (and therefore had a degree of association). From the remaining pairs, we accepted only those in which both concepts had been subject to the USF norming procedure, ensuring that these non-USF pairs were indeed unassociated rather than simply not normed. We sampled the remaining 99 SimLex-999 pairs from this resulting cohort of unassociated pairs.

POS category. In light of the conceptual differences outlined in Section 2.2, SimLex-999 includes subsets of pairs from the three principal meaning-bearing POS categories: nouns, verbs, and adjectives. To classify potential pairs according to POS, we counted the frequency with which the items in each pair occurred with the three possible tags in the POS-tagged British National Corpus (Leech, Garside, and Bryant 1994). To minimize POS ambiguity, which could lead to inconsistent ratings, we excluded pairs containing a concept with lower than 75% tendency towards one particular POS. This yielded three sets of potential pairs: [A,A] pairs (of two concepts whose majority tag was Adjective), [N,N] pairs, and [V,V] pairs. Given the likelihood that different cognitive operations are used in estimating the similarity between items of different POS category (Section 2.2), concept pairs were presented to raters in batches defined according to POS. Unlike both WS-353 and MEN, pairs of concepts of mixed POS ([white, rabbit], [run, marathon]) were excluded. POS categories are generally considered to reflect very broad ontological classes (Fellbaum 1998). We thus felt it would be very difficult, or even counter-intuitive, for annotators to quantify the similarity of mixed POS pairs according to our instructions.
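The 75% POS filter can be stated compactly in code. A minimal sketch follows, assuming per-word tag frequencies from the tagged BNC have already been counted; majority_pos and its toy counts are hypothetical illustrations, not the authors' code.

```python
from collections import Counter

def majority_pos(tag_counts, threshold=0.75):
    """Dominant POS of a word, or None if no single tag reaches the threshold.

    tag_counts: Counter mapping coarse tags ('A', 'N', 'V') to corpus frequencies.
    """
    if not tag_counts:
        return None
    tag, count = tag_counts.most_common(1)[0]
    return tag if count / sum(tag_counts.values()) >= threshold else None

# Toy usage: an 80/20 verb/noun split passes the bar; a 60/40 split does not.
print(majority_pos(Counter({"V": 80, "N": 20})))  # -> 'V'
print(majority_pos(Counter({"V": 60, "N": 40})))  # -> None (pair excluded)
```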
Concreteness. Although a clear majority of pairs in gold standards such as MEN and RG contain concrete items, perhaps surprisingly, the vast majority of adjective, noun, and verb concepts in everyday language are in fact abstract (Hill, Reichart, and Korhonen 2014; Kiela et al. 2014). (According to the USF concreteness ratings, 72% of noun or verb types in the British National Corpus are more abstract than the concept war, a concept many would already consider quite abstract.) To facilitate the evaluation of models for both concrete and abstract concept meaning, and in light of the cognitive and computational modeling differences between abstract and concrete concepts noted in Section 2.2, we aimed to include both concept types in SimLex-999. Unlike the POS distinction, concreteness is generally considered to be a gradual phenomenon. One benefit of sampling pairs for SimLex-999 from the USF data set is that most items have been rated according to concreteness on a scale of 1–7 by at least 10 human subjects. As Figure 1 demonstrates, concreteness (as the average over these ratings) interacts with POS on these concepts: Nouns are on average more concrete than verbs, which are more concrete than adjectives. However, there is also clear variation in concreteness within each POS category. We therefore aimed to select pairs for SimLex-999 that spanned the full abstract–concrete continuum within each POS category.

[Figure 1: Boxplots showing the interaction between concreteness and POS for concepts in USF. The white boxes range from the first to third quartiles and the central vertical line indicates the median.]

After excluding any pairs that contained an item with no concreteness rating, for each potential SimLex-999 pair we considered both the concreteness of the first item and the additive difference in concreteness between the two items. This enabled us to stratify our sampling equally across four classes: (C1) concrete first item (rating > 4) with below-median concreteness difference; (C2) concrete first item (rating > 4), second item of lower concreteness, and a difference greater than the median; (C3) abstract first item (rating ≤ 4) with below-median concreteness difference; and (C4) abstract first item (rating ≤ 4), second item of greater concreteness, and a difference greater than the median.

Final sampling. From the associated (USF) cohort of potential pairs we selected 600 noun pairs, 200 verb pairs, and 100 adjective pairs, and from the unassociated (non-USF) cohort we sampled 66 noun pairs, 22 verb pairs, and 11 adjective pairs. In both cases, the sampling was stratified such that, in each POS subset, each of the four concreteness classes C1–C4 was equally represented.
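To make the stratification concrete, the following toy sketch assigns a candidate pair to one of the classes C1–C4. The function is our reconstruction of the description above, assuming USF concreteness ratings on the 1–7 scale and a precomputed median concreteness difference over the candidate pool.

```python
def concreteness_class(conc1, conc2, median_diff):
    """Assign a candidate pair to class C1-C4, or None if it fits no class.

    conc1, conc2: mean USF concreteness ratings (1-7) of the first/second item.
    median_diff: median concreteness difference across all candidate pairs.
    """
    diff = abs(conc1 - conc2)
    if conc1 > 4:  # concrete first item
        if diff <= median_diff:
            return "C1"
        if conc2 < conc1:
            return "C2"  # second item more abstract, above-median difference
    else:          # abstract first item
        if diff <= median_diff:
            return "C3"
        if conc2 > conc1:
            return "C4"  # second item more concrete, above-median difference
    return None

print(concreteness_class(6.1, 5.8, median_diff=1.0))  # -> 'C1'
print(concreteness_class(2.3, 5.9, median_diff=1.0))  # -> 'C4'
```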
The annotator instructions for SimLex-999 are shown in Figure 2. We did not attempt to formalize the notion of similarity, but rather introduced it via the well-understood idea of synonymy, and in contrast to association. Even if a formal characterization of similarity existed, the evidence in Section 2 suggests that the instructions would need separate cases to cover different concept types, increasing the difficulty of the rating task. Therefore, we preferred to appeal to intuition on similarity, and to verify post hoc that subjects were able to interpret and apply the informal characterization consistently for each concept type. Immediately following the instructions in Figure 2, participants were presented with two “checkpoint” questions, one with abstract examples and one with concrete examples. In each case the participant was required to identify the most similar pair from a set of three options, all of which were associated, but only one of which was clearly similar (e.g., [bread, butter], [bread, toast], [stale, bread]). After this, the participants began rating pairs in groups of six or seven pairs by moving a slider, as shown in Figure 3. This group size was chosen because the (relative) rating of a set of pairs implicitly requires pairwise comparisons between all pairs in that set; larger groups would therefore have significantly increased the cognitive load on the annotators. Another advantage of grouping was the clear break (submitting a set of ratings and moving to the next page) between the tasks of rating adjective, noun, and verb pairs. For better inter-group calibration, from the second group onwards the last pair of the previous group became the first pair of the present group, and participants were asked to re-assign the rating previously attributed to the first pair before rating the remaining new items.

As with MEN, WS-353, and RG, SimLex-999 consists of pairs of concept words together with a numerical rating. Thus, unlike in the small evaluation constructed by Huang et al. (2012), words are not rated in a phrasal or sentential context. Such meaning-in-context evaluations are motivated by a desire to disambiguate words that otherwise might be considered to have multiple senses. We did not attempt to construct an evaluation based on meaning-in-context for several reasons. First, determining the set of senses for a given word, and then the set of contexts that represent those senses, introduces a high degree of subjectivity into the design process. Second, ensuring that a model has learned a high quality representation of a given concept would have required evaluating that concept in each of its given contexts, necessitating many more cases and a far greater annotation effort. Third, in the (infrequent) case that some concept c1 in an evaluation pair (c1, c2) is genuinely (etymologically) polysemous, c2 can provide sufficient context to disambiguate c1. (This is supported by the fact that the WordNet-based methods that perform best at modeling human ratings model the similarity between concepts c1 and c2 as the minimum of all pairwise distances between the senses of c1 and the senses of c2 [Resnik 1995; Pedersen, Patwardhan, and Michelizzi 2004].) Finally, the POS grouping of pairs in the survey can also serve to disambiguate in the case that the conflicting senses of the polysemous concept are of differing POS categories.

Each participant was asked to rate 20 groups of pairs on a 0–6 scale of integers (non-integral ratings were not possible). Checkpoint multiple-choice questions were inserted at points between the 20 groups in order to ensure the participant had retained the correct notion of similarity. In addition to the checkpoint of three noun pairs presented before the first group (which contained noun pairs), checkpoint questions containing adjective pairs were inserted before the first adjective group, and checkpoints of three verb pairs were inserted before the first verb group. From the 999 evaluation pairs, 14 noun pairs, 4 verb pairs, and 2 adjective pairs were selected as a consistency set. The data set of pairs was then partitioned into 10 tranches, each consisting of 119 pairs, of which 20 were from the consistency set and the remaining 99 unique to that tranche. To reduce workload, each annotator was asked to rate the pairs in a single tranche only. The tranche itself was divided into 20 groups, with each group consisting of 7 pairs (with the exception of the last group of the 20, which had 6). Of these seven pairs, the first pair was the last pair from the previous group, and the second pair was taken from the consistency set. The remaining pairs were unique to that particular group and tranche. The design enabled control for possible systematic differences between annotators and tranches, which could be detected by variation on the consistency set. Five hundred residents of the United States were recruited from Amazon Mechanical Turk, each with at least a 95% approval rate for work on the Web service. Each participant was required to check a box confirming that he or she was a native speaker of English, and warned that work would be rejected if the pattern of responses indicated otherwise. The participants were distributed evenly to rate pairs in one of the ten question tranches, so that each pair was rated by approximately 50 subjects.
Participants took between 8 and 21 minutes to rate the 119 pairs across the 20 groups, together with the checkpoint questions. In order to correct for systematic differences in the overall calibration of the rating scale between respondents, we measured the average (mean) response of each rater on the consistency set. For 32 respondents, the absolute difference between this average and the mean of all such averages was greater than 1 (though never greater than 2); that is, 32 respondents demonstrated a clear tendency to rate pairs as either more or less similar than the overall rater population. To correct for this bias, we increased (or decreased) the rating of such respondents for each pair by one, except in cases where they had given the maximum rating, 6 (or minimum rating, 0). This adjustment, which ensured that the average response of each participant was within one of the mean of all respondents on the consistency set, resulted in a small increase to the inter-rater agreement on the data set as a whole. After controlling for systematic calibration differences, we imposed three conditions for the responses of a rater to be included in the final data collation. First, the average pairwise Spearman correlation of responses with all other responses for a participant could not be more than one standard deviation below the mean of all such averages. Second, the increase in inter-rater agreement when a rater was excluded from the analysis needed to be smaller than for at least 50 other raters (i.e., 10% of raters were excluded on this criterion). Third, we excluded the six participants who got one or more of the checkpoint questions wrong. A total of 99 participants were excluded based on one or more of these conditions, but no more than 16 from any one tranche (so that each pair in the final data set was rated by a minimum of 36 raters). Finally, we computed average (mean) scores for each pair, and transformed all scores linearly from the interval [0, 6] to the interval [0, 10].

4 analysis of the data set :
In this section we analyze the responses of the SimLex-999 annotators and the resulting ratings. First, by considering inter-annotator agreement, we examine the consistency with which annotators were able to apply the characterization of similarity outlined in the instructions to the range of concept types in SimLex-999. Second, we verify that a valid notion of similarity was understood by the annotators, in that they were able to accurately separate similarity from association.

As in previous annotation or data collection for computational semantics (Padó, Padó, and Erk 2007; Reisinger and Mooney 2010a; Silberer and Lapata 2014), we computed the inter-rater agreement as the average of pairwise Spearman ρ correlations between the ratings of all respondents. Overall agreement was ρ = 0.67. This compares favorably with the agreement on WS-353 (ρ = 0.61 using the same method). The design of the MEN rating system precludes a conventional calculation of inter-rater agreement (Bruni et al. 2012b). However, two of the creators of MEN who independently rated the data set achieved an agreement of ρ = 0.68 (reported at http://clic.cimec.unitn.it/~elia.bruni/MEN; it is reasonable to assume that actual agreement on MEN may be somewhat lower, given the small sample size and the expertise of the raters). The SimLex-999 inter-rater agreement suggests that participants were able to understand the (single) characterization of similarity presented in the instructions and to apply it to concepts of various types consistently.
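This agreement statistic is simple to reproduce. The sketch below assumes a complete rater-by-pair rating matrix; in the actual collection each rater saw a single tranche, so the computation would run per tranche.

```python
from itertools import combinations

import numpy as np
from scipy.stats import spearmanr

def mean_pairwise_spearman(ratings):
    """Inter-rater agreement: mean pairwise Spearman rho across raters.

    ratings: array of shape (n_raters, n_pairs), one row of ratings per rater.
    """
    rhos = [spearmanr(a, b).correlation for a, b in combinations(ratings, 2)]
    return float(np.mean(rhos))

# Toy usage: three raters, five word pairs rated on the 0-6 scale.
toy = np.array([[0, 2, 5, 6, 1],
                [1, 2, 4, 6, 0],
                [0, 3, 5, 5, 1]])
print(mean_pairwise_spearman(toy))
```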
The conclusion that raters applied this characterization consistently was supported by inspection of the brief feedback offered by the majority of annotators in a final text field of the questionnaire: 78% expressed the sentiment that the test was clear and easy to complete, or something similar. Interestingly, as shown in Figure 4 (left), agreement was not uniform across the concept types. Contrary to what might be expected given established concreteness effects (Paivio 1991), we observed not only higher inter-rater agreement but also less per-pair variability for abstract rather than concrete concepts. (Per-pair variability was measured by calculating the standard deviation of responses for each pair, and averaging these scores across the pairs of each concept type.) Strikingly, the highest inter-rater consistency and lowest per-pair variation (defined as the inverse of the standard deviation of all ratings for that pair) was observed on adjective pairs. Although we are unsure exactly what drives this effect, a possible cause is that many pairs of adjectives in SimLex-999 cohabit a single salient, one-dimensional scale (freezing > cold > warm > hot). This may be a consequence of the fact that many pairs in SimLex-999 were selected (from USF) to have a degree of association. On inspection, pairs of nouns and verbs in SimLex-999 do not appear to occupy scales in the same way, possibly because concepts of these POS categories come to be associated via a more diverse range of relations. It seems plausible that humans are able to estimate the similarity of scale-based concepts more consistently than pairs of concepts related in a less uni-dimensional fashion. Regardless of cause, however, the high agreement on adjectives is a satisfactory property of SimLex-999. Adjectives exhibit various aspects of lexical semantics that have proved challenging for computational models, including antonymy, polarity (Williams and Anand 2009), and sentiment (Wiebe 2000). To approach the high level of human confidence on the adjective pairs in SimLex-999, it may be necessary to focus particularly on developing automatic ways to capture these phenomena.

Inspection of the SimLex-999 ratings indicated that pairs were indeed evaluated according to similarity rather than association. Table 2 includes examples that demonstrate a clear dissociation between the two semantic relations. To verify this effect quantitatively, we recruited 100 additional participants to rate the WS-353 pairs, but following the SimLex-999 instructions and question format. As shown in Figure 5(a), there were clear differences between these new ratings and the original WS-353 ratings. In particular, a high proportion of pairs was given a lower rating by subjects following the SimLex-999 instructions than by those following the WS-353 guidelines: The mean SimLex rating was 4.07, compared with 5.91 for WS-353. This was consistent with our expectations that pairs of associated but dissimilar concepts would receive lower ratings based on the SimLex-999 than on the WS-353 instructions, whereas pairs that were both associated and similar would receive similar ratings in both cases. To confirm this, we compared the WS-353 and SimLex-999-based ratings on the subsets WS-Rel and WS-Sim, which were hand-sorted by Agirre et al.
(2009) to include pairs connected by association (and not similarity) and those connected by similarity (but possibly also association), respectively. As shown in Figure 5(b–c), the correlation between the SimLex-999-based and WS-353 ratings was notably higher (ρ = 0.73) on the WS-Sim subset than on the WS-Rel subset (ρ = 0.38). Specifically, the tendency of subjects following the SimLex-999 instructions to assign lower ratings than those following the WS-353 instructions was far more pronounced for pairs in WS-Rel (Figure 5(c)) than for those in WS-Sim (Figure 5(b)). This observation suggests that the associated but dissimilar pairs in WS-353 were an important driver of the overall lower mean for SimLex-999-based ratings, and thus provides strong evidence that the SimLex-999 instructions do indeed enable subjects to distinguish similarity from association effectively.

We have established the validity of similarity as a notion understood by human raters and distinct from association. However, much theoretical semantics focuses on relations between words or concepts that are finer-grained than similarity and association. These include meronymy (a part to its whole, e.g., blade–knife), hypernymy (a category concept to a member of that category, e.g., animal–dog), and cohyponymy (two members of the same implicit category, e.g., the pair of animals dog–cat) (Cruse 1986). Beyond theoretical interest, these relations can have practical relevance. For instance, hypernymy can form the basis of semantic entailment and therefore textual inference: The proposition a cat is on the table entails that an animal is on the table precisely because of the hypernymy relation from animal to cat. We chose not to make these finer-grained relations the basis of our evaluation for several reasons. At present, detecting relations such as hypernymy using distributional methods is challenging, even when supported by supervised classifiers with access to labeled pairs (Levy et al. 2015). Such a designation can seem to require specific world knowledge (is a snail a reptile?), can be gradual, as evidenced by typicality effects (Rosch, Simpson, and Miller 1976), or can simply be highly subjective. Moreover, a fine-grained relation R will only be attested (to any degree) between a small subset of all possible word pairs, whereas similarity can in theory be quantified for any two words chosen at random. We thus considered a focus on fine-grained semantic relations to be less appropriate for a general-purpose evaluation of representation quality. Nevertheless, post hoc analysis of the SimLex annotator responses and fine-grained relation classes, as defined by lexicographers, yields further interesting insights into the nature of both similarity and association. Of the 999 word pairs in SimLex, 382 are also connected by one of the common finer-grained semantic relations in WordNet. For each of these relations, Figure 6 shows the average similarity rating and average USF free association score for all pairs that exhibit that relation. In cases where a relationship of hypernymy/hyponymy exists between the words in a pair (not necessarily immediate: 1-hypernym, 2-hypernym, etc.), similarity and association coincide. Hyper/hyponym pairs that are separated by fewer levels in the WordNet hierarchy are both more strongly associated and rated as more similar. However, there are also interesting discrepancies between similarity and association.
Unsurprisingly, pairs that are classed as synonyms in WordNet (i.e., pairs having at least one sense in some common synset) are rated as more similar than pairs of any other relation type by SimLex annotators. In contrast, antonyms are the most strongly associated word pairs among these finer-grained relations. Further, pairs consisting of a meronym and a holonym (part and whole) are comparatively strongly associated but not judged to be similar. The analysis also highlights a case that can be particularly problematic when rating similarity: cohyponyms, or members of the same salient category (such as knife and fork). We gave no specific guidelines for how to rate such pairs in the SimLex annotator instructions, and whether they are considered similar or not seems to be a matter of perspective. On one hand, their membership of a common category could make them appear similar, particularly if the category is relatively specific. On the other hand, in the case of knife and fork, for instance, the underlying category cutlery might provide a backdrop against which the differences between distinct members become particularly salient.

5 evaluating models with simlex-999 :In this section, we demonstrate the applicability of SimLex-999 by analyzing the performance of various distributional semantic models in estimating the new ratings. The models were selected to cover the main classes of representation learning architectures (Baroni, Dinu, and Kruszewski 2014): vector space co-occurrence (counting) models and NLMs (Bengio et al. 2003). We first show that SimLex-999 is notably more difficult for state-of-the-art models to estimate than existing gold standards. We then conduct more focused analyses on the various concept subsets defined in SimLex-999, exploring possible causes for the comparatively low performance of current models and, in turn, demonstrating how SimLex-999 can be applied to investigate such questions.

Collobert & Weston. Collobert and Weston (2008) apply the architecture of an NLM to learn a word representation v_w for each word w in some corpus vocabulary V. Each sentence s in the input text is represented by a matrix containing the vector representations of the words in s in order. The model then computes output scores f(s) and f(s^w), where s^w denotes an "incorrect" sentence created from s by replacing its last word with some other word w from V. Training involves updating the parameters of the function f and the entries of the vector representations v_w such that f(s) is larger than f(s^w) for any w in V other than the correct final word of s. This corresponds to minimizing the sum of the following sentence objectives C_s over all sentences in the input corpus, which is achieved via (mini-batch) stochastic gradient descent:

C_s = \sum_{w \in V} \max(0, 1 - f(s) + f(s^w))

The relatively low-dimensional, dense (vector) representations learned by this model and by the other NLMs introduced in this section are sometimes referred to as embeddings (Turian, Ratinov, and Bengio 2010). Collobert and Weston (2008) train their models on 852 million words of text from a 2007 dump of Wikipedia and the RCV1 Corpus (Lewis et al. 2004) and use their embeddings to achieve state-of-the-art results on a variety of NLP tasks. We downloaded the embeddings directly from the authors' Web page (http://ml.nec-labs.com/senna/).
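The ranking objective C_s can be stated very compactly in code. The sketch below (Python; the scorer f is a placeholder for the NLM's scoring network, which we do not reimplement here) makes explicit that one hinge-loss term is accumulated per corrupting word:

    def sentence_rank_loss(f, s, vocab):
        # C_s = sum over w in V of max(0, 1 - f(s) + f(s^w)), where s^w
        # replaces the last word of sentence s (a list of tokens) with w.
        base = f(s)
        loss = 0.0
        for w in vocab:
            if w == s[-1]:
                continue  # skip the correct final word
            loss += max(0.0, 1.0 - base + f(s[:-1] + [w]))
        return loss

    # Toy usage with an arbitrary stand-in scorer:
    score = lambda sent: float(len(set(sent)))
    print(sentence_rank_loss(score, ["the", "cat", "sat"], ["sat", "dog", "mat"]))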
Huang et al. Huang et al. (2012) present an NLM that learns word embeddings to maximize the likelihood of predicting the last word in a sentence s based on (i) the previous words in that sentence (local context, as with Collobert and Weston [2008]) and (ii) the document d in which that word occurs (global context). As with Collobert and Weston (2008), the model represents input sentences as a matrix of word embeddings. In addition, it represents documents in the input corpus as single-vector averages over all word embeddings in that document. It can then compute scores g(s, d) and g(s^w, d), where, as before, s^w is a sentence with an "incorrect" randomly selected last word. Training is again by stochastic gradient descent, and corresponds to minimizing the sum of the sentence objectives C_{s,d} over all of the sentences in the corpus:

C_{s,d} = \sum_{w \in V} \max(0, 1 - g(s, d) + g(s^w, d))

The combination of local and global contexts in the objective encourages the final word embeddings to reflect aspects both of the meaning of nearby words and of the documents in which those words appear. When learning from 990M words of Wikipedia text, Huang et al. report a Spearman correlation of ρ = 0.713 between the cosine similarity of their model embeddings and the WS-353 scores, which constitutes state-of-the-art performance for an NLM on that data set. We downloaded these embeddings from the authors' Web page (www.socher.org/index.php/Main/ImprovingWordRepresentationsViaGlobalContextAndMultipleWordPrototypes).

Mikolov et al. Mikolov et al. (2013a) present an architecture that learns word embeddings similar to those of standard NLMs but with no nonlinear hidden layer (resulting in a simpler scoring function). This enables faster representation learning for large vocabularies. Despite this simplification, the embeddings achieve state-of-the-art performance on several semantic tasks, including sentence completion and analogy modeling (Mikolov et al. 2013a, 2013b). For each word type w in the vocabulary V, the model learns both a "target-embedding" r_w ∈ R^d and a "context-embedding" r̂_w ∈ R^d such that, given a target word, its ability to predict nearby context words is maximized. The probability of seeing context word c given target w is defined as:

p(c|w) = \frac{e^{\hat{r}_c \cdot r_w}}{\sum_{v \in V} e^{\hat{r}_v \cdot r_w}}

The model learns from a set of (target-word, context-word) pairs, extracted from a corpus of sentences as follows. In a given sentence s (of length N), for each position n ≤ N, each word w_n is treated in turn as a target word. An integer t(n) is then sampled from a uniform distribution on {1, ..., k}, where k > 0 is a predefined maximum context-window parameter. The pair tokens {(w_n, w_{n+j}) : −t(n) ≤ j ≤ t(n), j ≠ 0, w_{n+j} ∈ s} are then appended to the training data. Thus, target/context training pairs are such that (i) only words within a k-window of the target are selected as context words for that target, and (ii) words closer to the target are more likely to be selected than those further away. The training objective is then to maximize the log probability T, defined here, across all such examples from s, and then across all sentences in the corpus. This is achieved by stochastic gradient descent.

T = \frac{1}{N} \sum_{n=1}^{N} \sum_{-t(n) \le j \le t(n),\, j \ne 0} \log p(w_{n+j} \mid w_n)

As with other NLMs, Mikolov et al.'s model captures conceptual semantics by exploiting the fact that words appearing in similar linguistic contexts are likely to have similar meanings. Informally, the model adjusts its embeddings to increase the probability of observing the training corpus.
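The two mechanisms just described — the sampled context window t(n) and the softmax p(c|w) over dot products — can be sketched directly (Python with NumPy; the matrix and function names are ours, and this illustrates the definitions above rather than the word2vec implementation itself):

    import random
    import numpy as np

    def training_pairs(sentence, k):
        # For each position n, sample t(n) uniformly from {1, ..., k}; words
        # within t(n) positions become contexts, so nearer words are selected
        # more often across the corpus.
        pairs = []
        for n, target in enumerate(sentence):
            t = random.randint(1, k)
            for j in range(-t, t + 1):
                if j != 0 and 0 <= n + j < len(sentence):
                    pairs.append((target, sentence[n + j]))
        return pairs

    def p_context_given_target(R, R_hat, w, c):
        # p(c|w): softmax over dot products of the target-embedding of w with
        # every context-embedding (rows of R and R_hat index the vocabulary).
        logits = R_hat @ R[w]
        e = np.exp(logits - logits.max())  # numerically stabilized softmax
        return e[c] / e.sum()

    print(training_pairs("the cat sat on the mat".split(), k=2))
    R, R_hat = np.random.randn(6, 4), np.random.randn(6, 4)
    print(p_context_given_target(R, R_hat, w=0, c=3))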
Because this probability increases with p(c|w), and p(c|w) increases with the dot product r̂_c · r_w, the updates have the effect of moving each target-embedding incrementally "closer" to the context-embeddings of its collocates. In the target-embedding space, this results in the embeddings of concept words that regularly occur in similar contexts moving closer together. We use the authors' word2vec software to train their model and use the target-embeddings in our evaluations. We experimented with embeddings of dimension 100, 200, 300, 400, and 500, and found that 200 gave the best performance on both WS-353 and SimLex-999.

Vector Space Model (VSM). As an alternative to the NLMs, we constructed a vector space model following the guidelines for optimal performance outlined by Kiela and Clark (2014). After extracting as features the 2,000 most frequent word tokens in the corpus that are not in a common list of stopwords (taken from the Python Natural Language Toolkit; Bird 2006), we populated a matrix of co-occurrence counts, with a row for each of the concepts in some pair in our evaluation sets and a column for each of the features. Co-occurrence was counted within a specified window size, although never across a sentence boundary. The resulting matrix was then weighted according to Pointwise Mutual Information (PMI) (Recchia and Jones 2009). The rows of the weighted matrix constitute the vector representations of the concepts.

SVD. As proposed initially in Landauer and Dumais (1997), we also experimented with models in which SVD (Golub and Reinsch 1970) is applied to the PMI-weighted VSM matrix, reducing the dimension of each concept representation to 300 (which yielded the best results after experimenting, as before, with vectors of 100–500 dimensions). For each model described in this section, we calculate similarity as the cosine similarity between the (vector) representations learned by that model. In experimenting with different models on SimLex-999, we aimed to answer the following questions: (i) How well do the established models perform on SimLex-999 versus on existing gold standards? (ii) Are any observed differences caused by the potential of different models to measure similarity vs. association? (iii) Are there interesting differences in the ability of models to capture similarity between adjectives vs. nouns vs. verbs? (iv) If so, are the observed differences driven by concreteness and its interaction with POS, or are other factors also relevant?

Overall Performance on SimLex-999. Figure 7 shows the performance of the NLMs on SimLex-999 versus on comparable data sets, measured by Spearman's ρ correlation. All models estimate the ratings of MEN and WS-353 more accurately than those of SimLex-999. The Huang et al. (2012) model performs well on WS-353 (this score, based on embeddings downloaded from the authors' Web page, is notably lower than the score reported in Huang et al. [2012] and mentioned in Section 5.1), but it is not very robust to changes in evaluation gold standard, and it performs worst of all the models on SimLex-999. Given the focus of the WS-353 ratings, it is tempting to explain this by concluding that the global-context objective leads the Huang et al. (2012) model to focus on association rather than similarity. However, the true explanation may be less simple, since the Huang et al. (2012) model also performs weakly on the association-based MEN data set.
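Before comparing the NLMs with the count-based models, here is a minimal sketch of the VSM pipeline described above (Python with NumPy; corpus handling and the 2,000-feature stopword-filtered vocabulary are elided, and we use the common positive-PMI variant of the weighting — the paper itself reports only that PMI weighting was used):

    import numpy as np

    def build_vsm(sentences, targets, features, window=4):
        # Count co-occurrences within `window` tokens, never across a sentence
        # boundary, then weight the counts by (positive) PMI.
        t_idx = {w: i for i, w in enumerate(targets)}
        f_idx = {w: j for j, w in enumerate(features)}
        M = np.zeros((len(targets), len(features)))
        for sent in sentences:
            for i, w in enumerate(sent):
                if w not in t_idx:
                    continue
                lo, hi = max(0, i - window), min(len(sent), i + window + 1)
                for c in sent[lo:i] + sent[i + 1:hi]:
                    if c in f_idx:
                        M[t_idx[w], f_idx[c]] += 1
        total = M.sum()
        pw = M.sum(axis=1, keepdims=True) / total
        pc = M.sum(axis=0, keepdims=True) / total
        with np.errstate(divide="ignore", invalid="ignore"):
            pmi = np.log((M / total) / (pw * pc))
        ppmi = np.where(np.isfinite(pmi) & (pmi > 0), pmi, 0.0)
        # SVD variant: U, S, _ = np.linalg.svd(ppmi, full_matrices=False),
        # keeping, e.g., U[:, :300] * S[:300] as the reduced representations.
        return ppmi

    sents = [["coffee", "cup", "table"], ["car", "train", "wheel"]]
    print(build_vsm(sents, targets=["coffee", "car"],
                    features=["cup", "train", "wheel", "table"]))

Whatever the source of the representations — this pipeline or the NLM embeddings — all models are then compared under the same cosine-similarity evaluation.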
The Collobert and Weston (2008) model is more robust across WS-353 and MEN, but still does not match the performance of the Mikolov et al. (2013a) model on SimLex-999. Figure 8 compares the best performing NLM (Mikolov et al. 2013a) with the VSM and SVD models (we conduct this comparison on the smaller RCV1 Corpus [Lewis et al. 2004] because training the VSM and SVD models is comparatively slow). In contrast to recent results that emphasize the superiority of NLMs over alternatives (Baroni, Dinu, and Kruszewski 2014), we observed no clear advantage for the NLM over the VSM or SVD when considering the association-based gold standards WS-353 and MEN together. Although the NLM is the strongest performer on WS-353, SVD is the strongest performer on MEN. However, the NLM performs notably better than the alternatives at modeling similarity, as measured by SimLex-999. Comparing all models in Figures 7 and 8 suggests that SimLex-999 is notably more challenging to model than the alternative data sets, with correlation scores ranging from 0.098 to 0.414. Thus, even when state-of-the-art models are trained for several days on massive text corpora (training times are reported by Huang et al. [2012] and, for Collobert and Weston [2008], at http://ronan.collobert.com/senna/), their performance on SimLex-999 is well below the inter-annotator agreement (Figure 7). This suggests that there is ample scope for SimLex-999 to guide the development of improved models.

Modeling Similarity vs. Association. The comparatively low performance of the NLM, VSM, and SVD models on SimLex-999 compared with MEN and WS-353 is consistent with our hypothesis that modeling similarity is more difficult than modeling association. Indeed, given that many strongly associated but dissimilar pairs, such as [coffee, cup], are likely to co-occur frequently in the training data, and that all models infer connections between concepts from linguistic co-occurrence in some form or another, it seems plausible that models overestimate the similarity of such pairs because they are "distracted" by association. To test this hypothesis more precisely, we compared the performance of models on the whole of SimLex-999 versus on its 333 most associated pairs (according to the USF free association scores). Importantly, pairs in this strongly associated subset still span the full range of possible similarity scores (minimum similarity = 0.23 [shrink, grow]; maximum similarity = 9.80 [vanish, disappear]). As shown in Figure 9, all models performed worse when the evaluation was restricted to pairs of strongly associated concepts, consistent with our hypothesis. The Collobert and Weston (2008) model was better than the Huang et al. (2012) model at estimating similarity in the face of high association. This is not entirely surprising given the global-context objective of the latter model, which may encourage more association-based connections between concepts. The Mikolov et al. model, however, performed notably better than both other NLMs. Moreover, this superiority is proportionally greater when evaluating on the most associated pairs only (as indicated by the difference between the red and gray bars), suggesting that the improvement is driven at least in part by an increased ability to "distinguish" similarity from association. To understand better how model architecture captures information pertinent to similarity modeling, we performed two additional experiments using SimLex-999.
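The subset comparisons used throughout this section reduce to a single evaluation routine. A minimal sketch (Python with SciPy; the names are ours and the toy data merely illustrate the interface) computes Spearman's ρ between model cosines and gold ratings for an arbitrary set of pairs, so it can be run once on all of SimLex-999 and once on the 333 most associated pairs:

    import numpy as np
    from scipy.stats import spearmanr

    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    def rho_on_pairs(vectors, pairs, gold):
        # Spearman correlation between model similarities and gold ratings,
        # restricted to the given pairs.
        sims = [cosine(vectors[a], vectors[b]) for a, b in pairs]
        return spearmanr(sims, [gold[p] for p in pairs]).correlation

    vecs = {w: np.random.randn(5)
            for w in ["shrink", "grow", "vanish", "disappear", "coffee", "cup"]}
    pairs = [("shrink", "grow"), ("vanish", "disappear"), ("coffee", "cup")]
    gold = {pairs[0]: 0.23, pairs[1]: 9.80, pairs[2]: 3.1}  # toy ratings
    print(rho_on_pairs(vecs, pairs, gold))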
These comparisons were also motivated by the hypotheses, made in previous studies and outlined in Section 2.1.2, that both dependency-informed input and smaller context windows encourage models to capture similarity rather than association. We tested the first hypothesis using the embeddings of Levy and Goldberg (2014), whose model extends that of Mikolov et al. (2013a) so that target-context training instances are extracted from dependency-parsed rather than simple running text. As illustrated in Figure 9, the dependency-based embeddings outperform the original (running-text) embeddings trained on the same corpus. Moreover, the comparatively large increase in the red bar relative to the gray bar suggests that an important part of the improvement of the dependency-based model derives from a greater ability to discern similarity from association. Our comparisons provided less support for the second (window size) hypothesis. As shown in Figure 10, there is a negligible improvement in the performance of the Mikolov et al. model when the window size is reduced from 10 to 2. For the SVD model, however, we observed the converse: The SVD model with window size 10 slightly outperforms the SVD model with window size 2, and this difference is quite pronounced on the most associated pairs in SimLex-999.

Learning Concepts of Different POS. Given the theoretical likelihood of variation in model performance across POS categories noted in Section 2.2, we evaluated the Mikolov et al. (2013a), VSM, and SVD models on the subsets of SimLex-999 containing adjective, noun, and verb concept pairs. The analyses yield two notable conclusions, as shown in Figure 11. First, perhaps contrary to intuition, all models estimate the similarity of adjectives better than that of the other concept categories. This aligns with the (also unexpected) observation that humans rate the similarity of adjectives more consistently and with more agreement than that of other parts of speech (see the dashed lines). However, the parallels between human raters and the models do not extend to verbs and nouns: Verb similarity is rated more consistently than noun similarity by humans, but models estimate these ratings more accurately for nouns than for verbs. To better understand the linguistic information exploited by models when acquiring concepts of different POS, we also computed performance on the POS subsets of SimLex-999 of the dependency-based model of Levy and Goldberg (2014) and of the standard skip-gram model of Mikolov et al. (2013a), in which linguistic contexts are encoded as simple bags-of-words (BOW), trained on the same Wikipedia text. As shown in Figure 12, dependency-aware contexts yield the largest improvements for capturing verb similarity. This aligns with the cognitive theory of verbs as relational concepts (Markman and Wisniewski 1997), whose meanings rely on their interaction with (or dependency on) other words or concepts. It is also consistent with research on the automatic acquisition of verb semantics, in which syntactic features have proven particularly important (Sun, Korhonen, and Krymolowski 2008). Although a deeper exploration of these effects is beyond the scope of this work, this preliminary analysis again highlights how the word classes integrated into SimLex-999 are pertinent to a range of questions concerning lexical semantics.
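To make concrete what "dependency-informed input" means in the Levy and Goldberg (2014) comparisons above, the following sketch (Python; the triple format and the relation-labeling scheme are our simplification of their approach) extracts syntactic rather than linear (target, context) pairs:

    def dependency_contexts(parsed):
        # parsed: list of (word, head_index, relation) triples for one
        # sentence, with head_index == -1 marking the root. Each word is
        # paired with its syntactic neighbors, typed by the relation.
        pairs = []
        for word, head, rel in parsed:
            if head < 0:
                continue
            head_word = parsed[head][0]
            pairs.append((head_word, rel + "_" + word))      # head sees dependent
            pairs.append((word, rel + "-inv_" + head_word))  # dependent sees head
        return pairs

    # "australian scientist discovers star", with "discovers" as the root:
    sent = [("australian", 1, "amod"), ("scientist", 2, "nsubj"),
            ("discovers", -1, "root"), ("star", 2, "dobj")]
    print(dependency_contexts(sent))

Because a verb's contexts are then its arguments and modifiers wherever they occur in the sentence, rather than whatever happens to be linearly adjacent, this construction plausibly explains the particularly large gains on verb pairs.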
Learning Concrete and Abstract Concepts. Given the strong interdependence between POS and conceptual concreteness (Figure 1), we aimed to explore whether the variation in model performance on different POS categories was in fact driven by an underlying effect of concreteness. To do so, we ranked each pair in the SimLex-999 data set according to the sum of the concreteness of its two words, and compared the performance of models on the most concrete and least concrete quartiles of this ranking (Figure 13). Interestingly, the performance of models on the most abstract and most concrete pairs suggests that the distinction characterized by concreteness is at least partially independent of POS. Specifically, although the Mikolov et al. model was the highest performer on all POS categories, its performance was worse than that of both the simple VSM and the SVD model (of window size 10) on the most concrete concept pairs. This finding supports the growing evidence for systematic differences in representation and/or similarity operations between abstract and concrete concepts (Hill, Kiela, and Korhonen 2013), and suggests that at least part of these concreteness effects is independent of POS. In particular, it appears that models built from underlying vectors of co-occurrence counts, such as VSMs and SVD, are better equipped to capture the semantics of concrete entities, whereas the embeddings learned by NLMs can better capture abstract semantics.

6 conclusion :Although the ultimate test of semantic models should be their utility in downstream applications, the research community can undoubtedly benefit from ways to evaluate the general quality of the representations learned by such models, prior to their integration in any particular system. We have presented SimLex-999, a gold standard resource for the evaluation of semantic representations, containing similarity ratings of word pairs of different POS categories and concreteness levels. The development of SimLex-999 was principally motivated by two factors. First, as we demonstrated, several existing gold standards measure the ability of models to capture association rather than similarity, and others do not adequately test their ability to discriminate similarity from association. This is despite the many potential applications for accurate similarity-focused representation learning models. Analysis of the ratings of the 500 SimLex-999 annotators showed that subjects can consistently quantify similarity, as distinct from association, and apply it to various concept types, based on minimal intuitive instructions. Second, as we showed, state-of-the-art models trained solely on running-text corpora have now reached or surpassed the human agreement ceiling on WordSim-353 and MEN, the most popular existing gold standards, as well as on RG and WS-Sim. These evaluations may therefore have limited use in guiding or moderating future improvements to distributional semantic models. Nevertheless, there is clearly still room for improvement in the use of distributional models in functional applications. We therefore consider the comparatively low performance of state-of-the-art models on SimLex-999 to be one of its principal strengths. There is clear room under the inter-rater agreement ceiling to guide the development of the next generation of distributional models.
We conducted a brief exploration of how models might improve on this performance, and verified the hypothesis that models trained on dependency-based input capture similarity more effectively than those trained on running-text input. The evidence that smaller context windows are also beneficial for similarity models was mixed, however. Indeed, we showed that the optimal window size depends both on the general model architecture and on the part-of-speech and concreteness of the target concepts. Our analysis of these hypotheses illustrates how the design of SimLex-999—covering a principled set of concept categories and including meta-information on concreteness and free-association strength—enables fine-grained analyses of the performance and parameterization of semantic models. However, these experiments only scratch the surface of the possible analyses. We hope that researchers will adopt the resource as a robust means of answering a diverse range of questions pertinent to similarity modeling, distributional semantics, and representation learning in general. In particular, for models to learn high-quality representations for all linguistic concepts, we believe that future work must uncover ways to explicitly or implicitly infer "deeper," more general conceptual properties such as intentionality, polarity, subjectivity, or concreteness (Gershman and Dyer 2014). However, although improving corpus-based models in this direction is certainly realistic, models that learn exclusively via the linguistic modality may never reach human-level performance on evaluations such as SimLex-999. This is because much conceptual knowledge, and particularly that which underlies similarity computations for concrete concepts, appears to be grounded in the perceptual modalities as much as in language (Barsalou et al. 2003). Whatever the means by which the improvements are achieved, accurate concept-level representation is likely to constitute a necessary first step towards learning informative, language-neutral phrasal and sentential representations. Such representations would be hugely valuable for fundamental NLP applications such as language understanding tools and machine translation. Distributional semantics aims to infer the meaning of words based on the company they keep (Firth 1957). However, although words that occur together in text often have associated meanings, these meanings may be very similar or indeed very different. Thus, possibly excepting the population of Argentina, most people would agree that, strictly speaking, Maradona is not synonymous with football (despite their high rating of 8.62 in WordSim-353). The challenge for the next generation of distributional models may therefore be to infer what is useful from the co-occurrence signal and to overlook what is not. Perhaps only then will models capture most, or even all, of what humans know when they know how to use a language.

abstract :We present SimLex-999, a gold standard resource for evaluating distributional semantic models that improves on existing resources in several important ways. First, in contrast to gold standards such as WordSim-353 and MEN, it explicitly quantifies similarity rather than association or relatedness, so that pairs of entities that are associated but not actually similar (Freud, psychology) have a low rating. We show that, via this focus on similarity, SimLex-999 incentivizes the development of models with a different, and arguably wider, range of applications than those which reflect conceptual association. Second, SimLex-999 contains a range of concrete and abstract adjective, noun, and verb pairs, together with an independent rating of concreteness and (free) association strength for each pair.
This diversity enables fine-grained analyses of the performance of models on concepts of different types, and consequently greater insight into how architectures can be improved. Further, unlike existing gold standard evaluations, for which automatic approaches have reached or surpassed the inter-annotator agreement ceiling, state-of-the-art models perform well below this ceiling on SimLex-999. There is therefore plenty of scope for SimLex-999 to quantify future improvements to distributional semantic models, guiding the development of the next generation of representation-learning architectures.

authors :Felix Hill, Roi Reichart, and Anna Korhonen

references :
Agirre, Eneko, Enrique Alfonseca, Keith Hall, Jana Kravalova, Marius Paşca, and Aitor Soroa. 2009. A study on similarity and relatedness using distributional and WordNet-based approaches. In Proceedings.
Alfonseca, Enrique, and Suresh Manandhar. 2002. Extending a lexical ontology by a combination of distributional semantics signatures. In G. Schrieber et al., Knowledge Engineering and Knowledge.
Andrews, Mark, Gabriella Vigliocco, and David Vinson. 2009. Integrating experiential and distributional data to learn semantic representations. Psychological Review, 116(3):463.
Bansal, Mohit, Kevin Gimpel, and Karen Livescu. 2014. Tailoring continuous word representations for dependency parsing. In Proceedings of ACL, Baltimore, MD.
Baroni, Marco, Georgiana Dinu, and Germán Kruszewski. 2014. Don't count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors. In Proceedings of ACL, Baltimore, MD.
Baroni, Marco, and Alessandro Lenci. 2010. Distributional memory: A general framework for corpus-based semantics. Computational Linguistics, 36(4):673–721.
Barsalou, Lawrence W., W. Kyle Simmons, Aron K. Barbey, and Christine D. Wilson. 2003. Grounding conceptual knowledge in modality-specific systems. Trends in Cognitive Sciences, 7(2):84–91.
Wilson.""], ""title"": ""Grounding conceptual knowledge in modality-specific systems"", ""venue"": ""Trends in Cognitive Sciences, 7(2):84\u201391."", ""year"": 2003}, {""authors"": [""Beltagy"", ""Islam"", ""Katrin Erk"", ""Raymond Mooney.""], ""title"": ""Semantic parsing using distributional semantics and probabilistic logic"", ""venue"": ""ACL 2014 Workshop on Semantic Parsing."", ""year"": 2014}, {""authors"": [""Bengio"", ""Yoshua"", ""R\u00e9jean Ducharme"", ""Pascal Vincent"", ""Christian Jauvin.""], ""title"": ""A neural probabilistic language model"", ""venue"": ""The Journal of Machine Learning Research, 3:1137\u20131155."", ""year"": 2003}, {""authors"": [""Bernardi"", ""Raffaella"", ""Georgiana Dinu"", ""Marco Marelli"", ""Marco Baroni.""], ""title"": ""A relatedness benchmark to test the role of determiners in compositional distributional semantics"", ""venue"": ""Proceedings of"", ""year"": 2013}, {""authors"": [""Biemann"", ""Chris.""], ""title"": ""Ontology learning from text: A survey of methods"", ""venue"": ""LDV Forum, 20(2):75\u201393."", ""year"": 2005}, {""authors"": [""Bird"", ""Steven.""], ""title"": ""Nltk: the natural language toolkit"", ""venue"": ""Proceedings of the COLING/ACL on Interactive Presentation sessions, pages 69\u201372, Sydney."", ""year"": 2006}, {""authors"": [""Bruni"", ""Elia"", ""Gemma Boleda"", ""Marco Baroni"", ""Nam-Khanh Tran.""], ""title"": ""Distributional semantics in technicolor"", ""venue"": ""Proceedings of ACL, Jeju Island."", ""year"": 2012}, {""authors"": [""Bruni"", ""Elia"", ""Jasper Uijlings"", ""Marco Baroni"", ""Nicu Sebe.""], ""title"": ""Distributional semantics with eyes: Using image analysis to improve computational representations of word meaning"", ""venue"": ""Proceedings of the 20th"", ""year"": 2012}, {""authors"": [""Budanitsky"", ""Alexander"", ""Graeme Hirst.""], ""title"": ""Evaluating Wordnet-based measures of lexical semantic relatedness"", ""venue"": ""Computational Linguistics, 32(1):13\u201347."", ""year"": 2006}, {""authors"": [""Cimiano"", ""Philipp"", ""Andreas Hotho"", ""Steffen Staab.""], ""title"": ""Learning concept hierarchies from text corpora using formal concept analysis"", ""venue"": ""J. Artif. Intell. Res. (JAIR), 24:305\u2013339."", ""year"": 2005}, {""authors"": [""R. Collobert"", ""J. Weston.""], ""title"": ""A unified architecture for natural language processing: Deep neural networks with multitask learning"", ""venue"": ""International Conference on Machine Learning, ICML, Helsinki."", ""year"": 2008}, {""authors"": [""Cruse"", ""D. Alan.""], ""title"": ""Lexical semantics"", ""venue"": ""Cambridge University Press."", ""year"": 1986}, {""authors"": [""Cunningham"", ""Hamish.""], ""title"": ""Information extraction, automatic"", ""venue"": ""Encyclopedia of language and linguistics, pages 665\u2013677."", ""year"": 2005}, {""authors"": [""Fellbaum"", ""Christiane""], ""title"": ""WordNet"", ""venue"": ""Wiley Online Library."", ""year"": 1998}, {""authors"": [""Finkelstein"", ""Lev"", ""Evgeniy Gabrilovich"", ""Yossi Matias"", ""Ehud Rivlin"", ""Zach Solan"", ""Gadi Wolfman"", ""Eytan Ruppin.""], ""title"": ""Placing search in context: The concept revisited"", ""venue"": ""Proceedings of the"", ""year"": 2001}, {""authors"": [""J.R. 
Firth""], ""title"": ""Papers in Linguistics 1934\u20131951"", ""venue"": ""Oxford University Press."", ""year"": 1957}, {""authors"": [""Gentner"", ""Dedre.""], ""title"": ""On relational meaning: The acquisition of verb meaning"", ""venue"": ""Child Development, pages 988\u2013998."", ""year"": 1978}, {""authors"": [""Gentner"", ""Dedre.""], ""title"": ""Why verbs are hard to learn"", ""venue"": ""Action meets word: How Children Learn Verbs, pages 544\u2013564. 692"", ""year"": 2006}, {""authors"": [""Gershman"", ""Anatole"", ""Yulia Tsvetkov"", ""Leonid Boytsov"", ""Eric Nyberg"", ""Chris Dyer.""], ""title"": ""Metaphor detection with cross-lingual model transfer"", ""venue"": ""Proceedings of ACL, Baltimore, MD."", ""year"": 2014}, {""authors"": [""Golub"", ""Gene H."", ""Christian Reinsch.""], ""title"": ""Singular value decomposition and least squares solutions"", ""venue"": ""Numerische Mathematik, 14(5):403\u2013420."", ""year"": 1970}, {""authors"": [""Griffiths"", ""Thomas L."", ""Mark Steyvers"", ""Joshua B. Tenenbaum.""], ""title"": ""Topics in semantic representation"", ""venue"": ""Psychological Review, 114(2):211."", ""year"": 2007}, {""authors"": [""Haghighi"", ""Aria"", ""Percy Liang"", ""Taylor Berg-Kirkpatrick"", ""Dan Klein.""], ""title"": ""Learning bilingual lexicons from monolingual corpora"", ""venue"": ""Proceedings of ACL 2008, Columbus, OH."", ""year"": 2008}, {""authors"": [""Hassan"", ""Samer"", ""Rada Mihalcea.""], ""title"": ""Semantic relatedness using salient semantic analysis"", ""venue"": ""AAAI, San Francisco, CA."", ""year"": 2011}, {""authors"": [""Hatzivassiloglou"", ""Vasileios"", ""Judith L. Klavans"", ""Melissa L. Holcombe"", ""Regina Barzilay"", ""Min-Yen Kan"", ""Kathleen McKeown.""], ""title"": ""Simfinder: A flexible clustering tool for summarization"", ""venue"": ""In"", ""year"": 2001}, {""authors"": [""He"", ""Xiaodong"", ""Mei Yang"", ""Jianfeng Gao"", ""Patrick Nguyen"", ""Robert Moore.""], ""title"": ""Indirect-HMM-based hypothesis alignment for combining outputs from machine translation systems"", ""venue"": ""Proceedings"", ""year"": 2008}, {""authors"": [""Hill"", ""Felix"", ""Douwe Kiela"", ""Anna Korhonen.""], ""title"": ""Concreteness and corpora: A theoretical and practical analysis"", ""venue"": ""CMCL 2013, page 75, Sofia."", ""year"": 2013}, {""authors"": [""Hill"", ""Felix"", ""Anna Korhonen"", ""Christian Bentz.""], ""title"": ""A quantitative empirical analysis of the abstract/concrete distinction"", ""venue"": ""Cognitive Science, 38(1):162\u2013177."", ""year"": 2014}, {""authors"": [""Hill"", ""Felix"", ""Roi Reichart"", ""Anna Korhonen.""], ""title"": ""Multi-modal models for concrete and abstract concept meaning"", ""venue"": ""Transactions of the Association for Computational Linguistics (TACL), 2:285\u2013296."", ""year"": 2014}, {""authors"": [""Huang"", ""Eric H."", ""Richard Socher"", ""Christopher D. Manning"", ""Andrew Y. 
Ng.""], ""title"": ""Improving word representations via global context and multiple word prototypes"", ""venue"": ""Proceedings of ACL, pages 873\u2013882,"", ""year"": 2012}, {""authors"": [""Kiela"", ""Douwe"", ""Stephen Clark.""], ""title"": ""A systematic study of semantic vector space model parameters"", ""venue"": ""Proceedings of the 2nd Workshop on Continuous Vector"", ""year"": 2014}, {""authors"": [""Kiela"", ""Douwe"", ""Felix Hill"", ""Anna Korhonen"", ""Stephen Clark.""], ""title"": ""Improving multi-modal representations using image dispersion: Why less is sometimes more"", ""venue"": ""Proceedings of ACL, Baltimore, MD."", ""year"": 2014}, {""authors"": [""Landauer"", ""Thomas K."", ""Susan T. Dumais.""], ""title"": ""A solution to Plato\u2019s problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge"", ""venue"": ""Psychological Review,"", ""year"": 1997}, {""authors"": [""Leech"", ""Geoffrey"", ""Roger Garside"", ""Michael Bryant.""], ""title"": ""Claws4: The tagging of the British National Corpus"", ""venue"": ""Proceedings of COLING, pages 622\u2013628, Kyoto."", ""year"": 1994}, {""authors"": [""Levy"", ""Omer"", ""Yoav Goldberg.""], ""title"": ""Dependency-based word embeddings"", ""venue"": ""Proceedings of ACL, volume 2."", ""year"": 2014}, {""authors"": [""Levy"", ""Omer"", ""Steffen Remus"", ""Chris Biemann"", ""Idol Dagan""], ""title"": ""Do supervised distributional methods really learn lexical inference relations"", ""venue"": ""Proceedings of NAACL,"", ""year"": 2015}, {""authors"": [""Lewis"", ""David D."", ""Yiming Yang"", ""Tony G. Rose"", ""Fan Li.""], ""title"": ""Rcv1: A new benchmark collection for text categorization research"", ""venue"": ""The Journal of Machine Learning Research, 5:361\u2013397."", ""year"": 2004}, {""authors"": [""Li"", ""Changliang"", ""Bo Xu"", ""Gaowei Wu"", ""Xiuying Wang"", ""Wendong Ge"", ""Yan Li.""], ""title"": ""Obtaining better word representations via language transfer"", ""venue"": ""A. Gelbukh, editor, Computational Linguistics and Intelligent"", ""year"": 2014}, {""authors"": [""Li"", ""Mu"", ""Yang Zhang"", ""Muhua Zhu"", ""Ming Zhou.""], ""title"": ""Exploring distributional similarity based models for query spelling correction"", ""venue"": ""Proceedings of ALC, pages 1025\u20131032."", ""year"": 2006}, {""authors"": [""Luong"", ""Minh-Thang"", ""Richard Socher"", ""Christopher D. Manning.""], ""title"": ""Better word representations with recursive neural networks for morphology"", ""venue"": ""CoNLL-2013, page 104, Sofia."", ""year"": 2013}, {""authors"": [""Markman"", ""Arthur B."", ""Edward J. Wisniewski.""], ""title"": ""Similar and different: The differentiation of basic-level categories"", ""venue"": ""Journal of Experimental Psychology: Learning, Memory, and Cognition, 23(1):54."", ""year"": 1997}, {""authors"": [""Marton"", ""Yuval"", ""Chris Callison-Burch"", ""Philip Resnik.""], ""title"": ""Improved statistical machine translation using monolinguallyderived paraphrases"", ""venue"": ""Proceedings of EMNLP, pages 381\u2013390, Edinburgh."", ""year"": 2009}, {""authors"": [""McRae"", ""Ken"", ""Saman Khalkhali"", ""Mary Hare""], ""title"": ""Semantic and associative relations in adolescents and young"", ""year"": 2012}, {""authors"": [""Medelyan"", ""Olena"", ""David Milne"", ""Catherine Legg"", ""Ian H. 
Witten.""], ""title"": ""Mining meaning from Wikipedia"", ""venue"": ""International Journal of Human-Computer Studies, 67(9):716\u2013754."", ""year"": 2009}, {""authors"": [""Mikolov"", ""Tomas"", ""Kai Chen"", ""Greg Corrado"", ""Jeffrey Dean.""], ""title"": ""Efficient estimation of word representations in vector space"", ""venue"": ""Proceedings of International Conference of Learning Representations,"", ""year"": 2013}, {""authors"": [""Mikolov"", ""Tomas"", ""Ilya Sutskever"", ""Kai Chen"", ""Greg S. Corrado"", ""Jeff Dean.""], ""title"": ""Distributed representations of words and phrases and their compositionality"", ""venue"": ""Advances in Neural Information"", ""year"": 2013}, {""authors"": [""Navigli"", ""Roberto.""], ""title"": ""Word sense disambiguation: A survey"", ""venue"": ""ACM Computing Surveys (CSUR), 41(2):10."", ""year"": 2009}, {""authors"": [""Nelson"", ""Douglas L."", ""Cathy L. McEvoy"", ""Thomas A. Schreiber.""], ""title"": ""The University of South Florida free association, rhyme, and word fragment norms"", ""venue"": ""Behavior Research Methods, Instruments, & Computers,"", ""year"": 2004}, {""authors"": [""Pad\u00f3"", ""Sebastian"", ""Ulrike Pad\u00f3"", ""Katrin Erk.""], ""title"": ""Flexible, corpus-based modelling of human plausibility judgements"", ""venue"": ""Proceedings of EMNLP-CoNLL, pages 400\u2013409, Prague."", ""year"": 2007}, {""authors"": [""Paivio"", ""Allan.""], ""title"": ""Dual coding theory: Retrospect and current status"", ""venue"": ""Canadian Journal of Psychology/Revue canadienne de psychologie, 45(3):255."", ""year"": 1991}, {""authors"": [""Pedersen"", ""Ted"", ""Siddharth Patwardhan"", ""Jason Michelizzi.""], ""title"": ""Wordnet:: Similarity: Measuring the relatedness of concepts"", ""venue"": ""Demonstration Papers at HLT-NAACL 2004, pages 38\u201341, New York,"", ""year"": 2004}, {""authors"": [""Phan"", ""Xuan-Hieu"", ""Le-Minh Nguyen"", ""Susumu Horiguchi.""], ""title"": ""Learning to classify short and sparse text & Web with hidden topics from large-scale data collections"", ""venue"": ""Proceedings of the 17th"", ""year"": 2008}, {""authors"": [""Plaut"", ""David C.""], ""title"": ""Semantic and associative priming in a distributed attractor network"", ""venue"": ""Proceedings"", ""year"": 1995}, {""authors"": [""Recchia"", ""Gabriel"", ""Michael N. Jones.""], ""title"": ""More data trumps smarter algorithms: Comparing pointwise mutual information with latent semantic analysis"", ""venue"": ""Behavior Research Methods, 41(3):647\u2013656."", ""year"": 2009}, {""authors"": [""Reisinger"", ""Joseph"", ""Raymond Mooney.""], ""title"": ""A mixture model with sharing for lexical semantics"", ""venue"": ""Proceedings of EMNLP, pages 1173\u20131182, Cambridge, MA."", ""year"": 2010}, {""authors"": [""Reisinger"", ""Joseph"", ""Raymond J. Mooney.""], ""title"": ""Multi-prototype vector-space models of word meaning"", ""venue"": ""Human Language Technologies: The 2010 Annual Conference of the North American Chapter of"", ""year"": 2010}, {""authors"": [""Resnik"", ""Philip.""], ""title"": ""Using information content to evaluate semantic similarity in a taxonomy"", ""venue"": ""Proceedings of IJCAI."", ""year"": 1995}, {""authors"": [""Resnik"", ""Philip"", ""Jimmy Lin.""], ""title"": ""11 evaluations of NLP systems"", ""venue"": ""The handbook of computational linguistics and natural language processing, 57:271."", ""year"": 2010}, {""authors"": [""Rosch"", ""Eleanor"", ""Carol Simpson"", ""R. 
Scott Miller.""], ""title"": ""Structural bases of typicality effects"", ""venue"": ""Journal of Experimental Psychology: Human Perception and Performance, 2(4):491."", ""year"": 1976}, {""authors"": [""Rose"", ""Tony"", ""Mark Stevenson"", ""Miles Whitehead.""], ""title"": ""The Reuters corpus volume 1\u2014from yesterday\u2019s news to tomorrow\u2019s language resources"", ""venue"": ""LREC, volume 2, pages 827\u2013832,"", ""year"": 2002}, {""authors"": [""Rubenstein"", ""Herbert"", ""John B. Goodenough.""], ""title"": ""Contextual correlates of synonymy"", ""venue"": ""Communications of the ACM, 8(10):627\u2013633."", ""year"": 1965}, {""authors"": [""Silberer"", ""Carina"", ""Mirella Lapata.""], ""title"": ""Learning grounded meaning representations with autoencoders"", ""venue"": ""Proceedings of ACL, Sofia."", ""year"": 2014}, {""authors"": [""Sun"", ""Lin"", ""Anna Korhonen"", ""Yuval Krymolowski.""], ""title"": ""Verb class discovery from rich syntactic data"", ""venue"": ""A. Gelbukh, editor, Computational Linguistics and Intelligent Text processing. Springer,"", ""year"": 2008}, {""authors"": [""Turian"", ""Joseph"", ""Lev Ratinov"", ""Yoshua Bengio.""], ""title"": ""Word representations: A simple and general method for semi-supervised learning"", ""venue"": ""Proceedings of ACL, pages 384\u2013394,"", ""year"": 2010}, {""authors"": [""Turney"", ""Peter D.""], ""title"": ""Domain and function: A dual-space model of semantic relations and compositions"", ""venue"": ""Journal 694"", ""year"": 2012}, {""authors"": [""Turney"", ""Peter D."", ""Patrick Pantel.""], ""title"": ""From frequency to meaning: Vector space models of semantics"", ""venue"": ""Journal of Artificial Intelligence Research, 37(1):141\u2013188."", ""year"": 2010}, {""authors"": [""Tversky"", ""Amos.""], ""title"": ""Features of similarity"", ""venue"": ""Psychological Review, 84(4):327."", ""year"": 1977}, {""authors"": [""Wiebe"", ""Janyce.""], ""title"": ""Learning subjective adjectives from corpora"", ""venue"": ""AAAI/IAAI, pages 735\u2013740, Austin, TX."", ""year"": 2000}, {""authors"": [""Williams"", ""Gbolahan K"", ""Sarabjot Singh Anand""], ""title"": ""Predicting the polarity"", ""year"": 2009}, {""authors"": [""Wu"", ""Zhibiao"", ""Martha Palmer.""], ""title"": ""Verbs, semantics and lexical selection"", ""venue"": ""Proceedings of ACL, pages 133\u2013138, Las Cruces, NM."", ""year"": 1994}, {""authors"": [""Yong"", ""Chung"", ""Shou King Foo.""], ""title"": ""A case study on inter-annotator agreement for word sense disambiguation"", ""venue"": ""Proceedings of the ACL SIGLEX Workshop on Standardizing"", ""year"": 1999}]","1 introduction :There is very little similar about coffee and cups. Coffee refers to a plant, which is a living organism or a hot brown (liquid) drink. In contrast, a cup is a man-made solid of broadly well-defined shape and size with a specific function relating to the consumption of liquids. Perhaps the only clear trait these concepts have in common is that they are concrete entities. Nevertheless, in what is currently the most popular evaluation gold standard for semantic similarity, WordSim(WS)-353 (Finkelstein et al. 2001), coffee and ∗ Computer Laboratory University of Cambridge, UK. E-mail: {felix.hill, anna.korhonen}@ cl.cam.ac.uk. ∗∗ Technion, Israel Institute of Technology, Haifa, Israel. E-mail: roiri@ie.technion.ac.il. Submission received: 25 July 2014; revised submission received: 10 June 2015; accepted for publication: 31 August 2015. 
Nevertheless, in what is currently the most popular evaluation gold standard for semantic similarity, WordSim(WS)-353 (Finkelstein et al. 2001), coffee and cup are rated as more "similar" than pairs such as car and train, which share numerous common properties (function, material, dynamic behavior, wheels, windows, etc.). Such anomalies also exist in other gold standards, such as the MEN data set (Bruni et al. 2012a). As a consequence, these evaluations effectively penalize models for learning the evident truth that coffee and cup are dissimilar. Although clearly different, coffee and cup are very much related. The psychological literature refers to the conceptual relationship between these concepts as association, although it has been given a range of names, including relatedness (Budanitsky and Hirst 2006; Agirre et al. 2009), topical similarity (Hatzivassiloglou et al. 2001), and domain similarity (Turney 2012). Association contrasts with similarity, the relation connecting cup and mug (Tversky 1977). At its strongest, the similarity relation is exemplified by pairs of synonyms: words with identical referents. Computational models that effectively capture similarity as distinct from association have numerous applications. Such models are used for the automatic generation of dictionaries, thesauri, ontologies, and language correction tools (Biemann 2005; Cimiano, Hotho, and Staab 2005; Li et al. 2006). Machine translation systems, which aim to define mappings between fragments of different languages whose meaning is similar, but not necessarily associated, are another established application (He et al. 2008; Marton, Callison-Burch, and Resnik 2009). Moreover, since, as we establish, similarity is a cognitively complex operation that can require rich, structured conceptual knowledge to compute accurately, similarity estimation constitutes an effective proxy evaluation for general-purpose representation-learning models whose ultimate application is variable or unknown (Collobert and Weston 2008; Baroni and Lenci 2010). As we show in Section 2, the predominant gold standards for semantic evaluation in NLP do not measure the ability of models to reflect similarity. In particular, in both WS-353 and MEN, pairs of words with associated meaning, such as coffee and cup (rating = 6.810), telephone and communication (7.510), or movie and theater (7.710), receive a high rating regardless of whether or not their constituents are similar. Thus, the utility of such resources to the development and application of similarity models is limited, a problem exacerbated by the fact that many researchers appear unaware of what their evaluation resources actually measure.1 Although certain smaller gold standards—those of Rubenstein and Goodenough (1965) (RG) and Agirre et al. (2009) (WS-Sim)—do focus clearly on similarity, these resources suffer from other important limitations. For instance, as we show, and as is also the case for WS-353 and MEN, state-of-the-art models have reached the average performance of a human annotator on these evaluations. It is common practice in NLP to define the upper limit for automated performance on an evaluation as the average human performance or inter-annotator agreement (Yong and Foo 1999; Cunningham 2005; Resnik and Lin 2010). Based on this established principle and the current evaluations, it would therefore be reasonable to conclude that the problem of representation learning, at least for similarity modeling, is approaching resolution. However, circumstantial evidence suggests that distributional models are far from perfect.
For instance, we are some way from automatically generated dictionaries, thesauri, or ontologies that can be used with the same confidence as their manually created equivalents.

1 For instance, Huang et al. (2012, pages 1, 4, 10) and Reisinger and Mooney (2010b, page 4) refer to MEN and/or WS-353 as "similarity data sets." Others evaluate on both these association-based and genuine similarity-based gold standards with no reference to the fact that they measure different things (Medelyan et al. 2009; Li et al. 2014).

Motivated by these observations, in Section 3 we present SimLex-999, a gold standard resource for evaluating the ability of models to reflect similarity. SimLex-999 was produced by 500 paid native English speakers, recruited via Amazon Mechanical Turk (www.mturk.com), who were asked to rate the similarity, as opposed to association, of concepts via a simple visual interface. The choice of evaluation pairs in SimLex-999 was motivated by empirical evidence that humans represent concepts of distinct part-of-speech (POS) categories (Gentner 1978) and of distinct conceptual concreteness (Hill, Korhonen, and Bentz 2014) differently. Whereas existing gold standards contain only concrete noun concepts (MEN) or cover only some of these distinctions via a random selection of items (WS-353, RG), SimLex-999 contains a principled selection of adjective, verb, and noun concept pairs covering the full concreteness spectrum. This design enables more nuanced analyses of how computational models overcome the distinct challenges of representing concepts of these types. In Section 4 we present quantitative and qualitative analyses of the SimLex-999 ratings, which indicate that participants found it unproblematic to quantify consistently the similarity of the full range of concepts and to distinguish it from association. Unlike existing data sets, SimLex-999 therefore contains a significant number of pairs, such as [movie, theater], which are strongly associated but receive low similarity scores. The second main contribution of this paper, presented in Section 5, is the evaluation of state-of-the-art distributional semantic models using SimLex-999. These include the well-known neural language models (NLMs) of Huang et al. (2012), Collobert and Weston (2008), and Mikolov et al. (2013a), which we compare with traditional vector-space co-occurrence models (VSMs) (Turney and Pantel 2010), with and without dimensionality reduction (SVD) (Landauer and Dumais 1997). Our analyses demonstrate how SimLex-999 can be applied to uncover substantial differences in the ability of models to represent concepts of different types. Despite these differences, the models we consider all share the characteristic of being better able to capture association than similarity. We show that the difficulty of estimating similarity is driven primarily by those strongly associated pairs that have a high (association) rating in gold standards such as WS-353 and MEN but a low similarity rating in SimLex-999. As a result of including these challenging cases, together with a wider diversity of lexical concepts in general, current models achieve notably lower scores on SimLex-999 than on existing gold standard evaluations, and well below the SimLex-999 inter-human agreement ceiling. Finally, we explore ways in which distributional models might improve on this performance in similarity modeling.
To do so, we evaluate the models on the SimLex-999 subsets of adjectives, nouns, and verbs, as well as on abstract and concrete subsets and on subsets of more and less strongly associated pairs (Sections 5.2.2–5.2.4). As part of these analyses, we confirm the hypothesis (Agirre et al. 2009; Levy and Goldberg 2014) that models learning from input informed by dependency parsing, rather than simple running-text input, yield improved similarity estimation and, specifically, a clearer distinction between similarity and association. In contrast, we find no evidence for a related hypothesis (Agirre et al. 2009; Kiela and Clark 2014) that smaller context windows improve the ability of models to capture similarity. We do, however, observe clear differences in model performance on the distinct concept types included in SimLex-999. Taken together, these experiments demonstrate the benefit of the diversity of concepts included in SimLex-999; it would not have been possible to derive similar insights by evaluating based on existing gold standards. We conclude by discussing how observations such as these can guide future research into distributional semantic models. By facilitating better-defined evaluations and finer-grained analyses, we hope that SimLex-999 will ultimately contribute to the development of models that accurately reflect human intuitions of similarity for the full range of concepts in language.
We conducted a brief exploration of how models might improve on this performance, and verified the hypotheses that models trained on dependency-based input capture similarity more effectively than those trained on running-text input. The evidence that smaller context windows are also beneficial for similarity models was mixed, however. Indeed, we showed that the optimal window size depends on both the general model architecture and the part-of-speech and concreteness of the target concepts. Our analysis of these hypotheses illustrates how the design of SimLex-999— covering a principled set of concept categories and including meta-information on concreteness and free-association strength—enables fine-grained analyses of the performance and parameterization of semantic models. However, these experiments only scratch the surface in terms of the possible analyses. We hope that researchers will adopt the resource as a robust means of answering a diverse range of questions pertinent to similarity modeling, distributional semantics, and representation learning in general. In particular, for models to learn high-quality representations for all linguistic concepts, we believe that future work must uncover ways to explicitly or implicitly infer “deeper,” more general, conceptual properties such as intentionality, polarity, subjectivity, or concreteness (Gershman and Dyer 2014). However, although improving corpusbased models in this direction is certainly realistic, models that learn exclusively via the linguistic modality may never reach human-level performance on evaluations such as SimLex-999. This is because much conceptual knowledge, and particularly that which underlines similarity computations for concrete concepts, appears to be grounded in the perceptual modalities as much as in language (Barsalou et al. 2003). Whatever the means by which the improvements are achieved, accurate conceptlevel representation is likely to constitute a necessary first step towards learning informative, language-neutral phrasal and sentential representations. Such representations would be hugely valuable for fundamental NLP applications such as language understanding tools and machine translation. Distributional semantics aims to infer the meaning of words based on the company they keep (Firth 1957). However, although words that occur together in text often have associated meanings, these meanings may be very similar or indeed very different. Thus, possibly excepting the population of Argentina, most people would agree that, strictly speaking, Maradona is not synonymous with football (despite their high rating of 8.62 in WordSim-353). The challenge for the next generation of distributional models may therefore be to infer what is useful from the co-occurrence signal and to overlook what is not. Perhaps only then will models capture most, or even all, of what humans know when they know how to use a language." "1 introduction :With the emergence of popular social media services such as Twitter and Facebook, many studies in the area of Natural Language Processing (NLP) have been published that analyze the text data from these services for a variety of applications, such as opinion mining, sentiment analysis, event detection, or crisis management (Culotta 2010; Sriram et al. 2010; Yin et al. 2012). Many of these studies have primarily relied on building classification models for different learning tasks, such as text classification or Named Entity Recognition. 
The effectiveness of these models is often evaluated using cross-validation techniques. Cross-validation, first introduced by Geisser (1974), has become the most popular evaluation method for estimating prediction errors in regression and classification problems. In that method, the data D are randomly partitioned into k non-overlapping subsets (folds) Dk of approximately equal size. The validation step is repeated k times, using a different Dv = Dk as the validation data and Dt = D \ Dk as the training data each time. The final evaluation is the average over the k validation steps. Cross-validation has a lower variance than a single hold-out set validation, and it is therefore commonly used on both moderate and large amounts of data without introducing efficiency concerns (Arlot and Celisse 2010). Compared with other choices of k, 10-fold cross-validation has been accepted as the most reliable method, giving a highly accurate estimate of the generalization error of a given model for a variety of algorithms and applications. Despite its wide application, debates on the appropriateness of cross-validation have been raised in a number of areas, particularly in time series analysis (Bergmeir and Benítez 2012) and chemical engineering (Sheridan 2013). A fundamental assumption of cross-validation is that the data need to be independent and identically distributed (i.i.d.) between folds (Arlot and Celisse 2010). Therefore, if the data points used for training and validation are not independent, cross-validation is no longer valid and will usually overestimate the validity of the model. For time series forecasting, because the data consist of correlated observations and might be generated by a process that evolves over time, the training set and the validation set are not independent if chosen randomly for cross-validation. Researchers since the early 1990s have used modified variants of cross-validation to compensate for time dependence within time series (Chu and Marron 1991; Bergmeir and Benítez 2012). In the area of chemical engineering, Sheridan (2013) investigates the dependence in chemical data and observes that the existence of similar compounds or molecules across the data set leads to overoptimistic results under standard k-fold cross-validation. We take such observations from the time series and chemical domains as a warning to investigate data dependence in computational linguistics. We argue that even when the data appear to be independently generated and there is no reason to believe that temporal dependencies are present, unexpected statistical dependencies may be induced through an incorrect application of cross-validation. Once there is a chance of having similar or otherwise dependent data points, randomly splitting the data without taking this factor into account causes incorrect, or at least unreliable, evaluation, which may lead to invalid, or at least unjustified, conclusions. Although similar concerns have been raised by a prior study (Lerman et al.
2008) suggesting that cross-validation might not be suitable for measuring the accuracy of public opinion forecasting, there is a lack of systematic analysis of how potential data dependence might invalidate cross-validation and of what alternative evaluation methods exist. With the aim of gaining further insight into this issue, we perform a detailed empirical study based on text classification in Twitter, and show that an inappropriate choice of cross-validation technique can lead to misleading conclusions. This concern may apply more generally to other data types of a similar nature. We also explore several evaluation methods, mostly borrowed from research on time series, that are better suited to statistically dependent data and that could be adapted by researchers working in the NLP area.","2 unexpected statistical dependence in microblog data :We argue that microblog data, such as Twitter messages, known as tweets, are statistically dependent in nature. It has been demonstrated that there is redundancy in Twitter language (Zanzotto, Pennacchiotti, and Tsioutsiouliklis 2011), which in turn can be taken as evidence against the statistical independence of microblog posts within given periods of time or occurrences of the same event. In summary, statistical dependence in tweets can arise for the following reasons:
Events: The events of interest, or events being discussed by the microbloggers at large, may be temporally limited to certain specific time horizons.
Textual Links: Hashtags and other idiomatic expressions may be invented, gain and lose popularity, fall out of use, or be reused with a different meaning. In addition, particular microbloggers may actively share information on certain types of events or express their opinions on certain topics in similar contexts.
Twinning: There may be “twinning” of tweets (and therefore data points) because of various forms of retweeting, where different microbloggers post substantially the same tweets in response to one another.
Many existing studies that use microblog data (e.g., for tweet classification)—especially those related to detecting or monitoring events—have adopted k-fold cross-validation to evaluate the effectiveness of the learned classification models (e.g., Culotta 2010; Sriram et al. 2010; Jiang et al. 2011; Uysal and Croft 2011; Takemura and Tajima 2012; Kumar, Jiang, and Fang 2014). However, they overlook the possible statistical dependence among microblog data in terms of content (e.g., sharing hashtags in Twitter), relevance to current events, and time of publishing, which could potentially affect their evaluation results. In Table 1 we use examples of these studies to illustrate how the potential statistical dependence of microblog data is overlooked, which might have affected the validity of the results in those studies. The potential source of the ignored statistical dependence, shown in the last column, is categorized into Events, Textual Links, and Twinning (described above), based on the list of features that the authors used in their machine learning approaches. For example, if one is interested in detecting specific events (e.g., diseases [Culotta 2010] or disasters [Verma et al. 2011; Kumar, Jiang, and Fang 2014]), using tweets from the middle of an event to train for and evaluate the detection of the onset of that same event is clearly invalid. In addition, certain users (e.g., authorities that announce disease outbreaks, such as @ECDC Outbreaks, or car accidents, such as @emergencyAUS) may always post in the same way [Verma et al.
2011; Kumar, Jiang, and Fang 2014], making tweets share similar contexts; or microbloggers may always use the same hashtags to indicate similar topics (Jiang et al. 2011). More generally, if substantially verbatim “twin” copies of tweets find their way into both the training and validation data sets (e.g., [Uysal and Croft 2011; Verma et al. 2011; Kumar, Jiang, and Fang 2014]), model validity will be overestimated. We note that, as we do not have access to the data sets used in these studies, we cannot verify the level of influence that the data dependence and the choice of evaluation method have on their reported results. However, we raise a warning that an overlooked, unexpected dependence might be influencing their results.","3 validation for statistically dependent data :Where the data are not independent, dependence-aware validation needs to be used. We describe four dependence-aware validation methods that take data dependence into consideration in the evaluation. In the following discussion, we refer to the data set used in the experiments as D = {d1, d2, · · · , dn}. The training data is referred to as Dt, and the testing, evaluation, or validation data as Dv, where both Dt and Dv are subsets of D.","4 a case study on tweet classification :We focus on two tweet classification tasks in our case study. The first is a binary classification, where tweets are classified as disaster-related or not; the second is a disaster type classification, where tweets are assigned to one of the following six classes: non-disaster, storm, earthquake, flooding, fire, and other disasters. In our experiments, we use LibSVM (Chang and Lin 2011) to build discriminative classifiers for our classification tasks. Tweets are short texts, currently limited to 140 characters. Often, microbloggers use tweets in reply to others by using mentions (which are Twitter usernames preceded by @), or use hashtags such as #CycloneEvan to make the grouping of similar messages easier, or to increase the visibility of their posts to others interested in the same topic. Links to Web pages, mostly full stories of what is briefly reported in the tweet, are also popular. Selecting features for a text classifier built on Twitter data can therefore benefit from both conventional textual features, such as n-grams, and Twitter-specific features, such as hashtags and mentions. In our experiments, we investigate the effect of the following features and their combinations on the classification of tweets for disasters: (1) n-grams: unigrams and bigrams of the tweet text at the word level, excluding any hashtag or mention in the text; to find n-grams, we pre-process tweets to remove stopwords. (2) Hashtag: two different features are explored: first, a binary feature indicating whether or not a hashtag exists in the tweet; second, the total number of hashtags in the tweet. (3) Mention: two types of features (binary and mention count) are explored, exactly as for hashtags above. (4) Link: a binary feature that specifies whether or not a tweet contains a link to a Web page. We randomly sampled a total of 7,500 English tweets published in the two years from December 2010 to December 2012 from a system (Yin et al. 2012) that stores billions of tweets from the Twitter streaming API. Explicit retweets were excluded. This set covered a number of disasters, such as the earthquakes in Christchurch, New Zealand, in 2011; the York floods, England, in 2012; and Hurricane Sandy, United States, in 2012.
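As a concrete illustration of this feature set, the sketch below extracts the four feature groups from a single tweet. The tokenizer, the toy stopword list, and the feature names are our own assumptions, not the authors' implementation.

# Illustrative sketch of the four feature groups described above:
# (1) n-grams, (2) hashtag, (3) mention, (4) link.
import re

STOPWORDS = {"the", "a", "an", "of", "in", "to", "is", "and"}  # toy list

def tweet_features(text):
    hashtags = re.findall(r"#\w+", text)
    mentions = re.findall(r"@\w+", text)
    has_link = bool(re.search(r"https?://\S+", text))
    # Word tokens with hashtags, mentions, and links removed.
    cleaned = re.sub(r"(#\w+|@\w+|https?://\S+)", " ", text.lower())
    tokens = [t for t in re.findall(r"[a-z']+", cleaned) if t not in STOPWORDS]
    feats = {}
    for unigram in tokens:                       # (1) unigrams
        feats["uni=" + unigram] = 1
    for bigram in zip(tokens, tokens[1:]):       # (1) bigrams
        feats["bi=" + " ".join(bigram)] = 1
    feats["has_hashtag"] = int(bool(hashtags))   # (2) hashtag: binary
    feats["n_hashtags"] = len(hashtags)          # (2) hashtag: count
    feats["has_mention"] = int(bool(mentions))   # (3) mention: binary
    feats["n_mentions"] = len(mentions)          # (3) mention: count
    feats["has_link"] = int(has_link)            # (4) link: binary
    return feats

print(tweet_features("Flooding reported in York #YorkFloods @emergencyAUS http://example.com"))

In an SVM setting such as the one used here, a dictionary of this kind would simply be mapped to sparse feature indices before training.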
For a machine learner such as a classifier to work, we need to present it with a representative set of labeled training data. We therefore annotated our tweet data manually to identify disaster tweets and their types. We annotated the data set based on two main questions: Is this tweet talking about a specific disaster? What type of disaster is it talking about? Types of disasters were defined as earthquake, fire, flooding, storm, other, and non-disaster. Annotations were done by three annotators for each tweet, who were hired through the crowd-sourcing service Crowdflower. After taking majority votes, where at least two of the three annotators agreed on both questions, we ended up with a set of 6,580 annotated tweets, of which 2,897 were identified as disaster-related and 3,683 as non-disaster. Of the disaster tweets, 37% were annotated as earthquake, followed by fire, flooding, and storm, constituting 23%, 21%, and 12%, respectively. For our tweet classification tasks, we set up experiments to compare five validation methods—standard 10-fold cross-validation, border-split cross-validation, neighbor-split validation, time-split validation, and time-border-split validation—for identifying tweets that are relevant to a disaster, and whether or not we can broadly identify the type of disaster. We evaluate classification effectiveness using the accuracy metric, which is the percentage of correctly classified tweets. We used the following settings for our evaluation schemes:
- k-fold cross-validation: We used k = 10 folds.
- Border-split cross-validation: We used k = 10 folds and a radius h = 20 days. We assume that, for most events, social media activity on the topic dies down within three weeks of the event's occurrence.
- Neighbor-split validation: We used cosine similarity to find a subset of the data that has the fewest neighbors in the data set. We weighted hashtags double and disregarded mentions in calculating the similarity. Neighborhood was decided using a minimum threshold of 0.25 on the resulting cosine similarity. The size of the test data set was the same as in time-split (below), but the size of the training data set was 5,868.
- Time-split validation: We chose the cut-off time t∗ so that 90% of the data was used for training (5,922 tweets) and 10% for validation (658 tweets).
- Time-border-split validation: We used the same t∗ and h as for time-split and border-split. This resulted in a training data set containing 87.1% of the data (5,750 tweets) and a validation data set identical to the one in time-split at 10% of the data (658 tweets). In removing the border, 2.9% of the data were discarded.
We ran two sets of experiments: discriminating disaster tweets from non-disaster (Disaster or Not), and classifying tweets into the six classes of earthquake, fire, flooding, storm, other, and non-disaster (Disaster Type). Table 2 compares the classification results for SVM on a range of feature combinations using the five different validation methods. We aim to show that an inappropriate choice of cross-validation can lead to misleading conclusions, including overestimated classification accuracies and the selection of suboptimal feature sets as the apparent best performers. k-fold Cross-Validation. The first set of experiments was conducted using standard 10-fold cross-validation. For discriminating disaster tweets from non-disaster, SVM achieved a maximum of 92.8% accuracy when unigrams and hashtags were used.
A similar result of 92.7% accuracy was recorded for classifying tweets into their disaster types. Unsurprisingly, having hashtags as additional features was the most helpful. As a side note, the standard deviations of 10-fold cross-validation are quite small; although it is well known that these are not an unbiased estimate of the true variance (see, for instance, Bengio and Grandvalet [2004]), this nevertheless seems to suggest a degree of confidence. If we assumed that tweets were statistically independent, we could conclude that the SVM classifier using unigrams plus hashtags is the best performer for classifying disaster tweets, with a high accuracy of over 92%. This is what most previous studies have done. However, this result is overoptimistic. For example, during cross-validation, tweets with the same hashtags are distributed over all folds, making it easier for the classifier to associate labels with known hashtags. Border-Split Cross-Validation. In contrast to 10-fold cross-validation, border-split cross-validation gives a much lower performance score across all of the results. The standard deviations are also much larger. Neighbor-Split Validation. The neighbor-split results are very different. Unlike previous work (Sheridan 2013), we find that neighbor-split judges the effectiveness of the models quite highly, near the scores from 10-fold cross-validation but ranking the feature combinations differently. We believe the high scores arise because neighbor-split tends to pick non-disaster tweets for the validation set, as those tweets have the fewest neighbors. This makes it easy to classify all validation tweets into the majority class (i.e., non-disaster). Time-Split and Time-Border-Split Validation. Neither of these validations leads to results similar to 10-fold cross-validation; they are substantially lower, in one case by over 20 percentage points (bottom row of Table 2). Numerically, they are similar to the border-split cross-validation results, but again with a different ranking of the feature combinations. Time-split and time-border-split gave similar results to each other, with different rankings but only small numerical differences. Coincidentally, in our data set, the time t∗ fell largely between events of interest, so the elimination of the border resulted in only a small correction, confounded with the effect of the small reduction in training set size. The validation methods provide very different results, with a number of the differences around 20 percentage points. In addition, they disagree on the ranking of the feature combinations, which may be the more important problem in many studies. This is particularly visible with the Disaster or Not classification; the best combination of features (bold in Table 2) differs depending on the validation method used. A study based on 10-fold cross-validation would suggest Unigram+Hashtag for this problem, which is the second-worst combination of features according to time-border-split validation. Further, the confidence intervals, if used, would suggest confidence in this choice. Given our experiments, we believe that standard k-fold cross-validation substantially overestimates the performance of the models in our case study. Time-split and time-border-split validation are more likely to represent accurate evaluation when temporally dependent data are involved, as they simulate true prospective prediction. However, they both have the downside that they rely on only one pair of training and validation data sets.
This means that they will have a larger variance than a cross-validation method, and they do not give any measure of confidence in their own results. Border-split cross-validation may also be acceptable, provided that its assumptions are satisfied—primarily, that the data points are independent beyond a certain radius. Depending on the particular task, this may be easy to decide (e.g., separate events) or hard (e.g., a number of overlapping events with no clear time difference). We also find that neighbor-split overestimates rather than underestimates the performance of the models relative to time-split and time-border-split. This represents a hazard to its use: The neighborhood measure may interact in unexpected ways with other features of the data, rendering the validation unpredictable. Ideally, one would use multiple large test data sets, all collected during separate, non-overlapping periods of time after the training data set. This would be the most reliable measure of the performance of the models, but also the most expensive in terms of required data collection and annotation. Failing that, we recommend time-split or time-border-split validation when temporal statistical dependence cannot be ruled out.","5 conclusions :We used a common task in NLP, text classification, on a relatively recent but widely used data source, Twitter streams, to show that blindly following the same evaluation method for tasks of a similar nature can lead to invalid conclusions. In particular, we investigated the most common evaluation method for machine learning applications, standard 10-fold cross-validation, and compared it with other validation methods that take the statistical dependence in the data into account, including time-split, border-split, neighbor-split, and a combination of time- and border-split validation. We showed how cross-validation can overestimate the effectiveness of a tweet classification application. We argued that text in microblogs, or other similar text from social media (e.g., Web forums or even online news), can be statistically dependent for specific studies, such as those looking at events. Researchers therefore need to be careful in choosing evaluation methodologies based on the nature of the data at hand to avoid bias in their results.",,,,"In recent years, many studies have been published on data collected from social media, especially microblogs such as Twitter. However, rather few of these studies have considered evaluation methodologies that take into account the statistically dependent nature of such data, which breaks the theoretical conditions for using cross-validation. Despite concerns raised in the past about using cross-validation for data of similar characteristics, such as time series, some of these studies evaluate their work using standard k-fold cross-validation. Through experiments on Twitter data collected during a two-year period that includes disastrous events, we show that by ignoring the statistical dependence of the text messages published in social media, standard cross-validation can result in misleading conclusions in a machine learning task. We explore alternative evaluation methods that explicitly deal with statistical dependence in text. Our work also raises concerns for any other data for which similar conditions might hold.","[{""affiliations"": [], ""name"": ""Sarvnaz Karimi""}, {""affiliations"": [], ""name"": ""Jie Yin""}, {""affiliations"": [], ""name"": ""Jiri Baum""}]",SP:17ced766be8b5ad0aeadb2cbd25ee9d31eb0e63a,"[{""authors"": [""S. Arlot"", ""A. Celisse""], ""title"": ""A survey of cross-validation procedures for model selection"", ""venue"": ""Statistics Surveys, 4:40\u201379"", ""year"": 2010},
Arlot"", ""A. Celisse.""], ""title"": ""A survey of cross-validation procedures for model selection"", ""venue"": ""Statistics Surveys, 4:40\u201379. Bengio, Y. and Y. Grandvalet. 2004. No"", ""year"": 2010}, {""authors"": [""C. Lin""], ""title"": ""LIBSVM: A library for support vector machines"", ""venue"": ""Information Sciences,"", ""year"": 2011}, {""authors"": [""Chu"", ""C.-K"", ""J.S. Marron""], ""title"": ""Comparison of two bandwidth selectors"", ""year"": 1991}, {""authors"": [""S. Geisser""], ""title"": ""A predictive approach to the random effect mode"", ""venue"": ""Biometrika Trust, 61(9):101\u2013107."", ""year"": 1974}, {""authors"": [""L. Jiang"", ""M. Yu"", ""M. Zhou"", ""X. Liu"", ""T. Zhao.""], ""title"": ""Target-dependent Twitter sentiment classification"", ""venue"": ""ACL-HLT, pages 151\u2013160, Portland, OR."", ""year"": 2011}, {""authors"": [""A. Kumar"", ""M. Jiang"", ""Y. Fang.""], ""title"": ""Where not to go?: Detecting road hazards using Twitter"", ""venue"": ""SIGIR, pages 1223\u20131226, Gold Coast."", ""year"": 2014}, {""authors"": [""K. Lerman"", ""A. Gilder"", ""M. Dredze"", ""F. Pereira.""], ""title"": ""Reading the markets: Forecasting public opinion of political candidates by news analysis"", ""venue"": ""COLING, pages 473\u2013480, Manchester."", ""year"": 2008}, {""authors"": [""R.P. Sheridan""], ""title"": ""Time-split cross-validation as a method for estimating the goodness of prospective prediction"", ""venue"": ""Journal of Chemical Information and Modeling,"", ""year"": 2013}, {""authors"": [""B. Sriram"", ""D. Fuhry"", ""E. Demir"", ""H. Ferhatosmanoglu"", ""M. Demirbas.""], ""title"": ""Short text classification in Twitter to improve information filtering"", ""venue"": ""SIGIR, pages 841\u2013842, Geneva."", ""year"": 2010}, {""authors"": [""H. Takemura"", ""K. Tajima.""], ""title"": ""Tweet classification based on their lifetime duration"", ""venue"": ""CIKM, pages 2367\u20132370, Maui, HI."", ""year"": 2012}, {""authors"": [""I. Uysal"", ""W.B. Croft.""], ""title"": ""User oriented tweet ranking: A filtering approach to microblogs"", ""venue"": ""CIKM, pages 2261\u20132264, Glasgow."", ""year"": 2011}, {""authors"": [""S. Verma"", ""S. Vieweg"", ""W.J. Corvey"", ""L. Palen"", ""J.H. Martin"", ""M. Palmer"", ""A. Schram"", ""K.M. Anderson""], ""title"": ""Natural language processing to the rescue? Extracting \u201csituational awareness"", ""year"": 2011}, {""authors"": [""J. Yin"", ""S. Karimi"", ""B. Robinson"", ""M. Cameron.""], ""title"": ""ESA: Emergency situation awareness via microbloggers"", ""venue"": ""CIKM, pages 2701\u20132703, Maui, HI."", ""year"": 2012}, {""authors"": [""F.M. Zanzotto"", ""M. Pennacchiotti"", ""K. Tsioutsiouliklis.""], ""title"": ""Linguistic redundancy in Twitter"", ""venue"": ""EMNLP, pages 659\u2013669, Edinburgh. 548"", ""year"": 2011}]",,,,,,,,,,,squibs :,evaluation methods for statistically :,dependent text :Sarvnaz Karimi∗,"csiro :Jie Yin∗ Jiri Baum∗∗ Sabik Software Solutions In recent years, many studies have been published on data collected from social media, especially microblogs such as Twitter. However, rather few of these studies have considered evaluation methodologies that take into account the statistically dependent nature of such data, which breaks the theoretical conditions for using cross-validation. 
Despite concerns raised in the past about using cross-validation for data of similar characteristics, such as time series, some of these studies evaluate their work using standard k-fold cross-validation. Through experiments on Twitter data collected during a two-year period that includes disastrous events, we show that by ignoring the statistical dependence of the text messages published in social media, standard cross-validation can result in misleading conclusions in a machine learning task. We explore alternative evaluation methods that explicitly deal with statistical dependence in text. Our work also raises concerns for any other data for which similar conditions might hold.","reference study data dependence concern :Sriram et al. (2010) Classification of tweets into five categories of events, news, deals, opinions, and private messages 5,407 tweets, no mention of removing retweets Events, Textual Links, Twinning Culotta (2010) Classification of tweets using regression models into flu or not flu 500,000 tweets from a 10-week period Events, Textual Links, Twinning Verma et al. (2011) Classification of tweets to specific disaster events, and identification of tweets that contained situational awareness content 1,965 tweets collected from specific disasters in the U.S. by keyword search Twinning, Textual Links Jiang et al. (2011) Sentiment classification of tweets; hashtags were mentioned as one of the features, retweets were considered to share the same sentiment Tweets found using keyword search, without removing retweets Twinning, Textual Links Uysal and Croft (2011) Personalized tweet ranking using retweet behavior. Decision tree classifiers were used based on features from the tweet content, user behavior, and tweet author 24,200 tweets, from which 2,547 were retweeted by the seed users Twinning, Textual Links Takemura and Tajima (2012) Classification of tweets into three categories based on whether they should be read now, later, or outdated 9,890 tweets from a fixed period of time, annotated for time-(in)dependency Events, Textual Links, Twinning Kumar, Jiang, and Fang (2014) Classification of tweets into two categories of road hazard and non-hazard 30,876 tweets, retweets were not removed Events, Textual Links, Twinning Border-Split Cross-Validation. This method, proposed by Chu and Marron (1991), is a modification of k-fold cross-validation for time series data. Data are partitioned into the folds in time sequence rather than randomly; then in each validation step, data points within a time distance h of any point in the validation data set are excluded from the training data. That is, for each validation step k, the validation data is Dv = Dk, there is a border data set Db = {db ∈ D \ Dv : (∃dv∈Dv) |tdb − tdv | ≤ h}, which is disregarded, and the training data is Dt = D \ (Dk ∪ Db). This method assumes that the data dependence is confined to some known radius h, with data points beyond that radius being independent. Time-Split Validation. In time-split validation (Sheridan 2013), a particular time t∗ is chosen; data points prior to this time are allocated to the training data set and data points after t∗ are used as validation data; that is, Dt = {dt ∈ D : tdt ≤ t ∗} and Dv = {dv ∈ D : tdv > t ∗}. Note that this is not a cross-validation method, as no “crossing” takes place. The motivation is to emulate prospective prediction: A model is built using only the information available up to time t∗ and evaluated on data that are collected after that time. 
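The two time-based schemes just defined can be sketched in a few lines. Representing each data point as a (timestamp, example) pair and the function names are our assumptions, not code from the squib; timestamps need only support comparison and subtraction (e.g., floats, or datetimes with a timedelta radius h).

# Sketch (ours, not the authors' code) of time-split and border-split.
# Each data point is a (timestamp, example) pair.
def time_split(data, t_star):
    """Dt = points with t <= t*; Dv = points with t > t*; no 'crossing'."""
    train = [d for d in data if d[0] <= t_star]
    valid = [d for d in data if d[0] > t_star]
    return train, valid

def border_split_folds(data, k, h):
    """Border-split cross-validation: folds are taken in time order, and the
    border Db (points within time distance h of the validation fold) is
    discarded, so the training data excludes both the fold Dk and Db."""
    data = sorted(data, key=lambda d: d[0])
    fold_size = (len(data) + k - 1) // k  # ceil(n / k)
    for start in range(0, len(data), fold_size):
        valid = data[start:start + fold_size]
        lo, hi = valid[0][0], valid[-1][0]
        # Keep only training points farther than h from every validation point.
        train = [d for d in data if d[0] < lo - h or d[0] > hi + h]
        yield train, valid

With the settings used in the case study above, h would be 20 days and t∗ would be chosen so that 90% of the tweets fall before it.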
Time-Border-Split Validation. This is a combination of border-split and time-split validation. Both a time t∗ and a radius h are chosen; data points prior to time t∗ − h are allocated to the training data set and data points after time t∗ are used as validation data; that is, Dt = {dt ∈ D : t(dt) ≤ t∗ − h} and Dv = {dv ∈ D : t(dv) > t∗}. The remaining data points are not used. The motivation is to combine the conservative aspects of both time-split and border-split, emulating prospective prediction more carefully. Neighbor-Split Validation. Proposed by Sheridan (2013), this method assumes the existence of some similarity metric for the data points. The number of neighbors for each data point is calculated, using a threshold on the similarity metric. A desired fraction of the data points with the fewest neighbors is then allocated to the validation data Dv and the rest to the training data Dt. The motivation of this approach is to deliberately reduce the similarity between training and validation data. It is inspired by leave-class-out validation or cross-validation, which assumes a pre-existing classification (rather than a similarity metric) and allocates data points to the validation data Dv or cross-validation folds Dk according to this classification. The advantage of neighbor-split over leave-class-out is that the size of the validation data is a parameter that can be chosen, rather than being dependent on the (potentially unbalanced) sizes of the classes in the classification."
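Neighbor-split can be sketched in the same style; the similarity function is left as a parameter, and the quadratic neighbor count and function names are illustrative assumptions, not the original implementation.

# Sketch (ours) of neighbor-split: the desired fraction of points with
# the fewest neighbors under the similarity threshold becomes Dv.
def neighbor_split(data, similarity, threshold, valid_fraction):
    """data: list of examples; similarity: function (a, b) -> float."""
    n = len(data)
    # O(n^2) neighbor counting; fine for a sketch, too slow for large n.
    neighbor_counts = [
        sum(1 for j in range(n)
            if j != i and similarity(data[i], data[j]) >= threshold)
        for i in range(n)
    ]
    # Points with the fewest neighbors go to the validation set.
    order = sorted(range(n), key=lambda i: neighbor_counts[i])
    valid_idx = set(order[:int(round(valid_fraction * n))])
    train = [d for i, d in enumerate(data) if i not in valid_idx]
    valid = [data[i] for i in sorted(valid_idx)]
    return train, valid

In the case study above, the similarity was cosine similarity over the tweet text, with hashtags weighted double, mentions ignored, and a threshold of 0.25.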
Book Reviews

Robots That Talk and Listen
Judith A. Markowitz (editor)
(President of J. Markowitz, Consultants, Chicago, IL)
Berlin/Boston/Munich: Walter de Gruyter, 2015, hardbound, ISBN 978-1-61451-603-3; PDF e-ISBN 978-1-61451-440-4; EPUB e-ISBN 978-1-61451-915-7

Reviewed by Martha Evens
Illinois Institute of Technology

I volunteered to review this book because I found Dr. Markowitz's 1995 book, Using Speech Recognition, extremely helpful when I first became involved in medical applications of speech. In fact, I made all my students in that area read it. This book is very different but, in its own way, just as useful. In fact, it is a must-read for anyone contemplating a new speech application or enhancing a current one, as well as for anyone searching for new directions in dialogue research. This book is divided into four parts, followed by a separate conclusion. It is always hard to do justice to all the papers and authors in a collection like this—with twelve varied papers addressing many different parts of the complex problem of enabling a robot to carry on a spoken dialogue—but I will try to give at least a brief taste of each. Part I is about "Images." In the first chapter of Part I, Steve Mushkin, the founder and president of Latitude Research, describes an experiment in which 348 children aged 8–12 from six digitally advanced countries were asked to draw and describe in words how they would like to interact with a personal robot in the future. In the second chapter Markowitz herself reviews the language capabilities and behavior of powerful cultural icons, such as Frankenstein, the Hebrew Golem, the Japanese karakuri, and Pygmalion's beloved statue, Galatea. This is highly relevant, because science fiction, whether on paper or in the movies, part of traditional mythology, or local social history, has largely shaped our expectations about what new technology promises or threatens.
In the third chapter, David Dufty, author of the well-known book How to Build an Android: The True Story of Philip K. Dick's Robotic Resurrection, argues that robots are intrinsically part of art and entertainment, and that this is the major reason why we should build them. Part II is called "Frameworks and Guidelines." The first of the three chapters in this part, written by Bilge Mutlu and two of his students at the University of Wisconsin, Madison, discusses the requirements for designing a robot capable of sustaining productive dialogue. They test their framework in robot–human interactions, first using a robot as a teacher, then using one to assess linguistic factors that contribute to expertise. The author of the second chapter, Nazikian, Director of Columbia University's Japanese Language Program, focuses on the knowledge that any teacher must have to be effective in teaching a foreign language. Based on her series of research studies on the teaching of Japanese, her concern is that a robot (or human) teacher must understand the bond between language and culture, which she sees as inseparable. She emphasizes the importance of using human-like facial expressions in the teaching machine, first for teaching pronunciation, then for communicating emotional reactions to the student's contributions and for making the student feel comfortable with the machine. In this excellent paper, chock full of valuable references, Nazikian unintentionally reminds us of the huge and inexplicable divide between the world of Computer-Aided Language Learning (CALL) and the world of AI in education. Art Graesser and his group at the University of Memphis (Link et al., 2001) have carried out a long series of experiments studying the effect of using expressive faces on the screen coordinated with the dialogue, but the work at Memphis is never mentioned in this paper and there are no references to papers in Discourse Processes or the Journal of the Learning Sciences. The third paper in Part II, written by Manfred Tscheligi and his student, Nicole Mirnig, at the University of Salzburg, focuses on the problems of Comprehension, Coherence, and Consistency. These qualities depend, they argue, on the use and transfer of mental models, which they find essential to effective dialogue with any robot capable of learning what human beings want and giving it to them. Part III is about constructing a speech-enabled robot capable of useful, real-world tasks. In Chapter 7, Jonathan Connell of IBM's T. J. Watson Research Center explains how he defined these capabilities for a robot named ELI (the Extensible Language Interface). First of all, such a robot needs to be able to accept and synthesize multimodal information. It needs to recognize speech, gestures, and object manipulations and put this information together into, say, an algorithm for getting you a cup of coffee. It has to find your kitchen, pick up the right coffee pot, heat the water in a microwave-safe measuring cup, and put in just the right amount of coffee, cream, and sugar. In the process, it will need to learn new nouns (the names that you use for objects and rooms in your house) and new verbs (for activities like opening bottles and boiling water). ELI's parser is a finite-state machine, but it can handle new names and new activities. The emphasis is on the grounding of new language in a new environment. In Chapter 8, Alan Wagner, a Senior Research Scientist at Georgia Tech, outlines what a robot has to know before you can teach it to tell lies or recognize lies produced by others.
Unlike ELI, this robot does not yet exist, and I find it hard to imagine wanting to build it or even use it myself, but the description of deceptive language in this chapter has charm. Joerg Wolf and Guido Bugmann, at the University of Plymouth in the UK, recount their experience getting university students to teach their off-the-shelf robot how to play a simple card game. They began by collecting a corpus of sessions in which one university student taught another to play the game. They used this corpus to develop the rules for their robot: grammar rules, rules for anaphora, dialogue rules, and rules for dealing cards. Then they carried out a series of experiments in which university students tried to teach the robot the game. One of many problems that they had not anticipated was that when the robot did not understand what the student wanted it to do, the student raised his or her voice and tried to simplify the explanation. The result was many out-of-grammar and out-of-vocabulary errors. Part IV contains two very different papers, one about improving audition (so that the robot can understand what you are telling it, even when several people are talking at once or machines are clanking) and the other about how design factors, such as robot voices and gesture speed, affect memory and engagement in both children and adults. The experts in audition, François Grondin and François Michaud from the Robotics Laboratory at the Canadian Université de Sherbrooke, have developed algorithms for localization (figuring out where a sound is coming from), tracking (following sounds as people or robots move), and separation (extracting sounds from a given person or machine when several are making noises at once). They have achieved remarkable success in a number of experiments run on a system with a bank of separate microphones and parallel hardware capable of processing all this information at once. Sandra Okita, then at Stanford, now at Columbia University, and Victor Ng-Thow-Hing from the Honda Research Institute USA in Mountain View teamed up to carry out a series of experiments with robot voices. The objective of their studies was to determine how design choices like robot voices and gesture speed affect how humans of all ages respond to a robot. They tried out two voices with young children, one robot-like (monotone) and one more human-like. Both robots followed exactly the same script, but the children remembered much more of what the humanoid voice said and they were much more likely to be willing to talk to that robot again. Teenagers preferred the humanoid voice, but there was much less difference between the two groups interacting with the different voices. Adults seemed to be affected even less. Similar experiments with gesture speed showed that robots that gestured rapidly were judged to be happier and more pleasant to spend time with. Again, small children were affected the most, but teenagers also had strong preferences here. Again, adults were affected less. We badly need more work of the kind described by Okita and Ng-Thow-Hing. Too many people believed Reeves and Nass's (1996) Media Equation, which is the claim that people interact with computers in the same way that they interact with people, and decided that they could substitute research on human interaction for research on interactions between people and machines.
Our own experiments (Bhatt, Argamon, and Evens, 2004) have convinced us that the Media Equation is certainly not true for the students interacting with our intelligent tutoring system, who are constantly polite to human tutors, whether their professors or their peers, but often very rude to our computer tutor. The concluding chapter in the book, written by Roger K. Moore, Professor of Computer Science at the University of Sheffield and the Editor-in-Chief of Computer Speech and Language, takes us back to the future where the first chapter began. He argues that we have a long way to go to meet the goal of intelligent communication with machines. We need to build robots that understand human behavior and are capable of generating the language necessary to change it. Moore essentially focuses this chapter on his own view of the major conflict of opinion and approach separating the experts featured in this book. It will never be enough to simply tack on a multi-purpose language module to an existing robot, he argues. Spoken communication is a fundamental and integral capability of ordinary human beings, and if we are to succeed in our goal of constructing "intelligent communicative machines," we need to make the communication function a central part of the machine design and not simply an add-on function. There is a major gulf between Moore's argument and the experts quoted by Dufty back in the third chapter (page 55). Dufty quotes Sylvia Solon, deputy editor of Wired in the UK, as saying, "There is no point making robots look and act like humans." Dufty adds that Martin Robbins, who blogs for the Guardian under the name "The Lay Scientist," wrote: "Humanoid robots are seen as the future, but for almost any specific task you can think of, a focused simple design will work better." Of course, Dufty goes on to tell us about the use of robots in art and entertainment and about the workings of the Philip K. Dick android, so he may not agree entirely with Solon and Robbins. There is a cogent case on the opposite shore of this gulf in at least three of the other chapters (4, 7, and 9). In Chapter 4, Mutlu's group at the University of Wisconsin describe the simple semantic grammar used by their robot (a Wakamaru), along with more sophisticated dialogue and behavior models and algorithms to control gaze and gestures. Jonathan Connell begins Chapter 7 with "Suppose you buy a general fetch-and-carry robot from Sears and take it home" and continues in this vein. In Chapter 9, Wolf and Bugmann designed their system to operate on components made of production rules, but they still have a lot to teach us. Personally, I am on a raft drifting in the middle of this gulf. I think that we need thoughtful experiments with existing robots now, to help us discover the problems, but I believe that really satisfactory solutions will require much more research. I cannot agree with Moore, however, when he argues that it is unethical to attempt to imitate human communication by stitching together the limited technologies we have today. It seems to me that much of what we now know about human communication comes from attempting to do just that. In the process of reading this book I discovered two other collections, edited jointly by Neustein and Markowitz and published by Springer in 2013. Dr. Amy Neustein is founder and CEO of Linguistic Technology Systems and Editor-in-Chief of the International Journal of Speech Technology.
The first is entitled Mobile Speech and Advanced Natural Language Solutions (hardbound ISBN 978-1-4614-6017-6; e-book ISBN 978-1-4614-6018-3); the second, Where Humans Meet Machines: Innovative Solutions for Knotty Natural Language Problems (ISBN 978-1-4614-6933-9; e-book ISBN 978-1-4614-6934-6).

References
K. Bhatt, S. Argamon, and M. W. Evens. 2004. Hedged responses and expressions of affect in human/human and human/computer tutorial interactions. In Proceedings of COGSCI.
B. Reeves and C. Nass. 1996. The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places.
Adam Kilgarriff

Roger Evans∗
University of Brighton

A long time ago now (maybe 1988?), Gerald (Gazdar) and I supervised Adam's DPhil at the University of Sussex. Adam was my age, give or take a year, having come to academia a little late, and was my first doctoral student. Adam's topic was polysemy, and I'm not really sure that much supervision was actually required, though I recall fun exchanges trying to model the subtleties of word meaning using symbolic knowledge representation techniques—an experience that was clearly enough to convince Adam later that this was a bad idea. In fact, Adam's thesis title itself was Polysemy. Much as we encourage short thesis titles, pulling off the one-word title is a tall order, requiring a unique combination of focus and coverage, breadth and depth, and, most of all, authority. Adam completely nailed it, at least from the perspective of the pre-empirical Computational Linguistics of the early 1990s. Three years later, after a spell working for dictionary publishers, Adam joined me as a research fellow, now at the University of Brighton. I had a project to explore the automatic enrichment of lexical databases to support the latest trends in language analysis, and, in particular, task-specific lexical resources. I was really pleased and excited to recruit Adam—he had lost none of his intellectual independence, a quality I particularly valued. Within a few weeks he came to me with his own plan for the research—a "detour," as he put it, from the original workplan. I still have the e-mail, dated 6 April 1995, in which he proposed that, instead of chasing a prescriptive notion of a single lexical resource that needed to be customized to each domain, we should let the domain determine the lexicon, providing lexicographic tools to explore words, and particularly word senses, that were significant for that domain. In that e-mail, Computational Lexicography at Brighton was born. Over the next eight years or so, Computational Lexicography became a key part of our group's success, increasingly under Adam's direct leadership. The key project, WASPS, developed the WASPbench—the direct precursor of the Sketch Engine, recruiting David (Tugwell) to the team. In addition, Adam was one of the founding organizers of SENSEVAL, an initiative to bring international teams of researchers together to work in friendly competition on a pre-determined word sense disambiguation task (and which has now transformed into SEMEVAL). Together we secured funding to support the first two rounds of SENSEVAL; each round required the preparation of standardized data sets, guided by Adam's highly tuned intuitions about lexical data preparation and management. And we engaged somewhat in the European funding merry-go-round, most fondly in the CONCEDE project, working on dictionaries for Central European languages with amazing teams from the MULTEXT-EAST consortium, and with Georgian and German colleagues in the GREG project.

∗ E-mail: R.P.Evans@brighton.ac.uk; Twitter: @rogerevansbton.
But Adam was not entirely comfortable in academia, or at least not in a version of academia that didn't share his drive for the practical as well as the theoretical. He didn't have tenure, nor any clear route to achieve tenure, which meant that he could not apply for and hold grants in his own right (although he freely admitted he was happy not to have the associated administrative responsibilities); he set up a high quality masters program in Computational Lexicography, which ran for a couple of years, but the funding model didn't really work, and it quickly evolved into the highly successful, but independent, Lexicom workshop series, still running today; and he couldn't engage university support for developing the WASPbench as a commercial product. So in 2003, he spread his wings, left the university, and set up Lexical Computing Ltd. For many people, Lexical Computing and the Sketch Engine are what Adam is best known for. He spent eleven years tirelessly developing the company, the software, the methodology, the resources, the discipline. It was an environment in which he seemed completely at ease, sometimes the shameless promoter of his wares, sometimes the astute academic authority, often the source of practical solutions to real problems, and the instigator of new initiatives, and always the generous facilitator, educator, and friend. For me personally, though, this was a time when our friendship was more prominent than our professional relationship. We would meet for the odd drink, usually in the Constant Service pub (Adam's favorite), and chat about life, family, sometimes work, and occasional schemes for new collaborations, though the company didn't leave him very much time for that. It was one of those relaxed, undemanding friendships that just picks up whenever and wherever we find the time to meet, but remains strong nevertheless. Adam's illness was as unexpected to him as to anyone. Over the summer of 2014, he was making plans for new directions and projects. And then, there was a brief hiatus in communication before we heard the news in early November. And yet, even then, he seemed reconciled—not resigned, but resolved, calm, dignified. I was upset, angry, helpless—useless really, and feeling very selfish in my distress. I saw Adam three times after he became ill and they are all good, strong memories, and that is more to his credit than mine. The first was in his kitchen, with early spring sunshine, drinking strong coffee he had made very meticulously, watching the winter birds scavenging in the garden, just chatting about nothing in particular, and gossiping about work for a couple of hours. The second was a surprise trip to the pub—the surprise being that Adam was strong enough to get there (and back) on his own, and drink a couple of pints, too. We went to the Constant Service, as always, and it was one of our occasional "NLP group" outings, so a good crowd was there. The third was back in his kitchen, this time for work a few weeks later. Ironically, the university system that struggled to engage with Adam's practical drive is now fully signed up to demonstrating the "impact" of its research. Adam's work on Computational Lexicography at Brighton and afterwards through Lexical Computing, featured as an "Impact Case Study" in recent national evaluations, has subsequently been selected for a wider national initiative showcasing UK Computer Science research, currently in development (http://cs-academic-impact.uk).
Adam was happy to cooperate with this, in part to alleviate boredom, and we arranged a Skype call with a technical author for the initiative from his kitchen. Adam was in excellent form describing his work, his passion, and still full of ideas for gentle academic engagement if his "retirement" would allow it. Shortly after that meeting we heard the news of Adam's relapse and decision not to continue treatment. Like everyone else, I followed his blog, and also emailed a little privately. I arranged to go and visit again, but Adam wasn't well enough so we cancelled. Like everyone else, I waited for the inevitable blog post. Adam's funeral was in a modest church in the village of Rottingdean just along the coast from Brighton. A beautiful setting and a sunny afternoon. The church was absolutely packed—standing room only—we estimate about 250 people; family, friends, and colleagues from far and wide. Adam was a committed atheist, and the service focused on fond memories of him from those closest to him, with just one hymn, Immortal Invisible, as all his blog readers will understand. A beautiful and fitting farewell to a man who, it seems, was to everyone a friend first, and a colleague, boss, or antagonist, second. There have been many comments on Adam's blog, on Twitter, and in academic forums, which say much more, and say it much better, than I can. Some have said that Adam will be remembered for the Sketch Engine and the amazing data resources that have been built up around it. I would say that his real legacy is much more deeply intellectual than that. Adam would probably smile with satisfaction that the two things can coexist so comfortably—a rare combination of the intellectual and practitioner, a real giant of the field. Rest now in peace, Adam.

Roger
Brighton, June 2015
1 Early Machine Translation in China

The history of machine translation (MT) in China dates back to 1956. At that time the young country was engaged in immense construction projects to recover from what had been ruined in the war; nevertheless, the government recognized the significance of machine translation and started to explore this area, making China the fourth country to do so, after the United States, the United Kingdom, and the Soviet Union. In 1959, Russian–Chinese machine translation was demonstrated on a Type-104 general-purpose computer made in China.
This first MT system had a dictionary of 2,030 entries and 29 groups of rules for lexical analysis. Programmed in machine instructions, the system was able to translate nine different types of sentences. It used punched tape as the input, and the output was a special kind of code for Chinese characters, since there was no Chinese character output device at the time. As the pioneer in Chinese MT, the system touched on the issues of word sense disambiguation and word reordering, and proposed the ideas of predicate-focused sentence analysis and of a pivot language for multilingual translation. In the same year, machine translation research at the Harbin Institute of Technology (HIT) was started by Prof. Zhen Wang (and later Prof. Kaizhu Wang), with a group focusing on Russian–Chinese MT. The pursuit of MT has never halted since these forerunners.

2 The CEMT Series

In 1960, I was admitted to HIT. Five years later, I graduated and became a faculty member in the computer department of HIT, which was probably the first computer discipline among Chinese universities. I started my research, however, not from machine translation but from information retrieval (IR). I was fully occupied by how to effectively store books and documents on computers, and then retrieve them quickly and accurately. My research in MT actually began, incidentally, with an IR problem. At that time, Ming Zhou was my Ph.D. student. He is now the principal researcher of Natural Language Computing at Microsoft Research Asia (MSRA), and many of you may be acquainted with him. In 1985, at the beginning of his graduate study, he was aiming to address the topic of word extraction for Chinese documents to boost IR performance. For an exhaustive survey, Ming went to Beijing from Harbin alone, and buried himself at the National Library for over a month. He came back disappointed, finding that the related work consisted only of language-dependent solutions for English. Actually, many research directions encountered this problem at that time. That is why Ming and I decided to develop an MT system through which we could first translate Chinese materials into English, so as to take advantage of the solutions proposed for English, and finally translate the results back into Chinese, if necessary. In those years, translation from Chinese to other languages was less studied in China. Everything was hard in the beginning. We had to build everything from scratch, such as collecting and inputting each entry of the translation dictionary. Fortunately, we were not alone. I came to know many peer scholars, including Prof. Weitian Wu, Zhiwei Feng, Prof. Zhendong Dong, Prof. Shiwen Yu, and Prof. Changning Huang, as well as Dr. Zhaoxiong Chen. Although we didn't work together, we could always learn from each other and inspire each other in MT research. After three years' effort, we accomplished a rule-based MT system named CEMT-I (Li et al. 1988). It ran on an IBM PC XT (https://en.wikipedia.org/wiki/IBM_Personal_Computer_XT) and was capable of translating eight kinds of Chinese sentence patterns with fewer than one thousand rules. It had a dictionary of 30,000 Chinese–English entries. Simple or even crude as it now seems, it really encouraged every member of our team. After that, we developed CEMT-II (Zhou et al. 1990) and CEMT-III (Zhao, Li, and Zhang 1995) successively. The CEMT series seemed to have a special kind of magic.
Almost all the students who participated in these projects devoted themselves to machine translation in their subsequent careers, including Ming Zhou, Min Zhang, and Tiejun Zhao.

3 DEAR and BT863

Inspired by the success of the CEMT series, we also developed a computer-aided translation system called "DEAR." DEAR was put on the market via a software chain store. Although it did not sell much, it was our first effort to commercialize MT technology. I still remember how excited I was when I saw DEAR placed on the shelves for the first time. Today, it still reminds me that research work cannot just stay in the lab. Also in the 1980s, China's NLP field was marked by a milestone event: the establishment of the Chinese Information Processing Society of China (CIPS). From then on, NLP researchers throughout the country have been connected and academic exchange has been facilitated at the national scale. It was far beyond my imagination then that, thirty years later, I would have the honor to be the president of this society, leading it to keep contributing to the development of world-level NLP technology. I usually regard the series of MT systems that we developed as a large family. In 1994, BT863 joined this family with some new features (Zhao, Li, and Wang 1995; Wang et al. 1997). First, BT863 offered bidirectional translation between Chinese and English under a uniform architecture. Second, in addition to the rules, it was augmented with examples and templates learned from a corpus. Finally, this system is remembered for its top performance in the early national MT evaluation organized by the 863 High Tech Program of China.

4 Syntactic and Semantic Parsing

Time passed quickly. The rise of the Internet made communication more convenient, and our research was gradually connected with international peers. We concentrated on the mining and accumulation of bilingual and multilingual corpora. We explored how to integrate rule-based and example-based MT models under a unifying statistical framework. However, as more and more work was conducted, I found it increasingly difficult to go deeper. I began to realize that translation problems cannot be solved by translation methods alone. From word segmentation, morphology, and word meaning to named entities, syntax, and semantics, every step in this procedure affects the quality of translation. I remember an interesting story. One day, my student Wanxiang Che input his name into our machine translation system. The system literally translated his name into 'thousands of cars flying in the sky'. This was rated as the joke of the year in my lab, but the underlying problem is worth pondering. Traditional Chinese medicine advocates the treatment of both symptoms and root causes. The same principle applies to MT research, in which models for word alignment, decoding, reordering, and so forth can solve the surface problems of machine translation, whereas understanding word sense, sentence structure, and semantics is the solution to the fundamental problems. We therefore carried out research on syntactic analysis, including phrase-structure parsing and dependency parsing. In those days, dependency parsing of Chinese was not widely studied. There was no well-accepted annotation standard, nor a standard for transforming existing phrase-structure treebanks.
Therefore, we consulted a large number of linguistic studies, developed a Chinese syntactic dependency annotation standard, and annotated a 50,000-sentence Chinese syntactic dependency treebank on this basis. This is the largest Chinese dependency treebank available. Unlike treebanks transformed from phrase-structure treebanks, ours uses native dependency annotation, which can handle many grammatical phenomena specific to dependency structures. This treebank has been released by the Linguistic Data Consortium (LDC) (Che, Li, and Liu 2012). We hope that more researchers can benefit from it. Building on syntactic parsing, we hoped to further explore the semantic structure and relationships of sentences. Therefore, we carried out research on semantic role labeling, working on methods based on the tree kernel, including the hybrid convolution tree kernel (Che et al. 2008) and the grammar-driven tree kernel (Zhang et al. 2007). In addition, we broadened our scope and tried to analyze the semantics of Chinese directly. We proposed semantic dependency parsing tasks that directly establish semantic-level dependencies between content words, ignoring auxiliaries and prepositions. Meanwhile, we relaxed the tree-structure constraint, allowing one word to depend on more than one parent node, so as to form semantic dependency graphs. To date, the semantic dependency treebank that we have annotated has grown to more than 30,000 sentences, and much ongoing research is based on these data. Figure 1 shows an example of syntactic dependency parsing, semantic role labeling, and semantic dependency parsing for the input sentence “现在 / 她 / 脸色 / 难看 /, / 好像 / 病了 /。 [Now she looks terrible, seems to be sick]”.","5 ltp and ltp-cloud :Every summer, HIT and MSRA would jointly organize a summer school for NLP research students. We invited domestic and foreign experts to give lectures to Chinese students engaged in this field. Because the summer school was free, students from all over the country came together every year, listening to lectures and conducting experiments. When I communicated with these students, I found that many of them came from labs that lacked fundamental NLP tools, such as word segmenters, part-of-speech taggers, and syntactic parsers. It would have been very difficult for them to implement their research ideas without these tools. I felt bad when I saw that. They are all students with dreams and innovative ideas. We must create a level playing field for all of them, I thought. After coming back from the summer school, I met Ting Liu. He is a strong supporter of the idea of sharing. We decided to release an open-source NLP system: the Language Technology Platform (LTP). This platform integrates several basic Chinese NLP technologies, including Chinese word segmentation, part-of-speech tagging, named entity recognition, dependency parsing, and semantic role labeling, and it has made great contributions to the development of further applications. In recent years, we realized that cloud computing and the mobile Internet have brought great opportunities and challenges to the NLP field. Therefore, in 2013 we developed LTP-cloud (http://www.ltp-cloud.com/demo/), which provides accurate and fast NLP services via the Internet. Currently, the number of registered LTP-cloud users has exceeded 3,000, and most of them are NLP beginners. As I had wished, they no longer need to build a basic NLP processing system from scratch for their research.
Every time I see the thank-you notes to LTP and LTP-cloud in the acknowledgments of their papers, I am proud and grateful.","6 machine translation on the internet :As more and more papers were published in top conferences and journals, our lab made a name in the academic world. Many people in the lab were satisfied, but I felt differently, since publishing papers should not be the major objective of research. New models and techniques should be applied to solve real-world problems and improve people's daily lives. This is particularly true now that we have moved into the era of the Internet, in which many new concepts and ideas have come into being, such as big data and cloud computing. In this new era, machine translation research should no longer be restricted to the lab, running experiments on a small parallel corpus. Instead, it should embrace the Internet, and embrace big data. We paid great attention to cooperation with IT and Internet companies. We established a joint lab with MSRA right after it was founded. Since then, we have also established joint labs with other companies, such as IBM, Baidu, and Tencent. My student Haifeng Wang is the vice president of Baidu, in charge of NLP research and development as well as Web search. We decided to collaborate on MT shortly after he joined Baidu, since Baidu could provide a huge platform for us to verify our ideas. Together with Tsinghua University, Zhejiang University, the Institute of Computing Technology, and the Institute of Automation of the Chinese Academy of Sciences, we successfully applied for an 863 project titled “Machine Translation on the Internet.” All the members participating in this project have great passion for MT technologies and products. Chinese people embrace the principle that “取之于民,用之于民” [what is taken from the people should be used for the interests of the people]. Internet-based machine translation also follows this principle: it mines a large volume of translation data from the Internet, trains the translation model, and then provides high-quality services for Internet users. In our online translation system, taking Chinese–English translation as an example, there are hundreds of millions of parallel sentence pairs for this language pair, filtered from billions of raw pairs collected from hundreds of billions of Web pages. So I should say our MT service is actually built upon the whole Internet. We have designed various mining models for these heterogeneous Internet data sources, including bilingual parallel pages, bilingual comparable pages, Web pages containing aligned sentence pairs, and plain texts containing entity and terminology translations. The mined translation data are filtered and refined, and we set different update frequencies for different Web sites to guarantee that the latest data are included. I often inspect the mined translation data myself, and I find plenty of wonderful translations produced by ordinary Internet users; their wisdom is integrated into the translation system. But how to make use of such a big corpus? This is a sweet annoyance. To handle big data, we have developed fast training and parallel decoding techniques in our project. With such big data and frequent updates, even Internet buzzwords can be correctly translated. My students often post the so-called “magic translations” on microblogs.
After the machine translation service came online, I began to realize that it would influence not only the Ph.D. students who are reading and writing research papers, or the businessmen who are studying materials from foreign countries; it also makes a huge difference in ordinary people's lives. Figure 2 shows some examples of Chinese–English machine translation from the Baidu online translation service (http://fanyi.baidu.com/), which integrates the research work of the 863 project “Machine Translation on the Internet.” I once met a 50-year-old Chinese lady on a flight to Japan. She could not speak Japanese, but she had finally decided to marry her Japanese husband, with whom she had chatted online using machine translation. Another story comes from my neighbors, a couple my age whose children have lived in Germany for years. The first time the old couple met their grandson, when the family came back to China, they were thrilled. However, the grandson could speak only German, and they had no way to express their love, which made them sad. The grandmother blamed herself and even wept when she was alone. At my recommendation, they started to use an online speech translation app on their smartphones. Now, they can finally talk to their grandson.","7 integration of mt models :I have been working in machine translation for several decades, going through almost all the streams of technology: rule-based MT (RBMT) models at the very beginning, example-based MT (EBMT) methods, statistical MT (SMT) methods, and today's research hotspot, neural machine translation (NMT). Actually, we tried neural network–based models on NLP tasks, such as dialogue act analysis and word sense disambiguation, more than 15 years ago (Wang, Gao, and Li 1999a, 1999b). It is today's big data and computing power that help neural network–based models significantly outperform traditional ones. I know that every method has its advantages and disadvantages. Although a new model and its methodology may surpass the old ones overall, that does not mean the old methods are useless. There is an old saying in Chinese, “the silly bear keeps picking corn”: when a bear steals corn from a peasant's field, it always throws away the ear in its hand when it picks a new one, so the silly bear always ends up with only one ear of corn. I hoped that my team and I would not become the “silly bears.” Therefore, when we decided to develop an Internet MT system, we all agreed that we needed a hybrid approach with which we could integrate all the translation models and subsystems, on each of which we had spent great effort. It is just like an orchestra, in which all the instruments, such as the piano, violin, cello, and trumpet, are arranged perfectly together; only in this way can the orchestra give a wonderful performance. As shown in Figure 3, in our MT system today, different models work together. The rule-based method is used to translate expressions such as dates, times, and numbers. The example-based method is applied to translate buzzwords, especially newly emerging Internet expressions. Complicated long sentences are translated using the syntax-based statistical model, while sentences that can be covered by a predefined vocabulary are translated with an NMT model. Finally, the remaining sentences are translated with a classical SMT model.
The conductor of this orchestra is a discriminative distributing module, which decides to which subsystem an input sentence should be routed, based on a variety of statistical and linguistic features.","8 translation for resource-poor languages :Shortly after the release of the Chinese–English and English–Chinese translation services, we also released translation services between Chinese and Japanese, Korean, and other frequently used foreign languages. However, as the translation directions expanded, users' expectations for translation between resource-poor languages grew higher and higher. Especially in recent years, China has been doing business more and more frequently with countries such as Thailand and Portugal, and the destinations of Chinese tourists have become more diverse. One of my friends told me a story after he came back from a tour of Southeast Asia. Since he could neither speak nor read the local language, he could not communicate with the waiters or even read the menu, and he ended up ordering three kinds of salad in one restaurant. Incidents like these told us that solving the translation problem for resource-poor languages is urgent. Therefore, we have successively released translation services between Chinese and over 20 foreign languages. Now, we cover the languages of eight of the top ten destinations for Chinese tourists, and of all the top ten foreign cities where Chinese tourists spend the most money. On this basis, we took a further step and built translation systems between any two languages using the pivot approach (Wang, Wu, and Liu 2006; Wu and Wang 2007). For resource-poor language pairs, we use English or Chinese as the pivot language. Translation models are trained for source–pivot and pivot–target, respectively, and then combined to form the translation model from the source to the target language. Using this approach, the Baidu online translation services successfully realized pairwise translation between any two of 27 languages: 702 translation directions in total (27 × 26 ordered pairs)."
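To make the combination step concrete, here is a minimal sketch of the triangulation idea behind the pivot approach: a source-to-target phrase table is obtained by marginalizing over pivot phrases, p(t|s) = Σ_p p(p|s) · p(t|p). This is only an illustration of the published idea (Wu and Wang 2007), not Baidu's production code, and the toy phrase-table entries are invented for the example.

    from collections import defaultdict

    def triangulate(src2pivot, pivot2tgt):
        # Combine source->pivot and pivot->target phrase tables into a
        # source->target table: p(t|s) = sum over pivot p of p(p|s) * p(t|p).
        # Both inputs map a phrase to a dict {translation: probability}.
        src2tgt = defaultdict(lambda: defaultdict(float))
        for s, pivots in src2pivot.items():
            for p, p_ps in pivots.items():
                for t, p_tp in pivot2tgt.get(p, {}).items():
                    src2tgt[s][t] += p_ps * p_tp
        return src2tgt

    # Toy usage with invented entries (Chinese -> English -> Thai):
    zh_en = {"你好": {"hello": 0.9, "hi": 0.1}}
    en_th = {"hello": {"สวัสดี": 1.0}, "hi": {"สวัสดี": 1.0}}
    print(dict(triangulate(zh_en, en_th)["你好"]))  # {'สวัสดี': 1.0}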
Zhao"", ""Tiejun"", ""Sheng Li"", ""Min Zhang""], ""title"": ""CEMT-III: A fully automatic"", ""venue"": ""Proceedings of the NLPPRS,"", ""year"": 1995}, {""authors"": [""Zhou"", ""Ming"", ""Sheng Li"", ""Mingzeng Hu"", ""Shi Miao.""], ""title"": ""An interactive Chinese\u2013English machine translation system: CEMT-II"", ""venue"": ""Journal of the China Society for Scientific and"", ""year"": 1990}]",,,,,,,,,,"9 mt methodology for other areas :“他山之石,可以攻玉” [Stones from other hills may serve to polish the jade at hand]. This is a Chinese old saying from “《诗经》” [The Book of Songs], which was written 2,500 years ago. It suggests that one may benefit from other people’s opinions and methods for their task. Machine translation technology is now a “stone from another hill,” which has been used in many other areas. For instance, some researchers recast paraphrasing as a monolingual translation problem and use MT models to generate paraphrases of the input sentences (Zhao et al. 2009, 2010). There are also researchers who regard query reformulation as the translation from the original query to the rewritten one (Riezler and Liu 2010). However, what interests me the most is the encounter between translation technology and Chinese traditional culture. For example, MSRA uses the translation model to automatically generate couplets (Jiang and Zhou 2008), which are posted on the doors of every house during Chinese New Year. Baidu applies translation methods to compose poems. Given a picture and the first line of a poem, the system can generate another three lines of the poem that describe the content of the picture. In addition, I have heard recently that both Microsoft and Baidu have released their chatting robots, which are named Microsoft XiaoIce and Baidu Xiaodu, respectively. They both use translation techniques in the searching and generation of chatting responses. It is fair to say that machine translation has become more than a specific method. Instead, it has evolved into a methodology and could make a contribution to other similar or related areas.",,,,,,,,,,,"10 conclusion :There is an ancient story in China called “愚公移山” [Yugong moves the mountain]. In the story, an old man called Yugong—meaning an unwise man—lived in a mountain area. He decided to build a road to the outside world by moving two huge mountains away. Other people all thought it was impossible and laughed at him. However, Yugong said to the people calmly: “Even if I die, I have children; and my children would have children in the future. As the mountain wouldn’t grow, we would move the mountain away eventually.” Today, when facing the ambitious goal of automatic high-quality machine translation, and even the whole NLP field, I cannot help thinking of Yugong’s spirit. I have been, and I still am, trying to solve the questions and obstacles along the way. Even if one day I will no longer be able to keep exploring MT, I believe that the younger generations will keep on going until the dream of making a computer truly understand languages eventually comes true. My friends, especially the young ones, to share what I have learned from my career, I’d like to say: Make yourself a good translation system: Input diligence today, and it will definitely translate into an amazing tomorrow!",,,,,,,,,,,,,,,,,,,,,,,,,,,,,"1 early machine translation in china :The history of machine translation (MT) in China dates back to 1956. At that time the new country had immense construction projects to recover what had been ruined in the war. 
"1 introduction :Much of statistical NLP research relies on some sort of manually annotated corpus to train models, but annotated resources are extremely expensive to build, especially on a large scale. The creation of treebanks is a prime example (Marcus, Santorini, and Marcinkiewicz 1993). However, the linguistic theories motivating these annotation efforts are often heavily debated, and as a result there often exist multiple corpora for the same task with vastly different and incompatible annotation philosophies. For example, there are several treebanks for English, including the Chomskian-style Penn Treebank (Marcus, Santorini, and Marcinkiewicz 1993), the HPSG LinGO Redwoods Treebank (Oepen et al. 2002), and a smaller dependency treebank (Buchholz and Marsi 2006). From the perspective of resource accumulation, this seems a waste of human effort. (Different annotated corpora for the same task do facilitate the comparison of linguistic theories; from this perspective, having multiple standards is not necessarily a waste but rather a blessing, because it is a necessary phase in coming to a consensus, if there is one.) A second, related problem is that the raw texts are also drawn from different domains, which for the above example range from financial news (Penn Treebank/Wall Street Journal) to transcribed dialog (LinGO). It would be nice if a system could be automatically ported from one set of guidelines and/or one domain to another, in order to exploit a much larger data set. The second problem, domain adaptation, is very well studied (e.g., Blitzer, McDonald, and Pereira 2006; Daumé III 2007). This work focuses on the widespread and equally important problem of annotation adaptation: bridging the divergence between different annotation guidelines in order to integrate the linguistic knowledge in corpora with incompatible annotation formats. In this article, we describe the problem of annotation adaptation and the intrinsic principles of its solutions, and present a series of successively improved concrete models whose goal is to transfer the annotations of one corpus (the source corpus) into the annotation format of another (the target corpus). The transfer classifier is the fundamental component of annotation adaptation algorithms. It learns the correspondence regularities between annotation guidelines from a parallel annotated corpus, that is, a corpus with two kinds of annotations for the same data. In the simplest model (Model 1), the source classifier trained on the source corpus gives its predictions to the transfer classifier trained on the parallel annotated corpus, so as to integrate the knowledge in the two corpora.
In a variant of the simplest model (Model 2), the transfer classifier is used to transform the annotations of the source corpus into the annotation format of the target corpus; the transformed source corpus and the target corpus are then merged to train a more accurate classifier. Based on the second model, we finally develop an optimized model (Model 3), in which two optimization strategies, iterative training and predict-self re-estimation, are integrated to further improve the effectiveness of annotation adaptation. We experiment on Chinese word segmentation and dependency parsing to test the efficacy of our methods. For word segmentation, the problem of incompatible annotation guidelines is one of the most glaring: no segmentation guideline has been widely accepted, owing to the lack of a clear definition of Chinese word morphology. For dependency parsing there also exist multiple disparate annotation guidelines; for example, the dependency relations extracted from a constituency treebank follow syntactic principles, whereas a semantic dependency treebank is annotated from a semantic perspective. The two corpora for word segmentation are the much larger People's Daily corpus (PD, 5.86M words) (Yu et al. 2001) and the smaller but more popular Penn Chinese Treebank (CTB, 0.47M words) (Xue et al. 2005). They follow very different segmentation guidelines; for example, as shown in Figure 1, PD breaks Vice-President into two words and combines the phrase visited-China into a compound, in contrast to the segmentation under the CTB guideline. It is preferable to transfer knowledge from PD to CTB because the latter also annotates tree structures, which are useful for downstream applications such as parsing, summarization, and machine translation, yet it is much smaller in size. For dependency parsing, we use the dependency treebank (DCTB) extracted from CTB according to the rules of Yamada and Matsumoto (2003), and the Semantic Dependency Treebank (SDT) built on a small part of the CTB text (Che et al. 2012). Compared with the automatically extracted dependencies in DCTB, the semantic dependencies in SDT reveal semantic relationships between words rather than syntactic ones. Figure 2 shows an example. Experiments on both word segmentation and dependency parsing show that annotation adaptation yields significant improvements over the baselines and achieves state-of-the-art performance with only local features. The rest of the article is organized as follows. Section 2 describes the problem of annotation adaptation. Section 3 briefly introduces the tasks of word segmentation and dependency parsing as well as their state-of-the-art models. In Section 4 we first describe the transfer classifier, which embodies the intrinsic principles of annotation adaptation, and then present the three successively enhanced models for the automatic adaptation of annotations.
After the description of the experimental results in Section 5 and the discussion of application scenarios in Section 6, we give a brief review of related work in Section 7 and draw conclusions in Section 8.","2 automatic annotation adaptation :We define annotation adaptation as the task of automatically bridging the divergence between different annotation guidelines. Statistical models can be designed to learn the correspondence between two annotation guidelines in order to transform a corpus from one guideline to another. From this point of view, annotation adaptation can be seen as a special case of transfer learning. Through annotation adaptation, the linguistic knowledge in different corpora is integrated, resulting in enhanced NLP systems without complicated models and features. Much research has considered the problem of domain adaptation (Blitzer, McDonald, and Pereira 2006; Daumé III 2007), which can also be seen as a special case of transfer learning. It aims to adapt models trained on one domain (e.g., chemistry) to work well on other domains (e.g., medicine). Despite superficial similarities between domain adaptation and annotation adaptation, we argue that the underlying problems are quite different. Domain adaptation assumes that the labeling guidelines are preserved between the two domains (for example, an adjective is always labeled JJ whether it comes from the Wall Street Journal (WSJ) or a biomedical text); only the distributions differ (for example, the word control is most likely a verb in WSJ but often a noun in biomedical texts, as in control experiment). Annotation adaptation, however, tackles the problem where the guideline itself changes: one treebank might distinguish between transitive and intransitive verbs while merging the different noun types (NN, NNS, etc.), or one treebank (PTB) might be much flatter than another (LinGO), not to mention the fundamental disparities between their underlying linguistic representations (CFG vs. HPSG). A more formal description allows us to make these claims more precise. Let X be the data and Y the annotation. Annotation adaptation can be understood as a change of P(Y) due to a change in annotation guidelines while P(X) remains constant: we want to change the annotations of the data from one guideline to another, leaving the data itself unchanged. In domain adaptation, by contrast, P(X) changes, while P(Y) is assumed to be constant. The word assumed is important here: the distributions P(Y, X) and P(Y|X) do in fact change, because P(X) changes. Domain adaptation aims to make a model adapt better to a different domain under the same annotation guidelines. On this analysis, annotation adaptation is motivated more from a linguistic (rather than statistical) point of view, and tackles a serious problem fundamentally different from domain adaptation, itself a serious problem (often leading to >10% loss in accuracy). More interestingly, annotation adaptation, which makes no assumptions about the distributions, can be applied simultaneously to both domain and annotation adaptation problems, which is very appealing in practice because the latter problem often implies the former.","3 case studies: word segmentation and dependency parsing :In many Asian languages there are no explicit word boundaries; word segmentation is thus a fundamental task for the processing and understanding of these languages.
Given a sentence as a sequence of n characters, x = x_1 x_2 … x_n, where x_i is a character, word segmentation aims to split the sequence into m (m ≤ n) words, x_{1:e_1} x_{e_1+1:e_2} … x_{e_{m-1}+1:e_m}, where each subsequence x_{i:j} indicates a Chinese word spanning characters x_i to x_j. Word segmentation can be formalized as a sequence labeling problem (Xue and Shen 2003), where each character in the sentence is given a boundary tag representing its position in a word. Following Ng and Low (2004), joint word segmentation and part-of-speech (POS) tagging can also be solved with a character classification approach by extending the boundary tags to include POS information. For word segmentation we adopt the four boundary tags of Ng and Low (2004): B, M, and E mean the beginning, the middle, and the end of a word, respectively, and S indicates a single-character word. The word segmentation result is generated by splitting the labeled character sequence into subsequences matching the pattern S or BM*E, indicating single-character and multi-character words, respectively (a short illustrative sketch of this encoding is given below, after the parsing formulation). Given the character sequence x, the decoder finds the output ỹ that maximizes the score function:

ỹ = argmax_y f(x, y) · w = argmax_y Σ_{x_i ∈ x, y_i ∈ y} f(x_i, y_i) · w    (1)

where the function f maps (x, y) into a feature vector, w is the parameter vector produced by the training algorithm, and f(x, y) · w is the inner product of f(x, y) and w. The score of the sentence is thus factorized over the characters, where y_i is the classification label of character x_i. The training procedure of the perceptron learns a discriminative model mapping from inputs x to outputs y. Algorithm 1 shows the perceptron algorithm for tuning the parameter vector w; the "averaged parameters" technique (Collins 2002) is used for better performance.

Algorithm 1: Perceptron training algorithm.
  Input: training set C
  w ← 0
  for t ← 1..T do                    ▷ T iterations
    for (x, y) ∈ C do
      z̃ ← argmax_z f(x, z) · w
      if z̃ ≠ y then
        w ← w + f(x, y) − f(x, z̃)   ▷ update the parameters
  Output: parameters w

The feature templates of the classifier are shown in Table 1. The function Pu(·) returns true for a punctuation character and false otherwise; the function T(·) classifies a character into four types, number, date, English letter, and other, corresponding to the function values 1, 2, 3, and 4, respectively. Dependency parsing aims to link each word to its arguments so as to form a directed graph spanning the whole sentence. Normally, the directed graph is restricted to a dependency tree, in which each word depends on exactly one parent and every word finds its parent. Given a sentence as a sequence of n words, x = x_1 x_2 … x_n, dependency parsing finds a dependency tree y, where (i, j) ∈ y is an edge from the head word x_i to the modifier word x_j. The root r ∈ x of the tree y has no head word, and every other word j (j ∈ x, j ≠ r) depends on a head word i (i ∈ x, i ≠ j). For many languages, dependency structures are assumed to be projective: if x_j depends on x_i, then all the words between i and j must be directly or indirectly dependent on x_i. Therefore, if we put the words in their linear order, preceded by the root, all edges can be drawn above the words without crossing. We follow this constraint because the dependency treebanks in our experiments are projective. Following the edge-based factorization method (Eisner 1996), the score of a dependency tree can be factorized over the dependency edges of the tree.
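To make these formulations concrete, here are two minimal sketches in Python; they are our illustrations, not the authors' code, and the example words and feature names are invented. The first implements the B/M/E/S encoding and its inverse, which splits a labeled character sequence into subsequences matching S or BM*E:

    def words_to_tags(words):
        # Encode a segmented sentence as one B/M/E/S tag per character.
        tags = []
        for w in words:
            if len(w) == 1:
                tags.append("S")
            else:
                tags.extend(["B"] + ["M"] * (len(w) - 2) + ["E"])
        return tags

    def tags_to_words(chars, tags):
        # Decode a tagged character sequence back into words by cutting
        # after every S or E tag.
        words, buf = [], ""
        for c, t in zip(chars, tags):
            buf += c
            if t in ("S", "E"):   # a word ends here
                words.append(buf)
                buf = ""
        if buf:                   # tolerate a malformed tag sequence
            words.append(buf)
        return words

    words = ["副", "总理", "访华"]   # hypothetical segmentation
    tags = words_to_tags(words)      # ['S', 'B', 'E', 'B', 'E']
    assert tags_to_words("".join(words), tags) == words

The second sketches the edge-factorized tree score: the score of a candidate tree is the sum of its edge scores, each the dot product of a sparse feature vector with the weight vector:

    def edge_features(words, tags, head, mod):
        # A toy first-order feature template; the real templates combine
        # words/POS around both positions plus direction and distance.
        d = "R" if head < mod else "L"
        return [f"hw={words[head]}|mw={words[mod]}|{d}",
                f"ht={tags[head]}|mt={tags[mod]}|{d}"]

    def score_tree(words, tags, edges, weights):
        # edges: list of (head, modifier) index pairs; weights: sparse dict.
        return sum(weights.get(f, 0.0)
                   for h, m in edges
                   for f in edge_features(words, tags, h, m))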
The spanning tree method (McDonald, Crammer, and Pereira 2005) defines the score of a tree as the sum of the scores of all its edges, where the score of an edge is the inner product of its feature vector f and the weight vector w. Given a sentence x, the parsing procedure searches for the candidate dependency tree with the maximum score:

ỹ = argmax_y f(x, y) · w = argmax_y Σ_{(i,j) ∈ y} f(i, j) · w    (2)

The averaged perceptron algorithm is again used to train the parameter vector. A bottom-up dynamic programming algorithm searches for the candidate parse with the maximum score, as shown in Algorithm 2, where V[i, j] contains the candidate dependency fragments of the span [i, j].

Algorithm 2: Dependency parsing algorithm.
  Input: sentence x to be parsed
  for [i, j] ⊆ [1, |x|] in topological order do
    buf ← ∅
    for k ← i..j−1 do                          ▷ all partitions
      for l ∈ V[i, k] and r ∈ V[k+1, j] do
        insert DERIV(l, r) into buf
        insert DERIV(r, l) into buf
    V[i, j] ← best K in buf
  Output: the best of V[1, |x|]

  function DERIV(p, c)
    return p ∪ c ∪ {(p.root, c.root)}          ▷ new derivation

The feature templates are similar to those of the first-order MST model (McDonald, Crammer, and Pereira 2005): each feature is composed of words and POS tags surrounding word i and/or word j, together with an optional representation of the distance between the two words. Table 2 shows the feature templates without the distance representations.","4 models for automatic annotation adaptation :In this section, we present a series of discriminative learning algorithms for the automatic adaptation of annotation guidelines. To simplify the description, several shortened forms are adopted. We use source corpus to denote the corpus whose annotation guideline is not the one we require (the source side of the adaptation), and target corpus to denote the corpus with the desired guideline. Correspondingly, the annotation guidelines of the two corpora are called the source guideline and the target guideline, and the classifiers following the two guidelines are named the source classifier and the target classifier. Given a parallel annotated corpus, that is, a corpus labeled with two annotation guidelines, a transfer classifier can be trained to capture the regularity of the transformation from the source annotation to the target annotation. The classifiers mentioned here are ordinary discriminative classifiers that take a set of features as input and produce a classification label as output. For the POS tagging problem, the classification label is a POS tag; for the parsing task, it is a dependency edge, a constituency span, or a shift-reduce action. The parallel annotated corpus is the knowledge source of annotation adaptation, and its annotation quality and size determine the accuracy of the transfer classifier. Such a corpus is difficult to build manually, but a noisy one can be generated automatically from the source corpus and the target corpus: for example, applying the source classifier to the target corpus yields a parallel annotated corpus with noisy source annotations and accurate target annotations. The training procedure of the transfer classifier predicts the target annotations with guiding features extracted from the source annotations.
Training the transfer classifier with guiding features in this way can alleviate the effect of the noise in the source annotations, and allows the regularities of annotation adaptation to be learned accurately. By reducing the noise in the automatically generated parallel annotated corpus, a higher annotation adaptation accuracy can be achieved. In the following sections, we first describe the transfer classifier, which reveals the intrinsic principles of annotation adaptation, and then describe a series of successively enhanced models developed from our previous investigations (Jiang, Huang, and Liu 2009; Jiang et al. 2012). In the simplest model (Model 1), two classifiers, a source classifier and a transfer classifier, are used in a cascade. The classification results of the lower source classifier provide additional guiding features to the upper transfer classifier, yielding an improved classification result. A variant of the first model (Model 2) uses the transfer classifier to transform the source corpus from the source guideline to the target guideline first, and then merges the transformed source corpus into the target corpus in order to train an improved target classifier on the enlarged corpus. An optimized model (Model 3) is further proposed on the basis of Model 2. Two optimization strategies, iterative training and predict-self re-estimation, are adopted to improve the effectiveness of annotation adaptation, in order to fully utilize the knowledge in heterogeneous corpora. In order to learn the regularity of the adaptation from one annotation guideline to another, a parallel annotated corpus is needed to train the transfer classifier. The parallel annotated corpus is a corpus with two different annotation guidelines, the source guideline and the target guideline. With the target annotation labels as learning objectives and the source annotation labels as guiding information, the transfer classifier learns the statistical regularity of the adaptation from the source annotations to the target annotations. The training procedure of the transfer classifier is analogous to the training of a normal classifier, except for the introduction of additional guiding features. For word segmentation, the most intuitive guiding feature is the source annotation label itself. For dependency parsing, an effective guiding feature is the dependency path between the hypothesized head and modifier, as shown in Figure 3. However, we do not stop there; more specific features are introduced: A classification label or dependency path is attached to each feature of the baseline classifier to generate combined guiding features. This is similar to the feature design in discriminative dependency parsing (McDonald, Crammer, and Pereira 2005; McDonald and Pereira 2006), where the basic features, composed of words and POS tags in the context, are also conjoined with link direction and distance in order to generate more specific features. Table 3 shows an example of guiding features (as well as baseline features) for word segmentation, where “α = B” indicates that the source classification label of the current character is B, marking the beginning of a word. This combination strategy derives a series of specific features that help the transfer classifier produce more precise classifications. The parameter-tuning procedure of the transfer classifier automatically learns the regularity of using the source annotations to guide the classification decisions.
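As a minimal sketch of this combination strategy, mirroring the structure of Table 3, the following Python function conjoins every baseline feature with the source label; the function name and the "a=" encoding are illustrative assumptions, not the article's implementation.

def guiding_features(baseline_feats, source_label):
    """baseline_feats: the baseline feature strings for one character
    (the templates of Table 1); source_label: the B/M/E/S tag predicted
    for that character under the source guideline."""
    feats = ["a=" + source_label]                  # the source label itself
    for feat in baseline_feats:
        feats.append("a=" + source_label + "|" + feat)   # conjoined feature
    return feats

# For example, guiding_features(["C0=总", "C-1C0=副总"], "B") returns
# ["a=B", "a=B|C0=总", "a=B|C-1C0=副总"]: one combined feature per template.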
In decoding, if the current character shares some basic features in Table 3 and is classified as B under the source annotation, then the transfer classifier will probably classify it as M. In addition, the original features used in the normal classifier are retained in order to leverage the knowledge from the target annotations of the parallel annotated corpus, and the training procedure of the transfer classifier also learns the relative weights between the guiding features and the original features. Therefore, the knowledge from both the source annotations and the target annotations is automatically integrated, and higher and more stable prediction accuracy can be achieved. The most intuitive model for annotation adaptation uses two cascaded classifiers, the source classifier and the transfer classifier, to integrate the knowledge in corpora with different annotation guidelines. In the training procedure, a source classifier is trained on the source corpus and is used to process the target corpus, generating a parallel annotated corpus (albeit a noisy one). Then the transfer classifier is trained on this parallel annotated corpus, with the target annotations as the classification labels and the source annotations as guiding information. Figure 4 depicts the training pipeline. The best training iterations for the source classifier and the transfer classifier are determined on the development sets of the source corpus and the target corpus. In the decoding procedure, a sequence of characters (for word segmentation) or words (for dependency parsing) is input into the source classifier to obtain a classification result under the source guideline; it is then input into the transfer classifier, with this classification result as the guiding information, to get the final result following the target guideline. This coincides with the stacking method for combining dependency parsers (Martins et al. 2008; Nivre and McDonald 2008), and is also similar to the Pred baseline for domain adaptation (Daumé III and Marcu 2006; Daumé III 2007). Figure 5 shows the pipeline for decoding. The previous model has a drawback: It has to cascade two classifiers in decoding to integrate the knowledge in the two corpora, which seriously degrades the processing speed. Here we describe a variant of the previous model, aiming at automatic transformation (rather than integration, as in Model 1) between the annotation guidelines of human-annotated corpora. The source classifier and the transfer classifier are trained in the same way as in the previous model. The transfer classifier is then used to process the source corpus, with the source annotations as guiding information, so as to relabel the source corpus with the target annotation guideline. By merging the target corpus and the transformed source corpus for the training of the final classifier, improved classification accuracy can be achieved. From this point on, we describe the pipelines of annotation transformation in pseudo-code for simplicity and ease of extension. Algorithm 3 shows the overall training algorithm for this variant model. C_s and C_t denote the source corpus and the target corpus, and M_s and M_{s→t} denote the source classifier and the transfer classifier. C_p^q denotes the corpus p relabeled with annotation guideline q; for example, C_s^t is a corpus that labels the text of the source corpus with the target guideline.
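Before the pseudo-code of Algorithm 3, here is a minimal sketch of Model 1's two-stage decoding cascade (Figure 5); the classifier interface is a hypothetical stand-in for the perceptron decoders of Section 3.

def cascaded_decode(x, source_model, transfer_model):
    """x: an input character sequence (word segmentation) or word
    sequence (dependency parsing)."""
    # Stage 1: label x under the source guideline.
    source_result = source_model.decode(x)
    # Stage 2: decode again with the transfer classifier, feeding the
    # source-guideline result in as guiding information, to obtain the
    # final result under the target guideline.
    return transfer_model.decode(x, guide=source_result)

Model 2 removes exactly this runtime cascade: the transfer classifier is applied once, offline, to relabel the source corpus.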
Functions TRAIN and TRANSTRAIN train the source classifier and the transfer classifier, respectively; both invoke the perceptron algorithm, but with different feature sets. Functions ANNOTATE and TRANSANNOTATE call the function DECODE with different models (source/transfer classifiers), feature functions (without/with guiding features), and inputs (raw/source-annotated sentences). In the algorithm, the parameters corresponding to the development sets are omitted for simplicity. Compared to the online knowledge integration of the previous model, annotation transformation achieves improved performance in an offline manner, by integrating the corpora before the training procedure. This makes processing several times faster than the cascaded classifiers of the previous model. It has the further advantage that we can integrate the knowledge in more than two corpora without slowing down the final classifier.

Algorithm 3 Baseline annotation adaptation.
1: function ANNOTRANS(C_s, C_t)
2:   M_s ← TRAIN(C_s) ⊲ source classifier
3:   C_t^s ← ANNOTATE(M_s, C_t)
4:   M_{s→t} ← TRANSTRAIN(C_t^s, C_t) ⊲ transfer classifier
5:   C_s^t ← TRANSANNOTATE(M_{s→t}, C_s)
6:   C_*^t ← C_s^t ∪ C_t ⊲ integrated corpus with target guideline
7:   return C_*^t
8: end function
9: function DECODE(M, Φ, x)
10:   return argmax_{y ∈ GEN(x)} S(y | M, Φ, x)
11: end function

The training of the transfer classifier is based on an automatically generated (rather than a gold-standard) parallel annotated corpus, where the source annotations are provided by the source classifier. The performance of annotation transformation is therefore partly determined by the accuracy of the source classifier, and we can generate a more accurate parallel annotated corpus, and thus better annotation adaptation, if an improved source classifier can be obtained. On the basis of Model 2, two optimization strategies, iterative bidirectional training and the predict-self hypothesis, are introduced to optimize the parallel annotated corpora for better annotation adaptation. We first use an iterative training procedure to gradually improve the transformation accuracy by iteratively optimizing the parallel annotated corpora. In each training iteration, both source-to-target and target-to-source annotation transformations are performed, and the transformed corpora are used to provide better annotations for the parallel annotated corpora of the next iteration. In the new iteration, the better parallel annotated corpora then result in more accurate transfer classifiers, which in turn generate better transformed corpora. Algorithm 4 shows the overall procedure of the iterative training method. The loop of lines 6–13 iteratively performs source-to-target and target-to-source annotation transformations. The source annotations of the parallel annotated corpora, C_t^s and C_s^t, are initialized by applying the source and target classifiers to the target and source corpora, respectively (lines 2–5). In each training iteration, the transfer classifiers are trained on the current parallel annotated corpora (lines 7–8); they are then used to produce the transformed corpora (lines 9–10), which provide better annotations for the parallel annotated corpora of the next iteration. The iterative training terminates when the performance of the classifier trained on the merged corpus C_s^t ∪ C_t converges (line 13). The discriminative training of TRANSTRAIN predicts the target annotations with the guidance of the source annotations.
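A minimal sketch of the loop of Algorithm 4 in Python, assuming that train, trans_train, annotate, trans_annotate, and evaluate wrap the routines named in the pseudo-code, and that corpora are lists of sentences; corpus variables follow the C_p^q notation (e.g., c_s_t stands for C_s^t).

def iter_anno_trans(c_s, c_t, train, trans_train, annotate,
                    trans_annotate, evaluate, tol=1e-4):
    m_s, m_t = train(c_s), train(c_t)     # source / target classifiers
    c_t_s = annotate(m_s, c_t)            # C_t^s: target text, source guideline
    c_s_t = annotate(m_t, c_s)            # C_s^t: source text, target guideline
    prev_score = float("-inf")
    while True:
        m_st = trans_train(c_t_s, c_t)    # source-to-target transfer classifier
        m_ts = trans_train(c_s_t, c_s)    # target-to-source transfer classifier
        c_s_t = trans_annotate(m_st, c_s) # re-transform the source corpus
        c_t_s = trans_annotate(m_ts, c_t) # re-transform the target corpus
        m_star = train(c_s_t + c_t)       # classifier on the merged corpus
        score = evaluate(m_star)          # development-set accuracy
        if score - prev_score < tol:      # stop when EVAL(M*) converges
            return c_s_t + c_t
        prev_score = score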
In the first iteration, the transformed corpora generated by the transfer classifiers are better than the initial ones generated by the source and target classifiers, thanks to the assistance of the guiding features. In the following iterations, the transformed corpora provide better annotations for the parallel annotated corpora of the subsequent iteration; the transformation accuracy thus improves gradually, along with the optimization of the parallel annotated corpora, until convergence.

Algorithm 4 Iterative annotation transformation.
1: function ITERANNOTRANS(C_s, C_t)
2:   M_s ← TRAIN(C_s) ⊲ source classifier
3:   C_t^s ← ANNOTATE(M_s, C_t)
4:   M_t ← TRAIN(C_t) ⊲ target classifier
5:   C_s^t ← ANNOTATE(M_t, C_s)
6:   repeat
7:     M_{s→t} ← TRANSTRAIN(C_t^s, C_t) ⊲ source-to-target transfer classifier
8:     M_{t→s} ← TRANSTRAIN(C_s^t, C_s) ⊲ target-to-source transfer classifier
9:     C_s^t ← TRANSANNOTATE(M_{s→t}, C_s)
10:     C_t^s ← TRANSANNOTATE(M_{t→s}, C_t)
11:     C_*^t ← C_s^t ∪ C_t
12:     M_* ← TRAIN(C_*^t) ⊲ enhanced classifier trained on merged corpus
13:   until EVAL(M_*) converges
14:   return C_*^t
15: end function
16: function DECODE(M, Φ, x)
17:   return argmax_{y ∈ GEN(x)} S(y | M, Φ, x)
18: end function

The predict-self hypothesis is introduced to improve the transformation accuracy from another perspective. This hypothesis is implicit in many unsupervised learning approaches, such as Markov random fields; it has also been successfully used by Daumé III (2009) in unsupervised dependency parsing. The basic idea of predict-self is that if a prediction is a better candidate for an input, it should be easier to convert it back to the original input by a reverse procedure. Applied to annotation transformation, predict-self indicates that a better transformation candidate following the target guideline can be more easily transformed back to the original form following the source guideline. The most intuitive strategy for introducing the predict-self methodology into annotation transformation is to use a reversed annotation transformation procedure to filter out unreliable predictions of the previous transformation. In detail, a source-to-target annotation transformation procedure is performed on the source corpus to obtain a prediction that follows the target guideline; then a second, target-to-source transformation procedure is performed on this prediction to check whether it can be transformed back to the original source annotation. The source-to-target predictions that fail this reverse-verification step are discarded, so this strategy can be called predict-self filtering. A more sophisticated strategy can be called predict-self re-estimation. Instead of using the reversed transformation procedure for filtering, the re-estimation strategy integrates the scores given by the source-to-target and target-to-source annotation transformation models when evaluating the transformation candidates. By properly tuning the relative weights of the two transformation directions, better transformation performance is achieved. The scores of the two transformation models are weighted and integrated in a log-linear manner:

$$S^{+}(y \mid M_{s \to t}, M_{t \to s}, \Phi, x) = (1 - \lambda) \, S(y \mid M_{s \to t}, \Phi, x) + \lambda \, S(x \mid M_{t \to s}, \Phi, y) \tag{3}$$

The weight parameter λ is tuned on the development set.
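A minimal sketch of the re-estimated scoring of Equation (3); score is a hypothetical wrapper around S(y | M, Φ, x) as computed by a transfer classifier.

def reestimated_score(x, y, m_st, m_ts, score, lam):
    """x: input following the source guideline; y: a transformation
    candidate following the target guideline; lam: the interpolation
    weight, tuned on the development set (lam = 0 recovers Model 2)."""
    forward = score(y, m_st, x)   # S(y | M_{s->t}, Phi, x)
    reverse = score(x, m_ts, y)   # S(x | M_{t->s}, Phi, y): predict-self
    return (1.0 - lam) * forward + lam * reverse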
To integrate the predict-self re-estimation into the iterative transformation training, a reversed transformation model is introduced, and the enhanced scoring function is used whenever the function TRANSANNOTATE invokes the function DECODE.","5 experiments :To evaluate the performance of annotation adaptation, we experiment on two important NLP tasks, Chinese word segmentation and dependency parsing, both of which can be modeled as discriminative classification problems. For both tasks, we give the performance of the baseline models and of the annotation adaptation algorithms. We perform annotation adaptation for word segmentation from People’s Daily (PD) (Yu et al. 2001) to Penn Chinese Treebank 5.0 (CTB) (Xue et al. 2005). The two corpora are built according to different segmentation guidelines and differ largely in quantity of data: CTB is smaller, with about 0.5M words, whereas PD is much larger, containing nearly 6M words. Table 4 shows the data partitioning for the two corpora. We train the baseline perceptron classifiers for Chinese word segmentation on the training sets of CTB and SPD, using the corresponding development sets to determine the best training iterations. As a variant of Model 1, Model 2 shares the same transfer classifier and differs only in the training and decoding of the final classifier. Tables 8 and 9 show the performance of the systems resulting from Models 1 and 2, as well as of the classifiers trained on the directly merged corpora. The decoding time costs are also listed to facilitate practical comparison. We find that the simple corpus-merging strategy leads to a dramatic decrease in accuracy, due to the different and incompatible annotation guidelines. Model 1, the simplest model for annotation adaptation, gives significant improvement over the baseline classifiers for word segmentation and dependency parsing. This indicates that the statistical regularities for annotation adaptation learned by the transfer classifiers bring performance improvement by means of the guided decisions in the cascaded classifiers. Model 2 leads to classifiers with accuracy increments comparable to those of Model 1, while consuming only one third of the decoding time. This is inconsistent with our expectation. The strategy of directly transforming the source corpus to the target guideline also facilitates the utilization of more than one source corpus. We first introduce the iterative training strategy into Model 2. The corresponding development sets are used to determine the best training iterations for the iterative annotation transformations. After each iteration, we test the performance of the classifiers trained on the merged corpora. Figures 7 and 8 show the performance curves for Chinese word segmentation and semantic dependency parsing, respectively, with iterations ranging from 1 to 10. The performance of Model 2 is naturally included in the curves (located at iteration 1). The curves show that, for both segmentation and parsing, the accuracies of the classifiers trained on the merged corpora consistently improve in the earlier iterations (e.g., from iteration 2 to iteration 5 for word segmentation). Experiments introducing predict-self filtering and predict-self re-estimation are shown in Figures 9 and 10. The curves show the performance of predict-self re-estimation with a series of weight parameters ranging from 0 to 1 with step 0.05. Note that in both figures, the points at λ = 0 show the performance of Model 2.
We find that predict-self filtering brings a slight improvement over the baseline for word segmentation, but actually decreases the accuracy for dependency parsing. An initial analysis of the experimental results reveals that the filtering strategy discards some complicated sentences in the source corpora, and the discarded sentences would bring further improvement if properly used. For example, in word segmentation, predict-self filtering discards 5% of the sentences from the source corpus, containing nearly 10% of the training words. For both tasks, predict-self re-estimation outperforms the filtering strategy. With properly tuned weights, predict-self re-estimation makes better use of the training data. The largest accuracy improvements achieved over Model 2 for word segmentation and dependency parsing are 0.3 points and 0.6 points, respectively. Figures 11 and 12 show the performance curves after the introduction of both iterative training and predict-self re-estimation on the basis of Model 2 (this enhanced model is denoted as Model 3).","6 discussion: application situations :Automatic annotation adaptation aims to transform the annotations in a corpus into annotations following other guidelines. The models for annotation adaptation use a transfer classifier to learn the statistical correspondence regularities between different annotation guidelines. These statistical regularities are learned from a parallel annotated corpus, which does not need to be manually annotated. In fact, the models for annotation adaptation train the transfer classifier on an automatically generated parallel annotated corpus, produced by processing a corpus with a classifier trained on another corpus. That is to say, if we want to conduct annotation adaptation across several corpora, no additional corpora need to be manually annotated. This setting makes the strategy of annotation adaptation more general, because it is much harder to manually annotate a parallel annotated corpus, regardless of the language or the NLP problem under consideration. To tackle the problem of noise in the automatically generated annotations, the advanced models we designed generate a better parallel annotated corpus by making use of strategies such as iterative optimization. Automatic annotation adaptation can be applied in any situation where we have multiple corpora with different and incompatible annotation philosophies for the same task. As in our case studies, both Chinese word segmentation and dependency parsing have more than one corpus with different annotation guidelines, such as the People’s Daily and the Penn Chinese Treebank for Chinese word segmentation. From a more abstract viewpoint, constituency grammar and dependency grammar can be treated as two annotation guidelines for parsing. The syntactic knowledge in a constituency treebank and a dependency treebank, therefore, can be integrated by automatic annotation adaptation. For example, the LinGo Redwoods Treebank could also be transformed to the annotation guideline of the Semantic Dependency Treebank. Furthermore, the annotations (such as a grammar) given by bilingual projection or unsupervised induction can be seen as following a special annotation philosophy. For bilingually projected annotations, the annotation guideline is similar to that of the counterpart language. For unsupervised induced annotations, the annotation guideline reflects the statistical structural distribution of a specific data set.
In both situations, the underlying annotation guidelines may be largely different from those of the test sets, which usually come from human-annotated corpora. A system trained on a bilingually projected or unsupervised induced corpus may perform poorly on an existing test set; but if the projected or induced corpus has high internal consistency, it can still improve a system trained on an existing corpus through automatic annotation adaptation. From this point of view, the practical value of the current work on bilingual projection and unsupervised induction may be underestimated, and annotation adaptation could make better use of the projected or induced knowledge.3","7 related work :There has already been some preliminary work tackling the divergence between different annotation guidelines. Gao et al. (2004) described a transformation-based converter to transfer a given word segmentation result to another annotation guideline. They designed class-type transformation templates and used the transformation-based error-driven learning method of Brill (1995) to learn which word delimiters should be modified. Many efforts have been devoted to manual treebank transformation, where the PTB is adapted to other grammar formalisms, such as CCG and LFG (Cahill et al. 2002; Hockenmaier and Steedman 2007). However, all these approaches are heuristic-based; that is, they need manually designed transformation templates and involve heavy human engineering. Such strategies are hard to generalize to POS tagging, not to mention other, more complicated structured prediction tasks. We investigated the automatic integration of word segmentation knowledge in differently annotated corpora (Jiang, Huang, and Liu 2009; Jiang et al. 2012), which can be seen as the preliminary work on automatic annotation adaptation. Motivated by our initial investigation, researchers applied similar methodologies to constituency parsing (Sun, Wang, and Zhang 2010; Zhu, Zhu, and Hu 2011) and word segmentation (Sun and Wan 2012). This previous work verified the effectiveness of automatic annotation adaptation, but revealed neither the essential definition of the problem nor the intrinsic principles of the solutions. In contrast, this work clearly defines the problem of annotation adaptation, reveals the intrinsic principles of the solutions, and systematically describes a series of gradually improved models. The most advanced model learns the transformation regularities much better and achieves significantly higher accuracy for both word segmentation and dependency parsing, without slowing down the final language processors. The problem of automatic annotation adaptation can be seen as a special case of transfer learning (Pan and Yang 2010), where the source and target tasks are similar, but not identical. More specifically, annotation adaptation assumes that the labeling mechanism is the same across the source and target tasks, but that the predictive functions are different. The goal of annotation adaptation is to adapt the source predictive function for use in the target task by exploiting the labeled data of both the source task and the target task. Furthermore, automatic annotation adaptation falls roughly into the spectrum of relational-knowledge-transfer problems (Mihalkova, Huynh, and Mooney 2007; Mihalkova and Mooney 2008; Davis and Domingos 2009), but it tackles problems where the relations among the data in the source and target domains can differ largely in structure; in other words, where the annotation schemes are different and incompatible.
This work enriches the research on transfer learning by proposing and solving an NLP problem different from previously studied settings. For more details on transfer learning, please refer to the survey of Pan and Yang (2010).

3 We have performed preliminary experiments on word segmentation. Bilingual projection was conducted from English to Chinese with the Chinese–English FBIS data as the bilingual corpus. Through annotation adaptation, the projected corpus for word segmentation brings a significant F-measure increment of nearly 0.6 points over the baseline trained on CTB only.

The training procedure for an annotation adaptation model requires a parallel annotated corpus (which may be automatically generated); this fact puts the method in the neighborhood of the family of approaches known as annotation projection (Hwa et al. 2002, 2005; Ganchev, Gillenwater, and Taskar 2009; Smith and Eisner 2009; Jiang and Liu 2010; Das and Petrov 2011). Essentially, annotation adaptation and annotation projection tackle different problems: The former aims to transform the annotations from one guideline to another (within the same language), whereas the latter aims to project the annotations (as well as the annotation guideline) from one language to another. Therefore, the machine learning methods for annotation adaptation focus on the automatic transformation of annotations, while those for annotation projection focus on bilingual projection across languages. Co-training (Sarkar 2001) and classifier combination (Nivre and McDonald 2008) are two techniques for training improved dependency parsers. The co-training technique lets two different parsing models learn from each other during the parsing of unlabeled text: One model selects some unlabeled sentences it can confidently parse, and provides them to the other model as additional training data in order to train more powerful parsers. Classifier combination lets graph-based and transition-based dependency parsers utilize the features extracted from each other’s parsing results, to obtain combined, enhanced parsers. These two techniques aim to let two models learn from each other on the same corpus, with the same distribution and annotation guideline, whereas our strategy aims to integrate the knowledge in multiple corpora with different annotation guidelines. The iterative training procedure used in the optimized model shares some similarities with the co-training algorithm in parsing (Sarkar 2001), where the training procedure lets two different models learn from each other during the parsing of raw text. The key idea of co-training is to utilize the complementarity of different parsing models to mine additional training data from raw text, whereas iterative training for annotation adaptation emphasizes the iterative optimization of the parallel annotated corpora used to train the transfer classifiers. The predict-self methodology is implicit in many unsupervised learning approaches; it has been successfully used in unsupervised dependency parsing (Daumé III 2009). We adapt this idea to the scenario of annotation adaptation to improve the transformation accuracy. In recent years, much effort has been devoted to the improvement of word segmentation and dependency parsing. Examples include the introduction of global training or complicated features (Zhang and Clark 2007, 2010); the investigation of word structures (Li 2011); the strategies of hybrid, joint, or stacked modeling (Nakagawa and Uchimoto 2007; Kruengkrai et al.
2009; Wang, Zong, and Su 2010; Sun 2011); and the semi-supervised and unsupervised technologies utilizing raw text (Zhao and Kit 2008; Johnson and Goldwater 2009; Mochihashi, Yamada, and Ueda 2009; Hewlett and Cohen 2011). We believe that the annotation adaptation technologies can be adopted jointly with complicated features, system combination, and semi-supervised/unsupervised technologies to further improve the performance of word segmentation and dependency parsing.","8 conclusion and future work :We have described the problem of annotation adaptation and the intrinsic principles of its solutions, and proposed a series of successively enhanced models that can automatically adapt the divergence between different annotation formats. These models learn the statistical regularities of adaptation between different annotation guidelines, and integrate the knowledge in corpora with different annotation guidelines. For the problems of Chinese word segmentation and semantic dependency parsing, the annotation adaptation algorithms bring significant improvements by integrating the knowledge in differently annotated corpora: People’s Daily and the Penn Chinese Treebank for word segmentation, and the Penn Chinese Treebank and the Semantic Dependency Treebank for dependency parsing. For both tasks, annotation adaptation leads to a segmenter and a parser that achieve the state of the art, despite using only local features in single classifiers. Many aspects related to annotation adaptation deserve further investigation in the future. First, the models for annotation adaptation can be adapted to other NLP tasks, such as semantic analysis. Second, jointly tackling the divergences in both annotations and domains is an important problem. In addition, an unsupervised induced or bilingually projected corpus, despite leading to poor performance on the specified test data, may have high internal annotation consistency. That is to say, such induced corpora can be treated as knowledge sources following another annotation guideline, and the performance of current unsupervised or bilingually projected models may be seriously underestimated. Annotation adaptation may give us a new perspective on knowledge induction and its measurement for such methods.","Manually annotated corpora are indispensable resources, yet for many annotation tasks, such as the creation of treebanks, there exist multiple corpora with different and incompatible annotation guidelines. This leads to an inefficient use of human expertise, but it could be remedied by integrating knowledge across corpora with different annotation guidelines. In this article we describe the problem of annotation adaptation and the intrinsic principles of the solutions, and present a series of successively enhanced models that can automatically adapt the divergence between different annotation formats. We evaluate our algorithms on the tasks of Chinese word segmentation and dependency parsing. For word segmentation, where there are no universal segmentation guidelines because of the lack of morphology in Chinese, we perform annotation adaptation from the much larger People’s Daily corpus to the smaller but more popular Penn Chinese Treebank. For dependency parsing, we perform annotation adaptation from the Penn Chinese Treebank to a semantics-oriented Dependency Treebank, which is annotated using significantly different annotation guidelines.
In both experiments, automatic annotation adaptation brings significant improvement, achieving state-of-the-art performance despite the use of purely local features in training.","[{""affiliations"": [], ""name"": ""Wenbin Jiang""}, {""affiliations"": [], ""name"": ""Yajuan L\u00fc""}, {""affiliations"": [], ""name"": ""Liang Huang""}, {""affiliations"": [], ""name"": ""Qun Liu""}]",SP:376fabb797ac14f81d1ea7f54ed5432a0a7bf244,"[{""authors"": [""Blitzer"", ""John"", ""Ryan McDonald"", ""Fernando Pereira.""], ""title"": ""Domain adaptation with structural correspondence learning"", ""venue"": ""Proceedings of EMNLP, pages 120\u2013128, Sydney."", ""year"": 2006}, {""authors"": [""Brill"", ""Eric.""], ""title"": ""Transformation-based error-driven learning and natural language processing: A case study in part-of-speech tagging"", ""venue"": ""Computational Linguistics, 21(4):543\u2013565."", ""year"": 1995}, {""authors"": [""Buchholz"", ""Sabine"", ""Erwin Marsi.""], ""title"": ""CONLL-X shared task on multilingual dependency parsing"", ""venue"": ""Proceedings of CoNLL, pages 149\u2013164, New York, NY."", ""year"": 2006}, {""authors"": [""Cahill"", ""Aoife"", ""Mairead McCarthy"", ""Josef van Genabith"", ""Andy Way.""], ""title"": ""Automatic annotation of the Penn treebank with LFG F-structure information"", ""venue"": ""Proceedings of the LREC Workshop, Las Palmas."", ""year"": 2002}, {""authors"": [""Che"", ""Wanxiang"", ""Meishan Zhang"", ""Yanqiu Shao"", ""Ting Liu.""], ""title"": ""Semeval-2012 task 5: Chinese semantic dependency parsing"", ""venue"": ""Proceedings of SemEval, pages 378\u2013384, Montreal."", ""year"": 2012}, {""authors"": [""Collins"", ""Michael.""], ""title"": ""Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms"", ""venue"": ""Proceedings of EMNLP, pages 1\u20138, Philadelphia, PA."", ""year"": 2002}, {""authors"": [""Das"", ""Dipanjan"", ""Slav Petrov.""], ""title"": ""Unsupervised part-of-speech tagging with bilingual graph-based projections"", ""venue"": ""Proceedings of ACL, pages 600\u2013609, Portland, OR."", ""year"": 2011}, {""authors"": [""Daum\u00e9 III"", ""Hal.""], ""title"": ""Frustratingly easy domain adaptation"", ""venue"": ""Proceedings of ACL, pages 256\u2013263, Prague."", ""year"": 2007}, {""authors"": [""Daum\u00e9 III"", ""Hal.""], ""title"": ""Unsupervised searchbased structured prediction"", ""venue"": ""Proceedings of ICML, pages 209\u2013216, Montreal."", ""year"": 2009}, {""authors"": [""Daum\u00e9 III"", ""Hal"", ""Daniel Marcu.""], ""title"": ""Domain adaptation for statistical classifiers"", ""venue"": ""Journal of Artificial Intelligence Research, 26:101\u2013126."", ""year"": 2006}, {""authors"": [""Davis"", ""Jesse"", ""Pedro Domingos.""], ""title"": ""Deep transfer via second-order Markov logic"", ""venue"": ""Proceedings of ICML, pages 217\u2013224, Montreal."", ""year"": 2009}, {""authors"": [""Eisner"", ""Jason M.""], ""title"": ""Three new probabilistic models for dependency parsing: An exploration"", ""venue"": ""Proceedings of COLING, pages 340\u2013345, Copenhagen."", ""year"": 1996}, {""authors"": [""Ganchev"", ""Kuzman"", ""Jennifer Gillenwater"", ""Ben Taskar.""], ""title"": ""Dependency grammar induction via bitext projection constraints"", ""venue"": ""Proceedings of ACL, pages 369\u2013377, Singapore."", ""year"": 2009}, {""authors"": [""Gao"", ""Jianfeng"", ""Andi Wu"", ""Mu Li"", ""Chang-Ning Huang"", ""Hongqiao Li"", ""Xinsong Xia"", ""Haowei Qin.""], 
""title"": ""Adaptive Chinese word segmentation"", ""venue"": ""Proceedings of ACL, pages 462\u2013469, Barcelona."", ""year"": 2004}, {""authors"": [""Hewlett"", ""Daniel"", ""Paul Cohen.""], ""title"": ""Fully unsupervised word segmentation with BVE and MDL"", ""venue"": ""Proceedings of ACL, pages 540\u2013545, Portland, OR."", ""year"": 2011}, {""authors"": [""Hockenmaier"", ""Julia"", ""Mark Steedman.""], ""title"": ""CCGBank: A corpus of CCG derivations and dependency structures extracted from the Penn treebank"", ""venue"": ""Computational Linguistics, 33(3):355\u2013396."", ""year"": 2007}, {""authors"": [""Hwa"", ""Rebecca"", ""Philip Resnik"", ""Amy Weinberg"", ""Clara Cabezas"", ""Okan Kolak.""], ""title"": ""Bootstrapping parsers via syntactic projection across parallel texts"", ""venue"": ""Natural Language Engineering, 11(3):311\u2013325."", ""year"": 2005}, {""authors"": [""Hwa"", ""Rebecca"", ""Philip Resnik"", ""Amy Weinberg"", ""Okan Kolak.""], ""title"": ""Evaluating translational correspondence using annotation projection"", ""venue"": ""Proceedings of ACL, pages 392\u2013399, Philadephia, PA."", ""year"": 2002}, {""authors"": [""Jiang"", ""Wenbin"", ""Liang Huang"", ""Qun Liu.""], ""title"": ""Automatic adaptation of annotation standards: Chinese word segmentation and POS tagging \u2013 A case study"", ""venue"": ""Proceedings of ACL, pages 522\u2013530,"", ""year"": 2009}, {""authors"": [""Jiang"", ""Wenbin"", ""Liang Huang"", ""Yajuan L\u00fc"", ""Qun Liu.""], ""title"": ""A cascaded linear model for joint Chinese word segmentation and part-of-speech tagging"", ""venue"": ""Proceedings of ACL, pages 897\u2013904, Columbus, OH."", ""year"": 2008}, {""authors"": [""Jiang"", ""Wenbin"", ""Qun Liu.""], ""title"": ""Dependency parsing and projection based on word-pair classification"", ""venue"": ""Proceedings of the ACL, pages 12\u201320, Uppsala."", ""year"": 2010}, {""authors"": [""Jiang"", ""Wenbin"", ""Fandong Meng"", ""Qun Liu"", ""Yajuan L\u00fc.""], ""title"": ""Iterative annotation transformation with predict-self reestimation for Chinese word segmentation"", ""venue"": ""Proceedings of EMNLP,"", ""year"": 2012}, {""authors"": [""Johnson"", ""Mark"", ""Sharon Goldwater.""], ""title"": ""Improving nonparameteric Bayesian inference: Experiments on unsupervised word segmentation with adaptor grammars"", ""venue"": ""Proceedings of NAACL,"", ""year"": 2009}, {""authors"": [""Kruengkrai"", ""Canasai"", ""Kiyotaka Uchimoto"", ""Junichi Kazama"", ""Yiou Wang"", ""Kentaro Torisawa"", ""Hitoshi Isahara""], ""title"": ""An error-driven word-character hybrid model for joint Chinese word segmentation"", ""year"": 2009}, {""authors"": [""Li"", ""Zhongguo.""], ""title"": ""Parsing the internal structure of words: A new paradigm for Chinese word segmentation"", ""venue"": ""Proceedings of ACL, pages 1,405\u20131,414, Portland, OR."", ""year"": 2011}, {""authors"": [""Marcus"", ""Mitchell P."", ""Beatrice Santorini"", ""Mary Ann Marcinkiewicz.""], ""title"": ""Building a large annotated corpus of English: The Penn treebank"", ""venue"": ""Computational Linguistics, 19(2):313\u2013330."", ""year"": 1993}, {""authors"": [""Martins"", ""Andr\u00e9 F.T."", ""Dipanjan Das"", ""Noah A. Smith"", ""Eric P. 
Xing.""], ""title"": ""Stacking dependency parsers"", ""venue"": ""Proceedings of EMNLP, pages 157\u2013166, Honolulu, HI."", ""year"": 2008}, {""authors"": [""McDonald"", ""Ryan"", ""Koby Crammer"", ""Fernando Pereira.""], ""title"": ""Online large-margin training of dependency parsers"", ""venue"": ""Proceedings of ACL, pages 91\u201398, Ann Arbor, MI."", ""year"": 2005}, {""authors"": [""McDonald"", ""Ryan"", ""Fernando Pereira.""], ""title"": ""Online learning of approximate dependency parsing algorithms"", ""venue"": ""Proceedings of EACL, pages 81\u201388, Trento."", ""year"": 2006}, {""authors"": [""Mihalkova"", ""Lilyana"", ""Tuyen Huynh"", ""Raymond J. Mooney.""], ""title"": ""Mapping and revising Markov logic networks for transfer learning"", ""venue"": ""Proceedings of AAAI, volume 7, pages 608\u2013614, Vancouver."", ""year"": 2007}, {""authors"": [""Mihalkova"", ""Lilyana"", ""Raymond J. Mooney.""], ""title"": ""Transfer learning by mapping with minimal target data"", ""venue"": ""Proceedings of AAAI Workshop Transfer Learning for Complex Tasks, Chicago, IL."", ""year"": 2008}, {""authors"": [""Mochihashi"", ""Daichi"", ""Takeshi Yamada"", ""Naonori Ueda.""], ""title"": ""Bayesian unsupervised word segmentation with nested Pitman-Yor language modeling"", ""venue"": ""Proceedings of ACL-IJCNLP,"", ""year"": 2009}, {""authors"": [""Nakagawa"", ""Tetsuji"", ""Kiyotaka Uchimoto.""], ""title"": ""A hybrid approach to word segmentation and POS tagging"", ""venue"": ""Proceedings of ACL, pages 217\u2013220, Prague."", ""year"": 2007}, {""authors"": [""Ng"", ""Hwee Tou"", ""Jin Kiat Low""], ""title"": ""Chinese part-of-speech tagging: One-at-a-time or all-at-once? Word-based or character-based"", ""venue"": ""In Proceedings of EMNLP,"", ""year"": 2004}, {""authors"": [""Nivre"", ""Joakim"", ""Ryan McDonald.""], ""title"": ""Integrating graph-based and transition-based dependency parsers"", ""venue"": ""Proceedings of ACL, pages 950\u2013958, Columbus, OH."", ""year"": 2008}, {""authors"": [""Oepen"", ""Stephan"", ""Kristina Toutanova"", ""Stuart Shieber"", ""Thorsten Brants""], ""title"": ""The LinGo Redwoods treebank: Motivation and preliminary applications"", ""year"": 2002}, {""authors"": [""Pan"", ""Sinno Jialin"", ""Qiang Yang.""], ""title"": ""A survey on transfer learning"", ""venue"": ""IEEE TKDE, 22(10):1345\u20131359."", ""year"": 2010}, {""authors"": [""Sarkar"", ""Anoop.""], ""title"": ""Applying co-training methods to statistical parsing"", ""venue"": ""Proceedings of NAACL, pages 1\u20138, Pittsburgh, PA."", ""year"": 2001}, {""authors"": [""Smith"", ""David"", ""Jason Eisner.""], ""title"": ""Parser adaptation and projection with quasi-synchronous grammar features"", ""venue"": ""Proceedings of EMNLP, volume 2, pages 822\u2013831, Singapore."", ""year"": 2009}, {""authors"": [""Sun"", ""Weiwei.""], ""title"": ""A stacked sub-word model for joint Chinese word segmentation and part-of-speech tagging"", ""venue"": ""Proceedings of ACL, pages 1,385\u20131,394, Portland, OR."", ""year"": 2011}, {""authors"": [""Sun"", ""Weiwei"", ""Xiaojun Wan.""], ""title"": ""Reducing approximation and estimation errors for Chinese lexical processing with heterogeneous annotations"", ""venue"": ""Proceedings of ACL, volume 1, pages 232\u2013241,"", ""year"": 2012}, {""authors"": [""Sun"", ""Weiwei"", ""Rui Wang"", ""Yi Zhang.""], ""title"": ""Discriminative parse reranking for Chinese with homogeneous and heterogeneous annotations"", ""venue"": ""Proceedings of CIPS-SIGHAN, Beijing. 
Available at"", ""year"": 2010}, {""authors"": [""Wang"", ""Kun"", ""Chengqing Zong"", ""Keh-Yih Su.""], ""title"": ""A character-based joint model for Chinese word segmentation"", ""venue"": ""Proceedings of COLING, pages 1,173\u20131,181, Beijing."", ""year"": 2010}, {""authors"": [""Xue"", ""Nianwen"", ""Libin Shen.""], ""title"": ""Chinese word segmentation as LMR tagging"", ""venue"": ""Proceedings of SIGHAN Workshop, volume 17, pages 176\u2013179, Sapporo."", ""year"": 2003}, {""authors"": [""Xue"", ""Nianwen"", ""Fei Xia"", ""Fu-Dong Chiou"", ""Martha Palmer.""], ""title"": ""The Penn Chinese treebank: Phrase structure annotation of a large corpus"", ""venue"": ""Natural Language Engineering, 11(2):207\u2013238."", ""year"": 2005}, {""authors"": [""H. Yamada"", ""Y. Matsumoto.""], ""title"": ""Statistical dependency analysis with support vector machines"", ""venue"": ""Proceedings of IWPT, pages 195\u2013206, Nancy."", ""year"": 2003}, {""authors"": [""Yu"", ""Shiwen"", ""Jianming Lu"", ""Xuefeng Zhu"", ""Huiming Duan"", ""Shiyong Kang"", ""Honglin Sun"", ""Hui Wang"", ""Qiang Zhao"", ""Weidong Zhan.""], ""title"": ""Processing norms of modern Chinese corpus"", ""venue"": ""Technical"", ""year"": 2001}, {""authors"": [""Zhang"", ""Yue"", ""Stephen Clark.""], ""title"": ""Chinese segmentation with a word-based perceptron algorithm"", ""venue"": ""Proceedings of ACL, pages 840\u2013847, Prague."", ""year"": 2007}, {""authors"": [""Zhang"", ""Yue"", ""Stephen Clark.""], ""title"": ""A fast decoder for joint word segmentation and POS-tagging using a single discriminative model"", ""venue"": ""Proceedings of EMNLP, pages 843\u2013852, Cambridge, MA."", ""year"": 2010}, {""authors"": [""Zhao"", ""Hai"", ""Chunyu Kit.""], ""title"": ""Unsupervised segmentation helps supervised learning of character tagging for word segmentation and named entity recognition"", ""venue"": ""Proceedings of IJCNLP,"", ""year"": 2008}, {""authors"": [""Zhu"", ""Muhua"", ""Jingbo Zhu"", ""Minghan Hu.""], ""title"": ""Better automatic treebank conversion using a feature-based approach"", ""venue"": ""Proceedings of ACL, volume 2, pages 715\u2013719, Portland, OR."", ""year"": 2011}]","acknowledgments :Jiang, Lü, and Liu were supported by National Natural Science Foundation of China (contract 61202216) and the National Key Technology R&D Program (no. 2012BAH39B03). Huang was supported in part by the DARPA DEFT Project (FA8750-13-2-0041). Liu was partially supported by the Science Foundation Ireland (grant no. 07/CE/I1142) as part of the CNGL at Dublin City University. We also thank the anonymous reviewers for their insightful comments. Finally, we want to thank Chris Hokamp for proofreading.",,,,,,,,,,,,,,,,,,,,,"automatic adaptation of annotations :Wenbin Jiang∗ Chinese Academy of Sciences Yajuan Lü∗ Chinese Academy of Sciences Liang Huang∗∗ Queens College and Graduate Center, The City University of New York Qun Liu∗† Dublin City University Chinese Academy of Sciences Manually annotated corpora are indispensable resources, yet for many annotation tasks, such as the creation of treebanks, there exist multiple corpora with different and incompatible annotation guidelines. This leads to an inefficient use of human expertise, but it could be remedied by integrating knowledge across corpora with different annotation guidelines. 
In this article we describe the problem of annotation adaptation and the intrinsic principles of the solutions, and present a series of successively enhanced models that can automatically adapt the divergence between different annotation formats. We evaluate our algorithms on the tasks of Chinese word segmentation and dependency parsing. For word segmentation, where there are no universal segmentation guidelines because of the lack of morphology in Chinese, we perform annotation adaptation from the much larger People’s Daily corpus to the smaller but more popular Penn Chinese Treebank. For dependency parsing, we perform annotation adaptation from the Penn Chinese Treebank to a semantics-oriented Dependency Treebank, which is annotated using significantly different annotation guidelines. In both experiments, automatic annotation adaptation brings significant improvement, achieving state-of-the-art performance despite the use of purely local features in training.
∗ Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences, No. 6 Kexueyuan South Road, Haidian District, P.O. Box 2704, Beijing 100190, China. E-mail: {jiangwenbin, liuqun, lvyajuan}@ict.ac.cn.
∗∗ Department of Computer Science, Queens College / CUNY, 65-30 Kissena Blvd., Queens, NY 11367. E-mail: liang.huang.sh@gmail.com.
† Centre for Next Generation Localisation, Faculty of Engineering and Computing, Dublin City University. E-mail: qliu@computing.dcu.ie.
Submission received: 24 April 2013; revised version received: 6 March 2014; accepted for publication: 18 April 2014. doi:10.1162/COLI_a_00210 © 2015 Association for Computational Linguistics",type templates instances : α → α=B; α ◦ C−2 → α=B ◦ C−2=美; α ◦ C−1 → α=B ◦ C−1=副; α ◦ C0 → α=B ◦ C0=总; α ◦ C1 → α=B ◦ C1=统; α ◦ C2 → α=B ◦ C2=访; α ◦ C−2C−1 → α=B ◦ C−2C−1=美副; α ◦ C−1C0 → α=B ◦ C−1C0=副总; α ◦ C0C1 → α=B ◦ C0C1=总统; α ◦ C1C2 → α=B ◦ C1C2=统访; α ◦ C−1C1 → α=B ◦ C−1C1=副统; α ◦ Pu(C0) → α=B ◦ Pu(C0)=true; α ◦ T(C−2:2) → α=B ◦ T(C−2:2)=44444,"partition sections # of words : The performance measurement indicator for word segmentation is the balanced F-measure, F = 2PR/(P + R), a function of precision P and recall R, where P is the percentage of words in the segmentation result that are segmented correctly, and R is the percentage of correctly segmented words among the gold-standard words. For both syntactic and semantic dependency parsing, we concentrate on unlabeled parsing, which predicts the dependency structure of the input sentence without considering dependency labels. The perceptron-based baseline dependency models are trained on the training sets of DCTB and SDT, using the development sets to determine the best training iterations. The performance measurement indicator for dependency parsing is the Unlabeled Attachment Score, denoted as precision P, indicating the percentage of words in the predicted dependency structure that are correctly attached to their head words. Figure 6 shows the learning curve of the averaged perceptron for word segmentation on the development set. Accuracies of the baseline classifiers are listed in Table 6. We also report the performance of the classifiers on the test sets of the opposite corpora. The experimental results are in line with our expectations: A classifier performs better on its own test set, and performs significantly worse on test data following a different annotation guideline.
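As a concrete rendering of the segmentation metric defined above, here is a minimal sketch that computes precision, recall, and the balanced F-measure over word spans; the helper names are ours, not the article's.

def to_spans(words):
    """Convert a segmented sentence into a set of (start, end) character spans."""
    spans, pos = set(), 0
    for w in words:
        spans.add((pos, pos + len(w)))
        pos += len(w)
    return spans

def segmentation_prf(predicted, gold):
    """predicted, gold: lists of words over the same character sequence."""
    pred_spans, gold_spans = to_spans(predicted), to_spans(gold)
    correct = len(pred_spans & gold_spans)
    p = correct / len(pred_spans)                  # precision
    r = correct / len(gold_spans)                  # recall
    f = 2 * p * r / (p + r) if p + r > 0 else 0.0  # balanced F-measure
    return p, r, f

# e.g. segmentation_prf(["总统", "访", "美"], ["总统", "访美"])
# gives P = 1/3, R = 1/2, F = 0.4.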
Table 7 shows the accuracies of the baseline syntactic and semantic parsers, as well as the performance of the parsers on the test sets of the opposite corpora. Similar to the situation in word segmentation, the two parsers give state-of-the-art accuracies on their own test sets, but perform poorly on the other test sets. This indicates the degree of divergence between the annotation guidelines of DCTB and SDT.","ctb :To approximate more general scenarios of annotation adaptation problems, we extract from PD a subset that is comparable to CTB in size. Because there are many extremely long sentences in the original PD corpus, we first split them into normal sentences at the full-stop punctuation symbol. We randomly select 20,000 sentences (0.45M words) from the PD training data as the new training set, and 1,000/1,000 sentences from the PD test data as the new test/development sets. We label this smaller version of PD as SPD. The balanced source corpus and target corpus also facilitate the investigation of annotation adaptation. Annotation adaptation for dependency parsing is performed from the CTB-derived syntactic dependency treebank (DCTB) (Yamada and Matsumoto 2003) to the Semantic Dependency Treebank (SDT) (Che et al. 2012). Semantic dependencies encode the semantic relationships between words, which are very different from syntactic dependencies. SDT is annotated on a small portion of the CTB text, as depicted in Table 5; therefore, we use the subset of DCTB covering the remaining CTB text as the source corpus. We still denote the source corpus as DCTB in the following for simplicity.","previous work :","semeval-2012 contest :We find that the predict-self re-estimation brings improvement to the iterative training at each iteration, for both word segmentation and dependency parsing. The maximum performance is achieved at iteration 4 for word segmentation, and at iteration 5 for dependency parsing. The corresponding models are evaluated on the corresponding test sets, and the experimental results are also shown in Tables 8 and 9. Compared to Model 1, the optimized annotation adaptation strategy, Model 3, leads to classifiers with significantly higher accuracy and to processing speeds that are several times faster. Tables 10 and 11 show the experimental results compared with previous work. For both Chinese word segmentation and semantic dependency parsing, automatic annotation adaptation yields state-of-the-art performance, despite using single classifiers with only local features. Note that the systems in the SemEval contest (Che et al. 2012) adopted many other technologies, including clause segmentation, system combination, and complicated features, as well as elaborate engineering. We also performed significance tests2 to verify the effectiveness of annotation adaptation. We find that for both Chinese word segmentation and semantic dependency parsing, annotation adaptation brings significant improvement (p < 0.001) over the baselines trained on the target corpora only. 2 http://www.cis.upenn.edu/~dbikel/download/compare.pl.","word type proportion baseline anno. ada. trend :To evaluate the stability of annotation adaptation, we perform a quantitative analysis of the results of annotation adaptation. For word segmentation, the words are grouped according to POS tags. For dependency parsing, the dependency edges are grouped according to POS tag pairs.
For each category, the recall values of the baseline and of annotation adaptation are reported. To filter the lists, we set two significance thresholds, on the proportion of a category and on the performance fluctuation between the two systems. For word segmentation, only the categories with proportions of more than 1% and fluctuations of more than 0.1 points are retained; for dependency parsing, the two thresholds are 1% and 0.5 points. Tables 12 and 13 show the analysis results for word segmentation and dependency parsing, respectively. For both tasks, annotation adaptation brings improvement in most of the situations.","edge type proportion baseline anno. ada. trend :We further investigate the effect of varying the sizes of the target corpora. Experiments are conducted for word segmentation and dependency parsing with fixed-size source corpora and varying-size target corpora. We use SPD and DCTB as the source corpora for word segmentation and dependency parsing, respectively. Figures 13 and 14 show the performance curves on the test sets. We find that, for both word segmentation and dependency parsing, the improvements brought by annotation adaptation are more significant when the target corpora are smaller. This means that automatic annotation adaptation is more valuable when the size of the target corpus is small, which is good news for the situation where the corpus we are concerned with is small but a larger, differently annotated corpus exists. Of course, the comparison between automatic annotation adaptation and previous strategies that use no additional training data is unfair. Our work aims to find another way to improve NLP tasks: exploiting more training data rather than making the fullest use of a single corpus. We believe that the performance of automatic annotation adaptation can be further improved by adopting the advanced technologies of previous work, such as complicated features and model combination. It would be useful to conduct experiments with more source-annotated training data, such as the SIGHAN data set for word segmentation, to investigate the trend of improvement as the number of annotated sentences increases further. It would also be valuable to evaluate the improved word segmenter and dependency parser on out-of-domain data sets. However, most current corpora for word segmentation and dependency parsing do not explicitly distinguish the domains of their data sections, making such evaluations difficult to conduct.",,,,,,,,,,,,,,,,,,,,,"1 introduction :Much of statistical NLP research relies on some sort of manually annotated corpora to train models, but annotated resources are extremely expensive to build, especially on a large scale. The creation of treebanks is a prime example (Marcus, Santorini, and Marcinkiewicz 1993). However, the linguistic theories motivating these annotation efforts are often heavily debated, and as a result there often exist multiple corpora for the same task with vastly different and incompatible annotation philosophies. For example, there are several treebanks for English, including the Chomskian-style Penn Treebank (Marcus, Santorini, and Marcinkiewicz 1993), the HPSG LinGo Redwoods Treebank (Oepen et al. 2002), and a smaller dependency treebank (Buchholz and Marsi 2006).
From the perspective of resource accumulation, this seems a waste of human effort.1 A second, related problem is that the raw texts are also drawn from different domains, which for the above example range from financial news (Penn Treebank/Wall Street Journal) to transcribed dialog (LinGo). It would be nice if a system could be automatically ported from one set of guidelines and/or one domain to another, in order to exploit a much larger data set. The second problem, domain adaptation, is very well studied (e.g., Blitzer, McDonald, and Pereira 2006; Daumé III 2007). This work focuses on the widely existing and equally important problem, annotation adaptation, in order to adapt the divergence between different annotation guidelines and integrate the linguistic knowledge in corpora with incongruent annotation formats. In this article, we describe the problem of annotation adaptation and the intrinsic principles of the solutions, and present a series of successively improved concrete models, the goal being to transfer the annotations of a corpus (the source corpus) to the annotation format of another corpus (the target corpus). The transfer classifier is the fundamental component of the annotation adaptation algorithms. It learns the correspondence regularities between annotation guidelines from a parallel annotated corpus, which has two kinds of annotations for the same data. In the simplest model (Model 1), the source classifier trained on the source corpus gives its predictions to the transfer classifier trained on the parallel annotated corpus, so as to integrate the knowledge in the two corpora. In a variant of the simplest model (Model 2), the transfer classifier is used to transform the annotations in the source corpus into the annotation format of the target corpus; the transformed source corpus and the target corpus are then merged in order to train a more accurate classifier. Based on the second model, we finally develop an optimized model (Model 3), where two optimization strategies, iterative training and predict-self re-estimation, are integrated to further improve the effectiveness of annotation adaptation. We experiment on Chinese word segmentation and dependency parsing to test the efficacy of our methods. For word segmentation, the problem of incompatible annotation guidelines is one of the most glaring: No segmentation guideline has been widely accepted, due to the lack of a clear definition of Chinese word morphology. For dependency parsing there also exist multiple disparate annotation guidelines. For example, the dependency relations extracted from a constituency treebank follow syntactic principles, whereas the semantic dependency treebank is annotated from a semantic perspective.

1 Different annotated corpora for the same task facilitate the comparison of linguistic theories. From this perspective, having multiple standards is not necessarily a waste but rather a blessing, because it is a necessary phase in coming to a consensus, if there is one.

The two corpora for word segmentation are the much larger People’s Daily corpus (PD) (5.86M words) (Yu et al. 2001) and the smaller but more popular Penn Chinese Treebank (CTB) (0.47M words) (Xue et al. 2005). They follow very different segmentation guidelines; for example, as shown in Figure 1, PD breaks Vice-President into two words and combines the phrase visited-China into a compound, compared with the segmentation following the CTB annotation guideline.
It is preferable to transfer knowledge from PD to CTB because the latter also annotates tree structures, which are useful for downstream applications like parsing, summarization, and machine translation, yet it is much smaller in size. For dependency parsing, we use the dependency treebank (DCTB) extracted from CTB according to the rules of Yamada and Matsumoto (2003), and the Semantic Dependency Treebank (SDT) built on a small part of the CTB text (Che et al. 2012). Compared with the automatically extracted syntactic dependencies in DCTB, the dependencies in SDT capture semantic rather than syntactic relationships between words. Figure 2 shows an example. Experiments on both word segmentation and dependency parsing show that annotation adaptation yields significant improvement over the baselines and achieves state-of-the-art accuracy with only local features. The rest of the article is organized as follows. Section 2 describes the problem of annotation adaptation. Section 3 briefly introduces the tasks of word segmentation and dependency parsing, as well as their state-of-the-art models. In Section 4 we first describe the transfer classifier, which embodies the intrinsic principles of annotation adaptation, and then present the three successively enhanced models for automatic adaptation of annotations. After the description of experimental results in Section 5 and the discussion of application scenarios in Section 6, we give a brief review of related work in Section 7 and draw conclusions in Section 8.

2 automatic annotation adaptation :
We define annotation adaptation as the task of automatically bridging the divergence between different annotation guidelines. Statistical models can be designed to learn the correspondence between two annotation guidelines in order to transform a corpus from one guideline to another. From this point of view, annotation adaptation can be seen as a special case of transfer learning. Through annotation adaptation, the linguistic knowledge in different corpora is integrated, resulting in enhanced NLP systems without complicated models and features. Much research has considered the problem of domain adaptation (Blitzer, McDonald, and Pereira 2006; Daumé III 2007), which can also be seen as a special case of transfer learning. It aims to adapt models trained on one domain (e.g., chemistry) to work well on other domains (e.g., medicine). Despite superficial similarities between domain adaptation and annotation adaptation, we argue that the underlying problems are quite different. Domain adaptation assumes that the labeling guidelines are preserved between the two domains (for example, an adjective is always labeled as JJ regardless of whether it comes from the Wall Street Journal (WSJ) or a biomedical text) and that only the distributions differ (for example, the word control is most likely a verb in WSJ but often a noun in biomedical texts, as in control experiment). Annotation adaptation, however, tackles the problem where the guideline itself is changed: one treebank might distinguish between transitive and intransitive verbs while merging the different noun types (NN, NNS, etc.), or one treebank (PTB) might be much flatter than another (LinGo), not to mention the fundamental disparities between their underlying linguistic representations (CFG vs. HPSG). A more formal description will allow us to make these claims more precise. Let X be the data and Y be the annotation.
Annotation adaptation can be understood as a change in P(Y) caused by a change of annotation guidelines while P(X) remains constant: we want to change the annotations of the data from one guideline to another, leaving the data itself unchanged. In domain adaptation, by contrast, P(X) changes while P(Y) is assumed to be constant. We say assumed because the distributions P(X, Y) and P(Y|X) do in fact change once P(X) changes; domain adaptation aims to make the model adapt to a different domain under the same annotation guidelines. According to this analysis, annotation adaptation is motivated more from a linguistic (rather than statistical) point of view, and tackles a serious problem fundamentally different from domain adaptation, itself a serious problem (often leading to >10% loss in accuracy). More interestingly, annotation adaptation, which makes no assumptions about distributions, can be applied simultaneously to both domain and annotation adaptation problems, which is very appealing in practice because the latter problem often implies the former.

3 case studies: word segmentation and dependency parsing :
In many Asian languages there are no explicit word boundaries, so word segmentation is a fundamental task for the processing and understanding of these languages. Given a sentence as a sequence of n characters

$x = x_1 x_2 \ldots x_n$

where $x_i$ is a character, word segmentation aims to split the sequence into $m (\le n)$ words

$x_{1:e_1}\ x_{e_1+1:e_2}\ \ldots\ x_{e_{m-1}+1:e_m}$

where each subsequence $x_{i:j}$ indicates a Chinese word spanning characters $x_i$ to $x_j$. Word segmentation can be formalized as a sequence labeling problem (Xue and Shen 2003), where each character in the sentence is given a boundary tag representing its position in a word. Following Ng and Low (2004), joint word segmentation and part-of-speech (POS) tagging can also be solved with a character classification approach by extending the boundary tags to include POS information. For word segmentation we adopt the four boundary tags of Ng and Low (2004), B, M, E, and S, where B, M, and E mean the beginning, the middle, and the end of a word, respectively, and S indicates a single-character word. The word segmentation result is generated by splitting the labeled character sequence into subsequences of pattern S or BM*E, indicating single-character and multi-character words, respectively. Given the character sequence x, the decoder finds the output ỹ that maximizes the score function:

$\tilde{y} = \arg\max_{y} f(x, y) \cdot w = \arg\max_{y} \sum_{x_i \in x,\, y_i \in y} f(x_i, y_i) \cdot w \qquad (1)$

where the function f maps (x, y) into a feature vector, w is the parameter vector produced by the training algorithm, and f(x, y) · w is their inner product. The score of the sentence is factorized over the characters, where $y_i$ is the classification label of character $x_i$. The training procedure of the perceptron learns a discriminative model mapping from the inputs x to the outputs y. Algorithm 1 shows the perceptron algorithm for tuning the parameter vector w.

Algorithm 1: Perceptron training algorithm.
Input: training set C
w ← 0
for t ← 1 .. T do                        ⊲ T iterations
    for (x, y) ∈ C do
        z̃ ← argmax_z f(x, z) · w
        if z̃ ≠ y then
            w ← w + f(x, y) − f(x, z̃)   ⊲ update the parameters
        end if
    end for
end for
Output: parameters w

The "averaged parameters" technique (Collins 2002) is used for better performance. The feature templates of the classifier are shown in Table 1. The function Pu(·) returns true for a punctuation character and false otherwise; the function T(·) classifies a character into four types, number, date, English letter, and others, corresponding to function values 1, 2, 3, and 4, respectively.
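As a concrete illustration, the character-classification scheme and the perceptron update above can be sketched in Python as follows. This is a minimal sketch: char_features is a simplified stand-in for the Table 1 templates (not the actual feature set), and the update shown is the unaveraged variant of Algorithm 1.

def labels_to_words(chars, labels):
    """Split a B/M/E/S-labeled character sequence into words (pattern S or BM*E)."""
    words, buf = [], ""
    for c, t in zip(chars, labels):
        buf += c
        if t in ("S", "E"):            # a word ends at tag S or E
            words.append(buf)
            buf = ""
    if buf:                            # tolerate ill-formed tag sequences
        words.append(buf)
    return words

def char_features(chars, i):
    """Toy context features for character i (stand-ins for Table 1)."""
    get = lambda k: chars[i + k] if 0 <= i + k < len(chars) else "#"
    return [f"C0={get(0)}", f"C-1={get(-1)}", f"C1={get(1)}",
            f"C-1C0={get(-1)}{get(0)}", f"C0C1={get(0)}{get(1)}"]

def perceptron_update(w, gold_feats, pred_feats):
    """One step of Algorithm 1: w <- w + f(x, y) - f(x, z)."""
    for f in gold_feats:
        w[f] = w.get(f, 0.0) + 1.0
    for f in pred_feats:
        w[f] = w.get(f, 0.0) - 1.0

For example, labels_to_words(list("abcde"), ["B", "E", "S", "B", "E"]) returns ["ab", "c", "de"].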
Dependency parsing aims to link each word to its arguments so as to form a directed graph spanning the whole sentence. Normally, the directed graph is restricted to a dependency tree in which each word depends on exactly one parent and every word finds its parent. Given a sentence as a sequence of n words

$x = x_1 x_2 \ldots x_n$

dependency parsing finds a dependency tree y, where $(i, j) \in y$ is an edge from the head word $x_i$ to the modifier word $x_j$. The root $r \in x$ of the tree y has no head word, and each other word $x_j$ ($j \ne r$) depends on exactly one head word $x_i$ ($i \ne j$). For many languages, the dependency structures are assumed to be projective: if $x_j$ depends on $x_i$, then all the words between i and j must be directly or indirectly dependent on $x_i$. Therefore, if we lay the words out in their linear order, preceded by the root, all edges can be drawn above the words without crossing. We follow this constraint because the dependency treebanks in our experiments are projective. Following the edge-based factorization method (Eisner 1996), the score of a dependency tree is factorized over the dependency edges in the tree. The spanning tree method (McDonald, Crammer, and Pereira 2005) defines the score of a tree as the sum of the scores of all its edges, and the score of an edge as the inner product of the feature vector f and the weight vector w. Given a sentence x, the parsing procedure searches for the candidate dependency tree with the maximum score:

$\tilde{y} = \arg\max_{y} f(x, y) \cdot w = \arg\max_{y} \sum_{(i, j) \in y} f(i, j) \cdot w \qquad (2)$

The averaged perceptron algorithm is used again to train the parameter vector. A bottom-up dynamic programming algorithm searches for the candidate parse with the maximum score, as shown in Algorithm 2, where V[i, j] contains the candidate dependency fragments of the span [i, j].

Algorithm 2: Dependency parsing algorithm.
Input: sentence x to be parsed
for [i, j] ⊆ [1, |x|] in topological order do
    buf ← ∅
    for k ← i .. j − 1 do                ⊲ all partitions
        for l ∈ V[i, k] and r ∈ V[k + 1, j] do
            insert DERIV(l, r) into buf
            insert DERIV(r, l) into buf
        end for
    end for
    V[i, j] ← best K in buf
end for
Output: the best of V[1, |x|]

function DERIV(p, c)
    return p ∪ c ∪ {(p.root, c.root)}    ⊲ new derivation
end function

The feature templates are similar to those of the first-order MST model (McDonald, Crammer, and Pereira 2005). Each feature is composed of words and POS tags surrounding word i and/or word j, together with an optional representation of the distance between the two words. Table 2 shows the feature templates without distance representations.
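For concreteness, Algorithm 2 can be rendered directly in Python as below. This is a minimal sketch under simplifying assumptions: score_edge is an arbitrary stand-in for the edge score f(i, j) · w, and the feature templates of Table 2 are omitted.

import itertools

def parse(words, score_edge, K=4):
    """Bottom-up span combination keeping the best K derivations per span.

    Each derivation is (score, root_index, frozenset_of_edges)."""
    n = len(words)
    V = {(i, i): [(0.0, i, frozenset())] for i in range(n)}
    for length in range(2, n + 1):           # spans in topological order
        for i in range(0, n - length + 1):
            j = i + length - 1
            buf = []
            for k in range(i, j):            # all partitions
                for l, r in itertools.product(V[(i, k)], V[(k + 1, j)]):
                    for head, mod in ((l, r), (r, l)):   # DERIV both ways
                        s = head[0] + mod[0] + score_edge(head[1], mod[1])
                        edges = head[2] | mod[2] | {(head[1], mod[1])}
                        buf.append((s, head[1], edges))
            buf.sort(key=lambda d: -d[0])
            V[(i, j)] = buf[:K]              # best K in buf
    return V[(0, n - 1)][0]                  # best full-span derivation

For instance, parse(["I", "saw", "her"], lambda h, m: 1.0 if h == 1 else 0.0) returns the derivation rooted at "saw" with edges {(1, 0), (1, 2)}.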
4 models for automatic annotation adaptation :
In this section, we present a series of discriminative learning algorithms for the automatic adaptation of annotation guidelines. To facilitate the description of the algorithms, several shortened forms are adopted. We use source corpus to denote the corpus whose annotation guideline we do not require (the source side of the adaptation), and target corpus to denote the corpus with the desired guideline. Correspondingly, the annotation guidelines of the two corpora are denoted as the source guideline and the target guideline, and the classifiers following the two guidelines are named the source classifier and the target classifier, respectively. Given a parallel annotated corpus, that is, a corpus labeled under both annotation guidelines, a transfer classifier can be trained to capture the regularity of the transformation from the source annotation to the target annotation. The classifiers mentioned here are normal discriminative classifiers that take a set of features as input and give a classification label as output. For the POS tagging problem, the classification label is a POS tag; for the parsing task, it is a dependency edge, a constituency span, or a shift-reduce action. The parallel annotated corpus is the knowledge source of annotation adaptation: its annotation quality and size determine the accuracy of the transfer classifier. Such a corpus is difficult to build manually, but a noisy one can be generated automatically from the source corpus and the target corpus. For example, we can apply the source classifier to the target corpus, thus generating a parallel annotated corpus with noisy source annotations and accurate target annotations. The training procedure of the transfer classifier predicts the target annotations with guiding features extracted from the source annotations. This alleviates the effect of the noise in the source annotations, so that the regularities of annotation adaptation can still be learned accurately; and by further reducing the noise in the automatically generated parallel annotated corpus, higher adaptation accuracy can be achieved. In the following sections, we first describe the transfer classifier, which embodies the intrinsic principles of annotation adaptation, and then describe a series of successively enhanced models developed from our previous investigations (Jiang, Huang, and Liu 2009; Jiang et al. 2012). In the simplest model (Model 1), two classifiers, a source classifier and a transfer classifier, are used in a cascade: the classification results of the lower source classifier provide additional guiding features to the upper transfer classifier, yielding an improved classification result. A variant of the first model (Model 2) instead uses the transfer classifier to transform the source corpus from the source guideline to the target guideline, and then merges the transformed source corpus with the target corpus in order to train an improved target classifier on the enlarged corpus. An optimized model (Model 3) is further proposed on the basis of Model 2: two optimization strategies, iterative training and predict-self re-estimation, are adopted to make fuller use of the knowledge in heterogeneous corpora. To learn the regularity of the adaptation from one annotation guideline to another, a parallel annotated corpus is needed to train the transfer classifier. With the target annotation labels as learning objectives and the source annotation labels as guiding information, the transfer classifier learns the statistical regularity of the adaptation from the source annotations to the target annotations.
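The automatic construction of the (noisy) parallel annotated corpus can be sketched as follows; source_classifier is an assumed wrapper around any trained model of Section 3, not an interface defined in this article.

def build_parallel_corpus(source_classifier, target_corpus):
    """Pair noisy source-style predictions with gold target annotations.

    target_corpus: iterable of (sentence, gold_target_annotation).
    Returns triples usable for training a transfer classifier."""
    parallel = []
    for sentence, gold_target in target_corpus:
        noisy_source = source_classifier.predict(sentence)   # source guideline
        parallel.append((sentence, noisy_source, gold_target))
    return parallel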
The training procedure of the transfer classifier is analogous to that of a normal classifier except for the introduction of additional guiding features. For word segmentation, the most intuitive guiding feature is the source classification label itself. For dependency parsing, an effective guiding feature is the dependency path between the hypothesized head and modifier, as shown in Figure 3. Our effort is not limited to this, however; more specialized features are introduced: a classification label or dependency path is attached to each feature of the baseline classifier to generate combined guiding features. This is similar to the feature design in discriminative dependency parsing (McDonald, Crammer, and Pereira 2005; McDonald and Pereira 2006), where the basic features, composed of words and POS tags in the context, are also conjoined with link direction and distance to generate more specialized features. Table 3 shows an example of guiding features (as well as baseline features) for word segmentation, where "α = B" indicates that the source classification label of the current character is B, marking the beginning of a word. The combination strategy derives a series of specific features, helping the transfer classifier to produce more precise classifications. The parameter-tuning procedure of the transfer classifier automatically learns the regularity of using the source annotations to guide the classification decisions. In decoding, if the current character shares some basic features of Table 3 and is classified as B under the source annotation, then the transfer classifier will probably classify it as M. In addition, the original features of the normal classifier are retained in order to leverage the knowledge in the target annotations of the parallel annotated corpus, and the training procedure of the transfer classifier also learns the relative weights between the guiding features and the original features. Therefore, the knowledge from both the source annotation and the target annotation is automatically integrated, and higher and more stable prediction accuracy can be achieved. The most intuitive model for annotation adaptation uses two cascaded classifiers, the source classifier and the transfer classifier, to integrate the knowledge in corpora with different annotation guidelines. In the training procedure, a source classifier is trained on the source corpus and used to process the target corpus, generating a parallel annotated corpus (albeit a noisy one). Then, the transfer classifier is trained on the parallel annotated corpus, with the target annotations as the classification labels and the source annotations as guiding information. Figure 4 depicts the training pipeline. The best training iterations for the source classifier and the transfer classifier are determined on the development sets of the source corpus and target corpus, respectively. In the decoding procedure, a sequence of characters (for word segmentation) or words (for dependency parsing) is first input into the source classifier to obtain a classification result under the source guideline; it is then input into the transfer classifier, with this classification result as guiding information, to obtain the final result following the target guideline. This coincides with the stacking method for combining dependency parsers (Martins et al. 2008; Nivre and McDonald 2008), and is also similar to the Pred baseline for domain adaptation of Daumé III and Marcu (2006) and Daumé III (2007).
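The guiding-feature construction just described can be sketched as follows; the feature strings are illustrative rather than the exact templates of Table 3.

def transfer_features(base_features, source_label):
    """Conjoin each base feature with the source-side prediction.

    source_label is the guiding information: a classification label for
    word segmentation, or a dependency-path code for parsing."""
    guided = [f"alpha={source_label}"]                    # the label itself
    guided += [f"alpha={source_label}|{f}" for f in base_features]
    return base_features + guided                         # keep originals too

For example, transfer_features(["C0=X"], "B") yields ["C0=X", "alpha=B", "alpha=B|C0=X"], so the trained weights can arbitrate between guided and original evidence.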
Figure 5 shows the pipeline for decoding. The previous model has a drawback: it must cascade two classifiers at decoding time to integrate the knowledge in the two corpora, which seriously degrades processing speed. Here we describe a variant of the previous model that aims at automatic transformation (rather than integration, as in Model 1) between the annotation guidelines of human-annotated corpora. The source classifier and the transfer classifier are trained in the same way as in the previous model. The transfer classifier is then used to process the source corpus, with the source annotations as guiding information, so as to relabel the source corpus under the target annotation guideline. By merging the target corpus with the transformed source corpus for the training of the final classifier, improved classification accuracy can be achieved. From this point on we describe the pipelines of annotation transformation in pseudo-code, for simplicity and ease of extension. Algorithm 3 shows the overall training algorithm for the variant model. C_s and C_t denote the source corpus and the target corpus. M_s and M_s→t denote the source classifier and the transfer classifier. C^q_p denotes corpus p relabeled under annotation guideline q; for example, C^t_s is a corpus that labels the text of the source corpus with the target guideline. The functions TRAIN and TRANSTRAIN train the source classifier and the transfer classifier, respectively; both invoke the perceptron algorithm, but with different feature sets. The functions ANNOTATE and TRANSANNOTATE call the function DECODE with different models (source/transfer classifiers), feature functions (without/with guiding features), and inputs (raw/source-annotated sentences). In the algorithm, the parameters corresponding to the development sets are omitted for simplicity.

Algorithm 3: Baseline annotation adaptation.
function ANNOTRANS(C_s, C_t)
    M_s ← TRAIN(C_s)                     ⊲ source classifier
    C^s_t ← ANNOTATE(M_s, C_t)
    M_s→t ← TRANSTRAIN(C^s_t, C_t)       ⊲ transfer classifier
    C^t_s ← TRANSANNOTATE(M_s→t, C_s)
    C^t_* ← C^t_s ∪ C_t                  ⊲ integrated corpus with target guideline
    return C^t_*
end function
function DECODE(M, Φ, x)
    return argmax_{y ∈ GEN(x)} S(y | M, Φ, x)
end function

Compared to the online knowledge integration of the previous model, annotation transformation improves performance in an offline manner, by integrating the corpora before the training procedure. This enables processing speeds several times faster than the cascaded classifiers of the previous model. A further advantage is that the knowledge in more than two corpora can be integrated without slowing down the final classifier. The training of the transfer classifier is based on an automatically generated (rather than gold-standard) parallel annotated corpus, where the source annotations are provided by the source classifier. The performance of annotation transformation is therefore determined by the accuracy of the source classifier, and a more accurate parallel annotated corpus, and hence better annotation adaptation, can be obtained if an improved source classifier is available. Based on Model 2, two optimization strategies, iterative bidirectional training and the predict-self hypothesis, are introduced to optimize the parallel annotated corpora for better annotation adaptation. We first use an iterative training procedure to gradually improve the transformation accuracy by iteratively optimizing the parallel annotated corpora.
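Before turning to the iterative procedure, the data flow of Algorithm 3 can be sketched in Python as follows; train, annotate, transtrain, and transannotate are assumed callables wrapping the perceptron models of Section 3.

def annotrans(C_s, C_t, train, annotate, transtrain, transannotate):
    """Model 2: transform the source corpus, then merge (Algorithm 3)."""
    M_s = train(C_s)                    # source classifier
    C_st = annotate(M_s, C_t)           # noisy source-style labels on C_t
    M_s2t = transtrain(C_st, C_t)       # transfer classifier
    C_ts = transannotate(M_s2t, C_s)    # source corpus relabeled under
                                        # the target guideline
    return C_ts + C_t                   # merged training corpus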
In each training iteration, both source-to-target and target-to-source annotation transformations are performed, and the transformed corpora are used to provide better annotations for the parallel annotated corpora of the next iteration. In the new iteration, the better parallel annotated corpora then yield more accurate transfer classifiers, which in turn generate better transformed corpora. Algorithm 4 shows the overall procedure of the iterative training method. The loop of lines 6–13 iteratively performs source-to-target and target-to-source annotation transformations. The source annotations of the parallel annotated corpora, C^s_t and C^t_s, are initialized by applying the source and target classifiers to the target and source corpora, respectively (lines 2–5). In each training iteration, the transfer classifiers are trained on the current parallel annotated corpora (lines 7–8); they are used to produce the transformed corpora (lines 9–10), which provide better annotations for the parallel annotated corpora of the next iteration. The iterative training terminates when the performance of the classifier trained on the merged corpus C^t_s ∪ C_t converges (line 13).

Algorithm 4: Iterative annotation transformation.
1: function ITERANNOTRANS(C_s, C_t)
2:     M_s ← TRAIN(C_s)                      ⊲ source classifier
3:     C^s_t ← ANNOTATE(M_s, C_t)
4:     M_t ← TRAIN(C_t)                      ⊲ target classifier
5:     C^t_s ← ANNOTATE(M_t, C_s)
6:     repeat
7:         M_s→t ← TRANSTRAIN(C^s_t, C_t)    ⊲ source-to-target transfer classifier
8:         M_t→s ← TRANSTRAIN(C^t_s, C_s)    ⊲ target-to-source transfer classifier
9:         C^t_s ← TRANSANNOTATE(M_s→t, C_s)
10:        C^s_t ← TRANSANNOTATE(M_t→s, C_t)
11:        C^t_* ← C^t_s ∪ C_t
12:        M_* ← TRAIN(C^t_*)                ⊲ enhanced classifier trained on merged corpus
13:    until EVAL(M_*) converges
14:    return C^t_*
15: end function
16: function DECODE(M, Φ, x)
17:    return argmax_{y ∈ GEN(x)} S(y | M, Φ, x)
18: end function

The discriminative training of TRANSTRAIN predicts the target annotations under the guidance of the source annotations. In the first iteration, the transformed corpora generated by the transfer classifiers are better than the initial ones generated by the source and target classifiers, thanks to the guiding features. In the following iterations, the transformed corpora provide better annotations for the parallel annotated corpora of the subsequent iteration; the transformation accuracy improves gradually along with the optimization of the parallel annotated corpora, until convergence. The predict-self hypothesis is introduced to improve the transformation accuracy from another perspective. This hypothesis is implicit in many unsupervised learning approaches, such as Markov random fields; it has also been used successfully by Daumé III (2009) in unsupervised dependency parsing. The basic idea of predict-self is that if a prediction is a good candidate for an input, it should be easy to convert it back to the original input by a reverse procedure. Applied to annotation transformation, predict-self indicates that a better transformation candidate following the target guideline can be more easily transformed back to the original form following the source guideline. The most intuitive strategy for introducing the predict-self methodology into annotation transformation is to use a reversed annotation transformation procedure to filter out unreliable predictions of the previous transformation.
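Both variants of the predict-self idea, the filtering check and the weighted re-estimation elaborated in the next paragraph, can be sketched as follows. This is a minimal sketch: transannotate_one and score are assumed helpers (transforming a single sentence and scoring a candidate under a model, respectively), and the log-linear combination anticipates Equation (3) below.

def predict_self_filter(x_source, M_s2t, M_t2s, transannotate_one):
    """Keep a source-to-target prediction only if it maps back to the input."""
    y_target = transannotate_one(M_s2t, x_source)
    x_back = transannotate_one(M_t2s, y_target)
    return y_target if x_back == x_source else None   # discard otherwise

def reestimated_score(y, x, score, M_s2t, M_t2s, lam=0.5):
    """Log-linear combination of forward and reverse transformation scores;
    the weight lam is tuned on a development set (cf. Equation (3))."""
    return (1 - lam) * score(y, M_s2t, x) + lam * score(x, M_t2s, y)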
In detail, a source-to-target annotation transformation is first performed on the source corpus to obtain a prediction that follows the target guideline; a second, target-to-source transformation is then performed on this prediction to check whether it can be transformed back into the original source annotation. Source-to-target predictions that fail this reverse verification are discarded, so this strategy can be called predict-self filtering. A more sophisticated strategy can be called predict-self re-estimation. Instead of using the reversed transformation procedure for filtering, the re-estimation strategy integrates the scores given by the source-to-target and target-to-source transformation models when evaluating transformation candidates. By properly tuning the relative weights of the two transformation directions, better transformation performance is achieved. The scores of the two transformation models are weighted and integrated in a log-linear manner:

$S^{+}(y \mid M_{s \to t}, M_{t \to s}, \Phi, x) = (1 - \lambda) \cdot S(y \mid M_{s \to t}, \Phi, x) + \lambda \cdot S(x \mid M_{t \to s}, \Phi, y) \qquad (3)$

The weight parameter λ is tuned on the development set. To integrate the predict-self re-estimation into the iterative transformation training, a reversed transformation model is introduced, and the enhanced scoring function is used whenever the function TRANSANNOTATE invokes the function DECODE.

5 experiments :
To evaluate the performance of annotation adaptation, we experiment on two important NLP tasks, Chinese word segmentation and dependency parsing, both of which can be modeled as discriminative classification problems. For both tasks, we report the performance of the baseline models and of the annotation adaptation algorithms. We perform annotation adaptation for word segmentation from People's Daily (PD) (Yu et al. 2001) to Penn Chinese Treebank 5.0 (CTB) (Xue et al. 2005). The two corpora are built according to different segmentation guidelines and differ considerably in size: CTB is smaller, with about 0.5M words, whereas PD is much larger, containing nearly 6M words. Table 4 shows the data partitioning for the two corpora. We train the baseline perceptron classifiers for Chinese word segmentation on the training sets of CTB and SPD, using the corresponding development sets to determine the best training iterations.

As a variant of Model 1, Model 2 shares the same transfer classifier and differs only in the training and decoding of the final classifier. Tables 8 and 9 show the performance of the systems resulting from Models 1 and 2, as well as of classifiers trained on the directly merged corpora. The time costs for decoding are also listed to facilitate a practical comparison. We find that the simple corpus-merging strategy leads to a dramatic decrease in accuracy, due to the different and incompatible annotation guidelines. Model 1, the simplest model for annotation adaptation, gives significant improvement over the baseline classifiers for word segmentation and dependency parsing. This indicates that the statistical regularities learned by the transfer classifiers bring performance improvement through the guided decisions of the cascaded classifiers. Model 2 leads to classifiers with accuracy increments comparable to those of Model 1 while consuming only one third of the decoding time, which is contrary to our expectation. The strategy of directly transforming the source corpus to the target guideline also facilitates the utilization of more than one source corpus.
We first introduce the iterative training strategy to Model 2. The corresponding development sets are used to determine the best training iterations for the iterative annotation transformations. After each iteration, we test the performance of the classifier trained on the merged corpora. Figures 7 and 8 show the performance curves for Chinese word segmentation and semantic dependency parsing, respectively, with iterations ranging from 1 to 10. The performance of Model 2 is naturally included in the curves (located at iteration 1). The curves show that, for both segmentation and parsing, the accuracies of the classifiers trained on the merged corpora consistently improve in the earlier iterations (e.g., from iteration 2 to iteration 5 for word segmentation). Experiments introducing predict-self filtering and predict-self re-estimation are shown in Figures 9 and 10. The curves show the performance of predict-self re-estimation over a series of weight parameters ranging from 0 to 1 in steps of 0.05. Note that in both figures, the points at λ = 0 show the performance of Model 2. We find that predict-self filtering brings a slight improvement over the baseline for word segmentation, but actually decreases accuracy for dependency parsing. An initial analysis of the experimental results reveals that the filtering strategy discards some complicated sentences in the source corpora, and that these discarded sentences would bring further improvement if properly used. For example, in word segmentation, predict-self filtering discards 5% of the sentences of the source corpus, containing nearly 10% of the training words. For both tasks, predict-self re-estimation outperforms the filtering strategy. With properly tuned weights, predict-self re-estimation makes better use of the training data; the largest accuracy improvements over Model 2 are 0.3 points for word segmentation and 0.6 points for dependency parsing. Figures 11 and 12 show the performance curves after introducing both iterative training and predict-self re-estimation on the basis of Model 2 (this enhanced model is denoted as Model 3).

6 discussion: application situations :
Automatic annotation adaptation aims to transform the annotations of a corpus into annotations following other guidelines. The models for annotation adaptation use a transfer classifier to learn the statistical correspondence regularities between different annotation guidelines. These regularities are learned from a parallel annotated corpus, which need not be manually annotated: the models train the transfer classifier on an automatically generated parallel annotated corpus, obtained by processing one corpus with a classifier trained on another. That is to say, if we want to conduct annotation adaptation across several corpora, no additional corpora need to be manually annotated. This makes the strategy of annotation adaptation more general, because manually annotating a parallel annotated corpus is much harder, regardless of the language or the NLP problem under consideration. To tackle the noise in the automatically generated annotations, the advanced models we designed produce a better parallel annotated corpus through strategies such as iterative optimization. Automatic annotation adaptation can be applied in any situation where we have multiple corpora with different and incompatible annotation philosophies for the same task.
As our case studies show, both Chinese word segmentation and dependency parsing have more than one corpus with different annotation guidelines, such as the People's Daily corpus and the Penn Chinese Treebank for Chinese word segmentation. From a more abstract viewpoint, constituency grammar and dependency grammar can be treated as two annotation guidelines for parsing; the syntactic knowledge in a constituency treebank and a dependency treebank can therefore be integrated by automatic annotation adaptation. For example, the LinGo Redwoods Treebank could also be transformed to the annotation guideline of the Semantic Dependency Treebank. Furthermore, the annotations (such as a grammar) given by bilingual projection or unsupervised induction can be seen as following a special annotation philosophy. For bilingually projected annotations, the annotation guideline resembles that of the counterpart language; for unsupervised induced annotations, the annotation guideline reflects the statistical structural distribution of a specific data set. In both situations, the underlying annotation guidelines may differ greatly from those of the testing sets, which usually come from human-annotated corpora. A system trained on a bilingually projected or unsupervised induced corpus may therefore perform poorly on an existing testing set; but if the projected or induced corpus has high internal consistency, it could improve a system trained on an existing corpus through automatic annotation adaptation. From this point of view, the practical value of current work on bilingual projection and unsupervised induction may be underestimated, and annotation adaptation could make better use of the projected or induced knowledge.(3)

7 related work :
There has already been some preliminary work tackling the divergence between different annotation guidelines. Gao et al. (2004) described a transformation-based converter to transfer a word segmentation result from one annotation guideline to another. They designed class-type transformation templates and used the transformation-based error-driven learning method of Brill (1995) to learn which word delimiters should be modified. Many efforts have also been devoted to manual treebank transformation, in which PTB is adapted to other grammar formalisms such as CCG and LFG (Cahill et al. 2002; Hockenmaier and Steedman 2007). However, all of these approaches are heuristic-based: they need manually designed transformation templates and involve heavy human engineering. Such strategies are hard to generalize to POS tagging, not to mention more complicated structural prediction tasks. We previously investigated the automatic integration of word segmentation knowledge in differently annotated corpora (Jiang, Huang, and Liu 2009; Jiang et al. 2012), which can be seen as preliminary work on automatic annotation adaptation. Motivated by our initial investigation, researchers have applied similar methodologies to constituency parsing (Sun, Wang, and Zhang 2010; Zhu, Zhu, and Hu 2011) and word segmentation (Sun and Wan 2012). This previous work verified the effectiveness of automatic annotation adaptation, but revealed neither the essential definition of the problem nor the intrinsic principles of the solutions. The present work, in contrast, clearly defines the problem of annotation adaptation, reveals the intrinsic principles of its solutions, and systematically describes a series of gradually improved models.
The most advanced model learns the transformation regularities much better and achieves significantly higher accuracy for both word segmentation and dependency parsing, without slowing down the final language processors. The problem of automatic annotation adaptation can be seen as a special case of transfer learning (Pan and Yang 2010), in which the source and target tasks are similar but not identical. More specifically, annotation adaptation assumes that the labeling mechanism is the same across the source and target tasks, but that the predictive functions are different. The goal of annotation adaptation is to adapt the source predictive function for use on the target task by exploiting the labeled data of both tasks. Furthermore, automatic annotation adaptation falls roughly into the spectrum of relational-knowledge-transfer problems (Mihalkova, Huynh, and Mooney 2007; Mihalkova and Mooney 2008; Davis and Domingos 2009), but it tackles problems where the relations among data in the source and target domains can be largely isomerous, that is, annotated with different and incompatible schemes. This work enriches the research on transfer learning by proposing and solving an NLP problem different from previously studied settings. For more details of transfer learning, please refer to the survey of Pan and Yang (2010).
(3) We have performed preliminary experiments on word segmentation. Bilingual projection was conducted from English to Chinese with the Chinese-English FBIS data as the bilingual corpus. Through annotation adaptation, the projected corpus for word segmentation brings a significant F-measure increment of nearly 0.6 points over the baseline trained on CTB only.
The training procedure for an annotation adaptation model requires a parallel annotated corpus (which may be automatically generated); this places the method in the neighborhood of the family of approaches known as annotation projection (Hwa et al. 2002, 2005; Ganchev, Gillenwater, and Taskar 2009; Smith and Eisner 2009; Jiang and Liu 2010; Das and Petrov 2011). Essentially, however, annotation adaptation and annotation projection tackle different problems: the former transforms annotations from one guideline to another within the same language, whereas the latter projects annotations (along with the annotation guideline) from one language to another. The machine learning methods for annotation adaptation therefore focus on the automatic transformation of annotations, while those for annotation projection focus on bilingual projection across languages. Co-training (Sarkar 2001) and classifier combination (Nivre and McDonald 2008) are two techniques for training improved dependency parsers. The co-training technique lets two different parsing models learn from each other while parsing unlabeled text: one model selects unlabeled sentences it can confidently parse and provides them to the other model as additional training data, in order to train more powerful parsers. Classifier combination lets graph-based and transition-based dependency parsers exploit features extracted from each other's parsing results, to obtain combined, enhanced parsers. These two techniques aim to let two models learn from each other on the same corpus, with the same distribution and annotation guideline, whereas our strategy integrates the knowledge in multiple corpora with different annotation guidelines.
The iterative training procedure used in the optimized model shares some similarities with the co-training algorithm in parsing (Sarkar 2001), where the training procedure lets two different models learn from each other during the parsing of raw text. The key idea of co-training is to exploit the complementarity of different parsing models in order to mine additional training data from raw text, whereas iterative training for annotation adaptation emphasizes the iterative optimization of the parallel annotated corpora used to train the transfer classifiers. The predict-self methodology is implicit in many unsupervised learning approaches and has been used successfully in unsupervised dependency parsing (Daumé III 2009); we adapt this idea to the scenario of annotation adaptation to improve transformation accuracy. In recent years, much effort has been devoted to improving word segmentation and dependency parsing. Examples include the introduction of global training and complicated features (Zhang and Clark 2007, 2010); the investigation of word structure (Li 2011); hybrid, joint, or stacked modeling strategies (Nakagawa and Uchimoto 2007; Kruengkrai et al. 2009; Wang, Zong, and Su 2010; Sun 2011); and semi-supervised and unsupervised techniques utilizing raw text (Zhao and Kit 2008; Johnson and Goldwater 2009; Mochihashi, Yamada, and Ueda 2009; Hewlett and Cohen 2011). We believe that annotation adaptation can be adopted jointly with complicated features, system combination, and semi-supervised or unsupervised techniques to further improve the performance of word segmentation and dependency parsing.

8 conclusion and future work :
We have described the problem of annotation adaptation and the intrinsic principles of its solutions, and proposed a series of successively enhanced models that automatically bridge the divergence between different annotation formats. These models learn the statistical regularities of adaptation between different annotation guidelines and integrate the knowledge in corpora annotated under them. For Chinese word segmentation and semantic dependency parsing, the annotation adaptation algorithms bring significant improvements by integrating the knowledge in differently annotated corpora: People's Daily and the Penn Chinese Treebank for word segmentation, and the Penn Chinese Treebank and the Semantic Dependency Treebank for dependency parsing. For both tasks, annotation adaptation leads to a segmenter and a parser achieving state-of-the-art performance, despite using only local features in single classifiers. Many aspects of annotation adaptation deserve further investigation. First, the models for annotation adaptation can be adapted to other NLP tasks, such as semantic analysis. Second, jointly tackling divergences in both annotations and domains is an important problem. In addition, an unsupervised-induced or bilingually projected corpus, despite performing poorly on a given testing set, may have high internal annotation consistency. That is to say, such induced corpora can be treated as knowledge sources following another annotation guideline, and the performance of current unsupervised or bilingually projected models may be seriously underestimated. Annotation adaptation may thus offer a new perspective on knowledge induction and its measurement for such methods.
references :
Blitzer, John, Ryan McDonald, and Fernando Pereira. 2006. Domain adaptation with structural correspondence learning. In Proceedings of EMNLP, pages 120–128, Sydney.
Brill, Eric. 1995. Transformation-based error-driven learning and natural language processing: A case study in part-of-speech tagging. Computational Linguistics, 21(4):543–565.
Buchholz, Sabine and Erwin Marsi. 2006. CoNLL-X shared task on multilingual dependency parsing. In Proceedings of CoNLL, pages 149–164, New York, NY.
Cahill, Aoife, Mairead McCarthy, Josef van Genabith, and Andy Way. 2002. Automatic annotation of the Penn Treebank with LFG f-structure information. In Proceedings of the LREC Workshop, Las Palmas.
Che, Wanxiang, Meishan Zhang, Yanqiu Shao, and Ting Liu. 2012. SemEval-2012 task 5: Chinese semantic dependency parsing. In Proceedings of SemEval, pages 378–384, Montreal.
Collins, Michael. 2002. Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms. In Proceedings of EMNLP, pages 1–8, Philadelphia, PA.
Das, Dipanjan and Slav Petrov. 2011. Unsupervised part-of-speech tagging with bilingual graph-based projections. In Proceedings of ACL, pages 600–609, Portland, OR.
Daumé III, Hal. 2007. Frustratingly easy domain adaptation. In Proceedings of ACL, pages 256–263, Prague.
""Unsupervised searchbased structured prediction"", ""venue"": ""Proceedings of ICML, pages 209\u2013216, Montreal."", ""year"": 2009}, {""authors"": [""Daum\u00e9 III"", ""Hal"", ""Daniel Marcu.""], ""title"": ""Domain adaptation for statistical classifiers"", ""venue"": ""Journal of Artificial Intelligence Research, 26:101\u2013126."", ""year"": 2006}, {""authors"": [""Davis"", ""Jesse"", ""Pedro Domingos.""], ""title"": ""Deep transfer via second-order Markov logic"", ""venue"": ""Proceedings of ICML, pages 217\u2013224, Montreal."", ""year"": 2009}, {""authors"": [""Eisner"", ""Jason M.""], ""title"": ""Three new probabilistic models for dependency parsing: An exploration"", ""venue"": ""Proceedings of COLING, pages 340\u2013345, Copenhagen."", ""year"": 1996}, {""authors"": [""Ganchev"", ""Kuzman"", ""Jennifer Gillenwater"", ""Ben Taskar.""], ""title"": ""Dependency grammar induction via bitext projection constraints"", ""venue"": ""Proceedings of ACL, pages 369\u2013377, Singapore."", ""year"": 2009}, {""authors"": [""Gao"", ""Jianfeng"", ""Andi Wu"", ""Mu Li"", ""Chang-Ning Huang"", ""Hongqiao Li"", ""Xinsong Xia"", ""Haowei Qin.""], ""title"": ""Adaptive Chinese word segmentation"", ""venue"": ""Proceedings of ACL, pages 462\u2013469, Barcelona."", ""year"": 2004}, {""authors"": [""Hewlett"", ""Daniel"", ""Paul Cohen.""], ""title"": ""Fully unsupervised word segmentation with BVE and MDL"", ""venue"": ""Proceedings of ACL, pages 540\u2013545, Portland, OR."", ""year"": 2011}, {""authors"": [""Hockenmaier"", ""Julia"", ""Mark Steedman.""], ""title"": ""CCGBank: A corpus of CCG derivations and dependency structures extracted from the Penn treebank"", ""venue"": ""Computational Linguistics, 33(3):355\u2013396."", ""year"": 2007}, {""authors"": [""Hwa"", ""Rebecca"", ""Philip Resnik"", ""Amy Weinberg"", ""Clara Cabezas"", ""Okan Kolak.""], ""title"": ""Bootstrapping parsers via syntactic projection across parallel texts"", ""venue"": ""Natural Language Engineering, 11(3):311\u2013325."", ""year"": 2005}, {""authors"": [""Hwa"", ""Rebecca"", ""Philip Resnik"", ""Amy Weinberg"", ""Okan Kolak.""], ""title"": ""Evaluating translational correspondence using annotation projection"", ""venue"": ""Proceedings of ACL, pages 392\u2013399, Philadephia, PA."", ""year"": 2002}, {""authors"": [""Jiang"", ""Wenbin"", ""Liang Huang"", ""Qun Liu.""], ""title"": ""Automatic adaptation of annotation standards: Chinese word segmentation and POS tagging \u2013 A case study"", ""venue"": ""Proceedings of ACL, pages 522\u2013530,"", ""year"": 2009}, {""authors"": [""Jiang"", ""Wenbin"", ""Liang Huang"", ""Yajuan L\u00fc"", ""Qun Liu.""], ""title"": ""A cascaded linear model for joint Chinese word segmentation and part-of-speech tagging"", ""venue"": ""Proceedings of ACL, pages 897\u2013904, Columbus, OH."", ""year"": 2008}, {""authors"": [""Jiang"", ""Wenbin"", ""Qun Liu.""], ""title"": ""Dependency parsing and projection based on word-pair classification"", ""venue"": ""Proceedings of the ACL, pages 12\u201320, Uppsala."", ""year"": 2010}, {""authors"": [""Jiang"", ""Wenbin"", ""Fandong Meng"", ""Qun Liu"", ""Yajuan L\u00fc.""], ""title"": ""Iterative annotation transformation with predict-self reestimation for Chinese word segmentation"", ""venue"": ""Proceedings of EMNLP,"", ""year"": 2012}, {""authors"": [""Johnson"", ""Mark"", ""Sharon Goldwater.""], ""title"": ""Improving nonparameteric Bayesian inference: Experiments on unsupervised word segmentation with adaptor grammars"", ""venue"": ""Proceedings 
Kruengkrai, Canasai, Kiyotaka Uchimoto, Junichi Kazama, Yiou Wang, Kentaro Torisawa, and Hitoshi Isahara. 2009. An error-driven word-character hybrid model for joint Chinese word segmentation.
Li, Zhongguo. 2011. Parsing the internal structure of words: A new paradigm for Chinese word segmentation. In Proceedings of ACL, pages 1405–1414, Portland, OR.
Marcus, Mitchell P., Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330.
Martins, André F. T., Dipanjan Das, Noah A. Smith, and Eric P. Xing. 2008. Stacking dependency parsers. In Proceedings of EMNLP, pages 157–166, Honolulu, HI.
McDonald, Ryan, Koby Crammer, and Fernando Pereira. 2005. Online large-margin training of dependency parsers. In Proceedings of ACL, pages 91–98, Ann Arbor, MI.
McDonald, Ryan and Fernando Pereira. 2006. Online learning of approximate dependency parsing algorithms. In Proceedings of EACL, pages 81–88, Trento.
Mihalkova, Lilyana, Tuyen Huynh, and Raymond J. Mooney. 2007. Mapping and revising Markov logic networks for transfer learning. In Proceedings of AAAI, volume 7, pages 608–614, Vancouver.
Mihalkova, Lilyana and Raymond J. Mooney. 2008. Transfer learning by mapping with minimal target data. In Proceedings of the AAAI Workshop on Transfer Learning for Complex Tasks, Chicago, IL.
Mochihashi, Daichi, Takeshi Yamada, and Naonori Ueda. 2009. Bayesian unsupervised word segmentation with nested Pitman-Yor language modeling. In Proceedings of ACL-IJCNLP.
Nakagawa, Tetsuji and Kiyotaka Uchimoto. 2007. A hybrid approach to word segmentation and POS tagging. In Proceedings of ACL, pages 217–220, Prague.
Ng, Hwee Tou and Jin Kiat Low. 2004. Chinese part-of-speech tagging: One-at-a-time or all-at-once? Word-based or character-based? In Proceedings of EMNLP.
Nivre, Joakim and Ryan McDonald. 2008. Integrating graph-based and transition-based dependency parsers. In Proceedings of ACL, pages 950–958, Columbus, OH.
Oepen, Stephan, Kristina Toutanova, Stuart Shieber, and Thorsten Brants. 2002. The LinGo Redwoods treebank: Motivation and preliminary applications.
Pan, Sinno Jialin and Qiang Yang. 2010. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345–1359.
Sarkar, Anoop. 2001. Applying co-training methods to statistical parsing. In Proceedings of NAACL, pages 1–8, Pittsburgh, PA.
Smith, David and Jason Eisner. 2009. Parser adaptation and projection with quasi-synchronous grammar features. In Proceedings of EMNLP, volume 2, pages 822–831, Singapore.
Sun, Weiwei. 2011. A stacked sub-word model for joint Chinese word segmentation and part-of-speech tagging. In Proceedings of ACL, pages 1385–1394, Portland, OR.
Sun, Weiwei and Xiaojun Wan. 2012. Reducing approximation and estimation errors for Chinese lexical processing with heterogeneous annotations. In Proceedings of ACL, volume 1, pages 232–241.
Sun, Weiwei, Rui Wang, and Yi Zhang. 2010. Discriminative parse reranking for Chinese with homogeneous and heterogeneous annotations. In Proceedings of CIPS-SIGHAN, Beijing.
Wang, Kun, Chengqing Zong, and Keh-Yih Su. 2010. A character-based joint model for Chinese word segmentation. In Proceedings of COLING, pages 1173–1181, Beijing.
Xue, Nianwen and Libin Shen. 2003. Chinese word segmentation as LMR tagging. In Proceedings of the SIGHAN Workshop, volume 17, pages 176–179, Sapporo.
Xue, Nianwen, Fei Xia, Fu-Dong Chiou, and Martha Palmer. 2005. The Penn Chinese Treebank: Phrase structure annotation of a large corpus. Natural Language Engineering, 11(2):207–238.
Yamada, H. and Y. Matsumoto. 2003. Statistical dependency analysis with support vector machines. In Proceedings of IWPT, pages 195–206, Nancy.
Yu, Shiwen, Jianming Lu, Xuefeng Zhu, Huiming Duan, Shiyong Kang, Honglin Sun, Hui Wang, Qiang Zhao, and Weidong Zhan. 2001. Processing norms of modern Chinese corpus. Technical report.
Zhang, Yue and Stephen Clark. 2007. Chinese segmentation with a word-based perceptron algorithm. In Proceedings of ACL, pages 840–847, Prague.
Zhang, Yue and Stephen Clark. 2010. A fast decoder for joint word segmentation and POS-tagging using a single discriminative model. In Proceedings of EMNLP, pages 843–852, Cambridge, MA.
Zhao, Hai and Chunyu Kit. 2008. Unsupervised segmentation helps supervised learning of character tagging for word segmentation and named entity recognition. In Proceedings of IJCNLP.
Zhu, Muhua, Jingbo Zhu, and Minghan Hu. 2011. Better automatic treebank conversion using a feature-based approach. In Proceedings of ACL, volume 2, pages 715–719, Portland, OR.

acknowledgments :
Jiang, Lü, and Liu were supported by the National Natural Science Foundation of China (contract 61202216) and the National Key Technology R&D Program (no. 2012BAH39B03). Huang was supported in part by the DARPA DEFT Project (FA8750-13-2-0041). Liu was partially supported by Science Foundation Ireland (grant no. 07/CE/I1142) as part of the CNGL at Dublin City University. We also thank the anonymous reviewers for their insightful comments. Finally, we thank Chris Hokamp for proofreading.

automatic adaptation of annotations :
Wenbin Jiang∗ (Chinese Academy of Sciences), Yajuan Lü∗ (Chinese Academy of Sciences), Liang Huang∗∗ (Queens College and Graduate Center, The City University of New York), Qun Liu∗† (Dublin City University / Chinese Academy of Sciences)
Manually annotated corpora are indispensable resources, yet for many annotation tasks, such as the creation of treebanks, there exist multiple corpora with different and incompatible annotation guidelines. This leads to an inefficient use of human expertise, but it could be remedied by integrating knowledge across corpora with different annotation guidelines. In this article we describe the problem of annotation adaptation and the intrinsic principles of its solutions, and present a series of successively enhanced models that can automatically bridge the divergence between different annotation formats. We evaluate our algorithms on the tasks of Chinese word segmentation and dependency parsing. For word segmentation, where there are no universal segmentation guidelines because of the lack of morphology in Chinese, we perform annotation adaptation from the much larger People's Daily corpus to the smaller but more popular Penn Chinese Treebank. For dependency parsing, we perform annotation adaptation from the Penn Chinese Treebank to a semantics-oriented Dependency Treebank, which is annotated using significantly different annotation guidelines. In both experiments, automatic annotation adaptation brings significant improvement, achieving state-of-the-art performance despite the use of purely local features in training.
∗ Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences, No. 6 Kexueyuan South Road, Haidian District, P.O. Box 2704, Beijing 100190, China. E-mail: {jiangwenbin, liuqun, lvyajuan}@ict.ac.cn.
∗∗ Department of Computer Science, Queens College / CUNY, 65-30 Kissena Blvd., Queens, NY 11367. E-mail: liang.huang.sh@gmail.com.
† Centre for Next Generation Localisation, Faculty of Engineering and Computing, Dublin City University. E-mail: qliu@computing.dcu.ie.

Submission received: 24 April 2013; revised version received: 6 March 2014; accepted for publication: 18 April 2014. doi:10.1162/COLI_a_00210. © 2015 Association for Computational Linguistics

Guiding feature templates and example instances (type: Guiding):

Template          Instance
α                 α=B
α ◦ C−2           α=B ◦ C−2=美
α ◦ C−1           α=B ◦ C−1=副
α ◦ C0            α=B ◦ C0=总
α ◦ C1            α=B ◦ C1=统
α ◦ C2            α=B ◦ C2=访
α ◦ C−2C−1        α=B ◦ C−2C−1=美副
α ◦ C−1C0         α=B ◦ C−1C0=副总
α ◦ C0C1          α=B ◦ C0C1=总统
α ◦ C1C2          α=B ◦ C1C2=统访
α ◦ C−1C1         α=B ◦ C−1C1=副统
α ◦ Pu(C0)        α=B ◦ Pu(C0)=true
α ◦ T(C−2:2)      α=B ◦ T(C−2:2)=44444

[Table: corpus partition into sections and numbers of words.]

The baseline segmentation models are trained on the respective training sets, with the development sets used to determine the best number of training iterations. The performance measurement indicator for word segmentation is the balanced F-measure, $F = 2PR/(P + R)$, a function of precision P and recall R, where P is the percentage of words in the segmentation results that are segmented correctly, and R is the percentage of correctly segmented words among the gold standard words. For both syntactic and semantic dependency parsing, we concentrate on unlabeled parsing, which predicts the dependency structure for the input sentence without considering dependency labels. The perceptron-based baseline dependency models are trained on the training sets of DCTB and SDT, using the development sets to determine the best training iterations. The performance measurement indicator for dependency parsing is the Unlabeled Attachment Score, denoted as Precision P, indicating the percentage of words in the predicted dependency structure that are correctly attached to their head words. Figure 6 shows the learning curve of the averaged perceptron for word segmentation on the development set. Accuracies of the baseline classifiers are listed in Table 6. We also report the performance of the classifiers on the testing sets of the opposite corpora. The experimental results are in line with our expectations: a classifier performs better on its corresponding testing set, and performs significantly worse on testing data following a different annotation guideline. Table 7 shows the accuracies of the baseline syntactic and semantic parsers, as well as their performance on the testing sets of the opposite corpora. Similar to the situation in word segmentation, the two parsers give state-of-the-art accuracies on their own testing sets, but perform poorly on the other testing sets. This indicates the degree of divergence between the annotation guidelines of DCTB and SDT.

To approximate more general scenarios of annotation adaptation problems, we extract from PD a subset that is comparable to CTB in size. Because there are many extremely long sentences in the original PD corpus, we first split them into normal sentences at the full-stop punctuation symbol. We randomly select 20,000 sentences (0.45M words) from the PD training data as the new training set, and 1,000/1,000 sentences from the PD test data as the new testing/development sets. We label this smaller version of PD as SPD. The balanced source and target corpora also facilitate the investigation of annotation adaptation.
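The extraction step just described is simple enough to sketch in a few lines. In the sketch below the file name, the use of the Chinese full stop as the split point, and uniform sampling are assumptions for illustration; real preprocessing would also keep the full stop attached to each sentence.

```r
# Minimal sketch of the SPD extraction (hypothetical file name).
set.seed(1)
pd    <- readLines("pd_train.txt")               # original PD training text
sents <- unlist(strsplit(pd, "。", fixed = TRUE)) # split long lines at the full stop
sents <- trimws(sents[nzchar(sents)])
spd_train <- sample(sents, 20000)                 # 20,000-sentence training set
```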
Annotation adaptation for dependency parsing is performed from the CTB-derived syntactic dependency treebank (DCTB) (Yamada and Matsumoto 2003) to the Semantic Dependency Treebank (SDT) (Che et al. 2012). Semantic dependencies encode the semantic relationships between words, which are very different from syntactic dependencies. SDT is annotated on a small portion of the CTB text, as depicted in Table 5; we therefore use the subset of DCTB covering the remaining CTB text as the source corpus, and still denote the source corpus as DCTB in the following for simplicity.

Comparison with previous work (SemEval-2012 contest). The resulting model is denoted as Model 3. We find that the predict-self re-estimation brings improvement to the iterative training at each iteration, for both word segmentation and dependency parsing. The maximum performance is achieved at iteration 4 for word segmentation, and at iteration 5 for dependency parsing. The corresponding models are evaluated on the corresponding testing sets, and the experimental results are also shown in Tables 8 and 9. Compared to Model 1, the optimized annotation adaptation strategy, Model 3, leads to classifiers with significantly higher accuracy and to processing speeds that are several times faster. Tables 10 and 11 show the experimental results compared with previous work. For both Chinese word segmentation and semantic dependency parsing, automatic annotation adaptation yields state-of-the-art performance, despite using single classifiers with only local features. Note that the systems in the SemEval contest (Che et al. 2012) adopted many other technologies, including clause segmentation, system combination, and complicated features, as well as elaborate engineering. We also performed significance tests2 to verify the effectiveness of annotation adaptation. We find that for both Chinese word segmentation and semantic dependency parsing, annotation adaptation brings significant improvement (p < 0.001) over the baselines trained on the target corpora only.

2 http://www.cis.upenn.edu/~dbikel/download/compare.pl

To evaluate the stability of annotation adaptation, we perform a quantitative analysis of its results. For word segmentation, words are grouped according to POS tags; for dependency parsing, dependency edges are grouped according to POS tag pairs. For each category, the recall values of the baseline and of annotation adaptation are reported. To filter the lists, we set two significance thresholds, one on the proportion of a category and one on the performance fluctuation between the two systems. For word segmentation, only categories with proportions of more than 1% and with fluctuations of more than 0.1 points are retained; for dependency parsing, the two thresholds are 1% and 0.5 points. Tables 12 and 13 show the analysis results for word segmentation and dependency parsing, respectively (the tables list, for each word or edge type: its proportion, the baseline recall, the annotation adaptation recall, and the trend). For both tasks, annotation adaptation brings improvement in most situations.
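The filtering rule above is a simple two-threshold subset. A sketch, assuming a hypothetical data frame res with one row per category and made-up column names:

```r
# Hypothetical columns: prop (category proportion), rec_base and rec_ada
# (recall of baseline and of annotation adaptation, in points).
kept_seg <- subset(res, prop > 0.01 & abs(rec_ada - rec_base) > 0.1)  # segmentation
kept_dep <- subset(res, prop > 0.01 & abs(rec_ada - rec_base) > 0.5)  # dependency parsing
```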
We further investigate the effect of varying the size of the target corpus. Experiments are conducted for word segmentation and dependency parsing with fixed-size source corpora and varying-size target corpora, using SPD and DCTB as the source corpora, respectively. Figures 13 and 14 show the performance curves on the testing sets. We find that, for both word segmentation and dependency parsing, the improvements brought by annotation adaptation are more significant when the target corpora are smaller. This means that automatic annotation adaptation is most valuable when the target corpus is small, which is good news for the situation where the corpus of interest is small but a larger, differently annotated corpus exists. Of course, the comparison between automatic annotation adaptation and previous strategies that use no additional training data is unfair. Our work aims to find another way to improve NLP tasks: focusing on the collection of more training data instead of making full use of a single corpus. We believe that the performance of automatic annotation adaptation can be further improved by adopting the advanced technologies of previous work, such as complicated features and model combination. It would be useful to conduct experiments with more source-annotated training data, such as the SIGHAN data set for word segmentation, to investigate the trend of improvement as the number of annotated sentences grows further. It would also be valuable to evaluate the improved word segmenter and dependency parser on out-of-domain data sets. However, most current corpora for word segmentation and dependency parsing do not explicitly distinguish the domains of their data sections, making such evaluations difficult to conduct.

8 Conclusion and future work

We have described the problem of annotation adaptation and the intrinsic principles of its solutions, and proposed a series of successively enhanced models that can automatically adapt the divergence between different annotation formats. These models learn the statistical regularities of adaptation between different annotation guidelines, and integrate the knowledge in corpora with different annotation guidelines. For Chinese word segmentation and semantic dependency parsing, annotation adaptation algorithms bring significant improvements by integrating the knowledge in differently annotated corpora: People's Daily and the Penn Chinese Treebank for word segmentation, and the Penn Chinese Treebank and the Semantic Dependency Treebank for dependency parsing. For both tasks, annotation adaptation leads to a segmenter and a parser achieving the state of the art, despite using only local features in single classifiers. Many aspects of annotation adaptation deserve further investigation in the future. First, models for annotation adaptation can be adapted to other NLP tasks such as semantic analysis. Second, jointly tackling the divergences in both annotations and domains is an important problem. In addition, an unsupervised-induced or bilingually projected corpus, despite performing poorly on the specified testing data, may have high internal annotation consistency. That is to say, such induced corpora can be treated as knowledge sources following yet another annotation guideline, and the performance of current unsupervised or bilingually projected models may be seriously underestimated. Annotation adaptation may give us a new perspective on knowledge induction and measurement for such methods.

1 Introduction

Human evaluation is a key aspect of many NLP technologies. Automatic metrics that correlate with human judgments have been developed, especially in Machine Translation (MT), to relieve some of the burden.
Nevertheless, Callison-Burch et al. (2007) note in their meta-evaluation that in MT they still “consider the human evaluation to be primary.” Whereas MT has traditionally used a Likert scale score for the criteria of adequacy and fluency, this meta-evaluation noted that these are “seemingly difficult things for judges to agree on”; consequently, asking judges to express a preference between alternative translations is increasingly used on the grounds of ease and intuitiveness. Further, where the major empirical results of a paper are from automatic metrics, it is still useful to supplement them: as two examples, Collins, Koehn, and Kucerova (2005) and Lewis and Steedman (2013), in addition to a metric-based evaluation, present human judgments of preferences for their systems with respect to a baseline (Figure 1). For results in published work, the reader is typically left to draw inferences from the numbers. For the data in Figure 1, is there a strong preference for the non-baseline system overall, or do null preferences count against that? Is anything about the results statistically significant? There has been work in various areas of NLP on assessing the statistical significance of human judgment results. However, to our knowledge, the field has not taken advantage of a body of work dedicated to analyzing human preferences—predominantly in the context of sensory discrimination testing, and consequent consumer behavior—which is supported by a great deal of statistical theory. It is linked to the mixed-effect models that are increasingly prominent in psycholinguistics and elsewhere, it has associated freely available R software, and it permits questions like the following to be asked: Can we say that the judges are expressing a preference at all, as opposed to no preference? Is there an effect from judge disagreement or inconsistency?

∗ Department of Computing, Macquarie University, NSW 2109, Australia. E-mail: mark.dras@mq.edu.au.

Submission received: 10 April 2014; accepted for publication: 18 July 2014. doi:10.1162/COLI_a_00222. © 2015 Association for Computational Linguistics

We describe our sample data (Section 2), sketch a classical non-parametric approach (Section 3) and discuss the issues that arise from it, and look at some of the approaches used in MT (Section 4). We then (Section 5) introduce ideas from human sensory preference testing, where we review log-linear Bradley-Terry models of preferences and apply them to our data, including discussion of ties, of subject effects, and of multiple pairwise comparisons.

2 Two data sets

Single Pairwise Comparison. Our basic single pairwise comparisons are those presented in Figure 1(a) and (b). Figure 1(c) contains the counts we will be using in the later analysis: we refer to counts in favor of the new system by n+, those in favor of the baseline by n−, and those reflecting no preference as n0; the Lewis and Steedman results were over 87 pairwise judgments. We add some further artificial data to illustrate how the Log-Linear Bradley-Terry (LLBT) models of Section 5 behave in accordance with intuition for data where the conclusion should be clear. These comprise a distribution with a moderate preference for + over − and not too many null preferences (ModPref), a distribution of equal preferences over all three categories (EqualPref), a distribution with mostly null preferences and equal n+ and n− (NoPref), and a distribution with very few null preferences (StrongPref).
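For concreteness, such (n+, n0, n−) triples can be laid out as follows. The numbers here are made-up stand-ins chosen to match the verbal descriptions; the actual counts are in the paper's data bundle.

```r
# Illustrative (n+, n0, n-) counts for the four artificial distributions.
prefs <- rbind(ModPref    = c(60, 16, 34),   # moderate preference for +
               EqualPref  = c(40, 40, 40),   # equal preferences
               NoPref     = c(15, 80, 15),   # mostly undecided
               StrongPref = c(52,  6, 52))   # decided, but split evenly
colnames(prefs) <- c("n+", "n0", "n-")
prefs
```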
Multiple Pairwise Comparison. As noted in Section 1, there has been a trend toward using human preference judgments, particularly in the workshops on statistical machine translation (WMT) from Callison-Burch et al. (2007) onwards. Schemes have included asking humans to rank random samples of five translations, each from a different system. Vilar et al. (2007) propose using binary rather than n-ary rankings, arguing that this is a natural and easy task for judges. Here we present some artificial data of pairwise (binary) rankings to illustrate the techniques we discuss in Section 5, although these techniques can be extended to n-ary comparisons. In our example, there are four systems A, B, C, D and four judges J1, J2, J3, J4. The judges have pairwise ranked 240 translation pairs from systems x and y, indicating whether the translation of x is better than y (x ≻ y), worse than y (x ≺ y), or similar in quality to y (x = y); see Table 1. An overall impression, totalling all pairwise first preferences for each system (Section 4), gives a ranking of the systems as A–D–B–C. It can also be seen that there is little in the way of undecidedness, and that judge J3 differs from the general judge opinion in the pairwise ranking of AD and of BC.

3 Classical non-parametric methods

A classical approach to evaluating preferences is the non-parametric sign test (Sprent and Smeeton 2007). The first issue in applying this test here is ties, or expressions of no preference. These are often ignored when the proportion of ties is small, but for our typical examples of Figure 1 this is not true. Randles (2001) observes, regarding the approach most widely recommended by textbooks of just ignoring ties, that “the constrained number of possible p values and its ‘elimination of zeroes’ has caused concern and controversy through the years.” Randles (2001) and Rayner and Best (2001, chapter 2), reviewing several approaches to handling ties, both advocate splitting ties in various ways depending on the problem setting, for (in Randles's characterization) “it is desirable that zeros have a conservative influence on declaring preference, but not to the same degree as negative responses.” The key point is that explicit modeling of ties can be important, although there is no consensus on how this should be done; no approach apart from ignoring ties appears to be in widespread use. The second issue with the sign test is that of multiple judges, where data points are related (e.g., the same items are given to all judges). The Friedman test (Sprent and Smeeton 2007, Section 7.3.1) can be viewed as an extension that can be applied to multiple subjects ranking multiple items (see Bi 2006, Section 5.1.3, for an example). However, Francis, Dittrich, and Hatzinger (2010) note that [the Friedman test] simply examines the null hypothesis that the median ranks for all items are equal, and does not consider any differences in ranking between respondents. . . . Moreover, if the Friedman test rejects the null hypothesis, no quantitative interpretation, such as the odds of preferring one item over another, is provided. [Further, this] fail[s] both to consider the underlying psychological mechanism for ranking, and to formulate correct statistical models for this mechanism.
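To make the tie-handling issue concrete, here is the sign test applied to the made-up ModPref counts from Section 2, first ignoring ties and then splitting them equally between the two sides (one of the approaches reviewed above):

```r
# Sign test on (n+, n0, n-) = (60, 16, 34): ties ignored vs. split equally.
binom.test(60, 60 + 34)            # ignore the 16 ties
binom.test(60 + 8, 60 + 16 + 34)   # split ties: 8 to each side
```

Splitting the ties pulls the proportion of + responses toward 0.5, so the conclusion of a preference is declared more conservatively, which is exactly the behavior Randles argues for.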
4 Methods in machine translation

Human evaluation in NLP is a pervasive issue, but here we focus on MT and its shared tasks. The 2007 shared task (Callison-Burch et al. 2007) was the first to investigate a range of approaches that specifically included ranking of n translations from best to worst, allowing ties (which were ignored); from this they defined an aggregate “rank”: “the average number of times that a system was judged to be better than any other system in the sentence ranking evaluation.” They assessed inter-annotator agreement, and—with a key goal of the meta-evaluation being to find the automatic evaluation metric that best matched human evaluations—calculated Spearman's rank correlation coefficient between the two types of assessment. The 2008 shared task (Callison-Burch et al. 2008) took the same approach, but noted that in ranking, “[h]ow best to treat these is an open discussion, and certainly warrants further thought,” in particular because of ties “further complicating matters.” Pado et al. (2009) modified the system-level predictions approach to become “tie-aware,” and noted that this “makes a considerable practical difference, improving correlation figures by 5–10 points.” At around the same time, Vilar et al. (2007) examined the use of pairwise comparisons in MT evaluation. They pose the problem as one where, given an order relationship is-better-than between pairs of systems, the goal is to find an ordering of all the systems: they see this as the fundamental computer science problem of sorting. They define an aggregate evaluation score for comparing systems, estimating its expected value and standard error for hypothesis testing. However, in aggregating this way, information about ties is lost. Bojar et al. (2011) critique the earlier WMT evaluations, citing issues with the ignoring of non-top ranks (noted in Section 3 herein also), with ties, and with inter-annotator agreement. Lopez (2012) extends the analysis of Bojar et al. and casts the problem as “finding the minimum feedback arc set in a tournament, a well-known NP-complete problem.” He advocates using the pairwise rankings themselves, rather than aggregate statistics like Vilar et al. (2007), and aims to minimize the number of violations among them. Koehn (2012) empirically evaluates the approaches of both Bojar et al. (2011) and Lopez (2012), with a focus on determining which systems are statistically distinguishable in terms of performance, defining confidence bounds for this purpose. Hopkins and May (2013) recently advocated a focus on finding the extent to which particular rankings can be trusted. They proposed a model based on Item Response Theory (IRT), which underlies many standardized tests. They draw an analogy with judges assessing students on the basis of an underlying distribution of each student's ability, with items authored by students having a quality drawn from the student's ability distribution. They note in passing that a Gaussian parameterization of their IRT models resembles the Thurstone and Bradley-Terry models; this leads us to the topic of Section 5. Overall, then, there are ongoing discussions about what kind of analysis is appropriate for preference judgments. Some of this involves moderately heavy-duty computation for bootstrapping; this is suitable for large-scale WMT evaluations with dozens of competing systems, but perhaps less so for the scenarios we envisage in Section 1.
Moreover, examining what techniques other fields have developed could be useful, especially when they come with ready-made, easy-to-use tools for smaller-scale evaluation.

5 Preferences and log-linear Bradley-Terry methods

The statistical analysis of human perception and preferences dates back at least to the psychophysics work of the German physiologist E. H. Weber in the nineteenth century. A progression from the way humans perceive differences between physical stimuli to more general analysis of human preferences has occurred particularly in the context of investigating consumer behavior—dealing with questions like whether there is a definite preference for a food with a particular type of ingredient, for example—and this is now a fully fledged area of research. Sources like Lawless and Heymann (2010) give overviews of the field and the relevant statistical techniques. The earliest generally cited models for pairwise comparisons are the Thurstone model (Thurstone 1927) and the closely related Bradley-Terry (BT) model (Bradley and Terry 1952); these have connections to the IRT models, widely used in analyzing responses to questionnaires, which Hopkins and May (2013) drew on. Here we only look at BT models. In a basic BT model, the probability that object j ($O_j$) is preferred to object k ($O_k$) from a set of J objects in a particular pairwise comparison jk is given by

$$p(O_j \succ O_k \mid \pi_j, \pi_k) = \frac{\pi_j}{\pi_j + \pi_k} \quad \text{for all } j \neq k,$$

where $\pi_j$ and $\pi_k$ are non-negative “worth” parameters describing the location of the object on the scale of preferences for some attribute. For $n$ objects, there will be $\binom{n}{2}$ pairwise comparisons.

Log-Linear Models. It is now standard to fit BT models as log-linear models (Agresti 2007, for example), which allows them to be treated in a uniform way with much of modern statistical analysis. Log-linear models are a variety of generalized linear model (GLM), as is, for example, the logistic regression used throughout NLP. A GLM consists of a random component that identifies the response variable Y and selects a probability distribution for it; a systematic component that specifies some linear combination of the explanatory variables $x_i$; and a link function $g(\cdot)$ applied to the mean $\mu$ of Y, relating $\mu$ to this linear combination. GLMs thus have the form $g(\mu) = \alpha + \beta_1 x_1 + \cdots + \beta_k x_k$. For log-linear models, the response variables are counts that are assumed to follow a Poisson distribution, and the link function is $g(\mu) = \log(\mu)$ (compare logistic regression's $g(\mu) = \log\frac{\mu}{1-\mu}$). As an example, Y might be counts of people who hold some belief, and the various $x_i$ might be gender, socioeconomic status, and so forth. GLMs are a key tool for modern categorical data analysis, Agresti (2007, p. 65) noting that using models rather than the non-parametric approaches of Section 3 has several benefits: the structural form of the model describes the patterns of association and interaction; the sizes of the model parameters determine the strength and importance of the effects; inferences about the parameters evaluate which explanatory variables affect the response variable Y, while controlling the effects of possible confounding variables; and, finally, the model's predicted values smooth the data and provide improved estimates of the mean of Y at possible explanatory variable values. In a log-linear model, intuitive log-odds interpretations of making one response relative to another can be derived from the parameters. (Typically, software chooses a reference parameter and other parameter values are relative to that.)
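As a toy instance of this machinery, echoing the belief-by-gender example above (the counts are made up), a Poisson log-linear model can be fit in R with base glm; exponentiating the coefficients gives multiplicative effects relative to the reference categories:

```r
# Toy log-linear model: counts of people holding some belief, by gender.
belief <- data.frame(y      = c(30, 70, 45, 55),
                     holds  = c(1, 0, 1, 0),
                     gender = c("f", "f", "m", "m"))
fit <- glm(y ~ holds * gender, family = poisson, data = belief)
exp(coef(fit))   # multiplicative effects on expected counts
```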
Statistical significance scores and standard errors can be calculated for these parameters. In addition, GLMs allow for testing of model fit. There are various model choices (e.g., should we include ties? should we include terms representing interactions?), and goodness-of-fit tests can assess the alternatives (see, e.g., Agresti 2007, Section 7.2.1). The model with a separate parameter for each cell in the associated contingency table is called the saturated model; it fits the data perfectly, making it a suitable comparator for alternatives. Deviance is a likelihood ratio statistic comparing a proposed model to the saturated one, allowing a test of the hypothesis that the parameters not included in the model are zero, via goodness-of-fit tests; large test statistics and small p-values provide evidence of model lack of fit.

Models with Ties. To set out the representation of LLBT models, we follow the formulation of Dittrich and Hatzinger (2009). Let $n_{(jk)}$ be the number of comparisons between objects j and k, and let $Y_{(jk)j}$ be the number of preferences for object j with respect to k (similarly, $Y_{(jk)k}$). The outcome of a paired comparison experiment can also be regarded as a $\binom{J}{2} \times J$ incomplete two-dimensional contingency table: there are $\binom{J}{2}$ rows of pairwise comparisons, and J columns recording choices of the jth object. As with log-linear models in general, the distribution of the random variables $Y_{(jk)j}$ and $Y_{(jk)k}$ is assumed to be Poisson. Conditional on fixed $n_{(jk)} = Y_{(jk)j} + Y_{(jk)k}$, $(Y_{(jk)j}, Y_{(jk)k})$ follow a binomial (more generally, multinomial) distribution. The expected number of preferences for object j with respect to object k is denoted $m_{(jk)j}$ and given by $n_{(jk)} p_{(jk)j}$, with $p_{(jk)j}$ the binomial probability. So far this is only for binary preferences; there are various ways to account for ties. We describe the approach of Davidson and Beaver (1977), which appears quite widely used, in which there is a common null preference effect for all pairwise comparisons. Then

$$\log m_{(jk)j} = \mu_{(jk)} + \lambda^O_j - \lambda^O_k$$
$$\log m_{(jk)0} = \mu_{(jk)} + \gamma$$
$$\log m_{(jk)k} = \mu_{(jk)} - \lambda^O_j + \lambda^O_k \qquad (1)$$

where the $\mu$'s are “nuisance” parameters that fix the $n_{(jk)}$ marginal distributions, the $\lambda^O$'s represent object parameters, $m_{(jk)0}$ is the expected number of null preferences for pair (jk), and $\gamma$ is the undecided effect. The object parameters are related to the worth parameters of the original definition by $\log \pi_j = 2\lambda^O_j$: these represent the log-odds. In addition to the theoretical reasons for using LLBTs for modeling pairwise comparisons, a key benefit is the availability of packages in R for doing the modeling. Two candidates allowing a variety of sophisticated models are by Turner and Firth (2012) and Hatzinger and Dittrich (2012); we use the latter, as the current version of the former does not handle ties. We first apply the model described by Equations (1) to the single pairwise data with ties from Section 2, using R. We refer the reader to the associated data bundle1 for the full output; we only excerpt it in the discussion below.

1 Data files and all R commands and output are at https://purl.org/NET/cl-llbt-data.
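Since Equations (1) are just a Poisson log-linear model, a single pairwise comparison can also be fit directly with base R's glm. The sketch below uses the made-up ModPref-style counts from Section 2 rather than the paper's actual data; for real analyses, prefmod (Hatzinger and Dittrich 2012) builds the design automatically.

```r
# Equations (1) for one pair: three cells (+, tie, -), three parameters.
y      <- c(60, 16, 34)   # made-up counts: n+, n0, n-
lambda <- c(1, 0, -1)     # object contrast: +1 if o1 preferred, -1 if o2
tie    <- c(0, 1, 0)      # indicator for the undecided cell (gamma)
fit <- glm(y ~ lambda + tie, family = poisson)
coef(fit)                      # lambda is on the log-odds scale
exp(2 * coef(fit)["lambda"])   # odds in favor of o1 over o2
```

With three cells and three parameters this fit is saturated, and exp(2λ) simply reproduces the raw odds n+/n−, which is the relation between object and worth parameters stated above.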
Immediately following is a snippet of the R output for the ModPref data from Figure 1; o1 is the variable $\lambda^O_j$ for the + category, o2 for the − category, and g1 for the null preferences.

```
            Estimate Std. Error z value Pr(>|z|)
o1            0.2778     0.1060   2.620   0.0088 **
o2            0.0000         NA      NA       NA
g1           -0.6551     0.2300  -2.848   0.0044 **
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual deviance: -6.6614e-15 on 0 degrees of freedom
```

In the R output, o2 is the reference object, with its parameter value set to zero; the negative value of the estimate for g1, combined with its statistical significance, says that there is a strong tendency for an expression of preference. The positive value of the o1 parameter and its significance indicate that the + group is strongly preferred: the odds in favor of this group with respect to the − group are exp(2 × 0.2778) = 1.74 to 1. Relating this to the description of the data in Section 2, then, there is a strong preference for translations by the proposed system relative to the baseline, even taking null preferences into account. The LLBT model confirms that even small data sets like this can produce meaningful and statistically significant results. For the other artificial preference data of Figure 1, the parameters behave as expected: for EqualPref, the parameter estimates are all zero, signifying that all categories have the same odds; for NoPref, the positive g1 indicates a strong tendency towards no preference; for StrongPref, the negative g1 indicates a strong tendency towards some preference, but with + and − equally likely. Note that all of these are saturated models: there are three objects and three parameters, so the model fits perfectly (indicated also by the zero residual deviance). When we apply them to the real count data of Figure 1(c), the results indicate that for the Collins et al. data there is a weak to moderate tendency not to choose (g1 estimate 0.303, p = 0.0432) but, given that, a significant (p = 0.0001) preference in favor of the reordered system. For the Lewis and Steedman results, the model gives similar results, albeit with a much stronger disposition to null preferences. In the data bundle we also carry out the sign test ignoring ties for each data set for comparison; it gives the same results in each case for the relation of + to −, but does not allow an evaluation of the effect of ties. We now apply the model described by Equations (1) to the multiple pairwise data of Table 1. In the R output, the four systems A, B, C, D correspond to objects o1, o2, o3, o4, and g1 again to null preferences. As per the overview of the MT data in Section 2, there is little undecidedness (a large negative g1). The coefficients show that object o1 (system A) is most preferred, followed by o4 (D), then o2 (B) and o3 (C). Note also that in this case the model is not saturated: there is a non-zero residual deviance. As mentioned, log-linear models can be compared in terms of goodness of fit; Dittrich, Hatzinger, and Katzenbeisser (1998) and Dittrich and Hatzinger (2009) discuss this in some detail for LLBT models. Chi-squared statistics can be used to assess goodness of fit based on the residual deviance; the degrees of freedom (d.f.) equal the number of cell counts minus the number of model parameters, and both deviance and d.f. are given in the R output. For this data the deviance is 30.646 on 8 d.f., whereas, by contrast, if the ties (g1) are left out it is 221.22 on 9 d.f. A chi-squared test would establish the goodness of fit for each model; but even without consulting the test, it can be seen that leaving out the one parameter related to ties (1 d.f.) gives a seven-fold increase in deviance, so clearly the inclusion of ties produces a much better model.
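The chi-squared computations alluded to here are each one line in R, and a likelihood ratio test of the single ties parameter (the difference of the two deviances, on 1 d.f.) makes the comparison explicit:

```r
# Goodness of fit from residual deviance, plus the 1-d.f. ties test.
pchisq(30.646, df = 8, lower.tail = FALSE)           # model with ties
pchisq(221.22, df = 9, lower.tail = FALSE)           # model without ties
pchisq(221.22 - 30.646, df = 1, lower.tail = FALSE)  # test of the ties parameter
```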
Introducing Subject Covariates. The model can also incorporate a range of other factors, a possibility not easily open to non-parametric methods. The one we look at here is the notion of a categorical covariate, introduced into LLBT models by Dittrich, Hatzinger, and Katzenbeisser (1998): this allows the objects (items) to vary with characteristics of the subject (judge). Many types of subject covariates could be added, grouping subjects by the native language of the speaker, the source of the judges (e.g., Mechanical Turk, university), and so forth. Here we add just one, the identity of the subject. (Typically in a GLM this would be a random effect; we treat it as a covariate just for our simple illustration.) We define our categorical covariate S to have levels l, l = 1, 2, ..., L. Let $m_{(jk)j|l}$ be the expected number of preferences for object j with respect to object k for subjects in covariate class l. The log-linear representation is then as follows:

$$\log m_{(jk)j|l} = \mu_{(jk)l} + \lambda^O_j - \lambda^O_k + \lambda^S_l + \lambda^{OS}_{jl} - \lambda^{OS}_{kl}$$
$$\log m_{(jk)0|l} = \mu_{(jk)l} + \lambda^S_l + \gamma$$
$$\log m_{(jk)k|l} = \mu_{(jk)l} - \lambda^O_j + \lambda^O_k + \lambda^S_l - \lambda^{OS}_{jl} + \lambda^{OS}_{kl} \qquad (2)$$

As do Dittrich and Hatzinger (2009), we define a reference group, with the $\lambda^O_j$'s representing the ordering for that group; the orderings for other groups are obtained by adding the $\lambda^{OS}_{jl}$'s specific to group l to the $\lambda^O_j$'s for the reference group. The $\mu_{(jk)l}$ and $\lambda^S_l$ are again “nuisance” parameters, the latter representing the main effect of the subject covariate measured at the lth level; the $\lambda^{OS}_{jl}$'s are the (useful) subject-object interaction parameters describing the effect of the subject covariate on the preference for object j (similarly $\lambda^{OS}_{kl}$ and object k). We apply the model described by Equations (2) to the multiple pairwise data, with the subject covariate SUBJ with four levels (one per judge Ji of Table 1). There are a few complexities in interpreting the output, beyond the scope of this article to discuss but covered by Dittrich, Hatzinger, and Katzenbeisser (1998). The broad interpretation to draw from the output is that the interactions o1:SUBJ3 and o2:SUBJ3 are large and significant, and contribute to the model, unlike any others. These correspond to the different pairwise rankings given by judge J3 to system A (relative to D) and to B (relative to C): this is how subject effects are indicated in these LLBT models. There are many other extensions to these models. Cattelan (2012) gives a state-of-the-art overview of such extensions across a range of approaches, with an emphasis on dependent data. We note only two extensions here that are incorporated into prefmod and relevant to NLP. With categorical object covariates, items can be grouped as well, to investigate effects of grouping there, for example, different origins for translation sources. With non-pairwise rankings, judges can rank over more than two elements, as in the standard WMT evaluations, although this needs a special treatment in the models.
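To show the shape of Equations (2) without the full Table 1 design, here is a minimal base-glm sketch for a single pair judged by two judges, with made-up counts in which judge 2 reverses judge 1's preference; prefmod constructs the full multi-pair, multi-judge design automatically.

```r
# One pair, two judges: judge main effect plus object-judge interaction.
y      <- c(60, 16, 34,  30, 8, 52)    # judge 1: n+, n0, n-; then judge 2
lambda <- rep(c(1, 0, -1), times = 2)  # object contrast within each judge
tie    <- rep(c(0, 1, 0), times = 2)   # common undecided effect (gamma)
judge  <- factor(rep(1:2, each = 3))
fit <- glm(y ~ judge + lambda + tie + lambda:judge, family = poisson)
summary(fit)   # a large negative lambda:judge2 flags judge 2's reversal
```

The interaction coefficient plays the role of $\lambda^{OS}_{jl}$ here: judge 1 is the reference group, and judge 2's ordering is obtained by adding the interaction to the reference-group object parameter.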
Among NLP approaches, especially within MT, new techniques are still being derived, and these could benefit from views from outside the field. What we present are techniques from the field of sensory preference evaluation, where there has been a long history of development by statistics researchers, and where log-linear models have recently attracted attention. Applying them to sample data, we find that they provide the sort of information, and the uniform framework for analysis, that NLP researchers could find useful. Given both the extensive theoretical underpinnings and the freely available statistical software, we recommend LLBT models as a potential tool.

Abstract: Human evaluation plays an important role in NLP, often in the form of preference judgments. Although there has been some use of classical non-parametric and bespoke approaches to evaluating these sorts of judgments, there is an entire body of work on this in the context of sensory discrimination testing and the human judgments that are central to it, backed by rigorous statistical theory and freely available software, that NLP can draw on. We investigate one approach, Log-Linear Bradley-Terry models, and apply it to sample NLP data.

References

Agresti, Alan. 2007. An Introduction to Categorical Data Analysis. John Wiley, 2nd edition.
Bi, Jian. 2006. Sensory Discrimination Tests and Measurements: Statistical Principles, Procedures and Tables. Blackwell, Oxford, UK.
Bojar, Ondřej, Miloš Ercegovčević, Martin Popel, and Omar Zaidan. 2011. A grain of salt for the WMT manual evaluation. In Proceedings of WMT, pages 1–11.
Bradley, Ralph and Milton Terry. 1952. Rank analysis of incomplete block designs, I: The method of paired comparisons. Biometrika, 39:324–345.
Callison-Burch, Chris, Cameron Fordyce, Philipp Koehn, Christof Monz, and Josh Schroeder. 2007. (Meta-) evaluation of machine translation. In Proceedings of WMT, pages 136–158.
Callison-Burch, Chris, Cameron Fordyce, Philipp Koehn, Christof Monz, and Josh Schroeder. 2008. Further meta-evaluation of machine translation. In Proceedings of WMT.
Cattelan, Manuela. 2012. Models for paired comparison data: A review with emphasis on dependent data. Statistical Science, 27(3):412–423.
Collins, Michael, Philipp Koehn, and Ivona Kucerova. 2005. Clause restructuring for statistical machine translation. In Proceedings of ACL, pages 531–540.
Davidson, R. R. and R. J. Beaver. 1977. On extending the Bradley-Terry model to incorporate within-pair order effects. Biometrics, 33:693–702.
Dittrich, Regina and Reinhold Hatzinger. 2009. Fitting loglinear Bradley-Terry models (LLBT) for paired comparisons using the R package prefmod. Psychological Science Quarterly.
Dittrich, Regina, Reinhold Hatzinger, and W. Katzenbeisser. 1998. Modelling the effect of subject-specific covariates in paired comparison studies with an application to university rankings.
Francis, Brian, Regina Dittrich, and Reinhold Hatzinger. 2010. Modeling heterogeneity in ranked responses by nonparametric maximum likelihood: How do Europeans
Hatzinger, Reinhold and Regina Dittrich. 2012. prefmod: A package for modeling preferences based on paired comparisons, rankings, or ratings. Journal of Statistical Software, 48(10):1–31.
Hopkins, Mark and Jonathan May. 2013. Models of translation competitions. In Proceedings of ACL, pages 1,416–1,424.
Koehn, Philipp. 2012. Simulating human judgment in machine translation evaluation campaigns. In Proceedings of IWSLT, pages 179–184.
Lawless, Harry T. and Hildegarde Heymann. 2010. Sensory Evaluation of Food: Principles and Practices. Springer, New York, NY, 2nd edition.
Lewis, Mike and Mark Steedman. 2013. Unsupervised induction of cross-lingual
Lopez, Adam. 2012. Putting human assessments of machine translation systems in order. In Proceedings of WMT, pages 1–9.
Pado, Sebastian, Michel Galley, Dan Jurafsky, and Christopher D. Manning. 2009. Robust machine translation evaluation with entailment features. In Proceedings of ACL/AFNLP, pages 297–305.
Randles, Ronald H. 2001. On neutral responses (zeros) in the sign test and ties in the Wilcoxon-Mann-Whitney test. The American Statistician, 55(2):96–101.
Rayner, J. C. W. and D. J. Best. 2001. A Contingency Table Approach to Nonparametric Testing. Chapman and Hall/CRC, Boca Raton, FL.
Sprent, Peter and Nigel Smeeton. 2007. Applied Nonparametric Statistical Methods. Chapman and Hall, London, UK.
Thurstone, L. L. 1927. A law of comparative judgement. Psychological Review, 34:278–286.
Turner, Heather and David Firth. 2012. Bradley-Terry models in R: The BradleyTerry2 package. Journal of Statistical Software, 48(9):1–21.
Vilar, David, Gregor Leusch, Hermann Ney, and Rafael E. Banchs. 2007. Human evaluation of machine translation through binary system comparisons. In Proceedings of WMT, pages 96–103.
(2007) note in their meta-evaluation that in MT they still “consider the human evaluation to be primary.” Whereas MT has traditionally used a Likert scale score for the criteria of adequacy and fluency, this meta-evaluation noted that these are “seemingly difficult things for judges to agree on”; consequently, asking judges to express a preference between alternative translations is increasingly used on the grounds of ease and intuitiveness. Further, where the major empirical results of a paper are from automatic metrics, it is still useful to supplement them: As two examples, Collins, Koehn, and Kucerova (2005) and Lewis and Steedman (2013), in addition to a metric-based evaluation, present human judgments of preferences for their systems with respect to a baseline (Fig. 1). For results in published work, the reader is typically left to draw inferences from the numbers. For the data in Figure 1, is there a strong preference for the non-baseline system overall, or do null preferences count against that? Is anything about the results statistically significant? There has been work in various areas of NLP in assessing statistical significance of human judgment results. However, to our knowledge, the field has not taken advantage of a body of work dedicated to analyzing human preferences—predominantly in the context of sensory discrimination testing, and consequent consumer behavior—which is supported by a great deal of statistical theory. It is linked to the mixed-effect models that are increasingly prominent in psycholinguistics and elsewhere, it has associated freely available R software, and it permits questions like the following to be asked: Can we say that the judges are expressing a preference at all, as opposed to no preference? Is there an effect from judge disagreement or inconsistency? ∗ Department of Computing, Macquarie University, NSW 2109, Australia. E-mail: mark.dras@mq.edu.au. Submission received: 10 April 2014; accepted for publication: 18 July 2014. doi:10.1162/COLI a 00222 © 2015 Association for Computational Linguistics We describe our sample data (Section 2), sketch a classical non-parametric approach (Section 3) and discuss the issues that arise from this, and look at some of the approaches used in MT (Section 4). We then (Section 5) introduce ideas from human sensory preference testing, where we review log-linear Bradley-Terry models of preferences, and apply this to our data, including discussion of ties, of subject effects, and of multiple pairwise comparisons. 2 two data sets :Single Pairwise Comparison. Our basic single pairwise comparisons are those presented in Figure 1(a) and (b). Figure 1(c) contains the counts we will be using in later analysis: We refer to counts in favor of the new system by n+, those in favor of the baseline by n−, and those reflecting no preference as n0; the Lewis and Steedman results were over 87 pairwise judgments. We add some further artificial data to illustrate how the Log-Linear Bradley-Terry (LLBT) models of Section 5 behave in accordance with intuition for data where the conclusion should be clear. These comprise a distribution with a moderate preference for + over − and not too many null preferences (ModPref), a distribution of equal preferences over all three categories (EqualPref), a distribution with mostly null preferences and equal n+ and n− (NoPref), and a distribution with very few null preferences (StrongPref). Multiple Pairwise Comparison. 
As noted in Section 1, there has been a trend to using human preference judgments, particularly in the workshops on statistical machine translation from Callison-Burch et al. (2007) onwards. Schemes have included asking humans to rank random samples of five translations, each from a different system. Vilar et al. (2007) propose using binary rather than n-ary rankings, arguing that this is a natural and easy task for judges. Here we present some artificial data of pairwise (binary) rankings to illustrate the techniques we discuss in Section 5, although these techniques can be extended to n-ary comparisons. In our example, there are four systems A, B, C, D and four judges J1, J2, J3, J4. The judges have pairwise ranked 240 translation pairs from systems x and y, indicating whether the translation of x is better than y (x y), worse than y (x ≺ y), or similar in quality to y (x = y); see Table 1. An overall impression, totalling all pairwise first preferences for each system (Section 4), gives a ranking of systems A–D–B–C. It can also be seen that there is little in the way of undecidedness, and also that judge J3 differs from the general judge opinion in pairwise ranking of AD and BC. 3 classical non-parametric methods :A classical approach to evaluating preferences is the non-parametric sign test (Sprent and Smeeton 2007). The first issue in applying this test here is ties, or expressions of no preference—these are often ignored when the proportion of ties is small, but for our typical examples of Figure 1, this is not true. Randles (2001) observes, regarding the approach most widely recommended by textbooks of just ignoring ties, that “the constrained number of possible p values and its ‘elimination of zeroes’ has caused concern and controversy through the years.” Randles (2001) and Rayner and Best (2001, chapter 2), reviewing several approaches to handling ties, both advocate splitting ties in various ways depending on the problem setting, for (in Randles’s characterization) “it is desirable that zeros have a conservative influence on declaring preference, but not to the same degree as negative responses.” The key point is that modeling of ties explicitly can be important, although there is no consensus on how this should be done; no approach apart from ignoring ties appears to be in widespread use. The second issue with the sign test is that of multiple judges, where data points are related (e.g., the same items are given to all judges). The Friedman test (Sprent and Smeeton 2007, Section 7.3.1) can be viewed as an extension that can be applied to multiple subjects ranking multiple items (see Bi 2006, Section 5.1.3, for an example). However, Francis, Dittrich, and Hatzinger (2010) note that [the Friedman test] simply examines the null hypothesis that the median ranks for all items are equal, and does not consider any differences in ranking between respondents. . . . Moreover, if the Friedman test rejects the null hypothesis, no quantitative interpretation, such as the odds of preferring one item over another, is provided. [Further, this] fail[s] both to consider the underlying psychological mechanism for ranking, and to formulate correct statistical models for this mechanism. 4 methods in machine translation :Human evaluation in NLP is a pervasive issue, but here we focus on MT and its shared tasks. The 2007 shared task (Callison-Burch et al. 
2007) was the first to investigate a range of approaches that specifically included ranking of n translations, from best to worst, allowing ties (which were ignored); from this they defined an aggregate “rank,” “the average number of times that a system was judged to be better than any other system in the sentence ranking evaluation.” They assessed inter-annotator agreement, and—with a key goal of the meta-evaluation being to find the automatic evaluation metric that best matched human evaluations—calculated Spearman’s rank correlation coefficient between the two types of assessment. The 2008 shared task (Callison-Burch et al. 2008) took the same approach, but noted that in ranking, “[h]ow best to treat these is an open discussion, and certainly warrants further thought,” in particular because of ties “further complicating matters.” Pado et al. (2009) modified the systemlevel predictions approach to become “tie-aware,” and noted that that this “makes a considerable practical difference, improving correlation figures by 5–10 points.” At around the same time Vilar et al. (2007) examined the use of pairwise comparisons in MT evaluation. They pose the problem as one where, given an order relationship is-better-than between pairs of systems, the goal is to find an ordering of all the systems: They see this as the fundamental computer science problem of sorting. They define an aggregate evaluation score for comparing systems, estimating expected value and standard error for hypothesis testing. However, in aggregating this way information about ties is lost. Bojar et al. (2011) critique the earlier WMT evaluations, citing issues with the ignoring of non-top ranks (noted in Section 3 herein also), with ties and also with interannotator agreement. Lopez (2012) extends the analysis of Bojar et al. and casts the problem as “finding the minimum feedback arc set in a tournament, a well-known NPcomplete problem.” He advocates using the pairwise rankings themselves, rather than aggregate statistics like Vilar et al. (2007), and aims to minimize the number of violations among these. Koehn (2012) evaluates empirically the approaches of both Bojar et al. (2011) and Lopez (2012), with a focus on determining which systems are statistically distinguishable in terms of performance, defining confidence bounds for this purpose. Hopkins and May (2013) recently advocated a focus on finding the extent to which particular rankings could be trusted. They proposed a model based on Item Response Theory (IRT), which underlies many standardized tests. They draw an analogy with judges assessing students on the basis of an underlying distribution of the student’s ability, with items authored by students having a quality drawn from the student’s ability distribution. They note in passing that a Gaussian parameterization of their IRT models resembles Thurstone and Bradley-Terry models; this leads us to the topic of Section 5. Overall, then, there are ongoing discussions about what kind of analysis is appropriate for preference judgments. Some of this involves moderately heavy-duty computation for bootstrapping; this is suitable for large-scale WMT evaluations with dozens of competing systems, but perhaps less so for the scenarios we envisage in Section 1. Moreover, examining what techniques other fields have developed could be useful, especially when they come with ready-made, easy-to-use tools for smaller-scale evaluation. 
5 preferences and log-linear bradley-terry methods :The statistical analysis of human perception and preferences dates back at least to the psychophysics work of German physiologist E. H. Weber in the nineteenth century. A progression from the way humans perceive differences between physical stimuli to more general analysis of human preferences has occurred particularly in the context of investigating consumer behavior—dealing with questions like whether there is a definite preference for a food with a particular type of ingredient, for example—and this is now a fully fledged area of research. Sources like Lawless and Heymann (2010) give overviews of the field and relevant statistical techniques. The earliest generally cited models for pairwise comparisons are the Thurstone model (Thurstone 1927) and the closely related Bradley-Terry (BT) model (Bradley and Terry 1952); these have connections to the IRT models, widely used in analyzing responses to questionnaires, which Hopkins and May (2013) drew on. Here we only look at BT models. In a basic BT model, the probability that object j (Oj) is preferred to object k (Ok) from a set of J objects in a particular pairwise comparison jk is given by p(Oj Ok |πj,πk) = πj πj+πk for all j = k, where πj and πk are non-negative “worth” parameters describing the location of the object on the scale of preferences for some attribute. For n objects, there will be ( n 2 ) pairwise comparisons. Log-Linear Models. It is now standard to fit BT models as log-linear models (Agresti 2007, for example), which allows them to be treated in a uniform way with much of modern statistical analysis. Log-linear models are a variety of generalized linear models (GLM), as is, for example, the logistic regression used throughout NLP. GLMs consist of a random component that identifies the response variable Y and selects a probability distribution for it; a systematic component that specifies some linear combination of the explanatory variables xi; and a link function g(·) applied to the mean μ of Y relating μ to this linear combination. They thus have the form g(μ) = α+ β1x1 + . . .+ βkxk. For log-linear models, the response variables are counts that are assumed to follow a Poisson distribution, and the link function is g(μ) = log(μ) (compare logistic regression’s g(μ) = log μ1−μ ). As an example, Y might be counts of people who hold some belief, and the various xi might be gender, socioeconomic status, and so forth. GLMs are a key tool for modern categorical data analysis, Agresti (2007, p. 65) noting that using models rather than the non-parametric approaches of Section 3 has several benefits: The structural form of the model describes the patterns of association and interaction. The sizes of the model parameters determine the strength and importance of the effects. Inferences about the parameters evaluate which explanatory variables affect the response variable Y, while controlling effects of possible confounding variables. Finally, the model’s predicted values smooth the data and provide improved estimates of the mean of Y at possible explanatory variable values. In a log-linear model, intuitive log-odds interpretations of making one response relative to another can be derived from the parameters. (Typically, software chooses a reference parameter and other parameter values are relative to that.) Statistical significance scores and standard errors can be calculated for these parameters. In addition, GLMs allow for testing of model fit. 
There are various model choices (e.g., should we include ties? should we include terms representing interactions?) and goodnessof-fit tests can assess the alternatives (see, e.g., Agresti 2007, Section 7.2.1). The model with a separate parameter for each cell in the associated contingency table is called the saturated model, and fits the data perfectly, making it a suitable comparator for alternatives. Deviance is a likelihood ratio statistic comparing a proposed model to the saturated one, allowing a test of the hypothesis that parameters not included in the model are zero, via goodness of fit tests; large test statistics and small p-values provide evidence of model lack of fit. Models with Ties. To set out the representation of LLBT models, we follow the formulation of Dittrich and Hatzinger (2009). Let n(jk) be the number of comparisons between objects j and k; and let Y(jk)j be the number of preferences for object j with respect to k (similarly, Y(jk)k). The outcome of a paired comparison experiment can also be regarded as a (J 2 )× J incomplete two-dimensional contingency table: There are(J 2 ) rows of pairwise comparisons, and J columns recording choices of the jth object. As with log-linear models in general, the distribution of random variables Y(jk)j and Y(jk)k is assumed to be Poisson. Conditional on fixed n(jk) = Y(jk)j + Y(jk)k, (Y(jk)j, Y(jk)k) follow a binomial (more generally, multinomial) distribution. The expected number of preferences of object j with respect to object k is denoted m(jk)j and given by n(jk)p(jk)j, with p(jk)j the binomial probability. So far this is only for binary preferences; there are various ways to account for ties. We describe the approach of Davidson and Beaver (1977), which appears quite widely used, where there is a common null preference effect for all pairwise comparisons. Then log m(jk)j = μ(jk) + λOj − λOk log m(jk)0 = μ(jk) + γ log m(jk)k = μ(jk) − λOj + λOk (1) where the μ’s are “nuisance” parameters that fix the n(jk) marginal distributions, and the λO’s represent object parameters, m(jk)0 is the expected number of null preferences for pair (jk), and γ is the undecided effect. The object parameters are related to the worth parameters of the original definition by logπ = 2λO: These represent the log-odds. In addition to the theoretical reasons for using LLBTs for modeling pairwise comparisons, a key benefit is the availability of packages in R for doing the modeling. Two candidates allowing a variety of sophisticated models are by Turner and Firth (2012) and Hatzinger and Dittrich (2012); we use the latter as the current version of the former does not handle ties. We first apply the model described by Equations (1) to the single pairwise data with ties from Section 2 using R. We refer the reader to the associated data bundle1 for the full output; we only excerpt it in the discussion below. Immediately following is a snippet of the R output for the ModPref data from Figure 1. o1 is the variable λOj for the + category, o2 for the – category, g1 for the null preferences. Estimate Std. Error z value Pr(>|z|) o1 0.2778 0.1060 2.620 0.0088 ** o2 0.0000 NA NA NA g1 -0.6551 0.2300 -2.848 0.0044 ** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual deviance: -6.6614e-15 on 0 degrees of freedom In the R output, o2 is the reference object, with parameter value set to zero; the negative value of the estimate for g1 combined with its statistical significance says that there is a strong tendency for an expression of preference. 
For the other artificial preference data of Figure 1, the parameters behave as expected: for EqualPref, the parameter estimates are all zero, signifying that all objects have the same odds; for NoPref, the positive g1 indicates a strong tendency towards no preference; for StrongPref, the negative g1 indicates a strong tendency towards some preference, but with + or – equally likely. Note that all of these are saturated models: there are three cells and three parameters, so the model fits perfectly (indicated also by the zero residual deviance). When we apply them to the real count data of Figure 1(c), the results indicate that for the Collins et al. data there is a weak to moderate tendency not to choose (g1 estimate 0.303, p = 0.0432), but, given that, there is a significant preference (p = 0.0001) in favor of the reordered system. For the Lewis and Steedman results, the model gives similar results, albeit with a much stronger disposition to null preferences. In the data bundle we also carry out the sign test ignoring ties for each data set for comparison; it gives the same results in each case for the relation of + to –, but does not allow an evaluation of the effect of ties.

We now apply the model described by Equations (1) to the multiple pairwise data of Table 1. In the R output, the four systems A, B, C, D correspond to objects o1, o2, o3, o4, and g1 again to null preferences. As per the overview of the MT data in Section 2, there is little undecidedness (large negative g1). The coefficients show that object o1 (system A) is most preferred, followed by o4 (D), then o2 (B) and o3 (C). Note also that in this case the model is not saturated: there is a non-zero residual deviance. As mentioned, log-linear models can be compared in terms of goodness of fit: Dittrich, Hatzinger, and Katzenbeisser (1998) and Dittrich and Hatzinger (2009) discuss this in some detail for LLBT models. Chi-squared statistics can be used to assess goodness of fit based on the residual deviance; the degrees of freedom (d.f.) equal the number of cell counts minus the number of model parameters, and both deviance and d.f. are given in the R output. For this data the deviance is 30.646 on 8 d.f., whereas, by contrast, if the ties (g1) are left out, it is 221.22 on 9 d.f. A chi-squared test would establish the goodness of fit for each model; but even without consulting the test it can be seen that leaving out the one parameter related to ties (1 d.f.) gives a seven-fold increase in deviance, so clearly inclusion of ties produces a much better model.
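For reference, the chi-squared assessment just described can be reproduced from the quoted deviances alone; a small Python sketch using scipy (the deviance and d.f. values are those reported above):

from scipy.stats import chi2

# Residual deviances for the Table 1 data, with and without the ties parameter.
for label, deviance, df in [("with ties", 30.646, 8), ("without ties", 221.22, 9)]:
    p = chi2.sf(deviance, df)   # small p indicates lack of fit
    print(f"{label}: deviance {deviance} on {df} d.f., p = {p:.3g}")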
Introducing Subject Covariates. The model can also incorporate a range of other factors, a possibility not easily open to non-parametric methods. The one we look at here is the notion of a categorical covariate, introduced into LLBT models in Dittrich, Hatzinger, and Katzenbeisser (1998): this allows the objects (items) to vary with characteristics of the subject (judge). Many types of subject covariates could be added, grouping subjects by native language of the speaker, source of judges (e.g., Mechanical Turk, university), and so forth. Here we add just one, the identity of the subject. (Typically in a GLM this would be a random effect; we treat it as a covariate just for our simple illustration.) We define our categorical covariate S to have levels l, l = 1, 2, . . . , L. Let $m_{(jk)j|l}$ be the expected number of preferences for object j with respect to object k for subjects in covariate class l. The log-linear representation is then as follows:

$\log m_{(jk)j|l} = \mu_{(jk)l} + \lambda^O_j - \lambda^O_k + \lambda^S_l + \lambda^{OS}_{jl} - \lambda^{OS}_{kl}$
$\log m_{(jk)0|l} = \mu_{(jk)l} + \lambda^S_l + \gamma$    (2)
$\log m_{(jk)k|l} = \mu_{(jk)l} - \lambda^O_j + \lambda^O_k + \lambda^S_l - \lambda^{OS}_{jl} + \lambda^{OS}_{kl}$

As do Dittrich and Hatzinger (2009), we define a reference group, with the $\lambda^O_j$'s representing the ordering for that group; the orderings for other groups are obtained by adding the $\lambda^{OS}_{jl}$'s specific to group l to the $\lambda^O_j$'s for the reference group. $\mu_{(jk)l}$ and $\lambda^S_l$ are again "nuisance" parameters, the latter representing the main effect of the subject covariate measured on the lth level; the $\lambda^{OS}_{jl}$'s are the (useful) subject-object interaction parameters describing the effect of the subject covariate on the preference for object j (similarly $\lambda^{OS}_{kl}$ and object k). We apply the model described by Equations (2) to the multiple pairwise data, with the subject covariate SUBJ with four levels (one per judge Ji of Table 1). There are a few complexities in interpreting the output, beyond the scope of this article to discuss but covered in Dittrich, Hatzinger, and Katzenbeisser (1998). The broad interpretations to draw from the output are that the interactions o1:SUBJ3 and o2:SUBJ3 are large and significant, and contribute to the model, unlike any others. These correspond to the different pairwise rankings given by judge J3 to system A (relative to D) and to B (relative to C): this is how subject effects are indicated in these LLBT models. There are many other extensions to these models. Cattelan (2012) gives a state-of-the-art overview of such extensions across a range of approaches, with an emphasis on dependent data. We note only two extensions here that are incorporated into prefmod and relevant to NLP. With categorical object covariates, items can be grouped as well, to investigate effects of grouping there, for example, different origins for translation sources. With non-pairwise rankings, judges can rank over more than two elements, as in the standard WMT evaluations, although this needs a special treatment in the models.

6 conclusions :We have looked at the sort of (pairwise) preference data that is encountered often in NLP. A particular characteristic of NLP data is that ties or undecided results may be frequent, and there is often a concern with inter-judge consistency. Reviewing classical non-parametric approaches, we note the opinion that it is important to model ties, and also note that approaches to looking at subject (judge) effects have several issues, such as a lack of quantitative interpretation of results. Among NLP approaches, especially within MT, new techniques are still being derived, which could benefit from views from outside the field. What we present are techniques from the field of sensory preference evaluation, where there has been a long history of development by statistics researchers. Recently, log-linear models have attracted attention.
Applying them to sample data, we find that they provide the sort of information and uniform framework for analysis that NLP researchers could find useful. Given both extensive theoretical underpinnings and freely available statistical software, we recommend LLBT models as a potential tool.

Human evaluation plays an important role in NLP, often in the form of preference judgments. Although there has been some use of classical non-parametric and bespoke approaches to evaluating these sorts of judgments, there is an entire body of work on this in the context of sensory discrimination testing and the human judgments that are central to it, backed by rigorous statistical theory and freely available software, that NLP can draw on. We investigate one approach, Log-Linear Bradley-Terry models, and apply it to sample NLP data.

References:
Agresti, Alan. 2007. An Introduction to Categorical Data Analysis. John Wiley, 2nd edition.
Bi, Jian. 2006. Sensory Discrimination Tests and Measurements: Statistical Principles, Procedures and Tables. Blackwell, Oxford, UK.
Bojar, Ondřej, Miloš Ercegovčević, Martin Popel, and Omar Zaidan. 2011. A grain of salt for the WMT manual evaluation. In Proceedings of WMT, pages 1–11.
Bradley, Ralph and Milton Terry. 1952. Rank analysis of incomplete block designs, I: The method of paired comparisons. Biometrika, 39:324–345.
Callison-Burch, Chris, Cameron Fordyce, Philipp Koehn, Christof Monz, and Josh Schroeder. 2007. (Meta-) evaluation of machine translation. In Proceedings of WMT, pages 136–158.
Callison-Burch, Chris, Cameron Fordyce, Philipp Koehn, Christof Monz, and Josh Schroeder. 2008. Further meta-evaluation of machine translation. In Proceedings of WMT.
Cattelan, Manuela. 2012. Models for paired comparison data: A review with emphasis on dependent data. Statistical Science, 27(3):412–423.
Collins, Michael, Philipp Koehn, and Ivona Kucerova. 2005. Clause restructuring for statistical machine translation. In Proceedings of ACL, pages 531–540.
Davidson, R. R. and R. J. Beaver. 1977. On extending the Bradley-Terry model to incorporate within-pair order effects. Biometrics, 33:693–702.
Dittrich, Regina and Reinhold Hatzinger. 2009. Fitting loglinear Bradley-Terry models (LLBT) for paired comparisons using the R package prefmod. Psychological Science Quarterly.
Dittrich, Regina, Reinhold Hatzinger, and W. Katzenbeisser. 1998. Modelling the effect of subject-specific covariates in paired comparison studies with an application to university rankings.
Francis, Brian, Regina Dittrich, and Reinhold Hatzinger. 2010. Modeling heterogeneity in ranked responses by nonparametric maximum likelihood: How do Europeans get their scientific knowledge?
Hatzinger, Reinhold and Regina Dittrich. 2012. prefmod: A package for modeling preferences based on paired comparisons, rankings, or ratings. Journal of Statistical Software, 48(10):1–31.
Hopkins, Mark and Jonathan May. 2013. Models of translation competitions. In Proceedings of ACL, pages 1416–1424.
Koehn, Philipp. 2012. Simulating human judgment in machine translation evaluation campaigns. In Proceedings of IWSLT, pages 179–184.
Lawless, Harry T. and Hildegarde Heymann. 2010. Sensory Evaluation of Food: Principles and Practices. Springer, New York, NY, 2nd edition.
Lewis, Mike and Mark Steedman. 2013. Unsupervised induction of cross-lingual semantic relations. In Proceedings of EMNLP.
Lopez, Adam. 2012. Putting human assessments of machine translation systems in order. In Proceedings of WMT, pages 1–9.
Pado, Sebastian, Michel Galley, Dan Jurafsky, and Christopher D. Manning. 2009. Robust machine translation evaluation with entailment features. In Proceedings of ACL/AFNLP, pages 297–305.
Randles, Ronald H. 2001. On neutral responses (zeros) in the sign test and ties in the Wilcoxon-Mann-Whitney test. The American Statistician, 55(2):96–101.
Rayner, J. C. W. and D. J. Best. 2001. A Contingency Table Approach to Nonparametric Testing. Chapman and Hall/CRC, Boca Raton, FL.
Sprent, Peter and Nigel Smeeton. 2007. Applied Nonparametric Statistical Methods. Chapman and Hall, London, UK.
Thurstone, L. L. 1927. A law of comparative judgment. Psychological Review, 34:278–286.
Turner, Heather and David Firth. 2012. Bradley-Terry models in R: The BradleyTerry2 package. Journal of Statistical Software, 48(9):1–21.
Vilar, David, Gregor Leusch, Hermann Ney, and Rafael E. Banchs. 2007. Human evaluation of machine translation through binary system comparisons. In Proceedings of WMT, pages 96–103.
web corpus construction :Roland Schäfer and Felix Bildhauer (Freie Universität Berlin). Morgan & Claypool (Synthesis Lectures on Human Language Technologies, edited by Graeme Hirst, volume 22), 2013, 145 pages, paper-bound, ISBN 9781608459834, doi:10.2200/S00508ED1V01Y201305HLT022

reviewed by serge sharoff, university of leeds :The Web is the main source of data in modern computational linguistics. Other volumes in the same series, for example, Introductions to Opinion Mining (Liu 2012) and Semisupervised Machine Learning (Søgaard 2013), start their problem statements by referring to data from the Web. This volume starts its own introduction by praising Web corpora for their size, ease of construction, and availability as a source of new text types. A random check of papers from the most recent ACL meeting also shows that the majority of them use Web data in one way or another. Our field definitely needs a comprehensive overview and a DIY manual for the task of constructing a corpus from the Web. This book is, to the best of my knowledge, the first attempt at providing such an overview.

The book consists of an introduction and four chapters outlining the four main steps of Web corpus construction: "Data Collection" (Chapter 2), "Basic Corpus Cleaning" (Chapter 3), "Linguistic Processing" (Chapter 4), and "Corpus Evaluation" (Chapter 5). Chapter 2 provides a very useful outline of the main properties of the Web and of crawling strategies. The chapter starts with an overview of a large-scale study of Web connectivity from Baeza-Yates, Castillo, and Efthimiadis (2007), listing various parameters of connectivity for a range of top-level domains. However, there is little discussion of the implications for the corpus development task; for example, does the difference in the in-degree parameter of Web pages from Chile and the UK have any implications for the Web corpora crawled from those domains? The chapter then proceeds to another important topic, which concerns the parameters of crawling, for example, the crawl bias and the number of seeds, and their influence on the final corpus. Section 2.4.1 illustrates the problems with crawl bias by the example of deWac, a large, commonly used corpus of German (Baroni et al. 2009).
The second most frequent proper-name bigram in this corpus is found to be Falun Gong. However, more analysis of the nature of the bias would have been beneficial. It is less likely to be related to the PageRank bias, the main bias discussed in Section 2.4.2. The other most frequent bigrams from deWac are not presented in the book, but it is interesting to note that the fourth place is occupied by Hartz IV, and the tenth place by Digital Eyes. This suggests that the bias comes from frequency spikes (i.e., a large number of instances collected from a small number of Web sites). Another shortcoming of this chapter is that nothing is said specifically about obtaining data from such resources as Twitter or Facebook, which need access via APIs rather than direct crawling.

Chapter 3 introduces methods for basic cleaning of the corpus content, such as processing of text formats (primarily HTML tags), language identification, boilerplate removal, and deduplication. Such low-level tasks are not considered glamorous from the viewpoint of computational linguistics, but they are extremely important for making Web-derived corpora usable (Baroni et al. 2008). The introduction offered in this chapter is reasonably complete, with good explanations of the sources of problems as well as suggestions for the tools to be used in each task. An important piece missing from this chapter concerns suggestions for choosing a particular cleaning pipeline. Although the choice indeed depends on the purposes of corpus collection, an indication of which pipeline suits which purpose is desirable.

Chapter 4 is devoted to the basic steps of linguistic processing of Web corpora, such as tokenization, POS tagging, and lemmatization, as well as orthographic normalization. Even though the processing pipeline is roughly the same for all NLP tasks, it becomes harder for Web corpora because they exhibit greater diversity in comparison with more homogeneous text collections (e.g., WSJ texts). Web texts are also considerably noisier, in the sense of containing nonstandard linguistic expressions, which are likely to be a challenge to tools trained on more standard texts. The chapter presents some interesting case studies—in particular, on the sources of POS tagging errors and on nonstandard orthography.

Chapter 5 describes ways of evaluating and comparing corpora. It gives examples of checking for word and sentence length and for sentence-level duplication. It also introduces methods for comparing frequency lists. Like the other chapters it includes many interesting observations, such as the methods for extrinsic evaluation of corpora. However, the chapter does not address many issues important for corpus evaluation and comparison. Given that the previous chapters introduced a number of pipelines and corpora, this chapter would have been an ideal place to illustrate all the aspects of the pipelines by evaluating them in a consistent way. There are occasional references to this goal, such as the frequency lists of French nouns in Section 5.3.1, but this particular comparison is fairly impressionistic, and it concludes with a declaration of basic similarity of the underlying corpora. Does this mean that the crawling, cleaning, and linguistic processing pipelines do not matter? In any case, not even an impressionistic comparison of the pipelines is performed for the other evaluation methods.
Some illustrations are also not informative (e.g., Table 5.1.1 shows two frequency lists with identical ranks for their words, which leads to the trivial rank correlation value of 1). The chapter contains a single paragraph devoted to the composition of Web corpora. Given the size of such corpora, their evaluation crucially depends on understanding what has been crawled. The task has been approached by a number of models, such as supervised and semisupervised classification, clustering, topic modeling, and so forth, which should have been included in the discussion. The discussion does contain a relevant reference to Mehler, Sharoff, and Santini (2010), which surveys approaches to the genres of the Web, but other aspects of corpus composition need to be addressed, too.

Overall, it is very useful to have a book that introduces all the aspects of Web corpus construction in a single volume with a coherent presentation. The volume under review does cover the entire range of topics relevant to Web corpus construction and illustrates them via numerous examples. I would recommend it to students just starting their corpus development experiments. As for the drawbacks of the volume, there is a need to improve the structure of argumentation for the next edition. Bits of information are sometimes introduced in an incomplete way and then re-introduced in subsequent sections. For example, two tools for crawling are discussed towards the end of Section 2.3.3, while more tools are mentioned as the discussion of crawling strategies progresses. Chapter 1 starts with a fairly random list of non-Web corpora, whereas an overview of the book structure is confined to a short paragraph. Often, frustratingly little information is provided besides an annotated bibliography, rather than a presentation of the relevant methods and issues. In some cases this is accompanied by a statement that "covering this topic is beyond the scope of this volume," even when the nature of the problem and the solutions could have been easily explained in a one-page summary. Another minor concern is an (understandable) emphasis on the tools and corpora developed by the authors, primarily on their German corpus. I have to admit ambivalence in my final verdict: the book is a useful introduction to an important topic, but it definitely warrants a new edition that eliminates the shortcomings of the current one.

References:
Baeza-Yates, Ricardo, Carlos Castillo, and Efthimis N. Efthimiadis. 2007. Characterization of national Web domains. ACM Transactions on Internet Technology (TOIT), 7(2):9.
Baroni, Marco, Silvia Bernardini, Adriano Ferraresi, and Eros Zanchetta. 2009. The WaCky wide Web: A collection of very large linguistically processed Web-crawled corpora. Language Resources and Evaluation.
Baroni, Marco, Francis Chantree, Adam Kilgarriff, and Serge Sharoff. 2008. Cleaneval: A competition for cleaning Web pages. In Proceedings of LREC.
Liu, Bing. 2012. Sentiment Analysis and Opinion Mining. Synthesis Lectures on Human Language Technologies. Morgan & Claypool.
Mehler, Alexander, Serge Sharoff, and Marina Santini, editors. 2010. Genres on the Web: Computational Models and Empirical Studies. Springer, Berlin/New York.
Søgaard, Anders. 2013. Semi-Supervised Learning and Domain Adaptation in Natural Language Processing. Synthesis Lectures on Human Language Technologies. Morgan & Claypool.
[{""affiliations"": [], ""name"": ""Roland Sch\u00e4fer""}, {""affiliations"": [], ""name"": ""Felix Bildhauer""}] SP:661b841860b253255a303d552a5d5d1396e9824e [{""authors"": [""Baeza-Yates"", ""Ricardo"", ""Carlos Castillo"", ""Efthimis N. Efthimiadis.""], ""title"": ""Characterization of national Web domains"", ""venue"": ""ACM Transactions on Internet Technology (TOIT), 7(2):9."", ""year"": 2007}, {""authors"": [""Baroni"", ""Marco"", ""Silvia Bernardini"", ""Adriano Ferraresi"", ""Eros Zanchetta.""], ""title"": ""The WaCky wide Web: A collection of very large linguistically processed Web-crawled corpora"", ""venue"": ""Language Resources and Evaluation,"", ""year"": 2009}, {""authors"": [""Baroni"", ""Marco"", ""Francis Chantree"", ""Adam Kilgarriff"", ""Serge Sharoff""], ""title"": ""Cleaneval: A competition for cleaning"", ""year"": 2008}, {""authors"": [""Studies. Springer"", ""Berlin/New York. S\u00f8gaard"", ""Anders""], ""title"": ""Sentiment Analysis and Opinion Mining. Synthesis Lectures on Human Language Technologies"", ""year"": 2013}] reviewed by serge sharoff university of leeds :The Web is the main source of data in modern computational linguistics. Other volumes in the same series, for example, Introductions to Opinion Mining (Liu 2012) and Semisupervised Machine Learning (Søgaard 2013), start their problem statements by referring to data from the Web. This volume starts its own introduction by praising Web corpora for their size, ease of construction, and availability as a source of new text types. A random check of papers from the most recent ACL meeting also shows that the majority of them use Web data in one way or another. Our field definitely needs a comprehensive overview and a DIY manual for the task of constructing a corpus from the Web. This book is, to the best of my knowledge, the first attempt at providing such an overview. © 2015 Association for Computational Linguistics The book consists of an introduction and four chapters outlining the four main steps of Web corpus construction. They include: “Data Collection” (Chapter 2), “Basic Corpus Cleaning” (Chapter 3), “Linguistic Processing” (Chapter 4), and “Corpus Evaluation” (Chapter 5). Chapter 2 provides a very useful outline of the main properties of the Web and the crawling strategies. The chapter starts with an overview of a large-scale study of Web connectivity from Baeza-Yates, Castillo, and Efthimiadis (2007), listing various parameters of connectivity for a range of Top-Level Domains. However, there is little discussion of the implications for the corpus development task; for example, does the difference of the in-degree parameter of the Web pages from Chile and the UK have any implications for the Web corpora crawled from those domains? The chapter then proceeds to another important topic, which concerns the parameters of crawling; for example, the crawl bias and the number of seeds, and their influence on the final corpus. Section 2.4.1 illustrates the problems with the crawl bias by an example of deWac, a large commonly used corpus of German (Baroni et al. 2009). The second most frequent proper name bigram in this corpus is found to be Falun Gong. However, more analysis into the nature of the bias should have been beneficial. It is less likely to be related to the PageRank bias, the main bias discussed in Section 2.4.2. 
Other most frequent bigrams from deWac are not presented in the book, but it is interesting to note that the fourth place in it is occupied by Hartz IV, and the tenth place by Digital Eyes. This suggests that the bias comes from frequency spikes (i.e, a large number of instances collected from a small number of Web sites). Another shortcoming of this chapter is that nothing is said specifically about obtaining data from such resources as Twitter or Facebook, which need access via APIs rather than direct crawling. doi:10.1162/COLI r 00214 Computational Linguistics Volume 41, Number 1 Chapter 3 introduces methods for basic cleaning of the corpus content, such as processing of text formats (primarily HTML tags), language identification, boilerplate removal, and deduplication. Such low-level tasks are not considered to be glamorous from the view of computational linguistics, but they are extremely important for making Web-derived corpora usable (Baroni et al. 2008). The introduction offered in this chapter is reasonably complete, with good explanations of the sources of problems as well as with suggestions for the tools to be used in each task. An important bit which is missing in this chapter concerns the suggestions for choosing a particular cleaning pipeline. Although the choice indeed depends on the purposes of corpus collection, an indication of which pipeline suits which purpose is desirable. Chapter 4 is devoted to basic steps for linguistic processing of Web corpora, such as tokenization, POS tagging, and lemmatization, as well as orthographic normalization. Even though the processing pipeline is roughly the same for all NLP tasks, it becomes harder for Web corpora because they exhibit greater diversity in comparison with more homogeneous text collections (e.g., WSJ texts). Web texts are also considerably noisier, in the sense of containing nonstandard linguistic expressions, which are likely to be a challenge to the tools trained on more standard texts. The chapter presents some interesting case studies—in particular, the sources of POS tagging errors and nonstandard orthography. Chapter 5 describes ways for evaluating and comparing corpora. It gives examples of checking for word and sentence length and for sentence-level duplication. It also introduces methods for comparing frequency lists. Like other chapters it includes many interesting observations, such as the methods for extrinsic evaluation of corpora. However, the chapter does not address many issues important for corpus evaluation and comparison. Given that the previous chapters introduced a number of pipelines and corpora, this chapter would have been an ideal place to illustrate all the aspects of the pipelines by evaluating them in a consistent way. There are occasional references to this goal, such as the frequency lists of French nouns in Section 5.3.1, but this particular comparison is fairly impressionistic, and it concludes with a declaration of basic similarity of the underlying corpora. Does this mean that the crawling, cleaning, and linguistic processing pipelines do not matter? In any case, not even an impressionistic comparison of the pipelines is performed for other evaluation methods. Some illustrations are also not informative (e.g., Table 5.1.1 shows two frequency lists with the identical ranks for their words, which leads to the trivial rank correlation value of 1). The chapter contains a single paragraph devoted to composition of Web corpora. 
1 introduction :A constancy measure for a natural language text is defined, in this article, as a computational measure that converges to a value for a certain amount of text and remains invariant for any larger size. Because such a measure exhibits the same value for any size of text larger than a certain amount, its value can be considered a text characteristic. The concept of such a text constancy measure was introduced by Yule (1944) in the form of his measure K. Since Yule, there has been a continuous quest for such measures, and various formulae have been proposed. They can be broadly categorized into three types, namely, those measuring (1) repetitiveness, (2) power-law character, and (3) complexity.

(Author affiliations: ∗ Kyushu University, 744 Motooka Nishiku, Fukuoka City, Fukuoka, Japan; e-mail: kumiko@ait.kyushu-u.ac.jp. ∗∗ JST-PRESTO, 4-1-8 Honcho, Kawaguchi, Saitama 332-0012, Japan. † Gunosy Inc., 6-10-1 Roppongi, Minato-ku, Tokyo, Japan. Submission received: 11 July 2013; revised version received: 17 February 2015; accepted for publication: 18 March 2015.)
Yule's original intention for K's utility lay in author identification, assuming that it would differ for texts written by different authors. State-of-the-art multivariate machine learning techniques are powerful, however, for solving such language engineering tasks, in which Yule's K is used only as one variable among many, as reported in Stamatatos, Fakotakis, and Kokkinakis (2001) and Stein, Lipka, and Prettenhofer (2010). We believe, however, that constancy measures today have greater importance in understanding the mathematical nature of language. Although mathematical models of language have been studied in the computational linguistics milieu, via Markov models (Manning and Schuetze 1999), Zipf's law and its modifications (Mandelbrot 1953; Zipf 1965; Bell, Cleary, and Witten 1990), and, more recently, Pitman-Yor models (Teh 2006), the true mathematical model of linguistic processes is ultimately unknown. Therefore, the convergence of a constancy measure must be examined through empirical verification. Because some constancy measures have a mathematical theory of convergence for a known process, discrepancies in the behavior of real linguistic data from such a theory would shed light on the nature of linguistic processes and give hints towards improving the mathematical models. Furthermore, as one application, a convergent measure would allow for comparison of different texts through a common, stable norm, provided that the measure converges for a sufficiently small amount of text. One of our goals is to discover a non-trivial measure with a certain convergence speed that distinguishes the different natures of texts. The objective of this article is thus to provide a potential explanation of what the study of constancy measures over 70 years has been about, by answering the three following questions mathematically and empirically:

Question 1: Does a measure exhibit constancy?
Question 2: If so, how fast is the convergence speed?
Question 3: How discriminatory is the measure?

We seek answers by first showing the meaning of Yule's K in relation to the Rényi higher-order entropy, and by then empirically examining constancy across large-scale texts of different kinds. We finally provide an application by considering the natures of two unknown scripts, the Voynich manuscript and Rongorongo, in order to show the possible utility of a constancy measure. The most important and closest previous work was reported in Tweedie and Baayen (1998), the first paper to have examined the empirical behavior of constancy measures on real texts. The authors used English literary texts to test constancy measure candidates proposed prior to their work. Today, the coverage and abundance of language corpora allow us to conduct a larger-scale investigation across multiple languages. Recently, Golcher (2007) tested his measure V (discussed later in this paper) with Indo-European languages and also programming language sources. Our papers (Kimura and Tanaka-Ishii 2011, 2014) also precede this one, presenting results preliminary to this article but with only part of our data, and neither of those provides mathematical analysis with respect to the Rényi entropy. Compared with these previous reports, our contribution here can be summarized as follows: Our work elucidates the mathematical relation of Yule's K to Rényi's higher-order entropy and explains why K converges.
Our work vastly extends the corpora used for empirical examination in terms of both size and language. Our work compares the convergent values for these corpora. Our work also presents results for unknown language data, specifically from the Voynich manuscript and Rongorongo. We start by summarizing the potential constancy measures proposed so far.

2 constancy measures :The measures proposed so far can broadly be categorized into three types, calculating the repetitiveness, power-law distribution, or complexity of text. This section mathematically analyzes these measures and summarizes them. The study of text constancy started with proposals for simple text measures of vocabulary repetitiveness. The representative example is Yule's K (Yule 1944), while Golcher recently proposed V as another candidate (Golcher 2007).

2.1.1 Yule's K. To the best of our knowledge, the oldest mention of constancy values was made by Yule with his notion of K (Yule 1944). Let N be the total number of words in a text, V(N) be the number of distinct words, V(m, N) be the number of words appearing m times in the text, and $m_{\max}$ be the largest frequency of a word. Yule's K is then defined as follows, through the first and second moments of the vocabulary population distribution V(m, N), where $S_1 = N = \sum_m m\,V(m,N)$ and $S_2 = \sum_m m^2\,V(m,N)$ (Yule 1944; Herdan 1964):

$K = C\,\frac{S_2 - S_1}{S_1^2} = C\left[-\frac{1}{N} + \sum_{m=1}^{m_{\max}} V(m,N)\left(\frac{m}{N}\right)^2\right]$    (1)

where C is a constant enlarging the value of K, defined by Yule as $C = 10^4$. K is designed to measure the vocabulary richness of a text: the larger Yule's K, the less rich the vocabulary is. The formula can be intuitively understood from the main term of the sum. Because the squared relative frequency $(m/N)^2$ indicates the degree of recurrence of a word, the sum of such degrees over all words is small if the vocabulary is rich, and large in the opposite case. Another simple example can be given in terms of $S_2$ in this formula. Suppose a text is 10 words long: if each of the 10 tokens is distinct (high diversity), then $S_2 = 1 \times 1 \times 10 = 10$; whereas, if each of the 10 tokens is identical (low diversity), then $S_2 = 10 \times 10 \times 1 = 100$. Measures that are slightly different but essentially equivalent to Yule's K have appeared here and there. For example, Herdan defined $V_m$ as follows (Herdan 1964, pp. 67, 79):

$V_m = \sqrt{\sum_{m=1}^{m_{\max}} V(m,N)\left(\frac{m}{N}\right)^2 - \frac{1}{V(N)}}$

Likewise, Simpson (1949) derived the following formula as a measure to capture the diversity of a population:

$D = \sum_{m=1}^{m_{\max}} V(m,N)\,\frac{m}{N}\cdot\frac{m-1}{N-1}$

which is equivalent to Yule's K, as Simpson noted.
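As a concrete illustration, Yule's K is straightforward to compute from unigram counts. The following minimal Python sketch uses the relative-frequency form of the sum in Equation (1); the example sentence is ours, purely for illustration:

from collections import Counter

def yules_k(tokens, C=10_000):
    # K = C * (-1/N + sum over distinct words x of (freq(x)/N)^2)
    N = len(tokens)
    s = sum((f / N) ** 2 for f in Counter(tokens).values())
    return C * (-1 / N + s)

tokens = "the quick brown fox jumps over the lazy dog the fox".split()
print(yules_k(tokens))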
2.1.2 Other Measures Based on Simple Text Statistics. Apart from Yule's K, various measures have been proposed from simple statistical observation of text, as detailed in Tweedie and Baayen (1998). One genre is based on the so-called token-type relation (i.e., the ratio of the vocabulary size V(N) and the text size N, in log), as formulated by Guiraud (1954) and Herdan (1964) as a law. Because this simple ratio is not stable, the measure was modified numerous times to formulate Herdan's C (Herdan 1964), Dugast's k and U (Dugast 1979), Maas' $a^2$ (Maas 1972), Tuldava's $L_N$ (Tuldava 1977), and Brunet's W (Brunet 1978). Another genre of measures concerns the proportion of hapax legomena, that is, V(1, N). Honoré noted that V(1, N) increases linearly with respect to the log of a text's vocabulary size V(N) (Honoré 1979). Another ratio, of V(2, N) to V(N), was proposed as a text characteristic by Sichel (1975) and Maas (1972). Each of these values, however, was found not to be convergent according to the extensive study conducted by Tweedie and Baayen (1998). In common with Yule's intention to apply such measures for author identification, they examined all of the measures discussed here, in addition to two measures explained later: Orlov's Z, and the Shannon entropy upper bound obtained from the relative frequencies of unigrams. They examined these measures with English novels (such as Alice's Adventures in Wonderland) and empirically found that only Yule's K and Orlov's Z were convergent. Given their report, we consider K the only true candidate among the constancy measures examined so far.

2.1.3 Golcher's V. Golcher's V is a string-based measure calculated on the suffix tree of a text (Golcher 2007). Letting the length of the string be N and the number of inner nodes of the (Patricia) suffix tree (Gusfield 1997) be k, V is defined as:

$V = \frac{k}{N}$    (2)

Golcher empirically showed how this measure converges to almost the same value across Indo-European languages for about 30 megabytes of data. He also showed how the convergent values differ from those calculated for programming language texts. Golcher explains in his paper that the possibility of constancy of V does not yet have mathematical grounding and has only been shown empirically. He does not report values for texts larger than about 30 megabytes nor for those of non-Indo-European languages. A simple conjecture on this measure is that because a suffix tree for a string of length N has at most N − 1 inner nodes, V must end up at some value 0 ≤ V < 1 for any given text. Our group tested V with larger-scale data and concluded that V could be a constancy measure, although we admitted to observing a gradual increase (Kimura and Tanaka-Ishii 2014). Because V requires further verification on larger-scale data before ruling it out, we include it as a constancy measure candidate.

Since Zipf (1965), power laws have been reported as an underlying statistical characteristic of text. The famous Zipf's law is defined as:

$f(n) \propto n^{-\gamma}$    (3)

where $\gamma \approx 1$, and f(n) is the frequency of the nth most frequent word in a text. Various studies have sought to explain mathematically how the exponent could differ depending on the kind of text. To the best of our knowledge, however, there has been a limited number of reports related to text constancy. An exception is the study on Orlov's Z (Orlov and Chitashvili 1983). Orlov and Chitashvili attempted to obtain explicit mathematical forms for V(N) and V(m, N) by more finely considering the long tails of vocabulary distributions for which Zipf's law does not hold. They obtained these forms through a parameter Z, defined as the potential text length minimizing the square error of the estimated V(m, N) with respect to its actual value, as follows:

$Z = \arg\min_N \frac{1}{m_{\max}} \sum_{m=1}^{m_{\max}} \left\{ \frac{E[V(m,N)] - V(m,N)}{V(N)} \right\}^2$    (4)

Thus defining Z, they mathematically deduced for V(N) the following formula:

$V(N) = \frac{Z}{\log(m_{\max} Z)}\,\frac{N}{N-Z}\,\log\!\left(\frac{N}{Z}\right)$    (5)

Two ways to obtain Z can be formulated through approximation: one through Good-Turing smoothing (Good 1953), which assumes Zipf's law to hold, and the other using Newton's method. Tweedie and Baayen showed how the value of Z is stable at the size of an English novel by a single author and thus suggested that it could form a text characteristic. The empirical results, however, were not significantly convergent with respect to text size, and, moreover, Tweedie and Baayen provided their results without giving an estimation method (Tweedie and Baayen 1998). Calculation using Good-Turing smoothing, which is derived directly from Zipf's law, would cause Z to converge, but this does not take Orlov's original intention into consideration. Alternatively, our group (Kimura and Tanaka-Ishii 2014) verified Z through Newton's method by setting g(Z) = 0, where g(Z) is the following function:

$g(Z) = \frac{Z}{\log(m_{\max} Z)}\,\frac{N}{N-Z}\,\log\!\left(\frac{N}{Z}\right) - V(N)$    (6)

We also showed how the value of Z increases rapidly when the text size is larger than 10 megabytes.
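For illustration, solving g(Z) = 0 numerically is straightforward. The Python sketch below uses a bracketing root finder instead of Newton's method, purely for robustness in a short example; N, V(N), and $m_{\max}$ are hypothetical text statistics of our own choosing, not values from this article:

import math
from scipy.optimize import brentq

def g(Z, N, VN, m_max):
    # g(Z) of Equation (6)
    return Z / math.log(m_max * Z) * N / (N - Z) * math.log(N / Z) - VN

N, VN, m_max = 1_000_000, 30_000, 60_000   # hypothetical values
Z = brentq(g, 2.0, N - 1.0, args=(N, VN, m_max))
print(f"estimated Z = {Z:.1f}")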
The major problem with measures based on power laws lies in the skewed head and tail of the vocabulary population distribution. Because these exceptions constitute important parts of the population, parameter estimation by fitting to Equation (3) is sensitive to the estimation method. For example, the estimated value of the exponent for Zipf's law depends on the method used for dealing with these exceptions. We tested several simple methods of estimating the Zipf law's exponent $\gamma$ with different ways of handling the head and tail of a distribution. There were settings that led to convergence, but the convergence depended on the settings. Such difficulty could be one reason why there has been no direct proposal of $\gamma$ as a text constancy measure. Hence, due care must be taken in relating text constancy to a power law. We chose another path by considering text constancy through a random Zipf distribution, as described later in the experimental section.

With respect to measures based on complexity, multiple reports have already examined the Shannon entropy (Shannon 1948; Cover and Thomas 2006). In addition, we introduce the Rényi higher-order entropy (Rényi 1960) as another possible measure.

2.3.1 Shannon Entropy Upper Bound. Let X be the random variable of a sequence $X = X_1, X_2, \ldots, X_i, \ldots$, where $X_i$ represents the ith element of X, $X_i = x \in \mathcal{X}$, and where $\mathcal{X}$ represents a given set (e.g., a set of words or characters) whose members constitute the sequence. Let $X_i^j$ $(i < j)$ denote the random variable indicating its subsequence $X_i, X_{i+1}, X_{i+2}, \ldots, X_j$. Let P(X) indicate the probability function of a sequence X. The Shannon entropy is then defined as:

$H(X) = -\sum_X P(X)\,\log P(X)$    (7)

Tweedie and Baayen directly calculated an approximation of this formula in terms of the relative frequencies (for P) of unigrams (for X), and they concluded that the measure would continue increasing with respect to text size and would not converge for short, literary texts (Tweedie and Baayen 1998). Because we are interested in the measure's behavior on a larger scale, we replicated their experiment, as discussed later in the section on empirical constancy. We denote this measure as H1 in this article. Apart from that report, many have studied the entropy rate, defined as:

$h^* = \lim_{n\to\infty} \frac{H(X_1^n)}{n}$    (8)

Theoretically, the behavior of the entropy rate with respect to text size has been controversial. On the one hand, there have been indications of entropy rate constancy (Genzel and Charniak 2002; Levy and Jaeger 2007). These reports argue that the entropy rate of natural language could be constant. Due to the inherent difficulty of obtaining the true value of $h^*$ from a text, however, these arguments are based only on indirect clues with respect to convergence.
On the other hand, Hilberg conjectured a decrease in the human conditional entropy, as follows (Hilberg 1990):

$H(X_n \mid X_1^{n-1}) \propto n^{-1+\beta}$

He obtained this through an examination of Shannon's original experimental data and suggested that $\beta \approx 0.5$. From this formula, Dȩbowski induces that $H(X_1^n) \propto n^{\beta}$ and that the entropy rate can be formulated generally as follows (Dȩbowski 2014):

$\frac{H(X_1^n)}{n} \approx A\,n^{-1+\beta} + h^*$    (9)

Note that at the limit of $n \to \infty$, this rate goes to $h^*$, a constant, provided that $\beta < 1.0$. Hilberg's conjecture is deemed compatible with entropy rate constancy at its asymptotic limit, provided that $h^* > 0$ holds. We are therefore interested in whether this $h^*$ forms a text characteristic, and if so, whether $h^* > 0$. Empirically, many have attempted to calculate the upper bound of the entropy rate. Brown's report (Brown et al. 1992) is representative in showing a good estimation of the entropy rate for English from texts, as compared with values obtained from humans (Cover and King 1978). Subsequently, there have been important studies on calculating the entropy rate, as reported thoroughly in Schürmann and Grassberger (1996). The questions related to $h^*$, however, remain unsolved. Recently, Dȩbowski used a Lempel-Ziv compressor and examined Hilberg's conjecture for texts by single authors (Dȩbowski 2013). He showed an exponential decrease in the entropy rate with respect to text size, supporting the validity of Equation (9). Following these previous works, we examine the entropy rate by using an algorithm proposed by Grassberger (1989) and later on by Farach et al. (1995). This method is based on universal coding. The algorithm has a theoretical background of convergence to the true $h^*$, provided the sequence is stationary, but has been proved by Shields (1992) to be inconsistent—that is, it does not converge to the entropy rate for certain non-Markovian processes. We still chose to apply this method, because it requires no arbitrary parameters for calculation and is applicable to large-scale data within a reasonable time. The Grassberger algorithm (Grassberger 1989; Farach et al. 1995) can be summarized as follows. Consider a sequence X of length N. The maximum matching length $L_i$ is defined as:

$L_i = \max\{k : X_j^{j+k} = X_i^{i+k},\; j \in \{1,\ldots,i-1\},\; 1 \le j \le j+k \le i-1\}$

In other words, $L_i$ is the maximum length of a subsequence starting at position i that also appears starting at some position before i. If $\bar{L}$ is the average of the $L_i$, given by

$\bar{L} = \frac{1}{N} \sum_{i=1}^{N} L_i$

then the method obtains the entropy rate $h_1$ as

$h_1 = \frac{\log_2 N}{\bar{L}}$    (10)

Given the true entropy rate $h^*$, convergence has been mathematically proven for a stationary process, such that $|h^* - h_1| = o(1)$ as $N \to \infty$. In this article, we consider this entropy rate $h_1$ as a constancy measure candidate.

(Footnote 1: According to Dȩbowski (2009), $h^* = 0$ suggests that the next element of a linguistic process is deterministic, that is, a function of the corpus observed before, under the two conditions that (1) the number of possible choices for the element is finite, and (2) the corpus observed before is infinite. In reality, the finiteness of linguistic sequences has the opposite tendency; that is, the size of the observed corpus is finite, and the possible vocabulary size is infinite.)
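For illustration, the estimator of Equation (10) can be sketched naively in Python. This version is quadratic in the sequence length, so it is only usable on toy inputs; the experiments reported in this article would require an efficient (e.g., suffix-tree based) computation of the matching lengths:

import math

def entropy_rate_h1(seq):
    # h1 = log2(N) / mean(L_i), with L_i the longest match starting at i
    # that also starts at some earlier position (cf. Equation (10)).
    N = len(seq)
    total = 0
    for i in range(N):
        best = 0
        for j in range(i):
            k = 0
            while i + k < N and j + k < i and seq[j + k] == seq[i + k]:
                k += 1
            best = max(best, k)
        total += best
    L_bar = total / N
    return math.log2(N) / L_bar if L_bar > 0 else float("inf")

print(entropy_rate_h1("abracadabra abracadabra abracadabra"))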
2.3.2 Approximation of Rényi Entropy $H_\alpha$. The Rényi entropy is a generalization of the Shannon entropy, defined as follows (Rényi 1960; Rényi 1970; Cover and Thomas 2006; Bromiley, Thacker, and Bouhova-Thacker 2010):

$H_\alpha(X) = \frac{1}{1-\alpha} \log\!\left(\sum_X P^\alpha(X)\right)$    (11)

where $\alpha \ge 0$ and $\alpha \neq 1$. $H_\alpha(X)$ represents different ideas of sequence complexity for different $\alpha$. For example, when $\alpha = 0$, $H_0(X)$ indicates the number of distinct occurrences of X; when the limit $\alpha \to 1$ is taken, Equation (11) reduces to the Shannon entropy. The formula for $\alpha = 0$ becomes equivalent to the so-called topological entropy (hence, it is another notion of entropy) for certain probability functions (Kitchens 1998; Cover and Thomas 2006). Note that the number of distinct tokens (i.e., the cardinality of a set) has been used widely as a rough approximation of complexity in computational linguistics. Indeed, in Section 2.1.2, we saw how some candidate constancy measures are based on a token-type relation, such that the number of types is related to the complexity of a text. For texts, note also that this value grows with respect to the text size, unless X is considered, for example, in terms of unigrams of a phonographic alphabet. For $\alpha \to 1$, there is controversy regarding convergence, as noted in the previous section. Such difficulty in convergence for these $\alpha$ values lies in the nature of linguistic processes, in which the vocabulary set evolves. This view motivates us to consider $\alpha > 1$ for $H_\alpha(X)$, since the formula captures complexity by considering linguistic hapax legomena to a lesser degree, thus giving the possibility of convergence. In fact, an approximation of the probability by the relative frequencies of unigrams at $\alpha = 2$ immediately shows the essential equivalence to Yule's K, since K from Equation (1) can be rewritten through

$\sum_{m=1}^{m_{\max}} V(m,N)\left(\frac{m}{N}\right)^2 = \sum_{x \in \mathcal{X}} \left(\frac{\mathrm{freq}(x)}{N}\right)^2$

where freq(x) is the frequency of $x \in \mathcal{X}$. Therefore, Yule's K has significance within the context of complexity. This relation of Yule's K to the Rényi entropy H2 is reported for the first time here, to the best of our knowledge. This mathematical relation clarifies both why Yule's K should converge and what the convergent value means; specifically, the value represents the gross complexity underlying the language system. As noted earlier, the higher-order entropy considers hapax legomena to a lesser degree and calculates the gross entropy only from the representative vocabulary population. This simple argument shows that Yule's K captures not only the simple repetitiveness of vocabulary but also the more profound signification of its equivalence with the approximated second-order entropy. Because K has been previously reported as a stable text constancy measure, we consider it here once again, but this time within the broader context of $H_\alpha$. Based on the previous reports (Tweedie and Baayen 1998; Kimura and Tanaka-Ishii 2014) and the discussion so far, we consider the following four measures as candidates for text constancy measures. Repetitiveness-based measures: Yule's K (Equation (1)) and Golcher's V (Equation (2)). Complexity-based measures: the Shannon entropy upper bound, in two forms, namely, $h_1$ as the entropy rate (Equations (10) and (8)) and H1 (Equation (7), with X in terms of unigrams and the probability function in terms of relative frequencies); and the approximated Rényi entropy, denoted as $H_\alpha$ ($\alpha > 1$) (Equation (11), again with X and the probability function in terms of unigrams and relative frequencies, respectively). In addition, we empirically consider how these measures can be understood in the context of the power-law feature of language. As noted in the Introduction, for the convergent measures the speed of attaining convergence with respect to text size is examined as well.
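For illustration, the unigram approximation of $H_\alpha$ used here takes only a few lines of Python; the example sentence is ours, and (using natural logarithms) the $\alpha = 2$ value relates to Yule's K through $K = C(\exp(-H_2) - 1/N)$:

import math
from collections import Counter

def renyi_entropy(tokens, alpha):
    # Unigram approximation of Equation (11), relative frequencies as P
    N = len(tokens)
    probs = [f / N for f in Counter(tokens).values()]
    if alpha == 1:   # limit case: the Shannon entropy (our H1)
        return -sum(p * math.log(p) for p in probs)
    return math.log(sum(p ** alpha for p in probs)) / (1 - alpha)

tokens = "the quick brown fox jumps over the lazy dog the fox".split()
for a in (1, 2, 3, 4):
    print(a, renyi_entropy(tokens, a))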
Among the candidates, K and H1 have been previously applied in a word-based manner, whereas V is string based. The Shannon entropy rate $h_1$ has been considered in both ways. Because we should be able to consider a text in terms of both words and characters, we examine the constancy of each measure in both ways. Furthermore, because we have seen the mathematical equivalence of Yule's K and H2, in the following we only consider H2. As for $H_\alpha$, we consider $\alpha = 3, 4$ only in comparison with H2. Because H1 is based on relative frequencies and can be considered together with H2, we first focus on the convergence of the three measures V, $h_1$, and H2, and then we consider H1 in comparison with H2, H3, and H4.

3 data :Table 1 lists the data used in our experimental examination. The table indicates the data identifier (by which we refer to the data in the rest of the article), language, source, number of distinct tokens, data length by total number of tokens, and size in bytes. The first block contains relatively large-scale natural language corpora consisting of texts written by multiple authors, and the second block contains smaller corpora consisting of texts by single authors. The third block contains programming language corpora, and the fourth block contains corpora of unknown scripts, which we examine at the end of this article in Section 4.3. For the large-scale natural language data, we considered five languages: English, Japanese, Chinese, Arabic, and Thai. These languages were chosen to represent different language families and writing systems. The large-scale corpora in English, Japanese, and Chinese consist of newspapers in chronological order, and the Thai and Arabic corpora include other kinds of texts. The markers 'w', 'c', and 'cr' appearing at the end of every identifier in Table 1 (e.g., Enews-c, Enews-w, and Jnews-cr) indicate text processed through words, characters, and transliterated Roman characters, respectively. As for the small-scale corpora in the second block, the texts were only considered in terms of words, since verification via characters produced findings consistent with those obtained with the large-scale corpora. The texts were chosen because each was written by a single author but is relatively large. Here, we summarize our preprocessing procedures. For the annotated Thai NECTEC corpus, texts were tokenized according to the annotation. The preprocessing methods for the other corpora were as follows. English: NLTK (http://nltk.org) was used to tokenize text into words. Japanese: Mecab (http://mecab.googlecode.com/svn/trunk/mecab/doc/index.html) was used for tokenization, and KAKASI (http://kakasi.namazu.org) was used for romanization. Chinese: ICTCLAS2013 (http://ictclas.nlpir.org) was used for tokenization, and the pinyin Python library was used for pinyin romanization. Other European languages: the PunktWordTokenizer (also from NLTK) was used for tokenization. All the other natural language corpora were tokenized simply using spaces.
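As an illustration of this first step, the exact tokenizer invocation is not specified in the text; the following Python sketch shows one plausible NLTK call for English (the tokenizer resource name may vary across NLTK versions):

import nltk
nltk.download("punkt", quiet=True)   # tokenizer models; a one-off download
from nltk.tokenize import word_tokenize

print(word_tokenize("The quick brown fox jumps over the lazy dog."))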
Following Golcher (2007), who first suggested testing constancy on programming languages, we also collected program sources from different languages (third block in Table 1). The programs were also considered solely in terms of words, not characters. C++ and Python were chosen to represent different abstraction levels, and Lisp was chosen because of its different ordering for function arguments. Source code was collected from language libraries. The programming language texts were preprocessed as follows. Comments in natural language were eliminated (although strings remained in the programs, where each was a literal token). Identical files and copies of sources in large chunks were carefully eliminated, although this process did not completely eliminate redundancy, since most programs reuse some previous code. Finally, the programs were tokenized according to the language specifications. (With respect to the Lisp programming language, its culture favors long, hyphenated variable names that can be almost as long as a sentence. For this work, therefore, Lisp variable names were tokenized by splitting at the hyphens.)

The last block of the table lists two corpora of unknown scripts. We consider these scripts at the end of this article in Section 4.3, through Figure 5, to show one possible application of the text constancy measures. The first unknown script is that of the Voynich manuscript, a famous text that is undeciphered but hypothesized to have been written in natural language. This corpus is considered in terms of both characters and words, where words were defined via the white space separation in the original text. Given the common understanding that the manuscript seems to have two different parts (Reddy and Knight 2011), we separated it into two parts according to the Currier annotation (identified as A and B, respectively). The second corpus of unknown text consists of the Rongorongo script of Easter Island (Daniels and Bright 1996, Section 13; Orliac 2005; Barthel 2013). This script's status as natural language is debatable, but if it is natural language, it is considered to possess characteristics of both phonographs and ideograms (Pozdniakov and Pozdniakov 2007). Because there are several ways to consider what constitutes a character in this script (Barthel 2013), we calculate values for the two most extreme cases as follows. For corpus RongoA-c, we consider a character inclusive of all adjoining parts (i.e., including accents and ornamental parts). On the other hand, for corpus RongoB-c, we separate parts as reasonably as possible, among multiple possible separation methods. Because the unit of word in this script is unknown, the Rongorongo script is only considered in terms of characters.

The empirical verification of convergence for real data is controversial. We must first note that it does not conform with the standard approach to statistical testing. In the domain of statistics, it is a common understanding that "convergence" cannot be tested. A statistical test raises two contrasting hypotheses, called the null and alternative hypotheses, and calculates a p-value indicating how likely an observation at least as extreme as the actual one would be if the null hypothesis held. When this p-value is smaller than a certain threshold, the null hypothesis is rejected. For convergence, the null hypothesis corresponds to "not converging," and the alternative hypothesis, to "converging." The problem here is that the null hypothesis is always related to the alternative hypothesis to a certain extent, because the difference between convergence and non-convergence is merely a matter of degree. In other words, the notion of convergence for a constancy measure does not conform with the philosophy of statistical testing. Convergence is therefore considered in terms of the distance from convergent values, or in terms of the error with respect to some parameter (such as data size). Such a distance cannot be calculated for real data, however, since the underlying mathematical model is unknown.
To sum up, verification of the convergence of real data must be considered by some other means. Our proposal is to consider convergence in comparison to a set of random data whose generating process is known. For this random data, we considered two kinds.

The first kind is used to examine data convergence in Section 4.1. This random data was generated from real data by shuffling the original text with respect to certain linguistic units. Tweedie and Baayen (1998) presented results by shuffling words, where the original texts were literary texts by single authors. Here, we generated random data by shuffling (1) words/characters, (2) sentences, or (3) documents. Because these options greatly increased the number of combinations of results, we mainly present the results with option (1) for large-scale data in this article. There are three reasons for this: Convergence must be verified especially at large scale; the most important convergence findings for randomized small-scale data were already reported in Tweedie and Baayen (1998); and the results for options (2) and (3) were situated within the range between option (1) and the original texts. Randomization of the words and characters of original texts destroys various linguistic characteristics, such as n-grams and long-range correlation.

The convergence properties of the three measures V, h1, and H2 are as follows. The convergence of V is unknown, because it lacks a mathematical background. Even if the value of V did converge, the convergent value for randomized data would differ from that of the original text, since the measure is based on repeated n-grams in the text. h1 converges to the entropy rate of the randomized text, if the data size suffices. This is supported by the mathematical background of the algorithm, which converges to the true entropy rate for stationary data. Even when h1 converges for random data, the convergent value will be larger than that of the original text, because h1 considers the probabilities of n-grams. Lastly, H2 converges to the same point for a randomized text and the original text, because it is the approximated higher-order entropy, such that words and characters are considered to occur independently.

The second kind of random data is used to compare the convergent values of different texts for a constancy measure, as considered in Section 4.2. Random corpora were generated according to four different distributions: one uniform, and the other three following Zipf distributions with exponents of γ = 0.8, 1.0, and 1.3, respectively, for Equation (3). Because each set of real data consists of different numbers of distinct tokens, ranging from tens to billions, random data sets consisting of 2^n distinct tokens, for every n = 4, ..., 19, were randomly generated for each of the four distributions. We only consider the measures H2 and H0 for these data sets. Both of these measures have convergent values, given a sufficient data size. A minimal sketch of how both kinds of random data can be generated is given below.
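A minimal sketch of both kinds of random data might look as follows; token identities are integers, and the Zipf sampler draws i.i.d. tokens under Equation (3). This is an idealization of the generation procedure, not the authors' exact code.

```python
import random

def shuffle_tokens(tokens, seed=0):
    # Randomization option (1): shuffling words/characters destroys
    # n-gram structure and long-range correlation while preserving the
    # unigram distribution of the original text.
    shuffled = list(tokens)
    random.Random(seed).shuffle(shuffled)
    return shuffled

def zipf_corpus(n_distinct, length, gamma=1.0, seed=0):
    # Second kind: i.i.d. tokens whose rank-frequency distribution
    # follows f(n) proportional to n^(-gamma), as in Equation (3);
    # gamma=0.0 yields the uniform distribution.
    weights = [(rank + 1) ** -gamma for rank in range(n_distinct)]
    rng = random.Random(seed)
    return rng.choices(range(n_distinct), weights=weights, k=length)

# For example, 2**10 distinct tokens under the three Zipf exponents used here:
samples = {g: zipf_corpus(2**10, 10**5, gamma=g) for g in (0.8, 1.0, 1.3)}
```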
4 experimental results :From the previous discussion, we applied the three measures V, h1, and H2 with five large-scale and eight small-scale natural language corpora, three programming language corpora, and two unknown script corpora, in terms of words and characters. Because there were many results for different combinations of measure, data, and token (word or character), this section is structured so that it best highlights our findings.

Figures 1, 2, and 3 in this section can be examined in the following manner. The horizontal axis indicates the text size of each corpus, in terms of the number of tokens, on a log scale. Chunks of different text sizes were always taken from the head of the corpus. (For real data, this was done without any randomization of the order of texts for all corpora besides Atext-w and Atext-c. The Watan corpus is distributed not in the chronological order of the publishing dates, but as a set of articles grouped into categories (i.e., all articles of one category, then all articles of another category, and so on). Because of this, there is a large skew in the vocabulary distribution, depending on the section of the corpus. We thus randomly reshuffled the articles by categories for the whole corpus before taking chunks of different sizes (always from the beginning) to generate our results. Apart from this, we avoided any arbitrary randomization with respect to the original data summarized in Table 1.) The vertical axis indicates the values of the different measures: V, h1, or H2. Each figure contains multiple lines, each corresponding to a corpus, as indicated in the legends.

First, we consider the results for the large-scale data. Figure 1 shows the different measures for words (left three graphs) and characters (right three graphs). We can see that V increased for both words and characters (top two graphs). Golcher tested his measure on up to 30 megabytes of text in terms of characters (Golcher 2007). We also observed a stable tendency up to around 10^7 characters. The increase in V became apparent, however, for larger text sizes. Thus, it is difficult to consider V as a constancy measure.

As for the results for h1 (middle graphs), both graphs show a gradual decrease. The tendency was clearer for words than for characters. For some corpora, especially for characters, it was possible to observe some values converging towards h∗. The overall tendency, however, could not be concluded as converging. This result suggests the difficulty of attaining convergence of the entropy rate, even with gigabyte-scale data. From the theoretical background of the Grassberger algorithm, the values would possibly converge with larger-scale data. The continued decrease could be due to multiple reasons, including the possibility of requiring far larger data than that used here, or a discrepancy between linguistic processes and the mathematical model assumed for the Grassberger algorithm. We tried to estimate h∗ by fitting Equation (9). For the corpora with good fits, all of the estimated values were larger than zero, but many of the results could not be fitted easily, and the estimated values were unstable due to fluctuation of the lines. Whether a value for h∗ is reached asymptotically and also whether h∗ > 0 remain important questions requiring separate, more extensive mathematical and empirical studies. A minimal sketch of this fitting procedure is given below.
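The fitting itself can be sketched as follows with SciPy's curve_fit; the measurement values here are hypothetical placeholders, and the parameter bounds merely encode β < 1 and h∗ ≥ 0, which is our reading of the constraints implied by Equation (9).

```python
import numpy as np
from scipy.optimize import curve_fit

def entropy_rate_model(n, A, beta, h_star):
    # Equation (9): H(X_1^n)/n is approximately A * n^(beta - 1) + h*.
    return A * n ** (beta - 1.0) + h_star

# Hypothetical h1 measurements at increasing text sizes (placeholders).
n_vals = np.array([1e3, 1e4, 1e5, 1e6, 1e7])
h_vals = np.array([6.1, 5.2, 4.6, 4.2, 3.9])

# Bounds keep beta below 1 and h* non-negative, so the model decays towards h*.
(A, beta, h_star), _ = curve_fit(entropy_rate_model, n_vals, h_vals,
                                 p0=(10.0, 0.5, 1.0),
                                 bounds=((0.0, 0.0, 0.0), (np.inf, 1.0, np.inf)))
print(f"estimated h* = {h_star:.3f} (beta = {beta:.3f})")
```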
In contrast, H2 (or Yule's K, bottom graphs) showed convergence, already at the level of 10^5 tokens, for both words and characters. From the previous verification of Yule's K, we can conclude that H2 is convergent. The final convergent values, however, differed for the various writing systems. We return to this issue in the next section.

To better understand the convergence, Figure 2 shows the results for the corresponding randomized data. As mentioned in Section 3.2, the original texts were randomized by shuffling words and characters for the data examined by words and characters, respectively. Therefore, all n-gram characteristics existing in the text were destroyed, and what remained were the different words and characters appearing in a random order. Here, we see how the random data's behavior exhibits some of the theoretical properties of convergence summarized in Section 3.2.

As mentioned previously, because V has no mathematical background, its behavior even for uniform random data is unknown, and even if it converged, the convergent value would be smaller than that of the original text. The top two graphs in Figure 2 exhibit some oscillation, especially for randomized Chinese (Cnews-c,w). Such peculiar oscillation was already reported by Golcher himself (Golcher 2007) for uniformly random data. This was easy to replicate, as reported in Kimura and Tanaka-Ishii (2014), for uniformly random data with the number of distinct tokens up to a hundred. Although the word distribution almost follows Zipf's law, so that the vocabulary is not uniformly distributed, oscillating results still occur for some randomized data in the top left figure. Moreover, the values seem to increase for Japanese and English for words at a larger scale. Although the plots for some scripts seem convergent (top right graph), these convergent values are theoretically different from those of the original texts, if the latter exist, and this stability is not universal across the different data sets. Given this result, it is doubtful that V is convergent across languages.

In contrast, h1 is mathematically proven to be convergent given infinite-length randomized data, but to larger values than those of the original texts, as mentioned in Section 3.2. The middle two graphs of Figure 2 show the results for h1. The majority of the plots do not reach convergence even at the largest data sizes, but for certain results with characters, especially in the Roman alphabet, the plots seem to approach a convergent value (middle right). All the plots can be extrapolated to converge to a certain entropy rate above zero, although these values are larger than the convergent values (if they ever exist) of the real data. These results confirm the difficulty of judging whether the entropy rates of the original texts are convergent and whether they remain above zero.

Lastly, it is easy to see that H2 is convergent for a randomized text (bottom two graphs), and the convergent values are the same for the cases with and without randomization. In fact, the randomized plots converge to exactly the same points faster and more stably, which shows the effect of randomization. As for the other randomization options, by sentences and documents, the findings, both the tendencies of the lines and the changes in the values, are situated in the middle of what we have seen so far: the less complete the randomization, from sentence-level to document-level shuffling, the more the plots fluctuate like the real data.

Returning to inspection of the remaining real data, Figure 3 shows V, h1, and H2 in terms of words for the small-scale corpora (left column) and for the programming language texts (right column). For the small-scale corpora, in general, the plots are bunched together, and the results shared the tendencies noted previously for the large-scale corpora. V again showed an increase, while h1 showed a tendency to decrease. H2 converged rapidly and was already almost stable at 10^4 tokens. This again shows how H2 exhibits stable constancy, especially with texts written by single authors.
As for the programming language results, the plots fluctuate more than for the natural language texts because of the redundancy within the program sources. Still, the global tendencies noted so far were discernible. V had relatively larger values, but h1 and H2 had smaller values for programs, as compared to the natural language texts. The differences in value indicate the larger degree of repetitiveness in programs.

Lastly, Figure 4 shows the Hα results for the Wall Street Journal in terms of words in unigrams (Enews-w). The horizontal axis indicates the corpus size, and the vertical axis indicates the approximated entropy value. The different lines represent the results for Hα with α = 1, 2, 3, 4. The two H1 plots represent calculations with and without Laplace smoothing (Manning and Schuetze 1999). We can see that without smoothing, H1 increased, as Tweedie and Baayen (1998) reported, but in contrast to their conclusion, we observe a tendency of convergence for larger-scale data. The increase was due to the influence of low-frequency vocabulary pushing up the entropy. The opposite tendency, to decrease, was observed for the smoothed probabilities, with the plot eventually converging to the same point as that for the unsmoothed H1 values. The convergence was by far slower for H1 as compared with that for H2, H3, and H4, which all had attained convergence already at 10^2 tokens. The convergent values naturally decreased for larger α, although the amount of decrease itself rapidly diminished with larger α.

In answer to Questions 1 and 2 raised in the Introduction (which measures show constancy, with sufficient convergence speed), the empirical conclusion from our data is that Hα with α > 1 showed stable constancy when the values were approximated using relative frequencies. For H1, the convergence was much slower because of the strong influence of low-frequency words. Consequently, the constancy of Hα with α > 1 is attained by representing the gross complexity underlying a text. A minimal sketch of this Hα calculation, with and without smoothing, is given below.
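A sketch of the Hα calculation under relative frequencies might look as follows. The smoothing variant adds a constant only over the observed vocabulary, which is one simple reading of "Laplace smoothing"; the authors' exact smoothing scheme is not specified here, and corpus.txt is a placeholder file name.

```python
from collections import Counter
from math import log

def h_alpha(tokens, alpha, smooth=0.0):
    # Approximated Rényi entropy (Equation (11)) over unigram relative
    # frequencies; alpha=1 falls back to the Shannon entropy H1.
    # `smooth` adds a constant to each observed count (our simple
    # reading of Laplace smoothing over the observed vocabulary).
    counts = Counter(tokens)
    total = sum(counts.values()) + smooth * len(counts)
    probs = [(c + smooth) / total for c in counts.values()]
    if alpha == 1:
        return -sum(p * log(p) for p in probs)
    return log(sum(p ** alpha for p in probs)) / (1.0 - alpha)

tokens = open('corpus.txt').read().split()  # placeholder corpus
for a in (1, 2, 3, 4):
    print(a, h_alpha(tokens, a))
print('H1, smoothed:', h_alpha(tokens, 1, smooth=1.0))
```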
Now we turn to Question 3 raised in the Introduction and examine the discriminatory power of H2. As Yule intended, does H2 identify authors? Given the influence of different writing systems, as seen previously in Figure 1, we examine the relation between H2 and the number of distinct tokens (the alphabet/vocabulary size). Note that because this number corresponds to H0 in Equation (11), this analysis effectively considers texts on the H0-H2 plane. Since H0 grows according to the text size, unlike H2, the same text size must be used for all corpora in order to meaningfully compare H0 values. (Because H0 is not convergent, the horizontal locations remain unstable, unless the tokens are of a phonographic alphabet. In other words, for all word-based results and character-based results not based on a phonographic alphabet, the resulting horizontal locations are changed by increasing the corpus size. As for the random data, the H0 values are convergent, because these data sets have a finite number of distinct tokens. Since H0 is measured only for the first 10^4 tokens, however, the horizontal locations are underestimated, especially for random data following a Zipf distribution.) Given that H2 converges fast, we chose a size of 10^4 tokens to handle all of the small- and large-scale corpora. For each of the corpora listed in Table 1 and the second kind of random corpora explained at the end of Section 3.2, Figure 5 plots the values of H2 (vertical axis) and the number of distinct tokens H0 (horizontal axis) measured for each corpus at a size of 10^4 tokens; a minimal sketch of how such a point is computed follows below.

The three large circles are groupings of points. The leftmost group represents news sources in alphabetic characters. All of the romanized Chinese, Japanese, and Arabic texts are located almost at the same vertical location as the English text. This indicates the difficulty for H2 to distinguish natural languages if measured in terms of alphabetic characters. The middle group represents the programming language texts in terms of words. This group is located separately (vertically lower than the natural language corpora in terms of words), so H2 is likely to distinguish between natural languages and programming languages. The rightmost group represents the small-scale corpora. Considering the proximity of these points despite the variety of the content, it is unlikely that H2 can distinguish authors, in contrast to Yule's hope. Still, these points are located lower than those for news text. Therefore, H2 has the potential to distinguish genre or maybe writing style.
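One way to compute such a point is sketched below (our reading, not the authors' code): both coordinates are taken from the first 10^4 tokens, with H0 computed as the logarithm of the number of distinct tokens per Equation (11). If the figure instead plots the raw count on a log axis, the two presentations differ only by scaling.

```python
from collections import Counter
from math import log

def h0_h2_point(tokens, size=10**4):
    # One point on the H0-H2 plane of Figure 5: both coordinates are
    # measured on the first `size` tokens only. Per Equation (11),
    # H0 is taken here as the log of the number of distinct tokens.
    chunk = tokens[:size]
    n = len(chunk)
    counts = Counter(chunk)
    h0 = log(len(counts))
    h2 = -log(sum((c / n) ** 2 for c in counts.values()))
    return h0, h2
```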
As for the random data, the natural language texts located near the line for a Zipf exponent of 0.8 are those of the non-alphabetic writing systems. (Note that here we use the values of the Zipf exponent for the random data, and not the estimated exponents for the real data. The rank-frequency distributions of characters, especially for phonetic alphabets, often do not follow a power law.) Note that Chinese characters have morphological features, and the Arabic and Thai languages also have flexibility in terms of which units are considered words and morphemes. In other words, the plots closer to the random data with a smaller Zipf exponent are for language corpora of morphemic sequences. The group of plots measured for phonographic scripts is located near the line for a Zipf exponent of 1.0 (the grouping of points in the leftmost circle), which could suggest that morphemes are more randomized units than words.

The nature of unknown scripts can also be considered through our understanding thus far. Figure 5 includes plots for the Voynich manuscript in terms of words and characters, and for the Rongorongo script in terms of characters. Like all the data seen in this figure, the points are placed at the H2 values (vertically) for the number of distinct tokens (horizontally) at the specified size of 10^4 tokens, with the exception of Voynich-A in terms of words. Because this corpus consists of fewer than 10^4 words (refer to the data length by tokens listed for VoynichA-w in Table 1), its point is located horizontally at the vocabulary size corresponding to the corpus's maximum size. For the two Voynich manuscript parts, the plots in terms of words appear near the Arabic corpus for words (Abook-w). For characters, on the other hand, the plots are at the leftmost end of the figure. This was due to overestimation of the total number of characters for the alphabetic texts (e.g., both English and other, romanized language texts), since all ASCII characters, such as colons, periods, and question marks, are counted. Still, the H2 values are located almost at the same position as for the other romanized texts, indicating that the Voynich manuscript has approximately similar complexity. These results suggest the possibility that the Voynich manuscript could have been generated from a source in natural language, possibly written in some script of the abjad type. This supports previous findings (Reddy and Knight 2011; Montemurro and Zanette 2013), which reported the possibility of the Voynich manuscript being in a natural language and the coincidence of its word length distribution with that of Arabic.

On the other hand, the plots for the Rongorongo script appear near the line for a Zipf exponent of 0.8, with RongoA near Arabic in terms of words but RongoB somewhat further down from Japanese in terms of characters. The status of Rongorongo as natural language has been controversial (Pozdniakov and Pozdniakov 2007). Both points in the graph, however, are near many other natural language texts (and not widely separated), making it reasonable to hypothesize that Rongorongo is indeed natural language. The characters can be deemed morphologically rich, because both plots are close to the line for a Zipf exponent of 0.8. In the case of RongoA, for which a character was considered inclusive of all parts (i.e., including accents and ornamental parts), the morphological richness is comparable to that of the words of an abjad script. On the other hand, when considering the different character parts as distinct (RongoB), the location drifts towards the plot for Thai, a phonographic script, in terms of characters. Therefore, the Rongorongo script could be considered basically morphemic, with some parts functioning phonographically. This conclusion again supports a previous hypothesis proposed by a domain specialist (Pozdniakov and Pozdniakov 2007).

This analysis of two unknown scripts supports previous conjectures. Our results, however, only add a small bit of evidence to those conjectures; clearly, reaching a reasonable conclusion would require further study. Moreover, the analysis of unknown scripts introduced here could provide another possible application of text constancy measures, from a broader context.

5 conclusion :We have discussed text constancy measures, whose values are invariant across different sizes of text, for a given text. Such measures have a 70-year history, since Yule originally proposed K as a text characteristic, potentially with language engineering utility for problems such as author identification. We consider text constancy measures today to have scientific importance in understanding language universals from a computational view. After overviewing measures proposed so far and previous studies on text constancy, we explained how K essentially has a mathematical equivalence to the Rényi higher-order entropy. We then empirically examined various measures across different languages and kinds of corpora. Our results showed that only the approximated higher-order Rényi entropy exhibits stable, rapid constancy. Examining the nature of the convergent values revealed that K does not possess the discriminatory power of author identification as Yule had hoped. We also applied our understanding to two unknown scripts, the Voynich manuscript and Rongorongo, and showed how our constancy results support previous hypotheses about each of these scripts. Our future work will include application of K to other kinds of data besides natural language. There, too, we will consider the questions raised in the Introduction, of whether K converges and of how discriminatory it is. We are especially interested in considering the relation between the value of K and the meaningfulness of data.

abstract :This article presents a mathematical and empirical verification of computational constancy measures for natural language text.
A constancy measure characterizes a given text by having an invariant value for any size larger than a certain amount. The study of such measures has a 70-year history dating back to Yule's K, with the original intended application of author identification. We examine various measures proposed since Yule and reconsider reports made so far, thus overviewing the study of constancy measures. We then explain how K is essentially equivalent to an approximation of the second-order Rényi entropy, thus indicating its signification within language science. We then empirically examine constancy measure candidates within this new, broader context. The approximated higher-order entropy exhibits stable convergence across different languages and kinds of text. We also show, however, that it cannot identify authors, contrary to Yule's intention. Lastly, we apply K to two unknown scripts, the Voynich manuscript and Rongorongo, and show how the results support previous hypotheses about these scripts.

authors :Kumiko Tanaka-Ishii and Shunsuke Aihara

id :SP:54cf25939e5f7b648f0a97837673bc1f8e10a4be

references :
Barthel, T. 2013. The Rongorongo of Easter Island: Thomas Barthel's transliteration system. Available at http://kohaumotu.org/rongorongo_org/corpus/codes.html. Accessed June 2015.
Bell, T.C., J.G. Cleary, and I.H. Witten. 1990. Text Compression. Prentice Hall.
Bromiley, P.A., N.A. Thacker, and E. Bouhova-Thacker. 2010. Shannon entropy, Renyi entropy, and information. Available at http://www.tina-vision.net/docs/memos/2004-004.pdf.
Brown, P.F., S.A. Della Pietra, V.J. Della Pietra, J.C. Lai, and R.L. Mercer. 1992. An estimate of an upper bound for the entropy of English. Computational Linguistics, 18(1):31-40.
Brunet, E. 1978. Vocabulaire de Jean Giraudoux: Structure et Evolution. Slatkine.
Cover, T. and R. King. 1978. A convergent gambling estimate of the entropy of English. IEEE Transactions on Information Theory, 24(4):413-421.
Cover, T.M. and J.A. Thomas. 2006. Elements of Information Theory. Wiley-Interscience.
Dȩbowski, Ł. 2009. A general definition of conditional information and its application to ergodic decomposition. Statistics and Probability Letters, 79(9):1260-1268.
Dȩbowski, Ł. 2013. Empirical evidence for Hilberg's conjecture in single author texts. In Methods and Applications of Quantitative Linguistics: Selected Papers of the 8th International Conference on ...
Dȩbowski, Ł. 2014. The relaxed Hilberg conjecture: A review and new experimental support. Available at http://www.ipipan.waw.pl/ldebowsk/. Accessed June 2015.
Dugast, D. 1979. Vocabulaire et Stylistique. I Théâtre et Dialogue. Slatkine-Champion, Travaux de Linguistique Quantitative.
Farach, M., M. Noordewier, S. Savari, L. Shepp, A. Wyner, and J. Ziv. 1995. On the entropy of DNA: Algorithms and measurements based on memory and rapid convergence. In Proceedings of the ...
Genzel, D. and E. Charniak. 2002. Entropy rate constancy in text. In Annual Meeting of the ACL, pages 199-206, Philadelphia, PA.
Golcher, F. 2007. A stable statistical constant specific for human language texts. In Recent Advances in Natural Language Processing, Borovets.
Good, I.J. 1953. The population frequencies of species and the estimation of population parameters. Biometrika, 40(3-4):237-264.
Grassberger, P. 1989. Estimating the information content of symbol sequences and efficient codes. IEEE Transactions on Information Theory, 35:669-675.
Guiraud, H. 1954. Les Charactères Statistique du Vocabulaire. Universitaires de France Press.
Gusfield, D. 1997. Algorithms on Strings, Trees, and Sequences: Computer Science and Computational Biology. Cambridge University Press.
Herdan, G. 1964. Quantitative Linguistics. Butterworths.
Hilberg, W. 1990. Der bekannte Grenzwert der redundanzfreien Information in Texten – eine Fehlinterpretation der Shannonschen Experimente? Frequenz, 44(9-10):243-248.
Honoré, A. 1979. Some simple measures of richness of vocabulary. Association for Literary and Linguistic Computing Bulletin, 7:172-177.
Kimura, D. and K. Tanaka-Ishii. 2011. A study on constants of natural language texts. Journal of Natural Language Processing, 18(2):119-137.
Kimura, D. and K. Tanaka-Ishii. 2014. A study on constants of natural language texts. Journal of Natural Language Processing, 21:877-895. Special issue of awarded papers. [The English translated version ...]
Kitchens, B. 1998. Symbolic Dynamics: One-sided, Two-sided and Countable State Markov Shifts. Springer.
Levy, R. and T.F. Jaeger. 2007. Speakers optimize information density through syntactic reduction. In Annual Conference on Neural Information Processing Systems.
Maas, H.D. 1972. Zusammenhang zwischen Wortschatzumfang und Länge eines Textes [Relationship between vocabulary and text length]. Zeitschrift für Literaturwissenschaft und Linguistik, 8:73-70.
Mandelbrot, B. 1953. An informational theory of the statistical structure of language. In Communication Theory, pages 486-500.
Manning, C. and H. Schuetze. 1999. Foundations of Statistical Natural Language Processing. MIT Press.
Montemurro, M. and D. Zanette. 2013. Keywords and co-occurrence patterns in the Voynich Manuscript: An information-theoretic analysis. PLOS One, doi:10.1371/journal.pone.0066344.
Orliac, C. 2005. The Rongorongo tablets from Easter Island: Botanical identification and 14C dating. Archaeology in Oceania, 40(3):115-119.
Orlov, J.K. and R.Y. Chitashvili. 1983. Generalized Z-distribution generating the well-known 'rank-distributions'. Bulletin of the Academy of Sciences of Georgia, 110:269-272.
Pozdniakov, K. and I. Pozdniakov. 2007. Rapanui writing and the Rapanui language: Preliminary results of a statistical analysis. Forum for Anthropology and Culture, 3:3-36.
Reddy, S. and K. Knight. 2011. What we know about the Voynich Manuscript. In ACL Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities, Portland, OR.
Rényi, A. 1960. On measures of entropy and information. In Proceedings of the Fourth Berkeley Symposium on Mathematics, ...
Rényi, A. 1970. Foundations of Probability. Dover Publications.
Schümann, T. and P. Grassberger. 1996. Entropy estimation of symbol sequences. Chaos, 6(3):414-427.
Shannon, C. 1948. A mathematical theory of communication. Bell System Technical Journal, 27:379-423, 623-656.
Shields, P.C. 1992. Entropy and prefixes. Annals of Probability, 20(1):403-409.
Sichel, H.S. 1975. On a distribution law for word frequencies. Journal of the American Statistical Association, 70(351):542-547.
Simpson, E.H. 1949. Measurement of diversity. Nature, 163:688.
Stamatatos, E., N. Fakotakis, and G. Kokkinakis. 2001. Automatic text categorization in terms of genre and author. Computational Linguistics, 26(4):471-495.
Stein, B., N. Lipka, and P. Prettenhofer. 2010. Intrinsic plagiarism analysis. Language Resources and Evaluation, 45(1):63-82.
Teh, Y.W. 2006. A hierarchical Bayesian language model based on Pitman-Yor processes. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL.
Tuldava, J. 1977. Quantitative relations between the size of the text and lexical richness. SMIL Quarterly, Journal of Linguistic Calculus, 4:28-35.
Tweedie, F.J. and R.H. Baayen. 1998. How variable may a constant be? Measures of lexical richness in perspective. Computers and the Humanities, 32:323-352.
Yule, G.U. 1944. The Statistical Study of Literary Vocabulary. Cambridge University Press.
Zipf, G.K. 1965. Human Behavior and the Principle of Least Effort: An Introduction to Human Ecology. Hafner, New York.

acknowledgments :This research was supported by JST's PRESTO program.

1 introduction :A constancy measure for a natural language text is defined, in this article, as a computational measure that converges to a value for a certain amount of text and remains invariant for any larger size. Because such a measure exhibits the same value for any size of text larger than a certain amount, its value could be considered as a text characteristic. The concept of such a text constancy measure was introduced by Yule (1944) in the form of his measure K. Since Yule, there has been a continuous quest for such measures, and various formulae have been proposed. They can be broadly categorized into three types, namely, those measuring (1) repetitiveness, (2) power law character, and (3) complexity.

(∗ Kyushu University, 744 Motooka Nishiku, Fukuoka City, Fukuoka, Japan. E-mail: kumiko@ait.kyushu-u.ac.jp. ∗∗ JST-PRESTO, 4-1-8 Honcho, Kawaguchi, Saitama 332-0012, Japan. † Gunosy Inc., 6-10-1 Roppongi, Minato-ku, Tokyo, Japan. Submission received: 11 July 2013; revised version received: 17 February 2015; accepted for publication: 18 March 2015. doi:10.1162/COLI_a_00228. © 2015 Association for Computational Linguistics.)

Yule's original intention for K's utility lay in author identification, assuming that it would differ for texts written by different authors. State-of-the-art multivariate machine learning techniques are powerful, however, for solving such language engineering tasks, and Yule's K is used in them only as one variable among many, as reported in Stamatatos, Fakotakis, and Kokkinakis (2001) and Stein, Lipka, and Prettenhofer (2010). We believe that constancy measures today, however, have greater importance in understanding the mathematical nature of language. Although mathematical models of language have been studied in the computational linguistics milieu, via Markov models (Manning and Schuetze 1999), Zipf's law and its modifications (Mandelbrot 1953; Zipf 1965; Bell, Cleary, and Witten 1990), and, more recently, Pitman-Yor models (Teh 2006), the true mathematical model of linguistic processes is ultimately unknown. Therefore, the convergence of a constancy measure must be examined through empirical verification. Because some constancy measures have a mathematical theory of convergence for a known process, discrepancies in the behavior of real linguistic data from such a theory would shed light on the nature of linguistic processes and give hints towards improving the mathematical models. Furthermore, as one application, a convergent measure would allow for comparison of different texts through a common, stable norm, provided that the measure converges for a sufficiently small amount of text. One of our goals is to discover a non-trivial measure with a certain convergence speed that distinguishes the different natures of texts.

The objective of this article is thus to provide a potential explanation of what the study of constancy measures over 70 years has been about, by answering the three following questions mathematically and empirically: Question 1 Does a measure exhibit constancy? Question 2 If so, how fast is the convergence speed?
Question 3 How discriminatory is the measure? We seek answers by first showing the meaning of Yule's K in relation to the Rényi higher-order entropy, and by then empirically examining constancy across large-scale texts of different kinds. We finally provide an application by considering the natures of two unknown scripts, the Voynich manuscript and Rongorongo, in order to show the possible utility of a constancy measure.

The most important and closest previous work was reported in Tweedie and Baayen (1998), the first paper to have examined the empirical behavior of constancy measures on real texts. The authors used English literary texts to test constancy measure candidates proposed prior to their work. Today, the coverage and abundance of language corpora allow us to conduct a larger-scale investigation across multiple languages. Recently, Golcher (2007) tested his measure V (discussed later in this paper) with Indo-European languages and also programming language sources. Our papers (Kimura and Tanaka-Ishii 2011, 2014) also precede this one, presenting results preliminary to this article but with only part of our data, and neither of those provides mathematical analysis with respect to the Rényi entropy. Compared with these previous reports, our contribution here can be summarized as follows: Our work elucidates the mathematical relation of Yule's K to Rényi's higher-order entropy and explains why K converges. Our work vastly extends the corpora used for empirical examination in terms of both size and language. Our work compares the convergent values for these corpora. Our work also presents results for unknown language data, specifically from the Voynich manuscript and Rongorongo. We start by summarizing the potential constancy measures proposed so far.

2 constancy measures :The measures proposed so far can broadly be categorized into three types, calculating the repetitiveness, power-law distribution, or complexity of text. This section mathematically analyzes these measures and summarizes them. The study of text constancy started with proposals for simple text measures of vocabulary repetitiveness. The representative example is Yule's K (Yule 1944), while Golcher recently proposed V as another candidate (Golcher 2007).

2.1.1 Yule's K. To the best of our knowledge, the oldest mention of constancy values was made by Yule with his notion of K (Yule 1944). Let N be the total number of words in a text, V(N) be the number of distinct words, V(m, N) be the number of words appearing m times in the text, and mmax be the largest frequency of a word. Yule's K is then defined as follows, through the first and second moments of the vocabulary population distribution of V(m, N), where $S_1 = N = \sum_m m V(m, N)$ and $S_2 = \sum_m m^2 V(m, N)$ (Yule 1944; Herdan 1964):

$$K = C \, \frac{S_2 - S_1}{S_1^2} = C \left[ -\frac{1}{N} + \sum_{m=1}^{m_{max}} V(m, N) \left( \frac{m}{N} \right)^2 \right] \quad (1)$$

where C is a constant that enlarges the value of K, defined by Yule as C = 10^4. K is designed to measure the vocabulary richness of a text: The larger Yule's K, the less rich the vocabulary is. The formula can be intuitively understood from the main term of the sum. Because the squared relative frequency (m/N)^2 indicates the degree of recurrence of a word, the sum of such degrees over all words is small if the vocabulary is rich, or large in the opposite case. A direct implementation of Equation (1) through the moments S1 and S2 is sketched below.
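A direct reading of Equation (1) in Python might be the following sketch, computing K through the frequency spectrum V(m, N) and its first and second moments.

```python
from collections import Counter

def yules_k_moments(tokens, C=10**4):
    # Yule's K through the moments of the frequency spectrum V(m, N):
    # S1 = sum_m m * V(m, N) = N, S2 = sum_m m^2 * V(m, N),
    # and K = C * (S2 - S1) / S1^2, as in Equation (1).
    freq_of_freq = Counter(Counter(tokens).values())   # m -> V(m, N)
    s1 = sum(m * v for m, v in freq_of_freq.items())   # equals N
    s2 = sum(m * m * v for m, v in freq_of_freq.items())
    return C * (s2 - s1) / (s1 * s1)

print(yules_k_moments("a b r a c a d a b r a".split()))
```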
Another simple example can be given in terms of S2 in this formula. Suppose a text is 10 words long: if each of the 10 tokens is distinct (high diversity), then S2 = 1 × 1 × 10 = 10; whereas, if all of the 10 tokens are identical (low diversity), then S2 = 10 × 10 × 1 = 100.

Measures that are slightly different from but essentially equivalent to Yule's K have appeared here and there. For example, Herdan defined Vm as follows (Herdan 1964, pp. 67, 79):

$$V_m = \sqrt{\sum_{m=1}^{m_{max}} V(m, N) \left( \frac{m}{N} \right)^2 - \frac{1}{V(N)}}$$

Likewise, Simpson (1949) derived the following formula as a measure to capture the diversity of a population:

$$D = \sum_{m=1}^{m_{max}} V(m, N) \, \frac{m}{N} \cdot \frac{m - 1}{N - 1}$$

which is equivalent to Yule's K, as Simpson noted.

2.1.2 Other Measures Based on Simple Text Statistics. Apart from Yule's K, various measures have been proposed from simple statistical observation of text, as detailed in Tweedie and Baayen (1998). One genre is based on the so-called token-type relation (i.e., the ratio of the vocabulary size V(N) and the text size N, in log), as formulated by Guiraud (1954) and Herdan (1964) as a law. Because this simple ratio is not stable, the measure was modified numerous times to formulate Herdan's C (Herdan 1964), Dugast's k and U (Dugast 1979), Maas's a2 (Maas 1972), Tuldava's LN (Tuldava 1977), and Brunet's W (Brunet 1978). Another genre of measures concerns the proportion of hapax legomena, that is, V(1, N). Honoré noted that V(1, N) increases linearly with respect to the log of a text's vocabulary size V(N) (Honoré 1979). Another ratio, of V(2, N) to V(N), was proposed as a text characteristic by Sichel (1975) and Maas (1972). Each of these values, however, was found not to be convergent according to the extensive study conducted by Tweedie and Baayen (1998). In common with Yule's intention to apply such measures for author identification, they examined all of the measures discussed here, in addition to two measures explained later: Orlov's Z and the Shannon entropy upper bound obtained from the relative frequencies of unigrams. They examined these measures with English novels (such as Alice's Adventures in Wonderland) and empirically found that only Yule's K and Orlov's Z were convergent. Given their report, we consider K the only true candidate among the constancy measures examined so far.

2.1.3 Golcher's V. Golcher's V is a string-based measure calculated on the suffix tree of a text (Golcher 2007). Letting the length of the string be N and the number of inner nodes of the (Patricia) suffix tree (Gusfield 1997) be k, V is defined as:

$$V = \frac{k}{N} \quad (2)$$

Golcher empirically showed how this measure converges to almost the same value across Indo-European languages for about 30 megabytes of data. He also showed how the convergent values differ from those calculated for programming language texts. Golcher explains in his paper that the possibility of constancy of V does not yet have mathematical grounding and has only been shown empirically. He does not report values for texts larger than about 30 megabytes, nor for those of non-Indo-European languages. A simple conjecture on this measure is that because a suffix tree for a string of length N has at most N − 1 inner nodes, V must end up at some value 0 ≤ V < 1 for any given text. Our group tested V with larger-scale data and concluded that V could be a constancy measure, although we admitted to observing a gradual increase (Kimura and Tanaka-Ishii 2014). Because V requires further verification on larger-scale data before ruling it out, we include it as a constancy measure candidate. A naive sketch of how V can be computed is given below.
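As a naive sketch (not Golcher's implementation), the inner nodes of the Patricia suffix tree can be counted as substrings followed by at least two distinct characters in the text. The quadratic-space version below ignores the end-of-string sentinel, so it can slightly undercount, and it is practical only for short strings.

```python
def golcher_v(text):
    # Golcher's V = k/N (Equation (2)): k counts the branching
    # substrings, which correspond to the inner nodes of the
    # (Patricia) suffix tree, up to the missing sentinel.
    followers = {}
    n = len(text)
    for i in range(n):
        for j in range(i + 1, n):
            # substring text[i:j] is followed by the character text[j]
            followers.setdefault(text[i:j], set()).add(text[j])
    k = sum(1 for nexts in followers.values() if len(nexts) >= 2)
    return k / n

print(golcher_v("abracadabra"))
```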
Since Zipf (1965), power laws have been reported as an underlying statistical characteristic of text. The famous Zipf's law is defined as:

$$f(n) \propto n^{-\gamma} \quad (3)$$

where γ ≈ 1 and f(n) is the frequency of the nth most frequent word in a text. Various studies have sought to explain mathematically how the exponent could differ depending on the kind of text. To the best of our knowledge, however, there has been a limited number of reports related to text constancy. An exception is the study on Orlov's Z (Orlov and Chitashvili 1983). Orlov and Chitashvili attempted to obtain explicit mathematical forms for V(N) and V(m, N) by more finely considering the long tails of vocabulary distributions for which Zipf's law does not hold. They obtained these forms through a parameter Z, defined as the potential text length minimizing the square error of the estimated V(m, N) with respect to its actual value, as follows:

$$Z = \arg\min_N \frac{1}{m_{max}} \sum_{m=1}^{m_{max}} \left\{ \frac{E[V(m, N)] - V(m, N)}{V(N)} \right\}^2 \quad (4)$$

Thus defining Z, they mathematically deduced for V(N) the following formula:

$$V(N) = \frac{Z}{\log(m_{max} Z)} \, \frac{N}{N - Z} \, \log\left(\frac{N}{Z}\right) \quad (5)$$

Two ways to obtain Z can be formulated through approximation: one through Good-Turing smoothing (Good 1953), which assumes Zipf's law to hold, and the other using Newton's method. Tweedie and Baayen showed how the value of Z is stable at the size of an English novel by a single author and thus suggested that it could form a text characteristic. The empirical results, however, were not significantly convergent with respect to text size, and, moreover, Tweedie and Baayen provided their results without giving an estimation method (Tweedie and Baayen 1998). Calculation using Good-Turing smoothing, which is derived directly from Zipf's law, would cause Z to converge, but this does not take Orlov's original intention into consideration. Alternatively, our group (Kimura and Tanaka-Ishii 2014) verified Z through Newton's method by setting g(Z) = 0, where g(Z) is the following function:

$$g(Z) = \frac{Z}{\log(m_{max} Z)} \, \frac{N}{N - Z} \, \log\left(\frac{N}{Z}\right) - V(N) \quad (6)$$

We also showed how the value of Z increases rapidly when the text size is larger than 10 megabytes. A root-finding sketch for this equation is given at the end of this subsection.

The major problem with measures based on power laws lies in the skewed head and tail of the vocabulary population distribution. Because these exceptions constitute important parts of the population, parameter estimation by fitting to Equation (3) is sensitive to the estimation method. For example, the estimated value of the exponent for Zipf's law depends on the method used for dealing with these exceptions. We tested several simple methods of estimating the Zipf law's exponent γ with different ways of handling the head and tail of a distribution. There were settings that led to convergence, but the convergence depended on the settings. Such difficulty could be one reason why there has been no direct proposal for γ as a text constancy measure. Hence, due care must be taken in relating text constancy to a power law. We chose another path by considering text constancy through a random Zipf distribution, as described later in the experimental section.
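A sketch of this root finding follows; for simplicity it uses a bracketing solver (SciPy's brentq) rather than a hand-rolled Newton iteration, and the input statistics are hypothetical numbers chosen so that g changes sign over the bracket.

```python
from math import log
from scipy.optimize import brentq

def orlov_z(N, V_N, m_max):
    # Solve g(Z) = 0 (Equation (6)) for Orlov's Z, where g(Z) is
    # Equation (5) minus the observed vocabulary size V(N).
    def g(Z):
        return (Z / log(m_max * Z)) * (N / (N - Z)) * log(N / Z) - V_N
    # brentq requires g to have opposite signs at the bracket ends.
    return brentq(g, 2.0, N - 1.0)

# Hypothetical statistics: 100,000 tokens, 3,000 distinct words,
# most frequent word occurring 5,000 times.
print(orlov_z(N=100_000, V_N=3_000, m_max=5_000))
```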
With respect to measures based on complexity, multiple reports have already examined the Shannon entropy (Shannon 1948; Cover and Thomas 2006). In addition, we introduce the Rényi higher-order entropy (Rényi 1960) as another possible measure.

2.3.1 Shannon Entropy Upper Bound. Let X be the random variable of a sequence $X = X_1, X_2, \ldots, X_i, \ldots$, where $X_i$ represents the ith element of X: $X_i = x \in \mathcal{X}$, and where $\mathcal{X}$ represents a given set (e.g., a set of words or characters) whose members constitute the sequence. Let $X_i^j$ (i < j) denote the random variable indicating the subsequence $X_i, X_{i+1}, X_{i+2}, \ldots, X_j$. Let P(X) indicate the probability function of a sequence X. The Shannon entropy is then defined as:

$$H(X) = -\sum_{X} P(X) \log P(X) \quad (7)$$

Tweedie and Baayen directly calculated an approximation of this formula in terms of the relative frequencies (for P) of unigrams (for X), and they concluded that the measure would continue increasing with respect to text size and would not converge for short, literary texts (Tweedie and Baayen 1998). Because we are interested in the measure's behavior on a larger scale, we replicated their experiment, as discussed later in the section on empirical constancy. We denote this measure as H1 in this article.

Apart from that report, many have studied the entropy rate, defined as:

$$h^* = \lim_{n \to \infty} \frac{H(X_1^n)}{n} \quad (8)$$

Theoretically, the behavior of the entropy rate with respect to text size has been controversial. On the one hand, there have been indications of entropy rate constancy (Genzel and Charniak 2002; Levy and Jaeger 2007). These reports argue that the entropy rate of natural language could be constant. Due to the inherent difficulty of obtaining the true value of h∗ from a text, however, these arguments are based only on indirect clues with respect to convergence. On the other hand, Hilberg conjectured a decrease in the human conditional entropy, as follows (Hilberg 1990):

$$H(X_n \mid X_1^{n-1}) \propto n^{-1+\beta}$$

He obtained this through an examination of Shannon's original experimental data and suggested that β ≈ 0.5. From this formula, Dȩbowski induces that $H(X_1^n) \propto n^{\beta}$ and that the entropy rate can be formulated generally as follows (Dȩbowski 2014):

$$\frac{H(X_1^n)}{n} \approx A n^{-1+\beta} + h^* \quad (9)$$

Note that at the limit of n → ∞, this rate goes to h∗, a constant, provided that β < 1.0. Hilberg's conjecture is deemed compatible with entropy rate constancy at its asymptotic limit, provided that h∗ > 0 holds. (According to Dȩbowski (2009), h∗ = 0 suggests that the next element of a linguistic process is deterministic, that is, a function of the corpus observed before, under the two conditions that (1) the number of possible choices for the element is finite, and (2) the corpus observed before is infinite. In reality, the finiteness of linguistic sequences has the opposite tendency: the size of the observed corpus is finite, and the possible vocabulary size is infinite.) We are therefore interested in whether this h∗ forms a text characteristic, and if so, whether h∗ > 0.

Empirically, many have attempted to calculate the upper bound of the entropy rate. Brown's report (Brown et al. 1992) is representative in showing a good estimation of the entropy rate for English from texts, as compared with values obtained from humans (Cover and King 1978). Subsequently, there have been important studies on calculating the entropy rate, as reported thoroughly in Schümann and Grassberger (1996). The questions related to h∗, however, remain unsolved. Recently, Dȩbowski used a Lempel-Ziv compressor and examined Hilberg's conjecture for texts by single authors (Dȩbowski 2013). He showed an exponential decrease in the entropy rate with respect to text size, supporting the validity of Equation (9). Following these previous works, we examine the entropy rate by using an algorithm proposed by Grassberger (1989) and later by Farach et al. (1995). This method is based on universal coding. The algorithm has a theoretical background of convergence to the true h∗, provided the sequence is stationary, but it has been proved by Shields (1992) to be inconsistent, that is, it does not converge to the entropy rate for certain non-Markovian processes. We still chose to apply this method, because it requires no arbitrary parameters for calculation and is applicable to large-scale data within a reasonable time.
The Grassberger algorithm (Grassberger 1989; Farach et al. 1995) can be summarized as follows. Consider a sequence X of length N. The maximum matching length $L_i$ is defined as:

$$L_i = \max \{ k : X_j^{j+k} = X_i^{i+k}, \ 1 \le j \le j + k \le i - 1 \}$$

In other words, $L_i$ is the length of the longest subsequence starting at position i that also appears, in full, before position i. If $\bar{L}$ is the average length of the $L_i$, given by

$$\bar{L} = \frac{1}{N} \sum_{i=1}^{N} L_i$$

then the method obtains the entropy rate h1 as

$$h_1 = \frac{\log_2 N}{\bar{L}} \quad (10)$$

Given the true entropy rate h∗, convergence has been mathematically proven for a stationary process, such that $|h^* - h_1| = o(1)$ when N → ∞. In this article, we consider this entropy rate h1 as a constancy measure candidate. A naive sketch of this estimator is given below.
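A naive quadratic-time sketch of this estimator follows; a production version would use a suffix structure for the matching, and the match-length definition follows our reading of the formula above.

```python
from math import log2

def grassberger_h1(seq):
    # Entropy rate estimate h1 = log2(N) / L-bar (Equation (10)).
    # L_i is the largest k such that the subsequence of length k
    # starting at i already occurred wholly before position i.
    N = len(seq)
    total = 0
    for i in range(1, N):
        best = 0
        for j in range(i):
            k = 0
            # keep the compared earlier occurrence within seq[0:i]
            while i + k < N and j + k < i and seq[j + k] == seq[i + k]:
                k += 1
            best = max(best, k)
        total += best
    l_bar = total / N  # must be > 0, i.e., the sequence needs repeats
    return log2(N) / l_bar

print(grassberger_h1(list("abracadabra" * 20)))
```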
2.3.2 Approximation of Rényi Entropy Hα. The Rényi entropy is a generalization of the Shannon entropy, defined as follows (Rényi 1960; Rényi 1970; Cover and Thomas 2006; Bromiley, Thacker, and Bouhova-Thacker 2010):

$$H_\alpha(X) = \frac{1}{1 - \alpha} \log \left( \sum_{X} P^\alpha(X) \right) \quad (11)$$

where α ≥ 0 and α ≠ 1. Hα(X) represents different ideas of sequence complexity for different α.
As for the small-scale corpora in the second block, the texts were considered only in terms of words, since verification via characters produced findings consistent with those obtained with the large-scale corpora. The texts were chosen because each was written by a single author but is relatively large. Here, we summarize our preprocessing procedures. For the annotated Thai NECTEC corpus, texts were tokenized according to the annotation. The preprocessing methods for the other corpora were as follows. English: NLTK (http://nltk.org) was used to tokenize text into words. Japanese: MeCab (http://mecab.googlecode.com/svn/trunk/mecab/doc/index.html) was used for tokenization, and KAKASI (http://kakasi.namazu.org) was used for romanization. Chinese: ICTCLAS2013 (http://ictclas.nlpir.org) was used for tokenization, and the pinyin Python library was used for pinyin romanization. Other European languages: the PunktWordTokenizer from NLTK was used for tokenization. All the other natural language corpora were tokenized simply using spaces. Following Golcher (2007), who first suggested testing constancy on programming languages, we also collected program sources from different languages (third block in Table 1). The programs were also considered solely in terms of words, not characters. C++ and Python were chosen to represent different abstraction levels, and Lisp was chosen because of its different ordering for function arguments. Source code was collected from language libraries. The programming language texts were preprocessed as follows. Comments in natural language were eliminated (although strings remained in the programs, where each was a literal token). Identical files and copies of sources in large chunks were carefully eliminated, although this process did not completely eliminate redundancy, since most programs reuse some previous code. Finally, the programs were tokenized according to the language specifications. (Lisp culture favors long, hyphenated variable names that can be almost as long as a sentence; for this work, therefore, Lisp variable names were tokenized by splitting at the hyphens.) The last block of the table lists two corpora of unknown scripts. We consider these scripts at the end of this article in Section 4.3, through Figure 5, to show one possible application of the text constancy measures. The first unknown script is that of the Voynich manuscript, a famous text that is undeciphered but hypothesized to have been written in natural language. This corpus is considered in terms of both characters and words, where words were defined via the white space separation in the original text. Given the common understanding that the manuscript seems to have two different parts (Reddy and Knight 2011), we separated it into two parts according to the Currier annotation (identified as A and B, respectively). The second corpus of unknown text consists of the Rongorongo script of Easter Island (Daniels and Bright 1996, Section 13; Orliac 2005; Barthel 2013). This script's status as natural language is debatable, but if so, it is considered to possess characteristics of both phonographs and ideograms (Pozdniakov and Pozdniakov 2007). Because there are several ways to consider what constitutes a character in this script (Barthel 2013), we calculate values for the two most extreme cases, as follows. For corpus RongoA-c, we consider a character inclusive of all adjoining parts (i.e., including accents and ornamental parts). On the other hand, for corpus RongoB-c, we separate the parts as reasonably as possible, among the multiple possible separation methods.
Because the unit of word in this script is unknown, the Rongorongo script is considered only in terms of characters. The empirical verification of convergence for real data is controversial. We must first note that it does not conform with the standard approach to statistical testing. In the domain of statistics, it is a common understanding that "convergence" cannot be tested. A statistical test raises two contrasting hypotheses—called the null and alternative hypotheses—and calculates a p-value indicating the probability of the observed data under the null hypothesis. When this p-value is smaller than a certain threshold, the null hypothesis is considered untenable and is thus rejected. For convergence, the null hypothesis corresponds to "not converging," and the alternative hypothesis to "converging." The problem here is that the null hypothesis is always related to the alternative hypothesis to a certain extent, because the difference between convergence and non-convergence is merely a matter of degree. In other words, the notion of convergence for a constancy measure does not conform with the philosophy of statistical testing. Convergence is therefore considered in terms of the distance from convergent values, or in terms of the error with respect to some parameter (such as data size). Such a distance cannot be calculated for real data, however, since the underlying mathematical model is unknown. To sum up, verification of the convergence of real data must be considered by some other means. Our proposal is to consider convergence in comparison to a set of random data whose generating process is known. For this random data, we considered two kinds. The first kind is used to examine data convergence in Section 4.1. This random data was generated from real data by shuffling the original text with respect to certain linguistic units. Tweedie and Baayen (1998) presented results by shuffling words, where the original texts were literary texts by single authors. Here, we generated random data by shuffling (1) words/characters, (2) sentences, or (3) documents. Because these options greatly increased the number of combinations of results, we mainly present the results with option (1) for large-scale data in this article. There are three reasons for this: convergence must be verified especially at large scale; the most important convergence findings for randomized small-scale data were already reported in Tweedie and Baayen (1998); and the results for options (2) and (3) were situated within the range spanned by option (1) and the original texts. Randomization of the words and characters of original texts destroys various linguistic characteristics, such as n-grams and long-range correlation. The convergence properties of the three measures V, h1, and H2 are as follows. The convergence of V is unknown, because it lacks a mathematical background. Even if the value of V did converge, the convergent value for randomized data would differ from that of the original text, since the measure is based on repeated n-grams in the text. h1 converges to the entropy rate of the randomized text, if the data size suffices. This is supported by the mathematical background of the algorithm, which converges to the true entropy rate for stationary data.
Even when h1 converges for random data, the convergent value will be larger than that of the original text, because h1 considers the probabilities of n-grams. Lastly, H2 converges to the same point for a randomized text and the original text, because it is the approximated higher-order entropy, such that words and characters are considered to occur independently. The second kind of random data is used to compare the convergent values of different texts for a constancy measure, as considered in Section 4.2. Random corpora were generated according to four different distributions: one uniform, and the other three following Zipf distributions with exponents of γ = 0.8, 1.0, and 1.3, respectively, for Equation (3). Because each set of real data consists of different numbers of distinct tokens, ranging from tens to billions, random data sets consisting of 2^n distinct tokens, for every n = 4, ..., 19, were randomly generated for each of the four distributions. We consider only the measures H2 and H0 for these data sets. Both of these measures have convergent values, given a sufficient data size. 4 experimental results :From the previous discussion, we applied the three measures V, h1, and H2 to five large-scale and eight small-scale natural language corpora, three programming language corpora, and two unknown script corpora, in terms of words and characters. Because there were many results for different combinations of measure, data, and token (word or character), this section is structured so that it best highlights our findings. Figures 1, 2, and 3 in this section can be examined in the following manner. The horizontal axis indicates the text size of each corpus, in terms of the number of tokens, on a log scale. Chunks of different text sizes were always taken from the head of the corpus. (For real data, this was done without any randomization of the order of texts for all corpora besides Atext-w and Atext-c. The Watan corpus is distributed not in the chronological order of the publishing dates, but as a set of articles grouped into categories, i.e., all articles of one category, then all articles of another category, and so on. Because of this, there is a large skew in the vocabulary distribution, depending on the section of the corpus. We thus randomly reshuffled the articles by categories for the whole corpus before taking chunks of different sizes, always from the beginning, to generate our results. Apart from this, we avoided any arbitrary randomization with respect to the original data summarized in Table 1.) The vertical axis indicates the values of the different measures: V, h1, or H2. Each figure contains multiple lines, each corresponding to a corpus, as indicated in the legends. First, we consider the results for the large-scale data. Figure 1 shows the different measures for words (left three graphs) and characters (right three graphs). We can see that V increased for both words and characters (top two graphs). Golcher tested his measure on up to 30 megabytes of text in terms of characters (Golcher 2007). We also observed a stable tendency up to around 10^7 characters. The increase in V became apparent, however, for larger text sizes. Thus, it is difficult to consider V a constancy measure. As for the results for h1 (middle graphs), both graphs show a gradual decrease. The tendency was clearer for words than for characters. For some corpora, especially for characters, it was possible to observe some values converging towards h∗. The overall tendency, however, could not be concluded as converging. This result suggests the difficulty of attaining convergence of the entropy rate, even with gigabyte-scale data. From the theoretical background of the Grassberger algorithm, the values would possibly converge with larger-scale data. The continued decrease could be due to multiple reasons, including the possibility of requiring far larger data than that used here, or a discrepancy between linguistic processes and the mathematical model assumed for the Grassberger algorithm. We tried to estimate h∗ by fitting Equation (9).
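Equation (9) itself is not reproduced in this excerpt, so the following Python sketch is purely illustrative: the ansatz h(N) = h* + A*N^(beta-1) is our stand-in assumption, the measurements are toy numbers, and only the mechanics of such an extrapolating fit are shown.

import numpy as np
from scipy.optimize import curve_fit

def ansatz(N, h_star, A, beta):
    # assumed stand-in for Equation (9): decays towards h_star as N grows
    return h_star + A * N ** (beta - 1.0)

sizes = np.array([1e4, 1e5, 1e6, 1e7, 1e8])        # prefix sizes (toy numbers)
h1_values = np.array([4.1, 3.6, 3.2, 2.9, 2.7])    # h1 measured at each prefix
params, _ = curve_fit(ansatz, sizes, h1_values, p0=(1.0, 10.0, 0.5), maxfev=10000)
print("extrapolated h* =", params[0])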
For the corpora with good fitting, all of the estimated values were larger than zero, but many of the results could not be fitted easily, and the estimated values were unstable due to fluctuation of the lines. Whether a value for h∗ is reached asymptotically and also whether h∗ > 0 remain important questions requiring separate, more extensive mathematical and empirical studies. In contrast, H2 (or Yule's K, bottom graphs) showed convergence, already at the level of 10^5 tokens, for both words and characters. From the previous verification of Yule's K, we can conclude that H2 is convergent. The final convergent values, however, differed for the various writing systems. We return to this issue in the next section. To better understand the convergence, Figure 2 shows the results for the corresponding randomized data. As mentioned in Section 3.2, the original texts were randomized by shuffling words and characters for the data examined by words and characters, respectively. Therefore, all n-gram characteristics existing in the text were destroyed, and what remained were the different words and characters appearing in a random order. Here, we see how the random data's behavior matches some of the theoretical properties of convergence summarized in Section 3.2. As mentioned previously, because V has no mathematical background, its behavior even for uniform random data is unknown, and even if it converged, the convergent value would be smaller than that of the original text. The top two graphs in Figure 2 exhibit some oscillation, especially for randomized Chinese (Cnews-c,w). Such peculiar oscillation was already reported by Golcher himself (Golcher 2007) for uniformly random data. This was easy to replicate, as reported in Kimura and Tanaka-Ishii (2014), for uniformly random data with up to a hundred distinct tokens. Because the word distribution almost follows Zipf's law, the vocabulary is not uniformly distributed, yet oscillating results occur for some randomized data in the top left figure. Moreover, the values seem to increase for Japanese and English words at a larger scale. Although the plots for some scripts seem convergent (top right graph), these convergent values are theoretically different from those of the original texts, if the latter exist, and this stability is not universal across the different data sets. Given this result, it is doubtful that V is convergent across languages. In contrast, h1 is mathematically proven to be convergent given infinite-length randomized data, but to larger values than those of the original texts, as mentioned in Section 3.2. The middle two graphs of Figure 2 show the results for h1.
The majority of the plots do not reach convergence even at the largest data sizes, but for certain results with characters, especially in the Roman alphabet, the plots seem to approach a convergent value (middle right). All the plots can be extrapolated to converge to a certain entropy rate above zero, although these values are larger than the convergent values—if they ever exist—of the real data. These results confirm the difficulty of judging whether the entropy rates of the original texts are convergent and whether they remain above zero. Lastly, it is easy to see that H2 is convergent for a randomized text (bottom two graphs), and the convergent values are the same for the cases with and without randomization. In fact, for randomized text the plots converge to exactly the same points faster and more stably, which shows the effect of randomization. As for the other randomization options, by sentences and documents, the findings—both the tendencies of the lines and the changes in the values—are situated in the middle of what we have seen so far: Because randomization by sentences, and all the more by documents, is incomplete, the plots increasingly fluctuate like those of the real data. Returning to inspection of the remaining real data, Figure 3 shows V, h1, and H2 in terms of words for the small-scale corpora (left column) and for the programming language texts (right column). For the small-scale corpora, in general, the plots are bunched together, and the results shared the tendencies noted previously for the large-scale corpora. V again showed an increase, while h1 showed a tendency to decrease. H2 converged rapidly and was already almost stable at 10^4 tokens. This again shows how H2 exhibits stable constancy, especially with texts written by single authors. As for the programming language results, the plots fluctuate more than for the natural language texts because of the redundancy within the program sources. Still, the global tendencies noted so far were discernible. V had relatively larger values, whereas h1 and H2 had smaller values for programs, as compared with the natural language texts. The differences in value indicate the larger degree of repetitiveness in programs. Lastly, Figure 4 shows the Hα results for the Wall Street Journal in terms of words in unigrams (Enews-w). The horizontal axis indicates the corpus size, and the vertical axis indicates the approximated entropy value. The different lines represent the results for Hα with α = 1, 2, 3, 4. The two H1 plots represent calculations with and without Laplace smoothing (Manning and Schuetze 1999). We can see that without smoothing, H1 increased, as Tweedie and Baayen (1998) reported, but in contrast to their conclusion, we observe a tendency of convergence for larger-scale data. The increase was due to the influence of low-frequency vocabulary pushing up the entropy. The opposite tendency, a decrease, was observed for the smoothed probabilities, with the plot eventually converging to the same point as that for the unsmoothed H1 values. The convergence was far slower for H1 than for H2, H3, and H4, which all attained convergence already at 10^2 tokens. The convergent values naturally decreased for larger α, although the amount of decrease itself rapidly diminished with larger α.
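The contrast between H1 and the higher-order values can be reproduced in a few lines. This is our illustrative sketch, not the authors' code; the exact smoothing setup of Figure 4 is not spelled out in this excerpt, so the add-one details and the extra_types stand-in for unseen vocabulary below are assumptions.

from collections import Counter
import math

def h_alpha(tokens, alpha, smooth=0.0, extra_types=0):
    # unigram relative frequencies, optionally add-one smoothed;
    # extra_types crudely models unseen vocabulary when smoothing
    counts = list(Counter(tokens).values()) + [0] * extra_types
    total = len(tokens) + smooth * len(counts)
    probs = [(c + smooth) / total for c in counts]
    if alpha == 1:
        return -sum(p * math.log2(p) for p in probs if p > 0)
    return math.log2(sum(p ** alpha for p in probs)) / (1 - alpha)

toy = ("a b r a c a d a b r a " * 200).split()
print(h_alpha(toy, 1), h_alpha(toy, 1, smooth=1.0, extra_types=100))
print(h_alpha(toy, 2), h_alpha(toy, 3), h_alpha(toy, 4))  # converge much faster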
In answer to Questions 1 and 2 raised in the Introduction—which measures show constancy, with sufficient convergence speed—the empirical conclusion from our data is that Hα with α > 1 showed stable constancy when the values were approximated using relative frequencies. For H1, the convergence was much slower because of the strong influence of low-frequency words. Consequently, the constancy of Hα with α > 1 is attained by representing the gross complexity underlying a text. Now we turn to Question 3 raised in the Introduction and examine the discriminatory power of H2. As Yule intended, does H2 identify authors? Given the influence of different writing systems, as seen previously in Figure 1, we examine the relation between H2 and the number of distinct tokens (the alphabet/vocabulary size). Note that because this number corresponds to H0 in Equation (11), this analysis effectively considers texts on the H0-H2 plane. Since H0 grows according to the text size, unlike H2, the same text size must be used for all corpora in order to meaningfully compare H0 values. (Because H0 is not convergent, the horizontal locations remain unstable unless the tokens are of a phonographic alphabet; in other words, for all word-based results and for character-based results not based on a phonographic alphabet, the horizontal locations change as the corpus size increases. As for the random data, the H0 values are convergent, because these data sets have a finite number of distinct tokens. Since H0 is measured only for the first 10^4 tokens, however, the horizontal locations are underestimated, especially for random data following a Zipf distribution.) Given that H2 converges fast, we chose a size of 10^4 tokens to handle all of the small- and large-scale corpora. For each of the corpora listed in Table 1 and the second kind of random corpora explained at the end of Section 3.2, Figure 5 plots the values of H2 (vertical axis) against the number of distinct tokens H0 (horizontal axis), measured for each corpus at a size of 10^4 tokens. The three large circles are groupings of points. The leftmost group represents news sources in alphabetic characters. All of the romanized Chinese, Japanese, and Arabic texts are located almost at the same vertical location as the English text. This indicates the difficulty for H2 of distinguishing natural languages measured in terms of alphabetic characters. The middle group represents the programming language texts in terms of words. This group is located separately (vertically lower than the natural language corpora in terms of words), so H2 is likely to distinguish between natural languages and programming languages. The rightmost group represents the small-scale corpora. Considering the proximity of these points despite the variety of the content, it is unlikely that H2 can distinguish authors, in contrast to Yule's hope. Still, these points are located lower than those for news text. Therefore, H2 has the potential to distinguish genre or perhaps writing style. The natural language texts located near the line for a Zipf exponent of 0.8 are those of the non-alphabetic writing systems. (Note that here we use the values of the Zipf exponent for the random data, and not exponents estimated for the real data; the rank-frequency distributions of characters, especially for phonetic alphabets, often do not follow a power law.) Note that Chinese characters have morphological features, and the Arabic and Thai languages also have flexibility in terms of which units are considered words and morphemes. In other words, the plots closer to the random data with a smaller Zipf exponent are those for language corpora of morphemic sequences.
The group of plots measured for phonographic scripts is located near the line for a Zipf exponent of 1.0 (the grouping of points in the leftmost circle), which could suggest that morphemes are more randomized units than words. The nature of unknown scripts can also be considered through our understanding thus far. Figure 5 includes plots for the Voynich manuscript in terms of words and characters, and for the Rongorongo script in terms of characters. Like all the data seen in this figure, the points are placed at the H2 values (vertically) for the number of distinct tokens (horizontally) at the specified size of 10^4 tokens, with the exception of Voynich-A in terms of words. Because this corpus consists of fewer than 10^4 words (refer to the data length by tokens listed for VoynichA-w in Table 1), its point is located horizontally at the vocabulary size corresponding to the corpus's maximum size. For the two Voynich manuscript parts, the plots in terms of words appear near the Arabic corpus for words (Abook-w). For characters, on the other hand, the plots are at the leftmost end of the figure. This was due to overestimation of the total number of characters for the alphabetic texts (e.g., both English and other, romanized language texts), since all ASCII characters, such as colons, periods, and question marks, are counted. Still, the H2 values are located almost at the same position as for the other romanized texts, indicating that the Voynich manuscript has approximately similar complexity. These results suggest the possibility that the Voynich manuscript could have been generated from a source in natural language, possibly written in some script of the abjad type. This supports previous findings (Reddy and Knight 2011; Montemurro and Zanette 2013), which reported the possibility of the Voynich manuscript being in a natural language and the coincidence of its word length distribution with that of Arabic. On the other hand, the plots for the Rongorongo script appear near the line for a Zipf exponent of 0.8, with RongoA near Arabic in terms of words but RongoB somewhat further down from Japanese in terms of characters. The status of Rongorongo as natural language has been controversial (Pozdniakov and Pozdniakov 2007). Both points in the graph, however, are near many other natural language texts (and not widely separated), making it reasonable to hypothesize that Rongorongo is indeed natural language. The characters can be deemed morphologically rich, because both plots are close to the line for a Zipf exponent of 0.8. In the case of RongoA, for which a character was considered inclusive of all parts (i.e., including accents and ornamental parts), the morphological richness is comparable to that of the words of an abjad script. On the other hand, when the different character parts are considered distinct (RongoB), the location drifts towards the plot for Thai, a phonographic script, in terms of characters. Therefore, the Rongorongo script could be considered basically morphemic, with some parts functioning phonographically. This conclusion again supports a previous hypothesis proposed by a domain specialist (Pozdniakov and Pozdniakov 2007). This analysis of two unknown scripts supports previous conjectures.
Our results, however, only add a small bit of evidence to those conjectures; clearly, reaching a reasonable conclusion would require further study. Moreover, the analysis of unknown scripts introduced here could provide another possible application of text constancy measures, in a broader context. 5 conclusion :We have discussed text constancy measures, whose values are invariant across different sizes of text, for a given text. Such measures have a 70-year history, since Yule originally proposed K as a text characteristic, potentially with language engineering utility for problems such as author identification. We consider text constancy measures today to have scientific importance in understanding language universals from a computational view. After overviewing measures proposed so far and previous studies on text constancy, we explained how K essentially has a mathematical equivalence to the Rényi higher-order entropy. We then empirically examined various measures across different languages and kinds of corpora. Our results showed that only the approximated higher-order Rényi entropy exhibits stable, rapid constancy. Examining the nature of the convergent values revealed that K does not possess the discriminatory power of author identification as Yule had hoped. We also applied our understanding to two unknown scripts, the Voynich manuscript and Rongorongo, and showed how our constancy results support previous hypotheses about each of these scripts. Our future work will include application of K to other kinds of data besides natural language. There, too, we will consider the questions raised in the Introduction, of whether K converges and of how discriminatory it is. We are especially interested in considering the relation between the value of K and the meaningfulness of data. This article presents a mathematical and empirical verification of computational constancy measures for natural language text. A constancy measure characterizes a given text by having an invariant value for any size larger than a certain amount. The study of such measures has a 70-year history dating back to Yule's K, with the original intended application of author identification. We examine various measures proposed since Yule and reconsider reports made so far, thus overviewing the study of constancy measures. We then explain how K is essentially equivalent to an approximation of the second-order Rényi entropy, thus indicating its signification within language science. We then empirically examine constancy measure candidates within this new, broader context. The approximated higher-order entropy exhibits stable convergence across different languages and kinds of text. We also show, however, that it cannot identify authors, contrary to Yule's intention. Lastly, we apply K to two unknown scripts, the Voynich manuscript and Rongorongo, and show how the results support previous hypotheses about these scripts. [{""affiliations"": [], ""name"": ""Kumiko Tanaka-Ishii""}, {""affiliations"": [], ""name"": ""Shunsuke Aihara""}] SP:54cf25939e5f7b648f0a97837673bc1f8e10a4be [{""authors"": [""T. Barthel""], ""title"": ""The Rongorongo of Easter Island: Thomas Barthel\u2019s transliteration system"", ""venue"": ""Available at http://kohaumotu. org/rongorongo org/corpus/ codes.html. Accessed June 2015."", ""year"": 2013}, {""authors"": [""T.C. Bell"", ""J.G. Cleary"", ""I.H. Witten.""], ""title"": ""Text Compression"", ""venue"": ""Prentice Hall."", ""year"": 1990}, {""authors"": [""P.A. Bromiley"", ""N.A. Thacker"", ""E.
Bouhova-Thacker.""], ""title"": ""Shannon entropy, Renyi entropy, and information"", ""venue"": ""Available at http://www.tina-vision.net/ docs/memos/2004-004.pdf. Accessed"", ""year"": 2010}, {""authors"": [""P.F. Brown"", ""S.A. Della Pietra"", ""V.J. Della Pietra"", ""J.C. Lai"", ""R.L. Mercer.""], ""title"": ""An estimate of an upper bound for the entropy of English"", ""venue"": ""Computational Linguistics, 18(1):31\u201340."", ""year"": 1992}, {""authors"": [""E. Brunet""], ""title"": ""Vocabulaire de Jean Giraudoux: Structure et Evolution"", ""venue"": ""Slatkine."", ""year"": 1978}, {""authors"": [""T. Cover"", ""R. King.""], ""title"": ""A convergent gambling estimate of the entropy of English"", ""venue"": ""IEEE Transactions on Information Theory, 24(4):413\u2013421."", ""year"": 1978}, {""authors"": [""T.M. Cover"", ""J.A. Thomas.""], ""title"": ""Elements of Information Theory"", ""venue"": ""Wiley-Interscience."", ""year"": 2006}, {""authors"": [""\u0141. D\u0229bowski""], ""title"": ""A general definition of conditional information and its application to ergodic decomposition"", ""venue"": ""Statistics and Probability Letters, 79(9):1260\u20131268."", ""year"": 2009}, {""authors"": [""\u0141. D\u0229bowski""], ""title"": ""Empirical evidence for Hilberg\u2019s conjecture in single author texts"", ""venue"": ""Methods and Applications of Quantitative Linguistics: Selected papers of the 8th International Conference on"", ""year"": 2013}, {""authors"": [""\u0141. D\u0229bowski""], ""title"": ""The relaxed Hilberg conjecture: A review and new experimental support"", ""venue"": ""Available at http://www.ipipan.waw.pl/ldebowsk/. Accessed June 2015."", ""year"": 2014}, {""authors"": [""D. Dugast""], ""title"": ""Vocabulaire et Stylistique"", ""venue"": ""I Th\u00e9\u00e2tre et Dialogue. Slatkine-Champion. Travaux de Linguistique Quantitative."", ""year"": 1979}, {""authors"": [""M. Farach"", ""M. Noordewier"", ""S. Savari"", ""L. Shepp"", ""A. Wyner"", ""J. Ziv.""], ""title"": ""On the entropy of DNA: Algorithms and measurements based on memory and rapid convergence"", ""venue"": ""Proceedings of the"", ""year"": 1995}, {""authors"": [""D. Genzel"", ""E. Charniak.""], ""title"": ""Entropy rate constancy in text"", ""venue"": ""Annual Meeting of the Association for the ACL, pages 199\u2013206, Philadelphia, PA."", ""year"": 2002}, {""authors"": [""F. Golcher""], ""title"": ""A stable statistical constant specific for human language texts"", ""venue"": ""Recent Advances in Natural Language Processing, Borovets."", ""year"": 2007}, {""authors"": [""I.J. Good""], ""title"": ""The population frequencies of species and the estimation of population parameters"", ""venue"": ""Biometrika, 40(3\u20134):237\u2013264."", ""year"": 1953}, {""authors"": [""P. Grassberger""], ""title"": ""Estimating the information content of symbol sequences and efficient codes"", ""venue"": ""IEEE Transactions on Information Theory, 35:669\u2013675."", ""year"": 1989}, {""authors"": [""H. Guiraud""], ""title"": ""Les Charact\u00e8res Statistique du Vocabulaire"", ""venue"": ""Universitaires de France Press."", ""year"": 1954}, {""authors"": [""D. Gusfield""], ""title"": ""Algorithms on Strings, and Sequences: Computer Science and Computational Biology"", ""venue"": ""Cambridge University Press."", ""year"": 1997}, {""authors"": [""G. Herdan""], ""title"": ""Quantitative Linguistics"", ""venue"": ""Butterworths."", ""year"": 1964}, {""authors"": [""W. 
Hilberg""], ""title"": ""Der bekannte Grenzwert der redundanzfreien Information in Texten \u2013 eine Fehlinterpretation der Shannonschen Experimente?"", ""venue"": ""Frequenz, 44(9\u201310):243\u2013248."", ""year"": 1990}, {""authors"": [""A. Honor\u00e9""], ""title"": ""Some simple measures of richness of vocabulary"", ""venue"": ""Association for Literary and Linguistic Computing Bulletin, 7:172\u2013177."", ""year"": 1979}, {""authors"": [""D. Kimura"", ""K. Tanaka-Ishii.""], ""title"": ""A study on constants of natural language texts"", ""venue"": ""Journal of Natural Language Processing, 18(2):119\u2013137."", ""year"": 2011}, {""authors"": [""D. Kimura"", ""K. Tanaka-Ishii.""], ""title"": ""A study on constants of natural language texts"", ""venue"": ""Journal of Natural Language Processing, 21:877\u2013895. Special issue of awarded papers. [The English translated version]"", ""year"": 2014}, {""authors"": [""B. Kitchens""], ""title"": ""Symbolic Dynamics: One-sided, Two-sided and Countable State Markov Shifts"", ""venue"": ""Springer."", ""year"": 1998}, {""authors"": [""R. Levy"", ""T.F. Jaeger.""], ""title"": ""Speakers optimize information density through syntactic reduction"", ""venue"": ""Annual Conference on Neural Information"", ""year"": 2007}, {""authors"": [""H.D. Maas""], ""title"": ""Zusammenhang zwischen Wortschatzumfang und L\u00e4nge eines Textes [Relationship between vocabulary and text length]"", ""venue"": ""Zeitschrift f\u00fcr Literaturwissenschaft und Linguistik, 8:73\u201370."", ""year"": 1972}, {""authors"": [""B. Mandelbrot""], ""title"": ""An informational theory of the statistical structure of language"", ""venue"": ""Communication Theory, 486\u2013500."", ""year"": 1953}, {""authors"": [""C. Manning"", ""H. Schuetze.""], ""title"": ""Foundations of Statistical Natural Language Processing"", ""venue"": ""MIT Press."", ""year"": 1999}, {""authors"": [""M. Montemurro"", ""D. Zanette.""], ""title"": ""Keywords and co-occurrence patterns in the Voynich Manuscript: An information-theoretic analysis"", ""venue"": ""PLOS One. doi: 10.1371/journal.pone.0066344."", ""year"": 2013}, {""authors"": [""C. Orliac""], ""title"": ""The Rongorongo tablets from Easter Island: Botanical identification and 14c dating"", ""venue"": ""Archaeology in Oceania, 40(3):115\u2013119."", ""year"": 2005}, {""authors"": [""J.K. Orlov"", ""R.Y. Chitashvili.""], ""title"": ""Generalized z-distribution generating the well-known \u2018rank-distributions\u2019"", ""venue"": ""Bulletin of the Academy of Sciences of Georgia, 110:269\u2013272."", ""year"": 1983}, {""authors"": [""K. Pozdniakov"", ""I. Pozdniakov.""], ""title"": ""Rapanui writing and the Rapanui language: Preliminary results of a statistical analysis"", ""venue"": ""Forum for Anthropology and Culture, 3:3\u201336."", ""year"": 2007}, {""authors"": [""S. Reddy"", ""K. Knight.""], ""title"": ""What we know about the Voynich Manuscript"", ""venue"": ""ACL Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities, Portland, OR."", ""year"": 2011}, {""authors"": [""A. R\u00e9nyi""], ""title"": ""On measures of entropy and information"", ""venue"": ""Proceedings of the Fourth Berkeley Symposium on Mathematics,"", ""year"": 1960}, {""authors"": [""A. R\u00e9nyi""], ""title"": ""Foundations of Probability"", ""venue"": ""Dover Publications."", ""year"": 1970}, {""authors"": [""T. Sch\u00fcmann"", ""P.
Grassberger.""], ""title"": ""Entropy estimation of symbol sequences"", ""venue"": ""Chaos, 6(3):414\u2013427."", ""year"": 1996}, {""authors"": [""C. Shannon""], ""title"": ""A mathematical theory of communication"", ""venue"": ""Bell System Technical Journal, 27:379\u2013423, 623\u2013656."", ""year"": 1948}, {""authors"": [""P.C. Shields""], ""title"": ""Entropy and prefixes"", ""venue"": ""Annals of Probability, 20(1):403\u2013409."", ""year"": 1992}, {""authors"": [""H.S. Sichel""], ""title"": ""On a distribution law for word frequencies"", ""venue"": ""Journal of the American Statistical Association, 70(351):542\u2013547."", ""year"": 1975}, {""authors"": [""E.H. Simpson""], ""title"": ""Measurement of diversity"", ""venue"": ""Nature, 163:688."", ""year"": 1949}, {""authors"": [""E. Stamatatos"", ""N. Fakotakis"", ""G. Kokkinakis.""], ""title"": ""Automatic text categorization in terms of genre and author"", ""venue"": ""Computational Linguistics, 26(4):471\u2013495."", ""year"": 2001}, {""authors"": [""B. Stein"", ""N. Lipka"", ""P. Prettenhofer.""], ""title"": ""Intrinsic plagiarism analysis"", ""venue"": ""Language Resources and Evaluation, 45(1):63\u201382."", ""year"": 2010}, {""authors"": [""Y.W. Teh""], ""title"": ""A hierarchical Bayesian language model based on Pitman-Yor processes"", ""venue"": ""Proceedings of the 21st International Conference On Computational Linguistics and 44th Annual Meeting of the"", ""year"": 2006}, {""authors"": [""J. Tuldava""], ""title"": ""Quantitative relations between the size of the text and lexical richness"", ""venue"": ""SMIL Quarterly, Journal of Linguistic Calculus, 4:28\u201335."", ""year"": 1977}, {""authors"": [""F.J. Tweedie"", ""R.H. Baayen""], ""title"": ""How variable may a constant be? Measures of lexical richness in perspective"", ""venue"": ""Computers and the Humanities, 32:323\u2013352."", ""year"": 1998}, {""authors"": [""G.U. Yule""], ""title"": ""The Statistical Study of Literary Vocabulary"", ""venue"": ""Cambridge University Press."", ""year"": 1944}, {""authors"": [""G.K. Zipf""], ""title"": ""Human Behavior and the Principle of Least Effort: An Introduction to Human Ecology"", ""venue"": ""Hafner, New York."", ""year"": 1965}] acknowledgments :This research was supported by JST’s PRESTO program.","5 conclusion :We have discussed text constancy measures, whose values are invariant across different sizes of text, for a given text. Such measures have a 70-year history, since Yule originally proposed K as a text characteristic, potentially with language engineering utility for problems such as author identification. We consider text constancy measures today to have scientific importance in understanding language universals from a computational view. After overviewing measures proposed so far and previous studies on text constancy, we explained how K essentially has a mathematical equivalence to the Rényi higher-order entropy. We then empirically examined various measures across different languages and kinds of corpora. Our results showed that only the approximated higher-order Rényi entropy exhibits stable, rapid constancy. Examining the nature of the convergent values revealed that K does not possess the discriminatory power of author identification as Yule had hoped. We also applied our understanding to two unknown scripts, the Voynich manuscript and Rongorongo, and showed how our constancy results support previous hypotheses about each of these scripts. 
Our future work will include application of K to other kinds of data besides natural language. There, too, we will consider the questions raised in the Introduction, of whether K converges and of how discriminatory it is. We are especially interested in considering the relation between the value of K and the meaningfulness of data." "1 introduction :Distributional semantics approximates word meanings with vectors tracking cooccurrence in corpora (Turney and Pantel 2010). Recent work has extended this approach to phrases and sentences through vector composition (Clark 2015). The resulting compositional distributional semantic models (CDSMs) estimate degrees of semantic similarity (or, more generally, relatedness) between two phrases: A good CDSM might tell us that green bird is closer to parrot than to pigeon, which is useful for tasks such as paraphrasing. We take a mathematical look at how the composition operations postulated by CDSMs affect similarity measurements involving the vectors they produce for phrases or sentences. (Ganesalingam and Herbelot (2013) also present a mathematical investigation of CDSMs; however, except for the tensor product, a composition method we do not consider here as it is not empirically effective, they do not look at how composition strategies affect similarity comparisons.) We show that, for an important class of composition methods, encompassing at least those based on linear transformations, the similarity equations can be decomposed into operations performed on the subparts of the input phrases, and typically factorized into terms that reflect the linguistic structure of the input. This establishes a strong link between CDSMs and convolution kernels (Haussler 1999), which act in the same way. We thus refer to our claim as the "Convolution Conjecture." We focus on the models in Table 1. These CDSMs all apply linear methods, and we suspect that linearity is a sufficient (but not necessary) condition to ensure that the Convolution Conjecture holds. We will first illustrate the conjecture for linear methods, and then briefly consider two nonlinear approaches: the dual space model of Turney (2012), for which it does hold, and a representative of the recent strand of work on neural-network models of composition, for which it does not. (Correspondence: Department of Enterprise Engineering, University of Rome "Tor Vergata," Viale del Politecnico, 1, 00133 Rome, Italy. E-mail: fabio.massimo.zanzotto@uniroma2.it.)","2 mathematical preliminaries :Vectors are represented as lowercase letters with an arrow, $\vec{a}$, with elements $a_i$; matrices as bold capital letters, $\mathbf{A}$, with elements $A_{ij}$; and third-order or fourth-order tensors as calligraphic capital letters, $\mathcal{A}$, with elements $\mathcal{A}_{ijk}$ or $\mathcal{A}_{ijkh}$. The symbol $\circ$ represents the element-wise product and $\otimes$ is the tensor product. The dot product is $\langle \vec{a}, \vec{b}\rangle$, and the Frobenius product—that is, the generalization of the dot product to matrices and higher-order tensors—is represented as $\langle \mathbf{A}, \mathbf{B}\rangle_F$ and $\langle \mathcal{A}, \mathcal{B}\rangle_F$. The Frobenius product acts on vectors, matrices, and third-order tensors as follows:

$$\langle \vec{a}, \vec{b}\rangle_F = \sum_i a_i b_i = \langle \vec{a}, \vec{b}\rangle \qquad \langle \mathbf{A}, \mathbf{B}\rangle_F = \sum_{ij} A_{ij} B_{ij} \qquad \langle \mathcal{A}, \mathcal{B}\rangle_F = \sum_{ijk} \mathcal{A}_{ijk}\mathcal{B}_{ijk} \qquad (1)$$

A simple property that relates the dot product between two vectors and the Frobenius product between two general tensors is the following:

$$\langle \vec{a}, \vec{b}\rangle = \langle \mathbf{I}, \vec{a}\vec{b}^{\,T}\rangle_F \qquad (2)$$

where $\mathbf{I}$ is the identity matrix.
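The identities in Equations (1) and (2) are easy to confirm numerically; the following is our own sanity-check sketch with numpy (not from the original article):

import numpy as np

rng = np.random.default_rng(0)
a, b = rng.standard_normal(5), rng.standard_normal(5)

def frob(X, Y):
    # Frobenius product: element-wise multiply and sum, for equal-shape arrays
    return np.sum(X * Y)

assert np.isclose(frob(a, b), a @ b)                       # Equation (1), vectors
assert np.isclose(a @ b, frob(np.eye(5), np.outer(a, b)))  # Equation (2)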
The dot product of $\mathbf{A}\vec{x}$ and $\mathbf{B}\vec{y}$ can be rewritten as:

$$\langle \mathbf{A}\vec{x}, \mathbf{B}\vec{y}\rangle = \langle \mathbf{A}^T\mathbf{B}, \vec{x}\vec{y}^{\,T}\rangle_F \qquad (3)$$

Let $\mathcal{A}$ and $\mathcal{B}$ be two third-order tensors and $\vec{x}, \vec{y}, \vec{a}, \vec{c}$ four vectors. It can be shown that:

$$\langle \vec{x}\mathcal{A}\vec{y}, \vec{a}\mathcal{B}\vec{c}\rangle = \Big\langle \sum_j (\mathcal{A}\otimes\mathcal{B})_j, \vec{x}\otimes\vec{y}\otimes\vec{a}\otimes\vec{c} \Big\rangle_F \qquad (4)$$

where $\mathcal{C} = \sum_j(\mathcal{A}\otimes\mathcal{B})_j$ is a non-standard way to indicate the tensor contraction of the tensor product between two third-order tensors. In this particular tensor contraction, the elements $\mathcal{C}_{iknm}$ of the resulting fourth-order tensor $\mathcal{C}$ are $\mathcal{C}_{iknm} = \sum_j \mathcal{A}_{ijk}\mathcal{B}_{njm}$. The elements $\mathcal{D}_{iknm}$ of the tensor $\mathcal{D} = \vec{x}\otimes\vec{y}\otimes\vec{a}\otimes\vec{c}$ are $\mathcal{D}_{iknm} = x_i y_k a_n c_m$.","3 formalizing the convolution conjecture :Structured Objects. In line with Haussler (1999), a structured object x ∈ X is either a terminal object that cannot be further decomposed, or a non-terminal object that can be decomposed into n subparts. We indicate with $\vec{x} = (x_1, \ldots, x_n)$ one such decomposition, where the subparts $x_i \in X$ are structured objects themselves. The set X is the set of the structured objects and $T_X \subseteq X$ is the set of the terminal objects. A structured object x can be anything according to the representational needs. Here, x is a representation of a text fragment, and so it can be a sequence of words, a sequence of words along with their parts of speech, a tree structure, and so on. The set R(x) is the set of decompositions of x relevant to defining a specific CDSM. Note that a given decomposition of a structured object x does not need to contain all the subparts of the original object. For example, let us consider the phrase x = tall boy. We can then define R(x) = {(tall, boy), (tall), (boy)}. This set contains the three possible decompositions of the phrase: (tall, boy), where $x_1$ = tall and $x_2$ = boy; (tall), where $x_1$ = tall; and (boy), where $x_1$ = boy. Recursive formulation of CDSMs. A CDSM can be viewed as a function f that acts recursively on a structured object x. If x is a non-terminal object,

$$f(x) = \bigodot_{\vec{x}\in R(x)} \gamma(f(x_1), f(x_2), \ldots, f(x_n)) \qquad (5)$$

where R(x) is the set of relevant decompositions, $\bigodot$ is an operation repeated over this set, and $\gamma$ is a function defined on the $f(x_i)$, where the $x_i$ are the subparts of a decomposition of x. If x is a terminal object, f(x) is directly mapped to a tensor. The function f may operate differently on different kinds of structured objects, with the tensor degree varying accordingly. The set R(x) and the functions f, $\gamma$, and $\bigodot$ depend on the specific CDSM, and the same CDSM might be susceptible to alternative analyses satisfying the form in Equation (5). As an example, under Additive, x is a sequence of words and f is

$$f(x) = \begin{cases} \sum_{y\in R(x)} f(y) & \text{if } x \notin T_X \\ \vec{x} & \text{if } x \in T_X \end{cases} \qquad (6)$$

where $R((w_1, \ldots, w_n)) = \{(w_1), \ldots, (w_n)\}$. The repeated operation $\bigodot$ corresponds to summing and $\gamma$ is the identity. For Multiplicative we have

$$f(x) = \begin{cases} f(w_1) \circ \cdots \circ f(w_n) & \text{if } x \notin T_X \\ \vec{x} & \text{if } x \in T_X \end{cases} \qquad (7)$$

where $R(x) = \{(w_1, \ldots, w_n)\}$ (a single trivial decomposition including all subparts). With a single decomposition, the repeated operation reduces to a single term; here $\gamma$ is the element-wise product. (It will become clear subsequently, when we apply the Convolution Conjecture to these models, why we assume different decomposition sets for Additive and Multiplicative.) Definition 1 (Convolution Conjecture) For every CDSM f along with its R(x) set, there exist functions K, $K_i$ and a function g such that:

$$K(f(x), f(y)) = \sum_{\substack{\vec{x}\in R(x) \\ \vec{y}\in R(y)}} g(K_1(f(x_1), f(y_1)), K_2(f(x_2), f(y_2)), \ldots, K_n(f(x_n), f(y_n))) \qquad (8)$$
The Convolution Conjecture postulates that the similarity $K(f(x), f(y))$ between the tensors f(x) and f(y) is computed by combining operations on the subparts, that is, $K_i(f(x_i), f(y_i))$, using the function g. This is exactly what happens in convolution kernels (Haussler 1999). K is usually the dot product, but this is not necessary: We will show that for the dual-space model of Turney (2012), K turns out to be the fourth root of a Frobenius product.","4 comparing composed phrases :We now illustrate how the Convolution Conjecture (CC) applies to the considered CDSMs, exemplifying with adjective–noun and subject–verb–object phrases. Without loss of generality, we use tall boy and red cat for adjective–noun phrases and goats eat grass and cows drink water for subject–verb–object phrases. Additive Model. K and the $K_i$ are dot products, g is the identity function, and f is as in Equation (6). The structure of the input is a word sequence (i.e., $x = (w_1\ w_2)$) and the relevant decompositions consist of the single words, $R(x) = \{(w_1), (w_2)\}$. Then

$$K(f(\text{tall boy}), f(\text{red cat})) = \langle \vec{tall} + \vec{boy}, \vec{red} + \vec{cat}\rangle = \langle\vec{tall},\vec{red}\rangle + \langle\vec{tall},\vec{cat}\rangle + \langle\vec{boy},\vec{red}\rangle + \langle\vec{boy},\vec{cat}\rangle = \sum_{\substack{x\in\{tall,\,boy\} \\ y\in\{red,\,cat\}}} \langle f(x), f(y)\rangle = \sum_{\substack{x\in\{tall,\,boy\} \\ y\in\{red,\,cat\}}} K(f(x), f(y)) \qquad (9)$$

The CC form of Additive shows that the overall dot product can be decomposed into dot products of the vectors of the single words. Composition does not add any further information. These results can be easily extended to longer phrases and to phrases of different length. Multiplicative Model. K and g are dot products, the $K_i$ are component-wise products, and f is as in Equation (7). The structure of the input is $x = (w_1\ w_2)$, and we use the trivial single decomposition consisting of all subparts (thus the summation reduces to a single term):

$$K(f(\text{tall boy}), f(\text{red cat})) = \langle \vec{tall}\circ\vec{boy}, \vec{red}\circ\vec{cat}\rangle = \langle \vec{tall}\circ\vec{red}\circ\vec{boy}\circ\vec{cat}, \vec{1}\rangle = \langle \vec{tall}\circ\vec{red}, \vec{boy}\circ\vec{cat}\rangle = g(K_1(\vec{tall}, \vec{red}), K_2(\vec{boy}, \vec{cat})) \qquad (10)$$

This is the dot product between an indistinct chain of element-wise products and a vector $\vec{1}$ of all ones, or the product of two separate element-wise products, one on the adjectives, $\vec{tall}\circ\vec{red}$, and one on the nouns, $\vec{boy}\circ\vec{cat}$. In this latter CC form, the final dot product is obtained in two steps: first separately operating on the adjectives and on the nouns; then taking the dot product of the resulting vectors. The comparison operations thus reflect the input syntactic structure. The results can be easily extended to longer phrases and to phrases of different lengths. Full Additive Model. The input consists of a sequence of (label, word) pairs $x = ((L_1\ w_1), \ldots, (L_n\ w_n))$ and the relevant decomposition set includes the single tuples, that is, $R(x) = \{(L_1\ w_1), \ldots, (L_n\ w_n)\}$. The CDSM f is defined as

$$f(x) = \begin{cases} \sum_{(L\ w)\in R(x)} f(L)\, f(w) & \text{if } x \notin T_X \\ \mathbf{X} & \text{if } x \in T_X \text{ is a label } L \\ \vec{w} & \text{if } x \in T_X \text{ is a word } w \end{cases} \qquad (11)$$

The repeated operation $\bigodot$ here is summation, and $\gamma$ the matrix-by-vector product. In the CC form, K is the dot product, g the Frobenius product, $K_1(f(x), f(y)) = f(x)^T f(y)$, and $K_2(f(x), f(y)) = f(x)\, f(y)^T$.
We have then for adjective–noun composition (using the property in Equation (3)):

$$K(f((A\ tall)\ (N\ boy)), f((A\ red)\ (N\ cat))) = \langle \mathbf{A}\vec{tall} + \mathbf{N}\vec{boy}, \mathbf{A}\vec{red} + \mathbf{N}\vec{cat}\rangle = \langle\mathbf{A}\vec{tall}, \mathbf{A}\vec{red}\rangle + \langle\mathbf{A}\vec{tall}, \mathbf{N}\vec{cat}\rangle + \langle\mathbf{N}\vec{boy}, \mathbf{A}\vec{red}\rangle + \langle\mathbf{N}\vec{boy}, \mathbf{N}\vec{cat}\rangle = \langle\mathbf{A}^T\mathbf{A}, \vec{tall}\,\vec{red}^{\,T}\rangle_F + \langle\mathbf{A}^T\mathbf{N}, \vec{tall}\,\vec{cat}^{\,T}\rangle_F + \langle\mathbf{N}^T\mathbf{A}, \vec{boy}\,\vec{red}^{\,T}\rangle_F + \langle\mathbf{N}^T\mathbf{N}, \vec{boy}\,\vec{cat}^{\,T}\rangle_F = \sum_{\substack{(l_x\ w_x)\in\{(A\ tall),(N\ boy)\} \\ (l_y\ w_y)\in\{(A\ red),(N\ cat)\}}} g(K_1(f(l_x), f(l_y)), K_2(f(w_x), f(w_y))) \qquad (12)$$

The CC form shows how Full Additive factorizes into a more structural and a more lexical part: Each element of the sum is the Frobenius product between the product of two matrices representing syntactic labels and the tensor product of two vectors representing the corresponding words. For subject–verb–object phrases $((S\ w_1)\ (V\ w_2)\ (O\ w_3))$ we have

$$K(f(((S\ goats)\ (V\ eat)\ (O\ grass))), f(((S\ cows)\ (V\ drink)\ (O\ water)))) = \langle \mathbf{S}\vec{goats} + \mathbf{V}\vec{eat} + \mathbf{O}\vec{grass}, \mathbf{S}\vec{cows} + \mathbf{V}\vec{drink} + \mathbf{O}\vec{water}\rangle = \langle\mathbf{S}^T\mathbf{S}, \vec{goats}\,\vec{cows}^{\,T}\rangle_F + \langle\mathbf{S}^T\mathbf{V}, \vec{goats}\,\vec{drink}^{\,T}\rangle_F + \langle\mathbf{S}^T\mathbf{O}, \vec{goats}\,\vec{water}^{\,T}\rangle_F + \langle\mathbf{V}^T\mathbf{S}, \vec{eat}\,\vec{cows}^{\,T}\rangle_F + \langle\mathbf{V}^T\mathbf{V}, \vec{eat}\,\vec{drink}^{\,T}\rangle_F + \langle\mathbf{V}^T\mathbf{O}, \vec{eat}\,\vec{water}^{\,T}\rangle_F + \langle\mathbf{O}^T\mathbf{S}, \vec{grass}\,\vec{cows}^{\,T}\rangle_F + \langle\mathbf{O}^T\mathbf{V}, \vec{grass}\,\vec{drink}^{\,T}\rangle_F + \langle\mathbf{O}^T\mathbf{O}, \vec{grass}\,\vec{water}^{\,T}\rangle_F = \sum_{\substack{(l_x\ w_x)\in\{(S\ goats),(V\ eat),(O\ grass)\} \\ (l_y\ w_y)\in\{(S\ cows),(V\ drink),(O\ water)\}}} g(K_1(f(l_x), f(l_y)), K_2(f(w_x), f(w_y))) \qquad (13)$$

Again, we observe the factoring into products of syntactic and lexical representations. By looking at Full Additive in the CC form, we observe that when $\mathbf{X}^T\mathbf{Y} \approx \mathbf{I}$ for all matrix pairs, it degenerates to Additive. Interestingly, Full Additive can also approximate a semantic convolution kernel (Mehdad, Moschitti, and Zanzotto 2010), which combines dot products of elements in the same slot. In the adjective–noun case, we obtain this approximation by choosing two nearly orthonormal matrices $\mathbf{A}$ and $\mathbf{N}$ such that $\mathbf{A}\mathbf{A}^T = \mathbf{N}\mathbf{N}^T \approx \mathbf{I}$ and $\mathbf{A}\mathbf{N}^T \approx \mathbf{0}$ and applying Equation (2): $\langle\mathbf{A}\vec{tall} + \mathbf{N}\vec{boy}, \mathbf{A}\vec{red} + \mathbf{N}\vec{cat}\rangle \approx \langle\vec{tall}, \vec{red}\rangle + \langle\vec{boy}, \vec{cat}\rangle$. This approximation is also valid for three-word phrases. When the matrices $\mathbf{S}$, $\mathbf{V}$, and $\mathbf{O}$ are such that $\mathbf{X}\mathbf{X}^T \approx \mathbf{I}$, with $\mathbf{X}$ one of the three matrices, and $\mathbf{Y}\mathbf{X}^T \approx \mathbf{0}$, with $\mathbf{X}$ and $\mathbf{Y}$ two different matrices, Full Additive approximates a semantic convolution kernel comparing two sentences by summing the dot products of the words in the same role, that is,

$$\langle \mathbf{S}\vec{goats} + \mathbf{V}\vec{eat} + \mathbf{O}\vec{grass}, \mathbf{S}\vec{cows} + \mathbf{V}\vec{drink} + \mathbf{O}\vec{water}\rangle \approx \langle\vec{goats}, \vec{cows}\rangle + \langle\vec{eat}, \vec{drink}\rangle + \langle\vec{grass}, \vec{water}\rangle \qquad (14)$$

Results can again be easily extended to longer and different-length phrases. Lexical Function Model. We distinguish composition with one- vs. two-argument predicates. We illustrate the first through adjective–noun composition, where the adjective acts as the predicate, and the second with transitive verb constructions. Although we use the relevant syntactic labels, the formulas generalize to any construction with the same argument count. For adjective–noun phrases, the input is a sequence of (label, word) pairs $x = ((A, w_1), (N, w_2))$ and the relevant decomposition set again includes only the single trivial decomposition into all the subparts: $R(x) = \{((A, w_1), (N, w_2))\}$. The method itself is recursively defined as

$$f(x) = \begin{cases} f((A, w_1))\, f((N, w_2)) & \text{if } x \notin T_X,\ x = ((A, w_1), (N, w_2)) \\ \mathbf{W}_1 & \text{if } x \in T_X,\ x = (A, w_1) \\ \vec{w}_2 & \text{if } x \in T_X,\ x = (N, w_2) \end{cases} \qquad (15)$$

Here, K and g are, respectively, the dot and Frobenius product, $K_1(f(x), f(y)) = f(x)^T f(y)$, and $K_2(f(x), f(y)) = f(x)\, f(y)^T$.
Using Equation (3), we then have

$$K(f(\text{tall boy}), f(\text{red cat})) = \langle \mathbf{TALL}\,\vec{boy}, \mathbf{RED}\,\vec{cat}\rangle = \langle \mathbf{TALL}^T\mathbf{RED}, \vec{boy}\,\vec{cat}^{\,T}\rangle_F = g(K_1(f(tall), f(red)), K_2(f(boy), f(cat))) \qquad (16)$$

The roles of predicate and argument words in the final dot product are clearly separated, showing again the structure-sensitive nature of the decomposition of the comparison operations. In the two-place-predicate case, again, the input is a set of (label, word) tuples, and the relevant decomposition set only includes the single trivial decomposition into all subparts. The CDSM f is defined as

$$f(x) = \begin{cases} f((S\ w_1))\, f((V\ w_2))\, f((O\ w_3)) & \text{if } x \notin T_X,\ x = ((S\ w_1)\ (V\ w_2)\ (O\ w_3)) \\ \vec{w} & \text{if } x \in T_X,\ x = (l\ w) \text{ and } l \text{ is } S \text{ or } O \\ \mathcal{W} & \text{if } x \in T_X,\ x = (V\ w) \end{cases} \qquad (17)$$

that is, the third-order tensor of the verb is contracted with the vectors of its arguments, as in the left-hand side of Equation (4). K is the dot product and $g(x, y, z) = \langle x, y \otimes z\rangle_F$; $K_1(f(x), f(y)) = \sum_j (f(x) \otimes f(y))_j$—that is, the tensor contraction along the second index of the tensor product between f(x) and f(y), a framing of the Lexical Function first given by Grefenstette et al. (2013)—and $K_2(f(x), f(y)) = K_3(f(x), f(y)) = f(x) \otimes f(y)$ are tensor products. The dot product of $\vec{goats}\,\mathcal{EAT}\,\vec{grass}$ and $\vec{cows}\,\mathcal{DRINK}\,\vec{water}$ is (using Equation (4))

$$K(f(((S\ goats)\ (V\ eat)\ (O\ grass))), f(((S\ cows)\ (V\ drink)\ (O\ water)))) = \langle \vec{goats}\,\mathcal{EAT}\,\vec{grass}, \vec{cows}\,\mathcal{DRINK}\,\vec{water}\rangle = \Big\langle \sum_j(\mathcal{EAT}\otimes\mathcal{DRINK})_j, \vec{goats}\otimes\vec{grass}\otimes\vec{cows}\otimes\vec{water}\Big\rangle_F = g(K_1(f((V\ eat)), f((V\ drink))), K_2(f((S\ goats)), f((S\ cows))) \otimes K_3(f((O\ grass)), f((O\ water)))) \qquad (18)$$

We rewrote the equation as a Frobenius product between two fourth-order tensors. The first combines the two third-order tensors of the verbs, $\sum_j(\mathcal{EAT}\otimes\mathcal{DRINK})_j$, and the second combines the vectors representing the arguments of the verbs, that is, $\vec{goats}\otimes\vec{grass}\otimes\vec{cows}\otimes\vec{water}$. In this case as well, we can separate the roles of predicate and argument types in the comparison computation. Extension of the Lexical Function to structured objects of different lengths is handled by using the identity element for missing parts. As an example, we show here the comparison between tall boy and cat, where the identity element is the identity matrix $\mathbf{I}$:

$$K(f(\text{tall boy}), f(\text{cat})) = \langle \mathbf{TALL}\,\vec{boy}, \vec{cat}\rangle = \langle \mathbf{TALL}\,\vec{boy}, \mathbf{I}\,\vec{cat}\rangle = \langle \mathbf{TALL}^T\mathbf{I}, \vec{boy}\,\vec{cat}^{\,T}\rangle_F = g(K_1(f(tall), f(\ )), K_2(f(boy), f(cat))) \qquad (19)$$

Dual Space Model. We have until now applied the CC to linear CDSMs with the dot product as the final comparison operator (what we called K). The CC also holds for the effective Dual Space model of Turney (2012), which assumes that each word has two distributional representations, $\vec{w}^{\,d}$ in "domain" space and $\vec{w}^{\,f}$ in "function" space. The similarity of two phrases is directly computed as the geometric average of the separate similarities between the first and second words in both spaces. Even though there is no explicit composition step, it is still possible to put the model in CC form. Take $x = (x_1, x_2)$ and its trivial decomposition. Define, for a word w with vector representations $\vec{w}^{\,d}$ and $\vec{w}^{\,f}$: $f(w) = \vec{w}^{\,d}(\vec{w}^{\,f})^T$. Define also $K_1(f(x_1), f(y_1)) = \sqrt{\langle f(x_1), f(y_1)\rangle_F}$, $K_2(f(x_2), f(y_2)) = \sqrt{\langle f(x_2), f(y_2)\rangle_F}$, and $g(a, b) = \sqrt{ab}$. Then

$$g(K_1(f(x_1), f(y_1)), K_2(f(x_2), f(y_2))) = \sqrt{\sqrt{\langle \vec{x}^{\,d}_1(\vec{x}^{\,f}_1)^T, \vec{y}^{\,d}_1(\vec{y}^{\,f}_1)^T\rangle_F} \cdot \sqrt{\langle \vec{x}^{\,d}_2(\vec{x}^{\,f}_2)^T, \vec{y}^{\,d}_2(\vec{y}^{\,f}_2)^T\rangle_F}} = \sqrt[4]{\langle\vec{x}^{\,d}_1, \vec{y}^{\,d}_1\rangle \cdot \langle\vec{x}^{\,f}_1, \vec{y}^{\,f}_1\rangle \cdot \langle\vec{x}^{\,d}_2, \vec{y}^{\,d}_2\rangle \cdot \langle\vec{x}^{\,f}_2, \vec{y}^{\,f}_2\rangle} = \mathrm{geo}(\mathrm{sim}(\vec{x}^{\,d}_1, \vec{y}^{\,d}_1), \mathrm{sim}(\vec{x}^{\,d}_2, \vec{y}^{\,d}_2), \mathrm{sim}(\vec{x}^{\,f}_1, \vec{y}^{\,f}_1), \mathrm{sim}(\vec{x}^{\,f}_2, \vec{y}^{\,f}_2)) \qquad (20)$$

A Neural-network-like Model. Consider the phrase $(w_1, w_2, \ldots, w_n)$ and the model defined by $f(x) = \sigma(\vec{w}_1 + \vec{w}_2 + \ldots + \vec{w}_n)$, where $\sigma(\cdot)$ is a component-wise logistic function.
Here we have a single trivial decomposition that includes all the subparts, and $\gamma(x_1, \ldots, x_n)$ is defined as $\sigma(x_1 + \ldots + x_n)$. To see that the CC cannot hold for this model, consider two two-word phrases (a b) and (c d):

$$K(f((a, b)), f((c, d))) = \langle f((a, b)), f((c, d))\rangle = \sum_i \big[\sigma(\vec{a} + \vec{b})\big]_i \cdot \big[\sigma(\vec{c} + \vec{d})\big]_i = \sum_i \big(1 + e^{-a_i-b_i} + e^{-c_i-d_i} + e^{-a_i-b_i-c_i-d_i}\big)^{-1} \qquad (21)$$

We would need to rewrite this as

$$g(K_1(\vec{a}, \vec{c}), K_2(\vec{b}, \vec{d})) \qquad (22)$$

But there is no possible choice of g, $K_1$, and $K_2$ that allows Equation (21) to be written as Equation (22). This example can be regarded as a simplified version of the neural-network model of Socher et al. (2011). The fact that the CC does not apply to it suggests that it will not apply to other models in this family.","5 conclusion :The Convolution Conjecture offers a general way to rewrite the phrase similarity computations of CDSMs by highlighting the role played by the subparts of a composed representation. This perspective allows for a better understanding of the exact operations that a composition model applies to its input. The Convolution Conjecture also suggests a strong connection between CDSMs and semantic convolution kernels. This link suggests that insights from the CDSM literature could be directly integrated into the development of convolution kernels, with all the benefits offered by this well-understood general machine-learning framework.
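As a closing illustration, the CC rewritings derived in Section 4 can all be verified numerically on random data. The sketch below is ours (with numpy; the names are arbitrary); it checks the Additive (Equation (9)), Multiplicative (Equation (10)), Full Additive (Equation (12)), Lexical Function (Equation (16)), and Dual Space (Equation (20)) decompositions:

import numpy as np

rng = np.random.default_rng(1)
d = 6
tall, boy, red, cat = (rng.standard_normal(d) for _ in range(4))
A, N = rng.standard_normal((d, d)), rng.standard_normal((d, d))
TALL, RED = rng.standard_normal((d, d)), rng.standard_normal((d, d))
frob = lambda X, Y: np.sum(X * Y)   # Frobenius product

# Additive (9): phrase dot product = sum of all pairwise word dot products
assert np.isclose((tall + boy) @ (red + cat),
                  sum(x @ y for x in (tall, boy) for y in (red, cat)))

# Multiplicative (10): regroup the element-wise products across the two phrases
assert np.isclose((tall * boy) @ (red * cat), (tall * red) @ (boy * cat))

# Full Additive (12): four Frobenius terms pairing label matrices and word vectors
lhs = (A @ tall + N @ boy) @ (A @ red + N @ cat)
rhs = (frob(A.T @ A, np.outer(tall, red)) + frob(A.T @ N, np.outer(tall, cat))
       + frob(N.T @ A, np.outer(boy, red)) + frob(N.T @ N, np.outer(boy, cat)))
assert np.isclose(lhs, rhs)

# Lexical Function (16): separate the predicate matrices from the argument vectors
assert np.isclose((TALL @ boy) @ (RED @ cat),
                  frob(TALL.T @ RED, np.outer(boy, cat)))

# Dual Space (20): the CC form equals the fourth root of the four similarities
xd1, xf1, yd1, yf1, xd2, xf2, yd2, yf2 = (np.abs(rng.standard_normal(d)) + 0.1
                                          for _ in range(8))
g = np.sqrt(np.sqrt(frob(np.outer(xd1, xf1), np.outer(yd1, yf1)))
            * np.sqrt(frob(np.outer(xd2, xf2), np.outer(yd2, yf2))))
prod = (xd1 @ yd1) * (xf1 @ yf1) * (xd2 @ yd2) * (xf2 @ yf2)
assert np.isclose(g, prod ** 0.25)
print("all CC rewritings verified")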
In press."", ""year"": 2015}, {""authors"": [""Coecke"", ""Bob"", ""Mehrnoosh Sadrzadeh"", ""Stephen Clark.""], ""title"": ""Mathematical foundations for a compositional distributional model of meaning"", ""venue"": ""Linguistic Analysis, 36:345\u2013384."", ""year"": 2010}, {""authors"": [""Ganesalingam"", ""Mohan"", ""Aur\u00e9lie Herbelot.""], ""title"": ""Composing distributions: Mathematical structures and their linguistic interpretation"", ""venue"": ""Working paper, Computer Laboratory, University"", ""year"": 2013}, {""authors"": [""Grefenstette"", ""Edward"", ""Georgiana Dinu"", ""Yao-Zhong Zhang"", ""Mehrnoosh Sadrzadeh"", ""Marco Baroni.""], ""title"": ""Multi-step regression learning for compositional distributional semantics"", ""venue"": ""Proceedings"", ""year"": 2013}, {""authors"": [""Guevara"", ""Emiliano.""], ""title"": ""A regression model of adjective-noun compositionality in distributional semantics"", ""venue"": ""Proceedings of GEMS, pages 33\u201337, Uppsala."", ""year"": 2010}, {""authors"": [""Haussler"", ""David.""], ""title"": ""Convolution kernels on discrete structures"", ""venue"": ""Technical report USCS-CL-99-10, University of California at Santa Cruz."", ""year"": 1999}, {""authors"": [""Mitchell"", ""Jeff"", ""Mirella Lapata.""], ""title"": ""Vector-based models of semantic composition"", ""venue"": ""Proceedings of ACL, pages 236\u2013244, Columbus, OH."", ""year"": 2008}, {""authors"": [""Socher"", ""Richard"", ""Eric Huang"", ""Jeffrey Pennin"", ""Andrew Ng"", ""Christopher Manning.""], ""title"": ""Dynamic pooling and unfolding recursive autoencoders for paraphrase detection"", ""venue"": ""Proceedings of NIPS,"", ""year"": 2011}, {""authors"": [""Turney"", ""Peter.""], ""title"": ""Domain and function: A dual-space model of semantic relations and compositions"", ""venue"": ""Journal of Artificial Intelligence Research, 44:533\u2013585."", ""year"": 2012}, {""authors"": [""Turney"", ""Peter"", ""Patrick Pantel.""], ""title"": ""From frequency to meaning: Vector space models of semantics"", ""venue"": ""Journal of Artificial Intelligence Research, 37:141\u2013188."", ""year"": 2010}, {""authors"": [""Zanzotto"", ""Fabio Massimo"", ""Ioannis Korkontzelos"", ""Francesca Falucchi"", ""Suresh Manandhar.""], ""title"": ""Estimating linear models for compositional distributional semantics"", ""venue"": ""Proceedings of COLING,"", ""year"": 2010}]","acknowledgments :We thank the reviewers for helpful comments. Marco Baroni acknowledges ERC 2011 Starting Independent Research Grant n. 283554 (COMPOSES). References Clark, Stephen. 2015. Vector space models of lexical meaning. In Shalom Lappin and Chris Fox, editors, Handbook of Contemporary Semantics, 2nd ed. Blackwell, Malden, MA. In press. Coecke, Bob, Mehrnoosh Sadrzadeh, and Stephen Clark. 2010. Mathematical foundations for a compositional distributional model of meaning. Linguistic Analysis, 36:345–384. Ganesalingam, Mohan and Aurélie Herbelot. 2013. Composing distributions: Mathematical structures and their linguistic interpretation. Working paper, Computer Laboratory, University of Cambridge. Available at www.cl.cam.ac.uk/∼ah433/. Grefenstette, Edward, Georgiana Dinu, Yao-Zhong Zhang, Mehrnoosh Sadrzadeh, and Marco Baroni. 2013. Multi-step regression learning for compositional distributional semantics. Proceedings of IWCS, pages 131–142, Potsdam. Guevara, Emiliano. 2010. A regression model of adjective-noun compositionality in distributional semantics. In Proceedings of GEMS, pages 33–37, Uppsala. 
when the whole is not greater than the combination of its parts: a “decompositional” look at compositional distributional semantics

Fabio Massimo Zanzotto, University of Rome “Tor Vergata”
Lorenzo Ferrone, University of Rome “Tor Vergata”
Marco Baroni, University of Trento

[∗] Department of Enterprise Engineering, University of Rome “Tor Vergata,” Viale del Politecnico, 1, 00133 Rome, Italy. E-mail: fabio.massimo.zanzotto@uniroma2.it.

Distributional semantics has been extended to phrases and sentences by means of composition operations. We look at how these operations affect similarity measurements, showing that similarity equations of an important class of composition methods can be decomposed into operations performed on the subparts of the input phrases. This establishes a strong link between these models and convolution kernels.

1 introduction :
Distributional semantics approximates word meanings with vectors tracking cooccurrence in corpora (Turney and Pantel 2010). Recent work has extended this approach to phrases and sentences through vector composition (Clark 2015). The resulting compositional distributional semantic models (CDSMs) estimate degrees of semantic similarity (or, more generally, relatedness) between two phrases: a good CDSM might tell us that green bird is closer to parrot than to pigeon, which is useful for tasks such as paraphrasing. We take a mathematical look[1] at how the composition operations postulated by CDSMs affect similarity measurements involving the vectors they produce for phrases or sentences. We show that, for an important class of composition methods, encompassing at least those based on linear transformations, the similarity equations can be decomposed into operations performed on the subparts of the input phrases,
and typically factorized into terms that reflect the linguistic structure of the input. This establishes a strong link between CDSMs and convolution kernels (Haussler 1999), which act in the same way. We thus refer to our claim as the “Convolution Conjecture.” We focus on the models in Table 1. These CDSMs all apply linear methods, and we suspect that linearity is a sufficient (but not necessary) condition to ensure that the Convolution Conjecture holds. We will first illustrate the conjecture for linear methods, and then briefly consider two nonlinear approaches: the dual-space model of Turney (2012), for which the conjecture holds, and a representative of the recent strand of work on neural-network models of composition, for which it does not.

[1] Ganesalingam and Herbelot (2013) also present a mathematical investigation of CDSMs. However, except for the tensor product (a composition method we do not consider here, as it is not empirically effective), they do not look at how composition strategies affect similarity comparisons.

2 mathematical preliminaries :
Vectors are represented as lowercase letters with an arrow, $\vec{a}$, with elements $a_i$; matrices as bold capital letters, $\mathbf{A}$, with elements $A_{ij}$; and third-order or fourth-order tensors as $\mathsf{A}$, with elements $A_{ijk}$ or $A_{ijkh}$. The symbol $\odot$ represents the element-wise product and $\otimes$ the tensor product. The dot product is $\langle \vec{a}, \vec{b}\rangle$, and the Frobenius product—that is, the generalization of the dot product to matrices and higher-order tensors—is represented as $\langle \mathbf{A}, \mathbf{B}\rangle_F$ and $\langle \mathsf{A}, \mathsf{B}\rangle_F$. The Frobenius product acts on vectors, matrices, and third-order tensors as follows:
\[
\langle \vec{a}, \vec{b}\rangle_F = \sum_i a_i b_i = \langle \vec{a}, \vec{b}\rangle \qquad \langle \mathbf{A}, \mathbf{B}\rangle_F = \sum_{ij} A_{ij} B_{ij} \qquad \langle \mathsf{A}, \mathsf{B}\rangle_F = \sum_{ijk} A_{ijk} B_{ijk} \tag{1}
\]
A simple property that relates the dot product between two vectors to the Frobenius product between two general tensors is the following:
\[
\langle \vec{a}, \vec{b}\rangle = \langle \mathbf{I}, \vec{a}\,\vec{b}^{T}\rangle_F \tag{2}
\]
where $\mathbf{I}$ is the identity matrix. The dot product of $\mathbf{A}\vec{x}$ and $\mathbf{B}\vec{y}$ can be rewritten as:
\[
\langle \mathbf{A}\vec{x}, \mathbf{B}\vec{y}\rangle = \langle \mathbf{A}^{T}\mathbf{B}, \vec{x}\,\vec{y}^{T}\rangle_F \tag{3}
\]
Let $\mathsf{A}$ and $\mathsf{B}$ be two third-order tensors and $\vec{x}, \vec{y}, \vec{a}, \vec{c}$ four vectors. It can be shown that:
\[
\langle \vec{x}\mathsf{A}\vec{y},\ \vec{a}\mathsf{B}\vec{c}\rangle = \Big\langle \sum_j (\mathsf{A} \otimes \mathsf{B})_j,\ \vec{x} \otimes \vec{y} \otimes \vec{a} \otimes \vec{c}\Big\rangle_F \tag{4}
\]
where $\mathsf{C} = \sum_j (\mathsf{A} \otimes \mathsf{B})_j$ is a non-standard way to indicate the tensor contraction of the tensor product between two third-order tensors. In this particular tensor contraction, the elements $C_{iknm}$ of the resulting fourth-order tensor $\mathsf{C}$ are $C_{iknm} = \sum_j A_{ijk} B_{njm}$. The elements $D_{iknm}$ of the tensor $\mathsf{D} = \vec{x} \otimes \vec{y} \otimes \vec{a} \otimes \vec{c}$ are $D_{iknm} = x_i y_k a_n c_m$.
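The identities above are easy to check numerically. The following is a minimal sketch (ours, not from the original article), assuming numpy; dimensions and values are arbitrary, and Equation (4) is expressed with einsum, which directly encodes the contraction $C_{iknm} = \sum_j A_{ijk} B_{njm}$.

import numpy as np

rng = np.random.default_rng(0)
d = 4
a, b, x, y, c = (rng.standard_normal(d) for _ in range(5))
A, B = rng.standard_normal((d, d)), rng.standard_normal((d, d))

# Equation (2): <a, b> = <I, a b^T>_F
assert np.isclose(a @ b, np.sum(np.eye(d) * np.outer(a, b)))

# Equation (3): <A x, B y> = <A^T B, x y^T>_F
assert np.isclose((A @ x) @ (B @ y), np.sum((A.T @ B) * np.outer(x, y)))

# Equation (4), with (x A y)_j = sum_{ik} x_i A_{ijk} y_k:
A3, B3 = rng.standard_normal((d, d, d)), rng.standard_normal((d, d, d))
xAy = np.einsum('i,ijk,k->j', x, A3, y)
aBc = np.einsum('n,njm,m->j', a, B3, c)
C = np.einsum('ijk,njm->iknm', A3, B3)      # contraction along j
D = np.einsum('i,k,n,m->iknm', x, y, a, c)  # x (x) y (x) a (x) c
assert np.isclose(xAy @ aBc, np.sum(C * D))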
3 formalizing the convolution conjecture :
Structured Objects. In line with Haussler (1999), a structured object $x \in X$ is either a terminal object that cannot be further decomposed, or a non-terminal object that can be decomposed into $n$ subparts. We indicate with $\bar{x} = (x_1, \ldots, x_n)$ one such decomposition, where the subparts $x_i \in X$ are structured objects themselves. The set $X$ is the set of the structured objects and $T_X \subseteq X$ is the set of the terminal objects. A structured object $x$ can be anything, according to the representational needs. Here, $x$ is a representation of a text fragment, and so it can be a sequence of words, a sequence of words along with their parts of speech, a tree structure, and so on. The set $R(x)$ is the set of decompositions of $x$ relevant to define a specific CDSM. Note that a given decomposition of a structured object $x$ does not need to contain all the subparts of the original object. For example, let us consider the phrase $x$ = tall boy. We can then define $R(x) = \{(\textit{tall}, \textit{boy}), (\textit{tall}), (\textit{boy})\}$. This set contains the three possible decompositions of the phrase: $(\textit{tall}, \textit{boy})$ with $x_1 = \textit{tall}$ and $x_2 = \textit{boy}$, $(\textit{tall})$ with $x_1 = \textit{tall}$, and $(\textit{boy})$ with $x_1 = \textit{boy}$.

Recursive formulation of CDSM. A CDSM can be viewed as a function $f$ that acts recursively on a structured object $x$. If $x$ is a non-terminal object,
\[
f(x) = \bigodot_{\bar{x} \in R(x)} \gamma(f(x_1), f(x_2), \ldots, f(x_n)) \tag{5}
\]
where $R(x)$ is the set of relevant decompositions, $\bigodot$ is a repeated operation on this set, and $\gamma$ is a function defined on the $f(x_i)$, where the $x_i$ are the subparts of a decomposition of $x$. If $x$ is a terminal object, $f(x)$ is directly mapped to a tensor. The function $f$ may operate differently on different kinds of structured objects, with tensor degree varying accordingly. The set $R(x)$ and the functions $f$, $\gamma$, and $\bigodot$ depend on the specific CDSM, and the same CDSM might be susceptible to alternative analyses satisfying the form in Equation (5). As an example, under Additive, $x$ is a sequence of words and $f$ is
\[
f(x) = \begin{cases} \sum_{y \in R(x)} f(y) & \text{if } x \notin T_X \\ \vec{x} & \text{if } x \in T_X \end{cases} \tag{6}
\]
where $R((w_1, \ldots, w_n)) = \{(w_1), \ldots, (w_n)\}$. The repeated operation $\bigodot$ corresponds to summing and $\gamma$ is the identity. For Multiplicative we have
\[
f(x) = \begin{cases} \bigodot_{y \in R(x)} f(y) & \text{if } x \notin T_X \\ \vec{x} & \text{if } x \in T_X \end{cases} \tag{7}
\]
where $R(x) = \{(w_1, \ldots, w_n)\}$ (a single trivial decomposition including all subparts). With a single decomposition, the repeated operation reduces to a single term; here $\gamma$ is the element-wise product. (It will become clear subsequently, when we apply the Convolution Conjecture to these models, why we assume different decomposition sets for Additive and Multiplicative.)

Definition 1 (Convolution Conjecture). For every CDSM $f$ along with its $R(x)$ set, there exist functions $K$, $K_i$ and a function $g$ such that:
\[
K(f(x), f(y)) = \sum_{\substack{\bar{x} \in R(x) \\ \bar{y} \in R(y)}} g(K_1(f(x_1), f(y_1)), K_2(f(x_2), f(y_2)), \ldots, K_n(f(x_n), f(y_n))) \tag{8}
\]
The Convolution Conjecture postulates that the similarity $K(f(x), f(y))$ between the tensors $f(x)$ and $f(y)$ is computed by combining operations on the subparts, that is, the $K_i(f(x_i), f(y_i))$, using the function $g$. This is exactly what happens in convolution kernels (Haussler 1999). $K$ is usually the dot product, but this is not necessary: We will show that for the dual-space model of Turney (2012), $K$ turns out to be the fourth root of a Frobenius product.
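To make the recursive formulation concrete, here is a small sketch of Equations (6) and (7) in Python (a toy encoding of our own; the lexicon of random vectors is a hypothetical stand-in for trained distributional vectors).

import numpy as np

rng = np.random.default_rng(0)
dim = 5
lexicon = {w: rng.random(dim) for w in ["tall", "boy", "red", "cat"]}

def f_additive(x):
    # Equation (6): R(x) holds one singleton decomposition per word;
    # the repeated operation is summation and gamma is the identity.
    if isinstance(x, str):                    # terminal object: its vector
        return lexicon[x]
    return sum(f_additive(w) for w in x)

def f_multiplicative(x):
    # Equation (7): a single trivial decomposition including all subparts;
    # gamma is the component-wise product.
    if isinstance(x, str):
        return lexicon[x]
    out = np.ones(dim)
    for w in x:
        out = out * f_multiplicative(w)
    return out

print(f_additive(("tall", "boy")), f_multiplicative(("tall", "boy")))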
4 comparing composed phrases :
We now illustrate how the Convolution Conjecture (CC) applies to the considered CDSMs, exemplifying with adjective–noun and subject–verb–object phrases. Without loss of generality, we use tall boy and red cat for adjective–noun phrases and goats eat grass and cows drink water for subject–verb–object phrases.

Additive Model. $K$ and the $K_i$ are dot products, $g$ is the identity function, and $f$ is as in Equation (6). The structure of the input is a word sequence (i.e., $x = (w_1\ w_2)$) and the relevant decompositions consist of the single words, $R(x) = \{(w_1), (w_2)\}$. Then
\[
K(f(\textit{tall boy}), f(\textit{red cat})) = \langle \vec{tall} + \vec{boy},\ \vec{red} + \vec{cat}\rangle = \langle \vec{tall}, \vec{red}\rangle + \langle \vec{tall}, \vec{cat}\rangle + \langle \vec{boy}, \vec{red}\rangle + \langle \vec{boy}, \vec{cat}\rangle = \sum_{\substack{x \in \{tall,\, boy\} \\ y \in \{red,\, cat\}}} K(f(x), f(y)) \tag{9}
\]
The CC form of Additive shows that the overall dot product can be decomposed into dot products of the vectors of the single words. Composition does not add any further information. These results can be easily extended to longer phrases and to phrases of different length.

Multiplicative Model. $K$ and $g$ are dot products, the $K_i$ are component-wise products, and $f$ is as in Equation (7). The structure of the input is $x = (w_1\ w_2)$, and we use the trivial single decomposition consisting of all subparts (thus the summation reduces to a single term):
\[
K(f(\textit{tall boy}), f(\textit{red cat})) = \langle \vec{tall} \odot \vec{boy},\ \vec{red} \odot \vec{cat}\rangle = \langle \vec{tall} \odot \vec{red} \odot \vec{boy} \odot \vec{cat},\ \vec{1}\rangle = \langle \vec{tall} \odot \vec{red},\ \vec{boy} \odot \vec{cat}\rangle = g(K_1(\vec{tall}, \vec{red}), K_2(\vec{boy}, \vec{cat})) \tag{10}
\]
This is the dot product between an indistinct chain of element-wise products and a vector $\vec{1}$ of all ones, or the product of two separate element-wise products, one on the adjectives, $\vec{tall} \odot \vec{red}$, and one on the nouns, $\vec{boy} \odot \vec{cat}$. In the latter CC form, the final dot product is obtained in two steps: first separately operating on the adjectives and on the nouns, then taking the dot product of the resulting vectors. The comparison operations thus reflect the input syntactic structure. The results can be easily extended to longer phrases and to phrases of different lengths.
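Both CC forms can be verified numerically. A minimal sketch, assuming numpy, with random vectors as stand-ins for distributional representations:

import numpy as np

rng = np.random.default_rng(1)
tall, boy, red, cat = (rng.random(6) for _ in range(4))

# Additive, Equation (9): the phrase dot product is a sum of word dots.
lhs = (tall + boy) @ (red + cat)
rhs = tall @ red + tall @ cat + boy @ red + boy @ cat
assert np.isclose(lhs, rhs)

# Multiplicative, Equation (10): adjectives and nouns can be compared
# separately before the final dot product.
lhs = (tall * boy) @ (red * cat)
rhs = (tall * red) @ (boy * cat)
assert np.isclose(lhs, rhs)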
Full Additive Model. The input consists of a sequence of (label, word) pairs $x = ((L_1\ w_1), \ldots, (L_n\ w_n))$ and the relevant decomposition set includes the single tuples, that is, $R(x) = \{(L_1\ w_1), \ldots, (L_n\ w_n)\}$. The CDSM $f$ is defined as
\[
f(x) = \begin{cases} \sum_{(L\ w) \in R(x)} f(L)\, f(w) & \text{if } x \notin T_X \\ \mathbf{X} & \text{if } x \in T_X \text{ is a label } L \\ \vec{w} & \text{if } x \in T_X \text{ is a word } w \end{cases} \tag{11}
\]
The repeated operation $\bigodot$ here is summation, and $\gamma$ is the matrix-by-vector product. In the CC form, $K$ is the dot product, $g$ the Frobenius product, $K_1(f(x), f(y)) = f(x)^T f(y)$, and $K_2(f(x), f(y)) = f(x) f(y)^T$. We then have for adjective–noun composition (by using the property in Equation (3)):
\[
K(f((A\ tall)\ (N\ boy)), f((A\ red)\ (N\ cat))) = \langle \mathbf{A}\vec{tall} + \mathbf{N}\vec{boy},\ \mathbf{A}\vec{red} + \mathbf{N}\vec{cat}\rangle = \langle \mathbf{A}^T\mathbf{A}, \vec{tall}\,\vec{red}^T\rangle_F + \langle \mathbf{A}^T\mathbf{N}, \vec{tall}\,\vec{cat}^T\rangle_F + \langle \mathbf{N}^T\mathbf{A}, \vec{boy}\,\vec{red}^T\rangle_F + \langle \mathbf{N}^T\mathbf{N}, \vec{boy}\,\vec{cat}^T\rangle_F = \sum_{\substack{(l_x\ w_x) \in \{(A\ tall), (N\ boy)\} \\ (l_y\ w_y) \in \{(A\ red), (N\ cat)\}}} g(K_1(f(l_x), f(l_y)), K_2(f(w_x), f(w_y))) \tag{12}
\]
The CC form shows how Full Additive factorizes into a more structural and a more lexical part: Each element of the sum is the Frobenius product between the product of two matrices representing syntactic labels and the tensor product of two vectors representing the corresponding words. For subject–verb–object phrases $((S\ w_1)\ (V\ w_2)\ (O\ w_3))$ we have
\[
K(f(((S\ goats)\ (V\ eat)\ (O\ grass))), f(((S\ cows)\ (V\ drink)\ (O\ water)))) = \langle \mathbf{S}\vec{goats} + \mathbf{V}\vec{eat} + \mathbf{O}\vec{grass},\ \mathbf{S}\vec{cows} + \mathbf{V}\vec{drink} + \mathbf{O}\vec{water}\rangle = \langle \mathbf{S}^T\mathbf{S}, \vec{goats}\,\vec{cows}^T\rangle_F + \langle \mathbf{S}^T\mathbf{V}, \vec{goats}\,\vec{drink}^T\rangle_F + \langle \mathbf{S}^T\mathbf{O}, \vec{goats}\,\vec{water}^T\rangle_F + \langle \mathbf{V}^T\mathbf{S}, \vec{eat}\,\vec{cows}^T\rangle_F + \langle \mathbf{V}^T\mathbf{V}, \vec{eat}\,\vec{drink}^T\rangle_F + \langle \mathbf{V}^T\mathbf{O}, \vec{eat}\,\vec{water}^T\rangle_F + \langle \mathbf{O}^T\mathbf{S}, \vec{grass}\,\vec{cows}^T\rangle_F + \langle \mathbf{O}^T\mathbf{V}, \vec{grass}\,\vec{drink}^T\rangle_F + \langle \mathbf{O}^T\mathbf{O}, \vec{grass}\,\vec{water}^T\rangle_F = \sum_{\substack{(l_x\ w_x) \in \{(S\ goats), (V\ eat), (O\ grass)\} \\ (l_y\ w_y) \in \{(S\ cows), (V\ drink), (O\ water)\}}} g(K_1(f(l_x), f(l_y)), K_2(f(w_x), f(w_y))) \tag{13}
\]
Again, we observe the factoring into products of syntactic and lexical representations. By looking at Full Additive in the CC form, we observe that when $\mathbf{X}^T\mathbf{Y} \approx \mathbf{I}$ for all matrix pairs, it degenerates to Additive. Interestingly, Full Additive can also approximate a semantic convolution kernel (Mehdad, Moschitti, and Zanzotto 2010), which combines dot products of elements in the same slot. In the adjective–noun case, we obtain this approximation by choosing two nearly orthonormal matrices $\mathbf{A}$ and $\mathbf{N}$ such that $\mathbf{A}\mathbf{A}^T = \mathbf{N}\mathbf{N}^T \approx \mathbf{I}$ and $\mathbf{A}\mathbf{N}^T \approx \mathbf{0}$ and applying Equation (2): $\langle \mathbf{A}\vec{tall} + \mathbf{N}\vec{boy}, \mathbf{A}\vec{red} + \mathbf{N}\vec{cat}\rangle \approx \langle \vec{tall}, \vec{red}\rangle + \langle \vec{boy}, \vec{cat}\rangle$. This approximation is also valid for three-word phrases. When the matrices $\mathbf{S}$, $\mathbf{V}$, and $\mathbf{O}$ are such that $\mathbf{X}\mathbf{X}^T \approx \mathbf{I}$, with $\mathbf{X}$ one of the three matrices, and $\mathbf{Y}\mathbf{X}^T \approx \mathbf{0}$, with $\mathbf{X}$ and $\mathbf{Y}$ two different matrices, Full Additive approximates a semantic convolution kernel comparing two sentences by summing the dot products of the words in the same role, that is,
\[
\langle \mathbf{S}\vec{goats} + \mathbf{V}\vec{eat} + \mathbf{O}\vec{grass},\ \mathbf{S}\vec{cows} + \mathbf{V}\vec{drink} + \mathbf{O}\vec{water}\rangle \approx \langle \vec{goats}, \vec{cows}\rangle + \langle \vec{eat}, \vec{drink}\rangle + \langle \vec{grass}, \vec{water}\rangle \tag{14}
\]
Results can again be easily extended to longer and different-length phrases.
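The factorization in Equation (12) also checks out numerically. A minimal sketch, assuming numpy; A and N are random stand-ins for the trained label matrices:

import numpy as np

rng = np.random.default_rng(2)
d = 5
A, N = rng.random((d, d)), rng.random((d, d))
tall, boy, red, cat = (rng.random(d) for _ in range(4))

def frob(M, x, y):
    # <M, x y^T>_F
    return np.sum(M * np.outer(x, y))

lhs = (A @ tall + N @ boy) @ (A @ red + N @ cat)
rhs = (frob(A.T @ A, tall, red) + frob(A.T @ N, tall, cat)
       + frob(N.T @ A, boy, red) + frob(N.T @ N, boy, cat))
assert np.isclose(lhs, rhs)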
Lexical Function Model. We distinguish composition with one-argument vs. two-argument predicates. We illustrate the first through adjective–noun composition, where the adjective acts as the predicate, and the second with transitive verb constructions. Although we use the relevant syntactic labels, the formulas generalize to any construction with the same argument count. For adjective–noun phrases, the input is a sequence of (label, word) pairs ($x = ((A, w_1), (N, w_2))$) and the relevant decomposition set again includes only the single trivial decomposition into all the subparts: $R(x) = \{((A, w_1), (N, w_2))\}$. The method itself is recursively defined as
\[
f(x) = \begin{cases} f((A, w_1))\, f((N, w_2)) & \text{if } x \notin T_X,\ \text{i.e., } x = ((A, w_1), (N, w_2)) \\ \mathbf{W}_1 & \text{if } x \in T_X = (A, w_1) \\ \vec{w}_2 & \text{if } x \in T_X = (N, w_2) \end{cases} \tag{15}
\]
Here, $K$ and $g$ are, respectively, the dot and the Frobenius product, $K_1(f(x), f(y)) = f(x)^T f(y)$, and $K_2(f(x), f(y)) = f(x) f(y)^T$. Using Equation (3), we then have
\[
K(f(\textit{tall boy}), f(\textit{red cat})) = \langle \mathbf{TALL}\,\vec{boy},\ \mathbf{RED}\,\vec{cat}\rangle = \langle \mathbf{TALL}^T\mathbf{RED},\ \vec{boy}\,\vec{cat}^T\rangle_F = g(K_1(f(tall), f(red)), K_2(f(boy), f(cat))) \tag{16}
\]
The role of predicate and argument words in the final dot product is clearly separated, showing again the structure-sensitive nature of the decomposition of the comparison operations. In the two-place predicate case, again, the input is a set of (label, word) tuples, and the relevant decomposition set only includes the single trivial decomposition into all subparts. The CDSM $f$ is defined as
\[
f(x) = \begin{cases} f((S\ w_1)) \otimes f((V\ w_2)) \otimes f((O\ w_3)) & \text{if } x \notin T_X,\ \text{i.e., } x = ((S\ w_1)\ (V\ w_2)\ (O\ w_3)) \\ \vec{w} & \text{if } x \in T_X = (l\ w) \text{ and } l \text{ is } S \text{ or } O \\ \mathsf{W} & \text{if } x \in T_X = (V\ w) \end{cases} \tag{17}
\]
$K$ is the dot product, $g(x, y, z) = \langle x, y \otimes z\rangle_F$, $K_1(f(x), f(y)) = \sum_j (f(x) \otimes f(y))_j$—that is, the tensor contraction[2] along the second index of the tensor product between $f(x)$ and $f(y)$—and $K_2(f(x), f(y)) = K_3(f(x), f(y)) = f(x) \otimes f(y)$ are tensor products. The dot product of goats EAT grass and cows DRINK water is (by using Equation (4))
\[
K(f(((S\ goats)\ (V\ eat)\ (O\ grass))), f(((S\ cows)\ (V\ drink)\ (O\ water)))) = \langle \vec{goats}\,\mathsf{EAT}\,\vec{grass},\ \vec{cows}\,\mathsf{DRINK}\,\vec{water}\rangle = \Big\langle \sum_j (\mathsf{EAT} \otimes \mathsf{DRINK})_j,\ \vec{goats} \otimes \vec{grass} \otimes \vec{cows} \otimes \vec{water}\Big\rangle_F = g(K_1(f((V\ eat)), f((V\ drink))),\ K_2(f((S\ goats)), f((S\ cows))) \otimes K_3(f((O\ grass)), f((O\ water)))) \tag{18}
\]
We rewrote the equation as a Frobenius product between two fourth-order tensors. The first combines the two third-order tensors of the verbs, $\sum_j (\mathsf{EAT} \otimes \mathsf{DRINK})_j$, and the second combines the vectors representing the arguments of the verbs, that is, $\vec{goats} \otimes \vec{grass} \otimes \vec{cows} \otimes \vec{water}$. In this case as well we can separate the role of predicate and argument types in the comparison computation. The extension of the Lexical Function to structured objects of different lengths is treated by using the identity element for missing parts. As an example, we show here the comparison between tall boy and cat, where the identity element is the identity matrix $\mathbf{I}$:
\[
K(f(\textit{tall boy}), f(\textit{cat})) = \langle \mathbf{TALL}\,\vec{boy},\ \vec{cat}\rangle = \langle \mathbf{TALL}\,\vec{boy},\ \mathbf{I}\,\vec{cat}\rangle = \langle \mathbf{TALL}^T\mathbf{I},\ \vec{boy}\,\vec{cat}^T\rangle_F = g(K_1(f(tall), f(\,)), K_2(f(boy), f(cat))) \tag{19}
\]

[2] Grefenstette et al. (2013) first framed the Lexical Function in terms of tensor contraction.
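Equations (16) and (18) can be verified with a minimal sketch (ours, assuming numpy); TALL and RED are random matrices, EAT and DRINK random third-order tensors standing in for trained predicate representations:

import numpy as np

rng = np.random.default_rng(3)
d = 4
TALL, RED = rng.random((d, d)), rng.random((d, d))
boy, cat = rng.random(d), rng.random(d)

# Equation (16): <TALL boy, RED cat> = <TALL^T RED, boy cat^T>_F
lhs = (TALL @ boy) @ (RED @ cat)
rhs = np.sum((TALL.T @ RED) * np.outer(boy, cat))
assert np.isclose(lhs, rhs)

# Equation (18): the sentence dot product as a Frobenius product between a
# verbs-only tensor (contracted along the second index) and the tensor
# product of the four argument vectors.
EAT, DRINK = rng.random((d, d, d)), rng.random((d, d, d))
goats, grass, cows, water = (rng.random(d) for _ in range(4))
s1 = np.einsum('i,ijk,k->j', goats, EAT, grass)    # goats EAT grass
s2 = np.einsum('n,njm,m->j', cows, DRINK, water)   # cows DRINK water
K1 = np.einsum('ijk,njm->iknm', EAT, DRINK)
K2K3 = np.einsum('i,k,n,m->iknm', goats, grass, cows, water)
assert np.isclose(s1 @ s2, np.sum(K1 * K2K3))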
Dual Space Model. We have until now applied the CC to linear CDSMs with the dot product as the final comparison operator (what we called $K$). The CC also holds for the effective Dual Space model of Turney (2012), which assumes that each word has two distributional representations, $\vec{w}_d$ in “domain” space and $\vec{w}_f$ in “function” space. The similarity of two phrases is directly computed as the geometric average of the separate similarities between the first and second words in both spaces. Even though there is no explicit composition step, it is still possible to put the model in CC form. Take $x = (x_1, x_2)$ and its trivial decomposition. Define, for a word $w$ with vector representations $\vec{w}_d$ and $\vec{w}_f$: $f(w) = \vec{w}_d\,\vec{w}_f^T$. Define also $K_1(f(x_1), f(y_1)) = \sqrt{\langle f(x_1), f(y_1)\rangle_F}$, $K_2(f(x_2), f(y_2)) = \sqrt{\langle f(x_2), f(y_2)\rangle_F}$, and $g(a, b) = \sqrt{ab}$. Then
\[
g(K_1(f(x_1), f(y_1)), K_2(f(x_2), f(y_2))) = \sqrt{\sqrt{\langle \vec{x}_{d1}\vec{x}_{f1}^T, \vec{y}_{d1}\vec{y}_{f1}^T\rangle_F} \cdot \sqrt{\langle \vec{x}_{d2}\vec{x}_{f2}^T, \vec{y}_{d2}\vec{y}_{f2}^T\rangle_F}} = \sqrt[4]{\langle \vec{x}_{d1}, \vec{y}_{d1}\rangle \cdot \langle \vec{x}_{f1}, \vec{y}_{f1}\rangle \cdot \langle \vec{x}_{d2}, \vec{y}_{d2}\rangle \cdot \langle \vec{x}_{f2}, \vec{y}_{f2}\rangle} = \mathrm{geo}(\mathrm{sim}(\vec{x}_{d1}, \vec{y}_{d1}), \mathrm{sim}(\vec{x}_{d2}, \vec{y}_{d2}), \mathrm{sim}(\vec{x}_{f1}, \vec{y}_{f1}), \mathrm{sim}(\vec{x}_{f2}, \vec{y}_{f2})) \tag{20}
\]

A Neural-network-like Model. Consider the phrase $(w_1, w_2, \ldots, w_n)$ and the model defined by $f(x) = \sigma(\vec{w}_1 + \vec{w}_2 + \ldots + \vec{w}_n)$, where $\sigma(\cdot)$ is a component-wise logistic function. Here we have a single trivial decomposition that includes all the subparts, and $\gamma(x_1, \ldots, x_n)$ is defined as $\sigma(x_1 + \ldots + x_n)$. To see that the CC cannot hold for this model, consider two two-word phrases $(a\ b)$ and $(c\ d)$:
\[
K(f((a, b)), f((c, d))) = \langle f((a, b)), f((c, d))\rangle = \sum_i \big[\sigma(\vec{a} + \vec{b})\big]_i \cdot \big[\sigma(\vec{c} + \vec{d})\big]_i = \sum_i \big(1 + e^{-a_i - b_i} + e^{-c_i - d_i} + e^{-a_i - b_i - c_i - d_i}\big)^{-1} \tag{21}
\]
We would need to rewrite this as
\[
g(K_1(\vec{a}, \vec{c}), K_2(\vec{b}, \vec{d})) \tag{22}
\]
But there is no possible choice of $g$, $K_1$, and $K_2$ that allows Equation (21) to be written as Equation (22). This example can be regarded as a simplified version of the neural-network model of Socher et al. (2011). The fact that the CC does not apply to it suggests that it will not apply to other models in this family.
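Both of the last two computations are easy to probe numerically. The following minimal sketch (ours, assuming numpy; random nonnegative vectors keep the square roots real) checks the Dual Space rewriting in Equation (20) and the algebraic expansion in Equation (21). Note that the code verifies only the expansion itself, not the impossibility claim about Equation (22), which is a statement about all possible choices of $g$, $K_1$, $K_2$.

import numpy as np

rng = np.random.default_rng(4)
d = 6
xd1, xf1, xd2, xf2, yd1, yf1, yd2, yf2 = (rng.random(d) for _ in range(8))

# Equation (20): sqrt(K1 * K2) equals the fourth root of the product of the
# four word-level dot products (the geometric average of similarities).
K1 = np.sqrt(np.sum(np.outer(xd1, xf1) * np.outer(yd1, yf1)))
K2 = np.sqrt(np.sum(np.outer(xd2, xf2) * np.outer(yd2, yf2)))
lhs = np.sqrt(K1 * K2)
rhs = ((xd1 @ yd1) * (xf1 @ yf1) * (xd2 @ yd2) * (xf2 @ yf2)) ** 0.25
assert np.isclose(lhs, rhs)

# Equation (21): componentwise, sigma(a+b)*sigma(c+w) expands into the
# closed form with three exponential terms (w plays the role of d above).
a, b, c, w = (rng.standard_normal(d) for _ in range(4))
sig = lambda v: 1.0 / (1.0 + np.exp(-v))
lhs = np.sum(sig(a + b) * sig(c + w))
rhs = np.sum(1.0 / (1.0 + np.exp(-a - b) + np.exp(-c - w) + np.exp(-a - b - c - w)))
assert np.isclose(lhs, rhs)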
5 conclusion :
The Convolution Conjecture offers a general way to rewrite the phrase similarity computations of CDSMs by highlighting the role played by the subparts of a composed representation. This perspective allows for a better understanding of the exact operations that a composition model applies to its input. The Convolution Conjecture also suggests a strong connection between CDSMs and semantic convolution kernels. This link suggests that insights from the CDSM literature could be directly integrated into the development of convolution kernels, with all the benefits offered by this well-understood general machine-learning framework.

acknowledgments :
We thank the reviewers for helpful comments. Marco Baroni acknowledges ERC 2011 Starting Independent Research Grant n. 283554 (COMPOSES).

References
Clark, Stephen. 2015. Vector space models of lexical meaning. In Shalom Lappin and Chris Fox, editors, Handbook of Contemporary Semantics, 2nd ed. Blackwell, Malden, MA. In press.
Coecke, Bob, Mehrnoosh Sadrzadeh, and Stephen Clark. 2010. Mathematical foundations for a compositional distributional model of meaning. Linguistic Analysis, 36:345–384.
Ganesalingam, Mohan and Aurélie Herbelot. 2013. Composing distributions: Mathematical structures and their linguistic interpretation. Working paper, Computer Laboratory, University of Cambridge. Available at www.cl.cam.ac.uk/~ah433/.
Grefenstette, Edward, Georgiana Dinu, Yao-Zhong Zhang, Mehrnoosh Sadrzadeh, and Marco Baroni. 2013. Multi-step regression learning for compositional distributional semantics. In Proceedings of IWCS, pages 131–142, Potsdam.
Guevara, Emiliano. 2010. A regression model of adjective-noun compositionality in distributional semantics. In Proceedings of GEMS, pages 33–37, Uppsala.
Haussler, David. 1999. Convolution kernels on discrete structures. Technical report UCSC-CRL-99-10, University of California at Santa Cruz.
Mehdad, Yashar, Alessandro Moschitti, and Fabio Massimo Zanzotto. 2010. Syntactic/semantic structures for textual entailment recognition. In Proceedings of NAACL, pages 1020–1028, Los Angeles, CA.
Mitchell, Jeff and Mirella Lapata. 2008. Vector-based models of semantic composition. In Proceedings of ACL, pages 236–244, Columbus, OH.
Socher, Richard, Eric Huang, Jeffrey Pennin, Andrew Ng, and Christopher Manning. 2011. Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. In Proceedings of NIPS, pages 801–809, Granada.
Turney, Peter. 2012. Domain and function: A dual-space model of semantic relations and compositions. Journal of Artificial Intelligence Research, 44:533–585.
Turney, Peter and Patrick Pantel. 2010. From frequency to meaning: Vector space models of semantics. Journal of Artificial Intelligence Research, 37:141–188.
Zanzotto, Fabio Massimo, Ioannis Korkontzelos, Francesca Falucchi, and Suresh Manandhar. 2010. Estimating linear models for compositional distributional semantics. In Proceedings of COLING, pages 1263–1271, Beijing.
Bender""}]",SP:99bddb5469a564b2f6ddaf267bc7dc90601cd3a2,"[{""authors"": [""Chomsky"", ""Noam"", ""Morris Halle.""], ""title"": ""The Sound Pattern of English"", ""venue"": ""Harper & Row."", ""year"": 1968}, {""authors"": [""Prince"", ""Alan"", ""Paul Smolensky.""], ""title"": ""Optimality Theory: Constraint Interaction in Generative Grammar"", ""venue"": ""Wiley\u2013Blackwell."", ""year"": 2004}, {""authors"": [""Stabler"", ""Edward P.""], ""title"": ""Derivational minimalism"", ""venue"": ""Christian Retor\u00e9, editor,"", ""year"": 1997}, {""authors"": [""Steedman"", ""Mark""], ""title"": ""Logical Aspects of Computational Linguistics, volume 1328 of LNCS"", ""year"": 2000}, {""authors"": [""Wintner"", ""Shuly""], ""title"": ""Last words: What science underlies natural language engineering"", ""venue"": ""Computational Linguistics,"", ""year"": 2009}]",,,,,,,,,,,,,,,,,,,"reviewed by chris dyer carnegie mellon university :The phenomenal success of machine learning in engineering natural language applications has led to a curious situation: Natural language processing practitioners who were trained in the last 15 to 20 years may have established a quite successful career in this area with only a haphazard knowledge of the science of natural languages. The premise of the new volume by Emily M. Bender is that greater awareness of linguistics will enable continued technical progress, particularly as language applications are required to perform more intelligent processing in more languages. © 2015 Association for Computational Linguistics This book is not beholden to any particular theoretical program. Rather, it is a survey of the morphological and syntactic means by which different languages express meaning, anchored by clear and effective examples from typologically diverse languages. Eschewing theorizing to stay close to data permits a remarkably wide range of linguistic phenomena to be covered, and it is this that is the book’s greatest strength. However, in a few places, a seemingly arbitrary theoretical perspective is assumed rather more tacitly than one might hope, with few hints as to alternative analyses (e.g., see the following remarks about parts of speech in Chapter 6). Furthermore, a bit more theoretical scaffolding could have made the presentation more succinct in places (e.g., Chapter 7’s excellent discussion of heads, arguments, and adjuncts could have been more precise with a basic logical calculus). Finally, although theoretical squabbles can be off-putting to outsiders, theoretical diversity can have practical benefits, particularly in a field as omnivorous as NLP. For example, while theorists might disagree about whether morphophonology is best modeled with systems of rewrite rules (e.g., SPE) or constraint satisfaction (e.g., Optimality Theory) (Chomsky and Halle 1968; Prince and Smolensky 2004), each suggests a distinct computational instantiation with different challenges and opportunities. For such reasons, more discussion of theory would not have been unwelcome. This slight objection aside, the book is an excellent introduction to the diversity of linguistic representations that NLP must eventually contend with. The book is organized into 10 chapters, in roughly two parts (the first part, morphology; the second, syntax), spread over 100 numbered topics. Chapter 1 gives an overview of the scope of the book, distinguishing morphology and syntax from bag-of-words models. 
It lays out the premise that knowledge of linguistic structure can guide engineers in profitable directions by facilitating error analysis and feature engineering. The notion of bounded variation is introduced: the idea that while languages exhibit diversity in how they pair sound and meaning, this variation is subject to limits, and that different languages can have similarities due to areal, genetic, and typological relatedness. A brief survey of the genetic taxonomy of the world’s languages and the number of speakers they have is given—as well as the striking difference in the distribution of the languages studied in the NLP literature.
Chapters 2 and 3 introduce morphology and morphophonology, focusing on the internal structure of words and how they are realized in text and speech. Simple English examples motivate the discussion, but more exotic nonconcatenative processes in Semitic languages and infixation examples from Lakhota emphasize phenomena that may be unfamiliar to those with experience only with Indo-European languages. The conventional tripartite distinction of roots and derivational and inflectional affixes is presented to organize the kinds of meaning/function changes characteristic of morphological processes, although compounding and cliticization—which fit less neatly into this taxonomy—are also discussed. Because syntax and semantics were only briefly mentioned in the Introduction, the extensive forward references to the related material in the later chapters were quite helpful for clarifying terms, making the e-book version particularly convenient.
Chapter 4 discusses morphosyntax, reviewing the diverse grammatical functions that different languages encode with morphology. Phenomena covered include, in the verbal domain, tense, mood, and aspect, negation, and evidentiality; in the nominal domain, person, number, and gender, case, definiteness, and possession; and various common agreement processes.
Having established how words are constructed from morphemes, Chapters 5–9 focus on how syntax is used to combine words to form an unbounded number of sentences whose meaning is determined compositionally. Chapter 5 introduces the distinction between grammaticality of sentences and how syntactic structure determines their meanings, and Chapter 6 introduces parts of speech as clusters of distributional regularities of words and phrases in grammatical sentences. The fact that discussion of grammaticality proceeds almost exclusively in terms of POS—a familiar construct to anyone working in NLP, but one that looks quite different in many theories of syntax (Steedman 2000; Stabler 1997)—is a shortcoming. Chapter 7, perhaps the strongest in the book, discusses syntax in terms of headed phrases that relate to each other either as arguments (which semantically complete the meaning of a predicate) or adjuncts (which introduce additional predicates). Diagnostics for distinguishing heads and dependents as well as arguments and adjuncts are given, together with clear examples of their application, and common mistakes (e.g., using optionality as a test of argumenthood or assuming that only verbs can select arguments) are covered. A particularly useful part of this chapter is a discussion of lexical resources (FrameNet, PropBank) and how they relate to the concepts being discussed.
Chapter 8 discusses argument types and grammatical functions, reviewing not entirely successful attempts to create universal inventories of thematic roles, ultimately demonstrating that syntactic roles are less idiosyncratic (at least within single languages) and capture many generalizations useful for semantic analysis. A discussion of cross-linguistic properties of subjects and the distinction between core and oblique arguments follows. Three important sections discuss the subtle and often confusing distinctions between syntactic and semantic arguments with effective examples. Although most of this chapter focuses on English examples, various morphological strategies for marking grammatical functions are discussed.
Chapter 9 concludes the syntax portion of the book, focusing on the processes that can introduce divergences between syntactic and semantic relationships. Because such divergences underlie many constructions with considerable value in NLP (e.g., wh-questions in English) and directly challenge the simplifying assumption of transparency between syntax and semantics, it is fortunate that this section goes into considerable detail, covering phenomena including passivization, dative shift, expletives, raising, control, and various kinds of long-distance movement, as well as a good discussion of phenomena found in other languages, such as causative morphology and discontinuous constituents. Chapter 10 provides a brief appendix summarizing various large-scale computational resources (morphological analyzers, parsers, typological atlases) that encode linguistic knowledge.
This book serves as a useful introduction to linguistic phenomena that will help NLP researchers orient themselves with respect to phenomena they will encounter as their applications push into new languages and strive for deeper automated understanding of language. The tension between the science of linguistics and natural language engineering and the resulting missed opportunities has been remarked upon in these pages recently (Wintner 2009), and we should applaud this successful effort to find common ground.

References
Chomsky, Noam and Morris Halle. 1968. The Sound Pattern of English. Harper & Row, New York.
Prince, Alan and Paul Smolensky. 2004. Optimality Theory: Constraint Interaction in Generative Grammar. Wiley–Blackwell.
Stabler, Edward P. 1997. Derivational minimalism. In Christian Retoré, editor, Logical Aspects of Computational Linguistics, volume 1328 of LNCS. Springer.
Steedman, Mark. 2000. The Syntactic Process. MIT Press, Cambridge, MA.
Wintner, Shuly. 2009. Last words: What science underlies natural language engineering? Computational Linguistics, 35(4).
Bender""}] SP:99bddb5469a564b2f6ddaf267bc7dc90601cd3a2 [{""authors"": [""Chomsky"", ""Noam"", ""Morris Halle.""], ""title"": ""The Sound Pattern of English"", ""venue"": ""Harper & Row."", ""year"": 1968}, {""authors"": [""Prince"", ""Alan"", ""Paul Smolensky.""], ""title"": ""Optimality Theory: Constraint Interaction in Generative Grammar"", ""venue"": ""Wiley\u2013Blackwell."", ""year"": 2004}, {""authors"": [""Stabler"", ""Edward P.""], ""title"": ""Derivational minimalism"", ""venue"": ""Christian Retor\u00e9, editor,"", ""year"": 1997}, {""authors"": [""Steedman"", ""Mark""], ""title"": ""Logical Aspects of Computational Linguistics, volume 1328 of LNCS"", ""year"": 2000}, {""authors"": [""Wintner"", ""Shuly""], ""title"": ""Last words: What science underlies natural language engineering"", ""venue"": ""Computational Linguistics,"", ""year"": 2009}] reviewed by chris dyer carnegie mellon university :The phenomenal success of machine learning in engineering natural language applications has led to a curious situation: Natural language processing practitioners who were trained in the last 15 to 20 years may have established a quite successful career in this area with only a haphazard knowledge of the science of natural languages. The premise of the new volume by Emily M. Bender is that greater awareness of linguistics will enable continued technical progress, particularly as language applications are required to perform more intelligent processing in more languages. © 2015 Association for Computational Linguistics This book is not beholden to any particular theoretical program. Rather, it is a survey of the morphological and syntactic means by which different languages express meaning, anchored by clear and effective examples from typologically diverse languages. Eschewing theorizing to stay close to data permits a remarkably wide range of linguistic phenomena to be covered, and it is this that is the book’s greatest strength. However, in a few places, a seemingly arbitrary theoretical perspective is assumed rather more tacitly than one might hope, with few hints as to alternative analyses (e.g., see the following remarks about parts of speech in Chapter 6). Furthermore, a bit more theoretical scaffolding could have made the presentation more succinct in places (e.g., Chapter 7’s excellent discussion of heads, arguments, and adjuncts could have been more precise with a basic logical calculus). Finally, although theoretical squabbles can be off-putting to outsiders, theoretical diversity can have practical benefits, particularly in a field as omnivorous as NLP. For example, while theorists might disagree about whether morphophonology is best modeled with systems of rewrite rules (e.g., SPE) or constraint satisfaction (e.g., Optimality Theory) (Chomsky and Halle 1968; Prince and Smolensky 2004), each suggests a distinct computational instantiation with different challenges and opportunities. For such reasons, more discussion of theory would not have been unwelcome. This slight objection aside, the book is an excellent introduction to the diversity of linguistic representations that NLP must eventually contend with. The book is organized into 10 chapters, in roughly two parts (the first part, morphology; the second, syntax), spread over 100 numbered topics. Chapter 1 gives an overview of the scope of the book, distinguishing morphology and syntax from bag-of-words models. 
1 introduction :
Since the late 1970s, several grammar formalisms have been proposed that extend the power of context-free grammars in restricted ways. The two most prominent members of this class of “mildly context-sensitive” formalisms (a term coined by Joshi 1985) are Tree-Adjoining Grammar (TAG; Joshi and Schabes 1997) and Combinatory Categorial Grammar (CCG; Steedman 2000; Steedman and Baldridge 2011). Both formalisms have been applied to a broad range of linguistic phenomena, and are being widely used in computational linguistics and natural language processing. In a seminal paper, Vijay-Shanker and Weir (1994) showed that TAG, CCG, and two other mildly context-sensitive formalisms—Head Grammar (Pollard 1984) and Linear Indexed Grammar (Gazdar 1987)—all characterize the same class of string languages. However, when citing this result it is sometimes overlooked that it applies to a version of CCG that is quite different from the versions that are in practical use today.
[∗] Department of Computer and Information Science, Linköping University, 581 83 Linköping, Sweden. E-mail: marco.kuhlmann@liu.se. [∗∗] Department of Linguistics, University of Potsdam, Karl-Liebknecht-Str. 24–25, 14476 Potsdam, Germany. E-mail: koller@ling.uni-potsdam.de. [†] Department of Information Engineering, University of Padua, via Gradenigo 6/A, 35131 Padova, Italy. E-mail: satta@dei.unipd.it.

The goal of this article is to contribute to a better understanding of the significance of this difference. The difference between “classical” CCG as formalized by Vijay-Shanker and Weir (1994) and the modern perspective may be illustrated with the combinatory rule of backward crossed composition. The general form of this rule (in its degree-1 version) is:

Y/Z  X\Y  ⇒  X/Z   (backward crossed composition, <B×)

For comparison, backward application is:

Y  X\Y  ⇒  X   (<)

Formally, a rule is a syntactic object in which the letters X, Y, Z act as variables for categories. A rule instance is obtained by substituting concrete categories for all variables in the rule. We denote rule instances by using a triple arrow instead of the double arrow in our notation for rules. For example, the derivation in Figure 3 contains the following instances of function application:

(S\NP)/NP  NP  ⇛  S\NP    and    NP  S\NP  ⇛  S

Application rules give rise to derivations equivalent to those of context-free grammar. Indeed, versions of categorial grammar where application is the only mode of combination, such as AB-grammar (Ajdukiewicz 1935; Bar-Hillel, Gaifman, and Shamir 1960), can only generate context-free languages. CCG can be more powerful because it also includes other rules, derived from the combinators of combinatory logic (Curry, Feys, and Craig 1958). In this article, as in most of the formal work on CCG, we restrict our attention to the rules of (generalized) composition, which are based on the B combinator.[1] The general form of composition rules is shown in Figure 4. In each rule, the two input categories are distinguished into one primary (shaded) and one secondary input category. The number n of outermost arguments of the secondary input category is called the degree of the rule.[2] In particular, for n = 0 we obtain the rules of function application. In contexts where we refer to both application and composition, we use the latter term for composition rules with degree n > 0.

[1] This means that we ignore other rules required for linguistic analysis, in particular type-raising (from the T combinator), substitution (from the S combinator), and coordination.
[2] The literature on CCG assumes a bound on n; for English, Steedman (2000, p. 42) puts n ≤ 3. Adding rules of unbounded degree increases the generative capacity of the formalism (Weir and Joshi 1988).

Derivation Trees. Derivation trees can now be schematically defined as in Figure 5. They contain two types of branchings: unary branchings correspond to lexicon entries; binary branchings correspond to rule instances. The yield of a derivation tree is the left-to-right concatenation of its leaves.
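To make the rule machinery concrete, here is a minimal sketch (our own toy encoding, not the article's formalism in full) in Python. A category is an atomic string or a triple (result, slash, argument); '/' looks forward and '\' backward.

def fapply(x, y):
    # Forward application (degree 0): X/Y  Y  =>  X.
    return x[0] if isinstance(x, tuple) and x[1] == '/' and x[2] == y else None

def bapply(y, x):
    # Backward application (degree 0): Y  X\Y  =>  X.
    return x[0] if isinstance(x, tuple) and x[1] == '\\' and x[2] == y else None

def bxcompose(y_z, x_y):
    # Backward crossed composition (degree 1): Y/Z  X\Y  =>  X/Z.
    if (isinstance(y_z, tuple) and y_z[1] == '/' and
            isinstance(x_y, tuple) and x_y[1] == '\\' and x_y[2] == y_z[0]):
        return (x_y[0], '/', y_z[2])
    return None

# The two application instances from the derivation above: a transitive
# verb (S\NP)/NP consumes its object, then its subject.
tv = (('S', '\\', 'NP'), '/', 'NP')
vp = fapply(tv, 'NP')     # (S\NP)/NP  NP  =>  S\NP
s = bapply('NP', vp)      # NP  S\NP  =>  S
assert s == 'S'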
We now define the classical CCG formalism that was studied by Vijay-Shanker and Weir (1994) and originally introduced by Weir and Joshi (1988). As mentioned in Section 1, the central feature of this formalism is its ability to impose restrictions on the applicability of combinatory rules. Specifically, a restricted rule is a rule annotated with constraints that (a) restrict the target of the primary input category; and/or (b) restrict the secondary input category, either in parts or in its entirety. Every grammar lists a finite number of restricted rules (where one and the same base rule may occur with several different restrictions). A valid rule instance is an instance that is compatible with at least one of the restricted rules.

Example 1. Linguistic grammars make frequent use of rule restrictions. To exclude the undesired derivation in Figure 1, we restricted backward crossed composition to instances where both the primary and the secondary input category are functions into the category of sentences, S. Writing target for the function that returns the target of a category, the restricted rule can be written as:

Y/Z  X\Y  ⇒  X/Z   (backward crossed composition, where target(X) = target(Y) = S)

Example 6. We illustrate prefix-closedness using some examples:
1. Every AB-grammar (when seen as a VW-CCG) is trivially prefix-closed; in these grammars, n = 0.
2. The “pure” grammars that we considered in our earlier work (Kuhlmann, Koller, and Satta 2010) are trivially prefix-closed.
3. The grammar G1 from Example 2 is prefix-closed.
4. The grammars constructed in the proof of Lemma 3 are not prefix-closed; they do not allow the following instances of application, where the secondary input category is of the form B_c (rather than B_a):

A_a/B_c  B_c  ⇛  A_a   where A, B ∈ V
B_c  A_a\B_c  ⇛  A_a   where A, B ∈ V

Example 7. The linguistic intuition underlying prefix-closed grammars is that if such a grammar allows us to delay the combination of a functor and its argument (via composition), then it also allows us to combine the functor and its argument immediately (via application). To illustrate this intuition, consider Figure 11, which shows two derivations related to the discussion of word order in Swiss German subordinate clauses (Shieber 1985):

. . . mer em Hans es huus hälfed aastriche
. . . we Hans.DAT the house.ACC helped paint
“. . . we helped Hans paint the house”

Derivation (5) (simplified from Steedman and Baldridge 2011, p. 201) starts by composing the tensed verb hälfed into the infinitive aastriche and then applies the resulting category to the accusative argument of the infinitive, es huus. Prefix-closedness implies that, if the combination of hälfed and aastriche is allowed when the latter is still waiting for es huus, then it must also be allowed if es huus has already been found. Thus prefix-closedness predicts derivation (6), and along with it the alternative word order

. . . mer em Hans hälfed es huus aastriche
. . . we Hans.DAT helped the house.ACC paint

This word order is in fact grammatical (Shieber 1985, pp. 338–339).
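The intuition behind the two derivations can be illustrated with a schematic, self-contained sketch. The categories below are simplified stand-ins of our own (the published analysis uses crossed composition and case-marked arguments); the point is only that delaying a combination via composition (derivation (5)) and performing it immediately via application (derivation (6)) yield the same final category.

def fapply(x, y):
    # Forward application: X/Y  Y  =>  X.
    return x[0] if isinstance(x, tuple) and x[1] == '/' and x[2] == y else None

def fcompose(x, y):
    # Forward composition of degree 1: X/Y  Y/Z  =>  X/Z.
    if (isinstance(x, tuple) and x[1] == '/' and
            isinstance(y, tuple) and y[1] == '/' and y[0] == x[2]):
        return (x[0], '/', y[2])
    return None

haelfed = ('VP', '/', 'INF')        # helped, still expecting an infinitive
aastriche = ('INF', '/', 'NPacc')   # paint, still expecting its object
es_huus = 'NPacc'                   # the house (accusative)

# Derivation (5): compose first, then apply to the delayed argument.
d5 = fapply(fcompose(haelfed, aastriche), es_huus)
# Derivation (6): the argument is found immediately, then the functor applies.
d6 = fapply(haelfed, fapply(aastriche, es_huus))
assert d5 == d6 == 'VP'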
Definition 3 A VW-CCG is without target restrictions if it satisfies the following implication: if X/Y  Yβ ⇒ Xβ is a valid rule instance, then so is X̄/Y  Yβ ⇒ X̄β for any category X̄ of the grammar; and similarly for backward rules.

Example 8 1. Every AB-grammar is without target restrictions; it allows forward and backward application for every primary input category. 2. The grammar G1 from Example 2 is not without target restrictions, because its rules are restricted to primary input categories with target S.

Target restrictions on the primary input category are useful in CCGs for natural languages; recall our discussion of backward-crossed composition in Section 1. As we shall see, target restrictions are also relevant from a formal point of view: If we require VW-CCGs to be without target restrictions, then we lose some of their weak generative capacity. This is the main technical result of this article. For its proof we need the following standard concept from formal language theory:

Definition 4 Two languages L and L′ are Parikh-equivalent if for every string w ∈ L there exists a permuted version w′ of w such that w′ ∈ L′, and vice versa.

Theorem 3 The languages generated by prefix-closed VW-CCG without target restrictions are properly included in the TAG languages.

We shall now prove the central lemma that we used in the proof of Theorem 3.

Lemma 6 (Main Lemma for VW-CCG) For every language L that is generated by some prefix-closed VW-CCG without target restrictions, there is a sublanguage L′ ⊆ L such that 1. L′ and L are Parikh-equivalent, and 2. L′ is context-free.

Throughout this section, we let G be some arbitrary prefix-closed VW-CCG without target restrictions. The basic idea is to transform the derivations of G into a certain special form, and to prove that the transformed derivations yield a context-free language. The transformation is formalized by the rewriting system in Figure 12.[4] To see how the rules of this system work, consider rule R1; the other rules are symmetric. Rule R1 rewrites an entire derivation into another derivation. It states that, whenever we have a situation where a category of the form X/Y is combined with a category of the form Yβ/Z by means of composition, and the resulting category is combined with a category Z by means of application, then we may just as well first combine Yβ/Z with Z, and then use the resulting category as a secondary input category together with X/Y.

[4] Recall that we use the Greek letter β to denote a (possibly empty) sequence of arguments.

Note that R1 and R2 produce a new derivation for the original sentence, whereas R3 and R4 produce a derivation that yields a permutation of that sentence: The order of the substrings corresponding to the categories Z and X/Y (in the case of rule R3) or X\Y (in the case of rule R4) is reversed. In particular, R3 captures the relation between the two derivations of Swiss German word orders shown in Figure 11: Applying R3 to derivation (5) gives derivation (6). Importantly though, while the transformation may reorder the yield of a derivation, every transformed derivation is still a derivation of G.

Example 9 If we take the derivation in Figure 6 and exhaustively apply the rewriting rules from Figure 12, then the derivation that we obtain is the one in Figure 7. Note that although the latter derivation is not grammatical with respect to the grammar G1 from Example 2, it is grammatical with respect to the grammar G2 from the proof of Lemma 4, which is without target restrictions.
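The effect of R1 can also be spelled out programmatically. The sketch below follows the textual description just given (Figure 12 itself is not reproduced here); derivation trees are dictionaries, with a leaf carrying only its category and an inner node also its two subtrees, and categories use the tuple encoding from the earlier sketch:

    def leaf(cat):
        return {'cat': cat}

    def node(cat, left, right):
        return {'cat': cat, 'kids': [left, right]}

    def rewrite_r1(d):
        """Rule R1 at the root of derivation d: if X/Y is composed with
        Yb/Z and the result Xb/Z is then applied to Z, rewrite so that
        Yb/Z is first applied to Z and X/Y is then combined with the
        resulting Yb.  Returns the rewritten tree, or None if R1 does not
        match.  (Validity of the new instances in the grammar is not
        checked here; that is what Lemma 8 establishes.)"""
        if 'kids' not in d:
            return None
        comp, z = d['kids']                  # root: application Xb/Z  Z => Xb
        if 'kids' not in comp:
            return None
        xy, ybz = comp['kids']               # below: composition X/Y  Yb/Z => Xb/Z
        c, s = comp['cat'], ybz['cat']
        ok = (isinstance(c, tuple) and c[1] == '/' and c[2] == z['cat'] and
              isinstance(s, tuple) and s[1] == '/' and s[2] == z['cat'])
        if not ok:
            return None
        yb = s[0]                            # strip the outermost /Z from Yb/Z
        return node(d['cat'], xy, node(yb, ybz, z))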
It is instructive to compare the rewriting rules in Figure 12 to the rules that establish the normal form of Eisner (1996). This normal form is used in practical CCG parsers to solve the problem of “spurious ambiguity,” where one and the same semantic interpretation (which in CCG takes the form of a lambda term) has multiple syntactic derivation trees. It is established by rewriting rules such as the following:

X/Y  Yβ/Z ⇒ Xβ/Z (†),  then  Xβ/Z  Zγ ⇒ Xβγ
⟶  Yβ/Z  Zγ ⇒ Yβγ,  then  X/Y  Yβγ ⇒ Xβγ (††)   (7)

The rules in Figure 12 have much in common with the Eisner rules; yet there are two important differences. First, as already mentioned, our rules (in particular, rules R3 and R4) may reorder the yield of a derivation, whereas Eisner’s normal form preserves yields. Second, our rules decrease the degrees of the involved composition operations, whereas Eisner’s rules may in fact increase them. To see this, note that the left-hand side of derivation (7) involves a composition of degree |β| + 1 (†), whereas the right-hand side involves a composition of degree |β| + |γ| (††). This means that rewriting will increase the degree in situations where |γ| > 1. In contrast, our rules only fire in the case where the combination with Z happens by means of an application, that is, if |γ| = 0. Under this condition, each rewrite step is guaranteed to decrease the degree of the composition. We will use this observation in the proof of Lemma 7.

3.4.1 Properties of the Transformation. The next two lemmas show that the rewriting system in Figure 12 implements a total function on the derivations of G.

Lemma 7 The rewriting system is terminating and confluent: Rewriting a derivation ends after a finite number of steps, and different rewriting orders all result in the same output.

Lemma 9 The yields of the transformed derivations are a subset of, and Parikh-equivalent to, L(G).

Theorem 3 pinpoints the exact mechanism that VW-CCG uses to achieve weak equivalence to TAG: At least for the class of prefix-closed grammars, TAG equivalence is achieved if and only if we allow target restrictions. Although target restrictions are frequently used in linguistically motivated grammars, it is important and perhaps surprising to realize that they are indeed necessary to achieve the full generative capacity of VW-CCG. In the grammar formalisms folklore, the generative capacity of CCG is often attributed to generalized composition, and indeed we have seen (in Lemma 4) that even grammars without target restrictions can generate non-context-free languages such as L(G2). However, our results show that composition by itself is not enough to achieve weak equivalence with TAG: The yields of the transformed derivations from Section 3.4 form a context-free language despite the fact that these derivations may still contain compositions, including compositions of degree n > 2. In addition to composition, VW-CCG also needs target restrictions to exert enough control on word order to block unwanted permutations. One way to think about this is that target restrictions can enforce alternations of composition and application (as in the derivation shown in Figure 6), while transformed derivations are characterized by projection paths without such alternations (Lemma 10). We can sharpen the picture even more by observing that the target restrictions that are crucial for the generative capacity of VW-CCG are not those on generalized composition, but those on function application.
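The degree bookkeeping behind the comparison with Eisner's rules can be tabulated mechanically; a small sketch, with |β| and |γ| written as len_beta and len_gamma as in the text:

    # Degree of the composition before the rewrite (one dagger) versus
    # after it (two daggers); our rules R1-R4 are the case len_gamma == 0.
    def degrees(len_beta, len_gamma):
        return len_beta + 1, len_beta + len_gamma

    for b in range(3):
        for g in range(3):
            before, after = degrees(b, g)
            note = 'increases' if after > before else 'decreases or stays equal'
            print(f'|beta|={b}, |gamma|={g}: degree {before} -> {after} ({note})')

    # For gamma empty, after == before - 1: every step strictly decreases
    # the degree, which is the measure used in the proof of Lemma 7.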
To see that the crucial restrictions are indeed those on application, note that the proof of Lemma 8 goes through already when just the application rules, such as instances (9) and (10), are without target restrictions. This means that we have the following qualification of Theorem 1.

Lemma 13 Prefix-closed VW-CCG is weakly equivalent to TAG only because it supports target restrictions on forward and backward application.

This finding is unexpected indeed; for instance, no grammar in Steedman (2000) uses target restrictions on the application rules.

4 generative capacity of multimodal ccg :After clarifying the mechanisms that “classical” CCG uses to achieve weak equivalence with TAG, we now turn our attention to “modern,” multimodal versions of CCG (Baldridge and Kruijff 2003; Steedman and Baldridge 2011). These versions emphasize the use of fully lexicalized grammars in which no rule restrictions are allowed, and instead equip slashes with types in order to control the use of the combinatory rules. Our central question is whether the use of slash types is sufficient to recover the expressiveness that we lose by giving up rule restrictions. We need to fix a specific variant of multimodal CCG to study this question formally. Published works on multimodal CCG differ with respect to the specific inventories of slash types they assume. Some important details, such as a precise definition of generalized composition with slash types, are typically not discussed at all. In this article we define a variant of multimodal CCG which we call O-CCG. This formalism extends our definition of VW-CCG (Definition 1) with the slash inventory and the composition rules of the popular OpenCCG grammar development system (White 2013). Our technical result is that the Main Lemma (Lemma 6) also holds for O-CCG. With this we can conclude that the answer to our question is negative: Slash types are not sufficient to replace rule restrictions; O-CCG is strictly less powerful than TAG. Although this is primarily a theoretical result, at the end of this section we also discuss its implications for practical grammar development.

We define O-CCG as a formalism that extends VW-CCG with the slash types of OpenCCG, but abandons rule restrictions. Note that OpenCCG has a number of additional features that affect the generative capacity; we discuss these in Section 4.4.

Slash Types. Like other incarnations of multimodal CCG, O-CCG uses an enriched notion of categories where every slash has a type. There are eight such types: the four undirected core types ∗, ⋄, ×, and ·, together with left and right variants of ⋄ and × (written ⋄l, ×l and ⋄r, ×r).[5] The basic idea behind these types is as follows. Slashes with type ∗ can only be used to instantiate application rules. Type ⋄ also licenses harmonic composition rules, and type × also licenses crossed composition rules. Type · is the least restrictive type and can be used to instantiate all rules. The remaining types refine the system by incorporating a dimension of directionality. The exact type–rule compatibilities are specified in Figure 14.

[5] The type system of OpenCCG is an extension of the system used by Baldridge (2002).

Inertness. O-CCG is distinguished from other versions of multimodal CCG, such as that of Baldridge and Kruijff (2003), in that every slash not only has a type but also an inertness status. Inertness was introduced by Baldridge (2002, Section 8.2.2) as an implementation of the “antecedent government” (ANT) feature of Steedman (1996), which is used to control the word order in certain English relative clauses. It is a two-valued feature.
Arguments whose slash has inertness status + are called active; arguments whose slash has inertness status − are called inert. Only active arguments can be eliminated by means of combinatory rules; however, an inert argument can still be consumed as part of a secondary input category. For example, the following instance of application is valid because the outermost slash of the primary input category has inertness status +:

X/+(Y/−Z)  Y/−Z ⇒ X

We use the notations /s,t and \s,t to denote the forward and backward slashes with slash type t and inertness status s; for concrete annotations we write them compactly, as in /+× and \−×.

Rules. All O-CCG grammars share a fixed set of combinatory rules, shown in Figure 15. Every grammar uses all rules, up to some grammar-specific bound on the degree of generalized composition. As mentioned earlier, a combinatory rule can only be instantiated if the slashes of the input categories have compatible types. Additionally, all composition rules require the slashes of the secondary input category to have a uniform direction. This is a somewhat peculiar feature of OpenCCG, and is in contrast to VW-CCG and other versions of CCG, which also allow composition rules with mixed directions. Composition rules are classified into harmonic and crossed forms. This distinction is based on the direction of the slashes in the secondary input category. If these have the same direction as the outermost slash of the primary input category, then the rule is called harmonic; otherwise it is called crossed.[6]

[6] In versions of CCG that allow rules with mixed slash directions, the distinction between harmonic and crossed is made based on the direction of the innermost slash of the secondary input category, |i.

When a rule is applied, in most cases the arguments of the secondary input category are simply copied into the output category, as in VW-CCG. The one exception happens for crossed composition rules if not all slash directions match the direction of their slash type (left or right). In this case, the arguments of the secondary input category become inert. Thus the inertness status of an argument may change over the course of a derivation, but only from active to inert, not back again.

Definition 5 A multimodal combinatory categorial grammar in the sense of OpenCCG, or O-CCG for short, is a structure G = (Σ, A, :=, d, S) where Σ is a finite vocabulary, A is a finite set of atomic categories, := is a finite relation between Σ and the set of (multimodal) categories over A, d ≥ 0 is the maximal degree of generalized composition, and S ∈ A is a distinguished atomic category.

We generalize the notions of rule instances, derivation trees, and generated language to categories over slashes with types and inertness statuses in the obvious way: Instead of two slashes, we now have one slash for every combination of a direction, type, and inertness status. Similarly, we generalize the concepts of a grammar being prefix-closed (Definition 2) and without target restrictions (Definition 3) to O-CCG. We now investigate the generative capacity of O-CCG. We start with the (unsurprising) observation that O-CCG can describe non-context-free languages.

Lemma 14 The languages generated by O-CCG properly include the context-free languages.

The proof of Lemma 15 adapts the rewriting system from Figure 12. We simply let each rewriting step copy the type and inertness status of each slash from the left-hand side to the right-hand side of the rewriting rule.
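A sketch of this adaptation, under an assumed record encoding of annotated slashes (the type names below are illustrative placeholders, not OpenCCG identifiers):

    from dataclasses import dataclass, replace

    @dataclass(frozen=True)
    class Slash:
        direction: str   # '/' or '\\'
        typ: str         # slash type, e.g. 'star', 'diamond', 'x', 'dot'
        inert: bool      # inertness status: False = active (+), True = inert (-)

    def copy_annotations(lhs_slashes):
        """The adapted rewriting step: every slash on the right-hand side
        receives exactly the type and inertness status of the corresponding
        slash on the left-hand side; nothing is retyped and no status is
        recomputed via the sigma function."""
        return [replace(s) for s in lhs_slashes]

    lhs = [Slash('\\', 'x', False), Slash('\\', 'x', True)]
    assert copy_annotations(lhs) == lhs   # annotations survive unchanged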
With this change, it is easy to verify that the proofs of Lemma 7 (termination and confluence), Lemma 10 (projection paths in transformed derivations are split), Lemma 11 (transformed derivations contain a finite number of categories), and Lemma 12 (transformed derivations yield a context-free language) go through without problems. The proof of Lemma 8, however, is not straightforward, because of the dynamic nature of the inertness statuses. We therefore restate the lemma for O-CCG:

Lemma 16 The rewriting system transforms O-CCG derivations into O-CCG derivations.

In this section we have shown that the languages generated by O-CCG are properly included in the languages generated by TAG, and equivalently, in the languages generated by VW-CCG. This means that the multimodal machinery of OpenCCG is not powerful enough to express the rule restrictions of VW-CCG in a fully lexicalized way. The result is easy to obtain for O-CCG without inertness, which is prefix-closed and without target restrictions; but it is remarkably robust in that it also applies to O-CCG with inertness, which is not prefix-closed. As we have already mentioned, the result carries over also to other multimodal versions of CCG, such as the formalism of Baldridge and Kruijff (2003).

Our result has implications for practical grammar development with OpenCCG. To illustrate this, recall Example 7, which showed that every VW-CCG without target restrictions for Swiss German that allows cross-serial word orders as in derivation (5) also permits alternative word orders, as in derivation (6). By Lemma 15, this remains true for O-CCG and weaker multimodal formalisms. This is not a problem in the case of Swiss German, where the alternative word orders are grammatical. However, there is at least one language, Dutch, where dependencies in subordinate clauses must cross. For this case, our result shows that the modalized composition rules of OpenCCG are not powerful enough to write adequate grammars. Consider the following classical example:

. . . ik Cecilia de paarden zag voeren
. . . I Cecilia the horses saw feed
“. . . I saw Cecilia feed the horses”

The straightforward derivation of the cross-serial dependencies in this sentence (adapted from Steedman 2000, p. 141) is exemplified in Figure 16. It takes the same form as derivation (5) for Swiss German: The verbs and their NP arguments lie on a single, right-branching path projected from the tensed verb zag. This projection path is not split; specifically, it starts with a composition that produces a category which acts as the primary input category of an application. As a consequence, the derivation can be transformed (by our rewriting rule R3) in exactly the same way as derivation (5) could be transformed into derivation (6). The crucial difference is that the yield of the transformed derivation, *ik Cecilia zag de paarden voeren, is not a grammatical clause of Dutch.

To address the problem of ungrammatical word orders in Dutch subordinate clauses, the VW-CCG grammar of Steedman (2000) and the multimodal CCG grammar of Baldridge (2002, Section 5.3.1) resort to combinatory rules other than composition. In particular, they assume that all complement noun phrases undergo obligatory type-raising, and become primary input categories of application rules. This gives rise to derivations such as the one shown in Figure 17, which cannot be transformed using our rewriting rules because the result of the forward crossed composition >1 is now a secondary rather than a primary input category.
As a consequence, this grammar is capable of enforcing the obligatory cross-serial dependencies of Dutch. However, it is important to note that it requires type-raising over arbitrary categories with target S (observe the increasingly complex type-raised categories for the NPs). This kind of type-raising is allowed in many variants of CCG, including the full formalism underlying OpenCCG. VW-CCG and O-CCG, however, are limited to generalized composition, and can only support derivations like the one in Figure 17 if all the type-raised categories for the noun phrases are available in the lexicon. The unbounded type-raising required by the Steedman–Baldridge analysis of Dutch would translate into an infinite lexicon, and so this analysis is not possible in VW-CCG and O-CCG.

We conclude by discussing the impact of several other constructs of OpenCCG that we have not captured in O-CCG. First, OpenCCG allows us to use generalized composition rules of arbitrary degree; there is no upper bound d on the composition degree as in an O-CCG grammar. It is known that this extends the generative capacity of CCG beyond that of TAG (Weir 1988). Second, OpenCCG allows categories to be annotated with feature structures. This has no impact on the generative capacity, as the features must take values from finite domains and can therefore be compiled into the atomic categories of the grammar. Finally, OpenCCG includes the combinatory rules of substitution and coordination, as well as multiset slashes, another extension frequently used in linguistic grammars. We have deliberately left these constructs out of O-CCG to establish the most direct comparison to the literature on VW-CCG. It is conceivable that their inclusion could restore the weak equivalence to TAG, but a proof of this result would require a non-trivial extension of the work of Vijay-Shanker and Weir (1994). Regarding multiset slashes, it is also worth noting that these were introduced with the expressed goal of allowing more flexible word order, whereas a restoration of weak equivalence would require more controlled word order.

5 conclusion :In this article we have contributed two technical results to the literature on CCG. First, we have refined the weak equivalence result for CCG and TAG (Vijay-Shanker and Weir 1994) by showing that prefix-closed grammars are weakly equivalent to TAG only if target restrictions are allowed. Second, we have shown that O-CCG, the formal, composition-only core of OpenCCG, is not weakly equivalent to TAG. These results point to a tension in CCG between lexicalization and generative capacity: Lexicalized versions of the framework are less powerful than classical versions, which allow rule restrictions. What conclusions one draws from these technical results depends on the perspective. One way to look at CCG is as a system for defining formal languages. Under this view, one is primarily interested in results on generative capacity and parsing complexity such as those obtained by Vijay-Shanker and Weir (1993, 1994). Here, our results clarify the precise mechanisms that make CCG weakly equivalent to TAG. Perhaps surprisingly, it is not the availability of generalized composition rules by itself that explains the generative power of CCG, but the ability to constrain the interaction between generalized composition and function application by means of target restrictions.
On the other hand, one may be interested in CCG primarily as a formalism for developing grammars for natural languages (Steedman 2000; Baldridge 2002; Steedman 2012). From this point of view, the suitability of CCG for the development of lexicalized grammars has been amply demonstrated. However, our technical results still serve as important reminders that extra care must be taken to avoid overgeneration when designing a grammar. In particular, it is worth double-checking that an OpenCCG grammar does not generate word orders that the grammar developer did not intend. Here the rewriting system that we presented in Figure 12 can serve as a useful tool: A grammar developer can take any derivation for a grammatical sentence, transform the derivation according to our rewriting rules, and check whether the transformed derivation still yields a grammatical sentence. It remains an open question how the conflicting desires for generative capacity and lexicalization might be reconciled. A simple answer is to add some lexicalized method for enforcing target restrictions to CCG, specifically on the application rules. However, we are not aware that this idea has seen widespread use in the CCG literature, so it may not be called for empirically. Alternatively, one might modify the rules of O-CCG in such a way that they are no longer prefix-closed, for example by introducing some new slash type. Finally, it is possible that the constructs of OpenCCG that we set aside in O-CCG (such as type-raising, substitution, and multiset slashes) might be sufficient to achieve the generative capacity of classical CCG and TAG. A detailed study of the expressive power of these constructs would make an interesting avenue for future research.
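Such a check is easy to script around a parser. In the sketch below, all four arguments are hypothetical callbacks rather than OpenCCG functionality: derivations enumerates analyses of a known-good sentence, normalize applies the rewriting rules R1–R4 exhaustively, yield_of reads off the leaves, and accept encodes the developer's grammaticality judgment.

    def flag_overgeneration(derivations, normalize, yield_of, accept):
        """Normalize each derivation of a known-good sentence and collect
        transformed yields that the developer rejects.  By Lemma 8 and
        Lemma 16, the transformed derivation is licensed by the very same
        grammar, so every rejected yield is an overgenerated word order."""
        flagged = []
        for d in derivations:
            sentence = ' '.join(yield_of(normalize(d)))
            if not accept(sentence):
                flagged.append(sentence)
        return flagged

For the Dutch clause of Section 4, for example, normalize would turn the cross-serial derivation of Figure 16 into one yielding *ik Cecilia zag de paarden voeren, which accept should reject.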
Joshi.""], ""title"": ""Combinatory categorial grammars: Generative power and relationship to linear context-free rewriting systems"", ""venue"": ""Proceedings of the 26th Annual Meeting"", ""year"": 1988}, {""authors"": [""White"", ""Michael.""], ""title"": ""OpenCCG: The OpenNLP CCG Library"", ""venue"": ""http://openccg.sourceforge.net/ Accessed November 13, 2013. 219"", ""year"": 2013}]","acknowledgments :We are grateful to Mark Steedman and Jason Baldridge for enlightening discussions of the material presented in this article, and to the four anonymous reviewers of the article for their detailed and constructive comments. References Ajdukiewicz, Kazimierz. 1935. Die syntaktische Konnexität. Studia Philosophica, 1:1–27. Baldridge, Jason. 2002. Lexically Specified Derivational Control in Combinatory Categorial Grammar. Ph.D. thesis, University of Edinburgh, Edinburgh, UK. Baldridge, Jason and Geert-Jan M. Kruijff. 2003. Multi-modal combinatory categorial grammar. In Tenth Conference of the European Chapter of the Association for Computational Linguistics (EACL), pages 211–218, Budapest. Bar-Hillel, Yehoshua, Haim Gaifman, and Eli Shamir. 1960. On categorial and phrase structure grammars. Bulletin of the Research Council of Israel, 9F(1):1–16. Reprinted in Yehoshua Bar-Hillel. Language and Information: Selected Essays on Their Theory and Application, pages 99–115. Addison-Wesley, 1964. Curry, Haskell B., Robert Feys, and William Craig. 1958. Combinatory Logic. Volume 1. Studies in Logic and the Foundations of Mathematics. North-Holland. Eisner, Jason. 1996. Efficient normal-form parsing for Combinatory Categorial Grammar. In Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics (ACL), pages 79–86, Santa Cruz, CA. Gazdar, Gerald. 1987. Applicability of indexed grammars to natural language. In Uwe Reyle and Christian Rohrer, editors, Natural Language Parsing and Linguistic Theories. D. Reidel, pages 69–94. Joshi, Aravind K. 1985. Tree Adjoining Grammars: How much context-sensitivity is required to provide reasonable structural descriptions? In David R. Dowty, Lauri Karttunen, and Arnold M. Zwicky, editors, Natural Language Parsing. Cambridge University Press, pages 206–250. Joshi, Aravind K. and Yves Schabes. 1997. Tree-Adjoining Grammars. In Grzegorz Rozenberg and Arto Salomaa, editors, Handbook of Formal Languages, volume 3. Springer, pages 69–123. Kuhlmann, Marco, Alexander Koller, and Giorgio Satta. 2010. The importance of rule restrictions in CCG. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL), pages 534–543, Uppsala. Moortgat, Michael. 2011. Categorial type logics. In Johan van Benthem and Alice ter Meulen, editors, Handbook of Logic and Language. Elsevier, second edition, chapter 2, pages 95–179. Pollard, Carl J. 1984. Generalized Phrase Structure Grammars, Head Grammars, and Natural Language. Ph.D. thesis, Stanford University. Shieber, Stuart M. 1985. Evidence against the context-freeness of natural language. Linguistics and Philosophy, 8(3):333–343. Steedman, Mark. 1996. Surface Structure and Interpretation, volume 30 of Linguistic Inquiry Monographs. MIT Press. Steedman, Mark. 2000. The Syntactic Process. MIT Press. Steedman, Mark. 2012. Taking Scope. MIT Press. Steedman, Mark and Jason Baldridge. 2011. Combinatory Categorial Grammar. In Robert D. Borsley and Kersti Börjars, editors, Non-Transformational Syntax: Formal and Explicit Models of Grammar. Blackwell, chapter 5, pages 181–224. 
Vijay-Shanker, K. and David J. Weir. 1993. Parsing some constrained grammar formalisms. Computational Linguistics, 19(4):591–636.
Vijay-Shanker, K. and David J. Weir. 1994. The equivalence of four extensions of context-free grammars. Mathematical Systems Theory, 27(6):511–546.
Vijay-Shanker, K., David J. Weir, and Aravind K. Joshi. 1986. Tree adjoining and head wrapping. In Proceedings of the Eleventh International Conference on Computational Linguistics (COLING), pages 202–207, Bonn.
Weir, David J. 1988. Characterizing Mildly Context-Sensitive Grammar Formalisms. Ph.D. thesis, University of Pennsylvania.
Weir, David J. and Aravind K. Joshi. 1988. Combinatory categorial grammars: Generative power and relationship to linear context-free rewriting systems. In Proceedings of the 26th Annual Meeting of the Association for Computational Linguistics (ACL), pages 278–285, Buffalo, NY.
White, Michael. 2013. OpenCCG: The OpenNLP CCG Library. http://openccg.sourceforge.net/. Accessed November 13, 2013.

lexicalization and generative power in ccg :Marco Kuhlmann (Linköping University), Alexander Koller (University of Potsdam), Giorgio Satta (University of Padua). The weak equivalence of Combinatory Categorial Grammar (CCG) and Tree-Adjoining Grammar (TAG) is a central result of the literature on mildly context-sensitive grammar formalisms. However, the categorial formalism for which this equivalence has been established differs significantly from the versions of CCG that are in use today. In particular, it allows restriction of combinatory rules on a per-grammar basis, whereas modern CCG assumes a universal set of rules, isolating all cross-linguistic variation in the lexicon. In this article we investigate the formal significance of this difference. Our main result is that lexicalized versions of the classical CCG formalism are strictly less powerful than TAG.

vw-ccg :. . . restrictions. More specifically, we look at a variant of CCG consisting of the composition rules implemented in OpenCCG (White 2013), the most widely used development platform for CCG grammars. We show that this formalism is (almost) prefix-closed and cannot express target restrictions, which enables us to apply our generative capacity result from the first step. The same result holds for (the composition-only fragment of) the formalism of Baldridge and Kruijff (2003). Thus we find that, at least with existing means, the weak equivalence result of Vijay-Shanker and Weir cannot be obtained for lexicalized CCG. We conclude the article by discussing the implications of our results (Section 5).

proof :No composition rule creates new arguments: Every argument that occurs in an output category already occurs in one of the input categories. Therefore, every argument must come from some word–category pair in the lexicon, of which there are only finitely many.

Lemma 2 The set of secondary input categories that occur in the derivations of a VW-CCG is finite.

[3] Also, AB-grammar does not support lexicon entries for the empty string.

Every secondary input category is obtained by substituting concrete categories for the variables that occur in the non-shaded component of one of the rules specified in Figure 4. After the substitution, all of these categories occur as part of arguments. Then, with Lemma 1, we deduce that the substituted categories come from a finite set. At the same time, each grammar specifies a finite set of rules.
This means that there are only finitely many ways to obtain a secondary input category.

When specifying VW-CCGs, we find it convenient sometimes to provide an explicit list of valid rule instances, rather than a textual description of rule restrictions. For this we use a special type of restricted rule that we call templates. A template is a restricted rule that simultaneously fixes both (a) the target of the primary input category of the rule, and (b) the entire secondary input category. We illustrate the idea with an example.

Example 4 We list the templates that correspond to the rule instances in the derivation from Figure 6. (The grammar allows other instances that are not listed here.) We use the symbol $ as a placeholder for that part of a primary input category that is unconstrained by rule restrictions, and therefore may consist of an arbitrary sequence of arguments.

A  S$\A ⇒ S$   (1)
S$/C  C\A ⇒ S$\A   (2)
S$/B  B/C ⇒ S$/C   (3)
S$/B  B/C/B ⇒ S$/C/B   (4)

For example, template (1) characterizes backward application (<0) where the target of the primary input category is S and the secondary input category is A, and template (4) characterizes forward composition of degree 2 (>2) where the target of the primary input category is S and the secondary input category is B/C/B. Note that every VW-CCG can be specified using a finite set of templates: It has a finite set of combinatory rules; the set of possible targets of the primary input category of each rule is finite because each target is an atomic category; and the set of possible secondary input categories is finite because of Lemma 2.

We are given a TAG G and construct a weakly equivalent VW-CCG G′. The basic idea is to make the lexical categories of G′ correspond to the elementary trees of G, and to set up the combinatory rules and their restrictions in such a way that the derivations of G′ correspond to derivations of G.

Vocabulary, Atomic Categories. The vocabulary of G′ is the set of all terminal symbols of G; the set of atomic categories consists of all symbols of the form At, where either A is a nonterminal symbol of G and t ∈ {a, c}, or A is a terminal symbol of G and t = a. The distinguished atomic category of G′ is Sa, where S is the start symbol of G.

Lexicon. One may assume (cf. Vijay-Shanker, Weir, and Joshi 1986) that G is in the normal form shown in Figure 9. In this normal form there is a single initial S-tree, and all remaining elementary trees are auxiliary trees of one of five possible types. For each such tree, one constructs two lexicon entries for the empty string ε as specified in Figure 9. Additionally, for each terminal symbol x of G, one constructs a lexicon entry x := xa.

Rules. The rules of G′ are forward and backward application and forward and backward composition of degree at most 2. They are used to simulate adjunction operations in derivations of G: Application simulates adjunction into nodes to the left or right of the foot node; composition simulates adjunction into nodes above the foot node. Without restrictions, these rules would allow derivations that do not correspond to derivations of G. Therefore, rules are restricted such that an argument of the form |At can be eliminated by means of an application rule only if t = a, and by means of a composition rule only if t = c. This enforces two properties that are central for the correctness of the construction (Weir 1988, p. 119): First, the secondary input category in every instance of composition is a category that has just been introduced from the lexicon.
Second, categories cannot be combined in arbitrary orders. The rule restrictions are: 1. Forward and backward application are restricted to instances where both the target of the primary input category and the entire secondary input category take the form Aa. 2. Forward and backward composition are restricted to instances where the target of the primary input category takes the form Aa and the target of the secondary input category takes the form Ac. Using our template notation, the restricted rules can be written as in Figure 10.

As an aside we note that the proof of Lemma 3 makes heavy use of the ability of VW-CCG to assign lexicon entries to the empty string. Such lexicon entries violate one of the central linguistic principles of CCG, the Principle of Adjacency, according to which combinatory rules may only apply to phonologically realized entities (Steedman 2000, p. 54). It is an interesting question for future research whether a version of VW-CCG without lexicon entries for the empty string remains weakly equivalent to TAG.

Every prefix-closed VW-CCG is a VW-CCG, therefore the inclusion follows from Theorem 1. To show that every TAG language can be generated by a prefix-closed VW-CCG, we recall the construction of a weakly equivalent VW-CCG for a given TAG that we sketched in the proof of Lemma 3. As already mentioned in Example 6, the grammar G′ constructed there is not prefix-closed. However, we can make it prefix-closed by explicitly allowing the “missing” rule instances:

Aa/Bc  Bc ⇒ Aa   where A, B ∈ V
Bc  Aa\Bc ⇒ Aa   where A, B ∈ V

We shall now argue that this modification does not actually change the language generated by G′. The only categories that qualify as secondary input categories of the new instances are atomic categories of the form Bc where B is a nonterminal of the TAG G. Now the lexical categories of G′ either are of the form xa (where x is a terminal symbol) or are non-atomic. Categories of the form Bc are not among the derived categories of G′ either, as the combinatory rules only yield output categories whose targets have the form Ba. This means that the new rule instances can never be used in a complete derivation of G′, and therefore do not change the generated language. Thus we have a construction that turns a TAG into a weakly equivalent prefix-closed VW-CCG.

Every prefix-closed VW-CCG without target restrictions is a VW-CCG, so the inclusion follows from Theorem 1. To see that the inclusion is proper, consider the TAG language L1 = {aⁿbⁿcⁿ | n ≥ 1} from Example 5. We are interested in sublanguages L′ ⊆ L1 that are Parikh-equivalent to the full language L1. This property is trivially satisfied by L1 itself. Moreover, it is not hard to see that L1 is in fact the only sublanguage of L1 that has this property. Now in Section 3.4 we shall prove a central lemma (Lemma 6), which asserts that, if we assume that L1 is generated by a prefix-closed VW-CCG without target restrictions, then at least one of the Parikh-equivalent sublanguages of L1 must be context-free. Because L1 is the only such sublanguage, this would give us proof that L1 is context-free; but we know it is not. Therefore we conclude that L1 is not generated by a prefix-closed VW-CCG without target restrictions. Before turning to the proof of the central lemma (Lemma 6), we establish two other results about the languages generated by grammars without target restrictions.

Lemma 4 The languages generated by prefix-closed VW-CCG without target restrictions properly include the context-free languages.
Inclusion follows from the fact that AB-grammars (which generate all context-free languages) are prefix-closed VW-CCGs without target restrictions. To see that the inclusion is proper, consider a grammar G2 that is like G1 but does not have any rule restrictions. This grammar is trivially prefix-closed and without target restrictions; it is actually “pure” in the sense of Kuhlmann, Koller, and Satta (2010). The language L2 = L(G2) contains all the strings in L1 = {aⁿbⁿcⁿ | n ≥ 1}, together with other strings, including the string bbbacacac, whose derivation we showed in Figure 7. It is not hard to see that all of these additional strings have an equal number of as, bs, and cs. We can therefore write L1 as an intersection of L2 and a regular language: L1 = L2 ∩ a∗b∗c∗. To obtain a contradiction, suppose that L2 is context-free; then, because context-free languages are closed under intersection with regular languages, the language L1 would be context-free as well; but we know it is not. Therefore we conclude that L2 is not context-free either.

Lemma 5 The class of languages generated by prefix-closed VW-CCG without target restrictions is not closed under intersection with regular languages.

If the class of languages generated by prefix-closed VW-CCG without target restrictions were closed under intersection with regular languages, then with L2 (the language mentioned in the previous proof) it would also include the language L1 = L2 ∩ a∗b∗c∗. However, from the proof of Theorem 3 we know that L1 is not generated by any prefix-closed VW-CCG without target restrictions.

To argue that the system is terminating, we note that each rewriting step decreases the arity of one secondary input category in the derivation by one unit, while all other secondary input categories are left unchanged. As an example, consider rewriting under R1. The secondary input categories in the scope of that rule are Yβ/Z and Z on the left-hand side and Yβ and Z on the right-hand side. Here the arity of Yβ equals the arity of Yβ/Z, minus one. Because the system is terminating, to see that it is also confluent, it suffices to note that the left-hand sides of the rewrite rules do not overlap.

Lemma 8 The rewriting system transforms derivations of G into derivations of G.

We prove the stronger result that every rewriting step transforms derivations of G into derivations of G. We only consider rewriting under R1; the arguments for the other rules are similar. Assume that R1 is applied to a derivation of G. The rule instances in the scope of the left-hand side of R1 take the following form:

X/Y  Yβ/Z ⇒ Xβ/Z   (8)
Xβ/Z  Z ⇒ Xβ   (9)

Turning to the right-hand side, the rule instances in the rewritten derivation are

Yβ/Z  Z ⇒ Yβ   (10)
X/Y  Yβ ⇒ Xβ   (11)

The relation between instances (8) and (11) is the characteristic relation of prefix-closed grammars (Definition 2): If instance (8) is valid, then because G is prefix-closed, instance (11) is valid as well. Similarly, the relation between instances (9) and (10) is the characteristic relation of grammars without target restrictions (Definition 3): If instance (9) is valid, then because G is without target restrictions, instance (10) is valid as well. We conclude that if R1 is applied to a derivation of G, then the result is another derivation of G. Combining Lemma 7 and Lemma 8, we see that for every derivation d of G, exhaustive application of the rewriting rules produces another uniquely determined derivation of G. We shall refer to this derivation as R(d).
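Operationally, R is a fixpoint iteration: apply rewriting rules anywhere in the tree until none matches. A sketch, reusing the dictionary encoding of derivations from the earlier R1 sketch, with each rule given as a function that rewrites the root of a subtree or returns None:

    def rewrite_somewhere(d, rules):
        """Try to apply one rewriting rule at the root or anywhere below;
        return (new_tree, True) on success, (d, False) otherwise."""
        for rule in rules:
            out = rule(d)
            if out is not None:
                return out, True
        for i, kid in enumerate(d.get('kids', [])):
            new_kid, changed = rewrite_somewhere(kid, rules)
            if changed:
                kids = list(d['kids'])
                kids[i] = new_kid
                return {**d, 'kids': kids}, True
        return d, False

    def normalize(d, rules):
        """Compute R(d): rewrite until no rule matches anywhere.  Lemma 7
        guarantees that the loop terminates (each step strictly decreases
        the degree of one composition) and that the result is independent
        of the order in which rules are applied (confluence)."""
        changed = True
        while changed:
            d, changed = rewrite_somewhere(d, rules)
        return d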
A transformed derivation is any derivation d′ such that d′ = R(d) for some derivation d. Let Y be the set of yields of the transformed derivations. Every string w′ ∈ Y is obtained from a string w ∈ L(G) by choosing some derivation d of w, rewriting this derivation into the transformed derivation R(d), and taking the yield. Inclusion then follows from Lemma 8. Because of the permuting rules R3 and R4, the strings w and w′ will in general be different. What we can say, however, is that w and w′ will be equal up to permutation. Thus we have established that Y and L(G) are Parikh-equivalent. What remains in order to prove Lemma 6 is to show that the yields of the transformed derivations form a context-free language.

3.4.3 Context-Freeness of the Sublanguage. In a derivation tree, every node except the root node is labeled with either the primary or the secondary input category of a combinatory rule. We refer to these two types of nodes as primary nodes and secondary nodes, respectively. To simplify our presentation, we shall treat the root node as a secondary node. We restrict our attention to derivation trees for strings in L(G); in these trees, the root node is labeled with the distinguished atomic category S. For a leaf node u, the projection path of u is the path that starts at the parent of u and ends at the first secondary node that is encountered on the way towards the root node. We denote a projection path as a sequence X1, . . . , Xn (n ≥ 1), where X1 is the category at the parent of u and Xn is the category at the secondary node. Note that the category X1 is taken from the lexicon, while every other category is derived by combining the preceding category on the path with some secondary input category (not on the path) by means of some combinatory rule.

Example 10 In the derivation in Figure 6, the projection path of the first b goes all the way to the root, while all other projection paths have length 1, starting and ending with a lexical category. In Figure 7, the projection path of the first b ends at the root, while the projection paths of the remaining bs end at the nodes with category B, and the projection paths of the cs end at the nodes with category C.

A projection path X1, . . . , Xn is split if it can be segmented into two parts X1, . . . , Xs and Xs, . . . , Xn (1 ≤ s ≤ n) such that the first part only uses application rules and the second part only uses composition rules. Note that either part may consist of a single category only, in which case no combinatory rule is used in that part. If n = 1, then the path is trivially split. All projection paths in Figures 6 and 7 are split, except for the path of the first b in Figure 6, which alternates between composition (with C\A) and application (with A).

Lemma 10 In transformed derivations, every projection path is split.

We show that as long as a derivation d contains a projection path that is not split, it can be rewritten. A projection path that is not split contains three adjacent categories U, V, W, such that V is derived by means of a composition with primary input U, and W is derived by means of an application with primary input V. Suppose that both the composition and the application are forward. (The arguments for the other three cases are similar.) Then U can be written as X/Y for some category X and argument /Y, V can be written as Xβ/Z for some argument /Z and some (possibly empty) sequence of arguments β, and W can be written as Xβ.
We can then convince ourselves that d contains the following configuration, which matches the left-hand side of rewriting rule R1: the composition X/Y  Yβ/Z ⇒ Xβ/Z, immediately followed by the application Xβ/Z  Z ⇒ Xβ.

Lemma 11 The set of all categories that occur in transformed derivations is finite.

Every category that occurs in transformed derivations occurs on some of its projection paths. Consider any such path. By Lemma 10 we know that this path is split; its two parts, here called P1 and P2, are visualized in Figure 13. We now reason about the arities of the categories in these two parts. 1. Because P1 only uses application, the arities in this part get smaller and smaller until they reach their minimum at Xs. This means that the arities of P1 are bounded by the arity of the first category on the path, which is a category from the lexicon. 2. Because P2 only uses composition, the arities in this part either get larger or stay the same until they reach a maximum at Xn. This means that the arities of P2 are bounded by the arity of the last category on the path, which is either the distinguished atomic category S or a secondary input category. Thus the arities of our chosen path are bounded by the maximum of three grammar-specific constants: the maximal arity of a lexical category, the arity of S (which is 0), and the maximal arity of a secondary input category. The latter value is well-defined because there are only finitely many such categories (by Lemma 2). Let k be the maximum among the three constants, and let K be the set of all categories of the form A|mXm · · · |1X1 where A is an atomic category of G, m ≤ k, and each |iXi is an argument that may occur in derivations of G. The set K contains all categories that occur on some projection path, and therefore all categories that occur in transformed derivations, but it may also include other categories. As there are only finitely many atomic categories and finitely many arguments (Lemma 1), we conclude that the set K, and hence the set of categories that occur in transformed derivations, is finite as well.

Lemma 12 The transformed derivations yield a context-free language.

We construct a context-free grammar H that generates the set Y of yields of the transformed derivations. To simplify the presentation, we first construct a grammar H′ that generates a superset of Y.

Construction of H′. The construction of the grammar H′ is the same as the construction in the classical proof that showed the context-freeness of AB-grammars, by Bar-Hillel, Gaifman, and Shamir (1960): The production rules of H′ are set up to correspond to the valid rule instances of G. The reason that this construction is not useful for VW-CCGs in general is that these may admit infinitely many rule instances, whereas a context-free grammar can only have finitely many productions. The set of rule instances may be infinite because VW-CCG has access to composition rules (specifically, rules of degrees greater than 1); in contrast, AB-grammars are restricted to application. Crucially though, by Lemma 11 we know that as long as we are interested only in transformed derivations it is sufficient to use a finite number of rule instances, more specifically those whose input and output categories are included in the set K of arity-bounded categories. Thus for every instance X/Y  Yβ ⇒ Xβ where all three categories are in K, we construct a production [Xβ] → [X/Y] [Yβ], and similarly for backward rules. (We enclose categories in square brackets for clarity.) In addition, for every lexicon entry σ := X in G we add to H′ a production [X] → σ.
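The production-building step can be sketched as follows; valid_fwd is a hypothetical callback standing in for the rule system of G, and combine_fwd generalizes the earlier category sketch to try every degree (only the forward case is shown; backward rules are symmetric):

    from itertools import product

    def combine_fwd(primary, secondary):
        """Try  X/Y  Yb => Xb  at every degree: peel arguments off the
        secondary category until its core equals Y (only the outermost
        match is tried, which is enough for a sketch)."""
        if not (isinstance(primary, tuple) and primary[1] == '/'):
            return None
        x, _, y = primary
        peeled, cat = [], secondary
        while True:
            if cat == y:
                for slash, arg in reversed(peeled):
                    x = (x, slash, arg)
                return x
            if not isinstance(cat, tuple):
                return None
            peeled.append(cat[1:])
            cat = cat[0]

    def build_h_prime(K, lexicon, valid_fwd):
        """One production  [Xb] -> [X/Y] [Yb]  for every valid forward
        instance whose three categories all lie in the finite set K, plus
        one production  [X] -> sigma  for every lexicon entry."""
        productions = []
        for prim, sec in product(K, repeat=2):
            out = combine_fwd(prim, sec)
            if out is not None and out in K and valid_fwd(prim, sec):
                productions.append((out, [prim, sec]))
        for word, cat in lexicon:
            productions.append((cat, [word]))
        return productions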
As the terminal alphabet of H′ we choose the vocabulary of G; as the nonterminal alphabet we choose the set K; and as the start symbol we choose the distinguished atomic category S. Every transformed derivation of G corresponds (in an obvious way) to some derivation in H′, which proves that Y ⊆ L(H′). Conversely, every derivation of H′ represents a derivation of G (though not necessarily a transformed derivation), thus L(H′) ⊆ L(G).

Construction of H. The chain of inclusions Y ⊆ L(H′) ⊆ L(G) is sufficient to prove Lemma 6: Because Y and L(G) are Parikh-equivalent (which we observed at the beginning of Section 3.4.2), so are L(H′) and L(G), which means that L(H′) satisfies all of the properties claimed in Lemma 6, even though this does not suffice to prove our current lemma. However, once H′ is given, it is not hard to also obtain a grammar H that generates exactly Y. For this, we need to filter out derivations whose projection paths do not have the characteristic property of transformed derivations that we established in Lemma 10. (It is not hard to see that every derivation that does have this property is a transformed derivation.) We annotate the left-hand side nonterminals in the productions of H′ with a flag t ∈ {a, c} to reflect whether the corresponding category has been derived by means of application (t = a) or composition (t = c); the value of this flag is simply the type of combinatory rule that gave rise to the production. The nonterminals in the right-hand sides are annotated in all possible ways, except that the following combinations are ruled out:

[X]a → [X/Y]c [Y]t   and   [X]a → [Y]t [X\Y]c   for t ∈ {a, c}

These combinations represent exactly the cases where the output category of a composition rule is used as the primary input category of an application rule, which are the cases that violate the “split” property that we established in Lemma 10. This concludes the proof of Lemma 6, and therefore the proof of Theorem 3.

Inclusion follows from the fact that every AB-grammar can be written as an O-CCG with only application (d = 0). To show that the inclusion is proper, we use the same argument as in the proof of Lemma 4. The grammar G2 that we constructed there can be turned into an equivalent O-CCG by decorating each slash with ·, the least restrictive type, and setting its inertness status to +. What is less obvious is whether O-CCG generates the same class of languages as VW-CCG and TAG. Our main result is that this is not the case.

Theorem 4 The languages generated by O-CCG are properly included in the TAG languages.

O-CCG without Inertness. To approach Theorem 4, we set inertness aside for a moment and focus on the use of the slash types as a mechanism for imposing rule restrictions. Each of the rules in Figure 15 requires all of the slash types of the n outermost arguments of its secondary input category to be compatible with the rule, in the sense specified in Figure 14. If we now remove one or more of these arguments from a valid rule instance, then the new instance is clearly still valid, as we have reduced the number of potential violations of the type–rule compatibility. This shows that the rule system is prefix-closed. As none of the rules is conditioned on the target of the primary input category, the rule system is even without target restrictions. With these two properties established, Theorem 4 can be proved by literally the same arguments as those that we gave in Section 3.
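This monotonicity argument can be spelled out in a few lines, with compatible standing in for the type–rule compatibility relation of Figure 14:

    def instance_valid(secondary_types, compatible):
        """Validity condition discussed above: every slash type among the
        outermost arguments of the secondary input category must be
        compatible with the rule."""
        return all(compatible(t) for t in secondary_types)

    def prefixes_valid(secondary_types, compatible):
        """Dropping outermost arguments only removes potential violations,
        so a valid instance stays valid for every shorter argument list;
        this is the prefix-closedness used in the argument above."""
        if not instance_valid(secondary_types, compatible):
            return True                      # nothing to check
        return all(instance_valid(secondary_types[:k], compatible)
                   for k in range(len(secondary_types)))

    assert prefixes_valid(['x', 'dot', 'x'], lambda t: t in {'x', 'dot'})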
Thus we see directly that the theorem holds for versions of multimodal CCG without inertness, such as the formalism of Baldridge and Kruijff (2003).

O-CCG with Inertness. In the general case, the situation is complicated by the fact that the crossed composition rules change the inertness status of some argument categories if the slash types have conflicting directions. This means that the crossed composition rules in O-CCG are not entirely prefix-closed, as illustrated by the following example.

Example 11 Consider the following two rule instances:

X/+×Y  Y\+×Z2 \+×rZ1 ⇒ X\−×Z2 \−×rZ1   (12)
X/+×Y  Y\+×Z2 ⇒ X\−×Z2   (13)

Instance (12) is a valid instance of forward crossed composition. Prefix-closedness would require instance (13) to be valid as well; but it is not. In instance (12) the inertness status of \+×Z2 is changed for the only reason that the slash of \+×rZ1 does not match the direction required by its type. In instance (13) the argument \+×rZ1 is not present, and therefore the inertness status of \+×Z2 is not changed, but is carried over to the output category.

We therefore have to prove that the following analogue of Lemma 6 holds for O-CCG:

Lemma 15 (Main Lemma for O-CCG) For every language L generated by some O-CCG there is a sublanguage L′ ⊆ L such that 1. L′ and L are Parikh-equivalent, and 2. L′ is context-free.

This lemma implies that the language L1 = {aⁿbⁿcⁿ | n ≥ 1} from Example 2 cannot be generated by O-CCG (but by prefix-closed VW-CCG with target restrictions, and by TAG). The argument is the same as in the proof of Theorem 3.

As in the proof of Lemma 8 we establish the stronger result that the claimed property holds for every single rewriting step. We only give the argument for rewriting under R3, which involves instances of forward crossed composition. The argument for R4 is analogous, and R1 and R2 are simpler cases because they involve harmonic composition, where the inertness status does not change. Suppose that R3 is applied to a derivation of some O-CCG. In their most general form, the rule instances in the scope of the left-hand side of R3 may be written as follows, where the function σ is defined as specified in Figure 15:

X/+,t0 Y   Y \sn,tn Zn · · · \s2,t2 Z2 \s1,t1 Z1  ⇒  X \σ(sn),tn Zn · · · \σ(s2),t2 Z2 \σ(s1),t1 Z1   (14)
Z1   X \σ(sn),tn Zn · · · \σ(s2),t2 Z2 \+,t1 Z1  ⇒  X \σ(sn),tn Zn · · · \σ(s2),t2 Z2   (15)

Here instance (14) is an instance of forward crossed composition, so each of the types ti is compatible with that rule. Because the two marked arguments are identical, we have σ(s1) = +. This is only possible if the inertness statuses of the slashes \si,ti do not change in the context of derivation (14), that is, if σ(si) = si for all 1 ≤ i ≤ n. Note that in this case, t0 is either a right type or one of the four undirected core types, and each t1, . . . , tn is either a left type or a core type. We can now alternatively write instances (14) and (15) as

X/+,t0 Y   Y \sn,tn Zn · · · \s2,t2 Z2 \+,t1 Z1  ⇒  X \sn,tn Zn · · · \s2,t2 Z2 \+,t1 Z1   (14′)
Z1   X \sn,tn Zn · · · \s2,t2 Z2 \+,t1 Z1  ⇒  X \sn,tn Zn · · · \s2,t2 Z2   (15′)

Then the rule instances in the rewritten derivation can be written as follows:

Z1   Y \sn,tn Zn · · · \s2,t2 Z2 \+,t1 Z1  ⇒  Y \sn,tn Zn · · · \s2,t2 Z2   (16)
X/+,t0 Y   Y \sn,tn Zn · · · \s2,t2 Z2  ⇒  X \sn,tn Zn · · · \s2,t2 Z2   (17)

Here instance (16) is clearly a valid instance of backward application.
Based on our earlier observations about the t_i and their compatibility with crossed composition, we also see that instance (17) is a valid instance of forward crossed composition (if n > 1), or of forward application (if n = 1). This completes the proof of Lemma 15. To finish the proof of Theorem 4, we also have to establish the inclusion of the O-CCG languages in the TAG languages. This is a known result for other dialects of multimodal CCG (Baldridge and Kruijff 2003), but O-CCG once again requires some extra work because of inertness. Lemma 17 The O-CCG languages are included in the TAG languages. It suffices to show that the O-CCG languages are included in the class of languages generated by LIG (Gazdar 1987); the claim then follows from the weak equivalence of LIG and TAG. Vijay-Shanker and Weir (1994, Section 3.1) present a construction that transforms an arbitrary VW-CCG into a weakly equivalent LIG. It is straightforward to adapt their construction to O-CCG. As we do not have the space here to define LIG, we only provide a sketch of the adapted construction. As in the case of VW-CCG, the valid instances of an O-CCG rule can be written down using our template notation. The adapted construction converts each such template into a production rule of a weakly equivalent LIG. Consider for instance the following instance of forward crossed composition from Example 11:

A /+× Y   Y \+× Z_2 \+× Z_1   ⇛   A \−× Z_2 \−× Z_1

This template is converted into the following LIG rule. We adopt the notation of Vijay-Shanker and Weir (1994) and write ◦◦ for the tail of a stack of unbounded size.

A[◦◦ \−× Z_2 \−× Z_1] → A[◦◦ /+× Y]   Y[\+× Z_2 \+× Z_1]

In this way, every O-CCG can be written as a weakly equivalent LIG.",,,,,,,,,,,"1 introduction :Since the late 1970s, several grammar formalisms have been proposed that extend the power of context-free grammars in restricted ways. The two most prominent members of this class of “mildly context-sensitive” formalisms (a term coined by Joshi 1985) are Tree-Adjoining Grammar (TAG; Joshi and Schabes 1997) and Combinatory Categorial Grammar (CCG; Steedman 2000; Steedman and Baldridge 2011). Both formalisms have been applied to a broad range of linguistic phenomena, and are being widely used in computational linguistics and natural language processing. In a seminal paper, Vijay-Shanker and Weir (1994) showed that TAG, CCG, and two other mildly context-sensitive formalisms, Head Grammar (Pollard 1984) and Linear Indexed Grammar (Gazdar 1987), all characterize the same class of string languages. However, when citing this result it is sometimes overlooked that the result applies to a version of CCG that is quite different from the versions that are in practical use today. The goal of this article is to contribute to a better understanding of the significance of this difference. (∗ Department of Computer and Information Science, Linköping University, 581 83 Linköping, Sweden. E-mail: marco.kuhlmann@liu.se. ∗∗ Department of Linguistics, Karl-Liebknecht-Str. 24–25, University of Potsdam, 14476 Potsdam, Germany. E-mail: koller@ling.uni-potsdam.de. † Department of Information Engineering, University of Padua, via Gradenigo 6/A, 35131 Padova, Italy. E-mail: satta@dei.unipd.it. Submission received: 4 December 2013; revised submission received: 26 July 2014; accepted for publication: 25 November 2014. doi:10.1162/COLI_a_00219 © 2015 Association for Computational Linguistics.)
The difference between “classical” CCG as formalized by Vijay-Shanker and Weir (1994) and the modern perspective may be illustrated with the combinatory rule of backward crossed composition. The general form of this rule looks as follows; we also show backward application for comparison:

Y/Z   X\Y   ⇒   X/Z      (backward crossed composition, <×)
Y   X\Y   ⇒   X      (backward application, <)

Formally, a rule is a syntactic object in which the letters X, Y, Z act as variables for categories. A rule instance is obtained by substituting concrete categories for all variables in the rule. For example, the derivation in Figure 3 contains the following instances of function application. We denote rule instances by using a triple arrow instead of the double arrow in our notation for rules.

(S\NP)/NP   NP   ⇛   S\NP      and      NP   S\NP   ⇛   S

Application rules give rise to derivations equivalent to those of context-free grammar. Indeed, versions of categorial grammar where application is the only mode of combination, such as AB-grammar (Ajdukiewicz 1935; Bar-Hillel, Gaifman, and Shamir 1960), can only generate context-free languages. CCG can be more powerful because it also includes other rules, derived from the combinators of combinatory logic (Curry, Feys, and Craig 1958). In this article, as in most of the formal work on CCG, we restrict our attention to the rules of (generalized) composition, which are based on the B combinator. The general form of composition rules is shown in Figure 4. In each rule, the two input categories are distinguished into one primary (shaded) and one secondary input category. The number n of outermost arguments of the secondary input category is called the degree of the rule. In particular, for n = 0 we obtain the rules of function application. (Footnote 1: This means that we ignore other rules required for linguistic analysis, in particular type-raising (from the T combinator), substitution (from the S combinator), and coordination. Footnote 2: The literature on CCG assumes a bound on n; for English, Steedman (2000, p. 42) puts n ≤ 3. Adding rules of unbounded degree increases the generative capacity of the formalism (Weir and Joshi 1988).) In contexts where we refer to both application and composition, we use the latter term for composition rules with degree n > 0. Derivation Trees. Derivation trees can now be schematically defined as in Figure 5. They contain two types of branchings: unary branchings correspond to lexicon entries; binary branchings correspond to rule instances. The yield of a derivation tree is the left-to-right concatenation of its leaves. We now define the classical CCG formalism that was studied by Vijay-Shanker and Weir (1994) and originally introduced by Weir and Joshi (1988). As mentioned in Section 1, the central feature of this formalism is its ability to impose restrictions on the applicability of combinatory rules. Specifically, a restricted rule is a rule annotated with constraints that (a) restrict the target of the primary input category; and/or (b) restrict the secondary input category, either in parts or in its entirety. Every grammar lists a finite number of restricted rules (where one and the same base rule may occur with several different restrictions). A valid rule instance is an instance that is compatible with at least one of the restricted rules. Example 1 Linguistic grammars make frequent use of rule restrictions.
To exclude the undesired derivation in Figure 1, we restricted backward crossed composition to instances where both the primary and the secondary input category are functions into the category of sentences, S. Writing target for the function that returns the target of a category, the restricted rule can be written as

Y/Z   X\Y   ⇒   X/Z,   where target(X) = target(Y) = S      (backward crossed composition)

Definition 2 A VW-CCG is prefix-closed if, whenever a rule instance whose secondary input category carries n outermost arguments is valid, then so is every instance obtained by removing the outermost k (for any k ≤ n) of its arguments. Example 6 We illustrate prefix-closedness using some examples: 1. Every AB-grammar (when seen as a VW-CCG) is trivially prefix-closed; in these grammars, n = 0. 2. The “pure” grammars that we considered in our earlier work (Kuhlmann, Koller, and Satta 2010) are trivially prefix-closed. 3. The grammar G_1 from Example 2 is prefix-closed. 4. The grammars constructed in the proof of Lemma 3 are not prefix-closed; they do not allow the following instances of application, where the secondary input category is of the form B_c (rather than B_a):

A_a/B_c   B_c   ⇛   A_a      where A, B ∈ V
B_c   A_a\B_c   ⇛   A_a      where A, B ∈ V

Example 7 The linguistic intuition underlying prefix-closed grammars is that if such a grammar allows us to delay the combination of a functor and its argument (via composition), then it also allows us to combine the functor and its argument immediately (via application). To illustrate this intuition, consider Figure 11, which shows two derivations related to the discussion of word order in Swiss German subordinate clauses (Shieber 1985):

. . . mer em Hans es huus hälfed aastriche
. . . we Hans_dat the house_acc helped paint
“. . . we helped Hans paint the house”

Derivation (5) (simplified from Steedman and Baldridge 2011, p. 201) starts by composing the tensed verb hälfed into the infinitive aastriche and then applies the resulting category to the accusative argument of the infinitive, es huus. Prefix-closedness implies that, if the combination of hälfed and aastriche is allowed when the latter is still waiting for es huus, then it must also be allowed if es huus has already been found. Thus prefix-closedness predicts derivation (6), and along with it the alternative word order

. . . mer em Hans hälfed es huus aastriche
. . . we Hans_dat helped the house_acc paint

This word order is in fact grammatical (Shieber 1985, pp. 338–339). We now show that the restriction to prefix-closed grammars does not change the generative capacity of VW-CCG. Theorem 2 Prefix-closed VW-CCG and TAG are weakly equivalent. In this section we shall see that the weak equivalence between prefix-closed VW-CCG and TAG depends on the ability to restrict the target of the primary input category in a combinatory rule. These are the restrictions that we referred to as constraints of type (a) in Section 2.2. We say that a grammar that does not make use of these constraints is without target restrictions. This property can be formally defined as follows. Definition 3 A VW-CCG is without target restrictions if it satisfies the following implication: if X/Y Yβ ⇛ Xβ is a valid rule instance, then so is X̄/Y Yβ ⇛ X̄β for any category X̄ of the grammar; and similarly for backward rules. Example 8 1. Every AB-grammar is without target restrictions; it allows forward and backward application for every primary input category. 2. The grammar G_1 from Example 2 is not without target restrictions, because its rules are restricted to primary input categories with target S. Target restrictions on the primary input category are useful in CCGs for natural languages; recall our discussion of backward crossed composition in Section 1.
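Prefix-closedness is easy to operationalize once the valid instances of a grammar are given as a finite list, as they are for the template-specified grammars discussed later in this article. The following Python sketch uses a simplified encoding of our own devising (an instance is a triple of the primary input, the target of the secondary input, and the arguments of the secondary input, outermost first); it is an illustration of Definition 2, not part of any published implementation.

def is_prefix_closed(valid_instances):
    """valid_instances: a finite set of triples (primary, sec_target,
    sec_args), where sec_args is a tuple of arguments, outermost first.
    Removing the outermost k arguments of the secondary input of a
    valid instance must again yield a valid instance."""
    for primary, target, args in valid_instances:
        for k in range(1, len(args) + 1):
            if (primary, target, args[k:]) not in valid_instances:
                return False
    return True

# A fragment loosely modeled on the grammar G_1 from Example 2; the
# lower-degree instances are hypothetical, added to close the set.
valid = {
    ("S$/B", "B", ("/B", "/C")),  # composition of degree 2
    ("S$/B", "B", ("/C",)),       # composition of degree 1
    ("S$/B", "B", ()),            # application
}
print(is_prefix_closed(valid))    # True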
As we shall see, target restrictions are also relevant from a formal point of view: If we require VW-CCGs to be without target restrictions, then we lose some of their weak generative capacity. This is the main technical result of this article. For its proof we need the following standard concept from formal language theory: Definition 4 Two languages L and L′ are Parikh-equivalent if for every string w ∈ L there exists a permuted version w′ of w such that w′ ∈ L′, and vice versa. Theorem 3 The languages generated by prefix-closed VW-CCG without target restrictions are properly included in the TAG languages. We shall now prove the central lemma that we used in the proof of Theorem 3. Lemma 6 (Main Lemma for VW-CCG) For every language L that is generated by some prefix-closed VW-CCG without target restrictions, there is a sublanguage L′ ⊆ L such that 1. L′ and L are Parikh-equivalent, and 2. L′ is context-free. Throughout this section, we let G be some arbitrary prefix-closed VW-CCG without target restrictions. The basic idea is to transform the derivations of G into a certain special form, and to prove that the transformed derivations yield a context-free language. The transformation is formalized by the rewriting system in Figure 12. (Footnote 4: Recall that we use the Greek letter β to denote a (possibly empty) sequence of arguments.) To see how the rules of this system work, consider rule R1; the other rules are symmetric. Rule R1 rewrites an entire derivation into another derivation. It states that, whenever we have a situation where a category of the form X/Y is combined with a category of the form Yβ/Z by means of composition, and the resulting category is combined with a category Z by means of application, then we may just as well first combine Yβ/Z with Z, and then use the resulting category as a secondary input category together with X/Y. Note that R1 and R2 produce a new derivation for the original sentence, whereas R3 and R4 produce a derivation that yields a permutation of that sentence: The order of the substrings corresponding to the categories Z and X/Y (in the case of rule R3) or X\Y (in the case of rule R4) is reversed. In particular, R3 captures the relation between the two derivations of Swiss German word orders shown in Figure 11: Applying R3 to derivation (5) gives derivation (6). Importantly though, while the transformation may reorder the yield of a derivation, every transformed derivation is still a derivation of G. Example 9 If we take the derivation in Figure 6 and exhaustively apply the rewriting rules from Figure 12, then the derivation that we obtain is the one in Figure 7. Note that although the latter derivation is not grammatical with respect to the grammar G_1 from Example 2, it is grammatical with respect to the grammar G_2 from the proof of Lemma 4, which is without target restrictions. It is instructive to compare the rewriting rules in Figure 12 to the rules that establish the normal form of Eisner (1996). This normal form is used in practical CCG parsers to solve the problem of “spurious ambiguity,” where one and the same semantic interpretation (which in CCG takes the form of a lambda term) has multiple syntactic derivation trees. It is established by rewriting rules such as the following:

X/Y   Yβ/Z   ⇛   Xβ/Z (†);   Xβ/Z   Zγ   ⇛   Xβγ      −→      Yβ/Z   Zγ   ⇛   Yβγ;   X/Y   Yβγ   ⇛   Xβγ (††)      (7)

The rules in Figure 12 have much in common with the Eisner rules; yet there are two important differences.
First, as already mentioned, our rules (in particular, rules R3 and R4) may reorder the yield of a derivation, whereas Eisner's normal form preserves yields. Second, our rules decrease the degrees of the involved composition operations, whereas Eisner's rules may in fact increase them. To see this, note that the left-hand side of derivation (7) involves a composition of degree |β| + 1 (†), whereas the right-hand side involves a composition of degree |β| + |γ| (††). This means that rewriting will increase the degree in situations where |γ| > 1. In contrast, our rules only fire in the case where the combination with Z happens by means of an application, that is, if |γ| = 0. Under this condition, each rewrite step is guaranteed to decrease the degree of the composition. We will use this observation in the proof of Lemma 7. 3.4.1 Properties of the Transformation. The next two lemmas show that the rewriting system in Figure 12 implements a total function on the derivations of G. Lemma 7 The rewriting system is terminating and confluent: Rewriting a derivation ends after a finite number of steps, and different rewriting orders all result in the same output. Lemma 9 The yields of the transformed derivations are a subset of, and Parikh-equivalent to, L(G). Theorem 3 pinpoints the exact mechanism that VW-CCG uses to achieve weak equivalence with TAG: At least for the class of prefix-closed grammars, TAG equivalence is achieved if and only if we allow target restrictions. Although target restrictions are frequently used in linguistically motivated grammars, it is important and perhaps surprising to realize that they are indeed necessary to achieve the full generative capacity of VW-CCG. In the grammar formalisms folklore, the generative capacity of CCG is often attributed to generalized composition, and indeed we have seen (in Lemma 4) that even grammars without target restrictions can generate non-context-free languages such as L(G_2). However, our results show that composition by itself is not enough to achieve weak equivalence with TAG: The yields of the transformed derivations from Section 3.4 form a context-free language despite the fact that these derivations may still contain compositions, including compositions of degree n > 2. In addition to composition, VW-CCG also needs target restrictions to exert enough control over word order to block unwanted permutations. One way to think about this is that target restrictions can enforce alternations of composition and application (as in the derivation shown in Figure 6), while transformed derivations are characterized by projection paths without such alternations (Lemma 10). We can sharpen the picture even more by observing that the target restrictions that are crucial for the generative capacity of VW-CCG are not those on generalized composition, but those on function application. To see this, note that the proof of Lemma 8 goes through already if only the application rules, such as (9) and (10), are without target restrictions. This means that we have the following qualification of Theorem 1. Lemma 13 Prefix-closed VW-CCG is weakly equivalent to TAG only because it supports target restrictions on forward and backward application. This finding is indeed unexpected; for instance, no grammar in Steedman (2000) uses target restrictions on the application rules.
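To make the mechanics of the transformation concrete, the following sketch implements the forward case of rule R1 on a toy encoding of derivation trees (ours: a leaf is a category string, an inner node is a triple (kind, left, right) with kind in {"app", "comp"}). Category bookkeeping and the symmetric rules R2-R4 are omitted; the point is only the reordering of the two combination steps and the decrease in composition degree.

def rewrite_r1_once(tree):
    """Rewrite the topmost R1 redex, if any; returns (tree, changed)."""
    if isinstance(tree, str):
        return tree, False
    kind, left, right = tree
    # R1 redex: a composition whose output serves as the primary input
    # of an application with some category Z.
    if kind == "app" and isinstance(left, tuple) and left[0] == "comp":
        x_over_y, y_beta_z = left[1], left[2]
        # First eliminate Z by application, then combine with X/Y. The
        # new top step is a composition of strictly smaller degree (an
        # application once beta is empty); we label it "comp" throughout.
        return ("comp", x_over_y, ("app", y_beta_z, right)), True
    new_left, changed = rewrite_r1_once(left)
    if changed:
        return (kind, new_left, right), True
    new_right, changed = rewrite_r1_once(right)
    return (kind, left, new_right), changed

d = ("app", ("comp", "X/Y", "Yb/Z"), "Z")
print(rewrite_r1_once(d)[0])  # ('comp', 'X/Y', ('app', 'Yb/Z', 'Z'))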
4 generative capacity of multimodal ccg :After clarifying the mechanisms that “classical” CCG uses to achieve weak equivalence with TAG, we now turn our attention to “modern,” multimodal versions of CCG (Baldridge and Kruijff 2003; Steedman and Baldridge 2011). These versions emphasize the use of fully lexicalized grammars in which no rule restrictions are allowed, and instead equip slashes with types in order to control the use of the combinatory rules. Our central question is whether the use of slash types is sufficient to recover the expressiveness that we lose by giving up rule restrictions. We need to fix a specific variant of multimodal CCG to study this question formally. Published works on multimodal CCG differ with respect to the specific inventories of slash types they assume. Some important details, such as a precise definition of generalized composition with slash types, are typically not discussed at all. In this article we define a variant of multimodal CCG which we call O-CCG. This formalism extends our definition of VW-CCG (Definition 1) with the slash inventory and the composition rules of the popular OpenCCG grammar development system (White 2013). Our technical result is that the main lemma (Lemma 6) also holds for O-CCG. With this we can conclude that the answer to our question is negative: Slash types are not sufficient to replace rule restrictions; O-CCG is strictly less powerful than TAG. Although this is primarily a theoretical result, at the end of this section we also discuss its implications for practical grammar development. We define O-CCG as a formalism that extends VW-CCG with the slash types of OpenCCG, but abandons rule restrictions. Note that OpenCCG has a number of additional features that affect the generative capacity; we discuss these in Section 4.4. Slash Types. Like other incarnations of multimodal CCG, O-CCG uses an enriched notion of categories where every slash has a type. There are eight such types (Footnote 5: The type system of OpenCCG is an extension of the system used by Baldridge 2002): the four undirected core types ∗, ⋄, ×, and ·, together with left and right variants of the two composition types ⋄ and ×. The basic idea behind these types is as follows. Slashes with type ∗ can only be used to instantiate application rules. Type ⋄ also licenses harmonic composition rules, and type × also licenses crossed composition rules. Type · is the least restrictive type and can be used to instantiate all rules. The remaining types refine the system by incorporating a dimension of directionality. The exact type–rule compatibilities are specified in Figure 14. Inertness. O-CCG is distinguished from other versions of multimodal CCG, such as that of Baldridge and Kruijff (2003), in that every slash not only has a type but also an inertness status. Inertness was introduced by Baldridge (2002, Section 8.2.2) as an implementation of the “antecedent government” (ANT) feature of Steedman (1996), which is used to control the word order in certain English relative clauses. It is a two-valued feature. Arguments whose slash type has inertness status + are called active; arguments whose slash type has inertness status − are called inert. Only active arguments can be eliminated by means of combinatory rules; however, an inert argument can still be consumed as part of a secondary input category.
For example, the following instance of application is valid because the outermost slash of the primary input category has inertness status +:

X /+(Y/−Z)   Y/−Z   ⇛   X

We use the notations /st and \st to denote the forward and backward slashes with slash type t and inertness status s. Rules. All O-CCG grammars share a fixed set of combinatory rules, shown in Figure 15. Every grammar uses all rules, up to some grammar-specific bound on the degree of generalized composition. As mentioned earlier, a combinatory rule can only be instantiated if the slashes of the input categories have compatible types. Additionally, all composition rules require the slashes of the secondary input category to have a uniform direction. This is a somewhat peculiar feature of OpenCCG, and is in contrast to VW-CCG and other versions of CCG, which also allow composition rules with mixed directions. Composition rules are classified into harmonic and crossed forms. This distinction is based on the direction of the slashes in the secondary input category: If these have the same direction as the outermost slash of the primary input category, then the rule is called harmonic; otherwise it is called crossed. (Footnote 6: In versions of CCG that allow rules with mixed slash directions, the distinction between harmonic and crossed is made based on the direction of the innermost slash of the secondary input category, |i.) When a rule is applied, in most cases the arguments of the secondary input category are simply copied into the output category, as in VW-CCG. The one exception happens for crossed composition rules if not all slash directions match the direction of their slash type (left or right). In this case, the arguments of the secondary input category become inert. Thus the inertness status of an argument may change over the course of a derivation, but only from active to inert, not back again. Definition 5 A multimodal combinatory categorial grammar in the sense of OpenCCG, or O-CCG for short, is a structure G = (Σ, A, :=, d, S) where Σ is a finite vocabulary, A is a finite set of atomic categories, := is a finite relation between Σ and the set of (multimodal) categories over A, d ≥ 0 is the maximal degree of generalized composition, and S ∈ A is a distinguished atomic category. We generalize the notions of rule instances, derivation trees, and generated language to categories over slashes with types and inertness statuses in the obvious way: Instead of two slashes, we now have one slash for every combination of a direction, type, and inertness status. Similarly, we generalize the concepts of a grammar being prefix-closed (Definition 2) and without target restrictions (Definition 3) to O-CCG. We now investigate the generative capacity of O-CCG. We start with the (unsurprising) observation that O-CCG can describe non-context-free languages. Lemma 14 The languages generated by O-CCG properly include the context-free languages. The proof of Lemma 15 adapts the rewriting system from Figure 12. We simply let each rewriting step copy the type and inertness status of each slash from the left-hand side to the right-hand side of the rewriting rule. With this change, it is easy to verify that the proofs of Lemma 7 (termination and confluence), Lemma 10 (projection paths in transformed derivations are split), Lemma 11 (transformed derivations contain a finite number of categories), and Lemma 12 (transformed derivations yield a context-free language) go through without problems.
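Since the inertness bookkeeping is the one place where O-CCG deviates from a static rule-restriction mechanism, a toy model may help. The sketch below is our own simplified rendering of the behavior just described, not OpenCCG code: an argument is a triple of its slash direction, its inertness status, and the direction required by its slash type (None for an undirected core type). Run on the two instances of Example 11, it reproduces the failure of prefix-closedness.

def forward_crossed_compose(args):
    """args: the arguments of the secondary input category, innermost
    first, as triples (direction, status, type_direction). Returns the
    arguments as they appear in the output category."""
    dir_of = {"/": "r", "\\": "l"}
    # Directions conflict if some argument's slash direction disagrees
    # with the direction demanded by its (directional) slash type.
    conflict = any(t is not None and dir_of[d] != t for d, s, t in args)
    # On a conflict, every argument of the secondary input becomes inert.
    sigma = (lambda s: "-") if conflict else (lambda s: s)
    return [(d, sigma(s), t) for d, s, t in args]

# Instance (12): Z1 carries a rightward type but a leftward slash, so
# both arguments come out inert.
print(forward_crossed_compose([("\\", "+", None), ("\\", "+", "r")]))
# Instance (13): without Z1 there is no conflict, so Z2 stays active,
# which is why (13) is not the instance predicted by prefix-closedness.
print(forward_crossed_compose([("\\", "+", None)]))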
The proof of Lemma 8, however, is not straightforward, because of the dynamic nature of the inertness statuses. We therefore restate the lemma for O-CCG: Lemma 16 The rewriting system transforms O-CCG derivations into O-CCG derivations. In this section we have shown that the languages generated by O-CCG are properly included in the languages generated by TAG, and equivalently, in the languages generated by VW-CCG. This means that the multimodal machinery of OpenCCG is not powerful enough to express the rule restrictions of VW-CCG in a fully lexicalized way. The result is easy to obtain for O-CCG without inertness, which is prefix-closed and without target restrictions; but it is remarkably robust in that it also applies to O-CCG with inertness, which is not prefix-closed. As we have already mentioned, the result carries over to other multimodal versions of CCG as well, such as the formalism of Baldridge and Kruijff (2003). Our result has implications for practical grammar development with OpenCCG. To illustrate this, recall Example 7, which showed that every VW-CCG without target restrictions for Swiss German that allows cross-serial word orders as in derivation (5) also permits alternative word orders, as in derivation (6). By Lemma 15, this remains true for O-CCG or weaker multimodal formalisms. This is not a problem in the case of Swiss German, where the alternative word orders are grammatical. However, there is at least one language, Dutch, where dependencies in subordinate clauses must cross. For this case, our result shows that the modalized composition rules of OpenCCG are not powerful enough to write adequate grammars. Consider the following classical example:

. . . ik Cecilia de paarden zag voeren
. . . I Cecilia the horses saw feed
“. . . I saw Cecilia feed the horses”

The straightforward derivation of the cross-serial dependencies in this sentence (adapted from Steedman 2000, p. 141) is exemplified in Figure 16. It takes the same form as derivation (5) for Swiss German: The verbs and their NP arguments lie on a single, right-branching path projected from the tensed verb zag. This projection path is not split; specifically, it starts with a composition that produces a category which acts as the primary input category of an application. As a consequence, the derivation can be transformed (by our rewriting rule R3) in exactly the same way as derivation (5) could be transformed into derivation (6). The crucial difference is that the yield of the transformed derivation, *ik Cecilia zag de paarden voeren, is not a grammatical clause of Dutch. To address the problem of ungrammatical word orders in Dutch subordinate clauses, the VW-CCG grammar of Steedman (2000) and the multimodal CCG grammar of Baldridge (2002, Section 5.3.1) resort to combinatory rules other than composition. In particular, they assume that all complement noun phrases undergo obligatory type-raising, and become primary input categories of application rules. This gives rise to derivations such as the one shown in Figure 17, which cannot be transformed using our rewriting rules, because the result of the forward crossed composition >1 is now a secondary rather than a primary input category. As a consequence, this grammar is capable of enforcing the obligatory cross-serial dependencies of Dutch. However, it is important to note that it requires type-raising over arbitrary categories with target S (observe the increasingly complex type-raised categories for the NPs).
This kind of type-raising is allowed in many variants of CCG, including the full formalism underlying OpenCCG. VW-CCG and O-CCG, however, are limited to generalized composition, and can only support derivations like the one in Figure 17 if all the type-raised categories for the noun phrases are available in the lexicon. The unbounded type-raising required by the Steedman–Baldridge analysis of Dutch would translate into an infinite lexicon, and so this analysis is not possible in VW-CCG and O-CCG. We conclude by discussing the impact of several other constructs of OpenCCG that we have not captured in O-CCG. First, OpenCCG allows us to use generalized composition rules of arbitrary degree; there is no upper bound d on the composition degree as in an O-CCG grammar. It is known that this extends the generative capacity of CCG beyond that of TAG (Weir 1988). Second, OpenCCG allows categories to be annotated with feature structures. This has no impact on the generative capacity, as the features must take values from finite domains and can therefore be compiled into the atomic categories of the grammar. Finally, OpenCCG includes the combinatory rules of substitution and coordination, as well as multiset slashes, another extension frequently used in linguistic grammars. We have deliberately left these constructs out of O-CCG to establish the most direct comparison to the literature on VW-CCG. It is conceivable that their inclusion could restore the weak equivalence to TAG, but a proof of this result would require a non-trivial extension of the work of Vijay-Shanker and Weir (1994). Regarding multiset slashes, it is also worth noting that these were introduced with the expressed goal of allowing more flexible word order, whereas restoration of weak equivalence would require more controlled word order. 5 conclusion :In this article we have contributed two technical results to the literature on CCG. First, we have refined the weak equivalence result for CCG and TAG (Vijay-Shanker and Weir 1994) by showing that prefix-closed grammars are weakly equivalent to TAG only if target restrictions are allowed. Second, we have shown that O-CCG, the formal, composition-only core of OpenCCG, is not weakly equivalent to TAG. These results point to a tension in CCG between lexicalization and generative capacity: Lexicalized versions of the framework are less powerful than classical versions, which allow rule restrictions. What conclusions one draws from these technical results depends on the perspective. One way to look at CCG is as a system for defining formal languages. Under this view, one is primarily interested in results on generative capacity and parsing complexity such as those obtained by Vijay-Shanker and Weir (1993, 1994). Here, our results clarify the precise mechanisms that make CCG weakly equivalent to TAG. Perhaps surprisingly, it is not the availability of generalized composition rules by itself that explains the generative power of CCG, but the ability to constrain the interaction between generalized composition and function application by means of target restrictions. On the other hand, one may be interested in CCG primarily as a formalism for developing grammars for natural languages (Steedman 2000; Baldridge 2002; Steedman 2012). From this point of view, the suitability of CCG for the development of lexicalized grammars has been amply demonstrated. However, our technical results still serve as important reminders that extra care must be taken to avoid overgeneration when designing a grammar. 
In particular, it is worth double-checking that an OpenCCG grammar does not generate word orders that the grammar developer did not intend. Here the rewriting system that we presented in Figure 12 can serve as a useful tool: A grammar developer can take any derivation for a grammatical sentence, transform the derivation according to our rewriting rules, and check whether the transformed derivation still yields a grammatical sentence. It remains an open question how the conflicting desires for generative capacity and lexicalization might be reconciled. A simple answer is to add some lexicalized method for enforcing target restrictions to CCG, specifically on the application rules. However, we are not aware that this idea has seen widespread use in the CCG literature, so it may not be called for empirically. Alternatively, one might modify the rules of O-CCG in such a way that they are no longer prefix-closed, for example, by introducing some new slash type. Finally, it is possible that the constructs of OpenCCG that we set aside in O-CCG (such as type-raising, substitution, and multiset slashes) might be sufficient to achieve the generative capacity of classical CCG and TAG. A detailed study of the expressive power of these constructs would make an interesting avenue for future research.
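The grammar-debugging recipe described in this conclusion can be phrased as a small driver. In the Python sketch below, rewrite_all (exhaustive application of rules R1-R4) and is_grammatical (a human judgment or an oracle) are assumed as inputs; both names are hypothetical, and the tree encoding is the same illustrative one used earlier.

def surface_yield(tree):
    """Left-to-right concatenation of the leaves of a derivation tree."""
    if isinstance(tree, str):
        return [tree]
    _, left, right = tree
    return surface_yield(left) + surface_yield(right)

def check_word_orders(derivation, rewrite_all, is_grammatical):
    """Transform a derivation exhaustively; if the permuted yield is not
    judged grammatical, flag it for the grammar developer."""
    transformed = rewrite_all(derivation)
    sentence = " ".join(surface_yield(transformed))
    return sentence if is_grammatical(sentence) else "CHECK: " + sentence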
Joshi.""], ""title"": ""Combinatory categorial grammars: Generative power and relationship to linear context-free rewriting systems"", ""venue"": ""Proceedings of the 26th Annual Meeting"", ""year"": 1988}, {""authors"": [""White"", ""Michael.""], ""title"": ""OpenCCG: The OpenNLP CCG Library"", ""venue"": ""http://openccg.sourceforge.net/ Accessed November 13, 2013. 219"", ""year"": 2013}] acknowledgments :We are grateful to Mark Steedman and Jason Baldridge for enlightening discussions of the material presented in this article, and to the four anonymous reviewers of the article for their detailed and constructive comments. References Ajdukiewicz, Kazimierz. 1935. Die syntaktische Konnexität. Studia Philosophica, 1:1–27. Baldridge, Jason. 2002. Lexically Specified Derivational Control in Combinatory Categorial Grammar. Ph.D. thesis, University of Edinburgh, Edinburgh, UK. Baldridge, Jason and Geert-Jan M. Kruijff. 2003. Multi-modal combinatory categorial grammar. In Tenth Conference of the European Chapter of the Association for Computational Linguistics (EACL), pages 211–218, Budapest. Bar-Hillel, Yehoshua, Haim Gaifman, and Eli Shamir. 1960. On categorial and phrase structure grammars. Bulletin of the Research Council of Israel, 9F(1):1–16. Reprinted in Yehoshua Bar-Hillel. Language and Information: Selected Essays on Their Theory and Application, pages 99–115. Addison-Wesley, 1964. Curry, Haskell B., Robert Feys, and William Craig. 1958. Combinatory Logic. Volume 1. Studies in Logic and the Foundations of Mathematics. North-Holland. Eisner, Jason. 1996. Efficient normal-form parsing for Combinatory Categorial Grammar. In Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics (ACL), pages 79–86, Santa Cruz, CA. Gazdar, Gerald. 1987. Applicability of indexed grammars to natural language. In Uwe Reyle and Christian Rohrer, editors, Natural Language Parsing and Linguistic Theories. D. Reidel, pages 69–94. Joshi, Aravind K. 1985. Tree Adjoining Grammars: How much context-sensitivity is required to provide reasonable structural descriptions? In David R. Dowty, Lauri Karttunen, and Arnold M. Zwicky, editors, Natural Language Parsing. Cambridge University Press, pages 206–250. Joshi, Aravind K. and Yves Schabes. 1997. Tree-Adjoining Grammars. In Grzegorz Rozenberg and Arto Salomaa, editors, Handbook of Formal Languages, volume 3. Springer, pages 69–123. Kuhlmann, Marco, Alexander Koller, and Giorgio Satta. 2010. The importance of rule restrictions in CCG. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL), pages 534–543, Uppsala. Moortgat, Michael. 2011. Categorial type logics. In Johan van Benthem and Alice ter Meulen, editors, Handbook of Logic and Language. Elsevier, second edition, chapter 2, pages 95–179. Pollard, Carl J. 1984. Generalized Phrase Structure Grammars, Head Grammars, and Natural Language. Ph.D. thesis, Stanford University. Shieber, Stuart M. 1985. Evidence against the context-freeness of natural language. Linguistics and Philosophy, 8(3):333–343. Steedman, Mark. 1996. Surface Structure and Interpretation, volume 30 of Linguistic Inquiry Monographs. MIT Press. Steedman, Mark. 2000. The Syntactic Process. MIT Press. Steedman, Mark. 2012. Taking Scope. MIT Press. Steedman, Mark and Jason Baldridge. 2011. Combinatory Categorial Grammar. In Robert D. Borsley and Kersti Börjars, editors, Non-Transformational Syntax: Formal and Explicit Models of Grammar. Blackwell, chapter 5, pages 181–224. 
Vijay-Shanker, K. and David J. Weir. 1993. Parsing some constrained grammar formalisms. Computational Linguistics, 19(4):591–636. Vijay-Shanker, K. and David J. Weir. 1994. The equivalence of four extensions of context-free grammars. Mathematical Systems Theory, 27(6):511–546. Vijay-Shanker, K., David J. Weir, and Aravind K. Joshi. 1986. Tree adjoining and head wrapping. In Proceedings of the Eleventh International Conference on Computational Linguistics (COLING), pages 202–207, Bonn. Weir, David J. 1988. Characterizing Mildly Context-Sensitive Grammar Formalisms. Ph.D. thesis, University of Pennsylvania. Weir, David J. and Aravind K. Joshi. 1988. Combinatory categorial grammars: Generative power and relationship to linear context-free rewriting systems. In Proceedings of the 26th Annual Meeting of the Association for Computational Linguistics (ACL), pages 278–285, Buffalo, NY. White, Michael. 2013. OpenCCG: The OpenNLP CCG Library. http://openccg.sourceforge.net/ Accessed November 13, 2013. lexicalization and generative power in ccg :Marco Kuhlmann∗ Linköping University Alexander Koller∗∗ University of Potsdam Giorgio Satta† University of Padua The weak equivalence of Combinatory Categorial Grammar (CCG) and Tree-Adjoining Grammar (TAG) is a central result of the literature on mildly context-sensitive grammar formalisms. However, the categorial formalism for which this equivalence has been established differs significantly from the versions of CCG that are in use today. In particular, it allows restriction of combinatory rules on a per grammar basis, whereas modern CCG assumes a universal set of rules, isolating all cross-linguistic variation in the lexicon. In this article we investigate the formal significance of this difference. Our main result is that lexicalized versions of the classical CCG formalism are strictly less powerful than TAG. vw-ccg :restrictions. More specifically, we look at a variant of CCG consisting of the composition rules implemented in OpenCCG (White 2013), the most widely used development platform for CCG grammars. We show that this formalism is (almost) prefix-closed and cannot express target restrictions, which enables us to apply our generative capacity result from the first step. The same result holds for (the composition-only fragment of) the formalism of Baldridge and Kruijff (2003). Thus we find that, at least with existing means, the weak equivalence result of Vijay-Shanker and Weir cannot be obtained for lexicalized CCG. We conclude the article by discussing the implications of our results (Section 5). proof :No composition rule creates new arguments: Every argument that occurs in an output category already occurs in one of the input categories. Therefore, every argument must come from some word–category pair in the lexicon, of which there are only finitely many. Lemma 2 The set of secondary input categories that occur in the derivations of a VW-CCG is finite. (Footnote 3: Also, AB-grammar does not support lexicon entries for the empty string.) Every secondary input category is obtained by substituting concrete categories for the variables that occur in the non-shaded component of one of the rules specified in Figure 4. After the substitution, all of these categories occur as part of arguments. Then, with Lemma 1, we deduce that the substituted categories come from a finite set. At the same time, each grammar specifies a finite set of rules. This means that there are only finitely many ways to obtain a secondary input category.
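The finiteness argument behind Lemmas 1 and 2 amounts to a closure computation over the lexicon. The sketch below uses an assumed category encoding of our own (a pair of a target and a tuple of arguments, outermost first, each argument a pair of a slash and a category) and collects every argument the lexicon makes available; by the observation above, no derivation can contain any others.

def arguments_of(category):
    """Yield every argument occurring anywhere inside a category."""
    _, args = category
    for slash, arg_cat in args:
        yield (slash, arg_cat)
        yield from arguments_of(arg_cat)

def lexical_arguments(lexicon):
    """The finite set of arguments available to a grammar."""
    found = set()
    for _, category in lexicon:
        found.update(arguments_of(category))
    return found

A = ("A", ())
B = ("B", ())
lexicon = [("a", A), ("b", ("S", (("/", B), ("\\", A))))]
print(lexical_arguments(lexicon))  # {('/', ('B', ())), ('\\', ('A', ()))}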
When specifying VW-CCGs, we sometimes find it convenient to provide an explicit list of valid rule instances, rather than a textual description of rule restrictions. For this we use a special type of restricted rule that we call templates. A template is a restricted rule that simultaneously fixes both (a) the target of the primary input category of the rule, and (b) the entire secondary input category. We illustrate the idea with an example. Example 4 We list the templates that correspond to the rule instances in the derivation from Figure 6. (The grammar allows other instances that are not listed here.) We use the symbol $ as a placeholder for that part of a primary input category that is unconstrained by rule restrictions, and therefore may consist of an arbitrary sequence of arguments.

A   S$\A   ⇛   S$      (1)
S$/C   C/A   ⇛   S$/A      (2)
S$/B   B/C   ⇛   S$/C      (3)
S$/B   B/C/B   ⇛   S$/C/B      (4)

For example, template (1) characterizes backward application (<0) where the target of the primary input category is S and the secondary input category is A, and template (4) characterizes forward composition of degree 2 (>2) where the target of the primary input category is S and the secondary input category is B/C/B. Note that every VW-CCG can be specified using a finite set of templates: It has a finite set of combinatory rules; the set of possible targets of the primary input category of each rule is finite because each target is an atomic category; and the set of possible secondary input categories is finite because of Lemma 2. We are given a TAG G and construct a weakly equivalent VW-CCG G′. The basic idea is to make the lexical categories of G′ correspond to the elementary trees of G, and to set up the combinatory rules and their restrictions in such a way that the derivations of G′ correspond to derivations of G. Vocabulary, Atomic Categories. The vocabulary of G′ is the set of all terminal symbols of G; the set of atomic categories consists of all symbols of the form A_t, where either A is a nonterminal symbol of G and t ∈ {a, c}, or A is a terminal symbol of G and t = a. The distinguished atomic category of G′ is S_a, where S is the start symbol of G. Lexicon. One may assume (cf. Vijay-Shanker, Weir, and Joshi 1986) that G is in the normal form shown in Figure 9. In this normal form there is a single initial S-tree, and all remaining elementary trees are auxiliary trees of one of five possible types. For each such tree, one constructs two lexicon entries for the empty string ε as specified in Figure 9. Additionally, for each terminal symbol x of G, one constructs a lexicon entry x := x_a. Rules. The rules of G′ are forward and backward application and forward and backward composition of degree at most 2. They are used to simulate adjunction operations in derivations of G: Application simulates adjunction into nodes to the left or right of the foot node; composition simulates adjunction into nodes above the foot node. Without restrictions, these rules would allow derivations that do not correspond to derivations of G. Therefore, rules are restricted such that an argument of the form |A_t can be eliminated by means of an application rule only if t = a, and by means of a composition rule only if t = c. This enforces two properties that are central for the correctness of the construction (Weir 1988, p. 119): First, the secondary input category in every instance of composition is a category that has just been introduced from the lexicon. Second, categories cannot be combined in arbitrary orders. The rule restrictions are: 1. Forward and backward application are restricted to instances where both the target of the primary input category and the entire secondary input category take the form A_a. 2. Forward and backward composition are restricted to instances where the target of the primary input category takes the form A_a and the target of the secondary input category takes the form A_c. Using our template notation, the restricted rules can be written as in Figure 10.
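Template matching is mechanical enough to spell out. The sketch below reuses our illustrative encoding and additionally records the degree n of the rule in the template, so that the outermost argument of the primary input can be checked against the category Y that results from stripping the n outermost arguments off the secondary input.

def strip_outermost(category, n):
    """Remove the n outermost arguments of a category."""
    target, args = category
    return (target, args[n:])  # args are listed outermost first

def matches(template, primary, secondary):
    """template = (primary_target, slash, secondary, degree)."""
    t_target, slash, t_secondary, n = template
    p_target, p_args = primary
    if secondary != t_secondary or p_target != t_target or not p_args:
        return False
    # The outermost argument of the primary input must consume the
    # secondary input minus its n outermost arguments (the category Y).
    return p_args[0] == (slash, strip_outermost(secondary, n))

B = ("B", ())
C = ("C", ())
BCB = ("B", (("/", B), ("/", C)))       # the category B/C/B
template4 = ("S", "/", BCB, 2)          # template (4): degree 2
print(matches(template4, ("S", (("/", B),)), BCB))  # True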
As an aside we note that the proof of Lemma 3 makes heavy use of the ability of VW-CCG to assign lexicon entries to the empty string. Such lexicon entries violate one of the central linguistic principles of CCG, the Principle of Adjacency, according to which combinatory rules may only apply to phonologically realized entities (Steedman 2000, p. 54). It is an interesting question for future research whether a version of VW-CCG without lexicon entries for the empty string remains weakly equivalent to TAG. Every prefix-closed VW-CCG is a VW-CCG; therefore the inclusion follows from Theorem 1. To show that every TAG language can be generated by a prefix-closed VW-CCG, we recall the construction of a weakly equivalent VW-CCG for a given TAG that we sketched in the proof of Lemma 3. As already mentioned in Example 6, the grammar G′ constructed there is not prefix-closed. However, we can make it prefix-closed by explicitly allowing the “missing” rule instances:

A_a/B_c   B_c   ⇛   A_a      where A, B ∈ V
B_c   A_a\B_c   ⇛   A_a      where A, B ∈ V

We shall now argue that this modification does not actually change the language generated by G′. The only categories that qualify as secondary input categories of the new instances are atomic categories of the form B_c, where B is a nonterminal of the TAG G. Now the lexical categories of G′ either are of the form x_a (where x is a terminal symbol) or are non-atomic. Categories of the form B_c are not among the derived categories of G′ either, as the combinatory rules only yield output categories whose targets have the form B_a. This means that the new rule instances can never be used in a complete derivation of G′, and therefore do not change the generated language. Thus we have a construction that turns a TAG into a weakly equivalent prefix-closed VW-CCG. Every prefix-closed VW-CCG without target restrictions is a VW-CCG, so the inclusion follows from Theorem 1. To see that the inclusion is proper, consider the TAG language L_1 = {a^n b^n c^n | n ≥ 1} from Example 5. We are interested in sublanguages L′ ⊆ L_1 that are Parikh-equivalent to the full language L_1. This property is trivially satisfied by L_1 itself. Moreover, it is not hard to see that L_1 is in fact the only sublanguage of L_1 with this property: No two strings in L_1 have the same Parikh vector, so a sublanguage that misses even one string of L_1 cannot be Parikh-equivalent to it. Now in Section 3.4 we shall prove a central lemma (Lemma 6), which asserts that, if we assume that L_1 is generated by a prefix-closed VW-CCG without target restrictions, then at least one of the Parikh-equivalent sublanguages of L_1 must be context-free. Because L_1 is the only such sublanguage, this would give us proof that L_1 is context-free; but we know it is not. Therefore we conclude that L_1 is not generated by a prefix-closed VW-CCG without target restrictions. Before turning to the proof of the central lemma (Lemma 6), we establish two other results about the languages generated by grammars without target restrictions. Lemma 4 The languages generated by prefix-closed VW-CCG without target restrictions properly include the context-free languages.
Inclusion follows from the fact that AB-grammars (which generate all context-free languages) are prefix-closed VW-CCGs without target restrictions. To see that the inclusion is proper, consider a grammar G_2 that is like G_1 but does not have any rule restrictions. This grammar is trivially prefix-closed and without target restrictions; it is actually “pure” in the sense of Kuhlmann, Koller, and Satta (2010). The language L_2 = L(G_2) contains all the strings in L_1 = {a^n b^n c^n | n ≥ 1}, together with other strings, including the string bbbacacac, whose derivation we showed in Figure 7. It is not hard to see that all of these additional strings have an equal number of as, bs, and cs. We can therefore write L_1 as an intersection of L_2 and a regular language: L_1 = L_2 ∩ a*b*c*. To obtain a contradiction, suppose that L_2 is context-free; then, because context-free languages are closed under intersection with regular languages, the language L_1 would be context-free as well; but we know it is not. Therefore we conclude that L_2 is not context-free either. Lemma 5 The class of languages generated by prefix-closed VW-CCG without target restrictions is not closed under intersection with regular languages. If the class of languages generated by prefix-closed VW-CCG without target restrictions were closed under intersection with regular languages, then with L_2 (the language mentioned in the previous proof) it would also include the language L_1 = L_2 ∩ a*b*c*. However, from the proof of Theorem 3 we know that L_1 is not generated by any prefix-closed VW-CCG without target restrictions. To argue that the system is terminating, we note that each rewriting step decreases the arity of one secondary input category in the derivation by one unit, while all other secondary input categories are left unchanged. As an example, consider rewriting under R1. The secondary input categories in the scope of that rule are Yβ/Z and Z on the left-hand side, and Yβ and Z on the right-hand side. Here the arity of Yβ equals the arity of Yβ/Z, minus one. Because the system is terminating, to see that it is also confluent, it suffices to note that the left-hand sides of the rewrite rules do not overlap. Lemma 8 The rewriting system transforms derivations of G into derivations of G. We prove the stronger result that every rewriting step transforms derivations of G into derivations of G. We only consider rewriting under R1; the arguments for the other rules are similar. Assume that R1 is applied to a derivation of G. The rule instances in the scope of the left-hand side of R1 take the following form:

X/Y   Yβ/Z   ⇛   Xβ/Z      (8)
Xβ/Z   Z   ⇛   Xβ      (9)

Turning to the right-hand side, the rule instances in the rewritten derivation are

Yβ/Z   Z   ⇛   Yβ      (10)
X/Y   Yβ   ⇛   Xβ      (11)

The relation between instances (8) and (11) is the characteristic relation of prefix-closed grammars (Definition 2): If instance (8) is valid, then, because G is prefix-closed, instance (11) is valid as well. Similarly, the relation between instances (9) and (10) is the characteristic relation of grammars without target restrictions (Definition 3): If instance (9) is valid, then, because G is without target restrictions, instance (10) is valid as well. We conclude that if R1 is applied to a derivation of G, then the result is another derivation of G. Combining Lemma 7 and Lemma 8, we see that for every derivation d of G, exhaustive application of the rewriting rules produces another uniquely determined derivation of G. We shall refer to this derivation as R(d).
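Both ingredients of these proofs, Parikh equivalence and intersection with the regular language a*b*c*, are easy to experiment with on finite fragments. The sketch below checks Definition 4 on finite sets of strings and replays the intersection argument on three strings of L_2; the chosen fragment is ours, for illustration only.

import re
from collections import Counter

def parikh(word):
    """The Parikh vector of a string: symbol counts, order ignored."""
    return Counter(word)

def parikh_equivalent(lang1, lang2):
    """Definition 4, restricted to finite language fragments."""
    vecs1 = [parikh(w) for w in lang1]
    vecs2 = [parikh(w) for w in lang2]
    return all(v in vecs2 for v in vecs1) and all(v in vecs1 for v in vecs2)

fragment_of_l2 = {"abc", "aabbcc", "bbbacacac"}
in_l1 = {w for w in fragment_of_l2 if re.fullmatch(r"a*b*c*", w)}
print(sorted(in_l1))                               # ['aabbcc', 'abc']
print(parikh("bbbacacac") == parikh("aaabbbccc"))  # True: equal counts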
A transformed derivation is any derivation d′ such that d′ = R(d) for some derivation d. Let Y be the set of yields of the transformed derivations. Every string w′ ∈ Y is obtained from a string w ∈ L(G) by choosing some derivation d of w, rewriting this derivation into the transformed derivation R(d), and taking the yield. Inclusion then follows from Lemma 8. Because of the permuting rules R3 and R4, the strings w and w′ will in general be different. What we can say, however, is that w and w′ will be equal up to permutation. Thus we have established that Y and L(G) are Parikh-equivalent. What remains in order to prove Lemma 6 is to show that the yields of the transformed derivations form a context-free language. 3.4.3 Context-Freeness of the Sublanguage. In a derivation tree, every node except the root node is labeled with either the primary or the secondary input category of a combinatory rule. We refer to these two types of nodes as primary nodes and secondary nodes, respectively. To simplify our presentation, we shall treat the root node as a secondary node. We restrict our attention to derivation trees for strings in L(G); in these trees, the root node is labeled with the distinguished atomic category S. For a leaf node u, the projection path of u is the path that starts at the parent of u and ends at the first secondary node that is encountered on the way towards the root node. We denote a projection path as a sequence X_1, . . . , X_n (n ≥ 1), where X_1 is the category at the parent of u and X_n is the category at the secondary node. Note that the category X_1 is taken from the lexicon, while every other category is derived by combining the preceding category on the path with some secondary input category (not on the path) by means of some combinatory rule. Example 10 In the derivation in Figure 6, the projection path of the first b goes all the way to the root, while all other projection paths have length 1, starting and ending with a lexical category. In Figure 7, the projection path of the first b ends at the root, while the projection paths of the remaining bs end at the nodes with category B, and the projection paths of the cs end at the nodes with category C. A projection path X_1, . . . , X_n is split if it can be segmented into two parts X_1, . . . , X_s and X_s, . . . , X_n (1 ≤ s ≤ n) such that the first part only uses application rules and the second part only uses composition rules. Note that any part may consist of a single category only, in which case no combinatory rule is used in that part. If n = 1, then the path is trivially split. All projection paths in Figures 6 and 7 are split, except for the path of the first b in Figure 6, which alternates between composition (with C/A) and application (with A). Lemma 10 In transformed derivations, every projection path is split. We show that as long as a derivation d contains a projection path that is not split, it can be rewritten. A projection path that is not split contains three adjacent categories U, V, W, such that V is derived by means of a composition with primary input U, and W is derived by means of an application with primary input V. Suppose that both the composition and the application are forward. (The arguments for the other three cases are similar.) Then U can be written as X/Y for some category X and argument /Y, V can be written as Xβ/Z for some argument /Z and some (possibly empty) sequence of arguments β, and W can be written as Xβ. We can then convince ourselves that d contains the following configuration, which matches the left-hand side of rewriting rule R1:

X/Y   Yβ/Z   ⇛   Xβ/Z      Xβ/Z   Z   ⇛   Xβ
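The split property depends only on the sequence of rule kinds used along a projection path, which makes it easy to state as a check; the encoding below (a path as a list of "app"/"comp" labels, read from the lexical end towards the secondary node) is ours.

def is_split(rules_along_path):
    """True iff the path uses only applications first and afterwards
    only compositions; either part may be empty (Lemma 10)."""
    seen_composition = False
    for rule in rules_along_path:
        if rule == "comp":
            seen_composition = True
        elif seen_composition:
            return False  # an application after a composition
    return True

print(is_split(["app", "app", "comp"]))  # True
print(is_split(["comp", "app"]))         # False: exactly the R1 redex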
We can then convince ourselves that d contains the following configuration, which matches the left-hand side of rewriting rule R1: a composition step X/Y Yβ/Z ⇒ Xβ/Z whose output serves as the primary input of an application step Xβ/Z Z ⇒ Xβ. Lemma 11 The set of all categories that occur in transformed derivations is finite. Every category that occurs in transformed derivations occurs on some of its projection paths. Consider any such path. By Lemma 10 we know that this path is split; its two parts, here called P1 and P2, are visualized in Figure 13. We now reason about the arities of the categories in these two parts. 1. Because P1 only uses application, the arities in this part get smaller and smaller until they reach their minimum at Xs. This means that the arities of P1 are bounded by the arity of the first category on the path, which is a category from the lexicon. 2. Because P2 only uses composition, the arities in this part either get larger or stay the same until they reach a maximum at Xn. This means that the arities of P2 are bounded by the arity of the last category on the path, which is either the distinguished atomic category S or a secondary input category. Thus the arities of our chosen path are bounded by the maximum of three grammar-specific constants: the maximal arity of a lexical category, the arity of S (which is 0), and the maximal arity of a secondary input category. The latter value is well-defined because there are only finitely many such categories (by Lemma 2). Let k be the maximum among the three constants, and let K be the set of all categories of the form A |m Xm ··· |1 X1 where A is an atomic category of G, m ≤ k, and each |i Xi is an argument that may occur in derivations of G. The set K contains all categories that occur on some projection path, and therefore all categories that occur in transformed derivations, but it may also include other categories. As there are only finitely many atomic categories and finitely many arguments (Lemma 1), we conclude that the set K, and hence the set of categories that occur in transformed derivations, are finite as well. Lemma 12 The transformed derivations yield a context-free language. We construct a context-free grammar H that generates the set Y of yields of the transformed derivations. To simplify the presentation, we first construct a grammar H′ that generates a superset of Y. Construction of H′. The construction of the grammar H′ is the same as the construction in the classical proof of the context-freeness of AB-grammars by Bar-Hillel, Gaifman, and Shamir (1960): The production rules of H′ are set up to correspond to the valid rule instances of G. The reason that this construction is not useful for VW-CCGs in general is that these may admit infinitely many rule instances, whereas a context-free grammar can only have finitely many productions. The set of rule instances may be infinite because VW-CCG has access to composition rules (specifically, rules of degrees greater than 1); in contrast, AB-grammars are restricted to application. Crucially though, by Lemma 11 we know that as long as we are interested only in transformed derivations it is sufficient to use a finite number of rule instances—more specifically those whose input and output categories are included in the set K of arity-bounded categories. Thus for every instance X/Y Yβ ⇒ Xβ where all three categories are in K, we construct a production [Xβ] → [X/Y] [Yβ], and similarly for backward rules. (We enclose categories in square brackets for clarity.) In addition, for every lexicon entry σ := X in G we add to H′ a production [X] → σ.
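A hedged sketch of this finite construction, reusing the toy Cat encoding from the sketch above and assuming, for simplicity, that the argument category Y is atomic; the validity test is a stub, since the real test depends on the rule system of G:

```python
# Enumerate forward instances X/Y Yb => Xb whose three categories all lie
# in the finite set K, and emit one production [Xb] -> [X/Y] [Yb] each.
# Lexicon entries s := X become productions [X] -> s.

def forward_productions(K, valid=lambda prim, sec, out: True):
    prods = set()
    for prim in K:                                   # candidate primary X/Y
        if not (prim.args and prim.args[-1].startswith("/")):
            continue
        y = prim.args[-1][1:]                        # the argument Y (atomic here)
        for sec in (c for c in K if c.target == y):  # candidate secondary Yb
            out = Cat(prim.target, prim.args[:-1] + sec.args)  # output Xb
            if out in K and valid(prim, sec, out):
                prods.add((out, (prim, sec)))
    return prods

def lexical_productions(lexicon):      # lexicon: iterable of (word, Cat)
    return {(cat, (word,)) for word, cat in lexicon}
```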
As the terminal alphabet of H′ we choose the vocabulary of G; as the nonterminal alphabet we choose the set K; and as the start symbol we choose the distinguished atomic category S. Every transformed derivation of G corresponds (in an obvious way) to some derivation in H′, which proves that Y ⊆ L(H′). Conversely, every derivation of H′ represents a derivation of G (though not necessarily a transformed derivation), thus L(H′) ⊆ L(G). Construction of H. The chain of inclusions Y ⊆ L(H′) ⊆ L(G) is sufficient to prove Lemma 6: Because Y and L(G) are Parikh-equivalent (which we observed at the beginning of Section 3.4.2), so are L(H′) and L(G), which means that L(H′) satisfies all of the properties claimed in Lemma 6, even though this does not suffice to prove our current lemma. However, once H′ is given, it is not hard to also obtain a grammar H that generates exactly Y. For this, we need to filter out derivations whose projection paths do not have the characteristic property of transformed derivations that we established in Lemma 10. (It is not hard to see that every derivation that does have this property is a transformed derivation.) We annotate the left-hand side nonterminals in the productions of H′ with a flag t ∈ {a, c} to reflect whether the corresponding category has been derived by means of application (t = a) or composition (t = c); the value of this flag is simply the type of combinatory rule that gave rise to the production. The nonterminals in the right-hand sides are annotated in all possible ways, except that the following combinations are ruled out: [X]^a → [X/Y]^c [Y]^t and [X]^a → [Y]^t [X\Y]^c, for t ∈ {a, c}. These combinations represent exactly the cases where the output category of a composition rule is used as the primary input category of an application rule, which are the cases that violate the "split" property that we established in Lemma 10. This concludes the proof of Lemma 6, and therefore the proof of Theorem 3. Inclusion follows from the fact that every AB-grammar can be written as an O-CCG with only application (d = 0). To show that the inclusion is proper, we use the same argument as in the proof of Lemma 4. The grammar G2 that we constructed there can be turned into an equivalent O-CCG by decorating each slash with ·, the least restrictive type, and setting its inertness status to +. What is less obvious is whether O-CCG generates the same class of languages as VW-CCG and TAG. Our main result is that this is not the case. Theorem 4 The languages generated by O-CCG are properly included in the TAG languages. O-CCG without Inertness. To approach Theorem 4, we set inertness aside for a moment and focus on the use of the slash types as a mechanism for imposing rule restrictions. Each of the rules in Figure 15 requires all of the slash types of the n outermost arguments of its secondary input category to be compatible with the rule, in the sense specified in Figure 14. If we now remove one or more of these arguments from a valid rule instance, then the new instance is clearly still valid, as we have reduced the number of potential violations of the type–rule compatibility. This shows that the rule system is prefix-closed. As none of the rules is conditioned on the target of the primary input category, the rule system is even without target restrictions. With these two properties established, Theorem 4 can be proved by literally the same arguments as those that we gave in Section 3.
Thus we see directly that the theorem holds for versions of multimodal CCG without inertness, such as the formalism of Baldridge and Kruijff (2003). O-CCG with Inertness. In the general case, the situation is complicated by the fact that the crossed composition rules change the inertness status of some argument categories if the slash types have conflicting directions. This means that the crossed composition rules in O-CCG are not entirely prefix-closed, as illustrated by the following example. Example 11 Consider the following two rule instances: X/+× Y Y/+× Z2 /+× Z1 ⇒ X/−× Z2 /−× Z1 (12) and X/+× Y Y/+× Z2 ⇒ X/−× Z2 (13). Instance (12) is a valid instance of forward crossed composition. Prefix-closedness would require instance (13) to be valid as well; but it is not. In instance (12) the inertness status of /+× Z2 is changed for the only reason that the slash type of /+× Z1 does not match the required direction. In instance (13) the argument /+× Z1 is not present, and therefore the inertness status of /+× Z2 is not changed, but is carried over to the output category. We therefore have to prove that the following analogue of Lemma 6 holds for O-CCG: Lemma 15 (Main Lemma for O-CCG) For every language L generated by some O-CCG there is a sublanguage L′ ⊆ L such that 1. L′ and L are Parikh-equivalent, and 2. L′ is context-free. This lemma implies that the language L1 = {a^n b^n c^n | n ≥ 1} from Example 2 cannot be generated by O-CCG (whereas it is generated by prefix-closed VW-CCG with target restrictions, and by TAG). The argument is the same as in the proof of Theorem 3. As in the proof of Lemma 8 we establish the stronger result that the claimed property holds for every single rewriting step. We only give the argument for rewriting under R3, which involves instances of forward crossed composition. The argument for R4 is analogous, and R1 and R2 are simpler cases because they involve harmonic composition, where the inertness status does not change. Suppose that R3 is applied to a derivation of some O-CCG. In their most general form, the rule instances in the scope of the left-hand side of R3 may be written as follows, where the function σ is defined as specified in Figure 15: X/+t0 Y Y/sn tn Zn ··· /s2 t2 Z2 /s1 t1 Z1 ⇒ X/σ(sn) tn Zn ··· /σ(s2) t2 Z2 /σ(s1) t1 Z1 (14) and Z1 X/σ(sn) tn Zn ··· /σ(s2) t2 Z2 /+ t1 Z1 ⇒ X/σ(sn) tn Zn ··· /σ(s2) t2 Z2 (15). Here instance (14) is an instance of forward crossed composition, so each of the types ti is compatible with that rule. Because the two marked arguments are identical (the innermost argument in the output of (14) is the argument consumed in (15)), we have σ(s1) = +. This is only possible if the inertness statuses of the slashes /si ti do not change in the context of derivation (14), that is, if σ(si) = si for all 1 ≤ i ≤ n. Note that in this case, t0 is either a right type or one of the four undirected core types, and each t1, ..., tn is either a left type or a core type. We can now alternatively write instances (14) and (15) as X/+t0 Y Y/sn tn Zn ··· /s2 t2 Z2 /+ t1 Z1 ⇒ X/sn tn Zn ··· /s2 t2 Z2 /+ t1 Z1 (14′) and Z1 X/sn tn Zn ··· /s2 t2 Z2 /+ t1 Z1 ⇒ X/sn tn Zn ··· /s2 t2 Z2 (15′). Then the rule instances in the rewritten derivation can be written as follows: Z1 Y/sn tn Zn ··· /s2 t2 Z2 /+ t1 Z1 ⇒ Y/sn tn Zn ··· /s2 t2 Z2 (16) and X/+t0 Y Y/sn tn Zn ··· /s2 t2 Z2 ⇒ X/sn tn Zn ··· /s2 t2 Z2 (17). Here instance (16) is clearly a valid instance of backward application.
Based on our earlier observations about the ti and their compatibility with crossed composition, we also see that instance (17) is a valid instance of forward crossed composition (if n > 1), or of forward application (if n = 1). This completes the proof of Lemma 15. To finish the proof of Theorem 4 we have to also establish the inclusion of the O-CCG languages in the TAG languages. This is a known result for other dialects of multimodal CCG (Baldridge and Kruijff 2003), but O-CCG once again requires some extra work because of inertness. Lemma 17 The O-CCG languages are included in the TAG languages. It suffices to show that the O-CCG languages are included in the class of languages generated by LIG (Gazdar 1987); the claim then follows from the weak equivalence of LIG and TAG. Vijay-Shanker and Weir (1994, Section 3.1) present a construction that transforms an arbitrary VW-CCG into a weakly equivalent LIG. It is straightforward to adapt their construction to O-CCG. As we do not have the space here to define LIG, we only provide a sketch of the adapted construction. As in the case of VW-CCG, the valid instances of an O-CCG rule can be written down using our template notation. The adapted construction converts each such template into a production rule of a weakly equivalent LIG. Consider for instance the following instance of forward crossed composition from Example 11: A/+× Y Y/+× Z2 /+× Z1 ⇒ A/−× Z2 /−× Z1. This template is converted into the following LIG rule. We adopt the notation of Vijay-Shanker and Weir (1994) and write ◦◦ for the tail of a stack of unbounded size: A[◦◦ /−× Z2 /−× Z1] → A[◦◦ /+× Y] Y[/+× Z2 /+× Z1]. In this way, every O-CCG can be written as a weakly equivalent LIG.","5 conclusion :In this article we have contributed two technical results to the literature on CCG. First, we have refined the weak equivalence result for CCG and TAG (Vijay-Shanker and Weir 1994) by showing that prefix-closed grammars are weakly equivalent to TAG only if target restrictions are allowed. Second, we have shown that O-CCG, the formal, composition-only core of OpenCCG, is not weakly equivalent to TAG. These results point to a tension in CCG between lexicalization and generative capacity: Lexicalized versions of the framework are less powerful than classical versions, which allow rule restrictions. What conclusions one draws from these technical results depends on the perspective. One way to look at CCG is as a system for defining formal languages. Under this view, one is primarily interested in results on generative capacity and parsing complexity such as those obtained by Vijay-Shanker and Weir (1993, 1994). Here, our results clarify the precise mechanisms that make CCG weakly equivalent to TAG. Perhaps surprisingly, it is not the availability of generalized composition rules by itself that explains the generative power of CCG, but the ability to constrain the interaction between generalized composition and function application by means of target restrictions. On the other hand, one may be interested in CCG primarily as a formalism for developing grammars for natural languages (Steedman 2000; Baldridge 2002; Steedman 2012). From this point of view, the suitability of CCG for the development of lexicalized grammars has been amply demonstrated. However, our technical results still serve as important reminders that extra care must be taken to avoid overgeneration when designing a grammar.
In particular, it is worth double-checking that an OpenCCG grammar does not generate word orders that the grammar developer did not intend. Here the rewriting system that we presented in Figure 12 can serve as a useful tool: A grammar developer can take any derivation for a grammatical sentence, transform the derivation according to our rewriting rules, and check whether the transformed derivation still yields a grammatical sentence. It remains an open question how the conflicting desires for generative capacity and lexicalization might be reconciled. A simple answer is to add some lexicalized method for enforcing target restrictions to CCG, specifically on the application rules. However, we are not aware that this idea has seen widespread use in the CCG literature, so it may not be called for empirically. Alternatively, one might modify the rules of O-CCG in such a way that they are no longer prefix-closed—for example, by introducing some new slash type. Finally, it is possible that the constructs of OpenCCG that we set aside in O-CCG (such as type-raising, substitution, and multiset slashes) might be sufficient to achieve the generative capacity of classical CCG and TAG. A detailed study of the expressive power of these constructs would make an interesting avenue for future research." "1 introduction :A growing body of work in computational linguistics (CL hereafter) or natural language processing manifests an interest in corpus studies, and requires reference annotations for system evaluation or machine learning purposes. The question is how to ensure that an annotation can be considered, if not as the "truth," then at least as a suitable reference. For some simple and systematic tasks, domain experts may be able to annotate texts with almost total confidence, but this is generally not the case when no expert is available, or when the tasks become harder. The very notion of "truth" may even be utopian when the annotation process includes a certain degree of interpretation, and we should in such cases look for a consensus, also called the "gold standard," rather than for the "truth." For these reasons, a classic strategy for building annotated corpora with sufficient confidence is to give the same annotation task to several annotators, and to analyze to what extent they agree in order to assess the reliability of their annotations. This is the aim of inter-annotator agreement measures. It is important to point out that most of these measures do not evaluate the distance from annotations to the "truth," but rather the distance across annotators.
Of course, the hope is that the annotators will agree as far as possible, and it is usually considered that a good inter-annotator agreement ensures the constancy and the reproducibility of the annotations: When agreement is high, then the task is consistent and correctly defined, and the annotators can be expected to agree on another part of the corpus, or at another time, and their annotations therefore constitute a consensual reference (even if, as shown for example by Reidsma and Carletta [2008], such an agreement is not necessarily informative for machine learning purposes). Moreover, once several annotators reach good agreement on a given part of a corpus, then each of them can annotate alone other parts of the corpus with great confidence in the reproducibility (see the preface to Gwet [2012, page 6] for illuminating considerations). Consequently, inter-annotator agreement measurement is an important point for all annotation efforts because it is often considered that a given agreement value provided by a given method validates or invalidates the consistency of an annotation effort. How to measure agreement, and how we define a good measure, is another part of the problem. There is no universal answer, because how to measure depends on the nature of the task, hence on the kind of annotations. Admittedly, much work has already been done for some kinds of annotation efforts, namely, when annotators have to choose a category for previously identified entities. This approach, which we will call pure categorization, has led to several well-known and widely discussed coefficients such as κ, π, or α, since the 1950s. Some more recent efforts have been made in the domain of unitizing, following Krippendorff's terminology (Krippendorff 2013), where annotators have to identify by themselves what the elements to be annotated in a text are, and where they are located. Studies are scarce, however, as Krippendorff pointed out: "Measuring the reliability of unitizing has been largely ignored in favor of coding predefined units" (Krippendorff 2013, page 310). This scarcity concerns either segmentation, where annotators simply have to mark boundaries in texts to separate contiguous segments, or more generally unitizing, where gaps may exist between units. Moreover, some even more complex configurations may occur (overlapping or embedding units), which are more rarely taken into account. And when categorization meets unitizing, as is the case in CL in such fields as, for example, NAMED ENTITY RECOGNITION or DISCOURSE FRAMING, very few methods are proposed and discussed. That is the main problem we focus on in this article and to which γ provides solutions. (Small caps are used to refer to the examples of annotation tasks introduced in Section 2.2.) The new coefficient γ that is introduced in this article is an agreement measure concerning the joint tasks of unit locating (unitizing) and unit labeling (categorization). It relies on an alignment of units between different annotators, with penalties associated with each positional and categorial discrepancy. The alignment itself is chosen to minimize the overall discrepancy in a holistic way, considering the full continuum to make choices, rather than making local choices. The proposed method is unified because the computation of γ and the selection of the best alignment are interdependent: The computed measure depends on the chosen alignment, whose selection depends on the measure.
This method and the principles proposed in this article have been built up since 2010, and were first presented to the French community in a very early version in Mathet and Widlöcher (2011). The initial motivation for their development was the lack of dedicated agreement measures for annotations at the discourse level, and more specifically for annotation tasks related to TOPIC TRANSITION phenomena. The article is organized as follows. First, we fix the scope of this work by defining the important notions that are necessary to characterize annotation tasks and by introducing the examples of linguistic objects and annotation tasks used in this article to compare available metrics. Second, we analyze the state of the art and identify the weaknesses of current methods. Then, we introduce our method, called γ. As this method is new, we compare it to the ones already in use, even in their specialized fields (pure categorization, or pure segmentation), and show that it has better properties overall for CL purposes.","2 motivations, scope, and illustrations :We focus in the present work on both categorizing and unitizing, and consider therefore annotation tasks where annotators are not provided with preselected units, but have to locate them and to categorize them at the same time. An example of a multi-annotated continuum (this continuum may be a text or, for example, an audio or a video recording) is provided in Figure 1, where each line represents the annotations of a given annotator, from left to right, respecting the continuum order. In order to characterize the annotation efforts focusing on specific linguistic objects, we consider the following properties, illustrated in Figure 1. Categorization occurs when the annotator is required to label (predefined or not) units. Unitizing occurs when the annotator is asked to identify the units in the continuum: She has to determine each of them (and the number of units that she wants) and to locate them by positioning their boundaries. Embedding (hierarchical overlap) may occur if units may be embedded in larger ones (of the same type, or not). Free overlap may occur when guidelines tolerate the partial overlap of elements (mainly of different types). Embedding is a special case of overlapping. A segmentation without overlap (hierarchical or free) is said to be strictly linear. Full-covering (vs. sporadicity) applies when all parts of the continuum are to be annotated. For other tasks, parts of the continuum are selected. Aggregatable types or instances correspond to the fact that several adjacent elements having the same type may aggregate, without shift in meaning, in a larger span having the same type. This larger span is said to be atomizable: Labeling the whole span or labeling all of its atoms are considered as equivalent, as illustrated by Figure 2. Two specific cases. We call hereafter pure segmentation (illustrated by Figure 3) the special case of unitizing with full-covering and without categorization, and we call pure categorization categorization without unitizing. To present the state of the art as well as our own propositions, and to make all of them more concrete, it is useful to mention examples of linguistic objects and annotation tasks for which agreement measures may be required. The following sections will then refer to these examples as often as possible, in order to illustrate discussions on abstract problems or configurations. Small caps are used to refer to the names of these tasks. 
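The task properties just defined lend themselves to a small illustrative encoding; a minimal Python sketch (the TaskProfile name and its fields are ours), with the two special cases expressed in terms of the basic properties:

```python
# Encoding of the task properties, with the two special cases defined in
# terms of them.

from dataclasses import dataclass

@dataclass
class TaskProfile:
    categorization: bool
    unitizing: bool
    full_covering: bool = False
    overlap: bool = False

def is_pure_segmentation(t: TaskProfile) -> bool:
    # unitizing with full-covering and without categorization
    return t.unitizing and t.full_covering and not t.categorization

def is_pure_categorization(t: TaskProfile) -> bool:
    # categorization without unitizing
    return t.categorization and not t.unitizing
```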
Table 1 summarizes the properties of the linguistic objects and annotation tasks mentioned in this article to illustrate and compare methods and metrics. These objects and tasks are briefly described for convenience in Appendix A. This table shows that annotation of TOPIC TRANSITIONS is the most demanding of the tasks regarding the number of necessary criteria a suitable agreement metric should assess, but most of the tasks listed here require assessment of both unitizing and categorization.","3 state of the art :As we saw in the previous section, different studies in linguistics or CL involve quite different structures, which may lead to annotation guidelines having very different properties. They require suitable metrics in order to assess agreement among annotators. As we will see, some of the needs for which γ is suitable are not satisfied by other available metrics. Note that this description of the state of the art mainly focuses on the questions which are of most importance for this work, in particular, chance correction and unitizing. For a thorough introduction to the most popular measures that concern categorizing, we refer the reader to the excellent survey by Artstein and Poesio (2008). In this section, we first address the question of chance correction in agreement measures, then we give an overview of available measures in three domains: pure categorization, pure segmentation, and unitizing. We begin the state of the art with the question of chance correction, because it is a crosscutting issue in all agreement measure domains, and because it influences the final value provided by most agreement measures. It is important to distinguish between (1) measures to evaluate systems, where the output of an annotating system is compared to a valid reference, and (2) inter-annotator agreement measures, which try to quantify the degree of similarity between what different annotators say about the same data, and which are the ones we are really concerned with in this article. In case (1), the result is straightforward, providing for instance the percentage of valid answers of a system: We know exactly how far the evaluated system is from the gold standard, and we can compare this system to others just by comparing their results. However, case (2) is more difficult. Here, measures do not compare annotations from one annotator to a valid reference (and, most of the time, no reference already exists), but they compare annotations from different annotators. As such, they are clearly not direct distances to the "truth." So, the question is: Above what amount of agreement can we reasonably trust the annotators? The answer is not straightforward, and this is where chance correction is involved. For instance, consider a task where two annotators have to label items with 10 categories. If they annotate at random (with the 10 categories having equal prevalence), they will have an agreement of 10%. If we consider another task involving two categories only, still at random, the agreement expected by chance rises to 50%. Based on this observation, most agreement measures try to remove chance from the observed measure, that is to say, to provide the amount of agreement that is above chance. More precisely, most agreement measures (for about 60 years, with well-known measures κ, S, π) rely on the same formula: If we note Ao the observed agreement (i.e., the agreement directly observed between annotators) and Ae the so-called expected agreement
(i.e., the agreement which should be obtained by chance), the final agreement A is defined by Equation (1). To illustrate this formula, assume that the observed agreement is seemingly high, say Ao = 0.9. If Ae = 0.5, A = 0.4/0.5 = 0.8, which is still considered as good, but if Ae = 0.7, A = 0.2/0.3 = 0.67, which is not that good, and if Ae = 0.9, which means annotators did not perform better than chance, then A = 0. Some other measures, namely, all α from Krippendorff, and the new γ introduced in this article, are computed from observed and expected disagreements (instead of agreements), denoted here respectively Do and De, and they define the final agreement by Equation (2). A = (Ao − Ae) / (1 − Ae) (1) A = 1 − Do/De (2) However, the way the expected value is computed is the only difference between many coefficients (κ, S, π, and their generalizations), and is a controversial question. As precisely described in Artstein and Poesio (2008), there are three main ways to model chance in an annotation effort: 1. By considering a uniform distribution. For instance, in a categorization task, considering that each category (for each coder) has the same probability. The limitation of this approach is that it provides a poor model for chance annotation. Moreover, for a given task, the greater the number of categories, the lesser the expected value, hence the higher the final agreement. 2. By considering the mean distribution of the different annotators, hence regarded as interchangeable. For instance, in a categorization task with two categories A and B, where the prevalences are respectively 90% for category A and 10% for category B, the expected value is computed as 0.9 × 0.9 + 0.1 × 0.1 = 0.82, which is much higher than the 0.5 obtained by considering a uniform distribution. 3. By considering the individual distributions of annotators. Here, annotators are considered as not interchangeable; each of them is considered to have her own probability for each category (for a categorization task) based on her own observed distribution. It leads to the same results as with the mean distribution if annotators all have the same distribution, or to a lesser value (hence a higher final agreement) if not. In the two cases of mean and individual distributions, expected agreement may be very high, depending on the prevalence of categories. In some annotation tasks, expected agreement becomes critically high, and any disagreements on the minor category have huge consequences on the chance-corrected agreement, as hotly debated by Berry (1992) and Goldman (1992), and criticized in CL by Di Eugenio and Glass (2004). However, we follow Krippendorff (2013, page 320), who argues that disagreements on rare categories are more serious than on frequent ones. For instance, let us consider the reliability of medical diagnostics concerning a rare disease that affects one person out of 1,000. There are 5,000 patients, 4,995 being healthy, 5 being affected. If doctors fail to agree on the 5 affected patients, their diagnostics cannot be trusted, even if they agree on the 4,995 healthy ones. These principles have been mainly introduced and used for categorization tasks, because most coefficients address these tasks, but they are more general and may also concern segmentation and, as we will see further, unitizing. The simplest measure of agreement for categorization is percentage of agreement (see for example Scott 1955, page 323). Because it does not feature chance correction, it should be used carefully for the reasons we have just seen.
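Before turning to the chance-corrected coefficients themselves, here is a hedged sketch of Equations (1) and (2) in code, checked against the worked numbers above (function names are ours):

```python
# Equations (1) and (2), with the worked numbers from the text as checks.

def agreement_from_agreements(a_obs, a_exp):
    return (a_obs - a_exp) / (1 - a_exp)   # Equation (1): kappa, S, pi, ...

def agreement_from_disagreements(d_obs, d_exp):
    return 1 - d_obs / d_exp               # Equation (2): alpha, gamma

assert round(agreement_from_agreements(0.9, 0.5), 2) == 0.8
assert round(agreement_from_agreements(0.9, 0.7), 2) == 0.67
assert agreement_from_agreements(0.9, 0.9) == 0.0
```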
Consequently, most popular measures are chance-corrected: S (Bennett, Alpert, and Goldstein 1954) relies on a uniform distribution model of chance, π (Scott 1955) and α (Krippendorff 1980) on the mean distribution, and κ (Cohen 1960) on individual distributions. Generalizations to three or more annotators have been provided, such as κ (Fleiss 1971), also known as K (Siegel and Castellan 1988). Moreover, weighted coefficients such as α and κw (Cohen 1968) are designed to take into account the fact that disagreements between two categories are not necessarily all of the same importance. For instance, for scaled categories from 1 to 10 (as opposed to so-called nominal categories), a mistake between categories 3 and 4 should be less penalized than a mistake between categories 1 and 10. These metrics, widely used in CL, are suitable to assess agreement for pure categorization tasks—for example, in the domains of PART-OF-SPEECH TAGGING, GENE RENAMING, or WORD SENSE ANNOTATION. From Carletta (1996) to Artstein and Poesio (2008), most of these methods have already been discussed and compared in the perspective of CL and we will not do so here. In the domain of TOPIC SEGMENTATION, several measures have been proposed, especially to evaluate the quality of automatic segmentation systems. In most cases, this evaluation consists in comparing the output of these systems with a reference annotation. We mention them here because their use tends to be extended to interannotator agreement because of the lack of dedicated agreement measures, as illustrated by Artstein and Poesio (2008), who mention these metrics in a survey related to interannotator agreement, or by Kazantseva and Szpakowicz (2012). In this domain, annotations consist of boundaries (between topic segments), and the penalty must depend on the distance from a true boundary. Thus, dedicated measures have been proposed, such as WindowDiff (WD hereafter; Pevzner and Hearst 2002), based on Pk (Beeferman, Berger, and Lafferty 1997). WD relies on the following principle: A fixed-sized window slides over the text and the numbers of boundaries in the system output and reference are compared. Several limitations of this method have been demonstrated and adjustments proposed, for example, by Lamprier et al. (2007) or by Bestgen (2009), who recommends the use of the Generalized Hamming Distance (GHD hereafter; Bookstein, Kulyukin, and Raita 2002), in order to improve the stability of the measure, especially when the variance of segment size increases. Because these metrics are dedicated to the evaluation of automatic segmentation systems, their most serious weakness for assessing agreement is that they are not chance-corrected, but they present another limitation: They are dedicated to segmentation and assume a full-covering and linear tiling of the continuum and only one category of objects (topic segments). This strong constraint makes them unsuitable for unitizing tasks using several categories (ARGUMENTATIVE ZONING), targeting more sporadic phenomena (ANNOTATION OF COMMUNICATIVE BEHAVIOR), or involving more complex structures (NAMED ENTITY RECOGNITION, HIERARCHICAL TOPIC SEGMENTATION, TOPIC TRANSITION, DISCOURSE FRAMING, ENUMERATIVE STRUCTURES). 3.4.1 Using Measures for Categorization to Measure Agreement on Unitizing. Because of the lack of dedicated measures, some attempts have been made to transform the task of unitizing into a task of categorizing in order to use well-known coefficients such as κ. 
They consist of atomizing the continuum by considering each segment as a sequence of atoms, thereby reducing a unitizing problem to a categorization problem. This is illustrated by Figure 4, where real unitizing annotations are on the left (with two annotators), and the transformed annotations are on the right. To do so, an atom granularity is chosen—for instance, in the case of texts, it may be character, word, sentence, or paragraph atoms. Then, each unit is transformed into a set of items labeled with the category of this unit, and a new "blank" category is added in order to emulate gaps between units. In most cases, this method has severe limitations: 1. Two contiguous units seen as one. In zone (1) of the left part of Figure 4, one annotator has created two units (of the same category), and the other annotator has created only one unit covering the same space. However, once the continua are discretized, the two annotators seem to agree on this zone (with the four same atoms), as we can see in the right part of the figure. 2. False positive/negative disagreement and slight positional disagreement considered as the same. Zone (2) of Figure 4 shows a case where annotators disagree on whether there is a unit or not, which is quite a severe disagreement, whereas zone (3) shows a case of a slight positional disagreement. Surprisingly, these two discrepancies are counted with the same severity, as we can see in the right side of the figure, because in each case, there is a difference of category for one item only (respectively, "blank" with "blue" in case (2), and "blue" with "blank" in case (3)). 3. Agreement on gaps. Because of the discretization, artificial blank items are created, with the result that annotators may agree on "blanks." The more gaps in real annotations, the more artificial "blank" agreement, and hence the greater the artificial increase in global agreement. Indeed, the expected agreement is less impacted by artificial "blanks," and it may even decrease. 4. Overlapping and embedding units are not possible. This results from the discretizing process, which requires a given position to be assigned a single category (or it would require creating as many artificial categories as possible combinations of categories). This kind of reduction is used to evaluate the annotation of COMMUNICATIVE BEHAVIOR in video recordings by Reidsma (2008), where unitizing is, on the contrary, clearly required: The time-line is discretized (atomized), then κ and α are computed using discretized time spans as units. It should be noted that Reidsma, Heylen, and Ordelman (2006, page 1119) and Reidsma (2008) claim that this "fairly standard" method (which we call discretizing measure henceforth) has certain drawbacks, such as the fact that it "does not compensate for differences in length of segments," whereas "short segments are as important as long segments" in their corpus (which is an additional limitation to the ones we have just mentioned). They propose a second approach relying on an alignment, as we mention in Section 4.2.1. This reduction is also unacceptable for other annotation tasks. For instance, in the perspective of DISCOURSE FRAMING, two adjacent temporal frames should not be aggregated in a larger one. In the same manner, for TOPIC SEGMENTATION, it clearly makes no sense to aggregate two consecutive segments.
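A minimal sketch of the discretizing reduction, enough to reproduce limitation 1 on toy data (the function name and the atom encoding are ours):

```python
# Flatten units (start, end, category) into per-atom labels with a "blank"
# pseudo-category for gaps; a categorical coefficient would then be applied
# to the label sequences. The toy data reproduce limitation 1.

def discretize(units, length, blank="blank"):
    atoms = [blank] * length
    for start, end, cat in units:     # end exclusive, in atom positions
        atoms[start:end] = [cat] * (end - start)
    return atoms

a = discretize([(0, 2, "blue"), (2, 4, "blue")], 6)  # two contiguous units
b = discretize([(0, 4, "blue")], 6)                  # one covering unit
assert a == b  # the reduction wrongly reports perfect agreement here
```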
3.4.2 A Measure for Unitizing Without Chance Correction. Another approach, derived from Slot Error Rate (Makhoul et al. 1999), presented in Galibert et al. (2010), and called SER below, was more specifically used in the context of evaluation of NAMED ENTITY recognition systems. Comparing a "hypothesis" to a reference, this metric counts the costs of different error types: error "T" on type (i.e., category) with cost 0.5, error "B" on boundaries (i.e., position) with cost 0.5, error "TB" on both type and boundaries with cost 1, error "I" of insertion (i.e., false positive) with cost 1, and error "D" of deletion (false negative) with cost 1. The overall cost relies on an alignment of objects from reference and hypothesis, which is chosen to minimize this cost. The final value provided by SER is the average cost of the aligned pairs of units—0 meaning perfect agreement, 1 roughly meaning systematic disagreement. An example is given in Figure 5. This attempt to extend Slot Error Rate to unitizing suffers from severe limitations. In particular, all positioning and categorizing errors have the same penalty, which may be a serious drawback for annotation tasks where some fuzziness in boundary positions is allowed, such as TOPIC SEGMENTATION, TOPIC TRANSITION, or DISCOURSE FRAMING. Moreover, it is difficult to interpret because its output is surprisingly not upper bounded by 1 (in the case where there are many false positives). Additionally, it was initially designed to compare an output to a reference, and so requires some adjustments to cope with more than 2 annotators. Last but not least, it is not chance corrected.
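A hedged sketch of the SER cost scheme, for a given alignment of reference and hypothesis units; the normalization by the number of reference units is our reading of "average cost," and all names are ours:

```python
# Cost of one aligned pair under the SER scheme; None marks an unaligned
# unit (insertion on the hypothesis side, deletion on the reference side).

COSTS = {"T": 0.5, "B": 0.5, "TB": 1.0, "I": 1.0, "D": 1.0}

def pair_cost(ref, hyp):               # units are (start, end, category)
    if ref is None:
        return COSTS["I"]              # false positive
    if hyp is None:
        return COSTS["D"]              # false negative
    type_err = ref[2] != hyp[2]
    bound_err = (ref[0], ref[1]) != (hyp[0], hyp[1])
    if type_err and bound_err:
        return COSTS["TB"]
    return COSTS["T"] if type_err else COSTS["B"] if bound_err else 0.0

def ser(aligned_pairs, n_ref):
    # Unbounded above when insertions outnumber reference units, which is
    # the interpretability problem noted in the text.
    return sum(pair_cost(r, h) for r, h in aligned_pairs) / n_ref
```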
3.4.3 Specific Measures for Unitizing. To our knowledge, the family of α measures proposed by Krippendorff is by far the broadest attempt to provide suitable metrics for various annotation tasks, involving both categorization and unitizing. In the survey by Artstein and Poesio (2008, page 581), some hope of finding an answer to unitizing is formulated as follows: "We suspect that the methods proposed by Krippendorff (1995) for measuring agreement on unitizing may be appropriate for the purpose of measuring agreement on discourse segmentation." Unfortunately, as far as we know, its usage in CL is rare, despite the fact that it is the first coefficient that copes both with unitizing and categorizing at the same time, while taking chance into account. The family of α measures would then be suitable for annotation tasks related, for example, to COMMUNICATIVE BEHAVIOR or DIALOG ACTS. We will therefore pay special attention to Krippendorff's work in this article, because it constitutes a very interesting reference to compare with, both in terms of theoretical choices and of results. Let us briefly recap Krippendorff's studies on unitizing from 1995 to 2013 and introduce some of the α measures, which will be discussed in this article. The α coefficient (Krippendorff 1980, 2004, 2013), dedicated to agreement measures on categorization tasks, generalizes several other broadly used statistics and allows various categorization values (nominal, ordinal, ratio, etc.). Besides this well-known α measure, which copes with categorizing, a new coefficient called αU, which can apply to unitizing, was proposed in Krippendorff (1995) and then Krippendorff (2004). Recently, Krippendorff (2013, pages 310, 315) proposed a new version of this coefficient, called uα, "with major simplifications and improvements over previous proposals," and which is meant to "assess the reliability of distinctions within a continuum—how well units and gaps coincide and whether units are of the same or of a different kind." To supplement uα, which mainly focuses on positioning, Krippendorff has proposed c|uα (Krippendorff 2013), which ignores positioning disagreement and focuses mainly on categories. These measures will be discussed in the following sections. For now, it must be noted that uα and c|uα are not currently designed to cope with embedding or free overlapping between the units of the same annotator. These metrics are then unsuitable for annotation tasks such as, for instance, TOPIC TRANSITION, HIERARCHICAL TOPIC SEGMENTATION, DISCOURSE FRAMING, or ENUMERATIVE STRUCTURES. To conclude the state of the art, we draw up a final overview of the coverage of the requirements by the different measures in Table 2. The γ measure, introduced in the next section, aims at satisfying all these needs.","4 the proposed method: introducing γ : The basic idea of this new coefficient is as follows: All local disagreements (called disorders) between units from different annotators are averaged to compute an overall disorder. However, these local disorders can be computed only if we know, for each unit of a given annotator, which units, if any, from the other annotators it should be compared with (via what is called a unitary alignment)—that is to say, if we can rely on a suitable alignment of the whole (called an alignment). Because it is not possible to get a reliable preconceived alignment (as explained in Section 4.2.1), γ considers all possible ones, and computes for each of them the associated overall disorder. Then, γ retains as the best alignment the one that minimizes the overall disorder, and the latter value is retained as the correct disorder. To obtain the final agreement, as with the familiar kappa and alpha coefficients, this disorder is then chance-corrected by a so-called expected disorder, which is calculated by randomly resampling existing annotations. First of all, we introduce three main principles of γ in Section 4.2. We introduce in Section 4.3 the basic definitions. The comparison of two units (depending on their relative positions and categories) relies on the concept of dissimilarity (Section 4.4). A unitary alignment groups at most one unit of each annotator, and a set of unitary alignments covering all units of all annotators is called an alignment (Section 4.5). The disorder associated with a unitary alignment results from dissimilarities between all its pairs of units, and the disorder associated with an alignment depends on those of its unitary alignments (Section 4.6). The alignment having the minimal disorder (Section 4.7) is used to compute the agreement value, taking chance correction into account (Section 4.8). 4.2.1 Measuring and Aligning at the Same Time: γ is Unified. For a given phenomenon identified by several annotators, it is necessary to provide an agreement measure permissive enough to cope with a double discrepancy concerning its position in the continuum, and the category attributed to the phenomenon. Because of discrepancy in positioning, it is necessary to provide an agreement measure with an inter-annotator alignment, which shows which unit of a given annotator corresponds, if any, to which unit of another annotator.
If such an alignment is provided, it becomes possible, for each phenomenon identified by annotators, to determine to what extent the annotators agree both on its categorization and its positioning. This quantification relies on a certain measure (called dissimilarity hereafter) between annotated units: The more the units are considered as similar, the lesser the dissimilarity. But how can such an alignment be achieved? For instance, in Figure 6, aligning unit A1 of annotator A with unit B1 of annotator B consists in considering that their properties are similar enough to be associated: annotator A and annotator B have accounted for the same phenomenon, even if in a slightly different manner. Consequently, to operate, the alignment method should rely on a measure of distance (in location, in category assignment, or both) between units. Therefore, agreement measure and aligning are interdependent: It is not possible to correctly measure without aligning, and it is not possible to align units without measuring their distances. In that respect, measuring and aligning cannot constitute two successive stages, but must be considered as a whole process. This interdependence reflects the unity of the objective: Establishing to what extent some elements, possibly different, may be considered as similar enough either to quantify their differences (when measuring agreement), or to associate them (when aligning). Interestingly, Reidsma, Heylen, and Ordelman (2006, page 1119), not really satisfied by the use of the discretizing measure as already mentioned, “have developed an extra method of comparison in which [they] try to align the various segments.” This attempt highlights the necessity to rely on an alignment. Unfortunately, the way the alignment is computed, adapted from Kuper et al. (2003), is disconnected from the measure itself, being an ad hoc procedure to which other measures are applied. 4.2.2 Aligning Globally: γ is Holistic. Let us consider two annotators A and B having respectively produced unit A5, and units B4 and B5, as shown in Figure 7. When considering this configuration at a local level, we may consider, based on the overlapping area for instance, that A5 fits B5 slightly better than B4. However, this local consideration may be misleading. Indeed, Figure 8 shows two larger configurations, where A5, B4, and B5 are unchanged from Figure 7. With a larger view, the choice of alignment of A5 may be driven by the whole configuration, possibly leading to an alignment with B4 in Figure 8a, and with B5 in Figure 8b: Alignment choices depend on the whole system and the method should consequently be holistic. 4.2.3 Accounting for Different Severity Rates of Errors: Positional and Categorial Permissiveness of γ. As far as positional discrepancies between annotators are concerned, it is important for a measure to rely on a progressive error count, not on a binary one: Two positions from two annotators may be more or less close to each other but still concern the same phenomenon (partial agreement), or may be too far to be considered as related to the same phenomenon (no possible alignment). For instance, for segmentation, specific measures such as GHD or WD rely on a progressive error count for positions, with an upper limit being half the average size of the segments. For unitizing, Krippendorff considers with uα that units can be compared as long as they overlap. 
However, γ considers that in some cases, units by different annotators may correspond to the same phenomenon though they do not intersect. We base this claim on two grounds. First, if we observe the configuration given in Figure 9, annotators 2 and 3 have both annotated part of the NAMED ENTITY that has been annotated by annotator 1. Consequently, though they do not overlap, their units refer to the same phenomenon. In addition, we find a direct echo of this assumption in Reidsma (2008, pages 16–17) where, in a video corpus concerning COMMUNICATIVE BEHAVIOR, "different timing (non-overlapping) [of the same episode] was assigned by [...] two annotators." Regarding categorization, some available measures consider all disagreements between all pairs of categories as equal. Other coefficients, called weighted coefficients (see Artstein and Poesio 2008), as well as γ, consider on the contrary that mismatches may not all have the same weight, some pairs of categories being closer than others. This closeness is often referred to as overlap. In our terminology, we use category-overlapping for this closeness between categories, and reserve overlap for positional overlap. For example, within annotation efforts related to WORD SENSE or DIALOG ACTS, it is clear that disagreements on labels are not all alike. Given a multi-annotated continuum t: let A = {a1, ..., an} be the set of annotators; let n = |A| be the number of annotators; let U be the set of units from all annotators; for all i ∈ ⟦1, n⟧, let xi be the number of units by annotator ai for t; let x̄ = (x1 + ... + xn)/n be the average number of annotations per annotator; and, for annotator a = ai and all j ∈ ⟦1, xi⟧, we write u_j^a for the unit of rank j from a. Annotation set: An annotation set s is a set of units attached to the same continuum and produced by a given set of annotators. Corpus: A corpus c is defined with respect to a given annotation effort, and is composed of a set of continua, and of the set of annotations related to these continua. Unit: A unit u bears a category denoted cat(u), and a location given by its two boundaries, each of them corresponding to a position in the continuum, respectively denoted start(u) and end(u), start and end being functions from U to N+. Equality between units is defined as follows: ∀(u, v) ∈ U², u = v ⇔ cat(u) = cat(v) ∧ start(u) = start(v) ∧ end(u) = end(v). We introduce here the first brick to build the notion of disorder, which works at a very local level, between two units. A dissimilarity tells to what degree two units should be considered as different, taking into account such features as their positions, their categories, or a combination of the two. A dissimilarity is a function d : U² → R+ such that, for all (u, v) ∈ U², d(u, v) = d(v, u) (d is symmetric) and u = v ⇒ d(u, v) = 0. A dissimilarity is not necessarily a distance in the mathematical sense of the term, in particular because triangular inequality is not mandatory (for instance, in Figure 10, d(A1, B2) > d(A1, C1) + d(C1, B2)).
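A minimal data model for these definitions, used by the sketches that follow; the class and field names are ours, not the article's notation:

```python
# Data model for units and a spot-check of the two dissimilarity axioms.

from dataclasses import dataclass

@dataclass(frozen=True)
class Unit:
    cat: str
    start: int
    end: int

def looks_like_dissimilarity(d, sample):
    """Check symmetry and zero-on-equality on a sample of units."""
    return all(d(u, v) == d(v, u) and (u != v or d(u, v) == 0)
               for u in sample for v in sample)
```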
4.4.1 Empty Unit u∅, Empty Dissimilarity ∆∅. As we will see, γ relies on an alignment of units by different annotators. In particular, this alignment indicates, for unit u_i^a1 of annotator a1, to which unit u_j^a2 of annotator a2 it corresponds, in order to compute the associated dissimilarity. In some cases, though, the method will choose not to align u_i^a1 with any unit of annotator a2 (none corresponds sufficiently). We define the empty pseudo unit, denoted u∅, which corresponds to the realization of this phenomenon: ultimately, a pseudo unit u∅ is added to the annotations of a2, and u_i^a1 is aligned with it. We also define the associated cost ∆∅: ∀u ∈ U, d(u, u∅) = d(u∅, u) = ∆∅, and d(u∅, u∅) = ∆∅. Dissimilarities should be calibrated so that ∆∅ is the value beyond which two compared units are considered critically different. Consequently, it constitutes a reference, and dissimilarities will be expressed in this article as multiples of ∆∅ for better clarity. It is not a parameter of gamma, but a constant (which is set to 1 in our implementation). 4.4.2 Positional Dissimilarity dpos. Different positional dissimilarities may be created, in order to deal with different annotation tasks. In this article, we use the dissimilarity shown in Equation (3), which is very versatile. dpos-sporadic(u, v) = ((|start(u) − start(v)| + |end(u) − end(v)|) / ((end(u) − start(u)) + (end(v) − start(v))))² · ∆∅ (3) Equation (3) sums the differences between the right and left boundaries of both units in its numerator. Its denominator sums the lengths of both units, so that this dissimilarity is not scale-dependent. Squaring the value is an option used here to make the dissimilarity grow faster as the differences of positions increase. It is illustrated in Figure 10 with different configurations and their associated values, from 0 for the perfectly aligned pair of units (A1, B1) to 22.2 · ∆∅ for the worst pair (A1, C2). 4.4.3 Categorial Dissimilarity dcat. Let K be the set of categories. For a given annotation effort, |K| different categories are defined. For more convenience, we first define the categorial distance between categories, distcat, via a square matrix of size |K|, with each category appearing both in row titles and column titles. Each cell gives the distance between two categories through a value in [0, 1]. Value 0 means perfect equality, whereas the maximum value 1 means that the categories are considered as totally different. As distcat is symmetric, such a matrix is necessarily symmetric, and bears 0 in each diagonal cell. Table 3 gives an example for three categories, and shows that an association between a unit in category cat1 with one in category cat3 is the worst possible (distance = 1), whereas it is half as much between cat1 and cat2 (distance = 0.5). This makes it possible to take into account so-called category-overlapping (in our example, cat1 and cat2 are said to overlap, which means they are not completely different), as weighted coefficients such as κw or α already do. Note that in the case of so-called "nominal categories," the matrix will be full of 1 outside the diagonal, and full of 0 in the diagonal (different categories are considered as not matching at all). This categorial distance matrix is then used to build the categorial dissimilarities, taking into account the ∆∅ value. We define the categorial dissimilarity between two units by: dcat(u, v) = fcat(distcat(cat(u), cat(v))) · ∆∅ (4) Function fcat can be used to adjust the way the dissimilarity grows with respect to the categorial distance values. The standard option (used in this article) is to simply consider fcat(x) = x, with which dcat naturally increases gradually from zero when categories match, to ∆∅ when categories are totally different (distcat(cat(u), cat(v)) = 1 ⟹ dcat(u, v) = ∆∅). (Another option is, for example, to use fcat(x) = −ln(1 − x) · x^30 + x, a function almost similar to fcat(x) = x on the [0, 0.9] range, which reaches ∞ near 1. Then, when the categorial distance is equal to 1, the categorial dissimilarity reaches infinity, which guarantees that the units cannot be aligned.)
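A hedged implementation of Equations (3) and (4), reusing the Unit class above, with ∆∅ = 1 as in the authors' implementation; the cat2/cat3 cell of the distance matrix is our own assumption, since the text only specifies the other two:

```python
# Equations (3) and (4). DELTA_EMPTY is the constant set to 1 in the
# authors' implementation.

DELTA_EMPTY = 1.0

def d_pos(u, v):                                        # Equation (3)
    num = abs(u.start - v.start) + abs(u.end - v.end)
    den = (u.end - u.start) + (v.end - v.start)
    return (num / den) ** 2 * DELTA_EMPTY

DIST_CAT = {("cat1", "cat2"): 0.5,    # Table 3
            ("cat1", "cat3"): 1.0,    # Table 3
            ("cat2", "cat3"): 1.0}    # assumption: not given in the text

def d_cat(u, v, f_cat=lambda x: x):                     # Equation (4)
    if u.cat == v.cat:
        return 0.0
    return f_cat(DIST_CAT[tuple(sorted((u.cat, v.cat)))]) * DELTA_EMPTY
```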
4.4.4 Combined Dissimilarity dcombi. Because in some annotation tasks units may differ both in position and in category, it is necessary to combine the associated dissimilarities so that all costs are cumulated. This is provided by a combined dissimilarity. Let d1 and d2 be two dissimilarities. We define: d^{α,β}_combi(d1,d2)(u, v) = α · d1(u, v) + β · d2(u, v) (5) It is easy to demonstrate that this linear combination of dissimilarities is itself a dissimilarity (if (α, β) ≠ (0, 0)). It enables the same weight to be assigned to positions and categories using d^{1,1}_combi(dpos,dcat), which is currently used for γ. Then, we can note that it is the same cost ∆∅ for a unit either not to be aligned with any other one, or to be aligned with a unit in the same configuration as (A1, C1) of Figure 10 (if they have the same category), or to be aligned with a unit having an incompatible category (if they occupy the same position). Unitary alignment ă. A unitary alignment ă is an i-tuple, with i ∈ ⟦1, n⟧ (n the number of annotators), containing at most one unit by each annotator: It represents the hypothesis that i annotators agree to some extent on a given phenomenon to be unitized. In order to make all unitary alignments homogeneous, we eventually complete any unitary alignment that is an i-tuple with n − i empty units u∅, so that all unitary alignments are ultimately n-tuples. Figure 11 illustrates unitary alignments with some u∅ units. Alignment ā. For a given annotation set, an alignment ā is defined as a set of unitary alignments such that each unit of each annotator belongs to one and only one of its unitary alignments. Mathematically, it constitutes a partition of the set of units (if we do not take u∅ into account). 4.6.1 Disorder of a Unitary Alignment. The disorder of a unitary alignment ă, denoted δ̆(ă), is defined for a given dissimilarity d as the average of the one-to-one dissimilarities of its units: δ̆(ă) = (1 / C(n, 2)) · Σ_{(u,v)∈ă²} d(u, v) (6) Averaging dissimilarities rather than summing them makes the result independent of the number of annotators. 4.6.2 Disorder of an Alignment. The disorder of an alignment ā, denoted δ̄(ā), is the sum of the disorders of all its unitary alignments divided by the mean number of units per annotator: δ̄(ā) = (1 / x̄) · Σ_{i=1}^{|ā|} δ̆(ăi) (7) We chose to consider the average value rather than the sum so that the disorder does not depend on the size of the continuum. Best alignment â. An alignment ā of the annotation set s is considered as the best (with respect to a dissimilarity) if it minimizes its disorder among all possible alignments of s. It is denoted â. The proposed method is holistic in that it is necessary to take into account the whole set of annotations in order to determine each unitary alignment. Disorder of an annotation set δ(s). The disorder of the annotation set s, denoted δ(s), is defined as the disorder of its best alignment(s), δ̄(â). Note that it may happen that several alignments produce the lowest disorder.
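The corresponding sketch of Equations (5) to (7), reusing d_pos, d_cat, and DELTA_EMPTY from above; None stands in for the empty pseudo-unit u∅:

```python
# Equations (5)-(7). A unitary alignment is an n-tuple of units padded with
# None (the empty pseudo-unit); x_bar is the mean number of units per
# annotator.

from itertools import combinations
from math import comb

def d_combi(u, v, alpha=1.0, beta=1.0):                 # Equation (5)
    if u is None or v is None:        # any pair involving the empty unit
        return DELTA_EMPTY
    return alpha * d_pos(u, v) + beta * d_cat(u, v)

def unitary_disorder(tup):                              # Equation (6)
    n = len(tup)
    return sum(d_combi(u, v) for u, v in combinations(tup, 2)) / comb(n, 2)

def alignment_disorder(alignment, x_bar):               # Equation (7)
    return sum(unitary_disorder(t) for t in alignment) / x_bar
```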
We have just presented the two crucial definitions of our new method, which make it "unified." Indeed, the best alignment is chosen with respect to the disorder, therefore with respect to what the agreement measure computes; and, conversely and simultaneously, the resulting agreement value (see below) is given by the best alignment: agreement computation and alignment are fully intertwined, whereas in most agreement metrics the alignment is fixed a priori or no alignment is used. 4.8.1 The Model of Chance of γ. As we have already mentioned in the state-of-the-art section, it is necessary for an inter-annotator agreement measure to provide chance correction. We have also seen that there are several chance correction models, and that this is a controversial question. However, for γ, we follow Krippendorff, who claims that annotators should be interchangeable, because, as stressed by Krippendorff (2011) and Zwick (1988), Cohen's definition of expected agreement (using individual distributions) numerically rewards annotators for not agreeing on their use of values, that is to say, when they have different prevalences of categories, and punishes those who do agree. Therefore, the expected values of γ are computed on the basis of the average distribution of the observed annotations of the several annotators. More precisely, we define the expected (chance) disorder as the average disorder of a randomly annotated multi-annotator continuum where:
• the random annotations fulfill the observed annotation distributions for the following features:
– the distribution of the number of units per annotator
– the distribution of categories
– the distribution of unit length per category
– the distribution of gaps' length
– the distribution of overlapping and/or covering between each pair of categories (for instance, units of categories A and B may never intersect, 7% of the units of category A may cover one unit of category C, and so on)
• the number of random annotators is the same as the number of annotators in the observed data
4.8.2 Two Possible Sources to Build Chance: Local Data versus Corpus Data. In addition, whereas other studies systematically compute the expected value on the same data used to compute the observed value (see Section 3.1), we consider that it should be computed, when possible (that is to say, when several continua have been annotated with the same set of categories and the same instructions), from the distribution observed in all the continua of the annotation effort the evaluated continuum comes from: If the distribution changes from one continuum to another, it is more because of the content of each continuum than because of chance. Let us illustrate this with a simple example, where two annotators have to annotate several texts from a sentiment analysis point of view, using three available categories: positive, negative, and neutral. On average, on the whole corpus, we assume that the prevalence is 1/3 for each category. The expected agreement on the whole corpus is thus 0.33. We also assume that for one particular text, there are only positive and neutral annotations, 50% of each, and no negative one. The expected agreement for this particular text is 0.5, which means that this particular text is considered to facilitate agreement by chance, with the consequence that the final agreement will be more conservative than for the rest of the corpus. Why does the third category, "negative," not appear in this expected agreement computation?
This conception of chance considers that when an annotator begins to annotate this particular text, which she does not know in advance, the third category no longer exists in her mind, and that this is the case for every other annotator, even though they are not supposed to cooperate. It cannot be by chance that all annotators use one category in some texts and not in others; it is because of the content, and of the interpretation, of a given text. For this reason, from our point of view, it is better to take into account the data observed on a whole annotation effort rather than on each individual continuum. The complete data tell more about the mean behavior of the annotators, whereas the data of a given continuum may depend more on the particularities of its content. As a consequence, γ provides two ways to compute the expected values: one that considers only the data of the continuum being evaluated, as every other coefficient does; and a second one that considers the data from all the continua of the annotation effort the evaluated continuum comes from. When available, we recommend using the second one, for the reasons already expressed. 4.8.3 Using Sampling to Compute the Expected Value. Expected agreement (or disagreement) is the expected value of a random variable. But which random variable? For coefficients like kappa and alpha, observed agreement (or disagreement) is the mean agreement (or disagreement) over all pairs of instances, so the random variable can be as simple as a random pair of instances (however we interpret "random"). This value can be readily computed. For gamma, however, observed disagreement is determined on a whole annotation, so the random variable needs to be a whole random annotation. The expected value of such a complicated variable is much more difficult to determine analytically. Instead, gamma uses sampling, as introduced in Section 5. Now that the disorder and the expected disorder have been introduced, we can define the agreement measure (of annotation set s belonging to corpus c, with c = {s} if s is a sole annotation set) with Equation (8), which is derived from Equation (2):

$$\forall s \in c,\quad \gamma = 1 - \frac{\delta(s)}{\delta_e(c)} \quad (8)$$

If all annotators perfectly agree (Figure 11a), γ = 1. Figure 11c corresponds to the worst case, where the annotators are worse than annotating at random, with γ < 0. Figure 11b shows an intermediate situation.","5 implementation :In this section, we first propose an efficient solution to compute the disorder of an annotated continuum, which relies on linear programming. Second, we propose two ways to generate random annotated continua (with respect to the observed distributions) to compute the expected disorder, one relying on a single continuum, the other relying on a corpus (i.e., several continua). Third, we determine the number of random data sets that we must generate (and compute the disorder of) to obtain an accurate value of the expected disorder. In order to simplify the discussion and the demonstrations, we consider in this section that the n annotators all made the same number of annotations p. The proposed method has now been fully described on a theoretical level, but, being holistic, its software implementation raises a major problem of complexity. One can show that there are theoretically $(p!)^{n-1}$ possible alignments. However, we will (1) show how to reduce the initial complexity, and (2) provide an efficient linear programming solution.
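Before detailing these two points, note that once the two disorders are available, Equation (8) itself is immediate. A minimal sketch (assuming, as described below in Section 5.3, that the expected disorder is approximated by averaging the disorders of sampled random annotation sets):

```python
def gamma_agreement(observed_disorder, sampled_disorders):
    """Equation (8): chance-corrected agreement; 1 means perfect agreement,
    0 chance-level agreement, and negative values worse than chance."""
    expected = sum(sampled_disorders) / len(sampled_disorders)
    return 1.0 - observed_disorder / expected
```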
5.1.1 Reducing the Initial Complexity. The initial number of possible unitary alignments (which are used to build a possible alignment) is $p^n$. Fortunately, the theorem stated as Equation (9) shows that any unitary alignment with a disorder beyond the value n · ∆∅ cannot belong to the best alignment, and so can be discarded. Indeed, any unitary alignment can be replaced by a separate unitary alignment for each of its units (of cost ∆∅ per unitary alignment, so of total cost n · ∆∅). Demonstration. Consider the best alignment â, of cardinality m. Let ă be any of its unitary alignments. For convenience, we attribute to it the index 1 (ă = ă1), while the others are indexed from 2 to m. This unitary alignment ă contains n units (either real or u∅). For each of these units ui (1 ≤ i ≤ n), we create the unitary alignment ăm+i = (ui, u∅, ..., u∅) of cardinality n. It is then possible to create an alignment ā made up of the unitary alignments of â \ {ă}, to which we add the unitary alignments ăm+1 to ăm+n that we have just created (ā is indeed an alignment, because each of its units appears in one and only one unitary alignment). It is of cardinality m + n − 1. Because â minimizes the disorder, we obtain:

$$\bar{\delta}(\hat{a}) \le \bar{\delta}(\bar{a}) \;\Rightarrow\; \frac{1}{\bar{x}}\sum_{i=1}^{m}\breve{\delta}(\breve{a}_i) \le \frac{1}{\bar{x}}\sum_{i=2}^{m+n}\breve{\delta}(\breve{a}_i) \;\Rightarrow\; \sum_{i=1}^{m}\breve{\delta}(\breve{a}_i) \le \sum_{i=2}^{m+n}\breve{\delta}(\breve{a}_i) \;\Rightarrow\; \breve{\delta}(\breve{a}_1) \le \sum_{i=m+1}^{m+n}\breve{\delta}(\breve{a}_i)$$

Since $\forall i > m,\ \breve{\delta}(\breve{a}_i) = \frac{1}{C_n^2}(C_n^2\,\Delta_\emptyset) = \Delta_\emptyset$, and since we have denoted ă = ă1, we conclude:

$$\breve{\delta}(\breve{a}) \le n \cdot \Delta_\emptyset \quad (9)$$

Experiments have shown that this theorem allows us to discard about 90% of the unitary alignments. 5.1.2 Finding the Best Alignment: A Linear Programming Solution. Finding the best alignment consists of minimizing the global disorder. Such a problem may be described as a linear programming problem, so that the solution can be computed by a linear programming solver. For convenience, we introduce two new definitions: let UA be the set of all unitary alignments, and let UAu be the set of the unitary alignments that contain unit u. The description of the problem in linear programming terms is threefold. First, for a given alignment ā, for each possible unitary alignment ăi, we define the Boolean variable $X^{\bar{a}}_{\breve{a}_i}$, which indicates whether this unitary alignment belongs to the alignment:

$$\forall \breve{a}_i \in UA,\quad X^{\bar{a}}_{\breve{a}_i} = \begin{cases} 0 & \text{iff } \breve{a}_i \notin \bar{a} \\ 1 & \text{iff } \breve{a}_i \in \bar{a} \end{cases}$$

Second, we have to express the fact that, by definition, each unit u (of each annotator) must belong to one and only one unitary alignment of the alignment ā, that is to say, that among all the unitary alignments containing u, exactly one $X^{\bar{a}}_{\breve{a}_i}$ equals 1 and all the others equal 0:

$$\forall u \in U,\quad \sum_{\breve{a}_i \in UA_u} X^{\bar{a}}_{\breve{a}_i} = 1$$

Third, the goal is to minimize the global disorder δ̄(ā) associated with ā, among all possible alignments ā:

$$\text{Minimize } \bar{\delta}(\bar{a}) = \sum_{\breve{a}_i \in UA} \breve{\delta}(\breve{a}_i) \cdot X^{\bar{a}}_{\breve{a}_i}$$

The LPSolve solver (http://lpsolve.sourceforge.net) finds the best solution in less than one second with n = 3 annotators and p = 100 annotations per annotator on a current laptop (once the initial complexity has been reduced thanks to the previous theorem), which is fast enough to be practical.
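The linear program of Section 5.1.2 can be stated almost verbatim with an off-the-shelf solver front end. The sketch below uses the open-source PuLP library rather than the LPSolve binding of the actual implementation; the data structures (a list of candidate unitary alignments surviving the n · ∆∅ filter, and an index from each real unit to the candidates containing it) are our own assumptions, and unitary_disorder is the function sketched earlier:

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

def best_alignment(candidates, unit_to_candidates, d):
    """candidates: candidate unitary alignments (n-tuples, possibly padded
    with empty units); unit_to_candidates: dict mapping each real unit to
    the indices of the candidates containing it; d: gamma's dissimilarity."""
    prob = LpProblem("best_alignment", LpMinimize)
    # One Boolean variable X_i per candidate unitary alignment.
    x = [LpVariable(f"x_{i}", cat="Binary") for i in range(len(candidates))]
    # Objective: minimize the summed disorders of the selected candidates.
    prob += lpSum(unitary_disorder(ua, d) * x[i]
                  for i, ua in enumerate(candidates))
    # Each real unit must belong to exactly one selected unitary alignment.
    for idxs in unit_to_candidates.values():
        prob += lpSum(x[i] for i in idxs) == 1
    prob.solve()
    return [ua for i, ua in enumerate(candidates) if value(x[i]) > 0.5]
```

Any exact Boolean solver could be substituted here: The formulation, not the solver, is what matters.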
The next two subsections detail two strategies to generate randomly annotated continua with respect to the definition of the expected disorder of γ, and the third subsection explains how to choose the number of expected-disorder samples to generate so that their average is an accurate enough value of the theoretical expected value. The two strategies correspond to the need expressed in Section 4.8.1 to compute the expected value on the largest set of available data, either a single continuum or, when available, several continua from the same corpus. 5.2.1 A Strategy to Compute the Expected Disorder Using a Single Continuum. When the annotation effort is limited to a single continuum, we can only rely on the annotated continuum itself to compute the expected value. To create random annotations that fulfill the observed distributions, the implemented strategy is as follows: We take the real annotated continuum of an annotator (such as the example shown on the left in Figure 12), choose a position on this continuum at random, split the continuum at this position, and permute the two parts of the split continuum. Three examples of split and permutation are shown in the right part of the figure, for split positions of, respectively, 15, 24, and 38, all coming from the same real continuum, with units that are no longer aligned (except by chance). However, we have to address the fact that some units may intersect with themselves, generating some agreement beyond chance. For instance, in Figure 12, unit 3 intersects with itself between #15 and #24, because the length of the unit, 12, is greater than the difference between the shifts, 24 − 15 = 9. To limit this phenomenon, we do not allow the distance between two shifts to be less than the average length of the units. 5.2.2 A Strategy to Compute the Expected Disorder Using Several Continua (from the Same Corpus). This strategy consists of mixing annotations coming from different continua, so that their units may align only by chance. To create a random annotation of n annotators, we randomly choose n different continua of the corpus, and pick the annotations of one (randomly chosen) annotator of each of these continua. When the different texts are of different lengths, each of them is adjusted to the longest one by duplicating it as many times as necessary (like a mosaic). This is shown in Figure 13 for n = 3 annotators. We assume the corpus contains eight continua, each annotated by three annotators. To generate a random set of three annotations, we have randomly selected a combination of three values between 1 and 8, here (2, 4, 7), to select three different continua among the eight available ones of the corpus. Then, for each of these selected continua, we choose one annotator, here annotator 2 for continuum 2, annotator 3 for continuum 4, and annotator 1 for continuum 7. We combine the associated annotations as shown in the right part of the figure, and obtain a set of random annotations that fulfill (on average) the observed distributions. The (very limited) extent of the resulting agreement we can see in this example (only two units have matching categories, but with discrepancies in position) is only due to chance, because the compared annotations come from different continua. In addition, it is possible to create a great number of random sets of annotations with this strategy: With n annotators and m continua (m ≥ n), it is possible to generate up to $C_m^n \cdot n^n$ different combinations. For instance, in our example, which assumes n = 3 and m = 8, there are $56 \times 3^3 = 1{,}512$ combinations with which to create random annotations. Because the expected disorder is by definition an average over random annotations, and because there is a virtually infinite number of possible random annotations (with a discrete and finite continuum it is not literally infinite, but still too big to be computed exhaustively), we can only run a reduced but sufficient number of experiments and obtain an approximate value of the expected disorder. This is a sampling problem as described, for example, in Israel (1992).
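The single-continuum strategy of Section 5.2.1 is easy to prototype. A simplified sketch (with our own simplifications: units shifted past the end of the continuum wrap around rather than being cut at the split point, and the constraint on split positions is enforced by redrawing):

```python
import random

def split_and_permute(units, length, min_split_gap, previous_splits=()):
    """Section 5.2.1: choose a split position on the continuum and permute
    the two parts, so that unit lengths, categories, and gaps are preserved
    while alignment with other annotators can occur only by chance. The
    split is redrawn until it lies at least min_split_gap (typically the
    average unit length) away from previously used splits, which limits
    units intersecting with themselves."""
    while True:
        split = random.randrange(1, length)
        if all(abs(split - s) >= min_split_gap for s in previous_splits):
            break
    shifted = [((start - split) % length,
                (start - split) % length + (end - start), cat)
               for start, end, cat in units]
    return sorted(shifted), split
```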
What statistics provide is a way to determine the minimal number n0 of experiments to run (and to average) so that we get an approximate result of a given precision with a given confidence level. It consists of first taking a small sample to estimate the mean and standard deviation, and then using these estimates to determine the sample size n0 that is needed. We follow the strategy provided in Olivero (2001) to compute a disorder value that differs by less than e = 2% from the real value with (1 − α) = 95% confidence (the software distribution we provide is set by default to these values). First, we consider a sample of chance disorder values of size n = 30. Let µ be the sample mean, and σ′ its standard deviation. µ is directly an unbiased estimator of the population mean, and $\sigma = \sqrt{\frac{n}{n-1}} \cdot \sigma'$ is an unbiased estimator of the real standard deviation. Let $C_v = \frac{\sigma}{\mu}$ be the coefficient of variation (i.e., the relative standard deviation). Let $U_{1-\frac{\alpha}{2}}$ be the abscissa of the normal curve that cuts off an area α at the tails. This value is provided in statistical tables. We get n0 from the following equation:

$$n_0 = \left(\frac{C_v \cdot U_{1-\frac{\alpha}{2}}}{e}\right)^2$$

Let us consider a real example. We generate a sample of random disorders of size n = 30. We compute its mean µ = 3.49 and its standard deviation σ′ = 0.1379, hence σ = 0.1403 and Cv = 0.040188. We get $U_{1-\frac{0.05}{2}} = 1.96$ from the corresponding statistical table, hence we obtain n0 = 15.5. This means that a sample of 16 disorder values gives 2% precision with 95% confidence. The mean we have already computed with 30 values fulfills this condition, and is a good approximation of the real expected disorder. If we wish to obtain a higher precision of 1%, we need n0 = 62. This is beyond the initial size of our sample (which is 30), and we have to generate an additional set of 32 values in order to reach the required number."
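This sample-size rule fits in a few lines. A sketch (note that Python's statistics.stdev already applies the n − 1 correction, so it directly yields the unbiased σ of the text; u = 1.96 is U at 1 − α/2 for 95% confidence):

```python
from statistics import mean, stdev

def required_sample_size(sample, precision=0.02, u=1.96):
    """Sample-size rule of Section 5.3: n0 = (Cv * U / e)^2. With the
    article's example (mu = 3.49, sigma = 0.1403), this returns about 15.5,
    i.e., 16 samples for 2% precision at 95% confidence."""
    cv = stdev(sample) / mean(sample)  # coefficient of variation
    return (cv * u / precision) ** 2
```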
,"6 comparing and benchmarking γ :As γ is an entirely new agreement measure method, it is necessary to analyze how it compares with some well-known and much-studied methods. First, we carry out a thorough comparison between γ and the two dedicated alphas, uα and c|uα, which are the most specific measures in the domain. Second, we benchmark γ by comparing it with the other main measures, thanks to a special tool that is briefly introduced. As already mentioned, Krippendorff's uα and c|uα are clearly the most suitable coefficients for combined unitizing and categorizing. To better understand the pros and cons as well as the behavior of these measures compared with γ, we first explain how they are designed in Section 6.1.1, and then make thorough comparisons with γ from Sections 6.1.2 to 6.1.6, including: (1) how they react to slight categorial disagreements, (2) the interlacement of positional and categorial disagreements, (3) the impact of the size of the units on positional disagreement, (4) split disagreements, and (5) the impact of scale (e.g., if the size of all units is multiplied by 2). We finish by showing a paradox of uα in Section 6.1.7. 6.1.1 Introducing uα and c|uα. To introduce how these two coefficients work, let us consider the example taken from Krippendorff (2013), shown in Figure 14. The length of the continuum is 76, there are two annotators, and there are four possible categories, numbered 1 to 4. The uα coefficient basically relies on the comparison of all pairs of sections among annotators, a section being either a categorized unit or a gap. To get the observed disagreement value uDo, the squared lengths of the mismatching intersections are summed, and this sum is then divided by the product of the length of the continuum and m(m − 1), m being the number of annotators. In the example, mismatches occur around the second and third units of the two annotators. From left to right, there are the following intersections: cat 1 with gap (l = 10), cat 1 with cat 3 (l = 5), gap with cat 3 (l = 8), cat 2 with cat 1 (l = 5), and cat 2 with gap (l = 5). This leads twice (by symmetry) to the sum $10^2 + 5^2 + 8^2 + 5^2 + 5^2$, so the observed disagreement is $uD_o = \frac{2\,(10^2 + 5^2 + 8^2 + 5^2 + 5^2)}{76 \cdot 2\,(2 - 1)} = 3.145$. The expected value uDe is obtained by considering all the possible positional combinations of each pair, and not only the observed ones. This means that for a given pair, one of the two units is virtually slid in front of the other in all possible ways, and the corresponding values are averaged. In this example, uDe = 5.286. Therefore, $u\alpha = 1 - \frac{3.145}{5.286} = 0.405$. Coefficient c|uα relies on a coincidence matrix between categories, filled with the sums of the lengths of all the intersections of units for each given pair of categories. For instance, in the example, the observed coincidence between category 1 and category 3 is 5, and so on. A metric matrix is chosen for these categories, for instance an interval metric (for numerical categories), which says that the distance between categories i and j is $(i - j)^2$. Hence, the cost for a unitary intersection between categories 1 and 2 is $(1 - 2)^2 = 1$, but is $2^2 = 4$ between categories 1 and 3, and so on. The observed disagreement is then computed according to these two matrices. Finally, an expected matrix is filled (in a way which cannot be detailed here due to space constraints), and the expected value is computed in the same way. In the example, the observed disagreement is 0.833, which yields c|uα = 0.744. Hence, Krippendorff's alphas provide two clues for analyzing the agreement between annotators. In the example, uα = 0.405 indicates that the unitizing is not so good, but also that the categorizing is much better, with c|uα = 0.744 (even though, of course, these two values are not independent, since unitizing and categorizing coexist here by nature). Now that these coefficients have been introduced in detail, let us analyze to what extent they differ from γ.
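The uDo value of this worked example can be checked in a few lines; a sketch using the five mismatching intersection lengths read off Figure 14:

```python
def u_alpha_observed(mismatch_lengths, continuum_length, m):
    """uDo: doubled (by symmetry) sum of the squared lengths of the
    mismatching intersections, divided by continuum_length * m * (m - 1)."""
    return (2 * sum(l * l for l in mismatch_lengths)
            / (continuum_length * m * (m - 1)))

print(u_alpha_observed([10, 5, 8, 5, 5], 76, 2))  # 3.1447..., as in the text
```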
6.1.2 Slight Categorial Disagreements: Alphas Versus γ. When annotators have slight categorial disagreements (with overlapping categories), c|uα is slightly lowered. However, uα does not take categorial overlapping into account: It has a binary response to such disagreements, and is lowered as much as if they were severe categorial disagreements. A consequence of this approach is illustrated in Figure 15, where two annotators perfectly agree both on positions and on categories in the experiment on the left, and still perfectly agree on positions but slightly diverge concerning categories in the experiment on the right (1/2, 6/7, and 8/9 are assumed to be close categories). [Figure 15: Consequences of no categorial disagreement (left) compared with slight categorial disagreements (right).] However, uα drops from 1 in the left experiment to −0.34 (a negative value means worse than random) in the right experiment, despite, in the latter, the positions all being correct and the categories being quite good, since c|uα = 0.85. On such data, γ considers that there is no positional disagreement, and c|uα and γ both consider that there are slight categorial disagreements. 6.1.3 Positional Disagreements Impacting Categorial Agreement: c|uα. Two different conceptions of how to account for categorial disagreement have, respectively, led to c|uα and γ: c|uα relies on intersections between the units of different annotators, which is basically equivalent to an observation at the atom level, whereas γ relies on alignments between units (any unit being finally attached and compared to at most one other unit), based both on positional and on categorial observation. Hence, in a configuration such as the one given in Figure 16, where two annotators annotated three units with the same categories 1, 4, and 2, but not exactly at the same locations, c|uα registers a certain amount of categorial disagreement, whereas γ does not. According to the principles of c|uα, any part of the continuum (even at the atom level) with an intersection between different categories means some confusion between them, whereas γ considers here that the annotators fully agree on categories (they both observed a "1," then a "4," then a "2," with no confusion) and disagree only on where phenomena exactly start and finish. The crucial difference between the two methods is probably whether we consider units to be non-atomizable (and therefore consider alignments, as γ does) or atomizable (in which case two different parts of a given unit may be simultaneously and respectively compared to two different units from another annotator). 6.1.4 Disagreements on Long versus Short Spans of Texts. Here again, the way disagreements are accounted for may differ markedly between uα and γ: When a unit does not match any other one, uα takes into account the length of the corresponding span of text to assess the disagreement. As shown in Figure 17, an orphan unit of size 10 will cost 100 times as much as an orphan unit of size 1, whereas for γ they will have the same cost. In the whole example of Figure 17, when computing the observed disagreements, uα says the first case is 50 times worse than the second, whereas γ says, on the contrary, that the second case is twice as bad as the first. Here, γ fulfills the need (already mentioned) expressed by Reidsma, Heylen, and Ordelman (2006, page 3) to consider that "short segments are as important as long segments." The same phenomenon holds for categories between c|uα and γ, the size of the units having consequences only for c|uα. 6.1.5 Split Disagreements. Sometimes, an annotator may divide a given span of text into several contiguous units of the same type, or may annotate the same span with one whole unit. In these cases, c|uα computes the same observed disagreement in both configurations, and uα assigns a decreasing disagreement as splitting increases, as shown in Figure 18, whereas γ assigns an increasing disagreement. Moreover, in Figure 19, the observed uα is not responsive to splits at all, whereas γ is still responsive. 6.1.6 Scale Effects. The way uα computes dissimilarities is directly proportional to squared lengths, as shown in Figure 20. On the other hand, γ may use any positional dissimilarity, and for CL applications usually uses ones that are not scale-dependent, such as dpos-sporadic (Equation (3)).
For instance, if a text is annotated with two categories, one at word level, the other at paragraph level, we may prefer to account for relative disagreements, so that a missing word will be penalized more heavily in the first case than in the second. In Figure 20, the observed disagreement of uα is $3^2 = 9$ times greater for B units than for A units, but would be the same for γ with dpos-sporadic, since $\left(\frac{0 + 3}{\frac{7 + 10}{2}}\right)^2 = \left(\frac{0 + 9}{\frac{21 + 30}{2}}\right)^2$. 6.1.7 A Paradox of uα. In Figure 21a, the annotators disagree on categorization, and have a moderate agreement on unitizing. This configuration leads to uα = 0.107. In Figure 21b, the configuration is quite similar, but now the annotators fully agree on unitizing: Each of them puts units in the same positions. Paradoxically, uα drops to −0.287, which is less than in the first configuration. In brief, the reason for this behavior is that in the first case, the computed disagreement regarding a given pair of units is virtually distributed over shorter parts of the whole (an intersection of length 80 between them, and an intersection of length 20 with a gap for each of them, which leads to $80^2 + 2 \times 20^2 = 7{,}200$), whereas the disagreement is maximal in the second case (an intersection of length 100 with a unit of another category, which leads to $100^2 = 10{,}000$). Conversely, with similar data, γ provides a better agreement in the second case than in the first one. By design, it considers that there is the same categorial agreement in both cases, but better positional agreement in the second case, which seems to better correspond to the CL tasks we have considered. 6.1.8 Overlapping Units (Embedding or Free Overlap). Both alpha coefficients are currently designed to cope only with non-overlapping units (the term overlapping here also covers embedding), which is a limitation for several fields in CL. It is debatable whether they could be generalized to handle overlapping units. It seems that this would involve a major change in the strategy, which currently requires comparing the intersections of all pairs of units. In the example shown in Figure 22, even though the annotators fully agree on their two units, the alphas will inherently compare A1 with B2 and A2 with B1 (in addition to the normal comparisons of A1 with B1 and A2 with B2), and will count the resulting intersections as disagreements. It is necessary here to choose once and for all which unit to compare to which other one, rather than to perform all the comparisons. But making such a choice precisely consists in making an alignment, which is a fundamental feature of γ. Consequently, it seems that the alphas would need a structural modification to cope with overlapping. As explained by Reidsma, Heylen, and Ordelman (2006), because of the lack of specialized coefficients coping with unitizing, a fairly standard practice is to use categorization coefficients on a discretized (i.e., atomized) version of the continuum: For instance, each character (or each word, or each paragraph) of a text is considered as an item, and a standard categorization coefficient such as κ is used to compute agreement. Such a measure is called κd (for discretized κ) hereafter. Several weaknesses of this approach have already been mentioned in the state-of-the-art section.
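For concreteness, a minimal sketch of κd for two annotators (one atom per position of the continuum; the granularity and the gap marker are our own choices):

```python
from collections import Counter

def kappa_discretized(ann_a, ann_b, length, gap="GAP"):
    """kappa_d: atomize the continuum, label each atom with the covering
    unit's category (or a gap marker), then apply Cohen's kappa. Gap atoms
    enter the computation, which is precisely the source of the 'agreement
    on blanks' weakness discussed in the text. Assumes non-overlapping units."""
    def atoms(units):
        labels = [gap] * length
        for start, end, cat in units:
            labels[start:end] = [cat] * (end - start)
        return labels
    a, b = atoms(ann_a), atoms(ann_b)
    p_obs = sum(x == y for x, y in zip(a, b)) / length
    count_a, count_b = Counter(a), Counter(b)
    p_exp = sum(count_a[k] * count_b[k] for k in count_a) / length ** 2
    return (p_obs - p_exp) / (1 - p_exp)
```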
It is interesting to compare such a measure with the specialized one c|uα: Even if they both rely on the aggregatable hypothesis, they show significant differences (as confirmed by the experiments presented in the next section). The main one is that c|uα does not use an artificial atomization of the continuum, and only compares units with units. In doing so, it is not prone to agreement on blanks, in contrast to κd. Another difference is that, for the same reason, c|uα is not inherently limited to non-overlapping units: Even if it is not currently designed to cope with them, as we have already seen, it is possible to submit overlapping units to this measure (some results are shown in the next section). In this section on benchmarking, we use the Corpus Shuffling Tool (CST) introduced by Mathet et al. (2012) to compare γ concretely and accurately with the other measures. We first introduce the error types that it provides: category (category mistakes may occur), position (boundaries may be shifted), false positives (annotators add units to the reference units), false negatives (annotators miss some of the reference units), and splits (annotators put two or more contiguous units, occupying the same span of text, instead of a single reference unit). This tool is used to simulate varying degrees of disagreement for the different error types, and the metrics are compared with each other according to how they react to these disagreements. For a given error type, for each magnitude between 0 and 1 (with a step of 0.05), the tool creates 40 artificial, multi-annotator shuffled annotation sets, and computes the different measures on them. Hence, we obtain a full graph showing the behavior of each measure for this error type, with the magnitude on the x-axis and the average agreement (over the 40 annotation sets) on the y-axis. This provides a sort of "X-ray" of the capabilities of the measures with respect to this error type, which should be evaluated against the following desiderata:
– A measure should provide a full response to the whole range of magnitudes, which means in particular that the curve should ideally start from 1 (at m = 0) and reach 0 (at m = 1), but never go below 0 (indeed, negative agreement values require a part of systematic disagreement, which is not simulated by the current version of the CST).
– The response should be strictly decreasing: A flat part would mean that the measure does not differentiate between different magnitudes, and, even worse, an increasing part would mean that the measure is counter-effective at some magnitudes, where a worse error is penalized less severely.
We emphasize that the whole graph is important, up to magnitude 1. Indeed, in most real annotated corpora, even when the overall agreement is high, errors corresponding to all magnitudes may occur. For instance, an agreement of 0.8 does not necessarily correspond to the fact that all annotations are affected by slight errors (which correspond to magnitudes close to 0); it may, for instance, correspond to the fact that a few units are affected by severe errors (which may correspond to magnitudes close or equal to 1). It is important to note that this tool was designed by the authors of γ, for tasks where units cannot be considered as atomizable. In particular, it was conceived so that disagreements concerning small units are as important as those concerning large ones. However, it is provided as open source (see the Conclusion section) so that anyone can test and modify it, and propose new experiments to test γ and other measures in the future.
6.3.1 Introducing the CST. The main principle of this tool is as follows. A reference corpus is built with respect to a statistical model, which defines the number of categories, their prevalence, the minimum and maximum length for each category, and so forth. Then, this reference is used by the shuffling tool to generate a multi-annotator corpus, simulating the fact that each annotator makes mistakes of a certain type and of a certain magnitude. It is important to note that the generated corpus does not include the reference it is built from. The magnitude m is the strength of the shuffling, that is to say, the severity of the mistakes annotators make compared with the reference. It can be set from 0, which means that no damage is applied (and the annotators are perfect), to the extreme value 1, which means that annotators are assumed to behave in the worst possible way (while still being independent of each other), namely, at random. Figure 23 illustrates the way such a corpus is built: From the reference containing some categorized units, three new sets of annotations are built, simulating three annotators who are assumed to have the same annotating skill level, set in this example at magnitude 0.1. The applied error type is position only, that is to say, each annotator makes mistakes only when positioning boundaries, but does not make any other mistake (the units are reproduced in the same order, with the correct category, and in the same number). At this low magnitude, the positions are still close to those of the reference, but often vary a little. Hence, we obtain here a slightly shuffled multi-annotator corpus. Let us sum up the way the error types are currently designed in the CST (a sketch of the position error type follows the list).
Position. At magnitude m, for a given unit, we define a value shiftmax that is proportional to m and to the length of the unit, and each boundary of the unit is shifted by a value randomly chosen between −shiftmax and shiftmax (note: at magnitude 0, because shiftmax = 0, units are not shifted).
Category. This shuffling cannot be described in a few words (see Mathet et al. [2012] for details). It uses special matrices to simulate, using conditional probabilities, progressive confusion between categories, and can be configured to take the overlapping of categories into account. The higher the magnitude, the more frequent and severe the confusion.
False negatives. At magnitude m, each unit has probability m of being forgotten. For instance, at magnitude m = 0.5, each annotator misses (on average) half of the units of the reference (but not necessarily the same units as the other annotators).
False positives. At magnitude m, each annotator adds a certain number of units (proportional to m) to the ones of the reference.
Splits. At magnitude m, each annotator splits a certain number of units (proportional to m). A split unit may be re-split, and so on.
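As announced above, the position error type is the simplest to sketch (the proportionality constant k between shiftmax and m · length is an arbitrary choice of this illustration):

```python
import random

def shuffle_positions(units, magnitude, k=1.0):
    """CST position error type: each boundary is shifted by a value drawn
    uniformly in [-shift_max, shift_max], with shift_max proportional to the
    magnitude and to the unit's length. At magnitude 0, shift_max = 0 and
    the units are reproduced exactly."""
    noisy = []
    for start, end, cat in units:
        shift_max = k * magnitude * (end - start)
        s = start + random.uniform(-shift_max, shift_max)
        e = end + random.uniform(-shift_max, shift_max)
        if e <= s:  # degenerate after shifting: minimal repair (sketch-level)
            s, e = min(s, e), max(s, e) + 1
        noisy.append((s, e, cat))
    return noisy
```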
6.3.2 Pure Segmentation: γ, WD, GHD. Even if γ was created to cope with error types that are poorly or not at all dealt with by other methods and, moreover, to cope with all of them simultaneously (unitizing with categorization, including overlapping units), it is illuminating to observe how it behaves on more specific error types, to which specialized and well-known methods are dedicated. We start with pure segmentation. Figure 24 shows the behavior of WD, GHD, and γ for two error types. For false negatives, WD and GHD are quite close, with an almost linear response until magnitude 0.6. Their drawback is that their responses are limited by an asymptote, because of the absence of chance correction, whereas γ shows the full range of agreement values; for shifts, WD and GHD show an asymptote at about agreement = 0.4, while γ shows values from 1 to 0. This experiment confirms the advantage of using γ rather than these distances for inter-annotator agreement measurement. 6.3.3 Pure Categorization. In this experiment, the CST is set to three annotators and four categories with given prevalences. The units are all of the same size, positioned at fixed, predefined positions, so that the focus is on categorizing only. It should be noted that, with such a configuration, α and κ behave exactly in the same way as c|uα. It is particularly striking in Figure 25 that γ behaves in almost the same way as c|uα. In fact, the observed values of these measures are exactly the same, the only difference coming from a slight difference in the expected values, due to sampling. Other tests carried out with the pure categorizing coefficient κ yielded the same results on this particular error type, which means that γ performs as well as recognized measures as far as categorizing is concerned, with two or more annotators. The uα curve goes below zero at magnitude 0.5 (probably for the reasons seen in Section 6.1.7). Moreover, its behavior depends on the size of the gaps: Indeed, with other settings of the shuffling, the curve may, on the contrary, be stuck above zero. κd fails to reach 0 because of the virtual agreement on gaps (but it would reach 0 if there were no gaps). Lastly, SER (averaging the results of each pair of annotators) is bounded below by 0.6, which results from not taking chance into account. 6.3.4 Almost General Case: Unitizing + Categorizing. This section concerns the more general uses of γ, combining both unitizing and categorizing. However, in order to remain compatible with uα, c|uα, and κd, we limit the configurations here so that the units do not overlap at all. In particular, the reference was built with no overlapping units, and we used a modified version of the shifting shuffling procedure so that the non-overlapping constraint is fully satisfied, even at high magnitudes. Positional errors (Figure 26a). An important point is that this shuffling error type, which is based only on moving positions, has a paradoxical consequence on category agreement, since units of different categories align when sufficient shifting is applied. Consequently, c|uα is not blocked at 1, even though it is designed to focus on categories. Additionally, it starts to decrease from the very first shifts, as soon as units from different annotators start overlapping. This is a concrete consequence of what was formally studied in Section 6.1.3. γ has the most progressive response, reaches 0.1 at magnitude 1, and is the only measure to be strictly decreasing. SER immediately drops to agreement 0.5 at magnitude 0.05. As it relies on a binary positional distance, it fails to distinguish between small and large errors. This is a serious drawback of such a measure for most CL tasks. It then goes below zero and is not strictly decreasing. uα is mostly strictly decreasing, but has some increasing parts and, even more problematic, negative values from 0.6 to 0.9, probably because of the reason explained in Section 6.1.7. κd is too responsive at the very first magnitudes, and is not strictly decreasing, probably because it "does not compensate for differences in length of segments" (Reidsma, Heylen, and Ordelman 2006, page 3).
Positional and categorial errors (Figure 26b). γ is strictly decreasing and reaches 0. The alphas are not strictly decreasing, and once again uα drops below 0 from magnitude 0.6 onwards. κd is not strictly decreasing (again, probably because it "does not compensate for differences in length of segments"), but its general shape is not that far from γ's. Split errors (Figure 27). The split error type would need to create an infinite number of splits to represent pure chaos at magnitude 1. As this is computationally not possible, we restricted the number of splits to five times the number of units of the reference. We should therefore not expect the measures to reach 0. In this context, γ shows a good range of responses, from 1 to 0.2, along an almost linear curve. SER is also quite linear, but gives very confusing values for this error type because it reaches negative values above magnitude 0.6. Finally, uα, c|uα, and κd are not responsive at all to this error type and remain blocked at 1 (which is normal for c|uα, which focuses on categorizing). False positives and false negatives (Figure 28). In the current version of the CST, the false positive error type creates some overlapping (new units may overlap), which is why uα and κd were discarded from this experiment. However, we kept c|uα because it behaves quite well despite overlapping units. All the measures have an overall good response to the false positives error type, as shown in Figure 28a, even if the shape of c|uα is delayed compared with the others; it should be pointed out, though, that SER has a curious and unfortunate final increasing section (not visible in the figure because this section is below 0). On the other hand, bigger differences appear with false negatives (Figure 28b). γ is still strictly decreasing and almost reaches 0 (0.025), but uα is not strictly decreasing and is at 0 or below from m = 0.3; SER quickly drops below 0 from m = 0.4; κd is not strictly decreasing; and c|uα, as for splits, does not react at all but remains stuck at 1, which is desired for this coefficient focused on categories (values of c|uα beyond m = 0.7 are missing because there are not enough intersections between units for this measure to work). Overview of each measure for the almost general case. In order to summarize the behavior of each measure in response to the different error types for the almost general case (without overlap), we pick all the curves relative to a given measure out of the previous plots and draw them in the same graph, as shown in Figure 29. Briefly, γ shows a steady behavior for all error types, almost strictly decreasing from 1 to 0. uα has some increasing parts and negative values, and is sometimes not responsive. c|uα is very responsive to some error types, less responsive to others, and sometimes not responsive at all (which is desired, as already said). SER gives unreliable responses, being either too responsive (reaching negative values) or not responsive enough. Finally, κd is not always responsive and is most of the time not strictly decreasing, but is sometimes quite progressive. 6.3.5 Fully General Case: Unitizing + Categorizing. This last section considers the fully general case, where overlapping of units within an annotator is allowed. In this experiment, we took a reference corpus with no overlap, but the errors applied (a combination of positioning and false positives) progressively lead to overlapping units. The results are shown in Figure 30.
As expected, γ behaves much the same as it does with non-overlapping configurations. Admittedly, c|uα was not designed to handle these configurations (and so should not be included in this experiment), but surprisingly it seems to perform in rather the same way as it does with no overlapping; this must be investigated further, but judging from this preliminary observation, it seems this coefficient could still be operational and useful in such cases. On the contrary, uα does not handle this experiment correctly and so was not included in the graph.","7 conclusion :The present work addresses an aspect of inter-annotator agreement that is rarely studied in other approaches: the combination of unitizing and categorizing. Nevertheless, the use in CL of methods that have been transposed from other domains (such as κ, which was originally dedicated to pure categorizing), for example at the discourse level, leads to severe biases, and manifests the need for specialized coefficients, fair and meaningful, suitable for annotation tasks focusing on complex objects. In the end, only Krippendorff's coefficients uα and c|uα come close to the needs we expressed in the introduction, with the restriction that they are natively limited to non-overlapping units. The main reason why research on this topic is sparse, and why it may be difficult to extend Krippendorff's coefficients to overlapping units, probably results from the fact that we are facing a major difficulty here: the simultaneous double discrepancy between annotators, with annotations possibly differing both in positioning relevant units anywhere on a continuum, and in categorizing each of these free units. Consequently, it is difficult for a method to choose precisely which features to compare between different annotators (unlike pure categorizing, where we know exactly what each annotator says for each predefined element to be categorized); and this problem is exacerbated when overlapping units (within an annotator) occur. To cope with this critical point, we advocate the use of an alignment that ultimately expresses which unit from one annotator should be compared to which unit, if any, from another one, and consequently makes it natural and easier to compute the agreement. Moreover, we have shown that this alignment cannot be done in an independent way, but is part of the measure method itself. This is the "unified" aspect of our approach. We have also shown that in order to be relevant, this alignment cannot be done at a local level (unit by unit), but should consider the whole set of annotations at the same time, which is the "holistic" aspect. This is how the new method γ presented here was designed. Moreover, this method is highly configurable to cope with different annotation tasks (in particular, boundary errors should not necessarily be considered the same for all kinds of annotations), and it provides the alignment that emerges from the agreement measurement. Not only is this alignment a result in itself, which can be used to build a gold standard very quickly from a multi-annotated corpus (by listing all unitary alignments and, for each of them, showing the corresponding frontiers and category proposed by each annotator), but it also behaves as a kind of "flight recorder" of the measure: Observing these alignments gives crucial information on the choices the measure makes and on whether it needs to be adjusted, unlike other methods, which only provide a sole "out of the box" value.
Finally, we have compared γ to several other popular coefficients, even in their specific domains (pure categorization, pure segmentation), through a specific benchmark tool (namely, the CST), which scans the responsivity of the measures to different kinds of errors and at all degrees of severity. Overall, γ provides broader and more progressive responsivity than the others in the experiments shown here. Concerning pure categorizing, γ does not have an edge over the well-known coefficients, such as α, but it is interesting to see that it behaves in much the same way as the others in this specific field. Concerning segmentation, γ outperforms WD and GHD, by taking chance into account, but also by not depending on the heterogeneity of the segment sizes. Concerning unitizing with categorizing, as theoretically expected and confirmed by the benchmarking, SER shows severe limitations, such as a binary response to various (small or severe) positional or categorial errors, the fact that it does not provide chance correction, and its limitation to two annotators only. Krippendorff's coefficients uα and c|uα present very interesting properties, such as chance correction. However, as we have shown through thorough comparisons, they rely on hypotheses quite different from ours, since they consider intersections between units whereas we advocate considering aligned units. We have identified several situations in CL where considering alignments is advantageous, for instance, when contiguous segments of the same type may occur, or when errors on several short units should be considered as more serious than one error on a long unit, but we do not posit these situations as a universal rule. In conclusion, when unitizing and categorizing involve internal overlapping of units, only γ is currently available, and, even if for this reason it cannot be compared to any other method at the moment, benchmarking reveals very similar responses in overlapping configurations and in non-overlapping ones, which already demonstrates its consistency and its relevance. We can summarize the features of γ as follows: It takes into account all varieties of unitizing, combines unitizing and categorizing simultaneously, allows any number of annotators, provides chance correction, produces an alignment while it measures agreement, and provides progressive responsivity to errors both for unitizing and for categorizing. This makes γ suitable for annotation tasks such as those relative to NAMED ENTITY, DISCOURSE FRAMING, TOPIC TRANSITION, or ENUMERATIVE STRUCTURES. The full implementation of γ is provided as Java open-source packages on the http://gamma.greyc.fr Web site. It is already compatible with annotations created with the Glozz Annotation Platform (Widlöcher and Mathet 2012), and with annotations generated by the Corpus Shuffling Tool.",,"Agreement measures have been widely used in computational linguistics for more than 15 years to check the reliability of annotation processes. Although considerable effort has been made concerning categorization, fewer studies address unitizing, and when both paradigms are combined even fewer methods are available and discussed. The aim of this article is threefold. First, we advocate that to deal with unitizing, alignment and agreement measures should be considered as a unified process, because a relevant measure should rely on an alignment of the units from different annotators, and this alignment should be computed according to the principles of the measure.
Second, we propose the new versatile measure γ, which fulfills this requirement and copes with both paradigms, and we introduce its implementation. Third, we show that this new method performs as well as, or even better than, other more specialized methods devoted to categorization or segmentation, while combining the two paradigms at the same time.","[{""affiliations"": [], ""name"": ""Yann Mathet""}, {""affiliations"": [], ""name"": ""Antoine Widl\u00f6cher""}, {""affiliations"": [], ""name"": ""Jean-Philippe M\u00e9tivier""}]",SP:48c2046e4054182647ac9058902fdedb3869a97e,"[{""authors"": [""Ludovic Tanguy"", ""Marianne Vergez-Couret"", ""Laure Vieu.""], ""title"": ""An empirical resource for discovering cognitive principles of discourse organisation: The ANNODIS corpus"", ""venue"": ""Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC 2012)"", ""year"": 2012}, {""authors"": [""Artstein"", ""Ron"", ""Massimo Poesio.""], ""title"": ""Inter-coder agreement for computational linguistics"", ""venue"": ""Computational Linguistics, 34(4):555\u2013596."", ""year"": 2008}, {""authors"": [""Beeferman"", ""Douglas"", ""Adam Berger"", ""John Lafferty.""], ""title"": ""Text segmentation using exponential models"", ""venue"": ""Proceedings of the 2nd Conference on Empirical Methods in Natural Language Processing, pages 35\u201346,"", ""year"": 1997}, {""authors"": [""E.M. Bennett"", ""R. Alpert"", ""A.C. Goldstein.""], ""title"": ""Communications through limited questioning"", ""venue"": ""Public Opinion Quarterly, 18(3):303\u2013308."", ""year"": 1954}, {""authors"": [""Berry"", ""Charles C.""], ""title"": ""The K statistic [letter]"", ""venue"": ""Journal of the American Medical Association, 268(18):2513\u20132514."", ""year"": 1992}, {""authors"": [""Bestgen"", ""Yves.""], ""title"": ""Improving text segmentation using latent semantic analysis: A reanalysis of Choi, Wiemer-Hastings, and Moore (2001)"", ""venue"": ""Computational Linguistics, 32(1):5\u201312."", ""year"": 2006}, {""authors"": [""Bestgen"", ""Yves""], ""title"": ""Quel indice pour mesurer l\u2019efficacit\u00e9 en segmentation de textes"", ""venue"": ""Actes de TALN"", ""year"": 2009}, {""authors"": [""A. Bookstein"", ""V.A. Kulyukin"", ""T. Raita.""], ""title"": ""Generalized Hamming Distance"", ""venue"": ""Information Retrieval, (5):353\u2013375."", ""year"": 2002}, {""authors"": [""J. Carletta""], ""title"": ""Assessing agreement on classification tasks: The kappa statistic"", ""venue"": ""Computational Linguistics, 22(2):249\u2013254."", ""year"": 1996}, {""authors"": [""Carletta"", ""Jean.""], ""title"": ""Unleashing the killer corpus: Experiences in creating the multi-everything AMI Meeting Corpus"", ""venue"": ""Language Resources and Evaluation, 41(2):181\u2013190."", ""year"": 2007}, {""authors"": [""Charolles"", ""Michel"", ""Anne Le Draoulec"", ""Marie-Paule Pery-Woodley"", ""Laure Sarda.""], ""title"": ""Temporal and spatial dimensions of discourse organisation"", ""venue"": ""Journal of French Language Studies,"", ""year"": 2005}, {""authors"": [""J. Cohen""], ""title"": ""A coefficient of agreement for nominal scales"", ""venue"": ""Educational and Psychological Measurement, 20(1):37\u201346."", ""year"": 1960}, {""authors"": [""J. Cohen""],
Cohen""], ""title"": ""Weighted kappa: Nominal scale agreement with provision for scaled disagreement or partial credit"", ""venue"": ""Psychological Bulletin, 70(4):213\u2013220."", ""year"": 1968}, {""authors"": [""Di Eugenio"", ""Barbara"", ""Michael Glass.""], ""title"": ""The kappa statistic: A second look"", ""venue"": ""Computational Linguistics, 30(1):95\u2013101."", ""year"": 2004}, {""authors"": [""Eisenstein"", ""Jacob.""], ""title"": ""Hierarchical text segmentation from multi-scale lexical cohesion"", ""venue"": ""Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association"", ""year"": 2009}, {""authors"": [""Fleiss"", ""Joseph.""], ""title"": ""Measuring nominal scale agreement among many raters"", ""venue"": ""Psychological Bulletin, (5):378\u2013382."", ""year"": 1971}, {""authors"": [""Fort"", ""Kar\u00ebn"", ""Claire Fran\u00e7ois"", ""Olivier Galibert"", ""Maha Ghribi.""], ""title"": ""Analyzing the impact of prevalence on the evaluation of a manual annotation campaign"", ""venue"": ""Eighth International Conference on Language"", ""year"": 2012}, {""authors"": [""Laurent.""], ""title"": ""Named and specific entity detection in varied data: The quaero named entity baseline evaluation"", ""venue"": ""Seventh International Conference on Language 477"", ""year"": 2010}, {""authors"": [""Hills"", ""CA. Krippendorff"", ""Klaus""], ""title"": ""On the reliability"", ""year"": 1995}, {""authors"": [""Labadi\u00e9"", ""Alexandre"", ""Patrice Enjalbert"", ""Yann Mathet"", ""Antoine Widl\u00f6cher.""], ""title"": ""Discourse structure annotation: Creating reference corpora"", ""venue"": ""Workshop on Language Resource and Language"", ""year"": 2010}, {""authors"": [""S. Lamprier"", ""T. Amghar"", ""B. Levrat"", ""F. Saubion.""], ""title"": ""On evaluation methodologies for text segmentation algorithms"", ""venue"": ""Proceedings of ICTAI 2007, pages 19\u201326. Patras."", ""year"": 2007}, {""authors"": [""Makhoul"", ""John"", ""Francis Kubala"", ""Richard Schwartz"", ""Ralph Weischedel.""], ""title"": ""Performance measures for information extraction"", ""venue"": ""Proceedings of DARPA Broadcast News Workshop, pages 249\u2013252,"", ""year"": 1999}, {""authors"": [""Mathet"", ""Yann"", ""Antoine Widl\u00f6cher.""], ""title"": ""Une approche holiste et unifi\u00e9e de l\u2019alignement et de la mesure d\u2019accord inter-annotateurs"", ""venue"": ""Traitement Automatique des Langues Naturelles 2011"", ""year"": 2011}, {""authors"": [""Nadeau"", ""David"", ""Satoshi Sekine.""], ""title"": ""A survey of named entity recognition and classification"", ""venue"": ""Linguisticae Investigationes, 30(1):3\u201326."", ""year"": 2007}, {""authors"": [""Olivero"", ""Patrick.""], ""title"": ""Calcul de la taille des \u00c9chantillons"", ""venue"": ""CETE du Sud-Ouest / DAT / ZELT. Technical report."", ""year"": 2001}, {""authors"": [""Passonneau"", ""Rebecca J."", ""Vikas Bhardwaj"", ""Ansaf Salleb-Aouissi"", ""Nancy Ide.""], ""title"": ""Multiplicity and word sense: Evaluating and learning from multiply labeled word sense annotations"", ""venue"": ""Language Resources and"", ""year"": 2012}, {""authors"": [""L. Pevzner"", ""M. Hearst.""], ""title"": ""A critique and improvement of an evaluation metric for text segmentation"", ""venue"": ""Computational Linguistics, 28(1):19\u201336."", ""year"": 2002}, {""authors"": [""D. 
Reidsma""], ""title"": ""Annotations and Subjective Machines of Annotators, Embodied Agents, Users, and Other Humans"", ""venue"": ""Ph.D. thesis, University of Twente."", ""year"": 2008}, {""authors"": [""D. Reidsma"", ""D.K.J. Heylen"", ""R.J.F. Ordelman.""], ""title"": ""Annotating emotions in meetings"", ""venue"": ""Proceedings of the Fifth International Conference on Language Resources and Evaluation, LREC 2006,"", ""year"": 2006}, {""authors"": [""Reidsma"", ""Denis"", ""Jean Carletta.""], ""title"": ""Reliability measurement without limits"", ""venue"": ""Computational Linguistics, 34(3):319\u2013326."", ""year"": 2008}, {""authors"": [""Scott"", ""William.""], ""title"": ""Reliability of content analysis: The case of nominal scale coding"", ""venue"": ""Public Opinion Quarterly, 19(3):321\u2013325."", ""year"": 1955}, {""authors"": [""Siegel"", ""Sidney"", ""N. John Castellan.""], ""title"": ""Nonparametric Statistics for the Behavioral Sciences"", ""venue"": ""McGraw-Hill, New York, 2nd edition."", ""year"": 1988}, {""authors"": [""Teufel"", ""Simone.""], ""title"": ""Argumentative Zoning: Information Extraction from Scientific Articles"", ""venue"": ""Ph.D. thesis, University of Edinburgh."", ""year"": 1999}, {""authors"": [""Teufel"", ""Simone"", ""Jean Carletta"", ""Marc Moens""], ""title"": ""An annotation scheme for discourse-level argumentation in research"", ""year"": 1999}, {""authors"": [""Teufel"", ""Simone"", ""Marc Moens.""], ""title"": ""Summarizing scientific articles: Experiments with relevance and rhetorical status"", ""venue"": ""Computational Linguistics, 28(4):409\u2013445."", ""year"": 2002}, {""authors"": [""Widl\u00f6cher"", ""Antoine"", ""Yann Mathet.""], ""title"": ""The glozz platform: A corpus annotation and mining tool"", ""venue"": ""ACM Symposium on Document Engineering (DocEng\u201912), pages 171\u2013180, Paris."", ""year"": 2012}, {""authors"": [""Zwick"", ""Rebecca.""], ""title"": ""Another look at interrater agreement"", ""venue"": ""Psychological Bulletin, (103):347\u2013387. 479"", ""year"": 1988}]","acknowledgments :We wish to thank three anonymous reviewers for helpful comments and discussion. The authors would also like to warmly thank Klaus Krippendorff for his support when they implemented his coefficients in order to test them. This work was carried out in the GREYC Laboratory, Caen, France, with the strong support of the French Contrat de Projet État-Région (CPER) and the Région Basse-Normandie, which have provided this research with two engineers, Jérôme Chauveau and Stéphane Bouvry.",,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,"appendix a. examples of linguistic objects and possible annotation tasks :Terms emphasized hereafter refer to the terminology defined in Section 2. PART-OF-SPEECH. Part-of-speech (POS) tagging (see, for example, Güngör [2010] for a recent state of the art) gives a well-known illustration of a pure categorization without unitizing task: for all the words in a text (predefined units, full-covering, no overlap), annotators have to select a category, belonging to quite a small set of exclusive elements. POS units (words) having the same label are obviously not aggregatable. GENE RENAMING. In a study on gene renaming presented in Fort et al. (2012), all the tokens (predefined units, no overlap) are markable (categorization) with “Nothing” (the default value), “Former” (the original name of a gene) or “New” (its new name). This work at word level considers sparser (sporadic) phenomena than POS tagging. 
However, the annotation is defined as full-covering, with “Nothing” as a default tag. Note that the presence of the “Nothing” category also reveals here the reduction of a unitizing problem (detection of renaming) to a pure coding system (categorization). These units are not aggregatable. WORD SENSE. For the annotation task described in Passonneau et al. (2012), annotators were asked to assign sense labels (categorization without unitizing) to preselected moderately polysemous words (sporadicity, predefined units, no overlap) in preselected sentences where they occur. Adjacent words are not aggregatable with sense preservation. NAMED ENTITY. Well-established named entity (NE) recognition tasks (see, for example, Nadeau and Sekine [2007]) have led to many annotation efforts. In such tasks, the annotator is often asked to identify the units in the text’s continuum (unitizing, sporadicity) and to select a NE type from an inventory (categorization). It is well known that some difficulties of NE annotation relate to the delimitation of NE boundaries. For example, for a phrase such as “Mr X, the President of Y,” it makes sense to annotate subparts (“X,” “Mr X,” “the President of Y”) and/or the whole. “Y” is also a NE of another type. This may result in hierarchical or free overlapping structures. Adjacent NEs are not aggregatable. ARGUMENTATIVE ZONING. Studies concerned with argumentative zoning (Teufel 1999; Teufel, Carletta, and Moens 1999; Teufel and Moens 2002) consider the argumentative structure of texts, and identify text spans having specific roles. For each sentence (full-covering, predefined units, no overlap), a category (categorization) is selected. Adjacent sentences of the same type are aggregated into larger spans (argumentative zones). This reveals an underlying question of unitizing. However, it has to be noted that the categorization mainly concerns predefined sentences: argumentative types are aggregatable. DISCOURSE FRAMING. In Charolles et al.’s discourse framing hypothesis (Charolles et al. 2005, page 121), “a discourse frame is described as the grouping together of a number of propositions which are linked by the fact that they must be interpreted with reference to a specific criterion, realized in a frame-initial introducing expression.” Thus, temporal or spatial introducing expressions lead, for example, to temporal or spatial discourse frames in the text continuum (unitizing, sporadicity, categorization). Discourse frames are not aggregatable. Subordination is possible, leading to possibly hierarchical overlap, where frames (of the same type or of different types) are embedded. COMMUNICATIVE BEHAVIOR. The multimodal AMI Meeting corpus (Carletta 2007) covers a wide range of phenomena, and contains many different layers of annotation describing the communicative behavior of the participants in meetings. For example, in Reidsma (2008), annotators are required to identify fragments in a video recording (unitizing, sporadicity) and to categorize them (categorization). For such an annotation task, one can easily imagine instruction manuals allowing annotators to use multiple labels and to identify embedded (hierarchical overlap) or free overlapping units, even if the example provided by Reidsma (2008) does not. DIALOG ACT. Annotating dialog acts conforming to a standard as defined, for example, in Bunt et al. (2010), leads annotators to assign communicative function labels and types of semantic content (categorization) to stretches of dialogue called functional segments.
The possible multifunctionality of segments (one functional segment is related to one or more dialog acts), and the fact that annotations may be attached directly to the primary data, such as stretches of speech defined by begin and end points, or attached to structures at other levels of analysis, seem to allow different kinds of configurations and annotation instructions: unitizing or pure categorization of pre-existing structures, sporadicity or full-covering, hierarchical, overlapping, or linear segmentation. TOPIC SEGMENTATION. Topic segmentation (see, for example, the seminal work by Hearst [1997] or Bestgen [2006] for a more recent state of the art), which aims at detecting the most important thematic breaks in the text’s continuum, gives an illuminating example of pure segmentation. This unitizing problem of linear segmentation is full-covering and restricted to the detection of breaks (the right boundary of a unit corresponds to the left boundary of the following segment) (no overlap). If we consider the resulting segments, there is just one category (topic segment) and thus no categorization. Adjacent topic segments are obviously not aggregatable without a shift in meaning. HIERARCHICAL TOPIC SEGMENTATION. In order to better take into account the fact that lexical cohesion is a multiscale phenomenon and that discourse displays a hierarchical structure, hierarchical topic segmentation, proposed, for example, by Eisenstein (2009), preserves the main goal and properties of text-tiling (unitizing, no categorization, full-covering, not aggregatable segments), but allows a topic segment to be subsegmented into sub-topic segments (hierarchical [but not free] overlap). TOPIC TRANSITION. The topic zoning annotation model presented in Labadié et al. (2010) is based on the hypothesis that, in a well-constructed text, abrupt topic boundaries are more the exception than the rule. This model introduces transition zones (unitizing) between topics, zones that help the reader to move from one topic to another. The annotator is asked to identify and categorize (categorization) topic segments, introduction, conclusion, and transition zones. Hierarchical overlap is possible (embedded elements of the same type or of different types are allowed). Free overlapping structures are frequent, by virtue of the nature of transitions. Adjacent topic zones and adjacent transition zones are not aggregatable. ENUMERATIVE STRUCTURES. A study on complex discourse objects such as enumerative structures (Afantenos et al. 2012) illustrates both the need for sporadic unitizing and the need for categorization. The enumerative structures have a complex internal organization, which is composed of various types of subelements (hierarchical overlap) (a trigger of the enumeration, items composing its body, etc.) which are not aggregatable.
1 introduction :A growing body of work in computational linguistics (CL hereafter) or natural language processing manifests an interest in corpus studies, and requires reference annotations for system evaluation or machine learning purposes. The question is how to ensure that an annotation can be considered, if not as the “truth,” then at least as a suitable
reference. For some simple and systematic tasks, domain experts may be able to annotate texts with almost total confidence, but this is generally not the case when no expert is available, or when the tasks become harder. The very notion of “truth” may even be utopian when the annotation process includes a certain degree of interpretation, and we should in such cases look for a consensus, also called the “gold standard,” rather than for the “truth.” For these reasons, a classic strategy for building annotated corpora with sufficient confidence is to give the same annotation task to several annotators, and to analyze to what extent they agree in order to assess the reliability of their annotations. This is the aim of inter-annotator agreement measures. It is important to point out that most of these measures do not evaluate the distance from annotations to the “truth,” but rather the distance across annotators. Of course, the hope is that the annotators will agree as much as possible, and it is usually considered that a good inter-annotator agreement ensures the constancy and the reproducibility of the annotations: When agreement is high, the task is consistent and correctly defined, the annotators can be expected to agree on another part of the corpus, or at another time, and their annotations therefore constitute a consensual reference (even if, as shown for example by Reidsma and Carletta [2008], such an agreement is not necessarily informative for machine learning purposes). Moreover, once several annotators reach good agreement on a given part of a corpus, each of them can annotate other parts of the corpus alone with great confidence in the reproducibility (see the preface to Gwet [2012, page 6] for illuminating considerations). Consequently, inter-annotator agreement measurement is an important point for all annotation efforts, because a given agreement value provided by a given method is often taken to validate or invalidate the consistency of an annotation effort. How to measure agreement, and how to define a good measure, is the other part of the problem. There is no universal answer, because how to measure depends on the nature of the task, hence on the kind of annotations. Admittedly, much work has already been done for some kinds of annotation efforts, namely, when annotators have to choose a category for previously identified entities. This approach, which we will call pure categorization, has led, since the 1950s, to several well-known and widely discussed coefficients such as κ, π, or α. Some more recent efforts have been made in the domain of unitizing, following Krippendorff’s terminology (Krippendorff 2013), where annotators have to identify by themselves what the elements to be annotated in a text are, and where they are located. Studies are scarce, however, as Krippendorff pointed out: “Measuring the reliability of unitizing has been largely ignored in favor of coding predefined units” (Krippendorff 2013, page 310). This scarcity concerns both segmentation, where annotators simply have to mark boundaries in texts to separate contiguous segments, and, more generally, unitizing, where gaps may exist between units. Moreover, some even more complex configurations may occur (overlapping or embedding units), which are more rarely taken into account.
And when categorization meets unitizing, as is the case in CL in such fields as, for example, NAMED ENTITY RECOGNITION or DISCOURSE FRAMING (small caps are used to refer to the examples of annotation tasks introduced in Section 2.2), very few methods are proposed and discussed. That is the main problem we focus on in this article and to which γ provides solutions. The new coefficient γ that is introduced in this article is an agreement measure concerning the joint tasks of unit locating (unitizing) and unit labeling (categorization). It relies on an alignment of units between different annotators, with penalties associated with each positional and categorial discrepancy. The alignment itself is chosen to minimize the overall discrepancy in a holistic way, considering the full continuum to make choices, rather than making local choices. The proposed method is unified because the computation of γ and the selection of the best alignment are interdependent: The computed measure depends on the chosen alignment, whose selection depends on the measure. This method and the principles proposed in this article have been built up since 2010, and were first presented to the French community in a very early version in Mathet and Widlöcher (2011). The initial motivation for their development was the lack of dedicated agreement measures for annotations at the discourse level, and more specifically for annotation tasks related to TOPIC TRANSITION phenomena. The article is organized as follows. First, we fix the scope of this work by defining the important notions that are necessary to characterize annotation tasks and by introducing the examples of linguistic objects and annotation tasks used in this article to compare available metrics. Second, we analyze the state of the art and identify the weaknesses of current methods. Then, we introduce our method, called γ. As this method is new, we compare it to the ones already in use, even in their specialized fields (pure categorization, or pure segmentation), and show that it has better properties overall for CL purposes. 2 motivations, scope, and illustrations :We focus in the present work on both categorizing and unitizing, and consider therefore annotation tasks where annotators are not provided with preselected units, but have to locate them and to categorize them at the same time. An example of a multi-annotated continuum (this continuum may be a text or, for example, an audio or a video recording) is provided in Figure 1, where each line represents the annotations of a given annotator, from left to right, respecting the continuum order. In order to characterize the annotation efforts focusing on specific linguistic objects, we consider the following properties, illustrated in Figure 1. Categorization occurs when the annotator is required to assign labels (predefined or not) to units. Unitizing occurs when the annotator is asked to identify the units in the continuum: She has to determine each of them (and the number of units that she wants) and to locate them by positioning their boundaries. Embedding (hierarchical overlap) may occur if units may be embedded in larger ones (of the same type, or not). Free overlap may occur when guidelines tolerate the partial overlap of elements (mainly of different types). Embedding is a special case of overlapping. A segmentation without overlap (hierarchical or free) is said to be strictly linear. Full-covering (vs. sporadicity) applies when all parts of the continuum are to be annotated.
For other, sporadic tasks, only parts of the continuum are selected. Aggregatable types or instances correspond to the fact that several adjacent elements having the same type may aggregate, without a shift in meaning, into a larger span having the same type. This larger span is said to be atomizable: Labeling the whole span or labeling all of its atoms are considered as equivalent, as illustrated by Figure 2. Two specific cases. We call hereafter pure segmentation (illustrated by Figure 3) the special case of unitizing with full-covering and without categorization, and we call pure categorization categorization without unitizing. To present the state of the art as well as our own propositions, and to make all of them more concrete, it is useful to mention examples of linguistic objects and annotation tasks for which agreement measures may be required. The following sections will then refer to these examples as often as possible, in order to illustrate discussions on abstract problems or configurations. Small caps are used to refer to the names of these tasks. Table 1 summarizes the properties of the linguistic objects and annotation tasks mentioned in this article to illustrate and compare methods and metrics. These objects and tasks are briefly described for convenience in Appendix A. This table shows that annotation of TOPIC TRANSITIONS is the most demanding of the tasks regarding the number of necessary criteria a suitable agreement metric should assess, but most of the tasks listed here require assessment of both unitizing and categorization. 3 state of the art :As we saw in the previous section, different studies in linguistics or CL involve quite different structures, which may lead to annotation guidelines having very different properties. They require suitable metrics in order to assess agreement among annotators. As we will see, some of the needs for which γ is suitable are not satisfied by other available metrics. Note that this description of the state of the art mainly focuses on the questions which are of most importance for this work, in particular, chance correction and unitizing. For a thorough introduction to the most popular measures that concern categorizing, we refer the reader to the excellent survey by Artstein and Poesio (2008). In this section, we first address the question of chance correction in agreement measures, then we give an overview of available measures in three domains: pure categorization, pure segmentation, and unitizing. We begin the state of the art with the question of chance correction, because it is a crosscutting issue in all agreement measure domains, and because it influences the final value provided by most agreement measures. It is important to distinguish between (1) measures to evaluate systems, where the output of an annotating system is compared to a valid reference, and (2) inter-annotator agreement measures, which try to quantify the degree of similarity between what different annotators say about the same data, and which are the ones we are really concerned with in this article. In case (1), the result is straightforward, providing for instance the percentage of valid answers of a system: We know exactly how far the evaluated system is from the gold standard, and we can compare this system to others just by comparing their results. However, case (2) is more difficult.
Here, measures do not compare annotations from one annotator to a valid reference (and, most of the time, no reference already exists), but they compare annotations from different annotators. As such, they are clearly not direct distances to the “truth.” So, the question is: Above what amount of agreement can we reasonably trust the annotators? The answer is not straightforward, and this is where chance correction is involved. For instance, consider a task where two annotators have to label items with 10 categories. If they annotate at random (with the 10 categories having equal prevalence), they will have an agreement of 10%. If we consider another task involving two categories only, still at random, the agreement expected by chance rises to 50%. Based on this observation, most agreement measures try to remove chance from the observed measure, that is to say, to provide the amount of agreement that is above chance. More precisely, most agreement measures (for about 60 years, with the well-known measures κ, S, and π) rely on the same formula: If we denote by Ao the observed agreement (i.e., the agreement directly observed between annotators) and by Ae the so-called expected agreement (i.e., the agreement that should be obtained by chance), the final agreement A is defined by Equation (1). To illustrate this formula, assume that the observed agreement is seemingly high, say Ao = 0.9. If Ae = 0.5, A = 0.4/0.5 = 0.8, which is still considered as good, but if Ae = 0.7, A = 0.2/0.3 = 0.67, which is not that good, and if Ae = 0.9, which means annotators did not perform better than chance, then A = 0. Some other measures, namely, all α from Krippendorff, and the new γ introduced in this article, are computed from observed and expected disagreements (instead of agreements), denoted here respectively Do and De, and they define the final agreement by Equation (2).

$A = \frac{A_o - A_e}{1 - A_e}$ (1)

$A = 1 - \frac{D_o}{D_e}$ (2)

However, the way the expected value is computed is the only difference between many coefficients (κ, S, π, and their generalizations), and it is a controversial question. As precisely described in Artstein and Poesio (2008), there are three main ways to model chance in an annotation effort:
1. By considering a uniform distribution. For instance, in a categorization task, considering that each category (for each coder) has the same probability. The limitation of this approach is that it provides a poor model for chance annotation. Moreover, for a given task, the greater the number of categories, the lower the expected value, hence the higher the final agreement.
2. By considering the mean distribution of the different annotators, who are hence regarded as interchangeable. For instance, in a categorization task with two categories A and B, where the prevalences are respectively 90% for category A and 10% for category B, the expected value is computed as 0.9 × 0.9 + 0.1 × 0.1 = 0.82, which is much higher than the 0.5 obtained by considering a uniform distribution.
3. By considering the individual distributions of the annotators. Here, annotators are considered as not interchangeable; each of them is considered to have her own probability for each category (for a categorization task) based on her own observed distribution. It leads to the same results as with the mean distribution if annotators all have the same distribution, or to a lower value (hence a higher final agreement) if not.
In the two cases of mean and individual distributions, the expected agreement may be very high, depending on the prevalence of categories.
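To make Equations (1) and (2) and the three chance models concrete, here is a minimal Python sketch for the two-annotator case; the function names (chance_corrected, expected_agreement) are ours, not part of any of the cited coefficients' implementations.

from collections import Counter

def chance_corrected(a_o: float, a_e: float) -> float:
    """Equation (1): the amount of agreement above chance, rescaled."""
    return (a_o - a_e) / (1.0 - a_e)

def expected_agreement(labels_1, labels_2, model="mean"):
    """Expected agreement A_e for two annotators under the three chance models."""
    n = len(labels_1)
    categories = set(labels_1) | set(labels_2)
    p1 = Counter(labels_1)  # individual distribution of annotator 1
    p2 = Counter(labels_2)  # individual distribution of annotator 2
    if model == "uniform":      # S (Bennett, Alpert, and Goldstein 1954)
        return 1.0 / len(categories)
    if model == "mean":         # pi/alpha style: annotators interchangeable
        return sum(((p1[c] + p2[c]) / (2 * n)) ** 2 for c in categories)
    if model == "individual":   # kappa (Cohen 1960)
        return sum((p1[c] / n) * (p2[c] / n) for c in categories)
    raise ValueError(model)

# Worked example from the text: two categories with prevalences 90% / 10%.
l1 = ["A"] * 90 + ["B"] * 10
l2 = ["A"] * 90 + ["B"] * 10
print(expected_agreement(l1, l2, "mean"))     # 0.9*0.9 + 0.1*0.1 = 0.82
print(expected_agreement(l1, l2, "uniform"))  # 0.5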
In some annotation tasks, the expected agreement becomes critically high, and any disagreement on the minor category has huge consequences on the chance-corrected agreement, as hotly debated by Berry (1992) and Goldman (1992), and criticized in CL by Di Eugenio and Glass (2004). However, we follow Krippendorff (2013, page 320), who argues that disagreements on rare categories are more serious than on frequent ones. For instance, let us consider the reliability of medical diagnostics concerning a rare disease that affects one person out of 1,000. There are 5,000 patients, 4,995 being healthy, 5 being affected. If doctors fail to agree on the 5 affected patients, their diagnostics cannot be trusted, even if they agree on the 4,995 healthy ones. These principles have been mainly introduced and used for categorization tasks, because most coefficients address these tasks, but they are more general and may also concern segmentation and, as we will see further, unitizing. The simplest measure of agreement for categorization is the percentage of agreement (see for example Scott 1955, page 323). Because it does not feature chance correction, it should be used carefully for the reasons we have just seen. Consequently, the most popular measures are chance-corrected: S (Bennett, Alpert, and Goldstein 1954) relies on a uniform distribution model of chance, π (Scott 1955) and α (Krippendorff 1980) on the mean distribution, and κ (Cohen 1960) on individual distributions. Generalizations to three or more annotators have been provided, such as κ (Fleiss 1971), also known as K (Siegel and Castellan 1988). Moreover, weighted coefficients such as α and κw (Cohen 1968) are designed to take into account the fact that disagreements between two categories are not necessarily all of the same importance. For instance, for scaled categories from 1 to 10 (as opposed to so-called nominal categories), a mistake between categories 3 and 4 should be less penalized than a mistake between categories 1 and 10. These metrics, widely used in CL, are suitable to assess agreement for pure categorization tasks—for example, in the domains of PART-OF-SPEECH TAGGING, GENE RENAMING, or WORD SENSE ANNOTATION. From Carletta (1996) to Artstein and Poesio (2008), most of these methods have already been discussed and compared from the perspective of CL, and we will not do so here. In the domain of TOPIC SEGMENTATION, several measures have been proposed, especially to evaluate the quality of automatic segmentation systems. In most cases, this evaluation consists of comparing the output of these systems with a reference annotation. We mention them here because their use tends to be extended to inter-annotator agreement because of the lack of dedicated agreement measures, as illustrated by Artstein and Poesio (2008), who mention these metrics in a survey related to inter-annotator agreement, or by Kazantseva and Szpakowicz (2012). In this domain, annotations consist of boundaries (between topic segments), and the penalty must depend on the distance from a true boundary. Thus, dedicated measures have been proposed, such as WindowDiff (WD hereafter; Pevzner and Hearst 2002), based on Pk (Beeferman, Berger, and Lafferty 1997). WD relies on the following principle: A fixed-sized window slides over the text and the numbers of boundaries in the system output and in the reference are compared. Several limitations of this method have been demonstrated and adjustments proposed, for example, by Lamprier et al.
(2007) or by Bestgen (2009), who recommends the use of the Generalized Hamming Distance (GHD hereafter; Bookstein, Kulyukin, and Raita 2002) in order to improve the stability of the measure, especially when the variance of segment size increases. Because these metrics are dedicated to the evaluation of automatic segmentation systems, their most serious weakness for assessing agreement is that they are not chance-corrected, but they present another limitation: They are dedicated to segmentation and assume a full-covering and linear tiling of the continuum and only one category of objects (topic segments). This strong constraint makes them unsuitable for unitizing tasks using several categories (ARGUMENTATIVE ZONING), targeting more sporadic phenomena (ANNOTATION OF COMMUNICATIVE BEHAVIOR), or involving more complex structures (NAMED ENTITY RECOGNITION, HIERARCHICAL TOPIC SEGMENTATION, TOPIC TRANSITION, DISCOURSE FRAMING, ENUMERATIVE STRUCTURES). 3.4.1 Using Measures for Categorization to Measure Agreement on Unitizing. Because of the lack of dedicated measures, some attempts have been made to transform the task of unitizing into a task of categorizing, in order to use well-known coefficients such as κ. They consist of atomizing the continuum by considering each segment as a sequence of atoms, thereby reducing a unitizing problem to a categorization problem (a minimal sketch of this reduction is given after the list below). This is illustrated by Figure 4, where real unitizing annotations are on the left (with two annotators), and the transformed annotations are on the right. To do so, an atom granularity is chosen—for instance, in the case of texts, it may be character, word, sentence, or paragraph atoms. Then, each unit is transformed into a set of items labeled with the category of this unit, and a new “blank” category is added in order to emulate gaps between units. In most cases, this method has severe limitations:
1. Two contiguous units seen as one. In zone (1) of the left part of Figure 4, one annotator has created two units (of the same category), and the other annotator has created only one unit covering the same space. However, once the continua are discretized, the two annotators seem to agree on this zone (with the four same atoms), as we can see in the right part of the figure.
2. False positive/negative disagreement and slight positional disagreement counted as the same. Zone (2) of Figure 4 shows a case where annotators disagree on whether there is a unit or not, which is quite a severe disagreement, whereas zone (3) shows a case of a slight positional disagreement. Surprisingly, these two discrepancies are counted with the same severity, as we can see in the right side of the figure, because in each case there is a difference of category for one item only (respectively, “blank” versus “blue” in case (2), and “blue” versus “blank” in case (3)).
3. Agreement on gaps. Because of the discretization, artificial blank items are created, with the result that annotators may agree on “blanks.” The more gaps in the real annotations, the more artificial “blank” agreement, and hence the greater the artificial increase in global agreement. Indeed, the expected agreement is less impacted by artificial “blanks,” and it may even decrease.
4. Overlapping and embedding units are not possible. This results from the discretizing process, which requires a given position to be assigned a single category (or it would require creating as many artificial categories as there are possible combinations of categories).
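The sketch below is our own illustration of this reduction (not a published implementation), assuming token-level atoms and a hypothetical helper name; it reproduces limitation 1, where the discretization hides a unit-boundary disagreement.

def atomize(units, length, blank="blank"):
    """units: list of (start, end, category) on a continuum of `length` atoms.
    Returns one category label per atom; gaps get the artificial `blank`."""
    labels = [blank] * length
    for start, end, category in units:
        for pos in range(start, end):
            labels[pos] = category  # overlapping units cannot be represented
    return labels

# Two annotators on a 10-atom continuum: annotator 1 splits a zone into two
# units where annotator 2 uses one unit (limitation 1 in the list above).
ann1 = atomize([(0, 2, "blue"), (2, 4, "blue")], 10)
ann2 = atomize([(0, 4, "blue")], 10)
print(ann1 == ann2)  # True: the discretization hides the disagreement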
This kind of reduction is used to evaluate the annotation of COMMUNICATIVE BEHAVIOR in video recordings by Reidsma (2008), where unitizing is, on the contrary, clearly required: The time-line is discretized (atomized), then κ and α are computed using discretized time spans as units. It should be noted that Reidsma, Heylen, and Ordelman (2006, page 1119) and Reidsma (2008) claim that this “fairly standard” method (which we call the discretizing measure henceforth) has certain drawbacks, such as the fact that it “does not compensate for differences in length of segments,” whereas “short segments are as important as long segments” in their corpus (which is an additional limitation to the ones we have just mentioned). They propose a second approach relying on an alignment, as we mention in Section 4.2.1. This reduction is also unacceptable for other annotation tasks. For instance, from the perspective of DISCOURSE FRAMING, two adjacent temporal frames should not be aggregated into a larger one. In the same manner, for TOPIC SEGMENTATION, it clearly makes no sense to aggregate two consecutive segments. 3.4.2 A Measure for Unitizing Without Chance Correction. Another approach, derived from the Slot Error Rate (Makhoul et al. 1999), presented in Galibert et al. (2010), and called SER below, was more specifically used in the context of the evaluation of NAMED ENTITY recognition systems. Comparing a “hypothesis” to a reference, this metric counts the costs of different error types: error “T” on type (i.e., category) with cost 0.5, error “B” on boundaries (i.e., position) with cost 0.5, error “TB” on both type and boundaries with cost 1, error “I” of insertion (i.e., false positive) with cost 1, and error “D” of deletion (false negative) with cost 1. The overall cost relies on an alignment of objects from the reference and the hypothesis, which is chosen to minimize this cost. The final value provided by SER is the average cost of the aligned pairs of units—0 meaning perfect agreement, 1 roughly meaning systematic disagreement. An example is given in Figure 5. This attempt to extend the Slot Error Rate to unitizing suffers from severe limitations. In particular, all positioning and categorizing errors have the same penalty, which may be a serious drawback for annotation tasks where some fuzziness in boundary positions is allowed, such as TOPIC SEGMENTATION, TOPIC TRANSITION, or DISCOURSE FRAMING. Moreover, it is difficult to interpret because its output is, surprisingly, not upper bounded by 1 (in the case where there are many false positives). Additionally, it was initially designed to compare an output to a reference, and so requires some adjustments to cope with more than two annotators. Last but not least, it is not chance-corrected. 3.4.3 Specific Measures for Unitizing. To our knowledge, the family of α measures proposed by Krippendorff is by far the broadest attempt to provide suitable metrics for various annotation tasks, involving both categorization and unitizing. In the survey by Artstein and Poesio (2008, page 581), some hope of finding an answer to unitizing is formulated as follows: “We suspect that the methods proposed by Krippendorff (1995) for measuring agreement on unitizing may be appropriate for the purpose of measuring agreement on discourse segmentation.” Unfortunately, as far as we know, its usage in CL is rare, despite the fact that it is the first coefficient that copes with both unitizing and categorizing at the same time, while taking chance into account.
The family of α measures would then be suitable for annotation tasks related, for example, to COMMUNICATIVE BEHAVIOR or DIALOG ACTS. We will therefore pay special attention to Krippendorff’s work in this article, because it constitutes a very interesting reference to compare with, both in terms of theoretical choices and of results. Let us briefly recap Krippendorff’s studies on unitizing from 1995 to 2013 and introduce some of the α measures, which will be discussed in this article. The α coefficient (Krippendorff 1980, 2004, 2013), dedicated to agreement measures on categorization tasks, generalizes several other broadly used statistics and allows various kinds of categorization values (nominal, ordinal, ratio, etc.). Besides this well-known α measure, which copes with categorizing, a coefficient called αU, which can apply to unitizing, has been proposed in Krippendorff (1995) and then Krippendorff (2004). Recently, Krippendorff (2013, pages 310, 315) proposed a new version of this coefficient, called uα, “with major simplifications and improvements over previous proposals,” which is meant to “assess the reliability of distinctions within a continuum—how well units and gaps coincide and whether units are of the same or of a different kind.” To supplement uα, which mainly focuses on positioning, Krippendorff has proposed c|uα (Krippendorff 2013), which ignores positioning disagreement and focuses mainly on categories. These measures will be discussed in the following sections. For now, it must be noted that uα and c|uα are not currently designed to cope with embedding or free overlapping between the units of the same annotator. These metrics are thus unsuitable for annotation tasks such as, for instance, TOPIC TRANSITION, HIERARCHICAL TOPIC SEGMENTATION, DISCOURSE FRAMING, or ENUMERATIVE STRUCTURES. To conclude the state of the art, we draw up a final overview of the coverage of the requirements by the different measures in Table 2. The γ measure, introduced in the next section, aims at satisfying all these needs. 4 the proposed method: introducing γ : The basic idea of this new coefficient is as follows: All local disagreements (called disorders) between units from different annotators are averaged to compute an overall disorder. However, these local disorders can be computed only if we know, for each unit of a given annotator, which units (if any) from the other annotators it should be compared with (via what is called a unitary alignment)—that is to say, if we can rely on a suitable alignment of the whole (called an alignment). Because it is not possible to get a reliable preconceived alignment (as explained in Section 4.2.1), γ considers all possible ones, and computes for each of them the associated overall disorder. Then, γ retains as the best alignment the one that minimizes the overall disorder, and the latter value is retained as the correct disorder. To obtain the final agreement, as with the familiar kappa and alpha coefficients, this disorder is then chance-corrected by a so-called expected disorder, which is calculated by randomly resampling existing annotations. First of all, we introduce the three main principles of γ in Section 4.2. We introduce in Section 4.3 the basic definitions. The comparison of two units (depending on their relative positions and categories) relies on the concept of dissimilarity (Section 4.4).
A unitary alignment groups at most one unit of each annotator, and a set of unitary alignments covering all units of all annotators is called an alignment (Section 4.5). The disorder associated with a unitary alignment results from the dissimilarities between all its pairs of units, and the disorder associated with an alignment depends on those of its unitary alignments (Section 4.6). The alignment having the minimal disorder (Section 4.7) is used to compute the agreement value, taking chance correction into account (Section 4.8). 4.2.1 Measuring and Aligning at the Same Time: γ is Unified. For a given phenomenon identified by several annotators, it is necessary to provide an agreement measure permissive enough to cope with a double discrepancy, concerning the position of the phenomenon in the continuum and the category attributed to it. Because of discrepancies in positioning, it is necessary to provide the agreement measure with an inter-annotator alignment, which shows which unit of a given annotator corresponds, if any, to which unit of another annotator. If such an alignment is provided, it becomes possible, for each phenomenon identified by annotators, to determine to what extent the annotators agree both on its categorization and on its positioning. This quantification relies on a certain measure (called dissimilarity hereafter) between annotated units: The more the units are considered as similar, the lower the dissimilarity. But how can such an alignment be achieved? For instance, in Figure 6, aligning unit A1 of annotator A with unit B1 of annotator B consists in considering that their properties are similar enough to be associated: annotator A and annotator B have accounted for the same phenomenon, even if in a slightly different manner. Consequently, to operate, the alignment method should rely on a measure of distance (in location, in category assignment, or both) between units. Therefore, agreement measure and aligning are interdependent: It is not possible to correctly measure without aligning, and it is not possible to align units without measuring their distances. In that respect, measuring and aligning cannot constitute two successive stages, but must be considered as a whole process. This interdependence reflects the unity of the objective: establishing to what extent some elements, possibly different, may be considered as similar enough either to quantify their differences (when measuring agreement), or to associate them (when aligning). Interestingly, Reidsma, Heylen, and Ordelman (2006, page 1119), not really satisfied by the use of the discretizing measure as already mentioned, “have developed an extra method of comparison in which [they] try to align the various segments.” This attempt highlights the necessity of relying on an alignment. Unfortunately, the way the alignment is computed, adapted from Kuper et al. (2003), is disconnected from the measure itself, being an ad hoc procedure to which other measures are applied. 4.2.2 Aligning Globally: γ is Holistic. Let us consider two annotators A and B having respectively produced unit A5, and units B4 and B5, as shown in Figure 7. When considering this configuration at a local level, we may consider, based on the overlapping area for instance, that A5 fits B5 slightly better than B4. However, this local consideration may be misleading. Indeed, Figure 8 shows two larger configurations, where A5, B4, and B5 are unchanged from Figure 7.
With a larger view, the choice of the alignment of A5 may be driven by the whole configuration, possibly leading to an alignment with B4 in Figure 8a, and with B5 in Figure 8b: Alignment choices depend on the whole system, and the method should consequently be holistic. 4.2.3 Accounting for Different Severity Rates of Errors: Positional and Categorial Permissiveness of γ. As far as positional discrepancies between annotators are concerned, it is important for a measure to rely on a progressive error count, not on a binary one: Two positions from two annotators may be more or less close to each other but still concern the same phenomenon (partial agreement), or may be too far apart to be considered as related to the same phenomenon (no possible alignment). For instance, for segmentation, specific measures such as GHD or WD rely on a progressive error count for positions, with an upper limit being half the average size of the segments. For unitizing, Krippendorff considers with uα that units can be compared as long as they overlap. However, γ considers that in some cases, units by different annotators may correspond to the same phenomenon even though they do not intersect. We base this claim on two grounds. First, if we observe the configuration given in Figure 9, annotators 2 and 3 have both annotated part of the NAMED ENTITY that has been annotated by annotator 1. Consequently, though they do not overlap, their units refer to the same phenomenon. In addition, we find a direct echo of this assumption in Reidsma (2008, pages 16–17) where, in a video corpus concerning COMMUNICATIVE BEHAVIOR, “different timing (non-overlapping) [of the same episode] was assigned by [...] two annotators.” Regarding categorization, some available measures consider all disagreements between all pairs of categories as equal. Other coefficients, called weighted coefficients (see Artstein and Poesio 2008), as well as γ, consider on the contrary that mismatches may not all have the same weight, some pairs of categories being closer than others. This closeness between categories is often referred to as overlap; in our terminology, we call it category-overlapping, and overlap alone means positional overlap. For example, within annotation efforts related to WORD SENSE or DIALOG ACTS, it is clear that disagreements on labels are not all alike. Given a multi-annotated continuum t:
- let $A = \{a_1, \ldots, a_n\}$ be the set of annotators;
- let $n = |A|$ be the number of annotators;
- let $U$ be the set of units from all annotators;
- $\forall i \in [\![1, n]\!]$, let $x_i$ be the number of units by annotator $a_i$ for $t$;
- let $\bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i$ be the average number of annotations per annotator;
- for annotator $a = a_i$, $\forall j \in [\![1, x_i]\!]$, we denote by $u^a_j$ the unit from $a$ of rank $j$.
Annotation set: An annotation set s is a set of units attached to the same continuum and produced by a given set of annotators. Corpus: A corpus c is defined with respect to a given annotation effort, and is composed of a set of continua, and of the set of annotations related to these continua. Unit: A unit u bears a category denoted cat(u), and a location given by its two boundaries, each of them corresponding to a position in the continuum, respectively denoted start(u) and end(u), start and end being functions from $U$ to $\mathbb{N}^+$. Equality between units is defined as follows:

$\forall (u, v) \in U^2,\; u = v \Leftrightarrow \mathrm{cat}(u) = \mathrm{cat}(v) \,\wedge\, \mathrm{start}(u) = \mathrm{start}(v) \,\wedge\, \mathrm{end}(u) = \mathrm{end}(v)$

We introduce here the first brick to build the notion of disorder, which works at a very local level, between two units.
A dissimilarity tells to what degree two units should be considered as different, taking into account such features as their positions, their categories, or a combination of the two. A dissimilarity is a function $d : U^2 \rightarrow \mathbb{R}^+$ such that:

$\forall (u, v) \in U^2,\; d(u, v) = d(v, u)$ ($d$ is symmetric), and $u = v \Rightarrow d(u, v) = 0$

A dissimilarity is not necessarily a distance in the mathematical sense of the term, in particular because the triangle inequality is not mandatory (for instance, in Figure 10, $d(A1, B2) > d(A1, C1) + d(C1, B2)$). 4.4.1 Empty Unit $u_\emptyset$, Empty Dissimilarity $\Delta_\emptyset$. As we will see, γ relies on an alignment of units by different annotators. In particular, this alignment indicates, for unit $u^{a_1}_i$ of annotator $a_1$, to which unit $u^{a_2}_j$ of annotator $a_2$ it corresponds, in order to compute the associated dissimilarity. In some cases, though, the method will choose not to align $u^{a_1}_i$ with any unit of annotator $a_2$ (none corresponds sufficiently). We define the empty pseudo unit, denoted $u_\emptyset$, which corresponds to the realization of this phenomenon: ultimately, a pseudo unit $u_\emptyset$ is added to the annotations of $a_2$, and $u^{a_1}_i$ is aligned with it. We also define the associated cost $\Delta_\emptyset$:

$\forall u \in U,\; d(u, u_\emptyset) = d(u_\emptyset, u) = \Delta_\emptyset$, and $d(u_\emptyset, u_\emptyset) = \Delta_\emptyset$

Dissimilarities should be calibrated so that $\Delta_\emptyset$ is the value beyond which two compared units are considered critically different. Consequently, it constitutes a reference, and dissimilarities will be expressed in this article as multiples of $\Delta_\emptyset$ for better clarity. It is not a parameter of γ, but a constant (which is set to 1 in our implementation). 4.4.2 Positional Dissimilarity dpos. Different positional dissimilarities may be created, in order to deal with different annotation tasks. In this article, we use the dissimilarity shown in Equation (3), which is very versatile.

$d_{pos\text{-}sporadic}(u, v) = \left( \frac{|\mathrm{start}(u) - \mathrm{start}(v)| + |\mathrm{end}(u) - \mathrm{end}(v)|}{(\mathrm{end}(u) - \mathrm{start}(u)) + (\mathrm{end}(v) - \mathrm{start}(v))} \right)^2 \cdot \Delta_\emptyset$ (3)

Equation (3) sums, in its numerator, the distance between the left boundaries and the distance between the right boundaries of the two units. Its denominator sums the lengths of both units, so that this dissimilarity is not scale-dependent. Squaring the value is an option used here to make the dissimilarity grow faster as the differences of positions increase. It is illustrated in Figure 10 with different configurations and their associated values, from 0 for the perfectly aligned pair of units (A1, B1) to $22.2 \cdot \Delta_\emptyset$ for the worst pair (A1, C2). 4.4.3 Categorial Dissimilarity dcat. Let K be the set of categories. For a given annotation effort, |K| different categories are defined. For more convenience, we first define the categorial distance between categories, distcat, via a square matrix of size |K|, with each category appearing both in the row titles and the column titles. Each cell gives the distance between two categories through a value in [0, 1]. Value 0 means perfect equality, whereas the maximum value 1 means that the categories are considered as totally different. As distcat is symmetric, such a matrix is necessarily symmetric, and bears 0 in each diagonal cell. Table 3 gives an example for three categories, and shows that an association between a unit in category cat1 and one in category cat3 is the worst possible (distance = 1), whereas the distance is half as much between cat1 and cat2 (distance = 0.5). This makes it possible to take into account so-called category-overlapping (in our example, cat1 and cat2 are said to overlap, which means they are not completely different), as weighted coefficients such as κw or α already do.
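Under the definitions above, a minimal sketch of Equations (3) and (4, with f(x) = x) could look as follows; the Unit class, the dictionary encoding of the Table 3 matrix, and the constant name DELTA_EMPTY are our own choices, with Δ∅ set to 1 as in the authors' implementation.

from dataclasses import dataclass

DELTA_EMPTY = 1.0  # the reference cost beyond which units are critically different

@dataclass(frozen=True)
class Unit:
    start: int
    end: int
    cat: str

def d_pos_sporadic(u: Unit, v: Unit) -> float:
    """Equation (3): scale-independent, squared positional dissimilarity."""
    num = abs(u.start - v.start) + abs(u.end - v.end)
    den = (u.end - u.start) + (v.end - v.start)
    return (num / den) ** 2 * DELTA_EMPTY

def d_cat(u: Unit, v: Unit, dist_cat) -> float:
    """Equation (4) with f_cat(x) = x; dist_cat encodes the symmetric matrix
    of Table 3 as a dict over sorted category pairs, with values in [0, 1]."""
    key = tuple(sorted((u.cat, v.cat)))
    return dist_cat.get(key, 0.0 if u.cat == v.cat else 1.0) * DELTA_EMPTY

# Perfectly aligned units have dissimilarity 0, as pair (A1, B1) in Figure 10.
a1, b1 = Unit(0, 10, "cat1"), Unit(0, 10, "cat1")
print(d_pos_sporadic(a1, b1))                              # 0.0
print(d_cat(a1, Unit(0, 10, "cat2"), {("cat1", "cat2"): 0.5}))  # 0.5, as in Table 3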
Note that in the case of so-called “nominal categories,” the matrix will be full of 1s outside the diagonal, and full of 0s on the diagonal (different categories are considered as not matching at all). This categorial distance matrix is then used to build the categorial dissimilarity, taking the $\Delta_\emptyset$ value into account. We define the categorial dissimilarity between two units by:

$d_{cat}(u, v) = f_{cat}(dist_{cat}(\mathrm{cat}(u), \mathrm{cat}(v))) \cdot \Delta_\emptyset$ (4)

The function $f_{cat}$ can be used to adjust the way the dissimilarity grows with respect to the categorial distance values. The standard option (used in this article) is to simply consider $f_{cat}(x) = x$, with which $d_{cat}$ increases gradually from zero when categories match, to $\Delta_\emptyset$ when categories are totally different ($dist_{cat}(\mathrm{cat}(u), \mathrm{cat}(v)) = 1 \Longrightarrow d_{cat}(u, v) = \Delta_\emptyset$). (Another option is, for example, to use $f_{cat}(x) = -\ln(1 - x) \cdot x^{30} + x$, a function almost identical to $f_{cat}(x) = x$ on the $[0, 0.9]$ range, which reaches $\infty$ near 1. Then, when the categorial distance is equal to 1, the categorial dissimilarity is infinite, which guarantees that the units cannot be aligned.) 4.4.4 Combined Dissimilarity dcombi. Because in some annotation tasks units may differ both in position and in category, it is necessary to combine the associated dissimilarities so that all costs are cumulated. This is provided by a combined dissimilarity. Let $d_1$ and $d_2$ be two dissimilarities. We define:

$d^{\alpha,\beta}_{combi(d_1,d_2)}(u, v) = \alpha \cdot d_1(u, v) + \beta \cdot d_2(u, v)$ (5)

It is easy to demonstrate that this linear combination of dissimilarities is itself a dissimilarity (if $(\alpha, \beta) \neq (0, 0)$). It enables the same weight to be assigned to positions and categories using $d^{1,1}_{combi(d_{pos},d_{cat})}$, which is currently used for γ. Then, we can note that a unit incurs the same cost $\Delta_\emptyset$ either for not being aligned with any other unit, or for being aligned with a unit in the same configuration as (A1, C1) of Figure 10 (if they have the same category), or for being aligned with a unit having an incompatible category (if they occupy the same position). Unitary alignment ă. A unitary alignment ă is an i-tuple, with $i \in [\![1, n]\!]$ (n being the number of annotators), containing at most one unit from each annotator: It represents the hypothesis that i annotators agree to some extent on a given phenomenon to be unitized. In order to make all unitary alignments homogeneous, we eventually complete any unitary alignment that is an i-tuple with $n - i$ empty units $u_\emptyset$, so that all unitary alignments are ultimately n-tuples. Figure 11 illustrates unitary alignments with some $u_\emptyset$ units. Alignment ā. For a given annotation set, an alignment ā is defined as a set of unitary alignments such that each unit of each annotator belongs to one and only one of its unitary alignments. Mathematically, it constitutes a partition of the set of units (if we do not take $u_\emptyset$ into account). 4.6.1 Disorder of a Unitary Alignment. The disorder of a unitary alignment ă, denoted $\breve{\delta}(\breve{a})$, is defined for a given dissimilarity d as the average of the one-to-one dissimilarities of its units:

$\breve{\delta}(\breve{a}) = \frac{1}{C^2_n} \cdot \sum_{(u,v) \in \breve{a}^2} d(u, v)$ (6)

Averaging dissimilarities rather than summing them makes the result independent of the number of annotators. 4.6.2 Disorder of an Alignment. The disorder of an alignment ā, denoted $\bar{\delta}(\bar{a})$, is the sum of the disorders of all its unitary alignments divided by the mean number of units per annotator:

$\bar{\delta}(\bar{a}) = \frac{1}{\bar{x}} \cdot \sum_{i=1}^{|\bar{a}|} \breve{\delta}(\breve{a}_i)$ (7)

We chose to consider the average value rather than the sum so that the disorder does not depend on the size of the continuum. Best alignment â.
An alignment ā of the annotation set s is considered as the best (with respect to a dissimilarity) if it minimizes the disorder among all possible alignments of s. It is denoted â. The proposed method is holistic in that it is necessary to take into account the whole set of annotations in order to determine each unitary alignment. Disorder of an annotation set δ(s). The disorder of the annotation set s, denoted δ(s), is defined as the disorder $\bar{\delta}(\hat{a})$ of its best alignment(s). Note that it may happen that several alignments produce the lowest disorder. We have just presented the two crucial definitions of our new method, which make it “unified.” Indeed, the best alignment is chosen with respect to the disorder, therefore with respect to what the agreement measure computes; and, conversely and simultaneously, the resulting agreement value (see below) is given by the best alignment: agreement computation and alignment are fully intertwined, whereas in most agreement metrics, the alignment is fixed a priori or no alignment is used. 4.8.1 The Model of Chance of γ. As we have already mentioned in the state of the art, it is necessary for an inter-annotator agreement measure to provide chance correction. We have also seen that there are several chance correction models, and that this is a controversial question. However, for γ, we follow Krippendorff, who claims that annotators should be interchangeable, because, as stressed by Krippendorff (2011) and Zwick (1988), Cohen’s definition of expected agreement (using individual distributions) numerically rewards annotators for not agreeing on their use of values, that is to say when they have different prevalences of categories, and punishes those that do agree. Therefore, the expected values of γ are computed on the basis of the average distribution of the observed annotations of the several annotators. More precisely, we define the expected (chance) disorder as the average disorder of a randomly multi-annotated continuum where:
- the random annotations fulfill the observed annotation distributions for the following features:
– the distribution of the number of units per annotator;
– the distribution of categories;
– the distribution of unit length per category;
– the distribution of gaps’ length;
– the distribution of overlapping and/or covering between each pair of categories (for instance, units of categories A and B may never intersect, 7% of the units of category A may cover one unit of category C, and so on);
- the number of random annotators is the same as the number of annotators in the observed data.
4.8.2 Two Possible Sources to Build Chance: Local Data versus Corpus Data. In addition, whereas other studies systematically compute the expected value on the data also used to compute the observed value (see Section 3.1), we consider that it should be computed, when possible (that is to say, when several continua have been annotated with the same set of categories and the same instructions), from the distribution observed in all continua of the annotation effort the evaluated continuum comes from: If the distribution changes from one continuum to another, it is more because of the content of each continuum than because of chance. Let us illustrate this by a simple example, where two annotators have to annotate several texts from a sentiment analysis point of view, using three available categories: positive, negative, and neutral. On average, on the whole corpus, we assume that the prevalence is 1/3 for each category.
The expected agreement on the whole corpus is thus 0.33. We also assume that for one particular text, there are only positive and neutral annotations, 50% of each, and no negative one. The expected agreement for this particular text is 0.5, which means that this particular text is considered to facilitate agreement by chance, with the consequence that the final agreement will be more conservative than for the rest of the corpus. Why does the third category, “negative,” not appear in this expected agreement computation? This conception of chance considers that when an annotator begins to annotate this particular text, which she does not already know, the third category no longer exists in her mind, and that this is the case for every other annotator, although they are not supposed to cooperate. It cannot be by chance that all annotators use one category in some texts, and not in another one, but because of the content, and of the interpretation, of a given text. For this reason, from our point of view, it is better to take into account the data observed on a whole annotation effort rather than on each individual continuum. The complete data tell more about the mean behavior of the annotators, whereas the data of a given continuum may depend more on the particularities of its content. As a consequence, γ provides two ways to compute the expected values: one which considers only the data of the continuum being evaluated, as every other coefficient does; and a second one, which considers the data from all continua of the annotation effort the evaluated continuum comes from. When available, we recommend using the second one, for the reasons already expressed. 4.8.3 Using Sampling to Compute the Expected Value. Expected agreement (or disagreement) is the expected value of a random variable. But which random variable? For coefficients like kappa and alpha, observed agreement (or disagreement) is the mean agreement (or disagreement) over all pairs of instances, so the random variable can be as simple as a random pair of instances (however we interpret “random”). This value can be readily computed. For gamma, however, observed disagreement is determined on a whole annotation, so the random variable needs to be a whole random annotation. The expected value of such a complicated variable is much more difficult to determine analytically. Instead, gamma uses sampling, as introduced in Section 5. Now that the disorder and the expected disorder have been introduced, we can define the agreement measure (of annotation set s belonging to corpus c, with c = {s} if s is a sole annotation set) with Equation (8), which is derived from Equation (2):

$\forall s \in c,\; \gamma = 1 - \frac{\delta(s)}{\delta_e(c)}$ (8)

If all annotators perfectly agree (Figure 11a), γ = 1. Figure 11c corresponds to the worst case, where the annotators do worse than annotating at random, with γ < 0. Figure 11b shows an intermediate situation. 5 implementation :In this section, we first propose an efficient solution to compute the disorder of an annotated continuum, which relies on linear programming. Second, we propose two ways to generate random annotated continua (with respect to the observed distributions) to compute the expected disorder, one relying on a single continuum, the other relying on a corpus (i.e., several continua). Third, we determine the number of random data sets that we must generate (and compute the disorder of) to obtain an accurate value of the expected disorder.
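A schematic sketch of Equation (8) follows; here `disorder` and `sample` are placeholders for the procedures of Sections 5.1 and 5.2 (the best-alignment disorder and the random resampling of annotation sets), passed in as callables since the point is only the ratio structure of γ.

def gamma(annotation_set, corpus, disorder, sample, n_samples=30):
    """disorder: callable returning the best-alignment disorder delta(s) of an
    annotation set (Section 5.1); sample: callable drawing one random
    annotation set from the corpus (Section 5.2)."""
    observed = disorder(annotation_set)                 # delta(s)
    expected = sum(disorder(sample(corpus))             # delta_e(c), sampled
                   for _ in range(n_samples)) / n_samples
    return 1.0 - observed / expected                    # Equation (8)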
In order to simplify the discussion and the demonstrations, we consider in this section that the n annotators all made the same number p of annotations. The proposed method has now been fully described on a theoretical level but, being holistic, its software implementation leads to a major problem of complexity. One can demonstrate that there are theoretically $(p!)^{n-1}$ possible alignments. However, we will (1) show how to reduce the initial complexity, and (2) provide an efficient linear programming solution. 5.1.1 Reducing the Initial Complexity. The initial number of possible unitary alignments (which are used to build a possible alignment) is $p^n$. Fortunately, the theorem provided as Equation (9) states that any unitary alignment with a cost beyond the value $n \cdot \Delta_\emptyset$ cannot belong to the best alignment, and so can be discarded. Indeed, any unitary alignment with a cost above $n \cdot \Delta_\emptyset$ can be replaced by creating a separate unitary alignment for each unit (of cost $\Delta_\emptyset$ per unitary alignment, so of total cost $n \cdot \Delta_\emptyset$). Demonstration. Consider the best alignment â, of cardinality m. Let ă be any of its unitary alignments. For convenience, we attribute to it the index 1 (ă = ă1), while the others are indexed from 2 to m. This unitary alignment ă contains n units (either real or $u_\emptyset$). For each of these units $u_i$ ($1 \leq i \leq n$), we create the unitary alignment $\breve{a}_{m+i} = (u_i, u_\emptyset, \ldots, u_\emptyset)$ of cardinality n. It is possible to create an alignment ā made up of the set of unitary alignments of $\hat{a} \setminus \{\breve{a}\}$, to which we add the unitary alignments $\breve{a}_{m+1}$ to $\breve{a}_{m+n}$ that we have just created (ā is indeed an alignment, because each of its units appears in one and only one unitary alignment). It is of cardinality $m + n - 1$. Because â minimizes the disorder, we obtain:

$\bar{\delta}(\hat{a}) \leq \bar{\delta}(\bar{a}) \Rightarrow \frac{1}{\bar{x}} \sum_{i=1}^{m} \breve{\delta}(\breve{a}_i) \leq \frac{1}{\bar{x}} \sum_{i=2}^{m+n} \breve{\delta}(\breve{a}_i) \Rightarrow \sum_{i=1}^{m} \breve{\delta}(\breve{a}_i) \leq \sum_{i=2}^{m+n} \breve{\delta}(\breve{a}_i) \Rightarrow \breve{\delta}(\breve{a}_1) \leq \sum_{i=m+1}^{m+n} \breve{\delta}(\breve{a}_i)$

Since $\forall i > m,\; \breve{\delta}(\breve{a}_i) = \frac{1}{C^2_n} (C^2_n \cdot \Delta_\emptyset) = \Delta_\emptyset$, and since we have denoted ă = ă1:

$\breve{\delta}(\breve{a}) \leq n \cdot \Delta_\emptyset$ (9)

Experiments have shown that this theorem allows us to discard about 90% of the unitary alignments. 5.1.2 Finding the Best Alignment: A Linear Programming Solution. Finding the best alignment consists of minimizing the global disorder. Such a problem may be described as a linear programming problem, so that the solution can be computed by a linear programming solver. For convenience, we introduce two new definitions:
- let UA be the set of all unitary alignments;
- let $UA_u$ be the set of the unitary alignments which contain unit u.
The description of the problem in linear programming terms is threefold. First, for a given alignment ā, for each possible unitary alignment $\breve{a}_i$, we define the Boolean variable $X^{\bar{a}}_{\breve{a}_i}$, which indicates whether this unitary alignment belongs to the alignment:

$\forall \breve{a}_i \in UA,\; X^{\bar{a}}_{\breve{a}_i} = 0$ iff $\breve{a}_i \notin \bar{a}$, and $X^{\bar{a}}_{\breve{a}_i} = 1$ iff $\breve{a}_i \in \bar{a}$

Second, we have to express the fact that, by definition, each unit u (of each annotator) should belong to one and only one unitary alignment of the alignment ā, that is to say that, among all unitary alignments containing u, exactly one $X^{\bar{a}}_{\breve{a}_i}$ equals 1, and all the others equal 0:

$\forall u \in U,\; \sum_{\breve{a}_i \in UA_u} X^{\bar{a}}_{\breve{a}_i} = 1$

Third, the goal is to minimize the global disorder $\bar{\delta}(\bar{a})$ associated with ā, among all possible alignments ā:

Minimize $\bar{\delta}(\bar{a}) = \sum_{\breve{a}_i \in UA} \breve{\delta}(\breve{a}_i) \cdot X^{\bar{a}}_{\breve{a}_i}$

The LPSolve solver (http://lpsolve.sourceforge.net) finds the best solution in less than one second with n = 3 annotators and p = 100 annotations per annotator on a current laptop (once the initial complexity has been reduced thanks to the previous theorem), which is fast enough to be practical.
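As an illustration of this formulation, here is a sketch using the PuLP package and its bundled CBC solver rather than LPSolve (our substitution, not the authors' setup); candidates, their disorders, and the set of real units are assumed to be precomputed, with the over-cost unitary alignments of Section 5.1.1 already discarded.

import pulp

def best_alignment(candidates, disorders, units):
    """candidates: list of unitary alignments (tuples of units); disorders:
    their precomputed disorders; units: all real units to be covered."""
    prob = pulp.LpProblem("best_alignment", pulp.LpMinimize)
    x = [pulp.LpVariable(f"x_{i}", cat="Binary") for i in range(len(candidates))]
    # Objective: minimize the summed disorder of the selected unitary alignments.
    prob += pulp.lpSum(d * xi for d, xi in zip(disorders, x))
    # Each unit must belong to exactly one selected unitary alignment.
    for u in units:
        prob += pulp.lpSum(xi for ua, xi in zip(candidates, x) if u in ua) == 1
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [ua for ua, xi in zip(candidates, x) if xi.value() == 1]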
The next two subsections detail two strategies to generate randomly annotated continua with respect to the definition of the expected disorder of γ, and the third subsection explains how to choose the number of expected disorder samples to generate so that their average is an accurate enough value of the theoretical expected value. The two strategies correspond to the need expressed in Section 4.8.1 to compute the expected value on the largest set of available data: either a single continuum, or, when available, several continua from the same corpus.

5.2.1 A Strategy to Compute the Expected Disorder Using a Single Continuum. When the annotation effort is limited to a single continuum, we can only rely on the annotated continuum itself to compute the expected value. To create random annotations that fulfill the observed distributions, the implemented strategy is as follows: We take the real annotated continuum of an annotator (such as the example shown on the left in Figure 12), choose at random a position on this continuum, split the continuum at this position, and permute the two parts of the split continuum. Three examples of split and permutation are shown in the right part of the figure, for split positions of, respectively, 15, 24, and 38, all coming from the same real continuum, with units that are no longer aligned (except by chance). However, we have to address the fact that some units may intersect with themselves, generating some agreement beyond chance. For instance, in Figure 12, unit 3 intersects with itself between #15 and #24, because the length of the unit, 12, is greater than the difference of shifts, 24 − 15 = 9. To limit this phenomenon, we do not allow the distance between two shifts to be less than the average length of units.

5.2.2 A Strategy to Compute the Expected Disorder Using Several Continua (from the Same Corpus). This strategy consists of mixing annotations coming from different continua, so that their units may align only by chance. To create a random annotation of n annotators, we randomly choose n different continua of the corpus, and pick the annotations of one annotator (randomly chosen) from each of these continua. When the texts are of different lengths, each of them is adjusted to the longest one by duplicating it as many times as necessary (like a mosaic). This is shown in Figure 13 for n = 3 annotators. We assume the corpus contains eight continua, each annotated by three annotators. To generate a random set of three annotations, we have randomly selected a combination of three values between 1 and 8, here (2, 4, 7), to select three different continua among the eight available ones of the corpus. Then, for each of these selected continua, we choose one annotator, here annotator 2 for continuum 2, annotator 3 for continuum 4, and annotator 1 for continuum 7. We combine the associated annotations as shown in the right part of the figure, and obtain a set of random annotations that fulfill (on average) the observed distributions. The (very limited) agreement we can see in this example (only two units have matching categories, but with discrepancies in position) occurs only by chance, because the compared annotations come from different continua. In addition, it is possible to create a great number of random sets of annotations with this strategy: With n annotators and m continua (m ≥ n), it is possible to generate up to $C_m^n \cdot n^n$ different combinations, as in the sketch below.
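The following sketch illustrates this strategy under an assumed data model (a corpus is a list of continua, each a dict mapping an annotator to its list of (start, end, category) units); it is not the authors' implementation, and it omits the mosaic-style length adjustment for brevity.

```python
import random

def random_annotation_set(corpus, n):
    """Draw one chance-level set of n annotations (Section 5.2.2).

    We pick n distinct continua, then one annotator per continuum, so
    that the combined annotations can only agree by chance.
    """
    continua = random.sample(corpus, n)       # C(m, n) choices of continua
    return [random.choice(list(c.values()))   # n choices of annotator each
            for c in continua]
```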
For instance, in our example, which assumes n = 3 and m = 8, there are $56 \times 3^3 = 1512$ combinations to create random annotations.

Because the expected disorder is by definition an average over random annotations, and because there is a virtually infinite number of possible random annotations (with a discrete and finite continuum, it is not really infinite, but still too large to be enumerated), we can only run a reduced but sufficient number of experiments and obtain an approximate value of the expected disorder. This is a sampling problem, as described, for example, in Israel (1992). What statistics provide is a way to determine the minimal number $n_0$ of experiments to run (and to average) so that we get an approximate result of a given precision with a given confidence level. It consists of first taking a small sample to estimate the mean and standard deviation, and then using these estimates to determine the required sample size $n_0$. We follow the strategy provided in Olivero (2001) to compute a disorder value that differs by less than e = 2% from the real value with a (1 − α) = 95% confidence (the software distribution we provide is set by default with these values). First, we consider a sample of chance disorder values of size n = 30. Let µ be the sample mean, and σ′ its standard deviation. µ is directly an unbiased estimator of the population mean, and $\sigma = \sqrt{\frac{n}{n-1}} \cdot \sigma'$ is an unbiased estimator of the real standard deviation. Let $C_v = \frac{\sigma}{\mu}$ be the coefficient of variation (i.e., the relative standard deviation). Let $U_{1-\frac{\alpha}{2}}$ be the abscissa of the normal curve that cuts off an area α at the tails. This value is provided in statistical tables. We get $n_0$ by the following equation:

$$n_0 = \left(\frac{C_v \cdot U_{1-\frac{\alpha}{2}}}{e}\right)^2$$

Let us consider a real example. We generate a sample of random disorders of size n = 30. We compute its mean µ = 3.49 and its standard deviation σ′ = 0.1379, hence σ = 0.1403 and $C_v$ = 0.040188. We get $U_{1-\frac{0.05}{2}} = 1.96$ from the corresponding statistical table, hence we obtain $n_0 = 15.5$. This means that a sample of 16 disorder values gives 2% precision with 95% confidence. The mean we have already computed with 30 values fulfills this condition, and is a good approximation of the real expected disorder. If we wished to obtain a higher precision of 1%, we would need $n_0 = 62$; this is beyond the initial size of our sample (which is 30), and we would have to generate an additional set of 32 values in order to obtain the required number.

6 comparing and benchmarking γ :
As γ is an entirely new agreement measure, it is necessary to analyze how it compares with some well-known and much-studied methods. First, we carry out a thorough comparison between γ and the two dedicated alphas, uα and c|uα, which are the most specific measures in the domain. Second, we benchmark γ by comparing it with the other main measures, thanks to a special tool that is briefly introduced. As already mentioned, Krippendorff's uα and c|uα are clearly the most suitable coefficients for combined unitizing and categorizing. To better understand the pros and cons as well as the behavior of these measures compared with γ, we first explain how they are designed in Section 6.1.1, and then make thorough comparisons with γ in Sections 6.1.2 to 6.1.6, including: (1) how they react to slight categorial disagreements, (2) the interlacement of positional and categorial disagreements, (3) the impact of the size of the units on positional disagreement, (4) split disagreements, and (5) the impact of scale (e.g., if the size of all units is multiplied by 2).
We finish by showing a paradox of uα in Section 6.1.7.

6.1.1 Introducing uα and c|uα. To introduce how these two coefficients work, let us consider the example taken from Krippendorff (2013), shown in Figure 14. The length of the continuum is 76, there are two annotators, and there are four possible categories, numbered 1 to 4. The uα coefficient basically relies on the comparison of all pairs of sections among annotators, a section being either a categorized unit or a gap. To get the observed disagreement value $^uD_o$, the squared lengths of the unmatching intersections are summed, and this sum is then divided by the product of the length of the continuum and m(m − 1), m being the number of annotators. In the example, mismatches occur around the second and third units of the two annotators. From left to right, there are the following intersections: cat 1 with gap (l = 10), cat 1 with cat 3 (l = 5), gap with cat 3 (l = 8), cat 2 with cat 1 (l = 5), and cat 2 with gap (l = 5). This leads twice (by symmetry) to the sum $10^2 + 5^2 + 8^2 + 5^2 + 5^2$, and so the observed disagreement is:

$$^uD_o = \frac{2\,(10^2 + 5^2 + 8^2 + 5^2 + 5^2)}{76 \cdot 2\,(2-1)} = 3.145$$

The expected value $^uD_e$ is obtained by considering all the possible positional combinations of each pair, and not only the observed ones. This means that for a given pair, one of the two units is virtually slid in front of the other in all possible ways, and the corresponding values are averaged. In this example, $^uD_e = 5.286$. Therefore, $^u\alpha = 1 - \frac{3.145}{5.286} = 0.405$.

Coefficient c|uα relies on a coincidence matrix between categories, filled with the sums of the lengths of all intersections of units for each given pair of categories. For instance, in the example, the observed coincidence between category 1 and category 3 is 5, and so on. A metric matrix is chosen for these categories, for instance, an interval metric (for numerical categories), which says that the distance between category i and category j is $(i - j)^2$. Hence, the cost for a unitary intersection between categories 1 and 2 is $(1-2)^2 = 1$, but is $2^2 = 4$ between categories 1 and 3, and so on. Then, the observed disagreement is computed according to these two matrices. Finally, an expected matrix is filled (in a way which cannot be detailed here due to space constraints), and the expected value is computed the same way. In the example, $^{c|u}\alpha = 1 - \frac{0.833}{3.145} = 0.744$. Hence, Krippendorff's alphas provide two clues to analyze the agreement between annotators. In the example, uα = 0.405 indicates that the unitizing is not so good, but also that the categorizing is much better, with c|uα = 0.744 (even though, of course, these two values are not independent, since unitizing and categorizing coexist here by nature). Now that these coefficients have been introduced in detail, let us analyze to what extent they differ from γ.

6.1.2 Slight Categorial Disagreements: Alphas Versus γ. When annotators have slight categorial disagreements (with overlapping categories), c|uα is slightly lowered. However, uα does not take categorial overlapping into account, but has a binary response to such disagreements, and is lowered as much as if they were severe categorial disagreements. A consequence of this approach is illustrated in Figure 15, where two annotators perfectly agree both on positions and categories in the experiment on the left, and still perfectly agree on position but slightly diverge concerning categories in the experiment on the right (1/2, 6/7, and 8/9 are assumed to be close categories).
However, uα drops from 1 in the left experiment to −0.34 (a negative value meaning worse than random) in the right experiment, despite the fact that, in the latter, the positions are all correct and the categories are quite good, since c|uα = 0.85. On such data, γ considers that there is no positional disagreement, and c|uα and γ both consider that there are slight categorial disagreements.

[Figure 15: Consequences of no categorial disagreement (left) compared with slight categorial disagreements (right). Observer 1 and Observer 2 annotate the same positions, with category pairs 1/1, 9/9, 7/7 on the left and 1/2, 9/8, 7/6 on the right.]

6.1.3 Positional Disagreements Impacting Categorial Agreement: c|uα. Two different conceptions of how to account for categorial disagreement have, respectively, led to c|uα and γ: c|uα relies on intersections between the units of different annotators, which is basically equivalent to an observation at the atom level, whereas γ relies on alignments between units (any unit being finally attached and compared to, at most, one other unit), based on both positional and categorial observation. Hence, in a configuration such as the one given in Figure 16, where two annotators annotated three units with the same categories 1, 4, and 2, but not exactly at the same locations, c|uα registers a certain amount of categorial disagreement, whereas γ does not. According to the principles of c|uα, any part of the continuum (even at the atom level) with an intersection between different categories means some confusion between them, whereas γ considers here that the annotators fully agree on categories (they both observed a "1," then a "4," then a "2," with no confusion), and disagree only on where phenomena exactly start and finish. The crucial difference between the two methods is probably whether we consider units to be non-atomizable (and therefore consider alignments, as γ does), or atomizable (in which case two different parts of a given unit may be simultaneously and respectively compared to two different units from another annotator).

6.1.4 Disagreements on Long Versus Short Spans of Text. Here again, the way disagreements are accounted for may differ markedly between uα and γ: When a unit does not match any other, uα takes into account the length of the corresponding span of text to assess a disagreement. As shown in Figure 17, an orphan unit of size 10 will cost 100 times as much as an orphan unit of size 1, whereas for γ, they will have the same cost. In the whole example in Figure 17, when computing the observed disagreements, uα says the first case is 50 times worse than the second, whereas γ says, on the contrary, that the second case is twice as bad as the first. Here, γ fulfills the need (already mentioned) expressed by Reidsma, Heylen, and Ordelman (2006, page 3) to consider that "short segments are as important as long segments." The same phenomenon holds for categories between c|uα and γ, the size of the units having consequences only for c|uα.

6.1.5 Split Disagreements. Sometimes, an annotator may divide a given span of text into several contiguous units of the same type, whereas another may annotate the same span with one whole unit. In these cases, c|uα computes the same observed disagreement in both configurations, and uα assigns decreasing disagreement as splitting increases, as shown in Figure 18, whereas γ assigns increasing disagreement. Moreover, in Figure 19, the observed uα is not responsive to splits at all, whereas γ is still responsive.
6.1.6 Scale Effects. The way uα computes dissimilarities is directly proportional to squared lengths, as shown in Figure 20. On the other hand, γ may use any positional dissimilarity, and usually uses ones that are not scale-dependent for CL applications, such as $d_{pos\text{-}sporadic}$ (Equation (3)). For instance, if a text is annotated with two categories, one at word level, the other at paragraph level, we may prefer to account for relative disagreements, so that a missing word will be more heavily penalized in the first case than in the second. In Figure 20, the observed disagreement of uα is $3^2 = 9$ times greater for B units than for A units, but would be the same for γ with $d_{pos\text{-}sporadic}$, since:

$$\left(\frac{0 + 3}{\frac{7+10}{2}}\right)^2 = \left(\frac{0 + 9}{\frac{21+30}{2}}\right)^2$$

6.1.7 A Paradox of uα. In Figure 21a, the annotators disagree on categorization, and have a moderate agreement on unitizing. This configuration leads to uα = 0.107. In Figure 21b, the configuration is quite similar, but now the annotators fully agree on unitizing: Each of them puts units in the same positions. Paradoxically, uα drops to −0.287, which is less than in the first configuration. In brief, the reason for this behavior is that in the first case, the computed disagreement regarding a given pair of units is virtually distributed into shorter parts of the whole (an intersection of length 80 between them, and an intersection of length 20 with a gap for each of them, which leads to $80^2 + 2 \times 20^2 = 7{,}200$), whereas the disagreement is maximal in the second case (an intersection of length 100 with a unit of another category, which leads to $100^2 = 10{,}000$). By contrast, with similar data, γ provides a better agreement in the second case than in the first one. By design, it considers that there is the same categorial agreement in both cases, but better positional agreement in the second case, which seems to better correspond to the CL tasks we have considered.

6.1.8 Overlapping Units (Embedding or Free Overlap). Both alpha coefficients are currently designed to cope only with non-overlapping units (the term overlapping also stands here for embedding), which is a limitation for several fields in CL. It is debatable whether they could be generalized to handle overlapping units. It seems that this would involve a major change in the strategy, which currently necessitates comparing the intersections of all pairs of units. In the example shown in Figure 22, even though the annotators fully agree on their two units, the alphas will inherently compare A1 with B2 and A2 with B1 (in addition to the normal comparisons of A1 with B1 and A2 with B2), and will count the resulting intersections as disagreements. It is necessary here to choose once and for all which unit to compare to which other, rather than to perform all the comparisons. But making such a choice precisely consists in making an alignment, which is a fundamental feature of γ. Consequently, it seems that the alphas would need a structural modification to cope with overlapping.

As explained by Reidsma, Heylen, and Ordelman (2006), because of the lack of specialized coefficients coping with unitizing, a fairly standard practice is to use categorization coefficients on a discretized (i.e., atomized) version of the continuum: For instance, each character (or each word, or each paragraph) of a text is considered as an item, and a standard categorization coefficient such as κ is used to compute agreement. Such a measure is called κd (for discretized κ) hereafter. Several weaknesses of this approach have already been mentioned in the state-of-the-art section.
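As a rough illustration of this practice (a sketch, not the authors' code), the continuum can be atomized and Cohen's κ applied to the atom labels; note that this simple data model cannot even represent overlapping units.

```python
from collections import Counter

def discretize(units, length, gap="GAP"):
    """Label every atom of the continuum with the category of the unit
    covering it, or a dedicated gap label (overlaps are not handled:
    later units simply overwrite earlier ones)."""
    atoms = [gap] * length
    for start, end, cat in units:
        for i in range(start, end):
            atoms[i] = cat
    return atoms

def kappa_d(units_a, units_b, length):
    """Cohen's kappa on the discretized continuum (two annotators)."""
    a, b = discretize(units_a, length), discretize(units_b, length)
    p_o = sum(x == y for x, y in zip(a, b)) / length
    ca, cb = Counter(a), Counter(b)
    p_e = sum(ca[c] * cb[c] for c in ca) / length ** 2
    return (p_o - p_e) / (1 - p_e)
```

Two annotators who coincide only on large gaps still score positively here, which is precisely the "agreement on blanks" weakness mentioned earlier.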
It is interesting to compare such a measure to the specialized one, c|uα: Even if they both rest on the aggregatability hypothesis, they show significant differences (as confirmed by the experiments presented in the next section). The main one is that c|uα does not use an artificial atomization of the continuum, and only compares units with units. In doing so, it is not prone to agreement on blanks, in contrast to κd. Another difference is that, for the same reason, c|uα is not inherently limited to non-overlapping units: Even if it is not currently designed to cope with them, as we have already seen, it is possible to submit overlapping units to this measure (some results are shown in the next section).

In this section on benchmarking, we use the Corpus Shuffling Tool (CST) introduced by Mathet et al. (2012) to compare γ concretely and accurately to the other measures. We first introduce the error types that it can simulate: category (category mistakes may occur), position (the boundaries may be shifted), false positives (the annotators add units to the reference units), false negatives (the annotators miss some of the reference units), and splits (the annotators put two or more contiguous units, occupying the same span of text, instead of a reference unit). This tool is used to simulate varying degrees of disagreement among different error types, and the metrics are compared with each other according to how they react to these disagreements. For a given error type, for each magnitude between 0 and 1 (with a step of 0.05), the tool creates 40 artificial, multi-annotator shuffled annotation sets, and computes the different measures for them. Hence, we obtain a full graph showing the behavior of each measure for this error type, with the magnitude on the x-axis, and the average agreement (over the 40 annotation sets) on the y-axis. This provides a sort of "X-ray" of the capabilities of the measures with respect to this error type, which should be evaluated against the following desiderata:
- A measure should provide a full response to the whole range of magnitudes, which means in particular that the curve should ideally start from 1 (at m = 0) and reach 0 (at m = 1), but never go below 0 (indeed, negative agreement values require a part of systematic disagreement, which is not simulated by the current version of the CST).
- The response should be strictly decreasing: A flat part would mean the measure does not differentiate between different magnitudes, and, even worse, an increasing part would mean that the measure is counter-effective at some magnitudes, where a worse error is penalized less severely.

We emphasize the fact that the whole graph is important, up to magnitude 1. Indeed, in most real annotated corpora, even when the overall agreement is high, errors corresponding to all magnitudes may occur. For instance, an agreement of 0.8 does not necessarily correspond to the fact that all annotations are affected by slight errors (which correspond to magnitudes close to 0), but may, for instance, correspond to the fact that a few units are affected by severe errors (which may correspond to magnitudes close or equal to 1). It is important to note that this tool was designed by the authors of γ, for tasks where units cannot be considered as atomizable. In particular, it was conceived so that disagreements concerning small units are as important as those concerning large ones.
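Schematically, the benchmarking protocol just described can be summarized as follows, where `shuffle` and the entries of `measures` stand for the CST's shuffling procedure and the compared coefficients (both assumed, for the sake of the sketch, to be available as Python callables):

```python
# A schematic driver for the protocol above, not the CST's actual code:
# for each magnitude, generate 40 shuffled multi-annotator sets and
# average each agreement measure over them.
def benchmark(reference, error_type, shuffle, measures, n_sets=40):
    curves = {name: [] for name in measures}
    magnitudes = [i * 0.05 for i in range(21)]  # 0.0, 0.05, ..., 1.0
    for m in magnitudes:
        sets = [shuffle(reference, error_type, m) for _ in range(n_sets)]
        for name, measure in measures.items():
            curves[name].append(sum(measure(s) for s in sets) / n_sets)
    return magnitudes, curves
```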
However, it is provided as open-source (see the Conclusion section) so that anyone can test and modify it, and propose new experiments to test γ and other measures in the future.

6.3.1 Introducing the CST. The main principle of this tool is as follows. A reference corpus is built with respect to a statistical model, which defines the number of categories, their prevalence, the minimum and maximum length for each category, and so forth. Then, this reference is used by the shuffling tool to generate a multi-annotator corpus, simulating the fact that each annotator makes mistakes of a certain type and of a certain magnitude. It is important to note that the generated corpus does not include the reference it is built from. The magnitude m is the strength of the shuffling, that is to say, the severity of the mistakes annotators make compared to the reference. It can be set from 0, which means no damage is applied (and the annotators are perfect), to the extreme value 1, which means annotators are assumed to behave in the worst possible way (but still being independent of each other), namely, at random. Figure 23 illustrates the way such a corpus is built: From the reference containing some categorized units, three new sets of annotations are built, simulating three annotators who are assumed to have the same annotating skill level, which is set in this example at magnitude 0.1. The applied error type is position only, that is to say, each annotator makes mistakes only when positioning boundaries, but does not make any other mistake (the units are reproduced in the same order, with the correct category, and in the same number). At this low magnitude, the positions are still close to those of the reference, but often vary a little. Hence, we obtain here a slightly shuffled multi-annotator corpus. Let us sum up the way error types are currently designed in the CST.

Position. At magnitude m, for a given unit, we define a value shiftmax that is proportional to m and to the length of the unit, and each boundary of the unit is shifted by a value randomly chosen between −shiftmax and shiftmax (note: at magnitude 0, because shiftmax = 0, units are not shifted).

Category. This shuffling cannot be described in a few words (see Mathet et al. [2012] for details). It uses special matrices to simulate, using conditional probabilities, progressive confusion between categories, and can be configured to take into account the overlapping of categories. The higher the magnitude, the more frequent and severe the confusion.

False negatives. At magnitude m, each unit has the probability m of being forgotten. For instance, at magnitude m = 0.5, each annotator misses (on average) half of the units from the reference (but not necessarily the same units as the other annotators).

False positives. At magnitude m, each annotator adds a certain number of units (proportional to m) to those of the reference.

Splits. At magnitude m, each annotator splits a certain number of units (proportional to m). A split unit may be re-split, and so on.

6.3.2 Pure Segmentation: γ, WD, GHD. Even if γ was created to cope with error types that are poorly or not at all dealt with by other methods, and, moreover, to cope with all of them simultaneously (unitizing with categorization, and possibly overlapping units), it is illuminating to observe how it behaves on more specific error types, to which specialized and well-known methods are dedicated. We start with pure segmentation. Figure 24 shows the behavior of WD, GHD, and γ for two error types.
For false negatives, WD and GHD are quite close, with an almost linear response up to magnitude 0.6. Their drawback is that their responses are limited by an asymptote, because of the absence of chance correction, whereas γ shows a full range of agreements. For shifts, WD and GHD show an asymptote at about agreement = 0.4, while γ shows values from 1 to 0. This experiment confirms the advantage of using γ instead of these distances for inter-annotator agreement measurement.

6.3.3 Pure Categorization. In this experiment, the CST is set to three annotators and four categories with given prevalences. The units are all of the same size, positioned at fixed, predefined positions, so that the focus is on categorizing only. It should be noted that, with such a configuration, α and κ behave exactly in the same way as c|uα. It is particularly striking in Figure 25 that γ behaves in almost the same way as c|uα. In fact, the observed values of these measures are exactly the same, the only difference coming from a slight difference in the expected values, due to sampling. Other tests carried out with the pure categorizing coefficient κ yielded the same results on this particular error type, which means that γ performs as well as recognized measures as far as categorizing is concerned, with two or more annotators. The uα curve goes below zero at magnitude 0.5 (probably for the reasons seen in Section 6.1.7). Moreover, its behavior depends on the size of the gaps: Indeed, with other settings of the shuffling, the curve may, on the contrary, be stuck above zero. κd fails to reach 0 because of the virtual agreement on gaps (but it would reach 0 if there were no gaps). Lastly, SER (averaging the results of each pair of annotators) is bounded below by 0.6, which results from not taking chance into account.

6.3.4 Almost General Case: Unitizing + Categorizing. This section concerns the more general uses of γ, combining both unitizing and categorizing. However, in order to be compliant with uα, c|uα, and κd, we limit the configurations here so that the units do not overlap at all. In particular, the reference was built with no overlapping units, and we have used a modified version of the shifting shuffling procedure so that the non-overlapping constraint is fully satisfied, even at high magnitudes.

Positional errors (Figure 26a). An important point is that this shuffling error type, which is based only on moving positions, has a paradoxical consequence on category agreement, since units of different categories align when sufficient shifting is applied. Consequently, c|uα is not stuck at 1, even though it is designed to focus on categories. Additionally, it starts to decrease from the very first shifts, as soon as units from different annotators start overlapping. This is a concrete consequence of what has been formally studied in Section 6.1.3. γ has the most progressive response, reaches 0.1 at magnitude 1, and is the only measure to be strictly decreasing. SER immediately drops to agreement 0.5 at magnitude 0.05. As it relies on a binary positional distance, it fails to distinguish between small and large errors. This is a serious drawback of such a measure for most CL tasks. It then goes below zero and is not strictly decreasing. uα is mostly strictly decreasing, but has some increasing parts and, even more problematic, negative values from 0.6 to 0.9, probably because of the reason explained in Section 6.1.7.
κd is too responsive at the very first magnitudes, and is not strictly decreasing, probably because it "does not compensate for differences in length of segments" (Reidsma, Heylen, and Ordelman 2006, page 3).

Positional and categorial errors (Figure 26b). γ is strictly decreasing and reaches 0. The alphas are not strictly decreasing, and once again uα drops below 0 from magnitude 0.6 onwards. κd is not strictly decreasing (again, probably because it "does not compensate for differences in length of segments"), but its general shape is not that far from γ's.

Split errors (Figure 27). The split error type would need to create an infinite number of splits to mean pure chaos at magnitude 1. As this is computationally not possible, we restricted the number of splits to five times the number of units of the reference. We should therefore not expect measures to reach 0. In this context, γ shows a good range of responses, from 1 to 0.2, in an almost linear curve. SER is also quite linear, but gives very confusing values for this error type because it reaches negative values above magnitude 0.6. Finally, uα, c|uα, and κd are not responsive at all to this error type, as expected, and remain blocked at 1 (which is normal for c|uα, which focuses on categorizing).

False positives and false negatives (Figure 28). In the current version of the CST, the false positive error type creates some overlapping (new units may overlap), which is why uα and κd were discarded from this experiment. However, we have kept c|uα because it behaves quite well despite overlapping units. All the measures have an overall good response to the false positives error type, as shown in Figure 28a, even if the shape of c|uα is delayed compared with the others; but it should be pointed out that SER has a curious and unfortunate final increasing section (not visible in the figure because this section is below 0). On the other hand, bigger differences appear with false negatives (Figure 28b). γ is still strictly decreasing and almost reaches 0 (0.025), but uα is not strictly decreasing and is at 0 or below from m = 0.3; SER quickly drops below 0 from m = 0.4; κd is not strictly decreasing; and c|uα, as for splits, does not react at all but remains stuck at 1, which is desired for this coefficient focused on categories (values of c|uα over m = 0.7 are missing, since there are not enough intersections between units for this measure to work).

Overview of each measure for the almost general case. In order to summarize the behavior of each measure in response to the different error types for the almost general case (without overlap), we pick all the curves relative to a given measure out of the previous plots and draw them in the same graph, as shown in Figure 29. Briefly, γ shows a steady behavior for all error types, almost strictly decreasing from 1 to 0. uα has some increasing parts and negative values, and is sometimes not responsive. c|uα is very responsive for some error types, less responsive for some others, and sometimes not responsive at all (which is desired, as already said). SER has unreliable responses, being either too responsive (reaching negative values) or not responsive enough. Finally, κd is not always responsive and is most of the time not strictly decreasing, but is sometimes quite progressive.

6.3.5 Fully General Case: Unitizing + Categorizing. This last section considers the fully general case, where overlapping of units within an annotator is allowed.
In this experiment, we took a reference corpus with no overlap, but the applied errors (a combination of positioning and false positives) progressively lead to overlapping units. The results are shown in Figure 30. As expected, γ behaves much the same as it does with non-overlapping configurations. Admittedly, c|uα was not designed to handle these configurations (and so should not be included in this experiment), but, surprisingly, it seems to perform in rather the same way as it does with no overlapping; this must be investigated further, but judging from this preliminary observation, it seems this coefficient could still be operational and useful in such cases. On the contrary, uα does not handle this experiment correctly and so was not included in the graph.

7 conclusion :
The present work addresses an aspect of inter-annotator agreement that is rarely studied in other approaches: the combination of unitizing and categorizing. Nevertheless, the use of methods transposed from other domains (such as κ, which was originally dedicated to pure categorizing) in CL, for example at the discourse level, leads to severe biases, and manifests the need for specialized coefficients, fair and meaningful, suitable for annotation tasks focusing on complex objects. In the end, only Krippendorff's coefficients uα and c|uα come close to the needs we expressed in the introduction, with the restriction that they are natively limited to non-overlapping units. The main reason why research on this topic is sparse, and why it may be difficult to extend Krippendorff's coefficients to overlapping units, probably lies in the fact that we are facing a major difficulty here: the simultaneous double discrepancy between annotators, with annotations possibly differing both in the positioning of relevant units anywhere on a continuum, and in the categorizing of each of these free units. Consequently, it is difficult for a method to choose precisely which features to compare between different annotators (unlike pure categorizing, where we know exactly what each annotator says for each predefined element to be categorized), and this problem is exacerbated when overlapping units (within an annotator) occur. To cope with this critical point, we advocate the use of an alignment that ultimately expresses which unit from one annotator should be compared to which unit, if any, from another one, and consequently makes it natural and easier to compute the agreement. Moreover, we have shown that this alignment cannot be done in an independent way, but is part of the measure method itself. This is the "unified" aspect of our approach. We have also shown that in order to be relevant, this alignment cannot be done at a local level (unit by unit), but should consider the whole set of annotations at the same time, which is the "holistic" aspect. This is how the new method γ presented here was designed. Moreover, this method is highly configurable to cope with different annotation tasks (in particular, boundary errors should not necessarily be considered the same for all kinds of annotations), and it provides the alignment that emerges from an agreement measurement.
Not only is this alignment a result in itself, which can be used to build a gold standard very quickly from a multi-annotated corpus (by listing all unitary alignments, and for each of them showing the corresponding frontiers and category proposed by each annotator), but it also behaves as a kind of "flight recorder" of the measure: Observing these alignments gives crucial information on the choices the measure makes and on whether it needs to be adjusted, unlike other methods, which only provide a sole "out of the box" value. Finally, we have compared γ to several other popular coefficients, even in their specific domains (pure categorization, pure segmentation), through a specific benchmark tool (namely, the CST), which scans the responsivity of the measures to different kinds of errors and at all degrees of severity. Overall, γ provides broader and more progressive responsivity than the others in the experiments shown here. Concerning pure categorizing, γ does not have an edge over the well-known coefficients, such as α, but it is interesting to see that it behaves in much the same way as the others in this specific field. Concerning segmentation, γ outperforms WD and GHD, by taking chance into account, but also by not depending on the heterogeneity of the segment sizes. Concerning unitizing with categorizing, as theoretically expected and confirmed by the benchmarking, SER shows severe limitations, such as a binary response to various (small or severe) positional or categorial errors, the absence of chance correction, and its limitation to two annotators only. Krippendorff's coefficients uα and c|uα present very interesting properties, such as chance correction. However, as we have shown with thorough comparisons, they rely on hypotheses quite different from ours, since they consider intersections between units whereas we advocate considering aligned units. We have identified several situations in CL where considering alignments is advantageous, for instance, when contiguous segments of the same type may occur, or when errors on several short units should be considered as more serious than one error on a long unit, but we do not posit these situations as a universal rule. In conclusion, when unitizing and categorizing involve internal overlapping of units, only γ is currently available, and, even if it cannot be compared to any other method at the moment for this reason, benchmarking reveals very similar responses in overlapping configurations and in non-overlapping ones, which already demonstrates its consistency and its relevance. We can summarize the features of γ as follows: It takes into account all varieties of unitizing, combines unitizing and categorizing simultaneously, allows any number of annotators, provides chance correction, processes an alignment while it measures agreement, and provides progressive responsivity to errors both for unitizing and for categorizing. This makes γ suitable for annotation tasks such as those relative to NAMED ENTITY, DISCOURSE FRAMING, TOPIC TRANSITION, or ENUMERATIVE STRUCTURES. The full implementation of γ is provided as Java open-source packages on the http://gamma.greyc.fr Web site. It is already compatible with annotations created with the Glozz Annotation Platform (Widlöcher and Mathet 2012), and with annotations generated by the Corpus Shuffling Tool.

abstract :
Agreement measures have been widely used in computational linguistics for more than 15 years to check the reliability of annotation processes.
Although considerable effort has been made concerning categorization, fewer studies address unitizing, and when both paradigms are combined even fewer methods are available and discussed. The aim of this article is threefold. First, we advocate that to deal with unitizing, alignment and agreement measures should be considered as a unified process, because a relevant measure should rely on an alignment of the units from different annotators, and this alignment should be computed according to the principles of the measure. Second, we propose the new versatile measure γ, which fulfills this requirement and copes with both paradigms, and we introduce its implementation. Third, we show that this new method performs as well as, or even better than, other more specialized methods devoted to categorization or segmentation, while combining the two paradigms at the same time.

Yann Mathet, Antoine Widlöcher, and Jean-Philippe Métivier

references :
Tanguy, Ludovic, Marianne Vergez-Couret, and Laure Vieu. 2012. An empirical resource for discovering cognitive principles of discourse organisation: The ANNODIS corpus. In Proceedings of the Eighth International Conference on Language Resources and Evaluation.
Artstein, Ron and Massimo Poesio. 2008. Inter-coder agreement for computational linguistics. Computational Linguistics, 34(4):555–596.
Beeferman, Douglas, Adam Berger, and John Lafferty. 1997. Text segmentation using exponential models. In Proceedings of the 2nd Conference on Empirical Methods in Natural Language Processing, pages 35–46.
Bennett, E. M., R. Alpert, and A. C. Goldstein. 1954. Communications through limited questioning. Public Opinion Quarterly, 18(3):303–308.
Berry, Charles C. 1992. The kappa statistic [letter]. Journal of the American Medical Association, 268(18):2513–2514.
Bestgen, Yves. 2006. Improving text segmentation using latent semantic analysis: A reanalysis of Choi, Wiemer-Hastings, and Moore (2001). Computational Linguistics, 32(1):5–12.
Bestgen, Yves. 2009. Quel indice pour mesurer l'efficacité en segmentation de textes? In Actes de TALN 2009.
Bookstein, A., V. A. Kulyukin, and T. Raita. 2002. Generalized Hamming Distance. Information Retrieval, 5:353–375.
Carletta, Jean. 1996. Assessing agreement on classification tasks: The kappa statistic. Computational Linguistics, 22(2):249–254.
Carletta, Jean. 2007. Unleashing the killer corpus: Experiences in creating the multi-everything AMI meeting corpus. Language Resources and Evaluation, 41(2):181–190.
Charolles, Michel, Anne Le Draoulec, Marie-Paule Pery-Woodley, and Laure Sarda. 2005. Temporal and spatial dimensions of discourse organisation. Journal of French Language Studies, 15.
Cohen, J. 1960. A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1):37–46.
Cohen, J. 1968. Weighted kappa: Nominal scale agreement with provision for scaled disagreement or partial credit. Psychological Bulletin, 70(4):213–220.
Di Eugenio, Barbara and Michael Glass. 2004. The kappa statistic: A second look. Computational Linguistics, 30(1):95–101.
Eisenstein, Jacob. 2009. Hierarchical text segmentation from multi-scale lexical cohesion. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics.
Fleiss, Joseph. 1971. Measuring nominal scale agreement among many raters. Psychological Bulletin, 76(5):378–382.
Fort, Karën, Claire François, Olivier Galibert, and Maha Ghribi. 2012. Analyzing the impact of prevalence on the evaluation of a manual annotation campaign. In Proceedings of the Eighth International Conference on Language Resources and Evaluation.
Laurent. 2010. Named and specific entity detection in varied data: The Quaero named entity baseline evaluation. In Proceedings of the Seventh International Conference on Language Resources and Evaluation.
Krippendorff, Klaus. 1995. On the reliability of unitizing continuous data. Sociological Methodology, 25:47–76.
Labadié, Alexandre, Patrice Enjalbert, Yann Mathet, and Antoine Widlöcher. 2010. Discourse structure annotation: Creating reference corpora. In Workshop on Language Resource and Language Technology Standards.
Lamprier, S., T. Amghar, B. Levrat, and F. Saubion. 2007. On evaluation methodologies for text segmentation algorithms. In Proceedings of ICTAI 2007, pages 19–26, Patras.
Makhoul, John, Francis Kubala, Richard Schwartz, and Ralph Weischedel. 1999. Performance measures for information extraction. In Proceedings of the DARPA Broadcast News Workshop, pages 249–252.
Mathet, Yann and Antoine Widlöcher. 2011. Une approche holiste et unifiée de l'alignement et de la mesure d'accord inter-annotateurs. In Traitement Automatique des Langues Naturelles (TALN 2011).
Nadeau, David and Satoshi Sekine. 2007. A survey of named entity recognition and classification. Linguisticae Investigationes, 30(1):3–26.
Olivero, Patrick. 2001. Calcul de la taille des échantillons. Technical report, CETE du Sud-Ouest / DAT / ZELT.
Passonneau, Rebecca J., Vikas Bhardwaj, Ansaf Salleb-Aouissi, and Nancy Ide. 2012. Multiplicity and word sense: Evaluating and learning from multiply labeled word sense annotations. Language Resources and Evaluation.
Pevzner, L. and M. Hearst. 2002. A critique and improvement of an evaluation metric for text segmentation. Computational Linguistics, 28(1):19–36.
Reidsma, D. 2008. Annotations and Subjective Machines of Annotators, Embodied Agents, Users, and Other Humans. Ph.D. thesis, University of Twente.
Reidsma, D., D. K. J. Heylen, and R. J. F. Ordelman. 2006. Annotating emotions in meetings. In Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC 2006).
Reidsma, Denis and Jean Carletta. 2008. Reliability measurement without limits. Computational Linguistics, 34(3):319–326.
Scott, William. 1955. Reliability of content analysis: The case of nominal scale coding. Public Opinion Quarterly, 19(3):321–325.
Siegel, Sidney and N. John Castellan. 1988. Nonparametric Statistics for the Behavioral Sciences. McGraw-Hill, New York, 2nd edition.
Teufel, Simone. 1999. Argumentative Zoning: Information Extraction from Scientific Articles. Ph.D. thesis, University of Edinburgh.
Teufel, Simone, Jean Carletta, and Marc Moens. 1999. An annotation scheme for discourse-level argumentation in research articles.
Teufel, Simone and Marc Moens. 2002. Summarizing scientific articles: Experiments with relevance and rhetorical status. Computational Linguistics, 28(4):409–445.
Widlöcher, Antoine and Yann Mathet. 2012. The Glozz platform: A corpus annotation and mining tool. In ACM Symposium on Document Engineering (DocEng'12), pages 171–180, Paris.
Zwick, Rebecca. 1988. Another look at interrater agreement. Psychological Bulletin, 103:347–387.
acknowledgments :
We wish to thank the three anonymous reviewers for their helpful comments and discussion. The authors would also like to warmly thank Klaus Krippendorff for his support when they implemented his coefficients in order to test them. This work was carried out in the GREYC Laboratory, Caen, France, with the strong support of the French Contrat de Projet État-Région (CPER) and the Région Basse-Normandie, which have provided this research with two engineers, Jérôme Chauveau and Stéphane Bouvry.

appendix a. examples of linguistic objects and possible annotation tasks :
Terms emphasized hereafter refer to the terminology defined in Section 2.

PART-OF-SPEECH. Part-of-speech (POS) tagging (see, for example, Güngör [2010] for a recent state of the art) gives a well-known illustration of a pure categorization without unitizing task: For all the words in a text (predefined units, full-covering, no overlap), annotators have to select a category belonging to quite a small set of exclusive elements. POS units (words) having the same label are obviously not aggregatable.

GENE RENAMING. In a study on gene renaming presented in Fort et al. (2012), all the tokens (predefined units, no overlap) are markable (categorization) with "Nothing" (the default value), "Former" (the original name of a gene), or "New" (its new name). This work at word level considers sparser (sporadic) phenomena than POS tagging. However, the annotation is defined as full-covering, with "Nothing" as a default tag. Note that the presence of the "Nothing" category also reveals here the reduction of a unitizing problem (detection of renaming) to a pure coding system (categorization). These units are not aggregatable.

WORD SENSE. For the annotation task described in Passonneau et al. (2012), annotators were asked to assign sense labels (categorization without unitizing) to preselected moderately polysemous words (sporadicity, predefined units, no overlap) in preselected sentences where they occur. Adjacent words are not aggregatable with sense preservation.

NAMED ENTITY. Well-established named entity (NE) recognition tasks (see, for example, Nadeau and Sekine [2007]) have led to many annotation efforts. In such tasks, the annotator is often asked to identify the units in the text's continuum (unitizing, sporadicity) and to select an NE type from an inventory (categorization). It is well known that some difficulties of NE annotation relate to the delimitation of NE boundaries. For example, for a phrase such as "Mr X, the President of Y," it makes sense to annotate subparts ("X," "Mr X," "the President of Y") and/or the whole. "Y" is also an NE of another type. This may result in hierarchical or free overlapping structures. Adjacent NEs are not aggregatable.

ARGUMENTATIVE ZONING. Studies concerned with argumentative zoning (Teufel 1999; Teufel, Carletta, and Moens 1999; Teufel and Moens 2002) consider the argumentative structure of texts, and identify text spans having specific roles. For each sentence (full-covering, predefined units, no overlap), a category (categorization) is selected. Adjacent sentences of the same type are aggregated into larger spans (argumentative zones). This reveals an underlying question of unitizing. However, it has to be noted that the categorization mainly concerns predefined sentences: Argumentative types are aggregatable.

DISCOURSE FRAMING. In Charolles et al.'s discourse framing hypothesis (Charolles et al.
2005, page 121), "a discourse frame is described as the grouping together of a number of propositions which are linked by the fact that they must be interpreted with reference to a specific criterion, realized in a frame-initial introducing expression." Thus, temporal or spatial introducing expressions lead, for example, to temporal or spatial discourse frames in the text continuum (unitizing, sporadicity, categorization). Discourse frames are not aggregatable. Subordination is possible, leading to possibly hierarchical overlap, where frames (of the same type or of different types) are embedded.

COMMUNICATIVE BEHAVIOR. The multimodal AMI Meeting corpus (Carletta 2007) covers a wide range of phenomena, and contains many different layers of annotation describing the communicative behavior of the participants in meetings. For example, in Reidsma (2008), annotators are required to identify fragments in a video recording (unitizing, sporadicity) and to categorize them (categorization). For such an annotation task, one can easily imagine instruction manuals allowing annotators to use multiple labels and to identify embedded (hierarchical overlap) or freely overlapping units, even if the example provided by Reidsma (2008) does not.

DIALOG ACT. Annotating dialog acts conforming to a standard as defined, for example, in Bunt et al. (2010), leads annotators to assign communicative function labels and types of semantic content (categorization) to stretches of dialogue called functional segments. The possible multifunctionality of segments (one functional segment is related to one or more dialog acts), and the fact that annotations may be attached directly to the primary data, such as stretches of speech defined by begin and end points, or attached to structures at other levels of analysis, seem to allow different kinds of configurations and annotation instructions: unitizing or pure categorization of pre-existing structures, sporadicity or full-covering, hierarchical, overlapping, or linear segmentation.

TOPIC SEGMENTATION. Topic segmentation (see, for example, the seminal work by Hearst [1997] or Bestgen [2006] for a more recent state of the art), which aims at detecting the most important thematic breaks in the text's continuum, gives an illuminating example of pure segmentation. This unitizing problem of linear segmentation is full-covering and restricted to the detection of breaks (the right boundary of a unit corresponds to the left boundary of the following segment) (no overlap). If we consider the resulting segments, there is just one category (topic segment) and thus no categorization. Adjacent topic segments are obviously not aggregatable without a shift in meaning.

HIERARCHICAL TOPIC SEGMENTATION. In order to better take into account the fact that lexical cohesion is a multiscale phenomenon and that discourse displays a hierarchical structure, the hierarchical topic segmentation proposed, for example, by Eisenstein (2009) preserves the main goal and properties of text-tiling (unitizing, no categorization, full-covering, not aggregatable segments), but allows a topic segment to be subsegmented into sub-topic segments (hierarchical [but not free] overlap).

TOPIC TRANSITION. The topic zoning annotation model presented in Labadié et al. (2010) is based on the hypothesis that, in a well-constructed text, abrupt topic boundaries are more the exception than the rule. This model introduces transition zones (unitizing) between topics, zones that help the reader to move from one topic to another.
The annotator is asked to identify and categorize (categorization) topic segments, introductions, conclusions, and transition zones. Hierarchical overlap is possible (embedded elements of the same type or of different types are allowed). Free overlapping structures are frequent, by virtue of the nature of transitions. Adjacent topic zones and adjacent transition zones are not aggregatable.

ENUMERATIVE STRUCTURES. A study on complex discourse objects such as enumerative structures (Afantenos et al. 2012) illustrates both the need for sporadic unitizing and the need for categorization. The enumerative structures have a complex internal organization, composed of various types of subelements (hierarchical overlap): a trigger of the enumeration, the items composing its body, and so on, which are not aggregatable.
Coefficient c|uα relies on a coincidence matrix between categories, filled with the sums of the lengths of all intersections of units for each given pair of categories. For instance, in the example, the observed coincidence between category 1 and category 3 is 5, and so on. A metric matrix is chosen for these categories, for instance, an interval metric (for numerical categories), which says that the distance between category i and category j is (i − j)². Hence, the cost for a unitary intersection between categories 1 and 2 is (1 − 2)² = 1, but is (1 − 3)² = 4 between categories 1 and 3, and so on. Then, the observed disagreement is computed according to these two matrices. To finish, an expected matrix is filled (in a way which cannot be detailed here due to space constraints), and the expected value is computed the same way. In the example, c|uα = 1 − 0.833/3.145 = 0.744. Hence, Krippendorff’s alphas provide two clues to analyze the agreement between annotators. In the example, uα = 0.405 indicates that the unitizing is not so good, but also that the categorizing is much better, with c|uα = 0.744 (even though, of course, these two values are not independent, since unitizing and categorizing coexist here by nature). Now that these coefficients have been introduced in detail, let us analyze to what extent they differ from γ. 6.1.2 Slight Categorial Disagreements: Alphas Versus γ. When annotators have slight categorial disagreements (with overlapping categories), c|uα is slightly lowered. However, uα does not take categorial overlapping into account, but has a binary response to such disagreements, and is lowered as much as if they were severe categorial disagreements. A consequence of this approach is illustrated in Figure 15, where two annotators perfectly agree both on positions and categories in the experiment on the left, and still perfectly agree on position but slightly diverge concerning categories in the experiment on the right (1/2, 6/7, and 8/9 are assumed to be close categories). However, uα drops from 1 in the left experiment to −0.34 (a negative value means worse than random) in the right experiment, despite, in the latter, the positions all being correct, and the categories being quite good, since c|uα = 0.85. On such data, γ considers that there is no positional disagreement, and c|uα and γ both consider that there are slight categorial disagreements.
Figure 15: Consequences of no categorial disagreement (left: both observers annotate categories 1, 9, 7) compared with slight categorial disagreements (right: observer 1 annotates categories 1, 9, 7 where observer 2 annotates 2, 8, 6).
6.1.3 Positional Disagreements Impacting Categorial Agreement: c|uα. Two different conceptions of how to account for categorial disagreement have, respectively, led to c|uα and γ: c|uα relies on intersections between the units of different annotators, which is basically equivalent to an observation at the atom level, whereas γ relies on alignments between units (any unit being finally attached and compared to, at most, only one other) based on both positional and categorial observation. Hence, in a configuration such as the one given in Figure 16, where two annotators annotated three units with the same categories 1, 4, and 2, but not exactly at the same locations, c|uα registers a certain amount of categorial disagreement, whereas γ does not.
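Before continuing the comparison, the interval metric described in Section 6.1.1 can be made concrete (a minimal sketch, assuming integer category labels as in Krippendorff's example; the coincidence and expected matrices of the full coefficient are not reproduced):

```python
# Interval metric between numeric categories: the cost of a unitary
# intersection between categories i and j is their squared difference.
def interval_metric(i, j):
    return (i - j) ** 2

assert interval_metric(1, 2) == 1  # adjacent categories: low cost
assert interval_metric(1, 3) == 4  # more distant categories: higher cost
```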
According to the principles of c|uα, any part of the continuum (even at the atom level) with an intersection between different categories means some confusion between them, whereas γ considers here that the annotators fully agree on categories (they both observed a “1”, then a “4”, then a “2”, with no confusion), and disagree only on where phenomena exactly start and finish. The crucial difference between the two methods is probably whether we consider units to be non-atomizable (and therefore consider alignments, as γ does), or atomizable (in which case two different parts of a given unit may be simultaneously and respectively compared to two different units from another annotator). 6.1.4 Disagreements on Long versus Short Spans of Texts. Here again, the way disagreements are accounted for may differ markedly between uα and γ: when a unit does not match with any other, uα takes into account the length of the corresponding span of text to assess a disagreement. As shown in Figure 17, an orphan unit of size 10 will cost 10² = 100 times as much as an orphan unit of size 1, whereas for γ, they will have the same cost. In the whole example in Figure 17, to compute the observed disagreements, uα says the first case is 50 times worse than the second, whereas γ says on the contrary that the second case is twice as bad as the first. Here, γ fulfills the need (already mentioned) expressed by Reidsma, Heylen, and Ordelman (2006, page 3) to consider that “short segments are as important as long segments.” This phenomenon is the same for categories between c|uα and γ, the size of the units having consequences only for c|uα. 6.1.5 Split Disagreements. Sometimes, an annotator may divide a given span of text into several contiguous units of the same type, or may annotate the same span with one whole unit. In these cases, c|uα computes the same observed disagreement in both configurations, and uα assigns decreasing disagreement as splitting increases, as shown in Figure 18, whereas γ assigns increasing disagreement. Moreover, in Figure 19, the observed uα is not responsive to splits at all, whereas γ is still responsive. 6.1.6 Scale Effects. The way uα computes dissimilarities is directly proportional to squared lengths, as shown in Figure 20. On the other hand, γ may use any positional dissimilarity, and for CL applications usually uses ones that are not scale-dependent, such as dpos−sporadic (Equation (3)). For instance, if a text is annotated with two categories, one at word level, the other one at paragraph level, we may prefer to account for relative disagreements, so that a missing word will be more heavily penalized in the first case than in the second. In Figure 20, the observed disagreement of uα is 3² = 9 times greater for B units than for A units, but would be the same for γ with dpos−sporadic, since ((0 + 3) / ((7 + 10)/2))² = ((0 + 9) / ((21 + 30)/2))². 6.1.7 A Paradox of uα. In Figure 21a, the annotators disagree on categorization, and have a moderate agreement on unitizing. This configuration leads to uα = 0.107. In Figure 21b, the configuration is quite similar, but now the annotators fully agree on unitizing: Each of them puts units in the same positions. Paradoxically, uα drops to −0.287, which is less than in the first configuration.
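Before explaining this paradox, note that the scale-invariance equality claimed in Section 6.1.6 is easy to verify numerically (a quick sketch):

```python
# Scale-invariance check for the sporadic positional dissimilarity used in
# the Figure 20 example: scaling every length by 3 leaves the value unchanged.
d_A = ((0 + 3) / ((7 + 10) / 2)) ** 2    # units of lengths 7 and 10, offset 3
d_B = ((0 + 9) / ((21 + 30) / 2)) ** 2   # the same configuration scaled by 3
assert abs(d_A - d_B) < 1e-12            # both equal (6/17)**2
```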
In brief, the reason for this behavior is that in the first case, the computed disagreement regarding a given pair of units is virtually distributed into shorter parts of the whole (an intersection of length 80 between them, and an intersection of length 20 with a gap for each of them, which leads to 80² + 2 × 20² = 7,200), whereas the disagreement is maximum in the second case (an intersection of length 100 with a unit of another category, which leads to 100² = 10,000). In contrast, with similar data, γ provides a better agreement in the second case than in the first one. By design, it considers that there is the same categorial agreement in both cases, but better positional agreement in the second case, which seems to correspond better to the CL tasks we have considered. 6.1.8 Overlapping Units (Embedding or Free Overlap). Both alpha coefficients are currently designed to cope only with non-overlapping units (the term overlapping also stands here for embedding), which is a limitation for several fields in CL. It is debatable whether they could be generalized to handle overlapping units. It seems that it would involve a major change in the strategy, which currently necessitates comparing the intersections of all pairs of units. In the example shown in Figure 22, even though the annotators fully agree on their two units, the alphas will inherently compare A1 with B2 and A2 with B1 (in addition to the normal comparisons of A1 with B1 and A2 with B2), and will count the resulting intersections as disagreements. It is necessary here to choose once and for all which unit to compare with which other, rather than to perform all the comparisons. But making such a choice consists precisely in making an alignment, which is a fundamental feature of γ. Consequently, it seems that the alphas would need a structural modification to cope with overlapping. 6.2 Using a Discretized Categorization Coefficient: κd. As explained by Reidsma, Heylen, and Ordelman (2006), because of the lack of specialized coefficients coping with unitizing, a fairly standard practice is to use categorization coefficients on a discretized (i.e., atomized) version of the continuum: For instance, each character (or each word, or each paragraph) of a text is considered as an item, and a standard categorization coefficient such as κ is used to compute agreement. Such a measure is called κd (for discretized κ) hereafter. Several weaknesses of this approach have already been mentioned in the state-of-the-art section. It is interesting to compare such a measure to the specialized one, c|uα: even if they both bear the aggregatability hypothesis, they nevertheless differ significantly (as confirmed by the experiments presented in the next section). The main difference is that c|uα does not use an artificial atomization of the continuum, and only compares units with units. In doing so, it is not prone to agreement on blanks, in contrast to κd. Another difference is that, for the same reason, c|uα is not inherently limited to non-overlapping units: Even if it is not currently designed to cope with them, as we have already seen, it is possible to submit overlapping units to this measure (some results are shown in the next section). 6.3 Benchmarking the Measures with the Corpus Shuffling Tool. In this section on benchmarking, we use the Corpus Shuffling Tool (CST) introduced by Mathet et al. (2012) to compare γ concretely and accurately with the other measures.
We first introduce the possible error types that it provides: category (category mistakes may occur), position (the boundaries may be shifted), false positives (the annotators add units to the reference units), false negatives (the annotators miss some of the reference units), and splits (the annotators put two or more contiguous units, occupying the same span of text, instead of a reference unit). This tool is used to simulate varying degrees of disagreement among different error types, and the measures are compared with each other according to how they react to these disagreements. For a given error type, for each magnitude between 0 and 1 (with a step of 0.05), the tool creates 40 artificial, multi-annotator shuffled annotation sets, and computes the different measures for them. Hence, we obtain a full graph showing the behavior of each measure for this error type, with the magnitude on the x-axis, and the average agreement (over the 40 annotation sets) on the y-axis. This provides a sort of “X-ray” of the capabilities of the measures with respect to this error type, which should be evaluated against the following desiderata: (1) a measure should provide a full response to the whole range of magnitudes, which means in particular that the curve should ideally start from 1 (at m = 0) and reach 0 (at m = 1), but never go below 0 (indeed, negative agreement values require a part of systematic disagreement, which is not simulated by the current version of the CST); and (2) the response should be strictly decreasing: A flat part would mean the measure does not differentiate between different magnitudes, and, even worse, an increasing part would mean that the measure is counter-effective at some magnitudes, where a worse error is penalized less severely. We emphasize the fact that the whole graph is important, up to magnitude 1. Indeed, in most real annotated corpora, even when the overall agreement is high, errors corresponding to all magnitudes may occur. For instance, an agreement of 0.8 does not necessarily correspond to the fact that all annotations are affected by slight errors (which correspond to magnitudes close to 0), but may for instance correspond to the fact that a few units are affected by severe errors (which may correspond to magnitudes close or equal to 1). It is important to note that this tool was designed by the authors of γ, for tasks where units cannot be considered as atomizable. In particular, it was conceived so that disagreements concerning small units are as important as those concerning large ones. However, it is provided as open-source (see Conclusions section) so that anyone can test and modify it, and propose new experiments to test γ and other measures in the future. 6.3.1 Introducing the CST. The main principle of this tool is as follows. A reference corpus is built, with respect to a statistical model, which defines the number of categories, their prevalence, the minimum and maximum length for each category, and so forth. Then, this reference is used by the shuffling tool to generate a multi-annotator corpus, simulating the fact that each annotator makes mistakes of a certain type, and of a certain magnitude. It is important to remark that the generated corpus does not include the reference it is built from. The magnitude m is the strength of the shuffling, that is to say, the severity of the mistakes annotators make compared with the reference.
It can be set from 0, which means no damage is applied (and the annotators are perfect), to the extreme value 1, which means the annotators are assumed to behave in the worst possible way (while still being independent of each other)—namely, at random. Figure 23 illustrates the way such a corpus is built: From the reference containing some categorized units, three new sets of annotations are built, simulating three annotators who are assumed to have the same annotating skill level, which is set in this example at magnitude 0.1. The applied error type is position only, that is to say, each annotator makes mistakes only when positioning boundaries, but does not make any other mistake (the units are reproduced in the same order, with the correct category, and in the same number). At this low magnitude, the positions are still close to those of the reference, but often vary a little. Hence, we obtain here a slightly shuffled multi-annotator corpus. Let us sum up the way the error types are currently designed in the CST. Position. At magnitude m, for a given unit, we define a value shiftmax that is proportional to m and to the length of the unit, and each boundary of the unit is shifted by a value randomly chosen between −shiftmax and shiftmax (note: at magnitude 0, because shiftmax = 0, units are not shifted). Category. This shuffling cannot be described in a few words (see Mathet et al. [2012] for details). It uses special matrices to simulate, using conditional probabilities, progressive confusion between categories, and can be configured to take into account the overlapping of categories. The higher the magnitude, the more frequent and severe the confusion. False negatives. At magnitude m, each unit has probability m of being forgotten. For instance, at magnitude m = 0.5, each annotator misses (on average) half of the units from the reference (but not necessarily the same units as the other annotators). False positives. At magnitude m, each annotator adds a certain number of units (proportional to m) to the ones of the reference. Splits. At magnitude m, each annotator splits a certain number of units (proportional to m). A split unit may be re-split, and so on. 6.3.2 Pure Segmentation: γ, WD, GHD. Even though γ was created to cope with error types that are poorly or not at all dealt with by other methods, and, moreover, to cope with all of them simultaneously (unitizing of categorized, possibly overlapping units), it is illuminating to observe how it behaves on more specific error types, to which specialized and well-known methods are dedicated. We start with pure segmentation. Figure 24 shows the behavior of WD, GHD, and γ for two error types. For false negatives, WD and GHD are quite close, with an almost linear response until magnitude 0.6. Their drawback is that their responses are limited by an asymptote, because of the absence of chance correction, while γ shows a full range of agreements; for shifts, WD and GHD show an asymptote at about agreement = 0.4, while γ shows values from 1 to 0. This experiment confirms the advantage of using γ instead of these distances for measuring inter-annotator agreement. 6.3.3 Pure Categorization. In this experiment, the CST is set to three annotators and four categories with given prevalences. The units are all of the same size, positioned at fixed, predefined positions, so that the focus is on categorizing only. It should be noted that, with such a configuration, α and κ behave in exactly the same way as c|uα.
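As a concrete illustration of how such shuffled corpora are produced, here is a minimal sketch of the position error type described above (the unit representation and the proportionality constant are our assumptions, and the real tool additionally guards against degenerate units):

```python
import random

# CST-style "position" shuffling: each boundary is shifted by a random
# amount in [-shift_max, +shift_max], where shift_max is proportional to
# the magnitude m and to the length of the unit.
def shuffle_positions(units, m, rng):
    shuffled = []
    for start, end, category in units:
        shift_max = m * (end - start)  # zero at m = 0: units stay in place
        shuffled.append((start + rng.uniform(-shift_max, shift_max),
                         end + rng.uniform(-shift_max, shift_max),
                         category))
    return shuffled

reference = [(0, 10, "1"), (15, 40, "3"), (50, 76, "2")]
# Three simulated annotators with the same skill level (magnitude 0.1):
annotators = [shuffle_positions(reference, 0.1, random.Random(seed))
              for seed in range(3)]
```

We now return to the pure-categorization experiment set up above.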
It is particularly striking in Figure 25 that γ behaves in almost the same way as c|uα. In fact, the observed values of these measures are exactly the same, the only difference coming from a slight difference in the expected values, due to sampling. Other tests carried out with the pure categorizing coefficient κ yielded the same results on this particular error type, which means that γ performs as well as recognized measures as far as categorizing is concerned, with two or more annotators. The uα curve goes below zero at magnitude 0.5 (probably for the reasons seen in Section 6.1.7). Moreover, its behavior depends on the size of the gaps: Indeed, with other settings of the shuffling, the curve may, on the contrary, be stuck above zero. κd fails to reach 0 because of the virtual agreement on gaps (but it would if there were no gaps). Lastly, SER (averaging the results of each pair of annotators) is bounded below by 0.6, which results from not taking chance into account. 6.3.4 Almost General Case: Unitizing + Categorizing. This section concerns the more general uses of γ, combining both unitizing and categorizing. However, in order to be compliant with uα, c|uα, and κd, we limit the configurations here so that the units do not overlap at all. In particular, the reference was built with no overlapping units, and we have used a modified version of the shifting shuffling procedure so that the non-overlapping constraint is fully satisfied, even at high magnitudes. Positional errors (Figure 26a). An important point is that this shuffling error type, which is based only on moving positions, has a paradoxical consequence on category agreement, since units of different categories align when sufficient shifting is applied. Consequently, c|uα is not blocked at 1, even though it is designed to focus on categories. Additionally, it starts to decrease from the very first shifts, as soon as units from different annotators start overlapping. This is a concrete consequence of what was formally studied in Section 6.1.3. γ has the most progressive response, reaches 0.1 at magnitude 1, and is the only measure to be strictly decreasing. SER immediately drops to agreement 0.5 at magnitude 0.05. As it relies on a binary positional distance, it fails to distinguish between small and large errors. This is a serious drawback of such a measure for most CL tasks. It then goes below zero and is not strictly decreasing. uα is mostly strictly decreasing, but has some increasing parts, and, even more problematic, negative values from 0.6 to 0.9, probably because of the reason explained in Section 6.1.7. κd is too responsive at the very first magnitudes, and is not strictly decreasing, probably because it “does not compensate for differences in length of segments” (Reidsma, Heylen, and Ordelman 2006, page 3). Positional and categorial errors (Figure 26b). γ is strictly decreasing and reaches 0. The alphas are not strictly decreasing, and once again uα drops below 0 from magnitude 0.6 onwards. κd is not strictly decreasing (again, probably because it “does not compensate for differences in length of segments”), but its general shape is not that far from γ’s. Split errors (Figure 27). The split error type would need to create an infinite number of splits to mean pure chaos at magnitude 1. As this is computationally not possible, we restricted the number of splits to five times the number of units of the reference. We should therefore not expect the measures to reach 0.
In this context, γ shows a good range of responses, from 1 to 0.2, in an almost linear curve. SER is also quite linear, but gives very confusing values for this error type because it reaches negative values above magnitude 0.6. Finally, uα, c|uα, and κd are not responsive at all to this error type, as expected, and remain blocked at 1 (which is normal for c|uα, which focuses on categorizing). False positives and false negatives (Figure 28). In the current version of the CST, the false positive error type creates some overlapping (new units may overlap), and this is the reason why uα and κd were discarded from this experiment. However, we have kept c|uα because it behaves quite well despite overlapping units. All the measures have, overall, a good response to the false positives error type, as shown in Figure 28a, even if the shape of c|uα is delayed compared with the others, but it should be pointed out that SER has a curious and unfortunate final increasing section (not visible in the figure because this section is below 0). On the other hand, bigger differences appear with false negatives (Figure 28b). γ is still strictly decreasing and almost reaches 0 (0.025), but uα is not strictly decreasing, and is at 0 or below from m = 0.3; SER quickly drops below 0 from m = 0.4; κd is not strictly decreasing; and c|uα, as for splits, does not react at all but remains stuck at 1, which is desired for this coefficient focused on categories (values of c|uα beyond m = 0.7 are missing since there are not enough intersections between units for this measure to work). Overview of each measure for the almost general case. In order to summarize the behavior of each measure in response to the different error types for the almost general case (without overlap), we pick all the curves relative to a given measure out of the previous plots and draw them in the same graph, as shown in Figure 29. Briefly, γ shows a steady behavior for all error types, almost strictly decreasing from 1 to 0. uα has some increasing parts and negative values, and is sometimes not responsive. c|uα is very responsive for some error types, less responsive for some other types, and sometimes not responsive at all (which is desired, as already said). SER has unreliable responses, being either too responsive (reaching negative values) or not responsive enough. Finally, κd is not always responsive and is most of the time not strictly decreasing, but is sometimes quite progressive. 6.3.5 Fully General Case: Unitizing + Categorizing. This last section considers the fully general case, where overlapping of units within an annotator is allowed. In this experiment, we took a reference corpus with no overlap, but the errors applied (a combination of positioning and false positives) progressively lead to overlapping units. The results are shown in Figure 30. As expected, γ behaves much the same as it does with non-overlapping configurations. Admittedly, c|uα was not designed to handle these configurations (and so should not be included in this experiment), but surprisingly it seems to perform in rather the same way as it does with no overlapping; this must be investigated further, but judging from this preliminary observation, it seems this coefficient could still be operational and useful in such cases. On the contrary, uα does not handle this experiment correctly and so was not included in the graph."
"1 introduction :A well-written text is not merely a sequence of independent and isolated sentences, but instead a sequence of structured and related sentences, where the meaning of a sentence relates to the previous and the following ones. In other words, a well-written ∗ Arabic Language Technologies, Qatar Computing Research Institute, Qatar Foundation, Doha, Qatar. E-mail: sjoty@qf.org.qa. ∗∗ Computer Science Department, University of British Columbia, Vancouver, BC, Canada, V6T 1Z4. E-mail: carenini@cs.ubc.ca. † Computer Science Department, University of British Columbia, Vancouver, BC, Canada, V6T 1Z4. E-mail: rng@cs.ubc.ca. Submission received: 11 May 2014; revised version received: 29 January 2015; accepted for publication: 18 March 2015. doi:10.1162/COLI a 00226 No rights reserved. This work was authored as part of the Contributor’s official duties as an Employee of the United States Government and is therefore a work of the United States Government. In accordance with 17 U.S.C. 105, no copyright protection is available for such works under U.S. Law. text has a coherence structure (Halliday and Hasan 1976; Hobbs 1979), which logically binds its clauses and sentences together to express a meaning as a whole. Rhetorical analysis seeks to uncover this coherence structure underneath the text; this has been shown to be beneficial for many Natural Language Processing (NLP) applications, including text summarization and compression (Marcu 2000b; Daumé and Marcu 2002; Sporleder and Lapata 2005; Louis, Joshi, and Nenkova 2010), text generation (Prasad et al. 2005), machine translation evaluation (Guzmán et al. 2014a, 2014b; Joty et al. 2014), sentiment analysis (Somasundaran 2010; Lazaridou, Titov, and Sporleder 2013), information extraction (Teufel and Moens 2002; Maslennikov and Chua 2007), and question answering (Verberne et al. 2007). Furthermore, rhetorical structures can be useful for other discourse analysis tasks, including co-reference resolution using Veins theory (Cristea, Ide, and Romary 1998). Different formal theories of discourse have been proposed from different viewpoints to describe the coherence structure of a text. For example, Martin (1992) and Knott and Dale (1994) propose discourse relations based on the usage of discourse connectives (e.g., because, but) in the text. Asher and Lascarides (2003) propose Segmented Discourse Representation Theory, which is driven by sentence semantics. Webber (2004) and Danlos (2009) extend sentence grammar to formalize discourse structure. Rhetorical Structure Theory (RST), proposed by Mann and Thompson (1988), is perhaps the most influential theory of discourse in computational linguistics. Although it was initially intended to be used in text generation, later it became popular as a framework for parsing the structure of a text (Taboada and Mann 2006). RST represents texts by labeled hierarchical structures, called Discourse Trees (DTs). For example, consider the DT shown in Figure 1 for the following text: But he added: “Some people use the purchasers’ index as a leading indicator, some use it as a coincident indicator. But the thing it’s supposed to measure—manufacturing strength—it missed altogether last month.” The leaves of a DT correspond to contiguous atomic text spans, called elementary discourse units (EDUs; six in the example). EDUs are clause-like units that serve as building blocks. 
Adjacent EDUs are connected by coherence relations (e.g., Elaboration, Contrast), forming larger discourse units (represented by internal nodes), which in turn are also subject to this relation linking. Discourse units linked by a rhetorical relation are further distinguished based on their relative importance in the text: nuclei are the core parts of the relation and satellites are peripheral or supportive ones. For example, in Figure 1, Elaboration is a relation between a nucleus (EDU 4) and a satellite (EDU 5), and Contrast is a relation between two nuclei (EDUs 2 and 3). Carlson, Marcu, and Okurowski (2002) constructed the first large RST-annotated corpus (RST–DT) on Wall Street Journal articles from the Penn Treebank. Whereas Mann and Thompson (1988) had suggested about 25 relations, the RST–DT uses 53 mono-nuclear and 25 multi-nuclear relations. The relations are grouped into 16 coarse-grained categories; see Carlson and Marcu (2001) for a detailed description of the relations. Conventionally, rhetorical analysis in RST involves two subtasks: discourse segmentation is the task of breaking the text into a sequence of EDUs, and discourse parsing is the task of linking the discourse units (EDUs and larger units) into a labeled tree. In this article, we use the terms discourse parsing and rhetorical parsing interchangeably. While recent advances in automatic discourse segmentation have attained high accuracies (an F-score of 90.5% reported by Fisher and Roark [2007]), discourse parsing still poses significant challenges (Feng and Hirst 2012), and the performance of the existing discourse parsers (Soricut and Marcu 2003; Subba and Di-Eugenio 2009; Hernault et al. 2010) is still considerably inferior to the human gold standard. Thus, the impact of rhetorical structure in downstream NLP applications is still very limited. The work we present in this article aims to reduce this performance gap and take discourse parsing one step further. To this end, we address three key limitations of existing discourse parsers. First, existing discourse parsers typically model the structure and the labels of a DT separately, and also do not take into account the sequential dependencies between the DT constituents. However, for several NLP tasks, it has recently been shown that joint models typically outperform independent or pipeline models (Murphy 2012, page 687). This is also supported in a recent study by Feng and Hirst (2012), in which the performance of a greedy bottom–up discourse parser improved when sequential dependencies were considered by using gold annotations for the neighboring (i.e., previous and next) discourse units as contextual features in the parsing model. To address this limitation of existing parsers, as the first contribution, we propose a novel discourse parser based on probabilistic discriminative parsing models, expressed as Conditional Random Fields (CRFs) (Sutton, McCallum, and Rohanimanesh 2007), to infer the probability of all possible DT constituents. The CRF models effectively represent the structure and the label of a DT constituent jointly, and, whenever possible, capture the sequential dependencies. Second, existing discourse parsers typically apply greedy and sub-optimal parsing algorithms to build a DT. To cope with this limitation, we use the inferred (posterior) probabilities from our CRF parsing models in a probabilistic CKY-like bottom–up parsing algorithm (Jurafsky and Martin 2008), which is non-greedy and optimal.
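The dynamic-programming idea behind this combination can be sketched as follows (a simplification, not CODRA's implementation: prob is a placeholder for the inferred posteriors P(R[i, m, j]), relations is the relation set, and only the single best subtree per span is kept):

```python
# Probabilistic CKY-like combination over EDU spans 1..n: the best tree for
# a span maximizes the product of its children's scores and the posterior
# probability of the constituent joining them.
def cky_parse(n, relations, prob):
    best = {(k, k): (1.0, ("EDU", k)) for k in range(1, n + 1)}  # base case
    for span in range(2, n + 1):
        for i in range(1, n - span + 2):
            j = i + span - 1
            best[(i, j)] = max(
                ((best[(i, m)][0] * best[(m + 1, j)][0] * prob(i, m, j, R),
                  (R, best[(i, m)][1], best[(m + 1, j)][1]))
                 for m in range(i, j) for R in relations),
                key=lambda cand: cand[0])
    return best[(1, n)][1]  # the most probable discourse tree
```

Keeping the k highest-scoring entries per span instead of the single best one yields the k-best variant discussed next.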
Furthermore, a simple modification of this parsing algorithm allows us to generate k-best (i.e., the k highest probability) parse hypotheses for each input text that could then be used in a reranker to improve over the initial ranking using additional (global) features of the discourse tree as evidence, a strategy that has been successfully explored in syntactic parsing (Charniak and Johnson 2005; Collins and Koo 2005). Third, most of the existing discourse parsers do not discriminate between intra-sentential parsing (i.e., building the DTs for the individual sentences) and multi-sentential parsing (i.e., building the DT for the whole document). However, we argue that distinguishing between these two parsing conditions can result in more effective parsing. Two separate parsing models could exploit the fact that rhetorical relations are distributed differently intra-sententially versus multi-sententially. Also, they could independently choose their own informative feature sets. As another key contribution of our work, we devise two different parsing components: one for intra-sentential parsing, the other for multi-sentential parsing. This provides for scalable, modular, and flexible solutions that can exploit the strong correlation observed between the text structure (i.e., sentence boundaries) and the structure of the discourse tree. In order to develop a complete and robust discourse parser, we combine our intra-sentential and multi-sentential parsing components in two different ways. Because most sentences have a well-formed discourse sub-tree in the full DT (e.g., the second sentence in Figure 1), our first approach constructs a DT for every sentence using our intra-sentential parser, and then runs the multi-sentential parser on the resulting sentence-level DTs to build a complete DT for the whole document. However, this approach would fail in those cases where discourse structures violate sentence boundaries, also called “leaky” boundaries (Vliet and Redeker 2011). For example, consider the first sentence in Figure 1. It does not have a well-formed discourse sub-tree because the unit containing EDUs 2 and 3 merges with the next sentence and only then is the resulting unit merged with EDU 1. Our second approach, in order to deal with these leaky cases, builds sentence-level sub-trees by applying the intra-sentential parser on a sliding window covering two adjacent sentences and by then consolidating the results produced by overlapping windows. After that, the multi-sentential parser takes all these sentence-level sub-trees and builds a full DT for the whole document. Our discourse parser assumes that the input text has already been segmented into elementary discourse units. As an additional contribution, we propose a novel discriminative approach to discourse segmentation that not only achieves state-of-the-art performance, but also reduces time and space complexities by using fewer features. Notice that the combination of our segmenter with our parser forms a COmplete probabilistic Discriminative framework for Rhetorical Analysis (CODRA). Whereas previous systems have been tested on only one corpus, we evaluate our framework on texts from two very different genres: news articles and instructional how-to manuals. The results demonstrate that our approach to discourse parsing provides consistent and statistically significant improvements over previous methods both at the sentence level and at the document level.
The performance of our final system compares very favorably to the performance of state-of-the-art discourse parsers. Finally, the oracle accuracy computed based on the k-best parse hypotheses generated by our parser demonstrates that a reranker could potentially improve the accuracy further. After discussing related work in Section 2, we present our rhetorical analysis framework in Section 3. In Section 4, we describe our discourse parser. Then, in Section 5 we present our discourse segmenter. The experiments and analysis of results are presented in Section 6. Finally, we summarize our contributions with future directions in Section 7.","2 related work :Rhetorical analysis has a long history, ranging from Mann and Thompson (1988), when RST was initially proposed as a useful linguistic method for describing natural texts, to more recent attempts to automatically extract the rhetorical structure of a given text (Hernault et al. 2010). In this section, we provide a brief overview of the computational approaches that follow RST as the theory of discourse, and that are related to our work; see the survey by Stede (2011) for a broader overview that also includes other theories of discourse. Although the most effective approaches to rhetorical analysis to date rely on supervised machine learning methods trained on human-annotated data, unsupervised methods have also been proposed, as they do not require human-annotated data and can be more easily applied to new domains. Often, discourse connectives like but, because, and although convey clear information on the kind of relation linking the two text segments. In his early work, Marcu (2000a) presented a shallow rule-based approach relying on discourse connectives (or cues) and surface patterns. He used hand-coded rules, derived from an extensive corpus study, to break the text into EDUs and to build DTs for sentences first, then for paragraphs, and so on. Despite the fact that this work pioneered the field of rhetorical analysis, it has many limitations. First, identifying discourse connectives is a difficult task on its own, because (depending on the usage) the same phrase may or may not signal a discourse relation (Pitler and Nenkova 2009). For example, but can either signal a Contrast discourse relation or can simply play a non-discourse role. Second, discourse segmentation using only discourse connectives fails to attain high accuracy (Soricut and Marcu 2003). Third, DT structures do not always correspond to paragraph structures; for example, Sporleder and Lapata (2004) report that more than 20% of the paragraphs in the RST–DT corpus (Carlson, Marcu, and Okurowski 2002) do not correspond to a discourse unit in the DT. Fourth, discourse cues are sometimes ambiguous; for example, but can signal Contrast, Antithesis, Concession, and so on. Finally, a more serious problem with the rule-based approach is that rhetorical relations are often not explicitly signaled by discourse cues. For example, in RST–DT, Marcu and Echihabi (2002) found that only 61 out of 238 Contrast relations and 79 out of 307 Cause–Explanation relations were explicitly signaled by cue phrases. In the British National Corpus, Sporleder and Lascarides (2008) report that half of the sentences lack a discourse cue. Other studies (Schauer and Hahn 2001; Stede 2004; Taboada 2006; Subba and Di-Eugenio 2009) report even higher figures: About 60% of discourse relations are not explicitly signaled.
Therefore, rather than relying on hand-coded rules based on discourse cues and surface patterns, recent approaches use machine learning techniques with a large set of informative features. While some rhetorical relations need to be explicitly signaled by discourse cues (e.g., Concession) and some do not (e.g., Background), there is a large middle ground of relations that may be signaled or not. For these “middle ground” relations, can we exploit features present in the signaled cases to automatically identify relations when they are not explicitly signaled? The idea is to use unambiguous discourse cues (e.g., although for Contrast, for example for Elaboration) to automatically label a large corpus with rhetorical relations that could then be used to train a supervised model (we categorize this approach as unsupervised because it does not rely on human-annotated data). A series of previous studies have explored this idea. Marcu and Echihabi (2002) first attempted to identify four broad classes of relations: Contrast, Elaboration, Condition, and Cause–Explanation–Evidence. They used a naive Bayes classifier based on word pairs (w1, w2), where w1 occurs in the left segment, and w2 occurs in the right segment. Sporleder and Lascarides (2005) included other features (e.g., words and their stems, Part-of-Speech [POS] tags, positions, segment lengths) in a boosting-based classifier (i.e., BoosTexter [Schapire and Singer 2000]) to further improve relation classification accuracy. However, these studies evaluated classification performance on the instances where rhetorical relations were originally signaled (i.e., the discourse cues were artificially removed), and did not verify how well this approach performs on the instances that are not originally signaled. Subsequent studies (Blair-Goldensohn, McKeown, and Rambow 2007; Sporleder 2007; Sporleder and Lascarides 2008) confirm that classifiers trained on instances stripped of their original discourse cues do not generalize well to implicit cases because they are linguistically quite different. Note that this approach to identifying discourse relations in the absence of manually labeled data does not fully solve the parsing problem (i.e., building DTs); rather, it only attempts to identify a small subset of coarser relations between two (flat) text segments (i.e., a tagging problem). Arguably, to perform a complete rhetorical analysis, one needs to use supervised machine learning techniques based on human-annotated data. Marcu (1999) applies supervised machine learning techniques to build a discourse segmenter and a shift–reduce discourse parser. Both the segmenter and the parser rely on C4.5 decision tree classifiers (Poole and Mackworth 2010) to learn the rules automatically from the data. The discourse segmenter mainly uses discourse cues, shallow-syntactic (i.e., POS tags), and contextual features (i.e., neighboring words and their POS tags). To learn the shift–reduce actions, the discourse parser encodes five types of features: lexical (e.g., discourse cues), shallow-syntactic, textual similarity, operational (previous n shift–reduce operations), and rhetorical sub-structural features. Despite the fact that this work has pioneered many of today’s machine learning approaches to discourse parsing, it has all the limitations mentioned in Section 1. The work of Marcu (1999) is considerably improved by Soricut and Marcu (2003).
They present the publicly available SPADE system (http://www.isi.edu/licensed-sw/spade/), which comes with probabilistic models for discourse segmentation and sentence-level discourse parsing. Their segmentation and parsing models are based on lexico-syntactic patterns (or features) extracted from the lexicalized syntactic tree of a sentence. The discourse parser uses an optimal parsing algorithm to find the most probable DT structure for a sentence. SPADE was trained and tested on the RST–DT corpus. This work, by showing empirically the connection between syntax and discourse structure at the sentence level, has greatly influenced all major contributions in this area ever since. However, it is limited in several ways. First, SPADE does not produce a full-text (i.e., document-level) parse. Second, it applies a generative parsing model based on only lexico-syntactic features, whereas discriminative models are generally considered to be more accurate, and can incorporate arbitrary features more effectively (Murphy 2012). Third, the parsing model makes an independence assumption between the label and the structure of a DT constituent, and it ignores the sequential and the hierarchical dependencies between the DT constituents. Subsequent research addresses the question of how much syntax one really needs in rhetorical analysis. Sporleder and Lapata (2005) focus on the discourse chunking problem, comprising two subtasks: discourse segmentation and (flat) nuclearity assignment. They formulate discourse chunking in two alternative ways. First, one-step classification, where the discourse chunker, a multi-class classifier, assigns to each token one of four labels: (1) B–NUC (beginning of a nucleus), (2) I–NUC (inside a nucleus), (3) B–SAT (beginning of a satellite), and (4) I–SAT (inside a satellite). Therefore, this approach performs discourse segmentation and nuclearity assignment simultaneously. Second, two-step classification, where in the first step, the discourse segmenter (a binary classifier) labels each token as either B (beginning of an EDU) or I (inside an EDU). Then, in the second step, a nuclearity labeler (another binary classifier) assigns a nuclearity status to each segment. The two-step approach avoids illegal chunk sequences like a B–NUC followed by an I–SAT or a B–SAT followed by an I–NUC, and in this approach, it is easier to incorporate sentence-level properties like the constraint that a sentence must contain at least one nucleus. They examine whether shallow-syntactic features (e.g., POS and phrase tags) would be sufficient for these purposes. The evaluation on the RST–DT shows that the two-step approach outperforms the one-step approach, and its performance is comparable to that of SPADE, which requires relatively expensive full syntactic parses. In follow-up work, Fisher and Roark (2007) demonstrate over 4% absolute performance gain in discourse segmentation, by combining the features extracted from the syntactic tree with the ones derived via POS tagging and shallow syntactic parsing (i.e., chunking). Using quite a large number of features in a binary log-linear model, they achieve state-of-the-art performance in discourse segmentation on the RST–DT test set. In a different approach, Regneri, Egg, and Koller (2008) propose to use Underspecified Discourse Representation (UDR) as an intermediate representation for discourse parsing.
Underspecified representations offer a single compact representation to express possible ambiguities in a linguistic structure, and have been primarily used to deal with scope ambiguity in semantic structures (Reyle 1993; Egg, Koller, and Niehren 2001; Althaus et al. 2003; Koller, Regneri, and Thater 2008). Assuming that a UDR of a DT is already given in the form of a dominance graph (Althaus et al. 2003), Regneri, Egg, and Koller (2008) convert it into a more expressive and complete UDR representation called a regular tree grammar (Koller, Regneri, and Thater 2008), for which efficient algorithms (Knight and Graehl 2005) already exist to derive the best configuration (i.e., the best discourse tree). Hernault et al. (2010) present the publicly available HILDA system (http://nlp.prendingerlab.net/hilda/), which comes with a discourse segmenter and a parser based on Support Vector Machines (SVMs). The discourse segmenter is a binary SVM classifier that uses the same lexico-syntactic features used in SPADE, but with more context (i.e., the lexico-syntactic features for the previous two words and the following two words). The discourse parser iteratively uses two SVM classifiers in a pipeline to build a DT. In each iteration, a binary classifier first decides which of the adjacent units to merge, then a multi-class classifier connects the selected units with an appropriate relation label. Using this simple method, they report promising results in document-level discourse parsing on the RST–DT. For a different genre, instructional texts, Subba and Di-Eugenio (2009) propose a shift–reduce discourse parser that relies on a classifier for relation labeling. Their classifier uses Inductive Logic Programming (ILP) to learn first-order logic rules from a large set of features, including the linguistically rich compositional semantics coming from a semantic parser. They demonstrate that including compositional semantics with other features improves the performance of the classifier, and thus also improves the performance of the parser. Both HILDA and the ILP-based approach of Subba and Di-Eugenio (2009) are limited in several ways. First, they do not differentiate between intra- and multi-sentential parsing, and both scenarios use a single uniform parsing model. Second, they take a greedy (i.e., sub-optimal) approach to construct a DT. Third, they disregard sequential dependencies between DT constituents. Furthermore, HILDA considers the structure and the labels of a DT separately. Our discourse parser CODRA, as described in the next section, addresses all these limitations. More recent work than ours also attempts to address some of the above-mentioned limitations of the existing discourse parsers. Similar to us, Feng and Hirst (2014) generate a document-level DT in two stages, where a multi-sentential parsing follows an intra-sentential one. At each stage, they iteratively use two separate linear-chain CRFs (Lafferty, McCallum, and Pereira 2001) in a cascade: one for predicting the presence of rhetorical relations between adjacent discourse units in a sequence, and the other to predict the relation label between the two most probable adjacent units to be merged as selected by the previous CRF. While they use CRFs to take into account the sequential dependencies between DT constituents, they use them greedily during parsing to achieve efficiency.
They also propose a greedy post-editing step based on an additional feature (i.e., the depth of a discourse unit) to modify the initial DT, which gives them a significant gain in performance. In a different approach, Li et al. (2014) propose a discourse-level dependency structure to capture direct relationships between EDUs rather than deep hierarchical relationships. They first create a discourse dependency treebank by converting the deep annotations in RST–DT to shallow head-dependent annotations between EDUs. To find the dependency parse (i.e., an optimal spanning tree) for a given text, they apply the Eisner (1996) and Maximum Spanning Tree (McDonald et al. 2005) dependency parsing algorithms with the Margin Infused Relaxed Algorithm online learning framework (McDonald, Crammer, and Pereira 2005). With the successful application of deep learning to numerous NLP problems, including syntactic parsing (Socher et al. 2013a), sentiment analysis (Socher et al. 2013b), and various tagging tasks (Collobert et al. 2011), a couple of recent studies in discourse parsing also use deep neural networks (DNNs) and related feature representation methods. Inspired by the work of Socher et al. (2013a, 2013b), Li, Li, and Hovy (2014) propose a recursive DNN for discourse parsing. However, as in Socher et al. (2013a, 2013b), word vectors (i.e., embeddings) are not learned explicitly for the task, but are instead taken from Collobert et al. (2011). Given the vectors of the words in an EDU, their model first composes them hierarchically based on a syntactic parse tree to get the vector representation for the EDU. Adjacent discourse units are then merged hierarchically to get the vector representations for the higher order discourse units. In every step, the merging is done using one binary (structure) and one multi-class (relation) classifier, each having a three-layer neural network architecture. The cost function for training the model is given by these two cascaded classifiers applied at different levels of the DT. Similar to our method, they use the classifier probabilities in a CKY-like parsing algorithm to find the global optimal DT. Finally, Ji and Eisenstein (2014) present a feature representation learning method in a shift–reduce discourse parser (Marcu 1999). Unlike DNNs, which learn non-linear feature transformations in a maximum likelihood model, they learn linear transformations of features in a max margin classification model.","3 overview of our rhetorical analysis framework :CODRA takes as input a raw text and produces a discourse tree that describes the text in terms of coherence relations that hold between adjacent discourse units (i.e., clauses, sentences) in the text. An example DT generated by an online demo of CODRA is shown in Appendix A (the demo is available at http://109.228.0.153/Discourse Parser Demo/, and the source code of CODRA is available from http://alt.qcri.org/tools/). The color of a node represents its nuclearity status: blue denoting nucleus and yellow denoting satellite. The demo also allows some useful user interactions—for example, collapsing or expanding a node, highlighting an EDU, and so on (the input text in the demo in Appendix A is taken from www.bbc.co.uk/news/world-asia-26106490). CODRA follows a pipeline architecture, shown in Figure 2. Given a raw text, the first task in the rhetorical analysis pipeline is to break the text into a sequence of EDUs (i.e., discourse segmentation). Because it is taken for granted that sentence boundaries are also EDU boundaries (i.e., EDUs do not span across multiple sentences), the discourse segmentation task boils down to finding EDU boundaries inside sentences. CODRA uses a maximum entropy model for discourse segmentation (see Section 5).
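In this framing, segmentation reduces to a per-token binary decision. A minimal sketch follows (the features shown are illustrative placeholders, not CODRA's actual feature set, and scikit-learn's logistic regression stands in for the maximum entropy model):

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

def boundary_features(tokens, i):
    # Placeholder features; Section 5 describes the real, richer feature set.
    return {"word": tokens[i],
            "prev": tokens[i - 1] if i > 0 else "<s>",
            "next": tokens[i + 1] if i + 1 < len(tokens) else "</s>"}

# X: one feature dict per intra-sentence token position; y: 1 iff an EDU
# boundary follows that position. Sentence-final positions are excluded,
# since sentence boundaries are EDU boundaries by assumption.
vectorizer = DictVectorizer()
model = LogisticRegression(max_iter=1000)  # multinomial logistic = maxent
# model.fit(vectorizer.fit_transform(X), y)  # given labeled training data
```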
Once the EDUs are identified, the discourse parsing problem is determining which discourse units (EDUs or larger units) to relate (i.e., the structure), and what relations (i.e., the labels) to use in the process of building the DT. Specifically, discourse parsing requires: (1) a parsing model to explore the search space of possible structures and labels for their nodes, and (2) a parsing algorithm for selecting the best parse tree(s) among the candidates. A probabilistic parsing model like ours assigns a probability to every possible DT. The parsing algorithm then picks the most probable DTs. The existing discourse parsers (Marcu 1999; Soricut and Marcu 2003; Subba and Di-Eugenio 2009; Hernault et al. 2010) described in Section 2 use parsing models that disregard the structural interdependencies between the DT constituents. However, we hypothesize that, like syntactic parsing, discourse parsing is also a structured prediction problem, which involves predicting multiple variables (i.e., the structure and the relation labels) that depend on each other (Smith 2011). Recently, Feng and Hirst (2012) also found these interdependencies to be critical for parsing performance. To capture the structural dependencies between the DT constituents, CODRA uses undirected conditional graphical models (i.e., CRFs) as its parsing models. To find the most probable DT, unlike most previous studies (Marcu 1999; Subba and Di-Eugenio 2009; Hernault et al. 2010), which adopt a greedy solution, CODRA applies an optimal CKY parsing algorithm to the inferred posterior probabilities (obtained from the CRFs) of all possible DT constituents. Furthermore, the parsing algorithm allows CODRA to generate a list of k-best parse hypotheses for a given text. Note that the way CRFs and CKY are used in CODRA is quite different from the way they are used in syntactic parsing. For example, in the CRF-based constituency parsing proposed by Finkel, Kleeman, and Manning (2008), the conditional probability distribution of a parse tree given a sentence decomposes across factors defined over productions, and the standard inside–outside algorithm is used for inference on possible trees. In contrast, CODRA first uses the standard forward–backward algorithm in a “fat” chain-structured CRF (by “fat” we refer to a CRF with multiple, interconnected chains of output variables; to be discussed in Section 4.1.1) to compute the posterior probabilities of all possible DT constituents for a given text (i.e., EDUs); then it uses a CKY parsing algorithm to combine those probabilities and find the most probable DT. Another crucial question related to parsing models is whether to use a single model or two different models for parsing at the sentence level (i.e., intra-sentential) and at the document level (i.e., multi-sentential). A simple and straightforward strategy would be to use a single unified parsing model for both intra- and multi-sentential parsing without distinguishing the two cases, as was previously done (Marcu 1999; Subba and Di-Eugenio 2009; Hernault et al. 2010). That approach has the advantages of making the parsing process easier, and the model gets more data to learn from.
However, for a solution like ours, which tries to capture the interdependencies between constituents, this would be problematic with respect to scalability and inappropriate because of two modeling issues. More specifically, for scalability, note that the number of valid trees grows exponentially with the number of EDUs in a document (for n + 1 EDUs, the number of valid discourse tree structures, i.e., not counting possible variations in the nuclearity and relation labels, is the Catalan number Cn). Therefore, an exhaustive search over all the valid DTs is often infeasible, even for relatively small documents. For modeling, a single unified approach is inappropriate for two reasons. On the one hand, it appears that discourse relations are distributed differently intra- versus multi-sententially. For example, Figure 3 shows a comparison between the two distributions of the eight most frequent relations in the RST–DT training set. Notice that Same–Unit is more frequent than Joint in the intra-sentential case, whereas Joint is more frequent than Same–Unit in the multi-sentential case. Similarly, the relative distributions of Background, Contrast, Cause, and Explanation are different in the two parsing scenarios. On the other hand, different kinds of features are applicable and informative for intra- versus multi-sentential parsing. For example, syntactic features like dominance sets (Soricut and Marcu 2003) are extremely useful for parsing at the sentence level, but are not even applicable in the multi-sentential case. Likewise, lexical chain features (Sporleder and Lapata 2004), which are useful for multi-sentential parsing, are not applicable at the sentence level. Based on the above observations, CODRA comprises two separate modules: an intra-sentential parser and a multi-sentential parser, as shown in Figure 2. First, the intra-sentential parser produces one or more discourse sub-trees for each sentence. Then, the multi-sentential parser generates a full DT for the document from these sub-trees. Both of our parsers have the same two components: a parsing model and a parsing algorithm. Whereas the two parsing models are rather different, the same parsing algorithm is shared by the two modules. Staging multi-sentential parsing on top of intra-sentential parsing in this way allows CODRA to explicitly exploit the strong correlation observed between the text structure and the DT structure, as explained in detail in Section 4.3.","4 the discourse parser :Before describing the parsing models and the parsing algorithm of CODRA in detail, we introduce some terminology that we will use throughout this article. A DT can be formally represented as a set of constituents of the form R[i, m, j], where i ≤ m < j. This refers to a rhetorical relation R between the discourse unit containing EDUs i through m and the discourse unit containing EDUs m + 1 through j. For example, the DT for the second sentence in Figure 1 can be represented as {Elaboration–NS[4,4,5], Same–Unit–NN[4,5,6]}. Notice that in this representation, a relation R also specifies the nuclearity status of the discourse units involved, which can be one of Nucleus–Satellite (NS), Satellite–Nucleus (SN), or Nucleus–Nucleus (NN). Attaching nuclearity status to the relations allows us to perform the two subtasks of discourse parsing, relation identification and nuclearity assignment, simultaneously.
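This representation translates directly into code (a minimal sketch; the class and field names are ours, not CODRA's):

```python
from typing import NamedTuple

# A DT constituent R[i, m, j]: relation R (with its nuclearity status)
# holding between the unit spanning EDUs i..m and the unit spanning
# EDUs m+1..j.
class Constituent(NamedTuple):
    relation: str    # e.g., "Elaboration"
    nuclearity: str  # "NS", "SN", or "NN"
    i: int
    m: int
    j: int           # well-formed iff i <= m < j

# The DT for the second sentence in Figure 1:
dt = {Constituent("Elaboration", "NS", 4, 4, 5),
      Constituent("Same-Unit", "NN", 4, 5, 6)}
```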
A common assumption made for generating DTs effectively is that they are binary trees (Soricut and Marcu 2003; Hernault et al. 2010). That is, multi-nuclear relations (e.g., Joint, Same–Unit) involving more than two discourse units are mapped to a hierarchical right-branching binary tree. For example, a flat Joint(e1, e2, e3, e4) (Figure 4a) is mapped to a right-branching binary tree Joint(e1, Joint(e2, Joint(e3, e4))) (Figure 4b).

4.1 Parsing Models

As mentioned before, the job of the intra- and multi-sentential parsing models of CODRA is to assign a probability to each of the constituents of all possible DTs at the sentence level and at the document level, respectively. Formally, given the model parameters Θ for a particular parsing scenario (i.e., sentence-level or document-level), for each possible constituent R[i, m, j] in a candidate DT in that scenario, the parsing model estimates P(R[i, m, j]|Θ), which specifies a joint distribution over the label R and the structure [i, m, j] of the constituent. For example, when applied to the sentences in Figure 1 separately, the intra-sentential parsing model (with learned parameters Θs) estimates P(R[1, 1, 2]|Θs), P(R[2, 2, 3]|Θs), P(R[1, 2, 3]|Θs), and P(R[1, 1, 3]|Θs) for the first sentence, and P(R[4, 4, 5]|Θs), P(R[5, 5, 6]|Θs), P(R[4, 5, 6]|Θs), and P(R[4, 4, 6]|Θs) for the second sentence, for all R ranging over the set of relations.

4.1.1 Intra-Sentential Parsing Model

Figure 5 shows the parsing model of CODRA for intra-sentential parsing. The observed nodes Uj (at the bottom) in a sequence represent the discourse units (EDUs or larger units). The first layer of hidden nodes contains the structure nodes, where Sj ∈ {0, 1} denotes whether the two adjacent discourse units Uj−1 and Uj should be connected or not. The second layer of hidden nodes contains the relation nodes, with Rj ∈ {1, . . . , M} denoting the relation between the two adjacent units Uj−1 and Uj, where M is the total number of relations in the relation set. The connections between adjacent nodes in a hidden layer encode sequential dependencies between the respective hidden nodes, and can enforce constraints such as the fact that a node must have a unique mother; namely, an Sj = 1 must not follow an Sj−1 = 1. The connections between the two hidden layers model the structure and the relation of DT constituents jointly. Notice that the probabilistic graphical model shown in Figure 5 is a chain-structured undirected graphical model (also known as a Markov Random Field or MRF [Murphy 2012]) with two hidden layers, namely, the structure chain and the relation chain. It becomes a Dynamic Conditional Random Field (DCRF) (Sutton, McCallum, and Rohanimanesh 2007) when we directly model the hidden (output) variables by conditioning the clique potentials (i.e., factors) on the observed (input) variables:

P(R_{2:t}, S_{2:t} \mid x, \Theta_s) = \frac{1}{Z(x, \Theta_s)} \prod_{i=2}^{t-1} \phi(R_i, R_{i+1} \mid x, \Theta_{s,r}) \, \psi(S_i, S_{i+1} \mid x, \Theta_{s,s}) \, \omega(R_i, S_i \mid x, \Theta_{s,c})    (1)

where {φ} and {ψ} are the factors over the edges of the relation and structure chains, respectively, and {ω} are the factors over the edges connecting the relation and structure nodes (i.e., the between-chain edges). Here, x represents the input features extracted from the observed variables, Θs = [Θs,r, Θs,s, Θs,c] are the model parameters, and Z(x, Θs) is the partition function.
We use the standard log-linear representation of the factors:

\phi(R_i, R_{i+1} \mid x, \Theta_{s,r}) = \exp(\Theta_{s,r}^{T} f(R_i, R_{i+1}, x))    (2)

\psi(S_i, S_{i+1} \mid x, \Theta_{s,s}) = \exp(\Theta_{s,s}^{T} f(S_i, S_{i+1}, x))    (3)

\omega(R_i, S_i \mid x, \Theta_{s,c}) = \exp(\Theta_{s,c}^{T} f(R_i, S_i, x))    (4)

where f(Y, Z, x) is a feature vector derived from the input features x and the local labels Y and Z, and Θs,y is the corresponding weight vector; that is, Θs,r and Θs,s are the weight vectors for the factors over the relation edges and the structure edges, respectively, and Θs,c is the weight vector for the factors over the between-chain edges. A DCRF is a generalization of linear-chain CRFs (Lafferty, McCallum, and Pereira 2001) that represents complex interactions between output variables (i.e., labels), such as when performing multiple labeling tasks on the same sequence. Recently, there has been an explosion of interest in CRFs for solving structured output classification problems, with many successful applications in NLP, including syntactic parsing (Finkel, Kleeman, and Manning 2008), syntactic chunking (Sha and Pereira 2003), and discourse chunking (Ghosh et al. 2011) in accordance with the Penn Discourse Treebank (Prasad et al. 2008). DCRFs, being a discriminative approach to sequence modeling, have several advantages over their generative counterparts such as Hidden Markov Models (HMMs) and MRFs, which first model the joint distribution p(y, x|Θ) and then infer the conditional distribution p(y|x, Θ). It has been argued that discriminative models are generally more accurate than generative ones because they do not "waste resources" modeling complex distributions over what is already observed (i.e., p(x)); instead, they focus directly on modeling what we care about, namely, the distribution of the labels given the data (Murphy 2012). Other key advantages include the ability to incorporate arbitrary, overlapping local and global features, and the ability to relax strong independence assumptions. Furthermore, CRFs surmount the label bias problem (Lafferty, McCallum, and Pereira 2001) of the Maximum Entropy Markov Model (McCallum, Freitag, and Pereira 2000), which is considered to be a discriminative version of the HMM.

4.1.2 Training and Applying the Intra-Sentential Parsing Model

In order to obtain the probability of the constituents of all candidate DTs for a sentence, CODRA applies the intra-sentential parsing model (with learned parameters Θs) recursively to the unit sequences at different levels of the DT, and computes the posterior marginals over the relation–structure pairs using the standard forward–backward algorithm.
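To make Equations (1)-(4) concrete, here is a minimal, self-contained sketch (ours, not CODRA's implementation) that scores joint labelings of a short chain with toy weight tables standing in for the feature-based factors, and recovers a posterior marginal such as P(R2, S2=1 | x) by brute-force enumeration; on a chain this small, enumeration yields exactly what forward–backward computes:

    import itertools
    import math
    import random

    M = 3   # toy number of relations (ids 0..M-1)
    T = 4   # number of discourse units, giving hidden nodes at positions 2..T

    # Toy weight tables standing in for the log-linear factors of
    # Equations (2)-(4); a real model derives them from f(Y, Z, x).
    random.seed(0)
    w_r = {(a, b): random.gauss(0, 1) for a in range(M) for b in range(M)}
    w_s = {(a, b): random.gauss(0, 1) for a in (0, 1) for b in (0, 1)}
    w_c = {(r, s): random.gauss(0, 1) for r in range(M) for s in (0, 1)}

    def score(R, S):
        """Unnormalized probability of one joint labeling, as in Equation (1)."""
        logp = 0.0
        for i in range(len(R) - 1):        # chain positions 2 .. t-1
            logp += w_r[(R[i], R[i + 1])]  # phi: relation-chain edge
            logp += w_s[(S[i], S[i + 1])]  # psi: structure-chain edge
            logp += w_c[(R[i], S[i])]      # omega: between-chain edge
        return math.exp(logp)

    # Partition function and posterior marginals by exhaustive enumeration.
    labelings = [(R, S)
                 for R in itertools.product(range(M), repeat=T - 1)
                 for S in itertools.product((0, 1), repeat=T - 1)]
    Z = sum(score(R, S) for R, S in labelings)
    for r in range(M):
        p = sum(score(R, S) for R, S in labelings
                if R[0] == r and S[0] == 1) / Z
        print(f"P(R2={r}, S2=1 | x) = {p:.3f}")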
To illustrate the process, let us assume that the sentence contains four EDUs, e1, · · · , e4 (see Figure 6). At the first (i.e., bottom) level of the DT, when all the discourse units are EDUs, there is only one unit sequence, (e1, e2, e3, e4), to which CODRA applies the DCRF model; Figure 6a at the top left shows the corresponding DCRF model. For this sequence, it computes the posterior marginals P(R2, S2=1|e1, e2, e3, e4, Θs), P(R3, S3=1|e1, e2, e3, e4, Θs), and P(R4, S4=1|e1, e2, e3, e4, Θs) to obtain the probabilities of the DT constituents R[1, 1, 2], R[2, 2, 3], and R[3, 3, 4], respectively. At the second level, there are three unit sequences: (e1:2, e3, e4), (e1, e2:3, e4), and (e1, e2, e3:4); Figure 6b shows their corresponding DCRF models. Notice that each of these sequences has a discourse unit that connects two EDUs, and the probability of this connection has already been computed at the previous level. From these three sequences, CODRA computes the posterior marginals P(R3, S3=1|e1:2, e3, e4, Θs), P(R2:3, S2:3=1|e1, e2:3, e4, Θs), P(R4, S4=1|e1, e2:3, e4, Θs), and P(R3:4, S3:4=1|e1, e2, e3:4, Θs), which correspond to the probabilities of the constituents R[1, 2, 3], R[1, 1, 3], R[2, 3, 4], and R[2, 2, 4], respectively. Similarly, it obtains the probabilities of the constituents R[1, 1, 4], R[1, 2, 4], and R[1, 3, 4] by computing the respective posterior marginals from the three sequences at the third (i.e., top) level of the candidate DTs (see Figure 6c). Algorithm 1 describes how CODRA generates the unit sequences at the different levels of the candidate DTs for a given number of EDUs in a sentence. Specifically, to compute the probability of a DT constituent R[i, k, j], CODRA generates sequences of the form (e1, · · · , ei−1, ei:k, ek+1:j, ej+1, · · · , en) for 1 ≤ i ≤ k < j ≤ n. However, in doing so, it may generate some duplicate sequences. In particular, the sequence (e1, · · · , ei−1, ei:i, ei+1:j, ej+1, · · · , en), obtained with k = i when j < n, has already been considered for computing the probability of the constituent R[i + 1, j, j + 1]. Therefore, it is a duplicate that the algorithm excludes from the list of sequences. The algorithm has a complexity of O(n³), where n is the number of EDUs in the sentence.

Algorithm 1: Generating unit sequences for a sentence with n EDUs.
Input: Sequence of EDUs: (e1, e2, · · · , en)
Output: List of sequences: L
for i = 1 → n − 1 do    // all possible starting positions for the subsequence
    for j = i + 1 → n do    // all possible ending positions for the subsequence
        if j == n then    // sequences at the top and bottom levels
            for k = i → j − 1 do    // all possible cut points within the subsequence
                L.append((e1, · · · , ei−1, ei:k, ek+1:j, ej+1, · · · , en))
            end
        else    // sequences at intermediate levels
            for k = i + 1 → j − 1 do    // cut points excluding duplicate sequences
                L.append((e1, · · · , ei−1, ei:k, ek+1:j, ej+1, · · · , en))
            end
        end
    end
end

Once CODRA acquires the probabilities of all possible intra-sentential DT constituents, the discourse sub-trees for the sentences are built by applying an optimal parsing algorithm (Section 4.2) using one of the methods described in Section 4.3. Algorithm 1 is also used to generate the sequences for training the model (i.e., for learning Θs). For example, Figure 7 demonstrates how we generate the training instances (right) from a gold DT with four EDUs (left). To find the relevant labels for the sequences generated by the algorithm, we consult the gold DT and check whether two discourse units are connected by a relation r (in which case the corresponding labels are S = 1, R = r) or not (in which case the labels are S = 0, R = NR). We train the model by maximizing the conditional likelihood of the labels in each of these training examples (see Equation (1)).
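As a cross-check of Algorithm 1, the following short Python rendering (ours, for illustration) encodes each discourse unit as a (start, end) EDU-id pair, so that, e.g., e2:3 becomes (2, 3); the k = i cut point is kept only when j = n, exactly as in the pseudocode:

    def unit_sequences(n):
        """Generate the unit sequences for all candidate constituents R[i, k, j]."""
        L = []
        for i in range(1, n):               # start of the subsequence
            for j in range(i + 1, n + 1):   # end of the subsequence
                # k = i is kept only at the top/bottom levels (j == n);
                # elsewhere it would duplicate a sequence generated for
                # another constituent.
                first_k = i if j == n else i + 1
                for k in range(first_k, j): # cut point between the two units
                    seq = ([(x, x) for x in range(1, i)] +
                           [(i, k), (k + 1, j)] +
                           [(x, x) for x in range(j + 1, n + 1)])
                    L.append(seq)
        return L

    for seq in unit_sequences(3):
        print(seq)
    # [(1, 1), (2, 3)]
    # [(1, 2), (3, 3)]
    # [(1, 1), (2, 2), (3, 3)]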
4.1.3 Multi-Sentential Parsing Model

Given the discourse units (sub-trees) for all the individual sentences in a document, a simple approach to building the DT of the document would be to apply a new DCRF model, similar to the one in Figure 5 (with different parameters), to all the possible sequences generated from these units by Algorithm 1, in order to infer the probability of all possible higher-order (multi-sentential) constituents. However, the number of possible sequences and their lengths increase with the number of sentences in a document. For example, assuming that each sentence has a well-formed DT, for a document with n sentences, Algorithm 1 generates O(n³) sequences, where the sequence at the bottom level has n units, each of the sequences at the second level has n − 1 units, and so on. Because the DCRF model in Figure 5 has a "fat" chain structure, one could use the forward–backward algorithm for exact inference in this model (Murphy 2012). Forward–backward on a sequence containing T units costs O(TM²) time, where M is the number of relations in our relation set. This makes the chain-structured DCRF model impractical for multi-sentential parsing of long documents, since learning requires running inference on every training sequence, with an overall time complexity of O(TM²n³) = O(M²n⁴) per document (Sutton and McCallum 2012). To address this problem, we have developed a simplified parsing model for multi-sentential parsing, shown in Figure 8. The two observed nodes Ut−1 and Ut are two adjacent (multi-sentential) discourse units. The (hidden) structure node S ∈ {0, 1} denotes whether the two discourse units should be linked or not, and the other hidden node R ∈ {1, . . . , M} represents the relation between the two units. Notice that, like the model in Figure 5, this is also an undirected graphical model, and it becomes a CRF model if we directly model the labels by conditioning the clique potential φ on the input features x derived from the observed variables:

P(R_t, S_t \mid x, \Theta_d) = \frac{1}{Z(x, \Theta_d)} \, \phi(R_t, S_t \mid x, \Theta_d)    (5)

\phi(R_t, S_t \mid x, \Theta_d) = \exp(\Theta_d^{T} f(R_t, S_t, x))    (6)

where f(Rt, St, x) is a feature vector derived from the input features x and the labels Rt and St, and Θd is the corresponding weight vector. Although this model is similar in spirit to the parsing model in Figure 5, it breaks the chain structure, which makes inference much faster (i.e., a complexity of O(M²)). Breaking the chain structure also allows CODRA to balance the data for training (an equal number of instances with S = 1 and S = 0), which dramatically reduces the learning time of the model. CODRA applies this parsing model to all possible adjacent units at all levels in the multi-sentential case, and computes the posterior marginals of the relation–structure pairs P(Rt, St=1|Ut−1, Ut, Θd) using the forward–backward algorithm to obtain the probability of all possible DT constituents. Given the sentence-level discourse units, Algorithm 2, which is a simplified variation of Algorithm 1, extracts all possible adjacent discourse units for multi-sentential parsing. Like Algorithm 1, Algorithm 2 has a complexity of O(n³), where n is the number of sentence-level discourse units.

Algorithm 2: Generating all possible adjacent discourse units at all levels of a document-level discourse tree.
Input: Sequence of units: (U1, U2, · · · , Un), where Ux[0] := start EDU ID of unit x, and Ux[1] := end EDU ID of unit x
Output: List of adjacent unit pairs: L
for i = 1 → n − 1 do    // all possible starting positions for the subsequence
    for j = i + 1 → n do    // all possible ending positions for the subsequence
        for k = i → j − 1 do    // all possible cut points within the subsequence
            Left = Ui[0] : Uk[1]
            Right = Uk+1[0] : Uj[1]
            L.append((Left, Right))
        end
    end
end

Both our intra- and multi-sentential parsing models are designed using MALLET's graphical model toolkit GRMM (McCallum 2002). In order to avoid overfitting, we regularize the CRF models with an ℓ2 regularizer and learn the model parameters using the limited-memory BFGS (L-BFGS) fitting algorithm.
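Algorithm 2 admits an equally compact rendering; the sketch below (ours) takes the sentence-level units as (start EDU, end EDU) pairs and enumerates every candidate (Left, Right) pair of adjacent multi-sentential units:

    def adjacent_units(units):
        """Enumerate all adjacent discourse-unit pairs at all tree levels."""
        n = len(units)
        pairs = []
        for i in range(n - 1):              # start of the subsequence
            for j in range(i + 1, n):       # end of the subsequence
                for k in range(i, j):       # cut point between Left and Right
                    left = (units[i][0], units[k][1])
                    right = (units[k + 1][0], units[j][1])
                    pairs.append((left, right))
        return pairs

    # Three sentence-level units spanning EDUs 1-2, 3-3, and 4-6:
    for left, right in adjacent_units([(1, 2), (3, 3), (4, 6)]):
        print(left, right)
    # (1, 2) (3, 3)
    # (1, 2) (3, 6)
    # (1, 3) (4, 6)
    # (3, 3) (4, 6)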
4.1.4 Features Used in the Parsing Models

Crucial to parsing performance is the set of features used in the parsing models, as summarized in Table 1. We categorize the features into seven groups and specify which groups are used in which parsing model; notice that some of the features are used in both models. Most of the features have been explored in previous studies (e.g., Soricut and Marcu 2003; Sporleder and Lapata 2005; Hernault et al. 2010). However, we improve some of them, as explained subsequently. The features are extracted from two adjacent discourse units Ut−1 and Ut.

Table 1 (excerpt): Features used in the parsing models.

8 N-gram features (N ∈ {1, 2, 3}) — Intra- & Multi-Sentential:
    Beginning (or end) lexical N-grams in unit 1.
    Beginning (or end) lexical N-grams in unit 2.
    Beginning (or end) POS N-grams in unit 1.
    Beginning (or end) POS N-grams in unit 2.
5 Dominance set features — Intra-Sentential:
    Syntactic labels of the head node and the attachment node.
    Lexical heads of the head node and the attachment node.
    Dominance relationship between the two units.
9 Lexical chain features — Multi-Sentential:
    Number of chains spanning unit 1 and unit 2.
    Number of chains starting in unit 1 and ending in unit 2.
    Number of chains starting (or ending) in unit 1 (or in unit 2).
    Number of chains skipping both unit 1 and unit 2.
    Number of chains skipping unit 1 (or unit 2).
2 Contextual features — Intra- & Multi-Sentential:
    Previous and next feature vectors.
2 Sub-structural features — Intra- & Multi-Sentential:
    Root nodes of the left and right rhetorical sub-trees.

Organizational features encode useful information about text organization, as shown by duVerle and Prendinger (2009). We measure the length of the discourse units as the number of EDUs and tokens in them. However, in order to better adjust to length variations, rather than computing their absolute numbers in a unit, we choose to measure their relative numbers with respect to their totals in the two units. For example, if the two discourse units under consideration contain three EDUs in total, a unit containing two of the EDUs will have a relative EDU number of 0.67. We also measure the distances of the units, in terms of the number of EDUs, from the beginning and end of the sentence (or text, in the multi-sentential case). Text structural features capture the correlation between text structure and rhetorical structure by counting the number of sentence and paragraph boundaries in the discourse units. Discourse cues (e.g., because, but), when present, signal rhetorical relations between two text segments, and have been used as a primary source of information in earlier studies (Knott and Dale 1994; Marcu 2000a). However, recent studies (Hernault et al. 2010; Biran and Rambow 2011) suggest that an empirically acquired lexical N-gram dictionary is more effective than a fixed list of cue phrases, since this approach is domain independent and capable of capturing non-lexical cues such as punctuation. In order to build a lexical N-gram dictionary empirically from the training corpus, we extract the first and last N tokens (N ∈ {1, 2, 3}) of each discourse unit and rank them according to their mutual information with the two labels, Structure (S) and Relation (R). More specifically, given an N-gram x, we compute its conditional entropy H with respect to S and R as follows (the higher the conditional entropy, the lower the mutual information, and vice versa):

H(S, R \mid x) = - \sum_{s \in S} \sum_{r \in R} \frac{c(x, s, r)}{c(x)} \log \frac{c(x, s, r)}{c(x)}    (7)

where c(x) is the empirical count of N-gram x, and c(x, s, r) is the joint empirical count of N-gram x with the labels s and r. This is in contrast to HILDA (Hernault et al. 2010), which ranks the N-grams by their frequencies in the training corpus. However, Blitzer (2008) found mutual information to be more effective than frequency as a method for feature selection. Intuitively, the most informative discourse cues are not only the most frequent ones, but also the ones that are indicative of the labels in the training data. In addition to the lexical N-grams, we also encode the POS tags of the first and last N tokens (N ∈ {1, 2, 3}) of a discourse unit as shallow-syntactic features in our models.
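As an illustration of the Equation (7) ranking, the following sketch (ours; the counts are invented) scores two candidate cue unigrams by negated conditional entropy, so that label-informative cues outrank merely frequent ones:

    import math
    from collections import Counter

    def neg_conditional_entropy(joint_counts):
        """joint_counts: Counter mapping (s, r) -> c(x, s, r) for one N-gram x."""
        c_x = sum(joint_counts.values())
        h = -sum((c / c_x) * math.log(c / c_x)
                 for c in joint_counts.values() if c)
        return -h   # higher is better (lower entropy, more label-informative)

    # Toy counts: "because" mostly signals one relation; "the" is label-agnostic.
    counts = {
        "because": Counter({(1, "Explanation"): 40, (0, "NR"): 5}),
        "the":     Counter({(1, "Elaboration"): 20, (1, "Joint"): 18,
                            (0, "NR"): 22, (1, "Contrast"): 19}),
    }
    ranked = sorted(counts, key=lambda x: neg_conditional_entropy(counts[x]),
                    reverse=True)
    print(ranked)   # ['because', 'the']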
Lexico-syntactic dominance set features, extracted from the Discourse Segmented Lexicalized Syntactic Tree (DS-LST) of a sentence, have been shown to be extremely effective for intra-sentential discourse parsing in SPADE (Soricut and Marcu 2003). Figure 9a shows the DS-LST (i.e., the lexicalized syntactic tree with EDUs identified) for a sentence with three EDUs from the RST–DT corpus, and Figure 9b shows the corresponding discourse tree. In a DS-LST, each EDU except the one containing the root node must have a head node NH that is attached to an attachment node NA residing in a separate EDU. A dominance set D (shown at the bottom of Figure 9a) contains these attachment points (shown in boxes) of the EDUs in a DS-LST. In addition to the syntactic and lexical information of the head and attachment nodes, each element in the dominance set also includes a dominance relationship between the EDUs involved; the EDU with the attachment node dominates (represented by ">") the EDU with the head node. Soricut and Marcu (2003) hypothesize that the dominance set (i.e., lexical heads, syntactic labels, and dominance relationships) carries the most informative clues for intra-sentential parsing. For instance, the dominance relationship between the EDUs in our example sentence is 3 > 1 > 2, which favors the DT structure [1, 1, 2] over [2, 2, 3]. In order to extract dominance set features for two adjacent discourse units Ut−1 and Ut, containing EDUs ei:j and ej+1:k, respectively, we first compute the dominance set from the DS-LST of the sentence. We then extract the element from the set that holds across the EDUs j and j + 1. In our example, for the two units containing EDUs e1 and e2, respectively, the relevant dominance set element is (1, efforts/NP) > (2, to/S). We encode the syntactic labels and lexical heads of NH and NA, and the dominance relationship, as features in our intra-sentential parsing model.
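A possible encoding of these dominance set features is sketched below; the tuple layout for dominance set elements and the feature names are our own illustrative assumptions, not SPADE's or CODRA's actual data structures:

    def dominance_features(dom_set, j):
        """Features for two adjacent units split between EDUs j and j+1.

        dom_set: list of ((edu, word, label), (edu, word, label)) pairs,
        with the dominating EDU's attachment node first.
        """
        for (a_edu, a_word, a_label), (h_edu, h_word, h_label) in dom_set:
            if {a_edu, h_edu} == {j, j + 1}:  # element holding across the split
                return {
                    "attach_label": a_label, "attach_head": a_word,
                    "head_label": h_label, "head_head": h_word,
                    "dominance": f"{a_edu}>{h_edu}",
                }
        return {}

    # The element (1, efforts/NP) > (2, to/S) from the example in the text:
    dom_set = [((1, "efforts", "NP"), (2, "to", "S"))]
    print(dominance_features(dom_set, 1))
    # {'attach_label': 'NP', 'attach_head': 'efforts',
    #  'head_label': 'S', 'head_head': 'to', 'dominance': '1>2'}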
Lexical chains (Morris and Hirst 1991) are sequences of semantically related words that can indicate topical boundaries in a text (Galley et al. 2003; Joty, Carenini, and Ng 2013). Features extracted from lexical chains have also been shown to be useful for finding paragraph-level discourse structure (Sporleder and Lapata 2004). For example, consider the text with four paragraphs (P1 to P4) in Figure 10a, and assume that there is a lexical chain that spans the whole text, skipping paragraphs P2 and P3, while a second chain only spans P2 and P3. This situation makes it more likely that P2 and P3 should be linked in the DT before either of them is linked with another paragraph. Therefore, the DT structure in Figure 10b should be more likely than the one in Figure 10c. One challenge in computing lexical chains is that words can have multiple senses, and semantic relationships depend on the sense rather than the word itself. Several methods have been proposed to compute lexical chains (Barzilay and Elhadad 1997; Hirst and St. Onge 1997; Silber and McCoy 2002; Galley and McKeown 2003). We follow the state-of-the-art approach proposed by Galley and McKeown (2003), which extracts lexical chains after performing Word Sense Disambiguation (WSD). In the preprocessing step, we extract the nouns from the document and lemmatize them using WordNet's built-in morphy function (Fellbaum 1998). Then, by looking them up in WordNet, we expand each noun to all of its senses and build a Lexical Semantic Relatedness Graph (LSRG) (Galley and McKeown 2003; Chali and Joty 2007). In an LSRG, the nodes represent noun tokens with their candidate senses, and the weighted edges between senses of two different tokens represent one of three semantic relations: repetition, synonymy, and hypernymy. For example, Figure 11a shows a partial LSRG, where the token bank has two possible senses, namely, money bank and river bank. Under the money bank sense, bank is connected with institution and company by hypernymy relations (edges marked with H), and with another bank by a repetition relation (edges marked with R). Similarly, under the river bank sense, it is connected with riverside by a hypernymy relation and with bank by a repetition relation. Nouns that are not found in WordNet are treated as proper nouns having only one sense, and are connected by repetition relations only. We use this LSRG first to perform WSD, and then to construct the lexical chains. For WSD, the weights of all edges leaving a node under each of its senses are summed up, and the sense with the highest score is taken as the right sense for the word token. For example, if repetition and synonymy are weighted equally, and hypernymy is given half as much weight as either of them, the scores of bank's two senses are 1 + 0.5 + 0.5 = 2 for the sense money bank and 1 + 0.5 = 1.5 for the sense river bank. Therefore, the selected sense for bank in this context is money bank. In case of a tie, we select the sense that is most frequent (i.e., the first sense in WordNet). Note that this approach to WSD is different from that of Sporleder and Lapata (2004), which takes a greedy approach. Finally, we prune the graph, keeping only the links that connect words under their selected senses. At the end of the process, we are left with the edges that form the actual lexical chains. For example, Figure 11b shows the result of pruning the graph in Figure 11a; the lexical chains extracted from the pruned graph are shown in the box at the bottom. Following Sporleder and Lapata (2004), for each chain element we keep track of the location (i.e., sentence ID) in the text where that element was found, and we exclude chains containing only one element.
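The sense-scoring step described above is simple enough to sketch directly; the graph encoding below is a hypothetical simplification of an LSRG, with edge types R (repetition), S (synonymy), and H (hypernymy), reproducing the bank example:

    # Edge weights as in the example: repetition and synonymy weigh 1.0,
    # hypernymy half as much.
    WEIGHT = {"R": 1.0, "S": 1.0, "H": 0.5}

    def disambiguate(edges_by_sense, sense_order):
        """Pick the sense whose leaving edges have the largest total weight.

        edges_by_sense: {sense: [edge types of edges under that sense]}
        sense_order: senses ranked by WordNet frequency, used for ties.
        """
        def score(sense):
            return sum(WEIGHT[rel] for rel in edges_by_sense.get(sense, []))
        # max() keeps the earliest element on ties, i.e., the most
        # frequent sense in sense_order.
        return max(sense_order, key=score)

    # "bank" from Figure 11a: under money_bank it links to institution and
    # company (H, H) and to another "bank" (R); under river_bank it links
    # to riverside (H) and to "bank" (R).
    edges = {"money_bank": ["H", "H", "R"], "river_bank": ["H", "R"]}
    print(disambiguate(edges, ["money_bank", "river_bank"]))
    # money_bank (score 2.0 beats 1.5)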
Given two discourse units, we count the number of chains that: hit the two units, exclusively hit the two units, skip both units, skip one of the units, start in a unit, and end in a unit. We also consider more contextual information by including the above features, computed for the neighboring adjacent discourse unit pairs, in the current feature vector. For example, the contextual features for units Ut−1 and Ut include the feature vector computed from Ut−2 and Ut−1 and the feature vector computed from Ut and Ut+1. Finally, we incorporate hierarchical dependencies between the constituents in a DT through rhetorical sub-structural features: for two adjacent units Ut−1 and Ut, we extract the roots of the two rhetorical sub-trees. For example, the root of the rhetorical sub-tree spanning EDUs e1:2 in Figure 9b is Elaboration–NS. However, extracting these features assumes the presence of labels for the sub-trees, which is not the case when we apply the parser to a new text (sentence or document) in order to build its DT in a non-greedy fashion. One way to deal with this is to loop twice through the parsing process using two different parsing models: one trained with the complete feature set, and the other trained without the sub-structural features. We first build an initial, sub-optimal DT using the parsing model trained without the sub-structural features. This intermediate DT then provides labels for the sub-structures, so that we can build a final, more accurate DT using the complete parsing model. This idea of two-pass discourse parsing, where the second pass performs post-editing using additional features, has recently been adopted by Feng and Hirst (2014) in their greedy parser. One could even continue post-editing multiple times until the DT converges. However, this can be very time consuming, as each post-editing pass requires: (1) applying the parsing model to every possible unit sequence and computing the posterior marginals for all possible DT constituents, and (2) using the parsing algorithm to find the most probable DT. Recall from our earlier discussion in Section 4.1.3 that for n discourse units and M rhetorical relations, the first step requires O(M²n⁴) and O(M²n³) time for intra- and multi-sentential parsing, respectively; we will see in the next section that the second step requires O(Mn³). In spite of the computational cost, the gain we attained in the subsequent passes was not significant on our development set. Therefore, we restrict our parser to one-pass post-editing. Note that in parsing models where the score (i.e., likelihood) of a parse tree decomposes across local factors (e.g., the CRF-based syntactic parser of Finkel, Kleeman, and Manning [2008]), it is possible to define a semiring using the factors and the local scores (e.g., given by the inside algorithm). The CKY algorithm could then give the optimal parse tree in a single post-editing pass (Smith 2011). However, because our intra-sentential parsing model is designed to capture sequential dependencies between DT constituents, the score of a DT does not directly decompose across factors over discourse productions. Therefore, designing such a semiring was not possible in our case. In addition to these features, we also experimented with other features, including WordNet-based lexical semantics, subjectivity, and TF.IDF-based cosine similarity. However, because such features did not improve parsing performance on our development set, they were excluded from our final feature set.

4.2 The Parsing Algorithm

The intra- and multi-sentential parsing models of CODRA assign a probability to every possible DT constituent in their respective parsing scenarios. The job of the parsing algorithm is then to find the k most probable DTs for a given text. We implement a probabilistic CKY-like bottom–up parsing algorithm that uses dynamic programming to compute the most likely parses (Jurafsky and Martin 2008). For simplicity, we first describe the specific case of generating the single most probable DT; then we describe how to generalize this algorithm to produce the k most probable DTs for a given text.
Formally, the search problem for finding the most probable DT can be written as

DT^{*} = \arg\max_{DT} P(DT \mid \Theta)    (8)

where Θ specifies the parameters of the parsing model (intra- or multi-sentential). Given n discourse units, our parsing algorithm uses the upper-triangular portion of the n × n dynamic programming table D, where cell D[i, j] (for i < j) stores:

D[i, j] = P(r^{*}[U_i(0), U_{m^{*}}(1), U_j(1)])    (9)

where Ux(0) and Ux(1) are the start and end EDU IDs of discourse unit Ux, and (m∗, r∗) = argmax i≤m
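To illustrate the dynamic program over D, here is a compact sketch (ours, under assumptions): prob[(i, m, j)][r] stands for the inferred posterior of constituent r[i, m, j], and we assume the standard CKY combination in which a span's best score multiplies the probability of its top constituent with the best scores of the two sub-spans; this mirrors common probabilistic CKY formulations and is our assumption, not necessarily CODRA's exact recurrence. The relation labels and probabilities are toy values.

    from functools import lru_cache

    def best_tree(n, prob):
        """Return (score, tree) of the most probable DT over units 1..n."""
        @lru_cache(maxsize=None)
        def D(i, j):
            if i == j:                      # a single unit: nothing to attach
                return 1.0, i
            best = (0.0, None)
            for m in range(i, j):           # cut point, i <= m < j
                for r, p in prob.get((i, m, j), {}).items():
                    sl, tl = D(i, m)        # best left sub-span
                    sr, tr = D(m + 1, j)    # best right sub-span
                    score = p * sl * sr
                    if score > best[0]:
                        best = (score, (r, tl, tr))
            return best
        return D(1, n)

    # Toy posteriors over three discourse units:
    prob = {
        (1, 1, 2): {"Elaboration-NS": 0.7},
        (2, 2, 3): {"Joint-NN": 0.4},
        (1, 1, 3): {"Contrast-NN": 0.6},
        (1, 2, 3): {"Cause-NS": 0.3},
    }
    print(best_tree(3, prob))
    # (0.24, ('Contrast-NN', 1, ('Joint-NN', 2, 3)))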