text |
|---|
W09-0205 C08-1114 o We adopt a similar approach to the one used in Turney (2008) and consider each question as a separate binary classification problem with one positive training instance and 5 unknown pairs. |
W09-0419 C08-1115 o "They are part of an effort to better integrate a linguistic, rule-based system and the statistical correcting layer also illustrated in (Ueffing et al., 2008)." |
D09-1079 C08-1125 o "3.5 Domain adaptation in Machine Translation Within MT there has been a variety of approaches dealing with domain adaptation (for example (Wu et al., 2008; Koehn and Schroeder, 2007))." |
P09-1036 C08-1127 o "This, unfortunately, significantly jeopardizes performance (Koehn et al., 2003; Xiong et al., 2008) because by integrating syntactic constraint into decoding as a hard constraint, it simply prohibits any other useful non-syntactic translations which violate constituent boundaries." |
N09-1061 C08-1136 o "Optimal algorithms exist for minimising the size of rules in a Synchronous Context-Free Grammar (SCFG) (Uno and Yagiura, 2000; Zhang et al., 2008)." |
P09-1088 C08-1136 o "The machine translation literature is littered with various attempts to learn a phrase-based string transducer directly from aligned sentence pairs, doing away with the separate word alignment step (Marcu and Wong, 2002; Cherry and Lin, 2007; Zhang et al., 2008b; Blunsom et al., 2008)." |
P09-1088 C08-1136 o "The sampler reasons over the infinite space of possible translation units without recourse to arbitrary restrictions (e.g., constraints drawn from a wordalignment (Cherry and Lin, 2007; Zhang et al., 2008b) or a grammar fixed a priori (Blunsom et al., 1f and e are the input and output sentences res... |
P09-1088 C08-1136 o "Following the broad shift in the field from finite state transducers to grammar transducers (Chiang, 2007), recent approaches to phrase-based alignment have used synchronous grammar formalisms permitting polynomial time inference (Wu, 1997; 783 Cherry and Lin, 2007; Zhang et al., 2008b; Blunsom et ... |
P09-1111 C08-1136 o "Other linear time algorithms for rank reduction are found in the literature (Zhang et al., 2008), but they are restricted to the case of synchronous context-free grammars, a strict subclass of the LCFRS with f = 2." |
D09-1108 C08-1138 o "In the SMT research community, the second step has been well studied and many methods have been proposed to speed up the decoding process, such as node-based or span-based beam search with different pruning strategies (Liu et al., 2006; Zhang et al., 2008a, 2008b) and cube pruning (Huang and Chiang... |
D09-1108 C08-1138 o "3.1 Exhaustive search by tree fragments This method generates all possible tree fragments rooted by each node in the source parse tree or forest, and then matches all the generated tree fragments against the source parts (left hand side) of translation rules to extract the useful rules (Zhang et al... |
D09-1108 C08-1138 p "1 Introduction Recently linguistically-motivated syntax-based translation method has achieved great success in statistical machine translation (SMT) (Galley et al., 2004; Liu et al., 2006, 2007; Zhang et al., 2007, 2008a; Mi et al., 2008; Mi and Huang 2008; Zhang et al., 2009)." |
P09-1020 C08-1138 o "4 Training This section discusses how to extract our translation rules given a triple nullnull,null null ,nullnull . As we know, the traditional tree-to-string rules can be easily extracted from nullnull,null null ,nullnull using the algorithm of Mi and Huang (2008) 2 . We would like 2 Mi and Hu... |
P09-1020 C08-1138 p "Among these advances, forest-based modeling (Mi et al., 2008; Mi and Huang, 2008) and tree sequence-based modeling (Liu et al., 2007; Zhang et al., 2008a) are two interesting modeling methods with promising results reported." |
P09-1020 C08-1138 o "Motivated by the fact that non-syntactic phrases make non-trivial contribution to phrase-based SMT, the tree sequencebased translation model is proposed (Liu et al., 2007; Zhang et al., 2008a) that uses tree sequence as the basic translation unit, rather than using single sub-tree as in the STSG." |
P09-1020 C08-1138 o (2008a) propose a tree sequence-based tree to tree translation model and Zhang et al. |
P09-1020 C08-1138 o "Therefore, structure divergence and parse errors are two of the major issues that may largely compromise the performance of syntax-based SMT (Zhang et al., 2008a; Mi et al., 2008)." |
P09-1020 C08-1138 o "A tree sequence to string rule 174 A tree-sequence to string translation rule in a forest is a triple <L, R, A>, where L is the tree sequence in source language, R is the string containing words and variables in target language, and A is the alignment between the leaf nodes of L and R. This defini... |
P09-1103 C08-1138 o "To address this issue, many syntax-based approaches (Yamada and Knight, 2001; Eisner, 2003; Gildea, 2003; Ding and Palmer, 2005; Quirk et al, 2005; Zhang et al, 2007, 2008a; Bod, 2007; Liu et al, 2006, 2007; Hearne and Way, 2003) tend to integrate more syntactic information to enhance the non-conti... |
P09-1103 C08-1138 o "Nevertheless, the generated rules are strictly required to be derived from the contiguous translational equivalences (Galley et al, 2006; Marcu et al, 2006; Zhang et al, 2007, 2008a, 2008b; Liu et al, 2006, 2007)." |
P09-1103 C08-1138 o "2 We illustrate the rule extraction with an example from the tree-to-tree translation model based on tree sequence alignment (Zhang et al, 2008a) without losing of generality to most syntactic tree based models." |
P09-1103 C08-1138 o "The proposed synchronous grammar is able to cover the previous proposed grammar based on tree (STSG, Eisner, 2003; Zhang et al, 2007) and tree sequence (STSSG, Zhang et al, 2008a) alignment." |
D09-1024 C08-1139 o "Word alignment is also a required first step in other algorithms such as for learning sub-sentential phrase pairs (Lavie et al., 2008) or the generation of parallel treebanks (Zhechev and Way, 2002)." |
E09-1044 C08-1144 o "Previously published approaches to reducing the rule set include: enforcing a minimum span of two words per non-terminal (Lopez, 2008), which would reduce our set to 115M rules; or a minimum count (mincount) threshold (Zollmann et al., 2008), which would reduce our set to 78M (mincount=2) or 57M (m... |
E09-1044 C08-1144 o "(Zollmann et al., 2008)." |
E09-1044 C08-1144 o "This is in direct contrast to recent reported results in which other filtering strategies lead to degraded performance (Shen et al., 2008; Zollmann et al., 2008)." |
N09-1049 C08-1144 o "Extensions to Hiero Several authors describe extensions to Hiero, to incorporate additional syntactic information (Zollmann and Venugopal, 2006; Zhang and Gildea, 2006; Shen et al., 2008; Marton and Resnik, 2008), or to combine it with discriminative latent models (Blunsom et al., 2008)." |
E09-1017 C08-1145 p "The fluency models hold promise for actual improvements in machine translation output quality (Zwarts and Dras, 2008)." |
A97-1055 C94-2113 o "(Dolan, 1994) and (Krovetz and Croft, 1992) claim that fine-grained semantic distinctions are unlikely to be of practical value for many applications." |
D07-1107 C94-2113 o "Much work has gone into methods for measuring synset similarity; early work in this direction includes (Dolan, 1994), which attempted to discover sense similarities between dictionary senses." |
J98-1001 C94-2113 o "Recognizing this, Dolan (1994) proposes a method for ""ambiguating"" dictionary senses by combining them to create grosser sense distinctions." |
J98-1003 C94-2113 o "Various approaches to word sense division have been proposed in the literature on WSD, including (1) sense numbers in every-day dictionaries (Lesk 1986; Cowie, Guthrie, and Guthrie 1992), (2) automatic or hand-crafted clusters of dictionary senses (Dolan 1994; Bruce and Wiebe 1995; Luk * Department... |
J98-1003 C94-2113 o "Furthermore, as pointed out in Dolan (1994), the sense division in an MRD is frequently too fine-grained for the purpose of WSD." |
J98-1003 C94-2113 o "82 Chen and Chang Topical Clustering Dolan (1994) maintains the position that intersense relations are mostly idiosyncratical, thereby making it difficult to characterize them in a general way so as to identify them." |
J98-1003 C94-2113 o "However, they do not elaborate on how the comparisons are done, or on how effective the program is. Dolan (1994) describes a heuristic approach to forming unlabeled clusters of closely related senses in an MRD." |
J98-1003 C94-2113 o "As noted in Dolan (1994), it is possible to run a sense-clustering algorithm on several MRDs to build an integrated lexical database with more complete coverage of word senses." |
J98-1003 C94-2113 o "These relations are then used for various tasks, ranging from the interpretation of a noun sequence (Vanderwende 1994) or a prepositional phrase (Ravin 1990), to resolving structural ambiguity (Jenson and Binot 1987), to merging dictionary senses for WSD (Dolan 1994)." |
P06-1014 C94-2113 o "5 Related Work Dolan (1994) describes a method for clustering word senses with the use of information provided in the electronic version of LDOCE (textual definitions, semantic relations, domain labels, etc.)." |
W00-0103 C94-2113 p "This approach took inspiration from the pioneering work by (Dolan 1994), but it is also fundamentally different, because instead of grouping similar senses together, the CoreLex approach groups together words according to all of their senses." |
W06-2503 C94-2113 o "There is also work on grouping senses of other inventories using information in the inventory (Dolan, 1994) along with information retrieval techniques (Chen and Chang, 1998)." |
W96-0305 C94-2113 o "Recently, various approaches (Dolan 1994; Luk 1995; Yarowsky 1992; Dagan et al. 1991 ;Dagan and Itai 1994) to word sense division have been used in WSD research." |
W96-0305 C94-2113 o Zero derivation Dolan (1994) pointed out that it is helpful to identify zero-derived noun/verb pairs for such tasks as normalization of the semantics of expressions that are only superficially different. |
W96-0305 C94-2113 o Dolan (1994) described a heuristic approach to forming unlabeled clusters of closely related senses in an MRD. |
W96-0305 C94-2113 o Dolan (1994) observed that sense division in an MRD is frequently too fine for the purpose of WSD. |
W99-0505 C94-2113 o "Towards a Meaning-Full Comparison of Lexieal Resources Kenneth C Lltkowska CL Research 9208 Gue Road Damascus, MD 20872 ken@clres corn http//www tires tom Abstract The mapping from WordNet to Hector senses m Senseval provides a ""gold standard"" against wluch to judge our ability to compare lexlcal... |
C08-1009 C98-2122 o "On the British National Corpus (BNC), using Lins (1998) similarity method, we retrieve the following neighbors for the first and second sense, respectively: 1." |
C08-1009 C98-2122 o "As described in Section 3 we retrieved neighbors using Lins (1998) similarity measure on a RASP parsed (Briscoe and Carroll, 2002) version of the BNC." |
C08-1009 C98-2122 o The best accuracies are observed when the labelsarecreatedfromdistributionallysimilarwords using Lins (1998) dependency-based similarity measure (Depend). |
C08-1009 C98-2122 p "Lins (1998) information-theoretic similarity measure is commonly used in lexicon acquisition tasks and has demonstrated good performance in unsupervised WSD (McCarthy et al., 2004)." |
C08-1009 C98-2122 n A potential caveat with Lins (1998) distributional similarity measure is its reliance on syntactic information for obtaining dependency relations. |
C08-1029 C98-2122 p "Point-wise mutual information (Lin, 1998) and Relative Feature Focus (Geffet and Dagan, 2004) are well-known examples." |
C08-1029 C98-2122 o "Feature comparison measures: to convert two feature sets into a scalar value, several measures have been proposed, such as cosine, Lins measure (Lin, 1998), Kullback-Leibler (KL) divergence and its variants." |
C08-1029 C98-2122 o "Lins measure Lin (1998) proposed a symmetrical measure: Par Lin (s t)= summationtext fF s F t (w(s,f)+w(t,f)) summationtext fF s w(s,f)+ summationtext fF t w(t,f) , where F s and F t denote sets of features with positive weights for words s and t, respectively." |
C08-1051 C98-2122 o " Three K-means algorithms using different distributional similarity or dissimilarity measures: cosine, -skew divergence (Lee, 1999) 4 , and Lins similarity (Lin, 1998)." |
C08-1051 C98-2122 o "Others proposed distributional similarity measures between words (Hindle, 1990; Lin, 1998; Lee, 1999; Weeds et al., 2004)." |
C08-1051 C98-2122 o "405 PRF 1 proposed .383 .437 .408 multinomial mixture .360 .374 .367 Newman (2004) .318 .353 .334 cosine .603 .114 .192 -skew divergence (Lee, 1999) .730 .155 .255 Lins similarity (Lin, 1998) .691 .096 .169 CBC (Lin and Pantel, 2002) .981 .060 .114 Table 3: Precision, recall, and F-measure." |
C08-1051 C98-2122 o "Applications of word clustering include language modeling (Brown et al., 1992), text classification (Baker and McCallum, 1998), thesaurus construction (Lin, 1998) and so on." |
C08-1054 C98-2122 o (2005) applied the distributional similarity proposed by Lin (1998) to coordination disambiguation. |
C08-1058 C98-2122 o "One is automatic thesaurus acquisition, that is, to identify synonyms or topically related words from corpora based on various measures of similarity (e.g. Riloff and Shepherd, 1997; Lin, 1998; Caraballo, 1999; Thelen and Riloff, 2002; You and Chen, 2006)." |
C08-1086 C98-2122 o "By no means an exhaustive list, the most commonly cited ranking and scoring algorithms are HITS (Kleinberg 1998) and PageRank (Page et al. 1998), which rank hyperlinked documents using the concepts of hubs and authorities." |
C08-1086 C98-2122 o "Within the NLP community, n-best list ranking has been looked at carefully in parsing, extractive summarization (Barzilay et al. 1999; Hovy and Lin 1998), and machine translation (Zhang et al. 2006), to name a few." |
C08-1086 C98-2122 o "Following Lin (1998), we use syntactic dependencies between words to model their semantic properties." |
C08-1100 C98-2122 o "For each word in the LDV, we consulted three existing thesauri: Rogets Thesaurus (Roget, 1995), Collins COBUILD Thesaurus (Collins, 2002), and WordNet (Fellbaum, 1998)." |
C08-1100 C98-2122 o "Various methods (Hindle, 1990; Lin, 1998) of automatically acquiring synonyms have been proposed." |
C08-1100 C98-2122 p "4.1 Features We used a dependency structure as the context for words because it is the most widely used and one of the best performing contextual information in the past studies (Ruge, 1997; Lin, 1998)." |
C08-1107 C98-2122 o "Given a wordq, its set of featuresFq and feature weightswq(f) for f Fq, a common symmetric similarity measure is Lin similarity (Lin, 1998a): Lin(u,v) = summationtext fFuFv[wu(f)+wv(f)]summationtext fFu wu(f)+ summationtext fFv wv(f) where the weight of each feature is the pointwise mutual informat... |
C08-1107 C98-2122 o "Texts are represented by dependency parse trees (using the Minipar parser (Lin, 1998b)) and templates by parse sub-trees." |
C08-1117 C98-2122 p "Among these measures, the most important are Wu & Palmers (Wu and Palmer, 1994), Resniks (Resnik, 1995) and Lins (Lin, 1998)." |
C08-1117 C98-2122 o "Where Pantel and Lin use Lins (1998) measure, we use Wu and Palmers (1994) measure." |
C08-1117 C98-2122 p One of the most important is Lins (1998). |
D08-1007 C98-2122 o "4 Experiments and Results 4.1 Set up We parsed the 3 GB AQUAINT corpus (Voorhees, 2002) using Minipar (Lin, 1998b), and collected verb-object and verb-subject frequencies, building an empirical MI model from this data." |
D08-1007 C98-2122 o "Lin (1998a)s similar word list for eat misses these but includes sleep (ranked 6) and sit (ranked 14), because these have similar subjects to eat." |
D08-1007 C98-2122 o "Discriminative, context-specific training seems to yield a better set of similar predicates, e.g. the highest-ranked contexts for DSPcooc on the verb join,3 lead 1.42, rejoin 1.39, form 1.34, belong to 1.31, found 1.31, quit 1.29, guide 1.19, induct 1.19, launch (subj) 1.18, work at 1.14 give a bet... |
D08-1007 C98-2122 o "We also test an MI model inspired by Erk (2007): MISIM(n,v) = log summationdisplay nSIMS(n) Sim(n,n) Pr(v,n ) Pr(v)Pr(n) We gather similar words using Lin (1998a), mining similar verbs from a comparable-sized parsed corpus, and collecting similar nouns from a broader 10 GB corpus of English text.4 ... |
D08-1007 C98-2122 p Erk (2007) compared a number of techniques for creating similar-word sets and found that both the Jaccard coefficient and Lin's (1998a) information-theoretic metric work best. |
D08-1048 C98-2122 o "They have been successfully applied in several tasks, such as information retrieval (Salton et al., 1975) and harvesting thesauri (Lin, 1998)." |
D08-1048 C98-2122 o "Two LUs close in the space are likely to be in a paradigmatic relation, i.e. to be close in a is-a hierarchy (Budanitsky and Hirst, 2006; Lin, 1998; Pado, 2007)." |
D08-1084 C98-2122 p "This similarity score is computed as a max over a number of component scoring functions, some based on external lexical resources, including: various string similarity functions, of which most are applied to word lemmas measures of synonymy, hypernymy, antonymy, and semantic relatedness, includin... |
D08-1103 C98-2122 o "Distributional measures of distance, such as those proposed by Lin (1998), quantify how similar the two sets of contexts of a target word pair are." |
D08-1103 C98-2122 o "For each word pair from the antonym set, we calculated the distributional distance between each of their senses using Mohammad and Hirsts (2006) method of concept distance along with the modified form of Lins (1998) distributional measure (equation 2)." |
D08-1103 C98-2122 o Again we used Mohammad and Hirsts (2006) method along with Lins (1998) distributional measure to determine the distributional closeness of two thesaurus concepts. |
D09-1028 C98-2122 o Curran (2002) and Lin (1998) use syntactic features in the vector definition. |
D09-1084 C98-2122 o "Accurate measurement of semantic similarity between lexical units such as words or phrases is important for numerous tasks in natural language processing such as word sense disambiguation (Resnik, 1995), synonym extraction (Lin, 1998a), and automatic thesauri generation (Curran, 2002)." |
D09-1084 C98-2122 o Method / correlation: Edge-counting 0.664; Jiang & Conrath (1998) 0.848; Lin (1998a) 0.822; Resnik (1995) 0.745; Li et al. |
D09-1084 C98-2122 o "(Strube and Ponzetto, 2006) 0.19-0.48; Leacock & Chodorow (1998) 0.36; Lin (1998b) 0.36; Resnik (1995) 0.37; Proposed 0.504. 7 Conclusion We proposed a relational model to measure the semantic similarity between two words." |
D09-1084 C98-2122 o Lin (1998b) defined the similarity between two concepts as the information that is in common to both concepts and the information contained in each individual concept. |
D09-1089 C98-2122 o "Pereira et al.(1993), Curran and Moens (2002) and Lin (1998) use syntactic features in the vector definition." |
E09-1077 C98-2122 o Wiebe (2000) uses Lin (1998a) style distributionally similar adjectives in a cluster-and-label process to generate sentiment lexicon of adjectives. |
E09-1077 C98-2122 o "3http://www.openoffice.org Another corpora based method due to Turney and Littman (2003) tries to measure the semantic orientation O(t) for a term t by O(t) = summationdisplay tiS+ PMI(t,ti) summationdisplay tjS PMI(t,tj) where S+ and S are minimal sets of polar terms that contain prototypical posi... |
I08-1021 C98-2122 o "Our approach to STC uses a thesaurus based on corpus statistics (Lin, 1998) for real-valued similarity calculation." |
I08-1060 C98-2122 o "Some researchers (Hindle, 1990; Grefenstette, 1994; Lin, 1998) classify terms by similarities based on their distributional syntactic patterns." |
I08-1072 C98-2122 o "A wide range of contextual information, such as surrounding words (Lowe and McDonald, 2000; Curran and Moens, 2002a), dependency or case structure (Hindle, 1990; Ruge, 1997; Lin, 1998), and dependency path (Lin and Pantel, 2001; Pado and Lapata, 2007), has been utilized for similarity calculation, ... |
I08-1072 C98-2122 o "3.1 Context Extraction We adopted dependency structure as the context of words since it is the most widely used and wellperforming contextual information in the past studies (Ruge, 1997; Lin, 1998)." |
I08-1072 C98-2122 o "For each word in LDV, three existing thesauri are consulted: Rogets Thesaurus (Roget, 1995), Collins COBUILD Thesaurus (Collins, 2002), and WordNet (Fellbaum, 1998)." |
I08-1073 C98-2122 o "We propose using distributional similarity (using (Lin, 1998)) as an approximation of semantic distancebetweenthewordsinthetwoglosses,rather than requiring an exact match." |
I08-1073 C98-2122 o We adopt the similarity score proposed by Lin (1998) as the distributional similarity score and use 50 nearest neighbours in line with McCarthy et al. For the random baseline we select one word sense at random for each word token and average the precision over 100 trials. |
I08-1073 C98-2122 o "2 Related Work ThisworkbuildsuponthatofMcCarthyetal.(2004) which acquires predominant senses for target words from a large sample of text using distributional similarity (Lin, 1998) to provide evidence for predominance." |
I08-1073 C98-2122 o "In this approach we extend the denition overlap by considering the distributional similarity (Lin, 1998) rather than identify of the words in the two denitions." |
I08-1073 C98-2122 o McCarthy et al. use a distributional similarity thesaurus acquired from corpus data using the method of Lin (1998) for nding the predominant sense of a word where the senses are dened by WordNet. |
I08-1073 C98-2122 o "Let w be a target word and Nw = fn1,n2nkg be the ordered set of the top scoring k neighbours of w from the thesaurus with associated distributional similarity scores fdss(w,n1),dss(w,n2),dss(w,nk)g using (Lin, 1998)." |