| { |
| "paper_id": "C94-1032", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T12:49:26.112978Z" |
| }, |
| "title": "A Stochastic Japanese Morphological Analyzer Using a Forward-DP Backward-A* N-Best Search Algorithm", |
| "authors": [ |
| { |
| "first": "Masa", |
| "middle": [ |
| "Aki" |
| ], |
| "last": "Nagata", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "NTT Network Information Systems l~", |
| "location": { |
| "addrLine": "~bor~ttorics 1-2356 Take, Yokosuka-Shi", |
| "postCode": "238-03", |
| "settlement": "Kanagaw~t", |
| "country": "Japan" |
| } |
| }, |
| "email": "nagata@nttnly.ntt.jl" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "We present a novel method for segmenting the input sentence into words and assigning parts of speech to the words. It consists of a statistical language model and an efficient two-pa~qs N-best search algorithm. The algorithm does not require delimiters between words. Thus it is suitable for written Japanese. q'he proposed Japanese morphological analyzer achieved 95. l% recall and 94.6% precision for open text when it was trained and tested on the ATI'\u00a2 Corpus.", |
| "pdf_parse": { |
| "paper_id": "C94-1032", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "We present a novel method for segmenting the input sentence into words and assigning parts of speech to the words. It consists of a statistical language model and an efficient two-pa~qs N-best search algorithm. The algorithm does not require delimiters between words. Thus it is suitable for written Japanese. q'he proposed Japanese morphological analyzer achieved 95. l% recall and 94.6% precision for open text when it was trained and tested on the ATI'\u00a2 Corpus.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "In recent years, we have seen a fair number of l)al)ers reporting accuracies of more than 95% for English part of speech tagging with statistical language modeling techniques [2-4, 10, 11] . On the other hand, there are few works on stochastic Japanese morphological analysis [9, 12, 14] , and they don't seem to have convinced the Japanese NLP community that the statistically-based teclmiques are superior to conventional rule-based techniques such as [16, 17] .", |
| "cite_spans": [ |
| { |
| "start": 175, |
| "end": 188, |
| "text": "[2-4, 10, 11]", |
| "ref_id": null |
| }, |
| { |
| "start": 276, |
| "end": 279, |
| "text": "[9,", |
| "ref_id": null |
| }, |
| { |
| "start": 280, |
| "end": 283, |
| "text": "12,", |
| "ref_id": null |
| }, |
| { |
| "start": 284, |
| "end": 287, |
| "text": "14]", |
| "ref_id": null |
| }, |
| { |
| "start": 454, |
| "end": 458, |
| "text": "[16,", |
| "ref_id": null |
| }, |
| { |
| "start": 459, |
| "end": 462, |
| "text": "17]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We show in this paper that we can buihl a stochastic Japanese morphological analyzer that offers approximately 95% accuracy on a statistical language modeling technique and an efficient two-pass N-best search strategy.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We used tile simple tri-POS model as the tagging model for Japanese. Probability estimates were obtained after training on the ATI{ l)ialogue Database [5] , whose word segmentation and part of speech tag assignment were laboriously performed by hand.", |
| "cite_spans": [ |
| { |
| "start": 151, |
| "end": 154, |
| "text": "[5]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We propose a novel search strategy for getting the N best morphological analysis hypotheses for the input sentence. It consists of the forward dynamic programming search and the backward A* search. The proposed algorithm amalgamates and extends three well-known algorithms in different fields: the Minimum Connective-Cost Method [7] for Japanese morphological analysis, Extended Viterbi Algorithm for character recognition [6] , and \"l~'ee-Trellis N-Best Search for speech recognition [15] .", |
| "cite_spans": [ |
| { |
| "start": 329, |
| "end": 332, |
| "text": "[7]", |
| "ref_id": null |
| }, |
| { |
| "start": 423, |
| "end": 426, |
| "text": "[6]", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 485, |
| "end": 489, |
| "text": "[15]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We also propose a novel method for handling unknown words uniformly within the statistical approach. Using character trigrams ms tim word model, it generates the N-best word hypotheses that match the leftmost substrings starting at a given position in the input senten ce.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Moreover, we propose a novel method for evaluating the performance of morphological analyzers. Unlike English, Japanese does not place spaces between words. It is difficult, even for native Japanese, to place word boundaries consistently because of the agglutinative nature of the language. Thus, there were no standard performance metrics. We applied bracketing accuracy measures [1] , which is originally used for English parsers, to Japanese morphological analyzers. We also slightly extended the original definition to describe the accuracy of tile N-best candidates.", |
| "cite_spans": [ |
| { |
| "start": 381, |
| "end": 384, |
| "text": "[1]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In the following sections, we first describe the techniques used in the proposed morphological analyzer, we then explain the cwduation metrics and show the system's performance by experimental results.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Tri-POS Model and Relative Frequency Training", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2.1", |
| "sec_num": null |
| }, |
| { |
| "text": "We used the tri-POS (or triclass, tri-tag, tri-Ggram etc.) model ~Ls tile tagging model for Japanese. Consider a word segmentation of the input sentence W = wl w2... w,~ and a sequence of tags T = tits.., t,, of the same length. The morphological analysis tmsk cau I)e formally defined ,~ finding a set of word segmentat.ion and parts of speech ~ssignment that maximize the joint probability of word sequence arm tag sequence P(W, 7'). In the tri-POS model, the joint probability is approximated by the product of parts of speech trigram probabilities P(tilti_2,ti_l) and word output probabilities for given part of speech P(wl]ll):", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2.1", |
| "sec_num": null |
| }, |
| { |
| "text": "r(w,:r) = ]~ r (tdt,_o.,t,_x)r'(w, lt4 (", |
| "cite_spans": [ |
| { |
| "start": 15, |
| "end": 38, |
| "text": "(tdt,_o.,t,_x)r'(w, lt4", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2.1", |
| "sec_num": null |
| }, |
| { |
| "text": "i=1", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2.1", |
| "sec_num": null |
| }, |
| { |
| "text": "In practice, we consider sentence boundaries ~s special symbols as follows.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2.1", |
| "sec_num": null |
| }, |
| { |
| "text": "P(W,T) = P(ql#)P(wtltt)P(t,.l#, tl)P(w21t~)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2.1", |
| "sec_num": null |
| }, |
| { |
| "text": "~I P(tilti_2,ti_l)P(willi)P (#[t,,_l,\u00a2,,) ", |
| "cite_spans": [ |
| { |
| "start": 28, |
| "end": 41, |
| "text": "(#[t,,_l,\u00a2,,)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2.1", |
| "sec_num": null |
| }, |
| { |
| "text": "i=3", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2.1", |
| "sec_num": null |
| }, |
| { |
| "text": "where \"#\" indicates the sentence boundary marker. If we have some tagged text available, we can estimate the probabilities P(tdti_2,ti_l ) and P(wiltl) by computing the relative frequencies of the corresponding events on this data:.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2.1", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "N(ti_2, ti-1, tl) P(tifti-2'ti-t) = f(qltl-2'ti-x) - iV(ti_..,,ti_,)", |
| "eq_num": "(3)" |
| } |
| ], |
| "section": "2.1", |
| "sec_num": null |
| }, |
| { |
| "text": "P(wilti) = f(wilt,) --N(w,t) ('1) N(t)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2.1", |
| "sec_num": null |
| }, |
| { |
| "text": "where f indicates the relative frequency, N(w, t) is t!,e number of times a given word w appears with tag l, aid N(li_2,ti-l,tl) is the number of times that sequer~ce (tilt,_2,q_, ) A-q2f(tdti_l) + qtf(ti) + qoV (5) where f indicates the relative.frequency and V is a uniform probability that each tag will occur. The nonnegative weights qi satisfy q3 + q~ + q1 + q0 = 1, and they are adjusted so as to make the observed data most probable after the adjustment by using EM algorithm ~-.", |
| "cite_spans": [ |
| { |
| "start": 113, |
| "end": 128, |
| "text": "N(li_2,ti-l,tl)", |
| "ref_id": null |
| }, |
| { |
| "start": 167, |
| "end": 181, |
| "text": "(tilt,_2,q_, )", |
| "ref_id": null |
| }, |
| { |
| "start": 212, |
| "end": 215, |
| "text": "(5)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "2.1", |
| "sec_num": null |
| }, |
| { |
| "text": "In order to understand the search algorithm described in the next section, we will introduce the second order HMM and extended Viterbi algorithm [6] . Considering the combined state sequence U = ltl'tt2.., ttn, where ul = tl and ui = ti-tli, we have P(uilui_l) = P(tilti_=,ti_l)", |
| "cite_spans": [ |
| { |
| "start": 145, |
| "end": 148, |
| "text": "[6]", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tracing", |
| "sec_num": null |
| }, |
| { |
| "text": "Substituting Equation (6) into Equation (l), we have lWe used 120 part of speedl tags. In the ATR Corpus, 26 parts of speech, 13 conjugation types, and 7 conjugation forms are defined. Out of 26, 5 parts of speech have conjugation. Since we used a list of part of speech, conjugation type, and conjugation form as a tag, there are 119 tags in the A'IT\u00a2 Corpus. We added the sentence boundary marker to them.", |
| "cite_spans": [ |
| { |
| "start": 22, |
| "end": 25, |
| "text": "(6)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tracing", |
| "sec_num": null |
| }, |
| { |
| "text": "aTo handle open text, word output probahility P(loilti) must also be smoothed. Tiffs problem is discussed in a later section *Ls the unknown word problem. 8Equation 8suggests that, to find the maxlmmn P(I,Vi,7]) for each ul, we need only to: remember the maximum P(W\u00a2_I, 7]_1), extend each of these probabilities to every ul by computing Eqnation (8), and select the m;uxinmm P(~/Vi,Ti) for each ui. 'thus, by increasing i by 1 to n, selecting the u. ttlat maximize P(W.,7]~), and backtracing the sequence leading to the nmxinmm probability, we can get the optimal tag seqnence.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tracing", |
| "sec_num": null |
| }, |
| { |
| "text": "The search algorithm consists of a forward dynamic programming search and a backward A* search. First, a linear time dynamic programming is used for recording the scores of all partial paths in a table 3. A backward A* algorithm based tree search is then used to extend the partial paths. Partial paths extended in the backward tree search are ranked by their corresponding fill path scores, which are cmnputed by adding the scores of backward partial path scores to the cot responding best possihle scores of the remaining paths which are prerecorded in the forward search. Since the score of the incomplete portion of a path is exactly known, the backward search is admissible. That is, the top-N candidates are exact. Table 1 shows the two data structures used in our algorithm. The st,'t, cture parse stores tile information of a word and the best partial path up to the word. Parse.start and parse.end are the indices of tile start and end positions of the word in the sentence. Parse.pos is tile part of speech tag, which is a list of part of speech, conjugation type, and conjugation form in our system for Japanese. Parse.nth-order-~tate is a list of the last two parts of speech tags including that of the current word. This slot corresponds to the combined state in the second order IIMM. Parse.prob-so-far is the score of the best partial path from the beginning of the sentence to the word. Parse.prev\u00b1ous is the pointer to the (best) previous parse structure as in conventional Viterbi decoding, which is not necessary if we use the backward N best search. ~ln fact, we use two tables, pa~se-].ist and path-~ap. The reason is described later.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 721, |
| "end": 728, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Search Strategy", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The structure word represents the word information in the dictionary including its lexical form, part of speech tag, and word output probability given tt,e part of speech. \"tim beginning pasition of the word the end position of the word part of speech tag of the word a list of the la-'~t two parts (,f speech the b,~t partial path score from the start a pointer to previous parse strllettll'e word structure form ] lexical f,.'-n{ of the word", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Forward DP Search", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "l)Oa", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Forward DP Search", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "[ part of speech tag of the word prob _ word outlmt probability Before explaining tim forward search, we will define some flmctions and tables used in the algorithm. In the forward search, we use a table called parse-list, whose key is the end position of the parse structure, and wlm,se value is a list of parse structures that have the best partial path scores for each combined state at the end position. Function register-to-parse-list registers a parse structure against the parse-list and maintains the best partim parses. Function get-parse-list returns a list of parse structnres at the specified position. We also use the fimetion leltmost-substrings which returns a list of word structures in the dictionary whose lexical form matches the substrings starting at the. specified position in the input sentence. Figure 1 shows the central part of the forward dynamic programming search algorithm. It starts from the beg,string of tim inlmt sentence, and proceeds charattar by character. At each point in tim sentence, it looks up the combination of the best partial parses ending at the point and word hypotheses starting at that point. If tim connection of a partial parse and a word llypothesis is allowed by the tagging model, a new continuation parse is made and registered in the parse-list.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 819, |
| "end": 827, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "The Forward DP Search", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "The partial path score for the new contitular,on parse is the product of the best partial path score up to the poi,g, the trigram probability of the last three parts of speech tags and the word output probability for LIfe part of speech 4.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Forward DP Search", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "The Backward A* Search", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "3.2", |
| "sec_num": null |
| }, |
| { |
| "text": "The backward search uses a table called path-map, whose key is the end position of tile parse structure, and whose value is a list of parse structures that have the best partial path scores for each distinct combin~ ties of the start position and the combined state. The dilference 1)etween parse-list and path-map is that path-map is classi/ied by tim start position of the last word in addition to tim combined state. This distinction is crucial for the proposed N best algorithm, l\"or tim tbrward search to tind a parse that maximizes Equation 1, it is the parts of speech sequence that matters. For the backward N-best search, how(wet, we want N most likely word segmentation and part of speech sequence. Parse-list may shadow less probable candidates that have the same part of speech sc:qnence for the best scoring candidate, but differ in tim segmentaL,on of the last word. As shown in Figure 1 , path-map is made during the forward search by the function register-parse-to-path-map, which registers a parse structure to path-map and maintains the best partial parses in the table's criteria.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 893, |
| "end": 902, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "3.2", |
| "sec_num": null |
| }, |
| { |
| "text": "Now we describe the central part of tim backward A* search algorithm. But we assume that the readers know the A* algorithm, and exphtin only the way we applied the algorithm to the problem.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "3.2", |
| "sec_num": null |
| }, |
| { |
| "text": "We consider a parse structure ,~q a state in A* search. Two slates are e(plat if their parse structures have the same start position, end position, and combined state. The backward search starts at the end of the input, sentence, and backtracks to the beginning of the sentence using tim path-map.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "3.2", |
| "sec_num": null |
| }, |
| { |
| "text": "Initial states are obtained by looking up the entries of tim sentence end position of the path-map. The successor states are obtained by first, looking u 1) tim entries of the path-map at the start position of the current parse, then cbecldng whether they satisfy the constraint of the combined state transition in the second order IIMM, aim whether the transition is allowed by the tagging model. The combined state transition constraint means that tim part of speech sequence in the parse.nth-order-state of the current parse, ignor-ing the last element, equals that of tile previous parse, ignoring the first element. The state transition cost of the backward search is the product of the part of speech trigram probability and the word output probability. Tile score estimate of the remaining portion of a path is obtained from the parse.prob-so-~ar slot in the parse structure.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "3.2", |
| "sec_num": null |
| }, |
| { |
| "text": "The backward search generates the N best hypotheses sequentially and there is no need to preset N. The complexity of the backward search is significantly less than that of the forward search.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "3.2", |
| "sec_num": null |
| }, |
| { |
| "text": "To handle open text, we have to cope with unknown words. Since Japanese do not put spaces between words, we have to identify unknown words at first. To do this, we can look at the spelling (character sequence) that may constitute a word, or look at the context to identify words that are acceptable in this context.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Once word hypotheses for unknown words are generated, the proposed N-best algorithm will find tile most likely word segmentation and part of speech assignment taking into account the entire sentence. Therefore, we can formalize the unknown word problem as (letermining the span of an unknown word, assigning its part of speech, and estimating its probability given its part of speech.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Let us call a computational model that determines the probability of any word hypothesis given its lexical form and its part of speech the \"word model\". The word model must account for morphology and word formarion to estimate the part of speech and tile probability of a word hypothesis. For tile first approxinmtion, we used the character trigram of each part of sl)eech as the word model. Let C = cic~.., c,~ denote the sequence of n characters that constitute word zv whose part of speech is t. We approximate the probability of the word given part ,(~,lc+-=, ~,-~)r,(#1c.._l, ..,) i=3 9where special symbol \"#\" indicates ttle word boundary marker. Character trigram probabilities are estimated from the training corpus by computing relative frequency of character bigram and trigram that appeared in words tagged as t.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 553, |
| "end": 585, |
| "text": ",(~,lc+-=, ~,-~)r,(#1c.._l, ..,)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Word Model", |
| "sec_num": "4" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "= Nt(ci_2, Ci_l, ci) N,(c~_.~, ~i)", |
| "eq_num": "(lO)" |
| } |
| ], |
| "section": "Pt(cilci-2, q-i) = f,(c~l~-=, \u00a2,-~)", |
| "sec_num": null |
| }, |
| { |
| "text": "where Nt(ci_2,ci_~,ci) is tile total number of times character trigram ci_2ci_~el appears in words tagged as t in the training corpus. Note that the character trigram probabilities reflect the frequency of word tokens in tile training corpus. Since there are more than 3,000 characters in Japanese, trigram probabilities are smoothed by interpolated estimation to cope with the sparse-data problem.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pt(cilci-2, q-i) = f,(c~l~-=, \u00a2,-~)", |
| "sec_num": null |
| }, |
| { |
| "text": "It is ideal to make this character trigram model for all open clmss categories, llowever, the amount of training data is too small for low frequency categories if we divide it by part of speech tags. Therefore, we made trigram models only for tile 4 Figure 2 show two examples of part of speech estimation for unknown words. Each trigram model returns a probability if the input string is a word belonging to the category. In both examples, the correct category has the largest probability. Figure 3 shows the N-best word hypotheses generated by using tile character trigram models. A word hypothesis is a list of word boundary, part of speech assignment, and word probability that matches tile leftmost substrings starting at a given position in tile input sentence. In the forward search, to handle unknown words, word hypotheses are generated at every position in addition to the ones generated by the function leftmost-subs~;rings, which are the words found ill tile dictionary, llowever, ill our system, we limited the ntunl)er of word hyl)otheses generated at each position to 10, for efficiency reasons.", |
| "cite_spans": [ |
| { |
| "start": 248, |
| "end": 249, |
| "text": "4", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 250, |
| "end": 258, |
| "text": "Figure 2", |
| "ref_id": "FIGREF3" |
| }, |
| { |
| "start": 491, |
| "end": 499, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Pt(cilci-2, q-i) = f,(c~l~-=, \u00a2,-~)", |
| "sec_num": null |
| }, |
| { |
| "text": "aA noun tlmt can be used a~s a verb when it is followed by a forlna,] verb \"s~tr~t\",", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pt(cilci-2, q-i) = f,(c~l~-=, \u00a2,-~)", |
| "sec_num": null |
| }, |
| { |
| "text": "We applied the performance measures for English parsers [1] to Japanese morphological analyzers. The basic idea is that morphological analysis for a sentence can be thought of as a set of labeled brackets, where a bracket corresponds to word segmentation and its la-. bel corresponds to part of speech. We then compare the brackets contained in the system's output to the brackets contained in the standard analysis. For the N-best candidate, we will make the union of t],e brackets contained in each candidate, and compare thenr to the brackets in the standard.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Measures", |
| "sec_num": "5" |
| }, |
| { |
| "text": "For comparison, we court{, the number of I)rackcts in the standard data (Std), the number of brackets in the system output (Sys), and the nunlber of matching brackets (M). We then calculate the nleasurcs of recall (= M/Std) and precision (= M/Sys). We also connt the number of crossings, which is tile mmtber of c,'mes where a bracketed sequence from the standard data overlaps a bracketed sequence from tile system output, but neither sequence is completely coutained in the other.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Measures", |
| "sec_num": "5" |
| }, |
| { |
| "text": "We defined two equaiity criteria of brackets for counting tim number of matching brackets. Two brackets are unlabeled-bracket-equal if the boundaries of the two brackets are tile same. Two brackets are labeledbracket.equal if the labels of the brackets ark the same in addition to unlabeled-I)racket-equal. In comparing the consistency of the word segmentations of two brackclings, wllich we call structure-consistency, we count the measures (recall, precision, crossings) by unlabeledbracket-equal. In comparing the consistency of part of speech assignment in addition to word segmentation, which we call label-consistency, we couut them by labeled-bracket-equal. For example, Figure 4 shows a sample of N-hest analysls hypotheses, where the first candidate is the correct analysis a. For the second candhlate, since there are !) })rackets in tim correct data (Std=9), 11 brackets in the second candidate (Sys=ll), and 8 nlatciiing brackets (M=8), tile recall and precision with respect to label consistency are 8/9 and 8/11, respectively. For the top 6Probabilities m'e in liiltura] log b~se e. two candidates, since tliere ;ire 12 distinct brackets in tile systems otll.litlt and 9 Inatehing brackets, tile recall and precision with respect to hal)el consistency are 9/9 aud 9/12, respeetiwqy. For the third candidate, since the correct data and the third candidate differ in just one part of Sl)eech tag, the recall and precision wittl respect to structure consistency are 9/9 and 9/9, respectiw>ly. We used the NI'I~ Dialogue Databaae [5] to train and test the proposed morphological analysis method. It is a corpus of approxiumtely 800,000 words whose word segmentatio,l and part of speech tag assigmnent were laboriously performed by hand. In tiffs experilneut, we only used one fourth of the A'Ft~. Corl)us , a portion of the keyl)oard dialogues in the conference registration domain. First, we selected 1,000 test sentences for all open test, arid used I.he others for training. 
Tile corpus was divided into 90% R)r training and 10% for testing. We then selected 1,000 sentences from tile traiuing set and used them for a closed test. The number of sentences, words, and characters for each test set and training texts are shown iu 'Pable 2.", |
| "cite_spans": [ |
| { |
| "start": 1540, |
| "end": 1543, |
| "text": "[5]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 678, |
| "end": 686, |
| "text": "Figure 4", |
| "ref_id": "FIGREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Evaluation Measures", |
| "sec_num": "5" |
| }, |
| { |
| "text": "The training texts contained 6580 word types and 6945 tag trigram types. There were 247 unknown word types and 213 unknown tag trigram types in tim open test senteuces. Thus, both part of speech trigralrl l)robabilities alld word output probabilities must be snioothed to handle open texts. We then tested the proposed system, which uses smoothed part of speech trigram with word model, on the open test sentences. Table 4 shows tile percentages of words correctly segmented and tagged. In Table 4 , label consistency 2 represents the accuracy of segmentation and tagging ignoring the difference in conjugation form.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 415, |
| "end": 422, |
| "text": "Table 4", |
| "ref_id": "TABREF5" |
| }, |
| { |
| "start": 490, |
| "end": 497, |
| "text": "Table 4", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiment", |
| "sec_num": "6" |
| }, |
| { |
| "text": "For open texts, tile morphological analyzer achieved 95.1% recall and 94.6% precision for the top candidate, and 97.8% recall and 73.2% precision for the 5 best candidates. This performance is very encouraging, and is comparable to the state-of-the-art stochastic tagger for English [2-4, 10, 11] .", |
| "cite_spans": [ |
| { |
| "start": 283, |
| "end": 296, |
| "text": "[2-4, 10, 11]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Since the segmentation accuracy of the proposed system is relatively high (97.7% recall and 97.2% precision for the top candidate) compared to the morphological analysis accuracy, it is likely that we can improve the part of speech assignment accuracy by refining the statistically-based tagging model. We find a fair number of tagging errors happened in conjugation forms. We assume that this is caused by the fact that the Japanese tag set used in tile ATR. Corpus is not detailed enough to capture the complicated Japanese verb morphology. For open texts, the sentence accuracy of the raw part of speech trigram without word model is 62.7% for the top candidate and 70.4% for the top-5, while that of smoothed trigram with word model is 66.9% for the top and 80.3% for the top-5. We can see that, by smoothing tile part ofsllecch trigram and by adding word model to handle unknown words, the accuracy and robustness of the morphological analyzer is significantly improved. Ilowever, tile sentence accuracy for closed texts is still significantly better that that for ol)en texts. It is clear that more research has to be done on the smoothing problem.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Morphological analysis is an important practical problem with potential apl)lication in many areas including kana-to-kanji conversion 7, speech recognition, character recognition, speech synthesis, text revision support, information retrieval, and machine translation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Most conventional Japanese morphological analyzers use rule-based heuristic searches. They usually use a connectivity matrix (part-of-speech-pair grammar) as the language model. To rank the morphological analysis hypotheses, they usually use heuristics such as the Longest Match Method or the Least Bunsetsu's Number Method [16] .", |
| "cite_spans": [ |
| { |
| "start": 319, |
| "end": 323, |
| "text": "[16]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "7" |
| }, |
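The Longest Match heuristic named above can be sketched in a few lines: at each position, greedily take the longest dictionary word, falling back to a single character when nothing matches. The function name and toy dictionary are hypothetical, not taken from the cited systems:

```python
def longest_match_segment(sentence, dictionary):
    """Greedy Longest Match segmentation (illustrative sketch).

    At each position, take the longest dictionary word that matches;
    fall back to one character when no word matches.
    """
    words, i = [], 0
    max_len = max(map(len, dictionary))
    while i < len(sentence):
        for n in range(min(max_len, len(sentence) - i), 0, -1):
            cand = sentence[i:i + n]
            if n == 1 or cand in dictionary:  # single chars always accepted
                words.append(cand)
                i += n
                break
    return words

dic = {"東京", "東京都", "都", "に", "住む"}
longest_match_segment("東京都に住む", dic)  # ['東京都', 'に', '住む']
```

The greedy choice is what makes the heuristic cheap and also what makes it fail: a long first match can force a bad segmentation of the rest of the sentence, which is precisely the failure mode a probabilistic lattice search avoids.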
| { |
| "text": "There are some statistically-based approaches to Japanese morphological analysis. The tagging models previously used are either part of speech bigram [9, 14] or character-based HMM [12] .", |
| "cite_spans": [ |
| { |
| "start": 151, |
| "end": 154, |
| "text": "[9,", |
| "ref_id": null |
| }, |
| { |
| "start": 155, |
| "end": 158, |
| "text": "14]", |
| "ref_id": null |
| }, |
| { |
| "start": 183, |
| "end": 187, |
| "text": "[12]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "7" |
| }, |
| { |
| "text": "Both heuristic-based and statistically-based approaches use the Minimum Connective-Cost Method [7] , which is a linear time dynamic programming algorithm that finds the morphological hypothesis that has the minimal connective cost (i.e. bigram-based cost) as derived by certain criteria. To handle unknown words, most Japanese morphological analyzers use the character type heuristic [17] , namely that \"a string of the same character type is likely to constitute a word\". There is one stochastic approach that uses a bigram of word formation units [13] . However, it does not learn probabilities from training texts, but learns them from machine readable dictionaries, and the model is not incorporated in working morphological analyzers, as far as the author knows.", |
| "cite_spans": [ |
| { |
| "start": 96, |
| "end": 99, |
| "text": "[7]", |
| "ref_id": null |
| }, |
| { |
| "start": 386, |
| "end": 390, |
| "text": "[17]", |
| "ref_id": null |
| }, |
| { |
| "start": 545, |
| "end": 549, |
| "text": "[13]", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "7" |
| }, |
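The Minimum Connective-Cost Method described above can be sketched as a dynamic program over the word lattice: `best[i]` holds, per final part of speech, the cheapest analysis of the first `i` characters, and each dictionary word extends it by one POS-pair connection cost. This is a minimal illustration with made-up costs and helper names, not the implementation of [7]:

```python
def min_connective_cost(sentence, dictionary, conn_cost, default=10.0):
    """Segmentation minimizing summed bigram connection costs.

    dictionary: list of (word, pos); conn_cost: (prev_pos, pos) -> cost.
    Costs and the default penalty are illustrative assumptions.
    """
    INF = float("inf")
    # best[i]: pos -> (cost, (start, prev_pos, word)) for prefixes of length i
    best = [dict() for _ in range(len(sentence) + 1)]
    best[0]["BOS"] = (0.0, None)
    for i in range(len(sentence)):
        if not best[i]:
            continue
        for word, pos in dictionary:
            j = i + len(word)
            if j > len(sentence) or sentence[i:j] != word:
                continue
            for prev_pos, (cost, _) in best[i].items():
                c = cost + conn_cost.get((prev_pos, pos), default)
                if c < best[j].get(pos, (INF, None))[0]:
                    best[j][pos] = (c, (i, prev_pos, word))
    if not best[len(sentence)]:
        return None  # no full analysis found
    # trace back the cheapest complete analysis
    pos = min(best[len(sentence)], key=lambda p: best[len(sentence)][p][0])
    i, words = len(sentence), []
    while i > 0:
        _, (j, prev_pos, word) = best[i][pos]
        words.append((word, pos))
        i, pos = j, prev_pos
    return list(reversed(words))

dic = [("ab", "N"), ("a", "N"), ("b", "V"), ("c", "N")]
costs = {("BOS", "N"): 1.0, ("N", "N"): 1.0, ("N", "V"): 5.0, ("V", "N"): 1.0}
min_connective_cost("abc", dic, costs)  # [('ab', 'N'), ('c', 'N')]
```

Taking costs to be negative log probabilities of a POS bigram turns this minimization into exactly the Viterbi search of a bigram tagging model, which is the connection between the heuristic-cost and the stochastic views.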
| { |
| "text": "The unique features of the proposed Japanese morphological analyzer are that it can find the exact N most likely hypotheses using a part of speech trigram, and that it can handle unknown words using a character trigram. The algorithm can naturally be extended to handle any higher order Markov models. Moreover, it can naturally be extended to handle lattice-style input, which is often used as the output of speech recognition and character recognition systems, by extending the function (leftmost-substrings) so as to return a list of words in the dictionary that match the substrings in the input lattice starting at the specified position.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "7" |
| }, |
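The function leftmost-substrings, which returns every dictionary word matching the input starting at a given position, is naturally served by a trie (common-prefix search), so the lookup costs only the length of the longest match. A sketch under that assumption; the class and entry format are hypothetical:

```python
class Trie:
    """Common-prefix search for leftmost-substrings lookups (sketch)."""

    def __init__(self):
        self.root = {}

    def insert(self, word, entry):
        node = self.root
        for ch in word:
            node = node.setdefault(ch, {})
        node.setdefault("$", []).append(entry)  # "$" marks end-of-word

    def leftmost_substrings(self, s, i):
        """All dictionary entries matching s starting at position i."""
        hits, node = [], self.root
        for j in range(i, len(s)):
            node = node.get(s[j])
            if node is None:
                break
            hits.extend(node.get("$", []))
        return hits

trie = Trie()
for w, pos in [("東", "noun"), ("東京", "noun"), ("東京都", "noun")]:
    trie.insert(w, (w, pos))
trie.leftmost_substrings("東京都に", 0)
# → [('東', 'noun'), ('東京', 'noun'), ('東京都', 'noun')]
```

Extending this to lattice input, as the paragraph suggests, amounts to walking the trie over every character arc leaving the given lattice node instead of over the single next character.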
| { |
| "text": "For future work, we have to study the most effective way of generating word hypotheses that can handle unknown words. Currently, we are limiting the number of word hypotheses to reduce ambiguity at the cost of accuracy. We also have to study the word model for open categories that have conjugation, because the training data becomes too small to estimate trigrams if we divide it by tags. We will probably have to tie some parameters to solve the insufficient data problem. Moreover, we have to study how to adapt the system to a new domain. Developing an unsupervised learning method, like the forward-backward algorithm for HMMs, is an urgent goal, since we cannot always expect the availability of manually segmented and tagged data. We can think of an EM algorithm obtained by replacing maximization with summation in the extended Viterbi algorithm, but we do not yet know how to handle unknown words in this algorithm. (Footnote: Kana-to-kanji conversion is a popular Japanese input method on computers with an ASCII keyboard. A phonetic transcription in Roman (ASCII) characters is input and converted first to the Japanese syllabary hiragana, which is then converted to an orthographic transcription including Chinese characters, kanji.)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "7" |
| }, |
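The "replace maximization with summation" idea can be made concrete on a first-order HMM: the Viterbi recursion and the forward recursion are identical except that max becomes sum, and the forward probability is the quantity EM (forward-backward) re-estimation needs. A toy sketch, first-order for brevity (the paper's extended Viterbi is second-order), with a made-up two-state model:

```python
def viterbi_score(obs, states, start, trans, emit):
    """Max-product: probability of the single best tag sequence."""
    v = {s: start[s] * emit[s][obs[0]] for s in states}
    for o in obs[1:]:
        v = {s: max(v[p] * trans[p][s] for p in states) * emit[s][o]
             for s in states}
    return max(v.values())

def forward_prob(obs, states, states_start=None, *, start, trans, emit):
    """Sum-product: total probability of the observations.

    Identical to viterbi_score except max is replaced by sum --
    the substitution the text proposes for an EM variant.
    """
    f = {s: start[s] * emit[s][obs[0]] for s in states}
    for o in obs[1:]:
        f = {s: sum(f[p] * trans[p][s] for p in states) * emit[s][o]
             for s in states}
    return sum(f.values())

# Toy model: two tags, two symbols (all numbers illustrative).
states = ["N", "V"]
start = {"N": 0.6, "V": 0.4}
trans = {"N": {"N": 0.3, "V": 0.7}, "V": {"N": 0.5, "V": 0.5}}
emit = {"N": {"a": 0.5, "b": 0.5}, "V": {"a": 0.2, "b": 0.8}}
obs = ["a", "b"]
viterbi_score(obs, states, start, trans, emit)          # 0.168 (best path)
forward_prob(obs, states, start=start, trans=trans, emit=emit)  # 0.265 (all paths)
```

The forward value is always at least the Viterbi value, since it sums the best path together with every other path; EM uses such sums to weight fractional counts instead of committing to one segmentation.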
| { |
| "text": "We have developed a stochastic Japanese morphological analyzer. It uses a statistical tagging model and an efficient two-pass search algorithm to find the N best morphological analysis hypotheses for the input sentence. Its word segmentation and tagging accuracy is approximately 95%, which is comparable to the state-of-the-art stochastic taggers for English.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "8" |
| }, |
| { |
| "text": "In Figure 1, the function transprob returns the probability of a given trigram. The functions initial-step and final-step handle the transitions at sentence boundaries.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "A Procedure for Quantitatively Comparing the Syntactic Coverage of English Grammars", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Black", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "306--311", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Black, E. et al.: \"A Procedure for Quantitatively Comparing the Syntactic Coverage of English Grammars\", DARPA Speech and Natural Language Workshop, pp.306-311, Morgan Kaufmann, 1991.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Equations for Part-of-Speech Tagging", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Charniak", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Hendrickson", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Jacobson", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Perkowitz", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "784--789", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Charniak, E., Hendrickson, C., Jacobson, N., and Perkowitz, M.: \"Equations for Part-of-Speech Tagging\", AAAI-93, pp.784-789, 1993.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "A Stochastic Part of Speech Tagger and Noun Phrase Parser for English", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Church", |
| "suffix": "" |
| } |
| ], |
| "year": 1988, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "136--143", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Church, K.: \"A Stochastic Part of Speech Tagger and Noun Phrase Parser for English\", ANLP-88, pp.136-143, 1988.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "A Practical Part-of-Speech Tagger", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Cutting", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Kupiec", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Pedersen", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Sibun", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "", |
| "volume": "92", |
| "issue": "", |
| "pages": "133--140", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Cutting, D., Kupiec, J., Pedersen, J., and Sibun, P.: \"A Practical Part-of-Speech Tagger\", ANLP-92, pp.133-140, 1992.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Extended Viterbi Algorithm for Second Order Hidden Markov Process", |
| "authors": [ |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "He", |
| "suffix": "" |
| } |
| ], |
| "year": 1988, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "718--720", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "He, Y.: \"Extended Viterbi Algorithm for Second Order Hidden Markov Process\", ICPR-88, pp.718-720, 1988.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Morphological Analysis by Minimum Connective-Cost Method", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Hisamitsu", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Nitta", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "17--24", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hisamitsu, T. and Nitta, Y.: \"Morphological Analysis by Minimum Connective-Cost Method\", Technical Report SIGNLC 90-8, IEICE, pp.17-24, 1990 (in Japanese).", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Self-organized language modeling for speech recognition", |
| "authors": [ |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Jelinek", |
| "suffix": "" |
| } |
| ], |
| "year": 1985, |
| "venue": "IBM Report", |
| "volume": "", |
| "issue": "", |
| "pages": "450--506", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jelinek, F.: \"Self-organized language modeling for speech recognition\", IBM Report, 1985 (Reprinted in Readings in Speech Recognition, pp.450-506).", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Syntactic Analysis by Stochastic BUNSETSU Grammar", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Matsunobu", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Hitaka", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshida", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| } |
| ], |
| "year": 1986, |
| "venue": "", |
| "volume": "56", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Matsunobu, E., Hitaka, T., and Yoshida, S.: \"Syntactic Analysis by Stochastic BUNSETSU Grammar\", Technical Report SIGNL 56-3, IPSJ, 1986 (in Japanese).", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Tagging Text with a Probabilistic Model", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Merialdo", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "809--812", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Merialdo, B.: \"Tagging Text with a Probabilistic Model\", ICASSP-91, pp.809-812, 1991.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "POST: Using Probabilities in Language Processing", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [ |
| "W" |
| ], |
| "last": "Meteer", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Schwartz", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Weischedel", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "960--965", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Meteer, M. W., Schwartz, R. and Weischedel, R.: \"POST: Using Probabilities in Language Processing\", IJCAI-91, pp.960-965, 1991.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Hidden Markov Model applied to Morphological Analysis", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Murakami", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Sagayama", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "", |
| "volume": "3", |
| "issue": "", |
| "pages": "161--162", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Murakami, J. and Sagayama, S.: \"Hidden Markov Model applied to Morphological Analysis\", 45th National Meeting of the IPSJ, Vol.3, pp.161-162, 1992 (in Japanese).", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Japanese Word Formation Model and Its Evaluation", |
| "authors": [ |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Nagai", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Llital\u00a2a", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Trans IPSJ", |
| "volume": "34", |
| "issue": "9", |
| "pages": "1944--1955", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nagai, H. and Hitaka, T.: \"Japanese Word Formation Model and Its Evaluation\", Trans. IPSJ, Vol.34, No.9, pp.1944-1955, 1993 (in Japanese).", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Morphological Category Bigram: A Single Language Model for both Spoken Language and Text", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Sakai", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "87--90", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sakai, S.: \"Morphological Category Bigram: A Single Language Model for both Spoken Language and Text\", ISSD-93, pp.87-90, 1993.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "A Tree-Trellis Based Fast Search for Finding the N Best Sentence Hypotheses in Continuous Speech Recognition", |
| "authors": [ |
| { |
| "first": "F", |
| "middle": [ |
| "K" |
| ], |
| "last": "Soong", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "ICASSP-91", |
| "volume": "", |
| "issue": "", |
| "pages": "705--708", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Soong, F. K. and Huang, E.: \"A Tree-Trellis Based Fast Search for Finding the N Best Sentence Hypotheses in Continuous Speech Recognition\", ICASSP-91, pp.705-708, 1991.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Morphological Analysis of Non-marked-off Japanese Sentences by the Least BUNSETSU's Number Method", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Yoshimura", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshida", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| } |
| ], |
| "year": 1981, |
| "venue": "", |
| "volume": "3", |
| "issue": "", |
| "pages": "40--46", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yoshimura, K., Hitaka, T., and Yoshida, S.: \"Morphological Analysis of Non-marked-off Japanese Sentences by the Least BUNSETSU's Number Method\", Trans. IPSJ, Vol.24, No.1, pp.40-46, 1981 (in Japanese).", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Morphological Analysis of Japanese Sentences Containing Unknown Words", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Yoshimura", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Takeuchi", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Tsuda", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Shudo", |
| "suffix": "" |
| } |
| ], |
| "year": 1989, |
| "venue": "", |
| "volume": "30", |
| "issue": "", |
| "pages": "294--301", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yoshimura, K., Takeuchi, M., Tsuda, K. and Shudo, K.: \"Morphological Analysis of Japanese Sentences Containing Unknown Words\", Trans. IPSJ, Vol.30, No.3, pp.294-301, 1989 (in Japanese).", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "type_str": "figure", |
| "uris": null, |
| "text": "have the same form as the first order. Consider the partial word sequence W_i = w_1 ... w_i and the partial tag sequence T_i = t_1 ... t_i; F(w_i, t_i) = max F(w_{i-1}, t_{i-1}) P(t_i | t_{i-1}) P(w_i | t_i)", |
| }, |
| "FIGREF1": { |
| "num": null, |
| "type_str": "figure", |
| "uris": null, |
| "text": "function forward-pass(string) begin initial-step(); // Pads special symbols at both ends. for i=1 to length(string) do foreach parse in get-parse-list(i) do foreach word in leftmost-substrings(string, i) do pos-ngram := append(parse.nth-order-state, list(word.pos)); if (transprob(pos-ngram) > 0) then new-parse := make-parse(); new-parse.start := i; new-parse.end := i + length(word.form); new-parse.pos := word.pos; new-parse.nth-order-state := rest(pos-ngram); new-parse.prob-so-far := parse.prob-so-far * transprob(pos-ngram) * word.prob; new-parse.previous := parse; register-parse-to-parse-list(new-parse); register-parse-to-path-map(new-parse); endif end end end final-step(); // Handles the transition to the end symbol. end Figure 1: The forward DP search algorithm", |
| }, |
| "FIGREF2": { |
| "num": null, |
| "type_str": "figure", |
| "uris": null, |
| "text": "of speech P(w|t) by the character trigram probabilities: P(w|t) = P_t(C) = P_t(c_1|#,#) P_t(c_2|#,c_1) \u220f_{i=3}^{u} P_t(c_i|c_{i-2},c_{i-1})", |
| }, |
| "FIGREF3": { |
| "num": null, |
| "type_str": "figure", |
| "uris": null, |
| "text": "N-best Tags for Unknown Words" |
| }, |
| "FIGREF4": { |
| "num": null, |
| "type_str": "figure", |
| "uris": null, |
| "text": "(get-leftmost-substrings-with-word-model", |
| }, |
| "FIGREF6": { |
| "num": null, |
| "type_str": "figure", |
| "uris": null, |
| "text": "N-Best Morphological Analysis hypotheses" |
| }, |
| "FIGREF7": { |
| "num": null, |
| "type_str": "figure", |
| "uris": null, |
| "text": "The percentage of sentences correctly segmented and tagged.", |
| }, |
| "FIGREF8": { |
| "num": null, |
| "type_str": "figure", |
| "uris": null, |
| "text": "shows the percentage of sentences (not words) correctly segmented and tagged.", |
| }, |
| "TABREF1": { |
| "html": null, |
| "type_str": "table", |
| "num": null, |
| "content": "<table><tr><td>start</td></tr><tr><td>end</td></tr><tr><td>pos</td></tr><tr><td>nth-order-state</td></tr><tr><td>prob-so-far</td></tr><tr><td>previous</td></tr></table>", |
| "text": "Data structures for the N best algorithm" |
| }, |
| "TABREF3": { |
| "html": null, |
| "type_str": "table", |
| "num": null, |
| "content": "<table><tr><td/><td colspan=\"5\">training texts closed test open</td></tr><tr><td colspan=\"2\">Sentences /</td><td>-1~5</td><td>\"</td><td>10{i0</td><td>10 o 0</td></tr><tr><td>Words</td><td>'</td><td>149059</td><td/><td>13176 [</td><td>13899</td></tr><tr><td colspan=\"2\">Characters _</td><td>267,122 [</td><td/><td>9422~</td><td>98997</td></tr></table>", |
| "text": "The amount of training and test data", |
| }, |
| "TABREF4": { |
| "html": null, |
| "type_str": "table", |
| "num": null, |
| "content": "<table><tr><td colspan=\"2\">Percentage of words correctly segmented and</td></tr><tr><td colspan=\"2\">tagged: raw part of speech bigram and trigram</td></tr><tr><td>I 2 I 98'{l% I 89.7'~~</td><td>[90.7% [ 0.007 [</td></tr><tr><td>I:~[ os.,~:~ [ 8a.~.s~</td><td>] 84.a% [ o.m2 ]</td></tr><tr><td>I'~ 9a.'2% I 7s.~/o k 5 I <~l''i''i\u00b0 I >in'/~%</td><td>I 79.6% I o.o15 I I r(~.o~ I o.o~s_l</td></tr><tr><td colspan=\"2\">First, as a preliminary experiment, we compared the performances of part of speech bigram and trigram. Table 3 shows the percentages of words correctly segmented and tagged, tested on the closed test sentences. The trigram model achieved 97.5% recall and 97.8% precision for the top candidate, while the bigram model achieved 96.2% recall and 96.6% precision. Although both tagging models show very high performance, the</td></tr></table>", |
| "text": "" |
| }, |
| "TABREF5": { |
| "html": null, |
| "type_str": "table", |
| "num": null, |
| "content": "<table><tr><td/><td colspan=\"6\">smoothed trigram with word model (open text)</td><td/><td/></tr><tr><td/><td>label consistency</td><td/><td colspan=\"2\">label consistency 2</td><td/><td colspan=\"3\">structure consistency</td></tr><tr><td colspan=\"3\">recall precision crossings</td><td colspan=\"3\">recall precision crossings</td><td colspan=\"2\">recall precision</td><td>crossings</td></tr><tr><td>95.1%</td><td>94.6%</td><td>0.013</td><td>95.9%</td><td>95.4%</td><td>0.013</td><td>97.7%</td><td>97.2%</td><td>0.013</td></tr><tr><td>96.5%</td><td>88.0%</td><td>0.023</td><td>97.0%</td><td>90.3%</td><td>0.023</td><td>98.2%</td><td>94.4%</td><td>0.022</td></tr><tr><td>97.3%</td><td>82.1%</td><td>0.031</td><td>97.6%</td><td>85.1%</td><td>0.031</td><td>98.5%</td><td>91.7%</td><td>0.029</td></tr><tr><td>97.6%</td><td>77.4%</td><td>0.046</td><td>97.9%</td><td>80.7%</td><td>0.046</td><td>98.7%</td><td>89.6%</td><td>0.044</td></tr><tr><td>97.8%</td><td>73.2%</td><td>0.061</td><td>98.1%</td><td>77.1%</td><td>0.060</td><td>98.8%</td><td>87.9%</td><td>0.056</td></tr></table>", |
| "text": "The percentage of words correctly segmented and tagged: smoothed trigram with word model" |
| } |
| } |
| } |
| } |