{
"paper_id": "P96-1031",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:02:34.181138Z"
},
"title": "An Efficient Compiler for Weighted Rewrite Rules",
"authors": [
{
"first": "Mehryar",
"middle": [],
"last": "Mohri",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "AT&T Research",
"location": {
"addrLine": "600 Mountain Avenue Murray Hill",
"postCode": "07974 NJ"
}
},
"email": "mohri@research@att.com"
},
{
"first": "Richard",
"middle": [],
"last": "Sproat",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Bell Laboratories",
"location": {
"addrLine": "700 Mountain Avenue Murray Hill",
"postCode": "07974",
"settlement": "NJ"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Context-dependent rewrite rules are used in many areas of natural language and speech processing. Work in computational phonology has demonstrated that, given certain conditions, such rewrite rules can be represented as finite-state transducers (FSTs). We describe a new algorithm for compiling rewrite rules into FSTs. We show the algorithm to be simpler and more efficient than existing algorithms. Further, many of our applications demand the ability to compile weighted rules into weighted FSTs, transducers generalized by providing transitions with weights. We have extended the algorithm to allow for this.",
"pdf_parse": {
"paper_id": "P96-1031",
"_pdf_hash": "",
"abstract": [
{
"text": "Context-dependent rewrite rules are used in many areas of natural language and speech processing. Work in computational phonology has demonstrated that, given certain conditions, such rewrite rules can be represented as finite-state transducers (FSTs). We describe a new algorithm for compiling rewrite rules into FSTs. We show the algorithm to be simpler and more efficient than existing algorithms. Further, many of our applications demand the ability to compile weighted rules into weighted FSTs, transducers generalized by providing transitions with weights. We have extended the algorithm to allow for this.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Rewrite rules are used in many areas of natural language and speech processing, including syntax, morphology, and phonology 1. In interesting applications, the number of rules can be very large. It is then crucial to give a representation of these rules that leads to efficient programs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "1."
},
{
"text": "Finite-state transducers provide just such a compact representation (Mohri, 1994) . They are used in various areas of natural language and speech processing because their increased computational power enables one to build very large machines to model interestingly complex linguistic phenomena. They also allow algebraic operations such as union, composition, and projection which are very useful in practice (Berstel, 1979; Eilenberg, 1974 Eilenberg, 1976 . And, as originally shown by Johnson (1972) , rewrite rules can be modeled as 1 Parallel rewrite rules also have interesting applications in biology. In addition to their formal language theory interest, systems such as those of Aristid Lindenmayer provide rich mathematical models for biological development (Rozenberg and Sa]omaa, 1980) .",
"cite_spans": [
{
"start": 68,
"end": 81,
"text": "(Mohri, 1994)",
"ref_id": "BIBREF12"
},
{
"start": 409,
"end": 424,
"text": "(Berstel, 1979;",
"ref_id": "BIBREF3"
},
{
"start": 425,
"end": 440,
"text": "Eilenberg, 1974",
"ref_id": null
},
{
"start": 441,
"end": 456,
"text": "Eilenberg, 1976",
"ref_id": null
},
{
"start": 487,
"end": 501,
"text": "Johnson (1972)",
"ref_id": "BIBREF6"
},
{
"start": 767,
"end": 796,
"text": "(Rozenberg and Sa]omaa, 1980)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Motivation",
"sec_num": "1."
},
{
"text": "finite-state transducers, under the condition that no rule be allowed to apply any more than a finite number of times to its own output. Kaplan and Kay (1994) , or equivalently Karttunen (1995) , provide an algorithm for compiling rewrite rules into finite-state transducers, under the condition that they do not rewrite their noncontextual part 2. We here present a new algorithm for compiling such rewrite rules which is both simpler to understand and implement, and computationally more efficient. Clarity is important since, as pointed out by Kaplan and Kay (1994) , the representation of rewrite rules by finite-state transducers involves many subtleties. Time and space efficiency of the compilation are also crucial. Using naive algorithms can be very time consuming and lead to very large machines (Liberman, 1994) .",
"cite_spans": [
{
"start": 137,
"end": 158,
"text": "Kaplan and Kay (1994)",
"ref_id": "BIBREF7"
},
{
"start": 177,
"end": 193,
"text": "Karttunen (1995)",
"ref_id": "BIBREF8"
},
{
"start": 547,
"end": 568,
"text": "Kaplan and Kay (1994)",
"ref_id": "BIBREF7"
},
{
"start": 806,
"end": 822,
"text": "(Liberman, 1994)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "231",
"sec_num": null
},
{
"text": "In some applications such as those related to speech processing, one needs to use weighted rewrite rules, namely rewrite rules to which weights are associated. These weights are then used at the final stage of applications to output the most probable analysis. Weighted rewrite rules can be compiled into weighted finite-state transducers, namely transducers generalized by providing transitions with a weighted output, under the same context condition. These transducers are very useful in speech processing (Pereira et al., 1994) . We briefly describe how we have augmented our algorithm to handle the compilation of weighted rules into weighted finite-state transducers. In order to set the stage for our own contribution, we start by reviewing salient aspects of the Kaplan and Kay algorithm.",
"cite_spans": [
{
"start": 509,
"end": 531,
"text": "(Pereira et al., 1994)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "231",
"sec_num": null
},
{
"text": "2The genera] question of the decidability of the halting problem even for one-rule semi-Thue systems is still open. Robert McNaughton (1994) has recently made a positive conjecture about the class of the rules without self overlap.",
"cite_spans": [
{
"start": 123,
"end": 140,
"text": "McNaughton (1994)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "231",
"sec_num": null
},
{
"text": "Prologue o I d( Obligatory( \u00a2 , <i , >)) Id(Rightcontezt(p, <, >)) Replace Id(Leftcontezt(A, <, >)) Prologue -i = Id(Z~< 0 <i \u00a2\u00b0< > B~,< 0) o = Id(( 0 > p>0Z 0-> > p>0 0-> o = [Id(~*<,>, o)Opt(Id(<a)\u00a2\u00b0<c>c \u00d7 \u00a2\u00b0c>cId(>a))]* o -\" Id((~0A<0 -~0 < < Z~0 f] ~0A<0 -~0 < < ~0)>) o",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "231",
"sec_num": null
},
{
"text": "Figure 1: Compilation of obligatory left-to-right rules, using the KK algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "231",
"sec_num": null
},
{
"text": "The rewrite rules we consider here have the following general form:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The KK Algorithm",
"sec_num": "2."
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u00a2 --, p",
"eq_num": "(2)"
}
],
"section": "The KK Algorithm",
"sec_num": "2."
},
{
"text": "Such rules can be interpreted in the following way: \u00a2 is to be replaced by \u00a2 whenever it is preceded by A and followed by p. Thus, A and p represent the left and right contexts of application of the rules. In general, \u00a2, \u00a2, A and p are all regular expressions over the alphabet of the rules. Several types of rules can be considered depending on their being obligatory or optional, and on their direction of application, from left to right, right to left or simultaneous application.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The KK Algorithm",
"sec_num": "2."
},
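The rule semantics just described can be illustrated with a short sketch. This is not the paper's method (the paper compiles rules into transducers rather than applying them directly), and it simplifies φ, ψ, λ, ρ to literal strings instead of regular expressions; the function and parameter names (`apply_rule`, `phi`, `psi`, `lam`, `rho`) are ours.

```python
def apply_rule(s, phi, psi, lam, rho):
    """Obligatory left-to-right application of phi -> psi / lam __ rho,
    with phi, psi, lam, rho simplified to fixed strings."""
    out, i = [], 0
    while i < len(s):
        if (s.startswith(phi, i)                       # phi matches here
                and "".join(out).endswith(lam)         # left context, checked on
                                                       # the already-rewritten output
                and s.startswith(rho, i + len(phi))):  # right context, on the input
            out.append(psi)
            i += len(phi)          # skip over the rewritten material
        else:
            out.append(s[i])
            i += 1
    return "".join(out)
```

For example, with φ = N, ψ = m, λ = a, ρ = b, the string aNba rewrites to amba, while aNta is left untouched because the right context b is absent.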
{
"text": "Consider an obligatory rewrite rule of the form \u00a2 --+ \u00a2/A p, which we will assume applies left to right across the input string. Compilation of this rule in the algorithm of Kaplan and Kay (1994) (KK for short) involves composing together six transducers, see Figure 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 260,
"end": 268,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "The KK Algorithm",
"sec_num": "2."
},
{
"text": "We use the notations of KK. In particular, denotes the alphabet, < denotes the set of context labeled brackets {<a, <i, <c}, > the set {>a, >i, >c}, and 0 an additional character representing deleted material. Subscript symbols of an expression are symbols which are allowed to freely appear anywhere in the strings represented by that expression. Given a regular expression r, Id(r) is the identity transducer obtained from an automaton A representing r by adding output labels to A identical to its input labels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The KK Algorithm",
"sec_num": "2."
},
{
"text": "The first transducer, Prologue, freely introduces labeled brackets from the set {<a, <i, <~, >a, >i, >~} which are used by left and right context transducers. The last transducer, Prologue -i, erases all such brackets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The KK Algorithm",
"sec_num": "2."
},
{
"text": "In such a short space, we can of course not hope to do justice to the KK algorithm, and the reader who is not familiar with it is urged to consult their paper. However, one point that we do need to stress is the following: while the construction of Prologue, Prologue -i and Replace 232",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The KK Algorithm",
"sec_num": "2."
},
{
"text": "is fairly direct, construction of the other transducers is more complex, with each being derived via the application of several levels of regular operations from the original expressions in the rules. This clearly appears from the explicit expressions we have indicated for the transducers. The construction of the three other transducers involves many operations including: two intersections of automata, two distinct subtractions, and nine complementations. Each subtraction involves an intersection and a complementation algorithm 3. So, in the whole, four intersections and eleven complementations need to be performed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The KK Algorithm",
"sec_num": "2."
},
{
"text": "Intersection and complementation are classical automata algorithms (Aho et al., 1974; Aho et al., 1986) . The complexity of intersection is quadratic. But the classical complementation algorithm requires the input automaton to be deterministic. Thus, each of these 11 operations requires first the determinization of the input. Such operations can be very costly in the case of the automata involved in the KK algorithm 4.",
"cite_spans": [
{
"start": 67,
"end": 85,
"text": "(Aho et al., 1974;",
"ref_id": "BIBREF0"
},
{
"start": 86,
"end": 103,
"text": "Aho et al., 1986)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The KK Algorithm",
"sec_num": "2."
},
{
"text": "In the following section we briefly describe a new algorithm for compiling rewrite rules. For reasons of space, we concentrate here on the compilation of left-to-right obligatory rewrite rules. However, our methods extend straightforwardly to other modes of application (optional, right-to-left, simultaneous, batch), or kinds of rules (two-level rules) discussed by Kaplan and Kay (1994) .",
"cite_spans": [
{
"start": 367,
"end": 388,
"text": "Kaplan and Kay (1994)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The KK Algorithm",
"sec_num": "2."
},
{
"text": "3A subtraction can of course also be performed directly by combining the two steps of intersection and complementation, but the corresponding algorithm has exactly the same cost as the total cost of the two operations performed consecutively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The KK Algorithm",
"sec_num": "2."
},
{
"text": "4 One could hope to find a more efficient way of determining the complement of an automaton that would not require determinization. However, this problem is PSPACE-complete. Indeed, the regular expression non-universality problem is a subproblem of complementation known to be PSPACE-complete (Garey and Johnson, 1979, page 174) , (Stockmeyer and Meyer, 1973) . This problem also known as the emptiness of complement problem has been extensively studied (Aho et al., 1974, page 410-419) .",
"cite_spans": [
{
"start": 304,
"end": 328,
"text": "Johnson, 1979, page 174)",
"ref_id": null
},
{
"start": 331,
"end": 359,
"text": "(Stockmeyer and Meyer, 1973)",
"ref_id": "BIBREF19"
},
{
"start": 454,
"end": 486,
"text": "(Aho et al., 1974, page 410-419)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The KK Algorithm",
"sec_num": "2."
},
{
"text": "In contrast to the KK algorithm which introduces \"brackets everywhere only to restrict their occurrence subsequently, our algorithm introduces context symbols just when and where they are needed. Furthermore, the number of intermediate transducers necessary in the construction of the rules is smaller than in the KK algorithm, and each of the transducers can be constructed more directly and efficiently from the primitive expressions of the rule, ~, ~, A, p.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": "3.1."
},
{
"text": "A transducer corresponding to the left-toright obligatory rule \u00a2 --* \u00a2/A p can be obtained by composition of five transducers:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": "3.1."
},
{
"text": "r o f o replace o 11 o 12",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": "3.1."
},
{
"text": "(3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": "3.1."
},
{
"text": "1. The transducer r introduces in a string a marker > before every instance of p. For reasons that will become clear we will notate this",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": "3.1."
},
{
"text": "as Z* p --~ E* > p.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": "3.1."
},
{
"text": "2. The transducer f introduces markers <1 and <2 before each instance of ~ that is followed by >:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": "3.1."
},
{
"text": "u u {>})'{<1, <2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": "3.1."
},
{
"text": "}5 >. In other words, this transducer/harks just those ~b that occur before p. 3. The replacement transducer replace replaces ~b with ~ in the context <1 ~b >, simultaneously deleting > in all positions ( Figure 2 ). Since >, <1, and <2 need to be ignored when determining an occurrence of ~b, there are loops over the transitions >: c, <1: \u00a2, <~: c at all states of \u00a2, or equivalently of the states of the cross product transducer \u00a2 \u00d7 ~. 4. The transducer 11 admits only those strings in which occurrences of <1 are preceded by A and deletes <l at such occurrences:",
"cite_spans": [],
"ref_spans": [
{
"start": 205,
"end": 213,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Overview",
"sec_num": "3.1."
},
{
"text": "5. The transducer 12 admits only those strings in which occurrences of <2 are not preceded by A and deletes <~ at such occurrences: 2*X <2-~ ~*~.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": "3.1."
},
{
"text": "Clearly the composition of these transducers leads to the desired result. The construction of the transducer replace is straightforward. In the following, we show that the construction of the other four transducers is also very simple, and that it only requires the determinization of 3 automata and additional work linear (time and space) in the size of the determinized automata. which inserts a marker after all prefixes of a string that match a particular regular expression. Given a regular expression fl defined on the alphabet E, one can construct, using classical algorithms (Aho et al., 1986) , a deterministic automaton a representing E*fl. As with the KK algorithm, one can obtain from a a transducer X = Id(a) simply by assigning to each transition the same output label as the input label. We can easily transform X into a new transducer r such that it inserts an arbitrary marker ~ after each occurrence of a pattern described by ~. To do so, we make final the nonfinal states of X and for any final state q of X we create a new state q~, a copy of q. Thus, q' has the same transitions as q, and qP is a final state. We then make q non-final, remove the transitions leaving q and add a transition from q to q' with input label the empty word c, and output ~. Proposition 1 Let ~ be a deterministic automaton representing E*/3, then the transducer r obtained as described above is a transducer postmarking occurrences of fl in a string ofF* by #.",
"cite_spans": [
{
"start": 583,
"end": 601,
"text": "(Aho et al., 1986)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": "3.1."
},
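The TYPE 1 marker construction above can be sketched in a few lines. Rather than materializing the transducer (copy q' of each final state q, with an ε:# transition from q to q'), this sketch simulates its unique run over an input string, which is possible precisely because a deterministic automaton for Σ*β is complete. The names (`mark`, `delta`, `finals`) and the example DFA for Σ*ab are ours, for illustration only.

```python
def mark(s, start, delta, finals, marker="#"):
    """Emit marker after every prefix of s that ends in a pattern of beta,
    given a complete DFA (start, delta, finals) for Sigma* beta."""
    q, out = start, []
    for a in s:
        q = delta[(q, a)]      # completeness: a transition exists for every (q, a)
        out.append(a)
        if q in finals:        # the prefix read so far lies in Sigma* beta
            out.append(marker)
    return "".join(out)

# Complete DFA for Sigma* 'ab' over Sigma = {a, b}:
# state 0 = no progress, 1 = just read 'a', 2 = just read 'ab' (final).
delta = {(0, "a"): 1, (0, "b"): 0,
         (1, "a"): 1, (1, "b"): 2,
         (2, "a"): 1, (2, "b"): 0}
```

With this DFA, `mark("aabab", 0, delta, {2})` inserts `#` after each occurrence of ab, yielding `aab#ab#`.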
{
"text": "The proof is based on the observation that a deterministic automaton representing E*/~ is necessarily complete 5. Notice that nondeterministic automata representing ~*j3 are not necessarily complete. Let q be a state of a and let u E ~* be a string reaching q6. Let v be a string described by the regular expression ft. Then, for any a E ~, uav is in ~*~. Hence, uav is accepted by the automaton a, and, since ~ is deterministic, there exists a transition labeled with a leaving q. Thus, one can read any string u E E* using the automaton a. Since by definition of a, the state reached when reading a prefix u ~ of u is final iff u ~ E ~*~, by construction, the transducer r inserts the symbol # after the prefix u ~ iff u ~ ends with a pattern of ft. This ends the proof of the proposition, t3",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Proof.",
"sec_num": null
},
{
"text": "In some cases, one wishes to check that any occurrence of # in a string s is preceded (or followed) by an occurrence of a pattern of 8. We shall say that the corresponding transducers are of TYPE 2. They play the role of a filter. Here again, they can be defined from a deterministic automaton representing E*B. Figure 5 illustrates the modifications to make from the automaton of figure 3. The symbols # should only appear at final states and must be erased. The loop # : e added at final states of Id(c~) is enough for that purpose.",
"cite_spans": [],
"ref_spans": [
{
"start": 312,
"end": 320,
"text": "Figure 5",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Markers of TYPE 2",
"sec_num": null
},
{
"text": "All states of the transducer are then made final since any string conforming to this restriction is acceptable: cf. the transducer !1 for A above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Markers of TYPE 2",
"sec_num": null
},
{
"text": "#:E 5An automaton A is complete iff at any state q and for any element a of the alphabet ~ there exists at least one transition leaving q labeled with a. In the case of deterministic automata, the transition is unique.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Markers of TYPE 2",
"sec_num": null
},
{
"text": "6We assume all states of a accessible. This is true if a is obtained by determinization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Markers of TYPE 2",
"sec_num": null
},
{
"text": "In other cases, one wishes to check the reverse constraint, that is that occurrences of # in the string s are not preceded (or followed) by any occurrence of a pattern of ft. The transformation then simply consists of adding a loop at each nonfinal state of Id(a), and of making all states final.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Markers of TYPE 3",
"sec_num": null
},
{
"text": "Thus, a state such as that of figure 6 is transa:a c:c Figure 6 : Non-final state q of a.",
"cite_spans": [],
"ref_spans": [
{
"start": 55,
"end": 63,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Markers of TYPE 3",
"sec_num": null
},
{
"text": "formed into that of figure 5. We shall say that the corresponding transducer is of TYPE 3: cf. the transducer 12 for ~.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Markers of TYPE 3",
"sec_num": null
},
{
"text": "The construction of these transducers (TYPE 1-3) can be generalized in various ways. In particular:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Markers of TYPE 3",
"sec_num": null
},
{
"text": "\u2022 One can add several alternative markers {#1,'\", #k} after each occurrence of a pattern of 8 in a string. The result is then an automaton with transitions labeled with, for instance, ~1,'\" \", ~k after each pattern of fl: cf. transducer f for \u00a2 above.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Markers of TYPE 3",
"sec_num": null
},
{
"text": "\u2022 Instead of inserting a symbol, one can delete a symbol which would be necessarily present after each occurrence of a pattern of 8.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Markers of TYPE 3",
"sec_num": null
},
{
"text": "For any regular expression a, define M arker ( a, type, deletions, insertions) as the transducer of type type constructed as previously described from a deterministic automaton representing a, insertions and deletions being, respectively, the set of insertions and deletions the transducer makes.",
"cite_spans": [
{
"start": 45,
"end": 78,
"text": "( a, type, deletions, insertions)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Markers of TYPE 3",
"sec_num": null
},
{
"text": "Proposition 2 For any regular expression a, Marker(a, type, deletions, insertions) can be constructed from a deterministic automaton representing a in linear time and space with respect to the size of this automaton.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Markers of TYPE 3",
"sec_num": null
},
{
"text": "Proof. We proved in the previous proposition that the modifications do indeed lead to the desired transducer for TYPE 1. The proof for other cases is similar. That the construction is linear in space is clear since at most one additional transition and state is created for final or non-final states 7. The overall time complexity of the construction is linear, since the construction of ld(a) is linear in the ~For TYPE 2 and TYPE 3, no state is added but only a transition per final or non-final state. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Markers of TYPE 3",
"sec_num": null
},
{
"text": "number of transitions of a and that other modifications consisting of adding new states and transitions and making states final or not are also linear. D",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(4) (5) (6) (7)",
"sec_num": null
},
{
"text": "We just showed that Marker(a,type, deletions, insertions) can be constructed in a very efficient way. Figure 7 gives the expressions of the four transducers r, f, ll, and 12 using Marker.",
"cite_spans": [],
"ref_spans": [
{
"start": 102,
"end": 110,
"text": "Figure 7",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "(4) (5) (6) (7)",
"sec_num": null
},
{
"text": "Thus, these transducers can be constructed very efficiently from deterministic automata representing s ~*reverse(p), (~ O {>})* reverse(t> >), and E*,~. The construction of r and f requires two reverse operations. This is because these two transducers insert material before p or \u00a2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "(4) (5) (6) (7)",
"sec_num": null
},
{
"text": "In many applications, in particular in areas related to speech, one wishes not only to give all possible analyses of some input, but also to give some measure of how likely each of the analyses is. One can then generalize replacements by considering extended regular expressions, namely, using the terminology of formal language theory, rational power series (Berstel and Reutenauer, 1988; Salomaa and Soittola, 1978) .",
"cite_spans": [
{
"start": 359,
"end": 389,
"text": "(Berstel and Reutenauer, 1988;",
"ref_id": null
},
{
"start": 390,
"end": 417,
"text": "Salomaa and Soittola, 1978)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extension to Weighted Rules",
"sec_num": "4."
},
{
"text": "The rational power series we consider here are functions mapping ~* to ~+ U {oo) which can be described by regular expressions over the alphabet (T~+ U {co}) x ~. S = (4a)(2b)*(3b) is an example of rational power series. It defines a function in the following way: it associates a non-null number only with the strings recognized by the regular expression ab*b. This number is obtained by adding the coefficients involved in the recognition of the string. The value associated with abbb, for instance, is (S, abbb) = 4 + 2 + 2 + 3 = 11.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extension to Weighted Rules",
"sec_num": "4."
},
{
"text": "In general, such extended regular expressions can be redundant. Some strings can be matched SAs in the KK algorithm we denote by \u00a2> the set of the strings described by \u00a2 containing possibly occurrences of > at any position. In the same way, subscripts such as >:> for a transducer r indicate that loops by >:> are added at all states of r. We denote by reverse(a) the regular expression describing exactly the reverse strings of a if a is a regular expression, or the reverse transducer of a if a is a transducer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extension to Weighted Rules",
"sec_num": "4."
},
{
"text": "in different ways with distinct coefficients. The value associated with those strings is then the minimum of all possible results. S' = (2a)(3b)(4b) + (5a)(3b*) matches abb with the different weights 2+3+4",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "235",
"sec_num": null
},
{
"text": "--9 and 5+3+3 = 11. The minimum of the two is the value associated with abb: (S', abb) = 9. Non-negative numbers in the definition of these power series are often interpreted as the negative logarithm of probabilities. This explains our choice of the operations: addition of the weights along the string recognition and min, since we are only interested in that result which has the highest probability 9.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "235",
"sec_num": null
},
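The min/+ evaluation just described can be made concrete by representing a series as a small weighted automaton and computing (S, x) as a shortest distance over the tropical semiring: weights add along a path, and alternative paths combine by min. The names (`weight`, `arcs`) and the encoding of S' = (2a)(3b)(4b) + (5a)(3b*) as explicit arcs are ours, a sketch rather than the paper's implementation.

```python
INF = float("inf")

def weight(x, start, final, arcs):
    """Value (S, x) of a series given as a weighted automaton.

    arcs: list of (src, label, weight, dst) transitions.
    Tropical semiring: + along paths, min across paths; INF = no match."""
    d = {start: 0.0}                 # the tropical "one" (path of weight 0)
    for a in x:
        nd = {}
        for (src, lab, w, dst) in arcs:
            if lab == a and src in d:
                nd[dst] = min(nd.get(dst, INF), d[src] + w)
        d = nd
    return min((w for q, w in d.items() if q in final), default=INF)

# S' = (2a)(3b)(4b) + (5a)(3b*): two branches sharing the start state 0;
# final states are 3 (end of first branch) and 4 (end of second branch).
arcs = [(0, "a", 2, 1), (1, "b", 3, 2), (2, "b", 4, 3),   # (2a)(3b)(4b)
        (0, "a", 5, 4), (4, "b", 3, 4)]                   # (5a)(3b*)
```

Evaluating `weight("abb", 0, {3, 4}, arcs)` gives min(2+3+4, 5+3+3) = 9, matching (S', abb) = 9 in the text.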
{
"text": "Rewrite rules can be generalized by letting \u00a2 be a rational power series. The result of the application of a generalized rule to a string is then a set of weighted strings which can be represented by a weighted automaton. Consider for instance the following rule, which states that an abstract nasal, denoted N, is rewritten as m in the context of a following labial:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "235",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "Y ---* m/__[+labial]",
"eq_num": "(8)"
}
],
"section": "235",
"sec_num": null
},
{
"text": "z Now suppose that this is only probabilistically true, and that while ninety percent of the time N does indeed become m in this environment, about ten percent of the time in real speech it becomes n. Converting from probabilities to weights, one would say that N becomes m with weight a = -log(0.9), and n with weight fl = -log(0.1), in the stated environment. One could represent this by the following rule:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "235",
"sec_num": null
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "N --* am + fin/__[+labial]",
"eq_num": "(9)"
}
],
"section": "235",
"sec_num": null
},
{
"text": "We define Weighted finite-state transducers as transducers such that in addition to input and output labels, each transition is labeled with a weight.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "235",
"sec_num": null
},
{
"text": "The result of the application of a weighted transducer to a string, or more generally to an automaton is a weighted automaton. The corresponding operation is similar to the unweighted case.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "235",
"sec_num": null
},
{
"text": "However, the weight of the transducer and those of the string or automaton need to be combined too, here added, during composition (Pereira et al., 1994) .",
"cite_spans": [
{
"start": 131,
"end": 153,
"text": "(Pereira et al., 1994)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "235",
"sec_num": null
},
{
"text": "9Using the terminology of the theory of languages, the functions we consider here are power series defined on the tropical semiring (7~+U{oo}, min, +, (x), 0) (Kuich and Salomaa, 1986) . We have generalized the composition operation to the weighted case by introducing this combination of weights. The algorithm we described in the previous sections can then also be used to compile weighted rewrite rules.",
"cite_spans": [
{
"start": 159,
"end": 184,
"text": "(Kuich and Salomaa, 1986)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "235",
"sec_num": null
},
{
"text": "As an example, the obligatory rule 9 can be represented by the weighted transducer of Figure 8 10 . The following theorem extends to the weighted case the assertion proved by Kaplan and Kay (1994) .",
"cite_spans": [
{
"start": 175,
"end": 196,
"text": "Kaplan and Kay (1994)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 86,
"end": 97,
"text": "Figure 8 10",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "235",
"sec_num": null
},
{
"text": "Theorem 1 A weighted rewrite rule of the type defined above that does not rewrite its noncontextual part can be represented by a weighted finite-state transducer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "235",
"sec_num": null
},
{
"text": "Proof. The construction we described in the previous section also provides a constructive proof of this theorem in the unweighted case. In case \u00a2 is a power series, one simply needs to use in that construction a weighted finite-state transducer representing \u00a2. By definition of composition of weighted transducers, or multiplication of power series, the weights are then used in a way consistent with the definition of the weighted contextdependent rules, o",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "235",
"sec_num": null
},
{
"text": "In order to compare the performance of the Mgorithm presented here with KK, we timed both algorithms on the compilation of individual rules taken from the following set ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5."
},
{
"text": "1\u00b0We here use the symbol ~ to denote all letters different from b, rn, n, p, and N.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5."
},
{
"text": "In other words, we tested twenty-two rules in which the left context or the right context varies in length from zero to ten occurrences of c. For our experiments, we used the alphabet of a realistic application, the text analyzer for the Bell Laboratories German text-to-speech system, which consists of 194 labels. All tests were run on a Silicon Graphics IRIS Indigo 4000, 100 MHz IP20 processor, 128 MB RAM, running IRIX 5.2. Figure 9 shows the relative performance of the two algorithms for the left context: the performance of both algorithms appears roughly linear in the length of the left context, but KK has a worse constant, due to the larger number of operations involved. Figure 10 shows the equivalent data for the right context. At first glance the data looks similar to that for the left context, until one notices that in Figure 10 we have plotted the time on a log scale: the KK algorithm is hyperexponential.",
"cite_spans": [],
"ref_spans": [
{
"start": 427,
"end": 435,
"text": "Figure 9",
"ref_id": "FIGREF7"
},
{
"start": 688,
"end": 697,
"text": "Figure 10",
"ref_id": "FIGREF8"
},
{
"start": 842,
"end": 851,
"text": "Figure 10",
"ref_id": "FIGREF8"
}
],
"eq_spans": [],
"section": "236",
"sec_num": null
},
{
"text": "What is the reason for this performance degradation in the right context? The culprits turn out to be the two intersectands in the expression of Rightcontext(ρ, <, >) in Figure 1. Consider for example the right-hand intersectand, namely the complement of Σ0* >ρ>0 Σ0* ¬> Σ0*. As previously indicated, the complementation algorithm requires determinization, and the determinization of automata representing expressions of the form Σ*α, where α is a regular expression, is often very expensive, especially when the expression α is already complex, as it is in this case. Figure 11 plots the behavior of determinization on the expression Σ0* >ρ>0 Σ0* ¬> Σ0* for each of the rules in the set a → b / ___ c^k (k ∈ [0, 10]). On the horizontal axis is the number of arcs of the non-deterministic input machine, and on the vertical axis the log of the number of arcs of the deterministic machine, i.e., the machine resulting from the determinization algorithm without any minimization. The perfect linearity indicates exponential time and space behavior, and this in turn explains the observed difference in performance. In contrast, the construction of the right-context machine in our algorithm involves only the single determinization of the automaton representing Σ*ρ, and is thus much less expensive. The comparison just discussed involves a rather artificial ruleset, but the differences in performance that we have highlighted show up in real applications. Consider two sets of pronunciation rules from the Bell Laboratories German text-to-speech system; the size of the alphabet for this ruleset is 194, as noted above. The first ruleset, consisting of pronunciation rules for the orthographic vowel <5>, contains twelve rules, and the second ruleset, which deals with the orthographic vowel <a>, contains twenty-five rules. In the actual application of the rule compiler to these rules, one compiles the individual rules in each ruleset one by one, composes them together in the order written, compacting after each composition, and derives a single transducer for each set. When done off-line, these operations of composition and compaction dominate the time taken to construct the transducer for each individual rule. The difference between the two algorithms nevertheless remains clear for these two sets of rules. Table 1 shows for each algorithm the times in seconds for the overall construction, and the number of states and arcs of the output transducers.",
"cite_spans": [],
"ref_spans": [
{
"start": 170,
"end": 178,
"text": "Figure 1",
"ref_id": null
},
{
"start": 584,
"end": 593,
"text": "Figure 11",
"ref_id": null
},
{
"start": 2351,
"end": 2358,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "236",
"sec_num": null
},
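The exponential blowup described above can be reproduced on a minimal instance of the Σ*α pattern. The sketch below (our own illustrative construction, not the paper's ruleset) determinizes the textbook NFA for "the k-th symbol from the end is a", whose smallest DFA has 2^k states:

```python
# Subset construction on the classic NFA for Sigma* a Sigma^(k-1) over {a, b}
# (strings whose k-th symbol from the end is 'a').  This is an instance of the
# Sigma*-alpha pattern: the resulting DFA needs 2^k states, which illustrates
# why determinizing such expressions is expensive.
def nfa_for_k(k):
    # NFA states 0..k; state 0 loops on both symbols and may "guess" an 'a'.
    delta = {(0, "a"): {0, 1}, (0, "b"): {0}}
    for i in range(1, k):
        for s in "ab":
            delta[(i, s)] = {i + 1}
    return delta

def determinize(delta, start=0):
    # Standard subset construction; returns the set of reachable subsets.
    start_set = frozenset({start})
    seen, stack = {start_set}, [start_set]
    while stack:
        cur = stack.pop()
        for s in "ab":
            nxt = frozenset(q for st in cur for q in delta.get((st, s), ()))
            if nxt and nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

for k in range(1, 5):
    print(k, len(determinize(nfa_for_k(k))))  # 2, 4, 8, 16: exponential in k
```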
{
"text": "We briefly described a new algorithm for compiling context-dependent rewrite rules into finite-state transducers. Several additional methods can be used to make this algorithm even more efficient. The automata determinizations needed for this algorithm are of a specific type: they represent expressions of the type Σ*φ, where φ is a regular expression. Given a deterministic automaton representing φ, such determinizations can be performed in a more efficient way using failure functions (Mohri, 1995). Moreover, the corresponding determinization is independent of Σ, which can be very large in some applications; it depends only on the alphabet of the automaton representing φ.",
"cite_spans": [
{
"start": 493,
"end": 506,
"text": "(Mohri, 1995)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6."
},
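As a rough illustration of the failure-function idea cited above, here is the single-pattern (KMP-style) special case: a deterministic recognizer for Σ*p built in O(|p|) time, independent of the alphabet size. This is only a sketch under that simplifying assumption, not the construction of Mohri (1995):

```python
# KMP-style deterministic recognizer for Sigma* p, i.e. strings ending in
# the fixed non-empty word p.  The failure function is built in O(|p|),
# independently of the alphabet size.  (Single-pattern illustration only;
# not the general construction of Mohri, 1995.)
def failure_function(p):
    fail, k = [0] * len(p), 0
    for i in range(1, len(p)):
        while k > 0 and p[i] != p[k]:
            k = fail[k - 1]
        if p[i] == p[k]:
            k += 1
        fail[i] = k
    return fail

def accepts(text, p):
    # State k = length of the longest suffix of the input read so far that
    # is a prefix of p; accept iff we end in state |p|.
    fail, k = failure_function(p), 0
    for c in text:
        if k == len(p):
            k = fail[k - 1]
        while k > 0 and c != p[k]:
            k = fail[k - 1]
        if c == p[k]:
            k += 1
    return k == len(p)

print(accepts("abcab", "ab"), accepts("aba", "ab"))  # True False
```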
{
"text": "One can devise an on-the-fly implementation of the composition algorithm leading to the final transducer representing a rule. Only the necessary part of the intermediate transducers is then expanded for a given input (Pereira et al., 1994) .",
"cite_spans": [
{
"start": 217,
"end": 239,
"text": "(Pereira et al., 1994)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6."
},
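A minimal sketch of such on-the-fly composition (a toy data layout of our own; epsilon transitions and weights are omitted): states of the composed machine are pairs (q1, q2) whose outgoing arcs are computed and cached only when requested, so intermediate transducers are never fully expanded.

```python
# On-the-fly composition sketch.  A transducer is represented as a dict
# state -> list of (input, output, next_state).  Arcs of the composed
# machine are computed lazily per state pair and cached.
def lazy_compose(t1, t2):
    cache = {}
    def arcs(pair):
        if pair not in cache:
            q1, q2 = pair
            cache[pair] = [(i1, o2, (n1, n2))
                           for (i1, o1, n1) in t1.get(q1, [])
                           for (i2, o2, n2) in t2.get(q2, [])
                           if o1 == i2]   # T1's output must match T2's input
        return cache[pair]
    return arcs

# T1 rewrites a -> b, T2 rewrites b -> c; their composition rewrites a -> c.
arcs = lazy_compose({0: [("a", "b", 0)]}, {0: [("b", "c", 0)]})
print(arcs((0, 0)))  # [('a', 'c', (0, 0))]
```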
{
"text": "The resulting transducer representing a rule is often subsequentiable or p-subsequentiable. It can then be determinized and minimized (Mohri, 1994) . This both makes the use of the transducer time efficient and reduces its size.",
"cite_spans": [
{
"start": 134,
"end": 147,
"text": "(Mohri, 1994)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6."
},
{
"text": "We also indicated an extension of the theory of rule-compilation to the case of weighted rules, which compile into weighted finite-state transducers. Many algorithms used in the finite-state theory and in their applications to natural language processing can be extended in the same way.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6."
},
{
"text": "To date the main serious application of this compiler has been to developing text-analyzers for text-to-speech systems at Bell Laboratories (Sproat, 1996) : partial to more-or-less complete analyzers have been built for Spanish, Italian, French, Romanian, German, Russian, Mandarin and Japanese. However, we hope to also be able to use the compiler in serious applications in speech",
"cite_spans": [
{
"start": 140,
"end": 154,
"text": "(Sproat, 1996)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6."
}
],
"back_matter": [
{
"text": "We wish to thank several colleagues of AT&T/Bell Labs, in particular Fernando Pereira and Michael Riley, for stimulating discussions about this work, and Bernd Möbius for providing the German pronunciation rules cited herein.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "The design and analysis of computer algorithms",
"authors": [
{
"first": "Alfred",
"middle": [
"V"
],
"last": "Aho",
"suffix": ""
},
{
"first": "John",
"middle": [
"E"
],
"last": "Hopcroft",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [
"D"
],
"last": "Ullman",
"suffix": ""
}
],
"year": 1974,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alfred V. Aho, John E. Hopcroft, and Jeffrey D. Ullman. 1974. The design and analysis of computer algorithms. Addison Wesley: Reading, MA.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Compilers, Principles, Techniques and Tools",
"authors": [
{
"first": "Alfred",
"middle": [
"V"
],
"last": "Aho",
"suffix": ""
},
{
"first": "Ravi",
"middle": [],
"last": "Sethi",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [
"D"
],
"last": "Ullman",
"suffix": ""
}
],
"year": 1986,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alfred V. Aho, Ravi Sethi, and Jeffrey D. Ullman. 1986. Compilers, Principles, Techniques and Tools. Addison Wesley: Reading, MA.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Rational Series and Their Languages",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jean Berstel and Christophe Reutenauer. 1988. Rational Series and Their Languages. Springer-Verlag: Berlin-New York.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Transductions and Context-Free Languages",
"authors": [
{
"first": "Jean",
"middle": [],
"last": "Berstel",
"suffix": ""
}
],
"year": 1979,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jean Berstel. 1979. Transductions and Context-Free Languages. Teubner Studienbücher: Stuttgart.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Automata, Languages and Machines, volume A-B",
"authors": [
{
"first": "Samuel",
"middle": [],
"last": "Eilenberg",
"suffix": ""
}
],
"year": 1974,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel Eilenberg. 1974-1976. Automata, Languages and Machines, volume A-B. Academic Press.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Computers and Intractability",
"authors": [
{
"first": "R",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "David",
"middle": [
"S"
],
"last": "Garey",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 1979,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael R. Garey and David S. Johnson. 1979. Computers and Intractability. Freeman and Company, New York.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Formal Aspects of Phonological Description",
"authors": [
{
"first": "C",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Douglas",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 1972,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Douglas Johnson. 1972. Formal Aspects of Phonological Description. Mouton, The Hague.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Regular models of phonological rule systems",
"authors": [
{
"first": "M",
"middle": [],
"last": "Ronald",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Kaplan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kay",
"suffix": ""
}
],
"year": 1994,
"venue": "Computational Linguistics",
"volume": "",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ronald M. Kaplan and Martin Kay. 1994. Regular models of phonological rule systems. Computational Linguistics, 20(3).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The replace operator",
"authors": [
{
"first": "Lauri",
"middle": [],
"last": "Karttunen",
"suffix": ""
}
],
"year": 1995,
"venue": "33 rd Meeting of the Association for Computational Linguistics (ACL 95), Proceedings of the Conference, MIT, Cambridge, Massachussetts. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lauri Karttunen. 1995. The replace operator. In 33rd Meeting of the Association for Computational Linguistics (ACL 95), Proceedings of the Conference, MIT, Cambridge, Massachusetts. ACL.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Semirings, Automata, Languages",
"authors": [
{
"first": "Werner",
"middle": [],
"last": "Kuich",
"suffix": ""
},
{
"first": "Arto",
"middle": [],
"last": "Salomaa",
"suffix": ""
}
],
"year": 1986,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Werner Kuich and Arto Salomaa. 1986. Semirings, Automata, Languages. Springer-Verlag: Berlin-New York.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Commentary on Kaplan and Kay",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Liberman",
"suffix": ""
}
],
"year": 1994,
"venue": "Computational Linguistics",
"volume": "",
"issue": "3",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Liberman. 1994. Commentary on Kaplan and Kay. Computational Linguistics, 20(3).",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "The uniform halting problem for one-rule semi-Thue systems",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "Mcnaughton",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert McNaughton. 1994. The uniform halting problem for one-rule semi-Thue systems. Technical Report 94-18, Department of Computer Science, Rensselaer Polytechnic Institute, Troy, New York.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Compact representations by finite-state transducers",
"authors": [
{
"first": "Mehryar",
"middle": [],
"last": "Mohri",
"suffix": ""
}
],
"year": 1994,
"venue": "32 nd Meeting of the Association for Computational Linguistics (ACL 94), Proceedings of the Conference",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mehryar Mohri. 1994. Compact representations by finite-state transducers. In 32nd Meeting of the Association for Computational Linguistics (ACL 94), Proceedings of the Conference, Las Cruces, New Mexico. ACL.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Matching patterns of an automaton",
"authors": [
{
"first": "Mehryar",
"middle": [],
"last": "Mohri",
"suffix": ""
}
],
"year": 1995,
"venue": "Lecture Notes in Computer Science",
"volume": "937",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mehryar Mohri. 1995. Matching patterns of an automaton. Lecture Notes in Computer Science, 937.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Weighted rational transductions and their application to human language processing",
"authors": [
{
"first": "C",
"middle": [
"N"
],
"last": "Fernando",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Pereira",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Riley",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sproat",
"suffix": ""
}
],
"year": 1994,
"venue": "ARPA Workshop on Human Language Technology. Advanced Research Projects Agency",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fernando C. N. Pereira, Michael Riley, and Richard Sproat. 1994. Weighted rational transductions and their application to human language processing. In ARPA Workshop on Human Language Technology. Advanced Research Projects Agency.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The Mathematical Theory of L Systems",
"authors": [
{
"first": "Grzegorz",
"middle": [],
"last": "Rozenberg",
"suffix": ""
},
{
"first": "Arto",
"middle": [],
"last": "Salomaa",
"suffix": ""
}
],
"year": 1980,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Grzegorz Rozenberg and Arto Salomaa. 1980. The Mathematical Theory of L Systems. Academic Press, New York.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Automata-Theoretic Aspects of Formal Power Series",
"authors": [
{
"first": "Arto",
"middle": [],
"last": "Salomaa",
"suffix": ""
},
{
"first": "Matti",
"middle": [],
"last": "Soittola",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Arto Salomaa and Matti Soittola. 1978. Automata-Theoretic Aspects of Formal Power Series. Springer-Verlag: Berlin-New York.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Multilingual text analysis for text-to-speech synthesis",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Sproat",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the ECAI-96 Workshop on Extended Finite State Models of Language",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Sproat. 1996. Multilingual text analysis for text-to-speech synthesis. In Proceedings of the ECAI-96 Workshop on Extended Finite State Models of Language, Budapest, Hungary. European Conference on Artificial Intelligence.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Word problems requiring exponential time",
"authors": [
{
"first": "L",
"middle": [
"J"
],
"last": "Stockmeyer",
"suffix": ""
},
{
"first": "A",
"middle": [
"R"
],
"last": "Meyer",
"suffix": ""
}
],
"year": 1973,
"venue": "Proceedings of the 5 th Annual ACM Symposium on Theory of Computing",
"volume": "",
"issue": "",
"pages": "1--9",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. J. Stockmeyer and A. R. Meyer. 1973. Word problems requiring exponential time. In Proceedings of the 5th Annual ACM Symposium on Theory of Computing. Association for Computing Machinery, New York, 1-9.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Replacement transducer replace in the obligatory left-to-right case."
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Figures 3 and 4 illustrate the transformation of X into T. Figure 3: Final state q of X with entering and leaving transitions. Figure 4: States and transitions of T obtained by modifications of those of X."
},
"FIGREF2": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Filter transducer, TYPE 2."
},
"FIGREF3": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "r = [reverse(Marker(Σ* reverse(ρ), 1, {>}, ∅))]; f = [reverse(Marker((Σ ∪ {>})* reverse(φ>), 1, {<1, <2}, ∅))]; l1 = [Marker(Σ* λ, 2, ∅, {<1})]; l2 = [Marker(Σ* λ, 3, ∅, {<2})]. Expressions of r, f, l1, and l2 using Marker."
},
"FIGREF4": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Transducer representing the rule 9."
},
"FIGREF7": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Compilation times for rules of the form a → b / c^k ___, (k ∈ [0, 10])."
},
"FIGREF8": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Compilation times for rules of the form a → b / ___ c^k, (k ∈ [0, 10])."
},
"TABREF0": {
"text": "Let us start by considering the problem of constructing what we shall call a TYPE I transducer,",
"type_str": "table",
"num": null,
"content": "<table><tr><td>3.2. Markers</td></tr><tr><td>Markers of TYPE 1</td></tr></table>",
"html": null
},
"TABREF1": {
"text": "Comparison in a real example.",
"type_str": "table",
"num": null,
"content": "<table><tr><td>I Rulesll</td><td/><td>KK</td><td>II</td><td/><td>New</td><td>I</td></tr><tr><td/><td>time</td><td colspan=\"2\">space</td><td>time</td><td colspan=\"2\">space</td></tr><tr><td/><td colspan=\"2\">(s) states</td><td>arcs</td><td colspan=\"2\">(s) states</td><td>arcs</td></tr><tr><td>&lt;5&gt;</td><td>62</td><td>412</td><td colspan=\"2\">50,475 47</td><td>394</td><td>47,491</td></tr><tr><td>&lt;a&gt;</td><td colspan=\"6\">284 1,939 215,721 240 1,927 213,408</td></tr></table>",
"html": null
}
}
}
}