{
"paper_id": "P90-1003",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:39:04.520121Z"
},
"title": "PROSODY, SYNTAX AND PARSING",
"authors": [
{
"first": "John",
"middle": [],
"last": "Bear",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "SRI International",
"location": {
"addrLine": "333 Ravenswood Avenue Menlo Park",
"postCode": "94025",
"region": "California"
}
},
"email": ""
},
{
"first": "Patti",
"middle": [],
"last": "Price",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "SRI International",
"location": {
"addrLine": "333 Ravenswood Avenue Menlo Park",
"postCode": "94025",
"region": "California"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We describe the modification of a grammar to take advantage of prosodic information provided by a speech recognition system. This initial study is limited to the use of relative duration of phonetic segments in the assignment of syntactic structure, specifically in ruling out alternative parses in otherwise ambiguous sentences. Taking advantage of prosodic information in parsing can make a spoken language system more accurate and more efficient, if prosodic-syntactic mismatches, or unlikely matches, can be pruned. We know of no other work that has succeeded in automatically extracting speech information and using it in a parser to rule out extraneous parses.",
"pdf_parse": {
"paper_id": "P90-1003",
"_pdf_hash": "",
"abstract": [
{
"text": "We describe the modification of a grammar to take advantage of prosodic information provided by a speech recognition system. This initial study is limited to the use of relative duration of phonetic segments in the assignment of syntactic structure, specifically in ruling out alternative parses in otherwise ambiguous sentences. Taking advantage of prosodic information in parsing can make a spoken language system more accurate and more efficient, if prosodic-syntactic mismatches, or unlikely matches, can be pruned. We know of no other work that has succeeded in automatically extracting speech information and using it in a parser to rule out extraneous parses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Prosodic information can mark lexical stress, identify phrasing breaks, and provide information useful for semantic interpretation. Each of these aspects of prosody can benefit a spoken language system (SLS). In this paper we describe the modification of a grammar to take advantage of prosodic information provided by a speech component. Though prosody includes a variety of acoustic phenomena used for a variety of linguistic effects, we limit this initial study to the use of relative duration of phonetic segments in the assignment of syntactic structure, specifically in ruling out alternative parses in otherwise ambiguous sentences. It is rare that prosody alone disambiguates otherwise identical phrases. However, it is also rare that any one source of information is the sole feature that separates one phrase from all competitors. Taking advantage of prosodic information in parsing can make a spoken language system more accurate and more efficient, if prosodic-syntactic mismatches, or unlikely matches, can be pruned out. Prosodic structure and syntactic structure are not, of course, completely identical. Rhythmic structures and the necessity of breathing influence the prosodic structure, but not the syntactic structure (Gee and Grosjean 1983, Cooper and Paccia-Cooper 1980). Further, there are aspects of syntactic structure that are not typically marked prosodically. Our goal is to show that at least some prosodic information can be automatically extracted and used to improve syntactic analysis. Other studies have pointed to possibilities for deriving syntax from prosody (see e.g., Gee and Grosjean 1983, Briscoe and Boguraev 1984, and Komatsu, Oohira, and Ichikawa 1989) but none to our knowledge have communicated speech information directly to a parser in a spoken language system.",
"cite_spans": [
{
"start": 1239,
"end": 1247,
"text": "(Gee and",
"ref_id": "BIBREF0"
},
{
"start": 1248,
"end": 1273,
"text": "Grosjean 1983, Cooper and",
"ref_id": null
},
{
"start": 1274,
"end": 1294,
"text": "Paccia-Cooper 1980 )",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For our corpus of sentences we selected a subset of a corpus developed previously (see Price et al. 1989) for investigating the perceptual role of prosodic information in disambiguating sentences. A set of 35 phonetically ambiguous sentence pairs of differing syntactic structure was recorded by professional FM radio news announcers. By phonetically ambiguous sentences, we mean sentences that consist of the same string of phones, i.e., that suprasegmental rather than segmental information is the basis for the distinction between members of the pairs. Members of the pairs were read in disambiguating contexts on days separated by a period of several weeks to avoid exaggeration of the contrast. In the earlier study listeners viewed the two contexts while hearing one member of the pair, and were asked to select the appropriate context for the sentence. The results showed that listeners can, in general, reliably separate phonetically and syntactically ambiguous sentences on the basis of prosody. The original study investigated seven types of structural ambiguity. The present study used a subset of the sentence pairs which contained prepositional phrase attachment ambiguities, or particle/preposition ambiguities (see Appendix).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus",
"sec_num": null
},
{
"text": "If naive listeners can reliably separate phonetically and structurally ambiguous pairs, what is the basis for this separation? In related work on the perception of prosodic information, trained phoneticians labeled the same sentences with an integer between zero and five inclusive between every two words. These numbers, 'prosodic break indices,' encode the degree of prosodic decoupling of neighboring words, the larger the number, the more of a gap or break between the words. We found that we could label such break indices with good agreement within and across labelers. In addition, we found that these indices quite often disambiguated the sentence pairs, as illustrated below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus",
"sec_num": null
},
{
"text": "\u2022 Marge 0 would 1 never 2 deal 0 in 2 any 0 guys",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus",
"sec_num": null
},
{
"text": "\u2022 Marge 1 would 0 never 0 deal 3 in 0 any 0 guise The break indices between 'deal' and 'in' provide a clear indication in this case whether the verb is 'deal-in' or just 'deal.' The larger of the two indices, 3, indicates that in that sentence, 'in' is not tightly coupled with 'deal' and hence is not likely to be a particle.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus",
"sec_num": null
},
{
"text": "So far we had established that naive listeners and trained listeners appear to be able to separate such ambiguous sentence pairs on the basis of prosodic information. If we could extract such information automatically perhaps we could make it available to a parser. We found a clue in an effort to assess the phonetic ambiguity of the sentence pairs. We used SRI's DECIPHER speech recognition system, constrained to recognize the correct string of words, to automatically label and time-align the sentences used in the earlier referenced study. The DECIPHER system is particularly well suited to this task because it can model and use very bushy pronunciation networks, accounting for much more detail in pronunciation than other systems. This extra detail makes it better able to time-align the sentences and is a stricter test of phonetic ambiguity. We used the DECIPHER system (Weintraub et al. 1989) to label and time-align the speech, and verified that the sentences were, by this measure as well as by the earlier perceptual verification, truly ambiguous phonetically. This meant that the information separating the members of the pairs was not in the segmental information, but in the suprasegmental information: duration, pitch and pausing. As a byproduct of the labeling and time alignment, we noticed that the durations of the phones could be used to separate members of the pairs. This was easy to see in phonetically ambiguous sentence pairs: normally the structure of duration patterns is obscured by intrinsic duration of phones and the contextual effects of neighboring phones. In the phonetically ambiguous pairs, there was no need to account for these effects in order to see the striking pattern in duration differences. If a human looking at the duration patterns could reliably separate the members of the pairs, there was hope for creating an algorithm to perform the task automatically. This task could not take advantage of such pairs, but would have to face the problem of intrinsic phone duration.",
"cite_spans": [
{
"start": 881,
"end": 904,
"text": "(Weintraub et al. 1989)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus",
"sec_num": null
},
{
"text": "Word break indices were generated automatically by normalizing phone duration according to estimated mean and variance, and combining the average normalized duration factors of the final syllable coda consonants with a pause factor. Let d\u0302i = (di \u2212 \u03bcj)/\u03c3j be the normalized duration of the ith phoneme in the coda, where \u03bcj and \u03c3j are the mean and standard deviation of duration for phone j. dp is the duration (in ms) of the pause following the word, if any. A set of word break indices is computed for all the words in a sentence as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus",
"sec_num": null
},
{
"text": "n\u0302 = (1/|A|) \u03a3i\u2208A d\u0302i + dp/70",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus",
"sec_num": null
},
{
"text": "The term dp/70 was actually hard-limited at 4, so as not to give pauses too much weight. The set A includes all coda consonants, but not the vowel nucleus unless the syllable ends in a vowel. Although the vowel nucleus provides some boundary cues, the lengthening associated with prominence can be confounded with boundary lengthening, and the algorithm was slightly more reliable without using vowel nucleus information. These indices n\u0302 are normalized over the sentence, assuming known sentence boundaries, to range from zero to five (the scale used for the initial perceptual labeling). The correlation coefficient between the hand-labeled break indices and the automatically generated break indices was very good: 0.85.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corpus",
"sec_num": null
},
{
"text": "Thus far, we have shown that naive and trained listeners can rely on suprasegmental information to separate ambiguous sentences, and we have shown that we can automatically extract information that correlates well with the perceptual labels. It remains to be shown how such information can be used by a parser. In order to do so we modified an already existing, and in fact reasonably large, grammar. The parser we use is the Core Language Engine developed at SRI in Cambridge (Alshawi et al. 1988).",
"cite_spans": [
{
"start": 476,
"end": 496,
"text": "(Alshawi et al. 1988",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Grammar",
"sec_num": null
},
{
"text": "Much of the modification of the grammar is done automatically. The first step is to systematically change all the rules of the form A \u2192 B C to be of the form A \u2192 B Link C, where Link is a new grammatical category, that of the prosodic break indices. Similarly, all rules with more than two right-hand-side elements need to have link nodes interleaved at every juncture: e.g., a rule A \u2192 B C D is changed into A \u2192 B Link1 C Link2 D.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Grammar",
"sec_num": null
},
{
"text": "Next, allowance must be made for empty nodes. It is common practice to have rules of the form NP \u2192 \u03b5 and PP \u2192 \u03b5 in order to handle wh-movement and relative clauses. These rules necessitate the incorporation into the modified grammar of a rule Link \u2192 \u03b5. Otherwise, a sentence such as a wh-question will not parse, because an empty node introduced by the grammar will either not be preceded by a link, or not be followed by one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Grammar",
"sec_num": null
},
{
"text": "The introduction of empty links needs to be constrained so as not to introduce spurious parses. If the only place the empty NP or PP etc. could fit into the sentence is at the end, then the only place the empty Link can go is right before it so there is no extra ambiguity introduced. However if an empty wh-phrase could be posited at a place somewhere other than the end of the sentence, then there is ambiguity as to whether it is preceded or followed by the empty link.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Grammar",
"sec_num": null
},
{
"text": "For instance, for the sentence, \"What did you see _ on Saturday?\" the parser would find both of the following possibilities:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Grammar",
"sec_num": null
},
{
"text": "\u2022 What L did L you L see L empty-NP empty-L on L Saturday? \u2022 What L did L you L see empty-L empty-NP L on L Saturday?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Grammar",
"sec_num": null
},
{
"text": "Hence the grammar must be made to automatically rule out half of these possibilities. This can be done by constraining every empty link to be followed immediately by an empty wh-phrase, or a constituent containing an empty wh-phrase on its left branch. It is fairly straightforward to incorporate this into the routine that automatically modifies the grammar. The rule that introduces empty links gives them a feature-value pair: empty_link=y. The rules that introduce other empty constituents are modified to add to the constituent the feature-value pair: trace_on_left_branch=y. The links zero through five are given the feature-value pair empty_link=n.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Grammar",
"sec_num": null
},
{
"text": "The default value for trace_on_left_branch is set to n so that all words in the lexicon have that value. Additionally, if Linki has empty_link=y then Ai+1 must have trace_on_left_branch=y. These modifications, incorporated into the grammar-modifying routine, suffice to eliminate the spurious ambiguity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Grammar",
"sec_num": null
},
{
"text": "Running the grammar through our procedure, to make the changes mentioned above, results in a grammar that gets the same number of parses for a sentence with links as the old grammar would have produced for the corresponding sentence without links. In order to make use of the prosodic information we still need to make an additional important change to the grammar: how does the grammar use this information? This is a vast area of research. The present study shows the feasibility of one particular approach. In this initial endeavor, we made the most conservative changes imaginable after examining the break indices on a set of sentences. We changed the rule N \u2192 N Link PP so that the value of the link must be between 0 and 2 inclusive (on a scale of 0-5) for the rule to apply. We made essentially the same change to the rule for the construction verb plus particle, VP \u2192 V Link PP, except that the value of the link must, in this case, be either 0 or 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setting Grammar Parameters",
"sec_num": "4"
},
{
"text": "After setting these two parameters we parsed each of the sentences in our corpus of 14 sentences, and compared the number of parses to the number of parses obtained without benefit of prosodic information. For half of the sentences, i.e., for one member of each of the sentence pairs, the number of parses remained the same. For the other members of the pairs, the number of parses was reduced, in many cases from two parses to one.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setting Grammar Parameters",
"sec_num": "4"
},
{
"text": "The actual sentences and labels are in the appendix. The incorporation of prosody resulted in a reduction of about 25% in the number of parses found, as shown in table 1. Parse times increase about 37%.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setting Grammar Parameters",
"sec_num": "4"
},
{
"text": "In the study by Price et al., the sentences with more major breaks were more reliably identified by the listeners. This is exactly what happens when we put these sentences through our parser too. The large prosodic gap between a noun and a following preposition, or between a verb and a following preposition provides exactly the type of information that our grammar can easily make use of to rule out some readings. Conversely, a small prosodic gap does not provide a reliable way to tell which two constituents combine. This coincides with Steedman's (1989) observation that syntactic units do not tend to bridge major prosodic breaks.",
"cite_spans": [
{
"start": 542,
"end": 559,
"text": "Steedman's (1989)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Setting Grammar Parameters",
"sec_num": "4"
},
{
"text": "We can construe the large break between two words, for example a verb and a preposition/particle, as indicating that the two do not combine to form a new slightly larger constituent in which they are sisters of each other. We cannot say that no two constituents may combine when they are separated by a large gap, only that the two smallest possible constituents, i.e., the two words, may not combine.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setting Grammar Parameters",
"sec_num": "4"
},
{
"text": "To do the converse with small gaps and larger phrases simply does not work. There are cases where there is a small gap between two phrases that are joined together. For example there can be a small gap between the subject NP of a sentence and the main VP, yet we do not want to say that the two words on either side of the juncture must form a constituent, e.g., the head noun and auxiliary verb.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setting Grammar Parameters",
"sec_num": "4"
},
{
"text": "The fact that parse times increase is due to the way in which prosodic information is incorporated into the text. The parser does a certain amount of work for each word, and the effect of adding break indices to the sentence is essentially to double the number of words that the parser must process. We expect that this overhead will constitute a less significant percentage of the parse time as the input sentences become more complex. We also hope to be able to reduce this overhead with a better understanding of the use of prosodic information and how it interacts with the parsing of spoken language.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setting Grammar Parameters",
"sec_num": "4"
},
{
"text": "After devising our strategy, changing the grammar and lexicon, running our corpus through the parser, and tabulating our results, we looked at some new data that we had not considered before, to get an idea of how well our methods would carry over. The new corpus we considered is from a recording of a short radio news broadcast. This time the break indices were put into the transcript by hand. There were twenty-two places in the text where our attachment strategy would apply. In eighteen of those, our strategy, or a very slight modification of it, would work properly in ruling out some incorrect parses and in not preventing the correct parse from being found. In the remaining four sentences, there seem to be other factors at work that we hope to be able to incorporate into our system in the future. For instance, it has been mentioned in other work that the length of a prosodic phrase, as measured by the number of words or syllables it contains, may affect the location of prosodic boundaries. We are encouraged by the fact that our strategy seems to work well in eighteen out of twenty-two cases on the news broadcast corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Corroboration From Other Data",
"sec_num": "5"
},
{
"text": "The sample of sentences used for this study is extremely small, and the principal test set used, the phonetically ambiguous sentences, is not independent of the set used to develop our system. We therefore do not want to make any exaggerated claims in interpreting our results. We believe though, that we have found a promising and novel approach for incorporating prosodic information into a natural language processing system. We have shown that some extremely common cases of syntactic ambiguity can be resolved with prosodic information, and that grammars can be modified to take advantage of prosodic information for improved parsing. We plan to test the algorithm for generating prosodic break indices on a larger set of sentences by more talkers. Changing from speech read by professional speakers to spontaneous speech from a variety of speakers will no doubt require modification of our system along several dimensions. The next steps in this research will include:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "\u2022 Investigating further the relationship between prosody and syntax, including the different roles of phrase breaks and prominences in marking syntactic structure,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "\u2022 Improving the prosodic labeling algorithm by incorporating intonation and syntactic/semantic information,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "\u2022 Incorporating the automatically labeled information in the parser of the SRI Spoken Language System (Moore, Pereira and Murveit 1989),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "\u2022 Modeling the break indices statistically as a function of syntactic structure,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "\u2022 Speeding up the parser when using the prosodic information; the expectation is that pruning out syntactic hypotheses that are incompatible with the prosodic pattern observed can both improve accuracy and speed up the parser overall.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
}
],
"back_matter": [
{
"text": "This work was supported in part by the National Science Foundation under NSF grant number IRI-8905249. The authors are indebted to the co-Principal Investigators on this project, Mari Ostendorf (Boston University) and Stefanie Shattuck-Hufnagel (MIT), for their roles in defining the prosodic infrastructure on the speech side of the speech and natural language integration. We thank Hy Murveit (SRI) and Colin Wightman (Boston University) for help in generating the phone alignments and duration normalizations, and Bob Moore for helpful comments on a draft. We thank Andrea Levitt and Leah Larkey for their help, many years ago, in developing fully voiced structurally ambiguous sentences without knowing what uses we would put them to. This work was also supported by the Defense Advanced Research Projects Agency under the Office of Naval Research contract N00014-85-C-0013. [3] W. Cooper and J. Paccia-Cooper (1980) Syntax and Speech, Harvard University Press, Cambridge, Massachusetts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": "7"
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Performance Structures: A Psycholinguistic and Linguistic Appraisal",
"authors": [
{
"first": "J",
"middle": [
"P"
],
"last": "Gee",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Grosjean",
"suffix": ""
}
],
"year": 1983,
"venue": "Cognitive Psychology",
"volume": "15",
"issue": "",
"pages": "411--458",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. P. Gee and F. Grosjean (1983) \"Performance Structures: A Psycholinguistic and Linguistic Appraisal,\" Cognitive Psychology, Vol. 15, pp. 411-458.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The Effects of Word Boundary Ambiguity in Continuous Speech Recognition",
"authors": [
{
"first": "J",
"middle": [],
"last": "Harrington",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Johnstone",
"suffix": ""
}
],
"year": 1987,
"venue": "Proc. of XI Int. Cong. Phonetic Sciences",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Harrington and A. Johnstone (1987) \"The Ef- fects of Word Boundary Ambiguity in Continu- ous Speech Recognition,\" Proc. of XI Int. Cong. Phonetic Sciences, Tallin, Estonia, Se 45.5.1-4.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Prosodical Sentence Structure Inference for Natural Conversational Speech Understanding",
"authors": [
{
"first": "A",
"middle": [],
"last": "Komatsu",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Oohira",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Ichikawa",
"suffix": ""
}
],
"year": 1989,
"venue": "ICOT Technical Memorandum: TM-0733",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Komatsu, E. Oohira and A. Ichikawa (1989) \"Prosodical Sentence Structure Inference for Natural Conversational Speech Understanding,\" ICOT Technical Memorandum: TM-0733.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Integrating Speech and Natural-Language Processing",
"authors": [
{
"first": "R",
"middle": [],
"last": "Moore",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Pereira",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Murveit",
"suffix": ""
}
],
"year": 1989,
"venue": "Proceedings of the DARPA Speech and Natural Language Workshop",
"volume": "",
"issue": "",
"pages": "243--247",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Moore, F. Pereira and H. Murveit (1989) \"Integrating Speech and Natural-Language Pro- cessing,\" in Proceedings of the DARPA Speech and Natural Language Workshop, pages 243-247, February 1989.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Prosody and Parsing",
"authors": [
{
"first": "P",
"middle": [
"J"
],
"last": "Price",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Ostendorf",
"suffix": ""
},
{
"first": "C",
"middle": [
"W"
],
"last": "Wightman",
"suffix": ""
}
],
"year": 1989,
"venue": "Proceedings of the DARPA Workshop on Speech and Natural Language",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. J. Price, M. Ostendorf and C. W. Wightman (1989) \"Prosody and Parsing,\" Proceedings of the DARPA Workshop on Speech and Natural Language, Cape Cod, October, 1989.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Intonation and Syntax in Spoken Language Systems",
"authors": [
{
"first": "M",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 1989,
"venue": "Proceedings of the DARPA Workshop on Speech and Natural Language",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Steedman (1989) \"Intonation and Syntax in Spoken Language Systems,\" Proceedings of the DARPA Workshop on Speech and Natural Lan- guage, Cape Cod, October 1989.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Linguistic Constraints in Hidden Markov Model Based Speech Recognition",
"authors": [
{
"first": "M",
"middle": [],
"last": "Weintraub",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Murveit",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Cohen",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Price",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Bernstein",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Baldwin",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Bell",
"suffix": ""
}
],
"year": 1989,
"venue": "Proc. IEEE Int. Conf. Acoust., Speech, Signal Processing",
"volume": "",
"issue": "",
"pages": "699--702",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Weintraub, H. Murveit, M. Cohen, P. Price, J. Bernstein, G. Baldwin and D. Bell (1989) \"Linguistic Constraints in Hidden Markov Model Based Speech Recognition,\" in Proc. IEEE Int. Conf. Acoust., Speech, Signal Processing, pages 699-702, Glasgow, Scotland, May 1989.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Research Programme in Natural Language Processing",
"authors": [
{
"first": "H",
"middle": [],
"last": "Alshawi",
"suffix": ""
},
{
"first": "D",
"middle": [
"M"
],
"last": "Carter",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Van Eijck",
"suffix": ""
},
{
"first": "R",
"middle": [
"C"
],
"last": "Moore",
"suffix": ""
},
{
"first": "D",
"middle": [
"B"
],
"last": "Moran",
"suffix": ""
},
{
"first": "F",
"middle": [
"C N"
],
"last": "Pereira",
"suffix": ""
},
{
"first": "S",
"middle": [
"G"
],
"last": "Pulman",
"suffix": ""
},
{
"first": "A",
"middle": [
"G"
],
"last": "Smith",
"suffix": ""
}
],
"year": 1988,
"venue": "Annual Report",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Alshawi, D. M. Carter, J. van Eijck, R. C. Moore, D. B. Moran, F. C. N. Pereira, S. G. Pulman, and A. G. Smith (1988) Research Programme in Natural Language Processing: July 1988 Annual Report, SRI International Tech Note, Cambridge, England.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Control Structures and Theories of Interaction in Speech Understanding Systems",
"authors": [
{
"first": "E",
"middle": [
"J"
],
"last": "Briscoe",
"suffix": ""
},
{
"first": "B",
"middle": [
"K"
],
"last": "Boguraev",
"suffix": ""
}
],
"year": 1984,
"venue": "Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "259--266",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. J. Briscoe and B. K. Boguraev (1984) \"Control Structures and Theories of Interaction in Speech Understanding Systems,\" COLING 1984, pp. 259-266, Association for Computational Linguistics, Morristown, New Jersey.",
"links": null
}
},
"ref_entries": {}
}
} |