| { |
| "paper_id": "H93-1013", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T03:30:53.986499Z" |
| }, |
| "title": "SESSION 3: CONTINUOUS SPEECH RECOGNITION*", |
| "authors": [ |
| { |
| "first": "Douglas", |
| "middle": [ |
| "B" |
| ], |
| "last": "Paul", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "MIT Lincoln Laboratory Lexington", |
| "location": { |
| "postCode": "02173", |
| "region": "MA" |
| } |
| }, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "", |
| "pdf_parse": { |
| "paper_id": "H93-1013", |
| "_pdf_hash": "", |
| "abstract": [], |
| "body_text": [ |
| { |
| "text": "The papers in this session focus on techniques for and applications of large-vocabulary continuous speech recognition. The technique-oriented papers discuss channel compensation, fast search, acoustic modeling, and adaptive language modeling. The applications-oriented papers discuss methods for using recognizers for language identification, speaker identification, speaker-sex identification, and keyword spotting.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "In \"Efficient Cepstral Normalization for Robust Speech Recognition,\" Liu et al. discuss several preprocessors for channel (including microphone) compensation. Several of these techniques cover only channel equalization, and several also account for additive noise. The authors obtained their best unknown-microphone performance using a technique that accounts for both the equalization and the additive noise.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "In \"Comparative Experiments on Large Vocabulary Speech Recognition,\" Schwartz et al. describe several aspects of the BBN recognition system. They briefly describe their use of forward-backward N-best search. They also found a number of small modeling improvements which add up to a significant total improvement in performance. Finally, they describe their results on channel compensation, which are not completely in agreement with the results of the previous paper.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "\"An Overview of the SPHINX-II Speech Recognition System\" by Huang et al. describes the CMU SPHINX-II recognition system. It describes their feature set, their use of tied-mixture (semicontinuous) pdfs, their statewise-clustered phone models (senones), and their search strategy. It also describes a technique for combining the acoustic and language model probabilities which does not assume statistical independence between the two information sources.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Murveit et al. describe the search strategy used in the SRI recognizer in \"Progressive-Search Algorithms for Large Vocabulary Speech Recognition.\" This progressive search strategy performs the search several times, initially using inexpensive coarse models and then progressively more detailed and expensive models on each iteration. Information from each iteration is used to produce a smaller word network to constrain the search space of the next iteration. (*This work was sponsored by the Advanced Research Projects Agency. The views expressed are those of the author and do not reflect the official policy or position of the U.S. Government.)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "In \"Search Algorithms for Software-Only Real-Time Recognition with Very Large Vocabularies,\" Nguyen et al. describe the techniques used at BBN to achieve real-time recognition of a 20K-word task. The techniques center on using a very fast approximate forward search. Information saved from this forward search is then used to constrain a backwards A* search. This backwards search is inherently fast and can provide an N-best sentence list for more detailed reevaluation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Gauvain and Lamel, in \"Identification of Non-Linguistic Speech Features,\" apply a phonetic recognizer to several other purposes. By running multiple phone sets independently in parallel, they use the output likelihoods to identify speaker sex, speaker identity, and the language. In each case the phone sets are matched to the aspect to be identified.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "\"On the Use of Tied-Mixture Distributions\" by Kimball and Ostendorf discusses the use of tied Gaussian-mixture pdfs, which have been shown to yield good recognition performance in standard HMM recognizers at a number of sites. They discuss the application of tied mixtures to their stochastic segment recognition models and show improved performance over a non-mixture-based system.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "In \"Adaptive Language Modeling Using the Maximum Entropy Principle,\" Lau et al. describe a new method for recognition-time adaptation of the language model based upon the recent past. The technique uses \"trigger\" words that signal an increased probability for other words in the near future. They report a greater reduction in perplexity than that obtained by the use of a \"caching\" adaptive language model.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "In \"Improved Keyword-Spotting Using SRI's DECIPHER (TM) Large-Vocabulary Speech-Recognition System,\" Weintraub describes the use of a large-vocabulary recognizer for a keyword-spotting task. He shows significantly improved performance over the traditional technique of searching for only the keywords against a background of unknown words.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Peskin et al., in \"Topic and Speaker Identification via Large Vocabulary Continuous Speech Recognition,\" describe the use of the Dragon large-vocabulary recognizer to perform both topic and speaker identification. The technique described here uses a topic- and speaker-independent recognizer to produce a word sequence. This word sequence can then be economically rescored using topic-dependent language models for topic identification or speaker-dependent acoustic models for speaker identification. The authors report good performance on both tasks.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": {}, |
| "ref_entries": {} |
| } |
| } |