{
"paper_id": "H92-1010",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:28:26.099552Z"
},
"title": "Spoken Language Processing in the Framework of Human-Machine Communication at LIMSI",
"authors": [
{
"first": "J",
"middle": [],
"last": "Mariani",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "LIMSI-CNRS",
"location": {
"postBox": "BP 133",
"postCode": "91403",
"settlement": "Orsay cedex",
"country": "FRANCE"
}
},
"email": "mariani@frtim51.bitnet@marianiqlimsi.fr"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The paper provides an overview of the research conducted at LIMSI in the field of speech processing, as well as in the related areas of Human-Machine Communication, including Natural Language Processing and Non-Verbal and Multimodal Communication. Also presented are the commercial applications of some of the research projects. When applicable, the discussion is placed in the framework of international collaborations.",
"pdf_parse": {
"paper_id": "H92-1010",
"_pdf_hash": "",
"abstract": [
{
"text": "The paper provides an overview of the research conducted at LIMSI in the field of speech processing, as well as in the related areas of Human-Machine Communication, including Natural Language Processing and Non-Verbal and Multimodal Communication. Also presented are the commercial applications of some of the research projects. When applicable, the discussion is placed in the framework of international collaborations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The initial work on Text-to-Speech synthesis and word-based recognition resulted in the development of software and hardware (including an ASIC) systems which have since been marketed and used in different applications such as cockpit design. Some of the problems encountered in their practical use are discussed. The development of a speaker verification system, presently undergoing field trials, is also presented.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": null
},
{
"text": "Our speech research program is now oriented along two main axes: Voice Dictation and Spoken Dialog. The successive steps of the voice dictation project are presented, and the difficulties specific to the French language in this task are highlighted. For the dialog project, the integration of linguistic and pragmatic information with continuous speech recognition in the context of air controller training is described.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": null
},
{
"text": "While this paper focuses on our speech processing projects, other efforts in related areas, such as character recognition and computer vision with algorithms derived from those developed for speech processing, are also mentioned. Speech processing is placed in the general framework of Human-Machine Communication. This is reflected in the structure of the laboratory, through the relationship of the Speech Communication group with the two other components of the Human-Machine Communication department: the Language & Cognition and Non-Verbal Communication groups.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": null
},
{
"text": "When applicable, the projects of the laboratory are situated in the general context of European/international cooperative actions. Perspectives on the directions of research are given.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "INTRODUCTION",
"sec_num": null
},
{
"text": "The LIMSI laboratory is a \"full\" CNRS (French National Research Agency) laboratory. The acronym stands for \"Laboratoire d'Informatique pour la M\u00e9canique et les Sciences de l'Ing\u00e9nieur\" (Laboratory of Informatics for Mechanical, Chemical and Electrical Engineering).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LIMSI & CNRS",
"sec_num": null
},
{
"text": "CNRS, the French National Research Agency, was created in 1939. With 12,000 permanent researchers and 15,000 permanent technical and administrative staff, it is probably the largest research institution in Europe. It has 375 full laboratories and 1,000 academic \"associated\" laboratories. The 1991 budget of CNRS was 2,000 M$ (74% for salaries). CNRS has a general director, and is divided into 6 scientific departments, each with a scientific director. LIMSI is attached to the \"Engineering Sciences\" department, with a secondary attachment to the \"Social and Human Sciences\" department.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LIMSI & CNRS",
"sec_num": null
},
{
"text": "There are also 40 advisory committees, forming the \"National Committee for Scientific Research\" and corresponding to general research areas. These committees are responsible for evaluating the quality of each laboratory and of each researcher. LIMSI is principally attached to committee 7 (Information Sciences and Technologies: Computer Science, Control and Signal Processing), but has secondary attachments to sections 10 (Chemical and Mechanical Engineering) and 34 (Language Sciences), and may in the near future be attached to section 29 (Cognition and Psychology).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LIMSI & CNRS",
"sec_num": null
},
{
"text": "For 1991, the laboratory staff comprised approximately 180 members, with a total budget of 6 M$ (60% for salaries). With J. Mariani as director, the laboratory is structured in Departments, Groups and Research Topics, each with its own manager. It has two departments: Human-Machine Communication, headed by J. Mariani, and Mechanical and Chemical Engineering, headed by P. Le Qu\u00e9r\u00e9, which will not be presented here. The Human-Machine Communication Department conducts research in closely related areas: Speech, Language, Vision and other means of communication between humans and machines. These areas share common methodologies, and studying them in the same laboratory allows for an in-depth study of the different communication modes, which, we believe, is mandatory for the development of multimodal communication systems. The research activities are multidisciplinary, dealing with Computer Science, Linguistics, Cognitive Psychology and Neurosciences, and have both theoretical and applied aspects.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "LIMSI & CNRS",
"sec_num": null
},
{
"text": "The communication process is studied as a triplet Perception - Production - Cognition: perception by the machine of speech (which could be enlarged to any sound), of vision (with reading as a subpart), and of touch or gesture; production of speech or text, and image synthesis (which could be enlarged to solid modeling); and all the cognitive aspects related to dialogue, reasoning, knowledge representation and the integration of different communication modes. We try as much as possible to link the studies of production and perception (i.e. the emission and reception of information), as can be observed in speech recognition and synthesis, text analysis and generation, scene analysis and image synthesis, and movement sensing and effort feedback.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "THE HUMAN-MACHINE COMMUNICATION",
"sec_num": null
},
{
"text": "The relationship between Language and Image processing is becoming more and more important, with the concept of \"Intelligent Images\", where the different parts of the image are in agreement with their mutual constraints and with the constraints of the real physical world (Newton's law of gravity, phenomena which occur in an explosion, etc.). Such images need advanced Human-Machine Communication, as the task is complex, with commands like \"Put the ball on the table and make it bounce back\". This opens the perspective of a true multimodal communication system, including oral, written, visual and gestural communication, with the typical problems of multireference processing (like a voice message accompanied by a gesture). It may even be that processing multiple communication modes simultaneously is mandatory for each of the modes, as it is a necessity for knowledge acquisition in training within self-organizing approaches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "THE HUMAN-MACHINE COMMUNICATION",
"sec_num": null
},
{
"text": "We find common methodologies in the different domains: signal processing techniques, statistical or structural coding, and Vector or Matrix Quantization in Speech and Vision. Pattern recognition techniques such as Hidden Markov Models, Markov Random Fields, Multi-Layer Perceptrons and Boltzmann Machines are used in speech, language and vision. Morphological, lexical, grammatical, syntactic, semantic or pragmatic analyses are applicable to written and spoken language, with specifics for speaking or writing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "THE HUMAN-MACHINE COMMUNICATION",
"sec_num": null
},
{
"text": "Finally, these domains also have the commonality of requiring a study of Human Factors (Ergonomics) for design of acceptable systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "THE HUMAN-MACHINE COMMUNICATION",
"sec_num": null
},
{
"text": "The Speech Communication group produced, very early on, a stand-alone Text-to-Speech synthesis system for French (Icophone). The Icophone V system was marketed in 1975 (and appeared in Electronics, June 1975) by the TITN company, and a single unit of this one-cubic-meter TTS system was sold (to the Iranian Minister of Education...) at that time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "THE SPEECH COMMUNICATION GROUP",
"sec_num": null
},
{
"text": "The TTS system used a battery of 44 analog oscillators to produce diphone-based synthesis (using 427 diphones). In 1980, it was replaced by a single-board digital version, Icolog, which is marketed by the Vecsys company. The grapheme-to-phoneme conversion algorithm uses about 1,000 rules. It has to take into account the difficult problem of liaisons between words in French.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "THE SPEECH COMMUNICATION GROUP",
"sec_num": null
},
{
"text": "Another early application was formant synthesis with a first synthesizer designed in 1975 (Icophone VI). A parallel formant synthesizer based on segmental rules is now being developed in the framework of the CEC/Esprit Polyglot project, and should be adapted for 7 different languages within the consortium.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "THE SPEECH COMMUNICATION GROUP",
"sec_num": null
},
{
"text": "Together with the Text-to-Speech effort, research was conducted on the segmentation of continuous speech into phonemes and on speech recognition. An analog speech segmenter, using a bank of 32 analog filters, was presented in 1974 (J.S. Li\u00e9nard et al., 1974). The speech recognition work addressed both analytical phoneme recognition, in the design of a \"Speech Understanding System\" comparable to those developed in the ARPA-SUR project (J. Mariani et al., 1978), and word-based recognition.",
"cite_spans": [
{
"start": 228,
"end": 249,
"text": "Li\u00e9nard et al., 1974)",
"ref_id": null
},
{
"start": 434,
"end": 455,
"text": "Mariani et al., 1978)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "THE SPEECH COMMUNICATION GROUP",
"sec_num": null
},
{
"text": "The first approach was also aimed at designing a \"phoneme vocoder\" with a 50 b/s rate. The experiments conducted in this project showed that a phoneme recognition rate of at least 85%, with no major recognition errors (like vowel/consonant substitutions), was necessary in order to transmit a message that can be understood by the human listener (J.S. Li\u00e9nard et al., 1977).",
"cite_spans": [
{
"start": 353,
"end": 374,
"text": "Li\u00e9nard et al., 1977)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "THE SPEECH COMMUNICATION GROUP",
"sec_num": null
},
{
"text": "These systems were the first IWR and CWR systems made in France and have been used in several applications, like voice-activated telephone dialing (Jeumont-Schneider, 1985) and packet sorting (NMPP, 1983). Most of these applications provided the opportunity for experimenting with voice input as a new communication means, and the market stayed limited to a few units. The application to which most effort was devoted was pilot-plane dialog. In collaboration with the Crouzet company, a program was conducted for the \"Research and Technology Agency\" (Dret) of the French DoD. A flight with an IWR voice command system issuing actual commands to the plane was made in July 1982, and was reported as the first voice-controlled flight. The conclusions of the flight trials specified the need for continuous speech recognition and for a stable level of performance whoever the pilot, and noted that voice input was especially interesting in critical conditions (high Gs, stress), which unfortunately correspond to adverse environments that tend to lower recognition rates.",
"cite_spans": [
{
"start": 148,
"end": 173,
"text": "(Jeumont-Schneider, 1985)",
"ref_id": null
},
{
"start": 194,
"end": 206,
"text": "(NMPP, 1983)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "THE SPEECH COMMUNICATION GROUP",
"sec_num": null
},
{
"text": "The computing power needed for the Dynamic Programming algorithm led us to develop an ASIC chip. The MuPCD (Microprocessor for Dynamic Programming) was developed at LIMSI, together with the Bull company, under a contract from the Ministry of Telecommunications. It was available in 1989 (G. Qu\u00e9not et al., 1989). It has 120,000 transistors in 2-micron CMOS technology and a computing power of 70 MOPS. It allows for the recognition of up to 5,000 isolated words, or 300 connected words. This chip is used in the latest generation of Vecsys Datavox recognition systems.",
"cite_spans": [
{
"start": 281,
"end": 305,
"text": "(G. Qu\u00e9not et al., 1989)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "THE SPEECH COMMUNICATION GROUP",
"sec_num": null
},
{
"text": "The work on language modeling was also an early project at LIMSI. It started with a set of experiments on the problem of phoneme-to-grapheme conversion in French. A casual 9-phoneme string can generate more than 32,000 possible combinations of segmentation into words and spelling of those words. In many cases, the absence of pronunciation of the plural mark (-s at the end of nouns, -nt at the end of verbs) generates many of those homophones. First, a simple heuristic was tried with a 20,000-word lexicon, which segmented the phoneme string into the smallest number of words. It gave good results for the segmentation task, but also demonstrated the necessity of using a language model to improve the quality. This resulted in a collaboration with researchers using stochastic language modeling based on grammatical categories for document retrieval, and results were reported in 1979 on phoneme-to-grapheme conversion with a 270,000-word full-form lexicon (A. Andreewski et al., 1979). This approach was also applied to stenotype-to-grapheme conversion.",
"cite_spans": [
{
"start": 977,
"end": 1001,
"text": "Andreewski et al., 1979)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "THE SPEECH COMMUNICATION GROUP",
"sec_num": null
},
{
"text": "Another area of research is speaker verification (J. Mariani et al., 1983). The algorithm used for word-based recognition was adapted to speaker verification, with dynamic adaptation of the reference templates to the speaker's daily variations. The Sesame system was tested \"live\" at the Machines Parlantes exhibition in 1985, and obtained an impostor acceptance rate of 4 per 1,000 under informal test conditions. The system has been in everyday operational use as the entry system at LIMSI by about 100 users since 1987.",
"cite_spans": [
{
"start": 53,
"end": 74,
"text": "Mariani et al., 1983)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "THE SPEECH COMMUNICATION GROUP",
"sec_num": null
},
{
"text": "In the Speech Communication group, work is now conducted around two main projects: the Dictation project, and the Dialog project.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PRESENT PROJECTS",
"sec_num": null
},
{
"text": "In the dictation project, several steps have been taken in the design of a Voice-Activated Typewriter (VAT) in French since the beginning of the project 10 years ago. Continuing the study on phoneme-to-grapheme conversion for continuous, error-free phonemic strings, using a large vocabulary and a natural language syntax, LIMSI participated in the ESPRIT project 860, \"Linguistic Analysis of the European Languages\". In this framework, the approach to language modeling developed at LIMSI was extended to 7 European languages. The link between the language model and the acoustic recognition was made, and resulted in a complete system (Hamlet) for a limited vocabulary (2,000 words) pronounced in isolation, and then in a 5,000-word VAT system taking advantage of the specialized MuPCD DTW chip. The complete system was demonstrated in Spring 1988. Work is now being conducted within the ESPRIT Polyglot project, with the goal of designing speech-to-text and text-to-speech systems for the 7 languages. In this framework, the methods first developed at Olivetti for dictation in isolated mode are being adapted to French, and other methods are being developed, based on discrete and tied-mixture HMMs, and on TDNNs and TDNN-HMM combinations, for continuous speech recognition. Comparative tests are being conducted on part of the DARPA Resource Management Database.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PRESENT PROJECTS",
"sec_num": null
},
{
"text": "We have recorded BREF, a large read-speech corpus for French, containing over 36 GBytes of speech data from 120 speakers. The text materials were selected verbatim from the French newspaper Le Monde, so as to provide a large vocabulary (over 20,000 words) and a wide range of phonetic environments. Separate text materials, with similar distributional properties, were selected for training, development test and evaluation purposes. A series of experiments on vocabulary-independent phone recognition has recently been carried out using this corpus. A baseline phone accuracy of 60% was obtained with context-independent phone models and no phone grammar, and a phone accuracy of 68.6% with context-dependent phone models and a bigram phone language model (J.L. Gauvain, L. Lamel, this conference).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PRESENT PROJECTS",
"sec_num": null
},
{
"text": "The dialog project has been explored within the framework of the air-controller training application. Currently, training sessions are limited by the availability of the human instructor who plays the role of a pilot. Our goal is to replace the instructor with a spoken dialog system. This allows for greater availability of the system, and also makes it possible to have several voices corresponding to different pilots in the synthesis module. In this project, speech understanding uses speech recognition in conjunction with a representation of the semantic and pragmatic knowledge related to the task. While the language is supposed to follow a pre-defined \"phraseology\", most of the time it does not. The language model is a bigram model based on grammatical categories. The probabilities for word successions are changed depending on the previous step in the dialog (prediction), and corrections of the recognized sentence can be made using redundancy within the sentence and a word confusion matrix. In an evaluation test involving 6 speakers and 5 scripts of an average of 20 sentences each, prediction improved the results by 10%, and correction added an extra 18.5% (the sentence understanding rate improved from 68% to 96.5%) (A. Matrouf et al., 1991). Prior to this work, large \"Wizard of Oz\" experiments were conducted (D. Luzzati, 1984), and the linguistic analysis of the resulting corpus in a train timetable enquiry system simulation was realized (D. Luzzati, 1987). In general, all the implementations used linguistic analysis of real corpora in order to meet the users' needs. The recording of a spontaneous speech database (Spot) has also been started.",
"cite_spans": [
{
"start": 1221,
"end": 1242,
"text": "Matrouf et al., 1991)",
"ref_id": "BIBREF11"
},
{
"start": 1317,
"end": 1331,
"text": "Luzzati, 1984)",
"ref_id": "BIBREF7"
},
{
"start": 1450,
"end": 1464,
"text": "Luzzati, 1987)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "PRESENT PROJECTS",
"sec_num": null
},
{
"text": "Other research topics concern speech signal processing, with both basic research on wavelet analysis and the development of real-time PC-based speech analysis tools (the Unice package, which is marketed by Vecsys). The study of phonological variations has been pursued on a text (\"La Bise et le Soleil\") pronounced by several speakers, and will continue with the analysis of BREF. Another area of interest is the use of symbolic coding for improving cochlear implants. Also, apart from the classical Multilayer Perceptron and TDNN approaches, an original \"Guided Propagation\" connectionist model is being experimented with. In the hardware domain, the use of several MuPCD ASIC chips in parallel is now being implemented, and the design of a new chip, taking advantage of improved technology, is envisioned.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PRESENT PROJECTS",
"sec_num": null
},
{
"text": "The algorithms developed for speech recognition have been applied in other areas. The DP matching process developed for connected word recognition, with the MuPCD ASIC, has been adapted to the problem of optical character recognition. Instead of considering the recognition of individual characters after segmentation (possibly including segmentation errors), the complete line of characters is considered, and segmentation is included in the recognition process (M. Khemakhem, 1987). The algorithm has also been extended successfully to 2 dimensions in Computer Vision for matching similar images, with applications to stereovision and to movement analysis (G. Qu\u00e9not, 1992). Preliminary studies for gesture recognition (throwing away, taking, holding tight...) using a Data Glove have also been conducted. It is expected that the increase in quality obtained by stochastic modeling instead of template matching in speech recognition can also be obtained in the fields of character recognition and computer vision, and we are now considering applying these techniques.",
"cite_spans": [
{
"start": 469,
"end": 485,
"text": "Khemakhem, 1987)",
"ref_id": "BIBREF5"
},
{
"start": 661,
"end": 678,
"text": "(G. Qu\u00e9not, 1992)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "APPLICATION OF THE ALGORITHMS DEVELOPED FOR SPEECH TO OTHER AREAS",
"sec_num": null
},
{
"text": "The activities of the Language & Cognition group have, of course, many interactions with the Speech Communication group. Several common research projects have been, or are being, conducted, such as the use of Conceptual Graphs to represent semantic information in a speech understanding system, or the stochastic modeling of semantic information. Also, the grapheme-to-phoneme conversion software has been used to correct errors. There are also many interactions between the Connectionist Systems and the Connectionist Models research topics in the two groups. The \"Time & Space Representation\" topic also has a close relationship with the Non-Verbal Communication group.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "THE LANGUAGE & COGNITION GROUP",
"sec_num": null
},
{
"text": "A new activity is now starting in the field of cognitive psychology. A group is being created, integrating the former Center for Cognitive Psychology (Cepco), a Paris XI university laboratory, within LIMSI. It includes researchers in the fields of the psychology of reading and text comprehension, visuo-spatial mental representation and cognitive ergonomics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "THE LANGUAGE & COGNITION GROUP",
"sec_num": null
},
{
"text": "Speech can be used with other communication modes in order to obtain a more versatile and reliable means of human-machine communication. A study on an automated telematic (voice + text) switchboard has been conducted jointly by the Speech Communication and Language & Cognition groups. Speech recognition together with gesture (touch screen) showed that using both together allows for better efficiency and better comfort (D. Teil et al., 1991), with gestural communication being preferable anytime there is a need to give low-level analog information. Timing and co-reference are the difficult problems to solve in the integrated system.",
"cite_spans": [
{
"start": 428,
"end": 446,
"text": "Teil et al., 1991)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "SPEECH IN THE FRAMEWORK OF MULTIMODAL COMMUNICATION",
"sec_num": null
},
{
"text": "A more ambitious project is now starting, including computer vision and 3D modeling, natural language and knowledge representation, and speech and gestural communication. This project aims at examining the theoretical problems of model training in the framework of multimodal information: how non-verbal (visual, gestural) information can be used in building a language model, and how linguistic information can help in building models of objects.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "PERSPECTIVES FOR FUTURE RESEARCH",
"sec_num": null
},
{
"text": "The word-based approach took advantage of the idea that the stationary parts of the signal convey less, and more variable, information than the transition parts (J.L. Gauvain, J. Mariani, J.S. Li\u00e9nard, 1983). This led us to non-linear fixed-length compression for isolated word recognition (Morse system in 1980), and non-linear variable-length compression for connected word recognition (Mozart system in 1982) (J.L. Gauvain, J. Mariani, 1982). Both systems used template matching via dynamic programming, and both resulted in single-board products marketed by the Vecsys company.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": " Speech Communication Group (Head: F. N\u00e9el) 17 permanent researchers, 17 PhD thesis students, 6 research topics. Speech Analysis and Synthesis: This topic includes short-term analysis, Wavelets, Instantaneous frequency analysis, Speech signal editor (Unice product marketed by Vecsys) and coding (SNCF contract), Recognition in noisy environments (Dret (MoD) contract), Text-to-speech synthesis, High-quality synthesis, Multivoice and multidialect synthesis (cooperation with Montreal University (MoFA), CEC-Esprit Polyglot project), Models in Prosody and diagnostic (paralinguistic) aspects. Assessment and Variability: In this topic, Recording of a large-vocabulary recognition evaluation database (Gdr-Prc CHM \"Bref\" project) and a spontaneous speech database (Spot), Standardization of speech recognition system assessment (contribution to Afnor), Use of phonotactic constraints, Grapheme-to-phoneme conversion with phonological variations, Regrouping of wavelets, Improvement of cochlear implants. Recognition: Word-based recognition (hardware and software) (Vecsys Datavox product, Sextant-Dret contract), Syllable-based recognition, Diphone-based recognition, Large-vocabulary recognition (10,000 words), Continuous and discrete HMMs, Custom Dynamic Programming VLSI (DGT-DGA/Bull (VTI) contract), Application to printed character recognition, Speaker-independent recognition, Vocabulary-independent recognition, Discriminant recognition, Speaker adaptation, Recognition in adverse environments (DRET/Sextant contract), Speaker verification (MRT contract with Fichet-Bauche and Vecsys), Evaluation of speech recognition systems (participation in CEC/Esprit-SAM project).",
"cite_spans": [
{
"start": 1,
"end": 43,
"text": "Speech Communication Group (Head: F. N\u00e9el)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "APPENDIX",
"sec_num": null
},
{
"text": "Models for dialogue structures (Gdr/PRC CHM programme), Speaker models, Task-oriented oral dialogue system for air controller training (Stetin/Sextant Avionique/Vecsys contract for CENA), Linguistic study of man-machine dialogues, Selective syntax parser, Multimodal dialogue (Pilot's assistant application (Dret/Sextant)) and Simulation of a telephone operator. Spoken language modeling: Linguistic models (Esprit Polyglot project), Phoneme-to-grapheme conversion (Esprit 291/860 project), Stenotype-to-grapheme conversion (Ccett-Systex project), Continuous speech automatic phonetic labelling and recognition through learning (Inserm collaboration), Morphosyntactic analysis, Automatic syntactic classification, Use of linguistic models for handwritten character recognition. Connectionist systems: Guided Propagation models (CEC-Stimulation Brain programme, Dret (MoD) contract), Application to noisy speech analysis (cocktail party effect), to continuous speech recognition (contract with French Philips Research Labs (LEP)) and to the modeling of reading activity, Back Propagation models, Application to character recognition and grapheme-to-phoneme conversion, Feature Maps, Parallel architectures, Integration of the connectionist approach and of the symbolic approach (AI) into the same formalism, Perception-to-Action modeling (Dret contract). Language & Cognition Group (Head: G. Sabah) 14 permanent researchers, 13 PhD thesis students, 8 research topics. Automatic analysis of sentences and texts: Study and implementation of a general architecture for automatic language processing (Distributed AI: communicating multi-expert system structure); Application to the automatic construction of internal representations, Trope and anaphora processing, Semantic flexibility and context influence, Pragmatics (CEC/Esprit PLUS project). Written Dialog: Implementing a real dialogue in question-answer processes, Speaker modeling, Direct and indirect speech act processing, Adapting an answer to the speaker, Application to computer-aided education (chess playing), and to documentation database query (MRT contract with the Resoudre company). Learning: Automatic parsing rule creation from examples, Syntactic and semantic aspects, Automatic pragmatic knowledge learning from texts using frames, Fine-grain parallel architecture (Connection Machine) for knowledge representation. Tri-dimensional Modeling: 3D modeling software \"Sculptor\", Steady and animated image synthesis, Multiparametric varieties, Application to acquisition and modeling for life and earth sciences and for cartoons; Human Factors study of interactivity in graphic interfaces. Computer Vision: Stereo vision, Neural and symbolic learning in computer vision, Application of Genetic Algorithms to image analysis, Scene analysis (contours and texture), Use of a parallel architecture (Transputers), Application to the analysis of road traffic (Inrets contract). Real-Time Architecture: Medium- and fine-grain parallel architecture, Use of Object-Oriented Languages, Hardware specialised for image synthesis, Representation of multimodal knowledge, learning, decision making, Use of AI techniques; link with mobile robots and Computer Integrated Manufacturing, Real-time modeling and formulation. Character recognition and coding: Graphic encoding of characters (Eco system, Anvar licence), Printed character recognition by training and template matching (using the MuPCD VLSI chip). Gestural and multi-modal Communication: Use of tactile screens, touch analysis, movement analysis and synthesis, Effort feedback (3D mouse, Data Glove), Integration of different communication means, perceptive sensor fusion. LIMSI is a managing node in the \"Speech and Language\" Esprit Basic Research Network of Excellence. This network has the goal of promoting the integration of Speech and Natural Language. It now has about 40 nodes around Europe. LIMSI is also a node in the French Coordinated Research Project (Prc/Gdr) on \"Human-Machine Communication\". The Prc has 4 poles: Speech Communication, Natural Language Processing, Vision and Multimodal Communication.",
"cite_spans": [
{
"start": 3447,
"end": 3486,
"text": "Gestual and m.ulti-modal Communication:",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Dialog Structures:",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "French Ready terminal that speaks English",
"authors": [],
"year": 1975,
"venue": "Electronics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "\"French Ready terminal that speaks English\", Electron- ics, June 26, 1975.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Les dictionnalres en forme complete et leur utilisation dans la transformation lexicale et syntaxique de cha",
"authors": [
{
"first": "A",
"middle": [],
"last": "Andreewski",
"suffix": ""
},
{
"first": "J",
"middle": [
"P"
],
"last": "Binquet",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Debili",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Fluhr",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Hlal",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Pouderoux",
"suffix": ""
},
{
"first": "J",
"middle": [
"S"
],
"last": "Li~nard",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Mariani",
"suffix": ""
}
],
"year": 1979,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Andreewski, J.P. Binquet, F. Debili, C. Fluhr, Y. Hlal, B. Pouderoux, J.S. Li~nard, , J. Mariani, \"Les dictionnalres en forme complete et leur utilisation dans la transforma- tion lexicale et syntaxique de cha[nes phon6tiques correctes', 10~mes JEP du GALF, Grenoble, Mai-Juin 1979",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A method for connected word recognition and word spotting on a microprocessor",
"authors": [
{
"first": "J",
"middle": [
"L"
],
"last": "Gauvaln",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Mariani",
"suffix": ""
}
],
"year": 1982,
"venue": "Proc. IEEE ICASSP 82",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J.L. Gauvaln, J. Mariani, \"A method for connected word recognition and word spotting on a microprocessor.\", Proc. IEEE ICASSP 82. Paris, 3-5 mal 1982.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "On the use of time compression for word-based recognition",
"authors": [
{
"first": "J",
"middle": [
"L"
],
"last": "Gauvaln",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Mariani",
"suffix": ""
},
{
"first": "J",
"middle": [
"S"
],
"last": "Li~nard",
"suffix": ""
}
],
"year": 1983,
"venue": "ICASSP",
"volume": "83",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J.L. Gauvaln, J. Mariani, J.S. Li~nard, \"On the use of time compression for word-based recognition.\", ICASSP 83. Boston, April 14-16, 1983.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Speaker-Independent Phone Recognition using BREF",
"authors": [
{
"first": "L",
"middle": [],
"last": "Gauvain",
"suffix": ""
},
{
"first": "L",
"middle": [
"F"
],
"last": "Lamel",
"suffix": ""
}
],
"year": 1992,
"venue": "DARPA Speech and Language Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Gauvain, L.F. Lamel, \"Speaker-Independent Phone Recognition using BREF\", DARPA Speech and Language Workshop, Arden House, February 1992",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Reconnaissance de caract~res imprim6s par comparaison dynamique",
"authors": [
{
"first": "M",
"middle": [],
"last": "Khemakhem",
"suffix": ""
},
{
"first": "J",
"middle": [
"L"
],
"last": "Gauvain",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Rivaillier",
"suffix": ""
}
],
"year": 1987,
"venue": "6~me Congr~s AFCET-INRIA \"Reconnaissance des Formes et Intelligence Artificielle",
"volume": "",
"issue": "",
"pages": "16--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Khemakhem, J.L. Gauvain, J. Rivaillier, \"Reconnais- sance de caract~res imprim6s par comparaison dynamique.\", 6~me Congr~s AFCET-INRIA \"Reconnaissance des Formes et Intelligence Artificielle\". Antibes, 16-20 novembre 1987.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Time segmentation of speech",
"authors": [
{
"first": "J",
"middle": [
"S"
],
"last": "Li6nard",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Mlouka",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Mariani",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Sapaly",
"suffix": ""
}
],
"year": 1974,
"venue": "Speech Communication Seminar",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J.S. Li6nard, M. Mlouka, J. Mariani, J. Sapaly, \"Time segmentation of speech\", Speech Communication Seminar, Stockholm, Aofit 1974",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Intelligibilit6 de phrases synth6tiques altdr6es : application ~t la transmission phon6tique de la parole",
"authors": [
{
"first": "J",
"middle": [
"S"
],
"last": "Li6nard",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Mariani",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Renard",
"suffix": ""
},
{
"first": "; D",
"middle": [],
"last": "Luzzati",
"suffix": ""
}
],
"year": 1977,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J.S. Li6nard, J. Mariani, G. Renard, \"Intelligibilit6 de phrases synth6tiques altdr6es : application ~t la transmission phon6tique de la parole\", ICA, Madrid, Juillet 1977 D. Luzzati, \"ORSO. Projet pour la constitution et l'6tude de dialogues homme-machine.\", LIMSI internal re- port, Septembre 1984.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "ALORS : a skimming parser for spontaneous speech processing",
"authors": [
{
"first": "D",
"middle": [],
"last": "Luzzati",
"suffix": ""
}
],
"year": 1987,
"venue": "Computer Speech and Language",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Luzzati, \"ALORS : a skimming parser for spontaneous speech processing.\", Computer Speech and Language, Vol.2, 1987",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "ESOPE 0 : un programme de compr6hension de la parole continue procddant par pr6diction-vdrification aux niveaux phondtique, lexical et syntaxique",
"authors": [
{
"first": "J",
"middle": [],
"last": "Mariani",
"suffix": ""
},
{
"first": "J",
"middle": [
"S"
],
"last": "Li6nard",
"suffix": ""
}
],
"year": 1978,
"venue": "ler Congr~s AFCET \"Reconnaissance des formes et Intelligence Artificielle",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Mariani, J.S. Li6nard, \"ESOPE 0 : un pro- gramme de compr6hension de la parole continue procddant par pr6diction-vdrification aux niveaux phondtique, lexi- cal et syntaxique\", ler Congr~s AFCET \"Reconnaissance des formes et Intelligence Artificielle, Chatenay-Malabry, F6vrier 1978",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Un syst~me de v6ri-fication du locuteur",
"authors": [
{
"first": "J",
"middle": [],
"last": "Mariani",
"suffix": ""
},
{
"first": "J",
"middle": [
"L"
],
"last": "Gauvain",
"suffix": ""
},
{
"first": "J",
"middle": [
"L"
],
"last": "Soury",
"suffix": ""
}
],
"year": 1984,
"venue": "13~mes JEP du GALF",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Mariani, J.L. Gauvain, J.L. Soury, \"Un syst~me de v6ri- fication du locuteur\", 13~mes JEP du GALF, 28-30 Mai 1984",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Use of Upper Level Knowledge to Improve Human-Machine Interaction",
"authors": [
{
"first": "K",
"middle": [],
"last": "Matrouf",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 1991,
"venue": "Venaco Workshop & ETRW on \"The structure of MultimodaJ Dialogue",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Matrouf, F. N6el, \"Use of Upper Level Knowledge to Improve Human-Machine Interaction\", Venaco Work- shop & ETRW on \"The structure of MultimodaJ Dialogue\", Maratea, September 1991",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A Dynamic Programming Processor for Speech Recognition",
"authors": [
{
"first": "G",
"middle": [],
"last": "Qu6not",
"suffix": ""
},
{
"first": "J",
"middle": [
"L"
],
"last": "Gauvain",
"suffix": ""
},
{
"first": "J",
"middle": [
"J"
],
"last": "Gangolf",
"suffix": ""
},
{
"first": "J",
"middle": [
"J"
],
"last": "Mariani",
"suffix": ""
}
],
"year": 1989,
"venue": "IEEE Journal of Solid-State Circuits",
"volume": "24",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Qu6not, J.L. Gauvain, J.J. Gangolf, J.J. Mariani, \"A Dynamic Programming Processor for Speech Recognition\", IEEE Journal of Solid-State Circuits, Vol. 24, N. 2, Avril 1989",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "The \"Orthogonal Algorithm\" for optical flow detection using Dynamic Programming",
"authors": [
{
"first": "G",
"middle": [],
"last": "Qu6not",
"suffix": ""
}
],
"year": 1992,
"venue": "IEEE ICASSP'92",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Qu6not, \"The \"Orthogonal Algorithm\" for optical flow detection using Dynamic Programming\", IEEE ICASSP'92, San Francisco, March 1992",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Multimodal Dialogue interface on a workstation",
"authors": [
{
"first": "D",
"middle": [],
"last": "Teil",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Bellik",
"suffix": ""
}
],
"year": 1991,
"venue": "Venaco Workshop & ETRW on \"The structure of Multimodal Dialogue",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Teil, Y. Bellik, \"Multimodal Dialogue interface on a workstation\", Venaco Workshop & ETRW on \"The struc- ture of Multimodal Dialogue\", Maratea, September 1991",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"content": "<table><tr><td>nication group, headed by D. Teil. The groups are divided</td></tr><tr><td>in research topics, each being headed by a researcher. was created in 1985, when a group headed by Dr G. Sabah</td></tr><tr><td>at University Paris VI joined a group already situated at</td></tr><tr><td>University Paris XI, the location of LIMSI. The Non-verbal</td></tr><tr><td>Communication group was created in Fall 1989, by merging</td></tr><tr><td>teams in the laboratory already working on 3D modeling,</td></tr><tr><td>Computer Vision and Robotics.</td></tr><tr><td>DEPARTMENT</td></tr><tr><td>The Human-Machine Communication department has a</td></tr><tr><td>total of about 100 persons (38 permanent researchers (CNRS</td></tr><tr><td>and University, including 30 PhDs), 3 technical and adminis-</td></tr><tr><td>trative staff, 38 PhD students and 36 postdoctoral, contrac-</td></tr><tr><td>tual, associate and visiting researchers, or Master Thesis stu-</td></tr><tr><td>dents). There are 3 research groups: the Speech Communi-</td></tr><tr><td>cation group, headed by F. N6el, the Language \u00a2J Cognition</td></tr><tr><td>group, headed by G. Sabah, and the Non-Verbal Commu-</td></tr></table>",
"type_str": "table",
"text": "This structure is flexible, and researchers may participate in different, research topics (see Appendix).The activities in Speech Communication in the laboratory were initiated in 1968 by Dr J.S. Li~nard. The group itself was created in 1981. The Natural Language Processinggroup",
"num": null,
"html": null
}
}
}
}