Columns: _id (string, length 4–10), text (string, length 0–18.4k), title (string, length 0–8.56k)
d58302
We present an algorithm that automatically learns context constraints using statistical decision trees. We then use the acquired constraints in a flexible POS tagger. The tagger is able to use information of any degree: n-grams, automatically learned context constraints, linguistically motivated manually written constraints, etc. The sources and kinds of constraints are unrestricted, and the language model can be easily extended, improving the results. The tagger has been tested and evaluated on the WSJ corpus.
A Flexible POS Tagger Using an Automatically Acquired Language Model*
d32000679
This paper is rooted in two principles that sense discrimination for Chinese language processing should follow: completeness and discreteness. Building on a comparison of the Semantic Knowledge-base of Contemporary Chinese (SKCC) and the Grammatical Knowledge-base of Contemporary Chinese (GKB), and supported by a large-scale corpus, we carried out new editing and checking work. Firstly, we designed a novel multi-sense lexical entry candidate abstraction algorithm based on a lexicon comparison between SKCC and GKB. For all 1,605 candidate multi-sense entries, we edited the senses, explanations, and their translations. Secondly, we built a tree structure to process a special food and plant lexicon. Thirdly, a mapping platform between SKCC and GKB was built to help us establish mapping relationships between multi-sense entries in SKCC and GKB. Finally, we completed the mapping work for all multi-sense entries in SKCC.
Checking and Revision of Multi-sense Word Entries in the Semantic Knowledge-base of Contemporary Chinese. Abstract: Following the principles of completeness and operability of word sense discrimination oriented to Chinese information processing, and based on the Semantic Knowledge-base of Contemporary Chinese...
d9090411
Our aim is to build listening agents that can attentively listen to the user and satisfy his/her desire to speak and have himself/herself heard. This paper investigates the characteristics of such listening-oriented dialogues so that such a listening process can be achieved by automated dialogue systems. We collected both listening-oriented dialogues and casual conversation, and analyzed them by comparing the frequency of dialogue acts, as well as the dialogue flows using Hidden Markov Models (HMMs). The analysis revealed that listening-oriented dialogues and casual conversation have characteristically different dialogue flows and that it is important for listening agents to self-disclose before asking questions and to utter more questions and acknowledgments than in casual conversation in order to be good listeners.
Analysis of Listening-oriented Dialogue for Building Listening Agents
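As a rough illustration of how dialogue flows from the two collections above can be compared, the sketch below estimates first-order transition probabilities between dialogue-act labels. This is a deliberate simplification of the HMM analysis described in the abstract, and the act labels and toy corpora are hypothetical placeholders.

```python
from collections import Counter, defaultdict

def transition_probs(dialogues):
    """Estimate first-order transition probabilities between dialogue acts.

    `dialogues` is a list of dialogues, each a list of act labels,
    e.g. ["greeting", "self-disclosure", "question", "acknowledgment"].
    """
    counts = defaultdict(Counter)
    for acts in dialogues:
        for prev, nxt in zip(acts, acts[1:]):
            counts[prev][nxt] += 1
    return {
        prev: {nxt: c / sum(nxts.values()) for nxt, c in nxts.items()}
        for prev, nxts in counts.items()
    }

# Hypothetical toy corpora standing in for the two dialogue collections.
listening = [["greeting", "question", "self-disclosure", "question", "acknowledgment"]]
casual = [["greeting", "self-disclosure", "self-disclosure", "question"]]

print(transition_probs(listening))
print(transition_probs(casual))
```

Comparing the two resulting transition tables gives a coarse view of the "dialogue flow" differences the paper models with HMMs.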
d1829980
In the medical imaging domain, digitized data is rapidly expanding. Therefore, it is of major interest for radiologists to be able to perform an efficient and accurate extraction of imaging and clinical data (radiology reports), which are essential for a rigorous diagnosis and for better management of patients. In daily practice, radiology reports are written in a non-standardized language which is often ambiguous and noisy. Queries over radiological images can be greatly facilitated through textual indexing of the associated reports. In order to improve the quality of the analysis of such reports, it is desirable to specify an index enlargement algorithm based on spreading activation over a general lexical-semantic network. In this paper, we present such an algorithm along with its qualitative evaluation.
Medical Imaging Report Indexing: Enrichment of Index through an Algorithm of Spreading over a Lexico-semantic Network
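A minimal sketch of the spreading-activation idea from the abstract above: activation flows from seed index terms to neighbouring terms in a lexical-semantic graph, and sufficiently activated neighbours become enrichment candidates. The graph, edge weights, decay, and threshold here are all hypothetical, not the authors' parameters.

```python
def spread_activation(graph, seeds, decay=0.5, threshold=0.1, max_steps=3):
    """Propagate activation from seed index terms over a weighted
    lexical-semantic graph; returns a term -> activation mapping.

    `graph` maps a term to a dict {neighbour: edge_weight}.
    """
    activation = dict(seeds)          # term -> current best activation
    frontier = dict(seeds)
    for _ in range(max_steps):
        nxt = {}
        for term, act in frontier.items():
            for nb, w in graph.get(term, {}).items():
                a = act * w * decay
                if a >= threshold:
                    nxt[nb] = max(nxt.get(nb, 0.0), a)
                    activation[nb] = max(activation.get(nb, 0.0), a)
        frontier = nxt
    return activation

# Hypothetical toy network; a real system would use a large lexico-semantic resource.
graph = {
    "fracture": {"bone": 0.9, "trauma": 0.7},
    "bone": {"femur": 0.8},
}
print(spread_activation(graph, {"fracture": 1.0}))
```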
d16675779
The public demonstration of a Russian-English machine translation system in New York in January 1954, a collaboration of IBM and Georgetown University, caused a great deal of public interest and much controversy. Although a small-scale experiment of just 250 words and six 'grammar' rules, it raised expectations of automatic systems capable of high-quality translation in the near future. This paper describes the system, its background, its impact and its implications.
The Georgetown-IBM experiment demonstrated in January 1954
d2507563
The paper describes a system which uses packed parser output directly to build semantic representations. More specifically, the system takes as input Packed Shared Forests in the sense of Tomita (Tomita, 1985) and produces packed Underspecified Discourse Representation Structures. The algorithm visits every node in the parse forest only a bounded number of times, so that a significant increase in efficiency is registered for ambiguous sentences.
Semantic Construction from Parse Forests
d199379544
d14722064
In the Arab world, while Modern Standard Arabic is commonly used in formal written context, on sites like Youtube, people are increasingly using Dialectal Arabic, the language for everyday use to comment on a video and interact with the community. These user-contributed comments along with the video and user attributes, offer a rich source of multi-dialectal Arabic sentences and expressions from different countries in the Arab world. This paper presents YOUDACC, an automatically annotated large-scale multi-dialectal Arabic corpus collected from user comments on Youtube videos. Our corpus covers different groups of dialects: Egyptian (EG), Gulf (GU), Iraqi (IQ), Maghrebi (MG) and Levantine (LV). We perform an empirical analysis on the crawled corpus and demonstrate that our location-based proposed method is effective for the task of dialect labeling.
YouDACC: the Youtube Dialectal Arabic Commentary Corpus
d3266283
While recent retrieval techniques do not limit the number of index terms, out-of-vocabulary (OOV) words are crucial in speech recognition. Aiming at retrieving information with spoken queries, we fill the gap between speech recognition and text retrieval in terms of the vocabulary size. Given a spoken query, we generate a transcription and detect OOV words through speech recognition. We then match detected OOV words to terms indexed in a target collection to complete the transcription, and search the collection for documents relevant to the completed transcription. We show the effectiveness of our method by way of experiments.
A Method for Open-Vocabulary Speech-Driven Text Retrieval
d18072453
Mitkov and Ha (2003) and Mitkov et al. (2006) offered an alternative to the lengthy and demanding activity of developing multiple-choice test items by proposing an NLP-based methodology for construction of test items from instructive texts such as textbook chapters and encyclopaedia entries. One of the interesting research questions which emerged during these projects was how better quality distractors could automatically be chosen. This paper reports the results of a study seeking to establish which similarity measures generate better quality distractors of multiple-choice tests. Similarity measures employed in the procedure of selection of distractors are collocation patterns, four different methods of WordNet-based semantic similarity (extended gloss overlap measure, Leacock and Chodorow's, Jiang and Conrath's as well as Lin's measures), distributional similarity, phonetic similarity as well as a mixed strategy combining the aforementioned measures. The evaluation results show that the methods based on Lin's measure and on the mixed strategy outperform the rest, albeit not in a statistically significant fashion.
Semantic similarity of distractors in multiple-choice tests: extrinsic evaluation
d164454216
d420797
CCG, one of the most prominent grammar frameworks, deals efficiently with deletion under coordination in natural languages. However, when we turn our attention to more analytic languages in which pro-dropping is freer, CCG's decomposition rule for dealing with gapping becomes incapable of parsing some patterns of intra-sentential ellipsis in serial verb constructions. Moreover, the decomposition rule can also lead to an overgeneration problem. In this paper the decomposition rule is replaced by a memory mechanism, called CCG-MM. Fillers can be memorized and gaps can be induced from an input sentence in functional application rules, while fillers and gaps are associated in coordination and serialization. Multimodal slashes, which allow or ban memory operations, are utilized for ease of resource management. As a result, CCG-MM is more powerful than canonical CCG, but its generative power can be bounded by partially linear indexed grammar.
A Memory-Based Approach to the Treatment of Serial Verb Construction in Combinatory Categorial Grammar
d2659316
CIRCSIM-Tutor version 2, a dialogue-based intelligent tutoring system (ITS), is nearly five years old. It conducts a conversation with a student to help the student learn to solve a class of problems in cardiovascular physiology dealing with the regulation of blood pressure. It uses natural language for both input and output, and can handle a variety of syntactic constructions and lexical items, including sentence fragments and misspelled words.
CIRCSIM-Tutor: An Intelligent Tutoring System Using Natural Language Dialogue*
d21547774
Speech Technology: from Research to the Industry of Human-Machine Communication
d8502808
Hypernymy relation acquisition has been widely investigated, especially because taxonomies, which often constitute the backbone structure of semantic resources, are structured using this type of relation. Although many approaches have been dedicated to this task, most of them analyze only the written text. However, relations between not necessarily contiguous textual units can also be expressed thanks to typographical or dispositional markers. Such relations, which are out of reach of standard NLP tools, have been investigated in well-specified layout contexts. Our aim is to improve the relation extraction task by considering both the plain text and the layout. We propose here a method which combines layout, discourse and terminological analyses, and performs structured prediction. We focus on textual structures which correspond to a well-defined discourse structure and which often bear hypernymy relations. This type of structure encompasses titles and sub-titles, or enumerative structures. The results achieve a precision of about 60%.
Discovering Hypernymy Relations using Text Layout
d14313215
Knowledge is most often and directly expressed with natural language. Therefore, the representation and acquisition of knowledge play a key role in studies on information processing and natural language interpretation. Also, they are the principal issues in establishing a knowledge database. Only highly formalized descriptive systems or knowledge databases can be processed by computers. On the basis of the descriptive mechanisms of lexicalist theory, the present paper tries to provide a detailed description and integration of the syntactic, semantic and other information of Korean, in the form of lexical structures.
A Study on the Structure of Korean Knowledge Database 1
d52011231
Distantly supervised relation extraction greatly reduces human effort in extracting relational facts from unstructured texts. However, it suffers from a noisy labeling problem, which can degrade its performance. Meanwhile, the useful information expressed in knowledge graphs is still underutilized in the state-of-the-art methods for distantly supervised relation extraction. In light of these challenges, we propose CORD, a novel COopeRative Denoising framework, which consists of two base networks leveraging the text corpus and the knowledge graph respectively, and a cooperative module involving their mutual learning through adaptive bi-directional knowledge distillation and dynamic ensembling over noise-varying instances. Experimental results on a real-world dataset demonstrate that the proposed method reduces the noisy labels and achieves substantial improvement over the state-of-the-art methods.
Cooperative Denoising for Distantly Supervised Relation Extraction
d9092992
The paper discusses the methods followed to re-use a large-scale, broad-coverage English grammar for constructing similar scale grammars for Bulgarian, Czech and Russian for the fast prototyping of a multilingual generation system. We present (1) the theoretical and methodological basis for resource sharing across languages, (2) the use of a corpus-based contrastive register analysis, in particular, contrastive analysis of mood and agency. Because the study concerns reuse of the grammar of a language that is typologically quite different from the languages treated, the issues addressed in this paper appear relevant to a wider range of researchers in need of large-scale grammars for less-researched languages.
Resources for multilingual text generation in three Slavic languages
d6500977
While previous sentiment analysis research has concentrated on the interpretation of explicitly stated opinions and attitudes, this work addresses a type of opinion implicature (i.e., opinion-oriented default inference) in real-world text. This work describes a rule-based conceptual framework for representing and analyzing opinion implicatures. In the course of understanding implicatures, the system recognizes implicit sentiments (and beliefs) toward various events and entities in the sentence, often of mixed polarities; thus, it produces a richer interpretation than is typical in opinion analysis.
A Conceptual Framework for Inferring Implicatures
d37726778
We present a large, free, French corpus of online written conversations extracted from the Ubuntu platform's forums, mailing lists and IRC channels. The corpus is meant to support multi-modality and diachronic studies of online written conversations. We choose to build the corpus around a robust metadata model based upon strong principles, such as the "stand off" annotation principle. We detail the model, we explain how the data was collected and processed -in terms of meta-data, text and conversation -and we detail the corpus' contents through a series of meaningful statistics. A portion of the corpus -about 4,700 sentences from emails, forum posts and chat messages sent in November 2014 -is annotated in terms of dialogue acts and sentiment. We discuss how we adapted our dialogue act taxonomy from the DIT++ annotation scheme and how the data was annotated, before presenting our results as well as a brief qualitative analysis of the annotated data.
Ubuntu-fr: a Large and Open Corpus for Supporting Multi-Modality and Online Written Conversation Studies
d14597127
We present a nonparametric Bayesian model of tree structures based on the hierarchical Dirichlet process (HDP). Our HDP-PCFG model allows the complexity of the grammar to grow as more training data is available. In addition to presenting a fully Bayesian model for the PCFG, we also develop an efficient variational inference procedure. On synthetic data, we recover the correct grammar without having to specify its complexity in advance. We also show that our techniques can be applied to full-scale parsing applications by demonstrating its effectiveness in learning state-split grammars.
The Infinite PCFG using Hierarchical Dirichlet Processes
d8389796
We distinguish three main, overlapping activities in an advice-giving dialogue: problem formulation, resolution, and explanation. This paper focuses on a problem formulation activity in a dialogue module which interacts on one side with an expert problem solver for financial investing and on the other side with a natural language front-end. Several strategies which reflect specific aspects of person-machine advice-giving dialogues are realized by incorporating planning at a high-level of dialogue.
PLANNING FOR PROBLEM FORMULATION IN ADVICE-GIVING DIALOGUE Cap Sogeti Innovation
d4955581
We derive and implement an algorithm similar to (Huang and Chiang, 2005) for finding the n best derivations in a weighted hypergraph. We prove the correctness and termination of the algorithm and we show experimental results concerning its runtime. Our work is different from the aforementioned one in the following respects: we consider labeled hypergraphs, allowing for tree-based language models (Maletti and Satta, 2009); we specifically handle the case of cyclic hypergraphs; we admit structured weight domains, allowing for multiple features to be processed; we use the paradigm of functional programming together with lazy evaluation, achieving concise algorithmic descriptions. * This research was financially supported by DFG VO 1101/5-1.
n-Best Parsing Revisited *
d219309328
d16775609
Syntax-based translation models should in principle be efficient with polynomially-sized search space, but in practice they are often embarrassingly slow, partly due to the cost of language model integration. In this paper we borrow from phrase-based decoding the idea to generate a translation incrementally left-to-right, and show that for tree-to-string models, with a clever encoding of derivation history, this method runs in average-case polynomial time in theory, and linear time with beam search in practice (whereas phrase-based decoding is exponential-time in theory and quadratic-time in practice). Experiments show that, with comparable translation quality, our tree-to-string system (in Python) can run more than 30 times faster than the phrase-based system Moses (in C++).
Efficient Incremental Decoding for Tree-to-String Translation
d15316704
This paper describes a dual-classifier approach to contextual sentiment analysis at the SemEval-2013 Task 2. Contextual analysis of polarity focuses on a word or phrase, rather than the broader task of identifying the sentiment of an entire text. The Task 2 definition includes target word spans that range in size from a single word to entire sentences. However, the context of a single word is dependent on the word's surrounding syntax, while a phrase contains most of the polarity within itself. We thus describe separate treatment with two independent classifiers, outperforming the accuracy of a single classifier. Our system ranked 6th out of 19 teams on SMS message classification, and 8th out of 23 on Twitter data. We also show a surprising result that a very small amount of word context is needed for high-performance polarity extraction.
USNA: A Dual-Classifier Approach to Contextual Sentiment Analysis
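A hedged sketch of the routing idea in the abstract above: single-word targets are classified from their surrounding context, while multi-word targets are classified from the span itself. The logistic-regression-over-n-grams classifiers and all names below are assumptions for illustration, not the authors' feature set.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Two independent classifiers: one for single-word targets (context-dependent),
# one for multi-word targets (polarity mostly carried by the span itself).
word_clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)),
                         LogisticRegression(max_iter=1000))
phrase_clf = make_pipeline(CountVectorizer(),
                           LogisticRegression(max_iter=1000))

def train(instances):
    """`instances` is a list of (target_span, context_sentence, label) triples."""
    words = [(ctx, y) for span, ctx, y in instances if len(span.split()) == 1]
    phrases = [(span, y) for span, ctx, y in instances if len(span.split()) > 1]
    if words:
        word_clf.fit([x for x, _ in words], [y for _, y in words])
    if phrases:
        phrase_clf.fit([x for x, _ in phrases], [y for _, y in phrases])

def predict(span, context):
    # Route by target length: context for single words, span text for phrases.
    clf, text = (word_clf, context) if len(span.split()) == 1 else (phrase_clf, span)
    return clf.predict([text])[0]
```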
d8542266
The annotation and labeling of speech tasks in large multitask speech corpora is a necessary part of preparing a corpus for distribution. This paper addresses three approaches to annotation and labeling, namely manual, semi-automatic and automatic procedures for labeling the UCU Accent Project speech data, a multilingual multitask longitudinal speech corpus. Accuracy and minimal time investment are the priorities in assessing the efficacy of each procedure. While manual labeling based on aural and visual input should produce the most accurate results, this approach is prone to error because of its repetitive nature. A semi-automatic event detection system requiring manual rejection of false alarms and location and labeling of misses provided the best results. A fully automatic system could not be applied to entire speech recordings because of the variety of tasks and genres. However, it could be used to annotate separate sentences within a specific task. Acoustic confidence measures can correctly detect sentences that do not match the text with an equal error rate of 3.3%.
Semi-automatic labeling of the UCU accents speech corpus
d16033481
Uncertainty detection is essential for many NLP applications. For instance, in information retrieval, it is of primary importance to distinguish among factual, negated and uncertain information. Current research on uncertainty detection has mostly focused on the English language; in contrast, here we present the first machine learning algorithm that aims at identifying linguistic markers of uncertainty in Hungarian texts from two domains: Wikipedia and news media. The system is based on sequence labeling and makes use of a rich feature set including orthographic, lexical, morphological, syntactic and semantic features. Having access to annotated data from two domains, we also focus on the domain specificities of uncertainty detection by comparing results obtained in in-domain and cross-domain settings. Our results show that the domain of the text has a significant influence on uncertainty detection.
Uncertainty Detection in Hungarian Texts
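A minimal sketch of sequence labeling for uncertainty cues, using sklearn-crfsuite as a stand-in sequence labeler. The features and toy BIO-labelled data below are illustrative assumptions, not the paper's Hungarian feature set or corpora.

```python
import sklearn_crfsuite

def token_features(sent, i):
    w = sent[i]
    return {
        "lower": w.lower(),                                   # lexical
        "suffix3": w[-3:],                                     # morphological proxy
        "is_title": w.istitle(),                               # orthographic
        "prev": sent[i - 1].lower() if i > 0 else "<BOS>",
        "next": sent[i + 1].lower() if i < len(sent) - 1 else "<EOS>",
    }

# Hypothetical training data with BIO-style uncertainty-cue labels.
sents = [["This", "may", "indicate", "a", "problem"],
         ["The", "result", "is", "confirmed"]]
labels = [["O", "B-CUE", "O", "O", "O"],
          ["O", "O", "O", "O"]]

X = [[token_features(s, i) for i in range(len(s))] for s in sents]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
crf.fit(X, labels)
print(crf.predict(X))
```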
d1627664
This paper describes the contribution of LT3 for the CLPsych 2016 Shared Task on automatic triage of mental health forum posts. Our systems use multiclass Support Vector Machines (SVM), cascaded binary SVMs and ensembles with a rich feature set. The best systems obtain macro-averaged F-scores of 40% on the full task and 80% on the green versus alarming distinction. Multiclass SVMs with all features score best in terms of F-score, whereas feature filtering with bi-normal separation and classifier ensembling are found to improve recall of alarming posts.
Mental Distress Detection and Triage in Forum Posts: The LT3 CLPsych 2016 Shared Task System
d34045500
PET: PROCESSING ENGLISH TEXT
d2368586
This paper presents a thesis proposal on approaches to automatically scoring non-native speech from second language tests. Current speech scoring systems assess speech primarily using acoustic features such as fluency and pronunciation; however, content features are barely involved. Motivated by this limitation, the study aims to investigate the use of content features in speech scoring systems. For content features, a central question is how speech content can be represented in an appropriate way to facilitate automated speech scoring. The study proposes using ontology-based representation to perform concept-level representation on speech transcripts, and furthermore the content features computed from ontology-based representation may facilitate speech scoring. One baseline and two ontology-based representations are compared in experiments. Preliminary results show that ontology-based representation slightly improves the performance of one content feature for automated scoring over the baseline system.
Using Ontology-based Approaches to Representing Speech Transcripts for Automated Speech Scoring
d2215227
This paper describes an alignment-based model for interpreting natural language instructions in context. We approach instruction following as a search over plans, scoring sequences of actions conditioned on structured observations of text and the environment. By explicitly modeling both the low-level compositional structure of individual actions and the high-level structure of full plans, we are able to learn both grounded representations of sentence meaning and pragmatic constraints on interpretation. To demonstrate the model's flexibility, we apply it to a diverse set of benchmark tasks. On every task, we outperform strong task-specific baselines, and achieve several new state-of-the-art results.
Alignment-Based Compositional Semantics for Instruction Following
d218974448
d41663092
d214620266
d15427200
Discourse markers ('cue words') are lexical items that signal the kind of coherence relation holding between adjacent text spans; for example, because, since, and for this reason are different markers for causal relations. Discourse markers are a syntactically quite heterogeneous group of words, many of which are traditionally treated as function words belonging to the realm of grammar rather than to the lexicon. But for a single discourse relation there is often a set of similar markers, allowing for a range of paraphrases for expressing the relation. To capture the similarities and differences between these, and to represent them adequately, we are developing DiMLex, a lexicon of discourse markers. After describing our methodology and the kind of information to be represented in DiMLex, we briefly discuss its potential applications in both text generation and understanding.
DiMLex: A lexicon of discourse markers for text generation and understanding
d16163624
In order to investigate the effect of source language on translations, we investigate two variants of a Korean translation corpus. The first variant consists of Korean translations of 162,308 Japanese sentences from the ATR BTEC (Basic Expression Text Corpus). The second variant was made by translating the English translations of the Japanese sentences into Korean. We show that the source language text has a large influence on the target text. Even after normalizing orthographic differences, fewer than 8.3% of the sentences in the two variants were identical. We describe in general which phenomena differ and then discuss how our analysis can be used in natural language processing.
A Comparison of Two Variant Corpora: The Same Content with Different Sources
d219303777
d219303480
d2307044
The Simple English Wikipedia provides a simplified version of Wikipedia's English articles for readers with special needs. However, there are fewer efforts to make information in Wikipedia in other languages accessible to a large audience. This work proposes the use of a syntactic simplification engine with high precision rules to automatically generate a Simple Portuguese Wikipedia on demand, based on user interactions with the main Portuguese Wikipedia. Our estimates indicated that a human can simplify about 28,000 occurrences of analysed patterns per million words, while our system can correctly simplify 22,200 occurrences, with estimated f-measure 77.2%.
Towards an on-demand Simple Portuguese Wikipedia
d15273426
The third PASCAL Recognizing Textual Entailment Challenge (RTE-3) contained an optional task that extended the main entailment task by requiring a system to make three-way entailment decisions (entails, contradicts, neither) and to justify its response. Contradiction was rare in the RTE-3 test set, occurring in only about 10% of the cases, and systems found accurately detecting it difficult. Subsequent analysis of the results shows a test set must contain many more entailment pairs for the three-way decision task than the traditional two-way task to have equal confidence in system comparisons. Each of six human judges representing eventual end users rated the quality of a justification by assigning "understandability" and "correctness" scores. Ratings of the same justification across judges differed significantly, signaling the need for a better characterization of the justification task.
Contradictions and Justifications: Extensions to the Textual Entailment Task
d15241388
This paper gives an overview of ongoing work on a system for the generation of NL descriptions of classes defined in OWL ontologies. We present a general structuring approach for such descriptions. Since OWL ontologies do not by default contain the information necessary for lexicalization, lexical information has to be added to the data via annotations. A rule-based mechanism for automatically deriving these annotations is presented.
Generating Natural Language Descriptions of Ontology Concepts
d6568949
NOTES ON LR PARSER DESIGN
d219302766
d233365144
Sarcasm detection is of great importance in understanding people's true sentiments and opinions. Many online feedbacks, reviews, and social media comments are sarcastic. Several studies have already been carried out in this field, but most researchers have studied sarcasm analysis for English rather than for Arabic, because of the challenges of the Arabic language. In this paper, we propose a new approach for improving Arabic sarcasm detection. Our approach uses data augmentation, contextual word embeddings and a random forest model to get the best results. Our accuracy in the shared task on sarcasm and sentiment detection in Arabic was 0.5189 for F1-sarcastic as the official metric, using the shared dataset ArSarcasm-V2 (Abu Farha, et al., 2021).
d15375137
We describe an approach to simultaneous tokenization and part-of-speech tagging that is based on separating the closed and open-class items, and focusing on the likelihood of the possible stems of the open-class words. By encoding some basic linguistic information, the machine learning task is simplified, while achieving state-of-the-art tokenization results and competitive POS results, although with a reduced tag set and some evaluation difficulties.
Simultaneous Tokenization and Part-of-Speech Tagging for Arabic without a Morphological Analyzer
d225062653
d12704877
Multilingual extensibility requires an MT system to have a language-independent pivot. It is argued that an ideal, purely semantic pivot is impossible. A translation method is described in which semantic relations are kept implicit in syntax, while the semantic traits and distinctions are implicit in the words of a full-fledged language itself as pivot.
IMPLICITNESS AS A GUIDING PRINCIPLE IN MACHINE TRANSLATION
d18918026
This paper summarises the results of a pilot project conducted to investigate the correlation between automatic evaluation metric scores and post-editing speed on a segment by segment basis. Firstly, the results from the comparison of various automatic metrics and post-editing speed will be reported. Secondly, further analysis is carried out by taking into consideration other relevant variables, such as text length and structures, and by means of multiple regression. It has been found that different automatic metrics achieve different levels and types of correlation with post-editing speed. We suggest that some of the source text characteristics and machine translation errors may be able to account for the gap between the automatic metric scores and post-editing speed, and may also help with understanding human post-editing process.
Correlation between Automatic Evaluation Metric Scores, Post-Editing Speed, and Some Other Factors
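A hedged illustration of the segment-level analysis named in the abstract above: a correlation between a metric score and post-editing speed, followed by a multiple regression that also uses segment length as a predictor. All numbers below are hypothetical placeholder inputs, not the paper's data or results.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import LinearRegression

# Hypothetical per-segment data: automatic metric score, source length
# (tokens), and post-editing speed (e.g. words per minute).
metric = np.array([0.62, 0.41, 0.75, 0.55, 0.30, 0.68])
length = np.array([12, 25, 9, 18, 30, 14])
speed = np.array([22.0, 14.5, 27.0, 19.0, 11.0, 24.0])

# Simple correlation between the metric and post-editing speed.
r, p = pearsonr(metric, speed)
print(f"metric vs. speed: r={r:.2f}, p={p:.3f}")

# Multiple regression with metric score and segment length as predictors.
X = np.column_stack([metric, length])
reg = LinearRegression().fit(X, speed)
print("coefficients:", reg.coef_, "R^2:", reg.score(X, speed))
```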
d5600664
We present a proposal for the structuring of collocation knowledge in the lexicon of a multilingual generation system and show to what extent it can be used in the process of lexical selection. This proposal is part of Polygloss, a new research project on multilingual generation, and it has been inspired by work carried out in the SEM-SYN project (see e.g. [RÖSNER 1988]). The descriptive approach presented in this proposal is based on a combination of results from recent lexicographical research and the application of Meaning-Text-Theory (MTT) (see e.g. [MEL'CUK et al. 1981], [MEL'CUK et al. 1984]). We first outline the overall structure of the dictionary system that is needed by a multilingual generator; section 2 gives an overview of the results of lexicographical work on collocations and compares them with "lexical functions" as used in Meaning-Text-Theory. Section 3 shows how we intend to integrate collocations in the generation dictionary and how "lexical functions" can be used in generation. We use the term "collocation" in the sense of [HAUSMANN 1985], referring to constraints on the cooccurrence of two lexeme words; the two elements are not completely freely combined, but one of them semantically determines the other one. Examples are for instance solve a problem, turn dark, expose someone to a risk, etc. For a more detailed definition see section 2. Research reported in this paper is supported by the German Bundesministerium für Forschung und Technologie, BMFT, under grant No. 08 B 3116 3. The views and conclusions contained herein are those of the authors and should not be interpreted as positions of the project as a whole.
Collocations in Multilingual Generation
d17968540
We present Ambient Search, an open source system for displaying and retrieving relevant documents in real time for speech input. The system works ambiently, that is, it unobstructively listens to speech streams in the background, identifies keywords and keyphrases for query construction and continuously serves relevant documents from its index. Query terms are ranked with Word2Vec and TF-IDF and are continuously updated to allow for ongoing querying of a document collection. The retrieved documents, in our case Wikipedia articles, are visualized in real time in a browser interface. Our evaluation shows that Ambient Search compares favorably to another implicit information retrieval system on speech streams. Furthermore, we extrinsically evaluate multiword keyphrase generation, showing positive impact for manual transcriptions.
Ambient Search: A Document Retrieval System for Speech Streams
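A minimal sketch of the Word2Vec plus TF-IDF query-term ranking mentioned in the abstract above: candidate tokens from the current speech window are scored by IDF weight times embedding similarity to the centroid of previously chosen topic terms. The function names, file paths and weighting scheme are assumptions for illustration, not the Ambient Search implementation.

```python
import numpy as np
from gensim.models import KeyedVectors
from sklearn.feature_extraction.text import TfidfVectorizer

def build_idf(background_docs):
    """Estimate IDF weights from a background document collection."""
    vec = TfidfVectorizer().fit(background_docs)
    return dict(zip(vec.get_feature_names_out(), vec.idf_))

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rank_query_terms(window_tokens, topic_terms, kv, idf, top_k=5):
    """Score tokens from the current speech window by IDF weight times
    similarity to the centroid of previously selected topic terms."""
    anchors = [t for t in topic_terms if t in kv]
    centroid = np.mean([kv[t] for t in anchors], axis=0)
    scores = {}
    for tok in set(window_tokens):
        if tok in kv and tok in idf:
            scores[tok] = idf[tok] * cosine(centroid, kv[tok])
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# Usage (paths and data are placeholders):
# kv = KeyedVectors.load_word2vec_format("embeddings.bin", binary=True)
# idf = build_idf(open("background.txt").read().splitlines())
# print(rank_query_terms(asr_tokens, current_topic_terms, kv, idf))
```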
d199379751
d8394031
We describe a method for creating a non-English subjectivity lexicon based on an English lexicon, an online translation service and a general purpose thesaurus: Wordnet. We use a PageRank-like algorithm to bootstrap from the translation of the English lexicon and rank the words in the thesaurus by polarity using the network of lexical relations in Wordnet. We apply our method to the Dutch language. The best results are achieved when using synonymy and antonymy relations only, and ranking positive and negative words simultaneously. Our method achieves an accuracy of 0.82 at the top 3,000 negative words, and 0.62 at the top 3,000 positive words.
Generating a Non-English Subjectivity Lexicon: Relations That Matter
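A hedged sketch of the PageRank-like propagation over lexical relations described in the abstract above. For illustration it builds a lemma graph from the English WordNet via NLTK (the paper targets Dutch) and seeds personalized PageRank with polarity words; the seed lists and the undifferentiated treatment of antonymy edges are simplifying assumptions.

```python
import networkx as nx
from nltk.corpus import wordnet as wn  # requires nltk.download("wordnet")

def rank_polarity(seed_words, alpha=0.85):
    """Build a lemma graph from WordNet synonymy/antonymy relations and run
    personalized PageRank seeded with translated polarity words."""
    g = nx.Graph()
    for syn in wn.all_synsets():
        lemmas = [l.name() for l in syn.lemmas()]
        # Synonymy: connect lemmas that share a synset.
        for a in lemmas:
            for b in lemmas:
                if a != b:
                    g.add_edge(a, b)
        # Antonymy: added as plain links here; a fuller implementation would
        # track the sign these edges contribute to each word's score.
        for l in syn.lemmas():
            for ant in l.antonyms():
                g.add_edge(l.name(), ant.name())
    personalization = {w: 1.0 for w in seed_words if w in g}
    return nx.pagerank(g, alpha=alpha, personalization=personalization)

# Hypothetical seeds; real seeds would come from translating the English lexicon.
# scores = rank_polarity(["good", "happy", "bad", "sad"])
```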
d13091133
This paper introduces a novel corpus of natural language dialogues obtained from humans performing a cooperative, remote, search task (CReST) as it occurs naturally in a variety of scenarios (e.g., search and rescue missions in disaster areas). This corpus is unique in that it involves remote collaborations between two interlocutors who each have to perform tasks that require the other's assistance. In addition, one interlocutor's tasks require physical movement through an indoor environment as well as interactions with physical objects within the environment. The multi-modal corpus contains the speech signals as well as transcriptions of the dialogues, which are additionally annotated for dialog structure, disfluencies, and for constituent and dependency syntax. On the dialogue level, the corpus was annotated for separate dialogue moves, based on the classification developed by Carletta et al. (1997) for coding task-oriented dialogues. Disfluencies were annotated using the scheme developed by Lickley (1998). The syntactic annotation comprises POS annotation, Penn Treebank style constituent annotations as well as dependency annotations based on the dependencies of pennconverter.
The Indiana "Cooperative Remote Search Task" (CReST) Corpus
d3161143
The purpose of this research is to test the efficacy of applying automated evaluation techniques, originally devised for the evaluation of human language learners, to the output of machine translation (MT) systems. We believe that these evaluation techniques will provide information about both the human language learning process, the translation process and the development of machine translation systems. This, the first experiment in a series of experiments, looks at the intelligibility of MT output. A language learning experiment showed that assessors can differentiate native from non-native language essays in less than 100 words. Even more illuminating was the factors on which the assessors made their decisions. We tested this to see if similar criteria could be elicited from duplicating the experiment using machine translation output. Subjects were given a set of up to six extracts of translated newswire text. Some of the extracts were expert human translations, others were machine translation outputs. The subjects were given three minutes per extract to determine whether they believed the sample output to be an expert human translation or a machine translation. Additionally, they were asked to mark the word at which they made this decision. The results of this experiment, along with a preliminary analysis of the factors involved in the decision making process will be presented here.
Is That Your Final Answer?
d227230405
We provide a comprehensive overview of existing systems for the computational generation of verbal humor in the form of jokes and short humorous texts. Considering linguistic humor theories, we analyze the systematic strengths and drawbacks of the different approaches. In addition, we show how the systems have been evaluated so far and propose two evaluation criteria: humorousness and complexity. From our analysis of the field, we conclude new directions for the advancement of computational humor generation.
A Survey on Approaches to Computational Humor Generation
d6717552
This paper presents an unsupervised topic identification method integrating linguistic and visual information based on Hidden Markov Models (HMMs). We employ HMMs for topic identification, wherein a state corresponds to a topic and various features including linguistic, visual and audio information are observed. Our experiments on two kinds of cooking TV programs show the effectiveness of our proposed method.
Unsupervised Topic Identification by Integrating Linguistic and Visual Information Based on Hidden Markov Models
d15847806
Few attempts have been made to investigate the utility of temporal reasoning within machine learning frameworks for temporal relation classification between events in news articles. This paper presents three settings where temporal reasoning aids machine learned classifiers of temporal relations: (1) expansion of the dataset used for learning; (2) detection of inconsistencies among the automatically identified relations; and (3) selection among multiple temporal relations. Feature engineering is another effort in our work to improve classification accuracy. All examples shown here are taken from TimeBank 1.2.
Experiments with Reasoning for Temporal Relations between Events
d8671786
This paper demonstrates the problem of translating modal verbs and phrases and shows how some of these problems can be overcome by choosing semantic representations which look like representations of passive verbs. These semantic representations suit alternative ways of expressing modality by e.g. passive constructions, adverbs and impersonal constructions in the target language. Various restructuring rules for English, Swedish and Russian are presented.
MODALS AS A PROBLEM FOR MT
d35846451
Historically, Natural Language Processing (NLP) focuses on unstructured data (speech and text) understanding while Data Mining (DM) mainly focuses on massive, structured or semi-structured datasets. The general research directions of these two fields have also followed different philosophies and principles. For example, NLP aims at deep understanding of individual words, phrases and sentences ("micro-level"), whereas DM aims to conduct a high-level understanding, discovery and synthesis of the most salient information from a large set of documents when working on text data ("macro-level"). But they share the same goal of distilling knowledge from data. In the past five years, these two areas have had intensive interactions and thus mutually enhanced each other through many successful text mining tasks. This positive progress mainly benefits from some innovative intermediate representations such as "heterogeneous information networks" [Han et al., 2010, Sun et al., 2012b]. However, successful collaborations between any two fields require substantial mutual understanding, patience and passion among researchers. Similar to the applications of machine learning techniques in NLP, there is usually a gap of at least several years between the creation of a new DM approach and its first successful application in NLP. More importantly, many DM approaches such as gSpan [Yan and Han, 2002] and RankClus [Sun et al., 2009a] have demonstrated their power on structured data. But they remain relatively unknown in the NLP community, even though there are many obvious potential applications. On the other hand, compared to DM, the NLP community has paid more attention to developing large-scale data annotations, resources, and shared tasks which cover a wide range of genres and domains. NLP can also provide the basic building blocks for many DM tasks such as text cube construction [Tao et al., 2014]. Therefore in many scenarios, for the same approach the NLP experiment setting is often much closer to real-world applications than its DM counterpart. We would like to share the experiences and lessons from our extensive inter-disciplinary collaborations in the past five years. The primary goal of this tutorial is to bridge the knowledge gap between these two fields and speed up the transition process. We will introduce two types of DM methods: (1) state-of-the-art DM methods that have already been proven effective for NLP; and (2) some newly developed DM methods that we believe will fit into some specific NLP problems. In addition, we aim to suggest some new research directions in order to better marry these two areas and lead to more fruitful outcomes. The tutorial will thus be useful for researchers from both communities. We will try to provide a concise roadmap of recent perspectives and results, as well as point to the related DM software and resources, and NLP data sets that are available to both research communities. Outline: We will focus on the following three perspectives. Where do NLP and DM Meet: We will first pick up the tasks shown in Table 1 that have attracted interest from both NLP and DM, and give an overview of different solutions to these problems. We will compare their fundamental differences in terms of goals, theories, principles and methodologies.
Successful Data Mining Methods for NLP
d219303946
d218974497
d11978095
We analyze deployment of an interactive dialogue system in an environment where deep technical expertise might not be readily available. The initial version was created using a collection of research tools. We summarize a number of challenges with its deployment at two museums and describe a new system that simplifies the installation and user interface; reduces reliance on 3rd-party software; and provides a robust data collection mechanism.
Lessons in Dialogue System Deployment
d15200606
We present a collection of parallel treebanks that have been automatically aligned on both the terminal and the nonterminal constituent level for use in syntax-based machine translation. We describe how they were constructed and applied to a syntax-and example-based machine translation system called Parse and Corpus-Based Machine Translation (PaCo-MT). For the language pair Dutch to English, we present evaluation scores of both the nonterminal constituent alignments and the MT system itself, and in the latter case, compare them with those of Moses, a current state-of-the-art statistical MT system, when trained on the same data.
Large Aligned Treebanks for Syntax-based Machine Translation
d226262364
d16224863
This paper presents a finite-state approach to phrase-based statistical machine translation where a log-linear modelling framework is implemented by means of an on-the-fly composition of weighted finite-state transducers. Moses, a well-known state-of-the-art system, is used as a machine translation reference in order to validate our results by comparison. Experiments on the TED corpus achieve a similar performance to that yielded by Moses.
A finite-state approach to phrase-based statistical machine translation
d11502054
A tree transformation is sensible if the size of each output tree is uniformly bounded by a linear function in the size of the corresponding input tree. Every sensible tree transformation computed by an arbitrary weighted extended top-down tree transducer can also be computed by a weighted multi bottom-up tree transducer. This further motivates weighted multi bottom-up tree transducers as suitable translation models for syntax-based machine translation.
Every sensible extended top-down tree transducer is a multi bottom-up tree transducer
d17728123
Multiway trees (MT, henceforth) are a common and well-understood data structure for describing hierarchical linguistic information. With the availability of large treebanks, retrieval techniques for highly structured data now become essential. In this contribution, we investigate the efficient retrieval of MT structures at the cost of a complex index: the Treegram Index. We illustrate our approach with the VENONA retrieval system, which handles the BHt (Biblia Hebraica transcripta) treebank comprising 508,650 phrase structure trees with maximum degree eight and maximum height 17, containing altogether 3.3 million Old Hebrew words.
The Treegram Index An Efficient Technique for Retrieval in Linguistic Treebanks
d1473515
Extracting knowledge from unstructured text is a long-standing goal of NLP. Although learning approaches to many of its subtasks have been developed (e.g., parsing, taxonomy induction, information extraction), all end-to-end solutions to date require heavy supervision and/or manual engineering, limiting their scope and scalability. We present OntoUSP, a system that induces and populates a probabilistic ontology using only dependency-parsed text as input. OntoUSP builds on the USP unsupervised semantic parser by jointly forming ISA and IS-PART hierarchies of lambda-form clusters. The ISA hierarchy allows more general knowledge to be learned, and the use of smoothing for parameter estimation. We evaluate OntoUSP by using it to extract a knowledge base from biomedical abstracts and answer questions. OntoUSP improves on the recall of USP by 47% and greatly outperforms previous state-of-the-art approaches.
Unsupervised Ontology Induction from Text
d24712229
d46582372
In this study we develop a system that tags and extracts financial concepts called financial named entities (FNE) along with corresponding numeric values -monetary and temporal. We employ machine learning and natural language processing methods to identify financial concepts and dates, and link them to numerical entities.
Experiments in Candidate Phrase Selection for Financial Named Entity Extraction - A Demo
d209442021
d6999307
Department of Computer Science, Tokyo Institute of Technology, 2-12-1 Ookayama, Meguro-ku, Tokyo 152, JAPAN. {fujii,inui,take,tanaka}@cs.titech.ac.jp Abstract: Word sense disambiguation has recently been utilized in corpus-based approaches, reflecting the growth in the number of machine-readable texts. One category of approaches disambiguates an input verb sense based on the similarity between its governing case fillers and those in given examples. In this paper, we introduce the degree of contribution of a case to verb sense disambiguation into this existing method; in this, greater diversity of the semantic range of case filler examples will lead to that case contributing more to verb sense disambiguation. We also report the result of a comparative experiment, in which the performance of disambiguation is improved by considering this notion of semantic contribution.
To what extent does case contribute to verb sense disambiguation? FUJII Atsushi, INUI Kentaro, TOKUNAGA Takenobu and TANAKA Hozumi
d7170547
Functioning as adverbials, yídìng and shìbì in Mandarin Chinese can either express intensification or (strong) epistemic necessity. In addition, context influences their semantics. Hence, dynamic semantics are proposed for them. An information state σ is a pair <A, s>, where s is a proposition and A is an affirmative ordering. Yídìng(φ) performs an update on an information state: A is updated with φ and s is specified to be a subset of or equal to φ, as long as φ is true in one of the absolutely affirmative worlds. Otherwise, uttering yídìng(φ) leads to an absurd state. This is how a strong epistemic necessity reading is derived. To yield an intensification reading, yídìng(φ) performs a test on the information state. Yídìng(φ) gives back the original information state as long as φ is true in all of the absolutely affirmative worlds. Otherwise, an absurd state is produced. As for shìbì, its semantics is identical to that of yídìng, except that the s in an information state σ for shìbì is underspecified and needs resolving before a proposition gets an appropriate interpretation. The information needed to resolve the underspecified s for shìbì must be inferred from the context.
Dynamic Semantics for Intensification and Epistemic Necessity: The Case of Yídìng and Shìbì in Mandarin Chinese
d27394473
A procedure is described to gather corpora of academic writing from the web using BootCaT. The procedure uses terms distinctive of different registers and disciplines in COCA to locate and gather web pages containing them.
Building Webcorpora of Academic Prose with BootCaT
d246702336
HESIP is a hybrid explanation system for image predictions that combines sub-symbolic and symbolic machine learning techniques to explain the predictions of image classification tasks. The sub-symbolic component makes a prediction for an image and the symbolic component learns probabilistic symbolic rules in order to explain that prediction. In HESIP, the explanations are generated in controlled natural language from the learned probabilistic rules using a bi-directional logic grammar. In this paper, we present an explanation modification method where a human-in-the-loop can modify an incorrect explanation generated by the HESIP system and afterwards, the modified explanation is used by the symbolic component of HESIP to learn a better explanation.
Generating and Modifying Natural Language Explanations
d38932337
This paper describes the Spanish Resource Grammar, an open-source multi-purpose broad-coverage precise grammar for Spanish. The grammar is implemented on the Linguistic Knowledge Builder (LKB) system, it is grounded in the theoretical framework of Head-driven Phrase Structure Grammar (HPSG), and it uses Minimal Recursion Semantics (MRS) for the semantic representation. We have developed a hybrid architecture which integrates shallow processing functionalities -morphological analysis, and Named Entity recognition and classification -into the parsing process. The SRG has a full coverage lexicon of closed word classes and it contains 50,852 lexical entries for open word classes. The grammar also has 64 lexical rules to perform valence changing operations on lexical items, and 191 phrase structure rules that combine words and phrases into larger constituents and compositionally build up their semantic representation. The annotation of each parsed sentence in an LKB grammar simultaneously represents a traditional phrase structure tree, and a MRS semantic representation. We provide evaluation results on sentences from newspaper texts and discuss future work.
The Spanish Resource Grammar
d1983416
Ecological Linguistics, 316 "A" St. S.E., Washington, D.C., 20003. ABSTRACT: National and international standards committees are now discussing a two-byte code for multilingual information processing. This provides for 65,536 separate character and control codes, enough to make permanent code assignments for all the characters of all national alphabets of the world, and also to include Chinese/Japanese characters. This paper discusses the kinds of flexibility required to handle both Roman and non-Roman alphabets. It is crucial to separate information units (codes) from graphic forms, to maximize processing power. Comparing alphabets around the world, we find that the graphic devices (letters, digraphs, accent marks, punctuation, spacing, etc.) represent a very limited number of information units. It is possible to arrange alphabet codes to provide transliteration equivalence, the best of three solutions compared as a framework for code assignments.
Multilingual Text Processing in a Two-Byte Code
d207984484
d38268803
In order to assess the spoken skills of learners of Japanese effectively and more efficiently, the Institute for DECODE (Institute for Digital Enhancement of Cognitive Development) at Waseda University is collaborating with Ordinate Corporation to develop and validate an automated test of spoken Japanese, the SJT (Spoken Japanese Test). The SJT is intended to measure a test-taker's facility in spoken Japanese, that is, listening and speaking skills in daily conversation, in a quick, accurate and reliable manner. In this paper, we discuss the purposes for developing the SJT, the mechanism of a fully automated test, and the test development processes, including item development and implementation.
Developing an Automated Test of Spoken Japanese
d235097259
d10971375
Taken abstractly, the two-level (Kimmo) morphological framework allows computationally difficult problems to arise. For example, N + 1 small automata are sufficient to encode the Boolean satisfiability problem (SAT) for formulas in N variables. However, the suspicion arises that natural-language problems may have a special structure, not shared with SAT, that is not directly captured in the two-level model. In particular, the natural problems may generally have a modular and local nature that distinguishes them from more "global" SAT problems. By exploiting this structure, it may be possible to solve the natural problems by methods that do not involve combinatorial search. We have explored this possibility in a preliminary way by applying constraint propagation methods to Kimmo generation and recognition. Constraint propagation can succeed when the solution falls into place step-by-step through a chain of limited and local inferences, but it is insufficiently powerful to solve unnaturally hard SAT problems. Limited tests indicate that the constraint-propagation algorithm for Kimmo generation works for English, Turkish, and Warlpiri. When applied to a Kimmo system that encodes SAT problems, the algorithm succeeds on "easy" SAT problems but fails (as desired) on "hard" problems.
CONSTRAINT PROPAGATION IN KIMMO SYSTEMS
d805412
The paper presents an architecture for connecting annotated linguistic data with a computational grammar system. Pivotal to the architecture is an annotational interlingua -called the Construction Labeling system (CL) -which is notationally very simple, descriptively finegrained, cross-typologically applicable, and formally well-defined enough to map to a state-of-the-art computational model of grammar. In the present instantiation of the architecture, the computational grammar is an HPSG-based system called TypeGram. Underlying the architecture is a research program of enhancing the interconnectivity between linguistic analytic subsystems such as grammar formalisms and text annotation systems.
From Descriptive Annotation to Grammar Specification
d16246188
We propose to use Graph Rewriting for parsing syntactic dependencies. We present a system of rewriting rules dedicated to French and we evaluate it by parsing the SEQUOIA corpus.
Dependency Parsing with Graph Rewriting
d10721657
Do distributional word representations encode the linguistic regularities that theories of meaning argue they should encode? We address this question in the case of the logical properties (monotonicity, force) of quantificational words such as everything (in the object domain) and always (in the time domain). Using the vector offset approach to solving word analogies, we find that the skip-gram model of distributional semantics behaves in a way that is remarkably consistent with encoding these features in some domains, with accuracy approaching 100%, especially with medium-sized context windows. Accuracy in other domains was less impressive. We compare the performance of the model to the behavior of human participants, and find that humans performed well even where the models struggled.
Quantificational features in distributional word representations
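A hedged illustration of the vector-offset analogy method named in the abstract above, using gensim over pre-trained skip-gram vectors. The embedding file path is a placeholder, and the word triple is only one example of transferring the universal/existential contrast across domains, not the paper's evaluation setup.

```python
from gensim.models import KeyedVectors

# Assumed pre-trained skip-gram vectors; "skipgram.bin" is a placeholder path.
kv = KeyedVectors.load_word2vec_format("skipgram.bin", binary=True)

# Vector offset: everything - something + always, i.e. carry the
# universal/existential contrast from the object domain to the time domain
# and inspect the nearest neighbours of the resulting vector.
print(kv.most_similar(positive=["always", "something"],
                      negative=["everything"], topn=5))
```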
d13981343
We describe an original method that automatically finds specific topics in a large collection of texts. Each topic is first identified as a specific cluster of texts and then represented as a virtual concept, which is a weighted mixture of words. Our intention is to employ these virtual concepts in document indexing. In this paper we show some preliminary experimental results and discuss directions of future work.
Searching for Topics in a Large Collection of Texts
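A minimal sketch of the cluster-then-describe idea from the abstract above: documents are clustered, and each cluster is summarized as a weighted mixture of its highest-weighted words (a "virtual concept"). TF-IDF and k-means are stand-ins chosen for illustration; all names and parameters are assumptions, not the authors' system.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def virtual_concepts(texts, n_topics=5, top_words=10):
    """Cluster documents and represent each cluster as a weighted mixture
    of its highest-weighted words."""
    vec = TfidfVectorizer(stop_words="english")
    X = vec.fit_transform(texts)
    km = KMeans(n_clusters=n_topics, n_init=10, random_state=0).fit(X)
    terms = vec.get_feature_names_out()
    concepts = []
    for centre in km.cluster_centers_:
        top = centre.argsort()[::-1][:top_words]
        concepts.append([(terms[i], float(centre[i])) for i in top])
    return concepts

# Usage with a hypothetical text collection:
# for concept in virtual_concepts(corpus_texts):
#     print(concept)
```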
d237273857
Online user comments in public forums are often associated with low quality, hate speech, or even excessive demands for moderation. To better exploit their constructive and deliberative potential, we present forumBERT. forumBERT is built on top of the BERT architecture and uses a shared-weight and late-fusion technique to better determine the quality and relevance of a comment on a forum article. Our model integrates article context with comments for the online/offline comment moderation task. This is done using a two-step procedure: self-supervised BERT language model fine-tuning for topic adaptation, followed by integration into the forumBERT architecture for online/offline classification. We present evaluation results on various classification tasks of the public One Million Posts dataset, as well as on the online/offline comment moderation task on 998,158 labelled comments from NDR.de, a popular German broadcaster's website. forumBERT significantly outperforms baseline models on the NDR dataset and also outperforms all existing advanced baseline models on the OMP dataset. Additionally, we conduct two studies on the influence of topic adaptation on the general comment moderation task.
forumBERT: Topic Adaptation and Classification of Contextualized Forum Comments in German
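A minimal sketch of a shared-weight, late-fusion classifier in the spirit of the description above: one encoder (shared weights) processes the article and the comment separately, and the pooled representations are concatenated before a linear classifier. The model name, CLS pooling, and two-label head are assumptions for illustration, not the paper's exact architecture, and running it requires downloading pretrained weights.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class SharedWeightLateFusion(nn.Module):
    """One shared BERT encoder; article and comment are encoded separately
    and fused late by concatenating their [CLS] representations."""
    def __init__(self, model_name="bert-base-german-cased", num_labels=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)   # shared weights
        hidden = self.encoder.config.hidden_size
        self.classifier = nn.Linear(2 * hidden, num_labels)

    def encode(self, inputs):
        return self.encoder(**inputs).last_hidden_state[:, 0]  # [CLS] vector

    def forward(self, article_inputs, comment_inputs):
        fused = torch.cat([self.encode(article_inputs),
                           self.encode(comment_inputs)], dim=-1)
        return self.classifier(fused)

tokenizer = AutoTokenizer.from_pretrained("bert-base-german-cased")
model = SharedWeightLateFusion()
article = tokenizer("Artikeltext ...", return_tensors="pt", truncation=True)
comment = tokenizer("Kommentartext ...", return_tensors="pt", truncation=True)
logits = model(article, comment)   # shape (1, num_labels): online vs. offline
```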
d17259174
The paper proposes and empirically motivates an integration of supervised learning with unsupervised learning to deal with human biases in summarization. In particular, we explore the use of a probabilistic decision tree within a clustering framework to account for the variation as well as the regularity in human-created summaries. The corpus of human-created extracts is built from a newspaper corpus and used as a test set. We build probabilistic decision trees of different flavors and integrate each of them with the clustering framework. Experiments with the corpus demonstrate that the mixture of the two paradigms generally gives a significant boost in performance compared to cases where either of the two is used alone.
Supervised Ranking in Open-Domain Text Summarization
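One possible, simplified reading of mixing the two paradigms is sketched below: sentences are clustered by their feature vectors, a probabilistic decision tree is trained on "included in a human extract" labels, and the top-scoring sentence per cluster is selected. Features, labels, and the selection rule are all invented for illustration and do not reproduce the paper's setup.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.random((40, 5))                     # 40 sentences x 5 surface features
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)   # stand-in for human extract labels

clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
p_extract = tree.predict_proba(X)[:, 1]     # probability of being extract-worthy

# Pick the highest-scoring sentence from each cluster as the extract.
summary = [int(np.argmax(np.where(clusters == c, p_extract, -1))) for c in range(4)]
print(summary)
```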
d13295869
In this paper an efficient natural language processing system specially designed for the Chinese language is presented. The core of the system is a bottom-up chart parser with head-driven operation; i.e., phrases are built up by starting with their heads and adjoining constituents to the left or right of the heads instead of strictly from left to right. In this way many unnecessary search actions can be effectively eliminated. The system also includes several efficiency-oriented techniques: a direction-selective chart to simplify the control of the head-driven operation, a heuristic scheduling policy and a bidirectional look-ahead approach to eliminate many unnecessary search actions, and an improved raise-bind mechanism combined with check rules to handle the difficult problems of movement transformations and empty categories and to simplify the design of grammar rules. The design is based on careful consideration of some special syntactic phenomena of the Chinese language, such as head-final and head-initial structures and empty categories. A prototype of the system has been implemented and extensive experiments have been performed. The test results show a significant improvement in efficiency when processing many very complicated Chinese sentences. The various approaches, the overall system design, and the experimental results are all discussed in detail in this paper.
An Efficient Natural Language Processing System Specially Designed for the Chinese Language
d15572904
This paper describes the creation of a resource of German sentences with multiple automatically created alternative syntactic analyses (parses) for the same text, and how qualitative and quantitative investigations of this resource can be performed using ANNIS, a tool for corpus querying and visualization. Using the example of PP attachment, we show how parsing can benefit from the use of such a resource.
Creating and Exploiting a Resource of Parallel Parses
d8736393
It is shown that basic language processes such as the production of free word associations and the generation of synonyms can be simulated using statistical models that analyze the distribution of words in large text corpora. According to the law of association by contiguity, the acquisition of word associations can be explained by Hebbian learning. The free word associations as produced by subjects on presentation of single stimulus words can thus be predicted by applying first-order statistics to the frequencies of word co-occurrences as observed in texts. The generation of synonyms can also be based on co-occurrence data but requires second-order statistics. The reason is that synonyms rarely occur together but appear in similar lexical neighborhoods. Both approaches are systematically compared and are validated on empirical data. It turns out that for both tasks the performance of the statistical system is comparable to the performance of human subjects.
The Computation of Word Associations: Comparing Syntagmatic and Paradigmatic Approaches
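The contrast between first-order and second-order statistics described above can be made concrete in a few lines: associates are words that co-occur directly, whereas synonym-like pairs have similar co-occurrence profiles even when they never co-occur. The toy corpus below is an assumption for illustration only.

```python
import math
from collections import Counter, defaultdict
from itertools import combinations

# Toy corpus; in practice this would be a large text collection.
sentences = [
    "the doctor treats the patient".split(),
    "the nurse treats the patient".split(),
    "the doctor reads the chart".split(),
    "the nurse reads the chart".split(),
]

# First-order statistics: raw co-occurrence counts within a sentence,
# a rough predictor of free associates (words that appear together).
cooc = defaultdict(Counter)
for sent in sentences:
    for w1, w2 in combinations(set(sent), 2):
        cooc[w1][w2] += 1
        cooc[w2][w1] += 1

print("associates of 'doctor':", cooc["doctor"].most_common(3))

# Second-order statistics: words are synonym-like when their co-occurrence
# *profiles* are similar, even though they may never co-occur themselves.
def cosine(c1, c2):
    num = sum(c1[w] * c2[w] for w in set(c1) & set(c2))
    den = math.sqrt(sum(v * v for v in c1.values())) * math.sqrt(sum(v * v for v in c2.values()))
    return num / den if den else 0.0

print("doctor ~ nurse:", round(cosine(cooc["doctor"], cooc["nurse"]), 2))
```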
d192232000
d67063750
Factorization of Verbs: An Analysis of Verbs of Seeing
d219300739
d32869854
PROJECT GOALS
The primary objective of this basic research program is to develop robust methods and models for speaker-independent acoustic recognition of spontaneously-produced, continuous speech. The work has focused on developing accurate and detailed models of phonemes and their coarticulation for the purpose of large-vocabulary continuous speech recognition. Important goals of this work are to achieve the highest possible word recognition accuracy in continuous speech and to develop methods for the rapid adaptation of phonetic models to the voice of a new speaker.

RECENT RESULTS
Ported the BYBLOS system to the Wall Street Journal (WSJ) corpus. We found that the techniques that we had developed for recognition of the ATIS corpus worked quite well without modification on the WSJ corpus. Performed several key experiments on the WSJ corpus. We verified our conjecture that a speaker-independent system trained on a small number of speakers has about the same word error rate as a system trained on a large number of speakers, assuming the same total amount of training speech. This is the first time that this result has been obtained in a well-controlled way for large-vocabulary speech recognition. We also verified that training the system separately on each of the speakers and averaging the resulting models results in essentially the same performance as training on all of the data at once. These results have wide-ranging implications for data collection and system design. We have shown that, for large-vocabulary recognition, a speaker-independent system will have about the same error rate as a speaker-dependent system when the speaker-independent system is trained on about 15 times as much speech as the corresponding speaker-dependent system. We showed that a simple blind deconvolution method for microphone independence, in which the mean cepstrum is subtracted from each cepstrum vector, is somewhat better than the RASTA method. Developed a new algorithm for microphone independence which uses a codebook transformation based on selection among several known microphones. The algorithm reduced the word error rate for unknown microphones by 20% over using blind deconvolution alone. In the Nov. 1992 speech recognition test on the ATIS domain, our BYBLOS system continued to give the best results of all sites tested, with a 30% reduction in word error over last year. In our first test on the WSJ corpus, our system had the second-lowest error rates. Chaired the CSR Corpus Coordinating Committee.

PLANS FOR THE COMING YEAR
For the coming year, we plan to continue our work on improving speech recognition performance both on the Wall Street Journal corpus and on the spontaneous ATIS speech corpus. We plan to explore different parameterizations of the speech signal and new models for microphone and speaker adaptation.
ROBUST CONTINUOUS SPEECH RECOGNITION
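The blind deconvolution method mentioned above (cepstral mean subtraction) has a very compact core: a fixed channel or microphone effect is approximately additive in the cepstral domain, so subtracting the per-utterance mean cepstrum removes it. The sketch below uses random placeholder cepstra purely to demonstrate the identity; it is not the BYBLOS implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
cepstra = rng.normal(size=(300, 13))           # 300 frames x 13 cepstral coefficients
channel = rng.normal(size=13)                  # fixed microphone/channel effect
observed = cepstra + channel                   # convolution becomes addition in cepstrum

# Cepstral mean subtraction: subtract the mean cepstrum from every frame.
normalized = observed - observed.mean(axis=0)

# The fixed channel term cancels, up to the signal's own mean.
print(np.allclose(normalized, cepstra - cepstra.mean(axis=0)))  # True
```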
d13043763
Recent studies of distributional semantic models have set up a competition between word embeddings obtained from predictive neural networks and word vectors obtained from count-based models. This paper is an attempt to reveal the underlying contribution of additional training data and post-processing steps to each type of model in word similarity and relatedness inference tasks. We do so by designing an artificial language, training a predictive and a count-based model on data sampled from this grammar, and evaluating the resulting word vectors in paradigmatic and syntagmatic tasks defined with respect to the grammar.
An Artificial Language Evaluation of Distributional Semantic Models
d235097520
Spoken language understanding (SLU) requires a model to analyze an input acoustic signal to understand its linguistic content and make predictions. To boost the models' performance, various pre-training methods have been proposed to learn rich representations from large-scale unannotated speech and text. However, the inherent disparities between the two modalities necessitate a mutual analysis. In this paper, we propose a novel semi-supervised learning framework, SPLAT, to jointly pre-train the speech and language modules. Besides conducting a self-supervised masked language modeling task on the two individual modules using unpaired speech and text, SPLAT aligns representations from the two modules in a shared latent space using a small amount of paired speech and text. Thus, during fine-tuning, the speech module alone can produce representations carrying both acoustic information and the contextual semantic knowledge of an input acoustic signal. Experimental results verify the effectiveness of our approach on various SLU tasks. For example, SPLAT improves the previous state-of-the-art performance on the Spoken SQuAD dataset by more than 10%.
SPLAT: Speech-Language Joint Pre-Training for Spoken Language Understanding
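A toy illustration of the alignment idea described above: project pooled speech and text representations into a shared latent space and pull paired examples together. The feature dimensions, mean pooling, and the simple L2 alignment loss are simplifications assumed for illustration; they are not SPLAT's actual objective or architecture.

```python
import torch
import torch.nn.functional as F

speech_feats = torch.randn(8, 400, 80)   # batch of log-mel-like speech features
text_embeds = torch.randn(8, 32, 768)    # batch of token embeddings (paired data)

speech_proj = torch.nn.Linear(80, 256)   # project both modalities into a
text_proj = torch.nn.Linear(768, 256)    # shared 256-dimensional latent space

speech_latent = speech_proj(speech_feats).mean(dim=1)  # pool over time frames
text_latent = text_proj(text_embeds).mean(dim=1)       # pool over tokens

align_loss = F.mse_loss(speech_latent, text_latent)    # pull paired examples together
align_loss.backward()
```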
d219299828
d232021784
d31880211
Leximancer is a software system for performing conceptual analysis of text data in a largely language-independent manner. The system is modelled on Content Analysis and provides unsupervised and supervised analysis using seeded concept classifiers. Unsupervised ontology discovery is a key component.
Automatic Extraction of Semantic Networks from Text using Leximancer