Columns: text (string, lengths 17 to 3.36M) · source (string, lengths 3 to 333) · __index_level_0__ (int64, 0 to 518k)
Motivation: Entropy measurements on hierarchical structures have been used in methods for information retrieval and natural language modeling. Here we explore their application to semantic similarity. By finding shared ontology terms, semantic similarity can be established between annotated genes. A common procedure for establishing semantic similarity is to calculate the descriptiveness (information content) of ontology terms and use these values to determine the similarity of annotations. Most often, information content is calculated for an ontology term by analyzing its frequency in an annotation corpus. The inherent problems in using these values to model functional similarity motivate our work. Summary: We present a novel calculation for establishing the entropy of a DAG-based ontology, which can be used in an alternative method for establishing the information content of its terms. We also compare our IC metric to two others using semantic and sequence similarity.
Using Entropy Estimates for DAG-Based Ontologies
1,700
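For contrast with the entropy-based alternative proposed in "Using Entropy Estimates for DAG-Based Ontologies", the conventional corpus-frequency information content it argues against is typically IC(t) = -log2 p(t), where p(t) is the annotation frequency of a term together with all of its descendants. A minimal Python sketch on an invented toy DAG with invented counts:

import math
from collections import defaultdict

# Hypothetical toy ontology DAG (child -> parents) and direct annotation counts.
parents = {"t_leaf": ["t_mid"], "t_mid": ["root"], "root": []}
direct_counts = {"t_leaf": 5, "t_mid": 3, "root": 2}

def cumulative_counts(parents, direct_counts):
    """Propagate each term's annotation count up to all of its ancestors."""
    total = defaultdict(int)
    for term, count in direct_counts.items():
        stack, seen = [term], set()
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            total[node] += count
            stack.extend(parents.get(node, []))
    return total

counts = cumulative_counts(parents, direct_counts)
root_total = counts["root"]
ic = {t: -math.log2(c / root_total) for t, c in counts.items()}
print(ic)  # the root gets IC 0; rarer (more descriptive) terms get higher IC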
We describe the Clinical TempEval task which is currently in preparation for the SemEval-2015 evaluation exercise. This task involves identifying and describing events, times and the relations between them in clinical text. Six discrete subtasks are included, focusing on recognising mentions of times and events, describing those mentions for both entity types, identifying the relation between an event and the document creation time, and identifying narrative container relations.
Clinical TempEval
1,701
Natural language processing is a prominent research area across the country. Parsing is one of the crucial tools in a language analysis system; it aims to predict the structural relationships among the words in a given sentence. Many researchers have already developed language tools, but their accuracy does not yet meet human expectations, so research continues. Machine translation is one of the major application areas of Natural Language Processing. When translating from one language to another, identifying the structure of a sentence plays a key role. This paper introduces a hybrid way to identify the relationships among the words in a sentence. The existing system is implemented using a rule-based approach, which is not suited to large amounts of data. Machine learning approaches are suitable for handling larger amounts of data and for obtaining better accuracy by learning and training the system. The proposed approach takes a Tamil sentence as input and produces the dependency relations as a tree-like structure using a hybrid approach. The proposed tool is very helpful for researchers and acts as an add-on that improves the quality of existing approaches.
An efficiency dependency parser using hybrid approach for tamil language
1,702
In this paper, we present the implementation of an automatic Sign Language (SL) sign annotation framework based on a formal logic, the Propositional Dynamic Logic (PDL). Our system relies heavily on a specific variant of PDL, the Propositional Dynamic Logic for Sign Language (PDLSL), which lets us describe SL signs as formulae and corpus videos as labeled transition systems (LTSs). Here, we intend to show how a generic annotation system can be constructed upon these underlying theoretical principles, regardless of the tracking technologies available or the input format of the corpora. With this in mind, we generated a development framework that adapts the system to specific use cases. Furthermore, we present some results obtained by our application when adapted to one distinct case, 2D corpus analysis with pre-processed tracking information. We also present some insights on how such a technology can be used to analyze 3D real-time data captured with a depth device.
Implementation of an Automatic Sign Language Lexical Annotation Framework based on Propositional Dynamic Logic
1,703
This paper explores the use of Propositional Dynamic Logic (PDL) as a suitable formal framework for describing Sign Language (SL), the language of deaf people, in the context of natural language processing. SLs are visual, complete, standalone languages which are just as expressive as oral languages. Signs in SL usually correspond to sequences of highly specific body postures interleaved with movements, which make reference to real world objects, characters or situations. Here we propose a formal representation of SL signs, that will help us with the analysis of automatically-collected hand tracking data from French Sign Language (FSL) video corpora. We further show how such a representation could help us with the design of computer aided SL verification tools, which in turn would bring us closer to the development of an automatic recognition system for these languages.
Sign Language Lexical Recognition With Propositional Dynamic Logic
1,704
Machine translation (MT) research in Indian languages is still in its infancy. Not much work has been done on the proper transliteration of named entities in this domain. In this paper we address this issue. We have used the English-Hindi language pair for our experiments and have used a hybrid approach. At first we process English words using a rule-based approach which extracts individual phonemes from the words; then we apply a statistical approach which converts the English phonemes into their equivalent Hindi phonemes and, in turn, the corresponding Hindi words. Through this approach we have attained 83.40% accuracy.
Hybrid Approach to English-Hindi Name Entity Transliteration
1,705
Evaluation plays a crucial role in the development of machine translation systems. In order to judge the quality of an existing MT system, i.e. whether the translated output is of human translation quality or not, various automatic metrics exist. Here we present the implementation results of different metrics when used on the Hindi language, along with their comparison, illustrating how effective these metrics are on free-word-order languages like Hindi.
Evaluation and Ranking of Machine Translated Output in Hindi Language using Precision and Recall Oriented Metrics
1,706
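A minimal sketch of the kind of precision- and recall-oriented scoring referred to in "Evaluation and Ranking of Machine Translated Output in Hindi Language using Precision and Recall Oriented Metrics": clipped unigram precision, recall and F-measure of a candidate translation against one reference. The sentence pair is an invented example, not data from the paper.

from collections import Counter

def unigram_prf(candidate_tokens, reference_tokens):
    """Clipped unigram precision, recall and F-measure against one reference."""
    cand, ref = Counter(candidate_tokens), Counter(reference_tokens)
    overlap = sum(min(c, ref[w]) for w, c in cand.items())
    precision = overlap / max(sum(cand.values()), 1)
    recall = overlap / max(sum(ref.values()), 1)
    f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f

# Invented candidate/reference pair for illustration only.
print(unigram_prf("the boy goes to school".split(),
                  "the boy is going to school".split()))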
This article reports the evaluation of the integration of data from a syntactic-semantic lexicon, the Lexicon-Grammar of French, into a syntactic parser. We show that by changing the set of labels for verbs and predicational nouns, we can improve the performance on French of a non-lexicalized probabilistic parser.
Intégration des données d'un lexique syntaxique dans un analyseur syntaxique probabiliste
1,707
We present the creation of an English-Swedish FrameNet-based grammar in Grammatical Framework. The aim of this research is to make existing framenets computationally accessible for multilingual natural language applications via a common semantic grammar API, and to facilitate the porting of such a grammar to other languages. In this paper, we describe the abstract syntax of the semantic grammar while focusing on its automatic extraction possibilities. We have extracted a shared abstract syntax from ~58,500 annotated sentences in Berkeley FrameNet (BFN) and ~3,500 annotated sentences in Swedish FrameNet (SweFN). The abstract syntax defines 769 frame-specific valence patterns that cover 77.8% of the examples in BFN and 74.9% in SweFN belonging to the shared set of 471 frames. As a side result, we provide a unified method for comparing semantic and syntactic valence patterns across framenets.
Extracting a bilingual semantic grammar from FrameNet-annotated corpora
1,708
The ability to accurately represent sentences is central to language understanding. We describe a convolutional architecture dubbed the Dynamic Convolutional Neural Network (DCNN) that we adopt for the semantic modelling of sentences. The network uses Dynamic k-Max Pooling, a global pooling operation over linear sequences. The network handles input sentences of varying length and induces a feature graph over the sentence that is capable of explicitly capturing short and long-range relations. The network does not rely on a parse tree and is easily applicable to any language. We test the DCNN in four experiments: small scale binary and multi-class sentiment prediction, six-way question classification and Twitter sentiment prediction by distant supervision. The network achieves excellent performance in the first three tasks and a greater than 25% error reduction in the last task with respect to the strongest baseline.
A Convolutional Neural Network for Modelling Sentences
1,709
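A small sketch of the (dynamic) k-max pooling operation named in "A Convolutional Neural Network for Modelling Sentences": the k largest activations of a feature sequence are kept in their original order, with k shrinking as a function of sentence length and layer depth. The schedule below follows that general idea but its constants are illustrative, not the paper's exact formula.

def k_max_pooling(values, k):
    """Keep the k largest values of a 1-D feature sequence, preserving order."""
    if len(values) <= k:
        return list(values)
    keep = sorted(sorted(range(len(values)), key=lambda i: values[i])[-k:])
    return [values[i] for i in keep]

def dynamic_k(sentence_len, layer, total_layers, k_top):
    """k shrinks with depth, never below k_top (illustrative schedule)."""
    return max(k_top, int(round((total_layers - layer) / total_layers * sentence_len)))

feature_map = [0.1, 0.9, 0.3, 0.7, 0.2, 0.8]
k = dynamic_k(sentence_len=len(feature_map), layer=1, total_layers=3, k_top=3)
print(k, k_max_pooling(feature_map, k))  # 4 of the 6 activations survive layer 1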
Stemming is a pre-processing step in Text Mining applications as well as a very common requirement of Natural Language Processing functions. Stemming is the process of reducing inflected words to their stem. The main purpose of stemming is to reduce the different grammatical forms / word forms of a word, such as its noun, adjective, verb and adverb forms, to its root form. Stemming is widely used in Information Retrieval systems and reduces the size of index files. We can say that the goal of stemming is to reduce inflectional forms and sometimes derivationally related forms of a word to a common base form. In this paper we discuss different stemming algorithms for Indian and non-Indian languages, methods of stemming, their accuracy and their errors.
Overview of Stemming Algorithms for Indian and Non-Indian Languages
1,710
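A toy illustration of the suffix-stripping idea behind the rule-based stemmers surveyed in "Overview of Stemming Algorithms for Indian and Non-Indian Languages"; the suffix list is an invented English example rather than any of the surveyed rule sets.

# Toy suffix-stripping stemmer: strip the longest matching suffix,
# subject to a minimum stem length (a simplified Porter-style rule).
SUFFIXES = ["ingly", "edly", "ing", "ed", "ly", "es", "s"]

def strip_suffix(word, min_stem=3):
    for suffix in sorted(SUFFIXES, key=len, reverse=True):
        if word.endswith(suffix) and len(word) - len(suffix) >= min_stem:
            return word[: -len(suffix)]
    return word

for w in ["connected", "connecting", "connections", "connection"]:
    print(w, "->", strip_suffix(w))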
We introduce a novel approach for building language models based on a systematic, recursive exploration of skip n-gram models which are interpolated using modified Kneser-Ney smoothing. Our approach generalizes language models, as it contains the classical interpolation with lower-order models as a special case. In this paper we motivate, formalize and present our approach. In an extensive empirical experiment over English text corpora we demonstrate that our generalized language models lead to a substantial reduction of perplexity, between 3.1% and 12.7%, in comparison to traditional language models using modified Kneser-Ney smoothing. Furthermore, we investigate the behaviour over three other languages and a domain-specific corpus, where we observe consistent improvements. Finally, we also show that the strength of our approach lies in its ability to cope in particular with sparse training data. Using a very small training data set of only 736 KB of text, we achieve a perplexity reduction of as much as 25.7%.
A Generalized Language Model as the Combination of Skipped n-grams and Modified Kneser-Ney Smoothing
1,711
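A sketch of the skipped n-grams underlying "A Generalized Language Model as the Combination of Skipped n-grams and Modified Kneser-Ney Smoothing": ordinary n-grams are generalized by allowing interior positions of the window to be skipped. The wildcard marker and example sentence are illustrative; the interpolation with modified Kneser-Ney smoothing is not reproduced here.

from itertools import combinations

def skip_ngrams(tokens, n, skip_marker="_"):
    """All n-grams over n-token windows where interior positions may be skipped."""
    patterns = set()
    interior = list(range(1, n - 1))          # first and last position stay fixed
    for start in range(len(tokens) - n + 1):
        window = tokens[start:start + n]
        for r in range(len(interior) + 1):
            for skipped in combinations(interior, r):
                patterns.add(tuple(skip_marker if i in skipped else window[i]
                                   for i in range(n)))
    return patterns

print(sorted(skip_ngrams("we read the long text".split(), 3)))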
Metrics for measuring the comparability of corpora or texts need to be developed and evaluated systematically. Applications based on a corpus, such as training Statistical MT systems in specialised narrow domains, require finding a reasonable balance between the size of the corpus and its consistency, with controlled and benchmarked levels of comparability for any newly added sections. In this article we propose a method that can meta-evaluate comparability metrics by calculating monolingual comparability scores separately on the 'source' and 'target' sides of parallel corpora. The range of scores on the source side is then correlated (using Pearson's r coefficient) with the range of 'target' scores; the higher the correlation, the more reliable the metric. The intuition is that a good metric should yield the same distance between different domains in different languages. Our method gives consistent results for the same metrics on different data sets, which indicates that it is reliable and can be used for metric comparison or for optimising the settings of parametrised metrics.
Meta-evaluation of comparability metrics using parallel corpora
1,712
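A sketch of the meta-evaluation step in "Meta-evaluation of comparability metrics using parallel corpora": the same comparability metric is applied to the source and target sides of a parallel corpus for several domain pairs, and the Pearson correlation between the two score ranges indicates the metric's reliability. The domain pairs and scores below are invented.

from statistics import mean, pstdev

def pearson_r(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = mean((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (pstdev(xs) * pstdev(ys))

# Invented comparability scores for the same domain pairs, source vs. target side.
source_scores = {"news-law": 0.62, "news-medical": 0.41, "law-medical": 0.37}
target_scores = {"news-law": 0.58, "news-medical": 0.44, "law-medical": 0.35}

domains = sorted(source_scores)
r = pearson_r([source_scores[d] for d in domains], [target_scores[d] for d in domains])
print(f"metric reliability (Pearson r) = {r:.3f}")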
Evaluation plays a vital role in checking the quality of MT output. It is done either manually or automatically. Manual evaluation is very time consuming and subjective, hence automatic metrics are used most of the time. This paper evaluates the translation quality of different MT engines for Hindi-English (Hindi data is provided as input and English is obtained as output) using various automatic metrics like BLEU, METEOR etc. Further, a comparison of the automatic evaluation results with human rankings is also given.
Assessing the Quality of MT Systems for Hindi to English Translation
1,713
Stanford typed dependencies are a widely desired representation of natural language sentences, but parsing is one of the major computational bottlenecks in text analysis systems. In light of the evolving definition of the Stanford dependencies and developments in statistical dependency parsing algorithms, this paper revisits the question of Cer et al. (2010): what is the tradeoff between accuracy and speed in obtaining Stanford dependencies in particular? We also explore the effects of input representations on this tradeoff: part-of-speech tags, the novel use of an alternative dependency representation as input, and distributional representations of words. We find that direct dependency parsing is a more viable solution than it was found to be in the past. An accompanying software release can be found at: http://www.ark.cs.cmu.edu/TBSD
An Empirical Comparison of Parsing Methods for Stanford Dependencies
1,714
In this article, we introduce the first parallel corpus of Persian with more than 10 other European languages. This article describes the primary steps toward preparing a Basic Language Resources Kit (BLARK) for Persian. Up to now, we have proposed a morphosyntactic specification of Persian based on the EAGLE/MULTEXT guidelines and the specific resources of MULTEXT-East. The article introduces the Persian language, with emphasis on its orthography and morphosyntactic features; then a new part-of-speech categorization and an orthography for Persian in digital environments are proposed. Finally, the corpus and related statistics are analyzed.
The First Parallel Multilingual Corpus of Persian: Toward a Persian BLARK
1,715
We present a novel technique for learning semantic representations, which extends the distributional hypothesis to multilingual data and joint-space embeddings. Our models leverage parallel data and learn to strongly align the embeddings of semantically equivalent sentences, while maintaining sufficient distance between those of dissimilar sentences. The models do not rely on word alignments or any syntactic information and are successfully applied to a number of diverse languages. We extend our approach to learn semantic representations at the document level, too. We evaluate these models on two cross-lingual document classification tasks, outperforming the prior state of the art. Through qualitative analysis and the study of pivoting effects we demonstrate that our representations are semantically plausible and can capture semantic relationships across languages without parallel data.
Multilingual Models for Compositional Distributed Semantics
1,716
We present a method to leverage radicals for learning Chinese character embeddings. A radical is a semantic and phonetic component of a Chinese character. It plays an important role, as characters with the same radical usually have similar semantic meaning and grammatical usage. However, existing Chinese processing algorithms typically regard the word or character as the basic unit and ignore the crucial radical information. In this paper, we fill this gap by leveraging radicals for learning continuous representations of Chinese characters. We develop a dedicated neural architecture to effectively learn character embeddings and apply it to Chinese character similarity judgement and Chinese word segmentation. Experimental results show that our radical-enhanced method outperforms existing embedding learning algorithms on both tasks.
Radical-Enhanced Chinese Character Embedding
1,717
Farsi, also known as Persian, is the official language of Iran and Tajikistan and one of the two main languages spoken in Afghanistan. Farsi uses a unified Arabic script as its writing system. In this paper we briefly introduce the writing standards of Farsi and highlight problems one would face when analyzing Farsi electronic texts, especially during the development of Farsi corpora, regarding the transcription and encoding of Farsi e-texts. The points mentioned may sound easy, but they are crucial when developing and processing written corpora of Farsi.
Challenges in Persian Electronic Text Analysis
1,718
This paper develops a compositional vector-based semantics of subject and object relative pronouns within a categorical framework. Frobenius algebras are used to formalise the operations required to model the semantics of relative pronouns, including passing information between the relative clause and the modified noun phrase, as well as copying, combining, and discarding parts of the relative clause. We develop two instantiations of the abstract semantics, one based on a truth-theoretic approach and one based on corpus statistics.
The Frobenius anatomy of word meanings I: subject and object relative pronouns
1,719
In this work we present a morphological analysis of the Bishnupriya Manipuri language, an Indo-Aryan language spoken in north-eastern India. As of now, there is no computational work available for the language. Finite state morphology is one of the successful approaches applied to a wide variety of languages over the years. We therefore adapted the finite state approach to analyse the morphology of the Bishnupriya Manipuri language.
Morphological Analysis of the Bishnupriya Manipuri Language using Finite State Transducers
1,720
Most state-of-the-art approaches for named-entity recognition (NER) use semi-supervised information in the form of word clusters and lexicons. Recently, neural network-based language models have been explored, as they generate, as a byproduct, highly informative vector representations for words, known as word embeddings. In this paper we present two contributions: a new form of learning word embeddings that can leverage information from relevant lexicons to improve the representations, and the first system to use neural word embeddings to achieve state-of-the-art results on named-entity recognition on both the CoNLL and OntoNotes NER data sets. Our system achieves an F1 score of 90.90 on the test set for CoNLL 2003---significantly better than any previous system trained on public data, and matching a system employing massive private industrial query-log data.
Lexicon Infused Phrase Embeddings for Named Entity Resolution
1,721
Linguists and psychologists have long been studying cross-linguistic transfer, the influence of native language properties on linguistic performance in a foreign language. In this work we provide empirical evidence for this process in the form of a strong correlation between language similarities derived from structural features in English as Second Language (ESL) texts and equivalent similarities obtained from the typological features of the native languages. We leverage this finding to recover native language typological similarity structure directly from ESL text, and perform prediction of typological features in an unsupervised fashion with respect to the target languages. Our method achieves 72.2% accuracy on the typology prediction task, a result that is highly competitive with equivalent methods that rely on typological resources.
Reconstructing Native Language Typology from Foreign Language Usage
1,722
Many successful approaches to semantic parsing build on top of the syntactic analysis of text, and make use of distributional representations or statistical models to match parses to ontology-specific queries. This paper presents a novel deep learning architecture which provides a semantic parsing system through the union of two neural models of language semantics. It allows for the generation of ontology-specific queries from natural language statements and questions without the need for parsing, which makes it especially suitable to grammatically malformed or syntactically atypical text, such as tweets, as well as permitting the development of semantic parsers for resource-poor languages.
A Deep Architecture for Semantic Parsing
1,723
We describe a contextual parser for the Robot Commands Treebank, a new crowdsourced resource. In contrast to previous semantic parsers that select the most-probable parse, we consider the different problem of parsing using additional situational context to disambiguate between different readings of a sentence. We show that multiple semantic analyses can be searched using dynamic programming via interaction with a spatial planner, to guide the parsing process. We are able to parse sentences in near linear-time by ruling out analyses early on that are incompatible with spatial context. We report a 34% upper bound on accuracy, as our planner correctly processes spatial context for 3,394 out of 10,000 sentences. However, our parser achieves a 96.53% exact-match score for parsing within the subset of sentences recognized by the planner, compared to 82.14% for a non-contextual parser.
Contextual Semantic Parsing using Crowdsourced Spatial Descriptions
1,724
We present an approach to the extraction of family relations from literary narrative, which incorporates a technique for utterance attribution proposed recently by Elson and McKeown (2010). In our work this technique is used in combination with the detection of vocatives - the explicit forms of address used by the characters in a novel. We take advantage of the fact that certain vocatives indicate family relations between speakers. The extracted relations are then propagated using a set of rules. We report the results of the application of our method to Jane Austen's Pride and Prejudice.
Extracting Family Relationship Networks from Novels
1,725
Named Entities (NEs) are often written with no orthographic changes across different languages that share a common alphabet. We show that this can be leveraged so as to improve named entity recognition (NER) by using unsupervised word clusters from secondary languages as features in state-of-the-art discriminative NER systems. We observe significant increases in performance, finding that person and location identification is particularly improved, and that phylogenetically close languages provide more valuable features than more distant languages.
"Translation can't change a name": Using Multilingual Data for Named Entity Recognition
1,726
We present a probabilistic model that simultaneously learns alignments and distributed representations for bilingual data. By marginalizing over word alignments the model captures a larger semantic context than prior work relying on hard alignments. The advantage of this approach is demonstrated in a cross-lingual classification task, where we outperform the prior published state of the art.
Learning Bilingual Word Representations by Marginalizing Alignments
1,727
It is now widely recognized that ontologies are one of the fundamental cornerstones of knowledge-based systems. What is lacking, however, is a commonly accepted strategy for how to build an ontology: what kinds of resources and techniques are indispensable to optimize the expense and time on the one hand, and the breadth, completeness and robustness of an ontology on the other. The paper offers a semi-automatic ontology construction method from text corpora in the domain of radiological protection. This method is composed of the following steps: 1) text annotation with part-of-speech tags; 2) identification of significant linguistic structures and formation of templates; 3) search for text fragments corresponding to these templates; 4) basic ontology instantiation.
Automatic Method Of Domain Ontology Construction based on Characteristics of Corpora POS-Analysis
1,728
Conjuring up our thoughts, language reflects statistical patterns of word co-occurrences which in turn come to describe how we perceive the world. Whether counting how frequently nouns and verbs combine in Google search queries, or extracting eigenvectors from term-document matrices made up of Wikipedia lines and Shakespeare plots, the resulting latent semantics capture not only the associative links which form concepts, but also spatial dimensions embedded within the surface structure of language. As both the shape and movements of objects have been found to be associated with phonetic contrasts already in toddlers, this study explores whether articulatory and acoustic parameters may likewise differentiate the latent semantics of action verbs. Selecting 3 x 20 emotion, face, and hand related verbs known to activate premotor areas in the brain, their mutual cosine similarities were computed using latent semantic analysis (LSA), and the resulting adjacency matrices were compared based on two different large scale text corpora: HAWIK and TASA. Applying hierarchical clustering to identify common structures across the two text corpora, the verbs largely divide into combined mouth and hand movements versus emotional expressions. Transforming the verbs into their constituent phonemes, the clustered small and large size movements appear differentiated by front versus back vowels corresponding to increasing levels of arousal, whereas the clustered emotional verbs seem characterized by sequences of close versus open jaw produced phonemes, generating up- or downwards shifts in formant frequencies that may influence their perceived valence. This suggests that the latent semantics of action verbs reflect parameters of intensity and emotional polarity that appear correlated with the articulatory contrasts and acoustic characteristics of phonemes.
Latent semantics of action verbs reflect phonetic parameters of intensity and emotional content
1,729
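A minimal sketch of the first computational step in "Latent semantics of action verbs reflect phonetic parameters of intensity and emotional content": building a cosine-similarity adjacency matrix over verb vectors, to which hierarchical clustering can then be applied. The three-dimensional vectors stand in for actual LSA vectors derived from a corpus.

import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

# Stand-in LSA vectors for a few action verbs (real ones come from a term-document SVD).
vectors = {
    "grasp": [0.8, 0.1, 0.2],
    "point": [0.7, 0.2, 0.1],
    "smile": [0.1, 0.9, 0.3],
    "frown": [0.2, 0.8, 0.4],
}

verbs = sorted(vectors)
adjacency = {(a, b): cosine(vectors[a], vectors[b]) for a in verbs for b in verbs}
for a in verbs:
    print(a, [round(adjacency[(a, b)], 2) for b in verbs])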
Word sense disambiguation (WSD) is a problem in the field of computational linguistics: finding the intended sense of a word (or a set of words) when it is activated within a certain context. WSD was recently addressed as a combinatorial optimization problem in which the goal is to find a sequence of senses that maximizes the semantic relatedness among the target words. In this article, a novel algorithm for solving the WSD problem called D-Bees is proposed, inspired by bee colony optimization (BCO), where artificial bee agents collaborate to solve the problem. The D-Bees algorithm is evaluated on a standard dataset (the SemEval 2007 coarse-grained English all-words task corpus) and compared to simulated annealing, genetic algorithms, and two ant colony optimization (ACO) techniques. We observe that the BCO and ACO approaches are on par.
D-Bees: A Novel Method Inspired by Bee Colony Optimization for Solving Word Sense Disambiguation
1,730
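The D-Bees entry above casts WSD as finding the sense assignment that maximizes total pairwise semantic relatedness. The sketch below is a simplified random-restart local search over that objective, not the D-Bees algorithm itself; the sense inventory and relatedness table are toy data.

import random

# Toy sense inventory and a symmetric pairwise relatedness table.
senses = {"bank": ["bank#finance", "bank#river"],
          "deposit": ["deposit#money", "deposit#sediment"]}
related = {
    frozenset(["bank#finance", "deposit#money"]): 0.9,
    frozenset(["bank#river", "deposit#sediment"]): 0.8,
}

def score(assignment):
    chosen = list(assignment.values())
    return sum(related.get(frozenset([a, b]), 0.0)
               for i, a in enumerate(chosen) for b in chosen[i + 1:])

def local_search(words, restarts=20, steps=50):
    best, best_score = None, -1.0
    for _ in range(restarts):
        current = {w: random.choice(senses[w]) for w in words}
        for _ in range(steps):
            w = random.choice(words)
            trial = dict(current, **{w: random.choice(senses[w])})
            if score(trial) >= score(current):
                current = trial
        if score(current) > best_score:
            best, best_score = current, score(current)
    return best, best_score

print(local_search(list(senses)))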
The strength with which a statement is made can have a significant impact on the audience. For example, international relations can be strained by how the media in one country describes an event in another; and papers can be rejected because they overstate or understate their findings. It is thus important to understand the effects of statement strength. A first step is to be able to distinguish between strong and weak statements. However, even this problem is understudied, partly due to a lack of data. Since strength is inherently relative, revisions of texts that make claims are a natural source of data on strength differences. In this paper, we introduce a corpus of sentence-level revisions from academic writing. We also describe insights gained from our annotation efforts for this task.
A Corpus of Sentence-level Revisions in Academic Writing: A Step towards Understanding Statement Strength in Communication
1,731
In this work we present our expert system for automatic reading, or speech synthesis, based on a text written in Standard Arabic. Our work is carried out in two main stages: the creation of the sound database, and the transformation of the written text into speech (Text To Speech, TTS). This transformation is done firstly by a Phonetic Orthographical Transcription (POT) of any written Standard Arabic text, with the aim of transforming it into its corresponding phonetic sequence, and secondly by the generation of the voice signal which corresponds to the transcribed chain. We lay out the different design stages of the system, as well as the results obtained, compared to other works that realize TTS based on Standard Arabic.
An Expert System for Automatic Reading of A Text Written in Standard Arabic
1,732
Minimum error rate training (MERT) is a widely used training procedure for statistical machine translation. A general problem of this approach is that the search easily converges to a local optimum, and the acquired weight set does not accord with the real distribution of the feature functions. This paper introduces coordinate system selection (RSS) into the search algorithm for MERT. Contrary to previous approaches, in which every dimension corresponds to one independent feature function, we create several coordinate systems by moving one of the dimensions to a new direction. The basic but critical idea is that the training procedure of MERT should be based on a coordinate system formed by search directions, not directly on the feature functions. Experiments show that by selecting coordinate systems using tuning set results, better results can be obtained without any other language knowledge.
Coordinate System Selection for Minimum Error Rate Training in Statistical Machine Translation
1,733
This paper presents a novel combinational phonetic algorithm for the Sindhi language, to be used in developing a Sindhi spell checker, which had not been developed prior to this work. The compound textual forms and glyphs of the Sindhi language present a substantial challenge for developing a Sindhi spell checker system and for generating suggestion lists for misspelled words. In order to implement such a system, phonetics-based Sindhi language rules and patterns must be taken into account to increase accuracy and efficiency. The proposed system blends a phonetics-based SoundEx algorithm with a ShapeEx algorithm for pattern or glyph matching, generating accurate and efficient suggestion lists for incorrect or misspelled Sindhi words. A table of phonetically similar-sounding Sindhi characters for the SoundEx algorithm is also generated, along with another table containing similar glyph- or shape-based character groups for the ShapeEx algorithm. Both are the first attempts at any such categorization and representation for the Sindhi language.
Phonetic based SoundEx & ShapeEx algorithm for Sindhi Spell Checker System
1,734
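A generic SoundEx-style encoder of the kind "Phonetic based SoundEx & ShapeEx algorithm for Sindhi Spell Checker System" adapts: characters map to group codes from a phonetic-similarity table and adjacent duplicate codes collapse. The table below is the classic English grouping, used purely for illustration; the paper's Sindhi character groups are not reproduced.

# Simplified SoundEx-style groups (the classic English table, for illustration only;
# a Sindhi spell checker would substitute its own phonetically similar groups).
GROUPS = {"bfpv": "1", "cgjkqsxz": "2", "dt": "3", "l": "4", "mn": "5", "r": "6"}
CODE = {ch: code for chars, code in GROUPS.items() for ch in chars}

def soundex_like(word, length=4):
    """First letter kept, remaining letters mapped to group codes,
    adjacent duplicate codes collapsed, result padded/truncated."""
    word = word.lower()
    out = []
    prev = CODE.get(word[0], "")
    for ch in word[1:]:
        code = CODE.get(ch, "")
        if code and code != prev:
            out.append(code)
        prev = code
    return (word[0].upper() + "".join(out) + "000")[:length]

print(soundex_like("Robert"), soundex_like("Rupert"))  # similar-sounding names share a code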
We provide a method for automatically detecting change in language across time through a chronologically trained neural language model. We train the model on the Google Books Ngram corpus to obtain word vector representations specific to each year, and identify words that have changed significantly from 1900 to 2009. The model identifies words such as "cell" and "gay" as having changed during that time period. The model simultaneously identifies the specific years during which such words underwent change.
Temporal Analysis of Language through Neural Language Models
1,735
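A sketch of the change-detection step in "Temporal Analysis of Language through Neural Language Models": given year-specific vectors for the same word, a large cosine distance between early and late vectors flags semantic change. The vectors below are invented placeholders for embeddings trained per year.

import math

def cosine_distance(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / norm

# Invented year-specific embeddings; real ones come from a chronologically trained model.
vectors_1900 = {"gay": [0.9, 0.1, 0.0], "cell": [0.2, 0.8, 0.1], "house": [0.5, 0.5, 0.1]}
vectors_2009 = {"gay": [0.1, 0.2, 0.9], "cell": [0.7, 0.1, 0.6], "house": [0.5, 0.5, 0.2]}

ranked = sorted(vectors_1900,
                key=lambda w: cosine_distance(vectors_1900[w], vectors_2009[w]),
                reverse=True)
for w in ranked:
    print(w, round(cosine_distance(vectors_1900[w], vectors_2009[w]), 3))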
We describe INAUT, a controlled natural language dedicated to collaborative update of a knowledge base on maritime navigation and to automatic generation of coast pilot books (Instructions nautiques) of the French National Hydrographic and Oceanographic Service SHOM. INAUT is based on French language and abundantly uses georeferenced entities. After describing the structure of the overall system, giving details on the language and on its generation, and discussing the three major applications of INAUT (document production, interaction with ENCs and collaborative updates of the knowledge base), we conclude with future extensions and open problems.
INAUT, a Controlled Language for the French Coast Pilot Books Instructions nautiques
1,736
In recent years, new developments in the area of lexicography have altered not only the management, processing and publishing of lexicographical data, but also created new types of products such as electronic dictionaries and thesauri. These expand the range of possible uses of lexical data and support users with more flexibility, for instance in assisting human translation. In this article, we give a short and easy-to-understand introduction to the problematic nature of the storage, display and interpretation of lexical data. We then describe the main methods and specifications used to build and represent lexical data. This paper is targeted for the following groups of people: linguists, lexicographers, IT specialists, computer linguists and all others who wish to learn more about the modelling, representation and visualization of lexical knowledge. This paper is written in two languages: French and German.
Méthodes pour la représentation informatisée de données lexicales / Methoden der Speicherung lexikalischer Daten
1,737
This paper presents preliminary results of a Croatian syllable network analysis. A syllable network is a network in which nodes are syllables and links between them are constructed according to their connections within words. In this paper we analyze networks of syllables generated from texts collected from the Croatian Wikipedia and blogs. As our main tool we use complex network analysis methods, which provide mechanisms that can reveal new patterns in a language's structure. We aim to show that syllable networks have a much higher clustering coefficient in comparison to Erdős–Rényi random networks. The results indicate that Croatian syllable networks exhibit certain properties of small-world networks. Furthermore, we compared Croatian syllable networks with Portuguese and Chinese syllable networks and showed that they have similar properties.
A preliminary study of Croatian Language Syllable Networks
1,738
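A minimal sketch of the construction and measurement in "A preliminary study of Croatian Language Syllable Networks": syllables become nodes, adjacent syllables within a word become edges, and the average local clustering coefficient is computed over the resulting graph. The syllabified words are an invented toy sample, not Croatian corpus data.

from collections import defaultdict
from itertools import combinations

# Invented syllabified words; a real study would syllabify a Wikipedia/blog corpus.
words = [["za", "gre", "bu"], ["gre", "ba", "ti"], ["ba", "ka"], ["ka", "za", "ti"]]

graph = defaultdict(set)
for syllables in words:
    for a, b in zip(syllables, syllables[1:]):   # link adjacent syllables within a word
        graph[a].add(b)
        graph[b].add(a)

def local_clustering(node):
    neigh = graph[node]
    if len(neigh) < 2:
        return 0.0
    links = sum(1 for u, v in combinations(neigh, 2) if v in graph[u])
    return 2 * links / (len(neigh) * (len(neigh) - 1))

avg = sum(local_clustering(n) for n in graph) / len(graph)
print(f"average clustering coefficient = {avg:.3f}")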
We present a natural language modelization method which relies strongly on mathematics. This method, called "Formal Semantics", was initiated by the American linguist Richard M. Montague in the 1970s. It uses mathematical tools such as formal languages and grammars, first-order logic, type theory and $\lambda$-calculus. Our goal is to have the reader discover both Montagovian formal semantics and the mathematical tools that he used in his method.
Les mathématiques de la langue : l'approche formelle de Montague
1,739
This paper presents a scalable method for integrating compositional morphological representations into a vector-based probabilistic language model. Our approach is evaluated in the context of log-bilinear language models, rendered suitably efficient for implementation inside a machine translation decoder by factoring the vocabulary. We perform both intrinsic and extrinsic evaluations, presenting results on a range of languages which demonstrate that our model learns morphological representations that both perform well on word similarity tasks and lead to substantial reductions in perplexity. When used for translation into morphologically rich languages with large vocabularies, our models obtain improvements of up to 1.2 BLEU points relative to a baseline system using back-off n-gram models.
Compositional Morphology for Word Representations and Language Modelling
1,740
We present an extended, thematically reinforced version of Gabrilovich and Markovitch's Explicit Semantic Analysis (ESA), where we obtain thematic information through the category structure of Wikipedia. For this we first define a notion of categorical tfidf which measures the relevance of terms in categories. Using this measure as a weight we calculate a maximal spanning tree of the Wikipedia corpus considered as a directed graph of pages and categories. This tree provides us with a unique path of "most related categories" between each page and the top of the hierarchy. We reinforce tfidf of words in a page by aggregating it with categorical tfidfs of the nodes of these paths, and define a thematically reinforced ESA semantic relatedness measure which is more robust than standard ESA and less sensitive to noise caused by out-of-context words. We apply our method to the French Wikipedia corpus, evaluate it through a text classification on a 37.5 MB corpus of 20 French newsgroups and obtain a precision increase of 9-10% compared with standard ESA.
Thematically Reinforced Explicit Semantic Analysis
1,741
Traditional learning-based coreference resolvers operate by training the mention-pair model for determining whether two mentions are coreferent or not. Though conceptually simple and easy to understand, the mention-pair model is linguistically rather unappealing and lags far behind the heuristic-based coreference models proposed in the pre-statistical NLP era in terms of sophistication. Two independent lines of recent research have attempted to improve the mention-pair model, one by acquiring the mention-ranking model to rank preceding mentions for a given anaphor, and the other by training the entity-mention model to determine whether a preceding cluster is coreferent with a given mention. We propose a cluster-ranking approach to coreference resolution, which combines the strengths of the mention-ranking model and the entity-mention model, and is therefore theoretically more appealing than both of these models. In addition, we seek to improve cluster rankers via two extensions: (1) lexicalization and (2) incorporating knowledge of anaphoricity by jointly modeling anaphoricity determination and coreference resolution. Experimental results on the ACE data sets demonstrate the superior performance of cluster rankers to competing approaches as well as the effectiveness of our two extensions.
Narrowing the Modeling Gap: A Cluster-Ranking Approach to Coreference Resolution
1,742
Chinese characters have a complex and hierarchical graphical structure carrying both semantic and phonetic information. We use this structure to enhance the text model and obtain better results in standard NLP operations. First of all, to tackle the problem of graphical variation we define allographic classes of characters. Next, the relation of inclusion of a subcharacter in a character provides us with a directed graph of allographic classes. We provide this graph with two weights: semanticity (the semantic relation between subcharacter and character) and phoneticity (the phonetic relation), and calculate "most semantic subcharacter paths" for each character. Finally, by adding the information contained in these paths to unigrams, we claim to increase the efficiency of text mining methods. We evaluate our method on a text classification task on two corpora (Chinese and Japanese) of a total of 18 million characters and obtain an improvement of 3% on an already high baseline of 89.6% precision, obtained by a linear SVM classifier. Other possible applications and perspectives of the system are discussed.
New Perspectives in Sinographic Language Processing Through the Use of Character Structure
1,743
Although the parallel corpus has an irreplaceable role in machine translation, its scale and coverage still fall short of actual needs. Non-parallel corpus resources on the web have an inestimable potential value for machine translation and other natural language processing tasks. This article proposes a semi-supervised transductive learning method for expanding the training corpus of a statistical machine translation system by extracting parallel sentences from a non-parallel corpus. This method requires only a small amount of labeled corpus and a large unlabeled corpus to build a high-performance classifier, which is especially valuable when labeled data is scarce. The experimental results show that by combining non-parallel corpus alignment and the semi-supervised transductive learning method, we can more effectively use their respective strengths to improve the performance of the machine translation system.
Machine Translation Model based on Non-parallel Corpus and Semi-supervised Transductive Learning
1,744
Economic issues related to information processing techniques are very important. The development of such technologies is a major asset for developing countries like Cambodia and Laos, and emerging ones like Vietnam, Malaysia and Thailand. The MotAMot project aims to computerize an under-resourced language: Khmer, spoken mainly in Cambodia. The main goal of the project is the development of a multilingual lexical system targeted for Khmer. The macrostructure is a pivot one with each word sense of each language linked to a pivot axi. The microstructure comes from a simplification of the explanatory and combinatory dictionary. The lexical system has been initialized with data coming mainly from the conversion of the French-Khmer bilingual dictionary of Denis Richer from Word to XML format. The French part was completed with pronunciations and parts-of-speech coming from the FeM French-English-Malay dictionary. The Khmer headwords noted in IPA in the Richer dictionary were converted to Khmer writing with OpenFST, a finite state transducer tool. The resulting resource is available online for lookup, editing, download and remote programming via a REST API on a Jibiki platform.
MotàMot project: conversion of a French-Khmer published dictionary for building a multilingual lexical system
1,745
This paper relates work done during the DiLAF project. It consists in converting 5 bilingual African language-French dictionaries originally in Word format into XML following the LMF model. The languages processed are Bambara, Hausa, Kanuri, Tamajaq and Songhai-zarma, still considered as under-resourced languages concerning Natural Language Processing tools. Once converted, the dictionaries are available online on the Jibiki platform for lookup and modification. The DiLAF project is first presented. A description of each dictionary follows. Then, the conversion methodology from .doc format to XML files is presented. A specific point on the usage of Unicode follows. Then, each step of the conversion into XML and LMF is detailed. The last part presents the Jibiki lexical resources management platform used for the project.
Computerization of African languages-French dictionaries
1,746
We offer a technique for building networks of term hierarchies based on the analysis of chosen text corpora. The technique is based on the methodology of horizontal visibility graphs. We construct and investigate a language network formed from arXiv electronic preprints on information retrieval topics.
Building of Networks of Natural Hierarchies of Terms Based on Analysis of Texts Corpora
1,747
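A sketch of the horizontal visibility graph construction used in "Building of Networks of Natural Hierarchies of Terms Based on Analysis of Texts Corpora": a sequence of values (e.g. term frequencies) becomes a graph in which two positions are linked when every value strictly between them is lower than both endpoints. The example series is invented.

def horizontal_visibility_graph(series):
    """Edges (i, j) such that every value strictly between i and j is below both."""
    edges = set()
    n = len(series)
    for i in range(n):
        for j in range(i + 1, n):
            if all(series[k] < min(series[i], series[j]) for k in range(i + 1, j)):
                edges.add((i, j))
    return edges

# Invented term-frequency series; in the paper, series are derived from text corpora.
series = [3, 1, 2, 5, 2, 4]
print(sorted(horizontal_visibility_graph(series)))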
The Swiss avalanche bulletin is produced twice a day in four languages. Due to the lack of time available for manual translation, a fully automated translation system is employed, based on a catalogue of predefined phrases and predetermined rules of how these phrases can be combined to produce sentences. The system is able to automatically translate such sentences from German into the target languages French, Italian and English without subsequent proofreading or correction. Our catalogue of phrases is limited to a small sublanguage. The reduction of daily translation costs is expected to offset the initial development costs within a few years. After being operational for two winter seasons, we assess here the quality of the produced texts based on an evaluation where participants rate real danger descriptions from both origins, the catalogue of phrases versus the manually written and translated texts. With a mean recognition rate of 55%, users can hardly distinguish between the two types of texts, and give similar ratings with respect to their language quality. Overall, the output from the catalogue system can be considered virtually equivalent to a text written by avalanche forecasters and then manually translated by professional translators. Furthermore, forecasters declared that all relevant situations were captured by the system with sufficient accuracy and within the limited time available.
Evaluating the fully automatic multi-language translation of the Swiss avalanche bulletin
1,748
Name matching between multiple natural languages is an important step in cross-enterprise integration applications and data mining. It is difficult to decide whether or not two syntactic values (names) from two heterogeneous data sources are alternative designations of the same semantic entity (person). This process becomes more difficult with the Arabic language due to several factors, including spelling and pronunciation variation, dialects, special vowel and consonant distinctions, and other linguistic characteristics. This paper proposes a new framework for name matching between the Arabic language and other languages. The framework uses a dictionary based on a newly proposed version of the Soundex algorithm to encapsulate the recognition of special features of Arabic names. The framework proposes a new proximity matching algorithm to suit the high importance of order sensitivity in Arabic name matching. New performance evaluation metrics are proposed as well. The framework is implemented and verified empirically in several case studies demonstrating substantial improvements compared to other well-known techniques found in the literature.
Cross-Language Personal Name Mapping
1,749
This paper re-investigates a lexical acquisition system initially developed for French. We show that, interestingly, the architecture of the system reproduces and implements the main components of Optimality Theory. However, we formulate the hypothesis that some of its limitations are mainly due to a poor representation of the constraints used. Finally, we show how a better representation of these constraints would yield better results.
Optimality Theory as a Framework for Lexical Acquisition
1,750
This paper reports about our work in the ICON 2013 NLP TOOLS CONTEST on Named Entity Recognition. We submitted runs for Bengali, English, Hindi, Marathi, Punjabi, Tamil and Telugu. A statistical HMM (Hidden Markov Models) based model has been used to implement our system. The system has been trained and tested on the NLP TOOLS CONTEST: ICON 2013 datasets. Our system obtains F-measures of 0.8599, 0.7704, 0.7520, 0.4289, 0.5455, 0.4466, and 0.4003 for Bengali, English, Hindi, Marathi, Punjabi, Tamil and Telugu respectively.
An HMM Based Named Entity Recognition System for Indian Languages: The JU System at ICON 2013
1,751
We present a novel framework for learning to interpret and generate language using only perceptual context as supervision. We demonstrate its capabilities by developing a system that learns to sportscast simulated robot soccer games in both English and Korean without any language-specific prior knowledge. Training employs only ambiguous supervision consisting of a stream of descriptive textual comments and a sequence of events extracted from the simulation trace. The system simultaneously establishes correspondences between individual comments and the events that they describe while building a translation model that supports both parsing and generation. We also present a novel algorithm for learning which events are worth describing. Human evaluations of the generated commentaries indicate they are of reasonable quality and in some cases even on par with those produced by humans for our limited domain.
Training a Multilingual Sportscaster: Using Perceptual Context to Learn Language
1,752
Several messages express opinions about events, products, and services, political views or even their author's emotional state and mood. Sentiment analysis has been used in several applications including analysis of the repercussions of events in social networks, analysis of opinions about products and services, and simply to better understand aspects of social communication in Online Social Networks (OSNs). There are multiple methods for measuring sentiments, including lexical-based approaches and supervised machine learning methods. Despite the wide use and popularity of some methods, it is unclear which method is better for identifying the polarity (i.e., positive or negative) of a message as the current literature does not provide a method of comparison among existing methods. Such a comparison is crucial for understanding the potential limitations, advantages, and disadvantages of popular methods in analyzing the content of OSNs messages. Our study aims at filling this gap by presenting comparisons of eight popular sentiment analysis methods in terms of coverage (i.e., the fraction of messages whose sentiment is identified) and agreement (i.e., the fraction of identified sentiments that are in tune with ground truth). We develop a new method that combines existing approaches, providing the best coverage results and competitive agreement. We also present a free Web service called iFeel, which provides an open API for accessing and comparing results across different sentiment methods for a given text.
Comparing and Combining Sentiment Analysis Methods
1,753
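A sketch of the comparison and combination logic described in "Comparing and Combining Sentiment Analysis Methods": per-method coverage (fraction of messages labeled) and agreement (fraction of labels matching ground truth), plus a simple majority-vote combiner. The method outputs and ground truth below are invented, and majority voting is only one plausible combination strategy.

from collections import Counter

# Invented per-message outputs of three sentiment methods (None = method abstains).
outputs = {
    "m1": {"lexiconA": "pos", "lexiconB": None,  "ml": "pos"},
    "m2": {"lexiconA": "neg", "lexiconB": "neg", "ml": "pos"},
    "m3": {"lexiconA": None,  "lexiconB": None,  "ml": "neg"},
}
truth = {"m1": "pos", "m2": "neg", "m3": "neg"}

def coverage_agreement(method):
    labeled = {m: o[method] for m, o in outputs.items() if o[method] is not None}
    coverage = len(labeled) / len(outputs)
    agreement = sum(lab == truth[m] for m, lab in labeled.items()) / max(len(labeled), 1)
    return coverage, agreement

def combined(message):
    votes = Counter(v for v in outputs[message].values() if v is not None)
    return votes.most_common(1)[0][0] if votes else None

for method in ["lexiconA", "lexiconB", "ml"]:
    print(method, coverage_agreement(method))
print("combined:", {m: combined(m) for m in outputs})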
Sentence-extraction-based summarization methods have some limitations, as they do not go into the semantics of the document. They also lack the capability of sentence generation, which is intuitive to humans. Here we present a novel method to summarize text documents, taking the process to semantic levels with the use of WordNet and other resources, and using a technique for sentence generation. We use semantic role labeling to get the semantic representation of the text, and segmentation to form clusters of related pieces of text. Picking out the centroids and generating sentences completes the task. We evaluate our system against human-composed summaries and also present a human evaluation measuring the quality attributes of our summaries.
A Semantic Approach to Summarization
1,754
It has been proved that large scale, realistic Knowledge Based Machine Translation applications require the acquisition of huge amounts of knowledge about language and about the world. This knowledge is encoded in computational grammars, lexicons and domain models. Another approach, which avoids the need for collecting and analyzing massive knowledge, is the Example Based approach, which is the topic of this paper. We show through the paper that using the Example Based approach in its native form is not suitable for translating into Arabic. Therefore a modification to the basic approach is presented to improve the accuracy of the translation process. The basic idea of the new approach is to improve the technique by which template-based approaches select the appropriate templates.
The Best Templates Match Technique For Example Based Machine Translation
1,755
Development of a proper names pronunciation lexicon is usually a manual effort which cannot be avoided. Grapheme to phoneme (G2P) conversion modules, in the literature, are usually rule based and work best for non-proper names in a particular language. Proper names are foreign to a G2P module. We follow an optimization approach to enable automatic construction of a proper names pronunciation lexicon. The idea is to construct a small orthogonal set of words (a basis) which can span the set of names in a given database. We propose two algorithms for the construction of this basis. The transcription lexicon of all the proper names in a database can then be produced by manually transcribing only the small set of basis words. We first construct a cost function and show that the minimization of the cost function results in a basis. We derive conditions for convergence of this cost function and validate them experimentally on a very large proper name database. Experiments show that the transcription can be achieved by transcribing only a small number of basis words. The algorithms proposed are generic and independent of language; however, performance is better if the proper names have the same origin, namely, the same language or geographical region.
Basis Identification for Automatic Creation of Pronunciation Lexicon for Proper Names
1,756
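A simplified greedy sketch of the basis idea in "Basis Identification for Automatic Creation of Pronunciation Lexicon for Proper Names": pick a small set of names whose sub-units cover the units of the whole name list, so that transcribing only the basis suffices to assemble the rest. Character bigrams stand in for phonetic units, and this greedy set-cover variant is illustrative rather than the paper's cost-function minimization.

def units(name, n=2):
    """Character n-grams as a stand-in for the phonetic units of a name."""
    return {name[i:i + n] for i in range(len(name) - n + 1)}

def greedy_basis(names):
    """Pick names whose units greedily cover the union of all units."""
    remaining = set().union(*(units(name) for name in names))
    basis = []
    while remaining:
        best = max(names, key=lambda name: len(units(name) & remaining))
        gained = units(best) & remaining
        if not gained:
            break
        basis.append(best)
        remaining -= gained
    return basis

names = ["ramanathan", "ramesh", "nathan", "mahesh", "raman"]
print(greedy_basis(names))  # a few names cover the units of the whole list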
IsiZulu is one of the eleven official languages of South Africa and roughly half the population can speak it. It is the first (home) language for over 10 million people in South Africa. Only a few computational resources exist for isiZulu and its related Nguni languages, yet the imperative for tool development exists. We focus on natural language generation, and the grammar options and preferences in particular, which will inform verbalization of knowledge representation languages and could contribute to machine translation. The verbalization pattern specification shows that the grammar rules are elaborate and there are several options of which one may have preference. We devised verbalization patterns for subsumption, basic disjointness, existential and universal quantification, and conjunction. This was evaluated in a survey among linguists and non-linguists. Some differences between linguists and non-linguists can be observed, with the former much more in agreement, and preferences depend on the overall structure of the sentence, such as singular for subsumption and plural in other cases.
Toward verbalizing ontologies in isiZulu
1,757
Recent developments in controlled natural language editors for knowledge engineering (KE) have given rise to expectations that they will make KE tasks more accessible and perhaps even enable non-engineers to build knowledge bases. This exploratory research focussed on novices and experts in knowledge engineering during their attempts to learn a controlled natural language (CNL) known as OWL Simplified English and use it to build a small knowledge base. Participants' behaviours during the task were observed through eye-tracking and screen recordings. This was an attempt at a more ambitious user study than in previous research because we used a naturally occurring text as the source of domain knowledge, and left them without guidance on which information to select, or how to encode it. We have identified a number of skills (competencies) required for this difficult task and key problems that authors face.
How Easy is it to Learn a Controlled Natural Language for Building a Knowledge Base?
1,758
This paper presents a currently bilingual but potentially multilingual FrameNet-based grammar library implemented in Grammatical Framework. The contribution of this paper is two-fold. First, it offers a methodological approach to automatically generate the grammar based on semantico-syntactic valence patterns extracted from FrameNet-annotated corpora. Second, it provides a proof of concept for two use cases illustrating how the acquired multilingual grammar can be exploited in different CNL applications in the domains of arts and tourism.
Controlled Natural Language Generation from a Multilingual FrameNet-based Grammar
1,759
One of the main challenges for building the Semantic Web is ontology authoring. Controlled Natural Languages (CNLs) offer a user-friendly means for non-experts to author ontologies. This paper provides a snapshot of the state of the art for the core CNLs for ontology authoring and reviews their respective evaluations.
A Brief State of the Art for Ontology Authoring
1,760
Controlled natural languages for industrial application are often regarded as a response to the challenges of translation and multilingual communication. This paper presents a quite different approach taken by Koenig & Bauer AG, where the main goal was the improvement of the authoring process for technical documentation. Most importantly, this paper explores the notion of a controlled language and demonstrates how style guides can emerge from non-linguistic considerations. Moreover, it shows the transition from loose language recommendations into precise and prescriptive rules and investigates whether such rules can be regarded as a full-fledged controlled language.
Are Style Guides Controlled Languages? The Case of Koenig & Bauer AG
1,761
This paper presents a system which learns to answer questions on a broad range of topics from a knowledge base using few hand-crafted features. Our model learns low-dimensional embeddings of words and knowledge base constituents; these representations are used to score natural language questions against candidate answers. Training our system using pairs of questions and structured representations of their answers, and pairs of question paraphrases, yields competitive results on a recent benchmark from the literature.
Question Answering with Subgraph Embeddings
1,762
In today's digital world, automated machine translation from one language to another has come a long way and achieved many kinds of success. Whereas Babel Fish supports a good number of foreign languages but only Hindi among Indian languages, Google Translator handles about 10 Indian languages. Although most automated machine translation systems perform well, handling Indian languages requires particular care with local proverbs and idioms. Most machine translation systems follow the direct translation approach when translating between Indian languages. Our research at KMIT R&D Lab found that the handling of local proverbs and idioms has not been given enough attention in earlier work. This paper focuses on two widely spoken Indian languages, Marathi and Telugu, and translation between them. Special care has been given to handling the proverbs and idioms of both languages, and the research outcome shows a significant achievement in this direction.
Translation Of Telugu-Marathi and Vice-Versa using Rule Based Machine Translation
1,763
In this paper, we describe methods for handling multilingual non-compositional constructions in the framework of GF. We specifically look at methods to detect and extract non-compositional phrases from parallel texts and propose methods to handle such constructions in GF grammars. We expect that the methods to handle non-compositional constructions will enrich CNLs by providing more flexibility in the design of controlled languages. We look at two specific use cases of non-compositional constructions: a general-purpose method to detect and extract multilingual multiword expressions and a procedure to identify nominal compounds in German. We evaluate our procedure for multiword expressions by performing a qualitative analysis of the results. For the experiments on nominal compounds, we incorporate the detected compounds in a full SMT pipeline and evaluate the impact of our method on the machine translation process.
Handling non-compositionality in multilingual CNLs
1,764
In this paper, we investigate and experiment the notion of error correction memory applied to error correction in technical texts. The main purpose is to induce relatively generic correction patterns associated with more contextual correction recommendations, based on previously memorized and analyzed corrections. The notion of error correction memory is developed within the framework of the LELIE project and illustrated on the case of fuzzy lexical items, which is a major problem in technical texts.
Towards an Error Correction Memory to Enhance Technical Texts Authoring in LELIE
1,765
Inspired by embedded programming languages, an embedded CNL (controlled natural language) is a proper fragment of an entire natural language (its host language), but it has a parser that recognizes the entire host language. This makes it possible to process out-of-CNL input and give useful feedback to users, instead of just reporting syntax errors. This extended abstract explains the main concepts of embedded CNL implementation in GF (Grammatical Framework), with examples from machine translation and some other ongoing work.
Embedded Controlled Languages
1,766
In this paper we describe our contribution to the PoliInformatics 2014 Challenge on the 2007-2008 financial crisis. We propose a state of the art technique to extract information from texts and provide different representations, giving first a static overview of the domain and then a dynamic representation of its main evolutions. We show that this strategy provides a practical solution to some recent theories in social sciences that are facing a lack of methods and tools to automatically extract information from natural language texts.
Mapping the Economic Crisis: Some Preliminary Investigations
1,767
Text classification is the task of automatically classifying text into one of a set of predefined categories. The problem of text classification has been widely studied in different communities such as natural language processing, data mining and information retrieval. Text classification is an important constituent of many information management tasks such as topic identification, spam filtering, email routing, language identification, genre classification and readability assessment. The performance of text classification improves notably when phrase patterns are used. The use of phrase patterns helps in capturing non-local behaviours and thus helps improve the text classification task. Phrase structure extraction is the first step towards phrase pattern identification. In this survey, a detailed study of phrase structure learning methods has been carried out. This will enable future work in several NLP tasks which use syntactic information from phrase structure, such as grammar checkers, question answering, information extraction, machine translation and text classification. The paper also describes different levels of classification and provides a detailed comparison of the phrase structure learning methods.
A survey on phrase structure learning methods for text classification
1,768
In this paper we present an ongoing research investigating the possibility and potential of integrating frame semantics, particularly FrameNet, in the Grammatical Framework (GF) application grammar development. An important component of GF is its Resource Grammar Library (RGL) that encapsulates the low-level linguistic knowledge about morphology and syntax of currently more than 20 languages facilitating rapid development of multilingual applications. In the ideal case, porting a GF application grammar to a new language would only require introducing the domain lexicon - translation equivalents that are interlinked via common abstract terms. While it is possible for a highly restricted CNL, developing and porting a less restricted CNL requires above average linguistic knowledge about the particular language, and above average GF experience. Specifying a lexicon is mostly straightforward in the case of nouns (incl. multi-word units), however, verbs are the most complex category (in terms of both inflectional paradigms and argument structure), and adding them to a GF application grammar is not a straightforward task. In this paper we are focusing on verbs, investigating the possibility of creating a multilingual FrameNet-based GF library. We propose an extension to the current RGL, allowing GF application developers to define clauses on the semantic level, thus leaving the language-specific syntactic mapping to this extension. We demonstrate our approach by reengineering the MOLTO Phrasebook application grammar.
FrameNet Resource Grammar Library for GF
1,769
The computational handling of Modern Standard Arabic is a challenge in the field of natural language processing due to its highly rich morphology. However, several authors have pointed out that the Arabic morphological system is in fact extremely regular. The existing Arabic morphological analyzers have exploited this regularity to variable extent, yet we believe there is still some scope for improvement. Taking inspiration in traditional Arabic prosody, we have designed and implemented a compact and simple morphological system which in our opinion takes further advantage of the regularities encountered in the Arabic morphological system. The output of the system is a large-scale lexicon of inflected forms that has subsequently been used to create an Online Interface for a morphological analyzer of Arabic verbs. The Jabalin Online Interface is available at http://elvira.lllf.uam.es/jabalin/, hosted at the LLI-UAM lab. The generation system is also available under a GNU GPL 3 license.
Jabalin: a Comprehensive Computational Model of Modern Standard Arabic Verbal Morphology Based on Traditional Arabic Prosody
1,770
In this paper, we tackle the problem of the translation of proper names. We introduce our hypothesis according to which proper names can be translated more often than most people seem to think. Then, we describe the construction of a parallel multilingual corpus used to illustrate our point. Finally, we evaluate both the advantages and limits of this corpus in our study.
Les noms propres se traduisent-ils ? Étude d'un corpus multilingue
1,771
An inter-rater agreement study is performed for readability assessment in Bengali. A 1-7 rating scale was used to indicate different levels of readability. We obtained moderate to fair agreement among seven independent annotators on 30 text passages written by four eminent Bengali authors. As a by-product of our study, we obtained a readability-annotated ground truth dataset in Bengali.
Inter-Rater Agreement Study on Readability Assessment in Bengali
1,772
Machine translation is the process of translating text from one language to another. In this paper, statistical machine translation is performed between Assamese and English using a parallel corpus of the two languages. The statistical phrase-based translation toolkit Moses is used here. To build the language model and to align the words, we used two other tools, IRSTLM and GIZA, respectively. The BLEU score is used to assess the performance of our translation system. A difference in BLEU scores is obtained when translating sentences from Assamese to English and vice versa. Since Indian languages are morphologically very rich, translation from English to Assamese is relatively harder, resulting in a lower BLEU score. A statistical transliteration system is also integrated with our translation system to deal mainly with proper nouns and OOV (out-of-vocabulary) words that are not present in our corpus.
Assamese-English Bilingual Machine Translation
1,773
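The entry above ("Assamese-English Bilingual Machine Translation") compares the two translation directions by BLEU score. As a minimal sketch of that kind of comparison, the snippet below computes corpus-level BLEU for both directions using NLTK; the file names and the use of NLTK's BLEU implementation (rather than the toolkit the authors actually used) are assumptions for illustration only.

```python
# Sketch: corpus-level BLEU for the two translation directions.
# File names and NLTK usage are illustrative assumptions, not the paper's setup.
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

def read_tokenized(path):
    """Read one whitespace-tokenized sentence per line."""
    with open(path, encoding="utf-8") as f:
        return [line.split() for line in f]

def corpus_bleu_score(ref_path, hyp_path):
    references = [[ref] for ref in read_tokenized(ref_path)]  # one reference per sentence
    hypotheses = read_tokenized(hyp_path)
    smooth = SmoothingFunction().method1  # avoid zero scores on small test sets
    return corpus_bleu(references, hypotheses, smoothing_function=smooth)

if __name__ == "__main__":
    # Hypothetical output files from the two translation systems.
    print("as->en BLEU:", corpus_bleu_score("test.en", "output.as-en.en"))
    print("en->as BLEU:", corpus_bleu_score("test.as", "output.en-as.as"))
```

A lower score in the English-to-Assamese direction would reflect the morphological richness the abstract points to.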
Machine translation is a challenging problem for Indian languages. New machine translators are being developed every day, but obtaining high-quality automatic translation is still a very distant dream. A correctly translated sentence in Hindi is rarely produced. In this paper, we focus on the English-Hindi language pair, and in order to identify the correct MT output we present a ranking system which employs machine learning techniques and morphological features. No human intervention is required in the ranking. We have also validated our results by comparing them with human ranking.
Quality Estimation Of Machine Translation Outputs Through Stemming
1,774
Named Entity Recognition is important in major Natural Language Processing tasks such as information extraction, question answering, machine translation and document summarization, so in this paper we put forward a survey of named entities in Indian languages with particular reference to Assamese. Various rule-based and machine learning approaches are available for Named Entity Recognition. At the beginning of the paper we give an overview of the available approaches for Named Entity Recognition, and then we discuss related research in this field. Assamese, like other Indian languages, is agglutinative and suffers from a lack of appropriate resources, as Named Entity Recognition requires large data sets, gazetteer lists, dictionaries, etc., and useful features such as capitalization, available in English, are absent in Assamese. Apart from this, we also describe some of the issues faced in Assamese when doing Named Entity Recognition.
A Survey of Named Entity Recognition in Assamese and other Indian Languages
1,775
In this paper we present the fundamental lexical semantics of the Sinhala language and a Hidden Markov Model (HMM) based Part of Speech (POS) tagger for Sinhala. Part of speech is a vital topic in any natural language processing task; it involves analysing the construction, behaviour and dynamics of the language, and this knowledge can be utilized in computational linguistic analysis and automation applications. Sinhala is a morphologically rich and agglutinative language in which words are inflected with various grammatical features, so tagging is essential for further analysis of the language. Our research is based on a statistical approach, in which the tagging process is carried out by computing the tag sequence probability and the word-likelihood probability from the given corpus; the linguistic knowledge is automatically extracted from the annotated corpus. The current tagger reaches more than 90% accuracy for known words.
Hidden Markov Model Based Part of Speech Tagger for Sinhala Language
1,776
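The preceding abstract ("Hidden Markov Model Based Part of Speech Tagger for Sinhala Language") describes tagging as combining a tag sequence probability with a word-likelihood probability. A minimal sketch of that formulation is Viterbi decoding over an HMM, shown below; the probability tables (`trans`, `emit`, `start`) are assumed to be estimated from an annotated corpus and are not the paper's actual model.

```python
# Sketch: Viterbi decoding for an HMM POS tagger.
# trans[t1][t2] = tag transition prob, emit[t][w] = word likelihood, start[t] = initial prob.
import math

def viterbi(words, tags, trans, emit, start):
    V = [{t: (math.log(start.get(t, 1e-12)) + math.log(emit[t].get(words[0], 1e-12)), None)
          for t in tags}]
    for w in words[1:]:
        row = {}
        for t in tags:
            best_prev, best_score = None, float("-inf")
            for p in tags:
                s = V[-1][p][0] + math.log(trans[p].get(t, 1e-12))
                if s > best_score:
                    best_prev, best_score = p, s
            row[t] = (best_score + math.log(emit[t].get(w, 1e-12)), best_prev)
        V.append(row)
    # Backtrace the highest-scoring tag sequence.
    last = max(V[-1], key=lambda t: V[-1][t][0])
    seq = [last]
    for i in range(len(V) - 1, 0, -1):
        seq.append(V[i][seq[-1]][1])
    return list(reversed(seq))
```

Unknown words fall back to a tiny floor probability here; a real tagger would use smoothing or suffix-based guessing instead.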
We analyze a word embedding method in supervised tasks. It maps words on a sphere such that words co-occurring in similar contexts lie closely. The similarity of contexts is measured by the distribution of substitutes that can fill them. We compared word embeddings, including more recent representations, in Named Entity Recognition (NER), Chunking, and Dependency Parsing. We examine our framework in multilingual dependency parsing as well. The results show that the proposed method achieves results as good as or better than the other word embeddings in the tasks we investigate. It achieves state-of-the-art results in multilingual dependency parsing. Word embeddings in 7 languages are available for public use.
Substitute Based SCODE Word Embeddings in Supervised NLP Tasks
1,777
Previous attempts at RST-style discourse segmentation typically adopt features centered on a single token to predict whether to insert a boundary before that token. In contrast, we develop a discourse segmenter utilizing a set of pairing features, which are centered on a pair of adjacent tokens in the sentence, by equally taking into account the information from both tokens. Moreover, we propose a novel set of global features, which encode characteristics of the segmentation as a whole, once we have an initial segmentation. We show that both the pairing and global features are useful on their own, and their combination achieved an $F_1$ of 92.6% of identifying in-sentence discourse boundaries, which is a 17.8% error-rate reduction over the state-of-the-art performance, approaching 95% of human performance. In addition, similar improvement is observed across different classification frameworks.
Two-pass Discourse Segmentation with Pairing and Global Features
1,778
We present a novel approach for recognizing what we call targetable named entities; that is, named entities in a targeted set (e.g., movies, books, TV shows). Unlike many other NER systems that need to retrain their statistical models as new entities arrive, our approach does not require such retraining, which makes it more adaptable for types of entities that are frequently updated. For this preliminary study, we focus on one entity type, movie title, using data collected from Twitter. Our system is tested on two evaluation sets, one including only entities corresponding to movies in our training set, and the other excluding any of those entities. Our final model shows F1-scores of 76.19% and 78.70% on these evaluation sets, which gives strong evidence that our approach is completely unbiased to any particular set of entities found during training.
Targetable Named Entity Recognition in Social Media
1,779
Identifying concepts and relationships in biomedical text enables knowledge to be applied in computational analyses. Many biological natural language processing (BioNLP) projects attempt to address this challenge, but the state of the art in BioNLP still leaves much room for improvement. Progress in BioNLP research depends on large, annotated corpora for evaluating information extraction systems and training machine learning models. Traditionally, such corpora are created by small numbers of expert annotators often working over extended periods of time. Recent studies have shown that workers on microtask crowdsourcing platforms such as Amazon's Mechanical Turk (AMT) can, in aggregate, generate high-quality annotations of biomedical text. Here, we investigated the use of the AMT in capturing disease mentions in PubMed abstracts. We used the NCBI Disease corpus as a gold standard for refining and benchmarking our crowdsourcing protocol. After several iterations, we arrived at a protocol that reproduced the annotations of the 593 documents in the training set of this gold standard with an overall F measure of 0.872 (precision 0.862, recall 0.883). The output can also be tuned to optimize for precision (max = 0.984 when recall = 0.269) or recall (max = 0.980 when precision = 0.436). Each document was examined by 15 workers, and their annotations were merged based on a simple voting method. In total 145 workers combined to complete all 593 documents in the span of 1 week at a cost of $0.06 per abstract per worker. The quality of the annotations, as judged with the F measure, increases with the number of workers assigned to each task such that the system can be tuned to balance cost against quality. These results demonstrate that microtask crowdsourcing can be a valuable tool for generating well-annotated corpora in BioNLP.
Microtask crowdsourcing for disease mention annotation in PubMed abstracts
1,780
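The entry above ("Microtask crowdsourcing for disease mention annotation in PubMed abstracts") merges 15 workers' annotations by a simple voting method. The sketch below illustrates one plausible reading of such an aggregation step; the data format (character spans) and vote threshold are assumptions, not the paper's exact procedure.

```python
# Sketch: majority-vote aggregation of worker annotations.
# Each worker contributes a set of (start, end) mention spans; a span is kept
# if at least min_votes workers marked it. Format and threshold are assumed.
from collections import Counter

def merge_by_vote(worker_annotations, min_votes):
    votes = Counter(span for spans in worker_annotations for span in spans)
    return {span for span, n in votes.items() if n >= min_votes}

# Example with 4 hypothetical workers: 3 agree on the span (10, 18).
workers = [{(10, 18)}, {(10, 18), (40, 52)}, {(10, 18)}, set()]
print(merge_by_vote(workers, min_votes=3))   # {(10, 18)}
```

Raising `min_votes` trades recall for precision, which matches the tunable precision/recall behaviour reported in the abstract.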
In this paper, we describe the problem of cognate identification and its relation to phylogenetic inference. We introduce subsequence based features for discriminating cognates from non-cognates. We show that subsequence based features perform better than the state-of-the-art string similarity measures for the purpose of cognate identification. We use the cognate judgments for the purpose of phylogenetic inference and observe that these classifiers infer a tree which is close to the gold standard tree. The contribution of this paper is the use of subsequence features for cognate identification and to employ the cognate judgments for phylogenetic inference.
Gap-weighted subsequences for automatic cognate identification and phylogenetic inference
1,781
Real-word spelling correction differs from non-word spelling correction in its aims and its challenges. Here we show that the central problem in real-word spelling correction is detection. Methods from non-word spelling correction, which focus instead on selection among candidate corrections, do not address detection adequately, because detection is either assumed in advance or heavily constrained. As we demonstrate in this paper, merely discriminating between the intended word and a random close variation of it within the context of a sentence is a task that can be performed with high accuracy using straightforward models. Trigram models are sufficient in almost all cases. The difficulty comes when every word in the sentence is a potential error, with a large set of possible candidate corrections. Despite their strengths, trigram models cannot reliably find true errors without introducing many more, at least not when used in the obvious sequential way without added structure. The detection task exposes weakness not visible in the selection task.
Detection is the central problem in real-word spelling correction
1,782
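The entry above ("Detection is the central problem in real-word spelling correction") argues that trigram models discriminate well between a word and a close variant in context, but flag too many false errors when applied naively. The sketch below shows the basic trigram comparison it refers to; the language-model interface (`lm.logprob`), the confusion sets, and the margin threshold are illustrative assumptions.

```python
# Sketch: trigram-based detection of real-word errors.
# A word is flagged when a confusion-set alternative makes the surrounding
# trigrams substantially more probable. lm.logprob(w1, w2, w3) is an assumed API.
def trigram_logprob(lm, left, right, word):
    return lm.logprob(left[-2], left[-1], word) + lm.logprob(left[-1], word, right[0])

def flag_real_word_errors(tokens, lm, confusion_sets, margin=2.0):
    flagged = []
    for i in range(2, len(tokens) - 1):
        w = tokens[i]
        left, right = tokens[i - 2:i], tokens[i + 1:i + 2]
        best_alt = max(confusion_sets.get(w, []), default=None,
                       key=lambda a: trigram_logprob(lm, left, right, a))
        if best_alt and (trigram_logprob(lm, left, right, best_alt)
                         - trigram_logprob(lm, left, right, w)) > margin:
            flagged.append((i, w, best_alt))
    return flagged
```

The `margin` parameter is one simple way to avoid introducing new errors when every word is a potential error, which is exactly the difficulty the abstract highlights.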
We present SimLex-999, a gold standard resource for evaluating distributional semantic models that improves on existing resources in several important ways. First, in contrast to gold standards such as WordSim-353 and MEN, it explicitly quantifies similarity rather than association or relatedness, so that pairs of entities that are associated but not actually similar [Freud, psychology] have a low rating. We show that, via this focus on similarity, SimLex-999 incentivizes the development of models with a different, and arguably wider range of applications than those which reflect conceptual association. Second, SimLex-999 contains a range of concrete and abstract adjective, noun and verb pairs, together with an independent rating of concreteness and (free) association strength for each pair. This diversity enables fine-grained analyses of the performance of models on concepts of different types, and consequently greater insight into how architectures can be improved. Further, unlike existing gold standard evaluations, for which automatic approaches have reached or surpassed the inter-annotator agreement ceiling, state-of-the-art models perform well below this ceiling on SimLex-999. There is therefore plenty of scope for SimLex-999 to quantify future improvements to distributional semantic models, guiding the development of the next generation of representation-learning architectures.
SimLex-999: Evaluating Semantic Models with (Genuine) Similarity Estimation
1,783
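The entry above ("SimLex-999") describes a gold standard against which distributional models are scored. The usual evaluation, sketched below, is the Spearman correlation between model cosine similarities and the human ratings; the file layout and column names are assumptions for illustration.

```python
# Sketch: evaluating a vector-space model against a similarity gold standard
# such as SimLex-999 via Spearman correlation. TSV columns are assumed.
import csv
import numpy as np
from scipy.stats import spearmanr

def evaluate_on_simlex(vectors, simlex_path):
    """vectors: dict word -> np.ndarray; simlex_path: TSV with word1, word2, SimLex999 columns."""
    model_scores, gold_scores = [], []
    with open(simlex_path, encoding="utf-8") as f:
        for row in csv.DictReader(f, delimiter="\t"):
            w1, w2 = row["word1"], row["word2"]
            if w1 in vectors and w2 in vectors:
                v1, v2 = vectors[w1], vectors[w2]
                cos = float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))
                model_scores.append(cos)
                gold_scores.append(float(row["SimLex999"]))
    rho, _ = spearmanr(model_scores, gold_scores)
    return rho
```

Because SimLex-999 rates genuine similarity rather than association, a model that merely captures relatedness will correlate poorly here even if it scores well on association-based benchmarks.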
In this work, we present an application of the recently proposed unsupervised keyword extraction algorithm RAKE to a corpus of Polish legal texts from the field of public procurement. RAKE is essentially a language- and domain-independent method. Its only language-specific input is a stoplist containing a set of non-content words. The performance of the method heavily depends on the choice of such a stoplist, which should be domain-adapted. Therefore, we complement the RAKE algorithm with an automatic approach to selecting non-content words, which is based on the statistical properties of term distribution.
Unsupervised Keyword Extraction from Polish Legal Texts
1,784
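The entry above ("Unsupervised Keyword Extraction from Polish Legal Texts") builds on RAKE, whose core idea is to take maximal runs of content words between stopwords and punctuation as candidate keywords and score them by word degree over frequency. The sketch below shows that core idea; the hard-coded stoplist stands in for the automatically derived, domain-adapted one the abstract proposes.

```python
# Sketch of the RAKE idea: split at stopwords/punctuation, score candidates
# by summed degree/frequency of their words. Stoplist here is a stand-in.
import re
from collections import defaultdict

def rake_keywords(text, stopwords, top_k=10):
    tokens = re.findall(r"\w+|[^\w\s]", text.lower())
    phrases, current = [], []
    for tok in tokens:
        if tok in stopwords or not tok.isalnum():
            if current:
                phrases.append(current)
            current = []
        else:
            current.append(tok)
    if current:
        phrases.append(current)

    freq, degree = defaultdict(int), defaultdict(int)
    for phrase in phrases:
        for w in phrase:
            freq[w] += 1
            degree[w] += len(phrase)          # within-phrase co-occurrence degree
    word_score = {w: degree[w] / freq[w] for w in freq}
    scored = {(" ".join(p), sum(word_score[w] for w in p)) for p in phrases}
    return sorted(scored, key=lambda x: -x[1])[:top_k]
```

Since the only language-specific ingredient is the stoplist, swapping in a statistically derived Polish legal-domain stoplist is exactly the kind of adaptation the abstract describes.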
Ferrer-i-Cancho (2015) presents a mathematical model of both the synchronic and diachronic nature of word order based on the assumption that memory costs are a never decreasing function of distance and a few very general linguistic assumptions. However, even these minimal and seemingly obvious assumptions are not as safe as they appear in light of recent typological and psycholinguistic evidence. The interaction of word order and memory has further depths to be explored.
Be Careful When Assuming the Obvious: Commentary on "The placement of the head that minimizes online memory: a complex systems approach"
1,785
We provide a comparative study between neural word representations and traditional vector spaces based on co-occurrence counts, in a number of compositional tasks. We use three different semantic spaces and implement seven tensor-based compositional models, which we then test (together with simpler additive and multiplicative approaches) in tasks involving verb disambiguation and sentence similarity. To check their scalability, we additionally evaluate the spaces using simple compositional methods on larger-scale tasks with less constrained language: paraphrase detection and dialogue act tagging. In the more constrained tasks, co-occurrence vectors are competitive, although choice of compositional method is important; on the larger-scale tasks, they are outperformed by neural word embeddings, which show robust, stable performance across the tasks.
Evaluating Neural Word Representations in Tensor-Based Compositional Settings
1,786
This paper provides a method for improving tensor-based compositional distributional models of meaning by the addition of an explicit disambiguation step prior to composition. In contrast with previous research where this hypothesis has been successfully tested against relatively simple compositional models, in our work we use a robust model trained with linear regression. The results we get in two experiments show the superiority of the prior disambiguation method and suggest that the effectiveness of this approach is model-independent.
Resolving Lexical Ambiguity in Tensor Regression Models of Meaning
1,787
We present STIR (STrongly Incremental Repair detection), a system that detects speech repairs and edit terms on transcripts incrementally with minimal latency. STIR uses information-theoretic measures from n-gram models as its principal decision features in a pipeline of classifiers detecting the different stages of repairs. Results on the Switchboard disfluency tagged corpus show utterance-final accuracy on a par with state-of-the-art incremental repair detection methods, but with better incremental accuracy, faster time-to-detection and less computational overhead. We evaluate its performance using incremental metrics and propose new repair processing evaluation standards.
Strongly Incremental Repair Detection
1,788
In this empirical study, I compare various tree distance measures -- originally developed in computational biology for the purpose of tree comparison -- for the purpose of parser evaluation. I control for the parser setting by comparing the automatically generated parse trees from the state-of-the-art parser (Charniak, 2000) with the gold-standard parse trees. The article describes two different tree distance measures (RF and QD), along with their variants (GRF and GQD), for the purpose of parser evaluation. The article argues that the RF measure captures similar information as the standard EvalB metric (Sekine and Collins, 1997) and the tree edit distance (Zhang and Shasha, 1989) applied by Tsarfaty et al. (2011). Finally, the article also provides empirical evidence by reporting high correlations between the different tree distances and the EvalB metric's scores.
Empirical Evaluation of Tree distances for Parser Evaluation
1,789
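The entry above ("Empirical Evaluation of Tree distances for Parser Evaluation") relates RF-style tree distances to bracket-based parser evaluation. The sketch below illustrates one simplified reading of that connection: treat each constituency tree as the set of spans it brackets and take the symmetric difference as an RF-like distance. This simplification, and the use of NLTK trees, are assumptions for illustration, not the paper's exact measures.

```python
# Sketch: an RF-style distance between two constituency parses,
# computed as the symmetric difference of their bracketed spans.
from nltk import Tree

def spans(tree):
    """Return the set of (start, end) spans of all constituents in the tree."""
    result = set()
    def walk(t, start):
        end = start
        for child in t:
            if isinstance(child, Tree):
                end = walk(child, end)
            else:
                end += 1
        result.add((start, end))
        return end
    walk(tree, 0)
    return result

def rf_distance(gold, test):
    g, t = spans(gold), spans(test)
    return len(g ^ t)   # brackets present in only one of the two trees

gold = Tree.fromstring("(S (NP (DT the) (NN cat)) (VP (VBD sat)))")
test = Tree.fromstring("(S (NP (DT the)) (VP (NN cat) (VBD sat)))")
print(rf_distance(gold, test))   # 2: one bracket missed, one spurious
```

Counting shared versus disagreeing brackets is also what EvalB precision and recall are built from, which is the intuition behind the correlation the abstract reports.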
This report describes an NLP assistant for the collaborative development environment Clide, that supports the development of NLP applications by providing easy access to some common NLP data structures. The assistant visualizes text fragments and their dependencies by displaying the semantic graph of a sentence, the coreference chain of a paragraph and mined triples that are extracted from a paragraph's semantic graphs and linked using its coreference chain. Using this information and a logic programming library, we create an NLP database which is used by a series of queries to mine the triples. The algorithm is tested by translating a natural language text describing a graph to an actual graph that is shown as an annotation in the text editor.
An NLP Assistant for Clide
1,790
Automatic Multi-Word Term (MWT) extraction is a very important issue for many applications, such as information retrieval, question answering, and text categorization. Although many methods have been used for MWT extraction in English and other European languages, few studies have addressed Arabic. In this paper, we propose a novel hybrid method which combines linguistic and statistical approaches for Arabic Multi-Word Term extraction. The main contribution of our method is to consider contextual information and both termhood and unithood in the association measures at the statistical filtering step. In addition, our technique takes into account the problem of MWT variation in the linguistic filtering step. The performance of the proposed statistical measure (NLC-value) is evaluated on an Arabic environment corpus by comparing it with some existing competitors. Experimental results show that our NLC-value measure outperforms the others in terms of precision for both bi-grams and tri-grams.
A Study of Association Measures and their Combination for Arabic MWT Extraction
1,791
This paper presents a new model of WordNet that is used to disambiguate the correct sense of a polysemy word based on clue words. The related words for each sense of a polysemy word, as well as of a single-sense word, are referred to as clue words. The conventional WordNet organizes nouns, verbs, adjectives and adverbs together into sets of synonyms called synsets, each expressing a different concept. In contrast to the structure of WordNet, we developed a new model of WordNet that organizes the different senses of polysemy words, as well as single-sense words, based on clue words. These clue words for each sense of a polysemy word, as well as for a single-sense word, are used to disambiguate the correct meaning of the polysemy word in the given context using knowledge-based Word Sense Disambiguation (WSD) algorithms. A clue word can be a noun, verb, adjective or adverb.
Word Sense Disambiguation using WSD specific Wordnet of Polysemy Words
1,792
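The entry above ("Word Sense Disambiguation using WSD specific Wordnet of Polysemy Words") associates each sense with a set of clue words and disambiguates against the context. The sketch below shows a minimal overlap-based reading of that idea, in the spirit of Lesk-style knowledge-based WSD; the clue-word inventory is a toy example, not the paper's resource.

```python
# Sketch: pick the sense whose clue words overlap most with the context.
# The clue-word sets shown are toy examples.
def disambiguate(sense_clues, context_tokens):
    """sense_clues: dict sense_id -> set of clue words; context_tokens: list of words."""
    context = {w.lower() for w in context_tokens}
    scores = {sense: len(clues & context) for sense, clues in sense_clues.items()}
    return max(scores, key=scores.get)

bank_senses = {
    "bank.financial": {"money", "deposit", "loan", "account"},
    "bank.river": {"river", "water", "shore", "flood"},
}
print(disambiguate(bank_senses, "he deposited money in the bank account".split()))
# -> "bank.financial"
```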
The rendering of Sanskrit poetry from text to speech is a problem that has not been solved before. One reason may be the complexity of the language itself. We present unique algorithms, based on extensive empirical analysis, to synthesize speech from a given text input of Sanskrit verses. Using a pre-recorded audio unit database, which is itself tremendously reduced in size compared to the colossal size that would otherwise be required, the algorithms produce the best possible, tunefully rendered chanting of the given verse. This would enable the visually impaired and those with reading disabilities to easily access the contents of Sanskrit verses otherwise available only in writing.
An Algorithm Based on Empirical Methods, for Text-to-Tuneful-Speech Synthesis of Sanskrit Verse
1,793
Comprehensively searching for words in Sanskrit E-text is a non-trivial problem because words could change their forms in different contexts. One such context is sandhi or euphonic conjunctions, which cause a word to change owing to the presence of adjacent letters or words. The change wrought by these possible conjunctions can be so significant in Sanskrit that a simple search for the word in its given form alone can significantly reduce the success level of the search. This work presents a representational schema that represents letters in a binary format and reduces Paninian rules of euphonic conjunctions to simple bit set-unset operations. The work presents an efficient algorithm to process vowel-based sandhis using this schema. It further presents another algorithm that uses the sandhi processor to generate the possible transformed word forms of a given word to use in a comprehensive word search.
A Binary Schema and Computational Algorithms to Process Vowel-based Euphonic Conjunctions for Word Searches
1,794
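The entry above ("A Binary Schema and Computational Algorithms to Process Vowel-based Euphonic Conjunctions for Word Searches") reduces sandhi rules to bit set-unset operations over a binary letter encoding. The toy sketch below illustrates that general idea only; the feature layout and the single rule shown (a/ā + i/ī → e, a simple guṇa sandhi) are invented for illustration and are not the paper's actual schema.

```python
# Toy illustration: letters carry a binary feature encoding, and a sandhi rule
# becomes bit tests. Feature bits and the rule are illustrative assumptions.
IS_VOWEL   = 0b0001
IS_A_GRADE = 0b0010   # a, aa
IS_I_GRADE = 0b0100   # i, ii

FEATURES = {"a": IS_VOWEL | IS_A_GRADE, "aa": IS_VOWEL | IS_A_GRADE,
            "i": IS_VOWEL | IS_I_GRADE, "ii": IS_VOWEL | IS_I_GRADE,
            "e": IS_VOWEL}

def guna_sandhi(final, initial):
    """Join a word-final and word-initial vowel; return the conjoined vowel or None."""
    f, g = FEATURES.get(final, 0), FEATURES.get(initial, 0)
    if (f & IS_A_GRADE) and (g & IS_I_GRADE):   # bit tests replace rule lookup
        return "e"
    return None

print(guna_sandhi("a", "i"))   # "e", as in "deva" + "indra" -> "devendra"
```

Generating such transformed forms of a query word and searching for each of them is the word-search strategy the abstract describes.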
Searching for words in Sanskrit E-text is a problem that is accompanied by complexities introduced by features of Sanskrit such as euphonic conjunctions or sandhis. A word could occur in an E-text in a transformed form owing to the operation of rules of sandhi. Simple word search would not yield these transformed forms of the word. Further, there is no search engine in the literature that can comprehensively search for words in Sanskrit E-texts taking euphonic conjunctions into account. This work presents an optimal binary representational schema for letters of the Sanskrit alphabet along with algorithms to efficiently process the sandhi rules of Sanskrit grammar. The work further presents an algorithm that uses the sandhi processing algorithm to perform a comprehensive word search on E-text.
Computational Algorithms Based on the Paninian System to Process Euphonic Conjunctions for Word Searches
1,795
Twitter, with over 500 million users globally, generates over 100,000 tweets per minute. The 140-character limit per tweet, perhaps unintentionally, encourages users to use shorthand notations and to strip spellings to their bare minimum "syllables" or elisions, e.g. "srsly". The analysis of Twitter messages, which typically contain misspellings, elisions, and grammatical errors, poses a challenge to established Natural Language Processing (NLP) tools, which are generally designed with the assumption that the data conforms to the basic grammatical structure commonly used in English. In order to make sense of Twitter messages it is necessary to first transform them into a canonical form, consistent with the dictionary or grammar. This process, performed at the level of individual tokens ("words"), is called lexical normalisation. This paper investigates various techniques for lexical normalisation of Twitter data and presents the findings as the techniques are applied to process raw data from Twitter.
Lexical Normalisation of Twitter Data
1,796
A crowdsourcing translation approach is an effective tool for the globalization of site content, but it is also an important source of parallel linguistic data. For a given site processed with a crowdsourcing system, a sentence-aligned corpus can be fetched, which covers a very narrow domain of terminology and language patterns - a site-specific domain. These data can be used for training and estimation of a site-specific statistical machine translation engine.
Using crowdsourcing system for creating site-specific statistical machine translation engine
1,797
In this paper, the performance of two dependency parsers, namely Stanford and Minipar, on biomedical texts is reported. The performance of the parsers in assigning dependencies between two biomedical concepts that are already proved to be connected is not satisfactory. Both Stanford and Minipar, being statistical parsers, fail to assign a dependency relation between two connected concepts if they are separated by at least one clause. Minipar's performance in parsing biomedical text, in terms of precision, recall and the F-score of the attachment score (e.g., correctly identified head in a dependency), is also measured taking Stanford's output as a gold standard. The results suggest that Minipar is not yet suitable for parsing biomedical texts. In addition, a qualitative investigation reveals that the difference between the working principles of the parsers also plays a vital role in Minipar's degraded performance.
Performance of Stanford and Minipar Parser on Biomedical Texts
1,798
Contemporary research on computational processing of linguistic metaphors is divided into two main branches: metaphor recognition and metaphor interpretation. We take a different line of research and present an automated method for generating conceptual metaphors from linguistic data. Given the generated conceptual metaphors, we find corresponding linguistic metaphors in corpora. In this paper, we describe our approach and its evaluation using English and Russian data.
Generating Conceptual Metaphors from Proposition Stores
1,799