| text (string, lengths 17 to 3.36M) | source (string, lengths 3 to 333) | __index_level_0__ (int64, 0 to 518k) |
|---|---|---|
Jules Bloch's work on the formation of the Marathi language needs to be extended to a study of the evolution and formation of Indian languages within the Indian language union (sprachbund). The paper analyses the stages in the evolution of early writing systems, which began with the evolution of counting in the ancient Near East. A stage anterior to the syllabic representation of the sounds of a language is identified. The set of unique geometric token shapes used to categorize objects became too large to handle once hundreds of categories of goods and metallurgical processes had to be abstracted during the production of bronze-age goods. About 3500 BCE, the Indus script was developed as a writing system that used hieroglyphs to represent the 'spoken words' identifying each of these goods and processes. A rebus method of representing similar-sounding words of the artisans' lingua franca was used in the Indus script. This method is recognized and consistently applied for the lingua franca of the Indian sprachbund. That the ancient languages of India constituted a sprachbund (or language union) is now recognized by many linguists. The sprachbund area is proximate to the area where most of the Indus script inscriptions were discovered, as documented in the corpora. That hundreds of Indian hieroglyphs continued to be used in metallurgy is evidenced by their use on early punch-marked coins. This explains the combined use of syllabic scripts such as Brahmi and Kharoshti together with the hieroglyphs on the Rampurva copper bolt and the Sohgaura copper plate from about the 6th century BCE. Indian hieroglyphs constitute a writing system for the Meluhha language and are rebus representations of archaeo-metallurgical lexemes. The rebus principle was employed by the early scripts and can legitimately be used to decipher the Indus script, after secure pictorial identification.
|
Indus script corpora, archaeo-metallurgy and Meluhha (Mleccha)
| 1,500
|
In computing, spell checking is the process of detecting, and sometimes providing suggestions for, incorrectly spelled words in a text. Basically, a spell checker is a computer program that uses a dictionary of words to perform spell checking. The bigger the dictionary, the higher the error detection rate. Because spell checkers are based on regular dictionaries, they suffer from a data sparseness problem: they cannot capture a large vocabulary of words including proper names, domain-specific terms, technical jargon, special acronyms, and terminologies. As a result, they exhibit a low error detection rate and often fail to catch major errors in the text. This paper proposes a new context-sensitive spelling correction method for detecting and correcting non-word and real-word errors in digital text documents. The approach hinges on data statistics from the Google Web 1T 5-gram data set, which consists of a large volume of n-gram word sequences extracted from the World Wide Web. Fundamentally, the proposed method comprises an error detector that detects misspellings, a candidate spellings generator based on a character 2-gram model that generates correction suggestions, and an error corrector that performs contextual error correction. Experiments conducted on a set of text documents from different domains and containing misspellings showed an outstanding spelling error correction rate and a drastic reduction of both non-word and real-word errors. In a further study, the proposed algorithm is to be parallelized so as to lower the computational cost of the error detection and correction processes.
|
Context-sensitive Spelling Correction Using Google Web 1T 5-Gram
Information
| 1,501
|
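A minimal sketch of the two ideas combined in the spelling-correction abstract above: candidate generation by character 2-gram overlap against a dictionary, and contextual ranking of candidates with n-gram counts. The dictionary, the toy count table, and the scoring function are illustrative assumptions, not the authors' actual data or code.

```python
from collections import Counter

def char_bigrams(word):
    """Character 2-grams of a word, padded so short words still match."""
    padded = f"#{word}#"
    return Counter(padded[i:i + 2] for i in range(len(padded) - 1))

def dice(a, b):
    """Dice coefficient between two bigram multisets."""
    overlap = sum((a & b).values())
    return 2 * overlap / (sum(a.values()) + sum(b.values()))

def candidates(misspelling, dictionary, top_n=5):
    """Rank dictionary words by character-2-gram similarity to the misspelling."""
    bg = char_bigrams(misspelling)
    return sorted(dictionary, key=lambda w: dice(bg, char_bigrams(w)), reverse=True)[:top_n]

def rank_in_context(cands, left, right, ngram_counts):
    """Prefer the candidate whose trigram with the surrounding words is most frequent."""
    return max(cands, key=lambda w: ngram_counts.get((left, w, right), 0))

# Toy example (hypothetical counts standing in for Web 1T statistics).
dictionary = {"their", "there", "these", "theory", "the"}
counts = {("over", "there", "now"): 120, ("over", "their", "now"): 15}
cands = candidates("thier", dictionary)
print(rank_in_context(cands, "over", "now", counts))  # -> "there"
```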
The aim of this paper is to evaluate a Text to Knowledge Mapping (TKM) prototype. The prototype is domain-specific; its purpose is to map instructional text onto a knowledge domain, here DC electrical circuits. During development, the prototype was tested with a limited data set from the domain. The prototype has reached a stage where it needs to be evaluated with a representative linguistic data set, a corpus. A corpus is a collection of text drawn from typical sources that can be used as a test data set to evaluate NLP systems. As no corpus is available for the domain, we developed and annotated a representative corpus. The evaluation of the prototype considers two of its major components: the lexical components and the knowledge model. Evaluation of the lexical components enriches the lexical resources of the prototype, such as vocabulary and grammar structures, enabling the prototype to parse a reasonable number of sentences in the corpus. While dealing with the lexicon was straightforward, the identification and extraction of appropriate semantic relations was much more involved. It was therefore necessary to manually develop a conceptual structure for the domain in order to formulate a domain-specific framework of semantic relations. The framework of semantic relations that resulted from this study consists of 55 relations, of which 42 have inverse relations. We also conducted a rhetorical analysis on the corpus to establish its representativeness in conveying semantics. Finally, we conducted a topical and discourse analysis on the corpus to analyze the coverage of discourse by the prototype.
|
A Corpus-based Evaluation of a Domain-specific Text to Knowledge Mapping
Prototype
| 1,502
|
Two formalisms, both based on context-free grammars, have recently been proposed as a basis for the non-uniform random generation of combinatorial objects. The former, introduced by Denise et al., associates weights with letters, while the latter, recently explored by Weinberg et al. in the context of random generation, associates weights with transitions. In this short note, we use a simple modification of the Greibach Normal Form transformation algorithm, due to Blum and Koch, to show that these two formalisms have equivalent expressivity in terms of their induced distributions.
|
Rule-weighted and terminal-weighted context-free grammars have identical
expressivity
| 1,503
|
This paper describes the use of Naive Bayes for assigning function tags and of a context-free grammar (CFG) for parsing Myanmar sentences. Part of the challenge of statistical function tagging for Myanmar sentences comes from the fact that Myanmar has free phrase order and a complex morphological system. Function tagging is a pre-processing step for parsing. For the function tagging task, we use a functionally annotated corpus and tag Myanmar sentences with correct segmentation, POS (part-of-speech) tagging and chunking information. We propose Myanmar grammar rules and apply the context-free grammar (CFG) to derive the parse trees of function-tagged Myanmar sentences. Experiments show that our analysis achieves good results in parsing simple sentences and three types of complex sentences.
|
Parsing of Myanmar sentences with function tagging
| 1,504
|
Existing probabilistic scanners and parsers impose hard constraints on the way lexical and syntactic ambiguities can be resolved. Furthermore, traditional grammar-based parsing tools are limited in the mechanisms they allow for taking context into account. In this paper, we propose a model-driven tool that allows for statistical language models with arbitrary probability estimators. Our work on model-driven probabilistic parsing is built on top of ModelCC, a model-based parser generator, and enables the probabilistic interpretation and resolution of anaphoric, cataphoric, and recursive references in the disambiguation of abstract syntax graphs. In order to demonstrate the expressive power of ModelCC, we describe the design of a general-purpose natural language parser.
|
A Model-Driven Probabilistic Parser Generator
| 1,505
|
This work consists of creating a Computer Assisted Language Learning (CALL) system based on an Automatic Speech Recognition (ASR) system for the Arabic language, using the CMU Sphinx3 tool [1], which is based on the HMM approach. For this work, we constructed a corpus of six hours of speech recordings from nine speakers. The robustness of HMMs to noise is a key reason for choosing this approach [2]. The results achieved are encouraging given that our corpus was produced by only nine speakers, and they open the door to further improvements.
|
Arabic Language Learning Assisted by Computer, based on Automatic Speech
Recognition
| 1,506
|
While the use of cluster features has become ubiquitous in core NLP tasks, most cluster features in NLP are based on distributional similarity. We propose a new type of clustering criterion, specific to the task of part-of-speech tagging. Instead of distributional similarity, these clusters are based on the behavior of a baseline tagger when applied to a large corpus. These cluster features provide gains in accuracy similar to those achieved by distributional-similarity-derived clusters. Using both types of cluster features together further improves tagging accuracy. We show that the method is effective in both in-domain and out-of-domain scenarios for English, and for French, German and Italian. The effect is larger for out-of-domain text.
|
Task-specific Word-Clustering for Part-of-Speech Tagging
| 1,507
|
We introduce precision-biased parsing: a parsing task which favors precision over recall by allowing the parser to abstain from decisions deemed uncertain. We focus on dependency-parsing and present an ensemble method which is capable of assigning parents to 84% of the text tokens while being over 96% accurate on these tokens. We use the precision-biased parsing task to solve the related high-quality parse-selection task: finding a subset of high-quality (accurate) trees in a large collection of parsed text. We present a method for choosing over a third of the input trees while keeping unlabeled dependency parsing accuracy of 97% on these trees. We also present a method which is not based on an ensemble but rather on directly predicting the risk associated with individual parser decisions. In addition to its efficiency, this method demonstrates that a parsing system can provide reasonable estimates of confidence in its predictions without relying on ensembles or aggregate corpus counts.
|
Precision-biased Parsing and High-Quality Parse Selection
| 1,508
|
Lexical substitutes have found use in areas such as paraphrasing, text simplification, machine translation, word sense disambiguation, and part of speech induction. However the computational complexity of accurately identifying the most likely substitutes for a word has made large scale experiments difficult. In this paper I introduce a new search algorithm, FASTSUBS, that is guaranteed to find the K most likely lexical substitutes for a given word in a sentence based on an n-gram language model. The computation is sub-linear in both K and the vocabulary size V. An implementation of the algorithm and a dataset with the top 100 substitutes of each token in the WSJ section of the Penn Treebank are available at http://goo.gl/jzKH0.
|
FASTSUBS: An Efficient and Exact Procedure for Finding the Most Likely
Lexical Substitutes Based on an N-gram Language Model
| 1,509
|
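FASTSUBS itself is sub-linear and its internals are not described in the abstract above, but the objective it optimizes can be illustrated with a deliberately naive baseline: score every vocabulary word as a substitute by the n-gram probability of the sentence with that word swapped in. The bigram model and vocabulary below are toy assumptions for illustration only, not the paper's setup.

```python
import math
from collections import Counter

def train_bigram(corpus):
    """Maximum-likelihood bigram model with add-one smoothing (toy stand-in for a real LM)."""
    unigrams, bigrams = Counter(), Counter()
    for sent in corpus:
        toks = ["<s>"] + sent + ["</s>"]
        unigrams.update(toks)
        bigrams.update(zip(toks, toks[1:]))
    vocab = len(unigrams)
    def logprob(prev, word):
        return math.log((bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab))
    return logprob

def substitutes(sentence, position, vocabulary, logprob, k=3):
    """Brute-force top-k substitutes: rescore the two affected bigrams for each candidate."""
    toks = ["<s>"] + sentence + ["</s>"]
    i = position + 1  # offset for <s>
    def score(word):
        return logprob(toks[i - 1], word) + logprob(word, toks[i + 1])
    return sorted(vocabulary, key=score, reverse=True)[:k]

corpus = [["the", "cat", "sat"], ["the", "dog", "sat"], ["a", "dog", "ran"]]
lm = train_bigram(corpus)
print(substitutes(["the", "cat", "sat"], 1, {"cat", "dog", "bird"}, lm))
```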
The study of the Tip of the Tongue phenomenon (TOT) provides valuable clues and insights concerning the organisation of the mental lexicon (meaning, number of syllables, relation with other words, etc.). This paper describes a tool based on psycho-linguistic observations concerning the TOT phenomenon. We built it to enable a speaker/writer to find the word he is looking for, a word he may know but is unable to access in time. We try to simulate the TOT phenomenon by creating a situation where the system knows the target word yet is unable to access it. In order to find the target word, we make use of the paradigmatic and syntagmatic associations stored in the linguistic databases. Our experiment supports the following conclusion: a tool like SVETLAN, capable of automatically structuring a dictionary by domains, can be used successfully to help the speaker/writer find the word he is looking for, if it is combined with a database rich in paradigmatic links like EuroWordNet.
|
Système d'aide à l'accès lexical : trouver le mot qu'on a sur le
bout de la langue
| 1,510
|
This project explores the nature of language acquisition in computers, guided by techniques similar to those used by children. While existing natural language processing methods are limited in scope and understanding, our system aims to gain an understanding of language from first principles and hence from minimal initial input. The first portion of our system was implemented in Java and focuses on understanding the morphology of language using bigrams. We use frequency distributions and the differences between them to define and distinguish languages. English and French texts were analyzed to determine a difference threshold of 55, above which texts are considered to be in different languages, and this threshold was verified using Spanish texts. The second portion of our system focuses on gaining an understanding of the syntax of a language using a recursive method. The program uses one of two possible methods to analyze given sentences, based either on sentence patterns or on surrounding words. Both methods have been implemented in C++. The program is able to understand the structure of simple sentences and learn new words. In addition, we provide some suggestions regarding future work and potential extensions of the existing program.
|
Language Acquisition in Computers
| 1,511
|
Universal Networking Language (UNL) is a declarative formal language that is used to represent semantic data extracted from natural language texts. This paper presents a novel approach to converting Bangla natural language text into UNL using a method known as Predicate Preserving Parser (PPP) technique. PPP performs morphological, syntactic and semantic, and lexical analysis of text synchronously. This analysis produces a semantic-net like structure represented using UNL. We demonstrate how Bangla texts are analyzed following the PPP technique to produce UNL documents which can then be translated into any other suitable natural language facilitating the opportunity to develop a universal language translation method via UNL.
|
UNL Based Bangla Natural Text Conversion - Predicate Preserving Parser
Approach
| 1,512
|
Understanding the ways in which participants in public discussions frame their arguments is important in understanding how public opinion is formed. In this paper, we adopt the position that it is time for more computationally-oriented research on problems involving framing. In the interests of furthering that goal, we propose the following specific, interesting and, we believe, relatively accessible question: In the controversy regarding the use of genetically-modified organisms (GMOs) in agriculture, do pro- and anti-GMO articles differ in whether they choose to adopt a "scientific" tone? Prior work on the rhetoric and sociology of science suggests that hedging may distinguish popular-science text from text written by professional scientists for their colleagues. We propose a detailed approach to studying whether hedge detection can be used to understand scientific framing in the GMO debates, and provide corpora to facilitate this study. Some of our preliminary analyses suggest that hedges occur less frequently in scientific discourse than in popular text, a finding that contradicts prior assertions in the literature. We hope that our initial work and data will encourage others to pursue this promising line of inquiry.
|
Hedge detection as a lens on framing in the GMO debates: A position
paper
| 1,513
|
In this work we designed an indexing model for the Arabic language, adapting the standards used for describing learning resources (the LOM and its application profiles) to learning conditions such as students' educational levels and levels of understanding, and to the pedagogical context, taking into account the representative elements of the text, the text's length, and so on. In particular, we highlight the specificity of the Arabic language, a complex language characterized by its flexion, its vowelization and its agglutination.
|
Developing a model for a text database indexed pedagogically for
teaching the Arabic language
| 1,514
|
BADREX uses dynamically generated regular expressions to annotate term definition-term abbreviation pairs, and corefers unpaired acronyms and abbreviations back to their initial definition in the text. Against the Medstract corpus BADREX achieves precision and recall of 98% and 97%, and against a much larger corpus, 90% and 85%, respectively. BADREX yields improved performance over previous approaches, requires no training data and allows runtime customisation of its input parameters. BADREX is freely available from https://github.com/philgooch/BADREX-Biomedical-Abbreviation-Expander as a plugin for the General Architecture for Text Engineering (GATE) framework and is licensed under the GPLv3.
|
BADREX: In situ expansion and coreference of biomedical abbreviations
using dynamic regular expressions
| 1,515
|
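A rough illustration of the idea behind BADREX as described above: generate a regular expression dynamically from an abbreviation so that its long-form definition can be matched in the preceding text. The pattern construction here is a simplified guess at the general technique, not the actual BADREX implementation.

```python
import re

def definition_pattern(abbrev):
    """Build a regex for a word sequence whose initials spell the abbreviation."""
    letters = [re.escape(c) for c in abbrev.lower()]
    parts = [rf"{c}\w+" for c in letters]
    # Allow a few short function words between the initial-bearing words.
    sep = r"(?:\s+(?:of|the|and|for)\s+|\s+)"
    return re.compile(r"\b" + sep.join(parts) + r"\b", re.IGNORECASE)

def expand(text):
    """Find 'long form (ABBREV)' pairs by matching the generated pattern before the parentheses."""
    pairs = {}
    for m in re.finditer(r"\(([A-Z]{2,6})\)", text):
        window = text[max(0, m.start() - 80):m.start()]
        hit = None
        for cand in definition_pattern(m.group(1)).finditer(window):
            hit = cand  # keep the match closest to the parentheses
        if hit:
            pairs[m.group(1)] = hit.group(0)
    return pairs

sample = ("The General Architecture for Text Engineering (GATE) framework hosts the "
          "plugin; gene expression (GE) data were excluded.")
print(expand(sample))
```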
We describe the TempEval-3 task which is currently in preparation for the SemEval-2013 evaluation exercise. The aim of TempEval is to advance research on temporal information processing. TempEval-3 follows on from previous TempEval events, incorporating: a three-part task structure covering event, temporal expression and temporal relation extraction; a larger dataset; and single overall task quality scores.
|
TempEval-3: Evaluating Events, Time Expressions, and Temporal Relations
| 1,516
|
We now have a rich and growing set of modeling tools and algorithms for inducing linguistic structure from text that is less than fully annotated. In this paper, we discuss some of the weaknesses of our current methodology. We present a new abstract framework for evaluating natural language processing (NLP) models in general and unsupervised NLP models in particular. The central idea is to make explicit certain adversarial roles among researchers, so that the different roles in an evaluation are more clearly defined and performers of all roles are offered ways to make measurable contributions to the larger goal. Adopting this approach may help to characterize model successes and failures by encouraging earlier consideration of error analysis. The framework can be instantiated in a variety of ways, simulating some familiar intrinsic and extrinsic evaluations as well as some new evaluations.
|
Adversarial Evaluation for Models of Natural Language
| 1,517
|
This paper addresses the problem of mapping natural language sentences to lambda-calculus encodings of their meaning. We describe a learning algorithm that takes as input a training set of sentences labeled with expressions in the lambda calculus. The algorithm induces a grammar for the problem, along with a log-linear model that represents a distribution over syntactic and semantic analyses conditioned on the input sentence. We apply the method to the task of learning natural language interfaces to databases and show that the learned parsers outperform previous methods in two benchmark database domains.
|
Learning to Map Sentences to Logical Form: Structured Classification
with Probabilistic Categorial Grammars
| 1,518
|
The following study presents a collocation extraction approach based on a clustering technique. It uses a combination of several classical measures covering all aspects of a given corpus, and then separates the bigrams found in the corpus into several disjoint groups according to the probability that they contain collocations. This makes it possible to exclude groups in which the presence of collocations is very unlikely, and thus to reduce the search space in a meaningful way.
|
Clustering based approach extracting collocations
| 1,519
|
This paper demonstrates the automatic segmentation of Manipuri (Meiteilon) words into syllabic units. Manipuri is a scheduled Indian language of Tibeto-Burman origin and is highly agglutinative. The language uses two scripts, the Bengali script and Meitei Mayek (script); the present work is based on the second. An algorithm is designed to identify mainly the syllables of words of Manipuri origin. The algorithm achieves a Recall of 74.77, a Precision of 91.21 and an F-Score of 82.18, which is a reasonable result for a first attempt of this kind for this language.
|
Automatic Segmentation of Manipuri (Meiteilon) Word into Syllabic Units
| 1,520
|
The notion of appropriate sequence as introduced by Z. Harris provides a powerful syntactic way of analysing the detailed meaning of various sentences, including ambiguous ones. In an adjectival sentence like 'The leather was yellow', the introduction of an appropriate noun, here 'colour', specifies which quality the adjective describes. In some other adjectival sentences with an appropriate noun, that noun plays the same part as 'colour' and seems to be relevant to the description of the adjective. These appropriate nouns can usually be used in elementary sentences like 'The leather had some colour', but in many cases they have a more or less obligatory modifier. For example, you can hardly mention that an object has a colour without qualifying that colour at all. About 300 French nouns are appropriate in at least one adjectival sentence and have an obligatory modifier. They enter into a number of sentence structures related by several syntactic transformations. The appropriateness of the noun and the fact that the modifier is obligatory are reflected in these transformations. The description of these syntactic phenomena provides a basis for a classification of these nouns. It also concerns the lexical properties of thousands of predicative adjectives, and in particular the relations between the sentence without the noun, 'The leather was yellow', and the adjectival sentence with the noun, 'The colour of the leather was yellow'.
|
Appropriate Nouns with Obligatory Modifiers
| 1,521
|
The comparative evaluation of Arabic HPSG grammar lexica requires a deep study of their linguistic coverage. The complexity of this task results mainly from the heterogeneity of the descriptive components within those lexica (underlying linguistic resources and different data categories, for example). It is therefore essential to define more homogeneous representations, which in turn will enable us to compare them and eventually merge them. In this context, we present a method for comparing HPSG lexica based on a rule system. This method is implemented within a prototype for the projection from Arabic HPSG to a normalised pivot language compliant with LMF (ISO 24613 - Lexical Markup Framework) and serialised using a TEI (Text Encoding Initiative) based representation. The design of this system is based on an initial study of the HPSG formalism looking at its adequacy for the representation of Arabic, and from this, we identify the appropriate feature structures corresponding to each Arabic lexical category and their possible LMF counterparts.
|
A prototype for projecting HPSG syntactic lexica towards LMF
| 1,522
|
In this article we focus, first, on the principles of pedagogical indexing and the characteristics of the Arabic language and, second, on the possibility of adapting the standard used for describing learning resources (the LOM and its Application Profiles) to learning conditions such as students' educational levels and levels of understanding, and to the educational context, taking into account the representative elements of the text, the text length, and so on. In particular, we highlight the specificity of the Arabic language, a complex language characterized by its flexion, its vowelization and its agglutination.
|
Adaptation of pedagogical resources description standard (LOM) with the
specificity of Arabic language
| 1,523
|
Sense analysis is still a critical problem in machine translation systems, especially for pairs such as English-Korean in which the syntactic difference between the source and target languages is very great. We suggest a method for selecting the noun sense using contextual features in English-Korean translation.
|
A Method for Selecting Noun Sense using Co-occurrence Relation in
English-Korean Translation
| 1,524
|
With the increasing popularity and availability of digital text data, the authorship of digital texts cannot be taken for granted, owing to the ease of copying and parsing. This paper presents a new text style analysis called natural frequency zoned word distribution analysis (NFZ-WDA), and then a basic authorship attribution scheme and an open authorship attribution scheme for digital texts based on this analysis. NFZ-WDA is based on the observation that all authors leave distinct intrinsic word usage traces in the texts they write, and that these intrinsic styles can be identified and employed to analyze authorship. The intrinsic word usage styles can be estimated through an analysis of word distribution within a text that goes beyond normal word frequency analysis and can be expressed as: which groups of words are used in the text; how frequently each group of words occurs; and how the occurrences of each group of words are distributed in the text. Next, the basic authorship attribution scheme and the open authorship attribution scheme provide solutions for closed and open authorship attribution problems, respectively. Through analysis and extensive experimental studies, this paper demonstrates the efficiency of the proposed method for authorship attribution.
|
More than Word Frequencies: Authorship Attribution via Natural Frequency
Zoned Word Distribution Analysis
| 1,525
|
Recent advances in computer technology have permitted scientists to implement and test algorithms that had been known for quite some time (or not) but which were computationally expensive. Two such projects are IBM's Jeopardy system, part of its DeepQA project [1], and Wolfram's WolframAlpha [2]. Both implement natural language processing (another goal of AI scientists) and try to answer questions as asked by the user. Though the goals of the two projects are similar, each has a different procedure at its core. In the following sections, the mechanism and history of IBM's Jeopardy system and WolframAlpha are explained, followed by the implications of these projects for realizing Ray Kurzweil's [3] dream of passing the Turing test by 2029. A recipe for taking these projects to a new level is also explained.
|
Recent Technological Advances in Natural Language Processing and
Artificial Intelligence
| 1,526
|
In this paper, we present a new approach dedicated to correcting spelling errors in the Arabic language. This approach corrects typographical errors such as insertion, deletion, and permutation. Our method is inspired by the Levenshtein algorithm and allows a finer and better ranking of correction candidates than the standard Levenshtein distance. The results obtained are very satisfactory and encouraging, which shows the interest of our new approach.
|
Introduction of the weight edition errors in the Levenshtein distance
| 1,527
|
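The abstract above does not give the exact weighting scheme, so the following is only a generic illustration of how per-operation weights can be introduced into the standard Levenshtein dynamic program; the cost values are arbitrary placeholders.

```python
def weighted_levenshtein(s, t, ins=1.0, dele=1.0, sub=1.5, transpose=1.2):
    """Edit distance with separate weights for insertion, deletion, substitution
    and adjacent transposition (Damerau-style), computed by dynamic programming."""
    m, n = len(s), len(t)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i * dele
    for j in range(1, n + 1):
        d[0][j] = j * ins
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0.0 if s[i - 1] == t[j - 1] else sub
            d[i][j] = min(d[i - 1][j] + dele,       # delete from s
                          d[i][j - 1] + ins,        # insert into s
                          d[i - 1][j - 1] + cost)   # substitute / match
            if i > 1 and j > 1 and s[i - 1] == t[j - 2] and s[i - 2] == t[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + transpose)
    return d[m][n]

# Candidates closer under the weighted distance rank higher as corrections.
print(weighted_levenshtein("form", "from"))  # one transposition -> 1.2
print(weighted_levenshtein("form", "fort"))  # one substitution  -> 1.5
```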
The dynamics of average word length in Russian and English is analysed in this article. Words belonging to the diachronic text corpus Google Books Ngram and dating back to the last two centuries are studied. It was found that average word length increased slightly in the 19th century, grew rapidly over most of the 20th century, and then started decreasing from the end of the 20th century to the beginning of the 21st century. The words that contributed most to the increase or decrease in average word length were identified, with content words and function words analysed separately. Long content words contribute most to average word length; as shown, these words reflect the main tendencies of social development and are therefore used frequently. Changes in the frequency of personal pronouns also contribute significantly to changes in average word length. Other parameters connected with average word length were also analysed.
|
Average word length dynamics as indicator of cultural changes in society
| 1,528
|
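The quantity tracked in the abstract above is essentially a frequency-weighted mean word length per year. A minimal computation over (word, year, count) records in the Google Books Ngram format might look as follows; the records shown are invented for illustration.

```python
from collections import defaultdict

def average_word_length(records):
    """records: iterable of (word, year, match_count) tuples, as in Google Books Ngram data.
    Returns {year: frequency-weighted average word length}."""
    total_chars = defaultdict(int)
    total_words = defaultdict(int)
    for word, year, count in records:
        total_chars[year] += len(word) * count
        total_words[year] += count
    return {year: total_chars[year] / total_words[year] for year in total_words}

# Invented toy records: the real data set has one line per word per year.
records = [
    ("the", 1900, 500), ("government", 1900, 20),
    ("the", 2000, 480), ("internationalization", 2000, 15),
]
print(average_word_length(records))
```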
Written communication on computers requires knowledge of how to write text in the desired language on a computer. Most people do not use any language besides English for this, which creates a barrier. To resolve this issue we have developed a scheme for inputting Hindi text using phonetic mapping. Using this scheme we generate intermediate code strings and match them with pronunciations of the input text. Our system shows significant success over other available input systems.
|
Input Scheme for Hindi Using Phonetic Mapping
| 1,529
|
Natural language parsing has been among the most prominent research areas since the genesis of Natural Language Processing. Probabilistic parsers are being developed to make the process of parser development easier, more accurate and faster. In the Indian context, the question of which computational grammar formalism to use still needs to be answered. In this paper we focus on this problem and analyze different formalisms for Indian languages.
|
Evaluation of Computational Grammar Formalisms for Indian Languages
| 1,530
|
This paper defines a method for extracting a bilingual lexicon in the biomedical domain from comparable corpora. The method is based on compositional translation and exploits morpheme-level translation equivalences. It can generate translations for a large variety of morphologically constructed words and can also generate 'fertile' translations. We show that fertile translations increase the overall quality of the extracted lexicon for English-to-French translation.
|
Identification of Fertile Translations in Medical Comparable Corpora: a
Morpho-Compositional Approach
| 1,531
|
The utility and power of Natural Language Processing (NLP) seems destined to change our technological society in profound and fundamental ways. However there are, to date, few accessible descriptions of the science of NLP that have been written for a popular audience, or even for an audience of intelligent, but uninitiated scientists. This paper aims to provide just such an overview. In short, the objective of this article is to describe the purpose, procedures and practical applications of NLP in a clear, balanced, and readable way. We will examine the most recent literature describing the methods and processes of NLP, analyze some of the challenges that researchers are faced with, and briefly survey some of the current and future applications of this science to IT research in general.
|
Natural Language Processing - A Survey
| 1,532
|
We present a study of the relationship between gender, linguistic style, and social networks, using a novel corpus of 14,000 Twitter users. Prior quantitative work on gender often treats this social variable as a female/male binary; we argue for a more nuanced approach. By clustering Twitter users, we find a natural decomposition of the dataset into various styles and topical interests. Many clusters have strong gender orientations, but their use of linguistic resources sometimes directly conflicts with the population-level language statistics. We view these clusters as a more accurate reflection of the multifaceted nature of gendered language styles. Previous corpus-based work has also had little to say about individuals whose linguistic styles defy population-level gender patterns. To identify such individuals, we train a statistical classifier, and measure the classifier confidence for each individual in the dataset. Examining individuals whose language does not match the classifier's model for their gender, we find that they have social networks that include significantly fewer same-gender social connections and that, in general, social network homophily is correlated with the use of same-gender language markers. Pairing computational methods and social theory thus offers a new perspective on how gender emerges as individuals position themselves relative to audiences, topics, and mainstream gender norms.
|
Gender identity and lexical variation in social media
| 1,533
|
Gujarati is a resource-poor language with almost no language processing tools available. In this paper we present an implementation of a rule-based stemmer for Gujarati. We describe the creation of the stemming rules and the morphological richness that Gujarati possesses. We have also evaluated our results by verifying them with a human expert.
|
A Lightweight Stemmer for Gujarati
| 1,534
|
Developing parallel corpora is an important and difficult activity for Machine Translation, as it requires manual annotation by human translators, and translating the same text again is wasted effort. Tools exist to support this for European languages, but no such tool is available for Indian languages. In this paper we present a tool for Indian languages which not only reuses previously available translations automatically but also, in cases where a sentence has multiple translations, presents them as a ranked list of suggested translations. Moreover, this tool gives translators global and local options for saving their work, so that they may share it with others, which further lightens the task.
|
Design of English-Hindi Translation Memory for Efficient Translation
| 1,535
|
This paper proposes a method for extracting translations of morphologically constructed terms from comparable corpora. The method is based on compositional translation and exploits translation equivalences at the morpheme level, which allows for the generation of "fertile" translations (translation pairs in which the target term has more words than the source term). Ranking methods relying on corpus-based and translation-based features are used to select the best candidate translation. We obtain an average precision of 91% on the Top1 candidate translation. The method was tested on two language pairs (English-French and English-German) and with small specialized comparable corpora (400k words per language).
|
Extraction of domain-specific bilingual lexicon from comparable corpora:
compositional translation and ranking
| 1,536
|
The use of the naive Bayes classifier (NB) and the k-nearest-neighbour classifier (kNN) for the classification and semantic analysis of texts by authors of English fiction is analysed. The authors' works are considered in a vector space whose basis is formed by the frequency characteristics of the semantic fields of nouns and verbs. The highly precise classification of the authors' texts in this vector space of semantic fields indicates the presence of distinct spheres of each author's idiolect in that space, which characterize the individual author's style.
|
Classification Analysis Of Authorship Fiction Texts in The Space Of
Semantic Fields
| 1,537
|
This paper describes the Hangulphabet, a new writing system that should prove useful in a number of contexts. Using the Hangulphabet, a user can instantly see the voicing, manner and place of articulation of any phoneme found in human language. The Hangulphabet places consonant graphemes on a grid with the x-axis representing place of articulation and the y-axis representing manner of articulation. Each individual grapheme contains radicals from both axes at the point where they intersect. The top radical represents manner of articulation, while the bottom represents place of articulation. A horizontal line running through the middle of the bottom radical represents voicing. For vowels, place of articulation is located on a grid that represents the position of the tongue in the mouth. This grid is similar to that of the IPA vowel chart (International Phonetic Association, 1999), the difference being that in the Hangulphabet the trapezoid representing the vocal apparatus is on a slight tilt. Place of articulation for a vowel is represented by a breakout figure from the grid. This system can be used as an alternative to the International Phonetic Alphabet (IPA) or as a complement to it. Beginning students of linguistics may find it particularly useful. A Hangulphabet font has been created to facilitate switching between the Hangulphabet and the IPA.
|
The Hangulphabet: A Descriptive Alphabet
| 1,538
|
Large language models have been proven quite beneficial for a variety of automatic speech recognition tasks in Google. We summarize results on Voice Search and a few YouTube speech transcription tasks to highlight the impact that one can expect from increasing both the amount of training data, and the size of the language model estimated from such data. Depending on the task, availability and amount of training data used, language model size and amount of work and care put into integrating them in the lattice rescoring step we observe reductions in word error rate between 6% and 10% relative, for systems on a wide range of operating points between 17% and 52% word error rate.
|
Large Scale Language Modeling in Automatic Speech Recognition
| 1,539
|
In principle, the design of transition-based dependency parsers makes it possible to experiment with any general-purpose classifier without other changes to the parsing algorithm. In practice, however, it often takes substantial software engineering to bridge between the different representations used by two software packages. Here we present extensions to MaltParser that allow the drop-in use of any classifier conforming to the interface of the Weka machine learning package, a wrapper for the TiMBL memory-based learner to this interface, and experiments on multilingual dependency parsing with a variety of classifiers. While earlier work had suggested that memory-based learners might be a good choice for low-resource parsing scenarios, we cannot support that hypothesis in this work. We observed that support-vector machines give better parsing performance than the memory-based learner, regardless of the size of the training set.
|
Transition-Based Dependency Parsing With Pluggable Classifiers
| 1,540
|
Analyzing writing styles of non-native speakers is a challenging task. In this paper, we analyze the comments written in the discussion pages of the English Wikipedia. Using learning algorithms, we are able to detect native speakers' writing style with an accuracy of 74%. Given the diversity of the English Wikipedia users and the large number of languages they speak, we measure the similarities among their native languages by comparing the influence they have on their English writing style. Our results show that languages known to have the same origin and development path have similar footprint on their speakers' English writing style. To enable further studies, the dataset we extracted from Wikipedia will be made available publicly.
|
Detecting English Writing Styles For Non-native Speakers
| 1,541
|
Controlled natural languages (CNL) with a direct mapping to formal logic have been proposed to improve the usability of knowledge representation systems, query interfaces, and formal specifications. Predictive editors are a popular approach to solve the problem that CNLs are easy to read but hard to write. Such predictive editors need to be able to "look ahead" in order to show all possible continuations of a given unfinished sentence. Such lookahead features, however, are difficult to implement in a satisfying way with existing grammar frameworks, especially if the CNL supports complex nonlocal structures such as anaphoric references. Here, methods and algorithms are presented for a new grammar notation called Codeco, which is specifically designed for controlled natural languages and predictive editors. A parsing approach for Codeco based on an extended chart parsing algorithm is presented. A large subset of Attempto Controlled English (ACE) has been represented in Codeco. Evaluation of this grammar and the parser implementation shows that the approach is practical, adequate and efficient.
|
A Principled Approach to Grammars for Controlled Natural Languages and
Predictive Editors
| 1,542
|
Web users produce more and more documents expressing opinions. Because these have become important resources for customers and manufacturers, many researchers have focused on them. Opinions are often expressed through adjectives with positive or negative semantic values. When extracting information from users' opinions in online reviews, exact recognition of the semantic polarity of adjectives is one of the most important requirements. Since adjectives have different semantic orientations according to context, it is not sufficient to extract opinion information without considering the semantic and lexical relations between the adjectives and the feature nouns appropriate to a given domain. In this paper, we present a classification of adjectives by polarity, and we analyze adjectives whose polarity is undetermined in the absence of context. Our research should be useful for accurately predicting the semantic orientations of opinion sentences, and should be taken into account before relying on automatic methods.
|
Semantic Polarity of Adjectival Predicates in Online Reviews
| 1,543
|
In this paper we present a new and simple language-independent method for word-alignment based on the use of external sources of bilingual information such as machine translation systems. We show that the few parameters of the aligner can be trained on a very small corpus, which leads to results comparable to those obtained by the state-of-the-art tool GIZA++ in terms of precision. Regarding other metrics, such as alignment error rate or F-measure, the parametric aligner, when trained on a very small gold-standard (450 pairs of sentences), provides results comparable to those produced by GIZA++ when trained on an in-domain corpus of around 10,000 pairs of sentences. Furthermore, the results obtained indicate that the training is domain-independent, which enables the use of the trained aligner 'on the fly' on any new pair of sentences.
|
Using external sources of bilingual information for on-the-fly word
alignment
| 1,544
|
Using a corpus of over 17,000 financial news reports (involving over 10M words), we perform an analysis of the argument-distributions of the UP- and DOWN-verbs used to describe movements of indices, stocks, and shares. Using measures of the overlap in the argument distributions of these verbs and k-means clustering of their distributions, we advance evidence for the proposal that the metaphors referred to by these verbs are organised into hierarchical structures of superordinate and subordinate groups.
|
Identifying Metaphor Hierarchies in a Corpus Analysis of Finance
Articles
| 1,545
|
Using a corpus of 17,000+ financial news reports (involving over 10M words), we perform an analysis of the argument-distributions of the UP and DOWN verbs used to describe movements of indices, stocks and shares. In Study 1 participants identified antonyms of these verbs in a free-response task and a matching task from which the most commonly identified antonyms were compiled. In Study 2, we determined whether the argument-distributions for the verbs in these antonym-pairs were sufficiently similar to predict the most frequently-identified antonym. Cosine similarity correlates moderately with the proportions of antonym-pairs identified by people (r = 0.31). More impressively, 87% of the time the most frequently-identified antonym is either the first- or second-most similar pair in the set of alternatives. The implications of these results for distributional approaches to determining metaphoric knowledge are discussed.
|
Identifying Metaphoric Antonyms in a Corpus Analysis of Finance Articles
| 1,546
|
We present a method of finding and analyzing shifts in grammatical relations found in diachronic corpora. Inspired by the econometric technique of measuring return and volatility instead of relative frequencies, we propose them as a way to better characterize changes in grammatical patterns like nominalization, modification and comparison. To exemplify the use of these techniques, we examine a corpus of NIPS papers and report trends which manifest at the token, part-of-speech and grammatical levels. Building up from frequency observations to a second-order analysis, we show that shifts in frequencies overlook deeper trends in language, even when part-of-speech information is included. Examining token, POS and grammatical levels of variation enables a summary view of diachronic text as a whole. We conclude with a discussion about how these methods can inform intuitions about specialist domains as well as changes in language use as a whole.
|
Diachronic Variation in Grammatical Relations
| 1,547
|
Many approaches to sentiment analysis rely on lexica where words are tagged with their prior polarity - i.e. if a word out of context evokes something positive or something negative. In particular, broad-coverage resources like SentiWordNet provide polarities for (almost) every word. Since words can have multiple senses, we address the problem of how to compute the prior polarity of a word starting from the polarity of each sense and returning its polarity strength as an index between -1 and 1. We compare 14 such formulae that appear in the literature, and assess which one best approximates the human judgement of prior polarities, with both regression and classification models.
|
Assessing Sentiment Strength in Words Prior Polarities
| 1,548
|
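Two simple formulae of the kind compared in the abstract above, an unweighted mean over senses and a rank-weighted mean that trusts frequent senses more, can be written in a few lines. The per-sense polarities in the example are made up (in the paper they come from SentiWordNet), and the 1/(rank+1) weighting is just one plausible choice, not necessarily one of the 14 formulae evaluated.

```python
def mean_polarity(sense_polarities):
    """Unweighted mean of per-sense polarities, each in [-1, 1]."""
    return sum(sense_polarities) / len(sense_polarities)

def rank_weighted_polarity(sense_polarities):
    """Weight sense i (0 = most frequent) by 1/(i+1); an illustrative weighting only."""
    weights = [1.0 / (i + 1) for i in range(len(sense_polarities))]
    return sum(w * p for w, p in zip(weights, sense_polarities)) / sum(weights)

# Hypothetical per-sense polarities for one word, ordered by sense frequency.
senses = [0.75, 0.25, -0.5]
print(mean_polarity(senses))           # ~0.167
print(rank_weighted_polarity(senses))  # ~0.386
```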
The present paper explores various arguments in favour of making the Text Encoding Initiative (TEI) guidelines an appropriate serialisation for ISO standard 24613:2008 (LMF, Lexical Markup Framework). It also identifies the issues that would have to be resolved in order to reach an appropriate implementation of these ideas, in particular in terms of informational coverage. We show how the customisation facilities offered by the TEI guidelines can provide an adequate background, not only to cover missing components within the current Dictionary chapter of the TEI guidelines, but also to allow specific lexical projects to deal with local constraints. We expect this proposal to be a basis for a future ISO project in the context of the ongoing revision of LMF.
|
TEI and LMF crosswalks
| 1,549
|
Online content analysis employs algorithmic methods to identify entities in unstructured text. Both machine learning and knowledge-base approaches lie at the foundation of contemporary named entity extraction systems. However, progress in deploying these approaches at web scale has been hampered by the computational cost of NLP over massive text corpora. We present SpeedRead (SR), a named entity recognition pipeline that runs at least 10 times faster than the Stanford NLP pipeline. This pipeline consists of a high-performance Penn Treebank-compliant tokenizer, a close to state-of-the-art part-of-speech (POS) tagger and a knowledge-based named entity recognizer.
|
SpeedRead: A Fast Named Entity Recognition Pipeline
| 1,550
|
Sentiment analysis predicts the presence of positive or negative emotions in a text document. In this paper, we consider higher dimensional extensions of the sentiment concept, which represent a richer set of human emotions. Our approach goes beyond previous work in that our model contains a continuous manifold rather than a finite set of human emotions. We investigate the resulting model, compare it to psychological observations, and explore its predictive capabilities.
|
The Manifold of Human Emotions
| 1,551
|
A neural probabilistic language model (NPLM) can achieve better perplexity than an n-gram language model and its smoothed variants. This paper investigates its application in bilingual NLP, specifically Statistical Machine Translation (SMT). We focus on the perspective that an NPLM has the potential to complement 'resource-constrained' bilingual resources with potentially 'huge' monolingual resources. We introduce an ngram-HMM language model as an NPLM using a non-parametric Bayesian construction. In order to facilitate its application to various tasks, we propose a joint space model of the ngram-HMM language model. We show an experiment on system combination in the area of SMT. One discovery was that our treatment of noise improved the results by 0.20 BLEU points when the NPLM is trained on a relatively small corpus, in our case 500,000 sentence pairs, which is often the case due to the long training time of NPLMs.
|
Joint Space Neural Probabilistic Language Model for Statistical Machine
Translation
| 1,552
|
We propose two novel model architectures for computing continuous vector representations of words from very large data sets. The quality of these representations is measured in a word similarity task, and the results are compared to the previously best performing techniques based on different types of neural networks. We observe large improvements in accuracy at much lower computational cost, i.e. it takes less than a day to learn high quality word vectors from a 1.6 billion words data set. Furthermore, we show that these vectors provide state-of-the-art performance on our test set for measuring syntactic and semantic word similarities.
|
Efficient Estimation of Word Representations in Vector Space
| 1,553
|
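The word similarity task mentioned in the abstract above reduces to comparing the learned vectors, typically with cosine similarity. A self-contained sketch with made-up 3-dimensional vectors (real models use hundreds of dimensions):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Tiny invented embeddings for illustration only.
vectors = {
    "king":  [0.8, 0.3, 0.1],
    "queen": [0.7, 0.4, 0.1],
    "car":   [0.1, 0.9, 0.6],
}
target = "king"
ranked = sorted((w for w in vectors if w != target),
                key=lambda w: cosine(vectors[target], vectors[w]), reverse=True)
print(ranked)  # nearest neighbours of "king" by cosine similarity
```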
Children learn their native language by exposure to their linguistic and communicative environment, but apparently without requiring that their mistakes be corrected. Such learning from positive evidence has been viewed as raising logical problems for language acquisition. In particular, without correction, how is the child to recover from conjecturing an over-general grammar, which will be consistent with any sentence that the child hears? There have been many proposals concerning how this logical problem can be dissolved. Here, we review recent formal results showing that the learner has sufficient data to learn successfully from positive evidence, if it favours the simplest encoding of the linguistic input. Results include the ability to learn linguistic prediction, grammaticality judgements, language production, and form-meaning mappings. The simplicity approach can also be scaled down to analyse the ability to learn specific linguistic constructions, and is amenable to empirical test as a framework for describing human language acquisition.
|
Language learning from positive evidence, reconsidered: A
simplicity-based approach
| 1,554
|
The paper revives an older approach to acoustic modeling that borrows from n-gram language modeling in an attempt to scale up both the amount of training data and model size (as measured by the number of parameters in the model), to approximately 100 times larger than current sizes used in automatic speech recognition. In such a data-rich setting, we can expand the phonetic context significantly beyond triphones, as well as increase the number of Gaussian mixture components for the context-dependent states that allow it. We have experimented with contexts that span seven or more context-independent phones, and up to 620 mixture components per state. Dealing with unseen phonetic contexts is accomplished using the familiar back-off technique used in language modeling due to implementation simplicity. The back-off acoustic model is estimated, stored and served using MapReduce distributed computing infrastructure. Speech recognition experiments are carried out in an N-best list rescoring framework for Google Voice Search. Training big models on large amounts of data proves to be an effective way to increase the accuracy of a state-of-the-art automatic speech recognition system. We use 87,000 hours of training data (speech along with transcription) obtained by filtering utterances in Voice Search logs on automatic speech recognition confidence. Models ranging in size between 20--40 million Gaussians are estimated using maximum likelihood training. They achieve relative reductions in word-error-rate of 11% and 6% when combined with first-pass models trained using maximum likelihood, and boosted maximum mutual information, respectively. Increasing the context size beyond five phones (quinphones) does not help.
|
Large Scale Distributed Acoustic Modeling With Back-off N-grams
| 1,555
|
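For reference, the back-off technique borrowed from language modeling that the abstract above mentions has the familiar Katz form; written for phonetic contexts it looks roughly as follows (notation is mine, not the paper's).

```latex
P_{\text{BO}}(s \mid c_1 \ldots c_k) =
\begin{cases}
  d \, \hat{P}(s \mid c_1 \ldots c_k), & \text{if } \mathrm{count}(c_1 \ldots c_k, s) > 0, \\
  \alpha(c_1 \ldots c_k) \, P_{\text{BO}}(s \mid c_2 \ldots c_k), & \text{otherwise,}
\end{cases}
```

where s is a context-dependent state, c_1 ... c_k are the surrounding phones, d is a discount, and the back-off weight alpha keeps the distribution normalized when falling back to the shorter context.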
When developing a conversational agent, there is often an urgent need to have a prototype available in order to test the application with real users. A Wizard of Oz setup is one possibility, but sometimes the agent should simply be deployed in the environment where it will be used. There, the agent should be able to capture as many interactions as possible and to understand how people react to failure. In this paper, we focus on the rapid development of a natural language understanding module by non-experts. Our approach follows the learning paradigm and treats the process of understanding natural language as a classification problem. We test our module with a conversational agent that answers questions in the art domain. Moreover, we show how our approach can be used by a natural language interface to a cinema database.
|
Towards the Rapid Development of a Natural Language Understanding Module
| 1,556
|
The variation of word meaning according to context leads us to enrich the type system of our syntactic and semantic analyser of French, which is based on categorial grammars and Montague semantics (or lambda-DRT). The main advantage of a deep semantic analysis is to represent meaning by logical formulae that can easily be used, e.g. for inferences. Determiners and quantifiers play a fundamental role in the construction of those formulae, but in our rich type system the usual semantic terms do not work. We propose a solution inspired by Hilbert's tau and epsilon operators, kinds of generic elements and choice functions. This approach unifies the treatment of the different determiners and quantifiers as well as the dynamic binding of pronouns. Above all, this fully computational view fits well within the wide-coverage parser Grail, from both a theoretical and a practical viewpoint.
|
Sémantique des déterminants dans un cadre richement typé
| 1,557
|
The Lexical Access Problem consists of determining the intended sequence of words corresponding to an input sequence of phonemes (basic speech sounds) that come from a low-level phoneme recognizer. In this paper we present an information-theoretic approach based on the Minimum Message Length Criterion for solving the Lexical Access Problem. We model sentences using phoneme realizations seen in training, and word and part-of-speech information obtained from text corpora. We show results on multiple-speaker, continuous, read speech and discuss a heuristic using equivalence classes of similar sounding words which speeds up the recognition process without significant deterioration in recognition accuracy.
|
Lexical Access for Speech Understanding using Minimum Message Length
Encoding
| 1,558
|
This paper describes our submission to the First Workshop on Reordering for Statistical Machine Translation. We have decided to build a reordering system based on tree-to-string model, using only publicly available tools to accomplish this task. With the provided training data we have built a translation model using Moses toolkit, and then we applied a chart decoder, implemented in Moses, to reorder the sentences. Even though our submission only covered English-Farsi language pair, we believe that the approach itself should work regardless of the choice of the languages, so we have also carried out the experiments for English-Italian and English-Urdu. For these language pairs we have noticed a significant improvement over the baseline in BLEU, Kendall-Tau and Hamming metrics. A detailed description is given, so that everyone can reproduce our results. Also, some possible directions for further improvements are discussed.
|
Building a reordering system using tree-to-string hierarchical model
| 1,559
|
Cross-Language Information Retrieval (CLIR) and machine translation (MT) resources, such as dictionaries and parallel corpora, are scarce and hard to come by for special domains. Moreover, these resources are limited to a few languages, such as English, French, and Spanish. Obtaining comparable corpora automatically for such domains could therefore be an effective answer to this problem. Comparable corpora, in which the subcorpora are not translations of each other, can easily be obtained from the web, so building and using comparable corpora is often a more feasible option in multilingual information processing. Comparability metrics are one of the key issues in building and using comparable corpora. Currently, there is no widely accepted definition of, or metric for, corpus comparability. In fact, different definitions or metrics of comparability might be chosen to suit various natural language processing tasks. A new comparability metric, namely a termhood-based metric oriented to the task of bilingual terminology extraction, is proposed in this paper. In this method, words are ranked by termhood rather than frequency, and the cosine similarity calculated on the termhood ranking lists is used as the comparability measure. Experimental results show that the termhood-based metric performs better than the traditional frequency-based metric.
|
Termhood-based Comparability Metrics of Comparable Corpus in Special
Domain
| 1,560
|
Purpose: Terminology is the set of technical words or expressions used in specific contexts, which denotes the core concepts of a formal discipline and is usually applied in the fields of machine translation, information retrieval, information extraction and text categorization, etc. Bilingual terminology extraction plays an important role in bilingual dictionary compilation, bilingual ontology construction, machine translation and cross-language information retrieval. This paper addresses the issues of monolingual terminology extraction and bilingual term alignment based on multi-level termhood. Design/methodology/approach: A method based on multi-level termhood is proposed. The new method computes the termhood of a terminology candidate, as well as of the sentence that includes the terminology, by comparison of corpora. Since terminologies and general words usually have different distributions in the corpus, termhood can also be used to constrain and enhance the performance of term alignment when aligning bilingual terms on a parallel corpus. In this paper, bilingual term alignment based on termhood constraints is presented. Findings: Experimental results show that multi-level termhood achieves better performance than existing methods for terminology extraction. If termhood is used as a constraint factor, the performance of bilingual term alignment can be improved.
|
Bilingual Terminology Extraction Using Multi-level Termhood
| 1,561
|
Regulations in the Building Industry are becoming increasingly complex and involve more than one technical area. They cover products, components and project implementation. They also play an important role in ensuring the quality of a building and minimizing its environmental impact. In this paper, we are particularly interested in the modeling of the regulatory constraints derived from the Technical Guides issued by CSTB and used to validate Technical Assessments. We first describe our approach for modeling regulatory constraints in the SBVR language and formalizing them in the SPARQL language. Second, we describe how we model the processes of compliance checking described in the CSTB Technical Guides. Third, we show how we implement these processes to assist industry practitioners in drafting Technical Documents in order to acquire a Technical Assessment; a compliance report is automatically generated to explain the compliance or non-compliance of these Technical Documents.
|
Towards a Semantic-based Approach for Modeling Regulatory Documents in
Building Industry
| 1,562
|
In natural-language discourse, related events tend to appear near each other to describe a larger scenario. Such structures can be formalized by the notion of a frame (a.k.a. template), which comprises a set of related events and prototypical participants and event transitions. Identifying frames is a prerequisite for information extraction and natural language generation, and is usually done manually. Methods for inducing frames have been proposed recently, but they typically use ad hoc procedures and are difficult to diagnose or extend. In this paper, we propose the first probabilistic approach to frame induction, which incorporates frames, events, participants as latent topics and learns those frame and event transitions that best explain the text. The number of frames is inferred by a novel application of a split-merge method from syntactic parsing. In end-to-end evaluations from text to induced frames and extracted facts, our method produced state-of-the-art results while substantially reducing engineering effort.
|
Probabilistic Frame Induction
| 1,563
|
In the first part of this article, we explore the background of computer-assisted learning from its beginnings in the early XIXth century and the first teaching machines, founded on theories of learning, at the start of the XXth century. With the arrival of the computer, it became possible to offer language learners different types of language activities such as comprehension tasks, simulations, etc. However, these have limits that cannot be overcome without some contribution from the field of natural language processing (NLP). In what follows, we examine the challenges faced and the issues raised by integrating NLP into CALL. We hope to demonstrate that the key to success in integrating NLP into CALL is to be found in multidisciplinary work between computer experts, linguists, language teachers, didacticians and NLP specialists.
|
NLP and CALL: integration is working
| 1,564
|
This project is part of natural language processing and aims to develop a textual inference recognition system named TIMINF. This type of system detects, given two portions of text, whether one text is semantically deduced from the other. We focus on handling temporal inference in this type of system. To do so, we built and analyzed a corpus of questions collected from the web. This study enabled us to classify different types of temporal inferences and to design the architecture of TIMINF, which seeks to integrate a temporal inference module into a textual inference detection system. We also assess the performance of the outputs of the TIMINF system on a test corpus, following the same strategy adopted in the RTE challenge.
|
Role of temporal inference in the recognition of textual inference
| 1,565
|
Probabilistic approaches to part-of-speech tagging rely primarily on whole-word statistics about word/tag combinations as well as contextual information. But experience shows that about 4 per cent of tokens encountered in test sets are unknown even when the training set is as large as a million words. Unseen words are tagged using secondary strategies that exploit word features such as endings, capitalization and punctuation marks. In this work, word-ending statistics are primary and whole-word statistics are secondary. First, a tagger was trained and tested on word endings only. Subsequent experiments added back whole-word statistics for the words occurring most frequently in the training set. As this set of frequent words grew larger, performance was expected to improve, in the limit performing the same as word-based taggers. Surprisingly, the ending-based tagger initially performed nearly as well as the word-based tagger; in the best case, its performance significantly exceeded that of the word-based tagger. Lastly, and unexpectedly, an effect of negative returns was observed: as the set grew larger, performance generally improved and then declined. By varying factors such as ending length and tag-list strategy, we achieved a success rate of 97.5 per cent.
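A minimal sketch of the ending-based strategy, under the assumption that statistics are collected over the last k characters of each word and whole-word statistics are added back only for a chosen set of frequent words; the names, the default tag and the back-off policy are illustrative rather than the authors' exact model.

```python
# Ending-based tagging sketch: P(tag | last-k characters), with optional
# whole-word statistics for the most frequent training words.
from collections import Counter, defaultdict

def train_ending_tagger(tagged_corpus, k=3, top_words=0):
    ending_counts = defaultdict(Counter)   # word ending -> tag counts
    word_counts = defaultdict(Counter)     # whole word  -> tag counts
    freq = Counter(w for sent in tagged_corpus for (w, _) in sent)
    frequent = {w for w, _ in freq.most_common(top_words)}
    for sent in tagged_corpus:
        for word, tag in sent:
            ending_counts[word[-k:]][tag] += 1
            if word in frequent:
                word_counts[word][tag] += 1
    return ending_counts, word_counts, frequent

def tag_word(word, ending_counts, word_counts, frequent, k=3):
    # Use whole-word statistics only for the frequent words; otherwise
    # fall back to the ending distribution.
    if word in frequent and word_counts[word]:
        return word_counts[word].most_common(1)[0][0]
    dist = ending_counts.get(word[-k:])
    return dist.most_common(1)[0][0] if dist else "NN"  # default tag (illustrative)
```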
|
Ending-based Strategies for Part-of-speech Tagging
| 1,566
|
The classification of opinion texts as positive or negative is becoming a subject of great interest in sentiment analysis. The existence of many labeled opinions motivates the use of statistical and machine-learning methods. First-order statistics have proven to be very limited in this field. The Opinum approach is based on the order of the words, without using any syntactic or semantic information. It consists of building one probabilistic model for the positive and another one for the negative opinions. The test opinions are then compared to both models, and a decision and a confidence measure are calculated. In order to reduce the complexity of the training corpus, we first lemmatize the texts and replace most named entities with wildcards. Opinum achieves an accuracy above 81% for Spanish opinions in the financial products domain. In this work we discuss the most important factors that have an impact on classification performance.
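A hedged sketch of the two-model idea: train one language model on positive opinions and one on negative opinions, then classify a test opinion by the difference of its log-likelihoods under the two models. For brevity the sketch uses add-one-smoothed unigrams, whereas Opinum exploits word order with higher-order n-grams; all names are illustrative.

```python
# Two-language-model sentiment classification sketch (unigram simplification).
import math
from collections import Counter

def train_unigram(texts):
    counts = Counter(tok for t in texts for tok in t.split())
    total = sum(counts.values())
    vocab = len(counts)
    # Add-one smoothed log-probability of a token.
    return lambda tok: math.log((counts[tok] + 1) / (total + vocab))

def classify(opinion, logp_pos, logp_neg):
    toks = opinion.split()
    score = sum(logp_pos(t) - logp_neg(t) for t in toks)
    label = "positive" if score >= 0 else "negative"
    confidence = abs(score) / max(len(toks), 1)   # per-token margin as confidence
    return label, confidence
```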
|
Statistical sentiment analysis performance in Opinum
| 1,567
|
This article reports on the results of research towards the fully automatic merging of lexical resources. Our main goal is to show the generality of the proposed approach, which has previously been applied to merge Spanish subcategorization frame lexica. In this work we extend and apply the same technique to perform the merging of morphosyntactic lexica encoded in LMF. The experiments showed that the technique is general enough to obtain good results in these two different tasks, which is an important step towards performing the merging of lexical resources fully automatically.
|
Towards the Fully Automatic Merging of Lexical Resources: A Step Forward
| 1,568
|
The work we present here addresses cue-based noun classification in English and Spanish. Its main objective is to automatically acquire lexical semantic information by classifying nouns into previously known noun lexical classes. This is achieved by using particular aspects of linguistic contexts as cues that identify a specific lexical class. Here we concentrate on the task of identifying such cues and on the theoretical background that allows for an assessment of the complexity of the task. The results show that, despite the a priori complexity of the task, cue-based classification is a useful tool in the automatic acquisition of lexical semantic classes.
|
Automatic lexical semantic classification of nouns
| 1,569
|
Subjective language detection is one of the most important challenges in Sentiment Analysis. Because of their weight and frequency in opinionated texts, adjectives are considered a key piece in the opinion extraction process. These subjective units are more and more frequently collected in polarity lexicons, in which they appear annotated with their prior polarity. However, at the moment, no polarity lexicon takes prior polarity variations across domains into account. This paper proves that a majority of adjectives change their prior polarity value depending on the domain. We propose a distinction between domain-dependent and domain-independent adjectives. Moreover, our analysis led us to propose a further classification related to subjectivity degree: constant, mixed and highly subjective adjectives. Following this classification, polarity values will provide better support for Sentiment Analysis.
|
A Classification of Adjectives for Polarity Lexicons Enhancement
| 1,570
|
The objective of the PANACEA ICT-2007.2.2 EU project is to build a platform that automates the stages involved in the acquisition, production, updating and maintenance of the large language resources required by, among others, MT systems. The development of a Corpus Acquisition Component (CAC) for extracting monolingual and bilingual data from the web is one of the most innovative building blocks of PANACEA. The CAC, which is the first stage in the PANACEA pipeline for building Language Resources, adopts an efficient and distributed methodology to crawl for web documents with rich textual content in specific languages and predefined domains. The CAC includes modules that can acquire parallel data from sites with in-domain content available in more than one language. In order to extrinsically evaluate the CAC methodology, we have conducted several experiments that used crawled parallel corpora for the identification and extraction of parallel sentences using sentence alignment. The corpora were then successfully used for domain adaptation of Machine Translation Systems.
|
Mining and Exploiting Domain-Specific Corpora in the PANACEA Platform
| 1,571
|
In this work we present the results of our experimental work on the development of lexical class-based lexica by automatic means. The objective is to assess the use of linguistic lexical-class-based information as a feature selection methodology for the use of classifiers in quick lexical development. The results show that the approach can help in significantly reducing the human effort required in the development of language resources.
|
Automatic Detection of Non-deverbal Event Nouns for Quick Lexicon
Production
| 1,572
|
Acquiring lexical information is a complex problem, typically approached by relying on a number of contexts to contribute information for classification. One of the first issues to address in this domain is the determination of such contexts. The work presented here proposes the use of automatically obtained FORMAL role descriptors as features used to draw nouns from the same lexical semantic class together in an unsupervised clustering task. We have dealt with three lexical semantic classes (HUMAN, LOCATION and EVENT) in English. The results obtained show that it is possible to discriminate between elements from different lexical semantic classes using only FORMAL role information, hence validating our initial hypothesis. Also, iterating our method accurately accounts for fine-grained distinctions within lexical classes, namely distinctions involving ambiguous expressions. Moreover, a filtering and bootstrapping strategy employed in extracting FORMAL role descriptors proved to minimize effects of sparse data and noise in our task.
|
Using qualia information to identify lexical semantic classes in an
unsupervised clustering task
| 1,573
|
This article presents a probabilistic generative model for text based on semantic topics and syntactic classes called Part-of-Speech LDA (POSLDA). POSLDA simultaneously uncovers short-range syntactic patterns (syntax) and long-range semantic patterns (topics) that exist in document collections. This results in word distributions that are specific to both topics (sports, education, ...) and parts-of-speech (nouns, verbs, ...). For example, multinomial distributions over words are uncovered that can be understood as "nouns about weather" or "verbs about law". We describe the model and an approximate inference algorithm and then demonstrate the quality of the learned topics both qualitatively and quantitatively. Then, we discuss an NLP application where the output of POSLDA can lead to strong improvements in quality: unsupervised part-of-speech tagging. We describe algorithms for this task that make use of POSLDA-learned distributions that result in improved performance beyond the state of the art.
|
Probabilistic Topic and Syntax Modeling with Part-of-Speech LDA
| 1,574
|
SYNTAGMA is a rule-based parsing system, structured on two levels: a general parsing engine and a language-specific grammar. The parsing engine is a language-independent program, while grammar and language-specific rules and resources are given as text files, consisting of a list of constituent structures and a lexical database with word-sense-related features and constraints. Since its theoretical background is principally Tesniere's Elements de syntaxe, SYNTAGMA's grammar emphasizes the role of argument structure (valency) in constraint satisfaction, and also allows horizontal bounds, for instance in treating coordination. Notions such as Pro, traces and empty categories are derived from Generative Grammar, and some solutions are close to Government & Binding Theory, although they are the result of autonomous research. These properties allow SYNTAGMA to manage complex syntactic configurations and well-known weak points in parsing engineering. An important resource is the semantic network, which is used in disambiguation tasks. The parsing process follows a bottom-up, rule-driven strategy. Its behavior can be controlled and fine-tuned.
|
SYNTAGMA. A Linguistic Approach to Parsing
| 1,575
|
Computers still have a long way to go before they can interact with users in a truly natural fashion. From a user's perspective, the most natural way to interact with a computer would be through a speech and gesture interface. Although speech recognition has made significant advances in the past ten years, gesture recognition has been lagging behind. Sign Languages (SL) are the most accomplished forms of gestural communication. Their automatic analysis is therefore a real challenge, one closely tied to their lexical and syntactic levels of organization. Statements in sign language are of significant interest in the Automatic Natural Language Processing (ANLP) domain. In this work, we deal with sign language recognition, in particular French Sign Language (FSL). FSL has its own specificities, such as the simultaneity of several parameters, the important role of facial expression and movement, and the use of space for proper utterance organization. Unlike speech, FSL events occur both sequentially and simultaneously. Thus, the computational processing of FSL is more complex than that of spoken languages. We present a novel approach based on HMMs to reduce the recognition complexity.
|
Extension of hidden markov model for recognizing large vocabulary of
sign language
| 1,576
|
Our day-to-day life has always been influenced by what people think. Ideas and opinions of others have always affected our own opinions. The explosion of Web 2.0 has led to increased activity in podcasting, blogging, tagging, contributing to RSS, social bookmarking, and social networking. As a result there has been an eruption of interest in mining these vast resources of data for opinions. Sentiment Analysis or Opinion Mining is the computational treatment of opinions, sentiments and subjectivity of text. In this report, we take a look at the various challenges and applications of Sentiment Analysis. We discuss in detail various approaches to a computational treatment of sentiments and opinions. Various supervised and data-driven techniques for SA, such as Naïve Bayes, Maximum Entropy, SVMs, and Voted Perceptrons, are discussed, and their strengths and drawbacks are touched upon. We also look at a new dimension of analyzing sentiments through Cognitive Psychology, mainly through the work of Janyce Wiebe, covering ways to detect subjectivity, perspective in narrative, and the understanding of discourse structure. We also study some specific topics in Sentiment Analysis and the contemporary works in those areas.
|
Sentiment Analysis : A Literature Survey
| 1,577
|
In the geolocation field, where high-level programs and low-level devices coexist, it is often difficult to find a friendly user interface to configure all the parameters. The challenge addressed in this paper is to propose intuitive and simple, thus natural language, interfaces to interact with low-level devices. Such interfaces contain natural language processing and fuzzy representations of words that facilitate the elicitation of business-level objectives in our context.
|
Dealing with natural language interfaces in a geolocation context
| 1,578
|
Word ambiguity removal is the task of removing ambiguity from a word, i.e. identifying the correct sense of a word in ambiguous sentences. This paper describes a model that uses a part-of-speech tagger and three categories for word sense disambiguation (WSD). Resolving such ambiguity is important for improving interactions between users and computers. To this end, supervised and unsupervised methods are combined. The WSD algorithm is used to find the efficient and accurate sense of a word based on domain information. The accuracy of this work is evaluated with the aim of finding the best suitable domain of a word.
|
An Improved Approach for Word Ambiguity Removal
| 1,579
|
TimeML is an XML-based schema for annotating temporal information over discourse. The standard has been used to annotate a variety of resources and is followed by a number of tools, the creation of which constitute hundreds of thousands of man-hours of research work. However, the current state of resources is such that many are not valid, or do not produce valid output, or contain ambiguous or custom additions and removals. Difficulties arising from these variances were highlighted in the TempEval-3 exercise, which included its own extra stipulations over conventional TimeML as a response. To unify the state of current resources, and to make progress toward easy adoption of its current incarnation ISO-TimeML, this paper introduces TimeML-strict: a valid, unambiguous, and easy-to-process subset of TimeML. We also introduce three resources -- a schema for TimeML-strict; a validator tool for TimeML-strict, so that one may ensure documents are in the correct form; and a repair tool that corrects common invalidating errors and adds disambiguating markup in order to convert documents from the laxer TimeML standard to TimeML-strict.
|
TimeML-strict: clarifying temporal annotation
| 1,580
|
This paper describes a temporal expression identification and normalization system, ManTIME, developed for the TempEval-3 challenge. The identification phase combines the use of conditional random fields along with a post-processing identification pipeline, whereas the normalization phase is carried out using NorMA, an open-source rule-based temporal normalizer. We investigate the performance variation with respect to different feature types. Specifically, we show that the use of WordNet-based features in the identification task negatively affects the overall performance, and that there is no statistically significant difference in using gazetteers, shallow parsing and propositional noun phrases labels on top of the morphological features. On the test data, the best run achieved 0.95 (P), 0.85 (R) and 0.90 (F1) in the identification phase. Normalization accuracies are 0.84 (type attribute) and 0.77 (value attribute). Surprisingly, the use of the silver data (alone or in addition to the gold annotated ones) does not improve the performance.
|
ManTIME: Temporal expression identification and normalization in the
TempEval-3 challenge
| 1,581
|
We consider the unsupervised alignment of the full text of a book with a human-written summary. This presents challenges not seen in other text alignment problems, including a disparity in length and, consequent to this, a violation of the expectation that individual words and phrases should align, since large passages and chapters can be distilled into a single summary phrase. We present two new methods, based on hidden Markov models, specifically targeted to this problem, and demonstrate gains on an extractive book summarization task. While there is still much room for improvement, unsupervised alignment holds intrinsic value in offering insight into what features of a book are deemed worthy of summarization.
|
New Alignment Methods for Discriminative Book Summarization
| 1,582
|
The project presented in this article aims to formalize criteria and procedures for extracting semantic information from parsed dictionary glosses. The actual purpose of the project is the generation of a semantic network (nearly an ontology) from a monolingual Italian dictionary, through unsupervised procedures. Since the project involves rule-based parsing, semantic tagging and word sense disambiguation techniques, its outcomes may also be of interest beyond this immediate intent. The cooperation of both syntactic and semantic features in meaning construction is investigated, and procedures which allow a translation of syntactic dependencies into semantic relations are discussed. The procedures that arise from this project can also be applied to text types other than dictionary glosses, as they convert the output of a parsing process into a semantic representation. In addition, some mechanisms are sketched that may lead to a kind of procedural semantics, through which multiple paraphrases of a given expression can be generated. This means that these techniques may also find an application in query expansion strategies, of interest to Information Retrieval, Search Engines and Question Answering systems.
|
Rule-Based Semantic Tagging. An Application Undergoing Dictionary
Glosses
| 1,583
|
Chinese word segmentation is a fundamental task for Chinese language processing. The granularity mismatch problem is the main cause of segmentation errors. This paper shows that a binary tree representation can store outputs of different granularities. A binary tree based framework is also designed to overcome the granularity mismatch problem. There are two steps in this framework, namely tree building and tree pruning. The tree pruning step is specially designed to address the granularity problem. Previous work on Chinese word segmentation, such as sequence tagging, can be easily employed in this framework. The framework can also provide quantitative error analysis methods. The experiments showed that, after using a more sophisticated tree pruning function for a state-of-the-art conditional random field based baseline, the error reduction can be up to 20%.
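A hedged illustration of how a binary tree can hold segmentations of several granularities at once, with the pruning depth selecting the output granularity; the node structure and the toy example are illustrative, not the paper's exact design.

```python
# Binary tree holding coarse-to-fine segmentations of a span.
class SegNode:
    def __init__(self, text, left=None, right=None):
        self.text = text      # the character span covered by this node
        self.left = left      # finer-grained left part (None for a leaf)
        self.right = right    # finer-grained right part

    def segmentation(self, max_depth):
        """Read off a segmentation at a chosen granularity: descend at most
        max_depth levels, emitting pruned nodes as output words."""
        if max_depth == 0 or self.left is None:
            return [self.text]
        return (self.left.segmentation(max_depth - 1)
                + self.right.segmentation(max_depth - 1))

# Example: the coarse word "ABCD" may be kept whole or pruned into "AB" + "CD".
tree = SegNode("ABCD", SegNode("AB"), SegNode("CD"))
print(tree.segmentation(0))  # ['ABCD']      coarse granularity
print(tree.segmentation(1))  # ['AB', 'CD']  finer granularity
```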
|
Binary Tree based Chinese Word Segmentation
| 1,584
|
Conceptual combination performs a fundamental role in creating the broad range of compound phrases utilized in everyday language. This article provides a novel probabilistic framework for assessing whether the semantics of conceptual combinations are compositional, and so can be considered as a function of the semantics of the constituent concepts, or not. While the systematicity and productivity of language provide a strong argument in favor of assuming compositionality, this very assumption is still regularly questioned in both cognitive science and philosophy. Additionally, the principle of semantic compositionality is underspecified, which means that notions of both "strong" and "weak" compositionality appear in the literature. Rather than adjudicating between different grades of compositionality, the framework presented here contributes formal methods for determining a clear dividing line between compositional and non-compositional semantics. In addition, we suggest that the distinction between these is contextually sensitive. Utilizing formal frameworks developed for analyzing composite systems in quantum theory, we present two methods that allow the semantics of conceptual combinations to be classified as "compositional" or "non-compositional". Compositionality is first formalised by factorising the joint probability distribution modeling the combination, where the terms in the factorisation correspond to individual concepts. This leads to the necessary and sufficient condition for the joint probability distribution to exist. A failure to meet this condition implies that the underlying concepts cannot be modeled in a single probability space when considering their combination, and the combination is thus deemed "non-compositional". The formal analysis methods are demonstrated by applying them to an empirical study of twenty-four non-lexicalised conceptual combinations.
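One way to make the factorisation criterion concrete, as a hedged sketch since the paper's exact formulation may differ, is to require that the joint distribution over the interpretations of the two constituent concepts factorise through a shared latent variable:

```latex
% Hedged sketch of a factorisation criterion for a two-concept combination:
% the combination of concepts A and B is treated as compositional if a joint
% distribution over their interpretations (a, b) exists that factorises as
P(A = a,\, B = b) \;=\; \sum_{\lambda} P(\lambda)\, P(A = a \mid \lambda)\, P(B = b \mid \lambda),
% with each conditional factor depending on a single constituent concept.
% If no such joint distribution can reproduce the observed data, the
% combination is classified as non-compositional.
```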
|
A probabilistic framework for analysing the compositionality of
conceptual combinations
| 1,585
|
We describe an inventory of semantic relations that are expressed by prepositions. We define these relations by building on the word sense disambiguation task for prepositions and propose a mapping from preposition senses to the relation labels by collapsing semantically related senses across prepositions.
|
An Inventory of Preposition Relations
| 1,586
|
Conventional statistics-based methods for joint Chinese word segmentation and part-of-speech tagging (S&T) have the generalization ability to recognize new words that do not appear in the training data. An undesirable side effect is that a number of meaningless words will be incorrectly created. We propose an effective and efficient framework for S&T that introduces features to significantly reduce meaningless word generation. A general lexicon, Wikipedia and a large-scale raw corpus of 200 billion characters are used to generate word-based features for wordhood. The word-lattice based framework consists of a character-based model and a word-based model in order to employ our word-based features. Experiments on Penn Chinese Treebank 5 show that this method achieves a 62.9% reduction in meaningless word generation in comparison with the baseline. As a result, the F1 measure for segmentation is increased to 0.984.
|
Reduce Meaningless Words for Joint Chinese Word Segmentation and
Part-of-speech Tagging
| 1,587
|
We live in a translingual society; in order to communicate with people from different parts of the world, we need expertise in their respective languages. Learning all these languages is not possible, therefore we need a mechanism which can do this task for us. Machine translators have emerged as a tool which can perform this task. In order to develop a machine translator we need to develop several different rules. The very first module in the machine translation pipeline is morphological analysis. Stemming and lemmatization come under morphological analysis. In this paper we have created a lemmatizer which applies rules for removing affixes, along with rules for restoring a proper root word.
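A minimal sketch of a suffix-stripping lemmatizer of the kind described, assuming a hand-written list of (suffix, replacement) rules; the rules shown are romanized placeholders, not the paper's actual Hindi rules.

```python
# Rule-based suffix-stripping lemmatizer sketch; the rule list is purely illustrative.
SUFFIX_RULES = [
    ("iyon", "i"),   # hypothetical: strip oblique plural ending, restore stem vowel
    ("on", ""),      # hypothetical oblique plural marker
    ("en", ""),      # hypothetical plural marker
]

def lemmatize(word, rules=SUFFIX_RULES):
    """Apply the first matching suffix rule: strip the affix and append the
    replacement to recover a proper root word."""
    for suffix, replacement in rules:
        if word.endswith(suffix) and len(word) > len(suffix):
            return word[: -len(suffix)] + replacement
    return word  # no rule matched; treat the word as already being a root
```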
|
Development of a Hindi Lemmatizer
| 1,588
|
We design a new co-occurrence based word association measure by incorporating the concept of significant co-occurrence into the popular word association measure Pointwise Mutual Information (PMI). By extensive experiments with a large number of publicly available datasets we show that the newly introduced measure performs better than other co-occurrence based measures and, despite being resource-light, compares well with the best known resource-heavy distributional similarity and knowledge based word association measures. We investigate the source of this performance improvement and find that, of the two types of significant co-occurrence - corpus-level and document-level - the concept of corpus-level significance combined with the use of document counts in place of word counts is responsible for all the performance gains observed. The concept of document-level significance is not helpful for PMI adaptation.
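For reference, PMI between words x and y is log p(x, y) / (p(x) p(y)). Below is a minimal sketch of the document-count variant highlighted above, computing PMI from document-level counts; the corpus-level significance filter of the full measure is not reproduced here, and the function name is illustrative.

```python
# PMI computed from document counts rather than word counts.
import math

def pmi_doc(count_xy, count_x, count_y, n_docs):
    """count_xy: number of documents in which x and y co-occur;
    count_x / count_y: numbers of documents containing x and y;
    n_docs: total number of documents in the corpus."""
    if count_xy == 0:
        return float("-inf")   # never co-occur: PMI undefined / minus infinity
    p_xy = count_xy / n_docs
    p_x = count_x / n_docs
    p_y = count_y / n_docs
    return math.log2(p_xy / (p_x * p_y))
```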
|
Improving Pointwise Mutual Information (PMI) by Incorporating
Significant Co-occurrence
| 1,589
|
Inferring evaluation scores based on human judgments is invaluable compared to using current evaluation metrics which are not suitable for real-time applications e.g. post-editing. However, these judgments are much more expensive to collect especially from expert translators, compared to evaluation based on indicators contrasting source and translation texts. This work introduces a novel approach for quality estimation by combining learnt confidence scores from a probabilistic inference model based on human judgments, with selective linguistic features-based scores, where the proposed inference model infers the credibility of given human ranks to solve the scarcity and inconsistency issues of human judgments. Experimental results, using challenging language-pairs, demonstrate improvement in correlation with human judgments over traditional evaluation metrics.
|
Intelligent Hybrid Man-Machine Translation Quality Estimation
| 1,590
|
Machine Translation for Indian languages is an emerging research area. Transliteration is one of the modules designed while building a translation system. Transliteration means mapping source language text into the target language. Simple mapping decreases the efficiency of the overall translation system. We propose the use of stemming and part-of-speech tagging for transliteration. The effectiveness of translation can be improved if we use part-of-speech tagging and stemming-assisted transliteration. We have shown that much of the content in Gujarati gets transliterated while being processed for translation into Hindi.
|
Improving the quality of Gujarati-Hindi Machine Translation through
part-of-speech tagging and stemmer-assisted transliteration
| 1,591
|
In this paper we present a Marathi part-of-speech tagger. Marathi is a morphologically rich language spoken by the native people of Maharashtra. The general approach used for developing the tagger is statistical, using the trigram method. The main idea of the trigram method is to find the most likely POS tag for a token given the previous two tags, calculating probabilities to determine the best tag sequence. In this paper we describe the development of the tagger and also report its evaluation.
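A minimal sketch of the trigram estimation step, assuming a tagged training corpus: transition counts for P(t_i | t_{i-2}, t_{i-1}) and emission counts for P(w_i | t_i) are collected from the data. Smoothing, unknown-word handling and the decoding step (e.g. Viterbi), which a full tagger would also need, are omitted, and all names are illustrative.

```python
# Trigram POS model estimation sketch.
from collections import Counter, defaultdict

def estimate_trigram(tagged_sents):
    trans = defaultdict(Counter)   # (t_{i-2}, t_{i-1}) -> counts of t_i
    emit = defaultdict(Counter)    # tag -> counts of emitted words
    for sent in tagged_sents:
        tags = ["<s>", "<s>"] + [t for _, t in sent]   # pad with start symbols
        for i, (word, tag) in enumerate(sent):
            trans[(tags[i], tags[i + 1])][tag] += 1
            emit[tag][word] += 1
    return trans, emit

def trigram_prob(trans, prev2, prev1, tag):
    """Maximum-likelihood estimate of P(tag | prev2, prev1)."""
    context = trans[(prev2, prev1)]
    total = sum(context.values())
    return context[tag] / total if total else 0.0
```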
|
Part of Speech Tagging of Marathi Text Using Trigram Method
| 1,592
|
Machine transliteration has emerged as an important research area in the field of machine translation. Transliteration basically aims to preserve the phonological structure of words. Proper transliteration of named entities plays a very significant role in improving the quality of machine translation. In this paper we perform machine transliteration for the English-Punjabi language pair using a rule-based approach. We have constructed rules for syllabification, the process of extracting or separating the syllables of a word. We calculate probabilities for named entities (proper names and locations). For words which do not fall under the category of named entities, separate probabilities are calculated using relative frequency through a statistical machine translation toolkit known as MOSES. Using these probabilities we transliterate our input text from English to Punjabi.
|
Rule Based Transliteration Scheme for English to Punjabi
| 1,593
|
The natural language processing area is still under active research, and it has now become a platform for researchers worldwide. Natural language processing includes analyzing a language based on its structure and then tagging each word appropriately with its grammatical category. Here we have a set of 50,000 tagged words, and we try to cluster these Gujarati words using a proposed algorithm; we have defined our own algorithm for this processing. Many clustering techniques are available, e.g. single linkage, complete linkage and average linkage. Here the number of clusters to be formed is not known in advance, so it depends on the type of data set provided. Clustering is a preprocessing step for stemming. Stemming is the process where the root is extracted from a word, e.g. cats = cat + s (cat: noun, plural form).
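As a hedged illustration only (the paper defines its own algorithm), the following sketch clusters word forms by average-linkage hierarchical clustering over a character-bigram Jaccard distance, one plausible way to group morphologically related forms before stemming; the tokens and the distance threshold are placeholders.

```python
# Average-linkage clustering of word forms by character-bigram similarity.
from itertools import combinations
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def bigrams(word):
    return {word[i:i + 2] for i in range(len(word) - 1)}

def distance(w1, w2):
    a, b = bigrams(w1), bigrams(w2)
    return 1.0 - len(a & b) / max(len(a | b), 1)   # Jaccard distance

words = ["cats", "cat", "catty", "dog", "dogs"]    # placeholder tokens
n = len(words)
dmat = np.zeros((n, n))
for i, j in combinations(range(n), 2):
    dmat[i, j] = dmat[j, i] = distance(words[i], words[j])

# Average linkage on the condensed distance matrix; cut at a chosen threshold.
labels = fcluster(linkage(squareform(dmat), method="average"),
                  t=0.5, criterion="distance")
print(dict(zip(words, labels)))
```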
|
Clustering Algorithm for Gujarati Language
| 1,594
|
Research in machine translation has been going on for the past 60 years, and a lot of new techniques are being developed in this field every day. As a result, we have witnessed the development of many automatic machine translators. The manager of a machine translation development project needs to know the increase or decrease in performance after changes have been made to the system. For this reason, a need for the evaluation of machine translation systems was felt. In this article, we present the evaluation of some machine translators. This evaluation is done by a human evaluator and by some automatic evaluation metrics, at sentence, document and system level. In the end we also discuss the comparison between the evaluations.
|
Human and Automatic Evaluation of English-Hindi Machine Translation
| 1,595
|
We develop a probabilistic latent-variable model to discover semantic frames---types of events and their participants---from corpora. We present a Dirichlet-multinomial model in which frames are latent categories that explain the linking of verb-subject-object triples, given document-level sparsity. We analyze what the model learns, and compare it to FrameNet, noting it learns some novel and interesting frames. This document also contains a discussion of inference issues, including concentration parameter learning; and a small-scale error analysis of syntactic parsing accuracy.
|
Learning Frames from Text with an Unsupervised Latent Variable Model
| 1,596
|
The problem of named entity recognition in the medical/clinical domain has gained increasing attention due to its vital role in a wide range of clinical decision support applications. The identification of complete and correct term spans is vital for further knowledge synthesis (e.g., coding/mapping concepts to thesauruses and classification standards). This paper investigates boundary adjustment through sequence labeling representation models and post-processing techniques in the problem of clinical named entity recognition (recognition of clinical events). Using a current state-of-the-art sequence labeling algorithm (conditional random fields), we show experimentally that sequence labeling representation and post-processing can be significantly helpful for strict boundary identification of clinical events.
|
Boundary identification of events in clinical named entity recognition
| 1,597
|
An object-oriented approach to creating a natural language understanding system is considered. The understanding program is a formal system built on the basis of predicate calculus. Horn clauses are used as well-formed formulas. Inference is based on the resolution principle. Sentences of natural language are represented as a set of typical predicates. These predicates describe physical objects and processes, abstract objects, categories and semantic relations between objects. Predicates for concrete assertions are saved in a database. To describe the semantics of classes of physical objects, abstract concepts and processes, a knowledge base is applied. The proposed representation of natural language sentences is a semantic net whose nodes are typical predicates. This approach is promising because, firstly, such typification of nodes greatly facilitates the creation of processing algorithms and object descriptions; secondly, the effectiveness of the algorithms is increased (particularly for a great number of nodes); and thirdly, encyclopedic knowledge is used to describe the semantics of words, which permits a substantial extension of the class of problems that can be solved.
|
Logical analysis of natural language semantics to solve the problem of
computer understanding
| 1,598
|
How many words are needed to define all the words in a dictionary? Graph-theoretic analysis reveals that about 10% of a dictionary is a unique Kernel of words that define one another and all the rest, but this is not the smallest such subset. The Kernel consists of one huge strongly connected component (SCC), about half its size, the Core, surrounded by many small SCCs, the Satellites. Core words can define one another but not the rest of the dictionary. The Kernel also contains many overlapping Minimal Grounding Sets (MGSs), each about the same size as the Core, each part-Core, part-Satellite. MGS words can define all the rest of the dictionary. They are learned earlier, more concrete and more frequent than the rest of the dictionary. Satellite words, not correlated with age or frequency, are less concrete (more abstract) words that are also needed for full lexical power.
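A minimal sketch of the graph construction behind this analysis, assuming a hypothetical definition_graph mapping each headword to the words used in its definition; networkx is used for illustration, and the Core/Satellite split simply takes the largest strongly connected component versus the rest.

```python
# Build the directed "defines" graph of a dictionary and split its strongly
# connected components into one large Core and the remaining small Satellites.
import networkx as nx

def core_and_satellites(definition_graph):
    g = nx.DiGraph()
    for headword, defining_words in definition_graph.items():
        for w in defining_words:
            g.add_edge(w, headword)          # w is used to define headword
    sccs = sorted(nx.strongly_connected_components(g), key=len, reverse=True)
    core, satellites = sccs[0], sccs[1:]     # largest SCC vs. the small ones
    return core, satellites
```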
|
Hidden Structure and Function in the Lexicon
| 1,599
|