Title: Embedding Data within Knowledge Spaces
|
Abstract: The promise of e-Science will only be realized when data is discoverable, accessible, and comprehensible within distributed teams, across disciplines, and over the long term, without reliance on out-of-band (non-digital) means. We have developed the open-source Tupelo semantic content management framework and are employing it to manage a wide range of e-Science entities (including data, documents, workflows, people, and projects) and a broad range of metadata (including provenance, social networks, geospatial relationships, temporal relations, and domain descriptions). Tupelo couples the use of global identifiers and Resource Description Framework (RDF) statements with an aggregatable content repository model to provide a unified space for securely managing distributed heterogeneous content and relationships.
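(The abstract does not show Tupelo's API; as a rough illustration of the global-identifier-plus-RDF-statement model it describes, here is a sketch using the independent rdflib Python library. The namespace and property names are invented for illustration.)

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import FOAF, RDF

# Hypothetical namespace for an e-Science project; not Tupelo's actual API.
EX = Namespace("http://example.org/escience/")

g = Graph()
dataset = EX["dataset/42"]                  # globally identified content
alice = EX["person/alice"]

g.add((alice, RDF.type, FOAF.Person))       # people ...
g.add((alice, FOAF.name, Literal("Alice")))
g.add((dataset, EX.derivedFrom, EX["workflow/7"]))  # ... and provenance
g.add((dataset, EX.createdBy, alice))

# Query the unified space of statements about the dataset:
for s, p, o in g.triples((dataset, None, None)):
    print(s, p, o)
```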
|
Title: Alleviating Media Bias Through Intelligent Agent Blogging
|
Abstract: Consumers of mass media must have a comprehensive, balanced and plural selection of news to get an unbiased perspective; but achieving this goal can be very challenging, laborious and time consuming. The development of news stories over time, their (in)consistency, and differing levels of coverage across media outlets are challenges that a conscientious reader has to overcome in order to alleviate bias. In this paper we present an intelligent agent framework currently facilitating analysis of the main sources of on-line news in El Salvador. We show how existing text analysis tools and Web 2.0 technologies can be combined with minimal manual intervention to help individuals in their rational decision process, while holding media outlets accountable for their work.
|
Title: Comparative concept similarity over Minspaces: Axiomatisation and Tableaux Calculus
|
Abstract: We study the logic of comparative concept similarity $\mathcal{CSL}$, introduced by Sheremet, Tishkovsky, Wolter and Zakharyaschev to capture a form of qualitative similarity comparison. In this logic we can formulate assertions of the form "objects A are more similar to B than to C". The semantics of this logic is defined by structures equipped with distance functions evaluating the similarity degree of objects. We consider here the particular case of the semantics induced by minspaces, the latter being distance spaces where the minimum of a set of distances always exists. It turns out that the semantics over arbitrary minspaces can be equivalently specified in terms of preferential structures, typical of conditional logics. We first give a direct axiomatisation of this logic over minspaces. We next define a decision procedure in the form of a tableaux calculus. Both the calculus and the axiomatisation take advantage of the reformulation of the semantics in terms of preferential structures.
|
Title: Directional Clustering Tests Based on Nearest Neighbor Contingency Tables
|
Abstract: Spatial interaction between two or more classes or species has important implications in various fields and causes multivariate patterns such as segregation or association. Segregation occurs when members of a class or species are more likely to be found near members of the same class or conspecifics, while association occurs when members of a class or species are more likely to be found near members of another class or species. The null patterns considered are random labeling (RL) and complete spatial randomness (CSR) of points from two or more classes, which is called CSR independence henceforth. The clustering tests based on nearest neighbor contingency tables (NNCTs) that are in use in the literature are two-sided tests. In this article, we consider the directional (i.e., one-sided) versions of the cell-specific NNCT-tests and introduce new directional NNCT-tests for the two-class case. We analyze the distributional properties of these tests and compare their empirical significance levels and empirical power estimates using extensive Monte Carlo simulations. We demonstrate that the new directional tests have comparable performance with the currently available NNCT-tests in terms of empirical size and power. We use four example data sets for illustrative purposes and provide guidelines for using these NNCT-tests.
|
Title: New Confidence Measures for Statistical Machine Translation
|
Abstract: A confidence measure is able to estimate the reliability of a hypothesis provided by a machine translation system. The problem of confidence estimation can be seen as a testing process: we want to decide whether the most probable sequence of words provided by the machine translation system is correct or not. In the following we describe several original word-level confidence measures for machine translation, based on mutual information, an n-gram language model and a lexical-features language model. We evaluate how well they perform individually or together, and show that using a combination of confidence measures based on mutual information yields a classification error rate as low as 25.1% with an F-measure of 0.708.
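(A minimal sketch of one word-level confidence measure of the kind described: each hypothesis word is scored by its n-gram language-model log-probability and thresholded. The LM interface, threshold and order below are illustrative assumptions; the paper's measures and their combination are not reproduced here.)

```python
def ngram_confidence(hypothesis, lm, threshold=-6.0, order=3):
    """Score each word of an MT hypothesis with its n-gram LM
    log-probability and flag it correct/incorrect by thresholding.

    `lm` is assumed to map context tuples to {word: log10_prob} dicts;
    a real system would query a toolkit such as KenLM or SRILM instead.
    """
    labels = []
    for i, word in enumerate(hypothesis):
        context = tuple(hypothesis[max(0, i - order + 1):i])
        logp = lm.get(context, {}).get(word, -99.0)  # unseen -> very low
        labels.append((word, logp, logp >= threshold))
    return labels

# Toy usage with a tiny fake LM over (up to) trigram contexts:
lm = {(): {"the": -1.0}, ("the",): {"cat": -1.5}, ("the", "cat"): {"sat": -2.0}}
print(ngram_confidence(["the", "cat", "sat"], lm))
```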
|
Title: A Model for Managing Collections of Patterns
|
Abstract: Data mining algorithms are now able to efficiently deal with huge amounts of data. Various kinds of patterns may be discovered and may have a great impact on the general development of knowledge. In many domains, end users may want to have their data mined by data mining tools in order to extract patterns that could impact their business. Nevertheless, those users are often overwhelmed by the large quantity of patterns extracted in such a situation. Moreover, privacy or commercial issues may prevent the users from mining the data by themselves, so they may not have the possibility to perform many experiments integrating various constraints in order to focus on the specific patterns they would like to extract. Post-processing of patterns may be an answer to that drawback. In this paper we therefore present a framework that allows end users to manage collections of patterns. We propose to use an efficient data structure on which algebraic operators may be applied in order to retrieve or access patterns in pattern bases.
|
Title: Discovering general partial orders in event streams
|
Abstract: Frequent episode discovery is a popular framework for pattern discovery in event streams. An episode is a partially ordered set of nodes, with each node associated with an event type. Efficient (and separate) algorithms exist for episode discovery when the associated partial order is total (serial episodes) and trivial (parallel episodes). In this paper, we propose efficient algorithms for discovering frequent episodes with general partial orders. These algorithms can be easily specialized to discover serial or parallel episodes. Also, the algorithms are flexible enough to be specialized for mining in the space of certain interesting subclasses of partial orders. We point out that there is an inherent combinatorial explosion in frequent partial order mining and that, most importantly, frequency alone is not a sufficient measure of interestingness. We propose a new interestingness measure for general partial order episodes and a discovery method based on this measure for filtering out uninteresting partial orders. Simulations demonstrate the effectiveness of our algorithms.
|
Title: Extraction de concepts sous contraintes dans des données d'expression de gènes (Constrained concept extraction from gene expression data)
|
Abstract: In this paper, we propose a technique to extract constrained formal concepts.
|
Title: Database Transposition for Constrained (Closed) Pattern Mining
|
Abstract: Recently, several works have proposed a new way to mine patterns in databases of pathological size. For example, experiments in genome biology usually provide databases with thousands of attributes (genes) but only tens of objects (experiments). In this case, mining the "transposed" database runs through a smaller search space, and the Galois connection allows one to infer the closed patterns of the original database. We focus here on constrained pattern mining for those unusual databases and give a theoretical framework for database and constraint transposition. We discuss the properties of constraint transposition and look into classical constraints. We then address the problem of generating the closed patterns of the original database satisfying the constraint, starting from those mined in the "transposed" database. Finally, we show how to generate all the patterns satisfying the constraint from the closed ones.
|
Title: Multi-Label Prediction via Compressed Sensing
|
Abstract: We consider multi-label prediction problems with large output spaces under the assumption of output sparsity -- that the target (label) vectors have small support. We develop a general theory for a variant of the popular error correcting output code scheme, using ideas from compressed sensing to exploit this sparsity. The method can be regarded as a simple reduction from multi-label regression problems to binary regression problems. We show that the number of subproblems need only be logarithmic in the total number of possible labels, making this approach radically more efficient than others. We also state and prove robustness guarantees for this method in the form of regret transform bounds (in general), and provide a more detailed analysis for the linear prediction setting.
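(A minimal sketch of the reduction described, assuming a Gaussian sensing matrix and orthogonal matching pursuit for decoding; the paper's exact encoding and decoding choices may differ.)

```python
import numpy as np

def omp(A, z, k):
    """Orthogonal matching pursuit: recover a k-sparse y with A @ y ~= z."""
    residual, support = z.copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], z, rcond=None)
        residual = z - A[:, support] @ coef
    y = np.zeros(A.shape[1])
    y[support] = coef
    return y

rng = np.random.default_rng(0)
L, m, k = 1000, 60, 3                         # labels, projections (~k log L), sparsity
A = rng.standard_normal((m, L)) / np.sqrt(m)  # random sensing matrix

# The reduction: train m real-valued regressors g_1..g_m so g_j(x) ~= (A @ y)_j,
# then decode g(x) back to a sparse label vector at test time. Here a noisy
# projection stands in for the regressors' output.
y_true = np.zeros(L); y_true[[3, 42, 777]] = 1.0
z_hat = A @ y_true + 0.01 * rng.standard_normal(m)
print(np.nonzero(omp(A, z_hat, k))[0])        # -> [  3  42 777] (w.h.p.)
```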
|
Title: Sparse partial least squares for on-line variable selection in multivariate data streams
|
Abstract: In this paper we propose a computationally efficient algorithm for on-line variable selection in multivariate regression problems involving high dimensional data streams. The algorithm recursively extracts all the latent factors of a partial least squares solution and selects the most important variables for each factor. This is achieved by means of a single sparse singular value decomposition, which can be efficiently updated on-line and in an adaptive fashion. Simulation results based on artificial data streams demonstrate that the algorithm is able to select important variables in dynamic settings where the correlation structure among the observed streams is governed by a few hidden components and the importance of each variable changes over time. We also report on an application of our algorithm to a multivariate version of the "enhanced index tracking" problem using financial data streams. The application consists of performing on-line asset allocation with the objective of outperforming two benchmark indices simultaneously.
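(The recursive on-line update is not reproduced here; as a static illustration of the sparse SVD step that drives the variable selection, the following is soft-thresholded power iteration for a sparse rank-1 SVD. The penalty and iteration scheme are illustrative assumptions.)

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding operator: shrink toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_rank1_svd(M, lam=0.1, iters=100, seed=0):
    """Rank-1 sparse SVD by alternating power iterations, with
    soft-thresholding on the right singular vector (the variable weights)."""
    u = np.random.default_rng(seed).standard_normal(M.shape[0])
    u /= np.linalg.norm(u)
    v = np.zeros(M.shape[1])
    for _ in range(iters):
        v = soft(M.T @ u, lam)
        if np.linalg.norm(v) == 0:      # penalty too aggressive
            break
        v /= np.linalg.norm(v)
        u = M @ v
        u /= np.linalg.norm(u)
    return u, v  # nonzero entries of v mark the selected variables
```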
|
Title: Using the Eigenvalue Relaxation for Binary Least-Squares Estimation Problems
|
Abstract: The goal of this paper is to survey the properties of the eigenvalue relaxation for least squares binary problems. This relaxation is a convex program which is obtained as the Lagrangian dual of the original problem with an implicit compact constraint and, as such, is a convex problem with polynomial time complexity. Moreover, as a main practical advantage of this relaxation over the standard semidefinite programming approach, several efficient bundle methods are available for this problem, allowing one to address problems of very large dimension. The necessary tools from convex analysis are recalled and shown at work for handling the problem of exactness of this relaxation. Two applications are described. The first one is the problem of binary image reconstruction and the second is the problem of multiuser detection in CDMA systems.
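(For orientation, a standard form of the eigenvalue bound for binary quadratic programs, stated here from textbook convex duality rather than copied from the paper:)

```latex
% Binary least squares, min_{x in {-1,1}^n} ||Ax - b||^2, is a binary
% quadratic program. Dualizing the constraints x_i^2 = 1 and using
% ||x||^2 = n gives the eigenvalue bound
\[
\min_{x\in\{-1,1\}^n} x^\top Q x
\;\ge\;
\max_{u\in\mathbb{R}^n}\; n\,\lambda_{\min}\!\left(Q+\operatorname{Diag}(u)\right) - \mathbf{1}^\top u ,
\]
% a concave maximization in u, which is what bundle methods solve at scale.
```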
|
Title: Hierarchical Bayesian Modeling of Hitting Performance in Baseball
|
Abstract: We have developed a sophisticated statistical model for predicting the hitting performance of Major League baseball players. The Bayesian paradigm provides a principled method for balancing past performance with crucial covariates, such as player age and position. We share information across time and across players by using mixture distributions to control shrinkage for improved accuracy. We compare the performance of our model to current sabermetric methods on a held-out season (2006), and discuss both successes and limitations.
|
Title: Optimal designs for dose-finding experiments in toxicity studies
|
Abstract: We construct optimal designs for estimating fetal malformation rate, prenatal death rate and an overall toxicity index in a toxicology study under a broad range of model assumptions. We use Weibull distributions to model these rates and assume that the number of implants depends on the dose level. We study properties of the optimal designs when the intra-litter correlation coefficient depends on the dose levels in different ways. Locally optimal designs are found, along with robustified versions of the designs that are less sensitive to misspecification of the initial values of the model parameters. We also report efficiencies of commonly used designs in toxicological experiments and efficiencies of the proposed optimal designs when the true rates have non-Weibull distributions. Optimal design strategies for finding multiple-objective designs in toxicology studies are outlined as well.
|
Title: Needles in the Haystack: Identifying Individuals Present in Pooled Genomic Data
|
Abstract: Recent publications have described and applied a novel metric that quantifies the genetic distance of an individual with respect to two population samples, and have suggested that the metric makes it possible to infer the presence of an individual of known genotype in a sample for which only the marginal allele frequencies are known. However, the assumptions, limitations, and utility of this metric remained incompletely characterized. Here we present an exploration of the strengths and limitations of that method. In addition to analytical investigations of the underlying assumptions, we use both real and simulated genotypes to test empirically the method's accuracy. The results reveal that, when used as a means by which to identify individuals as members of a population sample, the specificity is low in several circumstances. We find that the misclassifications stem from violations of assumptions that are crucial to the technique yet hard to control in practice, and we explore the feasibility of several methods to improve the sensitivity. Additionally, we find that the specificity may still be lower than expected even in ideal circumstances. However, despite the metric's inadequacies for identifying the presence of an individual in a sample, our results suggest potential avenues for future research on tuning this method to problems of ancestry inference or disease prediction. By revealing both the strengths and limitations of the proposed method, we hope to elucidate situations in which this distance metric may be used in an appropriate manner. We also discuss the implications of our findings in forensics applications and in the protection of GWAS participant privacy.
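(For orientation, the test statistic examined in this line of work, in the formulation of Homer et al. (2008) as we understand it; the notation is ours, not the paper's:)

```latex
% Distance of individual genotype Y_j at SNP j to a reference population
% (allele frequency p_j) versus the pooled sample (frequency \hat p_j):
\[
D(Y_j) \;=\; \lvert Y_j - p_j \rvert \;-\; \lvert Y_j - \hat p_j \rvert ,
\]
% aggregated over SNPs; large positive totals suggest the individual is
% closer to the pool than to the reference, i.e., a putative member.
```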
|
Title: p-order rounded integer-valued autoregressive (RINAR(p)) process
|
Abstract: An extension of the RINAR(1) process for modelling discrete-time dependent counting processes is considered. The RINAR(p) model investigated here is a direct and natural extension of the real AR(p) model. Compared to classical INAR(p) models based on the thinning operator, the new models have several advantages: a simple innovation structure; autoregressive coefficients with arbitrary signs; possible negative values for the time series; possible negative values for the autocorrelation function. Conditions for the stationarity and ergodicity of the RINAR(p) model are given. For parameter estimation, we consider the least squares estimator and prove its consistency under a suitable identifiability condition. Simulation experiments as well as analyses of real data sets are carried out to assess the performance of the model.
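(A minimal simulation sketch of our reading of the RINAR(1) recursion, followed by a crude linear least-squares fit that ignores the rounding for illustration; the paper's estimator handles general p and the rounded conditional mean properly.)

```python
import numpy as np

def simulate_rinar1(alpha, lam, n, rng=None):
    """Simulate X_t = round(alpha * X_{t-1} + lam) + eps_t with integer
    innovations eps_t (here: centered, values in {-1, 0, 1})."""
    rng = rng or np.random.default_rng(0)
    x = np.zeros(n, dtype=int)
    for t in range(1, n):
        eps = rng.integers(-1, 2)            # integer-valued innovation
        x[t] = round(alpha * x[t - 1] + lam) + eps
    return x

x = simulate_rinar1(alpha=-0.5, lam=3.0, n=500)   # negative coefficient allowed
# Crude least-squares estimate of (alpha, lam) from (X_{t-1}, X_t) pairs:
Xlag = np.column_stack([x[:-1], np.ones(len(x) - 1)])
alpha_hat, lam_hat = np.linalg.lstsq(Xlag, x[1:], rcond=None)[0]
print(alpha_hat, lam_hat)
```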
|
Title: Improvements of real coded genetic algorithms based on differential operators preventing premature convergence
|
Abstract: This paper presents several types of evolutionary algorithms (EAs) used for global optimization on real domains. The interest is focused on multimodal problems, where the difficulty of premature convergence usually occurs. First, the standard genetic algorithm (SGA) using binary encoding of real values is briefly reviewed, together with its unsatisfactory behavior on multimodal problems and some improvements for fighting premature convergence. Two types of real-encoded methods based on differential operators are examined in detail: differential evolution (DE), a very modern and effective method first published by R. Storn and K. Price, and the simplified real-coded differential genetic algorithm SADE proposed by the authors. In addition, an improvement of the SADE method, called CERAF technology, enabling the population of solutions to escape from local extremes, is examined. All methods are tested on an identical set of objective functions and a systematic comparison based on a reliable methodology is presented. It is confirmed that real-coded methods generally exhibit better behavior on real domains than binary algorithms, even when the latter are extended by several improvements. Furthermore, the positive influence of the differential operators, due to their capacity for self-adaptation, is demonstrated. From the reliability point of view, it seems that the real-encoded differential algorithm, improved by the technology described in this paper, is a universal and reliable method capable of solving all the proposed test problems.
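(For concreteness, a minimal DE/rand/1/bin sketch in the spirit of Storn and Price; the parameter values below are common defaults, not those used in the paper, and SADE/CERAF are not shown.)

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=30, F=0.8, CR=0.9,
                           generations=200, seed=0):
    """Minimal DE/rand/1/bin: differential mutation, binomial crossover,
    greedy one-to-one selection."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([f(p) for p in pop])
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i],
                                     size=3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)   # differential mutation
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True             # ensure one gene crosses
            trial = np.where(cross, mutant, pop[i])
            if (ft := f(trial)) < fit[i]:               # greedy selection
                pop[i], fit[i] = trial, ft
    return pop[np.argmin(fit)], fit.min()

best, val = differential_evolution(lambda x: np.sum(x**2), [(-5, 5)] * 4)
print(best, val)   # converges toward the origin on this unimodal test
```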
|
Title: A competitive comparison of different types of evolutionary algorithms
|
Abstract: This paper presents a comparison of several stochastic optimization algorithms developed by the authors in their previous works for the solution of some problems arising in civil engineering. The optimization methods introduced are: the integer augmented simulated annealing (IASA), the real-coded augmented simulated annealing (RASA), differential evolution (DE) in its original fashion developed by R. Storn and K. Price, and the simplified real-coded differential genetic algorithm (SADE). Each of these methods was developed for a specific optimization problem, namely the Chebychev trial polynomial problem, the so-called type 0 function, and two engineering problems: the reinforced concrete beam layout and the periodic unit cell problem, respectively. Detailed and extensive numerical tests were performed to examine the stability and efficiency of the proposed algorithms. The results of our experiments suggest that the performance and robustness of the RASA, IASA and SADE methods are comparable, while the DE algorithm performs slightly worse. This fact, together with its small number of internal parameters, promotes the SADE method as the most robust for practical use.
|
Title: Back analysis of microplane model parameters using soft computing methods
|
Abstract: A new procedure for the identification of microplane material model parameters, based on layered feed-forward neural networks, is proposed in the present paper. Its novelties are the use of the Latin Hypercube Sampling method for the generation of training sets, the systematic employment of stochastic sensitivity analysis, and the training of a neural network by a genetic algorithm. Advantages and disadvantages of this approach, together with possible extensions, are thoroughly discussed and analyzed.
|
Title: Risk bounds in linear regression through PAC-Bayesian truncation
|
Abstract: We consider the problem of predicting as well as the best linear combination of d given functions in least squares regression, and variants of this problem including constraints on the parameters of the linear combination. When the input distribution is known, there already exists an algorithm having an expected excess risk of order d/n, where n is the size of the training data. Without this strong assumption, standard results often contain a multiplicative log n factor and require additional assumptions, such as uniform boundedness of the d-dimensional input representation and exponential moments of the output. This work provides new risk bounds for the ridge estimator and the ordinary least squares estimator, and their variants. It also provides shrinkage procedures with convergence rate d/n (i.e., without the logarithmic factor) in expectation and in deviations, under various assumptions. The surprising feature common to these results is the absence of an exponential moment condition on the output distribution while achieving exponential deviations. All risk bounds are obtained through a PAC-Bayesian analysis on truncated differences of losses. Finally, we show that some of these results are not particular to the least squares loss, but can be generalized to similar strongly convex loss functions.
|
Title: Optimal Probabilistic Ring Exploration by Asynchronous Oblivious Robots
|
Abstract: We consider a team of $k$ identical, oblivious, asynchronous mobile robots that are able to sense (i.e., view) their environment, yet are unable to communicate, and evolve on a constrained path. Previous results in this weak scenario show that initial symmetry yields high lower bounds when problems are to be solved by robots. In this paper, we initiate research on probabilistic bounds and solutions in this context, and focus on the problem of exploring anonymous unoriented rings of any size. It is known that $\Theta(\log n)$ deterministic robots are necessary and sufficient to solve the problem, provided that $k$ and $n$ are coprime. By contrast, we show that a constant number of identical probabilistic robots is necessary and sufficient to solve the same problem, also removing the coprime constraint. Our positive results are constructive.
|
Title: Topological Centrality and Its Applications
|
Abstract: Recent developments in network structure analysis show that it plays an important role in characterizing complex systems in many branches of science. Departing from previous network centrality measures, this paper proposes the notion of topological centrality (TC), reflecting the topological positions of nodes and edges in general networks, and proposes an approach to calculating it. The proposed topological centrality is then used to discover communities and build the backbone network. Experiments and applications on a research network show the significance of the proposed approach.
|
Title: Sparse Conformal Predictors
|
Abstract: Conformal predictors, introduced by Vovk et al. (2005), serve to build prediction intervals by exploiting a notion of conformity of the new data point with previously observed data. In the present paper, we propose a novel method for constructing prediction intervals for the response variable in multivariate linear models. The main emphasis is on sparse linear models, where only a few of the covariates have significant influence on the response variable, even if their total number is very large. Our approach is based on combining the principle of conformal prediction with the $\ell_1$ penalized least squares estimator (LASSO). The resulting confidence set depends on a parameter $\epsilon>0$ and has a coverage probability larger than or equal to $1-\epsilon$. The numerical experiments reported in the paper show that the length of the confidence set is small. Furthermore, as a by-product of the proposed approach, we provide a data-driven procedure for choosing the LASSO penalty. The selection power of the method is illustrated on simulated data.
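(A hedged sketch of the idea: the paper combines conformal prediction with the LASSO; the simpler *split*-conformal variant below conveys how conformity scores around a LASSO fit yield an interval with coverage at least 1-eps under exchangeability.)

```python
import numpy as np
from sklearn.linear_model import Lasso  # LASSO point predictor

def split_conformal_lasso(X, y, X_new, eps=0.1, alpha=0.1, seed=0):
    """Split-conformal prediction intervals around a LASSO fit."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    train, calib = idx[: len(y) // 2], idx[len(y) // 2:]
    model = Lasso(alpha=alpha).fit(X[train], y[train])
    scores = np.abs(y[calib] - model.predict(X[calib]))   # conformity scores
    k = int(np.ceil((1 - eps) * (len(calib) + 1))) - 1    # (1-eps) quantile rank
    q = np.sort(scores)[min(k, len(calib) - 1)]
    mu = model.predict(X_new)
    return mu - q, mu + q   # coverage >= 1 - eps under exchangeability
```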
|
Title: On the usefulness of Meyer wavelets for deconvolution and density estimation
|
Abstract: The aim of this paper is to show the usefulness of Meyer wavelets for the classical problem of density estimation and for density deconvolution from noisy observations. With such wavelets, the computation of the empirical wavelet coefficients relies on the fast Fourier transform of the data and on the fact that Meyer wavelets are band-limited functions. This makes such estimators very simple to compute and avoids the problem of evaluating wavelets at non-dyadic points, which is the main drawback of classical wavelet-based density estimators. Our approach is based on term-by-term thresholding of the empirical wavelet coefficients, with random thresholds depending on an estimate of the variance of each coefficient. Such estimators are shown to achieve the same performance as an oracle estimator, up to a logarithmic term. These estimators also achieve near-minimax rates of convergence over a large class of Besov spaces. A simulation study shows the good finite-sample performance of the estimator for both direct density estimation and density deconvolution.
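(A minimal sketch of the generic term-by-term thresholding rule described; the constant and the variance estimator used in the paper may differ from the textbook choice below.)

```python
import numpy as np

def term_by_term_threshold(coeffs, var_hat, n):
    """Hard-threshold empirical wavelet coefficients with random,
    coefficient-dependent thresholds t_k = sqrt(2 * var_k * log n),
    where var_hat estimates each coefficient's variance."""
    t = np.sqrt(2.0 * np.asarray(var_hat) * np.log(n))
    coeffs = np.asarray(coeffs)
    return np.where(np.abs(coeffs) > t, coeffs, 0.0)
```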
|
Title: A List of Household Objects for Robotic Retrieval Prioritized by People with ALS (Version 092008)
|
Abstract: This technical report is designed to serve as a citable reference for the original prioritized object list that the Healthcare Robotics Lab at Georgia Tech released on its website in September of 2008. It is also expected to serve as the primary citable reference for the research associated with this list until the publication of a detailed, peer-reviewed paper. The original prioritized list of object classes resulted from a needs assessment involving 8 motor-impaired patients with amyotrophic lateral sclerosis (ALS) and targeted, in-person interviews of 15 motor-impaired ALS patients. All of these participants were drawn from the Emory ALS Center. The prioritized object list consists of 43 object classes ranked by how important the participants considered each class to be for retrieval by an assistive robot. We intend for this list to be used by researchers to inform the design and benchmarking of robotic systems, especially research related to autonomous mobile manipulation.
|
Title: A Standalone Markerless 3D Tracker for Handheld Augmented Reality
|
Abstract: This paper presents an implementation of a markerless tracking technique targeted at the Windows Mobile Pocket PC platform. The primary aim of this work is to allow the development of standalone augmented reality applications for handheld devices based on natural feature tracking. In order to achieve this goal, a subset of two computer vision libraries was ported to the Pocket PC platform. They were also adapted to use fixed-point math, with the purpose of improving the overall performance of the routines. The port of these libraries opens up the possibility of having other computer vision tasks executed on mobile platforms. A model-based tracking approach that relies on edge information was adopted. Since it does not require high processing power, it is suitable for constrained devices such as handhelds. The OpenGL ES graphics library was used to perform computer vision tasks, taking advantage of existing graphics hardware acceleration. An augmented reality application was created using the implemented technique, and evaluations were done regarding tracking performance and accuracy.
|
Title: Feature Hashing for Large Scale Multitask Learning
|
Abstract: Empirical evidence suggests that hashing is an effective strategy for dimensionality reduction and practical nonparametric estimation. In this paper we provide exponential tail bounds for feature hashing and show that the interaction between random subspaces is negligible with high probability. We demonstrate the feasibility of this approach with experimental results for a new use case -- multitask learning with hundreds of thousands of tasks.
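(A minimal sketch of the hashing trick with a signed hash, whose collisions cancel in expectation; the hash function and task-prefixing scheme are illustrative assumptions, not the paper's implementation.)

```python
import hashlib
import numpy as np

def hash_features(tokens, dim=2**20, task=""):
    """Map (optionally task-prefixed) tokens into a fixed dim-dimensional
    vector: one hash picks the bucket, a second bit picks the sign, so
    colliding features cancel in expectation."""
    x = np.zeros(dim)
    for tok in tokens:
        h = int(hashlib.md5(f"{task}|{tok}".encode()).hexdigest(), 16)
        x[h % dim] += 1.0 if (h >> 127) & 1 else -1.0   # signed update
    return x

# Multitask use: prefixing features with a task id gives each task its
# own view of a single shared weight space.
x_user42 = hash_features(["spam", "viagra"], task="user42")
```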
|
Title: BagPack: A general framework to represent semantic relations
|
Abstract: We introduce a way to represent word pairs instantiating arbitrary semantic relations that keeps track of the contexts in which the words in the pair occur both together and independently. The resulting features are of sufficient generality to allow us, with the help of a standard supervised machine learning algorithm, to tackle a variety of unrelated semantic tasks with good results and almost no task-specific tailoring.
|
Title: Multiple testing via successive subdivision
|
Abstract: A sequential multiple testing procedure recently introduced by Heinrich, Bach and Kornmeier makes it possible to "zoom in" on, and thus identify, regions with highly significant departures from null hypotheses. The purpose of this note is to state a cognate of this procedure in general form and to prove that it controls the familywise error. Two possible applications are briefly indicated.
|
Title: What's in a Message?
|
Abstract: In this paper we present the first step in a larger series of experiments for the induction of predicate/argument structures. The structures that we are inducing are very similar to the conceptual structures used in Frame Semantics (such as FrameNet). These structures are called messages, and they were previously used in the context of a multi-document summarization system for evolving events. The series of experiments that we are proposing is essentially composed of two stages. In the first stage we try to extract a representative vocabulary of words. This vocabulary is then used in the second stage, during which we apply various clustering approaches to it in order to identify the clusters of predicates and arguments--or frames and semantic roles, to use the jargon of Frame Semantics. This paper presents in detail and evaluates the first stage.
|
Title: XML Representation of Constraint Networks: Format XCSP 2.1
|
Abstract: We propose a new extended format to represent constraint networks using XML. This format allows us to represent constraints defined either in extension or in intension. It also allows us to reference global constraints. Any instance of the problems CSP (Constraint Satisfaction Problem), QCSP (Quantified CSP) and WCSP (Weighted CSP) can be represented using this format.
|
Title: Object Classification by means of Multi-Feature Concept Learning in a Multi Expert-Agent System
|
Abstract: Classifying objects into classes of concepts is an essential, and often challenging, task in many applications. A solution based on multi-agent systems is discussed here. A kernel of expert agents, each specialized in several classes, is consulted by a central agent to decide the classification of a given object. This kernel is moderated by the central agent, which manages the querying agents for each decision problem by means of a data-header-like feature set. Agents cooperate on concepts related to the classes involved in the classification decision, and may affect each other's results on a given query object in a multi-agent learning approach. This leads to online feature learning through the consulting process. The performance is shown to compare favorably with prior approaches: the system's message-passing overhead is reduced by involving fewer agents, and the agents' expertise improves the performance and operability of the system.
|
Title: Using SLP Neural Network to Persian Handwritten Digits Recognition
|
Abstract: This paper has been withdrawn by the author Ali Pourmohammad.
|
Title: Ultrametric Wavelet Regression of Multivariate Time Series: Application to Colombian Conflict Analysis
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.