Abstract: The emerging Web of Data utilizes the web infrastructure to represent and interrelate data. The foundational standards of the Web of Data include the Uniform Resource Identifier (URI) and the Resource Description Framework (RDF). URIs are used to identify resources and RDF is used to relate resources. While RDF has been posited as a logic language designed specifically for knowledge representation and reasoning, it is more generally useful if it can conveniently support other models of computing. In order to realize the Web of Data as a general-purpose medium for storing and processing the world's data, it is necessary to separate RDF from its logic language legacy and frame it simply as a data model. Moreover, there is significant advantage in seeing the Semantic Web as a particular interpretation of the Web of Data that is focused specifically on knowledge representation and reasoning. By doing so, other interpretations of the Web of Data are exposed that realize RDF in different capacities and in support of different computing models.
|
Title: Finding Anomalous Periodic Time Series: An Application to Catalogs of Periodic Variable Stars
|
Abstract: Catalogs of periodic variable stars contain large numbers of periodic light-curves (photometric time series data from the astrophysics domain). Separating anomalous objects from well-known classes is an important step towards the discovery of new classes of astronomical objects. Most anomaly detection methods for time series data assume either a single continuous time series or a set of time series whose periods are aligned. Light-curve data precludes the use of these methods as the periods of any given pair of light-curves may be out of sync. One may use an existing anomaly detection method if, prior to similarity calculation, one performs the costly act of aligning two light-curves, an operation that scales poorly to massive data sets. This paper presents PCAD, an unsupervised anomaly detection method for large sets of unsynchronized periodic time series data that outputs a ranked list of both global and local anomalies. It calculates its anomaly score for each light-curve in relation to a set of centroids produced by a modified k-means clustering algorithm. Our method is able to scale to large data sets through the use of sampling. We validate our method on both light-curve data and other time series data sets. We demonstrate its effectiveness at finding known anomalies, and discuss the effect of sample size and number of centroids on our results. We compare our method to naive solutions and existing time series anomaly detection methods for unphased data, and show that PCAD's reported anomalies are comparable to or better than those of all other methods. Finally, astrophysicists on our team have verified that PCAD finds true anomalies that might be indicative of novel astrophysical phenomena.
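The method's core loop can be sketched as follows. This is a hypothetical illustration of the idea (a phase-invariant distance driving a k-means-style loop with phase adjustment, and anomaly scores as distances to the nearest centroid), not the authors' PCAD code:

```python
import numpy as np

def phase_dist(a, b):
    """Minimum Euclidean distance over all circular shifts of b (phase-invariant)."""
    return min(np.linalg.norm(a - np.roll(b, s)) for s in range(len(b)))

def best_alignment(c, x):
    """Circularly shift x to best match centroid c."""
    s = min(range(len(x)), key=lambda s: np.linalg.norm(c - np.roll(x, s)))
    return np.roll(x, s)

def pcad_scores(X, k=2, iters=5, seed=0):
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        # assign each series to its nearest centroid under the shifted distance
        labels = np.array([min(range(k), key=lambda j: phase_dist(centroids[j], x))
                           for x in X])
        for j in range(k):
            members = X[labels == j]
            if len(members):
                # phase-adjust members before averaging, as in a modified k-means
                centroids[j] = np.mean([best_alignment(centroids[j], m)
                                        for m in members], axis=0)
    # anomaly score: distance to the nearest centroid
    return np.array([min(phase_dist(c, x) for c in centroids) for x in X])
```

On a toy set of shifted sinusoids plus one outlier, the outlier receives the largest score regardless of the phases of the normal series.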
|
Title: The sensitivity of linear regression coefficients' confidence limits to the omission of a confounder
|
Abstract: Omitted variable bias can affect treatment effect estimates obtained from observational data due to the lack of random assignment to treatment groups. Sensitivity analyses adjust these estimates to quantify the impact of potential omitted variables. This paper presents methods of sensitivity analysis to adjust interval estimates of treatment effect---both the point estimate and standard error---obtained using multiple linear regression. Central to our approach is what we term benchmarking, the use of data to establish reference points for speculation about omitted confounders. The method adapts to treatment effects that may differ by subgroup, to scenarios involving omission of multiple variables, and to combinations of covariance adjustment with propensity score stratification. We illustrate it using data from an influential study of health outcomes of patients admitted to critical care.
|
Title: Quantum Annealing for Clustering
|
Abstract: This paper studies quantum annealing (QA) for clustering, which can be seen as an extension of simulated annealing (SA). We derive a QA algorithm for clustering and propose an annealing schedule, which is crucial in practice. Experiments show the proposed QA algorithm finds better clustering assignments than SA. Furthermore, QA is as easy as SA to implement.
|
Title: Quantum Annealing for Variational Bayes Inference
|
Abstract: This paper presents studies on a deterministic annealing algorithm based on quantum annealing for variational Bayes (QAVB) inference, which can be seen as an extension of the simulated annealing for variational Bayes (SAVB) inference. QAVB is as easy as SAVB to implement. Experiments revealed QAVB finds a better local optimum than SAVB in terms of the variational free energy in latent Dirichlet allocation (LDA).
|
Title: Profiling of a network behind an infectious disease outbreak
|
Abstract: Stochasticity and spatial heterogeneity have recently attracted great interest in the study of infectious disease spread. The method presented here solves an inverse problem: from a dataset on an infectious disease outbreak in its early growth phase, it discovers the effectively decisive topology of a heterogeneous network and reveals the transmission parameters that govern stochastic spreads over that network. Populations in a combination of epidemiological compartment models and a meta-population network model are described by stochastic differential equations. Probability density functions are derived from the equations and used for maximum likelihood estimation of the topology and parameters. The method is tested with computationally synthesized datasets and the WHO dataset on the SARS outbreak.
|
Title: Coevolutionary Genetic Algorithms for Establishing Nash Equilibrium in Symmetric Cournot Games
|
Abstract: We use co-evolutionary genetic algorithms to model the players' learning process in several Cournot models, and evaluate them in terms of their convergence to the Nash Equilibrium. The "social-learning" versions of the two co-evolutionary algorithms we introduce establish Nash Equilibrium in those models, in contrast to the "individual learning" versions which, as we see here, do not imply the convergence of the players' strategies to the Nash outcome. When players use "canonical co-evolutionary genetic algorithms" as learning algorithms, the process of the game is an ergodic Markov Chain; we therefore analyze simulation results using both the relevant methodology and more general statistical tests. We find that in the "social" case, states leading to NE play are highly frequent at the stationary distribution of the chain, whereas in the "individual learning" case NE is not reached at all in our simulations; we find that the expected Hamming distance of the states at the limiting distribution from the "NE state" is significantly smaller in the "social" than in the "individual learning" case; we estimate the expected time that the "social" algorithms need to reach the "NE state" and verify their robustness; and finally we show that a large fraction of the games played are indeed at the Nash Equilibrium.
|
Title: Where are the really hard manipulation problems? The phase transition in manipulating the veto rule
|
Abstract: Voting is a simple mechanism to aggregate the preferences of agents. Many voting rules have been shown to be NP-hard to manipulate. However, a number of recent theoretical results suggest that this complexity may only be in the worst-case since manipulation is often easy in practice. In this paper, we show that empirical studies are useful in improving our understanding of this issue. We demonstrate that there is a smooth transition in the probability that a coalition can elect a desired candidate using the veto rule as the size of the manipulating coalition increases. We show that a rescaled probability curve displays a simple and universal form independent of the size of the problem. We argue that manipulation of the veto rule is asymptotically easy for many independent and identically distributed votes even when the coalition of manipulators is critical in size. Based on this argument, we identify a situation in which manipulation is computationally hard. This is when votes are highly correlated and the election is "hung". We show, however, that even a single uncorrelated voter is enough to make manipulation easy again.
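To illustrate why manipulating the veto rule is often easy in practice, here is a hedged sketch (not from the paper): under veto, the winner is the candidate with fewest vetoes, and a coalition trying to elect `p` can greedily direct each manipulator's veto at `p`'s currently strongest rival. This leveling strategy is optimal for veto; ties are assumed broken in `p`'s favour.

```python
def coalition_can_elect(p, vetoes, m):
    """vetoes: dict candidate -> vetoes cast by sincere voters;
    m: coalition size.  Assumes ties are broken in p's favour."""
    counts = dict(vetoes)
    for _ in range(m):
        # veto the rival that currently has the fewest vetoes
        rival = min((c for c in counts if c != p), key=lambda c: counts[c])
        counts[rival] += 1
    return all(counts[p] <= counts[c] for c in counts if c != p)
```

For instance, with sincere vetoes {p: 2, a: 0, b: 1}, three manipulators suffice to level every rival up to p's count, while two do not.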
|
Title: Decompositions of All Different, Global Cardinality and Related Constraints
|
Abstract: We show that some common and important global constraints like ALL-DIFFERENT and GCC can be decomposed into simple arithmetic constraints on which we achieve bound or range consistency, and in some cases even greater pruning. These decompositions can be easily added to new solvers. They also provide other constraints with access to the state of the propagator by sharing of variables. Such sharing can be used to improve propagation between constraints. We report experiments with our decomposition in a pseudo-Boolean solver.
|
Title: Circuit Complexity and Decompositions of Global Constraints
|
Abstract: We show that tools from circuit complexity can be used to study decompositions of global constraints. In particular, we study decompositions of global constraints into conjunctive normal form with the property that unit propagation on the decomposition enforces the same level of consistency as a specialized propagation algorithm. We prove that a constraint propagator has a polynomial size decomposition if and only if it can be computed by a polynomial size monotone Boolean circuit. Lower bounds on the size of monotone Boolean circuits thus translate to lower bounds on the size of decompositions of global constraints. For instance, we prove that there is no polynomial sized decomposition of the domain consistency propagator for the ALLDIFFERENT constraint.
|
Title: Scenario-based Stochastic Constraint Programming
|
Abstract: To model combinatorial decision problems involving uncertainty and probability, we extend the stochastic constraint programming framework proposed in [Walsh, 2002] along a number of important dimensions (e.g. to multiple chance constraints and to a range of new objectives). We also provide a new (but equivalent) semantics based on scenarios. Using this semantics, we can compile stochastic constraint programs down into conventional (nonstochastic) constraint programs. This allows us to exploit the full power of existing constraint solvers. We have implemented this framework for decision making under uncertainty in stochastic OPL, a language which is based on the OPL constraint modelling language [Hentenryck et al., 1999]. To illustrate the potential of this framework, we model a wide range of problems in areas as diverse as finance, agriculture and production.
|
Title: Reasoning about soft constraints and conditional preferences: complexity results and approximation techniques
|
Abstract: Many real life optimization problems contain both hard and soft constraints, as well as qualitative conditional preferences. However, there is no single formalism to specify all three kinds of information. We therefore propose a framework, based on both CP-nets and soft constraints, that handles both hard and soft constraints as well as conditional preferences efficiently and uniformly. We study the complexity of testing the consistency of preference statements, and show how soft constraints can faithfully approximate the semantics of conditional preference statements whilst improving the computational complexity.
|
Title: Multiset Ordering Constraints
|
Abstract: We identify a new and important global (or non-binary) constraint. This constraint ensures that the values taken by two vectors of variables, when viewed as multisets, are ordered. This constraint is useful for a number of different applications including breaking symmetry and fuzzy constraint satisfaction. We propose and implement an efficient linear time algorithm for enforcing generalised arc consistency on such a multiset ordering constraint. Experimental results on several problem domains show considerable promise.
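For reference, the ordering itself is simple to test naively (this sketch is a checker, not the linear-time generalised arc consistency propagator from the paper): two vectors are multiset-ordered exactly when their values, sorted in non-increasing order, compare lexicographically.

```python
def multiset_leq(xs, ys):
    """True iff the multiset of xs is ordered before (or equal to) that of ys."""
    # sort both in non-increasing order and compare lexicographically
    return sorted(xs, reverse=True) <= sorted(ys, reverse=True)
```

For example, {1, 1, 2} precedes {1, 3} because [2, 1, 1] is lexicographically smaller than [3, 1], while {3} does not precede {2, 2}.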
|
Title: Tag Clouds for Displaying Semantics: The Case of Filmscripts
|
Abstract: We relate tag clouds to other forms of visualization, including planar or reduced dimensionality mapping, and Kohonen self-organizing maps. Using a modified tag cloud visualization, we incorporate other information into it, including text sequence and most pertinent words. Our notion of word pertinence goes beyond just word frequency and instead takes a word in a mathematical sense as located at the average of all of its pairwise relationships. We capture semantics through context, taken as all pairwise relationships. Our domain of application is that of filmscript analysis. The analysis of filmscripts, always important for cinema, is experiencing a major gain in importance in the context of television. Our objective in this work is to visualize the semantics of filmscript, and beyond filmscript any other partially structured, time-ordered, sequence of text segments. In particular we develop an innovative approach to plot characterization.
|
Title: Swap Bribery
|
Abstract: In voting theory, bribery is a form of manipulative behavior in which an external actor (the briber) offers to pay the voters to change their votes in order to get her preferred candidate elected. We investigate a model of bribery where the price of each vote depends on the amount of change that the voter is asked to implement. Specifically, in our model the briber can change a voter's preference list by paying for a sequence of swaps of consecutive candidates. Each swap may have a different price; the price of a bribery is the sum of the prices of all swaps that it involves. We prove complexity results for this model, which we call swap bribery, for a broad class of election systems, including variants of approval and k-approval, Borda, Copeland, and maximin.
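As a toy illustration of the pricing model (a hypothetical helper, not code from the paper): the cost of pushing a candidate to the top of one voter's preference list is the sum of the prices of the adjacent swaps performed, where `price(above, mover)` is an assumed per-swap price function.

```python
def cost_to_top(ranking, c, price):
    """Total price of lifting candidate c to the top of one voter's
    ranking via successive swaps of consecutive candidates."""
    ranking = list(ranking)
    total = 0
    i = ranking.index(c)
    while i > 0:
        total += price(ranking[i - 1], c)  # swap c past the candidate above it
        ranking[i - 1], ranking[i] = c, ranking[i - 1]
        i -= 1
    return total
```

With unit prices this reduces to the candidate's position; with candidate-dependent prices the same bribery can cost more or less.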
|
Title: A New Solution to the Relative Orientation Problem using only 3 Points and the Vertical Direction
|
Abstract: This paper presents a new method to recover the relative pose between two images using three points and the vertical direction. The vertical direction can be determined in two ways: (1) by direct physical measurement with an IMU (inertial measurement unit), or (2) from the vertical vanishing point. Knowing the vertical direction fixes two of the three parameters of the relative rotation, so only three homologous points are required to orient a pair of images. Rewriting the coplanarity equations leads to a simpler solution. The remaining unknowns are resolved by an algebraic method using Gröbner bases. The elements necessary to build a specific algebraic solver are given in this paper, allowing for a real-time implementation. Results on real and synthetic data show the efficiency of this method.
|
Title: Optimal byzantine resilient convergence in oblivious robot networks
|
Abstract: Given a set of robots with arbitrary initial locations and no agreement on a global coordinate system, convergence requires that all robots asymptotically approach the exact same, but unknown beforehand, location. Robots are oblivious -- they do not recall past computations -- and are allowed to move in a one-dimensional space. Additionally, robots cannot communicate directly; instead they obtain system-related information only via visual sensors. We draw a connection between the convergence problem in robot networks and the distributed approximate agreement problem (which requires correct processes to decide, for some constant $\epsilon$, values at most $\epsilon$ apart and within the range of initially proposed values). Surprisingly, even though the specifications are similar, implementing convergence in robot networks requires specific assumptions about synchrony and Byzantine resilience. In more detail, we prove necessary and sufficient conditions for the convergence of mobile robots despite a subset of them being Byzantine (i.e. exhibiting arbitrary behavior). Additionally, we propose a deterministic convergence algorithm for robot networks and analyze its correctness and complexity in various synchrony settings. The proposed algorithm tolerates f Byzantine robots in (2f+1)-sized robot networks that are fully synchronous, and in (3f+1)-sized networks that are semi-synchronous. These bounds are optimal for the class of cautious algorithms, which guarantee that correct robots always move inside the range of positions of the correct robots.
|
Title: Transfer Learning Using Feature Selection
|
Abstract: We present three related ways of using Transfer Learning to improve feature selection. The three methods address different problems, and hence share different kinds of information between tasks or feature classes, but all three are based on the information theoretic Minimum Description Length (MDL) principle and share the same underlying Bayesian interpretation. The first method, MIC, applies when predictive models are to be built simultaneously for multiple tasks (``simultaneous transfer'') that share the same set of features. MIC allows each feature to be added to none, some, or all of the task models and is most beneficial for selecting a small set of predictive features from a large pool of features, as is common in genomic and biological datasets. Our second method, TPC (Three Part Coding), uses a similar methodology for the case when the features can be divided into feature classes. Our third method, Transfer-TPC, addresses the ``sequential transfer'' problem in which the task to which we want to transfer knowledge may not be known in advance and may have different amounts of data than the other tasks. Transfer-TPC is most beneficial when we want to transfer knowledge between tasks which have unequal amounts of labeled data, for example the data for disambiguating the senses of different verbs. We demonstrate the effectiveness of these approaches with experimental results on real world data pertaining to genomics and to Word Sense Disambiguation (WSD).
|
Title: Normalized Web Distance and Word Similarity
|
Abstract: There is a great deal of work in cognitive psychology, linguistics, and computer science, about using word (or phrase) frequencies in context in text corpora to develop measures for word similarity or word association, going back to at least the 1960s. The goal of this chapter is to introduce the normalized web distance (NWD) method to determine similarity between words and phrases. It is a general way to tap the amorphous low-grade knowledge available for free on the Internet, typed in by local users aiming at personal gratification of diverse objectives, and yet globally achieving what is effectively the largest semantic electronic database in the world. Moreover, this database is available for all by using any search engine that can return aggregate page-count estimates for a large range of search-queries. In the paper introducing the NWD it was called `normalized Google distance (NGD),' but since Google doesn't allow computer searches anymore, we opt for the more neutral and descriptive NWD.
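For reference, the NWD of two terms is computed from search-engine page counts, following the well-known NGD formula: with f(x) and f(y) the individual hit counts, f(x, y) the joint count, and N the (estimated) number of indexed pages,

```python
from math import log

def nwd(fx, fy, fxy, N):
    """Normalized web distance from page counts fx, fy, joint count fxy,
    and index size N: (max(log fx, log fy) - log fxy) /
    (log N - min(log fx, log fy))."""
    return (max(log(fx), log(fy)) - log(fxy)) / (log(N) - min(log(fx), log(fy)))
```

Two terms that always co-occur have distance 0; the distance grows as their joint count falls relative to their individual counts.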
|
Title: Maximum Likelihood Estimation for Markov Chains
|
Abstract: A new approach for optimal estimation of Markov chains with sparse transition matrices is presented.
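The abstract is terse, so as background: the classical maximum likelihood estimate of a Markov chain's transition matrix is the row-normalized matrix of observed transition counts. A minimal sketch of that baseline (the paper's treatment of sparse transition matrices goes beyond this):

```python
import numpy as np

def mle_transition_matrix(seq, n_states):
    """Classical MLE: count observed transitions, then normalize each row."""
    C = np.zeros((n_states, n_states))
    for s, t in zip(seq, seq[1:]):
        C[s, t] += 1
    rows = C.sum(axis=1, keepdims=True)
    # rows with no observed transitions are left as all zeros
    return np.divide(C, rows, out=np.zeros_like(C), where=rows > 0)
```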
|
Title: Characterizing predictable classes of processes
|
Abstract: The problem is sequence prediction in the following setting. A sequence $x_1,...,x_n,...$ of discrete-valued observations is generated according to some unknown probabilistic law (measure) $\mu$. After observing each outcome, it is required to give the conditional probabilities of the next observation. The measure $\mu$ belongs to an arbitrary class $\mathcal{C}$ of stochastic processes. We are interested in predictors $\rho$ whose conditional probabilities converge to the "true" $\mu$-conditional probabilities if any $\mu\in\mathcal{C}$ is chosen to generate the data. We show that if such a predictor exists, then a predictor can also be obtained as a convex combination of countably many elements of $\mathcal{C}$. In other words, it can be obtained as a Bayesian predictor whose prior is concentrated on a countable set. This result is established for two very different measures of prediction performance: one very strong, namely total variation, and one very weak, namely prediction in expected average Kullback-Leibler divergence.
|
Title: Automating Quantified Multimodal Logics in Simple Type Theory -- A Case Study
|
Abstract: In a case study we investigate whether off-the-shelf higher-order theorem provers and model generators can be employed to automate reasoning in and about quantified multimodal logics. In our experiments we exploit the new TPTP infrastructure for classical higher-order logic.
|
Title: Information Modeling for a Dynamic Representation of an Emergency Situation
|
Abstract: In this paper we propose an approach to build a decision support system that can help emergency planners and responders detect and manage emergency situations. The internal mechanism of the system is independent of the treated application, so we think the system may be used or adapted easily to different case studies. We focus here on a first step in the decision-support process: the modeling of information issued from the perceived environment and its dynamic representation using a multi-agent system. This modeling was applied to the RoboCupRescue Simulation System. An implementation and some results are presented here.
|
Title: Weak Evolvability Equals Strong Evolvability
|
Abstract: An updated version will be uploaded later.
|
Title: Considerations on Construction Ontologies
|
Abstract: The paper proposes an analysis of some existing ontologies in order to point out ways to resolve semantic heterogeneity in information systems. The authors highlight the tasks in a Knowledge Acquisition System and identify aspects related to the addition of new information to an intelligent system. A solution is proposed as a combination of ontology reasoning services and natural language generation. A multi-agent system will be conceived with an extractor agent, a reasoner agent and a competence management agent.
|
Title: A black box method for solving the complex exponentials approximation problem
|
Abstract: A common problem, arising in many different applied contexts, consists in estimating the number of exponentially damped sinusoids whose weighted sum best fits a finite set of noisy data and in estimating their parameters. Many different methods exist for this purpose. The best of them are based on approximate Maximum Likelihood estimators, assuming the number of damped sinusoids to be known; it can then be estimated by an order selection procedure. As the problem can be severely ill posed, a stochastic perturbation method is proposed which provides better results than Maximum Likelihood based methods when the signal-to-noise ratio is low. The method depends on some hyperparameters which turn out to be essentially independent of the application. Therefore they can be fixed once and for all, giving rise to a black box method.
|
Title: A Logic Programming Approach to Activity Recognition
|
Abstract: We have been developing a system for recognising human activity given a symbolic representation of video content. The input of our system is a set of time-stamped short-term activities detected on video frames. The output of our system is a set of recognised long-term activities, which are pre-defined temporal combinations of short-term activities. The constraints on the short-term activities that, if satisfied, lead to the recognition of a long-term activity, are expressed using a dialect of the Event Calculus. We illustrate the expressiveness of the dialect by showing the representation of several typical complex activities. Furthermore, we present a detailed evaluation of the system through experimentation on a benchmark dataset of surveillance videos.
|
Title: Mining Generalized Patterns from Large Databases using Ontologies
|
Abstract: Formal Concept Analysis (FCA) is a mathematical theory based on the formalization of the notions of concept and concept hierarchies. It has been successfully applied to several Computer Science fields such as data mining, software engineering, and knowledge engineering, and in many domains like medicine, psychology, linguistics and ecology. For instance, it has been exploited for the design, mapping and refinement of ontologies. In this paper, we show how FCA can benefit from a given domain ontology by analyzing the impact of a taxonomy (on objects and/or attributes) on the resulting concept lattice. We will mainly concentrate on the usage of a taxonomy to extract generalized patterns (i.e., knowledge generated from data when elements of a given domain ontology are used) in the form of concepts and rules, and improve navigation through these patterns. To that end, we analyze three generalization cases and show their impact on the size of the generalized pattern set. Different scenarios of simultaneous generalizations on both objects and attributes are also discussed.
|
Title: Divide and Conquer: Partitioning Online Social Networks
|
Abstract: Online Social Networks (OSNs) have exploded in terms of scale and scope over the last few years. The unprecedented growth of these networks presents challenges in terms of system design and maintenance. One way to cope with this is by partitioning such large networks and assigning these partitions to different machines. However, social networks possess unique properties that make the partitioning problem non-trivial. The main contribution of this paper is to understand different properties of social networks and how these properties can guide the choice of a partitioning algorithm. Using large scale measurements representing real OSNs, we first characterize different properties of social networks, and then we evaluate qualitatively different partitioning methods that cover the design space. We expose different trade-offs involved and understand them in light of properties of social networks. We show that a judicious choice of a partitioning scheme can help improve performance.
|
Title: A Minimum Description Length Approach to Multitask Feature Selection
|
Abstract: Many regression problems involve not one but several response variables (y's). Often the responses are suspected to share a common underlying structure, in which case it may be advantageous to share information across them; this is known as multitask learning. As a special case, we can use multiple responses to better identify shared predictive features -- a project we might call multitask feature selection. This thesis is organized as follows. Section 1 introduces feature selection for regression, focusing on $\ell_0$ regularization methods and their interpretation within a Minimum Description Length (MDL) framework. Section 2 proposes a novel extension of MDL feature selection to the multitask setting. The approach, called the "Multiple Inclusion Criterion" (MIC), is designed to borrow information across regression tasks by more easily selecting features that are associated with multiple responses. We show in experiments on synthetic and real biological data sets that MIC can reduce prediction error in settings where features are at least partially shared across responses. Section 3 surveys hypothesis testing by regression with a single response, focusing on the parallel between the standard Bonferroni correction and an MDL approach. Mirroring the ideas in Section 2, Section 4 proposes a novel MIC approach to hypothesis testing with multiple responses and shows that on synthetic data with significant sharing of features across responses, MIC sometimes outperforms standard FDR-controlling methods in terms of finding true positives for a given level of false positives. Section 5 concludes.
|
Title: A Walk in Facebook: Uniform Sampling of Users in Online Social Networks
|
Abstract: Our goal in this paper is to develop a practical framework for obtaining a uniform sample of users in an online social network (OSN) by crawling its social graph. Such a sample allows one to estimate any user property and some topological properties as well. To this end, first, we consider and compare several candidate crawling techniques. Two approaches that can produce approximately uniform samples are the Metropolis-Hastings random walk (MHRW) and a re-weighted random walk (RWRW). Both have pros and cons, which we demonstrate through a comparison to each other as well as to the "ground truth." In contrast, using Breadth-First-Search (BFS) or an unadjusted Random Walk (RW) leads to substantially biased results. Second, and in addition to offline performance assessment, we introduce online formal convergence diagnostics to assess sample quality during the data collection process. We show how these diagnostics can be used to effectively determine when a random walk sample is of adequate size and quality. Third, as a case study, we apply the above methods to Facebook and we collect the first, to the best of our knowledge, representative sample of Facebook users. We make it publicly available and employ it to characterize several key properties of Facebook.
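A minimal sketch of the MHRW idea (illustrative, not the authors' crawler): at node v, propose a uniformly chosen neighbour w and accept the move with probability min(1, deg(v)/deg(w)). This acceptance rule makes the walk's stationary distribution uniform over nodes, rather than proportional to degree as in a plain random walk.

```python
import random

def mhrw_sample(adj, start, steps, seed=0):
    """Metropolis-Hastings random walk on an undirected graph given as an
    adjacency dict; returns the sequence of visited nodes."""
    rng = random.Random(seed)
    v, out = start, []
    for _ in range(steps):
        w = rng.choice(adj[v])
        # accept with probability min(1, deg(v)/deg(w))
        if rng.random() <= len(adj[v]) / len(adj[w]):
            v = w
        out.append(v)
    return out
```

On a star graph, a plain random walk visits the hub half the time, whereas the MHRW visits every node with frequency close to 1/n.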
|
Title: Effect of indirect dependencies on "Maximum likelihood blind separation of two quantum states (qubits) with cylindrical-symmetry Heisenberg spin coupling"
|
Abstract: In a previous paper [1], we investigated the Blind Source Separation (BSS) problem, for the nonlinear mixing model that we introduced in that paper. We proposed to solve this problem by using a maximum likelihood (ML) approach. When applying the ML approach to BSS problems, one usually determines the analytical expressions of the derivatives of the log-likelihood with respect to the parameters of the considered mixing model. In the literature, these calculations were mainly considered for linear mixtures up to now. They are more complex for nonlinear mixtures, due to dependencies between the considered quantities. Moreover, the notations commonly employed by the BSS community in such calculations may become misleading when using them for nonlinear mixtures, due to the above-mentioned dependencies. In this document, we therefore explain this phenomenon, by showing the effect of indirect dependencies on the application of the ML approach to the mixing model considered in [1]. This yields the explicit expression of the complete derivative of the log-likelihood associated to that mixing model.
|
Title: Managing Distributed MARF with SNMP
|
Abstract: This project focuses on researching and prototyping an extension of Distributed MARF so that its services can be managed through the most popular management protocol family, SNMP. The rationale for choosing SNMP over MARF's proprietary management protocols is integration with common network service and device management: administrators can manage MARF nodes via an already familiar protocol, as well as monitor their performance, gather statistics, and set desired configuration, perhaps using the same management tools they have been using for other network devices and application servers.
|
Title: A note on Influence diagnostics in nonlinear mixed-effects elliptical models
|
Abstract: This paper provides general matrix formulas for computing the score function, the (expected and observed) Fisher information and the $\Delta$ matrices (required for the assessment of local influence) for a quite general model which includes the one proposed by Russo et al. (2009). Additionally, we also present an expression for the generalized leverage. The matrix formulation has a considerable advantage: despite the complexity of the postulated model, all general formulas are compact, clear and have nice forms.