Columns: id (string, 9–16 chars) · title (string, 4–278 chars) · categories (string, 5–104 chars) · abstract (string, 6–4.09k chars)
1108.4940
Quantum rate distortion, reverse Shannon theorems, and source-channel separation
quant-ph cs.IT math.IT
We derive quantum counterparts of two key theorems of classical information theory, namely, the rate-distortion theorem and the source-channel separation theorem. The rate-distortion theorem gives the ultimate limits on lossy data compression, and the source-channel separation theorem implies that a two-stage protocol consisting of compression and channel coding is optimal for transmitting a memoryless source over a memoryless channel. In spite of their importance in the classical domain, there has been surprisingly little work in these areas for quantum information theory. In the present paper, we prove that the quantum rate-distortion function is given in terms of the regularized entanglement of purification. We also determine a single-letter expression for the entanglement-assisted quantum rate-distortion function, and we prove that it serves as a lower bound on the unassisted quantum rate-distortion function. This implies that the unassisted quantum rate-distortion function is non-negative and generally not equal to the coherent information between the source and distorted output (in spite of Barnum's conjecture that the coherent information would be relevant here). Moreover, we prove several quantum source-channel separation theorems. The strongest of these are in the entanglement-assisted setting, in which we establish a necessary and sufficient condition for transmitting a memoryless source over a memoryless quantum channel up to a given distortion.
1108.4942
Making Use of Advances in Answer-Set Programming for Abstract Argumentation Systems
cs.AI
Dung's famous abstract argumentation frameworks represent the core formalism for many problems and applications in the field of argumentation, which has significantly evolved within the last decade. Recent work in the field has thus focused on implementations for these frameworks, and one of the main approaches is to use Answer-Set Programming (ASP). While some argumentation semantics can be nicely expressed within the ASP language, others have required rather cumbersome encoding techniques. Recent advances in ASP systems, in particular the metasp optimization frontend for the ASP package gringo/claspD, provide direct commands to filter answer sets satisfying certain subset-minimality (or -maximality) constraints. This allows for much simpler encodings compared to those in the standard ASP language. In this paper, we experimentally compare the original encodings (for the argumentation semantics based on preferred, semi-stable, and stage extensions, respectively) with new metasp encodings. Moreover, we provide novel encodings for the recently introduced resolution-based grounded semantics. Our experimental results indicate that the metasp approach works well in those cases where the complexity of the encoded problem is adequately mirrored within the metasp approach.
1108.4961
Non-trivial two-armed partial-monitoring games are bandits
cs.LG
We consider online learning in partial-monitoring games against an oblivious adversary. We show that when the number of actions available to the learner is two and the game is nontrivial then it is reducible to a bandit-like game and thus the minimax regret is $\Theta(\sqrt{T})$.
1108.4973
Learning from Complex Systems: On the Roles of Entropy and Fisher Information in Pairwise Isotropic Gaussian Markov Random Fields
cs.IT cs.AI cs.CV math.IT stat.CO
Markov Random Field models are powerful tools for the study of complex systems. However, little is known about how the interactions between the elements of such systems are encoded, especially from an information-theoretic perspective. In this paper, our goal is to elucidate the connection between Fisher information, Shannon entropy, information geometry, and the behavior of complex systems modeled by isotropic pairwise Gaussian Markov random fields. We propose analytical expressions to compute local and global versions of these measures using Besag's pseudo-likelihood function, characterizing the system's behavior through its \emph{Fisher curve}, a parametric trajectory across the information space that provides a geometric representation for the study of complex systems. Computational experiments show how the proposed tools can be useful in extracting relevant information from complex patterns. The obtained results quantify and support our main conclusion: in terms of information, moving towards higher entropy states (A --> B) is different from moving towards lower entropy states (B --> A), since the \emph{Fisher curves} are not the same given a natural orientation (the direction of time).
1108.4982
Synthesis of anisotropic suboptimal controllers by convex optimization
cs.SY math.OC
This paper considers a disturbance attenuation problem for a linear discrete time invariant system under random disturbances with imprecisely known probability distributions. The statistical uncertainty is measured in terms of relative entropy using the mean anisotropy functional. The disturbance attenuation capabilities of the system are quantified by the anisotropic norm, which is a stochastic counterpart of the H-infinity norm. The designed anisotropic suboptimal controller is, in general, a dynamic fixed-order output-feedback compensator which is required to stabilize the closed-loop system and keep its anisotropic norm below a prescribed threshold value.
1108.5002
Verbal Characterization of Probabilistic Clusters using Minimal Discriminative Propositions
cs.AI
In a knowledge discovery process, interpretation and evaluation of the mined results are indispensable in practice. In the case of data clustering, however, it is often difficult to see in what respect each cluster has been formed. This paper proposes a method for automatic and objective characterization, or "verbalization", of the clusters obtained by mixture models, in which we collect conjunctions of propositions (attribute-value pairs) that help us interpret or evaluate the clusters. The proposed method provides a new, in-depth, and consistent tool for cluster interpretation/evaluation, and works for various types of datasets, including those with continuous attributes and missing values. Experimental results with a couple of standard datasets exhibit the utility of the proposed method, and the importance of the feedback from the interpretation/evaluation step.
1108.5016
An S-DRT-based analysis for the modeling of pathological dialogues
cs.CL cs.AI
In this article, we present a corpus of dialogues between a schizophrenic speaker and an interlocutor who drives the dialogue. We identify specific discontinuities in the speech of paranoid schizophrenics and propose a model of these discontinuities using S-DRT (its pragmatic part).
1108.5017
Event in Compositional Dynamic Semantics
cs.CL cs.AI cs.LO
We present a framework which constructs an event-style discourse semantics. The discourse dynamics are encoded in continuation semantics, and various rhetorical relations are embedded in the resulting interpretation of the framework. We assume that discourse and sentence are distinct semantic objects that play different roles in meaning evaluation. Moreover, two sets of composition functions, for handling different discourse relations, are introduced. The paper first gives the necessary background and motivation for event and dynamic semantics; then the framework is introduced with detailed examples.
1108.5019
Prescribing the motion of a set of particles in a 3D perfect fluid
math.AP cs.SY math.OC
We establish a result concerning the so-called Lagrangian controllability of the Euler equation for incompressible perfect fluids in dimension 3. More precisely we consider a connected bounded domain of R^3 and two smooth contractible sets of fluid particles, surrounding the same volume. We prove that given any initial velocity field, one can find a boundary control and a time interval such that the corresponding solution of the Euler equation makes the first of the two sets approximately reach the second one.
1108.5025
Robust Stackelberg game in communication systems
cs.IT cs.GT math.IT
Using the formulation of Stackelberg games, this paper studies multi-user communication systems with two groups of users: leaders, which possess system information, and followers, which have no system information. In such games, the leaders play and choose their actions based on their information about the system, and the followers choose their actions myopically according to their observations of the aggregate impact of other users. However, obtaining the exact value of these parameters is not practical in communication systems. To study the effect of uncertainty and preserve the players' utilities under these conditions, we introduce a robust equilibrium for Stackelberg games. In this framework, the leaders' information and the followers' observations are uncertain parameters, and the leaders and the followers choose their actions by solving worst-case robust optimizations. We show that the followers' uncertain parameters always increase the leaders' utilities and decrease the followers' utilities. Conversely, the leaders' uncertain information reduces the leaders' utilities and increases the followers' utilities. We illustrate our theoretical results with numerical results obtained from power control games in interference channels.
1108.5027
Encoding Phases using Commutativity and Non-commutativity in a Logical Framework
cs.CL cs.AI cs.LO
This article presents an extension of Minimalist Categorial Grammars (MCG) to encode Chomsky's phases. These grammars are based on Partially Commutative Logic (PCL) and encode properties of Stabler's Minimalist Grammars (MG). The first implementation of MCG used both non-commutative properties (to respect the linear word order in an utterance) and commutative ones (to model features of different constituents). Here, we propose to add Chomsky's phases with the non-commutative tensor product of the logic. We can then account for the PIC just by using logical properties of the framework.
1108.5037
Orthonormal Expansion l1-Minimization Algorithms for Compressed Sensing
cs.IT cs.SY math.IT math.OC
Compressed sensing aims at reconstructing sparse signals from a significantly reduced number of samples, and a popular reconstruction approach is $\ell_1$-norm minimization. In this correspondence, a method called orthonormal expansion is presented to reformulate the basis pursuit problem for noiseless compressed sensing. Two algorithms are proposed based on convex optimization: one exactly solves the problem and the other is a relaxed version of the first. The latter can be considered a modified iterative soft thresholding algorithm and is easy to implement. Numerical simulation shows that, in dealing with noise-free measurements of sparse signals, the relaxed version is accurate, fast, and competitive with recent state-of-the-art algorithms. Its practical application is demonstrated in a more general case where signals of interest are approximately sparse and measurements are contaminated with noise.
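[Annotation] The "modified iterative soft thresholding" idea mentioned in this abstract can be illustrated with a generic sketch. This is not the paper's orthonormal-expansion algorithm, just the standard ISTA iteration for $\min_x \frac{1}{2}\|Ax-b\|_2^2 + \lambda\|x\|_1$; the data and parameter values below are made up.

```python
import numpy as np

def soft_threshold(x, t):
    """Entrywise soft thresholding: shrink each coefficient toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, n_iter=3000):
    """Generic ISTA for min 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)           # gradient of the smooth part
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Tiny demo: recover a 3-sparse vector from 30 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 60))
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [1.0, -2.0, 1.5]
b = A @ x_true
x_hat = ista(A, b, lam=0.01)
```

Soft thresholding is the proximal operator of the $\ell_1$ norm, which is why each iteration is a gradient step on the quadratic term followed by a shrinkage step.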
1108.5052
On the Quality of Wireless Network Connectivity
cs.NI cs.IT math.IT
Despite intensive research in the area of network connectivity, there is an important category of problems that remain unsolved: how to measure the quality of connectivity of a wireless multi-hop network which has a realistic number of nodes, not necessarily large enough to warrant the use of asymptotic analysis, and has unreliable connections, reflecting the inherent unreliable characteristics of wireless communications? The quality of connectivity measures how easily and reliably a packet sent by a node can reach another node. It complements the use of \emph{capacity} to measure the quality of a network in saturated traffic scenarios and provides a native measure of the quality of (end-to-end) network connections. In this paper, we explore the use of probabilistic connectivity matrix as a possible tool to measure the quality of network connectivity. Some interesting properties of the probabilistic connectivity matrix and their connections to the quality of connectivity are demonstrated. We argue that the largest eigenvalue of the probabilistic connectivity matrix can serve as a good measure of the quality of network connectivity.
1108.5095
RBO Protocol: Broadcasting Huge Databases for Tiny Receivers
cs.DS cs.DB cs.DC cs.DM cs.NI
We propose a protocol (called RBO) for broadcasting long streams of single-packet messages over a radio channel to tiny, battery-powered receivers. The messages are labeled by keys from some linearly ordered set. The sender repeatedly broadcasts a sequence of many (possibly millions of) messages, while each receiver is interested in receiving a message with a specified key within this sequence. The transmission is arranged so that a receiver can wake up at an arbitrary moment and find the nearest transmission of its searched message. Even if it does not know the position of the message in the sequence, it needs to receive only a small number of (the headers of) other messages to locate it properly. Thus it can save energy by keeping the radio switched off most of the time. We show that the bit-reversal permutation has "recursive bisection properties" and, as a consequence, RBO can be implemented very efficiently with only a constant number of $\log_2 n$-bit variables, where $n$ is the total number of messages in the sequence. The total number of required receptions is at most $2\log_2 n + 2$ in the model with perfect synchronization. The basic procedure of RBO (computation of the time slot for the next required reception) requires only $O(\log^3 n)$ bit-wise operations. We also propose implementation mechanisms for a realistic model (with imperfect synchronization) and for operating systems such as TinyOS.
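[Annotation] The bit-reversal permutation at the heart of RBO is easy to compute. The sketch below (illustrative only, not the RBO implementation) shows the broadcast order for $n = 8$ and one instance of the bisection flavor of such a schedule: the first half of the slots already covers every other key.

```python
def bit_reverse(i, bits):
    """Reverse the lowest `bits` bits of i (e.g. 3 = 0b011 -> 0b110 = 6 for bits=3)."""
    r = 0
    for _ in range(bits):
        r = (r << 1) | (i & 1)
        i >>= 1
    return r

bits = 3
n = 1 << bits
# Slot t carries the message whose key is the bit-reversal of t.
order = [bit_reverse(t, bits) for t in range(n)]
# order == [0, 4, 2, 6, 1, 5, 3, 7]: keys are spread evenly over time,
# so a receiver waking at any moment is never far from any key's region.
first_half_keys = sorted(order[:n // 2])   # one key from each pair {2k, 2k+1}
```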
1108.5096
Minimalist Grammars and Minimalist Categorial Grammars, definitions toward inclusion of generated languages
cs.CL
Stabler proposes an implementation of the Chomskyan Minimalist Program (Chomsky 95) with Minimalist Grammars (MG, Stabler 97). This framework inherits a long linguistic tradition, but the semantic calculus is more easily added if one uses the Curry-Howard isomorphism. Minimalist Categorial Grammars (MCG), based on an extension of the Lambek calculus, the mixed logic, were introduced to provide a theoretically motivated syntax-semantics interface (Amblard 07). In this article, we give full definitions of MG with algebraic tree descriptions and of MCG, and take the first steps towards a proof of inclusion of their generated languages.
1108.5104
Improved Linear Programming Bounds on Sizes of Constant-Weight Codes
cs.IT math.IT
Let $A(n,d,w)$ be the largest possible size of an $(n,d,w)$ constant-weight binary code. By adding new constraints to the Delsarte linear program, we obtain twenty-three new upper bounds on $A(n,d,w)$ for $n \leq 28$. The techniques used allow us to give a simple proof of an important theorem of Delsarte which makes linear programming possible for binary codes.
1108.5128
Digital Self Triggered Robust Control of Nonlinear Systems
math.OC cs.SY
In this paper we develop novel results on self-triggered control of nonlinear systems subject to perturbations and actuation delays. First, considering an unperturbed nonlinear system with bounded actuation delays, we provide conditions that guarantee the existence of a self-triggered control strategy stabilizing the closed-loop system. Then, considering parameter uncertainties, disturbances, and bounded actuation delays, we provide conditions guaranteeing the existence of a self-triggered strategy that keeps the state arbitrarily close to the equilibrium point. In both cases, we provide a methodology for the computation of the next execution time. We show on an example the relevant benefits of this approach in terms of energy consumption, with respect to control algorithms based on constant sampling, with a considerable reduction of the average sampling time.
1108.5140
A convex formulation of strict anisotropic norm bounded real lemma
cs.SY math.OC
This paper is aimed at extending the H-infinity Bounded Real Lemma to stochastic systems under random disturbances with imprecisely known probability distributions. The statistical uncertainty is measured in entropy theoretic terms using the mean anisotropy functional. The disturbance attenuation capabilities of the system are quantified by the anisotropic norm which is a stochastic counterpart of the H-infinity norm. A state-space sufficient criterion for the anisotropic norm of a linear discrete time invariant system to be bounded by a given threshold value is derived. The resulting Strict Anisotropic Norm Bounded Real Lemma involves an inequality on the determinant of a positive definite matrix and a linear matrix inequality. It is shown that slight reformulation of these conditions allows the anisotropic norm of a system to be efficiently computed via convex optimization.
1108.5147
To Switch or Not To Switch: Understanding Social Influence in Recommender Systems
cs.CY cs.HC cs.SI physics.soc-ph
We designed and ran an experiment to test how often people's choices are reversed by others' recommendations when facing different levels of confirmation and conformity pressures. In our experiment participants were first asked to provide their preferences between pairs of items. They were then asked to make second choices about the same pairs with knowledge of others' preferences. Our results show that other people's opinions significantly sway people's own choices. The influence is stronger when people are required to make their second decision some time later (22.4%) than immediately (14.1%). Moreover, people are most likely to reverse their choices when facing a moderate number of opposing opinions. Finally, the time people spend making the first decision significantly predicts whether they will reverse their decisions later on, while demographics such as age and gender do not. These results have implications for consumer behavior research as well as online marketing strategies.
1108.5192
Positivity of the English language
physics.soc-ph cs.CL
Over the last million years, human language has emerged and evolved as a fundamental instrument of social communication and semiotic representation. People use language in part to convey emotional information, leading to the central and contingent questions: (1) What is the emotional spectrum of natural language? and (2) Are natural languages neutrally, positively, or negatively biased? Here, we report that the human-perceived positivity of over 10,000 of the most frequently used English words exhibits a clear positive bias. More deeply, we characterize and quantify distributions of word positivity for four large and distinct corpora, demonstrating that their form is broadly invariant with respect to frequency of word use.
1108.5212
Deinterleaving Finite Memory Processes via Penalized Maximum Likelihood
cs.IT math.IT
We study the problem of deinterleaving a set of finite-memory (Markov) processes over disjoint finite alphabets, which have been randomly interleaved by a finite-memory switch. The deinterleaver has access to a sample of the resulting interleaved process, but no knowledge of the number or structure of the component Markov processes, or of the switch. We study conditions for uniqueness of the interleaved representation of a process, showing that certain switch configurations, as well as memoryless component processes, can cause ambiguities in the representation. We show that a deinterleaving scheme based on minimizing a penalized maximum-likelihood cost function is strongly consistent, in the sense of reconstructing, almost surely as the observed sequence length tends to infinity, a set of component and switch Markov processes compatible with the original interleaved process. Furthermore, under certain conditions on the structure of the switch (including the special case of a memoryless switch), we show that the scheme recovers \emph{all} possible interleaved representations of the original process. Experimental results are presented demonstrating that the proposed scheme performs well in practice, even for relatively short input samples.
1108.5217
An Experimental Comparison of PMSPrune and Other Algorithms for Motif Search
q-bio.QM cs.CE q-bio.GN
Extracting meaningful patterns from voluminous amounts of biological data is a very big challenge. Motifs are biological patterns of great interest to biologists. Many different versions of the motif finding problem have been identified by researchers; examples include the Planted $(l, d)$ Motif version and those based on position-specific score matrices. A comparative study of the various motif search algorithms is important for several reasons: for example, we could identify the strengths and weaknesses of each, and as a result we might be able to devise hybrids that perform better than the individual components. In this paper, we (directly or indirectly) compare the quality of the motifs predicted by PMSprune (an algorithm based on the $(l, d)$ motif model) and 14 other algorithms, in terms of seven measures and using well-established benchmarks, including the dataset used by Tompa et al. These comparisons show that the performance of PMSprune is competitive with the other 14 algorithms tested, and that both PMSprune and DME (an algorithm based on position-specific score matrices) in general perform better than the 13 algorithms reported by Tompa et al. Subsequently, we have compared PMSprune and DME on other benchmark datasets, including ChIP-Chip, ChIP-seq, and ABS. Between the two, PMSprune performs better than DME on six measures, while DME performs better than PMSprune on one measure (namely, specificity).
1108.5248
Optimal Coalition Structures in Cooperative Graph Games
cs.GT cs.MA
Representation languages for coalitional games are a key research area in algorithmic game theory. There is an inherent tradeoff between how general a language is, allowing it to capture more elaborate games, and how hard it is computationally to optimize and solve such games. One prominent such language is the simple yet expressive Weighted Graph Games (WGGs) representation [14], which maintains knowledge about synergies between agents in the form of an edge-weighted graph. We consider the problem of finding the optimal coalition structure in WGGs. The agents in such games are vertices in a graph, and the value of a coalition is the sum of the weights of the edges present between coalition members. The optimal coalition structure is a partition of the agents into coalitions that maximizes the sum of utilities obtained by the coalitions. We show that finding the optimal coalition structure is not only hard for general graphs, but is also intractable for restricted families such as planar graphs, which are amenable to many other combinatorial problems. We then provide algorithms with constant-factor approximations for planar, minor-free, and bounded-degree graphs.
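[Annotation] As a toy illustration of the objective in this abstract (not the paper's approximation algorithms), the sketch below enumerates all partitions of a small agent set and picks the one maximizing the summed coalition values; the game data are made up.

```python
from itertools import combinations

def coalition_value(coalition, weights):
    """Value of a coalition: sum of edge weights between its members."""
    return sum(weights.get((min(u, v), max(u, v)), 0)
               for u, v in combinations(coalition, 2))

def partitions(items):
    """Enumerate all partitions of a list of agents (Bell-number many)."""
    if not items:
        yield []
        return
    head, rest = items[0], items[1:]
    for part in partitions(rest):
        yield [[head]] + part                       # head in its own coalition
        for i in range(len(part)):                  # or head joins an existing one
            yield part[:i] + [[head] + part[i]] + part[i + 1:]

def optimal_structure(agents, weights):
    return max(partitions(agents),
               key=lambda p: sum(coalition_value(c, weights) for c in p))

# Toy game: positive synergy between 1-2, negative between 2-3.
w = {(1, 2): 5, (2, 3): -4, (1, 3): 1}
best = optimal_structure([1, 2, 3], w)
best_value = sum(coalition_value(c, w) for c in best)
```

Brute force is exponential in the number of agents, which is consistent with the hardness result: the paper's contribution is precisely avoiding this enumeration on restricted graph families.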
1108.5250
Single-trial EEG Discrimination between Wrist and Finger Movement Imagery and Execution in a Sensorimotor BCI
cs.AI
A brain-computer interface (BCI) may be used to control a prosthetic or orthotic hand using neural activity from the brain. The core of this sensorimotor BCI lies in the interpretation of the neural information extracted from the electroencephalogram (EEG). It is desired to improve the interpretation of EEG to allow people with neuromuscular disorders to perform daily activities. This paper investigates the possibility of discriminating between the EEG associated with wrist and finger movements. The EEG was recorded from test subjects as they executed and imagined five essential hand movements using both hands. Independent component analysis (ICA) and time-frequency techniques were used to extract spectral features based on event-related (de)synchronisation (ERD/ERS), while the Bhattacharyya distance (BD) was used for feature reduction. Mahalanobis distance (MD) clustering and artificial neural networks (ANN) were used as classifiers and obtained average accuracies of 65% and 71%, respectively. This shows that EEG discrimination between wrist and finger movements is possible. The research introduces a new combination of motor tasks to BCI research.
1108.5253
A Frequent Closed Itemsets Lattice-based Approach for Mining Minimal Non-Redundant Association Rules
cs.DB
Many algorithms have been developed to improve the time of mining frequent itemsets (FI) or frequent closed itemsets (FCI). However, algorithms that address the time of generating association rules have not been studied in depth. In reality, when a database contains many FI/FCI (from tens of thousands up to millions), the time for generating association rules is much larger than that for mining FI/FCI. Therefore, this paper presents an application of a frequent closed itemsets lattice (FCIL) for mining minimal non-redundant association rules (MNAR), which saves a great deal of rule-generation time. Firstly, we use CHARM-L to build the FCIL. After that, based on the FCIL, an algorithm for fast generation of MNAR is proposed. Experimental results show that the proposed algorithm is much faster in mining time than a frequent-itemsets-lattice-based algorithm.
1108.5316
Link Failure Detection in Multi-hop Control Networks
math.OC cs.NI cs.SY
A Multi-hop Control Network (MCN) consists of a plant where the communication between sensors, actuators and computational unit is supported by a wireless multi-hop communication network, and data flow is performed using scheduling and routing of sensing and actuation data. We characterize the problem of detecting the failure of links of the radio connectivity graph and provide necessary and sufficient conditions on the plant dynamics and on the communication protocol. We also provide a methodology to \emph{explicitly} design the network topology, scheduling and routing of a communication protocol in order to satisfy the above conditions.
1108.5355
A tale of many cities: universal patterns in human urban mobility
physics.soc-ph cs.SI
The advent of geographic online social networks such as Foursquare, where users voluntarily signal their current location, opens the door to powerful studies on human movement. In particular the fine granularity of the location data, with GPS accuracy down to 10 meters, and the worldwide scale of Foursquare adoption are unprecedented. In this paper we study urban mobility patterns of people in several metropolitan cities around the globe by analyzing a large set of Foursquare users. Surprisingly, while there are variations in human movement in different cities, our analysis shows that those are predominantly due to different distributions of places across different urban environments. Moreover, a universal law for human mobility is identified, which isolates as a key component the rank-distance, factoring in the number of places between origin and destination, rather than pure physical distance, as considered in some previous works. Building on our findings, we also show how a rank-based movement model accurately captures real human movements in different cities. Our results shed new light on the driving factors of urban human mobility, with potential applications for urban planning, location-based advertisement and even social studies.
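[Annotation] The rank-distance idea described in this abstract can be sketched as a tiny function: instead of the physical distance from origin to destination, count how many candidate places lie closer to the origin than the destination does. This is only an illustrative reading of the definition; the function name and data are made up.

```python
import math

def rank_distance(origin, destination, places):
    """Number of candidate places strictly closer to `origin` than `destination` is.

    Captures the idea that a trip "passes over" intervening opportunities:
    the same physical distance yields a high rank in a dense area and a low
    rank in a sparse one.
    """
    d_dest = math.dist(origin, destination)
    return sum(1 for p in places if math.dist(origin, p) < d_dest)

# Toy example: three nearby places and one far-away place.
places = [(0.1, 0.0), (0.2, 0.0), (0.3, 0.0), (5.0, 5.0)]
rank = rank_distance((0.0, 0.0), (1.0, 0.0), places)
```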
1108.5359
Solving Principal Component Pursuit in Linear Time via $l_1$ Filtering
cs.NA cs.CV
In the past decades, exactly recovering the intrinsic data structure from corrupted observations, which is known as robust principal component analysis (RPCA), has attracted tremendous interest and found many applications in computer vision. Recently, this problem has been formulated as recovering a low-rank component and a sparse component from the observed data matrix. It has been proved that under suitable conditions, this problem can be exactly solved by principal component pursuit (PCP), i.e., minimizing a combination of the nuclear norm and the $l_1$ norm. Most existing methods for solving PCP require singular value decompositions (SVD) of the data matrix, resulting in a high computational complexity and hence preventing the application of RPCA to very large scale computer vision problems. In this paper, we propose a novel algorithm, called $l_1$ filtering, for \emph{exactly} solving PCP with an $O(r^2(m+n))$ complexity, where $m\times n$ is the size of the data matrix and $r$ is the rank of the matrix to recover, which is supposed to be much smaller than $m$ and $n$. Moreover, $l_1$ filtering is \emph{highly parallelizable}. It is the first algorithm that can \emph{exactly} solve a nuclear norm minimization problem in \emph{linear time} (with respect to the data size). Experiments on both synthetic data and real applications testify to the great advantage of $l_1$ filtering in speed over state-of-the-art algorithms.
1108.5387
A stable method for evaluating the algorithmic complexity of short strings
cs.CC cs.IT math.IT
We discuss and survey a previously proposed numerical method that, as an alternative to the usual compression method, provides an approximation to the algorithmic (Kolmogorov) complexity; it is particularly useful for short strings, for which compression methods simply fail. The method is shown to be stable and useful for conceiving and comparing patterns in algorithmic models. (Article in Spanish.)
1108.5395
Noise Covariance Properties in Dual-Tree Wavelet Decompositions
math.ST cs.CV stat.TH
Dual-tree wavelet decompositions have recently gained much popularity, mainly due to their ability to provide an accurate directional analysis of images combined with a reduced redundancy. When the decomposition of a random process is performed -- which occurs in particular when an additive noise is corrupting the signal to be analyzed -- it is useful to characterize the statistical properties of the dual-tree wavelet coefficients of this process. As dual-tree decompositions constitute overcomplete frame expansions, correlation structures are introduced among the coefficients, even when a white noise is analyzed. In this paper, we show that it is possible to provide an accurate description of the covariance properties of the dual-tree coefficients of a wide-sense stationary process. The expressions of the (cross-)covariance sequences of the coefficients are derived in the one- and two-dimensional cases. Asymptotic results are also provided, allowing one to predict the behaviour of the second-order moments for large lag values or at coarse resolution. In addition, the cross-correlations between the primal and dual wavelets, which play a primary role in our theoretical analysis, are calculated for a number of classical wavelet families. Simulation results are finally provided to validate these results.
1108.5397
Prediction of peptide bonding affinity: kernel methods for nonlinear modeling
stat.ML cs.LG q-bio.QM
This paper presents regression models obtained from a process of blind prediction of peptide binding affinity from provided descriptors for several distinct datasets as part of the 2006 Comparative Evaluation of Prediction Algorithms (COEPRA) contest. This paper finds that kernel partial least squares, a nonlinear partial least squares (PLS) algorithm, outperforms PLS, and that the incorporation of transferable atom equivalent features improves predictive capability.
1108.5431
Providing information can be a stable non-cooperative evolutionary strategy
q-bio.PE cs.NE
Human language is still an embarrassment for evolutionary theory, as the speaker's benefit remains unclear. The willingness to communicate information is shown here to be an evolutionarily stable strategy (ESS), even if acquiring original information from the environment involves significant cost and communicating it provides no material benefit to addressees. In this study, communication is used to advertise the emitter's ability to obtain novel information. We found that communication strategies can take two forms, competitive and uniform; that both strategies are stable; and that they necessarily coexist.
1108.5450
Deterministic multidimensional growth model for small-world networks
physics.data-an cs.SI
We propose a deterministic multidimensional growth model for small-world networks. The model can characterize the distinguishing properties of many real-life networks with a geometric space structure. Our results show that the model possesses the small-world effect: a large clustering coefficient and a small characteristic path length. We also obtain and discuss accurate results for its properties, including the degree distribution, clustering coefficient and network diameter. It is also worth noting that we obtain an accurate analytical expression for the characteristic path length. We verify these main features numerically and experimentally.
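The two small-world quantities discussed above, the clustering coefficient and the characteristic path length, can be computed directly for any graph; a minimal pure-Python sketch (illustrative, not the paper's code or model):

```python
from collections import deque

def clustering_coefficient(adj):
    """Average local clustering coefficient of an undirected graph
    given as {node: set(neighbours)}."""
    total = 0.0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue  # nodes with < 2 neighbours contribute 0
        # count links among the neighbours of v
        links = sum(1 for u in nbrs for w in nbrs if u < w and w in adj[u])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def characteristic_path_length(adj):
    """Mean shortest-path length over all reachable node pairs,
    via a breadth-first search from each node."""
    total, pairs = 0, 0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            v = q.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
        total += sum(d for u, d in dist.items() if u != s)
        pairs += len(dist) - 1
    return total / pairs
```

On a triangle graph both quantities equal 1; on a three-node path the clustering coefficient is 0 and the characteristic path length is 4/3.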
1108.5451
A Uniform Fixpoint Approach to the Implementation of Inference Methods for Deductive Databases
cs.DB
Within the research area of deductive databases three different database tasks have been deeply investigated: query evaluation, update propagation and view updating. Over the last thirty years various inference mechanisms have been proposed for realizing these main functionalities of a rule-based system. However, these inference mechanisms have rarely been used in commercial DB systems until now. One important reason for this is the lack of a uniform approach well-suited for implementation in an SQL-based system. In this paper, we present such a uniform approach in the form of a new version of the soft consequence operator. Additionally, we present improved transformation-based approaches to query optimization, update propagation and view updating, all of which use this operator as the underlying evaluation mechanism.
1108.5460
Personalized Web Services for Web Information Extraction
cs.IR
The field of information extraction from the Web emerged with the growth of the Web and the multiplication of online data sources. This paper is an analysis of information extraction methods. It presents a service-oriented approach to web information extraction considering both web data management and extraction services. We then propose an SOA-based architecture to enhance flexibility and allow on-the-fly modification of web extraction services. An implementation of the proposed architecture is presented at the middleware level of Java Enterprise Edition (JEE) servers.
1108.5475
The Dimension of Subcode-Subfields of Shortened Generalized Reed Solomon Codes
cs.IT math.IT
Reed-Solomon (RS) codes are among the most ubiquitous codes due to their good parameters as well as efficient encoding and decoding procedures. However, RS codes suffer from having a fixed length. In many applications where the length is static, the appropriate length can be obtained from an RS code by shortening or puncturing. Generalized Reed-Solomon (GRS) codes are a generalization of RS codes, whose subfield-subcodes have been extensively studied. In this paper we show that a particular class of GRS codes produces many subfield-subcodes with large dimension. An algorithm for searching through the codes is presented, as well as a list of new codes obtained from this method.
1108.5491
Improving Ranking Using Quantum Probability
cs.IR cs.ET cs.LG physics.data-an
The paper shows that ranking information units by quantum probability differs from ranking them by classical probability when the same data are used for parameter estimation. As the probability of detection (also known as recall or power) and the probability of false alarm (also known as fallout or size) measure the quality of ranking, we point out and show that ranking by quantum probability yields a higher probability of detection than ranking by classical probability at a given probability of false alarm and with the same parameter-estimation data. As quantum probability has provided more effective detectors than classical probability in domains other than data management, we conjecture that a system implementing subspace-based detectors will be more effective than one implementing set-based detectors, the effectiveness being calculated as the expected recall estimated over the probability of detection and the expected fallout estimated over the probability of false alarm.
1108.5505
Event-triggered and self-triggered stabilization of distributed networked control systems
math.OC cs.SY
Event-triggered and self-triggered control have recently been proposed as implementation strategies that considerably reduce the resources required for control. Although most of the work so far has focused on closing a single control loop, some researchers have started to investigate how these new implementation strategies can be applied when closing multiple feedback loops in the presence of physically distributed sensors and actuators. In this paper, we consider a scenario where the distributed sensors, actuators, and controllers communicate via a shared wired channel. We use our recent prescriptive framework for the event-triggered control of nonlinear systems to develop novel policies suitable for the considered distributed scenario. Afterwards, we explain how self-triggering rules can be deduced from the developed event-triggered strategies.
1108.5514
Strategic Learning and Robust Protocol Design for Online Communities with Selfish Users
cs.LG cs.GT cs.SI
This paper focuses on analyzing the free-riding behavior of self-interested users in online communities. Hence, traditional optimization methods for communities composed of compliant users, such as network utility maximization, cannot be applied here. In our prior work, we showed how social reciprocation protocols can be designed in online communities whose populations consist of a continuum of users and are stationary under stochastic permutations. Under these assumptions, we were able to prove that users voluntarily comply with the pre-determined social norms and cooperate with other users in the community by providing their services. In this paper, we generalize the study by analyzing the interactions of self-interested users in online communities that have finite populations and are not stationary. To optimize their long-term performance based on their knowledge, users adapt their strategies to play their best response by solving individual stochastic control problems. The best-response dynamic introduces a stochastic dynamic process in the community, in which the strategies of users evolve over time. We then investigate the long-term evolution of a community, and prove that the community will converge to stochastically stable equilibria which are stable against stochastic permutations. Understanding the evolution of a community provides protocol designers with guidelines for designing social norms in which no user has an incentive to adapt its strategy and deviate from the prescribed protocol, thereby ensuring that the adopted protocol will enable the community to achieve the optimal social welfare.
1108.5515
Robustness of a Tree-like Network of Interdependent Networks
physics.data-an cs.SI physics.soc-ph
Many real-world networks interact with and depend on other networks. We develop an analytical framework for studying interacting networks and present an exact percolation law for a network of $n$ interdependent networks (NON). We present a general framework to study the dynamics of the cascading-failure process at each step caused by an initial failure occurring in the NON system. We study and compare both $n$ coupled Erd\H{o}s-R\'{e}nyi (ER) graphs and $n$ coupled random regular (RR) graphs. We recently found [Gao et al., arXiv:1010.5829] that for an NON composed of $n$ ER networks, each of average degree $k$, the giant component, $P_{\infty}$, is given by $P_{\infty}=p[1-\exp(-kP_{\infty})]^n$, where $1-p$ is the initial fraction of removed nodes. Our general result coincides for $n=1$ with the known Erd\H{o}s-R\'{e}nyi second-order phase transition at a threshold, $p=p_c$, for a single network. For $n=2$ the general result for $P_{\infty}$ corresponds to the $n=2$ result [Buldyrev et al., Nature, 464, (2010)]. Similarly to the ER NON, for $n=1$ the percolation transition at $p_c$ is of second order, while for any $n>1$ it is of first order. The first-order percolation transition in both ER and RR (for $n>1$) is accompanied by cascading failures between the networks due to their interdependencies. However, we find that the robustness of $n$ coupled RR networks of degree $k$ is dramatically higher than that of the $n$ coupled ER networks of average degree $k$. While for an ER NON there exists a critical minimum average degree $k=k_{\min}$, which increases with $n$, below which the system collapses, there is no such analogous $k_{\min}$ for the RR NON system.
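The percolation law above is a fixed-point equation that can be solved by simple iteration; a minimal sketch in Python (the starting point, tolerance and iteration cap are choices of this sketch, not from the paper):

```python
import math

def giant_component(p, k, n, tol=1e-12, max_iter=100000):
    """Iterate P = p * (1 - exp(-k*P))**n for an NON of n interdependent
    ER networks of average degree k, where 1 - p is the initially
    removed fraction of nodes. Returns the giant-component fraction."""
    P = 1.0  # start from the fully intact state
    for _ in range(max_iter):
        P_new = p * (1.0 - math.exp(-k * P)) ** n
        if abs(P_new - P) < tol:
            break
        P = P_new
    return P_new
```

For `n = 1` and `p = 1` this reduces to the classical ER equation `P = 1 - exp(-k*P)`; with `k = 2` the iteration converges to roughly 0.797. For `n > 1` and `p` below the first-order threshold, the iteration collapses to 0, reflecting the cascading failures described above.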
1108.5520
A sentiment analysis of Singapore Presidential Election 2011 using Twitter data with census correction
stat.AP cs.CL cs.SI
Sentiment analysis is a new area in text analytics that focuses on the analysis and understanding of emotions from text patterns. This new form of analysis has been widely adopted in customer relation management, especially in the context of complaint management. With the increasing level of interest in this technology, more and more companies are adopting it and using it to champion their marketing efforts. However, sentiment analysis using Twitter has remained extremely difficult to manage due to sampling bias. In this paper, we discuss the application of reweighting techniques in conjunction with online sentiment divisions to predict the vote percentage that each individual candidate will receive. We discuss in depth various aspects of using sentiment analysis to predict outcomes, as well as potential pitfalls in the estimation due to the anonymous nature of the internet.
1108.5533
A Remark on the Lasso and the Dantzig Selector
math.ST cs.IT math.FA math.IT stat.TH
This article investigates a new parameter for high-dimensional regression with noise: the distortion. The latter has attracted a lot of attention recently with the appearance of new deterministic constructions of 'almost'-Euclidean sections of the L1-ball. It measures how far the intersection of the kernel of the design matrix with the unit L1-ball is from an L2-ball. We show that the distortion holds enough information to derive oracle inequalities (i.e. a comparison to an ideal situation where one knows the s largest coefficients of the target) for the lasso and the Dantzig selector.
1108.5543
Multi-Robot Organisms: State of the Art
cs.RO cs.NE cs.SY
This paper surveys the state of the art in the field of artificial multi-robot organisms. It briefly considers mechatronic development, sensor and computational equipment, and the software framework, and introduces one of the Grand Challenges for swarm and reconfigurable robotics.
1108.5547
Instantons causing iterative decoding to cycle
cs.IT math.IT
It is speculated that the most probable channel-noise realizations (instantons) that cause the iterative decoding of low-density parity-check codes to fail make the decoding fail to converge. Wiberg's formula is generalized to the case in which the part of a computational tree that contributes to the output at its center is ambiguous. Two methods of finding the instantons for a large number of iterations are presented and tested on Tanner's [155, 64, 20] code and the Gaussian channel. An inherently dynamic instanton with an effective distance of 11.475333 is found.
1108.5567
Parsing Combinatory Categorial Grammar with Answer Set Programming: Preliminary Report
cs.AI cs.CL
Combinatory categorial grammar (CCG) is a grammar formalism used for natural language parsing. CCG assigns structured lexical categories to words and uses a small set of combinatory rules to combine these categories to parse a sentence. In this work we propose and implement a new approach to CCG parsing that relies on a prominent knowledge representation formalism, answer set programming (ASP) -- a declarative programming paradigm. We formulate the task of CCG parsing as a planning problem and use an ASP computational tool to compute solutions that correspond to valid parses. Compared to other approaches, such a declarative method obviates the need to implement a specific parsing algorithm. Our approach aims at producing all semantically distinct parse trees for a given sentence. From this goal, normalization and efficiency issues arise, and we deal with them by combining and extending existing strategies. We have implemented a CCG parsing toolkit -- AspCcgTk -- that uses ASP as its main computational means. The C&C supertagger can be used as a preprocessor within AspCcgTk, which allows us to achieve wide-coverage natural language parsing.
1108.5575
Getting Beyond the State of the Art of Information Retrieval with Quantum Theory
cs.IR cs.LG physics.data-an
According to the probability ranking principle, the document set with the highest values of probability of relevance optimizes information retrieval effectiveness, provided the probabilities are estimated as accurately as possible. The key point of this principle is the separation of the document set into two subsets with a given level of fallout and with the highest recall. If subsets and set measures are replaced by subspaces and space measures, we obtain an alternative theory stemming from Quantum Theory. That theory is called vector probability because vectors represent events, as sets do in classical probability. The paper shows that the separation into vector subspaces is more effective than the separation into subsets with the same available evidence. The result is proved mathematically and verified experimentally. In general, the paper suggests that quantum theory is not only a source of rhetorical inspiration, but a sufficient condition for improving retrieval effectiveness in a principled way.
1108.5586
FdConfig: A Constraint-Based Interactive Product Configurator
cs.AI
We present a constraint-based approach to interactive product configuration. Our configurator tool FdConfig is based on feature models for the representation of the product domain. Such models can be directly mapped into constraint satisfaction problems and dealt with by appropriate constraint solvers. During the interactive configuration process the user generates new constraints as a result of his configuration decisions and even may retract constraints posted earlier. We discuss the configuration process, explain the underlying techniques and show optimizations.
1108.5592
A Performance Study of Data Mining Techniques: Multiple Linear Regression vs. Factor Analysis
cs.DB
The growing volume of data creates an interesting challenge: the need for data-analysis tools that discover regularities in these data. Data mining has emerged as a discipline that contributes tools for data analysis, discovery of hidden knowledge, and autonomous decision making in many application domains. The purpose of this study is to compare the performance of two data mining techniques, viz. factor analysis and multiple linear regression, for different sample sizes on three unique sets of data. The performance of the two techniques is compared on the following parameters: mean square error (MSE), R-square, adjusted R-square, condition number, root mean square error (RMSE), number of variables included in the prediction model, modified coefficient of efficiency, F-value, and a test of normality. These parameters have been computed using various data-mining tools such as SPSS, XLstat, Stata, and MS-Excel. It is seen that for all the given datasets, factor analysis outperforms multiple linear regression, but the absolute value of prediction accuracy varies between the three datasets, indicating that the data distribution and data characteristics play a major role in choosing the correct prediction technique.
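Several of the comparison parameters named above follow directly from an ordinary least-squares fit; a minimal NumPy sketch of their computation (an illustrative implementation, not the paper's actual tooling):

```python
import numpy as np

def regression_metrics(X, y):
    """Fit y = b0 + X @ b by ordinary least squares and report some of
    the comparison metrics used in the study: MSE, RMSE, R-square and
    adjusted R-square."""
    n, k = X.shape
    A = np.column_stack([np.ones(n), X])          # add an intercept column
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)  # least-squares solution
    resid = y - A @ beta
    mse = float(np.mean(resid ** 2))
    ss_res = float(np.sum(resid ** 2))
    ss_tot = float(np.sum((y - y.mean()) ** 2))
    r2 = 1.0 - ss_res / ss_tot
    r2_adj = 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)
    return {"MSE": mse, "RMSE": mse ** 0.5, "R2": r2, "R2_adj": r2_adj}
```

On data that are exactly linear in the predictors, the sketch reports an MSE near zero and an R-square of 1; on noisy data the adjusted R-square penalizes the number of predictors, which is one of the parameters the study tracks.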
1108.5593
Unsteady Hydromagnetic Flow of Viscoelastic Fluid down an Open Inclined Channel
physics.flu-dyn cs.CE
In this paper, we study the unsteady hydromagnetic flow of a Walter's fluid (Model B') down an open inclined channel of width 2a and depth d under gravity, the walls of the channel being normal to the surface of the bottom, under the influence of a uniform transverse magnetic field. A uniform tangential stress is applied at the free surface in the direction of flow. We evaluate the velocity distribution by using the Laplace transform and finite Fourier sine transform techniques. The velocity distribution has been obtained for different forms of the time-dependent pressure gradient g(t), viz., (i) a constant, (ii) an exponentially decreasing function of time, and (iii) a cosine function of time. The effects of the magnetic parameter M, the Reynolds number R and the viscoelastic parameter K on the velocity distribution are discussed in the three cases.
1108.5619
Modification of GTD from Flat File Format to OLAP for Data Mining
cs.DB
This document is part of original research work by the authors in a bid to explore new fields for applying Data Mining Techniques. The sample data is part of a large data set from University of Maryland (UMD) and outlines how more meaningful patterns can be discovered by preprocessing the data in the form of OLAP cubes.
1108.5622
Optimization of Lyapunov Invariants in Verification of Software Systems (Extended Version)
cs.SY cs.SE math.OC
The paper proposes a control-theoretic framework for verification of numerical software systems, and puts forward software verification as an important application of control and systems theory. The idea is to transfer Lyapunov functions and the associated computational techniques from control systems analysis and convex optimization to verification of various software safety and performance specifications. These include but are not limited to absence of overflow, absence of division-by-zero, termination in finite time, presence of dead-code, and certain user-specified assertions. Central to this framework are Lyapunov invariants. These are properly constructed functions of the program variables, and satisfy certain properties-resembling those of Lyapunov functions-along the execution trace. The search for the invariants can be formulated as a convex optimization problem. If the associated optimization problem is feasible, the result is a certificate for the specification.
1108.5624
Multi-Robot Searching Algorithm Using Levy Flight and Artificial Potential Field
cs.RO cs.SY
An efficient search algorithm is crucial in robotics, especially for exploration missions, where target availability is unknown and the condition of the environment is highly unpredictable. In a very large environment, it is not sufficient for a single robot to scan an area or volume; multiple robots should be involved to perform a collective exploration. In this paper, we propose to combine a bio-inspired search strategy called Levy flight with the artificial potential field method to obtain an efficient search algorithm for multi-robot applications. The main focus of this work is not only to prove the concept and measure the efficiency of the algorithm by experiments, but also to develop an appropriate generic framework that can be implemented both in simulation and on real robotic platforms. Several experiments comparing different search algorithms are also performed.
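A Levy flight draws step lengths from a heavy-tailed power law, so a searcher mixes many short local moves with occasional long jumps; a minimal 2-D sketch (the Pareto-tail exponent and scale below are illustrative choices, not the paper's parameters, and the potential-field component is omitted):

```python
import math
import random

def levy_step(alpha=1.5, scale=1.0, rng=random):
    """One 2-D Levy-flight step: a power-law step length (Pareto tail
    with exponent alpha, minimum length `scale`) in a uniformly random
    direction, drawn by inverse-transform sampling."""
    u = 1.0 - rng.random()                    # u in (0, 1]
    length = scale * u ** (-1.0 / alpha)      # heavy-tailed step length
    theta = rng.uniform(0.0, 2.0 * math.pi)
    return length * math.cos(theta), length * math.sin(theta)

def levy_walk(steps, alpha=1.5, seed=42):
    """Trajectory of a single searcher performing a Levy flight."""
    rng = random.Random(seed)
    x = y = 0.0
    path = [(x, y)]
    for _ in range(steps):
        dx, dy = levy_step(alpha, rng=rng)
        x, y = x + dx, y + dy
        path.append((x, y))
    return path
```

In a multi-robot setting, each robot would run such a walk while a repulsive artificial potential field between robots spreads them over the search space; that coupling is the part this sketch leaves out.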
1108.5626
Nested HEX-Programs
cs.AI
Answer-Set Programming (ASP) is an established declarative programming paradigm. However, classical ASP lacks subprogram calls as in procedural programming, and access to external computations (like remote procedure calls) in general. This feature is desired for increasing modularity and---assuming proper access in place---(meta-)reasoning over subprogram results. While HEX-programs extend classical ASP with external source access, they do not support calls of (sub-)programs upfront. We present nested HEX-programs, which extend HEX-programs to provide the desired feature in a user-friendly manner. Notably, the answer sets of called sub-programs can be individually accessed. This is particularly useful for applications that need to reason over answer sets, like belief set merging, user-defined aggregate functions, or preferences over answer sets.
1108.5643
Collective Adaptive Systems: Challenges Beyond Evolvability
cs.ET cs.CY cs.NE
This position paper overviews several challenges of collective adaptive systems that lie beyond the research objectives of current leading projects in ICT, and especially in FET, initiatives. Attention is paid not only to challenges and new research topics, but also to their impact and potential breakthroughs in information and communication technologies.
1108.5667
A prototype of a knowledge-based programming environment
cs.AI cs.LO
In this paper we present a proposal for a knowledge-based programming environment. In such an environment, declarative background knowledge, procedures, and concrete data are represented in suitable languages and combined in a flexible manner. This leads to a highly declarative programming style. We illustrate our approach on an example and report about our prototype implementation.
1108.5668
Datum-Wise Classification: A Sequential Approach to Sparsity
cs.AI cs.LG
We propose a novel classification technique whose aim is to select an appropriate representation for each datapoint, in contrast to the usual approach of selecting a representation encompassing the whole dataset. This datum-wise representation is found by using a sparsity-inducing empirical risk, which is a relaxation of the standard L0-regularized risk. The classification problem is modeled as a sequential decision process that sequentially chooses, for each datapoint, which features to use before classifying. Datum-Wise Classification extends naturally to multi-class tasks, and we describe a specific case where our inference has equivalent complexity to a traditional linear classifier, while still using a variable number of features. We compare our classifier to classical L1-regularized linear models (L1-SVM and LARS) on a set of common binary and multi-class datasets and show that for an equal average number of features used we can get improved performance using our method.
1108.5703
Web Pages Clustering: A New Approach
cs.IR
The rapid growth of the web has resulted in a vast volume of information, and making information available to the user quickly is vital. The English language (or any language, for that matter) has a lot of ambiguity in the usage of words, so there is no guarantee that a keyword-based search engine will provide the required results. This paper introduces the use of a standardised dictionary to obtain the context in which a keyword is used and, in turn, to cluster the results based on this context. These ideas can be merged with a metasearch engine to enhance search efficiency.
1108.5710
Generalized Fast Approximate Energy Minimization via Graph Cuts: Alpha-Expansion Beta-Shrink Moves
cs.CV cs.AI
We present alpha-expansion beta-shrink moves, a simple generalization of the widely-used alpha-beta swap and alpha-expansion algorithms for approximate energy minimization. We show that in a certain sense, these moves dominate both alpha-beta-swap and alpha-expansion moves, but unlike previous generalizations the new moves require no additional assumptions and are still solvable in polynomial-time. We show promising experimental results with the new moves, which we believe could be used in any context where alpha-expansions are currently employed.
1108.5717
Structure Selection from Streaming Relational Data
cs.AI
Statistical relational learning techniques have been successfully applied in a wide range of relational domains. In most of these applications, the human designers capitalized on their background knowledge by following a trial-and-error trajectory, where relational features are manually defined by a human engineer, parameters are learned for those features on the training data, the resulting model is validated, and the cycle repeats as the engineer adjusts the set of features. This paper seeks to streamline application development in large relational domains by introducing a light-weight approach that efficiently evaluates relational features on pieces of the relational graph that are streamed to it one at a time. We evaluate our approach on two social media tasks and demonstrate that it leads to more accurate models that are learned faster.
1108.5719
Computational topology for configuration spaces of hard disks
math.AT cs.RO math-ph math.MP
We explore the topology of configuration spaces of hard disks experimentally, and show that several changes in the topology can already be observed with a small number of particles. The results illustrate a theorem of Baryshnikov, Bubenik, and Kahle that critical points correspond to configurations of disks with balanced mechanical stresses, and suggest conjectures about the asymptotic topology as the number of disks tends to infinity.
1108.5720
Conjugate Variables as a Resource in Signal and Image Processing
cs.CV physics.data-an quant-ph
In this paper we develop a new technique to model joint distributions of signals. Our technique is based on quantum mechanical conjugate variables. We show that the transition probability of quantum states leads to a distance function on the signals. This distance function obeys the triangle inequality on all quantum states and becomes a metric on pure quantum states. Treating signals as conjugate variables allows us to create a new approach to segment them. Keywords: Quantum information, transition probability, Euclidean distance, Fubini-study metric, Bhattacharyya coefficients, conjugate variable, signal/sensor fusion, signal and image segmentation.
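The distance function described above is the Fubini-Study metric, the arccosine of the modulus of the overlap between two pure states, with the transition probability given by the squared modulus; a minimal NumPy sketch of these two quantities (an illustrative computation, not the paper's code):

```python
import numpy as np

def normalize(v):
    """Normalize a complex state vector to unit norm."""
    return v / np.linalg.norm(v)

def transition_probability(psi, phi):
    """Quantum transition probability |<psi|phi>|^2 between pure states."""
    return abs(np.vdot(psi, phi)) ** 2

def fubini_study(psi, phi):
    """Fubini-Study distance arccos|<psi|phi>|, a metric on pure states
    that therefore satisfies the triangle inequality."""
    overlap = abs(np.vdot(psi, phi))
    return float(np.arccos(np.clip(overlap, 0.0, 1.0)))
```

Identical states have transition probability 1 and distance 0; orthogonal states have transition probability 0 and the maximal distance pi/2, and the triangle inequality can be checked numerically on random pure states.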
1108.5724
On the Stability of Linear Discrete-Time Fuzzy Systems
cs.SY math.OC
In this paper, linear stationary discrete-time systems whose state variables and dynamic coefficients are represented by fuzzy numbers are studied; some stability criteria are provided, and the bounds of the set of solutions are characterized in the case of positive systems.
1108.5756
Sensitivity And Out-Of-Sample Error in Continuous Time Data Assimilation
physics.ao-ph cs.SY math.OC
Data assimilation refers to the problem of finding trajectories of a prescribed dynamical model in such a way that the output of the model (usually some function of the model states) follows a given time series of observations. Typically, though, these two requirements cannot both be met at the same time--tracking the observations is not possible without the trajectory deviating from the proposed model equations, while adherence to the model requires deviations from the observations. Thus, data assimilation faces a trade-off. In this contribution, the sensitivity of the data assimilation with respect to perturbations in the observations is identified as the parameter which controls the trade-off. A relation between the sensitivity and the out-of-sample error is established, which allows one to calculate the latter under operational conditions. A minimum out-of-sample error is proposed as a criterion to set an appropriate sensitivity and to settle the discussed trade-off. Two approaches to data assimilation are considered, namely variational data assimilation and Newtonian nudging, also known as synchronisation. Numerical examples demonstrate the feasibility of the approach.
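Newtonian nudging, one of the two approaches mentioned, admits a very small sketch: a feedback term proportional to the observation mismatch is added to the model dynamics, so the trajectory is pulled toward the data while still deviating from the pure model, exactly the trade-off described above. The scalar model, gain and step size below are illustrative choices, not the paper's setup:

```python
def nudge(f, h, obs, x0, gain, dt):
    """Newtonian nudging (synchronisation): integrate
    dx/dt = f(x) + gain * (y_obs - h(x)) with forward Euler,
    one observation y_obs per time step."""
    x = x0
    traj = [x]
    for y in obs:
        x = x + dt * (f(x) + gain * (y - h(x)))
        traj.append(x)
    return traj
```

For a decaying model dx/dt = -x with full-state observations h(x) = x held at y = 1, the nudged state settles at gain/(1 + gain): the larger the gain (i.e., the sensitivity to the observations), the closer the trajectory tracks the data and the further it departs from the unforced model.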
1108.5781
Phase Transition in Distance-Based Phylogeny Reconstruction
math.PR cs.CE cs.DS math.ST q-bio.PE stat.TH
We introduce a new distance-based phylogeny reconstruction technique which provably achieves, at sufficiently short branch lengths, a logarithmic sequence-length requirement---improving significantly over previous polynomial bounds for distance-based methods and matching existing results for general methods. The technique is based on an averaging procedure that implicitly reconstructs ancestral sequences. By the same token, we extend previous results on phase transitions in phylogeny reconstruction to general time-reversible models. More precisely, we show that in the so-called Kesten-Stigum zone (roughly, a region of the parameter space where ancestral sequences are well approximated by "linear combinations" of the observed sequences) sequences of length $O(\log n)$ suffice for reconstruction when branch lengths are discretized. Here $n$ is the number of extant species. Our results challenge, to some extent, the conventional wisdom that estimates of evolutionary distances alone carry significantly less information about phylogenies than full sequence datasets.
1108.5784
Probability Ranking in Vector Spaces
cs.IR cs.LG
The Probability Ranking Principle states that the document set with the highest values of probability of relevance optimizes information retrieval effectiveness given the probabilities are estimated as accurately as possible. The key point of the principle is the separation of the document set into two subsets with a given level of fallout and with the highest recall. The paper introduces the separation between two vector subspaces and shows that the separation yields a more effective performance than the optimal separation into subsets with the same available evidence, the performance being measured with recall and fallout. The result is proved mathematically and exemplified experimentally.
1108.5794
A Constraint Logic Programming Approach for Computing Ordinal Conditional Functions
cs.AI
In order to give appropriate semantics to qualitative conditionals of the form "if A then normally B", ordinal conditional functions (OCFs) ranking the possible worlds according to their degree of plausibility can be used. An OCF accepting all conditionals of a knowledge base R can be characterized as the solution of a constraint satisfaction problem. We present a high-level, declarative approach using constraint logic programming techniques for solving this constraint satisfaction problem. In particular, the approach developed here supports the generation of all minimal solutions; these minimal solutions are of special interest as they provide a basis for model-based inference from R.
1108.5825
Confidentiality-Preserving Data Publishing for Credulous Users by Extended Abduction
cs.AI
Publishing private data on external servers incurs the problem of how to avoid unwanted disclosure of confidential data. We study a problem of confidentiality in extended disjunctive logic programs and show how it can be solved by extended abduction. In particular, we analyze how credulous non-monotonic reasoning affects confidentiality.
1108.5837
Translating Answer-Set Programs into Bit-Vector Logic
cs.AI cs.LO
Answer set programming (ASP) is a paradigm for declarative problem solving where problems are first formalized as rule sets, i.e., answer-set programs, in a uniform way and then solved by computing answer sets for programs. The satisfiability modulo theories (SMT) framework follows a similar modelling philosophy but the syntax is based on extensions of propositional logic rather than rules. Quite recently, a translation from answer-set programs into difference logic was provided---enabling the use of particular SMT solvers for the computation of answer sets. In this paper, the translation is revised for another SMT fragment, namely that based on fixed-width bit-vector theories. Thus, even further SMT solvers can be harnessed for the task of computing answer sets. The results of a preliminary experimental comparison are also reported. They suggest a level of performance which is similar to that achieved via difference logic.
1108.5838
Off-grid Direction of Arrival Estimation Using Sparse Bayesian Inference
stat.AP cs.IT math.IT stat.ML
Direction of arrival (DOA) estimation is a classical problem in signal processing with many practical applications. Research on this problem has recently advanced owing to the development of methods based on sparse signal reconstruction. While these methods have shown advantages over conventional ones, there are still difficulties in practical situations where true DOAs are not on the discretized sampling grid. To deal with such an off-grid DOA estimation problem, this paper studies an off-grid model that takes into account effects of the off-grid DOAs and has a smaller modeling error. An iterative algorithm is developed based on the off-grid model from a Bayesian perspective while joint sparsity among different snapshots is exploited by assuming a Laplace prior for signals at all snapshots. The new approach applies to both single snapshot and multi-snapshot cases. Numerical simulations show that the proposed algorithm has improved accuracy in terms of mean squared estimation error. The algorithm can maintain high estimation accuracy even under a very coarse sampling grid.
1108.5860
Linear Operator Inequality and Null Controllability with Vanishing Energy for unbounded control systems
math.OC cs.SY math.AP
We consider linear systems on a separable Hilbert space $H$, which are null controllable at some time $T_0>0$ under the action of a point or boundary control. Parabolic and hyperbolic control systems usually studied in applications are special cases. To every initial state $ y_0 \in H$ we associate the minimal "energy" needed to transfer $ y_0 $ to $ 0 $ in a time $ T \ge T_0$ (the "energy" of a control being the square of its $ L^2 $ norm). We give both necessary and sufficient conditions under which the minimal energy converges to $ 0 $ for $ T\to+\infty $. This extends to boundary control systems the concept of null controllability with vanishing energy introduced by Priola and Zabczyk (Siam J. Control Optim. 42 (2003)) for distributed systems. The proofs in the Priola-Zabczyk paper depend on properties of the associated Riccati equation, which are not available in the present, general setting. Here we base our results on new properties of the quadratic regulator problem with stability and the Linear Operator Inequality.
1108.5881
Spread Decoding in Extension Fields
cs.IT math.IT
A spread code is a set of vector spaces of a fixed dimension over a finite field Fq with certain properties used for random network coding. It can be constructed in different ways which lead to different decoding algorithms. In this work we present a new representation of spread codes with a minimum distance decoding algorithm which is efficient when the codewords, the received space and the error space have small dimension.
1108.5890
Coordinating Interfering Transmissions in Cooperative Wireless LANs
cs.NI cs.IT math.IT
In this paper we present a cooperative medium access control (MAC) protocol that is designed for a physical layer that can decode interfering transmissions in distributed wireless networks. The proposed protocol proactively forces two independent packet transmissions to interfere in a controlled and cooperative manner. The protocol ensures that when a node desires to transmit a unicast packet, regardless of the destination, it coordinates with minimal overhead with relay nodes in order to concurrently transmit over the wireless channel with a third node. The relay is responsible for allowing packets from the two selected nodes to interfere only when the desired packets can be decoded at the appropriate destinations and increase the sum-rate of the cooperative transmission. In case this is not feasible, classic cooperative or direct transmission is adopted. To enable distributed, uncoordinated, and adaptive operation of the protocol, a relay selection mechanism is introduced so that the optimal relay is selected dynamically, depending on the channel conditions. The most important advantage of the protocol is that interfering transmissions can originate from completely independent unicast transmissions from two senders. We present simulation results that validate the efficacy of our proposed scheme in terms of throughput and delay.
1108.5934
The Sznajd model with limited persuasion: competition between high-reputation and hesitant agents
physics.soc-ph cond-mat.stat-mech cs.SI
In this work we study a modified version of the two-dimensional Sznajd sociophysics model. In particular, we consider the effects of agents' reputations in the persuasion rules. In other words, a high-reputation group with a common opinion may convince their neighbors with probability $p$, which induces an increase of the group's reputation. On the other hand, there is always a probability $q=1-p$ of the neighbors to keep their opinions, which induces a decrease of the group's reputation. These rules describe a competition between groups with high reputation and hesitant agents, which makes the full-consensus states (with all spins pointing in one direction) more difficult to be reached. As consequences, the usual phase transition does not occur for $p<p_{c} \sim 0.69$ and the system presents realistic democracy-like situations, where the majority of spins are aligned in a certain direction, for a wide range of parameters.
1108.5935
The Rabin cryptosystem revisited
math.NT cs.CR cs.IT math.IT
The Rabin public-key cryptosystem is revisited with a focus on the problem of identifying the encrypted message unambiguously for any pair of primes. In particular, a deterministic scheme using quartic reciprocity is described that works for primes congruent to 5 modulo 8, a case that was still open. Both theoretical and practical solutions are presented. The Rabin signature is also reconsidered and a deterministic padding mechanism is proposed.
1108.5943
Proof System for Plan Verification under 0-Approximation Semantics
cs.AI cs.LO
In this paper a proof system is developed for plan verification problems $\{X\}c\{Y\}$ and $\{X\}c\{KW p\}$ under 0-approximation semantics for ${\mathcal A}_K$. Here, for a plan $c$, two sets $X,Y$ of fluent literals, and a literal $p$, $\{X\}c\{Y\}$ (resp. $\{X\}c\{KW p\}$) means that all literals of $Y$ become true (resp. $p$ becomes known) after executing $c$ in any initial state in which all literals in $X$ are true. Then, soundness and completeness are proved. The proof system allows verifying plans and generating plans as well.
1108.5974
Emotional Analysis of Blogs and Forums Data
cs.CL physics.data-an physics.soc-ph
We perform a statistical analysis of emotionally annotated comments in two large online datasets, examining chains of consecutive posts in the discussions. Using comparisons with randomised data we show that there is a high level of correlation for the emotional content of messages.
1108.6003
Characterization and exploitation of community structure in cover song networks
cs.IR cs.MM cs.SI physics.data-an stat.ML
The use of community detection algorithms is explored within the framework of cover song identification, i.e. the automatic detection of different audio renditions of the same underlying musical piece. Until now, this task has been posed as a typical query-by-example task, where one submits a query song and the system retrieves a list of possible matches ranked by their similarity to the query. In this work, we propose a new approach which uses song communities to provide more relevant answers to a given query. Starting from the output of a state-of-the-art system, songs are embedded in a complex weighted network whose links represent similarity (related musical content). Communities inside the network are then recognized as groups of covers and this information is used to enhance the results of the system. In particular, we show that this approach increases both the coherence and the accuracy of the system. Furthermore, we provide insight into the internal organization of individual cover song communities, showing that there is a tendency for the original song to be central within the community. We postulate that the methods and results presented here could be relevant to other query-by-example tasks.
1108.6007
Domain-specific Languages in a Finite Domain Constraint Programming System
cs.AI
In this paper, we present domain-specific languages (DSLs) that we devised for their use in the implementation of a finite domain constraint programming system, available as library(clpfd) in SWI-Prolog and YAP-Prolog. These DSLs are used in propagator selection and constraint reification. In these areas, they lead to concise specifications that are easy to read and reason about. At compilation time, these specifications are translated to Prolog code, reducing interpretative run-time overheads. The devised languages can be used in the implementation of other finite domain constraint solvers as well and may contribute to their correctness, conciseness and efficiency.
1108.6016
Improving Entity Resolution with Global Constraints
cs.DB cs.IR
Some of the greatest advances in web search have come from leveraging socio-economic properties of online user behavior. Past advances include PageRank, anchor text, hubs-authorities, and TF-IDF. In this paper, we investigate another socio-economic property that, to our knowledge, has not yet been exploited: sites that create lists of entities, such as IMDB and Netflix, have an incentive to avoid gratuitous duplicates. We leverage this property to resolve entities across the different web sites, and find that we can obtain substantial improvements in resolution accuracy. This improvement in accuracy also translates into robustness, which often reduces the amount of training data that must be labeled for comparing entities across many sites. Furthermore, the technique provides robustness when resolving sites that have some duplicates, even without first removing these duplicates. We present algorithms with very strong precision and recall, and show that max weight matching, while appearing to be a natural choice, turns out to have poor performance in some situations. The presented techniques are now being used in the back-end entity resolution system at a major Internet search engine.
1108.6031
Robust Adaptive Geometric Tracking Controls on SO(3) with an Application to the Attitude Dynamics of a Quadrotor UAV
math.OC cs.SY
This paper provides new results for a robust adaptive tracking control of the attitude dynamics of a rigid body. Both the attitude dynamics and the proposed control system are globally expressed on the special orthogonal group, to avoid complexities and ambiguities associated with other attitude representations such as Euler angles or quaternions. By designing an adaptive law for the inertia matrix of a rigid body, the proposed control system can asymptotically follow an attitude command without knowledge of the inertia matrix, and it is extended to guarantee boundedness of tracking errors in the presence of unstructured disturbances. These results are illustrated by numerical examples and experiments for the attitude dynamics of a quadrotor UAV.
1108.6046
Optimal Deterministic Polynomial-Time Data Exchange for Omniscience
cs.IT cs.CR math.IT
We study the problem of constructing a deterministic polynomial time algorithm that achieves omniscience, in a rate-optimal manner, among a set of users that are interested in a common file but each has only partial knowledge about it as side-information. Assuming that the collective information among all the users is sufficient to allow the reconstruction of the entire file, the goal is to minimize the (possibly weighted) amount of bits that these users need to exchange over a noiseless public channel in order for all of them to learn the entire file. Using established connections to the multi-terminal secrecy problem, our algorithm also implies a polynomial-time method for constructing a maximum size secret shared key in the presence of an eavesdropper. We consider the following types of side-information settings: (i) side information in the form of uncoded fragments/packets of the file, where the users' side-information consists of subsets of the file; (ii) side information in the form of linearly correlated packets, where the users have access to linear combinations of the file packets; and (iii) the general setting where the users' side-information has an arbitrary (i.i.d.) correlation structure. Building on results from combinatorial optimization, we provide a polynomial-time algorithm (in the number of users) that first finds the optimal rate allocations among these users and then determines an explicit transmission scheme (i.e., a description of which user should transmit what information) for cases (i) and (ii).
1108.6088
No Internal Regret via Neighborhood Watch
cs.LG cs.GT
We present an algorithm which attains O(\sqrt{T}) internal (and thus external) regret for finite games with partial monitoring under the local observability condition. Recently, this condition has been shown by (Bartok, Pal, and Szepesvari, 2011) to imply the O(\sqrt{T}) rate for partial monitoring games against an i.i.d. opponent, and the authors conjectured that the same holds for non-stochastic adversaries. We resolve this conjecture in the affirmative, completing the characterization of possible rates for finite partial-monitoring games, an open question stated by (Cesa-Bianchi, Lugosi, and Stoltz, 2006). Our regret guarantees also hold for the more general model of partial monitoring with random signals.
1108.6113
A New Computationally Efficient Measure of Topological Redundancy of Biological and Social Networks
physics.soc-ph cs.DM cs.SI math.DS q-bio.MN
It is well-known that biological and social interaction networks have a varying degree of redundancy, though a consensus on the precise cause of this is so far lacking. In this paper, we introduce a topological redundancy measure for labeled directed networks that is formal, computationally efficient and applicable to a variety of directed networks such as cellular signaling, metabolic and social interaction networks. We demonstrate the computational efficiency of our measure by computing its value and statistical significance on a number of biological and social networks with up to several thousands of nodes and edges. Our results suggest a number of interesting observations: (1) social networks are more redundant than their biological counterparts, (2) transcriptional networks are less redundant than signaling networks, (3) the topological redundancy of the C. elegans metabolic network is largely due to its inclusion of currency metabolites, and (4) the redundancy of signaling networks is highly (negatively) correlated with the monotonicity of their dynamics.
1108.6114
Projective Parameterized Linear Codes Arising from some Matrices and their Main Parameters
cs.IT math.IT
In this paper we estimate the main parameters of some evaluation codes known as projective parameterized codes. We determine the length of these codes and give a formula for the dimension in terms of the Hilbert function associated to two ideals, one of them being the vanishing ideal of the projective torus. We also derive an upper bound for the minimum distance and, in some cases, lower bounds for the regularity index and the minimum distance. These lower bounds apply in several cases, particularly for any projective parameterized code associated to the incidence matrix of uniform clutters, and hence in the case of graphs.
1108.6121
The Value of Feedback in Decentralized Detection
cs.IT math.IT stat.AP
We consider the decentralized binary hypothesis testing problem in networks with feedback, where some or all of the sensors have access to compressed summaries of other sensors' observations. We study certain two-message feedback architectures, in which every sensor sends two messages to a fusion center, with the second message based on full or partial knowledge of the first messages of the other sensors. We also study one-message feedback architectures, in which each sensor sends one message to a fusion center, with a group of sensors having full or partial knowledge of the messages from the sensors not in that group. Under either a Neyman-Pearson or a Bayesian formulation, we show that the asymptotically optimal (in the limit of a large number of sensors) detection performance (as quantified by error exponents) does not benefit from the feedback messages, if the fusion center remembers all sensor messages. However, feedback can improve the Bayesian detection performance in the one-message feedback architecture if the fusion center has limited memory; for that case, we determine the corresponding optimal error exponents.
1108.6132
Distributed MAC Protocol Supporting Physical-Layer Network Coding
cs.NI cs.DC cs.IT math.IT
Physical-layer network coding (PNC) is a promising approach for wireless networks. It allows nodes to transmit simultaneously. Due to the difficulties of scheduling simultaneous transmissions, existing works on PNC are based on simplified medium access control (MAC) protocols, which are not applicable to general multi-hop wireless networks, to the best of our knowledge. In this paper, we propose a distributed MAC protocol that supports PNC in multi-hop wireless networks. The proposed MAC protocol is based on the carrier sense multiple access (CSMA) strategy and can be regarded as an extension to the IEEE 802.11 MAC protocol. In the proposed protocol, each node collects information on the queue status of its neighboring nodes. When a node finds that there is an opportunity for some of its neighbors to perform PNC, it notifies its corresponding neighboring nodes and initiates the process of packet exchange using PNC, with the node itself as a relay. During the packet exchange process, the relay also works as a coordinator which coordinates the transmission of source nodes. Meanwhile, the proposed protocol is compatible with conventional network coding and conventional transmission schemes. Simulation results show that the proposed protocol is advantageous in various scenarios of wireless applications.
1108.6146
Use of a speed equation for numerical simulation of hydraulic fractures
physics.flu-dyn cs.CE
The paper treats the propagation of a hydraulically driven crack. We explicitly write the local speed equation, which facilitates using the theory of propagating interfaces. It is shown that when neglecting the lag between the liquid front and the crack tip, the lubrication PDE yields that a solution satisfies the speed equation identically. This implies that for zero or small lag, the boundary value problem appears ill-posed when solved numerically. We suggest $\epsilon$-regularization, which consists in employing the speed equation together with a prescribed BC on the front to obtain a new BC formulated at a small distance behind the front rather than on the front itself. It is shown that $\epsilon$-regularization provides accurate and stable results with reasonable time expense. It is also shown that the speed equation gives a key to the proper choice of unknown functions when solving a hydraulic fracture problem numerically.
1108.6150
A unified formulation of Gaussian vs. sparse stochastic processes - Part I: Continuous-domain theory
cs.IT math.IT math.PR
We introduce a general distributional framework that results in a unifying description and characterization of a rich variety of continuous-time stochastic processes. The cornerstone of our approach is an innovation model that is driven by some generalized white noise process, which may be Gaussian or not (e.g., Laplace, impulsive Poisson or alpha stable). This allows for a conceptual decoupling between the correlation properties of the process, which are imposed by the whitening operator L, and its sparsity pattern which is determined by the type of noise excitation. The latter is fully specified by a Levy measure. We show that the range of admissible innovation behavior varies between the purely Gaussian and super-sparse extremes. We prove that the corresponding generalized stochastic processes are well-defined mathematically provided that the (adjoint) inverse of the whitening operator satisfies some Lp bound for p>=1. We present a novel operator-based method that yields an explicit characterization of all Levy-driven processes that are solutions of constant-coefficient stochastic differential equations. When the underlying system is stable, we recover the family of stationary CARMA processes, including the Gaussian ones. The approach remains valid when the system is unstable and leads to the identification of potentially useful generalizations of the Levy processes, which are sparse and non-stationary. Finally, we show how we can apply finite difference operators to obtain a stationary characterization of these processes that is maximally decoupled and stable, irrespective of the location of the poles in the complex plane.
1108.6152
A unified formulation of Gaussian vs. sparse stochastic processes - Part II: Discrete-domain theory
cs.IT math.IT math.PR
This paper is devoted to the characterization of an extended family of CARMA (continuous-time autoregressive moving average) processes that are solutions of stochastic differential equations driven by white Levy innovations. These are completely specified by: (1) a set of poles and zeros that fixes their correlation structure, and (2) a canonical infinitely-divisible probability distribution that controls their degree of sparsity (with the Gaussian model corresponding to the least sparse scenario). The generalized CARMA processes are either stationary or non-stationary, depending on the location of the poles in the complex plane. The most basic non-stationary representatives (with a single pole at the origin) are the Levy processes, which are the non-Gaussian counterparts of Brownian motion. We focus on the general analog-to-discrete conversion problem and introduce a novel spline-based formalism that greatly simplifies the derivation of the correlation properties and joint probability distributions of the discrete versions of these processes. We also rely on the concept of generalized increment process, which suppresses all long range dependencies, to specify an equivalent discrete-domain innovation model. A crucial ingredient is the existence of a minimally-supported function associated with the whitening operator L; this B-spline, which is fundamental to our formulation, appears in most of our formulas, both at the level of the correlation and the characteristic function. We make use of these discrete-domain results to numerically generate illustrative examples of sparse signals that are consistent with the continuous-domain model.
1108.6175
Adaptive Locomotion of Multibody Snake-like Robot
cs.RO cs.SY
This paper presents an adaptive rhythmic control for a snake-like robot with 25 degrees of freedom. The adaptive gait control is implemented algorithmically in simulation and on a real robot. We investigated behavioral and energetic properties of this control and the dynamics of different body segments. It turned out that despite using homogeneous generators, physical constraints have an inhomogeneous impact on neighboring body segments. Analytical modeling of such dynamics may therefore lead to heterogeneous coupling of oscillators for rhythmic control and affect the scalability and synchronization of gait pattern generators.
1108.6185
Weighted Reed-Muller codes revisited
cs.IT math.IT
We consider weighted Reed-Muller codes over point ensembles $S_1 \times...\times S_m$ where $S_i$ need not be of the same size as $S_j$. For $m = 2$ we determine optimal weights and analyze in detail the impact of the ratio $|S_1|/|S_2|$ on the minimum distance. In conclusion, the weighted Reed-Muller code construction is much better than its reputation. For a class of affine variety codes that contains the weighted Reed-Muller codes we then present two list decoding algorithms. With a small modification, one of these algorithms is able to correct up to 31 errors of the [49, 11, 28] Joyner code.
1108.6197
Two-Level Fingerprinting Codes: Non-Trivial Constructions
cs.IT math.IT
We extend the concept of two-level fingerprinting codes, introduced by Anthapadmanabhan and Barg (2009) in the context of traceability (TA) codes, to other types of fingerprinting codes, namely identifiable parent property (IPP) codes, secure-frameproof (SFP) codes, and frameproof (FP) codes. We define two-level IPP, SFP, and FP codes and propose the first explicit non-trivial constructions for them.
1108.6198
Decision Support for e-Governance: A Text Mining Approach
cs.DB cs.IR
Information and communication technology has the capability to improve the process by which governments involve citizens in formulating public policy and public projects. Even though much of government regulation may now be in digital form (and often available online), due to its complexity and diversity, identifying the parts relevant to a particular context is a non-trivial task. Similarly, with the advent of a number of electronic online forums, social networking sites and blogs, the opportunity of gathering citizens' petitions and stakeholders' views on government policy and proposals has increased greatly, but the volume and complexity of the unstructured data make this analysis difficult. On the other hand, text mining has come a long way from simple keyword search, and matured into a discipline capable of dealing with much more complex tasks. In this paper we discuss how text-mining techniques can help in the retrieval of information and relationships from textual data sources, thereby assisting policy makers in discovering associations between policies and citizens' opinions expressed in electronic public forums, blogs, and similar media. We also present an integrated text-mining-based architecture for e-governance decision support, along with a discussion of the Indian scenario.
1108.6208
Coprocessor - a Standalone SAT Preprocessor
cs.AI
In this work a stand-alone preprocessor for SAT is presented that is able to perform most of the known preprocessing techniques. Preprocessing a formula in SAT is important for performance since redundancy can be removed. The preprocessor is part of the SAT solver riss and is called Coprocessor. Not only riss but also MiniSat 2.2 benefits from it, because the SatELite preprocessor of MiniSat does not implement recent techniques. By using more advanced techniques, Coprocessor is able to reduce the redundancy in a formula further and improves the overall solving performance.
1108.6211
Transfer from Multiple MDPs
cs.AI cs.LG
Transfer reinforcement learning (RL) methods leverage the experience collected on a set of source tasks to speed up RL algorithms. A simple and effective approach is to transfer samples from source tasks and include them into the training set used to solve a given target task. In this paper, we investigate the theoretical properties of this transfer method and we introduce novel algorithms adapting the transfer process on the basis of the similarity between source and target tasks. Finally, we report illustrative experimental results in a continuous chain problem.
1108.6214
Likelihood Consensus and Its Application to Distributed Particle Filtering
stat.AP cs.IT math.IT
We consider distributed state estimation in a wireless sensor network without a fusion center. Each sensor performs a global estimation task---based on the past and current measurements of all sensors---using only local processing and local communications with its neighbors. In this estimation task, the joint (all-sensors) likelihood function (JLF) plays a central role as it epitomizes the measurements of all sensors. We propose a distributed method for computing, at each sensor, an approximation of the JLF by means of consensus algorithms. This "likelihood consensus" method is applicable if the local likelihood functions of the various sensors (viewed as conditional probability density functions of the local measurements) belong to the exponential family of distributions. We then use the likelihood consensus method to implement a distributed particle filter and a distributed Gaussian particle filter. Each sensor runs a local particle filter, or a local Gaussian particle filter, that computes a global state estimate. The weight update in each local (Gaussian) particle filter employs the JLF, which is obtained through the likelihood consensus scheme. For the distributed Gaussian particle filter, the number of particles can be significantly reduced by means of an additional consensus scheme. Simulation results are presented to assess the performance of the proposed distributed particle filters for a multiple target tracking problem.
1108.6223
Towards Configuration of applied Web-based information system
cs.SE cs.AI cs.DM cs.NI cs.SY math.OC
In the paper, combinatorial synthesis of structure for applied Web-based systems is described. The problem is considered as a combination of selected design alternatives for system parts/components into a resultant composite decision (i.e., system configuration design). The solving framework is based on Hierarchical Morphological Multicriteria Design (HMMD) approach: (i) multicriteria selection of alternatives for system parts, (ii) composing the selected alternatives into a resultant combination (while taking into account ordinal quality of the alternatives above and their compatibility). A lattice-based discrete space is used to evaluate (to integrate) quality of the resultant combinations (i.e., composite system decisions or system configurations). In addition, a simplified solving framework based on multicriteria multiple choice problem is considered. A multistage design process to obtain a system trajectory is described as well. The basic applied example is targeted to an applied Web-based system for a communication service provider. Two other applications are briefly described (corporate system and information system for academic application).
1108.6239
Efficient data compression from statistical physics of codes over finite fields
cs.IT cond-mat.stat-mech math.IT
In this paper we discuss a novel data compression technique for binary symmetric sources based on the cavity method over a Galois field of order q (GF(q)). We present a scheme of low complexity and near optimal empirical performance. The compression step is based on a reduction of sparse low density parity check codes over GF(q) and is done through the so-called reinforced belief-propagation equations. These reduced codes appear to have a non-trivial geometrical modification of the space of codewords which makes such compression computationally feasible. The computational complexity is O(d n q log(q)) per iteration, where d is the average degree of the check nodes and n is the number of bits. For our code ensemble, decompression can be done in a time linear in the code's length by a simple leaf-removal algorithm.
1108.6260
Structural Routability of n-Pairs Information Networks
cs.IT cs.NI cs.SI math.IT
Information does not generally behave like a conservative fluid flow in communication networks with multiple sources and sinks. However, it is often conceptually and practically useful to be able to associate separate data streams with each source-sink pair, with only routing and no coding performed at the network nodes. This raises the question of whether there is a nontrivial class of network topologies for which achievability is always equivalent to routability, for any combination of source signals and positive channel capacities. This chapter considers possibly cyclic, directed, errorless networks with n source-sink pairs and mutually independent source signals. The concept of downward dominance is introduced and it is shown that, if the network topology is downward dominated, then the achievability of a given combination of source signals and channel capacities implies the existence of a feasible multicommodity flow.
1108.6274
Every Formula-Based Logic Program Has a Least Infinite-Valued Model
cs.LO cs.AI
Every definite logic program has as its meaning a least Herbrand model with respect to the program-independent ordering "set-inclusion". In the case of normal logic programs there do not exist least models in general. However, according to a recent approach by Rondogiannis and Wadge, who consider infinite-valued models, every normal logic program does have a least model with respect to a program-independent ordering. We show that this approach can be extended to formula-based logic programs (i.e., finite sets of rules of the form $A \leftarrow F$, where $A$ is an atom and $F$ an arbitrary first-order formula). We construct for a given program P an interpretation M_P and show that it is the least of all models of P. Keywords: Logic programming, semantics of programs, negation-as-failure, infinite-valued logics, set theory