1106.3684
Exploratory simulation of an Intelligent Iris Verifier Distributed System
cs.CV cs.ET cs.LO
This paper discusses some topics related to the latest trends in the field of evolutionary approaches to iris recognition. It presents the results of an exploratory experimental simulation whose goal was to analyze the possibility of establishing an Interchange Protocol for Digital Identities evolved in different geographic locations interconnected through and into an Intelligent Iris Verifier Distributed System (IIVDS) based on multi-enrollment. Finding a logically consistent model for the Interchange Protocol is the key factor in designing future large-scale iris biometric networks; therefore, the logical model of such a protocol is also investigated here. All tests are made on the Bath Iris Database and prove that an IIVDS can achieve outstanding power of discrimination between intra- and inter-class comparisons, even when practicing 52,759,182 inter-class and 10,991,943 intra-class comparisons. Still, the test results confirm that inconsistent enrollment can change the logic of recognition from a fuzzified 2-valent consistent logic of biometric certitudes to a fuzzified 3-valent inconsistent possibilistic logic of biometric beliefs justified through experimentally determined probabilities, or to a fuzzified 8-valent logic which is almost consistent as a biometric theory - this quality being counterbalanced by a reasonable loss in user comfort level.
1106.3685
Embedding and Automating Conditional Logics in Classical Higher-Order Logic
cs.AI cs.LO math.LO
A sound and complete embedding of conditional logics into classical higher-order logic is presented. This embedding enables the application of off-the-shelf higher-order automated theorem provers and model finders for reasoning within and about conditional logics.
1106.3693
Perfect Reconstruction Two-Channel Wavelet Filter-Banks for Graph Structured Data
cs.DC cs.SI
In this work we propose the construction of two-channel wavelet filterbanks for analyzing functions defined on the vertices of any arbitrary finite weighted undirected graph. These graph based functions are referred to as graph-signals as we build a framework in which many concepts from the classical signal processing domain, such as Fourier decomposition, signal filtering and downsampling can be extended to graph domain. Especially, we observe a spectral folding phenomenon in bipartite graphs which occurs during downsampling of these graphs and produces aliasing in graph signals. This property of bipartite graphs, allows us to design critically sampled two-channel filterbanks, and we propose quadrature mirror filters (referred to as graph-QMF) for bipartite graph which cancel aliasing and lead to perfect reconstruction. For arbitrary graphs we present a bipartite subgraph decomposition which produces an edge-disjoint collection of bipartite subgraphs. Graph-QMFs are then constructed on each bipartite subgraph leading to "multi-dimensional" separable wavelet filterbanks on graphs. Our proposed filterbanks are critically sampled and we state necessary and sufficient conditions for orthogonality, aliasing cancellation and perfect reconstruction. The filterbanks are realized by Chebychev polynomial approximations.
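The spectral folding the abstract mentions has a concrete signature worth seeing: for a bipartite graph, the eigenvalues of the symmetric normalized Laplacian occur in mirrored pairs around 1. A minimal sketch (our own illustration, not the authors' construction) verifies this on the complete bipartite graph K_{2,3}:

```python
import numpy as np

# Complete bipartite graph K_{2,3} as an adjacency matrix: the first two
# vertices form one side of the bipartition, the last three the other.
A = np.array([
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],
    [1, 1, 0, 0, 0],
    [1, 1, 0, 0, 0],
    [1, 1, 0, 0, 0],
], dtype=float)

# Symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}.
d = A.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
L = np.eye(5) - D_inv_sqrt @ A @ D_inv_sqrt

lams = np.sort(np.linalg.eigvalsh(L))
# Spectral folding: for a bipartite graph the sorted spectrum equals
# 2 minus the reversed spectrum, i.e. eigenvalues pair up as (lam, 2-lam).
assert np.allclose(lams, 2.0 - lams[::-1])
```

This mirror symmetry is what lets critically sampled two-channel designs such as graph-QMF cancel the aliasing produced by downsampling on one side of the bipartition.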
1106.3703
Prediction and Modularity in Dynamical Systems
nlin.AO cs.AI cs.IT cs.LG cs.SY math.IT q-bio.QM stat.ME
Identifying and understanding modular organizations is centrally important in the study of complex systems. Several approaches to this problem have been advanced, many framed in information-theoretic terms. Our treatment starts from the complementary point of view of statistical modeling and prediction of dynamical systems. It is known that for finite amounts of training data, simpler models can have greater predictive power than more complex ones. We use the trade-off between model simplicity and predictive accuracy to generate optimal multiscale decompositions of dynamical networks into weakly-coupled, simple modules. State-dependent and causal versions of our method are also proposed.
1106.3711
Sidelobe Suppression for Capon Beamforming with Mainlobe to Sidelobe Power Ratio Maximization
cs.IT math.IT
High sidelobe level is a major disadvantage of Capon beamforming. To suppress the sidelobe, this paper introduces a mainlobe to sidelobe power ratio constraint into Capon beamforming. It minimizes the sidelobe power while keeping the mainlobe power constant. Simulations show that the obtained beamformer outperforms the Capon beamformer.
1106.3713
Source-Channel Coding Theorems for the Multiple-Access Relay Channel
cs.IT math.IT
We study reliable transmission of arbitrarily correlated sources over multiple-access relay channels (MARCs) and multiple-access broadcast relay channels (MABRCs). In MARCs only the destination is interested in reconstructing the sources, while in MABRCs both the relay and the destination want to reconstruct them. In addition to arbitrary correlation among the source signals at the users, both the relay and the destination have side information correlated with the source signals. Our objective is to determine whether a given pair of sources can be losslessly transmitted to the destination for a given number of channel symbols per source sample, defined as the source-channel rate. Sufficient conditions for reliable communication based on operational separation, as well as necessary conditions on the achievable source-channel rates are characterized. Since operational separation is generally not optimal for MARCs and MABRCs, sufficient conditions for reliable communication using joint source-channel coding schemes based on a combination of the correlation preserving mapping technique with Slepian-Wolf source coding are also derived. For correlated sources transmitted over fading Gaussian MARCs and MABRCs, we present conditions under which separation (i.e., separate and stand-alone source and channel codes) is optimal. This is the first time optimality of separation is proved for MARCs and MABRCs.
1106.3725
Learning XML Twig Queries
cs.DB cs.LG
We investigate the problem of learning XML queries, path queries and tree pattern queries, from examples given by the user. A learning algorithm takes as input a set of XML documents with nodes annotated by the user and returns a query that selects the nodes in a manner consistent with the annotation. We study two learning settings that differ in the types of annotations. In the first setting the user may only indicate required nodes that the query must return. In the second, more general, setting, the user may also indicate forbidden nodes that the query must not return. The query may or may not return any node with no annotation. We formalize what it means for a class of queries to be \emph{learnable}. One requirement is the existence of a learning algorithm that is sound, i.e., always returns a query consistent with the examples given by the user. Furthermore, the learning algorithm should be complete, i.e., able to produce every query given a sufficiently rich example. Other requirements involve tractability of learning and its robustness to nonessential examples. We show that the classes of simple path queries and path-subsumption-free tree queries are learnable from positive examples. The learnability of the full class of tree pattern queries (and the full class of path queries) remains an open question. We show also that adding negative examples to the picture renders the learning infeasible. Published in ICDT 2012, Berlin.
1106.3740
The Asymptotic Mandelbrot Law of Some Evolution Networks
physics.data-an cs.SI physics.soc-ph
In this letter, we study some evolution networks that grow with linear preferential attachment. Based upon some recent results on the quotient Gamma function, we give a rigorous proof of the asymptotic Mandelbrot law for the degree distribution $p_k \propto (k + c)^{-\gamma}$ in certain conditions. We also analytically derive the best fitting values for the scaling exponent $\gamma$ and the shifting coefficient $c$.
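A worked toy example (our own, with made-up parameters gamma=3 and c=1, not the paper's derived values) shows how the exponent of a shifted power law $p_k \propto (k + c)^{-\gamma}$ can be recovered by fitting: with the correct shift c, $\log p_k$ is exactly linear in $\log(k + c)$.

```python
import numpy as np

# Hypothetical parameters for illustration only.
gamma_true, c_true = 3.0, 1.0

# Build the exact Mandelbrot-law distribution p_k ∝ (k + c)^(-gamma).
k = np.arange(1, 200)
p = (k + c_true) ** (-gamma_true)
p /= p.sum()  # normalize to a probability distribution

# Linear regression of log p_k on log(k + c) recovers -gamma as the slope.
slope, intercept = np.polyfit(np.log(k + c_true), np.log(p), 1)
gamma_fit = -slope
assert abs(gamma_fit - gamma_true) < 1e-6
```

In practice c is unknown, so a fit would search over candidate shifts for the one that makes the log-log relationship most linear; the sketch only illustrates the functional form.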
1106.3745
Composition with Target Constraints
cs.DB
It is known that the composition of schema mappings, each specified by source-to-target tgds (st-tgds), can be specified by a second-order tgd (SO tgd). We consider the question of what happens when target constraints are allowed. Specifically, we consider the question of specifying the composition of standard schema mappings (those specified by st-tgds, target egds, and a weakly acyclic set of target tgds). We show that SO tgds, even with the assistance of arbitrary source constraints and target constraints, cannot specify in general the composition of two standard schema mappings. Therefore, we introduce source-to-target second-order dependencies (st-SO dependencies), which are similar to SO tgds, but allow equations in the conclusion. We show that st-SO dependencies (along with target egds and target tgds) are sufficient to express the composition of every finite sequence of standard schema mappings, and further, every st-SO dependency specifies such a composition. In addition to this expressive power, we show that st-SO dependencies enjoy other desirable properties. In particular, they have a polynomial-time chase that generates a universal solution. This universal solution can be used to find the certain answers to unions of conjunctive queries in polynomial time. It is easy to show that the composition of an arbitrary number of standard schema mappings is equivalent to the composition of only two standard schema mappings. We show that surprisingly, the analogous result holds also for schema mappings specified by just st-tgds (no target constraints). This is proven by showing that every SO tgd is equivalent to an unnested SO tgd (one where there is no nesting of function symbols). Similarly, we prove unnesting results for st-SO dependencies, with the same types of consequences.
1106.3754
Families of graph-different Hamilton paths
math.CO cs.IT math.IT
Let D be an arbitrary subset of the natural numbers. For every n, let M(n;D) be the maximum of the cardinality of a set of Hamiltonian paths in the complete graph K_n such that the union of any two paths from the family contains a not necessarily induced cycle of some length from D. We determine or bound the asymptotics of M(n;D) in various special cases. This problem is closely related to that of the permutation capacity of graphs and constitutes a further extension of the problem area around Shannon capacity. We also discuss how to generalize our cycle-difference problems and present an example where cycles are replaced by 4-cliques. These problems are in a natural duality to those of graph intersection, initiated by Erd\"os, Simonovits and S\'os. The lack of kernel structure as a natural candidate for optimum makes our problems quite challenging.
1106.3759
Frequency Theorem for discrete time stochastic system with multiplicative noise
math.OC cs.SY
In this paper we consider the problem of minimizing a quadratic functional for a discrete-time linear stochastic system with multiplicative noise, on a standard probability space, in infinite time horizon. We show that the necessary and sufficient conditions for the existence of the optimal control can be formulated as matrix inequalities in frequency domain. Furthermore, we show that if the optimal control exists, then certain Lyapunov equations must have a solution. The optimal control is obtained by solving a deterministic linear-quadratic optimal control problem whose functional depends on the solution to the Lyapunov equations. Moreover, we show that under certain conditions, solvability of the Lyapunov equations is guaranteed. We also show that, if the frequency inequalities are strict, then the solution is unique up to equivalence.
1106.3767
Rewriting Ontological Queries into Small Nonrecursive Datalog Programs
cs.AI cs.DB cs.LO
We consider the setting of ontological database access, where an Abox is given in form of a relational database D and where a Boolean conjunctive query q has to be evaluated against D modulo a Tbox T formulated in DL-Lite or Linear Datalog+/-. It is well-known that (T,q) can be rewritten into an equivalent nonrecursive Datalog program P that can be directly evaluated over D. However, for Linear Datalog+/- or for DL-Lite versions that allow for role inclusion, the rewriting methods described so far result in a nonrecursive Datalog program P of size exponential in the joint size of T and q. This gives rise to the interesting question of whether such a rewriting necessarily needs to be of exponential size. In this paper we show that it is actually possible to translate (T,q) into a polynomially sized equivalent nonrecursive Datalog program P.
1106.3791
Reference Sequence Construction for Relative Compression of Genomes
q-bio.QM cs.CE cs.IT math.IT
Relative compression, where a set of similar strings are compressed with respect to a reference string, is a very effective method of compressing DNA datasets containing multiple similar sequences. Relative compression is fast to perform and also supports rapid random access to the underlying data. The main difficulty of relative compression is in selecting an appropriate reference sequence. In this paper, we explore using the dictionary of repeats generated by Comrad, Re-pair and Dna-x algorithms as reference sequences for relative compression. We show this technique allows better compression and supports random access just as well. The technique also allows more general repetitive datasets to be compressed using relative compression.
1106.3809
Fisher Information in Flow Size Distribution
cs.IT cs.NI math.IT
The flow size distribution is a useful metric for traffic modeling and management. Its estimation based on sampled data, however, is problematic. Previous work has shown that flow sampling (FS) offers enormous statistical benefits over packet sampling, but high resource requirements preclude its use in routers. We present Dual Sampling (DS), a two-parameter family which, to a large extent, provides FS-like statistical performance by approaching FS continuously, at just packet-sampling-like computational cost. Our work utilizes a Fisher information based approach recently used to evaluate a number of sampling schemes, excluding FS, for TCP flows. We revise and extend the approach to make rigorous and fair comparisons between FS, DS and others. We show how DS significantly outperforms other packet based methods, including Sample and Hold, the closest packet sampling-based competitor to FS. We describe a packet sampling-based implementation of DS and analyze its key computational costs to show that router implementation is feasible. Our approach offers insights into numerous issues, including the notion of `flow quality' for understanding the relative performance of methods, and how and when employing sequence numbers is beneficial. Our work is theoretical with some simulation support and case studies on Internet data.
1106.3826
On the Non-Progressive Spread of Influence through Social Networks
cs.SI cs.GT physics.soc-ph
The spread of influence in social networks is studied in two main categories: the progressive model and the non-progressive model (see e.g. the seminal work of Kempe, Kleinberg, and Tardos in KDD 2003). While progressive models are suitable for modeling the spread of influence in monopolistic settings, non-progressive models are more appropriate for non-monopolistic settings, e.g., modeling the diffusion of two competing technologies over a social network. Despite the extensive work on the progressive model, non-progressive models have not been studied well. In this paper, we study the spread of influence in the non-progressive model under the strict majority threshold: given a graph $G$ with a set of initially infected nodes, each node gets infected at time $\tau$ iff a majority of its neighbors are infected at time $\tau-1$. Our goal in the \textit{MinPTS} problem is to find a minimum-cardinality initial set of infected nodes that would eventually converge to a steady state where all nodes of $G$ are infected. We prove that while MinPTS is NP-hard for a restricted family of graphs, it admits an improved constant-factor approximation algorithm for power-law graphs. We do so by proving lower and upper bounds in terms of the minimum and maximum degree of nodes in the graph. The upper bound is achieved in turn by applying a natural greedy algorithm. Our experimental evaluation of the greedy algorithm also shows its superior performance compared to other algorithms for a set of real-world graphs as well as the random power-law graphs. Finally, we study the convergence properties of these algorithms and show that the non-progressive model converges in at most $O(|E(G)|)$ steps.
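The strict-majority update rule described above can be sketched in a few lines (our own toy, not the paper's code). The key non-progressive feature is that previously infected nodes can recover, so infection is not monotone; a 4-cycle with two opposite corners infected makes this visible:

```python
def step(adj, infected):
    """One non-progressive update: a node is infected at time t iff a
    strict majority of its neighbours were infected at time t-1."""
    new = set()
    for v, nbrs in adj.items():
        if sum(u in infected for u in nbrs) * 2 > len(nbrs):
            new.add(v)
    return new

# 4-cycle 0-1-2-3-0: every node has degree 2, so "strict majority"
# means both neighbours must be infected.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
state = {0, 2}          # infect two opposite corners
state = step(adj, state)
assert state == {1, 3}  # the other pair flips on; the corners recover
state = step(adj, state)
assert state == {0, 2}  # the dynamic oscillates: spread is non-progressive
```

The MinPTS target state in the paper is a seed set whose dynamic instead converges to all of $G$ infected; the oscillating seed here shows why not every initial set qualifies.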
1106.3834
Dimensionally Constrained Symbolic Regression
stat.ML cs.NE physics.comp-ph
We describe dimensionally constrained symbolic regression, which has been developed for mass measurement in certain classes of events in high-energy physics (HEP). With symbolic regression, we can derive equations that are well known in HEP. However, in problems with a large number of variables, we find that by constraining the terms allowed in the symbolic regression, convergence behavior is improved. Dimensionally constrained symbolic regression (DCSR) finds solutions with much better fitness than is normally possible with symbolic regression. In some cases, novel solutions are found.
1106.3862
On Kinds of Indiscernibility in Logic and Metaphysics
physics.hist-ph cs.AI quant-ph
Using the Hilbert-Bernays account as a spring-board, we first define four ways in which two objects can be discerned from one another, using the non-logical vocabulary of the language concerned. (These definitions are based on definitions made by Quine and Saunders.) Because of our use of the Hilbert-Bernays account, these definitions are in terms of the syntax of the language. But we also relate our definitions to the idea of permutations on the domain of quantification, and their being symmetries. These relations turn out to be subtle---some natural conjectures about them are false. We will see in particular that the idea of symmetry meshes with a species of indiscernibility that we will call `absolute indiscernibility'. We then report all the logical implications between our four kinds of discernibility. We use these four kinds as a resource for stating four metaphysical theses about identity. Three of these theses articulate two traditional philosophical themes: viz. the principle of the identity of indiscernibles (which will come in two versions), and haecceitism. The fourth is recent. Its most notable feature is that it makes diversity (i.e. non-identity) weaker than what we will call individuality (being an individual): two objects can be distinct but not individuals. For this reason, it has been advocated both for quantum particles and for spacetime points. Finally, we locate this fourth metaphysical thesis in a broader position, which we call structuralism. We conclude with a discussion of the semantics suitable for a structuralist, with particular reference to physical theories as well as elementary model theory.
1106.3876
Uncertainty in Ontologies: Dempster-Shafer Theory for Data Fusion Applications
cs.AI
Ontologies are nowadays of growing interest in Data Fusion applications. As a matter of fact, ontologies are seen as a semantic tool for describing and reasoning about sensor data, objects, relations and general domain theories. In addition, uncertainty is perhaps one of the most important characteristics of the data and information handled by Data Fusion. However, the fundamental nature of ontologies implies that ontologies describe only asserted and veracious facts of the world. Different probabilistic, fuzzy and evidential approaches already exist to fill this gap; this paper recaps the most popular tools. However, none of the tools meets exactly our purposes. Therefore, we constructed a Dempster-Shafer ontology that can be imported into any specific domain ontology and that enables us to instantiate it in an uncertain manner. We also developed a Java application that enables reasoning about these uncertain ontological instances.
1106.3932
Coincidences and the encounter problem: A formal account
cs.AI
Individuals have an intuitive perception of what makes a good coincidence. Though the sensitivity to coincidences has often been presented as resulting from an erroneous assessment of probability, it appears to be a genuine competence, based on non-trivial computations. The model presented here suggests that coincidences occur when subjects perceive complexity drops. Co-occurring events are, together, simpler than if considered separately. This model leads to a possible redefinition of subjective probability.
1106.3940
Cooperative spectrum sensing over unreliable reporting channel
stat.OT cs.IT math.IT
This article aims to analyze a cooperative spectrum sensing scheme using a centralized approach with an unreliable reporting channel. The spectrum sensing is applied to a cognitive radio system, where each cognitive radio performs a simple energy detection and sends its decision to a fusion center through a reporting channel. When the decisions are available at the fusion center, an n-out-of-K rule is applied. The impact of the choice of the parameter n on the cognitive radio system's performance is analyzed in the case where the reporting channel introduces errors.
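The n-out-of-K fusion rule is simple enough to state as code. A minimal sketch (illustrative only, not from the article): the fusion center declares the primary user present iff at least n of the K one-bit local decisions say "present"; n=1 recovers the OR rule and n=K the AND rule.

```python
def fuse(decisions, n):
    """n-out-of-K fusion: declare 'present' iff at least n of the K
    received one-bit local decisions are 1."""
    return sum(decisions) >= n

local = [1, 0, 1, 1, 0]          # K = 5 decisions from the cognitive radios
assert fuse(local, 1) is True    # OR rule (n=1): detected
assert fuse(local, 3) is True    # majority rule: detected
assert fuse(local, 4) is False   # stricter rule: not detected
assert fuse(local, 5) is False   # AND rule (n=K): not detected
```

An unreliable reporting channel would be modelled by flipping each bit in `local` with some error probability before fusion, which is exactly the degradation whose effect on the choice of n the article studies.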
1106.3951
Optimal rate list decoding via derivative codes
cs.IT cs.CC cs.DS math.IT
The classical family of $[n,k]_q$ Reed-Solomon codes over a field $\F_q$ consists of the evaluations of polynomials $f \in \F_q[X]$ of degree $< k$ at $n$ distinct field elements. In this work, we consider a closely related family of codes, called (order $m$) {\em derivative codes} and defined over fields of large characteristic, which consist of the evaluations of $f$ as well as its first $m-1$ formal derivatives at $n$ distinct field elements. For large enough $m$, we show that these codes can be list-decoded in polynomial time from an error fraction approaching $1-R$, where $R=k/(nm)$ is the rate of the code. This gives an alternate construction to folded Reed-Solomon codes for achieving the optimal trade-off between rate and list error-correction radius. Our decoding algorithm is linear-algebraic, and involves solving a linear system to interpolate a multivariate polynomial, and then solving another structured linear system to retrieve the list of candidate polynomials $f$. The algorithm for derivative codes offers some advantages compared to a similar one for folded Reed-Solomon codes in terms of efficient unique decoding in the presence of side information.
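The encoding map of a derivative code is easy to make concrete. A hypothetical toy encoder (our own sketch; the small prime p=101 and order m=2 are chosen for illustration only, whereas the paper requires large characteristic): each codeword position for an evaluation point a carries f(a) together with its first m-1 formal derivatives at a.

```python
p, m = 101, 2  # illustrative parameters, not from the paper

def formal_derivative(coeffs):
    """Formal derivative over GF(p); coeffs[i] is the coefficient of X^i."""
    return [(i * c) % p for i, c in enumerate(coeffs)][1:]

def evaluate(coeffs, a):
    return sum(c * pow(a, i, p) for i, c in enumerate(coeffs)) % p

def encode(coeffs, points):
    """Order-m derivative code: each position lists f and its first
    m-1 formal derivatives, all evaluated at one point."""
    rows = [coeffs]
    for _ in range(m - 1):
        rows.append(formal_derivative(rows[-1]))
    return [[evaluate(r, a) for r in rows] for a in points]

# f(X) = 3 + 2X + X^2, so f'(X) = 2 + 2X.
codeword = encode([3, 2, 1], [0, 1, 2])
assert codeword[0] == [3, 2]    # f(0)=3,  f'(0)=2
assert codeword[1] == [6, 4]    # f(1)=6,  f'(1)=4
assert codeword[2] == [11, 6]   # f(2)=11, f'(2)=6
```

With k coefficients packed into nm symbols the rate is R = k/(nm), matching the trade-off quoted in the abstract.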
1106.3967
Intelligent Self-Repairable Web Wrappers
cs.AI cs.IR
The amount of information available on the Web grows at an incredibly high rate. Systems and procedures devised to extract these data from Web sources already exist, and different approaches and techniques have been investigated in recent years. On the one hand, reliable solutions should provide robust algorithms of Web data mining which could automatically face possible malfunctioning or failures. On the other hand, in the literature there is a lack of solutions regarding the maintenance of these systems. Procedures that extract Web data may be strictly interconnected with the structure of the data source itself; thus, malfunctioning or acquisition of corrupted data could be caused, for example, by structural modifications of data sources brought by their owners. Nowadays, verification of data integrity and maintenance are mostly manually managed, in order to ensure that these systems work correctly and reliably. In this paper we propose a novel approach to create procedures able to extract data from Web sources -- the so called Web wrappers -- which can face possible malfunctioning caused by modifications of the structure of the data source, and can automatically repair themselves.
1106.3977
Models, Calculation and Optimization of Gas Networks, Equipment and Contracts for Design, Operation, Booking and Accounting
cs.CE
We propose models of contracts, technological equipment and gas networks, together with methods for their optimization. The flow in the network is subject to the restrictions of the contracts and of the equipment in operation. The values of sources and sinks are provided by contracts. The contract models are represented as (sub-)networks; the simplest contracts are represented by either nodes or edges. Equipment is modeled by edges, and more sophisticated equipment is represented by sub-networks. Examples of such equipment are multi-poles and compressor stations with many entries and exits. The edges can be of different types corresponding to equipment and contracts, and on such edges systems of equations and inequalities simulate the contracts and equipment. On this basis, we propose methods that allow: calculation and control of contract values for booking on future days and for accounting of sales and purchases; and simulation and optimization of the design and operation of gas networks. These models and methods are implemented in the software systems ACCORD and Graphicord, as well as in the distributed control system used by Wingas, Germany. As a numerical example, industrial computations are presented.
1106.3981
Group Codes and the Schreier matrix form
cs.IT math.IT
In a group trellis, the sequence of branches that split from the identity path and merge to the identity path form two normal chains. The Schreier refinement theorem can be applied to these two normal chains. The refinement of the two normal chains can be written in the form of a matrix, called the Schreier matrix form, with rows and columns determined by the two normal chains. Based on the Schreier matrix form, we give an encoder structure for a group code which is an estimator. The encoder uses the important idea of shortest length generator sequences previously explained by Forney and Trott. In this encoder the generator sequences are shown to have an additional property: the components of the generators are coset representatives in a chain coset decomposition of the branch group B of the code. Therefore this encoder appears to be a natural form for a group code encoder. The encoder has a register implementation which is somewhat different from the classical shift register structure. This form of the encoder can be extended. We find a composition chain of the branch group B and give an encoder which uses coset representatives in the composition chain of B. When B is solvable, the generators are constructed using coset representatives taken from prime cyclic groups.
1106.4058
Experimental Support for a Categorical Compositional Distributional Model of Meaning
cs.CL math.CT
Modelling compositional meaning for sentences using empirical distributional methods has been a challenge for computational linguists. We implement the abstract categorical model of Coecke et al. (arXiv:1003.4394v1 [cs.CL]) using data from the BNC and evaluate it. The implementation is based on unsupervised learning of matrices for relational words and applying them to the vectors of their arguments. The evaluation is based on the word disambiguation task developed by Mitchell and Lapata (2008) for intransitive sentences, and on a similar new experiment designed for transitive sentences. Our model matches the results of its competitors in the first experiment, and betters them in the second. The general improvement in results with increase in syntactic complexity showcases the compositional power of our model.
1106.4064
Algorithmic Programming Language Identification
cs.LG
Motivated by the amount of code that goes unidentified on the web, we introduce a practical method for algorithmically identifying the programming language of source code. Our work is based on supervised learning and intelligent statistical features. We also explored, but abandoned, a grammatical approach. In testing, our implementation greatly outperforms that of an existing tool that relies on a Bayesian classifier. Code is written in Python and available under an MIT license.
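To make the task concrete, here is a deliberately tiny sketch of statistical language identification (our own toy, not the authors' implementation or feature set): score each candidate language by counting hand-picked token features in the source text and return the best-scoring one.

```python
# Hypothetical hand-picked features; a real system would learn its
# features and weights from labelled training code.
FEATURES = {
    "python": ["def ", "import ", "self."],
    "c":      ["#include", "printf", "->"],
}

def identify(source):
    """Return the language whose feature tokens occur most often."""
    scores = {lang: sum(source.count(f) for f in feats)
              for lang, feats in FEATURES.items()}
    return max(scores, key=scores.get)

assert identify("def f(x):\n    import math\n    return x") == "python"
assert identify('#include <stdio.h>\nint main(){printf("hi");}') == "c"
```

A supervised classifier of the kind the abstract describes generalizes this by learning which statistical features separate languages, rather than relying on a fixed keyword list.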
1106.4075
On the Inclusion Relation of Reproducing Kernel Hilbert Spaces
math.FA cs.LG
To help understand various reproducing kernels used in applied sciences, we investigate the inclusion relation of two reproducing kernel Hilbert spaces. Characterizations in terms of feature maps of the corresponding reproducing kernels are established. A full table of inclusion relations among widely-used translation invariant kernels is given. Concrete examples for Hilbert-Schmidt kernels are presented as well. We also discuss the preservation of such a relation under various operations of reproducing kernels. Finally, we briefly discuss the special inclusion with a norm equivalence.
1106.4083
Symmetry-Based Search Space Reduction For Grid Maps
cs.AI cs.RO
In this paper we explore a symmetry-based search space reduction technique which can speed up optimal pathfinding on undirected uniform-cost grid maps by up to 38 times. Our technique decomposes grid maps into a set of empty rectangles, removing from each rectangle all interior nodes and possibly some from along the perimeter. We then add a series of macro-edges between selected pairs of remaining perimeter nodes to facilitate provably optimal traversal through each rectangle. We also develop a novel online pruning technique to further speed up search. Our algorithm is fast, memory efficient and retains the same optimality and completeness guarantees as searching on an unmodified grid map.
1106.4090
Discovery of Invariants through Automated Theory Formation
cs.LO cs.AI cs.SE
Refinement is a powerful mechanism for mastering the complexities that arise when formally modelling systems. Refinement also brings with it additional proof obligations -- requiring a developer to discover properties relating to their design decisions. With the goal of reducing this burden, we have investigated how a general purpose theory formation tool, HR, can be used to automate the discovery of such properties within the context of Event-B. Here we develop a heuristic approach to the automatic discovery of invariants and report upon a series of experiments that we undertook in order to evaluate our approach. The set of heuristics developed provides systematic guidance in tailoring HR for a given Event-B development. These heuristics are based upon proof-failure analysis, and have given rise to some promising results.
1106.4128
Percolation in Interdependent and Interconnected Networks: Abrupt Change from Second to First Order Transition
physics.soc-ph cs.SI
The robustness of a system of two coupled networks has been studied only for dependency coupling (S. Buldyrev et al., Nature, 2010) and only for connectivity coupling (E. A. Leicht and R. M. D'Souza, arXiv:0907.0894). Here we study, using a percolation approach, a more realistic coupled networks system where both interdependent and interconnected links exist. We find rich and unusual phase transition phenomena, including a hybrid transition of mixed first and second order, i.e., a discontinuity like a first order transition of the giant component followed by a continuous decrease to zero like a second order transition. Moreover, we find unusual discontinuous changes from second order to first order transition as a function of the dependency coupling between the two networks.
1106.4131
Repeaters in relativistic communications
physics.class-ph cs.IT math.IT
The communication efficiency between a transmitter and a receiver is affected by motion and the presence of gravitational fields. We study the effect of regenerating the signal in intermediate repeaters in different relativistic scenarios and comment on the differences with respect to nonrelativistic repeaters.
1106.4215
Heterogenous mean-field analysis of a generalized voter-like model on networks
physics.soc-ph cond-mat.stat-mech cs.SI
We propose a generalized framework for the study of voter models in complex networks at the heterogeneous mean-field (HMF) level that (i) yields a unified picture for existing copy/invasion processes and (ii) allows for the introduction of further heterogeneity through degree-selectivity rules. In the context of the HMF approximation, our model is capable of providing straightforward estimates for central quantities such as the exit probability and the consensus/fixation time, based on the statistical properties of the complex network alone. The HMF approach has the advantage of being readily applicable also in those cases in which exact solutions are difficult to work out. Finally, the unified formalism allows one to understand previously proposed voter-like processes as simple limits of the generalized model.
1106.4218
Rooting opinions in the minds: a cognitive model and a formal account of opinions and their dynamics
cs.AI
The study of opinions, their formation and change, is one of the defining topics addressed by social psychology, but in recent years other disciplines, such as computer science and complexity, have tried to deal with this issue. Despite the flourishing of different models and theories in both fields, several key questions still remain unanswered. The understanding of how opinions change and the way they are affected by social influence are challenging issues requiring a thorough analysis of opinion per se but also of the way in which opinions travel between agents' minds and are modulated by these exchanges. To account for the two-faceted nature of opinions, which are mental entities undergoing complex social processes, we outline a preliminary model in which a cognitive theory of opinions is put forward and paired with a formal description of opinions and of their spreading among minds. Furthermore, investigating social influence also implies the necessity to account for the way in which people change their minds, as a consequence of interacting with other people, and the need to explain the higher or lower persistence of such changes.
1106.4221
Understanding opinions. A cognitive and formal account
cs.AI
The study of opinions, their formation and change, is one of the defining topics addressed by social psychology, but in recent years other disciplines, such as computer science and complexity, have addressed this challenge. Despite the flourishing of different models and theories in both fields, several key questions still remain unanswered. The aim of this paper is to challenge the current theories on opinion by putting forward a cognitively grounded model where opinions are described as specific mental representations whose main properties are put forward. A comparison with reputation will also be presented.
1106.4232
Approximate controllability for linear degenerate parabolic problems with bilinear control
math.AP cs.SY math.OC
In this work we study the global approximate multiplicative controllability for the linear degenerate parabolic Cauchy-Neumann problem $$\begin{cases} v_t-(a(x)v_x)_x=\alpha(t,x)v & \text{in } Q_T=(0,T)\times(-1,1),\\ a(x)v_x(t,x)\big|_{x=\pm 1}=0, & t\in(0,T),\\ v(0,x)=v_0(x), & x\in(-1,1), \end{cases}$$ with the bilinear control $\alpha(t,x)\in L^\infty (Q_T).$ The problem is strongly degenerate in the sense that $a\in C^1([-1,1]),$ positive on $(-1,1),$ is allowed to vanish at $\pm 1$ provided that a certain integrability condition is fulfilled. We will show that the above system can be steered in $L^2(\Omega)$ from any nonzero, nonnegative initial state into any neighborhood of any desirable nonnegative target-state by bilinear static controls. Moreover, we extend the above result relaxing the sign constraint on $v_0$.
1106.4251
Learning with the Weighted Trace-norm under Arbitrary Sampling Distributions
cs.LG stat.ML
We provide rigorous guarantees on learning with the weighted trace-norm under arbitrary sampling distributions. We show that the standard weighted trace-norm might fail when the sampling distribution is not a product distribution (i.e. when row and column indexes are not selected independently), present a corrected variant for which we establish strong learning guarantees, and demonstrate that it works better in practice. We provide guarantees when weighting by either the true or empirical sampling distribution, and suggest that even if the true distribution is known (or is uniform), weighting by the empirical distribution may be beneficial.
1106.4286
Multi-receiver Wiretap Channel with Public and Confidential Messages
cs.IT cs.CR math.IT
We study the multi-receiver wiretap channel with public and confidential messages. In this channel, there is a transmitter that wishes to communicate with two legitimate users in the presence of an external eavesdropper. The transmitter sends a pair of public and confidential messages to each legitimate user. While there are no secrecy constraints on the public messages, confidential messages need to be transmitted in perfect secrecy. We study the discrete memoryless multi-receiver wiretap channel as well as its Gaussian multi-input multi-output (MIMO) instance. First, we consider the degraded discrete memoryless channel, and obtain an inner bound for the capacity region by using an achievable scheme that uses superposition coding and binning. Next, we obtain an outer bound, and show that this outer bound partially matches the inner bound, providing a partial characterization for the capacity region of the degraded channel model. Second, we obtain an inner bound for the general, not necessarily degraded, discrete memoryless channel by using Marton's inner bound, superposition coding, rate-splitting and binning. Third, we consider the degraded Gaussian MIMO channel, and show that, to evaluate both the inner and outer bounds, considering only jointly Gaussian auxiliary random variables and channel input is sufficient. Since the inner and outer bounds partially match, these sufficiency results provide a partial characterization of the capacity region of the degraded Gaussian MIMO channel. Finally, we provide an inner bound for the capacity region of the general, not necessarily degraded, Gaussian MIMO multi-receiver wiretap channel.
1106.4288
Continuum Limits of Markov Chains with Application to Network Modeling
cs.NI cs.IT math.AP math.IT
In this paper we investigate the continuum limits of a class of Markov chains. The investigation of such limits is motivated by the desire to model very large networks. We show that under some conditions, a sequence of Markov chains converges in some sense to the solution of a partial differential equation. Based on such convergence we approximate Markov chains modeling networks with a large number of components by partial differential equations. While traditional Monte Carlo simulation for very large networks is practically infeasible, partial differential equations can be solved with reasonable computational overhead using well-established mathematical tools.
1106.4300
Human as Real-Time Sensors of Social and Physical Events: A Case Study of Twitter and Sports Games
cs.SI physics.soc-ph
In this work, we study how Twitter can be used as a sensor to detect frequent and diverse social and physical events in real-time. We devise efficient data collection and event recognition solutions that work despite various limits on free access to Twitter data. We describe a web service implementation of our solution and report our experience with the 2010-2011 US National Football League (NFL) games. The service was able to recognize NFL game events within 40 seconds and with accuracy up to 90%. This capability will be very useful for not only real-time electronic program guide for live broadcast programs but also refined auction of advertisement slots. More importantly, it demonstrates for the first time the feasibility of using Twitter for real-time social and physical event detection for ubiquitous computing.
1106.4333
Residual Component Analysis
stat.ML cs.AI math.ST stat.CO stat.TH
Probabilistic principal component analysis (PPCA) seeks a low dimensional representation of a data set in the presence of independent spherical Gaussian noise, Sigma = (sigma^2)*I. The maximum likelihood solution for the model is an eigenvalue problem on the sample covariance matrix. In this paper we consider the situation where the data variance is already partially explained by other factors, e.g. covariates of interest, or temporal correlations leaving some residual variance. We decompose the residual variance into its components through a generalized eigenvalue problem, which we call residual component analysis (RCA). We show that canonical covariates analysis (CCA) is a special case of our algorithm and explore a range of new algorithms that arise from the framework. We illustrate the ideas on a gene expression time series data set and the recovery of human pose from silhouette.
1106.4337
Speed of complex network synchronization
cond-mat.dis-nn cs.SI nlin.CD physics.soc-ph
Synchrony is one of the most common dynamical states emerging on networks. The speed of convergence towards synchrony provides a fundamental collective time scale for synchronizing systems. Here we study the asymptotic synchronization times for directed networks with topologies ranging from completely ordered, grid-like, to completely disordered, random, including intermediate, partially disordered topologies. We extend the approach of Master Stability Functions to quantify synchronization times. We find that the synchronization times strongly and systematically depend on the network topology. In particular, at fixed in-degree, stronger topological randomness induces faster synchronization, whereas at fixed path length, synchronization is slowest for intermediate randomness in the small-world regime. Randomly rewiring real-world neural, social and transport networks confirms this picture.
1106.4346
Average-Consensus Algorithms in a Deterministic Framework
cs.DC cs.SY math.OC
We consider the average-consensus problem in a multi-node network of finite size. Communication between nodes is modeled by a sequence of directed signals with arbitrary communication delays. Four distributed algorithms that achieve average-consensus are proposed. Necessary and sufficient communication conditions are given for each algorithm to achieve average-consensus. Resource costs for each algorithm are derived based on the number of scalar values that are required for communication and storage at each node. Numerical examples are provided to illustrate the empirical convergence rate of the four algorithms in comparison with a well-known "gossip" algorithm as well as a randomized information spreading algorithm when assuming a fully connected random graph with instantaneous communication.
1106.4355
Tight Measurement Bounds for Exact Recovery of Structured Sparse Signals
stat.ML cs.LG
Standard compressive sensing results state that to exactly recover an s-sparse signal in R^p, one requires O(s log(p)) measurements. While this bound is extremely useful in practice, often real world signals are not only sparse, but also exhibit structure in the sparsity pattern. We focus on group-structured patterns in this paper. Under this model, groups of signal coefficients are active (or inactive) together. The groups are predefined, but the particular set of groups that are active (i.e., in the signal support) must be learned from measurements. We show that exploiting knowledge of groups can further reduce the number of measurements required for exact signal recovery, and derive universal bounds for the number of measurements needed. The bound is universal in the sense that it only depends on the number of groups under consideration, and not the particulars of the groups (e.g., compositions, sizes, extents, overlaps, etc.). Experiments show that our result holds for a variety of overlapping group configurations.
1106.4386
Optimal Rate Scheduling via Utility-Maximization for J-User MIMO Markov Fading Wireless Channels with Cooperation
math.PR cs.IT math.IT math.OC math.ST stat.TH
We design a dynamic rate scheduling policy of Markov type via the solution (a social optimal Nash equilibrium point) to a utility-maximization problem over a randomly evolving capacity set for a class of generalized processor-sharing queues living in a random environment, whose job arrivals to each queue follow a doubly stochastic renewal process (DSRP). Both the random environment and the random arrival rate of each DSRP are driven by a finite state continuous time Markov chain (FS-CTMC). The scheduling policy optimizes in a greedy fashion with respect to each queue and environmental state. Since the closed-form solution for the performance of such a queueing system under the policy is difficult to obtain, we establish a reflecting diffusion with regime-switching (RDRS) model for its measures of performance and justify its asymptotic optimality through deriving the stochastic fluid and diffusion limits for the corresponding system under heavy traffic and identifying a cost function related to the utility function, which is minimized through minimizing the workload process in the diffusion limit. More importantly, our queueing model includes both the J-user multi-input multi-output (MIMO) multiple access channel (MAC) and the broadcast channel (BC) with cooperation and admission control as special cases. In these wireless systems, data from the J users in the MAC or data to the J users in the BC is transmitted over a common channel that is fading according to the FS-CTMC. The J-user capacity region for the MAC or the BC is a set-valued stochastic process that switches with the FS-CTMC fading. In any particular channel state, we show that each of the J-user capacity regions is a convex set bounded by a number of linear or smooth curved facets. Therefore our queueing model can perfectly match the dynamics of these wireless systems.
1106.4399
Motif based hierarchical random graphs: structural properties and critical points of an Ising model
math-ph cond-mat.stat-mech cs.SI math.MP physics.soc-ph
A class of random graphs is introduced and studied. The graphs are constructed in an algorithmic way from five motifs which were found in [Milo R., Shen-Orr S., Itzkovitz S., Kashtan N., Chklovskii D., Alon U., Science, 2002, 298, 824-827]. The construction scheme resembles that used in [Hinczewski M., A. Nihat Berker, Phys. Rev. E, 2006, 73, 066126], according to which the short-range bonds are non-random, whereas the long-range bonds appear independently with the same probability. A number of structural properties of the graphs have been described, among which there are degree distributions, clustering, amenability, small-world property. For one of the motifs, the critical point of the Ising model defined on the corresponding graph has been studied.
1106.4475
Interesting Multi-Relational Patterns
cs.DB cs.DS cs.SI
Mining patterns from multi-relational data is a problem attracting increasing interest within the data mining community. Traditional data mining approaches are typically developed for highly simplified types of data, such as an attribute-value table or a binary database, such that those methods are not directly applicable to multi-relational data. Nevertheless, multi-relational data is a more truthful and therefore often also a more powerful representation of reality. Mining patterns of a suitably expressive syntax directly from this representation, is thus a research problem of great importance. In this paper we introduce a novel approach to mining patterns in multi-relational data. We propose a new syntax for multi-relational patterns as complete connected subgraphs in a representation of the database as a K-partite graph. We show how this pattern syntax is generally applicable to multi-relational data, while it reduces to well-known tiles [7] when the data is a simple binary or attribute-value table. We propose RMiner, an efficient algorithm to mine such patterns, and we introduce a method for quantifying their interestingness when contrasted with prior information of the data miner. Finally, we illustrate the usefulness of our approach by discussing results on real-world and synthetic databases.
1106.4487
Natural Evolution Strategies
stat.ML cs.NE
This paper presents Natural Evolution Strategies (NES), a recent family of algorithms that constitute a more principled approach to black-box optimization than established evolutionary algorithms. NES maintains a parameterized distribution on the set of solution candidates, and the natural gradient is used to update the distribution's parameters in the direction of higher expected fitness. We introduce a collection of techniques that address issues of convergence, robustness, sample complexity, computational complexity and sensitivity to hyperparameters. This paper explores a number of implementations of the NES family, ranging from general-purpose multi-variate normal distributions to heavy-tailed and separable distributions tailored towards global optimization and search in high dimensional spaces, respectively. Experimental results show best published performance on various standard benchmarks, as well as competitive performance on others.
1106.4507
OFDM pilot allocation for sparse channel estimation
cs.IT math.IT
In communication systems, efficient use of the spectrum is an indispensable concern. Recently the use of compressed sensing for the purpose of estimating Orthogonal Frequency Division Multiplexing (OFDM) sparse multipath channels has been proposed to decrease the transmitted overhead in form of the pilot subcarriers which are essential for channel estimation. In this paper, we investigate the problem of deterministic pilot allocation in OFDM systems. The method is based on minimizing the coherence of the submatrix of the unitary Discrete Fourier Transform (DFT) matrix associated with the pilot subcarriers. Unlike the usual case of equidistant pilot subcarriers, we show that non-uniform patterns based on cyclic difference sets are optimal. In cases where there are no difference sets, we perform a greedy search method for finding a suboptimal solution. We also investigate the performance of the recovery methods such as Orthogonal Matching Pursuit (OMP) and Iterative Method with Adaptive Thresholding (IMAT) for estimation of the channel taps.
1106.4509
Machine Learning Markets
cs.AI cs.MA cs.NE q-fin.TR stat.ML
Prediction markets show considerable promise for developing flexible mechanisms for machine learning. Here, machine learning markets for multivariate systems are defined, and a utility-based framework is established for their analysis. This differs from the usual approach of defining static betting functions. It is shown that such markets can implement model combination methods used in machine learning, such as product-of-experts and mixture-of-experts approaches as equilibrium pricing models, by varying agent utility functions. They can also implement models composed of local potentials, and message passing methods. Prediction markets also allow for more flexible combinations, by combining multiple different utility functions. Conversely, the market mechanisms implement inference in the relevant probabilistic models. This means that market mechanisms can be utilized for implementing parallelized model building and inference for probabilistic modelling.
1106.4514
Sub-Nyquist Sampling: Bridging Theory and Practice
cs.IT cs.ET math.IT
Sampling theory encompasses all aspects related to the conversion of continuous-time signals to discrete streams of numbers. The famous Shannon-Nyquist theorem has become a landmark in the development of digital signal processing. In modern applications, an increasing number of functions is being pushed forward to sophisticated software algorithms, leaving only those delicate finely-tuned tasks for the circuit level. In this paper, we review sampling strategies which target reduction of the ADC rate below Nyquist. Our survey covers classic works from the early 1950s through recent publications from the past several years. The prime focus is bridging theory and practice, that is, to pinpoint the potential of sub-Nyquist strategies to emerge from the math to the hardware. In that spirit, we integrate contemporary theoretical viewpoints, which study signal modeling in a union of subspaces, together with a taste of practical aspects, namely how the avant-garde modalities boil down to concrete signal processing systems. Our hope is that this presentation style will attract the interest of both researchers and engineers, promoting the sub-Nyquist premise into practical applications and encouraging further research into this exciting new frontier.
1106.4557
Learning When Training Data are Costly: The Effect of Class Distribution on Tree Induction
cs.AI
For large, real-world inductive learning problems, the number of training examples often must be limited due to the costs associated with procuring, preparing, and storing the training examples and/or the computational costs associated with learning from them. In such circumstances, one question of practical importance is: if only n training examples can be selected, in what proportion should the classes be represented? In this article we help to answer this question by analyzing, for a fixed training-set size, the relationship between the class distribution of the training data and the performance of classification trees induced from these data. We study twenty-six data sets and, for each, determine the best class distribution for learning. The naturally occurring class distribution is shown to generally perform well when classifier performance is evaluated using undifferentiated error rate (0/1 loss). However, when the area under the ROC curve is used to evaluate classifier performance, a balanced distribution is shown to perform well. Since neither of these choices for class distribution always generates the best-performing classifier, we introduce a budget-sensitive progressive sampling algorithm for selecting training examples based on the class associated with each example. An empirical analysis of this algorithm shows that the class distribution of the resulting training set yields classifiers with good (nearly-optimal) classification performance.
1106.4561
PDDL2.1: An Extension to PDDL for Expressing Temporal Planning Domains
cs.AI
In recent years research in the planning community has moved increasingly towards application of planners to realistic problems involving both time and many types of resources. For example, interest in planning demonstrated by the space research community has inspired work in observation scheduling, planetary rover exploration and spacecraft control domains. Other temporal and resource-intensive domains including logistics planning, plant control and manufacturing have also helped to focus the community on the modelling and reasoning issues that must be confronted to make planning technology meet the challenges of application. The International Planning Competitions have acted as an important motivating force behind the progress that has been made in planning since 1998. The third competition (held in 2002) set the planning community the challenge of handling time and numeric resources. This necessitated the development of a modelling language capable of expressing temporal and numeric properties of planning domains. In this paper we describe the language, PDDL2.1, that was used in the competition. We describe the syntax of the language, its formal semantics and the validation of concurrent plans. We observe that PDDL2.1 has considerable modelling power --- exceeding the capabilities of current planning technology --- and presents a number of important challenges to the research community.
1106.4569
The Communicative Multiagent Team Decision Problem: Analyzing Teamwork Theories and Models
cs.AI
Despite the significant progress in multiagent teamwork, existing research does not address the optimality of its prescriptions nor the complexity of the teamwork problem. Without a characterization of the optimality-complexity tradeoffs, it is impossible to determine whether the assumptions and approximations made by a particular theory gain enough efficiency to justify the losses in overall performance. To provide a tool for use by multiagent researchers in evaluating this tradeoff, we present a unified framework, the COMmunicative Multiagent Team Decision Problem (COM-MTDP). The COM-MTDP model combines and extends existing multiagent theories, such as decentralized partially observable Markov decision processes and economic team theory. In addition to their generality of representation, COM-MTDPs also support the analysis of both the optimality of team performance and the computational complexity of the agents' decision problem. In analyzing complexity, we present a breakdown of the computational complexity of constructing optimal teams under various classes of problem domains, along the dimensions of observability and communication cost. In analyzing optimality, we exploit the COM-MTDP's ability to encode existing teamwork theories and models to encode two instantiations of joint intentions theory taken from the literature. Furthermore, the COM-MTDP model provides a basis for the development of novel team coordination algorithms. We derive a domain-independent criterion for optimal communication and provide a comparative analysis of the two joint intentions instantiations with respect to this optimal policy. We have implemented a reusable, domain-independent software package based on COM-MTDPs to analyze teamwork coordination strategies, and we demonstrate its use by encoding and evaluating the two joint intentions strategies within an example domain.
1106.4570
Competitive Safety Analysis: Robust Decision-Making in Multi-Agent Systems
cs.GT cs.AI
Much work in AI deals with the selection of proper actions in a given (known or unknown) environment. However, the way to select a proper action when facing other agents is quite unclear. Most work in AI adopts classical game-theoretic equilibrium analysis to predict agent behavior in such settings. This approach however does not provide us with any guarantee for the agent. In this paper we introduce competitive safety analysis. This approach bridges the gap between the desired normative AI approach, where a strategy should be selected in order to guarantee a desired payoff, and equilibrium analysis. We show that a safety level strategy is able to guarantee the value obtained in a Nash equilibrium, in several classical computer science settings. Then, we discuss the concept of competitive safety strategies, and illustrate its use in a decentralized load balancing setting, typical of network problems. In particular, we show that when we have many agents, it is possible to guarantee an expected payoff which is a factor of 8/9 of the payoff obtained in a Nash equilibrium. Our discussion of competitive safety analysis for decentralized load balancing is further developed to deal with many communication links and arbitrary speeds. Finally, we discuss the extension of the above concepts to Bayesian games, and illustrate their use in a basic auctions setup.
1106.4571
Acquiring Word-Meaning Mappings for Natural Language Interfaces
cs.CL cs.AI
This paper focuses on a system, WOLFIE (WOrd Learning From Interpreted Examples), that acquires a semantic lexicon from a corpus of sentences paired with semantic representations. The lexicon learned consists of phrases paired with meaning representations. WOLFIE is part of an integrated system that learns to transform sentences into representations such as logical database queries. Experimental results are presented demonstrating WOLFIE's ability to learn useful lexicons for a database interface in four different natural languages. The lexicons learned by WOLFIE are compared to those acquired by a similar system, with results favorable to WOLFIE. A second set of experiments demonstrates WOLFIE's ability to scale to larger and more difficult, albeit artificially generated, corpora. In natural language acquisition, it is difficult to gather the annotated data needed for supervised learning; however, unannotated data is fairly plentiful. Active learning methods attempt to select for annotation and training only the most informative examples, and therefore are potentially very useful in natural language applications. However, most results to date for active learning have only considered standard classification tasks. To reduce annotation effort while maintaining accuracy, we apply active learning to semantic lexicons. We show that active learning can significantly reduce the number of annotated examples required to achieve a given level of performance.
1106.4572
Specific-to-General Learning for Temporal Events with Application to Learning Event Definitions from Video
cs.AI cs.LG
We develop, analyze, and evaluate a novel, supervised, specific-to-general learner for a simple temporal logic and use the resulting algorithm to learn visual event definitions from video sequences. First, we introduce a simple, propositional, temporal, event-description language called AMA that is sufficiently expressive to represent many events yet sufficiently restrictive to support learning. We then give algorithms, along with lower and upper complexity bounds, for the subsumption and generalization problems for AMA formulas. We present a positive-examples-only specific-to-general learning method based on these algorithms. We also present a polynomial-time-computable "syntactic" subsumption test that implies semantic subsumption without being equivalent to it. A generalization algorithm based on syntactic subsumption can be used in place of semantic generalization to improve the asymptotic complexity of the resulting learning algorithm. Finally, we apply this algorithm to the task of learning relational event definitions from video and show that it yields definitions that are competitive with hand-coded ones.
1106.4573
Towards Adjustable Autonomy for the Real World
cs.AI
Adjustable autonomy refers to entities dynamically varying their own autonomy, transferring decision-making control to other entities (typically agents transferring control to human users) in key situations. Determining whether and when such transfers-of-control should occur is arguably the fundamental research problem in adjustable autonomy. Previous work has investigated various approaches to addressing this problem but has often focused on individual agent-human interactions. Unfortunately, domains requiring collaboration between teams of agents and humans reveal two key shortcomings of these previous approaches. First, these approaches use rigid one-shot transfers of control that can result in unacceptable coordination failures in multiagent settings. Second, they ignore costs (e.g., in terms of time delays or effects on actions) to an agent's team due to such transfers-of-control. To remedy these problems, this article presents a novel approach to adjustable autonomy, based on the notion of a transfer-of-control strategy. A transfer-of-control strategy consists of a conditional sequence of two types of actions: (i) actions to transfer decision-making control (e.g., from an agent to a user or vice versa) and (ii) actions to change an agent's pre-specified coordination constraints with team members, aimed at minimizing miscoordination costs. The goal is for high-quality individual decisions to be made with minimal disruption to the coordination of the team. We present a mathematical model of transfer-of-control strategies. The model guides and informs the operationalization of the strategies using Markov Decision Processes, which select an optimal strategy, given an uncertain environment and costs to the individuals and teams. The approach has been carefully evaluated, including via its use in a real-world, deployed multi-agent system that assists a research group in its daily activities.
1106.4574
Better Mini-Batch Algorithms via Accelerated Gradient Methods
cs.LG
Mini-batch algorithms have been proposed as a way to speed-up stochastic convex optimization problems. We study how such algorithms can be improved using accelerated gradient methods. We provide a novel analysis, which shows how standard gradient methods may sometimes be insufficient to obtain a significant speed-up and propose a novel accelerated gradient algorithm, which deals with this deficiency, enjoys a uniformly superior guarantee and works well in practice.
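As a toy illustration of the ingredients involved (not the paper's algorithm or its guarantees), the sketch below applies Nesterov-style momentum to mini-batch stochastic gradients on a hypothetical 1-D least-squares objective; all names and constants are ours.

```python
import random

# Toy sketch (illustrative, not the paper's algorithm): momentum-accelerated
# mini-batch stochastic gradient descent on f(w) = mean_i (w - x_i)^2 / 2,
# which is minimized at the data mean.
random.seed(0)
data = [random.gauss(3.0, 1.0) for _ in range(1000)]

def minibatch_grad(w, batch):
    # gradient of the mini-batch objective at w
    return sum(w - x for x in batch) / len(batch)

def accelerated_minibatch_sgd(data, batch_size=50, steps=300, lr=0.1, beta=0.9):
    w, y = 0.0, 0.0          # y is the extrapolated ("look-ahead") point
    for _ in range(steps):
        batch = random.sample(data, batch_size)
        w_next = y - lr * minibatch_grad(y, batch)   # gradient step at y
        y = w_next + beta * (w_next - w)             # momentum extrapolation
        w = w_next
    return w

w_hat = accelerated_minibatch_sgd(data)
```

The gradient is evaluated at the extrapolated point y rather than at w, which is the distinguishing feature of Nesterov-type acceleration over plain momentum.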
1106.4575
An Analysis of Phase Transition in NK Landscapes
cs.AI
In this paper, we analyze the decision version of the NK landscape model from the perspective of threshold phenomena and phase transitions under two random distributions, the uniform probability model and the fixed ratio model. For the uniform probability model, we prove that the phase transition is easy in the sense that there is a polynomial algorithm that can solve a random instance of the problem with the probability asymptotic to 1 as the problem size tends to infinity. For the fixed ratio model, we establish several upper bounds for the solubility threshold, and prove that random instances with parameters above these upper bounds can be solved polynomially. This, together with our empirical study for random instances generated below and in the phase transition region, suggests that the phase transition of the fixed ratio model is also easy.
1106.4576
Expert-Guided Subgroup Discovery: Methodology and Application
cs.AI
This paper presents an approach to expert-guided subgroup discovery. The main step of the subgroup discovery process, the induction of subgroup descriptions, is performed by a heuristic beam search algorithm, using a novel parametrized definition of rule quality which is analyzed in detail. The other important steps of the proposed subgroup discovery process are the detection of statistically significant properties of selected subgroups and subgroup visualization: statistically significant properties are used to enrich the descriptions of induced subgroups, while the visualization shows subgroup properties in the form of distributions of the numbers of examples in the subgroups. The approach is illustrated by the results obtained for a medical problem of early detection of patient risk groups.
1106.4577
Interactive Execution Monitoring of Agent Teams
cs.MA cs.AI
There is an increasing need for automated support for humans monitoring the activity of distributed teams of cooperating agents, both human and machine. We characterize the domain-independent challenges posed by this problem, and describe how properties of domains influence the challenges and their solutions. We will concentrate on dynamic, data-rich domains where humans are ultimately responsible for team behavior. Thus, the automated aid should interactively support effective and timely decision making by the human. We present a domain-independent categorization of the types of alerts a plan-based monitoring system might issue to a user, where each type generally requires different monitoring techniques. We describe a monitoring framework for integrating many domain-specific and task-specific monitoring techniques and then using the concept of value of an alert to avoid operator overload. We use this framework to describe an execution monitoring approach we have used to implement Execution Assistants (EAs) in two different dynamic, data-rich, real-world domains to assist a human in monitoring team behavior. One domain (Army small unit operations) has hundreds of mobile, geographically distributed agents, a combination of humans, robots, and vehicles. The other domain (teams of unmanned ground and air vehicles) has a handful of cooperating robots. Both domains involve unpredictable adversaries in the vicinity. Our approach customizes monitoring behavior for each specific task, plan, and situation, as well as for user preferences. Our EAs alert the human controller when reported events threaten plan execution or physically threaten team members. Alerts were generated in a timely manner without inundating the user with too many alerts (less than 10 percent of alerts are unwanted, as judged by domain experts).
1106.4578
Propositional Independence - Formula-Variable Independence and Forgetting
cs.AI
Independence -- the study of what is relevant to a given problem of reasoning -- has received increasing attention from the AI community. In this paper, we consider two basic forms of independence, namely, a syntactic one and a semantic one. We show features and drawbacks of both. In particular, while the syntactic form of independence is computationally easy to check, there are cases in which things that are intuitively not relevant are not recognized as such. We also consider the problem of forgetting, i.e., distilling from a knowledge base only the part that is relevant to the set of queries constructed from a subset of the alphabet. While such a process is computationally hard, it allows for a simplification of subsequent reasoning, and can thus be viewed as a form of compilation: once the relevant part of a knowledge base has been extracted, all reasoning tasks to be performed can be simplified.
1106.4600
Perturbed and Permuted: Signal Integration in Network-Structured Dynamic Systems
q-bio.QM cond-mat.dis-nn cs.SY math.DS q-bio.MN
Biological systems (among others) may respond to a large variety of distinct external stimuli, or signals. These perturbations will generally be presented to the system not singly, but in various combinations, so that a proper understanding of the system response requires assessment of the degree to which the effects of one signal modulate the effects of another. This paper develops a pair of structural metrics for sparse differential equation models of complex dynamic systems and demonstrates that these metrics correlate with proxies of the susceptibility of one signal-response to be altered in the context of a second signal. One of these metrics may be interpreted as a normalized arc density in the neighborhood of certain influential nodes; this metric appears to correlate with increased independence of signal response.
1106.4632
Inferring 3D Articulated Models for Box Packaging Robot
cs.RO cs.AI cs.CV
Given a point cloud, we consider inferring kinematic models of 3D articulated objects such as boxes for the purpose of manipulating them. While previous work has shown how to extract a planar kinematic model (often represented as a linear chain), such planar models do not apply to 3D objects that are composed of segments often linked to other segments in cyclic configurations. We present an approach for building a model that captures the relation between the input point cloud features and the object segments, as well as the relation between neighboring object segments. We use a conditional random field that allows us to model the dependencies between different segments of the object. We test our approach on inferring the kinematic structure from partial and noisy point cloud data for a wide variety of boxes including cake boxes, pizza boxes, and cardboard cartons of several sizes. The inferred structure enables our robot to successfully close these boxes by manipulating the flaps.
1106.4649
Space-Efficient Data-Analysis Queries on Grids
cs.DS cs.CG cs.DB
We consider various data-analysis queries on two-dimensional points. We give new space/time tradeoffs over previous work on geometric queries such as dominance and rectangle visibility, and on semigroup and group queries such as sum, average, variance, minimum and maximum. We also introduce new solutions to queries less frequently considered in the literature such as two-dimensional quantiles, majorities, successor/predecessor, mode, and various top-$k$ queries, considering static and dynamic scenarios.
1106.4692
Early Phishing
cs.CR cs.CY cs.SI
The history of phishing traces back in important ways to the mid-1990s when hacking software facilitated the mass targeting of people in password stealing scams on America Online (AOL). The first of these software programs was mine, called AOHell, and it was where the word phishing was coined. The software provided an automated password and credit card-stealing mechanism starting in January 1995. Though the practice of tricking users in order to steal passwords or information possibly goes back to the earliest days of computer networking, AOHell's phishing system was the first automated tool made publicly available for this purpose. The program influenced the creation of many other automated phishing systems that were made over a number of years. These tools were available to amateurs who used them to engage in a countless number of phishing attacks. By the later part of the decade, the activity moved from AOL to other networks and eventually grew to involve professional criminals on the internet. What began as a scheme by rebellious teenagers to steal passwords evolved into one of the top computer security threats affecting people, corporations, and governments.
1106.4728
Large Zero Autocorrelation Zone of Golay Sequences and $4^q$-QAM Golay Complementary Sequences
cs.IT math.IT
Sequences with good correlation properties have been widely adopted in modern communications, radar and sonar applications. In this paper, we present our new findings on some constructions of single $H$-ary Golay sequences and $4^q$-QAM Golay complementary sequences with a large zero autocorrelation zone, where $H\ge 2$ is an arbitrary even integer and $q\ge 2$ is an arbitrary integer. These new results on Golay sequences and QAM Golay complementary sequences can be exploited during synchronization and detection at the receiver end and thus improve the performance of the communication system.
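For intuition only, here is the defining property in the classical binary case (the paper's $H$-ary and QAM constructions are more general): a Golay complementary pair has aperiodic autocorrelations that cancel at every nonzero shift.

```python
# Classical binary Golay complementary pair (illustration only; the paper
# treats H-ary and 4^q-QAM constructions with large zero autocorrelation zones).
def acorr(seq, k):
    # aperiodic autocorrelation of seq at shift k
    return sum(seq[i] * seq[i + k] for i in range(len(seq) - k))

a = [1, 1, 1, -1]   # a well-known length-4 Golay complementary pair
b = [1, 1, -1, 1]
sums = [acorr(a, k) + acorr(b, k) for k in range(1, len(a))]
# the pairwise autocorrelation sums vanish at all nonzero shifts
```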
1106.4862
Translation of Pronominal Anaphora between English and Spanish: Discrepancies and Evaluation
cs.CL cs.AI
This paper evaluates the different tasks carried out in the translation of pronominal anaphora in a machine translation (MT) system. The MT interlingua approach named AGIR (Anaphora Generation with an Interlingua Representation) improves upon other proposals presented to date because it is able to translate intersentential anaphors, detect co-reference chains, and translate Spanish zero pronouns into English---issues hardly considered by other systems. The paper presents the resolution and evaluation of these anaphora problems in AGIR with the use of different kinds of knowledge (lexical, morphological, syntactic, and semantic). The translation of English and Spanish anaphoric third-person personal pronouns (including Spanish zero pronouns) into the target language has been evaluated on unrestricted corpora. We have obtained a precision of 80.4% and 84.8% in the translation of Spanish and English pronouns, respectively. Although we have only studied the Spanish and English languages, our approach can be easily extended to other languages such as Portuguese, Italian, or Japanese.
1106.4863
Monte Carlo Methods for Tempo Tracking and Rhythm Quantization
cs.AI
We present a probabilistic generative model for timing deviations in expressive music performance. The structure of the proposed model is equivalent to a switching state space model. The switch variables correspond to discrete note locations as in a musical score. The continuous hidden variables denote the tempo. We formulate two well known music recognition problems, namely tempo tracking and automatic transcription (rhythm quantization) as filtering and maximum a posteriori (MAP) state estimation tasks. Exact computation of posterior features such as the MAP state is intractable in this model class, so we introduce Monte Carlo methods for integration and optimization. We compare Markov Chain Monte Carlo (MCMC) methods (such as Gibbs sampling, simulated annealing and iterative improvement) and sequential Monte Carlo methods (particle filters). Our simulation results suggest better results with sequential methods. The methods can be applied in both online and batch scenarios such as tempo tracking and transcription and are thus potentially useful in a number of music applications such as adaptive automatic accompaniment, score typesetting and music information retrieval.
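As a minimal illustration of the sequential Monte Carlo side (a bare-bones sketch, not the paper's switching state-space model), a particle filter can track a slowly drifting tempo period from noisy inter-onset intervals; all parameters below are hypothetical.

```python
import math
import random

# Bare-bones particle filter (illustrative only): track a tempo period from
# noisy inter-onset-interval observations via propagate/weight/resample steps.
random.seed(0)
true_period, obs_noise = 0.5, 0.02
observations = [true_period + random.gauss(0, obs_noise) for _ in range(100)]

n_particles = 500
particles = [random.uniform(0.3, 0.8) for _ in range(n_particles)]
for z in observations:
    # propagate: small random-walk drift of each tempo hypothesis
    particles = [p + random.gauss(0, 0.005) for p in particles]
    # weight by the Gaussian likelihood of the observed interval
    weights = [math.exp(-(z - p) ** 2 / (2 * obs_noise ** 2)) for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    # resample in proportion to the weights
    particles = random.choices(particles, weights=weights, k=n_particles)

estimate = sum(particles) / n_particles   # posterior-mean tempo estimate
```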
1106.4864
Exploiting Contextual Independence In Probabilistic Inference
cs.AI
Bayesian belief networks have grown to prominence because they provide compact representations for many problems for which probabilistic inference is appropriate, and there are algorithms to exploit this compactness. The next step is to allow compact representations of the conditional probabilities of a variable given its parents. In this paper we present such a representation that exploits contextual independence in terms of parent contexts; which variables act as parents may depend on the value of other variables. The internal representation is in terms of contextual factors (confactors); a confactor is simply a pair of a context and a table. The algorithm, contextual variable elimination, is based on the standard variable elimination algorithm that eliminates the non-query variables in turn, but when eliminating a variable, the tables that need to be multiplied can depend on the context. This algorithm reduces to standard variable elimination when there is no contextual independence structure to exploit. We show how this can be much more efficient than variable elimination when there is structure to exploit. We explain why this new method can exploit more structure than previous methods for structured belief-network inference, and than an analogous algorithm that uses trees.
1106.4865
Bound Propagation
cs.AI
In this article we present an algorithm to compute bounds on the marginals of a graphical model. For several small clusters of nodes, upper and lower bounds on the marginal values are computed independently of the rest of the network. The range of allowed probability distributions over the surrounding nodes is restricted using earlier computed bounds. As we will show, this can be considered as a set of constraints in a linear programming problem of which the objective function is the marginal probability of the center nodes. In this way knowledge about the marginals of neighbouring clusters is passed to other clusters, thereby tightening the bounds on their marginals. We show that sharp bounds can be obtained for undirected and directed graphs that are used for practical applications, but for which exact computations are infeasible.
1106.4866
On Polynomial Sized MDP Succinct Policies
cs.AI
Policies of Markov Decision Processes (MDPs) determine the next action to execute from the current state and, possibly, the history (the past states). When the number of states is large, succinct representations are often used to compactly represent both the MDPs and the policies in a reduced amount of space. In this paper, some problems related to the size of succinctly represented policies are analyzed. Namely, it is shown that some MDPs have policies that can only be represented in space super-polynomial in the size of the MDP, unless the polynomial hierarchy collapses. This fact motivates the study of the problem of deciding whether a given MDP has a policy of a given size and reward. Since some algorithms for MDPs work by finding a succinct representation of the value function, the problem of deciding the existence of a succinct representation of a value function of a given size and reward is also considered.
1106.4867
Compiling Causal Theories to Successor State Axioms and STRIPS-Like Systems
cs.AI
We describe a system for specifying the effects of actions. Unlike those commonly used in AI planning, our system uses an action description language that allows one to specify the effects of actions using domain rules, which are state constraints that can entail new action effects from old ones. Declaratively, an action domain in our language corresponds to a nonmonotonic causal theory in the situation calculus. Procedurally, such an action domain is compiled into a set of logical theories, one for each action in the domain, from which fully instantiated successor state-like axioms and STRIPS-like systems are then generated. We expect the system to be a useful tool for knowledge engineers writing action specifications for classical AI planning systems, GOLOG systems, and other systems where formal specifications of actions are needed.
1106.4868
VHPOP: Versatile Heuristic Partial Order Planner
cs.AI
VHPOP is a partial order causal link (POCL) planner loosely based on UCPOP. It draws from the experience gained in the early to mid-1990s on flaw selection strategies for POCL planning, and combines this with more recent developments in the field of domain-independent planning, such as distance-based heuristics and reachability analysis. We present an adaptation of the additive heuristic for plan space planning, and modify it to account for possible reuse of existing actions in a plan. We also propose a large set of novel flaw selection strategies, and show how these can help us solve more problems than previously possible by POCL planners. VHPOP also supports planning with durative actions by incorporating standard techniques for temporal constraint reasoning. We demonstrate that the same heuristic techniques used to boost the performance of classical POCL planning can be effective in domains with durative actions as well. The result is a versatile heuristic POCL planner competitive with established CSP-based and heuristic state space planners.
1106.4869
SHOP2: An HTN Planning System
cs.AI
The SHOP2 planning system received one of the awards for distinguished performance in the 2002 International Planning Competition. This paper describes the features of SHOP2 which enabled it to excel in the competition, especially those aspects of SHOP2 that deal with temporal and metric planning domains.
1106.4871
An Architectural Approach to Ensuring Consistency in Hierarchical Execution
cs.AI
Hierarchical task decomposition is a method used in many agent systems to organize agent knowledge. This work shows how the combination of a hierarchy and persistent assertions of knowledge can lead to difficulty in maintaining logical consistency in asserted knowledge. We explore the problematic consequences of persistent assumptions in the reasoning process and introduce novel potential solutions. Having implemented one of the possible solutions, Dynamic Hierarchical Justification, its effectiveness is demonstrated with an empirical analysis.
1106.4872
Wrapper Maintenance: A Machine Learning Approach
cs.AI
The proliferation of online information sources has led to an increased use of wrappers for extracting data from Web sources. While most of the previous research has focused on quick and efficient generation of wrappers, the development of tools for wrapper maintenance has received less attention. This is an important research problem because Web sources often change in ways that prevent the wrappers from extracting data correctly. We present an efficient algorithm that learns structural information about data from positive examples alone. We describe how this information can be used for two wrapper maintenance applications: wrapper verification and reinduction. The wrapper verification system detects when a wrapper is not extracting correct data, usually because the Web source has changed its format. The reinduction algorithm automatically recovers from changes in the Web source by identifying data on Web pages so that a new wrapper may be generated for this source. To validate our approach, we monitored 27 wrappers over a period of a year. The verification algorithm correctly discovered 35 of the 37 wrapper changes, and made 16 mistakes, resulting in precision of 0.73 and recall of 0.95. We validated the reinduction algorithm on ten Web sources. We were able to successfully reinduce the wrappers, obtaining precision and recall values of 0.90 and 0.80 on the data extraction task.
1106.4880
Semantic Inference using Chemogenomics Data for Drug Discovery
q-bio.QM cs.DL cs.IR
Background: Semantic Web Technology (SWT) makes it possible to integrate and search the large volume of life science datasets in the public domain, as demonstrated by well-known linked data projects such as LODD, Bio2RDF, and Chem2Bio2RDF. Integration of these sets creates large networks of information. We have previously described a tool called WENDI for aggregating information pertaining to new chemical compounds, effectively creating evidence paths relating the compounds to genes, diseases and so on. In this paper we examine the utility of automatically inferring new compound-disease associations (and thus new links in the network) based on semantically marked-up versions of these evidence paths, rule-sets and inference engines. Results: Through the implementation of a semantic inference algorithm, rule set, Semantic Web methods (RDF, OWL and SPARQL) and new interfaces, we have created a new tool called Chemogenomic Explorer that uses networks of ontologically annotated RDF statements along with deductive reasoning tools to infer new associations between the query structure and genes and diseases from WENDI results. The tool then permits interactive clustering and filtering of these evidence paths. Conclusions: We present a new aggregate approach to inferring links between chemical compounds and diseases using semantic inference. This approach allows multiple evidence paths between compounds and diseases to be identified using a rule-set and semantically annotated data, and for these evidence paths to be clustered to show overall evidence linking the compound to a disease. We believe this is a powerful approach, because it allows compound-disease relationships to be ranked by the amount of evidence supporting them.
1106.4907
Face Identification from Manipulated Facial Images using SIFT
cs.CV
Editing of digital images is ubiquitous. Identification of deliberately modified facial images is a new challenge for face identification systems. In this paper, we address the problem of identifying a face or person from heavily altered facial images. In this face identification problem, the input to the system is a manipulated or transformed face image, and the system reports back the determined identity from a database of known individuals. Such a system can be useful in mugshot identification, in which the mugshot database contains two views (frontal and profile) of each criminal. We considered only the frontal view from the available database for face identification, and the query image is a manipulated face generated by a face transformation software tool available online. We propose SIFT features for efficient face identification in this scenario. A further comparative analysis is given with the well-known eigenface approach. Experiments have been conducted with real case images to evaluate the performance of both methods.
1106.4925
Belief-propagation algorithm and the Ising model on networks with arbitrary distributions of motifs
cond-mat.dis-nn cs.AI
We generalize the belief-propagation algorithm to sparse random networks with arbitrary distributions of motifs (triangles, loops, etc.). Each vertex in these networks belongs to a given set of motifs (a generalization of the configuration model). These networks can be treated as sparse uncorrelated hypergraphs in which hyperedges represent motifs. Here a hypergraph is a generalization of a graph, where a hyperedge can connect any number of vertices. These uncorrelated hypergraphs are tree-like (hypertrees), which crucially simplifies the problem and allows us to apply the belief-propagation algorithm to these loopy networks with arbitrary motifs. As natural examples, we consider motifs in the form of finite loops and cliques. We apply the belief-propagation algorithm to the ferromagnetic Ising model on the resulting random networks. We obtain an exact solution of this model on networks with finite loops or cliques as motifs. We find an exact critical temperature of the ferromagnetic phase transition and demonstrate that, as the clustering coefficient and the loop size increase, the critical temperature rises above that of ordinary tree-like complex networks. Our solution also gives the birth point of the giant connected component in these loopy networks.
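For orientation, the sketch below runs plain belief propagation for the ferromagnetic Ising model on a small tree, where BP is exact, and checks the result against brute-force enumeration. This is our toy example; the paper's contribution is the generalization to hypergraphs of motifs.

```python
import itertools
import math

# Toy example: belief propagation for the Ising model on a 4-node tree
# (a star centered at node 1), checked against exact enumeration.
J, h = 0.5, 0.2                        # coupling and external field
edges = [(0, 1), (1, 2), (1, 3)]
n = 4
nbrs = {i: [] for i in range(n)}
for a, b in edges:
    nbrs[a].append(b)
    nbrs[b].append(a)

# msg[(i, j)][s]: normalized message from i to j about spin s at j
msg = {(i, j): {+1: 0.5, -1: 0.5} for i in nbrs for j in nbrs[i]}
for _ in range(10):                    # converges within diameter-many sweeps
    new = {}
    for (i, j) in msg:
        m = {}
        for sj in (+1, -1):
            m[sj] = sum(
                math.exp(J * si * sj + h * si)
                * math.prod(msg[(k, i)][si] for k in nbrs[i] if k != j)
                for si in (+1, -1))
        z = m[+1] + m[-1]
        new[(i, j)] = {s: m[s] / z for s in m}
    msg = new

def bp_marginal(i):
    # belief at node i: local field times all incoming messages
    b = {si: math.exp(h * si) * math.prod(msg[(k, i)][si] for k in nbrs[i])
         for si in (+1, -1)}
    return b[+1] / (b[+1] + b[-1])     # P(spin_i = +1)

def exact_marginal(i):
    # brute-force sum over all 2^n spin configurations
    num = den = 0.0
    for s in itertools.product((+1, -1), repeat=n):
        w = math.exp(sum(J * s[a] * s[b] for a, b in edges) + h * sum(s))
        den += w
        if s[i] == +1:
            num += w
    return num / den
```

On a tree the two computations agree exactly; the paper's point is that motif hypergraphs restore this tree-like exactness even when the underlying network is loopy.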
1106.4987
The Cosparse Analysis Model and Algorithms
math.NA cs.IT math.IT
After a decade of extensive study of the sparse representation synthesis model, we can safely say that this is a mature and stable field, with clear theoretical foundations, and appealing applications. Alongside this approach, there is an analysis counterpart model, which, despite its similarity to the synthesis alternative, is markedly different. Surprisingly, the analysis model did not get a similar attention, and its understanding today is shallow and partial. In this paper we take a closer look at the analysis approach, better define it as a generative model for signals, and contrast it with the synthesis one. This work proposes effective pursuit methods that aim to solve inverse problems regularized with the analysis-model prior, accompanied by a preliminary theoretical study of their performance. We demonstrate the effectiveness of the analysis model in several experiments.
1106.4988
The uniform controllability property of semidiscrete approximations for the parabolic distributed parameter systems in Banach spaces
math.OC cs.SY
The problem we consider in this work is to minimize the L^q-norm (q > 2) of the semidiscrete controls. As shown in [LT06], under the main approximation assumptions that the discretized semigroup is uniformly analytic and that the degree of unboundedness of the control operator is lower than 1/2, the uniform controllability property of semidiscrete approximations for parabolic systems is achieved in L^2. In the present paper, we show that the uniform controllability property continues to hold in L^q (q > 2), even under the condition that the degree of unboundedness of the control operator is greater than 1/2. Moreover, the minimization procedure to compute the approximation controls is provided. An example of application is implemented for the one-dimensional heat equation with Dirichlet boundary control.
1106.5003
Voltage Collapse and ODE Approach to Power Flows: Analysis of a Feeder Line with Static Disorder in Consumption/Production
nlin.AO cs.CE physics.soc-ph
We consider a model of a distribution feeder connecting multiple loads to the sub-station. Voltage is controlled directly at the head of the line (sub-station); however, voltage anywhere further down the line is subject to fluctuations, caused by irregularities of real and reactive distributed power consumption/generation. The lack of direct control of voltage along the line may result in voltage instability, also called voltage collapse, a phenomenon well known and documented in the power engineering literature. Motivated by emerging photo-voltaic technology, which brings a new source of renewable generation but also contributes a significant increase in power-flow fluctuations, we reexamine the phenomenon of voltage stability and collapse. In the limit where the number of consumers is large and spatial variations in power flows are smooth functions of position along the feeder, we derive a set of power flow Ordinary Differential Equations (ODE), verify the phenomenon of voltage collapse, and study the effect of disorder and irregularity in injection and consumption on the voltage profile by simulating the stochastic ODE. We observe that disorder leads to nonlinear amplification of the voltage variations at the end of the line as the point of voltage collapse is approached. We also find that the disorder, when correlated on a scale sufficiently small compared to the length of the line, self-averages, i.e. the voltage profile remains spatially smooth for any individual realization of the disorder and is correlated only at scales comparable to the length of the line. Finally, we explain why the integrated effect of disorder on the voltage at the end of the line cannot be described within a naive one-generator-one-load model.
1106.5037
Fast and Efficient Compressive Sensing using Structurally Random Matrices
cs.IT math.IT
This paper introduces a new framework of fast and efficient sensing matrices for practical compressive sensing, called Structurally Random Matrix (SRM). In the proposed framework, we pre-randomize a sensing signal by scrambling its samples or flipping its sample signs and then fast-transform the randomized samples and finally, subsample the transform coefficients as the final sensing measurements. SRM is highly relevant for large-scale, real-time compressive sensing applications as it has fast computation and supports block-based processing. In addition, we can show that SRM has theoretical sensing performance comparable with that of completely random sensing matrices. Numerical simulation results verify the validity of the theory as well as illustrate the promising potentials of the proposed sensing framework.
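A minimal sketch of the SRM pipeline described above (our simplification, pure Python): flip random sample signs, apply a fast transform (here a Walsh-Hadamard transform standing in for any fast orthogonal transform), and keep a random subset of coefficients.

```python
import random

# Sketch of Structurally Random Matrix sensing (our simplification):
# y = R * F * D * x, with D random sign flips, F a fast transform, and
# R random subsampling of the transform coefficients.
def fwht(x):
    """Fast Walsh-Hadamard transform; length must be a power of 2."""
    x = list(x)
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            for j in range(i, i + h):
                x[j], x[j + h] = x[j] + x[j + h], x[j] - x[j + h]
        h *= 2
    return x

def srm_measure(signal, m, seed=0):
    rng = random.Random(seed)
    signs = [rng.choice((-1, 1)) for _ in signal]          # D: sign flips
    coeffs = fwht([s * d for s, d in zip(signal, signs)])  # F: fast transform
    idx = rng.sample(range(len(signal)), m)                # R: subsampling
    return [coeffs[i] for i in idx]

x = [float(i % 5) for i in range(16)]
y = srm_measure(x, m=6)   # 6 compressive measurements of a length-16 signal
```

Because every stage is linear and the transform is fast, the whole operator applies in near-linear time without ever forming a dense sensing matrix.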
1106.5039
The Capacity of MIMO Channels with Per-Antenna Power Constraint
cs.IT math.IT math.OC
We establish the optimal input signaling and the capacity of MIMO channels under a per-antenna power constraint. While admitting a linear eigenbeam structure, the optimal input is no longer diagonalizable by the channel right singular vectors as with a sum power constraint. We formulate the capacity optimization as an SDP problem and solve in closed form for the optimal input covariance as a function of the dual variable. We then design an efficient algorithm to find this optimal input signaling for all channel sizes. The proposed algorithm allows for straightforward implementation in practical systems in real time. Simulation results show that with equal constraints per antenna, capacity with per-antenna power can be close to capacity with sum power, but as the constraints become more skewed, the two capacities diverge. Forcing input eigenbeams to match the channel right singular vectors achieves no improvement over independent signaling and can even be detrimental to capacity.
1106.5040
Optimal High Frequency Trading with limit and market orders
q-fin.TR cs.SY math.OC q-fin.CP
We propose a framework for studying optimal market making policies in a limit order book (LOB). The bid-ask spread of the LOB is modelled by a Markov chain with finite values, multiple of the tick size, and subordinated by the Poisson process of the tick-time clock. We consider a small agent who continuously submits limit buy/sell orders and submits market orders at discrete dates. The objective of the market maker is to maximize her expected utility from revenue over a short-term horizon by a tradeoff between limit and market orders, while controlling her inventory position. This is formulated as a mixed regime-switching regular/impulse control problem that we characterize in terms of a quasi-variational system by dynamic programming methods. In the case of a mean-variance criterion with martingale reference price, or when the asset price follows a Levy process and with an exponential utility criterion, the dynamic programming system can be reduced to a system of simple equations involving only the inventory and spread variables. Calibration procedures are derived for estimating the transition matrix and intensity parameters for the spread and for the Cox processes modelling the execution of limit orders. Several computational tests are performed both on simulated and real data, and illustrate the impact and profit when considering execution priority in limit orders and market orders.
1106.5053
Modeling Social Networks with Node Attributes using the Multiplicative Attribute Graph Model
cs.SI physics.soc-ph
Networks arising from social, technological and natural domains exhibit rich connectivity patterns and nodes in such networks are often labeled with attributes or features. We address the question of modeling the structure of networks where nodes have attribute information. We present a Multiplicative Attribute Graph (MAG) model that considers nodes with categorical attributes and models the probability of an edge as the product of individual attribute link formation affinities. We develop a scalable variational expectation maximization parameter estimation method. Experiments show that MAG model reliably captures network connectivity as well as provides insights into how different attributes shape the network structure.
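To make the edge-probability construction concrete, here is a small generative sketch (with illustrative, not fitted, parameters): each node carries binary attributes, and an edge appears with probability equal to the product of per-attribute affinities.

```python
import itertools
import random

# MAG-style generative sketch (illustrative parameters): nodes carry binary
# attributes; the edge probability is the product of per-attribute affinities.
random.seed(1)
n_nodes, n_attrs = 30, 3
# One 2x2 affinity matrix Theta_l per attribute (hypothetical values)
theta = [[[0.8, 0.4], [0.4, 0.6]] for _ in range(n_attrs)]
attrs = [[random.randint(0, 1) for _ in range(n_attrs)] for _ in range(n_nodes)]

def edge_prob(u, v):
    # P(edge u-v) = prod_l Theta_l[a_u[l]][a_v[l]]
    p = 1.0
    for l in range(n_attrs):
        p *= theta[l][attrs[u][l]][attrs[v][l]]
    return p

# Sample one network realization from the model
graph_edges = [(u, v) for u, v in itertools.combinations(range(n_nodes), 2)
               if random.random() < edge_prob(u, v)]
```

The multiplicative form is what makes per-attribute affinities separable, which is also what the paper's variational EM exploits when fitting the model.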
1106.5063
On-line Decentralized Charging of Plug-In Electric Vehicles in Power Systems
math.OC cs.SY
Plug-in electric vehicles (PEVs) have been gaining popularity in recent years, owing to growing societal awareness of the need to reduce greenhouse gas (GHG) emissions and to gain independence from foreign oil and petroleum. Large-scale deployment of PEVs currently faces many challenges. One particular concern is that PEV charging can have significant impacts on the existing power distribution system, due to the increase in peak load. This work mitigates the impacts of PEV charging by proposing a decentralized smart PEV charging algorithm that minimizes the distribution system load variance, so that a `flat' total load profile can be obtained. The charging algorithm is myopic, in that it controls the PEV charging processes in each time slot based entirely on the current power system states, without knowledge of future system dynamics. We provide theoretical guarantees on the asymptotic optimality of the proposed charging algorithm. Thus, compared to other forecast-based smart charging approaches in the literature, the charging algorithm not only achieves optimality asymptotically in an on-line, decentralized manner, but is also robust against various uncertainties in the power system, such as random PEV driving patterns and distributed generation (DG) with highly intermittent renewable energy sources.
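The `flat profile' objective can be illustrated with a minimal offline water-filling sketch: spread the PEV energy demand over the slots with the lowest base load until the filled slots reach a common level. The paper's algorithm is on-line and decentralized; this shows only the variance-minimizing idea, with made-up load and energy figures.

```python
def valley_fill(base_load, energy, iters=60):
    """Find the water level L with sum(max(0, L - b)) = energy by
    bisection, and charge each slot up to that level. The resulting
    total load profile is as flat as the base load permits."""
    lo, hi = min(base_load), max(base_load) + energy
    for _ in range(iters):
        level = (lo + hi) / 2
        used = sum(max(0.0, level - b) for b in base_load)
        if used > energy:
            hi = level
        else:
            lo = level
    return [max(0.0, lo - b) for b in base_load]

# Base load per slot (made up) and 2.0 units of PEV energy to place.
charge = valley_fill([3.0, 1.0, 2.0], 2.0)
total = [b + c for b, c in zip([3.0, 1.0, 2.0], charge)]
```

Here the water level settles at 2.5: the peak slot receives nothing and the two valley slots are filled to 2.5 each.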
1106.5111
Exploiting Reputation in Distributed Virtual Environments
cs.AI
The cognitive research on reputation has shown several interesting properties that can improve both the quality of services and the security in distributed electronic environments. In this paper, the impact of reputation on decision-making under scarcity of information will be shown. First, a cognitive theory of reputation will be presented; then a selection of experimental simulation results from different studies will be discussed. Such results concern the benefits of reputation when agents need to find good sellers in a virtual marketplace under uncertainty and informational cheating.
1106.5112
The All Relevant Feature Selection using Random Forest
cs.AI
In this paper we examine the application of the random forest classifier to the all-relevant feature selection problem. To this end we first examine two recently proposed all-relevant feature selection algorithms, both random forest wrappers, on a series of synthetic data sets of varying size. We show that reasonable prediction accuracy can be achieved and that the heuristic algorithms designed to handle the all-relevant problem perform close to the reference ideal algorithm. Then, we apply one of the algorithms to four families of semi-synthetic data sets to assess how the properties of a particular data set influence the results of feature selection. Finally we test the procedure on a well-known gene expression data set. The relevance of nearly all previously established important genes is confirmed, and moreover the relevance of several new ones is discovered.
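A common shape for such random forest wrappers is the shadow-feature test: append shuffled copies of every feature, fit a forest, and keep real features that out-rank the best shadow. The sketch below is a hypothetical simplification in that spirit (a single pass, using scikit-learn), not a faithful reimplementation of the algorithms the paper evaluates.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def all_relevant(X, y, n_estimators=100, seed=0):
    """One-pass shadow-feature sketch of all-relevant selection:
    real features whose random forest importance exceeds the largest
    importance among permuted 'shadow' copies are kept as relevant."""
    rng = np.random.default_rng(seed)
    shadows = np.apply_along_axis(rng.permutation, 0, X)  # shuffle columns
    rf = RandomForestClassifier(n_estimators=n_estimators, random_state=seed)
    rf.fit(np.hstack([X, shadows]), y)
    imp = rf.feature_importances_
    n = X.shape[1]
    threshold = imp[n:].max()                  # best shadow importance
    return [j for j in range(n) if imp[j] > threshold]

# Synthetic data: feature 0 carries the label, the rest is noise.
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 300)
X = np.column_stack([y + 0.1 * rng.normal(size=300),
                     rng.normal(size=300),
                     rng.normal(size=300)])
relevant = all_relevant(X, y)   # feature 0 should be flagged as relevant
```

The full wrappers iterate this test and use statistical acceptance criteria rather than a single threshold comparison.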
1106.5113
Secure Mining of Association Rules in Horizontally Distributed Databases
cs.DB cs.CR cs.DC
We propose a protocol for secure mining of association rules in horizontally distributed databases. The current leading protocol is that of Kantarcioglu and Clifton (TKDE 2004). Our protocol, like theirs, is based on the Fast Distributed Mining (FDM) algorithm of Cheung et al. (PDIS 1996), which is an unsecured distributed version of the Apriori algorithm. The main ingredients in our protocol are two novel secure multi-party algorithms --- one that computes the union of private subsets that each of the interacting players hold, and another that tests the inclusion of an element held by one player in a subset held by another. Our protocol offers enhanced privacy with respect to the protocol of Kantarcioglu and Clifton. In addition, it is simpler and is significantly more efficient in terms of communication rounds, communication cost and computational cost.
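For orientation, here is a plain, non-secure sketch of the FDM flow that the protocol builds on: each site proposes its locally frequent itemsets, the candidate union is pooled, and global counts decide frequency. The secure union and inclusion tests that are the paper's actual contribution are deliberately omitted; the data below is made up.

```python
from itertools import combinations

def fdm_frequent(sites, min_frac, k=2):
    """Non-secure FDM-style mining: pool each site's locally frequent
    k-itemsets as candidates, then keep those whose global support
    reaches min_frac of all transactions."""
    candidates = set()
    for db in sites:
        counts = {}
        for txn in db:
            for iset in combinations(sorted(txn), k):
                counts[iset] = counts.get(iset, 0) + 1
        candidates |= {i for i, c in counts.items() if c >= min_frac * len(db)}
    total_txns = sum(len(db) for db in sites)
    result = {}
    for iset in candidates:
        support = sum(1 for db in sites for txn in db if set(iset) <= txn)
        if support >= min_frac * total_txns:
            result[iset] = support
    return result

# Two toy sites holding horizontal partitions of the transactions.
sites = [[{'a', 'b'}, {'a', 'b'}, {'a', 'c'}],
         [{'a', 'b'}, {'b', 'c'}]]
freq = fdm_frequent(sites, min_frac=0.5)   # {('a', 'b'): 3}
```

In the secure protocol, the candidate union and the per-itemset support checks are exactly the steps replaced by multi-party computation.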
1106.5122
Clustering with Prototype Extraction for Census Data Analysis
cs.DB cs.SI physics.soc-ph
Not long ago, primary census data became available to the public. This opened qualitatively new perspectives not only for researchers in demography and sociology, but also for anyone concerned with processes occurring in society. In this paper the authors propose using data mining methods to search for hidden patterns in census data. A novel clustering-based technique is also described. It allows determining factors that influence people's behaviour, in particular the decision-making process (for example, the decision whether or not to have a baby). The proposed technique is based on clustering a set of respondents for whom a certain event has already happened (for instance, a baby was born), and discovering the clusters' prototypes in a set of respondents for whom this event has not yet occurred. By analyzing the characteristics of the clusters and their prototypes, it is possible to identify which factors influence the decision-making process. The authors also provide an experimental example of the described approach.
1106.5130
Some Properties of R\'{e}nyi Entropy over Countably Infinite Alphabets
cs.IT math.IT
In this paper we study certain properties of R\'{e}nyi entropy functionals $H_\alpha(\mathcal{P})$ on the space of probability distributions over $\mathbb{Z}_+$. Primarily, continuity and convergence issues are addressed. Some properties shown parallel those known in the finite alphabet case, while others illustrate a quite different behaviour of R\'enyi entropy in the infinite case. In particular, it is shown that, for any distribution $\mathcal P$ and any $r\in[0,\infty]$, there exists a sequence of distributions $\mathcal{P}_n$ converging to $\mathcal{P}$ with respect to the total variation distance, such that $\lim_{n\to\infty}\lim_{\alpha\to{1+}} H_\alpha(\mathcal{P}_n) = \lim_{\alpha\to{1+}}\lim_{n\to\infty} H_\alpha(\mathcal{P}_n) + r$.
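The functional under study has a short closed form, which is worth writing out: for $\alpha \neq 1$, $H_\alpha(\mathcal{P}) = \frac{1}{1-\alpha}\log\sum_i p_i^\alpha$, with Shannon entropy as the $\alpha \to 1$ limit. A direct implementation:

```python
from math import log

def renyi_entropy(p, alpha):
    """H_alpha(P) = log(sum_i p_i^alpha) / (1 - alpha), with Shannon
    entropy taken at alpha = 1; zero-probability terms are dropped,
    as usual."""
    p = [x for x in p if x > 0]
    if alpha == 1:
        return -sum(x * log(x) for x in p)
    return log(sum(x ** alpha for x in p)) / (1 - alpha)

# On a uniform distribution H_alpha equals log(n) for every alpha.
h = renyi_entropy([0.25] * 4, 2.0)   # collision entropy, log(4)
```

On non-uniform distributions $H_\alpha$ is non-increasing in $\alpha$, which the paper's order-of-limits result ($\alpha \to 1^+$ versus $n \to \infty$) probes in the infinite-alphabet setting.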
1106.5150
All scale-free networks are sparse
physics.soc-ph cond-mat.stat-mech cs.SI
We study the realizability of scale-free networks with a given degree sequence, showing that the fraction of realizable sequences undergoes two first-order transitions at the values 0 and 2 of the power-law exponent. We substantiate this finding by analytical reasoning and by a numerical method, proposed here, based on extreme value arguments, which can be applied to any given degree distribution. Our results reveal a fundamental reason why large scale-free networks without constraints on the minimum and maximum degree must be sparse.
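Realizability of a degree sequence by a simple graph is decided by the classical Erd\H{o}s-Gallai criterion (the paper's numerical method is based on extreme value arguments instead; this is only the standard test):

```python
def is_graphical(degrees):
    """Erdos-Gallai test: a degree sequence is realizable as a simple
    graph iff its sum is even and, with d sorted non-increasingly,
    sum_{i<=k} d_i <= k(k-1) + sum_{i>k} min(d_i, k) for every k."""
    d = sorted(degrees, reverse=True)
    if sum(d) % 2:
        return False
    for k in range(1, len(d) + 1):
        lhs = sum(d[:k])
        rhs = k * (k - 1) + sum(min(x, k) for x in d[k:])
        if lhs > rhs:
            return False
    return True

ok = is_graphical([3, 3, 3, 3])   # K4's degree sequence: realizable
```

Applying such a test to sequences sampled from a power law, and tracking the fraction that pass as the exponent varies, is the kind of experiment the abstract describes.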
1106.5156
Morphological Reconstruction for Word Level Script Identification
cs.CV
A line of a bilingual document page may contain text words in a regional language and numerals in English. For Optical Character Recognition (OCR) of such a document page, it is necessary to identify the different script forms before running an individual OCR system. In this paper, we use morphological opening by reconstruction of the image in different directions, together with regional descriptors, for script identification at the word level, based on the observation that every script has a distinct visual appearance. The proposed system is developed for bilingual documents in three major Indian scripts, Kannada, Telugu and Devnagari, containing English numerals. The nearest neighbour and k-nearest neighbour algorithms are applied to classify new word images. The proposed algorithm is tested on 2625 words with various font styles and sizes. The results obtained are quite encouraging.
1106.5174
A Game Theoretical Approach to Broadcast Information Diffusion in Social Networks
cs.SI cs.GT physics.soc-ph
One major function of social networks (e.g., massive online social networks) is the dissemination of information, such as scientific knowledge, news, and rumors. Information can be propagated by the users of the network via natural connections in written, oral or electronic form. The information passing from a sender to receivers and back (in the form of comments) involves all of the actors considering their knowledge, trust, and popularity, which shape their publishing and commenting strategies. To understand such human aspects of the information dissemination, we propose a game theoretical model of a one-way information forwarding and feedback mechanism in a star-shaped social network that takes into account the personalities of the communicating actors.
1106.5177
Coherence-Pattern Guided Compressive Sensing with Unresolved Grids
cs.IT math.IT math.NA
Highly coherent sensing matrices arise in discretization of continuum imaging problems such as radar and medical imaging when the grid spacing is below the Rayleigh threshold. Algorithms based on techniques of band exclusion (BE) and local optimization (LO) are proposed to deal with such coherent sensing matrices. These techniques are embedded in the existing compressed sensing algorithms such as Orthogonal Matching Pursuit (OMP), Subspace Pursuit (SP), Iterative Hard Thresholding (IHT), Basis Pursuit (BP) and Lasso, and result in the modified algorithms BLOOMP, BLOSP, BLOIHT, BP-BLOT and Lasso-BLOT, respectively. Under appropriate conditions, it is proved that BLOOMP can reconstruct sparse, widely separated objects up to one Rayleigh length in the Bottleneck distance {\em independent} of the grid spacing. One of the most distinguishing attributes of BLOOMP is its capability of dealing with large dynamic ranges. The BLO-based algorithms are systematically tested with respect to four performance metrics: dynamic range, noise stability, sparsity and resolution. With respect to dynamic range and noise stability, BLOOMP is the best performer. With respect to sparsity, BLOOMP is the best performer for high dynamic range, while for dynamic range near unity BP-BLOT and Lasso-BLOT with the optimized regularization parameter have the best performance. In the noiseless case, BP-BLOT has the highest resolving power up to a certain dynamic range. The algorithms BLOSP and BLOIHT are good alternatives to BLOOMP and BP/Lasso-BLOT: they are faster than both BLOOMP and BP/Lasso-BLOT and share, to a lesser degree, BLOOMP's amazing attribute with respect to dynamic range. Detailed comparisons with existing algorithms such as Spectral Iterative Hard Thresholding (SIHT) and the frame-adapted BP are given.
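The band exclusion idea is simple to sketch on top of plain OMP: at each step, candidate columns within a band of everything already selected are ruled out, so a highly coherent dictionary cannot yield near-duplicate picks. This shows only the BE half of BLOOMP (the LO refinement is omitted) on a made-up Gaussian-bump dictionary.

```python
import numpy as np

def omp_band_exclusion(A, y, sparsity, band=1):
    """OMP with band exclusion: the best-matching column is chosen
    only among indices farther than `band` from the current support."""
    residual = y.astype(float)
    support, x = [], None
    for _ in range(sparsity):
        corr = np.abs(A.T @ residual)
        for s in support:                      # band exclusion step
            corr[max(0, s - band):s + band + 1] = 0.0
        support.append(int(np.argmax(corr)))
        x, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x
    return support, x

# Coherent dictionary: Gaussian bumps centered on each grid point.
t = np.arange(8.0)
A = np.exp(-0.5 * (t[:, None] - t[None, :]) ** 2)
y = A[:, 2] + A[:, 6]                          # two well-separated objects
support, coef = omp_band_exclusion(A, y, sparsity=2, band=1)
```

Without the exclusion step, plain OMP on such a dictionary is free to spend its second pick on a neighbour of the first, which is precisely the failure mode BE prevents.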
1106.5186
Learning Shape and Texture Characteristics of CT Tree-in-Bud Opacities for CAD Systems
cs.CV
Although radiologists can employ CAD systems to characterize malignancies, pulmonary fibrosis and other chronic diseases, the design of imaging techniques to quantify infectious diseases continues to lag behind. There is a need for more CAD systems capable of detecting and quantifying the characteristic patterns often seen in respiratory tract infections such as influenza, bacterial pneumonia, or tuberculosis. One such pattern is Tree-in-bud (TIB), which presents \textit{thickened} bronchial structures surrounded by clusters of \textit{micro-nodules}. Automatic detection of TIB patterns is a challenging task because of their weak boundaries, noisy appearance, and small lesion size. In this paper, we present two novel methods for automatically detecting TIB patterns: (1) fast localization of candidate patterns using information from the local scale of the images, and (2) a M\"{o}bius-invariant feature extraction method based on learned local shape and texture properties. A comparative evaluation of the proposed methods is presented on a dataset of 39 laboratory-confirmed viral bronchiolitis human parainfluenza (HPIV) CTs and 21 normal lung CTs. Experimental results demonstrate that the proposed CAD system can achieve a high detection rate, with an overall accuracy of 90.96%.
1106.5213
Personalised Travel Recommendation based on Location Co-occurrence
cs.IR
We propose a new task of recommending touristic locations based on a user's visiting history in a geographically remote region. This can be used to plan a touristic visit to a new city or country, or by travel agencies to provide personalised travel deals. A set of geotags is used to compute a location similarity model between two different regions. The similarity between two landmarks is derived from the number of users that have visited both places, using a Gaussian density estimation of the co-occurrence space of location visits to cluster related geotags. The standard deviation of the kernel can be used as a scale parameter that determines the size of the recommended landmarks. A personalised recommendation based on the location similarity model is evaluated on city and country scale and is able to outperform a location ranking based on popularity. Especially when a tourist filter based on visit duration is enforced, the prediction can be accurately adapted to the preference of the user. An extensive evaluation based on manual annotations shows that more strict ranking methods like cosine similarity and a proposed RankDiff algorithm provide more serendipitous recommendations and are able to link similar locations on opposite sides of the world.
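The core signal here, similarity between landmarks derived from the number of users who visited both, can be sketched directly from visit sets. The Gaussian density estimation and RankDiff refinements of the paper are omitted, and all user ids and landmarks below are made up.

```python
from collections import Counter
from itertools import combinations

def cooccurrence(visits):
    """For every pair of landmarks, count how many users visited both.
    `visits` maps a user id to the set of landmarks they visited."""
    counts = Counter()
    for places in visits.values():
        counts.update(combinations(sorted(places), 2))
    return counts

def recommend(history, counts, top=3):
    """Rank unseen landmarks by their total co-visit count with the
    user's history: a bare-bones flavour of co-occurrence-based
    recommendation, free of raw popularity bias."""
    scores = Counter()
    for (a, b), c in counts.items():
        if a in history and b not in history:
            scores[b] += c
        elif b in history and a not in history:
            scores[a] += c
    return [place for place, _ in scores.most_common(top)]

# Toy visiting histories.
visits = {'u1': {'eiffel', 'louvre'},
          'u2': {'eiffel', 'louvre', 'orsay'},
          'u3': {'louvre', 'orsay'},
          'u4': {'eiffel', 'bigben'}}
counts = cooccurrence(visits)
```

A user whose history is `{'eiffel'}` is then steered toward the landmark most co-visited with it, rather than toward whatever is globally most popular.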
1106.5236
A General Framework for Structured Sparsity via Proximal Optimization
cs.LG stat.ML
We study a generalized framework for structured sparsity. It extends the well-known Lasso and Group Lasso methods by incorporating additional constraints on the variables as part of a convex optimization problem. This framework provides a straightforward way of favouring prescribed sparsity patterns, such as orderings, contiguous regions and overlapping groups, among others. Existing optimization methods are limited to specific constraint sets and tend not to scale well with sample size and dimensionality. We propose a novel first-order proximal method, which builds upon results on fixed points and successive approximations. The algorithm can be applied to a general class of conic and norm constraint sets and relies on a proximity operator subproblem which can be computed explicitly. Experiments on different regression problems demonstrate the efficiency of the optimization algorithm and its scalability with the size of the problem. They also demonstrate state-of-the-art statistical performance, which improves over Lasso and StructOMP.
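The proximity-operator machinery such methods rely on is easiest to see in the plain Lasso special case, where the proximity operator of the $\ell_1$ penalty is explicit soft-thresholding and the proximal gradient iteration is ISTA. This is only the basic mechanism, not the paper's more general algorithm.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximity operator of t * ||.||_1 (explicit soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, step=None, iters=500):
    """Proximal gradient (ISTA) for min_x 0.5||Ax - y||^2 + lam||x||_1:
    a gradient step on the smooth part, then the penalty's proximity
    operator."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)
        x = soft_threshold(x - step * grad, step * lam)
    return x

# With A = I the Lasso solution is soft-thresholding of y itself.
x = ista(np.eye(3), np.array([3.0, 0.5, -2.0]), lam=1.0)
```

The framework in the abstract replaces the $\ell_1$ proximity operator with that of more general conic and norm constraint sets, keeping the same two-step iteration.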