id (string, 9–16 chars) · title (string, 4–278 chars) · categories (string, 5–104 chars) · abstract (string, 6–4.09k chars)
1103.4086
Lattice Codes for the Wiretap Gaussian Channel: Construction and Analysis
cs.IT math.IT
We consider the Gaussian wiretap channel, where two legitimate players Alice and Bob communicate over an additive white Gaussian noise (AWGN) channel, while Eve is eavesdropping, also through an AWGN channel. We propose a coding strategy based on lattice coset encoding. We analyze Eve's probability of decoding, from which we define the secrecy gain as a design criterion for wiretap lattice codes, expressed in terms of the lattice theta series, which characterizes Eve's confusion as a function of the channel parameters. The secrecy gain is studied for even unimodular lattices, and an asymptotic analysis shows that it grows exponentially in the dimension of the lattice. Examples of wiretap lattice codes are given. Interestingly, minimizing Eve's probability of error involves the same optimization of the theta series as the flatness factor, another recently introduced design criterion characterizing lattice codes that achieve strong secrecy.
1103.4090
A Linear Classifier Based on Entity Recognition Tools and a Statistical Approach to Method Extraction in the Protein-Protein Interaction Literature
q-bio.QM cs.CL cs.IR cs.LG
We participated in the Article Classification and Interaction Method subtasks (ACT and IMT, respectively) of the Protein-Protein Interaction task of the BioCreative III Challenge. For the ACT, we pursued an extensive testing of available Named Entity Recognition and dictionary tools, and used the most promising ones to extend our Variable Trigonometric Threshold linear classifier. For the IMT, we experimented with a primarily statistical approach, as opposed to employing a deeper natural language processing strategy. Finally, we also studied the benefits of integrating the method extraction approach that we have used for the IMT into the ACT pipeline. For the ACT, our linear article classifier leads to a ranking and classification performance significantly higher than all the reported submissions. For the IMT, our results are comparable to those of other systems, which took very different approaches. For the ACT, we show that the use of named entity recognition tools leads to a substantial improvement in the ranking and classification of articles relevant to protein-protein interaction. Thus, we show that our substantially expanded linear classifier is a very competitive classifier in this domain. Moreover, this classifier produces interpretable surfaces that can be understood as "rules" for human understanding of the classification. In terms of the IMT task, in contrast to other participants, our approach focused on identifying sentences that are likely to bear evidence for the application of a PPI detection method, rather than on classifying a document as relevant to a method. As BioCreative III did not perform an evaluation of the evidence provided by the system, we have conducted a separate assessment; the evaluators agree that our tool is indeed effective in detecting relevant evidence for PPI detection methods.
1103.4168
Caching in Multidimensional Databases
cs.DB
One utilisation of multidimensional databases is the field of On-line Analytical Processing (OLAP). The applications in this area are designed to make the analysis of shared multidimensional information fast [9]. On one hand, speed can be achieved by specially devised data structures and algorithms. On the other hand, the analytical process is cyclic. In other words, the user of the OLAP application runs his or her queries one after the other. The output of the latest query may already be contained (at least partly) in one of the previous results. Therefore caching also plays an important role in the operation of these systems. However, caching itself may not be enough to ensure acceptable performance. Size does matter: the more memory is available, the more we gain by loading and keeping information in there. Oftentimes, the cache size is fixed. This, too, limits the performance of the multidimensional database, unless we compress the data in order to move a greater proportion of them into the memory. Caching combined with proper compression methods promises further performance improvements. In this paper, we investigate how caching influences the speed of OLAP systems. Different physical representations (multidimensional and table) are evaluated. For the thorough comparison, models are proposed. We draw conclusions based on these models, and the conclusions are verified with empirical data.
1103.4169
Difference-Huffman Coding of Multidimensional Databases
cs.DB
A new compression method called difference-Huffman coding (DHC) is introduced in this paper. It is verified empirically that DHC results in a smaller multidimensional physical representation than those for other previously published techniques (single count header compression, logical position compression, base-offset compression and difference sequence compression). The article examines how caching influences the expected retrieval time of the multidimensional and table representations of relations. A model is proposed for this, which is then verified with empirical data. Conclusions are drawn, based on the model and the experiment, about when one physical representation outperforms another in terms of retrieval time. Over the tested range of available memory, the performance for the multidimensional representation was always much quicker than for the table representation.
1103.4177
On the Capacity of the Noncausal Relay Channel
cs.IT math.IT
This paper studies the noncausal relay channel, also known as the relay channel with unlimited lookahead, introduced by El Gamal, Hassanpour, and Mammen. Unlike the standard relay channel model, where the relay encodes its signal based on the previous received output symbols, the relay in the noncausal relay channel encodes its signal as a function of the entire received sequence. In the existing coding schemes, the relay uses this noncausal information solely to recover the transmitted message and then cooperates with the sender to communicate this message to the receiver. However, it is shown in this paper that by applying the Gelfand--Pinsker coding scheme, the relay can take further advantage of the noncausally available information, which can achieve strictly higher rates than existing coding schemes. This paper also provides a new upper bound on the capacity of the noncausal relay channel that strictly improves upon the cutset bound. These new lower and upper bounds on the capacity coincide for the class of degraded noncausal relay channels and establish the capacity for this class.
1103.4198
Continuous-time performance limitations for overshoot and resulted tracking measures
math.OC cs.SY
A dual formulation for the problem of determining absolute performance limitations on overshoot, undershoot, maximum amplitude and fluctuation minimization for continuous-time feedback systems is constructed. Determining, for example, the minimum possible overshoot attainable by all possible stabilizing controllers is an optimization task that cannot be expressed as a minimum-norm problem. It is this fact, coupled with the continuous-time rather than discrete-time formulation, that makes these problems challenging. We extend previous results to include more general reference functions, and derive new results (in continuous time) on the influence of pole/zero locations on achievable time-domain performance.
1103.4204
Parallel Online Learning
cs.LG
In this work we study parallelization of online learning, a core primitive in machine learning. In a parallel environment all known approaches for parallel online learning lead to delayed updates, where the model is updated using out-of-date information. In the worst case, or when examples are temporally correlated, delay can have a very adverse effect on the learning algorithm. Here, we analyze and present preliminary empirical results on a set of learning architectures based on a feature sharding approach that present various tradeoffs between delay, degree of parallelism, representation power and empirical performance.
1103.4223
A Stochastic-Geometry Approach to Coverage in Cellular Networks with Multi-Cell Cooperation
cs.IT math.IT
Multi-cell cooperation is a promising approach for mitigating inter-cell interference in dense cellular networks. Quantifying the performance of multi-cell cooperation is challenging as it integrates physical-layer techniques and network topologies. For tractability, existing work typically relies on the over-simplified Wyner-type models. In this paper, we propose a new stochastic-geometry model for a cellular network with multi-cell cooperation, which accounts for practical factors including the irregular locations of base stations (BSs) and the resultant path-losses. In particular, the proposed network-topology model has three key features: i) the cells are modeled using a Poisson random tessellation generated by Poisson distributed BSs, ii) multi-antenna BSs are clustered using a hexagonal lattice and BSs in the same cluster mitigate mutual interference by spatial interference avoidance, iii) BSs near cluster edges access a different sub-channel from that used by other BSs, shielding cluster-edge mobiles from strong interference. Using this model and assuming sparse scattering, we analyze the shapes of the outage probabilities of mobiles served by cluster-interior BSs as the average number $K$ of BSs per cluster increases. The outage probability of a mobile near a cluster center is shown to be proportional to $e^{-c(2-\sqrt{\nu})^2K}$ where $\nu$ is the fraction of BSs lying in the interior of clusters and $c$ is a constant. Moreover, the outage probability of a typical mobile is proved to scale proportionally with $e^{-c' (1-\sqrt{\nu})^2K}$ where $c'$ is a constant.
1103.4282
Stratified B-trees and versioning dictionaries
cs.DS cs.DB
A classic versioned data structure in storage and computer science is the copy-on-write (CoW) B-tree -- it underlies many of today's file systems and databases, including WAFL, ZFS, Btrfs and more. Unfortunately, it doesn't inherit the B-tree's optimality properties; it has poor space utilization, cannot offer fast updates, and relies on random IO to scale. Yet, nothing better has been developed since. We describe the `stratified B-tree', which beats all known semi-external memory versioned B-trees, including the CoW B-tree. In particular, it is the first versioned dictionary to achieve optimal tradeoffs between space, query and update performance.
1103.4286
Design and frequency analysis of continuous finite-time-convergent differentiator
cs.SY math.DS math.OC
In this paper, a continuous finite-time-convergent differentiator is presented based on a strong Lyapunov function. The continuous differentiator reduces the chattering phenomenon more effectively than the standard sliding-mode differentiator, and the outputs of both signal tracking and derivative estimation are smooth. Frequency analysis is applied to compare the continuous differentiator with the sliding-mode differentiator. The merits of the continuous finite-time-convergent differentiator include its simplicity, effective noise suppression, and avoidance of the chattering phenomenon.
1103.4311
Design and analysis of continuous hybrid differentiator
cs.SY math.DS math.OC
In this paper, a continuous hybrid differentiator is presented based on a strong Lyapunov function. The design not only reduces the chattering phenomenon in derivative estimation by introducing a perturbation parameter, but also improves the dynamical performance by adding linear correction terms to the nonlinear ones. Moreover, strong robustness is obtained by integrating sliding-mode terms with a linear filter. Frequency analysis is applied to compare the continuous hybrid differentiator with the sliding-mode differentiator. The merits of the continuous hybrid differentiator include its excellent dynamical performance, effective noise suppression, and avoidance of the chattering phenomenon.
1103.4335
Diviseurs de la forme 2D-G sans sections et rang de la multiplication dans les corps finis (Divisors of the form 2D-G without sections and bilinear complexity of multiplication in finite fields)
math.AG cs.CC cs.IT math.IT math.NT
Let X be an algebraic curve, defined over a perfect field, and G a divisor on X. If X has sufficiently many points, we show how to construct a divisor D on X such that l(2D-G)=0, of essentially any degree such that this is compatible with the Riemann-Roch theorem. We also generalize this construction to the case of a finite number of constraints, l(k_i.D-G_i)=0, where |k_i|\leq 2. Such a result was previously claimed by Shparlinski-Tsfasman-Vladut, in relation to the Chudnovsky-Chudnovsky method for estimating the bilinear complexity of multiplication in finite fields based on interpolation on curves; unfortunately, as noted by Cascudo et al., their proof was flawed. So our work fixes the proof of Shparlinski-Tsfasman-Vladut and shows that their estimate m_q\leq 2(1+1/(A(q)-1)) holds, at least when A(q)\geq 5. We also fix a statement of Ballet that suffers from the same problem, and then we point out a few other possible applications.
1103.4339
Optimal allocation patterns and optimal seed mass of a perennial plant
q-bio.PE cs.SY math.OC
We present a novel optimal allocation model for perennial plants, in which assimilates are not allocated directly to vegetative or reproductive parts but instead go first to a storage compartment from where they are then optimally redistributed. We do not restrict considerations purely to periods favourable for photosynthesis, as it was done in published models of perennial species, but analyse the whole life period of a perennial plant. As a result, we obtain the general scheme of perennial plant development, for which annual and monocarpic strategies are special cases. We not only re-derive predictions from several previous optimal allocation models, but also obtain more information about plants' strategies during transitions between favourable and unfavourable seasons. One of the model's predictions is that a plant can begin to re-establish vegetative tissues from storage, some time before the beginning of favourable conditions, which in turn allows for better production potential when conditions become better. By means of numerical examples we show that annual plants with single or multiple reproduction periods, monocarps, evergreen perennials and polycarpic perennials can be studied successfully with the help of our unified model. Finally, we build a bridge between optimal allocation models and models describing trade-offs between size and the number of seeds: a modelled plant can control the distribution of not only allocated carbohydrates but also seed size. We provide sufficient conditions for the optimality of producing the smallest and largest seeds possible.
1103.4340
Fault Tolerant Stabilizability of Multi-Hop Control Networks
math.OC cs.SY
A Multi-hop Control Network (MCN) consists of a plant where the communication between sensor, actuator and computational unit is supported by a wireless multi-hop communication network, and data flow is performed using scheduling and routing of sensing and actuation data. We address the problem of characterizing controllability and observability of a MCN, by means of necessary and sufficient conditions on the plant dynamics and on the communication scheduling and routing. We provide a methodology to design scheduling and routing, in order to satisfy controllability and observability of a MCN for any fault occurrence in a given set of configurations of failures.
1103.4342
MDP Optimal Control under Temporal Logic Constraints
cs.RO cs.SY math.OC
In this paper, we develop a method to automatically generate a control policy for a dynamical system modeled as a Markov Decision Process (MDP). The control specification is given as a Linear Temporal Logic (LTL) formula over a set of propositions defined on the states of the MDP. We synthesize a control policy such that the MDP satisfies the given specification almost surely, if such a policy exists. In addition, we designate an "optimizing proposition" to be repeatedly satisfied, and we formulate a novel optimization criterion in terms of minimizing the expected cost in between satisfactions of this proposition. We propose a sufficient condition for a policy to be optimal, and develop a dynamic programming algorithm that synthesizes a policy that is optimal under some conditions, and sub-optimal otherwise. This problem is motivated by robotic applications requiring persistent tasks, such as environmental monitoring or data gathering, to be performed.
1103.4358
Selfishness, fraternity, and other-regarding preference in spatial evolutionary games
physics.soc-ph cs.SI q-bio.PE
Spatial evolutionary games are studied with myopic players whose payoff interest, as a personal character, is tuned from selfishness to other-regarding preference via fraternity. The players are located on a square lattice and collect income from symmetric two-person two-strategy (called cooperation and defection) games with their nearest neighbors. During the elementary steps of evolution a randomly chosen player modifies her strategy in order to maximize stochastically her utility function composed from her own and the co-players' income with weight factors $1-Q$ and Q. These models are studied within a wide range of payoff parameters using Monte Carlo simulations for noisy strategy updates and by spatial stability analysis in the low noise limit. For fraternal players ($Q=1/2$) the system evolves into ordered arrangements of strategies in the low noise limit in a way providing optimum payoff for the whole society. Dominance of defectors, representing the "tragedy of the commons", is found within the regions of prisoner's dilemma and stag hunt game for selfish players (Q=0). Due to the symmetry in the effective utility function the system exhibits similar behavior even for Q=1 that can be interpreted as the "lovers' dilemma".
1103.4395
On Non-Bayesian Social Learning
cs.SI physics.soc-ph
We study a model of information aggregation and social learning recently proposed by Jadbabaie, Sandroni, and Tahbaz-Salehi, in which individual agents try to learn a correct state of the world by iteratively updating their beliefs using private observations and beliefs of their neighbors. No individual agent's private signal might be informative enough to reveal the unknown state. As a result, agents share their beliefs with others in their social neighborhood to learn from each other. At every time step each agent receives a private signal, and computes a Bayesian posterior as an intermediate belief. The intermediate belief is then averaged with the belief of neighbors to form the individual's belief at next time step. We find a set of minimal sufficient conditions under which the agents will learn the unknown state and reach consensus on their beliefs without any assumption on the private signal structure. The key enabler is a result that shows that using this update, agents will eventually forecast the indefinite future correctly.
1103.4401
On the gradual deployment of random pairwise key distribution schemes (Extended Version)
cs.CR cs.DM cs.IT math.IT
In the context of wireless sensor networks, the pairwise key distribution scheme of Chan et al. has several advantages over other key distribution schemes including the original scheme of Eschenauer and Gligor. However, this offline pairwise key distribution mechanism requires that the network size be set in advance, and involves all sensor nodes simultaneously. Here, we address this issue by describing an implementation of the pairwise scheme that supports the gradual deployment of sensor nodes in several consecutive phases. We discuss the key ring size needed to maintain the secure connectivity throughout all the deployment phases. In particular we show that the number of keys at each sensor node can be taken to be $O(\log n)$ in order to achieve secure connectivity (with high probability).
1103.4406
Interference Alignment with Partially Coordinated Transmit Precoding
cs.IT math.IT
In this paper, we introduce an efficient interference alignment (IA) algorithm exploiting partially coordinated transmit precoding to improve the number of concurrent interference-free transmissions, i.e., the multiplexing gain, in multicell downlink. The proposed coordination model is such that each base-station simultaneously transmits to two users and each user is served by two base-stations. First, we show that in a K-user system operating at the information-theoretic upper bound on degrees of freedom (DOF), the generic IA is proper when $K \leq 3$, whereas the proposed partially coordinated IA is proper when $K \leq 5$. Then, we derive a non-iterative, i.e., one-shot, IA algorithm for the proposed scheme when $K \leq 5$. We show that for a given latency, the backhaul data rate requirement of the proposed method grows linearly with K. Monte-Carlo simulation results show that the proposed one-shot algorithm offers higher system throughput than the iterative IA at practical SNR levels.
1103.4410
Distributed Inference and Query Processing for RFID Tracking and Monitoring
cs.DB
In this paper, we present the design of a scalable, distributed stream processing system for RFID tracking and monitoring. Since RFID data lacks containment and location information that is key to query processing, we propose to combine location and containment inference with stream query processing in a single architecture, with inference as an enabling mechanism for high-level query processing. We further consider challenges in instantiating such a system in large distributed settings and design techniques for distributed inference and query processing. Our experimental results, using both real-world data and large synthetic traces, demonstrate the accuracy, efficiency, and scalability of our proposed techniques.
1103.4435
Information Theoretic Bounds for Tensor Rank Minimization over Finite Fields
cs.IT math.IT
We consider the problem of noiseless and noisy low-rank tensor completion from a set of random linear measurements. In our derivations, we assume that the entries of the tensor belong to a finite field of arbitrary size and that reconstruction is based on a rank minimization framework. The derived results show that the smallest number of measurements needed for exact reconstruction is upper bounded by the product of the rank, the order and the dimension of a cubic tensor. Furthermore, this condition is also sufficient for unique minimization. Similar bounds hold for the noisy rank minimization scenario, except for a scaling function that depends on the channel error probability.
1103.4438
Anytime Reliable Codes for Stabilizing Plants over Erasure Channels
cs.SY cs.IT math.IT math.OC
The problem of stabilizing an unstable plant over a noisy communication link is an increasingly important one that arises in problems of distributed control and networked control systems. Although the work of Schulman and Sahai over the past two decades, and their development of the notions of "tree codes" and "anytime capacity", provides the theoretical framework for studying such problems, there has been scant practical progress in this area because explicit constructions of tree codes with efficient encoding and decoding did not exist. To stabilize an unstable plant driven by bounded noise over a noisy channel one needs real-time encoding and real-time decoding and a reliability which increases exponentially with delay, which is what tree codes guarantee. We prove the existence of linear tree codes with high probability and, for erasure channels, give an explicit construction with an expected encoding and decoding complexity that is constant per time instant. We give sufficient conditions on the rate and reliability required of the tree codes to stabilize vector plants and argue that they are asymptotically tight. This work takes a major step towards controlling plants over noisy channels, and we demonstrate the efficacy of the method through several examples.
1103.4454
Regularity Results for Eikonal-Type Equations with Nonsmooth Coefficients
math.OC cs.SY math.AP
Solutions of the Hamilton-Jacobi equation $H(x,-Du(x))=1$, with $H(\cdot,p)$ H\"older continuous and $H(x,\cdot)$ convex and positively homogeneous of degree 1, are shown to be locally semiconcave with a power-like modulus. An essential step of the proof is the ${\mathcal C}^{1,\alpha}$-regularity of the extremal trajectories associated with the multifunction generated by $D_pH$.
1103.4480
Clustered regression with unknown clusters
cs.LG stat.ML
We consider a collection of prediction experiments, which are clustered in the sense that groups of experiments exhibit similar relationships between the predictor and response variables. The experiment clusters as well as the regression relationships are unknown. The regression relationships define the experiment clusters, and in general, the predictor and response variables may not exhibit any clustering. We call this prediction problem clustered regression with unknown clusters (CRUC) and in this paper we focus on linear regression. We study and compare several methods for CRUC, demonstrate their applicability to the Yahoo Learning-to-rank Challenge (YLRC) dataset, and investigate an associated mathematical model. CRUC is at the crossroads of many prior works and we study several prediction algorithms with diverse origins: an adaptation of the expectation-maximization algorithm, an approach inspired by K-means clustering, the singular value thresholding approach to matrix rank minimization under quadratic constraints, an adaptation of the Curds and Whey method in multiple regression, and a local regression (LoR) scheme reminiscent of neighborhood methods in collaborative filtering. Based on empirical evaluation on the YLRC dataset as well as simulated data, we identify the LoR method as a good practical choice: it yields best or near-best prediction performance at a reasonable computational load, and it is less sensitive to the choice of the algorithm parameter. We also provide some analysis of the LoR method for an associated mathematical model, which sheds light on optimal parameter choice and prediction performance.
1103.4487
Handwritten Digit Recognition with a Committee of Deep Neural Nets on GPUs
cs.LG cs.AI cs.CV cs.NE
The competitive MNIST handwritten digit recognition benchmark has a long history of broken records since 1998. The most recent substantial improvement by others dates back 7 years (error rate 0.4%). Recently we were able to significantly improve this result, using graphics cards to greatly speed up training of simple but deep MLPs, which achieved 0.35%, outperforming all the previous more complex methods. Here we report another substantial improvement: 0.31% obtained using a committee of MLPs.
1103.4525
Robust Lattice Alignment for K-user MIMO Interference Channels with Imperfect Channel Knowledge
cs.IT math.IT
In this paper, we consider a robust lattice alignment design for K-user quasi-static MIMO interference channels with imperfect channel knowledge. With random Gaussian inputs, the conventional interference alignment (IA) method has the feasibility problem when the channel is quasi-static. On the other hand, structured lattices can create structured interference as opposed to the random interference caused by random Gaussian symbols. The structured interference space can be exploited to transmit the desired signals over the gaps. However, the existing alignment methods on the lattice codes for quasi-static channels either require infinite SNR or symmetric interference channel coefficients. Furthermore, perfect channel state information (CSI) is required for these alignment methods, which is difficult to achieve in practice. In this paper, we propose a robust lattice alignment method for quasi-static MIMO interference channels with imperfect CSI at all SNR regimes, and a two-stage decoding algorithm to decode the desired signal from the structured interference space. We derive the achievable data rate based on the proposed robust lattice alignment method, where the design of the precoders, decorrelators, scaling coefficients and interference quantization coefficients is jointly formulated as a mixed integer and continuous optimization problem. The effect of imperfect CSI is also accommodated in the optimization formulation, and hence the derived solution is robust to imperfect CSI. We also design a low-complexity iterative optimization algorithm for our robust lattice alignment method by using the existing iterative IA algorithm that was designed for the conventional IA method. Numerical results verify the advantages of the proposed robust lattice alignment method.
1103.4547
Canonical Dual Method for Resource Allocation and Adaptive Modulation in Uplink SC-FDMA Systems
cs.IT math.IT
In this paper, we study resource allocation and adaptive modulation in SC-FDMA, which is adopted as the multiple access scheme for the uplink in the 3GPP-LTE standard. A sum-utility maximization (SUmax) problem and a joint adaptive modulation and sum-cost minimization (JAMSCmin) problem are considered. Unlike OFDMA, in addition to the restriction of allocating a sub-channel to at most one user, the multiple sub-channels allocated to a user in SC-FDMA must be consecutive as well. This renders the resource allocation problem prohibitively difficult, and the standard optimization tools (e.g., the Lagrange dual approach widely used for OFDMA) cannot help toward its optimal solution. We propose a novel optimization framework for the solution of these problems that is inspired by the recently developed canonical duality theory. We first formulate the optimization problems as binary-integer programming problems and then transform these binary-integer programming problems into continuous-space canonical dual problems that are concave maximization problems. Based on the solution of the continuous-space dual problems, we derive resource allocation (joint with adaptive modulation for JAMSCmin) algorithms for both problems, which have polynomial complexities. We provide conditions under which the proposed algorithms are optimal. We also propose an adaptive modulation scheme for the SUmax problem. We compare the proposed algorithms with the existing algorithms in the literature to assess their performance.
1103.4550
Community Detection via Semi-Synchronous Label Propagation Algorithms
cs.SI physics.soc-ph
A recently introduced novel community detection strategy is based on a label propagation algorithm (LPA), which uses the diffusion of information in the network to identify communities. Studies of LPAs showed that the strategy is effective in finding a good community structure. The label propagation step can be performed in parallel on all nodes (synchronous model) or sequentially (asynchronous model); both models present drawbacks, e.g., algorithm termination is not guaranteed in the first case, and performance can be worse in the second. In this paper, we present a semi-synchronous version of LPA which aims to combine the advantages of both synchronous and asynchronous models. We prove that our models always converge to a stable labeling. Moreover, we experimentally investigate the effectiveness of the proposed strategy, comparing its performance with the asynchronous model in terms of quality, efficiency, and stability. Tests show that the proposed protocol does not harm the quality of the partitioning. Moreover, it is quite efficient; each propagation step is extremely parallelizable, and it is more stable than the asynchronous model, thanks to the fact that only a small amount of randomization is used by our proposal.
1103.4558
Representing First-Order Causal Theories by Logic Programs
cs.AI cs.LO
Nonmonotonic causal logic, introduced by Norman McCain and Hudson Turner, became a basis for the semantics of several expressive action languages. McCain's embedding of definite propositional causal theories into logic programming paved the way to the use of answer set solvers for answering queries about actions described in such languages. In this paper we extend this embedding to nondefinite theories and to first-order causal logic.
1103.4578
Common Signal Analysis
cs.IT math.IT
A common signal is defined for any two signals which have non-zero correlation. A mathematical method is provided to extract the best obtainable common signal between the two signals. This analysis is extended to extracting the common signal among three signals.
1103.4584
Automatic Synthesis of Switching Controllers for Linear Hybrid Automata
cs.LO cs.FL cs.SY math.OC
In this paper we study the problem of automatically generating switching controllers for the class of Linear Hybrid Automata, with respect to safety objectives. We identify and solve inaccuracies contained in previous characterizations of the problem, providing a sound and complete symbolic fixpoint procedure, based on polyhedral abstractions of the state space. We also prove the termination of each iteration of the procedure. Some promising experimental results are presented, based on an implementation of the fixpoint procedure on top of the tool PHAVer.
1103.4601
Doubly Robust Policy Evaluation and Learning
cs.LG cs.AI cs.RO stat.AP stat.ML
We study decision making in environments where the reward is only partially observed, but can be modeled as a function of an action and an observed context. This setting, known as contextual bandits, encompasses a wide variety of applications including health-care policy and Internet advertising. A central task is evaluation of a new policy given historic data consisting of contexts, actions and received rewards. The key challenge is that the past data typically does not faithfully represent proportions of actions taken by a new policy. Previous approaches rely either on models of rewards or models of the past policy. The former are plagued by a large bias whereas the latter have a large variance. In this work, we leverage the strength and overcome the weaknesses of the two approaches by applying the doubly robust technique to the problems of policy evaluation and optimization. We prove that this approach yields accurate value estimates when we have either a good (but not necessarily consistent) model of rewards or a good (but not necessarily consistent) model of past policy. Extensive empirical comparison demonstrates that the doubly robust approach uniformly improves over existing techniques, achieving both lower variance in value estimation and better policies. As such, we expect the doubly robust approach to become common practice.
1103.4659
Social Influencing and Associated Random Walk Models: Asymptotic Consensus Times on the Complete Graph
physics.soc-ph cond-mat.stat-mech cs.SI
We investigate consensus formation and the asymptotic consensus times in stylized individual- or agent-based models, in which global agreement is achieved through pairwise negotiations with or without a bias. Considering a class of individual-based models on finite complete graphs, we introduce a coarse-graining approach (lumping microscopic variables into macrostates) to analyze the ordering dynamics in an associated random-walk framework. Within this framework, yielding a linear system, we derive general equations for the expected consensus time and the expected time spent in each macro-state. Further, we present the asymptotic solutions of the 2-word naming game, and separately discuss its behavior under the influence of an external field and with the introduction of committed agents.
1103.4684
Vector Broadcast Channels: Optimality of Threshold Feedback Policies
cs.IT math.IT
Beamforming techniques utilizing only partial channel state information (CSI) have gained popularity over other communication strategies requiring perfect CSI thanks to their lower feedback requirements. The amount of feedback in beamforming-based communication systems can be further reduced through selective feedback techniques in which only the users with sufficiently good channels are allowed to feed back by means of a decentralized feedback policy. In this paper, we prove that thresholding at the receiver is the rate-wise optimal decentralized feedback policy for feedback-limited systems with prescribed feedback constraints. This result is highly adaptable due to its distribution-independent nature, provides an analytical justification for the use of threshold feedback policies in practical systems, and reinforces previous work analyzing threshold feedback policies as a selective feedback technique without proving their optimality. It is robust to selfish unilateral deviations. Finally, it reduces the search for rate-wise optimal feedback policies subject to feedback constraints from function spaces to a finite-dimensional Euclidean space.
1103.4687
Vector Broadcast Channels: Optimal Threshold Selection Problem
cs.IT math.IT
Threshold feedback policies are well known and provably rate-wise optimal selective feedback techniques for communication systems requiring partial channel state information (CSI). However, optimal selection of thresholds at mobile users to maximize information theoretic data rates subject to feedback constraints is an open problem. In this paper, we focus on the optimal threshold selection problem, and provide a solution for this problem for finite feedback systems. Rather surprisingly, we show that using the same threshold values at all mobile users is not always a rate-wise optimal feedback strategy, even for a system with identical users experiencing statistically the same channel conditions. By utilizing the theory of majorization, we identify an underlying Schur-concave structure in the rate function and obtain sufficient conditions for a homogenous threshold feedback policy to be optimal. Our results hold for most fading channel models, and we illustrate an application of our results to familiar Rayleigh fading channels.
1103.4720
Computer Modelling of 3D Geological Surface
cs.CE
Geological surveying presently uses methods and tools for the computer modeling of 3D structures of the geological subsurface and geotechnical characterization, as well as the application of geoinformation systems for the management and analysis of spatial data and their cartographic presentation. The objective of this paper is to present a 3D geological surface model of Latur district in the Maharashtra state of India. This study is undertaken through several processes, discussed in this paper, to generate and visualize the automated 3D geological surface model of the projected area.
1103.4723
Automatic Extraction of Open Space Area from High Resolution Urban Satellite Imagery
cs.CV
In the 21st century, aerial and satellite images are information rich. They are also complex to analyze. In GIS systems, many features require fast and reliable extraction of open space area from high resolution satellite imagery. In this paper we study an efficient and reliable automatic extraction algorithm to find the open space area in high resolution urban satellite imagery. This automatic extraction algorithm applies filters, segmentation and grouping to satellite images. The resulting images may be used to calculate the total available open space area and the built-up area. They may also be used to compare present and past open space areas using historical urban satellite images of the same projection.
1103.4756
Identification of Piecewise Linear Models of Complex Dynamical Systems
math.OC cs.SY
The paper addresses the realization and identification problem for a subclass of piecewise-affine hybrid systems. The paper provides necessary and sufficient conditions for the existence of a realization, a characterization of minimality, and an identification algorithm for this subclass of hybrid systems. The considered system class and the identification problem are motivated by applications in systems biology.
1103.4767
A comparison of Gap statistic definitions with and without logarithm function
stat.ME cs.CV
The Gap statistic is a standard method for determining the number of clusters in a set of data. The Gap statistic standardizes the graph of $\log(W_{k})$, where $W_{k}$ is the within-cluster dispersion, by comparing it to its expectation under an appropriate null reference distribution of the data. We suggest using $W_{k}$ instead of $\log(W_{k})$, and comparing it to the expectation of $W_{k}$ under a null reference distribution. In fact, whenever a number fulfills the original Gap statistic inequality, this number also fulfills the inequality of a Gap statistic using $W_{k}$, but not \textit{vice versa}. The two definitions of the Gap function are evaluated on several simulated data sets and on a real data set of DCE-MR images.
1103.4774
Full-Rate Full-Diversity Achieving MIMO Precoding with Partial CSIT
cs.IT math.IT
In this paper, we consider an $n_t\times n_r$ multiple-input multiple-output (MIMO) channel subjected to block fading. Reliability (in terms of achieved diversity order) and rate (in number of symbols transmitted per channel use) are of interest in such channels. We propose a new precoding scheme which achieves both full diversity ($n_tn_r$th order diversity) and full rate ($n_t$ symbols per channel use) using partial channel state information at the transmitter (CSIT), applicable in MIMO systems including $n_r<n_t$ asymmetric MIMO. The proposed scheme achieves full diversity and improved coding gain through an optimization over the choice of constellation sets. The optimization maximizes $d_{min}^2$ for our precoding scheme subject to an energy constraint. The scheme requires feedback of $n_t-1$ angle parameter values, compared to $2n_tn_r$ real coefficients in the case of full CSIT. Error rate performance results for $3\times 1$, $3\times 2$, $4\times 1$, $8\times 1$ precoded MIMO systems (with $n_t=3,3,4,8$ symbols per channel use, respectively) show that the proposed precoding achieves 3rd, 6th, 4th and 8th order diversities, respectively. These performances are shown to be better than those of other precoding schemes in the literature; the better performance is due to the choice of the signal sets and the feedback angles in the proposed scheme.
1103.4778
Formal and Computational Properties of the Confidence Boost of Association Rules
cs.DB cs.AI
Some existing notions of redundancy among association rules allow for a logical-style characterization and lead to irredundant bases of absolutely minimum size. One can push the intuition of redundancy further and find an intuitive notion of interest of an association rule, in terms of its "novelty" with respect to other rules. Namely: an irredundant rule is so because its confidence is higher than what the rest of the rules would suggest; then, one can ask: how much higher? We propose to measure such a sort of "novelty" through the confidence boost of a rule, which encompasses two previous similar notions (confidence width and rule blocking, of which the latter is closely related to the earlier measure "improvement"). Acting as a complement to confidence and support, the confidence boost helps to obtain small and crisp sets of mined association rules, and solves the well-known problem that, in certain cases, rules of negative correlation may pass the confidence bound. We analyze the properties of two versions of the notion of confidence boost, one of them a natural generalization of the other. We develop efficient algorithms to filter rules according to their confidence boost, compare the concept to some similar notions in the bibliography, and describe the results of some experimentation employing the new notions on standard benchmark datasets. We describe an open-source association mining tool that embodies one of our variants of confidence boost in such a way that the data mining process does not require the user to select any value for any parameter.
1103.4784
Latent Capacity Region: A Case Study on Symmetric Broadcast With Common Messages
cs.IT math.IT
We consider the problem of broadcast with common messages, and focus on the case where the common message rate $R_{\mathcal{A}}$, i.e., the rate of the message intended for all the receivers in the set $\mathcal{A}$, is the same for every set $\mathcal{A}$ of the same cardinality. Instead of attempting to characterize the capacity region of general broadcast channels, we only consider the structure that the capacity region of any broadcast channel should bear. The concept of the latent capacity region is useful in capturing these underlying constraints, and we provide a complete characterization of the latent capacity region for the symmetric broadcast problem. The converse proof of this tight characterization relies on a deterministic broadcast channel model. The achievability proof generalizes the familiar rate transfer argument to include more involved erasure correction coding among messages, thus revealing an inherent connection between broadcast with common messages and erasure correction codes.
1103.4787
Energy Management Policies for Energy-Neutral Source-Channel Coding
cs.IT math.IT
In cyber-physical systems where sensors measure the temporal evolution of a given phenomenon of interest and radio communication takes place over short distances, the energy spent for source acquisition and compression may be comparable with that used for transmission. Additionally, in order to avoid limited lifetime issues, sensors may be powered via energy harvesting and thus collect all the energy they need from the environment. This work addresses the problem of energy allocation over source acquisition/compression and transmission for energy-harvesting sensors. At first, focusing on a single-sensor, energy management policies are identified that guarantee a maximal average distortion while at the same time ensuring the stability of the queue connecting source and channel encoders. It is shown that the identified class of policies is optimal in the sense that it stabilizes the queue whenever this is feasible by any other technique that satisfies the same average distortion constraint. Moreover, this class of policies performs an independent resource optimization for the source and channel encoders. Analog transmission techniques as well as suboptimal strategies that do not use the energy buffer (battery) or use it only for adapting either source or channel encoder energy allocation are also studied for performance comparison. The problem of optimizing the desired trade-off between average distortion and delay is then formulated and solved via dynamic programming tools. Finally, a system with multiple sensors is considered and time-division scheduling strategies are derived that are able to maintain the stability of all data queues and to meet the average distortion constraints at all sensors whenever it is feasible.
1103.4820
Design and classification of dynamic multi-objective optimization problems
cs.NE
In this work we provide a formal model for the different time-dependent components that can appear in dynamic multi-objective optimization problems, along with a classification of these components. Four main classes are identified, corresponding to the influence of the parameters, objective functions, previous states of the dynamic system and, last, environment changes, which in turn lead to online optimization problems. For illustration purposes, examples are provided for each class identified - by no means standing as the most representative ones or exhaustive in scope.
1103.4854
When is social computation better than the sum of its parts?
cs.IT cs.AI math.IT
Social computation, whether in the form of searches performed by swarms of agents or collective predictions of markets, often supplies remarkably good solutions to complex problems. In many examples, individuals trying to solve a problem locally can aggregate their information and work together to arrive at a superior global solution. This suggests that there may be general principles of information aggregation and coordination that can transcend particular applications. Here we show that the general structure of this problem can be cast in terms of information theory and derive mathematical conditions that lead to optimal multi-agent searches. Specifically, we illustrate the problem in terms of local search algorithms for autonomous agents looking for the spatial location of a stochastic source. We explore the types of search problems, defined in terms of the statistical properties of the source and the nature of measurements at each agent, for which coordination among multiple searchers yields an advantage beyond that gained by having the same number of independent searchers. We show that effective coordination corresponds to synergy and that ineffective coordination corresponds to independence as defined using information theory. We classify explicit types of sources in terms of their potential for synergy. We show that sources that emit uncorrelated signals provide no opportunity for synergetic coordination, while sources that emit signals that are correlated in some way do allow for strong synergy between searchers. These general considerations are crucial for designing optimal algorithms for particular search problems in real-world settings.
1103.4888
Cooperative searching for stochastic targets
cs.IT cs.AI math.IT
Spatial search problems abound in the real world, from locating hidden nuclear or chemical sources to finding skiers after an avalanche. We exemplify the formalism and solution for spatial searches involving two agents that may or may not choose to share information during a search. For certain classes of tasks, sharing information between multiple searchers makes cooperative searching advantageous. In some examples, agents are able to realize synergy by aggregating information and moving based on local judgments about maximal information gathering expectations. We also explore one- and two-dimensional simplified situations analytically and numerically to provide a framework for analyzing more complex problems. These general considerations provide a guide for designing optimal algorithms for real-world search problems.
1103.4893
Robust Distributed Routing in Dynamical Flow Networks - Part II: Strong Resilience, Equilibrium Selection and Cascaded Failures
cs.SY math.CA math.DS math.OC nlin.AO
Strong resilience properties of dynamical flow networks are analyzed for distributed routing policies. The latter are characterized by the property that the way the inflow at a non-destination node gets split among its outgoing links is allowed to depend only on local information about the current particle densities on the outgoing links. The strong resilience of the network is defined as the infimum sum of link-wise flow capacity reductions under which the network cannot maintain the asymptotic total inflow to the destination node to be equal to the inflow at the origin. A class of distributed routing policies that are locally responsive to local information is shown to yield the maximum possible strong resilience under such local information constraints for an acyclic dynamical flow network with a single origin-destination pair. The maximal strong resilience achievable is shown to be equal to the minimum node residual capacity of the network. The latter depends on the limit flow of the unperturbed network and is defined as the minimum, among all the non-destination nodes, of the sum, over all the links outgoing from the node, of the differences between the maximum flow capacity and the limit flow of the unperturbed network. We propose a simple convex optimization problem to solve for equilibrium limit flows of the unperturbed network that minimize average delay subject to strong resilience guarantees, and discuss the use of tolls to induce such an equilibrium limit flow in transportation networks. Finally, we present illustrative simulations to discuss the connection between cascaded failures and the resilience properties of the network.
1103.4896
Classification of Sets using Restricted Boltzmann Machines
cs.LG stat.ML
We consider the problem of classification when inputs correspond to sets of vectors. This setting occurs in many problems such as the classification of pieces of mail containing several pages, of web sites with several sections or of images that have been pre-segmented into smaller regions. We propose generalizations of the restricted Boltzmann machine (RBM) that are appropriate in this context and explore how to incorporate different assumptions about the relationship between the input sets and the target class within the RBM. In experiments on standard multiple-instance learning datasets, we demonstrate the competitiveness of approaches based on RBMs and apply the proposed variants to the problem of incoming mail classification.
1103.4904
Distribution-Independent Evolvability of Linear Threshold Functions
cs.LG cs.CC cs.NE
Valiant's (2007) model of evolvability models the evolutionary process of acquiring useful functionality as a restricted form of learning from random examples. Linear threshold functions and their various subclasses, such as conjunctions and decision lists, play a fundamental role in learning theory, and hence their evolvability has been the primary focus of research on Valiant's framework (2007). One of the main open problems regarding the model is whether conjunctions are evolvable distribution-independently (Feldman and Valiant, 2008). We show that the answer is negative. Our proof is based on a new combinatorial parameter of a concept class that lower-bounds the complexity of learning from correlations. We contrast the lower bound with a proof that linear threshold functions having a non-negligible margin on the data points are evolvable distribution-independently via a simple mutation algorithm. Our algorithm relies on a non-linear loss function being used to select the hypotheses instead of the 0-1 loss in Valiant's (2007) original definition. The proof of evolvability requires that the loss function satisfy several mild conditions that are, for example, satisfied by the quadratic loss function studied in several other works (Michael, 2007; Feldman, 2009; Valiant, 2010). An important property of our evolution algorithm is monotonicity, that is, the algorithm guarantees evolvability without any decreases in performance. Previously, monotone evolvability was only shown for conjunctions with quadratic loss (Feldman, 2009) or when the distribution on the domain is severely restricted (Michael, 2007; Feldman, 2009; Kanade et al., 2010).
1103.4913
Automatic Open Space Area Extraction and Change Detection from High Resolution Urban Satellite Images
cs.CV
In this paper, we study an efficient and reliable automatic extraction algorithm to find the open space area in high resolution urban satellite imagery, and to detect changes in the extracted open space area during the period 2003, 2006 and 2008. This automatic extraction and change detection algorithm applies filters, segmentation and grouping to satellite images. The resultant images may be used to calculate the total available open space area and the built-up area. They may also be used to compare present and past open space areas using historical urban satellite images of the same projection, which is an important geospatial data management application.
1103.4916
Detection of Spatial Changes using Spatial Data Mining
cs.DB
Change detection based on analysis and samples is examined. Land use/cover change detection based on spatial data mining (SDM) is discussed.
1103.4919
Link Prediction in Complex Networks: A Clustering Perspective
cs.SI physics.soc-ph
Link prediction is an open problem in complex networks which currently attracts much research interest. However, little attention has been paid to the relation between network structure and the performance of prediction methods. In order to fill this vital gap, we try to understand how the network structure affects the performance of link prediction methods from the viewpoint of clustering. Our experiments on both synthetic and real-world networks show that as the clustering grows, the precision of these methods can be improved remarkably, while for sparse and weakly clustered networks, they perform poorly. We explain this through the distinction, caused by increased clustering, between the score distributions of positive and negative instances. Our finding also sheds light on the problem of how to select appropriate approaches for different networks with various densities and clusterings.
1103.4951
Exact Reconstruction using Beurling Minimal Extrapolation
math.ST cs.IT math.IT math.OC math.PR stat.TH
We show that measures with finite support on the real line are the unique solution to an algorithm, named generalized minimal extrapolation, involving only a finite number of generalized moments (which encompass the standard moments, the Laplace transform, the Stieltjes transformation, etc.). Generalized minimal extrapolation shares related geometric properties with the basis pursuit of Chen, Donoho and Saunders [CDS98]. Indeed we also extend some standard results of compressed sensing (the dual polynomial, the nullspace property) to the signed measure framework. We express exact reconstruction in terms of a simple interpolation problem. We prove that every nonnegative measure, supported by a set containing s points, can be exactly recovered from only 2s + 1 generalized moments. This result leads to a new construction of deterministic sensing matrices for compressed sensing.
1103.4959
On mean-square boundedness of stochastic linear systems with quantized observations
math.OC cs.SY
We propose a procedure to design a state quantizer with finitely many bins for a marginally stable stochastic linear system evolving in $\mathbb{R}^d$, and a bounded policy based on the resulting quantized state measurements to ensure a bounded second moment in closed loop.
1103.4977
Statistical Inference for R\'enyi Entropy Functionals
math.ST cs.IT math.IT stat.TH
Numerous entropy-type characteristics (functionals) generalizing R\'enyi entropy are widely used in mathematical statistics, physics, information theory, and signal processing for characterizing uncertainty in probability distributions and distribution identification problems. We consider estimators of some entropy (integral) functionals for discrete and continuous distributions based on the number of epsilon-close vector records in the corresponding independent and identically distributed samples from two distributions. The estimators form a triangular scheme of generalized U-statistics. We show the asymptotic properties of these estimators (e.g., consistency and asymptotic normality). The results can be applied in various problems in computer science and mathematical statistics (e.g., approximate matching for random databases, record linkage, image matching).
1103.4979
An Introduction to Functional dependency in Relational Databases
cs.DB
This write-up is the suggested lecture notes for a second-level course on advanced topics in database systems for master's students of Computer Science with a theoretical focus. A prerequisite in algorithms and an exposure to database systems are required. Additional reading may require exposure to mathematical logic. The starting point for these notes is M. Y. Vardi's survey listed herein as a reference; some of the proofs are presented as such. This select rewrite on functional dependency is intended to provide a few clarifications, even though radically new design approaches are now being proposed.
1103.5002
User Modeling Combining Access Logs, Page Content and Semantics
cs.IR cs.AI cs.HC
The paper proposes an approach to modeling users of large Web sites based on combining different data sources: access logs and content of the accessed pages are combined with semantic information about the Web pages, the users and the accesses of the users to the Web site. The assumption is that we are dealing with a large Web site providing content to a large number of users accessing the site. The proposed approach represents each user by a set of features derived from the different data sources, where some feature values may be missing for some users. It further enables user modeling based on the provided characteristics of the targeted user subset. The approach is evaluated on real-world data where we compare performance of the automatic assignment of a user to a predefined user segment when different data sources are used to represent the users.
1103.5027
Google matrix of the world trade network
q-fin.GN cond-mat.stat-mech cs.SI physics.soc-ph
Using the United Nations Commodity Trade Statistics Database [http://comtrade.un.org/db/] we construct the Google matrix of the world trade network and analyze its properties for various trade commodities for all countries and all available years from 1962 to 2009. The trade flows on this network are classified with the help of the PageRank and CheiRank algorithms developed for the World Wide Web and other large-scale directed networks. For world trade, this ranking treats all countries on equal democratic grounds, independent of country richness. Still, this method puts at the top a group of industrially developed countries for trade in {\it all commodities}. Our study establishes the existence of two solid-state-like domains of rich and poor countries which remain stable in time, while the majority of countries are shown to be in a gas-like phase with strong rank fluctuations. A simple random matrix model provides a good description of the statistical distribution of countries in the two-dimensional rank plane. The comparison with the usual ranking by export and import highlights new features and possibilities of our approach.
1103.5034
On Understanding and Machine Understanding
cs.AI
In the present paper, we propose a self-similar network theory for basic understanding. By extending natural languages to a kind of so-called ideally sufficient language, we can proceed a few steps toward the investigation of language searching and language understanding in AI. Image understanding, and the familiarity of the brain with the surrounding environment, are also discussed. Group effects are discussed by addressing the essence of the power of influence and constructing the influence network of a society. We also give a discussion of inspirations.
1103.5043
An Empirical Study of Real-World SPARQL Queries
cs.IR cs.AI cs.HC
Understanding how users tailor their SPARQL queries is crucial when designing query evaluation engines or fine-tuning RDF stores with performance in mind. In this paper we analyze 3 million real-world SPARQL queries extracted from logs of the DBPedia and SWDF public endpoints. We aim at finding which are the most used language elements, from both syntactical and structural perspectives, paying special attention to triple patterns and joins, since they are indeed some of the most expensive SPARQL operations at the evaluation phase. We have determined that most of the queries are simple and include few triple patterns and joins, with Subject-Subject, Subject-Object and Object-Object being the most common join types. The graph patterns are usually star-shaped and, although triple pattern chains exist, they are generally short.
1103.5044
Mining User Comment Activity for Detecting Forum Spammers in YouTube
cs.IR cs.AI cs.HC
Research shows that comment spamming (comments that are unsolicited, unrelated, abusive, hateful, commercial advertisements, etc.) in online discussion forums has become a common phenomenon in Web 2.0 applications, and there is a strong need to counter or combat comment spamming. We present a method to automatically detect comment spammers in YouTube (the largest and most popular video sharing website) forums. The proposed technique is based on mining the comment activity log of a user and extracting patterns (such as the time interval between subsequent comments, or the presence of exactly the same comment across multiple unrelated videos) indicating spam behavior. We perform empirical analysis on data crawled from YouTube and demonstrate that the proposed method is effective for the task of comment spammer detection.
1103.5046
From Linked Data to Relevant Data -- Time is the Essence
cs.IR cs.AI cs.HC
The Semantic Web initiative puts emphasis not primarily on putting data on the Web, but rather on creating links in a way that both humans and machines can explore the Web of data. When such users access the Web, they leave a trail, as Web servers maintain a history of requests. Web usage mining approaches have been studied since the beginning of the Web, given the logs' huge potential for purposes such as resource annotation, personalization, and forecasting. However, the impact of any such efforts has not really gone beyond generating statistics detailing who, when, and how Web pages maintained by a Web server were visited.
1103.5078
Algorithms for computing the greatest simulations and bisimulations between fuzzy automata
cs.FL cs.AI
Recently, two types of simulations (forward and backward simulations) and four types of bisimulations (forward, backward, forward-backward, and backward-forward bisimulations) between fuzzy automata have been introduced. It has been proved that if there is at least one simulation/bisimulation of one of these types between given fuzzy automata, then there is a greatest simulation/bisimulation of that kind. In the present paper, for each of the above-mentioned types of simulations/bisimulations, we provide an effective algorithm for deciding whether there is a simulation/bisimulation of this type between given fuzzy automata, and for computing the greatest one whenever it exists. The algorithms are based on the method developed in [J. Ignjatovi\'c, M. \'Ciri\'c, S. Bogdanovi\'c, On the greatest solutions to certain systems of fuzzy relation inequalities and equations, Fuzzy Sets and Systems 161 (2010) 3081-3113], which comes down to computing the greatest post-fixed point, contained in a given fuzzy relation, of an isotone function on the lattice of fuzzy relations.
1103.5081
Using Variable Threshold to Increase Capacity in a Feedback Neural Network
cs.NE
The article presents new results on the use of variable thresholds to increase the capacity of a feedback neural network. Non-binary networks are also considered in this analysis.
1103.5110
Formation of Modularity in a Model of Evolving Networks
physics.soc-ph cs.SI nlin.AO
Modularity structures are common in various social and biological networks; however, their dynamical origin remains an open question. In this work, we set up a dynamical model describing the evolution of a social network. Based on observations of real social networks, we introduce a link-creating/deleting strategy according to the local dynamics in the model. Thus the coevolution of dynamics and topology naturally determines the network properties. It is found that for a small coupling strength, the networked system cannot reach any synchronization and the network topology is homogeneous. Interestingly, when the coupling strength is large enough, the networked system spontaneously forms communities with different dynamical states. Meanwhile, the network topology becomes heterogeneous, with modular structures. It is further shown that in a certain parameter regime, both the degree and the community size in the formed network follow a power-law distribution, and the networks are found to be assortative. These results are consistent with the characteristics of many empirical networks, and are helpful for understanding the mechanism of formation of modularity in complex networks.
1103.5120
Emergence of scale-free leadership structure in social recommender systems
physics.soc-ph cs.IR cs.SI
The study of the organization of social networks is important for understanding opinion formation, rumor spreading, and the emergence of trends and fashion. This paper reports an empirical analysis of networks extracted from four leading sites with social functionality (Delicious, Flickr, Twitter and YouTube) and shows that they all display a scale-free leadership structure. To reproduce this feature, we propose an adaptive network model driven by social recommending. Artificial agent-based simulations of this model highlight a "good get richer" mechanism in which users with broad interests and good judgment are likely to become popular leaders for the others. Simulations also indicate that the studied social recommendation mechanism can gradually improve the user experience by adapting to the tastes of its users. Finally, we outline implications for real online resource-sharing systems.
1103.5128
Power Consumption of LDPC Decoders in Software Radio
cs.IT math.IT
LDPC codes are powerful error-correcting codes and have been applied to many advanced communication systems. The prosperity of software radio has motivated us to investigate the implementation of LDPC decoders on processors. In this paper, we estimate and compare the complexity and power consumption of LDPC decoding algorithms running on general-purpose processors. Using the estimation results, we present two power control schemes for software radio: SNR-based algorithm diversity, and joint transmit power and receiver energy management. Overall, this paper discusses general concerns about using processors as the software radio platform for the implementation of LDPC decoders.
1103.5131
Analysis of Equilibria and Strategic Interaction in Complex Networks
cs.GT cs.SI cs.SY math.OC
This paper studies $n$-person simultaneous-move games with linear best-response functions, where individuals interact within a given network structure. This class of games has been used to model various settings, such as public goods, belief formation, peer effects, and oligopoly. The purpose of this paper is to study the effect of the network structure on the Nash equilibrium outcomes of this class of games. Bramoull\'{e} et al. derived conditions for uniqueness and stability of a Nash equilibrium in terms of the smallest eigenvalue of the adjacency matrix representing the network of interactions. Motivated by this result, we study how local structural properties of the network of interactions affect this eigenvalue, thereby influencing game equilibria. In particular, we use algebraic graph theory and convex optimization to derive new bounds on the smallest eigenvalue in terms of the distribution of degrees, cycles, and other relevant substructures. We illustrate our results with numerical simulations involving online social networks.
1103.5133
Cooperative Strategies for Simultaneous and Broadcast Relay Channels
cs.IT math.IT
Consider the \emph{simultaneous relay channel} (SRC), which consists of a set of relay channels where the source wishes to transmit common and private information to each of the destinations. This problem is recognized as being equivalent to that of sending common and private information to several destinations in the presence of helper relays, where each channel outcome becomes a branch of the \emph{broadcast relay channel} (BRC). Cooperative schemes and the capacity region for a set of two memoryless relay channels are investigated. The proposed coding schemes, based on \emph{Decode-and-Forward} (DF) and \emph{Compress-and-Forward} (CF), must be capable of transmitting information simultaneously to all destinations in such a set. Depending on the quality of the source-to-relay and relay-to-destination channels, inner bounds on the capacity of the general BRC are derived. Three cases of particular interest are considered: cooperation based on the DF strategy for both users (referred to as the DF-DF region), cooperation based on the CF strategy for both users (referred to as the CF-CF region), and cooperation based on the DF strategy for one destination and CF for the other (referred to as the DF-CF region). These results can be seen as a generalization, and hence unification, of previous works. An outer bound on the capacity of the general BRC is also derived. Capacity results are obtained for the specific cases of semi-degraded and degraded Gaussian simultaneous relay channels. Rates are evaluated for Gaussian models where the source must guarantee a minimum amount of information to both users while additional information is sent to each of them.
1103.5142
Asymptotic Properties of One-Bit Distributed Detection with Ordered Transmissions
cs.IT cs.MA math.IT stat.AP
Consider a sensor network made of remote nodes connected to a common fusion center. In a recent work, Blum and Sadler [1] propose the idea of ordered transmissions (sensors with more informative samples deliver their messages first) and prove that optimal detection performance can be achieved using only a subset of the total messages. Taking this approach to one extreme, we show that just a single delivery allows making the detection errors as small as desired, for a sufficiently large network size: a one-bit detection scheme can be asymptotically consistent. The transmission ordering is based on the modulus of some local statistic (MO system). We derive analytical results proving the asymptotic consistency and, for the particular case where the local statistic is the log-likelihood (\ell-MO system), we also obtain a bound on the error convergence rate. All the theorems are proved under the general setup of a random number of sensors. Computer experiments corroborate the analysis and address typical examples of applications, including non-homogeneous Poisson-deployed networks, detection by per-sensor censoring, and monitoring of energy-constrained phenomena.
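The one-bit idea for the \ell-MO case can be illustrated with a toy Gaussian shift-in-noise sketch: only the sensor with the largest log-likelihood modulus transmits, and its single bit is the sign of its statistic. The `llr` and `one_bit_decision` names and the specific hypotheses are illustrative assumptions, not the paper's exact setup.

```python
def llr(x, theta=1.0, sigma=1.0):
    # log-likelihood ratio for H1: N(theta, sigma^2) vs H0: N(0, sigma^2)
    return (theta * x - theta ** 2 / 2) / sigma ** 2

def one_bit_decision(samples):
    # ordered transmissions taken to the extreme: only the sensor with the
    # largest |LLR| delivers; its one bit is the sign of its LLR
    best = max(samples, key=lambda x: abs(llr(x)))
    return 1 if llr(best) > 0 else 0
```

With many sensors, the most extreme sample dominates, which is the intuition behind the asymptotic consistency result.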
1103.5163
Generic Controllability of 3D Swimmers in a Perfect Fluid
math.OC cs.SY physics.bio-ph
We address the problem of controlling a dynamical system governing the motion of a 3D weighted shape-changing body swimming in a perfect fluid. The rigid displacement of the swimmer results from the exchange of momentum between prescribed shape changes and the flow, the total impulse of the fluid-swimmer system being constant for all times. We prove the following tracking results: (i) Synchronized swimming: possibly up to an arbitrarily small change of its density, any swimmer can approximately follow any given trajectory while, in addition, undergoing approximately any given shape changes; in this statement, the control consists of arbitrarily small superimposed deformations. (ii) Freestyle swimming: possibly up to an arbitrarily small change of its density, any swimmer can approximately track any given trajectory by suitably combining at most five basic movements that can be generically chosen (no macro shape changes are prescribed in this statement).
1103.5170
Differentially Private Spatial Decompositions
cs.DB
Differential privacy has recently emerged as the de facto standard for private data release. It makes it possible to provide strong theoretical guarantees on the privacy and utility of released data. While it is well known how to release data based on counts and simple functions under this guarantee, general-purpose techniques for releasing other kinds of data are still needed. In this paper, we focus on spatial data such as locations and, more generally, any data that can be indexed by a tree structure. Directly applying existing differential privacy methods to this type of data simply generates noise. Instead, we introduce a new class of "private spatial decompositions": these adapt standard spatial indexing methods such as quadtrees and kd-trees to provide a private description of the data distribution. Equipping such structures with differential privacy requires several steps to ensure that they provide meaningful privacy guarantees. Various primitives, such as choosing splitting points and describing the distribution of points within a region, must be done privately, and the guarantees of the different building blocks must be composed to provide an overall guarantee. Consequently, we expose the design space for private spatial decompositions and analyze some key examples. Our experimental study demonstrates that it is possible to build such decompositions efficiently, and to use them to answer a variety of queries privately with high accuracy.
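A minimal sketch of the basic private primitive involved, a noisy node count via the Laplace mechanism, follows. The helper names are hypothetical; a real private quadtree would additionally split the budget across levels (e.g. epsilon/d per level in a depth-d tree) so that the composed guarantee is epsilon.

```python
import random

def laplace_noise(scale, rng=random):
    # the difference of two iid exponentials with mean `scale`
    # is distributed as Laplace(0, scale)
    return rng.expovariate(1.0 / scale) - rng.expovariate(1.0 / scale)

def noisy_count(count, epsilon, rng=random):
    # Laplace mechanism: a count query has sensitivity 1, so noise of
    # scale 1/epsilon yields epsilon-differential privacy for this count
    return count + laplace_noise(1.0 / epsilon, rng)
```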
1103.5188
Differential Privacy: on the trade-off between Utility and Information Leakage
cs.CR cs.DB cs.IT math.IT
Differential privacy is a notion of privacy that has become very popular in the database community. Roughly, the idea is that a randomized query mechanism provides sufficient privacy protection if the ratio between the probabilities that two adjacent datasets give the same answer is bounded by e^epsilon. In the field of information flow there is a similar concern for controlling information leakage, i.e. limiting the possibility of inferring the secret information from the observables. In recent years, researchers have proposed to quantify the leakage in terms of the R\'enyi min mutual information, a notion strictly related to the Bayes risk. In this paper, we show how to model the query system in terms of an information-theoretic channel, and we compare the notion of differential privacy with that of mutual information. We show that differential privacy implies a bound on the mutual information (but not vice versa). Furthermore, we show that our bound is tight. Then, we consider the utility of the randomization mechanism, which represents how close the randomized answers are, on average, to the real ones. We show that the notion of differential privacy implies a bound on utility, also tight, and we propose a method that, under certain conditions, builds an optimal randomization mechanism, i.e. a mechanism which provides the best utility while guaranteeing differential privacy.
1103.5197
A New Secret key Agreement Scheme in a Four-Terminal Network
cs.CR cs.IT math.IT
A new scenario for generating a secret key and two private keys among three Terminals in the presence of an external eavesdropper is considered. Terminals 1, 2 and 3 intend to share a common secret key concealed from the external eavesdropper (Terminal 4) and simultaneously, each of Terminals 1 and 2 intends to share a private key with Terminal 3 while keeping it concealed from each other and from Terminal 4. All four Terminals observe i.i.d. outputs of correlated sources and there is a public channel from Terminal 3 to Terminals 1 and 2. An inner bound of the "secret key-private keys capacity region" is derived and the single letter capacity regions are obtained for some special cases.
1103.5218
Generalized Symmetric Divergence Measures and the Probability of Error
cs.IT math.IT
Three classical divergence measures are known in the literature on information theory and statistics: the Jeffreys-Kullback-Leibler J-divergence, the Sibson-Burbea-Rao Jensen-Shannon divergence, and the Taneja arithmetic-geometric divergence. These three measures bear an interesting relationship to one another. Divergence measures such as the Hellinger discrimination, the symmetric chi-square divergence, and the triangular discrimination are also known in the literature. In this paper, we consider generalized symmetric divergence measures having the measures above as particular cases. Bounds on the probability of error are obtained in terms of these generalized symmetric divergence measures. The study of bounds on the probability of error is extended to differences of divergence measures.
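The three classical symmetric measures named above can be computed directly from their definitions. This sketch assumes discrete distributions given as probability vectors and uses natural logarithms; the function names are illustrative.

```python
import math

def kl(p, q):
    # Kullback-Leibler divergence KL(p || q), natural log
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def j_divergence(p, q):
    # Jeffreys J-divergence: symmetrized KL
    return kl(p, q) + kl(q, p)

def js_divergence(p, q):
    # Jensen-Shannon divergence: averaged KL to the midpoint distribution
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def ag_divergence(p, q):
    # Taneja arithmetic-geometric divergence: compares the arithmetic
    # and geometric means of p_i and q_i
    return sum(((pi + qi) / 2) * math.log((pi + qi) / (2 * math.sqrt(pi * qi)))
               for pi, qi in zip(p, q) if pi > 0 and qi > 0)
```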
1103.5219
Upper Bounds on the Probability of Error in terms of Mean Divergence Measures
cs.IT math.IT
In this paper we consider some well-known means, such as the arithmetic, harmonic, geometric, and root-square means. Considering the differences of these means, we can establish some inequalities among them. Interestingly, the differences of the means considered are convex functions. Applying some of their properties, upper bounds on the probability of error are established in this paper. It is also shown that the results obtained are sharper than those obtained by directly applying known inequalities.
1103.5231
Leaders in Social Networks, the Delicious Case
physics.soc-ph cs.IR cs.SI
Finding pertinent information is not limited to search engines. Online communities can amplify the influence of a small number of power users for the benefit of all other users. Users' information foraging, in depth and breadth, can be greatly enhanced by choosing suitable leaders. For instance, in delicious.com, users subscribe to leaders' collections, which leads to a deeper and wider reach not achievable with search engines. To consolidate such collective search, it is essential to utilize the leadership topology and identify influential users. Google's PageRank, a successful search algorithm for the World Wide Web, turns out to be less effective in networks of people. We thus devise an adaptive and parameter-free algorithm, LeaderRank, to quantify user influence. We show that LeaderRank outperforms PageRank in terms of ranking effectiveness, as well as robustness against manipulation and noisy data. These results suggest that leaders who are aware of their clout may reinforce the development of social networks, and thus the power of collective search.
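LeaderRank's key ingredient, a ground node linked bidirectionally to every user in place of PageRank's teleportation parameter, can be sketched as a simple synchronous iteration. The function name, the edge representation, and the label "g" for the ground node are assumptions of this illustration.

```python
def leaderrank(links, n_iter=200):
    """links: set of directed edges (fan, leader). A ground node 'g'
    (assumed distinct from all user ids) is connected bidirectionally to
    every user, making the walk ergodic without any free parameter."""
    nodes = {u for edge in links for u in edge}
    edges = set(links) | {(u, "g") for u in nodes} | {("g", u) for u in nodes}
    out = {}
    for u, v in edges:
        out.setdefault(u, []).append(v)
    score = {u: 1.0 for u in nodes}
    score["g"] = 0.0
    for _ in range(n_iter):
        new = {u: 0.0 for u in score}
        for u, nbrs in out.items():
            share = score[u] / len(nbrs)   # each node spreads its score evenly
            for v in nbrs:
                new[v] += share
        score = new
    # after convergence, the ground node's score is redistributed evenly
    g = score.pop("g") / len(nodes)
    return {u: s + g for u, s in score.items()}
```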
1103.5258
Controllability of rolling without twisting or slipping in higher dimensions
math.OC cs.SY math.DG
We describe how the dynamical system of rolling two $n$-dimensional connected, oriented Riemannian manifolds $M$ and $\hat M$ without twisting or slipping, can be lifted to a nonholonomic system of elements in the product of the oriented orthonormal frame bundles belonging to the manifolds. By considering the lifted problem and using properties of the elements in the respective principal Ehresmann connections, we obtain sufficient conditions for the local controllability of the system in terms of the curvature tensors and the sectional curvatures of the manifolds involved. We also give some results for the particular cases when $M$ and $\hat M$ are locally symmetric or complete.
1103.5269
Naming Games in Two-Dimensional and Small-World-Connected Random Geometric Networks
cond-mat.stat-mech cs.SI physics.soc-ph
We investigate a prototypical agent-based model, the Naming Game, on two-dimensional random geometric networks. The Naming Game [A. Baronchelli et al., J. Stat. Mech.: Theory Exp. (2006) P06014] is a minimal model, employing local communications, that captures the emergence of shared communication schemes (languages) in a population of autonomous semiotic agents. Implementing the Naming Game with local broadcasts on random geometric graphs serves as a model for agreement dynamics in large-scale, autonomously operating wireless sensor networks. Further, it captures essential features of the scaling properties of the agreement process for spatially embedded autonomous agents. Among the relevant observables capturing the temporal properties of the agreement process, we investigate the cluster-size distribution and the distribution of agreement times, both exhibiting dynamic scaling. We also present results for the case when a small density of long-range communication links is added on top of the random geometric graph, resulting in a "small-world"-like network and yielding a significantly reduced time to reach global agreement. We construct a finite-size scaling analysis for the agreement times in this case.
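A mean-field (fully mixed) version of the Naming Game dynamics can be sketched as follows; local broadcasts on a random geometric graph, as studied in the paper, would replace the random pairing. Function and variable names here are illustrative.

```python
import random

def naming_game(n_agents=30, max_steps=200000, rng=None):
    """Minimal mean-field Naming Game: at each step a random speaker utters a
    random word from its vocabulary (inventing one if empty); on success both
    agents collapse to that word, on failure the hearer learns it. Returns
    the step at which global agreement is reached, or None."""
    rng = rng or random.Random(0)
    vocab = [set() for _ in range(n_agents)]
    next_word = 0
    for step in range(max_steps):
        speaker, hearer = rng.sample(range(n_agents), 2)
        if not vocab[speaker]:
            vocab[speaker].add(next_word)   # invent a brand-new word
            next_word += 1
        word = rng.choice(sorted(vocab[speaker]))
        if word in vocab[hearer]:
            vocab[speaker] = {word}         # success: both collapse
            vocab[hearer] = {word}
        else:
            vocab[hearer].add(word)         # failure: hearer learns the word
        if all(len(v) == 1 for v in vocab) and len(set.union(*vocab)) == 1:
            return step + 1                 # global agreement reached
    return None
```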
1103.5290
Optimal Energy Allocation for Wireless Communications with Energy Harvesting Constraints
cs.IT math.IT
We consider the use of energy harvesters, in place of conventional batteries with fixed energy storage, for point-to-point wireless communications. In addition to the challenge of transmitting in a channel with time selective fading, energy harvesters provide a perpetual but unreliable energy source. In this paper, we consider the problem of energy allocation over a finite horizon, taking into account channel conditions and energy sources that are time varying, so as to maximize the throughput. Two types of side information (SI) on the channel conditions and harvested energy are assumed to be available: causal SI (of the past and present slots) or full SI (of the past, present and future slots). We obtain structural results for the optimal energy allocation, via the use of dynamic programming and convex optimization techniques. In particular, if unlimited energy can be stored in the battery with harvested energy and the full SI is available, we prove the optimality of a water-filling energy allocation solution where the so-called water levels follow a staircase function.
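As a point of reference for the staircase water-level result above, the classical single-shot water-filling allocation over parallel channels can be sketched as follows. This is a standard building block, not the paper's dynamic-programming solution for harvested energy; the names are illustrative.

```python
def water_filling(gains, power, tol=1e-9):
    """Classical water-filling: allocate p_i = max(0, mu - 1/g_i) so that
    sum(p_i) = power, finding the water level mu by bisection."""
    lo, hi = 0.0, power + max(1.0 / g for g in gains)
    while hi - lo > tol:
        mu = (lo + hi) / 2
        used = sum(max(0.0, mu - 1.0 / g) for g in gains)
        if used > power:
            hi = mu        # water level too high: allocation exceeds budget
        else:
            lo = mu
    mu = (lo + hi) / 2
    return [max(0.0, mu - 1.0 / g) for g in gains]
```

In the harvesting setting of the paper, the water level is no longer constant but follows a staircase over time as energy arrives.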
1103.5348
Precoding for Outage Probability Minimization on Block Fading Channels
cs.IT math.IT
The outage probability limit is a fundamental and achievable lower bound on the word error rate of coded communication systems affected by fading. This limit is mainly determined by two parameters: the diversity order and the coding gain. With linear precoding, full diversity on a block fading channel can be achieved without an error-correcting code. However, the effect of precoding on the coding gain is not well known, mainly due to the complicated expression of the outage probability. Using a geometric approach, this paper establishes simple upper bounds on the outage probability, the minimization of which yields precoding matrices that achieve very good performance. For discrete alphabets, it is shown that the combination of constellation expansion and precoding is sufficient to closely approach the minimum possible outage achieved by an i.i.d. Gaussian input distribution, thus essentially maximizing the coding gain.
1103.5362
Verhulst-Lotka-Volterra (VLV) model of ideological struggles
physics.soc-ph cs.SI nlin.AO
Let the population of, e.g., a country where some opinion struggle occurs vary in time according to the Verhulst equation. Consider next some competition between opinions such that the dynamics is described by Lotka-Volterra equations. Two kinds of influences can be used in such a model to describe the dynamics of an agent's opinion conversion: it can occur (i) by means of mass communication tools, under some external field influence, or (ii) by means of direct interactions between agents. It results, among other features, that changes in environmental conditions can prevent the extinction of populations of followers of some ideology, due to different kinds of resurrection effects. The tension arising in the country's population is proposed to be measured by an appropriately defined scale index.
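One concrete form of such a Verhulst-Lotka-Volterra competition, logistic growth with pairwise competition coefficients, can be integrated with a simple Euler step. The equation form, parameter names, and values below are illustrative choices for this sketch, not necessarily those of the paper.

```python
def vlv_step(x, r, K, A, dt=0.01):
    """One Euler step of the (assumed) VLV system
    dx_i/dt = r_i * x_i * (1 - (x_i + sum_{j != i} A[i][j] * x_j) / K_i),
    i.e. Verhulst growth to capacity K_i plus Lotka-Volterra competition."""
    n = len(x)
    new = []
    for i in range(n):
        inter = sum(A[i][j] * x[j] for j in range(n) if j != i)
        growth = r[i] * x[i] * (1 - (x[i] + inter) / K[i])
        new.append(max(0.0, x[i] + dt * growth))  # populations stay nonnegative
    return new
```

With weak symmetric competition (A[i][j] < 1), two opinions coexist instead of one driving the other extinct, which mirrors the "resurrection" phenomenology discussed above.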
1103.5382
On religion and language evolutions seen through mathematical and agent based models
physics.soc-ph cs.SI nlin.AO
(shortened version) Religions and languages are social variables, like age, sex, wealth or political opinions, to be studied like any other organizational parameter. In fact, religiosity is one of the most important sociological aspects of populations. Languages are also a characteristic of humankind. New religions and new languages appear, while others disappear. All religions and languages evolve as they adapt to societal developments. On the other hand, the number of adherents of a given religion, or of persons speaking a language, is not fixed. Several questions can be raised. From a macroscopic point of view: How many religions/languages exist at a given time? What is their distribution? What is their lifetime? How do they evolve? From a microscopic viewpoint: Can one invent agent-based models to describe macroscopic aspects? Do simple evolution equations exist? It is intuitively accepted, and also found through statistical analysis of the frequency distribution, that an attachment process is the primary cause of the distribution's evolution: usually the initial religion/language is that of the mother. Later on, changes can occur, either due to heterogeeous agent interaction processes or due to external field constraints, or both. Such cases can be illustrated with historical facts and data. It is stressed that the characteristic time scales are different, and recalled that external fields are very relevant in the case of religions, rendering the study more interesting within a mechanistic approach.
1103.5405
Network Estimation and Packet Delivery Prediction for Control over Wireless Mesh Networks
cs.SY cs.NI math.OC
Much of the current theory of networked control systems uses simple point-to-point communication models as an abstraction of the underlying network. As a result, the controller has very limited information on the network conditions and performs suboptimally. This work models the underlying wireless multihop mesh network as a graph of links with transmission success probabilities, and uses a recursive Bayesian estimator to provide packet delivery predictions to the controller. The predictions are a joint probability distribution on future packet delivery sequences, and thus capture correlations between successive packet deliveries. We look at finite horizon LQG control over a lossy actuation channel and a perfect sensing channel, both without delay, to study how the controller can compensate for predicted network outages.
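The per-link part of such a recursive Bayesian estimator can be sketched with a conjugate Beta-Bernoulli update. The independence assumption across links in `predict_delivery` is a simplification of this sketch: the paper's predictor produces a joint distribution that captures correlations between successive deliveries. All names here are illustrative.

```python
def beta_update(prior, delivered):
    """Recursive Bayesian (Beta-Bernoulli) update of one link's transmission
    success probability; prior = (alpha, beta) pseudo-counts of
    successes and failures."""
    a, b = prior
    return (a + 1, b) if delivered else (a, b + 1)

def predict_delivery(path_posteriors):
    """Probability that a packet traverses every link of a multihop path,
    using each link's posterior mean and (for this sketch) assuming
    independent links."""
    p = 1.0
    for a, b in path_posteriors:
        p *= a / (a + b)
    return p
```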
1103.5410
Political protest Italian-style: The dissonance between the blogosphere and mainstream media in the promotion and coverage of Beppe Grillo's V-day
cs.SI cs.CY physics.soc-ph
We analyze the organization, promotion and public perception of V-day, a political rally that took place on September 8, 2007, to protest against corruption in the Italian Parliament. Launched by blogger Beppe Grillo, and promoted via a word-of-mouth mobilization on the Italian blogosphere, V-day brought close to one million Italians into the streets on a single day, but was mostly ignored by mainstream media. This article is divided into two parts. In the first part, we analyze the volume and content of online articles published by both bloggers and mainstream news sources from June 14 (the day V-day was announced) until September 15, 2007 (one week after it took place). We find that the success of V-day can be attributed to the coverage of bloggers and small-scale local news outlets only, suggesting a strong grassroots component in the organization of the rally. We also find a dissonant thematic relationship between content published by blogs and mainstream media: while the majority of blogs analyzed promote V-day, major mainstream media sources critique the methods of information production and dissemination employed by Grillo. Based on this finding, in the second part of the study, we explore the role of Grillo in the organization of the rally from a network analysis perspective. We study the interlinking structure of the V-day blogosphere network to determine its structure, its levels of heterogeneity, and its resilience. Our analysis contradicts the hypothesis that Grillo served as a top-down, broadcast-like source of information. Rather, we find that information about V-day was transferred across heterogeneous nodes in a moderately robust and resilient core network of blogs. We speculate that the organization of V-day represents the very first case, in Italian history, of a political demonstration developed and promoted primarily via the use of social media on the web.
1103.5426
Interference Channels with Rate-Limited Feedback
cs.IT math.IT
We consider the two-user interference channel with rate-limited feedback. Related prior works focus on the case where feedback links have infinite capacity, while no research has been done for the rate-limited feedback problem. Several new challenges arise due to the capacity limitations of the feedback links, both in deriving inner-bounds and outer-bounds. We study this problem under three different interference models: the El Gamal-Costa deterministic model, the linear deterministic model, and the Gaussian model. For the first two models, we develop an achievable scheme that employs three techniques: Han-Kobayashi message splitting, quantize-and-binning, and decode-and-forward. We also derive new outer-bounds for all three models and we show the optimality of our scheme under the linear deterministic model. In the Gaussian case, we propose a transmission strategy that incorporates lattice codes, inspired by the ideas developed in the first two models. For symmetric channel gains, we prove that the gap between the achievable sum-rate of the proposed scheme and our new outer-bounds is bounded by a constant number of bits, independent of the channel gains.
1103.5431
Identification of Nonlinear Systems with Stable Limit Cycles via Convex Optimization
math.OC cs.SY
We propose a convex optimization procedure for black-box identification of nonlinear state-space models for systems that exhibit stable limit cycles (unforced periodic solutions). It extends the "robust identification error" framework in which a convex upper bound on simulation error is optimized to fit rational polynomial models with a strong stability guarantee. In this work, we relax the stability constraint using the concepts of transverse dynamics and orbital stability, thus allowing systems with autonomous oscillations to be identified. The resulting optimization problem is convex, and can be formulated as a semidefinite program. A simulation-error bound is proved without assuming that the true system is in the model class, or that the number of measurements goes to infinity. Conditions which guarantee existence of a unique limit cycle of the model are proved and related to the model class that we search over. The method is illustrated by identifying a high-fidelity model from experimental recordings of a live rat hippocampal neuron in culture.
1103.5441
Nobody but You: Sensor Selection for Voltage Regulation in Smart Grid
cs.SY math.OC
The increasing availability of distributed energy resources (DERs) and sensors in the smart grid, as well as the overlaying communication network, provides substantial potential benefits for improving the power system's reliability. In this paper, the problem of sensor selection is studied for the MAC-layer design of wireless sensor networks for regulating voltages in the smart grid. A hybrid dynamical system framework is proposed, using a Kalman filter for voltage state estimation and LQR feedback control for voltage adjustment. The approach to obtain the optimal sensor selection sequence is studied, and a suboptimal sequence is obtained by applying a sliding-window algorithm. Simulation results show that the proposed sensor selection strategy achieves a 40% performance gain over the baseline algorithm of round-robin sensor polling.
1103.5451
Complexity in human transportation networks: A comparative analysis of worldwide air transportation and global cargo ship movements
physics.soc-ph cond-mat.stat-mech cs.SI
We present a comparative network-theoretic analysis of the two largest global transportation networks: the worldwide air-transportation network (WAN) and the global cargo-ship network (GCSN). We show that both networks exhibit striking statistical similarities despite significant differences in topology and connectivity. Both networks exhibit a discontinuity in node and link betweenness distributions, which implies that these networks naturally segregate into two different classes of nodes and links. We introduce a technique based on effective distances, shortest paths and shortest-path trees for strongly weighted symmetric networks, and show that in a shortest-path-tree representation the most significant features of both networks can be readily seen. We show that effective shortest-path distance, unlike conventional geographic distance measures, strongly correlates with node centrality measures. Using the new technique we show that network resilience can be investigated more precisely than with contemporary techniques that are based on percolation theory. We extract a functional relationship between node characteristics and resilience to network disruption. Finally we discuss the results and their implications, and conclude that dynamic processes that evolve on both networks are expected to share universal dynamic characteristics.
1103.5478
Proof of the outage probability conjecture for MISO channels
cs.IT math.IT
In Telatar (1999), it is conjectured that the covariance matrices minimizing the outage probability for MIMO channels with Gaussian fading are diagonal, with either zeros or constant values on the diagonal. In the MISO setting, this is equivalent to conjecturing that the Gaussian quadratic forms having the largest tail probability correspond to such diagonal matrices. We prove the conjecture in the MISO setting.
1103.5479
Unicity conditions for low-rank matrix recovery
math.NA cs.IT cs.SY math.IT math.OC math.PR
Low-rank matrix recovery addresses the problem of recovering an unknown low-rank matrix from few linear measurements. Nuclear-norm minimization is a tractable approach with a recent surge of strong theoretical backing. Analogous to the theory of compressed sensing, these results have required random measurements. For example, m >= Cnr Gaussian measurements are sufficient to recover any rank-r n x n matrix with high probability. In this paper we address the theoretical question of how many measurements are needed via any method whatsoever, tractable or not. We show that for a family of random measurement ensembles, m >= 4nr - 4r^2 measurements are sufficient to guarantee that no rank-2r matrix lies in the null space of the measurement operator with probability one. This is a necessary and sufficient condition to ensure uniform recovery of all rank-r matrices by rank minimization. Furthermore, this value of $m$ precisely matches the dimension of the manifold of all rank-2r matrices. We also prove that for a fixed rank-r matrix, m >= 2nr - r^2 + 1 random measurements are enough to guarantee recovery using rank minimization. These results give a benchmark against which we may compare the efficacy of nuclear-norm minimization.
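The two measurement counts in the abstract match standard dimension counts for rank manifolds: the set of rank-k n1 x n2 matrices has dimension k(n1 + n2 - k). A quick sanity check of that bookkeeping (values n = 20, r = 3 are arbitrary):

```python
def rank_manifold_dim(n1, n2, k):
    """Dimension of the manifold of rank-k n1 x n2 real matrices: k*(n1 + n2 - k)."""
    return k * (n1 + n2 - k)

n, r = 20, 3
# m >= 4nr - 4r^2 exactly matches the dimension of the rank-2r manifold:
assert 4 * n * r - 4 * r ** 2 == rank_manifold_dim(n, n, 2 * r)
# and m >= 2nr - r^2 + 1 is one more than the dimension of the rank-r manifold:
assert 2 * n * r - r ** 2 + 1 == rank_manifold_dim(n, n, r) + 1
```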
1103.5520
Shannon Entropy based Randomness Measurement and Test for Image Encryption
cs.CR cs.IT math.IT
The quality of image encryption is commonly measured by the Shannon entropy over the ciphertext image. However, this measurement does not consider the randomness of local image blocks and is inappropriate for scrambling-based image encryption methods. In this paper, a new information-entropy-based randomness measurement for image encryption is introduced which, for the first time, answers the question of whether a given ciphertext image is sufficiently random-like. It measures the randomness over the ciphertext in a fairer way by calculating the averaged entropy of a series of small image blocks within the entire test image. To support both quantitative and qualitative measurement, the expectation and the variance of this averaged block entropy for a truly random image are strictly derived, and corresponding numerical reference tables are also provided. Moreover, a hypothesis test at a given significance level is provided to help accept or reject the hypothesis that the test image is ideally encrypted/random-like. Simulation results show that the proposed test is able to give effective quantitative and qualitative results for image encryption. The same idea can also be applied to measure other digital data, such as audio and video.
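A minimal sketch of the averaged block-entropy idea, assuming a row-major grayscale image as a flat list (not the paper's code, and without the derived expectation/variance tables): a half-black, half-white image has global entropy of 1 bit, yet every 8x8 block is constant, so the block-averaged score correctly flags it as non-random.

```python
import math

def shannon_entropy(pixels):
    """Shannon entropy (bits) of the value histogram of a pixel sequence."""
    counts = {}
    for v in pixels:
        counts[v] = counts.get(v, 0) + 1
    n = len(pixels)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def block_entropy_score(image, rows, cols, bs=8):
    """Average entropy over non-overlapping bs x bs blocks (row-major image)."""
    scores = []
    for r0 in range(0, rows - bs + 1, bs):
        for c0 in range(0, cols - bs + 1, bs):
            block = [image[(r0 + r) * cols + (c0 + c)]
                     for r in range(bs) for c in range(bs)]
            scores.append(shannon_entropy(block))
    return sum(scores) / len(scores)

# 16x16 image: top half all 0, bottom half all 255.
# Global entropy is 1 bit, yet every block is constant (score 0).
halves = [0] * 128 + [255] * 128
```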
1103.5535
A Lattice Compress-and-Forward Scheme
cs.IT math.IT
We present a nested-lattice-code-based strategy that achieves the random-coding-based Compress-and-Forward (CF) rate for the three-node Gaussian relay channel. To do so, we first outline a lattice-based strategy for the $(X+Z_1,X+Z_2)$ Wyner-Ziv lossy source-coding with side-information problem in Gaussian noise, a re-interpretation of the nested-lattice-code-based Gaussian Wyner-Ziv scheme presented by Zamir, Shamai, and Erez. We use the notation $(X+Z_1,X+Z_2)$ Wyner-Ziv to mean that the source is of the form $X+ Z_1$ and the side-information at the receiver is of the form $X+ Z_2$, for independent Gaussian $X, Z_1$ and $Z_2$. We next use this $(X+Z_1,X+Z_2)$ Wyner-Ziv scheme to implement a "structured" or lattice-code-based CF scheme which achieves the classic CF rate for Gaussian relay channels. This suggests that lattice codes may be useful not only in point-to-point single-hop source and channel coding and in multiple-access and broadcast channels, but also in larger relay networks. The usage of lattice codes in larger networks is motivated by their structured nature (possibly leading to rate gains) and by their relatively simple decoding, which is more practically realizable than that of their random-coding-based counterparts. We furthermore expect the proposed lattice-based CF scheme to constitute a first step towards a generic structured achievability scheme for networks, such as a structured version of the recently introduced "noisy network coding".
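The nested-lattice Wyner-Ziv mechanism can be illustrated in one dimension. The sketch below is a deliberately bare scalar caricature (fine lattice Z, coarse lattice 4Z, no dithering or MMSE scaling, unlike the actual Zamir-Shamai-Erez construction): the encoder transmits only the coset of its quantized source modulo the coarse lattice, and the decoder resolves the ambiguity with its side information.

```python
def quantize(x, step):
    """Nearest point of the lattice step*Z."""
    return step * round(x / step)

def wz_encode(source, fine=1.0, coarse=4.0):
    """Quantize to the fine lattice; transmit only the coset mod the coarse lattice."""
    q = quantize(source, fine)
    return q - quantize(q, coarse)

def wz_decode(coset, side_info, fine=1.0, coarse=4.0):
    """Pick the fine-lattice point in the received coset closest to the side info.
    Correct whenever |side_info - quantized source| < coarse/2."""
    return coset + quantize(side_info - coset, coarse)

# Source 5.3 quantizes to 5.0; only its coset (1.0 mod 4Z) is sent.
# Side information 5.8 is close enough to disambiguate.
recovered = wz_decode(wz_encode(5.3), 5.8)
```

The binning saves rate because the coset index ranges over only coarse/fine values rather than the full source range; the side information pays for the rest.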
1103.5542
Sparsity Enhanced Decision Feedback Equalization
cs.IT math.IT
For single-carrier systems with frequency-domain equalization, decision feedback equalization (DFE) performs better than linear equalization and has much lower computational complexity than maximum-likelihood sequence detection. The main challenge in DFE is the feedback symbol selection rule. In this paper, we give a theoretical framework for a simple, sparsity-based thresholding algorithm. We feed back multiple symbols in each iteration, so the algorithm converges quickly and has a low computational cost. We show how the initial solution can be obtained via convex relaxation instead of linear equalization, and illustrate the impact that the choice of initial solution has on the bit error rate performance of our algorithm. The algorithm is applicable in several existing wireless communication systems (SC-FDMA, MC-CDMA, MIMO-OFDM). Numerical results illustrate significant bit error rate improvement compared to the MMSE solution.
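The multi-symbol thresholded-feedback idea can be sketched on a toy noiseless two-tap BPSK channel. This is my own illustrative construction, not the paper's algorithm (which works in the frequency domain and starts from a convex-relaxation solution): in each pass, every symbol whose interference-cancelled soft value clears a reliability threshold is decided and fed back.

```python
def sparse_dfe(y, h1=0.3, tau=0.9, max_iter=20):
    """Iterative multi-symbol decision feedback for y[k] = x[k] + h1*x[k-1],
    with BPSK x. Per pass, decide every symbol whose soft value (after
    cancelling already-decided interference) exceeds the threshold tau."""
    n = len(y)
    decided = [None] * n
    for _ in range(max_iter):
        progress = False
        for k in range(n):
            if decided[k] is not None:
                continue
            cancel = h1 * decided[k - 1] if k > 0 and decided[k - 1] is not None else 0.0
            soft = y[k] - cancel
            if abs(soft) >= tau:
                decided[k] = 1 if soft > 0 else -1
                progress = True
        if not progress:
            break
    # fall back to hard sign decisions for anything still undecided
    return [d if d is not None else (1 if y[k] > 0 else -1)
            for k, d in enumerate(decided)]

x = [1, -1, -1, 1, 1, -1]
y = [x[0]] + [x[k] + 0.3 * x[k - 1] for k in range(1, len(x))]
```

Unreliable symbols (here, those with |y[k]| = 0.7) are deferred until a decided neighbour makes their soft estimate exact, which is the sparsity intuition: feed back only what you trust.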
1103.5554
Visual Localisation of Mobile Devices in an Indoor Environment under Network Delay Conditions
cs.RO
Recent progress in home automation and service robotics has highlighted the need to develop interoperability mechanisms that allow standard communication between the two kinds of system. During the development of the DHCompliant protocol, the problem of locating mobile devices in an indoor environment was investigated. Communication between the device and the location service was implemented in order to compare the time delay of web services against that of sockets. Because real-time location systems depend on timely data, a basic interoperability tool such as web services can be ineffective in this scenario owing to the delays added by service invocation. This paper introduces a web service that resolves a coordinates request without any significant delay in comparison with sockets.
1103.5569
An upper bound on community size in scalable community detection
physics.soc-ph cs.SI
It is well known that community detection methods based on modularity optimization often fail to discover small communities. Several objective functions used for community detection therefore involve a resolution parameter that allows the detection of communities at different scales. We provide an explicit upper bound on the size of communities resulting from the optimization of several of these functions. We also show with a simple example that the use of the resolution parameter may artificially force the complete disaggregation of large and densely connected communities.
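The role of the resolution parameter can be seen numerically with generalized modularity, Q(gamma) = sum_c [L_c/m - gamma*(d_c/2m)^2], on the standard two-triangles toy graph (my example, not the paper's): at low gamma, merging the two triangles scores higher than keeping them separate, while at gamma = 1 the split wins.

```python
def modularity(edges, comm, gamma=1.0):
    """Generalized modularity Q = sum_c [L_c/m - gamma*(d_c/2m)^2],
    for an undirected graph given as a list of edges."""
    m = len(edges)
    nodes = {u for e in edges for u in e}
    deg = {u: 0 for u in nodes}
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    q = 0.0
    for c in set(comm.values()):
        members = {u for u in nodes if comm[u] == c}
        lc = sum(1 for u, v in edges if u in members and v in members)
        dc = sum(deg[u] for u in members)
        q += lc / m - gamma * (dc / (2 * m)) ** 2
    return q

# Two triangles joined by a single bridge edge (7 edges total).
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
split = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
merged = {u: 0 for u in range(6)}
# Q_split(gamma) = 6/7 - gamma/2 and Q_merged(gamma) = 1 - gamma,
# so the split is preferred only when gamma > 2/7.
```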
1103.5580
Designing a Miniature Wheel Arrangement for Mobile Robot Platforms
cs.RO
This research report presents the design of a miniature wheel arrangement. The miniature wheel arrangement is essentially a direction-control mechanism intended for use on a mobile robot platform or base. The design employs a stepper motor as the actuator and, as described, can only be used on a certain type of wheeled robot. However, as a basic steering control element, more than one of these miniature wheel arrangements can be grouped together to implement more elaborate and intelligent direction-control schemes on varying configurations of wheeled mobile robot platforms.
1103.5582
Role-similarity based comparison of directed networks
physics.soc-ph cs.SI q-bio.MN
Complex networks are a widely relevant and valuable tool in the analysis of a broad range of systems, and there is a demand for tools which enable the extraction of meaningful information and allow comparison between different systems. We present a novel measure of similarity between nodes in different networks as a generalization of the concept of self-similarity. A similarity matrix is assembled from the distances between feature vectors that contain the counts of in- and out-paths of all lengths for each node. Hence, nodes operating in a similar flow environment are considered similar regardless of network membership. We demonstrate that this method has the potential to be influential in tasks such as assigning identity or function to uncharacterized nodes. In addition, an innovative application of graph partitioning to the raw results extends the concept to the comparison of networks in terms of their underlying role structure.
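A minimal sketch of role features built from path counts, under the assumption (my reading of the abstract, not the authors' code) that the feature vector of node i collects, for each length k, the number of outgoing and incoming paths of length k, i.e. the row and column sums of A^k. Nodes in different networks with the same flow role then get distance zero, e.g. any node of a directed 3-cycle versus any node of a directed 4-cycle.

```python
def matmul(A, B):
    """Plain dense matrix product for small integer adjacency matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def path_features(A, K=3):
    """Per node: counts of outgoing and incoming paths of length 1..K."""
    n = len(A)
    feats = [[] for _ in range(n)]
    P = [row[:] for row in A]
    for _ in range(K):
        for i in range(n):
            feats[i].append(sum(P[i]))                        # out-paths
            feats[i].append(sum(P[r][i] for r in range(n)))   # in-paths
        P = matmul(P, A)
    return feats

def role_distance(f, g):
    """Euclidean distance between two feature vectors."""
    return sum((a - b) ** 2 for a, b in zip(f, g)) ** 0.5

cycle3 = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]
cycle4 = [[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [1, 0, 0, 0]]
chain3 = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]  # a source, a relay, a sink
```

Cycle nodes in both networks have exactly one in- and one out-path of every length, so they compare as identical roles, while a chain's source node does not.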
1103.5586
Use of Devolved Controllers in Data Center Networks
cs.NI cs.SY math.OC
In a data center network, for example, it is common to use controllers to manage resources in a centralized manner. Centralized control, however, imposes a scalability problem. In this paper, we investigate the use of multiple independent controllers instead of a single omniscient controller to manage resources. Each controller looks after a portion of the network only, but together they cover the whole network, which solves the scalability problem. We use flow allocation as an example of how this approach can manage bandwidth use in a distributed manner. The focus is on how to assign components of a network to the controllers so that (1) each controller only needs to look after a small part of the network, but (2) there is at least one controller that can answer any request. We outline a way to configure the controllers to fulfill these requirements as a proof that the use of devolved controllers is possible. We also discuss several issues related to such an implementation.
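The two configuration requirements, small per-controller regions and full coverage, can be captured by even the simplest assignment policy. The sketch below is a deliberately naive round-robin partition of links among controllers (the paper's configuration scheme is surely more elaborate; link names are hypothetical), showing that every request has an owner while no controller sees more than ceil(|E|/k) links.

```python
def assign_links(links, k):
    """Round-robin partition of links among k controllers: every link has
    exactly one owner (requirement 2), and each controller's region holds
    at most ceil(len(links)/k) links (requirement 1)."""
    regions = [[] for _ in range(k)]
    owner = {}
    for i, link in enumerate(links):
        c = i % k
        regions[c].append(link)
        owner[link] = c
    return regions, owner

links = [("s1", "s2"), ("s2", "s3"), ("s3", "s4"), ("s1", "s4"), ("s2", "s4")]
regions, owner = assign_links(links, 2)
```

A flow request for any link is then routed to `owner[link]`, so no single controller needs a global view.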
1103.5602
Time and spectral domain relative entropy: A new approach to multivariate spectral estimation
math.OC cs.SY
The concept of spectral relative entropy rate is introduced for jointly stationary Gaussian processes. Using classical information-theoretic results, we establish a remarkable connection between time- and spectral-domain relative entropy rates. This naturally leads to a new spectral estimation technique where a multivariate version of the Itakura-Saito distance is employed. It may be viewed as an extension of the approach, called THREE, introduced by Byrnes, Georgiou and Lindquist in 2000 which, in turn, followed in the footsteps of the Burg-Jaynes Maximum Entropy Method. Spectral estimation is here recast in the form of a constrained spectrum approximation problem where the distance is equal to the processes' relative entropy rate. The corresponding solution entails a complexity upper bound which improves on the one so far available in the multichannel framework. Indeed, it is equal to the one featured by THREE in the scalar case. The solution is computed via a globally convergent matricial Newton-type algorithm. Simulations suggest the effectiveness of the new technique in tackling multivariate spectral estimation tasks, especially in the case of short data records.
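For reference, the scalar Itakura-Saito distance that the multivariate version generalizes can be sketched in a few lines, as a bin-averaged discretization of (1/2pi) integral of P/Q - log(P/Q) - 1 over frequency (my illustrative discretization, with made-up sample spectra): it vanishes exactly when the spectra coincide and, like the relative entropy it relates to, it is not symmetric in its arguments.

```python
import math

def itakura_saito(P, Q):
    """Discretized scalar Itakura-Saito distance between two sampled power
    spectra: average over frequency bins of P/Q - log(P/Q) - 1."""
    assert len(P) == len(Q)
    return sum(p / q - math.log(p / q) - 1.0 for p, q in zip(P, Q)) / len(P)

flat = [1.0, 1.0, 1.0, 1.0]
peaky = [3.0, 1.0, 1.0, 1.0]  # one spectral peak over a flat floor
```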