1101.2405
Improved Peak Cancellation for PAPR Reduction in OFDM Systems
cs.IT math.IT
This letter presents an improved peak cancellation (PC) scheme for peak-to-average power ratio (PAPR) reduction in orthogonal frequency division multiplexing (OFDM) systems. The main idea is based on a serial peak cancellation (SPC) mode that alleviates the peak regrowth of the conventional schemes. Based on the SPC mode, two particular algorithms are developed with different tradeoffs between PAPR and computational complexity. Simulations show that the proposed scheme achieves a better tradeoff among PAPR, complexity and signal distortion than the conventional schemes.
1101.2416
Decentralized Formation Control Part I: Geometric Aspects
math.OC cs.MA cs.SY
In this paper, we develop new methods for the analysis of decentralized control systems and we apply them to formation control problems. The basic set-up consists of a system with multiple agents corresponding to the nodes of a graph whose edges encode the information that is available to the agents. We address the question of whether the information flow defined by the graph is sufficient for the agents to accomplish a given task. Formation control is concerned with problems in which agents are required to stabilize at a given distance from other agents. In this context, the graph of a formation encodes both the information flow and the distance constraints, by fixing the lengths of the edges. A formation is said to be rigid if it cannot be continuously deformed with the distance constraints satisfied; a formation is minimally rigid if no distance constraint can be omitted without the formation losing its rigidity. Hence, the graph underlying a minimally rigid formation provides just enough constraints to yield a rigid formation. An open question we will settle is whether the information flow afforded by a minimally rigid graph is sufficient to ensure global stability. We show that the answer is negative in the case of directed information flow. In this first part, we establish basic properties of formation control in the plane. Formations and the associated control problems are defined modulo rigid transformations. This fact has strong implications for the geometry of the space of formations and for the feedback laws, since they need to respect this invariance. We study both aspects here. We show that the space of frameworks of n agents is CP(n-2) x (0,\infty). We then illustrate how the non-trivial topology of this space relates to the parametrization of the formation by inter-agent distances.
1101.2421
Decentralized Formation Control Part II: Algebraic aspects of information flow and singularities
math.OC cs.MA cs.SY
Given an ensemble of autonomous agents and a task to achieve cooperatively, how much do the agents need to know about the state of the ensemble and about the task in order to achieve it? We introduce new methods to understand these aspects of decentralized control. Precisely, we introduce a framework to capture what agents with partial information can achieve by cooperating and illustrate its use by deriving results about global stabilization of directed formations. This framework underscores the need to differentiate the knowledge an agent has about the task to accomplish from the knowledge an agent has about the current state of the system. The control of directed formations has proven to be more difficult than initially thought, as is exemplified by the lack of global results for formations with n \geq 4 agents. We established in part I that the space of planar formations has a non-trivial global topology. We propose here an extension of the notion of global stability which, because it acknowledges this non-trivial topology, can be applied to the study of formation control. We then develop a framework that reduces the question of whether feedback with partial information can stabilize the system to whether two sets of functions intersect. We apply this framework to the study of a directed formation with n = 4 agents and show that the agents do not have enough information to implement locally stabilizing feedback laws. Additionally, we show that feedback laws that respect the information flow cannot stabilize a target configuration without stabilizing other, unwanted configurations.
1101.2427
Content-Based Filtering for Video Sharing Social Networks
cs.CV cs.SI
In this paper we compare the use of several features in the task of content filtering for video social networks, a very challenging task, not only because the unwanted content is related to very high-level semantic concepts (e.g., pornography, violence, etc.) but also because videos from social networks are extremely assorted, preventing the use of constrained a priori information. We propose a simple method, able to combine diverse evidence, coming from different features and various video elements (entire video, shots, frames, keyframes, etc.). We evaluate our method in three social network applications, related to the detection of unwanted content - pornographic videos, violent videos, and videos posted to artificially manipulate popularity scores. Using challenging test databases, we show that this simple scheme is able to obtain good results, provided that adequate features are chosen. Moreover, we establish a representation using codebooks of spatiotemporal local descriptors as critical to the success of the method in all three contexts. This is consequential, since the state-of-the-art still relies heavily on static features for the tasks addressed.
1101.2435
Networks with arbitrary edge multiplicities
physics.soc-ph cs.SI physics.data-an
One of the main characteristics of real-world networks is their large clustering. Clustering is one aspect of a more general but much less studied structural organization of networks, i.e. edge multiplicity, defined as the number of triangles in which edges, rather than vertices, participate. Here we show that the multiplicity distribution of real networks is in many cases scale-free, and in general very broad. Thus, besides the fact that in real networks the number of edges attached to vertices often has a scale-free distribution, we find that the number of triangles attached to edges can have a scale-free distribution as well. We show that current models, even when they generate clustered networks, systematically fail to reproduce the observed multiplicity distributions. We therefore propose a generalized model that can reproduce networks with arbitrary distributions of vertex degrees and edge multiplicities, and study many of its properties analytically.
1101.2478
Delay and Power-Optimal Control in Multi-Class Queueing Systems
math.OC cs.SY
We consider optimizing average queueing delay and average power consumption in a nonpreemptive multi-class M/G/1 queue with dynamic power control that affects instantaneous service rates. Four problems are studied: (1) satisfying per-class average delay constraints; (2) minimizing a separable convex function of average delays subject to per-class delay constraints; (3) minimizing average power consumption subject to per-class delay constraints; (4) minimizing a separable convex function of average delays subject to an average power constraint. Combining an achievable region approach in queueing systems and the Lyapunov optimization theory suitable for optimizing dynamic systems with time average constraints, we propose a unified framework to solve the above problems. The solutions are variants of dynamic $c\mu$ rules, and implement weighted priority policies in every busy period, where weights are determined by past queueing delays in all job classes. Our solutions require limited statistical knowledge of arrivals and service times, and no statistical knowledge is needed in the first problem. Overall, we provide a new set of tools for stochastic optimization and control over multi-class queueing systems with time average constraints.
1101.2483
On Achievability of Gaussian Interference Channel Capacity to within One Bit
cs.IT math.IT
In the earlier version of this paper, it was wrongly claimed that time-sharing is required to achieve the capacity region of the Gaussian interference channel to within one bit, especially at corner points. The flaw in the argument of the earlier version lies in fixing the decoding paradigm for a fixed common/private message splitting encoding strategy. More specifically, the additional constraints (7b) and (7d) in the earlier version arise if we force the common messages to be always decoded at unintended receivers. However, (7b) and (7d) can be eliminated by allowing the decoders to ignore unintended common messages, particularly at corner points of the rate region, without resorting to time-sharing at the transmit side, as suggested in the earlier version. For these reasons, our earlier claim is invalid.
1101.2491
A Review of Research on Devnagari Character Recognition
cs.CV
English Character Recognition (CR) has been extensively studied in the last half century and has progressed to a level sufficient to produce technology-driven applications. But the same is not the case for Indian languages, which are complicated in terms of structure and computation. Rapidly growing computational power may enable the implementation of Indic CR methodologies. Digital document processing is gaining popularity for application to office and library automation, bank and postal services, publishing houses and communication technology. Devnagari, being the national language of India, spoken by more than 500 million people, should be given special attention so that document retrieval and analysis of rich ancient and modern Indian literature can be effectively done. This article is intended to serve as a guide and update for readers working in the Devnagari Optical Character Recognition (DOCR) area. An overview of DOCR systems is presented and the available DOCR techniques are reviewed. The current status of DOCR is discussed and directions for future research are suggested.
1101.2516
Maximum Rate of Unitary-Weight, Single-Symbol Decodable STBCs
cs.IT math.IT
It is well known that the Space-time Block Codes (STBCs) from Complex orthogonal designs (CODs) are single-symbol decodable/symbol-by-symbol decodable (SSD). The weight matrices of the square CODs are all unitary and obtainable from the unitary matrix representations of Clifford Algebras when the number of transmit antennas $n$ is a power of 2. The rate of the square CODs for $n = 2^a$ has been shown to be $\frac{a+1}{2^a}$ complex symbols per channel use. However, SSD codes having unitary-weight matrices need not be CODs, an example being the Minimum-Decoding-Complexity STBCs from Quasi-Orthogonal Designs. In this paper, an achievable upper bound on the rate of any unitary-weight SSD code is derived to be $\frac{a}{2^{a-1}}$ complex symbols per channel use for $2^a$ antennas, and this upper bound is larger than that of the CODs. By way of code construction, the interrelationship between the weight matrices of unitary-weight SSD codes is studied. Also, the coding gain of all unitary-weight SSD codes is proved to be the same for QAM constellations and conditions that are necessary for unitary-weight SSD codes to achieve full transmit diversity and optimum coding gain are presented.
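The two rate expressions in this abstract are easy to compare numerically; the sketch below (function names are my own, purely illustrative) tabulates the square-COD rate (a+1)/2^a against the derived upper bound a/2^(a-1):

```python
# Illustrative sketch (names are hypothetical, not from the paper): compare
# the rate of square CODs with the derived upper bound for unitary-weight
# SSD codes, both in complex symbols per channel use for n = 2^a antennas.
def cod_rate(a: int) -> float:
    """Rate (a+1)/2^a of the square COD for 2^a transmit antennas."""
    return (a + 1) / 2 ** a

def ssd_rate_bound(a: int) -> float:
    """Upper bound a/2^(a-1) on the rate of any unitary-weight SSD code."""
    return a / 2 ** (a - 1)

for a in range(1, 5):
    print(2 ** a, cod_rate(a), ssd_rate_bound(a))
```

For every a >= 2 the bound strictly exceeds the COD rate, consistent with the claim that the upper bound for unitary-weight SSD codes is larger than the rate of the CODs.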
1101.2524
Generalized Silver Codes
cs.IT math.IT
For an $n_t$ transmit, $n_r$ receive antenna system ($n_t \times n_r$ system), a {\it{full-rate}} space time block code (STBC) transmits $n_{min} = min(n_t,n_r)$ complex symbols per channel use. The well known Golden code is an example of a full-rate, full-diversity STBC for 2 transmit antennas. Its ML-decoding complexity is of the order of $M^{2.5}$ for square $M$-QAM. The Silver code for 2 transmit antennas has all the desirable properties of the Golden code except its coding gain, but offers lower ML-decoding complexity of the order of $M^2$. Importantly, the slight loss in coding gain is negligible compared to the advantage it offers in terms of lowering the ML-decoding complexity. For higher number of transmit antennas, the best known codes are the Perfect codes, which are full-rate, full-diversity, information lossless codes (for $n_r \geq n_t$) but have a high ML-decoding complexity of the order of $M^{n_tn_{min}}$ (for $n_r < n_t$, the punctured Perfect codes are considered). In this paper, a scheme to obtain full-rate STBCs for $2^a$ transmit antennas and any $n_r$ with reduced ML-decoding complexity of the order of $M^{n_t(n_{min}-(3/4))-0.5}$, is presented. The codes constructed are also information lossless for $n_r \geq n_t$, like the Perfect codes and allow higher mutual information than the comparable punctured Perfect codes for $n_r < n_t$. These codes are referred to as the {\it generalized Silver codes}, since they enjoy the same desirable properties as the comparable Perfect codes (except possibly the coding gain) with lower ML-decoding complexity, analogous to the Silver-Golden codes for 2 transmit antennas. Simulation results of the symbol error rates for 4 and 8 transmit antennas show that the generalized Silver codes match the punctured Perfect codes in error performance while offering lower ML-decoding complexity.
1101.2533
A Low ML-decoding Complexity, Full-diversity, Full-rate MIMO Precoder
cs.IT math.IT
Precoding for multiple-input, multiple-output (MIMO) antenna systems is considered with perfect channel knowledge available at both the transmitter and the receiver. For 2 transmit antennas and QAM constellations, an approximately optimal (with respect to the minimum Euclidean distance between points in the received signal space) real-valued precoder based on the singular value decomposition (SVD) of the channel is proposed, and it is shown to offer a maximum-likelihood (ML)-decoding complexity of $\mathcal{O}(\sqrt{M})$ for square $M$-QAM. The proposed precoder is obtainable easily for arbitrary QAM constellations, unlike the known complex-valued optimal precoder by Collin et al. for 2 transmit antennas, which is in existence for 4-QAM alone with an ML-decoding complexity of $\mathcal{O}(M\sqrt{M})$ (M=4) and is extremely hard to obtain for larger QAM constellations. The proposed precoder's loss in error performance for 4-QAM in comparison with the complex-valued optimal precoder is only marginal. Our precoding scheme is extended to higher number of transmit antennas on the lines of the E-$d_{min}$ precoder for 4-QAM by Vrigneau et al. which is an extension of the complex-valued optimal precoder for 4-QAM. Compared with the recently proposed $X-$ and $Y-$precoders, the error performance of our precoder is significantly better. It is shown that our precoder provides full-diversity for QAM constellations and this is supported by simulation plots of the word error probability for $2\times2$, $4\times4$ and $8\times8$ systems.
1101.2549
An Estimation of the Shortest and Largest Average Path Length in Graphs of Given Density
physics.soc-ph cs.SI
Many real world networks (graphs) are observed to be 'small worlds', i.e., the average path length among nodes is small. On the other hand, it is somewhat unclear what other average path length values networks can produce. In particular, it is not known what the maximum and the minimum average path length values are. In this paper we provide a lower estimate of the shortest average path length (l) in connected networks, and of the largest possible average path length in networks of given size and density. To the latter end, we construct a special family of graphs and calculate their average path lengths. We also demonstrate the correctness of our estimates by simulations.
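For concreteness, the quantity being bounded can be computed directly on toy graphs (a minimal sketch of my own, not the authors' construction): at a fixed size, a sparse path graph already attains a much larger average path length than a dense complete graph:

```python
from collections import deque

def avg_path_length(adj):
    """Average shortest-path length over all ordered node pairs, via BFS."""
    n = len(adj)
    total, pairs = 0, 0
    for s in range(n):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

# path graph on 5 nodes (minimum density) vs. complete graph (l = 1)
path = {i: [j for j in (i - 1, i + 1) if 0 <= j < 5] for i in range(5)}
complete = {i: [j for j in range(5) if j != i] for i in range(5)}
print(avg_path_length(path), avg_path_length(complete))  # 2.0 1.0
```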
1101.2575
Errata list for "Error Control Coding" by Lin and Costello
cs.IT math.IT
This document lists some errors found in the second edition of Error Control Coding by Shu Lin and Daniel J. Costello, Jr.
1101.2613
A Novel Probabilistic Pruning Approach to Speed Up Similarity Queries in Uncertain Databases
cs.DB
In this paper, we propose a novel, effective and efficient probabilistic pruning criterion for probabilistic similarity queries on uncertain data. Our approach supports a general uncertainty model using continuous probabilistic density functions to describe the (possibly correlated) uncertain attributes of objects. In a nutshell, the problem to be solved is to compute the PDF of the random variable denoted by the probabilistic domination count: Given an uncertain database object B, an uncertain reference object R and a set D of uncertain database objects in a multi-dimensional space, the probabilistic domination count denotes the number of uncertain objects in D that are closer to R than B. This domination count can be used to answer a wide range of probabilistic similarity queries. Specifically, we propose a novel geometric pruning filter and introduce an iterative filter-refinement strategy for conservatively and progressively estimating the probabilistic domination count in an efficient way while preserving correctness according to the possible worlds semantics. In an experimental evaluation, we show that our proposed technique quickly yields tight probability bounds for the probabilistic domination count, even for large uncertain databases.
1101.2678
Parallelization Strategies for Ant Colony Optimisation on GPUs
cs.DC cs.MA
Ant Colony Optimisation (ACO) is an effective population-based meta-heuristic for the solution of a wide variety of problems. As a population-based algorithm, its computation is intrinsically massively parallel, and it is therefore theoretically well-suited for implementation on Graphics Processing Units (GPUs). The ACO algorithm comprises two main stages: Tour construction and Pheromone update. The former has been previously implemented on the GPU, using a task-based parallelism approach. However, up until now, the latter has always been implemented on the CPU. In this paper, we discuss several parallelisation strategies for both stages of the ACO algorithm on the GPU. We propose an alternative data-based parallelism scheme for Tour construction, which fits better on the GPU architecture. We also describe novel GPU programming strategies for the Pheromone update stage. Our results show a total speed-up exceeding 28x for the Tour construction stage, and 20x for Pheromone update, and suggest that ACO is a potentially fruitful area for future research in the GPU domain.
1101.2713
Matched Filtering from Limited Frequency Samples
cs.IT math.IT
In this paper, we study a simple correlation-based strategy for estimating the unknown delay and amplitude of a signal based on a small number of noisy, randomly chosen frequency-domain samples. We model the output of this "compressive matched filter" as a random process whose mean equals the scaled, shifted autocorrelation function of the template signal. Using tools from the theory of empirical processes, we prove that the expected maximum deviation of this process from its mean decreases sharply as the number of measurements increases, and we also derive a probabilistic tail bound on the maximum deviation. Putting all of this together, we bound the minimum number of measurements required to guarantee that the empirical maximum of this random process occurs sufficiently close to the true peak of its mean function. We conclude that for broad classes of signals, this compressive matched filter will successfully estimate the unknown delay (with high probability, and within a prescribed tolerance) using a number of random frequency-domain samples that scales inversely with the signal-to-noise ratio and only logarithmically in the observation bandwidth and the possible range of delays.
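A toy discrete analogue of this compressive matched filter can be sketched as follows (an assumed setup of my own: circular delay, noiseless, no amplitude estimation; the paper's continuous-time model is more general). The delay estimate is the peak of the correlation statistic computed from a few frequency-domain samples:

```python
import cmath
import random

# Toy sketch (assumed discrete setup, not the paper's exact model): estimate
# an unknown circular delay from a small set of randomly chosen
# frequency-domain samples of the received signal.
N = 256                                   # observation length
true_delay = 37
random.seed(0)
template = [random.gauss(0, 1) for _ in range(N)]
received = [template[(n - true_delay) % N] for n in range(N)]

def dft_sample(x, k):
    """One frequency-domain sample X[k] of the length-N sequence x."""
    return sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))

freqs = sorted(set([1, 2] + random.sample(range(N), 38)))  # m << N samples
Y = {k: dft_sample(received, k) for k in freqs}
S = {k: dft_sample(template, k) for k in freqs}

def cmf_statistic(tau):
    """Correlation of the observed samples with a delay-tau template."""
    return abs(sum(Y[k] * S[k].conjugate() *
                   cmath.exp(2j * cmath.pi * k * tau / N) for k in freqs))

estimate = max(range(N), key=cmf_statistic)
print(estimate)
```

In this noiseless sketch the statistic peaks exactly at the true delay, since all terms of the sum align in phase there; the paper's contribution is quantifying how many samples keep the peak in place under noise.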
1101.2719
CSSF MIMO RADAR: Low-Complexity Compressive Sensing Based MIMO Radar That Uses Step Frequency
cs.IT math.IT
A new approach is proposed, namely CSSF MIMO radar, which applies the technique of step frequency (SF) to compressive sensing (CS) based multi-input multi-output (MIMO) radar. The proposed approach enables high resolution range, angle and Doppler estimation, while transmitting narrowband pulses. The problem of joint angle-Doppler-range estimation is first formulated to fit the CS framework, i.e., as an L1 optimization problem. Direct solution of this problem entails high complexity as it employs a basis matrix whose construction requires discretization of the angle-Doppler-range space. Since high resolution requires fine space discretization, the complexity of joint range, angle and Doppler estimation can be prohibitively high. For the case of slowly moving targets, a technique is proposed that achieves significant complexity reduction by successively estimating angle-range and Doppler in a decoupled fashion and by employing initial estimates obtained via matched filtering to further reduce the space that needs to be digitized. Numerical results show that the combination of CS and SF results in a MIMO radar system that has superior resolution and requires far less data as compared to a system that uses a matched filter with SF.
1101.2721
Optimized data sharing in multicell MIMO with finite backhaul capacity
cs.IT math.IT
This paper addresses cooperation in a multicell environment where base stations (BSs) wish to jointly serve multiple users, under a constrained-capacity backhaul. We point out that for finite backhaul capacity a trade-off between sharing user data, which allows for full MIMO cooperation, and not doing so, which reduces the setup to an interference channel but also requires less overhead, emerges. We optimize this trade-off by formulating a rate splitting approach in which non-shared data (private to each transmitter) and shared data are superposed. We derive the corresponding achievable rate region and obtain the optimal beamforming design for both shared and private symbols. We show how the capacity of the backhaul can be used to determine how much of the user data is worth sharing across multiple BSs, particularly depending on how strong the interference is.
1101.2728
Index Coding and Error Correction
cs.IT math.IT
A problem of index coding with side information was first considered by Y. Birk and T. Kol (IEEE INFOCOM, 1998). In the present work, a generalization of the index coding scheme, in which transmitted symbols are subject to errors, is studied. Error-correcting methods for such a scheme, and their parameters, are investigated. In particular, the following question is discussed: given the side information hypergraph of an index coding scheme and the maximal number of erroneous symbols $\delta$, what is the shortest length of a linear index code such that every receiver is able to recover the required information? This question turns out to be a generalization of the problem of finding a shortest-length error-correcting code with a prescribed error-correcting capability in classical coding theory. The Singleton bound and two other bounds, referred to as the $\alpha$-bound and the $\kappa$-bound, for the optimal length of a linear error-correcting index code (ECIC) are established. For large alphabets, a construction based on the concatenation of an optimal index code with an MDS classical code is shown to attain the Singleton bound. For smaller alphabets, however, this construction may not be optimal. A random construction is also analyzed. It yields another inexplicit bound on the length of an optimal linear ECIC. Finally, the decoding of linear ECIC's is discussed. Syndrome decoding is shown to output the exact message if the weight of the error vector is less than or equal to the error-correcting capability of the corresponding ECIC.
1101.2785
Multiplexed Model Predictive Control
cs.SY math.OC
This paper proposes a form of MPC in which the control variables are moved asynchronously. This contrasts with most MIMO control schemes, which assume that all variables are updated simultaneously. MPC outperforms other control strategies through its ability to deal with constraints. This requires on-line optimization, hence computational complexity can become an issue when applying MPC to complex systems with fast response times. The multiplexed MPC scheme described in this paper solves the MPC problem for each subsystem sequentially, and updates subsystem controls as soon as the solution is available, thus distributing the control moves over a complete update cycle. The resulting computational speed-up allows faster response to disturbances, which may result in improved performance, despite finding sub-optimal solutions to the original problem.
1101.2804
Aging in language dynamics
physics.soc-ph cond-mat.stat-mech cs.CL cs.MA
Human languages evolve continuously, and a puzzling problem is how to reconcile the apparent robustness of most of the deep linguistic structures we use with the evidence that they undergo possibly slow, yet ceaseless, changes. Is the state in which we observe languages today closer to what would be a dynamical attractor with statistically stationary properties or rather closer to a non-steady state slowly evolving in time? Here we address this question in the framework of the emergence of shared linguistic categories in a population of individuals interacting through language games. The observed emerging asymptotic categorization, which has been previously tested - with success - against experimental data from human languages, corresponds to a metastable state where global shifts are always possible but progressively more unlikely and the response properties depend on the age of the system. This aging mechanism exhibits striking quantitative analogies to what is observed in the statistical mechanics of glassy systems. We argue that this can be a general scenario in language dynamics where shared linguistic conventions would not emerge as attractors, but rather as metastable states.
1101.2834
Subjective Collaborative Filtering
cs.IR cs.SI
We present an item-based approach for collaborative filtering. We determine a list of recommended items for a user by considering their previous purchases. Additionally, other features of the users, such as page views and search queries, could be considered. In particular we address the problem of efficiently comparing items. Our algorithm can efficiently approximate an estimate of the similarity between two items. As a measure of similarity we use an approximation of the Jaccard similarity that can be computed by constant time operations and one bitwise OR. Moreover we improve the accuracy of the similarity by introducing the concept of user preference for a given product, which takes into account both multiple purchases and purchases of related items. The product of the user preference and the Jaccard measure (or its approximation) is used as a score for deciding whether a given product has to be recommended.
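The flavor of such a constant-time estimator can be sketched with fixed-width bit signatures (a hypothetical variant of my own, not the paper's exact construction): each item's purchaser set is hashed into a machine word, and the Jaccard similarity is estimated from population counts and a single bitwise OR:

```python
# Hypothetical sketch (not the paper's exact estimator): approximate the
# Jaccard similarity of two items' purchaser sets from fixed-width bit
# signatures, using population counts and a single bitwise OR.
def signature(user_ids, bits=64):
    """Hash a set of user ids into a bits-wide bit mask."""
    sig = 0
    for u in user_ids:
        sig |= 1 << (hash(u) % bits)
    return sig

def approx_jaccard(a: int, b: int) -> float:
    """Estimate |A intersect B| / |A union B| from the signatures of A and B."""
    union = bin(a | b).count("1")            # the one bitwise OR
    if union == 0:
        return 0.0
    # |A intersect B| = |A| + |B| - |A union B|, so no bitwise AND is needed
    inter = bin(a).count("1") + bin(b).count("1") - union
    return inter / union

buyers_x = signature({1, 2, 3})
buyers_y = signature({2, 3, 4})
print(approx_jaccard(buyers_x, buyers_y))   # exact Jaccard here is 2/4 = 0.5
```

The bit-signature step is lossy: distinct users can collide in the same bit, so the estimate degrades as the union size approaches the signature width.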
1101.2926
Convergence to consensus in multiagent systems and the lengths of inter-communication intervals
math.OC cs.MA math.DS
A theorem on (partial) convergence to consensus of multiagent systems is presented. It is proven with tools for studying the convergence properties of left-infinite products of row stochastic matrices with positive diagonals. Thus, the system can be seen as a switching linear system in discrete time. It is further shown that the result is strictly more general than the results of Moreau (IEEE Transactions on Automatic Control, vol. 50, no. 2, 2005), although Moreau's results are formulated for generally nonlinear updating maps. This is shown by demonstrating the existence of an appropriate switching linear system which mimics the nonlinear updating maps. Furthermore, an example system is given for which convergence to consensus can be shown by using the theorem. In this system the lengths of the inter-communication intervals in the switching communication topology grow without bound, which makes other theorems inapplicable.
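The underlying mechanism can be illustrated with a toy fixed-topology example (mine, not the paper's switching setting): iterating x <- Ax with a row stochastic A that has a positive diagonal drives all agent states to a common value:

```python
# Toy illustration (not the paper's switching setting): consensus under
# repeated averaging with a fixed row stochastic matrix A (rows sum to 1)
# whose diagonal entries are positive.
A = [[0.50, 0.50, 0.00],
     [0.25, 0.50, 0.25],
     [0.00, 0.50, 0.50]]
x = [0.0, 1.0, 3.0]                       # initial agent states
for _ in range(100):
    x = [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
print(x)  # all three entries converge to the consensus value 1.25
```

The limit 1.25 is the average of the initial states weighted by the left eigenvector of A for eigenvalue 1; the paper's contribution is handling infinite products of different such matrices with ever-longer gaps between communications.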
1101.2937
A Deterministic Polynomial-Time Algorithm for Constructing a Multicast Coding Scheme for Linear Deterministic Relay Networks
cs.IT math.IT
We propose a new way to construct a multicast coding scheme for linear deterministic relay networks. Our construction can be regarded as a generalization of the well-known multicast network coding scheme of Jaggi et al. to linear deterministic relay networks and is based on the notion of flow for a unicast session that was introduced by the authors in earlier work. We present randomized and deterministic polynomial-time versions of our algorithm and show that for a network with $g$ destinations, our deterministic algorithm can achieve the capacity in $\left\lceil \log(g+1)\right\rceil $ uses of the network.
1101.2985
Resequencing: A Method for Conforming to Conventions for Sharing Credits Among Multiple Authors
cs.DL cs.CY cs.IR
Devising an appropriate scheme that assigns the weights to share credits among multiple authors of a paper is a challenging task. This challenge comes from the fact that different types of conventions might be followed across different research disciplines or research groups. In this paper, we argue that, for the purpose of evaluating the quality of research produced by authors, one can resequence either the authors or the weights and apply a weight-assignment policy that the evaluator deems fit for the particular research discipline or research group.
1101.2987
Support vector machines/relevance vector machine for remote sensing classification: A review
cs.CV cs.LG
Kernel-based machine learning algorithms are based on mapping data from the original input feature space to a kernel feature space of higher dimensionality in order to solve a linear problem in that space. Over the last decade, kernel-based classification and regression approaches such as support vector machines have been widely used in remote sensing as well as in various civil engineering applications. In spite of their good performance with different datasets, support vector machines still suffer from shortcomings such as visualization/interpretation of the model, choice of the kernel and kernel-specific parameters, and setting of the regularization parameter. Relevance vector machines are another kernel-based approach that has been explored for classification and regression within the last few years. The advantages of relevance vector machines over support vector machines are the availability of probabilistic predictions, the use of arbitrary kernel functions, and not requiring the setting of the regularization parameter. This paper presents a state-of-the-art review of SVM and RVM in remote sensing and provides some details of their use in other civil engineering applications as well.
1101.3051
Adaptive Variable Degree-k Zero-Trees for Re-Encoding of Perceptually Quantized Wavelet-Packet Transformed Audio and High Quality Speech
cs.IT math.IT
A fast, efficient and scalable algorithm is proposed in this paper for the re-encoding of perceptually quantized wavelet-packet transform (WPT) coefficients of audio and high quality speech; it is called "adaptive variable degree-k zero-trees" (AVDZ). The quantization process is carried out by taking into account some basic perceptual considerations, and achieves good subjective quality with low complexity. The performance of the proposed AVDZ algorithm is compared with two other zero-tree-based schemes: (1) the embedded zero-tree wavelet (EZW) algorithm and (2) set partitioning in hierarchical trees (SPIHT). Since EZW and SPIHT are designed for image compression, some modifications are incorporated in these schemes to better match them to audio signals. It is shown that the proposed modifications can improve their performance by about 15-25%. Furthermore, it is concluded that the proposed AVDZ algorithm outperforms these modified versions in terms of both output average bit-rates and computation times.
1101.3068
Degrees of Freedom Region for an Interference Network with General Message Demands
cs.IT math.IT
We consider a single hop interference network with $K$ transmitters and $J$ receivers, all having $M$ antennas. Each transmitter emits an independent message and each receiver requests an arbitrary subset of the messages. This generalizes the well-known $K$-user $M$-antenna interference channel, where each message is requested by a unique receiver. For our setup, we derive the degrees of freedom (DoF) region. The achievability scheme generalizes the interference alignment schemes proposed by Cadambe and Jafar. In particular, we achieve general points in the DoF region by using multiple base vectors and aligning all interferers at a given receiver to the interferer with the largest DoF. As a byproduct, we obtain the DoF region for the original interference channel. We also discuss extensions of our approach where the same region can be achieved by considering a reduced set of interference alignment constraints, thus reducing the time-expansion duration needed. The DoF region for the considered system depends only on a subset of receivers whose demands meet certain characteristics. The geometric shape of the DoF region is also discussed.
1101.3070
Information and the arrow of time
physics.pop-ph cs.IT math.IT
This paper is a discussion of the relationship between time and information. We argue that the direction of the arrow of time is related to the directivity of the information copying that occurs in Nature.
1101.3085
Simulating Opinion Dynamics in Heterogeneous Communication
cs.SI cs.MA physics.soc-ph
Since the information available is fundamental for our perceptions and opinions, we are interested in understanding the conditions that allow good information to be disseminated. This paper explores opinion dynamics by means of multi-agent based simulations in which agents get informed by different sources of information. The scenario implemented includes three main streams of information acquisition, differing in both the contents and the perceived reliability of the messages spread. Agents' internal opinion is updated either by accessing one of the information sources, namely media and experts, or by exchanging information with one another. They are also endowed with cognitive mechanisms to accept, reject or partially consider the acquired information. We expect that peer-to-peer communication and reliable information sources are able both to reduce biased perceptions and to inhibit information cheating, possibly performed by the media as stated by agenda-setting theory. In the paper, after briefly presenting both the hypotheses and the model, the simulation design is specified and results are discussed with respect to the hypotheses. Some considerations and ideas for future studies conclude the paper.
1101.3098
Quantum Convex Support
math-ph cs.IT math.IT math.MP quant-ph
Convex support, the set of mean values of a set of random variables, is central in information theory and statistics. Equally central in quantum information theory are mean values of a set of observables in a finite-dimensional C*-algebra A, which we call (quantum) convex support. The convex support can be viewed as a projection of the state space of A, and it is a projection of a spectrahedron. Spectrahedra have been increasingly investigated at least since the 1990s boom in semidefinite programming. We recall the geometry of the positive semi-definite cone and of the state space. We write a convex duality for general self-dual convex cones. This restricts to projections of state spaces and connects them to results on spectrahedra. What is really new in this article is an analysis of the face lattice of convex support by mapping this lattice to a lattice of orthogonal projections, using natural isomorphisms. The result encodes the face lattice of the convex support into a set of projections in A and enables the integration of convex geometry with matrix calculus or algebraic techniques.
1101.3122
Digital herders and phase transition in a voting model
physics.soc-ph cond-mat.stat-mech cs.SI
In this paper, we discuss a voting model with two candidates, C_1 and C_2. We consider two types of voters: herders and independents. The voting of independent voters is based on their fundamental values; the voting of herders, on the other hand, is based on the number of votes. Herders always select the majority of the previous $r$ votes, which are visible to them. We call them digital herders. We can accurately calculate the distribution of votes for special cases. When r>=3, we find that a phase transition occurs at the upper limit of t, where t is the discrete time (or number of votes). As the fraction of herders increases, the model features a phase transition beyond which a state where most voters make the correct choice coexists with one where most of them are wrong. On the other hand, when r<3, there is no phase transition. In this case, the herders' performance is the same as that of the independent voters. Finally, we examine the behavior of human beings through simple experiments.
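The update rule described above (herders copying the majority of the last $r$ visible votes, independents voting on their fundamental values) can be sketched as a minimal sequential simulation. The independents' accuracy `p_correct` and all parameter values below are illustrative assumptions, not the paper's calibration:

```python
import random

def simulate_votes(T, p_herder, r, p_correct=0.6, seed=0):
    """Sequential voting with two voter types: a herder (drawn with
    probability p_herder) copies the majority of the previous r visible
    votes; an independent votes for the correct candidate with
    probability p_correct. Returns the final fraction of correct votes."""
    rng = random.Random(seed)
    votes = []  # 1 = correct candidate, 0 = wrong candidate
    for _ in range(T):
        recent = votes[-r:]
        if recent and rng.random() < p_herder:
            ones = sum(recent)  # digital herder: follow the visible majority
            if ones * 2 > len(recent):
                v = 1
            elif ones * 2 < len(recent):
                v = 0
            else:
                v = rng.randint(0, 1)  # break ties at random
        else:
            v = 1 if rng.random() < p_correct else 0  # independent voter
        votes.append(v)
    return sum(votes) / T
```

With no herders the correct-vote fraction stays near `p_correct`; raising `p_herder` makes the sequence lock onto early majorities, the mechanism behind the coexisting good and bad states the abstract describes.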
1101.3124
SafeVchat: Detecting Obscene Content and Misbehaving Users in Online Video Chat Services
cs.CR cs.CV cs.HC
Online video chat services such as Chatroulette, Omegle, and vChatter that randomly match pairs of users in video chat sessions are fast becoming very popular, with over a million users per month in the case of Chatroulette. A key problem encountered in such systems is the presence of flashers and obscene content. This problem is especially acute given the presence of underage minors in such systems. This paper presents SafeVchat, a novel solution to the problem of flasher detection that employs an array of image detection algorithms. A key contribution of the paper concerns how the results of the individual detectors are fused together into an overall decision classifying the user as misbehaving or not, based on Dempster-Shafer Theory. The paper introduces a novel, motion-based skin detection method that achieves significantly higher recall and better precision. The proposed methods have been evaluated over real world data and image traces obtained from Chatroulette.com.
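The fusion step mentioned above rests on Dempster's rule of combination. A minimal stdlib sketch follows, with two hypothetical detector mass functions over the frame {misbehaving, normal}; the detector names and numbers are illustrative assumptions, not values from the paper:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination: multiply masses of intersecting
    focal sets and renormalise by the non-conflicting mass."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting sources")
    k = 1.0 - conflict
    return {s: v / k for s, v in combined.items()}

FLASH = frozenset({"misbehaving"})
OK = frozenset({"normal"})
THETA = FLASH | OK  # full frame: mass here expresses ignorance

# Hypothetical detector outputs (illustrative numbers only).
skin_detector = {FLASH: 0.6, OK: 0.1, THETA: 0.3}
face_detector = {FLASH: 0.2, OK: 0.5, THETA: 0.3}
fused = dempster_combine(skin_detector, face_detector)
```

Unlike simple averaging, the rule lets each detector reserve mass for ignorance, so an uncertain detector barely shifts the fused verdict.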
1101.3149
Competition between Intra-community and Inter-community Synchronization
physics.soc-ph cs.SI nlin.AO
In this paper the effects of external links on the synchronization performance of community networks, especially on the competition between individual communities and the whole network, are studied in detail. The study is organized from two aspects: the number or portion of external links, and the connecting strategy of external links between different communities. It is found that increasing the number of external links will enhance the global synchronizability but degrade the synchronization performance of individual communities up to some critical point. Beyond that point, the individual communities synchronize better and better as part of the whole network, because the community structure is no longer so prominent. Among the various connection strategies, connecting nodes belonging to different communities randomly, rather than connecting nodes with larger degrees, is the most efficient way to enhance global synchronization of the network. However, a preferential connection scheme linking most of the hubs of the communities will allow rather efficient global synchronization while maintaining strong dynamical clustering of the communities. Interestingly, the observations are found to be relevant in a realistic network of the cat cortex. Its synchronization state is just at the critical point, which shows a reasonable combination of segregated function in individual communities and coordination among them. Our work sheds light on principles underlying the emergence of modular architectures in real network systems and provides guidance for the manipulation of synchronization in community networks.
1101.3198
Towards Optimal Schemes for the Half-Duplex Two-Way Relay Channel
cs.IT math.IT
A restricted two-way communication problem in a small fully-connected network is investigated. The network consists of three nodes, all having access to a common channel with a half-duplex constraint. Two nodes want to establish a dialog while the third node can assist in the bi-directional transmission process. All nodes have agreed on a transmission protocol a priori, and the problem is restricted in that the dialog encoders are not allowed to establish cooperation by the use of previous receive signals. The channel is referred to as the restricted half-duplex two-way relay channel. Here the channel is defined and an outer bound on the achievable rates is derived by application of the cut-set theorem. This shows that the problem consists of six parts. We propose a transmission protocol which takes into account all possible transmit-receive configurations of the network and performs partial decoding of the messages at the relay as well as sequential decoding at the dialog nodes. By the use of random codes and suboptimal decoders, two inner bounds on the achievable rates are derived. Restricting to the suggested strategies and fixed input distributions, it is argued that optimal transmission schemes with respect to various reasonable objectives can be determined at low complexity. In comparison to two-way communication without a relay, simulations for an AWGN channel model then show that it is possible to simultaneously increase the communication rates of both dialog messages and to outperform relaying strategies that ignore an available direct path.
1101.3214
Generalized Belief Propagation for the Noiseless Capacity and Information Rates of Run-Length Limited Constraints
cs.IT math.IT stat.CO
The performance of the generalized belief propagation algorithm for computing the noiseless capacity and mutual information rates of finite-size two-dimensional and three-dimensional run-length limited constraints is investigated. For each constraint, a method is proposed to choose the basic regions and to construct the region graph. Simulation results for the capacity of different constraints as a function of the size of the channel and mutual information rates of different constraints as a function of signal-to-noise ratio are reported. Convergence to the Shannon capacity is also discussed.
1101.3220
Decision-Feedback Differential Detection in Impulse-Radio Ultra-Wideband Systems
cs.IT math.IT
In this paper we present decision-feedback differential detection (DF-DD) schemes for autocorrelation-based detection in impulse-radio ultra-wideband (IR-UWB) systems, a signaling scheme regarded as a promising candidate in particular for low-complexity wireless sensor networks. To this end, we first discuss ideal noncoherent sequence estimation and approximations thereof based on block-wise multiple-symbol differential detection (MSDD) and the Viterbi algorithm (VA) from the perspective of tree-search/trellis decoding. Exploiting relations well known from tree-search decoding, we are able to derive the novel DF-DD schemes. A comprehensive comparison with respect to performance and complexity of the presented schemes in a typical IR-UWB scenario reveals---along with novel insights into techniques for complexity reduction of the sphere decoder applied for MSDD---that sorted DF-DD achieves close-to-optimum performance at very low, and in particular constant, receiver complexity.
1101.3285
A note on the multiple unicast capacity of directed acyclic networks
cs.IT math.IT
We consider the multiple unicast problem under network coding over directed acyclic networks with unit capacity edges. There is a set of n source-terminal (s_i - t_i) pairs that wish to communicate at unit rate over this network. The connectivity between the s_i - t_i pairs is quantified by means of a connectivity level vector, [k_1 k_2 ... k_n] such that there exist k_i edge-disjoint paths between s_i and t_i. Our main aim is to characterize the feasibility of achieving this for different values of n and [k_1 ... k_n]. For 3 unicast connections (n = 3), we characterize several achievable and unachievable values of the connectivity 3-tuple. In addition, in this work, we have found certain network topologies, and capacity characterizations that are useful in understanding the case of general n.
1101.3291
Node Classification in Social Networks
cs.SI physics.soc-ph
When dealing with large graphs, such as those that arise in the context of online social networks, a subset of nodes may be labeled. These labels can indicate demographic values, interest, beliefs or other characteristics of the nodes (users). A core problem is to use this information to extend the labeling so that all nodes are assigned a label (or labels). In this chapter, we survey classification techniques that have been proposed for this problem. We consider two broad categories: methods based on iterative application of traditional classifiers using graph information as features, and methods which propagate the existing labels via random walks. We adopt a common perspective on these methods to highlight the similarities between different approaches within and across the two categories. We also describe some extensions and related directions to the central problem of node classification.
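The second category surveyed, propagating the existing labels through the graph, can be illustrated with a minimal iterative label-propagation sketch; the toy graph, the label names, and the fixed iteration count are invented for illustration only:

```python
def propagate_labels(adj, seeds, iters=20):
    """Iterative label propagation: each unlabeled node repeatedly takes the
    normalised sum of its neighbours' label distributions; seeds stay fixed."""
    labels = sorted(set(seeds.values()))
    score = {v: {l: 0.0 for l in labels} for v in adj}
    for v, l in seeds.items():
        score[v][l] = 1.0
    for _ in range(iters):
        new = {}
        for v in adj:
            if v in seeds:
                new[v] = score[v]  # seed labels are clamped
                continue
            agg = {l: sum(score[u][l] for u in adj[v]) for l in labels}
            total = sum(agg.values())
            new[v] = {l: (agg[l] / total if total else 0.0) for l in labels}
        score = new
    return {v: max(score[v], key=score[v].get) for v in adj}

# Toy path graph 0-1-2-3-4 with labeled endpoints.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
result = propagate_labels(adj, {0: "A", 4: "B"})
```

Nodes adjacent to a seed inherit its label; the middle of the path ends up evenly split, illustrating why real systems combine propagation with node features.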
1101.3341
Collaborative Filtering without Explicit Feedbacks for Digital Recorders
cs.IR cs.HC
Recommendation is usually reduced to a prediction problem over the function $r(u_a, e_i)$ that returns the expected rating of element $e_i$ for user $u_a$. In the IPTV domain, we deal with an environment where the definitions of all the parameters involved in this function (i.e., user profiles, feedback ratings and elements) are controversial. To our knowledge, this paper represents the first attempt to run collaborative filtering algorithms without inner assumptions: we start our analysis from an unstructured set of recordings, then perform a data pre-processing phase in order to extract useful information. Hence, we experiment with a real Digital Video Recorder system where an electronic program guide (EPG) has not been provided to the user for selecting event timings and where explicit feedback was not collected.
1101.3348
Structured sublinear compressive sensing via belief propagation
cs.IT math.IT
Compressive sensing (CS) is a sampling technique designed for reducing the complexity of sparse data acquisition. One of the major obstacles for practical deployment of CS techniques is the signal reconstruction time and the high storage cost of random sensing matrices. We propose a new structured compressive sensing scheme, based on codes of graphs, that allows for a joint design of structured sensing matrices and logarithmic-complexity reconstruction algorithms. The compressive sensing matrices can be shown to offer asymptotically optimal performance when used in combination with Orthogonal Matching Pursuit (OMP) methods. For more elaborate greedy reconstruction schemes, we propose a new family of list decoding belief propagation algorithms, as well as reinforced- and multiple-basis belief propagation algorithms. Our simulation results indicate that reinforced BP CS schemes offer very good complexity-performance tradeoffs for very sparse signal vectors.
1101.3352
Dimensional behaviour of entropy and information
math.FA cs.IT math.IT math.PR
We develop an information-theoretic perspective on some questions in convex geometry, providing for instance a new equipartition property for log-concave probability measures, some Gaussian comparison results for log-concave measures, an entropic formulation of the hyperplane conjecture, and a new reverse entropy power inequality for log-concave measures analogous to V. Milman's reverse Brunn-Minkowski inequality.
1101.3354
Introduction to the Bag of Features Paradigm for Image Classification and Retrieval
cs.CV cs.IR
The past decade has seen the growing popularity of Bag of Features (BoF) approaches to many computer vision tasks, including image classification, video search, robot localization, and texture recognition. Part of the appeal is simplicity. BoF methods are based on orderless collections of quantized local image descriptors; they discard spatial information and are therefore conceptually and computationally simpler than many alternative methods. Despite this, or perhaps because of this, BoF-based systems have set new performance standards on popular image classification benchmarks and have achieved scalability breakthroughs in image retrieval. This paper presents an introduction to BoF image representations, describes critical design choices, and surveys the BoF literature. Emphasis is placed on recent techniques that mitigate quantization errors, improve feature detection, and speed up image retrieval. At the same time, unresolved issues and fundamental challenges are raised. Among the unresolved issues are determining the best techniques for sampling images, describing local image features, and evaluating system performance. Among the more fundamental challenges are how and whether BoF methods can contribute to localizing objects in complex images, or to associating high-level semantics with natural images. This survey should be useful both for introducing new investigators to the field and for providing existing researchers with a consolidated reference to related work.
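The core BoF step, quantizing local descriptors against a codebook and keeping only an orderless histogram, can be sketched as follows. The tiny 2-D codebook and descriptors are illustrative assumptions; a real system would train the codebook (e.g. by k-means) over SIFT-like features:

```python
def bof_histogram(descriptors, codebook):
    """Assign each local descriptor to its nearest codeword (squared
    Euclidean distance) and return an L1-normalised histogram."""
    counts = [0] * len(codebook)
    for d in descriptors:
        nearest = min(range(len(codebook)),
                      key=lambda i: sum((a - b) ** 2
                                        for a, b in zip(d, codebook[i])))
        counts[nearest] += 1
    n = sum(counts) or 1
    return [c / n for c in counts]

# Toy 2-D "descriptors" and a 2-word codebook (illustrative only).
codebook = [(0.0, 0.0), (10.0, 10.0)]
descriptors = [(1.0, 0.0), (0.0, 1.0), (9.0, 10.0)]
hist = bof_histogram(descriptors, codebook)
```

The resulting fixed-length histogram is what makes BoF images comparable with ordinary vector-space classifiers, and hard assignment at this step is exactly where the quantization errors discussed in the survey arise.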
1101.3381
Efficient Independence-Based MAP Approach for Robust Markov Networks Structure Discovery
cs.AI cs.CV
This work introduces the IB-score, a family of independence-based score functions for robust learning of Markov network independence structures. Markov networks are a widely used graphical representation of probability distributions, with many applications in several fields of science. The main advantage of the IB-score is the possibility of computing it without estimating the numerical parameters, an NP-hard problem usually solved through an approximate, data-intensive, iterative optimization. We derive a formal expression for the IB-score from first principles, mainly maximum a posteriori and conditional independence properties, and exemplify several instantiations of it, resulting in two novel algorithms for structure learning: IBMAP-HC and IBMAP-TS. Experimental results over both artificial and real-world data show these algorithms achieve important error reductions in the learnt structures when compared with the state-of-the-art independence-based structure learning algorithm GSMN, achieving increments of more than 50% in the number of independencies they encode correctly, and in some cases, learning correctly over 90% of the edges that GSMN learnt incorrectly. Theoretical analysis shows IBMAP-HC proceeds efficiently, achieving these improvements in time polynomial in the number of random variables in the domain.
1101.3391
Automated Image Processing for the Analysis of DNA Repair Dynamics
cs.CV q-bio.QM
The efficient repair of cellular DNA is essential for the maintenance and inheritance of genomic information. In order to cope with the high frequency of spontaneous and induced DNA damage, a multitude of repair mechanisms have evolved. These are enabled by a wide range of protein factors specifically recognizing different types of lesions and finally restoring the normal DNA sequence. This work focuses on the repair factor XPC (xeroderma pigmentosum complementation group C), which identifies bulky DNA lesions and initiates their removal via the nucleotide excision repair pathway. The binding of XPC to damaged DNA can be visualized in living cells by following the accumulation of a fluorescent XPC fusion at lesions induced by laser microirradiation in a fluorescence microscope. In this work, an automated image processing pipeline is presented which makes it possible to identify and quantify the accumulation reaction without any user interaction. The image processing pipeline comprises a preprocessing stage where the image stack data are filtered and the nucleus of interest is segmented. Afterwards, the images are registered to each other in order to account for movements of the cell, and then a bounding box enclosing the XPC-specific signal is automatically determined. Finally, the time-dependent relocation of XPC is evaluated by analyzing the intensity change within this box. Comparison of the automated processing results with the manual evaluation yields qualitatively similar results. However, the automated analysis provides more accurate, reproducible data with smaller standard errors. The image processing pipeline presented in this work allows for an efficient analysis of large amounts of experimental data with no user interaction required.
1101.3393
Traffic properties for stochastic routings on scale-free networks
physics.soc-ph cond-mat.stat-mech cs.SI
For realistic scale-free networks, we investigate the traffic properties of stochastic routing inspired by a zero-range process known in statistical physics. By parameters $\alpha$ and $\delta$, this model controls degree-dependent hopping of packets and forwarding of packets with higher performance at more busy nodes. Through a theoretical analysis and numerical simulations, we derive the condition for the concentration of packets at a few hubs. In particular, we show that the optimal $\alpha$ and $\delta$ are involved in the trade-off between a detour path for $\alpha < 0$ and long wait at hubs for $\alpha > 0$; In the low-performance regime at a small $\delta$, the wandering path for $\alpha < 0$ better reduces the mean travel time of a packet with high reachability. Although, in the high-performance regime at a large $\delta$, the difference between $\alpha > 0$ and $\alpha < 0$ is small, neither the wandering long path with short wait trapped at nodes ($\alpha = -1$), nor the short hopping path with long wait trapped at hubs ($\alpha = 1$) is advisable. A uniformly random walk ($\alpha = 0$) yields slightly better performance. We also discuss the congestion phenomena in a more complicated situation with packet generation at each time step.
1101.3398
New Quadriphase Sequence Families with Larger Linear Span and Size
cs.IT math.IT
In this paper, new families of quadriphase sequences with larger linear span and size are proposed and studied. In particular, a new family of quadriphase sequences of period $2^n-1$ for a positive integer $n=em$ with an even positive factor $m$ is presented, and the cross-correlation function among these sequences is explicitly calculated. Another new family of quadriphase sequences of period $2(2^n-1)$ for a positive integer $n=em$ with an even positive factor $m$ is also presented, together with a detailed analysis of the cross-correlation function of the proposed sequences.
1101.3400
Behavioral On-Line Advertising
cs.IR
We present a new algorithm for behavioral targeting of banner advertisements. We record different user actions such as clicks, search queries and page views. We use the information collected on the user to estimate in real time the probability of a click on a banner. A banner is displayed if it either has the highest probability of being clicked or is the one that generates the highest average profit.
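The two selection criteria mentioned, highest click probability versus highest expected profit, reduce to a one-line argmax each. A minimal sketch follows; the banner records, field names, and numeric values are assumed inputs from an upstream click-probability model, not part of the paper:

```python
def choose_banner(banners, by_profit=False):
    """Select the banner with the highest estimated click probability,
    or, if by_profit is set, the highest expected profit per impression
    (click probability times profit per click)."""
    if by_profit:
        return max(banners, key=lambda b: b["p_click"] * b["profit"])
    return max(banners, key=lambda b: b["p_click"])

# Hypothetical candidates: one clickable low-value banner, one rare
# high-value banner.
banners = [
    {"id": 1, "p_click": 0.10, "profit": 1.0},
    {"id": 2, "p_click": 0.02, "profit": 10.0},
]
```

Note the two criteria can disagree: here banner 1 is most clickable (0.10 vs 0.02), while banner 2 has the larger expected profit (0.20 vs 0.10 per impression).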
1101.3444
Control of Wireless Networks with Secrecy
cs.IT cs.DC math.IT
We consider the problem of cross-layer resource allocation in time-varying cellular wireless networks, and incorporate information theoretic secrecy as a Quality of Service constraint. Specifically, each node in the network injects two types of traffic, private and open, at rates chosen in order to maximize a global utility function, subject to network stability and secrecy constraints. The secrecy constraint enforces an arbitrarily low mutual information leakage from the source to every node in the network, except for the sink node. We first obtain the achievable rate region for the problem for single and multi-user systems assuming that the nodes have full CSI of their neighbors. Then, we provide a joint flow control, scheduling and private encoding scheme, which does not rely on the knowledge of the prior distribution of the gain of any channel. We prove that our scheme achieves a utility, arbitrarily close to the maximum achievable utility. Numerical experiments are performed to verify the analytical results, and to show the efficacy of the dynamic control algorithm.
1101.3453
Algebraic Foundations for Information Theoretical, Probabilistic and Guessability measures of Information Flow
cs.IT math.IT
Several mathematical ideas have been investigated for Quantitative Information Flow. Information theory, probability and guessability are the main ideas in most proposals. They aim to quantify how much information is leaked, how likely it is to guess the secret, and how long it takes to guess the secret, respectively. In this paper, we show how the Lattice of Information provides a valuable foundation for all these approaches; not only does it provide an elegant algebraic framework for these ideas, it also allows us to investigate their relationship. In particular we will use this lattice to prove some results establishing order relation correspondences between the different quantitative approaches. The implications of these results with respect to recent work in the community are also investigated. While this work concentrates on the foundational importance of the Lattice of Information, its practical relevance has recently been proven, notably with the quantitative analysis of Linux kernel vulnerabilities. Overall, we believe these works make the case for establishing the Lattice of Information as one of the main reference structures for Quantitative Information Flow.
1101.3457
Capacity of DNA Data Embedding Under Substitution Mutations
cs.IT math.IT q-bio.PE q-bio.QM
A number of methods have been proposed over the last decade for encoding information using deoxyribonucleic acid (DNA), giving rise to the emerging area of DNA data embedding. Since a DNA sequence is conceptually equivalent to a sequence of quaternary symbols (bases), DNA data embedding (diversely called DNA watermarking or DNA steganography) can be seen as a digital communications problem where channel errors are tantamount to mutations of DNA bases. Depending on the use of coding or noncoding DNA hosts, which, respectively, denote DNA segments that can or cannot be translated into proteins, DNA data embedding is essentially a problem of communications with or without side information at the encoder. In this paper the Shannon capacity of DNA data embedding is obtained for the case in which DNA sequences are subject to substitution mutations modelled using the Kimura model from molecular evolution studies. Inferences are also drawn with respect to the biological implications of some of the results presented.
1101.3465
The "psychological map of the brain", as a personal information card (file), - a project for the student of the 21st century
cs.AI
We suggest a procedure that is relevant both to electronic performance and to human psychology, so that creative logic and respect for human nature are in good agreement. The idea is to create an electronic card containing basic information about a person's psychological behavior, in order to make it possible to quickly decide on the suitability of one person for another. This "psychological electronics" approach could be tested via student projects.
1101.3500
Computation for Supremal Simulation-Based Controllable and Strong Observable Subautomata
cs.SY
The bisimulation relation has been successfully applied in computer science and control theory. In our previous work, simulation-based controllability and simulation-based observability were proposed, under which the existence of a bisimilarity supervisor is guaranteed. However, a given specification automaton may not satisfy these conditions, and a natural question is how to compute a maximally permissive subspecification. This paper aims to answer this question and investigates the computation of the supremal simulation-based controllable and strong observable subautomata with respect to given specifications using lattice theory. In order to achieve the supremal solution, three monotone operators, namely the simulation operator, the controllable operator and the strong observable operator, are proposed on the established complete lattice. Then, inequalities based on these operators are formulated, whose solutions are the simulation-based controllable and strong observable sets. In particular, a sufficient condition is presented to guarantee the existence of the supremal simulation-based controllable and strong observable subautomata. Furthermore, an algorithm is proposed to compute such subautomata.
1101.3574
A Game-Theoretic View of the Interference Channel: Impact of Coordination and Bargaining
cs.IT math.IT
This work considers coordination and bargaining between two selfish users over a Gaussian interference channel. The usual information theoretic approach assumes full cooperation among users for codebook and rate selection. In the scenario investigated here, each user is willing to coordinate its actions only when an incentive exists and benefits of cooperation are fairly allocated. The users are first allowed to negotiate for the use of a simple Han-Kobayashi type scheme with fixed power split. Conditions for which users have incentives to cooperate are identified. Then, two different approaches are used to solve the associated bargaining problem. First, the Nash Bargaining Solution (NBS) is used as a tool to get fair information rates and the operating point is obtained as a result of an optimization problem. Next, a dynamic alternating-offer bargaining game (AOBG) from bargaining theory is introduced to model the bargaining process and the rates resulting from negotiation are characterized. The relationship between the NBS and the equilibrium outcome of the AOBG is studied and factors that may affect the bargaining outcome are discussed. Finally, under certain high signal-to-noise ratio regimes, the bargaining problem for the generalized degrees of freedom is studied.
1101.3578
Infinity in computable probability
math.LO cs.CL cs.LO
Does combining a finite collection of objects infinitely many times guarantee the construction of a particular object? Here we use recursive function theory to examine the popular scenario of an infinite collection of typing monkeys reproducing the works of Shakespeare. Our main result is to show that it is possible to assign typing probabilities in such a way that while it is impossible that no monkey reproduces Shakespeare's works, the probability of any finite collection of monkeys doing so is arbitrarily small. We extend our results to target-free writing, and end with a broad discussion and pointers to future work.
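The tension the abstract exploits can be illustrated numerically with a simple classical probability assignment, taking p_i = 1/(i+1) for the i-th monkey. This is only a stdlib illustration of the almost-sure versus finite-collection distinction, not the paper's recursion-theoretic construction:

```python
from fractions import Fraction

def prob_none(start, count):
    """Probability that none of monkeys start..start+count-1 reproduces
    the text, when monkey i independently succeeds with probability
    1/(i+1). The product telescopes to start/(start+count)."""
    p = Fraction(1)
    for i in range(start, start + count):
        p *= 1 - Fraction(1, i + 1)  # factor i/(i+1)
    return p

# Over the first N monkeys the no-success probability is 1/(N+1) -> 0,
# so with infinitely many monkeys some success is almost sure ...
assert prob_none(1, 10**4) == Fraction(1, 10**4 + 1)
# ... yet a fixed-size block of monkeys far down the sequence succeeds
# with probability count/(start+count), which can be made tiny.
tail_success = 1 - prob_none(10**6, 10)
```

Exact rational arithmetic (`Fraction`) avoids the underflow a floating-point product of ten thousand near-one factors would otherwise risk.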
1101.3594
Classification under Data Contamination with Application to Remote Sensing Image Mis-registration
stat.ME cs.LG stat.ML
This work is motivated by the problem of image mis-registration in remote sensing, and we are interested in determining the resulting loss in the accuracy of pattern classification. A statistical formulation is given where we propose to use data contamination to model and understand the phenomenon of image mis-registration. This model is widely applicable to many other types of errors as well, for example, measurement errors and gross errors. The impact of data contamination on classification is studied under a statistical learning theoretical framework. A closed-form asymptotic bound is established for the resulting loss in classification accuracy, which is less than $\epsilon/(1-\epsilon)$ for a contamination amount of $\epsilon$. Our bound is sharper than similar bounds in the domain adaptation literature and, unlike such bounds, it applies to classifiers with an infinite Vapnik-Chervonenkis (VC) dimension. Extensive simulations have been conducted on both synthetic and real datasets under various types of data contamination, including label flipping, feature swapping and the replacement of feature values with data generated from a random source such as a Gaussian or Cauchy distribution. Our simulation results show that the bound we derive is fairly tight.
1101.3620
Clustering Protein Sequences Given the Approximation Stability of the Min-Sum Objective Function
cs.DS cs.CE
We study the problem of efficiently clustering protein sequences in a limited information setting. We assume that we do not know the distances between the sequences in advance, and must query them during the execution of the algorithm. Our goal is to find an accurate clustering using few queries. We model the problem as a point set $S$ with an unknown metric $d$ on $S$, and assume that we have access to \emph{one versus all} distance queries that given a point $s \in S$ return the distances between $s$ and all other points. Our one versus all query represents an efficient sequence database search program such as BLAST, which compares an input sequence to an entire data set. Given a natural assumption about the approximation stability of the \emph{min-sum} objective function for clustering, we design a provably accurate clustering algorithm that uses few one versus all queries. In our empirical study we show that our method compares favorably to well-established clustering algorithms when we compare computationally derived clusterings to gold-standard manual classifications.
1101.3684
Bio-inspired Methods for Dynamic Network Analysis in Science Mapping
cs.DL cs.SI physics.soc-ph
We apply bio-inspired methods to the analysis of different dynamic bibliometric networks (linking papers by citation, authors, and keywords, respectively). Biological species are clusters of individuals defined by widely different criteria, and from the biological perspective it is natural (1) to use different categorizations on the same entities and (2) to compare the different categorizations and analyze the dissimilarities, especially as they change over time. We employ the same methodology for comparisons of bibliometric classifications. We constructed them as analogs of three species concepts: cladistic or lineage based, similarity based, and "biological species" (based on co-reproductive ability). We use the Rand and Jaccard indexes to compare classifications in different time intervals. The experiment aims to address the classic problem of science mapping: to what extent the various techniques based on different bibliometric indicators, such as citations, keywords or authors, are able to detect convergent structures in the literature, that is, to identify coherent specialities or research directions and their dynamics.
1101.3719
Trip Length Distribution Under Multiplicative Spatial Models of Supply and Demand: Theory and Sensitivity Analysis
physics.data-an cs.SI physics.soc-ph
We propose new probabilistic models for the spatial distribution of supply and demand and use the models to determine how the trip length distribution is affected by the relative shortage or excess of supply, the spatial clustering of supply and demand, and the degree of attraction or repulsion between supply and demand at different spatial scales. The models have a multiplicative structure and in certain cases possess scale invariance properties. The demand model is validated using detailed population data from metropolitan US regions. The trip length distribution, evaluated under destination choice models of the intervening opportunities type, has quasi-analytic form. We take advantage of this feature to study the sensitivity of the trip length distribution to parameters of the demand, supply and destination choice models. We find that trip length is affected in important but different ways by the spatial density of potential destinations, the dependence among their attractiveness levels, and the correlation between supply and demand at different spatial scales.
1101.3724
Throughput-Delay Analysis of Random Linear Network Coding for Wireless Broadcasting
cs.IT math.IT
In an unreliable single-hop broadcast network setting, we investigate the throughput and decoding-delay performance of random linear network coding as a function of the coding window size and the network size. Our model consists of a source transmitting packets of a single flow to a set of $n$ users over independent erasure channels. The source performs random linear network coding (RLNC) over $k$ (coding window size) packets and broadcasts them to the users. We note that the broadcast throughput of RLNC must vanish with increasing $n$, for any fixed $k.$ Hence, in contrast to other works in the literature, we investigate how the coding window size $k$ must scale for increasing $n$. Our analysis reveals that the coding window size of $\Theta(\ln(n))$ represents a phase transition rate, below which the throughput converges to zero, and above which it converges to the broadcast capacity. Further, we characterize the asymptotic distribution of decoding delay and provide approximate expressions for the mean and variance of decoding delay for the scaling regime of $k=\omega(\ln(n)).$ These asymptotic expressions reveal the impact of channel correlations on the throughput and delay performance of RLNC. We also show how our analysis can be extended to other rateless block coding schemes such as the LT codes. Finally, we comment on the extension of our results to the cases of dependent channels across users and asymmetric channel model.
1101.3735
Formalising the multidimensional nature of social networks
physics.soc-ph cs.SI q-bio.PE
Individuals interact with conspecifics in a number of behavioural contexts or dimensions. Here, we formalise this by considering a social network between n individuals interacting in b behavioural dimensions as an n x n x b multidimensional object. In addition, we propose that the topology of this object is driven by individual needs to reduce uncertainty about the outcomes of interactions in one or more dimensions. The proposal grounds social network dynamics and evolution in individual selection processes and allows us to define the uncertainty of the social network as the joint entropy of its constituent interaction networks. In support of these propositions we use simulations and natural 'knock-outs' in a free-ranging baboon troop to show (i) that such an object can display a small-world state and (ii) that, as predicted, changes in interactions after social perturbations lead to a more certain social network, in which the outcomes of interactions are easier for members to predict. This new formalisation of social networks provides a framework within which to predict network dynamics and evolution under the assumption that they are driven by individuals seeking to reduce the uncertainty of their social environment.
1101.3755
Transductive-Inductive Cluster Approximation Via Multivariate Chebyshev Inequality
cs.CV cs.AI
Approximating an adequate number of clusters in multidimensional data is an open area of research, given a level of compromise made on the quality of acceptable results. The manuscript addresses the issue by formulating a transductive-inductive learning algorithm which uses the multivariate Chebyshev inequality. Considering the clustering problem in imaging, theoretical proofs for a particular level of compromise are derived to show the convergence of the reconstruction error to a finite value with increasing (a) number of unseen examples and (b) number of clusters, respectively. Upper bounds for these error rates are also proved. Non-parametric estimates of these errors from a random sample of sequences empirically point to a stable number of clusters. Lastly, the generalized algorithm can be applied to multidimensional data sets from different fields.
1101.3761
Tagging with DHARMA, a DHT-based Approach for Resource Mapping through Approximation
cs.DC cs.SI
We introduce collaborative tagging and faceted search on structured P2P systems. Since a trivial, brute-force mapping of an entire folksonomy over a DHT-based system may reduce scalability, we propose an approximated graph maintenance approach. Evaluations on real data from Last.fm show that such strategies reduce vocabulary noise (i.e., representation overfitting phenomena) and hotspot issues.
1101.3774
Secret Key Agreement Over Multipath Channels Exploiting a Variable-Directional Antenna
cs.IT math.IT
We develop an approach to the key distribution protocol (KDP) proposed recently by T. Aono et al. A more general mathematical model based on the use of a Variable-Directional Antenna (VDA) under the condition of multipath wave propagation is proposed. Statistical characteristics of the VDA were investigated by simulation, which allows us to specify model parameters. The security of the considered KDP is estimated in terms of Shannon information leaked to an eavesdropper depending on the mutual locations of the legal users and the eavesdropper. Antenna diversity is proposed as a means to enhance the KDP security. In order to provide better agreement of the shared keys, we investigate the use of error-correcting codes.
1101.3824
Series Expansion for Interference in Wireless Networks
cs.IT math.IT math.PR stat.AP
The spatial correlations in transmitter node locations introduced by common multiple access protocols make the analysis of interference, outage, and other related metrics in a wireless network extremely difficult. Most works therefore assume that nodes are distributed either as a Poisson point process (PPP) or a grid, and utilize the independence properties of the PPP (or the regular structure of the grid) to analyze interference, outage and other metrics. But the independence of node locations makes the PPP a dubious model for nontrivial MACs which intentionally introduce correlations, e.g. spatial separation, while the grid is too idealized to model real networks. In this paper, we introduce a new technique based on the factorial moment expansion of functionals of point processes to analyze functions of the interference, in particular the outage probability. We provide a Taylor-series type expansion of functions of the interference, wherein increasing the number of terms in the series provides a better approximation at the cost of increased computational complexity. Various examples illustrate how this new approach can be used to find the outage probability in both Poisson and non-Poisson wireless networks.
1101.3838
Performance Bounds for Sparse Parametric Covariance Estimation in Gaussian Models
cs.IT math.IT math.ST stat.TH
We consider estimation of a sparse parameter vector that determines the covariance matrix of a Gaussian random vector via a sparse expansion into known "basis matrices". Using the theory of reproducing kernel Hilbert spaces, we derive lower bounds on the variance of estimators with a given mean function. This includes unbiased estimation as a special case. We also present a numerical comparison of our lower bounds with the variance of two standard estimators (hard-thresholding estimator and maximum likelihood estimator).
1101.3885
An Upper Bound for Signal Transmission Error Probability in Hyperbolic Spaces
cs.IT math.IT
We introduce and discuss the concept of Gaussian probability density function (pdf) for the n-dimensional hyperbolic space which has been proposed as an environment for coding and decoding signals. An upper bound for the error probability of signal transmission associated with the hyperbolic distance is established. The pdf and the upper bound were developed using Poincare models for the hyperbolic spaces.
1101.3929
Characteristic Generators and Dualization for Tail-Biting Trellises
cs.IT math.IT
This paper focuses on dualizing tail-biting trellises, particularly KV-trellises. These trellises are based on characteristic generators, as introduced by Koetter/Vardy (2003), and may be regarded as a natural generalization of minimal conventional trellises, even though they are not necessarily minimal. Two dualization techniques will be investigated: the local dualization, introduced by Forney (2001) for general normal graphs, and a linear algebra based dualization tailored to the specific class of tail-biting BCJR-trellises, introduced by Nori/Shankar (2006). It turns out that, in general, the BCJR-dual is a subtrellis of the local dual, while for KV-trellises these two coincide. Furthermore, making use of both the BCJR-construction and the local dualization, it will be shown that for each complete set of characteristic generators of a code there exists a complete set of characteristic generators of the dual code such that their resulting KV-trellises are dual to each other if paired suitably. This proves a stronger version of a conjecture formulated by Koetter/Vardy.
1101.3973
On cooperative patrolling: optimal trajectories, complexity analysis, and approximation algorithms
math.CO cs.DS cs.SY math.OC
The subject of this work is the patrolling of an environment with the aid of a team of autonomous agents. We consider both the design of open-loop trajectories with optimal properties, and of distributed control laws converging to optimal trajectories. As performance criteria, the refresh time and the latency are considered, i.e., respectively, the time gap between any two visits of the same region, and the time necessary to inform every agent about an event that occurred in the environment. We associate a graph with the environment, and we study separately the cases of chain, tree, and cyclic graphs. For the case of a chain graph, we first describe a minimum refresh time and latency team trajectory, and we propose a polynomial time algorithm for its computation. Then, we describe a distributed procedure that steers the robots toward an optimal trajectory. For the case of a tree graph, a polynomial time algorithm is developed for the minimum refresh time problem, under the technical assumption of a constant number of robots involved in the patrolling task. Finally, we show that the design of a minimum refresh time trajectory for a cyclic graph is NP-hard, and we develop a constant factor approximation algorithm.
1101.3979
Selection of network coding nodes for minimal playback delay in streaming overlays
cs.MM cs.IT cs.NI math.IT
Network coding makes it possible to deploy distributed packet delivery algorithms that locally adapt to the network availability in media streaming applications. However, it may also increase delay and computational complexity if it is not implemented efficiently. We address here the effective placement of nodes that implement randomized network coding in overlay networks, so that the goodput is kept high while the delay for decoding stays small in streaming applications. We first estimate the decoding delay at each client, which depends on the innovative rate in the network. This estimation allows us to identify the nodes that have to perform coding for a reduced decoding delay. We then propose two iterative algorithms for selecting the nodes that should perform network coding. The first algorithm relies on the knowledge of the full network statistics. The second algorithm uses only local network statistics at each node. Simulation results show that large performance gains can be achieved with the selection of only a few network coding nodes. Moreover, the second algorithm performs very closely to the central estimation strategy, which demonstrates that the network coding nodes can be selected efficiently in a distributed manner. Our scheme shows large gains in terms of achieved throughput, delay and video quality in realistic overlay networks when compared to methods that employ traditional streaming strategies as well as random network node selection algorithms.
1101.4003
Dyna-H: a heuristic planning reinforcement learning algorithm applied to role-playing-game strategy decision systems
cs.AI cs.LG cs.SY math.OC
In a Role-Playing Game, finding optimal trajectories is one of the most important tasks. In fact, the strategy decision system becomes a key component of a game engine. Determining the way in which decisions are taken (online, batch or simulated) and the resources consumed in decision making (e.g. execution time, memory) will influence, to a major degree, the game performance. When classical search algorithms such as A* can be used, they are the very first option. Nevertheless, such methods rely on precise and complete models of the search space, and there are many interesting scenarios where their application is not possible. Then, model-free methods for sequential decision making under uncertainty are the best choice. In this paper, we propose a heuristic planning strategy to incorporate the ability of heuristic search in path-finding into a Dyna agent. The proposed Dyna-H algorithm, as A* does, selects branches more likely to produce outcomes than other branches. Besides, it has the advantage of being a model-free online reinforcement learning algorithm. The proposal was evaluated against the one-step Q-Learning and Dyna-Q algorithms, obtaining excellent experimental results: Dyna-H significantly outperforms both methods in all experiments. We also suggest a functional analogy between the proposed sampling-from-worst-trajectories heuristic and the role of dreams (e.g. nightmares) in human behavior.
1101.4028
Who is the best player ever? A complex network analysis of the history of professional tennis
physics.soc-ph cs.SI physics.pop-ph
We consider all matches played by professional tennis players between 1968 and 2010, and, on the basis of this data set, construct a directed and weighted network of contacts. The resulting graph shows complex features, typical of many real networked systems studied in the literature. We develop a diffusion algorithm and apply it to the tennis contact network in order to rank professional players. Jimmy Connors is identified as the best player in the history of tennis according to our ranking procedure. We perform a complete analysis by determining the best players on specific playing surfaces as well as the best ones in each of the years covered by the data set. The results of our technique are compared to those of two other well established methods. In general, we observe that our ranking method performs better: it has a higher predictive power and does not require the arbitrary introduction of external criteria for the correct assessment of the quality of players. The present work provides novel evidence of the utility of the tools and methods of network theory in real applications.
1101.4036
Secure Multiplex Coding with a Common Message
cs.IT cs.CR math.IT
We determine the capacity region of the secure multiplex coding with a common message, and evaluate the mutual information and the equivocation rate of a collection of secret messages to the second receiver (eavesdropper), which were not evaluated by Yamamoto et al.
1101.4075
PMI-based MIMO OFDM PHY Integrated Key Exchange (P-MOPI) Scheme
cs.IT math.IT
In the literature, J.-P. Cheng et al. have proposed the MIMO-OFDM PHY integrated (MOPI) scheme for achieving physical-layer security in practice without using any cryptographic ciphers. The MOPI scheme uses channel sounding and physical-layer network coding (PNC) to prevent eavesdroppers from learning the channel state information (CSI). Nevertheless, due to the use of multiple antennas for PNC at the transmitter and beamforming at the receiver, it is not possible to have spatial multiplexing or to use space-time codes in our previous MOPI scheme. In this paper, we propose a variant of the MOPI scheme, called P-MOPI, that works with a cryptographic cipher and utilizes the precoding matrix index (PMI) as an efficient key-exchange mechanism. With channel sounding, the PMI is known only to the transmitter and the legal receiver. The shared key can then be used, e.g., as the seed to generate pseudo-random bit sequences for securing subsequent transmissions using a stream cipher. By applying the same techniques at independent subcarriers of the OFDM system, the P-MOPI scheme easily allows two communicating parties to exchange over 100 secret bits. As a result, not only secure communication but also the MIMO gain can be guaranteed by using the P-MOPI scheme.
1101.4100
Reconciling Compressive Sampling Systems for Spectrally-sparse Continuous-time Signals
cs.IT math.IT
The Random Demodulator (RD) and the Modulated Wideband Converter (MWC) are two recently proposed compressed sensing (CS) techniques for the acquisition of continuous-time spectrally-sparse signals. They extend the standard CS paradigm from sampling discrete, finite dimensional signals to sampling continuous and possibly infinite dimensional ones, and thus establish the ability to capture these signals at sub-Nyquist sampling rates. The RD and the MWC have remarkably similar structures (similar block diagrams), but their reconstruction algorithms and signal models strongly differ. To date, few results exist that compare these systems, and owing to the potential impacts they could have on spectral estimation in applications like electromagnetic scanning and cognitive radio, we more fully investigate their relationship in this paper. We show that the RD and the MWC are both based on the general concept of random filtering, but employ significantly different sampling functions. We also investigate system sensitivities (or robustness) to sparse signal model assumptions. Lastly, we show that "block convolution" is a fundamental aspect of the MWC, allowing it to successfully sample and reconstruct block-sparse (multiband) signals. Based on this concept, we propose a new acquisition system for continuous-time signals whose amplitudes are block sparse. The paper includes detailed time and frequency domain analyses of the RD and the MWC that differ, sometimes substantially, from published results.
1101.4101
Context Capture in Software Development
cs.SE cs.AI
The context of a software developer is hard to define and capture, as it represents a complex network of elements across different dimensions that are not limited to the work developed in an IDE. We propose the definition of a software developer context model that takes into account all the dimensions that characterize the work environment of the developer. We are especially focused on what the software developer context encompasses at the project level and how it can be captured. The experimental work done so far shows that useful context information can be extracted from project management tools. The extraction, analysis and availability of this context information can be used to enrich the work environment of the developer with additional knowledge to support her/his work.
1101.4103
Evolutionary Mechanics: new engineering principles for the emergence of flexibility in a dynamic and uncertain world
nlin.AO cs.AI
Engineered systems are designed to deftly operate under predetermined conditions yet are notoriously fragile when unexpected perturbations arise. In contrast, biological systems operate in a highly flexible manner: they quickly learn adequate responses to novel conditions and evolve new routines/traits to remain competitive under persistent environmental change. A recent theory on the origins of biological flexibility has proposed that degeneracy - the existence of multi-functional components with partially overlapping functions - is a primary determinant of the robustness and adaptability found in evolved systems. While degeneracy's contribution to biological flexibility is well documented, there has been little investigation of degeneracy design principles for achieving flexibility in systems engineering. Indeed, the conditions that can lead to degeneracy are routinely eliminated in engineering design. With the planning of transportation vehicle fleets taken as a case study, this paper reports evidence that degeneracy improves the robustness and adaptability of a simulated fleet without incurring costs to efficiency. We find degeneracy dramatically increases the robustness of a fleet to unpredicted changes in the environment while it also facilitates robustness to anticipated variations. When we allow a fleet's architecture to be adapted in response to environmental change, we find degeneracy can be selectively acquired, leading to faster rates of design adaptation and ultimately to better designs. Given the range of conditions where favorable short-term and long-term performance outcomes are observed, we propose that degeneracy design principles fundamentally alter the propensity for adaptation and may be useful within several engineering and planning contexts.
1101.4170
The Role of Normalization in the Belief Propagation Algorithm
cs.LG
An important class of problems in statistical physics and computer science can be expressed as the computation of marginal probabilities over a Markov Random Field. The belief propagation algorithm, which is an exact procedure to compute these marginals when the underlying graph is a tree, has gained its popularity as an efficient way to approximate them in the more general case. In this paper, we focus on an aspect of the algorithm that has not received much attention in the literature: the effect of the normalization of the messages. We show in particular that, for a large class of normalization strategies, it is possible to focus only on belief convergence. Following this, we express the necessary and sufficient conditions for local stability of a fixed point in terms of the graph structure and the belief values at the fixed point. We also make explicit some connections between the normalization constants and the underlying Bethe free energy.
1101.4204
Measuring Performance of Continuous-Time Stochastic Processes using Timed Automata
cs.SY cs.FL
We propose deterministic timed automata (DTA) as a model-independent language for specifying performance and dependability measures over continuous-time stochastic processes. Technically, these measures are defined as limit frequencies of locations (control states) of a DTA that observes computations of a given stochastic process. Then, we study the properties of DTA measures over semi-Markov processes in greater detail. We show that DTA measures over semi-Markov processes are well-defined with probability one, and there are only finitely many values that can be assumed by these measures with positive probability. We also give an algorithm which approximates these values and the associated probabilities up to an arbitrarily small given precision. Thus, we obtain a general and effective framework for analysing DTA measures over semi-Markov processes.
1101.4207
Blind Channel Estimation for Amplify-and-Forward Two-Way Relay Networks Employing M-PSK Modulation
cs.IT math.IT stat.OT
We consider the problem of channel estimation for amplify-and-forward (AF) two-way relay networks (TWRNs). Most works on this problem focus on pilot-based approaches which impose a significant training overhead that reduces the spectral efficiency of the system. To avoid such losses, this work proposes blind channel estimation algorithms for AF TWRNs that employ constant-modulus (CM) signaling. Our main algorithm is based on the deterministic maximum likelihood (DML) approach. Assuming M-PSK modulation, we show that the resulting estimator is consistent and approaches the true channel with high probability at high SNR for modulation orders higher than 2. For BPSK, however, the DML performs poorly and we propose an alternative algorithm that performs much better by taking into account the BPSK structure of the data symbols. For comparative purposes, we also investigate the Gaussian maximum-likelihood (GML) approach which treats the data symbols as Gaussian-distributed nuisance parameters. We derive the Cramer-Rao bound and use Monte-Carlo simulations to investigate the mean squared error (MSE) performance of the proposed algorithms. We also compare the symbol-error rate (SER) performance of the DML algorithm with that of the training-based least-squares (LS) algorithm and demonstrate that the DML offers a superior tradeoff between accuracy and spectral efficiency.
1101.4211
Throughput-optimal Scheduling in Multi-hop Wireless Networks without Per-flow Information
cs.NI cs.IT cs.PF math.IT
In this paper, we consider the problem of link scheduling in multi-hop wireless networks under general interference constraints. Our goal is to design scheduling schemes that do not use per-flow or per-destination information, maintain a single data queue for each link, and exploit only local information, while guaranteeing throughput optimality. Although the celebrated back-pressure algorithm maximizes throughput, it requires per-flow or per-destination information. It is usually difficult to obtain and maintain this type of information, especially in large networks, where there are numerous flows. Also, the back-pressure algorithm maintains a complex data structure at each node, keeps exchanging queue length information among neighboring nodes, and commonly results in poor delay performance. In this paper, we propose scheduling schemes that can circumvent these drawbacks and guarantee throughput optimality. These schemes use either the readily available hop-count information or only the local information for each link. We rigorously analyze the performance of the proposed schemes using fluid limit techniques via an inductive argument and show that they are throughput-optimal. We also conduct simulations to validate our theoretical results in various settings, and show that the proposed schemes can substantially improve the delay performance in most scenarios.
1101.4227
Statistical Mechanics of Semi-Supervised Clustering in Sparse Graphs
physics.data-an cond-mat.dis-nn cond-mat.stat-mech cs.LG
We theoretically study semi-supervised clustering in sparse graphs in the presence of pairwise constraints on the cluster assignments of nodes. We focus on bi-cluster graphs, and study the impact of semi-supervision for varying constraint density and overlap between the clusters. Recent results for unsupervised clustering in sparse graphs indicate that there is a critical ratio of within-cluster and between-cluster connectivities below which clusters cannot be recovered with better than random accuracy. The goal of this paper is to examine the impact of pairwise constraints on the clustering accuracy. Our results suggest that the addition of constraints does not provide automatic improvement over the unsupervised case. When the density of the constraints is sufficiently small, their only impact is to shift the detection threshold while preserving the criticality. Conversely, if the density of (hard) constraints is above the percolation threshold, the criticality is suppressed and the detection threshold disappears.
1101.4270
A Comparative Agglomerative Hierarchical Clustering Method to Cluster Implemented Course
cs.DB
There are many clustering methods, such as the hierarchical clustering method. Most approaches to the clustering of variables encountered in the literature are of hierarchical type, and the great majority of these hierarchical approaches are of agglomerative nature. The agglomerative hierarchical approach to clustering starts with each observation as its own cluster and then continually groups the observations into increasingly larger groups. Higher Learning Institutions (HLIs) provide training to introduce final-year students to the real working environment. This research uses the Euclidean distance with single linkage and complete linkage. MATLAB and HCE 3.5 software are used to process the data and cluster the courses implemented during industrial training. This study indicates that different methods create different numbers of clusters.
1101.4279
Low-Complexity Detection/Equalization in Large-Dimension MIMO-ISI Channels Using Graphical Models
cs.IT math.IT
In this paper, we deal with low-complexity near-optimal detection/equalization in large-dimension multiple-input multiple-output inter-symbol interference (MIMO-ISI) channels using message passing on graphical models. A key contribution in the paper is the demonstration that near-optimal performance in MIMO-ISI channels with large dimensions can be achieved at low complexities through simple yet effective simplifications/approximations, although the graphical models that represent MIMO-ISI channels are fully/densely connected (loopy graphs). These include 1) use of Markov Random Field (MRF) based graphical model with pairwise interaction, in conjunction with {\em message/belief damping}, and 2) use of Factor Graph (FG) based graphical model with {\em Gaussian approximation of interference} (GAI). The per-symbol complexities are $O(K^2n_t^2)$ and $O(Kn_t)$ for the MRF and the FG with GAI approaches, respectively, where $K$ and $n_t$ denote the number of channel uses per frame, and number of transmit antennas, respectively. These low-complexities are quite attractive for large dimensions, i.e., for large $Kn_t$. From a performance perspective, these algorithms are even more interesting in large-dimensions since they achieve increasingly closer to optimum detection performance for increasing $Kn_t$. Also, we show that these message passing algorithms can be used in an iterative manner with local neighborhood search algorithms to improve the reliability/performance of $M$-QAM symbol detection.
1101.4285
Degree and connectivity of the Internet's scale-free topology
cs.NI cs.SI physics.soc-ph
In this paper we theoretically and empirically study the degree and connectivity of the Internet's scale-free topology at the autonomous system (AS) level. The basic features of the scale-free network influence the normalization constant of the degree distribution p(k). We develop a mathematical model of the Internet's scale-free topology. From this model we derive formulas for the average degree, the ratios of the kmin-degree (minimum degree) and kmax-degree (maximum degree) nodes, and the fraction of the degrees (or links) held by the richer (best-connected) nodes. We find that the average degree is larger for smaller power-law exponent {\lambda} and for larger minimum or maximum degree. The ratio of kmin-degree nodes is larger for larger {\lambda} and smaller kmin or kmax; the ratio of kmax-degree nodes is larger for smaller {\lambda} and kmax, or larger kmin. The richer nodes hold most of the total degrees of the AS-level Internet topology. In addition, we show that the ratio of kmin-degree nodes and the rate of increase of the average degree decay as power laws in kmin, that the ratio of kmax-degree nodes decays as a power law in kmax, and that the fraction of the degrees held by the richest 27% of nodes is about 73% (the '73/27 rule'). Finally, using empirical data extracted from BGP, we calculate the average degree, the ratios, and the fraction with our method and with other methods, and find that our method is rigorous and effective for the AS-level Internet topology.
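The quantities named in this abstract can be illustrated numerically under a simple assumption of ours: a discrete power law p(k) = C k^(-{\lambda}) truncated to [kmin, kmax]. This sketch is not the paper's code, and the exact model details (discrete support, truncation) are our own choices.

```python
def scale_free_stats(lam, kmin, kmax, top=0.27):
    """Average degree, fraction of kmin-degree nodes, and the degree share
    of the top `top` best-connected nodes, for p(k) = C * k**-lam on
    k = kmin..kmax (a hypothetical discrete truncated power law)."""
    ks = range(kmin, kmax + 1)
    C = 1.0 / sum(k ** -lam for k in ks)       # normalization constant of p(k)
    p = {k: C * k ** -lam for k in ks}
    avg = sum(k * p[k] for k in ks)            # average degree <k>
    # Accumulate node fraction from the highest degrees downward until
    # the top `top` fraction of nodes is reached; sum their degree mass.
    frac_nodes, frac_deg = 0.0, 0.0
    for k in sorted(ks, reverse=True):
        take = min(p[k], top - frac_nodes)
        frac_deg += take * k
        frac_nodes += take
        if frac_nodes >= top:
            break
    return avg, p[kmin], frac_deg / avg

avg_deg, p_kmin, rich_share = scale_free_stats(2.1, 1, 1000)
print(round(avg_deg, 2), round(p_kmin, 2), round(rich_share, 2))
```

With an exponent near 2.1 the richest 27% of nodes hold well over half of the total degree, in the spirit of the abstract's '73/27 rule'.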
1101.4301
Diffusion framework for geometric and photometric data fusion in non-rigid shape analysis
cs.CV
In this paper, we explore the use of the diffusion geometry framework for the fusion of geometric and photometric information in local and global shape descriptors. Our construction is based on the definition of a diffusion process on the shape manifold embedded into a high-dimensional space where the embedding coordinates represent the photometric information. Experimental results show that such data fusion is useful in coping with different challenges of shape analysis where pure geometric and pure photometric methods fail.
1101.4313
A weak spectral condition for the controllability of the bilinear Schr\"odinger equation with application to the control of a rotating planar molecule
math.OC cs.SY math.AP
In this paper we prove an approximate controllability result for the bilinear Schr\"odinger equation. This result requires less restrictive non-resonance hypotheses on the spectrum of the uncontrolled Schr\"odinger operator than those present in the literature. The control operator is not required to be bounded and we are able to extend the controllability result to the density matrices. The proof is based on fine controllability properties of the finite dimensional Galerkin approximations and allows us to obtain estimates for the $L^{1}$ norm of the control. The general controllability result is applied to the problem of controlling the rotation of a bipolar rigid molecule confined on a plane by means of two orthogonal external fields.
1101.4335
Peak Reduction and Clipping Mitigation by Compressive Sensing
cs.IT math.IT math.ST stat.TH
This work establishes the design, analysis, and fine-tuning of a Peak-to-Average-Power-Ratio (PAPR) reducing system, based on compressed sensing at the receiver to recover a peak-reducing sparse clipper applied to an OFDM signal at the transmitter. By exploiting the sparsity of the OFDM signal in the time domain relative to a pre-defined clipping threshold, the method depends on partially observing the frequency content of extremely simple sparse clippers to recover the locations, magnitudes, and phases of the clipped coefficients of the peak-reduced signal. We claim that in the absence of optimization algorithms at the transmitter that confine the frequency support of clippers to a predefined set of reserved tones, no other tone-reservation method can reliably recover the original OFDM signal with such low complexity. Afterwards we focus on designing different clipping signals that can embed a priori information regarding the support and phase of the peak-reducing signal to the receiver, followed by modified compressive sensing techniques for enhanced recovery. This includes data-based weighted $\ell_1$ minimization for enhanced support recovery and phase-augmentation for homogeneous clippers, followed by Bayesian techniques. We show that using such techniques for a typical OFDM signal of 256 subcarriers and 20% reserved tones, the PAPR can be reduced by approximately 4.5 dB with a significant increase in capacity compared to a system which uses all its tones for data transmission and clips to such levels. The design is hence appealing from both capacity and PAPR reduction aspects.
1101.4343
Fundamental Tradeoffs on Green Wireless Networks
cs.IT math.IT
Traditional design of mobile wireless networks mainly focuses on ubiquitous access and large capacity. However, as energy saving and environmental protection become a global demand and inevitable trend, wireless researchers and engineers need to shift their focus to energy-efficiency oriented design, that is, green radio. In this paper, we propose a framework for green radio research and integrate the fundamental issues that are currently scattered. The skeleton of the framework consists of four fundamental tradeoffs: deployment efficiency - energy efficiency tradeoff, spectrum efficiency - energy efficiency tradeoff, bandwidth - power tradeoff, and delay - power tradeoff. With the help of the four fundamental tradeoffs, we demonstrate that key network performance/cost indicators are all strung together.
1101.4351
Building a Chaotic Proved Neural Network
cs.AI cs.CR math.DS math.GN
Chaotic neural networks have received a great deal of attention in recent years. In this paper we establish a precise correspondence between the so-called chaotic iterations and a particular class of artificial neural networks: global recurrent multi-layer perceptrons. We show formally that it is possible to make these iterations behave chaotically, as defined by Devaney, and thus we obtain the first neural networks proven chaotic. Several neural networks with different architectures are trained to exhibit chaotic behavior.
1101.4356
Meaning Negotiation as Inference
cs.AI
Meaning negotiation (MN) is the general process with which agents reach an agreement about the meaning of a set of terms. Artificial Intelligence scholars have dealt with the problem of MN by means of argumentation schemes, belief merging and information fusion operators, and ontology alignment, but the proposed approaches depend upon the number of participants. In this paper, we give a general model of MN for an arbitrary number of agents, in which each participant discusses with the others her viewpoint by exhibiting it in an actual set of constraints on the meaning of the negotiated terms. We call this presentation of individual viewpoints an angle. The agents do not aim at forming a common viewpoint but, instead, at agreeing about an acceptable common angle. We analyze separately the process of MN by two agents (\emph{bilateral} or \emph{pairwise} MN) and by more than two agents (\emph{multiparty} MN), and we use game theoretic models to understand how the process develops in both cases: the models are Bargaining Game for bilateral MN and English Auction for multiparty MN. We formalize the process of reaching such an agreement by giving a deduction system that comprises rules that are consistent and adequate for representing MN.
1101.4372
Order Optimal Information Spreading Using Algebraic Gossip
cs.IT cs.DC cs.NI math.IT
In this paper we study gossip based information spreading with bounded message sizes. We use algebraic gossip to disseminate $k$ distinct messages to all $n$ nodes in a network. For arbitrary networks we provide a new upper bound for uniform algebraic gossip of $O((k+\log n + D)\Delta)$ rounds with high probability, where $D$ and $\Delta$ are the diameter and the maximum degree in the network, respectively. For many topologies and selections of $k$ this bound improves previous results, in particular, for graphs with a constant maximum degree it implies that uniform gossip is \emph{order optimal} and the stopping time is $\Theta(k + D)$. To eliminate the factor of $\Delta$ from the upper bound we propose a non-uniform gossip protocol, TAG, which is based on algebraic gossip and an arbitrary spanning tree protocol $\S$. The stopping time of TAG is $O(k+\log n +d(\S)+t(\S))$, where $t(\S)$ is the stopping time of the spanning tree protocol, and $d(\S)$ is the diameter of the spanning tree. We provide two general cases in which this bound leads to an order optimal protocol. The first is for $k=\Omega(n)$, where, using a simple gossip broadcast protocol that creates a spanning tree in at most linear time, we show that TAG finishes after $\Theta(n)$ rounds for any graph. The second uses a sophisticated, recent gossip protocol to build a fast spanning tree on graphs with large weak conductance. In turn, this leads to the optimality of TAG on these graphs for $k=\Omega(\mathrm{polylog}(n))$. The technique used in our proofs relies on queuing theory, which is an interesting approach that can be useful in future gossip analysis.
1101.4373
Statistical Multiresolution Dantzig Estimation in Imaging: Fundamental Concepts and Algorithmic Framework
stat.AP cs.CV cs.SY math.OC stat.CO
In this paper we are concerned with fully automatic and locally adaptive estimation of functions in a "signal + noise"-model where the regression function may additionally be blurred by a linear operator, e.g. by a convolution. To this end, we introduce a general class of statistical multiresolution estimators and develop an algorithmic framework for computing those. By this we mean estimators that are defined as solutions of convex optimization problems with supremum-type constraints. We employ a combination of the alternating direction method of multipliers with Dykstra's algorithm for computing orthogonal projections onto intersections of convex sets and prove numerical convergence. The capability of the proposed method is illustrated by various examples from imaging and signal detection.
1101.4378
Cycles of cooperation and defection in imperfect learning
physics.soc-ph cs.SI nlin.AO
When people play a repeated game they usually try to anticipate their opponents' moves based on past observations, and then decide what action to take next. Behavioural economics studies the mechanisms by which strategic decisions are taken in these adaptive learning processes. We here investigate a model of learning the iterated prisoner's dilemma game. Players have the choice between three strategies, always defect (ALLD), always cooperate (ALLC) and tit-for-tat (TFT). The only strict Nash equilibrium in this situation is ALLD. When players learn to play this game convergence to the equilibrium is not guaranteed, for example we find cooperative behaviour if players discount observations in the distant past. When agents use small samples of observed moves to estimate their opponent's strategy the learning process is stochastic, and sustained oscillations between cooperation and defection can emerge. These cycles are similar to those found in stochastic evolutionary processes, but the origin of the noise sustaining the oscillations is different and lies in the imperfect sampling of the opponent's strategy. Based on a systematic expansion technique, we are able to predict the properties of these learning cycles, providing an analytical tool with which the outcome of more general stochastic adaptation processes can be characterised.
1101.4388
Reproducing Kernel Banach Spaces with the l1 Norm
stat.ML cs.LG math.FA
Targeting at sparse learning, we construct Banach spaces B of functions on an input space X with the properties that (1) B possesses an l1 norm in the sense that it is isometrically isomorphic to the Banach space of integrable functions on X with respect to the counting measure; (2) point evaluations are continuous linear functionals on B and are representable through a bilinear form with a kernel function; (3) regularized learning schemes on B satisfy the linear representer theorem. Examples of kernel functions admissible for the construction of such spaces are given.
1101.4431
Parameter Optimization of Multi-Agent Formations based on LQR Design
cs.SY cs.MA
In this paper we study the optimal formation control of multiple agents whose interaction parameters are adjusted upon a cost function consisting of both the control energy and the geometrical performance. By optimizing the interaction parameters and by the linear quadratic regulation (LQR) controllers, the upper bound of the cost function is minimized. For systems with homogeneous agents interconnected over sparse graphs, distributed controllers are proposed that inherit the same underlying graph as the one among agents. For the more general case, a relaxed optimization problem is considered so as to eliminate the nonlinear constraints. Using the subgradient method, interaction parameters among agents are optimized under the constraint of a sparse graph, and the resulting optimum of the cost function improves on the case where agents interact only through the control channel. Numerical examples are provided to validate the effectiveness of the method and to illustrate the geometrical performance of the system.
1101.4435
Solutions for the MIMO Gaussian Wiretap Channel with a Cooperative Jammer
cs.IT math.IT
We study the Gaussian MIMO wiretap channel with a transmitter, a legitimate receiver, an eavesdropper and an external helper, each equipped with multiple antennas. The transmitter sends confidential messages to its intended receiver, while the helper transmits jamming signals independent of the source message to confuse the eavesdropper. The jamming signal is assumed to be treated as noise at both the intended receiver and the eavesdropper. We obtain a closed-form expression for the structure of the artificial noise covariance matrix that guarantees no decrease in the secrecy capacity of the wiretap channel. We also describe how to find specific realizations of this covariance matrix expression that provide good secrecy rate performance, even when there is no non-trivial null space between the helper and the intended receiver. Unlike prior work, our approach considers the general MIMO case, and is not restricted to SISO or MISO scenarios.
1101.4439
Reproducing Kernel Banach Spaces with the l1 Norm II: Error Analysis for Regularized Least Square Regression
stat.ML cs.LG math.FA
A typical approach in estimating the learning rate of a regularized learning scheme is to bound the approximation error by the sum of the sampling error, the hypothesis error and the regularization error. Using a reproducing kernel space that satisfies the linear representer theorem brings the advantage of discarding the hypothesis error from the sum automatically. Following this direction, we illustrate how reproducing kernel Banach spaces with the l1 norm can be applied to improve the learning rate estimate of l1-regularization in machine learning.
1101.4445
Spectrum Management for Cognitive Radio based on Genetics Algorithm
cs.NE
Spectrum scarcity is one of the major challenges the world currently faces. Efficient use of the existing licensed spectrum is becoming increasingly critical as the demand for radio spectrum grows. Various studies show that the licensed spectrum is utilized inefficiently; it has also been shown that, most of the time, primary users do not use more than 70% of the licensed frequency band. Many researchers are seeking techniques that efficiently utilize this under-utilized licensed spectrum. One approach is the use of "Cognitive Radio", which allows the radio to learn from its environment by changing certain parameters; based on this knowledge, the radio can dynamically exploit the spectrum holes in the licensed band. This paper focuses on the performance of a spectrum allocation technique based on the popular meta-heuristic Genetic Algorithm, and analyzes the performance of this technique using MATLAB.
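A GA-based channel assignment of the kind this abstract evaluates can be sketched as follows. The chromosome encoding (one channel per secondary user), the fitness terms, and the occupied-channel and interference sets are illustrative assumptions of ours, not the paper's actual model or MATLAB code.

```python
import random

random.seed(0)

N_USERS, N_CHANNELS = 6, 4
OCCUPIED = {2}                        # channels held by primary users (hypothetical)
INTERFERE = [(0, 1), (1, 2), (3, 4)]  # user pairs that may not share a channel

def fitness(chrom):
    """Reward avoiding primary-user channels, penalize pairwise interference."""
    score = sum(1 for c in chrom if c not in OCCUPIED)
    score -= sum(2 for a, b in INTERFERE if chrom[a] == chrom[b])
    return score

def evolve(pop_size=30, gens=50, pmut=0.1):
    pop = [[random.randrange(N_CHANNELS) for _ in range(N_USERS)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # keep the fitter half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, N_USERS)  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < pmut:          # point mutation
                child[random.randrange(N_USERS)] = random.randrange(N_CHANNELS)
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

The maximum fitness here is 6 (every user off the occupied channel, no interfering pair sharing one); the elitist GA typically reaches or nearly reaches it within a few dozen generations.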
1101.4450
Adaptive Submodular Optimization under Matroid Constraints
cs.DS cs.AI
Many important problems in discrete optimization require maximization of a monotonic submodular function subject to matroid constraints. For these problems, a simple greedy algorithm is guaranteed to obtain near-optimal solutions. In this article, we extend this classic result to a general class of adaptive optimization problems under partial observability, where each choice can depend on observations resulting from past choices. Specifically, we prove that a natural adaptive greedy algorithm provides a $1/(p+1)$ approximation for the problem of maximizing an adaptive monotone submodular function subject to $p$ matroid constraints, and more generally over arbitrary $p$-independence systems. We illustrate the usefulness of our result on a complex adaptive match-making application.
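The classic (non-adaptive) special case of the greedy rule described above can be sketched for a monotone submodular coverage objective under one partition matroid, i.e. $p = 1$, where the abstract's $1/(p+1)$ guarantee specializes to $1/2$. The ground set, coverage function, and partition below are toy assumptions, not from the paper.

```python
def greedy_matroid(ground, f, independent):
    """Greedy maximization: repeatedly add the element with the largest
    positive marginal gain whose addition keeps the set independent."""
    S = set()
    while True:
        best, gain = None, 0
        for e in ground - S:
            if not independent(S | {e}):
                continue
            g = f(S | {e}) - f(S)   # marginal gain of e given S
            if g > gain:
                best, gain = e, g
        if best is None:            # no feasible element improves f
            break
        S.add(best)
    return S

# Toy instance: pick sets to cover elements, at most one set per group.
SETS = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6}, "d": {6}}
GROUPS = {"a": 0, "b": 0, "c": 1, "d": 1}   # partition matroid constraint

def cover(S):
    return len(set().union(*(SETS[e] for e in S))) if S else 0

def indep(S):
    groups = [GROUPS[e] for e in S]
    return all(groups.count(g) <= 1 for g in set(groups))

sel = greedy_matroid(set(SETS), cover, indep)
print(sorted(sel), cover(sel))
# → ['a', 'c'] 6
```

The adaptive setting of the paper extends this loop so each choice may condition on observations revealed by earlier choices; the selection rule (largest feasible expected marginal gain) has the same shape.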