Columns: id (string, 9–16 chars) · title (string, 4–278 chars) · categories (string, 5–104 chars) · abstract (string, 6–4.09k chars)
1301.5006
Blind Adaptive Algorithms for Decision Feedback DS-CDMA Receivers in Multipath Channels
cs.IT math.IT
In this work we examine blind adaptive and iterative decision feedback (DF) receivers for direct sequence code division multiple access (DS-CDMA) systems in frequency selective channels. Code-constrained minimum variance (CMV) and constant modulus (CCM) design criteria for DF receivers based on constrained optimization techniques are investigated for scenarios subject to multipath. Computationally efficient blind adaptive stochastic gradient (SG) and recursive least squares (RLS) algorithms are developed for estimating the parameters of DF detectors along with successive, parallel and iterative DF structures. A novel successive parallel arbitrated DF scheme is presented and combined with iterative techniques for use with cascaded DF stages in order to mitigate the deleterious effects of error propagation. Simulation results for an uplink scenario assess the algorithms, compare the blind adaptive DF detectors against linear receivers, and evaluate the effects of error propagation of the new cancellation techniques against previously reported approaches.
1301.5011
Adaptive Space-Time Decision Feedback Neural Detectors with Data Selection for High-Data Rate Users in DS-CDMA Systems
cs.IT math.IT
A space-time adaptive decision feedback (DF) receiver using recurrent neural networks (RNN) is proposed for joint equalization and interference suppression in direct-sequence code-division-multiple-access (DS-CDMA) systems equipped with antenna arrays. The proposed receiver structure employs dynamically driven RNNs in the feedforward section for equalization and multi-access interference suppression and a finite impulse response (FIR) linear filter in the feedback section for performing interference cancellation. A data selective gradient algorithm, based upon the set-membership design framework, is proposed for the estimation of the coefficients of RNN structures and is applied to the estimation of the parameters of the proposed neural receiver structure. Simulation results show that the proposed techniques achieve significant performance gains over existing schemes.
1301.5022
A formalization of re-identification in terms of compatible probabilities
cs.CR cs.AI cs.IT math.IT
Re-identification algorithms are used in data privacy to measure disclosure risk. They model the situation in which an adversary attacks a published database by linking the adversary's information with the database. In this paper we formalize this type of algorithm in terms of true probabilities and compatible belief functions. The purpose of this work is to exclude from the class of re-identification algorithms those algorithms that do not satisfy a minimum requirement.
1301.5033
On the Distribution of MIMO Mutual Information: An In-Depth Painlev\'{e} Based Characterization
cs.IT math.IT
This paper builds upon our recent work which computed the moment generating function of the MIMO mutual information exactly in terms of a Painlev\'{e} V differential equation. By exploiting this key analytical tool, we provide an in-depth characterization of the mutual information distribution for sufficiently large (but finite) antenna numbers. In particular, we derive systematic closed-form expansions for the high order cumulants. These results yield considerable new insight, such as providing a technical explanation as to why the well known Gaussian approximation is quite robust to large SNR for the case of unequal antenna arrays, whilst it deviates strongly for equal antenna arrays. In addition, by drawing upon our high order cumulant expansions, we employ the Edgeworth expansion technique to propose a refined Gaussian approximation which is shown to give a very accurate closed-form characterization of the mutual information distribution, both around the mean and for moderate deviations into the tails (where the Gaussian approximation fails remarkably). For stronger deviations where the Edgeworth expansion becomes unwieldy, we employ the saddle point method and asymptotic integration tools to establish new analytical characterizations which are shown to be very simple and accurate. Based on these results we also recover key well established properties of the tail distribution, including the diversity-multiplexing-tradeoff.
1301.5034
Downlink MIMO HetNets: Modeling, Ordering Results and Performance Analysis
cs.IT cs.NI math.IT
We develop a general downlink model for multi-antenna heterogeneous cellular networks (HetNets), where base stations (BSs) across tiers may differ in terms of transmit power, target signal-to-interference-ratio (SIR), deployment density, number of transmit antennas and the type of multi-antenna transmission. In particular, we consider and compare space division multiple access (SDMA), single user beamforming (SU-BF), and baseline single-input single-output (SISO) transmission. For this general model, the main contributions are: (i) ordering results for both coverage probability and per user rate in closed form for any BS distribution for the three considered techniques, using novel tools from stochastic orders, (ii) upper bounds on the coverage probability assuming a Poisson BS distribution, and (iii) a comparison of the area spectral efficiency (ASE). The analysis concretely demonstrates, for example, that for a given total number of transmit antennas in the network, it is preferable to spread them across many single-antenna BSs vs. fewer multi-antenna BSs. Another observation is that SU-BF provides higher coverage and per user data rate than SDMA, but SDMA is in some cases better in terms of ASE.
1301.5044
Performance Analysis of Heterogeneous Feedback Design in an OFDMA Downlink with Partial and Imperfect Feedback
cs.IT math.IT
Current OFDMA systems group resource blocks into subbands to form the basic feedback unit. Homogeneous feedback design with a common subband size is not aware of the heterogeneous channel statistics among users. Under a general correlated channel model, we demonstrate the gain of matching the subband size to the underlying channel statistics, motivating heterogeneous feedback design with different subband sizes and feedback resources across clusters of users. Employing the best-M partial feedback strategy, users with a smaller subband size would convey more partial feedback to match the frequency selectivity. In order to develop an analytical framework to investigate the impact of partial feedback and potential imperfections, we leverage the multi-cluster subband fading model. The perfect feedback scenario is thoroughly analyzed, and the closed form expression for the average sum rate is derived for the heterogeneous partial feedback system. We proceed to examine the effect of imperfections due to channel estimation error and feedback delay, which leads to additional consideration of system outage. Two transmission strategies, fixed rate and variable rate, are considered for the outage analysis. We also investigate how to adapt to the imperfections in order to maximize the average goodput under heterogeneous partial feedback.
1301.5047
Asymptotically Efficient Distributed Estimation With Exponential Family Statistics
math.PR cs.IT math.IT math.OC
The paper studies the problem of distributed parameter estimation in multi-agent networks with exponential family observation statistics. A certainty-equivalence type distributed estimator of the consensus + innovations form is proposed in which, at each observation sampling epoch, agents update their local parameter estimates by appropriately combining the data received from their neighbors and the locally sensed new information (innovation). Under global observability of the networked sensing model, i.e., the ability to distinguish between different instances of the parameter value based on the joint observation statistics, and mean connectivity of the inter-agent communication network, the proposed estimator is shown to yield consistent parameter estimates at each network agent. Further, it is shown that the distributed estimator is asymptotically efficient, in that the asymptotic covariances of the agent estimates coincide with that of the optimal centralized estimator, i.e., the inverse of the centralized Fisher information rate. From a technical viewpoint, the proposed distributed estimator leads to non-Markovian mixed timescale stochastic recursions, and the analytical methods developed in the paper contribute to the general theory of distributed stochastic approximation.
1301.5061
Capacity Region Bounds and Resource Allocation for Two-Way OFDM Relay Channels
cs.IT math.IT
In this paper, we consider two-way orthogonal frequency division multiplexing (OFDM) relay channels, where the direct link between the two terminal nodes is too weak to be used for data transmission. The widely known per-subcarrier decode-and-forward (DF) relay strategy treats each subcarrier as a separate channel, and performs independent channel coding over each subcarrier. We show that this per-subcarrier DF relay strategy is only a suboptimal DF relay strategy, and present a multi-subcarrier DF relay strategy which utilizes cross-subcarrier channel coding to achieve a larger rate region. We then propose an optimal resource allocation algorithm to characterize the achievable rate region of the multi-subcarrier DF relay strategy. The computational complexity of this algorithm is much smaller than that of standard Lagrangian duality optimization algorithms. We further analyze the asymptotic performance of two-way relay strategies including the above two DF relay strategies and an amplify-and-forward (AF) relay strategy. The analysis shows that the multi-subcarrier DF relay strategy tends to achieve the capacity region of the two-way OFDM relay channels in the low signal-to-noise ratio (SNR) regime, while the AF relay strategy tends to achieve the multiplexing gain region of the two-way OFDM relay channels in the high SNR regime. Numerical results are provided to justify all the analytical results and the efficacy of the proposed optimal resource allocation algorithm.
1301.5063
Heteroscedastic Conditional Ordinal Random Fields for Pain Intensity Estimation from Facial Images
cs.CV cs.LG stat.ML
We propose a novel method for automatic pain intensity estimation from facial images based on the framework of kernel Conditional Ordinal Random Fields (KCORF). We extend this framework to account for heteroscedasticity on the output labels (i.e., pain intensity scores) and introduce novel dynamic features, dynamic ranks, that impose temporal ordinal constraints on the static ranks (i.e., intensity scores). Our experimental results show that the proposed approach outperforms state-of-the-art methods for sequence classification with ordinal data and other ordinal regression models. The approach performs significantly better than other models in terms of the Intra-Class Correlation measure, which is the most accepted evaluation measure in the tasks of facial behaviour intensity estimation.
1301.5069
Secrecy without one-way functions
cs.CR cs.IT math.IT
We show that some problems in information security can be solved without using one-way functions. The latter are usually regarded as a central concept of cryptography, but the very existence of one-way functions depends on difficult conjectures in complexity theory, most notably on the notorious "$P \ne NP$" conjecture. In this paper, we suggest protocols for secure computation of the sum, product, and some other functions, without using any one-way functions. A new input that we offer here is that, in contrast with other proposals, we conceal "intermediate results" of a computation. For example, when we compute the sum of $k$ numbers, only the final result is known to the parties; partial sums are not known to anybody. Other applications of our method include voting/rating over insecure channels and a rather elegant and efficient solution of Yao's "millionaires' problem". Then, while it is fairly obvious that a secure (bit) commitment between two parties is impossible without a one-way function, we show that it is possible if the number of parties is at least 3. We also show how our (bit) commitment scheme for 3 parties can be used to arrange an unconditionally secure (bit) commitment between just two parties if they use a "dummy" (e.g., a computer) as the third party. We explain how our concept of a "dummy" is different from a well-known concept of a "trusted third party". We also suggest a protocol, without using a one-way function, for "mental poker", i.e., a fair card dealing (and playing) over distance. We also propose a secret sharing scheme where an advantage over Shamir's and other known secret sharing schemes is that nobody, including the dealer, ends up knowing the shares owned by any particular player. It should be mentioned that computational cost of our protocols is negligible to the point that all of them can be executed without a computer.
1301.5083
Improved Asymptotic Key Rate of the B92 Protocol
quant-ph cs.IT math.IT
We analyze the asymptotic key rate of the single photon B92 protocol by using Renner's security analysis given in 2005. The new analysis shows that the B92 protocol can securely generate a key at a 6.5% depolarizing rate, while the previous analyses cannot guarantee secure key generation even at a 4.2% depolarizing rate.
1301.5088
Piecewise Linear Multilayer Perceptrons and Dropout
stat.ML cs.LG
We propose a new type of hidden layer for a multilayer perceptron, and demonstrate that it obtains the best reported performance for an MLP on the MNIST dataset.
1301.5096
Minimax Filtering via Relations between Information and Estimation
cs.IT math.IT
We investigate the problem of continuous-time causal estimation under a minimax criterion. Let $X^T = \{X_t,0\leq t\leq T\}$ be governed by the probability law $P_{\theta}$ from a class of possible laws indexed by $\theta \in \Lambda$, and $Y^T$ be the noise corrupted observations of $X^T$ available to the estimator. We characterize the estimator minimizing the worst case regret, where regret is the difference between the causal estimation loss of the estimator and that of the optimum estimator. One of the main contributions of this paper is characterizing the minimax estimator, showing that it is in fact a Bayesian estimator. We then relate minimax regret to the channel capacity when the channel is either Gaussian or Poisson. In this case, we characterize the minimax regret and the minimax estimator more explicitly. If we further assume that the uncertainty set consists of deterministic signals, the worst case regret is exactly equal to the corresponding channel capacity, namely the maximal mutual information attainable across the channel among all possible distributions on the uncertainty set of signals. The corresponding minimax estimator is the Bayesian estimator assuming the capacity-achieving prior. Using this relation, we also show that the capacity-achieving prior coincides with the least favorable input. Moreover, we show that this minimax estimator is not only minimizing the worst case regret but also essentially minimizing regret for "most" of the other sources in the uncertainty set. We present a couple of examples for the construction of a minimax filter via an approximation of the associated capacity-achieving distribution.
1301.5108
Balanced Sparsest Generator Matrices for MDS Codes
cs.IT math.IT
We show that given $n$ and $k$, for $q$ sufficiently large, there always exists an $[n, k]_q$ MDS code that has a generator matrix $G$ satisfying the following two conditions: (C1) Sparsest: each row of $G$ has Hamming weight $n - k + 1$; (C2) Balanced: Hamming weights of the columns of $G$ differ from each other by at most one.
1301.5109
Constrained Source Coding with Side Information
cs.IT math.IT
The source-coding problem with side information at the decoder is studied subject to a constraint that the encoder---to whom the side information is unavailable---be able to compute the decoder's reconstruction sequence to within some distortion. For discrete memoryless sources and finite single-letter distortion measures, an expression is given for the minimal description rate as a function of the joint law of the source and side information and of the allowed distortions at the encoder and at the decoder. The minimal description rate is also computed for a memoryless Gaussian source with squared-error distortion measures. A solution is also provided to a more general problem where there are more than two distortion constraints and each distortion function may be a function of three arguments: the source symbol, the encoder's reconstruction symbol, and the decoder's reconstruction symbol.
1301.5112
Active Learning on Trees and Graphs
cs.LG stat.ML
We investigate the problem of active learning on a given tree whose nodes are assigned binary labels in an adversarial way. Inspired by recent results by Guillory and Bilmes, we characterize (up to constant factors) the optimal placement of queries so to minimize the mistakes made on the non-queried nodes. Our query selection algorithm is extremely efficient, and the optimal number of mistakes on the non-queried nodes is achieved by a simple and efficient mincut classifier. Through a simple modification of the query selection algorithm we also show optimality (up to constant factors) with respect to the trade-off between number of queries and number of mistakes on non-queried nodes. By using spanning trees, our algorithms can be efficiently applied to general graphs, although the problem of finding optimal and efficient active learning algorithms for general graphs remains open. Towards this end, we provide a lower bound on the number of mistakes made on arbitrary graphs by any active learning algorithm using a number of queries which is up to a constant fraction of the graph size.
1301.5121
Partitioning Graph Databases - A Quantitative Evaluation
cs.DB cs.DC
Electronic data is growing at increasing rates, in both size and connectivity: the increasing presence of, and interest in, relationships between data. An example is the Twitter social network graph. Due to this growth, demand is increasing for technologies that can process such data. Currently, relational databases are the predominant technology, but they are poorly suited to processing connected data as they are optimized for index-intensive operations. Conversely, graph databases are optimized for graph computation. They link records by direct references, avoiding index lookups, and enabling retrieval of adjacent elements in constant time, regardless of graph size. However, as data volume increases these databases outgrow the resources of one computer and data partitioning becomes necessary. We evaluate the viability of using graph partitioning algorithms to partition graph databases. A prototype partitioned database was developed; three partitioning algorithms were explored and one implemented. Three graph datasets were used: two real and one synthetically generated. These were partitioned in various ways and the impact on database performance measured. We defined one synthetic access pattern per dataset and executed each on the partitioned datasets. Evaluation took place in a simulation environment, ensuring repeatability and allowing measurement of metrics like network traffic and load balance. Results show that compared to random partitioning the partitioning algorithm reduced traffic by 40-90%. Executing the algorithm intermittently during usage maintained partition quality, while requiring only 1% of the computation of the initial partitioning. Strong correlations were found between theoretic quality metrics and generated network traffic under non-uniform access patterns.
1301.5154
A Rational and Efficient Algorithm for View Revision in Databases
cs.LO cs.AI cs.DB
The dynamics of belief and knowledge is one of the major components of any autonomous system that should be able to incorporate new pieces of information. In this paper, we argue that to apply the rationality results of belief dynamics theory to various practical problems, the theory should be generalized in two respects: first, it should allow a certain part of the belief to be declared immutable; and second, the belief state need not be deductively closed. Such a generalization of belief dynamics, referred to as base dynamics, is presented, along with the concept of a generalized revision algorithm for Horn knowledge bases. We show that Horn knowledge base dynamics has an interesting connection with kernel change and abduction. Finally, we also show that both variants are rational in the sense that they satisfy certain rationality postulates stemming from philosophical works on belief dynamics.
1301.5159
International collaboration clusters in Africa
cs.DL cs.SI physics.soc-ph
Recent discussion about the increase in international research collaboration suggests a comprehensive global network centred around a group of core countries and driven by generic socio-economic factors where the global system influences all national and institutional outcomes. In counterpoint, we demonstrate that the collaboration pattern for countries in Africa is far from universal. Instead, it exhibits layers of internal clusters and external links that are explained not by monotypic global influences but by regional geography and, perhaps even more strongly, by history, culture and language. Analysis of these bottom-up, subjective, human factors is required in order to provide the fuller explanation useful for policy and management purposes.
1301.5160
See the Tree Through the Lines: The Shazoo Algorithm -- Full Version --
cs.LG
Predicting the nodes of a given graph is a fascinating theoretical problem with applications in several domains. Since graph sparsification via spanning trees retains enough information while making the task much easier, trees are an important special case of this problem. Although it is known how to predict the nodes of an unweighted tree in a nearly optimal way, in the weighted case a fully satisfactory algorithm is not available yet. We fill this hole and introduce an efficient node predictor, Shazoo, which is nearly optimal on any weighted tree. Moreover, we show that Shazoo can be viewed as a common nontrivial generalization of both previous approaches for unweighted trees and weighted lines. Experiments on real-world datasets confirm that Shazoo performs well in that it fully exploits the structure of the input tree, and gets very close to (and sometimes better than) less scalable energy minimization methods.
1301.5177
"Seed+Expand": A validated methodology for creating high quality publication oeuvres of individual researchers
cs.DL cs.IR
The study of science at the individual micro-level frequently requires the disambiguation of author names. The creation of authors' publication oeuvres involves matching the list of unique author names to names used in publication databases. Despite recent progress in the development of unique author identifiers, e.g., ORCID, VIVO, or DAI, author disambiguation remains a key problem when it comes to large-scale bibliometric analysis using data from multiple databases. This study introduces and validates a new methodology called seed+expand for semi-automatic bibliographic data collection for a given set of individual authors. Specifically, we identify the oeuvre of a set of Dutch full professors during the period 1980-2011. In particular, we combine author records from the National Research Information System (NARCIS) with publication records from the Web of Science. Starting with an initial list of 8,378 names, we identify "seed publications" for each author using five different approaches. Subsequently, we "expand" the set of publications in three different ways. The different approaches are compared and the resulting oeuvres are evaluated on precision and recall using a "gold standard" dataset of authors for which verified publications in the period 2001-2010 are available.
1301.5201
Models of Social Groups in Blogosphere Based on Information about Comment Addressees and Sentiments
cs.SI physics.soc-ph
This work concerns the analysis of the number, sizes and other characteristics of groups identified in the blogosphere using a set of models identifying social relations. These models differ in how they identify social relations, influenced by the method of classifying the addressee of a comment (either the post author or the author of the comment to which this comment directly replies) and by a sentiment calculated for comments from the statistics and connotation of the words present. The state of a selected blog portal was analyzed in sequential, partly overlapping time intervals. Groups in each interval were identified using a version of the CPM algorithm; on the basis of these, stable groups, existing for at least a minimal assumed duration of time, were identified.
1301.5220
Properties of the Least Squares Temporal Difference learning algorithm
stat.ML cs.LG
This paper presents four different ways of looking at the well-known Least Squares Temporal Differences (LSTD) algorithm for computing the value function of a Markov Reward Process, each of them leading to different insights: the operator-theory approach via the Galerkin method, the statistical approach via instrumental variables, the linear dynamical system view, as well as the limit of the TD iteration. We also give a geometric view of the algorithm as an oblique projection. Furthermore, there is an extensive comparison of the optimization problem solved by LSTD as compared to Bellman Residual Minimization (BRM). We then review several schemes for the regularization of the LSTD solution. Finally, we treat the modification of LSTD for the case of episodic Markov Reward Processes.
1301.5258
Extremality Properties for the Basic Polarization Transformations
cs.IT math.IT
We study the extremality of the BEC and the BSC for Gallager's reliability function $E_0$ evaluated under the uniform input distribution for binary input DMCs from the aspect of channel polarization. In particular, we show that amongst all B-DMCs of a given $E_0(\rho)$ value, for a fixed $\rho \geq 0$, the BEC and BSC are extremal in the evolution of $E_0$ under the one-step polarization transformations.
1301.5273
Using Periodicity of Nucleotide Sequences
q-bio.GN cs.CE
Withdrawn by arXiv administrators due to content entirely plagiarized from other authors (not in arXiv).
1301.5288
The connection between Bayesian estimation of a Gaussian random field and RKHS
stat.ML cs.LG math.ST stat.TH
Reconstruction of a function from noisy data is often formulated as a regularized optimization problem over an infinite-dimensional reproducing kernel Hilbert space (RKHS). The solution describes the observed data and has a small RKHS norm. When the data fit is measured using a quadratic loss, this estimator has a known statistical interpretation. Given the noisy measurements, the RKHS estimate represents the posterior mean (minimum variance estimate) of a Gaussian random field with covariance proportional to the kernel associated with the RKHS. In this paper, we provide a statistical interpretation when more general losses are used, such as absolute value, Vapnik or Huber. Specifically, for any finite set of sampling locations (including where the data were collected), the MAP estimate for the signal samples is given by the RKHS estimate evaluated at these locations.
1301.5309
Capacity Results for Binary Fading Interference Channels with Delayed CSIT
cs.IT math.IT
To study the effect of lack of up-to-date channel state information at the transmitters (CSIT), we consider two-user binary fading interference channels with Delayed-CSIT. We characterize the capacity region for such channels under homogeneous assumption where channel gains have identical and independent distributions across time and space, eliminating the possibility of exploiting time/space correlation. We introduce and discuss several novel coding opportunities created by outdated CSIT that can enlarge the achievable rate region. The capacity-achieving scheme relies on accurate combination, concatenation, and merging of these opportunities, depending on the channel statistics. The outer-bounds are based on an extremal inequality we develop for a binary broadcast channel with Delayed-CSIT. We further extend the results and characterize the capacity region when output feedback links are available from the receivers to the transmitters in addition to the delayed knowledge of the channel state information. We also discuss the extension of our results to the non-homogeneous setting.
1301.5332
Online Learning with Pairwise Loss Functions
stat.ML cs.LG
Efficient online learning with pairwise loss functions is a crucial component in building large-scale learning systems that maximize the area under the Receiver Operating Characteristic (ROC) curve. In this paper we investigate the generalization performance of online learning algorithms with pairwise loss functions. We show that the existing proof techniques for generalization bounds of online algorithms with a univariate loss cannot be directly applied to pairwise losses. In this paper, we derive the first result providing data-dependent bounds for the average risk of the sequence of hypotheses generated by an arbitrary online learner in terms of an easily computable statistic, and show how to extract a low risk hypothesis from the sequence. We demonstrate the generality of our results by applying them to two important problems in machine learning. First, we analyze two online algorithms for bipartite ranking; one being a natural extension of the perceptron algorithm and the other using online convex optimization. Secondly, we provide an analysis of the risk bound for an online algorithm for supervised metric learning.
1301.5334
Generalized Cut-Set Bounds for Broadcast Networks
cs.IT math.IT
A broadcast network is a classical network with all source messages collocated at a single source node. For broadcast networks, the standard cut-set bounds, which are known to be loose in general, are closely related to union as a specific set operation to combine the basic cuts of the network. This paper provides a new set of network coding bounds for general broadcast networks. These bounds combine the basic cuts of the network via a variety of set operations (not just the union) and are established via only the submodularity of Shannon entropy. The tightness of these bounds is demonstrated via applications to combination networks.
1301.5348
Why Size Matters: Feature Coding as Nystrom Sampling
cs.LG cs.CV
Recently, the computer vision and machine learning community has been in favor of feature extraction pipelines that rely on a coding step followed by a linear classifier, due to their overall simplicity, well understood properties of linear classifiers, and their computational efficiency. In this paper we propose a novel view of this pipeline based on kernel methods and Nystrom sampling. In particular, we focus on the coding of a data point with a local representation based on a dictionary with fewer elements than the number of data points, and view it as an approximation to the actual function that would compute pair-wise similarity to all data points (often too many to compute in practice), followed by a Nystrom sampling step to select a subset of all data points. Furthermore, since bounds are known on the approximation power of Nystrom sampling as a function of how many samples (i.e. dictionary size) we consider, we can derive bounds on the approximation of the exact (but expensive to compute) kernel matrix, and use it as a proxy to predict accuracy as a function of the dictionary size, which has been observed to increase but also to saturate as we increase its size. This model may help explain the positive effect of the codebook size and justify the need to stack more layers (often referred to as deep learning), as flat models empirically saturate as we add more complexity.
1301.5349
Toward the Automatic Generation of a Semantic VRML Model from Unorganized 3D Point Clouds
cs.CG cs.AI
This paper presents our experience regarding the creation of a 3D semantic facility model from unorganized 3D point clouds. A knowledge-based approach to object detection using the OWL ontology language is presented. This knowledge is used to define SWRL detection rules. In addition, the combination of 3D processing built-ins and topological built-ins in SWRL rules aims at combining geometrical analysis of 3D point clouds with specialist knowledge. This combination allows more flexible and intelligent detection and annotation of objects contained in 3D point clouds. The created WiDOP prototype takes a set of 3D point clouds as input, and produces an indexed scene of colored objects visualized in the VRML language as output. The context of the study is the detection of railway objects materialized within the Deutsche Bahn scene, such as signals, technical cupboards, electric poles, etc. Finally, the resulting enriched and populated domain ontology, which contains the annotations of objects in the point clouds, is used to feed a GIS system.
1301.5356
Efficient MRF Energy Propagation for Video Segmentation via Bilateral Filters
cs.CV
Segmentation of an object from a video is a challenging task in multimedia applications. Depending on the application, automatic or interactive methods are desired; however, regardless of the application type, efficient computation of video object segmentation is crucial for time-critical applications; specifically, mobile and interactive applications require near real-time efficiency. In this paper, we address the problem of video segmentation from the perspective of efficiency. We first redefine the problem of video object segmentation as the propagation of MRF energies along the temporal domain. For this purpose, a novel and efficient method is proposed to propagate MRF energies throughout the frames via bilateral filters without using any global texture, color or shape model. The recently presented bi-exponential filter is utilized for efficiency, and a novel technique is developed to dynamically solve graph-cuts for varying, non-lattice graphs in a general linear filtering scenario. These improvements are evaluated for both automatic and interactive video segmentation scenarios. In addition to efficiency, segmentation quality is also tested both quantitatively and qualitatively. Indeed, for some challenging examples, significant time savings are observed without loss of segmentation quality.
1301.5359
Local Graph Coloring and Index Coding
cs.IT cs.DM math.IT
We present a novel upper bound for the optimal index coding rate. Our bound uses a graph theoretic quantity called the local chromatic number. We show how a good local coloring can be used to create a good index code. The local coloring is used as an alignment guide to assign index coding vectors from a general position MDS code. We further show that a natural LP relaxation yields an even stronger index code. Our bounds provably outperform the state of the art on index coding but at most by a constant factor.
1301.5434
Design of a Compandor Based on Approximation by a First-Degree Spline Function
cs.IT math.IT
In this paper, the optimal compressor function is approximated using a first-degree spline function. For the companding quantizer designed on the basis of this approximating spline function, the support region is numerically optimized to minimize the total distortion of the last segment. It is shown that the companding quantizer with the optimized support region threshold provides a signal-to-quantization-noise ratio very close to that of the optimal companding quantizer with an equal number of levels.
1301.5451
Spread spectrum compressed sensing MRI using chirp radio frequency pulses
cs.CV math.OC physics.med-ph
Compressed sensing has shown great potential in reducing data acquisition time in magnetic resonance imaging (MRI). Recently, a spread spectrum compressed sensing MRI method was proposed that modulates an image with a quadratic phase. It performs better than conventional compressed sensing MRI with variable density sampling, since the coherence between the sensing and sparsity bases is reduced. However, the spread spectrum in that method is implemented via a shim coil, which limits its modulation intensity and is not convenient to operate. In this letter, we propose to apply chirp (linear frequency-swept) radio frequency pulses to easily control the spread spectrum. To accelerate the image reconstruction, an alternating direction algorithm is modified by exploiting the complex orthogonality of the quadratic phase encoding. Reconstruction on the acquired data demonstrates that more image features are preserved using the proposed approach than with conventional CS-MRI.
1301.5482
Relative Generalized Rank Weight of Linear Codes and Its Applications to Network Coding
cs.IT cs.CR math.CO math.IT
By extending the notion of minimum rank distance, this paper introduces two new relative code parameters of a linear code C_1 of length n over a field extension and its subcode C_2. One is called the relative dimension/intersection profile (RDIP), and the other is called the relative generalized rank weight (RGRW). We clarify their basic properties and the relation between the RGRW and the minimum rank distance. As applications of the RDIP and the RGRW, the security performance and the error correction capability of secure network coding, guaranteed independently of the underlying network code, are analyzed and clarified. We propose a construction of a secure network coding scheme, and analyze its security performance and error correction capability as an example of applications of the RDIP and the RGRW. Silva and Kschischang showed the existence of a secure network coding scheme in which no part of the secret message is revealed to the adversary even if any dim C_1 - 1 links are wiretapped, which is guaranteed over any underlying network code. However, the explicit construction of such a scheme remained an open problem. Our new construction is an instance of secure network coding that solves this open problem.
1301.5488
Multi-class Generalized Binary Search for Active Inverse Reinforcement Learning
cs.LG cs.AI stat.ML
This paper addresses the problem of learning a task from demonstration. We adopt the framework of inverse reinforcement learning, where tasks are represented in the form of a reward function. Our contribution is a novel active learning algorithm that enables the learning agent to query the expert for more informative demonstrations, thus leading to more sample-efficient learning. For this novel algorithm (Generalized Binary Search for Inverse Reinforcement Learning, or GBS-IRL), we provide a theoretical bound on sample complexity and illustrate its applicability on several different tasks. To our knowledge, GBS-IRL is the first active IRL algorithm with provable sample complexity bounds. We also discuss our method in light of other existing methods in the literature and its general applicability in multi-class classification problems. Finally, motivated by recent work on learning from demonstration in robots, we also discuss how different forms of human feedback can be integrated in a transparent manner in our learning framework.
1301.5491
ChESS - Quick and Robust Detection of Chess-board Features
cs.CV
Localization of chess-board vertices is a common task in computer vision, underpinning many applications, but relatively little work focuses on designing a specific feature detector that is fast, accurate and robust. In this paper the `Chess-board Extraction by Subtraction and Summation' (ChESS) feature detector, designed to respond exclusively to chess-board vertices, is presented. The proposed method is robust against noise, poor lighting and poor contrast, requires no prior knowledge of the extent of the chess-board pattern, is computationally very efficient, and provides a strength measure of detected features. Such a detector has significant application both in the key field of camera calibration and in structured light 3D reconstruction. Evidence is presented showing its robustness, accuracy, and efficiency in comparison to other commonly used detectors, both in simulation and in experimental 3D reconstruction of flat plate and cylindrical objects.
1301.5522
On Gaussian Half-Duplex Relay Networks
cs.IT math.IT
This paper considers Gaussian relay networks where a source transmits a message to a sink terminal with the help of one or more relay nodes. The relays work in half-duplex mode, in the sense that they cannot transmit and receive at the same time. For the case of one relay, the generalized Degrees-of-Freedom is characterized first, and then it is shown that capacity can be achieved to within a constant gap regardless of the actual values of the channel parameters. Different achievable schemes are presented with either deterministic or random switch for the relay node. It is shown that a random switch in general achieves higher rates than a deterministic switch. For the case of K relays, it is shown that the generalized Degrees-of-Freedom can be obtained by solving a linear program, and that capacity can be achieved to within a constant gap of K/2 log(4K). This gap may be further decreased by considering more structured networks such as, for example, the diamond network.
1301.5535
On the Achievable Rate-Regions for State-Dependent Gaussian Interference Channel
cs.IT math.IT
In this paper, we study a general additive state-dependent Gaussian interference channel (ASD-GIC), i.e. a two-user interference channel with two independent states known non-causally at both transmitters but unknown to either receiver. A special case, where the additive states over the two links are the same, is studied in [1], [2], where it is shown that the gap between the achievable symmetric rate and the upper bound is less than 1/4 bit in the strong interference case. Here, we also consider the case where each channel state has unbounded variance [3], which is referred to as strong interference. We first obtain an outer bound on the capacity region. By utilizing lattice-based coding schemes, we obtain four achievable rate regions. Depending on the noise variance and the channel power constraint, the achievable rate regions can coincide with the channel capacity region. For the symmetric model, the achievable sum-rate comes within 0.661 bit of the channel capacity for signal-to-noise ratios (SNR) greater than one.
1301.5536
On the Correlation Between Polarized BECs
cs.IT math.IT
We consider the $2^n$ channels synthesized by the $n$-fold application of Ar\i{}kan's polar transform to a binary erasure channel (BEC). The synthetic channels are BECs themselves, and we show that, asymptotically for almost all these channels, the pairwise correlations between their erasure events are extremely small: the correlation coefficients vanish faster than any exponential in $n$. Such a fast decay of correlations allows us to conclude that the union bound on the block error probability of polar codes is very tight.
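The polar transform's action on a BEC is simple to reproduce numerically: a BEC with erasure probability $z$ splits into two synthetic BECs with erasure probabilities $2z - z^2$ and $z^2$, and iterating this recursion $n$ times yields the $2^n$ synthetic channels discussed above. A minimal sketch of that recursion (the marginal erasure probabilities only, not the pairwise correlations studied in the paper):

```python
def polar_bec_erasures(z, n):
    """Erasure probabilities of the 2**n synthetic BECs obtained by the
    n-fold application of Arikan's polar transform to a BEC(z)."""
    probs = [z]
    for _ in range(n):
        nxt = []
        for p in probs:
            nxt.append(2 * p - p * p)  # "minus" (degraded) channel
            nxt.append(p * p)          # "plus" (upgraded) channel
        probs = nxt
    return probs

ps = polar_bec_erasures(0.5, 10)  # the 1024 synthetic channels of BEC(1/2)
```

Since each split preserves the average erasure probability, the mean of `ps` stays exactly at the original `z`, while the individual entries polarize toward 0 or 1 as `n` grows.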
1301.5582
Multi-Class Detection and Segmentation of Objects in Depth
cs.CV cs.RO
The quality of life of many people could be improved by autonomous humanoid robots in the home. To function in the human world, a humanoid household robot must be able to locate itself and perceive the environment like a human; scene perception, object detection and segmentation, and object spatial localization in 3D are fundamental capabilities for such humanoid robots. This paper presents a 3D multi-class object detection and segmentation method. The contributions are twofold. Firstly, we present a multi-class detection method, where a minimal joint codebook is learned in a principled manner. Secondly, we incorporate depth information using RGB-D imagery, which increases the robustness of the method and gives the 3D location of objects -- necessary since the robot reasons in 3D space. Experiments show that the multi-class extension improves the detection efficiency with respect to the number of classes, and that the depth extension improves the detection robustness and gives sufficiently accurate 3D locations of the objects.
1301.5586
Measuring the Significance of the Geographic Flow of Music
cs.SI physics.soc-ph
In previous work, our results suggested that some cities tend to be ahead of others in their musical preferences. We concluded that work by noting that to properly test this claim, we would try to exploit the leader-follower relationships that we identified to make predictions. Here we present the results of our predictive evaluation. We find that information on the past musical preferences in other cities allows a linear model to improve its predictions by approx. 5% over a simple baseline. This suggests that at best, previously found leader-follower relationships are rather weak.
1301.5593
A Packetized Direct Load Control Mechanism for Demand Side Management
cs.SY
Electricity peaks can be harmful to grid stability and result in additional generation costs to balance supply with demand. By developing a network of smart appliances together with a quasi-decentralized control protocol, direct load control (DLC) provides an opportunity to reduce peak consumption by directly controlling the on/off switch of the networked appliances. This paper proposes a packetized DLC (PDLC) solution that is illustrated by an application to air conditioning temperature control. Here the term packetized refers to a fixed-time energy usage authorization. The consumers in each room choose their preferred set point, and then an operator of the local appliance pool determines the comfort band around the set point. We use a thermal dynamic model to investigate the duty cycle of thermostatic appliances. Three theorems are proposed in this paper. The first two evaluate the performance of the PDLC in transient and steady-state operation, respectively. The first theorem proves that the average room temperature converges to the average room set point when a fixed number of packets is applied in each discrete interval. The second theorem proves that the PDLC solution guarantees that the temperature of every room stays within its individual comfort band. The third theorem proposes an allocation method that links the result of Theorem 1 and the assumptions of Theorem 2 such that the overall PDLC solution works. The direct result of the theorems is that we can reduce the consumption oscillation that occurs when no control is applied. Simulations are provided to verify the theoretical results.
1301.5595
A discrete analysis of the metal-V belt drive
cs.CE
The metal-V belt drive includes a large number of parts which interact to transmit power from the input to the output pulley. A compression belt composed of a great number of struts is held by a flat tension belt. Power is then shared between the two belts, which generally move in opposite directions. Due to the particular geometry of the elements and the great number of parts, a numerical approach derives the global equilibrium of the mechanism from the equilibrium of the elementary parts. The sliding arc on each pulley can thus be determined for both the compression and tension belts. Finally, the power sharing, defined as the differential motion between the belts, can be calculated. The first part of the paper presents the different steps of the quasi-static mechanical analysis and their numerical implementation. Load distributions, speed profiles and sliding angle values are discussed. The second part of the paper deals with a systematic use of the computer software. The effects of the speed ratio, transmitted torque, strut geometry and friction coefficient are analysed through the variations of the output parameters. Finally, the effect of deformable pulley flanges is discussed.
1301.5596
Systems of MDS codes from units and idempotents
cs.IT math.IT
Algebraic systems are constructed from which series of maximum distance separable (MDS) codes are derived. The methods use unit and idempotent schemes.
1301.5607
Information as Distinctions: New Foundations for Information Theory
cs.IT math.IT math.LO
The logical basis for information theory is the newly developed logic of partitions that is dual to the usual Boolean logic of subsets. The key concept is a "distinction" of a partition, an ordered pair of elements in distinct blocks of the partition. The logical concept of entropy based on partition logic is the normalized counting measure of the set of distinctions of a partition on a finite set--just as the usual logical notion of probability based on the Boolean logic of subsets is the normalized counting measure of the subsets (events). Thus logical entropy is a measure on the set of ordered pairs, and all the compound notions of entropy (join entropy, conditional entropy, and mutual information) arise in the usual way from the measure (e.g., the inclusion-exclusion principle)--just like the corresponding notions of probability. The usual Shannon entropy of a partition is developed by replacing the normalized count of distinctions (dits) by the average number of binary partitions (bits) necessary to make all the distinctions of the partition.
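The definition of logical entropy as the normalized counting measure of distinctions admits a direct computation. A minimal sketch for a partition given by its block sizes, alongside the Shannon entropy it is contrasted with above:

```python
from fractions import Fraction
import math

def logical_entropy(block_sizes):
    # Normalized counting measure of the set of distinctions: ordered
    # pairs of elements lying in distinct blocks of the partition.
    n = sum(block_sizes)
    dits = n * n - sum(b * b for b in block_sizes)
    return Fraction(dits, n * n)

def shannon_entropy(block_sizes):
    # Shannon entropy of the same partition (relative block sizes
    # treated as probabilities).
    n = sum(block_sizes)
    return -sum((b / n) * math.log2(b / n) for b in block_sizes)

# Partition of a 4-element set into two blocks of size 2:
h_logical = logical_entropy([2, 2])   # 1 - (4 + 4)/16 = 1/2
h_shannon = shannon_entropy([2, 2])   # 1 bit
```

The indiscrete partition (one block) has logical entropy 0, and the discrete partition of an n-set (all singletons) has logical entropy 1 - 1/n, consistent with counting distinctions directly.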
1301.5650
Regularization and nonlinearities for neural language models: when are they needed?
stat.ML cs.LG
Neural language models (LMs) based on recurrent neural networks (RNN) are some of the most successful word and character-level LMs. Why do they work so well, in particular better than linear neural LMs? Possible explanations are that RNNs have an implicitly better regularization or that RNNs have a higher capacity for storing patterns due to their nonlinearities or both. Here we argue for the first explanation in the limit of little training data and the second explanation for large amounts of text data. We show state-of-the-art performance on the popular and small Penn dataset when RNN LMs are regularized with random dropout. Nonetheless, we show even better performance from a simplified, much less expressive linear RNN model without off-diagonal entries in the recurrent matrix. We call this model an impulse-response LM (IRLM). Using random dropout, column normalization and annealed learning rates, IRLMs develop neurons that keep a memory of up to 50 words in the past and achieve a perplexity of 102.5 on the Penn dataset. On two large datasets however, the same regularization methods are unsuccessful for both models and the RNN's expressivity allows it to overtake the IRLM by 10 and 20 percent perplexity, respectively. Despite the perplexity gap, IRLMs still outperform RNNs on the Microsoft Research Sentence Completion (MRSC) task. We develop a slightly modified IRLM that separates long-context units (LCUs) from short-context units and show that the LCUs alone achieve a state-of-the-art performance on the MRSC task of 60.8%. Our analysis indicates that a fruitful direction of research for neural LMs lies in developing more accessible internal representations, and suggests an optimization regime of very high momentum terms for effectively training such models.
1301.5655
Achievable rate region based on coset codes for multiple access channel with states
cs.IT math.IT
We prove that the ensemble of nested coset codes built on finite fields achieves the capacity of arbitrary discrete memoryless point-to-point channels. Exploiting its algebraic structure, we develop a coding technique for communication over a general discrete multiple access channel with channel state information distributed at the transmitters. We build an algebraic coding framework for this problem using the ensemble of Abelian group codes and thereby derive a new achievable rate region. We identify non-additive and non-symmetric examples for which the proposed achievable rate region is strictly larger than the one achievable using random unstructured codes.
1301.5676
Spatial Coupling as a Proof Technique
cs.IT math.IT
The aim of this paper is to show that spatial coupling can be viewed not only as a means to build better graphical models, but also as a tool to better understand uncoupled models. The starting point is the observation that some asymptotic properties of graphical models are easier to prove in the case of spatial coupling. In such cases, one can then use the so-called interpolation method to transfer known results for the spatially coupled case to the uncoupled one. Our main use of this framework is for LDPC codes, where we use interpolation to show that the average entropy of the codeword conditioned on the observation is asymptotically the same for spatially coupled as for uncoupled ensembles. We give three applications of this result for a large class of LDPC ensembles. The first one is a proof of the so-called Maxwell construction stating that the MAP threshold is equal to the Area threshold of the BP GEXIT curve. The second is a proof of the equality between the BP and MAP GEXIT curves above the MAP threshold. The third application is the intimately related fact that the replica symmetric formula for the conditional entropy in the infinite block length limit is exact.
1301.5684
Computing sum of sources over an arbitrary multiple access channel
cs.IT math.IT
The problem of computing the sum of sources over a multiple access channel (MAC) is considered. Building on the technique of linear computation coding (LCC) proposed by Nazer and Gastpar [2007], we employ the ensemble of nested coset codes to derive a new set of sufficient conditions for computing the sum of sources over an \textit{arbitrary} MAC. The optimality of nested coset codes [Padakandla, Pradhan 2011] enables this technique to outperform LCC even for a linear MAC with a structural match. Examples of non-additive MACs for which the technique proposed herein outperforms separation-based and systematic computation are also presented. Finally, this technique is enhanced by incorporating a separation-based strategy, leading to a new set of sufficient conditions for computing the sum over a MAC.
1301.5686
Transfer Topic Modeling with Ease and Scalability
cs.CL cs.LG stat.ML
The increasing volume of short texts generated on social media sites, such as Twitter or Facebook, creates a great demand for effective and efficient topic modeling approaches. While latent Dirichlet allocation (LDA) can be applied, it is not optimal due to its weakness in handling short texts with fast-changing topics and scalability concerns. In this paper, we propose a transfer learning approach that utilizes abundant labeled documents from other domains (such as Yahoo! News or Wikipedia) to improve topic modeling, with better model fitting and result interpretation. Specifically, we develop Transfer Hierarchical LDA (thLDA) model, which incorporates the label information from other domains via informative priors. In addition, we develop a parallel implementation of our model for large-scale applications. We demonstrate the effectiveness of our thLDA model on both a microblogging dataset and standard text collections including AP and RCV1 datasets.
1301.5687
Outage Probability of Wireless Ad Hoc Networks with Cooperative Relaying
cs.IT math.IT
In this paper, we analyze the performance of cooperative transmissions in wireless ad hoc networks with random node locations. According to a contention probability for message transmission, each source node either transmits its own message signal or acts as a potential relay for others. Hence, each destination node can potentially receive two copies of the message signal, one from the direct link and the other from the relay link. Taking the random node locations and interference into account, we derive closed-form expressions for the outage probability with different combining schemes at the destination nodes. In particular, the outage performance of the optimal combining, maximum ratio combining, and selection combining strategies is studied and quantified.
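The relative ordering of the combining schemes can be illustrated with a simplified Monte Carlo sketch over two independent Rayleigh-faded links. This toy model deliberately ignores the random node locations and interference analyzed in the paper, and the mean SNR and outage threshold are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000                     # Monte Carlo samples
mean_snr = 5.0                  # mean branch SNR (linear scale, assumed)
thresh = 2.0                    # outage SNR threshold (assumed)

# Rayleigh fading => the instantaneous branch SNR is exponential.
g_direct = rng.exponential(mean_snr, n)   # direct link
g_relay = rng.exponential(mean_snr, n)    # relay link

p_direct = np.mean(g_direct < thresh)                        # no combining
p_sc = np.mean(np.maximum(g_direct, g_relay) < thresh)       # selection combining
p_mrc = np.mean(g_direct + g_relay < thresh)                 # maximum ratio combining
```

Even in this stripped-down setting, MRC outperforms selection combining, which in turn outperforms the direct link alone; for independent branches, the SC outage probability is simply the product of the per-branch outage probabilities.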
1301.5695
Optimal Amplify-and-Forward Schemes for Relay Channels with Correlated Relay Noise
cs.IT math.IT
This paper investigates amplify-and-forward (AF) schemes for both one- and two-way relay channels. Unlike most existing works, which assume independent noise at the relays, we consider a more general scenario with correlated relay noise. We first propose an approach to efficiently solve a class of quadratically constrained fractional problems via second-order cone programming (SOCP). Then it is shown that the AF relay optimization problems studied in this paper can be cast as such quadratically constrained fractional problems. As a consequence, the proposed approach can be used as a unified framework to find the optimal AF rate for the one-way relay channel and the optimal AF rate region for the two-way relay channel, under both sum and individual relay power constraints. In particular, for the one-way relay channel under individual relay power constraints, we propose two suboptimal AF schemes in closed form. It is shown that they are approximately optimal in certain conditions of interest. Furthermore, we find the interesting result that, on average, noise correlation is beneficial regardless of whether the relays know the noise covariance matrix. Overall, the obtained results recover and generalize several existing results for the uncorrelated counterpart.
1301.5701
Sequential and Decentralized Estimation of Linear Regression Parameters in Wireless Sensor Networks
stat.AP cs.IT math.IT math.OC math.PR stat.ME
Sequential estimation of a vector of linear regression coefficients is considered under both centralized and decentralized setups. In sequential estimation, the number of observations used for estimation is determined by the observed samples, hence is random, as opposed to fixed-sample-size estimation. Specifically, after receiving a new sample, if a target accuracy level is reached, we stop and estimate using the samples collected so far; otherwise we continue to receive another sample. It is known that finding an optimum sequential estimator, which minimizes the average sample number for a given target accuracy level, is an intractable problem with a general stopping rule that depends on the complete observation history. By properly restricting the search space to stopping rules that depend on a specific subset of the complete observation history, we derive the optimum sequential estimator in the centralized case via optimal stopping theory. However, finding the optimum stopping rule in this case requires numerical computations that {\em quadratically} scales with the number of parameters to be estimated. For the decentralized setup with stringent energy constraints, under an alternative problem formulation that is conditional on the observed regressors, we first derive a simple optimum scheme whose computational complexity is {\em constant} with respect to the number of parameters. Then, following this simple optimum scheme we propose a decentralized sequential estimator whose computational complexity and energy consumption scales {\em linearly} with the number of parameters. Specifically, in the proposed decentralized scheme a close-to-optimum average stopping time performance is achieved by infrequently transmitting a single pulse with very short duration.
1301.5728
A Potential Theory of General Spatially-Coupled Systems via a Continuum Approximation
cs.IT math.IT
This paper analyzes general spatially-coupled (SC) systems with multi-dimensional coupling. A continuum approximation is used to derive potential functions that characterize the performance of the SC systems. For any dimension of coupling, it is shown that, if the boundary of the SC systems is fixed to the unique stable solution that minimizes the potential over all stationary solutions, the systems can approach the optimal performance as the number of coupled systems tends to infinity.
1301.5734
Reinforcement learning from comparisons: Three alternatives is enough, two is not
math.OC cs.LG math.PR
The paper deals with the problem of finding the best alternatives on the basis of pairwise comparisons when these comparisons need not be transitive. In this setting, we study a reinforcement urn model. We prove convergence to the optimal solution when reinforcement of a winning alternative occurs each time after considering three random alternatives. The simpler process, which reinforces the winner of a random pair, does not always converge: it may cycle.
1301.5765
High Capacity Indoor & Hotspot Wireless System in Shared Spectrum - A Techno-Economic Analysis
cs.NI cs.IT math.IT
Predictions for wireless and mobile Internet access suggest an exponential traffic increase, particularly in in-building environments. Non-traditional actors such as facility owners have a growing interest in deploying and operating their own indoor networks to fulfill the capacity demand. Such local operators will need to share spectrum with neighboring networks, because they are not likely to have their own dedicated spectrum. Management of inter-network interference then becomes a key issue for high-capacity provision. Tight operator-wise cooperation provides superior performance, but at the expense of high infrastructure cost and business-related barriers. Limited coordination, on the other hand, causes harmful interference between operators, which in turn requires even denser networks. In this paper, we propose a techno-economic analysis framework for investigating and comparing the strategies of indoor operators. We refine a traditional network cost model by introducing new inter-operator cost factors. Then, we present a numerical example to demonstrate how the proposed framework can help compare different operator strategies. Finally, we suggest areas for future research.
1301.5809
Producing a Unified Graph Representation from Multiple Social Network Views
cs.SI physics.soc-ph
In many social networks, several different link relations will exist between the same set of users. Additionally, attribute or textual information will be associated with those users, such as demographic details or user-generated content. For many data analysis tasks, such as community finding and data visualisation, the provision of multiple heterogeneous types of user data makes the analysis process more complex. We propose an unsupervised method for integrating multiple data views to produce a single unified graph representation, based on the combination of the k-nearest neighbour sets for users derived from each view. These views can be either relation-based or feature-based. The proposed method is evaluated on a number of annotated multi-view Twitter datasets, where it is shown to support the discovery of the underlying community structure in the data.
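The idea of combining per-view k-nearest-neighbour sets into a single unified graph can be sketched as follows. The specific combination rule used here (weighting each edge by the number of views in which it appears) is an illustrative assumption, not necessarily the paper's exact method:

```python
import numpy as np

def knn_sets(X, k):
    # Index sets of the k nearest neighbours (Euclidean) of each row of X.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)          # exclude self-neighbours
    return [set(np.argsort(row)[:k]) for row in d2]

def unified_graph(views, k):
    """Combine per-view kNN sets into one weighted graph: the directed
    weight of (i, j) counts how many views place j among i's k nearest
    neighbours; the graph is then symmetrized by keeping the larger of
    the two directed weights."""
    n = views[0].shape[0]
    W = np.zeros((n, n))
    for X in views:
        for i, nbrs in enumerate(knn_sets(X, k)):
            for j in nbrs:
                W[i, j] += 1
    return np.maximum(W, W.T)

# Three hypothetical feature-based views of the same 30 users.
rng = np.random.default_rng(1)
views = [rng.normal(size=(30, 4)) for _ in range(3)]
W = unified_graph(views, k=5)
```

The resulting symmetric weighted adjacency matrix can then be handed to a standard community-finding or graph-layout algorithm, which is the kind of downstream analysis the abstract has in mind.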
1301.5831
Canalization and control in automata networks: body segmentation in Drosophila melanogaster
q-bio.MN cs.CE cs.DM cs.FL nlin.AO
We present schema redescription as a methodology to characterize canalization in automata networks used to model biochemical regulation and signalling. In our formulation, canalization becomes synonymous with redundancy present in the logic of automata. This results in straightforward measures to quantify canalization in an automaton (micro-level), which is in turn integrated into a highly scalable framework to characterize the collective dynamics of large-scale automata networks (macro-level). This way, our approach provides a method to link micro- to macro-level dynamics -- a crux of complexity. Several new results ensue from this methodology: uncovering of dynamical modularity (modules in the dynamics rather than in the structure of networks), identification of minimal conditions and critical nodes to control the convergence to attractors, simulation of dynamical behaviour from incomplete information about initial conditions, and measures of macro-level canalization and robustness to perturbations. We exemplify our methodology with a well-known model of the intra- and inter-cellular genetic regulation of body segmentation in Drosophila melanogaster. We use this model to show that our analysis does not contradict any previous findings, but we also obtain new knowledge about its behaviour: a better understanding of the size of its wild-type attractor basin (larger than previously thought), the identification of novel minimal conditions and critical nodes that control wild-type behaviour, and the resilience of these to stochastic interventions. Our methodology is applicable to any complex network that can be modelled using automata, but we focus on biochemical regulation and signalling, towards a better understanding of the (decentralized) control that orchestrates cellular activity -- with the ultimate goal of explaining how cells and tissues 'compute'.
1301.5848
Decentralized Coded Caching Attains Order-Optimal Memory-Rate Tradeoff
cs.IT cs.NI math.IT
Replicating or caching popular content in memories distributed across the network is a technique to reduce peak network loads. Conventionally, the main performance gain of this caching was thought to result from making part of the requested data available closer to end users. Instead, we recently showed that a much more significant gain can be achieved by using caches to create coded-multicasting opportunities, even for users with different demands, through coding across data streams. These coded-multicasting opportunities are enabled by careful content overlap at the various caches in the network, created by a central coordinating server. In many scenarios, such a central coordinating server may not be available, raising the question of whether this multicasting gain can still be achieved in a more decentralized setting. In this paper, we propose an efficient caching scheme, in which the content placement is performed in a decentralized manner. In other words, no coordination is required for the content placement. Despite this lack of coordination, the proposed scheme is nevertheless able to create coded-multicasting opportunities and achieves a rate close to that of the optimal centralized scheme.
1301.5852
On a Multiple-Access in a Vector Disjunctive Channel
cs.IT math.IT
We address the problem of increasing the sum rate of the multiple-access system of [1] for a small number of users. We suggest an improved signal-code construction that allocates more resources to the users when their number is small. For the resulting multiple-access system a lower bound on the relative sum rate is derived; it is shown to be very close to the maximal relative sum rate of [1] even for a small number of users. The bound is obtained for decoding by exhaustive search. We also suggest reduced-complexity decoding and compare the maximal number of users under reduced-complexity decoding with that under decoding by exhaustive search.
1301.5871
Towards a faster symbolic aggregate approximation method
cs.DB cs.IR
The similarity search problem is one of the main problems in time series data mining. Traditionally, this problem was tackled by sequentially comparing the given query against all the time series in the database, and returning all the time series that are within a predetermined threshold of that query. But the large size and the high dimensionality of time series databases that are in use nowadays make that scenario inefficient. There are many representation techniques that aim at reducing the dimensionality of time series so that the search can be performed faster in a lower-dimensional space. The symbolic aggregate approximation (SAX) is one of the most competitive methods in the literature. In this paper we present a new method that improves the performance of SAX by adding another exclusion condition that increases the exclusion power. This method is based on two representations of the time series: one from SAX and another based on an optimal approximation of the time series. Pre-computed distances are calculated and stored offline to be used online to exclude a wide range of the search space using the two exclusion conditions. We conduct experiments which show that the new method is faster than SAX.
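The abstract's new exclusion condition builds on the standard SAX pipeline (z-normalisation, piecewise aggregate approximation, discretisation against standard-normal breakpoints). A minimal, self-contained sketch of that baseline pipeline follows; it is not the paper's new method, and the alphabet size of 4 and the example series are illustrative assumptions:

```python
import statistics
from bisect import bisect

# Breakpoints dividing N(0,1) into four equiprobable regions (alphabet size 4);
# these are the standard-normal quartiles.
BREAKPOINTS = [-0.6745, 0.0, 0.6745]

def paa(ts, n_segments):
    """Piecewise Aggregate Approximation: the mean of each equal-length segment."""
    n = len(ts)
    bounds = [round(i * n / n_segments) for i in range(n_segments + 1)]
    return [statistics.fmean(ts[bounds[i]:bounds[i + 1]]) for i in range(n_segments)]

def sax(ts, n_segments, breakpoints=BREAKPOINTS):
    """Z-normalise, reduce with PAA, then discretise against the breakpoints."""
    mu = statistics.fmean(ts)
    sd = statistics.pstdev(ts) or 1.0  # guard against constant series
    z = [(x - mu) / sd for x in ts]
    return [bisect(breakpoints, v) for v in paa(z, n_segments)]

print(sax([0, 1, 2, 3, 4, 5, 6, 7], 4))  # -> [0, 1, 2, 3]
```

A search method such as the one in the abstract would compare these symbol strings with a lower-bounding distance before touching the raw series, discarding candidates that cannot fall within the threshold.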
1301.5887
Counting Triangles in Massive Graphs with MapReduce
cs.SI cs.DC
Graphs and networks are used to model interactions in a variety of contexts. There is a growing need to quickly assess the characteristics of a graph in order to understand its underlying structure. Some of the most useful metrics are triangle-based and give a measure of the connectedness of mutual friends. This is often summarized in terms of clustering coefficients, which measure the likelihood that two neighbors of a node are themselves connected. Computing these measures exactly for large-scale networks is prohibitively expensive in both memory and time. However, a recent wedge sampling algorithm has proved successful in efficiently and accurately estimating clustering coefficients. In this paper, we describe how to implement this approach in MapReduce to deal with massive graphs. We show results on publicly-available networks, the largest of which is 132M nodes and 4.7B edges, as well as artificially generated networks (using the Graph500 benchmark), the largest of which has 240M nodes and 8.5B edges. We can estimate the clustering coefficient by degree bin (e.g., we use exponential binning) and the number of triangles per bin, as well as the global clustering coefficient and total number of triangles, in an average of 0.33 seconds per million edges plus overhead (approximately 225 seconds total for our configuration). The technique can also be used to study triangle statistics such as the ratio of the highest and lowest degree, and we highlight differences between social and non-social networks. To the best of our knowledge, these are the largest triangle-based graph computations published to date.
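The wedge-sampling estimator referenced above can be sketched in a few lines: sample wedges (two-edge paths) with centres chosen proportionally to their wedge counts, and use the fraction of closed wedges as the clustering-coefficient estimate. This is a single-machine illustration of the general technique, not the paper's MapReduce implementation; the sample size and toy graph are illustrative assumptions:

```python
import random
from collections import defaultdict

def clustering_coefficient(edges, n_samples=20000, seed=0):
    """Estimate the global clustering coefficient by wedge sampling: draw
    random wedges u-v-w (centred at v) and check whether the closing edge
    u-w exists; the hit rate estimates the coefficient."""
    rng = random.Random(seed)
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # A node of degree d is the centre of d*(d-1)/2 wedges; sample accordingly.
    centres = [v for v in adj if len(adj[v]) >= 2]
    weights = [len(adj[v]) * (len(adj[v]) - 1) // 2 for v in centres]
    closed = 0
    for _ in range(n_samples):
        v = rng.choices(centres, weights=weights)[0]
        u, w = rng.sample(sorted(adj[v]), 2)
        closed += w in adj[u]
    return closed / n_samples

# A triangle with one pendant edge: 5 wedges, 3 of them closed -> exact value 3/5.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
print(round(clustering_coefficient(edges), 2))  # close to 0.6
```

In the MapReduce setting described above, the sampling and the closing-edge checks are what get distributed across the cluster; the per-degree-bin variant simply restricts the sampled centres to one bin at a time.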
1301.5898
Phase Diagram and Approximate Message Passing for Blind Calibration and Dictionary Learning
cs.IT cond-mat.stat-mech cs.LG math.IT
We consider dictionary learning and blind calibration for signals and matrices created from a random ensemble. We study the mean-squared error in the limit of large signal dimension using the replica method and unveil the appearance of phase transitions delimiting impossible, possible-but-hard and possible inference regions. We also introduce an approximate message passing algorithm that asymptotically matches the theoretical performance, and show through numerical tests that, for the calibration problem, it performs very well for tractable system sizes.
1301.5912
Resource Allocation and Interference Mitigation Techniques for Cooperative Multi-Antenna and Spread Spectrum Wireless Networks
cs.IT math.IT
This chapter presents joint interference suppression and power allocation algorithms for DS-CDMA and MIMO networks with multiple hops and amplify-and-forward and decode-and-forward (DF) protocols. A scheme for joint allocation of power levels across the relays and linear interference suppression is proposed. We also consider another strategy for joint interference suppression and relay selection that maximizes the diversity available in the system. Simulations show that the proposed cross-layer optimization algorithms obtain significant gains in capacity and performance over existing schemes.
1301.5915
The Packing Radius of a Code and Partitioning Problems: the Case for Poset Metrics
cs.IT math.CO math.IT
Until this work, the packing radius of a poset code was only known in the cases where the poset was a chain, a hierarchy, a union of disjoint chains of the same size, and for some families of codes. Our objective is to approach the general case of any poset. To do this, we will divide the problem into two parts. The first part consists in finding the packing radius of a single vector. We will show that this is equivalent to a generalization of a famous NP-hard problem known as "the partition problem". Then, we will review the main results known about this problem giving special attention to the algorithms to solve it. The main ingredient to these algorithms is what is known as the differentiating method, and therefore, we will extend it to the general case. The second part consists in finding the vector that determines the packing radius of the code. For this, we will show how it is sometimes possible to compare the packing radius of two vectors without calculating them explicitly.
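The differencing method for the partition problem referenced above is, in its classical two-way form, the Karmarkar-Karp heuristic: repeatedly replace the two largest numbers by their difference, which commits them to opposite sides of the partition. A minimal sketch of that classical version only (not the authors' generalization to packing radii):

```python
import heapq

def kk_difference(nums):
    """Karmarkar-Karp differencing heuristic for the two-way partition
    problem: repeatedly replace the two largest numbers by their difference
    until one number remains; that number is the achieved set difference."""
    heap = [-x for x in nums]  # max-heap via negation
    heapq.heapify(heap)
    while len(heap) > 1:
        a = -heapq.heappop(heap)
        b = -heapq.heappop(heap)
        heapq.heappush(heap, -(a - b))
    return -heap[0]

print(kk_difference([8, 7, 6, 5, 4]))  # -> 2 (optimal is 0: {8, 7} vs {6, 5, 4})
```

As the example shows, the heuristic is fast but not exact, which is consistent with the NP-hardness of the underlying partition problem the abstract appeals to.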
1301.5937
A Tight Lower Bound on the Mutual Information of a Binary and an Arbitrary Finite Random Variable in Dependence of the Variational Distance
cs.IT math.IT
In this paper a numerical method is presented which finds a lower bound on the mutual information between a binary and an arbitrary finite random variable, over all joint distributions whose variational distance to a known joint distribution does not exceed a known value. This lower bound can be applied to mutual information estimation with confidence intervals.
1301.5938
Evolution of the Internet k-dense structure
cs.SI cs.NI physics.soc-ph
As the Internet AS-level topology grows over time, some of its structural properties remain unchanged. Such time-invariant properties are generally interesting, because they tend to reflect some fundamental processes or constraints behind Internet growth. As has been shown before, the time-invariant structural properties of the Internet include some most basic ones, such as the degree distribution or clustering. Here we add to this time-invariant list a non-trivial property - k-dense decomposition. This property is derived from a recursive form of edge multiplicity, defined as the number of triangles that share a given edge. We show that after proper normalization, the k-dense decomposition of the Internet has remained stable over the last decade, even though the Internet size has approximately doubled, and so has the k-density of its k-densest core. This core consists mostly of content providers peering at Internet eXchange Points, and it only loosely overlaps with the high-degree or high-rank AS core, consisting mostly of tier-1 transit providers. We thus show that high degrees and high k-densities reflect two different Internet-specific properties of ASes (transit versus content providers). As a consequence, even though degrees and k-densities of nodes are correlated, the relative fluctuations are strong, and related to that, random graphs with the same degree distribution or even degree correlations as in the Internet, do not reproduce its k-dense decomposition. Therefore an interesting open question is what Internet topology models or generators can fully explain or at least reproduce the k-dense properties of the Internet.
1301.5942
Confidence Intervals for the Mutual Information
cs.IT math.IT
By combining a bound on the absolute value of the difference of mutual information between two joint probability distributions with a fixed variational distance, and a bound on the probability of a maximal deviation in variational distance between a true joint probability distribution and an empirical joint probability distribution, confidence intervals for the mutual information of two random variables with finite alphabets are established. Unlike previous results, these intervals do not need any assumptions on the distribution or the sample size.
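The quantity being bounded above, the mutual information of two finite-alphabet random variables, can be computed directly from a joint pmf. A minimal sketch of that computation (the standard definition only, not the paper's confidence-interval construction):

```python
import math

def mutual_information(joint):
    """Mutual information (in bits) of a joint pmf given as a 2-D list,
    I(X;Y) = sum_{x,y} p(x,y) * log2( p(x,y) / (p(x) p(y)) )."""
    px = [sum(row) for row in joint]          # marginal of X (rows)
    py = [sum(col) for col in zip(*joint)]    # marginal of Y (columns)
    mi = 0.0
    for i, row in enumerate(joint):
        for j, p in enumerate(row):
            if p > 0:
                mi += p * math.log2(p / (px[i] * py[j]))
    return mi

print(mutual_information([[0.5, 0.0], [0.0, 0.5]]))    # perfectly correlated bits -> 1.0
print(mutual_information([[0.25, 0.25], [0.25, 0.25]]))  # independent bits -> 0.0
```

The confidence intervals in the abstract are obtained by applying such a computation to an empirical joint distribution and then bounding how far the true mutual information can deviate, given a bound on the variational distance between the empirical and the true distribution.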
1301.5943
Identifying Player's Strategies in No Limit Texas Hold'em Poker through the Analysis of Individual Moves
cs.AI cs.GT
The development of competitive artificial Poker playing agents has proven to be a challenge, because agents must deal with unreliable information and deception which make it essential to model the opponents in order to achieve good results. This paper presents a methodology to develop opponent modeling techniques for Poker agents. The approach is based on applying clustering algorithms to a Poker game database in order to identify player types based on their actions. First, common game moves were identified by clustering all players' moves. Then, player types were defined by calculating the frequency with which the players perform each type of movement. With the given dataset, 7 different types of players were identified with each one having at least one tactic that characterizes him. The identification of player types may improve the overall performance of Poker agents, because it helps the agents to predict the opponent's moves, by associating each opponent to a distinct cluster.
1301.5946
Computer Poker Research at LIACC
cs.AI
Computer Poker's unique characteristics present a well-suited challenge for research in artificial intelligence. For that reason, and due to the increasing popularity of Poker in Portugal since 2008, several members of LIACC have carried out research in this field. Several works were published as papers and master's theses, and more recently a member of LIACC began research in this area as a Ph.D. thesis in order to develop a more extensive and in-depth work. This paper describes the existing research at LIACC on Computer Poker, with special emphasis on the completed master's theses and plans for future work. It is meant to present a summary of the lab's work to the research community in order to encourage the exchange of ideas with other labs and individuals. LIACC hopes this will improve research in this area so as to reach the goal of creating an agent that surpasses the best human players.
1301.5952
Deterministic Constructions of Binary Measurement Matrices from Finite Geometry
cs.IT math.IT
Deterministic constructions of measurement matrices in compressed sensing (CS) are considered in this paper. The constructions are inspired by the recent discovery of Dimakis, Smarandache and Vontobel which says that parity-check matrices of good low-density parity-check (LDPC) codes can be used as provably good measurement matrices for compressed sensing under $\ell_1$-minimization. The performance of the proposed binary measurement matrices is analyzed mainly theoretically, with the help of methods and results from (finite geometry) LDPC codes. In particular, several lower bounds on the spark (i.e., the smallest number of columns that are linearly dependent, which completely characterizes the recovery performance of $\ell_0$-minimization) of general binary matrices and finite geometry matrices are obtained, and they improve the previously known results in most cases. Simulation results show that the proposed matrices perform comparably to, sometimes even better than, the corresponding Gaussian random matrices. Moreover, the proposed matrices are sparse, binary, and most of them have cyclic or quasi-cyclic structure, which will make the hardware realization convenient and easy.
1301.5954
QoS-Aware Transmission Policies for OFDM Bidirectional Decode-and-Forward Relaying
cs.IT math.IT
Two-way relaying can considerably improve spectral efficiency in relay-assisted bidirectional communications. However, the benefits and flexible structure of orthogonal frequency division multiplexing (OFDM)-based two-way decode-and-forward (DF) relay systems are much less exploited. Moreover, most existing works have not considered quality-of-service (QoS) provisioning for two-way relaying. In this paper, we consider the OFDM-based bidirectional transmission where a pair of users exchange information with or without the assistance of a single DF relay. Each user can communicate with the other via three transmission modes: direct transmission, one-way relaying, and two-way relaying. We jointly optimize the transmission policies, including power allocation, transmission mode selection, and subcarrier assignment, for maximizing the weighted sum rates of the two users with diverse QoS guarantees. We formulate the joint optimization problem as a mixed integer programming problem. By using the dual method, we efficiently solve the problem in an asymptotically optimal manner. Moreover, we derive the capacity region of two-way DF relaying in parallel channels. Simulation results show that the proposed resource-allocation scheme can substantially improve system performance compared with the conventional schemes. A number of interesting insights are also provided via comprehensive simulations.
1301.5961
New Lower Bounds for Constant Dimension Codes
cs.IT math.IT
This paper provides new constructive lower bounds for constant dimension codes, using different techniques such as Ferrers diagram rank metric codes and pending blocks. Constructions for two families of parameters of constant dimension codes are presented. The examples of codes obtained by these constructions are the largest known constant dimension codes for the given parameters.
1301.5973
Non-Adaptive Distributed Compression in Networks
cs.IT math.IT
In this paper, we discuss non-adaptive distributed compression of inter-node correlated real-valued messages. To do so, we discuss the performance of conventional packet forwarding via routing, in terms of the total network load versus the resulting quality of service (distortion level). As a better alternative for packet forwarding, we briefly describe our previously proposed one-step Quantized Network Coding (QNC), and make motivating arguments on its advantage when the appropriate marginal rates for distributed source coding are not available at the encoder source nodes. We also derive analytic guarantees on the resulting distortion of our one-step QNC scenario. Finally, we conclude the paper by providing a mathematical comparison between the total network loads of one-step QNC and conventional packet forwarding, showing a significant reduction in the case of one-step QNC.
1301.5979
Understanding metropolitan patterns of daily encounters
physics.soc-ph cs.SI physics.data-an
Understanding of the mechanisms driving our daily face-to-face encounters is still limited; the field lacks large-scale datasets describing both individual behaviors and their collective interactions. However, here, with the help of travel smart card data, we uncover such encounter mechanisms and structures by constructing a time-resolved in-vehicle social encounter network on public buses in a city (about 5 million residents). This is the first time that such a large network of encounters has been identified and analyzed. Using a population scale dataset, we find physical encounters display reproducible temporal patterns, indicating that repeated encounters are regular and identical. On an individual scale, we find that collective regularities dominate distinct encounters' bounded nature. An individual's encounter capability is rooted in his/her daily behavioral regularity, explaining the emergence of "familiar strangers" in daily life. Strikingly, we find individuals with repeated encounters are not grouped into small communities, but become strongly connected over time, resulting in a large, but imperceptible, small-world contact network or "structure of co-presence" across the whole metropolitan area. Revealing the encounter pattern and identifying this large-scale contact network are crucial to understanding the dynamics in patterns of social acquaintances, collective human behaviors, and -- particularly -- disclosing the impact of human behavior on various diffusion/spreading processes.
1301.5986
Autocorrelation and Linear Complexity of Quaternary Sequences of Period 2p Based on Cyclotomic Classes of Order Four
cs.IT math.IT
We examine the linear complexity and the autocorrelation properties of new quaternary cyclotomic sequences of period 2p. The sequences are constructed via the cyclotomic classes of order four.
1301.6011
A Framework for Intelligent Medical Diagnosis using Rough Set with Formal Concept Analysis
cs.AI
Medical diagnosis processes vary in the degree to which they attempt to deal with complicating aspects of diagnosis, such as the relative importance of symptoms, varied symptom patterns, and the relations between diseases themselves. Based on decision theory, many mathematical models such as crisp sets, probability distributions, fuzzy sets, and intuitionistic fuzzy sets were developed in the past to deal with these complicating aspects of diagnosis. However, many such models fail to include important aspects of expert decisions. Therefore, an effort was made by Pawlak to process inconsistencies in the data under consideration, with the introduction of rough set theory. Though rough set theory has major advantages over the other methods, it generates too many rules, which creates difficulties in decision making; it is therefore essential to minimize the decision rules. In this paper, we use two processes, a pre-process and a post-process, to mine suitable rules and to explore the relationships among the attributes. In the pre-process we use rough set theory to mine suitable rules, whereas in the post-process we use formal concept analysis on these suitable rules to explore better knowledge and the most important factors affecting decision making.
1301.6022
Improving the lifecycle of robotics components using Domain-Specific Languages
cs.RO cs.SE
There is currently a large amount of robotics software using the component-oriented programming paradigm. However, the rapid growth in number and complexity of components may compromise the scalability and the whole lifecycle of robotics software systems. Model-Driven Engineering can be used to mitigate these problems. This paper describes how using Domain-Specific Languages to generate and describe critical parts of robotic systems helps developers to perform component managerial tasks such as component creation, modification, monitoring and deployment. Four different DSLs are proposed in this paper: i) CDSL for specifying the structure of the components, ii) IDSL for the description of their interfaces, iii) DDSL for describing the deployment process of component networks and iv) PDSL to define and configure component parameters. Their benefits have been demonstrated after their implementation in RoboComp, a general-purpose and component-based robotics framework. Examples of the usage of these DSLs are shown along with experiments that demonstrate the benefits they bring to the lifecycle of the components.
1301.6039
Recycling Proof Patterns in Coq: Case Studies
cs.AI cs.LG cs.LO
Development of Interactive Theorem Provers has led to the creation of big libraries and varied infrastructures for formal proofs. However, despite (or perhaps due to) their sophistication, the re-use of libraries by non-experts or across domains is a challenge. In this paper, we provide detailed case studies and evaluate the machine-learning tool ML4PG built to interactively data-mine the electronic libraries of proofs, and to provide user guidance on the basis of proof patterns found in the existing libraries.
1301.6058
Weighted Last-Step Min-Max Algorithm with Improved Sub-Logarithmic Regret
cs.LG
In online learning the performance of an algorithm is typically compared, via a quantity called regret, to the performance of a fixed function from some class. Forster proposed a last-step min-max algorithm which was somewhat simpler than the algorithm of Vovk, yet with the same regret. In fact, the algorithm he analyzed assumed that the choices of the adversary are bounded, artificially yielding only the two extreme cases. We fix this problem by weighing the examples in such a way that the min-max problem is well defined, and provide an analysis with logarithmic regret that may have a better multiplicative factor than both the bounds of Forster and Vovk. We also derive a new bound that may be sub-logarithmic, like a recent bound of Orabona et al., but may have a better multiplicative factor. Finally, we analyze the algorithm in a weak type of non-stationary setting, and show a bound that is sub-linear if the non-stationarity is sub-linear as well.
1301.6063
Arbitrarily Small Amounts of Correlation for Arbitrarily Varying Quantum Channels
quant-ph cs.IT math-ph math.IT math.MP
As our main result we show that, in order to achieve the randomness-assisted message- and entanglement-transmission capacities of a finite arbitrarily varying quantum channel, it is not necessary that sender and receiver share (asymptotically perfect) common randomness. Rather, it is sufficient that they each have access to an unlimited number of uses of one part of a correlated bipartite source. This access might be restricted to an arbitrarily small (nonzero) fraction per channel use, without changing the main result. We investigate the notion of common randomness. It turns out that this is a very costly resource - generically, it cannot be obtained just by local processing of a bipartite source. This result underlines the importance of our main result. Also, the asymptotic equivalence of the maximal- and average-error criterion for classical message transmission over finite arbitrarily varying quantum channels is proven. Finally, we prove a simplified symmetrizability condition for finite arbitrarily varying quantum channels.
1301.6111
A Proof of Threshold Saturation for Spatially-Coupled LDPC Codes on BMS Channels
cs.IT math.IT
Low-density parity-check (LDPC) convolutional codes have been shown to exhibit excellent performance under low-complexity belief-propagation decoding [1], [2]. This phenomenon is now termed threshold saturation via spatial coupling. The underlying principle behind this appears to be very general and spatially-coupled (SC) codes have been successfully applied in numerous areas. Recently, SC regular LDPC codes have been proven to achieve capacity universally, over the class of binary memoryless symmetric (BMS) channels, under belief-propagation decoding [3], [4]. In [5], [6], potential functions are used to prove that the BP threshold of SC irregular LDPC ensembles saturates, for the binary erasure channel, to the conjectured MAP threshold (known as the Maxwell threshold) of the underlying irregular ensembles. In this paper, that proof technique is generalized to BMS channels, thereby extending some results of [4] to irregular LDPC ensembles. We also believe that this approach can be expanded to cover a wide class of graphical models whose message-passing rules are associated with a Bethe free energy.
1301.6117
Higher genus universally decodable matrices (UDMG)
cs.IT math.IT
We introduce the notion of Universally Decodable Matrices of Genus g (UDMG), which for g=0 reduces to the notion of Universally Decodable Matrices (UDM) introduced in [8]. A UDMG is a set of L matrices over a finite field, each with K rows, together with a linear independence condition satisfied by collections of K+g columns formed from the initial segments of the matrices. We consider the mathematical structure of UDMGs and their relation to linear vector codes. We then give a construction of UDMGs based on curves of genus g over the finite field, which is a natural generalization of the UDM constructed in [8]. We provide upper (and constructible lower) bounds for L in terms of K, q, g, and the number of columns of the matrices. We show there is a fundamental trade-off (Theorem 5.4) between L and g, akin to the Singleton bound for the minimal Hamming distance of linear vector codes.
1301.6118
X THEN X: Manipulation of Same-System Runoff Elections
cs.GT cs.CC cs.MA
Do runoff elections, using the same voting rule as the initial election but just on the winning candidates, increase or decrease the complexity of manipulation? Does allowing revoting in the runoff increase or decrease the complexity relative to just having a runoff without revoting? For both weighted and unweighted voting, we show that even for election systems with simple winner problems the complexity of manipulation, manipulation with runoffs, and manipulation with revoting runoffs are independent, in the abstract. On the other hand, for some important, well-known election systems we determine what holds for each of these cases. For no such systems do we find runoffs lowering complexity, and for some we find that runoffs raise complexity. Ours is the first paper to show that for natural, unweighted election systems, runoffs can increase the manipulation complexity.
1301.6120
A Rate-Splitting Approach to Fading Channels with Imperfect Channel-State Information
cs.IT math.IT
As shown by M\'edard, the capacity of fading channels with imperfect channel-state information (CSI) can be lower-bounded by assuming a Gaussian channel input $X$ with power $P$ and by upper-bounding the conditional entropy $h(X|Y,\hat{H})$ by the entropy of a Gaussian random variable with variance equal to the linear minimum mean-square error in estimating $X$ from $(Y,\hat{H})$. We demonstrate that, using a rate-splitting approach, this lower bound can be sharpened: by expressing the Gaussian input $X$ as the sum of two independent Gaussian variables $X_1$ and $X_2$ and by applying M\'edard's lower bound first to bound the mutual information between $X_1$ and $Y$ while treating $X_2$ as noise, and by applying it a second time to the mutual information between $X_2$ and $Y$ while assuming $X_1$ to be known, we obtain a capacity lower bound that is strictly larger than M\'edard's lower bound. We then generalize this approach to an arbitrary number $L$ of layers, where $X$ is expressed as the sum of $L$ independent Gaussian random variables of respective variances $P_{\ell}$, $\ell = 1,\dotsc,L$ summing up to $P$. Among all such rate-splitting bounds, we determine the supremum over power allocations $P_\ell$ and total number of layers $L$. This supremum is achieved for $L\to\infty$ and gives rise to an analytically expressible capacity lower bound. For Gaussian fading, this novel bound is shown to converge to the Gaussian-input mutual information as the signal-to-noise ratio (SNR) grows, provided that the variance of the channel estimation error $H-\hat{H}$ tends to zero as the SNR tends to infinity.
1301.6125
Flaglets: Exact Wavelets on the Ball
cs.IT astro-ph.IM math.IT
We summarise the construction of exact axisymmetric scale-discretised wavelets on the sphere and on the ball. The wavelet transform on the ball relies on a novel 3D harmonic transform called the Fourier-Laguerre transform which combines the spherical harmonic transform with damped Laguerre polynomials on the radial half-line. The resulting wavelets, called flaglets, extract scale-dependent, spatially localised features in three-dimensions while treating the tangential and radial structures separately. Both the Fourier-Laguerre and the flaglet transforms are theoretically exact thanks to a novel sampling theorem on the ball. Our implementation of these methods is publicly available and achieves floating-point accuracy when applied to band-limited signals.
1301.6150
Polar Codes For Broadcast Channels
cs.IT math.IT
Polar codes are introduced for discrete memoryless broadcast channels. For $m$-user deterministic broadcast channels, polarization is applied to map uniformly random message bits from $m$ independent messages to one codeword while satisfying broadcast constraints. The polarization-based codes achieve rates on the boundary of the private-message capacity region. For two-user noisy broadcast channels, polar implementations are presented for two information-theoretic schemes: i) Cover's superposition codes; ii) Marton's codes. Due to the structure of polarization, constraints on the auxiliary and channel-input distributions are identified to ensure proper alignment of polarization indices in the multi-user setting. The codes achieve rates on the capacity boundary of a few classes of broadcast channels (e.g., binary-input stochastically degraded). The complexity of encoding and decoding is $O(n \log n)$ where $n$ is the block length. In addition, polar code sequences obtain a stretched-exponential decay of $O(2^{-n^{\beta}})$ of the average block error probability where $0 < \beta < 0.5$.
1301.6157
High-Rate Regenerating Codes Through Layering
cs.IT math.IT
In this paper, we provide explicit constructions for a class of exact-repair regenerating codes that possess a layered structure. These regenerating codes correspond to interior points on the storage-repair-bandwidth tradeoff, and compare very well with a scheme that employs space-sharing between MSR and MBR codes. For the parameter set $(n,k,d=k)$ with $n < 2k-1$, we construct a class of codes with an auxiliary parameter $w$, referred to as canonical codes. With $w$ in the range $n-k < w < k$, these codes operate in the region between the MSR point and the MBR point, and perform significantly better than the space-sharing line. They only require a field size greater than $w+n-k$. For the case of $(n,n-1,n-1)$, canonical codes can also be shown to achieve an interior point on the line segment joining the MSR point and the next point of slope discontinuity on the storage-repair-bandwidth tradeoff. Thus we establish the existence of exact-repair codes at a point other than the MSR and MBR points on the storage-repair-bandwidth tradeoff. We also construct layered regenerating codes for the general parameter set $(n,k<d,k)$, which we refer to as non-canonical codes. These codes also perform significantly better than the space-sharing line, though they require a significantly larger field size. All the codes constructed in this paper are high-rate, can repair multiple node failures and do not require any computation at the helper nodes. We also construct optimal codes with locality in which the local codes are layered regenerating codes.
1301.6190
Blahut-Arimoto Algorithm and Code Design for Action-Dependent Source Coding Problems
cs.IT math.IT
The source coding problem with action-dependent side information at the decoder has recently been introduced to model data acquisition in resource-constrained systems. In this paper, an efficient algorithm for numerical computation of the rate-distortion-cost function for this problem is proposed, and a convergence proof is provided. Moreover, a two-stage code design based on multiplexing is put forth, whereby the first stage encodes the actions and the second stage is composed of an array of classical Wyner-Ziv codes, one for each action. Specific coding/decoding strategies are designed based on LDGM codes and message passing. Through numerical examples, the proposed code design is shown to achieve performance close to the lower bound dictated by the rate-distortion-cost function.
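The algorithm proposed in this paper belongs to the Blahut-Arimoto family. As background, a minimal sketch of the classical Blahut-Arimoto iteration for the ordinary rate-distortion function (without actions or side information, so not the paper's extension) looks like this:

```python
import math

def blahut_arimoto_rd(p_x, dist, beta, iters=200):
    """Classical Blahut-Arimoto iteration for the rate-distortion function.

    p_x:  source distribution over X (list of floats)
    dist: distortion matrix dist[x][y]
    beta: Lagrange multiplier (> 0) trading rate against distortion
    Returns (rate_bits, distortion), a point on the R(D) curve for this beta.
    """
    nx, ny = len(p_x), len(dist[0])
    q = [1.0 / ny] * ny                      # output marginal q(y), init uniform
    for _ in range(iters):
        # update conditional Q(y|x) proportional to q(y) * exp(-beta * d(x,y))
        Q = []
        for x in range(nx):
            row = [q[y] * math.exp(-beta * dist[x][y]) for y in range(ny)]
            s = sum(row)
            Q.append([v / s for v in row])
        # update marginal q(y) = sum_x p(x) Q(y|x)
        q = [sum(p_x[x] * Q[x][y] for x in range(nx)) for y in range(ny)]
    rate = sum(p_x[x] * Q[x][y] * math.log2(Q[x][y] / q[y])
               for x in range(nx) for y in range(ny) if Q[x][y] > 0)
    D = sum(p_x[x] * Q[x][y] * dist[x][y] for x in range(nx) for y in range(ny))
    return rate, D
```

For a uniform binary source with Hamming distortion, the returned point satisfies the known closed form R(D) = 1 - h(D), where h is the binary entropy function.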
1301.6191
Reuse, Temporal Dynamics, Interest Sharing, and Collaboration in Social Tagging Systems
cs.IR cs.DL cs.SI physics.soc-ph
User-generated content is shaping the dynamics of the World Wide Web. Indeed, an increasingly large number of systems provide mechanisms to support the growing demand for content creation, sharing, and management. Tagging systems are a particular class of these systems where users share and collaboratively annotate content such as photos and URLs. This collaborative behavior and the pool of user-generated metadata create opportunities to improve existing systems and to design new mechanisms. However, to realize this potential, it is necessary to understand the usage characteristics of current systems. This work addresses this issue by characterizing three tagging systems (CiteULike, Connotea and del.icio.us) while focusing on three aspects: i) the patterns of information (tags and items) production; ii) the temporal dynamics of users' tag vocabularies; and, iii) the social aspects of tagging systems.
1301.6196
On the Number of Interference Alignment Solutions for the K-User MIMO Channel with Constant Coefficients
cs.IT math.IT
In this paper, we study the number of different interference alignment (IA) solutions in a K-user multiple-input multiple-output (MIMO) interference channel, when the alignment is performed via beamforming and no symbol extensions are allowed. We focus on the case where the number of IA equations matches the number of variables. In this situation, the number of IA solutions is finite and constant for any channel realization out of a zero-measure set and, as we prove in the paper, it is given by an integral formula that can be numerically approximated using Monte Carlo integration methods. More precisely, the number of alignment solutions is the scaled average of the determinant of a certain Hermitian matrix related to the geometry of the problem. Interestingly, while the value of this determinant at an arbitrary point can be used to check the feasibility of the IA problem, its average (properly scaled) gives the number of solutions. For single-beam systems the asymptotic growth rate of the number of solutions is analyzed and some connections with classical combinatorial problems are presented. Nonetheless, our results can be applied to arbitrary interference MIMO networks, with any number of users, antennas and streams per user.
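The key computational idea above, expressing a quantity as an integral and approximating it by a scaled sample average, is the generic Monte Carlo integration method. A toy one-dimensional sketch (with a simple scalar integrand rather than the paper's determinant) is:

```python
import math
import random

def mc_integral(f, a, b, n, seed=0):
    """Estimate integral_a^b f(x) dx as (b - a) times the sample average
    of f at uniformly drawn points -- the 'integral as a scaled average'
    idea, in its simplest one-dimensional form."""
    rng = random.Random(seed)
    total = sum(f(rng.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n
```

For example, estimating the Gaussian integral of exp(-x^2) over [-5, 5] (whose value is essentially sqrt(pi) ~ 1.7725) with a few hundred thousand samples lands within a few hundredths of the true value.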
1301.6198
Approximate Sum-Capacity of K-user Cognitive Interference Channels with Cumulative Message Sharing
cs.IT math.IT
This paper considers the K-user cognitive interference channel with one primary and K-1 secondary/cognitive transmitters with a cumulative message sharing structure, i.e., cognitive transmitter $i\in [2:K]$ knows non-causally all messages of the users with index less than $i$. We propose a computable outer bound valid for any memoryless channel. We first evaluate the sum-rate outer bound for the high-SNR linear deterministic approximation of the Gaussian noise channel. This is shown to be the capacity for the 3-user channel with arbitrary channel gains and the sum-capacity for the symmetric K-user channel. Interestingly, for the K-user channel, having only the K-th cognitive transmitter know all the other messages is sufficient to achieve capacity, i.e., cognition at transmitters 2 to K-1 is not needed. Next, the sum-capacity of the symmetric Gaussian noise channel is characterized to within a constant additive and multiplicative gap. The proposed achievable scheme for the additive gap is based on dirty-paper coding and can be thought of as a MIMO-broadcast scheme in which only one encoding order is possible due to the message sharing structure. As opposed to other multiuser interference channel models, a single scheme suffices for both the weak and strong interference regimes. With this scheme, the generalized degrees of freedom (gDoF) is shown to be a function of K, in contrast to the non-cognitive case and the broadcast channel case. Interestingly, it is shown that as the number of users grows to infinity, the gDoF of the K-user cognitive interference channel with cumulative message sharing tends to the gDoF of a broadcast channel with a K-antenna transmitter and K single-antenna receivers. The analytical additive and multiplicative gaps are a function of the number of users. Numerical evaluations of inner and outer bounds show that the actual gap is smaller than the analytical one.
1301.6199
Sample Complexity of Bayesian Optimal Dictionary Learning
cs.LG cond-mat.dis-nn cond-mat.stat-mech cs.IT math.IT
We consider the learning problem of identifying an M times N dictionary matrix D from a sample set of M-dimensional vectors Y = N^{-1/2} DX, where X is an N times P sparse matrix in which the density of non-zero entries is 0 < rho < 1. In particular, we focus on the minimum sample size P_c (sample complexity) necessary for perfectly identifying D under the optimal learning scheme when D and X are independently generated from certain distributions. Using the replica method of statistical mechanics, we show that P_c = O(N) holds as long as alpha = M/N > rho is satisfied in the limit of N to infinity. Our analysis also implies that the posterior distribution given Y is condensed only at the correct dictionary D when the compression rate alpha is greater than a certain critical value alpha_M(rho). This suggests that belief propagation may allow us to learn D with low computational complexity using O(N) samples.
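The planted model Y = N^{-1/2} DX with sparse X can be made concrete with a short data generator. This is only a sketch of the generative setup described above (with Gaussian entries assumed for both D and the non-zeros of X, which is one natural instance of the "certain distributions"):

```python
import numpy as np

def generate_dictionary_data(M, N, P, rho, seed=0):
    """Draw D (M x N) and a sparse X (N x P) independently and form
    Y = N^{-1/2} D X, the planted dictionary-learning model above.
    Non-zero entries of X occur with density ~rho and are standard normal."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((M, N))
    mask = rng.random((N, P)) < rho          # Bernoulli(rho) support pattern
    X = mask * rng.standard_normal((N, P))
    Y = (D @ X) / np.sqrt(N)
    return D, X, Y
```

In the regime alpha = M/N > rho, the analysis above predicts that P_c = O(N) such columns of Y suffice for exact recovery of D in the large-N limit.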
1301.6209
On the achievable region for interference networks with point-to-point codes
cs.IT math.IT
This paper studies the evaluation of the capacity region for interference networks with point-to-point (p2p) capacity-achieving codes. This capacity region has recently been characterized as the union of several sub-regions, each of which has distinctive operational characteristics. Detailed evaluation of this region can therefore be accomplished in a very simple manner by exploiting these characteristics, which in turn suggests a simple implementation scenario. A completely generalized message assignment, which is also practically relevant, is considered in this paper and shown to provide strictly larger achievable rates than the traditional message assignment when a receiver with joint decoding capability is used.
1301.6230
Numerical homotopy continuation for control and online identification of nonlinear systems: the survey of selected results
math.OC cs.SY
The article gives an overview of the numerical parameter-continuation methodology applied to setpoint control and parameter identification of nonlinear systems. Control problems for affine systems as well as general (non-affine) nonlinear systems are considered. Online parameter identification is also presented in two versions: with linear and with nonlinear non-convex parameterization. Simulation results for illustrative examples are shown.
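The basic mechanism behind the methodology surveyed above can be sketched on a scalar root-finding problem (a deliberately minimal example, not the control or identification schemes of the article): trace the convex homotopy H(x, t) = t f(x) + (1 - t)(x - x0) from its known root x0 at t = 0 to a root of f at t = 1, correcting with Newton steps along the way.

```python
def homotopy_solve(f, df, x0, steps=100, newton_iters=5):
    """Solve f(x) = 0 by numerical continuation along the convex homotopy
    H(x, t) = t*f(x) + (1-t)*(x - x0), from t = 0 (root x0) to t = 1.
    At each step t is increased and x is corrected by Newton iterations."""
    x = x0
    for k in range(1, steps + 1):
        t = k / steps
        for _ in range(newton_iters):
            h = t * f(x) + (1 - t) * (x - x0)   # homotopy residual
            dh = t * df(x) + (1 - t)            # its derivative in x
            x -= h / dh                         # Newton corrector
    return x
```

For instance, tracing from x0 = 1 reaches the real root of f(x) = x^3 - 2, i.e. the cube root of 2, without needing a good initial guess for Newton's method on f itself.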
1301.6231
Generalizing Bounds on the Minimum Distance of Cyclic Codes Using Cyclic Product Codes
cs.IT math.IT
Two generalizations of the Hartmann--Tzeng (HT) bound on the minimum distance of q-ary cyclic codes are proposed. The first is proven by embedding the given cyclic code into a cyclic product code. Furthermore, we show that unique decoding up to this bound is always possible, and we outline a quadratic-time syndrome-based error decoding algorithm. The second bound is stronger, and its proof is more involved. Our technique of embedding the code into a cyclic product code can be applied to other bounds as well, and therefore generalizes them.
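Minimum-distance bounds of this kind can be sanity-checked on tiny codes by brute force. A sketch for binary cyclic codes (using the classical [7,4] code with g(x) = x^3 + x + 1 as the worked example, not the paper's generalized bound):

```python
from itertools import product

def min_distance_cyclic(g, n):
    """Brute-force the minimum distance of the binary cyclic code of
    length n generated by g(x), given as a coefficient list with the
    lowest-degree coefficient first. Feasible only for small k."""
    k = n - (len(g) - 1)                      # code dimension
    best = n
    for msg in product([0, 1], repeat=k):
        if not any(msg):
            continue                          # skip the zero codeword
        cw = [0] * n                          # codeword = msg(x) * g(x) mod 2
        for i, m in enumerate(msg):
            if m:
                for j, gj in enumerate(g):
                    cw[i + j] ^= gj
        best = min(best, sum(cw))             # Hamming weight
    return best
```

For g = [1, 1, 0, 1] and n = 7 this returns 3, matching the BCH bound for that code: its defining set contains two consecutive roots, so d >= 3, and the bound is tight here.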
1301.6236
Multi-Trial Guruswami--Sudan Decoding for Generalised Reed--Solomon Codes
cs.IT math.IT
An iterated refinement procedure for the Guruswami--Sudan list decoding algorithm for Generalised Reed--Solomon codes based on Alekhnovich's module minimisation is proposed. The method is parametrisable and allows variants of the usual list decoding approach. In particular, finding the list of \emph{closest} codewords within an intermediate radius can be performed with improved average-case complexity while retaining the worst-case complexity.
1301.6255
Information Loss due to Finite Block Length in a Gaussian Line Network: An Improved Bound
cs.IT math.IT math.PR stat.AP
A bound on the maximum information transmission rate through a cascade of Gaussian links is presented. The network model consists of a source node attempting to send a message drawn from a finite alphabet to a sink, through a cascade of additive white Gaussian noise links, each having an input power constraint. Intermediate nodes are allowed to perform arbitrary encoding/decoding operations, but the block length and the encoding rate are fixed. The bound presented in this paper is fundamental and depends only on the design parameters, namely the network size, block length, transmission rate, and signal-to-noise ratio.
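As background for why finite block length costs rate on each hop, the classical single-link normal approximation R ~ C - sqrt(V/n) Q^{-1}(eps) for the AWGN channel can be computed directly (this is the standard single-link result, shown only for context; it is not the cascade bound of this paper):

```python
import math
from statistics import NormalDist

def awgn_normal_approx_rate(snr, n, eps):
    """Normal approximation for a single AWGN link:
    R ~ C - sqrt(V/n) * Qinv(eps), the backoff from capacity C (bits per
    channel use) due to finite block length n at block error probability eps."""
    C = 0.5 * math.log2(1 + snr)                                   # capacity
    V = (snr * (snr + 2) / (2 * (snr + 1) ** 2)) * math.log2(math.e) ** 2
    q_inv = NormalDist().inv_cdf(1 - eps)                          # Qinv(eps)
    return C - math.sqrt(V / n) * q_inv
```

At 0 dB (snr = 1), capacity is 0.5 bit per channel use, and at n = 1000 with eps = 1e-3 the approximation gives roughly 0.41 bits, illustrating the finite-block-length penalty that compounds across a cascade of links.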