Dataset columns:
  id: string (length 9 to 16)
  title: string (length 4 to 278)
  categories: list (1 to 13 items)
  abstract: string (length 3 to 4.08k characters)
  filtered_category_membership: dict
1002.2780
Collaborative Filtering in a Non-Uniform World: Learning with the Weighted Trace Norm
[ "cs.LG" ]
We show that matrix completion with trace-norm regularization can be significantly hurt when entries of the matrix are sampled non-uniformly. We introduce a weighted version of the trace-norm regularizer that also works well with non-uniform sampling. Our experimental results demonstrate that the weighted trace-norm regularization indeed yields significant gains on the (highly non-uniformly sampled) Netflix dataset.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.2813
Distributed Rate Allocation for Wireless Networks
[ "cs.IT", "cs.NI", "math.IT" ]
This paper develops a distributed algorithm for rate allocation in wireless networks that achieves the same throughput region as optimal centralized algorithms. This cross-layer algorithm jointly performs medium access control (MAC) and physical-layer rate adaptation. The paper establishes that this algorithm is throughput-optimal for general rate regions. In contrast to on-off scheduling, rate allocation enables optimal utilization of physical-layer schemes by scheduling multiple rate levels. The algorithm is based on local queue-length information, and thus the algorithm is of significant practical value. The algorithm requires that each link can determine the global feasibility of increasing its current data-rate. In many classes of networks, any one link's data-rate primarily impacts its neighbors and this impact decays with distance. Hence, local exchanges can provide the information needed to determine feasibility. Along these lines, the paper discusses the potential use of existing physical-layer control messages to determine feasibility. This can be considered as a technique analogous to carrier sensing in CSMA (Carrier Sense Multiple Access) networks. An important application of this algorithm is in multiple-band multiple-radio throughput-optimal distributed scheduling for white-space networks.
{ "Other": 1, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.2858
PageRank: Standing on the shoulders of giants
[ "cs.IR", "cs.DL" ]
PageRank is a Web page ranking technique that has been a fundamental ingredient in the development and success of the Google search engine. The method is still one of the many signals that Google uses to determine which pages are most important. The main idea behind PageRank is to determine the importance of a Web page in terms of the importance assigned to the pages hyperlinking to it. In fact, this thesis is not new, and has been previously successfully exploited in different contexts. We review the PageRank method and link it to some renowned previous techniques that we have found in the fields of Web information retrieval, bibliometrics, sociometry, and econometrics.
{ "Other": 1, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 1, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.2897
Model-Driven Constraint Programming
[ "cs.AI" ]
Constraint programming can definitely be seen as a model-driven paradigm. The users write programs for modeling problems. These programs are mapped to executable models to calculate the solutions. This paper focuses on efficient model management (definition and transformation). From this point of view, we propose to revisit the design of constraint-programming systems. A model-driven architecture is introduced to map solving-independent constraint models to solving-dependent decision models. Several important questions are examined, such as the need for a visual high-level modeling language, and the quality of metamodeling techniques to implement the transformations. A main result is the s-COMMA platform that efficiently implements the chain from modeling to solving constraint problems.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.2928
Reconstruction of signals with unknown spectra in information field theory with parameter uncertainty
[ "astro-ph.IM", "astro-ph.CO", "cs.IT", "math.IT", "physics.data-an", "stat.ME" ]
The optimal reconstruction of cosmic metric perturbations and other signals requires knowledge of their power spectra and other parameters. If these are not known a priori, they have to be measured simultaneously from the same data used for the signal reconstruction. We formulate the general problem of signal inference in the presence of unknown parameters within the framework of information field theory. We develop a generic parameter uncertainty renormalized estimation (PURE) technique and address the problem of reconstructing Gaussian signals with unknown power spectrum with five different approaches: (i) separate maximum-a-posteriori power spectrum measurement and subsequent reconstruction, (ii) maximum-a-posteriori signal reconstruction with marginalized power spectrum, (iii) maximizing the joint posterior of signal and spectrum, (iv) guessing the spectrum from the variance in the Wiener filter map, and (v) renormalization flow analysis of the field theoretical problem providing the PURE filter. In all cases, the reconstruction can be described or approximated as Wiener filter operations with assumed signal spectra derived from the data according to the same recipe, but with differing coefficients. All of these filters, except the renormalized one, exhibit a perception threshold in the case of a Jeffreys prior for the unknown spectrum. Data modes with variance below this threshold do not affect the signal reconstruction at all. Filter (iv) seems to be similar to the so-called Karhunen-Loeve and Feldman-Kaiser-Peacock estimators for galaxy power spectra used in cosmology, which therefore should also exhibit a marginal perception threshold if correctly implemented. We present statistical performance tests and show that the PURE filter is superior to the others.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.2959
Geometric approach to sampling and communication
[ "cs.IT", "cs.CV", "math.DG", "math.IT" ]
Relationships that exist between the classical, Shannon-type, and geometric-based approaches to sampling are investigated. Some aspects of coding and communication through a Gaussian channel are considered. In particular, a constructive method to determine the quantizing dimension in Zador's theorem is provided. A geometric version of Shannon's Second Theorem is introduced. Applications to Pulse Code Modulation and Vector Quantization of Images are addressed.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 1, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.2964
Open vs Closed Access Femtocells in the Uplink
[ "cs.IT", "math.IT" ]
Femtocells are assuming an increasingly important role in the coverage and capacity of cellular networks. In contrast to existing cellular systems, femtocells are end-user deployed and controlled, randomly located, and rely on third party backhaul (e.g. DSL or cable modem). Femtocells can be configured to be either open access or closed access. Open access allows an arbitrary nearby cellular user to use the femtocell, whereas closed access restricts the use of the femtocell to users explicitly approved by the owner. Seemingly, the network operator would prefer an open access deployment since this provides an inexpensive way to expand their network capabilities, whereas the femtocell owner would prefer closed access, in order to keep the femtocell's capacity and backhaul to himself. We show mathematically and through simulations that the reality is more complicated for both parties, and that the best approach depends heavily on whether the multiple access scheme is orthogonal (TDMA or OFDMA, per subband) or non-orthogonal (CDMA). In a TDMA/OFDMA network, closed access is typically preferable at high user densities, whereas in CDMA, open access can provide gains of more than 200% for the home user by reducing the near-far problem experienced by the femtocell. The results of this paper suggest that the interests of the femtocell owner and the network operator are more compatible than typically believed, and that CDMA femtocells should be configured for open access whereas OFDMA or TDMA femtocells should adapt to the cellular user density.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.2966
Nonbinary Quantum Cyclic and Subsystem Codes Over Asymmetrically-decohered Quantum Channels
[ "cs.IT", "math.IT" ]
Quantum computers are theoretically able to solve certain problems more quickly than any deterministic or probabilistic computer. A quantum computer exploits the rules of quantum mechanics to speed up computations. However, the resulting noise and decoherence effects must be mitigated to avoid computational errors in order to successfully build quantum computers. In this paper, we construct asymmetric quantum codes to protect quantum information over asymmetric quantum channels, $\Pr Z \geq \Pr X$. Two generic methods are presented to derive asymmetric quantum cyclic codes using the generator polynomials and defining sets of classical cyclic codes. Consequently, the methods allow us to construct several families of quantum BCH, RS, and RM codes over asymmetric quantum channels. Finally, the methods are used to construct families of asymmetric subsystem codes.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.2971
Erasure Multiple Descriptions
[ "cs.IT", "math.IT" ]
We consider a binary erasure version of the n-channel multiple descriptions problem with symmetric descriptions, i.e., the rates of the n descriptions are the same and the distortion constraint depends only on the number of messages received. We consider the case where there is no excess rate for every k out of n descriptions. Our goal is to characterize the achievable distortions D_1, D_2,...,D_n. We measure the fidelity of reconstruction using two distortion criteria: an average-case distortion criterion, under which distortion is measured by taking the average of the per-letter distortion over all source sequences, and a worst-case distortion criterion, under which distortion is measured by taking the maximum of the per-letter distortion over all source sequences. We present achievability schemes, based on random binning for average-case distortion and systematic MDS (maximum distance separable) codes for worst-case distortion, and prove optimality results for the corresponding achievable distortion regions. We then use the binary erasure multiple descriptions setup to propose a layered coding framework for multiple descriptions, which we then apply to vector Gaussian multiple descriptions and prove its optimality for symmetric scalar Gaussian multiple descriptions with two levels of receivers and no excess rate for the central receiver. We also prove a new outer bound for the general multi-terminal source coding problem and use it to prove an optimality result for the robust binary erasure CEO problem. For the latter, we provide a tight lower bound on the distortion for \ell messages for any coding scheme that achieves the minimum achievable distortion for k messages where k is less than or equal to \ell.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.3023
Rewriting Constraint Models with Metamodels
[ "cs.AI" ]
An important challenge in constraint programming is to rewrite constraint models into executable programs calculating the solutions. This phase of constraint processing may require translations between constraint programming languages, transformations of constraint representations, model optimizations, and tuning of solving strategies. In this paper, we introduce a pivot metamodel describing the common features of constraint models including different kinds of constraints, statements like conditionals and loops, and other first-class elements like object classes and predicates. This metamodel is general enough to cope with the constructions of many languages, from object-oriented modeling languages to logic languages, but it is independent from them. The rewriting operations manipulate metamodel instances apart from languages. As a consequence, the rewriting operations apply whatever languages are selected and they are able to manage model semantic information. A bridge is created between the metamodel space and languages using parsing techniques. Tools from the software engineering world can be useful to implement this framework.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.3024
Bounds for binary codes relative to pseudo-distances of k points
[ "cs.IT", "math.IT" ]
We apply Schrijver's semidefinite programming method to obtain improved upper bounds on generalized distances and list decoding radii of binary codes.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.3047
On the Non-Coherent Wideband Multipath Fading Relay Channel
[ "cs.IT", "math.IT" ]
We investigate the multipath fading relay channel in the limit of a large bandwidth, and in the non-coherent setting, where the channel state is unknown to all terminals, including the relay and the destination. We propose a hypergraph model of the wideband multipath fading relay channel, and show that its min-cut is achieved by a non-coherent peaky frequency binning scheme. The so-obtained lower bound on the capacity of the wideband multipath fading relay channel turns out to coincide with the block-Markov lower bound on the capacity of the wideband frequency-division Gaussian (FD-AWGN) relay channel. In certain cases, this achievable rate also meets the cut-set upper-bound, and thus reaches the capacity of the non-coherent wideband multipath fading relay channel.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.3065
Linear Capacity Scaling in Wireless Networks: Beyond Physical Limits?
[ "cs.IT", "math.IT" ]
We investigate the role of cooperation in wireless networks subject to a spatial degrees of freedom limitation. To address the worst case scenario, we consider a free-space line-of-sight type environment with no scattering and no fading. We identify three qualitatively different operating regimes that are determined by how the area of the network A, normalized with respect to the wavelength lambda, compares to the number of users n. In networks with sqrt{A}/lambda < sqrt{n}, the limitation in spatial degrees of freedom does not allow achieving a capacity scaling better than sqrt{n}, and this performance can be readily achieved by multi-hopping. This result has been recently shown by Franceschetti et al. However, for networks with sqrt{A}/lambda > sqrt{n}, the number of available degrees of freedom is min(n, sqrt{A}/lambda), larger than what can be achieved by multi-hopping. We show that the optimal capacity scaling in this regime is achieved by hierarchical cooperation. In particular, in networks with sqrt{A}/lambda > n, hierarchical cooperation can achieve linear scaling.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.3078
Using ATL to define advanced and flexible constraint model transformations
[ "cs.AI" ]
Transforming constraint models is an important task in recent constraint programming systems. User-understandable models are defined during the modeling phase, but rewriting or tuning them is mandatory to get solving-efficient models. We propose a new architecture that allows defining bridges between any (modeling or solver) languages and implementing model optimizations. This architecture follows a model-driven approach where the constraint modeling process is seen as a set of model transformations. Among others, an interesting feature is the definition of transformations as concept-oriented rules, i.e. based on types of model elements where the types are organized into a hierarchy called a metamodel.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.3086
Convergence of Bayesian Control Rule
[ "cs.AI", "cs.LG" ]
Recently, new approaches to adaptive control have sought to reformulate the problem as a minimization of a relative entropy criterion to obtain tractable solutions. In particular, it has been shown that minimizing the expected deviation from the causal input-output dependencies of the true plant leads to a new promising stochastic control rule called the Bayesian control rule. This work proves the convergence of the Bayesian control rule under two sufficient assumptions: boundedness, which is an ergodicity condition; and consistency, which is an instantiation of the sure-thing principle.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.3117
LP Decoding of Regular LDPC Codes in Memoryless Channels
[ "cs.IT", "math.IT" ]
We study error bounds for linear programming decoding of regular LDPC codes. For memoryless binary-input output-symmetric channels, we prove bounds on the word error probability that are inverse doubly-exponential in the girth of the factor graph. For the memoryless binary-input AWGN channel, we prove lower bounds on the threshold for regular LDPC codes whose factor graphs have logarithmic girth under LP-decoding. Specifically, we prove a lower bound of $\sigma=0.735$ (upper bound of $\frac{E_b}{N_0}=2.67$dB) on the threshold of $(3,6)$-regular LDPC codes whose factor graphs have logarithmic girth. Our proof is an extension of a recent paper of Arora, Daskalakis, and Steurer [STOC 2009] who presented a novel probabilistic analysis of LP decoding over a binary symmetric channel. Their analysis is based on the primal LP representation and has an explicit connection to message passing algorithms. We extend this analysis to any MBIOS channel.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.3174
A new approach to content-based file type detection
[ "cs.LG", "cs.AI" ]
File type identification and file type clustering can be difficult tasks of increasing importance in the field of computer and network security. Classical methods of file type detection, including file extensions and magic bytes, can be easily spoofed. Content-based file type detection is a newer approach that has recently received attention. In this paper, a new content-based method for file type detection and file type clustering is proposed, based on PCA and neural networks. The proposed method achieves good accuracy and is fast enough.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.3183
A Complete Characterization of Statistical Query Learning with Applications to Evolvability
[ "cs.CC", "cs.LG" ]
The statistical query (SQ) learning model of Kearns (1993) is a natural restriction of the PAC learning model in which a learning algorithm is allowed to obtain estimates of statistical properties of the examples but cannot see the examples themselves. We describe a new and simple characterization of the query complexity of learning in the SQ learning model. Unlike the previously known bounds on SQ learning, our characterization preserves the accuracy and the efficiency of learning. The preservation of accuracy implies that our characterization gives the first characterization of SQ learning in the agnostic learning framework. The preservation of efficiency is achieved using a new boosting technique and allows us to derive a new approach to the design of evolutionary algorithms in Valiant's (2006) model of evolvability. We use this approach to demonstrate the existence of a large class of monotone evolutionary learning algorithms based on square loss performance estimation. These results differ significantly from the few known evolutionary algorithms and give evidence that evolvability in Valiant's model is a more versatile phenomenon than there had been previous reason to suspect.
{ "Other": 1, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.3187
On the scaling of Polar Codes: II. The behavior of un-polarized channels
[ "cs.IT", "math.IT" ]
We provide upper and lower bounds on the escape rate of the Bhattacharyya process corresponding to polar codes and transmission over the binary erasure channel. More precisely, we bound the exponent of the number of sub-channels whose Bhattacharyya constant falls in a fixed interval $[a,b]$. Mathematically this can be stated as bounding the limit $\lim_{n \to \infty} \frac{1}{n} \ln \mathbb{P}(Z_n \in [a,b])$, where $Z_n$ is the Bhattacharyya process. The quantity $\mathbb{P}(Z_n \in [a,b])$ represents the fraction of sub-channels that are still un-polarized at time $n$.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.3188
Noisy Network Coding
[ "cs.IT", "math.IT" ]
A noisy network coding scheme for sending multiple sources over a general noisy network is presented. For multi-source multicast networks, the scheme naturally extends both network coding over noiseless networks by Ahlswede, Cai, Li, and Yeung, and compress-forward coding for the relay channel by Cover and El Gamal to general discrete memoryless and Gaussian networks. The scheme also recovers as special cases the results on coding for wireless relay networks and deterministic networks by Avestimehr, Diggavi, and Tse, and coding for wireless erasure networks by Dana, Gowaikar, Palanki, Hassibi, and Effros. The scheme involves message repetition coding, relay signal compression, and simultaneous decoding. Unlike previous compress-forward schemes, where independent messages are sent over multiple blocks, the same message is sent multiple times using independent codebooks as in the network coding scheme for cyclic networks. Furthermore, the relays do not use Wyner-Ziv binning as in previous compress-forward schemes, and each decoder performs simultaneous joint typicality decoding on the received signals from all the blocks without explicitly decoding the compression indices. A consequence of this new scheme is that achievability is proved simply and more generally without resorting to time expansion to extend results for acyclic networks to networks with cycles. The noisy network coding scheme is then extended to general multi-source networks by combining it with decoding techniques for interference channels. For the Gaussian multicast network, noisy network coding improves the previously established gap to the cutset bound. We also demonstrate through two popular AWGN network examples that noisy network coding can outperform conventional compress-forward, amplify-forward, and hash-forward schemes.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.3192
Study of Gaussian Relay Channels with Correlated Noises
[ "cs.IT", "math.IT" ]
In this paper, we consider full-duplex and half-duplex Gaussian relay channels where the noises at the relay and destination are arbitrarily correlated. We first derive the capacity upper bound and the achievable rates with three existing schemes: Decode-and-Forward (DF), Compress-and-Forward (CF), and Amplify-and-Forward (AF). We present two capacity results under specific noise correlation coefficients, one being achieved by DF and the other being achieved by direct link transmission (or a special case of CF). The channel for the former capacity result is equivalent to the traditional Gaussian degraded relay channel and the latter corresponds to the Gaussian reversely-degraded relay channel. For CF and AF schemes, we show that their achievable rates are strictly decreasing functions over the negative correlation coefficient. Through numerical comparisons under different channel settings, we observe that although DF completely disregards the noise correlation while the other two can potentially exploit such extra information, none of the three relay schemes always outperforms the others over different correlation coefficients. Moreover, the exploitation of noise correlation by CF and AF accrues more benefit when the source-relay link is weak. This paper also considers the optimal power allocation problem under the correlated-noise channel setting. With individual power constraints at the relay and the source, it is shown that the relay should use all its available power to maximize the achievable rates under any correlation coefficient. With a total power constraint across the source and the relay, the achievable rates are proved to be concave functions over the power allocation factor for AF and CF under full-duplex mode, where the closed-form power allocation strategy is derived.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.3195
Efficiently Discovering Hammock Paths from Induced Similarity Networks
[ "cs.AI", "cs.DB" ]
Similarity networks are important abstractions in many information management applications such as recommender systems, corpora analysis, and medical informatics. For instance, by inducing similarity networks between movies rated similarly by users, between documents containing common terms, or between clinical trials involving the same themes, we can aim to find the global structure of connectivities underlying the data, and use the network as a basis to make connections between seemingly disparate entities. In the above applications, composing similarities between objects of interest finds uses in serendipitous recommendation, in storytelling, and in clinical diagnosis, respectively. We present an algorithmic framework for traversing similarity paths using the notion of `hammock' paths, which are generalizations of traditional paths. Our framework is exploratory in nature so that, given starting and ending objects of interest, it explores candidate objects for path following, and heuristics to admissibly estimate the potential for paths to lead to a desired destination. We present three diverse applications: exploring movie similarities in the Netflix dataset, exploring abstract similarities across the PubMed corpus, and exploring description similarities in a database of clinical trials. Experimental results demonstrate the potential of our approach for unstructured knowledge discovery in similarity networks.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 1, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.3234
Improved subspace estimation for multivariate observations of high dimension: the deterministic signals case
[ "cs.IT", "math.IT" ]
We consider the problem of subspace estimation in situations where the number of available snapshots and the observation dimension are comparable in magnitude. In this context, traditional subspace methods tend to fail because the eigenvectors of the sample correlation matrix are heavily biased with respect to the true ones. It has recently been suggested that this situation (where the sample size is small compared to the observation dimension) can be very accurately modeled by considering the asymptotic regime where the observation dimension $M$ and the number of snapshots $N$ converge to $+\infty$ at the same rate. Using large random matrix theory results, it can be shown that traditional subspace estimates are not consistent in this asymptotic regime. Furthermore, new consistent subspace estimates can be proposed, which outperform the standard subspace methods for realistic values of $M$ and $N$. The work carried out so far in this area has always been based on the assumption that the observations are random, independent and identically distributed in the time domain. The goal of this paper is to propose new consistent subspace estimators for the case where the source signals are modelled as unknown deterministic signals. In practice, this allows the proposed approach to be used regardless of the statistical properties of the source signals. In order to construct the proposed estimators, new technical results concerning the almost sure location of the eigenvalues of sample covariance matrices of Information plus Noise complex Gaussian models are established. These results are believed to be of independent interest.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.3238
Exploring a Multidimensional Representation of Documents and Queries (extended version)
[ "cs.IR" ]
In Information Retrieval (IR), whether implicitly or explicitly, queries and documents are often represented as vectors. However, it may be more beneficial to consider documents and/or queries as multidimensional objects. Our belief is this would allow building "truly" interactive IR systems, i.e., where interaction is fully incorporated in the IR framework. The probabilistic formalism of quantum physics represents events and densities as multidimensional objects. This paper presents our first step towards building an interactive IR framework upon this formalism, by stating how the first interaction of the retrieval process, when the user types a query, can be formalised. Our framework depends on a number of parameters affecting the final document ranking. In this paper we experimentally investigate the effect of these parameters, showing that the proposed representation of documents and queries as multidimensional objects can compete with standard approaches, with the additional prospect to be applied to interactive retrieval.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 1, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.3239
Message-Passing Algorithms: Reparameterizations and Splittings
[ "cs.IT", "cs.AI", "math.IT" ]
The max-product algorithm, a local message-passing scheme that attempts to compute the most probable assignment (MAP) of a given probability distribution, has been successfully employed as a method of approximate inference for applications arising in coding theory, computer vision, and machine learning. However, the max-product algorithm is not guaranteed to converge to the MAP assignment, and if it does, is not guaranteed to recover the MAP assignment. Alternative convergent message-passing schemes have been proposed to overcome these difficulties. This work provides a systematic study of such message-passing algorithms that extends the known results by exhibiting new sufficient conditions for convergence to local and/or global optima, providing a combinatorial characterization of these optima based on graph covers, and describing a new convergent and correct message-passing algorithm whose derivation unifies many of the known convergent message-passing algorithms. While convergent and correct message-passing algorithms represent a step forward in the analysis of max-product style message-passing algorithms, the conditions needed to guarantee convergence to a global optimum can be too restrictive in both theory and practice. This limitation of convergent and correct message-passing schemes is characterized by graph covers and illustrated by example.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.3258
Asymptotically Stable Walking of a Five-Link Underactuated 3D Bipedal Robot
[ "cs.RO" ]
This paper presents three feedback controllers that achieve an asymptotically stable, periodic, and fast walking gait for a 3D (spatial) bipedal robot consisting of a torso, two legs, and passive (unactuated) point feet. The contact between the robot and the walking surface is assumed to inhibit yaw rotation. The studied robot has 8 DOF in the single support phase and 6 actuators. The interest of studying robots with point feet is that the robot's natural dynamics must be explicitly taken into account to achieve balance while walking. We use an extension of the method of virtual constraints and hybrid zero dynamics, in order to simultaneously compute a periodic orbit and an autonomous feedback controller that realizes the orbit. This method allows the computations to be carried out on a 2-DOF subsystem of the 8-DOF robot model. The stability of the walking gait under closed-loop control is evaluated with the linearization of the restricted Poincar\'e map of the hybrid zero dynamics. Three strategies are explored. The first strategy consists of imposing a stability condition during the search of a periodic gait by optimization. The second strategy uses an event-based controller. In the third approach, the effect of output selection is discussed and a pertinent choice of outputs is proposed, leading to stabilization without the use of a supplemental event-based controller.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 1, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.3307
Graph Zeta Function in the Bethe Free Energy and Loopy Belief Propagation
[ "cs.AI", "cs.DM", "math-ph", "math.MP" ]
We propose a new approach to the analysis of Loopy Belief Propagation (LBP) by establishing a formula that connects the Hessian of the Bethe free energy with the edge zeta function. The formula has a number of theoretical implications on LBP. It is applied to give a sufficient condition that the Hessian of the Bethe free energy is positive definite, which shows non-convexity for graphs with multiple cycles. The formula clarifies the relation between the local stability of a fixed point of LBP and local minima of the Bethe free energy. We also propose a new approach to the uniqueness of LBP fixed point, and show various conditions of uniqueness.
{ "Other": 1, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.3312
Multiuser Scheduling in a Markov-modeled Downlink using Randomly Delayed ARQ Feedback
[ "cs.IT", "cs.SY", "math.IT", "math.OC" ]
We focus on the downlink of a cellular system, which corresponds to the bulk of the data transfer in such wireless systems. We address the problem of opportunistic multiuser scheduling under imperfect channel state information, by exploiting the memory inherent in the channel. In our setting, the channel between the base station and each user is modeled by a two-state Markov chain and the scheduled user sends back an ARQ feedback signal that arrives at the scheduler with a random delay that is i.i.d across users and time. The scheduler indirectly estimates the channel via accumulated delayed-ARQ feedback and uses this information to make scheduling decisions. We formulate a throughput maximization problem as a partially observable Markov decision process (POMDP). For the case of two users in the system, we show that a greedy policy is sum throughput optimal for any distribution on the ARQ feedback delay. For the case of more than two users, we prove that the greedy policy is suboptimal and demonstrate, via numerical studies, that it has near optimal performance. We show that the greedy policy can be implemented by a simple algorithm that does not require the statistics of the underlying Markov channel or the ARQ feedback delay, thus making it robust against errors in system parameter estimation. Establishing an equivalence between the two-user system and a genie-aided system, we obtain a simple closed form expression for the sum capacity of the Markov-modeled downlink. We further derive inner and outer bounds on the capacity region of the Markov-modeled downlink and tighten these bounds for special cases of the system parameters.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 1 }
1002.3320
Co-channel Interference Cancellation for Space-Time Coded OFDM Systems Using Adaptive Beamforming and Null Deepening
[ "cs.CL" ]
Combined with space-time coding, the orthogonal frequency division multiplexing (OFDM) system explores space diversity. It is a potential scheme to offer spectral efficiency and robust high data rate transmissions over frequency-selective fading channel. However, space-time coding impairs the system ability to suppress interferences as the signals transmitted from two transmit antennas are superposed and interfered at the receiver antennas. In this paper, we developed an adaptive beamforming based on least mean squared error algorithm and null deepening to combat co-channel interference (CCI) for the space-time coded OFDM (STC-OFDM) system. To illustrate the performance of the presented approach, it is compared to the null steering beamformer which requires a prior knowledge of directions of arrival (DOAs). The structure of space-time decoders are preserved although there is the use of beamformers before decoding. By incorporating the proposed beamformer as a CCI canceller in the STC-OFDM systems, the performance improvement is achieved as shown in the simulation results.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 1, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.3342
Spectral properties of the Google matrix of the World Wide Web and other directed networks
[ "cs.IR" ]
We study numerically the spectrum and eigenstate properties of the Google matrix of various examples of directed networks such as vocabulary networks of dictionaries and university World Wide Web networks. The spectra have gapless structure in the vicinity of the maximal eigenvalue for Google damping parameter $\alpha$ equal to unity. The vocabulary networks have relatively homogeneous spectral density, while university networks have pronounced spectral structures which change from one university to another, reflecting specific properties of the networks. We also determine specific properties of eigenstates of the Google matrix, including the PageRank. The fidelity of the PageRank is proposed as a new characterization of its stability.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 1, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.3344
Iterative exact global histogram specification and SSIM gradient ascent: a proof of convergence, step size and parameter selection
[ "cs.CV", "cs.MM" ]
The SSIM-optimized exact global histogram specification (EGHS) is shown to converge in the sense that the first order approximation of the result's quality (i.e., its structural similarity with input) does not decrease in an iteration, when the step size is small. Each iteration is composed of SSIM gradient ascent and basic EGHS with the specified target histogram. Selection of step size and other parameters is also discussed.
{ "Other": 1, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 1, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.3345
Interactive Submodular Set Cover
[ "cs.LG" ]
We introduce a natural generalization of submodular set cover and exact active learning with a finite hypothesis class (query learning). We call this new problem interactive submodular set cover. Applications include advertising in social networks with hidden information. We give an approximation guarantee for a novel greedy algorithm and give a hardness of approximation result which matches up to constant factors. We also discuss negative results for simpler approaches and present encouraging early experimental results.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.3356
Uplink CoMP under a Constrained Backhaul and Imperfect Channel Knowledge
[ "cs.IT", "math.IT" ]
Coordinated Multi-Point (CoMP) is known to be a key technology for next generation mobile communications systems, as it makes it possible to overcome the burden of inter-cell interference. Especially in the uplink, it is likely that interference exploitation schemes will be used in the near future, as they can be used with legacy terminals and require little or no change in standardization. Major drawbacks, however, are the extent of additional backhaul infrastructure needed, and the sensitivity to imperfect channel knowledge. This paper jointly addresses both issues in a new framework incorporating a multitude of proposed theoretical uplink CoMP concepts, which are then put into perspective with practical CoMP algorithms. This comprehensive analysis provides new insight into the potential usage of uplink CoMP in next generation wireless communications systems.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.3449
Minimizing weighted sum download time for one-to-many file transfer in peer-to-peer networks
[ "cs.IT", "cs.NI", "math.IT", "math.OC" ]
This paper considers the problem of transferring a file from one source node to multiple receivers in a peer-to-peer (P2P) network. The objective is to minimize the weighted sum download time (WSDT) for the one-to-many file transfer. Previous work has shown that, given an order in which the receivers finish downloading, the minimum WSDT can be computed in polynomial time by convex optimization, and can be achieved by linear network coding, assuming that node uplinks are the only bottleneck in the network. This paper, however, considers heterogeneous peers with both uplink and downlink bandwidth constraints specified. The static scenario is a file-transfer scheme in which the network resource allocation remains static until all receivers finish downloading. This paper first shows that the static scenario may be optimized in polynomial time by convex optimization, and the associated optimal static WSDT can be achieved by linear network coding. This paper then presents a lower bound on the minimum WSDT that is easily computed and turns out to be tight across a wide range of parameterizations of the problem. This paper also proposes a static routing-based scheme and a static rateless-coding-based scheme, which have almost-optimal empirical performance. The dynamic scenario is a file-transfer scheme which can re-allocate the network resource during the file transfer. This paper proposes a dynamic rateless-coding-based scheme, which provides significantly smaller WSDT than the optimal static scenario does.
{ "Other": 1, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.3493
The Missing Piece Syndrome in Peer-to-Peer Communication
[ "cs.PF", "cs.IT", "math.IT" ]
Typical protocols for peer-to-peer file sharing over the Internet divide files to be shared into pieces. New peers strive to obtain a complete collection of pieces from other peers and from a seed. In this paper we investigate a problem that can occur if the seeding rate is not large enough. The problem is that, even if the statistics of the system are symmetric in the pieces, there can be symmetry breaking, with one piece becoming very rare. If peers depart after obtaining a complete collection, they can tend to leave before helping other peers receive the rare piece. Assuming that peers arrive with no pieces, there is a single seed, random peer contacts are made, random useful pieces are downloaded, and peers depart upon receiving the complete file, the system is stable if the seeding rate (in pieces per time unit) is greater than the arrival rate, and is unstable if the seeding rate is less than the arrival rate. The result persists for any piece selection policy that selects from among useful pieces, such as rarest first, and it persists with the use of network coding.
{ "Other": 1, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.3521
Properties and Construction of Polar Codes
[ "cs.IT", "math.IT" ]
Recently, Ar{\i}kan introduced the method of channel polarization on which one can construct efficient capacity-achieving codes, called polar codes, for any binary discrete memoryless channel. In the thesis, we show that decoding algorithm of polar codes, called successive cancellation decoding, can be regarded as belief propagation decoding, which has been used for decoding of low-density parity-check codes, on a tree graph. On the basis of the observation, we show an efficient construction method of polar codes using density evolution, which has been used for evaluation of the error probability of belief propagation decoding on a tree graph. We further show that channel polarization phenomenon and polar codes can be generalized to non-binary discrete memoryless channels. Asymptotic performances of non-binary polar codes, which use non-binary matrices called the Reed-Solomon matrices, are better than asymptotic performances of the best explicitly known binary polar code. We also find that the Reed-Solomon matrices are considered to be natural generalization of the original binary channel polarization introduced by Ar{\i}kan.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.3671
Privacy-Preserving Protocols for Eigenvector Computation
[ "cs.CR", "cs.DB" ]
In this paper, we present a protocol for computing the principal eigenvector of a collection of data matrices belonging to multiple semi-honest parties with privacy constraints. Our proposed protocol is based on secure multi-party computation with a semi-honest arbitrator who deals with data encrypted by the other parties using an additive homomorphic cryptosystem. We augment the protocol with randomization and obfuscation to make it difficult for any party to estimate properties of the data belonging to other parties from the intermediate steps. Whereas previous approaches to this problem were based on expensive QR decomposition of correlation matrices, we present an efficient algorithm using the power iteration method. We analyze the protocol for correctness, security, and efficiency.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 1, "cs.CV": 0, "cs.CY": 0, "cs.DB": 1, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.3724
An Optimized Data Structure for High Throughput 3D Proteomics Data: mzRTree
[ "cs.CE", "cs.DS", "q-bio.QM" ]
As an emerging field, MS-based proteomics still requires software tools for efficiently storing and accessing experimental data. In this work, we focus on the management of LC-MS data, which are typically made available in standard XML-based portable formats. The structures that are currently employed to manage these data can be highly inefficient, especially when dealing with high-throughput profile data. LC-MS datasets are usually accessed through 2D range queries. Optimizing this type of operation could dramatically reduce the complexity of data analysis. We propose a novel data structure for LC-MS datasets, called mzRTree, which embodies a scalable index based on the R-tree data structure. mzRTree can be efficiently created from the XML-based data formats and it is suitable for handling very large datasets. We experimentally show that, on all range queries, mzRTree outperforms other known structures used for LC-MS data, even on those queries these structures are optimized for. Besides, mzRTree is also more space efficient. As a result, mzRTree reduces data analysis computational costs for very large profile datasets.
{ "Other": 1, "cs.AI": 0, "cs.CE": 1, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.3770
Extended Range Telepresence for Evacuation Training in Pedestrian Simulations
[ "cs.HC", "cs.MA" ]
In this contribution, we propose a new framework to evaluate pedestrian simulations by using Extended Range Telepresence. Telepresence is used as a virtual reality walking simulator, which provides the user with a realistic impression of being present and walking in a virtual environment that is much larger than the real physical environment, in which the user actually walks. The validation of the simulation is performed by comparing motion data of the telepresent user with simulated data at some points of the simulation. The use of haptic feedback from the simulation makes the framework suitable for training in emergency situations.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 1, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 1, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.3909
Proposal new area of study by connecting between information theory and Weber-Fechner law
[ "cs.IT", "math.IT" ]
Roughly speaking, information theory deals with data transmitted over a channel such as the internet. Modern information theory is generally considered to have been founded in 1948 by Shannon in his seminal paper, "A mathematical theory of communication." Shannon's formulation of information theory was an immediate success with communications engineers. Shannon defined mathematically the amount of information transmitted over a channel. The amount of information does not mean the number of symbols of the data; it depends on the occurrence probabilities of the symbols of the data. Meanwhile, psychophysics is the study of quantitative relations between psychological events and physical events or, more specifically, between sensations and the stimuli that produce them. It seems that Shannon's information theory bears no relation to psychophysics, established by the German scientist and philosopher Fechner. Here I show that, to our astonishment, it is possible to combine the two fields, and we therefore become capable of mathematically measuring perceptions of the physical stimuli governed by the Weber-Fechner law. I will define the concept of a new entropy, and as a consequence a new field will begin life.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.3931
Competitive Spectrum Management with Incomplete Information
[ "cs.IT", "cs.GT", "math.IT" ]
This paper studies an interference interaction (game) between selfish and independent wireless communication systems in the same frequency band. Each system (player) has incomplete information about the other player's channel conditions. A trivial Nash equilibrium point in this game is where players mutually full spread (FS) their transmit spectrum and interfere with each other. This point may lead to poor spectrum utilization from a global network point of view and even for each user individually. In this paper, we provide a closed form expression for a non pure-FS epsilon-Nash equilibrium point; i.e., an equilibrium point where players choose FDM for some channel realizations and FS for the others. We show that operating in this non pure-FS epsilon-Nash equilibrium point increases each user's throughput and therefore improves the spectrum utilization, and demonstrate that this performance gain can be substantial. Finally, important insights are provided into the behaviour of selfish and rational wireless users as a function of channel parameters such as the fading probabilities and the interference-to-signal ratio.
{ "Other": 1, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.3943
Downlink Performance Analysis for a Generalized Shotgun Cellular System
[ "cs.IT", "math.IT" ]
In this paper, we analyze the signal-to-interference-plus-noise ratio (SINR) performance at a mobile station (MS) in a random cellular network. The cellular network is formed by base-stations (BSs) placed in a one, two or three dimensional space according to a possibly non-homogeneous Poisson point process, which is a generalization of the so-called shotgun cellular system (SCS). We develop a sequence of equivalence relations for SCSs and use them to derive semi-analytical expressions for the coverage probability at the MS when the transmissions from each BS may be affected by random fading with arbitrary distributions as well as attenuation following arbitrary path-loss models. For homogeneous Poisson point processes in the interference-limited case with a power-law path-loss model, we show that the SINR distribution is the same for all fading distributions and is not a function of the base station density. In addition, the influence of random transmission powers, power control, and multiple channel reuse groups on the downlink performance is also discussed. The techniques developed for the analysis of SINR have applications beyond cellular networks and can be used in similar studies for cognitive radio networks, femtocell networks and other heterogeneous and multi-tier networks.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.3985
Supervised Learning of Digital image restoration based on Quantization Nearest Neighbor algorithm
[ "cs.CV" ]
In this paper, an algorithm is proposed for Image Restoration. Such an algorithm differs from the traditional approaches in this area by utilizing priors that are learned from similar images. Original images and their degraded versions by the known degradation operators are utilized for designing the Quantization. The code vectors are designed using the blurred images. For each such vector, the high frequency information obtained from the original images is also available. During restoration, the high frequency information of a given degraded image is estimated from its low frequency information based on the artificial noise. For the restoration problem, a number of techniques are designed corresponding to various versions of the blurring function. Given a noisy and blurred image, one of the techniques is chosen based on a similarity measure, thereby providing the identification of the blur. To make the restoration process computationally efficient, the Quantization Nearest Neighbor approaches are utilized.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 1, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.3990
Static Address Generation Easing: a Design Methodology for Parallel Interleaver Architectures
[ "cs.AR", "cs.IT", "math.IT" ]
For high throughput applications, turbo-like iterative decoders are implemented with parallel architectures. However, to be efficient, parallel architectures must avoid collision accesses, i.e., concurrent read/write accesses should not target the same memory block. This consideration applies to the two main classes of turbo-like codes, which are Low Density Parity Check (LDPC) codes and Turbo-Codes. In this paper we propose a methodology which finds a collision-free mapping of the variables in the memory banks and which optimizes the resulting interleaving architecture. Finally, we show through a pedagogical example the interest of our approach compared to state-of-the-art techniques.
{ "Other": 1, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.4004
Nature inspired artificial intelligence based adaptive traffic flow distribution in computer network
[ "cs.NE" ]
Because of the stochastic nature of the traffic requirement matrix, it is very difficult to obtain the optimal traffic distribution that minimizes delay, even with an adaptive routing protocol, in a fixed-connection network where the capacity of each link is already defined. Hence there is a need for a method that can generate the optimal solution quickly and efficiently. This paper presents a new concept for providing adaptive optimal traffic distribution under dynamic traffic-matrix conditions using nature-inspired intelligence methods. With a defined load and fixed link capacities, the average packet delay is minimized using several variants of evolutionary programming and particle swarm optimization. A comparative study of their performance in terms of convergence speed is given. The universal approximation capability, the key feature of feed-forward neural networks, is applied to predict the flow distribution on each link that minimizes the average delay for the total load currently on the network. For any variation in the total load, the neural network can immediately generate a new flow distribution that yields minimum delay in the network. With the inclusion of this information, the performance of the routing protocol is greatly improved.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 1, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.4007
Word level Script Identification from Bangla and Devanagri Handwritten Texts mixed with Roman Script
[ "cs.LG" ]
India is a multi-lingual country where the Roman script is often used alongside different Indic scripts in a text document. To develop a script-specific handwritten Optical Character Recognition (OCR) system, it is therefore necessary to identify the scripts of handwritten text correctly. In this paper, we present a system which automatically separates the scripts of handwritten words from a document written in Bangla or Devanagri mixed with Roman scripts. In this script separation technique, we first extract the text lines and words from document pages using a script-independent Neighboring Component Analysis technique. Then we design a Multi Layer Perceptron (MLP) based classifier for script separation, trained with 8 different word-level holistic features. Two equal sized datasets, one with Bangla and Roman scripts and the other with Devanagri and Roman scripts, are prepared for the system evaluation. On respective independent text samples, word-level script identification accuracies of 99.29% and 98.43% are achieved.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.4014
A fuzzified BRAIN algorithm for learning DNF from incomplete data
[ "cs.IT", "cs.AI", "math.IT", "math.LO" ]
Aim of this paper is to address the problem of learning Boolean functions from training data with missing values. We present an extension of the BRAIN algorithm, called U-BRAIN (Uncertainty-managing Batch Relevance-based Artificial INtelligence), conceived for learning DNF Boolean formulas from partial truth tables, possibly with uncertain values or missing bits. Such an algorithm is obtained from BRAIN by introducing fuzzy sets in order to manage uncertainty. In the case where no missing bits are present, the algorithm reduces to the original BRAIN.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.4019
Query Learning with Exponential Query Costs
[ "stat.ML", "cs.IT", "math.IT" ]
In query learning, the goal is to identify an unknown object while minimizing the number of "yes" or "no" questions (queries) posed about that object. A well-studied algorithm for query learning is known as generalized binary search (GBS). We show that GBS is a greedy algorithm to optimize the expected number of queries needed to identify the unknown object. We also generalize GBS in two ways. First, we consider the case where the cost of querying grows exponentially in the number of queries and the goal is to minimize the expected exponential cost. Then, we consider the case where the objects are partitioned into groups, and the objective is to identify only the group to which the object belongs. We derive algorithms to address these issues in a common, information-theoretic framework. In particular, we present an exact formula for the objective function in each case involving Shannon or Renyi entropy, and develop a greedy algorithm for minimizing it. Our algorithms are demonstrated on two applications of query learning, active learning and emergency response.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.4020
Causal Markov condition for submodular information measures
[ "cs.IT", "math.IT" ]
The causal Markov condition (CMC) is a postulate that links observations to causality. It describes the conditional independences among the observations that are entailed by a causal hypothesis in terms of a directed acyclic graph. In the conventional setting, the observations are random variables and the independence is a statistical one, i.e., the information content of observations is measured in terms of Shannon entropy. We formulate a generalized CMC for any kind of observations on which independence is defined via an arbitrary submodular information measure. Recently, this has been discussed for observations in terms of binary strings where information is understood in the sense of Kolmogorov complexity. Our approach enables us to find computable alternatives to Kolmogorov complexity, e.g., the length of a text after applying existing data compression schemes. We show that our CMC is justified if one restricts the attention to a class of causal mechanisms that is adapted to the respective information measure. Our justification is similar to deriving the statistical CMC from functional models of causality, where every variable is a deterministic function of its observed causes and an unobserved noise term. Our experiments on real data demonstrate the performance of compression based causal inference.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.4022
An Alternative Proof for the Capacity Region of the Degraded Gaussian MIMO Broadcast Channel
[ "cs.IT", "math.IT" ]
We provide an alternative proof for the capacity region of the degraded Gaussian multiple-input multiple-output (MIMO) broadcast channel. Our proof does not use the channel enhancement technique, as opposed to the original proof of Weingarten {\it et al.} and the alternative proof of Liu {\it et al.} Our proof starts with the single-letter description of the capacity region of the degraded broadcast channel, and directly evaluates it for the degraded Gaussian MIMO broadcast channel by using two main technical tools. The first one is the generalized de Bruijn identity due to Palomar \emph{et al.}, which provides a connection between the differential entropy and the Fisher information matrix. The second tool we use is an inequality due to Dembo, which lower bounds the differential entropy in terms of the Fisher information matrix.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.4040
Handwritten Bangla Basic and Compound character recognition using MLP and SVM classifier
[ "cs.CV", "cs.LG" ]
A novel approach for the recognition of handwritten compound Bangla characters, along with the basic characters of the Bangla alphabet, is presented here. Compared to Roman scripts such as English, one of the major stumbling blocks in Optical Character Recognition (OCR) of handwritten Bangla script is the large number of complex-shaped character classes in the Bangla alphabet. In addition to 50 basic character classes, there are nearly 160 complex-shaped compound character classes. Dealing with such a large variety of handwritten characters with a suitably designed feature set is a challenging problem. Uncertainty and imprecision are inherent in handwritten script. Moreover, the close resemblance among some of these complex-shaped characters makes OCR of handwritten Bangla characters even more difficult. Considering the complexity of the problem, the present approach attempts to identify compound character classes in order of importance, from the most frequently to the less frequently occurring ones. This provides a framework for incrementally increasing the number of learned compound character classes, from more frequently to less frequently occurring ones, along with the basic characters. In experiments, the technique is observed to produce an average recognition rate of 79.25% after three-fold cross-validation of the data, with future scope for improvement and extension.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 1, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.4041
Improving Term Extraction Using Particle Swarm Optimization Techniques
[ "cs.IR" ]
Term extraction is one of the layers in the ontology development process; its task is to automatically extract all the terms contained in the input document. The purpose of this process is to generate a list of terms that are relevant to the domain of the input document. In the literature there are many approaches, techniques and algorithms used for term extraction. In this paper we propose a new approach using particle swarm optimization techniques in order to improve the accuracy of term extraction results. We choose five features to represent the term score. The approach has been applied to the domain of religious documents. We compare the precision of our term extraction method with TFIDF, Weirdness, GlossaryExtraction and TermExtractor. The experimental results show that our proposed approach achieves better precision than those four algorithms.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 1, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.4046
Supervised Classification Performance of Multispectral Images
[ "cs.LG", "cs.CV" ]
Nowadays, government and private agencies use remote sensing imagery for a wide range of applications, from military applications to farm development. The images may be panchromatic, multispectral, hyperspectral or even ultraspectral, amounting to terabytes of data. Image classification is one of the most significant applications of remote sensing. A number of image classification algorithms have demonstrated good precision in classifying remote sensing data. But, of late, due to the increasing spatiotemporal dimensions of remote sensing data, traditional classification algorithms have exposed weaknesses, necessitating further research in the field of remote sensing image classification. An efficient classifier is therefore needed to classify remote sensing images and extract information. We experiment with both supervised and unsupervised classification. Here we compare the different classification methods and their performances. It is found that the Mahalanobis classifier performed best in our classification.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 1, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.4048
A Hough Transform based Technique for Text Segmentation
[ "cs.IR" ]
Text segmentation is an inherent part of an OCR system, irrespective of its domain of application. The OCR system contains a segmentation module in which the text lines, words and ultimately the characters must be segmented properly for successful recognition. The present work implements a Hough transform based technique for line and word segmentation from digitized images. The proposed technique is applied not only on a document image dataset but also on datasets for a business card reader system and a license plate recognition system. To standardize the performance of the system, the technique is also applied on the public domain dataset published on the website of CMATER, Jadavpur University. The document images consist of multi-script printed and handwritten text lines, with variety in script and line spacing within a single document image. The technique performs quite satisfactorily when applied on mobile camera captured business card images of low resolution. The usefulness of the technique is verified by applying it in a commercial project for localization of vehicle license plates from surveillance camera images through the segmentation process itself. The accuracy of the technique for word segmentation, as verified experimentally, is 85.7% for document images, 94.6% for business card images and 88% for surveillance camera images.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 1, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.4058
Contextual Bandit Algorithms with Supervised Learning Guarantees
[ "cs.LG" ]
We address the problem of learning in an online, bandit setting where the learner must repeatedly select among $K$ actions, but only receives partial feedback based on its choices. We establish two new facts: First, using a new algorithm called Exp4.P, we show that it is possible to compete with the best in a set of $N$ experts with probability $1-\delta$ while incurring regret at most $O(\sqrt{KT\ln(N/\delta)})$ over $T$ time steps. The new algorithm is tested empirically in a large-scale, real-world dataset. Second, we give a new algorithm called VE that competes with a possibly infinite set of policies of VC-dimension $d$ while incurring regret at most $O(\sqrt{T(d\ln(T) + \ln (1/\delta))})$ with probability $1-\delta$. These guarantees improve on those of all previous algorithms, whether in a stochastic or adversarial environment, and bring us closer to providing supervised learning type guarantees for the contextual bandit setting.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.4061
Flux Analysis in Process Models via Causality
[ "cs.CE", "cs.LO", "q-bio.QM" ]
We present an approach for flux analysis in process algebra models of biological systems. We perceive flux as the flow of resources in stochastic simulations. We resort to an established correspondence between event structures, a broadly recognised model of concurrency, and state transitions of process models, seen as Petri nets. We show that in this way we can extract the causal resource dependencies between individual state transitions in simulations as partial orders of events. We propose transformations on the partial orders that provide means for further analysis, and introduce a software tool which implements these ideas. By means of an example of a published model of the Rho GTP-binding proteins, we argue that this approach can provide a substitute for flux analysis techniques on ordinary differential equation models within the stochastic setting of process algebras.
{ "Other": 1, "cs.AI": 0, "cs.CE": 1, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.4062
Modelling and Analysis of Biochemical Signalling Pathway Cross-talk
[ "cs.CE", "q-bio.QM" ]
Signalling pathways are abstractions that help life scientists structure the coordination of cellular activity. Cross-talk between pathways accounts for many of the complex behaviours exhibited by signalling pathways and is often critical in producing the correct signal-response relationship. Formal models of signalling pathways and cross-talk in particular can aid understanding and drive experimentation. We define an approach to modelling based on the concept that a pathway is the (synchronising) parallel composition of instances of generic modules (with internal and external labels). Pathways are then composed by (synchronising) parallel composition and renaming; different types of cross-talk result from different combinations of synchronisation and renaming. We define a number of generic modules in PRISM and five types of cross-talk: signal flow, substrate availability, receptor function, gene expression and intracellular communication. We show that Continuous Stochastic Logic properties can both detect and distinguish the types of cross-talk. The approach is illustrated with small examples and an analysis of the cross-talk between the TGF-b/BMP, WNT and MAPK pathways.
{ "Other": 0, "cs.AI": 0, "cs.CE": 1, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.4063
Investigating modularity in the analysis of process algebra models of biochemical systems
[ "cs.CE", "q-bio.QM" ]
Compositionality is a key feature of process algebras which is often cited as one of their advantages as a modelling technique. It is certainly true that in biochemical systems, as in many other systems, model construction is made easier in a formalism which allows the problem to be tackled compositionally. In this paper we consider the extent to which the compositional structure which is inherent in process algebra models of biochemical systems can be exploited during model solution. In essence this means using the compositional structure to guide decomposed solution and analysis. Unfortunately the dynamic behaviour of biochemical systems exhibits strong interdependencies between the components of the model making decomposed solution a difficult task. Nevertheless we believe that if such decomposition based on process algebras could be established it would demonstrate substantial benefits for systems biology modelling. In this paper we present our preliminary investigations based on a case study of the pheromone pathway in yeast, modelling in the stochastic process algebra Bio-PEPA.
{ "Other": 0, "cs.AI": 0, "cs.CE": 1, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.4064
A flexible architecture for modeling and simulation of diffusional association
[ "cs.CE", "q-bio.QM" ]
Up to now, it has not been possible to obtain analytical solutions for complex molecular association processes (e.g., molecule recognition in signaling or catalysis). Instead, Brownian Dynamics (BD) simulations are commonly used to estimate the rate of diffusional association, e.g., to be used later in mesoscopic simulations. Meanwhile, a portfolio of diffusional association (DA) methods has been developed that exploits BD. However, DA methods do not clearly distinguish between modeling, simulation, and experiment settings. This hampers classifying and comparing the existing methods with respect to, for instance, model assumptions, simulation approximations or specific optimization strategies for steering the computation of trajectories. To address this deficiency we propose FADA (Flexible Architecture for Diffusional Association) - an architecture that allows a flexible definition of the experiment, comprising a formal description of the model in SpacePi, different simulators, as well as validation and analysis methods. Based on the NAM (Northrup-Allison-McCammon) method, which forms the basis of many existing DA methods, we illustrate the structure and functioning of FADA. A discussion of future validation experiments illuminates how FADA can be exploited to estimate reaction rates and how validation techniques may be applied to validate additional features of the model.
{ "Other": 0, "cs.AI": 0, "cs.CE": 1, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.4065
BlenX-based compositional modeling of complex reaction mechanisms
[ "cs.CE", "q-bio.QM" ]
Molecular interactions are wired in a fascinating way, resulting in the complex behavior of biological systems. Theoretical modeling provides a useful framework for understanding the dynamics and the function of such networks. The complexity of biological networks calls for conceptual tools that manage the combinatorial explosion of the set of possible interactions. A suitable conceptual tool to attack complexity is compositionality, already successfully used in the process algebra field to model computer systems. We rely on the BlenX programming language, which originated from the beta-binders process calculus, to specify and simulate high-level descriptions of biological circuits. The Gillespie stochastic framework of BlenX requires the decomposition of phenomenological functions into basic elementary reactions. Systematic unpacking of complex reaction mechanisms into BlenX templates is shown in this study. The estimation/derivation of missing parameters and the challenges emerging from compositional model building in stochastic process algebras are discussed. A biological example on the circadian clock is presented as a case study of BlenX compositionality.
{ "Other": 0, "cs.AI": 0, "cs.CE": 1, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.4066
Types for BioAmbients
[ "cs.CE", "cs.LO", "q-bio.QM" ]
The BioAmbients calculus is a process algebra suitable for representing compartmentalization, molecular localization and movements between compartments. In this paper we enrich this calculus with a static type system classifying each ambient with group types specifying the kind of compartments in which the ambient can stay. The type system ensures that, in a well-typed process, ambients cannot be nested in a way that violates the type hierarchy. Exploiting the information given by the group types, we also extend the operational semantics of BioAmbients with rules signalling errors that may derive from undesired ambients' moves (i.e. merging incompatible tissues). Thus, the signal of errors can help the modeller to detect and locate unwanted situations that may arise in a biological system, and give practical hints on how to avoid the undesired behaviour.
{ "Other": 1, "cs.AI": 0, "cs.CE": 1, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.4067
A Taxonomy of Causality-Based Biological Properties
[ "cs.CE", "q-bio.QM" ]
We formally characterize a set of causality-based properties of metabolic networks. This set of properties aims at making precise several notions on the production of metabolites, which are familiar in the biologists' terminology. From a theoretical point of view, biochemical reactions are abstractly represented as causal implications and the produced metabolites as causal consequences of the implication representing the corresponding reaction. The fact that a reactant is produced is represented by means of the chain of reactions that have made it exist. Such representation abstracts away from quantities, stoichiometric and thermodynamic parameters and constitutes the basis for the characterization of our properties. Moreover, we propose an effective method for verifying our properties based on an abstract model of system dynamics. This consists of a new abstract semantics for the system seen as a concurrent network and expressed using the Chemical Ground Form calculus. We illustrate an application of this framework to a portion of a real metabolic pathway.
{ "Other": 0, "cs.AI": 0, "cs.CE": 1, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.4088
Additive Asymmetric Quantum Codes
[ "cs.IT", "math.IT" ]
We present a general construction of asymmetric quantum codes based on additive codes under the trace Hermitian inner product. Various families of additive codes over $\F_{4}$ are used in the construction of many asymmetric quantum codes over $\F_{4}$.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.4180
Design of a Smart Unmanned Ground Vehicle for Hazardous Environments
[ "cs.RO", "cs.HC" ]
A smart Unmanned Ground Vehicle (UGV) is designed and developed for application-specific missions, to operate predominantly in hazardous environments. In our work, we have developed a small and lightweight vehicle to operate in general cross-country terrains with or without daylight. The UGV can send visual feedback to the operator at a remote location. Onboard infrared sensors can detect obstacles around the UGV and send signals to the operator.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 1, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 1, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.4263
Precoding by Pairing Subchannels to Increase MIMO Capacity with Discrete Input Alphabets
[ "cs.IT", "math.IT" ]
We consider Gaussian multiple-input multiple-output (MIMO) channels with discrete input alphabets. We propose a non-diagonal precoder based on the X-Codes in \cite{Xcodes_paper} to increase the mutual information. The MIMO channel is transformed into a set of parallel subchannels using Singular Value Decomposition (SVD) and X-Codes are then used to pair the subchannels. X-Codes are fully characterized by the pairings and a $2\times 2$ real rotation matrix for each pair (parameterized with a single angle). This precoding structure enables us to express the total mutual information as a sum of the mutual information of all the pairs. The problem of finding the optimal precoder with the above structure, which maximizes the total mutual information, is solved by {\em i}) optimizing the rotation angle and the power allocation within each pair and {\em ii}) finding the optimal pairing and power allocation among the pairs. It is shown that the mutual information achieved with the proposed pairing scheme is very close to that achieved with the optimal precoder by Cruz {\em et al.}, and is significantly better than Mercury/waterfilling strategy by Lozano {\em et al.}. Our approach greatly simplifies both the precoder optimization and the detection complexity, making it suitable for practical applications.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.4286
Redundancy, Deduction Schemes, and Minimum-Size Bases for Association Rules
[ "cs.LO", "cs.AI" ]
Association rules are among the most widely employed data analysis methods in the field of Data Mining. An association rule is a form of partial implication between two sets of binary variables. In the most common approach, association rules are parameterized by a lower bound on their confidence, which is the empirical conditional probability of their consequent given the antecedent, and/or by some other parameter bounds such as "support" or deviation from independence. We study here notions of redundancy among association rules from a fundamental perspective. We see each transaction in a dataset as an interpretation (or model) in the propositional logic sense, and consider existing notions of redundancy, that is, of logical entailment, among association rules, of the form "any dataset in which this first rule holds must obey also that second rule, therefore the second is redundant". We discuss several existing alternative definitions of redundancy between association rules and provide new characterizations and relationships among them. We show that the main alternatives we discuss correspond actually to just two variants, which differ in the treatment of full-confidence implications. For each of these two notions of redundancy, we provide a sound and complete deduction calculus, and we show how to construct complete bases (that is, axiomatizations) of absolutely minimum size in terms of the number of rules. We explore finally an approach to redundancy with respect to several association rules, and fully characterize its simplest case of two partial premises.
{ "Other": 1, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.4311
Lowering the Error Floor of LDPC Codes Using Cyclic Liftings
[ "cs.IT", "math.IT" ]
Cyclic liftings are proposed to lower the error floor of low-density parity-check (LDPC) codes. The liftings are designed to eliminate dominant trapping sets of the base code by removing the short cycles which form the trapping sets. We derive a necessary and sufficient condition for the cyclic permutations assigned to the edges of a cycle $c$ of length $\ell(c)$ in the base graph such that the inverse image of $c$ in the lifted graph consists of only cycles of length strictly larger than $\ell(c)$. The proposed method is universal in the sense that it can be applied to any LDPC code over any channel and for any iterative decoding algorithm. It also preserves important properties of the base code such as degree distributions, encoder and decoder structure, and in some cases, the code rate. The proposed method is applied to both structured and random codes over the binary symmetric channel (BSC). The error floor improves consistently by increasing the lifting degree, and the results show significant improvements in the error floor compared to the base code, a random code of the same degree distribution and block length, and a random lifting of the same degree. Similar improvements are also observed when the codes designed for the BSC are applied to the additive white Gaussian noise (AWGN) channel.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.4315
Mining Statistically Significant Substrings Based on the Chi-Square Measure
[ "cs.DB" ]
Given the vast reservoirs of data stored worldwide, efficient mining of data from a large information store has emerged as a great challenge. Many databases, like those of intrusion detection systems, web-click records, player statistics, texts, proteins, etc., store strings or sequences. Searching for an unusual pattern within such long strings of data has emerged as a requirement for diverse applications. Given a string, the problem then is to identify the substrings that differ the most from the expected or normal behavior, i.e., the substrings that are statistically significant. In other words, these substrings are less likely to occur due to chance alone and may point to some interesting information or phenomenon that warrants further exploration. To this end, we use the chi-square measure. We propose two heuristics for retrieving the top-k substrings with the largest chi-square measure. We show that the algorithms outperform other competing algorithms in runtime, while maintaining a high approximation ratio of more than 0.96.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 1, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.4317
CLD-shaped Brushstrokes in Non-Photorealistic Rendering
[ "cs.CV", "cs.GR" ]
Rendering techniques based on a random grid can be improved by adapting brushstrokes to the shape of different areas of the original picture. In this paper, the concept of the Coherence Length Diagram is applied to determine the adaptive brushstrokes, in order to simulate an impressionist painting. Some examples are provided to illustrate the proposed algorithm.
{ "Other": 1, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 1, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.4453
Nonparametric Estimation and On-Line Prediction for General Stationary Ergodic Sources
[ "cs.IT", "cs.AI", "math.IT", "math.PR" ]
We propose a learning algorithm for nonparametric estimation and on-line prediction for general stationary ergodic sources. We prepare histograms, each of which estimates the probability as a finite distribution, and mix them with weights to construct an estimator. The whole analysis is based on measure theory. The estimator works whether the source is discrete or continuous. If the source is stationary ergodic, then the measure-theoretically defined Kullback-Leibler information divided by the sequence length $n$ converges to zero as $n$ goes to infinity. In particular, for continuous sources, the method does not require the existence of a probability density function.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.4458
Approximate Sparsity Pattern Recovery: Information-Theoretic Lower Bounds
[ "cs.IT", "math.IT" ]
Recovery of the sparsity pattern (or support) of an unknown sparse vector from a small number of noisy linear measurements is an important problem in compressed sensing. In this paper, the high-dimensional setting is considered. It is shown that if the measurement rate and per-sample signal-to-noise ratio (SNR) are finite constants independent of the length of the vector, then the optimal sparsity pattern estimate will have a constant fraction of errors. Lower bounds on the measurement rate needed to attain a desired fraction of errors are given in terms of the SNR and various key parameters of the unknown vector. The tightness of the bounds in a scaling sense, as a function of the SNR and the fraction of errors, is established by comparison with existing achievable bounds. Near optimality is shown for a wide variety of practically motivated signal models.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.4470
Large-System Analysis of Joint Channel and Data Estimation for MIMO DS-CDMA Systems
[ "cs.IT", "math.IT" ]
This paper presents a large-system analysis of the performance of joint channel estimation, multiuser detection, and per-user decoding (CE-MUDD) for randomly-spread multiple-input multiple-output (MIMO) direct-sequence code-division multiple-access (DS-CDMA) systems. A suboptimal receiver based on successive decoding in conjunction with linear minimum mean-squared error (LMMSE) channel estimation is investigated. The replica method, developed in statistical mechanics, is used to evaluate the performance in the large-system limit, where the number of users and the spreading factor tend to infinity while their ratio and the number of transmit and receive antennas are kept constant. The performance of the joint CE-MUDD based on LMMSE channel estimation is compared to the spectral efficiencies of several receivers based on one-shot LMMSE channel estimation, in which the decoded data symbols are not utilized to refine the initial channel estimates. The results imply that the use of joint CE-MUDD significantly reduces rate loss due to transmission of pilot signals, especially for multiple-antenna systems. As a result, joint CE-MUDD can provide significant performance gains, compared to the receivers based on one-shot channel estimation.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.4473
On Scaling Laws of Diversity Schemes in Decentralized Estimation
[ "cs.IT", "math.IT" ]
This paper is concerned with the decentralized estimation of a Gaussian source using multiple sensors. We consider a diversity scheme where only the sensor with the best channel sends its measurements over a fading channel to a fusion center, using the analog amplify-and-forward technique. The fusion center reconstructs an MMSE estimate of the source based on the received measurements. A distributed version of the diversity scheme, where sensors decide whether to transmit based only on their local channel information, is also considered. We derive asymptotic expressions for the expected distortion (of the MMSE estimate at the fusion center) of these schemes as the number of sensors becomes large. For comparison, asymptotic expressions for the expected distortion of a coherent multi-access scheme and an orthogonal access scheme are derived. We also study, for the diversity schemes, the optimal power allocation for minimizing the expected distortion subject to average total power constraints. The effect of optimizing the probability of transmission on the expected distortion in the distributed scenario is also studied. It is seen that, as opposed to the coherent multi-access scheme and the orthogonal scheme (where the expected distortion decays as 1/M, M being the number of sensors), the expected distortion decays only as 1/ln(M) for the diversity schemes. This reduction in the decay rate can be seen as a tradeoff between the simplicity of the diversity schemes and the strict synchronization and large bandwidth requirements of the coherent multi-access and orthogonal schemes, respectively.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.4510
On linear $q$-ary completely regular codes with $\rho=2$ and dual antipodal
[ "cs.IT", "math.CO", "math.IT" ]
We characterize all linear $q$-ary completely regular codes with covering radius $\rho=2$ when the dual codes are antipodal. These completely regular codes are extensions of linear completely regular codes with covering radius 1, which are all classified. For $\rho=2$, we give a list of all such codes known to us. This also gives the characterization of two weight linear antipodal codes.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.4522
Feature Importance in Bayesian Assessment of Newborn Brain Maturity from EEG
[ "cs.AI" ]
The methodology of Bayesian Model Averaging (BMA) is applied for the assessment of newborn brain maturity from sleep EEG. In theory, this methodology provides the most accurate assessments of uncertainty in decisions. However, existing BMA techniques have been shown to provide biased assessments in the absence of prior information that enables a detailed exploration of the model parameter space within a reasonable time. This lack of detail leads to disproportionate sampling from the posterior distribution. In the case of EEG assessment of brain maturity, BMA results can be biased because information about EEG feature importance is absent. In this paper we explore how posterior information about EEG features can be used to reduce the negative impact of disproportionate sampling on BMA performance. We use EEG data recorded from sleeping newborns to test the efficiency of the proposed BMA technique.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.4548
Interference Alignment for the Multi-Antenna Compound Wiretap Channel
[ "cs.IT", "cs.CR", "math.IT" ]
We study a wiretap channel model where the sender has $M$ transmit antennas and there are two groups consisting of $J_1$ and $J_2$ receivers respectively. Each receiver has a single antenna. We consider two scenarios. First we consider the compound wiretap model -- group 1 constitutes the set of legitimate receivers, all interested in a common message, whereas group 2 is the set of eavesdroppers. We establish new lower and upper bounds on the secure degrees of freedom. Our lower bound is based on the recently proposed \emph{real interference alignment} scheme. The upper bound provides the first known example which illustrates that the \emph{pairwise upper bound} used in earlier works is not tight. The second scenario we study is the compound private broadcast channel. Each group is interested in a message that must be protected from the other group. Upper and lower bounds on the degrees of freedom are developed by extending the results on the compound wiretap channel.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 1, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.4587
Using Information Semantic Systems for Absolutely Secure Processing
[ "cs.IT", "math.IT" ]
We propose a new cryptographic information concept. It allows: (i) the creation of absolutely unbreakable algorithmic ciphers for communication over open digital channels; and (ii) the creation of new code-breaking methods. These will be the most efficient decoding methods to date, with the help of which any existing code could, in principle, be broken, provided it is not absolutely unbreakable.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.4592
Is It Real, or Is It Randomized?: A Financial Turing Test
[ "q-fin.GN", "cs.CE", "cs.HC" ]
We construct a financial "Turing test" to determine whether human subjects can differentiate between actual vs. randomized financial returns. The experiment consists of an online video-game (http://arora.ccs.neu.edu) where players are challenged to distinguish actual financial market returns from random temporal permutations of those returns. We find overwhelming statistical evidence (p-values no greater than 0.5%) that subjects can consistently distinguish between the two types of time series, thereby refuting the widespread belief that financial markets "look random." A key feature of the experiment is that subjects are given immediate feedback regarding the validity of their choices, allowing them to learn and adapt. We suggest that such novel interfaces can harness human capabilities to process and extract information from financial data in ways that computers cannot.
{ "Other": 0, "cs.AI": 0, "cs.CE": 1, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 1, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.4658
Principal Component Analysis with Contaminated Data: The High Dimensional Case
[ "stat.ML", "cs.LG", "stat.ME" ]
We consider the dimensionality-reduction problem (finding a subspace approximation of observed data) for contaminated data in the high dimensional regime, where the number of observations is of the same magnitude as the number of variables of each observation, and the data set contains some (arbitrarily) corrupted observations. We propose a High-dimensional Robust Principal Component Analysis (HR-PCA) algorithm that is tractable, robust to contaminated points, and easily kernelizable. The resulting subspace has a bounded deviation from the desired one, achieves maximal robustness -- a breakdown point of 50% while all existing algorithms have a breakdown point of zero, and unlike ordinary PCA algorithms, achieves optimality in the limit case where the proportion of corrupted points goes to zero.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.4661
Complementary approaches to understanding the plant circadian clock
[ "cs.CE", "cs.MS", "q-bio.MN" ]
Circadian clocks are oscillatory genetic networks that help organisms adapt to the 24-hour day/night cycle. The clock of the green alga Ostreococcus tauri is the simplest plant clock discovered so far. Its many advantages as an experimental system facilitate the testing of computational predictions. We present a model of the Ostreococcus clock in the stochastic process algebra Bio-PEPA and exploit its mapping to different analysis techniques, such as ordinary differential equations, stochastic simulation algorithms and model-checking. The small number of molecules reported for this system tests the limits of the continuous approximation underlying differential equations. We investigate the difference between continuous-deterministic and discrete-stochastic approaches. Stochastic simulation and model-checking allow us to formulate new hypotheses on the system behaviour, such as the presence of self-sustained oscillations in single cells under constant light conditions. We investigate how to model the timing of dawn and dusk in the context of model-checking, which we use to compute how the probability distributions of key biochemical species change over time. These show that the relative variation in expression level is smallest at the time of peak expression, making peak time an optimal experimental phase marker. Building on these analyses, we use approaches from evolutionary systems biology to investigate how changes in the rate of mRNA degradation impact the phase of a key protein likely to affect fitness. We explore how robust this circadian clock is to such potential mutational changes in its underlying biochemistry. Our work shows that multiple approaches lead to a more complete understanding of the clock.
{ "Other": 1, "cs.AI": 0, "cs.CE": 1, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.4665
Syntactic Topic Models
[ "cs.CL", "cs.AI", "math.ST", "stat.TH" ]
The syntactic topic model (STM) is a Bayesian nonparametric model of language that discovers latent distributions of words (topics) that are both semantically and syntactically coherent. The STM models dependency parsed corpora where sentences are grouped into documents. It assumes that each word is drawn from a latent topic chosen by combining document-level features and the local syntactic context. Each document has a distribution over latent topics, as in topic models, which provides the semantic consistency. Each element in the dependency parse tree also has a distribution over the topics of its children, as in latent-state syntax models, which provides the syntactic consistency. These distributions are convolved so that the topic of each word is likely under both its document and syntactic context. We derive a fast posterior inference algorithm based on variational methods. We report qualitative and quantitative studies on both synthetic data and hand-parsed documents. We show that the STM is a more predictive model of language than current models based only on syntax or only on topics.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 1, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.4759
On the order bounds for one-point AG codes
[ "cs.IT", "math.IT" ]
The order bound for the minimum distance of algebraic geometry codes was originally defined for the duals of one-point codes and later generalized for arbitrary algebraic geometry codes. Another bound of order type for the minimum distance of general linear codes, and for codes from order domains in particular, was given in [H. Andersen and O. Geil, Evaluation codes from order domain theory, Finite Fields and their Applications 14 (2008), pp. 92-123]. Here we investigate in detail the application of that bound to one-point algebraic geometry codes, obtaining a bound $d^*$ for the minimum distance of these codes. We establish a connection between $d^*$ and the order bound and its generalizations. We also study the improved code constructions based on $d^*$. Finally we extend $d^*$ to all generalized Hamming weights.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.4768
Neural daylight control system
[ "cs.NE", "nlin.AO" ]
The paper describes the design, the implementation of a neural controller used in an automatic daylight control system. The automatic lighting control system (ALCS) attempt to maintain constant the illuminance at the desired level on working plane even if the daylight contribution is variable. Therefore, the daylight will represent the perturbation signal for the ALCS. The mathematical model of process is unknown. The applied structure of control need the inverse model of process. For this purpose it was used other artificial neural network (ANN) which identify the inverse model of process in an on-line manner. In fact, this ANN identify the inverse model of process + the perturbation signal. In this way the learning signal for neural controller has a better accuracy for the present application.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 1, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.4802
Gaussian Process Structural Equation Models with Latent Variables
[ "cs.LG", "stat.ML" ]
In a variety of disciplines such as social sciences, psychology, medicine and economics, the recorded data are considered to be noisy measurements of latent variables connected by some causal structure. This corresponds to a family of graphical models known as the structural equation model with latent variables. While linear non-Gaussian variants have been well-studied, inference in nonparametric structural equation models is still underdeveloped. We introduce a sparse Gaussian process parameterization that defines a non-linear structure connecting latent variables, unlike common formulations of Gaussian process latent variable models. The sparse parameterization is given a full Bayesian treatment without compromising Markov chain Monte Carlo efficiency. We compare the stability of the sampling procedure and the predictive ability of the model against the current practice.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.4818
A Trustability Metric for Code Search based on Developer Karma
[ "cs.SE", "cs.IR" ]
The promise of search-driven development is that developers will save time and resources by reusing external code in their local projects. To efficiently integrate this code, users must be able to trust it; thus the trustability of code search results is just as important as their relevance. In this paper, we introduce a trustability metric to help users assess the quality of code search results and therefore ease the cost-benefit analysis they undertake when trying to find suitable integration candidates. The proposed trustability metric incorporates both user votes and cross-project activity of developers to calculate a "karma" value for each developer. Through the karma values of all its developers, a project is ranked on a trustability scale. We present JBender, a proof-of-concept code search engine which implements our trustability metric, and we discuss preliminary results from an evaluation of the prototype.
{ "Other": 1, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 1, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.4820
SLAM : Solutions lexicales automatique pour m\'etaphores
[ "cs.CL" ]
This article presents SLAM, an automatic solver for lexical metaphors such as "d\'eshabiller* une pomme" (to undress* an apple). SLAM computes a conventional solution for these productions. To do so, SLAM intersects the paradigmatic axis of the metaphorical verb "d\'eshabiller*", on which "peler" ("to peel") is a close neighbour, with a syntagmatic axis derived from a corpus in which "peler une pomme" (to peel an apple) is semantically and syntactically regular. We test this model on DicoSyn, a "small-world" network of synonyms, to compute the paradigmatic axis, and on Frantext.20, a French corpus, to compute the syntagmatic axis. We then evaluate the model on a sample of an experimental corpus from the Flexsem database.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 1, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.4831
On Analysis and Evaluation of Multi-Sensory Cognitive Learning of a Mathematical Topic Using Artificial Neural Networks
[ "cs.NE" ]
This piece of research belongs to the field of educational assessment and is based upon cognitive multimedia theory. According to that theory, visual and auditory material should be presented simultaneously to reinforce retention of a learned mathematical topic. Accordingly, a computer-assisted learning (CAL) module is carefully designed for the development of a multimedia tutorial for our suggested mathematical topic. The designed CAL module is a multimedia tutorial computer package with visual and/or auditory material. Thus, via the suggested computer package, multi-sensory associative memory and classical conditioning theories become practically applicable in an educational setting (a children's classroom). The comparative practical results obtained are interesting for field application of the CAL package with and without the teacher's accompanying voice. Finally, the presented study highly recommends the application of a novel teaching trend aiming to improve the quality of children's mathematical learning performance.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 1, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.4862
Less Regret via Online Conditioning
[ "cs.LG", "cs.AI" ]
We analyze and evaluate an online gradient descent algorithm with adaptive per-coordinate adjustment of learning rates. Our algorithm can be thought of as an online version of batch gradient descent with a diagonal preconditioner. This approach leads to regret bounds that are stronger than those of standard online gradient descent for general online convex optimization problems. Experimentally, we show that our algorithm is competitive with state-of-the-art algorithms for large scale machine learning problems.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.4907
Twenty Questions Games Always End With Yes
[ "cs.IT", "cs.DM", "math.IT" ]
Huffman coding is often presented as the optimal solution to Twenty Questions. However, a caveat is that Twenty Questions games always end with a reply of "Yes," whereas Huffman codewords need not obey this constraint. We bring resolution to this issue, and prove that the average number of questions still lies between H(X) and H(X)+1.
{ "Other": 1, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.4908
Adaptive Bound Optimization for Online Convex Optimization
[ "cs.LG" ]
We introduce a new online convex optimization algorithm that adaptively chooses its regularization function based on the loss functions observed so far. This is in contrast to previous algorithms that use a fixed regularization function such as L2-squared, and modify it only via a single time-dependent parameter. Our algorithm's regret bounds are worst-case optimal, and for certain realistic classes of loss functions they are much better than existing bounds. These bounds are problem-dependent, which means they can exploit the structure of the actual problem instance. Critically, however, our algorithm does not need to know this structure in advance. Rather, we prove competitive guarantees that show the algorithm provides a bound within a constant factor of the best possible bound (of a certain functional form) in hindsight.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.4919
Proceedings Third Workshop From Biology To Concurrency and back
[ "cs.CE", "cs.PL" ]
This volume contains the papers presented at the 3rd Workshop "From Biology To Concurrency and back", FBTC 2010, held in Paphos, Cyprus, on March 27, 2010, as satellite event of the Joint European Conference on Theory and Practice of Software, ETAPS 2010. The Workshop aimed at gathering together researchers with special interest at the convergence of life and computer science, with particular focus on the application of techniques and methods from concurrency. The papers contained in this volume present works on modelling, analysis, and validation of biological behaviours using concurrency-inspired methods and platforms, and bio-inspired models and tools for describing distributed interactions.
{ "Other": 1, "cs.AI": 0, "cs.CE": 1, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.4935
Multiarray Signal Processing: Tensor decomposition meets compressed sensing
[ "math.NA", "cs.IT", "math.IT" ]
We discuss how recently discovered techniques and tools from compressed sensing can be used in tensor decompositions, with a view towards modeling signals from multiple arrays of multiple sensors. We show that with appropriate bounds on a measure of separation between radiating sources called coherence, one could always guarantee the existence and uniqueness of a best rank-r approximation of the tensor representing the signal. We also deduce a computationally feasible variant of Kruskal's uniqueness condition, where the coherence appears as a proxy for k-rank. Problems of sparsest recovery with an infinite continuous dictionary, lowest-rank tensor representation, and blind source separation are treated in a uniform fashion. The decomposition of the measurement tensor leads to simultaneous localization and extraction of radiating sources, in an entirely deterministic manner.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1002.5026
Capacity Region of Gaussian MIMO Broadcast Channels with Common and Confidential Messages
[ "cs.IT", "cs.CR", "math.IT" ]
We study the two-user Gaussian multiple-input multiple-output (MIMO) broadcast channel with common and confidential messages. In this channel, the transmitter sends a common message to both users, and a confidential message to each user which needs to be kept perfectly secret from the other user. We obtain the entire capacity region of this channel. We also explore the connections between the capacity region we obtain for the Gaussian MIMO broadcast channel with common and confidential messages and the capacity region of its non-confidential counterpart, i.e., the Gaussian MIMO broadcast channel with common and private messages, which is not known completely.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 1, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1003.0024
Asymptotic Analysis of Generative Semi-Supervised Learning
[ "cs.LG" ]
Semi-supervised learning has emerged as a popular framework for improving modeling accuracy while controlling labeling cost. Based on an extension of stochastic composite likelihood, we quantify the asymptotic accuracy of generative semi-supervised learning. In doing so, we complement distribution-free analysis by providing an alternative framework to measure the value associated with different labeling policies and resolve the fundamental question of how much data to label and in what manner. We demonstrate our approach with both simulation studies and real-world experiments using naive Bayes for text classification and MRFs and CRFs for structured prediction in NLP.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1003.0034
A New Understanding of Prediction Markets Via No-Regret Learning
[ "cs.AI", "cs.LG" ]
We explore the striking mathematical connections that exist between market scoring rules, cost function based prediction markets, and no-regret learning. We show that any cost function based prediction market can be interpreted as an algorithm for the commonly studied problem of learning from expert advice by equating trades made in the market with losses observed by the learning algorithm. If the loss of the market organizer is bounded, this bound can be used to derive an O(sqrt(T)) regret bound for the corresponding learning algorithm. We then show that the class of markets with convex cost functions exactly corresponds to the class of Follow the Regularized Leader learning algorithms, with the choice of a cost function in the market corresponding to the choice of a regularizer in the learning problem. Finally, we show an equivalence between market scoring rules and prediction markets with convex cost functions. This implies that market scoring rules can also be interpreted naturally as Follow the Regularized Leader algorithms, and may be of independent interest. These connections provide new insight into how it is that commonly studied markets, such as the Logarithmic Market Scoring Rule, can aggregate opinions into accurate estimates of the likelihood of future events.
{ "Other": 0, "cs.AI": 1, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1003.0060
Comment on "Fastest learning in small-world neural networks"
[ "stat.ML", "cs.NE" ]
This comment reexamines Simard et al.'s work in [D. Simard, L. Nadeau, H. Kroger, Phys. Lett. A 336 (2005) 8-15]. We found that Simard et al. mistakenly calculated the local connectivity lengths Dlocal of the networks. The correct results for Dlocal are presented, and the supervised learning performance of feedforward neural networks (FNNs) with different rewirings is re-investigated in this comment. This comment refutes Simard et al.'s work with two conclusions: 1) rewiring the connections of FNNs cannot generate networks with small-world connectivity; 2) for different training sets, there exists no number of rewirings that yields networks with lower learning errors than networks with other numbers of rewirings.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 0, "cs.MA": 0, "cs.NE": 1, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1003.0064
Decoding by Sampling: A Randomized Lattice Algorithm for Bounded Distance Decoding
[ "cs.IT", "math.IT", "math.NT" ]
Despite its reduced complexity, lattice reduction-aided decoding exhibits a widening gap to maximum-likelihood (ML) performance as the dimension increases. To improve its performance, this paper presents randomized lattice decoding based on Klein's sampling technique, which is a randomized version of Babai's nearest plane algorithm (i.e., successive interference cancelation (SIC)). To find the closest lattice point, Klein's algorithm is used to sample some lattice points and the closest among those samples is chosen. Lattice reduction increases the probability of finding the closest lattice point, and only needs to be run once during pre-processing. Further, the sampling can operate very efficiently in parallel. The technical contribution of this paper is two-fold: we analyze and optimize the decoding radius of sampling decoding resulting in better error performance than Klein's original algorithm, and propose a very efficient implementation of random rounding. Of particular interest is that a fixed gain in the decoding radius compared to Babai's decoding can be achieved at polynomial complexity. The proposed decoder is useful for moderate dimensions where sphere decoding becomes computationally intensive, while lattice reduction-aided decoding starts to suffer considerable loss. Simulation results demonstrate near-ML performance is achieved by a moderate number of samples, even if the dimension is as high as 32.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1003.0079
Non-Sparse Regularization for Multiple Kernel Learning
[ "cs.LG", "stat.ML" ]
Learning linear combinations of multiple kernels is an appealing strategy when the right choice of features is unknown. Previous approaches to multiple kernel learning (MKL) promote sparse kernel combinations to support interpretability and scalability. Unfortunately, this 1-norm MKL is rarely observed to outperform trivial baselines in practical applications. To allow for robust kernel mixtures, we generalize MKL to arbitrary norms. We devise new insights on the connection between several existing MKL formulations and develop two efficient interleaved optimization strategies for arbitrary norms, like p-norms with p>1. Empirically, we demonstrate that the interleaved optimization strategies are much faster compared to the commonly used wrapper approaches. A theoretical analysis and an experiment on controlled artificial data shed light on the appropriateness of sparse, non-sparse and $\ell_\infty$-norm MKL in various scenarios. Empirical applications of p-norm MKL to three real-world problems from computational biology show that non-sparse MKL achieves accuracies that go beyond the state-of-the-art.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 0, "cs.LG": 1, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1003.0090
Random Access Game in Fading Channels with Capture: Equilibria and Braess-like Paradoxes
[ "cs.GT", "cs.IT", "math.IT" ]
The Nash equilibrium point of the transmission probabilities in a slotted ALOHA system with selfish nodes is analyzed. The system consists of a finite number of heterogeneous nodes, each trying to minimize its average transmission probability (or power investment) selfishly while meeting its average throughput demand over the shared wireless channel to a common base station (BS). We use a game-theoretic approach to analyze the network under two reception models: one is called power capture, the other is called signal to interference plus noise ratio (SINR) capture. It is shown that, in some situations, Braess-like paradoxes may occur. That is, the performance of the system may become worse instead of better when channel state information (CSI) is available at the selfish nodes. In particular, for homogeneous nodes, we analytically present that Braess-like paradoxes occur in the power capture model, and in the SINR capture model with the capture ratio larger than one and the noise to signal ratio sufficiently small.
{ "Other": 1, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }
1003.0093
Joint Subcarrier Pairing and Power Allocation for OFDM Transmission with Decode-and-Forward Relaying
[ "cs.IT", "math.IT" ]
In this paper, a point-to-point Orthogonal Frequency Division Multiplexing (OFDM) system with a decode-and-forward (DF) relay is considered. The transmission consists of two hops. The source transmits in the first hop, and the relay transmits in the second hop. Each hop occupies one time slot. The relay is half-duplex, and capable of decoding the message on a particular subcarrier in one time slot, and re-encoding and forwarding it on a different subcarrier in the next time slot. Thus each message is transmitted on a pair of subcarriers in two hops. It is assumed that the destination is capable of combining the signals from the source and the relay pertaining to the same message. The goal is to maximize the weighted sum rate of the system by jointly optimizing subcarrier pairing and power allocation on each subcarrier in each hop. The rates are weighted to account for the fact that different subcarriers may carry signals for different services. Both total and individual power constraints for the source and the relay are investigated. For the situations where the relay does not transmit on some subcarriers because doing so does not improve the weighted sum rate, we further allow the source to transmit new messages on these idle subcarriers. To the best of our knowledge, such a joint optimization inclusive of the destination combining has not been discussed in the literature. The problem is first formulated as a mixed integer programming problem. It is then transformed into a convex optimization problem by continuous relaxation, and solved in the dual domain. Based on the optimization results, algorithms to achieve feasible solutions are also proposed. Simulation results show that the proposed algorithms almost achieve the optimal weighted sum rate, and outperform the existing methods in various channel conditions.
{ "Other": 0, "cs.AI": 0, "cs.CE": 0, "cs.CL": 0, "cs.CR": 0, "cs.CV": 0, "cs.CY": 0, "cs.DB": 0, "cs.HC": 0, "cs.IR": 0, "cs.IT": 1, "cs.LG": 0, "cs.MA": 0, "cs.NE": 0, "cs.RO": 0, "cs.SD": 0, "cs.SI": 0, "cs.SY": 0 }