0905.2479
A note on a complex Hilbert metric with application to domain of analyticity for entropy rate of hidden Markov processes
math.DS cs.IT math.IT
In this note, we show that small complex perturbations of positive matrices are contractions, with respect to a complex version of the Hilbert metric, on the standard complex simplex. We show that this metric can be used to obtain estimates of the domain of analyticity of entropy rate for a hidden Markov process when the underlying Markov chain has strictly positive transition probabilities.
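The real-valued analogue of this contraction property (Birkhoff's theorem) is easy to check numerically. The sketch below uses the classical real Hilbert projective metric on the positive cone, not the complex version developed in the note; all names are illustrative:

```python
import numpy as np

def hilbert_metric(x, y):
    """Hilbert projective metric between strictly positive vectors."""
    r = x / y
    return np.log(r.max() / r.min())

rng = np.random.default_rng(0)
A = rng.uniform(0.5, 2.0, size=(4, 4))   # strictly positive matrix
x = rng.uniform(0.1, 1.0, size=4)
y = rng.uniform(0.1, 1.0, size=4)

# Birkhoff's theorem: a strictly positive matrix is a strict
# contraction of the Hilbert metric on the positive cone.
d_before = hilbert_metric(x, y)
d_after = hilbert_metric(A @ x, A @ y)
assert d_after < d_before
```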
0905.2501
Macrodynamics of users' behavior in Information Retrieval
cs.IR
We present a method to geometrize massive data sets from search engines query logs. For this purpose, a macrodynamic-like quantitative model of the Information Retrieval (IR) process is developed, whose paradigm is inspired by basic constructions of Einstein's general relativity theory in which all IR objects are uniformly placed in a common Room. The Room has a structure similar to Einsteinian spacetime, namely that of a smooth manifold. Documents and queries are treated as matter objects and sources of material fields. Relevance, the central notion of IR, becomes a dynamical issue controlled by both gravitation (or, more precisely, as the motion in a curved spacetime) and forces originating from the interactions of matter fields. The spatio-temporal description ascribes dynamics to any document or query, thus providing a uniform description for documents of both initially static and dynamical nature. Within the IR context, the techniques presented are based on two ideas. The first is the placement of all objects participating in IR into a common continuous space. The second idea is the `objectivization' of the IR process; instead of expressing users' wishes, we consider the overall IR as an objective physical process, representing the IR process in terms of motion in a given external-fields configuration. Various semantic environments are treated as various IR universes.
0905.2635
Point-Set Registration: Coherent Point Drift
cs.CV
Point set registration is a key component in many computer vision tasks. The goal of point set registration is to assign correspondences between two sets of points and to recover the transformation that maps one point set to the other. Multiple factors, including an unknown non-rigid spatial transformation, large dimensionality of point set, noise and outliers, make the point set registration a challenging problem. We introduce a probabilistic method, called the Coherent Point Drift (CPD) algorithm, for both rigid and non-rigid point set registration. We consider the alignment of two point sets as a probability density estimation problem. We fit the GMM centroids (representing the first point set) to the data (the second point set) by maximizing the likelihood. We force the GMM centroids to move coherently as a group to preserve the topological structure of the point sets. In the rigid case, we impose the coherence constraint by re-parametrization of GMM centroid locations with rigid parameters and derive a closed form solution of the maximization step of the EM algorithm in arbitrary dimensions. In the non-rigid case, we impose the coherence constraint by regularizing the displacement field and using the variational calculus to derive the optimal transformation. We also introduce a fast algorithm that reduces the method computation complexity to linear. We test the CPD algorithm for both rigid and non-rigid transformations in the presence of noise, outliers and missing points, where CPD shows accurate results and outperforms current state-of-the-art methods.
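The rigid-case EM iteration sketched in the abstract (GMM posteriors in the E-step, a closed-form Procrustes update in the M-step) can be illustrated as follows. This is a minimal 2-D sketch with our own variable names and a zero outlier weight, not the authors' implementation:

```python
import numpy as np

def rigid_cpd(X, Y, iters=100, w=0.0):
    """Minimal rigid CPD: fit GMM centroids Y to data X via EM.
    Returns (s, R, t) such that the fitted transform is s * Y @ R.T + t."""
    N, D = X.shape
    M, _ = Y.shape
    R, t, s = np.eye(D), np.zeros(D), 1.0
    sigma2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum() / (D * M * N)
    for _ in range(iters):
        T = s * Y @ R.T + t
        # E-step: posterior P[m, n] that centroid m generated point n
        d2 = ((X[:, None, :] - T[None, :, :]) ** 2).sum(-1)      # (N, M)
        num = np.exp(-d2 / (2 * sigma2))
        c = (2 * np.pi * sigma2) ** (D / 2) * (w / (1 - w + 1e-12)) * M / N
        P = (num / (num.sum(axis=1, keepdims=True) + c + 1e-300)).T
        P1, Pt1, Np = P.sum(1), P.sum(0), P.sum()
        # M-step: closed-form rigid update (weighted Procrustes via SVD)
        mu_x, mu_y = X.T @ Pt1 / Np, Y.T @ P1 / Np
        Xh, Yh = X - mu_x, Y - mu_y
        A = Xh.T @ P.T @ Yh
        U, _, Vt = np.linalg.svd(A)
        C = np.eye(D)
        C[-1, -1] = np.linalg.det(U @ Vt)   # enforce a proper rotation
        R = U @ C @ Vt
        s = np.trace(A.T @ R) / np.trace(Yh.T @ (P1[:, None] * Yh))
        t = mu_x - s * R @ mu_y
        sigma2 = max((np.trace(Xh.T @ (Pt1[:, None] * Xh))
                      - s * np.trace(A.T @ R)) / (Np * D), 1e-10)
    return s, R, t

# Recover a known similarity transform from clean, noiseless points.
rng = np.random.default_rng(1)
Y = rng.normal(size=(40, 2))
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
X = 1.2 * Y @ R_true.T + np.array([0.5, -0.3])
s, R, t = rigid_cpd(X, Y)
```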
0905.2638
Secure Degrees of Freedom for Gaussian Channels with Interference: Structured Codes Outperform Gaussian Signaling
cs.IT math.IT
In this work, we prove that a positive secure degree of freedom is achievable for a large class of Gaussian channels as long as the channel is not degraded and the channel is fully connected. This class includes the MAC wire-tap channel, the 2-user interference channel with confidential messages, and the 2-user interference channel with an external eavesdropper. The best known achievable schemes to date for these channels use Gaussian signaling. Here, we show that structured codes outperform Gaussian random codes at high SNR when channel gains are real numbers.
0905.2639
Information-theoretic limits of selecting binary graphical models in high dimensions
cs.IT cs.LG math.IT math.ST stat.TH
The problem of graphical model selection is to correctly estimate the graph structure of a Markov random field given samples from the underlying distribution. We analyze the information-theoretic limitations of the problem of graph selection for binary Markov random fields under high-dimensional scaling, in which the graph size $p$, the number of edges $k$, and/or the maximal node degree $d$ are allowed to increase to infinity as a function of the sample size $n$. For pairwise binary Markov random fields, we derive both necessary and sufficient conditions for correct graph selection over the class $\mathcal{G}_{p,k}$ of graphs on $p$ vertices with at most $k$ edges, and over the class $\mathcal{G}_{p,d}$ of graphs on $p$ vertices with maximum degree at most $d$. For the class $\mathcal{G}_{p, k}$, we establish the existence of constants $c$ and $c'$ such that if $n < c k \log p$, any method has error probability at least 1/2 uniformly over the family, and we demonstrate a graph decoder that succeeds with high probability uniformly over the family for sample sizes $n > c' k^2 \log p$. Similarly, for the class $\mathcal{G}_{p,d}$, we exhibit constants $c$ and $c'$ such that for $n < c d^2 \log p$, any method fails with probability at least 1/2, and we demonstrate a graph decoder that succeeds with high probability for $n > c' d^3 \log p$.
0905.2640
The Gaussian Many-to-One Interference Channel with Confidential Messages
cs.IT math.IT
We investigate the $K$-user many-to-one interference channel with confidential messages in which the $K$th user experiences interference from all other $K-1$ users, and is at the same time treated as an eavesdropper to all the messages of these users. We derive achievable rates and an upper bound on the sum rate for this channel and show that the gap between the achievable sum rate and its upper bound is $\log_2(K-1)$ bits per channel use under very strong interference, when the interfering users have equal power constraints and interfering link channel gains. The main contributions of this work are: (i) nested lattice codes are shown to provide secrecy when interference is present, (ii) a secrecy sum rate upper bound is found for strong interference regime and (iii) it is proved that under very strong interference and a symmetric setting, the gap between the achievable sum rate and the upper bound is constant with respect to transmission powers.
0905.2643
K-user Interference Channels: Achievable Secrecy Rate and Degrees of Freedom
cs.IT math.IT
In this work, we consider achievable secrecy rates for symmetric $K$-user ($K \ge 3$) interference channels with confidential messages. We find that nested lattice codes and layered coding are useful in providing secrecy for these channels. Achievable secrecy rates are derived for very strong interference. In addition, we derive the secure degrees of freedom for a range of channel parameters. As a by-product of our approach, we also demonstrate that nested lattice codes are useful for K-user symmetric interference channels without secrecy constraints in that they yield higher degrees of freedom than previous results.
0905.2645
Providing Secrecy with Lattice Codes
cs.IT math.IT
Recent results have shown that lattice codes can be used to construct good channel codes, source codes and physical layer network codes for Gaussian channels. On the other hand, for Gaussian channels with secrecy constraints, efforts to date rely on random codes. In this work, we provide a tool to bridge these two areas so that the secrecy rate can be computed when lattice codes are used. In particular, we address the problem of bounding equivocation rates under nonlinear modulus operation that is present in lattice encoders/decoders. The technique is then demonstrated in two Gaussian channel examples: (1) a Gaussian wiretap channel with a cooperative jammer, and (2) a multi-hop line network from a source to a destination with untrusted intermediate relay nodes from whom the information needs to be kept secret. In both cases, lattice codes are used to facilitate cooperative jamming. In the second case, interestingly, we demonstrate that a non-vanishing positive secrecy rate is achievable regardless of the number of hops.
0905.2649
An Immune System Inspired Approach to Automated Program Verification
cs.NE
An Artificial Immune System (AIS) algorithm, inspired by the biological immune system, is presented and used for automated program verification. Relevant immunological concepts are discussed and the field of AIS is briefly reviewed. It is proposed to use this AIS algorithm for a specific automated program verification task: that of predicting the shape of program invariants. It is shown that the algorithm correctly predicts program invariant shape for a variety of benchmark programs.
0905.2657
Web 2.0 OLAP: From Data Cubes to Tag Clouds
cs.DB
Increasingly, business projects are ephemeral. New Business Intelligence tools must support ad-lib data sources and quick perusal. Meanwhile, tag clouds are a popular community-driven visualization technique. Hence, we investigate tag-cloud views with support for OLAP operations such as roll-ups, slices, dices, clustering, and drill-downs. As a case study, we implemented an application where users can upload data and immediately navigate through its ad hoc dimensions. To support social networking, views can be easily shared and embedded in other Web sites. Algorithmically, our tag-cloud views are approximate range top-k queries over spontaneous data cubes. We present experimental evidence that iceberg cuboids provide adequate online approximations. We benchmark several browser-oblivious tag-cloud layout optimizations.
0905.2659
Coalitional Games for Distributed Collaborative Spectrum Sensing in Cognitive Radio Networks
cs.GT cs.IT math.IT
Collaborative spectrum sensing among secondary users (SUs) in cognitive networks is shown to yield a significant performance improvement. However, there exists an inherent trade-off between the gains in terms of probability of detection of the primary user (PU) and the costs in terms of false alarm probability. In this paper, we study the impact of this trade-off on the topology and the dynamics of a network of SUs seeking to reduce the interference on the PU through collaborative sensing. Moreover, while the existing literature has mainly focused on centralized solutions for collaborative sensing, we propose distributed collaboration strategies through game theory. We model the problem as a non-transferable coalitional game, and propose a distributed algorithm for coalition formation through simple merge and split rules. Through the proposed algorithm, SUs can autonomously collaborate and self-organize into disjoint independent coalitions, while maximizing their detection probability taking into account the cooperation costs (in terms of false alarm). We study the stability of the resulting network structure, and show that a maximum number of SUs per formed coalition exists for the proposed utility model. Simulation results show that the proposed algorithm yields a reduction of up to 86.6% in the average missing probability per SU (probability of missing the detection of the PU) relative to the non-cooperative case, while maintaining a certain false alarm level. In addition, through simulations, we compare the performance of the proposed distributed solution with respect to an optimal centralized solution that minimizes the average missing probability per SU. Finally, the results also show how the proposed algorithm autonomously adapts the network topology to environmental changes such as mobility.
0905.2676
On the Benefits of Bandwidth Limiting in Decentralized Vector Multiple Access Channels
cs.IT math.IT
We study the network spectral efficiency of decentralized vector multiple access channels (MACs) when the number of accessible dimensions per transmitter is strategically limited. Considering each dimension as a frequency band, we call this limiting process bandwidth limiting (BL). Assuming that each transmitter maximizes its own data rate by water-filling over the available frequency bands, we consider two scenarios. In the first scenario, transmitters use non-intersecting sets of bands (spectral resource partition), and in the second one, they freely exploit all the available frequency bands (spectral resource sharing). In the latter case, successive interference cancelation (SIC) is used. We show the existence of an optimal number of dimensions that a transmitter must use in order to maximize the network performance measured in terms of spectral efficiency. We provide a closed-form expression for the optimal number of accessible bands in the first scenario. This optimum depends on the number of active transmitters, the number of available frequency bands and the different signal-to-noise ratios. In the second scenario, we show that BL does not bring a significant improvement in the network spectral efficiency when all transmitters use the same BL policy. For both scenarios, we provide simulation results to validate our conclusions.
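The per-transmitter water-filling step assumed in both scenarios can be illustrated with a standard bisection implementation; the function name, band count, and noise levels below are illustrative, not from the paper:

```python
import numpy as np

def waterfill(noise, P, tol=1e-12):
    """Water-filling power allocation: maximize sum(log(1 + p_i / n_i))
    subject to sum(p_i) = P and p_i >= 0, by bisecting on the water level."""
    noise = np.asarray(noise, dtype=float)
    lo, hi = noise.min(), noise.max() + P   # f(lo) <= P <= f(hi)
    while hi - lo > tol:
        mu = (lo + hi) / 2
        # total power poured up to level mu
        if np.maximum(mu - noise, 0).sum() > P:
            hi = mu
        else:
            lo = mu
    return np.maximum((lo + hi) / 2 - noise, 0)

p = waterfill([0.1, 0.5, 2.0], P=1.0)
assert abs(p.sum() - 1.0) < 1e-6
assert p[0] > p[1] > p[2]   # cleaner bands receive more power
```

With a budget of 1 and noise levels (0.1, 0.5, 2.0), the water level settles below the noisiest band, which therefore receives no power.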
0905.2718
Achievable Rate and Optimal Physical Layer Rate Allocation in Interference-Free Wireless Networks
cs.IT math.IT
We analyze the achievable rate in interference-free wireless networks with physical layer fading channels and orthogonal multiple access. As a starting point, the point-to-point channel is considered. We find the optimal physical and network layer rate trade-off which maximizes the achievable overall rate for both a fixed rate transmission scheme and an improved scheme based on multiple virtual users and superposition coding. These initial results are extended to the network setting, where, based on a cut-set formulation, the achievable rate at each node and its upper bound are derived. We propose a distributed optimization algorithm which makes it possible to jointly determine the maximum achievable rate, the optimal physical layer rates on each network link, and an opportunistic back-pressure-type routing strategy on the network layer. This inherently justifies the layered architecture of existing wireless networks. Finally, we show that the proposed layered optimization approach can achieve almost all of the ergodic network capacity at high SNR.
0905.2796
Sparse Network Coding with Overlapping Classes
cs.IT math.IT
This paper presents a novel approach to network coding for distribution of large files. Instead of the usual approach of splitting packets into disjoint classes (also known as generations) we propose the use of overlapping classes. The overlapping allows the decoder to alternate between Gaussian elimination and back substitution, simultaneously boosting the performance and reducing the decoding complexity. Our approach can be seen as a combination of fountain coding and network coding. Simulation results are presented that demonstrate the promise of our approach.
0905.2817
Cavity approach to the Sourlas code system
cond-mat.dis-nn cond-mat.stat-mech cs.IT math.IT
The statistical physics properties of regular and irregular Sourlas codes are investigated in this paper by the cavity method. At finite temperatures, the free energy density of these coding systems is derived and compared with the result obtained by the replica method. In the zero temperature limit, Shannon's bound is recovered in the case of infinite-body interactions while the code rate remains finite. However, the decoding performance obtained by the replica theory does not take the zero-temperature entropic effect into account. The cavity approach is able to account for the ground-state entropy. It leads to a set of evanescent cavity field propagation equations which further improve the decoding performance, as confirmed by our numerical simulations on single instances. For the irregular Sourlas code, we find that it strikes a trade-off between good dynamical properties and high decoding performance. In agreement with the results found from the algorithmic point of view, the decoding exhibits a first order phase transition, as occurs in the regular code system with three-body interactions. The cavity approach for the Sourlas code system can be extended to consider first-step replica-symmetry breaking.
0905.2882
Do not Choose Representation just Change: An Experimental Study in States based EA
cs.NE cs.AI
Our aim in this paper is to analyse the phenotypic effects (evolvability) of diverse coding conversion operators in an instance of the states based evolutionary algorithm (SEA). Since the representation of solutions, and the selection of the best encoding during the optimization process, has been shown to be very important for the efficiency of evolutionary algorithms (EAs), we discuss a strategy that couples more than one representation with different procedures for converting from one coding to another during the search. Some EAs already use multiple representations (SM-GA, SEA, etc.) with the intention of benefiting from the characteristics of each of them. This paper shows that changing the representation is itself a crucial factor to take into consideration when attempting to increase the performance of such EAs. As a demonstrative example, we use a two-state SEA (2-SEA) which has two identical search spaces but different coding conversion operators. The results show that the way of changing from one coding to another, and not only the choice of the best representation or the representation itself, is very advantageous and must be taken into account in order to properly design and improve EAs.
0905.2919
Succinct Representation of Codes with Applications to Testing
cs.IT math.IT
Motivated by questions in property testing, we search for linear error-correcting codes that have the "single local orbit" property: i.e., they are specified by a single local constraint and its translations under the symmetry group of the code. We show that the dual of every "sparse" binary code whose coordinates are indexed by elements of F_{2^n} for prime n, and whose symmetry group includes the group of non-singular affine transformations of F_{2^n}, has the single local orbit property. (A code is said to be "sparse" if it contains polynomially many codewords in its block length.) In particular this class includes the dual-BCH codes, for whose duals (i.e., for BCH codes) simple bases were not known. Our result gives the first short (O(n)-bit, as opposed to the natural exp(n)-bit) description of a low-weight basis for BCH codes. The interest in the "single local orbit" property comes from the recent result of Kaufman and Sudan (STOC 2008) that shows that the duals of codes that have the single local orbit property under the affine symmetry group are locally testable. When combined with our main result, this shows that all sparse affine-invariant codes over the coordinates F_{2^n} for prime n are locally testable. If, in addition to n being prime, 2^n-1 is also prime (i.e., 2^n-1 is a Mersenne prime), then we get that every sparse cyclic code also has the single local orbit property. In particular this implies that BCH codes of Mersenne prime length are generated by a single low-weight codeword and its cyclic shifts.
0905.2924
Colorization of Natural Images via L1 Optimization
cs.CV
Natural images in the colour space YUV have been observed to have a non-Gaussian, heavy tailed distribution (called 'sparse') when the filter $G(U)(r) = U(r) - \sum_{s \in N(r)} w^{(Y)}_{rs} U(s)$ is applied to the chromaticity channel U (and equivalently to V), where w is a weighting function constructed from the intensity component Y [1]. In this paper we develop a Bayesian analysis of the colorization problem using the filter response as a regularization term, arriving at a non-convex optimization problem. This problem is convexified using L1 optimization, which often gives the same results for sparse signals [2]. It is observed that L1 optimization, in many cases, outperforms the well-known colorization algorithm by Levin et al [3].
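The filter above can be computed directly. In the sketch below, the Gaussian form of the intensity weights is an assumption in the spirit of Levin et al., and the 4-neighborhood and `sigma` are illustrative choices rather than the paper's exact construction:

```python
import numpy as np

def filter_response(Y, U, sigma=0.1):
    """G(U)(r) = U(r) - sum_{s in N(r)} w_rs U(s), with intensity-based
    weights w_rs ~ exp(-(Y_r - Y_s)^2 / (2 sigma^2)), normalized over
    the 4-neighborhood N(r)."""
    H, W = Y.shape
    G = np.zeros_like(U)
    for r in range(H):
        for c in range(W):
            nbrs = [(r + dr, c + dc) for dr, dc in
                    [(-1, 0), (1, 0), (0, -1), (0, 1)]
                    if 0 <= r + dr < H and 0 <= c + dc < W]
            w = np.array([np.exp(-(Y[r, c] - Y[i, j]) ** 2 / (2 * sigma ** 2))
                          for i, j in nbrs])
            w /= w.sum()   # weights over each neighborhood sum to one
            G[r, c] = U[r, c] - sum(wk * U[i, j]
                                    for wk, (i, j) in zip(w, nbrs))
    return G

rng = np.random.default_rng(2)
Y = rng.uniform(size=(8, 8))
# A constant chromaticity channel has zero filter response,
# since the normalized weights sum to one.
G = filter_response(Y, np.full((8, 8), 0.3))
assert np.allclose(G, 0.0)
```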
0905.2958
A statistical learning approach to color demosaicing
cs.CV
A statistical learning/inference framework for color demosaicing is presented. We start with simplistic assumptions about color constancy, and recast color demosaicing as a blind linear inverse problem: color parameterizes the unknown kernel, while brightness takes on the role of a latent variable. An expectation-maximization algorithm naturally suggests itself for the estimation of both. Then, as we gradually broaden the family of hypotheses where color is learned, we let our demosaicing behave adaptively, in a manner that reflects our prior knowledge about the statistics of color images. We show that we can incorporate realistic, learned priors without essentially changing the complexity of the simple expectation-maximization algorithm we started with.
0905.2990
Automatic Summarization System coupled with a Question-Answering System (QAAS)
cs.IR cs.CL
To select the most relevant sentences of a document, the Cortex summarization system uses an optimal decision algorithm that combines several metrics. These metrics process, weight, and extract pertinent sentences using statistical and informational algorithms. This technique can improve a Question-Answering system, whose function is to provide an exact answer to a question in natural language. In this paper, we present the results obtained by coupling the Cortex summarizer with a Question-Answering system (QAAS). Two configurations have been evaluated. In the first one, a low compression level is selected and the summarization system is only used as a noise filter. In the second configuration, the system actually functions as a summarizer, with a very high level of compression. Our results on a French corpus demonstrate that coupling an Automatic Summarization system with a Question-Answering system is promising. The system has then been adapted to generate a customized summary depending on the specific question. Tests on a French multi-document corpus have been carried out, and the personalized QAAS system obtains the best performance.
0905.2997
Average-Case Active Learning with Costs
cs.LG
We analyze the expected cost of a greedy active learning algorithm. Our analysis extends previous work to a more general setting in which different queries have different costs. Moreover, queries may have more than two possible responses and the distribution over hypotheses may be non-uniform. Specific applications include active learning with label costs, active learning for multiclass and partial label queries, and batch-mode active learning. We also discuss an approximate version, which is of interest when there are very many queries.
0905.3023
Interference and Deployment Issues for Cognitive Radio Systems in Shadowing Environments
cs.IT math.IT
In this paper we describe a model for calculating the aggregate interference encountered by primary receivers in the presence of randomly placed cognitive radios (CRs). We show that incorporating the impact of distance attenuation and lognormal fading on each constituent interferer in the aggregate leads to a composite interference that cannot be satisfactorily modeled by a lognormal. Using the interference statistics we determine a number of key parameters needed for the deployment of CRs. Examples of these are the exclusion zone radius, needed to protect the primary receiver under different types of fading environments and acceptable interference levels, and the numbers of CRs that can be deployed. We further show that if the CRs have a priori knowledge of the radio environment map (REM), then a much larger number of CRs can be deployed, especially in a high density environment. Given REM information, we also examine the number of CRs achievable under two different techniques for processing the scheduling information.
0905.3030
Performance of Cognitive Radio Systems with Imperfect Radio Environment Map Information
cs.IT math.IT
In this paper we describe the effect of imperfections in the radio environment map (REM) information on the performance of cognitive radio (CR) systems. Via simulations we explore the relationship between the required precision of the REM and various channel/system properties. For example, the degree of spatial correlation in the shadow fading is a key factor as is the interference constraint employed by the primary user. Based on the CR interferers obtained from the simulations, we characterize the temporal behavior of such systems by computing the level crossing rates (LCRs) of the cumulative interference represented by these CRs. This evaluates the effect of short term fluctuations above acceptable interference levels due to the fast fading. We derive analytical formulae for the LCRs in Rayleigh and Rician fast fading conditions. The analytical results are verified by Monte Carlo simulations.
0905.3076
On a Class of Doubly-Generalized LDPC Codes with Single Parity-Check Variable Nodes
cs.IT math.IT
A class of doubly-generalized low-density parity-check (D-GLDPC) codes, where single parity-check (SPC) codes are used as variable nodes (VNs), is investigated. An expression for the growth rate of the weight distribution of any D-GLDPC ensemble with a uniform check node (CN) set is presented at first, together with an analytical technique for its efficient evaluation. These tools are then used for detailed analysis of a case study, namely, a rate-1/2 D-GLDPC ensemble where all the CNs are (7,4) Hamming codes and all the VNs are length-7 SPC codes. It is illustrated how the VN representations can heavily affect the code properties and how different VN representations can be combined within the same graph to enhance some of the code parameters. The analysis is conducted over the binary erasure channel. Interesting features of the new codes include the capability of achieving a good compromise between waterfall and error floor performance while preserving graphical regularity, and values of threshold outperforming LDPC counterparts.
0905.3086
Deterministic Relay Networks with State Information
cs.IT math.IT
Motivated by fading channels and erasure channels, the problem of reliable communication over deterministic relay networks is studied, in which relay nodes receive a function of the incoming signals and a random network state. An achievable rate is characterized for the case in which destination nodes have full knowledge of the state information. If the relay nodes receive a linear function of the incoming signals and the state in a finite field, then the achievable rate is shown to be optimal, meeting the cut-set upper bound on the capacity. This result generalizes, within a unified framework, the work of Avestimehr, Diggavi, and Tse on deterministic networks with state dependency, the work of Dana, Gowaikar, Palanki, Hassibi, and Effros on linear erasure networks with interference, and the work of Smith and Vishwanath on linear erasure networks with broadcast.
0905.3108
A Note on the Complexity of the Satisfiability Problem for Graded Modal Logics
cs.LO cs.AI cs.CC
Graded modal logic is the formal language obtained from ordinary (propositional) modal logic by endowing its modal operators with cardinality constraints. Under the familiar possible-worlds semantics, these augmented modal operators receive interpretations such as "It is true at no fewer than 15 accessible worlds that...", or "It is true at no more than 2 accessible worlds that...". We investigate the complexity of satisfiability for this language over some familiar classes of frames. This problem is more challenging than its ordinary modal logic counterpart--especially in the case of transitive frames, where graded modal logic lacks the tree-model property. We obtain tight complexity bounds for the problem of determining the satisfiability of a given graded modal logic formula over the classes of frames characterized by any combination of reflexivity, seriality, symmetry, transitivity and the Euclidean property.
0905.3109
Interference Channels with Source Cooperation
cs.IT math.IT
The role of cooperation in managing interference - a fundamental feature of the wireless channel - is investigated by studying the two-user Gaussian interference channel where the source nodes can both transmit and receive in full-duplex. The sum-capacity of this channel is obtained within a gap of a constant number of bits. The coding scheme used builds on the superposition scheme of Han and Kobayashi (1981) for the two-user interference channel without cooperation. New upper bounds on the sum-capacity are also derived. The same coding scheme is shown to obtain the sum-capacity of the symmetric two-user Gaussian interference channel with noiseless feedback within a constant gap.
0905.3135
The discrete logarithm problem in the group of non-singular circulant matrices
cs.CR cs.DM cs.IT math.IT
The discrete logarithm problem is one of the backbones in public key cryptography. In this paper we study the discrete logarithm problem in the group of circulant matrices over a finite field. This gives rise to secure and fast public key cryptosystems.
0905.3178
SQS-graphs of Solov'eva-Phelps codes
math.CO cs.IT math.IT
A binary extended 1-perfect code $\mathcal C$ folds over its kernel via the Steiner quadruple systems associated with its codewords. The resulting folding, proposed as a graph invariant for $\mathcal C$, distinguishes among the 361 nonlinear codes $\mathcal C$ of kernel dimension $\kappa$ obtained via Solov'eva-Phelps doubling construction, where $9\geq\kappa\geq 5$. Each of the 361 resulting graphs has most of its nonloop edges expressible in terms of lexicographically ordered quarters of products of classes from extended 1-perfect partitions of length 8 (as classified by Phelps) and loops mostly expressible in terms of the lines of the Fano plane.
0905.3201
On the Statistics of Cognitive Radio Capacity in Shadowing and Fast Fading Environments
cs.IT math.IT
In this paper we consider the capacity of the cognitive radio channel in a fading environment under a "low interference regime". This capacity depends critically on a power loss parameter, $\alpha$, which governs how much transmit power the cognitive radio dedicates to relaying the primary message. We derive a simple, accurate approximation to $\alpha$ which gives considerable insight into system capacity. We also investigate the effects of system parameters and propagation environment on $\alpha$ and the cognitive radio capacity. In all cases, the use of the approximation is shown to be extremely accurate. Finally, we derive the probability that the "low interference regime" holds and demonstrate that this is the dominant case, especially in practical cognitive radio deployment scenarios.
0905.3245
Novel Algorithm for Sparse Solutions to Linear Inverse Problems with Multiple Measurements
cs.IT math.IT
In this report, a novel efficient algorithm for the recovery of jointly sparse signals (a sparse matrix) from multiple incomplete measurements is presented, namely the NESTA-based MMV optimization method. In a nutshell, jointly sparse recovery is clearly superior to applying standard sparse reconstruction methods to each channel individually. Moreover, several improvements to the NESTA-based MMV algorithm are made, in particular: (1) a NESTA-based MMV algorithm for partially known support, which greatly improves the convergence rate; (2) the detection of some (or all) locations of the unknown jointly sparse signals using the so-called MUSIC algorithm; (3) an iterative NESTA-based algorithm combined with a hard-thresholding technique to decrease the number of measurements. It has been shown that, by using the proposed approach, one can recover the unknown sparse matrix X, with sparsity and measurement requirements expressed in terms of Spark(A) as predicted in Ref. [1], where the measurement matrix A satisfies the so-called restricted isometry property (RIP). Under a very mild condition on the sparsity of X and the characteristics of A, the iterative hard thresholding (IHT)-based MMV method is also shown to be a very good candidate.
0905.3318
An Object-Oriented and Fast Lexicon for Semantic Generation
cs.CL cs.DB cs.DS cs.IR cs.PL
This paper is about the technical design of a large computational lexicon, its storage, and its access from a Prolog environment. Traditionally, efficient access and storage of data structures is implemented by a relational database management system. In Delilah, a lexicon-based NLP system, efficient access to the lexicon by the semantic generator is vital. We show that our highly detailed HPSG-style lexical specifications do not fit well in the Relational Model, and that they cannot be efficiently retrieved. We argue that they fit more naturally in the Object-Oriented Model. Although storage of objects is redundant, we claim that efficient access is still possible by applying indexing, and compression techniques from the Relational Model to the Object-Oriented Model. We demonstrate that it is possible to implement object-oriented storage and fast access in ISO Prolog.
0905.3347
Information Distance in Multiples
cs.CV cs.LG
Information distance is a parameter-free similarity measure based on compression, used in pattern recognition, data mining, phylogeny, clustering, and classification. The notion of information distance is extended from pairs to multiples (finite lists). We study maximal overlap, metricity, universality, minimal overlap, additivity, and normalized information distance in multiples. We use the theoretical notion of Kolmogorov complexity which for practical purposes is approximated by the length of the compressed version of the file involved, using a real-world compression program. {\em Index Terms}-- Information distance, multiples, pattern recognition, data mining, similarity, Kolmogorov complexity
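As a hedged illustration (mine, not from the abstract): in practice the pairwise information distance is approximated by the normalized compression distance, computed from the lengths of compressed files. A minimal sketch using Python's zlib; the compressor choice and the NCD formula are the standard practical approximation, not this paper's multiples extension:

```python
import zlib

def C(b: bytes) -> int:
    # Approximate Kolmogorov complexity K(b) by the compressed length.
    return len(zlib.compress(b, 9))

def ncd(x: bytes, y: bytes) -> float:
    # Pairwise normalized compression distance:
    # NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))
    cx, cy = C(x), C(y)
    return (C(x + y) - min(cx, cy)) / max(cx, cy)
```

Similar strings compress well jointly, so their NCD is near 0; unrelated strings share little and score higher.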
0905.3356
Memento Ludi: Information Retrieval from a Game-Theoretic Perspective
cs.IR cs.GT
We develop a macro-model of information retrieval process using Game Theory as a mathematical theory of conflicts. We represent the participants of the Information Retrieval process as a game of two abstract players. The first player is the `intellectual crowd' of users of search engines, the second is a community of information retrieval systems. In order to apply Game Theory, we treat search log data as Nash equilibrium strategies and solve the inverse problem of finding appropriate payoff functions. For that, we suggest a particular model, which we call Alpha model. Within this model, we suggest a method, called shifting, which makes it possible to partially control the behavior of massive users. This Note is addressed to researchers in both game theory (providing a new class of real life problems) and information retrieval, for whom we present new techniques to control the IR environment.
0905.3360
A Generalized Statistical Complexity Measure: Applications to Quantum Systems
quant-ph cs.IT math.IT nlin.AO physics.atom-ph
A two-parameter family of complexity measures $\tilde{C}^{(\alpha,\beta)}$ based on the R\'enyi entropies is introduced and characterized by a detailed study of its mathematical properties. This family is the generalization of a continuous version of the LMC complexity, which is recovered for $\alpha=1$ and $\beta=2$. These complexity measures are obtained by multiplying two quantities bringing global information on the probability distribution defining the system. When one of the parameters, $\alpha$ or $\beta$, goes to infinity, one of the global factors becomes a local factor. For this special case, the complexity is calculated on different quantum systems: H-atom, harmonic oscillator and square well.
0905.3369
Learning Nonlinear Dynamic Models
cs.AI cs.LG
We present a novel approach for learning nonlinear dynamic models, which leads to a new set of tools capable of solving problems that are otherwise difficult. We provide theory showing this new approach is consistent for models with long range structure, and apply the approach to motion capture and high-dimensional video data, yielding results superior to standard alternatives.
0905.3378
Interpretations of the Web of Data
cs.AI cs.DL
The emerging Web of Data utilizes the web infrastructure to represent and interrelate data. The foundational standards of the Web of Data include the Uniform Resource Identifier (URI) and the Resource Description Framework (RDF). URIs are used to identify resources and RDF is used to relate resources. While RDF has been posited as a logic language designed specifically for knowledge representation and reasoning, it is more generally useful if it can conveniently support other models of computing. In order to realize the Web of Data as a general-purpose medium for storing and processing the world's data, it is necessary to separate RDF from its logic language legacy and frame it simply as a data model. Moreover, there is significant advantage in seeing the Semantic Web as a particular interpretation of the Web of Data that is focused specifically on knowledge representation and reasoning. By doing so, other interpretations of the Web of Data are exposed that realize RDF in different capacities and in support of different computing models.
0905.3407
Throughput and Delay Scaling in Supportive Two-Tier Networks
cs.IT math.IT
Consider a wireless network that has two tiers with different priorities: a primary tier vs. a secondary tier, which is an emerging network scenario with the advancement of cognitive radio technologies. The primary tier consists of randomly distributed legacy nodes of density $n$, which have an absolute priority to access the spectrum. The secondary tier consists of randomly distributed cognitive nodes of density $m=n^\beta$ with $\beta\geq 2$, which can only access the spectrum opportunistically to limit the interference to the primary tier. Based on the assumption that the secondary tier is allowed to route the packets for the primary tier, we investigate the throughput and delay scaling laws of the two tiers in the following two scenarios: i) the primary and secondary nodes are all static; ii) the primary nodes are static while the secondary nodes are mobile. With the proposed protocols for the two tiers, we show that the primary tier can achieve a per-node throughput scaling of $\lambda_p(n)=\Theta(1/\log n)$ in the above two scenarios. In the associated delay analysis for the first scenario, we show that the primary tier can achieve a delay scaling of $D_p(n)=\Theta(\sqrt{n^\beta\log n}\lambda_p(n))$ with $\lambda_p(n)=O(1/\log n)$. In the second scenario, with two mobility models considered for the secondary nodes: an i.i.d. mobility model and a random walk model, we show that the primary tier can achieve delay scaling laws of $\Theta(1)$ and $\Theta(1/S)$, respectively, where $S$ is the random walk step size. The throughput and delay scaling laws for the secondary tier are also established, which are the same as those for a stand-alone network.
0905.3428
Finding Anomalous Periodic Time Series: An Application to Catalogs of Periodic Variable Stars
cs.LG astro-ph.IM physics.data-an
Catalogs of periodic variable stars contain large numbers of periodic light-curves (photometric time series data from the astrophysics domain). Separating anomalous objects from well-known classes is an important step towards the discovery of new classes of astronomical objects. Most anomaly detection methods for time series data assume either a single continuous time series or a set of time series whose periods are aligned. Light-curve data precludes the use of these methods as the periods of any given pair of light-curves may be out of sync. One may use an existing anomaly detection method if, prior to similarity calculation, one performs the costly act of aligning two light-curves, an operation that scales poorly to massive data sets. This paper presents PCAD, an unsupervised anomaly detection method for large sets of unsynchronized periodic time-series data, that outputs a ranked list of both global and local anomalies. It calculates its anomaly score for each light-curve in relation to a set of centroids produced by a modified k-means clustering algorithm. Our method is able to scale to large data sets through the use of sampling. We validate our method on both light-curve data and other time series data sets. We demonstrate its effectiveness at finding known anomalies, and discuss the effect of sample size and number of centroids on our results. We compare our method to naive solutions and existing time series anomaly detection methods for unphased data, and show that PCAD's reported anomalies are comparable to or better than all other methods. Finally, astrophysicists on our team have verified that PCAD finds true anomalies that might be indicative of novel astrophysical phenomena.
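A minimal sketch of the phase-alignment issue the abstract describes (illustrative only; PCAD's actual centroid-based scoring is more involved): a distance that is invariant to circular shifts recognizes two identical but out-of-phase periodic light-curves as similar, at the cost of scanning all shifts.

```python
import numpy as np

def phase_invariant_dist(a: np.ndarray, b: np.ndarray) -> float:
    # Minimum Euclidean distance over all circular shifts of b, so two
    # light-curves that differ only in phase are recognized as similar.
    return min(float(np.linalg.norm(a - np.roll(b, s))) for s in range(len(b)))
```

This brute-force scan is O(n^2) per pair, which is exactly the alignment cost the paper's sampling strategy is designed to avoid at scale.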
0905.3434
Exploiting Opportunistic Multiuser Detection in Decentralized Multiuser MIMO Systems
cs.IT math.IT
This paper studies the design of a decentralized multiuser multi-antenna (MIMO) system for spectrum sharing over a fixed narrow band, where the coexisting users independently update their transmit covariance matrices for individual transmit-rate maximization in an iterative manner. This design problem has usually been investigated in the literature under the assumption that each user treats the co-channel interference from all the other users as additional (colored) noise at the receiver, i.e., the conventional single-user decoder (SUD) is applied. This paper proposes a new decoding method for the decentralized multiuser MIMO system, whereby each user opportunistically cancels the co-channel interference from some or all of the other users by applying multiuser detection techniques, hence termed opportunistic multiuser detection (OMD). This paper studies the optimal transmit covariance design for the users' iterative maximization of individual transmit rates with the proposed OMD, and demonstrates the resulting capacity gains in decentralized multiuser MIMO systems over the conventional SUD.
0905.3436
On Active Learning and Supervised Transmission of Spectrum Sharing Based Cognitive Radios by Exploiting Hidden Primary Radio Feedback
cs.IT math.IT
This paper studies wireless spectrum sharing between a pair of distributed primary radio (PR) and cognitive radio (CR) links. Assuming that the PR link adapts its transmit power and/or rate upon receiving an interference signal from the CR, and that such transmit adaptations are observable by the CR, these adaptations constitute a new form of feedback from the PR to the CR, referred to as hidden PR feedback, whereby the CR learns the PR's strategy for transmit adaptations without the need for a dedicated feedback channel from the PR. In this paper, we exploit the hidden PR feedback to design new learning and transmission schemes for spectrum-sharing-based CRs, namely active learning and supervised transmission. For active learning, the CR proactively sends a probing signal to interfere with the PR, and from the observed PR transmit adaptations the CR estimates the channel gain from its transmitter to the PR receiver, which is essential for the CR to control its interference to the PR during the subsequent data transmission. This paper proposes a new transmission protocol for the CR to implement active learning, along with solutions to various practical implementation issues, such as time synchronization, rate estimation granularity, power measurement noise, and channel variation. Furthermore, with the knowledge acquired from active learning, the CR designs a supervised data transmission by effectively controlling the interference powers both to and from the PR, so as to achieve the optimum performance tradeoffs for the PR and CR links. Numerical results are provided to evaluate the effectiveness of the proposed schemes for CRs under different system setups.
0905.3527
Quantum Annealing for Clustering
cond-mat.dis-nn cond-mat.stat-mech cs.LG quant-ph
This paper studies quantum annealing (QA) for clustering, which can be seen as an extension of simulated annealing (SA). We derive a QA algorithm for clustering and propose an annealing schedule, which is crucial in practice. Experiments show the proposed QA algorithm finds better clustering assignments than SA. Furthermore, QA is as easy as SA to implement.
0905.3528
Quantum Annealing for Variational Bayes Inference
cond-mat.dis-nn cond-mat.stat-mech cs.LG quant-ph
This paper presents studies on a deterministic annealing algorithm based on quantum annealing for variational Bayes (QAVB) inference, which can be seen as an extension of the simulated annealing for variational Bayes (SAVB) inference. QAVB is as easy as SAVB to implement. Experiments revealed QAVB finds a better local optimum than SAVB in terms of the variational free energy in latent Dirichlet allocation (LDA).
0905.3582
Profiling of a network behind an infectious disease outbreak
cs.AI q-bio.PE
Stochasticity and spatial heterogeneity have recently attracted great interest in the study of the spread of infectious diseases. The method presented here solves an inverse problem to discover the effectively decisive topology of a heterogeneous network and to reveal the transmission parameters that govern stochastic spreads over the network, from a dataset on an infectious disease outbreak in its early growth phase. Populations in a combination of epidemiological compartment models and a meta-population network model are described by stochastic differential equations. Probability density functions are derived from the equations and used for maximum-likelihood estimation of the topology and parameters. The method is tested with computationally synthesized datasets and the WHO dataset on the SARS outbreak.
0905.3587
Prediction, Retrodiction, and The Amount of Information Stored in the Present
cond-mat.stat-mech cond-mat.dis-nn cs.IT math.IT physics.data-an
We introduce an ambidextrous view of stochastic dynamical systems, comparing their forward-time and reverse-time representations and then integrating them into a single time-symmetric representation. The perspective is useful theoretically, computationally, and conceptually. Mathematically, we prove that the excess entropy--a familiar measure of organization in complex systems--is the mutual information not only between the past and future, but also between the predictive and retrodictive causal states. Practically, we exploit the connection between prediction and retrodiction to directly calculate the excess entropy. Conceptually, these lead one to discover new system invariants for stochastic dynamical systems: crypticity (information accessibility) and causal irreversibility. Ultimately, we introduce a time-symmetric representation that unifies all these quantities, compressing the two directional representations into one. The resulting compression offers a new conception of the amount of information stored in the present.
0905.3602
Level Crossing Rates of Interference in Cognitive Radio Networks
cs.IT math.IT
The future deployment of cognitive radios is critically dependent on the fact that the incumbent primary user system must remain as oblivious as possible to their presence. This in turn heavily relies on the fluctuations of the interfering cognitive radio signals. In this letter we compute the level crossing rates of the cumulative interference created by the cognitive radios. We derive analytical formulae for the level crossing rates in Rayleigh and Rician fast fading conditions. We approximate Rayleigh and Rician level crossing rates using fluctuation rates of gamma and scaled noncentral $\chi^2$ processes respectively. The analytical results and the approximations used in their derivations are verified by Monte Carlo simulations and the analysis is applied to a particular CR allocation strategy.
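For intuition, a level crossing rate can be estimated empirically by counting upward crossings of a threshold per unit time in a sampled process. A minimal sketch (illustrative only; the letter derives closed-form rates for Rayleigh and Rician fading rather than this empirical estimator):

```python
import numpy as np

def level_crossing_rate(x: np.ndarray, level: float, dt: float) -> float:
    # Estimate the rate of upward crossings of `level` per unit time
    # from a sampled interference process x with sample spacing dt.
    up = np.count_nonzero((x[:-1] < level) & (x[1:] >= level))
    return up / (len(x) * dt)
```

Running it on a 5 Hz sinusoid with threshold 0.5 recovers one upcrossing per cycle, i.e. a rate of 5 per second.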
0905.3640
Coevolutionary Genetic Algorithms for Establishing Nash Equilibrium in Symmetric Cournot Games
cs.GT cs.LG
We use co-evolutionary genetic algorithms to model the players' learning process in several Cournot models and evaluate them in terms of their convergence to the Nash equilibrium. The "social-learning" versions of the two co-evolutionary algorithms we introduce establish Nash equilibrium in those models, in contrast to the "individual-learning" versions, which, as we show, do not imply convergence of the players' strategies to the Nash outcome. When players use "canonical co-evolutionary genetic algorithms" as learning algorithms, the process of the game is an ergodic Markov chain, and we therefore analyze the simulation results using both the relevant methodology and more general statistical tests. We find that in the "social" case, states leading to NE play are highly frequent at the stationary distribution of the chain, whereas in the "individual-learning" case NE is not reached at all in our simulations; we find that the expected Hamming distance of the states at the limiting distribution from the "NE state" is significantly smaller in the "social" than in the "individual-learning" case; we estimate the expected time the "social" algorithms need to reach the "NE state" and verify their robustness; and finally we show that a large fraction of the games played are indeed at the Nash equilibrium.
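For reference, the Nash benchmark such learning dynamics are evaluated against has a closed form in the symmetric linear Cournot game: with (hypothetical) inverse demand p = a - bQ and constant marginal cost c, each of the n firms produces q* = (a - c)/(b(n+1)) in equilibrium. A sketch, together with plain two-firm best-response iteration converging to it (the paper itself uses co-evolutionary genetic algorithms, not this iteration):

```python
def cournot_ne_quantity(a: float, b: float, c: float, n: int) -> float:
    # Per-firm Nash equilibrium quantity in a symmetric linear Cournot
    # game with inverse demand p = a - b*Q and marginal cost c.
    return (a - c) / (b * (n + 1))

def best_response_dynamics(a: float, b: float, c: float, steps: int = 60):
    # Two-firm simultaneous best-response iteration:
    # q_i <- argmax_q (a - b*(q + q_j) - c) * q = (a - c - b*q_j) / (2b)
    q1 = q2 = 0.0
    for _ in range(steps):
        q1, q2 = (a - c - b * q2) / (2 * b), (a - c - b * q1) / (2 * b)
    return q1, q2
```

For two firms the iteration contracts the error by a factor of 1/2 per step, so it converges geometrically to q*.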
0905.3689
Optimized Training and Feedback for MIMO Downlink Channels
cs.IT math.IT
We consider a MIMO fading broadcast channel where channel state information is acquired at user terminals via downlink training and channel feedback is used to provide transmitter channel state information (CSIT) to the base station. The feedback channel (the corresponding uplink) is modeled as an AWGN channel, orthogonal across users. The total bandwidth consumed is the sum of the bandwidth/resources used for downlink training, channel feedback, and data transmission. Assuming that the channel follows a block fading model and that zeroforcing beamforming is used, we optimize the net achievable rate for unquantized (analog) and quantized (digital) channel feedback. The optimal number of downlink training pilots is seen to be essentially the same for both feedback techniques, but digital feedback is shown to provide a larger net rate than analog feedback.
0905.3720
Where are the really hard manipulation problems? The phase transition in manipulating the veto rule
cs.AI cs.CC
Voting is a simple mechanism to aggregate the preferences of agents. Many voting rules have been shown to be NP-hard to manipulate. However, a number of recent theoretical results suggest that this complexity may only arise in the worst case, since manipulation is often easy in practice. In this paper, we show that empirical studies are useful in improving our understanding of this issue. We demonstrate that there is a smooth transition in the probability that a coalition can elect a desired candidate using the veto rule as the size of the manipulating coalition increases. We show that a rescaled probability curve displays a simple and universal form independent of the size of the problem. We argue that manipulation of the veto rule is asymptotically easy for many independent and identically distributed votes even when the coalition of manipulators is critical in size. Based on this argument, we identify a situation in which manipulation is computationally hard. This is when votes are highly correlated and the election is "hung". We show, however, that even a single uncorrelated voter is enough to make manipulation easy again.
0905.3733
Trapping Set Enumerators for Repeat Multiple Accumulate Code Ensembles
cs.IT math.IT
The serial concatenation of a repetition code with two or more accumulators has the advantage of a simple encoder structure. Furthermore, the resulting ensemble is asymptotically good and exhibits minimum distance growing linearly with block length. However, in practice these codes cannot be decoded by a maximum likelihood decoder, and iterative decoding schemes must be employed. For low-density parity-check codes, the notion of trapping sets has been introduced to estimate the performance of these codes under iterative message passing decoding. In this paper, we present a closed form finite length ensemble trapping set enumerator for repeat multiple accumulate codes by creating a trellis representation of trapping sets. We also obtain the asymptotic expressions when the block length tends to infinity and evaluate them numerically.
0905.3755
Decompositions of All Different, Global Cardinality and Related Constraints
cs.AI
We show that some common and important global constraints like ALL-DIFFERENT and GCC can be decomposed into simple arithmetic constraints on which we achieve bound or range consistency, and in some cases even greater pruning. These decompositions can be easily added to new solvers. They also provide other constraints with access to the state of the propagator by sharing of variables. Such sharing can be used to improve propagation between constraints. We report experiments with our decomposition in a pseudo-Boolean solver.
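One ingredient behind such decompositions can be sketched concretely: for ALL-DIFFERENT over interval domains, bounds consistency hinges on the Hall-interval condition that no value interval [l, u] may fully contain the domains of more than u - l + 1 variables. A minimal feasibility check (illustrative only; the paper's actual decomposition introduces auxiliary arithmetic constraints rather than this direct test):

```python
def alldiff_bounds_ok(domains):
    # Hall-interval condition behind bounds consistency for ALL-DIFFERENT:
    # no value interval [l, u] may fully contain the domains of more than
    # u - l + 1 variables.  Domains are (lo, hi) interval pairs.
    bounds = sorted({v for lo, hi in domains for v in (lo, hi)})
    for l in bounds:
        for u in bounds:
            if u >= l:
                inside = sum(1 for lo, hi in domains if l <= lo and hi <= u)
                if inside > u - l + 1:
                    return False
    return True
```

For instance, three variables confined to {1, 2} cannot all take different values, and the check on the interval [1, 2] detects exactly that.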
0905.3757
Circuit Complexity and Decompositions of Global Constraints
cs.AI cs.CC
We show that tools from circuit complexity can be used to study decompositions of global constraints. In particular, we study decompositions of global constraints into conjunctive normal form with the property that unit propagation on the decomposition enforces the same level of consistency as a specialized propagation algorithm. We prove that a constraint propagator has a polynomial size decomposition if and only if it can be computed by a polynomial size monotone Boolean circuit. Lower bounds on the size of monotone Boolean circuits thus translate to lower bounds on the size of decompositions of global constraints. For instance, we prove that there is no polynomial sized decomposition of the domain consistency propagator for the ALLDIFFERENT constraint.
0905.3763
Scenario-based Stochastic Constraint Programming
cs.AI
To model combinatorial decision problems involving uncertainty and probability, we extend the stochastic constraint programming framework proposed in [Walsh, 2002] along a number of important dimensions (e.g. to multiple chance constraints and to a range of new objectives). We also provide a new (but equivalent) semantics based on scenarios. Using this semantics, we can compile stochastic constraint programs down into conventional (nonstochastic) constraint programs. This allows us to exploit the full power of existing constraint solvers. We have implemented this framework for decision making under uncertainty in stochastic OPL, a language which is based on the OPL constraint modelling language [Hentenryck et al., 1999]. To illustrate the potential of this framework, we model a wide range of problems in areas as diverse as finance, agriculture and production.
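The scenario semantics can be sketched directly: a chance constraint holds if the probability mass of the scenarios in which the underlying constraint is satisfied reaches the required threshold, which is what a compilation into a conventional constraint program must encode. A minimal sketch (the function names and the example data below are hypothetical):

```python
def chance_constraint_holds(decision, scenarios, constraint, theta):
    # Scenario semantics of a chance constraint: the probability mass of
    # scenarios in which `constraint` holds must reach threshold theta.
    # `scenarios` is a list of (outcome, probability) pairs.
    mass = sum(p for outcome, p in scenarios if constraint(decision, outcome))
    return mass >= theta
```

In a compiled (non-stochastic) model, each scenario would get a reified copy of the constraint and the threshold becomes an ordinary linear constraint over the reification variables.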
0905.3766
Reasoning about soft constraints and conditional preferences: complexity results and approximation techniques
cs.AI
Many real-life optimization problems contain both hard and soft constraints, as well as qualitative conditional preferences. However, there is no single formalism to specify all three kinds of information. We therefore propose a framework, based on both CP-nets and soft constraints, that handles hard and soft constraints as well as conditional preferences efficiently and uniformly. We study the complexity of testing the consistency of preference statements, and show how soft constraints can faithfully approximate the semantics of conditional preference statements whilst improving the computational complexity.
0905.3769
Multiset Ordering Constraints
cs.AI
We identify a new and important global (or non-binary) constraint. This constraint ensures that the values taken by two vectors of variables, when viewed as multisets, are ordered. This constraint is useful for a number of different applications including breaking symmetry and fuzzy constraint satisfaction. We propose and implement an efficient linear time algorithm for enforcing generalised arc consistency on such a multiset ordering constraint. Experimental results on several problem domains show considerable promise.
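The ordering itself is easy to state operationally: view each vector as a multiset by sorting it in non-increasing order, then compare the sorted vectors lexicographically. A minimal sketch of this test (the paper's contribution is the linear-time generalised arc consistency propagator, not this check):

```python
def multiset_leq(x, y):
    # Multiset ordering: sort both vectors in non-increasing order and
    # compare lexicographically; equal multisets compare as ordered.
    return sorted(x, reverse=True) <= sorted(y, reverse=True)
```

Note the ordering ignores the positions of values, which is what makes it useful for breaking symmetries between interchangeable rows or vectors.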
0905.3771
Memory Retrieved from Single Neurons
cs.NE q-bio.NC
The paper examines the problem of accessing a vector memory from a single neuron in a Hebbian neural network. It begins with a review of the author's earlier method, which differs from the Hopfield model in that it recruits neighboring neurons by spreading activity, making it possible for single neurons or groups of neurons to become associated with vector memories. Some open issues associated with this approach are identified. It is suggested that fragments that generate stored memories could be associated with single neurons through local spreading activity.
0905.3830
Tag Clouds for Displaying Semantics: The Case of Filmscripts
cs.AI
We relate tag clouds to other forms of visualization, including planar or reduced dimensionality mapping, and Kohonen self-organizing maps. Using a modified tag cloud visualization, we incorporate other information into it, including text sequence and most pertinent words. Our notion of word pertinence goes beyond just word frequency and instead takes a word in a mathematical sense as located at the average of all of its pairwise relationships. We capture semantics through context, taken as all pairwise relationships. Our domain of application is that of filmscript analysis. The analysis of filmscripts, always important for cinema, is experiencing a major gain in importance in the context of television. Our objective in this work is to visualize the semantics of filmscript, and beyond filmscript any other partially structured, time-ordered, sequence of text segments. In particular we develop an innovative approach to plot characterization.
0905.3858
Multicasting in Large Wireless Networks: Bounds on the Minimum Energy per Bit
cs.IT math.IT
We consider scaling laws for maximal energy efficiency of communicating a message to all the nodes in a wireless network, as the number of nodes in the network becomes large. Two cases of large wireless networks are studied -- dense random networks and constant density (extended) random networks. In addition, we also study finite size regular networks in order to understand how regularity in node placement affects energy consumption. We first establish an information-theoretic lower bound on the minimum energy per bit for multicasting in arbitrary wireless networks when the channel state information is not available at the transmitters. Upper bounds are obtained by constructing a simple flooding scheme that requires no information at the receivers about the channel states or the locations and identities of the nodes. The gap between the upper and lower bounds is only a constant factor for dense random networks and regular networks, and differs by a poly-logarithmic factor for extended random networks. Furthermore, we show that the proposed upper and lower bounds for random networks hold almost surely in the node locations as the number of nodes approaches infinity.
0905.3885
Swap Bribery
cs.GT cs.AI
In voting theory, bribery is a form of manipulative behavior in which an external actor (the briber) offers to pay the voters to change their votes in order to get her preferred candidate elected. We investigate a model of bribery where the price of each vote depends on the amount of change that the voter is asked to implement. Specifically, in our model the briber can change a voter's preference list by paying for a sequence of swaps of consecutive candidates. Each swap may have a different price; the price of a bribery is the sum of the prices of all swaps that it involves. We prove complexity results for this model, which we call swap bribery, for a broad class of election systems, including variants of approval and k-approval, Borda, Copeland, and maximin.
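A concrete special case: with unit prices for every swap, the cheapest swap bribery transforming one preference list into another costs exactly the number of inversions between the two orders (the Kendall tau distance), since each swap of consecutive candidates removes at most one inversion. A minimal sketch:

```python
def unit_swap_bribery_cost(vote, target):
    # Minimum number of swaps of consecutive candidates turning `vote`
    # into `target`, i.e. the bribery cost under unit swap prices.
    rank = {c: i for i, c in enumerate(target)}
    seq = [rank[c] for c in vote]
    # Count inversions of the rank sequence (Kendall tau distance).
    return sum(
        1
        for i in range(len(seq))
        for j in range(i + 1, len(seq))
        if seq[i] > seq[j]
    )
```

With non-uniform prices the problem changes character, which is where the hardness results of the paper come in.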
0905.3934
Cooperative encoding for secrecy in interference channels
cs.IT math.IT
This paper investigates the fundamental performance limits of the two-user interference channel in the presence of an external eavesdropper. In this setting, we construct an inner bound, to the secrecy capacity region, based on the idea of cooperative encoding in which the two users cooperatively design their randomized codebooks and jointly optimize their channel prefixing distributions. Our achievability scheme also utilizes message-splitting in order to allow for partial decoding of the interference at the non-intended receiver. Outer bounds are then derived and used to establish the optimality of the proposed scheme in certain cases. In the Gaussian case, the previously proposed cooperative jamming and noise-forwarding techniques are shown to be special cases of our proposed approach. Overall, our results provide structural insights on how the interference can be exploited to increase the secrecy capacity of wireless networks.
0905.3964
A New Solution to the Relative Orientation Problem using only 3 Points and the Vertical Direction
cs.CV
This paper presents a new method to recover the relative pose between two images using three points and knowledge of the vertical direction. The vertical direction can be determined in two ways: (1) using a direct physical measurement such as an IMU (inertial measurement unit); (2) using the vertical vanishing point. This knowledge of the vertical direction determines two of the three parameters of the relative rotation, so that only three homologous points are required to orient a pair of images. Rewriting the coplanarity equations leads to a simpler solution. The remaining unknowns are resolved by an algebraic method using Gr\"obner bases. The elements necessary to build a specific algebraic solver are given in this paper, allowing for a real-time implementation. Results on real and synthetic data show the efficiency of this method.
0905.3967
Optimal byzantine resilient convergence in oblivious robot networks
cs.DC cs.RO
Given a set of robots with arbitrary initial locations and no agreement on a global coordinate system, convergence requires that all robots asymptotically approach the same, but a priori unknown, location. Robots are oblivious -- they do not recall past computations -- and are allowed to move in a one-dimensional space. Additionally, robots cannot communicate directly; instead they obtain system-related information only via visual sensors. We draw a connection between the convergence problem in robot networks and the distributed \emph{approximate agreement} problem (which requires correct processes to decide, for some constant $\epsilon$, values at most $\epsilon$ apart and within the range of the initially proposed values). Surprisingly, even though the specifications are similar, implementing convergence in robot networks requires specific assumptions about synchrony and Byzantine resilience. In more detail, we prove necessary and sufficient conditions for the convergence of mobile robots despite a subset of them being Byzantine (i.e., exhibiting arbitrary behavior). Additionally, we propose a deterministic convergence algorithm for robot networks and analyze its correctness and complexity in various synchrony settings. The proposed algorithm tolerates f Byzantine robots in (2f+1)-sized robot networks under full synchrony and in (3f+1)-sized networks under semi-synchrony. These bounds are optimal for the class of cautious algorithms, which guarantee that correct robots always move inside the range of positions of the correct robots.
0905.4022
Transfer Learning Using Feature Selection
cs.LG
We present three related ways of using Transfer Learning to improve feature selection. The three methods address different problems, and hence share different kinds of information between tasks or feature classes, but all three are based on the information theoretic Minimum Description Length (MDL) principle and share the same underlying Bayesian interpretation. The first method, MIC, applies when predictive models are to be built simultaneously for multiple tasks (``simultaneous transfer'') that share the same set of features. MIC allows each feature to be added to none, some, or all of the task models and is most beneficial for selecting a small set of predictive features from a large pool of features, as is common in genomic and biological datasets. Our second method, TPC (Three Part Coding), uses a similar methodology for the case when the features can be divided into feature classes. Our third method, Transfer-TPC, addresses the ``sequential transfer'' problem in which the task to which we want to transfer knowledge may not be known in advance and may have different amounts of data than the other tasks. Transfer-TPC is most beneficial when we want to transfer knowledge between tasks which have unequal amounts of labeled data, for example the data for disambiguating the senses of different verbs. We demonstrate the effectiveness of these approaches with experimental results on real world data pertaining to genomics and to Word Sense Disambiguation (WSD).
0905.4023
DMT Optimality of LR-Aided Linear Decoders for a General Class of Channels, Lattice Designs, and System Models
cs.IT math.IT
The work identifies the first general, explicit, and non-random MIMO encoder-decoder structures that guarantee optimality with respect to the diversity-multiplexing tradeoff (DMT), without employing a computationally expensive maximum-likelihood (ML) receiver. Specifically, the work establishes the DMT optimality of a class of regularized lattice decoders, and more importantly the DMT optimality of their lattice-reduction (LR)-aided linear counterparts. The results hold for all channel statistics, for all channel dimensions, and most interestingly, irrespective of the particular lattice-code applied. As a special case, it is established that the LLL-based LR-aided linear implementation of the MMSE-GDFE lattice decoder facilitates DMT optimal decoding of any lattice code at a worst-case complexity that grows at most linearly in the data rate. This represents a fundamental reduction in the decoding complexity when compared to ML decoding whose complexity is generally exponential in rate. The generality of the results makes them applicable to a plethora of pertinent communication scenarios such as quasi-static MIMO, MIMO-OFDM, ISI, cooperative-relaying, and MIMO-ARQ channels, in all of which the DMT optimality of the LR-aided linear decoder is guaranteed. The adopted approach yields insight, and motivates further study, into joint transceiver designs with an improved SNR gap to ML decoding.
0905.4039
Normalized Web Distance and Word Similarity
cs.CL cs.IR
There is a great deal of work in cognitive psychology, linguistics, and computer science, about using word (or phrase) frequencies in context in text corpora to develop measures for word similarity or word association, going back to at least the 1960s. The goal of this chapter is to introduce the normalized web distance (NWD) method to determine similarity between words and phrases. It is a general way to tap the amorphous low-grade knowledge available for free on the Internet, typed in by local users aiming at personal gratification of diverse objectives, and yet globally achieving what is effectively the largest semantic electronic database in the world. Moreover, this database is available to all by using any search engine that can return aggregate page-count estimates for a large range of search-queries. In the paper introducing the NWD it was called `normalized Google distance (NGD),' but since Google doesn't allow computer searches anymore, we opt for the more neutral and descriptive NWD.
0905.4057
Coalitional Game Theory for Communication Networks: A Tutorial
cs.IT cs.GT math.IT
Game theoretical techniques have recently become prevalent in many engineering applications, notably in communications. With the emergence of cooperation as a new communication paradigm, and the need for self-organizing, decentralized, and autonomic networks, it has become imperative to seek suitable game theoretical tools that allow one to analyze and study the behavior and interactions of the nodes in future communication networks. In this context, this tutorial introduces the concepts of cooperative game theory, namely coalitional games, and their potential applications in communication and wireless networks. For this purpose, we classify coalitional games into three categories: Canonical coalitional games, coalition formation games, and coalitional graph games. This new classification represents an application-oriented approach for understanding and analyzing coalitional games. For each class of coalitional games, we present the fundamental components, introduce the key properties, mathematical techniques, and solution concepts, and describe the methodologies for applying these games in several applications drawn from the state-of-the-art research in communications. In a nutshell, this article constitutes a unified treatment of coalitional game theory tailored to the demands of communications and network engineers.
0905.4087
Structural Solutions for Cross-Layer Optimization of Wireless Multimedia Transmission
cs.MM cs.IT math.IT
In this paper, we propose a systematic solution to the problem of cross-layer optimization for delay-sensitive media transmission over time-varying wireless channels as well as investigate the structures and properties of this solution, such that it can be easily implemented in various multimedia systems and applications. Specifically, we formulate this problem as a finite-horizon Markov decision process (MDP) by explicitly considering the users' heterogeneous multimedia traffic characteristics (e.g. delay deadlines, distortion impacts, dependencies, etc.), time-varying network conditions as well as, importantly, their ability to adapt their cross-layer transmission strategies in response to these dynamics. Based on the heterogeneous characteristics of the media packets, we are able to express the transmission priorities between packets as a new type of directed acyclic graph (DAG). This DAG provides the necessary structure for determining the optimal cross-layer actions in each time slot: the root packet in the DAG will always be selected for transmission since it has the highest positive marginal utility; and the complexity of the proposed cross-layer solution is demonstrated to increase linearly w.r.t. the number of disconnected packet pairs in the DAG and exponentially w.r.t. the number of packets on which the current packet depends. The simulation results demonstrate that the proposed solution significantly outperforms existing state-of-the-art cross-layer solutions. Moreover, we show that our solution provides the upper bound performance for the cross-layer optimization solutions with delayed feedback such as the well-known RaDiO framework.
0905.4091
Hybrid ARQ in Multiple-Antenna Slow Fading Channels: Performance Limits and Optimal Linear Dispersion Code Design
cs.IT math.IT
This paper focuses on studying the fundamental performance limits and linear dispersion code design for the MIMO-ARQ slow fading channel. Optimal average rate of well-known HARQ protocols is analyzed. The optimal design of space-time coding for the MIMO-ARQ channel is discussed. Information-theoretic measures are used to optimize the rate assignment and derive the optimum design criterion, which is then used to evaluate the optimality of existing space-time codes. A different design criterion, which is obtained from the error probability analysis of space-time coded MIMO-HARQ, is presented. Examples are studied to reveal the gain of ARQ feedback in space-time coded MIMO systems.
0905.4138
Faster estimation of the correlation fractal dimension using box-counting
cs.DB cs.DS
Fractal dimension is widely adopted in spatial databases and data mining, among other fields, as a measure of dataset skewness. State-of-the-art algorithms for estimating the fractal dimension exhibit linear runtime complexity, whether based on box-counting or on approximation schemes. In this paper, we revisit a correlation fractal dimension estimation algorithm that redundantly rescans the dataset and, extending that work, we propose another linear, yet faster and equally accurate, method that completes in a single pass.
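The single-pass idea behind box-counting estimation of the correlation dimension can be illustrated with a minimal sketch. This is not the paper's algorithm; the function name, the choice of grid sizes, and the use of a least-squares log-log fit are illustrative assumptions.

```python
import math
from collections import Counter

def correlation_dimension(points, radii):
    """Estimate the correlation fractal dimension D2 of a 2-D point set.
    One scan of the data updates occupancy counters for every grid
    resolution simultaneously; D2 is then the slope of log S2(r) vs log r,
    where S2(r) is the sum of squared cell-occupancy fractions."""
    counters = {r: Counter() for r in radii}
    for x, y in points:                       # single pass over the dataset
        for r in radii:
            counters[r][(int(x // r), int(y // r))] += 1
    n = len(points)
    logs = [(math.log(r),
             math.log(sum((c / n) ** 2 for c in counters[r].values())))
            for r in radii]
    # least-squares slope of log S2 versus log r
    mx = sum(lx for lx, _ in logs) / len(logs)
    my = sum(ly for _, ly in logs) / len(logs)
    num = sum((lx - mx) * (ly - my) for lx, ly in logs)
    den = sum((lx - mx) ** 2 for lx, _ in logs)
    return num / den
```

For a uniform 2-D point cloud the estimate approaches 2; for points on a line it approaches 1.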
0905.4160
Codes over Quaternion Integers with Respect to Lipschitz Metric
cs.IT math.IT
I want to withdraw this paper.
0905.4162
Google matrix, dynamical attractors and Ulam networks
cs.IR
We study the properties of the Google matrix generated by a coarse-grained Perron-Frobenius operator of the Chirikov typical map with dissipation. The finite size matrix approximant of this operator is constructed by the Ulam method. This method applied to the simple dynamical model creates the directed Ulam networks with approximate scale-free scaling and characteristics being rather similar to those of the World Wide Web. The simple dynamical attractors play here the role of popular web sites with a strong concentration of PageRank. A variation of the Google parameter $\alpha$ or other parameters of the dynamical map can drive the PageRank of the Google matrix to a delocalized phase with a strange attractor where the Google search becomes inefficient.
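The Ulam coarse-graining step described above can be sketched for a generic 1-D map. The doubling map below is a toy stand-in for the dissipative Chirikov typical map studied in the paper; the function names, cell count, and sampling scheme are illustrative assumptions.

```python
import random

def ulam_matrix(f, n_cells, samples_per_cell=100, seed=0):
    """Ulam method: coarse-grain a 1-D map f on [0,1) into an
    n_cells x n_cells row-stochastic transition matrix by sampling
    random points in each cell and recording which cell they map to."""
    rng = random.Random(seed)
    M = [[0.0] * n_cells for _ in range(n_cells)]
    for i in range(n_cells):
        for _ in range(samples_per_cell):
            x = (i + rng.random()) / n_cells          # random point in cell i
            j = min(int(f(x) % 1.0 * n_cells), n_cells - 1)
            M[i][j] += 1.0 / samples_per_cell
    return M

# Toy example: the chaotic doubling map x -> 2x mod 1
matrix = ulam_matrix(lambda x: 2.0 * x, n_cells=8)
```

The resulting row-stochastic matrix plays the role of the Google matrix's underlying link structure; a damping factor alpha can then be applied to it in the usual PageRank fashion.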
0905.4163
Cyclic Codes over Some Finite Rings
cs.IT math.CO math.IT
In this paper cyclic codes are established with respect to the Mannheim metric over some finite rings by using Gaussian integers and the decoding algorithm for these codes is given.
0905.4164
Iterative Decoding on Multiple Tanner Graphs Using Random Edge Local Complementation
cs.IT math.IT
In this paper, we propose to enhance the performance of the sum-product algorithm (SPA) by interleaving SPA iterations with a random local graph update rule. This rule is known as edge local complementation (ELC), and has the effect of modifying the Tanner graph while preserving the code. We have previously shown how the ELC operation can be used to implement an iterative permutation group decoder (SPA-PD)--one of the most successful iterative soft-decision decoding strategies at small blocklengths. In this work, we exploit the fact that ELC can also give structurally distinct parity-check matrices for the same code. Our aim is to describe a simple iterative decoder, running SPA-PD on distinct structures, based entirely on random usage of the ELC operation. This is called SPA-ELC, and we focus on small blocklength codes with strong algebraic structure. In particular, we look at the extended Golay code and two extended quadratic residue codes. Both error rate performance and average decoding complexity, measured by the average total number of messages required in the decoding, significantly outperform those of the standard SPA, and compare well with SPA-PD. However, in contrast to SPA-PD, which requires a global action on the Tanner graph, we obtain a performance improvement via local action alone. Such localized algorithms are of mathematical interest in their own right, but are also suited to parallel/distributed realizations.
0905.4165
Cyclic Codes over Some Finite Quaternion Integer Rings
cs.IT math.IT
In this paper, cyclic codes are established over some finite quaternion integer rings with respect to the quaternion Mannheim distance, and the decoding algorithm for these codes is given.
0905.4201
The Usefulness of Multilevel Hash Tables with Multiple Hash Functions in Large Databases
cs.DS cs.DB
In this work, we attempt to select three good hash functions that uniformly distribute hash values, permute their internal states, and allow the input bits to generate different output bits. These functions are used at different levels of hash tables coded in the Java programming language, and a sizable number of data records serves as primary data for performance testing. The results show that two-level hash tables with three different hash functions give superior performance over a one-level hash table with two hash functions, or a zero-level hash table with one function, in terms of reducing key conflicts and enabling quick lookup of a particular element. The results help reduce the complexity of the join operation in a query language from O(n^2) to O(1) by placing larger query results, if any, in multilevel hash tables with multiple hash functions, generating shorter query results.
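A two-level table of the kind described can be sketched as follows. The paper's implementation is in Java and its specific hash functions are not named here; this Python sketch uses the well-known djb2 and FNV-1a hashes as stand-ins, with a short chain absorbing any residual collisions.

```python
def h_djb2(key, size):
    """djb2 string hash, used here as the level-1 index."""
    h = 5381
    for ch in str(key):
        h = ((h << 5) + h + ord(ch)) & 0xFFFFFFFF
    return h % size

def h_fnv(key, size):
    """FNV-1a string hash, used here as the level-2 index."""
    h = 2166136261
    for ch in str(key):
        h = ((h ^ ord(ch)) * 16777619) & 0xFFFFFFFF
    return h % size

class TwoLevelHashTable:
    """A first-level slot chosen by h_djb2 holds a small second-level
    table indexed by h_fnv; residual collisions fall into a short chain."""
    def __init__(self, size1=64, size2=8):
        self.size1, self.size2 = size1, size2
        self.slots = [[[] for _ in range(size2)] for _ in range(size1)]

    def _bucket(self, key):
        return self.slots[h_djb2(key, self.size1)][h_fnv(key, self.size2)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for pair in bucket:
            if pair[0] == key:
                pair[1] = value      # overwrite existing key
                return
        bucket.append([key, value])

    def get(self, key):
        for k, v in self._bucket(key):
            if k == key:
                return v
        return None
```

Because the two hash functions are independent, keys that collide at level 1 are spread apart at level 2, which is what shortens the chains relative to a single-level table.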
0905.4303
On Block Noncoherent Communication with Low-Precision Phase Quantization at the Receiver
cs.IT math.IT
We consider communication over the block noncoherent AWGN channel with low-precision Analog-to-Digital Converters (ADCs) at the receiver. For standard uniform Phase Shift Keying (PSK) modulation, we investigate the performance of a receiver architecture that quantizes only the phase of the received signal; this has the advantage of being implementable without automatic gain control, using multiple 1-bit ADCs preceded by analog multipliers. We study the structure of the transition density of the resulting channel model. Several results, based on the symmetry inherent in the channel, are provided to characterize this transition density. A low complexity procedure for computing the channel capacity is obtained using these results. Numerical capacity computations for QPSK show that 8-bin phase quantization of the received signal recovers more than 80-85 % of the capacity attained with unquantized observations, while 12-bin phase quantization recovers above 90-95 % of the unquantized capacity. Dithering the constellation is shown to improve the performance in the face of drastic quantization.
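The phase-only front end described above reduces to a very simple mapping from a complex sample to a sector index. A minimal sketch (the bin-indexing convention is my own, not the paper's):

```python
import cmath
import math

def phase_bin(sample, n_bins=8):
    """Quantize only the phase of a complex received sample into one of
    n_bins uniform sectors; the amplitude is discarded, which is what
    makes the receiver implementable without automatic gain control."""
    phase = cmath.phase(sample) % (2 * math.pi)   # map phase to [0, 2*pi)
    return int(phase // (2 * math.pi / n_bins)) % n_bins
```

With n_bins = 8 this corresponds to the 8-bin quantizer whose capacity, per the abstract, recovers more than 80-85% of the unquantized QPSK capacity.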
0905.4341
Characterizing predictable classes of processes
cs.AI cs.IT math.IT math.PR
The problem is sequence prediction in the following setting. A sequence $x_1,...,x_n,...$ of discrete-valued observations is generated according to some unknown probabilistic law (measure) $\mu$. After observing each outcome, it is required to give the conditional probabilities of the next observation. The measure $\mu$ belongs to an arbitrary class $\C$ of stochastic processes. We are interested in predictors $\rho$ whose conditional probabilities converge to the "true" $\mu$-conditional probabilities if any $\mu\in\C$ is chosen to generate the data. We show that if such a predictor exists, then a predictor can also be obtained as a convex combination of countably many elements of $\C$. In other words, it can be obtained as a Bayesian predictor whose prior is concentrated on a countable set. This result is established for two very different measures of prediction performance, one of which is very strong, namely, total variation, and the other is very weak, namely, prediction in expected average Kullback-Leibler divergence.
0905.4369
Automating Quantified Multimodal Logics in Simple Type Theory -- A Case Study
cs.AI cs.LO
In a case study, we investigate whether off-the-shelf higher-order theorem provers and model generators can be employed to automate reasoning in and about quantified multimodal logics. In our experiments we exploit the new TPTP infrastructure for classical higher-order logic.
0905.4378
The Cramer-Rao Bound for Sparse Estimation
math.ST cs.IT math.IT stat.TH
The goal of this paper is to characterize the best achievable performance for the problem of estimating an unknown parameter having a sparse representation. Specifically, we consider the setting in which a sparsely representable deterministic parameter vector is to be estimated from measurements corrupted by Gaussian noise, and derive a lower bound on the mean-squared error (MSE) achievable in this setting. To this end, an appropriate definition of bias in the sparse setting is developed, and the constrained Cramer-Rao bound (CRB) is obtained. This bound is shown to equal the CRB of an estimator with knowledge of the support set, for almost all feasible parameter values. Consequently, in the unbiased case, our bound is identical to the MSE of the oracle estimator. Combined with the fact that the CRB is achieved at high signal-to-noise ratios by the maximum likelihood technique, our result provides a new interpretation for the common practice of using the oracle estimator as a gold standard against which practical approaches are compared.
0905.4387
Information Modeling for a Dynamic Representation of an Emergency Situation
cs.AI cs.MA
In this paper we propose an approach to building a decision support system that can help emergency planners and responders detect and manage emergency situations. The internal mechanism of the system is independent of the treated application; therefore, we believe the system may be easily used for, or adapted to, different case studies. We focus here on a first step in the decision-support process, which concerns the modeling of information derived from the perceived environment and its dynamic representation using a multiagent system. This modeling was applied to the RoboCupRescue Simulation System. An implementation and some results are presented here.
0905.4476
Beacon-Assisted Spectrum Access with Cooperative Cognitive Transmitter and Receiver
cs.IT math.IT
Spectrum access is an important function of cognitive radios for detecting and utilizing spectrum holes without interfering with the legacy systems. In this paper we propose novel cooperative communication models and show how deploying such cooperations between a pair of secondary transmitter and receiver assists them in identifying spectrum opportunities more reliably. These cooperations are facilitated by dynamically and opportunistically assigning one of the secondary users as a relay to assist the other one, which results in more efficient spectrum hole detection. Also, we investigate the impact of erroneous detection of spectrum holes, and thereof missing communication opportunities, on the capacity of the secondary channel. The capacity of the secondary users with interference-avoiding spectrum access is affected by 1) how effectively the availability of vacant spectrum is sensed by the secondary transmitter-receiver pair, and 2) how correlated the perceptions of the secondary transmitter-receiver pair are about network spectral activity. We show that both factors are improved by using the proposed cooperative protocols. One of the proposed protocols requires explicit information exchange in the network. Such information exchange in practice is prone to wireless channel errors (i.e., is imperfect) and incurs a bandwidth loss. We analyze the effects of such imperfect information exchange on the capacity, as well as the effect of the bandwidth cost on the achievable throughput. The protocols are also extended to multiuser secondary networks.
0905.4482
Topics in Compressed Sensing
math.NA cs.IT math.IT
Compressed sensing has a wide range of applications that include error correction, imaging, radar and many more. Given a sparse signal in a high dimensional space, one wishes to reconstruct that signal accurately and efficiently from a number of linear measurements much less than its actual dimension. Although in theory it is clear that this is possible, the difficulty lies in the construction of algorithms that perform the recovery efficiently, as well as determining which kind of linear measurements allow for the reconstruction. There have been two distinct major approaches to sparse recovery that each present different benefits and shortcomings. The first, L1-minimization methods such as Basis Pursuit, use a linear optimization problem to recover the signal. This method provides strong guarantees and stability, but relies on Linear Programming, whose methods do not yet have strong polynomially bounded runtimes. The second approach uses greedy methods that compute the support of the signal iteratively. These methods are usually much faster than Basis Pursuit, but until recently had not been able to provide the same guarantees. This gap between the two approaches was bridged when we developed and analyzed the greedy algorithm Regularized Orthogonal Matching Pursuit (ROMP). ROMP provides similar guarantees to Basis Pursuit as well as the speed of a greedy algorithm. Our more recent algorithm Compressive Sampling Matching Pursuit (CoSaMP) improves upon these guarantees, and is optimal in every important aspect.
0905.4541
Turbo Packet Combining Strategies for the MIMO-ISI ARQ Channel
cs.IT math.IT
This paper addresses the issue of efficient turbo packet combining techniques for coded transmission with a Chase-type automatic repeat request (ARQ) protocol operating over a multiple-input--multiple-output (MIMO) channel with intersymbol interference (ISI). First of all, we investigate the outage probability and the outage-based power loss of the MIMO-ISI ARQ channel when optimal maximum a posteriori (MAP) turbo packet combining is used at the receiver. We show that the ARQ delay (i.e., the maximum number of ARQ rounds) does not completely translate into a diversity gain. We then introduce two efficient turbo packet combining algorithms that are inspired by minimum mean square error (MMSE)-based turbo equalization techniques. Both schemes can be viewed as low-complexity versions of the optimal MAP turbo combiner. The first scheme is called signal-level turbo combining and performs packet combining and multiple transmission ISI cancellation jointly at the signal-level. The second scheme, called symbol-level turbo combining, allows ARQ rounds to be separately turbo equalized, while combining is performed at the filter output. We conduct a complexity analysis where we demonstrate that both algorithms have almost the same computational cost as the conventional log-likelihood ratio (LLR)-level combiner. Simulation results show that both proposed techniques outperform LLR-level combining, while for some representative MIMO configurations, signal-level combining has better ISI cancellation capability and achievable diversity order than that of symbol-level combining.
0905.4545
Minimum Distance and Convergence Analysis of Hamming-Accumulate-Accumulate Codes
cs.IT math.IT
In this letter we consider the ensemble of codes formed by the serial concatenation of a Hamming code and two accumulate codes. We show that this ensemble is asymptotically good, in the sense that most codes in the ensemble have minimum distance growing linearly with the block length. Thus, the resulting codes achieve high minimum distances with high probability, about half or more of the minimum distance of a typical random linear code of the same rate and length in our examples. The proposed codes also show reasonably good iterative convergence thresholds, which makes them attractive for applications requiring high code rates and low error rates, such as optical communications and magnetic recording.
0905.4570
Weak Evolvability Equals Strong Evolvability
cs.AI cs.NE
An updated version will be uploaded later.
0905.4601
Considerations on Construction Ontologies
cs.AI
The paper proposes an analysis of some existing ontologies in order to point out ways to resolve semantic heterogeneity in information systems. The authors highlight the tasks in a Knowledge Acquisition System and identify aspects related to the addition of new information to an intelligent system. A solution is proposed as a combination of ontology reasoning services and natural language generation. A multi-agent system will be conceived, with an extractor agent, a reasoner agent, and a competence management agent.
0905.4605
Techniques for Securing Data Exchange between a Database Server and a Client Program
cs.DB
The goal of the presented work is to illustrate a method by which the data exchange between a standalone computer program and a shared database server can be protected from unauthorized interception of the traffic on the Internet, the transport network for the data managed by those two systems. Through such interception an attacker could gain illegitimate access to the database, threatening data integrity and compromising the database.
0905.4614
A Logic Programming Approach to Activity Recognition
cs.AI
We have been developing a system for recognising human activity given a symbolic representation of video content. The input of our system is a set of time-stamped short-term activities detected on video frames. The output of our system is a set of recognised long-term activities, which are pre-defined temporal combinations of short-term activities. The constraints on the short-term activities that, if satisfied, lead to the recognition of a long-term activity, are expressed using a dialect of the Event Calculus. We illustrate the expressiveness of the dialect by showing the representation of several typical complex activities. Furthermore, we present a detailed evaluation of the system through experimentation on a benchmark dataset of surveillance videos.
0905.4627
CoPhIR: a Test Collection for Content-Based Image Retrieval
cs.MM cs.IR
The scalability, as well as the effectiveness, of the different Content-based Image Retrieval (CBIR) approaches proposed in the literature is today an important research issue. Given the wealth of images on the Web, CBIR systems must in fact leap towards Web-scale datasets. In this paper, we report on our experience in building a test collection of 100 million images, with the corresponding descriptive features, to be used in experimenting with new scalable techniques for similarity searching and in comparing their results. In the context of the SAPIR (Search on Audio-visual content using Peer-to-peer Information Retrieval) European project, we had to test our distributed similarity searching technology on a realistic data set. Therefore, since no large-scale collection was available for research purposes, we had to tackle the non-trivial process of image crawling and descriptive feature extraction (we used five MPEG-7 features) using the European EGEE computing GRID. The result of this effort is CoPhIR, the first CBIR test collection of such scale. CoPhIR is now open to the research community for experiments and comparisons, and access to the collection has already been granted to more than 50 research groups worldwide.
0905.4656
Quantization Errors of fGn and fBm Signals
cs.IT math.IT
In this Letter, we show that under the assumption of high resolution, the quantization errors of fGn and fBm signals with uniform quantizer can be treated as uncorrelated white noises.
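The high-resolution claim is easy to check numerically for a uniform quantizer, whose error is classically modeled as uniform white noise with variance delta^2/12. The toy check below uses i.i.d. uniform inputs rather than actual fGn/fBm signals, so it illustrates the error model, not the Letter's result itself.

```python
import random

def quantize(x, delta):
    """Uniform quantizer with step size delta."""
    return delta * round(x / delta)

# Under the high-resolution assumption the error x - quantize(x) is
# modeled as white noise, uniform on [-delta/2, delta/2], so its
# variance should be close to delta**2 / 12.
rng = random.Random(0)
delta = 0.01
errors = [x - quantize(x, delta) for x in (rng.random() for _ in range(200000))]
var = sum(e * e for e in errors) / len(errors)
```

The empirical variance lands within a fraction of a percent of delta**2 / 12, and adjacent errors are essentially uncorrelated, consistent with the white-noise model.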
0905.4684
A Simple Sequential Spectrum Sensing Scheme for Cognitive Radio
cs.IT math.IT
Cognitive radio that supports a secondary and opportunistic access to licensed spectrum shows great potential to dramatically improve spectrum utilization. Spectrum sensing performed by secondary users to detect unoccupied spectrum bands, is a key enabling technique for cognitive radio. This paper proposes a truncated sequential spectrum sensing scheme, namely the sequential shifted chi-square test (SSCT). The SSCT has a simple test statistic and does not rely on any deterministic knowledge about primary signals. As figures of merit, the exact false-alarm probability is derived, and the miss-detection probability as well as the average sample number (ASN) are evaluated by using a numerical integration algorithm. Corroborating numerical examples show that, in comparison with fixed-sample size detection schemes such as energy detection, the SSCT delivers considerable reduction on the ASN while maintaining a comparable detection performance.
0905.4700
Cross-Layer Design of FDD-OFDM Systems based on ACK/NAK Feedbacks
cs.IT math.IT
It is well-known that cross-layer scheduling which adapts power, rate and user allocation can achieve significant gain in system capacity. However, conventional cross-layer designs all require channel state information at the base station (CSIT), which is difficult to obtain in practice. In this paper, we focus on cross-layer resource optimization based on ACK/NAK feedback flows in OFDM systems without explicit CSIT. While the problem can be modeled as a Markov Decision Process (MDP), a brute force approach by policy iteration or value iteration cannot lead to any viable solution. Thus, we derive a simple closed-form solution for the MDP cross-layer problem, which is asymptotically optimal for sufficiently small target packet error rate (PER). The proposed solution also has low complexity and is suitable for real-time implementation. It is also shown to achieve significant performance gain compared with systems that do not utilize the ACK/NAK feedbacks for cross-layer designs, or cross-layer systems that utilize very unreliable CSIT for adaptation with mismatch in CSIT error statistics. Asymptotic analysis is also provided to obtain useful design insights.
0905.4713
Mining Generalized Patterns from Large Databases using Ontologies
cs.AI cs.DB cs.DM
Formal Concept Analysis (FCA) is a mathematical theory based on the formalization of the notions of concept and concept hierarchies. It has been successfully applied to several Computer Science fields such as data mining, software engineering, and knowledge engineering, and in many domains like medicine, psychology, linguistics and ecology. For instance, it has been exploited for the design, mapping and refinement of ontologies. In this paper, we show how FCA can benefit from a given domain ontology by analyzing the impact of a taxonomy (on objects and/or attributes) on the resulting concept lattice. We will mainly concentrate on the usage of a taxonomy to extract generalized patterns (i.e., knowledge generated from data when elements of a given domain ontology are used) in the form of concepts and rules, and to improve navigation through these patterns. To that end, we analyze three generalization cases and show their impact on the size of the generalized pattern set. Different scenarios of simultaneous generalizations on both objects and attributes are also discussed.
0905.4757
Stochastic Optimization for Markov Modulated Networks with Application to Delay Constrained Wireless Scheduling
math.OC cs.SY
We consider a wireless system with a small number of delay constrained users and a larger number of users without delay constraints. We develop a scheduling algorithm that reacts to time varying channels and maximizes throughput utility (to within a desired proximity), stabilizes all queues, and satisfies the delay constraints. The problem is solved by reducing the constrained optimization to a set of weighted stochastic shortest path problems, which act as natural generalizations of max-weight policies to Markov decision networks. We also present approximation results for the corresponding shortest path problems, and discuss the additional complexity and delay incurred as compared to systems without delay constraints. The solution technique is general and applies to other constrained stochastic decision problems.
0905.4761
Optimizing XML Compression
cs.DB
The eXtensible Markup Language (XML) provides a powerful and flexible means of encoding and exchanging data. As it turns out, its main advantage as an encoding format (namely, its requirement that all open and close markup tags are present and properly balanced) also yields one of its main disadvantages: verbosity. XML-conscious compression techniques seek to overcome this drawback. Many of these techniques first separate XML structure from the document content, and then compress each independently. Further compression gains can be realized by identifying and compressing together document content that is highly similar, thereby amortizing the storage costs of auxiliary information required by the chosen compression algorithm. Additionally, the proper choice of compression algorithm is an important factor not only for the achievable compression gain, but also for access performance. Hence, choosing a compression configuration that optimizes compression gain requires one to determine (1) a partitioning strategy for document content, and (2) the best available compression algorithm to apply to each set within this partition. In this paper, we show that finding an optimal compression configuration with respect to compression gain is an NP-hard optimization problem. This problem remains intractable even if one considers a single compression algorithm for all content. We also describe an approximation algorithm for selecting a partitioning strategy for document content based on the branch-and-bound paradigm.
0905.4771
Variational structure of the optimal artificial diffusion method for the advection-diffusion equation
cs.CE cs.NA
In this research note we provide a variational basis for the optimal artificial diffusion method, which has been a cornerstone in developing many stabilized methods. The optimal artificial diffusion method produces exact nodal solutions when applied to one-dimensional problems with constant coefficients and forcing function. We first present a variational principle for a multi-dimensional advective-diffusive system, and then derive a new stable weak formulation. When applied to one-dimensional problems with constant coefficients and forcing function, this resulting weak formulation will be equivalent to the optimal artificial diffusion method. We present representative numerical results to corroborate our theoretical findings.
0905.4918
Divide and Conquer: Partitioning Online Social Networks
cs.NI cs.AI cs.DC
Online Social Networks (OSNs) have exploded in terms of scale and scope over the last few years. The unprecedented growth of these networks presents challenges in terms of system design and maintenance. One way to cope with this growth is to partition such large networks and assign the partitions to different machines. However, social networks possess unique properties that make the partitioning problem non-trivial. The main contribution of this paper is to understand the different properties of social networks and how these properties can guide the choice of a partitioning algorithm. Using large-scale measurements representing real OSNs, we first characterize different properties of social networks, and then we qualitatively evaluate different partitioning methods that cover the design space. We expose the different trade-offs involved and interpret them in light of the properties of social networks. We show that a judicious choice of partitioning scheme can help improve performance.
0905.4926
On Node Density -- Outage Probability Tradeoff in Wireless Networks
cs.IT math.IT
A statistical model of interference in wireless networks is considered, which is based on the traditional propagation channel model and a Poisson model of the random spatial distribution of nodes in 1-D, 2-D and 3-D spaces with both uniform and non-uniform densities. The power of the nearest interferer is used as the major performance indicator, instead of the traditionally-used total interference power, since in the low outage region the two have the same statistics, so that the former is an accurate approximation of the latter. This simplifies the problem significantly and allows one to develop a unified framework for the outage probability analysis, including the impacts of complete/partial interference cancelation, of different types of fading, and of linear filtering, either alone or in combination with each other. When a given number of nearest interferers are completely canceled, the outage probability is shown to scale down exponentially in this number. Three different models of partial cancelation are considered and compared via their outage probabilities. The partial cancelation level required to eliminate the impact of an interferer is quantified. The effect of a broad class of fading processes (including all popular fading models) is included in the analysis in a straightforward way; this effect can be positive or negative depending on the particular model and propagation/system parameters. The positive effect of linear filtering (e.g. by directional antennas) is quantified via a new statistical selectivity parameter. The analysis results in the formulation of a tradeoff relationship between the network density and the outage probability, which arises from the interplay between the random geometry of node locations, the propagation path loss, and the distortion effects at the victim receiver.
0905.4937
A criterion for hypothesis testing for stationary processes
math.ST cs.IT math.IT math.PR stat.TH
Given a finite-valued sample $X_1,...,X_n$ we wish to test whether it was generated by a stationary ergodic process belonging to a family $H_0$, or by a stationary ergodic process outside $H_0$. We require the Type I error of the test to be uniformly bounded, while the Type II error has to be made not more than a finite number of times with probability 1. For this notion of consistency we provide necessary and sufficient conditions on the family $H_0$ for the existence of a consistent test. This criterion is illustrated with applications to testing for membership in parametric families, generalizing some existing results. In addition, we analyze a stronger notion of consistency, which requires finite-sample guarantees on errors of both types, and provide some necessary and some sufficient conditions for the existence of a consistent test. We emphasize that no assumptions on the process distributions are made beyond stationarity and ergodicity.
0906.0037
Asymptotic Capacity and Optimal Precoding in MIMO Multi-Hop Relay Networks
cs.IT math.IT
A multi-hop relaying system is analyzed in which data sent by a multi-antenna source is relayed by successive multi-antenna relays until it reaches a multi-antenna destination. Assuming correlated fading at each hop, each relay receives a faded version of the signal from the previous level, performs linear precoding, and retransmits it to the next level. Using free probability theory and assuming that the noise power at the relaying levels -- but not at the destination -- is negligible, a closed-form expression for the asymptotic instantaneous end-to-end mutual information is derived as the number of antennas at all levels grows large. The resulting deterministic expression is independent of the channel realizations and depends only on the channel statistics. Moreover, it also serves as the asymptotic value of the average end-to-end mutual information. The optimal singular vectors of the precoding matrices that maximize the average mutual information with a finite number of antennas at all levels are also provided. It turns out that the optimal precoding singular vectors are aligned with the eigenvectors of the channel correlation matrices. Thus they can be determined using only the known channel statistics. As the optimal precoding singular vectors are independent of the system size, they are also optimal in the asymptotic regime.