0801.3703
On minimality of convolutional ring encoders
cs.IT math.IT
Convolutional codes are considered with code sequences modelled as semi-infinite Laurent series. It is well known that a convolutional code C over a finite group G has a minimal trellis representation that can be derived from code sequences. It is also well known that, for the case that G is a finite field, any polynomial encoder of C can be algebraically manipulated to yield a minimal polynomial encoder whose controller canonical realization is a minimal trellis. In this paper we seek to extend this result to the finite ring case G = Z_{p^r} by introducing a so-called "p-encoder". We show how to manipulate a polynomial encoding of a noncatastrophic convolutional code over Z_{p^r} to produce a particular type of p-encoder ("minimal p-encoder") whose controller canonical realization is a minimal trellis with nonlinear features. The minimum number of trellis states is then expressed as p^gamma, where gamma is the sum of the row degrees of the minimal p-encoder. In particular, we show that any convolutional code over Z_{p^r} admits a delay-free p-encoder, which implies the novel result that delay-freeness is not a property of the code but of the encoder, just as in the field case. We conjecture that a similar result holds with respect to catastrophicity, i.e., any catastrophic convolutional code over Z_{p^r} admits a noncatastrophic p-encoder.
0801.3773
Graph-Based Classification of Self-Dual Additive Codes over Finite Fields
cs.IT math.CO math.IT quant-ph
Quantum stabilizer states over GF(m) can be represented as self-dual additive codes over GF(m^2). These codes can be represented as weighted graphs, and orbits of graphs under the generalized local complementation operation correspond to equivalence classes of codes. We have previously used this fact to classify self-dual additive codes over GF(4). In this paper we classify self-dual additive codes over GF(9), GF(16), and GF(25). Assuming that the classical MDS conjecture holds, we are able to classify all self-dual additive MDS codes over GF(9) by using an extension technique. We prove that the minimum distance of a self-dual additive code is related to the minimum vertex degree in the associated graph orbit. Circulant graph codes are introduced, and a computer search reveals that this set contains many strong codes. We show that some of these codes have highly regular graph representations.
0801.3817
Robustness Evaluation of Two CCG, a PCFG and a Link Grammar Parsers
cs.CL
Robustness in a parser refers to its ability to deal with exceptional phenomena: a parser is robust if it can handle inputs outside its normal range. This paper reports on a series of robustness evaluations of state-of-the-art parsers, concentrating on one aspect of robustness: the ability to parse sentences containing misspelled words. We propose two measures for robustness evaluation based on a comparison of a parser's output for grammatical input sentences and their noisy counterparts. We use these measures to compare the overall robustness of the four evaluated parsers, and we present an analysis of the decline in parser performance with increasing error levels. Our results indicate that performance typically declines by tens of percentage units when parsers are presented with texts containing misspellings. When tested on our purpose-built test set of 443 sentences, the best parser in the experiment (the C&C parser) returned exactly the same parse tree for the grammatical and ungrammatical sentences for 60.8%, 34.0% and 14.9% of the sentences with one, two or three misspelled words, respectively.
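The exact-match comparison reported above can be sketched as follows. This is an illustrative stand-in, not the authors' implementation: the parse strings and the helper function are invented for the example.

```python
# Sketch of an exact-match robustness measure: the fraction of noisy
# sentences for which a parser returns the same parse tree it produced
# for the grammatical original. Parse strings here are hypothetical.

def exact_match_robustness(clean_parses, noisy_parses):
    """Fraction of sentences whose noisy parse equals the clean parse."""
    assert len(clean_parses) == len(noisy_parses)
    matches = sum(1 for c, n in zip(clean_parses, noisy_parses) if c == n)
    return matches / len(clean_parses)

clean = ["(S (NP he) (VP runs))", "(S (NP she) (VP sleeps))"]
noisy = ["(S (NP he) (VP runs))", "(S (NP she) (VP slep))"]
print(exact_match_robustness(clean, noisy))  # 0.5
```

Applied per error level (one, two, or three misspellings), such a measure yields percentages like the 60.8%/34.0%/14.9% figures quoted above.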
0801.3837
Universal Fingerprinting: Capacity and Random-Coding Exponents
cs.IT math.IT
This paper studies fingerprinting (traitor tracing) games in which the number of colluders and the collusion channel are unknown. The fingerprints are embedded into host sequences representing signals to be protected and provide the receiver with the capability to trace back pirated copies to the colluders. The colluders and the fingerprint embedder are subject to signal fidelity constraints. Our problem setup unifies the signal-distortion and Boneh-Shaw formulations of fingerprinting. The fundamental tradeoffs between fingerprint codelength, number of users, number of colluders, fidelity constraints, and decoding reliability are then determined. Several bounds on fingerprinting capacity have been presented in recent literature. This paper derives exact capacity formulas and presents a new randomized fingerprinting scheme with the following properties: (1) the encoder and receiver assume a nominal coalition size but do not need to know the actual coalition size and the collusion channel; (2) a tunable parameter $\Delta$ trades off false-positive and false-negative error exponents; (3) the receiver provides a reliability metric for its decision; and (4) the scheme is capacity-achieving when the false-positive exponent $\Delta$ tends to zero and the nominal coalition size coincides with the actual coalition size. A fundamental component of the new scheme is the use of a "time-sharing" randomized sequence. The decoder is a maximum penalized mutual information decoder, where the significance of each candidate coalition is assessed relative to a threshold, and the penalty is proportional to the coalition size. A much simpler {\em threshold decoder} that satisfies properties (1)-(3) above but not (4) is also given.
0801.3864
Between conjecture and memento: shaping a collective emotional perception of the future
cs.CL cs.GL
Large scale surveys of public mood are costly and often impractical to perform. However, the web is awash with material indicative of public mood such as blogs, emails, and web queries. Inexpensive content analysis on such extensive corpora can be used to assess public mood fluctuations. The work presented here is concerned with the analysis of the public mood towards the future. Using an extension of the Profile of Mood States questionnaire, we have extracted mood indicators from 10,741 emails submitted in 2006 to futureme.org, a web service that allows its users to send themselves emails to be delivered at a later date. Our results indicate long-term optimism toward the future, but medium-term apprehension and confusion.
0801.3871
On the Scaling Window of Model RB
cs.CC cond-mat.stat-mech cs.AI
This paper analyzes the scaling window of a random CSP model (i.e. model RB) for which we can identify the threshold points exactly, denoted by $r_{cr}$ or $p_{cr}$. For this model, we establish the scaling window $W(n,\delta)=(r_{-}(n,\delta), r_{+}(n,\delta))$ such that the probability of a random instance being satisfiable is greater than $1-\delta$ for $r<r_{-}(n,\delta)$ and is less than $\delta$ for $r>r_{+}(n,\delta)$. Specifically, we obtain the following result $$W(n,\delta)=(r_{cr}-\Theta(\frac{1}{n^{1-\epsilon}\ln n}), \ r_{cr}+\Theta(\frac{1}{n\ln n})),$$ where $0\leq\epsilon<1$ is a constant. A similar result with respect to the other parameter $p$ is also obtained. Since the instances generated by model RB have been shown to be hard at the threshold, this is the first attempt, as far as we know, to analyze the scaling window of such a model with hard instances.
0801.3875
Towards a Real-Time Data Driven Wildland Fire Model
physics.ao-ph cs.CE
A wildland fire model based on semi-empirical relations for the spread rate of a surface fire and post-frontal heat release is coupled with the Weather Research and Forecasting atmospheric model (WRF). The propagation of the fire front is implemented by a level set method. Data is assimilated by a morphing ensemble Kalman filter, which provides amplitude as well as position corrections. Thermal images of a fire will provide the observations and will be compared to a synthetic image from the model state.
0801.3878
Hash Property and Coding Theorems for Sparse Matrices and Maximum-Likelihood Coding
cs.IT math.IT
The aim of this paper is to prove achievability results for several coding problems by using sparse matrices (the maximum column weight grows logarithmically in the block length) and maximum-likelihood (ML) coding. These problems are the Slepian-Wolf problem, the Gel'fand-Pinsker problem, the Wyner-Ziv problem, and the one-helps-one problem (source coding with partial side information at the decoder). To this end, the notion of a hash property for an ensemble of functions is introduced, and it is proved that an ensemble of $q$-ary sparse matrices satisfies the hash property. Based on this property, it is proved that the rate of codes using sparse matrices and ML coding can achieve the optimal rate.
0801.3880
Spectral efficiency and optimal medium access control of random access systems over large random spreading CDMA
cs.IT math.IT
This paper analyzes the spectral efficiency as a function of medium access control (MAC) for large random spreading CDMA random access systems that employ a linear receiver. It is shown that, although located above the physical layer, MAC along with spreading and power allocation can effectively perform spectral efficiency maximization and near-far mitigation.
0801.3908
Encoding changing country codes for the Semantic Web with ISO 3166 and SKOS
cs.IR
This paper shows how authority files can be encoded for the Semantic Web with the Simple Knowledge Organisation System (SKOS). In particular, the application of SKOS to encoding the structure, management, and utilization of country codes as defined in ISO 3166 is demonstrated. The proposed encoding gives a use case for SKOS that includes features that have received little discussion so far, such as multiple notations, nested concept schemes, and changes by versioning.
0801.3926
On the Weight Distribution of the Extended Quadratic Residue Code of Prime 137
cs.IT cs.DM math.IT
The Hamming weight enumerator function of the formally self-dual even, binary extended quadratic residue code of prime p = 8m + 1 is given by Gleason's theorem for singly-even codes. Using this theorem, the Hamming weight distribution of the extended quadratic residue code is completely determined once the numbers of codewords of Hamming weight j, A_j, for 0 <= j <= 2m, are known. The smallest prime for which the Hamming weight distribution of the corresponding extended quadratic residue code is unknown is 137. It is shown in this paper that, for p = 137, A_2m = A_34 may be obtained without exhaustive codeword enumeration. After the remaining A_j required by Gleason's theorem are computed and independently verified using their congruences, the Hamming weight distributions of the binary augmented and extended quadratic residue codes of prime 137 are derived.
0801.3971
A Bayesian Optimisation Algorithm for the Nurse Scheduling Problem
cs.NE cs.CE
A Bayesian optimization algorithm for the nurse scheduling problem is presented, which involves choosing a suitable scheduling rule from a set for each nurse's assignment. Unlike our previous work that used GAs to implement implicit learning, the learning in the proposed algorithm is explicit, i.e. we are able to identify and mix building blocks directly. The Bayesian optimization algorithm implements such explicit learning by building a Bayesian network of the joint distribution of solutions. The conditional probability of each variable in the network is computed according to an initial set of promising solutions. Subsequently, a new instance for each variable is generated, i.e. in our case a new rule string is obtained. Another set of rule strings is generated in this way, some of which replace previous strings based on fitness selection. If stopping conditions are not met, the conditional probabilities for all nodes in the Bayesian network are updated using the current set of promising rule strings. Computational results from 52 real data instances demonstrate the success of this approach. It is also suggested that the learning mechanism in the proposed approach might be suitable for other scheduling problems.
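The sample-select-reestimate loop described above can be sketched as follows. As a loud caveat: this is a heavily simplified illustration, not the paper's algorithm. The full Bayesian network over rule positions is replaced by independent per-position rule probabilities (a univariate simplification), and the fitness function, rule indices, and all parameters are hypothetical.

```python
import random

random.seed(0)
N_SHIFTS, N_RULES, POP, KEEP, GENS = 8, 4, 30, 10, 20

def fitness(rule_string):
    # Hypothetical stand-in fitness: prefer rule 0 at every position.
    return sum(1 for r in rule_string if r == 0)

# Uniform initial distribution over rules for each assignment position.
probs = [[1.0 / N_RULES] * N_RULES for _ in range(N_SHIFTS)]

def sample(probs):
    # Draw one rule string from the current probability model.
    return [random.choices(range(N_RULES), weights=p)[0] for p in probs]

for _ in range(GENS):
    population = [sample(probs) for _ in range(POP)]
    # Keep the most promising solutions by fitness selection.
    promising = sorted(population, key=fitness, reverse=True)[:KEEP]
    # Re-estimate per-position rule probabilities from the promising set.
    for i in range(N_SHIFTS):
        counts = [1] * N_RULES  # Laplace smoothing
        for s in promising:
            counts[s[i]] += 1
        total = sum(counts)
        probs[i] = [c / total for c in counts]

best = max((sample(probs) for _ in range(POP)), key=fitness)
print(fitness(best))
```

After a few generations the model concentrates on the rules that the promising strings share, which is the explicit building-block learning the abstract refers to.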
0801.3983
New Upper Bounds on Sizes of Permutation Arrays
cs.IT math.IT
A permutation array (or code) of length $n$ and distance $d$, denoted by $(n,d)$ PA, is a set of permutations $C$ from some fixed set of $n$ elements such that the Hamming distance between distinct members $\mathbf{x},\mathbf{y}\in C$ is at least $d$. Let $P(n,d)$ denote the maximum size of an $(n,d)$ PA. New upper bounds on $P(n,d)$ are given. For constant $\alpha,\beta$ satisfying certain conditions, whenever $d=\beta n^{\alpha}$, the new upper bounds are asymptotically better than the previous ones.
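The definitions above are easy to make concrete: the Hamming distance counts positions where two permutations differ, and an $(n,d)$ PA requires every distinct pair to differ in at least $d$ positions. The example set below is ours, not taken from the paper.

```python
from itertools import combinations

def hamming(x, y):
    # Number of positions at which the two permutations differ.
    return sum(a != b for a, b in zip(x, y))

def is_pa(C, d):
    # C is an (n, d) PA iff every distinct pair has distance >= d.
    return all(hamming(x, y) >= d for x, y in combinations(C, 2))

# Three permutations of {0,1,2,3} forming a (4, 3) PA: every pair in
# fact differs in all 4 positions.
C = [(0, 1, 2, 3), (1, 0, 3, 2), (2, 3, 0, 1)]
print(is_pa(C, 3))                            # True
print(hamming((0, 1, 2, 3), (1, 0, 3, 2)))    # 4
```

$P(n,d)$ is then the largest such set size; the bounds in this and the two following entries constrain that quantity.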
0801.3986
New Lower Bounds on Sizes of Permutation Arrays
cs.IT math.IT
A permutation array (or code) of length $n$ and distance $d$, denoted by $(n,d)$ PA, is a set of permutations $C$ from some fixed set of $n$ elements such that the Hamming distance between distinct members $\mathbf{x},\mathbf{y}\in C$ is at least $d$. Let $P(n,d)$ denote the maximum size of an $(n,d)$ PA. This correspondence focuses on lower bounds on $P(n,d)$. First we give three improvements over the Gilbert-Varshamov lower bounds on $P(n,d)$ by applying the graph-theoretic framework presented by Jiang and Vardy. Next we show another two new improved bounds by considering the intersections of covered balls. Finally, some new lower bounds for certain values of $n$ and $d$ are given.
0801.3987
New Constructions of Permutation Arrays
cs.IT math.IT
A permutation array (permutation code, PA) of length $n$ and distance $d$, denoted by $(n,d)$ PA, is a set of permutations $C$ from some fixed set of $n$ elements such that the Hamming distance between distinct members $\mathbf{x},\mathbf{y}\in C$ is at least $d$. In this correspondence, we present two constructions of PAs from fractional polynomials over finite fields, and a construction of an $(n,d)$ PA from a permutation group with degree $n$ and minimal degree $d$. All these new constructions produce new lower bounds for PAs.
0801.4024
Set-based complexity and biological information
cs.IT cs.CC math.IT q-bio.QM
It is not obvious what fraction of all the potential information residing in the molecules and structures of living systems is significant or meaningful to the system. Sets of random sequences or identically repeated sequences, for example, would be expected to contribute little or no useful information to a cell. This issue of quantitation of information is important since the ebb and flow of biologically significant information is essential to our quantitative understanding of biological function and evolution. Motivated specifically by these problems of biological information, we propose here a class of measures to quantify the contextual nature of the information in sets of objects, based on Kolmogorov's intrinsic complexity. Such measures discount both random and redundant information and are inherent in that they do not require a defined state space to quantify the information. The maximization of this new measure, which can be formulated in terms of the universal information distance, appears to have several useful and interesting properties, some of which we illustrate with examples.
0801.4048
High Performance Cooperative Transmission Protocols Based on Multiuser Detection and Network Coding
cs.IT math.IT
Cooperative transmission is an emerging communication technique that takes advantage of the broadcast nature of wireless channels. However, due to low spectral efficiency and the requirement of orthogonal channels, its potential for use in future wireless networks is limited. In this paper, by making use of multiuser detection (MUD) and network coding, cooperative transmission protocols with high spectral efficiency, diversity order, and coding gain are developed. Compared with the traditional cooperative transmission protocols with single-user detection, in which the diversity gain is only for one source user, the proposed MUD cooperative transmission protocols have the merit that the improvement of one user's link can also benefit the other users. In addition, using MUD at the relay provides an environment in which network coding can be employed. The coding gain and high diversity order can be obtained by fully utilizing the link between the relay and the destination. From the analysis and simulation results, it is seen that the proposed protocols achieve higher diversity gain, better asymptotic efficiency, and lower bit error rate, compared to traditional MUD schemes and to existing cooperative transmission protocols. From the simulation results, the performance of the proposed scheme is near optimal, as the performance gap is 0.12 dB at an average bit error rate (BER) of 10^{-6} and 1.04 dB at an average BER of 10^{-3}, compared to two performance upper bounds.
0801.4061
The optimal assignment kernel is not positive definite
cs.LG
We prove that the optimal assignment kernel, proposed recently as an attempt to embed labeled graphs and more generally tuples of basic data to a Hilbert space, is in fact not always positive definite.
0801.4119
Strategic Alert Throttling for Intrusion Detection Systems
cs.NE cs.CR
Network intrusion detection systems are themselves becoming targets of attackers. Alert flood attacks may be used to conceal malicious activity by hiding it among a deluge of false alerts sent by the attacker. Although these types of attacks are very hard to stop completely, our aim is to present techniques that improve alert throughput and capacity to such an extent that the resources required to successfully mount the attack become prohibitive. The key idea presented is to combine a token bucket filter with a real-time correlation algorithm. The proposed algorithm throttles alert output from the IDS when an attack is detected. The attack graph used in the correlation algorithm ensures that alerts crucial to forming strategies are not discarded by throttling.
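The token-bucket component of the scheme above can be sketched in a few lines. This is a minimal illustration under assumed parameters: the capacity and refill rate are invented, and the attack-graph-based prioritisation of crucial alerts is omitted.

```python
# Minimal token-bucket filter: alerts are forwarded only while tokens
# remain, and tokens refill at a fixed rate per tick. A flood of alerts
# beyond the bucket capacity is throttled rather than forwarded.

class TokenBucket:
    def __init__(self, capacity, refill_per_tick):
        self.capacity = capacity
        self.tokens = capacity
        self.refill = refill_per_tick

    def tick(self):
        # Periodic refill, capped at the bucket capacity.
        self.tokens = min(self.capacity, self.tokens + self.refill)

    def allow(self):
        # Forward the alert if a token is available; otherwise throttle.
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=5, refill_per_tick=1)
passed = 0
for _ in range(20):          # burst of 20 alerts within one tick
    passed += bucket.allow()
print(passed)  # 5: only the first five alerts of the flood get through
```

In the full scheme, which alerts survive throttling would be decided by the correlation algorithm, not by arrival order as here.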
0801.4129
Scaling Laws and Techniques in Decentralized Processing of Interfered Gaussian Channels
cs.IT math.IT
The scaling laws of the achievable communication rates and the corresponding upper bounds of distributed reception in the presence of an interfering signal are investigated. The scheme includes one transmitter communicating to a remote destination via two relays, which forward messages to the remote destination through reliable links with finite capacities. The relays receive the transmission along with some unknown interference. We focus on three common settings for distributed reception, wherein the scaling laws of the capacity (the pre-log as the powers of the transmitter and the interference are taken to infinity) are completely characterized. It is shown in most cases that, in order to overcome the interference, a definite amount of information about the interference needs to be forwarded to the destination along with the desired message. It is exemplified in one scenario that the cut-set upper bound is strictly loose. The results are derived using the cut-set bound along with a new bounding technique which relies on multi-letter expressions. Furthermore, lattices are found to be a useful communication technique in this setting, and are used to characterize the scaling laws of achievable rates.
0801.4190
Phylogenies without Branch Bounds: Contracting the Short, Pruning the Deep
q-bio.PE cs.CE cs.DS math.PR math.ST stat.TH
We introduce a new phylogenetic reconstruction algorithm which, unlike most previous rigorous inference techniques, does not rely on assumptions regarding the branch lengths or the depth of the tree. The algorithm returns a forest which is guaranteed to contain all edges that are: 1) sufficiently long and 2) sufficiently close to the leaves. How much of the true tree is recovered depends on the sequence length provided. The algorithm is distance-based and runs in polynomial time.
0801.4194
A statistical mechanical interpretation of algorithmic information theory
cs.IT cs.CC math.IT math.PR quant-ph
We develop a statistical mechanical interpretation of algorithmic information theory by introducing the notion of thermodynamic quantities, such as free energy, energy, statistical mechanical entropy, and specific heat, into algorithmic information theory. We investigate the properties of these quantities by means of program-size complexity from the point of view of algorithmic randomness. It is then discovered that, in the interpretation, the temperature plays a role as the compression rate of the values of all these thermodynamic quantities, which include the temperature itself. Reflecting this self-referential nature of the compression rate of the temperature, we obtain fixed point theorems on compression rate.
0801.4198
Microscopic Analysis for Decoupling Principle of Linear Vector Channel
cs.IT math.IT
This paper studies the decoupling principle of a linear vector channel, which is an extension of CDMA and MIMO channels. We show that the scalar-channel characterization obtained via the decoupling principle is valid not only for collections of a large number of elements of the input vector, as discussed in previous studies, but also for individual elements of the input vector, i.e. the linear vector channel for individual elements of the channel input vector is decomposed into a bank of independent scalar Gaussian channels in the large-system limit, where the dimensions of the channel input and output are both sent to infinity while their ratio is kept fixed.
0801.4287
Movie Recommendation Systems Using An Artificial Immune System
cs.NE cs.AI
We apply Artificial Immune System (AIS) technology to Collaborative Filtering (CF) to build a movie recommendation system. Two different affinity measure algorithms of AIS, Kendall tau and Weighted Kappa, are used to calculate the correlation coefficients for this movie recommendation system. Our tests indicate that Weighted Kappa is more suitable than Kendall tau for movie problems.
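The Kendall tau affinity mentioned above measures how consistently two users rank the same movies, via concordant and discordant rating pairs. The sketch below is illustrative: the ratings are invented, and a real recommender would also handle ties and missing votes (which is where Weighted Kappa differs).

```python
from itertools import combinations

def kendall_tau(r1, r2):
    """Kendall tau correlation between two users' ratings of the same items."""
    concordant = discordant = 0
    for i, j in combinations(range(len(r1)), 2):
        s = (r1[i] - r1[j]) * (r2[i] - r2[j])
        if s > 0:
            concordant += 1    # both users order the pair the same way
        elif s < 0:
            discordant += 1    # the users disagree on the pair's order
    n_pairs = len(r1) * (len(r1) - 1) / 2
    return (concordant - discordant) / n_pairs

alice = [5, 3, 4, 1]   # hypothetical ratings for the same four movies
bob   = [4, 2, 5, 1]
print(kendall_tau(alice, bob))  # 0.666...: largely agreeing tastes
```

High affinity between two users' rating vectors is what lets the CF component borrow one user's ratings to predict the other's.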
0801.4305
Risk-Seeking versus Risk-Avoiding Investments in Noisy Periodic Environments
q-fin.PM cs.CE physics.soc-ph
We study the performance of various agent strategies in an artificial investment scenario. Agents are equipped with a budget, $x(t)$, and at each time step invest a particular fraction, $q(t)$, of their budget. The return on investment (RoI), $r(t)$, is characterized by a periodic function with different types and levels of noise. Risk-avoiding agents choose their fraction $q(t)$ proportional to the expected positive RoI, while risk-seeking agents always choose a maximum value $q_{max}$ if they predict the RoI to be positive ("everything on red"). In addition to these different strategies, agents have different capabilities to predict the future $r(t)$, dependent on their internal complexity. Here, we compare 'zero-intelligent' agents using technical analysis (such as moving least squares) with agents using reinforcement learning or genetic algorithms to predict $r(t)$. The performance of agents is measured by their average budget growth after a certain number of time steps. We present results of extensive computer simulations, which show that, for our given artificial environment, (i) the risk-seeking strategy outperforms the risk-avoiding one, and (ii) the genetic algorithm was able to find this optimal strategy itself, and thus outperforms other prediction approaches considered.
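The budget dynamics described above can be reproduced in a toy simulation. This is an assumption-laden sketch, not the paper's model: the RoI is noiseless, the predictor is perfect, and the period, amplitude, and proportionality constant are invented, all of which favours clean comparison over realism.

```python
import math

def run(strategy, steps=200, q_max=1.0):
    # Budget x(t) evolves as x *= 1 + q(t) * r(t) each step.
    x = 1.0
    for t in range(steps):
        r = 0.1 * math.sin(2 * math.pi * t / 50)   # periodic RoI r(t)
        predicted = r                               # perfect predictor
        if strategy == "risk_seeking":
            q = q_max if predicted > 0 else 0.0     # "everything on red"
        else:                                       # risk-avoiding
            q = max(0.0, min(1.0, 10 * predicted))  # proportional to RoI
        x *= 1 + q * r
    return x

seeking = run("risk_seeking")
avoiding = run("risk_avoiding")
print(seeking > avoiding)  # True: with perfect prediction, all-in wins
```

Under perfect prediction and no noise the risk-seeking strategy dominates, consistent with finding (i) above; the interesting cases in the paper arise when noise makes the prediction imperfect.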
0801.4307
On Affinity Measures for Artificial Immune System Movie Recommenders
cs.NE cs.AI cs.CY
We combine Artificial Immune Systems (AIS) technology with Collaborative Filtering (CF) and use it to build a movie recommendation system. We already know that Artificial Immune Systems work well as movie recommenders from previous work by Cayzer and Aickelin [3, 4, 5]. Here our aim is to investigate the effect of different affinity measure algorithms for the AIS. Two different affinity measures, Kendall's Tau and Weighted Kappa, are used to calculate the correlation coefficients for the movie recommender. We compare the results with those published previously and show that Weighted Kappa is more suitable than the others for movie problems. We also show that AIS are generally robust movie recommenders and that, as long as a suitable affinity measure is chosen, results are good.
0801.4312
Investigating Artificial Immune Systems For Job Shop Rescheduling In Changing Environments
cs.NE cs.CE
Artificial immune systems can be used to generate schedules in changing environments, and have been proven to be more robust than schedules developed using a genetic algorithm. Good schedules can be produced especially when the number of antigens is increased. However, an increase in the range of the antigens had an adverse effect on the fitness of the immune system. In this research, we try to improve the results of the system by rescheduling the same problem using the same method, while at the same time maintaining the robustness of the schedules.
0801.4314
Artificial Immune Systems (AIS) - A New Paradigm for Heuristic Decision Making
cs.NE cs.AI
Over the last few years, more and more heuristic decision making techniques have been inspired by nature, e.g. evolutionary algorithms, ant colony optimisation and simulated annealing. More recently, a novel computational intelligence technique inspired by immunology has emerged, called Artificial Immune Systems (AIS). This immune system inspired technique has already been useful in solving some computational problems. In this keynote, we will very briefly describe the immune system metaphors that are relevant to AIS. We will then give some illustrative real-world problems suitable for AIS use and show a step-by-step algorithm walkthrough. A comparison of AIS to other well-known algorithms and areas for future work will round this keynote off. It should be noted that as AIS is still a young and evolving field, there is not yet a fixed algorithm template and hence actual implementations might differ somewhat from the examples given here.
0801.4355
TER: A Robot for Remote Ultrasonic Examination: Experimental Evaluations
cs.OH cs.RO
This chapter motivates the clinical use of robotic tele-echography, introduces the TER system, and describes the technical and clinical evaluations performed with TER.
0801.4544
A Neyman-Pearson Approach to Universal Erasure and List Decoding
cs.IT math.IT
When information is to be transmitted over an unknown, possibly unreliable channel, an erasure option at the decoder is desirable. Using constant-composition random codes, we propose a generalization of Csiszar and Korner's Maximum Mutual Information decoder with erasure option for discrete memoryless channels. The new decoder is parameterized by a weighting function that is designed to optimize the fundamental tradeoff between undetected-error and erasure exponents for a compound class of channels. The class of weighting functions may be further enlarged to optimize a similar tradeoff for list decoders -- in that case, undetected-error probability is replaced with average number of incorrect messages in the list. Explicit solutions are identified. The optimal exponents admit simple expressions in terms of the sphere-packing exponent, at all rates below capacity. For small erasure exponents, these expressions coincide with those derived by Forney (1968) for symmetric channels, using Maximum a Posteriori decoding. Thus for those channels at least, ignorance of the channel law is inconsequential. Conditions for optimality of the Csiszar-Korner rule and of the simpler empirical-mutual-information thresholding rule are identified. The error exponents are evaluated numerically for the binary symmetric channel.
0801.4571
Is SP BP?
cs.IT math.IT
The Survey Propagation (SP) algorithm for solving $k$-SAT problems has been shown recently as an instance of the Belief Propagation (BP) algorithm. In this paper, we show that for general constraint-satisfaction problems, SP may not be reducible from BP. We also establish the conditions under which such a reduction is possible. Along our development, we present a unification of the existing SP algorithms in terms of a probabilistically interpretable iterative procedure -- weighted Probabilistic Token Passing.
0801.4706
A Class of Errorless Codes for Over-loaded Synchronous Wireless and Optical CDMA Systems
cs.IT math.CO math.IT
In this paper we introduce a new class of codes for over-loaded synchronous wireless and optical CDMA systems which increases the number of users for a fixed number of chips without introducing any errors. Equivalently, the chip rate can be reduced for a given number of users, which implies bandwidth reduction for downlink wireless systems. An upper bound for the maximum number of users for a given number of chips is derived. Also, lower and upper bounds for the sum channel capacity of a binary over-loaded CDMA are derived that can predict the existence of such over-loaded codes. We also propose a simplified maximum likelihood method for decoding these types of over-loaded codes. Although a high over-loading factor degrades the system performance in noisy channels, simulation results show that this degradation is not significant. More importantly, for moderate values of Eb/N0 (in the range of 6-10 dB) or higher, the proposed codes perform much better than the binary Welch bound equality sequences.
0801.4716
Methods to integrate a language model with semantic information for a word prediction component
cs.CL
Most current word prediction systems make use of n-gram language models (LM) to estimate the probability of the following word in a phrase. In recent years there have been many attempts to enrich such language models with further syntactic or semantic information. We want to explore the predictive powers of Latent Semantic Analysis (LSA), a method that has been shown to provide reliable information on long-distance semantic dependencies between words in a context. We present and evaluate here several methods that integrate LSA-based information with a standard language model: a semantic cache, partial reranking, and different forms of interpolation. We found that all methods show significant improvements, compared to the 4-gram baseline, and most of them to a simple cache model as well.
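The simplest of the integration methods listed above, linear interpolation, combines the two models as P(w|h) = lambda * P_ngram(w|h) + (1 - lambda) * P_lsa(w|h). The sketch below is illustrative only: the candidate words, probabilities, and interpolation weight are invented, not taken from the paper.

```python
# Linear interpolation of an n-gram LM with an LSA-based semantic model.
# Hypothetical distributions over next-word candidates; lam is the
# interpolation weight given to the n-gram model.

def interpolate(p_ngram, p_lsa, lam=0.7):
    """Combine n-gram and LSA-based word probabilities per candidate."""
    return {w: lam * p_ngram.get(w, 0.0) + (1 - lam) * p_lsa.get(w, 0.0)
            for w in set(p_ngram) | set(p_lsa)}

p_ngram = {"bank": 0.5, "band": 0.3, "bang": 0.2}   # local syntactic context
p_lsa   = {"bank": 0.8, "money": 0.2}               # long-range semantic context
combined = interpolate(p_ngram, p_lsa)
best = max(combined, key=combined.get)
print(best, round(combined["bank"], 2))  # bank 0.59
```

The semantic-cache and partial-reranking methods differ in where the LSA score enters, but all of them re-rank the n-gram candidates using the long-distance semantic evidence.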
0801.4746
Concerning Olga, the Beautiful Little Street Dancer (Adjectives as Higher-Order Polymorphic Functions)
cs.CL cs.LO
In this paper we suggest a typed compositional semantics for nominal compounds of the form [Adj Noun] that models adjectives as higher-order polymorphic functions, and where types are assumed to represent concepts in an ontology that reflects our commonsense view of the world and the way we talk about it in ordinary language. In addition to [Adj Noun] compounds, our proposal also seems to suggest a plausible explanation for well-known adjective ordering restrictions.
0801.4790
Information Width
cs.DM cs.IT cs.LG math.IT
Kolmogorov argued that the concept of information exists also in problems with no underlying stochastic model (as in Shannon's representation of information), for instance the information contained in an algorithm or in the genome. He introduced a combinatorial notion of entropy and information $I(x:\sy)$ conveyed by a binary string $x$ about the unknown value of a variable $\sy$. The current paper poses the following questions: what is the relationship between the information conveyed by $x$ about $\sy$ and the description complexity of $x$? Is there a notion of cost of information? Are there limits on how efficiently $x$ conveys information? To answer these questions, Kolmogorov's definition is extended and a new concept termed {\em information width}, which is similar to $n$-widths in approximation theory, is introduced. Information of any input source, e.g., sample-based, general side-information, or a hybrid of both, can be evaluated by a single common formula. An application to the space of binary functions is considered.
0801.4794
On the Complexity of Binary Samples
cs.DM cs.AI cs.LG
Consider a class $\mathcal{H}$ of binary functions $h: X\to\{-1, +1\}$ on a finite interval $X=[0, B]\subset \mathbb{R}$. Define the {\em sample width} of $h$ on a finite subset (a sample) $S\subset X$ as $\omega_S(h) \equiv \min_{x\in S} |\omega_h(x)|$, where $\omega_h(x) = h(x) \max\{a\geq 0: h(z)=h(x), x-a\leq z\leq x+a\}$. Let $\mathbb{S}_\ell$ be the space of all samples in $X$ of cardinality $\ell$ and consider sets of wide samples, i.e., {\em hypersets}, which are defined as $A_{\beta, h} = \{S\in \mathbb{S}_\ell: \omega_{S}(h) \geq \beta\}$. Through an application of the Sauer-Shelah result on the density of sets, an upper estimate is obtained on the growth function (or trace) of the class $\{A_{\beta, h}: h\in\mathcal{H}\}$, $\beta>0$, i.e., on the number of possible dichotomies obtained by intersecting all hypersets with a fixed collection of samples $S\in\mathbb{S}_\ell$ of cardinality $m$. The estimate is $2\sum_{i=0}^{2\lfloor B/(2\beta)\rfloor}{m-\ell\choose i}$.
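As an illustrative sketch (not from the paper), the sample width of a piecewise-constant hypothesis can be computed directly from its sign-change points; the hypothesis `h`, the sample `S`, and the change point below are hypothetical examples:

```python
def w_h(x, h, change_points):
    """Signed width w_h(x) = h(x) * max{a >= 0 : h constant on [x-a, x+a]}.
    For a piecewise-constant h this is h(x) times the distance from x
    to the nearest point where h changes sign (illustrative representation)."""
    a = min(abs(x - c) for c in change_points)
    return h(x) * a

def sample_width(S, h, change_points):
    """w_S(h): the minimum |w_h(x)| over the sample S."""
    return min(abs(w_h(x, h, change_points)) for x in S)

# Hypothetical h on X = [0, 10]: -1 on [0, 3), +1 on [3, 10]
h = lambda x: 1 if x >= 3 else -1
S = [1.0, 4.0, 7.5]
print(sample_width(S, h, change_points=[3]))  # distances to 3 are 2.0, 1.0, 4.5 -> 1.0
```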
0801.4807
Automatic Text Area Segmentation in Natural Images
cs.CV
We present a hierarchical method for segmenting text areas in natural images. The method assumes that the text is written with a contrasting color on a more or less uniform background, but no assumption is made regarding the language or character set used to write the text. In particular, the text can contain simple graphics or symbols. The key feature of our approach is that we first concentrate on finding the background of the text, before testing whether there is actually text on the background. Since uniform areas are easy to find in natural images, and since text backgrounds define areas which contain "holes" (where the text is written), we look for uniform areas containing "holes" and label them as text background candidates. Each candidate area is then further tested for the presence of text within its convex hull. We tested our method on a database of 65 images including English and Urdu text. The method correctly segmented all the text areas in 63 of these images, and in only 4 images were non-text areas also segmented.
0802.0003
On mobile sets in the binary hypercube
math.CO cs.IT math.IT
If two distance-3 codes have the same neighborhood, then each of them is called a mobile set. In the (4k+3)-dimensional binary hypercube, there exists a mobile set of cardinality 2*6^k that cannot be split into mobile sets of smaller cardinalities or represented as a natural extension of a mobile set in a hypercube of smaller dimension. Keywords: mobile set; 1-perfect code.
0802.0006
New Perspectives and some Celebrated Quantum Inequalities
math-ph cs.IT math.IT math.MP
Some of the important inequalities associated with quantum entropy are immediate algebraic consequences of the Hansen-Pedersen-Jensen inequality. A general argument is given in terms of the matrix perspective of an operator convex function. A matrix analogue of Mar\'{e}chal's extended perspectives provides additional inequalities, including a $p+q\leq 1$ result of Lieb.
0802.0030
Mission impossible: Computing the network coding capacity region
cs.IT math.IT
One of the main theoretical motivations for the emerging area of network coding is the achievability of the max-flow/min-cut rate for single source multicast. This can exceed the rate achievable with routing alone, and is achievable with linear network codes. The multi-source problem is more complicated. Computation of its capacity region is equivalent to determination of the set of all entropy functions $\Gamma^*$, which is non-polyhedral. The aim of this paper is to demonstrate that this difficulty can arise even in single source problems, in particular for single source networks with hierarchical sink requirements and for single source networks with secrecy constraints. In both cases, we exhibit networks whose capacity regions involve $\Gamma^*$. As in the multi-source case, linear codes are insufficient.
0802.0116
Shallow Models for Non-Iterative Modal Logics
cs.LO cs.AI cs.CC cs.MA
The methods used to establish PSPACE-bounds for modal logics can roughly be grouped into two classes: syntax driven methods establish that exhaustive proof search can be performed in polynomial space, whereas semantic approaches directly construct shallow models. In this paper, we follow the latter approach and establish generic PSPACE-bounds for a large and heterogeneous class of modal logics in a coalgebraic framework. In particular, no complete axiomatisation of the logic under scrutiny is needed. This not only complements our earlier, syntactic, approach conceptually, but also covers a wide variety of new examples which are difficult to harness by purely syntactic means. Apart from re-proving known complexity bounds for a large variety of structurally different logics, we apply our method to obtain previously unknown PSPACE-bounds for Elgesem's logic of agency and for graded modal logic over reflexive frames.
0802.0130
About the true type of smoothers
math.OC cs.IT math.IT
We employ the variational formulation and the Euler-Lagrange equations to study the steady-state error in linear non-causal estimators (smoothers). We give a complete description of the steady-state error for inputs that are polynomial in time. We show that the steady-state error regime in a smoother is similar to that in a filter of double the type. This means that the steady-state error in the optimal smoother is significantly smaller than that in the Kalman filter. The results reveal a significant advantage of smoothing over filtering with respect to robustness to model uncertainty.
0802.0137
Fault-Tolerant Partial Replication in Large-Scale Database Systems
cs.DB
We investigate a decentralised approach to committing transactions in a replicated database, under partial replication. Previous protocols either re-execute transactions entirely and/or compute a total order of transactions. In contrast, ours applies update values, and orders only conflicting transactions. As a result, transactions execute faster, and distributed databases commit in small committees. Both effects help preserve scalability as the number of databases and transactions increases. Our algorithm ensures serializability, and is live and safe in spite of faults.
0802.0179
On the Relation Between the Index Coding and the Network Coding Problems
cs.IT math.IT
In this paper we show that the Index Coding problem captures several important properties of the more general Network Coding problem. An instance of the Index Coding problem includes a server that holds a set of information messages $X=\{x_1,...,x_k\}$ and a set of receivers $R$. Each receiver has some side information, known to the server, represented by a subset of $X$ and demands another subset of $X$. The server uses a noiseless communication channel to broadcast encodings of messages in $X$ to satisfy the receivers' demands. The goal of the server is to find an encoding scheme that requires the minimum number of transmissions. We show that any instance of the Network Coding problem can be efficiently reduced to an instance of the Index Coding problem. Our reduction shows that several important properties of the Network Coding problem carry over to the Index Coding problem. In particular, we prove that both scalar linear and vector linear codes are insufficient for achieving the minimal number of transmissions.
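The classic toy instance of Index Coding illustrates the saving a good encoding achieves: when each receiver holds all messages except the one it demands, a single XOR broadcast satisfies every receiver. The messages and receivers below are illustrative, not taken from the paper:

```python
from functools import reduce

def xor_broadcast(messages):
    """Server's single transmission: the bitwise XOR of all messages."""
    return reduce(lambda a, b: a ^ b, messages)

def decode(broadcast, side_info):
    """A receiver that knows all messages except one recovers the missing
    message by XOR-ing its side information out of the broadcast."""
    return reduce(lambda a, b: a ^ b, side_info, broadcast)

X = [0b1010, 0b0111, 0b1100]          # x1, x2, x3 held by the server
t = xor_broadcast(X)                   # one transmission instead of three
# receiver 1 holds {x2, x3} as side information and demands x1:
assert decode(t, [X[1], X[2]]) == X[0]
```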
0802.0251
Multi-Layer Perceptrons and Symbolic Data
cs.NE
In some real world situations, linear models are not sufficient to represent accurately complex relations between input variables and output variables of a studied system. Multilayer Perceptrons are one of the most successful non-linear regression tool but they are unfortunately restricted to inputs and outputs that belong to a normed vector space. In this chapter, we propose a general recoding method that allows to use symbolic data both as inputs and outputs to Multilayer Perceptrons. The recoding is quite simple to implement and yet provides a flexible framework that allows to deal with almost all practical cases. The proposed method is illustrated on a real world data set.
0802.0252
Accelerating self-organising maps on dissimilarity matrices by branch and bound (Acc\'el\'eration des cartes auto-organisatrices sur tableau de dissimilarit\'es par s\'eparation et \'evaluation)
cs.NE
In this paper, a new implementation of the adaptation of Kohonen self-organising maps (SOM) to dissimilarity matrices is proposed. This implementation relies on the branch and bound principle to reduce the algorithm running time. An important property of this new approach is that the obtained algorithm produces exactly the same results as the standard algorithm.
0802.0287
A data-driven functional projection approach for the selection of feature ranges in spectra with ICA or cluster analysis
cs.NE
Prediction problems from spectra are widely encountered in chemometrics. In addition to accurate predictions, it is often necessary to extract information about which wavelengths in the spectra contribute effectively to the quality of the prediction. This requires selecting wavelengths (or wavelength intervals), a problem associated with variable selection. In this paper, it is shown how this problem may be tackled in the specific case of smooth (for example infrared) spectra. The functional character of the spectra (their smoothness) is taken into account through a functional variable projection procedure. Contrary to standard approaches, the projection is performed on a basis that is driven by the spectra themselves, in order to best fit their characteristics. The methodology is illustrated by two examples of functional projection, using Independent Component Analysis and functional variable clustering, respectively. Performance on two standard infrared spectra benchmarks is reported.
0802.0342
The Case for Structured Random Codes in Network Capacity Theorems
cs.IT math.IT
Random coding arguments are the backbone of most channel capacity achievability proofs. In this paper, we show that in their standard form, such arguments are insufficient for proving some network capacity theorems: structured coding arguments, such as random linear or lattice codes, attain higher rates. Historically, structured codes have been studied as a stepping stone to practical constructions. However, K\"{o}rner and Marton demonstrated their usefulness for capacity theorems through the derivation of the optimal rate region of a distributed functional source coding problem. Here, we use multicasting over finite field and Gaussian multiple-access networks as canonical examples to demonstrate that even if we want to send bits over a network, structured codes succeed where simple random codes fail. Beyond network coding, we also consider distributed computation over noisy channels and a special relay-type problem.
0802.0351
Path Loss Exponent Estimation in a Large Field of Interferers
cs.IT math.IT
In wireless channels, the path loss exponent (PLE) has a strong impact on the quality of links, and hence, it needs to be accurately estimated for the efficient design and operation of wireless networks. In this paper, we address the problem of PLE estimation in large wireless networks, which is relevant to several important issues in networked communications such as localization, energy-efficient routing, and channel access. We consider a large ad hoc network where nodes are distributed as a homogeneous Poisson point process on the plane and the channels are subject to Nakagami-m fading. We propose and discuss three distributed algorithms for estimating the PLE under these settings which explicitly take into account the interference in the network. In addition, we provide simulation results to demonstrate the performance of the algorithms and quantify the estimation errors. We also describe how to estimate the PLE accurately even in networks with spatially varying PLEs and more general node distributions.
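For intuition, the most basic (centralized, interference-free) PLE estimate is the least-squares slope of path loss in dB against 10*log10(distance). This sketch ignores fading and interference, which the paper's distributed algorithms explicitly account for, and uses hypothetical measurements:

```python
import math

def estimate_ple(distances, rx_powers_dbm, tx_power_dbm):
    """Least-squares slope of path loss (dB) versus 10*log10(distance).
    Simplified single-link model: no interference, noise, or fading."""
    xs = [10 * math.log10(d) for d in distances]
    ys = [tx_power_dbm - p for p in rx_powers_dbm]  # path loss in dB
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# synthetic noiseless data with true PLE 3.5: loss(d) = 10 * 3.5 * log10(d)
d = [10, 20, 50, 100]
p = [0 - 35 * math.log10(x) for x in d]  # tx power 0 dBm
print(round(estimate_ple(d, p, 0), 3))   # -> 3.5
```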
0802.0414
The exit problem in optimal non-causal estimation
math.OC cs.IT math.IT
We study the phenomenon of loss of lock in the optimal non-causal phase estimation problem, a benchmark problem in nonlinear estimation. Our method is based on the computation of the asymptotic distribution of the optimal estimation error in case the number of trajectories in the optimization problem is finite. The computation is based directly on the minimum noise energy optimality criterion rather than on state equations of the error, as is the usual case in the literature. The results include an asymptotic computation of the mean time to lose lock (MTLL) in the optimal smoother. We show that the MTLL in the first and second order smoothers is significantly longer than that in the causal extended Kalman filter.
0802.0487
Algorithmically independent sequences
cs.IT cs.SE math.AG math.IT
Two objects are independent if they do not affect each other. Independence is well understood in classical information theory, but less so in algorithmic information theory. Working in the framework of algorithmic information theory, the paper proposes two types of independence for arbitrary infinite binary sequences and studies their properties. Our two proposed notions of independence have some of the intuitive properties that one naturally expects. For example, for every sequence $x$, the set of sequences that are independent (in the weaker of the two senses) with $x$ has measure one. For both notions of independence we investigate to what extent pairs of independent sequences can be effectively constructed via Turing reductions (from one or more input sequences). In this respect, we prove several impossibility results. For example, it is shown that there is no effective way of producing from an arbitrary sequence with positive constructive Hausdorff dimension two sequences that are independent (even in the weaker type of independence) and have super-logarithmic complexity. Finally, a few conjectures and open questions are discussed.
0802.0534
Capacity of Wireless Networks within o(log(SNR)) - the Impact of Relays, Feedback, Cooperation and Full-Duplex Operation
cs.IT math.IT
Recent work has characterized the sum capacity of time-varying/frequency-selective wireless interference networks and $X$ networks within $o(\log({SNR}))$, i.e., with an accuracy approaching 100% at high SNR (signal to noise power ratio). In this paper, we seek similar capacity characterizations for wireless networks with relays, feedback, full duplex operation, and transmitter/receiver cooperation through noisy channels. First, we consider a network with $S$ source nodes, $R$ relay nodes and $D$ destination nodes with random time-varying/frequency-selective channel coefficients and global channel knowledge at all nodes. We allow full-duplex operation at all nodes, as well as causal noise-free feedback of all received signals to all source and relay nodes. The sum capacity of this network is characterized as $\frac{SD}{S+D-1}\log({SNR})+o(\log({SNR}))$. The implication of the result is that the capacity benefits of relays, causal feedback, transmitter/receiver cooperation through physical channels and full duplex operation become a negligible fraction of the network capacity at high SNR. Some exceptions to this result are also pointed out in the paper. Second, we consider a network with $K$ full duplex nodes with an independent message from every node to every other node in the network. We find that the sum capacity of this network is bounded below by $\frac{K(K-1)}{2K-2}\log({SNR})+o(\log({SNR}))$ and bounded above by $\frac{K(K-1)}{2K-3}\log({SNR})+o(\log({SNR}))$.
0802.0554
Message-Passing Decoding of Lattices Using Gaussian Mixtures
cs.IT math.IT
A lattice decoder is given which represents messages explicitly as a mixture of Gaussian functions. In order to prevent the number of functions in a mixture from growing as the decoder iterations progress, a method for replacing N Gaussian functions with M Gaussian functions, with M < N, is given. A squared distance metric is used to select functions for combining. A pair of selected Gaussians is replaced by a single Gaussian with the same first and second moments. The metric can be computed efficiently, and at the same time the proposed algorithm empirically gives good results: for example, a dimension-100 lattice has a loss of 0.2 dB in signal-to-noise ratio at a probability of symbol error of 10^{-5}.
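The moment-matching merge step described above can be sketched in one dimension as follows; the pairing metric shown is a common weight-scaled choice and only a hypothetical stand-in for the paper's squared distance metric:

```python
def merge_gaussians(w1, m1, v1, w2, m2, v2):
    """Replace two weighted 1-D Gaussians (weight, mean, variance) with a
    single Gaussian preserving total weight, mean, and second moment."""
    w = w1 + w2
    m = (w1 * m1 + w2 * m2) / w
    v = (w1 * (v1 + m1 ** 2) + w2 * (v2 + m2 ** 2)) / w - m ** 2
    return w, m, v

def pair_metric(w1, m1, w2, m2):
    """Hypothetical stand-in for the selection metric: weight-scaled
    squared mean difference, cheap to compute for all candidate pairs."""
    return (w1 * w2 / (w1 + w2)) * (m1 - m2) ** 2

print(merge_gaussians(0.5, 0.0, 1.0, 0.5, 2.0, 1.0))  # -> (1.0, 1.0, 2.0)
```

The merged variance exceeds both inputs' variances (2.0 vs. 1.0) because it must absorb the spread between the two means, which is exactly what preserving the second moment requires.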
0802.0580
Rotated and Scaled Alamouti Coding
cs.IT math.IT
Repetition-based retransmission is used in Alamouti modulation [1998] for $2\times 2$ MIMO systems. We propose to use, instead of ordinary repetition, so-called "scaled repetition" together with rotation. It is shown that the rotated and scaled Alamouti code has a hard-decision performance only slightly worse than that of the Golden code [2005], the best known $2\times 2$ space-time code. Decoding the Golden code requires an exhaustive search over all codewords, whereas our rotated and scaled Alamouti code can be decoded with acceptable complexity.
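For context, the orthogonality of the standard Alamouti codeword is what permits low-complexity, symbol-by-symbol decoding in the first place; a quick numerical check with illustrative symbols (not from the paper):

```python
import numpy as np

def alamouti(s1, s2):
    """Standard Alamouti space-time block codeword: rows are transmit
    antennas, columns are time slots."""
    return np.array([[s1, -np.conj(s2)],
                     [s2,  np.conj(s1)]])

X = alamouti(1 + 1j, 2 - 1j)
# Orthogonality: X^H X = (|s1|^2 + |s2|^2) I, so the two symbols decouple
# at the receiver and no exhaustive codeword search is needed.
gram = X.conj().T @ X
print(np.allclose(gram, (abs(1 + 1j) ** 2 + abs(2 - 1j) ** 2) * np.eye(2)))  # True
```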
0802.0738
MIMO Networks: the Effects of Interference
cs.IT math.IT
Multiple-input/multiple-output (MIMO) systems promise enormous capacity increase and are being considered as one of the key technologies for future wireless networks. However, the decrease in capacity due to the presence of interferers in MIMO networks is not well understood. In this paper, we develop an analytical framework to characterize the capacity of MIMO communication systems in the presence of multiple MIMO co-channel interferers and noise. We consider the situation in which transmitters have no information about the channel and all links undergo Rayleigh fading. We first generalize the known determinant representation of hypergeometric functions with matrix arguments to the case when the argument matrices have eigenvalues of arbitrary multiplicity. This enables the derivation of the distribution of the eigenvalues of Gaussian quadratic forms and Wishart matrices with arbitrary correlation, with application to both single user and multiuser MIMO systems. In particular, we derive the ergodic mutual information for MIMO systems in the presence of multiple MIMO interferers. Our analysis is valid for any number of interferers, each with an arbitrary number of antennas and possibly unequal power levels. This framework, therefore, accommodates the study of distributed MIMO systems and accounts for different positions of the MIMO interferers.
0802.0776
Distributed Compression for the Uplink of a Backhaul-Constrained Coordinated Cellular Network
cs.IT math.IT
We consider a backhaul-constrained coordinated cellular network. That is, a single-frequency network with $N+1$ multi-antenna base stations (BSs) that cooperate in order to decode the users' data, and that are linked by means of a common lossless backhaul, of limited capacity $\mathrm{R}$. To implement receive cooperation, we propose distributed compression: $N$ BSs, upon receiving their signals, compress them using a multi-source lossy compression code. Then, they send the compressed vectors to a central BS, which performs users' decoding. Distributed Wyner-Ziv coding is proposed to be used, and is optimally designed in this work. The first part of the paper is devoted to a network with a unique multi-antenna user, that transmits a predefined Gaussian space-time codeword. For such a scenario, the compression codebooks at the BSs are optimized, considering the user's achievable rate as the performance metric. In particular, for $N = 1$ the optimum codebook distribution is derived in closed form, while for $N>1$ an iterative algorithm is devised. The second part of the contribution focusses on the multi-user scenario. For it, the achievable rate region is obtained by means of the optimum compression codebooks for sum-rate and weighted sum-rate, respectively.
0802.0797
Central Limit Theorems for Wavelet Packet Decompositions of Stationary Random Processes
cs.IT math.IT
This paper provides central limit theorems for the wavelet packet decomposition of stationary band-limited random processes. The asymptotic analysis is performed for the sequences of the wavelet packet coefficients returned at the nodes of any given path of the $M$-band wavelet packet decomposition tree. It is shown that if the input process is centred and strictly stationary, these sequences converge in distribution to white Gaussian processes when the resolution level increases, provided that the decomposition filters satisfy a suitable property of regularity. For any given path, the variance of the limit white Gaussian process directly relates to the value of the input process power spectral density at a specific frequency.
0802.0802
On Approximating Frequency Moments of Data Streams with Skewed Projections
cs.DS cs.IT math.IT
We propose skewed stable random projections for approximating the pth frequency moments of dynamic data streams (0<p<=2), a problem frequently studied in the theoretical computer science and database communities. Our method significantly (or even infinitely, as p->1) improves on previous methods based on (symmetric) stable random projections. Our proposed method is applicable to data streams that are (a) insertion only (the cash-register model); or (b) always non-negative (the strict Turnstile model); or (c) eventually non-negative at check points. This is only a minor restriction for practical applications. Our method works particularly well when p = 1+/- \Delta and \Delta is small, which is a practically important scenario; for example, \Delta may be a decay rate or interest rate, which is usually small. Of course, when \Delta = 0, one can compute the 1st frequency moment (i.e., the sum) essentially error-free using a simple counter. Our method may be viewed as a "generalized counter" in that it can count the total value in the future, taking into account the effect of decay or interest accruement. In summary, our contributions are two-fold. (A) This is the first proposal of skewed stable random projections. (B) Based on first principles, we develop various statistical estimators for skewed stable distributions, including their variances and error (tail) probability bounds, and consequently the sample complexity bounds.
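As background, the symmetric-projection baseline that skewed projections improve on is easy to sketch for p = 2, where the 2-stable distribution is the Gaussian; the stream below is illustrative, and each update touches only the k sketch counters:

```python
import random

def f2_estimate(stream, k=1000, seed=0):
    """Estimate the 2nd frequency moment F2 = sum_i f_i^2 using k symmetric
    (Gaussian, i.e., 2-stable) random projections of the frequency vector.
    Handles signed (turnstile-style) updates (item, delta)."""
    rng = random.Random(seed)
    proj = {}                       # item -> k i.i.d. N(0, 1) coefficients
    sketch = [0.0] * k
    for item, delta in stream:
        if item not in proj:
            proj[item] = [rng.gauss(0, 1) for _ in range(k)]
        for j, g in enumerate(proj[item]):
            sketch[j] += delta * g
    # E[(g . f)^2] = ||f||_2^2, so average the squared projections
    return sum(y * y for y in sketch) / k

stream = [("a", 3), ("b", -1), ("a", 2), ("c", 4), ("b", 3)]  # f = (5, 2, 4)
true_f2 = 5**2 + 2**2 + 4**2                                   # 45
print(f2_estimate(stream))
```

With k = 1000 projections the estimator's relative standard deviation is about sqrt(2/k), roughly 4.5%, so the printed estimate lands near the true F2 = 45.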
0802.0808
Turbo Interleaving inside the cdma2000 and W-CDMA Mobile Communication Systems: A Tutorial
cs.IT math.IT
In this paper a discussion of the detailed operation of the interleavers used by the turbo codes defined on the telecommunications standards cdma2000 (3GPP2 C.S0024-B V2.0) and W-CDMA (3GPP TS 25.212 V7.4.0) is presented. Differences in the approach used by each turbo interleaver as well as dispersion analysis and frequency analysis are also discussed. Two examples are presented to illustrate the complete interleaving process defined by each standard. These two interleaving approaches are also representative for other communications standards.
0802.0823
Doubly-Generalized LDPC Codes: Stability Bound over the BEC
cs.IT math.IT
The iterative decoding threshold of low-density parity-check (LDPC) codes over the binary erasure channel (BEC) fulfills an upper bound depending only on the variable and check nodes with minimum distance 2. This bound is a consequence of the stability condition, and is here referred to as stability bound. In this paper, a stability bound over the BEC is developed for doubly-generalized LDPC codes, where the variable and the check nodes can be generic linear block codes, assuming maximum a posteriori erasure correction at each node. It is proved that in this generalized context as well the bound depends only on the variable and check component codes with minimum distance 2. A condition is also developed, namely the derivative matching condition, under which the bound is achieved with equality.
0802.0835
Bit-Optimal Lempel-Ziv compression
cs.DS cs.IT math.IT
One of the most famous and most investigated lossless data-compression schemes is the one introduced by Lempel and Ziv about 40 years ago. This scheme, known as "dictionary-based compression", squeezes an input string by replacing some of its substrings with (shorter) codewords which are actually pointers to a dictionary of phrases built as the string is processed. Surprisingly enough, although many fundamental results are nowadays known about upper bounds on the speed and effectiveness of this compression process, "we are not aware of any parsing scheme that achieves optimality when the LZ77-dictionary is in use under any constraint on the codewords other than being of equal length" [N. Rajpoot and C. Sahinalp. Handbook of Lossless Data Compression, chapter Dictionary-based data compression. Academic Press, 2002, p. 159]. Here optimality means achieving the minimum number of bits in compressing each individual input string, without any assumption on its generating source. In this paper we provide the first LZ-based compressor that computes the bit-optimal parsing of any input string in efficient time and optimal space, for a general class of variable-length codeword encodings which encompasses most of those typically used in data compression and in the design of search engines and compressed indexes.
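The bit-optimal parsing problem can be phrased as a shortest path over string positions. The toy sketch below uses a static phrase dictionary with given codeword lengths rather than a true LZ77 sliding window (an assumption for illustration), but the dynamic program has the same shape:

```python
def bit_optimal_parse(s, cost):
    """Minimum-bit parsing of s, where cost maps each allowed phrase to its
    codeword length in bits. Shortest-path DP over string positions."""
    n = len(s)
    INF = float("inf")
    best = [0] + [INF] * n          # best[i] = fewest bits to encode s[:i]
    choice = [None] * (n + 1)       # phrase ending the best parse of s[:i]
    for i in range(n):
        if best[i] == INF:
            continue
        for phrase, bits in cost.items():
            j = i + len(phrase)
            if s.startswith(phrase, i) and best[i] + bits < best[j]:
                best[j] = best[i] + bits
                choice[j] = phrase
    if best[n] == INF:
        return INF, []              # s cannot be parsed with this dictionary
    parse, i = [], n                # backtrack the optimal parse
    while i > 0:
        parse.append(choice[i])
        i -= len(choice[i])
    return best[n], parse[::-1]

cost = {"a": 3, "b": 3, "ab": 4, "aba": 9}
print(bit_optimal_parse("abab", cost))  # -> (8, ['ab', 'ab'])
```

Note that greedy longest-match parsing would pick "aba" + "b" for 12 bits, while the DP's 8-bit parse shows why an optimality-aware parser is needed.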
0802.0861
Using Bayesian Blocks to Partition Self-Organizing Maps
cs.NE
Self-organizing maps (SOMs) are widely used for unsupervised classification. For this application, they must be combined with a partitioning scheme that can identify boundaries between distinct regions in the maps they produce. We discuss a novel partitioning scheme for SOMs based on the Bayesian Blocks segmentation algorithm of Scargle [1998]. This algorithm minimizes a cost function to identify contiguous regions over which the values of the attributes can be represented as approximately constant. Because this cost function is well-defined and largely independent of assumptions regarding the number and structure of clusters in the original sample space, this partitioning scheme offers significant advantages over many conventional methods. Sample code is available.
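The cost-minimizing segmentation idea can be sketched with the same dynamic program as Bayesian Blocks, here using a plain least-squares block cost plus a per-block penalty in place of Scargle's fitness function (an assumption for illustration):

```python
def segment(data, penalty=1.0):
    """Optimal partition of a 1-D sequence into contiguous constant blocks,
    minimising per-block squared error plus a fixed penalty per block."""
    n = len(data)
    s = [0.0] * (n + 1)             # prefix sums for O(1) block SSE
    s2 = [0.0] * (n + 1)
    for i, x in enumerate(data):
        s[i + 1] = s[i] + x
        s2[i + 1] = s2[i] + x * x

    def sse(i, j):                  # squared error of data[i:j] about its mean
        m = (s[j] - s[i]) / (j - i)
        return s2[j] - s2[i] - (j - i) * m * m

    best = [0.0] + [float("inf")] * n
    last = [0] * (n + 1)            # start index of the final block
    for j in range(1, n + 1):
        for i in range(j):
            c = best[i] + sse(i, j) + penalty
            if c < best[j]:
                best[j], last[j] = c, i
    blocks, j = [], n               # backtrack block boundaries
    while j > 0:
        blocks.append((last[j], j))
        j = last[j]
    return blocks[::-1]

print(segment([0, 0, 0, 5, 5, 5]))  # -> [(0, 3), (3, 6)]
```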
0802.0914
Shrinkage Effect in Ancestral Maximum Likelihood
q-bio.PE cs.CE math.PR math.ST stat.TH
Ancestral maximum likelihood (AML) is a method that simultaneously reconstructs a phylogenetic tree and ancestral sequences from extant data (sequences at the leaves). The tree and ancestral sequences maximize the probability of observing the given data under a Markov model of sequence evolution, in which branch lengths are also optimized but constrained to take the same value on any edge across all sequence sites. AML differs from the more usual form of maximum likelihood (ML) in phylogenetics because ML averages over all possible ancestral sequences. ML has long been known to be statistically consistent -- that is, it converges on the correct tree with probability approaching 1 as the sequence length grows. However, the statistical consistency of AML has not been formally determined, despite informal remarks in a literature that dates back 20 years. In this short note we prove a general result that implies that AML is statistically inconsistent. In particular we show that AML can `shrink' short edges in a tree, resulting in a tree that has no internal resolution as the sequence length grows. Our results apply to any number of taxa.
0802.1002
New Estimation Procedures for PLS Path Modelling
cs.LG
Given R groups of numerical variables X1, ... XR, we assume that each group is the result of one underlying latent variable, and that all latent variables are bound together through a linear equation system. Moreover, we assume that some explanatory latent variables may interact pairwise in one or more equations. We basically consider PLS Path Modelling's algorithm to estimate both latent variables and the model's coefficients. New "external" estimation schemes are proposed that draw latent variables towards strong group structures in a more flexible way. New "internal" estimation schemes are proposed to enable PLSPM to make good use of variable group complementarity and to deal with interactions. Application examples are given.
0802.1220
Complexity of Decoding Positive-Rate Reed-Solomon Codes
cs.IT math.IT
The complexity of maximum likelihood decoding of the Reed-Solomon codes $[q-1, k]_q$ is a well known open problem. The only known result in this direction states that it is at least as hard as the discrete logarithm in some cases where the information rate unfortunately goes to zero. In this paper, we remove the rate restriction and prove that the same complexity result holds for any positive information rate. In particular, this resolves an open problem left in [4], and rules out the possibility of a polynomial time algorithm for the maximum likelihood decoding problem of Reed-Solomon codes of any rate under a well known cryptographic hardness assumption. As a side result, we give an explicit construction of Hamming balls of radius bounded away from the minimum distance, which contain exponentially many codewords for Reed-Solomon codes of any positive rate less than one. The previous constructions only apply to Reed-Solomon codes of diminishing rates. We also give an explicit construction of Hamming balls of relative radius less than 1 which contain subexponentially many codewords for Reed-Solomon codes of rate approaching one.
0802.1244
Learning Balanced Mixtures of Discrete Distributions with Small Sample
cs.LG stat.ML
We study the problem of partitioning a small sample of $n$ individuals from a mixture of $k$ product distributions over a Boolean cube $\{0, 1\}^K$ according to their distributions. Each distribution is described by a vector of allele frequencies in $\R^K$. Given two distributions, we use $\gamma$ to denote the average $\ell_2^2$ distance in frequencies across $K$ dimensions, which measures the statistical divergence between them. We study the case assuming that bits are independently distributed across $K$ dimensions. This work demonstrates that, for a balanced input instance for $k = 2$, a certain graph-based optimization function returns the correct partition with high probability, where a weighted graph $G$ is formed over $n$ individuals, whose pairwise Hamming distances between their corresponding bit vectors define the edge weights, so long as $K = \Omega(\ln n/\gamma)$ and $Kn = \tilde\Omega(\ln n/\gamma^2)$. The function computes a maximum-weight balanced cut of $G$, where the weight of a cut is the sum of the weights across all edges in the cut. This result demonstrates a nice property of the high-dimensional feature space: one can trade off the number of features required against the size of the sample to accomplish certain tasks like clustering.
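A brute-force version of the graph-based optimization is easy to state for tiny inputs; the bit vectors below are hypothetical "allele" vectors from two well-separated populations, not data from the paper:

```python
from itertools import combinations

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def max_weight_balanced_cut(vectors):
    """Brute-force maximum-weight balanced cut of the complete graph whose
    edge weights are pairwise Hamming distances (feasible only for tiny n)."""
    n = len(vectors)
    idx = range(n)
    best, best_side = -1, None
    for side in combinations(idx, n // 2):
        if 0 not in side:           # skip mirror-image cuts
            continue
        other = [i for i in idx if i not in side]
        w = sum(hamming(vectors[i], vectors[j]) for i in side for j in other)
        if w > best:
            best, best_side = w, set(side)
    return best_side

# two hypothetical populations with far-apart bit vectors
vecs = ["000000", "000001", "111111", "111110"]
print(max_weight_balanced_cut(vecs))  # -> {0, 1}
```

The cross-population distances (5 or 6) dominate the within-population distances (1), so the heaviest balanced cut separates the two populations exactly.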
0802.1258
Bayesian Nonlinear Principal Component Analysis Using Random Fields
cs.CV cs.LG
We propose a novel model for nonlinear dimension reduction motivated by the probabilistic formulation of principal component analysis. Nonlinearity is achieved by specifying different transformation matrices at different locations of the latent space and smoothing the transformation using a Markov random field type prior. The computation is made feasible by the recent advances in sampling from von Mises-Fisher distributions.
0802.1296
On quantum statistics in data analysis
cs.IR math.CT quant-ph
Originally, quantum probability theory was developed to analyze statistical phenomena in quantum systems, where classical probability theory does not apply, because the lattice of measurable sets is not necessarily distributive. On the other hand, it is well known that the lattices of concepts, that arise in data analysis, are in general also non-distributive, albeit for completely different reasons. In his recent book, van Rijsbergen argues that many of the logical tools developed for quantum systems are also suitable for applications in information retrieval. I explore the mathematical support for this idea on an abstract vector space model, covering several forms of data analysis (information retrieval, data mining, collaborative filtering, formal concept analysis...), and roughly based on an idea from categorical quantum mechanics. It turns out that quantum (i.e., noncommutative) probability distributions arise already in this rudimentary mathematical framework. We show that a Bell-type inequality must be satisfied by the standard similarity measures, if they are used for preference predictions. The fact that already a very general, abstract version of the vector space model yields simple counterexamples for such inequalities seems to be an indicator of a genuine need for quantum statistics in data analysis.
0802.1306
Network as a computer: ranking paths to find flows
cs.IR cs.AI math.CT
We explore a simple mathematical model of network computation, based on Markov chains. Similar models apply to a broad range of computational phenomena, arising in networks of computers, as well as in genetic and neural nets, in social networks, and so on. The main problem of interaction with such spontaneously evolving computational systems is that the data are not uniformly structured. An interesting approach is to try to extract the semantic content of the data from their distribution among the nodes. A concept is then identified by finding the community of nodes that share it. The task of data structuring is thus reduced to the task of finding the network communities, as groups of nodes that together perform some non-local data processing. Towards this goal, we extend the ranking methods from nodes to paths. This allows us to extract some information about the likely flow biases from the available static information about the network.
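One minimal way to see how node ranking extends toward paths, sketched here under the assumption that the network is modelled as an irreducible Markov chain, is to score each edge by the stationary probability flow through it; high-flow edges chain into the likely paths. The transition matrix is invented for illustration.

```python
import numpy as np

def stationary(P, iters=1000):
    # Power iteration toward the stationary distribution of a row-stochastic P.
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(iters):
        pi = pi @ P
    return pi

def edge_flows(P):
    # Rank edges by the stationary flow pi_i * P[i, j]; ranking paths then
    # amounts to chaining high-flow edges.
    pi = stationary(P)
    return pi[:, None] * P

P = np.array([[0.1, 0.9, 0.0],     # a small 3-node chain with a cyclic bias
              [0.0, 0.1, 0.9],
              [0.9, 0.1, 0.0]])
F = edge_flows(P)                  # total flow over all edges sums to 1
```

Here the dominant flow is along the cycle 0 -> 1 -> 2 -> 0, with the single largest flow on the edge (1, 2).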
0802.1327
Exchange of Limits: Why Iterative Decoding Works
cs.IT math.IT
We consider communication over binary-input memoryless output-symmetric channels using low-density parity-check codes and message-passing decoding. The asymptotic (in the length) performance of such a combination for a fixed number of iterations is given by density evolution. Letting the number of iterations tend to infinity we get the density evolution threshold, the largest channel parameter so that the bit error probability tends to zero as a function of the iterations. In practice we often work with short codes and perform a large number of iterations. It is therefore interesting to consider what happens if in the standard analysis we exchange the order in which the blocklength and the number of iterations diverge to infinity. In particular, we can ask whether both limits give the same threshold. Although empirical observations strongly suggest that the exchange of limits is valid for all channel parameters, we limit our discussion to channel parameters below the density evolution threshold. Specifically, we show that under some suitable technical conditions the bit error probability vanishes below the density evolution threshold regardless of how the limit is taken.
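For the binary erasure channel, the density evolution recursion mentioned in this abstract is a one-line fixed-point iteration; the sketch below, for a regular (3,6) LDPC ensemble (whose BEC threshold is approximately 0.4294), shows the threshold behaviour that the exchange-of-limits question concerns.

```python
def density_evolution_bec(eps, dv=3, dc=6, iters=2000):
    # Erasure probability of a variable-to-check message after `iters`
    # iterations, for a regular (dv, dc) LDPC ensemble on a BEC(eps).
    x = eps
    for _ in range(iters):
        x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
    return x

below = density_evolution_bec(0.30)   # below the (3,6) threshold: driven to 0
above = density_evolution_bec(0.50)   # above it: stuck at a nonzero fixed point
```

Below the threshold the recursion converges to zero (quadratically once small); above it, a nonzero fixed point survives no matter how many iterations are run.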
0802.1369
Interior-Point Algorithms for Linear-Programming Decoding
cs.IT math.IT
Interior-point algorithms constitute a very interesting class of algorithms for solving linear-programming problems. In this paper we study efficient implementations of such algorithms for solving the linear program that appears in the linear-programming decoder formulation.
0802.1372
An integral formula for large random rectangular matrices and its application to analysis of linear vector channels
cs.IT cond-mat.dis-nn math.IT
A statistical mechanical framework for analyzing random linear vector channels is presented in a large system limit. The framework is based on the assumptions that the left and right singular value bases of the rectangular channel matrix $\bH$ are generated independently from uniform distributions over Haar measures and the eigenvalues of $\bH^{\rm T}\bH$ asymptotically follow a certain specific distribution. These assumptions make it possible to characterize the communication performance of the channel utilizing an integral formula with respect to $\bH$, which is analogous to the one introduced by Marinari {\em et al.} in {\em J. Phys. A} {\bf 27}, 7647 (1994) for large random square (symmetric) matrices. A computationally feasible algorithm for approximately decoding received signals based on the integral formula is also provided.
0802.1380
New Bounds for the Capacity Region of the Finite-State Multiple Access Channel
cs.IT math.IT
The capacity region of the Finite-State Multiple Access Channel (FS-MAC) with feedback that may be an arbitrary time-invariant function of the channel output samples is considered. We provide a sequence of inner and outer bounds for this region. These bounds are shown to coincide, and hence to yield the capacity region, for FS-MACs where the state process is stationary and ergodic and not affected by the inputs, and for indecomposable FS-MACs when feedback is not allowed. Though the capacity region is `multi-letter' in general, our results yield explicit conclusions when applied to specific scenarios of interest.
0802.1383
On Directed Information and Gambling
cs.IT math.IT
We study the problem of gambling in horse races with causal side information and show that Massey's directed information characterizes the increment in the maximum achievable capital growth rate due to the availability of side information. This result gives a natural interpretation of directed information $I(Y^n \to X^n)$ as the amount of information that $Y^n$ \emph{causally} provides about $X^n$. Extensions to stock market portfolio strategies and data compression with causal side information are also discussed.
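In the memoryless case, the directed information in this abstract collapses to ordinary mutual information per race. A tiny sketch (with an invented joint pmf) of the resulting doubling-rate increment for a fair-odds two-horse race:

```python
import math

def entropy(p):
    return -sum(q * math.log2(q) for q in p if q > 0)

def growth_increment(joint):
    # Increase in the optimal doubling rate from causally observing Y before
    # betting on X, for a memoryless race with fair odds: equals I(X; Y),
    # the single-letter form of the directed information I(Y^n -> X^n)/n.
    px = [sum(row) for row in joint]
    py = [sum(col) for col in zip(*joint)]
    h_x_given_y = 0.0
    for j, pyj in enumerate(py):
        h_x_given_y += pyj * entropy([row[j] / pyj for row in joint])
    return entropy(px) - h_x_given_y

joint = [[0.4, 0.1],   # hypothetical joint pmf of winner X (rows) and tip Y (cols)
         [0.1, 0.4]]
delta_w = growth_increment(joint)   # bits per race gained from the side info
```

For this pmf the tip is right 80% of the time, so the side information is worth $1 - H(0.8) \approx 0.278$ bits of capital growth per race.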
0802.1393
Les Agents comme des interpr\'eteurs Scheme : Sp\'ecification dynamique par la communication
cs.MA cs.AI
We proposed in previous papers an extension and an implementation of the STROBE model, which regards Agents as Scheme interpreters. These Agents are able to interpret messages in a dedicated environment including an interpreter that learns from the current conversation, thereby representing the Agent's evolving meta-level knowledge. When the Agent's interpreter is nondeterministic, the dialogues may consist of subsequent refinements of specifications in the form of constraint sets. The paper presents a worked-out example of dynamic service generation - such as is necessary on Grids - by exploiting STROBE Agents equipped with a nondeterministic interpreter, and shows how dynamic specification of a problem can be enabled. It then illustrates how these principles could be effective for other applications. Details of the implementation are not provided here, but are available.
0802.1412
Extreme Learning Machine for land cover classification
cs.NE cs.CV
This paper explores the potential of the extreme learning machine based supervised classification algorithm for land cover classification. In comparison to a backpropagation neural network, which requires the setting of several user-defined parameters and may produce local minima, an extreme learning machine requires the setting of one parameter and produces a unique solution. An ETM+ multispectral data set (England) was used to judge the suitability of the extreme learning machine for remote sensing classifications. A backpropagation neural network was used to compare its performance in terms of classification accuracy and computational cost. Results suggest that the extreme learning machine performs as well as the backpropagation neural network in terms of classification accuracy with this data set. The computational cost of the extreme learning machine is very small in comparison to that of the backpropagation neural network.
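The extreme learning machine itself is compact enough to sketch: the hidden layer is random and never trained, and only the output weights are fit by least squares. The two-cluster data below are synthetic, not the ETM+ data used in the paper.

```python
import numpy as np

def elm_train(X, y, hidden=50, seed=0):
    # Extreme learning machine: random hidden weights, least-squares output.
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], hidden))
    b = rng.normal(size=hidden)
    H = np.tanh(X @ W + b)
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1.5, 1.0, size=(50, 2)),   # class -1
               rng.normal(+1.5, 1.0, size=(50, 2))])  # class +1
y = np.array([-1.0] * 50 + [1.0] * 50)
model = elm_train(X, y)
acc = np.mean(np.sign(elm_predict(model, X)) == y)
```

The single least-squares solve is why the training cost is so small compared with iterative backpropagation.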
0802.1430
A New Approach to Collaborative Filtering: Operator Estimation with Spectral Regularization
cs.LG
We present a general approach for collaborative filtering (CF) using spectral regularization to learn linear operators from "users" to the "objects" they rate. Recent low-rank type matrix completion approaches to CF are shown to be special cases. However, unlike existing regularization based CF methods, our approach can also incorporate additional information such as attributes of the users or the objects. We then provide novel representer theorems that we use to develop new estimation methods. We provide learning algorithms based on low-rank decompositions, and test them on a standard CF dataset. The experiments indicate the advantages of generalizing the existing regularization based CF methods to incorporate related information about users and objects. Finally, we show that certain multi-task learning methods can be also seen as special cases of our proposed approach.
0802.1555
Constructing Linear Codes with Good Joint Spectra
cs.IT math.IT
The problem of finding good linear codes for joint source-channel coding (JSCC) is investigated in this paper. By the code-spectrum approach, it has been proved in the authors' previous paper that a good linear code for the authors' JSCC scheme is a code with a good joint spectrum, so the main task in this paper is to construct linear codes with good joint spectra. First, the code-spectrum approach is developed further to facilitate the calculation of spectra. Second, some general principles for constructing good linear codes are presented. Finally, we propose an explicit construction of linear codes with good joint spectra based on low density parity check (LDPC) codes and low density generator matrix (LDGM) codes.
0802.1567
Universal Coding for Lossless and Lossy Complementary Delivery Problems
cs.IT math.IT
This paper deals with a coding problem called complementary delivery, where messages from two correlated sources are jointly encoded and each decoder reproduces one of two messages using the other message as the side information. Both lossless and lossy universal complementary delivery coding schemes are investigated. In the lossless case, it is demonstrated that a universal complementary delivery code can be constructed by only combining two Slepian-Wolf codes. In particular, it is shown that a universal lossless complementary delivery code, for which error probability is exponentially tight, can be constructed from two linear Slepian-Wolf codes. In the lossy case, a universal complementary delivery coding scheme based on Wyner-Ziv codes is proposed. While the proposed scheme cannot attain the optimal rate-distortion trade-off in general, the rate-loss is upper bounded by a universal constant under some mild conditions. The proposed schemes allow us to apply any Slepian-Wolf and Wyner-Ziv codes to complementary delivery coding.
0802.1604
On the Complexity of Nash Equilibria of Action-Graph Games
cs.GT cs.MA
We consider the problem of computing Nash Equilibria of action-graph games (AGGs). AGGs, introduced by Bhat and Leyton-Brown, are a succinct representation of games that encapsulates both "local" dependencies as in graphical games, and partial indifference to other agents' identities as in anonymous games, which occur in many natural settings. This is achieved by specifying a graph on the set of actions, so that the payoff of an agent for selecting a strategy depends only on the number of agents playing each of the neighboring strategies in the action graph. We present a Polynomial Time Approximation Scheme for computing mixed Nash equilibria of AGGs with constant treewidth and a constant number of agent types (and an arbitrary number of strategies), together with hardness results for the cases when either the treewidth or the number of agent types is unconstrained. In particular, we show that even if the action graph is a tree, but the number of agent-types is unconstrained, it is NP-complete to decide the existence of a pure-strategy Nash equilibrium and PPAD-complete to compute a mixed Nash equilibrium (even an approximate one); similarly for symmetric AGGs (all agents belong to a single type), if we allow arbitrary treewidth. These hardness results suggest that, in some sense, our PTAS is as strong a positive result as one can expect.
0802.1738
Characterising through Erasing: A Theoretical Framework for Representing Documents Inspired by Quantum Theory
cs.IR quant-ph
The problem of representing text documents within an Information Retrieval system is formulated as an analogy to the problem of representing the quantum states of a physical system. Lexical measurements of text are proposed as a way of representing documents which are akin to physical measurements on quantum states. Consequently, the representation of the text is only known after measurements have been made, and because the process of measuring may destroy parts of the text, the document is characterised through erasure. The mathematical foundations of such a quantum representation of text are provided in this position paper as a starting point for indexing and retrieval within a ``quantum like'' Information Retrieval system.
0802.1754
ARQ for Network Coding
cs.IT cs.NI math.IT
A new coding and queue management algorithm is proposed for communication networks that employ linear network coding. The algorithm has the feature that the encoding process is truly online, as opposed to a block-by-block approach. The setup assumes a packet erasure broadcast channel with stochastic arrivals and full feedback, but the proposed scheme is potentially applicable to more general lossy networks with link-by-link feedback. The algorithm guarantees that the physical queue size at the sender tracks the backlog in degrees of freedom (also called the virtual queue size). The new notion of a node "seeing" a packet is introduced. In terms of this idea, our algorithm may be viewed as a natural extension of ARQ schemes to coded networks. Our approach, known as the drop-when-seen algorithm, is compared with a baseline queuing approach called drop-when-decoded. It is shown that the expected queue size for our approach is $O(\frac1{1-\rho})$ as opposed to $\Omega(\frac1{(1-\rho)^2})$ for the baseline approach, where $\rho$ is the load factor.
0802.1785
Near ML detection using Dijkstra's algorithm with bounded list size over MIMO channels
cs.IT math.IT
We propose Dijkstra's algorithm with a bounded list size, applied after QR decomposition, to decrease the computational complexity of near maximum-likelihood (ML) detection of signals over multiple-input multiple-output (MIMO) channels. We then compare the performance of the proposed algorithm with that of the QR decomposition M-algorithm (QRD-MLD) and its improvement. When the list size is set to achieve almost the same symbol error rate (SER) as the QRD-MLD, the proposed algorithm has a smaller average computational complexity.
0802.1815
A Construction for Constant-Composition Codes
cs.IT math.IT
By employing residue polynomials, a construction of constant-composition codes is given. This construction generalizes the one proposed by Xing [16]. It turns out that when d=3 this construction gives a lower bound for constant-composition codes improving the one in [10]. Moreover, for d>3, we give a lower bound on the maximal size of constant-composition codes. In particular, our bound for d=5 gives the best possible size of constant-composition codes up to order of magnitude.
0802.1888
Multi-hop Cooperative Wireless Networks: Diversity Multiplexing Tradeoff and Optimal Code Design
cs.IT math.IT
We consider single-source single-sink (ss-ss) multi-hop networks, with slow-fading links and single-antenna half-duplex relays. We identify two families of networks that are multi-hop generalizations of the well-studied two-hop network: K-Parallel-Path (KPP) networks and layered networks. KPP networks can be viewed as the union of K node-disjoint parallel relaying paths, each of length greater than one. KPP networks are then generalized to KPP(I) networks, which permit interference between paths and to KPP(D) networks, which possess a direct link from source to sink. We characterize the DMT of these families of networks completely for K > 3. Layered networks are networks comprising relaying layers with edges existing only within the same layer or between adjacent layers. We prove that a linear DMT between the maximum diversity d_{max} and the maximum multiplexing gain of 1 is achievable for fully-connected layered networks. This is shown to be equal to the optimal DMT if the number of layers is less than 4. For multi-antenna KPP and layered networks, we provide an achievable DMT region. For arbitrary ss-ss single-antenna directed-acyclic full-duplex networks, we prove that a linear tradeoff between maximum diversity and maximum multiplexing gain is achievable. All protocols in this paper are explicit and use only amplify and forward (AF) relaying. We also construct codes with short block-lengths based on cyclic division algebras that achieve the optimal DMT for all the proposed schemes. Two key implications of the results in the paper are that the half-duplex constraint does not entail any rate loss for a large class of networks and that simple AF protocols are often sufficient to attain the optimal DMT.
0802.1893
Diversity and Degrees of Freedom of Cooperative Wireless Networks
cs.IT math.IT
Wireless fading networks with multiple antennas are typically studied information-theoretically from two different perspectives - the outage characterization and the ergodic capacity characterization. A key parameter in the outage characterization of a network is the diversity, whereas a first-order indicator for the ergodic capacity is the degrees of freedom (DOF), which is the pre-log coefficient in the capacity expression. In this paper, we present max-flow min-cut type theorems for computing both the diversity and the degrees of freedom of arbitrary single-source single-sink multi-antenna networks. We also show that an amplify-and-forward protocol is sufficient to achieve this. The degrees of freedom characterization is obtained using a conversion to a deterministic wireless network for which the capacity was recently found. We show that the diversity result easily extends to multi-source multi-sink networks and evaluate the DOF for multi-casting in single-source multi-sink networks.
0802.2001
Exploiting problem structure in a genetic algorithm approach to a nurse rostering problem
cs.NE cs.CE
There is considerable interest in the use of genetic algorithms to solve problems arising in the areas of scheduling and timetabling. However, the classical genetic algorithm paradigm is not well equipped to handle the conflict between objectives and constraints that typically occurs in such problems. In order to overcome this, successful implementations frequently make use of problem-specific knowledge. This paper is concerned with the development of a GA for a nurse rostering problem at a major UK hospital. The structure of the constraints is used as the basis for a co-evolutionary strategy using co-operating sub-populations. Problem-specific knowledge is also used to define a system of incentives and disincentives, and a complementary mutation operator. Empirical results based on 52 weeks of live data show how these features are able to improve an unsuccessful canonical GA to the point where it is able to provide a practical solution to the problem.
0802.2013
Throughput-Delay Trade-off for Hierarchical Cooperation in Ad Hoc Wireless Networks
cs.IT math.IT
Hierarchical cooperation has recently been shown to achieve better throughput scaling than classical multihop schemes under certain assumptions on the channel model in static wireless networks. However, the end-to-end delay of this scheme turns out to be significantly larger than those of multihop schemes. A modification of the scheme is proposed here that achieves a throughput-delay trade-off $D(n)=(\log n)^2 T(n)$ for T(n) between $\Theta(\sqrt{n}/\log n)$ and $\Theta(n/\log n)$, where D(n) and T(n) are respectively the average delay per bit and the aggregate throughput in a network of n nodes. This trade-off complements the previous results of El Gamal et al., which show that the throughput-delay trade-off for multihop schemes is given by D(n)=T(n) where T(n) lies between $\Theta(1)$ and $\Theta(\sqrt{n})$. Meanwhile, the present paper considers the network multiple-access problem, which may be of interest in its own right.
0802.2015
Combining Expert Advice Efficiently
cs.LG cs.DS cs.IT math.IT
We show how models for prediction with expert advice can be defined concisely and clearly using hidden Markov models (HMMs); standard HMM algorithms can then be used to efficiently calculate, among other things, how the expert predictions should be weighted according to the model. We cast many existing models as HMMs and recover the best known running times in each case. We also describe two new models: the switch distribution, which was recently developed to improve Bayesian/Minimum Description Length model selection, and a new generalisation of the fixed share algorithm based on run-length coding. We give loss bounds for all models and shed new light on their relationships.
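The HMM view can be made concrete for one of the models named here: the fixed-share update is exactly an HMM forward recursion, with a loss (emission) step followed by a switching (transition) step in which the best expert changes with probability alpha. The loss sequence below is fabricated for illustration.

```python
import math

def fixed_share(losses, alpha=0.1, eta=1.0):
    # Forward recursion of the fixed-share HMM: the hidden state is the
    # currently-best expert, which switches with probability alpha per round.
    n = len(losses[0])
    w = [1.0 / n] * n
    for round_losses in losses:
        v = [wi * math.exp(-eta * li) for wi, li in zip(w, round_losses)]
        z = sum(v)
        v = [vi / z for vi in v]                                 # emission (loss) update
        w = [(1 - alpha) * vi + alpha * (1 - vi) / (n - 1)       # transition (share) update
             for vi in v]
    return w

# expert 0 is best for 10 rounds, then expert 1 takes over
losses = [[0.0, 1.0]] * 10 + [[1.0, 0.0]] * 10
w = fixed_share(losses)
```

Because the transition step keeps a floor of weight on every expert, the recursion tracks the switch and ends up favouring expert 1; a static Bayesian mixture would recover far more slowly.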
0802.2045
Blocking Sets in the complement of hyperplane arrangements in projective space
cs.IT math.IT
It is well known that minimal blocking sets have been studied by several authors. Another theory studied by a large number of researchers is that of hyperplane arrangements. We can remark that the affine space $AG(n,q)$ is the complement of the hyperplane at infinity in $PG(n,q)$; hence $AG(n,q)$ can be regarded as the complement of a hyperplane arrangement in $PG(n,q)$. Therefore the study of blocking sets in the affine space $AG(n,q)$ is simply the study of blocking sets in the complement of a finite arrangement in $PG(n,q)$. In this paper the author generalizes this remark by studying the problem of the existence of blocking sets in the complement of a given hyperplane arrangement in $PG(n,q)$. As an example, she solves the problem for the braid arrangement. Moreover, she poses significant questions on this new and interesting problem.
0802.2125
Multiple Access Outerbounds and the Inseparability of Parallel Interference Channels
cs.IT math.IT
It is known that the capacity of parallel (multi-carrier) Gaussian point-to-point, multiple access and broadcast channels can be achieved by separate encoding for each subchannel (carrier) subject to a power allocation across carriers. In this paper we show that such a separation does not apply to parallel Gaussian interference channels in general. A counter-example is provided in the form of a 3 user interference channel where separate encoding can only achieve a sum capacity of $\log({SNR})+o(\log({SNR}))$ per carrier while the actual capacity, achieved only by joint-encoding across carriers, is $3/2\log({SNR})+o(\log({SNR}))$ per carrier. As a byproduct of our analysis, we propose a class of multiple-access outer bounds on the capacity of the 3 user interference channel.
0802.2127
New Implementation Framework for Saturation-Based Reasoning
cs.AI cs.LO
The saturation-based reasoning methods are among the most theoretically developed ones and are used by most of the state-of-the-art first-order logic reasoners. In the last decade there was a sharp increase in performance of such systems, which I attribute to the use of advanced calculi and the intensified research in implementation techniques. However, nowadays we are witnessing a slowdown in performance progress, which may be considered as a sign that the saturation-based technology is reaching its inherent limits. The position I am trying to put forward in this paper is that such scepticism is premature and a sharp improvement in performance may potentially be reached by adopting new architectural principles for saturation. The top-level algorithms and corresponding designs used in the state-of-the-art saturation-based theorem provers have (at least) two inherent drawbacks: the insufficient flexibility of the used inference selection mechanisms and the lack of means for intelligent prioritising of search directions. In this position paper I analyse these drawbacks and present two ideas on how they could be overcome. In particular, I propose a flexible low-cost high-precision mechanism for inference selection, intended to overcome problems associated with the currently used instances of clause selection-based procedures. I also outline a method for intelligent prioritising of search directions, based on probing the search space by exploring generalised search directions. I discuss some technical issues related to implementation of the proposed architectural principles and outline possible solutions.
0802.2138
Support Vector classifiers for Land Cover Classification
cs.NE cs.CV
Support vector machines represent a promising development in machine learning research that is not widely used within the remote sensing community. This paper reports the results of experiments on multispectral (Landsat-7 ETM+) and hyperspectral (DAIS) data in which multi-class SVMs are compared with maximum likelihood and artificial neural network methods in terms of classification accuracy. Our results show that the SVM achieves a higher level of classification accuracy than either the maximum likelihood or the neural classifier, and that the support vector machine can be used with small training datasets and high-dimensional data.
0802.2158
A Radar-Shaped Statistic for Testing and Visualizing Uniformity Properties in Computer Experiments
cs.LG math.ST stat.TH
In the study of computer codes, filling space as uniformly as possible is important to describe the complexity of the investigated phenomenon. However, this property is not conserved by reducing the dimension. Some numerical experiment designs, such as Latin hypercubes or orthogonal arrays, are conceived with this in mind, but they consider only the projections onto the axes or the coordinate planes. In this article we introduce a statistic which allows the distribution of points to be studied with respect to all 1-dimensional projections. By angularly scanning the domain, we obtain a radar-type representation, allowing the uniformity defects of a design to be identified with respect to its projections onto straight lines. The advantages of this new tool are demonstrated on usual examples of space-filling designs (SFD), and a global statistic independent of the angle of rotation is studied.
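The idea of angularly scanning 1-D projections can be sketched in a few lines; the discrepancy measure below is a simple KS-type statistic of my own choosing, not necessarily the one defined in the paper. The diagonal design is the classic failure case: uniform on both axes, degenerate on the anti-diagonal.

```python
import math

def projection_ks(points, theta):
    # KS-type discrepancy of the design projected onto direction theta,
    # after rescaling the projections to [0, 1].
    c, s = math.cos(theta), math.sin(theta)
    proj = sorted(c * x + s * y for x, y in points)
    lo, hi = proj[0], proj[-1]
    if hi - lo < 1e-9:
        return 1.0            # all points collapse: worst possible direction
    n = len(proj)
    return max(abs((i + 1) / n - (p - lo) / (hi - lo))
               for i, p in enumerate(proj))

def radar_scan(points, n_angles=180):
    # Scan directions over [0, pi); peaks flag angles with poor 1-D uniformity.
    return [projection_ks(points, math.pi * k / n_angles) for k in range(n_angles)]

# a diagonal design: perfectly uniform on each axis, but every point projects
# to the same value in the anti-diagonal direction (k = 135, i.e. 3*pi/4)
diag = [(i / 7, i / 7) for i in range(8)]
scores = radar_scan(diag)
```

Plotting `scores` against the angle gives the radar-type picture: one sharp spike at 135 degrees exposes the defect that axis-only projection criteria would miss.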
0802.2159
A Distributed Merge and Split Algorithm for Fair Cooperation in Wireless Networks
cs.IT cs.GT math.IT
This paper introduces a novel concept from coalitional game theory which allows the dynamic formation of coalitions among wireless nodes. A simple and distributed merge and split algorithm for coalition formation is constructed. This algorithm is applied to study the gains resulting from the cooperation among single antenna transmitters for virtual MIMO formation. The aim is to find an ultimate transmitter coalition structure that allows cooperating users to maximize their utilities while accounting for the cost of coalition formation. Through this novel game theoretical framework, the wireless network transmitters are able to self-organize and form a structured network composed of disjoint stable coalitions. Simulation results show that the proposed algorithm can improve the average individual user utility by 26.4% as well as cope with the mobility of the distributed users.
0802.2234
Textual Fingerprinting with Texts from Parkin, Bassewitz, and Leander
cs.CL cs.CR
Current research in author profiling to discover a legal author's fingerprint no longer relies on statistical parameters alone, but increasingly includes dynamic methods that can learn and adapt to the specific behavior of an author. But the question of how to appropriately represent a text is still one of the fundamental tasks, and the problem of which attributes should be used to fingerprint the author's style is still not exactly defined. In this work, we focus on a linguistic selection of attributes to fingerprint the style of the authors Parkin, Bassewitz and Leander. We use texts of the genre Fairy Tale, as it has a clear style and shorter texts with a straightforward story-line and simple language.
0802.2305
Compressed Counting
cs.IT cs.CC cs.DM cs.DS cs.LG math.IT
Counting is among the most fundamental operations in computing. For example, counting the pth frequency moment has been a very active area of research, in theoretical computer science, databases, and data mining. When p=1, the task (i.e., counting the sum) can be accomplished using a simple counter. Compressed Counting (CC) is proposed for efficiently computing the pth frequency moment of a data stream signal A_t, where 0<p<=2. CC is applicable if the streaming data follow the Turnstile model, with the restriction that at the time t for the evaluation, A_t[i]>= 0, which includes the strict Turnstile model as a special case. For natural data streams encountered in practice, this restriction is minor. The underlying technique of CC is what we call skewed stable random projections, which captures the intuition that, when p=1, a simple counter suffices, and when $p = 1\pm\Delta$ with small $\Delta$, the sample complexity of a counter system should be low (continuously as a function of $\Delta$). We show that at small $\Delta$ the sample complexity (number of projections) is $k = O(1/\epsilon)$ instead of $O(1/\epsilon^2)$. Compressed Counting can serve as a basic building block for other tasks in statistics and computing, for example, estimating the entropies of data streams and parameter estimation using the method of moments or maximum likelihood. Finally, another contribution is an algorithm for approximating the logarithmic norm, \sum_{i=1}^D\log A_t[i], and the logarithmic distance. The logarithmic distance is useful in machine learning practice with heavy-tailed data.
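The paper's skewed stable projections are not reproduced here, but the symmetric special case at p=1 can be sketched: project the stream vector with i.i.d. Cauchy (1-stable) coefficients and take a median estimator. All constants and the toy stream below are illustrative only.

```python
import math
import random
import statistics

def f1_estimate(x, k=1000, seed=2):
    # Symmetric 1-stable (Cauchy) random projections: each y_j = sum_i c_ij * x_i
    # is distributed as F1 * Cauchy, and the median of |Cauchy| is 1, so the
    # sample median of |y_j| estimates F1 = sum(x) for nonnegative data.
    # The paper's *skewed* projections reduce the needed k when p is near 1.
    rng = random.Random(seed)
    ys = []
    for _ in range(k):
        # Cauchy sample via tan(pi * (U - 1/2))
        y = sum(xi * math.tan(math.pi * (rng.random() - 0.5)) for xi in x)
        ys.append(abs(y))
    return statistics.median(ys)

stream_counts = [5.0] * 20          # hypothetical nonnegative stream, F1 = 100
est = f1_estimate(stream_counts)    # should land near 100
```

The median is used instead of the mean because stable projections have no finite variance; the relative error of the median shrinks like $O(1/\sqrt{k})$.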
0802.2345
On the Frame Error Rate of Transmission Schemes on Quasi-Static Fading Channels
cs.IT math.IT
It is known that the frame error rate of turbo codes on quasi-static fading channels can be accurately approximated using the convergence threshold of the corresponding iterative decoder. This paper considers quasi-static fading channels and demonstrates that non-iterative schemes can also be characterized by a similar threshold based on which their frame error rate can be readily estimated. In particular, we show that this threshold is a function of the probability of successful frame detection in additive white Gaussian noise, normalized by the squared instantaneous signal-to-noise ratio. We apply our approach to uncoded binary phase shift keying, convolutional coding and turbo coding and demonstrate that the approximated frame error rate is within 0.4 dB of the simulation results. Finally, we introduce performance evaluation plots to explore the impact of the frame size on the performance of the schemes under investigation.
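The threshold idea in this abstract can be illustrated in its simplest outage form, under the extra assumption of Rayleigh fading (so the instantaneous SNR is exponentially distributed); the frame is treated as lost exactly when its SNR falls below the scheme's threshold. The numbers are illustrative, not taken from the paper.

```python
import math
import random

def fer_threshold_approx(snr_avg, snr_th):
    # Outage-style FER approximation on a quasi-static Rayleigh channel:
    # P(instantaneous SNR < threshold) = 1 - exp(-snr_th / snr_avg).
    return 1.0 - math.exp(-snr_th / snr_avg)

def fer_monte_carlo(snr_avg, snr_th, frames=200_000, seed=3):
    # Draw per-frame SNRs (exponential with mean snr_avg) and count outages.
    rng = random.Random(seed)
    fails = sum(rng.expovariate(1.0 / snr_avg) < snr_th for _ in range(frames))
    return fails / frames

approx = fer_threshold_approx(10.0, 2.0)   # 1 - exp(-0.2)
mc = fer_monte_carlo(10.0, 2.0)
```

The closed form and the simulation agree to within Monte-Carlo noise, which is the sense in which a single threshold characterizes the frame error rate of a scheme.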
0802.2349
Algebraic geometry codes from higher dimensional varieties
cs.IT math.IT
This paper is a general survey of literature on Goppa-type codes from higher dimensional algebraic varieties. The construction and several techniques for estimating the minimum distance are described first. Codes from various classes of varieties, including Hermitian hypersurfaces, Grassmannians, flag varieties, ruled surfaces over curves, and Deligne-Lusztig varieties are considered. Connections with the theories of toric codes and order domains are also briefly indicated.
0802.2360
On Maximizing Coverage in Gaussian Relay Networks
cs.IT math.IT
Results for Gaussian relay channels typically focus on maximizing transmission rates for given locations of the source, relay and destination. We introduce an alternative perspective, where the objective is maximizing coverage for a given rate. The new objective captures the problem of how to deploy relays to provide a given level of service to a particular geographic area, where the relay locations become a design parameter that can be optimized. We evaluate the decode and forward (DF) and compress and forward (CF) strategies for the relay channel with respect to the new objective of maximizing coverage. When the objective is maximizing rate, different locations of the destination favor different strategies. When the objective is coverage for a given rate, and the relay is able to decode, DF is uniformly superior in that it provides coverage at any point served by CF. When the channel model is modified to include random fading, we show that the monotone ordering of coverage regions is not always maintained. While the coverage provided by DF is sensitive to changes in the location of the relay and the path loss exponent, CF exhibits a more graceful degradation with respect to such changes. The techniques used to approximate coverage regions are new and may be of independent interest.