0806.4648
An Algebraic Approach for the MIMO Control of Small Scale Helicopter
cs.RO
The control of a small-scale helicopter is a MIMO problem. To formally solve a MIMO problem with the classical control approach, one needs a multidimensional Root Locus (RL) diagram to tune the control parameters. The dimensionality the RL diagram would require for MIMO design has forced the classical design procedure to be conducted as a cascade of multi-loop SISO designs, starting from the innermost loop and working outward. To implement this control approach on a helicopter, pitch and roll attitude control loops are often subordinated to longitudinal and lateral velocity control loops, respectively, in a nested architecture. For this technique to work, the inner attitude control loop must have a higher bandwidth than the outer velocity control loop, which is not the case for high-performance mini helicopters. To address these problems, an algebraic design approach is proposed in this work. The control designed using the s-CDM approach is demonstrated for hovering control of a small-scale helicopter simultaneously subjected to plant parameter uncertainties and wind disturbances.
0806.4650
Structural Damage Detection Using Randomized Trained Neural Networks
cs.NE
A computational method for damage detection problems in structures was developed using neural networks. The problem considered in this work consists of estimating the existence, location, and extent of stiffness reduction in a structure, which is indicated by changes in structural static parameters such as deflection and strain. The neural network was trained to recognize the behaviour of the static parameters of the undamaged structure, as well as of the structure with various possible damage extents and locations, which were modelled as random states. The proposed technique was applied to detect damage in a simply supported beam. The structure was analyzed using the finite element method (FEM), and damage identification was conducted by a back-propagation neural network using the changes in structural strain and displacement. The results showed that, with the proposed method, strain is more efficient than displacement for damage identification.
0806.4652
A Fixed-Parameter Algorithm for Random Instances of Weighted d-CNF Satisfiability
cs.DS cs.AI cs.CC
We study random instances of the weighted $d$-CNF satisfiability problem (WEIGHTED $d$-SAT), a generic W[1]-complete problem. A random instance of the problem consists of a fixed parameter $k$ and a random $d$-CNF formula $\weicnf{n}{p}{k, d}$ generated as follows: for each subset of $d$ variables and with probability $p$, a clause over the $d$ variables is selected uniformly at random from among the $2^d - 1$ clauses that contain at least one negated literal. We show that random instances of WEIGHTED $d$-SAT can be solved in $O(k^2n + n^{O(1)})$-time with high probability, indicating that typical instances of WEIGHTED $d$-SAT under this instance distribution are fixed-parameter tractable. The results also hold for random instances from the model $\weicnf{n}{p}{k,d}(d')$, where clauses containing fewer than $d'$ ($1 < d' < d$) negated literals are forbidden, and for random instances of the renormalized (miniaturized) version of WEIGHTED $d$-SAT in a certain range of the random model's parameter $p(n)$. This, together with our previous results on the threshold behavior and the resolution complexity of unsatisfiable instances of $\weicnf{n}{p}{k, d}$, provides an almost complete characterization of the typical-case behavior of random instances of WEIGHTED $d$-SAT.
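The random model described above is simple to sample from. The sketch below is a minimal illustration, assuming the convention that a clause over a $d$-set of variables is a sign pattern on those variables, drawn uniformly from the $2^d - 1$ patterns with at least one negation; all function and parameter names are my own, not the paper's.

```python
import random
from itertools import combinations

def random_weighted_dsat(n, p, d, seed=0):
    """Sample a formula from the model above: for each set of d variables,
    with probability p add one clause whose literal signs are drawn
    uniformly from the 2**d - 1 patterns with at least one negated literal.
    Variables are 1..n; a negative integer denotes a negated literal."""
    rng = random.Random(seed)
    formula = []
    for vars_ in combinations(range(1, n + 1), d):
        if rng.random() < p:
            # rejection-sample a sign pattern until it contains a negation;
            # this is uniform over the 2**d - 1 admissible patterns
            while True:
                signs = [rng.choice((1, -1)) for _ in vars_]
                if -1 in signs:
                    break
            formula.append(tuple(s * v for s, v in zip(signs, vars_)))
    return formula

clauses = random_weighted_dsat(n=6, p=0.5, d=3)
assert len(clauses) > 0
assert all(len(c) == 3 for c in clauses)
assert all(any(lit < 0 for lit in c) for c in clauses)
```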
0806.4667
Overlaid Cellular and Mobile Ad Hoc Networks
cs.IT math.IT
In cellular systems using frequency division duplex, growing Internet services cause unbalance of uplink and downlink traffic, resulting in poor uplink spectrum utilization. Addressing this issue, this paper considers overlaying an ad hoc network onto a cellular uplink network for improving spectrum utilization and spatial reuse efficiency. Transmission capacities of the overlaid networks are analyzed, which are defined as the maximum densities of the ad hoc nodes and mobile users under an outage constraint. Using tools from stochastic geometry, the capacity tradeoff curves for the overlaid networks are shown to be linear. Deploying overlaid networks based on frequency separation is proved to achieve higher network capacities than that based on spatial separation. Furthermore, spatial diversity is shown to enhance network capacities.
0806.4686
Sparse Online Learning via Truncated Gradient
cs.LG cs.AI
We propose a general method called truncated gradient to induce sparsity in the weights of online learning algorithms with convex loss functions. This method has several essential properties: The degree of sparsity is continuous -- a parameter controls the rate of sparsification from no sparsification to total sparsification. The approach is theoretically motivated, and an instance of it can be regarded as an online counterpart of the popular $L_1$-regularization method in the batch setting. We prove that small rates of sparsification result in only small additional regret with respect to typical online learning guarantees. The approach works well empirically. We apply the approach to several datasets and find that for datasets with large numbers of features, substantial sparsity is discoverable.
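The truncation idea above can be sketched in a few lines. This is a minimal illustration with made-up hyper-parameter names (`eta`, `g`, `theta`, `K`) and a toy least-squares objective; it shows the shrink-toward-zero-but-never-past-zero operator, not the paper's exact algorithm or analysis.

```python
def truncate(w, alpha, theta):
    """Shrink a coordinate near zero toward zero by alpha, never past zero;
    coordinates larger than theta in magnitude are left untouched."""
    if 0 <= w <= theta:
        return max(0.0, w - alpha)
    if -theta <= w < 0:
        return min(0.0, w + alpha)
    return w

def sgd_truncated_gradient(data, eta=0.1, g=0.05, theta=1.0, K=1, epochs=20):
    """Online least-squares with truncated-gradient sparsification.
    data: list of (x, y) pairs with x a list of features."""
    dim = len(data[0][0])
    w = [0.0] * dim
    t = 0
    for _ in range(epochs):
        for x, y in data:
            t += 1
            pred = sum(wi * xi for wi, xi in zip(w, x))
            grad = [2.0 * (pred - y) * xi for xi in x]
            w = [wi - eta * gi for wi, gi in zip(w, grad)]
            if t % K == 0:  # every K steps, apply the shrinkage operator
                w = [truncate(wi, eta * g * K, theta) for wi in w]
    return w

# y depends only on the first feature; the second weight is driven to (near) zero
data = [([1.0, 0.0], 1.0), ([0.0, 1.0], 0.0), ([1.0, 1.0], 1.0)]
w = sgd_truncated_gradient(data)
assert w[0] > 0.5 and abs(w[1]) < 0.1
```

The parameter `g` controls the rate of sparsification: `g = 0` recovers plain online gradient descent, while larger `g` pushes more weights to exactly zero.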
0806.4703
Challenging More Updates: Towards Anonymous Re-publication of Fully Dynamic Datasets
cs.DB
Most existing anonymization work has been done on static datasets, which have no updates and need only one-time publication. Recent studies consider anonymizing dynamic datasets with external updates: the datasets are updated with record insertions and/or deletions. This paper addresses a new problem: anonymous re-publication of datasets with internal updates, where the attribute values of each record are dynamically updated. This is an important and challenging problem, because attribute values of records are frequently updated in practice and existing methods are unable to deal with such a situation. We initiate a formal study of anonymous re-publication of dynamic datasets with internal updates, and show the invalidation of existing methods. We introduce a theoretical definition and analysis of dynamic datasets, and present a general privacy disclosure framework that is applicable to all anonymous re-publication problems. We propose a new counterfeited generalization principle called m-Distinct to effectively anonymize datasets with both external updates and internal updates. We also develop an algorithm to generalize datasets to meet m-Distinct. The experiments conducted on real-world data demonstrate the effectiveness of the proposed solution.
0806.4722
Malleable Coding: Compressed Palimpsests
cs.IT math.IT
A malleable coding scheme considers not only compression efficiency but also the ease of alteration, thus encouraging some form of recycling of an old compressed version in the formation of a new one. Malleability cost is the difficulty of synchronizing compressed versions, and malleable codes are of particular interest when representing information and modifying the representation are both expensive. We examine the trade-off between compression efficiency and malleability cost under a malleability metric defined with respect to a string edit distance. This problem introduces a metric topology to the compressed domain. We characterize the achievable rates and malleability as the solution of a subgraph isomorphism problem. This can be used to argue that allowing conditional entropy of the edited message given the original message to grow linearly with block length creates an exponential increase in code length.
0806.4737
On the Multiplexing Gain of K-user Partially Connected Interference Channel
cs.IT math.IT
The multiplexing gain (MUXG) of the $K$-user interference channel (IC) with partially connected interfering links is analyzed. The motivation for the partially connected IC comes from the fact that not all interferences are equally strong in practice. The MUXG is characterized as a function of the number of users ($K$) and the number of interfering links ($N \geq 1$). Our analysis is mainly based on the interference alignment (IA) technique to mitigate interference. Our main results are as follows: One may expect that a higher MUXG can be attained when some of the interfering links do not exist. However, when $N$ is odd and $K=N+2$, the MUXG is not increased beyond the optimal MUXG of the fully connected IC, which is $\frac{KM}{2}$. The number of interfering links has no influence on the achievable MUXG using IA, but it affects the efficiency in terms of the number of required channel realizations: When $N=1$ or $2$, the optimal MUXG of the fully connected IC is achievable with a finite number of channel realizations. In the case of $N \geq 3$, however, the MUXG of $\frac{KM}{2}$ can only be achieved asymptotically as the number of channel realizations tends to infinity.
0806.4749
Nested Ordered Sets and their Use for Data Modelling
cs.DB
In this paper we present a new approach to data modelling, called the concept-oriented model (CoM), and describe its main features and characteristics including data semantics and operations. The distinguishing feature of this model is that it is based on the formalism of nested ordered sets where any element participates in two structures simultaneously: hierarchical (nested) and multi-dimensional (ordered). An element of the model is postulated to consist of two parts, called identity and entity, and the whole approach can be naturally broken into two branches: identity modelling and entity modelling. We also propose a new query language with the main construct, called concept, defined as a pair of two classes: identity class and entity class. We describe how its operations of projection, de-projection and product can be used to solve typical data modelling tasks.
0806.4773
Signal Codes
cs.IT math.IT
Motivated by signal processing, we present a new class of channel codes, called signal codes, for continuous-alphabet channels. Signal codes are lattice codes whose encoding is done by convolving an integer information sequence with a fixed filter pattern. Decoding is based on the bidirectional sequential stack decoder, which can be implemented efficiently using the heap data structure. Error analysis and simulation results indicate that signal codes can achieve low error rate at approximately 1dB from channel capacity.
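The encoding step described above is just a convolution of an integer sequence with a fixed filter. A minimal sketch, with illustrative tap values not taken from the paper (the sequential stack decoder is omitted):

```python
def signal_encode(ints, taps):
    """Encode an integer information sequence by convolving it with a fixed
    filter pattern (the lattice-code encoding step described above)."""
    out = []
    for n in range(len(ints) + len(taps) - 1):
        acc = 0.0
        for k, h in enumerate(taps):
            if 0 <= n - k < len(ints):
                acc += h * ints[n - k]
        out.append(acc)
    return out

# full convolution of [1, -2, 3] with the filter [1, 0.5]
x = signal_encode([1, -2, 3], [1.0, 0.5])
assert x == [1.0, -1.5, 2.0, 1.5]
```

Because the code is the lattice generated by the filter, nearby integer sequences map to nearby channel sequences, which is what makes sequential decoding effective.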
0806.4787
Locality and Bounding-Box Quality of Two-Dimensional Space-Filling Curves
cs.CG cs.DB
Space-filling curves can be used to organise points in the plane into bounding-box hierarchies (such as R-trees). We develop measures of the bounding-box quality of space-filling curves that express how effective different space-filling curves are for this purpose. We give general lower bounds on the bounding-box quality measures and on locality according to Gotsman and Lindenbaum for a large class of space-filling curves. We describe a generic algorithm to approximate these and similar quality measures for any given curve. Using our algorithm we find good approximations of the locality and the bounding-box quality of several known and new space-filling curves. Surprisingly, some curves with relatively bad locality by Gotsman and Lindenbaum's measure have good bounding-box quality, while the curve with the best-known locality has relatively bad bounding-box quality.
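The R-tree construction alluded to above is easy to sketch: sort the points by their position along a space-filling curve, then group consecutive runs into leaf bounding boxes. The sketch below uses the Hilbert curve (one of the curves commonly studied in this setting) in its standard bit-manipulation formulation; the leaf-filling scheme is a toy illustration, not the paper's method.

```python
def hilbert_d(n, x, y):
    """Index of (x, y) along the Hilbert curve over an n-by-n grid
    (standard bit-twiddling formulation; n must be a power of two)."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:  # rotate the quadrant so the recursion lines up
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        s //= 2
    return d

def bounding_boxes(points, n, leaf_size):
    """Toy R-tree leaf fill: sort points along the curve and bound each
    run of leaf_size consecutive points by its axis-aligned box."""
    pts = sorted(points, key=lambda p: hilbert_d(n, *p))
    boxes = []
    for i in range(0, len(pts), leaf_size):
        run = pts[i:i + leaf_size]
        xs = [p[0] for p in run]
        ys = [p[1] for p in run]
        boxes.append((min(xs), min(ys), max(xs), max(ys)))
    return boxes

# the order-1 Hilbert curve visits the 2x2 grid in a U shape
assert [hilbert_d(2, x, y) for x, y in [(0, 0), (0, 1), (1, 1), (1, 0)]] == [0, 1, 2, 3]
```

A curve with good bounding-box quality, in the paper's sense, keeps the boxes produced this way small relative to the number of points they cover.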
0806.4802
A new Hedging algorithm and its application to inferring latent random variables
cs.GT cs.AI
We present a new online learning algorithm for cumulative discounted gain. This learning algorithm does not use exponential weights on the experts. Instead, it uses a weighting scheme that depends on the regret of the master algorithm relative to the experts. In particular, experts whose discounted cumulative gain is smaller (worse) than that of the master algorithm receive zero weight. We also sketch how a regret-based algorithm can be used as an alternative to Bayesian averaging in the context of inferring latent random variables.
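The zero-weight property described above can be sketched with a regret-matching-style master. The specific weighting below (weights proportional to the positive part of the discounted regret) is an illustrative choice consistent with the description, not necessarily the paper's exact scheme; all names are my own.

```python
def regret_weighted_master(gain_rounds, alpha=0.9):
    """Online master over experts: weights are proportional to the positive
    part of each expert's discounted regret, so experts whose discounted
    cumulative gain trails the master's receive zero weight."""
    n = len(gain_rounds[0])
    regret = [0.0] * n
    master_total = 0.0
    probs = [1.0 / n] * n
    for gains in gain_rounds:
        pos = [max(r, 0.0) for r in regret]
        z = sum(pos)
        probs = [p / z for p in pos] if z > 0 else [1.0 / n] * n
        master = sum(p * g for p, g in zip(probs, gains))
        master_total += master
        # discounted regret update: old regret decays by alpha each round
        regret = [alpha * r + (g - master) for r, g in zip(regret, gains)]
    return probs, master_total

# expert 0 always gains 1, expert 1 always 0: all weight moves to expert 0
probs, total = regret_weighted_master([[1.0, 0.0]] * 5)
assert probs == [1.0, 0.0]
```

Note the contrast with exponential-weights Hedge: here an expert that has never outperformed the master is ignored entirely rather than merely down-weighted.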
0806.4874
Myopic Coding in Multiterminal Networks
cs.IT math.IT
This paper investigates the interplay between cooperation and achievable rates in multi-terminal networks. Cooperation refers to the process of nodes working together to relay data toward the destination. There is an inherent tradeoff between achievable information transmission rates and the level of cooperation, which is determined by how many nodes are involved and how the nodes encode/decode the data. We illustrate this trade-off by studying information-theoretic decode-forward based coding strategies for data transmission in multi-terminal networks. Decode-forward strategies are usually discussed in the context of omniscient coding, in which all nodes in the network fully cooperate with each other, both in encoding and decoding. In this paper, we investigate myopic coding, in which each node cooperates with only a few neighboring nodes. We show that achievable rates of myopic decode-forward can be as large as that of omniscient decode-forward in the low SNR regime. We also show that when each node has only a few cooperating neighbors, adding one node into the cooperation increases the transmission rate significantly. Furthermore, we show that myopic decode-forward can achieve non-zero rates as the network size grows without bound.
0806.4899
A Dynamic Programming Approach To Length-Limited Huffman Coding
cs.DS cs.IT math.IT
The ``state-of-the-art'' in length-limited Huffman coding algorithms is the $\Theta(ND)$-time, $\Theta(N)$-space algorithm of Hirschberg and Larmore, where $D\le N$ is the length restriction on the code. This is a very clever but very problem-specific technique. In this note we show that there is a simple dynamic-programming (DP) method that solves the problem with the same time and space bounds. The fact that there was a $\Theta(ND)$-time DP algorithm was previously known; it is a straightforward DP with the Monge property (which permits an order-of-magnitude speedup). It was not interesting, though, because it also required $\Theta(ND)$ space. The main result of this paper is the technique developed for reducing the space. It is quite simple and applicable to many other problems modeled by DPs with the Monge property. We illustrate this with examples from web-proxy design and wireless mobile paging.
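For context, the problem itself is easy to state and solve for small inputs. The sketch below is the classic package-merge construction of optimal length-limited code lengths (the Larmore-Hirschberg line of work the note builds on), not the paper's Monge-property DP:

```python
def length_limited_code_lengths(freqs, L):
    """Optimal prefix-code lengths with maximum length L, via package-merge.
    Each 'coin' is (weight, tuple of symbol indices it covers); a symbol's
    code length is how often it appears among the 2n-2 cheapest coins."""
    n = len(freqs)
    assert n <= 2 ** L  # otherwise no code of max length L exists
    singles = sorted(((f, (i,)) for i, f in enumerate(freqs)),
                     key=lambda c: c[0])
    prev = []
    for _ in range(L):
        # package: pair up adjacent coins from the previous level
        packages = [(a[0] + b[0], a[1] + b[1])
                    for a, b in zip(prev[::2], prev[1::2])]
        prev = sorted(singles + packages, key=lambda c: c[0])
    lengths = [0] * n
    for _, symbols in prev[:2 * n - 2]:
        for s in symbols:
            lengths[s] += 1
    return lengths

lengths = length_limited_code_lengths([1, 1, 2, 3, 5], 3)
assert lengths == [3, 3, 2, 2, 2]          # unrestricted Huffman would use length 4
assert sum(2.0 ** -l for l in lengths) <= 1.0  # Kraft inequality holds
```

This runs in $O(NL \log N)$ as written; the point of the paper is matching the $\Theta(ND)$ time while using only $\Theta(N)$ space with a generic DP technique.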
0806.4920
Design and Evaluation of XQuery in an "All-XML" Mediation Architecture
cs.DB
XML has emerged as the leading language for representing and exchanging data, not only on the Web but also in the enterprise in general. XQuery is emerging as the standard query language for XML. Thus, tools are required to mediate between XML queries and heterogeneous data sources in order to integrate data in XML. This paper presents the XMedia mediator, a unique tool for integrating and querying disparate heterogeneous information as unified XML views. It describes the mediator architecture and focuses on the distinctive distributed query processing technology implemented in this component. Query evaluation is based on an original XML algebra that simply extends classical operators to process tuples of tree elements. Further, we present a set of performance evaluations on a relational benchmark, which leads to a discussion of possible performance enhancements.
0806.4921
Vague Interpretation of Structural Constraints for IR in Corpora of XML Documents - Evaluation of an Approximate Structured IR Method
cs.IR
We propose specific data structures designed for the indexing and retrieval of information elements in heterogeneous XML databases. The indexing scheme is well suited to the management of various contextual searches, expressed either at a structural level or at an information-content level. The approximate search mechanisms are based on a modified Levenshtein editing distance and information-fusion heuristics. The implementation described highlights the mixing of structured information presented as field/value instances and free-text elements. The retrieval performance of the proposed approach is evaluated within the INEX 2005 evaluation campaign. The evaluation results rank the proposed approach among the best evaluated XML IR systems for the VVCAS task.
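The baseline the approximate matching builds on is the standard Levenshtein distance. A minimal DP sketch with unit costs follows; the paper's structure-aware cost modifications are not reproduced here.

```python
def levenshtein(a, b):
    """Classic edit distance between two sequences via the row-by-row DP.
    The structured-IR variant would replace the unit costs below with
    structure-aware ones."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

assert levenshtein("kitten", "sitting") == 3
assert levenshtein("abc", "abc") == 0
```

The same DP applies unchanged to sequences of path components or element labels, which is how an edit distance can be lifted from strings to structural constraints.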
0806.4958
Deterministic Designs with Deterministic Guarantees: Toeplitz Compressed Sensing Matrices, Sequence Designs and System Identification
cs.IT math.IT
In this paper we present a new family of discrete sequences having "random-like" uniformly decaying auto-correlation properties. The new class of infinite-length sequences consists of higher-order chirps constructed using irrational numbers. Exploiting results from the theory of continued fractions and Diophantine approximations, we show that the class of sequences so formed has the property that the worst-case auto-correlation coefficients for every finite-length sequence decay at a polynomial rate. These sequences display Doppler immunity as well. We also show that Toeplitz matrices formed from such sequences satisfy the restricted isometry property (RIP), a concept that has recently played a central role in compressed sensing applications. Compressed sensing has conventionally dealt with sensing matrices with arbitrary components. Nevertheless, such arbitrary sensing matrices are not appropriate for linear system identification, where one must employ Toeplitz-structured sensing matrices. Linear system identification plays a central role in a wide variety of applications, such as channel estimation for multipath wireless systems as well as control system applications. Toeplitz matrices are also desirable on account of their filtering structure, which allows for fast implementation together with reduced storage requirements.
0806.4979
Bounds on Codes Based on Graph Theory
cs.IT math.IT
Let $A_q(n,d)$ be the maximum order (maximum number of codewords) of a $q$-ary code of length $n$ and Hamming distance at least $d$, and let $A(n,d,w)$ be that of a binary code of constant weight $w$. Building on results from algebraic graph theory and Erd\H{o}s-Ko-Rado-like theorems in extremal combinatorics, we show how several known bounds on $A_q(n,d)$ and $A(n,d,w)$ can be easily obtained in a single framework. For instance, both the Hamming and Singleton bounds can be derived as an application of a property relating the clique number and the independence number of vertex-transitive graphs. Using the same techniques, we also derive some new bounds and present some additional applications.
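For tiny parameters, $A_q(n,d)$ can be computed by brute force and checked against the Singleton bound $A_q(n,d) \le q^{n-d+1}$. The sketch below is a toy illustration of the quantities involved, not the graph-theoretic machinery of the paper:

```python
from itertools import combinations, product

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def max_code_size(q, n, d):
    """Brute-force A_q(n, d): the largest set of q-ary words of length n
    that are pairwise at Hamming distance >= d. Feasible only for tiny
    parameters; the Singleton bound caps the search from above."""
    words = list(product(range(q), repeat=n))
    singleton = q ** (n - d + 1)
    for k in range(singleton, 1, -1):
        for code in combinations(words, k):
            if all(hamming(u, v) >= d for u, v in combinations(code, 2)):
                return k
    return 1

a = max_code_size(2, 4, 3)
assert a == 2                    # A_2(4, 3) = 2
assert a <= 2 ** (4 - 3 + 1)     # Singleton bound
```

In the paper's framework, $A_q(n,d)$ is the independence number of the graph on all $q^n$ words with edges between words at distance below $d$; that graph is vertex-transitive, which is what makes the clique/independence-number relation applicable.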
0807.0023
Automatic Metadata Generation using Associative Networks
cs.IR cs.DL
In spite of its tremendous value, metadata is generally sparse and incomplete, thereby hampering the effectiveness of digital information services. Many of the existing mechanisms for the automated creation of metadata rely primarily on content analysis which can be costly and inefficient. The automatic metadata generation system proposed in this article leverages resource relationships generated from existing metadata as a medium for propagation from metadata-rich to metadata-poor resources. Because of its independence from content analysis, it can be applied to a wide variety of resource media types and is shown to be computationally inexpensive. The proposed method operates through two distinct phases. Occurrence and co-occurrence algorithms first generate an associative network of repository resources leveraging existing repository metadata. Second, using the associative network as a substrate, metadata associated with metadata-rich resources is propagated to metadata-poor resources by means of a discrete-form spreading activation algorithm. This article discusses the general framework for building associative networks, an algorithm for disseminating metadata through such networks, and the results of an experiment and validation of the proposed method using a standard bibliographic dataset.
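The propagation phase described above can be sketched as a discrete spreading-activation pass over a weighted resource network. This is a minimal illustration, assuming a simple hop-attenuation scheme; the graph, decay parameter, and names are illustrative, not the article's exact formulation.

```python
def spread_activation(adj, seed_weights, decay=0.5, steps=2):
    """Discrete spreading activation: activation starts at metadata-rich
    nodes (seed_weights) and flows along weighted edges of the associative
    network, attenuated by `decay` at each hop."""
    act = dict(seed_weights)
    for _ in range(steps):
        nxt = dict(act)
        for u, nbrs in adj.items():
            for v, w in nbrs.items():
                nxt[v] = nxt.get(v, 0.0) + decay * w * act.get(u, 0.0)
        act = nxt
    return act

# resource "a" carries a metadata term; "b" co-occurs with "a", "c" with "b"
adj = {"a": {"b": 1.0}, "b": {"c": 1.0}, "c": {}}
act = spread_activation(adj, {"a": 1.0})
assert act["a"] == 1.0 and act["b"] > act["c"] > 0.0
```

Running one such pass per metadata term assigns each metadata-poor resource a ranked list of candidate terms, with scores that decay with associative distance from the resources that originally carried the term.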
0807.0042
A Simple Converse Proof and a Unified Capacity Formula for Channels with Input Constraints
cs.IT math.IT
Given the single-letter capacity formula and the converse proof of a channel without constraints, we provide a simple approach to extend the results to the same channel with constraints. The resulting capacity formula is the minimum of a Lagrange dual function. It gives a unified formula in the sense that it works regardless of whether the problem is convex. If the problem is non-convex, we show that the capacity can be larger than the formula obtained by the naive approach of imposing constraints on the maximization in the capacity formula for the case without constraints. The extension of the converse proof simply adds a term involving the Lagrange multiplier and the constraints; the rest of the proof does not need to be changed. We name this proof method the Lagrangian Converse Proof. In contrast, traditional approaches need to construct a better input distribution for convex problems or introduce a time-sharing variable for non-convex problems. We illustrate the Lagrangian Converse Proof for three channels: the classic discrete-time memoryless channel, the channel with non-causal channel-state information at the transmitter, and the channel with limited channel-state feedback. The extension to rate-distortion theory is also provided.
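The dual formula described above can be written out explicitly. In my notation (not necessarily the paper's), for an input cost function $c(\cdot)$ with budget $\Gamma$, the constrained capacity is the minimum of the Lagrange dual function:

```latex
% Capacity with input constraint E[c(X)] <= Gamma as the minimum of a
% Lagrange dual function; the inner maximization is over unconstrained
% input distributions p_X.
C(\Gamma) \;=\; \min_{\lambda \ge 0}\,
  \Big[\, \max_{p_X} \big( I(X;Y) - \lambda\, \mathbb{E}[c(X)] \big)
  \;+\; \lambda \Gamma \,\Big].
```

When the problem is convex, this minimum coincides with the naive constrained maximization $\max_{p_X:\, \mathbb{E}[c(X)] \le \Gamma} I(X;Y)$; when it is not, the abstract's point is that the dual expression, not the naive one, gives the capacity.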
0807.0070
Quantitative Paradigm of Software Reliability as Content Relevance
cs.SE cs.IR
This paper presents a quantitative approach to definitions of software reliability and content relevance, validated by the law of systems' potential reliability. It thus argues for the unified mathematical nature, or quantitative paradigm, of software reliability and content relevance.
0807.0087
Path lengths in tree-child time consistent hybridization networks
q-bio.PE cs.CE cs.DM q-bio.QM
Hybridization networks are representations of evolutionary histories that allow for the inclusion of reticulate events like recombinations, hybridizations, or lateral gene transfers. The recent growth in the number of hybridization network reconstruction algorithms has led to an increasing interest in the definition of metrics for their comparison, which can be used to assess the accuracy or robustness of these methods. In this paper we establish some basic results that make possible the generalization to tree-child time consistent (TCTC) hybridization networks of some of the oldest known metrics for phylogenetic trees: those based on the comparison of the vectors of path lengths between leaves. More specifically, we associate with each hybridization network a suitably defined vector of `splitted' path lengths between its leaves, and we prove that if two TCTC hybridization networks have the same such vectors, then they must be isomorphic. Thus, comparing these vectors by means of a metric for real-valued vectors defines a metric for TCTC hybridization networks. We also consider the case of fully resolved hybridization networks, where we prove that simpler, `non-splitted' vectors can be used.
0807.0093
Graph Kernels
cs.LG
We present a unified framework to study graph kernels, special cases of which include the random walk graph kernel \citep{GaeFlaWro03,BorOngSchVisetal05}, marginalized graph kernel \citep{KasTsuIno03,KasTsuIno04,MahUedAkuPeretal04}, and geometric kernel on graphs \citep{Gaertner02}. Through extensions of linear algebra to Reproducing Kernel Hilbert Spaces (RKHS) and reduction to a Sylvester equation, we construct an algorithm that improves the time complexity of kernel computation from $O(n^6)$ to $O(n^3)$. When the graphs are sparse, conjugate gradient solvers or fixed-point iterations bring our algorithm into the sub-cubic domain. Experiments on graphs from bioinformatics and other application domains show that it is often more than a thousand times faster than previous approaches. We then explore connections between diffusion kernels \citep{KonLaf02}, regularization on graphs \citep{SmoKon03}, and graph kernels, and use these connections to propose new graph kernels. Finally, we show that rational kernels \citep{CorHafMoh02,CorHafMoh03,CorHafMoh04} when specialized to graphs reduce to the random walk graph kernel.
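The fixed-point iteration mentioned above is easy to sketch for the random walk kernel: on the direct (tensor) product graph, iterate $x \leftarrow p + \lambda W_\times x$, which converges to $(I - \lambda W_\times)^{-1} p$ when $\lambda$ is below the reciprocal of the spectral radius of $W_\times$. The sketch below is a pure-Python toy on dense adjacency lists; the uniform start/stop distributions and parameter values are illustrative choices, not the paper's.

```python
def random_walk_kernel(A1, A2, lam=0.1, iters=50):
    """Random-walk graph kernel k = q^T (I - lam*Wx)^{-1} p on the direct
    product of graphs A1, A2, computed by fixed-point iteration.
    Wx[(i,j),(k,l)] = A1[i][k] * A2[j][l]; p, q are uniform."""
    n1, n2 = len(A1), len(A2)
    N = n1 * n2
    def matvec(x):  # y = Wx @ x without materializing the N-by-N matrix
        y = [0.0] * N
        for i in range(n1):
            for j in range(n2):
                s = 0.0
                for k in range(n1):
                    if A1[i][k]:
                        for l in range(n2):
                            if A2[j][l]:
                                s += A1[i][k] * A2[j][l] * x[k * n2 + l]
                y[i * n2 + j] = s
        return y
    p = [1.0 / N] * N
    x = p[:]
    for _ in range(iters):
        mx = matvec(x)
        x = [pi + lam * mi for pi, mi in zip(p, mx)]
    return sum(x) / N  # q^T x with uniform q

# a triangle matches itself better than it matches a 3-node path
tri = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
path = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
assert random_walk_kernel(tri, tri) > random_walk_kernel(tri, path)
```

The direct $O(n^6)$ cost comes from solving the linear system on the $n^2$-node product graph; the paper's Sylvester-equation and conjugate-gradient reductions avoid ever forming $W_\times$, much as `matvec` does here.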
0807.0199
Quadratic Forms and Space-Time Block Codes from Generalized Quaternion and Biquaternion Algebras
cs.IT math.IT
In the context of space-time block codes (STBCs), the theory of generalized quaternion and biquaternion algebras (i.e., tensor products of two quaternion algebras) over arbitrary base fields is presented, as well as quadratic form theoretic criteria to check if such algebras are division algebras. For base fields relevant to STBCs, these criteria are exploited, via Springer's theorem, to construct several explicit infinite families of (bi-)quaternion division algebras. These are used to obtain new $2\times 2$ and $4\times 4$ STBCs.
0807.0204
Diversity Multiplexing Tradeoff of Asynchronous Cooperative Relay Networks
cs.IT math.IT
The assumption of nodes in a cooperative communication relay network operating in synchronous fashion is often unrealistic. In the present paper, we consider two different models of asynchronous operation in cooperative-diversity networks experiencing slow fading and examine the corresponding diversity-multiplexing tradeoffs (DMT). For both models, we propose protocols and distributed space-time codes that asymptotically achieve the transmit diversity bound for all multiplexing gains and for any number of relays.
0807.0245
Full Diversity Codes for MISO Systems Equipped with Linear or ML Detectors
cs.IT math.IT
In this paper, a general criterion for space-time block codes (STBC) to achieve full diversity with a linear receiver is proposed for a wireless communication system having multiple transmit antennas and a single receive antenna (MISO). In particular, an STBC with Toeplitz structure satisfies this criterion and therefore enables full diversity. Further examination of this Toeplitz STBC reveals the following important properties: a) the symbol transmission rate can be made to approach unity; b) applying the Toeplitz code to any signalling scheme having nonzero distance between the nearest constellation points results in a non-vanishing determinant. In addition, if QAM is used as the signalling scheme, then for independent MISO flat fading channels, the Toeplitz code is proved to approach the optimal diversity-vs-multiplexing tradeoff with a ZF receiver when the number of channel uses is large. This is, so far, the first non-orthogonal STBC shown to achieve the optimal tradeoff for such a receiver. On the other hand, when ML detection is employed in a MISO system, the Toeplitz STBC achieves the maximum coding gain for independent channels. When the channel fading coefficients are correlated, the inherent transmission matrix in the Toeplitz STBC can be designed to minimize the average worst-case pairwise error probability.
0807.0311
About the creation of a parallel bilingual corpora of web-publications
cs.CL
An algorithm for the creation of parallel text corpora is presented. The algorithm is based on the use of "key words" in text documents and on means for their automated translation. Key words were singled out using Russian and Ukrainian morphological dictionaries, as well as dictionaries of noun translations for the Russian and Ukrainian languages. In addition, empirical-statistical rules were used to calculate the weights of the terms in the documents. The algorithm under consideration was realized as a program complex integrated into the InfoStream content-monitoring system. As a result, a parallel bilingual corpus of web publications containing about 30 thousand documents was created.
0807.0337
Unveiling the mystery of visual information processing in human brain
cs.AI cs.IR cs.IT math.IT q-bio.NC
It is generally accepted that human vision is an extremely powerful information processing system that facilitates our interaction with the surrounding world. However, despite extended and extensive research efforts, which encompass many exploration fields, the underlying fundamentals and operational principles of visual information processing in the human brain remain unknown. We are still unable to figure out where and how, along the path from the eyes to the cortex, the sensory input perceived by the retina is converted into a meaningful object representation that can be consciously manipulated by the brain. Studying the vast literature on the various aspects of brain information processing, I was surprised to learn that this scholarly discussion is totally indifferent to the basic keynote question: "What is information?" in general, or "What is visual information?" in particular. In the old days, it was assumed that any scientific research approach has first to define its basic departure points. Why this was overlooked in brain information processing research remains a conundrum. In this paper, I try to find a remedy for this bizarre situation. I propose an uncommon definition of "information", which can be derived from Kolmogorov's Complexity Theory and Chaitin's notion of Algorithmic Information. Embracing this new definition leads to an inevitable revision of the traditional dogmas that shape the state of the art of brain information processing research. I hope this revision will better serve the challenging goal of modeling human visual information processing.
0807.0517
Modeling belief systems with scale-free networks
cs.AI physics.soc-ph
The evolution of belief systems has always been a focus of cognitive research. In this paper we delineate a new model describing belief systems as a network of statements considered true. In testing the model, a small number of parameters enabled us to reproduce a variety of well-known mechanisms, ranging from opinion changes to the development of psychological problems. The self-organizing opinion structure showed a scale-free degree distribution. The novelty of our work lies in applying a convenient set of definitions that allows us to depict opinion network dynamics in a highly favorable way, which resulted in a scale-free belief network. As an additional benefit, we list several conjectural consequences in a number of areas related to thinking and reasoning.
0807.0564
Linear-Programming Receivers
cs.IT math.IT
It is shown that any communication system which admits a sum-product (SP) receiver also admits a corresponding linear-programming (LP) receiver. The two receivers have a relationship defined by the local structure of the underlying graphical model, and are inhibited by the same phenomenon, which we call 'pseudoconfigurations'. This concept is a generalization of the concept of 'pseudocodewords' for linear codes. It is proved that the LP receiver has the 'optimum certificate' property, and that the receiver output is the lowest cost pseudoconfiguration. Equivalence of graph-cover pseudoconfigurations and linear-programming pseudoconfigurations is also proved. While the LP receiver is generally more complex than the corresponding SP receiver, the LP receiver and its associated pseudoconfiguration structure provide an analytic tool for the analysis of SP receivers. As an example application, we show how the LP design technique may be applied to the problem of joint equalization and decoding.
0807.0565
Music, Complexity, Information
physics.soc-ph cs.CL
These are the preparatory notes for a Science & Music essay, "Playing by numbers", which appeared in Nature 453 (2008) 988-989.
0807.0595
Nonstandard linear recurring sequence subgroups in finite fields and automorphisms of cyclic codes
cs.IT cs.DM math.CO math.IT
Let $q=p^r$ be a prime power, and let $f(x)=x^m-\gs_{m-1}x^{m-1}-\cdots-\gs_1x-\gs_0$ be an irreducible polynomial over the finite field $\GF(q)$ of size $q$. A zero $\xi$ of $f$ is called {\em nonstandard (of degree $m$) over $\GF(q)$} if the recurrence relation $u_m=\gs_{m-1}u_{m-1}+\cdots+\gs_1u_1+\gs_0u_0$ with characteristic polynomial $f$ can generate the powers of $\xi$ in a nontrivial way, that is, with $u_0=1$ and $f(u_1)\neq 0$. In 2003, Brison and Nogueira asked for a characterisation of all nonstandard cases in the case $m=2$, and solved this problem for $q$ a prime, and later for $q=p^r$ with $r\leq4$. In this paper, we first show that classifying nonstandard finite field elements is equivalent to classifying those cyclic codes over $\GF(q)$ generated by a single zero that possess extra permutation automorphisms. Apart from two sporadic examples of degree 11 over $\GF(2)$ and of degree 5 over $\GF(3)$, related to the Golay codes, there exist two classes of examples of nonstandard finite field elements. One of these classes (type I) involves irreducible polynomials $f$ of the form $f(x)=x^m-f_0$, and is well understood. The other class (type II) can be obtained from a primitive element in some subfield by a process that we call extension and lifting. We use the known classification of the subgroups of $\PGL(2,q)$ in combination with a recent result by Brison and Nogueira to show that a nonstandard element of degree two over $\GF(q)$ is necessarily of type I or type II, thus completely solving the classification problem for the case $m=2$.
0807.0627
Belief decision support and reject for textured images characterization
cs.AI
Classifying textured images requires treating each image in terms of areas sharing the same texture. In an uncertain environment, it may be better to make an imprecise decision or to reject an area that belongs to an unlearned class. Moreover, the areas that serve as classification units may contain more than one texture. These considerations lead us to develop a belief decision model that permits rejecting an area as unlearned and deciding on unions and intersections of learned classes. The proposed approach is justified and illustrated by an application to seabed characterization from sonar images.
0807.0672
Algorithmic Problem Complexity
cs.CC cs.IT math.IT
People solve different problems and know that some of them are simple, some are complex, and some are insoluble. The main goal of this work is to develop a mathematical theory of algorithmic complexity for problems. This theory is aimed at determining the abilities of computers to solve different problems and estimating the resources that computers need to do so. Here we build the part of this theory related to static measures of algorithms. First, we consider problems for finite words and study the algorithmic complexity of such problems, building optimal complexity measures. Then we consider problems for such infinite objects as functions and study the algorithmic complexity of these problems, also building optimal complexity measures. In the second part of the work, the complexity of algorithmic problems, such as the halting problem for Turing machines, is measured by the classes of automata that are necessary to solve the problem. To classify different problems with respect to their complexity, inductive Turing machines, which extend the possibilities of Turing machines, are used. A hierarchy of inductive Turing machines generates an inductive hierarchy of algorithmic problems. Here we specifically consider algorithmic problems related to Turing machines and inductive Turing machines, and find a place for these problems in the inductive hierarchy of algorithmic problems.
0807.0821
On Wiretap Networks II
cs.IT math.IT
We consider the problem of securing a multicast network against a wiretapper that can intercept the packets on a limited number of arbitrary network links of his choice. We assume that the network implements network coding techniques to simultaneously deliver all the packets available at the source to all the destinations. We show how this problem can be looked at as a network generalization of the Ozarow-Wyner Wiretap Channel of type II. In particular, we show that network security can be achieved by using the Ozarow-Wyner approach of coset coding at the source on top of the implemented network code. This way, we quickly and transparently recover some of the results available in the literature on secure network coding for wiretapped networks. We also derive new bounds on the required secure code alphabet size and an algorithm for code construction.
0807.0868
On the Capacity of Pairwise Collaborative Networks
cs.IT math.IT
We derive expressions for the achievable rate region of a collaborative coding scheme in a two-transmitter, two-receiver Pairwise Collaborative Network (PCN), where one transmitter-receiver pair, namely the relay pair, assists the other pair, namely the source pair, by partially decoding and forwarding the transmitted message to the intended receiver. The relay pair provides such assistance while handling a private message. We assume that users can use the past channel outputs and can transmit and receive at the same time and in the same frequency band. In this collaborative scheme, the transmitter of the source pair splits its information into two independent parts. Specifically, the relay pair employs decode-and-forward coding to assist the source pair in delivering a part of its message: it re-encodes the decoded message along with the private message, which is intended for the receiver of the relay pair, and broadcasts the result. The receiver of the relay pair decodes both messages, retrieves the private message, and re-encodes and transmits the decoded message to the intended destination. We also characterize the achievable rate region for the Gaussian PCN. Finally, we provide numerical results to study the rate trade-off for the involved pairs. The numerical results show that collaboration offers a gain when the channel gains between the users of the relay pair are strong. They also show that if the channel conditions between the transmitters, or between the receivers, of the relay and source pairs are poor, such collaboration is not beneficial.
0807.0908
The Correspondence Analysis Platform for Uncovering Deep Structure in Data and Information
cs.AI
We study two aspects of information semantics: (i) the collection of all relationships, (ii) tracking and spotting anomaly and change. The first is implemented by endowing all relevant information spaces with a Euclidean metric in a common projected space. The second is modelled by an induced ultrametric. A very general way to achieve a Euclidean embedding of different information spaces based on cross-tabulation counts (and from other input data formats) is provided by Correspondence Analysis. From there, the induced ultrametric that we are particularly interested in takes a sequential - e.g. temporal - ordering of the data into account. We employ such a perspective to look at narrative, "the flow of thought and the flow of language" (Chafe). In application to policy decision making, we show how we can focus analysis in a small number of dimensions.
0807.0942
Secrecy via Sources and Channels
cs.IT math.IT
Alice and Bob want to share a secret key and to communicate an independent message, both of which they desire to be kept secret from an eavesdropper Eve. We study this problem of secret communication and secret key generation when two resources are available -- correlated sources at Alice, Bob, and Eve, and a noisy broadcast channel from Alice to Bob and Eve which is independent of the sources. We are interested in characterizing the fundamental trade-off between the rates of the secret message and secret key. We present an achievable solution and prove its optimality for the parallel channels and sources case when each sub-channel and source component satisfies a degradation order (either in favor of the legitimate receiver or the eavesdropper). This includes the case of jointly Gaussian sources and an additive Gaussian channel, for which the secrecy region is evaluated.
0807.1005
Catching Up Faster by Switching Sooner: A Prequential Solution to the AIC-BIC Dilemma
math.ST cs.IT cs.LG math.IT stat.ME stat.ML stat.TH
Bayesian model averaging, model selection and its approximations such as BIC are generally statistically consistent, but sometimes achieve slower rates of convergence than other methods such as AIC and leave-one-out cross-validation. On the other hand, these other methods can be inconsistent. We identify the "catch-up phenomenon" as a novel explanation for the slow convergence of Bayesian methods. Based on this analysis we define the switch distribution, a modification of the Bayesian marginal distribution. We show that, under broad conditions, model selection and prediction based on the switch distribution is both consistent and achieves optimal convergence rates, thereby resolving the AIC-BIC dilemma. The method is practical; we give an efficient implementation. The switch distribution has a data compression interpretation, and can thus be viewed as a "prequential" or MDL method; yet it is different from the MDL methods that are usually considered in the literature. We compare the switch distribution to Bayes factor model selection and leave-one-out cross-validation.
0807.1158
Path Gain Algebraic Formulation for the Scalar Linear Network Coding Problem
cs.IT math.IT
In the algebraic view, the solution to a network coding problem is seen as a variety specified by a system of polynomial equations typically derived by using edge-to-edge gains as variables. The output from each sink is equated to its demand to obtain polynomial equations. In this work, we propose a method to derive the polynomial equations using source-to-sink path gains as the variables. In the path gain formulation, we show that linear and quadratic equations suffice; therefore, network coding becomes equivalent to a system of polynomial equations of maximum degree 2. We present algorithms for generating the equations in the path gains and for converting path gain solutions to edge-to-edge gain solutions. Because of the low degree, simplification is readily possible for the system of equations obtained using path gains. Using small-sized network coding problems, we show that the path gain approach results in simpler equations and determines solvability of the problem in certain cases. On a larger network (with 87 nodes and 161 edges), we show how the path gain approach continues to provide deterministic solutions to some network coding problems.
0807.1211
Flux: FunctionaL Updates for XML (extended report)
cs.PL cs.DB
XML database query languages have been studied extensively, but XML database updates have received relatively little attention, and pose many challenges to language design. We are developing an XML update language called Flux, which stands for FunctionaL Updates for XML, drawing upon ideas from functional programming languages. In prior work, we have introduced a core language for Flux with a clear operational semantics and a sound, decidable static type system based on regular expression types. Our initial proposal had several limitations. First, it lacked support for recursive types or update procedures. Second, although a high-level source language can easily be translated to the core language, it is difficult to propagate meaningful type errors from the core language back to the source. Third, certain updates are well-formed yet contain path errors, or ``dead'' subexpressions which never do any useful work. It would be useful to detect path errors, since they often represent errors or optimization opportunities. In this paper, we address all three limitations. Specifically, we present an improved, sound type system that handles recursion. We also formalize a source update language and give a translation to the core language that preserves and reflects typability. We also develop a path-error analysis (a form of dead-code analysis) for updates.
0807.1253
Informed Traders
q-fin.TR cs.IT math.IT math.PR
An asymmetric information model is introduced for the situation in which there is a small agent who is more susceptible to the flow of information in the market than the general market participant, and who tries to implement strategies based on the additional information. In this model market participants have access to a stream of noisy information concerning the future return of an asset, whereas the informed trader has access to a further information source which is obscured by an additional noise that may be correlated with the market noise. The informed trader uses the extraneous information source to seek statistical arbitrage opportunities, while at the same time accommodating the additional risk. The amount of information available to the general market participant concerning the asset return is measured by the mutual information of the asset price and the associated cash flow. The worth of the additional information source is then measured in terms of the difference of mutual information between the general market participant and the informed trader. This difference is shown to be nonnegative when the signal-to-noise ratio of the information flow is known in advance. Explicit trading strategies leading to statistical arbitrage opportunities, taking advantage of the additional information, are constructed, illustrating how excess information can be translated into profit.
0807.1267
Optimal Direct Sum and Privacy Trade-off Results for Quantum and Classical Communication Complexity
cs.DC cs.IT math.IT
We show optimal Direct Sum result for the one-way entanglement-assisted quantum communication complexity for any relation f subset of X x Y x Z. We show: Q^{1,pub}(f^m) = Omega(m Q^{1,pub}(f)), where Q^{1,pub}(f), represents the one-way entanglement-assisted quantum communication complexity of f with error at most 1/3 and f^m represents m-copies of f. Similarly for the one-way public-coin classical communication complexity we show: R^{1,pub}(f^m) = Omega(m R^{1,pub}(f)), where R^{1,pub}(f), represents the one-way public-coin classical communication complexity of f with error at most 1/3. We show similar optimal Direct Sum results for the Simultaneous Message Passing quantum and classical models. For two-way protocols we present optimal Privacy Trade-off results leading to a Weak Direct Sum result for such protocols. We show our Direct Sum and Privacy Trade-off results via message compression arguments which also imply a new round elimination lemma in quantum communication. This allows us to extend classical lower bounds on the cell probe complexity of some data structure problems, e.g. Approximate Nearest Neighbor Searching on the Hamming cube {0,1}^n and Predecessor Search to the quantum setting. In a separate result we show that Newman's technique of reducing the number of public-coins in a classical protocol cannot be lifted to the quantum setting. We do this by defining a general notion of black-box reduction of prior entanglement that subsumes Newman's technique. We prove that such a black-box reduction is impossible for quantum protocols. In the final result in the theme of message compression, we provide an upper bound on the problem of Exact Remote State Preparation.
0807.1313
On the Tradeoffs of Implementing Randomized Network Coding in Multicast Networks
cs.IT math.IT
Randomized network coding (RNC) greatly reduces the complexity of implementing network coding in large-scale, heterogeneous networks. This paper examines two tradeoffs in applying RNC: The first studies how the performance of RNC varies with a node's randomizing capabilities. Specifically, a limited randomized network coding (L-RNC) scheme - in which intermediate nodes perform randomized encoding based on only a limited number of random coefficients - is proposed and its performance bounds are analyzed. Such a L-RNC approach is applicable to networks in which nodes have either limited computation/storage capacity or have ambiguity about downstream edge connectivity (e.g., as in ad hoc sensor networks). A second tradeoff studied here examines the relationship between the reliability and the capacity gains of generalized RNC, i.e., how the outage probability of RNC relates to the transmission rate at the source node. This tradeoff reveals that significant reductions in outage probability are possible when the source transmits deliberately and only slightly below network capacity. This approach provides an effective means to improve the feasibility probability of RNC when the size of the finite field is fixed.
0807.1372
Communication over Finite-Field Matrix Channels
cs.IT math.IT
This paper is motivated by the problem of error control in network coding when errors are introduced in a random fashion (rather than chosen by an adversary). An additive-multiplicative matrix channel is considered as a model for random network coding. The model assumes that n packets of length m are transmitted over the network, and up to t erroneous packets are randomly chosen and injected into the network. Upper and lower bounds on capacity are obtained for any channel parameters, and asymptotic expressions are provided in the limit of large field or matrix size. A simple coding scheme is presented that achieves capacity in both limiting cases. The scheme has decoding complexity O(n^2 m) and a probability of error that decreases exponentially both in the packet length and in the field size in bits. Extensions of these results for coherent network coding are also presented.
0807.1475
Simulations of Large-scale WiFi-based Wireless Networks: Interdisciplinary Challenges and Applications
cs.CE cs.DC
Wireless Fidelity (WiFi) is the fastest growing wireless technology to date. In addition to providing wire-free connectivity to the Internet, WiFi technology also enables mobile devices to connect directly to each other and form highly dynamic wireless ad hoc networks. Such distributed networks can be used to perform cooperative communication tasks such as data routing and information dissemination in the absence of a fixed infrastructure. Furthermore, ad hoc grids composed of wirelessly networked portable devices are emerging as a new paradigm in grid computing. In this paper we review computational and algorithmic challenges of high-fidelity simulations of such WiFi-based wireless communication and computing networks, including scalable topology maintenance, mobility modelling, parallelisation and synchronisation. We explore similarities and differences between the simulations of these networks and simulations of interacting many-particle systems, such as molecular dynamics (MD) simulations. We show how the cell linked-list algorithm, which we have adapted from our MD simulations, can be used to greatly improve the computational performance of wireless network simulators in the presence of mobility, and illustrate with an example from our simulation studies of worm attacks on mobile wireless ad hoc networks.
0807.1494
Algorithm Selection as a Bandit Problem with Unbounded Losses
cs.AI cs.GT cs.LG
Algorithm selection is typically based on models of algorithm performance, learned during a separate offline training sequence, which can be prohibitively expensive. In recent work, we adopted an online approach, in which a performance model is iteratively updated and used to guide selection on a sequence of problem instances. The resulting exploration-exploitation trade-off was represented as a bandit problem with expert advice, using an existing solver for this game, but this required the setting of an arbitrary bound on algorithm runtimes, thus invalidating the optimal regret of the solver. In this paper, we propose a simpler framework for representing algorithm selection as a bandit problem, with partial information, and an unknown bound on losses. We adapt an existing solver to this game, proving a bound on its expected regret, which holds also for the resulting algorithm selection technique. We present preliminary experiments with a set of SAT solvers on a mixed SAT-UNSAT benchmark.
0807.1513
A First-Order Non-Homogeneous Markov Model for the Response of Spiking Neurons Stimulated by Small Phase-Continuous Signals
q-bio.NC cs.NE
We present a first-order non-homogeneous Markov model for the interspike-interval density of a continuously stimulated spiking neuron. The model allows the conditional interspike-interval density and the stationary interspike-interval density to be expressed as products of two separate functions, one of which describes only the neuron characteristics, and the other of which describes only the signal characteristics. This allows the use of this model to predict the response when the underlying neuron model is not known or well determined. The approximation shows particularly clearly that signal autocorrelations and cross-correlations arise as natural features of the interspike-interval density, and are particularly clear for small signals and moderate noise. We show that this model simplifies the design of spiking neuron cross-correlation systems, and describe a four-neuron mutual inhibition network that generates a cross-correlation output for two input signals.
0807.1543
On the Capacity of MIMO Interference Channels
cs.IT math.IT
The capacity region of a multiple-input-multiple-output interference channel (MIMO IC) where the channel matrices are square and invertible is studied. The capacity region for strong interference is established where the definition of strong interference parallels that of scalar channels. Moreover, the sum-rate capacity for Z interference, noisy interference, and mixed interference is established. These results generalize known results for the scalar Gaussian IC.
0807.1550
Discernment of Hubs and Clusters in Socioeconomic Networks
physics.soc-ph cs.SI physics.data-an stat.AP
Interest in the analysis of networks has grown rapidly in the new millennium. Consequently, we promote renewed attention to a certain methodological approach introduced in 1974. Over the succeeding decade, this two-stage--double-standardization and hierarchical clustering (single-linkage-like)--procedure was applied to a wide variety of weighted, directed networks of a socioeconomic nature, frequently revealing the presence of ``hubs''. These were, typically--in the numerous instances studied of migration flows between geographic subdivisions within nations--``cosmopolitan/non-provincial'' areas, a prototypical example being the French capital, Paris. Such locations emit and absorb people broadly across their respective nations. Additionally, the two-stage procedure--which ``might very well be the most successful application of cluster analysis'' (R. C. Dubes, 1985)--detected many (physically or socially) isolated, functional groups (regions) of areas, such as the southern islands, Shikoku and Kyushu, of Japan, the Italian islands of Sardinia and Sicily, and the New England region of the United States. Further, we discuss a (complementary) approach developed in 1976, in which the max-flow/min-cut theorem was applied to raw/non-standardized (interindustry, as well as migration) flows.
0807.1560
Scientific Paper Summarization Using Citation Summary Networks
cs.IR cs.CL
Quickly moving to a new area of research is painful for researchers due to the vast amount of scientific literature in each field of study. One possible way to overcome this problem is to summarize a scientific topic. In this paper, we propose a model of summarizing a single article, which can be further used to summarize an entire topic. Our model is based on analyzing others' viewpoint of the target article's contributions and the study of its citation summary network using a clustering approach.
0807.1734
Faster Sequential Search with a Two-Pass Dynamic-Time-Warping Lower Bound
cs.DB
The Dynamic Time Warping (DTW) is a popular similarity measure between time series. The DTW fails to satisfy the triangle inequality and its computation requires quadratic time. Hence, to find closest neighbors quickly, we use bounding techniques. We can avoid most DTW computations with an inexpensive lower bound (LB_Keogh). We compare LB_Keogh with a tighter lower bound (LB_Improved). We find that LB_Improved-based search is faster for sequential search. As an example, our approach is 3 times faster over random-walk and shape time series. We also review some of the mathematical properties of the DTW. We derive a tight triangle inequality for the DTW. We show that the DTW becomes the l_1 distance when time series are separated by a constant.
0807.1773
Spatial Interference Cancellation for Multi-Antenna Mobile Ad Hoc Networks
cs.IT math.IT
Interference between nodes is a critical impairment in mobile ad hoc networks (MANETs). This paper studies the role of multiple antennas in mitigating such interference. Specifically, a network is studied in which receivers apply zero-forcing beamforming to cancel the strongest interferers. Assuming a network with Poisson distributed transmitters and independent Rayleigh fading channels, the transmission capacity is derived, which gives the maximum number of successful transmissions per unit area. Mathematical tools from stochastic geometry are applied to obtain the asymptotic transmission capacity scaling and characterize the impact of inaccurate channel state information (CSI). It is shown that, if each node cancels L interferers, the transmission capacity decreases as the outage probability to the power of 1/(L+1) as the outage probability vanishes. For fixed outage probability, as L grows, the transmission capacity increases as L to the power of (1-2/alpha) where alpha is the path-loss exponent. Moreover, CSI inaccuracy is shown to have no effect on the transmission capacity scaling as the outage probability vanishes, provided that the CSI training sequence has an appropriate length, which we derived. Numerical results suggest that canceling merely one interferer by each node increases the transmission capacity by an order of magnitude or more, even when the CSI is imperfect.
0807.1906
Extension of Inagaki General Weighted Operators and A New Fusion Rule Class of Proportional Redistribution of Intersection Masses
cs.AI
In this paper we extend Inagaki Weighted Operators fusion rule (WO) in information fusion by doing redistribution of not only the conflicting mass, but also of masses of non-empty intersections, that we call Double Weighted Operators (DWO). Then we propose a new fusion rule Class of Proportional Redistribution of Intersection Masses (CPRIM), which generates many interesting particular fusion rules in information fusion. Both formulas are presented for any number of sources of information. An application and comparison with other fusion rules are given in the last section.
0807.1997
Multi-Instance Learning by Treating Instances As Non-I.I.D. Samples
cs.LG cs.AI
Multi-instance learning attempts to learn from a training set consisting of labeled bags, each containing many unlabeled instances. Previous studies typically treat the instances in the bags as independently and identically distributed. However, the instances in a bag are rarely independent, and therefore better performance can be expected if the instances are treated in a non-i.i.d. way that exploits the relations among instances. In this paper, we propose a simple yet effective multi-instance learning method, which regards each bag as a graph and uses a specific kernel to distinguish the graphs by considering the features of the nodes as well as the features of the edges that convey relations among instances. The effectiveness of the proposed method is validated by experiments.
0807.2028
On Krause's multi-agent consensus model with state-dependent connectivity (Extended version)
cs.MA
We study a model of opinion dynamics introduced by Krause: each agent has an opinion represented by a real number, and updates its opinion by averaging all agent opinions that differ from its own by less than 1. We give a new proof of convergence into clusters of agents, with all agents in the same cluster holding the same opinion. We then introduce a particular notion of equilibrium stability and provide lower bounds on the inter-cluster distances at a stable equilibrium. To better understand the behavior of the system when the number of agents is large, we also introduce and study a variant involving a continuum of agents, obtaining partial convergence results and lower bounds on inter-cluster distances, under some mild assumptions.
0807.2043
Intrusion Detection Using Cost-Sensitive Classification
cs.CR cs.CV cs.NI
Intrusion Detection is an invaluable part of computer networks defense. An important consideration is the fact that raising false alarms carries a significantly lower cost than not detecting attacks. For this reason, we examine how cost-sensitive classification methods can be used in Intrusion Detection systems. The performance of the approach is evaluated under different experimental conditions, cost matrices and different classification models, in terms of expected cost, as well as detection and false alarm rates. We find that even under unfavourable conditions, cost-sensitive classification can improve performance significantly, if only slightly.
0807.2047
The Five Points Pose Problem : A New and Accurate Solution Adapted to any Geometric Configuration
cs.CV
The goal of this paper is to estimate directly the rotation and translation between two stereoscopic images with the help of five homologous points. The methodology presented does not mix the rotation and translation parameters, which is comparably an important advantage over the methods using the well-known essential matrix. This results in correct behavior and accuracy for situations otherwise known as quite unfavorable, such as planar scenes, or panoramic sets of images (with a null base length), while providing quite comparable results for more "standard" cases. The resolution of the algebraic polynomials resulting from the modeling of the coplanarity constraint is made with the help of powerful algebraic solver tools (the Groebner bases and the Rational Univariate Representation).
0807.2108
On dual Schur domain decomposition method for linear first-order transient problems
cs.NA cs.CE
This paper addresses some numerical and theoretical aspects of dual Schur domain decomposition methods for linear first-order transient partial differential equations. In this work, we consider the trapezoidal family of schemes for integrating the ordinary differential equations (ODEs) for each subdomain and present four different coupling methods, corresponding to different algebraic constraints, for enforcing kinematic continuity on the interface between the subdomains. Method 1 (d-continuity) is based on the conventional approach using continuity of the primary variable, and we show that this method is unstable for many commonly used time integrators, including the mid-point rule. To alleviate this difficulty, we propose a new Method 2 (Modified d-continuity) and prove its stability for coupling all time integrators in the trapezoidal family (except the forward Euler). Method 3 (v-continuity) is based on enforcing the continuity of the time derivative of the primary variable. However, this constraint introduces a drift in the primary variable on the interface. We present Method 4 (Baumgarte stabilized), which uses Baumgarte stabilization to limit this drift, and we derive bounds for the stabilization parameter to ensure stability. Our stability analysis is based on the ``energy'' method, and one of the main contributions of this paper is the extension of the energy method (which was previously introduced in the context of numerical methods for ODEs) to assess the stability of numerical formulations for index-2 differential-algebraic equations (DAEs).
0807.2158
Universally-composable privacy amplification from causality constraints
quant-ph cs.CR cs.IT math.IT
We consider schemes for secret key distribution which use as a resource correlations that violate Bell inequalities. We provide the first security proof for such schemes, according to the strongest notion of security, the so called universally-composable security. Our security proof does not rely on the validity of quantum mechanics, it solely relies on the impossibility of arbitrarily-fast signaling between separate physical systems. This allows for secret communication in situations where the participants distrust their quantum devices.
0807.2268
Multihop Diversity in Wideband OFDM Systems: The Impact of Spatial Reuse and Frequency Selectivity
cs.IT math.IT
The goal of this paper is to establish which practical routing schemes for wireless networks are most suitable for wideband systems in the power-limited regime, which is, for example, a practically relevant mode of operation for the analysis of ultrawideband (UWB) mesh networks. For this purpose, we study the tradeoff between energy efficiency and spectral efficiency (known as the power-bandwidth tradeoff) in a wideband linear multihop network in which transmissions employ orthogonal frequency-division multiplexing (OFDM) modulation and are affected by quasi-static, frequency-selective fading. Considering open-loop (fixed-rate) and closed-loop (rate-adaptive) multihop relaying techniques, we characterize the impact of routing with spatial reuse on the statistical properties of the end-to-end conditional mutual information (conditioned on the specific values of the channel fading parameters and therefore treated as a random variable) and on the energy and spectral efficiency measures of the wideband regime. Our analysis particularly deals with the convergence of these end-to-end performance measures in the case of large number of hops, i.e., the phenomenon first observed in \cite{Oyman06b} and named as ``multihop diversity''. Our results demonstrate the realizability of the multihop diversity advantages in the case of routing with spatial reuse for wideband OFDM systems under wireless channel effects such as path-loss and quasi-static frequency-selective multipath fading.
0807.2282
Hardware/Software Co-Design for Spike Based Recognition
cs.NE cs.AI cs.CE
Practical applications based on recurrent spiking neurons are limited by their non-trivial learning algorithms. The temporal nature of spiking neurons is more favorable for hardware implementation, where signals can be represented in binary form and communication can be done through the use of spikes. This work investigates the potential of recurrent spiking neuron implementations on reconfigurable platforms and their applicability in temporal applications. A theoretical framework of reservoir computing is investigated for hardware/software implementation. In this framework, only readout neurons are trained, which overcomes the burden of training at the network level. These recurrent neural networks are termed microcircuits and are viewed as basic computational units in cortical computation. This paper investigates the potential of recurrent neural reservoirs and presents a novel hardware/software strategy for their implementation on FPGAs. The design is implemented and its functionality is tested in the context of a speech recognition application.
0807.2292
Rate and power allocation under the pairwise distributed source coding constraint
cs.IT math.IT
We consider the problem of rate and power allocation for a sensor network under the pairwise distributed source coding constraint. For noiseless source-terminal channels, we show that the minimum sum rate assignment can be found by finding a minimum weight arborescence in an appropriately defined directed graph. For orthogonal noisy source-terminal channels, the minimum sum power allocation can be found by finding a minimum weight matching forest in a mixed graph. Numerical results are presented for both cases showing that our solutions always outperform previously proposed solutions. The gains are considerable when source correlations are high.
0807.2358
Polygon Exploration with Time-Discrete Vision
cs.CG cs.RO
With the advent of autonomous robots with two- and three-dimensional scanning capabilities, classical visibility-based exploration methods from computational geometry have gained in practical importance. However, real-life laser scanning of useful accuracy does not allow the robot to scan continuously while in motion; instead, it has to stop each time it surveys its environment. This requirement was studied by Fekete, Klein and Nuechter for the subproblem of looking around a corner, but until now has not been considered in an online setting for whole polygonal regions. We give the first algorithmic results for this important problem that combines stationary art gallery-type aspects with watchman-type issues in an online scenario: We demonstrate that even for orthoconvex polygons, a competitive strategy can be achieved only for limited aspect ratio A (the ratio of the maximum and minimum edge length of the polygon), i.e., for a given lower bound on the size of an edge; we give a matching upper bound by providing an O(log A)-competitive strategy for simple rectilinear polygons, using the assumption that each edge of the polygon has to be fully visible from some scan point.
0807.2383
CPBVP: A Constraint-Programming Framework for Bounded Program Verification
cs.SE cs.AI cs.LO
This paper studies how to verify the conformity of a program with its specification and proposes a novel constraint-programming framework for bounded program verification (CPBPV). The CPBPV framework uses constraint stores to represent the specification and the program and explores execution paths nondeterministically. The input program is partially correct if each constraint store so produced implies the post-condition. CPBPV does not explore spurious execution paths, as it prunes execution paths early by detecting that the constraint store is not consistent. CPBPV uses the rich language of constraint programming to express the constraint store. Finally, CPBPV is parametrized with a list of solvers which are tried in sequence, starting with the least expensive and least general. Experimental results often show orders-of-magnitude improvements over earlier approaches, with running times often independent of the variable domains. Moreover, CPBPV was able to detect subtle errors in some programs on which other frameworks based on model checking have failed.
0807.2440
Construction of Error-Correcting Codes for Random Network Coding
cs.IT math.IT
In this work we present error-correcting codes for random network coding based on rank-metric codes, Ferrers diagrams, and puncturing. For most parameters, the constructed codes are larger than all previously known codes.
0807.2464
On "Bit-Interleaved Coded Multiple Beamforming"
cs.IT math.IT
The interleaver design criteria described in [1] should take into account all error patterns of interest.
0807.2471
An ESPRIT-based approach for Initial Ranging in OFDMA systems
cs.IT math.IT
This work presents a novel Initial Ranging scheme for orthogonal frequency-division multiple-access networks. Users that intend to establish a communication link with the base station (BS) are normally misaligned both in time and frequency and the goal is to jointly estimate their timing errors and carrier frequency offsets with respect to the BS local references. This is accomplished with affordable complexity by resorting to the ESPRIT algorithm. Computer simulations are used to assess the effectiveness of the proposed solution and to make comparisons with existing alternatives.
0807.2475
Opportunistic Collaborative Beamforming with One-Bit Feedback
cs.IT math.IT
An energy-efficient opportunistic collaborative beamformer with one-bit feedback is proposed for ad hoc sensor networks over Rayleigh fading channels. In contrast to conventional collaborative beamforming schemes, in which each source node uses channel state information to correct its local carrier offset and channel phase, the proposed beamforming scheme opportunistically selects a subset of source nodes whose received signals combine in a quasi-coherent manner at the intended receiver. No local phase precompensation is performed by the nodes in the opportunistic collaborative beamformer. As a result, each node requires only one bit of feedback from the destination in order to determine whether or not it should participate in the collaborative beamformer. Theoretical analysis shows that the received signal power obtained with the proposed beamforming scheme scales linearly with the number of available source nodes. Since the optimal node selection rule requires an exhaustive search over all possible subsets of source nodes, two low-complexity selection algorithms are developed. Simulation results confirm the effectiveness of opportunistic collaborative beamforming with the low-complexity selection algorithms.
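The greedy selection idea behind the one-bit feedback scheme is simple enough to sketch. The following pure-Python simulation is only an illustrative toy (the unit-amplitude channel model and all names are assumptions, not the paper's exact setup): a node is admitted only when the destination's one-bit feedback reports an increase in received signal magnitude.

```python
import cmath
import random

def opportunistic_select(phases):
    """Greedy one-bit selection: each candidate node transmits tentatively
    and keeps participating only if it increases the received magnitude.
    `phases` are the uncorrected channel phases of the candidate nodes."""
    selected = []
    total = 0 + 0j
    for ph in phases:
        contrib = cmath.exp(1j * ph)          # unit-amplitude signal from this node
        if abs(total + contrib) > abs(total): # the destination's one-bit test
            total += contrib
            selected.append(ph)
    return selected, abs(total) ** 2          # chosen subset and received power

random.seed(0)
phases = [random.uniform(0, 2 * cmath.pi) for _ in range(200)]
subset, power = opportunistic_select(phases)
```

With random phases, only nodes whose contributions are quasi-coherent with the running sum are kept, so the received power grows well beyond the incoherent level (which is roughly the number of nodes).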
0807.2496
Hybrid Keyword Search Auctions
cs.GT cs.DS cs.IR
Search auctions have become a dominant source of revenue generation on the Internet. Such auctions have typically used per-click bidding and pricing. We propose the use of hybrid auctions where an advertiser can make a per-impression as well as a per-click bid, and the auctioneer then chooses one of the two as the pricing mechanism. We assume that the advertiser and the auctioneer both have separate beliefs (called priors) on the click-probability of an advertisement. We first prove that the hybrid auction is truthful, assuming that the advertisers are risk-neutral. We then show that this auction is superior to the existing per-click auction in multiple ways: 1) It takes into account the risk characteristics of the advertisers. 2) For obscure keywords, the auctioneer is unlikely to have a very sharp prior on the click-probabilities. In such situations, the hybrid auction can result in significantly higher revenue. 3) An advertiser who believes that its click-probability is much higher than the auctioneer's estimate can use per-impression bids to correct the auctioneer's prior without incurring any extra cost. 4) The hybrid auction can allow the advertiser and auctioneer to implement complex dynamic programming strategies. As Internet commerce matures, we need more sophisticated pricing models to exploit all the information held by each of the participants. We believe that hybrid auctions could be an important step in this direction.
0807.2569
Text Data Mining: Theory and Methods
stat.ML cs.IR stat.CO
This paper provides the reader with a very brief introduction to some of the theory and methods of text data mining. The intent of this article is to introduce the reader to some of the current methodologies employed within this discipline while also making the reader aware of some of the interesting challenges that remain to be solved in the area. Finally, the article serves as a very rudimentary tutorial on some of the techniques and provides the reader with a list of references for additional study.
0807.2648
On Endogenous Reconfiguration in Mobile Robotic Networks
cs.RO
In this paper, our focus is on certain applications for mobile robotic networks, where reconfiguration is driven by factors intrinsic to the network rather than changes in the external environment. In particular, we study a version of the coverage problem useful for surveillance applications, where the objective is to position the robots in order to minimize the average distance from a random point in a given environment to the closest robot. This problem has been well studied for omni-directional robots, for which it is shown that the optimal configuration of the network is a centroidal Voronoi configuration and that the coverage cost belongs to $\Theta(m^{-1/2})$, where $m$ is the number of robots in the network. In this paper, we study this problem for more realistic models of robots, namely the double integrator (DI) model and the differential drive (DD) model. We observe that the introduction of these motion constraints into the algorithm design problem gives rise to an interesting behavior. For a \emph{sparser} network, the optimal algorithm for these models of robots mimics that for omni-directional robots. We propose novel algorithms whose performance is within a constant factor of the optimal asymptotically (i.e., as $m \to +\infty$). In particular, we prove that the coverage cost for the DI and DD models of robots is of order $m^{-1/3}$. Additionally, we show that, as the network grows, these novel algorithms outperform the conventional algorithm, hence necessitating a reconfiguration of the network in order to maintain optimal quality of service.
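Centroidal Voronoi configurations of the kind mentioned above are commonly computed by Lloyd-type iterations. The following sketch is a toy illustration only (names are made up; it uses the squared-distance variant of the coverage cost, for which the centroid update provably never increases the sampled cost), not the paper's algorithm for constrained robots.

```python
import random

def coverage_cost(robots, points):
    """Mean squared distance from a sample point to its closest robot."""
    return sum(min((px - rx) ** 2 + (py - ry) ** 2 for rx, ry in robots)
               for px, py in points) / len(points)

def lloyd_step(robots, points):
    """One Lloyd iteration: move each robot to the centroid of the sample
    points in its Voronoi cell (cells approximated by nearest-robot labels)."""
    cells = [[] for _ in robots]
    for px, py in points:
        i = min(range(len(robots)),
                key=lambda k: (px - robots[k][0]) ** 2 + (py - robots[k][1]) ** 2)
        cells[i].append((px, py))
    # empty cells leave the robot where it is
    return [(sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else r
            for c, r in zip(cells, robots)]

random.seed(1)
points = [(random.random(), random.random()) for _ in range(2000)]
robots = [(random.random(), random.random()) for _ in range(9)]
before = coverage_cost(robots, points)
for _ in range(20):
    robots = lloyd_step(robots, points)
after = coverage_cost(robots, points)
```

Each step first partitions the samples by nearest robot and then moves each robot to its cell's centroid; both sub-steps are cost non-increasing for the squared-distance objective, which is why the iteration converges to a centroidal Voronoi configuration.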
0807.2666
Source and Channel Coding for Correlated Sources Over Multiuser Channels
cs.IT math.IT
Source and channel coding over multiuser channels in which receivers have access to correlated source side information is considered. For several multiuser channel models necessary and sufficient conditions for optimal separation of the source and channel codes are obtained. In particular, the multiple access channel, the compound multiple access channel, the interference channel and the two-way channel with correlated sources and correlated receiver side information are considered, and the optimality of separation is shown to hold for certain source and side information structures. Interestingly, the optimal separate source and channel codes identified for these models are not necessarily the optimal codes for the underlying source coding or the channel coding problems. In other words, while separation of the source and channel codes is optimal, the nature of these optimal codes is impacted by the joint design criterion.
0807.2677
Algorithms for Dynamic Spectrum Access with Learning for Cognitive Radio
cs.NI cs.LG
We study the problem of dynamic spectrum sensing and access in cognitive radio systems as a partially observed Markov decision process (POMDP). A group of cognitive users cooperatively tries to exploit vacancies in primary (licensed) channels whose occupancies follow a Markovian evolution. We first consider the scenario where the cognitive users have perfect knowledge of the distribution of the signals they receive from the primary users. For this problem, we obtain a greedy channel selection and access policy that maximizes the instantaneous reward, while satisfying a constraint on the probability of interfering with licensed transmissions. We also derive an analytical universal upper bound on the performance of the optimal policy. Through simulation, we show that our scheme achieves good performance relative to the upper bound and improved performance relative to an existing scheme. We then consider the more practical scenario where the exact distribution of the signal from the primary is unknown. We assume a parametric model for the distribution and develop an algorithm that can learn the true distribution, still guaranteeing the constraint on the interference probability. We show that this algorithm outperforms the naive design that assumes a worst case value for the parameter. We also provide a proof for the convergence of the learning algorithm.
0807.2678
Eigenfactor : Does the Principle of Repeated Improvement Result in Better Journal Impact Estimates than Raw Citation Counts?
cs.DL cs.DB
Eigenfactor.org, a journal evaluation tool that uses an iterative algorithm to weight citations (similar to the PageRank algorithm used by Google), has been proposed as a more valid method for calculating the impact of journals. The purpose of this brief communication is to investigate whether the principle of repeated improvement provides different rankings of journals than does a simple unweighted citation count (the method used by ISI).
0807.2680
Group Divisible Codes and Their Application in the Construction of Optimal Constant-Composition Codes of Weight Three
cs.IT cs.DM math.CO math.IT
The concept of group divisible codes, a generalization of group divisible designs with constant block size, is introduced in this paper. This new class of codes is shown to be useful in recursive constructions for constant-weight and constant-composition codes. Large classes of group divisible codes are constructed, which enable the determination of the sizes of optimal constant-composition codes of weight three (and specified distance), leaving only four cases undetermined. Previously, the sizes of constant-composition codes of weight three were known only for codes of sufficiently large length.
0807.2701
A Cutting Plane Method based on Redundant Rows for Improving Fractional Distance
cs.IT math.IT
In this paper, an idea from the cutting plane method is employed to improve the fractional distance of a given binary parity check matrix. The fractional distance is the minimum weight (with respect to the l1-distance) of vertices of the fundamental polytope. The cutting polytope is defined based on redundant rows of the parity check matrix and plays a key role in eliminating unnecessary fractional vertices of the fundamental polytope. We propose a greedy algorithm, and an efficient implementation of it, for improving the fractional distance based on the cutting plane method.
0807.2724
An Asymptotic Analysis of the MIMO BC under Linear Filtering
cs.IT math.IT
We investigate the MIMO broadcast channel in the high SNR regime when linear filtering is applied instead of dirty paper coding. Using a user-wise rate duality where the streams of every single user are not treated as self-interference as in the hitherto existing stream-wise rate dualities for linear filtering, we solve the weighted sum rate maximization problem of the broadcast channel in the dual multiple access channel. Thus, we can exactly quantify the asymptotic rate loss of linear filtering compared to dirty paper coding for any channel realization. Having converted the optimum covariance matrices to the broadcast channel by means of the duality, we observe that the optimal covariance matrices in the broadcast channel feature quite complicated but still closed form expressions although the respective transmit covariance matrices in the dual multiple access channel share a very simple structure. We immediately come to the conclusion that block-diagonalization is the asymptotically optimum transmit strategy in the broadcast channel. Out of the set of block-diagonalizing precoders, we present the one which achieves the largest sum rate and thus corresponds to the optimum solution found in the dual multiple access channel. Additionally, we quantify the ergodic rate loss of linear coding compared to dirty paper coding for Gaussian channels with correlations at the mobiles.
0807.2728
Iterative ('Turbo') Multiuser Detectors For Impulse Radio Systems
cs.IT math.IT
In recent years, there has been a growing interest in multiple access communication systems that spread their transmitted energy over very large bandwidths. These systems, which are referred to as ultra wide-band (UWB) systems, have various advantages over narrow-band and conventional wide-band systems. The importance of multiuser detection for achieving high data rates or low bit error rates in these systems has already been established in several studies. This paper presents iterative ('turbo') multiuser detection for impulse radio (IR) UWB systems over multipath channels. While this approach is demonstrated for UWB signals, it can also be used in other systems that use similar types of signaling. When applied to the type of signals used by UWB systems, the complexity of the proposed detector can be quite low. Also, two very low complexity implementations of the iterative multiuser detection scheme are proposed based on Gaussian approximation and soft interference cancellation. The performance of these detectors is assessed using simulations that demonstrate their favorable properties.
0807.2730
Position Estimation via Ultra-Wideband Signals
cs.IT math.IT
The high time resolution of ultra-wideband (UWB) signals facilitates very precise position estimation in many scenarios, which makes a variety of applications possible. This paper reviews the problem of position estimation in UWB systems, beginning with an overview of the basic structure of UWB signals and their positioning applications. This overview is followed by a discussion of various position estimation techniques, with an emphasis on time-based approaches, which are particularly suitable for UWB positioning systems. Practical issues arising in UWB signal design and hardware implementation are also discussed.
0807.2844
On the Performance of Selection Relaying
cs.IT math.IT
Interest in selection relaying is growing. The recent developments in this area have largely focused on information theoretic analyses such as outage performance. Some of these analyses are accurate only at high SNR regimes. In this paper error rate analyses that are sufficiently accurate over a wide range of SNR regimes are provided. The motivations for this work are that practical systems operate at far lower SNR values than those supported by the high SNR analysis. To enable designers to make informed decisions regarding network design and deployment, it is imperative that system performance is evaluated with a reasonable degree of accuracy over practical SNR regimes. Simulations have been used to corroborate the analytical results, as close agreement between the two is observed.
0807.2859
The Transport Capacity of a Wireless Network is a Subadditive Euclidean Functional
cs.IT math.IT
The transport capacity of a dense ad hoc network with n nodes scales like \sqrt(n). We show that the transport capacity divided by \sqrt(n) approaches a non-random limit with probability one when the nodes are i.i.d. distributed on the unit square. We prove that the transport capacity under the protocol model is a subadditive Euclidean functional and use the machinery of subadditive functions in the spirit of Steele to show the existence of the limit.
0807.2928
Visual Grouping by Neural Oscillators
cs.CV cs.NE
Distributed synchronization is known to occur at several scales in the brain, and has been suggested as playing a key functional role in perceptual grouping. State-of-the-art visual grouping algorithms, however, seem to give comparatively little attention to neural synchronization analogies. Based on the framework of concurrent synchronization of dynamic systems, simple networks of neural oscillators coupled with diffusive connections are proposed to solve visual grouping problems. Multi-layer algorithms and feedback mechanisms are also studied. The same algorithm is shown to achieve promising results on several classical visual grouping problems, including point clustering, contour integration and image segmentation.
0807.2972
DescribeX: A Framework for Exploring and Querying XML Web Collections
cs.DB
This thesis introduces DescribeX, a powerful framework that is capable of describing arbitrarily complex XML summaries of web collections, providing support for more efficient evaluation of XPath workloads. DescribeX permits the declarative description of document structure using all axes and language constructs in XPath, and generalizes many of the XML indexing and summarization approaches in the literature. DescribeX supports the construction of heterogeneous summaries where different document elements sharing a common structure can be declaratively defined and refined by means of path regular expressions on axes, or axis path regular expression (AxPREs). DescribeX can significantly help in the understanding of both the structure of complex, heterogeneous XML collections and the behaviour of XPath queries evaluated on them. Experimental results demonstrate the scalability of DescribeX summary refinements and stabilizations (the key enablers for tailoring summaries) with multi-gigabyte web collections. A comparative study suggests that using a DescribeX summary created from a given workload can produce query evaluation times orders of magnitude better than using existing summaries. DescribeX's light-weight approach of combining summaries with a file-at-a-time XPath processor can be a very competitive alternative, in terms of performance, to conventional fully-fledged XML query engines that provide DB-like functionality such as security, transaction processing, and native storage.
0807.2983
On Probability Distributions for Trees: Representations, Inference and Learning
cs.LG
We study probability distributions over free algebras of trees. Probability distributions can be seen as particular (formal power) tree series [Berstel et al 82, Esik et al 03], i.e. mappings from trees to a semiring K. A widely studied class of tree series is the class of rational (or recognizable) tree series, which can be defined either in an algebraic way or by means of multiplicity tree automata. We argue that the algebraic representation is very convenient for modeling probability distributions over a free algebra of trees. First, as in the string case, the algebraic representation allows the design of learning algorithms for the whole class of probability distributions defined by rational tree series. Note that learning algorithms for rational tree series correspond to learning algorithms for weighted tree automata where both the structure and the weights are learned. Second, the algebraic representation can be easily extended to deal with unranked trees (like XML trees, where a symbol may have an unbounded number of children). Both properties are particularly relevant for applications: nondeterministic automata are required for the inference problem to be relevant (recall that Hidden Markov Models are equivalent to nondeterministic string automata), and current applications for Web Information Extraction, Web Services and document processing consider unranked trees.
0807.3006
The rank convergence of HITS can be slow
cs.DS cs.IR
We prove that HITS, to "get right" h of the top k ranked nodes of an N>=2k node graph, can require h^(Omega(N h/k)) iterations (i.e. a substantial Omega(N h log(h)/k) matrix multiplications even with a "squaring trick"). Our proof requires no algebraic tools and is entirely self-contained.
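For readers less familiar with the iteration whose convergence rate is being bounded, here is a minimal pure-Python sketch of the HITS hub/authority power iteration (function and variable names are illustrative, not from the paper).

```python
def hits(edges, n, iters=50):
    """Power iteration for HITS hub/authority scores.
    `edges` is a list of directed (u, v) pairs over nodes 0..n-1."""
    hubs = [1.0] * n
    auths = [1.0] * n
    for _ in range(iters):
        # authority update: a(v) accumulates the hub scores pointing at v
        new_a = [0.0] * n
        for u, v in edges:
            new_a[v] += hubs[u]
        # hub update: h(u) accumulates the new authority scores u points at
        new_h = [0.0] * n
        for u, v in edges:
            new_h[u] += new_a[v]
        # L1-normalize to keep the scores bounded across iterations
        sa, sh = sum(new_a) or 1.0, sum(new_h) or 1.0
        auths = [x / sa for x in new_a]
        hubs = [x / sh for x in new_h]
    return hubs, auths

# Tiny example: nodes 1 and 2 both point at node 0,
# so node 0 should emerge as the top authority.
edges = [(1, 0), (2, 0), (1, 2)]
hubs, auths = hits(edges, 3)
```

The slow-convergence result above concerns how many such iterations are needed before the induced *ranking* of the top nodes stabilizes, which can be far more than what is needed for the scores themselves to look settled.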
0807.3050
Dimensionally Distributed Learning: Models and Algorithm
cs.IT math.IT
This paper introduces a framework for regression with dimensionally distributed data and a fusion center. A cooperative learning algorithm, the iterative conditional expectation algorithm (ICEA), is designed within this framework. The algorithm can effectively discover linear combinations of individual estimators trained by each agent without transferring and storing large amounts of data among the agents and the fusion center. The convergence of ICEA is explored. Specifically, for a two-agent system, each complete round of ICEA is guaranteed to be a non-expansive map on the function space of each agent. The advantages and limitations of ICEA are also discussed for data sets with various distributions and various hidden rules. Moreover, several techniques are designed to leverage the algorithm to effectively learn more complex hidden rules that are not linearly decomposable.
0807.3065
Sharp Bounds for Optimal Decoding of Low Density Parity Check Codes
cs.IT math.IT
Consider communication over a binary-input memoryless output-symmetric channel with low density parity check (LDPC) codes and maximum a posteriori (MAP) decoding. The replica method of spin glass theory allows one to conjecture an analytic formula for the average input-output conditional entropy per bit in the infinite block length limit. Montanari proved a lower bound for this entropy, in the case of LDPC ensembles with convex check degree polynomial, which matches the replica formula. Here we extend this lower bound to any irregular LDPC ensemble. The new feature of our work is an analysis of the second derivative of the conditional input-output entropy with respect to noise. A close relation arises between this second derivative and the correlation or mutual information of codebits. This allows us to extend the realm of the interpolation method; in particular, we show how channel symmetry allows us to control the fluctuations of the overlap parameters.
0807.3094
Energy-Efficient Resource Allocation in Multiuser MIMO Systems: A Game-Theoretic Framework
cs.IT cs.GT math.IT
This paper focuses on the cross-layer issue of resource allocation for energy efficiency in the uplink of a multiuser MIMO wireless communication system. Assuming that all of the transmitters and the uplink receiver are equipped with multiple antennas, the situation considered is that in which each terminal is allowed to vary its transmit power, beamforming vector, and uplink receiver in order to maximize its own utility, which is defined as the ratio of data throughput to transmit power; the case in which non-linear interference cancellation is used at the receiver is also investigated. Applying a game-theoretic formulation, several non-cooperative games for utility maximization are thus formulated, and their performance is compared in terms of achieved average utility, achieved average SINR and average transmit power at the Nash equilibrium. Numerical results show that the use of the proposed cross-layer resource allocation policies brings remarkable advantages to the network performance.
0807.3096
Stochastic Maximum Principle for a PDEs with noise and control on the boundary
math.PR cs.SY math.OC
In this paper we prove necessary conditions for optimality of a stochastic control problem for a class of stochastic partial differential equations that are controlled through the boundary. This kind of problem can be interpreted as a stochastic control problem for an evolution system in a Hilbert space. The regularity of the solution of the adjoint equation, which is a backward stochastic equation in infinite dimension, plays a crucial role in the formulation of the maximum principle.
0807.3097
Energy-Efficient Power Control in Multipath CDMA Channels via Large System Analysis
cs.IT cs.GT math.IT
This paper is focused on the design and analysis of power control procedures for the uplink of multipath code-division-multiple-access (CDMA) channels based on the large system analysis (LSA). Using the tools of LSA, a new decentralized power control algorithm aimed at energy efficiency maximization and requiring very little prior information on the interference background is proposed; moreover, it is also shown that LSA can be used to predict with good accuracy the performance and operational conditions of a large network operating at the equilibrium over a multipath channel, i.e. the power, signal-to-interference-plus-noise ratio (SINR) and utility profiles across users, wherein the utility is defined as the number of bits reliably delivered to the receiver for each energy-unit used for transmission. Additionally, an LSA-based performance comparison among linear receivers is carried out in terms of achieved energy efficiency at the equilibrium. Finally, the problem of the choice of the utility-maximizing training length is also considered. Numerical results show a very satisfactory agreement of the theoretical analysis with simulation results obtained with reference to systems with finite (and not so large) numbers of users.
0807.3156
Algorithmic randomness and splitting of supermartingales
cs.IT math.IT
Randomness in the sense of Martin-L\"of can be defined in terms of lower semicomputable supermartingales. We show that such a supermartingale cannot be replaced by a pair of supermartingales that bet only on the even bits (the first one) and on the odd bits (the second one) knowing all preceding bits.
0807.3198
On algebras admitting a complete set of near weights, evaluation codes and Goppa codes
cs.IT math.IT
In 1998 Hoholdt, van Lint and Pellikaan introduced the concept of a ``weight function'' defined on an F_q-algebra and used it to construct linear codes, obtaining among them the algebraic-geometric (AG) codes supported on one point. Later it was proved by Matsumoto that all codes produced using a weight function are actually AG codes supported on one point. Recently, ``near weight functions'' (a generalization of weight functions), also defined on an F_q-algebra, were introduced to study codes supported on two points. In this paper we show that an algebra admits a set of m near weight functions having a compatibility property, namely a ``complete set'', if and only if it is the ring of regular functions of an affine geometrically irreducible algebraic curve defined over F_q whose points at infinity have a total of m rational branches. The codes produced using the near weight functions are then exactly the AG codes supported on m points. A formula for the minimum distance of these codes is presented, with examples showing that in some situations it compares favorably with the usual Goppa bound.
0807.3212
Construction of Large Constant Dimension Codes With a Prescribed Minimum Distance
cs.IT cs.DM math.CO math.IT
In this paper we construct constant dimension space codes with prescribed minimum distance. There has been increased interest in space codes since a paper by Koetter and Kschischang, where they gave an application in network coding. There is also a connection to the theory of designs over finite fields. We modify a method of Braun, Kerber and Laue, which they used for the construction of designs over finite fields, to construct space codes. Using this approach we found many new constant dimension space codes with a larger number of codewords than previously known codes. We finally give a table of the best constant dimension space codes found.
0807.3222
The two-user Gaussian interference channel: a deterministic view
cs.IT math.IT
This paper explores the two-user Gaussian interference channel through the lens of a natural deterministic channel model. The main result is that the deterministic channel uniformly approximates the Gaussian channel, the capacity regions differing by a universal constant. The problem of finding the capacity of the Gaussian channel to within a constant error is therefore reduced to that of finding the capacity of the far simpler deterministic channel. Thus, the paper provides an alternative derivation of the recent constant gap capacity characterization of Etkin, Tse, and Wang. Additionally, the deterministic model gives significant insight towards the Gaussian channel.
0807.3223
The NAO humanoid: a combination of performance and affordability
cs.RO
This article presents the design of the autonomous humanoid robot called NAO that is built by the French company Aldebaran-Robotics. With a height of 0.57 m and a weight of about 4.5 kg, this innovative robot is lightweight and compact. It distinguishes itself from its existing Japanese, American, and other counterparts thanks to its pelvis kinematics design, its proprietary actuation system based on brush DC motors, and its electronic, computing, and distributed software architectures. This robot has been designed to be affordable without sacrificing quality and performance. It is an open and easy-to-handle platform where the user can change all the embedded system software or just add some applications to make the robot adopt specific behaviours. The robot's head and forearms are modular and can be changed to promote further evolution. The comprehensive and functional design is one of the reasons that helped select NAO to replace the AIBO quadrupeds in the 2008 RoboCup standard league.
0807.3225
Exploiting Bird Locomotion Kinematics Data for Robotics Modeling
cs.RO
We present here the results of an analysis carried out by biologists and roboticists with the aim of modeling bird locomotion kinematics for robotics purposes. The aim was to develop a bio-inspired kinematic model of the bird leg from biological data. We first acquired and processed kinematic data for sagittal and top views obtained by X-ray radiography of walking quails. Data processing involved filtering and specific three-dimensional data reconstruction, as the two-dimensional views cannot be synchronized. We then designed a robotic model of a bird-like leg based on a kinematic analysis of the biological data. Angular velocity vectors were calculated to define the number of degrees of freedom (DOF) at each joint and the orientation of the rotation axes.
0807.3287
Constructing a Knowledge Base for Gene Regulatory Dynamics by Formal Concept Analysis Methods
q-bio.MN cs.AI math.LO
Our aim is to build a set of rules such that reasoning over temporal dependencies within gene regulatory networks is possible. The underlying transitions may be obtained by discretizing observed time series, or they are generated based on existing knowledge, e.g. by Boolean networks or their nondeterministic generalization. We use the mathematical discipline of formal concept analysis (FCA), which has been applied successfully in domains such as knowledge representation, data mining and software engineering. The attribute exploration algorithm enables an expert, or a supporting computer program, to decide about the validity of a minimal set of implications and thus to construct a sound and complete knowledge base, from which all valid implications relating to the selected properties of a set of genes are derivable. We present results of our method for the initiation of sporulation in Bacillus subtilis. However, the formal structures are presented in full generality, so the approach may be adapted to signal transduction or metabolic networks, as well as to discrete temporal transitions in many biological and nonbiological areas.
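The validity check at the heart of attribute exploration is simple to state: an implication premise → conclusion holds in a formal context iff every object possessing all premise attributes also possesses all conclusion attributes. Below is a minimal, self-contained sketch; the dict-based context encoding is an illustrative assumption, not the paper's representation.

```python
def implication_holds(context, premise, conclusion):
    """context maps each object to its set of attributes.
    The implication premise -> conclusion is valid in the context iff
    every object whose attributes include `premise` also includes
    `conclusion` (vacuously true if no object satisfies the premise)."""
    return all(conclusion <= attrs
               for attrs in context.values()
               if premise <= attrs)
```

During attribute exploration, implications that fail are refuted by a counterexample object, which is then added to the context; implications confirmed by the expert become part of the knowledge base.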
0807.3332
Energy-efficient Scheduling of Delay Constrained Traffic over Fading Channels
cs.IT math.IT
A delay-constrained scheduling problem for point-to-point communication is considered: a packet of $B$ bits must be transmitted by a hard deadline of $T$ slots over a time-varying channel. The transmitter/scheduler must determine how many bits to transmit, or equivalently how much energy to transmit with, during each time slot based on the current channel quality and the number of unserved bits, with the objective of minimizing expected total energy. In order to focus on the fundamental scheduling problem, it is assumed that no other packets are scheduled during this time period and no outage is allowed. Assuming transmission at capacity of the underlying Gaussian noise channel, a closed-form expression for the optimal scheduling policy is obtained for the case $T=2$ via dynamic programming; for $T>2$, the optimal policy can only be numerically determined. Thus, the focus of the work is on derivation of simple, near-optimal policies based on intuition from the $T=2$ solution and the structure of the general problem. The proposed bit-allocation policies consist of a linear combination of a delay-associated term and an opportunistic (channel-aware) term. In addition, a variation of the problem in which the entire packet must be transmitted in a single slot is studied, and a channel-threshold policy is shown to be optimal.
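A hedged numerical sketch of the $T=2$ trade-off described above: transmit $b_1$ bits now at a known gain, and the remaining $B-b_1$ bits in slot 2 at an uncertain gain, minimizing total expected energy. The capacity-inversion cost $(2^b - 1)/h$ and all names are illustrative simplifications assumed for the sketch, not the paper's exact formulation.

```python
def energy(bits, gain):
    """Energy needed to send `bits` in one slot at channel gain `gain`,
    inverting a normalized Gaussian-channel capacity C = log2(1 + gain*e)."""
    return (2 ** bits - 1) / gain

def optimal_first_slot_bits(B, h1, gain_samples, grid=200):
    """Brute-force the two-slot trade-off: spend energy(b1, h1) now and
    the expected cost of the remaining B - b1 bits over sampled slot-2
    gains; return the grid point minimizing the total."""
    best_b1, best_cost = 0.0, float("inf")
    for i in range(grid + 1):
        b1 = B * i / grid
        expected_tail = sum(energy(B - b1, h2) for h2 in gain_samples) / len(gain_samples)
        cost = energy(b1, h1) + expected_tail
        if cost < best_cost:
            best_b1, best_cost = b1, cost
    return best_b1, best_cost
```

The sketch reproduces the expected qualitative behavior: with identical gains in both slots the bits split evenly, while a much stronger first slot pulls the whole packet forward.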
0807.3337
Algebraic constructions of LDPC codes with no short cycles
math.RA cs.IT math.IT
An algebraic group ring method for constructing codes with no short cycles in the check matrix is derived. It is shown that the matrix of a group ring element has no short cycles if and only if the collection of group differences of this element has no repeats. When applied to elements in the group ring with small support this gives a general method for constructing and analysing low density parity check (LDPC) codes with no short cycles from group rings. Examples of LDPC codes with no short cycles are constructed from group ring elements and these are simulated and compared with known LDPC codes, including those adopted for wireless standards.
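The "no repeated group differences" criterion above is easy to test computationally. Below is a minimal sketch for the cyclic case $Z_n$ (i.e. circulant check matrices); the function name and list-based support encoding are illustrative assumptions.

```python
from itertools import permutations

def differences_are_distinct(support, n):
    """For a group ring element of F_2[Z_n] with the given support,
    check that all ordered pairwise differences i - j (mod n), i != j,
    are distinct. By the criterion in the text, this holds exactly when
    the corresponding circulant check matrix has no short cycles."""
    diffs = [(i - j) % n for i, j in permutations(support, 2)]
    return len(diffs) == len(set(diffs))
```

For instance, in $Z_7$ the support $\{0, 1, 3\}$ (a perfect difference set) passes the test, while $\{0, 1, 2\}$ fails because the difference 1 occurs twice.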