Columns: id (string, 9–16 chars); title (string, 4–278 chars); categories (string, 5–104 chars); abstract (string, 6–4.09k chars)
0908.2676
Deterministic Construction of Binary, Bipolar and Ternary Compressed Sensing Matrices
cs.IT math.IT
In this paper we establish the connection between Optical Orthogonal Codes (OOC) and binary compressed sensing matrices. We also introduce deterministic bipolar $m\times n$ RIP-fulfilling $\pm 1$ matrices of order $k$ such that $m\leq\mathcal{O}\big(k (\log_2 n)^{\frac{\log_2 k}{\ln \log_2 k}}\big)$. The columns of these matrices are binary BCH code vectors in which the zeros are replaced by -1. Since the RIP is established by means of coherence, simple greedy algorithms such as Matching Pursuit are able to recover the sparse solution from the noiseless samples. Due to the cyclic property of the BCH codes, we show that the FFT algorithm can be employed in the reconstruction methods to considerably reduce the computational complexity. In addition, we combine the binary and bipolar matrices to form ternary sensing matrices ($\{0,1,-1\}$ elements) that satisfy the RIP condition.
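The zero-to-minus-one mapping and the coherence argument above can be illustrated with a toy code. Here the [7,4] Hamming code stands in for the paper's BCH codes, and the restriction to codewords whose first bit is 0 (which removes complementary column pairs) is my own simplification, not the paper's construction:

```python
# Toy sketch: turn binary codewords into bipolar (+1/-1) sensing-matrix
# columns and compute their mutual coherence. The [7,4] Hamming code is
# used purely for illustration; the paper uses BCH codes.

def hamming_7_4_codewords():
    # Generator matrix of the [7,4] Hamming code (one common convention).
    G = [
        [1, 0, 0, 0, 1, 1, 0],
        [0, 1, 0, 0, 1, 0, 1],
        [0, 0, 1, 0, 0, 1, 1],
        [0, 0, 0, 1, 1, 1, 1],
    ]
    words = []
    for m in range(16):
        bits = [(m >> i) & 1 for i in range(4)]
        words.append(tuple(sum(b * G[i][j] for i, b in enumerate(bits)) % 2
                           for j in range(7)))
    return words

def coherence(columns):
    # Maximum absolute normalized inner product between distinct columns.
    n = len(columns[0])
    best = 0.0
    for i in range(len(columns)):
        for j in range(i + 1, len(columns)):
            dot = sum(a * b for a, b in zip(columns[i], columns[j]))
            best = max(best, abs(dot) / n)
    return best

# Map 0 -> -1 to get bipolar columns; drop the all-zero codeword and
# keep only codewords starting with 0 to avoid complementary pairs.
cols = [tuple(1 if b else -1 for b in w)
        for w in hamming_7_4_codewords() if any(w) and w[0] == 0]
mu = coherence(cols)
print(len(cols), mu)  # 7 columns with coherence 1/7
```

Low coherence is what lets greedy recovery (e.g. Matching Pursuit) succeed; the paper's BCH-based columns achieve this at much larger scales.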
0908.2741
B-Rank: A top N Recommendation Algorithm
physics.data-an cs.IR
In this paper B-Rank, an efficient ranking algorithm for recommender systems, is proposed. B-Rank is based on a random walk model on hypergraphs. Depending on the setup, B-Rank outperforms other state-of-the-art algorithms in terms of precision, recall (19% - 50%), and inter-list diversity (20% - 60%). B-Rank captures well the difference between popular and niche objects. The proposed algorithm produces very promising results for sparse and dense voting matrices. Furthermore, a recommendation list update algorithm is introduced to cope with new votes. This technique significantly reduces computational complexity. The implementation of the algorithm is simple, since B-Rank needs no parameter tuning.
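The random-walk idea can be illustrated with a generic two-step walk on a user-item voting structure. This bipartite sketch is only a stand-in for B-Rank's actual hypergraph model, and the `votes` data is invented:

```python
# Toy two-step random walk (item -> voter -> item) on a voting matrix,
# used only to illustrate walk-based ranking; B-Rank's actual model
# operates on hypergraphs and differs in detail.

def walk_scores(votes, user):
    # votes[u] is the set of items user u has voted for.
    items = set().union(*votes)
    scores = {j: 0.0 for j in items}
    seeds = votes[user]
    for i in seeds:
        voters_i = [u for u, v in enumerate(votes) if i in v]
        for u in voters_i:
            for j in votes[u]:
                # uniform step i -> u, then uniform step u -> j
                scores[j] += 1.0 / (len(seeds) * len(voters_i) * len(votes[u]))
    return scores

votes = [{0, 1}, {1, 2}, {0, 2}]
scores = walk_scores(votes, user=0)
# Recommend the highest-scoring item the user has not voted for.
best = max((j for j in scores if j not in votes[0]), key=scores.get)
print(best, scores)
```

The scores form a probability distribution over items, so popular and niche objects can be compared on a common scale.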
0908.2744
XTile: An Error-Correction Package for DNA Self-Assembly
cs.IT math.IT
Self-assembly is a process by which supramolecular species form spontaneously from their components. This process is ubiquitous throughout life chemistry and is central to biological information processing. It has been predicted that in the future self-assembly will become an important engineering discipline, combining the fields of biomolecular computation, nanotechnology and medicine. However, error control is a key challenge in realizing the potential of self-assembly. Recently, many authors have proposed combinatorial error-correction schemes with a close analogy to coding theory, such as Winfree's proofreading scheme, its generalizations by Chen and Goel, and the compact scheme of Reif, Sahu and Yin. In this work, we present an error-correction computational tool, XTile, that can be used to create input files for the Xgrow simulator of Winfree by providing the design logic of the tiles; it also allows the user to apply the proofreading, snake and compact error-correction schemes.
0908.2828
A Rate-Distortion Perspective on Multiple Decoding Attempts for Reed-Solomon Codes
cs.IT math.IT
Recently, a number of authors have proposed decoding schemes for Reed-Solomon (RS) codes based on multiple trials of a simple RS decoding algorithm. In this paper, we present a rate-distortion (R-D) approach to analyze these multiple-decoding algorithms for RS codes. This approach is first used to understand the asymptotic performance-versus-complexity trade-off of multiple error-and-erasure decoding of RS codes. By defining an appropriate distortion measure between an error pattern and an erasure pattern, the condition for a single error-and-erasure decoding to succeed reduces to a form where the distortion is compared to a fixed threshold. Finding the best set of erasure patterns for multiple decoding trials then turns out to be a covering problem which can be solved asymptotically by rate-distortion theory. Next, this approach is extended to analyze multiple algebraic soft-decision (ASD) decoding of RS codes. Both analytical and numerical computations of the R-D functions for the corresponding distortion measures are discussed. Simulation results show that the proposed algorithms using this approach perform better than other algorithms with the same complexity.
0908.2847
The Single Source Two Terminal Network with Network Coding
cs.IT cs.NI math.IT
We consider a communication network with a single source that has a set of messages and two terminals where each terminal is interested in an arbitrary subset of messages at the source. A tight capacity region for this problem is demonstrated. We show by a simple graph-theoretic procedure that any such problem can be solved by performing network coding on the subset of messages that are requested by both the terminals and that routing is sufficient for transferring the remaining messages.
0908.2941
Delay-Sensitive Distributed Power and Transmission Threshold Control for S-ALOHA Network with Finite State Markov Fading Channels
cs.IT math.IT
In this paper, we consider the delay-sensitive power and transmission threshold control design in an S-ALOHA network with FSMC fading channels. The random access system consists of an access point with K competing users, each of which has access to the local channel state information (CSI) and queue state information (QSI) as well as the common feedback (ACK/NAK/Collision) from the access point. We seek to derive the delay-optimal control policy (composed of threshold and power control). The optimization problem belongs to the class of memoryless-policy K-agent infinite-horizon decentralized Markov decision processes (DEC-MDPs), and finding the optimal policy is shown to be computationally intractable. To obtain a feasible and low-complexity solution, we recast the optimization problem into two subproblems, namely the power control problem and the threshold control problem. For a given threshold control policy, the power control problem is decomposed into a reduced-state MDP for a single user, so that the overall complexity is O(NJ), where N and J are the buffer size and the cardinality of the CSI states. For the threshold control problem, we exploit some special structure of the collision channel and the common feedback information to derive a low-complexity solution. The delay performance of the proposed design is shown to have substantial gain relative to conventional throughput-optimal approaches for S-ALOHA.
0908.2984
Explicit Codes Minimizing Repair Bandwidth for Distributed Storage
cs.IT math.IT
We consider the setting of data storage across n nodes in a distributed manner. A data collector (DC) should be able to reconstruct the entire data by connecting to any k out of the n nodes and downloading all the data stored in them. When a node fails, it has to be regenerated using the existing nodes. In a recent paper, Wu et al. obtained an information-theoretic lower bound on the repair bandwidth. There has also been additional interest in storing data in systematic form, as no post-processing is required when the DC connects to k systematic nodes. Because of their preferred status, there is a need to regenerate any systematic node quickly and exactly. Replacement of a failed node by an exact replica is termed Exact Regeneration. In this paper, we consider the problem of minimizing the repair bandwidth for exact regeneration of the systematic nodes. The file to be stored is of size B and each node can store alpha = B/k units of data. A failed systematic node is regenerated by downloading beta units of data each from d existing nodes. We give a lower bound on the repair bandwidth for exact regeneration of the systematic nodes which matches the bound given by Wu et al. For d >= 2k-1 we give an explicit code construction which minimizes the repair bandwidth when the existing k-1 systematic nodes participate in the regeneration. We show the existence and construction of codes that achieve the bound for d >= 2k-3. Here we also establish the necessity of interference alignment. We prove that the bound is not achievable for d <= 2k-4 when beta = 1. We also give a coding scheme which can be used for any d and k, which is optimal for d >= 2k-1.
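The bandwidth figures discussed above follow the widely cited cut-set bound (Wu et al., and Dimakis et al.): at the minimum-storage point, each of the d helper nodes contributes beta = B/(k(d-k+1)) units. A small calculator, with illustrative parameters of my own choosing:

```python
from fractions import Fraction

def msr_repair_bandwidth(B, k, d):
    # Cut-set lower bound on total exact-repair traffic at the
    # minimum-storage point: d helpers each send
    # beta = B / (k * (d - k + 1)) units of data.
    assert d >= k
    beta = Fraction(B, k * (d - k + 1))
    return d * beta

# Example: file of B = 12 units, k = 3, repair from d = 5 helpers.
print(msr_repair_bandwidth(12, 3, 5))  # 20/3, well below the naive B = 12
```

The point of regenerating codes is exactly this gap: naive repair downloads the whole file (B units), while the bound scales it down by roughly a factor of d-k+1 per helper.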
0908.3025
Techniques for Highly Multiobjective Optimisation: Some Nondominated Points are Better than Others
cs.NE
The research area of evolutionary multiobjective optimization (EMO) is reaching better understandings of the properties and capabilities of EMO algorithms, and accumulating much evidence of their worth in practical scenarios. An urgent emerging issue is that the favoured EMO algorithms scale poorly when problems have many (e.g. five or more) objectives. One of the chief reasons for this is believed to be that, in many-objective EMO search, populations are likely to be largely composed of nondominated solutions. In turn, this means that the commonly-used algorithms cannot distinguish between these for selective purposes. However, there are methods that can be used validly to rank points in a nondominated set, and may therefore usefully underpin selection in EMO search. Here we discuss and compare several such methods. Our main finding is that simple variants of the often-overlooked Average Ranking strategy usually outperform other methods tested, covering problems with 5-20 objectives and differing amounts of inter-objective correlation.
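The Average Ranking strategy the abstract favors is simple enough to state in a few lines. The toy objective vectors below are invented for illustration (all objectives minimized; ties would be broken by sort order):

```python
# Minimal sketch of the Average Ranking strategy for many-objective
# selection: score each solution by the sum (equivalently, average) of
# its ranks on every objective, lower being better.

def average_ranking(points):
    # points: list of objective vectors, all objectives to be minimized.
    n, m = len(points), len(points[0])
    scores = [0] * n
    for j in range(m):
        order = sorted(range(n), key=lambda i: points[i][j])
        for rank, i in enumerate(order):
            scores[i] += rank
    return scores

pts = [(1, 9, 3), (2, 2, 2), (9, 1, 9)]
print(average_ranking(pts))  # [3, 2, 4]: point 1 wins with the lowest rank sum
```

Note that all three points here are mutually nondominated, yet Average Ranking still orders them, which is precisely the property the paper exploits for many-objective selection.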
0908.3091
Computational Understanding and Manipulation of Symmetries
cs.AI cs.MS
For natural and artificial systems with some symmetry structure, computational understanding and manipulation can be achieved without learning by exploiting the algebraic structure. Here we describe this algebraic coordinatization method and apply it to permutation puzzles. Coordinatization yields a structural understanding, not just solutions for the puzzles.
0908.3098
Throughput of Cellular Uplink with Dynamic User Activity and Cooperative Base-Stations
cs.IT math.IT
The throughput of a linear cellular uplink with a random number of users, different power control schemes, and cooperative base stations is considered for non-fading Gaussian channels in the large-system limit where the number of cells is large. The analysis is facilitated by establishing an analogy between the per-cell throughput of the cellular channel with joint multi-cell processing (MCP), and the rate of a deterministic inter-symbol interference (ISI) channel with flat fading. It is shown that, under certain conditions, the dynamics of cellular systems (i.e., a random number of users coupled with a given power control scheme) can be interpreted, as far as the uplink throughput is concerned, as the flat fading process of the equivalent ISI channel. The results are used to demonstrate the benefits of MCP over the conventional single-cell processing approach as a function of various system parameters in the presence of random user activity.
0908.3148
Another Look at Quantum Neural Computing
cs.NE cs.AI
The term quantum neural computing indicates a unity in the functioning of the brain. It assumes that the neural structures perform classical processing and that the virtual particles associated with the dynamical states of the structures define the underlying quantum state. We revisit the concept and also summarize new arguments related to the learning modes of the brain in response to sensory input that may be aggregated in three types: associative, reorganizational, and quantum. The associative and reorganizational types are quite apparent based on experimental findings; it is much harder to establish that the brain as an entity exhibits quantum properties. We argue that the reorganizational behavior of the brain may be viewed as inner adjustment corresponding to its quantum behavior at the system level. Not only neural structures but their higher abstractions also may be seen as whole entities. We consider the dualities associated with the behavior of the brain and how these dualities are bridged.
0908.3162
Practical approach to programmable analog circuits with memristors
physics.ins-det cond-mat.mes-hall cs.AI
We suggest an approach for using memristors (resistors with memory) in programmable analog circuits. Our idea consists of a circuit design in which low voltages are applied to memristors during their operation as analog circuit elements and high voltages are used to program the memristors' states. This way, as was demonstrated in recent experiments, the state of the memristors does not essentially change during analog-mode operation. As an example of our approach, we have built several programmable analog circuits demonstrating memristor-based programming of threshold, gain and frequency.
0908.3166
Remarks on the Criteria of Constructing MIMO-MAC DMT Optimal Codes
cs.IT math.IT
In this paper we investigate the criteria proposed by Coronel et al. for constructing MIMO-MAC DMT optimal codes over several classes of fading channels. We first give a counterexample showing that their DMT result is not correct when the channel is frequency-selective. For the case of symmetric MIMO-MAC flat fading channels, their DMT result reduces to exactly the same as that derived by Tse et al., and we therefore focus on their criteria for constructing MAC-DMT optimal codes, especially when the number of receive antennas is sufficiently large. In this case, we show that their criterion is equivalent to requiring the codes of any subset of users to satisfy a joint non-vanishing determinant criterion when the system operates in the antenna pooling regime. Finally, an upper bound on the product of minimum eigenvalues of the difference matrices is provided, and is used to show that any MIMO-MAC codes satisfying their criterion can exist only when the target multiplexing gain is small.
0908.3171
On the Optimality of Beamforming for Multi-User MISO Interference Channels with Single-User Detection
cs.IT math.IT
For a multi-user interference channel with multi-antenna transmitters and single-antenna receivers, by restricting each receiver to a single-user detector, computing the largest achievable rate region amounts to solving a family of non-convex optimization problems. Recognizing the intrinsic connection between the signal power at the intended receiver and the interference power at the unintended receiver, the original family of non-convex optimization problems is converted into a new family of convex optimization problems. It is shown that, for such interference channels with each receiver implementing single-user detection, transmitter beamforming can achieve all boundary points of the achievable rate region.
0908.3184
Location of Single Neuron Memories in a Hebbian Network
cs.NE
This paper reports the results of an experiment on the use of Kak's B-Matrix approach to spreading activity in a Hebbian neural network. Specifically, it concentrates on the memory retrieval from single neurons and compares the performance of the B-Matrix approach to that of the traditional approach.
0908.3212
Quantifying Rational Belief
physics.data-an cs.AI
Some criticisms that have been raised against the Cox approach to probability theory are addressed. Should we use a single real number to measure a degree of rational belief? Can beliefs be compared? Are the Cox axioms obvious? Are there counterexamples to Cox? Rather than justifying Cox's choice of axioms we follow a different path and derive the sum and product rules of probability theory as the unique (up to regraduations) consistent representations of the Boolean AND and OR operations.
0908.3234
Overlapped Chunked Network Coding
cs.IT math.IT
Network coding is known to improve the throughput and the resilience to losses in most network scenarios. In a practical network scenario, however, the accurate modeling of the traffic is often too complex and/or infeasible. The goal is thus to design codes that perform close to the capacity of any network (with arbitrary traffic) efficiently. In this context, random linear network codes are known to be capacity-achieving while requiring a decoding complexity quadratic in the message length. Chunked Codes (CC) were proposed by Maymounkov et al. to improve the computational efficiency of random codes by partitioning the message into a number of non-overlapping chunks. CC can also be capacity-achieving but have a lower encoding/decoding complexity at the expense of slower convergence to the capacity. In this paper, we propose and analyze a generalized version of CC called Overlapped Chunked Codes (OCC) in which chunks are allowed to overlap. Our theoretical analysis and simulation results show that compared to CC, OCC can achieve the capacity with a faster speed while maintaining almost the same advantage in computational efficiency.
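The difference between CC and OCC comes down to whether consecutive chunks share symbols. A minimal sketch of the partitioning step (the chunk size and overlap below are arbitrary choices of mine, not the parameters analyzed in the paper):

```python
# Sketch of chunking with overlap: split a message of symbols into
# fixed-size chunks where consecutive chunks share `overlap` symbols.

def make_chunks(message, chunk_size, overlap):
    assert 0 <= overlap < chunk_size
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(message) - overlap, step):
        chunks.append(message[start:start + chunk_size])
    return chunks

msg = list(range(10))
print(make_chunks(msg, 4, 2))              # overlapping chunks (OCC-style)
print(make_chunks(list(range(8)), 4, 0))   # non-overlapping (classic CC)
```

Encoding then draws random linear combinations within each chunk; the shared symbols let a decoded chunk seed the decoding of its neighbors, which is the intuition behind OCC's faster convergence to capacity.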
0908.3252
Non-quadratic convex regularized reconstruction of MR images from spiral acquisitions
cs.CV cs.CE
Combining fast MR acquisition sequences and high resolution imaging is a major issue in dynamic imaging. Reducing the acquisition time can be achieved by using non-Cartesian and sparse acquisitions. The reconstruction of MR images from these measurements is generally carried out using gridding that interpolates the missing data to obtain a dense Cartesian k-space filling. The MR image is then reconstructed using a conventional Fast Fourier Transform. The estimation of the missing data unavoidably introduces artifacts in the image that remain difficult to quantify. A general reconstruction method is proposed to take into account these limitations. It can be applied to any sampling trajectory in k-space, Cartesian or not, and specifically takes into account the exact location of the measured data, without making any interpolation of the missing data in k-space. Information about the expected characteristics of the imaged object is introduced to preserve the spatial resolution and improve the signal to noise ratio in a regularization framework. The reconstructed image is obtained by minimizing a non-quadratic convex objective function. An original rewriting of this criterion is shown to strongly improve the reconstruction efficiency. Results on simulated data and on a real spiral acquisition are presented and discussed.
0908.3265
Rate Constrained Random Access over a Fading Channel
cs.GT cs.LG cs.NI
In this paper, we consider uplink transmissions involving multiple users communicating with a base station over a fading channel. We assume that the base station does not coordinate the transmissions of the users and hence the users employ random access communication. The situation is modeled as a non-cooperative repeated game with incomplete information. Each user attempts to minimize its long term power consumption subject to a minimum rate requirement. We propose a two timescale stochastic gradient algorithm (TTSGA) for tuning the users' transmission probabilities. The algorithm includes a 'waterfilling threshold update mechanism' that ensures that the rate constraints are satisfied. We prove that under the algorithm, the users' transmission probabilities converge to a Nash equilibrium. Moreover, we also prove that the rate constraints are satisfied; this is also demonstrated using simulation studies.
0908.3280
On the Relationship between Trading Network and WWW Network: A Preferential Attachment Perspective
cs.IR cs.CE
This paper describes the relationship between the trading network and the WWW network from the perspective of the preferential attachment mechanism. This mechanism is known to be the underlying principle of network evolution and has been incorporated into the formulation of two famous web page ranking algorithms, PageRank and HITS. We point out the differences between the trading network and the WWW network with respect to this mechanism, derive the formulation of a HITS-based ranking algorithm for the trading network as a direct consequence of these differences, and apply the same framework back to the HITS formulation, which turns out to yield a technique for accelerating its convergence.
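For reference, the classic HITS power iteration that the abstract builds on can be sketched as follows; the tiny link graph is invented, and this is the textbook WWW form, not the trading-network variant derived in the paper:

```python
def hits(adj, iters=50):
    # adj[u] lists the pages that page u links to. Classic HITS:
    # authority(v) = sum of hub scores of pages linking to v,
    # hub(u) = sum of authority scores of the pages u links to,
    # each renormalized every iteration.
    n = len(adj)
    hub, auth = [1.0] * n, [1.0] * n
    for _ in range(iters):
        auth = [sum(hub[u] for u in range(n) if v in adj[u]) for v in range(n)]
        norm = max(auth) or 1.0
        auth = [a / norm for a in auth]
        hub = [sum(auth[v] for v in adj[u]) for u in range(n)]
        norm = max(hub) or 1.0
        hub = [h / norm for h in hub]
    return hub, auth

# Page 0 links to 1 and 2; page 1 links to 2; page 2 links to nothing.
hub, auth = hits([[1, 2], [2], []])
print(hub, auth)
```

In this example page 2, which everyone links to, ends up with the top authority score, while page 0, which links to everything, gets the top hub score.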
0908.3359
Geometric Analysis of the Conformal Camera for Intermediate-Level Vision and Perisaccadic Perception
cs.CV cs.RO
A binocular system developed by the author in terms of projective Fourier transform (PFT) of the conformal camera, which numerically integrates the head, eyes, and visual cortex, is used to process visual information during saccadic eye movements. Although we make three saccades per second at the eyeball's maximum speed of 700 deg/sec, our visual system accounts for these incisive eye movements to produce a stable percept of the world. This visual constancy is maintained by neuronal receptive field shifts in various retinotopically organized cortical areas prior to saccade onset, giving the brain access to visual information from the saccade's target before the eyes' arrival. It integrates visual information acquisition across saccades. Our modeling utilizes basic properties of PFT. First, PFT is computable by FFT in complex logarithmic coordinates that approximate the retinotopy. Second, a translation in retinotopic (logarithmic) coordinates, modeled by the shift property of the Fourier transform, remaps the presaccadic scene into a postsaccadic reference frame. It also accounts for the perisaccadic mislocalization observed by human subjects in laboratory experiments. Because our modeling involves cross-disciplinary areas of conformal geometry, abstract and computational harmonic analysis, computational vision, and visual neuroscience, we include the corresponding background material and elucidate how these different areas interwove in our modeling of primate perception. In particular, we present the physiological and behavioral facts underlying the neural processes related to our modeling. We also emphasize the conformal camera's geometry and discuss how it is uniquely useful in the intermediate-level vision computational aspects of natural scene understanding.
0908.3380
Construction of Hilbert Transform Pairs of Wavelet Bases and Gabor-like Transforms
cs.IT cs.CV math.IT
We propose a novel method for constructing Hilbert transform (HT) pairs of wavelet bases based on a fundamental approximation-theoretic characterization of scaling functions--the B-spline factorization theorem. In particular, starting from well-localized scaling functions, we construct HT pairs of biorthogonal wavelet bases of L^2(R) by relating the corresponding wavelet filters via a discrete form of the continuous HT filter. As a concrete application of this methodology, we identify HT pairs of spline wavelets of a specific flavor, which are then combined to realize a family of complex wavelets that resemble the optimally-localized Gabor function for sufficiently large orders. Analytic wavelets, derived from the complexification of HT wavelet pairs, exhibit a one-sided spectrum. Based on the tensor-product of such analytic wavelets, and, in effect, by appropriately combining four separable biorthogonal wavelet bases of L^2(R^2), we then discuss a methodology for constructing 2D directional-selective complex wavelets. In particular, analogous to the HT correspondence between the components of the 1D counterpart, we relate the real and imaginary components of these complex wavelets using a multi-dimensional extension of the HT--the directional HT. Next, we construct a family of complex spline wavelets that resemble the directional Gabor functions proposed by Daugman. Finally, we present an efficient FFT-based filterbank algorithm for implementing the associated complex wavelet transform.
0908.3383
On the Shiftability of Dual-Tree Complex Wavelet Transforms
cs.IT cs.CV math.IT
The dual-tree complex wavelet transform (DT-CWT) is known to exhibit better shift-invariance than the conventional discrete wavelet transform. We propose an amplitude-phase representation of the DT-CWT which, among other things, offers a direct explanation for the improvement in the shift-invariance. The representation is based on the shifting action of the group of fractional Hilbert transform (fHT) operators, which extends the notion of arbitrary phase-shifts from sinusoids to finite-energy signals (wavelets in particular). In particular, we characterize the shiftability of the DT-CWT in terms of the shifting property of the fHTs. At the heart of the representation are certain fundamental invariances of the fHT group, namely that of translation, dilation, and norm, which play a decisive role in establishing the key properties of the transform. It turns out that these fundamental invariances are exclusive to this group. Next, by introducing a generalization of the Bedrosian theorem for the fHT operator, we derive an explicit understanding of the shifting action of the fHT for the particular family of wavelets obtained through the modulation of lowpass functions (e.g., the Shannon and Gabor wavelets). This, in effect, links the corresponding dual-tree transform with the framework of windowed-Fourier analysis. Finally, we extend these ideas to the multi-dimensional setting by introducing a directional extension of the fHT, the fractional directional Hilbert transform. In particular, we derive a signal representation involving the superposition of direction-selective wavelets with appropriate phase-shifts, which helps explain the improved shift-invariance of the transform along certain preferential directions.
0908.3394
A Cognitive Mind-map Framework to Foster Trust
cs.AI
The explorative mind-map is a dynamic framework that emerges automatically from the input it gets. It is unlike a verificative modeling system, where existing (human) thoughts are placed and connected together. In this regard, explorative mind-maps change their size continuously, being adaptive with connectionist cells inside; mind-maps process data input incrementally and offer many possibilities to interact with the user through an appropriate communication interface. With respect to a cognitively motivated situation like a conversation between partners, mind-maps become interesting as they are able to process stimulating signals whenever they occur. If these signals are close to the system's own understanding of the world, then the conversational partner automatically becomes more trusted than if the signals match the system's own knowledge scheme less well, or not at all. In this (position) paper, we therefore motivate explorative mind-maps as a cognitive engine and propose them as a decision support engine to foster trust.
0908.3463
Interpolation-Based QR Decomposition in MIMO-OFDM Systems
cs.IT math.IT
Detection algorithms for multiple-input multiple-output (MIMO) wireless systems based on orthogonal frequency-division multiplexing (OFDM) typically require the computation of a QR decomposition for each of the data-carrying OFDM tones. The resulting computational complexity will, in general, be significant, as the number of data-carrying tones ranges from 48 (as in the IEEE 802.11a/g standards) to 1728 (as in the IEEE 802.16e standard). Motivated by the fact that the channel matrices arising in MIMO-OFDM systems are highly oversampled polynomial matrices, we formulate interpolation-based QR decomposition algorithms. An in-depth complexity analysis, based on a metric relevant for very large scale integration (VLSI) implementations, shows that the proposed algorithms, for a sufficiently large number of data-carrying tones and a sufficiently small channel order, provably exhibit significantly smaller complexity than brute-force per-tone QR decomposition.
0908.3512
The Infinite-message Limit of Two-terminal Interactive Source Coding
cs.IT math.IT
A two-terminal interactive function computation problem with alternating messages is studied within the framework of distributed block source coding theory. For any finite number of messages, a single-letter characterization of the sum-rate-distortion function was established in previous works using standard information-theoretic techniques. This, however, does not provide a satisfactory characterization of the infinite-message limit, which is a new, unexplored dimension for asymptotic analysis in distributed block source coding involving potentially an infinite number of infinitesimal-rate messages. In this paper, the infinite-message sum-rate-distortion function, viewed as a functional of the joint source pmf and the distortion levels, is characterized as the least element of a partially ordered family of functionals having certain convex-geometric properties. The new characterization does not involve evaluating the infinite-message limit of a finite-message sum-rate-distortion expression. This characterization leads to a family of lower bounds for the infinite-message sum-rate-distortion expression and a simple criterion to test the optimality of any achievable infinite-message sum-rate-distortion expression. For computing the samplewise Boolean AND function, the infinite-message minimum sum-rates are characterized in closed analytic form. These sum-rates are shown to be achievable using infinitely many infinitesimal-rate messages. The new convex-geometric characterization is used to develop an iterative algorithm for evaluating any finite-message sum-rate-distortion function. It is also used to construct the first examples which demonstrate that for lossy source reproduction, two messages can strictly improve the one-message Wyner-Ziv rate-distortion function, settling an unresolved question from a 1985 paper.
0908.3523
Cognitive Dimensions Analysis of Interfaces for Information Seeking
cs.HC cs.IR
Cognitive Dimensions is a framework for analyzing human-computer interaction. It is used for meta-analysis, that is, for talking about characteristics of systems without getting bogged down in details of a particular implementation. In this paper, I discuss some of the dimensions of this theory and how they can be applied to analyze information seeking interfaces. The goal of this analysis is to introduce a useful vocabulary that practitioners and researchers can use to describe systems, and to guide interface design toward more usable and useful systems.
0908.3539
An Accurate Approximation to the Distribution of the Sum of Equally Correlated Nakagami-m Envelopes and its Application in Equal Gain Diversity Receivers
cs.IT math.IT
We present a novel and accurate approximation for the distribution of the sum of equally correlated Nakagami-m variates. Building on this result, we study the performance of Equal Gain Combining (EGC) receivers operating over equally correlated fading channels. Numerical results and simulations show the accuracy of the proposed approximation and the validity of the mathematical analysis.
0908.3541
Level Crossing Rate and Average Fade Duration of the Double Nakagami-m Random Process and Application in MIMO Keyhole Fading Channels
cs.IT math.IT
We present novel exact expressions and accurate closed-form approximations for the level crossing rate (LCR) and the average fade duration (AFD) of the double Nakagami-m random process. These results are then used to study the second order statistics of multiple input multiple output (MIMO) keyhole fading channels with space-time block coding. Numerical and computer simulation examples validate the accuracy of the presented mathematical analysis and show the tightness of the proposed approximations.
0908.3544
On the Second Order Statistics of the Multihop Rayleigh Fading Channel
cs.IT math.IT
Second order statistics provide a dynamic representation of a fading channel and play an important role in the evaluation and design of wireless communication systems. In this paper, we present a novel analytical framework for the evaluation of important second order statistical parameters, such as the level crossing rate (LCR) and the average fade duration (AFD) of the amplify-and-forward multihop Rayleigh fading channel. More specifically, motivated by the fact that this channel is a cascaded one and can be modeled as the product of N fading amplitudes, we derive novel analytical expressions for the average LCR and the AFD of the product of N Rayleigh fading envelopes (the so-called N*Rayleigh channel). Furthermore, we derive simple and efficient closed-form approximations to the aforementioned parameters, using the multivariate Laplace approximation theorem. It is shown that our general results reduce to the corresponding previously published results for the dual-hop case. Numerical and computer simulation examples verify the accuracy of the presented mathematical analysis and show the tightness of the proposed approximations.
0908.3549
Level Crossing Rate and Average Fade Duration of the Multihop Rayleigh Fading Channel
cs.IT math.IT
We present a novel analytical framework for the evaluation of important second order statistical parameters, such as the level crossing rate (LCR) and the average fade duration (AFD), of the amplify-and-forward multihop Rayleigh fading channel. More specifically, motivated by the fact that this channel is a cascaded one, which can be modelled as the product of N fading amplitudes, we derive novel analytical expressions for the average LCR and AFD of the product of N Rayleigh fading envelopes (the recently introduced N*Rayleigh channel). Furthermore, we derive simple and efficient closed-form approximations to these parameters using the multivariate Laplace approximation theorem. It is shown that our general results reduce to the previously published ones for the dual-hop case. Numerical and computer simulation examples verify the accuracy of the presented mathematical analysis and show the tightness of the proposed approximations.
0908.3551
Level Crossing Rate and Average Fade Duration of EGC Systems with Cochannel Interference in Rayleigh Fading
cs.IT math.IT
Both the first-order signal statistics (e.g., the outage probability) and the second-order signal statistics (e.g., the average level crossing rate, LCR, and the average fade duration, AFD) are important design criteria and performance measures for wireless communication systems, including equal gain combining (EGC) systems in the presence of cochannel interference (CCI). Although analytical expressions for the outage probability of coherent EGC systems exposed to CCI over various fading channels are already known, the corresponding expressions for the average LCR and the AFD are not available in the literature. This paper presents such analytical expressions for the Rayleigh fading channel, obtained by a novel analytical approach that does not require the explicit expression for the joint PDF of the instantaneous output signal-to-interference ratio (SIR) and its time derivative. Applying the characteristic function method and the Beaulieu series, we determine the average LCR and the AFD at the output of an interference-limited EGC system with an arbitrary diversity order and an arbitrary number of cochannel interferers, in the form of an infinite integral and an infinite series. For the dual diversity case, the respective expressions are derived in closed form in terms of the gamma and beta functions.
0908.3552
Level Crossing Rate and Average Fade Duration of Dual Selection Combining with Cochannel Interference and Nakagami Fading
cs.IT math.IT
This letter provides closed-form expressions for the outage probability, the average level crossing rate (LCR) and the average fade duration (AFD) of a dual diversity selection combining (SC) system exposed to the combined influence of cochannel interference (CCI) and thermal noise (AWGN) in a Nakagami fading channel. The branch selection is based on the desired-signal-power SC algorithm with all input signals assumed independent, while the powers of the desired signals in all diversity branches are mutually equal but distinct from the power of the interference signals. The analytical results reduce to known solutions in the cases of an interference-limited system in Rayleigh fading and an AWGN-limited system in Nakagami fading. The average LCR is determined by an original approach that does not require explicit knowledge of the joint PDF of the envelope and its time derivative, which also paves the way for similar analyses of other diversity systems.
0908.3562
Another Look at the Physics of Large Deviations With Application to Rate-Distortion Theory
cs.IT math.IT
We revisit and extend the physical interpretation recently given to a certain identity between large-deviations rate functions (as well as applications of this identity in Information Theory), as an instance of thermal equilibrium between several physical systems that are brought into contact. Our new interpretation, of mechanical equilibrium between these systems, is shown to have several advantages relative to the thermal one. This physical point of view also triggers the development of certain alternative representations of the rate-distortion function and channel capacity which, to the best of the author's knowledge, are new.
0908.3565
Coverage Optimization using Generalized Voronoi Partition
math.OC cs.SY math.DS
In this paper, a generalization of the Voronoi partition is used for the optimal deployment of autonomous agents carrying sensors with heterogeneous capabilities, so as to maximize sensor coverage. The generalized centroidal Voronoi configuration, in which the agents are located at the centroids of the corresponding generalized Voronoi cells, is shown to be a locally optimal configuration. Simulation results are presented to illustrate the proposed deployment strategy.
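For the standard special case of homogeneous sensors, the centroidal Voronoi deployment can be sketched with a Monte Carlo Lloyd iteration: each agent repeatedly moves to the centroid of its (ordinary, not generalized) Voronoi cell, estimated from samples of the environment. The agent count, sample density, and iteration budget below are illustrative assumptions; the paper's generalized cells for heterogeneous sensors are not modeled.

```python
import numpy as np

rng = np.random.default_rng(1)

def coverage_cost(agents, samples):
    """Mean squared distance from an environment point to its nearest agent."""
    d = np.linalg.norm(samples[:, None, :] - agents[None, :, :], axis=2)
    return (d.min(axis=1) ** 2).mean()

def lloyd(agents, samples, iters=30):
    """Lloyd iteration toward a centroidal Voronoi configuration."""
    agents = agents.copy()
    for _ in range(iters):
        d = np.linalg.norm(samples[:, None, :] - agents[None, :, :], axis=2)
        owner = d.argmin(axis=1)          # Voronoi cell membership per sample
        for k in range(len(agents)):
            cell = samples[owner == k]
            if len(cell):
                agents[k] = cell.mean(axis=0)  # move agent to cell centroid
    return agents

samples = rng.random((5000, 2))   # unit-square environment
agents0 = rng.random((4, 2))      # random initial deployment
agents = lloyd(agents0, samples)
print(coverage_cost(agents, samples), coverage_cost(agents0, samples))
```

Each Lloyd step is a coordinate-descent move on the quadratic coverage cost, so the cost never increases over the fixed sample set.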
0908.3574
In-packet Bloom filters: Design and networking applications
cs.DS cs.IT cs.NI cs.PF math.IT
The Bloom filter (BF) is a well-known space-efficient data structure that answers set membership queries with some probability of false positives. In an attempt to solve many of the limitations of current inter-networking architectures, some recent proposals rely on including small BFs in packet headers for routing, security, accountability or other purposes that move application states into the packets themselves. In this paper, we consider the design of such in-packet Bloom filters (iBF). Our main contributions are exploring the design space and the evaluation of a series of extensions (1) to increase the practicality and performance of iBFs, (2) to enable false-negative-free element deletion, and (3) to provide security enhancements. In addition to the theoretical estimates, extensive simulations of the multiple design parameters and implementation alternatives validate the usefulness of the extensions, providing for enhanced and novel iBF networking applications.
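As a reference point for the extensions the paper evaluates, the membership mechanics an iBF inherits from the plain Bloom filter fit in a few lines. The 256-bit size, four probes, and SHA-256-derived positions below are illustrative choices, not parameters from the paper.

```python
import hashlib

class BloomFilter:
    """Plain Bloom filter: an m-bit array probed at k hash-derived positions."""

    def __init__(self, m=256, k=4):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, item):
        # Derive k positions from non-overlapping slices of one SHA-256 digest.
        d = hashlib.sha256(item.encode()).digest()
        return [int.from_bytes(d[4 * i: 4 * i + 4], "big") % self.m
                for i in range(self.k)]

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def __contains__(self, item):
        # All k bits set => "probably present"; any clear bit => definitely absent.
        return all(self.bits >> p & 1 for p in self._positions(item))

bf = BloomFilter()
for link in ("link-a", "link-b", "link-c"):
    bf.add(link)
print(all(link in bf for link in ("link-a", "link-b", "link-c")))  # True
```

An in-packet BF carries exactly such a bit array in the header; the deletion and security extensions discussed in the paper build on this core by changing how positions are derived and cleared.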
0908.3610
Emergent Network Structure, evolvable Robustness and non-linear Effects of Point Mutations in an Artificial Genome Model
q-bio.MN cond-mat.dis-nn cs.NE q-bio.GN q-bio.SC
Genetic regulation is a key component in development, but a clear understanding of the structure and dynamics of genetic networks is not yet at hand. In this paper we investigate these properties within an artificial genome model originally introduced by Reil (1999). We analyze statistical properties of randomly generated genomes on both the sequence and network level, and show that this model correctly predicts the frequency of genes in genomes as found in experimental data. Using an evolutionary algorithm based on stabilizing selection for a phenotype, we show that dynamical robustness against single base mutations, as well as against random changes in the initial states of regulatory dynamics that mimic stochastic fluctuations in environmental conditions, can emerge in parallel. Point mutations at the sequence level have strongly non-linear effects on network wiring, including structurally neutral mutations as well as simultaneous rewiring of multiple connections, which occasionally lead to strong reorganization of the attractor landscape and metastability of evolutionary dynamics. Evolved genomes exhibit characteristic patterns on both the sequence and network level.
0908.3633
Maximizing profit using recommender systems
cs.CY cs.AI cs.IR
Traditional recommender systems make recommendations based solely on a customer's past purchases, product ratings and demographic data, without considering the profitability of the items being recommended. In this work we study how a vendor can directly incorporate the profitability of items into its recommender so as to maximize its expected profit while still providing accurate recommendations. Our approach takes the output of any traditional recommender system and adjusts it according to item profitabilities. The approach is parameterized so that the vendor can control how much the profit-aware recommendations may deviate from the traditional ones. We study our approach under two settings and show that it achieves approximately 22% more profit than traditional recommendations.
0908.3653
Chaotic Transitions in Wall Following Robots
nlin.CD cs.RO nlin.AO
In this paper we examine how simple agents similar to Braitenberg vehicles can exhibit chaotic movement patterns. The agents are wall-following robots as described by Steve Mesburger and Alfred Hubler in their paper "Chaos in Wall Following Robots". Each agent uses a simple forward-facing distance sensor with a limited field of view "phi" for navigation. An agent drives forward at a constant velocity and uses the sensor to turn right when it is too close to an object and left when it is too far away. For a flat wall, the agent stays a fixed distance from the wall and travels along it, regardless of the sensor's capabilities. But if the wall follows a periodic function, the agent drives on a periodic path when the sensor has a narrow field of view, and its trajectory transitions to chaos as the field of view is increased. Numerical experiments were performed with square, triangle, and sawtooth waves for the wall to confirm this pattern. The bifurcations of the agents were analyzed, revealing both border-collision and period-doubling bifurcations. Detailed experimental results will be reported separately.
0908.3666
On the minimal penalty for Markov order estimation
math.PR cs.IT math.IT math.ST stat.TH
We show that large-scale typicality of Markov sample paths implies that the likelihood ratio statistic satisfies a law of iterated logarithm uniformly to the same scale. As a consequence, the penalized likelihood Markov order estimator is strongly consistent for penalties growing as slowly as log log n when an upper bound is imposed on the order which may grow as rapidly as log n. Our method of proof, using techniques from empirical process theory, does not rely on the explicit expression for the maximum likelihood estimator in the Markov case and could therefore be applicable in other settings.
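The estimator itself is simple to state: choose the order maximizing the log-likelihood minus a penalty per free parameter, with the penalty growing as slowly as log log n. The sketch below is an illustrative implementation for a binary alphabet; the penalty constant and the simulated chain are arbitrary assumptions, not the paper's setup.

```python
import math
import random
from collections import Counter

def markov_loglik(seq, k):
    """Maximized log-likelihood of an order-k Markov chain fitted to seq."""
    trans, ctx = Counter(), Counter()
    for i in range(k, len(seq)):
        c = tuple(seq[i - k:i])
        trans[(c, seq[i])] += 1
        ctx[c] += 1
    return sum(n * math.log(n / ctx[c]) for (c, _), n in trans.items())

def estimate_order(seq, max_order, c=2.0):
    """Penalized-likelihood order estimate with a log log n penalty per parameter."""
    a = len(set(seq))                        # alphabet size
    pen = c * math.log(math.log(len(seq)))   # slowly growing penalty
    scores = {k: markov_loglik(seq, k) - pen * a ** k * (a - 1)
              for k in range(max_order + 1)}
    return max(scores, key=scores.get)

# Simulate an order-1 binary chain with persistence 0.9 and recover its order.
random.seed(0)
seq = [0]
for _ in range(4000):
    seq.append(seq[-1] if random.random() < 0.9 else 1 - seq[-1])
print(estimate_order(seq, max_order=3))
```

Here an order-k binary chain has 2^k free parameters, so the penalty term separates true dependence (a large likelihood gain) from overfitting noise (a gain of order one).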
0908.3670
Randomized Scheduling Algorithm for Queueing Networks
cs.IT cs.NI math.IT math.PR
There has recently been considerable interest in the design of low-complexity, myopic, distributed and stable scheduling policies for constrained queueing network models that arise in the context of emerging communication networks. Here, we consider two representative models. One is a model for a collection of wireless nodes communicating through a shared medium, which represents the randomly varying number of packets in the queues at the nodes of the network. The other is a buffered circuit switched network model for an optical core of the future Internet, which captures the randomness in the calls or flows present in the network. The maximum weight scheduling policy proposed by Tassiulas and Ephremides in 1992 leads to a myopic and stable policy for the packet-level wireless network model, but it is computationally very expensive (NP-hard) and centralized. It is not applicable to the buffered circuit switched network because calls in service cannot be preempted. As the main contribution of this paper, we present a stable scheduling algorithm for both of these models. The algorithm is myopic, distributed, and performs only a few logical operations at each node per unit time.
0908.3702
Bit-Interleaved Coded Multiple Beamforming with Constellation Precoding
cs.IT math.IT
In this paper, we present the diversity order analysis of bit-interleaved coded multiple beamforming (BICMB) combined with the constellation precoding scheme. Multiple beamforming is realized by singular value decomposition of the channel matrix which is assumed to be perfectly known to the transmitter as well as the receiver. Previously, BICMB is known to have a diversity order bound related with the product of the code rate and the number of parallel subchannels, losing the full diversity order in some cases. In this paper, we show that BICMB combined with the constellation precoder and maximum likelihood detection achieves the full diversity order. We also provide simulation results that match the analysis.
0908.3706
Uncovering delayed patterns in noisy and irregularly sampled time series: an astronomy application
astro-ph.CO astro-ph.IM cs.LG cs.NE
We study the problem of estimating the time delay between two signals representing delayed, irregularly sampled and noisy versions of the same underlying pattern. We propose and demonstrate an evolutionary algorithm for the (hyper)parameter estimation of a kernel-based technique in the context of an astronomical problem, namely estimating the time delay between two gravitationally lensed signals from a distant quasar. Mixed types (integer and real) are used to represent variables within the evolutionary algorithm. We test the algorithm on several artificial data sets, and also on real astronomical observations of quasar Q0957+561. By carrying out a statistical analysis of the results we present a detailed comparison of our method with the most popular methods for time delay estimation in astrophysics. Our method yields more accurate and more stable time delay estimates: for Q0957+561, we obtain 419.6 days for the time delay between images A and B. Our methodology can be readily applied to current state-of-the-art optical monitoring data in astronomy, but can also be applied in other disciplines involving similar time series data.
0908.3710
Randomization for Security in Half-Duplex Two-Way Gaussian Channels
cs.IT cs.CR math.IT
This paper develops a new physical layer framework for secure two-way wireless communication in the presence of a passive eavesdropper, i.e., Eve. Our approach achieves perfect information theoretic secrecy via a novel randomized scheduling and power allocation scheme. The key idea is to allow Alice and Bob to send symbols at random time instants. While Alice will be able to determine the symbols transmitted by Bob, Eve will suffer from ambiguity regarding the source of any particular symbol. This desirable ambiguity is enhanced, in our approach, by randomizing the transmit power level. Our theoretical analysis, in a 2-D geometry, reveals the ability of the proposed approach to achieve relatively high secure data rates under mild conditions on the spatial location of Eve. These theoretical claims are then validated by experimental results using IEEE 802.15.4-enabled sensor boards in different configurations, motivated by the spatial characteristics of Wireless Body Area Networks (WBAN).
0908.3855
Gabor wavelet analysis and the fractional Hilbert transform
cs.IT cs.CV math.IT
We propose an amplitude-phase representation of the dual-tree complex wavelet transform (DT-CWT) which provides an intuitive interpretation of the associated complex wavelet coefficients. The representation, in particular, is based on the shifting action of the group of fractional Hilbert transforms (fHT) which allow us to extend the notion of arbitrary phase-shifts beyond pure sinusoids. We explicitly characterize this shifting action for a particular family of Gabor-like wavelets which, in effect, links the corresponding dual-tree transform with the framework of windowed-Fourier analysis. We then extend these ideas to the bivariate DT-CWT based on certain directional extensions of the fHT. In particular, we derive a signal representation involving the superposition of direction-selective wavelets affected with appropriate phase-shifts.
0908.3861
Fast adaptive elliptical filtering using box splines
cs.IT cs.CV math.IT
We demonstrate that it is possible to filter an image with an elliptic window of varying size, elongation and orientation with a fixed computational cost per pixel. Our method involves the application of a suitable global pre-integrator followed by a pointwise-adaptive localization mesh. We present the basic theory for the 1D case using a B-spline formalism and then appropriately extend it to 2D using radially-uniform box splines. The size and ellipticity of these radially-uniform box splines is adaptively controlled. Moreover, they converge to Gaussians as the order increases. Finally, we present a fast and practical directional filtering algorithm that has the capability of adapting to the local image features.
0908.3886
Cooperative Routing for Wireless Networks using Mutual-Information Accumulation
cs.IT math.IT
Cooperation between the nodes of wireless multihop networks can increase communication reliability, reduce energy consumption, and decrease latency. The possible improvements are even greater when nodes perform mutual information accumulation using rateless codes. In this paper, we investigate routing problems in such networks. Given a network, a source, and a destination, our objective is to minimize end-to-end transmission delay under energy and bandwidth constraints. We provide an algorithm that determines which nodes should participate in forwarding the message and what resources (time, energy, bandwidth) should be allocated to each. Our approach factors into two sub-problems, each of which can be solved efficiently. For any transmission order we show that solving for the optimum resource allocation can be formulated as a linear programming problem. We then show that the transmission order can be improved systematically by swapping nodes based on the solution of the linear program. Solving a sequence of linear programs leads to a locally optimal solution in a very efficient manner. In comparison to the proposed cooperative routing solution, it is observed that conventional shortest path multihop routing typically incurs additional delays and energy expenditures on the order of 70%. Our first algorithm is centralized, assuming that routing computations can be done at a central processor with full access to channel state information for the entire system. We also design two distributed routing algorithms that require only local channel state information. We provide simulations showing that for the same networks the distributed algorithms find routes that are only about two to five percent less efficient than the centralized algorithm.
0908.3902
On the Expressiveness of Line Drawings
cs.OH cs.NE
Can the expressiveness of a drawing be traced with a computer? In this study a neural network (perceptron) and a support vector machine are used to classify line drawings. To do this, the line drawings are assigned values according to a kinematic model and a diffusion model for the lines they consist of. The values for both models are related to looking times. Extreme values under these models, that is, both extremely short and extremely long looking times, are interpreted as indicating expressiveness. The results strongly indicate that expressiveness in this sense can be detected, at least with a neural network.
0908.3929
A Dynamic Boundary Guarding Problem with Translating Targets
cs.RO
We introduce a problem in which a service vehicle seeks to guard a deadline (boundary) from dynamically arriving mobile targets. The environment is a rectangle and the deadline is one of its edges. Targets arrive continuously over time on the edge opposite the deadline and move towards the deadline at a fixed speed. The goal for the vehicle is to maximize the fraction of targets that are captured before reaching the deadline. We consider two cases: when the service vehicle is faster than the targets, and when it is slower. In the first case we develop a novel vehicle policy based on computing longest paths in a directed acyclic graph. We give a lower bound on the capture fraction of the policy and show that the policy is optimal when the distance between the target arrival edge and the deadline becomes very large. We present numerical results which suggest near optimal performance away from this limiting regime. In the second case, when the targets are faster than the vehicle, we propose a policy based on servicing fractions of the translational minimum Hamiltonian path. In the limit of low target speed and high arrival rate, the capture fraction of this policy is within a small constant factor of the optimal.
0908.3957
Enhancing XML Data Warehouse Query Performance by Fragmentation
cs.DB
XML data warehouses form an interesting basis for decision-support applications that exploit heterogeneous data from multiple sources. However, XML-native database systems currently suffer from limited performance in terms of manageable data volume and response time for complex analytical queries. Fragmenting and distributing XML data warehouses (e.g., on data grids) addresses both of these issues. In this paper, we work on XML warehouse fragmentation. In relational data warehouses, several studies recommend the use of derived horizontal fragmentation; hence, we propose to adapt it to the XML context. We particularly focus on the initial horizontal fragmentation of dimensions' XML documents and exploit two alternative algorithms. We experimentally validate our proposal and compare these alternatives with respect to a unified XML warehouse model we advocate.
0908.3982
Distributed Source Coding for Correlated Memoryless Gaussian Sources
cs.IT math.IT
We consider a distributed source coding problem of $L$ correlated Gaussian observations $Y_i, i=1,2,...,L$. We assume that the random vector $Y^{L}={}^{\rm t} (Y_1,Y_2,$ $...,Y_L)$ is an observation of the Gaussian random vector $X^K={}^{\rm t}(X_1,X_2,...,X_K)$, having the form $Y^L=AX^K+N^L ,$ where $A$ is a $L\times K$ matrix and $N^L={}^{\rm t}(N_1,N_2,...,N_L)$ is a vector of $L$ independent Gaussian random variables also independent of $X^K$. The estimation error on $X^K$ is measured by the distortion covariance matrix. The rate distortion region is defined by a set of all rate vectors for which the estimation error is upper bounded by an arbitrary prescribed covariance matrix in the meaning of positive semi definite. In this paper we derive explicit outer and inner bounds of the rate distortion region. This result provides a useful tool to study the direct and indirect source coding problems on this Gaussian distributed source coding system, which remain open in general.
0908.3999
An improved axiomatic definition of information granulation
cs.AI
To capture the uncertainty of information or knowledge in information systems, various information granulations, also known as knowledge granulations, have been proposed. Recently, several axiomatic definitions of information granulation have been introduced. In this paper, we try to improve these axiomatic definitions and give a universal construction of information granulation by relating information granulations with a class of functions of multiple variables. We show that the improved axiomatic definition has some concrete information granulations in the literature as instances.
0908.4051
Training-Based Schemes are Suboptimal for High Rate Asynchronous Communication
cs.IT math.IT
We consider asynchronous point-to-point communication. Building on a recently developed model, we show that training-based schemes, i.e., communication strategies that separate synchronization from information transmission, perform suboptimally at high rate.
0908.4073
Distributed Averaging via Lifted Markov Chains
cs.IT cs.DC math.IT math.PR
Motivated by applications of distributed linear estimation, distributed control and distributed optimization, we consider the question of designing linear iterative algorithms for computing the average of numbers in a network. Specifically, our interest is in designing such an algorithm with the fastest rate of convergence given the topological constraints of the network. As the main result of this paper, we design an algorithm with the fastest possible rate of convergence using a non-reversible Markov chain on the given network graph. We construct such a Markov chain by transforming the standard Markov chain obtained via the Metropolis-Hastings method. We call this novel transformation pseudo-lifting. We apply our method to graphs with geometry, or graphs with doubling dimension. Specifically, the convergence time of our algorithm (equivalently, the mixing time of our Markov chain) is proportional to the diameter of the network graph and hence optimal. As a byproduct, our result provides the fastest mixing Markov chain given the network topological constraints, and should naturally find application in the context of distributed optimization, estimation and control.
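The baseline the pseudo-lifting transformation starts from is easy to sketch: Metropolis-Hastings weights on the network graph give a symmetric, doubly stochastic matrix W, and the iteration x &lt;- Wx drives every node's value to the network average. The ring topology and iteration count below are illustrative; the non-reversible pseudo-lifted chain itself is not shown.

```python
import numpy as np

def metropolis_weights(adj):
    """Symmetric doubly stochastic weights: W_ij = 1/(1 + max(d_i, d_j)) on edges."""
    n = len(adj)
    deg = adj.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()   # self-loop absorbs the remaining mass
    return W

# Ring of 8 nodes.
n = 8
adj = np.zeros((n, n), dtype=int)
for i in range(n):
    adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1

W = metropolis_weights(adj)
x = np.arange(n, dtype=float)    # initial values held by the nodes
for _ in range(500):
    x = W @ x                    # one round of averaging with neighbors
print(np.allclose(x, np.arange(n).mean()))  # True
```

For the ring, the second-largest eigenvalue of W is bounded away from 1 only by O(1/n^2), which is exactly the slow reversible mixing that pseudo-lifting improves to diameter-proportional time.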
0908.4074
Retrieval of Remote Sensing Images Using Colour and Texture Attribute
cs.IR cs.MM
Grouping images into semantically meaningful categories using low-level visual features is a challenging and important problem in content-based image retrieval. The groupings can be used to build effective indices for an image database. Digital image analysis techniques are widely used in remote sensing under the assumption that each terrain surface category is characterized by a spectral signature observed by remote sensors. Even with remote sensing images from IRS data, integrating spatial information is expected to assist and improve the analysis of remote sensing data. In this paper we present a satellite image retrieval system based on a mixture of old-fashioned ideas and state-of-the-art learning tools. We have developed a methodology that classifies remote sensing images using HSV color features and Haar wavelet texture features and then groups them on the basis of a particular threshold value. The experimental results indicate that the use of color and texture feature extraction is very useful for image retrieval.
0908.4094
Codes in Permutations and Error Correction for Rank Modulation
cs.IT math.IT
Codes for rank modulation have been recently proposed as a means of protecting flash memory devices from errors. We study basic coding theoretic problems for such codes, representing them as subsets of the set of permutations of $n$ elements equipped with the Kendall tau distance. We derive several lower and upper bounds on the size of codes. These bounds enable us to establish the exact scaling of the size of optimal codes for large values of $n$. We also show the existence of codes whose size is within a constant factor of the sphere packing bound for any fixed number of errors.
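The Kendall tau distance underlying these codes is simply the number of element pairs the two permutations rank in opposite order, which equals the minimum number of adjacent transpositions turning one ranking into the other. A direct sketch:

```python
from itertools import combinations

def kendall_tau(p, q):
    """Number of element pairs that permutations p and q rank in opposite order."""
    pos = {v: i for i, v in enumerate(q)}
    r = [pos[v] for v in p]          # q's positions read in p's order
    return sum(r[i] > r[j] for i, j in combinations(range(len(r)), 2))

assert kendall_tau([1, 2, 3, 4], [1, 2, 3, 4]) == 0
assert kendall_tau([1, 2, 3, 4], [2, 1, 3, 4]) == 1   # one adjacent transposition
assert kendall_tau([1, 2, 3, 4], [4, 3, 2, 1]) == 6   # maximum: n(n-1)/2
```

A rank-modulation code is then a subset of permutations, and its minimum distance, the quantity bounded in the paper, is the minimum of this value over distinct codeword pairs.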
0908.4144
ABC-LogitBoost for Multi-class Classification
cs.LG cs.AI
We develop abc-logitboost, based on the prior work on abc-boost and robust logitboost. Our extensive experiments on a variety of datasets demonstrate the considerable improvement of abc-logitboost over logitboost and abc-mart.
0908.4208
Performance Analysis over Slow Fading Channels of a Half-Duplex Single-Relay Protocol: Decode or Quantize and Forward
cs.IT math.IT
In this work, a new static relaying protocol is introduced for half-duplex single-relay networks, and its performance is studied in the context of communications over slow fading wireless channels. The proposed protocol is based on a Decode or Quantize and Forward (DoQF) approach. In slow fading scenarios, two performance metrics are relevant and complementary, namely the outage probability gain and the Diversity-Multiplexing Tradeoff (DMT). First, we analyze the behavior of the outage probability P_o associated with the proposed protocol as the SNR tends to infinity. In this case, we prove that SNR^2 P_o converges to a constant. We refer to this constant as the outage gain and derive its closed-form expression for a general class of wireless channels that includes the Rayleigh and Rice channels as particular cases. We furthermore prove that the DoQF protocol has the best achievable outage gain in the wide class of half-duplex static relaying protocols. A method for minimizing the outage gain with respect to the power distribution between the source and the relay, and with respect to the durations of the slots, is also provided. Next, we focus on Rayleigh distributed fading channels to derive the DMT associated with the proposed DoQF protocol. Our results show that the DMT of DoQF achieves the 2 by 1 MISO upper bound for multiplexing gains r < 0.25.
0908.4211
Coding Improves the Throughput-Delay Trade-off in Mobile Wireless Networks
cs.IT math.IT
We study the throughput-delay performance tradeoff in large-scale wireless ad hoc networks. It has been shown that the per source-destination pair throughput can be improved from Theta(1/sqrt(n log n)) to Theta(1) if nodes are allowed to move and a 2-hop relay scheme is employed. The price paid for such an improvement in throughput is large delay. Indeed, the delay scaling of the 2-hop relay scheme is Theta(n log n) under the random walk mobility model. In this paper, we employ coding techniques to improve the throughput-delay trade-off for mobile wireless networks. For the random walk mobility model, we improve the delay from Theta(n log n) to Theta(n) by employing Reed-Solomon codes. Our approach maintains the diversity gained by mobility while decreasing the delay.
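The property of Reed-Solomon codes exploited here, that any k of the n coded packets suffice to rebuild the k source packets, so the destination need not wait for specific relays, can be illustrated with a toy evaluation/interpolation code over the prime field GF(257). The field size, evaluation points, and packet counts are illustrative assumptions.

```python
P = 257  # prime field; each symbol is an integer in [0, 257)

def lagrange_eval(pts, x):
    """Evaluate, at x, the unique polynomial through the given (xi, yi) pairs mod P."""
    total = 0
    for i, (xi, yi) in enumerate(pts):
        num = den = 1
        for j, (xj, _) in enumerate(pts):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

def rs_encode(data, n):
    """Treat the k data symbols as values at points 0..k-1; extend to n evaluations."""
    pts = list(enumerate(data))
    return [lagrange_eval(pts, x) for x in range(n)]

def rs_decode(received, k):
    """Recover the k data symbols from ANY k surviving (index, value) pairs."""
    return [lagrange_eval(received, x) for x in range(k)]

data = [10, 20, 30]                      # k = 3 source packets
code = rs_encode(data, 6)                # n = 6 coded packets, sent via relays
survivors = [(i, code[i]) for i in (1, 3, 5)]   # only 3 arbitrary packets arrive
print(rs_decode(survivors, 3))           # [10, 20, 30]
```

In the mobility setting, this is what removes the wait for a particular relay to meet the destination: the first k relay encounters, whichever they are, complete the message.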
0908.4265
Channel Protection: Random Coding Meets Sparse Channels
cs.IT math.IT math.OC
Multipath interference is a ubiquitous phenomenon in modern communication systems. The conventional way to compensate for this effect is to equalize the channel, estimating its impulse response by transmitting a set of training symbols. The primary drawback of this type of approach is that it can be unreliable if the channel is changing rapidly. In this paper, we show that randomly encoding the signal can protect it against channel uncertainty when the channel is sparse. Before transmission, the signal is mapped into a slightly longer codeword using a random matrix. From the received signal, we are able to simultaneously estimate the channel and recover the transmitted signal. We discuss two schemes for the recovery, both of which exploit the sparsity of the underlying channel. We show that if the channel impulse response is sufficiently sparse, the transmitted signal can be recovered reliably.
0908.4290
Bridging the Gap between Crisis Response Operations and Systems
cs.CY cs.MA
Major problems exist in the current practice of crisis response operations. Response problems arise from a combination of failures in communication, technology, methodology, management, and observation. In this paper we compare eight crisis response systems, namely DrillSim [2, 13], DEFACTO [12, 17], ALADDIN [1, 6], RoboCup Rescue [11, 15], FireGrid [3, 8, 18], WIPER [16], D-AESOP [4], and PLAN C [14]. The comparison reveals the causes of failure of current crisis response operations (the response gap). Based on the comparison results, we provide recommendations for bridging this gap between response operations and systems.
0908.4310
Co-occurrence Matrix and Fractal Dimension for Image Segmentation
stat.AP cs.CV
One of the most important tasks in image processing and machine vision is object recognition, and the success of many proposed methods relies on a suitable choice of algorithm for the segmentation of an image. This paper focuses on how to apply texture operators based on the concepts of fractal dimension and co-occurrence matrix to the problem of object recognition, and a new method based on fractal dimension is introduced. Several images, for which the segmentation results are shown, are used to illustrate each method, and a comparative study of the operators is made.
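The co-occurrence side of such operators reduces to a small amount of bookkeeping: count how often gray-level pairs appear at a fixed offset, normalize, and derive the usual Haralick-style statistics. The sketch below, with an illustrative 4-level quantization and a single horizontal offset, is one common formulation, not the specific operator of the paper.

```python
import numpy as np

def glcm(img, dy=0, dx=1, levels=4):
    """Normalized gray-level co-occurrence matrix for the offset (dy, dx).

    `img` is assumed to be already quantized to integers in [0, levels).
    """
    M = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            M[img[y, x], img[y + dy, x + dx]] += 1
    return M / M.sum()

def texture_features(M):
    """Classic co-occurrence statistics used as texture descriptors."""
    i, j = np.indices(M.shape)
    return {"contrast": ((i - j) ** 2 * M).sum(),
            "energy": (M ** 2).sum(),
            "homogeneity": (M / (1.0 + np.abs(i - j))).sum()}

flat = np.zeros((8, 8), dtype=int)            # perfectly uniform patch
stripes = np.indices((8, 8)).sum(axis=0) % 2  # alternating 0/1 texture
print(texture_features(glcm(flat))["contrast"])     # 0.0
print(texture_features(glcm(stripes))["contrast"])  # 1.0
```

Uniform regions yield zero contrast and maximal energy, while fine alternating texture maximizes contrast, which is why such features discriminate segment boundaries.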
0908.4386
Handwritten Farsi Character Recognition using Artificial Neural Network
cs.CV
Neural networks have been used for character recognition for many years, but most of the work has been confined to English character recognition. To date, very little work has been reported on handwritten Farsi character recognition. In this paper, we attempt to recognize handwritten Farsi characters using a multilayer perceptron (MLP) with one hidden layer. The error backpropagation algorithm has been used to train the MLP network. In addition, an analysis has been carried out to determine the number of hidden nodes needed to achieve high performance of the backpropagation network in recognizing handwritten Farsi characters. The system has been trained on several different forms of handwriting provided by both male and female participants of different age groups. This rigorous training results in an automatic HCR system using the MLP network. The experiments were carried out on two hundred fifty samples from five writers. The results showed that MLP networks trained by the error backpropagation algorithm are superior in recognition accuracy and memory usage, and that the backpropagation network provides good recognition accuracy of more than 80% on handwritten Farsi characters.
0908.4413
Multiple Retrieval Models and Regression Models for Prior Art Search
cs.CL
This paper presents the system called PATATRAS (PATent and Article Tracking, Retrieval and AnalysiS) realized for the IP track of CLEF 2009. Our approach has three main characteristics: 1. The use of multiple retrieval models (KL, Okapi) and term index definitions (lemma, phrase, concept) for the three languages considered in the present track (English, French, German), producing ten different sets of ranked results. 2. The merging of the different results based on multiple regression models, using an additional validation set created from the patent collection. 3. The exploitation of patent metadata and of the citation structures for creating restricted initial working sets of patents and for producing a final re-ranking regression model. As we exploit the specific metadata of the patent documents and the citation relations only when creating the initial working sets and during the final post-ranking step, our architecture remains generic and easy to extend.
0908.4419
Distributed Flooding-based Storage Algorithms for Large-scale Sensor Networks
cs.IT cs.NI math.IT
In this paper we propose distributed flooding-based storage algorithms for large-scale wireless sensor networks. Assume a wireless sensor network with $n$ nodes that have limited power, memory, and bandwidth. Each node is capable of both sensing and storing data. Such sensor nodes might disappear from the network due to failures or battery depletion. Hence it is desired to design efficient schemes to collect data from these $n$ nodes. We propose two distributed storage algorithms (DSAs) that utilize network flooding to solve this problem. In the first algorithm, DSA-I, we assume that every node utilizes network flooding to disseminate its data throughout the network using a mixing time of approximately $O(n)$. We show that this algorithm is efficient in terms of the encoding and decoding operations. In the second algorithm, DSA-II, we assume that the total number of nodes is not known to every sensor; hence dissemination of the data does not depend on $n$. The encoding operations in this case take $O(C\mu^2)$ time, where $\mu$ is the mean degree of the network graph and $C$ is a system parameter. We evaluate the performance of the proposed algorithms through analysis and simulation, and show that their performance matches the derived theoretical results.
0908.4427
Mesh Algorithms for PDE with Sieve I: Mesh Distribution
cs.CE cs.MS
We have developed a new programming framework, called Sieve, to support parallel numerical PDE algorithms operating over distributed meshes. We have also developed a reference implementation of Sieve in C++ as a library of generic algorithms operating on distributed containers conforming to the Sieve interface. Sieve makes instances of the incidence relation, or \emph{arrows}, the conceptual first-class objects represented in the containers. Further, generic algorithms acting on this arrow container are systematically used to provide natural geometric operations on the topology and also, through duality, on the data. Finally, coverings and duality are used to encode not only individual meshes, but all types of hierarchies underlying PDE data structures, including multigrid and mesh partitions. In order to demonstrate the usefulness of the framework, we show how the mesh partition data can be represented and manipulated using the same fundamental mechanisms used to represent meshes. We present the complete description of an algorithm to encode a mesh partition and then distribute a mesh, which is independent of the mesh dimension, element shape, or embedding. Moreover, data associated with the mesh can be similarly distributed with exactly the same algorithm. The use of a high level of abstraction within the Sieve leads to several benefits in terms of code reuse, simplicity, and extensibility. We discuss these benefits and compare our approach to other existing mesh libraries.
0908.4431
An OLAC Extension for Dravidian Languages
cs.CL
OLAC was founded in 2000 to create online databases of language resources. This paper reviews the bottom-up, distributed character of the project and proposes an extension of the architecture for Dravidian languages. An ontological structure is considered for effective natural language processing (NLP), and its advantages over statistical methods are reviewed.
0908.4445
Asymptotic Equipartition Property of Output when Rate is above Capacity
cs.IT math.IT
The output distribution, when rate is above capacity, is investigated. It is shown that there is an asymptotic equipartition property (AEP) of the typical output sequences, independently of the specific codebook used, as long as the codebook is typical according to the standard random codebook generation. This equipartition of the typical output sequences is caused by the mixup of input sequences when there are too many of them, namely, when the rate is above capacity. This discovery sheds some light on the optimal design of the compress-and-forward relay schemes.
0908.4457
Additivity of on-line decision complexity is violated by a linear term in the length of a binary string
cs.IT math.IT
We show that there are infinitely many binary strings z such that the sum of the on-line decision complexity of predicting the even bits of z given the previous uneven bits, and the decision complexity of predicting the uneven bits given the previous even bits, exceeds the Kolmogorov complexity of z by a term linear in the length of z.
0908.4464
The eel-like robot
cs.RO physics.class-ph
The aim of this project is to design, study and build an "eel-like robot" prototype able to swim in three dimensions. The study is based on the analysis of eel swimming and results in the realization of a prototype with 12 vertebrae, a skin and a head with two fins. To reach these objectives, a multidisciplinary group of teams and laboratories has been formed in the framework of two French projects.
0908.4494
Learning, complexity and information density
cs.IT cs.CC math.IT math.PR
What is the relationship between the complexity of a learner and the randomness of his mistakes? This question was posed in \cite{rat0903}, which showed that the more complex the learner, the higher the possibility that his mistakes deviate from a true random sequence. In the current paper we report on an empirical investigation of this problem. We investigate two characteristics of randomness, the stochastic and algorithmic complexity of the binary sequence of mistakes. A learner with a Markov model of order $k$ is trained on a finite binary sequence produced by a Markov source of order $k^{*}$ and is tested on a different random sequence. As a measure of the learner's complexity we define a quantity called the \emph{sysRatio}, denoted by $\rho$, which is the ratio between the compressed and uncompressed lengths of the binary string whose $i^{th}$ bit represents the maximum \emph{a posteriori} decision made at state $i$ of the learner's model. The quantity $\rho$ is a measure of information density. The main result of the paper shows that this ratio is crucial in answering the above posed question. The result indicates that there is a critical threshold $\rho^{*}$ such that when $\rho\leq\rho^{*}$ the sequence of mistakes possesses the following features: (1) low divergence $\Delta$ from a random sequence, (2) low variance in algorithmic complexity. When $\rho>\rho^{*}$, the characteristics of the mistake sequence change sharply towards a high $\Delta$ and high variance in algorithmic complexity.
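The sysRatio can be approximated with any off-the-shelf lossless compressor; a minimal sketch using zlib (the choice of compressor and the example decision strings are ours, not the paper's):

```python
import random
import zlib

def sys_ratio(decision_bits):
    """Estimate the information density of a learner's MAP decision string.

    decision_bits: string of '0'/'1' characters, the i-th character being
    the maximum a-posteriori decision at state i (name is illustrative).
    Returns compressed length / uncompressed length.
    """
    raw = decision_bits.encode("ascii")
    return len(zlib.compress(raw, 9)) / len(raw)

# A highly regular decision string compresses well (low rho) ...
rho_regular = sys_ratio("01" * 500)

# ... while an irregular one resists compression (higher rho).
rng = random.Random(0)
rho_irregular = sys_ratio("".join(rng.choice("01") for _ in range(1000)))

print(rho_regular < rho_irregular)
```

Any universal compressor gives the same qualitative ordering; the absolute value of the ratio depends on the compressor and the string length.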
0908.4580
A Computational View of Market Efficiency
cs.CE cs.CC q-fin.TR
We propose to study market efficiency from a computational viewpoint. Borrowing from theoretical computer science, we define a market to be \emph{efficient with respect to resources $S$} (e.g., time, memory) if no strategy using resources $S$ can make a profit. As a first step, we consider memory-$m$ strategies whose action at time $t$ depends only on the $m$ previous observations at times $t-m,...,t-1$. We introduce and study a simple model of market evolution, where strategies impact the market by their decision to buy or sell. We show that the effect of optimal strategies using memory $m$ can lead to "market conditions" that were not present initially, such as (1) market bubbles and (2) the possibility for a strategy using memory $m' > m$ to make a bigger profit than was initially possible. We suggest ours as a framework to rationalize the technological arms race of quantitative trading firms.
0909.0067
Bilinear biorthogonal expansions and the Dunkl kernel on the real line
math.FA cs.IT math.IT
We study an extension of the classical Paley-Wiener space structure, which is based on bilinear expansions of integral kernels into biorthogonal sequences of functions. The structure includes both sampling expansions and Fourier-Neumann type series as special cases, and it also provides a bilinear expansion for the Dunkl kernel (in the rank 1 case) which is a Dunkl analogue of Gegenbauer's expansion of the plane wave and the corresponding sampling expansions. In fact, we show how to derive sampling and Fourier-Neumann type expansions from the results related to the bilinear expansion for the Dunkl kernel.
0909.0105
Concatenated Coding for the AWGN Channel with Noisy Feedback
cs.IT math.IT
The use of open-loop coding can be easily extended to a closed-loop concatenated code if the channel has access to feedback. This can be done by introducing a feedback transmission scheme as an inner code. In this paper, this process is investigated for the case when a linear feedback scheme is implemented as an inner code, in particular over an additive white Gaussian noise (AWGN) channel with noisy feedback. We begin by deriving an optimal linear feedback scheme through optimization over the received signal-to-noise ratio. From this optimization, an asymptotically optimal linear feedback scheme is produced and compared to other well-known schemes. Then, the linear feedback scheme is implemented as an inner code of a concatenated code over the AWGN channel with noisy feedback. This code shows improvements not only in error exponent bounds, but also in bit-error rate and frame-error rate. It is also shown that if the concatenated code has total blocklength L and the inner code has blocklength N, the inner code blocklength should scale as N = O(C/R), where C is the capacity of the channel and R is the rate of the concatenated code. Simulations with low density parity check (LDPC) and turbo codes are provided to display practical applications and their error rate benefits.
0909.0108
On the optimal design of parallel robots taking into account their deformations and natural frequencies
cs.RO
This paper discusses the utility of using simple stiffness and vibration models, based on the Jacobian matrix of a manipulator and only the rigidity of the actuators, whenever its geometry is optimised. In many works, these simplified models are used to propose optimal designs of robots. However, the elasticity of the drive system is often negligible in comparison with the elasticity of the elements, especially in applications where high dynamic performance is needed. Therefore, the use of such a simplified model may lead to the creation of robots with long legs, which will be subjected to large bending and twisting deformations. This paper presents an example of a manipulator for which it is preferable to use a complete stiffness or vibration model to obtain the most suitable design, and shows that the use of simplified models can lead to mechanisms with poorer rigidity.
0909.0109
On the Internal Topological Structure of Plane Regions
cs.AI cs.CG
The study of topological information of spatial objects has for a long time been a focus of research in disciplines like computational geometry, spatial reasoning, cognitive science, and robotics. While the majority of these researches emphasised the topological relations between spatial objects, this work studies the internal topological structure of bounded plane regions, which could consist of multiple pieces and/or have holes and islands to any finite level. The insufficiency of simple regions (regions homeomorphic to closed disks) to cope with the variety and complexity of spatial entities and phenomena has been widely acknowledged. Another significant drawback of simple regions is that they are not closed under set operations union, intersection, and difference. This paper considers bounded semi-algebraic regions, which are closed under set operations and can closely approximate most plane regions arising in practice.
0909.0118
Dynamic Multimedia Content Retrieval System in Distributed Environment
cs.MM cs.IR
WiCoM enables remote management of web resources. Our application, Mobile Reporter, is aimed at journalists, who will be able to capture events in real time using their mobile phones and update their web server with the latest event. WiCoM has been developed using J2ME technology on the client side and PHP on the server side. The communication between the client and the server is established through GPRS. Mobile Reporter will be able to upload, edit and remove both textual and multimedia contents on the server.
0909.0122
Reasoning with Topological and Directional Spatial Information
cs.AI
Current research on qualitative spatial representation and reasoning mainly focuses on one single aspect of space. In real world applications, however, multiple spatial aspects are often involved simultaneously. This paper investigates problems arising in reasoning with combined topological and directional information. We use the RCC8 algebra and the Rectangle Algebra (RA) for expressing topological and directional information respectively. We give examples to show that the bipath-consistency algorithm BIPATH is incomplete for solving even basic RCC8 and RA constraints. If topological constraints are taken from some maximal tractable subclasses of RCC8, and directional constraints are taken from a subalgebra, termed DIR49, of RA, then we show that BIPATH is able to separate topological constraints from directional ones. This means, given a set of hybrid topological and directional constraints from the above subclasses of RCC8 and RA, we can transfer the joint satisfaction problem in polynomial time to two independent satisfaction problems in RCC8 and RA. For general RA constraints, we give a method to compute solutions that satisfy all topological constraints and approximately satisfy each RA constraint to any prescribed precision.
0909.0138
Reasoning about Cardinal Directions between Extended Objects
cs.AI
Direction relations between extended spatial objects are important commonsense knowledge. Recently, Goyal and Egenhofer proposed a formal model, known as Cardinal Direction Calculus (CDC), for representing direction relations between connected plane regions. CDC is perhaps the most expressive qualitative calculus for directional information, and has attracted increasing interest from areas such as artificial intelligence, geographical information science, and image retrieval. Given a network of CDC constraints, the consistency problem is deciding if the network is realizable by connected regions in the real plane. This paper provides a cubic algorithm for checking consistency of basic CDC constraint networks, and proves that reasoning with CDC is in general an NP-Complete problem. For a consistent network of basic CDC constraints, our algorithm also returns a 'canonical' solution in cubic time. This cubic algorithm is also adapted to cope with cardinal directions between possibly disconnected regions, in which case currently the best algorithm is of time complexity O(n^5).
0909.0173
A theory of intelligence: networked problem solving in animal societies
cs.AI nlin.AO
A society's single emergent, increasing intelligence arises partly from the thermodynamic advantages of networking the innate intelligence of different individuals, and partly from the accumulation of solved problems. Economic growth is proportional to the square of the network entropy of a society's population times the network entropy of the number of the society's solved problems.
0909.0206
Geometry of the Welch Bounds
cs.IT math.IT
A geometric perspective involving Grammian and frame operators is used to derive the entire family of Welch bounds. This perspective unifies a number of observations that have been made regarding tightness of the bounds and their connections to symmetric k-tensors, tight frames, homogeneous polynomials, and t-designs. In particular, a connection has been drawn between sampling of homogeneous polynomials and frames of symmetric k-tensors. It is also shown that tightness of the bounds requires tight frames. The lack of tight frames in symmetric k-tensors in many cases, however, leads to consideration of sets that come as close as possible to attaining the bounds. The geometric derivation is then extended in the setting of generalized or continuous frames. The Welch bounds for finite sets and countably infinite sets become special cases of this general setting.
0909.0247
An Enhanced Static Data Compression Scheme Of Bengali Short Message
cs.IT math.IT
This paper concerns a modified approach to compressing short Bengali text messages for small devices. The prime objective of this research is to establish a low-complexity compression scheme suitable for small devices having little memory and relatively low processing speed. The aim is not to compress text of any size up to its maximum level without any constraint on space and time; rather, the target is to compress short messages up to an optimal level that needs minimum space, consumes less time, and places a lower demand on the processor. We have implemented character masking, dictionary matching, the associative rule of data mining, and a hyphenation algorithm for syllable-based compression in hierarchical steps to achieve low-complexity lossless compression of text messages for mobile devices. The digrams are chosen on the basis of an extensive statistical model, and the static Huffman coding is done within the same context.
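A minimal sketch of the static Huffman coding stage (the message and alphabet here are illustrative; the paper's scheme adds character masking, dictionary matching, and syllable-based steps before this):

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a static Huffman code table from symbol frequencies."""
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)   # two least-frequent subtrees
        f2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (f1 + f2, count, merged))
        count += 1                         # tie-breaker for the heap
    return heap[0][2]

def huffman_decode(bits, codes):
    """Walk the prefix-free code until a codeword matches."""
    inv = {c: s for s, c in codes.items()}
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in inv:
            out.append(inv[buf])
            buf = ""
    return "".join(out)

message = "ami tomake bhalobashi"   # illustrative romanised short message
codes = huffman_codes(message)
encoded = "".join(codes[ch] for ch in message)
print(len(encoded), "bits vs", 8 * len(message), "bits uncompressed")
```

In the paper's setting the code table is static (built once from corpus statistics and shared by sender and receiver), so only the bit string needs to be transmitted.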
0909.0251
Channel Equalization in Digital Transmission
cs.IT math.IT
Channel equalization is the process of reducing amplitude, frequency and phase distortion in a radio channel with the intent of improving transmission performance. Different types of equalizers and their applications are described, and some practical examples are given. In particular, it is shown how equalization works in the digital communication scenario. This paper presents a vivid description of channel equalization in digital transmission systems.
0909.0400
Rare-Allele Detection Using Compressed Se(que)nsing
q-bio.GN cs.IT cs.LG math.IT q-bio.QM stat.AP stat.ML
Detection of rare variants by resequencing is important for the identification of individuals carrying disease variants. Rapid sequencing by new technologies enables low-cost resequencing of target regions, although it is still prohibitive to test more than a few individuals. In order to improve cost trade-offs, it has recently been suggested to apply pooling designs which enable the detection of carriers of rare alleles in groups of individuals. However, this was shown to hold only for a relatively low number of individuals in a pool, and requires the design of pooling schemes for particular cases. We propose a novel pooling design, based on a compressed sensing approach, which is both general, simple and efficient. We model the experimental procedure and show via computer simulations that it enables the recovery of rare allele carriers out of larger groups than were possible before, especially in situations where high coverage is obtained for each individual. Our approach can also be combined with barcoding techniques to enhance performance and provide a feasible solution based on current resequencing costs. For example, when targeting a small enough genomic region (~100 base-pairs) and using only ~10 sequencing lanes and ~10 distinct barcodes, one can recover the identity of 4 rare allele carriers out of a population of over 4000 individuals.
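A toy version of the sparse-recovery step, using a generic Gaussian sensing matrix and plain Orthogonal Matching Pursuit in place of the paper's specific pooling design (all sizes and the seed are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical toy numbers: n individuals, m pooled measurements,
# k rare-allele carriers forming a sparse 0/1 vector.
n, m, k = 200, 40, 3
M = rng.normal(size=(m, n))
M /= np.linalg.norm(M, axis=0)          # unit-norm columns

x_true = np.zeros(n)
carriers = rng.choice(n, size=k, replace=False)
x_true[carriers] = 1.0                  # 1 = carries the rare allele
y = M @ x_true                          # noiseless pooled measurements

# Orthogonal Matching Pursuit: greedily pick the column that best
# matches the residual, then re-fit by least squares on the support.
support, residual = [], y.copy()
for _ in range(k):
    j = int(np.argmax(np.abs(M.T @ residual)))
    support.append(j)
    coef, *_ = np.linalg.lstsq(M[:, support], y, rcond=None)
    residual = y - M[:, support] @ coef

print(sorted(support), sorted(carriers.tolist()))
```

The paper's pooling matrices are constrained by the experimental protocol (which individuals can be mixed into which pool) and its model includes sequencing noise; this sketch only shows why m pooled measurements can identify k carriers among n >> m individuals.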
0909.0442
Kinematic analysis of a class of analytic planar 3-RPR parallel manipulators
cs.RO
A class of analytic planar 3-RPR manipulators is analyzed in this paper. These manipulators have congruent base and moving platforms, and the moving platform is rotated by 180 deg about an axis in the plane. The forward kinematics is reduced to the solution of a 3rd-degree polynomial and a quadratic equation in sequence. The singularities are calculated and plotted in the joint space. The second-order singularities (cusp points), which play an important role in non-singular change-of-assembly-mode motions, are also analyzed.
0909.0481
Scale-Based Gaussian Coverings: Combining Intra and Inter Mixture Models in Image Segmentation
cs.CV
By a "covering" we mean a Gaussian mixture model fit to observed data. Approximations of the Bayes factor can be availed of to judge model fit to the data within a given Gaussian mixture model. Between families of Gaussian mixture models, we propose the R\'enyi quadratic entropy as an excellent and tractable model comparison framework. We exemplify this using the segmentation of an MRI image volume, based (1) on a direct Gaussian mixture model applied to the marginal distribution function, and (2) Gaussian model fit through k-means applied to the 4D multivalued image volume furnished by the wavelet transform. Visual preference for one model over another is not immediate. The R\'enyi quadratic entropy allows us to show clearly that one of these modelings is superior to the other.
0909.0553
A New Approach to Random Access: Reliable Communication and Reliable Collision Detection
cs.IT math.IT
This paper applies Information Theoretic analysis to packet-based random multiple access communication systems. A new channel coding approach is proposed for coding within each data packet with built-in support for bursty traffic properties, such as message underflow, and for random access properties, such as packet collision detection. The coding approach does not require joint communication rate determination either among the transmitters or between the transmitters and the receiver. Its performance limitation is characterized by an achievable region defined in terms of communication rates, such that reliable packet recovery is supported for all rates inside the region and reliable collision detection is supported for all rates outside the region. For random access communication over a discrete-time memoryless channel, it is shown that the achievable rate region of the introduced coding approach equals the Shannon information rate region without a convex hull operation. Further connections between the achievable rate region and the Shannon information rate region are developed and explained.
0909.0555
Reduced Complexity Sphere Decoding
cs.IT math.IT
In Multiple-Input Multiple-Output (MIMO) systems, Sphere Decoding (SD) can achieve performance equivalent to full-search Maximum Likelihood (ML) decoding, with reduced complexity. Several researchers have reported techniques that reduce the complexity of SD further. In this paper, a new technique is introduced which decreases the computational complexity of SD substantially, without sacrificing performance. The reduction is accomplished by deconstructing the decoding metric to decrease the number of computations and exploiting the structure of a lattice representation. Furthermore, an application of SD employing a proposed smart implementation with very low computational complexity is introduced. This application calculates the soft bit metrics of a bit-interleaved convolutional-coded MIMO system in an efficient manner. Based on the reduced-complexity SD, the proposed smart implementation employs an initial radius acquired by Zero-Forcing Decision Feedback Equalization (ZF-DFE), which ensures no empty spheres. In addition, a particular data structure is incorporated to efficiently reduce the number of operations carried out by SD. Simulation results show that these approaches achieve substantial gains in terms of computational complexity for both uncoded and coded MIMO systems.
0909.0572
A Method for Accelerating the HITS Algorithm
cs.IR
We present a new method to accelerate the HITS algorithm by exploiting the hyperlink structure of the web graph. The proposed algorithm extends the idea of authority and hub scores from HITS by introducing two diagonal matrices whose constant entries act as weights, making authority pages more authoritative and hub pages more hub-like. This method works because in the web graph good authorities are pointed to by good hubs and good hubs point to good authorities; consequently, these pages collect their scores faster under the proposed algorithm than under standard HITS. We show that the authority and hub vectors of the proposed algorithm exist but are not necessarily unique, and we give a treatment that ensures the uniqueness of the vectors. The experimental results show that the proposed algorithm can speed up HITS computations, especially for back-button datasets.
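The weighted update the abstract describes can be sketched as a power iteration with diagonal weight matrices; the particular weights below are illustrative choices, not the paper's constants:

```python
import numpy as np

def weighted_hits(A, w_auth, w_hub, iters=50):
    """HITS power iteration with diagonal weight matrices.

    A[i, j] = 1 if page i links to page j. Entries of w_auth / w_hub
    larger than 1 boost the corresponding authority / hub scores.
    """
    Da, Dh = np.diag(w_auth), np.diag(w_hub)
    a = np.ones(A.shape[0])
    h = np.ones(A.shape[0])
    for _ in range(iters):
        a = Da @ A.T @ h            # authorities gather from hubs
        a /= np.linalg.norm(a)
        h = Dh @ A @ a              # hubs gather from authorities
        h /= np.linalg.norm(h)
    return a, h

# Tiny graph: pages 0 and 1 both link to page 2.
A = np.array([[0, 0, 1],
              [0, 0, 1],
              [0, 0, 0]], float)
a, h = weighted_hits(A, w_auth=[1.0, 1.0, 1.5], w_hub=[1.2, 1.2, 1.0])
print(np.argmax(a))  # page 2 collects the top authority score
```

With all weights equal to 1 this reduces to standard HITS; the non-uniform weights only change how quickly (and, in degenerate graphs, to which of several fixed points) the iteration converges.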
0909.0588
Receding horizon decoding of convolutional codes
cs.IT math.IT math.OC
Decoding of convolutional codes poses a significant challenge for coding theory. Classical methods, based on e.g. Viterbi decoding, suffer from being computationally expensive and are restricted therefore to codes of small complexity. Based on analogies with model predictive optimal control, we propose a new iterative method for convolutional decoding that is cheaper to implement than established algorithms, while still offering significant error correction capabilities. The algorithm is particularly well-suited for decoding special types of convolutional codes, such as e.g. cyclic convolutional codes.
0909.0599
Codebook Design Method for Noise Robust Speaker Identification based on Genetic Algorithm
cs.SD cs.NE
In this paper, a novel method of designing a codebook for noise-robust speaker identification using a Genetic Algorithm is proposed. A Wiener filter has been used to remove background noise from the source speech utterances. Speech features have been extracted using standard speech parameterization methods such as LPC, LPCC, RCC, MFCC, (delta)MFCC and (delta)(delta)MFCC, and the performance of the proposed system has been compared across these techniques. In this codebook design method, the Genetic Algorithm has the capability of reaching a globally optimal result and hence improves the quality of the codebook. Experiments on the NOIZEOUS speech database show that 79.62 percent accuracy has been achieved.
0909.0611
Effects of Mechanical Coupling on the Dynamics of Balancing Tasks
cs.CE
Coupled human balancing tasks are investigated based on both pseudo-neural controllers characterized by time-delayed feedback with random gain and natural human balancing tasks. It is shown numerically that, compared to single balancing tasks, balancing tasks coupled by mechanical structures exhibit enhanced stability against balancing errors in terms of both amplitude and velocity and also improve the tracking ability of the controllers. We then perform an experiment in which numerical pseudo-neural controllers are replaced with natural human balancing tasks carried out using computer mice. The results reveal that the coupling structure generates asymmetric tracking abilities in subjects whose tracking abilities are nearly symmetric in their single balancing tasks.
0909.0635
Advances in Feature Selection with Mutual Information
cs.LG cs.IT math.IT
The selection of features that are relevant for a prediction or classification problem is an important problem in many domains involving high-dimensional data. Selecting features helps fight the curse of dimensionality, improves the performance of prediction or classification methods, and aids interpretation of the application. In a nonlinear context, the mutual information is widely used as a relevance criterion for features and sets of features. Nevertheless, it suffers from at least three major limitations: mutual information estimators depend on smoothing parameters, there is no theoretically justified stopping criterion in the greedy feature selection procedure, and the estimation itself suffers from the curse of dimensionality. This chapter shows how to deal with these problems. The first two are addressed by using resampling techniques that provide a statistical basis to select the estimator parameters and to stop the search procedure. The third is addressed by modifying the mutual information criterion into a measure of how features are complementary (and not only informative) for the problem at hand.
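A minimal sketch of the greedy forward-selection loop with a plug-in (histogram) mutual information estimator. Note that the estimate depends on the bin count, which is exactly the smoothing-parameter issue the chapter addresses; this simplified variant also ranks features by marginal MI only, whereas the chapter's criterion rewards complementarity:

```python
import numpy as np

def mutual_info(x, y, bins=8):
    """Plug-in (histogram) estimate of I(X;Y) in nats.
    The result depends on the smoothing parameter `bins`."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def greedy_select(X, y, k):
    """Forward selection: repeatedly add the remaining feature with the
    highest estimated MI with the target."""
    remaining = list(range(X.shape[1]))
    chosen = []
    for _ in range(k):
        best = max(remaining, key=lambda j: mutual_info(X[:, j], y))
        chosen.append(best)
        remaining.remove(best)
    return chosen

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
y = X[:, 2] + 0.1 * rng.normal(size=2000)   # only feature 2 is relevant
print(greedy_select(X, y, 2)[0])            # feature 2 is picked first
```

Resampling (as the chapter proposes) would be layered on top: estimate the MI over bootstrap replicates to choose `bins` and to decide when adding further features stops being statistically significant.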
0909.0638
Median topographic maps for biomedical data sets
cs.LG q-bio.QM
Median clustering extends popular neural data analysis methods such as the self-organizing map or neural gas to general data structures given by a dissimilarity matrix only. This offers flexible and robust global data inspection methods which are particularly suited for a variety of data as occurs in biomedical domains. In this chapter, we give an overview about median clustering and its properties and extensions, with a particular focus on efficient implementations adapted to large scale data analysis.
0909.0641
Monotonicity, thinning and discrete versions of the Entropy Power Inequality
cs.IT math.IT
We consider the entropy of sums of independent discrete random variables, in analogy with Shannon's Entropy Power Inequality, where equality holds for normals. In our case, infinite divisibility suggests that equality should hold for Poisson variables. We show that some natural analogues of the Entropy Power Inequality do not in fact hold, but propose an alternative formulation which does always hold. The key to many proofs of Shannon's Entropy Power Inequality is the behaviour of entropy on scaling of continuous random variables. We believe that R\'{e}nyi's operation of thinning discrete random variables plays a similar role to scaling, and give a sharp bound on how the entropy of ultra log-concave random variables behaves on thinning. In the spirit of the monotonicity results established by Artstein, Ball, Barthe and Naor, we prove a stronger version of concavity of entropy, which implies a strengthened form of our discrete Entropy Power Inequality.
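Rényi thinning can be sketched directly: thin a variable by keeping each of its units independently with probability alpha, which for a Poisson input yields another Poisson with scaled mean, mirroring how scaling acts on continuous variables (a quick simulation for illustration, not part of the paper's proofs):

```python
import math
import random

def thin(x, alpha, rng):
    """Renyi thinning T_alpha: keep each of the x units independently
    with probability alpha."""
    return sum(1 for _ in range(x) if rng.random() < alpha)

def poisson(lam, rng):
    """Sample Poisson(lam) by Knuth's method (fine for small lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

rng = random.Random(1)
lam, alpha, n = 5.0, 0.4, 100_000

# Thinning Poisson(lam) gives Poisson(alpha * lam): the empirical mean
# of the thinned samples should be close to alpha * lam = 2.0.
samples = [thin(poisson(lam, rng), alpha, rng) for _ in range(n)]
mean = sum(samples) / n
print(round(mean, 2))
```

The closure of the Poisson family under thinning is what makes thinning the natural discrete analogue of the scaling step in proofs of the continuous Entropy Power Inequality.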
0909.0682
On Planning with Preferences in HTN
cs.AI
In this paper, we address the problem of generating preferred plans by combining the procedural control knowledge specified by Hierarchical Task Networks (HTNs) with rich qualitative user preferences. The outcome of our work is a language for specifying user preferences, tailored to HTN planning, together with a provably optimal preference-based planner, HTNPLAN, that is implemented as an extension of SHOP2. To compute preferred plans, we propose an approach based on forward-chaining heuristic search. Our heuristic uses an admissible evaluation function measuring the satisfaction of preferences over partial plans. Our empirical evaluation demonstrates the effectiveness of our HTNPLAN heuristics. We prove our approach sound and optimal with respect to the plans it generates by appealing to a situation calculus semantics of our preference language and of HTN planning. While our implementation builds on SHOP2, the language and techniques proposed here are relevant to a broad range of HTN planners.
0909.0685
In-Network Outlier Detection in Wireless Sensor Networks
cs.DB cs.NI
To address the problem of unsupervised outlier detection in wireless sensor networks, we develop an approach that (1) is flexible with respect to the outlier definition, (2) computes the result in-network to reduce both bandwidth and energy usage, (3) only uses single-hop communication, thus permitting very simple node failure detection and message reliability assurance mechanisms (e.g., carrier-sense), and (4) seamlessly accommodates dynamic updates to data. We examine performance using simulation with real sensor data streams. Our results demonstrate that our approach is accurate and imposes a reasonable communication load and level of power consumption.
0909.0704
Concentric Permutation Source Codes
cs.IT math.IT
Permutation codes are a class of structured vector quantizers with a computationally-simple encoding procedure based on sorting the scalar components. Using a codebook comprising several permutation codes as subcodes preserves the simplicity of encoding while increasing the number of rate-distortion operating points, improving the convex hull of operating points, and increasing design complexity. We show that when the subcodes are designed with the same composition, optimization of the codebook reduces to a lower-dimensional vector quantizer design within a single cone. Heuristics for reducing design complexity are presented, including an optimization of the rate allocation in a shape-gain vector quantizer with gain-dependent wrapped spherical shape codebook.
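The sort-based encoding mentioned above can be sketched as follows: the nearest codeword in a permutation code is found by permuting the initial codeword so that its components follow the ordering of the input. The initial codeword `mu` below is a hypothetical example, not one of the paper's optimized subcodes.

```python
def pc_encode(x, mu):
    """Permutation-code encoding: permute the initial codeword mu
    (assumed sorted in descending order) so that its largest components
    land at the positions of x's largest components."""
    order = sorted(range(len(x)), key=lambda i: -x[i])  # indices of x, descending
    y = [0.0] * len(x)
    for rank, i in enumerate(order):
        y[i] = mu[rank]
    return y

x = [0.3, -1.2, 2.5, 0.1]
mu = [1.0, 0.5, 0.0, -1.0]     # hypothetical initial codeword
print(pc_encode(x, mu))        # [0.5, -1.0, 1.0, 0.0]
```

The cost of encoding is a single sort, which is what keeps the procedure computationally simple regardless of codebook size.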
0909.0737
Efficient algorithms for training the parameters of hidden Markov models using stochastic expectation maximization (EM) training and Viterbi training
q-bio.QM cs.LG q-bio.GN
Background: Hidden Markov models are widely employed by numerous bioinformatics programs used today. Applications range widely from comparative gene prediction to time-series analyses of micro-array data. The parameters of the underlying models need to be adjusted for specific data sets, for example the genome of a particular species, in order to maximize the prediction accuracy. Computationally efficient algorithms for parameter training are thus key to maximizing the usability of a wide range of bioinformatics applications. Results: We introduce two computationally efficient training algorithms, one for Viterbi training and one for stochastic expectation maximization (EM) training, which render the memory requirements independent of the sequence length. Unlike the existing algorithms for Viterbi and stochastic EM training which require a two-step procedure, our two new algorithms require only one step and scan the input sequence in only one direction. We also implement these two new algorithms and the already published linear-memory algorithm for EM training into the hidden Markov model compiler HMM-Converter and examine their respective practical merits for three small example models. Conclusions: Bioinformatics applications employing hidden Markov models can use the two algorithms in order to make Viterbi training and stochastic EM training more computationally efficient. Using these algorithms, parameter training can thus be attempted for more complex models and longer training sequences. The two new algorithms have the added advantage of being easier to implement than the corresponding default algorithms for Viterbi training and stochastic EM training.
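This is not the paper's linear-memory algorithm or its HMM-Converter implementation; it is a minimal, generic sketch of one round of Viterbi training (decode the single best state path, then re-estimate transition and emission probabilities from counts along that path, with pseudocounts). The two-state model and pseudocount value are illustrative assumptions.

```python
import math

def viterbi(obs, states, pi, A, B):
    """Most likely state path, computed in the log domain."""
    V = [{s: math.log(pi[s]) + math.log(B[s][obs[0]]) for s in states}]
    back = []
    for t in range(1, len(obs)):
        V.append({}); back.append({})
        for s in states:
            best = max(states, key=lambda r: V[t - 1][r] + math.log(A[r][s]))
            back[-1][s] = best
            V[t][s] = V[t - 1][best] + math.log(A[best][s]) + math.log(B[s][obs[t]])
    path = [max(states, key=lambda s: V[-1][s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

def viterbi_train_step(obs, states, symbols, pi, A, B, pseudo=1.0):
    """One Viterbi-training round: decode, then re-estimate A and B
    from counts along the best path (pseudocounts avoid zeros)."""
    path = viterbi(obs, states, pi, A, B)
    At = {r: {s: pseudo for s in states} for r in states}
    Bt = {s: {o: pseudo for o in symbols} for s in states}
    for t in range(len(obs)):
        Bt[path[t]][obs[t]] += 1
        if t:
            At[path[t - 1]][path[t]] += 1
    for r in states:
        z = sum(At[r].values()); At[r] = {s: At[r][s] / z for s in states}
    for s in states:
        z = sum(Bt[s].values()); Bt[s] = {o: Bt[s][o] / z for o in symbols}
    return At, Bt, path

states, symbols = ("H", "L"), ("a", "b")
pi = {"H": 0.5, "L": 0.5}
A0 = {"H": {"H": 0.8, "L": 0.2}, "L": {"H": 0.2, "L": 0.8}}
B0 = {"H": {"a": 0.9, "b": 0.1}, "L": {"a": 0.1, "b": 0.9}}
A1, B1, path = viterbi_train_step("aaabbb", states, symbols, pi, A0, B0)
print("".join(path))  # HHHLLL
```

Iterating `viterbi_train_step` until the decoded path stops changing gives the usual (default, two-step) Viterbi training loop that the paper's one-step, memory-efficient algorithms improve upon.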
0909.0760
Optimizing Orthogonal Multiple Access based on Quantized Channel State Information
cs.IT math.IT
The performance of systems where multiple users communicate over wireless fading links benefits from channel-adaptive allocation of the available resources. Different from most existing approaches that allocate resources based on perfect channel state information, this work optimizes channel scheduling along with per user rate and power loadings over orthogonal fading channels, when both terminals and scheduler rely on quantized channel state information. Channel-adaptive policies are designed to optimize an average transmit-performance criterion subject to average quality of service requirements. While the resultant optimal policy per fading realization shows that the individual rate and power loadings can be obtained separately for each user, the optimal scheduling is slightly more complicated. Specifically, per fading realization each channel is allocated either to a single (winner) user, or, to a small group of winner users whose percentage of shared resources is found by solving a linear program. A single scheduling scheme combining both alternatives becomes possible by smoothing the original disjoint scheme. The smooth scheduling is asymptotically optimal and incurs reduced computational complexity. Different alternatives to obtain the Lagrange multipliers required to implement the channel-adaptive policies are proposed, including stochastic iterations that are provably convergent and do not require knowledge of the channel distribution. The development of the optimal channel-adaptive allocation is complemented with discussions on the overhead required to implement the novel policies.