0903.3995
Gradient-based adaptive interpolation in super-resolution image restoration
cs.MM cs.CV
This paper presents a super-resolution method based on gradient-based adaptive interpolation. In this method, the interpolation coefficients take into account not only the distance between the interpolated pixel and the neighboring valid pixels, but also the local gradient of the original image: the smaller the local gradient of a pixel, the more influence it has on the interpolated pixel. The interpolated high-resolution image is finally deblurred by applying a Wiener filter. Experimental results show that the proposed method not only substantially improves the subjective and objective quality of restored images, especially at edges, but is also robust to registration error and has low computational complexity.
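To make the weighting idea concrete, here is a minimal sketch of one plausible reading of the scheme: each neighbor's contribution is discounted by its distance and by its local gradient magnitude. The exact weighting function is an assumption of this sketch, not the paper's formula.

```python
import numpy as np

def gradient_adaptive_interpolate(values, distances, gradients, eps=1e-8):
    # Hypothetical weighting: influence decays with distance and with
    # local gradient magnitude, so smooth neighbors dominate the estimate.
    w = 1.0 / ((distances + eps) * (1.0 + np.abs(gradients)))
    return float(np.sum(w * values) / np.sum(w))

# Toy usage: four neighbors, one of which sits on a strong edge.
vals  = np.array([100.0, 104.0, 98.0, 180.0])  # neighbor intensities
dists = np.array([1.0, 1.0, 1.4, 1.4])         # distances to target site
grads = np.array([2.0, 3.0, 1.0, 60.0])        # local gradient magnitudes
print(gradient_adaptive_interpolate(vals, dists, grads))  # ~101, pulled toward smooth neighbors
```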
0903.4014
Construction of Codes for Wiretap Channel and Secret Key Agreement from Correlated Source Outputs by Using Sparse Matrices
cs.IT cs.CR math.IT
The aim of this paper is to prove coding theorems for the wiretap channel coding problem and the secret key agreement problem based on the notion of a hash property for an ensemble of functions. These theorems imply that codes using sparse matrices can achieve the optimal rate. Furthermore, fixed-rate universal coding theorems for a wiretap channel and for secret key agreement are also proved.
0903.4035
BLOGRANK: Ranking Weblogs Based On Connectivity And Similarity Features
cs.IR
A large part of the hidden web resides in weblog servers. New content is produced on a daily basis, and traditional search engines prove insufficient due to the nature of weblogs. This work summarizes the structure of the blogosphere and highlights the special features of weblogs. We present a method for ranking weblogs based on the link graph and on several similarity characteristics between weblogs. First, we create an enhanced graph of connected weblogs and add new types of edges and weights utilising many weblog features. Then we assign a ranking to each weblog using our algorithm, BlogRank, a modified version of PageRank. To validate our method we run experiments on a weblog dataset, which we process and adapt to our search engine (http://spiderwave.aueb.gr/Blogwave). The results suggest that the rankings produced with the enhanced graph and the BlogRank algorithm are preferred by users.
0903.4036
Feedback control logic synthesis for non-safe Petri nets
cs.IT math.IT
This paper addresses the problem of forbidden states in non-safe Petri nets (PN) modelling discrete event systems. To prevent the forbidden states, it is possible to use conditions or predicates associated with transitions. Generally, there are many forbidden states, and thus many complex conditions are associated with the transitions. A new idea for computing predicates in non-safe Petri nets is presented. Using this method, we can construct a maximally permissive controller whenever one exists.
0903.4101
Polylog space compression, pushdown compression, and Lempel-Ziv are incomparable
cs.CC cs.IR
The pressing need for efficient compression schemes for XML documents has recently focused attention on stack computation, and in particular calls for a formulation of information-lossless stack or pushdown compressors that allows a formal analysis of their performance and a more ambitious use of the stack in XML compression, where so far it has mainly been connected to parsing mechanisms. In this paper we introduce the model of pushdown compressors, based on pushdown transducers that compute a single injective function while keeping the widest generality regarding stack computation. We also consider online compression algorithms that use at most polylogarithmic space (plogon). These algorithms correspond to compressors in the data stream model. We compare the performance of these two families of compressors with each other and with the general-purpose Lempel-Ziv algorithm. This comparison is made without any a priori assumption on the data's source and considering the asymptotic compression ratio for infinite sequences. We prove that in all cases they are incomparable.
0903.4128
Rate Adaptation via Link-Layer Feedback for Goodput Maximization over a Time-Varying Channel
cs.IT cs.NI math.IT math.OC
We consider adapting the transmission rate to maximize the goodput, i.e., the amount of data transmitted without error, over a continuous Markov flat-fading wireless channel. In particular, we consider schemes in which transmitter channel state is inferred from degraded causal error-rate feedback, such as packet-level ACK/NAKs in an automatic repeat request (ARQ) system. In such schemes, the choice of transmission rate affects not only the subsequent goodput but also the subsequent feedback, implying that the optimal rate schedule is given by a partially observable Markov decision process (POMDP). Because solution of the POMDP is computationally impractical, we consider simple suboptimal greedy rate assignment and show that the optimal scheme would itself be greedy if the error-rate feedback were non-degraded. Furthermore, we show that greedy rate assignment using non-degraded feedback yields a total goodput that upper-bounds that of optimal rate assignment using degraded feedback. We then detail the implementation of the greedy scheme and propose a reduced-complexity greedy scheme that adapts the transmission rate only once per block of packets. We also investigate the performance of the schemes numerically, and show that the proposed greedy scheme achieves steady-state goodputs reasonably close to the upper bound on goodput calculated using non-degraded feedback. A similar improvement is obtained in steady-state goodput, drop rate, and average buffer occupancy in the presence of data buffers. We also investigate an upper bound on the performance of optimal rate assignment for a discrete approximation of the channel and show that such quantization leads to a significant loss in achievable goodput.
0903.4132
Switcher-random-walks: a cognitive-inspired mechanism for network exploration
cs.AI cond-mat.dis-nn physics.soc-ph
Semantic memory is the subsystem of human memory that stores knowledge of concepts or meanings, as opposed to specific life experiences. The organization of concepts within semantic memory can be understood as a semantic network, where the concepts (nodes) are associated (linked) to others depending on perceptions, similarities, etc. Lexical access is the complementary part of this system and allows the retrieval of such organized knowledge. While conceptual information is stored under a certain underlying organization (and thus gives rise to a specific topology), it is crucial to have accurate access to any of the information units, e.g. the concepts, in order to efficiently retrieve semantic information for real-time needs. An example of such an information retrieval process occurs in verbal fluency tasks, and it is known to involve two different mechanisms: "clustering", or generating words within a subcategory, and, when a subcategory is exhausted, "switching" to a new subcategory. We extend this approach to random-walking on a network (clustering) in combination with jumping (switching) to any node with a certain probability, and derive its analytical expression based on Markov chains. Results show that this dual mechanism helps optimize the exploration of different network models in terms of the mean first passage time. Additionally, this cognitively inspired dual mechanism opens a new framework to better understand and evaluate exploration, propagation and transport phenomena in other complex systems where switching-like phenomena are feasible.
0903.4207
MacWilliams Identities for Codes on Graphs
cs.IT math.IT
The MacWilliams identity for linear time-invariant convolutional codes that has recently been found by Gluesing-Luerssen and Schneider is proved concisely, and generalized to arbitrary group codes on graphs. A similar development yields a short, transparent proof of the dual sum-product update rule.
0903.4217
Conditional Probability Tree Estimation Analysis and Algorithms
cs.LG cs.AI
We consider the problem of estimating the conditional probability of a label in time $O(\log n)$, where $n$ is the number of possible labels. We analyze a natural reduction of this problem to a set of binary regression problems organized in a tree structure, proving a regret bound that scales with the depth of the tree. Motivated by this analysis, we propose the first online algorithm which provably constructs a logarithmic-depth tree on the set of labels to solve this problem. We test the algorithm empirically, showing that it works successfully on a dataset with roughly $10^6$ labels.
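As a sketch of why such a tree gives $O(\log n)$ evaluation: the conditional probability of a label is the product of binary decisions along its root-to-leaf path. The `node_prob` interface below stands in for the trained binary regressors and is a hypothetical simplification, not the paper's algorithm.

```python
def label_probability(label, n_labels, node_prob, x):
    # P(label | x) as a product of binary decisions along the
    # root-to-leaf path of a balanced tree over the labels.
    # node_prob(lo, hi, x) -> probability of descending into the left
    # half [lo, mid); it plays the role of a trained binary regressor.
    # Cost: O(log n_labels) regressor evaluations.
    lo, hi, p = 0, n_labels, 1.0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        p_left = node_prob(lo, hi, x)
        if label < mid:
            p, hi = p * p_left, mid
        else:
            p, lo = p * (1.0 - p_left), mid
    return p

# Toy usage with a dummy regressor that always answers 0.5:
print(label_probability(5, 8, lambda lo, hi, x: 0.5, x=None))  # 0.125 = 0.5^3
```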
0903.4237
Projection-Forcing Multisets of Weight Changes
math.CO cs.IT math.IT
Let $F$ be a finite field. A multiset $S$ of integers is projection-forcing if for every linear function $\phi : F^n \to F^m$ whose multiset of weight changes is $S$, $\phi$ is a coordinate projection up to permutation and scaling of entries. The MacWilliams Extension Theorem from coding theory says that $S = \{0, 0, ..., 0\}$ is projection-forcing. We give a (super-polynomial) algorithm to determine whether or not a given $S$ is projection-forcing. We also give a condition that can be checked in polynomial time that implies that $S$ is projection-forcing. This result is a generalization of the MacWilliams Extension Theorem and work by the first author.
0903.4298
Design of Log-Map / Max-Log-Map Decoder
cs.IT math.IT
The process of turbo-code decoding starts with the formation of a posteriori probabilities (APPs) for each data bit, which is followed by choosing the data-bit value that corresponds to the maximum a posteriori (MAP) probability for that data bit. Upon reception of a corrupted code-bit sequence, the process of decision making with APPs allows the MAP algorithm to determine the most likely information bit to have been transmitted at each bit time.
0903.4305
Evaluation of an SQL query
cs.DB
The objective of this paper is to show how the query processor responds to an SQL query. The query processor is split into two parts. The first, called the query compiler, translates an SQL query into a physical execution plan. The second, called the query evaluator, runs the execution plan.
0903.4386
Error-and-Erasure Decoding for Block Codes with Feedback
cs.IT math.IT
Inner and outer bounds are derived on the optimal performance of fixed-length block codes on discrete memoryless channels with feedback and errors-and-erasures decoding. First, an inner bound is derived using a two-phase encoding scheme with communication and control phases, together with the optimal decoding rule for the given encoding scheme among decoding rules that can be represented in terms of pairwise comparisons between the messages. Then, an outer bound is derived using a generalization of the straight-line bound to errors-and-erasures decoders and the optimal error-exponent tradeoff of a feedback encoder with two messages. In addition, upper and lower bounds are derived for the optimal erasure exponent of error-free block codes in terms of the rate. Finally, we present a proof of the fact that the optimal tradeoff between the error exponents of a two-message code does not increase with feedback on DMCs.
0903.4426
Capacity Scaling Laws for Underwater Networks
cs.IT math.IT
The underwater acoustic channel is characterized by a path loss that depends not only on the transmission distance, but also on the signal frequency. Signals transmitted from one user to another over a distance $l$ are subject to a power loss of $l^{-\alpha}{a(f)}^{-l}$. Although a terrestrial radio channel can be modeled similarly, the underwater acoustic channel has different characteristics. The spreading factor $\alpha$, related to the geometry of propagation, has values in the range $1 \leq \alpha \leq 2$. The absorption coefficient $a(f)$ is a rapidly increasing function of frequency: it is three orders of magnitude greater at 100 kHz than at a few Hz. Existing results for capacity of wireless networks correspond to scenarios for which $a(f) = 1$, or a constant greater than one, and $\alpha \geq 2$. These results cannot be applied to underwater acoustic networks in which the attenuation varies over the system bandwidth. We use a water-filling argument to assess the minimum transmission power and optimum transmission band as functions of the link distance and desired data rate, and study the capacity scaling laws under this model.
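The frequency dependence that drives this model can be made concrete with a standard empirical absorption model. The sketch below uses Thorp's formula for $a(f)$, a common engineering choice; the abstract itself does not commit to a particular formula, so treat the numbers as illustrative.

```python
import numpy as np

def thorp_absorption_db_per_km(f_khz):
    # Thorp's empirical absorption coefficient a(f) in dB/km (f in kHz).
    f2 = f_khz ** 2
    return (0.11 * f2 / (1 + f2) + 44 * f2 / (4100 + f2)
            + 2.75e-4 * f2 + 0.003)

def path_loss_db(l_km, f_khz, alpha=1.5):
    # 10*log10 of A(l,f) = l^alpha * a(f)^l: spreading term
    # (referenced to 1 m) plus frequency-dependent absorption.
    return 10 * alpha * np.log10(l_km * 1000) \
        + l_km * thorp_absorption_db_per_km(f_khz)

# Absorption grows rapidly with frequency, as the abstract notes:
for f in (0.1, 1.0, 10.0, 100.0):   # kHz
    print(f, "kHz:", round(thorp_absorption_db_per_km(f), 4), "dB/km")
```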
0903.4434
Random Linear Network Coding for Time-Division Duplexing: Queueing Analysis
cs.IT math.IT
We study the performance of random linear network coding for time division duplexing channels with Poisson arrivals. We model the system as a bulk-service queue with variable bulk size. A full characterization of random linear network coding for time division duplexing channels [1] is provided by means of the moment generating function. We present numerical results for the mean number of packets in the queue and consider the effect of the range of allowable bulk sizes. We show that there exists an optimal choice of this range that minimizes the mean number of data packets in the queue.
0903.4443
Broadcasting in Time-Division Duplexing: A Random Linear Network Coding Approach
cs.IT math.IT
We study random linear network coding for broadcasting in time division duplexing channels. We assume a packet erasure channel with nodes that cannot transmit and receive information simultaneously. The sender transmits coded data packets back-to-back before stopping to wait for the receivers to acknowledge the number of degrees of freedom, if any, that are required to decode correctly the information. We study the mean time to complete the transmission of a block of packets to all receivers. We also present a bound on the number of stops to wait for acknowledgement in order to complete transmission with probability at least $1-\epsilon$, for any $\epsilon>0$. We present analysis and numerical results showing that our scheme outperforms optimal scheduling policies for broadcast, in terms of the mean completion time. We provide a simple heuristic to compute the number of coded packets to be sent before stopping that achieves close to optimal performance with the advantage of a considerable reduction in the search time.
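The mean completion time studied here is easy to estimate by simulation. Below is a minimal Monte Carlo sketch under idealized assumptions (a large field, so every received coded packet is innovative, and i.i.d. erasures per receiver); the slot durations of the actual TDD model are abstracted into counting stop-and-wait rounds.

```python
import random

def mean_completion_rounds(M, n_receivers, erasure, n_tx_per_round, trials=2000):
    # Estimate the number of stop-and-wait rounds until every receiver
    # holds M degrees of freedom. Each round, n_tx_per_round coded
    # packets are sent back-to-back; each receiver loses each packet
    # independently with the given erasure probability.
    total = 0
    for _ in range(trials):
        dofs = [0] * n_receivers
        rounds = 0
        while min(dofs) < M:
            rounds += 1
            for r in range(n_receivers):
                for _ in range(n_tx_per_round):
                    if dofs[r] < M and random.random() > erasure:
                        dofs[r] += 1  # innovative (large-field assumption)
        total += rounds
    return total / trials

# E.g. a block of M=10 packets, 3 receivers, 20% erasures,
# sending 12 coded packets before each stop to listen:
print(mean_completion_rounds(10, 3, 0.2, 12))
```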
0903.4513
Building the information kernel and the problem of recognition
cs.CV cs.AI
There is currently a need for new ways of representing information that identify and organize its characteristics in descending order of importance. Numbers are one of science's most powerful tools for describing reality, and their most important property is positional significance. Suppose we have the number 0.2351734: its digits appear in order of importance, so if necessary we can round it to some value, e.g. 0.235. Arguably, 0.235 carries the most important information in 0.2351734. Thus we can reduce the size of a number without losing much accuracy. Clearly, if we learn to extract a comparable information kernel from graphical or audio data, we can retain the most relevant information and discard the rest. Reducing various kinds of information to an information kernel is an important task whose solution bears on many problems in artificial intelligence and information theory.
0903.4526
On the Achievable Rate of the Fading Dirty Paper Channel with Imperfect CSIT
cs.IT math.IT
The problem of dirty paper coding (DPC) over the (multi-antenna) fading dirty paper channel (FDPC) Y = H(X + S) + Z is considered when there is imperfect knowledge of the channel state information H at the transmitter (CSIT). The case of FDPC with positive definite (p.d.) input covariance matrix was studied by the authors in a recent paper, and here the more general case of positive semi-definite (p.s.d.) input covariance is dealt with. Towards this end, the choice of auxiliary random variable is modified. The algorithms for determination of inflation factor proposed in the p.d. case are then generalized to the case of p.s.d. input covariance. Subsequently, the largest DPC-achievable high-SNR (signal-to-noise ratio) scaling factor over the no-CSIT FDPC with p.s.d. input covariance matrix is derived. This scaling factor is seen to be a non-trivial generalization of the one achieved for the p.d. case. Next, in the limit of low SNR, it is proved that the choice of all-zero inflation factor (thus treating interference as noise) is optimal in the 'ratio' sense, regardless of the covariance matrix used. Further, in the p.d. covariance case, the inflation factor optimal at high SNR is obtained when the number of transmit antennas is greater than the number of receive antennas, with the other case having been already considered in the earlier paper. Finally, the problem of joint optimization of the input covariance matrix and the inflation factor is dealt with, and an iterative numerical algorithm is developed.
0903.4527
Graph polynomials and approximation of partition functions with Loopy Belief Propagation
cs.DM cs.LG
The Bethe approximation, or loopy belief propagation algorithm, is a successful method for approximating partition functions of probabilistic models associated with a graph. Chertkov and Chernyak derived an interesting formula called the Loop Series Expansion, which is an expansion of the partition function. The main term of the series is the Bethe approximation, while the other terms are labeled by subgraphs called generalized loops. In a recent paper, we derived the loop series expansion in the form of a polynomial with positive integer coefficients, and extended the result to the expansion of marginals. In this paper, we give a clearer derivation of these results and discuss the properties of the polynomial introduced there.
0903.4530
Nonnegative approximations of nonnegative tensors
cs.NA cs.IR
We study the decomposition of a nonnegative tensor into a minimal sum of outer products of nonnegative vectors and the associated parsimonious naive Bayes probabilistic model. We show that the corresponding approximation problem, which is central to nonnegative PARAFAC, will always have optimal solutions. The result holds for any choice of norms and, under a mild assumption, even Bregman divergences.
0903.4545
Computer- and robot-assisted Medical Intervention
cs.RO
Medical robotics includes assistive devices used by the physician in order to make his/her diagnostic or therapeutic practices easier and more efficient. This chapter focuses on such systems. It introduces the general field of Computer-Assisted Medical Interventions, its aims, its different components and describes the place of robots in that context. The evolutions in terms of general design and control paradigms in the development of medical robots are presented and issues specific to that application domain are discussed. A view of existing systems, on-going developments and future trends is given. A case-study is detailed. Other types of robotic help in the medical environment (such as for assisting a handicapped person, for rehabilitation of a patient or for replacement of some damaged/suppressed limbs or organs) are out of the scope of this chapter.
0903.4554
Fountain Codes and Invertible Matrices
cs.IT math.IT
This paper deals with Fountain codes, and especially with their encoding matrices, which are required here to be invertible. We show that an encoding matrix induces a permutation, and that encoding matrices form a group under matrix multiplication. An encoding is a transformation that reduces the entropy of an initially high-entropy input vector. We construct a special encoding matrix with which the entropy reduction is more effective than with matrices created by the Ideal Soliton distribution. Experimental results on entropy reduction are shown.
0903.4582
On the Achievable Diversity-Multiplexing Tradeoff in MIMO Fading Channels with Imperfect CSIT
cs.IT math.IT
In this paper, we analyze the fundamental tradeoff of diversity and multiplexing in multi-input multi-output (MIMO) channels with imperfect channel state information at the transmitter (CSIT). We show that with imperfect CSIT, a higher diversity gain as well as a more efficient diversity-multiplexing tradeoff (DMT) can be achieved. In the case of multi-input single-output (MISO)/single-input multi-output (SIMO) channels with K transmit/receive antennas, one can achieve a diversity gain of d(r)=K(1-r+K\alpha) at spatial multiplexing gain r, where \alpha is the CSIT quality defined in this paper. For general MIMO channels with M (M>1) transmit and N (N>1) receive antennas, we show that depending on the value of \alpha, different DMT can be derived and the value of \alpha has a great impact on the achievable diversity, especially at high multiplexing gains. Specifically, when \alpha is above a certain threshold, one can achieve a diversity gain of d(r)=MN(1+MN\alpha)-(M+N-1)r; otherwise, the achievable DMT is much lower and can be described as a collection of discontinuous line segments depending on M, N, r and \alpha. Our analysis reveals that imperfect CSIT significantly improves the achievable diversity gain while enjoying high spatial multiplexing gains.
0903.4594
Dynamic Control of Tunable Sub-optimal Algorithms for Scheduling of Time-varying Wireless Networks
cs.IT math.IT
It is well known that for ergodic channel processes the Generalized Max-Weight Matching (GMWM) scheduling policy stabilizes the network for any supportable arrival rate vector within the network capacity region. This policy, however, often requires the solution of an NP-hard optimization problem. This has motivated many researchers to develop sub-optimal algorithms that approximate the GMWM policy in selecting schedule vectors. One implicit assumption commonly shared in this context is that during the algorithm runtime, the channel states remain effectively unchanged. This assumption may not hold as the time needed to select near-optimal schedule vectors usually increases quickly with the network size. In this paper, we incorporate channel variations and the time-efficiency of sub-optimal algorithms into the scheduler design, to dynamically tune the algorithm runtime considering the tradeoff between algorithm efficiency and its robustness to changing channel states. Specifically, we propose a Dynamic Control Policy (DCP) that operates on top of a given sub-optimal algorithm, and dynamically but in a large time-scale adjusts the time given to the algorithm according to queue backlog and channel correlations. This policy does not require knowledge of the structure of the given sub-optimal algorithm, and with low overhead can be implemented in a distributed manner. Using a novel Lyapunov analysis, we characterize the throughput stability region induced by DCP and show that our characterization can be tight. We also show that the throughput stability region of DCP is at least as large as that of any other static policy. Finally, we provide two case studies to gain further intuition into the performance of DCP.
0903.4696
Multidimensional Online Robot Motion
cs.CG cs.RO
We consider three related problems of robot movement in arbitrary dimensions: coverage, search, and navigation. For each problem, a spherical robot is asked to accomplish a motion-related task in an unknown environment whose geometry is learned by the robot during navigation. The robot is assumed to have tactile and global positioning sensors. We view these problems from the perspective of (non-linear) competitiveness as defined by Gabriely and Rimon. We first show that in 3 dimensions and higher, there is no upper bound on competitiveness: every online algorithm can do arbitrarily badly compared to the optimal. We then modify the problems by assuming a fixed clearance parameter. We are able to give optimally competitive algorithms under this assumption.
0903.4738
Constellation Precoded Beamforming
cs.IT math.IT
We present and analyze the performance of constellation precoded beamforming. This multi-input multi-output transmission technique is based on the singular value decomposition of a channel matrix. In this work, the beamformer is precoded to improve its diversity performance. It was shown previously that while single beamforming achieves full diversity without channel coding, multiple beamforming results in diversity loss. In this paper, we show that a properly designed constellation precoder makes uncoded multiple beamforming achieve full diversity order. We also show that partially precoded multiple beamforming achieves a higher diversity order than multiple beamforming without a constellation precoder if the subchannels to be precoded are properly chosen. We propose several criteria to design the constellation precoder. Simulation results match the analysis, and show that precoded multiple beamforming actually outperforms single beamforming without precoding at the same system data rate while achieving full diversity order.
0903.4742
Guaranteed Minimum Rank Approximation from Linear Observations by Nuclear Norm Minimization with an Ellipsoidal Constraint
cs.IT math.IT
The rank minimization problem is to find the lowest-rank matrix in a given set. Nuclear norm minimization has been proposed as a convex relaxation of rank minimization. Recht, Fazel, and Parrilo have shown that nuclear norm minimization subject to an affine constraint is equivalent to rank minimization under a certain condition given in terms of the rank-restricted isometry property. However, in the presence of measurement noise, or with an only approximately low-rank generative model, the appropriate constraint set is an ellipsoid rather than an affine space. There exist polynomial-time algorithms to solve nuclear norm minimization with an ellipsoidal constraint, but no performance guarantee has been shown for these algorithms. In this paper, we derive such an explicit performance guarantee, bounding the error in the approximate solution provided by nuclear norm minimization with an ellipsoidal constraint.
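The optimization problem analyzed here drops directly into an off-the-shelf convex solver. The following sketch poses nuclear-norm minimization subject to a norm-ball (ellipsoidal) data-fit constraint using cvxpy; the problem sizes and noise level are arbitrary illustrative choices, not the paper's setup.

```python
import numpy as np
import cvxpy as cp

# Recover a low-rank matrix from noisy linear measurements.
rng = np.random.default_rng(0)
n, r, m = 10, 2, 60
X_true = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
A = rng.standard_normal((m, n * n)) / np.sqrt(m)
noise = 0.01 * rng.standard_normal(m)
b = A @ X_true.flatten(order="F") + noise       # column-major to match cp.vec
eps = 1.1 * np.linalg.norm(noise)               # ellipsoid radius

X = cp.Variable((n, n))
prob = cp.Problem(cp.Minimize(cp.normNuc(X)),          # nuclear norm objective
                  [cp.norm(A @ cp.vec(X) - b, 2) <= eps])  # ellipsoidal constraint
prob.solve()
print("relative error:",
      np.linalg.norm(X.value - X_true) / np.linalg.norm(X_true))
```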
0903.4817
An Exponential Lower Bound on the Complexity of Regularization Paths
cs.LG cs.CG cs.CV math.OC stat.ML
For a variety of regularized optimization problems in machine learning, algorithms computing the entire solution path have been developed recently. Most of these problems are quadratic programs parameterized by a single parameter, as for example the Support Vector Machine (SVM). Solution path algorithms compute not only the solution for one particular value of the regularization parameter but the entire path of solutions, making the selection of an optimal parameter much easier. It has been assumed that these piecewise linear solution paths have only linear complexity, i.e. linearly many bends. We prove that for the support vector machine this complexity can be exponential in the number of training points in the worst case. More strongly, we construct a single instance of n input points in d dimensions for an SVM such that at least \Theta(2^{n/2}) = \Theta(2^d) many distinct subsets of support vectors occur as the regularization parameter changes.
0903.4826
New Linear Codes from Matrix-Product Codes with Polynomial Units
cs.IT math.IT
A new construction of codes from old ones is considered; it is an extension of the matrix-product construction. Several linear codes that improve upon the parameters of previously known ones are presented.
0903.4856
A Combinatorial Algorithm to Compute Regularization Paths
cs.LG cs.AI cs.CV
For a wide variety of regularization methods, algorithms computing the entire solution path have been developed recently. Solution path algorithms compute not only the solution for one particular value of the regularization parameter but the entire path of solutions, making the selection of an optimal parameter much easier. Most of the currently used algorithms are not robust in the sense that they cannot deal with general or degenerate input. Here we present a new robust, generic method for parametric quadratic programming. Our algorithm applies directly to nearly all machine learning applications, where so far every application required its own different algorithm. We illustrate the usefulness of our method by applying it to a very low-rank problem which could not be solved by existing path-tracking methods, namely computing part-worth values in choice-based conjoint analysis, a popular technique from market research to estimate consumers' preferences on a class of parameterized options.
0903.4860
Learning Multiple Belief Propagation Fixed Points for Real Time Inference
cs.LG cond-mat.dis-nn physics.data-an
In the context of inference with expectation constraints, we propose an approach based on the "loopy belief propagation" (LBP) algorithm, as a surrogate to an exact Markov Random Field (MRF) modelling. Prior information composed of correlations among a large set of N variables is encoded into a graphical model; this encoding is optimized with respect to an approximate decoding procedure (LBP), which is used to infer hidden variables from an observed subset. We focus on the situation where the underlying data have many different statistical components, representing a variety of independent patterns. Considering a single-parameter family of models, we show how LBP may be used to encode and decode such information efficiently, without solving the NP-hard inverse problem yielding the optimal MRF. Contrary to usual practice, we work in the non-convex Bethe free energy minimization framework, and manage to associate a belief propagation fixed point to each component of the underlying probabilistic mixture. The mean field limit is considered and yields an exact connection with the Hopfield model at finite temperature and steady state, when the number of mixture components is proportional to the number of variables. In addition, we provide an enhanced learning procedure, based on a straightforward multi-parameter extension of the model in conjunction with an effective continuous optimization procedure. This is performed using the stochastic search heuristic CMAES and yields a significant improvement with respect to the single-parameter basic model.
0903.4930
Time manipulation technique for speeding up reinforcement learning in simulations
cs.AI cs.LG cs.RO
A technique for speeding up reinforcement learning algorithms by using time manipulation is proposed. It is applicable to failure-avoidance control problems running in a computer simulation. Turning the time of the simulation backwards on failure events is shown to speed up the learning by 260% and improve the state space exploration by 12% on the cart-pole balancing task, compared to the conventional Q-learning and Actor-Critic algorithms.
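The core trick, rewinding the simulation a few steps on failure instead of restarting the episode, can be illustrated on a tiny failure-avoidance task. The environment, reward scheme and Q-table below are placeholders invented for illustration; only the rewind-on-failure mechanism reflects the abstract.

```python
import random

# Toy task: keep x inside (0, 10) under noisy drift. On failure we
# restore a state saved a few steps earlier ("turning time backwards")
# rather than resetting, so the learner revisits the decisions that
# immediately preceded the failure.
Q = {}
alpha, gamma, eps, BACK = 0.2, 0.95, 0.1, 3

def q(s, a):
    return Q.get((s, a), 0.0)

x, history = 5, []
for t in range(50000):
    history.append(x)
    s = x
    a = random.choice((-1, 1)) if random.random() < eps else \
        max((-1, 1), key=lambda act: q(s, act))
    x = x + a + random.choice((-1, 0, 1))   # noisy dynamics
    failed = not (0 < x < 10)
    r = -1.0 if failed else 0.0
    target = r + (0.0 if failed else gamma * max(q(x, -1), q(x, 1)))
    Q[(s, a)] = q(s, a) + alpha * (target - q(s, a))
    if failed:
        cut = max(0, len(history) - BACK)
        x = history[cut]            # rewind a few steps, not a full reset
        history = history[:cut]

# Near the lower boundary, moving away from it should look better:
print(q(1, 1), q(1, -1))
```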
0903.4939
A Novel Algorithm for Compressive Sensing: Iteratively Reweighed Operator Algorithm (IROA)
cs.IT math.IT
Compressive sensing claims that sparse signals can be reconstructed exactly from many fewer measurements than traditionally believed necessary. One of the issues in ensuring successful compressive sensing is dealing with the sparsity-constrained optimization. Up to now, many excellent theories, algorithms and software have been developed, for example the so-called greedy algorithms and their variants, the sparse Bayesian algorithm, the convex optimization methods, and so on. The formulations for these methods consist of two terms, in which one is a data-fidelity term $\|y-\Phi x\|_2^2$ and the other is a sparsity term $\|x\|_p$ (mostly, $p=1$ is adopted due to the good characteristics of the convex function); without loss of generality, $x$ itself is assumed to be sparse. It is noted that all of them specify the sparsity constraint by the second term. Different from them, the formulation developed in this paper consists of two terms, where one is $\|y-\Phi W u\|_2^2$ (with $x=Wu$) and the other is $\|u\|_2^2$. At each iteration the measurement matrix (linear operator) is reweighed by a weight matrix $W$ determined by the estimate of $x$ obtained in the previous iteration, so the proposed method is called the iteratively reweighed operator algorithm (IROA). Moreover, in order to save computation time, another reweighing operation is carried out; in particular, the columns of the reweighed operator corresponding to small entries of $x$ are excluded. Theoretical analysis and numerical simulations show that the proposed method outperforms the published algorithms.
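The extraction above lost the paper's exact formulas, so the following sketch should be read as the generic FOCUSS-style reweighted-operator scheme that the surviving text describes (reweigh the operator by the previous estimate, prune columns with small weights), not as the authors' verbatim algorithm.

```python
import numpy as np

def reweighted_operator_recover(Phi, y, n_iter=30, prune_tol=1e-6):
    # FOCUSS-style sketch: at each iteration the operator is reweighed
    # as Phi @ diag(w) with w = |x_prev|, a min-norm least-squares
    # problem is solved, and columns with tiny weights are pruned.
    m, n = Phi.shape
    x = np.ones(n)
    support = np.arange(n)
    for _ in range(n_iter):
        w = np.abs(x)
        keep = w > prune_tol * w.max()      # exclude small-weight columns
        support, w = support[keep], w[keep]
        PhiW = Phi[:, support] * w          # reweighed operator
        u = np.linalg.pinv(PhiW) @ y        # min-norm LS solution
        x = w * u
    x_full = np.zeros(n)
    x_full[support] = x
    return x_full

# Toy usage: recover a 3-sparse vector from 20 random measurements.
rng = np.random.default_rng(1)
Phi = rng.standard_normal((20, 50))
x0 = np.zeros(50); x0[[4, 17, 33]] = [1.5, -2.0, 0.8]
x_hat = reweighted_operator_recover(Phi, Phi @ x0)
print(np.round(x_hat[[4, 17, 33]], 2))
```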
0903.5045
Digital Restoration of Ancient Papyri
cs.CV
Image processing can be used for the digital restoration of ancient papyri, that is, for a restoration performed on their digital images. The digital manipulation allows reducing the background signals and enhancing the readability of texts. In the case of very old and damaged documents, this is fundamental for identifying the patterns of letters. Some examples of restoration, obtained with an image-processing approach that uses edge detection and Fourier filtering, are shown. One of them concerns the 7Q5 fragment of the Dead Sea Scrolls.
0903.5049
SQS-graphs of extended 1-perfect codes
math.CO cs.IT math.IT
A binary extended 1-perfect code $\mathcal C$ folds over its kernel via the Steiner quadruple systems associated with its codewords. The resulting folding, proposed as a graph invariant for $\mathcal C$, distinguishes among the 361 nonlinear codes $\mathcal C$ of kernel dimension $\kappa$ with $9\geq\kappa\geq 5$ obtained via Solov'eva-Phelps doubling construction. Each of the 361 resulting graphs has most of its nonloop edges expressible in terms of the lexicographically disjoint quarters of the products of the components of two of the ten 1-perfect partitions of length 8 classified by Phelps, and loops mostly expressible in terms of the lines of the Fano plane.
0903.5054
Flow of Activity in the Ouroboros Model
cs.AI
The Ouroboros Model is a new conceptual proposal for an algorithmic structure for efficient data processing in living beings as well as for artificial agents. Its central feature is a general repetitive loop where one iteration cycle sets the stage for the next. Sensory input activates data structures (schemata) with similar constituents encountered before, thus expectations are kindled. This corresponds to the highlighting of empty slots in the selected schema, and these expectations are compared with the actually encountered input. Depending on the outcome of this consumption analysis different next steps like search for further data or a reset, i.e. a new attempt employing another schema, are triggered. Monitoring of the whole process, and in particular of the flow of activation directed by the consumption analysis, yields valuable feedback for the optimum allocation of attention and resources including the selective establishment of useful new memory entries.
0903.5066
Modified-CS: Modifying Compressive Sensing for Problems with Partially Known Support
cs.IT math.IT math.ST stat.ME stat.TH
We study the problem of reconstructing a sparse signal from a limited number of its linear projections when a part of its support is known, although the known part may contain some errors. The "known" part of the support, denoted T, may be available from prior knowledge. Alternatively, in a problem of recursively reconstructing time sequences of sparse spatial signals, one may use the support estimate from the previous time instant as the "known" part. The idea of our proposed solution (modified-CS) is to solve a convex relaxation of the following problem: find the signal that satisfies the data constraint and is sparsest outside of T. We obtain sufficient conditions for exact reconstruction using modified-CS. These are much weaker than those needed for compressive sensing (CS) when the sizes of the unknown part of the support and of errors in the known part are small compared to the support size. An important extension called Regularized Modified-CS (RegModCS) is developed which also uses prior signal estimate knowledge. Simulation comparisons for both sparse and compressible signals are shown.
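The convex relaxation stated in the abstract (minimize the l1 norm outside T subject to the data constraint) drops directly into a generic convex solver. A minimal cvxpy sketch, with made-up problem sizes, a random sensing matrix, and a known support that deliberately misses part of the true support:

```python
import numpy as np
import cvxpy as cp

def modified_cs(A, y, T):
    # Find x satisfying y = A x that is sparsest (in the l1 sense)
    # *outside* the known support T; T may contain errors.
    n = A.shape[1]
    Tc = np.setdiff1d(np.arange(n), T)   # complement of known support
    x = cp.Variable(n)
    cp.Problem(cp.Minimize(cp.norm1(x[Tc])), [A @ x == y]).solve()
    return x.value

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 100))
x0 = np.zeros(100); x0[:12] = rng.standard_normal(12)
T = np.arange(10)                        # known part misses 2 true entries
x_hat = modified_cs(A, A @ x0, T)
print(np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))
```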
0903.5074
Analyzing Least Squares and Kalman Filtered Compressed Sensing
cs.IT math.IT
In recent work, we studied the problem of causally reconstructing time sequences of spatially sparse signals, with unknown and slow time-varying sparsity patterns, from a limited number of linear "incoherent" measurements. We proposed a solution called Kalman Filtered Compressed Sensing (KF-CS). The key idea is to run a reduced order KF only for the current signal's estimated nonzero coefficients' set, while performing CS on the Kalman filtering error to estimate new additions, if any, to the set. KF may be replaced by Least Squares (LS) estimation and we call the resulting algorithm LS-CS. In this work, (a) we bound the error in performing CS on the LS error and (b) we obtain the conditions under which the KF-CS (or LS-CS) estimate converges to that of a genie-aided KF (or LS), i.e. the KF (or LS) which knows the true nonzero sets.
0903.5108
Multi-mode Transmission for the MIMO Broadcast Channel with Imperfect Channel State Information
cs.IT math.IT
This paper proposes an adaptive multi-mode transmission strategy to improve the spectral efficiency achieved in the multiple-input multiple-output (MIMO) broadcast channel with delayed and quantized channel state information. The adaptive strategy adjusts the number of active users, denoted as the transmission mode, to balance transmit array gain, spatial division multiplexing gain, and residual inter-user interference. Accurate closed-form approximations are derived for the achievable rates for different modes, which help identify the active mode that maximizes the average sum throughput for given feedback delay and channel quantization error. The proposed transmission strategy is combined with round-robin scheduling, and is shown to provide throughput gain over single-user MIMO at moderate signal-to-noise ratio. It only requires feedback of instantaneous channel state information from a small number of users. With a feedback load constraint, the proposed algorithm provides performance close to that achieved by opportunistic scheduling with instantaneous feedback from a large number of users.
0903.5122
A Constructive Generalization of Nash Equilibrium for Better Payoffs and Stability
cs.GT cs.MA
In a society of completely selfish individuals where everybody is only interested in maximizing his own payoff, does any equilibrium exist for the society? John Nash proved more than 50 years ago that an equilibrium always exists such that nobody would benefit from unilaterally changing his strategy. Nash Equilibrium is a central concept in game theory, which offers a mathematical foundation for social science and economy. However, it is important from both a theoretical and a practical point of view to understand game playing where individuals are less selfish. This paper offers a constructive generalization of Nash equilibrium to study n-person games where the selfishness of individuals can be defined at any level, including the extreme of complete selfishness. The generalization is constructive since it offers a protocol for individuals in a society to reach an equilibrium. Most importantly, this paper presents experimental results and theoretical investigation to show that the individuals in a society can reduce their selfishness level together to reach a new equilibrium where they can have better payoffs and the society is more stable at the same time. This study suggests that, for the benefit of everyone in a society (including the financial market), the pursuit of maximal payoff by each individual should be controlled at some level either by voluntary good citizenship or by imposed regulations.
0903.5168
Mathematical Model for Transformation of Sentences from Active Voice to Passive Voice
cs.CL
Formal work in linguistics has both produced and used important mathematical tools. Motivated by a survey of models for context and word meaning, syntactic categories, phrase structure rules and trees, the present paper attempts a mathematical model for the transformation of sentences from active voice to passive voice, the form of a transitive verb whose grammatical subject serves as the patient, receiving the action of the verb. For this purpose we have parsed all sentences of a corpus and have generated Boolean groups for each of them. It has been observed that when we take the constituents of the sentences as subgroups, the sequences of phrases form permutation groups. Application of the isomorphism property yields permutation mappings between the important subgroups. This has resulted in a model for the transformation of sentences from active voice to passive voice. A computer program has been written to enable software developers to build grammar software for sentence transformations.
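At its simplest, the permutation view says the constituent order (S, V, O) of an active transitive sentence is permuted to (O, V, S), with auxiliary material inserted. A toy illustration with hard-coded morphology, not the paper's program:

```python
def to_passive(subject, participle, obj):
    # Active constituent order (S, V, O) is permuted to (O, V, S);
    # the auxiliary "was" and the agent marker "by" are inserted.
    return f"{obj} was {participle} by {subject}"

print(to_passive("the cat", "chased", "the mouse"))
# -> the mouse was chased by the cat
```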
0903.5172
Delocalization transition for the Google matrix
cs.IR cond-mat.dis-nn nlin.AO
We study the localization properties of eigenvectors of the Google matrix, generated both from the World Wide Web and from the Albert-Barabasi model of networks. We establish the emergence of a delocalization phase for the PageRank vector when network parameters are changed. In the phase of localized PageRank, a delocalization takes place in the complex plane of eigenvalues of the matrix, leading to delocalized relaxation modes. We argue that the efficiency of information retrieval by Google-type search is strongly affected in the phase of delocalized PageRank.
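For readers who want to experiment, the PageRank vector and a standard localization diagnostic (the inverse participation ratio) can be computed in a few lines; the random-graph ensemble and parameters below are illustrative, not the networks studied in the paper.

```python
import numpy as np

def pagerank(adj, alpha=0.85, n_iter=200):
    # Power iteration on the Google matrix G = alpha*S + (1-alpha)/N,
    # where S is the column-stochastic link matrix and dangling
    # columns are replaced by uniform ones.
    N = adj.shape[0]
    col_sums = adj.sum(axis=0)
    S = adj / np.where(col_sums == 0, 1, col_sums)
    S[:, col_sums == 0] = 1.0 / N
    p = np.full(N, 1.0 / N)
    for _ in range(n_iter):
        p = alpha * S @ p + (1 - alpha) / N
    return p / p.sum()

def ipr(p):
    # Inverse participation ratio: ~N for a delocalized vector,
    # O(1) for a localized one -- a standard localization diagnostic.
    return 1.0 / np.sum(p ** 2)

rng = np.random.default_rng(3)
adj = (rng.random((200, 200)) < 0.03).astype(float)
print("IPR of PageRank vector:", ipr(pagerank(adj)))
```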
0903.5188
Quantum decision theory as quantum theory of measurement
quant-ph cs.AI
We present a general theory of quantum information processing devices, that can be applied to human decision makers, to atomic multimode registers, or to molecular high-spin registers. Our quantum decision theory is a generalization of the quantum theory of measurement, endowed with an action ring, a prospect lattice and a probability operator measure. The algebra of probability operators plays the role of the algebra of local observables. Because of the composite nature of prospects and of the entangling properties of the probability operators, quantum interference terms appear, which make actions noncommutative and the prospect probabilities non-additive. The theory provides the basis for explaining a variety of paradoxes typical of the application of classical utility theory to real human decision making. The principal advantage of our approach is that it is formulated as a self-consistent mathematical theory, which allows us to explain not just one effect but actually all known paradoxes in human decision making. Being general, the approach can serve as a tool for characterizing quantum information processing by means of atomic, molecular, and condensed-matter systems.
0903.5254
Comparing Bibliometric Statistics Obtained from the Web of Science and Scopus
cs.IR cs.DL
For more than 40 years, the Institute for Scientific Information (ISI, now part of Thomson Reuters) produced the only available bibliographic databases from which bibliometricians could compile large-scale bibliometric indicators. ISI's citation indexes, now regrouped under the Web of Science (WoS), were the major sources of bibliometric data until 2004, when Scopus was launched by the publisher Reed Elsevier. For those who perform bibliometric analyses and comparisons of countries or institutions, the existence of these two major databases raises the important question of the comparability and stability of statistics obtained from different data sources. This paper uses macro-level bibliometric indicators to compare results obtained from the WoS and Scopus. It shows that the correlations between the measures obtained with both databases for the number of papers and the number of citations received by countries, as well as for their ranks, are extremely high (R2 > .99). There is also a very high correlation when countries' papers are broken down by field. The paper thus provides evidence that indicators of scientific production and citations at the country level are stable and largely independent of the database.
0903.5267
Equitable Partitioning Policies for Mobile Robotic Networks
cs.RO
The most widely applied strategy for workload sharing is to equalize the workload assigned to each resource. In mobile multi-agent systems, this principle directly leads to equitable partitioning policies in which (i) the workspace is divided into subregions of equal measure, (ii) there is a bijective correspondence between agents and subregions, and (iii) each agent is responsible for service requests originating within its own subregion. In this paper, we design provably correct, spatially-distributed and adaptive policies that allow a team of agents to achieve a convex and equitable partition of a convex workspace, where each subregion has the same measure. We also consider the issue of achieving convex and equitable partitions where subregions have shapes similar to those of regular polygons. Our approach is related to the classic Lloyd algorithm, and exploits the unique features of power diagrams. We discuss possible applications to routing of vehicles in stochastic and dynamic environments. Simulation results are presented and discussed.
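The classic Lloyd iteration the authors build on is easy to sketch on a sampled workspace. Note this plain version only finds a centroidal partition; the paper's policies additionally use power-diagram weights (omitted here) to make the cells' measures exactly equal.

```python
import numpy as np

def lloyd(points, k, n_iter=50, seed=0):
    # Plain Lloyd iteration: assign each workspace sample to its
    # nearest generator, then move each generator to the centroid of
    # its cell. (No power-diagram weighting, so cell measures are
    # only approximately equal.)
    rng = np.random.default_rng(seed)
    gens = points[rng.choice(len(points), k, replace=False)]
    for _ in range(n_iter):
        d = np.linalg.norm(points[:, None, :] - gens[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                gens[j] = points[labels == j].mean(axis=0)
    return gens, labels

pts = np.random.default_rng(4).random((4000, 2))   # convex workspace samples
gens, labels = lloyd(pts, k=6)
print(np.bincount(labels) / len(pts))  # approximate cell measures
```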
0903.5282
Multi-agent Q-Learning of Channel Selection in Multi-user Cognitive Radio Systems: A Two by Two Case
cs.IT math.IT
Resource allocation is an important issue in cognitive radio systems. It can be done by carrying out negotiation among secondary users. However, significant overhead may be incurred by the negotiation, since it needs to be done frequently due to the rapid change of primary users' activity. In this paper, a channel selection scheme without negotiation is considered for multi-user and multi-channel cognitive radio systems. To avoid collisions incurred by non-coordination, each secondary user learns how to select channels according to its experience. Multi-agent reinforcement learning (MARL) is applied in the framework of Q-learning by considering the opponent secondary users as a part of the environment. The dynamics of the Q-learning are illustrated using a Metrick-Polak plot. A rigorous proof of the convergence of Q-learning is provided via the similarity between Q-learning and the Robbins-Monro algorithm, as well as the analysis of convergence of the corresponding ordinary differential equation (via a Lyapunov function). Examples are illustrated and the performance of learning is evaluated by numerical simulations.
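A stripped-down version of the two-user, two-channel setting can be simulated with stateless Q-learning, treating the other user as part of the environment as the abstract describes. The channel rewards and availability probabilities below are invented for illustration.

```python
import random

R = [1.0, 0.6]        # per-channel reward when used alone (illustrative)
P_FREE = [0.8, 0.9]   # probability each channel is free of primary users
Q = [[0.0, 0.0], [0.0, 0.0]]
alpha, eps = 0.1, 0.1

for t in range(20000):
    acts = []
    for u in (0, 1):  # epsilon-greedy channel choice per secondary user
        if random.random() < eps:
            acts.append(random.randrange(2))
        else:
            acts.append(0 if Q[u][0] >= Q[u][1] else 1)
    for u in (0, 1):
        c = acts[u]
        free = random.random() < P_FREE[c]
        # A collision (both users on the same channel) yields no reward.
        reward = R[c] if free and acts[0] != acts[1] else 0.0
        Q[u][c] += alpha * (reward - Q[u][c])   # stateless Q-update

print(Q)  # the users typically settle on different channels
```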
0903.5289
Heterogeneous knowledge representation using a finite automaton and first order logic: a case study in electromyography
cs.AI
In a certain number of situations, human cognitive functioning is difficult to represent with classical artificial intelligence structures. Such a difficulty arises in polyneuropathy diagnosis, which is based on the spatial distribution of lesions along the nerve fibres, together with the synthesis of several partial diagnoses. Faced with this problem while building an expert system (NEUROP), we developed a heterogeneous knowledge representation associating a finite automaton with first order logic. A number of knowledge representation problems raised by the features of electromyography tests are examined in this study, and the expert system architecture allowing such knowledge modeling is laid out.
0903.5328
A Stochastic View of Optimal Regret through Minimax Duality
cs.LG stat.ML
We study the regret of optimal strategies for online convex optimization games. Using von Neumann's minimax theorem, we show that the optimal regret in this adversarial setting is closely related to the behavior of the empirical minimization algorithm in a stochastic process setting: it is equal to the maximum, over joint distributions of the adversary's action sequence, of the difference between a sum of minimal expected losses and the minimal empirical loss. We show that the optimal regret has a natural geometric interpretation, since it can be viewed as the gap in Jensen's inequality for a concave functional--the minimizer over the player's actions of expected loss--defined on a set of probability distributions. We use this expression to obtain upper and lower bounds on the regret of an optimal strategy for a variety of online learning problems. Our method provides upper bounds without the need to construct a learning algorithm; the lower bounds provide explicit optimal strategies for the adversary.
0903.5341
Unspecified distribution in single disorder problem
math.PR cs.IT math.IT math.ST stat.TH
We register a stochastic sequence affected by a single disorder. The sequence is monitored under circumstances in which full information about the distributions before and after the change is not available. The initial disorder detection problem is transformed into an optimal stopping problem for the observed sequence. A formula for the optimal decision functions is derived.
0903.5342
Exact Non-Parametric Bayesian Inference on Infinite Trees
math.PR cs.LG math.ST stat.TH
Given i.i.d. data from an unknown distribution, we consider the problem of predicting future items. An adaptive way to estimate the probability density is to recursively subdivide the domain to an appropriate data-dependent granularity. A Bayesian would assign a data-independent prior probability to "subdivide", which leads to a prior over infinite(ly many) trees. We derive an exact, fast, and simple inference algorithm for such a prior, for the data evidence, the predictive distribution, the effective model dimension, moments, and other quantities. We prove asymptotic convergence and consistency results, and illustrate the behavior of our model on some prototypical functions.
0903.5346
Cooperative Update Exchange in the Youtopia System
cs.DB
Youtopia is a platform for collaborative management and integration of relational data. At the heart of Youtopia is an update exchange abstraction: changes to the data propagate through the system to satisfy user-specified mappings. We present a novel change propagation model that combines a deterministic chase with human intervention. The process is fundamentally cooperative and gives users significant control over how mappings are repaired. An additional advantage of our model is that mapping cycles can be permitted without compromising correctness. We investigate potential harmful interference between updates in our model; we introduce two appropriate notions of serializability that avoid such interference if enforced. The first is very general and related to classical final-state serializability; the second is more restrictive but highly practical and related to conflict-serializability. We present an algorithm to enforce the latter notion. Our algorithm is an optimistic one, and as such may sometimes require updates to be aborted. We develop techniques for reducing the number of aborts and we test these experimentally.
0903.5372
A game theory approach for self-coexistence analysis among IEEE 802.22 networks
cs.IT cs.GT math.IT
This paper has been withdrawn by the author due to some errors
0903.5399
Regret and Jeffreys Integrals in Exponential Families
cs.IT math.IT
The problems of whether the minimax redundancy, the minimax regret and Jeffreys integrals are finite or infinite are discussed.
0903.5426
Testing Goodness-of-Fit via Rate Distortion
cs.IT math.IT math.ST stat.TH
A framework is developed that uses techniques from rate distortion theory in statistical testing. The idea is first to perform optimal compression according to a certain distortion function, and then to use the information divergence from the compressed empirical distribution to the compressed null hypothesis as the test statistic. Only very special cases have been studied in detail so far, but they indicate that the approach can be used under very general conditions.
0904.0016
Stochastic Models of User-Contributory Web Sites
cs.CY cs.IR
We describe a general stochastic processes-based approach to modeling user-contributory web sites, where users create, rate and share content. These models describe aggregate measures of activity and how they arise from simple models of individual users. This approach provides a tractable method to understand user activity on the web site and how this activity depends on web site design choices, especially the choice of what information about other users' behaviors is shown to each user. We illustrate this modeling approach in the context of user-created content on the news rating site Digg.
0904.0019
On Solving Boolean Multilevel Optimization Problems
cs.LO cs.AI
Many combinatorial optimization problems entail a number of hierarchically dependent optimization problems. An often used solution is to associate a suitably large cost with each individual optimization problem, such that the solution of the resulting aggregated optimization problem solves the original set of hierarchically dependent optimization problems. This paper starts by studying the package upgradeability problem in software distributions. Straightforward solutions based on Maximum Satisfiability (MaxSAT) and pseudo-Boolean (PB) optimization are shown to be ineffective, and unlikely to scale for large problem instances. Afterwards, the package upgradeability problem is related to multilevel optimization. The paper then develops new algorithms for Boolean Multilevel Optimization (BMO) and highlights a large number of potential applications. The experimental results indicate that the proposed algorithms for BMO allow solving optimization problems that existing MaxSAT and PB solvers would otherwise be unable to solve.
0904.0027
Faith in the Algorithm, Part 2: Computational Eudaemonics
cs.CY cs.AI
Eudaemonics is the study of the nature, causes, and conditions of human well-being. According to the ethical theory of eudaemonia, reaping satisfaction and fulfillment from life is not only a desirable end, but a moral responsibility. However, in modern society, many individuals struggle to meet this responsibility. Computational mechanisms could better enable individuals to achieve eudaemonia by yielding practical real-world systems that embody algorithms that promote human flourishing. This article presents eudaemonic systems as the evolutionary goal of the present day recommender system.
0904.0029
Learning for Dynamic subsumption
cs.AI
In this paper a new dynamic subsumption technique for Boolean CNF formulae is proposed. It exploits simple and sufficient conditions to detect, during conflict analysis, clauses from the original formula that can be reduced by subsumption. During the learnt-clause derivation, at each step of the resolution process, we simply check for backward subsumption between the current resolvent and clauses from the original formula encoded in the implication graph. Our approach gives rise to a strong and dynamic simplification technique that exploits learning to eliminate literals from the original clauses. Experimental results show that integrating our dynamic subsumption approach within the state-of-the-art SAT solvers Minisat and Rsat achieves interesting improvements, particularly on crafted instances.
0904.0037
Deterministic Capacity of MIMO Relay Networks
cs.IT math.IT
The deterministic capacity of a relay network is the capacity of a network when relays are restricted to transmitting \emph{reliable} information, that is, an (asymptotically) deterministic function of the source message. In this paper it is shown that the deterministic capacity of a number of MIMO relay networks can be found in the low power regime where $\mathrm{SNR}\to 0$. This is accomplished by deriving single-letter upper bounds and finding the limit of these as $\mathrm{SNR}\to 0$. The advantage of this technique is that it overcomes the difficulty of finding optimum distributions for mutual information.
0904.0052
Stiffness Analysis of Overconstrained Parallel Manipulators
cs.RO
The paper presents a new stiffness modeling method for overconstrained parallel manipulators with flexible links and compliant actuating joints. It is based on a multidimensional lumped-parameter model that replaces the link flexibility by localized 6-dof virtual springs that describe both translational/rotational compliance and the coupling between them. In contrast to other works, the method involves a FEA-based link stiffness evaluation and employs a new solution strategy of the kinetostatic equations for the unloaded manipulator configuration, which allows computing the stiffness matrix for the overconstrained architectures, including singular manipulator postures. The advantages of the developed technique are confirmed by application examples, which deal with comparative stiffness analysis of two translational parallel manipulators of 3-PUU and 3-PRPaR architectures. Accuracy of the proposed approach was evaluated for a case study, which focuses on stiffness analysis of Orthoglide parallel manipulator.
0904.0058
Kinematics of A 3-PRP planar parallel robot
cs.RO
Recursive modelling for the kinematics of a 3-PRP planar parallel robot is presented in this paper. The three planar chains connected to the moving platform of the manipulator are located in a vertical plane. Knowing the motion of the platform, we develop the inverse kinematics and determine the positions, velocities and accelerations of the robot. Several matrix equations yield iterative expressions and graphs for the displacements, velocities and accelerations of the three prismatic actuators.
0904.0145
Kinematic and Dynamic Analysis of the 2-DOF Spherical Wrist of Orthoglide 5-axis
cs.RO physics.class-ph
This paper deals with the kinematics and dynamics of a two-degree-of-freedom spherical manipulator, the wrist of the Orthoglide 5-axis. The latter is a parallel kinematics machine composed of two manipulators: i) the Orthoglide 3-axis, a three-dof translational parallel manipulator that belongs to the family of Delta robots, and ii) the Agile Eye, a two-dof parallel spherical wrist. The geometric and inertial parameters used in the model are determined by means of CAD software. The performance of the spherical wrist is emphasized by means of several test trajectories. The effects of machining and/or cutting forces and of the length of the cutting tool on the dynamic performance of the wrist are also analyzed. Finally, a preliminary selection of the motors is proposed from the velocities and torques required by the actuators to carry out the test trajectories.
0904.0226
Coding Versus ARQ in Fading Channels: How reliable should the PHY be?
cs.IT math.IT
This paper studies the tradeoff between channel coding and ARQ (automatic repeat request) in Rayleigh block-fading channels. A heavily coded system corresponds to a low transmission rate with few ARQ re-transmissions, whereas lighter coding corresponds to a higher transmission rate but more re-transmissions. The optimum error probability, where optimum refers to maximizing the average successful throughput, is derived and shown to be a decreasing function of the average signal-to-noise ratio and of the channel diversity order. A general conclusion of the work is that the optimum error probability is quite large (e.g., 10% or larger) for reasonable channel parameters, and that operating at a very small error probability can lead to a significantly reduced throughput. This conclusion holds even when a number of practical ARQ considerations, such as delay constraints and acknowledgement feedback errors, are taken into account.
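The shape of this tradeoff is easy to reproduce numerically under a standard single-link Rayleigh outage model (an illustrative assumption, not the paper's exact setup, and without its diversity-order refinements): the outage probability at rate R and average SNR gamma is 1 - exp(-(2^R - 1)/gamma), and ARQ with retransmission until success yields throughput R(1 - P_out):

```python
import math

def outage(rate, snr):
    """Outage probability of one Rayleigh block-fading link:
    P[log2(1 + snr*h) < rate] with h ~ Exp(1)."""
    return 1.0 - math.exp(-(2.0 ** rate - 1.0) / snr)

def best_rate(snr):
    """Maximize the ARQ throughput R * (1 - P_out(R)) over a grid."""
    grid = [0.01 * k for k in range(1, 2000)]
    return max(grid, key=lambda r: r * (1.0 - outage(r, snr)))

snr = 10.0                      # 10 dB average SNR
r = best_rate(snr)
print(f"rate {r:.2f} bits/use at error prob {outage(r, snr):.0%}")
```

Sweeping the SNR confirms the qualitative message: the throughput-optimal operating point sits at a surprisingly large error probability, which shrinks as SNR (or, in the paper's richer model, the diversity order) grows.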
0904.0228
Safe Reasoning Over Ontologies
cs.AI cs.DS
As ontologies proliferate and automatic reasoners become more powerful, the problem of protecting sensitive information becomes more serious. In particular, since facts can be inferred from other facts, it becomes increasingly likely that information included in an ontology, while not itself deemed sensitive, may nevertheless be used to infer other sensitive information. We first consider the problem of testing an ontology for safeness, defined as the impossibility of deriving any sensitive facts from it using a given collection of inference rules. We then consider the problem of optimizing an ontology under the criterion of making as much useful information as possible available without revealing any sensitive facts.
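In its simplest propositional form (the rule format and names below are illustrative assumptions, not the paper's ontology language), the safeness test amounts to saturating the published facts under the inference rules and checking that no sensitive fact becomes derivable:

```python
def saturate(facts, rules):
    """Forward chaining; rules are (frozenset(body), head) pairs."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= facts and head not in facts:
                facts.add(head)
                changed = True
    return facts

def is_safe(published, rules, sensitive):
    """Safe iff no sensitive fact is derivable from the publication."""
    return saturate(published, rules).isdisjoint(sensitive)

rules = [(frozenset({"works_at_X", "role_is_Y"}), "salary_band_Z")]
print(is_safe({"works_at_X"}, rules, {"salary_band_Z"}))              # True
print(is_safe({"works_at_X", "role_is_Y"}, rules, {"salary_band_Z"})) # False
```

The optimization variant then searches for a maximal publishable subset that keeps is_safe true.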
0904.0274
Interference Alignment with Asymmetric Complex Signaling - Settling the Host-Madsen-Nosratinia Conjecture
cs.IT math.IT
It has been conjectured by Host-Madsen and Nosratinia that complex Gaussian interference channels with constant channel coefficients have only one degree-of-freedom regardless of the number of users. While several examples are known of constant channels that achieve more than 1 degree of freedom, these special cases only span a subset of measure zero. In other words, for almost all channel coefficient values, it is not known if more than 1 degree-of-freedom is achievable. In this paper, we settle the Host-Madsen-Nosratinia conjecture in the negative. We show that at least 1.2 degrees-of-freedom are achievable for all values of complex channel coefficients except for a subset of measure zero. For the class of linear beamforming and interference alignment schemes considered in this paper, it is also shown that 1.2 is the maximum number of degrees of freedom achievable on the complex Gaussian 3 user interference channel with constant channel coefficients, for almost all values of channel coefficients. To establish the achievability of 1.2 degrees of freedom we introduce the novel idea of asymmetric complex signaling - i.e., the inputs are chosen to be complex but not circularly symmetric. It is shown that unlike Gaussian point-to-point, multiple-access and broadcast channels where circularly symmetric complex Gaussian inputs are optimal, for interference channels optimal inputs are in general asymmetric. With asymmetric complex signaling, we also show that the 2 user complex Gaussian X channel with constant channel coefficients achieves the outer bound of 4/3 degrees-of-freedom, i.e., the assumption of time-variations/frequency-selectivity used in prior work to establish the same result, is not needed.
0904.0300
Design, development and implementation of a tool for construction of declarative functional descriptions of semantic web services based on WSMO methodology
cs.AI cs.LO
Semantic web services (SWS) are self-contained, self-describing, semantically marked-up software resources that can be published, discovered, composed and executed across the Web in a semi-automatic way. They are a key component of the future Semantic Web, in which networked computer programs become providers and users of information at the same time. This work focuses on developing a full-life-cycle software toolset for creating and maintaining Semantic Web Services (SWSs) based on the Web Service Modelling Ontology (WSMO) framework. A central part of a WSMO-based SWS is the service capability - a declarative description of Web service functionality. A formal syntax and semantics for such a description is provided by the Web Service Modeling Language (WSML), which is based on several logical formalisms, namely Description Logics, First-Order Logic and Logic Programming. A WSML description of a Web service capability is represented as a set of complex logical expressions (axioms). We develop a specialized user-friendly tool for constructing and editing WSMO-based SWS capabilities. Since the users of this tool are not specialists in first-order logic, a graphical way of constructing and editing axioms is proposed. The designed process for constructing logical expressions is ontology-driven and abstracts away as much as possible from any concrete syntax of the logical language. We propose several mechanisms to guarantee the semantic consistency of the produced logical expressions. The tool is implemented in Java, using Eclipse as the IDE and GEF (Graphical Editing Framework) for visualization.
0904.0308
Exponential decreasing rate of leaked information in universal random privacy amplification
cs.IT cs.CR math.AC math.IT
We derive a new upper bound for Eve's information in secret key generation from a common random number without communication. This bound improves on the bound of Bennett et al. (1995), which is based on the R\'enyi entropy of order 2, because the bound obtained here uses the R\'enyi entropy of order $1+s$ for $s \in [0,1]$. This bound is applied to a wire-tap channel, and we derive an exponential upper bound for Eve's information. Our exponent is compared with that of Hayashi (2006); for the additive case, the bound obtained here is better. The result is also applied to secret key agreement by public discussion.
0904.0313
Visual approach for data mining on medical information databases using Fastmap algorithm
cs.IR cs.DB
The rapid development of tools for the acquisition and storage of information has led to the formation of enormous medical databases. The quantity of data clearly surpasses human ability to use it efficiently without specialized tools for analysis; the situation is often described as rich in data, but poor in information. In order to fill this growing gap, different approaches from the field of Data Mining are applied. These methods analyze large sets of observed data in order to find new dependencies or concise representations of the data that are more meaningful to humans. One possible approach for the discovery of dependencies is the visual approach, in which data is processed and visualized in a way suitable for analysis by a domain expert; this work proposes such an approach. We design and implement a software solution for the visualization of multi-dimensional, classified medical data using the FastMap algorithm for gradual reduction of dimensions. The implementation of the graphical user interface is described in detail, since it is the most important factor for the ease of use of these tools by non-professionals in data mining.
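For reference, the core of FastMap is compact: pick two distant pivot objects a and b, give every object i the coordinate x_i = (d(a,i)^2 + d(a,b)^2 - d(b,i)^2) / (2 d(a,b)), subtract the axis out of the residual distances, and repeat for the next dimension. A sketch for Euclidean input (the two-pass pivot heuristic is the usual one; this is not the paper's GUI code):

```python
import numpy as np

def fastmap(points, k, rng=np.random.default_rng(0)):
    """Map an (n x D) array to k dimensions with FastMap, operating
    on the squared residual distances between all pairs."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    coords = np.zeros((n, k))
    for axis in range(k):
        a = int(rng.integers(n))           # two-pass farthest-pair pivots
        b = int(d2[a].argmax())
        a = int(d2[b].argmax())
        dab2 = d2[a, b]
        if dab2 == 0:
            break                          # residual distances all zero
        x = (d2[a] + dab2 - d2[b]) / (2.0 * np.sqrt(dab2))
        coords[:, axis] = x
        d2 = np.maximum(d2 - (x[:, None] - x[None, :]) ** 2, 0.0)
    return coords

pts = np.vstack([np.random.randn(50, 8), np.random.randn(50, 8) + 4])
print(fastmap(pts, 2).shape)               # (100, 2), ready to plot
```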
0904.0477
Message Passing for Optimization and Control of Power Grid: Model of Distribution System with Redundancy
cond-mat.stat-mech cs.CE cs.NI
We use a power grid model with $M$ generators and $N$ consumption units to optimize the grid and its control. Each consumer demand is drawn from a predefined finite-size-support distribution, thus simulating the instantaneous load fluctuations. Each generator has a maximum power capability. A generator is not overloaded if the sum of the loads of the consumers connected to it does not exceed its maximum production. In the standard grid each consumer is connected only to its designated generator, while we consider a more general organization of the grid that allows each consumer to select, depending on the load, one generator from a pre-defined, consumer-dependent and sufficiently small set of generators that can all serve the load. The model grid is interconnected in a graph with loops, drawn from an ensemble of random bipartite graphs, and each allowed configuration of loaded links represents a set of graph-covering trees. Losses, the reactive character of the grid, the transmission-level connections between generators, and many other details relevant to a realistic power grid are ignored in this proof-of-principle study. We focus on the asymptotic limit, and we show that the interconnects allow a significant expansion of the parameter domains for which the probability of a generator overload is asymptotically zero. Our construction explores the formal relation between the problem of grid optimization and the modern theory of sparse graphical models. We also design heuristic algorithms that achieve the asymptotically optimal selection of loaded links. We conclude by discussing the ability of this approach to include other effects, such as a more realistic modeling of the power grid and related optimization and control algorithms.
0904.0494
Average Case Analysis of Multichannel Sparse Recovery Using Convex Relaxation
cs.IT math.IT
In this paper, we consider recovery of jointly sparse multichannel signals from incomplete measurements. Several approaches have been developed to recover the unknown sparse vectors from the given observations, including thresholding, simultaneous orthogonal matching pursuit (SOMP), and convex relaxation based on a mixed matrix norm. Typically, worst-case analysis is carried out in order to analyze conditions under which the algorithms are able to recover any jointly sparse set of vectors. However, such an approach is not able to provide insights into why joint sparse recovery is superior to applying standard sparse reconstruction methods to each channel individually. Previous work considered an average case analysis of thresholding and SOMP by imposing a probability model on the measured signals. In this paper, our main focus is on analysis of convex relaxation techniques. In particular, we focus on the mixed $\ell_{2,1}$ approach to multichannel recovery. We show that under a very mild condition on the sparsity and on the dictionary characteristics, measured for example by the coherence, the probability of recovery failure decays exponentially in the number of channels. This demonstrates that most of the time, multichannel sparse recovery is indeed superior to single channel methods. Our probability bounds are valid and meaningful even for a small number of signals. Using the tools we develop to analyze the convex relaxation method, we also tighten the previous bounds for thresholding and SOMP.
0904.0525
The Minimal Polynomial over F_q of Linear Recurring Sequence over F_{q^m}
cs.IT cs.CR math.IT
Recently, motivated by the study of vectorized stream cipher systems, the joint linear complexity and joint minimal polynomial of multisequences have been investigated. Let S be a linear recurring sequence over finite field F_{q^m} with minimal polynomial h(x) over F_{q^m}. Since F_{q^m} and F_{q}^m are isomorphic vector spaces over the finite field F_q, S is identified with an m-fold multisequence S^{(m)} over the finite field F_q. The joint minimal polynomial and joint linear complexity of the m-fold multisequence S^{(m)} are the minimal polynomial and linear complexity over F_q of S respectively. In this paper, we study the minimal polynomial and linear complexity over F_q of a linear recurring sequence S over F_{q^m} with minimal polynomial h(x) over F_{q^m}. If the canonical factorization of h(x) in F_{q^m}[x] is known, we determine the minimal polynomial and linear complexity over F_q of the linear recurring sequence S over F_{q^m}.
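Not the paper's construction, but for a concrete sequence the linear complexity and minimal polynomial are exactly what the classical Berlekamp-Massey algorithm computes; a sketch over F_2 for brevity (the paper's setting is F_q and F_{q^m}):

```python
def berlekamp_massey_gf2(s):
    """Return (linear complexity L, minimal polynomial) of a binary
    sequence s; poly[i] is the coefficient of x^i."""
    c, b = [1], [1]            # current and previous connection polys
    L, m = 0, -1
    for n in range(len(s)):
        d = s[n]               # discrepancy s[n] + sum c[i]*s[n-i]
        for i in range(1, L + 1):
            d ^= c[i] & s[n - i]
        if d:
            t, shift = c[:], n - m
            if len(c) < len(b) + shift:
                c += [0] * (len(b) + shift - len(c))
            for i, bi in enumerate(b):
                c[i + shift] ^= bi
            if 2 * L <= n:
                L, m, b = n + 1 - L, n, t
    return L, c[:L + 1]

# Maximal-length sequence of the LFSR with connection poly 1 + x + x^4:
seq = [0, 0, 0, 1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1]
print(berlekamp_massey_gf2(seq))   # (4, [1, 1, 0, 0, 1])
```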
0904.0544
Mission-Aware Medium Access Control in Random Access Networks
cs.NI cs.GT cs.IT math.IT
We study mission-critical networking in wireless communication networks, where network users are subject to critical events such as emergencies and crises. If a critical event occurs to a user, the user needs to send necessary information for help as early as possible. However, most existing medium access control (MAC) protocols are not adequate to meet the urgent need for information transmission by users in a critical situation. In this paper, we propose a novel class of MAC protocols that utilize available past information as well as current information. Our proposed protocols are mission-aware since they prescribe different transmission decision rules to users in different situations. We show that the proposed protocols perform well not only when the system faces a critical situation but also when there is no critical situation. By utilizing past information, the proposed protocols coordinate transmissions by users to achieve high throughput in the normal phase of operation and to let a user in a critical situation make successful transmissions while it is in the critical situation. Moreover, the proposed protocols require short memory and no message exchanges.
0904.0545
Time Hopping technique for faster reinforcement learning in simulations
cs.AI cs.LG cs.RO
This preprint has been withdrawn by the author for revision
0904.0546
Eligibility Propagation to Speed up Time Hopping for Reinforcement Learning
cs.AI cs.LG cs.RO
A mechanism called Eligibility Propagation is proposed to speed up the Time Hopping technique used for faster Reinforcement Learning in simulations. Eligibility Propagation provides Time Hopping with abilities similar to those that eligibility traces provide for conventional Reinforcement Learning. It propagates values from one state to all of its temporal predecessors using a state-transition graph. Experiments on a simulated biped crawling robot confirm that Eligibility Propagation accelerates the learning process by more than a factor of three.
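A minimal sketch of the propagation idea (the decay factor and stopping threshold are illustrative assumptions, not the authors' exact update rule): push a value change at one state back through the state-transition graph to all temporal predecessors.

```python
from collections import deque

def propagate(values, predecessors, start, delta, decay=0.9, eps=1e-4):
    """Propagate the update `delta` at `start` to all temporal
    predecessors, shrinking by `decay` per backward step; `eps`
    truncates the propagation (and terminates it on cyclic graphs)."""
    queue = deque([(start, delta)])
    while queue:
        s, d = queue.popleft()
        if abs(d) < eps:
            continue
        values[s] = values.get(s, 0.0) + d
        for p in predecessors.get(s, ()):
            queue.append((p, decay * d))
    return values

preds = {2: [1], 1: [0]}        # chain 0 -> 1 -> 2
print(propagate({}, preds, start=2, delta=1.0))
# {2: 1.0, 1: 0.9, 0: 0.81}
```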
0904.0570
The Derivational Complexity Induced by the Dependency Pair Method
cs.LO cs.AI cs.CC cs.PL
We study the derivational complexity induced by the dependency pair method, enhanced with standard refinements. We obtain upper bounds on the derivational complexity induced by the dependency pair method in terms of the derivational complexity of the base techniques employed. In particular we show that the derivational complexity induced by the dependency pair method based on some direct technique, possibly refined by argument filtering, the usable rules criterion, or dependency graphs, is primitive recursive in the derivational complexity induced by the direct method. This implies that the derivational complexity induced by a standard application of the dependency pair method based on traditional termination orders like KBO, LPO, and MPO is exactly the same as if those orders were applied as the only termination technique.
0904.0585
Controller synthesis with very simplified linear constraints in PN model
cs.IT math.IT
This paper addresses the problem of forbidden states in safe Petri nets modelling discrete event systems. We present an efficient method for constructing a controller. A set of linear constraints allows forbidding the reachability of specific states. The number of these so-called forbidden states, and consequently the number of constraints, can be large and lead to a large number of control places. A systematic method for constructing a very simplified controller is offered. Using a method based on Petri net partial invariants, maximally permissive controllers are determined.
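For a flavor of the invariant-based construction this line of work builds on (the classical supervision-based-on-place-invariants result of Yamalidou, Moody et al.): a linear marking constraint l . m <= b is enforced by adding a control place whose incidence row is W_c = -l W and whose initial marking is m_c0 = b - l m_0. A numpy sketch under those standard formulas:

```python
import numpy as np

def control_place(W, m0, l, b):
    """Control place enforcing l . m <= b on a Petri net with
    incidence matrix W (places x transitions) and initial marking m0.
    The returned row is appended to W, the returned count to m0."""
    l = np.asarray(l)
    wc = -l @ W                 # arcs of the monitor place
    mc0 = b - l @ m0            # its initial marking
    assert mc0 >= 0, "constraint already violated at m0"
    return wc, mc0

# one buffer place p0: t0 produces into it, t1 consumes from it
W = np.array([[1, -1]])
m0 = np.array([0])
wc, mc0 = control_place(W, m0, l=[1], b=2)   # enforce m(p0) <= 2
print(wc, mc0)   # [-1  1] 2 : the monitor blocks a third firing of t0
```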
0904.0586
Optimal Supervisory Control Synthesis
cs.IT math.IT
The place invariant method is well known as an elegant way to construct a Petri net controller. Constraints can be used to prevent forbidden states, but in the general case the number of forbidden states can be very large, giving a great number of control places. This paper presents a systematic method to reduce the size and the number of the constraints. The method is applicable to safe and conservative Petri nets and gives a maximally permissive controller.
0904.0643
Performing Nonlinear Blind Source Separation with Signal Invariants
cs.AI cs.LG
Given a time series of multicomponent measurements x(t), the usual objective of nonlinear blind source separation (BSS) is to find a "source" time series s(t), comprised of statistically independent combinations of the measured components. In this paper, the source time series is required to have a density function in (s,ds/dt)-space that is equal to the product of density functions of individual components. This formulation of the BSS problem has a solution that is unique, up to permutations and component-wise transformations. Separability is shown to impose constraints on certain locally invariant (scalar) functions of x, which are derived from local higher-order correlations of the data's velocity dx/dt. The data are separable if and only if they satisfy these constraints, and, if the constraints are satisfied, the sources can be explicitly constructed from the data. The method is illustrated by using it to separate two speech-like sounds recorded with a single microphone.
0904.0648
Evolvability need not imply learnability
cs.LG cs.CC
We show that Boolean functions expressible as monotone disjunctive normal forms are PAC-evolvable under a uniform distribution on the Boolean cube if the hypothesis size is allowed to remain fixed. We further show that this result is insufficient to prove the PAC-learnability of monotone Boolean functions, thereby demonstrating a counter-example to a recent claim to the contrary. We further discuss scenarios wherein evolvability and learnability coincide, as well as scenarios under which they differ. The implications of the latter case for the prospects of learning in complex hypothesis spaces are briefly examined.
0904.0682
Privacy in Search Logs
cs.DB cs.IR
Search engine companies collect the "database of intentions", the histories of their users' search queries. These search logs are a gold mine for researchers. Search engine companies, however, are wary of publishing search logs in order not to disclose sensitive information. In this paper we analyze algorithms for publishing frequent keywords, queries and clicks of a search log. We first show how methods that achieve variants of $k$-anonymity are vulnerable to active attacks. We then demonstrate that the stronger guarantee ensured by $\epsilon$-differential privacy unfortunately does not provide any utility for this problem. We then propose an algorithm ZEALOUS and show how to set its parameters to achieve $(\epsilon,\delta)$-probabilistic privacy. We also contrast our analysis of ZEALOUS with an analysis by Korolova et al. [17] that achieves $(\epsilon',\delta')$-indistinguishability. Our paper concludes with a large experimental study using real applications where we compare ZEALOUS and previous work that achieves $k$-anonymity in search log publishing. Our results show that ZEALOUS yields comparable utility to $k$-anonymity while at the same time achieving much stronger privacy guarantees.
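A heavily simplified sketch in the spirit of the two-phase algorithm (the thresholds and noise scale below are placeholders; the paper derives the parameter settings that make the scheme $(\epsilon,\delta)$-probabilistically private): keep only sufficiently frequent items, perturb their counts with Laplace noise, and suppress items whose noisy count falls under a second threshold.

```python
import random
from collections import Counter

def publish(items, tau1, tau2, scale, rng=random.Random(0)):
    """Two-threshold noisy histogram of search-log items."""
    out = {}
    for item, c in Counter(items).items():
        if c < tau1:
            continue                        # too rare: never considered
        # Laplace(scale) as a difference of two exponentials
        noisy = c + rng.expovariate(1/scale) - rng.expovariate(1/scale)
        if noisy >= tau2:
            out[item] = noisy
    return out

log = ["q1"] * 120 + ["q2"] * 80 + ["rare query"] * 2
print(publish(log, tau1=10, tau2=50, scale=5.0))
# q1 and (likely) q2 survive; "rare query" is always suppressed
```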
0904.0721
Optimal Tableau Decision Procedures for PDL
cs.LO cs.AI cs.CC
We reformulate Pratt's tableau decision procedure for checking satisfiability of a set of formulas in PDL. Our formulation is simpler and more direct for implementation. Extending the method, we give the first EXPTIME (optimal) tableau decision procedure, not based on transformation, for checking consistency of an ABox w.r.t. a TBox in PDL (here, PDL is treated as a description logic). We also prove the new result that the data complexity of the instance checking problem in PDL is coNP-complete.
0904.0747
Bethe Free Energy Approach to LDPC Decoding on Memory Channels
cs.IT math.IT
We address the problem of the joint sequence detection in partial-response (PR) channels and decoding of low-density parity-check (LDPC) codes. We model the PR channel and the LDPC code as a combined inference problem. We present for the first time the derivation of the belief propagation (BP) equations that allow the simultaneous detection and decoding of an LDPC codeword in a PR channel. To accomplish this we follow an approach from statistical mechanics, in which the Bethe free energy is minimized with respect to the beliefs on the nodes of the PR-LDPC graph. The equations obtained are explicit and are optimal for decoding LDPC codes on PR channels with polynomial $h(D) = 1 - a D^n$ ($a$ real, $n$ a positive integer) in the sense that they provide the exact inference of the marginal probabilities on the nodes in a graph free of loops. A simple algorithmic solution to the set of BP equations is proposed and evaluated using numerical simulations, yielding bit-error rate performances that surpass those of turbo equalization.
0904.0751
Distributed Source Coding of Correlated Gaussian Remote Sources
cs.IT math.IT
We consider the distributed source coding system for $L$ correlated Gaussian observations $Y_i, i=1,2,\ldots,L$. Let $X_i, i=1,2,\ldots,L$ be $L$ correlated Gaussian random variables and $N_i, i=1,2,\ldots,L$ be independent additive Gaussian noises also independent of $X_i, i=1,2,\ldots,L$. We consider the case where for each $i=1,2,\ldots,L$, $Y_i$ is a noisy observation of $X_i$, that is, $Y_i=X_i+N_i$. For this coding system the determination problem of the rate distortion region remains open. In this paper, we derive explicit outer and inner bounds of the rate distortion region. We further find an explicit sufficient condition for those two to match. We also study the sum rate part of the rate distortion region when the correlation has some symmetrical property and derive a new lower bound of the sum rate part. We derive a sufficient condition for this lower bound to be tight. The derived sufficient condition depends only on the correlation property of the sources and their observations.
0904.0768
Codes on Planar Graphs
cs.IT math.IT
Codes defined on graphs and their properties have been subjects of intense recent research. On the practical side, constructions for capacity-approaching codes are graphical. On the theoretical side, codes on graphs provide several intriguing problems in the intersection of coding theory and graph theory. In this paper, we study codes defined by planar Tanner graphs. We derive an upper bound on minimum distance $d$ of such codes as a function of the code rate $R$ for $R \ge 5/8$. The bound is given by $$d\le \lceil \frac{7-8R}{2(2R-1)} \rceil + 3\le 7.$$ Among the interesting conclusions of this result are the following: (1) planar graphs do not support asymptotically good codes, and (2) finite-length, high-rate codes on graphs with high minimum distance will necessarily be non-planar.
0904.0776
Induction of High-level Behaviors from Problem-solving Traces using Machine Learning Tools
stat.ML cs.LG
This paper applies machine learning techniques to student modeling. It presents a method for discovering high-level student behaviors from a very large set of low-level traces corresponding to problem-solving actions in a learning environment. Basic actions are encoded into sets of domain-dependent attribute-value patterns called cases. Then a domain-independent hierarchical clustering identifies what we call general attitudes, yielding automatic diagnosis expressed in natural language, addressed in principle to teachers. The method can be applied to individual students or to entire groups, like a class. We exhibit examples of this system applied to thousands of students' actions in the domain of algebraic transformations.
0904.0811
The density of weights of Generalized Reed--Muller codes
cs.IT math.IT
We study the density of the weights of Generalized Reed--Muller codes. Let $RM_p(r,m)$ denote the code of multivariate polynomials over $\mathbb{F}_p$ in $m$ variables of total degree at most $r$. We consider the case of fixed degree $r$, when we let the number of variables $m$ tend to infinity. We prove that the set of relative weights of codewords is quite sparse: for every $\alpha \in [0,1]$ which is not rational of the form $\frac{\ell}{p^k}$, there exists an interval around $\alpha$ in which no relative weight exists, for any value of $m$. This line of research is to the best of our knowledge new, and complements the traditional lines of research, which focus on the weight distribution and the divisibility properties of the weights. Equivalently, we study distributions taking values in a finite field, which can be approximated by distributions coming from constant degree polynomials, where we do not bound the number of variables. We give a complete characterization of all such distributions.
0904.0813
Projective Space Codes for the Injection Metric
cs.IT math.IT
In the context of error control in random linear network coding, it is useful to construct codes that comprise well-separated collections of subspaces of a vector space over a finite field. In this paper, the metric used is the so-called "injection distance", introduced by Silva and Kschischang. A Gilbert-Varshamov bound for such codes is derived. Using the code-construction framework of Etzion and Silberstein, new non-constant-dimension codes are constructed; these codes contain more codewords than comparable codes designed for the subspace metric.
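The injection distance itself is d_I(U, V) = max(dim U, dim V) - dim(U ∩ V), which reduces to three rank computations over the field, since dim(U ∩ V) = dim U + dim V - dim(U + V). A self-contained check over F_2 (generator matrices as 0/1 row lists):

```python
import numpy as np

def rank_gf2(M):
    """Rank of a 0/1 matrix over GF(2), by Gaussian elimination."""
    M = np.array(M, dtype=np.uint8) % 2
    r = 0
    for col in range(M.shape[1]):
        pivot = next((i for i in range(r, M.shape[0]) if M[i, col]), None)
        if pivot is None:
            continue
        M[[r, pivot]] = M[[pivot, r]]
        for i in range(M.shape[0]):
            if i != r and M[i, col]:
                M[i] ^= M[r]
        r += 1
    return r

def injection_distance(U, V):
    du, dv = rank_gf2(U), rank_gf2(V)
    d_int = du + dv - rank_gf2(np.vstack([U, V]))   # dim of intersection
    return max(du, dv) - d_int

U = [[1, 0, 0, 1], [0, 1, 0, 1]]
V = [[1, 0, 0, 1], [0, 0, 1, 0]]
print(injection_distance(U, V))   # 1: the planes share a 1-dim subspace
```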
0904.0814
Stability Analysis and Learning Bounds for Transductive Regression Algorithms
cs.LG
This paper uses the notion of algorithmic stability to derive novel generalization bounds for several families of transductive regression algorithms, using both convexity and closed-form solutions. Our analysis helps compare the stability of these algorithms. It also shows that a number of widely used transductive regression algorithms are in fact unstable. Finally, it reports the results of experiments with local transductive regression demonstrating the benefit of our stability bounds for model selection, in particular for determining the radius of the local neighborhood used by one of the algorithms.
0904.0821
Imaging of moving targets with multi-static SAR using an overcomplete dictionary
cs.IT math.IT
This paper presents a method for imaging of moving targets using multi-static SAR by treating the problem as one of spatial reflectivity signal inversion over an overcomplete dictionary of target velocities. Since SAR sensor returns can be related to the spatial frequency domain projections of the scattering field, we exploit insights from compressed sensing theory to show that moving targets can be effectively imaged with transmitters and receivers randomly dispersed in a multi-static geometry within a narrow forward cone around the scene of interest. Existing approaches to dealing with moving targets in SAR solve a coupled non-linear problem of target scattering and motion estimation typically through matched filtering. In contrast, by using an overcomplete dictionary approach we effectively linearize the forward model and solve the moving target problem as a larger, unified regularized inversion problem subject to sparsity constraints.
0904.0828
On approximating Gaussian relay networks by deterministic networks
cs.IT math.IT
We examine the extent to which Gaussian relay networks can be approximated by deterministic networks, and present two results, one negative and one positive. The gap between the capacities of a Gaussian relay network and a corresponding linear deterministic network can be unbounded. The key reasons are that the linear deterministic model fails to capture the phase of received signals, and there is a loss in signal strength in the reduction to a linear deterministic network. On the positive side, Gaussian relay networks are indeed well approximated by certain discrete superposition networks, where the inputs and outputs to the channels are discrete, and channel gains are signed integers. As a corollary, MIMO channels cannot be approximated by the linear deterministic model but can be by the discrete superposition model.
0904.0879
On Superposition Coding for the Wyner-Ziv Problem
cs.IT math.IT
In problems of lossy source/noisy channel coding with side information, the theoretical bounds are achieved using "good" source/channel codes that can be partitioned into "good" channel/source codes. A scheme that achieves optimality in channel coding with side information at the encoder using independent channel and source codes was outlined in previous works. In practice, the original problem is transformed into a multiple-access problem in which the superposition of the two independent codes can be decoded using successive interference cancellation. Inspired by this work, we analyze the superposition approach for source coding with side information at the decoder. We present a random coding analysis that shows achievability of the Wyner-Ziv bound. Then, we discuss some issues related to the practical implementation of this method.
0904.0942
Boosting the Accuracy of Differentially-Private Histograms Through Consistency
cs.DB cs.CR
We show that it is possible to significantly improve the accuracy of a general class of histogram queries while satisfying differential privacy. Our approach carefully chooses a set of queries to evaluate, and then exploits consistency constraints that should hold over the noisy output. In a post-processing phase, we compute the consistent input most likely to have produced the noisy output. The final output is differentially-private and consistent, but in addition, it is often much more accurate. We show, both theoretically and experimentally, that these techniques can be used for estimating the degree sequence of a graph very precisely, and for computing a histogram that can support arbitrary range queries accurately.
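A stripped-down instance of the idea (one constraint only; the paper's constrained inference over hierarchies of range queries generalizes it): answer both the individual bins and their total with independent Laplace noise, then post-process to the least-squares solution in which the bins sum to the total. For a single sum constraint the projection has a closed form.

```python
import random

def laplace(scale, rng):
    return rng.expovariate(1/scale) - rng.expovariate(1/scale)

def consistent_histogram(counts, eps, rng=random.Random(1)):
    """Noisy bins plus a noisy total, made consistent by least squares.

    Both query sets have sensitivity 1 and each consumes eps/2 of the
    budget, hence Laplace noise of scale 2/eps on every answer. The
    closed-form projection x_i = y_i + (Y - sum(y)) / (n + 1) restores
    consistency and reduces the per-bin noise variance."""
    scale = 2.0 / eps
    y = [c + laplace(scale, rng) for c in counts]     # noisy bins
    Y = sum(counts) + laplace(scale, rng)             # noisy total
    corr = (Y - sum(y)) / (len(y) + 1)
    return [yi + corr for yi in y]

print(consistent_histogram([30, 10, 5, 5], eps=0.5))
```

(For n bins the per-bin variance drops to n/(n+1) of that of the raw noisy bins, and the published output now answers the "total" query exactly consistently.)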
0904.0962
Color Dipole Moments for Edge Detection
cs.CV
Dipole and higher moments are physical quantities used to describe a charge distribution. In analogy with electromagnetism, it is possible to define dipole moments for a gray-scale image, based on the single intensity channel of the gray-tone map. In this paper we define the color dipole moments for color images, where there are three channels, the three primary colors, to consider. Associating three color charges with each pixel, color dipole moments can be easily defined and used for edge detection.
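In the electrostatic analogy the dipole moment of a charge distribution is sum_r rho(r) (r - r0); reading each channel's intensities as charges, a per-window dipole magnitude vanishes on uniform regions and responds at intensity transitions, which is exactly edge behavior. A sketch under that reading (window size and normalization are illustrative choices, not the paper's exact operator):

```python
import numpy as np
from scipy.ndimage import convolve

def dipole_edges(img, size=3):
    """Sum over RGB channels of the dipole-moment magnitude computed
    in every size x size window; img is an (H, W, 3) float array."""
    offs = np.arange(size) - size // 2
    ky, kx = np.meshgrid(offs, offs, indexing="ij")    # y/x offsets
    out = np.zeros(img.shape[:2])
    for ch in range(3):
        mx = convolve(img[..., ch], kx.astype(float))  # x component
        my = convolve(img[..., ch], ky.astype(float))  # y component
        out += np.hypot(mx, my)
    return out

img = np.zeros((32, 32, 3))
img[:, :16, 0] = 1.0            # red left half
img[:, 16:, 2] = 1.0            # blue right half -> vertical color edge
edges = dipole_edges(img)
print(edges[16, 16] > 0, edges[16, 2] == 0)   # response only at the edge
```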
0904.0973
A statistical mechanical interpretation of algorithmic information theory III: Composite systems and fixed points
cs.IT cs.CC math.IT math.PR
The statistical mechanical interpretation of algorithmic information theory (AIT, for short) was introduced and developed by our former works [K. Tadaki, Local Proceedings of CiE 2008, pp.425-434, 2008] and [K. Tadaki, Proceedings of LFCS'09, Springer's LNCS, vol.5407, pp.422-440, 2009], where we introduced the notion of thermodynamic quantities, such as partition function Z(T), free energy F(T), energy E(T), and statistical mechanical entropy S(T), into AIT. We then discovered that, in the interpretation, the temperature T equals the partial randomness of the values of all these thermodynamic quantities, where the notion of partial randomness is a stronger representation of the compression rate by means of program-size complexity. Furthermore, we showed that this situation holds for the temperature itself as a thermodynamic quantity, namely, for each of the thermodynamic quantities above, the computability of its value at temperature T gives a sufficient condition for T in (0,1) to be a fixed point on partial randomness. In this paper, we develop the statistical mechanical interpretation of AIT further and pursue its formal correspondence to normal statistical mechanics. The thermodynamic quantities in AIT are defined based on the halting set of an optimal computer, which is a universal decoding algorithm used to define the notion of program-size complexity. We show that there are infinitely many optimal computers which give completely different sufficient conditions for each of the thermodynamic quantities in AIT. We do this by introducing the notion of composition of computers into AIT, which corresponds to the notion of composition of systems in normal statistical mechanics.
0904.0981
Dependency Pairs and Polynomial Path Orders
cs.LO cs.AI cs.CC cs.SC
We show how polynomial path orders can be employed efficiently in conjunction with weak innermost dependency pairs to automatically certify polynomial runtime complexity of term rewrite systems and the polytime computability of the functions computed. The established techniques have been implemented and we provide ample experimental data to assess the new method.
0904.0986
Approche conceptuelle par un processus d'annotation pour la repr\'esentation et la valorisation de contenus informationnels en intelligence \'economique (IE)
cs.IR
In the era of the information society, the impact of information systems on both the material and the immaterial economy is clearly perceptible. With regard to the information resources of an organization, annotation serves to enrich informational content, to track the intellectual activities performed on a document, and to establish the added value of information for the benefit of solving a decision-making problem in the context of economic intelligence. Our contribution is distinguished by the representation of an annotation process and its inherent concepts, leading the decision-maker to an anticipated decision: the provision of relevant, annotated information. Retrieving such information in the system is made easier by taking into account the diversity of resources and by having them annotated, both formally and informally, by the EI actors. A central research goal is to integrate into the decision-making process the annotator's activity, the software agent (or reasoning mechanisms), and the enhancement of information resources.
0904.0994
Breaking through the Thresholds: an Analysis for Iterative Reweighted $\ell_1$ Minimization via the Grassmann Angle Framework
math.PR cs.IT math.IT
It is now well understood that the $\ell_1$ minimization algorithm is able to recover sparse signals from incomplete measurements [2], [1], [3], and sharp recoverable sparsity thresholds have been obtained for it. However, even though iterative reweighted $\ell_1$ minimization algorithms or related algorithms have been empirically observed to boost the recoverable sparsity thresholds for certain types of signals, no rigorous theoretical results have been established to prove this fact. In this paper, we try to provide a theoretical foundation for analyzing the iterative reweighted $\ell_1$ algorithms. In particular, we show that for a nontrivial class of signals, iterative reweighted $\ell_1$ minimization can indeed deliver recoverable sparsity thresholds larger than those given in [1], [3]. Our results are based on a high-dimensional geometrical analysis (Grassmann angle analysis) of the null-space characterization for $\ell_1$ minimization and weighted $\ell_1$ minimization algorithms.
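The reweighting loop under analysis is the familiar one: solve a weighted $\ell_1$ problem, then set w_i = 1/(|x_i| + eps) so that small coefficients are penalized harder in the next round. A sketch with each weighted subproblem posed as a linear program (illustrative parameters; not the paper's analysis code):

```python
import numpy as np
from scipy.optimize import linprog

def weighted_l1(A, b, w):
    """min sum_i w_i |x_i|  s.t.  Ax = b, via the split x = u - v."""
    m, n = A.shape
    res = linprog(np.concatenate([w, w]),
                  A_eq=np.hstack([A, -A]), b_eq=b,
                  bounds=[(0, None)] * (2 * n))
    return res.x[:n] - res.x[n:]

def reweighted_l1(A, b, iters=4, eps=0.1):
    w = np.ones(A.shape[1])
    for _ in range(iters):
        x = weighted_l1(A, b, w)
        w = 1.0 / (np.abs(x) + eps)    # penalize small entries harder
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
x0 = np.zeros(50); x0[[3, 17, 41]] = [2.0, -1.5, 1.0]
x = reweighted_l1(A, A @ x0)
print(np.round(x[[3, 17, 41]], 2))     # the planted sparse support
```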
0904.1144
Alternative evaluation of statistical indicators in atoms: the non-relativistic and relativistic cases
nlin.AO cs.IT math.IT physics.atom-ph
In this work, the calculation of a statistical measure of complexity and of the Fisher-Shannon information is performed for all the atoms in the periodic table. Non-relativistic and relativistic cases are considered. We follow the method suggested in [C.P. Panos, N.S. Nikolaidis, K. Ch. Chatzisavvas, C.C. Tsouros, arXiv:0812.3963v1], which uses the fractional occupation probabilities of electrons in atomic orbitals instead of the continuous electronic wave functions. For the order of shell filling in the relativistic case, we take into account the effect of the electronic spin-orbit interaction. Both magnitudes, the statistical complexity and the Fisher-Shannon information, are observed to increase with the atomic number $Z$. The shell structure and the irregular shell filling are well displayed by the Fisher-Shannon information in the relativistic case.
0904.1149
Chaitin \Omega numbers and halting problems
math.LO cs.CC cs.IT math.IT
Chaitin [G. J. Chaitin, J. Assoc. Comput. Mach., vol.22, pp.329-340, 1975] introduced the \Omega number as a concrete example of a random real. The real \Omega is defined as the probability that an optimal computer halts, where the optimal computer is a universal decoding algorithm used to define the notion of program-size complexity. Chaitin showed \Omega to be random by discovering the property that the first n bits of the base-two expansion of \Omega solve the halting problem of the optimal computer for all binary inputs of length at most n. In the present paper we investigate this property from various aspects. We consider the relative computational power between the base-two expansion of \Omega and the halting problem by imposing a restriction to finite size on both problems. It is known that the base-two expansion of \Omega and the halting problem are Turing equivalent. We thus consider an elaboration of this Turing equivalence in a certain manner.
0904.1150
Upper Bounds on the Capacities of Noncontrollable Finite-State Channels with/without Feedback
cs.IT cs.SY math.IT math.OC
Noncontrollable finite-state channels (FSCs) are FSCs in which the channel inputs have no influence on the channel states, i.e., the channel states evolve freely. Since single-letter formulae for the channel capacities are rarely available for general noncontrollable FSCs, computable bounds are usually utilized to numerically bound the capacities. In this paper, we take the delayed channel state as part of the channel input and then define the {\em directed information rate} from the new channel input (including the source and the delayed channel state) sequence to the channel output sequence. With this technique, we derive a series of upper bounds on the capacities of noncontrollable FSCs with/without feedback. These upper bounds can be achieved by conditional Markov sources and computed by solving an average reward per stage stochastic control problem (ARSCP) with a compact state space and a compact action space. By showing that the ARSCP has a uniformly continuous reward function, we transform the original ARSCP into a finite-state and finite-action ARSCP that can be solved by a value iteration method. Under a mild assumption, the value iteration algorithm is convergent and delivers a near-optimal stationary policy and a numerical upper bound.
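Once such a bound is reduced to a finite-state, finite-action ARSCP, the last step is a standard average-reward dynamic program. A generic relative-value-iteration sketch (the toy chain and all names are illustrative; it assumes a unichain, aperiodic model, under which the method converges):

```python
import numpy as np

def relative_value_iteration(P, r, tol=1e-9, max_iter=100000):
    """Maximize the average reward per stage of a finite MDP.
    P[a][s, s'] are transition probabilities, r[a][s] one-step rewards.
    Returns (optimal gain, greedy stationary policy)."""
    n = P[0].shape[0]
    h = np.zeros(n)                       # relative value function
    gain = 0.0
    for _ in range(max_iter):
        q = np.array([r[a] + P[a] @ h for a in range(len(P))])
        h_new = q.max(axis=0)
        gain = h_new[0]                   # Bellman constant at state 0
        diff = h_new - h_new[0] - h
        h = h_new - h_new[0]              # renormalize against state 0
        if diff.max() - diff.min() < tol:
            break
    return gain, q.argmax(axis=0)

P = [np.array([[0.9, 0.1], [0.2, 0.8]]),
     np.array([[0.5, 0.5], [0.7, 0.3]])]
r = [np.array([1.0, 0.0]), np.array([0.5, 0.8])]
print(relative_value_iteration(P, r))    # (gain, policy per state)
```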