cs/0606011
Vectorial Resilient $PC(l)$ of Order $k$ Boolean Functions from AG-Codes
cs.CR cs.IT math.IT
Propagation criterion of degree $l$ and order $k$ ($PC(l)$ of order $k$) and resiliency of vectorial Boolean functions are important for cryptographic purposes (see [1, 2, 3, 6, 7, 8, 10, 11, 16]). Kurosawa and Satoh [8] and Carlet [1] gave constructions of Boolean functions satisfying $PC(l)$ of order $k$ from binary linear or nonlinear codes. In this paper, algebraic-geometric codes over $GF(2^m)$ are used to modify the Carlet and Kurosawa-Satoh constructions, giving vectorial resilient Boolean functions satisfying $PC(l)$ of order $k$. The new construction is compared with previously known results.
cs/0606014
On the Capacity of Multiple Access Channels with State Information and Feedback
cs.IT math.IT
In this paper, the multiple access channel (MAC) with channel state is analyzed in a scenario where a) the channel state is known non-causally to the transmitters and b) there is perfect causal feedback from the receiver to the transmitters. An achievable region and an outer bound are found for a discrete memoryless MAC that extend existing results, bringing together ideas from the two separate domains of MAC with state and MAC with feedback. Although this achievable region does not match the outer bound in general, special cases where they meet are identified. In the case of a Gaussian MAC, a specialized achievable region is found by using a combination of dirty paper coding and a generalization of the Schalkwijk-Kailath, Ozarow and Merhav-Weissman schemes, and this region is found to be capacity achieving. Specifically, it is shown that additive Gaussian interference that is known non-causally to the transmitter causes no loss in capacity for the Gaussian MAC with feedback.
cs/0606015
The Size of Optimal Sequence Sets for Synchronous CDMA Systems
cs.IT math.IT
The sum capacity on a symbol-synchronous CDMA system having processing gain $N$ and supporting $K$ power constrained users is achieved by employing at most $2N-1$ sequences. Analogously, the minimum received power (energy-per-chip) on the symbol-synchronous CDMA system supporting $K$ users that demand specified data rates is attained by employing at most $2N-1$ sequences. If there are $L$ oversized users in the system, at most $2N-L-1$ sequences are needed. $2N-1$ is the minimum number of sequences needed to guarantee optimal allocation for single dimensional signaling. $N$ orthogonal sequences are sufficient if a few users (at most $N-1$) are allowed to signal in multiple dimensions. If there are no oversized users, these split users need to signal only in two dimensions each. The above results are shown by proving a converse to a well-known result of Weyl on the interlacing eigenvalues of the sum of two Hermitian matrices, one of which is of rank 1. The converse is analogous to Mirsky's converse to the interlacing eigenvalues theorem for bordering matrices.
cs/0606016
Performance Analysis of Iterative Channel Estimation and Multiuser Detection in Multipath DS-CDMA Channels
cs.IT math.IT
This paper examines the performance of decision feedback based iterative channel estimation and multiuser detection in channel coded aperiodic DS-CDMA systems operating over multipath fading channels. First, explicit expressions describing the performance of channel estimation and parallel interference cancellation based multiuser detection are developed. These results are then combined to characterize the evolution of the performance of a system that iterates among channel estimation, multiuser detection and channel decoding. Sufficient conditions for convergence of this system to a unique fixed point are developed.
cs/0606017
From semiotics of hypermedia to physics of semiosis: A view from system theory
cs.HC cs.CL cs.IT math.IT
Given that theoretical analysis and empirical validation are fundamental to any model, whether conceptual or formal, it is surprising that these two tools of scientific discovery are so often ignored in contemporary studies of communication. In this paper, we pursue two ideas: a) correcting and expanding the modeling approaches of linguistics, which are otherwise inapplicable (more precisely, which should not be but are widely applied), to the general case of hypermedia-based communication, and b) developing techniques for the empirical validation of semiotic models, which are nowadays routinely used to explore (in fact, to conjecture about) the internal mechanisms of complex systems, yet on a purely speculative basis. This study thus offers two experimentally tested substantive contributions: the formal representation of communication as the mutually orienting behavior of coupled autonomous systems, and the mathematical interpretation of the semiosis of communication, which together offer a concrete and parsimonious understanding of diverse communication phenomena.
cs/0606020
Imagination as Holographic Processor for Text Animation
cs.AI
Imagination is the critical point in developing realistic artificial intelligence (AI) systems. One way to approach imagination is to simulate its properties and operations. We developed two models, the AI-Brain Network Hierarchy of Languages and the Semantical Holographic Calculus, as well as the simulation system ScriptWriter, which emulates the process of imagination through automatic animation of English texts. The purpose of this paper is to demonstrate the model and to present the ScriptWriter system http://nvo.sdsc.edu/NVO/JCSG/get_SRB_mime_file2.cgi//home/tamara.sdsc/test/demo.zip?F=/home/tamara.sdsc/test/demo.zip&M=application/x-gtar for simulation of the imagination.
cs/0606021
A simulation engine to support production scheduling using genetics-based machine learning
cs.CE cs.AI
The ever higher complexity of manufacturing systems, continually shortening product life cycles and increasing product variety, as well as the unstable market situation of recent years, require greater flexibility and responsiveness in manufacturing processes. From this perspective, one of the critical manufacturing tasks, which traditionally attracts significant attention in both academia and industry but has no satisfactory universal solution, is production scheduling. This paper proposes an approach based on genetics-based machine learning (GBML) to treat the problem of flow shop scheduling. In this approach, a set of scheduling rules is represented as an individual of a genetic algorithm, and the fitness of the individual is estimated from the makespan of the schedule generated by applying the rule-set. A concept of an interactive software environment consisting of a simulator and a GBML simulation engine is introduced to support human decision-making during scheduling. A pilot study is underway to evaluate the performance of the GBML technique in comparison with other methods (such as Johnson's algorithm and simulated annealing) on test examples.
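The encode-and-evaluate loop described in this abstract (individuals scored by the makespan of the schedule they induce) can be illustrated with a toy permutation genetic algorithm for the flow shop. Note the paper evolves scheduling rule-sets rather than job permutations, so this is a simplified stand-in, and all function and parameter names are hypothetical:

```python
import random

def makespan(perm, proc):
    """Completion time of the last job on the last machine for a permutation
    flow shop; proc[j][m] = processing time of job j on machine m."""
    m = len(proc[0])
    finish = [0.0] * m
    for j in perm:
        for k in range(m):
            # A job starts on machine k once the machine is free and the
            # job has finished on machine k-1.
            start = max(finish[k], finish[k - 1] if k else 0.0)
            finish[k] = start + proc[j][k]
    return finish[-1]

def ga_schedule(proc, pop_size=30, gens=100, seed=0):
    """Toy permutation GA: truncation selection, prefix crossover and swap
    mutation; fitness = makespan (lower is better)."""
    rng = random.Random(seed)
    n = len(proc)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda p: makespan(p, proc))
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)
            # Keep a prefix of parent a, fill the rest in parent b's order.
            child = a[:cut] + [j for j in b if j not in a[:cut]]
            i, k = rng.randrange(n), rng.randrange(n)
            child[i], child[k] = child[k], child[i]   # swap mutation
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda p: makespan(p, proc))
```

On a tiny 3-job, 2-machine instance the GA recovers the optimal job order, which can be checked against exhaustive enumeration.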
cs/0606022
Limited Feedback Beamforming Over Temporally-Correlated Channels
cs.IT math.IT
Feedback of quantized channel state information (CSI), called limited feedback, enables transmit beamforming in multiple-input-multiple-output (MIMO) wireless systems with a small amount of overhead. Due to its efficiency, beamforming with limited feedback has been adopted in several wireless communication standards. Prior work on limited feedback commonly adopts the block fading channel model where temporal correlation in wireless channels is neglected. This paper considers temporally-correlated channels and designs single-user transmit beamforming with limited feedback. Analytical results concerning CSI feedback are derived by modeling quantized CSI as a first-order finite-state Markov chain. These results include the source bit rate generated by time-varying quantized CSI, the required bit rate for a CSI feedback channel, and the effect of feedback delay. In particular, based on the theory of Markov chain convergence rate, feedback delay is proved to reduce the throughput gain due to CSI feedback at least exponentially. Furthermore, an algorithm is proposed for CSI feedback compression in time. Combining the results in this work leads to a new method for designing limited feedback beamforming as demonstrated by a design example.
cs/0606024
Consecutive Support: Better Be Close!
cs.AI cs.DB
We propose a new measure of support (the number of occurrences of a pattern), in which instances are more important if they occur with a certain frequency and close after each other in the stream of transactions. We will explain this new consecutive support and discuss how patterns can be found faster by pruning the search space, for instance using so-called parent support recalculation. Both consecutiveness and the notion of hypercliques are incorporated into the Eclat algorithm. Synthetic examples show how interesting phenomena can now be discovered in the datasets. The new measure can be applied in many areas, ranging from bio-informatics to trade, supermarkets, and even law enforcement. E.g., in bio-informatics it is important to find patterns contained in many individuals, where patterns close together in one chromosome are more significant.
cs/0606026
Generating parity check equations for bounded-distance iterative erasure decoding
cs.IT math.IT
A generic $(r,m)$-erasure correcting set is a collection of vectors in $\bF_2^r$ which can be used to generate, for each binary linear code of codimension $r$, a collection of parity check equations that enables iterative decoding of all correctable erasure patterns of size at most $m$. That is to say, the only stopping sets of size at most $m$ for the generated parity check equations are the erasure patterns for which there is more than one way to fill in the erasures to obtain a codeword. We give an explicit construction of generic $(r,m)$-erasure correcting sets of cardinality $\sum_{i=0}^{m-1} {r-1\choose i}$. Using a random-coding-like argument, we show that for fixed $m$, the minimum size of a generic $(r,m)$-erasure correcting set is linear in $r$. Keywords: iterative decoding, binary erasure channel, stopping set
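The iterative erasure decoding the abstract refers to is the standard peeling procedure: any parity check involving exactly one erased position determines that position; decoding stalls exactly on a stopping set. A minimal generic sketch (not the paper's construction):

```python
def peel_erasures(checks, word, erased):
    """Iterative (peeling) erasure decoding over GF(2).
    checks: list of position lists, each check XORs to 0 on a codeword;
    word:   list of bits with None at erased positions;
    erased: iterable of erased positions.
    Returns (word, remaining) where a nonempty `remaining` means the
    erasure pattern contains a stopping set for these checks."""
    word = list(word)
    erased = set(erased)
    progress = True
    while progress and erased:
        progress = False
        for chk in checks:
            unknown = [i for i in chk if i in erased]
            if len(unknown) == 1:
                i = unknown[0]
                # The single unknown bit is forced by the parity equation.
                word[i] = sum(word[j] for j in chk if j != i) % 2
                erased.discard(i)
                progress = True
    return word, erased
```

For example, with checks {0,1,2} and {2,3,4} on the codeword 11000, erasing positions 1 and 3 is fully recoverable, while erasing both positions of a single weight-2 check is a stopping set.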
cs/0606027
Building a logical model in the machining domain for CAPP expert systems
cs.AI cs.CE cs.SE
Recently, extensive efforts have been made to apply expert system techniques to the process planning task in the machining domain. This paper introduces a new formal method to design CAPP expert systems. The formal method is applied to provide a contour of the CAPP expert system building technology. Theoretical aspects of the formalism are described and illustrated by an example of know-how analysis. Flexible facilities to utilize multiple knowledge types and multiple planning strategies within one system are provided by the technology.
cs/0606029
Belief Calculus
cs.AI
In Dempster-Shafer belief theory, general beliefs are expressed as belief mass distribution functions over frames of discernment. In Subjective Logic, beliefs are expressed as belief mass distribution functions over binary frames of discernment. Belief representations in Subjective Logic, which are called opinions, also contain a base rate parameter which expresses the a priori belief in the absence of evidence. Philosophically, beliefs are quantitative representations of evidence as perceived by humans or by other intelligent agents. The basic operators of classical probability calculus, such as addition and multiplication, can be applied to opinions, thereby making belief calculus practical. Through the equivalence between opinions and Beta probability density functions, this also provides a calculus for Beta probability density functions. This article explains the basic elements of belief calculus.
cs/0606035
Finding roots of polynomials over finite fields
cs.IT math.IT
We propose an improved algorithm for finding roots of polynomials over finite fields. This makes possible significant speedup of the decoding process of Bose-Chaudhuri-Hocquenghem, Reed-Solomon, and some other error-correcting codes.
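As a baseline for what the paper speeds up, root finding can always be done by exhaustive (Chien-search-style) evaluation of the polynomial at every field element. A minimal sketch over a prime field GF(p); the paper works over general finite fields, and its improved algorithm is not reproduced here:

```python
def poly_roots_gfp(coeffs, p):
    """Exhaustive root search for a polynomial over the prime field GF(p).
    coeffs[i] is the coefficient of x^i. This is the brute-force baseline
    that faster decoding algorithms improve upon."""
    def eval_at(x):
        acc = 0
        for c in reversed(coeffs):      # Horner's rule, reduced mod p
            acc = (acc * x + c) % p
        return acc
    return [x for x in range(p) if eval_at(x) == 0]
```

For instance, x^2 - 1 over GF(7) has exactly the roots 1 and 6.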
cs/0606039
Evolutionary Design: Philosophy, Theory, and Application Tactics
cs.CE cs.AI
Although it has contributed to remarkable improvements in some specific areas, attempts to develop a universal design theory are generally characterized by failure. This paper sketches arguments for a new approach to engineering design based on Semiotics - the science about signs. The approach is to combine different design theories over all the product life cycle stages into one coherent and traceable framework. Besides, it is to bring together the designer's and user's understandings of the notion of 'good product'. Building on the insight from natural sciences that complex systems always exhibit a self-organizing meaning-influential hierarchical dynamics, objective laws controlling product development are found through an examination of design as a semiosis process. These laws are then applied to support evolutionary design of products. An experiment validating some of the theoretical findings is outlined, and concluding remarks are given.
cs/0606048
A New Quartet Tree Heuristic for Hierarchical Clustering
cs.DS cs.CV cs.DM math.ST physics.data-an q-bio.QM stat.TH
We consider the problem of constructing an optimal-weight tree from the 3*(n choose 4) weighted quartet topologies on n objects, where optimality means that the summed weight of the embedded quartet topologies is optimal (so it can be the case that the optimal tree embeds all quartets as non-optimal topologies). We present a heuristic for reconstructing the optimal-weight tree, and a canonical manner to derive the quartet-topology weights from a given distance matrix. The method repeatedly transforms a bifurcating tree, with all objects involved as leaves, achieving a monotonic approximation to the exact single globally optimal tree. This contrasts with other heuristic search methods from biological phylogeny, like DNAML or quartet puzzling, which repeatedly and incrementally construct a solution from a random order of objects, and subsequently add agreement values.
cs/0606049
Decentralized Erasure Codes for Distributed Networked Storage
cs.IT cs.NI math.IT
We consider the problem of constructing an erasure code for storage over a network when the data sources are distributed. Specifically, we assume that there are n storage nodes with limited memory and k<n sources generating the data. We want a data collector, who can appear anywhere in the network, to query any k storage nodes and be able to retrieve the data. We introduce Decentralized Erasure Codes, which are linear codes with a specific randomized structure inspired by network coding on random bipartite graphs. We show that decentralized erasure codes are optimally sparse, and lead to reduced communication, storage and computation cost over random linear coding.
cs/0606051
Minimum Pseudo-Weight and Minimum Pseudo-Codewords of LDPC Codes
cs.IT math.IT
In this correspondence, we study the minimum pseudo-weight and minimum pseudo-codewords of low-density parity-check (LDPC) codes under linear programming (LP) decoding. First, we show that the lower bound of Kelly, Sridhara, Xu and Rosenthal on the pseudo-weight of a pseudo-codeword of an LDPC code with girth greater than 4 is tight if and only if this pseudo-codeword is a real multiple of a codeword. Then, we show that the lower bound of Kashyap and Vardy on the stopping distance of an LDPC code is also a lower bound on the pseudo-weight of a pseudo-codeword of this LDPC code with girth 4, and this lower bound is tight if and only if this pseudo-codeword is a real multiple of a codeword. Using these results we further show that for some LDPC codes, there are no other minimum pseudo-codewords except the real multiples of minimum codewords. This means that the LP decoding for these LDPC codes is asymptotically optimal in the sense that the ratio of the probabilities of decoding errors of LP decoding and maximum-likelihood decoding approaches 1 as the signal-to-noise ratio tends to infinity. Finally, some LDPC codes are listed to illustrate these results.
cs/0606052
Topology for Distributed Inference on Graphs
cs.IT math.IT
Let $N$ local decision makers in a sensor network communicate with their neighbors to reach a decision \emph{consensus}. Communication is local, among neighboring sensors only, through noiseless or noisy links. We study the design of the network topology that optimizes the rate of convergence of the iterative decision consensus algorithm. We reformulate the topology design problem as a spectral graph design problem, namely, maximizing the eigenratio~$\gamma$ of two eigenvalues of the graph Laplacian~$L$, a matrix that is naturally associated with the interconnectivity pattern of the network. This reformulation avoids costly Monte Carlo simulations and leads to the class of non-bipartite Ramanujan graphs for which we find a lower bound on~$\gamma$. For Ramanujan topologies and noiseless links, the local probability of error converges much faster to the overall global probability of error than for structured graphs, random graphs, or graphs exhibiting small-world characteristics. With noisy links, we determine the optimal number of iterations before calling a decision. Finally, we introduce a new class of random graphs that are easy to construct, can be designed with arbitrary number of sensors, and whose spectral and convergence properties make them practically equivalent to Ramanujan topologies.
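The spectral quantity being optimized can be computed directly from the network: form the Laplacian L = D - A and take a ratio of its eigenvalues. A sketch assuming the eigenratio gamma is the algebraic connectivity lambda_2 divided by the largest eigenvalue lambda_N (one common choice for consensus convergence rates; the paper's exact definition of gamma may differ):

```python
import numpy as np

def laplacian_eigenratio(edges, n):
    """Eigenratio lambda_2 / lambda_n of the graph Laplacian L = D - A for
    an undirected graph on n nodes. Larger ratios correspond to faster
    consensus; this particular ratio is an assumption of this sketch."""
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    L = np.diag(A.sum(axis=1)) - A
    lam = np.sort(np.linalg.eigvalsh(L))   # lam[0] = 0 for connected graphs
    return lam[1] / lam[-1]
```

The complete graph attains the maximum ratio 1, while a path graph (poorly connected) scores much lower, which matches the intuition that well-connected topologies converge faster.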
cs/0606060
Complex Networks: New Concepts and Tools for Real-Time Imaging and Vision
cs.CV cs.DC physics.soc-ph
This article discusses how concepts and methods of complex networks can be applied to real-time imaging and computer vision. After a brief introduction to the basic concepts of complex networks, their use as a means to represent and characterize images, as well as to model visual saliency, is briefly described. The possibility of applying complex networks to model and simulate the performance of parallel and distributed computing systems for visual methods is also proposed.
cs/0606062
Logics for Unranked Trees: An Overview
cs.LO cs.DB
Labeled unranked trees are used as a model of XML documents, and logical languages for them have been studied actively over the past several years. Such logics have different purposes: some are better suited for extracting data, some for expressing navigational properties, and some make it easy to relate complex properties of trees to the existence of tree automata for those properties. Furthermore, logics differ significantly in their model-checking properties, their automata models, and their behavior on ordered and unordered trees. In this paper we present a survey of logics for unranked trees.
cs/0606065
On the complexity of XPath containment in the presence of disjunction, DTDs, and variables
cs.DB cs.LO
XPath is a simple language for navigating an XML-tree and returning a set of answer nodes. The focus in this paper is on the complexity of the containment problem for various fragments of XPath. We restrict attention to the most common XPath expressions which navigate along the child and/or descendant axis. In addition to basic expressions using only node tests and simple predicates, we also consider disjunction and variables (ranging over nodes). Further, we investigate the containment problem relative to a given DTD. With respect to variables we study two semantics, (1) the original semantics of XPath, where the values of variables are given by an outer context, and (2) an existential semantics introduced by Deutsch and Tannen, in which the values of variables are existentially quantified. In this framework, we establish an exact classification of the complexity of the containment problem for many XPath fragments.
cs/0606066
The Cumulative Rule for Belief Fusion
cs.AI
The problem of combining beliefs in the Dempster-Shafer belief theory has attracted considerable attention over the last two decades. The classical Dempster's Rule has often been criticised, and many alternative rules for belief combination have been proposed in the literature. The consensus operator for combining beliefs has nice properties and produces more intuitive results than Dempster's rule, but has the limitation that it can only be applied to belief distribution functions on binary state spaces. In this paper we present a generalisation of the consensus operator that can be applied to Dirichlet belief functions on state spaces of arbitrary size. This rule, called the cumulative rule of belief combination, can be derived from classical statistical theory, and corresponds well with human intuition.
cs/0606069
Inference and Evaluation of the Multinomial Mixture Model for Text Clustering
cs.IR cs.CL
In this article, we investigate the use of a probabilistic model for unsupervised clustering in text collections. Unsupervised clustering has become a basic module for many intelligent text processing applications, such as information retrieval, text classification or information extraction. The model considered in this contribution consists of a mixture of multinomial distributions over the word counts, each component corresponding to a different theme. We present and contrast various estimation procedures, which apply both in supervised and unsupervised contexts. In supervised learning, this work suggests a criterion for evaluating the posterior odds of new documents which is more statistically sound than the "naive Bayes" approach. In an unsupervised context, we propose measures to set up a systematic evaluation framework and start with examining the Expectation-Maximization (EM) algorithm as the basic tool for inference. We discuss the importance of initialization and the influence of other features such as the smoothing strategy or the size of the vocabulary, thereby illustrating the difficulties incurred by the high dimensionality of the parameter space. We also propose a heuristic algorithm based on iterative EM with vocabulary reduction to solve this problem. Using the fact that the latent variables can be analytically integrated out, we finally show that the Gibbs sampling algorithm is tractable and compares favorably to the basic expectation-maximization approach.
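The basic EM inference for a mixture of multinomials is straightforward to sketch. The Laplace smoothing, the random Dirichlet initialization, and all names below are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def multinomial_mixture_em(X, K, iters=50, seed=0):
    """EM for a K-component mixture of multinomials over word-count rows X
    (documents x vocabulary). Returns mixing weights pi and per-component
    word distributions theta. Log-domain E-step for numerical stability;
    Laplace smoothing in the M-step is an assumption of this sketch."""
    rng = np.random.default_rng(seed)
    D, V = X.shape
    theta = rng.dirichlet(np.ones(V), size=K)      # K x V word distributions
    pi = np.full(K, 1.0 / K)
    for _ in range(iters):
        # E-step: responsibilities prop. to pi_k * prod_v theta_kv^{x_dv}
        logp = X @ np.log(theta).T + np.log(pi)    # D x K log joint
        logp -= logp.max(axis=1, keepdims=True)
        resp = np.exp(logp)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights and smoothed word distributions
        pi = resp.mean(axis=0)
        counts = resp.T @ X + 1.0                  # Laplace smoothing
        theta = counts / counts.sum(axis=1, keepdims=True)
    return pi, theta
```

On a toy corpus with two clearly separated themes, the returned parameters are valid probability distributions and the components specialize to the two themes.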
cs/0606070
Is there an Elegant Universal Theory of Prediction?
cs.AI cs.CC
Solomonoff's inductive learning model is a powerful, universal and highly elegant theory of sequence prediction. Its critical flaw is that it is incomputable and thus cannot be used in practice. It is sometimes suggested that it may still be useful to help guide the development of very general and powerful theories of prediction which are computable. In this paper it is shown that although powerful algorithms exist, they are necessarily highly complex. This alone makes their theoretical analysis problematic, however it is further shown that beyond a moderate level of complexity the analysis runs into the deeper problem of Goedel incompleteness. This limits the power of mathematics to analyse and study prediction algorithms, and indeed intelligent systems in general.
cs/0606071
Scheduling and Codeword Length Optimization in Time Varying Wireless Networks
cs.IT math.IT
In this paper, a downlink scenario in which a single-antenna base station communicates with K single-antenna users, over a time-correlated fading channel, is considered. It is assumed that channel state information is perfectly known at each receiver, while the statistical characteristics of the fading process and the fading gain at the beginning of each frame are known to the transmitter. By evaluating the random coding error exponent of the time-correlated fading channel, it is shown that there is an optimal codeword length which maximizes the throughput. The throughput of the conventional scheduling that transmits to the user with the maximum signal to noise ratio is examined using both fixed length codewords and variable length codewords. Although optimizing the codeword length improves the performance, it is shown that using the conventional scheduling, the gap between the achievable throughput and the maximum possible throughput of the system tends to infinity as K goes to infinity. A simple scheduling that considers both the signal to noise ratio and the channel time variation is proposed. It is shown that by using this scheduling, the gap between the achievable throughput and the maximum throughput of the system approaches zero.
cs/0606073
Comparison of the estimation of the degree of polarization from four or two intensity images degraded by speckle noise
cs.IR physics.optics
Active polarimetric imagery is a powerful tool for accessing the information present in a scene. Indeed, the polarimetric images obtained can reveal polarizing properties of the objects that are not available using conventional imaging systems. However, when coherent light is used to illuminate the scene, the images are degraded by speckle noise. The polarization properties of a scene are characterized by the degree of polarization. In standard polarimetric imaging systems, four intensity images are needed to estimate this degree. If the measurements are assumed uncorrelated, this number can be decreased to two images using the Orthogonal State Contrast Image (OSCI). However, this approach appears too restrictive in some cases. We thus propose in this paper a new statistical parametric method to estimate the degree of polarization from only two intensity images, assuming correlated measurements. The estimators obtained from four images, from the OSCI, and from the proposed method are compared using simulated polarimetric data degraded by speckle noise.
cs/0606074
Rate Regions for Relay Broadcast Channels
cs.IT math.IT
A partially cooperative relay broadcast channel (RBC) is a three-node network with one source node and two destination nodes (destinations 1 and 2) where destination 1 can act as a relay to assist destination 2. Inner and outer bounds on the capacity region of the discrete memoryless partially cooperative RBC are obtained. When the relay function is disabled, the inner and outer bounds reduce to new bounds on the capacity region of broadcast channels. Four classes of RBCs are studied in detail. For the partially cooperative RBC with degraded message sets, inner and outer bounds are obtained. For the semideterministic partially cooperative RBC and the orthogonal partially cooperative RBC, the capacity regions are established. For the parallel partially cooperative RBC with unmatched degraded subchannels, the capacity region is established for the case of degraded message sets. The capacity is also established when the source node has only a private message for destination 2, i.e., the channel reduces to a parallel relay channel with unmatched degraded subchannels.
cs/0606075
10^(10^6) Worlds and Beyond: Efficient Representation and Processing of Incomplete Information
cs.DB
Current systems and formalisms for representing incomplete information generally suffer from at least one of two weaknesses. Either they are not strong enough for representing results of simple queries, or the handling and processing of the data, e.g. for query evaluation, is intractable. In this paper, we present a decomposition-based approach to addressing this problem. We introduce world-set decompositions (WSDs), a space-efficient formalism for representing any finite set of possible worlds over relational databases. WSDs are therefore a strong representation system for any relational query language. We study the problem of efficiently evaluating relational algebra queries on sets of worlds represented by WSDs. We also evaluate our technique experimentally in a large census data scenario and show that it is both scalable and efficient.
cs/0606077
On Sequence Prediction for Arbitrary Measures
cs.LG
Suppose we are given two probability measures on the set of one-way infinite finite-alphabet sequences and consider the question of when one of the measures predicts the other, that is, when conditional probabilities converge (in a certain sense) when one of the measures is chosen to generate the sequence. This question may be considered a refinement of the problem of sequence prediction in its most general formulation: for a given class of probability measures, does there exist a measure which predicts all of the measures in the class? To address this problem, we find some conditions on local absolute continuity which are sufficient for prediction and which generalize several different notions which are known to be sufficient for prediction. We also formulate some open questions to outline a direction for finding the conditions on classes of measures for which prediction is possible.
cs/0606078
Dimension Extractors and Optimal Decompression
cs.CC cs.IT math.IT
A *dimension extractor* is an algorithm designed to increase the effective dimension -- i.e., the amount of computational randomness -- of an infinite binary sequence, in order to turn a "partially random" sequence into a "more random" sequence. Extractors are exhibited for various effective dimensions, including constructive, computable, space-bounded, time-bounded, and finite-state dimension. Using similar techniques, the Kucera-Gacs theorem is examined from the perspective of decompression, by showing that every infinite sequence S is Turing reducible to a Martin-Loef random sequence R such that the asymptotic number of bits of R needed to compute n bits of S, divided by n, is precisely the constructive dimension of S, which is shown to be the optimal ratio of query bits to computed bits achievable with Turing reductions. The extractors and decompressors that are developed lead directly to new characterizations of some effective dimensions in terms of optimal decompression by Turing reductions.
cs/0606081
New Millennium AI and the Convergence of History
cs.AI
Artificial Intelligence (AI) has recently become a real formal science: the new millennium brought the first mathematically sound, asymptotically optimal, universal problem solvers, providing a new, rigorous foundation for the previously largely heuristic field of General AI and embedded agents. At the same time there has been rapid progress in practical methods for learning true sequence-processing programs, as opposed to traditional methods limited to stationary pattern association. Here we will briefly review some of the new results, and speculate about future developments, pointing out that the time intervals between the most notable events in over 40,000 years or 2^9 lifetimes of human history have sped up exponentially, apparently converging to zero within the next few decades. Or is this impression just a by-product of the way humans allocate memory space to past events?
cs/0606083
The Diversity Order of the Semidefinite Relaxation Detector
cs.IT math.IT
We consider the detection of binary (antipodal) signals transmitted in a spatially multiplexed fashion over a fading multiple-input multiple-output (MIMO) channel and where the detection is done by means of semidefinite relaxation (SDR). The SDR detector is an attractive alternative to maximum likelihood (ML) detection since the complexity is polynomial rather than exponential. Assuming that the channel matrix is drawn with i.i.d. real valued Gaussian entries, we study the receiver diversity and prove that the SDR detector achieves the maximum possible diversity. Thus, the error probability of the receiver tends to zero at the same rate as the optimal maximum likelihood (ML) receiver in the high signal to noise ratio (SNR) limit. This significantly strengthens previous performance guarantees available for the semidefinite relaxation detector. Additionally, it proves that full diversity detection is in certain scenarios also possible when using a non-combinatorial receiver structure.
cs/0606084
The Completeness of Propositional Resolution: A Simple and Constructive Proof
cs.LO cs.AI
It is well known that the resolution method (for propositional logic) is complete. However, completeness proofs found in the literature use an argument by contradiction showing that if a set of clauses is unsatisfiable, then it must have a resolution refutation. As a consequence, none of these proofs actually gives an algorithm for producing a resolution refutation from an unsatisfiable set of clauses. In this note, we give a simple and constructive proof of the completeness of propositional resolution which consists of an algorithm together with a proof of its correctness.
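The constructive claim in this abstract can be illustrated directly. The sketch below is not the paper's algorithm, only a minimal saturation-based resolution procedure in Python (clauses represented as frozensets of signed integer literals; all names are hypothetical): it derives the empty clause exactly when the input clause set is unsatisfiable.

```python
from itertools import combinations

def resolve(c1, c2):
    """All resolvents of two clauses. A clause is a frozenset of nonzero
    integers; the literal -k is the negation of the literal k."""
    out = []
    for lit in c1:
        if -lit in c2:
            out.append(frozenset((c1 - {lit}) | (c2 - {-lit})))
    return out

def refute(clauses):
    """Saturate the clause set under resolution. Returns True iff the
    empty clause is derived, i.e. the clause set is unsatisfiable."""
    known = set(map(frozenset, clauses))
    while True:
        new = set()
        for c1, c2 in combinations(known, 2):
            for r in resolve(c1, c2):
                if not r:
                    return True   # empty clause: refutation found
                if r not in known:
                    new.add(r)
        if not new:
            return False          # saturated without the empty clause
        known |= new
```

Because the loop only adds clauses until saturation, the run itself is the constructive object: the sequence of resolvents produced on the way to the empty clause is a resolution refutation.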
cs/0606090
Error Rate Analysis for Coded Multicarrier Systems over Quasi-Static Fading Channels
cs.IT math.IT
Several recent standards such as IEEE 802.11a/g, IEEE 802.16, and ECMA Multiband Orthogonal Frequency Division Multiplexing (MB-OFDM) for high data-rate Ultra-Wideband (UWB), employ bit-interleaved convolutionally-coded multicarrier modulation over quasi-static fading channels. Motivated by the lack of appropriate error rate analysis techniques for this popular type of system and channel model, we present two novel analytical methods for bit error rate (BER) estimation of coded multicarrier systems operating over frequency-selective quasi-static channels with non-ideal interleaving. In the first method, the approximate performance of the system is calculated for each realization of the channel, which is suitable for obtaining the outage BER performance (a common performance measure for e.g. MB-OFDM systems). The second method assumes Rayleigh distributed frequency-domain subcarrier channel gains and knowledge of their correlation matrix, and can be used to directly obtain the average BER performance. Both methods are applicable to convolutionally-coded interleaved multicarrier systems employing Quadrature Amplitude Modulation (QAM), and are also able to account for narrowband interference (modeled as a sum of tone interferers). To illustrate the application of the proposed analysis, both methods are used to study the performance of a tone-interference-impaired MB-OFDM system.
cs/0606093
Predictions as statements and decisions
cs.LG
Prediction is a complex notion, and different predictors (such as people, computer programs, and probabilistic theories) can pursue very different goals. In this paper I will review some popular kinds of prediction and argue that the theory of competitive on-line learning can benefit from the kinds of prediction that are now foreign to it.
cs/0606094
On Typechecking Top-Down XML Transformations: Fixed Input or Output Schemas
cs.DB cs.PL
Typechecking consists of statically verifying whether the output of an XML transformation always conforms to an output type for documents satisfying a given input type. In this general setting, both the input and output schema as well as the transformation are part of the input for the problem. However, scenarios where the input or output schema can be considered to be fixed, are quite common in practice. In the present work, we investigate the computational complexity of the typechecking problem in the latter setting.
cs/0606096
Building a resource for studying translation shifts
cs.CL
This paper describes an interdisciplinary approach which brings together the fields of corpus linguistics and translation studies. It presents ongoing work on the creation of a corpus resource in which translation shifts are explicitly annotated. Translation shifts denote departures from formal correspondence between source and target text, i.e. deviations that have occurred during the translation process. A resource in which such shifts are annotated in a systematic way will make it possible to study those phenomena that need to be addressed if machine translation output is to resemble human translation. The resource described in this paper contains English source texts (parliamentary proceedings) and their German translations. The shift annotation is based on predicate-argument structures and proceeds in two steps: first, predicates and their arguments are annotated monolingually in a straightforward manner. Then, the corresponding English and German predicates and arguments are aligned with each other. Whenever a shift - mainly grammatical or semantic - has occurred, the alignment is tagged accordingly.
cs/0606097
Synonym search in Wikipedia: Synarcher
cs.IR cs.DM
The program Synarcher was developed for searching synonyms (and related terms) in a text corpus of special structure (Wikipedia). The results of the search are presented in the form of a graph. It is possible to explore the graph and search for graph elements interactively. An adapted HITS algorithm for synonym search, the program architecture, and an evaluation of the program on test examples are presented in the paper. The proposed algorithm can be applied to query expansion by synonyms (in a search engine) and to forming a synonym dictionary.
cs/0606099
Fairness in Multiuser Systems with Polymatroid Capacity Region
cs.IT math.IT
For a wide class of multi-user systems, a subset of the capacity region which includes the corner points and the sum-capacity facet has a special structure known as polymatroid. Multiaccess channels with fixed input distributions and multiple-antenna broadcast channels are examples of such systems. Any interior point of the sum-capacity facet can be achieved by time-sharing among corner points or by an alternative method known as rate-splitting. The main purpose of this paper is to find a point on the sum-capacity facet which satisfies a notion of fairness among active users. This problem is addressed in two cases: (i) where the complexity of achieving interior points is not feasible, and (ii) where the complexity of achieving interior points is feasible. For the first case, the corner point for which the minimum rate of the active users is maximized (max-min corner point) is desired for signaling. A simple greedy algorithm is introduced to find the optimum max-min corner point. For the second case, the polymatroid properties are exploited to locate a rate-vector on the sum-capacity facet which is optimally fair in the sense that the minimum rate among all users is maximized (max-min rate). In the case that the rate of some users cannot increase further (attain the max-min value), the algorithm recursively maximizes the minimum rate among the rest of the users. It is shown that the problems of deriving the time-sharing coefficients or the rate-splitting scheme can be solved by decomposing the problem into some lower-dimensional subproblems. In addition, a fast algorithm to compute the time-sharing coefficients to attain a general point on the sum-capacity facet is proposed.
cs/0606100
The generating function of the polytope of transport matrices $U(r,c)$ as a positive semidefinite kernel of the marginals $r$ and $c$
cs.LG cs.DM
This paper has been withdrawn by the author due to a crucial error in the proof of Lemma 5.
cs/0606104
An information-spectrum approach to large deviation theorems
cs.IT math.IT
In this paper we take a new look at large deviation theorems from the viewpoint of the information-spectrum (IS) methods, which were first exploited in information theory, and also demonstrate a new basic formula for the large deviation rate function in general, given as a pair of lower and upper IS rate functions. In particular, we are interested in establishing the general large deviation rate functions that can be derived as the Fenchel-Legendre transform of the cumulant generating function. The final goal is to show a necessary and sufficient condition for the rate function to be of Cram\'er-G\"artner-Ellis type.
cs/0606105
ISO 9000-Based Advanced Quality Approach for Continuous Improvement of Manufacturing Processes
cs.IR
Continuous improvement is considered the core value of TQM, by which an organisation can maintain a competitive edge. Several techniques and tools are known to support this core value, but most of the time these techniques are informal and do not model the interdependence between the core value and the tools. Thus, technique formalisation is one of the TQM challenges for increasing the efficiency of quality process implementation. To that end, the paper proposes and experiments with an advanced quality modelling approach based on meta-modelling the "process approach" as advocated by the standard ISO 9000:2000. This meta-model allows formalising the interdependence between techniques, tools and the core value.
cs/0606106
Self-orthogonality of $q$-ary Images of $q^m$-ary Codes and Quantum Code Construction
cs.IT math.IT
A code over GF$(q^m)$ can be imaged or expanded into a code over GF$(q)$ using a basis for the extension field over the base field. The properties of such an image depend on the original code and the basis chosen for imaging. Problems relating the properties of a code and its image with respect to a basis have been of great interest in the field of coding theory. In this work, a generalized version of the problem of self-orthogonality of the $q$-ary image of a $q^m$-ary code has been considered. Given an inner product (more generally, a biadditive form), necessary and sufficient conditions have been derived for a code over a field extension and an expansion basis so that an image of that code is self-orthogonal. The conditions require that the original code be self-orthogonal with respect to several related biadditive forms whenever certain power sums of the dual basis elements do not vanish. Numerous interesting corollaries have been derived by specializing the general conditions. An interesting result for the canonical or regular inner product in fields of characteristic two is that only self-orthogonal codes result in self-orthogonal images. Another result is that the image of a code is self-orthogonal for all bases if and only if the trace of the code is self-orthogonal, except for the case of binary images of 4-ary codes. The conditions are particularly simple to state and apply for cyclic codes. To illustrate a possible application, new quantum error-correcting codes have been constructed with larger minimum distance than previously known.
cs/0606114
Hidden Markov Process: A New Representation, Entropy Rate and Estimation Entropy
cs.IT math.IT
We consider a pair of correlated processes {Z_n} and {S_n} (two sided), where the former is observable and the latter is hidden. The uncertainty in the estimation of Z_n upon its finite past history is H(Z_n|Z_0^{n-1}), and for estimation of S_n upon this observation is H(S_n|Z_0^{n-1}), which are both sequences in n. The limits of these sequences (and their existence) are of practical and theoretical interest. The first limit, if it exists, is the entropy rate. We call the second limit the estimation entropy. An example of a process jointly correlated with another one is the hidden Markov process. It is the memoryless observation of the Markov state process where state transitions are independent of past observations. We consider a new representation of the hidden Markov process using an iterated function system. In this representation the state transitions are deterministically related to the process. This representation provides a unified framework for the analysis of the two limiting entropies for this process, resulting in integral expressions for the limits. This analysis shows that under mild conditions the limits exist and provides a simple method for calculating the elements of the corresponding sequences.
cs/0606115
Evaluating Variable Length Markov Chain Models for Analysis of User Web Navigation Sessions
cs.AI cs.IR
Markov models have been widely used to represent and analyse user web navigation data. In previous work we have proposed a method to dynamically extend the order of a Markov chain model and a complementary method for assessing the predictive power of such a variable length Markov chain. Herein, we review these two methods and propose a novel method for measuring the ability of a variable length Markov model to summarise user web navigation sessions up to a given length. While the summarisation ability of a model is important to enable the identification of user navigation patterns, the ability to make predictions is important in order to foresee the next link choice of a user after following a given trail so as, for example, to personalise a web site. We present an extensive experimental evaluation providing strong evidence that prediction accuracy increases linearly with summarisation ability.
cs/0606117
Performance comparison of multi-user detectors for the downlink of a broadband MC-CDMA system
cs.IT math.IT
In this paper multi-user detection techniques, such as Parallel and Serial Interference Cancellation (PIC & SIC), General Minimum Mean Square Error (GMMSE) and polynomial MMSE, for the downlink of a broadband Multi-Carrier Code Division Multiple Access (MC-CDMA) system are investigated. The Bit Error Rate (BER) and Frame Error Rate (FER) results are evaluated, and compared with single-user detection (MMSEC, EGC) approaches as well. The performance evaluation takes into account the system load, channel coding and modulation schemes.
cs/0606118
Adapting a general parser to a sublanguage
cs.CL cs.IR
In this paper, we propose a method to adapt a general parser (Link Parser) to sublanguages, focusing on the parsing of texts in biology. Our main proposal is the use of terminology (identification and analysis of terms) in order to reduce the complexity of the text to be parsed. Several other strategies are explored and finally combined, among which text normalization, lexicon and morpho-guessing module extensions and grammar rule adaptation. We compare the parsing results before and after these adaptations.
cs/0606119
Lexical Adaptation of Link Grammar to the Biomedical Sublanguage: a Comparative Evaluation of Three Approaches
cs.CL cs.IR
We study the adaptation of the Link Grammar Parser to the biomedical sublanguage with a focus on domain terms not found in a general parser lexicon. Using two biomedical corpora, we implement and evaluate three approaches to addressing unknown words: automatic lexicon expansion, the use of morphological clues, and disambiguation using a part-of-speech tagger. We evaluate each approach separately for its effect on parsing performance and consider combinations of these approaches. In addition to a 45% increase in parsing efficiency, we find that the best approach, incorporating information from a domain part-of-speech tagger, offers a statistically significant 10% relative decrease in error. The adapted parser is available under an open-source license at http://www.it.utu.fi/biolg.
cs/0606121
Performance of Orthogonal Beamforming for SDMA with Limited Feedback
cs.IT math.IT
On the multi-antenna broadcast channel, the spatial degrees of freedom support simultaneous transmission to multiple users. The optimal multiuser transmission, known as dirty paper coding, is not directly realizable. Moreover, close-to-optimal solutions such as Tomlinson-Harashima precoding are sensitive to CSI inaccuracy. This paper considers a more practical design called per user unitary and rate control (PU2RC), which has been proposed for emerging cellular standards. PU2RC supports multiuser simultaneous transmission, enables limited feedback, and is capable of exploiting multiuser diversity. Its key feature is an orthogonal beamforming (or precoding) constraint, where each user selects a beamformer (or precoder) from a codebook of multiple orthonormal bases. In this paper, the asymptotic throughput scaling laws for PU2RC with a large user pool are derived for different regimes of the signal-to-noise ratio (SNR). In the multiuser-interference-limited regime, the throughput of PU2RC is shown to scale logarithmically with the number of users. In the normal SNR and noise-limited regimes, the throughput is found to scale double logarithmically with the number of users and also linearly with the number of antennas at the base station. In addition, numerical results show that PU2RC achieves higher throughput and is more robust against CSI quantization errors than the popular alternative of zero-forcing beamforming if the number of users is sufficiently large.
cs/0606126
May We Have Your Attention: Analysis of a Selective Attention Task
cs.NE cs.AI
In this paper we present a deeper analysis than has previously been carried out of a selective attention problem, and the evolution of continuous-time recurrent neural networks to solve it. We show that the task has a rich structure, and agents must solve a variety of subproblems to perform well. We consider the relationship between the complexity of an agent and the ease with which it can evolve behavior that generalizes well across subproblems, and demonstrate a shaping protocol that improves generalization.
cs/0606128
Automatic forming lists of semantically related terms based on texts rating in the corpus with hyperlinks and categories (In Russian)
cs.IR cs.DM
An adapted HITS algorithm for synonym search, the program architecture, and an evaluation of the program on test examples are presented in the paper. The program Synarcher was developed for searching synonyms (and related terms) in a text corpus of special structure (Wikipedia). The results of the search are presented in the form of a graph. It is possible to explore the graph and search for graph elements interactively. The proposed algorithm can be applied to extending search requests and to forming a synonym dictionary.
cs/0607002
Coding for Parallel Channels: Gallager Bounds for Binary Linear Codes with Applications to Repeat-Accumulate Codes and Variations
cs.IT math.IT
This paper is focused on the performance analysis of binary linear block codes (or ensembles) whose transmission takes place over independent and memoryless parallel channels. New upper bounds on the maximum-likelihood (ML) decoding error probability are derived. These bounds are applied to various ensembles of turbo-like codes, focusing especially on repeat-accumulate codes and their recent variations which possess low encoding and decoding complexity and exhibit remarkable performance under iterative decoding. The framework of the second version of the Duman and Salehi (DS2) bounds is generalized to the case of parallel channels, along with the derivation of their optimized tilting measures. The connection between the generalized DS2 and the 1961 Gallager bounds, addressed by Divsalar and by Sason and Shamai for a single channel, is explored in the case of an arbitrary number of independent parallel channels. The generalization of the DS2 bound for parallel channels makes it possible to re-derive specific bounds which were originally derived by Liu et al. as special cases of the Gallager bound. In the asymptotic case where we let the block length tend to infinity, the new bounds are used to obtain improved inner bounds on the attainable channel regions under ML decoding. The tightness of the new bounds for independent parallel channels is exemplified for structured ensembles of turbo-like codes. The improved bounds with their optimized tilting measures show, irrespective of the block length of the codes, an improvement over the union bound and other previously reported bounds for independent parallel channels; this improvement is especially pronounced for moderate to large block lengths.
cs/0607003
Tightened Upper Bounds on the ML Decoding Error Probability of Binary Linear Block Codes
cs.IT math.IT
The performance of maximum-likelihood (ML) decoded binary linear block codes is addressed via the derivation of tightened upper bounds on their decoding error probability. The upper bounds on the block and bit error probabilities are valid for any memoryless, binary-input and output-symmetric communication channel, and their effectiveness is exemplified for various ensembles of turbo-like codes over the AWGN channel. An expurgation of the distance spectrum of binary linear block codes further tightens the resulting upper bounds.
cs/0607004
On the Error Exponents of Some Improved Tangential-Sphere Bounds
cs.IT math.IT
The performance of maximum-likelihood (ML) decoded binary linear block codes over the AWGN channel is addressed via the tangential-sphere bound (TSB) and two of its recent improved versions. The paper is focused on the derivation of the error exponents of these bounds. Although it was exemplified that some recent improvements of the TSB tighten this bound for finite-length codes, it is demonstrated in this paper that their error exponents coincide. For an arbitrary ensemble of binary linear block codes, the common value of these error exponents is explicitly expressed in terms of the asymptotic growth rate of the average distance spectrum.
cs/0607005
Belief Conditioning Rules (BCRs)
cs.AI
In this paper we propose a new family of Belief Conditioning Rules (BCRs) for belief revision. These rules are not directly related with the fusion of several sources of evidence but with the revision of a belief assignment available at a given time according to the new truth (i.e. conditioning constraint) one has about the space of solutions of the problem.
cs/0607007
Theory of sexes by Geodakian as it is advanced by Iskrin
cs.NE cs.GL
In the 1960s V. Geodakian proposed a theory that explains sexes as a mechanism for evolutionary adaptation of the species to changing environmental conditions. In 2001 V. Iskrin refined and augmented the concepts of Geodakian and gave a new and interesting explanation to several phenomena which involve sex and sex ratio, including the war-years phenomena. He also introduced a new concept of the "catastrophic sex ratio." This note is an attempt to digest technical aspects of the new ideas by Iskrin.
cs/0607010
ITs, a structure sensitive information theory
cs.IT math.IT
Broadly speaking, information theory (IT) assumes no structure of the underlying states. But what about contexts where states do have a clear structure - how should IT cope with such situations? And if such coping is at all possible, then how should structure be expressed so that it can be coped with? A possible answer to these questions is presented here. Noting that IT can cope well with a structure expressed as an accurate clustering (by shifting to the implied reduced alphabet), a generalization is suggested in which structure is expressed as a measure on reduced alphabets. Given such structure, an extension of IT is presented where the reduced alphabets are treated simultaneously. This structure-sensitive IT, called ITs, extends traditional IT in the sense that: (a) there are structure-sensitive analogs to the notions of traditional IT, and (b) translating a theorem in IT by replacing its notions with their structure-sensitive counterparts yields a (provable) theorem of ITs. Seemingly paradoxically, ITs extends IT but it is completely within the framework of IT. The richness of the suggested structures is demonstrated by two disparate families studied in more detail: the family of hierarchical structures and the family of linear structures. The formal findings extend the scope of cases to which a rigorous application of IT can be applied (with implications on quantization, for example). The implications on the foundations of IT are that the assumption regarding no underlying structure of states is not mandatory and that there is a framework for expressing such underlying structure.
cs/0607012
A Flexible Structured-based Representation for XML Document Mining
cs.IR
This paper reports on the INRIA group's approach to XML mining while participating in the INEX XML Mining track 2005. We use a flexible representation of XML documents that allows taking into account the structure only or both the structure and content. Our approach consists of representing XML documents by a set of their sub-paths, defined according to some criteria (length, root beginning, leaf ending). By considering those sub-paths as words, we can use standard methods for vocabulary reduction, and simple clustering methods such as K-means that scale well. We actually use an implementation of the clustering algorithm known as "dynamic clouds" that can work with distinct groups of independent variables put in separate variables. This is useful in our model since embedded sub-paths are not independent: we split potentially dependent paths into separate variables, resulting in each of them containing independent paths. Experiments with the INEX collections show good results for the structure-only collections, but our approach could not scale well for large structure-and-content collections.
cs/0607013
Database Querying under Changing Preferences
cs.DB cs.AI
We present here a formal foundation for an iterative and incremental approach to constructing and evaluating preference queries. Our main focus is on query modification: a query transformation approach which works by revising the preference relation in the query. We provide a detailed analysis of the cases where the order-theoretic properties of the preference relation are preserved by the revision. We consider a number of different revision operators: union, prioritized and Pareto composition. We also formulate algebraic laws that enable incremental evaluation of preference queries. Finally, we consider two variations of the basic framework: finite restrictions of preference relations and weak-order extensions of strict partial order preference relations.
cs/0607014
Strong Consistency of the Good-Turing Estimator
cs.IT math.IT
We consider the problem of estimating the total probability of all symbols that appear with a given frequency in a string of i.i.d. random variables with unknown distribution. We focus on the regime in which the block length is large yet no symbol appears frequently in the string. This is accomplished by allowing the distribution to change with the block length. Under a natural convergence assumption on the sequence of underlying distributions, we show that the total probabilities converge to a deterministic limit, which we characterize. We then show that the Good-Turing total probability estimator is strongly consistent.
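The quantity studied here can be made concrete with the standard Good-Turing formula, which estimates the total probability of all symbols appearing exactly r times as (r+1) N_{r+1}/n, where N_{r+1} is the number of distinct symbols seen r+1 times. The Python sketch below illustrates that classical estimator (it is not code from the paper; the function name is hypothetical):

```python
from collections import Counter

def good_turing_total(sample, r):
    """Good-Turing estimate of the total probability of all symbols that
    appear exactly r times in the sample: (r + 1) * N_{r+1} / n, where
    N_{r+1} is the number of distinct symbols observed r + 1 times and
    n is the sample length."""
    counts = Counter(sample)                 # symbol -> frequency
    freq_of_freq = Counter(counts.values())  # frequency -> how many symbols
    n = len(sample)
    return (r + 1) * freq_of_freq[r + 1] / n
```

For example, on the sample "aabbc" the estimate for r = 0 (the missing mass) is 1 * N_1 / 5 = 0.2, since exactly one symbol ("c") was seen once.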
cs/0607015
The uncovering of hidden structures by Latent Semantic Analysis
cs.IR
Latent Semantic Analysis (LSA) is a well known method for information retrieval. It has also been applied as a model of cognitive processing and word-meaning acquisition. This dual importance of LSA derives from its capacity to modulate the meaning of words by contexts, dealing successfully with polysemy and synonymy. The underlying reasons that make the method work are not clear enough. We propose that the method works because it detects an underlying block structure (the blocks corresponding to topics) in the term by document matrix. In real cases this block structure is hidden because of perturbations. We propose that the correct explanation for LSA must be searched in the structure of singular vectors rather than in the profile of singular values. Using Perron-Frobenius theory we show that the presence of disjoint blocks of documents is marked by sign-homogeneous entries in the vectors corresponding to the documents of one block and zeros elsewhere. In the case of nearly disjoint blocks, perturbation theory shows that if the perturbations are small the zeros in the leading vectors are replaced by small numbers (pseudo-zeros). Since the singular values of each block might be very different in magnitude, their order does not mirror the order of blocks. When the norms of the blocks are similar, LSA works fine, but we propose that when the topics have different sizes, the usual procedure of selecting the first k singular triplets (k being the number of blocks) should be replaced by a method that selects the perturbed Perron vectors for each block.
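The block-structure claim can be checked numerically. The following numpy sketch (an illustration, not the authors' experiments) builds a term-by-document matrix with two disjoint topic blocks of different norms: each leading right singular vector is sign-homogeneous on one block of documents and zero on the other, and the larger-norm block owns the first singular triplet, so the order of singular values does not mirror the order of blocks.

```python
import numpy as np

# Term-by-document matrix with two disjoint topic blocks:
# terms 0-2 occur only in documents 0-2, terms 3-5 only in documents 3-5.
A = np.zeros((6, 6))
A[:3, :3] = 1.0   # topic 1
A[3:, 3:] = 2.0   # topic 2, with the larger norm

U, s, Vt = np.linalg.svd(A)
v1, v2 = Vt[0], Vt[1]

# The first singular triplet belongs to the larger-norm block (documents
# 3-5) even though that block comes second; each leading vector vanishes
# outside "its" block and has entries of a single sign on it.
```

With perturbed (nearly disjoint) blocks, the exact zeros become the small "pseudo-zeros" described in the abstract.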
cs/0607016
An Analysis of Arithmetic Constraints on Integer Intervals
cs.AI cs.PL
Arithmetic constraints on integer intervals are supported in many constraint programming systems. We study here a number of approaches to implement constraint propagation for these constraints. To describe them we introduce integer interval arithmetic. Each approach is explained using appropriate proof rules that reduce the variable domains. We compare these approaches using a set of benchmarks. For the most promising approach we provide results that characterize the effect of constraint propagation. This is a full version of our earlier paper, cs.PL/0403016.
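As a concrete illustration of the kind of domain-reduction rule discussed (a minimal sketch, not code from the paper; the function name and interval representation are hypothetical), here is one round of bounds propagation in Python for the single constraint x + y = z over integer intervals:

```python
def propagate_sum(x, y, z):
    """One round of bounds propagation for the constraint x + y = z.

    Each domain is an integer interval given as a (lo, hi) pair; the rules
    z in [xl + yl, xh + yh], x in [zl - yh, zh - yl], y in [zl - xh, zh - xl]
    shrink the domains without losing any solution."""
    (xl, xh), (yl, yh), (zl, zh) = x, y, z
    zl, zh = max(zl, xl + yl), min(zh, xh + yh)   # z = x + y
    xl, xh = max(xl, zl - yh), min(xh, zh - yl)   # x = z - y
    yl, yh = max(yl, zl - xh), min(yh, zh - xl)   # y = z - x
    return (xl, xh), (yl, yh), (zl, zh)

# Example: with x, y in [0, 10] and z known to lie in [12, 14],
# one round prunes both x and y to [2, 10].
```

A solver would apply such rules for every constraint repeatedly until a fixpoint is reached (no domain shrinks further), which is the constraint propagation whose effect the paper characterizes.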
cs/0607017
Performance of STBC MC-CDMA systems over outdoor realistic MIMO channels
cs.IT math.IT
The paper deals with orthogonal space-time block coded MC-CDMA systems in outdoor realistic downlink scenarios with up to two transmit and receive antennas. Assuming no channel state information at the transmitter, we compare several linear single-user detection and spreading schemes, with or without channel coding, achieving a spectral efficiency of 1-2 bits/s/Hz. The different results obtained demonstrate that spatial diversity significantly improves the performance of MC-CDMA systems, and allows different chip-mapping without notably decreasing performance. Moreover, the global system exhibits a good trade-off between complexity at mobile stations and performance. Then, Alamouti's STBC MC-CDMA schemes derive full benefit from the frequency and spatial diversities and can be considered as a very realistic and promising candidate for the air interface downlink of the 4th generation mobile radio systems.
cs/0607018
Feynman Checkerboard as a Model of Discrete Space-Time
cs.CE
In 1965, Feynman wrote of using a lattice containing one dimension of space and one dimension of time to derive aspects of quantum mechanics. Instead of summing the behavior of all possible paths as he did, this paper will consider the motion of single particles within this discrete Space-Time lattice, sometimes called Feynman's Checkerboard. This empirical approach yielded several predicted emergent properties for a discrete Space-Time lattice, one of which is novel and testable.
cs/0607019
Modelling the Probability Density of Markov Sources
cs.NE
This paper introduces an objective function that seeks to minimise the average total number of bits required to encode the joint state of all of the layers of a Markov source. This type of encoder may be applied to the problem of optimising the bottom-up (recognition model) and top-down (generative model) connections in a multilayer neural network, and it unifies several previous results on the optimisation of multilayer neural networks.
cs/0607020
Iterative Decoding Performance Bounds for LDPC Codes on Noisy Channels
cs.IT math.IT
The asymptotic iterative decoding performances of low-density parity-check (LDPC) codes using min-sum (MS) and sum-product (SP) decoding algorithms on memoryless binary-input output-symmetric (MBIOS) channels are analyzed in this paper. For MS decoding, the analysis is done by upper bounding the bit error probability of the root bit of a tree code by the sequence error probability of a subcode of the tree code assuming the transmission of the all-zero codeword. The result is a recursive upper bound on the bit error probability after each iteration. For SP decoding, we derive a recursively determined lower bound on the bit error probability after each iteration. This recursive lower bound recovers the density evolution equation of LDPC codes on the binary erasure channel (BEC) with inequalities satisfied with equalities. A significant implication of this result is that the performance of LDPC codes under SP decoding on the BEC is an upper bound of the performance on all MBIOS channels with the same uncoded bit error probability. All results hold for the more general multi-edge type LDPC codes.
cs/0607021
Slepian-Wolf Code Design via Source-Channel Correspondence
cs.IT math.IT
We consider Slepian-Wolf code design based on LDPC (low-density parity-check) coset codes for memoryless source-side information pairs. A density evolution formula, equipped with a concentration theorem, is derived for Slepian-Wolf coding based on LDPC coset codes. As a consequence, an intimate connection between Slepian-Wolf coding and channel coding is established. Specifically we show that, under density evolution, design of binary LDPC coset codes for Slepian-Wolf coding of an arbitrary memoryless source-side information pair reduces to design of binary LDPC codes for binary-input output-symmetric channels without loss of optimality. With this connection, many classic results in channel coding can be easily translated into the Slepian-Wolf setting.
cs/0607024
Results on Parity-Check Matrices with Optimal Stopping and/or Dead-End Set Enumerators
cs.IT math.IT
The performance of iterative decoding techniques for linear block codes correcting erasures depends very much on the sizes of the stopping sets associated with the underlying Tanner graph, or, equivalently, the parity-check matrix representing the code. In this paper, we introduce the notion of dead-end sets to explicitly demonstrate this dependency. The choice of the parity-check matrix entails a trade-off between performance and complexity. We give bounds on the complexity of iterative decoders achieving optimal performance in terms of the sizes of the underlying parity-check matrices. Further, we fully characterize codes for which the optimal stopping set enumerator equals the weight enumerator.
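For intuition, a stopping set is a set of variable nodes (columns of the parity-check matrix) such that no check node is connected to the set exactly once. A brute-force search for the smallest nonempty stopping set of a small matrix (the (7,4) Hamming code below is our own example, not one from the paper) can be sketched as:

```python
from itertools import combinations

def is_stopping_set(H, S):
    """S (a collection of column indices) is a stopping set of the
    parity-check matrix H if no row of H has exactly one 1 within
    the columns indexed by S."""
    return all(sum(row[j] for j in S) != 1 for row in H)

def smallest_stopping_set(H):
    """Smallest nonempty stopping set, by exhaustive search over subsets."""
    n = len(H[0])
    for size in range(1, n + 1):
        for S in combinations(range(n), size):
            if is_stopping_set(H, S):
                return set(S)
    return set()

# A parity-check matrix of the (7,4) Hamming code
# (columns are the binary representations of 1..7).
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
print(smallest_stopping_set(H))  # -> {0, 1, 2}, so the stopping distance is 3
```

Exhaustive search is exponential in the block length, which is precisely why the choice of parity-check matrix and the resulting stopping set sizes matter.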
cs/0607027
A general computation rule for lossy summaries/messages with examples from equalization
cs.IT math.IT
Elaborating on prior work by Minka, we formulate a general computation rule for lossy messages. An important special case (with many applications in communications) is the conversion of "soft-bit" messages to Gaussian messages. By this method, the performance of a Kalman equalizer is improved, both for uncoded and coded transmission.
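The conversion of soft-bit messages to Gaussian messages is typically done by moment matching. The following sketch shows the standard moment-matching step for a binary ±1 soft bit given as a log-likelihood ratio; it is our own illustration of that step, not necessarily the exact rule used in the paper:

```python
import math

def softbit_to_gaussian(llr):
    """Moment-match a binary (+1/-1) "soft bit" message, given as an LLR
    L = log P(x=+1)/P(x=-1), to a Gaussian message (mean, variance).
    The mean of x is tanh(L/2); since x^2 = 1, the variance is 1 - mean^2."""
    mean = math.tanh(llr / 2.0)
    return mean, 1.0 - mean * mean

print(softbit_to_gaussian(0.0))  # uninformative bit -> (0.0, 1.0)
```

An uninformative bit (LLR 0) maps to a zero-mean, unit-variance Gaussian, while a confident bit maps to a narrow Gaussian centered near +1 or -1.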
cs/0607029
A Coding Theorem Characterizing Renyi's Entropy through Variable-to-Fixed Length Codes
cs.IT math.IT
This paper has been withdrawn.
cs/0607030
Towards a General Theory of Simultaneous Diophantine Approximation of Formal Power Series: Multidimensional Linear Complexity
cs.IT math.IT
We model the development of the linear complexity of multisequences by a stochastic infinite state machine, the Battery-Discharge-Model (BDM). The states s in S of the BDM have asymptotic probabilities (mass) Pr(s) = 1/(P(q,M) q^K(s)), where K(s) in N_0 is the class of the state s, and P(q,M) = sum_{K in N_0} P_M(K) q^(-K) = prod_{i=1..M} q^i/(q^i - 1) is the generating function of the numbers P_M(K) of partitions into at most M parts. For each timestep modulo M+1, there are exactly P_M(K) states of class K. We obtain a closed formula for the asymptotic probability of the linear complexity deviation d(n) := L(n) - ceil(n*M/(M+1)), with Pr(d) = O(q^(-|d|(M+1))), for M in N and d in Z. The precise formula is given in the text; it has been verified numerically for M = 1..8, and is conjectured to hold for all M in N. From the asymptotic growth (proven for all M in N), we infer the Law of the Logarithm for the linear complexity deviation, -liminf_{n -> infty} d_a(n)/log n = 1/((M+1) log q) = limsup_{n -> infty} d_a(n)/log n, which immediately yields L_a(n)/n -> M/(M+1) with measure one, for all M in N, a result recently shown by Niederreiter and Wang. Keywords: linear complexity, linear complexity deviation, multisequence, Battery-Discharge-Model, isometry.
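The partition counts P_M(K) appearing in the state masses can be computed with the standard recursion for partitions into at most M parts, p(K, M) = p(K, M-1) + p(K-M, M). A small helper of our own (not from the paper):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def partitions_at_most(K, M):
    """Number P_M(K) of partitions of K into at most M parts.
    A partition either uses at most M-1 parts, or has exactly M parts,
    in which case subtracting 1 from every part leaves a partition of
    K - M into at most M parts."""
    if K == 0:
        return 1
    if M == 0 or K < 0:
        return 0
    return partitions_at_most(K, M - 1) + partitions_at_most(K - M, M)

# Partitions of 5 into at most 3 parts: 5, 4+1, 3+2, 3+1+1, 2+2+1
print(partitions_at_most(5, 3))  # -> 5
```

These are exactly the coefficients of q^(-K) in the generating function P(q,M) quoted in the abstract.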
cs/0607037
The Minimal Cost Algorithm for Off-Line Diagnosability of Discrete Event Systems
cs.AI cs.CC
Failure diagnosis for {\it discrete event systems} (DESs) has received considerable attention in recent years. Both on-line and off-line diagnosis in the framework of DESs were first considered by Lin Feng in 1994, where in particular an algorithm for the diagnosability of DESs was presented. Motivated by some problems left open in previous work, in this paper we investigate a minimal-cost algorithm for the diagnosability of DESs. More specifically: (i) we give a generic, polynomial-time method for deciding a system's off-line diagnosability; (ii) in particular, we present an algorithm for finding a minimal set among all observable event sets, whereas the previous algorithm may find a {\it non-minimal} one.
cs/0607039
Set-Theoretic Preliminaries for Computer Scientists
cs.DM cs.DB
The basics of set theory are usually copied, directly or indirectly, by computer scientists from introductions to mathematical texts. Often mathematicians are content with special cases when the general case is of no mathematical interest. But sometimes what is of no mathematical interest is of great practical interest in computer science. For example, non-binary relations in mathematics tend to have numerical indexes and tend to be unsorted. In the theory and practice of relational databases both these simplifications are unwarranted. In response to this situation we present here an alternative to the ``set-theoretic preliminaries'' usually found in computer science texts. This paper separates binary relations from the kind of relations that are needed in relational databases. Its treatment of functions supports both computer science in general and the kind of relations needed in databases. As a sample application this paper shows how the mathematical theory of relations naturally leads to the relational data model and how the operations on relations are by themselves already a powerful vehicle for queries.
cs/0607042
Towards a classical proof of exponential lower bound for 2-probe smooth codes
cs.CR cs.IT math.IT
Let C: {0,1}^n -> {0,1}^m be a code encoding an n-bit string into an m-bit string. Such a code is called a (q, c, e) smooth code if there exists a decoding algorithm which, while decoding any bit of the input, makes at most q probes on the codeword, and the probability that it looks at any given location is at most c/m. The error made by the decoding algorithm is at most e. Smooth codes were introduced by Katz and Trevisan in connection with locally decodable codes. For 2-probe smooth codes, Kerenidis and de Wolf have shown a lower bound on m that is exponential in n when c and e are constants. Their lower bound proof goes through quantum arguments, and interestingly there is as yet no completely classical argument for the same (albeit completely classical!) statement. We do not match the bounds of Kerenidis and de Wolf, but we show the following. Let C: {0,1}^n -> {0,1}^m be a (2, c, e) smooth code; if e <= c^2/(8n^2), then m >= 2^(n/(320c^2) - 1). We hope that the arguments and techniques used in this paper can be extended (or will be helpful in making similar arguments) to match the bounds shown using quantum arguments. Better still, they may extend to show bounds for codes with a greater number of probes, where quantum arguments unfortunately do not yield good bounds (even for 3-probe codes).
cs/0607043
Analysis of CDMA systems that are characterized by eigenvalue spectrum
cs.IT cond-mat.dis-nn math.IT
An approach by which to analyze the performance of the code division multiple access (CDMA) scheme, which is a core technology used in modern wireless communication systems, is provided. The approach characterizes the objective system by the eigenvalue spectrum of a cross-correlation matrix composed of signature sequences used in CDMA communication, which enables us to handle a wider class of CDMA systems beyond the basic model reported by Tanaka. The utility of the novel scheme is shown by analyzing a system in which the generation of signature sequences is designed for enhancing the orthogonality.
cs/0607047
PAC Classification based on PAC Estimates of Label Class Distributions
cs.LG
A standard approach in pattern classification is to estimate the distributions of the label classes, and then to apply the Bayes classifier to the estimates of the distributions in order to classify unlabeled examples. As one might expect, the better our estimates of the label class distributions, the better the resulting classifier will be. In this paper we make this observation precise by identifying risk bounds of a classifier in terms of the quality of the estimates of the label class distributions. We show how PAC learnability relates to estimates of the distributions that have a PAC guarantee on their $L_1$ distance from the true distribution, and we bound the increase in negative log likelihood risk in terms of PAC bounds on the KL-divergence. We give an inefficient but general-purpose smoothing method for converting an estimated distribution that is good under the $L_1$ metric into a distribution that is good under the KL-divergence.
cs/0607048
Evaluation of Reject Inference Techniques for Credit Granting
cs.NE math.ST stat.TH
We present the problem of "Reject Inference" for credit acceptance. Because of the current legal framework (Basel II), credit institutions need to industrialize their credit-acceptance processes, including Reject Inference. We present here a methodology for comparing various Reject Inference techniques and show that, in the absence of real theoretical results, it is necessary to be able to produce and compare models adapted to the available data (selection of the "best" model conditionally on the data). We describe simulations run on a small data set to illustrate the approach, along with some strategies for choosing the control group, which is the only valid approach to Reject Inference.
cs/0607051
Reasoning with Diagrams: Cognitive and Computational Perspectives
cs.CL
Diagrammatic, analogical or iconic representations are often contrasted with linguistic or logical representations, in which the shape of the symbols is arbitrary. The aim of this paper is to make a case for the usefulness of diagrams in inferential knowledge representation systems. Although commonly used, diagrams have long suffered from the reputation of being only a heuristic tool or a mere support for intuition. The first part of this paper is a historical background paying tribute to the logicians, psychologists and computer scientists who put an end to this formal prejudice against diagrams. The second part is a discussion of their characteristics as opposed to those of linguistic forms. The last part aims at reviving interest in heterogeneous representation systems including both linguistic and diagrammatic representations.
cs/0607052
Dealing with Metonymic Readings of Named Entities
cs.AI cs.CL
The aim of this paper is to propose a method for tagging named entities (NE), using natural language processing techniques. Beyond their literal meaning, named entities are frequently subject to metonymy. We show the limits of current NE type hierarchies and detail a new proposal aiming at dynamically capturing the semantics of entities in context. This model can analyze complex linguistic phenomena like metonymy, which are known to be difficult for natural language processing but crucial for most applications. We present an implementation and some tests using the French ESTER corpus, and report significant results.
cs/0607053
Linguistically Grounded Models of Language Change
cs.AI cs.CL
Questions related to the evolution of language have recently seen an impressive increase in interest (Briscoe, 2002). This short paper aims at questioning the scientific status of these models and their relation to attested data. We show that one cannot directly model non-linguistic (exogenous) factors, even though they play a crucial role in language evolution. We then examine the relation between linguistic models and attested language data, as well as their contribution to cognitive linguistics.
cs/0607056
Reasoning with Intervals on Granules
cs.AI cs.DM
Formalizations of periods of time inside a linear model of Time are usually based on the notion of intervals, which may or may not contain their endpoints. This is not enough when the periods are expressed in granularities that are coarse with respect to the event under consideration: for instance, how does one express the inter-war period as an interval of {\em years}? This paper presents a new type of interval, neither open, closed, nor half-open, and extends the operations on intervals to this new type, in order to reduce the gap between discourse about temporal relationships and its translation into a discretized model of Time.
cs/0607060
Circle Formation of Weak Mobile Robots
cs.RO
In this paper we prove the conjecture of D\'{e}fago & Konagaya. Furthermore, we describe a deterministic protocol for forming a regular n-gon in finite time.
cs/0607062
Get out the vote: Determining support or opposition from Congressional floor-debate transcripts
cs.CL cs.SI physics.soc-ph
We investigate whether one can determine from the transcripts of U.S. Congressional floor debates whether the speeches represent support of or opposition to proposed legislation. To address this problem, we exploit the fact that these speeches occur as part of a discussion; this allows us to use sources of information regarding relationships between discourse segments, such as whether a given utterance indicates agreement with the opinion expressed by another. We find that the incorporation of such information yields substantial improvements over classifying speeches in isolation.
cs/0607064
How to Find Good Finite-Length Codes: From Art Towards Science
cs.IT math.IT
We explain how to optimize finite-length LDPC codes for transmission over the binary erasure channel. Our approach relies on an analytic approximation of the erasure probability. This is in turn based on a finite-length scaling result to model large-scale erasures and a union bound involving minimal stopping sets to take into account small error events. We show that the performance of optimized ensembles, as observed in simulations, is well described by our approximation. Although we only address the case of transmission over the binary erasure channel, our method should be applicable to a more general setting.
cs/0607065
Decomposable Theories
cs.LO cs.AI
We present in this paper a general algorithm for solving first-order formulas in particular theories called "decomposable theories". First of all, using special quantifiers, we give a formal characterization of decomposable theories and show some of their properties. Then, we present a general algorithm for solving first-order formulas in any decomposable theory "T". The algorithm is given in the form of five rewriting rules. It transforms a first-order formula "P", which can possibly contain free variables, into a conjunction "Q" of solved formulas easily transformable into a Boolean combination of existentially quantified conjunctions of atomic formulas. In particular, if "P" has no free variables then "Q" is either the formula "true" or "false". The correctness of our algorithm proves the completeness of the decomposable theories. Finally, we show that the theory "Tr" of finite or infinite trees is a decomposable theory and give some benchmarks realized by an implementation of our algorithm, solving formulas on two-partner games in "Tr" with more than 160 nested alternated quantifiers.
cs/0607067
Competing with stationary prediction strategies
cs.LG
In this paper we introduce the class of stationary prediction strategies and construct a prediction algorithm that asymptotically performs as well as the best continuous stationary strategy. We make mild compactness assumptions but no stochastic assumptions about the environment. In particular, no assumption of stationarity is made about the environment, and the stationarity of the considered strategies only means that they do not depend explicitly on time; we argue that it is natural to consider only stationary strategies even for highly non-stationary environments.
cs/0607068
Computation of the Weight Distribution of CRC Codes
cs.IT math.AC math.IT
In this article, we present an algorithm for computing the weight distribution of CRC codes. The recursive structure of CRC codes gives us an iterative way to compute the weight distribution of their dual codes starting from just a few ``representative'' words. Thanks to the MacWilliams theorem, the computation of the weight distribution of the dual code can easily be brought back to that of the CRC code itself. This algorithm is a good alternative to the standard algorithm, which involves listing every word of the code.
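The MacWilliams step can be sanity-checked on a small code by brute force. The sketch below uses the (7,4) Hamming code (our own example, not a CRC code) and the transform W_{C^perp}(x,y) = |C|^{-1} W_C(x+y, x-y):

```python
from itertools import product

def weight_distribution(G):
    """Weight distribution [A_0, ..., A_n] of the binary code generated
    by the rows of G, found by listing all 2^k codewords."""
    k, n = len(G), len(G[0])
    A = [0] * (n + 1)
    for msg in product([0, 1], repeat=k):
        cw = [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]
        A[sum(cw)] += 1
    return A

def macwilliams(A):
    """Dual weight distribution via the MacWilliams transform:
    B_j is 1/|C| times the coefficient of x^(n-j) y^j in
    sum_i A_i (x+y)^(n-i) (x-y)^i."""
    n = len(A) - 1
    size = sum(A)
    B = [0] * (n + 1)
    for i, Ai in enumerate(A):
        if Ai == 0:
            continue
        poly = [0] * (n + 1)       # poly[j] = coefficient of y^j
        poly[0] = 1
        for _ in range(n - i):     # multiply by (x + y)
            poly = [poly[j] + (poly[j - 1] if j else 0) for j in range(n + 1)]
        for _ in range(i):         # multiply by (x - y)
            poly = [poly[j] - (poly[j - 1] if j else 0) for j in range(n + 1)]
        for j in range(n + 1):
            B[j] += Ai * poly[j]
    return [b // size for b in B]

# Generator matrix of the (7,4) Hamming code.
G = [[1, 0, 0, 0, 0, 1, 1],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 1, 1, 0],
     [0, 0, 0, 1, 1, 1, 1]]
A = weight_distribution(G)
print(A)               # -> [1, 0, 0, 7, 7, 0, 0, 1]
print(macwilliams(A))  # dual (simplex) code: [1, 0, 0, 0, 7, 0, 0, 0]
```

The appeal of the article's method is precisely that it avoids the exhaustive listing done by `weight_distribution` here, which is infeasible for codes of realistic dimension.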
cs/0607071
Islands for SAT
cs.AI
In this note we introduce the notion of islands for restricting local search. We show how we can construct islands for CNF SAT problems, and how much search space can be eliminated by restricting search to the island.
cs/0607074
On Construction of the (24,12,8) Golay Codes
cs.IT math.IT
Two product array codes are used to construct the (24, 12, 8) binary Golay code through the direct sum operation. This construction provides a systematic way to find proper (8, 4, 4) linear block component codes for generating the Golay code, and it generalizes and extends previously existing methods that use a similar construction framework. The resulting code is simple to decode.
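As a quick sanity check of the target parameters (this is not the paper's array construction), one can generate the extended Golay code from the classical cyclic (23, 12, 7) Golay code, whose generator polynomial is one of the two degree-11 factors of x^23 + 1 over GF(2), and verify the weight spectrum:

```python
from itertools import product

# g(x) = x^11 + x^10 + x^6 + x^5 + x^4 + x^2 + 1,
# coefficients listed from x^0 up to x^11.
g = [1, 0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 1]

def encode(msg):
    """Multiply the 12-bit message polynomial by g(x) over GF(2)
    (non-systematic cyclic encoding of the (23,12) Golay code),
    then append an overall parity bit to extend to length 24."""
    cw = [0] * 23
    for i, m in enumerate(msg):
        if m:
            for j, gj in enumerate(g):
                cw[i + j] ^= gj
    return cw + [sum(cw) % 2]

weights = sorted({sum(encode(msg)) for msg in product([0, 1], repeat=12)})
print(weights)  # -> [0, 8, 12, 16, 24]: minimum distance 8
```

The weight spectrum {0, 8, 12, 16, 24} over all 4096 codewords confirms the (24, 12, 8) parameters that the paper's construction targets.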
cs/0607075
On entropy for mixtures of discrete and continuous variables
cs.IT math.IT
Let $X$ be a discrete random variable with support $S$ and $f : S \to S^\prime$ be a bijection. It is well known that the entropy of $X$ equals the entropy of $f(X)$. This entropy preservation property has been well utilized to establish non-trivial properties of discrete stochastic processes, e.g. the queueing process \cite{prg03}. Entropy, as well as entropy preservation, is well-defined only in the context of purely discrete or purely continuous random variables. However, for a mixture of discrete and continuous random variables, which arises in many interesting situations, the notions of entropy and entropy preservation have not been well understood. In this paper, we extend the notion of entropy in a natural manner to a mixed-pair random variable, a pair of random variables with one discrete and the other continuous. Our extension is consistent with the existing definitions of entropy in the sense that there exist natural injections from discrete or continuous random variables into mixed-pair random variables under which their entropy remains the same. This extension of entropy allows us to obtain sufficient conditions for entropy preservation in mixtures of discrete and continuous random variables under bijections. The extended definition of entropy also leads to an entropy rate for continuous-time Markov chains. As an application, we recover a known probabilistic result related to the Poisson process. We strongly believe that the framework developed in this paper can be useful in establishing probabilistic properties of complex processes, such as load balancing systems, queueing networks, and caching algorithms.
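The discrete entropy-preservation property stated at the start is easy to demonstrate; the toy sketch below is our own (the paper's contribution is extending this to mixed pairs):

```python
from math import log2

def entropy(dist):
    """Shannon entropy (in bits) of a discrete distribution,
    given as a dict mapping value -> probability."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

# A discrete X, and its image under f(x) = x^2, a bijection on {1, 2, 3}.
X = {1: 0.5, 2: 0.25, 3: 0.25}
fX = {x * x: p for x, p in X.items()}

print(entropy(X), entropy(fX))  # both 1.5 bits: relabeling preserves entropy
```

The entropy depends only on the multiset of probabilities, not on the labels, which is exactly what a bijection leaves unchanged.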
cs/0607076
Capacity of Cooperative Fusion in the Presence of Byzantine Sensors
cs.IT math.IT
The problem of cooperative fusion in the presence of Byzantine sensors is considered. An information theoretic formulation is used to characterize the Shannon capacity of sensor fusion. It is shown that when fewer than half of the sensors are Byzantine, the effect of the Byzantine attack can be entirely mitigated, and the fusion capacity is identical to that when all sensors are honest. But when at least half of the sensors are Byzantine, they can completely defeat the sensor fusion so that no information can be transmitted reliably. A capacity-achieving transmit-then-verify strategy is proposed for the case that fewer than half of the sensors are Byzantine, and its error probability and coding rate are analyzed by using a Markov decision process model of the transmission protocol.
cs/0607078
Complex Lattice Reduction Algorithm for Low-Complexity MIMO Detection
cs.DS cs.IT math.IT
Recently, lattice-reduction-aided detectors have been proposed for multiple-input multiple-output (MIMO) systems to achieve full-diversity performance like the maximum-likelihood receiver, yet with complexity similar to that of linear receivers. However, these lattice-reduction-aided detectors are based on the traditional LLL reduction algorithm, which was originally introduced for reducing real lattice bases, in spite of the fact that the channel matrices are inherently complex-valued. In this paper, we introduce the complex LLL algorithm for directly reducing the basis of the complex lattice naturally defined by a complex-valued channel matrix. We prove that complex LLL reduction-aided detection can also achieve full diversity. Our analysis reveals that the new complex LLL algorithm can reduce complexity by nearly 50% relative to the traditional LLL algorithm, and this is confirmed by simulation. It is noteworthy that the complex LLL algorithm has nearly the same bit-error-rate performance as the traditional LLL algorithm.
cs/0607081
A Representation System for Supporting Needs in the Architectural Field
cs.OH cs.IR
The image is a very important means of communication in the architectural field, intervening in the various phases of the design of a project; it can be regarded as a decision-support tool. Our research aims to assess the contribution of Economic Intelligence to resolving a decision problem faced by the various partners (architect, contractor, customer) in the architectural field, in order to make strategic decisions within the framework of the realization or design of an architectural work. Economic Intelligence allows the real needs of the user-decision makers to be taken into account, so that their expectations are considered at the first stage of an information search, and not at the final stage of the tool's development, during its evaluation.
cs/0607083
Mathematical Modelling of the Thermal Accumulation in Hot Water Solar Systems
cs.CE
Mathematical modelling, and the derivation of useful recommendations for the construction and operating regimes of hot water solar installations with thermal stratification, is the main purpose of this work. A special experimental solar module for hot water was built and equipped with sufficient measurement apparatus. The main concept of the investigation is to optimise the stratified regime of thermal accumulation and the constructive parameters of the heat exchange equipment (the serpentine coil in the tank). Accumulation and heat exchange processes were investigated by theoretical and experimental means. A special mathematical model was composed to simulate the energy transfer in the stratified tank, and a computer program was developed to solve the equations for thermal accumulation and energy exchange. Extensive numerical and experimental tests were carried out, and good agreement between theoretical and experimental data was achieved. Keywords: mathematical modelling, accumulation.
cs/0607084
About Norms and Causes
cs.AI
Knowing the norms of a domain is crucial, but there exist no repository of norms. We propose a method to extract them from texts: texts generally do not describe a norm, but rather how a state-of-affairs differs from it. Answers concerning the cause of the state-of-affairs described often reveal the implicit norm. We apply this idea to the domain of driving, and validate it by designing algorithms that identify, in a text, the "basic" norms to which it refers implicitly.
cs/0607085
Using Pseudo-Stochastic Rational Languages in Probabilistic Grammatical Inference
cs.LG
In probabilistic grammatical inference, a usual goal is to infer a good approximation of an unknown distribution P called a stochastic language. The estimate of P stands in some class of probabilistic models such as probabilistic automata (PA). In this paper, we focus on probabilistic models based on multiplicity automata (MA). The stochastic languages generated by MA are called rational stochastic languages; they strictly include the stochastic languages generated by PA, and they admit a very concise canonical representation. Despite the fact that this class is not recursively enumerable, it is efficiently identifiable in the limit by using the algorithm DEES, introduced by the authors in a previous paper. However, the identification is not proper, and before the convergence of the algorithm, DEES can produce MA that do not define stochastic languages. Nevertheless, it is possible to use these MA to define stochastic languages. We show that they belong to a broader class of rational series, which we call pseudo-stochastic rational languages. The aim of this paper is twofold. First, we provide a theoretical study of pseudo-stochastic rational languages, the languages output by DEES, showing for example that this class is decidable in polynomial time. Second, we have carried out extensive experiments comparing DEES to classical inference algorithms such as ALERGIA and MDI; they show that DEES outperforms them in most cases.
cs/0607086
Representing Knowledge about Norms
cs.AI
Norms are essential to extend inference: inferences based on norms are far richer than those based on logical implications. In the recent decades, much effort has been devoted to reason on a domain, once its norms are represented. How to extract and express those norms has received far less attention. Extraction is difficult: as the readers are supposed to know them, the norms of a domain are seldom made explicit. For one thing, extracting norms requires a language to represent them, and this is the topic of this paper. We apply this language to represent norms in the domain of driving, and show that it is adequate to reason on the causes of accidents, as described by car-crash reports.
cs/0607088
Using Answer Set Programming in an Inference-Based approach to Natural Language Semantics
cs.CL cs.AI
Using Answer Set Programming in an Inference-Based approach to Natural Language Semantics
cs/0607089
Superregular Matrices and the Construction of Convolutional Codes having a Maximum Distance Profile
cs.IT math.CO math.IT
Superregular matrices are a class of lower triangular Toeplitz matrices that arise in the context of constructing convolutional codes having a maximum distance profile. These matrices are characterized by the property that no submatrix has a zero determinant unless it is trivially zero due to the lower triangular structure. In this paper, we discuss how superregular matrices may be used to construct codes having a maximum distance profile. We also introduce group actions that preserve the superregularity property and present an upper bound on the minimum size a finite field must have in order that a superregular matrix of a given size can exist over that field.
cs/0607090
Neural Networks with Complex and Quaternion Inputs
cs.NE
This article investigates Kak neural networks, which can be trained instantaneously, for complex and quaternion inputs. The performance of the basic algorithm is analyzed, and it is shown how it provides a plausible model of human perception and understanding of images. The motivation for studying quaternion inputs is their use in representing spatial rotations, which find applications in computer graphics, robotics, global navigation, computer vision and the spatial orientation of instruments. The problem of efficient mapping of data in quaternion neural networks is examined, and some problems that need to be addressed before quaternion neural networks find applications are identified.