| id | title | categories | abstract |
|---|---|---|---|
cs/0610107
|
Interference Channels with Common Information
|
cs.IT math.IT
|
In this paper, we consider the discrete memoryless interference channel with
common information, in which two senders need to deliver not only private messages
but also certain common messages to their corresponding receivers. We derive an
achievable rate region for such a channel by exploiting a random coding
strategy, namely cascaded superposition coding. We reveal that the derived
achievable rate region generalizes some important existing results for the
interference channels with or without common information. Furthermore, we
specialize to a class of deterministic interference channels with common
information, and show that the derived achievable rate region is indeed the
capacity region for this class of channels.
|
cs/0610108
|
Doppler Spectrum Estimation by Ramanujan Fourier Transforms
|
cs.NA cs.CE
|
The Doppler spectrum of a weather radar signal is classically estimated in
one of two ways: a temporal method based on the autocorrelation of the
successive signals, and a method that estimates the power spectral density
(PSD) using Fourier transforms. We introduce a new signal-processing tool based
on Ramanujan sums cq(n), adapted to the analysis of arithmetical sequences with
several resonances p/q. These sums are almost periodic with respect to the time
n of the resonances and aperiodic with respect to the order q of the
resonances. New results are obtained by applying the Ramanujan Fourier
Transform (RFT) to the estimation of the Doppler spectrum of the weather radar
signal.
|
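The Ramanujan sums cq(n) underlying the RFT in the abstract above have a simple closed form. As a minimal illustration (our sketch, not the authors' code), here is a direct Python evaluation from the definition c_q(n) = sum of exp(2*pi*i*a*n/q) over 1 <= a <= q with gcd(a, q) = 1:

```python
from math import gcd, cos, pi

def ramanujan_sum(q, n):
    """c_q(n): sum of exp(2*pi*i*a*n/q) over 1 <= a <= q with gcd(a, q) == 1.
    The imaginary parts cancel in conjugate pairs, so the real cosine sum is
    exact; the result is always an integer, hence the rounding."""
    return round(sum(cos(2 * pi * a * n / q) for a in range(1, q + 1)
                     if gcd(a, q) == 1))
```

Note that c_q(n) is periodic in n with period q, which is what makes the sums suited to detecting resonances of order q.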
cs/0610111
|
Local approximate inference algorithms
|
cs.AI
|
We present a new local approximation algorithm for computing the Maximum a
Posteriori (MAP) assignment and the log-partition function of an arbitrary
exponential-family distribution represented by a finite-valued pair-wise Markov
random field (MRF), say $G$. Our algorithm is based on decomposing $G$ into
{\em appropriately} chosen small components, computing estimates locally in
each of these components, and then producing a {\em good} global solution. We
show that if the underlying graph $G$ either excludes some finite-sized graph
as its minor (e.g., planar graphs) or has low doubling dimension (e.g., any
graph with {\em geometry}), then our algorithm produces solutions to both
questions to within {\em arbitrary accuracy}. We present a message-passing
implementation of our algorithm for MAP computation using the self-avoiding
walk tree of the graph. In order to evaluate the computational cost of this
implementation, we derive novel tight bounds on the size of the self-avoiding
walk tree of an arbitrary graph.
  As a consequence of our algorithmic result, we show that the normalized
log-partition function (also known as free energy) for a class of {\em regular}
MRFs converges to a limit that is computable to arbitrary accuracy.
|
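The self-avoiding walk tree mentioned in the abstract above can be enumerated directly for small graphs: each node of the SAW tree rooted at a vertex corresponds to one self-avoiding path starting there, so the tree's size is the number of such paths. A hypothetical sketch (names and graph representation are ours):

```python
def saw_tree_size(adj, root):
    """Count the nodes of the self-avoiding-walk (SAW) tree of the graph
    `adj` (dict: vertex -> list of neighbours) rooted at `root`.  Each
    tree node corresponds to one self-avoiding path starting at `root`."""
    def count(v, visited):
        total = 1  # the path ending at v is itself a node of the tree
        for u in adj[v]:
            if u not in visited:
                total += count(u, visited | {u})
        return total
    return count(root, frozenset({root}))
```

On a triangle there are five self-avoiding paths from any vertex (the trivial path, two single edges, and two paths of length two), so the SAW tree has five nodes.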
cs/0610112
|
On the Performance of Lossless Joint Source-Channel Coding Based on
Linear Codes
|
cs.IT math.IT
|
A general lossless joint source-channel coding scheme based on linear codes
is proposed and then analyzed in this paper. It is shown that a linear code
with a good joint spectrum can be used to establish limit-approaching joint
source-channel coding schemes for arbitrary sources and channels, where the
joint spectrum of the code is a generalization of the input-output weight
distribution.
|
cs/0610113
|
CHAC. A MOACO Algorithm for Computation of Bi-Criteria Military Unit
Path in the Battlefield
|
cs.MA cs.CC
|
In this paper we propose a Multi-Objective Ant Colony Optimization (MOACO)
algorithm called CHAC, which has been designed to solve the problem of finding
the path on a map (corresponding to a simulated battlefield) that minimizes
resources while maximizing safety. CHAC has been tested with two different
state transition rules: an aggregative function that combines the heuristic and
pheromone information of both objectives and a second one that is based on the
dominance concept of multiobjective optimization problems. These rules have
been evaluated in several different situations (maps with different degrees of
difficulty), and we have found that they yield better results than a greedy
algorithm (taken as a baseline), as well as military behaviour that is better
in the tactical sense. The aggregative function, in general, yields
better results than the one based on dominance.
|
cs/0610115
|
An Achievable Rate Region for the Gaussian Interference Channel
|
cs.IT math.IT
|
An achievable rate region for the Gaussian interference channel is derived
using Sato's modified frequency division multiplexing idea and a special case
of Han and Kobayashi's rate region (denoted by $\Gmat^\prime$). We show that
the new inner bound includes $\Gmat^\prime$, Sason's rate region $\Dmat$, as
well as the achievable region via TDM/FDM, as its subsets. The advantage of
this improved inner bound over $\Gmat^\prime$ arises due to its inherent
ability to utilize the whole transmit power range on the real line without
violating the power constraint. We also provide analysis to examine the
conditions for the new achievable region to strictly extend $\Gmat^\prime$.
|
cs/0610116
|
DepAnn - An Annotation Tool for Dependency Treebanks
|
cs.CL
|
DepAnn is an interactive annotation tool for dependency treebanks, providing
both graphical and text-based annotation interfaces. The tool is aimed at the
semi-automatic creation of treebanks. It aids the manual inspection and
correction of automatically created parses, making the annotation process
faster and less error-prone. A novel feature of the tool is that it enables the
user to view outputs from several parsers as the basis for creating the final
tree to be saved to the treebank. DepAnn uses TIGER-XML, an XML-based general
encoding format, both for representing the parser outputs and for saving the
annotated treebank. The tool includes an automatic consistency checker for
sentence structures. In addition, the tool enables users to build structures
manually, add comments on the annotations, modify the tagsets, and mark
sentences for further revision.
|
cs/0610118
|
Applying Part-of-Speech Enhanced LSA to Automatic Essay Grading
|
cs.IR cs.CL
|
Latent Semantic Analysis (LSA) is a widely used Information Retrieval method
based on the "bag-of-words" assumption. However, it is generally accepted that
syntax plays a role in representing the meaning of sentences. Thus, enhancing
LSA with part-of-speech (POS) information to capture the context of word
occurrences appears to be a theoretically feasible extension. The approach is
tested empirically on an automatic essay grading system using LSA for document
similarity comparisons. A comparison of several POS-enhanced LSA models is
reported. Our findings show that the addition of contextual information in the
form of POS tags can raise the accuracy of the LSA-based scoring models up to
10.77 per cent.
|
cs/0610120
|
Classdesc and Graphcode: support for scientific programming in C++
|
cs.MS cs.CE cs.DC
|
Object-oriented programming languages such as Java and Objective C have
become popular for implementing agent-based and other object-based simulations
since objects in those languages can {\em reflect} (i.e. make runtime queries
of an object's structure). This allows, for example, a fairly trivial {\em
serialisation} routine (conversion of an object into a binary representation
that can be stored or passed over a network) to be written. However C++ does
not offer this ability, as type information is thrown away at compile time. Yet
C++ is often a preferred development environment, whether for performance
reasons or for its expressive features such as operator overloading.
In scientific coding, changes to a model's code take place constantly, as the
model is refined and different phenomena are studied. Yet traditionally,
facilities such as checkpointing, routines for initialising model parameters
and analysis of model output depend on the underlying model remaining static,
otherwise each time a model is modified, a whole slew of supporting routines
needs to be changed to reflect the new data structures. Reflection offers the
advantage of the simulation framework adapting to the underlying model without
programmer intervention, reducing the effort of modifying the model.
In this paper, we present the {\em Classdesc} system, which brings many of the
benefits of object reflection to C++; {\em ClassdescMP}, which dramatically
simplifies the coding of MPI-based parallel programs; and {\em Graphcode}, a
general-purpose data-parallel programming environment.
|
cs/0610121
|
Construction algorithm for network error-correcting codes attaining the
Singleton bound
|
cs.IT cs.DM cs.NI math.IT
|
We give a centralized deterministic algorithm for constructing linear network
error-correcting codes that attain the Singleton bound of network
error-correcting codes. The proposed algorithm is based on the algorithm by
Jaggi et al. We give estimates on the time complexity and the required symbol
size of the proposed algorithm. We also estimate the probability that a random
choice of local encoding vectors by all intermediate nodes yields a network
error-correcting code attaining the Singleton bound. We also clarify the
relationship between robust network coding and network error-correcting codes
with known locations of errors.
|
cs/0610124
|
Dependency Treebanks: Methods, Annotation Schemes and Tools
|
cs.CL
|
In this paper, current dependency-based treebanks are introduced and analyzed.
The methods used for building the resources, the annotation schemes applied,
and the tools used (such as POS taggers, parsers and annotation software) are
discussed.
|
cs/0610126
|
Fitness Uniform Optimization
|
cs.NE cs.LG
|
In evolutionary algorithms, the fitness of a population increases with time
by mutating and recombining individuals and by a biased selection of more fit
individuals. The right selection pressure is critical in ensuring sufficient
optimization progress on the one hand and in preserving genetic diversity to be
able to escape from local optima on the other hand. Motivated by a universal
similarity relation on the individuals, we propose a new selection scheme,
which is uniform in the fitness values. It generates selection pressure toward
sparsely populated fitness regions, not necessarily toward higher fitness, as
is the case for all other selection schemes. We show analytically on a simple
example that the new selection scheme can be much more effective than standard
selection schemes. We also propose a new deletion scheme which achieves a
similar result via deletion and show how such a scheme preserves genetic
diversity more effectively than standard approaches. We compare the performance
of the new schemes to tournament selection and random deletion on an artificial
deceptive problem and a range of NP-hard problems: traveling salesman, set
covering and satisfiability.
|
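The fitness-uniform idea above is easy to sketch: rather than sampling individuals, sample a fitness level uniformly and return the individual nearest to it, which biases selection toward sparsely populated fitness regions. A minimal illustration (our sketch, not the authors' implementation):

```python
import random

def fuss_select(population, fitness):
    """Fitness Uniform Selection Scheme (sketch): draw a fitness level
    uniformly between the current worst and best fitness, then select the
    individual whose fitness is closest to that level.  Individuals in
    sparsely populated fitness regions are favoured, not necessarily the
    fittest ones."""
    fs = [fitness(x) for x in population]
    target = random.uniform(min(fs), max(fs))
    return min(population, key=lambda x: abs(fitness(x) - target))
```

With nine individuals of fitness 0..8 and a single one of fitness 100, the lone high-fitness individual is selected whenever the sampled level exceeds 54, i.e. roughly 46% of the time, even though it is only 10% of the population.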
cs/0610128
|
Hierarchical Bin Buffering: Online Local Moments for Dynamic External
Memory Arrays
|
cs.DS cs.DB
|
Local moments are used for local regression, to compute statistical measures
such as sums, averages, and standard deviations, and to approximate probability
distributions. We consider the case where the data source is a very large I/O
array of size n and we want to compute the first N local moments, for some
constant N. Without precomputation, this requires O(n) time. We develop a
sequence of algorithms of increasing sophistication that use precomputation and
additional buffer space to speed up queries. The simpler algorithms partition
the I/O array into consecutive ranges called bins, and they are applicable not
only to local-moment queries, but also to algebraic queries (MAX, AVERAGE, SUM,
etc.). With N buffers of size sqrt{n}, time complexity drops to O(sqrt n). A
more sophisticated approach uses hierarchical buffering and has a logarithmic
time complexity (O(b log_b n)), when using N hierarchical buffers of size n/b.
Using Overlapped Bin Buffering, we show that only a single buffer is needed, as
with wavelet-based algorithms, but using much less storage. Applications exist
in multidimensional and statistical databases over massive data sets,
interactive image processing, and visualization.
|
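The simplest scheme in the abstract above, a single level of bins of size about sqrt(n), can be sketched for SUM queries; the class and method names here are our own invention:

```python
from math import isqrt

class BinBuffer:
    """One-level bin buffering (sketch): partition an array of size n into
    bins of size ~sqrt(n) and buffer each bin's sum.  A prefix-sum query
    touches the buffered bin sums plus at most one partial bin, O(sqrt n)
    work; a point update touches one data cell and one bin sum, O(1)."""

    def __init__(self, data):
        self.data = list(data)
        self.b = max(1, isqrt(len(self.data)))
        self.bins = [sum(self.data[i:i + self.b])
                     for i in range(0, len(self.data), self.b)]

    def update(self, i, value):
        self.bins[i // self.b] += value - self.data[i]
        self.data[i] = value

    def prefix_sum(self, k):
        """Sum of data[0:k]."""
        full, _ = divmod(k, self.b)
        return sum(self.bins[:full]) + sum(self.data[full * self.b:k])
```

A range sum SUM(i, j) is then just `prefix_sum(j) - prefix_sum(i)`, and the same layout extends to the other algebraic queries (MAX, AVERAGE) by buffering the corresponding aggregate per bin.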
cs/0610129
|
Community Detection in Complex Networks Using Agents
|
cs.MA cs.CY
|
Community structure identification has been one of the most popular research
areas in recent years due to its applicability to a wide range of disciplines.
Many algorithms have been proposed so far to detect communities in varied
settings. However, most of them still have some drawbacks to be addressed. In
this paper, we present an agent-based community detection algorithm. The
algorithm, which is stochastic, makes use of agents by forcing them to perform
biased moves in a smart way. Using the information collected during the
traversals of these agents in the network, the network structure is revealed.
The network modularity is also used for determining the number of communities.
Our algorithm removes the need for prior knowledge about the network, such as
the number of communities or any threshold values. Furthermore, a definite
community structure is provided as a result, instead of structures that
require further processing. In addition, the computational and time costs are
kept low by using thread-like working agents. The algorithm is tested on three
networks of different types and sizes: the Zachary karate club, college
football, and political books networks. For all three networks, the real
network structures are identified in almost every run.
|
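The network modularity used above to determine the number of communities is Newman's Q = (1/2m) * sum over ij of (A_ij - k_i*k_j/2m) * [c_i == c_j]. A small self-contained sketch (graph representation and names are ours):

```python
def modularity(adj, communities):
    """Newman modularity Q of a partition of an undirected graph.
    `adj` maps vertex -> list of neighbours; `communities` maps
    vertex -> community label.  Q = (1/2m) * sum_{ij} (A_ij -
    k_i*k_j/(2m)) * [c_i == c_j], summed over ordered pairs."""
    m2 = sum(len(vs) for vs in adj.values())  # 2m for an undirected graph
    deg = {v: len(vs) for v, vs in adj.items()}
    q = 0.0
    for v, vs in adj.items():
        for u in adj:
            a = 1 if u in vs else 0
            if communities[v] == communities[u]:
                q += a - deg[v] * deg[u] / m2
    return q / m2
```

For example, two disjoint triangles labelled as two communities give Q = 0.5.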
cs/0610130
|
On Bounds for $E$-capacity of DMC
|
cs.IT math.IT
|
Random coding, expurgated, and sphere packing bounds are derived by the method
of types and the method of graph decomposition for the $E$-capacity of the
discrete memoryless channel (DMC). Three decoding rules are considered; the
random coding bound is attainable by each of the three rules, but the
expurgated bound is achievable only by maximum-likelihood decoding. The sphere
packing bound is obtained by very simple combinatorial reasoning based on the
method of types. The paper unifies and reviews results of previous
publications.
|
cs/0610132
|
List Decoding of Hermitian Codes using Groebner Bases
|
cs.IT cs.SC math.IT
|
List decoding of Hermitian codes is reformulated to allow an efficient and
simple algorithm for the interpolation step. The algorithm is developed using
the theory of Groebner bases of modules. The computational complexity of the
algorithm seems comparable to previously known algorithms achieving the same
task, and the algorithm is better suited for hardware implementation.
|
cs/0610138
|
Why block length and delay behave differently if feedback is present
|
cs.IT math.IT
|
For output-symmetric DMCs at even moderately high rates, fixed-block-length
communication systems show no improvements in their error exponents with
feedback. In this paper, we study systems with fixed end-to-end delay and show
that feedback generally provides dramatic gains in the error exponents.
A new upper bound (the uncertainty-focusing bound) is given on the
probability of symbol error in a fixed-delay communication system with
feedback. This bound turns out to have a similar form to Viterbi's bound used
for the block error probability of convolutional codes as a function of the
fixed constraint length. The uncertainty-focusing bound is shown to be
asymptotically achievable with noiseless feedback for erasure channels as well
as any output-symmetric DMC that has strictly positive zero-error capacity.
Furthermore, it can be achieved in a delay-universal (anytime) fashion even if
the feedback itself is delayed by a small amount. Finally, it is shown that for
end-to-end delay, it is generally possible at high rates to beat the
sphere-packing bound for general DMCs -- thereby providing a counterexample to
a conjecture of Pinsker.
|
cs/0610139
|
How to beat the sphere-packing bound with feedback
|
cs.IT math.IT
|
The sphere-packing bound $E_{sp}(R)$ bounds the reliability function for
fixed-length block-codes. For symmetric channels, it remains a valid bound even
when strictly causal noiseless feedback is allowed from the decoder to the
encoder. To beat the bound, the problem must be changed. While it has long been
known that variable-length block codes can do better when trading-off error
probability with expected block-length, this correspondence shows that the {\em
fixed-delay} setting also presents such an opportunity for generic channels.
While $E_{sp}(R)$ continues to bound the tradeoff between bit error and fixed
end-to-end latency for symmetric channels used {\em without} feedback, a new
bound called the ``focusing bound'' gives the limits on what can be done with
feedback. If low-rate reliable flow-control is free (i.e., the noisy channel has
strictly positive zero-error capacity), then the focusing bound can be
asymptotically achieved. Even when the channel has no zero-error capacity, it
is possible to substantially beat the sphere-packing bound by synthesizing an
appropriately reliable channel to carry the flow-control information.
|
cs/0610140
|
Constant for associative patterns ensemble
|
cs.AI
|
A procedure for creating an ensemble of associative patterns, formulated in
terms of formal logic using a neural network (NN) model, is presented. It is
shown that the set of associative patterns is created by a unique NN procedure
having individual parameters of input-stimulus transformation. It is
ascertained that the number of selected associative patterns is a constant.
|
cs/0610141
|
Stabilization using both noisy and noiseless feedback
|
cs.IT math.IT
|
When designing a distributed control system, the system designer has a choice
in how to connect the different units through communication channels. In
practice, noiseless and noisy channels may coexist. Using the standard toy
example of scalar stabilization, this paper shows how a small amount of
noiseless feedback can perform a ``supervisory'' role and thereby boost the
effectiveness of noisy feedback.
|
cs/0610142
|
Coding into a source: a direct inverse Rate-Distortion theorem
|
cs.IT math.IT
|
Shannon proved that if we can transmit bits reliably at rates larger than the
rate distortion function $R(D)$, then we can transmit the source to within a
distortion $D$. We answer the converse question ``If we can transmit a source
to within a distortion $D$, can we transmit bits reliably at rates less than
the rate distortion function?'' in the affirmative. This can be viewed as a
direct converse of the rate distortion theorem.
|
cs/0610143
|
Source coding and channel requirements for unstable processes
|
cs.IT math.IT
|
Our understanding of information in systems has been based on the foundation
of memoryless processes. Extensions to stable Markov and auto-regressive
processes are classical. Berger proved a source coding theorem for the
marginally unstable Wiener process, but the infinite-horizon exponentially
unstable case has been open since Gray's 1970 paper. There were also no
theorems showing what is needed to communicate such processes across noisy
channels.
In this work, we give a fixed-rate source-coding theorem for the
infinite-horizon problem of coding an exponentially unstable Markov process.
The encoding naturally results in two distinct bitstreams that have
qualitatively different QoS requirements for communicating over a noisy medium.
The first stream captures the information that is accumulating within the
nonstationary process and requires sufficient anytime reliability from the
channel used to communicate the process. The second stream captures the
historical information that dissipates within the process and is essentially
classical. This historical information can also be identified with a natural
stable counterpart to the unstable process. A converse demonstrating the
fundamentally layered nature of unstable sources is given by means of
information-embedding ideas.
|
cs/0610144
|
Lossless coding for distributed streaming sources
|
cs.IT math.IT
|
Distributed source coding is traditionally viewed in the block coding context
-- all the source symbols are known in advance at the encoders. This paper
instead considers a streaming setting in which iid source symbol pairs are
revealed to the separate encoders in real time and need to be reconstructed at
the decoder with some tolerable end-to-end delay using finite rate noiseless
channels. A sequential random binning argument is used to derive a lower bound
on the error exponent with delay and show that both ML decoding and universal
decoding achieve the same positive error exponents inside the traditional
Slepian-Wolf rate region. The error events are different from the block-coding
error events and give rise to slightly different exponents. Because the
sequential random binning scheme is also universal over delays, the resulting
code eventually reconstructs every source symbol correctly with probability 1.
|
cs/0610145
|
A Simple Converse of Burnashev's Reliability
|
cs.IT math.IT
|
In a remarkable paper published in 1976, Burnashev determined the reliability
function of variable-length block codes over discrete memoryless channels with
feedback. Subsequently, an alternative achievability proof was obtained by
Yamamoto and Itoh via a particularly simple and instructive scheme. Their idea
is to alternate between a communication and a confirmation phase until the
receiver detects the codeword used by the sender to acknowledge that the
message is correct. We provide a converse that parallels the Yamamoto-Itoh
achievability construction. Besides being simpler than the original, the
proposed converse suggests that a communication and a confirmation phase are
implicit in any scheme for which the probability of error decreases with the
largest possible exponent. The proposed converse also makes it intuitively
clear why the terms that appear in Burnashev's exponent are necessary.
|
cs/0610146
|
The necessity and sufficiency of anytime capacity for stabilization of a
linear system over a noisy communication link, Part II: vector systems
|
cs.IT math.IT
|
In part I, we reviewed how Shannon's classical notion of capacity is not
sufficient to characterize a noisy communication channel if the channel is
intended to be used as part of a feedback loop to stabilize an unstable scalar
linear system. While classical capacity is not enough, a sense of capacity
(parametrized by reliability) called "anytime capacity" is both necessary and
sufficient for channel evaluation in this context. The rate required is the log
of the open-loop system gain and the required reliability comes from the
desired sense of stability. Sufficiency is maintained even in cases with noisy
observations and without any explicit feedback between the observer and the
controller. This established the asymptotic equivalence between scalar
stabilization problems and delay-universal communication problems with
feedback.
Here in part II, the vector-state generalizations are established and it is
the magnitudes of the unstable eigenvalues that play an essential role. To deal
with such systems, the concept of the anytime rate-region is introduced. This
is the region of rates that the channel can support while still meeting
potentially different anytime reliability targets for parallel message streams.
All the scalar results generalize on an eigenvalue by eigenvalue basis. When
there is no explicit feedback of the noisy channel outputs, the intrinsic delay
of the unstable system tells us what the feedback delay needs to be while
evaluating the anytime-rate-region for the channel. An example involving a
binary erasure channel is used to illustrate how differentiated service is
required in any separation-based control architecture.
|
cs/0610148
|
Decoder Error Probability of MRD Codes
|
cs.IT math.IT
|
In this paper, we first introduce the concept of elementary linear subspace,
which has similar properties to those of a set of coordinates. Using this new
concept, we derive properties of maximum rank distance (MRD) codes that
parallel those of maximum distance separable (MDS) codes. Using these
properties, we show that the decoder error probability of MRD codes with error
correction capability t decreases exponentially with t^2 based on the
assumption that all errors with the same rank are equally likely. We argue that
the channel based on this assumption is an approximation of a channel corrupted
by crisscross errors.
|
cs/0610150
|
On LAO Testing of Multiple Hypotheses for Many Independent Objects
|
cs.IT math.IT
|
The problem of many hypotheses logarithmically asymptotically optimal (LAO)
testing for a model consisting of three or more independent objects is solved.
It is supposed that $M$ probability distributions are known and that each
object, independently of the others, follows one of them. The matrix of asymptotic
interdependencies (reliability--reliability functions) of all possible pairs of
the error probability exponents (reliabilities) in optimal testing for this
model is studied.
This problem was introduced (and solved for the case of two objects and two
given probability distributions) by Ahlswede and Haroutunian. The model with
two independent objects with $M$ hypotheses was explored by Haroutunian and
Hakobyan.
|
cs/0610151
|
Anytime coding on the infinite bandwidth AWGN channel: A sequential
semi-orthogonal optimal code
|
cs.IT math.IT
|
It is well known that orthogonal coding can be used to approach the Shannon
capacity of the power-constrained AWGN channel without a bandwidth constraint.
This correspondence describes a semi-orthogonal variation of pulse position
modulation that is sequential in nature -- bits can be ``streamed across''
without having to buffer up blocks of bits at the transmitter. ML decoding
results in an exponentially small probability of error as a function of
tolerated receiver delay and thus eventually a zero probability of error on
every transmitted bit. In the high-rate regime, a matching upper bound is given
on the delay error exponent. We close with some comments on the case with
feedback and the connections to the capacity per unit cost problem.
|
cs/0610153
|
Most Programs Stop Quickly or Never Halt
|
cs.IT math.IT
|
Since many real-world problems arising in the fields of compiler
optimisation, automated software engineering, formal proof systems, and so
forth are equivalent to the Halting Problem--the most notorious undecidable
problem--there is a growing interest, not only academically, in understanding
the problem better and in providing alternative solutions. Halting computations
can be recognised by simply running them; the main difficulty is to detect
non-halting programs. Our approach is to have the probability space extend over
both space and time and to consider the probability that a random $N$-bit
program has halted by a random time. We postulate an a priori computable
probability distribution on all possible runtimes and we prove that given an
integer k>0, we can effectively compute a time bound T such that the
probability that an N-bit program will eventually halt given that it has not
halted by T is smaller than 2^{-k}. We also show that the set of halting
programs (which is computably enumerable, but not computable) can be written as
a disjoint union of a computable set and a set of effectively vanishing
probability. Finally, we show that ``long'' runtimes are effectively rare. More
formally, the set of times at which an N-bit program can stop after the time
2^{N+constant} has effectively zero density.
|
cs/0610155
|
Nonlinear Estimators and Tail Bounds for Dimension Reduction in $l_1$
Using Cauchy Random Projections
|
cs.DS cs.IR cs.LG
|
For dimension reduction in $l_1$, the method of {\em Cauchy random
projections} multiplies the original data matrix $\mathbf{A}
\in\mathbb{R}^{n\times D}$ with a random matrix $\mathbf{R} \in
\mathbb{R}^{D\times k}$ ($k\ll\min(n,D)$) whose entries are i.i.d. samples of
the standard Cauchy C(0,1). Because of the impossibility results, one cannot
hope to recover the pairwise $l_1$ distances in $\mathbf{A}$ from $\mathbf{B} =
\mathbf{AR} \in \mathbb{R}^{n\times k}$, using linear estimators without
incurring large errors. However, nonlinear estimators are still useful for
certain applications in data stream computation, information retrieval,
learning, and data mining.
We propose three types of nonlinear estimators: the bias-corrected sample
median estimator, the bias-corrected geometric mean estimator, and the
bias-corrected maximum likelihood estimator. The sample median estimator and
the geometric mean estimator are asymptotically (as $k\to \infty$) equivalent
but the latter is more accurate at small $k$. We derive explicit tail bounds
for the geometric mean estimator and establish an analog of the
Johnson-Lindenstrauss (JL) lemma for dimension reduction in $l_1$, which is
weaker than the classical JL lemma for dimension reduction in $l_2$.
Asymptotically, both the sample median estimator and the geometric mean
estimator are about 80% efficient compared to the maximum likelihood estimator
(MLE). We analyze the moments of the MLE and propose approximating the
distribution of the MLE by an inverse Gaussian.
|
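The bias-corrected geometric mean estimator above admits a compact sketch. For a Cauchy variable with scale d, E|C(0,d)|^{1/k} = d^{1/k}/cos(pi/(2k)) for k > 1, so the geometric mean of the k projected coordinates is debiased by the factor cos(pi/(2k))^k. The code below is our illustration (stdlib only; function name is ours), not the paper's implementation:

```python
import math
import random

def geometric_mean_estimate(a, b, k=2000, seed=1):
    """Bias-corrected geometric mean estimator (sketch) of ||a - b||_1
    under Cauchy random projections.  Each projection y_j is the inner
    product of (a - b) with a column of iid standard Cauchy entries,
    generated via the inverse CDF tan(pi*(U - 1/2)); the product of
    |y_j|^{1/k} is debiased by cos(pi/(2k))^k."""
    rng = random.Random(seed)
    D = len(a)
    corr = math.cos(math.pi / (2 * k)) ** k
    est = 1.0
    for _ in range(k):
        y = sum((a[i] - b[i]) * math.tan(math.pi * (rng.random() - 0.5))
                for i in range(D))
        est *= abs(y) ** (1.0 / k)
    return corr * est
```

For two 10-dimensional vectors at l1 distance 10, the estimate with k = 2000 projections typically lands close to 10; the same projected data cannot recover the distance with any linear estimator, which is the point of the abstract's impossibility remark.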
cs/0610156
|
Adaptation Knowledge Discovery from a Case Base
|
cs.AI
|
In case-based reasoning, the adaptation step depends in general on
domain-dependent knowledge, which motivates studies on adaptation knowledge
acquisition (AKA). CABAMAKA is an AKA system based on principles of knowledge
discovery from databases. This system explores the variations within the case
base to elicit adaptation knowledge. It has been successfully tested in an
application of case-based decision support to breast cancer treatment.
|
cs/0610158
|
Considering users' behaviours in improving the responses of an
information base
|
cs.LG cs.IR
|
In this paper, our aim is to propose a model that helps users make efficient
use of an information system (IS), within the organization represented by the
IS, in order to resolve their decisional problems. In other words, we want to
help the user within an organization obtain the information that corresponds
to his needs (informational needs that result from his decisional problems).
We refer to this type of information system as an economic intelligence (EI)
system because it supports the economic intelligence processes of the
organisation. Our assumption is that every EI process begins with the
identification of the decisional problem, which is translated into an
informational need. This need is then translated into one or many information
search problems (ISP). We also assume that an ISP is expressed in terms of the
user's expectations and that these expectations determine the activities or
behaviours of the user when he/she uses an IS. The model we propose is used in
the design of the IS so that the process of retrieving solutions, or the
responses given by the system to an ISP, is based on these behaviours and
corresponds to the needs of the user.
|
cs/0610159
|
Boolean Functions, Projection Operators and Quantum Error Correcting
Codes
|
cs.IT math.IT quant-ph
|
This paper describes a fundamental correspondence between Boolean functions
and projection operators in Hilbert space. The correspondence is widely
applicable, and it is used in this paper to provide a common mathematical
framework for the design of both additive and non-additive quantum error
correcting codes. The new framework leads to the construction of a variety of
codes including an infinite class of codes that extend the original ((5,6,2))
code found by Rains [21]. It also extends to operator quantum error correcting
codes.
|
cs/0610160
|
A Non-Orthogonal Distributed Space-Time Coded Protocol Part II-Code
Construction and DM-G Tradeoff
|
cs.IT math.IT
|
This is the second part of a two-part series of papers. In this paper, for
the generalized non-orthogonal amplify and forward (GNAF) protocol presented in
Part-I, a construction of a new family of distributed space-time codes based on
Co-ordinate Interleaved Orthogonal Designs (CIOD) which result in reduced
Maximum Likelihood (ML) decoding complexity at the destination is proposed.
Further, it is established that the recently proposed Toeplitz space-time codes
as well as space-time block codes (STBCs) from cyclic division algebras can be
used in the GNAF protocol. Finally, a lower bound on the optimal
Diversity-Multiplexing Gain (DM-G) tradeoff for the GNAF protocol is
established, and it is shown that this bound approaches the transmit diversity
bound asymptotically as the number of relays and the number of channel uses
increase.
|
cs/0610161
|
A Non-Orthogonal Distributed Space-Time Coded Protocol Part I: Signal
Model and Design Criteria
|
cs.IT math.IT
|
In this two-part series of papers, a generalized non-orthogonal amplify and
forward (GNAF) protocol which generalizes several known cooperative diversity
protocols is proposed. Transmission in the GNAF protocol comprises two phases -
the broadcast phase and the cooperation phase. In the broadcast phase, the
source broadcasts its information to the relays as well as the destination. In
the cooperation phase, the source and the relays together transmit a space-time
code in a distributed fashion. The GNAF protocol relaxes the constraints
imposed by the protocol of Jing and Hassibi on the code structure. In Part-I of
this paper, code design criteria are obtained and it is shown that the GNAF
protocol is both delay efficient and coding gain efficient. Moreover, the GNAF
protocol enables the use of sphere decoders at the destination with a
non-exponential Maximum Likelihood (ML) decoding complexity. In Part-II,
several low decoding complexity code constructions are studied and a lower
bound on the Diversity-Multiplexing Gain tradeoff of the GNAF protocol is
obtained.
|
cs/0610162
|
Multigroup-Decodable STBCs from Clifford Algebras
|
cs.IT math.IT
|
A Space-Time Block Code (STBC) in $K$ symbols (variables) is called a $g$-group
decodable STBC if its maximum-likelihood decoding metric can be written as a
sum of $g$ terms such that each term is a function of a subset of the $K$
variables and each variable appears in only one term. In this paper we provide
a general structure for the weight matrices of multi-group decodable codes
using Clifford algebras. Without assuming the number of variables in each group
to be the same, a method of explicitly constructing the weight matrices of
full-diversity, delay-optimal $g$-group decodable codes is presented for an
arbitrary number of antennas. For the special case of $N_t=2^a$ we construct
two subclasses of codes: (i) a class of $2a$-group decodable codes with rate
$\frac{a}{2^{(a-1)}}$, which is, equivalently, a class of Single-Symbol
Decodable codes, and (ii) a class of $(2a-2)$-group decodable codes with rate
$\frac{(a-1)}{2^{(a-2)}}$, i.e., a class of Double-Symbol Decodable codes.
Simulation results show that the DSD codes of this paper perform better than
previously known Quasi-Orthogonal Designs.
|
cs/0610165
|
Decentralized Failure Diagnosis of Stochastic Discrete Event Systems
|
cs.AI
|
Recently, the diagnosability of {\it stochastic discrete event systems}
(SDESs) was investigated in the literature, and the failure diagnosis
considered was {\it centralized}. In this paper, we propose an approach to {\it
decentralized} failure diagnosis of SDESs, where the stochastic system uses
multiple local diagnosers to detect failures and each local diagnoser possesses
its own information. In a way, the centralized failure diagnosis of SDESs can
be viewed as a special case of the decentralized failure diagnosis presented in
this paper with only one projection. The main contributions are as follows: (1)
We formalize the notion of codiagnosability for stochastic automata, which
means that a failure can be detected by at least one local stochastic diagnoser
within a finite delay. (2) We construct a codiagnoser from a given stochastic
automaton with multiple projections, and the codiagnoser associated with the
local diagnosers is used to test the codiagnosability condition of SDESs. (3)
We deal with a number of basic properties of the codiagnoser. In particular, a
necessary and sufficient condition for the codiagnosability of SDESs is
presented. (4) We give a detailed method for checking whether codiagnosability
is violated. (5) Some examples are described to illustrate the applications of
codiagnosability and its checking method.
|
cs/0610167
|
ECA-RuleML: An Approach combining ECA Rules with temporal interval-based
KR Event/Action Logics and Transactional Update Logics
|
cs.AI cs.LO cs.MA cs.SE
|
An important problem to be addressed within Event-Driven Architecture (EDA)
is how to correctly and efficiently capture and process the event/action-based
logic. This paper endeavors to bridge the gap between the Knowledge
Representation (KR) approaches based on durable events/actions and such
formalisms as event calculus, on one hand, and event-condition-action (ECA)
reaction rules extending the approach of active databases that view events as
instantaneous occurrences and/or sequences of events, on the other. We propose
a formalism based on reaction rules (ECA rules) and a novel interval-based
event logic, and present a concrete RuleML-based syntax, semantics and
implementation.
We further evaluate this approach theoretically, experimentally and on an
example derived from common industry use cases and illustrate its benefits.
|
cs/0610169
|
On the User Selection for MIMO Broadcast Channels
|
cs.IT math.IT
|
In this paper, a downlink communication system, in which a Base Station (BS)
equipped with $M$ antennas communicates with $N$ users each equipped with $K$
receive antennas, is considered. An efficient suboptimum algorithm is proposed
for selecting a set of users in order to maximize the sum-rate throughput of
the system. For the asymptotic case when $N$ tends to infinity, the necessary
and sufficient conditions for achieving the maximum sum-rate throughput, such
that the difference between the achievable sum-rate and the maximum value
approaches zero, are derived. The complexity of our algorithm is investigated in
terms of the required amount of feedback from the users to the base station, as
well as the number of searches required for selecting the users. It is shown
that the proposed method is capable of achieving a large portion of the
sum-rate capacity, with a very low complexity.
|
cs/0610170
|
Low-complexity modular policies: learning to play Pac-Man and a new
framework beyond MDPs
|
cs.LG cs.AI
|
In this paper we propose a method that learns to play Pac-Man. We define a
set of high-level observation and action modules. Actions are temporally
extended, and multiple action modules may be in effect concurrently. A decision
of the agent is represented as a rule-based policy. For learning, we apply the
cross-entropy method, a recent global optimization algorithm. The learned
policies reached better scores than the hand-crafted policy, and neared the
score of average human players. We argue that learning is successful mainly
because (i) the policy space includes combinations of individual actions and is
thus sufficiently rich, and (ii) the search is biased towards low-complexity
policies, and low-complexity solutions can be found quickly if they exist. Based
on these principles, we formulate a new theoretical framework, which can be
found in the Appendix as supporting material.
|
cs/0610175
|
DSmT: A new paradigm shift for information fusion
|
cs.AI
|
The management and combination of uncertain, imprecise, fuzzy and even
paradoxical or highly conflicting sources of information has always been, and
still remains, of primary importance for the development of reliable
information fusion systems. In this short survey paper, we present the theory
of plausible and paradoxical reasoning, known in the literature as DSmT
(Dezert-Smarandache Theory), developed for dealing with imprecise, uncertain
and potentially highly conflicting sources of information. DSmT is a new
paradigm shift for information fusion, and recent publications have shown the
interest and the potential of DSmT to solve fusion problems where Dempster's
rule, used in Dempster-Shafer Theory (DST), provides counter-intuitive results
or fails to provide useful results at all. This paper is focused on the
foundations of DSmT and on its main rules of combination (classic, hybrid and
Proportional Conflict Redistribution rules). Shafer's model, on which DST is
based, appears as a particular and specific case of the DSm hybrid model, which
can easily be handled by DSmT as well. Several simple but illustrative examples
are given throughout this paper to show the interest and the generality of this
new theory.
|
cs/0611002
|
Lattice Quantization with Side Information: Codes, Asymptotics, and
Applications in Sensor Networks
|
cs.IT math.IT
|
We consider the problem of rate/distortion with side information available
only at the decoder. For the case of jointly-Gaussian source X and side
information Y, and mean-squared error distortion, Wyner proved in 1976 that the
rate/distortion function for this problem is identical to the conditional
rate/distortion function R_{X|Y}, assuming the side information Y is available
at the encoder. In this paper we construct a structured class of asymptotically
optimal quantizers for this problem: under the assumption of high correlation
between source X and side information Y, we show there exist quantizers within
our class whose performance comes arbitrarily close to Wyner's bound. As an
application illustrating the relevance of the high-correlation asymptotics, we
also explore the use of these quantizers in the context of a problem of data
compression for sensor networks, in a setup involving a large number of devices
collecting highly correlated measurements within a confined area. An important
feature of our formulation is that, although the per-node throughput of the
network tends to zero as network size increases, so does the amount of
information generated by each transmitter. This is a situation likely to be
encountered often in practice, which allows us to cast in a new -- and more
``optimistic'' -- light some negative results on the transport capacity of
large-scale wireless networks.
|
cs/0611003
|
A Scalable Protocol for Cooperative Time Synchronization Using Spatial
Averaging
|
cs.NI cs.IT math.IT
|
Time synchronization is an important aspect of sensor network operation.
However, it is well known that synchronization error accumulates over multiple
hops. This presents a challenge for large-scale, multi-hop sensor networks with
a large number of nodes distributed over wide areas. In this work, we present a
protocol that uses spatial averaging to reduce error accumulation in
large-scale networks. We provide an analysis to quantify the synchronization
improvement achieved using spatial averaging and find that in a basic
cooperative network, the skew and offset variance decrease approximately as
$1/\bar{N}$ where $\bar{N}$ is the number of cooperating nodes. For general
networks, simulation results and a comparison to basic cooperative network
results are used to illustrate the improvement in synchronization performance.
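The $1/\bar{N}$ scaling above is the familiar variance reduction from averaging independent estimates. As a hedged illustration (a toy model, not the paper's protocol: each of $\bar{N}$ cooperating nodes is assumed to contribute an independent noisy observation of a common clock offset), the scaling can be checked numerically:

```python
import random

def offset_variance(n_nodes, n_trials=2000, noise_std=1.0, seed=0):
    """Empirical variance of a clock-offset estimate obtained by
    averaging n_nodes independent noisy observations of the offset."""
    rng = random.Random(seed)
    true_offset = 5.0
    estimates = []
    for _ in range(n_trials):
        obs = [true_offset + rng.gauss(0.0, noise_std) for _ in range(n_nodes)]
        estimates.append(sum(obs) / n_nodes)
    mean = sum(estimates) / n_trials
    return sum((e - mean) ** 2 for e in estimates) / n_trials
```

With unit noise variance, the estimate's variance drops from roughly 1 for a single node towards roughly 1/16 for sixteen cooperating nodes, consistent with the $1/\bar{N}$ behaviour.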
|
cs/0611006
|
Evolving controllers for simulated car racing
|
cs.NE cs.LG cs.RO
|
This paper describes the evolution of controllers for racing a simulated
radio-controlled car around a track, modelled on a real physical track. Five
different controller architectures were compared, based on neural networks,
force fields and action sequences. The controllers use either egocentric (first
person), Newtonian (third person) or no information about the state of the car
(open-loop controller). The only controller that was able to evolve good racing
behaviour was based on a neural network acting on egocentric inputs.
|
cs/0611007
|
MIMO Multichannel Beamforming: SER and Outage Using New Eigenvalue
Distributions of Complex Noncentral Wishart Matrices
|
cs.IT math.IT
|
This paper analyzes MIMO systems with multichannel beamforming in Ricean
fading. Our results apply to a wide class of multichannel systems which
transmit on the eigenmodes of the MIMO channel. We first present new
closed-form expressions for the marginal ordered eigenvalue distributions of
complex noncentral Wishart matrices. These are used to characterize the
statistics of the signal to noise ratio (SNR) on each eigenmode. Based on this,
we present exact symbol error rate (SER) expressions. We also derive
closed-form expressions for the diversity order, array gain, and outage
probability. We show that the global SER performance is dominated by the
subchannel corresponding to the minimum channel singular value. We also show
that, at low outage levels, the outage probability varies inversely with the
Ricean K-factor for cases where transmission is only on the most dominant
subchannel (i.e. a single-channel beamforming system). Numerical results are
presented to validate the theoretical analysis.
|
cs/0611009
|
Efficient constraint propagation engines
|
cs.AI cs.PL
|
This paper presents a model and implementation techniques for speeding up
constraint propagation. Three fundamental approaches to improving constraint
propagation based on propagators as implementations of constraints are
explored: keeping track of which propagators are at fixpoint, choosing which
propagator to apply next, and combining several propagators for the same
constraint. We show how idempotence reasoning and events help track fixpoints
more accurately. We improve these methods by using them dynamically (taking
into account current domains to improve accuracy). We define priority-based
approaches to choosing a next propagator and show that dynamic priorities can
improve propagation. We illustrate that the use of multiple propagators for the
same constraint can be advantageous with priorities, and introduce staged
propagators that combine the effects of multiple propagators with priorities
for greater efficiency.
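As a hedged sketch of these ideas (a toy engine, not the paper's implementation: `less` is a hypothetical idempotent propagator over finite set domains, and priorities are plain integers), a priority-queue-driven loop that avoids re-queueing propagators known to be at fixpoint might look like:

```python
import heapq

def less(a, b, priority=0):
    """Hypothetical propagator enforcing a < b over nonempty set domains.
    Returns the set of variables whose domains it pruned."""
    def prop(dom):
        changed = set()
        new_a = {v for v in dom[a] if v < max(dom[b])}
        if new_a != dom[a]:
            dom[a] = new_a
            changed.add(a)
        new_b = {v for v in dom[b] if v > min(dom[a])}
        if new_b != dom[b]:
            dom[b] = new_b
            changed.add(b)
        return changed
    return (priority, prop, {a, b})

def propagate(domains, propagators):
    """Run propagators to a common fixpoint. A priority queue picks the
    cheapest pending propagator next; a propagator already queued is not
    queued twice, and (since these propagators are idempotent) one is
    only re-run when a variable it watches changes."""
    queue, queued = [], set()

    def schedule(i):
        if i not in queued:
            queued.add(i)
            heapq.heappush(queue, (propagators[i][0], i))

    for i in range(len(propagators)):
        schedule(i)
    while queue:
        _, i = heapq.heappop(queue)
        queued.discard(i)
        changed = propagators[i][1](domains)
        if changed:
            # event-based wakeup: only propagators watching a changed
            # variable are rescheduled
            for j, (_, _, watched) in enumerate(propagators):
                if j != i and watched & changed:
                    schedule(j)
    return domains
```

Here each propagator reports which variables it changed, so only propagators watching a changed variable are woken, which is the event-based wakeup the abstract refers to.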
|
cs/0611010
|
On the structure of generalized toric codes
|
cs.IT math.IT
|
Toric codes are obtained by evaluating rational functions of a nonsingular
toric variety at the algebraic torus. One can extend toric codes to the
so-called generalized toric codes. This extension consists of evaluating
elements of an arbitrary polynomial algebra at the algebraic torus, instead of
a linear combination of monomials whose exponents are rational points of a
convex polytope. We study their multicyclic and metric structure, and we use it
to express their duals and to estimate their minimum distance.
|
cs/0611011
|
Hedging predictions in machine learning
|
cs.LG
|
Recent advances in machine learning make it possible to design efficient
prediction algorithms for data sets with huge numbers of parameters. This paper
describes a new technique for "hedging" the predictions output by many such
algorithms, including support vector machines, kernel ridge regression, kernel
nearest neighbours, and by many other state-of-the-art methods. The hedged
predictions for the labels of new objects include quantitative measures of
their own accuracy and reliability. These measures are provably valid under the
assumption of randomness, traditional in machine learning: the objects and
their labels are assumed to be generated independently from the same
probability distribution. In particular, it becomes possible to control (up to
statistical fluctuations) the number of erroneous predictions by selecting a
suitable confidence level. Validity being achieved automatically, the remaining
goal of hedged prediction is efficiency: taking full account of the new
objects' features and other available information to produce as accurate
predictions as possible. This can be done successfully using the powerful
machinery of modern machine learning.
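One standard way to attach such quantitative measures of reliability to individual predictions is via a conformal-style p-value. The sketch below is a generic illustration, not necessarily the paper's construction: the one-dimensional data and the nearest-neighbour nonconformity score are illustrative assumptions.

```python
def nn_nonconformity(x, label, pool):
    """Nonconformity score: distance to the nearest same-label example
    divided by distance to the nearest other-label example
    (higher = stranger). Assumes distinct x values across labels."""
    same = [abs(x - xi) for xi, yi in pool if yi == label]
    other = [abs(x - xi) for xi, yi in pool if yi != label]
    if not same:
        return float("inf")
    if not other:
        return 0.0
    return min(same) / min(other)

def conformal_pvalue(train, x_new, y_try):
    """p-value for the hypothesis that (x_new, y_try) is exchangeable
    with the training examples: the fraction of examples (including the
    new one) that look at least as strange as the new one."""
    aug = train + [(x_new, y_try)]
    scores = []
    for i, (xi, yi) in enumerate(aug):
        pool = aug[:i] + aug[i + 1:]
        scores.append(nn_nonconformity(xi, yi, pool))
    return sum(1 for s in scores if s >= scores[-1]) / len(scores)
```

On a toy set, the plausible label for a new object receives a large p-value and the implausible one a small p-value; rejecting labels whose p-value falls below a chosen significance level gives the kind of controlled error rate described above.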
|
cs/0611012
|
Asymptotic SER and Outage Probability of MIMO MRC in Correlated Fading
|
cs.IT math.IT
|
This letter derives the asymptotic symbol error rate (SER) and outage
probability of multiple-input multiple-output (MIMO) maximum ratio combining
(MRC) systems. We consider Rayleigh fading channels with both transmit and
receive spatial correlation. Our results are based on new asymptotic
expressions which we derive for the p.d.f. and c.d.f. of the maximum eigenvalue
of positive-definite quadratic forms in complex Gaussian matrices. We prove
that spatial correlation does not affect the diversity order, but that it
reduces the array gain and hence increases the SER in the high SNR regime.
|
cs/0611015
|
On the Fairness of Rate Allocation in Gaussian Multiple Access Channel
and Broadcast Channel
|
cs.IT math.IT
|
The capacity region of a channel consists of all achievable rate vectors.
Picking a particular point in the capacity region is synonymous with rate
allocation. The issue of fairness in rate allocation is addressed in this
paper. We review several notions of fairness, including max-min fairness,
proportional fairness and Nash bargaining solution. Their efficiencies for
general multiuser channels are discussed. We apply these ideas to the Gaussian
multiple access channel (MAC) and the Gaussian broadcast channel (BC). We show
that in the Gaussian MAC, max-min fairness and proportional fairness coincide.
For both Gaussian MAC and BC, we devise efficient algorithms that locate the
fair point in the capacity region. Some elementary properties of fair rate
allocations are proved.
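The claimed coincidence of max-min fairness and proportional fairness in the Gaussian MAC can be illustrated numerically. The sketch below is only an illustrative grid search along the Pareto boundary of a two-user region (the powers, noise level and grid resolution are arbitrary assumptions, and this is not the paper's algorithm):

```python
import math

def cap(snr):
    """Gaussian capacity in bits per channel use."""
    return 0.5 * math.log2(1.0 + snr)

def fair_points(p1, p2, n0=1.0, steps=20000):
    """Locate the max-min fair and proportional fair points of a
    two-user Gaussian MAC by searching the Pareto boundary:
    R1 <= C(p1/n0), R2 <= C(p2/n0), R1 + R2 <= C((p1+p2)/n0)."""
    c1, c2 = cap(p1 / n0), cap(p2 / n0)
    csum = cap((p1 + p2) / n0)
    best_mm = best_pf = None
    for i in range(steps + 1):
        r1 = c1 * i / steps
        r2 = min(c2, csum - r1)          # largest feasible r2 given r1
        if best_mm is None or min(r1, r2) > min(best_mm):
            best_mm = (r1, r2)           # max-min: maximise the worst rate
        if best_pf is None or r1 * r2 > best_pf[0] * best_pf[1]:
            best_pf = (r1, r2)           # proportional: maximise the product
    return best_mm, best_pf
```

For asymmetric powers such as P1 = 1, P2 = 10, both criteria select the same boundary point: user 1 gets its single-user capacity and the remaining sum rate goes to the stronger user.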
|
cs/0611017
|
A New Data Processing Inequality and Its Applications in Distributed
Source and Channel Coding
|
cs.IT math.IT
|
In the distributed coding of correlated sources, the problem of
characterizing the joint probability distribution of a pair of random variables
satisfying an n-letter Markov chain arises. The exact solution of this problem
is intractable. In this paper, we seek a single-letter necessary condition for
this n-letter Markov chain. To this end, we propose a new data processing
inequality on a new measure of correlation by means of spectrum analysis. Based
on this new data processing inequality, we provide a single-letter necessary
condition for the required joint probability distribution. We apply our results
to two specific examples involving the distributed coding of correlated
sources: multi-terminal rate-distortion region and multiple access channel with
correlated sources, and propose new necessary conditions for these two
problems.
|
cs/0611020
|
An associative memory for the on-line recognition and prediction of
temporal sequences
|
cs.NE cs.AI
|
This paper presents the design of an associative memory with feedback that is
capable of on-line temporal sequence learning. A framework for on-line sequence
learning has been proposed, and different sequence learning models have been
analysed according to this framework. The network model is an associative
memory with a separate store for the sequence context of a symbol. A sparse
distributed memory is used to gain scalability. The context store combines the
functionality of a neural layer with a shift register. The sensitivity of the
machine to the sequence context is controllable, resulting in different
characteristic behaviours. The model can store and predict on-line sequences of
various types and lengths. Numerical simulations on the model have been carried
out to determine its properties.
|
cs/0611022
|
Multirobot rendezvous with visibility sensors in nonconvex environments
|
cs.RO
|
This paper presents a coordination algorithm for mobile autonomous robots.
Relying upon distributed sensing the robots achieve rendezvous, that is, they
move to a common location. Each robot is a point mass moving in a nonconvex
environment according to an omnidirectional kinematic model. Each robot is
equipped with line-of-sight limited-range sensors, i.e., a robot can measure
the relative position of any object (robots or environment boundary) if and
only if the object is within a given distance and there are no obstacles
in-between. The algorithm is designed using the notions of robust visibility,
connectivity-preserving constraint sets, and proximity graphs. Simulations
illustrate the theoretical results on the correctness of the proposed
algorithm, and its performance in asynchronous setups and with sensor
measurement and control errors.
|
cs/0611024
|
A Relational Approach to Functional Decomposition of Logic Circuits
|
cs.DM cs.LG
|
Functional decomposition of logic circuits has profound influence on all
quality aspects of the cost-effective implementation of modern digital systems.
In this paper, a relational approach to the decomposition of logic circuits is
proposed. This approach parallels the normalization of relational databases;
both are governed by the same concepts of functional dependency (FD) and
multi-valued dependency (MVD). It is manifest that the functional decomposition
of a switching function actually exploits the same idea and serves a similar
purpose as database normalization. Partitions play an important role
in the decomposition. The interdependency of two partitions can be represented
by a bipartite graph. We demonstrate that both FD and MVD can be represented by
bipartite graphs with specific topological properties, which are delineated by
partitions of minterms. It follows that our algorithms are procedures of
constructing those specific bipartite graphs of interest to meet the
information-lossless criteria of functional decomposition.
|
cs/0611025
|
A Logical Approach to Efficient Max-SAT solving
|
cs.AI cs.LO
|
Weighted Max-SAT is the optimization version of SAT and many important
problems can be naturally encoded as such. Solving weighted Max-SAT is an
important problem from both a theoretical and a practical point of view. In
recent years, there has been considerable interest in finding efficient solving
techniques. Most of this work focus on the computation of good quality lower
bounds to be used within a branch and bound DPLL-like algorithm. Most often,
these lower bounds are described in a procedural way. Because of that, it is
difficult to realize the {\em logic} that is behind.
In this paper we introduce an original framework for Max-SAT that stresses
the parallelism with classical SAT. Then, we extend the two basic SAT solving
techniques: {\em search} and {\em inference}. We show that many algorithmic
{\em tricks} used in state-of-the-art Max-SAT solvers are easily expressable in
{\em logic} terms with our framework in a unified manner.
Besides, we introduce an original search algorithm that performs a restricted
amount of {\em weighted resolution} at each visited node. We empirically
compare our algorithm with a variety of solving alternatives on several
benchmarks. Our experiments, which constitute to the best of our knowledge the
most comprehensive Max-sat evaluation ever reported, show that our algorithm is
generally orders of magnitude faster than any competitor.
|
cs/0611026
|
A generic model for on-line corpus organization: application to the
FReeBank
|
cs.CL
|
The few available French resources for evaluating linguistic models or
algorithms on linguistic levels other than morpho-syntax are either
insufficient from both a quantitative and a qualitative point of view, or not
freely accessible. Based on this fact, the FREEBANK project intends to create
French corpora constructed using manually revised output from a hybrid
Constraint Grammar parser and annotated on several linguistic levels
(structure, morpho-syntax, syntax, coreference), with the objective of making
them available on-line for research purposes. Therefore, we focus on using
standard annotation schemes, on the integration of existing resources, and on
maintenance allowing for continuous enrichment of the annotations. Prior to the
actual presentation of the prototype that has been implemented, this paper
describes a generic model for the organization and deployment of a linguistic
resource archive, in compliance with the various works currently conducted
within international standardization initiatives (TEI and ISO/TC 37/SC 4).
|
cs/0611028
|
A Decomposition Theory for Binary Linear Codes
|
cs.DM cs.IT math.IT
|
The decomposition theory of matroids initiated by Paul Seymour in the 1980s
has had an enormous impact on research in matroid theory. This theory, when
applied to matrices over the binary field, yields a powerful decomposition
theory for binary linear codes. In this paper, we give an overview of this code
decomposition theory, and discuss some of its implications in the context of
the recently discovered formulation of maximum-likelihood (ML) decoding of a
binary linear code over a discrete memoryless channel as a linear programming
problem. We translate matroid-theoretic results of Gr\"otschel and Truemper
from the combinatorial optimization literature to give examples of non-trivial
families of codes for which the ML decoding problem can be solved in time
polynomial in the length of the code. One such family is that consisting of
codes $C$ for which the codeword polytope is identical to the Koetter-Vontobel
fundamental polytope derived from the entire dual code $C^\perp$. However, we
also show that such families of codes are not good in a coding-theoretic sense
-- either their dimension or their minimum distance must grow sub-linearly with
codelength. As a consequence, we have that decoding by linear programming, when
applied to good codes, cannot avoid failing occasionally due to the presence of
pseudocodewords.
|
cs/0611030
|
Nonextensive Pythagoras' Theorem
|
cs.IT math.IT
|
Kullback-Leibler relative entropy, in cases involving distributions resulting
from relative-entropy minimization, has a celebrated property reminiscent of
squared Euclidean distance: it satisfies an analogue of Pythagoras' theorem.
Hence, this property is referred to as the Pythagoras theorem of
relative-entropy minimization, or the triangle equality, and it plays a
fundamental role in geometrical approaches to statistical estimation theory
such as information geometry. The equivalent of Pythagoras' theorem in the
generalized nonextensive formalism was established in (Dukkipati et al.,
Physica A, 361 (2006) 124-138). In this paper we give a detailed account of it.
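For reference, the classical (extensive) statement being generalized can be recorded as follows: if $q$ minimizes the relative entropy $D(\cdot\|r)$ over a linear family $\mathcal{E}$ of distributions (those satisfying given expectation constraints), and $p$ is any member of $\mathcal{E}$, then

```latex
D(p\|r) \;=\; D(p\|q) \;+\; D(q\|r),
```

formally analogous to $\|p-r\|^2 = \|p-q\|^2 + \|q-r\|^2$ for an orthogonal projection in Euclidean space, with $q$ playing the role of the projection of $r$ onto $\mathcal{E}$.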
|
cs/0611031
|
Efficient Threshold Aggregation of Moving Objects
|
cs.DB
|
Calculating aggregation operators of moving point objects, using time as a
continuous variable, presents unique problems when querying for congestion in a
moving and changing (or dynamic) query space. We present a set of congestion
query operators, based on a threshold value, that estimate the following five
aggregation operations in d dimensions. 1) We call the count of point objects
that intersect the dynamic query space during the query time interval the
CountRange. 2) We call the maximum (or minimum) congestion in the dynamic query
space at any time during the query time interval the MaxCount (or MinCount).
3) We call the sum of time that the dynamic query space is congested the
ThresholdSum. 4) We call the number of times that the dynamic query space is
congested the ThresholdCount. 5) We call the average length of all the time
intervals when the dynamic query space is congested the ThresholdAverage. These
operators rely on a novel approach that transforms the problem of selection
based on position into a problem of selection based on a threshold. These
operators can be used to predict concentrations of migrating birds that may
carry diseases such as Bird Flu, and hence the information may be used to
predict high-risk areas. On a smaller scale, these operators are also
applicable to maintaining safety in airplane operations. We present the theory
of our estimation operators and provide algorithms for exact operators. The
implementations of those operators, and experiments, which include data from
more than 7500 queries, indicate that our estimation operators produce fast,
efficient results with error under 5%.
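As a hedged illustration of the threshold operators (a naive time-sampling estimator in one dimension, with linearly moving points and a static query interval; the paper's operators and data structures are more sophisticated, and the function names here are our own):

```python
def count_at(points, t, lo, hi):
    """Number of moving points x(t) = x0 + v*t inside [lo, hi] at time t."""
    return sum(1 for x0, v in points if lo <= x0 + v * t <= hi)

def threshold_stats(points, lo, hi, t0, t1, threshold, samples=1000):
    """Estimate MaxCount, ThresholdSum and ThresholdCount over [t0, t1]
    by sampling time at a fixed resolution."""
    dt = (t1 - t0) / samples
    max_count, congested_time, runs = 0, 0.0, 0
    prev_congested = False
    for i in range(samples + 1):
        c = count_at(points, t0 + i * dt, lo, hi)
        max_count = max(max_count, c)
        congested = c >= threshold
        if congested:
            congested_time += dt
            if not prev_congested:
                runs += 1          # a new congested interval begins
        prev_congested = congested
    return max_count, congested_time, runs
```

For instance, with two points moving right and one moving left through the query interval [4, 6] over ten time units, the estimator recovers the peak congestion, the total congested time and the number of congested intervals to within the sampling resolution.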
|
cs/0611032
|
V-like formations in flocks of artificial birds
|
cs.NE
|
We consider flocks of artificial birds and study the emergence of V-like
formations during flight. We introduce a small set of fully distributed
positioning rules to guide the birds' movements and demonstrate, by means of
simulations, that they tend to lead to stabilization into several of the
well-known V-like formations that have been observed in nature. We also provide
quantitative indicators that we believe are closely related to achieving V-like
formations, and study their behavior over a large set of independent
simulations.
|
cs/0611035
|
The Role of Quasi-identifiers in k-Anonymity Revisited
|
cs.DB cs.CR
|
The concept of k-anonymity, used in the recent literature to formally
evaluate the privacy preservation of published tables, was introduced based on
the notion of quasi-identifiers (or QI for short). The process of obtaining
k-anonymity for a given private table is first to recognize the QIs in the
table, and then to anonymize the QI values, the latter being called
k-anonymization. While k-anonymization is usually rigorously validated by the
authors, the definition of QI remains mostly informal, and different authors
seem to have different interpretations of the concept of QI. The purpose of
this paper is to provide a formal underpinning of QI and examine the
correctness and incorrectness of various interpretations of QI in our formal
framework. We observe that in cases where the concept has been used correctly,
its application has been conservative; this note provides a formal
understanding of the conservative nature in such cases.
|
cs/0611037
|
On Conditional Branches in Optimal Decision Trees
|
cs.PF cs.IT math.IT
|
The decision tree is one of the most fundamental programming abstractions. A
commonly used type of decision tree is the alphabetic binary tree, which uses
(without loss of generality) ``less than'' versus ``greater than or equal to''
tests in order to determine one of $n$ outcome events. The process of finding
an optimal alphabetic binary tree for a known probability distribution on
outcome events usually has the underlying assumption that the cost (time) per
decision is uniform and thus independent of the outcome of the decision. This
assumption, however, is incorrect in the case of software to be optimized for a
given microprocessor, e.g., in compiling switch statements or in fine-tuning
program bottlenecks. The operation of the microprocessor generally means that
the cost for the more likely decision outcome can or will be less -- often far
less -- than that for the less likely decision outcome. Here we formulate a
variety of $O(n^3)$-time $O(n^2)$-space dynamic programming algorithms to solve
such optimal binary decision tree problems, optimizing for the behavior of
processors with predictive branch capabilities, both static and dynamic. In the
static case, we use existing results to arrive at entropy-based performance
bounds. Solutions to this formulation are often faster in practice than
``optimal'' decision trees as formulated in the literature, and, for small
problems, are easily worth the extra complexity in finding the better solution.
This can be applied to the fast implementation of Huffman-code decoding.
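A hedged sketch of the dynamic-programming idea, under the simplifying static-prediction assumption that each comparison's likelier branch costs `c_fast` and the other branch costs `c_slow` (these parameters and this cost model are illustrative simplifications, not the paper's exact formulation):

```python
def optimal_tree_cost(weights, c_fast=1.0, c_slow=3.0):
    """Minimum expected cost of an alphabetic binary decision tree over
    outcomes with the given probability weights, where at each internal
    comparison the likelier side is assumed branch-predicted (cost
    c_fast) and the other side mispredicted (cost c_slow)."""
    n = len(weights)
    pre = [0.0] * (n + 1)                 # prefix sums of the weights
    for i, w in enumerate(weights):
        pre[i + 1] = pre[i] + w
    # C[i][j]: optimal expected cost for outcomes i..j-1
    C = [[0.0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length
            best = float("inf")
            for k in range(i + 1, j):     # split point of the comparison
                left, right = pre[k] - pre[i], pre[j] - pre[k]
                branch = (c_fast * left + c_slow * right if left >= right
                          else c_slow * left + c_fast * right)
                best = min(best, C[i][k] + C[k][j] + branch)
            C[i][j] = best
    return C[0][n]
```

With `c_fast == c_slow` this reduces to the classical uniform-cost alphabetic tree; the triple loop over interval ends and split points gives the stated $O(n^3)$ time and $O(n^2)$ space.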
|
cs/0611038
|
Nonsymmetric entropy I: basic concepts and results
|
cs.IT math.IT
|
A new concept named nonsymmetric entropy, which generalizes the concepts of
Boltzmann's entropy and Shannon's entropy, is introduced. The maximal
nonsymmetric entropy principle is proven, and some important distribution laws
are derived naturally from it.
|
cs/0611042
|
CSCR:Computer Supported Collaborative Research
|
cs.HC cs.LG
|
It is suggested that a new area of CSCR (Computer Supported Collaborative
Research) be distinguished from CSCW and CSCL, and that the demarcation between
the three areas could do with greater clarification and prescription.
|
cs/0611043
|
On the Convexity of log det (I + K X^{-1})
|
cs.IT math.IT
|
A simple proof is given for the convexity of log det (I+K X^{-1}) in the
positive definite matrix variable X with a given positive semidefinite K.
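The claim is easy to probe numerically. A minimal midpoint-convexity check on random 2x2 symmetric positive definite matrices (an illustration only, not the paper's proof; here K is taken positive definite, a special case of positive semidefinite):

```python
import math
import random

I2 = [[1.0, 0.0], [0.0, 1.0]]

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def det(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def inv(A):
    d = det(A)
    return [[A[1][1] / d, -A[0][1] / d], [-A[1][0] / d, A[0][0] / d]]

def rand_pd():
    # B^T B + I is symmetric positive definite
    B = [[random.gauss(0, 1) for _ in range(2)] for _ in range(2)]
    Bt = [[B[j][i] for j in range(2)] for i in range(2)]
    return mat_add(mat_mul(Bt, B), I2)

def logdet_term(K, X):
    # log det(I + K X^{-1})
    return math.log(det(mat_add(I2, mat_mul(K, inv(X)))))

random.seed(0)
K = rand_pd()                            # a fixed positive definite K
for _ in range(200):                     # midpoint convexity in X
    X1, X2 = rand_pd(), rand_pd()
    mid = [[0.5 * (X1[i][j] + X2[i][j]) for j in range(2)] for i in range(2)]
    assert logdet_term(K, mid) <= 0.5 * (logdet_term(K, X1)
                                         + logdet_term(K, X2)) + 1e-9
print("midpoint convexity held on all sampled pairs")
```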
|
cs/0611044
|
Protection of the information in a complex CAD system of renovation of
industrial firms
|
cs.CE
|
The threats to information security that arise from involuntary actions of
the users of a CAD system, and the methods of protection implemented in a
complex CAD system for the renovation of industrial firms, are considered:
rollback, autosave, automatic backup copying and electronic signature. The
specificity of a complex CAD is reflected in the need for rollback and autosave
of both the drawing and the parametric representations of its parts, which are
the information models of the problem-oriented extensions of the CAD.
|
cs/0611045
|
The evolution of the parametric models of drawings (modules) in the
enterprises reconstruction CAD system
|
cs.CE
|
Methods of automating the creation of drawings are discussed, based on
so-called modules containing a parametric representation of a part of the
drawing and its geometrical elements. The stages in the evolution of this
modular technology of engineering automation are described: applying modules
for simple association of drawing elements without a parametric representation,
with an option to comment on it; for creating graphic symbols in automation
schemas and pipeline drawings; for storing the specific properties of elements;
and for developing the specialized parts of the project: the axonometric
schemas, profiles of outboard pipe networks, etc.
|
cs/0611046
|
Analytic Tableaux Calculi for KLM Logics of Nonmonotonic Reasoning
|
cs.LO cs.AI
|
We present tableau calculi for some logics of nonmonotonic reasoning, as
defined by Kraus, Lehmann and Magidor. We give a tableau proof procedure for
all KLM logics, namely preferential, loop-cumulative, cumulative and rational
logics. Our calculi are obtained by introducing suitable modalities to
interpret conditional assertions. We provide a decision procedure for the
logics considered, and we study their complexity.
|
cs/0611047
|
The Reaction RuleML Classification of the Event / Action / State
Processing and Reasoning Space
|
cs.AI
|
Reaction RuleML is a general, practical, compact and user-friendly
XML-serialized language for the family of reaction rules. In this white paper
we review the history of event / action / state processing and reaction rule
approaches and systems in different domains, define basic concepts, and give a
classification of the event, action and state processing and reasoning space,
as well as a discussion of relevant related work.
|
cs/0611049
|
On numerical stability of recursive present value computation method
|
cs.CE cs.NA
|
We analyze the numerical stability of a recursive computation scheme for
present value (PV) and show that the absolute error increases exponentially for
positive discount rates. We show that reversing the direction of calculations
in the recurrence equation yields a robust PV computation routine.
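The instability is easy to reproduce. A minimal sketch for a level annuity (the specific recursions below are illustrative, stated under the usual end-of-period discounting convention):

```python
# Stability of recursive present-value (PV) computation: a minimal demo.
# Backward recursion (discounting) damps errors; forward recursion
# (compounding) amplifies them by a factor (1+r) at every step.
r, n, c = 0.05, 200, 1.0
pv_exact = c * (1 - (1 + r) ** (-n)) / r       # closed-form annuity PV

# Robust direction: V_k = (c + V_{k+1}) / (1+r), starting from V_n = 0.
v = 0.0
for _ in range(n):
    v = (c + v) / (1 + r)
assert abs(v - pv_exact) < 1e-9

# Unstable direction: V_{k+1} = V_k * (1+r) - c, starting from V_0 = PV.
v_fwd = pv_exact + 1e-10                       # tiny initial perturbation
for _ in range(n):
    v_fwd = v_fwd * (1 + r) - c
# V_n should be 0, but the 1e-10 error has grown by about (1.05)^200 ~ 1.7e4.
assert abs(v_fwd) > 1e-7
print("backward error:", abs(v - pv_exact), " forward residual:", abs(v_fwd))
```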
|
cs/0611053
|
Capacity of a Class of Deterministic Relay Channels
|
cs.IT math.IT
|
The capacity of a class of deterministic relay channels with the transmitter
input X, the receiver output Y, the relay output Y_1 = f(X, Y), and a separate
communication link from the relay to the receiver with capacity R_0, is shown
to be
C(R_0) = \max_{p(x)} \min \{I(X;Y)+R_0, I(X;Y, Y_1) \}.
Thus every bit from the relay is worth exactly one bit to the receiver. Two
alternative coding schemes are presented that achieve this capacity. The first
scheme, ``hash-and-forward'', is based on a simple yet novel use of random
binning on the space of relay outputs, while the second scheme uses the usual
``compress-and-forward''. In fact, these two schemes can be combined together
to give a class of optimal coding schemes. As a corollary, this relay capacity
result confirms a conjecture by Ahlswede and Han on the capacity of a channel
with rate-limited state information at the decoder in the special case when the
channel state is recoverable from the channel input and the output.
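The capacity expression can be evaluated directly for a toy member of this class. The example below is an assumption for illustration, not taken from the paper: Y = X xor Z with Z ~ Bern(p), and relay output Y1 = X xor Y = Z, which is indeed a deterministic function of (X, Y); then I(X;Y,Y1) = H(X) (since Y xor Y1 recovers X) and I(X;Y) = h(q*p) - h(p) for X ~ Bern(q).

```python
import math

def h(x):
    """Binary entropy in bits."""
    return 0.0 if x in (0.0, 1.0) else -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def relay_capacity(p, R0, grid=2000):
    """Evaluate C(R0) = max_q min{ I(X;Y) + R0, I(X;Y,Y1) } for the toy
    deterministic relay channel Y = X xor Z, Z ~ Bern(p), Y1 = X xor Y = Z,
    optimizing the input X ~ Bern(q) over a grid."""
    best = 0.0
    for i in range(grid + 1):
        q = i / grid
        qp = q * (1 - p) + (1 - q) * p        # P(Y = 1): the convolution q * p
        # I(X;Y) = h(q*p) - h(p);  I(X;Y,Y1) = H(X) = h(q)
        best = max(best, min(h(qp) - h(p) + R0, h(q)))
    return best

p = 0.1
print(relay_capacity(p, 0.0))   # 1 - h(p): no relay link at all
print(relay_capacity(p, 0.2))   # 1 - h(p) + 0.2: each relay bit buys one bit
print(relay_capacity(p, 1.0))   # 1.0: saturates at the noiseless capacity
```

The middle case illustrates the "every bit from the relay is worth exactly one bit" behavior until the noiseless capacity is reached.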
|
cs/0611054
|
How Random is a Coin Toss? Bayesian Inference and the Symbolic Dynamics
of Deterministic Chaos
|
cs.LG cs.IT math.IT nlin.CD
|
Symbolic dynamics has proven to be an invaluable tool in analyzing the
mechanisms that lead to unpredictability and random behavior in nonlinear
dynamical systems. Surprisingly, a discrete partition of continuous state space
can produce a coarse-grained description of the behavior that accurately
describes the invariant properties of an underlying chaotic attractor. In
particular, measures of the rate of information production--the topological and
metric entropy rates--can be estimated from the outputs of Markov or generating
partitions. Here we develop Bayesian inference for k-th order Markov chains as
a method to finding generating partitions and estimating entropy rates from
finite samples of discretized data produced by coarse-grained dynamical
systems.
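A minimal plug-in sketch of the estimation step, not the paper's full Bayesian procedure: transition rows of a k-th order Markov model get a symmetric Dirichlet prior (the value alpha = 0.5 is an assumption) and the entropy rate is computed from the posterior-mean probabilities.

```python
import math
import random
from collections import Counter, defaultdict

def entropy_rate_kth(seq, k, alpha=0.5):
    """Plug-in entropy-rate estimate (bits/symbol) for a k-th order Markov
    model: each transition row gets a symmetric Dirichlet(alpha) prior and
    is replaced by its posterior mean before computing the rate."""
    symbols = sorted(set(seq))
    counts = defaultdict(Counter)
    for i in range(k, len(seq)):
        counts[tuple(seq[i - k:i])][seq[i]] += 1
    total = len(seq) - k
    rate = 0.0
    for row in counts.values():
        n_ctx = sum(row.values())
        denom = n_ctx + alpha * len(symbols)
        h_row = 0.0
        for s in symbols:
            prob = (row[s] + alpha) / denom     # posterior-mean probability
            h_row -= prob * math.log2(prob)
        rate += (n_ctx / total) * h_row         # weight by context frequency
    return rate

random.seed(1)
coin = [random.randrange(2) for _ in range(20000)]
print(entropy_rate_kth(coin, k=1))   # near 1 bit/symbol for a fair coin
```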
|
cs/0611058
|
Advances in Self Organising Maps
|
cs.NE math.ST nlin.AO stat.TH
|
The Self-Organizing Map (SOM) with its related extensions is the most popular
artificial neural algorithm for use in unsupervised learning, clustering,
classification and data visualization. Over 5,000 publications have been
reported in the open literature, and many commercial projects employ the SOM as
a tool for solving hard real-world problems. Every two years, the "Workshop on
Self-Organizing Maps" (WSOM) covers the new developments in the field. The WSOM
series of conferences was initiated in 1997 by Prof. Teuvo Kohonen, and has
been successfully organized in 1997 and 1999 by the Helsinki University of
Technology, in 2001 by the University of Lincolnshire and Humberside, and in
2003 by the Kyushu Institute of Technology. The Universit\'{e} Paris I
Panth\'{e}on Sorbonne (SAMOS-MATISSE research centre) organized WSOM 2005 in
Paris on September 5-8, 2005.
|
cs/0611059
|
Is the cyclic prefix necessary?
|
cs.IT math.IT
|
We show that one can do away with the cyclic prefix (CP) for SC-FDE and OFDM
at the cost of a moderate increase in the complexity of a DFT-based receiver.
Such an approach effectively deals with the decrease in the number of channel
uses due to the introduction of the CP. It is shown that the SINR for SC-FDE
remains the same asymptotically with the proposed receiver without CP as that
of the conventional receiver with CP. The results are shown for $N_t$ transmit
antennas and $N_r$ receive antennas where $N_r \geq N_t$.
|
cs/0611060
|
The effect of 'Open Access' upon citation impact: An analysis of ArXiv's
Condensed Matter Section
|
cs.DL cs.IR physics.soc-ph
|
This article statistically analyses how the citation impact of articles
deposited in the Condensed Matter section of the preprint server ArXiv (hosted
by Cornell University), and subsequently published in a scientific journal,
compares to that of articles in the same journal that were not deposited in
that archive. Its principal aim is to further illustrate and roughly estimate
the effect of two factors, 'early view' and 'quality bias', upon differences in
citation impact between these two sets of papers, using citation data from
Thomson Scientific's Web of Science. It presents estimates for a number of
journals in the field of condensed matter physics. In order to discriminate
between an 'open access' effect and an early view effect, longitudinal citation
data was analysed covering a time period as long as 7 years. Quality bias was
measured by calculating ArXiv citation impact differentials at the level of
individual authors publishing in a journal, taking into account co-authorship.
The analysis provided evidence of a strong quality bias and early view effect.
Correcting for these effects, there is, in a sample of 6 condensed matter
physics journals studied in detail, no sign of a general 'open access
advantage' for papers deposited in ArXiv. The study does provide evidence that
ArXiv accelerates citation, because ArXiv makes papers available earlier,
rather than because it makes them freely available.
|
cs/0611061
|
Multivariate Integral Perturbation Techniques - I (Theory)
|
cs.CE cs.NA
|
We present a quasi-analytic perturbation expansion for multivariate
N-dimensional Gaussian integrals. The perturbation expansion is an infinite
series of lower-dimensional integrals (one-dimensional in the simplest
approximation). This perturbative idea can also be applied to multivariate
Student-t integrals. We evaluate the perturbation expansion explicitly through
2nd order, and discuss the convergence, including enhancement using Pade
approximants. Brief comments on potential applications in finance are given,
including options, models for credit risk and derivatives, and correlation
sensitivities.
|
cs/0611069
|
Scaling Construction Grammar up to Production Systems: the SCIM
|
cs.CL
|
While a great effort has concerned the development of fully integrated
modular understanding systems, little research has focused on the problem of
unifying existing linguistic formalisms with cognitive processing models. The
Situated Constructional Interpretation Model is one such attempt. In this
model, the notion of "construction" has been adapted in order to mimic the
behavior of Production Systems. The Construction Grammar approach establishes a
model of the relations between linguistic forms and meaning by means of
constructions. The latter can be considered as pairings from a topologically
structured space to an unstructured space, in some way a special kind of
production rule.
|
cs/0611070
|
Hierarchical Cooperation Achieves Optimal Capacity Scaling in Ad Hoc
Networks
|
cs.IT math.IT
|
n source and destination pairs randomly located in an area want to
communicate with each other. Signals transmitted from one user to another at
distance r apart are subject to a power loss of r^{-alpha}, as well as a random
phase. We identify the scaling laws of the information theoretic capacity of
the network. In the case of dense networks, where the area is fixed and the
density of nodes is increasing, we show that the total capacity of the network
scales linearly with n. This improves on the best known achievability result of
n^{2/3} of Aeron and Saligrama, 2006. In the case of extended networks, where
the density of nodes is fixed and the area is increasing linearly with n, we
show that this capacity scales as n^{2-alpha/2} for 2<alpha<3 and sqrt{n} for
alpha>3. The best known earlier result (Xie and Kumar 2006) identified the
scaling law for alpha > 4. Thus, much better scaling than multihop can be
achieved in dense networks, as well as in extended networks with low
attenuation. The performance gain is achieved by intelligent node cooperation
and distributed MIMO communication. The key ingredient is a hierarchical and
digital architecture for nodal exchange of information for realizing the
cooperation.
|
cs/0611073
|
Prefix Codes for Power Laws with Countable Support
|
cs.IT math.IT
|
In prefix coding over an infinite alphabet, methods that consider specific
distributions generally consider those that decline more quickly than a power
law (e.g., Golomb coding). Particular power-law distributions, however, model
many random variables encountered in practice. For such random variables,
compression performance is judged via estimates of expected bits per input
symbol. This correspondence introduces a family of prefix codes with an eye
towards near-optimal coding of known distributions. Compression performance is
precisely estimated for well-known probability distributions using these codes
and using previously known prefix codes. One application of these near-optimal
codes is an improved representation of rational numbers.
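The codes of this correspondence are its own; as a baseline illustration of how such expected-bits-per-symbol estimates are computed, here is a classical universal prefix code (Elias gamma) under a truncated power law P(i) proportional to i^{-2}:

```python
def gamma_len(i):
    """Elias gamma codeword length for the positive integer i:
    2*floor(log2 i) + 1, with the floor taken exactly via bit_length."""
    return 2 * (i.bit_length() - 1) + 1

def expected_bits_power_law(s=2.0, terms=200000):
    """Expected Elias-gamma codeword length under the truncated power law
    P(i) proportional to i^{-s}, i = 1, 2, ..., terms."""
    Z = sum(i ** -s for i in range(1, terms + 1))
    return sum((i ** -s / Z) * gamma_len(i) for i in range(1, terms + 1))

print(expected_bits_power_law())   # roughly 2.4-2.5 bits for s = 2
```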
|
cs/0611075
|
Proportional Fairness in Multi-channel Multi-rate Wireless Networks-Part
I: The Case of Deterministic Channels
|
cs.NI cs.IT cs.PF math.IT
|
This is Part I of a two-part paper series that studies the use of the
proportional fairness (PF) utility function as the basis for capacity
allocation and scheduling in multi-channel multi-rate wireless networks. The
contributions of Part I are threefold. (i) First, we lay down the theoretical
foundation for PF. Specifically, we present the fundamental properties and
physical/economic interpretation of PF. We show by general mathematical
arguments that PF leads to equal airtime allocation to users for the
single-channel case; and equal equivalent airtime allocation to users for the
multi-channel case, where the equivalent airtime enjoyed by a user is a
weighted sum of the airtimes enjoyed by the user on all channels, with the
weight of a channel being the price or value of that channel. We also establish
the Pareto efficiency of PF solutions. (ii) Second, we derive characteristics
of PF solutions that are useful for the construction of PF-optimization
algorithms. We present several PF-optimization algorithms, including a fast
algorithm that is amenable to parallel implementation. (iii) Third, we study
the use of PF utility for capacity allocation in large-scale WiFi networks
consisting of many adjacent wireless LANs. We find that the PF solution
simultaneously achieves higher system throughput, better fairness, and lower
outage probability with respect to the default solution given by today's 802.11
commercial products. Part II of this paper series extends our investigation to
the time-varying-channel case, in which the data rates enjoyed by users over
the channels vary dynamically over time.
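The equal-airtime property for the single-channel case can be checked directly: with utility sum_i log(t_i r_i) subject to sum_i t_i = 1, the optimum is t_i = 1/n regardless of the individual rates r_i. A small numerical sanity check (the link rates below are illustrative assumptions):

```python
import math
import random

def pf_utility(t, r):
    """Proportional-fairness utility: sum of log throughputs."""
    return sum(math.log(ti * ri) for ti, ri in zip(t, r))

random.seed(0)
r = [1.0, 5.5, 11.0, 54.0]           # assumed per-user link rates (illustrative)
n = len(r)
t_star = [1.0 / n] * n               # equal airtime, regardless of the rates

for _ in range(1000):                # no random feasible airtime split does better
    raw = [random.random() + 1e-9 for _ in range(n)]
    s = sum(raw)
    t = [x / s for x in raw]
    assert pf_utility(t, r) <= pf_utility(t_star, r) + 1e-12
print("equal airtime maximizes the PF utility over the sampled allocations")
```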
|
cs/0611076
|
Proportional Fairness in Multi-channel Multi-rate Wireless Networks-Part
II: The Case of Time-Varying Channels
|
cs.PF cs.IT cs.NI math.IT
|
This is Part II of a two-part paper series that studies the use of the
proportional fairness (PF) utility function as the basis for capacity
allocation and scheduling in multi-channel multi-rate wireless networks. The
contributions of Part II are twofold. (i) First, we extend the problem
formulation, theoretical results, and algorithms to the case of time-varying
channels, where opportunistic capacity allocation and scheduling can be
exploited to improve system performance. We lay down the theoretical foundation
for optimization that "couples" the time-varying characteristic of channels
with the requirements of the underlying applications into one consideration. In
particular, the extent to which opportunistic optimization is possible is not
just a function of how fast the channel characteristics vary, but also a
function of the elasticity of the underlying applications for delayed capacity
allocation. (ii) Second, building upon our theoretical framework and results,
we study subcarrier allocation and scheduling in orthogonal frequency division
multiplexing (OFDM) cellular wireless networks. We introduce the concept of a
W-normalized Doppler frequency to capture the extent to which opportunistic
scheduling can be exploited to achieve throughput-fairness performance gain. We
show that a "look-back PF" scheduling can strike a good balance between system
throughput and fairness while taking the underlying application requirements
into account.
|
cs/0611077
|
Evolutionary Optimization in an Algorithmic Setting
|
cs.NE cs.AI
|
Evolutionary processes have proved very useful for solving optimization
problems. In this work, we build a formalization of the notion of cooperation
and competition of multiple systems working toward a common optimization goal
of the population, using evolutionary computation techniques. It is argued that
evolutionary algorithms are more expressive than conventional recursive
algorithms. Three subclasses of evolutionary algorithms are proposed here:
bounded finite, unbounded finite and infinite types. Some results on
completeness, optimality and search decidability for the above classes are
presented. A natural extension of Evolutionary Turing Machine model developed
in this paper allows one to mathematically represent and study properties of
cooperation and competition in a population of optimized species.
|
cs/0611079
|
Managing network congestion with a Kohonen-based RED queue
|
cs.NI cs.NE
|
The behaviour of the TCP AIMD algorithm is known to cause queue length
oscillations when congestion occurs at a router output link. Indeed, due to
these queueing variations, end-to-end applications experience large delay
jitter. Many studies have proposed efficient Active Queue Management (AQM)
mechanisms in order to reduce queue oscillations and stabilize the queue
length. These AQM are mostly improvements of the Random Early Detection (RED)
model. Unfortunately, these enhancements do not react in a similar manner for
various network conditions and are strongly sensitive to their initial setting
parameters. Although this paper proposes a solution to overcome the
difficulties of setting these parameters by using a Kohonen neural network
model, another goal of this study is to investigate whether cognitive
intelligence could be placed in the core network to solve such a stability
problem. In our context, we use results from the neural network area to
demonstrate that our proposal, named Kohonen-RED (KRED), enables a stable queue
length without complex parameters setting and passive measurements.
|
cs/0611080
|
A Multi-server Scheduling Framework for Resource Allocation in Wireless
Multi-carrier Networks
|
cs.NA cs.CE cs.NI cs.PF
|
Multiuser resource allocation has recently been recognized as an effective
methodology for enhancing the power and spectrum efficiency in OFDM (orthogonal
frequency division multiplexing) systems. It is, however, not directly
applicable to current packet-switched networks, because (i) most existing
packet-scheduling schemes are based on a single-server model and do not serve
multiple users at the same time; and (ii) the conventional separate design of
MAC (medium access control) packet scheduling and PHY (physical) resource
allocation yields inefficient resource utilization. In this paper, we propose a
cross-layer resource allocation algorithm based on a novel multi-server
scheduling framework to achieve overall high system power efficiency in
packet-switched OFDM networks. Our contribution is fourfold: (i) we propose
and analyze a MPGPS (multi-server packetized general processor sharing) service
discipline that serves multiple users at the same time and facilitates
multiuser resource allocation; (ii) we present a MPGPS-based joint MAC-PHY
resource allocation scheme that incorporates packet scheduling, subcarrier
allocation, and power allocation in an integrated framework; (iii) by
investigating the fundamental tradeoff between multiuser-diversity and queueing
performance, we present an A-MPGPS (adaptive MPGPS) service discipline that
strikes a balance between power efficiency and queueing performance; and (iv) we
extend MPGPS to an O-MPGPS (opportunistic MPGPS) service discipline to further
enhance the resource utilization efficiency.
|
cs/0611081
|
The Importance of the Algorithmic Information Theory to Construct a
Possible Example Where NP # P - II: An Irreducible Sentence
|
cs.CC cs.IT math.IT
|
In this short communication we discuss the relation between disentangled
states and algorithmic information theory, aiming to construct an irreducible
sentence whose length increases in a non-polynomial way as the number of qubits
increases.
|
cs/0611083
|
Environment of development of the programs of parametric creating of the
drawings in CAD-system of renovation of the enterprises
|
cs.CE
|
The main ideas, data structures, and the structure and implementation of
operations on them in an environment for developing programs that
parametrically create drawings are considered, for the needs of the
computer-aided design system for the renovation of enterprises. An example of
such a program, and an example of applying this environment to create the
drawing of a base for equipment in the TechnoCAD GlassX CAD system, are
presented.
|
cs/0611085
|
Fuzzy Logic Classification of Imaging Laser Desorption Fourier Transform
Mass Spectrometry Data
|
cs.AI
|
A fuzzy logic based classification engine has been developed for classifying
mass spectra obtained with an imaging internal source Fourier transform mass
spectrometer (I^2LD-FTMS). Traditionally, an operator uses the relative
abundance of ions with specific mass-to-charge (m/z) ratios to categorize
spectra. An operator does this by comparing the spectrum of m/z versus
abundance of an unknown sample against a library of spectra from known samples.
Automated positioning and acquisition allow I^2LD-FTMS to acquire data from
very large grids; this would require classification of up to 3600 spectra per
hour to keep pace with the acquisition. The tedious job of classifying numerous
spectra generated in an I^2LD-FTMS imaging application can be replaced by a
fuzzy rule base if the cues an operator uses can be encapsulated. We present
the translation of linguistic rules to a fuzzy classifier for mineral phases in
basalt. This paper also describes a method for gathering statistics on ions,
which are not currently used in the rule base, but which may be candidates for
making the rule base more accurate and complete or to form new rule bases based
on data obtained from known samples. A spatial method for classifying spectra
with low membership values, based on neighboring sample classifications, is
also presented.
|
cs/0611086
|
Reliable Multi-Path Routing Schemes for Real-Time Streaming
|
cs.NI cs.IT math.IT
|
In off-line streaming, packet level erasure resilient Forward Error
Correction (FEC) codes rely on the unrestricted buffering time at the receiver.
In real-time streaming, the extremely short playback buffering time makes FEC
inefficient for protecting a single path communication against long link
failures. It has been shown that one alternative path added to a single path
route makes packet level FEC applicable even when the buffering time is
limited. Further path diversity, however, increases the number of underlying
links, increasing the total link failure rate and thus possibly requiring more
FEC packets from the sender. We introduce a scalar coefficient for rating a
multi-path routing topology of any complexity. It is called the Redundancy
Overall Requirement (ROR) and is proportional to the total number of adaptive
FEC packets required to protect the communication. With the capillary routing
algorithm introduced in this paper, we build thousands of multi-path routing
patterns. By computing their ROR coefficients, we show that, contrary to
expectations, the overall requirement in FEC codes is reduced when further
diversity of dual-path routing is achieved by the capillary routing algorithm.
|
cs/0611089
|
The Extraction and Complexity Limits of Graphical Models for Linear
Codes
|
cs.IT math.IT
|
Two broad classes of graphical modeling problems for codes can be identified
in the literature: constructive and extractive problems. The former class of
problems concerns the construction of a graphical model in order to define a
new code. The latter class of problems concerns the extraction of a graphical model
for a (fixed) given code. The design of a new low-density parity-check code for
some given criteria (e.g. target block length and code rate) is an example of a
constructive problem. The determination of a graphical model for a classical
linear block code which implies a decoding algorithm with desired performance
and complexity characteristics is an example of an extractive problem. This
work focuses on extractive graphical model problems and aims to lay out some of
the foundations of the theory of such problems for linear codes.
The primary focus of this work is a study of the space of all graphical
models for a (fixed) given code. The tradeoff between cyclic topology and
complexity in this space is characterized via the introduction of a new bound:
the tree-inducing cut-set bound. The proposed bound provides a more precise
characterization of this tradeoff than that which can be obtained using
existing tools (e.g. the Cut-Set Bound) and can be viewed as a generalization
of the square-root bound for tail-biting trellises to graphical models with
arbitrary cyclic topologies. Searching the space of graphical models for a
given code is then enabled by introducing a set of basic graphical model
transformation operations which are shown to span this space. Finally,
heuristics for extracting novel graphical models for linear block codes using
these transformations are investigated.
|
cs/0611090
|
Algebraic Soft-Decision Decoding of Reed-Solomon Codes Using Bit-level
Soft Information
|
cs.IT math.IT
|
The performance of algebraic soft-decision decoding of Reed-Solomon codes
using bit-level soft information is investigated. Optimal multiplicity
assignment strategies of algebraic soft-decision decoding with infinite cost
are first studied over erasure channels and the binary symmetric channel. The
corresponding decoding radii are calculated in closed forms and tight bounds on
the error probability are derived. The multiplicity assignment strategy and the
corresponding performance analysis are then generalized to characterize the
decoding region of algebraic soft-decision decoding over a mixed error and
bit-level erasure channel. The bit-level decoding region of the proposed
multiplicity assignment strategy is shown to be significantly larger than that
of conventional Berlekamp-Massey decoding. As an application, a bit-level
generalized minimum distance decoding algorithm is proposed. The proposed
decoding compares favorably with many other Reed-Solomon soft-decision decoding
algorithms over various channels. Moreover, owing to the simplicity of the
proposed bit-level generalized minimum distance decoding, its performance can
be tightly bounded using order statistics.
|
cs/0611094
|
Reducing Order Enforcement Cost in Complex Query Plans
|
cs.DB
|
Algorithms that exploit sort orders are widely used to implement joins,
grouping, duplicate elimination and other set operations. Query optimizers
traditionally deal with sort orders by using the notion of interesting orders.
The number of interesting orders is unfortunately factorial in the number of
participating attributes. Optimizer implementations use heuristics to prune the
number of interesting orders, but the quality of the heuristics is unclear.
Increasingly complex decision support queries and increasing use of covering
indices, which provide multiple alternative sort orders for relations, motivate
us to better address the problem of optimization with interesting orders.
We show that even a simplified version of optimization with sort orders is
NP-hard and provide principled heuristics for choosing interesting orders. We
have implemented the proposed techniques in a Volcano-style cost-based
optimizer, and our performance study shows significant improvements in
estimated cost. We also executed our plans on a widely used commercial database
system, and on PostgreSQL, and found that actual execution times for our plans
were significantly better than for plans generated by those systems in several
cases.
|
cs/0611095
|
Dense Gaussian Sensor Networks: Minimum Achievable Distortion and the
Order Optimality of Separation
|
cs.IT math.IT
|
We investigate the optimal performance of dense sensor networks by studying
the joint source-channel coding problem. The overall goal of the sensor network
is to take measurements from an underlying random process, code and transmit
those measurement samples to a collector node in a cooperative multiple access
channel with potential feedback, and reconstruct the entire random process at
the collector node. We provide lower and upper bounds for the minimum
achievable expected distortion when the underlying random process is Gaussian.
When the Gaussian random process satisfies some general conditions, we evaluate
the lower and upper bounds explicitly, and show that they are of the same order
for a wide range of power constraints. Thus, for these random processes, under
these power constraints, we express the minimum achievable expected distortion
as a function of the power constraint. Further, we show that the achievability
scheme that achieves the lower bound on the distortion is a separation-based
scheme that is composed of multi-terminal rate-distortion coding and
amplify-and-forward channel coding. Therefore, we conclude that separation is
order-optimal for the dense Gaussian sensor network scenario under
consideration, when the underlying random process satisfies some general
conditions.
|
cs/0611096
|
On the Rate Distortion Function of Certain Sources with a Proportional
Mean-Square Error Distortion Measure
|
cs.IT math.IT
|
New bounds on the rate distortion function of certain non-Gaussian sources,
with a proportional-weighted mean-square error (MSE) distortion measure, are
given. The growth, g, of the rate distortion function, as a result of changing
from a non-weighted MSE distortion measure to a proportional-weighted
distortion criterion is analyzed. It is shown that for a small distortion, d,
the growth, g, and the difference between the rate distortion functions of a
Gaussian memoryless source and a source with memory, both with the same
marginal statistics and MSE distortion measure, share the same lower bound.
Several examples and applications are also given.
|
cs/0611097
|
Conditionally Cycle-Free Generalized Tanner Graphs: Theory and
Application to High-Rate Serially Concatenated Codes
|
cs.IT math.IT
|
Generalized Tanner graphs have been implicitly studied by a number of authors
under the rubric of generalized parity-check matrices. This work considers the
conditioning of binary hidden variables in such models in order to break all
cycles and thus derive optimal soft-in soft-out (SISO) decoding algorithms.
Conditionally cycle-free generalized Tanner graphs are shown to imply optimal
SISO decoding algorithms for the first order Reed-Muller codes and their duals
- the extended Hamming codes - which are substantially less complex than
conventional bit-level trellis decoding. The study of low-complexity optimal
SISO decoding algorithms for the family of extended Hamming codes is
practically motivated. Specifically, it is shown that extended Hamming codes
offer an attractive alternative to high-rate convolutional codes in terms of
both performance and complexity for use in very high-rate, very low-floor,
serially concatenated coding schemes.
|
cs/0611099
|
On the space complexity of one-pass compression
|
cs.IT math.IT
|
We study how much memory one-pass compression algorithms need to compete with
the best multi-pass algorithms. We call a one-pass algorithm an $f(n,
\ell)$-footprint compressor if, given $n$, $\ell$ and an $n$-ary string $S$, it
stores $S$ in $(O(H_\ell(S)) + o(\log n))\,|S| + O(n^{\ell+1} \log n)$ bits --
where $H_\ell(S)$ is the $\ell$th-order empirical entropy of $S$ -- while using
at most $f(n, \ell)$ bits of memory. We prove that, for any $\epsilon > 0$ and
some $f(n, \ell) \in O(n^{\ell+\epsilon} \log n)$, there is an $f(n,
\ell)$-footprint compressor; on the other hand, there is no $f(n,
\ell)$-footprint compressor for $f(n, \ell) \in o(n^\ell \log n)$.
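For reference, the ell-th order empirical entropy has a direct plug-in form; a minimal sketch (bits per symbol, using the standard empirical-conditional definition):

```python
import math
from collections import Counter, defaultdict

def empirical_entropy(S, ell):
    """ell-th order empirical entropy H_ell(S), in bits per symbol: the
    frequency-weighted zeroth-order entropy of the symbol following each
    length-ell context."""
    ctx_counts = defaultdict(Counter)
    for i in range(ell, len(S)):
        ctx_counts[S[i - ell:i]][S[i]] += 1
    total = len(S) - ell
    H = 0.0
    for row in ctx_counts.values():
        n = sum(row.values())
        for c in row.values():
            H -= (c / total) * math.log2(c / n)
    return H

print(empirical_entropy("abababab", 0))   # 1.0: the two symbols are equiprobable
print(empirical_entropy("abababab", 1))   # 0.0: each context determines the next symbol
```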
|
cs/0611104
|
Learning and discrimination through STDP in a top-down modulated
associative memory
|
cs.NE cs.AI
|
This article underlines the learning and discrimination capabilities of a
model of associative memory based on artificial networks of spiking neurons.
Inspired by neuropsychology and neurobiology, the model implements top-down
modulations, as in neocortical layer V pyramidal neurons, with a learning rule
based on spike-timing-dependent plasticity (STDP), for performing a multimodal
association
learning task. A temporal correlation method of analysis proves the ability of
the model to associate specific activity patterns to different samples of
stimulation. Even in the absence of initial learning and with continuously
varying weights, the activity patterns become stable enough for discrimination.
|
cs/0611106
|
Mixing and non-mixing local minima of the entropy contrast for blind
source separation
|
cs.IT math.IT
|
In this paper, both non-mixing and mixing local minima of the entropy are
analyzed from the viewpoint of blind source separation (BSS); they correspond
respectively to acceptable and spurious solutions of the BSS problem. The
contribution of this work is twofold. First, a Taylor development is used to
show that the \textit{exact} output entropy cost function has a non-mixing
minimum when this output is proportional to \textit{any} of the non-Gaussian
sources, and not only when the output is proportional to the lowest entropic
source. Second, in order to prove that mixing entropy minima exist when the
source densities are strongly multimodal, an entropy approximator is proposed.
The latter has the major advantage that an error bound can be provided.
Although this approximator (and the associated bound) is used here in the BSS
context, it can be applied to estimate the entropy of any random variable with
a multimodal density.
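For orientation, a generic plug-in (histogram) estimate of differential
entropy from samples can be sketched as below. This is purely illustrative: it
is not the paper's approximator and carries no error bound, unlike the
estimator the abstract describes.

```python
import math
import random

def histogram_entropy(samples: list[float], bins: int = 64) -> float:
    """Plug-in differential-entropy estimate, in nats, from a histogram.

    The density in bin i is approximated by p_i / w, where p_i is the
    empirical bin probability and w the bin width, giving
    h(X) ~= -sum_i p_i * log(p_i / w).
    """
    lo, hi = min(samples), max(samples)
    w = (hi - lo) / bins
    counts = [0] * bins
    for x in samples:
        counts[min(int((x - lo) / w), bins - 1)] += 1
    n = len(samples)
    return -sum((c / n) * math.log((c / n) / w) for c in counts if c)

# Sanity check: Uniform(0, 2) has differential entropy log(2) nats.
random.seed(0)
u = [random.uniform(0.0, 2.0) for _ in range(20000)]
print(abs(histogram_entropy(u) - math.log(2.0)) < 0.05)  # True
```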
|
cs/0611109
|
An approach to RAID-6 based on cyclic groups of a prime order
|
cs.IT math.IT math.NT
|
As the size of data-storing disk arrays grows, it becomes vital to
protect data against double disk failures. A popular method of protection is
via the Reed-Solomon (RS) code with two parity words. In the present paper we
construct alternative examples of linear block codes protecting against two
erasures. Our construction is based on an abstract notion of cone. Concrete
cones are constructed via matrix representations of cyclic groups of prime
order. In particular, this construction produces the EVENODD code. Interesting
conditions on the prime number arise in our analysis of these codes. At the
end, we analyse an assembly implementation of the corresponding system on a
general purpose processor and compare its write and recovery speed with the
standard DP-RAID system.
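For context, the baseline the paper compares against is the standard RAID-6
P/Q scheme: two parity words computed over GF(2^8), from which any two erased
data words can be recovered. A minimal sketch (illustration only; the paper's
cone-based construction is not reproduced here):

```python
def gf_mul(a: int, b: int) -> int:
    """Multiply in GF(2^8) with the usual RAID-6 polynomial 0x11D."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11D
        b >>= 1
    return p

def gf_pow(a: int, e: int) -> int:
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

def gf_inv(a: int) -> int:
    return gf_pow(a, 254)  # a^255 = 1 for nonzero a, so a^254 = a^-1

def pq_parity(data: list[int]) -> tuple[int, int]:
    """P = XOR of data bytes; Q = sum of 2^i * d_i over GF(2^8)."""
    P = Q = 0
    for i, d in enumerate(data):
        P ^= d
        Q ^= gf_mul(gf_pow(2, i), d)
    return P, Q

def recover_two(data: list[int], x: int, y: int, P: int, Q: int):
    """Recover erased data bytes at positions x != y from survivors and P, Q."""
    Pxy, Qxy = P, Q
    for i, d in enumerate(data):
        if i not in (x, y):
            Pxy ^= d
            Qxy ^= gf_mul(gf_pow(2, i), d)
    gx, gy = gf_pow(2, x), gf_pow(2, y)
    # Solve dx + dy = Pxy and gx*dx + gy*dy = Qxy over GF(2^8).
    dx = gf_mul(Qxy ^ gf_mul(gy, Pxy), gf_inv(gx ^ gy))
    return dx, Pxy ^ dx

stripe = [0x57, 0x83, 0x1E, 0xC2]
P, Q = pq_parity(stripe)
print(recover_two(stripe, 1, 3, P, Q) == (0x83, 0xC2))  # True
```

The assembly implementation discussed in the abstract would replace these
per-byte multiplications with table lookups or SIMD shifts; this sketch only
shows the algebra.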
|
cs/0611111
|
Distributed Control of Microscopic Robots in Biomedical Applications
|
cs.RO cs.MA
|
Current developments in molecular electronics, motors and chemical sensors
could enable constructing large numbers of devices able to sense, compute and
act in micron-scale environments. Such microscopic machines, of sizes
comparable to bacteria, could simultaneously monitor entire populations of
cells individually in vivo. This paper reviews plausible capabilities for
microscopic robots and the physical constraints due to operation in fluids at
low Reynolds number, diffusion-limited sensing and thermal noise from Brownian
motion. Simple distributed controls are then presented in the context of
prototypical biomedical tasks, which require control decisions on millisecond
time scales. The resulting behaviors illustrate trade-offs among speed,
accuracy and resource use. A specific example is monitoring for patterns of
chemicals in a flowing fluid released at chemically distinctive sites.
Information collected from a large number of such devices allows estimating
properties of cell-sized chemical sources in a macroscopic volume. The
microscopic devices moving with the fluid flow in small blood vessels can
detect chemicals released by tissues in response to localized injury or
infection. We find the devices can readily discriminate a single cell-sized
chemical source from the background chemical concentration, providing
high-resolution sensing in both time and space. By contrast, such a source
would be difficult to distinguish from background when diluted throughout the
blood volume as obtained with a blood sample.
|
cs/0611112
|
Channel Coding: The Road to Channel Capacity
|
cs.IT math.IT
|
Starting from Shannon's celebrated 1948 channel coding theorem, we trace the
evolution of channel coding from Hamming codes to capacity-approaching codes.
We focus on the contributions that have led to the most significant
improvements in performance vs. complexity for practical applications,
particularly on the additive white Gaussian noise (AWGN) channel. We discuss
algebraic block codes, and why they did not prove to be the way to get to the
Shannon limit. We trace the antecedents of today's capacity-approaching codes:
convolutional codes, concatenated codes, and other probabilistic coding
schemes. Finally, we sketch some of the practical applications of these codes.
|