| id | title | categories | abstract |
|---|---|---|---|
cs/0503033
|
An Introduction to the Summarization of Evolving Events: Linear and
Non-linear Evolution
|
cs.CL cs.IR
|
This paper examines the summarization of events that evolve through time. It
discusses different types of evolution, taking into account the time at which
the incidents of an event occur and the different sources reporting on
the specific event. It proposes an approach for multi-document summarization
which employs ``messages'' for representing the incidents of an event and
cross-document relations that hold between messages according to certain
conditions. The paper also outlines the current version of the summarization
system we are implementing to realize this approach.
|
cs/0503037
|
Mining Top-k Approximate Frequent Patterns
|
cs.DB cs.AI
|
Frequent pattern (itemset) mining in transactional databases is one of the
most well-studied problems in data mining. One obstacle that limits the
practical usage of frequent pattern mining is the extremely large number of
patterns generated. Such a large size of the output collection makes it
difficult for users to understand and use in practice. Even restricting the
output to the border of the frequent itemset collection does not help much in
alleviating the problem. In this paper we address the issue of overwhelmingly
large output size by introducing and studying the following problem: mining
top-k approximate frequent patterns. The union of the power sets of these k
patterns should satisfy the following conditions: (1) it includes as many
itemsets with large support as possible, and (2) it includes as few itemsets
with small support as possible. An integrated objective function is designed to
combine these two objectives. We then derive upper bounds on the objective
function and present an approximate branch-and-bound method for finding a
feasible solution. We give empirical evidence showing that our formulation and
approximation methods work well in practice.
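The two competing objectives can be made concrete with a toy illustration. The sketch below is not the paper's actual objective function; the equal-weight scoring, the `min_sup` threshold, and the transaction database are all invented for illustration. It scores a candidate set of k patterns by the supports of the itemsets that the union of their power sets covers:

```python
from itertools import combinations

def powerset(items):
    """All subsets of an itemset, as frozensets (including the empty set)."""
    s = list(items)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def support(itemset, transactions):
    """Number of transactions containing every item of `itemset`."""
    return sum(1 for t in transactions if itemset <= t)

# Toy transaction database (illustrative only; not from the paper).
db = [frozenset(t) for t in (["a", "b", "c"], ["a", "b"], ["a", "c"],
                             ["b", "c"], ["a"])]

def coverage_score(k_sets, transactions, min_sup=2):
    """Crude stand-in for the paper's integrated objective: reward each
    covered itemset with support >= min_sup, penalize each covered
    low-support itemset."""
    covered = set().union(*(powerset(s) for s in k_sets)) - {frozenset()}
    return sum(1 if support(i, transactions) >= min_sup else -1
               for i in covered)

print(coverage_score([frozenset("ab"), frozenset("ac")], db))  # 5
```

Here the two chosen patterns cover the five itemsets {a}, {b}, {c}, {a,b}, {a,c}, all with support at least 2, so the score is 5; covering a rarer itemset would subtract from it.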
|
cs/0503038
|
On a Kronecker products sum distance bounds
|
cs.IT math.IT
|
Binary linear error-correcting codes represented as the Kronecker product sum
of two code families are considered. The dimension and distance of the new
codes are investigated, and upper and lower bounds on the distance are
obtained. Some examples are given. It is shown that some classic constructions
are special cases of the one considered. The subclass of codes whose lower and
upper distance bounds coincide is identified.
|
cs/0503040
|
Uplink Throughput in a Single-Macrocell/Single-Microcell CDMA System,
with Application to Data Access Points
|
cs.IT math.IT
|
This paper studies a two-tier CDMA system in which the microcell base is
converted into a data access point (DAP), i.e., a limited-range base station
that provides high-speed access to one user at a time. The microcell (or DAP)
user operates on the same frequency as the macrocell users and has the same
chip rate. However, it adapts its spreading factor, and thus its data rate, in
accordance with interference conditions. By contrast, the macrocell serves
multiple simultaneous data users, each with the same fixed rate. The
achievable throughput for individual microcell users is examined and a simple,
accurate approximation for its probability distribution is presented.
Computations for average throughputs, both per-user and total, are also
presented. The numerical results highlight the impact of a desensitivity
parameter used in the base-selection process.
|
cs/0503041
|
Soft Handoff and Uplink Capacity in a Two-Tier CDMA System
|
cs.IT math.IT
|
This paper examines the effect of soft handoff on the uplink user capacity of
a CDMA system consisting of a single macrocell in which a single hotspot
microcell is embedded. The users of these two base stations operate over the
same frequency band. In the soft handoff scenario studied here, both macrocell
and microcell base stations serve each system user and the two received copies
of a desired user's signal are summed using maximal ratio combining. Exact and
approximate analytical methods are developed to compute uplink user capacity.
Simulation results demonstrate a 20% increase in user capacity compared to hard
handoff. In addition, simple, approximate methods are presented for estimating
soft handoff capacity and are shown to be quite accurate.
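The combining step behind the capacity gain can be illustrated with a one-line identity: under maximal ratio combining, the post-combining SNR is the sum of the per-branch SNRs, so serving a user from both base stations never does worse than the better single base. The numbers below are hypothetical and do not reproduce the paper's 20% capacity figure:

```python
def mrc_snr(branch_snrs):
    """Maximal ratio combining: the output SNR is the sum of the
    per-branch SNRs."""
    return sum(branch_snrs)

# Hypothetical linear-scale branch SNRs, purely to illustrate the gain.
macro_snr, micro_snr = 2.0, 0.5
soft = mrc_snr([macro_snr, micro_snr])  # soft handoff: both copies combined
hard = max(macro_snr, micro_snr)        # hard handoff: best single base
print(soft, hard)  # 2.5 2.0
```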
|
cs/0503042
|
Uplink User Capacity in a CDMA System with Hotspot Microcells: Effects
of Finite Transmit Power and Dispersion
|
cs.IT math.IT
|
This paper examines the uplink user capacity in a two-tier code division
multiple access (CDMA) system with hotspot microcells when user terminal power
is limited and the wireless channel is finitely-dispersive. A
finitely-dispersive channel causes variable fading of the signal power at the
output of the RAKE receiver. First, a two-cell system composed of one macrocell
and one embedded microcell is studied and analytical methods are developed to
estimate the user capacity as a function of a dimensionless parameter that
depends on the transmit power constraint and cell radius. Next, novel
analytical methods are developed to study the effect of variable fading, both
with and without transmit power constraints. Finally, the analytical methods
are extended to estimate uplink user capacity for multicell CDMA systems,
composed of multiple macrocells and multiple embedded microcells. In all cases,
the analysis-based estimates are compared with and confirmed by simulation
results.
|
cs/0503043
|
Complexity Issues in Finding Succinct Solutions of PSPACE-Complete
Problems
|
cs.AI cs.CC cs.LO
|
We study the problem of deciding whether some PSPACE-complete problems have
models of bounded size. Contrary to problems in NP, models of PSPACE-complete
problems may be exponentially large. However, such models may take polynomial
space in a succinct representation. For example, the models of a QBF are
explicitly represented by and-or trees (which are always of exponential size)
but can be succinctly represented by circuits (which can be polynomial or
exponential). We investigate the complexity of deciding the existence of such
succinct models when a bound on size is given.
|
cs/0503044
|
Generating Hard Satisfiable Formulas by Hiding Solutions Deceptively
|
cs.AI cond-mat.other cond-mat.stat-mech
|
To test incomplete search algorithms for constraint satisfaction problems
such as 3-SAT, we need a source of hard, but satisfiable, benchmark instances.
A simple way to do this is to choose a random truth assignment A, and then
choose clauses randomly from among those satisfied by A. However, this method
tends to produce easy problems, since the majority of literals point toward the
``hidden'' assignment A. Last year, Achlioptas, Jia and Moore proposed a
problem generator that cancels this effect by hiding both A and its complement.
While the resulting formulas appear to be just as hard for DPLL algorithms as
random 3-SAT formulas with no hidden assignment, they can be solved by WalkSAT
in only polynomial time. Here we propose a new method to cancel the attraction
to A, by choosing a clause with t > 0 literals satisfied by A with probability
proportional to q^t for some q < 1. By varying q, we can generate formulas
whose variables have no bias, i.e., which are equally likely to be true or
false; we can even cause the formula to ``deceptively'' point away from A. We
present theoretical and experimental results suggesting that these formulas are
exponentially hard both for DPLL algorithms and for incomplete algorithms such
as WalkSAT.
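The q-biased generator can be sketched by rejection sampling. Taking the hidden assignment A to be all-True without loss of generality, t is simply the number of positive literals in a clause; this is our illustrative reading of the construction, not the authors' code:

```python
import random

def hidden_assignment_formula(n, m, q, rng=None):
    """Generate m random 3-SAT clauses over n variables, all satisfied by
    the hidden assignment A (w.l.o.g. all-True), accepting each candidate
    clause with probability proportional to q^t, where t > 0 is its number
    of A-satisfied (here: positive) literals."""
    rng = rng or random.Random()
    clauses = []
    while len(clauses) < m:
        variables = rng.sample(range(1, n + 1), 3)
        clause = tuple(v if rng.random() < 0.5 else -v for v in variables)
        t = sum(lit > 0 for lit in clause)
        if t == 0:
            continue                 # clause would violate A: forbidden
        if rng.random() < q ** t:    # q < 1 damps clauses that point at A
            clauses.append(clause)
    return clauses

formula = hidden_assignment_formula(50, 200, 0.5, random.Random(0))
```

With q < 1 the acceptance weight q^t shrinks as more literals agree with A, which is what removes the bias toward the hidden assignment.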
|
cs/0503046
|
Hiding Satisfying Assignments: Two are Better than One
|
cs.AI cond-mat.dis-nn cond-mat.stat-mech cs.CC
|
The evaluation of incomplete satisfiability solvers depends critically on the
availability of hard satisfiable instances. A plausible source of such
instances consists of random k-SAT formulas whose clauses are chosen uniformly
from among all clauses satisfying some randomly chosen truth assignment A.
Unfortunately, instances generated in this manner tend to be relatively easy
and can be solved efficiently by practical heuristics. Roughly speaking, as the
formula's density increases, for a number of different algorithms, A acts as a
stronger and stronger attractor. Motivated by recent results on the geometry of
the space of satisfying truth assignments of random k-SAT and NAE-k-SAT
formulas, we introduce a simple twist on this basic model, which appears to
dramatically increase its hardness. Namely, in addition to forbidding the
clauses violated by the hidden assignment A, we also forbid the clauses
violated by its complement, so that both A and complement of A are satisfying.
It appears that under this ``symmetrization'' the effects of the two attractors
largely cancel out, making it much harder for algorithms to find any truth
assignment. We give theoretical and experimental evidence supporting this
assertion.
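With the hidden assignment taken as all-True (without loss of generality), the symmetrized model simply restricts to clauses of mixed sign: satisfied by A means at least one positive literal, and satisfied by its complement means at least one negative literal. A minimal sketch, ours rather than the authors':

```python
import random

def symmetric_hidden_formula(n, m, rng=None):
    """Random 3-SAT clauses chosen uniformly among those satisfied by BOTH
    a hidden assignment A (w.l.o.g. all-True) and its complement: every
    clause must contain at least one positive and one negative literal."""
    rng = rng or random.Random()
    clauses = []
    while len(clauses) < m:
        variables = rng.sample(range(1, n + 1), 3)
        clause = tuple(v if rng.random() < 0.5 else -v for v in variables)
        if any(l > 0 for l in clause) and any(l < 0 for l in clause):
            clauses.append(clause)
    return clauses

formula = symmetric_hidden_formula(30, 100, random.Random(1))
```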
|
cs/0503047
|
On Multiflows in Random Unit-Disk Graphs, and the Capacity of Some
Wireless Networks
|
cs.IT math.IT
|
We consider the capacity problem for wireless networks. Networks are modeled
as random unit-disk graphs, and the capacity problem is formulated as one of
finding the maximum value of a multicommodity flow. In this paper, we develop a
proof technique based on which we are able to obtain a tight characterization
of the solution to the linear program associated with the multiflow problem, to
within constants independent of network size. We also use this proof method to
analyze network capacity for a variety of transmitter/receiver architectures,
for which we obtain some conclusive results. These results contain as a special
case (and strengthen) those of Gupta and Kumar for random networks, for which a
new derivation is provided using only elementary counting and discrete
probability tools.
|
cs/0503052
|
Zeta-Dimension
|
cs.CC cs.IT math.IT
|
The zeta-dimension of a set A of positive integers is the infimum s such that
the sum of the reciprocals of the s-th powers of the elements of A is finite.
Zeta-dimension serves as a fractal dimension on the positive integers that
extends naturally and usefully to discrete lattices such as the set of all integer
lattice points in d-dimensional space.
This paper reviews the origins of zeta-dimension (which date to the
eighteenth and nineteenth centuries) and develops its basic theory, with
particular attention to its relationship with algorithmic information theory.
New results presented include extended connections between zeta-dimension and
classical fractal dimensions, a gale characterization of zeta-dimension, and a
theorem on the zeta-dimensions of pointwise sums and products of sets of
positive integers.
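The definition can be illustrated numerically. In this hypothetical check (ours, not from the paper), A is the set of perfect squares, whose zeta-dimension is 1/2: the sum of n^{-2s} over all n converges exactly when s > 1/2.

```python
def partial_zeta_sum(A, s):
    """Partial sum of 1/a^s over a finite set A of positive integers; the
    zeta-dimension of A is the infimum s for which the full sum is finite."""
    return sum(a ** -s for a in A)

# A = perfect squares: sum of (n^2)^(-s) = n^(-2s) converges iff s > 1/2.
squares = [n * n for n in range(1, 10001)]
print(partial_zeta_sum(squares, 1.0))   # bounded: tends to pi^2/6 ~ 1.645
print(partial_zeta_sum(squares, 0.25))  # unbounded: grows like 2*sqrt(N)
```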
|
cs/0503053
|
A hybrid MLP-PNN architecture for fast image superresolution
|
cs.CV cs.MM
|
Image superresolution methods process an input image sequence of a scene to
obtain a still image with increased resolution. Classical approaches to this
problem involve complex iterative minimization procedures, typically with high
computational costs. In this paper, a novel super-resolution algorithm that
substantially reduces the computational load is proposed. First, a
probabilistic neural network architecture is used to perform a scattered-point
interpolation of the image sequence data. The network kernel function is
optimally determined for this problem by a multi-layer perceptron trained on
synthetic data. The dependence of the network parameters on the sequence noise
level is quantitatively analyzed. The resulting super-sampled image is then
spatially filtered to correct finite pixel-size effects and yield the final
high-resolution estimate.
Results on a real outdoor sequence are presented, showing the quality of the
proposed method.
|
cs/0503056
|
Semi-automatic vectorization of linear networks on rasterized
cartographic maps
|
cs.CV cs.MM
|
A system for semi-automatic vectorization of linear networks (roads, rivers,
etc.) on rasterized cartographic maps is presented. In this system, human
intervention is limited to a graphic, interactive selection of the color
attributes of the information to be obtained. Using this data, the system
performs a preliminary extraction of the linear network, which is subsequently
completed, refined and vectorized by means of an automatic procedure. Results
on maps of different sources and scales are included.
|
cs/0503058
|
On the Stopping Distance and the Stopping Redundancy of Codes
|
cs.IT cs.DM math.IT
|
It is now well known that the performance of a linear code $C$ under
iterative decoding on a binary erasure channel (and other channels) is
determined by the size of the smallest stopping set in the Tanner graph for
$C$. Several recent papers refer to this parameter as the \emph{stopping
distance} $s$ of $C$. This is somewhat of a misnomer since the size of the
smallest stopping set in the Tanner graph for $C$ depends on the corresponding
choice of a parity-check matrix. It is easy to see that $s \le d$, where $d$ is
the minimum Hamming distance of $C$, and we show that it is always possible to
choose a parity-check matrix for $C$ (with sufficiently many dependent rows)
such that $s = d$. We thus introduce a new parameter, termed the \emph{stopping
redundancy} of $C$, defined as the minimum number of rows in a parity-check
matrix $H$ for $C$ such that the corresponding stopping distance $s(H)$ attains
its largest possible value, namely $s(H) = d$.
We then derive general bounds on the stopping redundancy of linear codes. We
also examine several simple ways of constructing codes from other codes, and
study the effect of these constructions on the stopping redundancy.
Specifically, for the family of binary Reed-Muller codes (of all orders), we
prove that their stopping redundancy is at most a constant times their
conventional redundancy. We show that the stopping redundancies of the binary
and ternary extended Golay codes are at most 35 and 22, respectively. Finally,
we provide upper and lower bounds on the stopping redundancy of MDS codes.
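The definitions admit a direct brute-force check on small codes. The sketch below (illustrative only; infeasible beyond toy sizes) computes the stopping distance s(H) of a given parity-check matrix: the size of the smallest nonempty set of columns that meets no row in exactly one position.

```python
from itertools import combinations

def stopping_distance(H):
    """Size of the smallest nonempty stopping set of parity-check matrix H
    (a list of 0/1 rows): a set S of columns such that no row of H has
    exactly one 1 inside S. Exhaustive search, for small codes only."""
    n = len(H[0])
    for size in range(1, n + 1):
        for S in combinations(range(n), size):
            if all(sum(row[j] for j in S) != 1 for row in H):
                return size
    return None

# [7,4] Hamming code with the standard 3-row parity-check matrix.
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
print(stopping_distance(H))  # 3
```

For this particular H the stopping distance already equals the minimum distance d = 3; adding dependent rows, as the abstract describes, is what guarantees s(H) = d in general.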
|
cs/0503059
|
Genetic representations of objects: simple analogies or efficient models? The
"evolutic" point of view
|
cs.AI nlin.AO
|
For thirty years, engineers have commonly used analogies with natural
evolution to optimize technical devices. More often than not, these "genetic"
or "evolutionary" methods are viewed purely from a practical standpoint, as
efficient optimization tools that can be used in place of other methods
(gradient methods, simplex methods, ...). In this paper, we try to show that
sciences and techniques, but also human organizations, and more generally all
complex systems, obey evolution rules for which genetics is a good
representative model, even if the genes and chromosomes are "virtual". Thus,
far from being only a specific tool aiding the design of technological
solutions, the genetic representation is a global, dynamic model of the
evolution of a world shaped by human activity.
|
cs/0503061
|
Integrity Constraints in Trust Management
|
cs.CR cs.DB
|
We introduce the use, monitoring, and enforcement of integrity constraints in
trust management-style authorization systems. We consider what portions of the
policy state must be monitored to detect violations of integrity constraints.
Then we address the fact that not all participants in a trust management system
can be trusted to assist in such monitoring, and show how many integrity
constraints can be monitored in a conservative manner so that trusted
participants detect and report if the system enters a policy state from which
evolution in unmonitored portions of the policy could lead to a constraint
violation.
|
cs/0503062
|
On the Complexity of Nonrecursive XQuery and Functional Query Languages
on Complex Values
|
cs.DB cs.CC
|
This paper studies the complexity of evaluating functional query languages
for complex values such as monad algebra and the recursion-free fragment of
XQuery.
We show that monad algebra with equality restricted to atomic values is
complete for the class TA[2^{O(n)}, O(n)] of problems solvable in linear
exponential time with a linear number of alternations. The monotone fragment of
monad algebra with atomic value equality but without negation is complete for
nondeterministic exponential time. For monad algebra with deep equality, we
establish TA[2^{O(n)}, O(n)] lower and exponential-space upper bounds.
Then we study a fragment of XQuery, Core XQuery, that seems to incorporate
all the features of a query language on complex values that are traditionally
deemed essential. A close connection between monad algebra on lists and Core
XQuery (with ``child'' as the only axis) is exhibited, and it is shown that
these languages are expressively equivalent up to representation issues. We
show that Core XQuery is just as hard as monad algebra w.r.t. combined
complexity, and that it is in TC0 if the query is assumed fixed.
|
cs/0503063
|
Randomly Spread CDMA: Asymptotics via Statistical Physics
|
cs.IT math.IT
|
This paper studies randomly spread code-division multiple access (CDMA) and
multiuser detection in the large-system limit using the replica method
developed in statistical physics. Arbitrary input distributions and flat fading
are considered. A generic multiuser detector in the form of the posterior mean
estimator is applied before single-user decoding. The generic detector can be
particularized to the matched filter, decorrelator, linear MMSE detector, the
jointly or the individually optimal detector, and others. It is found that the
detection output for each user, although in general asymptotically non-Gaussian
conditioned on the transmitted symbol, converges as the number of users goes to
infinity to a deterministic function of a "hidden" Gaussian statistic
independent of the interferers. Thus the multiuser channel can be decoupled:
Each user experiences an equivalent single-user Gaussian channel, whose
signal-to-noise ratio suffers a degradation due to the multiple-access
interference. The uncoded error performance (e.g., symbol-error-rate) and the
mutual information can then be fully characterized using the degradation
factor, also known as the multiuser efficiency, which can be obtained by
solving a pair of coupled fixed-point equations identified in this paper. Based
on a general linear vector channel model, the results are also applicable to
MIMO channels such as in multiantenna systems.
|
cs/0503064
|
Minimum-Cost Multicast over Coded Packet Networks
|
cs.IT cs.NI math.IT
|
We consider the problem of establishing minimum-cost multicast connections
over coded packet networks, i.e. packet networks where the contents of outgoing
packets are arbitrary, causal functions of the contents of received packets. We
consider both wireline and wireless packet networks as well as both static
multicast (where membership of the multicast group remains constant for the
duration of the connection) and dynamic multicast (where membership of the
multicast group changes in time, with nodes joining and leaving the group).
For static multicast, we reduce the problem to a polynomial-time solvable
optimization problem, and we present decentralized algorithms for solving it.
These algorithms, when coupled with existing decentralized schemes for
constructing network codes, yield a fully decentralized approach for achieving
minimum-cost multicast. By contrast, establishing minimum-cost static multicast
connections over routed packet networks is a very difficult problem even using
centralized computation, except in the special cases of unicast and broadcast
connections.
For dynamic multicast, we reduce the problem to a dynamic programming problem
and apply the theory of dynamic programming to suggest how it may be solved.
|
cs/0503070
|
Improved message passing for inference in densely connected systems
|
cs.IT cond-mat.dis-nn math.IT
|
An improved inference method for densely connected systems is presented. The
approach is based on passing condensed messages between variables, representing
macroscopic averages of microscopic messages. We extend previous work that
showed promising results in cases where the solution space is contiguous to
cases where fragmentation occurs. We apply the method to the signal detection
problem of Code Division Multiple Access (CDMA) for demonstrating its
potential. A highly efficient practical algorithm is also derived on the basis
of insight gained from the analysis.
|
cs/0503071
|
Consistency in Models for Distributed Learning under Communication
Constraints
|
cs.IT cs.LG math.IT
|
Motivated by sensor networks and other distributed settings, several models
for distributed learning are presented. The models differ from classical works
in statistical pattern recognition by allocating observations of an independent
and identically distributed (i.i.d.) sampling process amongst members of a
network of simple learning agents. The agents are limited in their ability to
communicate to a central fusion center and thus, the amount of information
available for use in classification or regression is constrained. For several
basic communication models in both the binary classification and regression
frameworks, we question the existence of agent decision rules and fusion rules
that result in a universally consistent ensemble. The answers to this question
present new issues to consider with regard to universal consistency. Insofar as
these models present a useful picture of distributed scenarios, this paper
addresses the issue of whether or not the guarantees provided by Stone's
Theorem in centralized environments hold in distributed settings.
|
cs/0503072
|
Distributed Learning in Wireless Sensor Networks
|
cs.IT cs.LG math.IT
|
The problem of distributed or decentralized detection and estimation in
applications such as wireless sensor networks has often been considered in the
framework of parametric models, in which strong assumptions are made about a
statistical description of nature. In certain applications, such assumptions
are warranted and systems designed from these models show promise. However, in
other scenarios, prior knowledge is at best vague and translating such
knowledge into a statistical model is undesirable. Applications such as these
pave the way for a nonparametric study of distributed detection and estimation.
In this paper, we review recent work of the authors in which some elementary
models for distributed learning are considered. These models are in the spirit
of classical work in nonparametric statistics and are applicable to wireless
sensor networks.
|
cs/0503076
|
Geometric Models of Rolling-Shutter Cameras
|
cs.CV cs.RO
|
Cameras with rolling shutters are becoming more common as low-power, low-cost
CMOS sensors are being used more frequently in cameras. The rolling shutter
means that not all scanlines are exposed over the same time interval. The
effects of a rolling shutter are noticeable when either the camera or objects
in the scene are moving and can lead to systematic biases in projection
estimation. We develop a general projection equation for a rolling shutter
camera and show how it is affected by different types of camera motion. In the
case of fronto-parallel motion, we show that the camera can be modeled as an
X-slit camera. We also develop approximate projection equations for a non-zero
angular velocity about the optical axis and approximate the projection equation
for a constant-velocity screw motion. We demonstrate how the rolling shutter
affects the projective geometry of the camera, and in turn the
structure-from-motion.
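The per-scanline exposure can be sketched as a fixed-point problem: scanline v is exposed at time v * line_time, and the image row of a moving point must agree with the scanline being read out at that instant. The toy model below (our notation and numbers, not the paper's projection equation) iterates that fixed point for a pinhole camera:

```python
def rolling_shutter_projection(X0, V, f, line_time, iters=50):
    """Pinhole projection (focal length f) of a point X0 moving with
    constant velocity V, where scanline v is exposed at time v * line_time.
    Iterates the fixed point y(v * line_time) = v."""
    v = 0.0
    for _ in range(iters):
        t = v * line_time
        X, Y, Z = (X0[i] + V[i] * t for i in range(3))
        x, y = f * X / Z, f * Y / Z  # perspective projection at time t
        v = y                        # the row actually being exposed
    return x, y

# A static point reduces to the ordinary global-shutter projection.
print(rolling_shutter_projection([1.0, 2.0, 4.0], [0.0, 0.0, 0.0],
                                 100.0, 1e-4))  # (25.0, 50.0)
```

For a moving point the fixed point shifts away from the global-shutter image location, which is the systematic bias the abstract refers to.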
|
cs/0503077
|
Weighted Automata in Text and Speech Processing
|
cs.CL cs.HC
|
Finite-state automata are a very effective tool in natural language
processing. However, in a variety of applications and especially in speech
processing, it is necessary to consider more general machines in which arcs are
assigned weights or costs. We briefly describe some of the main theoretical and
algorithmic aspects of these machines. In particular, we describe an efficient
composition algorithm for weighted transducers, and give examples illustrating
the value of determinization and minimization algorithms for weighted automata.
|
cs/0503078
|
Obtaining Membership Functions from a Neuron Fuzzy System extended by
Kohonen Network
|
cs.NE
|
This article presents the Neo-Fuzzy-Neuron Modified by Kohonen Network
(NFN-MK), a hybrid computational model that combines fuzzy-system techniques
and artificial neural networks. Its main task is the automatic generation of
membership functions, in particular triangular ones, aimed at the dynamic
modeling of a system. The model is tested by simulating real systems, here
represented by a nonlinear mathematical function. Comparison with the results
obtained by traditional neural networks, and with related studies of
neuro-fuzzy systems applied to system identification, shows that the NFN-MK
model achieves similar performance despite its greater simplicity.
|
cs/0503079
|
Space-time databases modeling global semantic networks
|
cs.IT cs.IR math.IT
|
This paper presents an approach to creating global knowledge systems, using a
new philosophy and infrastructure for a global distributed semantic network (a
frame knowledge-representation system) based on a space-time database
construction. The main idea of the space-time database environment introduced
in the paper is to bind a document (an information frame, a piece of knowledge)
to a special kind of entity that we call a permanent entity: an object without
history or evolution, described by a "point" in a generalized informational
space-time (not an evolving object in real space that has a history). For
documents this means that document content is unchangeable and documents are
absolutely persistent. This approach leads to new knowledge representation and
retrieval techniques. We discuss the way of applying the concept to a global
distributed scientific library and scientific workspace. Some practical aspects
of the work are elaborated by the open IT project at
http://sourceforge.net/projects/gil/.
|
cs/0503081
|
An Optimization Model for Outlier Detection in Categorical Data
|
cs.DB cs.AI
|
The task of outlier detection is to find small groups of data objects that
are exceptional when compared with the rest of the data. Detection of such
outliers is important for many applications such as fraud detection and
customer migration. Most existing methods are designed for numeric data. They
will encounter problems with real-life applications that contain categorical
data. In this paper, we formally define the problem of outlier detection in
categorical data as an optimization problem from a global viewpoint. Moreover,
we present a local-search heuristic based algorithm for efficiently finding
feasible solutions. Experimental results on real datasets and large synthetic
datasets demonstrate the superiority of our model and algorithm.
|
cs/0503082
|
Spines of Random Constraint Satisfaction Problems: Definition and
Connection with Computational Complexity
|
cs.CC cond-mat.dis-nn cs.AI
|
We study the connection between the order of phase transitions in
combinatorial problems and the complexity of decision algorithms for such
problems. We rigorously show that, for a class of random constraint
satisfaction problems, a limited connection between the two phenomena indeed
exists. Specifically, we extend the definition of the spine order parameter of
Bollobas et al. to random constraint satisfaction problems, rigorously showing
that for such problems a discontinuity of the spine is associated with a
$2^{\Omega(n)}$ resolution complexity (and thus a $2^{\Omega(n)}$ complexity of
DPLL algorithms) on random instances. The two phenomena have a common
underlying cause: the emergence of ``large'' (linear size) minimally
unsatisfiable subformulas of a random formula at the satisfiability phase
transition.
We present several further results that add weight to the intuition that
random constraint satisfaction problems with a sharp threshold and a continuous
spine are ``qualitatively similar to random 2-SAT''. Finally, we argue that it
is the spine rather than the backbone parameter whose continuity has
implications for the decision complexity of combinatorial problems, and we
provide experimental evidence that the two parameters can behave in a different
manner.
|
cs/0503084
|
The Peculiarities of Nonstationary Formation of Inhomogeneous Structures
of Charged Particles in the Electrodiffusion Processes
|
cs.CE
|
In this paper the distribution of charged particles is constructed under the
approximation of ambipolar diffusion. The results of mathematical modelling in
the two-dimensional case, taking into account the velocities of the system, are
presented.
|
cs/0503085
|
Dynamic Shannon Coding
|
cs.IT math.IT
|
We present a new algorithm for dynamic prefix-free coding, based on Shannon
coding. We give a simple analysis and prove a better upper bound on the length
of the encoding produced than the corresponding bound for dynamic Huffman
coding. We show how our algorithm can be modified for efficient
length-restricted coding, alphabetic coding and coding with unequal letter
costs.
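The abstract does not spell out the algorithm, but the static Shannon code it builds on is easy to sketch: a symbol of probability p gets the first ceil(log2(1/p)) bits of the binary expansion of the running cumulative probability. This is our illustration of the static scheme, not the paper's dynamic variant:

```python
import math

def shannon_code(probs):
    """Static Shannon code: taking symbols in order of decreasing
    probability, a symbol of probability p gets the first ceil(log2(1/p))
    bits of the binary expansion of the cumulative probability so far."""
    code, cum = {}, 0.0
    for sym, p in sorted(probs.items(), key=lambda kv: -kv[1]):
        length = math.ceil(-math.log2(p))
        bits, x = [], cum
        for _ in range(length):       # extract `length` bits of cum
            x *= 2
            bit = int(x)
            x -= bit
            bits.append(str(bit))
        code[sym] = "".join(bits)
        cum += p
    return code

code = shannon_code({"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125})
print(code)  # {'a': '0', 'b': '10', 'c': '110', 'd': '111'} -- prefix-free
```

The codeword lengths ceil(log2(1/p)) are within one bit of the entropy per symbol, which is the kind of bound the dynamic analysis refines.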
|
cs/0503087
|
Dynamic Simulation of Construction Machinery: Towards an Operator Model
|
cs.CE
|
In dynamic simulation of complete wheel loaders, one interesting aspect,
specific for the working task, is the momentary power distribution between
drive train and hydraulics, which is balanced by the operator.
This paper presents the initial results of a simulation model of a human
operator. Rather than letting the operator model follow a predefined path with
control inputs at given points, it follows a collection of general rules that
together describe the machine's working cycle in a generic way. The advantage
of this is that the working task description and the operator model itself are
independent of the machine's technical parameters. Complete sub-system
characteristics can thus be changed without compromising the relevance and
validity of the simulation. Ultimately, this can be used to assess a machine's
total performance, fuel efficiency and operability already in the concept phase
of the product development process.
|
cs/0503088
|
General non-asymptotic and asymptotic formulas in channel resolvability
and identification capacity and their application to wire-tap channel
|
cs.IT math.IT
|
Several non-asymptotic formulas are established for channel resolvability and
identification capacity, and they are applied to the wire-tap channel. By using
these formulas, the $\epsilon$ capacities of the above three problems are
considered in the most general setting, where no structural assumptions such as
the stationary memoryless property are made on a channel. As a result, we solve
an open problem proposed in Han & Verdu and Han. Moreover, we obtain lower
bounds on the exponents of the error probability and the wire-tapper's
information in the wire-tap channel.
|
cs/0503089
|
Second order asymptotics in fixed-length source coding and intrinsic
randomness
|
cs.IT math.IT
|
Second order asymptotics of fixed-length source coding and intrinsic
randomness are discussed under a constant error constraint. There is a
difference between the optimal rates of fixed-length source coding and
intrinsic randomness, which never occurs in the first order asymptotics. In
addition, the relation between the uniform distribution and compressed data is
discussed based on this fact. These results are valid for general information
sources as well as for independent and identically distributed sources. A
universal code attaining the second order optimal rate is also constructed.
|
cs/0503092
|
Monotonic and Nonmonotonic Preference Revision
|
cs.DB cs.AI
|
We study here preference revision, considering both the monotonic case where
the original preferences are preserved and the nonmonotonic case where the new
preferences may override the original ones. We use a relational framework in
which preferences are represented using binary relations (not necessarily
finite). We identify several classes of revisions that preserve order axioms,
for example the axioms of strict partial or weak orders. We consider
applications of our results to preference querying in relational databases.
|
cs/0504001
|
Probabilistic and Team PFIN-type Learning: General Properties
|
cs.LG
|
We consider the probability hierarchy for Popperian FINite learning and study
the general properties of this hierarchy. We prove that the probability
hierarchy is decidable, i.e. there exists an algorithm that receives p_1 and
p_2 and answers whether PFIN-type learning with the probability of success p_1
is equivalent to PFIN-type learning with the probability of success p_2.
To prove our result, we analyze the topological structure of the probability
hierarchy. We prove that it is well-ordered in descending ordering and
order-equivalent to ordinal epsilon_0. This shows that the structure of the
hierarchy is very complicated.
Using similar methods, we also prove that, for PFIN-type learning, team
learning and probabilistic learning are of the same power.
|
cs/0504003
|
Multiple Description Quantization via Gram-Schmidt Orthogonalization
|
cs.IT math.IT
|
The multiple description (MD) problem has received considerable attention as
a model of information transmission over unreliable channels. A general
framework for designing efficient multiple description quantization schemes is
proposed in this paper. We provide a systematic treatment of the El Gamal-Cover
(EGC) achievable MD rate-distortion region, and show that any point in the EGC
region can be achieved via a successive quantization scheme along with
quantization splitting. For the quadratic Gaussian case, the proposed scheme
has an intrinsic connection with the Gram-Schmidt orthogonalization, which
implies that the whole Gaussian MD rate-distortion region is achievable with a
sequential dithered lattice-based quantization scheme as the dimension of the
(optimal) lattice quantizers becomes large. Moreover, this scheme is shown to
be universal for all i.i.d. smooth sources with performance no worse than that
for an i.i.d. Gaussian source with the same variance and asymptotically optimal
at high resolution. A class of low-complexity MD scalar quantizers in the
proposed general framework is also constructed and illustrated
geometrically; the performance is analyzed in the high resolution regime, which
exhibits a noticeable improvement over the existing MD scalar quantization
schemes.
|
cs/0504005
|
Fast Codes for Large Alphabets
|
cs.IT math.IT
|
We address the problem of constructing a fast lossless code in the case when
the source alphabet is large. The main idea of the new scheme may be described
as follows. We group letters with small probabilities into subsets (acting as
super letters) and use time-consuming coding for these subsets only, whereas
letters within a subset have the same code length and therefore can be coded
fast. The described scheme can be applied to sources with known and unknown
statistics.
|
cs/0504006
|
Using Information Theory Approach to Randomness Testing
|
cs.IT math.IT
|
We address the problem of detecting deviations of binary sequence from
randomness,which is very important for random number (RNG) and pseudorandom
number generators (PRNG). Namely, we consider a null hypothesis $H_0$ that a
given bit sequence is generated by Bernoulli source with equal probabilities of
0 and 1 and the alternative hypothesis $H_1$ that the sequence is generated by
a stationary and ergodic source which differs from the source under $H_0$. We
show that data compression methods can be used as a basis for such testing and
describe two new tests for randomness, which are based on ideas of universal
coding. Known statistical tests and suggested ones are applied for testing
PRNGs. Those experiments show that the power of the new tests is greater than
of many known algorithms.
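The idea that compressibility is evidence against $H_0$ can be illustrated with a toy check. This is in the spirit of the universal-coding tests described above, not the paper's actual tests: the use of zlib and the threshold are illustrative choices:

```python
import zlib

def looks_nonrandom(bitstring, margin=0.95):
    """Toy compression-based randomness check: pack the bit string into
    bytes and compress. Under H0 the data should be incompressible, so a
    compression ratio well below 1 is evidence against randomness."""
    data = bytes(int(bitstring[i:i + 8], 2)
                 for i in range(0, len(bitstring) - 7, 8))
    ratio = len(zlib.compress(data, 9)) / len(data)
    return ratio < margin  # True => reject H0
```

A periodic sequence such as `"01" * 4000` compresses dramatically and is rejected, while the output of a well-mixed generator typically does not compress at all.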
|
cs/0504010
|
Reversible Fault-Tolerant Logic
|
cs.IT math.IT quant-ph
|
It is now widely accepted that the CMOS technology implementing irreversible
logic will hit a scaling limit beyond 2016, and that the increased power
dissipation is a major limiting factor. Reversible computing can potentially
require arbitrarily small amounts of energy. Recently several nano-scale
devices which have the potential to scale, and which naturally perform
reversible logic, have emerged. This paper addresses several fundamental issues
that need to be addressed before any nano-scale reversible computing systems
can be realized, including reliability and performance trade-offs and
architecture optimization. Many nano-scale devices will be limited to only
near-neighbor interactions, requiring careful optimization of circuits. We
provide efficient fault-tolerant (FT) circuits when restricted to both 2D and
1D. Finally, we compute bounds on the entropy (and hence, heat) generated by
our FT circuits and provide quantitative estimates of how large we can make
our circuits before we lose any advantage over irreversible computing.
|
cs/0504011
|
Average Coset Weight Distribution of Combined LDPC Matrix Ensemble
|
cs.IT math.IT
|
In this paper, the average coset weight distribution (ACWD) of structured
ensembles of LDPC (Low-Density Parity-Check) matrices, called combined
ensembles, is discussed. A combined ensemble is composed of a set of simpler
ensembles such as a regular bipartite ensemble. Two classes of combined
ensembles are of prime importance: the stacked ensemble and the concatenated
ensemble, which consist of sets of stacked matrices and concatenated matrices,
respectively. The ACWD formulas for these ensembles are derived in this paper.
Such formulas are key tools for evaluating the ACWD of a complex combined
ensemble.
  From the ACWD of an ensemble, we can obtain some detailed properties of a
code (e.g., the weight of coset leaders) which are not available from an
average weight distribution. Moreover, it is shown that the analysis based on
the ACWD is indispensable for evaluating the average weight distribution of
some classes of combined ensembles.
|
cs/0504013
|
Pseudocodewords of Tanner graphs
|
cs.IT math.IT
|
This paper presents a detailed analysis of pseudocodewords of Tanner graphs.
Pseudocodewords arising on the iterative decoder's computation tree are
distinguished from pseudocodewords arising on finite degree lifts. Lower bounds
on the minimum pseudocodeword weight are presented for the BEC, BSC, and AWGN
channel. Some structural properties of pseudocodewords are examined, and
pseudocodewords and graph properties that are potentially problematic with
min-sum iterative decoding are identified. An upper bound on the minimum degree
lift needed to realize a particular irreducible lift-realizable pseudocodeword
is given in terms of its maximal component, and it is shown that all
irreducible lift-realizable pseudocodewords have components upper bounded by a
finite value $t$ that is dependent on the graph structure. Examples and
different Tanner graph representations of individual codes are examined and the
resulting pseudocodeword distributions and iterative decoding performances are
analyzed. The results obtained provide some insights in relating the structure
of the Tanner graph to the pseudocodeword distribution and suggest ways of
designing Tanner graphs with good minimum pseudocodeword weight.
|
cs/0504014
|
Network Information Flow with Correlated Sources
|
cs.IT math.IT
|
In this paper, we consider a network communications problem in which multiple
correlated sources must be delivered to a single data collector node, over a
network of noisy independent point-to-point channels. We prove that perfect
reconstruction of all the sources at the sink is possible if and only if, for
all partitions of the network nodes into two subsets S and S^c such that the
sink is always in S^c, we have that H(U_S|U_{S^c}) < \sum_{i\in S,j\in S^c}
C_{ij}. Our main finding is that in this setup a general source/channel
separation theorem holds, and that Shannon information behaves as a classical
network flow, identical in nature to the flow of water in pipes. At first
glance, it might seem surprising that separation holds in a fairly general
network situation like the one we study. A closer look, however, reveals that
the reason for this is that our model allows only for independent
point-to-point channels between pairs of nodes, and not multiple-access and/or
broadcast channels, for which separation is well known not to hold. This
``information as flow'' view provides an algorithmic interpretation for our
results, among which perhaps the most important one is the optimality of
implementing codes using a layered protocol stack.
|
cs/0504015
|
Design of Block Transceivers with Decision Feedback Detection
|
cs.IT math.IT
|
This paper presents a method for jointly designing the transmitter-receiver
pair in a block-by-block communication system that employs (intra-block)
decision feedback detection. We provide closed-form expressions for
transmitter-receiver pairs that simultaneously minimize the arithmetic mean
squared error (MSE) at the decision point (assuming perfect feedback), the
geometric MSE, and the bit error rate of a uniformly bit-loaded system at
moderate-to-high signal-to-noise ratios. Separate expressions apply for the
``zero-forcing'' and ``minimum MSE'' (MMSE) decision feedback structures. In
the MMSE case, the proposed design also maximizes the Gaussian mutual
information and suggests that one can approach the capacity of the block
transmission system using (independent instances of) the same (Gaussian) code
for each element of the block. Our simulation studies indicate that the
proposed transceivers perform significantly better than standard transceivers,
and that they retain their performance advantages in the presence of error
propagation.
|
cs/0504016
|
Shortened Array Codes of Large Girth
|
cs.DM cs.IT math.IT
|
One approach to designing structured low-density parity-check (LDPC) codes
with large girth is to shorten codes with small girth in such a manner that the
deleted columns of the parity-check matrix contain all the variables involved
in short cycles. This approach is especially effective if the parity-check
matrix of a code is a matrix composed of blocks of circulant permutation
matrices, as is the case for the class of codes known as array codes. We show
how to shorten array codes by deleting certain columns of their parity-check
matrices so as to increase their girth. The shortening approach is based on the
observation that for array codes, and in fact for a slightly more general class
of LDPC codes, the cycles in the corresponding Tanner graph are governed by
certain homogeneous linear equations with integer coefficients. Consequently,
we can selectively eliminate cycles from an array code by only retaining those
columns from the parity-check matrix of the original code that are indexed by
integer sequences that do not contain solutions to the equations governing
those cycles. We provide Ramsey-theoretic estimates for the maximum number of
columns that can be retained from the original parity-check matrix with the
property that the sequence of their indices avoids solutions to various types of
cycle-governing equations. This translates to estimates of the rate penalty
incurred in shortening a code to eliminate cycles. Simulation results show that
for the codes considered, shortening them to increase the girth can lead to
significant gains in signal-to-noise ratio in the case of communication over an
additive white Gaussian noise channel.
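The parity-check matrices being shortened can be sketched with the standard circulant construction of array codes (assumed here for illustration): an m x q array of q x q blocks in which block (i, j) is the (i*j)-th power of the cyclic shift permutation:

```python
def array_code_H(q, m):
    """Parity-check matrix of an array code (standard circulant
    construction): m x q blocks of size q x q, where block (i, j) is
    P^(i*j) and P is the cyclic shift permutation matrix, i.e. row r of
    the block has its single 1 at column (r + i*j) mod q."""
    H = [[0] * (q * q) for _ in range(m * q)]
    for i in range(m):            # block row
        for j in range(q):        # block column
            s = (i * j) % q       # shift amount for this block
            for r in range(q):
                H[i * q + r][j * q + (r + s) % q] = 1
    return H
```

Shortening as described above amounts to retaining only those block columns j whose indices avoid solutions to the cycle-governing equations, then deleting the other columns of H.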
|
cs/0504017
|
A new SISO algorithm with application to turbo equalization
|
cs.IT math.IT
|
In this paper we propose a new soft-input soft-output equalization algorithm,
offering very good performance/complexity tradeoffs. It follows the structure
of the BCJR algorithm, but dynamically constructs a simplified trellis during
the forward recursion. In each trellis section, only the M states with the
strongest forward metric are preserved, similar to the M-BCJR algorithm. Unlike
the M-BCJR, however, the remaining states are not deleted, but rather merged
into the surviving states. The new algorithm compares favorably with the
reduced-state BCJR algorithm, offering better performance and more flexibility,
particularly for systems with higher order modulations.
|
cs/0504020
|
The Viterbi Algorithm: A Personal History
|
cs.IT math.IT
|
The story of the Viterbi algorithm (VA) is told from a personal perspective.
Applications both within and beyond communications are discussed. In brief
summary, the VA has proved to be an extremely important algorithm in a
surprising variety of fields.
|
cs/0504021
|
Near Perfect Decoding of LDPC Codes
|
cs.IT math.IT
|
Cooperative optimization is a new approach to finding the global optima of
complicated functions of many variables. It has some important properties not
possessed by any conventional optimization method. It has been successfully
applied to solving many large-scale optimization problems in image processing,
computer vision, and computational chemistry. This paper shows the application
of this optimization principle to decoding LDPC codes, which is another hard
combinatorial optimization problem. In our experiments, it significantly
outperformed the sum-product algorithm, the best known method for decoding
LDPC codes. Compared to the sum-product algorithm, our algorithm reduced the
error rate threefold, improved the speed six-fold, and lowered error floors
dramatically in decoding.
|
cs/0504022
|
A Matter of Opinion: Sentiment Analysis and Business Intelligence
(position paper)
|
cs.CL
|
A general-audience introduction to the area of "sentiment analysis", the
computational treatment of subjective, opinion-oriented language (an example
application is determining whether a review is "thumbs up" or "thumbs down").
Some challenges, applications to business-intelligence tasks, and potential
future directions are described.
|
cs/0504024
|
Constraint-Based Qualitative Simulation
|
cs.AI cs.LO
|
We consider qualitative simulation involving a finite set of qualitative
relations in the presence of complete knowledge about their interrelationship.
We
show how it can be naturally captured by means of constraints expressed in
temporal logic and constraint satisfaction problems. The constraints relate at
each stage the 'past' of a simulation with its 'future'. The benefit of this
approach is that it readily leads to an implementation based on constraint
technology that can be used to generate simulations and to answer queries about
them.
|
cs/0504028
|
On Extrinsic Information of Good Codes Operating Over Discrete
Memoryless Channels
|
cs.IT math.IT
|
We show that the Extrinsic Information about the coded bits of any good
(capacity achieving) code operating over a wide class of discrete memoryless
channels (DMC) is zero when channel capacity is below the code rate and a
positive constant otherwise; that is, the Extrinsic Information Transfer (EXIT)
chart is a step function of channel quality for any capacity achieving code.
It follows that, for a common class of iterative receivers where the error
correcting decoder must operate at first iteration at rate above capacity (such
as in turbo equalization, turbo channel estimation, parallel and serial
concatenated coding and the like), classical good codes which achieve capacity
over the DMC are not effective and should be replaced by different new ones.
Another meaning of the results is that a good code operating at rate above
channel capacity falls apart into its individual transmitted symbols in the
sense that all the information about a coded transmitted symbol is contained in
the corresponding received symbol and no information about it can be inferred
from the other received symbols. The binary input additive white Gaussian noise
channel is treated in part 1 of this report. Part 2 extends the results to the
symmetric binary channel and to the binary erasure channel, and provides a
heuristic extension to a wider class of channel models.
|
cs/0504030
|
Sufficient conditions for convergence of the Sum-Product Algorithm
|
cs.IT cs.AI math.IT
|
We derive novel conditions that guarantee convergence of the Sum-Product
algorithm (also known as Loopy Belief Propagation or simply Belief Propagation)
to a unique fixed point, irrespective of the initial messages. The
computational complexity of the conditions is polynomial in the number of
variables. In contrast with previously existing conditions, our results are
directly applicable to arbitrary factor graphs (with discrete variables) and
are shown to be valid also in the case of factors containing zeros, under some
additional conditions. We compare our bounds with existing ones, numerically
and, if possible, analytically. For binary variables with pairwise
interactions, we derive sufficient conditions that take into account local
evidence (i.e., single variable factors) and the type of pair interactions
(attractive or repulsive). It is shown empirically that this bound outperforms
existing bounds.
|
cs/0504031
|
Convexity Analysis of Snake Models Based on Hamiltonian Formulation
|
cs.CV cs.GR
|
This paper presents a convexity analysis for the dynamic snake model based on
the Potential Energy functional and the Hamiltonian formulation of classical
mechanics. First we see the snake model as a dynamical system whose
singular points are the borders we seek. Next we show that a necessary
condition for a singular point to be an attractor is that the energy functional
is strictly convex in a neighborhood of it, that is, the singular point must
be a local minimum of the potential energy. As a consequence of this analysis,
a local expression relating the dynamic parameters and the rate of convergence
arises. Such results link the convexity analysis of the potential energy with
the dynamic snake model and point to the necessity of a physical quantity
whose convexity analysis is related to the dynamics and which incorporates
the velocity space. Such a quantity is exactly the (conservative)
Hamiltonian of the system.
|
cs/0504032
|
Critical Point for Maximum Likelihood Decoding of Linear Block Codes
|
cs.IT math.IT
|
In this letter, the SNR value at which the error performance curve of a
soft-decision maximum likelihood decoder reaches the slope corresponding to
the code's minimum distance is determined for a random code. Based on this
value, referred to as the critical point, new insight into soft bounded
distance decoding of random-like codes (and particularly Reed-Solomon codes)
is provided.
|
cs/0504035
|
Fitness Uniform Deletion: A Simple Way to Preserve Diversity
|
cs.NE cs.AI
|
A commonly experienced problem with population based optimisation methods is
the gradual decline in population diversity that tends to occur over time. This
can slow a system's progress or even halt it completely if the population
converges on a local optimum from which it cannot escape. In this paper we
present the Fitness Uniform Deletion Scheme (FUDS), a simple but somewhat
unconventional approach to this problem. Under FUDS the deletion operation is
modified to only delete those individuals which are "common" in the sense that
there exist many other individuals of similar fitness in the population. This
makes it impossible for the population to collapse to a collection of highly
related individuals with similar fitness. Our experimental results on a range
of optimisation problems confirm this; in particular, for deceptive
optimisation problems the performance is significantly more robust to
variation in the selection intensity.
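The deletion rule can be sketched as follows. This is a minimal interpretation of FUDS, with an assumed fixed-width bucketing of fitness values: the individual removed always comes from the most crowded fitness bucket, so individuals with rare fitness levels survive:

```python
from collections import Counter

def fuds_delete(population, fitness, n_buckets=10):
    """Fitness Uniform Deletion sketch: bucket individuals by fitness and
    remove one individual from the most crowded bucket. Individuals whose
    fitness level is rare in the population are never deleted."""
    fits = [fitness(x) for x in population]
    lo, hi = min(fits), max(fits)
    width = (hi - lo) / n_buckets or 1.0          # avoid zero width
    buckets = [min(int((f - lo) / width), n_buckets - 1) for f in fits]
    crowded = Counter(buckets).most_common(1)[0][0]
    victim = buckets.index(crowded)               # first member of that bucket
    return population[:victim] + population[victim + 1:]
```

Note that selection pressure is unchanged; only deletion is modified, which is what makes the scheme simple to retrofit onto an existing population-based optimiser.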
|
cs/0504036
|
Scientific impact quantity and quality: Analysis of two sources of
bibliographic data
|
cs.IR cs.DL
|
Attempting to understand the consequence of any individual scientist's
activity within the long-term trajectory of science is one of the most
difficult questions within the philosophy of science. Because scientific
publications play such a central role in the modern enterprise of science,
bibliometric techniques, which measure the ``impact'' of an individual
publication as a function of the number of citations it receives from
subsequent authors, have provided some of the most useful empirical data on
this question. Until recently, Thomson/ISI has provided the only source of
large-scale ``inverted'' bibliographic data of the sort required for impact
analysis. At the end of 2004, Google introduced a new service, Google Scholar,
making much of this same data available. Here we analyze 203 publications,
collectively cited by more than 4000 other publications. We show surprisingly
good agreement between the citation counts provided by the two services. Data
quality across the systems is analyzed, and potentially useful
complementarities between them are considered. The additional robustness
offered by multiple sources of such data promises to increase the utility of
these measurements as open citation protocols and open access increase their
impact on electronic scientific publication practices.
|
cs/0504037
|
Bayesian Restoration of Digital Images Employing Markov Chain Monte
Carlo: A Review
|
cs.CV cond-mat.stat-mech physics.comp-ph
|
A review of Bayesian restoration of digital images based on Monte Carlo
techniques is presented. The topics covered include likelihood, prior and
posterior distributions; Poisson, binary symmetric channel, and Gaussian
channel models of the likelihood distribution; Ising and Potts spin models of
the prior distribution; restoration of an image through posterior
maximization; statistical estimation of a true image from posterior ensembles;
Markov Chain Monte Carlo methods; and cluster algorithms.
|
cs/0504041
|
Learning Polynomial Networks for Classification of Clinical
Electroencephalograms
|
cs.AI cs.NE
|
We describe a polynomial network technique developed for learning to classify
clinical electroencephalograms (EEGs) presented by noisy features. Using an
evolutionary strategy implemented within the Group Method of Data Handling, we
learn classification models which are comprehensively described by sets of
short-term polynomials. The polynomial models were learnt to classify EEGs
recorded from Alzheimer's and healthy patients and to recognize EEG artifacts.
Comparing the performance of our technique with that of some machine learning
methods, we conclude that our technique can learn well-suited polynomial
models which experts find easy to understand.
|
cs/0504042
|
The Bayesian Decision Tree Technique with a Sweeping Strategy
|
cs.AI cs.LG
|
The uncertainty of classification outcomes is of crucial importance for many
safety critical applications including, for example, medical diagnostics. In
such applications the uncertainty of classification can be reliably estimated
within a Bayesian model averaging technique that allows the use of prior
information. Decision Tree (DT) classification models used within such a
technique give experts additional information by making this classification
scheme observable. The use of the Markov Chain Monte Carlo (MCMC) methodology
of stochastic sampling makes the Bayesian DT technique feasible to perform.
However, in practice, the MCMC technique may become stuck in a particular DT
which is far away from a region with a maximal posterior. Sampling such DTs
causes bias in the posterior estimates, and as a result the evaluation of
classification uncertainty may be incorrect. In a particular case, the negative
effect of such sampling may be reduced by giving additional prior information
on the shape of DTs. In this paper we describe a new approach based on sweeping
the DTs without additional priors on the favorite shape of DTs. The
performances of Bayesian DT techniques with the standard and sweeping
strategies are compared on synthetic data as well as on real datasets.
Quantitatively evaluating the uncertainty in terms of entropy of class
posterior probabilities, we found that the sweeping strategy is superior to the
standard strategy.
|
cs/0504043
|
Experimental Comparison of Classification Uncertainty for Randomised and
Bayesian Decision Tree Ensembles
|
cs.AI cs.LG
|
In this paper we experimentally compare the classification uncertainty of the
randomised Decision Tree (DT) ensemble technique and the Bayesian DT technique
with a restarting strategy on a synthetic dataset as well as on some datasets
commonly used in the machine learning community. For quantitative evaluation of
classification uncertainty, we use an Uncertainty Envelope dealing with the
class posterior distribution and a given confidence probability. Counting the
classifier outcomes, this technique produces feasible evaluations of the
classification uncertainty. Using this technique in our experiments, we found
that the Bayesian DT technique is superior to the randomised DT ensemble
technique.
|
cs/0504046
|
On the Entropy Rate of Pattern Processes
|
cs.IT math.IT
|
We study the entropy rate of pattern sequences of stochastic processes, and
its relationship to the entropy rate of the original process. We give a
complete characterization of this relationship for i.i.d. processes over
arbitrary alphabets, stationary ergodic processes over discrete alphabets, and
a broad family of stationary ergodic processes over uncountable alphabets. For
cases where the entropy rate of the pattern process is infinite, we
characterize the possible growth rate of the block entropy.
|
cs/0504047
|
Pushdown dimension
|
cs.IT cs.CC math.IT
|
This paper develops the theory of pushdown dimension and explores its
relationship with finite-state dimension. Pushdown dimension is trivially
bounded above by finite-state dimension for all sequences, since a pushdown
gambler can simulate any finite-state gambler. We show that for every rational
0 < d < 1, there exists a sequence with finite-state dimension d whose pushdown
dimension is at most d/2. This establishes a quantitative analogue of the
well-known fact that pushdown automata decide strictly more languages than
finite automata.
|
cs/0504049
|
Bounds on the Entropy of Patterns of I.I.D. Sequences
|
cs.IT math.IT
|
Bounds on the entropy of patterns of sequences generated by independent and
identically distributed (i.i.d.) sources are derived. A pattern is a sequence
of indices that contains all consecutive integer indices in increasing order of
first occurrence. If the alphabet of a source that generated a sequence is
unknown, the inevitable cost of coding the unknown alphabet symbols can be
exploited to create the pattern of the sequence. This pattern can in turn be
compressed by itself. The bounds derived here are functions of the i.i.d.
source entropy, alphabet size, and letter probabilities. It is shown that for
large alphabets, the pattern entropy must decrease from the i.i.d. one. The
decrease is in many cases more significant than the universal coding redundancy
bounds derived in prior works. The pattern entropy is confined between two
bounds that depend on the arrangement of the letter probabilities in the
probability space. For very large alphabets whose size may be greater than the
coded pattern length, all low probability letters are packed into one symbol.
The pattern entropy is upper and lower bounded in terms of the i.i.d. entropy
of the new packed alphabet. Correction terms, which are usually negligible, are
provided for both upper and lower bounds.
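The pattern of a sequence, as defined above (all consecutive integer indices in increasing order of first occurrence), can be computed in one pass:

```python
def pattern(seq):
    """Replace each symbol by the index of its first occurrence: the
    first distinct symbol becomes 1, the second distinct symbol 2, etc."""
    first_index, out = {}, []
    for s in seq:
        if s not in first_index:
            first_index[s] = len(first_index) + 1
        out.append(first_index[s])
    return out

pattern("abracadabra")  # -> [1, 2, 3, 1, 4, 1, 5, 1, 2, 3, 1]
```

The pattern carries no information about the symbol identities themselves, which is why, for large alphabets, its entropy falls below the i.i.d. entropy of the source.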
|
cs/0504051
|
A Scalable Stream-Oriented Framework for Cluster Applications
|
cs.DC cs.DB cs.NI cs.OS cs.PL
|
This paper presents a stream-oriented architecture for structuring cluster
applications. Clusters that run applications based on this architecture can
scale to tens of thousands of nodes with significantly less performance loss
and fewer reliability problems. Our architecture exploits the stream nature of
the
data flow and reduces congestion through load balancing, hides latency behind
data pushes and transparently handles node failures. In our ongoing work, we
are developing an implementation for this architecture and we are able to run
simple data mining applications on a cluster simulator.
|
cs/0504052
|
Learning Multi-Class Neural-Network Models from Electroencephalograms
|
cs.NE cs.LG
|
We describe a new algorithm for learning multi-class neural-network models
from large-scale clinical electroencephalograms (EEGs). This algorithm trains
hidden neurons separately to classify all the pairs of classes. To find the best
pairwise classifiers, our algorithm searches for input variables which are
relevant to the classification problem. Despite patient variability and heavily
overlapping classes, a 16-class model learnt from EEGs of 65 sleeping newborns
correctly classified 80.8% of the training and 80.1% of the testing examples.
Additionally, the neural-network model provides a probabilistic interpretation
of decisions.
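The pairwise decomposition described above can be sketched with a generic one-vs-one combination. The majority-vote rule is a common choice assumed here for illustration; the abstract does not specify how the pairwise outputs are combined:

```python
from collections import Counter
from itertools import combinations

def one_vs_one_predict(x, pairwise, classes):
    """Combine pairwise classifiers by majority vote. `pairwise[(a, b)]`
    is a trained binary classifier returning either a or b for input x;
    the class collecting the most votes wins."""
    votes = Counter(pairwise[(a, b)](x) for a, b in combinations(classes, 2))
    return votes.most_common(1)[0][0]
```

With k classes this requires k(k-1)/2 binary models, each of which (as in the abstract) can be trained on only the input variables relevant to its pair.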
|
cs/0504053
|
A Neural-Network Technique for Recognition of Filaments in Solar Images
|
cs.NE
|
We describe a new neural-network technique developed for an automated
recognition of solar filaments visible in the hydrogen H-alpha line full disk
spectroheliograms. This technique allows neural networks to learn from a few
image fragments labelled manually to recognize single filaments depicted on a
local background. The trained network is able to recognize filaments depicted
on backgrounds with variations in brightness caused by atmospheric
distortions. Despite the differences in backgrounds in our experiments, the
neural network properly recognized filaments in the testing image
fragments. Using a parabolic activation function we extend this technique to
recognize multiple solar filaments which may appear in one fragment.
|
cs/0504054
|
Learning from Web: Review of Approaches
|
cs.NE cs.LG
|
Knowledge discovery is defined as the non-trivial extraction of implicit,
previously unknown and potentially useful information from given data.
Knowledge extraction from web documents deals with unstructured, free-format
documents whose number is enormous and rapidly growing. Artificial neural
networks are well suited to the problem of knowledge discovery from web
documents because trained networks are able to classify learning and testing
examples from the text mining domain more accurately and easily. However,
neural networks that consist of a large number of weighted connections and
activation units often generate incomprehensible and hard-to-understand models
of text classification. This problem also affects the most powerful recurrent
neural networks, which employ feedback links from hidden or output units to
their input units. Due to these feedback links, recurrent neural networks are
able to take into account the context of a document. To be useful for data
mining, self-organizing neural network techniques of knowledge extraction have
been explored and developed. Self-organization principles were used to create
an adequate neural-network structure and to reduce the dimensionality of the
features used to describe text documents. The use of these principles is
interesting because they are able to reduce neural-network redundancy and
considerably facilitate knowledge representation.
|
cs/0504055
|
A Learning Algorithm for Evolving Cascade Neural Networks
|
cs.NE cs.AI
|
A new learning algorithm for Evolving Cascade Neural Networks (ECNNs) is
described. An ECNN starts to learn with one input node and then evolves by
adding new inputs as well as new hidden neurons. The trained ECNN has a nearly
minimal number of input and hidden neurons as well as connections. The
algorithm was successfully applied to classify artifacts and normal segments in
clinical electroencephalograms (EEGs). The EEG segments were visually labeled
by an EEG-viewer. The trained ECNN correctly classified 96.69% of the testing
segments, slightly better than a standard fully connected neural network.
|
cs/0504056
|
Self-Organizing Multilayered Neural Networks of Optimal Complexity
|
cs.NE cs.AI
|
The principles of self-organizing neural networks of optimal complexity
are considered under an unrepresentative learning set. A method for
self-organizing multi-layered neural networks is proposed and used to train
logical neural networks, which were applied to medical diagnostics.
|
cs/0504057
|
Diagnostic Rule Extraction Using Neural Networks
|
cs.NE cs.AI
|
The neural networks were trained on the incomplete sets that a doctor could
collect. The trained neural networks correctly classified all the presented
instances. The number of intervals used for encoding the quantitative
variables is equal to two. The number of features as well as the number of
neurons and layers in the trained neural networks was minimal. The trained
neural networks are adequately represented as a set of logical formulas that
are more comprehensible and easier to understand. These formulas act as
syndrome-complexes, which may easily be tabulated and represented as the kind
of diagnostic table that doctors usually use. The decision rules provide
evaluations of their confidence, in which a doctor is interested. Clinical
research has shown that the diagnostic decisions produced by the symbolic
rules coincided with the doctors' conclusions.
|
cs/0504058
|
Polynomial Neural Networks Learnt to Classify EEG Signals
|
cs.NE cs.AI
|
A neural network based technique is presented, which is able to successfully
extract polynomial classification rules from labeled electroencephalogram (EEG)
signals. To represent the classification rules in an analytical form, we use
the polynomial neural networks trained by a modified Group Method of Data
Handling (GMDH). The classification rules were extracted from clinical EEG data
that were recorded from an Alzheimer patient and the sudden death risk
patients. The third dataset consists of EEG recordings that include normal and
artifact segments. These EEG data were visually identified by medical experts.
The extracted polynomial rules, verified on the testing EEG data, correctly
classify 72% of the risk-group patients and 96.5% of the segments. These rules
perform slightly better than standard feedforward neural networks.
|
cs/0504059
|
A Neural Network Decision Tree for Learning Concepts from EEG Data
|
cs.NE cs.AI
|
To learn multi-class concepts from electroencephalogram (EEG) data, we
developed a neural-network decision tree (DT) that performs linear tests, and
a new training algorithm. We found that the known methods fail to induce
classification models when the data are represented by features, some of
which are irrelevant, and the classes are heavily overlapped. To train the
DT, our algorithm exploits a bottom-up search for the features that provide
the best classification accuracy of the linear tests. We applied the developed
algorithm to induce the DT from a large EEG dataset consisting of 65 patients
belonging to 16 age groups. In these recordings each EEG segment was
represented by 72 calculated features. The DT correctly classified 80.8% of the
training and 80.1% of the testing examples. Correspondingly, it correctly
classified 89.2% and 87.7% of the EEG recordings.
|
cs/0504060
|
Universal Minimax Discrete Denoising under Channel Uncertainty
|
cs.IT math.IT
|
The goal of a denoising algorithm is to recover a signal from its
noise-corrupted observations. Perfect recovery is seldom possible and
performance is measured under a given single-letter fidelity criterion. For
discrete signals corrupted by a known discrete memoryless channel, the DUDE was
recently shown to perform this task asymptotically optimally, without knowledge
of the statistical properties of the source. In the present work we address the
scenario where, in addition to the lack of knowledge of the source statistics,
there is also uncertainty in the channel characteristics. We propose a family
of discrete denoisers and establish their asymptotic optimality under a minimax
performance criterion which we argue is appropriate for this setting. As we
show elsewhere, the proposed schemes can also be implemented computationally
efficiently.
|
cs/0504061
|
Summarization from Medical Documents: A Survey
|
cs.CL cs.IR
|
Objective:
The aim of this paper is to survey the recent work in medical documents
summarization.
Background:
During the last decade, documents summarization has received increasing
attention from the AI research community. More recently it has also attracted
the interest of the medical research community, due to the enormous growth of
information that is available to physicians and researchers in medicine
through the large and growing number of published journals, conference
proceedings, medical sites and portals on the World Wide Web, electronic
medical records, etc.
Methodology:
This survey gives first a general background on documents summarization,
presenting the factors that summarization depends upon, discussing evaluation
issues and describing briefly the various types of summarization techniques. It
then examines the characteristics of the medical domain through the different
types of medical documents. Finally, it presents and discusses the
summarization techniques used so far in the medical domain, referring to the
corresponding systems and their characteristics.
Discussion and conclusions:
The paper discusses thoroughly the promising paths for future research in
medical documents summarization. It mainly focuses on the issue of scaling to
large collections of documents in various languages and from different media,
on personalization issues, on portability to new sub-domains, and on the
integration of summarization technology in practical applications.
|
cs/0504063
|
Selection in Scale-Free Small World
|
cs.LG cs.IR
|
In this paper we compare the performance characteristics of our selection
based learning algorithm for Web crawlers with the characteristics of the
reinforcement learning algorithm. The task of the crawlers is to find new
information on the Web. The selection algorithm, called weblog update, modifies
the starting URL lists of our crawlers based on the found URLs containing new
information. The reinforcement learning algorithm modifies the URL orderings of
the crawlers based on the received reinforcements for submitted documents. We
performed simulations based on data collected from the Web. The collected
portion of the Web is typical and exhibits scale-free small world (SFSW)
structure. We have found that on this SFSW, the weblog update algorithm
performs better than the reinforcement learning algorithm: it finds new
information faster and has a better ratio of new information to all submitted
documents. We believe that the advantages of the selection algorithm over the
reinforcement learning algorithm are due to the small-world property of the
Web.
|
cs/0504064
|
Neural-Network Techniques for Visual Mining Clinical
Electroencephalograms
|
cs.AI
|
In this chapter we describe new neural-network techniques developed for
visual mining of clinical electroencephalograms (EEGs), the weak electrical
potentials invoked by brain activity. These techniques exploit fruitful ideas
from the Group Method of Data Handling (GMDH). Section 2 briefly describes the
standard neural-network techniques which are able to learn well-suited
classification models from data presented by relevant features. Section 3
introduces an evolving cascade neural network technique which adds new input
nodes as well as new neurons to the network while the training error decreases.
This algorithm is applied to recognize artifacts in the clinical EEGs. Section
4 presents the GMDH-type polynomial networks learnt from data. We applied this
technique to distinguish the EEGs recorded from an Alzheimer's patient and a
healthy patient, as well as to recognize EEG artifacts. Section 5 describes the new
neural-network technique developed to induce multi-class concepts from data. We
used this technique for inducing a 16-class concept from the large-scale
clinical EEG data. Finally we discuss perspectives of applying the
neural-network techniques to clinical EEGs.
|
cs/0504065
|
Estimating Classification Uncertainty of Bayesian Decision Tree
Technique on Financial Data
|
cs.AI
|
Bayesian averaging over classification models allows the uncertainty of
classification outcomes to be evaluated, which is of crucial importance for
making reliable decisions in applications such as finance, in which risks have
to be estimated. The uncertainty of classification is determined by a trade-off
between the amount of data available for training, the diversity of a
classifier ensemble and the required performance. The interpretability of
classification models can also give useful information for experts responsible
for making reliable classifications. For this reason Decision Trees (DTs) seem
to be attractive classification models. The required diversity of the DT
ensemble can be achieved by using Bayesian model averaging over all possible
DTs. In practice, the Bayesian approach can be implemented on the basis of a
Markov Chain Monte Carlo (MCMC) technique of random sampling from the posterior
distribution. For sampling large DTs, the MCMC method is extended with the
Reversible Jump technique, which allows DTs to be induced under given priors.
When prior information on the DT size is unavailable, a sweeping technique that
defines the prior implicitly yields better performance. Within this chapter
we explore the classification uncertainty of the Bayesian MCMC techniques on
some datasets from the StatLog Repository and real financial data. The
classification uncertainty is compared within an Uncertainty Envelope technique
dealing with the class posterior distribution and a given confidence
probability. This technique provides realistic estimates of the classification
uncertainty which can be easily interpreted in statistical terms with the aim
of risk evaluation.
|
cs/0504066
|
Comparison of the Bayesian and Randomised Decision Tree Ensembles within
an Uncertainty Envelope Technique
|
cs.AI
|
Multiple Classifier Systems (MCSs) allow evaluation of the uncertainty of
classification outcomes that is of crucial importance for safety critical
applications. The uncertainty of classification is determined by a trade-off
between the amount of data available for training, the classifier diversity and
the required performance. The interpretability of MCSs can also give useful
information for experts responsible for making reliable classifications. For
this reason Decision Trees (DTs) seem to be attractive classification models
for experts. The required diversity of MCSs exploiting such classification
models can be achieved by using two techniques, the Bayesian model averaging
and the randomised DT ensemble. Both techniques have revealed promising results
when applied to real-world problems. In this paper we experimentally compare
the classification uncertainty of the Bayesian model averaging with a
restarting strategy and the randomised DT ensemble on a synthetic dataset and
some domain problems commonly used in the machine learning community. To make
the Bayesian DT averaging feasible, we use a Markov Chain Monte Carlo
technique. The classification uncertainty is evaluated within an Uncertainty
Envelope technique dealing with the class posterior distribution and a given
confidence probability. Exploring a full posterior distribution, this technique
produces realistic estimates which can be easily interpreted in statistical
terms. In our experiments we found that the Bayesian DTs are superior to the
randomised DT ensembles within the Uncertainty Envelope technique.
|
cs/0504067
|
An Evolving Cascade Neural Network Technique for Cleaning Sleep
Electroencephalograms
|
cs.NE cs.AI
|
Evolving Cascade Neural Networks (ECNNs) and a new training algorithm capable
of selecting informative features are described. The ECNN initially learns with
one input node and then evolves by adding new inputs as well as new hidden
neurons. The resultant ECNN has a near minimal number of hidden neurons and
inputs. The algorithm is successfully used for training an ECNN to recognise
artefacts in sleep electroencephalograms (EEGs) which were visually labelled by
EEG-viewers. In our experiments, the ECNN outperforms the standard
neural-network as well as evolutionary techniques.
|
cs/0504068
|
Self-Organization of the Neuron Collective of Optimal Complexity
|
cs.NE cs.AI
|
The optimal complexity of neural networks is achieved when self-organization
principles are used to eliminate the contradictions that, in accordance with
K. Godel's theorem on the incompleteness of systems based on axiomatics, would
otherwise exist. S. Beer's principle of exterior addition, realized through
A. Ivakhnenko's Heuristic Group Method of Data Handling, is used.
|
cs/0504069
|
A Neural-Network Technique to Learn Concepts from Electroencephalograms
|
cs.NE cs.AI cs.LG
|
A new technique developed to learn multi-class concepts from clinical
electroencephalograms is presented. A desired concept is represented as a
neuronal computational model consisting of input, hidden, and output neurons.
In this model the hidden neurons learn independently to classify
electroencephalogram segments represented by spectral and statistical features.
This technique has been applied to the electroencephalogram data recorded from
65 sleeping healthy newborns in order to learn a brain maturation concept of
newborns aged between 35 and 51 weeks. The 39399 and 19670 segments from these
data have been used for learning and testing the concept, respectively. As a
result, the concept has correctly classified 80.1% of the testing segments or
87.7% of the 65 records.
|
cs/0504070
|
The Combined Technique for Detection of Artifacts in Clinical
Electroencephalograms of Sleeping Newborns
|
cs.NE cs.AI cs.LG
|
In this paper we describe a new method combining the polynomial neural
network and decision tree techniques in order to derive comprehensible
classification rules from clinical electroencephalograms (EEGs) recorded from
sleeping newborns. These EEGs are heavily corrupted by cardiac, eye movement,
muscle and noise artifacts and as a consequence some EEG features are
irrelevant to classification problems. Combining the polynomial network and
decision tree techniques, we discover comprehensible classification rules
whilst also attempting to keep their classification error down. This technique
is shown to outperform a number of commonly used machine learning techniques
applied to automatically recognize artifacts in the sleep EEGs.
|
cs/0504071
|
Proceedings of the Pacific Knowledge Acquisition Workshop 2004
|
cs.AI
|
Artificial intelligence (AI) research has evolved over the last few decades
and knowledge acquisition research is at the core of AI research. PKAW-04 is
one of three international knowledge acquisition workshops held in the
Pacific-Rim, Canada and Europe over the last two decades. PKAW-04 has a strong
emphasis on incremental knowledge acquisition, machine learning, neural nets
and active mining.
The proceedings contain 19 papers that were selected by the program committee
from 24 submissions. All papers were peer reviewed by at least two reviewers.
The papers in these proceedings cover methods and tools, as well as
applications, related to developing expert systems or knowledge-based systems.
|
cs/0504072
|
Knowledge Representation Issues in Semantic Graphs for Relationship
Detection
|
cs.AI physics.soc-ph
|
An important task for Homeland Security is the prediction of threat
vulnerabilities, such as through the detection of relationships between
seemingly disjoint entities. A structure used for this task is a "semantic
graph", also known as a "relational data graph" or an "attributed relational
graph". These graphs encode relationships as "typed" links between a pair of
"typed" nodes. Indeed, semantic graphs are very similar to semantic networks
used in AI. The node and link types are related through an ontology graph (also
known as a schema). Furthermore, each node has a set of attributes associated
with it (e.g., "age" may be an attribute of a node of type "person").
Unfortunately, the selection of types and attributes for both nodes and links
depends on human expertise and is somewhat subjective and even arbitrary. This
subjectiveness introduces biases into any algorithm that operates on semantic
graphs. Here, we raise some knowledge representation issues for semantic graphs
and provide some possible solutions using recently developed ideas in the field
of complex networks. In particular, we use the concept of transitivity to
evaluate the relevance of individual links in the semantic graph for detecting
relationships. We also propose new statistical measures for semantic graphs and
illustrate these semantic measures on graphs constructed from movies and
terrorism data.
|
cs/0504074
|
Metalinguistic Information Extraction for Terminology
|
cs.CL cs.AI cs.IR
|
This paper describes and evaluates the Metalinguistic Operation Processor
(MOP) system for automatic compilation of metalinguistic information from
technical and scientific documents. This system is designed to extract
non-standard terminological resources that we have called Metalinguistic
Information Databases (or MIDs), in order to help update changing glossaries,
knowledge bases and ontologies, as well as to reflect the metastable dynamics
of special-domain knowledge.
|
cs/0504075
|
Dichotomy for Voting Systems
|
cs.GT cs.CC cs.MA
|
Scoring protocols are a broad class of voting systems. Each is defined by a
vector $(\alpha_1,\alpha_2,...,\alpha_m)$, $\alpha_1 \geq \alpha_2 \geq ...
\geq \alpha_m$, of integers such that each voter contributes $\alpha_1$ points
to his/her first choice, $\alpha_2$ points to his/her second choice, and so on,
and any candidate receiving the most points is a winner.
What is it about scoring-protocol election systems that makes some have the
desirable property of being NP-complete to manipulate, while others can be
manipulated in polynomial time? We find the complete, dichotomizing answer:
Diversity of dislike. Every scoring-protocol election system having two or more
point values assigned to candidates other than the favorite--i.e., having
$||\{\alpha_i \mid 2 \leq i \leq m\}||\geq 2$--is NP-complete to
manipulate. Every other scoring-protocol election system can be manipulated in
polynomial time. In effect, we show that--other than trivial systems (where all
candidates always tie), plurality voting, and plurality voting's transparently
disguised translations--\emph{every} scoring-protocol election system is
NP-complete to manipulate.
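The tallying rule above is easy to state in code. The following minimal Python sketch (function name and ballot layout are ours, not from the paper) scores ranked ballots under an arbitrary vector $(\alpha_1,...,\alpha_m)$:

```python
def scoring_winners(alpha, ballots):
    """Tally a scoring-protocol election: each ballot is a ranking
    (most-preferred candidate first); a candidate ranked in position i
    receives alpha[i] points, and all candidates with the top score win."""
    scores = {}
    for ranking in ballots:
        for i, cand in enumerate(ranking):
            scores[cand] = scores.get(cand, 0) + alpha[i]
    best = max(scores.values())
    return sorted(c for c, s in scores.items() if s == best)

ballots = [["a", "b", "c"], ["b", "a", "c"], ["b", "c", "a"]]
print(scoring_winners((2, 1, 0), ballots))  # Borda count
print(scoring_winners((1, 0, 0), ballots))  # plurality
```

Under the dichotomy, the Borda vector $(2,1,0)$ has two distinct point values among $\alpha_2,...,\alpha_m$ and is therefore NP-complete to manipulate, while plurality $(1,0,0)$ is manipulable in polynomial time.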
|
cs/0504078
|
Adaptive Online Prediction by Following the Perturbed Leader
|
cs.AI cs.LG
|
When applying aggregating strategies to Prediction with Expert Advice, the
learning rate must be adaptively tuned. The natural choice of
sqrt(complexity/current loss) renders the analysis of Weighted Majority
derivatives quite complicated. In particular, for arbitrary weights there have
been no results proven so far. The analysis of the alternative "Follow the
Perturbed Leader" (FPL) algorithm from Kalai & Vempala (2003) (based on
Hannan's algorithm) is easier. We derive loss bounds for adaptive learning rate
and both finite expert classes with uniform weights and countable expert
classes with arbitrary weights. For the former setup, our loss bounds match the
best known results so far, while for the latter our results are new.
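As a rough illustration of the FPL idea only (not the paper's adaptive-rate analysis; all names are ours), each step picks the expert whose cumulative loss, reduced by an exponentially distributed perturbation scaled by the learning rate, is smallest:

```python
import random

def fpl_choice(cum_loss, eta, rng):
    """Follow the Perturbed Leader: return the index of the expert with the
    smallest perturbed cumulative loss, where each expert's loss is reduced
    by an independent Exp(1) perturbation scaled by 1/eta."""
    perturbed = [loss - rng.expovariate(1.0) / eta for loss in cum_loss]
    return min(range(len(cum_loss)), key=perturbed.__getitem__)

rng = random.Random(0)
# With a very large learning rate the perturbation vanishes and FPL simply
# follows the leader (expert 1, the smallest cumulative loss here).
print(fpl_choice([5.0, 1.0, 3.0], 1e9, rng))
```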
|
cs/0504079
|
Prediction of Large Alphabet Processes and Its Application to Adaptive
Source Coding
|
cs.IT math.IT
|
The problem of predicting a sequence $x_1,x_2,...$ generated by a discrete
source with unknown statistics is considered. Each letter $x_{t+1}$ is
predicted using information on the word $x_1x_2... x_t$ only. In fact, this
problem is a classical problem which has received much attention. Its history
can be traced back to Laplace. We address the problem where each $x_i$ belongs
to some large (or even infinite) alphabet. A method is presented for which the
precision is greater than for known algorithms, where precision is estimated by
the Kullback-Leibler divergence. The results can readily be translated to
results about adaptive coding.
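The classical baseline traced back to Laplace is the rule of succession; a minimal sketch (names ours) of the estimator whose precision degrades on large alphabets:

```python
from collections import Counter

def laplace_predict(history, alphabet):
    """Laplace's rule of succession: estimate
    P(a | x_1..x_t) = (count of a in history + 1) / (t + |alphabet|)."""
    counts = Counter(history)
    denom = len(history) + len(alphabet)
    return {a: (counts[a] + 1) / denom for a in alphabet}

print(laplace_predict("aab", "ab"))  # {'a': 0.6, 'b': 0.4}
```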
|
cs/0504080
|
Performance of Gaussian Signalling in Non Coherent Rayleigh Fading
Channels
|
cs.IT math.IT
|
The mutual information of a discrete time memoryless Rayleigh fading channel
is considered, where neither the transmitter nor the receiver has the knowledge
of the channel state information except the fading statistics. We present the
mutual information of this channel in closed form when the input distribution
is complex Gaussian, and derive a lower bound in terms of the capacity of the
corresponding non fading channel and the capacity when the perfect channel
state information is known at the receiver.
|
cs/0504081
|
A Decomposition Approach to Multi-Vehicle Cooperative Control
|
cs.RO
|
We present methods that generate cooperative strategies for multi-vehicle
control problems using a decomposition approach. By introducing a set of tasks
to be completed by the team of vehicles and a task execution method for each
vehicle, we decomposed the problem into a combinatorial component and a
continuous component. The continuous component of the problem is captured by
task execution, and the combinatorial component is captured by task assignment.
In this paper, we present a solver for task assignment that generates
near-optimal assignments quickly and can be used in real-time applications. To
motivate our methods, we apply them to an adversarial game between two teams of
vehicles. One team is governed by simple rules and the other by our algorithms.
In our study of this game we found phase transitions, showing that the task
assignment problem is most difficult to solve when the capabilities of the
adversaries are comparable. Finally, we implement our algorithms in a
multi-level architecture with a variable replanning rate at each level to
provide feedback on a dynamically changing and uncertain environment.
|
cs/0504085
|
Capacity per Unit Energy of Fading Channels with a Peak Constraint
|
cs.IT math.IT
|
A discrete-time single-user scalar channel with temporally correlated
Rayleigh fading is analyzed. There is no side information at the transmitter or
the receiver. A simple expression is given for the capacity per unit energy, in
the presence of a peak constraint. The simple formula of Verdu for capacity per
unit cost is adapted to a channel with memory, and is used in the proof. In
addition to bounding the capacity of a channel with correlated fading, the
result gives some insight into the relationship between the correlation in the
fading process and the channel capacity. The results are extended to a channel
with side information, showing that the capacity per unit energy is one nat per
Joule, independently of the peak power constraint.
A continuous-time version of the model is also considered. The capacity per
unit energy subject to a peak constraint (but no bandwidth constraint) is given
by an expression similar to that for discrete time, and is evaluated for
Gauss-Markov and Clarke fading channels.
|
cs/0504086
|
Componentwise Least Squares Support Vector Machines
|
cs.LG cs.AI
|
This chapter describes componentwise Least Squares Support Vector Machines
(LS-SVMs) for the estimation of additive models consisting of a sum of
nonlinear components. The primal-dual derivations characterizing LS-SVMs for
the estimation of the additive model result in a single set of linear equations
with size growing in the number of data-points. The derivation is elaborated
for the classification as well as the regression case. Furthermore, different
techniques are proposed to discover structure in the data by looking for sparse
components in the model based on dedicated regularization schemes on the one
hand and fusion of the componentwise LS-SVMs training with a validation
criterion on the other hand. (keywords: LS-SVMs, additive models,
regularization, structure detection)
|
cs/0504089
|
Universal Similarity
|
cs.IR cs.AI cs.CL physics.data-an
|
We survey a new area of parameter-free similarity distance measures useful in
data-mining, pattern recognition, learning and automatic semantics extraction.
Given a family of distances on a set of objects, a distance is universal up to
a certain precision for that family if it minorizes every distance in the
family between every two objects in the set, up to the stated precision (we do
not require the universal distance to be an element of the family). We consider
similarity distances for two types of objects: literal objects that as such
contain all of their meaning, like genomes or books, and names for objects. The
latter may have literal embodiments like the first type, but may also be
abstract like ``red'' or ``christianity.'' For the first type we consider a
family of computable distance measures corresponding to parameters expressing
similarity according to particular features between pairs of literal objects.
For the second type we consider similarity distances generated by web users
corresponding to particular semantic relations between the (names for) the
designated objects. For both families we give universal similarity distance
measures, incorporating all particular distance measures in the family. In the
first case the universal distance is based on compression and in the second
case it is based on Google page counts related to search terms. In both cases
experiments on a massive scale give evidence of the viability of the
approaches.
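For literal objects, the compression-based universal distance can be approximated in practice with any off-the-shelf compressor; the sketch below uses zlib as a stand-in (the choice of compressor and the function name are ours, not prescribed by the abstract):

```python
import zlib

def ncd(x, y):
    """Normalized Compression Distance between two byte strings: a practical
    approximation of the compression-based similarity distance, using zlib
    as an (imperfect) stand-in for an ideal compressor."""
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

text = b"the quick brown fox " * 20
junk = bytes(range(256))
print(ncd(text, text) < ncd(text, junk))  # identical objects are closer
```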
|
cs/0504091
|
A Probabilistic Upper Bound on Differential Entropy
|
cs.IT math.IT
|
A novel, non-trivial, probabilistic upper bound on the entropy of an unknown
one-dimensional distribution, given the support of the distribution and a
sample from that distribution, is presented. No knowledge beyond the support of
the unknown distribution is required, nor is the distribution required to have
a density. Previous distribution-free bounds on the cumulative distribution
function of a random variable given a sample of that variable are used to
construct the bound. A simple, fast, and intuitive algorithm for computing the
entropy bound from a sample is provided.
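For orientation, the support-only baseline that any non-trivial bound must be measured against is deterministic: the uniform distribution maximizes differential entropy on a fixed support, so with no sample at all one already has

```latex
% Trivial support-only baseline (ours, for context; not the paper's bound):
h(X) \le \log(b - a) \quad \text{for any } X \text{ supported on } [a,b].
```

The paper's contribution is a probabilistic refinement that additionally exploits a sample, via distribution-free bounds on the cumulative distribution function.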
|
cs/0504099
|
The Capacity of Random Ad hoc Networks under a Realistic Link Layer
Model
|
cs.IT cs.NI math.IT
|
The problem of determining asymptotic bounds on the capacity of a random ad
hoc network is considered. Previous approaches assumed a threshold-based link
layer model in which a packet transmission is successful if the SINR at the
receiver is greater than a fixed threshold. In reality, the mapping from SINR
to packet success probability is continuous. Hence, over each hop, for every
finite SINR, there is a non-zero probability of packet loss. With this more
realistic link model, it is shown that for a broad class of routing and
scheduling schemes, a fixed fraction of hops on each route have a fixed
non-zero packet loss probability. In a large network, a packet travels an
asymptotically large number of hops from source to destination. Consequently,
it is shown that the cumulative effect of per-hop packet loss results in a
per-node throughput of only O(1/n) (instead of the Theta(1/sqrt{n log n})
shown previously for the threshold-based link model).
A scheduling scheme is then proposed to counter this effect. The proposed
scheme improves the link SINR by using conservative spatial reuse, and improves
the per-node throughput to O(1/(K_n sqrt{n log{n}})), where each cell gets a
transmission opportunity at least once every K_n slots, and K_n tends to
infinity as n tends to infinity.
|
cs/0504100
|
A DNA Sequence Compression Algorithm Based on LUT and LZ77
|
cs.IT math.IT
|
This article introduces a new DNA sequence compression algorithm based on a
LUT and the LZ77 algorithm. Combining a LUT-based pre-coding routine with an
LZ77 compression routine, this algorithm can approach a compression ratio of
1.9 bits/base and even lower. The biggest advantages of this algorithm are
fast execution, small memory occupation and easy implementation.
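A look-up-table pre-coder of the kind described maps each of the four bases to a 2-bit code, so the pre-coding stage alone reaches 2 bits/base before LZ77 exploits repeats. The sketch below is our own reconstruction of the general idea, not the paper's exact routine; it packs and unpacks four bases per byte:

```python
LUT = {"A": 0, "C": 1, "G": 2, "T": 3}
INV = {v: k for k, v in LUT.items()}

def precode(seq):
    """Pack a DNA string into bytes, four 2-bit base codes per byte."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        chunk = [LUT[b] for b in seq[i:i + 4]]
        chunk += [0] * (4 - len(chunk))  # zero-pad the final byte
        out.append(chunk[0] << 6 | chunk[1] << 4 | chunk[2] << 2 | chunk[3])
    return bytes(out)

def decode(data, n):
    """Recover the first n bases from the packed form."""
    bases = [INV[(byte >> shift) & 3] for byte in data for shift in (6, 4, 2, 0)]
    return "".join(bases[:n])

seq = "ACGTACGTGATTACA"
packed = precode(seq)
assert decode(packed, len(seq)) == seq
print(len(packed), "bytes for", len(seq), "bases")  # 4 bytes for 15 bases
```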
|
cs/0504101
|
Single-solution Random 3-SAT Instances
|
cs.AI cs.CC cs.DM
|
We study a class of random 3-SAT instances having exactly one solution. The
properties of this ensemble considerably differ from those of a random 3-SAT
ensemble. It is numerically shown that the running time of several complete and
stochastic local search algorithms monotonically increases as the clause
density is decreased. Therefore, there is no easy-hard-easy pattern of hardness
as for the standard random 3-SAT ensemble. Furthermore, the running time for short
single-solution formulas increases with the problem size much faster than for
random 3-SAT formulas from the phase transition region.
|
cs/0504102
|
Spectral Orbits and Peak-to-Average Power Ratio of Boolean Functions
with respect to the {I,H,N}^n Transform
|
cs.IT math.IT
|
We enumerate the inequivalent self-dual additive codes over GF(4) of
blocklength n, thereby extending the sequence A090899 in The On-Line
Encyclopedia of Integer Sequences from n = 9 to n = 12. These codes have a
well-known interpretation as quantum codes. They can also be represented by
graphs, where a simple graph operation generates the orbits of equivalent
codes. We highlight the regularity and structure of some graphs that correspond
to codes with high distance. The codes can also be interpreted as quadratic
Boolean functions, where inequivalence takes on a spectral meaning. In this
context we define PAR_IHN, peak-to-average power ratio with respect to the
{I,H,N}^n transform set. We prove that PAR_IHN of a Boolean function is
equivalent to the size of the maximum independent set over the associated
orbit of graphs. Finally we propose a construction technique to generate
Boolean functions with low PAR_IHN and algebraic degree higher than 2.
|
cs/0504108
|
Cooperative Game Theory within Multi-Agent Systems for Systems
Scheduling
|
cs.AI cs.MA
|
Research concerning organization and coordination within multi-agent systems
continues to draw from a variety of architectures and methodologies. The work
presented in this paper combines techniques from game theory and multi-agent
systems to produce self-organizing, polymorphic, lightweight, embedded agents
for systems scheduling within a large-scale real-time systems environment.
Results show how this approach is used to experimentally produce optimal
real-time scheduling through the emergent behavior of thousands of agents.
These results are obtained using a SWARM simulation of systems scheduling
within a High Energy Physics experiment consisting of 2500 digital signal
processors.
|
cs/0505001
|
Modelling Investment in Artificial Stock Markets: Analytical and
Numerical Results
|
cs.CE
|
In this article we study the behavior of a group of economic agents in the
context of cooperative game theory, interacting according to rules based on the
Potts Model with suitable modifications. Each agent can be thought of as
belonging to a chain, where agents can only interact with their nearest
neighbors (periodic boundary conditions are imposed). Each agent can invest an
amount $\sigma_i = 0, \ldots, q-1$. Using the transfer matrix method we study
analytically, among other things, the behavior of the investment as a function
of a control parameter (denoted $\beta$) for the cases q=2 and q=3. For q>3,
numerical evaluation of eigenvalues and high-precision numerical derivatives
is used to extract this information.
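The transfer-matrix calculation can be sketched for a plain Potts-style chain. The specific interaction below, a delta coupling plus a field `h` on the investment value, is an assumption for illustration; the paper's "suitable modifications" to the Potts model are not reproduced here.

```python
import numpy as np

def transfer_matrix(q, beta, h):
    """Transfer matrix of an assumed Potts-like chain:
    T[s, s'] = exp(beta * delta(s, s') + (h/2) * (s + s')),
    where s, s' in {0, ..., q-1} are neighbouring investments."""
    s = np.arange(q)
    return np.exp(beta * np.eye(q) + 0.5 * h * (s[:, None] + s[None, :]))

def mean_investment(q, beta, n, h=0.0):
    """Average investment <sigma_i> on a periodic chain of n agents,
    computed exactly as a weighted trace: Tr(S T^n) / Tr(T^n)."""
    T = transfer_matrix(q, beta, h)
    S = np.diag(np.arange(q, dtype=float))  # operator carrying sigma values
    Tn = np.linalg.matrix_power(T, n)
    return np.trace(S @ Tn) / np.trace(Tn)
```

With `h = 0` the states are symmetric under relabeling, so the mean investment is `(q-1)/2` for any `beta`; a field term is what makes the dependence on the control parameter nontrivial in this toy version.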
|
cs/0505002
|
Tight Lower Bounds for Query Processing on Streaming and External Memory
Data
|
cs.DB cs.CC
|
We study a clean machine model for external memory and stream processing. We
show that the number of scans of the external data induces a strict hierarchy
(as long as work space is sufficiently small, e.g., polylogarithmic in the size
of the input). We also show that neither joins nor sorting are feasible if the
product of the number $r(n)$ of scans of the external memory and the size
$s(n)$ of the internal memory buffers is sufficiently small, e.g., of size
$o(\sqrt[5]{n})$. We also establish tight bounds for the complexity of XPath
evaluation and filtering.
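The scan/memory trade-off behind these bounds can be illustrated with a toy model, not the paper's machine model: a read-only tape that counts full scans, and a selection-style sort that extracts the next `s` smallest items per scan. Sorting n items this way takes about n/s scans, so the product (scans × memory) stays around n, which is the quantity the lower bounds constrain. All names here are illustrative.

```python
class ScanTape:
    """Toy model of one-way scans over read-only external data;
    counts how many full scans a computation performs."""
    def __init__(self, data):
        self.data = list(data)
        self.scans = 0

    def scan(self):
        self.scans += 1
        yield from self.data

def multipass_sort(tape, s):
    """Sort with O(s) internal memory: each scan keeps the s smallest
    not-yet-emitted keys, so roughly ceil(n / s) scans are needed."""
    out, last = [], None
    n = len(tape.data)
    while len(out) < n:
        buf = []                      # at most s smallest unemitted keys
        for i, x in enumerate(tape.scan()):
            key = (x, i)              # index breaks ties between duplicates
            if last is None or key > last:
                buf.append(key)
                buf.sort()
                del buf[s:]           # bound the buffer at s entries
        out.extend(v for v, _ in buf)
        last = buf[-1]
    return out
```

For 7 items and a buffer of 3, this performs 3 scans; shrinking the buffer increases the scan count proportionally, mirroring the r(n)·s(n) product in the abstract.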
|