| id | title | categories | abstract |
|---|---|---|---|
cs/0607091
|
Finite element method for thermal analysis of concentrating solar
receivers
|
cs.CE
|
The application of the finite element method, with a heat-conduction transfer
model, to the calculation of the temperature distribution in the receiver of a
dish-Stirling concentrating solar system is described. The method yields
discretized equations that are entirely local to the elements and provides
complete geometric flexibility. A computer program solving the finite element
problem was created, and a great number of numerical experiments were carried
out. Illustrative numerical results are given for an array of triangular
elements in a receiver for a dish-Stirling system.
|
cs/0607095
|
Gallager's Exponent for MIMO Channels: A Reliability-Rate Tradeoff
|
cs.IT math.IT
|
In this paper, we derive Gallager's random coding error exponent for
multiple-input multiple-output (MIMO) channels, assuming no channel-state
information (CSI) at the transmitter and perfect CSI at the receiver. This
measure gives insight into a fundamental tradeoff between the communication
reliability and information rate of MIMO channels, enabling determination of the
required codeword length to achieve a prescribed error probability at a given
rate below the channel capacity. We quantify the effects of the number of
antennas, channel coherence time, and spatial fading correlation on the MIMO
exponent. In addition, general formulae for the ergodic capacity and the cutoff
rate in the presence of spatial correlation are deduced from the exponent
expressions. These formulae are applicable to arbitrary structures of transmit
and receive correlation, encompassing all the previously known results as
special cases of our expressions.
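For reference, the random coding exponent in question has Gallager's standard form (quoted here in its generic single-channel statement, not the paper's MIMO-specific expression):

$$
E_r(R) = \max_{0 \le \rho \le 1} \bigl[ E_0(\rho) - \rho R \bigr],
\qquad
P_e \le \exp\bigl(-N\, E_r(R)\bigr),
$$

where $E_0(\rho)$ is the Gallager function of the channel and $N$ is the codeword length. Since $E_r(R) > 0$ for all rates below capacity, inverting the bound gives the codeword length needed for a prescribed error probability.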
|
cs/0607096
|
Logical settings for concept learning from incomplete examples in First
Order Logic
|
cs.LG
|
We investigate here concept learning from incomplete examples. Our first
purpose is to discuss to what extent logical learning settings have to be
modified in order to cope with data incompleteness. More precisely we are
interested in extending the learning from interpretations setting introduced by
L. De Raedt, which extends the classical propositional (or attribute-value)
concept learning from examples framework to relational representations. We
are inspired here by ideas presented by H. Hirsh in a work extending the
Version space inductive paradigm to incomplete data. H. Hirsh proposes to
slightly modify the notion of solution when dealing with incomplete examples: a
solution has to be a hypothesis compatible with all pieces of information
concerning the examples. We identify two main classes of incompleteness. First,
uncertainty deals with our state of knowledge concerning an example. Second,
generalization (or abstraction) deals with what part of the description of the
example is sufficient for the learning purpose. These two main sources of
incompleteness can be mixed up when only part of the useful information is
known. We discuss a general learning setting, referred to as "learning from
possibilities" that formalizes these ideas, then we present a more specific
learning setting, referred to as "assumption-based learning", that copes with
examples whose uncertainty can be reduced by considering contextual
information outside of the proper description of the examples. Assumption-based
learning is illustrated on a recent work concerning the prediction of a
consensus secondary structure common to a set of RNA sequences.
|
cs/0607098
|
List decoding of noisy Reed-Muller-like codes
|
cs.DS cs.IT math.IT
|
First- and second-order Reed-Muller (RM(1) and RM(2), respectively) codes are
two fundamental error-correcting codes which arise in communication as well as
in probabilistically-checkable proofs and learning. In this paper, we take the
first steps toward extending the quick randomized decoding tools of RM(1) into
the realm of quadratic binary and, equivalently, Z_4 codes. Our main
algorithmic result is an extension of the RM(1) techniques from Goldreich-Levin
and Kushilevitz-Mansour algorithms to the Hankel code, a code between RM(1) and
RM(2). That is, given a signal s of length N, we find a list that is a superset
of all Hankel codewords phi with dot product to s at least (1/sqrt(k)) times
the norm of s, in time polynomial in k and log(N). We also give a new and
simple formulation of a known Kerdock code as a subcode of the Hankel code. As
a corollary, we can list-decode Kerdock, too. Also, we get a quick algorithm
for finding a sparse Kerdock approximation. That is, for k small compared with
1/sqrt{N} and for epsilon > 0, we find, in time polynomial in (k
log(N)/epsilon), a k-Kerdock-term approximation s~ to s with Euclidean error at
most the factor (1+epsilon+O(k^2/sqrt{N})) times that of the best such
approximation.
|
cs/0607099
|
Degrees of Freedom Region for the MIMO X Channel
|
cs.IT math.IT
|
We provide achievability as well as converse results for the degrees of
freedom region of a MIMO $X$ channel, i.e., a system with two transmitters, two
receivers, each equipped with multiple antennas, where independent messages
need to be conveyed over fixed channels from each transmitter to each receiver.
With $M=1$ antenna at each node, we find that the total (sum rate) degrees of
freedom are bounded above and below as $1 \leq\eta_X^\star \leq {4/3}$. If
$M>1$ and the channel matrices are non-degenerate, then the degrees of freedom
are precisely $\eta_X^\star = {4/3}M$. Simple zero forcing, without dirty paper
encoding or successive decoding, suffices to achieve the ${4/3}M$ degrees of
freedom. With an equal number of antennas at all nodes, we explore the increase in
degrees of freedom when some of the messages are made available to a
transmitter or receiver in the manner of cognitive radio. With a cognitive
transmitter, we show that the number of degrees of freedom is $\eta = {3/2}M$
(for $M>1$) on the MIMO $X$ channel. The same degrees of freedom are obtained on the
MIMO $X$ channel with a cognitive receiver as well. In contrast to the $X$
channel result, we show that for the MIMO \emph{interference} channel, the
degrees of freedom are not increased even if both the transmitter and the
receiver of one user know the other user's message. However, the interference
channel can achieve the full $2M$ degrees of freedom if \emph{each} user has
either a cognitive transmitter or a cognitive receiver. Lastly, if the channels
vary with time/frequency then the $X$ channel with single antennas $(M=1)$ at
all nodes has exactly 4/3 degrees of freedom with no shared messages and
exactly 3/2 degrees of freedom with a cognitive transmitter or a cognitive
receiver.
|
cs/0607102
|
Multiaccess Channels with State Known to Some Encoders and Independent
Messages
|
cs.IT math.IT
|
We consider a state-dependent multiaccess channel (MAC) with state
non-causally known to some encoders. We derive an inner bound for the capacity
region in the general discrete memoryless case and specialize to a binary
noiseless case. In the case of maximum entropy channel state, we obtain the
capacity region for binary noiseless MAC with one informed encoder by deriving
a non-trivial outer bound for this case. For a Gaussian state-dependent MAC
with one encoder being informed of the channel state, we present an inner bound
by applying a slightly generalized dirty paper coding (GDPC) at the informed
encoder that allows for partial state cancellation, and a trivial outer bound
by providing channel state to the decoder also. The uninformed encoders benefit
from the state cancellation in terms of achievable rates; however, it appears that
GDPC cannot completely eliminate the effect of the channel state on the
achievable rate region, in contrast to the case of all encoders being informed.
In the case of infinite state variance, we analyze how the uninformed encoder
benefits from the informed encoder's actions using the inner bound and also
provide a non-trivial outer bound for this case which is better than the
trivial outer bound.
|
cs/0607103
|
Ideas by Statistical Mechanics (ISM)
|
cs.CE cs.MS cs.NE
|
Ideas by Statistical Mechanics (ISM) is a generic program to model evolution
and propagation of ideas/patterns throughout populations subjected to
endogenous and exogenous interactions. The program is based on the author's
work in Statistical Mechanics of Neocortical Interactions (SMNI), and uses the
author's Adaptive Simulated Annealing (ASA) code for optimizations of training
sets, as well as for importance-sampling to apply the author's copula financial
risk-management codes, Trading in Risk Dimensions (TRD), for assessments of
risk and uncertainty. This product can be used for decision support for
projects ranging from diplomatic, information, military, and economic (DIME)
factors of propagation/evolution of ideas, to commercial sales, trading
indicators across sectors of financial markets, advertising and political
campaigns, etc. A statistical mechanical model of neocortical interactions,
developed by the author and tested successfully in describing short-term memory
and EEG indicators, is the proposed model. Parameters with a given subset of
macrocolumns will be fit using ASA to patterns representing ideas. Parameters
of external and inter-regional interactions will be determined that promote or
inhibit the spread of these ideas. Tools of financial risk management,
developed by the author to process correlated multivariate systems with
differing non-Gaussian distributions using modern copula analysis,
importance-sampled using ASA, will enable bona fide correlations and
uncertainties of success and failure to be calculated. Marginal distributions
will be evolved to determine their expected duration and stability using
algorithms developed by the author, i.e., PATHTREE and PATHINT codes.
|
cs/0607104
|
Reducing the Computation of Linear Complexities of Periodic Sequences
over $GF(p^m)$
|
cs.CR cs.IT math.IT
|
The linear complexity of a periodic sequence over $GF(p^m)$ plays an
important role in cryptography and communication [12]. In this correspondence,
we prove a result which reduces the computation of the linear complexity and
minimal connection polynomial of a period $un$ sequence over $GF(p^m)$ to the
computation of the linear complexities and minimal connection polynomials of
$u$ period $n$ sequences. The conditions $u|p^m-1$ and
$\gcd(n,p^m-1)=1$ are required for the result to hold. Some applications of
this reduction in fast algorithms to determine the linear complexities and
minimal connection polynomials of sequences over $GF(p^m)$ are presented.
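Such computations ultimately rest on the Berlekamp-Massey algorithm; a minimal sketch over GF(2) is shown below (the paper treats general $GF(p^m)$ and period-$un$ sequences, which this illustration does not):

```python
def linear_complexity(s):
    """Berlekamp-Massey over GF(2): linear complexity of bit sequence s."""
    n = len(s)
    c, b = [0] * n, [0] * n      # current and previous connection polynomials
    c[0] = b[0] = 1
    L, m = 0, -1                 # current complexity, last length-change index
    for i in range(n):
        d = s[i]                 # discrepancy between prediction and s[i]
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]
        if d:
            t = c[:]             # save c before updating it
            for j in range(n - i + m):
                c[i - m + j] ^= b[j]
            if 2 * L <= i:       # a length change is required
                L, m, b = i + 1 - L, i, t
    return L
```

For example, the all-ones sequence has linear complexity 1, while a sequence satisfying a two-term recurrence such as 0,1,1,0,1,1,... has linear complexity 2.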
|
cs/0607107
|
Linear Predictive Coding as an Estimator of Volatility
|
cs.IT math.IT
|
In this paper, we present a method of estimating the volatility of a signal
that displays stochastic noise (such as a risky asset traded on an open market)
utilizing Linear Predictive Coding. The main purpose is to associate volatility
with a series of statistical properties that can lead us, through further
investigation, toward a better understanding of structural volatility as well
as to improve the quality of our current estimates.
|
cs/0607108
|
Properties of subspace subcodes of optimum codes in rank metric
|
cs.IT cs.DM math.IT
|
Maximum rank distance codes denoted MRD-codes are the equivalent in rank
metric of MDS-codes. Given any integer $q$ power of a prime and any integer $n$
there is a family of MRD-codes of length $n$ over $\FF{q^n}$ having
polynomial-time decoding algorithms. These codes can be seen as the analogs of
Reed-Solomon codes (hereafter denoted RS-codes) for rank metric. In this paper
their subspace subcodes are characterized. It is shown that they are equivalent
to MRD-codes constructed in the same way but with smaller parameters. A
specific polynomial-time decoding algorithm is designed. Moreover, it is shown
that the direct sum of subspace subcodes is equivalent to the direct product of
MRD-codes with smaller parameters. This implies that the decoding procedure can
correct errors of higher rank than the error-correcting capability. Finally it
is shown that, for given parameters, subfield subcodes are completely
characterized by elements of the general linear group ${GL}_n(\FF{q})$ of
non-singular $q$-ary matrices of size $n$.
|
cs/0607110
|
A Theory of Probabilistic Boosting, Decision Trees and Matryoshki
|
cs.LG
|
We present a theory of boosting probabilistic classifiers. We place ourselves
in the situation of a user who only provides a stopping parameter and a
probabilistic weak learner/classifier and compare three types of boosting
algorithms: probabilistic Adaboost, decision tree, and tree of trees of ... of
trees, which we call matryoshka. "Nested tree," "embedded tree" and "recursive
tree" are also appropriate names for this algorithm, which is one of our
contributions. Our other contribution is the theoretical analysis of the
algorithms, in which we give training error bounds. This analysis suggests that
the matryoshka leverages probabilistic weak classifiers more efficiently than
simple decision trees.
|
cs/0607112
|
Improving convergence of Belief Propagation decoding
|
cs.IT math.IT
|
The decoding of Low-Density Parity-Check codes by the Belief Propagation (BP)
algorithm is revisited. We check the iterative algorithm for its convergence to
a codeword (termination), and run Monte Carlo simulations to find the
probability distribution function of the termination time, n_it. Tested on an
example [155, 64, 20] code, this termination curve shows a maximum and an
extended algebraic tail at the highest values of n_it. Aiming to reduce the
tail of the termination curve, we consider a family of iterative algorithms
modifying the standard BP by means of a simple relaxation. The relaxation
parameter controls the convergence of the modified BP algorithm to a minimum of
the Bethe free energy. The improvement is experimentally demonstrated for the
Additive-White-Gaussian-Noise channel in some range of the signal-to-noise
ratios. We also discuss the trade-off between the relaxation parameter of the
improved iterative scheme and the number of iterations.
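A generic damped update of this kind (the specific relaxation used in the paper may differ in detail) reads

$$
m^{(t+1)} = (1 - \gamma)\, m^{(t)} + \gamma\, F_{\mathrm{BP}}\!\left(m^{(t)}\right),
\qquad 0 < \gamma \le 1,
$$

where $F_{\mathrm{BP}}$ denotes one sweep of the standard BP message updates; $\gamma = 1$ recovers plain BP, while smaller $\gamma$ slows the iteration but steers it toward a minimum of the Bethe free energy.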
|
cs/0607120
|
Expressing Implicit Semantic Relations without Supervision
|
cs.CL cs.AI cs.IR cs.LG
|
We present an unsupervised learning algorithm that mines large text corpora
for patterns that express implicit semantic relations. For a given input word
pair X:Y with some unspecified semantic relations, the corresponding output
list of patterns <P1,...,Pm> is ranked according to how well each pattern Pi
expresses the relations between X and Y. For example, given X=ostrich and
Y=bird, the two highest ranking output patterns are "X is the largest Y" and "Y
such as the X". The output patterns are intended to be useful for finding
further pairs with the same relations, to support the construction of lexicons,
ontologies, and semantic networks. The patterns are sorted by pertinence, where
the pertinence of a pattern Pi for a word pair X:Y is the expected relational
similarity between the given pair and typical pairs for Pi. The algorithm is
empirically evaluated on two tasks, solving multiple-choice SAT word analogy
questions and classifying semantic relations in noun-modifier pairs. On both
tasks, the algorithm achieves state-of-the-art results, performing
significantly better than several alternative pattern ranking algorithms, based
on tf-idf.
|
cs/0607132
|
On q-ary codes correcting all unidirectional errors of a limited
magnitude
|
cs.IT math.IT
|
We consider codes over the alphabet Q={0,1,...,q-1} intended for the control of
unidirectional errors of level l. That is, the transmission channel is such
that the received word cannot contain both a component larger than the
transmitted one and a component smaller than the transmitted one. Moreover, the
absolute value of the difference between a transmitted component and its
received version is at most l.
We introduce and study q-ary codes capable of correcting all unidirectional
errors of level l. Lower and upper bounds for the maximal size of those codes
are presented.
We also study codes for this aim that are defined by a single equation on the
codeword coordinates (similar to the Varshamov-Tenengolts codes for correcting
binary asymmetric errors). We finally consider the problem of detecting all
unidirectional errors of level l.
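In the binary case, the single-equation construction referenced here is the classical Varshamov-Tenengolts code; a brute-force enumeration sketch (illustrative only; the paper's q-ary level-l codes are different):

```python
from itertools import product

def vt_codewords(n, a=0):
    """Binary Varshamov-Tenengolts code VT_a(n): all x in {0,1}^n with
    sum(i * x_i) congruent to a (mod n+1), positions indexed 1..n."""
    return [x for x in product((0, 1), repeat=n)
            if sum(i * xi for i, xi in enumerate(x, 1)) % (n + 1) == a]
```

The codes VT_0(n), ..., VT_n(n) partition {0,1}^n, and each corrects a single asymmetric error.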
|
cs/0607133
|
Self-Replication and Self-Assembly for Manufacturing
|
cs.MA cs.CE
|
It has been argued that a central objective of nanotechnology is to make
products inexpensively, and that self-replication is an effective approach to
very low-cost manufacturing. The research presented here is intended to be a
step towards this vision. We describe a computational simulation of nanoscale
machines floating in a virtual liquid. The machines can bond together to form
strands (chains) that self-replicate and self-assemble into user-specified
meshes. There are four types of machines and the sequence of machine types in a
strand determines the shape of the mesh they will build. A strand may be in an
unfolded state, in which the bonds are straight, or in a folded state, in which
the bond angles depend on the types of machines. By choosing the sequence of
machine types in a strand, the user can specify a variety of polygonal shapes.
A simulation typically begins with an initial unfolded seed strand in a soup of
unbonded machines. The seed strand replicates by bonding with free machines in
the soup. The child strands fold into the encoded polygonal shape, and then the
polygons drift together and bond to form a mesh. We demonstrate that a variety
of polygonal meshes can be manufactured in the simulation, by simply changing
the sequence of machine types in the seed.
|
cs/0607134
|
Leading strategies in competitive on-line prediction
|
cs.LG
|
We start from a simple asymptotic result for the problem of on-line
regression with the quadratic loss function: the class of continuous
limited-memory prediction strategies admits a "leading prediction strategy",
which not only asymptotically performs at least as well as any continuous
limited-memory strategy but also satisfies the property that the excess loss of
any continuous limited-memory strategy is determined by how closely it imitates
the leading strategy. More specifically, for any class of prediction strategies
constituting a reproducing kernel Hilbert space we construct a leading
strategy, in the sense that the loss of any prediction strategy whose norm is
not too large is determined by how closely it imitates the leading strategy.
This result is extended to the loss functions given by Bregman divergences and
by strictly proper scoring rules.
|
cs/0607136
|
Competing with Markov prediction strategies
|
cs.LG
|
Assuming that the loss function is convex in the prediction, we construct a
prediction strategy universal for the class of Markov prediction strategies,
not necessarily continuous. Allowing randomization, we remove the requirement
of convexity.
|
cs/0607138
|
A Foundation to Perception Computing, Logic and Automata
|
cs.AI cs.LG
|
In this report, a novel approach to intelligence and learning is introduced,
this approach is based on what we call 'perception logic'. Based on this logic,
a computing mechanism and automata are introduced. Multi-resolution analysis of
perceptual information is given, in which learning is accomplished in at most
O(log(N)) epochs, where N is the number of samples, and convergence is
guaranteed. This approach combines the flavors of computational models, in the
sense that they are structured and mathematically well-defined, with the
adaptivity of soft computing approaches, in addition to the continuity and
real-time response of dynamical systems.
|
cs/0607140
|
Stylized Facts in Internal Rates of Return on Stock Index and its
Derivative Transactions
|
cs.IT cs.CE math.IT
|
Universal features in stock markets and their derivative markets are studied
by means of probability distributions in internal rates of return on buy and
sell transaction pairs. Unlike the stylized facts in log normalized returns,
the probability distributions for such single asset encounters incorporate the
time factor by means of the internal rate of return defined as the continuous
compound interest. Resulting stylized facts are shown in the probability
distributions derived from the daily series of TOPIX, S & P 500 and FTSE 100
index close values. The application of the above analysis to minute-tick data
of NIKKEI 225 and its futures market, respectively, reveals an interesting
difference in the behavior of the two probability distributions when a
threshold on the minimal duration of the long position is imposed. It is
therefore suggested that the probability distributions of the internal rates of
return could be used for causality mining between the underlying and derivative
stock markets. The highly specific discrete spectrum, which results from noise
trader strategies as opposed to the smooth distributions observed for
fundamentalist strategies in single encounter transactions may be also useful
in deducing the type of investment strategy from trading revenues of small
portfolio investors.
|
cs/0607143
|
Target Type Tracking with PCR5 and Dempster's rules: A Comparative
Analysis
|
cs.AI
|
In this paper we consider and analyze the behavior of two combinational rules
for temporal (sequential) attribute data fusion for target type estimation. Our
comparative analysis is based on Dempster's fusion rule proposed in
Dempster-Shafer Theory (DST) and on the Proportional Conflict Redistribution
rule no. 5 (PCR5) recently proposed in Dezert-Smarandache Theory (DSmT). We
show, through a very simple scenario and Monte-Carlo simulation, how PCR5 allows
very efficient Target Type Tracking and drastically reduces the latency delay
for a correct Target Type decision with respect to Dempster's rule. For cases
presenting some short Target Type switches, Dempster's rule proves unable to
detect the switches and thus to track the Target Type changes correctly. The
approach proposed here is totally new, efficient and promising to
be incorporated in real-time Generalized Data Association - Multi Target
Tracking systems (GDA-MTT) and provides an important result on the behavior of
PCR5 with respect to Dempster's rule. The MatLab source code is provided in
|
cs/0607147
|
Fusion of qualitative beliefs using DSmT
|
cs.AI
|
This paper introduces the notion of qualitative belief assignment to model
beliefs of human experts expressed in natural language (with linguistic
labels). We show how qualitative beliefs can be efficiently combined using an
extension of Dezert-Smarandache Theory (DSmT) of plausible and paradoxical
quantitative reasoning to qualitative reasoning. We propose a new arithmetic on
linguistic labels which allows a direct extension of classical DSm fusion rule
or DSm Hybrid rules. An approximate qualitative PCR5 rule is also proposed
jointly with a Qualitative Average Operator. We also show how crisp or interval
mappings can be used to deal indirectly with linguistic labels. A very simple
example is provided to illustrate our qualitative fusion rules.
|
cs/0608002
|
An Introduction to the DSm Theory for the Combination of Paradoxical,
Uncertain, and Imprecise Sources of Information
|
cs.AI
|
The management and combination of uncertain, imprecise, fuzzy and even
paradoxical or high conflicting sources of information has always been, and
still remains today, of primal importance for the development of reliable
modern information systems involving artificial reasoning. In this
introduction, we present a survey of our recent theory of plausible and
paradoxical reasoning, known as Dezert-Smarandache Theory (DSmT) in the
literature, developed for dealing with imprecise, uncertain and paradoxical
sources of information. We focus our presentation here rather on the
foundations of DSmT, and on the two important new rules of combination, than on
browsing specific applications of DSmT available in literature. Several simple
examples are given throughout the presentation to show the efficiency and the
generality of this new approach.
|
cs/0608004
|
Separating the articles of authors with the same name
|
cs.DL cs.IR
|
I describe a method to separate the articles of different authors with the
same name. It is based on a distance between any two publications, defined in
terms of the probability that they would have as many coincidences if they were
drawn at random from all published documents. Articles with a given author name
are then clustered according to their distance, so that all articles in a
cluster belong very likely to the same author. The method has proven very
useful in generating groups of papers that are then selected manually. This
simplifies considerably citation analysis when the author publication lists are
not available.
|
cs/0608006
|
A Graph-based Framework for Transmission of Correlated Sources over
Broadcast Channels
|
cs.IT math.IT
|
In this paper we consider the communication problem that involves
transmission of correlated sources over broadcast channels. We consider a
graph-based framework for this information transmission problem. The system
involves a source coding module and a channel coding module. In the source
coding module, the sources are efficiently mapped into a nearly semi-regular
bipartite graph, and in the channel coding module, the edges of this graph are
reliably transmitted over a broadcast channel. We consider nearly semi-regular
bipartite graphs as a discrete interface between source coding and channel coding
in this multiterminal setting. We provide an information-theoretic
characterization of (1) the rate of exponential growth (as a function of the
number of channel uses) of the size of the bipartite graphs whose edges can be
reliably transmitted over a broadcast channel and (2) the rate of exponential
growth (as a function of the number of source samples) of the size of the
bipartite graphs which can reliably represent a pair of correlated sources to
be transmitted over a broadcast channel.
|
cs/0608007
|
On the randomness of independent experiments
|
cs.IT math.IT
|
Given a probability distribution P, what is the minimum amount of bits needed
to store a value x sampled according to P, such that x can later be recovered
(except with some small probability)? Or, what is the maximum amount of uniform
randomness that can be extracted from x? Answering these and similar
information-theoretic questions typically boils down to computing so-called
smooth entropies. In this paper, we derive explicit and almost tight bounds on
the smooth entropies of n-fold product distributions.
|
cs/0608009
|
Stability in multidimensional Size Theory
|
cs.CG cs.CV
|
This paper proves that in Size Theory the comparison of multidimensional size
functions can be reduced to the 1-dimensional case by a suitable change of
variables. Indeed, we show that a foliation in half-planes can be given, such
that the restriction of a multidimensional size function to each of these
half-planes turns out to be a classical size function in two scalar variables.
This leads to the definition of a new distance between multidimensional size
functions, and to the proof of their stability with respect to that distance.
|
cs/0608010
|
MIMO scheme performance and detection in epsilon noise
|
cs.IT math.IT
|
A new approach for the analysis and decoding of MIMO signaling is developed for
a common model of non-Gaussian noise consisting of background plus impulsive
noise, called epsilon-noise. It is shown that performance under non-Gaussian
noise is significantly worse than under Gaussian noise, and simulation results
support the theory. A statistically robust detection rule is suggested for this
kind of noise; it features much better robust-detector performance than a
detector designed for Gaussian noise in an impulsive environment, at only a
modest margin in background noise. The performance of the proposed algorithms
is comparable with a derived potential bound. The proposed tool is a crucial
issue for MIMO communication system design, since real noise environments have
an impulsive character that contradicts the widely used Gaussian assumption, so
real MIMO performance differs greatly between Gaussian and non-Gaussian noise
models.
|
cs/0608015
|
Towards "Propagation = Logic + Control"
|
cs.PL cs.AI
|
Constraint propagation algorithms implement logical inference. For
efficiency, it is essential to control whether and in what order basic
inference steps are taken. We provide a high-level framework that clearly
differentiates between information needed for controlling propagation versus
that needed for the logical semantics of complex constraints composed from
primitive ones. We argue for the appropriateness of our controlled propagation
framework by showing that it captures the underlying principles of manually
designed propagation algorithms, such as literal watching for unit clause
propagation and the lexicographic ordering constraint. We provide an
implementation and benchmark results that demonstrate the practicality and
efficiency of our framework.
|
cs/0608017
|
Infinite Qualitative Simulations by Means of Constraint Programming
|
cs.AI cs.LO
|
We introduce a constraint-based framework for studying infinite qualitative
simulations concerned with contingencies such as time, space, shape, size,
abstracted into a finite set of qualitative relations. To define the
simulations, we combine constraints that formalize the background knowledge
concerned with qualitative reasoning with appropriate inter-state constraints
that are formulated using linear temporal logic. We implemented this approach
in a constraint programming system by drawing on ideas from bounded model
checking. The resulting system allows us to test and modify the problem
specifications in a straightforward way and to combine various knowledge
aspects.
|
cs/0608018
|
The single-serving channel capacity
|
cs.IT math.IT
|
In this paper we provide the answer to the following question: Given a noisy
channel and epsilon>0, how many bits can be transmitted with an error of at
most epsilon by a single use of the channel?
|
cs/0608019
|
Relation Variables in Qualitative Spatial Reasoning
|
cs.AI
|
We study an alternative to the prevailing approach to modelling qualitative
spatial reasoning (QSR) problems as constraint satisfaction problems. In the
standard approach, a relation between objects is a constraint whereas in the
alternative approach it is a variable. The relation-variable approach greatly
simplifies integration and implementation of QSR. To substantiate this point,
we discuss several QSR algorithms from the literature which in the
relation-variable approach reduce to the customary constraint propagation
algorithm enforcing generalised arc-consistency.
|
cs/0608021
|
The Shannon capacity of a graph and the independence numbers of its
powers
|
cs.IT cs.DM math.IT
|
The independence numbers of powers of graphs have been long studied, under
several definitions of graph products, and in particular, under the strong
graph product. We show that the series of independence numbers in strong powers
of a fixed graph can exhibit a complex structure, implying that the Shannon
Capacity of a graph cannot be approximated (up to a sub-polynomial factor of
the number of vertices) by any arbitrarily large, yet fixed, prefix of the
series. This is true even if this prefix shows a significant increase of the
independence number at a given power, after which it stabilizes for a while.
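The jump in the normalized independence number is already visible for the 5-cycle (our illustrative choice, not an example from the paper): alpha(C5) = 2 but the strong square satisfies alpha(C5 x C5) = 5, which a brute-force search confirms:

```python
from itertools import combinations

def alpha_strong_square_c5():
    """Independence number of the strong product C5 x C5, by brute force."""
    n = 5
    def adj(a, b):                       # adjacency in the 5-cycle
        return a != b and (a - b) % n in (1, n - 1)
    def sp_adj(u, v):                    # strong-product adjacency on pairs
        return u != v and all(x == y or adj(x, y) for x, y in zip(u, v))
    verts = [(i, j) for i in range(n) for j in range(n)]
    best = 1
    for k in range(2, len(verts) + 1):
        indep = (s for s in combinations(verts, k)
                 if not any(sp_adj(u, v) for u, v in combinations(s, 2)))
        if next(indep, None) is None:    # no independent set of size k exists
            return best
        best = k
    return best
```

Hence alpha(C5^2)^(1/2) = sqrt(5) > 2 = alpha(C5), the classical jump in the lower bound on Shannon capacity.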
|
cs/0608023
|
Optimal resource allocation for OFDM multiuser channels
|
cs.IT math.IT
|
In this paper, a unifying framework for orthogonal frequency division
multiplexing (OFDM) multiuser resource allocation is presented. The isolated
seeming problems of maximizing a weighted sum of rates for a given power budget
$\bar{P}$ and minimizing sum power for given rate requirements
$\mathbf{\bar{R}}$ can be interpreted jointly in this framework. To this end we
embed the problems in a higher dimensional space. Based on these results, we
subsequently consider the combined problem of maximizing a weighted sum of
rates under given rate requirements $\mathbf{\bar{R}}$ and a fixed power budget
$\bar{P}$. This new problem is challenging, since the additional constraints do
not allow the use of the hitherto existing approaches. Interestingly, the optimal
decoding orders turn out to be the ordering of the Lagrangian factors in all
problems.
|
cs/0608028
|
Using Sets of Probability Measures to Represent Uncertainty
|
cs.AI
|
I explore the use of sets of probability measures as a representation of
uncertainty.
|
cs/0608029
|
Guessing Facets: Polytope Structure and Improved LP Decoder
|
cs.IT math.IT
|
A new approach for decoding binary linear codes by solving a linear program
(LP) over a relaxed codeword polytope was recently proposed by Feldman et al.
In this paper we investigate the structure of the polytope used in the LP
relaxation decoding. We begin by showing that for expander codes, every
fractional pseudocodeword always has at least a constant fraction of
non-integral bits. We then prove that for expander codes, the active set of any
fractional pseudocodeword is smaller by a constant fraction than the active set
of any codeword. We exploit this fact to devise a decoding algorithm that
provably outperforms the LP decoder for finite blocklengths. It proceeds by
guessing facets of the polytope, and resolving the linear program on these
facets. While the LP decoder succeeds only if the ML codeword has the highest
likelihood over all pseudocodewords, we prove that for expander codes the
proposed algorithm succeeds even with a constant number of pseudocodewords of
higher likelihood. Moreover, the complexity of the proposed algorithm is only a
constant factor larger than that of the LP decoder.
|
cs/0608033
|
A Study on Learnability for Rigid Lambek Grammars
|
cs.LG
|
We present basic notions of Gold's "learnability in the limit" paradigm,
first presented in 1967, a formalization of the cognitive process by which a
native speaker comes to grasp the underlying grammar of his/her own native
language by being exposed to well-formed sentences generated by that grammar.
Then we present Lambek grammars, a formalism derived from categorial grammars
which, although not as expressive as needed for a full formalization of natural
languages, is particularly suited to easily implementing a natural interface
between syntax and semantics. In the last part of this work, we present a
learnability result for Rigid Lambek grammars from structured examples.
|
cs/0608037
|
Cascade hash tables: a series of multilevel double hashing schemes with
O(1) worst case lookup time
|
cs.DS cs.AI
|
In this paper, the author proposes a series of multilevel double hashing
schemes called cascade hash tables. They use several levels of hash tables. In
each table, we use the common double hashing scheme. Higher level hash tables
work as fail-safes of lower level hash tables. This strategy effectively
reduces collisions during hash insertion, yielding a constant worst-case
lookup time with a relatively high load factor (70%-85%) in random
experiments. Different parameters of cascade hash tables are tested.
|
cs/0608042
|
An Improved Sphere-Packing Bound for Finite-Length Codes on Symmetric
Memoryless Channels
|
cs.IT math.IT
|
This paper derives an improved sphere-packing (ISP) bound for finite-length
codes whose transmission takes place over symmetric memoryless channels. We
first review classical results, i.e., the 1959 sphere-packing (SP59) bound of
Shannon for the Gaussian channel, and the 1967 sphere-packing (SP67) bound of
Shannon et al. for discrete memoryless channels. A recent improvement on the
SP67 bound, as suggested by Valembois and Fossorier, is also discussed. These
concepts are used for the derivation of a new lower bound on the decoding error
probability (referred to as the ISP bound) which is uniformly tighter than the
SP67 bound and its recent improved version. The ISP bound is applicable to
symmetric memoryless channels, and some of its applications are exemplified.
Its tightness is studied by comparing it with bounds on the ML decoding error
probability, and computer simulations of iteratively decoded turbo-like codes.
The paper also presents a technique which performs the entire calculation of
the SP59 bound in the logarithmic domain, thus facilitating the exact
calculation of this bound for moderate to large block lengths without the need
for the asymptotic approximations provided by Shannon.
|
cs/0608043
|
Using Users' Expectations to Adapt Business Intelligence Systems
|
cs.IR
|
This paper takes a look at the general characteristics of business or
economic intelligence systems. The role of the user within this type of system
is emphasized. We propose two models which we consider important in order to
adapt this system to the user. The first model is based on the definition of
decisional problem and the second on the four cognitive phases of human
learning. We also describe the application domain we are using to test these
models in this type of system.
|
cs/0608044
|
Network Coding in a Multicast Switch
|
cs.NI cs.IT math.IT
|
We consider the problem of serving multicast flows in a crossbar switch. We
show that linear network coding across packets of a flow can sustain traffic
patterns that cannot be served if network coding were not allowed. Thus,
network coding leads to a larger rate region in a multicast crossbar switch. We
demonstrate a traffic pattern which requires a switch speedup if coding is not
allowed, whereas, with coding the speedup requirement is eliminated completely.
In addition to throughput benefits, coding simplifies the characterization of
the rate region. We give a graph-theoretic characterization of the rate region
with fanout splitting and intra-flow coding, in terms of the stable set
polytope of the 'enhanced conflict graph' of the traffic pattern. Such a
formulation is not known in the case of fanout splitting without coding. We
show that computing the offline schedule (i.e. using prior knowledge of the
flow arrival rates) can be reduced to certain graph coloring problems. Finally,
we propose online algorithms (i.e. using only the current queue occupancy
information) for multicast scheduling based on our graph-theoretic formulation.
In particular, we show that a maximum weighted stable set algorithm stabilizes
the queues for all rates within the rate region.
|
cs/0608049
|
Solving non-uniqueness in agglomerative hierarchical clustering using
multidendrograms
|
cs.IR math.ST physics.data-an stat.TH
|
In agglomerative hierarchical clustering, pair-group methods suffer from a
problem of non-uniqueness when two or more distances between different clusters
coincide during the amalgamation process. The traditional approach for solving
this drawback has been to take any arbitrary criterion in order to break ties
between distances, which results in different hierarchical classifications
depending on the criterion followed. In this article we propose a
variable-group algorithm that consists in grouping more than two clusters at
the same time when ties occur. We give a tree representation for the results of
the algorithm, which we call a multidendrogram, as well as a generalization of
Lance and Williams' formula which enables the implementation of the
algorithm in a recursive way.
|
cs/0608056
|
Wiretap Channel With Side Information
|
cs.IT math.IT
|
This submission has been withdrawn by the author.
|
cs/0608057
|
Hybrid Elections Broaden Complexity-Theoretic Resistance to Control
|
cs.GT cs.CC cs.MA
|
Electoral control refers to attempts by an election's organizer ("the chair")
to influence the outcome by adding/deleting/partitioning voters or candidates.
The groundbreaking work of Bartholdi, Tovey, and Trick [BTT92] on
(constructive) control proposes computational complexity as a means of
resisting control attempts: Look for election systems where the chair's task in
seeking control is itself computationally infeasible.
We introduce and study a method of combining two or more candidate-anonymous
election schemes in such a way that the combined scheme possesses all the
resistances to control (i.e., all the NP-hardnesses of control) possessed by
any of its constituents: It combines their strengths. From this and new
resistance constructions, we prove for the first time that there exists an
election scheme that is resistant to all twenty standard types of electoral
control.
|
cs/0608060
|
Duality and Capacity Region of AF Relay MAC and BC
|
cs.IT math.IT
|
We consider multi-hop multiple access (MAC) and broadcast channels (BC) where
communication takes place with the assistance of relays that amplify and
forward (AF) their received signals. For a two hop parallel AF relay MAC,
assuming a sum power constraint across all relays we characterize optimal relay
amplification factors and the resulting capacity regions. We find that the
parallel AF relay MAC with total transmit power of the two users $P_1+P_2=P$
and total relay power $P_R$ is the dual of the parallel AF relay BC where the
MAC source nodes become the BC destination nodes, the MAC destination node
becomes the BC source node, the dual BC source transmit power is $P_R$ and the
total transmit power of the AF relays is $P$. The duality means that the
capacity region of the AF relay MAC with a sum power constraint $P$ on the
transmitters is the same as that of the dual BC. The duality relationship is
found to be useful in characterizing the capacity region of the AF relay BC as
the union of MAC capacity regions. The duality extends to distributed relays
with multiple antennas and more than 2 hops as well.
|
cs/0608070
|
Finite State Channels with Time-Invariant Deterministic Feedback
|
cs.IT math.IT
|
We consider capacity of discrete-time channels with feedback for the general
case where the feedback is a time-invariant deterministic function of the
output samples. Under the assumption that the channel states take values in a
finite alphabet, we find an achievable rate and an upper bound on the capacity.
We further show that when the channel is indecomposable, and has no intersymbol
interference (ISI), its capacity is given by the limit of the maximum of the
(normalized) directed information between the input $X^N$ and the output $Y^N$,
i.e. $C = \lim_{N \to \infty} \frac{1}{N} \max I(X^N \to Y^N)$, where the
maximization is taken over the causal conditioning probability
$Q(x^N||z^{N-1})$ defined in this paper. The capacity result is used to show
that the source-channel separation theorem holds for time-invariant deterministic
feedback. We also show that if the state of the channel is known both at the
encoder and the decoder then feedback does not increase capacity.
|
cs/0608071
|
Broadcast Cooperation Strategies for Two Colocated Users
|
cs.IT math.IT
|
This work considers the problem of communication from a single transmitter,
over a network with colocated users, through an independent block Rayleigh
fading channel. The colocation nature of the users allows cooperation, which
increases the overall achievable rate, from the transmitter to the destined
user. The transmitter is ignorant of the fading coefficients, while receivers
have access to perfect channel state information (CSI). This gives rise to the
multi-layer broadcast approach used by the transmitter. The broadcast approach
allows, in our network setting, to improve the cooperation between the
colocated users. That is due to the nature of broadcasting, where the better
the channel quality, the more layers that can be decoded. The cooperation
between the users is performed over an additive white Gaussian channels (AWGN),
with a relaying power constraint, and unlimited bandwidth. Three commonly used
cooperation techniques are studied: amplify-forward (AF), compress-forward
(CF), and decode-forward (DF). These methods are extended using the broadcast
approach for the case of a relaxed decoding delay constraint. For this case, a
separate processing of the layers, which includes multi-session cooperation, is
shown to be beneficial. Further, closed form expressions for infinitely many AF
sessions and recursive expressions for the more complex CF are given. Numerical
results for the various cooperation strategies demonstrate the efficiency of
multi-session cooperation.
|
cs/0608072
|
Applications of Random Parameter Matrices Kalman Filtering in Uncertain
Observation and Multi-Model Systems
|
cs.IT math.IT
|
This paper considers the Linear Minimum Variance recursive state estimation
for the linear discrete time dynamic system with random state transition and
measurement matrices, i.e., random parameter matrices Kalman filtering. It is
shown that such a system can be converted to a linear dynamic system with
deterministic parameter matrices but state-dependent process and measurement
noises. It is proved that under mild conditions, the recursive state estimation
of this system is still of the form of a modified Kalman filtering. More
importantly, this result can be applied to Kalman filtering with intermittent
and partial observations as well as randomly variant dynamic systems.
|
cs/0608073
|
Parametrical Neural Networks and Some Other Similar Architectures
|
cs.CV cs.NE
|
A review of work on associative neural networks carried out over the last
four years at the Institute of Optical Neural Technologies RAS is given. The
presentation is based on a description of parametrical neural networks (PNN).
To date, PNNs have record recognition characteristics (storage capacity, noise
immunity, and speed of operation). The emphasis is on a presentation of the
basic ideas and principles.
|
cs/0608078
|
Searching for Globally Optimal Functional Forms for Inter-Atomic
Potentials Using Parallel Tempering and Genetic Programming
|
cs.NE cs.AI
|
We develop a Genetic Programming-based methodology that enables discovery of
novel functional forms for classical inter-atomic force-fields, used in
molecular dynamics simulations. Unlike previous efforts in the field, that fit
only the parameters to the fixed functional forms, we instead use a novel
algorithm to search the space of many possible functional forms. While a
follow-on practical procedure will use experimental and {\it ab initio} data to
find an optimal functional form for a forcefield, we first validate the
approach using a manufactured solution. This validation has the advantage of a
well-defined metric of success. We manufactured a training set of atomic
coordinate data with an associated set of global energies using the well-known
Lennard-Jones inter-atomic potential. We performed an automatic functional form
fitting procedure starting with a population of random functions, using a
genetic programming functional formulation, and a parallel tempering
Metropolis-based optimization algorithm. Our massively-parallel method
independently discovered the Lennard-Jones function after searching for several
hours on 100 processors and covering a minuscule portion of the configuration
space. We find that the method is suitable for unsupervised discovery of
functional forms for inter-atomic potentials/force-fields. We also find that
our parallel tempering Metropolis-based approach significantly improves the
optimization convergence time, and takes good advantage of the parallel cluster
architecture.
|
cs/0608081
|
How Hard Is Bribery in Elections?
|
cs.GT cs.CC cs.MA
|
We study the complexity of influencing elections through bribery: How
computationally complex is it for an external actor to determine whether by a
certain amount of bribing voters a specified candidate can be made the
election's winner? We study this problem for election systems as varied as
scoring protocols and Dodgson voting, and in a variety of settings regarding
homogeneous-vs.-nonhomogeneous electorate bribability,
bounded-size-vs.-arbitrary-sized candidate sets, weighted-vs.-unweighted
voters, and succinct-vs.-nonsuccinct input specification. We obtain both
polynomial-time bribery algorithms and proofs of the intractability of bribery,
and indeed our results show that the complexity of bribery is extremely
sensitive to the setting. For example, we find settings in which bribery is
NP-complete but manipulation (by voters) is in P, and we find settings in which
bribing weighted voters is NP-complete but bribing voters with individual bribe
thresholds is in P. For the broad class of elections (including plurality,
Borda, k-approval, and veto) known as scoring protocols, we prove a dichotomy
result for bribery of weighted voters: We find a simple-to-evaluate condition
that classifies every case as either NP-complete or in P.
|
cs/0608085
|
A Quadratic Time-Space Tradeoff for Unrestricted Deterministic Decision
Branching Programs
|
cs.CC cs.DM cs.IT math.IT
|
For a decision problem from coding theory, we prove a quadratic expected
time-space tradeoff of the form $\eT\eS=\Omega(\tfrac{n^2}{q})$ for $q$-way
deterministic decision branching programs, where $q\geq 2$. Here $\eT$ is the
expected computation time and $\eS$ is the expected space, when all inputs are
equally likely. This bound is, to our knowledge, the first such to show an
exponential size requirement whenever $\eT = O(n^2)$. Previous exponential size
tradeoffs for Boolean decision branching programs were valid for
time-restricted models with $T=o(n\log_2{n})$. Proving quadratic time-space
tradeoffs for unrestricted time decision branching programs has been a major
goal of recent research -- this goal has already been achieved for
multiple-output branching programs two decades ago. We also show the first
quadratic time-space tradeoffs for Boolean decision branching programs
verifying circular convolution, matrix-vector multiplication and discrete
Fourier transform. Furthermore, we demonstrate a constructive Boolean decision
function which has a quadratic expected time-space tradeoff in the Boolean
deterministic decision branching program model. When $q$ is a constant the
tradeoff results derived here for decision functions verifying various
functions are order-comparable to previously known tradeoff bounds for
calculating the corresponding multiple-output functions.
|
cs/0608086
|
Analog Codes on Graphs
|
cs.IT cs.DM math.IT
|
We consider the problem of transmission of a sequence of real data produced
by a Nyquist sampled band-limited analog source over a band-limited analog
channel, which introduces an additive white Gaussian noise. An analog coding
scheme is described, which can achieve a mean-squared error distortion
proportional to $(1+SNR)^{-B}$ for a bandwidth expansion factor of $B/R$, where
$0 < R < 1$ is the rate of individual component binary codes used in the
construction and $B \geq 1$ is an integer. Thus, over a wide range of SNR
values, the proposed code performs much better than any single previously known
analog coding system.
|
cs/0608087
|
On an Improvement over R\'enyi's Equivocation Bound
|
cs.IT cs.DM math.IT
|
We consider the problem of estimating the probability of error in
multi-hypothesis testing when MAP criterion is used. This probability, which is
also known as the Bayes risk, is an important measure in many communication and
information theory problems. In general, the exact Bayes risk can be difficult
to obtain. Many upper and lower bounds are known in the literature. One such upper
bound is the equivocation bound due to R\'enyi which is of great philosophical
interest because it connects the Bayes risk to conditional entropy. Here we
give a simple derivation for an improved equivocation bound.
We then give some typical examples of problems where these bounds can be of
use. We first consider a binary hypothesis testing problem for which the exact
Bayes risk is difficult to derive. In such problems bounds are of interest.
Furthermore using the bounds on Bayes risk derived in the paper and a random
coding argument, we prove a lower bound on equivocation valid for most random
codes over memoryless channels.
|
cs/0608089
|
Wireless ad-hoc networks: Strategies and Scaling laws for the fixed SNR
regime
|
cs.IT math.IT
|
This paper deals with throughput scaling laws for random ad-hoc wireless
networks in a rich scattering environment. We develop schemes to optimize the
ratio $\rho(n)$ of achievable network sum capacity to the sum of the
point-to-point capacities of source-destinations pairs operating in isolation.
For fixed SNR networks, i.e., where the worst case SNR over the
source-destination pairs is fixed independent of $n$, we show that
collaborative strategies yield a scaling law of $\rho(n) = {\cal
O}(\frac{1}{n^{1/3}})$ in contrast to multi-hop strategies which yield a
scaling law of $\rho(n) = {\cal O}(\frac{1}{\sqrt{n}})$. While networks where the
worst-case SNR goes to zero do not preclude the possibility of collaboration,
multi-hop strategies achieve optimal throughput in that regime. The plausible reason is that
the gains due to collaboration cannot offset the effect of vanishing receive
SNR. This suggests that for fixed SNR networks, a network designer should look
for network protocols that exploit collaboration. The fact that most current
networks operate in a fixed SNR interference limited environment provides
further motivation for considering this regime.
|
cs/0608091
|
On-line topological simplification of weighted graphs
|
cs.DS cs.DB
|
We describe two efficient on-line algorithms to simplify weighted graphs by
eliminating degree-two vertices. Our algorithms are on-line in that they react
to updates on the data, keeping the simplification up-to-date. The supported
updates are insertions of vertices and edges; hence, our algorithms are
partially dynamic. We provide both analytical and empirical evaluations of the
efficiency of our approaches. Specifically, we prove an O(log n) upper bound on
the amortized time complexity of our maintenance algorithms, with n the number
of insertions.
|
cs/0608093
|
Connection between continuous and digital n-manifolds and the Poincare
conjecture
|
cs.DM cs.CV math.AT
|
We introduce LCL covers of closed n-dimensional manifolds by n-dimensional
disks and study their properties. We show that any LCL cover of an
n-dimensional sphere can be converted to the minimal LCL cover, which consists
of 2n+2 disks. We prove that an LCL collection of n-disks is a cover of a
continuous n-sphere if and only if the intersection graph of this collection is
a digital n-sphere. Using a link between LCL covers of closed continuous
n-manifolds and digital n-manifolds, we find conditions where a continuous
closed three-dimensional manifold is the three-dimensional sphere. We discuss a
connection between the classification problems for closed continuous
three-dimensional manifolds and digital three-manifolds.
|
cs/0608095
|
Stationary Algorithmic Probability
|
cs.IT cs.CC math.IT math.PR
|
Kolmogorov complexity and algorithmic probability are defined only up to an
additive (resp. multiplicative) constant, since their actual values depend on the
choice of the universal reference computer. In this paper, we analyze a natural
approach to eliminate this machine-dependence.
Our method is to assign algorithmic probabilities to the different computers
themselves, based on the idea that "unnatural" computers should be hard to
emulate. Therefore, we study the Markov process of universal computers randomly
emulating each other. The corresponding stationary distribution, if it existed,
would give a natural and machine-independent probability measure on the
computers, and also on the binary strings.
Unfortunately, we show that no stationary distribution exists on the set of
all computers; thus, this method cannot eliminate machine-dependence. Moreover,
we show that the reason for failure has a clear and interesting physical
interpretation, suggesting that every other conceivable attempt to get rid of
those additive constants must fail in principle, too.
However, we show that restricting to some subclass of computers might help to
get rid of some amount of machine-dependence in some situations, and the
resulting stationary computer and string probabilities have beautiful
properties.
|
cs/0608099
|
Automated verification of weak equivalence within the SMODELS system
|
cs.AI cs.LO
|
In answer set programming (ASP), a problem at hand is solved by (i) writing a
logic program whose answer sets correspond to the solutions of the problem, and
by (ii) computing the answer sets of the program using an answer set solver as
a search engine. Typically, a programmer creates a series of gradually
improving logic programs for a particular problem when optimizing program
length and execution time on a particular solver. This leads the programmer to
a meta-level problem of ensuring that the programs are equivalent, i.e., they
give rise to the same answer sets. To ease answer set programming at the
methodological level, we propose a translation-based method for verifying the
equivalence of logic programs. The basic idea is to translate logic programs P
and Q under consideration into a single logic program EQT(P,Q) whose answer
sets (if such exist) yield counter-examples to the equivalence of P and Q. The
method is developed here in a slightly more general setting by taking the
visibility of atoms properly into account when comparing answer sets. The
translation-based approach presented in the paper has been implemented as a
translator called lpeq that enables the verification of weak equivalence within
the smodels system using the same search engine as for the search of models.
Our experiments with lpeq and smodels suggest that establishing the equivalence
of logic programs in this way is in certain cases much faster than naive
cross-checking of answer sets.
|
cs/0608100
|
Similarity of Semantic Relations
|
cs.CL cs.IR cs.LG
|
There are at least two kinds of similarity. Relational similarity is
correspondence between relations, in contrast with attributional similarity,
which is correspondence between attributes. When two words have a high degree
of attributional similarity, we call them synonyms. When two pairs of words
have a high degree of relational similarity, we say that their relations are
analogous. For example, the word pair mason:stone is analogous to the pair
carpenter:wood. This paper introduces Latent Relational Analysis (LRA), a
method for measuring relational similarity. LRA has potential applications in
many areas, including information extraction, word sense disambiguation, and
information retrieval. Recently the Vector Space Model (VSM) of information
retrieval has been adapted to measuring relational similarity, achieving a
score of 47% on a collection of 374 college-level multiple-choice word analogy
questions. In the VSM approach, the relation between a pair of words is
characterized by a vector of frequencies of predefined patterns in a large
corpus. LRA extends the VSM approach in three ways: (1) the patterns are
derived automatically from the corpus, (2) the Singular Value Decomposition
(SVD) is used to smooth the frequency data, and (3) automatically generated
synonyms are used to explore variations of the word pairs. LRA achieves 56% on
the 374 analogy questions, statistically equivalent to the average human score
of 57%. On the related problem of classifying semantic relations, LRA achieves
similar gains over the VSM.
|
cs/0608103
|
Logic programs with monotone abstract constraint atoms
|
cs.AI cs.LO
|
We introduce and study logic programs whose clauses are built out of monotone
constraint atoms. We show that the operational concept of the one-step
provability operator generalizes to programs with monotone constraint atoms,
but the generalization involves nondeterminism. Our main results demonstrate
that our formalism is a common generalization of (1) normal logic programming
with its semantics of models, supported models and stable models, (2) logic
programming with weight atoms (lparse programs) with the semantics of stable
models, as defined by Niemela, Simons and Soininen, and (3) disjunctive
logic programming with the possible-model semantics of Sakama and Inoue.
|
cs/0608105
|
Application Layer Definition and Analyses of Controller Area Network Bus
for Wire Harness Assembly Machine
|
cs.RO cs.NI
|
With the feature of multi-master bus access, nondestructive contention-based
arbitration, and flexible configuration, the Controller Area Network (CAN) bus is
applied to the control system of the Wire Harness Assembly Machine (WHAM). To
accomplish the desired goal, the specific features of the CAN bus are analyzed by
comparison with other field buses, and the functional performance of the CAN bus
system of the WHAM is discussed. Then the application layer planning of the CAN
bus for dynamic priority is presented. The critical issue for the use of the CAN
bus system in the WHAM is the data transfer rate between different nodes. A
processing efficiency model is therefore introduced to assist in analyzing the
data transfer procedure. Through this model, it is convenient to verify the
real-time behavior of the CAN bus system in the WHAM.
|
cs/0608107
|
The Haar Wavelet Transform of a Dendrogram
|
cs.IR
|
We describe a new wavelet transform, for use on hierarchies or binary rooted
trees. The theoretical framework of this approach to data analysis is
described. Case studies are used to further exemplify this approach. A first
set of application studies deals with data array smoothing, or filtering. A
second set of application studies relates to hierarchical tree condensation.
Finally, a third study explores the wavelet decomposition, and the
reproducibility of data sets such as text, including a new perspective on the
generation or computability of such data objects.
|
cs/0608115
|
Neural Network Clustering Based on Distances Between Objects
|
cs.CV cs.NE
|
We present an algorithm of clustering of many-dimensional objects, where only
the distances between objects are used. Centers of classes are found with the
aid of a neuron-like procedure with lateral inhibition. The result of clustering
does not depend on starting conditions. Our algorithm makes it possible to give
an idea about classes that really exist in the empirical data. The results of
computer simulations are presented.
|
cs/0608117
|
Code Annealing and the Suppressing Effect of the Cyclically Lifted LDPC
Code Ensemble
|
cs.IT math.IT
|
Code annealing, a new method of designing good codes of short block length,
is proposed, which is then concatenated with cyclic lifting to create finite
codes of low frame error rate (FER) error floors without performance outliers.
The stopping set analysis is performed on the cyclically lifted code ensemble
assuming uniformly random lifting sequences, and the suppressing effect/weight
of the cyclic lifting is identified for the first time, based on which the
ensemble FER error floor can be analytically determined and a scaling law is
derived. Both the first-order and high-order suppressing effects are discussed
and quantified by different methods including the explicit expression, an
algorithmic upper bound, and an algebraic lower bound.
The mismatch between the suppressing weight and the stopping distances
explains the dramatic performance discrepancy among different cyclically lifted
codes when the underlying base codes have degree 2 variable nodes or not. For
the former case, a degree augmentation method is further introduced to mitigate
this metric mismatch, and a systematic method of constructing irregular codes
of low FER error floors is presented. Both regular and irregular codes of very
low FER error floors are reported, for which the improvement factor ranges from
10^4 to 10^6 when compared to the classic graph-based code ensembles.
|
cs/0608121
|
Cross Entropy Approximation of Structured Covariance Matrices
|
cs.IT math.IT
|
We apply two variations of the principle of Minimum Cross Entropy (the
Kullback information measure) to fit parameterized probability density models
to observed data densities. For an array beamforming problem with P incident
narrowband point sources, N > P sensors, and colored noise, both approaches
yield eigenvector fitting methods similar to that of the MUSIC algorithm[1].
Furthermore, the corresponding cross-entropies are related to the MDL model
order selection criterion[2].
|
cs/0608123
|
Proof of a Conjecture of Helleseth Regarding Pairs of Binary m-Sequences
|
cs.IT math.IT
|
This paper has been withdrawn by the author(s), due to a crucial sign error in
Thm. 11.
|
cs/0609001
|
A Robust Solution Procedure for Hyperelastic Solids with Large Boundary
Deformation
|
cs.NA cs.CE
|
Compressible Mooney-Rivlin theory has been used to model hyperelastic solids,
such as rubber and porous polymers, and more recently to model soft tissues in
biomedical applications undergoing large elastic deformations. We
propose a solution procedure for Lagrangian finite element discretization of a
static nonlinear compressible Mooney-Rivlin hyperelastic solid. We consider the
case in which the boundary condition is a large prescribed deformation, so that
mesh tangling becomes an obstacle for straightforward algorithms. Our solution
procedure involves a largely geometric procedure to untangle the mesh: solution
of a sequence of linear systems to obtain initial guesses for interior nodal
positions for which no element is inverted. After the mesh is untangled, we
take Newton iterations to converge to a mechanical equilibrium. The Newton
iterations are safeguarded by a line search similar to one used in
optimization. Our computational results indicate that the algorithm is up to 70
times faster than a straightforward Newton continuation procedure and is also
more robust (i.e., able to tolerate much larger deformations). For a few
extremely large deformations, the deformed mesh could only be computed through
the use of an expensive Newton continuation method while using a tight
convergence tolerance and taking very small steps.
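The safeguarded Newton iteration described above can be sketched generically. The following toy example is not the paper's finite element solver; it uses a small hypothetical nonlinear system merely to show a Newton step damped by a backtracking line search on the residual norm:

```python
import numpy as np

def newton_line_search(F, J, x0, tol=1e-10, max_iter=50):
    """Newton iteration safeguarded by a backtracking line search:
    accept a (possibly shortened) Newton step only if it reduces
    the residual norm, as in optimization-style safeguards."""
    x = x0.astype(float)
    for _ in range(max_iter):
        r = F(x)
        if np.linalg.norm(r) < tol:
            break
        dx = np.linalg.solve(J(x), -r)  # full Newton step
        t = 1.0
        # backtrack until the residual norm decreases
        while np.linalg.norm(F(x + t * dx)) >= np.linalg.norm(r) and t > 1e-8:
            t *= 0.5
        x = x + t * dx
    return x

# hypothetical test system: x^2 + y^2 = 1, x = y  ->  (1/sqrt(2), 1/sqrt(2))
F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
J = lambda v: np.array([[2 * v[0], 2 * v[1]], [1.0, -1.0]])
root = newton_line_search(F, J, np.array([2.0, 0.5]))
```

The line search only shortens the step; near the solution the full Newton step is accepted and quadratic convergence is retained.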
|
cs/0609003
|
In Quest of Image Semantics: Are We Looking for It Under the Right
Lamppost?
|
cs.CV cs.IR
|
In the last years we witness a dramatic growth of research focused on
semantic image understanding. Indeed, without understanding image content
the successful accomplishment of any image-processing task is simply inconceivable.
Until recently, the ultimate need for such understanding has been met by
the knowledge that a domain expert or a vision system supervisor have
contributed to every image-processing application. The advent of the Internet
has drastically changed this situation. Internet sources of visual information
are diffused and dispersed over the whole Web, so the duty of information
content discovery and evaluation must be relegated now to an image
understanding agent (a machine or a computer program) capable of performing image
content assessment at a remote image location. Development of Content Based
Image Retrieval (CBIR) techniques, launched about ten years ago, was a step in
the right direction. Unfortunately, very little progress has been made since
then. The reason for this can be seen in a range of long-lasting misconceptions
that CBIR designers continue to adhere to. I hope my arguments will help them
to change their minds.
|
cs/0609006
|
New Quasi-Cyclic Codes from Simplex Codes
|
cs.IT math.IT
|
As a generalization of cyclic codes, quasi-cyclic (QC) codes contain many
good linear codes. But quasi-cyclic codes studied so far are mainly limited to
one generator (1-generator) QC codes. In this correspondence, 2-generator and
3-generator QC codes are studied, and many good, new QC codes are constructed
from simplex codes. Some new binary QC and related codes that improve the
bounds on the maximum minimum distance for binary linear codes are constructed.
They are 5-generator QC [93, 17, 34] and [254, 23, 102] codes, and related [96,
17, 36], [256, 23, 104] codes.
|
cs/0609007
|
A Massive Local Rules Search Approach to the Classification Problem
|
cs.LG
|
An approach to the classification problem of machine learning, based on
building local classification rules, is developed. The local rules are
considered as projections of the global classification rules to the event we
want to classify. A massive global optimization algorithm is used for
optimization of the quality criterion. The algorithm, which has polynomial
complexity in the typical case, is used to find all high-quality local rules.
Other distinctive features of the algorithm are the integration of
attribute-level selection (for ordered attributes) with rule searching, and an
original strategy for resolving conflicting rules. The algorithm is practical; it was
tested on a number of data sets from UCI repository, and a comparison with the
other predicting techniques is presented.
|
cs/0609010
|
An effective edge-directed frequency filter for removal of aliasing in
upsampled images
|
cs.CV
|
Raster images can exhibit various distortions connected to their raster
structure. Upsampling them may in effect substantially reveal the raster
structure of the original image, an artifact known as aliasing. The upsampling
itself may introduce aliasing into the upsampled image as well. The presented
method attempts to remove the aliasing using frequency filters based on the
discrete fast Fourier transform, and applied directionally in certain regions
placed along the edges in the image.
As opposed to some anisotropic smoothing methods, the presented algorithm
aims to selectively reduce only the aliasing, preserving the sharpness of image
details.
The method can be used as a post-processing filter along with various
upsampling algorithms. It was experimentally shown that the method can improve
the visual quality of the upsampled images.
|
cs/0609011
|
Scheduling for Stable and Reliable Communication over Multiaccess
Channels and Degraded Broadcast Channels
|
cs.NI cs.IT math.IT
|
Information-theoretic arguments focus on modeling the reliability of
information transmission, assuming availability of infinite data at sources,
thus ignoring randomness in message generation times at the respective sources.
However, in information transport networks, not only is reliable transmission
important, but also stability, i.e., finiteness of mean delay incurred by
messages from the time of generation to the time of successful reception.
Usually, delay analysis is done separately using queueing-theoretic arguments,
whereas reliable information transmission is studied using information theory.
In this thesis, we investigate these two important aspects of data
communication jointly by suitably combining models from these two fields. In
particular, we model scheduled communication of messages, that arrive in a
random process, (i) over multiaccess channels, with either independent decoding
or joint decoding, and (ii) over degraded broadcast channels. The scheduling
policies proposed permit up to a certain maximum number of messages for
simultaneous transmission.
In the first part of the thesis, we develop a multi-class discrete-time
processor-sharing queueing model, and then investigate the stability of this
queue. In particular, we model the queue by a discrete-time Markov chain
defined on a countable state space, and then establish (i) a sufficient
condition for $c$-regularity of the chain, and hence positive recurrence and
finiteness of stationary mean of the function $c$ of the state, and (ii) a
sufficient condition for transience of the chain. These stability results form
the basis for the conclusions drawn in the thesis.
|
cs/0609018
|
Bilayer Low-Density Parity-Check Codes for Decode-and-Forward in Relay
Channels
|
cs.IT math.IT
|
This paper describes an efficient implementation of binning for the relay
channel using low-density parity-check (LDPC) codes. We devise bilayer LDPC
codes to approach the theoretically promised rate of the decode-and-forward
relaying strategy by incorporating relay-generated information bits in
specially designed bilayer graphical code structures. While conventional LDPC
codes are sensitively tuned to operate efficiently at a certain channel
parameter, the proposed bilayer LDPC codes are capable of working at two
different channel parameters and two different rates: that at the relay and at
the destination. To analyze the performance of bilayer LDPC codes, bilayer
density evolution is devised as an extension of the standard density evolution
algorithm. Based on bilayer density evolution, a design methodology is
developed for the bilayer codes in which the degree distribution is iteratively
improved using linear programming. Further, in order to approach the
theoretical decode-and-forward rate for a wide range of channel parameters,
this paper proposes two different forms of bilayer codes, the bilayer-expurgated
and bilayer-lengthened codes. It is demonstrated that a properly designed
bilayer LDPC code can achieve an asymptotic infinite-length threshold within
0.24 dB gap to the Shannon limits of two different channels simultaneously for
a wide range of channel parameters. By practical code construction,
finite-length bilayer codes are shown to be able to approach within a 0.6 dB
gap to the theoretical decode-and-forward rate of the relay channel at a block
length of $10^5$ and a bit-error rate (BER) of $10^{-4}$. Finally, it is
demonstrated that a generalized version of the proposed bilayer code
construction is applicable to relay networks with multiple relays.
|
cs/0609019
|
Improving Term Extraction with Terminological Resources
|
cs.CL
|
Studies of different term extractors on a corpus of the biomedical domain
revealed decreasing performances when applied to highly technical texts. The
difficulty or impossibility of customising them to new domains is an additional
limitation. In this paper, we propose to use external terminologies to
influence generic linguistic data in order to augment the quality of the
extraction. The tool we implemented exploits testified terms at different steps
of the process: chunking, parsing and extraction of term candidates.
Experiments reported here show that, using this method, more term candidates
can be acquired with a higher level of reliability. We further describe the
extraction process involving endogenous disambiguation implemented in the term
extractor YaTeA.
|
cs/0609030
|
Space Division Multiple Access with a Sum Feedback Rate Constraint
|
cs.IT cs.NI math.IT
|
On a multi-antenna broadcast channel, simultaneous transmission to multiple
users by joint beamforming and scheduling is capable of achieving high
throughput, which grows doubly logarithmically with the number of users. The
sum rate for channel state information (CSI) feedback, however, increases
linearly with the number of users, reducing the effective uplink capacity. To
address this problem, a novel space division multiple access (SDMA) design is
proposed, where the sum feedback rate is upper-bounded by a constant. This
design consists of algorithms for CSI quantization, threshold based CSI
feedback, and joint beamforming and scheduling. The key feature of the proposed
approach is the use of feedback thresholds to select feedback users with large
channel gains and small CSI quantization errors such that the sum feedback rate
constraint is satisfied. Despite this constraint, the proposed SDMA design is
shown to achieve a sum capacity growth rate close to the optimal one. Moreover,
the feedback overflow probability for this design is found to decrease
exponentially with the difference between the allowable and the average sum
feedback rates. Numerical results show that the proposed SDMA design is capable
of attaining higher sum capacities than existing ones, even though the sum
feedback rate is bounded.
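The threshold idea can be illustrated loosely as follows. This is not the paper's actual quantization or scheduling algorithm; the channel statistics, threshold values, and function name are made up for illustration: a user feeds back only when its gain is large and its quantization error small, which keeps the expected number of feedback users (and hence the sum feedback rate) bounded.

```python
import numpy as np

rng = np.random.default_rng(0)

def feedback_users(gains, quant_errors, gain_threshold, error_threshold):
    """Threshold-based feedback selection: a user reports its quantized
    CSI only if its channel gain exceeds a threshold AND its quantization
    error is below a threshold."""
    return [k for k, (g, e) in enumerate(zip(gains, quant_errors))
            if g >= gain_threshold and e <= error_threshold]

# 100 hypothetical users: exponential (Rayleigh-power) gains,
# uniformly distributed quantization errors
gains = rng.exponential(1.0, 100)
errors = rng.uniform(0.0, 1.0, 100)
selected = feedback_users(gains, errors, gain_threshold=2.0, error_threshold=0.3)
```

Raising either threshold trades scheduling flexibility for a lower expected feedback load, mirroring the constraint described in the abstract.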
|
cs/0609041
|
Primitive operations for the construction and reorganization of
minimally persistent formations
|
cs.MA
|
In this paper, we study the construction and transformation of
two-dimensional persistent graphs. Persistence is a generalization to directed
graphs of the undirected notion of rigidity. In the context of moving
autonomous agent formations, persistence characterizes the efficacy of a
directed structure of unilateral distance constraints seeking to preserve a
formation shape. Analogously to the powerful results about Henneberg sequences
in minimal rigidity theory, we propose different types of directed graph
operations allowing one to sequentially build any minimally persistent graph
(i.e. persistent graph with a minimal number of edges for a given number of
vertices), each intermediate graph being also minimally persistent. We also
consider the more generic problem of obtaining one minimally persistent graph
from another, which corresponds to the on-line reorganization of an autonomous
agent formation. We prove that we can obtain any minimally persistent formation
from any other one by a sequence of elementary local operations such that
minimal persistence is preserved throughout the reorganization process.
|
cs/0609042
|
On Divergence-Power Inequalities
|
cs.IT math.IT
|
Expressions for (EPI Shannon type) Divergence-Power Inequalities (DPI) in two
cases (time-discrete and band-limited time-continuous) of stationary random
processes are given. The new expressions connect the divergence rate of the sum
of independent processes, the individual divergence rate of each process, and
their power spectral densities. All divergences are between a process and a
Gaussian process with the same second-order statistics, and are assumed to be
finite. A new proof of the Shannon entropy-power inequality EPI, based on the
relationship between divergence and causal minimum mean-square error (CMMSE) in
Gaussian channels with large signal-to-noise ratio, is also shown.
|
cs/0609043
|
Challenging the principle of compositionality in interpreting natural
language texts
|
cs.CL
|
The paper aims at emphasizing that, even relaxed, the hypothesis of
compositionality has to face many problems when used for interpreting natural
language texts. Rather than fixing these problems within the compositional
framework, we believe that a more radical change is necessary, and propose
another approach.
|
cs/0609044
|
The role of time in considering collections
|
cs.CL
|
The paper concerns the understanding of plurals in the framework of
Artificial Intelligence and emphasizes the role of time. The construction of
collection(s) and their evolution across time is often crucial and has to be
accounted for. The paper contrasts a "de dicto" collection, which can be
considered as persisting across situations even if its members change, with a
"de re" collection, whose composition does not vary through time.
It expresses different criteria of choice between the two interpretations (de
re and de dicto) depending on the context of enunciation.
|
cs/0609045
|
Metric entropy in competitive on-line prediction
|
cs.LG
|
Competitive on-line prediction (also known as universal prediction of
individual sequences) is a strand of learning theory avoiding making any
stochastic assumptions about the way the observations are generated. The
predictor's goal is to compete with a benchmark class of prediction rules,
which is often a proper Banach function space. Metric entropy provides a
unifying framework for competitive on-line prediction: the numerous known upper
bounds on the metric entropy of various compact sets in function spaces readily
imply bounds on the performance of on-line prediction strategies. This paper
discusses strengths and limitations of the direct approach to competitive
on-line prediction via metric entropy, including comparisons to other
approaches.
|
cs/0609046
|
Exhausting Error-Prone Patterns in LDPC Codes
|
cs.IT cs.DS math.IT
|
It is proved in this work that exhaustively determining bad patterns in
arbitrary, finite low-density parity-check (LDPC) codes, including stopping
sets for binary erasure channels (BECs) and trapping sets (also known as
near-codewords) for general memoryless symmetric channels, is an NP-complete
problem, and efficient algorithms are provided for codes of practical short
lengths n ≈ 500. By exploiting the sparse connectivity of LDPC codes, the
stopping sets of size <=13 and the trapping sets of size <=11 can be
efficiently exhaustively determined for the first time, and the resulting
exhaustive list is of great importance for code analysis and finite code
optimization. The featured tree-based narrowing search distinguishes this
algorithm from existing ones for which inexhaustive methods are employed. One
important byproduct is a pair of upper bounds on the bit-error rate (BER) and
frame-error rate (FER) iterative decoding performance of arbitrary codes over
BECs that can be evaluated for any value of the erasure probability, including
both the waterfall and the error floor regions. The tightness of these upper
bounds and the exhaustion capability of the proposed algorithm are proved when
combining an optimal leaf-finding module with the tree-based search. These
upper bounds also provide a worst-case-performance guarantee which is crucial
to optimizing LDPC codes for extremely low error rate applications, e.g.,
optical/satellite communications. Extensive numerical experiments are conducted
that include both randomly and algebraically constructed LDPC codes, the
results of which demonstrate the superior efficiency of the exhaustion
algorithm and its significant value for finite length code optimization.
|
cs/0609049
|
Scanning and Sequential Decision Making for Multi-Dimensional Data -
Part I: the Noiseless Case
|
cs.IT cs.LG math.IT
|
We investigate the problem of scanning and prediction ("scandiction", for
short) of multidimensional data arrays. This problem arises in several aspects
of image and video processing, such as predictive coding, for example, where an
image is compressed by coding the error sequence resulting from scandicting it.
Thus, it is natural to ask what is the optimal method to scan and predict a
given image, what is the resulting minimum prediction loss, and whether there
exist specific scandiction schemes which are universal in some sense.
Specifically, we investigate the following problems: First, modeling the data
array as a random field, we wish to examine whether there exists a scandiction
scheme which is independent of the field's distribution, yet asymptotically
achieves the same performance as if this distribution was known. This question
is answered in the affirmative for the set of all spatially stationary random
fields and under mild conditions on the loss function. We then discuss the
scenario where a non-optimal scanning order is used, yet accompanied by an
optimal predictor, and derive bounds on the excess loss compared to optimal
scanning and prediction.
This paper is the first part of a two-part paper on sequential decision
making for multi-dimensional data. It deals with clean, noiseless data arrays.
The second part deals with noisy data arrays, namely, with the case where the
decision maker observes only a noisy version of the data, yet it is judged with
respect to the original, clean data.
|
cs/0609050
|
Exact Spectral Analysis of Single-h and Multi-h CPM Signals through PAM
decomposition and Matrix Series Evaluation
|
cs.IT math.IT
|
In this paper we address the problem of closed-form spectral evaluation of
CPM. We show that the multi-h CPM signal can be conveniently generated by a PTI
SM. The output is governed by a Markov chain with the unusual peculiarity of
being cyclostationary and reducible; this holds also in the single-h context.
Judicious reinterpretation of the result leads to a formalization through a
stationary and irreducible Markov chain, whose spectral evaluation is known in
closed-form from the literature. This paper has two major outcomes.
First, unlike the literature, we obtain a PSD in true closed-form. Second, we
give novel insights into the CPM format.
|
cs/0609051
|
Multilingual person name recognition and transliteration
|
cs.CL cs.IR
|
We present an exploratory tool that extracts person names from multilingual
news collections, matches name variants referring to the same person, and
infers relationships between people based on the co-occurrence of their names
in related news. A novel feature is the matching of name variants across
languages and writing systems, including names written with the Greek, Cyrillic
and Arabic writing system. Due to our highly multilingual setting, we use an
internal standard representation for name representation and matching, instead
of adopting the traditional bilingual approach to transliteration. This work is
part of the news analysis system NewsExplorer that clusters an average of
25,000 news articles per day to detect related news within the same and across
different languages.
|
cs/0609052
|
Undecidability of the unification and admissibility problems for modal
and description logics
|
cs.LO cs.AI
|
We show that the unification problem `is there a substitution instance of a
given formula that is provable in a given logic?' is undecidable for basic
modal logics K and K4 extended with the universal modality. It follows that the
admissibility problem for inference rules is undecidable for these logics as
well. These are the first examples of standard decidable modal logics for which
the unification and admissibility problems are undecidable. We also prove
undecidability of the unification and admissibility problems for K and K4 with
at least two modal operators and nominals (instead of the universal modality),
thereby showing that these problems are undecidable for basic hybrid logics.
Recently, unification has been introduced as an important reasoning service for
description logics. The undecidability proof for K with nominals can be used to
show the undecidability of unification for boolean description logics with
nominals (such as ALCO and SHIQO). The undecidability proof for K with the
universal modality can be used to show that the unification problem relative to
role boxes is undecidable for Boolean description logic with transitive roles,
inverse roles, and role hierarchies (such as SHI and SHIQ).
|
cs/0609053
|
Navigating multilingual news collections using automatically extracted
information
|
cs.CL cs.IR
|
We are presenting a text analysis tool set that allows analysts in various
fields to sieve through large collections of multilingual news items quickly
and to find information that is of relevance to them. For a given document
collection, the tool set automatically clusters the texts into groups of
similar articles, extracts names of places, people and organisations, lists the
user-defined specialist terms found, links clusters and entities, and generates
hyperlinks. Through its daily news analysis operating on thousands of articles
per day, the tool also learns relationships between people and other entities.
The fully functional prototype system allows users to explore and navigate
multilingual document collections across languages and time.
|
cs/0609054
|
High Data-Rate Single-Symbol ML Decodable Distributed STBCs for
Cooperative Networks
|
cs.IT math.IT
|
High data-rate Distributed Orthogonal Space-Time Block Codes (DOSTBCs) which
achieve the single-symbol decodability and full diversity order are proposed in
this paper. An upper bound of the data-rate of the DOSTBC is derived and it is
approximately twice that of the conventional repetition-based
cooperative strategy. In order to facilitate the systematic constructions of
the DOSTBCs achieving the upper bound of the data-rate, some special DOSTBCs,
which have diagonal noise covariance matrices at the destination terminal, are
investigated. These codes are referred to as the row-monomial DOSTBCs. An upper
bound of the data-rate of the row-monomial DOSTBC is derived and it is equal to
or slightly smaller than that of the DOSTBC. Lastly, the systematic
construction methods of the row-monomial DOSTBCs achieving the upper bound of
the data-rate are presented.
|
cs/0609055
|
Coding for Additive White Noise Channels with Feedback Corrupted by
Uniform Quantization or Bounded Noise
|
cs.IT math.IT
|
We present simple coding strategies, which are variants of the
Schalkwijk-Kailath scheme, for communicating reliably over additive white noise
channels in the presence of corrupted feedback. More specifically, we consider
a framework comprising an additive white forward channel and a backward link
which is used for feedback. We consider two types of corruption mechanisms in
the backward link. The first is quantization noise, i.e., the encoder receives
the quantized values of the past outputs of the forward channel. The
quantization is uniform, memoryless and time invariant (that is,
symbol-by-symbol scalar quantization), with bounded quantization error. The
second corruption mechanism is an arbitrarily distributed additive bounded
noise in the backward link. Here we allow symbol-by-symbol encoding at the
input to the backward channel. We propose simple explicit schemes that
guarantee positive information rate, in bits per channel use, with positive
error exponent. If the forward channel is additive white Gaussian then our
schemes achieve capacity, in the limit of diminishing amplitude of the noise
components at the backward link, while guaranteeing that the probability of
error converges to zero as a doubly exponential function of the block length.
Furthermore, if the forward channel is additive white Gaussian and the backward
link consists of an additive bounded noise channel, with signal-to-noise ratio
(SNR) constrained symbol-by-symbol encoding, then our schemes are also
capacity-achieving in the limit of high SNR.
|
cs/0609056
|
Matrix Games, Linear Programming, and Linear Approximation
|
cs.GT cs.AI
|
The following four classes of computational problems are equivalent: solving
matrix games, solving linear programs, best $l^{\infty}$ linear approximation,
best $l^1$ linear approximation.
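The equivalence between matrix games and linear programs can be illustrated by the classical LP formulation of a zero-sum game. The sketch below (using SciPy, with matching pennies as a hypothetical example) maximizes the game value v subject to x^T A >= v·1, sum(x) = 1, x >= 0, for the row player:

```python
import numpy as np
from scipy.optimize import linprog

def solve_matrix_game(A):
    """Value and optimal row strategy of a zero-sum matrix game via
    the classical reduction to linear programming."""
    m, n = A.shape
    # variables: x_1..x_m, v ; linprog minimizes, so minimize -v
    c = np.zeros(m + 1)
    c[-1] = -1.0
    # v - (x^T A)_j <= 0 for every column j
    A_ub = np.hstack([-A.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    A_eq = np.zeros((1, m + 1))
    A_eq[0, :m] = 1.0           # probabilities sum to one
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]  # x >= 0, v free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]

# matching pennies: value 0, uniform optimal strategy
x, v = solve_matrix_game(np.array([[1.0, -1.0], [-1.0, 1.0]]))
```

The reverse direction, expressing an LP as a matrix game, is part of the same classical equivalence the abstract refers to.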
|
cs/0609058
|
The JRC-Acquis: A multilingual aligned parallel corpus with 20+
languages
|
cs.CL
|
We present a new, unique and freely available parallel corpus containing
European Union (EU) documents of mostly legal nature. It is available in all 20
official EU languages, with additional documents available in the languages
of the EU candidate countries. The corpus consists of almost 8,000 documents
per language, with an average size of nearly 9 million words per language.
Pair-wise paragraph alignment information produced by two different aligners
(Vanilla and HunAlign) is available for all 190+ language pair combinations.
Most texts have been manually classified according to the EUROVOC subject
domains so that the collection can also be used to train and test multi-label
classification algorithms and keyword-assignment software. The corpus is
encoded in XML, according to the Text Encoding Initiative Guidelines. Due to
the large number of parallel texts in many languages, the JRC-Acquis is
particularly suitable to carry out all types of cross-language research, as
well as to test and benchmark text analysis software across different languages
(for instance for alignment, sentence splitting and term extraction).
|
cs/0609059
|
Automatic annotation of multilingual text collections with a conceptual
thesaurus
|
cs.CL cs.IR
|
Automatic annotation of documents with controlled vocabulary terms
(descriptors) from a conceptual thesaurus is not only useful for document
indexing and retrieval. The mapping of texts onto the same thesaurus
furthermore makes it possible to establish links between similar documents. This is also a
substantial requirement of the Semantic Web. This paper presents an almost
language-independent system that maps documents written in different languages
onto the same multilingual conceptual thesaurus, EUROVOC. Conceptual thesauri
differ from Natural Language Thesauri in that they consist of relatively small
controlled lists of words or phrases with a rather abstract meaning. To
automatically identify which thesaurus descriptors describe the contents of a
document best, we developed a statistical, associative system that is trained
on texts that have previously been indexed manually. In addition to describing
the large number of empirically optimised parameters of the fully functional
application, we present the performance of the software according to a human
evaluation by professional indexers.
|
cs/0609060
|
Automatic Identification of Document Translations in Large Multilingual
Document Collections
|
cs.CL cs.IR
|
Texts and their translations are a rich linguistic resource that can be used
to train and test statistics-based Machine Translation systems and many other
applications. In this paper, we present a working system that can identify
translations and other very similar documents among a large number of
candidates, by representing the document contents with a vector of thesaurus
terms from a multilingual thesaurus, and by then measuring the semantic
similarity between the vectors. Tests on different text types have shown that
the system can detect translations with over 96% precision in a large search
space of 820 documents or more. The system was tuned to ignore
language-specific similarities and to give similar documents in a second
language the same similarity score as equivalent documents in the same
language. The application can also be used to detect cross-lingual document
plagiarism.
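The underlying similarity computation can be sketched as cosine similarity between thesaurus-descriptor weight vectors. The documents, descriptors, and weights below are purely hypothetical, not taken from the described system:

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse descriptor-weight vectors,
    represented as dicts mapping descriptor -> weight."""
    common = set(u) & set(v)
    num = sum(u[t] * v[t] for t in common)
    den = (math.sqrt(sum(w * w for w in u.values()))
           * math.sqrt(sum(w * w for w in v.values())))
    return num / den if den else 0.0

# hypothetical documents mapped onto shared, language-independent descriptors
doc_en = {"agriculture": 0.9, "subsidy": 0.7, "trade": 0.2}
doc_fr = {"agriculture": 0.8, "subsidy": 0.75, "trade": 0.25}  # its translation
doc_other = {"fisheries": 0.9, "environment": 0.6}

sim_translation = cosine(doc_en, doc_fr)
sim_unrelated = cosine(doc_en, doc_other)
```

Because both documents are mapped onto the same multilingual descriptor space, a translation pair scores near 1 while unrelated documents score near 0, regardless of language.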
|
cs/0609061
|
Cross-lingual keyword assignment
|
cs.CL cs.IR
|
This paper presents a language-independent approach to controlled vocabulary
keyword assignment using the EUROVOC thesaurus. Due to the multilingual nature
of EUROVOC, the keywords for a document written in one language can be
displayed in all eleven official European Union languages. The mapping of
documents written in different languages to the same multilingual thesaurus
furthermore allows cross-language document comparison. The assignment of the
controlled vocabulary thesaurus descriptors is achieved by applying a
statistical method that uses a collection of manually indexed documents to
identify, for each thesaurus descriptor, a large number of lemmas that are
statistically associated to the descriptor. These associated words are then
used during the assignment procedure to identify a ranked list of those EUROVOC
terms that are most likely to be good keywords for a given document. The paper
also describes the challenges of this task and discusses the achieved results
of the fully functional prototype.
|
cs/0609063
|
Extending an Information Extraction tool set to Central and Eastern
European languages
|
cs.CL cs.IR
|
In a highly multilingual and multicultural environment such as in the
European Commission with soon over twenty official languages, there is an
urgent need for text analysis tools that use minimal linguistic knowledge so
that they can be adapted to many languages without much human effort. We are
presenting two such Information Extraction tools that have already been adapted
to various Western and Eastern European languages: one for the recognition of
date expressions in text, and one for the detection of geographical place names
and the visualisation of the results in geographical maps. An evaluation of the
performance has produced very satisfying results.
|
cs/0609064
|
Exploiting multilingual nomenclatures and language-independent text
features as an interlingua for cross-lingual text analysis applications
|
cs.CL cs.IR
|
We are proposing a simple, but efficient basic approach for a number of
multilingual and cross-lingual language technology applications that are not
limited to the usual two or three languages, but that can be applied with
relatively little effort to larger sets of languages. The approach consists of
using existing multilingual linguistic resources such as thesauri,
nomenclatures and gazetteers, as well as exploiting the existence of additional
more or less language-independent text items such as dates, currency
expressions, numbers, names and cognates. Mapping texts onto the multilingual
resources and identifying word token links between texts in different languages
are basic ingredients for applications such as cross-lingual document
similarity calculation, multilingual clustering and categorisation,
cross-lingual document retrieval, and tools to provide cross-lingual
information access.
|
cs/0609065
|
Geocoding multilingual texts: Recognition, disambiguation and
visualisation
|
cs.CL cs.IR
|
We are presenting a method to recognise geographical references in free text.
Our tool must work on various languages with a minimum of language-dependent
resources, except a gazetteer. The main difficulty is to disambiguate these
place names by distinguishing places from persons and by selecting the most
likely place out of a list of homographic place names world-wide. The system
uses a number of language-independent clues and heuristics to disambiguate
place name homographs. The final aim is to index texts with the countries and
cities they mention and to automatically visualise this information on
geographical maps using various tools.
|
cs/0609066
|
Building and displaying name relations using automatic unsupervised
analysis of newspaper articles
|
cs.CL cs.IR
|
We present a tool that, from automatically recognised names, tries to infer
inter-person relations in order to present associated people on maps. Based on
an in-house Named Entity Recognition tool, applied on clusters of an average of
15,000 news articles per day, in 15 different languages, we build a knowledge
base that allows extracting statistical co-occurrences of persons and
visualising them on a per-person page or in various graphs.
|
cs/0609067
|
A tool set for the quick and efficient exploration of large document
collections
|
cs.CL cs.IR
|
We are presenting a set of multilingual text analysis tools that can help
analysts in any field to explore large document collections quickly in order to
determine whether the documents contain information of interest, and to find
the relevant text passages. The automatic tool, which currently exists as a
fully functional prototype, is expected to be particularly useful when users
repeatedly have to sieve through large collections of documents such as those
downloaded automatically from the internet. The proposed system takes a whole
document collection as input. It first carries out some automatic analysis
tasks (named entity recognition, geo-coding, clustering, term extraction),
annotates the texts with the generated meta-information and stores the
meta-information in a database. The system then generates a zoomable and
hyperlinked geographic map enhanced with information on entities and terms
found. When the system is used on a regular basis, it builds up a historical
database that contains information on which names have been mentioned together
with which other names or places, and users can query this database to retrieve
information extracted in the past.
|
cs/0609071
|
A kernel method for canonical correlation analysis
|
cs.LG cs.CV
|
Canonical correlation analysis is a technique to extract common features from
a pair of multivariate data. In complex situations, however, it does not
extract useful features because of its linearity. On the other hand, the kernel
method used in support vector machines is an efficient approach to improving
such a linear method. In this paper, we investigate the effectiveness of
applying the kernel method to canonical correlation analysis.
|
cs/0609073
|
Optimal power allocation for downlink cooperative cellular networks
|
cs.IT math.IT
|
This paper has been withdrawn by the author
|