id
| title
| categories
| abstract
|
|---|---|---|---|
cs/0609076
|
Asymptotic Spectral Distribution of Crosscorrelation Matrix in
Asynchronous CDMA
|
cs.IT math.IT
|
Asymptotic spectral distribution (ASD) of the crosscorrelation matrix is
investigated for a random spreading short/long-code asynchronous direct
sequence-code division multiple access (DS-CDMA) system. The discrete-time
decision statistics are obtained as the output samples of a bank of symbol
matched filters of all users. The crosscorrelation matrix is studied when the
number of symbols transmitted by each user tends to infinity. Two levels of
asynchronism are considered. One is symbol-asynchronous but chip-synchronous,
and the other is chip-asynchronous. The existence of a nonrandom ASD is proved
by the moment convergence theorem, where the focus is on the derivation of the
asymptotic eigenvalue moments (AEM) of the crosscorrelation matrix. A
combinatorial approach based on noncrossing partitions from set-partition
theory is adopted for the AEM computation. The spectral efficiency and the
minimum mean-square-error (MMSE) achievable by a linear receiver of
asynchronous CDMA are then computed numerically from the AEM and plotted.
|
cs/0609079
|
Modern Statistics by Kriging
|
cs.NA cs.CE
|
We present statistics (S-statistics) based only on random variables (not
random values), with the mean squared error of the mean estimate as the
concept of error.
|
cs/0609081
|
Recurrence relations and fast algorithms
|
cs.CE cs.NA
|
We construct fast algorithms for evaluating transforms associated with
families of functions which satisfy recurrence relations. These include
algorithms both for computing the coefficients in linear combinations of the
functions, given the values of these linear combinations at certain points,
and, vice versa, for evaluating such linear combinations at those points, given
the coefficients in the linear combinations; such procedures are also known as
analysis and synthesis of series of certain special functions. The algorithms
of the present paper are efficient in the sense that their computational costs
are proportional to n (ln n) (ln(1/epsilon))^3, where n is the amount of input
and output data, and epsilon is the precision of computations. Stated somewhat
more precisely, we find a positive real number C such that, for any positive
integer n > 10, the algorithms require at most C n (ln n) (ln(1/epsilon))^3
floating-point operations and words of memory to evaluate at n appropriately
chosen points any linear combination of n special functions, given the
coefficients in the linear combination, where epsilon is the precision of
computations.
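The synthesis step described above, evaluating a linear combination of recurrence-defined functions at a point, can be illustrated with Clenshaw's recurrence for Chebyshev polynomials. This is only a minimal O(n)-per-point sketch of the idea of exploiting a recurrence, not the paper's fast algorithm with cost proportional to n (ln n)(ln(1/epsilon))^3:

```python
def clenshaw_chebyshev(coeffs, x):
    """Evaluate sum_k coeffs[k] * T_k(x) via Clenshaw's recurrence.

    The Chebyshev polynomials satisfy T_{k+1}(x) = 2x*T_k(x) - T_{k-1}(x);
    Clenshaw's algorithm uses this recurrence to evaluate the linear
    combination in O(n) operations without forming each T_k explicitly.
    """
    b1, b2 = 0.0, 0.0
    # Process coefficients c_{n-1}, ..., c_1 in reverse order.
    for c in reversed(coeffs[1:]):
        b1, b2 = 2.0 * x * b1 - b2 + c, b1
    return coeffs[0] + x * b1 - b2
```

For example, with coefficients [1, 2, 3] the sum is 1 + 2x + 3(2x^2 - 1), which equals 0.5 at x = 0.5.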
|
cs/0609087
|
A comparative analysis of the geometrical surface texture of a real and
virtual model of a tooth flank of a cylindrical gear
|
cs.CE
|
The paper presents a methodology for modelling the tooth flanks of cylindrical
gears in a CAD environment. The modelling consists of a computer simulation
of gear generation. A model of the tooth flanks is an envelope curve of a
family of envelopes that originate from the rolling motion of a solid tool
model in relation to a solid model of the cylindrical gear. The surface
stereometry and topography of the tooth flanks, hobbed and chiselled by the
Fellows method, are compared to their numerical models. Metrological
measurements of the real gears were carried out using a coordinate measuring
machine and two- and three-dimensional profilometers. A computer simulation
of the gear generation was performed in the Mechanical Desktop environment.
|
cs/0609088
|
Deriving the Normalized Min-Sum Algorithm from Cooperative Optimization
|
cs.IT math.IT
|
The normalized min-sum algorithm can achieve near-optimal performance at
decoding LDPC codes. However, understanding the mathematical principle
underlying the algorithm remains a critical question. Traditionally, the
normalized min-sum algorithm has been viewed as a good approximation to the
sum-product algorithm, the best known algorithm for decoding LDPC codes and
Turbo codes. This paper offers an alternative way to understand the
normalized min-sum algorithm. The algorithm is derived directly from
cooperative optimization, a newly discovered general method for
global/combinatorial optimization. This approach provides another theoretical
basis for the algorithm and offers new insights into its power and
limitations. It also gives a general framework for designing new decoding
algorithms.
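The check-node update at the heart of the normalized min-sum algorithm can be sketched as follows. This is a generic textbook-style sketch, not code from the paper; the normalization factor alpha = 0.8 is an illustrative value, not one taken from the paper:

```python
def normalized_min_sum_check_update(llrs, alpha=0.8):
    """One check-node update of the normalized min-sum algorithm.

    llrs: incoming variable-to-check log-likelihood-ratio messages on the
    edges of a single check node. For each edge, the outgoing message is
    the product of the signs of the *other* incoming messages times the
    minimum of their magnitudes, scaled by the normalization factor alpha.
    """
    signs = [1 if m >= 0 else -1 for m in llrs]
    total_sign = 1
    for s in signs:
        total_sign *= s
    mags = [abs(m) for m in llrs]
    out = []
    for i in range(len(llrs)):
        others = mags[:i] + mags[i + 1:]
        # Dividing out this edge's sign equals multiplying, since signs are +-1.
        out.append(alpha * total_sign * signs[i] * min(others))
    return out
```

The plain min-sum algorithm is the special case alpha = 1; the normalization compensates for min-sum's overestimation of message magnitudes relative to sum-product.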
|
cs/0609089
|
Fast Min-Sum Algorithms for Decoding of LDPC over GF(q)
|
cs.IT math.IT
|
In this paper, we present a fast min-sum algorithm for decoding LDPC codes
over GF(q). Our algorithm differs from the one presented by David Declercq
and Marc Fossorier at ISIT 05 only in the way it speeds up the horizontal
scan in the min-sum algorithm. Declercq and Fossorier's algorithm speeds up
the computation by reducing the number of configurations, while our algorithm
uses dynamic programming instead. Compared with the configuration-reduction
algorithm, the dynamic-programming one is simpler at the design stage because
it has fewer parameters to tune. Furthermore, it does not suffer the
performance degradation caused by configuration reduction, because it
searches the whole configuration space efficiently through dynamic
programming. Both algorithms have the same level of complexity and use simple
operations which are suitable for hardware implementations.
|
cs/0609090
|
Single-Scan Min-Sum Algorithms for Fast Decoding of LDPC Codes
|
cs.IT math.IT
|
Many implementations for decoding LDPC codes are based on the
(normalized/offset) min-sum algorithm due to its satisfactory performance and
simplicity in operations. Usually, each iteration of the min-sum algorithm
contains two scans, the horizontal scan and the vertical scan. This paper
presents a single-scan version of the min-sum algorithm to speed up the
decoding process. It can also reduce memory usage or wiring because it only
needs the addressing from check nodes to variable nodes while the original
min-sum algorithm requires that addressing plus the addressing from variable
nodes to check nodes. To cut down memory usage or wiring further, another
version of the single-scan min-sum algorithm is presented, in which the
messages of the algorithm are represented by single-bit values instead of
fixed-point ones. The software implementation has shown that the single-scan
min-sum algorithm is more than twice as fast as the original min-sum
algorithm.
|
cs/0609093
|
PAC Learning Mixtures of Axis-Aligned Gaussians with No Separation
Assumption
|
cs.LG
|
We propose and analyze a new vantage point for the learning of mixtures of
Gaussians: namely, the PAC-style model of learning probability distributions
introduced by Kearns et al. Here the task is to construct a hypothesis mixture
of Gaussians that is statistically indistinguishable from the actual mixture
generating the data; specifically, the KL-divergence should be at most epsilon.
In this scenario, we give a poly(n/epsilon)-time algorithm that learns the
class of mixtures of any constant number of axis-aligned Gaussians in
n-dimensional Euclidean space. Our algorithm makes no assumptions about the
separation between the means of the Gaussians, nor does it have any dependence
on the minimum mixing weight. This is in contrast to learning results known in
the ``clustering'' model, where such assumptions are unavoidable.
Our algorithm relies on the method of moments, and a subalgorithm developed
in previous work by the authors (FOCS 2005) for a discrete mixture-learning
problem.
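The method of moments on which the algorithm relies can be illustrated in its simplest form: estimating the parameters of a single 1-D Gaussian from empirical moments. This is only a didactic sketch; the paper's algorithm matches higher-order moments across a mixture of axis-aligned Gaussians in n dimensions:

```python
def moments_estimate(samples):
    """Method-of-moments estimates for a single 1-D Gaussian.

    The mean equals the first moment, and the variance equals the second
    moment minus the squared first moment. The single-component case shown
    here already conveys the idea of solving for parameters from moments.
    """
    n = len(samples)
    m1 = sum(samples) / n                  # first empirical moment
    m2 = sum(x * x for x in samples) / n   # second empirical moment
    return m1, m2 - m1 * m1                # (mean, variance) estimates
```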
|
cs/0609094
|
An Improved Sphere-Packing Bound Targeting Codes of Short to Moderate
Block Lengths and Applications
|
cs.IT math.IT
|
This paper derives an improved sphere-packing (ISP) bound targeting codes of
short to moderate block lengths. We first review the 1967 sphere-packing (SP67)
bound for discrete memoryless channels, and a recent improvement by Valembois
and Fossorier. These concepts are used for the derivation of a new lower bound
on the decoding error probability (referred to as the ISP bound) which is
uniformly tighter than the SP67 bound and its recent improved version. Under a
mild condition, the ISP bound is applicable to general memoryless channels, and
some of its applications are exemplified. Its tightness is studied by comparing
it with bounds on the ML decoding error probability. It is exemplified that the
ISP bound suggests an interesting alternative to the 1959 sphere-packing (SP59)
bound of Shannon for the Gaussian channel, especially for digital modulations
of high spectral efficiency.
|
cs/0609096
|
Finite-State Dimension and Lossy Decompressors
|
cs.CC cs.IT math.IT
|
This paper examines information-theoretic questions regarding the difficulty
of compressing data versus the difficulty of decompressing data and the role
that information loss plays in this interaction. Finite-state compression and
decompression are shown to be of equivalent difficulty, even when the
decompressors are allowed to be lossy.
Inspired by Kolmogorov complexity, this paper defines the optimal
*decompression ratio* achievable on an infinite sequence by finite-state
decompressors (that is, finite-state transducers outputting the sequence in
question). It is shown that the optimal compression ratio achievable on a
sequence S by any *information lossless* finite state compressor, known as the
finite-state dimension of S, is equal to the optimal decompression ratio
achievable on S by any finite-state decompressor. This result implies a new
decompression characterization of finite-state dimension in terms of lossy
finite-state transducers.
|
cs/0609097
|
Traveling Salesperson Problems for a double integrator
|
cs.RO
|
In this paper we propose some novel path planning strategies for a double
integrator with bounded velocity and bounded control inputs. First, we study
the following version of the Traveling Salesperson Problem (TSP): given a set
of points in $\real^d$, find the fastest tour over the point set for a double
integrator. We first give asymptotic bounds on the time taken to complete
such a tour in the worst case. Then, we study a stochastic version of the TSP
for the double integrator, where the points are randomly sampled from a
uniform distribution in a compact environment in $\real^2$ and $\real^3$. We
propose novel algorithms that perform within a constant factor of the optimal
strategy with high probability. Lastly, we study a dynamic TSP: given a
stochastic process that generates targets, is there a policy which guarantees
that the number of unvisited targets does not diverge over time? If such
stable policies exist, what is the minimum wait for a target? We propose
novel stabilizing receding-horizon algorithms whose performance is within a
constant factor of the optimum with high probability, in $\real^2$ as well as
$\real^3$. We also argue that these algorithms give identical performance for
a particular nonholonomic vehicle, the Dubins vehicle.
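A simple Euclidean baseline for the tour subproblem is the nearest-neighbor heuristic sketched below. This is emphatically not the paper's receding-horizon strategy: the double-integrator tours studied in the paper must additionally respect velocity and acceleration bounds, which this point-to-point sketch ignores:

```python
import math

def nearest_neighbor_tour(points):
    """Greedy nearest-neighbor tour over points in the plane.

    Starting from the first point, repeatedly visit the closest unvisited
    point. Returns the visiting order as a list of indices into `points`.
    """
    unvisited = list(range(1, len(points)))
    tour = [0]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour
```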
|
cs/0609099
|
Coding for Parallel Channels: Gallager Bounds and Applications to
Repeat-Accumulate Codes
|
cs.IT math.IT
|
This paper is focused on the performance analysis of binary linear block
codes (or ensembles) whose transmission takes place over independent and
memoryless parallel channels. New upper bounds on the maximum-likelihood (ML)
decoding error probability are derived. The framework of the second version of
the Duman and Salehi (DS2) bounds is generalized to the case of parallel
channels, along with the derivation of optimized tilting measures. The
connection between the generalized DS2 and the 1961 Gallager bounds, known
previously for a single channel, is revisited for the case of parallel
channels. The new bounds are used to obtain improved inner bounds on the
attainable channel regions under ML decoding. These improved bounds are applied
to ensembles of turbo-like codes, focusing on repeat-accumulate codes and their
recent variations.
|
cs/0609100
|
Total Variation Minimization and Graph Cuts for Moving Objects
Segmentation
|
cs.CV
|
In this paper, we are interested in the application to video segmentation of
the discrete shape optimization problem involving the shape weighted perimeter
and an additional term depending on a parameter. Based on recent works and in
particular the one of Darbon and Sigelle, we justify the equivalence of the
shape optimization problem and a weighted total variation regularization. For
solving this problem, we adapt the projection algorithm proposed recently for
solving the basic TV regularization problem. Another solution to the shape
optimization investigated here is the graph-cut technique. Both methods have
the advantage of leading to a global minimum. Since we can distinguish moving
objects from static elements of a scene by analyzing the norm of the optical
flow vectors, we choose the optical flow norm as initial data. In order to
have the contour as close as possible to an edge in the image, we use a
classical edge detector function as the weight of the weighted total
variation. This model has been used in one of our former works. We also apply
the same methods to a video segmentation model used by Jehan-Besson, Barlaud
and Aubert. In this case, only the standard perimeter is incorporated in the
shape functional. We also propose another way of finding moving objects by
using an a contrario detection of objects on the image obtained by solving
the Rudin-Osher-Fatemi total variation regularization problem. We note that
in the former methods the segmentation can be associated with a level set.
|
cs/0609111
|
A State-Based Regression Formulation for Domains with Sensing Actions and
Incomplete Information
|
cs.AI
|
We present a state-based regression function for planning domains where an
agent does not have complete information and may have sensing actions. We
consider binary domains and employ a three-valued characterization of domains
with sensing actions to define the regression function. We prove the soundness
and completeness of our regression formulation with respect to the definition
of progression. More specifically, we show that (i) a plan obtained through
regression for a planning problem is indeed a progression solution of that
planning problem, and that (ii) for each plan found through progression, using
regression one obtains that plan or an equivalent one.
|
cs/0609112
|
A Richer Understanding of the Complexity of Election Systems
|
cs.GT cs.CC cs.MA
|
We provide an overview of some recent progress on the complexity of election
systems. The issues studied include the complexity of the winner, manipulation,
bribery, and control problems.
|
cs/0609117
|
Constructing LDPC Codes by 2-Lifts
|
cs.IT math.IT
|
We propose a new low-density parity-check code construction scheme based on
2-lifts. The proposed codes have an advantage of admitting efficient hardware
implementations. With the motivation of designing codes with low error floors,
we present an analysis of the low-weight stopping set distributions of the
proposed codes. Based on this analysis, we propose design criteria for
designing codes with low error floors. Numerical results show that the
resulting codes have low error probabilities over binary erasure channels.
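A 2-lift of a parity-check matrix can be sketched as follows: each 1-entry of a base matrix is replaced by one of the two 2x2 permutation matrices, and each 0-entry by the 2x2 zero block. Which edges to "flip" is the design freedom; the paper chooses flips to suppress low-weight stopping sets, whereas in this sketch the flip pattern is simply supplied by the caller:

```python
def two_lift(base_matrix, flips):
    """2-lift of a binary base parity-check matrix.

    Each 1-entry of base_matrix becomes a 2x2 permutation block: the
    identity, or the swap permutation when the matching entry of `flips`
    is 1. Each 0-entry becomes the 2x2 zero block, doubling both the
    number of rows and the number of columns.
    """
    I = [[1, 0], [0, 1]]   # identity permutation
    S = [[0, 1], [1, 0]]   # swap permutation
    Z = [[0, 0], [0, 0]]   # zero block
    rows, cols = len(base_matrix), len(base_matrix[0])
    lifted = [[0] * (2 * cols) for _ in range(2 * rows)]
    for r in range(rows):
        for c in range(cols):
            block = Z if base_matrix[r][c] == 0 else (S if flips[r][c] else I)
            for i in range(2):
                for j in range(2):
                    lifted[2 * r + i][2 * c + j] = block[i][j]
    return lifted
```

Repeated 2-lifts yield the proposed codes' structured parity-check matrices, which is what makes the hardware implementation efficient.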
|
cs/0609119
|
Verification, Validation and Integrity of Distributed and Interchanged
Rule Based Policies and Contracts in the Semantic Web
|
cs.AI cs.SE
|
Rule-based policy and contract systems have rarely been studied in terms of
their software engineering properties. This is a serious omission, because in
rule-based policy or contract representation languages rules are being used as
a declarative programming language to formalize real-world decision logic and
to build IS production systems upon. This paper adopts an SE methodology from
extreme programming, namely test-driven development, and discusses how it can
be adapted to verification, validation and integrity testing (V&V&I) of
policy and contract specifications. Since the test-driven approach focuses on
the behavioral aspects and the drawn conclusions instead of the structure of
the rule base and the causes of faults, it is independent of the complexity
of the rule language and the system under test, and thus much easier to use
and understand for the rule engineer and the user.
|
cs/0609120
|
Rule-based Knowledge Representation for Service Level Agreement
|
cs.AI cs.DB cs.LO cs.MA cs.SE
|
Automated management and monitoring of service contracts like Service Level
Agreements (SLAs) or higher-level policies is vital for efficient and
reliable distributed service-oriented architectures (SOA) with high quality
of service (QoS) levels. IT service providers need to manage, execute and
maintain thousands of SLAs for different customers and different types of
services, which requires new levels of flexibility and automation not
available with current technology. I propose a novel rule-based knowledge
representation (KR) for SLA rules and a respective rule-based service level
management (RBSLM) framework. My rule-based approach, based on logic
programming, provides several advantages, including automated rule chaining
allowing for compact knowledge representation and high levels of automation,
as well as the flexibility to adapt to rapidly changing business
requirements. I thereby address an urgent need of today's service-oriented
businesses: to dynamically change their business and contractual logic in
order to adapt to rapidly changing business environments and to overcome the
restricting nature of slow change cycles.
|
cs/0609121
|
Approximating Rate-Distortion Graphs of Individual Data: Experiments in
Lossy Compression and Denoising
|
cs.IT math.IT
|
Classical rate-distortion theory requires knowledge of an elusive source
distribution. Instead, we analyze rate-distortion properties of individual
objects using the recently developed algorithmic rate-distortion theory. The
latter is based on the noncomputable notion of Kolmogorov complexity. To apply
the theory we approximate the Kolmogorov complexity by standard data
compression techniques, and perform a number of experiments with lossy
compression and denoising of objects from different domains. We also introduce
a natural generalization to lossy compression with side information. To
maintain full generality we need to address a difficult searching problem.
While our solutions are therefore not time efficient, we do observe good
denoising and compression performance.
|
cs/0609122
|
Multi-Antenna Cooperative Wireless Systems: A Diversity-Multiplexing
Tradeoff Perspective
|
cs.IT math.IT
|
We consider a general multiple antenna network with multiple sources,
multiple destinations and multiple relays in terms of the
diversity-multiplexing tradeoff (DMT). We examine several subcases of this most
general problem taking into account the processing capability of the relays
(half-duplex or full-duplex), and the network geometry (clustered or
non-clustered). We first study the multiple antenna relay channel with a
full-duplex relay to understand the effect of increased degrees of freedom in
the direct link. We find DMT upper bounds and investigate the achievable
performance of decode-and-forward (DF), and compress-and-forward (CF)
protocols. Our results suggest that while DF is DMT optimal when all terminals
have one antenna each, it may not maintain its good performance when the
degrees of freedom in the direct link are increased, whereas CF continues to
perform optimally. We also study the multiple antenna relay channel with a
half-duplex relay. We show that the half-duplex DMT behavior can be
significantly different from the full-duplex case. We find that CF is DMT
optimal for
half-duplex relaying as well, and is the first protocol known to achieve the
half-duplex relay DMT. We next study the multiple-access relay channel (MARC)
DMT. Finally, we investigate a system with a single source-destination pair and
multiple relays, each node with a single antenna, and show that even under the
idealistic assumption of full-duplex relays and a clustered network, this
virtual multi-input multi-output (MIMO) system can never fully mimic a real
MIMO DMT. For cooperative systems with multiple sources and multiple
destinations the same limitation remains in effect.
|
cs/0609123
|
Optimal Design of Multiple Description Lattice Vector Quantizers
|
cs.IT math.IT
|
In the design of multiple description lattice vector quantizers (MDLVQ),
index assignment plays a critical role. In addition, one also needs to choose
the Voronoi cell size of the central lattice v, the sublattice index N, and the
number of side descriptions K to minimize the expected MDLVQ distortion, given
the total entropy rate of all side descriptions Rt and description loss
probability p. In this paper we propose a linear-time MDLVQ index assignment
algorithm for any K >= 2 balanced descriptions in any dimensions, based on a
new construction of so-called K-fraction lattice. The algorithm is greedy in
nature but is proven to be asymptotically (N -> infinity) optimal for any K >=
2 balanced descriptions in any dimensions, given Rt and p. The result is
stronger when K = 2: the optimality holds for finite N as well, under some mild
conditions. For K > 2, a local adjustment algorithm is developed to augment the
greedy index assignment, and conjectured to be optimal for finite N.
Our algorithmic study also leads to better understanding of v, N and K in
optimal MDLVQ design. For K = 2 we derive, for the first time, a
non-asymptotic closed-form expression of the expected distortion of optimal
MDLVQ in p, Rt, N. For K > 2, we tighten the current asymptotic formula of the
expected distortion, relating the optimal values of N and K to p and Rt more
precisely.
|
cs/0609125
|
Problem Evolution: A new approach to problem solving systems
|
cs.NE
|
In this paper we present a novel tool to evaluate problem solving systems.
Instead of using a system to solve a problem, we suggest using the problem to
evaluate the system. By finding a numerical representation of a problem's
complexity, one can implement a genetic algorithm to search for the most complex
problem the given system can solve. This allows a comparison between different
systems that solve the same set of problems. In this paper we implement this
approach on pattern recognition neural networks to try and find the most
complex pattern a given configuration can solve. The complexity of the pattern
is calculated using linguistic complexity. The results demonstrate the power of
the problem evolution approach in ranking different neural network
configurations according to their pattern recognition abilities. Future
research and implementations of this technique are also discussed.
|
cs/0609132
|
Semantic Description of Parameters in Web Service Annotations
|
cs.AI
|
A modification of OWL-S regarding parameter description is proposed. It is
strictly based on Description Logic. In addition to class description of
parameters it also allows the modelling of relations between parameters and the
precise description of the size of data to be supplied to a service. In
particular, it solves two major issues identified within current proposals for
a Semantic Web Service annotation standard.
|
cs/0609133
|
An application-oriented terminology evaluation: the case of back-of-the-book
indexes
|
cs.AI cs.IR
|
This paper addresses the problem of computational terminology evaluation not
per se but in a specific application context. This paper describes the
evaluation procedure that has been used to assess the validity of our overall
indexing approach and the quality of the IndDoc indexing tool. Even if
user-oriented extended evaluation is irreplaceable, we argue that early
evaluations are possible and they are useful for development guidance.
|
cs/0609134
|
Using NLP to build the hypertextual network of a back-of-the-book index
|
cs.AI cs.IR
|
Relying on the idea that back-of-the-book indexes are traditional devices for
navigation through large documents, we have developed a method to build a
hypertextual network that aids navigation within a document. Building such a
hypertextual network requires selecting a list of descriptors, identifying the
relevant text segments to associate with each descriptor and finally ranking
the descriptors and reference segments by relevance order. We propose a
specific document segmentation method and a relevance measure for information
ranking. The algorithms are tested on 4 corpora (of different types and
domains) without human intervention or any semantic knowledge.
|
cs/0609135
|
Event-based Information Extraction for the biomedical domain: the
Caderige project
|
cs.AI cs.IR
|
This paper gives an overview of the Caderige project. This project involves
teams from different areas (biology, machine learning, natural language
processing) in order to develop high-level analysis tools for extracting
structured information from biological bibliographical databases, especially
Medline. The paper gives an overview of the approach and compares it to the
state of the art.
|
cs/0609136
|
The ALVIS Format for Linguistically Annotated Documents
|
cs.AI
|
The paper describes the ALVIS annotation format designed for the indexing of
large collections of documents in topic-specific search engines. The format
is exemplified on the biological domain and on MedLine abstracts, as
developing a specialized search engine for biologists is one of the ALVIS
case studies. The
ALVIS principle for linguistic annotations is based on existing works and
standard propositions. We made the choice of stand-off annotations rather than
inserted mark-up. Annotations are encoded as XML elements which form the
linguistic subsection of the document record.
|
cs/0609137
|
Ontologies and Information Extraction
|
cs.AI cs.IR
|
This report argues that, even in the simplest cases, IE is an ontology-driven
process. It is not a mere text filtering method based on simple pattern
matching and keywords, because the extracted pieces of texts are interpreted
with respect to a predefined partial domain model. This report shows that
depending on the nature and the depth of the interpretation to be done for
extracting the information, more or less knowledge must be involved. This
report is mainly illustrated in biology, a domain in which there are critical
needs for content-based exploration of the scientific literature and which
becomes a major application domain for IE.
|
cs/0609138
|
MDL Denoising Revisited
|
cs.IT math.IT
|
We refine and extend an earlier MDL denoising criterion for wavelet-based
denoising. We start by showing that the denoising problem can be reformulated
as a clustering problem, where the goal is to obtain separate clusters for
informative and non-informative wavelet coefficients, respectively. This
suggests two refinements, adding a code-length for the model index, and
extending the model in order to account for subband-dependent coefficient
distributions. A third refinement is the derivation of soft thresholding inspired
by predictive universal coding with weighted mixtures. We propose a practical
method incorporating all three refinements, which is shown to achieve good
performance and robustness in denoising both artificial and natural signals.
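The classical soft-thresholding rule that the paper's mixture-based derivation refines can be sketched as follows. This is the standard textbook rule for comparison, not the paper's weighted-mixture variant:

```python
def soft_threshold(coeff, threshold):
    """Soft-threshold a single wavelet coefficient.

    Coefficients with magnitude below the threshold are set to zero
    (treated as non-informative noise); larger coefficients are shrunk
    toward zero by the threshold amount.
    """
    if coeff > threshold:
        return coeff - threshold
    if coeff < -threshold:
        return coeff + threshold
    return 0.0
```

Applying this rule to every coefficient of a wavelet transform, then inverting the transform, gives the basic wavelet-denoising pipeline that the MDL criterion tunes.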
|
cs/0609139
|
The Capacity of Channels with Feedback
|
cs.IT math.IT
|
We introduce a general framework for treating channels with memory and
feedback. First, we generalize Massey's concept of directed information and use
it to characterize the feedback capacity of general channels. Second, we
present coding results for Markov channels. This requires determining
appropriate sufficient statistics at the encoder and decoder. Third, a dynamic
programming framework for computing the capacity of Markov channels is
presented. Fourth, it is shown that the average cost optimality equation (ACOE)
can be viewed as an implicit single-letter characterization of the capacity.
Fifth, scenarios with simple sufficient statistics are described.
|
cs/0609140
|
Motion Primitives for Robotic Flight Control
|
cs.RO cs.LG
|
We introduce a simple framework for learning aggressive maneuvers in flight
control of UAVs. Inspired by biological systems, dynamic movement primitives
are analyzed and extended using nonlinear contraction theory. Accordingly,
primitives of an observed movement are stably combined and concatenated. We
demonstrate our results experimentally on the Quanser Helicopter, in which we
first imitate aggressive maneuvers and then use them as primitives to achieve
new maneuvers that can fly over an obstacle.
|
cs/0609142
|
Modular self-organization
|
cs.AI
|
The aim of this paper is to provide a sound framework for addressing a
difficult problem: the automatic construction of an autonomous agent's modular
architecture. We combine results from two apparently unrelated domains:
autonomous planning through Markov Decision Processes and a general data
clustering approach using a kernel-like method. Our fundamental idea is that
the former is a good framework for addressing autonomy whereas the latter
allows us to tackle self-organizing problems.
|
cs/0609143
|
ECA-LP / ECA-RuleML: A Homogeneous Event-Condition-Action Logic
Programming Language
|
cs.AI cs.LO cs.SE
|
Event-driven reactive functionalities are an urgent need in today's
distributed service-oriented applications and (Semantic) Web-based
environments. An important problem to be addressed is how to correctly and
efficiently capture and process the event-based behavioral, reactive logic
represented as ECA rules in combination with other conditional decision logic
which is represented as derivation rules. In this paper we elaborate on a
homogeneous integration approach which combines derivation rules, reaction
rules (ECA rules) and other rule types such as integrity constraints into the
general framework of logic programming. The developed ECA-LP language
provides expressive features such as ID-based updates with support for
external and self-updates of the intensional and extensional knowledge,
transactions including integrity testing, and an event algebra to define and
process complex events and actions based on a novel interval-based Event
Calculus variant.
|
cs/0609144
|
The Management and Integration of Biomedical Knowledge: Application in
the Health-e-Child Project (Position Paper)
|
cs.DB
|
The Health-e-Child project aims to develop an integrated healthcare platform
for European paediatrics. In order to achieve a comprehensive view of children's
health, a complex integration of biomedical data, information, and knowledge is
necessary. Ontologies will be used to formally define this domain knowledge and
will form the basis for the medical knowledge management system. This paper
introduces an innovative methodology for the vertical integration of biomedical
knowledge. This approach will be largely clinician-centered and will enable the
definition of ontology fragments, connections between them (semantic bridges)
and enriched ontology fragments (views). The strategy for the specification and
capture of fragments, bridges and views is outlined with preliminary examples
demonstrated in the collection of biomedical information from hospital
databases, biomedical ontologies, and biomedical public databases.
|
cs/0609145
|
A Semidefinite Relaxation for Air Traffic Flow Scheduling
|
cs.CE
|
We first formulate the problem of optimally scheduling air traffic flow with
sector capacity constraints as a mixed integer linear program. We then use
semidefinite relaxation techniques to form a convex relaxation of that problem.
Finally, we present a randomization algorithm to further improve the quality of
the solution. Because of the specific structure of the air traffic flow
problem, the relaxation has a single semidefinite constraint of size dn where d
is the maximum delay and n the number of flights.
|
cs/0609146
|
A Combinatorial Family of Near Regular LDPC Codes
|
cs.IT math.IT
|
An elementary combinatorial Tanner graph construction for a family of
near-regular low density parity check codes achieving high girth is presented.
The construction allows flexibility in the choice of design parameters like
rate, average degree, girth and block length of the code and yields an
asymptotic family. The complexity of constructing codes in the family grows
only quadratically with the block length.
|
cs/0609148
|
Pseudo-Codeword Performance Analysis for LDPC Convolutional Codes
|
cs.IT math.IT
|
Message-passing iterative decoders for low-density parity-check (LDPC) block
codes are known to be subject to decoding failures due to so-called
pseudo-codewords. These failures can cause the large signal-to-noise ratio
performance of message-passing iterative decoding to be worse than that
predicted by the maximum-likelihood decoding union bound. In this paper we
address the pseudo-codeword problem from the convolutional-code perspective. In
particular, we compare the performance of LDPC convolutional codes with that of
their ``wrapped'' quasi-cyclic block versions and we show that the minimum
pseudo-weight of an LDPC convolutional code is at least as large as the minimum
pseudo-weight of an underlying quasi-cyclic code. This result, which parallels
a well-known relationship between the minimum Hamming weight of convolutional
codes and the minimum Hamming weight of their quasi-cyclic counterparts, is due
to the fact that every pseudo-codeword in the convolutional code induces a
pseudo-codeword in the block code with pseudo-weight no larger than that of the
convolutional code's pseudo-codeword. This difference in the weight spectra
leads to improved performance at low-to-moderate signal-to-noise ratios for the
convolutional code, a conclusion supported by simulation results.
|
cs/0609153
|
Mining Generalized Graph Patterns based on User Examples
|
cs.DS cs.LG
|
There has been a lot of recent interest in mining patterns from graphs.
Often, the exact structure of the patterns of interest is not known. This
happens, for example, when molecular structures are mined to discover fragments
useful as features in a chemical compound classification task, or when web sites
are mined to discover sets of web pages representing logical documents. Such
patterns are often generated from a few small subgraphs (cores), according to
certain generalization rules (GRs). We call such patterns "generalized
patterns" (GPs). While being structurally different, GPs often perform the same
function in the network. Previously proposed approaches to mining GPs either
assumed that the cores and the GRs are given, or that all interesting GPs are
frequent. These are strong assumptions, which often do not hold in practical
applications. In this paper, we propose an approach to mining GPs that is free
from the above assumptions. Given a small number of GPs selected by the user,
our algorithm discovers all GPs similar to the user examples. First, a machine
learning-style approach is used to find the cores. Second, generalizations of
the cores in the graph are computed to identify GPs. Evaluation on synthetic
data, generated using real cores and GRs from biological and web domains,
demonstrates the effectiveness of our approach.
|
cs/0609154
|
Loop Calculus Helps to Improve Belief Propagation and Linear Programming
Decodings of Low-Density-Parity-Check Codes
|
cs.IT cond-mat.dis-nn cond-mat.stat-mech math.IT
|
We illustrate the utility of the recently developed loop calculus for
improving the Belief Propagation (BP) algorithm. If the algorithm that
minimizes the Bethe free energy fails we modify the free energy by accounting
for a critical loop in a graphical representation of the code. The
log-likelihood specific critical loop is found by means of the loop calculus.
The general method is tested using an example of the Linear Programming (LP)
decoding, which can be viewed as a special limit of the BP decoding. Considering
the (155,64,20) code operating over the additive white Gaussian noise channel,
we show that the loop calculus improves the LP decoding and corrects all
previously found dangerous configurations of log-likelihoods related to
pseudo-codewords with low effective distance, thus reducing the code's
error-floor.
|
cs/0609155
|
Detection of Markov Random Fields on Two-Dimensional Intersymbol
Interference Channels
|
cs.IT math.IT
|
We present a novel iterative algorithm for detection of binary Markov random
fields (MRFs) corrupted by two-dimensional (2D) intersymbol interference (ISI)
and additive white Gaussian noise (AWGN). We assume a first-order binary MRF as
a simple model for correlated images. We assume a 2D digital storage channel,
where the MRF is interleaved before being written and then read by a 2D
transducer; such channels occur in recently proposed optical disk storage
systems. The detection algorithm is a concatenation of two
soft-input/soft-output (SISO) detectors: an iterative row-column soft-decision
feedback (IRCSDF) ISI detector, and an MRF detector. The MRF detector is a SISO
version of the stochastic relaxation algorithm by Geman and Geman in IEEE
Trans. Pattern Anal. and Mach. Intell., Nov. 1984. On the 2 x 2 averaging-mask
ISI channel, at a bit error rate (BER) of 10^{-5}, the concatenated algorithm
achieves SNR savings of between 0.5 and 2.0 dB over the IRCSDF detector alone;
the savings increase as the MRFs become more correlated, or as the SNR
decreases. The algorithm is also fairly robust to mismatches between the
assumed and actual MRF parameters.
|
cs/0609156
|
Entangled Graphs
|
cs.IT cs.DM math.IT
|
In this paper we prove a separability criterion for mixed states in $\mathbb
C^p\otimes\mathbb C^q$. We also show that the density matrix of a graph with
only one entangled edge is entangled.
|
cs/0609157
|
Sensor Scheduling for Optimal Observability Using Estimation Entropy
|
cs.IT cs.AI math.IT
|
We consider sensor scheduling as the optimal observability problem for
partially observable Markov decision processes (POMDPs). This model fits cases
where a Markov process is observed by a single sensor that needs to be
dynamically adjusted or by a set of sensors which are selected one at a time in
a way that maximizes the information acquisition from the process. Similar to
conventional POMDP problems, in this model the control action is based on all
past measurements; here, however, this action is not for controlling the state
process, which is autonomous, but for influencing the measurement of that
process. This POMDP is a controlled version of the hidden Markov process, and
we show that its optimal observability problem can be formulated as an average
cost Markov decision process (MDP) scheduling problem. In this problem, a
policy is a rule for selecting sensors or adjusting the measuring device based
on the measurement history. Given a policy, we can evaluate the estimation
entropy for the joint state-measurement processes, which inversely measures the
observability of the state process for that policy. Considering estimation entropy
as the cost of a policy, we show that the problem of finding optimal policy is
equivalent to an average cost MDP scheduling problem where the cost function is
the entropy function over the belief space. This allows the application of the
policy iteration algorithm for finding the policy achieving minimum estimation
entropy, thus optimum observability.
|
cs/0609159
|
Duality for Several Families of Evaluation Codes
|
cs.IT cs.DM math.IT
|
We consider generalizations of Reed-Muller codes, toric codes, and codes from
certain plane curves, such as those defined by norm and trace functions on
finite fields. In each case we are interested in codes defined by evaluating
arbitrary subsets of monomials, and in identifying when the dual codes are also
obtained by evaluating monomials. We then move to the context of order domain
theory, in which the subsets of monomials can be chosen to optimize decoding
performance using the Berlekamp-Massey-Sakata algorithm with majority voting.
We show that for the codes under consideration these subsets are well-behaved
and the dual codes are also defined by monomials.
|
cs/0609160
|
Redundancies of Correction-Capability-Optimized Reed-Muller Codes
|
cs.IT cs.DM math.IT
|
This article is focused on some variations of Reed-Muller codes that yield
improvements to the rate for a prescribed decoding performance under the
Berlekamp-Massey-Sakata algorithm with majority voting. Explicit formulas for
the redundancies of the new codes are given.
|
cs/0609161
|
The Order Bound on the Minimum Distance of the One-Point Codes
Associated to a Garcia-Stichtenoth Tower of Function Fields
|
cs.IT cs.DM math.IT
|
Garcia and Stichtenoth discovered two towers of function fields that meet the
Drinfeld-Vl\u{a}du\c{t} bound on the ratio of the number of points to the
genus. For one of these towers, Garcia, Pellikaan and Torres derived a
recursive description of the Weierstrass semigroups associated to a tower of
points on the associated curves. In this article, a non-recursive description
of the semigroups is given and from this the enumeration of each of the
semigroups is derived as well as its inverse. This enables us to find an
explicit formula for the order (Feng-Rao) bound on the minimum distance of the
associated one-point codes.
|
cs/0609162
|
On Semigroups Generated by Two Consecutive Integers and Improved
Hermitian Codes
|
cs.IT cs.DM math.IT
|
Analysis of the Berlekamp-Massey-Sakata algorithm for decoding one-point
codes leads to two methods for improving code rate. One method, due to Feng and
Rao, removes parity checks that may be recovered by their majority voting
algorithm. The second method is to design the code to correct only those error
vectors of a given weight that are also geometrically generic. In this work,
formulae are given for the redundancies of Hermitian codes optimized with
respect to these criteria as well as the formula for the order bound on the
minimum distance. The results proceed from an analysis of numerical semigroups
generated by two consecutive integers. The formula for the redundancy of
optimal Hermitian codes correcting a given number of errors answers an open
question stated by Pellikaan and Torres in 1999.
|
cs/0609164
|
Conditional Expressions for Blind Deconvolution: Multi-point form
|
cs.CV
|
We present a conditional expression (CE) for finding blurs convolved in given
images. The CE is given in terms of the zero-values of the blurs evaluated at
multiple points. The CE can detect multiple blurs all at once. We illustrate
multiple blur detection by using a test image.
|
cs/0609165
|
Simple method to eliminate blur based on Lane and Bates algorithm
|
cs.CV
|
A simple search method for finding a blur convolved in a given image is
presented. The method can be easily extended to a large blur. The method has
been experimentally tested with a model blurred image.
|
cs/0610002
|
Conditional Expressions for Blind Deconvolution: Derivative form
|
cs.CV
|
We developed novel conditional expressions (CEs) for Lane and Bates' blind
deconvolution. The CEs are given in terms of the derivatives of the zero-values
of the z-transform of given images. The CEs make it possible to automatically
detect multiple blurs convolved in the given images all at once, without
performing any analysis of the zero-sheets of the given images. We illustrate
multiple blur detection by the CEs for a model image.
|
cs/0610004
|
Rapport technique du projet OGRE
|
cs.CL cs.AI
|
This report concerns the automatic understanding of (French) iterative
sentences, i.e. sentences in which a single verb must be interpreted as a more
or less regular plurality of events. A linguistic analysis is proposed along an
extension of Reichenbach's theory, several formal representations are
considered and a corpus of 18000 newspaper extracts is described.
|
cs/0610006
|
A Typed Hybrid Description Logic Programming Language with Polymorphic
Order-Sorted DL-Typed Unification for Semantic Web Type Systems
|
cs.AI
|
In this paper we elaborate on a specific application in the context of hybrid
description logic programs (hybrid DLPs), namely description logic Semantic Web
type systems (DL-types) which are used for term typing of LP rules based on a
polymorphic, order-sorted, hybrid DL-typed unification as procedural semantics
of hybrid DLPs. Using Semantic Web ontologies as type systems facilitates
interchange of domain-independent rules across domain boundaries via dynamic
typing and mapping of explicitly defined type ontologies.
|
cs/0610007
|
Full Text Searching in the Astrophysics Data System
|
cs.DL astro-ph cs.DB
|
The Smithsonian/NASA Astrophysics Data System (ADS) provides a search system
for the astronomy and physics scholarly literature. All major and many smaller
astronomy journals that were published on paper have been scanned back to
volume 1 and are available through the ADS free of charge. All scanned pages
have been converted to text and can be searched through the ADS Full Text
Search System. In addition, searches can be fanned out to several external
search systems to include the literature published in electronic form. Results
from the different search systems are combined into one results list.
The ADS Full Text Search System is available at:
http://adsabs.harvard.edu/fulltext_service.html
|
cs/0610008
|
Connectivity in the Astronomy Digital Library
|
cs.DL astro-ph cs.DB
|
The Astrophysics Data System (ADS) provides an extensive system of links
between the literature and other on-line information. Recently, the journals of
the American Astronomical Society (AAS) and a group of NASA data centers have
collaborated to provide more links between on-line data obtained by space
missions and the on-line journals. Authors can now specify which data sets they
have used in their article. This information is used by the participants to
provide the links between the literature and the data.
The ADS is available at: http://ads.harvard.edu
|
cs/0610010
|
One-Pass, One-Hash n-Gram Statistics Estimation
|
cs.DB cs.CL
|
In multimedia, text or bioinformatics databases, applications query sequences
of n consecutive symbols called n-grams. Estimating the number of distinct
n-grams is a view-size estimation problem. While view sizes can be estimated by
sampling under statistical assumptions, we desire an unassuming algorithm with
universally valid accuracy bounds. Most related work has focused on repeatedly
hashing the data, which is prohibitive for large data sources. We prove that a
one-pass one-hash algorithm is sufficient for accurate estimates if the hashing
is sufficiently independent. To reduce costs further, we investigate recursive
random hashing algorithms and show that they are sufficiently independent in
practice. We compare our running times with exact counts using suffix arrays
and show that, while we use hardly any storage, we are an order of magnitude
faster. The approach is further extended to a one-pass/one-hash computation of
n-gram entropy and iceberg counts. The experiments use a large collection of
English text from the Gutenberg Project as well as synthetic data.
|
cs/0610011
|
Creation and use of Citations in the ADS
|
cs.DL astro-ph cs.DB cs.IR
|
With over 20 million records, the ADS citation database is regularly used by
researchers and librarians to measure the scientific impact of individuals,
groups, and institutions. In addition to the traditional sources of citations,
the ADS has recently added references extracted from the arXiv e-prints on a
nightly basis. We review the procedures used to harvest and identify the
reference data used in the creation of citations, the policies and procedures
that we follow to avoid double-counting and to eliminate contributions which
may not be scholarly in nature. Finally, we describe how users and institutions
can easily obtain quantitative citation data from the ADS, both interactively
and via web-based programming tools.
The ADS is available at http://ads.harvard.edu.
|
cs/0610012
|
On Shift Sequences for Interleaved Construction of Sequence Sets with
Low Correlation
|
cs.IT math.IT
|
Construction of signal sets with low correlation property is of interest to
designers of CDMA systems. One of the preferred ways of constructing such sets
is the interleaved construction which uses two sequences a and b with 2-level
autocorrelation and a shift sequence e. The shift sequence has to satisfy
certain conditions for the resulting signal set to have low correlation
properties. This article shows that the conditions reported in the literature
are too strong and gives a weaker version which yields more shift sequences.
An open problem on the existence of shift sequences for attaining an
interleaved set with maximum correlation value bounded by v+2 is also taken up
and solved.
|
cs/0610015
|
Why did the accident happen? A norm-based reasoning approach
|
cs.AI
|
In this paper we describe the architecture of a system that answers the
question: Why did the accident happen? from the textual description of an
accident. We present briefly the different parts of the architecture and then
describe in more detail the semantic part of the system, i.e. the part in
which the norm-based reasoning is performed on the explicit knowledge extracted
from the text.
|
cs/0610016
|
Norm Based Causal Reasoning in Textual Corpus
|
cs.AI cs.CL
|
Truth-based entailments are not sufficient for a good comprehension of NL; they
cannot deduce the implicit information necessary to understand a text. On
the other hand, norm-based entailments are able to reach this goal. This idea
was behind the development of Frames (Minsky 75) and Scripts (Schank 77, Schank
79) in the 70's. But these theories are not formalized enough and their
adaptation to new situations is far from being obvious. In this paper, we
present a reasoning system which uses norms in a causal reasoning process in
order to find the cause of an accident from a text describing it.
|
cs/0610018
|
Raisonnement stratifi\'{e} \`{a} base de normes pour inf\'{e}rer les
causes dans un corpus textuel
|
cs.AI cs.CL
|
To understand texts written in natural language (NL), we use our knowledge
about the norms of the domain. Norms allow us to infer more implicit
information from the text. This kind of information is, in general, defeasible,
but it remains useful and acceptable as long as the text does not contradict it
explicitly. In this paper we describe a non-monotonic reasoning system based on
the norms of the car crash domain. The system infers the cause of an accident
from its textual description. The cause of an accident is seen as the most
specific norm which has been violated. The predicates and the rules of the
system are stratified, i.e. organized in layers, in order to obtain efficient
reasoning.
|
cs/0610019
|
NectaRSS, an RSS feed ranking system that implicitly learns user
preferences
|
cs.IR cs.HC
|
In this paper a new RSS feed ranking method called NectaRSS is introduced.
The system recommends information to a user based on his/her past choices. User
preferences are automatically acquired, avoiding explicit feedback, and ranking
is based on those preferences distilled to a user profile. NectaRSS uses the
well-known vector space model for user profiles and new documents, and compares
them using information-retrieval techniques, but introduces a novel method for
user profile creation and adaptation from users' past choices. The efficiency
of the proposed method has been tested by embedding it into an intelligent
aggregator (RSS feed reader), which has been used by different and
heterogeneous users. In addition, this paper shows that the quality of the
newsitem ranking yielded by NectaRSS improves with the user's choices, and that
it is superior to algorithms that use a different information representation
method.
|
cs/0610020
|
XString: XML as a String
|
cs.DB
|
Extensible Markup Language (XML) is a technology that has been much hyped, so
much so that XML has become an industry buzzword. Behind the hype is a powerful
technology for data representation in a platform independent manner. As a text
document, however, XML suffers from being too bloated, and requires an XML
parser to access and manipulate it. XString is an encoding method for XML, in
essence, a markup language's markup language. XString gives the benefit of
compressing XML, and allows for easy manipulation and processing of XML source
as a very long string.
|
cs/0610021
|
On the Fading Paper Achievable Region of the Fading MIMO Broadcast
Channel
|
cs.IT math.IT
|
We consider transmission over the ergodic fading multi-antenna broadcast
(MIMO-BC) channel with partial channel state information at the transmitter and
full information at the receiver. Over the equivalent {\it non}-fading channel,
capacity has recently been shown to be achievable using transmission schemes
that were designed for the ``dirty paper'' channel. We focus on a similar
``fading paper'' model. The evaluation of the fading paper capacity is
difficult to obtain. We confine ourselves to the {\it linear-assignment}
capacity, which we define, and use convex analysis methods to prove that its
maximizing distribution is Gaussian. We compare our fading-paper transmission
to an application of dirty paper coding that ignores the partial state
information and assumes the channel is fixed at the average fade. We show that
a gain is easily achieved by appropriately exploiting the information. We also
consider a cooperative upper bound on the sum-rate capacity as suggested by
Sato. We present a numeric example that indicates that our scheme is capable of
realizing much of this upper bound.
|
cs/0610022
|
Iterative Decoding of Low-Density Parity Check Codes (A Survey)
|
cs.IT cs.CC math.IT
|
Much progress has been made on decoding algorithms for error-correcting codes
in the last decade. In this article, we give an introduction to some
fundamental results on iterative, message-passing algorithms for low-density
parity check codes. For certain important stochastic channels, this line of
work has enabled getting very close to Shannon capacity with algorithms that
are extremely efficient (both in theory and practice).
|
cs/0610023
|
Une exp\'{e}rience de s\'{e}mantique inf\'{e}rentielle
|
cs.AI
|
We develop a system which must be able to perform the same inferences that a
human reader of an accident report can make, and more particularly to determine
the apparent causes of the accident. We describe the general framework in which
we work, the linguistic and semantic levels of the analysis, and the
inference rules used by the system.
|
cs/0610025
|
Low Correlation Sequences over the QAM Constellation
|
cs.IT math.IT
|
This paper presents the first concerted look at low correlation sequence
families over QAM constellations of size M^2=4^m and their potential
applicability as spreading sequences in a CDMA setting.
Five constructions are presented, and it is shown how such sequence families
have the ability to transport a larger amount of data as well as enable
variable-rate signalling on the reverse link.
Canonical family CQ has period N, normalized maximum-correlation parameter
theta_max bounded above by A sqrt(N), where 'A' ranges from 1.8 in the 16-QAM
case to 3.0 for large M. In a CDMA setting, each user is enabled to transfer 2m
bits of data per period of the spreading sequence which can be increased to 3m
bits of data by halving the size of the sequence family. The technique used to
construct CQ is easily extended to produce larger sequence families and an
example is provided.
Selected family SQ has a lower value of theta_max but permits only (m+1)-bit
data modulation. The interleaved 16-QAM sequence family IQ has theta_max <=
sqrt(2) sqrt(N) and supports 3-bit data modulation.
The remaining two families are over a quadrature-PAM (Q-PAM) subset of size
2M of the M^2-QAM constellation. Family P has a lower value of theta_max in
comparison with Family SQ, while still permitting (m+1)-bit data modulation.
Interleaved family IP, over the 8-ary Q-PAM constellation, permits 3-bit data
modulation and interestingly, achieves the Welch lower bound on theta_max.
|
cs/0610029
|
Data in the ADS -- Understanding How to Use it Better
|
cs.DL cs.DB
|
The Smithsonian/NASA ADS Abstract Service contains a wealth of data for
astronomers and librarians alike, yet the vast majority of usage consists of
rudimentary searches. Hints on how to obtain more focused search results by
using more of the various capabilities of the ADS are presented, including
searching by affiliation. We also discuss the classification of articles by
content and by referee status.
The ADS is funded by NASA Grant NNG06GG68G-16613687.
|
cs/0610033
|
A kernel for time series based on global alignments
|
cs.CV cs.LG
|
We propose in this paper a new family of kernels to handle times series,
notably speech data, within the framework of kernel methods which includes
popular algorithms such as the Support Vector Machine. These kernels elaborate
on the well-known Dynamic Time Warping (DTW) family of distances by considering
the same set of elementary operations, namely substitutions and repetitions of
tokens, to map one sequence onto another. Associating a given score with each
of these operations, DTW algorithms use dynamic programming to compute an
optimal sequence of operations with high overall score. In this paper we
consider instead the score spanned by all possible alignments, take a smoothed
version of their maximum and derive a kernel out of this formulation. We prove
that this kernel is positive definite under favorable conditions and show how
it can be tuned effectively for practical applications as we report encouraging
results on a speech recognition task.
|
cs/0610037
|
The Capacity Region of a Class of Discrete Degraded Interference
Channels
|
cs.IT math.IT
|
We provide a single-letter characterization for the capacity region of a
class of discrete degraded interference channels (DDICs). The class of DDICs
considered includes the discrete additive degraded interference channel (DADIC)
studied by Benzel. We show that for the class of DDICs studied, encoder
cooperation does not increase the capacity region, and therefore, the capacity
region of the class of DDICs is the same as the capacity region of the
corresponding degraded broadcast channel.
|
cs/0610039
|
The Application of Fuzzy Logic to the Construction of the Ranking
Function of Information Retrieval Systems
|
cs.IR cs.AI
|
The quality of the ranking function is an important factor that determines
the quality of the Information Retrieval system. Each document is assigned a
score by the ranking function; the score indicates the likelihood of relevance
of the document given a query. In the vector space model, the ranking function
is defined by a mathematical expression. We propose a fuzzy logic (FL) approach
to defining the ranking function. FL provides a convenient way of converting
knowledge expressed in a natural language into fuzzy logic rules. The resulting
ranking function can be easily viewed, extended, and verified: * if (tf is
high) and (idf is high) -> (relevance is high); * if (overlap is high) ->
(relevance is high). By using the above FL rules, we are able to achieve
performance approximately equal to that of the state-of-the-art search engine
Apache Lucene (deltaP10 +0.92%; deltaMAP -0.1%). The fuzzy logic approach allows
combining the logic-based model with the vector model. The resulting model
possesses simplicity and formalism of the logic based model, and the
flexibility and performance of the vector model.
|
cs/0610041
|
A Computational Model of Spatial Memory Anticipation during Visual
Search
|
cs.NE
|
Some visual search tasks require memorizing the location of stimuli that
have been previously scanned. Considerations about eye movements raise the
question of how we are able to maintain a coherent memory, despite the frequent
drastic changes in perception. In this article, we present a
computational model that is able to anticipate the consequences of eye
movements on visual perception in order to update a spatial memory.
|
cs/0610043
|
Farthest-Point Heuristic based Initialization Methods for K-Modes
Clustering
|
cs.AI
|
The k-modes algorithm has become a popular technique in solving categorical
data clustering problems in different application domains. However, the
algorithm requires random selection of initial points for the clusters.
Different initial points often lead to considerable distinct clustering
results. In this paper we present an experimental study on applying a
farthest-point heuristic based initialization method to k-modes clustering to
improve its performance. Experiments show that the new initialization method
leads to better clustering accuracy than the random selection initialization
method for k-modes clustering.
|
cs/0610045
|
Spectra of large block matrices
|
cs.IT math.IT math.OA
|
In a frequency selective slow-fading channel in a MIMO system, the channel
matrix is of the form of a block matrix. This paper proposes a method to
calculate the limit of the eigenvalue distribution of block matrices if the
size of the blocks tends to infinity. While it considers random matrices, it
takes an operator-valued free probability approach to achieve this goal. Using
this method, one derives a system of equations, which can be solved numerically
to compute the desired eigenvalue distribution. The paper initially tackles the
problem for square block matrices, then extends the solution to rectangular
block matrices. Finally, it deals with Wishart type block matrices. For two
special cases, the results of our approach are compared with results from
simulations. The first scenario investigates the limit eigenvalue distribution
of block Toeplitz matrices. The second scenario deals with the distribution of
Wishart type block matrices for a frequency selective slow-fading channel in a
MIMO system for two different cases of $n_R=n_T$ and $n_R=2n_T$. Using this
method, one may calculate the capacity and the Signal-to-Interference-and-Noise
Ratio in large MIMO systems.
|
cs/0610047
|
Capacity of the Trapdoor Channel with Feedback
|
cs.IT math.IT
|
We establish that the feedback capacity of the trapdoor channel is the
logarithm of the golden ratio and provide a simple communication scheme that
achieves capacity. As part of the analysis, we formulate a class of dynamic
programs that characterize capacities of unifilar finite-state channels. The
trapdoor channel is an instance that admits a simple analytic solution.
|
cs/0610050
|
The Mathematical Parallels Between Packet Switching and Information
Transmission
|
cs.IT cs.NI math.IT
|
All communication networks comprise transmission systems and switching
systems, even though these are usually treated as two separate issues.
Communication channels are generally disturbed by noise from various sources.
In circuit switched networks, reliable communication requires the
error-tolerant transmission of bits over noisy channels. In packet switched
networks, however, not only can bits be corrupted with noise, but resources
along connection paths are also subject to contention. Thus, quality of service
(QoS) is determined by buffer delays and packet losses. The theme of this paper
is to show that transmission noise and packet contention actually have similar
characteristics and can be tamed by comparable means to achieve reliable
communication, and a number of analogies between switching and transmission are
identified. The sampling theorem of bandlimited signals provides the
cornerstone of digital communication and signal processing. Recently, the
Birkhoff-von Neumann decomposition of traffic matrices has been widely applied
to packet switches. With respect to the complexity reduction of packet
switching, we show that the decomposition of a doubly stochastic traffic matrix
plays a similar role to that of the sampling theorem in digital transmission.
We conclude that packet switching systems are governed by mathematical laws
that are similar to those of digital transmission systems as envisioned by
Shannon in his seminal 1948 paper, A Mathematical Theory of Communication.
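The Birkhoff-von Neumann decomposition invoked above writes a doubly stochastic traffic matrix as a convex combination of permutation matrices, each of which corresponds to a crossbar switch configuration. A minimal brute-force sketch (the function name and search strategy are illustrative, not the paper's algorithm, and are practical only for small matrices):

```python
import itertools
import numpy as np

def birkhoff_von_neumann(P, tol=1e-9):
    """Decompose a doubly stochastic matrix into a convex combination of
    permutation matrices (Birkhoff-von Neumann), by repeatedly peeling off
    the heaviest feasible permutation.  Brute-force search over permutations."""
    P = P.astype(float)
    n = P.shape[0]
    terms = []
    while P.max() > tol:
        # Find a permutation whose entries are all still positive in the remainder.
        for perm in itertools.permutations(range(n)):
            weights = [P[i, perm[i]] for i in range(n)]
            if min(weights) > tol:
                coeff = min(weights)
                for i in range(n):
                    P[i, perm[i]] -= coeff
                terms.append((coeff, perm))
                break
        else:
            break  # only numerical residue left
    return terms

# Example: a 2x2 doubly stochastic traffic matrix.
P = np.array([[0.7, 0.3],
              [0.3, 0.7]])
for coeff, perm in birkhoff_von_neumann(P):
    print(coeff, perm)
```

Each `(coeff, perm)` pair says: run the switch in configuration `perm` for a fraction `coeff` of the time, which is the scheduling analogy to reconstructing a signal from its samples.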
|
cs/0610052
|
Finite-Dimensional Bounds on Zm and Binary LDPC Codes with Belief
Propagation Decoders
|
cs.IT math.IT
|
This paper focuses on finite-dimensional upper and lower bounds on decodable
thresholds of Zm and binary low-density parity-check (LDPC) codes, assuming
belief propagation decoding on memoryless channels. A concrete framework is
presented, admitting systematic searches for new bounds. Two noise measures are
considered: the Bhattacharyya noise parameter and the soft bit value for a
maximum a posteriori probability (MAP) decoder on the uncoded channel. For Zm
LDPC codes, an iterative m-dimensional bound is derived for
m-ary-input/symmetric-output channels, which gives a sufficient stability
condition for Zm LDPC codes and is complemented by a matched necessary
stability condition introduced herein. Applications to coded modulation and to
codes with non-equiprobable distributed codewords are also discussed.
For binary codes, two new lower bounds are provided for symmetric channels,
including a two-dimensional iterative bound and a one-dimensional non-iterative
bound, the latter of which is the best known bound that is tight for binary
symmetric channels (BSCs), and is a strict improvement over the bound derived
by the channel degradation argument. By adopting the reverse channel
perspective, upper and lower bounds on the decodable Bhattacharyya noise
parameter are derived for non-symmetric channels, which coincide with the
existing bounds for symmetric channels.
|
cs/0610053
|
Towards a Bayesian framework for option pricing
|
cs.CE q-fin.PR
|
In this paper, we describe a general method for constructing the posterior
distribution of an option price. Our framework takes as inputs the prior
distributions of the parameters of the stochastic process followed by the
underlying, as well as the likelihood function implied by the observed price
history for the underlying. Our work extends that of Karolyi (1993) and
Darsinos and Satchell (2001), but with the crucial difference that the
likelihood function we use for inference is that which is directly implied by
the underlying, rather than imposed in an ad hoc manner via the introduction of
a function representing "measurement error." As such, an important problem
still relevant for our method is that of model risk, and we address this issue
by describing how to perform a Bayesian averaging of parameter inferences based
on the different models considered using our framework.
|
cs/0610057
|
Properties of codes in rank metric
|
cs.DM cs.IT math.IT
|
We study properties of rank metric and codes in rank metric over finite
fields. We show that in rank metric perfect codes do not exist. We derive an
existence bound that is the equivalent of the Gilbert--Varshamov bound in
Hamming metric. We study the asymptotic behavior of the minimum rank distance
of codes satisfying the GV bound. We derive the probability distribution of the
minimum rank distance for random and random $\F{q}$-linear codes. We give an
asymptotic equivalent of their average minimum rank distance and show that
random $\F{q}$-linear codes lie on the GV bound for the rank metric.
We show that the covering density of optimum codes whose codewords can be
seen as square matrices is lower bounded by a function depending only on the
error-correcting capability of the codes. We show that there are quasi-perfect
codes in rank metric over fields of characteristic 2.
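For concreteness, the rank distance between two codewords, viewed as matrices over a finite field, is the rank of their difference; over GF(2) the difference reduces to an XOR. A small illustrative sketch (not code from the paper):

```python
import numpy as np

def gf2_rank(M):
    """Rank of a 0/1 matrix over GF(2), via Gaussian elimination."""
    M = M.astype(int) % 2
    rows, cols = M.shape
    rank = 0
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]  # move pivot row into place
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]  # clear column c in all other rows
        rank += 1
    return rank

def rank_distance(A, B):
    """Rank distance between two codewords viewed as matrices over GF(2):
    the rank of their difference, which over GF(2) is A xor B."""
    return gf2_rank(A ^ B)

A = np.array([[1, 0], [0, 1]])
B = np.array([[1, 1], [0, 1]])
print(rank_distance(A, B))  # A xor B has a single nonzero row -> rank 1
```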
|
cs/0610058
|
Context-sensitive access to e-document corpus
|
cs.IR
|
The methodology of context-sensitive access to e-documents treats context
as a problem model based on the knowledge extracted from the application
domain and presented in the form of an application ontology. Efficient access
to information in text form is needed. Wiki resources, as a modern text
format, provide a huge number of texts in a semi-formalized structure. At the
first stage of the methodology, documents are indexed against the ontology
representing macro-situation. The indexing method uses a topic tree as a middle
layer between documents and the application ontology. At the second stage
documents relevant to the current situation (the abstract and operational
contexts) are identified and sorted by degree of relevance. Abstract context is
a problem-oriented ontology-based model. Operational context is an
instantiation of the abstract context with data provided by the information
sources. The following parts of the methodology are described: (i) metrics for
measuring the similarity of e-documents to the ontology; (ii) a document index
storing the results of indexing e-documents against the ontology; (iii) a
method for identifying relevant e-documents based on semantic similarity measures.
Wikipedia (a wiki resource) is used as the corpus of e-documents for evaluating
the approach in a case study. The corpus is characterized by text
categorization, the presence of metadata, and a large number of articles on
different topics.
|
cs/0610059
|
Camera motion estimation through planar deformation determination
|
cs.CV
|
In this paper, we propose a global method for estimating the motion of a
camera which films a static scene. Our approach is direct, fast and robust, and
deals with adjacent frames of a sequence. It is based on a quadratic
approximation of the deformation between two images, in the case of a scene
with constant depth in the camera coordinate system. This condition is very
restrictive, but we show that provided the translation and the variations of
inverse depth are small enough, the error in optical flow caused by
approximating the depths by a constant is small. In this context, we propose a
new model of camera motion that allows the image deformation to be separated
into a similarity and a ``purely'' projective component, due to the change of
optical axis direction. This model leads to a quadratic approximation of the
image deformation, which we estimate with an M-estimator; from this we can
immediately deduce the camera motion parameters.
|
cs/0610060
|
Comparing Typical Opening Move Choices Made by Humans and Chess Engines
|
cs.AI
|
The opening book is an important component of a chess engine, and thus
computer chess programmers have been developing automated methods to improve
the quality of their books. For chess, which has a very rich opening theory,
large databases of high-quality games can be used as the basis of an opening
book, from which statistics relating to move choices from given positions can
be collected. In order to find out whether the opening books used by modern
chess engines in machine versus machine competitions are ``comparable'' to
those used by chess players in human versus human competitions, we carried out
an analysis of 26 test positions using statistics from two opening books, one
compiled from humans' games and the other from machines' games. Our analysis,
using several nonparametric measures, shows that, overall, there is a strong
association between humans' and machines' choices of opening moves when using a
book to guide their choices.
|
cs/0610061
|
The Delay-Limited Capacity Region of OFDM Broadcast Channels
|
cs.IT math.IT
|
In this work, the delay limited capacity (DLC) of orthogonal frequency
division multiplexing (OFDM) systems is investigated. The analysis is organized
into two parts. In the first part, the impact of system parameters on the OFDM
DLC is analyzed in a general setting. The main results are that under weak
assumptions the maximum achievable single user DLC is almost independent of the
distribution of the path attenuations in the low signal-to-noise (SNR) region
but depends strongly on the delay spread. In the high SNR region the roles are
exchanged. Here, the impact of delay spread is negligible while the impact of
the distribution becomes dominant. The relevant asymptotic quantities are
derived without employing simplifying assumptions on the OFDM correlation
structure. Moreover, for both cases it is shown that the DLC is maximized if
the total channel energy is uniformly spread, i.e. the power delay profile is
uniform. It is worth pointing out that, since universal bounds are obtained,
the results can also be used for other classes of parallel channels with
block-fading characteristics. The second part extends the setting to the
broadcast channel and studies the corresponding OFDM BC DLC region. An
algorithm for computing this region is presented. To derive simple but smart
resource allocation strategies, the principle of rate water-filling employing
order statistics is introduced. This yields analytical lower bounds on the OFDM
DLC region based on orthogonal frequency division multiple access (OFDMA) and
ordinal channel state information (CSI). Finally, the schemes are compared to
an algorithm using full CSI.
|
cs/0610067
|
Language, logic and ontology: uncovering the structure of commonsense
knowledge
|
cs.AI math.LO
|
The purpose of this paper is twofold: (i) we argue that the structure of
commonsense knowledge must be discovered, rather than invented; and (ii) we
argue that natural language, which is the best known theory of our (shared)
commonsense knowledge, should itself be used as a guide to discovering the
structure of commonsense knowledge. In addition to suggesting a systematic
method to the discovery of the structure of commonsense knowledge, the method
we propose seems to also provide an explanation for a number of phenomena in
natural language, such as metaphor, intensionality, and the semantics of
nominal compounds. Admittedly, our ultimate goal is quite ambitious, and it is
no less than the systematic 'discovery' of a well-typed ontology of commonsense
knowledge, and the subsequent formulation of the long-awaited goal of a meaning
algebra.
|
cs/0610074
|
Collaborative Decoding of Interleaved Reed-Solomon Codes and
Concatenated Code Designs
|
cs.IT math.IT
|
Interleaved Reed-Solomon codes are applied in numerous data processing, data
transmission, and data storage systems. They are generated by interleaving
several codewords of ordinary Reed-Solomon codes. Usually, these codewords are
decoded independently by classical algebraic decoding methods. However, by
collaborative algebraic decoding approaches, such interleaved schemes allow the
correction of error patterns beyond half the minimum distance, provided that
the errors in the received signal occur in bursts. In this work, collaborative
decoding of interleaved Reed-Solomon codes by multi-sequence shift-register
synthesis is considered and analyzed. Based on the framework of interleaved
Reed-Solomon codes, concatenated code designs are investigated, which are
obtained by interleaving several Reed-Solomon codes, and concatenating them
with an inner block code.
|
cs/0610075
|
On Geometric Algebra representation of Binary Spatter Codes
|
cs.AI quant-ph
|
Kanerva's Binary Spatter Codes are reformulated in terms of geometric
algebra. The key ingredient of the construction is the representation of XOR
binding in terms of geometric product.
|
cs/0610076
|
Peano Count Trees (P-Trees) and Rule Association Mining for Gene
Expression Profiling of Microarray Data
|
cs.DS cs.IR q-bio.MN
|
The greatest challenge in maximizing the use of gene expression data is to
develop new computational tools capable of interconnecting and interpreting the
results from different organisms and experimental settings. We propose an
integrative and comprehensive approach including a super-chip containing data
from microarray experiments collected on different species subjected to hypoxic
and anoxic stress. A data mining technology called the Peano count tree
(P-tree) is used to represent genomic data in multiple dimensions. Each
microarray spot is represented as a pixel with its corresponding red/green
intensity feature bands. Each band is stored separately in a reorganized
eight-file bit-sequential (bSQ) format. Each bSQ file is converted to a
quadrant-based tree structure (P-tree), from which a super-chip is represented
as expression P-trees (EP-trees) and repression P-trees (RP-trees). The use of
association rule mining is proposed to meaningfully organize signal
transduction pathways, taking evolutionary considerations into account. We
argue that the genetic constitution of an organism (K) can be represented by
the total number of genes belonging to two groups. The group X consists of
genes (X1,...,Xn) that can be represented as 1 or 0 depending on whether or not
the gene is expressed. The second group of Y genes (Y1,...,Yn) is expressed at
different levels: these genes may show very high expression, high expression,
high repression, or very high repression. However, many genes of group Y are
species-specific and are modulated by the products and combinations of genes of
group X. In this paper, we introduce the bSQ and P-tree technology, the
biological implications of association rule mining using the X and Y gene
groups, and some advances in the integration of this information using the
BRAIN architecture.
|
cs/0610077
|
MIMO Broadcast Channels with Block Diagonalization and Finite Rate
Feedback
|
cs.IT math.IT
|
Block diagonalization is a linear precoding technique for the multiple
antenna broadcast (downlink) channel that involves transmission of multiple
data streams to each receiver such that no multi-user interference is
experienced at any of the receivers. This low-complexity scheme operates only a
few dB away from capacity but does require very accurate channel knowledge at
the transmitter, which can be very difficult to obtain in fading scenarios. We
consider a limited feedback system where each receiver knows its channel
perfectly, but the transmitter is only provided with a finite number of channel
feedback bits from each receiver. Using a random vector quantization argument,
we quantify the throughput loss due to imperfect channel knowledge as a
function of the feedback level. The quality of channel knowledge must improve
proportional to the SNR in order to prevent interference-limitations, and we
show that scaling the number of feedback bits linearly with the system SNR is
sufficient to maintain a bounded rate loss. Finally, we investigate a simple
scalar quantization scheme that is seen to achieve the same scaling behavior as
vector quantization.
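The random vector quantization argument mentioned above can be illustrated by drawing a codebook of 2^B random unit-norm vectors and feeding back the index of the one best aligned with the channel direction. A hypothetical sketch (names and setup are illustrative, not the paper's implementation):

```python
import numpy as np

def rvq_quantize(h, B, rng):
    """Random vector quantization: from a random codebook of 2^B unit-norm
    complex vectors, return the codeword best aligned with the channel
    direction h, together with its squared alignment |<c, h/||h||>|^2."""
    h = h / np.linalg.norm(h)
    codebook = (rng.standard_normal((2 ** B, h.size))
                + 1j * rng.standard_normal((2 ** B, h.size)))
    codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)
    gains = np.abs(codebook.conj() @ h) ** 2
    best = int(np.argmax(gains))
    return codebook[best], gains[best]

rng = np.random.default_rng(0)
h = rng.standard_normal(4) + 1j * rng.standard_normal(4)
for B in (2, 6, 10):
    _, gain = rvq_quantize(h, B, rng)
    print(B, round(gain, 3))  # alignment improves as the feedback budget B grows
```

The quantization error behind the throughput-loss analysis is 1 minus this alignment, which shrinks as B (the number of feedback bits) increases.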
|
cs/0610079
|
An Enhanced Covering Lemma for Multiterminal Source Coding
|
cs.IT math.IT
|
An enhanced covering lemma for a Markov chain is proved in this paper, and
then the distributed source coding problem of correlated general sources with
one average distortion criterion under fixed-length coding is investigated.
Based on the enhanced lemma, a necessary and sufficient condition for
determining the achievability of rate-distortion triples is given.
|
cs/0610082
|
Theoretical analysis of network crankback protocols performance
|
cs.IT math.IT
|
We propose a solution to the problem of analyzing the performance of network
crankback protocols posed by Eyal Felstine, Reuven Cohen and Ofer Hadar,
"Crankback Prediction in Hierarchical ATM Networks", Journal of Network and
Systems Management, Vol. 10, No. 3, September 2002. It is shown that the false
alarm probability and the probability of successfully traversing a path can be
calculated. The main optimization equations for the crankback protocol
parameters are developed using analytical expressions for the statistical
characteristics of the protocol.
|
cs/0610083
|
Estimation of the traffic in the binary channel for data networks
|
cs.IT math.IT
|
It is impossible to ensure effective utilization of communication networks
without analyzing the quantitative characteristics of the traffic in real
time. Constant supervision of all data channels is practically infeasible,
because it requires the transfer of significant additional information over
the network and a large expenditure of resources on control devices. Thus, the
task of estimating traffic in real time at low cost is an urgent one.
|
cs/0610091
|
On the Behavior of Journal Impact Factor Rank-Order Distribution
|
cs.IR physics.soc-ph
|
An empirical law for the rank-order behavior of journal impact factors is
found. Using an extensive database of impact factors covering journals in
Education, Agrosciences, Geosciences, Biosciences, and Environmental,
Chemical, Computer, Engineering, Material, Mathematical, Medical and Physical
Sciences, we have found extremely good fits outperforming other rank-order
models. Some extensions to other areas of knowledge are discussed.
|
cs/0610093
|
Semantic results for ontic and epistemic change
|
cs.LO cs.AI cs.MA
|
We give some semantic results for an epistemic logic incorporating dynamic
operators to describe information changing events. Such events include
epistemic changes, where agents become more informed about the non-changing
state of the world, and ontic changes, wherein the world changes. The events
are executed in information states that are modeled as pointed Kripke models.
Our contribution consists of three semantic results. (i) Given two information
states, there is an event transforming one into the other. The linguistic
correspondent to this is that every consistent formula can be made true in
every information state by the execution of an event. (ii) A more technical
result is that: every event corresponds to an event in which the postconditions
formalizing ontic change are assignments to `true' and `false' only (instead of
assignments to arbitrary formulas in the logical language). `Corresponds' means
that execution of either event in a given information state results in
bisimilar information states. (iii) The third, also technical, result is that
every event corresponds to a sequence of events wherein all postconditions are
assignments of a single atom only (instead of simultaneous assignments of more
than one atom).
|
cs/0610095
|
Solving planning domains with polytree causal graphs is NP-complete
|
cs.AI cs.CC
|
We show that solving planning domains on binary variables with polytree
causal graph is \NP-complete. This is in contrast to a polynomial-time
algorithm of Domshlak and Brafman that solves these planning domains for
polytree causal graphs of bounded indegree.
|
cs/0610098
|
Characterizing Solution Concepts in Games Using Knowledge-Based Programs
|
cs.GT cs.DC cs.MA
|
We show how solution concepts in games such as Nash equilibrium, correlated
equilibrium, rationalizability, and sequential equilibrium can be given a
uniform definition in terms of \emph{knowledge-based programs}. Intuitively,
all solution concepts are implementations of two knowledge-based programs, one
appropriate for games represented in normal form, the other for games
represented in extensive form. These knowledge-based programs can be viewed as
embodying rationality. The representation works even if (a) information sets do
not capture an agent's knowledge, (b) uncertainty is not represented by
probability, or (c) the underlying game is not common knowledge.
|
cs/0610099
|
Properties of Codes with the Rank Metric
|
cs.IT math.IT
|
In this paper, we study properties of rank metric codes in general and
maximum rank distance (MRD) codes in particular. For codes with the rank
metric, we first establish Gilbert and sphere-packing bounds, and then obtain
the asymptotic forms of these two bounds and the Singleton bound. Based on the
asymptotic bounds, we observe that the Gilbert-Varshamov bound is
asymptotically exceeded by MRD codes and that the sphere-packing bound cannot
be attained. We also
establish bounds on the rank covering radius of maximal codes, and show that
all MRD codes are maximal codes and all the MRD codes known so far achieve the
maximum rank covering radius.
|
cs/0610100
|
A Mobile Transient Internet Architecture
|
cs.NI cs.IT math.IT
|
This paper describes a new architecture for transient mobile networks
destined to merge existing and future network architectures, communication
implementations and protocol operations by introducing a new paradigm to data
delivery and identification. The main goal of our research is to enable
seamless end-to-end communication between mobile and stationary devices across
multiple networks and through multiple communication environments. The
architecture establishes a set of infrastructure components and protocols that
set the ground for a Persistent Identification Network (PIN). The basis for the
operation of PIN is an identification space consisting of unique location
independent identifiers similar to the ones implemented in the Handle system.
Persistent Identifiers are used to identify and locate Digital Entities which
can include devices, services, users and even traffic. The architecture
establishes a primary connection independent logical structure that can operate
over conventional networks or more advanced peer-to-peer aggregation networks.
Communication is based on routing pools and novel protocols for routing data
across several abstraction levels of the network, regardless of the end-points'
current association and state...
|
cs/0610102
|
Quantum communication is possible with pure state
|
cs.IT math.IT
|
It is believed that quantum communication is not possible with a pure
ensemble of states because the quantum entropy of a pure state is zero. We
show that it is indeed possible, as a geometric consequence of entanglement.
|
cs/0610103
|
On the Secrecy Capacity of Fading Channels
|
cs.IT math.IT
|
We consider the secure transmission of information over an ergodic fading
channel in the presence of an eavesdropper. Our eavesdropper can be viewed as
the wireless counterpart of Wyner's wiretapper. The secrecy capacity of such a
system is characterized under the assumption of asymptotically long coherence
intervals. We first consider the full Channel State Information (CSI) case,
where the transmitter has access to the channel gains of the legitimate
receiver and the eavesdropper. The secrecy capacity under this full CSI
assumption serves as an upper bound for the secrecy capacity when only the CSI
of the legitimate receiver is known at the transmitter, which is characterized
next. In each scenario, the perfect secrecy capacity is obtained along with the
optimal power and rate allocation strategies. We then propose a low-complexity
on/off power allocation strategy that achieves near-optimal performance with
only the main channel CSI. More specifically, this scheme is shown to be
asymptotically optimal as the average SNR goes to infinity, and interestingly,
is shown to attain the secrecy capacity under the full CSI assumption.
Remarkably, our results reveal the positive impact of fading on the secrecy
capacity and establish the critical role of rate adaptation, based on the main
channel CSI, in facilitating secure communications over slow fading channels.
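The flavor of such an on/off scheme can be conveyed with a toy Monte Carlo experiment over Rayleigh fading: transmit at constant power only when the main channel gain exceeds a threshold, and count the per-realization rate advantage over the eavesdropper. This is only an illustration of an ergodic secrecy rate of the general form E[(log(1+SNR·h_M) − log(1+SNR·h_E))+]; the paper's exact rate expressions, power allocation, and optimal threshold differ (all names below are illustrative):

```python
import numpy as np

def on_off_secrecy_rate(snr, threshold, n=200_000, seed=0):
    """Toy Monte Carlo estimate of an ergodic secrecy rate under an on/off
    rule: transmit at constant power only when the main channel power gain
    exceeds `threshold`; credit only the positive rate advantage."""
    rng = np.random.default_rng(seed)
    h_main = rng.exponential(size=n)  # Rayleigh fading: exponential power gains
    h_eve = rng.exponential(size=n)   # eavesdropper power gains
    on = h_main > threshold
    advantage = np.log2(1 + snr * h_main) - np.log2(1 + snr * h_eve)
    return float(np.mean(np.maximum(advantage, 0.0) * on))

print(on_off_secrecy_rate(snr=10.0, threshold=1.0))
```

Even this crude sketch shows the qualitative point of the abstract: fading creates realizations where the main channel beats the eavesdropper's, and gating transmission on the main channel CSI harvests them.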
|
cs/0610104
|
ARQ Diversity in Fading Random Access Channels
|
cs.IT math.IT
|
A cross-layer optimization approach is adopted for the design of symmetric
random access wireless systems. Instead of the traditional collision model, a
more realistic physical layer model is considered. Based on this model, an
Incremental Redundancy Automatic Repeat reQuest (IR-ARQ) scheme, tailored to
jointly combat the effects of collisions, multi-path fading, and additive
noise, is developed. The Diversity-Multiplexing-Delay tradeoff (DMDT) of the
proposed scheme is analyzed for fully-loaded queues, and compared with that of
Gallager tree algorithm for collision resolution and the network-assisted
diversity multiple access (NDMA) protocol of Tsatsanis et al. The fully-loaded
queue model is then replaced by one with random arrivals, under which these
protocols are compared in terms of the stability region, average delay and
diversity gain. Overall, our analytical and numerical results establish the
superiority of the proposed IR-ARQ scheme and reveal some important insights.
For example, it turns out that the performance is optimized, for a given total
throughput, by maximizing the probability that a certain user sends a new
packet and minimizing the transmission rate employed by each user.
|
cs/0610105
|
How To Break Anonymity of the Netflix Prize Dataset
|
cs.CR cs.DB
|
We present a new class of statistical de-anonymization attacks against
high-dimensional micro-data, such as individual preferences, recommendations,
transaction records and so on. Our techniques are robust to perturbation in the
data and tolerate some mistakes in the adversary's background knowledge.
We apply our de-anonymization methodology to the Netflix Prize dataset, which
contains anonymous movie ratings of 500,000 subscribers of Netflix, the world's
largest online movie rental service. We demonstrate that an adversary who knows
only a little bit about an individual subscriber can easily identify this
subscriber's record in the dataset. Using the Internet Movie Database as the
source of background knowledge, we successfully identified the Netflix records
of known users, uncovering their apparent political preferences and other
potentially sensitive information.
|
cs/0610106
|
On the Error Exponents of ARQ Channels with Deadlines
|
cs.IT math.IT
|
We consider communication over Automatic Repeat reQuest (ARQ) memoryless
channels with deadlines. In particular, an upper bound L is imposed on the
maximum number of ARQ transmission rounds. In this setup, it is shown that
incremental redundancy ARQ outperforms Forney's memoryless decoding in terms of
the achievable error exponents.
|