| id | title | categories | abstract |
|---|---|---|---|
cs/0405050
|
Traffic Accident Analysis Using Decision Trees and Neural Networks
|
cs.AI
|
The costs of fatalities and injuries due to traffic accidents have a great
impact on society. This paper presents our research to model the severity of
injury resulting from traffic accidents using artificial neural networks and
decision trees. We have applied them to an actual data set obtained from the
National Automotive Sampling System (NASS) General Estimates System (GES).
Experiment results reveal that in all the cases the decision tree outperforms
the neural network. Our research analysis also shows that the three most
important factors in fatal injury are: driver's seat belt usage, light
condition of the roadway, and driver's alcohol usage.
|
cs/0405051
|
Short Term Load Forecasting Models in Czech Republic Using Soft
Computing Paradigms
|
cs.AI
|
This paper presents a comparative study of six soft computing models namely
multilayer perceptron networks, Elman recurrent neural network, radial basis
function network, Hopfield model, fuzzy inference system and hybrid fuzzy
neural network for the hourly electricity demand forecast of the Czech Republic.
The soft computing models were trained and tested using the actual hourly load
data for seven years. A comparison of the proposed techniques is presented for
predicting two-day-ahead electricity demand. Simulation results indicate
that hybrid fuzzy neural network and radial basis function networks are the
best candidates for the analysis and forecasting of electricity demand.
|
cs/0405052
|
Decision Support Systems Using Intelligent Paradigms
|
cs.AI
|
Decision-making is a process of choosing among alternative courses of action
for solving complicated problems where multi-criteria objectives are involved.
The past few years have witnessed a growing recognition of Soft Computing (SC)
technologies that underlie the conception, design and utilization of
intelligent systems. In this paper, we present different SC paradigms involving
an artificial neural network trained using the scaled conjugate gradient
algorithm, two different fuzzy inference methods optimised using neural network
learning/evolutionary algorithms and regression trees for developing
intelligent decision support systems. We demonstrate the efficiency of the
different algorithms by developing a decision support system for a Tactical Air
Combat Environment (TACE). Some empirical comparisons between the different
algorithms are also provided.
|
cs/0405054
|
The model of the tables in design documentation for operating with the
electronic catalogs and for specifications making in a CAD system
|
cs.CE cs.DS
|
The hierarchic block model of tables in design documentation, as part of a
CAD system, is described; it is intended for the automatic generation of
specifications for drawing elements using electronic catalogs. The model was
created for the needs of a CAD system for the reconstruction of industrial
plants, where the results of designing are drawings that include
specifications of different types. Adequate simulation of the specification
tables is ensured by the technique of storing in the drawing both the visible
geometric elements and an invisible parametric representation sufficient for
generating these elements.
|
cs/0405055
|
Modular technology of developing of the extensions of a CAD system.
Axonometric piping diagrams. Parametric representation
|
cs.CE cs.DS
|
The application of a modular technology for developing problem-oriented
extensions of a CAD system to the problem of automating the creation of
axonometric piping diagrams is described, using the program system TechnoCAD
GlassX as an example. A similarity in the composition of the schemas is
identified for special technological pipelines and for water supply, water
drain, heating, heat supply, ventilation and air conditioning systems. The
structured parametric representation of the schemas, including properties of
objects, their links, common settings, default settings and special
compatibility links, is reviewed.
|
cs/0405056
|
Modular technology of developing of the extensions of a CAD system. The
axonometric piping diagrams. Common and special operations
|
cs.CE cs.DS
|
The application of a modular technology for developing problem-oriented
extensions of a CAD system to the problem of automating the creation of
axonometric piping diagrams is described, using the program system TechnoCAD
GlassX as an example. The realization of common operations, and the
composition and realization of special operations for designing schemas of
special technological pipelines and of water supply, water drain, heating,
heat supply, ventilation and air conditioning systems, are reviewed.
|
cs/0405057
|
Mathematical and programming toolkit of the computer aided design of the
axonometric piping diagrams
|
cs.CE cs.DS
|
The problem of automating the design of axonometric piping diagrams
involves, at a minimum, manipulating flat schemas of three-dimensional
wireframe objects (of dimension 2.5). Specialized models and methodical and
mathematical approaches are required because of the large volume of
calculations. The coordinate systems, data types, common principles of
operating on the data, and the composition of the basic operations are
described as realized in TechnoCAD GlassX, a complex CAD system for the
reconstruction of plants.
|
cs/0405062
|
Efficiency Enhancement of Probabilistic Model Building Genetic
Algorithms
|
cs.NE
|
This paper presents two different efficiency-enhancement techniques for
probabilistic model building genetic algorithms. The first technique proposes
the use of a mutation operator which performs local search in the sub-solution
neighborhood identified through the probabilistic model. The second technique
proposes building and using an internal probabilistic model of the fitness
along with the probabilistic model of variable interactions. The fitness values
of some offspring are estimated using the probabilistic model, thereby avoiding
computationally expensive function evaluations. The scalability of the
aforementioned techniques is analyzed using facetwise models for convergence
time and population sizing. The speed-up obtained by each of the methods is
predicted and verified with empirical results. The results show that for
additively separable problems the competent mutation operator requires
O(k^{0.5} log m)--where k is the building-block size, and m is the number of
building blocks--fewer function evaluations than its selectorecombinative counterpart.
The results also show that the use of an internal probabilistic fitness model
reduces the required number of function evaluations to as low as 1-10% and
yields a speed-up of 2--50.
|
cs/0405063
|
Let's Get Ready to Rumble: Crossover Versus Mutation Head to Head
|
cs.NE
|
This paper analyzes the relative advantages between crossover and mutation on
a class of deterministic and stochastic additively separable problems. This
study assumes that the recombination and mutation operators have the knowledge
of the building blocks (BBs) and effectively exchange or search among competing
BBs. Facetwise models of convergence time and population sizing have been used
to determine the scalability of each algorithm. The analysis shows that for
additively separable deterministic problems, the BB-wise mutation is more
efficient than crossover, while the crossover outperforms the mutation on
additively separable problems perturbed with additive Gaussian noise. The
results show that the speed-up of using BB-wise mutation on deterministic
problems is O(k^{0.5} log m), where k is the BB size, and m is the number of
BBs. Likewise, the speed-up of using crossover on stochastic problems with
fixed noise variance is O(mk^{0.5} log m).
|
cs/0405064
|
Designing Competent Mutation Operators via Probabilistic Model Building
of Neighborhoods
|
cs.NE
|
This paper presents a competent selectomutative genetic algorithm (GA), that
adapts linkage and solves hard problems quickly, reliably, and accurately. A
probabilistic model building process is used to automatically identify key
building blocks (BBs) of the search problem. The mutation operator uses the
probabilistic model of linkage groups to find the best among competing building
blocks. The competent selectomutative GA successfully solves additively
separable problems of bounded difficulty, requiring only a subquadratic number
of function evaluations. The results show that for additively separable
problems the probabilistic model building BB-wise mutation scales as
O(2^k m^{1.5}), and requires O(k^{0.5} log m) fewer function evaluations than its selectorecombinative
counterpart, confirming theoretical results reported elsewhere (Sastry &
Goldberg, 2004).
|
cs/0405065
|
Efficiency Enhancement of Genetic Algorithms via Building-Block-Wise
Fitness Estimation
|
cs.NE
|
This paper studies fitness inheritance as an efficiency enhancement technique
for a class of competent genetic algorithms called estimation of distribution
algorithms. Probabilistic models of important sub-solutions are developed to
estimate the fitness of a proportion of individuals in the population, thereby
avoiding computationally expensive function evaluations. The effect of fitness
inheritance on the convergence time and population sizing are modeled and the
speed-up obtained through inheritance is predicted. The results show that a
fitness-inheritance mechanism which utilizes information on building-block
fitnesses provides significant efficiency enhancement. For additively separable
problems, fitness inheritance reduces the number of function evaluations to
about half and yields a speed-up of about 1.75--2.25.
|
cs/0405069
|
Mining Frequent Itemsets from Secondary Memory
|
cs.DB cs.IR
|
Mining frequent itemsets is at the core of mining association rules, and is
by now quite well understood algorithmically. However, most algorithms for
mining frequent itemsets assume that the main memory is large enough for the
data structures used in the mining, and very few efficient algorithms deal with
the case when the database is very large or the minimum support is very low.
Mining frequent itemsets from a very large database poses new challenges, as
astronomical amounts of raw data are ubiquitously being recorded in commerce,
science and government. In this paper, we discuss approaches to mining frequent
itemsets when data structures are too large to fit in main memory. Several
divide-and-conquer algorithms are given for mining from disks. Many novel
techniques are introduced. Experimental results show that the techniques reduce
the required disk accesses by orders of magnitude, and enable truly scalable
data mining.
|
cs/0405071
|
Regression with respect to sensing actions and partial states
|
cs.AI
|
In this paper, we present a state-based regression function for planning
domains where an agent does not have complete information and may have sensing
actions. We consider binary domains and employ the 0-approximation [Son & Baral
2001] to define the regression function. In binary domains, the use of
0-approximation means using 3-valued states. Although planning using this
approach is incomplete with respect to the full semantics, we adopt it to have
a lower complexity. We prove the soundness and completeness of our regression
formulation with respect to the definition of progression. More specifically,
we show that (i) a plan obtained through regression for a planning problem is
indeed a progression solution of that planning problem, and that (ii) for each
plan found through progression, using regression one obtains that plan or an
equivalent one. We then develop a conditional planner that utilizes our
regression function. We prove the soundness and completeness of our planning
algorithm and present experimental results with respect to several well known
planning problems in the literature.
|
cs/0405072
|
Grid Databases for Shared Image Analysis in the MammoGrid Project
|
cs.DB cs.DC
|
The MammoGrid project aims to prove that Grid infrastructures can be used for
collaborative clinical analysis of database-resident but geographically
distributed medical images. This requires: a) the provision of a
clinician-facing front-end workstation and b) the ability to service real-world
clinician queries across a distributed and federated database. The MammoGrid
project will prove the viability of the Grid by harnessing its power to enable
radiologists from geographically dispersed hospitals to share standardized
mammograms, to compare diagnoses (with and without computer aided detection of
tumours) and to perform sophisticated epidemiological studies across national
boundaries. This paper outlines the approach taken in MammoGrid to seamlessly
connect radiologist workstations across a Grid using an "information
infrastructure" and a DICOM-compliant object model residing in multiple
distributed data stores in Italy and the UK.
|
cs/0405074
|
MammoGrid: A Service Oriented Architecture based Medical Grid
Application
|
cs.DC cs.DB
|
The MammoGrid project has recently delivered its first proof-of-concept
prototype using a Service-Oriented Architecture (SOA)-based Grid application to
enable distributed computing spanning national borders. The underlying AliEn
Grid infrastructure has been selected because of its practicality and because
of its emergence as a potential open source standards-based solution for
managing and coordinating distributed resources. The resultant prototype is
expected to harness the use of huge amounts of medical image data to perform
epidemiological studies, advanced image processing, radiographic education and
ultimately, tele-diagnosis over communities of medical virtual organisations.
The MammoGrid prototype comprises a high-quality clinician visualization
workstation used for data acquisition and inspection, a DICOM-compliant
interface to a set of medical services (annotation, security, image analysis,
data storage and querying services) residing on a so-called Grid-box and secure
access to a network of other Grid-boxes connected through Grid middleware. This
paper outlines the MammoGrid approach in managing a federation of
Grid-connected mammography databases in the context of the recently delivered
prototype and will also describe the next phase of prototyping.
|
cs/0405076
|
An Abductive Framework For Computing Knowledge Base Updates
|
cs.DB
|
This paper introduces an abductive framework for updating knowledge bases
represented by extended disjunctive programs. We first provide a simple
transformation from abductive programs to update programs which are logic
programs specifying changes on abductive hypotheses. Then, extended abduction,
which was introduced by the same authors as a generalization of traditional
abduction, is computed by the answer sets of update programs. Next, different
types of updates, namely view updates and theory updates, are characterized by
abductive programs and computed by update programs. The task of consistency
restoration is also realized as a special case of these updates. Each update
problem is comparatively assessed from the computational complexity viewpoint.
The result of this paper provides a uniform framework for different types of
knowledge base updates, and each update is computed using existing procedures
of logic programming.
|
cs/0405087
|
A Grid Information Infrastructure for Medical Image Analysis
|
cs.DB cs.DC
|
The storage and manipulation of digital images and the analysis of the
information held in those images are essential requirements for next-generation
medical information systems. The medical community has been exploring
collaborative approaches for managing image data and exchanging knowledge, and
Grid technology [1] is a promising approach to enabling distributed analysis
across medical institutions and to developing new collaborative and
cooperative approaches to image analysis without the need for clinicians
to co-locate. The EU-funded MammoGrid project [2] is one example of this; it
aims to develop a Europe-wide database of mammograms to support effective
co-working between healthcare professionals across the EU. The MammoGrid
prototype comprises a high-quality clinician visualization workstation (for
data acquisition and inspection), a DICOM-compliant interface to a set of
medical services (annotation, security, image analysis, data storage and
querying services) residing on a so-called Grid-box and secure access to a
network of other Grid-boxes connected through Grid middleware. One of the main
deliverables of the project is a Grid-enabled infrastructure that manages
federated mammogram databases across Europe. This paper outlines the MammoGrid
Information Infrastructure (MII) for meta-data analysis and knowledge discovery
in the medical imaging domain.
|
cs/0405090
|
Propositional Defeasible Logic has Linear Complexity
|
cs.AI
|
Defeasible logic is a rule-based nonmonotonic logic, with both strict and
defeasible rules, and a priority relation on rules. We show that inference in
the propositional form of the logic can be performed in linear time. This
contrasts markedly with most other propositional nonmonotonic logics, in which
inference is intractable.
|
cs/0405093
|
Computerized Face Detection and Recognition
|
cs.CV
|
This publication presents methods for face detection, analysis and
recognition: a fast normalized cross-correlation (fast correlation
coefficient) method for face pre-detection based on multiple templates; a
method for detecting the exact face contour based on snakes and the
Generalized Gradient Vector Flow field; a method for combining recognition
algorithms based on Cumulative Match Characteristics in order to increase
recognition speed and accuracy; and a face recognition method based on
Principal Component Analysis of the Wavelet Packet Decomposition, which
allows a PCA-based recognition method to be used with a large number of
training images. Experimental results and comparisons of speed and accuracy
on large face databases are presented for all the methods.
|
cs/0405095
|
Blind Detection and Compensation of Camera Lens Geometric Distortions
|
cs.CV
|
This paper presents a blind detection and compensation technique for camera
lens geometric distortions. The lens distortion introduces higher-order
correlations in the frequency domain and in turn it can be detected using
higher-order spectral analysis tools without assuming any specific calibration
target. The existing blind lens distortion removal method only considered a
single-coefficient radial distortion model. In this paper, two coefficients are
considered to model approximately the geometric distortion. All the models
considered have analytical closed-form inverse formulae.
|
cs/0405098
|
A Logic for Reasoning about Evidence
|
cs.AI cs.LO
|
We introduce a logic for reasoning about evidence that essentially views
evidence as a function from prior beliefs (before making an observation) to
posterior beliefs (after making the observation). We provide a sound and
complete axiomatization for the logic, and consider the complexity of the
decision problem. Although the reasoning in the logic is mainly propositional,
we allow variables representing numbers and quantification over them. This
expressive power seems necessary to capture important properties of evidence.
|
cs/0405099
|
Web search engine based on DNS
|
cs.NI cs.IR
|
No current web search engine can cover more than 60 percent of all the pages
on the Internet, and the update interval of most page databases is almost one
month. This situation has not changed for many years. Coverage and recency
have become the bottleneck problems of current web search engines. To solve
these problems, a new system, a search engine based on DNS, is proposed in
this paper. This system adopts a hierarchical distributed architecture like
that of DNS, which is different from any current commercial search engine. In
theory, this system can cover all the web pages on the Internet, and its
update interval could even be one day. The original idea, detailed content
and implementation of this system are all introduced in this paper.
|
cs/0405104
|
Knowledge Reduction and Discovery based on Demarcation Information
|
cs.LG cs.DB cs.IT math.IT
|
Knowledge reduction, which includes attribute reduction and value reduction,
is an important topic in the rough set literature. It is also closely
relevant to other fields, such as machine learning and data mining. In this
paper, an algorithm called TWI-SQUEEZE is proposed. It can find a reduct,
i.e. an irreducible attribute subset, after two scans. Its soundness and
computational complexity are given, which show that it is the fastest
algorithm at present. A measure of variety is put forward, of which the
TWI-SQUEEZE algorithm can be regarded as an application. The author also
argues for the rightness of this measure as a measure of information, which
can make it a unified measure of "differentiation", a concept that appears in
the cognitive psychology literature. Value reduction is another important
aspect of knowledge reduction. It is interesting that, using the same
algorithm, we can execute a complete value reduction efficiently. A complete
knowledge reduction, which results in an irreducible table, can therefore be
accomplished after four scans of the table. The byproducts of the reduction
are two classifiers of different styles. In this paper, various cases and
models are discussed to prove the efficiency and effectiveness of the
algorithm. Some topics, such as how to integrate user preference to find a
locally optimal attribute subset, are also discussed.
|
cs/0405106
|
Pruning Search Space in Defeasible Argumentation
|
cs.AI
|
Defeasible argumentation has experienced a considerable growth in AI in the
last decade. Theoretical results have been combined with development of
practical applications in AI & Law, Case-Based Reasoning and various
knowledge-based systems. However, the dialectical process associated with
inference is computationally expensive. This paper focuses on speeding up this
inference process by pruning the involved search space. Our approach is
twofold. On one hand, we identify distinguished literals for computing defeat.
On the other hand, we restrict ourselves to a subset of all possible
conflicting arguments by introducing dialectical constraints.
|
cs/0405107
|
A Framework for Combining Defeasible Argumentation with Labeled
Deduction
|
cs.AI cs.SC
|
In the last years, there has been an increasing demand for a variety of
logical systems, prompted mostly by applications of logic in AI and other
related areas. Labeled Deductive Systems (LDS) were developed as a flexible
methodology for formalizing such complex logical systems. Defeasible
argumentation has proven to be a successful approach to formalizing commonsense
reasoning, encompassing many other alternative formalisms for defeasible
reasoning. Argument-based frameworks share some common notions (such as the
concept of argument, defeater, etc.) along with a number of particular features
which make it difficult to compare them with each other from a logical
viewpoint. This paper introduces LDSar, an LDS for defeasible argumentation in
which many important issues concerning defeasible argumentation are captured
within a unified logical framework. We also discuss some logical properties and
extensions that emerge from the proposed framework.
|
cs/0405113
|
A proposal to design expert system for the calculations in the domain of
QFT
|
cs.AI
|
The main purposes of the paper are the following: 1) to show examples of
calculations in the domain of QFT via the ``derivative rules'' of an expert
system; 2) to consider the advantages and disadvantages of that technology of
calculation; 3) to reflect on how one might develop new physical theories,
what knowledge would be useful in such investigations, and how this problem
can be connected with designing an expert system.
|
cs/0406001
|
Side-Information Coding with Turbo Codes and its Application to Quantum
Key Distribution
|
cs.IT cs.CR math.IT quant-ph
|
Turbo coding is a powerful class of forward error correcting codes, which can
achieve performances close to the Shannon limit. The turbo principle can be
applied to the problem of side-information source coding, and we investigate
here its application to the reconciliation problem occurring in a
continuous-variable quantum key distribution protocol.
|
cs/0406003
|
Algorithms for weighted multi-tape automata
|
cs.CL cs.DS
|
This report defines various operations for weighted multi-tape automata
(WMTAs) and describes algorithms that have been implemented for those
operations in the WFSC toolkit. Some algorithms are new, others are known or
similar to known algorithms. The latter will be recalled to make this report
more complete and self-standing. We present a new approach to multi-tape
intersection, meaning the intersection of a number of tapes of one WMTA with
the same number of tapes of another WMTA. In our approach, multi-tape
intersection is not considered as an atomic operation but rather as a sequence
of more elementary ones, which facilitates its implementation. We show an
example of multi-tape intersection, actually transducer intersection, that can
be compiled with our approach but not with several other methods that we
analysed. To show the practical relevance of our work, we include an example of
application: the preservation of intermediate results in transduction cascades.
|
cs/0406004
|
Application of Business Intelligence In Banks (Pakistan)
|
cs.DB
|
The financial services industry is rapidly changing. Factors such as
globalization, deregulation, mergers and acquisitions, competition from
non-financial institutions, and technological innovation, have forced companies
to re-think their business. Many large companies have been using Business
Intelligence (BI) computer software for some years to help them gain
competitive advantage. With the introduction of cheaper and more generalized
products to the market place BI is now in the reach of smaller and medium sized
companies. Business Intelligence is also known as knowledge management,
management information systems (MIS), Executive information systems (EIS) and
On-line analytical Processing (OLAP).
|
cs/0406007
|
Parallel Mixed Bayesian Optimization Algorithm: A Scaleup Analysis
|
cs.NE cs.DC
|
Estimation of Distribution Algorithms have been proposed as a new paradigm
for evolutionary optimization. This paper focuses on the parallelization of
Estimation of Distribution Algorithms. More specifically, the paper discusses
how to predict performance of parallel Mixed Bayesian Optimization Algorithm
(MBOA) that is based on parallel construction of Bayesian networks with
decision trees. We determine the time complexity of parallel Mixed Bayesian
Optimization Algorithm and compare this complexity with experimental results
obtained by solving the spin glass optimization problem. The empirical results
fit the theoretical time complexity well, so the scalability and efficiency of
parallel Mixed Bayesian Optimization Algorithm for unknown instances of spin
glass benchmarks can be predicted. Furthermore, we derive the guidelines that
can be used to design effective parallel Estimation of Distribution Algorithms
with the speedup proportional to the number of variables in the problem.
|
cs/0406008
|
Image compression by rectangular wavelet transform
|
cs.CV
|
We study image compression by a separable wavelet basis
$\big\{\psi(2^{k_1}x-i)\psi(2^{k_2}y-j),$ $\phi(x-i)\psi(2^{k_2}y-j),$
$\psi(2^{k_1}x-i)\phi(y-j),$ $\phi(x-i)\phi(y-j)\big\},$ where $k_1, k_2 \in
\mathbb{Z}_+$; $i,j\in\mathbb{Z}$; and $\phi,\psi$ are elements of a standard
biorthogonal wavelet basis in $L_2(\mathbb{R})$. Because $k_1$ may differ from
$k_2$, the supports of the basis elements are rectangles, and the
corresponding transform is known as the {\em rectangular wavelet transform}.
We prove that if the one-dimensional wavelet basis has $M$ dual vanishing
moments then the rate of approximation by $N$ coefficients of the rectangular
wavelet transform is $\mathcal{O}(N^{-M}\log^C N)$ for functions with a mixed
derivative of order $M$ in each direction.
  The square wavelet transform yields an approximation rate of
$\mathcal{O}(N^{-M/2})$ for functions with all derivatives of total order
$M$. Thus, the rectangular wavelet transform can outperform the square one if
an image has a mixed derivative. We provide an experimental comparison of
image compression which shows that the rectangular wavelet transform
outperforms the square one.
|
cs/0406011
|
Blind Construction of Optimal Nonlinear Recursive Predictors for
Discrete Sequences
|
cs.LG math.ST nlin.CD physics.data-an stat.TH
|
We present a new method for nonlinear prediction of discrete random sequences
under minimal structural assumptions. We give a mathematical construction for
optimal predictors of such processes, in the form of hidden Markov models. We
then describe an algorithm, CSSR (Causal-State Splitting Reconstruction), which
approximates the ideal predictor from data. We discuss the reliability of CSSR,
its data requirements, and its performance in simulations. Finally, we compare
our approach to existing methods using variable-length Markov models and
cross-validated hidden Markov models, and show theoretically and experimentally
that our method delivers results superior to the former and at least comparable
to the latter.
|
cs/0406015
|
Zipf's law and the creation of musical context
|
cs.CL cond-mat.stat-mech
|
This article discusses the extension of the notion of context from
linguistics to the domain of music. In language, the statistical regularity
known as Zipf's law (which concerns the frequency of usage of different words)
has been quantitatively related to the process of text generation. This
connection is established by Simon's model, on the basis of a few assumptions
regarding the accompanying creation of context. Here, it is shown that the
statistics of note usage in musical compositions are compatible with the
predictions of Simon's model. This result, which gives objective support to the
conceptual likeness of context in language and music, is obtained through
automatic analysis of the digital versions of several compositions. As a
by-product, a quantitative measure of context definiteness is introduced and
used to compare tonal and atonal works.
|
cs/0406016
|
Schema-based Scheduling of Event Processors and Buffer Minimization for
Queries on Structured Data Streams
|
cs.DB
|
We introduce an extension of the XQuery language, FluX, that supports
event-based query processing and the conscious handling of main memory buffers.
Purely event-based queries of this language can be executed on streaming XML
data in a very direct way. We then develop an algorithm that efficiently
rewrites XQueries into the event-based FluX language. This algorithm
uses order constraints from a DTD to schedule event handlers and to thus
minimize the amount of buffering required for evaluating a query. We discuss
the various technical aspects of query optimization and query evaluation within
our framework. This is complemented with an experimental evaluation of our
approach.
|
cs/0406017
|
Using Self-Organising Mappings to Learn the Structure of Data Manifolds
|
cs.NE cs.CV
|
In this paper it is shown how to map a data manifold into a simpler form by
progressively discarding small degrees of freedom. This is the key to
self-organising data fusion, where the raw data is embedded in a very
high-dimensional space (e.g. the pixel values of one or more images), and the
requirement is to isolate the important degrees of freedom which lie on a
low-dimensional manifold. A useful advantage of the approach used in this paper
is that the computations are arranged as a feed-forward processing chain, where
all the details of the processing in each stage of the chain are learnt by
self-organisation. This approach is demonstrated using hierarchically
correlated data, which causes the processing chain to split the data into
separate processing channels, and then to progressively merge these channels
wherever they are correlated with each other.
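The self-organisation in each stage can be illustrated with a minimal one-dimensional self-organising map that pulls a chain of units onto a curved data manifold. This is a generic SOM sketch, not the paper's full feed-forward processing chain; the data, chain length, and learning schedule below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data on a 1-D manifold embedded in 2-D: a noisy arc.
t = rng.uniform(0, np.pi, 200)
data = np.stack([np.cos(t), np.sin(t)], axis=1) + 0.02 * rng.normal(size=(200, 2))

n_units = 10
weights = rng.normal(scale=0.1, size=(n_units, 2))  # 1-D chain of SOM units

def train(weights, data, epochs=20, lr=0.5, sigma=2.0):
    for epoch in range(epochs):
        for x in rng.permutation(data):
            # Best-matching unit for this sample.
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            # Gaussian neighbourhood on the chain index, shrinking over time.
            d = np.arange(n_units) - bmu
            h = np.exp(-d**2 / (2 * (sigma * (1 - epoch / epochs) + 0.1) ** 2))
            weights += lr * (1 - epoch / epochs) * h[:, None] * (x - weights)
    return weights

weights = train(weights, data)
# After training, every data point should lie close to some chain unit.
quantization_error = np.mean(
    [np.min(np.linalg.norm(weights - x, axis=1)) for x in data]
)
```

The chain of units ends up tracing the arc, which is the sense in which a small number of degrees of freedom (position along the chain) captures the manifold.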
|
cs/0406021
|
A direct formulation for sparse PCA using semidefinite programming
|
cs.CE
|
We examine the problem of approximating, in the Frobenius-norm sense, a
positive, semidefinite symmetric matrix by a rank-one matrix, with an upper
bound on the cardinality of its eigenvector. The problem arises in the
decomposition of a covariance matrix into sparse factors, and has wide
applications ranging from biology to finance. We use a modification of the
classical variational representation of the largest eigenvalue of a symmetric
matrix, where cardinality is constrained, and derive a semidefinite programming
based relaxation for our problem. We also discuss Nesterov's smooth
minimization technique applied to the SDP arising in the direct sparse PCA
method.
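A simpler cousin of the SDP relaxation is truncated power iteration on the cardinality-constrained variational problem max x'Ax subject to ||x|| = 1 and card(x) <= k. The sketch below, with an invented covariance matrix containing one sparse dominant factor, illustrates the sparse-factor decomposition; it is a heuristic stand-in, not the paper's semidefinite method.

```python
import numpy as np

# Hypothetical covariance matrix with one sparse dominant factor.
n = 20
v = np.zeros(n)
v[:4] = 0.5                      # sparse ground-truth factor (4 nonzeros)
A = 5.0 * np.outer(v, v) + 0.1 * np.eye(n)

def sparse_pc(A, k, iters=100):
    """Leading eigenvector with at most k nonzeros, via truncated power
    iteration: multiply, keep the k largest entries, renormalise."""
    x = np.ones(A.shape[0]) / np.sqrt(A.shape[0])
    for _ in range(iters):
        x = A @ x
        x[np.argsort(np.abs(x))[:-k]] = 0.0   # zero all but k largest entries
        x /= np.linalg.norm(x)
    return x

x = sparse_pc(A, k=4)
explained = x @ A @ x            # variance captured under the cardinality bound
```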
|
cs/0406025
|
Directional Consistency for Continuous Numerical Constraints
|
cs.AI cs.MS
|
Bounds consistency is usually enforced on continuous constraints by first
decomposing them into binary and ternary primitives. This decomposition has
long been shown to drastically slow down the computation of solutions. To
tackle this, Benhamou et al. have introduced an algorithm that avoids formally
decomposing constraints. Its better efficiency compared to the former method
has already been experimentally demonstrated. It is shown here that their
algorithm implements a strategy to enforce on a continuous constraint a
consistency akin to Directional Bounds Consistency as introduced by Dechter and
Pearl for discrete problems. The algorithm is analyzed in this framework, and
compared with algorithms that enforce bounds consistency. These theoretical
results are eventually contrasted with new experimental results on standard
benchmarks from the interval constraint community.
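For intuition, bounds consistency on a single ternary primitive x + y = z amounts to interval narrowing iterated to a fixpoint. The toy sketch below (not Benhamou et al.'s algorithm, and with made-up intervals) shows the mechanism that the decomposition approach applies primitive by primitive:

```python
def narrow_sum(x, y, z):
    """One narrowing pass for the ternary primitive x + y = z,
    where each variable is an interval given as a (lo, hi) tuple."""
    z = (max(z[0], x[0] + y[0]), min(z[1], x[1] + y[1]))
    x = (max(x[0], z[0] - y[1]), min(x[1], z[1] - y[0]))
    y = (max(y[0], z[0] - x[1]), min(y[1], z[1] - x[0]))
    return x, y, z

def fixpoint(x, y, z):
    """Iterate narrowing until the bounds stop moving."""
    while True:
        nx, ny, nz = narrow_sum(x, y, z)
        if (nx, ny, nz) == (x, y, z):
            return x, y, z
        x, y, z = nx, ny, nz

# x, y in [0, 10]; constraint x + y = z with z known to lie in [8, 9].
x, y, z = fixpoint((0, 10), (0, 10), (8, 9))
```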
|
cs/0406029
|
Subset Queries in Relational Databases
|
cs.DB
|
In this paper, we motivate the need for relational database systems to
support subset query processing. We define new operators in relational
algebra, and new constructs in SQL, for expressing subset queries. We also
illustrate the applicability of subset queries through different examples
expressed using extended SQL statements and relational algebra expressions. Our
aim is to show the utility of subset queries for next-generation applications.
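A classical subset query is relational division, e.g. "which students are enrolled in every required course". Over toy data (invented here, not from the paper) the semantics is:

```python
# Subset (division-style) query: students whose set of courses contains
# the whole required set.
enrolled = {
    ("alice", "db"), ("alice", "ai"), ("alice", "os"),
    ("bob", "db"), ("bob", "os"),
}
required = {"db", "ai"}

students = {s for s, _ in enrolled}
covers_all = {s for s in students
              if required <= {c for s2, c in enrolled if s2 == s}}
```

The proposed SQL constructs would express exactly this set-containment condition declaratively instead of via the comprehension above.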
|
cs/0406031
|
A Public Reference Implementation of the RAP Anaphora Resolution
Algorithm
|
cs.CL
|
This paper describes a standalone, publicly-available implementation of the
Resolution of Anaphora Procedure (RAP) given by Lappin and Leass (1994). The
RAP algorithm resolves third person pronouns, lexical anaphors, and identifies
pleonastic pronouns. Our implementation, JavaRAP, fills a current need in
anaphora resolution research by providing a reference implementation that can
be benchmarked against current algorithms. The implementation uses the
standard, publicly available Charniak (2000) parser as input, and generates a
list of anaphora-antecedent pairs as output. Alternately, an in-place
annotation or substitution of the anaphors with their antecedents can be
produced. Evaluation on the MUC-6 co-reference task shows that JavaRAP has an
accuracy of 57.9%, similar to the performance given previously in the
literature (e.g., Preiss 2002).
|
cs/0406032
|
A Dynamic Clustering-Based Markov Model for Web Usage Mining
|
cs.IR cs.AI
|
Markov models have been widely utilized for modelling user web navigation
behaviour. In this work we propose a dynamic clustering-based method to
increase a Markov model's accuracy in representing a collection of user web
navigation sessions. The method makes use of the state cloning concept to
duplicate states in a way that separates in-links whose corresponding
second-order probabilities diverge. In addition, the new method incorporates a
clustering technique which determines an efficient way to assign in-links with
similar second-order probabilities to the same clone. We report on experiments
conducted with both real and random data and we provide a comparison with the
N-gram Markov concept. The results show that the number of additional states
induced by the dynamic clustering method can be controlled through a threshold
parameter, and suggest that the method's running time is linear in the size of
the model.
|
cs/0406038
|
A New Approach to Draw Detection by Move Repetition in Computer Chess
Programming
|
cs.AI
|
We will try to tackle both the theoretical and practical aspects of a very
important problem in chess programming as stated in the title of this article -
the issue of draw detection by move repetition. The standard approach that has
so far been employed in most chess programs is based on utilising positional
matrices in original and compressed format as well as on the implementation of
the so-called bitboard format.
The new approach that we will be trying to introduce is based on using
variant strings generated by the search algorithm (searcher) during the tree
expansion in decision making. We hope to prove that this approach is more
efficient than the standard treatment of the issue, especially in positions
with few pieces (endgames). To illustrate what we have in mind a machine
language routine that implements our theoretical assumptions is attached. The
routine is part of the Axon chess program, developed by the authors. Axon, in
its current incarnation, plays chess at master strength (ca. 2400-2450 Elo,
based on both Axon vs computer programs and Axon vs human masters in over 3000
games altogether).
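The core of the variant-string approach fits in a few lines: keep the hashes of the positions along the current search path and test the newest position for recurrence, instead of scanning stored position matrices. This is a schematic illustration with invented hash values; the authors' actual routine is in machine language.

```python
def is_draw_by_repetition(variant_hashes, times=3):
    """variant_hashes: position hashes from the root of the search to the
    current node, in move order. Returns True when the latest position has
    occurred `times` times on this path (draw by repetition)."""
    return variant_hashes.count(variant_hashes[-1]) >= times

# Hypothetical position hashes along one variant: the position 101 recurs
# three times, so the line is a draw.
path = [101, 202, 101, 202, 101]
```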
|
cs/0406039
|
Long Nonbinary Codes Exceeding the Gilbert - Varshamov Bound for any
Fixed Distance
|
cs.IT math.IT
|
Let A(q,n,d) denote the maximum size of a q-ary code of length n and distance
d. We study the minimum asymptotic redundancy \rho(q,n,d)=n-log_q A(q,n,d) as n
grows while q and d are fixed. For any d and q<=d-1, long algebraic codes are
designed that improve on the BCH codes and have the lowest asymptotic
redundancy \rho(q,n,d) <= ((d-3)+1/(d-2)) log_q n known to date. Prior to this
work, codes of fixed distance that asymptotically surpass BCH codes and the
Gilbert-Varshamov bound were designed only for distances 4,5 and 6.
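For reference, the Gilbert-Varshamov lower bound on A(q, n, d), and the redundancy rho it implies, can be computed directly. This is the standard textbook bound, included only to make the quantities in the abstract concrete:

```python
from math import comb, log

def gv_lower_bound(q, n, d):
    """Gilbert-Varshamov bound: A(q, n, d) >= q**n / V_q(n, d-1), where
    V_q(n, r) is the volume of a q-ary Hamming ball of radius r.
    Taking the floor gives a slightly weaker but still valid bound."""
    ball = sum(comb(n, i) * (q - 1) ** i for i in range(d))
    return q ** n // ball

def gv_redundancy(q, n, d):
    """Redundancy n - log_q A guaranteed by the GV bound."""
    return n - log(gv_lower_bound(q, n, d)) / log(q)

# For q=2, n=7, d=3 the GV bound only guarantees 4 codewords, while the
# [7,4] Hamming code actually achieves 16 -- algebraic constructions beat GV.
size = gv_lower_bound(2, 7, 3)
```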
|
cs/0406042
|
Business Process Measures
|
cs.CE cs.PF
|
The paper proposes a new methodology for defining business process measures
and their computation. The approach is based on metamodeling according to MOF.
Especially, a metamodel providing precise definitions of typical process
measures for UML activity diagram-like notation is proposed, including precise
definitions how measures should be aggregated for composite process elements.
The proposed approach allows measures of interest to business to be defined,
and the corresponding data to be collected, in a natural way and without deep
investigation into specific technical solutions. This provides new
possibilities for business process measurement, decreasing the gap between
technical solutions and asset management methodologies.
|
cs/0406043
|
The Computational Complexity of Orientation Search Problems in
Cryo-Electron Microscopy
|
cs.DS cs.CG cs.CV
|
In this report we study the problem of determining three-dimensional
orientations for noisy projections of randomly oriented identical particles.
The problem is of central importance in the tomographic reconstruction of the
density map of macromolecular complexes from electron microscope images and it
has been studied intensively for more than 30 years.
We analyze the computational complexity of the orientation problem and show
that while several variants of the problem are $NP$-hard, inapproximable and
fixed-parameter intractable, some restrictions are polynomial-time approximable
within a constant factor or even solvable in logarithmic space. The orientation
search problem is formalized as a constrained line arrangement problem that is
of independent interest. The negative complexity results give a partial
justification for the heuristic methods used in orientation search, and the
positive complexity results on orientation search also have implications for
the problem of finding functionally analogous genes.
A preliminary version ``The Computational Complexity of Orientation Search in
Cryo-Electron Microscopy'' appeared in Proc. ICCS 2004, LNCS 3036, pp.
231--238. Springer-Verlag 2004.
|
cs/0406047
|
Self-organizing neural networks in classification and image recognition
|
cs.CV cs.AI
|
Self-organizing neural networks are used for brick finding in the OPERA
experiment. Self-organizing neural networks and wavelet analysis are also used
for recognition and extraction of car numbers from images.
|
cs/0406048
|
On Expanders Graphs: Parameters and Applications
|
cs.IT math.IT
|
We give a new lower bound on the expansion coefficient of an edge-vertex
graph of a $d$-regular graph. As a consequence, we obtain an improvement on the
lower bound on relative minimum distance of the expander codes constructed by
Sipser and Spielman. We also derive some improved results on the vertex
expansion of graphs that help us in improving the parameters of the expander
codes of Alon, Bruck, Naor, Naor, and Roth.
|
cs/0406050
|
Finite-Length Scaling for Iteratively Decoded LDPC Ensembles
|
cs.IT cond-mat.dis-nn cs.DM math.IT
|
In this paper we investigate the behavior of iteratively decoded low-density
parity-check codes over the binary erasure channel in the so-called ``waterfall
region." We show that the performance curves in this region follow a very basic
scaling law. We conjecture that essentially the same scaling behavior applies
in a much more general setting and we provide some empirical evidence to
support this conjecture. The scaling law, together with the error floor
expressions developed previously, can be used for fast finite-length
optimization.
|
cs/0406054
|
Building a linguistic corpus from bee dance data
|
cs.CL
|
This paper discusses the problems and the possibility of collecting bee dance
data in a linguistic \textit{corpus} and of using linguistic instruments such
as Zipf's law and entropy statistics to decide whether the dance carries
information of any kind. We describe this against the historical background of
attempts to analyse non-human communication systems.
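The instruments meant here can be sketched directly: given a sequence of discretised dance symbols (invented below; not real bee data), compute the Zipf rank-frequency profile and the Shannon entropy of the symbol distribution.

```python
from collections import Counter
from math import log2

def zipf_and_entropy(symbols):
    counts = Counter(symbols)
    n = len(symbols)
    # Rank-frequency list for a Zipf plot (rank 1 = most frequent symbol).
    ranked = sorted(counts.values(), reverse=True)
    # Shannon entropy of the empirical symbol distribution, in bits.
    H = -sum(c / n * log2(c / n) for c in counts.values())
    return ranked, H

# Hypothetical discretised dance symbols.
seq = list("ababcabadabab")
ranked, H = zipf_and_entropy(seq)
```

A roughly power-law rank-frequency profile and an entropy well below the uniform maximum are the kinds of signals such corpus statistics would look for.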
|
cs/0406055
|
Web Services: A Process Algebra Approach
|
cs.AI cs.DB
|
It is now well-admitted that formal methods are helpful for many issues
raised in the Web service area. In this paper we present a framework for the
design and verification of WSs using process algebras and their tools. We
define a two-way mapping between abstract specifications written using these
calculi and executable Web services written in BPEL4WS. Several choices are
available: design and correct errors in BPEL4WS, using process algebra
verification tools, or design and correct in process algebra and automatically
obtaining the corresponding BPEL4WS code. The approaches can be combined.
Process algebras are useful not only for temporal logic verification: we also
note the use of simulation/bisimulation both for verification and for the
hierarchical refinement design method. It is worth noting that our approach
allows the use of any process algebra depending on the needs of the user at
different levels (expressiveness, existence of reasoning tools, user
expertise).
|
cs/0406056
|
P=NP
|
cs.CC cs.AI
|
We claim to resolve the P=?NP problem via a formal argument for P=NP.
|
cs/0406058
|
Proofs of Zero Knowledge
|
cs.CR cs.DB
|
We present a protocol for verification of ``no such entry'' replies from
databases. We introduce a new cryptographic primitive as the underlying
structure, the keyed hash tree, which is an extension of Merkle's hash tree. We
compare our scheme to Buldas et al.'s Undeniable Attesters and Micali et al.'s
Zero Knowledge Sets.
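The keyed hash tree extends Merkle's hash tree. For orientation, the classical unkeyed construction that it builds on can be sketched as follows (the keying itself is the paper's contribution and is not reproduced here):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root of a plain Merkle hash tree: hash the leaves, then repeatedly
    hash adjacent pairs until a single digest remains."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:            # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

root = merkle_root([b"alice", b"bob", b"carol", b"dave"])
```

A database commits to `root`; any change to a leaf changes the root, which is what makes replies (including "no such entry", with the keyed extension) verifiable.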
|
cs/0406060
|
Well-Definedness and Semantic Type-Checking in the Nested Relational
Calculus and XQuery
|
cs.DB cs.PL
|
Two natural decision problems regarding the XML query language XQuery are
well-definedness and semantic type-checking. We study these problems in the
setting of a relational fragment of XQuery. We show that well-definedness and
semantic type-checking are undecidable, even in the positive-existential case.
Nevertheless, for a ``pure'' variant of XQuery, in which no identification is
made between an item and the singleton containing that item, the problems
become decidable. We also consider the analogous problems in the setting of the
nested relational calculus.
|
cs/0407002
|
Annotating Predicate-Argument Structure for a Parallel Treebank
|
cs.CL
|
We report on a recently initiated project which aims at building a
multi-layered parallel treebank of English and German. Particular attention is
devoted to a dedicated predicate-argument layer which is used for aligning
translationally equivalent sentences of the two languages. We describe both our
conceptual decisions and aspects of their technical realisation. We discuss
some selected problems and conclude with a few remarks on how this project
relates to similar projects in the field.
|
cs/0407004
|
Zero-error communication over networks
|
cs.IT cs.CR math.IT
|
Zero-error communication investigates communication without any error. By
defining channels without probabilities, results from Elias can be used to
completely characterize which channels can simulate which others. We
introduce the ambiguity of a channel, which completely characterizes the
possibility in principle of a channel to simulate any other channel. In the
second part we will look at networks of players connected by channels, while
some players may be corrupted. We will show how the ambiguity of a virtual
channel connecting two arbitrary players can be calculated. This means that we
can exactly specify what kind of zero-error communication is possible between
two players in any network of players connected by channels.
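The probability-free channel model can be made concrete: each input maps to a set of possible outputs, two inputs are confusable when their output sets intersect, and a one-shot zero-error code is a set of pairwise non-confusable inputs. The brute-force sketch below uses Shannon's pentagon channel; the paper's ambiguity notion is a finer invariant than this one-shot quantity.

```python
from itertools import combinations

# A channel without probabilities: each input maps to the set of outputs it
# may produce. Two inputs are confusable iff their output sets intersect.
# This instance is Shannon's pentagon channel (confusability graph C5).
channel = {"a": {1, 2}, "b": {2, 3}, "c": {3, 4}, "d": {4, 5}, "e": {5, 1}}

def zero_error_inputs(channel):
    """Largest set of pairwise non-confusable inputs, i.e. a maximum
    one-shot zero-error code, found by brute force."""
    inputs = list(channel)
    for r in range(len(inputs), 0, -1):
        for subset in combinations(inputs, r):
            if all(channel[x].isdisjoint(channel[y])
                   for x, y in combinations(subset, 2)):
                return set(subset)
    return set()

code = zero_error_inputs(channel)
```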
|
cs/0407005
|
Statistical Machine Translation by Generalized Parsing
|
cs.CL
|
Designers of statistical machine translation (SMT) systems have begun to
employ tree-structured translation models. Systems involving tree-structured
translation models tend to be complex. This article aims to reduce the
conceptual complexity of such systems, in order to make them easier to design,
implement, debug, use, study, understand, explain, modify, and improve. In
service of this goal, the article extends the theory of semiring parsing to
arrive at a novel abstract parsing algorithm with five functional parameters: a
logic, a grammar, a semiring, a search strategy, and a termination condition.
The article then shows that all the common algorithms that revolve around
tree-structured translation models, including hierarchical alignment, inference
for parameter estimation, translation, and structured evaluation, can be
derived by generalizing two of these parameters -- the grammar and the logic.
The article culminates with a recipe for using such generalized parsers to
train, apply, and evaluate an SMT system that is driven by tree-structured
translation models.
|
cs/0407007
|
The semijoin algebra and the guarded fragment
|
cs.DB cs.LO
|
The semijoin algebra is the variant of the relational algebra obtained by
replacing the join operator by the semijoin operator. We discuss some
interesting connections between the semijoin algebra and the guarded fragment
of first-order logic. We also provide an Ehrenfeucht-Fraisse game,
characterizing the discerning power of the semijoin algebra. This game gives a
method for showing that certain queries are not expressible in the semijoin
algebra.
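For readers unfamiliar with the operator: the semijoin R ⋉ S keeps exactly the R-tuples that join with some S-tuple, without importing S's attributes. Over toy relations (invented data):

```python
# Relations as lists of dicts; the semijoin filters R on join partners in S.
R = [{"emp": "ann", "dept": 1}, {"emp": "bob", "dept": 2},
     {"emp": "eve", "dept": 3}]
S = [{"dept": 1, "city": "oslo"}, {"dept": 3, "city": "bern"}]

def semijoin(R, S, attr):
    keys = {s[attr] for s in S}
    return [r for r in R if r[attr] in keys]

result = semijoin(R, S, "dept")   # ann and eve work in departments listed in S
```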
|
cs/0407008
|
Autogenic Training With Natural Language Processing Modules: A Recent
Tool For Certain Neuro Cognitive Studies
|
cs.AI
|
Learning to respond to voice-text input involves the subject's ability in
understanding the phonetic and text based contents and his/her ability to
communicate based on his/her experience. The neuro-cognitive facility of the
subject has to support two important domains in order to make the learning
process complete. In many cases, though the understanding is complete, the
response is partial. This is one reason why the information obtained from the
subject needs to be supported with scalable techniques such as Natural Language
Processing (NLP) for abstracting the contents from the output. This paper
explores the feasibility of using NLP modules interlaced with Neural Networks
to perform the required task in autogenic training related to medical
applications.
|
cs/0407009
|
Search Using N-gram Technique Based Statistical Analysis for Knowledge
Extraction in Case Based Reasoning Systems
|
cs.AI cs.IR
|
Searching techniques for Case Based Reasoning systems involve extensive
methods of elimination. In this paper, we look at a new method of arriving at
the right solution by performing a series of transformations upon the data.
These involve N-gram based comparison and deduction of the input data with the
case data, using Morphemes and Phonemes as the deciding parameters. A similar
technique for eliminating possible errors using a noise removal function is
performed. The error tracking and elimination is performed through a
statistical analysis of obtained data, where the entire data set is analyzed as
sub-categories of various etymological derivatives. A probability analysis for
the closest match is then performed, which yields the final expression. This
final expression is referred to the Case Base. The output is redirected through
an Expert System based on best possible match. The threshold for the match is
customizable, and could be set by the Knowledge-Architect.
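A generic stand-in for the N-gram comparison step can be sketched with character bigrams and the Dice coefficient; the paper's actual parameters are morphemes and phonemes, and the case data below is invented.

```python
def ngrams(s, n=2):
    """Character n-grams of a string."""
    return [s[i:i + n] for i in range(len(s) - n + 1)]

def dice(a, b, n=2):
    """Dice coefficient over character n-grams; 1.0 for identical strings."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga and not gb:
        return 1.0
    overlap = sum(min(ga.count(g), gb.count(g)) for g in set(ga))
    return 2 * overlap / (len(ga) + len(gb))

def best_case(query, case_base):
    """Retrieve the stored case most similar to a (possibly noisy) query."""
    return max(case_base, key=lambda c: dice(query, c))

# A misspelt query still retrieves the right case by n-gram overlap.
match = best_case("accomodation",
                  ["accommodation", "acclamation", "association"])
```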
|
cs/0407010
|
Improved error bounds for the erasure/list scheme: the binary and
spherical cases
|
cs.IT math.IT
|
We derive improved bounds on the error and erasure rate for spherical codes
and for binary linear codes under Forney's erasure/list decoding scheme and
prove some related results.
|
cs/0407011
|
Distance distribution of binary codes and the error probability of
decoding
|
cs.IT math.IT
|
We address the problem of bounding below the probability of error under
maximum likelihood decoding of a binary code with a known distance distribution
used on a binary symmetric channel. An improved upper bound is given for the
maximum attainable exponent of this probability (the reliability function of
the channel). In particular, we prove that the ``random coding exponent'' is
the true value of the channel reliability for code rate $R$ in some interval
immediately below the critical rate of the channel. An analogous result is
obtained for the Gaussian channel.
|
cs/0407016
|
Learning for Adaptive Real-time Search
|
cs.AI cs.LG
|
Real-time heuristic search is a popular model of acting and learning in
intelligent autonomous agents. Learning real-time search agents improve their
performance over time by acquiring and refining a value function guiding the
application of their actions. As computing the perfect value function is
typically intractable, a heuristic approximation is acquired instead. Most
studies of learning in real-time search (and reinforcement learning) assume
that a simple value-function-greedy policy is used to select actions. This is
in contrast to practice, where high-performance is usually attained by
interleaving planning and acting via a lookahead search of a non-trivial depth.
In this paper, we take a step toward bridging this gap and propose a novel
algorithm that (i) learns a heuristic function to be used specifically with a
lookahead-based policy, (ii) selects the lookahead depth adaptively in each
state, (iii) gives the user control over the trade-off between exploration and
exploitation. We extensively evaluate the algorithm in the sliding tile puzzle
testbed comparing it to the classical LRTA* and the more recent weighted LRTA*,
bounded LRTA*, and FALCONS. Improvements of 5- to 30-fold in convergence speed
are observed.
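The classical LRTA* baseline against which the proposed algorithm is compared fits in a few lines. On a toy graph (invented here), repeated trials converge to the optimal path as heuristic values are learned:

```python
# Hypothetical weighted graph; "G" is the goal. Optimal route: S -> B -> G.
graph = {
    "S": {"A": 1, "B": 1}, "A": {"S": 1, "G": 4},
    "B": {"S": 1, "G": 1}, "G": {},
}
h = {s: 0 for s in graph}        # zero-initialised (admissible) heuristic

def lrta_star_trial(start="S", goal="G", max_steps=50):
    """One LRTA* trial: act greedily on c(s, s') + h(s'), raising h(s)
    to the best neighbour's value. Returns the number of steps taken."""
    s, steps = start, 0
    while s != goal and steps < max_steps:
        f = {n: c + h[n] for n, c in graph[s].items()}
        nxt = min(f, key=f.get)
        h[s] = max(h[s], f[nxt])   # learning update
        s = nxt
        steps += 1
    return steps

trials = [lrta_star_trial() for _ in range(10)]   # path length per trial
```

Early trials wander (the zero heuristic lures the agent toward the expensive A-route) and later trials take the optimal two-step path; the paper's algorithm generalises this greedy policy with adaptive lookahead.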
|
cs/0407021
|
Multi-agent coordination using nearest neighbor rules: revisiting the
Vicsek model
|
cs.MA cs.AI
|
Recently, Jadbabaie, Lin, and Morse (IEEE TAC, 48(6)2003:988-1001) offered a
mathematical analysis of the discrete time model of groups of mobile autonomous
agents proposed by Vicsek et al. in 1995. In their paper, Jadbabaie et al.
showed that all agents eventually move with the same heading, provided that
the agents are periodically linked together. This paper sharpens that result
by showing that coordination is reached under a much weaker condition, which
requires only that all agents are eventually linked together. This condition
is also strictly weaker than the one Jadbabaie et al. desired.
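The linearised, noise-free Vicsek update is plain neighbour averaging of headings. On a fixed connected interaction graph (a ring with self-loops, below) headings reach consensus, which is the phenomenon whose connectivity requirements these papers analyse; graph, size, and step count are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
theta = rng.uniform(0.0, 1.0, size=n)        # initial headings (small angles)

# Ring interaction graph with self-loops: agent i averages with i-1 and i+1.
adj = np.eye(n) + np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1)

for _ in range(50):
    theta = (adj @ theta) / adj.sum(axis=1)  # linearised Vicsek update

spread = theta.max() - theta.min()           # shrinks toward consensus
```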
|
cs/0407024
|
An agent-based intelligent environmental monitoring system
|
cs.MA cs.CE
|
Fairly rapid environmental changes call for continuous surveillance and
on-line decision making. There are two main areas where IT technologies can be
valuable. In this paper we present a multi-agent system for monitoring and
assessing air-quality attributes, which uses data coming from a meteorological
station. A community of software agents is assigned to monitor and validate
measurements coming from several sensors, to assess air-quality, and, finally,
to fire alarms to appropriate recipients, when needed. Data mining techniques
have been used for adding data-driven, customized intelligence into agents. The
architecture of the developed system, its domain ontology, and typical agent
interactions are presented. Finally, the deployment of a real-world test case
is demonstrated.
|
cs/0407025
|
An agent framework for dynamic agent retraining: Agent academy
|
cs.MA
|
Agent Academy (AA) aims to develop a multi-agent society that can train new
agents for specific or general tasks, while constantly retraining existing
agents in a recursive mode. The system is based on collecting information both
from the environment and the behaviors of the acting agents and their related
successes/failures to generate a body of data, stored in the Agent Use
Repository, which is mined by the Data Miner module, in order to generate
useful knowledge about the application domain. Knowledge extracted by the Data
Miner is used by the Agent Training Module to train new agents or to enhance
the behavior of agents already running. In this paper the Agent Academy
framework is introduced, and its overall architecture and functionality are
presented. Training issues as well as agent ontologies are discussed. Finally,
a scenario, which aims to provide environmental alerts to both individuals and
public authorities, is described as an AA-based use case.
|
cs/0407026
|
Summarizing Encyclopedic Term Descriptions on the Web
|
cs.CL
|
We are developing an automatic method to compile an encyclopedic corpus from
the Web. In our previous work, paragraph-style descriptions for a term are
extracted from Web pages and organized based on domains. However, these
descriptions are independent and do not comprise a condensed text as in
hand-crafted encyclopedias. To resolve this problem, we propose a summarization
method, which produces a single text from multiple descriptions. The resultant
summary concisely describes a term from different viewpoints. We also show the
effectiveness of our method by means of experiments.
|
cs/0407027
|
Unsupervised Topic Adaptation for Lecture Speech Retrieval
|
cs.CL
|
We are developing a cross-media information retrieval system, in which users
can view specific segments of lecture videos by submitting text queries. To
produce a text index, the audio track is extracted from a lecture video and a
transcription is generated by automatic speech recognition. In this paper, to
improve the quality of our retrieval system, we extensively investigate the
effects of adapting acoustic and language models on speech recognition. We
perform an MLLR-based method to adapt an acoustic model. To obtain a corpus for
language model adaptation, we use the textbook for a target lecture to search a
Web collection for the pages associated with the lecture topic. We show the
effectiveness of our method by means of experiments.
|
cs/0407028
|
Effects of Language Modeling on Speech-driven Question Answering
|
cs.CL
|
We integrate automatic speech recognition (ASR) and question answering (QA)
to realize a speech-driven QA system, and evaluate its performance. We adapt an
N-gram language model to natural language questions, so that the input of our
system can be recognized with high accuracy. We target WH-questions, which
consist of a topic part and a fixed phrase used to ask about something. We
first produce a general N-gram model intended to recognize the topic, and then
emphasize the counts of the N-grams that correspond to the fixed phrases. Given
a transcription by the ASR engine, the QA engine extracts the answer candidates
from target documents. We propose a passage retrieval method robust against
recognition errors in the transcription. We use the QA test collection produced
in NTCIR, which is a TREC-style evaluation workshop, and show the effectiveness
of our method by means of experiments.
|
cs/0407029
|
Static versus Dynamic Arbitrage Bounds on Multivariate Option Prices
|
cs.CE
|
We compare static arbitrage price bounds on basket calls, i.e. bounds that
only involve buy-and-hold trading strategies, with the price range obtained
within a multi-variate generalization of the Black-Scholes model. While there
is no gap between these two sets of prices in the univariate case, we observe
here that contrary to our intuition about model risk for at-the-money calls,
there is a somewhat large gap between model prices and static arbitrage prices,
hence a similarly large set of prices on which a multivariate Black-Scholes
model cannot be calibrated but where no conclusion can be drawn on the presence
or not of a static arbitrage opportunity.
|
cs/0407034
|
On the Complexity of Case-Based Planning
|
cs.AI cs.CC
|
We analyze the computational complexity of problems related to case-based
planning: planning when a plan for a similar instance is known, and planning
from a library of plans. We prove that planning from a single case has the same
complexity than generative planning (i.e., planning "from scratch"); using an
extended definition of cases, complexity is reduced if the domain stored in the
case is similar to the one to search plans for. Planning from a library of
cases is shown to have the same complexity. In both cases, the complexity of
planning remains, in the worst case, PSPACE-complete.
|
cs/0407035
|
A Framework for High-Accuracy Privacy-Preserving Mining
|
cs.DB cs.IR
|
To preserve client privacy in the data mining process, a variety of
techniques based on random perturbation of data records have been proposed
recently. In this paper, we present a generalized matrix-theoretic model of
random perturbation, which facilitates a systematic approach to the design of
perturbation mechanisms for privacy-preserving mining. Specifically, we
demonstrate that (a) the prior techniques differ only in their settings for the
model parameters, and (b) through appropriate choice of parameter settings, we
can derive new perturbation techniques that provide highly accurate mining
results even under strict privacy guarantees. We also propose a novel
perturbation mechanism wherein the model parameters are themselves
characterized as random variables, and demonstrate that this feature provides
significant improvements in privacy at a very marginal cost in accuracy.
While our model is valid for random-perturbation-based privacy-preserving
mining in general, we specifically evaluate its utility here with regard to
frequent-itemset mining on a variety of real datasets. The experimental results
indicate that our mechanisms incur substantially lower identity and support
errors as compared to the prior techniques.
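A minimal instance of the matrix-theoretic view: Warner-style randomized response is the 2x2 perturbation matrix below, and the miner reconstructs the true distribution by inverting it. The parameter p and the simulated data are illustrative, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(3)

# Perturbation matrix M: M[j, i] = P(reported value j | true value i).
# With p on the diagonal this is Warner-style randomized response; prior
# schemes correspond to other settings of the matrix entries.
p = 0.8
M = np.array([[p, 1 - p],
              [1 - p, p]])

true = rng.random(100_000) < 0.3             # 30% of clients hold the item
keep = rng.random(true.size) < p
reported = np.where(keep, true, ~true)       # flip with probability 1 - p

observed = np.array([np.mean(~reported), np.mean(reported)])
estimate = np.linalg.solve(M, observed)      # reconstructed distribution
```

`estimate` recovers roughly (0.7, 0.3) from the perturbed reports; choosing M (or randomising its entries, as the paper proposes) trades reconstruction accuracy against privacy.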
|
cs/0407037
|
Generalized Evolutionary Algorithm based on Tsallis Statistics
|
cs.AI
|
A generalized evolutionary algorithm based on the Tsallis canonical
distribution is proposed. The algorithm uses the Tsallis generalized canonical
distribution, instead of the Gibbs-Boltzmann distribution, to weigh
configurations for `selection'. Our simulation results show that for an
appropriate choice of
non-extensive index that is offered by Tsallis statistics, evolutionary
algorithms based on this generalization outperform algorithms based on
Gibbs-Boltzmann distribution.
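The substitution at the heart of the method can be sketched directly: replace the Boltzmann weight exp(-beta*E) by the Tsallis q-exponential with cutoff, which recovers the Boltzmann weights as q -> 1. The fitness values, beta, and q below are illustrative.

```python
import numpy as np

def tsallis_weights(fitness, q=1.5, beta=1.0):
    """Selection weights from the Tsallis q-exponential
    [1 - (1 - q) * beta * E]_+ ** (1 / (1 - q)) in place of exp(-beta * E);
    as q -> 1 the Gibbs-Boltzmann weights are recovered."""
    E = np.asarray(fitness, dtype=float)     # treat fitness as an energy
    if abs(q - 1.0) < 1e-12:
        w = np.exp(-beta * E)
    else:
        base = np.maximum(1.0 - (1.0 - q) * beta * E, 0.0)
        with np.errstate(divide="ignore"):
            w = base ** (1.0 / (1.0 - q))
        w = np.where(base > 0, w, 0.0)       # respect the q-exponential cutoff
    return w / w.sum()

w_gibbs = tsallis_weights([0.0, 1.0, 2.0], q=1.0)
w_tsallis = tsallis_weights([0.0, 1.0, 2.0], q=1.5)   # heavier tail for q > 1
```

For q > 1 the weights decay polynomially rather than exponentially, so high-energy configurations keep a larger selection probability, which is what drives the broader exploration.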
|
cs/0407039
|
On the Convergence Speed of MDL Predictions for Bernoulli Sequences
|
cs.LG cs.AI cs.IT math.IT math.PR
|
We consider the Minimum Description Length principle for online sequence
prediction. If the underlying model class is discrete, then the total expected
square loss is a particularly interesting performance measure: (a) this
quantity is bounded, implying convergence with probability one, and (b) it
additionally specifies a `rate of convergence'. Generally, for MDL only
exponential loss bounds hold, as opposed to the linear bounds for a Bayes
mixture. We show that this is even the case if the model class contains only
Bernoulli distributions. We derive a new upper bound on the prediction error
for countable Bernoulli classes. This implies a small bound (comparable to the
one for Bayes mixtures) for certain important model classes. The results apply
to many Machine Learning tasks including classification and hypothesis testing.
We provide arguments that our theorems generalize to countable classes of
i.i.d. models.
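A toy instance of the setting: a countable (here finite) Bernoulli class with a uniform index code, and the two-part MDL choice that minimises index bits plus data bits. The class and data are illustrative; the paper's bounds concern the expected square loss of such predictors.

```python
from math import log2

# A countable (here: finite) Bernoulli model class with an index code.
thetas = [0.1, 0.25, 0.5, 0.75, 0.9]
index_bits = [log2(len(thetas))] * len(thetas)   # uniform code on the class

def mdl_choice(bits_seq):
    """Two-part MDL: pick the theta minimising index code length plus the
    -log2 likelihood of the observed binary sequence."""
    ones = sum(bits_seq)
    n = len(bits_seq)
    def total_bits(k):
        th = thetas[k]
        return index_bits[k] - ones * log2(th) - (n - ones) * log2(1 - th)
    return thetas[min(range(len(thetas)), key=total_bits)]

seq = [1] * 15 + [0] * 5      # 15 ones out of 20 (toy data, bias ~0.75)
chosen = mdl_choice(seq)
```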
|
cs/0407040
|
Decomposition Based Search - A theoretical and experimental evaluation
|
cs.AI
|
In this paper we present and evaluate a search strategy called Decomposition
Based Search (DBS) which is based on two steps: subproblem generation and
subproblem solution. The generation of subproblems is done through value
ranking and domain splitting. Subdomains are explored so as to generate,
according to the heuristic chosen, promising subproblems first.
We show that two well known search strategies, Limited Discrepancy Search
(LDS) and Iterative Broadening (IB), can be seen as special cases of DBS. First
we present a tuning of DBS that visits the same search nodes as IB, but avoids
restarts. Then we compare DBS and LDS, using the same heuristic, both
theoretically and computationally. We prove that DBS has a higher probability
of being
successful than LDS on a comparable number of nodes, under realistic
assumptions. Experiments on a constraint satisfaction problem and an
optimization problem show that DBS is indeed very effective if compared to LDS.
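For concreteness, here is the leaf-visit order of plain LDS on a binary tree, the first of the two strategies recovered as special cases of DBS (a generic sketch; 0 means following the value-ordering heuristic, 1 means a discrepancy):

```python
def lds_leaves(depth, max_disc):
    """Yield leaf paths (tuples of 0/1, 0 = follow the heuristic) in
    Limited Discrepancy Search order: discrepancy 0 first, then 1, ..."""
    for disc in range(max_disc + 1):
        def walk(path, remaining):
            if len(path) == depth:
                if remaining == 0:          # exactly `disc` discrepancies
                    yield tuple(path)
                return
            yield from walk(path + [0], remaining)
            if remaining > 0:
                yield from walk(path + [1], remaining - 1)
        yield from walk([], disc)

order = list(lds_leaves(depth=3, max_disc=3))
```

The all-heuristic leaf comes first, then every leaf with one discrepancy, and so on; DBS generalises this by branching on subproblems induced by value ranking and domain splitting rather than on single discrepancies.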
|
cs/0407042
|
Postponing Branching Decisions
|
cs.AI
|
Solution techniques for Constraint Satisfaction and Optimisation Problems
often make use of backtrack search methods, exploiting variable and value
ordering heuristics. In this paper, we propose and analyse a very simple method
to apply in case the value ordering heuristic produces ties: postponing the
branching decision. To this end, we group together values in a tie, branch on
this sub-domain, and defer the decision among them to lower levels of the
search tree. We show theoretically and experimentally that this simple
modification can dramatically improve the efficiency of the search strategy.
Although in practice similar methods may already have been applied, to our
knowledge no empirical or theoretical study has been proposed in the
literature to identify when and to what extent this strategy should be used.
|
cs/0407044
|
Reduced cost-based ranking for generating promising subproblems
|
cs.AI
|
In this paper, we propose an effective search procedure that interleaves two
steps: subproblem generation and subproblem solution. We mainly focus on the
first part. It consists of a variable domain value ranking based on reduced
costs. Exploiting the ranking, we generate, in a Limited Discrepancy Search
tree, the most promising subproblems first. An interesting result is that
reduced costs provide a very precise ranking that allows us to almost always find
the optimal solution in the first generated subproblem, even if its dimension
is significantly smaller than that of the original problem. Concerning the
proof of optimality, we exploit a way to increase the lower bound for
subproblems at higher discrepancies. We present experimental results on the TSP
and its time-constrained variant to demonstrate the effectiveness of the
proposed approach, although the technique could be generalized to other problems.
|
cs/0407046
|
A Bimachine Compiler for Ranked Tagging Rules
|
cs.CL
|
This paper describes a novel method of compiling ranked tagging rules into a
deterministic finite-state device called a bimachine. The rules are formulated
in the framework of regular rewrite operations and allow unrestricted regular
expressions in both left and right rule contexts. The compiler is illustrated
by an application within a speech synthesis system.
|
cs/0407047
|
Channel-Independent and Sensor-Independent Stimulus Representations
|
cs.CV cs.AI
|
This paper shows how a machine, which observes stimuli through an
uncharacterized, uncalibrated channel and sensor, can glean machine-independent
information (i.e., channel- and sensor-independent information) about the
stimuli. First, we demonstrate that a machine defines a specific coordinate
system on the stimulus state space, with the nature of that coordinate system
depending on the device's channel and sensor. Thus, machines with different
channels and sensors "see" the same stimulus trajectory through state space,
but in different machine-specific coordinate systems. For a large variety of
physical stimuli, statistical properties of that trajectory endow the stimulus
configuration space with differential geometric structure (a metric and
parallel transfer procedure), which can then be used to represent relative
stimulus configurations in a coordinate-system-independent manner (and,
therefore, in a channel- and sensor-independent manner). The resulting
description is an "inner" property of the stimulus time series in the sense
that it does not depend on extrinsic factors like the observer's choice of a
coordinate system in which the stimulus is viewed (i.e., the observer's choice
of channel and sensor). This methodology is illustrated with analytic examples
and with a numerically simulated experiment. In an intelligent sensory device,
this kind of representation "engine" could function as a "front-end" that
passes channel/sensor-independent stimulus representations to a pattern
recognition module. After a pattern recognizer has been trained in one of these
devices, it could be used without change in other devices having different
channels and sensors.
|
cs/0407049
|
Preferred Answer Sets for Ordered Logic Programs
|
cs.LO cs.AI
|
We extend answer set semantics to deal with inconsistent programs (containing
classical negation), by finding a "best" answer set. Within the context of
inconsistent programs, it is natural to have a partial order on rules,
representing a preference for satisfying certain rules, possibly at the cost of
violating less important ones. We show that such a rule order induces a natural
order on extended answer sets, the minimal elements of which we call preferred
answer sets. We characterize the expressiveness of the resulting semantics and
show that it can simulate negation as failure, disjunction and some other
formalisms such as logic programs with ordered disjunction. The approach is
shown to be useful in several application areas, e.g. repairing databases, where
minimal repairs correspond to preferred answer sets.
To appear in Theory and Practice of Logic Programming (TPLP).
|
cs/0407053
|
Design of a Parallel and Distributed Web Search Engine
|
cs.IR cs.DC
|
This paper describes the architecture of MOSE (My Own Search Engine), a
scalable parallel and distributed engine for searching the web. MOSE was
specifically designed to efficiently exploit affordable parallel architectures,
such as clusters of workstations. Its modular and scalable architecture can
easily be tuned to fulfill the bandwidth requirements of the application at
hand. Both task-parallel and data-parallel approaches are exploited within MOSE
in order to increase the throughput and efficiently use communication, storing
and computational resources. We used a collection of HTML documents as a
benchmark, and conducted preliminary experiments on a cluster of three SMP
Linux PCs.
|
cs/0407054
|
From truth to computability I
|
cs.LO cs.AI cs.GT math.LO
|
The recently initiated approach called computability logic is a formal theory
of interactive computation. See a comprehensive online source on the subject at
http://www.cis.upenn.edu/~giorgi/cl.html . The present paper contains a
soundness and completeness proof for the deductive system CL3 which axiomatizes
the most basic first-order fragment of computability logic called the
finite-depth, elementary-base fragment. Among the potential application areas
for this result are the theory of interactive computation, constructive applied
theories, knowledgebase systems, systems for resource-bound planning and
action. This paper is self-contained as it reintroduces all relevant
definitions as well as main motivations.
|
cs/0407057
|
Universal Convergence of Semimeasures on Individual Random Sequences
|
cs.LG cs.AI cs.CC cs.IT math.IT math.PR
|
Solomonoff's central result on induction is that the posterior of a universal
semimeasure M converges rapidly and with probability 1 to the true sequence
generating posterior mu, if the latter is computable. Hence, M is eligible as a
universal sequence predictor in case of unknown mu. Despite some nearby results
and proofs in the literature, the stronger result of convergence for all
(Martin-Loef) random sequences remained open. Such a convergence result would
be particularly interesting and natural, since randomness can be defined in
terms of M itself. We show that there are universal semimeasures M which do not
converge for all random sequences, i.e. we give a partial negative answer to
the open problem. We also provide a positive answer for some non-universal
semimeasures. We define the incomputable measure D as a mixture over all
computable measures and the enumerable semimeasure W as a mixture over all
enumerable nearly-measures. We show that W converges to D and D to mu on all
random sequences. The Hellinger distance measuring closeness of two
distributions plays a central role.
|
cs/0407060
|
Tight bounds for LDPC and LDGM codes under MAP decoding
|
cs.IT cond-mat.dis-nn math.IT
|
A new method for analyzing low density parity check (LDPC) codes and low
density generator matrix (LDGM) codes under bit maximum a posteriori
probability (MAP) decoding is introduced. The method is based on a rigorous
approach to spin glasses developed by Francesco Guerra. It allows one to
construct lower bounds on the entropy of the transmitted message conditional on
the received one. Based on heuristic statistical mechanics calculations, we
conjecture such bounds to be tight. The result holds for standard irregular
ensembles when used over binary input output symmetric channels. The method is
first developed for Tanner graph ensembles with Poisson left degree
distribution. It is then generalized to `multi-Poisson' graphs, and, by a
completion procedure, to arbitrary degree distribution.
|
cs/0407061
|
A measure of similarity between graph vertices
|
cs.IR cond-mat.dis-nn cs.DM physics.data-an
|
We introduce a concept of similarity between vertices of directed graphs. Let
G_A and G_B be two directed graphs. We define a similarity matrix whose (i,
j)-th real entry expresses how similar vertex j (in G_A) is to vertex i (in
G_B). The similarity matrix can be obtained as the limit of the normalized even
iterates of a linear transformation. In the special case where G_A=G_B=G, the
matrix is square and the (i, j)-th entry is the similarity score between the
vertices i and j of G. We point out that Kleinberg's "hub and authority" method
to identify web-pages relevant to a given query can be viewed as a special case
of our definition in the case where one of the graphs has two vertices and a
unique directed edge between them. In analogy to Kleinberg, we show that our
similarity scores are given by the components of a dominant eigenvector of a
non-negative matrix. Potential applications of our similarity concept are
numerous. We illustrate an application for the automatic extraction of synonyms
in a monolingual dictionary.
|
cs/0407064
|
A Sequent Calculus and a Theorem Prover for Standard Conditional Logics
|
cs.LO cs.AI
|
In this paper we present a cut-free sequent calculus, called SeqS, for some
standard conditional logics, namely CK, CK+ID, CK+MP and CK+MP+ID. The calculus
uses labels and transition formulas and can be used to prove decidability and
space complexity bounds for the respective logics. We also present CondLean, a
theorem prover for these logics implementing SeqS calculi written in SICStus
Prolog.
|
cs/0407065
|
Word Sense Disambiguation by Web Mining for Word Co-occurrence
Probabilities
|
cs.CL cs.IR cs.LG
|
This paper describes the National Research Council (NRC) Word Sense
Disambiguation (WSD) system, as applied to the English Lexical Sample (ELS)
task in Senseval-3. The NRC system approaches WSD as a classical supervised
machine learning problem, using familiar tools such as the Weka machine
learning software and Brill's rule-based part-of-speech tagger. Head words are
represented as feature vectors with several hundred features. Approximately
half of the features are syntactic and the other half are semantic. The main
novelty in the system is the method for generating the semantic features, based
on word co-occurrence probabilities. The probabilities are estimated
using the Waterloo MultiText System with a corpus of about one terabyte of
unlabeled text, collected by a web crawler.
|
cs/0408001
|
Semantic Linking - a Context-Based Approach to Interactivity in
Hypermedia
|
cs.IR cs.LG
|
The semantic Web initiates new, high level access schemes to online content
and applications. One area of superior need for a redefined content exploration
is given by online educational applications and their concepts of
interactivity in the framework of open hypermedia systems. In the present paper
we discuss aspects and opportunities of gaining interactivity schemes from
semantic notions of components. A transition from standard educational
annotation to semantic statements of hyperlinks is discussed. Further on we
introduce the concept of semantic link contexts as an approach to manage a
coherent rhetoric of linking. A practical implementation is introduced, as
well. Our semantic hyperlink implementation is based on the more general
Multimedia Information Repository MIR, an open hypermedia system supporting the
standards XML, Corba and JNDI.
|
cs/0408004
|
Hypermedia Learning Objects System - On the Way to a Semantic
Educational Web
|
cs.IR cs.LG
|
While eLearning systems are becoming more and more popular in daily education,
available applications lack opportunities to structure, annotate and manage
their contents in a high-level fashion. General efforts to improve these
deficits are taken by initiatives to define rich meta data sets and a
semantic Web layer. In the present paper we introduce Hylos, an online learning
system. Hylos is based on a cellular eLearning Object (ELO) information model
encapsulating meta data conforming to the LOM standard. Content management is
provisioned on this semantic meta data level and allows for variable,
dynamically adaptable access structures. Context aware multifunctional links
permit a systematic navigation depending on the learner's and didactic needs,
thereby exploring the capabilities of the semantic web. Hylos is built upon the
more general Multimedia Information Repository (MIR) and the MIR adaptive
context linking environment (MIRaCLE), its linking extension. MIR is an open
system supporting the standards XML, Corba and JNDI. Hylos benefits from
manageable information structures, sophisticated access logic and high-level
authoring tools like the ELO editor responsible for the semi-manual creation of
meta data and WYSIWYG like content editing.
|
cs/0408005
|
Educational Content Management - A Cellular Approach
|
cs.CY cs.IR
|
In recent times, online educational applications are increasingly requested
to provide self-consistent learning offers for students at the university
level. Consequently they need to cope with the wide range of complexity and
interrelations university course teaching brings along. An urgent need to
overcome simplistically linked HTML content pages becomes apparent. In the
present paper we discuss a schematic concept of educational content
construction from information cells and introduce its implementation on the
storage and runtime layer. Starting from cells, content is annotated according
to didactic needs, structured for dynamic arrangement, dynamically decorated
with hyperlinks and, as all work is based on XML, open to any presentation
layer. Data can be variably accessed through URIs built on semantic path-names
and edited via an adaptive authoring toolbox. Our content management approach
is based on the more general Multimedia Information Repository MIR and allows
for personalisation, as well. MIR is an open system supporting the standards
XML, Corba and JNDI.
|
cs/0408006
|
Why Two Sexes?
|
cs.NE cs.GL q-bio.PE
|
Evolutionary role of the separation into two sexes from a cyberneticist's
point of view. [I translated this 1965 article from Russian "Nauka i Zhizn"
(Science and Life) in 1988. In a popular form, the article puts forward several
useful ideas not all of which even today are necessarily well known or widely
accepted. Boris Lubachevsky, bdl@bell-labs.com ]
|
cs/0408007
|
Online convex optimization in the bandit setting: gradient descent
without a gradient
|
cs.LG cs.CC
|
We consider the general online convex optimization framework introduced by
Zinkevich. In this setting, there is a sequence of convex functions. Each
period, we must choose a single point (from some feasible set) and pay a cost
equal to the value of the next function on our chosen point. Zinkevich shows
that, if each function is revealed after the choice is made, then one can
achieve vanishingly small regret relative to the best single decision chosen in
hindsight.
We extend this to the bandit setting where we do not find out the entire
functions but rather just their value at our chosen point. We show how to get
vanishingly small regret in this setting.
Our approach uses a simple approximation of the gradient that is computed
from evaluating a function at a single (random) point. We show that this
estimate is sufficient to mimic Zinkevich's gradient descent online analysis,
without access to the gradient (being able only to evaluate the function at a
single point).
|
cs/0408008
|
Iterative Quantization Using Codes On Graphs
|
cs.IT math.IT
|
We study codes on graphs combined with an iterative message passing algorithm
for quantization. Specifically, we consider the binary erasure quantization
(BEQ) problem which is the dual of the binary erasure channel (BEC) coding
problem. We show that duals of capacity achieving codes for the BEC yield codes
which approach the minimum possible rate for the BEQ. In contrast, low density
parity check codes cannot achieve the minimum rate unless their density grows
at least logarithmically with block length. Furthermore, we show that duals of
efficient iterative decoding algorithms for the BEC yield efficient encoding
algorithms for the BEQ. Hence our results suggest that graphical models may
yield near optimal codes in source coding as well as in channel coding and that
duality plays a key role in such constructions.
|
cs/0408010
|
A Simple Proportional Conflict Redistribution Rule
|
cs.AI
|
One proposes a first alternative rule of combination to WAO (Weighted Average
Operator) proposed recently by Josang, Daniel and Vannoorenberghe, called
Proportional Conflict Redistribution rule (denoted PCR1). PCR1 and WAO are
particular cases of WO (the Weighted Operator) because the conflicting mass is
redistributed with respect to some weighting factors. In this first PCR rule,
the proportionalization is done for each non-empty set with respect to the
non-zero sum of its corresponding mass matrix - instead of its mass column
average as in WAO, but the results are the same, as Ph. Smets has pointed out.
Also, we extend WAO (which herein gives no solution) for the degenerate case
when all column sums of all non-empty sets are zero, and then the conflicting
mass is transferred to the non-empty disjunctive form of all non-empty sets
together; but if this disjunctive form happens to be empty, then one considers
an open world (i.e. the frame of discernment might contain new hypotheses) and
thus all conflicting mass is transferred to the empty set. In addition to WAO,
we propose a general formula for PCR1 (WAO for non-degenerate cases).
|
cs/0408011
|
The asymptotic number of binary codes and binary matroids
|
cs.IT cs.DM math.IT
|
The asymptotic number of nonequivalent binary n-codes is determined. This is
also the asymptotic number of nonisomorphic binary n-matroids. The connection
to a result of Lefmann, Roedl, and Phelps is explored. The latter states that
almost all binary n-codes have a trivial automorphism group.
|
cs/0408012
|
Three-Dimensional Face Orientation and Gaze Detection from a Single
Image
|
cs.CV cs.HC
|
Gaze detection and head orientation are an important part of many advanced
human-machine interaction applications. Many systems have been proposed for
gaze detection. Typically, they require some form of user cooperation and
calibration. Additionally, they may require multiple cameras and/or restricted
head positions. We present a new approach for inference of both face
orientation and gaze direction from a single image with no restrictions on the
head position. Our algorithm is based on a face and eye model, deduced from
anthropometric data. This approach allows us to use a single camera and
requires no cooperation from the user. Using a single image avoids the
complexities associated with a multi-camera system. Evaluation tests show
that our system is accurate, fast and can be used in a variety of applications,
including ones where the user is unaware of the system.
|
cs/0408017
|
Improved Upper Bound for the Redundancy of Fix-Free Codes
|
cs.IT math.IT
|
A variable-length code is a fix-free code if no codeword is a prefix or a
suffix of any other codeword. In a fix-free code any finite sequence of
codewords can be decoded in both directions, which can improve the robustness
to channel noise and speed up the decoding process. In this paper we prove a
new sufficient condition of the existence of fix-free codes and improve the
upper bound on the redundancy of optimal fix-free codes.
|
cs/0408021
|
An Algorithm for Quasi-Associative and Quasi-Markovian Rules of
Combination in Information Fusion
|
cs.AI
|
In this paper one proposes a simple algorithm for combining the fusion rules,
those rules which first use the conjunctive rule and then the transfer of
conflicting mass to the non-empty sets, in such a way that they gain the
property of associativity and fulfill the Markovian requirement for dynamic
fusion. Also, a new rule, SDL-improved, is presented.
|
cs/0408023
|
On Global Warming (Softening Global Constraints)
|
cs.AI cs.PL
|
We describe soft versions of the global cardinality constraint and the
regular constraint, with efficient filtering algorithms maintaining domain
consistency. For both constraints, the softening is achieved by augmenting the
underlying graph. The softened constraints can be used to extend the
meta-constraint framework for over-constrained problems proposed by Petit,
Regin and Bessiere.
|
cs/0408026
|
Incremental Construction of Minimal Acyclic Sequential Transducers from
Unsorted Data
|
cs.CL cs.DS
|
This paper presents an efficient algorithm for the incremental construction
of a minimal acyclic sequential transducer (ST) for a dictionary consisting of
a list of input and output strings. The algorithm generalises a known method of
constructing minimal finite-state automata (Daciuk et al. 2000). Unlike the
algorithm published by Mihov and Maurel (2001), it does not require the input
strings to be sorted. The new method is illustrated by an application to
pronunciation dictionaries.
|
cs/0408027
|
CHR Grammars
|
cs.CL cs.PL
|
A grammar formalism based upon CHR is proposed analogously to the way
Definite Clause Grammars are defined and implemented on top of Prolog. These
grammars execute as robust bottom-up parsers with an inherent treatment of
ambiguity and a high flexibility to model various linguistic phenomena. The
formalism extends previous logic programming based grammars with a form of
context-sensitive rules and the possibility to include extra-grammatical
hypotheses in both head and body of grammar rules. Among the applications are
straightforward implementations of Assumption Grammars and abduction under
integrity constraints for language analysis. CHR grammars appear as a powerful
tool for specification and implementation of language processors and may be
proposed as a new standard for bottom-up grammars in logic programming.
To appear in Theory and Practice of Logic Programming (TPLP), 2005
|
cs/0408030
|
The Revolution In Database System Architecture
|
cs.DB
|
Database system architectures are undergoing revolutionary changes.
Algorithms and data are being unified by integrating programming languages with
the database system. This gives an extensible object-relational system where
non-procedural relational operators manipulate object sets. Coupled with this,
each DBMS is now a web service. This has huge implications for how we structure
applications. DBMSs are now object containers. Queues are the first objects to
be added. These queues are the basis for transaction processing and workflow
applications. Future workflow systems are likely to be built on this core.
Data cubes and online analytic processing are now baked into most DBMSs. Beyond
that, DBMSs have a framework for data mining and machine learning algorithms.
Decision trees, Bayes nets, clustering, and time series analysis are built in;
new algorithms can be added. Text, temporal, and spatial data access methods,
along with their probabilistic reasoning have been added to database systems.
Allowing approximate and probabilistic answers is essential for many
applications. Many believe that XML and xQuery will be the main data structure
and access pattern. Database systems must accommodate that perspective. These
changes mandate a much more dynamic query optimization strategy. Intelligence
is moving to the periphery of the network. Each disk and each sensor will be a
competent database machine. Relational algebra is a convenient way to program
these systems. Database systems are now expected to be self-managing,
self-healing, and always-up.
|