| id | title | categories | abstract |
|---|---|---|---|
cs/0402021
|
A Numerical Example on the Principles of Stochastic Discrimination
|
cs.CV cs.LG
|
Studies on ensemble methods for classification suffer from the difficulty of
modeling the complementary strengths of the components. Kleinberg's theory of
stochastic discrimination (SD) addresses this rigorously via mathematical
notions of enrichment, uniformity, and projectability of an ensemble. We
explain these concepts via a very simple numerical example that captures the
basic principles of the SD theory and method. We focus on a fundamental
symmetry in point set covering that is the key observation leading to the
foundation of the theory. We believe a better understanding of the SD method
will lead to developments of better tools for analyzing other ensemble methods.
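To give the flavour of such an example (our own toy sketch, not the paper's actual numbers): SD-style weak models can be random intervals on the line, kept only if "enriched" for class 1, with classification by averaged coverage.

```python
import random

random.seed(0)

# Toy 1-D data: two overlapping classes (hypothetical numbers, not the paper's).
class1 = [1.0, 2.0, 3.0, 4.0, 5.0]
class2 = [4.5, 6.0, 7.0, 8.0, 9.0]

def make_weak_models(n):
    """Random intervals kept only if 'enriched': they cover a larger
    fraction of class 1 than of class 2."""
    models = []
    while len(models) < n:
        a = random.uniform(0, 10)
        b = a + random.uniform(0.5, 4.0)
        cov1 = sum(a <= x <= b for x in class1) / len(class1)
        cov2 = sum(a <= x <= b for x in class2) / len(class2)
        if cov1 > cov2:                      # enrichment condition
            models.append((a, b))
    return models

def discriminant(x, models):
    # Fraction of weak models covering x: high for class-1 regions,
    # low for class-2 regions.
    return sum(a <= x <= b for (a, b) in models) / len(models)

models = make_weak_models(500)
y1 = discriminant(2.0, models)   # a typical class-1 point
y2 = discriminant(8.0, models)   # a typical class-2 point
print(y1, y2)
```

Thresholding the averaged coverage at 1/2 then separates the classes; uniformity and projectability concern how evenly such coverage behaves on unseen points.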
|
cs/0402023
|
A Service-Based Approach for Managing Mammography Data
|
cs.DB cs.SE
|
Grid-based technologies are emerging as a potential open-source,
standards-based solution for managing and collaborating distributed resources.
In view of these new computing solutions, the Mammogrid project is developing a
service-based and Grid-aware application which manages a European-wide
database of mammograms. Medical conditions such as breast cancer, and
mammograms as images, are extremely complex with many dimensions of
variability across the population. An effective solution for the management of
disparate mammogram data sources is a federation of autonomous multi-centre
sites which transcends national boundaries. The Mammogrid solution utilizes
Grid technologies to integrate geographically distributed data sets. The
Mammogrid application will explore the potential of the Grid to support
effective co-working among radiologists throughout the EU. This paper outlines
the Mammogrid service-based approach to managing a federation of grid-connected
mammography databases.
|
cs/0402024
|
Pattern Reification as the Basis for Description-Driven Systems
|
cs.DB cs.SE
|
One of the main factors driving object-oriented software development for
information systems is the requirement for systems to be tolerant to change. To
address this issue in designing systems, this paper proposes a pattern-based,
object-oriented, description-driven system (DDS) architecture as an extension
to the standard UML four-layer meta-model. A DDS architecture is proposed in
which aspects of both static and dynamic systems behavior can be captured via
descriptive models and meta-models. The proposed architecture embodies four
main elements - firstly, the adoption of a multi-layered meta-modeling
architecture and reflective meta-level architecture, secondly the
identification of four data modeling relationships that can be made explicit
such that they can be modified dynamically, thirdly the identification of five
design patterns which have emerged from practice and have proved essential in
providing reusable building blocks for data management, and fourthly the
encoding of the structural properties of the five design patterns by means of
one fundamental pattern, the Graph pattern. A practical example of this
philosophy, the CRISTAL project, is used to demonstrate the use of
description-driven data objects to handle system evolution.
|
cs/0402025
|
A perspective on the Healthgrid initiative
|
cs.DB cs.SE
|
This paper presents a perspective on the Healthgrid initiative which involves
European projects deploying pioneering applications of grid technology in the
health sector. In the last couple of years, several grid projects have been
funded on health related issues at national and European levels. A crucial
issue is to maximize their cross fertilization in the context of an environment
where data of medical interest can be stored and made easily available to the
different actors in healthcare: physicians, healthcare centres,
administrations, and of course the citizens. The Healthgrid initiative,
represented by the Healthgrid association (http://www.healthgrid.org), was
initiated to bring the necessary long term continuity, to reinforce and promote
awareness of the possibilities and advantages linked to the deployment of GRID
technologies in health. Technologies to address the specific requirements for
medical applications are under development. Results from the DataGrid and other
projects are given as examples of early applications.
|
cs/0402029
|
Mapping Topics and Topic Bursts in PNAS
|
cs.IR cs.HC
|
Scientific research is highly dynamic. New areas of science continually
evolve; others gain or lose importance, merge or split. Due to the steady
increase in the number of scientific publications it is hard to keep an
overview of the structure and dynamic development of one's own field of
science, much less all scientific domains. However, knowledge of hot topics,
emergent research frontiers, or change of focus in certain areas is a critical
component of resource allocation decisions in research labs, governmental
institutions, and corporations. This paper demonstrates the utilization of
Kleinberg's burst detection algorithm, co-word occurrence analysis, and graph
layout techniques to generate maps that support the identification of major
research topics and trends. The approach was applied to analyze and map the
complete set of papers published in the Proceedings of the National Academy of
Sciences (PNAS) in the years 1982-2001. Six domain experts examined and
commented on the resulting maps in an attempt to reconstruct the evolution of
major research areas covered by PNAS.
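The co-word occurrence step of such a pipeline reduces to counting keyword pairs per paper; a minimal sketch with invented placeholder keywords (not PNAS data):

```python
from itertools import combinations
from collections import Counter

# Hypothetical per-paper keyword sets (placeholders, not PNAS data).
papers = [
    {"apoptosis", "p53", "cancer"},
    {"p53", "cancer", "mutation"},
    {"prion", "protein", "folding"},
    {"protein", "folding", "chaperone"},
]

# Count how often each unordered pair of terms appears in the same paper.
cooccur = Counter()
for terms in papers:
    for pair in combinations(sorted(terms), 2):
        cooccur[pair] += 1

# Pairs with weight >= 2 would become edges for the graph-layout step.
strong_edges = {pair for pair, w in cooccur.items() if w >= 2}
print(strong_edges)
```

In the actual analysis these weighted pairs, together with per-year term frequencies fed to the burst detector, drive the map layout.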
|
cs/0402030
|
Computational complexity and simulation of rare events of Ising spin
glasses
|
cs.NE cs.AI
|
We discuss the computational complexity of random 2D Ising spin glasses,
which represent an interesting class of constraint satisfaction problems for
black box optimization. Two extremal cases are considered: (1) the +/- J spin
glass, and (2) the Gaussian spin glass. We also study a smooth transition
between these two extremal cases. The computational complexity of all studied
spin glass systems is found to be dominated by rare events of extremely hard
spin glass samples. We show that the complexity of all studied spin glass
systems is closely related to the Frechet extremal value distribution. In a hybrid
algorithm that combines the hierarchical Bayesian optimization algorithm (hBOA)
with a deterministic bit-flip hill climber, the number of steps performed by
both the global searcher (hBOA) and the local searcher follow Frechet
distributions. Nonetheless, unlike in methods based purely on local search, the
parameters of these distributions confirm good scalability of hBOA with local
search. We further argue that standard performance measures for optimization
algorithms--such as the average number of evaluations until convergence--can be
misleading. Finally, our results indicate that for highly multimodal constraint
satisfaction problems, such as Ising spin glasses, recombination-based search
can provide qualitatively better results than mutation-based search.
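For concreteness, a minimal sketch (our own, under assumed conventions: nearest-neighbour ±J couplings on a small periodic lattice) of evaluating a spin-glass configuration and running a deterministic bit-flip hill climber:

```python
import random

random.seed(1)
L = 4  # lattice side

# Random +/-1 couplings for right and down neighbours (periodic boundary).
Jr = [[random.choice([-1, 1]) for _ in range(L)] for _ in range(L)]
Jd = [[random.choice([-1, 1]) for _ in range(L)] for _ in range(L)]

def energy(s):
    """H = -sum_<ij> J_ij s_i s_j over nearest-neighbour bonds."""
    e = 0
    for i in range(L):
        for j in range(L):
            e -= Jr[i][j] * s[i][j] * s[i][(j + 1) % L]
            e -= Jd[i][j] * s[i][j] * s[(i + 1) % L][j]
    return e

def hill_climb(s):
    """Greedy single-spin flips until no flip lowers the energy."""
    best = energy(s)
    improved = True
    while improved:
        improved = False
        for i in range(L):
            for j in range(L):
                s[i][j] = -s[i][j]
                e = energy(s)
                if e < best:
                    best = e
                    improved = True
                else:
                    s[i][j] = -s[i][j]  # undo the flip
    return best

s = [[random.choice([-1, 1]) for _ in range(L)] for _ in range(L)]
e0 = energy(s)
e1 = hill_climb(s)
print(e0, e1)
```

In the hybrid algorithm discussed above, a climber of this kind is the local searcher, while hBOA supplies the recombination-based global search.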
|
cs/0402031
|
Parameter-less hierarchical BOA
|
cs.NE cs.AI
|
The parameter-less hierarchical Bayesian optimization algorithm (hBOA)
enables the use of hBOA without the need for tuning parameters for solving each
problem instance. There are three crucial parameters in hBOA: (1) the selection
pressure, (2) the window size for restricted tournaments, and (3) the
population size. Although both the selection pressure and the window size
influence hBOA performance, performance should remain low-order polynomial with
standard choices of these two parameters. However, there is no standard
population size that would work for all problems of interest and the population
size must thus be eliminated in a different way. To eliminate the population
size, the parameter-less hBOA adopts the population-sizing technique of the
parameter-less genetic algorithm. Based on the existing theory, the
parameter-less hBOA should be able to solve nearly decomposable and
hierarchical problems in a quadratic or subquadratic number of function
evaluations without the need for setting any parameters whatsoever. A number of
experiments are presented to verify scalability of the parameter-less hBOA.
|
cs/0402032
|
Fitness inheritance in the Bayesian optimization algorithm
|
cs.NE cs.AI cs.LG
|
This paper describes how fitness inheritance can be used to estimate fitness
for a proportion of newly sampled candidate solutions in the Bayesian
optimization algorithm (BOA). The goal of estimating fitness for some candidate
solutions is to reduce the number of fitness evaluations for problems where
fitness evaluation is expensive. Bayesian networks used in BOA to model
promising solutions and generate the new ones are extended to allow not only
for modeling and sampling candidate solutions, but also for estimating their
fitness. The results indicate that fitness inheritance is a promising concept
in BOA, because population-sizing requirements for building appropriate models
of promising solutions lead to good fitness estimates even if only a small
proportion of candidate solutions is evaluated using the actual fitness
function. This can lead to a reduction of the number of actual fitness
evaluations by a factor of 30 or more.
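The inheritance mechanism itself can be sketched independently of BOA's Bayesian networks. In this toy GA on OneMax (truncation selection standing in for model building; all names are our own illustration), only a fraction of offspring is truly evaluated and the rest inherit the mean parental fitness:

```python
import random

random.seed(2)
N, POP, P_EVAL = 20, 40, 0.3   # bits, population size, truly evaluated fraction

evaluations = [0]              # counter for expensive fitness calls

def onemax(x):                 # stand-in for an expensive fitness function
    evaluations[0] += 1
    return sum(x)

def offspring(p1, p2):
    # Uniform crossover of two (genome, fitness) parents.
    child = [random.choice(pair) for pair in zip(p1[0], p2[0])]
    if random.random() < P_EVAL:
        f = onemax(child)                  # true evaluation
    else:
        f = (p1[1] + p2[1]) / 2            # inherited (estimated) fitness
    return (child, f)

pop = [(x, onemax(x)) for x in
       ([random.randint(0, 1) for _ in range(N)] for _ in range(POP))]

for _ in range(30):
    pop.sort(key=lambda ind: ind[1], reverse=True)
    parents = pop[:POP // 2]               # truncation selection
    pop = [offspring(random.choice(parents), random.choice(parents))
           for _ in range(POP)]

best = max(onemax(x) for x, _ in pop)      # final audit with true fitness
print(best, evaluations[0])
```

The evaluation counter stays well below the roughly 1280 calls a fully evaluated run would need; BOA's contribution, per the abstract, is that its model-building population sizes keep the inherited estimates accurate.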
|
cs/0402033
|
Recycling Computed Answers in Rewrite Systems for Abduction
|
cs.AI
|
In rule-based systems, goal-oriented computations correspond naturally to the
possible ways that an observation may be explained. In some applications, we
need to compute explanations for a series of observations with the same domain.
The question whether previously computed answers can be recycled arises. A yes
answer could result in substantial savings of repeated computations. For
systems based on classic logic, the answer is YES. For nonmonotonic systems
however, one tends to believe that the answer should be NO, since recycling is
a form of adding information. In this paper, we show that computed answers can
always be recycled, in a nontrivial way, for the class of rewrite procedures
that we proposed earlier for logic programs with negation. We present some
experimental results on an encoding of the logistics domain.
|
cs/0402035
|
Memory As A Monadic Control Construct In Problem-Solving
|
cs.AI
|
Recent advances in programming languages study and design have established a
standard way of grounding computational systems representation in category
theory. These formal results led to a better understanding of issues of control
and side-effects in functional and imperative languages. This framework can be
successfully applied to the investigation of the performance of Artificial
Intelligence (AI) inference and cognitive systems. In this paper, we delineate
a categorical formalisation of memory as a control structure driving
performance in inference systems. Abstracting away control mechanisms from
three widely used representations of memory in cognitive systems (scripts,
production rules and clusters) we explain how categorical triples capture the
interaction between learning and problem-solving.
|
cs/0402042
|
Anonymity and Information Hiding in Multiagent Systems
|
cs.CR cs.LO cs.MA
|
We provide a framework for reasoning about information-hiding requirements in
multiagent systems and for reasoning about anonymity in particular. Our
framework employs the modal logic of knowledge within the context of the runs
and systems framework, much in the spirit of our earlier work on secrecy
[Halpern and O'Neill 2002]. We give several definitions of anonymity with
respect to agents, actions, and observers in multiagent systems, and we relate
our definitions of anonymity to other definitions of information hiding, such
as secrecy. We also give probabilistic definitions of anonymity that are able
to quantify an observer's uncertainty about the state of the system. Finally,
we relate our definitions of anonymity to other formalizations of anonymity and
information hiding, including definitions of anonymity in the process algebra
CSP and definitions of information hiding using function views.
|
cs/0402047
|
Parameter-less Optimization with the Extended Compact Genetic Algorithm
and Iterated Local Search
|
cs.NE
|
This paper presents a parameter-less optimization framework that uses the
extended compact genetic algorithm (ECGA) and iterated local search (ILS), but
is not restricted to these algorithms. The presented optimization algorithm
(ILS+ECGA) comes as an extension of the parameter-less genetic algorithm (GA),
where the parameters of a selecto-recombinative GA are eliminated. The approach
that we propose is tested on several well known problems. In the absence of
domain knowledge, it is shown that ILS+ECGA is a robust and easy-to-use
optimization method.
|
cs/0402049
|
An architecture for massive parallelization of the compact genetic
algorithm
|
cs.NE
|
This paper presents an architecture which is suitable for a massive
parallelization of the compact genetic algorithm. The resulting scheme has
three major advantages. First, it has low synchronization costs. Second, it is
fault tolerant, and third, it is scalable.
The paper argues that the benefits that can be obtained with the proposed
approach are potentially higher than those obtained with traditional parallel
genetic algorithms. In addition, the ideas suggested in the paper may also be
relevant towards parallelizing more complex probabilistic model building
genetic algorithms.
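The sequential compact GA being parallelized maintains only a probability vector; a minimal sketch of the standard cGA update, here on OneMax (our illustration):

```python
import random

random.seed(3)
N, POP = 16, 50          # bits, simulated population size

def sample(p):
    return [1 if random.random() < pi else 0 for pi in p]

p = [0.5] * N            # the probability vector is the whole "population"
for _ in range(2000):
    a, b = sample(p), sample(p)
    # Tournament between the two sampled individuals on OneMax.
    winner, loser = (a, b) if sum(a) >= sum(b) else (b, a)
    for i in range(N):   # shift p toward the winner by 1/POP per differing bit
        if winner[i] != loser[i]:
            p[i] += 1 / POP if winner[i] == 1 else -1 / POP
            p[i] = min(1.0, max(0.0, p[i]))

print(sum(p))            # approaches N as the bits converge to 1
```

Because the entire state is this small vector, workers can sample and exchange updates with little synchronization, which is what makes the massive parallelization attractive.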
|
cs/0402050
|
A philosophical essay on life and its connections with genetic
algorithms
|
cs.NE
|
This paper makes a number of connections between life and various facets of
genetic and evolutionary algorithms research. Specifically, it addresses the
topics of adaptation, multiobjective optimization, decision making, deception,
and search operators, among others. It argues that human life, from birth to
death, is an adaptive or dynamic optimization problem where people are
continuously searching for happiness. More important, the paper speculates that
genetic algorithms can be used as a source of inspiration for helping people
make decisions in their everyday life.
|
cs/0402051
|
Nested Intervals Tree Encoding with Continued Fractions
|
cs.DB
|
We introduce a new variation of Tree Encoding with Nested Intervals, find
connections with Materialized Path, and suggest a method for moving parts of
the hierarchy.
|
cs/0402053
|
The Complexity of Modified Instances
|
cs.CC cs.AI
|
In this paper we study the complexity of solving a problem when a solution of
a similar instance is known. This problem is relevant whenever instances may
change from time to time, and known solutions may not remain valid after the
change. We consider two scenarios: in the first one, what is known is only a
solution of the problem before the change; in the second case, we assume that
some additional information, found during the search for this solution, is also
known. In the first setting, the techniques from the theory of NP-completeness
suffice to show complexity results. In the second case, negative results can
only be proved using the techniques of compilability, and are often related to
the size of considered changes.
|
cs/0402055
|
Lexical Base as a Compressed Language Model of the World (on the
material of the Ukrainian language)
|
cs.CL
|
This article verifies that a list of words selected by formal statistical
methods (frequency and functional genre unrestrictedness) is not a
conglomerate of unrelated words. Rather, it forms a system of interrelated
items that can be called the "lexical base of the language". This selected
list of words covers all spheres of human activity. To verify this claim, an
invariant synoptical scheme common to ideographic dictionaries of different
languages was determined.
|
cs/0402057
|
Integrating Defeasible Argumentation and Machine Learning Techniques
|
cs.AI
|
The field of machine learning (ML) is concerned with the question of how to
construct algorithms that automatically improve with experience. In recent
years many successful ML applications have been developed, such as datamining
programs, information-filtering systems, etc. Although ML algorithms allow the
detection and extraction of interesting patterns of data for several kinds of
problems, most of these algorithms are based on quantitative reasoning, as they
rely on training data in order to infer so-called target functions.
In the last years defeasible argumentation has proven to be a sound setting
to formalize common-sense qualitative reasoning. This approach can be combined
with other inference techniques, such as those provided by machine learning
theory.
In this paper we outline different alternatives for combining defeasible
argumentation and machine learning techniques. We suggest how different aspects
of a generic argument-based framework can be integrated with other ML-based
approaches.
|
cs/0402061
|
A Correlation-Based Distance
|
cs.IR
|
In this short technical report, we define, on the sample space R^D, a distance
between data points which depends on their correlation. We also derive an
expression for the center of mass of a set of points with respect to this
distance.
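One common concrete choice (our assumption; the report's exact definition may differ) is d = sqrt(2(1 - r)), with r the Pearson correlation, which is zero for perfectly correlated points and 2 for perfectly anti-correlated ones:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (sx * sy)

def corr_distance(x, y):
    # d = sqrt(2 * (1 - r)): our assumed form of a correlation-based distance.
    return math.sqrt(2 * (1 - pearson(x, y)))

a = [1.0, 2.0, 3.0, 4.0]
b = [2.0, 4.0, 6.0, 8.0]      # perfectly correlated with a
c = [4.0, 3.0, 2.0, 1.0]      # perfectly anti-correlated with a
print(corr_distance(a, b), corr_distance(a, c))
```

With this form, the distance equals the Euclidean distance between standardized, normalized vectors, which is why a closed-form center of mass exists.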
|
cs/0403001
|
Evolving a Stigmergic Self-Organized Data-Mining
|
cs.AI cs.IR
|
Self-organizing complex systems are typically comprised of a large number of
frequently similar components or events. A pattern at the global level of such
a system emerges solely from numerous interactions among the lower-level
components of the system. Moreover, the rules specifying interactions among
the system's components are executed using only local information, without
reference to the global pattern, which, as in many real-world problems, is not
easily accessible or even possible to find. Stigmergy, a kind of indirect
communication and learning mediated by the environment found in social
insects, is a well-known example of self-organization. It provides not only
vital clues for understanding how components can interact to produce a complex
pattern, but also pinpoints simple biological non-linear rules and methods for
achieving improved artificial intelligent adaptive categorization systems,
which are critical for Data-Mining. In the present work our intention is to
show that a new type of Data-Mining can be designed based on stigmergic
paradigms, taking advantage of several natural features of this phenomenon. By
hybridizing bio-inspired Swarm Intelligence with Evolutionary Computation we
seek an entirely distributed, adaptive, collective and cooperative
self-organized Data-Mining. As a real-world, real-time test bed for our
proposal, World-Wide-Web Mining is used. With that purpose in mind, Web usage
data was collected from the Monash University Web site (Australia), which
receives over 7 million hits every week. Results are compared to other recent
systems, showing that the system presented here is highly promising.
|
cs/0403002
|
Epistemic Foundation of Stable Model Semantics
|
cs.AI
|
Stable model semantics has become a very popular approach for the management
of negation in logic programming. This approach relies mainly on the closed
world assumption to complete the available knowledge and its formulation has
its basis in the so-called Gelfond-Lifschitz transformation.
The primary goal of this work is to present an alternative and
epistemic-based characterization of stable model semantics, to the
Gelfond-Lifschitz transformation. In particular, we show that stable model
semantics can be defined entirely as an extension of the Kripke-Kleene
semantics. Indeed, we show that the closed world assumption can be seen as an
additional source of `falsehood' to be added cumulatively to the Kripke-Kleene
semantics. Our approach is purely algebraic and can abstract from the
particular formalism of choice as it is based on monotone operators (under the
knowledge order) over bilattices only.
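The Gelfond-Lifschitz construction referred to can be stated directly: a candidate set M is a stable model iff it equals the least model of the program's reduct with respect to M. A brute-force checker for small ground normal programs (our own minimal version, not the paper's algebraic formulation):

```python
from itertools import chain, combinations

# A ground normal logic program as rules (head, positive_body, negative_body).
# Example program: p :- not q.   q :- not p.
program = [("p", set(), {"q"}),
           ("q", set(), {"p"})]
atoms = {"p", "q"}

def least_model(definite_rules):
    """Least model of a negation-free program via fixpoint iteration."""
    m = set()
    changed = True
    while changed:
        changed = False
        for head, pos in definite_rules:
            if pos <= m and head not in m:
                m.add(head)
                changed = True
    return m

def is_stable(m):
    # Gelfond-Lifschitz reduct: drop rules whose negative body meets M,
    # then delete the negative literals from the remaining rules.
    reduct = [(h, pos) for h, pos, neg in program if not (neg & m)]
    return least_model(reduct) == m

candidates = chain.from_iterable(combinations(sorted(atoms), r)
                                 for r in range(len(atoms) + 1))
stable = [set(c) for c in candidates if is_stable(set(c))]
print(stable)
```

The example program has the two expected stable models, {p} and {q}; the paper's contribution is to recover this semantics from Kripke-Kleene semantics plus a cumulative source of falsehood, without the reduct.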
|
cs/0403003
|
Genetic Algorithms and Quantum Computation
|
cs.NE
|
Recently, researchers have applied genetic algorithms (GAs) to address some
problems in quantum computation. There has also been some work on designing
genetic algorithms based on quantum-theoretical concepts and techniques. The
so-called Quantum Evolutionary Programming has two major sub-areas: Quantum
Inspired Genetic Algorithms (QIGAs) and Quantum Genetic Algorithms (QGAs).
The former adopts qubit chromosomes as representations and employs quantum
gates in the search for the best solution. The latter tries to solve a key
question in this field: what will GAs look like as an implementation on
quantum hardware? As we shall see, there is no complete answer to this
question yet. An important point for QGAs is to build a quantum algorithm that
takes advantage of both GA and quantum computing parallelism, as well as the
true randomness provided by quantum computers. In the first part of this paper
we present a survey of the main work on GAs combined with quantum computing,
including our own work in this area. We then review some basic concepts in
quantum computation and GAs and emphasize their inherent parallelism. Next, we
review the application of GAs to learning quantum operators and to circuit
design. Then, quantum evolutionary programming is considered. Finally, we
present our current research in this field and some perspectives.
|
cs/0403006
|
The role of behavior modifiers in representation development
|
cs.AI
|
We address the problem of the development of representations and their
relationship to the environment. We study a software agent which develops, in
a network, a representation of its simple environment, one that captures and
integrates the relationships between agent and environment through a closure
mechanism. The inclusion of a variable behavior modifier allows better
representation development. This can be confirmed with an internal description
of the closure mechanism, and with an external description of the properties of
the representation network.
|
cs/0403009
|
Demolishing Searle's Chinese Room
|
cs.AI cs.GL
|
Searle's Chinese Room argument is refuted by showing that he has actually
given two different versions of the room, which fail for different reasons.
Hence, Searle does not achieve his stated goal of showing ``that a system could
have input and output capabilities that duplicated those of a native Chinese
speaker and still not understand Chinese''.
|
cs/0403012
|
Distributed Control by Lagrangian Steepest Descent
|
cs.MA cs.GT nlin.AO
|
Often adaptive, distributed control can be viewed as an iterated game between
independent players. The coupling between the players' mixed strategies,
arising as the system evolves from one instant to the next, is determined by
the system designer. Information theory tells us that the most likely joint
strategy of the players, given a value of the expectation of the overall
control objective function, is the minimizer of a Lagrangian function of the
joint strategy. So the goal of the system designer is to speed evolution of the
joint strategy to that Lagrangian minimizing point, lower the expected value
of the control objective function, and repeat. Here we elaborate the theory of
algorithms that do this using local descent procedures, and that thereby
achieve efficient, adaptive, distributed control.
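The Lagrangian in question can be written as follows (our reconstruction from the probability-collectives literature; signs and conventions vary, so treat this as a sketch rather than the paper's exact statement):

```latex
% For joint mixed strategy q over moves x, objective G, and inverse
% temperature \beta (the Lagrange multiplier for the expected-cost
% constraint), the maxent Lagrangian is
\mathcal{L}(q) \;=\; \beta\,\mathbb{E}_q[G] \;-\; S(q),
\qquad S(q) = -\sum_x q(x)\ln q(x).
% Restricting q = \prod_i q_i to independent players and setting
% \partial\mathcal{L}/\partial q_i = 0 gives each player a Boltzmann
% response to its conditional expected cost:
q_i(x_i) \;\propto\; \exp\!\big(-\beta\,\mathbb{E}_{q_{-i}}[G \mid x_i]\big).
```

Local descent on this functional is what each player can perform with only local information, which is the sense in which the control is distributed.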
|
cs/0403014
|
Search Efficiency in Indexing Structures for Similarity Searching
|
cs.DB
|
Similarity searching finds application in a wide variety of domains including
multilingual databases, computational biology, pattern recognition and text
retrieval. Similarity is measured in terms of a distance function, such as
edit distance, in a general metric space, and is expensive to compute.
Indexing techniques can be used to reduce the number of distance computations.
We present an analysis of various existing similarity indexing structures for
this purpose. The performance obtained using the index structures studied was
found to be unsatisfactory. We propose an indexing technique that combines the
features of clustering with the M-tree (MTB), and the results indicate that
this gives better performance.
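The edit distance assumed as the underlying metric is the classic dynamic program; a minimal version for reference:

```python
def edit_distance(s, t):
    """Levenshtein distance: minimum number of insertions, deletions
    and substitutions turning s into t."""
    m, n = len(s), len(t)
    prev = list(range(n + 1))          # distances from "" to prefixes of t
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == t[j - 1] else 1
            cur[j] = min(prev[j] + 1,        # delete s[i-1]
                         cur[j - 1] + 1,     # insert t[j-1]
                         prev[j - 1] + cost) # substitute or match
        prev = cur
    return prev[n]

print(edit_distance("kitten", "sitting"))
```

Because edit distance satisfies the triangle inequality, metric index structures such as the M-tree can use precomputed distances to prune candidates without invoking this O(mn) computation on every pair.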
|
cs/0403016
|
A Comparative Study of Arithmetic Constraints on Integer Intervals
|
cs.PL cs.AI
|
We propose here a number of approaches to implement constraint propagation
for arithmetic constraints on integer intervals. To this end we introduce
integer interval arithmetic. Each approach is explained using appropriate proof
rules that reduce the variable domains. We compare these approaches using a set
of benchmarks.
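Integer interval arithmetic of this kind can be sketched as follows (our own minimal version; the paper's proof rules, e.g. for integer division, are more refined):

```python
def add(a, b):
    """[al, ah] + [bl, bh] = [al+bl, ah+bh]."""
    (al, ah), (bl, bh) = a, b
    return (al + bl, ah + bh)

def mul(a, b):
    """Product interval: min/max over the four endpoint products."""
    (al, ah), (bl, bh) = a, b
    prods = [al * bl, al * bh, ah * bl, ah * bh]
    return (min(prods), max(prods))

def intersect(a, b):
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None   # None = inconsistent domain

# Propagating x + y = z with x in [0,5], y in [2,4], z in [3,4]:
# narrow x using x = z - y, i.e. [3-4, 4-2] = [-1, 2], then intersect.
x, y, z = (0, 5), (2, 4), (3, 4)
x_new = intersect(x, (z[0] - y[1], z[1] - y[0]))
print(x_new)
```

Repeating such narrowing rules for every constraint until no domain shrinks further is the constraint-propagation loop the benchmarks compare.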
|
cs/0403017
|
Extending the SDSS Batch Query System to the National Virtual
Observatory Grid
|
cs.DB
|
The Sloan Digital Sky Survey science database is approaching 2TB. While the
vast majority of queries normally execute in seconds or minutes, this
interactive execution time can be disproportionately increased by a small
fraction of queries that take hours or days to run, either because they require
non-index scans of the largest tables or because they request very large result
sets. In response to this, we added a multi-queue job submission and tracking
system. The transfer of very large result sets from queries over the network is
another serious problem. Statistics suggested that much of this data transfer
is unnecessary; users would prefer to store results locally in order to allow
further cross matching and filtering. To allow local analysis, we implemented a
system that gives users their own personal database (MyDB) at the portal site.
Users may transfer data to their MyDB, and then perform further analysis before
extracting it to their own machine.
We intend to extend the MyDB and asynchronous query ideas to multiple NVO
nodes. This implies development, in a distributed manner, of several features,
which have been demonstrated for a single node in the SDSS Batch Query System
(CasJobs). The generalization of asynchronous queries necessitates some form of
MyDB storage as well as workflow tracking services on each node and
coordination strategies among nodes.
|
cs/0403018
|
The World Wide Telescope: An Archetype for Online Science
|
cs.DB
|
Most scientific data will never be directly examined by scientists; rather it
will be put into online databases where it will be analyzed and summarized by
computer programs. Scientists increasingly see their instruments through online
scientific archives and analysis tools, rather than examining the raw data.
Today this analysis is primarily driven by scientists asking queries, but
scientific archives are becoming active databases that self-organize and
recognize interesting and anomalous facts as data arrives. In some fields, data
from many different archives can be cross-correlated to produce new insights.
Astronomy presents an excellent example of these trends; and, federating
Astronomy archives presents interesting challenges for computer scientists.
|
cs/0403020
|
The Sloan Digital Sky Survey Science Archive: Migrating a Multi-Terabyte
Astronomical Archive from Object to Relational DBMS
|
cs.DB
|
The Sloan Digital Sky Survey Science Archive is the first in a series of
multi-Terabyte digital archives in Astronomy and other data-intensive sciences.
To facilitate data mining in the SDSS archive, we adapted a commercial database
engine and built specialized tools on top of it. Originally we chose an
object-oriented database management system due to its data organization
capabilities, platform independence, query performance and conceptual fit to
the data. However, after we had used the object database for the first couple
of years of the project, it began to fall short in terms of its query support
and data mining performance. This was as much due to the inability of the
database vendor to respond to our demands for features and bug fixes as it was due
to their failure to keep up with the rapid improvements in hardware
performance, particularly faster RAID disk systems. In the end, we were forced
to abandon the object database and migrate our data to a relational database.
We describe below the technical issues that we faced with the object database
and how and why we migrated to relational technology.
|
cs/0403021
|
A Quick Look at SATA Disk Performance
|
cs.DB cs.PF
|
We have been investigating the use of low-cost, commodity components for
multi-terabyte SQL Server databases. Dubbed storage bricks, these servers are
white box PCs containing the largest ATA drives, value-priced AMD or Intel
processors, and inexpensive ECC memory. One issue has been the wiring mess, air
flow problems, length restrictions, and connector failures created by seven or
more parallel ATA (PATA) ribbon cables and drives in a tower or 3U rack-mount
chassis. Large capacity Serial ATA (SATA) drives have recently become widely
available for the PC environment at a reasonable price. In addition to being
faster, the SATA connectors seem more reliable, have a more reasonable length
restriction (1m) and allow better airflow. We tested two drive brands along
with two RAID controllers to evaluate SATA drive performance and reliability.
This paper documents our results so far.
|
cs/0403025
|
Distribution of Mutual Information from Complete and Incomplete Data
|
cs.LG cs.AI cs.IT math.IT math.ST stat.TH
|
Mutual information is widely used, in a descriptive way, to measure the
stochastic dependence of categorical random variables. In order to address
questions such as the reliability of the descriptive value, one must consider
sample-to-population inferential approaches. This paper deals with the
posterior distribution of mutual information, as obtained in a Bayesian
framework by a second-order Dirichlet prior distribution. The exact analytical
expression for the mean, and analytical approximations for the variance,
skewness and kurtosis are derived. These approximations have a guaranteed
accuracy level of the order O(1/n^3), where n is the sample size. Leading order
approximations for the mean and the variance are derived in the case of
incomplete samples. The derived analytical expressions allow the distribution
of mutual information to be approximated reliably and quickly. In fact, the
derived expressions can be computed with the same order of complexity needed
for descriptive mutual information. This makes the distribution of mutual
information become a concrete alternative to descriptive mutual information in
many applications which would benefit from moving to the inductive side. Some
of these prospective applications are discussed, and one of them, namely
feature selection, is shown to perform significantly better when inductive
mutual information is used.
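The descriptive mutual information whose posterior is studied here is the plug-in estimate from a contingency table of counts; a minimal version:

```python
import math

def mutual_information(counts):
    """Plug-in I(X;Y) in nats from a contingency table of joint counts."""
    n = sum(sum(row) for row in counts)
    row = [sum(r) for r in counts]
    col = [sum(c) for c in zip(*counts)]
    mi = 0.0
    for i, r in enumerate(counts):
        for j, nij in enumerate(r):
            if nij:
                mi += (nij / n) * math.log(nij * n / (row[i] * col[j]))
    return mi

independent = [[10, 10], [10, 10]]   # X and Y unrelated
dependent = [[20, 0], [0, 20]]       # X determines Y
print(mutual_information(independent), mutual_information(dependent))
```

The paper's Bayesian treatment instead places a second-order Dirichlet prior on the cell probabilities and derives moments of the resulting posterior over this quantity, at the same order of computational cost.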
|
cs/0403027
|
An approach to membrane computing under inexactitude
|
cs.OH cs.NE
|
In this paper we introduce a fuzzy version of symport/antiport membrane
systems. Our fuzzy membrane systems handle possibly inexact copies of reactives
and their rules are endowed with threshold functions that determine whether a
rule can be applied or not to a given set of objects, depending on the degree
of accuracy of these objects to the reactives specified in the rule. We prove
that these fuzzy membrane systems generate exactly the recursively enumerable
finite-valued fuzzy sets of natural numbers.
|
cs/0403031
|
Concept of E-machine: How does a "dynamical" brain learn to process
"symbolic" information? Part I
|
cs.AI cs.LG
|
The human brain has many remarkable information processing characteristics
that deeply puzzle scientists and engineers. Among the most important and the
most intriguing of these characteristics are the brain's broad universality as
a learning system and its mysterious ability to dynamically change
(reconfigure) its behavior depending on a combinatorial number of different
contexts.
This paper discusses a class of hypothetically brain-like dynamically
reconfigurable associative learning systems that shed light on the possible
nature of these properties of the brain. The systems are arranged on the
general principle referred to as the concept of E-machine.
The paper addresses the following questions:
1. How can "dynamical" neural networks function as universal programmable
"symbolic" machines?
2. What kind of a universal programmable symbolic machine can form
arbitrarily complex software in the process of programming similar to the
process of biological associative learning?
3. How can a universal learning machine dynamically reconfigure its software
depending on a combinatorial number of possible contexts?
|
cs/0403032
|
Where Fail-Safe Default Logics Fail
|
cs.AI cs.LO
|
Reiter's original definition of default logic allows for the application of a
default that contradicts a previously applied one. We call this condition
failure. The possibility of generating failures has in the past been
considered a semantical problem, and variants have been proposed to solve
it. We show that it is instead a computational feature that is needed to encode
some domains into default logic.
|
cs/0403035
|
Web pages search engine based on DNS
|
cs.NI cs.IR
|
Search engines are the main access point to the largest information source in
the world, the Internet, which is now changing every aspect of our life;
information retrieval may be its most important service. For the common user,
however, Internet search services still fall far short of expectations: too
many unrelated search results, outdated information, and so on. To address
these problems, we propose a new system, a search engine based on DNS. The
original idea, detailed design, and implementation of this system are all
introduced in this paper.
|
cs/0403038
|
Tournament versus Fitness Uniform Selection
|
cs.LG cs.AI
|
In evolutionary algorithms a critical parameter that must be tuned is that of
selection pressure. If it is set too low then the rate of convergence towards
the optimum is likely to be slow. If, on the other hand, the selection pressure
is set too high, the system is likely to become stuck in a local optimum due to a
loss of diversity in the population. The recent Fitness Uniform Selection
Scheme (FUSS) is a conceptually simple but somewhat radical approach to
addressing this problem - rather than biasing the selection towards higher
fitness, FUSS biases selection towards sparsely populated fitness levels. In
this paper we compare the relative performance of FUSS with the well known
tournament selection scheme on a range of problems.
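The two selection schemes compared above can be sketched in a few lines. The FUSS variant below is an illustrative reading of the scheme (draw a fitness level uniformly between the current extremes, then pick the individual nearest to it), not the authors' exact implementation:

```python
import random

def tournament_select(pop, fitness, k=2):
    """Classic tournament selection: sample k individuals, keep the fittest."""
    contenders = random.sample(pop, k)
    return max(contenders, key=fitness)

def fuss_select(pop, fitness):
    """Fitness Uniform Selection Scheme (FUSS), sketched: draw a target
    fitness uniformly between the population's min and max fitness, then
    return the individual closest to it, favouring sparsely populated
    fitness levels rather than high fitness per se."""
    fits = [fitness(x) for x in pop]
    target = random.uniform(min(fits), max(fits))
    return min(pop, key=lambda x: abs(fitness(x) - target))

pop = list(range(10))
fit = lambda x: x
print(tournament_select(pop, fit, k=3))
print(fuss_select(pop, fit))
```

Raising `k` in tournament selection raises the selection pressure; FUSS has no such knob, which is part of its conceptual appeal.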
|
cs/0403039
|
A Flexible Rule Compiler for Speech Synthesis
|
cs.CL cs.AI
|
We present a flexible rule compiler developed for a text-to-speech (TTS)
system. The compiler converts a set of rules into a finite-state transducer
(FST). The input and output of the FST are subject to parameterization, so that
the system can be applied to strings and sequences of feature-structures. The
resulting transducer is guaranteed to realize a function (as opposed to a
relation), and therefore can be implemented as a deterministic device (either a
deterministic FST or a bimachine).
|
cs/0404001
|
On the Practicality of Intrinsic Reconfiguration As a Fault Recovery
Method in Analog Systems
|
cs.PF cs.NE
|
Evolvable hardware combines the powerful search capability of evolutionary
algorithms with the flexibility of reprogrammable devices, thereby providing a
natural framework for reconfiguration. This framework has generated an interest
in using evolvable hardware for fault-tolerant systems because reconfiguration
can effectively deal with hardware faults whenever it is impossible to provide
spares. But systems cannot tolerate faults indefinitely, which means
reconfiguration does have a deadline. The focus of previous evolvable hardware
research relating to fault-tolerance has been primarily restricted to restoring
functionality, with no real consideration of time constraints. In this paper we
are concerned with evolvable hardware performing reconfiguration under deadline
constraints. In particular, we investigate reconfigurable hardware that
undergoes intrinsic evolution. We show that fault recovery done by intrinsic
reconfiguration has some restrictions, which designers cannot ignore.
|
cs/0404002
|
Mathematical Analysis of Multi-Agent Systems
|
cs.RO cs.MA
|
We review existing approaches to mathematical modeling and analysis of
multi-agent systems in which complex collective behavior arises out of local
interactions between many simple agents. Though the behavior of an individual
agent can be considered to be stochastic and unpredictable, the collective
behavior of such systems can have a simple probabilistic description. We show
that a class of mathematical models that describe the dynamics of collective
behavior of multi-agent systems can be written down from the details of the
individual agent controller. The models are valid for Markov or memoryless
agents, in which each agents future state depends only on its present state and
not any of the past states. We illustrate the approach by analyzing in detail
applications from the robotics domain: collaboration and foraging in groups of
robots.
|
cs/0404003
|
Enhancing the expressive power of the U-Datalog language
|
cs.DB
|
U-Datalog has been developed with the aim of providing a set-oriented logical
update language, guaranteeing update parallelism in the context of a
Datalog-like language. In U-Datalog, updates are expressed by introducing
constraints (+p(X), to denote insertion, and [minus sign]p(X), to denote
deletion) inside Datalog rules. A U-Datalog program can be interpreted as a CLP
program. In this framework, a set of updates (constraints) is satisfiable if it
does not represent an inconsistent theory, that is, it does not require the
insertion and the deletion of the same fact. This approach resembles a very
simple form of negation. However, U-Datalog does not provide
any mechanism to explicitly deal with negative information, resulting in a
language with limited expressive power. In this paper, we provide a semantics,
based on stratification, handling the use of negated atoms in U-Datalog
programs, and we show which problems arise in defining a compositional
semantics.
|
cs/0404004
|
Dealing With Curious Players in Secure Networks
|
cs.CR cs.GT cs.MA
|
In secure communications networks there are a great number of user
behavioural problems that need to be dealt with. Curious players pose a very
real and serious threat to the integrity of such a network: by traversing a
network, a curious player could uncover secret information that the user has
no need to know, simply by posing as a loyalty check. Loyalty checks are done
to gauge the integrity of the network with respect to players who act
maliciously. We propose a method that deals with curious players trying to
obtain "need to know" information, using a combined fault-tolerant,
cryptographic, and game-theoretic approach.
|
cs/0404006
|
Delimited continuations in natural language: quantification and polarity
sensitivity
|
cs.CL cs.PL
|
Making a linguistic theory is like making a programming language: one
typically devises a type system to delineate the acceptable utterances and a
denotational semantics to explain observations on their behavior. Via this
connection, the programming language concept of delimited continuations can
help analyze natural language phenomena such as quantification and polarity
sensitivity. Using a logical metalanguage whose syntax includes control
operators and whose semantics involves evaluation order, these analyses can be
expressed in direct style rather than continuation-passing style, and these
phenomena can be thought of as computational side effects.
|
cs/0404007
|
Polarity sensitivity and evaluation order in type-logical grammar
|
cs.CL
|
We present a novel, type-logical analysis of _polarity sensitivity_: how
negative polarity items (like "any" and "ever") or positive ones (like "some")
are licensed or prohibited. It takes not just scopal relations but also linear
order into account, using the programming-language notions of delimited
continuations and evaluation order, respectively. It thus achieves greater
empirical coverage than previous proposals.
|
cs/0404009
|
Tabular Parsing
|
cs.CL
|
This is a tutorial on tabular parsing, based on the tabulation of
nondeterministic push-down automata. Discussed are Earley's algorithm, the
Cocke-Kasami-Younger algorithm, tabular LR parsing, the construction of parse
trees, and further issues.
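Of the algorithms listed, the Cocke-Kasami-Younger algorithm is the most compact to sketch. Below is a minimal recognizer for a grammar in Chomsky normal form; the dictionary encoding of the grammar is an illustrative assumption:

```python
def cyk_recognize(words, grammar, start="S"):
    """CYK tabular recognition. `grammar` maps each nonterminal to a list
    of right-hand sides, each either (terminal,) or (B, C)."""
    n = len(words)
    # table[i][j] = set of nonterminals deriving words[i:j]
    table = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):          # length-1 spans from terminals
        for lhs, rhss in grammar.items():
            if (w,) in rhss:
                table[i][i + 1].add(lhs)
    for span in range(2, n + 1):           # longer spans from binary rules
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):      # split point
                for lhs, rhss in grammar.items():
                    for rhs in rhss:
                        if (len(rhs) == 2 and rhs[0] in table[i][k]
                                and rhs[1] in table[k][j]):
                            table[i][j].add(lhs)
    return start in table[0][n]

grammar = {
    "S": [("NP", "VP")],
    "NP": [("she",)],
    "VP": [("runs",)],
}
print(cyk_recognize(["she", "runs"], grammar))  # True
```

Earley's algorithm and tabular LR parsing fill analogous tables but drive them from dotted rules and LR automaton states, respectively.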
|
cs/0404011
|
Parametric external predicates for the DLV System
|
cs.AI
|
This document describes syntax, semantics and implementation guidelines in
order to enrich the DLV system with the possibility to make external C function
calls. This feature is realized by the introduction of parametric external
predicates, whose extension is not specified through a logic program but
implicitly computed through external code.
|
cs/0404012
|
Toward the Implementation of Functions in the DLV System (Preliminary
Technical Report)
|
cs.AI
|
This document describes the functions as they are treated in the DLV system.
We give first the language, then specify the main implementation issues.
|
cs/0404013
|
Tycoon: A Distributed Market-based Resource Allocation System
|
cs.DC cs.MA
|
P2P clusters like the Grid and PlanetLab enable in principle the same
statistical multiplexing efficiency gains for computing as the Internet
provides for networking. The key unsolved problem is resource allocation.
Existing solutions are not economically efficient and require high latency to
acquire resources. We designed and implemented Tycoon, a market-based
distributed resource allocation system based on an Auction Share scheduling
algorithm. Preliminary results show that Tycoon achieves low latency and high
fairness while providing incentives for truth-telling on the part of strategic
users.
|
cs/0404017
|
Exploring tradeoffs in pleiotropy and redundancy using evolutionary
computing
|
cs.NE cs.NI
|
Evolutionary computation algorithms are increasingly being used to solve
optimization problems as they have many advantages over traditional
optimization algorithms. In this paper we use evolutionary computation to study
the trade-off between pleiotropy and redundancy in a client-server based
network. Pleiotropy is a term used to describe components that perform multiple
tasks, while redundancy refers to multiple components performing the same
task. Pleiotropy reduces cost but lacks robustness, while redundancy increases
network reliability but is more costly; together, pleiotropy and redundancy
build flexibility and robustness into systems. Therefore it is desirable to
have a network that contains a balance between pleiotropy and redundancy. We
explore how factors such as link failure probability, repair rates, and the
size of the network influence the design choices that we explore using genetic
algorithms.
|
cs/0404018
|
NLML--a Markup Language to Describe the Unlimited English Grammar
|
cs.CL cs.AI
|
In this paper we present NLML (Natural Language Markup Language), a markup
language to describe the syntactic and semantic structure of any grammatically
correct English expression. At first the related works are analyzed to
demonstrate the necessity of the NLML: simple form, easy management and direct
storage. Then the description of the English grammar with NLML is introduced in
details in three levels: sentences (with different complexities, voices, moods,
and tenses), clause (relative clause and noun clause) and phrase (noun phrase,
verb phrase, prepositional phrase, adjective phrase, adverb phrase and
predicate phrase). At last the application fields of the NLML in NLP are shown
with two typical examples: NLOJM (Natural Language Object Modal in Java) and
NLDB (Natural Language Database).
|
cs/0404019
|
Optimizing genetic algorithm strategies for evolving networks
|
cs.NE cs.NI
|
This paper explores the use of genetic algorithms for the design of networks,
where the demands on the network fluctuate in time. For varying network
constraints, we find the best network using the standard genetic algorithm
operators such as inversion, mutation and crossover. We also examine how the
choice of genetic algorithm operators affects the quality of the best network
found. Such networks typically contain redundancy, where several servers
perform the same task, and pleiotropy, where servers perform multiple tasks.
We explore this trade-off between pleiotropy and redundancy in terms of cost
versus reliability as a measure of the quality of the network.
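The standard operators named above (inversion, mutation, crossover) can be sketched on bitstring chromosomes. This is an illustrative, generic version, not the paper's network-specific encoding:

```python
import random

def mutate(bits, rate=0.05):
    """Flip each bit independently with probability `rate`."""
    return [b ^ (random.random() < rate) for b in bits]

def crossover(a, b):
    """One-point crossover: swap the tails of two parents at a random cut."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

def inversion(bits):
    """Reverse a randomly chosen segment of the chromosome."""
    i, j = sorted(random.sample(range(len(bits) + 1), 2))
    return bits[:i] + bits[i:j][::-1] + bits[j:]

parent_a, parent_b = [0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1]
child_a, child_b = crossover(parent_a, parent_b)
print(child_a, child_b)
print(inversion([1, 0, 1, 1, 0, 0]))
```

In a network-design GA, the bitstring would encode which links or server roles are present, and a fitness function would score cost against reliability.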
|
cs/0404024
|
Computability Logic: a formal theory of interaction
|
cs.LO cs.AI math.LO
|
Computability logic is a formal theory of (interactive) computability in the
same sense as classical logic is a formal theory of truth. This approach was
initiated very recently in "Introduction to computability logic" (Annals of
Pure and Applied Logic 123 (2003), pp.1-99). The present paper reintroduces
computability logic in a more compact and less technical way. It is written in
a semitutorial style with a general computer science, logic or mathematics
audience in mind. An Internet source on the subject is available at
http://www.cis.upenn.edu/~giorgi/cl.html, and additional material at
http://www.csc.villanova.edu/~japaridz/CL/gsoll.html .
|
cs/0404025
|
Test Collections for Patent-to-Patent Retrieval and Patent Map
Generation in NTCIR-4 Workshop
|
cs.CL
|
This paper describes the Patent Retrieval Task in the Fourth NTCIR Workshop,
and the test collections produced in this task. We perform the invalidity
search task, in which each participant group searches a patent collection for
the patents that can invalidate the demand in an existing claim. We also
perform the automatic patent map generation task, in which the patents
associated with a specific topic are organized in a multi-dimensional matrix.
|
cs/0404026
|
DAB Content Annotation and Receiver Hardware Control with XML
|
cs.GL cs.CL
|
The Eureka-147 Digital Audio Broadcasting (DAB) standard defines the 'dynamic
labels' data field for holding information about the transmission content.
However, this information does not follow a well-defined structure since it is
designed to carry text for direct output to displays, for human interpretation.
This poses a problem when machine interpretation of DAB content information is
desired. Extensible Markup Language (XML) was developed to allow for the
well-defined, structured machine-to-machine exchange of data over computer
networks. This article proposes a novel technique of machine-interpretable DAB
content annotation and receiver hardware control, involving the utilisation of
XML as metadata in the transmitted DAB frames.
|
cs/0404030
|
XML framework for concept description and knowledge representation
|
cs.AI cs.LO
|
An XML framework for concept description is given, based upon the fact that
the tree structure of XML implies the logical structure of concepts as defined
by attributional calculus. Especially, the attribute-value representation is
implementable in the XML framework. Since the attribute-value representation is
an important way to represent knowledge in AI, the framework offers an
additional, simpler alternative to the more powerful RDF technology.
|
cs/0404032
|
When Do Differences Matter? On-Line Feature Extraction Through Cognitive
Economy
|
cs.LG cs.AI cs.NE
|
For an intelligent agent to be truly autonomous, it must be able to adapt its
representation to the requirements of its task as it interacts with the world.
Most current approaches to on-line feature extraction are ad hoc; in contrast,
this paper presents an algorithm that bases judgments of state compatibility
and state-space abstraction on principled criteria derived from the
psychological principle of cognitive economy. The algorithm incorporates an
active form of Q-learning, and partitions continuous state-spaces by merging
and splitting Voronoi regions. The experiments illustrate a new methodology for
testing and comparing representations by means of learning curves. Results from
the puck-on-a-hill task demonstrate the algorithm's ability to learn effective
representations, superior to those produced by some other, well-known, methods.
|
cs/0404033
|
The Persistent Buffer Tree : An I/O-efficient Index for Temporal Data
|
cs.GL cs.DB
|
In a variety of applications, we need to keep track of the development of a
data set over time. For maintaining and querying this multi-version data
I/O-efficiently, external memory data structures are required. In this paper,
we present a probabilistic self-balancing persistent data structure in external
memory called the persistent buffer tree, which supports insertions, updates
and deletions of data items at the present version and range queries for any
version, past or present. The persistent buffer tree is I/O-optimal in the
sense that the expected amortized I/O performance bounds are asymptotically the
same as the deterministic amortized bounds of the (single version) buffer tree
in the worst case.
|
cs/0404036
|
Online Searching with an Autonomous Robot
|
cs.RO cs.DS
|
We discuss online strategies for visibility-based searching for an object
hidden behind a corner, using Kurt3D, a real autonomous mobile robot. This task
is closely related to a number of well-studied problems. Our robot uses a
three-dimensional laser scanner in a stop, scan, plan, go fashion for building
a virtual three-dimensional environment. Besides planning trajectories and
avoiding obstacles, Kurt3D is capable of identifying objects like a chair. We
derive a practically useful and asymptotically optimal strategy that guarantees
a competitive ratio of 2, which differs remarkably from the well-studied
scenario without the need of stopping for surveying the environment. Our
strategy is used by Kurt3D, documented in a separate video.
|
cs/0404038
|
2-Sat Sub-Clauses and the Hypernodal Structure of the 3-Sat Problem
|
cs.CC cs.AI
|
Like simpler graphs, nested (hypernodal) graphs consist of two components: a
set of nodes and a set of edges, where each edge connects a pair of nodes. In
the hypernodal graph model, however, a node may contain other graphs, so that a
node may be contained in a graph that it contains. The inherently recursive
structure of the hypernodal graph model aptly characterizes both the structure
and dynamics of the 3-sat problem, a broadly applicable, though intractable,
computer science problem. In this paper I first discuss the structure of the
3-sat problem, analyzing the relation of 3-sat to 2-sat, a related, though
tractable problem. I then discuss sub-clauses and sub-clause thresholds and the
transformation of sub-clauses into implication graphs, demonstrating how
combinations of implication graphs are equivalent to hypernodal graphs. I
conclude with a brief discussion of the use of hypernodal graphs to model the
3-sat problem, illustrating how hypernodal graphs model both the conditions for
satisfiability and the process by which particular 3-sat assignments either
succeed or fail.
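The transformation of 2-sat sub-clauses into implication graphs mentioned above is standard: a clause (a or b) yields the edges (not a -> b) and (not b -> a), and the formula is satisfiable iff no variable shares a strongly connected component with its negation. A compact sketch using Kosaraju's algorithm:

```python
def two_sat(num_vars, clauses):
    """Decide 2-SAT via the implication graph. Literals are +v / -v for
    variables 1..num_vars; each clause is a pair of literals."""
    def idx(lit):  # node index: 2v for +v, 2v+1 for -v
        v = abs(lit) - 1
        return 2 * v + (0 if lit > 0 else 1)
    n = 2 * num_vars
    adj = [[] for _ in range(n)]
    radj = [[] for _ in range(n)]
    for a, b in clauses:
        for u, v in ((-a, b), (-b, a)):    # (a or b) => -a -> b, -b -> a
            adj[idx(u)].append(idx(v))
            radj[idx(v)].append(idx(u))
    # Kosaraju pass 1: order nodes by DFS finish time (iterative DFS)
    order, seen = [], [False] * n
    for s in range(n):
        if seen[s]:
            continue
        seen[s] = True
        stack = [(s, iter(adj[s]))]
        while stack:
            node, it = stack[-1]
            for v in it:
                if not seen[v]:
                    seen[v] = True
                    stack.append((v, iter(adj[v])))
                    break
            else:
                order.append(node)
                stack.pop()
    # Kosaraju pass 2: label SCCs on the reverse graph
    comp, c = [-1] * n, 0
    for u in reversed(order):
        if comp[u] == -1:
            comp[u], stack = c, [u]
            while stack:
                x = stack.pop()
                for v in radj[x]:
                    if comp[v] == -1:
                        comp[v] = c
                        stack.append(v)
            c += 1
    # satisfiable iff no variable is in the same SCC as its negation
    return all(comp[2 * v] != comp[2 * v + 1] for v in range(num_vars))

print(two_sat(2, [(1, 2), (-1, 2), (1, -2)]))  # True  (x1 = x2 = True works)
print(two_sat(1, [(1, 1), (-1, -1)]))          # False (forces x1 and not x1)
```

A hypernodal graph, in the paper's terms, would combine such implication graphs for the 2-sat sub-clauses induced by partial assignments of a 3-sat instance.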
|
cs/0404039
|
Algorithms for Estimating Information Distance with Application to
Bioinformatics and Linguistics
|
cs.CC cs.CE q-bio.GN
|
After reviewing unnormalized and normalized information distances based on
incomputable notions of Kolmogorov complexity, we discuss how Kolmogorov
complexity can be approximated by data compression algorithms. We argue that
optimal algorithms for data compression with side information can be
successfully used to approximate the normalized distance. Next, we discuss an
alternative information distance, which is based on relative entropy rate (also
known as Kullback-Leibler divergence), and compression-based algorithms for its
estimation. Based on available biological and linguistic data, we arrive at the
unexpected conclusion that in Bioinformatics and Computational Linguistics this
alternative distance is more relevant and important than the ones based on
Kolmogorov complexity.
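The compression-based approximation discussed above is commonly realized as the Normalized Compression Distance, which substitutes a real compressor for the incomputable Kolmogorov complexity. A minimal sketch using zlib as the stand-in compressor (any real compressor could be substituted):

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance: (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
    where C is the compressed length under a real compressor (here zlib)."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"the quick brown fox jumps over the lazy dog " * 40
b = b"colorless green ideas sleep furiously " * 40
print(ncd(a, a) < ncd(a, b))  # True: a is closer to itself than to b
```

The relative-entropy-based distance favoured in the abstract is estimated differently (e.g. via cross-parsing or compressing one sequence with a model trained on the other), but this NCD baseline is the usual point of comparison.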
|
cs/0404041
|
NLOMJ--Natural Language Object Model in Java
|
cs.CL cs.PL
|
In this paper we present NLOMJ--a natural language object model in Java with
English as the experiment language. This model describes the grammar elements
of any permissible expression in a natural language and their complicated
relations with each other using the concept of "Object" in OOP (Object
Oriented Programming). Directly mapped to the syntax and semantics of the
natural language, it can be used in information retrieval as a linguistic
method. Around the UML diagram of the NLOMJ, the important classes (Sentence,
Clause and Phrase) and their subclasses are introduced and their syntactic and semantic
meanings are explained.
|
cs/0404042
|
Extraction of topological features from communication network
topological patterns using self-organizing feature maps
|
cs.NE cs.CV
|
Different classes of communication network topologies and their
representation in the form of adjacency matrix and its eigenvalues are
presented. A self-organizing feature map neural network is used to map
different classes of communication network topological patterns. The neural
network simulation results are reported.
|
cs/0404045
|
Speculation on graph computation architectures and computing via
synchronization
|
cs.NE cs.AI
|
A speculative overview of a future topic of research. The paper is a
collection of ideas concerning two related areas:
1) Graph computation machines ("computing with graphs"). This is the class of
models of computation in which the state of the computation is represented as a
graph or network.
2) Arc-based neural networks, which store information not as activation in
the nodes, but rather by adding and deleting arcs. Sometimes the arcs may be
interpreted as synchronization.
Warnings to readers: this is not the sort of thing that one might submit to a
journal or conference. No proofs are presented. The presentation is informal,
and written at an introductory level. You'll probably want to wait for a more
concise presentation.
|
cs/0404046
|
Visualising the structure of architectural open spaces based on shape
analysis
|
cs.CV cs.CG cs.DS
|
This paper proposes the application of some well known two-dimensional
geometrical shape descriptors for the visualisation of the structure of
architectural open spaces. The paper demonstrates the use of visibility
measures such as distance to obstacles and amount of visible space to calculate
shape descriptors such as convexity and skeleton of the open space. The aim of
the paper is to indicate a simple, objective and quantifiable approach to
understand the structure of open spaces otherwise impossible due to the complex
construction of built structures.
|
cs/0404049
|
Exploiting Cross-Document Relations for Multi-document Evolving
Summarization
|
cs.CL cs.AI
|
This paper presents a methodology for summarization from multiple documents
which are about a specific topic. It is based on the specification and
identification of the cross-document relations that occur among textual
elements within those documents. Our methodology involves the specification of
the topic-specific entities, the messages conveyed for the specific entities by
certain textual elements and the specification of the relations that can hold
among these messages. The above resources are necessary for setting up a
specific topic for our query-based summarization approach which uses these
resources to identify the query-specific messages within the documents and the
query-specific relations that connect these messages across documents.
|
cs/0404051
|
Knowledge And The Action Description Language A
|
cs.AI
|
We introduce Ak, an extension of the action description language A (Gelfond
and Lifschitz, 1993) to handle actions which affect knowledge. We use sensing
actions to increase an agent's knowledge of the world and non-deterministic
actions to remove knowledge. We include complex plans involving conditionals
and loops in our query language for hypothetical reasoning. We also present a
translation of Ak domain descriptions into epistemic logic programs.
|
cs/0404057
|
Convergence of Discrete MDL for Sequential Prediction
|
cs.LG cs.AI math.ST stat.TH
|
We study the properties of the Minimum Description Length principle for
sequence prediction, considering a two-part MDL estimator which is chosen from
a countable class of models. This applies in particular to the important case
of universal sequence prediction, where the model class corresponds to all
algorithms for some fixed universal Turing machine (this correspondence is by
enumerable semimeasures, hence the resulting models are stochastic). We prove
convergence theorems similar to Solomonoff's theorem of universal induction,
which also holds for general Bayes mixtures. The bound characterizing the
convergence speed for MDL predictions is exponentially larger as compared to
Bayes mixtures. We observe that there are at least three different ways of
using MDL for prediction. One of these has worse prediction properties, for
which predictions only converge if the MDL estimator stabilizes. We establish
sufficient conditions for this to occur. Finally, some immediate consequences
for complexity relations and randomness criteria are proven.
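The two-part MDL estimator studied above chooses, from a countable class, the model minimizing model code length plus data code length under the model. A toy sketch over a small Bernoulli model class; the uniform model code is an illustrative assumption, not the paper's construction over universal Turing machines:

```python
from math import log2

def two_part_mdl(data, models):
    """Two-part MDL: pick the model minimizing L(model) + L(data | model).
    Models here are Bernoulli parameters p; L(model) = log2(len(models))
    assumes a uniform code over the (finite) model class."""
    def codelen(p, xs):
        return (-sum(log2(p if x else 1 - p) for x in xs)
                + log2(len(models)))
    return min(models, key=lambda p: codelen(p, data))

data = [1, 1, 1, 0, 1, 1, 1, 1]          # 7 ones, 1 zero
print(two_part_mdl(data, [0.1, 0.5, 0.9]))  # 0.9
```

Sequential prediction then uses the selected model's next-symbol probability; the paper's convergence results concern how this predictor behaves as the data grows.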
|
cs/0405002
|
Splitting an operator: Algebraic modularity results for logics with
fixpoint semantics
|
cs.AI cs.LO
|
It is well known that, under certain conditions, it is possible to split
logic programs under stable model semantics, i.e. to divide such a program into
a number of different "levels", such that the models of the entire program can
be constructed by incrementally constructing models for each level. Similar
results exist for other non-monotonic formalisms, such as auto-epistemic logic
and default logic. In this work, we present a general, algebraic splitting
theory for logics with a fixpoint semantics. Together with the framework of
approximation theory, a general fixpoint theory for arbitrary operators, this
gives us a uniform and powerful way of deriving splitting results for each
logic with a fixpoint semantics. We demonstrate the usefulness of these
results, by generalizing existing results for logic programming, auto-epistemic
logic and default logic.
|
cs/0405004
|
Quantum Computers
|
cs.AI cs.AR
|
This research paper gives an overview of quantum computers - description of
their operation, differences between quantum and silicon computers, major
construction problems of a quantum computer and many other basic aspects. No
special scientific knowledge is necessary for the reader.
|
cs/0405005
|
Maximum-likelihood decoding of Reed-Solomon Codes is NP-hard
|
cs.CC cs.DM cs.IT math.IT
|
Maximum-likelihood decoding is one of the central algorithmic problems in
coding theory. It has been known for over 25 years that maximum-likelihood
decoding of general linear codes is NP-hard. Nevertheless, it was so far
unknown whether maximum- likelihood decoding remains hard for any specific
family of codes with nontrivial algebraic structure. In this paper, we prove
that maximum-likelihood decoding is NP-hard for the family of Reed-Solomon
codes. We moreover show that maximum-likelihood decoding of Reed-Solomon codes
remains hard even with unlimited preprocessing, thereby strengthening a result
of Bruck and Naor.
|
cs/0405007
|
"In vivo" spam filtering: A challenge problem for data mining
|
cs.AI cs.DB cs.IR
|
Spam, also known as Unsolicited Commercial Email (UCE), is the bane of email
communication. Many data mining researchers have addressed the problem of
detecting spam, generally by treating it as a static text classification
problem. True in vivo spam filtering has characteristics that make it a rich
and challenging domain for data mining. Indeed, real-world datasets with these
characteristics are typically difficult to acquire and to share. This paper
demonstrates some of these characteristics and argues that researchers should
pursue in vivo spam filtering as an accessible domain for investigating them.
|
cs/0405008
|
A Comparative Study of Fuzzy Classification Methods on Breast Cancer
Data
|
cs.AI
|
In this paper, we examine the performance of four fuzzy rule generation
methods on Wisconsin breast cancer data. The first method generates fuzzy
if-then rules using the mean and the standard deviation of attribute values.
The second approach generates fuzzy if-then rules using the histogram of
attribute values. The third procedure generates fuzzy if-then rules with
certainty grades, partitioning each attribute into homogeneous fuzzy sets. In
the fourth approach, only overlapping areas are partitioned. The first two
approaches generate a single fuzzy if-then rule for each class by specifying
the membership function of each
antecedent fuzzy set using the information about attribute values of training
patterns. The other two approaches are based on fuzzy grids with homogeneous
fuzzy partitions of each attribute. The performance of each approach is
evaluated on breast cancer data sets. Simulation results show that the Modified
grid approach achieves a high classification rate of 99.73%.
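The first rule-generation method described above, building membership functions from the mean and standard deviation of attribute values, can be sketched as follows. The Gaussian membership form and the min t-norm are illustrative assumptions, not the paper's exact formulation:

```python
from math import exp
from statistics import mean, stdev

def make_class_rule(train_rows):
    """One fuzzy if-then rule per class: each attribute gets a Gaussian
    membership function centred at the class mean, with width given by
    the class standard deviation of that attribute."""
    cols = list(zip(*train_rows))
    params = [(mean(c), stdev(c) or 1.0) for c in cols]
    def membership(x):
        # combine per-attribute memberships with min (a common t-norm)
        return min(exp(-((v - m) ** 2) / (2 * s ** 2))
                   for v, (m, s) in zip(x, params))
    return membership

# Hypothetical two-attribute training data, not the Wisconsin dataset
benign = [(2.0, 1.0), (2.2, 1.1), (1.8, 0.9)]
malignant = [(7.0, 8.0), (7.5, 8.4), (6.5, 7.6)]
rules = {"benign": make_class_rule(benign),
         "malignant": make_class_rule(malignant)}
sample = (2.1, 1.0)
print(max(rules, key=lambda c: rules[c](sample)))  # benign
```

Classification picks the class whose rule gives the sample the highest membership; the histogram- and grid-based methods differ only in how the membership functions are constructed.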
|
cs/0405009
|
Intelligent Systems: Architectures and Perspectives
|
cs.AI
|
The integration of different learning and adaptation techniques to overcome
individual limitations and to achieve synergetic effects through the
hybridization or fusion of these techniques has, in recent years, contributed
to a large number of new intelligent system designs. Computational intelligence
is an innovative framework for constructing intelligent hybrid architectures
involving Neural Networks (NN), Fuzzy Inference Systems (FIS), Probabilistic
Reasoning (PR) and derivative free optimization techniques such as Evolutionary
Computation (EC). Most of these hybridization approaches, however, follow an ad
hoc design methodology, justified by success in certain application domains.
Due to the lack of a common framework it often remains difficult to compare the
various hybrid systems conceptually and to evaluate their performance
comparatively. This chapter introduces the different generic architectures for
integrating intelligent systems. The designing aspects and perspectives of
different hybrid architectures like NN-FIS, EC-FIS, EC-NN, FIS-PR and NN-FIS-EC
systems are presented. Some conclusions are also provided towards the end.
|
cs/0405010
|
A Neuro-Fuzzy Approach for Modelling Electricity Demand in Victoria
|
cs.AI
|
Neuro-fuzzy systems have attracted growing interest of researchers in various
scientific and engineering areas due to the increasing need of intelligent
systems. This paper evaluates the use of two popular soft computing techniques
and conventional statistical approach based on Box--Jenkins autoregressive
integrated moving average (ARIMA) model to predict electricity demand in the
State of Victoria, Australia. The soft computing methods considered are an
evolving fuzzy neural network (EFuNN) and an artificial neural network (ANN)
trained using scaled conjugate gradient algorithm (CGA) and backpropagation
(BP) algorithm. The forecast accuracy is compared with the forecasts used by
Victorian Power Exchange (VPX) and the actual energy demand. To evaluate, we
considered load demand patterns for 10 consecutive months taken every 30 min
for training the different prediction models. Test results show that the
neuro-fuzzy system performed better than neural networks, ARIMA model and the
VPX forecasts.
|
cs/0405011
|
Neuro Fuzzy Systems: State-of-the-Art Modeling Techniques
|
cs.AI
|
Fusion of Artificial Neural Networks (ANN) and Fuzzy Inference Systems (FIS)
have attracted the growing interest of researchers in various scientific and
engineering areas due to the growing need of adaptive intelligent systems to
solve the real world problems. ANN learns from scratch by adjusting the
interconnections between layers. FIS is a popular computing framework based on
the concept of fuzzy set theory, fuzzy if-then rules, and fuzzy reasoning. The
advantages of a combination of ANN and FIS are obvious. There are several
approaches to integrate ANN and FIS and very often it depends on the
application. We broadly classify the integration of ANN and FIS into three
categories namely concurrent model, cooperative model and fully fused model.
This paper starts with a discussion of the features of each model and
generalizes the advantages and deficiencies of each. We further focus the
review on the different types of fused neuro-fuzzy systems, citing the
advantages and disadvantages of each model.
|
cs/0405012
|
Is Neural Network a Reliable Forecaster on Earth? A MARS Query!
|
cs.AI
|
Long-term rainfall prediction is a challenging task especially in the modern
world where we are facing the major environmental problem of global warming. In
general, climate and rainfall are highly non-linear phenomena in nature
exhibiting what is known as the butterfly effect. While some regions of the
world are noticing a systematic decrease in annual rainfall, others notice
increases in flooding and severe storms. The global nature of this phenomenon
is very complicated and requires sophisticated computer modeling and simulation
to predict accurately. In this paper, we report a performance analysis for
Multivariate Adaptive Regression Splines (MARS) and artificial neural networks
for one-month-ahead prediction of rainfall. To evaluate the prediction
efficiency, we made use of 87 years of rainfall data from Kerala state, the
southern part of the Indian peninsula, situated at latitude-longitude
coordinates (8°29'N, 76°57'E). We used an artificial neural network trained using the
scaled conjugate gradient algorithm. The neural network and MARS were trained
with 40 years of rainfall data. For performance evaluation, network predicted
outputs were compared with the actual rainfall data. Simulation results reveal
that MARS is a good forecasting tool and performed better than the considered
neural network.
|
cs/0405013
|
DCT Based Texture Classification Using Soft Computing Approach
|
cs.AI
|
Classification of texture pattern is one of the most important problems in
pattern recognition. In this paper, we present a classification method based on
the Discrete Cosine Transform (DCT) coefficients of texture image. As DCT works
on gray level image, the color scheme of each image is transformed into gray
levels. For classifying the images using DCT we used two popular soft computing
techniques namely neurocomputing and neuro-fuzzy computing. We used a
feedforward neural network trained using the backpropagation learning and an
evolving fuzzy neural network to classify the textures. The soft computing
models were trained using 80% of the texture data and the remainder was used
for testing and validation purposes. A performance comparison was made among the
soft computing models for the texture classification problem. We also analyzed
the effects of prolonged training of neural networks. It is observed that the
proposed neuro-fuzzy model performed better than the neural network.
|
cs/0405014
|
Estimating Genome Reversal Distance by Genetic Algorithm
|
cs.AI
|
Sorting by reversals is an important problem in inferring the evolutionary
relationship between two genomes. The problem of sorting an unsigned
permutation has been proven to be NP-hard, and the best guaranteed error bound
is achieved by a 3/2-approximation algorithm. However, the problem of sorting a
signed permutation can be solved easily: fast algorithms have been developed
both for finding the sorting sequence and for finding the reversal distance of
a signed permutation. In this paper, we present a way to view the problem of
sorting an unsigned permutation as one of sorting a signed permutation; the
problem can then be seen as searching for an optimal signed permutation among
all 2^n corresponding signed permutations. We use a genetic algorithm to
conduct the search. Our experimental results show that the proposed method
outperforms the 3/2-approximation algorithm.
|
cs/0405016
|
Intrusion Detection Systems Using Adaptive Regression Splines
|
cs.AI
|
The past few years have witnessed a growing recognition of intelligent
techniques for the construction of efficient and reliable intrusion detection
systems. Due to increasing incidents of cyber attacks, building effective
intrusion detection systems (IDS) is essential for protecting information systems
security, and yet it remains an elusive goal and a great challenge. In this
paper, we report a performance analysis between Multivariate Adaptive
Regression Splines (MARS), neural networks and support vector machines. The
MARS procedure builds flexible regression models by fitting separate splines to
distinct intervals of the predictor variables. A brief comparison of different
neural network learning algorithms is also given.
|
cs/0405017
|
Data Mining Approach for Analyzing Call Center Performance
|
cs.AI
|
The aim of our research was to apply well-known data mining techniques (such
as linear neural networks, multi-layered perceptrons, probabilistic neural
networks, classification and regression trees, support vector machines and
finally a hybrid decision tree neural network approach) to the problem of
predicting the quality of service in call centers; based on the performance
data actually collected in a call center of a large insurance company. Our aim
was two-fold. First, to compare the performance of models built using the
above-mentioned techniques and, second, to analyze the characteristics of the
input sensitivity in order to better understand the relationship between the
performance evaluation process and the actual performance, and in this way help
improve the performance of call centers. In this paper we summarize our
findings.
|
cs/0405018
|
Modeling Chaotic Behavior of Stock Indices Using Intelligent Paradigms
|
cs.AI
|
The use of intelligent systems for stock market predictions has been widely
established. In this paper, we investigate how the seemingly chaotic behavior
of stock markets could be well represented using several connectionist
paradigms and soft computing techniques. To demonstrate the different
techniques, we considered the Nasdaq-100 index of the Nasdaq Stock Market and
the S&P CNX NIFTY stock index. We analyzed 7 years' Nasdaq-100 main index
values and 4 years' NIFTY index values. This paper investigates the development of a
reliable and efficient technique to model the seemingly chaotic behavior of
stock markets. We considered an artificial neural network trained using
Levenberg-Marquardt algorithm, Support Vector Machine (SVM), Takagi-Sugeno
neuro-fuzzy model and a Difference Boosting Neural Network (DBNN). This paper
briefly explains how the different connectionist paradigms could be formulated
using different learning methods and then investigates whether they can provide
the required level of performance, sufficiently good and robust to serve as a
reliable forecast model for stock market indices. Experiment
results reveal that all the connectionist paradigms considered could represent
the stock indices behavior very accurately.
|
cs/0405019
|
Hybrid Fuzzy-Linear Programming Approach for Multi Criteria Decision
Making Problems
|
cs.AI
|
The purpose of this paper is to point to the usefulness of applying a linear
mathematical formulation of fuzzy multiple criteria objective decision methods
in organising business activities. In this respect fuzzy parameters of linear
programming are modelled by preference-based membership functions. This paper
begins with an introduction and some related research followed by some
fundamentals of fuzzy set theory and technical concepts of fuzzy multiple
objective decision models. Further a real case study of a manufacturing plant
and the implementation of the proposed technique is presented. Empirical
results clearly show the superiority of the fuzzy technique in optimising
individual objective functions when compared to a non-fuzzy approach.
Furthermore, for the problem considered, the optimal solution helps to infer
that incorporating fuzziness in a linear programming model, either in the
constraints or in both the objective functions and the constraints, provides a
similar (or even better) level of satisfaction in the obtained results compared
to non-fuzzy linear programming.
|
cs/0405024
|
Meta-Learning Evolutionary Artificial Neural Networks
|
cs.AI
|
In this paper, we present MLEANN (Meta-Learning Evolutionary Artificial
Neural Network), an automatic computational framework for the adaptive
optimization of artificial neural networks wherein the neural network
architecture, activation function, connection weights, learning algorithm and
its parameters are adapted according to the problem. We explored the
performance of MLEANN and conventionally designed artificial neural networks
for function approximation problems. To evaluate the comparative performance,
we used three different well-known chaotic time series. We also present the
state of the art popular neural network learning algorithms and some
experimentation results related to convergence speed and generalization
performance. We explored the performance of the backpropagation algorithm,
conjugate gradient algorithm, quasi-Newton algorithm and Levenberg-Marquardt
algorithm for the three chaotic time series. Performances of the different
learning algorithms were evaluated when the activation functions and
architecture were changed. We further present the theoretical background,
algorithm and design strategy, and demonstrate how effective the proposed
MLEANN framework is for designing a neural network that is smaller, faster and
has better generalization performance.
|
cs/0405025
|
The Largest Compatible Subset Problem for Phylogenetic Data
|
cs.AI
|
Phylogenetic tree construction is the task of inferring the evolutionary
relationship between species from experimental data. However, the experimental
data are often imperfect and conflict with each other. Therefore, it is
important to extract the motif from the imperfect data. The largest compatible
subset problem is that, given a set of experimental data, we want to discard
the minimum amount such that the remainder is compatible. The largest
compatible subset problem can be viewed as the vertex cover problem in graph
theory, which has been proven to be NP-hard. In this paper, we propose a hybrid Evolutionary
Computing (EC) method for this problem. The proposed method combines the EC
approach and the algorithmic approach for special structured graphs. As a
result, the complexity of the problem is dramatically reduced. Experiments were
performed on randomly generated graphs with different edge densities. The
vertex covers produced by the proposed method were then compared to the vertex
covers produced by a 2-approximation algorithm. The experimental results showed
that the proposed method consistently outperformed a classical 2-approximation
algorithm. Furthermore, a significant improvement was found when the graph
density was small.
|
cs/0405026
|
A Concurrent Fuzzy-Neural Network Approach for Decision Support Systems
|
cs.AI
|
Decision-making is a process of choosing among alternative courses of action
for solving complicated problems where multi-criteria objectives are involved.
The past few years have witnessed a growing recognition of Soft Computing
technologies that underlie the conception, design and utilization of
intelligent systems. Several works have been done where engineers and
scientists have applied intelligent techniques and heuristics to obtain optimal
decisions from imprecise information. In this paper, we present a concurrent
fuzzy-neural network approach combining unsupervised and supervised learning
techniques to develop the Tactical Air Combat Decision Support System (TACDSS).
Experiment results clearly demonstrate the efficiency of the proposed
technique.
|
cs/0405027
|
Evolution of a Subsumption Architecture Neurocontroller
|
cs.AI cs.NE
|
An approach to robotics called layered evolution, which merges features from
the subsumption architecture into evolutionary robotics, is presented, and its
advantages are discussed. This approach is used to construct a layered
controller for a simulated robot that learns which light source to approach in
an environment with obstacles. The evolvability and performance of layered
evolution on this task are compared to (standard) monolithic evolution,
incremental and modularised evolution. To corroborate the hypothesis that a
layered controller performs at least as well as an integrated one, the evolved
layers are merged back into a single network. On the grounds of the test
results, it is argued that layered evolution provides a superior approach for
many tasks, and it is suggested that this approach may be the key to scaling up
evolutionary robotics.
|
cs/0405028
|
Analysis of Hybrid Soft and Hard Computing Techniques for Forex
Monitoring Systems
|
cs.AI
|
In a universe with a single currency, there would be no foreign exchange
market, no foreign exchange rates, and no foreign exchange. Over the past
twenty-five years, the way the market has performed those tasks has changed
enormously. Intelligent monitoring systems have become a necessity to keep
track of the complex forex market. The vast currency market is a
foreign concept to the average individual. However, once it is broken down into
simple terms, the average individual can begin to understand the foreign
exchange market and use it as a financial instrument for future investing. In
this paper, we attempt to compare the performance of hybrid soft computing and
hard computing techniques to predict the average monthly forex rates one month
ahead. The soft computing models considered are a neural network trained by the
scaled conjugate gradient algorithm and a neuro-fuzzy model implementing a
Takagi-Sugeno fuzzy inference system. We also considered Multivariate Adaptive
Regression Splines (MARS), Classification and Regression Trees (CART) and a
hybrid CART-MARS technique. We considered the exchange rates of Australian
dollar with respect to US dollar, Singapore dollar, New Zealand dollar,
Japanese yen and United Kingdom pounds. The models were trained using 70% of
the data and the remainder was used for testing and validation purposes. It is
observed that the proposed hybrid models could predict the forex rates more
accurately than any of the techniques applied individually. Empirical results
also reveal that the hybrid hard computing approach improved upon some of our
previous work using a neuro-fuzzy approach.
|
cs/0405029
|
A New Computational Framework For 2D Shape-Enclosing Contours
|
cs.CV cs.CG
|
In this paper, a new framework for one-dimensional contour extraction from
discrete two-dimensional data sets is presented. Contour extraction is
important in many scientific fields such as digital image processing, computer
vision, pattern recognition, etc. This novel framework includes (but is not
limited to) algorithms for dilated contour extraction, contour displacement,
shape skeleton extraction, contour continuation, shape feature based contour
refinement and contour simplification. Many of the new techniques depend
strongly on the application of a Delaunay tessellation. In order to demonstrate
the versatility of this novel toolbox approach, the contour extraction
techniques presented here are applied to scientific problems in material
science, biology and heavy ion physics.
|
cs/0405030
|
Business Intelligence from Web Usage Mining
|
cs.AI
|
The rapid e-commerce growth has made both business community and customers
face a new situation. Due to intense competition on one hand and the customer's
option to choose from several alternatives business community has realized the
necessity of intelligent marketing strategies and relationship management. Web
usage mining attempts to discover useful knowledge from the secondary data
obtained from the interactions of the users with the Web. Web usage mining has
become very critical for effective Web site management, creating adaptive Web
sites, business and support services, personalization, network traffic flow
analysis and so on. In this paper, we present the important concepts of Web
usage mining and its various practical applications. We further present a novel
approach 'intelligent-miner' (i-Miner) to optimize the concurrent architecture
of a fuzzy clustering algorithm (to discover web data clusters) and a fuzzy
inference system to analyze the Web site visitor trends. A hybrid evolutionary
fuzzy clustering algorithm is proposed in this paper to optimally segregate
similar user interests. The clustered data is then used to analyze the trends
using a Takagi-Sugeno fuzzy inference system learned using a combination of
evolutionary algorithm and neural network learning. The proposed approach is
compared with self-organizing maps (to discover patterns) and several function
approximation techniques like neural networks, linear genetic programming and
Takagi-Sugeno fuzzy inference system (to analyze the clusters). The results are
graphically illustrated and the practical significance is discussed in detail.
Empirical results clearly show that the proposed Web usage-mining framework is
efficient.
|
cs/0405031
|
Adaptation of Mamdani Fuzzy Inference System Using Neuro-Genetic
Approach for Tactical Air Combat Decision Support System
|
cs.AI
|
Normally a decision support system is built to solve problems where
multi-criteria decisions are involved. The knowledge base is the vital part of
the decision support system, containing the information or data that is used
in the decision-making process. This is a field where engineers and scientists
have applied several intelligent techniques and heuristics to obtain optimal
decisions from imprecise information. In this paper, we present a hybrid
neuro-genetic learning approach for the adaptation of a Mamdani fuzzy
inference system for the Tactical Air Combat Decision Support System (TACDSS).
Some simulation results demonstrating the differences between the learning
techniques are also provided.
|
cs/0405032
|
EvoNF: A Framework for Optimization of Fuzzy Inference Systems Using
Neural Network Learning and Evolutionary Computation
|
cs.AI
|
Several adaptation techniques have been investigated to optimize fuzzy
inference systems. Neural network learning algorithms have been used to
determine the parameters of a fuzzy inference system. Such models are often
called integrated neuro-fuzzy models. In an integrated neuro-fuzzy model there
is no guarantee that the neural network learning algorithm will converge and
that the tuning of the fuzzy inference system will be successful. The success
of evolutionary search procedures for the optimization of fuzzy inference systems is
well proven and established in many application areas. In this paper, we will
explore how the optimization of fuzzy inference systems could be further
improved using a meta-heuristic approach combining neural network learning and
evolutionary computation. The proposed technique could be considered as a
methodology to integrate neural networks, fuzzy inference systems and
evolutionary search procedures. We present the theoretical frameworks and some
experimental results to demonstrate the efficiency of the proposed technique.
|
cs/0405033
|
Optimization of Evolutionary Neural Networks Using Hybrid Learning
Algorithms
|
cs.AI
|
Evolutionary artificial neural networks (EANNs) refer to a special class of
artificial neural networks (ANNs) in which evolution is another fundamental
form of adaptation in addition to learning. Evolutionary algorithms are used to
adapt the connection weights, network architecture and learning algorithms
according to the problem environment. Even though evolutionary algorithms are
well known as efficient global search algorithms, very often they miss the best
local solutions in the complex solution space. In this paper, we propose a
hybrid meta-heuristic learning approach combining evolutionary learning and
local search methods (using 1st and 2nd order error information) to improve
learning and obtain faster convergence compared to a direct evolutionary approach.
The proposed technique is tested on three different chaotic time series and the
test results are compared with some popular neuro-fuzzy systems and a recently
developed cutting angle method of global optimization. Empirical results reveal
that the proposed technique is efficient in spite of the computational
complexity.
|
cs/0405037
|
A Probabilistic Model of Machine Translation
|
cs.CL
|
A probabilistic model for computer-based generation of a machine translation
system on the basis of English-Russian parallel text corpora is suggested. The
model is trained using parallel text corpora with pre-aligned source and target
sentences. The training of the model results in a bilingual dictionary of words
and "word blocks" with relevant translation probability.
|
cs/0405038
|
Deductive Algorithmic Knowledge
|
cs.AI cs.LO
|
The framework of algorithmic knowledge assumes that agents use algorithms to
compute the facts they explicitly know. In many cases of interest, a deductive
system, rather than a particular algorithm, captures the formal reasoning used
by the agents to compute what they explicitly know. We introduce a logic for
reasoning about both implicit and explicit knowledge with the latter defined
with respect to a deductive system formalizing a logical theory for agents. The
highly structured nature of deductive systems leads to very natural
axiomatizations of the resulting logic when interpreted over any fixed
deductive system. The decision problem for the logic, in the presence of a
single agent, is NP-complete in general, no harder than propositional logic. It
remains NP-complete when we fix a deductive system that is decidable in
nondeterministic polynomial time. These results extend in a straightforward way
to multiple agents.
|
cs/0405039
|
Catching the Drift: Probabilistic Content Models, with Applications to
Generation and Summarization
|
cs.CL
|
We consider the problem of modeling the content structure of texts within a
specific domain, in terms of the topics the texts address and the order in
which these topics appear. We first present an effective knowledge-lean method
for learning content models from un-annotated documents, utilizing a novel
adaptation of algorithms for Hidden Markov Models. We then apply our method to
two complementary tasks: information ordering and extractive summarization. Our
experiments show that incorporating content models in these applications yields
substantial improvement over previously-proposed methods.
|
cs/0405041
|
The modulus in the CAD system drawings as a base of developing of the
problem-oriented extensions
|
cs.CE cs.DS
|
The concept of the "modulus" in the CAD system drawings is characterized,
being a base of developing of the problem-oriented extensions. The modulus
consists of visible geometric elements of the drawing and invisible parametric
representation of the modelling object. The technological advantages of
moduluss in a complex CAD system developing are described.
|
cs/0405043
|
Prediction with Expert Advice by Following the Perturbed Leader for
General Weights
|
cs.LG cs.AI
|
When applying aggregating strategies to Prediction with Expert Advice, the
learning rate must be adaptively tuned. The natural choice of
sqrt(complexity/current loss) renders the analysis of Weighted Majority
derivatives quite complicated. In particular, for arbitrary weights there have
been no results proven so far. The analysis of the alternative "Follow the
Perturbed Leader" (FPL) algorithm from Kalai (2003} (based on Hannan's
algorithm) is easier. We derive loss bounds for adaptive learning rate and both
finite expert classes with uniform weights and countable expert classes with
arbitrary weights. For the former setup, our loss bounds match the best known
results so far, while for the latter our results are (to our knowledge) new.
|
cs/0405044
|
Corpus structure, language models, and ad hoc information retrieval
|
cs.IR cs.CL
|
Most previous work on the recently developed language-modeling approach to
information retrieval focuses on document-specific characteristics, and
therefore does not take into account the structure of the surrounding corpus.
We propose a novel algorithmic framework in which information provided by
document-based language models is enhanced by the incorporation of information
drawn from clusters of similar documents. Using this framework, we develop a
suite of new algorithms. Even the simplest typically outperforms the standard
language-modeling approach in precision and recall, and our new interpolation
algorithm posts statistically significant improvements for both metrics over
all three corpora tested.
|
cs/0405047
|
Modular technology of developing of the problem-oriented extensions of a
CAD system of reconstruction of the plant
|
cs.CE cs.DS
|
The modular technology of creating problem-oriented extensions of a CAD system
is described, as realised in the TechnoCAD GlassX system for designing the
reconstruction of plants. The modularity of the technology is expressed in the
storage of all design parameters in one element of the drawing - the modulus -
with automatic generation of the geometrical part of the modulus from these
parameters. The common principles of the system organization of extension
development are described: separation of the part of the design to be
automated in the extension, architecture of parameters in the form of lists of
objects with their properties and links to other objects, separation of common
and special operations, stages of development, and boundaries of applicability
of the technology.
|
cs/0405049
|
Export Behaviour Modeling Using EvoNF Approach
|
cs.AI
|
The academic literature suggests that the extent of exporting by
multinational corporation subsidiaries (MCS) depends on the products they
manufacture, their resources, tax protection, customers and markets, involvement
strategy, financial independence and suppliers' relationship with a
multinational corporation (MNC). The aim of this paper is to model the complex
export pattern behaviour using a Takagi-Sugeno fuzzy inference system in order
to determine the actual volume of MCS export output (sales exported). The
proposed fuzzy inference system is optimised by using neural network learning
and evolutionary computation. Empirical results clearly show that the proposed
approach could model the export behaviour reasonably well compared to a direct
neural network approach.
|