| id | title | categories | abstract |
|---|---|---|---|
cs/0411034
|
Generating Conditional Probabilities for Bayesian Networks: Easing the
Knowledge Acquisition Problem
|
cs.AI
|
The number of probability distributions required to populate a conditional
probability table (CPT) in a Bayesian network grows exponentially with the
number of parent-nodes associated with that table. If the table is to be
populated through knowledge elicited from a domain expert then the sheer
magnitude of the task forms a considerable cognitive barrier. In this paper we
devise an algorithm to populate the CPT while easing the extent of knowledge
acquisition. The input to the algorithm consists of a set of weights that
quantify the relative strengths of the influences of the parent-nodes on the
child-node, and a set of probability distributions whose number grows only
linearly with the number of associated parent-nodes. These are elicited from
the domain expert. The set of distributions is obtained by taking into
consideration the heuristics that experts use while arriving at probabilistic
estimations. The algorithm populates the CPT by computing appropriate
weighted sums of the elicited distributions. We invoke the methods of
information geometry to demonstrate how these weighted sums capture the
expert's judgemental strategy.
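The weighted-sum idea can be sketched in a few lines. The function, weights, and elicited distributions below are illustrative assumptions, not the paper's elicitation procedure: each CPT column for a parent configuration is taken as a normalized weighted combination of the distributions elicited for the individual parent states.

```python
# Sketch: combine per-parent elicited child distributions into one CPT
# column via a normalized weighted sum. All numbers are illustrative.

def cpt_entry(weights, elicited, config):
    """One elicited child distribution per parent state, combined by a
    weighted sum normalized over the expert-supplied influence weights."""
    total = sum(weights)
    k = len(next(iter(elicited[0].values())))   # number of child states
    combined = [0.0] * k
    for w, dists, state in zip(weights, elicited, config):
        for i, p in enumerate(dists[state]):
            combined[i] += (w / total) * p
    return combined

# Two parents influencing a 2-state child; one distribution per parent state.
weights = [2.0, 1.0]                       # parent A twice as influential as B
elicited = [
    {"a0": [0.9, 0.1], "a1": [0.2, 0.8]},  # distributions elicited for A
    {"b0": [0.7, 0.3], "b1": [0.4, 0.6]},  # distributions elicited for B
]
row = cpt_entry(weights, elicited, ("a1", "b0"))
```

The expert thus supplies two weights and four distributions here, instead of one distribution per joint parent configuration.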
|
cs/0411035
|
A FP-Tree Based Approach for Mining All Strongly Correlated Pairs
without Candidate Generation
|
cs.DB cs.AI
|
Given a user-specified minimum correlation threshold and a transaction
database, the problem of mining all strongly correlated pairs is to find all
item pairs with Pearson's correlation coefficients above the threshold.
Despite the use of an upper-bound-based pruning technique in the Taper
algorithm [1], when the number of items and transactions is very large,
candidate pair generation and testing remain costly. To avoid the costly
testing of a large number of candidate pairs, in this paper we propose an
efficient algorithm, called Tcp, based on the well-known FP-tree data
structure, for mining the complete set of strongly correlated item pairs. Our
experimental results on both synthetic and real-world datasets show that
Tcp's performance is significantly better than that of the previously
developed Taper algorithm over practical ranges of correlation threshold
specifications.
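For binary item columns, Pearson's correlation coefficient reduces to the phi coefficient, computable from item and pair supports. A brute-force sketch of the filtering criterion follows (this is neither Tcp nor Taper, and the toy database is invented):

```python
# Sketch of the pair-filtering criterion: Pearson's correlation for two
# 0/1 item columns is the phi coefficient over supports.
from itertools import combinations
from math import sqrt

def phi(db, a, b):
    n = len(db)
    sa = sum(a in t for t in db) / n            # support of item a
    sb = sum(b in t for t in db) / n            # support of item b
    sab = sum(a in t and b in t for t in db) / n  # joint support
    return (sab - sa * sb) / sqrt(sa * (1 - sa) * sb * (1 - sb))

def strongly_correlated_pairs(db, items, theta):
    return [(a, b) for a, b in combinations(items, 2)
            if phi(db, a, b) >= theta]

db = [{"milk", "bread"}, {"milk", "bread", "eggs"},
      {"eggs"}, {"milk", "bread"}]
pairs = strongly_correlated_pairs(db, ["milk", "bread", "eggs"], 0.5)
```

Tcp's contribution is precisely to avoid enumerating and testing all the pairs this brute-force loop generates.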
|
cs/0411036
|
Feedback Capacity of the First-Order Moving Average Gaussian Channel
|
cs.IT math.IT
|
The feedback capacity of the stationary Gaussian additive noise channel has
been open, except for the case where the noise is white. Here we find the
feedback capacity of the stationary first-order moving average additive
Gaussian noise channel in closed form. Specifically, the channel is given by
$Y_i = X_i + Z_i,$ $i = 1, 2, ...,$ where the input $\{X_i\}$ satisfies a power
constraint and the noise $\{Z_i\}$ is a first-order moving average Gaussian
process defined by $Z_i = \alpha U_{i-1} + U_i,$ $|\alpha| \le 1,$ with white
Gaussian innovations $U_i,$ $i = 0,1,....$
We show that the feedback capacity of this channel is $-\log x_0,$ where
$x_0$ is the unique positive root of the equation $ \rho x^2 = (1-x^2) (1 -
|\alpha|x)^2,$ and $\rho$ is the ratio of the average input power per
transmission to the variance of the noise innovation $U_i$. The optimal coding
scheme parallels the simple linear signalling scheme by Schalkwijk and Kailath
for the additive white Gaussian noise channel -- the transmitter sends a
real-valued information-bearing signal at the beginning of communication and
subsequently refines the receiver's error by processing the feedback noise
signal through a linear stationary first-order autoregressive filter. The
resulting error probability of the maximum likelihood decoding decays
doubly-exponentially in the duration of the communication. This feedback
capacity of the first-order moving average Gaussian channel is very similar in
form to the best known achievable rate for the first-order
\emph{autoregressive} Gaussian noise channel studied by Butman, Wolfowitz, and
Tiernan, although the optimality of the latter is yet to be established.
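The closed form is easy to check numerically. A bisection sketch follows (the parameter values are illustrative; the sanity check uses the known white-noise reduction):

```python
# Capacity C = -log x0, with x0 the unique positive root of
#   rho * x**2 == (1 - x**2) * (1 - abs(alpha) * x)**2  on (0, 1).
from math import log

def feedback_capacity(alpha, rho):
    f = lambda x: rho * x**2 - (1 - x**2) * (1 - abs(alpha) * x)**2
    lo, hi = 0.0, 1.0        # f(0) = -1 < 0 and f(1) = rho > 0, so bisect
    for _ in range(200):
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return -log((lo + hi) / 2)

# Sanity check: with alpha = 0 the noise is white and the formula should
# reduce to the Schalkwijk-Kailath value 0.5 * log(1 + rho).
c_white = feedback_capacity(0.0, 3.0)
```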
|
cs/0411052
|
Spontaneous Dynamics of Asymmetric Random Recurrent Spiking Neural
Networks
|
cs.NE math.PR
|
We study in this paper the effect of a unique initial stimulation on random
recurrent networks of leaky integrate-and-fire neurons. Given a stochastic
connectivity, this so-called spontaneous mode exhibits various non-trivial
dynamics. This study brings forward a mathematical formalism that allows us
to examine the variability of the subsequent dynamics according to the
parameters of the weight distribution. Under an independence hypothesis
(e.g., in the case of very large networks) we are able to compute the average
number of neurons that fire at a given time -- the spiking activity. In
accordance with numerical simulations, we prove that this spiking activity
reaches a steady state; we characterize this steady state and explore the
transients.
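The quantity being analyzed can be observed in a toy simulation. The following is an illustrative sketch only: the network size, leak factor, weight scale, and initial condition are invented, not the paper's model.

```python
# Toy discrete-time leaky integrate-and-fire network: track the spiking
# activity (fraction of neurons firing per step) after a single initial
# stimulation. All parameters are illustrative.
import random

random.seed(0)
N, leak, theta = 200, 0.9, 1.0
# Asymmetric random connectivity: independent zero-mean Gaussian weights.
W = [[random.gauss(0.0, 0.4 / N ** 0.5) for _ in range(N)] for _ in range(N)]
V = [random.uniform(0.0, 2 * theta) for _ in range(N)]  # initial stimulation

activity = []                     # spiking activity per time step
for t in range(100):
    spikes = [1 if v >= theta else 0 for v in V]
    activity.append(sum(spikes) / N)
    # Leaky integration with reset-on-spike plus recurrent input.
    V = [leak * v * (1 - s) + sum(w * sj for w, sj in zip(row, spikes))
         for v, s, row in zip(V, spikes, W)]
```

Plotting `activity` over `t` shows the transient following the stimulation and its settling, the regime the paper characterizes analytically.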
|
cs/0411069
|
CDN: Content Distribution Network
|
cs.NI cs.IR
|
The Internet evolves and operates largely without central coordination, the
lack of which was and is critically important to its rapid growth and
evolution. However, this lack of management in turn makes it very difficult
to guarantee proper performance and to deal systematically with performance
problems. Meanwhile, the available network bandwidth and server capacity
continue to be overwhelmed by skyrocketing Internet utilization and the
accelerating growth of bandwidth-intensive content. As a result, the Internet
service quality perceived by customers is largely unpredictable and
unsatisfactory. A Content Distribution Network (CDN) is an effective approach
to improving Internet service quality. A CDN replicates content from the
place of origin to replica servers scattered over the Internet and serves a
request from a replica server close to where the request originates. In this
paper, we first give an overview of CDNs. We then present the critical issues
involved in designing and implementing an effective CDN and survey the
approaches proposed in the literature to address these problems. An example
CDN is described to show how a real commercial CDN operates. After this, we
present a scheme that provides fast service location for peer-to-peer
systems, a special type of CDN with no infrastructure support. We conclude
with a brief projection about CDNs.
|
cs/0411071
|
Comparing Multi-Target Trackers on Different Force Unit Levels
|
cs.AI
|
Consider the problem of tracking a set of moving targets. Apart from the
tracking result, it is often important to know where the tracking fails, either
to steer sensors to that part of the state-space, or to inform a human operator
about the status and quality of the obtained information. An intuitive quality
measure is the correlation between two tracking results based on uncorrelated
observations. In the case of Bayesian trackers such a correlation measure could
be the Kullback-Leibler difference.
We focus on a scenario with a large number of military units moving in some
terrain. The units are observed by several types of sensors and "meta-sensors"
with force aggregation capabilities. The sensors register units of different
size. Two separate multi-target probability hypothesis density (PHD) particle
filters are used to track some type of units (e.g., companies) and their
sub-units (e.g., platoons), respectively, based on observations of units of
those sizes. Each observation is used in one filter only.
Although the state-space may well be the same in both filters, the posterior
PHD distributions are not directly comparable -- one unit might correspond to
three or four spatially distributed sub-units. Therefore, we introduce a
mapping function between distributions for different unit sizes, based on
doctrine knowledge of unit configuration.
The mapped distributions can now be compared -- locally or globally -- using
some measure, which gives the correlation between two PHD distributions in a
bounded volume of the state-space. To locate areas where the tracking fails, a
discretized quality map of the state-space can be generated by applying the
measure locally to different parts of the space.
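The local-comparison idea can be sketched as follows. This is our illustration, not the paper's exact measure: restrict both PHDs to a bounded set of grid cells, normalize within the region, and compute a Kullback-Leibler value per region; regions where the two trackers disagree score high.

```python
# Sketch: compare two (mapped) PHDs locally over a patch of grid cells
# with a Kullback-Leibler value. Cell masses are invented.
from math import log

def local_kl(phd_p, phd_q, cells, eps=1e-12):
    """KL value between two PHDs restricted and normalized to `cells`."""
    p = [phd_p[c] for c in cells]
    q = [phd_q[c] for c in cells]
    sp, sq = sum(p), sum(q)
    return sum((pi / sp) * log((pi / sp + eps) / (qi / sq + eps))
               for pi, qi in zip(p, q))

# Two trackers' PHD mass over four cells: they agree on cells 0-1 and
# disagree on cells 2-3, flagging where the tracking may be failing.
phd1 = {0: 1.0, 1: 2.0, 2: 0.1, 3: 0.1}
phd2 = {0: 1.1, 1: 1.9, 2: 1.5, 3: 0.1}
kl_a = local_kl(phd1, phd2, [0, 1])   # patch where trackers agree
kl_b = local_kl(phd1, phd2, [2, 3])   # patch where they disagree
```

Applying this per patch yields exactly the kind of discretized quality map described above.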
|
cs/0411072
|
Extremal optimization for sensor report pre-processing
|
cs.AI
|
We describe the recently introduced extremal optimization algorithm and apply
it to target detection and association problems arising in pre-processing for
multi-target tracking.
Here we consider the problem of pre-processing for multiple target tracking
when the number of sensor reports received is very large and arrives in large
bursts. In this case, it is sometimes necessary to pre-process reports before
sending them to tracking modules in the fusion system. The pre-processing step
associates reports to known tracks (or initializes new tracks for reports on
objects that have not been seen before). It could also be used as a pre-process
step before clustering, e.g., in order to test how many clusters to use.
The pre-processing is done by solving an approximate version of the original
problem. In this approximation, not all pair-wise conflicts are calculated.
The approximation relies on knowing how many such pair-wise conflicts are
necessary to compute. To determine this, we use results on phase transitions
occurring when coloring (or clustering) large random instances of a
particular graph ensemble.
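The tau-EO heuristic itself is compact. Below is a minimal sketch on a toy conflict-minimization instance (graph 2-coloring stands in for report-to-track association; the instance, parameters, and power-law rank selection are our illustrative choices):

```python
# Minimal tau-EO: repeatedly rank variables by local fitness (here, the
# negative number of conflicts), pick one via a power-law over the rank,
# and give it a random new state. Moves are never rejected.
import random

random.seed(1)

def conflicts(v, colors, adj):
    return sum(colors[u] == colors[v] for u in adj[v])

def cost(colors, adj):
    return sum(conflicts(v, colors, adj) for v in adj) // 2

def eo_coloring(adj, k, tau=1.4, steps=5000):
    n = len(adj)
    colors = {v: random.randrange(k) for v in adj}
    best, best_cost = dict(colors), cost(colors, adj)
    rank_weights = [(i + 1) ** -tau for i in range(n)]  # P(rank) ~ rank^-tau
    for _ in range(steps):
        ranked = sorted(adj, key=lambda v: -conflicts(v, colors, adj))
        v = ranked[random.choices(range(n), rank_weights)[0]]
        colors[v] = random.randrange(k)       # unconditional move
        c = cost(colors, adj)
        if c < best_cost:
            best, best_cost = dict(colors), c
    return best, best_cost

# 6-cycle: bipartite, so a zero-conflict 2-coloring exists.
adj = {0: [1, 5], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4, 0]}
best, best_cost = eo_coloring(adj, k=2)
```

Because EO always moves and mostly attacks the worst variable, it explores aggressively without a temperature schedule, which is what makes it attractive for large bursty report batches.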
|
cs/0411073
|
Geographic Routing with Limited Information in Sensor Networks
|
cs.IT math.IT
|
Geographic routing with greedy relaying strategies has been widely studied
as a routing scheme in sensor networks. These schemes assume that the nodes
have perfect information about the location of the destination. When the
distance between the source and destination is normalized to unity, the
asymptotic routing delays in these schemes are $\Theta(\frac{1}{M(n)}),$ where
$M(n)$ is the maximum distance traveled in a single hop (transmission range of
a radio). In this paper, we consider routing scenarios where nodes have location
errors (imprecise GPS), or where only coarse geographic information about the
destination is available, and only a fraction of the nodes have routing
information. We show that even with such imprecise or limited
destination-location information, the routing delays are
$\Theta(\frac{1}{M(n)})$. We also consider the throughput-capacity of networks
with progressive routing strategies that take packets closer to the destination
in every step, but not necessarily along a straight-line. We show that the
throughput-capacity with progressive routing is order-wise the same as the
maximum achievable throughput-capacity.
|
cs/0411074
|
Building Chinese Lexicons from Scratch by Unsupervised Short Document
Self-Segmentation
|
cs.CL cs.IR
|
Chinese text segmentation is a well-known and difficult problem. On one side,
there is no simple notion of "word" in the Chinese language, making it really
hard to implement rule-based systems to segment written texts; thus lexicons
and statistical information are usually employed to achieve such a task. On
the other side, any piece of Chinese text usually includes segments present
neither in the lexicons nor in the training data. Even worse, such unseen
sequences can be segmented into a number of totally unrelated words, making
later processing phases difficult. For instance, using a lexicon-based system
the sequence 巴罗佐 (Baluozuo, Barroso, current president-designate of the
European Commission) can be segmented into 巴 (ba, to hope, to wish) and 罗佐
(luozuo, an undefined word), changing completely the meaning of the sentence.
A new and extremely simple algorithm specially suited to work over short
Chinese documents is introduced. This new algorithm performs text
"self-segmentation", producing results comparable to those achieved by native
speakers without using either lexicons or any statistical information beyond
that obtained from the input text. Furthermore, it is really robust at
finding new "words", especially proper nouns, and it is well suited to
building lexicons from scratch. Some preliminary results are provided in
addition to examples of its employment.
|
cs/0411098
|
On the High-SNR Capacity of Non-Coherent Networks
|
cs.IT math.IT
|
We obtain the first term in the high signal-to-noise ratio (SNR) expansion of
the capacity of fading networks where the transmitters and receivers--while
fully cognizant of the fading \emph{law}--have no access to the fading
\emph{realization}. This term is an integer multiple of $\log \log
\textnormal{SNR}$ with the coefficient having a simple combinatorial
characterization.
|
cs/0411099
|
A Note on the PAC Bayesian Theorem
|
cs.LG cs.AI
|
We prove general exponential moment inequalities for averages of [0,1]-valued
iid random variables and use them to tighten the PAC-Bayesian Theorem. The
logarithmic dependence on the sample count in the numerator of the
PAC-Bayesian bound is halved.
|
cs/0412002
|
Ranking Pages by Topology and Popularity within Web Sites
|
cs.AI cs.IR
|
We compare two link analysis ranking methods of web pages in a site. The
first, called Site Rank, is an adaptation of PageRank to the granularity of a
web site and the second, called Popularity Rank, is based on the frequencies of
user clicks on the outlinks in a page that are captured by navigation sessions
of users through the web site. We ran experiments on artificially created web
sites of different sizes and on two real data sets, employing the relative
entropy to compare the distributions of the two ranking methods. For the real
data sets we also employ a nonparametric measure, called Spearman's footrule,
which we use to compare the top-ten web pages ranked by the two methods. Our
main result is that the distributions of the Popularity Rank and Site Rank are
surprisingly close to each other, implying that the topology of a web site is
very instrumental in guiding users through the site. Thus, in practice, the
Site Rank provides a reasonable first order approximation of the aggregate
behaviour of users within a web site given by the Popularity Rank.
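Spearman's footrule itself is simple to state: the sum over items of the absolute difference of their ranks in the two orderings. A minimal sketch (the page names are invented, and the paper applies a top-ten variant):

```python
# Spearman's footrule between two rankings of the same pages;
# 0 means identical orderings.
def footrule(rank_a, rank_b):
    pos_a = {p: i for i, p in enumerate(rank_a)}
    pos_b = {p: i for i, p in enumerate(rank_b)}
    return sum(abs(pos_a[p] - pos_b[p]) for p in pos_a)

site_rank = ["home", "products", "about", "contact"]
popularity_rank = ["home", "about", "products", "contact"]
f = footrule(site_rank, popularity_rank)   # only two adjacent swaps apart
```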
|
cs/0412003
|
Mining Heterogeneous Multivariate Time-Series for Learning Meaningful
Patterns: Application to Home Health Telecare
|
cs.LG
|
In recent years, time-series mining has become a challenging issue for
researchers. An important application lies in monitoring purposes, which
require analyzing large sets of time-series to learn usual patterns. Any
deviation from this learned profile is then considered an unexpected
situation. Moreover, complex applications may involve the temporal study of
several heterogeneous parameters. In this paper, we propose a method for
mining heterogeneous multivariate time-series to learn meaningful patterns.
The proposed approach allows for mixed time-series -- containing both pattern
and non-pattern data -- as well as for imprecise matches, outliers, and the
stretching and global translation of pattern instances in time. We present
the early results of our approach in the context of monitoring the health
status of a person at home. The purpose is to build a behavioral profile of a
person by analyzing the time variations of several quantitative or
qualitative parameters recorded by a set of sensors installed in the home.
|
cs/0412015
|
A Tutorial on the Expectation-Maximization Algorithm Including
Maximum-Likelihood Estimation and EM Training of Probabilistic Context-Free
Grammars
|
cs.CL
|
The paper gives a brief review of the expectation-maximization algorithm
(Dempster 1977) in the comprehensible framework of discrete mathematics. In
Section 2, two prominent estimation methods, the relative-frequency estimation
and the maximum-likelihood estimation are presented. Section 3 is dedicated to
the expectation-maximization algorithm and a simpler variant, the generalized
expectation-maximization algorithm. In Section 4, two loaded dice are rolled. A
more interesting example is presented in Section 5: The estimation of
probabilistic context-free grammars.
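The loaded-dice example translates into a short, runnable EM loop. The rolls, starting values, and two-dice mixture parameterization below are our own illustrative choices, not the tutorial's exact numbers:

```python
# EM for a mixture of two loaded dice: with probability lam a roll comes
# from die1, otherwise from die2; which die produced each roll is hidden.
def em_two_dice(rolls, lam, die1, die2, iters=50):
    for _ in range(iters):
        # E-step: posterior responsibility of die1 for every roll.
        resp = [lam * die1[r] / (lam * die1[r] + (1 - lam) * die2[r])
                for r in rolls]
        # M-step: responsibility-weighted relative-frequency estimation.
        n1 = sum(resp)
        lam = n1 / len(rolls)
        for face in range(6):
            w1 = sum(g for g, r in zip(resp, rolls) if r == face)
            die1[face] = w1 / n1
            die2[face] = (sum(r == face for r in rolls) - w1) / (len(rolls) - n1)
    return lam, die1, die2

rolls = [0, 0, 0, 5, 5, 5, 0, 5, 1, 4] * 10        # faces coded 0..5
lam, die1, die2 = em_two_dice(rolls, 0.4,
                              [1 / 6] * 6,
                              [0.3, 0.2, 0.1, 0.1, 0.1, 0.2])
```

Note the M-step is exactly a relative-frequency estimate, weighted by the E-step responsibilities -- the connection between Sections 2 and 3.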
|
cs/0412016
|
Inside-Outside Estimation Meets Dynamic EM
|
cs.CL
|
We briefly review the inside-outside and EM algorithm for probabilistic
context-free grammars. As a result, we formally prove that inside-outside
estimation is a dynamic-programming variant of EM. This is interesting in its
own right, but even more so when considered in a theoretical context, since
the well-known convergence behavior of inside-outside estimation has been
confirmed by many experiments but apparently has never been formally proved.
Being a version of EM, however, inside-outside estimation inherits the good
convergence behavior of EM. Therefore, the as-yet imperfect line of
argumentation can be transformed into a coherent proof.
|
cs/0412018
|
Modeling Complex Higher Order Patterns
|
cs.DB cs.AI
|
The goal of this paper is to show that generalizing the notion of frequent
patterns can be useful in extending association analysis to more complex higher
order patterns. To that end, we describe a general framework for modeling a
complex pattern based on evaluating the interestingness of its sub-patterns. A
key goal of any framework is to allow people to more easily express, explore,
and communicate ideas, and hence, we illustrate how our framework can be used
to describe a variety of commonly used patterns, such as frequent patterns,
frequent closed patterns, indirect association patterns, hub patterns and
authority patterns. To further illustrate the usefulness of the framework, we
also present two new kinds of patterns derived from the framework, the clique
pattern and the bi-clique pattern, and illustrate their practical use.
|
cs/0412019
|
A Link Clustering Based Approach for Clustering Categorical Data
|
cs.DL cs.AI
|
Categorical data clustering (CDC) and link clustering (LC) have been
considered as separate research and application areas. The main focus of this
paper is to investigate the commonalities between these two problems and the
uses of these commonalities for the creation of new clustering algorithms for
categorical data based on cross-fertilization between the two disjoint research
fields. More precisely, we formally transform the CDC problem into an LC
problem, and apply an LC approach to clustering categorical data.
Experimental results on real datasets show that the LC-based clustering
method is competitive with existing CDC algorithms with respect to clustering
accuracy.
|
cs/0412021
|
Finite Domain Bounds Consistency Revisited
|
cs.AI cs.LO
|
A widely adopted approach to solving constraint satisfaction problems
combines systematic tree search with constraint propagation for pruning the
search space. Constraint propagation is performed by propagators implementing a
certain notion of consistency. Bounds consistency is the method of choice for
building propagators for arithmetic constraints and several global constraints
in the finite integer domain. However, there has been some confusion in the
definition of bounds consistency. In this paper we clarify the differences and
similarities among the three commonly used notions of bounds consistency.
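To make the notion concrete, here is a hypothetical interval propagator for the single arithmetic constraint x + y = z (our example; the paper's subject is the precise definitions such propagators implement). It tightens each variable's [lo, hi] bounds until no bound changes:

```python
# Bounds propagation for x + y = z over integer intervals [lo, hi]:
# each bound is tightened from the bounds of the other two variables,
# iterating to a fixed point.
def prune_sum(x, y, z):
    changed = True
    while changed:
        old = (tuple(x), tuple(y), tuple(z))
        x[0] = max(x[0], z[0] - y[1]); x[1] = min(x[1], z[1] - y[0])
        y[0] = max(y[0], z[0] - x[1]); y[1] = min(y[1], z[1] - x[0])
        z[0] = max(z[0], x[0] + y[0]); z[1] = min(z[1], x[1] + y[1])
        changed = (tuple(x), tuple(y), tuple(z)) != old
    return x, y, z

# z in [10, 12] forces both x and y up from 0 to at least 1.
x, y, z = prune_sum([0, 9], [0, 9], [10, 12])
```

Which of the commonly used bounds-consistency definitions such a propagator achieves -- over the integers or over a real relaxation -- is exactly the distinction the paper clarifies.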
|
cs/0412023
|
Multidimensional data classification with artificial neural networks
|
cs.NE cs.AI
|
Multi-dimensional data classification is an important and challenging problem
in many astro-particle experiments. Neural networks have proved to be versatile
and robust in multi-dimensional data classification. In this article we study
the classification of gammas versus hadrons for the MAGIC experiment. Two
neural networks have been used for the classification task: one is a
Multi-Layer Perceptron based on supervised learning, and the other is a
Self-Organising Map (SOM), based on an unsupervised learning technique. The
results are presented, and possible ways of combining these networks are
proposed to yield better and faster classification results.
|
cs/0412024
|
Human-Level Performance on Word Analogy Questions by Latent Relational
Analysis
|
cs.CL cs.IR cs.LG
|
This paper introduces Latent Relational Analysis (LRA), a method for
measuring relational similarity. LRA has potential applications in many areas,
including information extraction, word sense disambiguation, machine
translation, and information retrieval. Relational similarity is correspondence
between relations, in contrast with attributional similarity, which is
correspondence between attributes. When two words have a high degree of
attributional similarity, we call them synonyms. When two pairs of words have a
high degree of relational similarity, we say that their relations are
analogous. For example, the word pair mason/stone is analogous to the pair
carpenter/wood. Past work on semantic similarity measures has mainly been
concerned with attributional similarity. Recently the Vector Space Model (VSM)
of information retrieval has been adapted to the task of measuring relational
similarity, achieving a score of 47% on a collection of 374 college-level
multiple-choice word analogy questions. In the VSM approach, the relation
between a pair of words is characterized by a vector of frequencies of
predefined patterns in a large corpus. LRA extends the VSM approach in three
ways: (1) the patterns are derived automatically from the corpus (they are not
predefined), (2) the Singular Value Decomposition (SVD) is used to smooth the
frequency data (it is also used this way in Latent Semantic Analysis), and (3)
automatically generated synonyms are used to explore reformulations of the word
pairs. LRA achieves 56% on the 374 analogy questions, statistically equivalent
to the average human score of 57%. On the related problem of classifying
noun-modifier relations, LRA achieves similar gains over the VSM, while using a
smaller corpus.
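The VSM step that LRA extends can be sketched in miniature: each word pair becomes a vector of pattern frequencies, and relational similarity is the cosine between vectors. The patterns and counts below are invented for illustration:

```python
# Toy VSM relational similarity: word pairs as vectors of frequencies of
# joining patterns (e.g., "X cuts Y", "X works with Y", "X of Y"), compared
# by cosine. All counts are invented.
from math import sqrt

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

vec = {
    ("mason", "stone"):    [12, 30, 5],
    ("carpenter", "wood"): [10, 28, 6],
    ("traffic", "street"): [0, 2, 40],
}
sim_analogous = cosine(vec[("mason", "stone")], vec[("carpenter", "wood")])
sim_unrelated = cosine(vec[("mason", "stone")], vec[("traffic", "street")])
```

LRA's three extensions -- corpus-derived patterns, SVD smoothing, and synonym-based reformulations -- all operate on top of this basic vector comparison.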
|
cs/0412026
|
Removing Propagation Redundant Constraints in Redundant Modeling
|
cs.LO cs.AI
|
A widely adopted approach to solving constraint satisfaction problems
combines systematic tree search with various degrees of constraint propagation
for pruning the search space. One common technique to improve the execution
efficiency is to add redundant constraints, which are constraints logically
implied by others in the problem model. However, some redundant constraints are
propagation redundant and hence do not contribute additional propagation
information to the constraint solver. Redundant constraints arise naturally in
the process of redundant modeling where two models of the same problem are
connected and combined through channeling constraints. In this paper, we give
general theorems for proving propagation redundancy of one constraint with
respect to channeling constraints and constraints in the other model. We
illustrate, on problems from CSPlib (http://www.csplib.org/), how detecting and
removing propagation redundant constraints in redundant modeling can
significantly speed up constraint solving.
|
cs/0412029
|
The modular technology of development of the CAD expansions: profiles of
outside networks of water supply and water drain
|
cs.CE cs.DS
|
The modular technology for developing problem-oriented CAD expansions is
applied to the task of designing profiles of outside water-supply and
water-drain networks, with realization in the program system TechnoCAD
GlassX. The unity of structure of these profiles is revealed, and a system
model of the network profile drawings is developed, including a structured
parametric representation (properties of objects and their interdependence,
general settings and default settings) and operations with it, which
efficiently automate the design process.
|
cs/0412030
|
The modular technology of development of the CAD expansions: protection
of the buildings from the lightning
|
cs.CE cs.DS
|
The modular technology for developing problem-oriented CAD expansions is
applied to the task of designing lightning protection for buildings, with
realization in the program system TechnoCAD GlassX. A system model of the
lightning-protection drawings is developed, including a structured parametric
representation (properties of objects and their interdependence, general
settings and default settings) and operations with it, which efficiently
automate the design process.
|
cs/0412031
|
The Features of the Complex CAD system of Reconstruction of the
Industrial Plants
|
cs.CE
|
The features of designing the reconstruction of an operating plant by its
design department are considered: the results of the work are drawings
conforming to the national standards; a large number of small projects for
different operating objects; a variety of drawing types within one project;
and a large paper archive. The models and methods for developing a complex
CAD system with a friendly, uniform design environment, with the setting of
an operation profile, with usage of the general parts of the project, and
with a series of problem-oriented subsystems are described on the example of
the CAD system TechnoCAD GlassX.
|
cs/0412032
|
The methods of support of the requirements of the Russian standards at
development of a CAD of industrial objects
|
cs.CE cs.DS
|
The methods for supporting the requirements of the Russian standards in a CAD
of industrial objects are explained, as implemented in the CAD system
TechnoCAD GlassX with its own graphics core and its own data-storage
structures. It is shown that binding the storage structures and program code
of a CAD to the requirements of the standards makes it possible not only to
fulfil these requirements in project documentation, but also to increase the
compactness of drawing storage both on disk and in RAM.
|
cs/0412033
|
The modelling of the build constructions in a CAD of the renovation of
the enterprises by means of units in the drawings
|
cs.CE
|
A parametric model of building constructions and the features of design
operations are described for making drawings, which are a common component of
the different parts of enterprise-renovation projects. The key to deep design
automation is the use of so-called units in the drawings, which join a
visible graphic part with invisible parameters. The model has been validated
in the design of several hundred drawings.
|
cs/0412034
|
The informatization of design works at industry firm during its
renovation
|
cs.CE
|
A characterization of the design work at a firm during its renovation, and of
the common directions of its informatization, is given. The introduction of a
CAD system is selected as the key direction, and the requirements for a
complex CAD system are stated. The methods of developing such a CAD system
are featured, and the connection of this development with the process of
integrating the information space of the firm's design department is
characterized. The experience of developing and introducing TechnoCAD GlassX,
a complex CAD for the renovation of firms, forms the basis of this review.
|
cs/0412035
|
Deployment of a Grid-based Medical Imaging Application
|
cs.DC cs.DB
|
The MammoGrid project has deployed its Service-Oriented Architecture
(SOA)-based Grid application in a real environment comprising actual
participating hospitals. The resultant setup is currently being exploited to
conduct rigorous in-house tests in the first phase before handing over the
setup to the actual clinicians to get their feedback. This paper elaborates
on the deployment details and the experiences acquired during this phase of
the project. Finally, the strategy for migration to an upcoming middleware
from the EGEE project is described. The paper concludes by highlighting some
of the potential areas of future work.
|
cs/0412036
|
Reverse Engineering Ontology to Conceptual Data Models
|
cs.DC cs.DB
|
Ontologies facilitate the integration of heterogeneous data sources by
resolving semantic heterogeneity between them. This research aims to study the
possibility of generating a domain conceptual model from a given ontology with
the vision to grow this generated conceptual data model into a global
conceptual model integrating a number of existing data and information sources.
Based on ontologically derived semantics of the BWW model, rules are identified
that map elements of the ontology language (DAML+OIL) to domain conceptual
model elements. This mapping is demonstrated using TAMBIS ontology. A
significant corollary of this study is that it is possible to generate a domain
conceptual model from a given ontology subject to validation that needs to be
performed by the domain specialist before evolving this model into a global
conceptual model.
|
cs/0412041
|
An Efficient and Flexible Engine for Computing Fixed Points
|
cs.PL cs.AI cs.LO
|
An efficient and flexible engine for computing fixed points is critical for
many practical applications. In this paper, we firstly present a goal-directed
fixed point computation strategy in the logic programming paradigm. The
strategy adopts a tabled resolution (or memorized resolution) to mimic the
efficient semi-naive bottom-up computation. Its main idea is to dynamically
identify and record those clauses that will lead to recursive variant calls,
and then repetitively apply those alternatives incrementally until the fixed
point is reached. Secondly, there are many situations in which a fixed point
contains a large or even infinite number of solutions. In these cases, a
fixed point computation engine may not be efficient enough or feasible at all.
We present a mode-declaration scheme which provides the capabilities to reduce
a fixed point from a big solution set to a preferred small one, or from an
infeasible infinite set to a finite one. The mode declaration scheme can be
characterized as a meta-level operation over the original fixed point. We show
the correctness of the mode declaration scheme. Thirdly, the mode-declaration
scheme provides a new declarative method for dynamic programming, which is
typically used for solving optimization problems. There is no need to define
the value of an optimal solution recursively, instead, defining a general
solution suffices. The optimal value as well as its corresponding concrete
solution can be derived implicitly and automatically using a mode-directed
fixed point computation engine. Finally, this fixed point computation engine
has been successfully implemented in a commercial Prolog system. Experimental
results indicate that the mode declaration improves both time and space
performance in solving dynamic programming problems.
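The tabled fixed-point idea can be illustrated with the classic reachability program, `path(X,Y) :- edge(X,Y). path(X,Y) :- path(X,Z), edge(Z,Y).` The following is a minimal semi-naive sketch in Python (our illustration, not the engine's tabled-resolution implementation): only facts derived in the previous round are joined against the base relation, and the computation stops when no new facts appear, i.e., at the fixed point.

```python
# Semi-naive fixed-point computation of the transitive closure of `edges`.
def reachability(edges):
    path = set(edges)
    delta = set(edges)              # only new facts drive the next round
    while delta:
        new = {(x, w) for (x, y) in delta for (z, w) in edges if y == z}
        delta = new - path          # empty delta => fixed point reached
        path |= delta
    return path

edges = {("a", "b"), ("b", "c"), ("c", "d")}
closure = reachability(edges)
```

A mode declaration in the paper's sense would then restrict which of these derived facts are tabled -- e.g., keeping only a shortest witness per pair instead of the whole solution set.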
|
cs/0412049
|
Neural Networks in Mobile Robot Motion
|
cs.RO cs.AI
|
This paper deals with path planning and intelligent control of an autonomous
robot which should move safely in a partially structured environment. This
environment may involve any number of obstacles of arbitrary shape and size;
some of them are allowed to move. We describe our approach to solving the
motion-planning problem in mobile robot control using a neural-network-based
technique. Our method for constructing a collision-free path for a robot
moving among obstacles is based on two neural networks. The first neural
network is used to determine the "free" space using ultrasound range finder
data. The second neural network "finds" a safe direction for the next
section of the robot's path in the workspace while avoiding the nearest
obstacles. Simulation examples of paths generated with the proposed
technique are presented.
|
cs/0412050
|
Gyroscopically Stabilized Robot: Balance and Tracking
|
cs.RO
|
The single-wheel, gyroscopically stabilized robot Gyrover is a dynamically
stable but statically unstable, underactuated system. In this paper, based on
the dynamic model of the robot, we investigate two classes of nonholonomic
constraints associated with the system. Then, based on the backstepping
technique, we propose a control law for the balance control of Gyrover. Next,
by transforming the system's states from Cartesian to polar coordinates,
control laws for point-to-point control and line tracking in Cartesian space
are provided.
|
cs/0412051
|
Dynamic replanning in uncertain environments for a sewer inspection
robot
|
cs.RO
|
The sewer inspection robot MAKRO is an autonomous multi-segment robot with a
worm-like shape, driven by wheels. It is currently under development in the
project MAKRO-PLUS. The robot has to navigate autonomously within sewer
systems. Its first tasks will be to take water samples, analyze them onboard,
and measure positions of manholes and pipes, in order to detect
pollutant-loaded sewage and to improve current maps of sewer systems. One of
the challenging problems is the controller software, which should enable the
robot to navigate in the sewer system and perform the inspection tasks
autonomously, without inflicting any self-damage. This paper focuses on the
route planning and replanning aspect of the robot. The robot's software has
four different levels, of which the planning system is the highest; the
remaining three are controller levels, each with a different degree of
abstraction. The planner coordinates the sequence of actions that are to be
successively executed by the robot.
|
cs/0412052
|
WebotsTM: Professional Mobile Robot Simulation
|
cs.RO
|
Cyberbotics Ltd. develops WebotsTM, mobile robotics simulation software
that provides you with a rapid prototyping environment for modelling,
programming and simulating mobile robots. The provided robot libraries enable
you to transfer your control programs to several commercially available real
mobile robots. WebotsTM lets you define and modify a complete mobile robotics
setup, even several different robots sharing the same environment. For each
object, you can define a number of properties, such as shape, color, texture,
mass, friction, etc. You can equip each robot with a large number of available
sensors and actuators. You can program these robots using your favorite
development environment, simulate them and optionally transfer the resulting
programs onto your real robots. WebotsTM has been developed in collaboration
with the Swiss Federal Institute of Technology in Lausanne, thoroughly tested,
well documented and continuously maintained for over 7 years. It is now the
main commercial product available from Cyberbotics Ltd.
|
cs/0412053
|
Dynamic simulation of task constrained of a rigid-flexible manipulator
|
cs.RO
|
A rigid-flexible manipulator may be assigned tasks in a moving environment
where wind or vibrations affect the position and/or orientation of the surface
of operation. Consequently, loss of contact and perhaps degradation of
performance may occur as references are changed. When the environment is
moving, knowledge of the angle α between the contact surface and the
horizontal is required at every instant. In this paper, different profiles for
the time-varying angle α are proposed to investigate the effect of this
change on the contact force and the joint torques of a rigid-flexible
manipulator. The coefficients of the equation of the proposed rotating surface
change with time to determine the new X and Y coordinates of the moving
surface as the surface rotates.
|
cs/0412054
|
Assembly and Disassembly Planning by using Fuzzy Logic & Genetic
Algorithms
|
cs.RO
|
The authors propose the implementation of a hybrid Fuzzy Logic-Genetic
Algorithm (FL-GA) methodology to plan the automatic assembly and disassembly
sequence of products. The GA-Fuzzy Logic approach is implemented on two
levels. The first level of hybridization consists of the development of a
Fuzzy controller for the parameters of a GA-based assembly or disassembly
planner. This controller acts on mutation probability and crossover rate in
order to adapt their values dynamically while the algorithm runs. The second
level consists of the identification of the optimal assembly or disassembly
sequence by a Fuzzy function, in order to obtain closer control over the
technological knowledge of the assembly/disassembly process. Two case studies
were analyzed in order to test the efficiency of the Fuzzy-GA methodologies.
|
cs/0412055
|
Robotic Applications in Cardiac Surgery
|
cs.RO
|
Traditionally, cardiac surgery has been performed through a median
sternotomy, which allows the surgeon generous access to the heart and
surrounding great vessels. As a paradigm shift in the size and location of
incisions occurs in cardiac surgery, new methods have been developed to allow
the surgeon the same amount of dexterity and accessibility to the heart in
confined spaces and in a less invasive manner. Initially, long instruments
without pivot points were used; however, more recent robotic telemanipulation
systems have been applied that allow for improved dexterity, enabling the
surgeon to perform cardiac surgery from a distance not previously possible. In
this rapidly evolving field, we review the recent history and clinical results
of using robotics in cardiac surgery.
|
cs/0412056
|
One-Chip Solution to Intelligent Robot Control: Implementing Hexapod
Subsumption Architecture Using a Contemporary Microprocessor
|
cs.RO
|
This paper introduces a six-legged autonomous robot managed by a single
controller and a software core modeled on subsumption architecture. We begin by
discussing the features and capabilities of IsoPod, a new processor for
robotics which has enabled a streamlined implementation of our project. We
argue that this processor offers a unique set of hardware and software
features, making it a practical development platform for robotics in general
and for subsumption-based control architectures in particular. Next, we
summarize the original ideas on subsumption architecture implementation for a
six-legged robot, as presented by its inventor Rodney Brooks in the 1980s. A
comparison is then made to a more recent example of a hexapod control
architecture based on subsumption. The merits of both systems are analyzed and
a new subsumption architecture layout is formulated in response. We conclude
with some remarks regarding the development of this project, hinting at the
new potential for intelligent robot design opened by recent developments in
the embedded controller market.
|
cs/0412057
|
How to achieve various gait patterns from single nominal
|
cs.RO
|
This paper presents an approach to achieving on-line modification of a
nominal biped gait, without recomputing the entire dynamics, when steady
motion is performed. A straight, dynamically balanced walk was used as the
nominal gait, and the modifications applied were speeding up, slowing down,
and turning left and right. It is shown that the disturbances caused by these
modifications jeopardize dynamic stability, but they can be simply compensated
for to enable the walk to continue.
|
cs/0412058
|
Clustering Categorical Data Streams
|
cs.DB cs.AI
|
The data stream model has been defined for new classes of applications
involving massive data being generated at a fast pace. Web click stream
analysis and detection of network intrusions are two examples. Cluster
analysis on data streams is more difficult, because the data objects in a data
stream must be accessed in order and can be read only once or a few times with
limited resources. Recently, a few clustering algorithms have been developed
for analyzing numeric data streams. However, to our knowledge, no algorithm
exists to date for clustering categorical data streams. In this paper, we
propose an efficient clustering algorithm for analyzing categorical data
streams. It is proved that the proposed algorithm uses a small memory
footprint. We provide empirical analysis of the performance of the algorithm
in clustering both synthetic and real data streams.
|
cs/0412059
|
Vector Symbolic Architectures answer Jackendoff's challenges for
cognitive neuroscience
|
cs.NE cs.AI
|
Jackendoff (2002) posed four challenges that linguistic combinatoriality and
rules of language present to theories of brain function. The essence of these
problems is the question of how to neurally instantiate the rapid construction
and transformation of the compositional structures that are typically taken to
be the domain of symbolic processing. He contended that typical connectionist
approaches fail to meet these challenges and that the dialogue between
linguistic theory and cognitive neuroscience will be relatively unproductive
until the importance of these problems is widely recognised and the challenges
answered by some technical innovation in connectionist modelling. This paper
claims that a little-known family of connectionist models (Vector Symbolic
Architectures) are able to meet Jackendoff's challenges.
|
cs/0412060
|
Monotonicity Results for Coherent MIMO Rician Channels
|
cs.IT math.IT
|
The dependence of the Gaussian input information rate on the line-of-sight
(LOS) matrix in multiple-input multiple-output coherent Rician fading channels
is explored. It is proved that the outage probability and the mutual
information induced by a multivariate circularly symmetric Gaussian input with
any covariance matrix are monotonic in the LOS matrix D, or more precisely,
monotonic in D'D in the sense of the Loewner partial order. Conversely, it is
also demonstrated that this ordering on the LOS matrices is a necessary
condition for the uniform monotonicity over all input covariance matrices. This
result is subsequently applied to prove the monotonicity of the isotropic
Gaussian input information rate and channel capacity in the singular values of
the LOS matrix. Extensions to multiple-access channels are also discussed.
|
cs/0412065
|
A Framework for Creating Natural Language User Interfaces for
Action-Based Applications
|
cs.CL cs.HC
|
In this paper we present a framework for creating natural language interfaces
to action-based applications. Our framework uses a number of reusable
application-independent components, in order to reduce the effort of creating a
natural language interface for a given application. Using a type-logical
grammar, we first translate natural language sentences into expressions in an
extended higher-order logic. These expressions can be seen as executable
specifications corresponding to the original sentences. The executable
specifications are then interpreted by invoking appropriate procedures provided
by the application for which a natural language interface is being created.
|
cs/0412066
|
From Feature Extraction to Classification: A multidisciplinary Approach
applied to Portuguese Granites
|
cs.AI cs.CV
|
The purpose of this paper is to present a complete methodology, based on a
multidisciplinary approach, that goes from the extraction of features to the
classification of a set of different Portuguese granites. The set of tools to
extract the features that characterise polished surfaces of the granites is
mainly based on mathematical morphology. The classification methodology is
based on a genetic algorithm capable of searching the input feature space used
by the nearest neighbour rule classifier. Results show that it is possible to
perform feature reduction and simultaneously improve the recognition rate.
Moreover, the present methodology represents a robust strategy for
understanding the proper nature of the images treated and their discriminant
features. KEYWORDS: Portuguese grey granites, feature extraction, mathematical
morphology, feature reduction, genetic algorithms, nearest neighbour rule
classifiers (k-NNR).
|
cs/0412067
|
Complete Characterization of the Equivalent MIMO Channel for
Quasi-Orthogonal Space-Time Codes
|
cs.IT math.IT
|
Recently, a quasi-orthogonal space-time block code (QSTBC) capable of
achieving a significant fraction of the outage mutual information of a
multiple-input-multiple output (MIMO) wireless communication system for the
case of four transmit and one receive antennas was proposed. We generalize
these results to $n_T=2^n$ transmit and an arbitrary number of receive antennas
$n_R$. Furthermore, we completely characterize the structure of the equivalent
channel for the general case and show that for all $n_T=2^n$ and $n_R$ the
eigenvectors of the equivalent channel are fixed and independent of the
channel realization. Furthermore, the eigenvalues of the equivalent channel
are independent identically distributed random variables, each following a
noncentral chi-square distribution with $4n_R$ degrees of freedom.
Based on these important insights into the structure of the QSTBC, we derive
an analytical lower bound for the fraction of outage probability achieved with
the QSTBC and show that this bound is tight for low signal-to-noise ratio
(SNR) values as well as for an increasing number of receive antennas. We also
present an upper bound, which is tight for high SNR values, and derive
analytical expressions for the case of four transmit antennas. Finally, by
utilizing the special structure of the QSTBC, we propose a new transmit
strategy, which decouples the signals transmitted from different antennas in
order to detect the symbols separately with a linear ML detector rather than
by joint detection, an advantage until now known only for orthogonal
space-time block codes (OSTBC).
|
cs/0412068
|
ANTIDS: Self-Organized Ant-based Clustering Model for Intrusion
Detection System
|
cs.CR cs.AI
|
Security of computers and the networks that connect them is becoming
increasingly significant. Computer security is defined as the protection of
computing systems against threats to confidentiality, integrity, and
availability. There are two types of intruders: external intruders, who are
unauthorized users of the machines they attack, and internal intruders, who
have permission to access the system with some restrictions. Since it is
increasingly improbable for a system administrator to recognize an attack and
manually intervene to stop it, there is growing recognition that intrusion
detection (ID) systems have much to gain from following the basic principles
governing the behavior of complex natural systems, namely self-organization,
allowing for a truly distributed and collective perception of these phenomena.
With that aim in mind, the present work presents a self-organized
ant-colony-based intrusion detection system (ANTIDS) to detect intrusions in a
network infrastructure. Its performance is compared with conventional soft
computing paradigms like Decision Trees, Support Vector Machines and Linear
Genetic Programming for modelling fast, online and efficient intrusion
detection systems.
|
cs/0412069
|
Swarming around Shellfish Larvae
|
cs.AI cs.CV
|
The collection of wild larvae seed as a source of raw material is a major
sub-industry of shellfish aquaculture. To predict when, where and in what
quantities wild seed will be available, it is necessary to track the
appearance and growth of planktonic larvae. One of the most difficult groups
to identify, particularly at the species level, is the Bivalvia. This
difficulty arises from the fact that fundamentally all bivalve larvae have a
similar shape and colour. Identification based on gross morphological
appearance is limited by the time-consuming nature of the microscopic
examination and by the limited availability of expertise in this field.
Molecular and immunological methods are also being studied. We describe the
application of computational pattern recognition methods to the automated
identification and size analysis of scallop larvae. For identification, the
shape features used are binary invariant moments; that is, the features are
invariant to shift (position within the image), scale (induced either by
growth or differential image magnification) and rotation. Images of a sample
of scallop and non-scallop larvae covering a range of maturities have been
analysed. In order to achieve automatic identification, as well as to allow
the system to receive new unknown samples at any moment, a self-organized and
unsupervised ant-like clustering algorithm based on Swarm Intelligence is
proposed, followed by simple k-NNR nearest neighbour classification on the
final map. Results achieve a full recognition rate of 100% in several
situations (k = 1 or 3).
|
cs/0412070
|
Less is More - Genetic Optimisation of Nearest Neighbour Classifiers
|
cs.AI cs.CV
|
The present paper deals with the optimisation of Nearest Neighbour rule
Classifiers via Genetic Algorithms. The methodology consists of implementing a
Genetic Algorithm capable of searching the input feature space used by the NNR
classifier. Results show that it is possible to perform feature reduction and
simultaneously improve the Recognition Rate. Some practical examples prove
that it is possible to recognise Portuguese granites with 100% accuracy using
only 3 morphological features (from an original set of 117 features), which is
well suited for real-time applications. Moreover, the present method
represents a robust strategy for understanding the proper nature of the images
treated and their discriminant features. KEYWORDS: Feature Reduction, Genetic
Algorithms, Nearest Neighbour Rule Classifiers (k-NNR).
|
cs/0412071
|
Web Usage Mining Using Artificial Ant Colony Clustering and Genetic
Programming
|
cs.AI cs.NE
|
The rapid growth of e-commerce has made both the business community and
customers face a new situation. Due to intense competition on the one hand,
and the customers' option to choose from several alternatives on the other,
the business community has realized the necessity of intelligent marketing
strategies and relationship management. Web usage mining attempts to discover
useful knowledge from the secondary data obtained from the interactions of
users with the Web. Web usage mining has become very critical for effective
Web site management, creating adaptive Web sites, business and support
services, personalization, network traffic flow analysis and so on. The study
of ant colony behavior and its self-organizing capabilities is of interest to
knowledge retrieval/management and decision support systems sciences, because
it provides models of distributed adaptive organization, which are useful for
solving difficult optimization, classification, and distributed control
problems, among others. In this paper, we propose an ant clustering algorithm
to discover Web usage patterns (data clusters) and a linear genetic
programming approach to analyze visitor trends. Empirical results clearly show
that ant colony clustering performs well compared to a self-organizing map
(for clustering Web usage patterns), even though its accuracy falls short of
the evolutionary-fuzzy clustering (i-miner) approach. KEYWORDS: Web Usage
Mining, Swarm Intelligence, Ant Systems, Stigmergy, Data-Mining, Linear
Genetic Programming.
|
cs/0412072
|
Swarms on Continuous Data
|
cs.AI cs.NE
|
Despite its importance, many Exploratory Data Analysis (EDA) systems are
unable to perform classification and visualization on a continuous basis or to
self-organize new data items among the older ones (even more so into new
labels if necessary), which can be crucial in KDD - Knowledge Discovery,
Retrieval and Data Mining Systems (interactive and online forms of Web
applications are just one example). This disadvantage is also present in more
recent approaches using Self-Organizing Maps. In the present work, exploiting
past successes of recently proposed Stigmergic Ant Systems, a robust online
classifier is presented, which produces class decisions on a continuous data
stream, allowing for continuous mappings. Results show that increasingly
better results are achieved, as demonstrated by other authors in different
areas. KEYWORDS: Swarm Intelligence, Ant Systems, Stigmergy, Data-Mining,
Exploratory Data Analysis, Image Retrieval, Continuous Classification.
|
cs/0412073
|
Self-Organizing the Abstract: Canvas as a Swarm Habitat for Collective
Memory, Perception and Cooperative Distributed Creativity
|
cs.MM cs.AI
|
Past experiences under the designation of "Swarm Paintings", conducted in
2001, not only confirmed the possibility of realizing an artificial (thus
non-human) art, but also introduced into the process the question of creative
migration, specifically from computer monitors to the canvas via a robotic
arm. In more recent self-organization-based research we seek to develop and
deepen the initial ideas by using a swarm of autonomous robots (ARTsBOT
project 2002-03) that "live" while avoiding being mere executors of command
streams coming from an external computer; instead, they actually co-evolve
within the canvas space, acting (that is, laying ink) according to simple
inner threshold stimulus-response functions, reacting simultaneously to the
chromatic stimuli present in the canvas environment left by the passage of
their team-mates, as well as to the distributed feedback affecting their
future collective behaviour. In parallel, and with respect to certain types of
collective systems, we seek to confirm, in a physically embedded way, that the
emergence of order (even as a concept) seems to be found at a lower level of
complexity, based on simple and basic interchange of information and on the
local dynamics of parts, which, by self-organizing mechanisms, tend to form a
living whole, innovative and adaptive, allowing for emergent, open-ended,
creative and distributed production. KEYWORDS: ArtSBots Project, Swarm
Intelligence, Stigmergy, UnManned Art, Symbiotic Art, Swarm Paintings, Robot
Paintings, Non-Human Art, Painting Emergence and Cooperation, Art and
Complexity, ArtBots: The Robot Talent Show.
|
cs/0412075
|
Self-Organized Stigmergic Document Maps: Environment as a Mechanism for
Context Learning
|
cs.AI cs.DC
|
Social insect societies, and more specifically ant colonies, are distributed
systems that, in spite of the simplicity of their individuals, present a
highly structured social organization. As a result of this organization, ant
colonies can accomplish complex tasks that in some cases exceed the individual
capabilities of a single ant. The study of ant colony behavior and of its
self-organizing capabilities is of interest to knowledge retrieval/management
and decision support systems sciences, because it provides models of
distributed adaptive organization which are useful for solving difficult
optimization, classification, and distributed control problems, among others.
In the present work we review some models derived from the observation of
real ants, emphasizing the role played by stigmergy as a distributed
communication paradigm, and we present a novel strategy to tackle unsupervised
clustering as well as data retrieval problems. The present ant clustering
system (ACLUSTER) avoids not only short-term-memory-based strategies but also
the use of several artificial ant types (using different speeds), both present
in some recent approaches. Moreover, to our knowledge, this is also the first
application of ant systems to textual document clustering.
KEYWORDS: Swarm Intelligence, Ant Systems, Unsupervised Clustering, Data
Retrieval, Data Mining, Distributed Computing, Document Maps, Textual Document
Clustering.
|
cs/0412076
|
Clustering Techniques for Marbles Classification
|
cs.AI cs.CV
|
Automatic classification of marbles based on their visual appearance is an
important industrial issue. However, there is no definitive solution to the
problem, mainly due to the presence of a high number of randomly distributed
different colours and to their subjective evaluation by human experts. In this
paper we present a study of segmentation techniques, evaluate their overall
performance using a training set and standard quality measures, and finally
apply different clustering techniques to automatically classify the marbles.
KEYWORDS: Segmentation, Clustering, Quadtrees, Learning Vector Quantization
(LVQ), Simulated Annealing (SA).
|
cs/0412077
|
On the Implicit and on the Artificial - Morphogenesis and Emergent
Aesthetics in Autonomous Collective Systems
|
cs.AI cs.MM
|
Imagine a "machine" where there is no pre-commitment to any particular
representational scheme: the desired behaviour is distributed and roughly
specified simultaneously among many parts, but there is minimal specification
of the mechanism required to generate that behaviour, i.e. the global behaviour
evolves from the many relations of multiple simple behaviours. A machine that
lives to and from/with Synergy. An artificial super-organism that avoids
specific constraints and emerges within multiple low-level implicit
bio-inspired mechanisms. KEYWORDS: Complex Science, ArtSBots Project, Swarm
Intelligence, Stigmergy, UnManned Art, Symbiotic Art, Swarm Paintings, Robot
Paintings, Non-Human Art, Painting Emergence and Cooperation, Art and
Complexity, ArtBots: The Robot Talent Show.
|
cs/0412079
|
The MC2 Project [Machines of Collective Conscience]: A possible walk, up
to Life-like Complexity and Behaviour, from bottom, basic and simple
bio-inspired heuristics - a walk, up into the morphogenesis of information
|
cs.AI cs.MM
|
Synergy (from the Greek word synergos), broadly defined, refers to combined
or co-operative effects produced by two or more elements (parts or
individuals). The definition is often associated with the holistic conviction
that "the whole is greater than the sum of its parts" (Aristotle, in
Metaphysics), or with the view that the whole cannot exceed the sum of the
energies invested in each of its parts (e.g. the first law of thermodynamics),
even if it is more accurate to say that the functional effects produced by
wholes are different from what the parts can produce alone. Synergy is a
ubiquitous phenomenon in nature and human societies alike. One well-known
example is provided by the emergence of self-organization in social insects,
via direct or indirect interactions. The latter type is more subtle, and is
defined as stigmergy to explain task coordination and regulation in the
context of nest reconstruction in termites. An example could be provided by
two individuals who interact indirectly, when one of them modifies the
environment and the other responds to the new environment at a later time. In
other words, stigmergy could be defined as a particular case of environmental
or spatial synergy. The system is purely holistic, and its properties are
intrinsically emergent and autocatalytic. In the present work we present a
"machine" where there is no pre-commitment to any particular representational
scheme: the desired behaviour is distributed and roughly specified
simultaneously among many parts, but there is minimal specification of the
mechanism required to generate that behaviour, i.e. the global behaviour
evolves from the many relations of multiple simple behaviours.
|
cs/0412080
|
The Biological Concept of Neoteny in Evolutionary Colour Image
Segmentation - Simple Experiments in Simple Non-Memetic Genetic Algorithms
|
cs.AI cs.NE
|
Neoteny, also spelled Paedomorphosis, can be defined in biological terms as
the retention by an organism of juvenile or even larval traits into later life.
In some species, all morphological development is retarded; the organism is
juvenilized but sexually mature. Such shifts of reproductive capability would
appear to have adaptive significance to organisms that exhibit it. In terms of
evolutionary theory, the process of paedomorphosis suggests that larval stages
and developmental phases of existing organisms may give rise, under certain
circumstances, to wholly new organisms. Although the present work does not
pretend to model or simulate the biological details of such a concept in any
way, these ideas were incorporated by a rather simple abstract computational
strategy, in order to allow (if possible) for faster convergence into simple
non-memetic Genetic Algorithms, i.e. without using local improvement procedures
(e.g. via Baldwinian or Lamarckian learning). As a case study, the Genetic
Algorithm was used for colour image segmentation purposes, employing K-means
unsupervised clustering methods to guide the evolutionary algorithm in its
search for the optimal or sub-optimal data partition. Average results suggest
that the use of neotenic strategies, by employing juvenile genotypes in later
generations, together with linear-dynamic rather than constant mutation rates,
can increase fitness values by 58% compared to classical Genetic Algorithms,
independently of the starting population characteristics on the search space.
KEYWORDS: Genetic Algorithms, Artificial Neoteny, Dynamic Mutation Rates,
Faster Convergence, Colour Image Segmentation, Classification, Clustering.
|
cs/0412081
|
Artificial Neoteny in Evolutionary Image Segmentation
|
cs.AI cs.NE
|
Neoteny, also spelled Paedomorphosis, can be defined in biological terms as
the retention by an organism of juvenile or even larval traits into later life.
In some species, all morphological development is retarded; the organism is
juvenilized but sexually mature. Such shifts of reproductive capability would
appear to have adaptive significance to organisms that exhibit it. In terms of
evolutionary theory, the process of paedomorphosis suggests that larval stages
and developmental phases of existing organisms may give rise, under certain
circumstances, to wholly new organisms. Although the present work does not
pretend to model or simulate the biological details of such a concept in any
way, these ideas were incorporated by a rather simple abstract computational
strategy, in order to allow (if possible) for faster convergence into simple
non-memetic Genetic Algorithms, i.e. without using local improvement procedures
(e.g. via Baldwinian or Lamarckian learning). As a case study, the Genetic
Algorithm was used for colour image segmentation purposes, employing K-means
unsupervised clustering methods to guide the evolutionary algorithm in its
search for the optimal or sub-optimal data partition. Average results suggest
that the use of neotenic strategies, by employing juvenile genotypes in later
generations, together with linear-dynamic rather than constant mutation rates,
can increase fitness values by 58% compared to classical Genetic Algorithms,
independently of the starting population characteristics on the search space.
KEYWORDS: Genetic Algorithms, Artificial Neoteny, Dynamic Mutation Rates,
Faster Convergence, Colour Image Segmentation, Classification, Clustering.
|
cs/0412083
|
Line and Word Matching in Old Documents
|
cs.AI cs.CV
|
This paper is concerned with the problem of establishing an index based on
word matching. It is assumed that the book was digitised as well as possible
and that some pre-processing techniques were already applied, such as line
orientation correction and some noise removal. However, two main factors make
it impossible to apply ordinary optical character recognition (OCR)
techniques: the presence of antique fonts and the degraded state of many
characters due to unrecoverable time degradation of the originals. In this
paper we give a short introduction to word segmentation, which involves
finding the lines that characterise a word. Afterwards, we discuss different
approaches to word matching and how they can be combined to obtain an ordered
list of candidate words for the matching. This discussion is illustrated by
examples.
|
cs/0412084
|
Map Segmentation by Colour Cube Genetic K-Mean Clustering
|
cs.AI cs.NE
|
Segmentation of a colour image composed of different kinds of texture regions
can be a hard problem, namely computing exact texture fields and deciding the
optimum number of segmentation areas in an image when it contains similar
and/or non-stationary texture fields. In this work, a method is described for
evolving adaptive procedures for these problems. In many real-world
applications, data clustering constitutes a fundamental issue whenever
behavioural or feature domains can be mapped into topological domains. We
formulate the segmentation problem on such images as an optimisation problem
and adopt the evolutionary strategy of Genetic Algorithms for the clustering
of small regions in colour feature space. The present approach embeds k-means
unsupervised clustering methods in Genetic Algorithms, namely to guide this
Evolutionary Algorithm in its search for the optimal or sub-optimal data
partition, a task that, as is known, requires a non-trivial search because of
its NP-complete nature. To solve this task, the appropriate genetic coding is
also discussed, since this is a key aspect of the implementation. Our purpose
is to demonstrate the efficiency of Genetic Algorithms for automatic and
unsupervised texture segmentation. Some examples on colour maps are presented
and overall results discussed. KEYWORDS: Genetic Algorithms, Artificial
Neoteny, Dynamic Mutation Rates, Faster Convergence, Colour Image
Segmentation, Classification, Clustering.
|
cs/0412085
|
A class of one-dimensional MDS convolutional codes
|
cs.IT math.IT math.RA
|
A class of one-dimensional convolutional codes will be presented. They are
all MDS codes, i.e., they have the largest distance among all one-dimensional
codes of the same length n and overall constraint length delta. Furthermore, their
extended row distances are computed, and they increase with slope n-delta. In
certain cases of the algebraic parameters, we will also derive parity check
matrices of Vandermonde type for these codes. Finally, cyclicity in the
convolutional sense will be discussed for our class of codes. It will turn out
that they are cyclic if and only if the field element used in the generator
matrix has order n. This can be regarded as a generalization of the block code
case.
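As context (not stated in the abstract), the MDS property for these codes can be read against the generalized Singleton bound of Rosenthal and Smarandache, which for one-dimensional codes (k = 1) reduces to the form below; this is a sketch assuming the standard definitions, with MDS meaning equality:

```latex
d_{\mathrm{free}} \;\le\; (n-k)\left(\left\lfloor \tfrac{\delta}{k} \right\rfloor + 1\right) + \delta + 1
\;\;\overset{k=1}{=}\;\; n(\delta + 1).
```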
|
cs/0412086
|
Artificial Ant Colonies in Digital Image Habitats - A Mass Behaviour
Effect Study on Pattern Recognition
|
cs.AI cs.CV
|
Some recent studies have pointed out that the self-organization of neurons
into brain-like structures and the self-organization of ants into a swarm are
similar in many respects. If these features could be implemented, they could
lead to important developments in pattern recognition systems, where
perceptive capabilities can emerge and evolve from the interaction of many
simple local rules. The principle of the method is inspired by the work of
Chialvo and Millonas, who developed the first numerical simulation in which
the formation of a swarm cognitive map could be explained. Starting from this
point, an extended model is presented in order to deal with digital image
habitats, in which artificial ants are able to react to the environment and
perceive it. The evolution of pheromone fields shows that artificial ant
colonies can react and adapt appropriately to any type of digital habitat.
KEYWORDS: Swarm Intelligence,
Self-Organization, Stigmergy, Artificial Ant Systems, Pattern Recognition and
Perception, Image Segmentation, Gestalt Perception Theory, Distributed
Computation.
|
cs/0412087
|
Image Colour Segmentation by Genetic Algorithms
|
cs.AI cs.CV
|
Segmentation of a colour image composed of different kinds of texture regions
can be a hard problem, namely when computing exact texture fields and
deciding the optimum number of segmentation areas in an image that contains
similar and/or non-stationary texture fields. In this work, a method is
described for evolving adaptive procedures for these problems. In many
real-world applications data clustering constitutes a fundamental issue
whenever behavioural or feature domains can be mapped into topological
domains. We formulate the segmentation problem upon such images as an
optimisation problem and adopt the evolutionary strategy of Genetic
Algorithms for the clustering of small regions in colour feature space. The
present approach embeds k-Means unsupervised clustering into Genetic
Algorithms, namely for guiding this Evolutionary Algorithm in its search for
the optimal or sub-optimal data partition, a task that, as is known, requires
a non-trivial search because of its intrinsic NP-complete nature. To solve
this task, the appropriate genetic coding is also discussed, since this is a
key aspect of the implementation. Our purpose is to demonstrate the
efficiency of Genetic Algorithms for automatic and unsupervised texture
segmentation. Some examples in
Colour Maps, Ornamental Stones and in Human Skin Mark segmentation are
presented and overall results discussed. KEYWORDS: Genetic Algorithms, Colour
Image Segmentation, Classification, Clustering.
|
cs/0412088
|
On Image Filtering, Noise and Morphological Size Intensity Diagrams
|
cs.CV cs.AI
|
In the absence of a pure, noise-free image it is hard to define what noise is
in any original noisy image, and consequently also where it is and in what
amount. In fact, the definition of noise depends largely on our own aim in
the whole image analysis process and (perhaps more importantly) on our
self-perception of noise. For instance, when we perceive noise as
disconnected and small, it is natural to use MM-ASF filters to treat it.
There are two pieces of evidence for this. First, in many instances there is
no ideal, pure noise-free image against which to compare our filtering
process (nothing but our self-perception of the pure image); second, and
related to this first point, the MM transformations that we choose are based
only on our own, and perhaps fuzzy, notion. The present proposal combines the results of two MM filtering
transformations (FT1, FT2) and makes use of some measures and quantitative
relations on their Size/Intensity Diagrams to find the most appropriate noise
removal process. Results can also be used for finding the most appropriate stop
criteria, and the right sequence of MM operators combination on Alternating
Sequential Filters (ASF), if these measures are applied, for instance, on a
Genetic Algorithm's target function.
|
cs/0412091
|
The Combination of Paradoxical, Uncertain, and Imprecise Sources of
Information based on DSmT and Neutro-Fuzzy Inference
|
cs.AI
|
The management and combination of uncertain, imprecise, fuzzy and even
paradoxical or highly conflicting sources of information has always been, and
still remains today, of primary importance for the development of reliable
modern information systems involving artificial reasoning. In this chapter,
we present a survey of our recent theory of plausible and paradoxical
reasoning, known in the literature as Dezert-Smarandache Theory (DSmT),
developed for dealing with imprecise, uncertain and paradoxical sources of
information. We focus our presentation here on the foundations of DSmT and on
the two important new rules of combination, rather than on browsing specific
applications of DSmT available in the literature. Several simple examples are
given throughout the presentation to show the efficiency and the generality
of this new approach. The last part of this chapter concerns the presentation
of neutrosophic logic and neutro-fuzzy inference and their connection with
DSmT. Fuzzy logic and neutrosophic logic are useful tools in decision making
after fusing the information using the DSm hybrid rule of combination of
masses.
|
cs/0412098
|
The Google Similarity Distance
|
cs.CL cs.AI cs.DB cs.IR cs.LG
|
Words and phrases acquire meaning from the way they are used in society, from
their relative semantics to other words and phrases. For computers the
equivalent of `society' is `database,' and the equivalent of `use' is `way to
search the database.' We present a new theory of similarity between words and
phrases based on information distance and Kolmogorov complexity. To fix
thoughts we use the world-wide-web as database, and Google as search engine.
The method is also applicable to other search engines and databases. This
theory is then applied to construct a method to automatically extract
similarity, the Google similarity distance, of words and phrases from the
world-wide-web using Google page counts. The world-wide-web is the largest
database on earth, and the context information entered by millions of
independent users averages out to provide automatic semantics of useful
quality. We give applications in hierarchical clustering, classification, and
language translation. We give examples to distinguish between colors and
numbers, cluster names of paintings by 17th century Dutch masters and names of
books by English novelists, the ability to understand emergencies, and primes,
and we demonstrate the ability to do a simple automatic English-Spanish
translation. Finally, we use the WordNet database as an objective baseline
against which to judge the performance of our method. We conduct a massive
randomized trial in binary classification using support vector machines to
learn categories based on our Google distance, resulting in a mean agreement
of 87% with the expert crafted WordNet categories.
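The Google similarity distance described above can be sketched with the Normalized Google Distance formula from the underlying theory; the page counts used below are hypothetical, not real search-engine results:

```python
from math import log

def ngd(f_x, f_y, f_xy, n):
    """Normalized Google Distance computed from page counts.

    f_x, f_y -- number of pages containing each term alone,
    f_xy     -- number of pages containing both terms,
    n        -- total number of pages indexed by the search engine.
    """
    lx, ly, lxy = log(f_x), log(f_y), log(f_xy)
    return (max(lx, ly) - lxy) / (log(n) - min(lx, ly))

# Hypothetical counts: terms that co-occur often are "close" (NGD near 0).
print(round(ngd(f_x=8_000, f_y=9_000, f_xy=6_000, n=10_000_000), 3))  # → 0.057
```

Terms that always occur together get distance 0, while unrelated terms get a distance close to (or above) 1.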
|
cs/0412103
|
Chosen-Plaintext Cryptanalysis of a Clipped-Neural-Network-Based Chaotic
Cipher
|
cs.CR cs.NE nlin.CD
|
In ISNN'04, a novel symmetric cipher was proposed, by combining a chaotic
signal and a clipped neural network (CNN) for encryption. The present paper
analyzes the security of this chaotic cipher against chosen-plaintext attacks,
and points out that this cipher can be broken by a chosen-plaintext attack.
Experimental analyses are given to support the feasibility of the proposed
attack.
|
cs/0412104
|
Negotiating over Bundles and Prices Using Aggregate Knowledge
|
cs.MA cs.GT
|
Combining two or more items and selling them as one good, a practice called
bundling, can be a very effective strategy for reducing the costs of producing,
marketing, and selling goods. In this paper, we consider a form of multi-issue
negotiation where a shop negotiates both the contents and the price of bundles
of goods with his customers. We present some key insights about, as well as a
technique for, locating mutually beneficial alternatives to the bundle
currently under negotiation. The essence of our approach lies in combining
historical sales data, condensed into aggregate knowledge, with current data
about the ongoing negotiation process, to exploit these insights. In
particular, when negotiating a given bundle of goods with a customer, the shop
analyzes the sequence of the customer's offers to determine the progress in the
negotiation process. In addition, it uses aggregate knowledge concerning
customers' valuations of goods in general. We show how the shop can use these
two sources of data to locate promising alternatives to the current bundle.
When the current negotiation's progress slows down, the shop may suggest the
most promising of those alternatives and, depending on the customer's response,
continue negotiating about the alternative bundle, or propose another
alternative. Extensive computer simulation experiments show that our approach
increases the speed with which deals are reached, as well as the number and
quality of the deals reached, as compared to a benchmark. In addition, we show
that the performance of our system is robust to a variety of changes in the
negotiation strategies employed by the customers.
|
cs/0412105
|
On the existence of stable models of non-stratified logic programs
|
cs.AI cs.LO
|
This paper introduces a fundamental result, which is relevant for Answer Set
programming, and planning. For the first time since the definition of the
stable model semantics, the class of logic programs for which a stable model
exists is given a syntactic characterization. This condition may have a
practical importance both for defining new algorithms for checking consistency
and computing answer sets, and for improving the existing systems. The
approach of this paper is to introduce a new canonical form (to which any
logic program can be reduced), in order to focus attention on cyclic
dependencies. The
technical result is then given in terms of programs in canonical form
(canonical programs), without loss of generality. The result is based on
identifying the cycles contained in the program, showing that stable models of
the overall program are composed of stable models of suitable sub-programs,
corresponding to the cycles, and on defining the Cycle Graph. Each vertex of
this graph corresponds to one cycle, and each edge corresponds to one handle,
which is a literal containing an atom that, occurring in both cycles,
actually determines a connection between them. In fact, the truth value of
the handle in the cycle where it appears as the head of a rule influences the
truth value of the atoms of the cycle(s) where it occurs in the body. We can
therefore introduce the concept of a handle path, connecting different
cycles. If for every odd cycle we can find a handle path with certain
properties, then the existence of a stable model is guaranteed.
|
cs/0412106
|
Online Learning of Aggregate Knowledge about Non-linear Preferences
Applied to Negotiating Prices and Bundles
|
cs.MA cs.GT cs.LG
|
In this paper, we consider a form of multi-issue negotiation where a shop
negotiates both the contents and the price of bundles of goods with his
customers. We present some key insights about, as well as a procedure for,
locating mutually beneficial alternatives to the bundle currently under
negotiation. The essence of our approach lies in combining aggregate
(anonymous) knowledge of customer preferences with current data about the
ongoing negotiation process. The developed procedure either works with already
obtained aggregate knowledge or, in the absence of such knowledge, learns the
relevant information online. We conduct computer experiments with simulated
customers that have nonlinear preferences. We show how, for various types of
customers, with distinct negotiation heuristics, our procedure (with and
without the necessary aggregate knowledge) increases the speed with which deals
are reached, as well as the number and the Pareto efficiency of the deals
reached compared to a benchmark.
|
cs/0412108
|
Mutual Information and Minimum Mean-square Error in Gaussian Channels
|
cs.IT math.IT
|
This paper deals with arbitrarily distributed finite-power input signals
observed through an additive Gaussian noise channel. It shows a new formula
that connects the input-output mutual information and the minimum mean-square
error (MMSE) achievable by optimal estimation of the input given the output.
That is, the derivative of the mutual information (nats) with respect to the
signal-to-noise ratio (SNR) is equal to half the MMSE, regardless of the input
statistics. This relationship holds for both scalar and vector signals, as well
as for discrete-time and continuous-time noncausal MMSE estimation. This
fundamental information-theoretic result has an unexpected consequence in
continuous-time nonlinear estimation: For any input signal with finite power,
the causal filtering MMSE achieved at SNR is equal to the average value of the
noncausal smoothing MMSE achieved with a channel whose signal-to-noise ratio is
chosen uniformly distributed between 0 and SNR.
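In symbols, the two relations described above can be written as follows (mutual information in nats, mmse the noncausal MMSE, cmmse the causal filtering MMSE, stated in the equivalent averaged form):

```latex
\frac{\mathrm{d}}{\mathrm{d}\,\mathsf{snr}}\, I(\mathsf{snr}) \;=\; \tfrac{1}{2}\,\mathrm{mmse}(\mathsf{snr}),
\qquad
\mathrm{cmmse}(\mathsf{snr}) \;=\; \frac{1}{\mathsf{snr}} \int_{0}^{\mathsf{snr}} \mathrm{mmse}(\gamma)\,\mathrm{d}\gamma .
```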
|
cs/0412109
|
Global minimization of a quadratic functional: neural network approach
|
cs.NE cs.DM
|
The problem of finding the global minimum of a multiextremal functional is
discussed. One frequently faces such functionals in various applications. We
propose a procedure whose cost grows only polynomially with the
dimensionality of the problem. In our approach we use the eigenvalues and
eigenvectors of the connection matrix.
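The abstract does not spell out the procedure; as a minimal illustration of how eigen-decomposition locates the global minimum of a quadratic functional, the constrained problem min x^T J x over the unit sphere is solved exactly by the smallest eigenvalue and its eigenvector:

```python
import numpy as np

def quadratic_min_on_sphere(j):
    """Global minimum of E(x) = x^T J x over the unit sphere ||x|| = 1.

    For a symmetric matrix J the minimum value is the smallest eigenvalue,
    attained at the corresponding eigenvector -- a polynomial-cost
    computation, in the spirit of the eigen-based approach of the abstract.
    """
    vals, vecs = np.linalg.eigh(j)   # eigenvalues in ascending order
    return vals[0], vecs[:, 0]

j = np.array([[2.0, -1.0], [-1.0, 2.0]])
e_min, x_min = quadratic_min_on_sphere(j)
print(round(e_min, 3))               # → 1.0
```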
|
cs/0412110
|
Q-valued neural network as a system of fast identification and pattern
recognition
|
cs.NE cs.CV
|
An effective neural network algorithm of the perceptron type is proposed. The
algorithm allows us to identify a strongly distorted input vector reliably.
It is shown that its reliability and processing speed are orders of magnitude
higher than those of fully connected neural networks. The processing speed of
our algorithm also exceeds that of a stack fast-access retrieval algorithm
modified to work with noise in the input channel.
|
cs/0412111
|
On the asymptotic accuracy of the union bound
|
cs.IT math.IT
|
A new lower bound on the error probability of maximum likelihood decoding of
a binary code on a binary symmetric channel was proved in Barg and McGregor
(2004, cs.IT/0407011). It was observed in that paper that this bound leads to a
new region of code rates in which the random coding exponent is asymptotically
tight, giving a new region in which the reliability of the BSC is known
exactly. The present paper explains the relation of these results to the union
bound on the error probability.
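For reference, the union bound in question upper-bounds the ML decoding error probability on a BSC with crossover probability p via the code's weight distribution A_w; a standard form (ignoring ties at even weights) is:

```latex
P_e \;\le\; \sum_{w=1}^{n} A_w\, P_2(w),
\qquad
P_2(w) \;\approx\; \sum_{i > w/2} \binom{w}{i}\, p^{\,i} (1-p)^{\,w-i}.
```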
|
cs/0412112
|
Source Coding With Encoder Side Information
|
cs.IT math.IT
|
We introduce the idea of distortion side information, which does not directly
depend on the source but instead affects the distortion measure. We show that
such distortion side information is not only useful at the encoder, but that
under certain conditions, knowing it at only the encoder is as good as knowing
it at both encoder and decoder, and knowing it at only the decoder is useless.
Thus distortion side information is a natural complement to the signal side
information studied by Wyner and Ziv, which depends on the source but does not
involve the distortion measure. Furthermore, when both types of side
information are present, we characterize the penalty for deviating from the
configuration of encoder-only distortion side information and decoder-only
signal side information, which in many cases is as good as full side
information knowledge.
|
cs/0412113
|
Source-Channel Diversity for Parallel Channels
|
cs.IT math.IT
|
We consider transmitting a source across a pair of independent, non-ergodic
channels with random states (e.g., slow fading channels) so as to minimize the
average distortion. The general problem is unsolved. Hence, we focus on
comparing two commonly used source and channel encoding systems which
correspond to exploiting diversity either at the physical layer through
parallel channel coding or at the application layer through multiple
description source coding.
For on-off channel models, source coding diversity offers better performance.
For channels with a continuous range of reception quality, we show the reverse
is true. Specifically, we introduce a new figure of merit called the distortion
exponent which measures how fast the average distortion decays with SNR. For
continuous-state models such as additive white Gaussian noise channels with
multiplicative Rayleigh fading, optimal channel coding diversity at the
physical layer is more efficient than source coding diversity at the
application layer in that the former achieves a better distortion exponent.
Finally, we consider a third decoding architecture: multiple description
encoding with a joint source-channel decoding. We show that this architecture
achieves the same distortion exponent as systems with optimal channel coding
diversity for continuous-state channels, and maintains the advantages of
multiple description systems for on-off channels. Thus, the multiple
description system with joint decoding achieves the best performance, from
among the three architectures considered, on both continuous-state and on-off
channels.
|
cs/0412114
|
State of the Art, Evaluation and Recommendations regarding "Document
Processing and Visualization Techniques"
|
cs.CL
|
Several Networks of Excellence have been set up in the framework of the
European FP5 research program. Among these Networks of Excellence, the NEMIS
project focuses on the field of Text Mining.
Within this field, document processing and visualization was identified as
one of the key topics and the WG1 working group was created in the NEMIS
project, to carry out a detailed survey of techniques associated with the text
mining process and to identify the relevant research topics in related research
areas.
In this document we present the results of this comprehensive survey. The
report includes a description of the current state-of-the-art and practice, a
roadmap for follow-up research in the identified areas, and recommendations for
anticipated technological development in the domain of text mining.
|
cs/0412117
|
Thematic Annotation: extracting concepts out of documents
|
cs.CL
|
Contrary to standard approaches to topic annotation, the technique used in
this work does not centrally rely on some sort of -- possibly statistical --
keyword extraction. In fact, the proposed annotation algorithm uses a large
scale semantic database -- the EDR Electronic Dictionary -- that provides a
concept hierarchy based on hyponym and hypernym relations. This concept
hierarchy is used to generate a synthetic representation of the document by
aggregating the words present in topically homogeneous document segments into a
set of concepts best preserving the document's content.
This new extraction technique uses an unexplored approach to topic selection.
Instead of using semantic similarity measures based on a semantic resource,
the latter is processed to extract the part of the conceptual hierarchy
relevant to the document content. Then this conceptual hierarchy is searched
to extract the
most relevant set of concepts to represent the topics discussed in the
document. Notice that this algorithm is able to extract generic concepts that
are not directly present in the document.
|
cs/0501005
|
Portfolio selection using neural networks
|
cs.NE
|
In this paper we apply a heuristic method based on artificial neural networks
in order to trace out the efficient frontier associated to the portfolio
selection problem. We consider a generalization of the standard Markowitz
mean-variance model which includes cardinality and bounding constraints. These
constraints ensure the investment in a given number of different assets and
limit the amount of capital to be invested in each asset. We present some
experimental results obtained with the neural network heuristic and we compare
them to those obtained with three previous heuristic methods.
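The cardinality- and bound-constrained mean-variance model referred to above can be sketched as follows (Σ the covariance matrix, μ the expected returns, K the required number of assets, ε_i and δ_i the per-asset investment bounds; the notation is assumed, not taken from the paper):

```latex
\min_{x,\,z}\;\; x^{\mathsf T}\Sigma\, x
\quad\text{s.t.}\quad
\mu^{\mathsf T}x = R^{*},\qquad
\sum_{i=1}^{N} x_i = 1,\qquad
\sum_{i=1}^{N} z_i = K,\qquad
\varepsilon_i z_i \le x_i \le \delta_i z_i,\qquad
z_i \in \{0,1\}.
```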
|
cs/0501006
|
Formal Languages and Algorithms for Similarity based Retrieval from
Sequence Databases
|
cs.LO cs.DB
|
The paper considers various formalisms based on Automata, Temporal Logic and
Regular Expressions for specifying queries over sequences. Unlike traditional
binary semantics, the paper presents a similarity-based semantics for these
formalisms. More specifically, a distance measure in the range [0,1] is
associated with a (sequence, query) pair, denoting how closely the sequence
satisfies the query. These measures are defined using a spectrum of normed
vector distance measures. Various distance measures based on the syntax and
the traditional semantics of the query are presented, together with efficient
algorithms for computing them. These algorithms can be employed for the
retrieval of sequences from a database that closely satisfy a given query.
|
cs/0501008
|
Multipartite Secret Correlations and Bound Information
|
cs.CR cs.IT math.IT quant-ph
|
We consider the problem of secret key extraction when $n$ honest parties and
an eavesdropper share correlated information. We present a family of
probability distributions and give the full characterization of its
distillation properties. This formalism allows us to design a rich variety of
cryptographic scenarios. In particular, we provide examples of multipartite
probability distributions containing non-distillable secret correlations, also
known as bound information.
|
cs/0501011
|
A simple algorithm for decoding Reed-Solomon codes and its relation to
the Welch-Berlekamp algorithm
|
cs.IT math.IT
|
A simple and natural algorithm due to Gao for decoding algebraic codes is
described, and its relation to the Welch-Berlekamp and Euclidean algorithms
is given.
|
cs/0501015
|
Application of Generating Functions and Partial Differential Equations
in Coding Theory
|
cs.IT math.IT
|
In this work we have considered formal power series and partial differential
equations, and their relationship with Coding Theory. We have obtained the
nature of solutions for the partial differential equations for Cycle Poisson
Case. The coefficients for this case have been simulated, and the high tendency
of growth is shown. In the light of Complex Analysis, the Hadamard
Multiplication Theorem is presented as a new approach to divide the power
sums relating to the error probability, each part of which can be analyzed
later.
|
cs/0501016
|
On the weight distribution of convolutional codes
|
cs.IT math.IT math.OC
|
Detailed information about the weight distribution of a convolutional code is
given by the adjacency matrix of the state diagram associated with a controller
canonical form of the code. We will show that this matrix is an invariant of
the code. Moreover, it will be proven that codes with the same adjacency matrix
have the same dimension and the same Forney indices and finally that for
one-dimensional binary convolutional codes the adjacency matrix determines the
code uniquely up to monomial equivalence.
|
cs/0501017
|
Public Key Cryptography based on Semigroup Actions
|
cs.CR cs.IT math.IT
|
A generalization of the original Diffie-Hellman key exchange in $(\Z/p\Z)^*$
found a new depth when Miller and Koblitz suggested that such a protocol could
be used with the group over an elliptic curve. In this paper, we propose a
further vast generalization where abelian semigroups act on finite sets. We
define a Diffie-Hellman key exchange in this setting and we illustrate how to
build interesting semigroup actions using finite (simple) semirings. The
practicality of the proposed extensions relies on the orbit sizes of the
semigroup actions, and at this point it is an open question how to compute
the sizes of these orbits in general, and also whether there exists a square
root attack
in general. In Section 2 a concrete practical semigroup action built from
simple semirings is presented. It will require further research to analyse this
system.
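A minimal sketch of a semigroup-action key exchange, shown here in its classical special case where the abelian semigroup (N, ·) acts on Z_p^* by exponentiation; the paper's actual proposal builds actions from finite simple semirings, and the parameters below are illustrative only:

```python
# Semigroup action: a . x = x^a mod p, with (a*b) . x = a . (b . x).
p, g = 2_147_483_647, 7      # illustrative public parameters (a Mersenne prime)

a = 123_456                  # Alice's secret semigroup element
b = 654_321                  # Bob's secret semigroup element

ga = pow(g, a, p)            # Alice publishes a . g
gb = pow(g, b, p)            # Bob publishes b . g

k_alice = pow(gb, a, p)      # Alice computes a . (b . g)
k_bob = pow(ga, b, p)        # Bob computes b . (a . g)
assert k_alice == k_bob      # commutativity of the action makes the keys agree
```

The security then hinges on how hard it is to recover a from g and a . g for the chosen action, which is exactly the orbit-size question raised in the abstract.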
|
cs/0501018
|
Combining Independent Modules in Lexical Multiple-Choice Problems
|
cs.LG cs.CL cs.IR
|
Existing statistical approaches to natural language problems are very coarse
approximations to the true complexity of language processing. As such, no
single technique will be best for all problem instances. Many researchers are
examining ensemble methods that combine the output of multiple modules to
create more accurate solutions. This paper examines three merging rules for
combining probability distributions: the familiar mixture rule, the logarithmic
rule, and a novel product rule. These rules were applied with state-of-the-art
results to two problems used to assess human mastery of lexical semantics --
synonym questions and analogy questions. All three merging rules result in
ensembles that are more accurate than any of their component modules. The
differences among the three rules are not statistically significant, but it is
suggestive that the popular mixture rule is not the best rule for either of the
two problems.
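The mixture and logarithmic merging rules can be sketched as below; the paper's novel product rule is omitted because its exact form is not given in the abstract, and the uniform module weights are an assumption:

```python
import numpy as np

def mixture(ps, w=None):
    """Mixture rule: weighted arithmetic mean of module distributions."""
    ps = np.asarray(ps, dtype=float)
    w = np.full(len(ps), 1.0 / len(ps)) if w is None else np.asarray(w)
    return w @ ps

def logarithmic(ps, w=None):
    """Logarithmic rule: weighted geometric mean, renormalized to sum to 1."""
    ps = np.asarray(ps, dtype=float)
    w = np.full(len(ps), 1.0 / len(ps)) if w is None else np.asarray(w)
    q = np.exp(w @ np.log(ps))
    return q / q.sum()

# Two modules scoring four synonym candidates:
p1 = [0.4, 0.3, 0.2, 0.1]
p2 = [0.5, 0.1, 0.3, 0.1]
print(mixture([p1, p2]))       # arithmetic average: [0.45, 0.2, 0.25, 0.1]
print(logarithmic([p1, p2]))   # sharper where the modules agree
```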
|
cs/0501019
|
Clustering SPIRES with EqRank
|
cs.DL cs.IR
|
SPIRES is the largest database of scientific papers in the subject field of
high energy and nuclear physics. It contains information on the citation
graph of more than half a million papers (vertices of the citation graph). We
outline the EqRank algorithm designed to cluster vertices of directed graphs,
and present the results of EqRank application to the SPIRES citation graph. The
hierarchical clustering of SPIRES yielded by EqRank is used to set up a web
service, which is also outlined.
|
cs/0501023
|
No-cloning principal can alone provide security
|
cs.IT math.IT
|
Existing quantum key distribution schemes need the support of a classical
authentication scheme to ensure security. This is a conceptual drawback of
quantum cryptography. It is pointed out that a quantum cryptosystem does not
need any support from a classical cryptosystem to ensure security. The
no-cloning principle alone can provide security in communication. The
no-cloning principle can even help to authenticate each bit of information.
This implies that a quantum password need not be a secret password.
|
cs/0501025
|
A Logic for Non-Monotone Inductive Definitions
|
cs.AI cs.LO
|
Well-known principles of induction include monotone induction and different
sorts of non-monotone induction such as inflationary induction, induction over
well-founded sets and iterated induction. In this work, we define a logic
formalizing induction over well-founded sets and monotone and iterated
induction. Just as the principle of positive induction has been formalized in
FO(LFP), and the principle of inflationary induction has been formalized in
FO(IFP), this paper formalizes the principle of iterated induction in a new
logic for Non-Monotone Inductive Definitions (ID-logic). The semantics of the
logic is strongly influenced by the well-founded semantics of logic
programming. Our main result concerns the modularity properties of inductive
definitions in ID-logic. Specifically, we formulate conditions under which a
simultaneous definition $\D$ of several relations is logically equivalent to a
conjunction of smaller definitions $\D_1 \land ... \land \D_n$ with disjoint
sets of defined predicates. The difficulty of the result comes from the fact
that predicates $P_i$ and $P_j$ defined in $\D_i$ and $\D_j$, respectively, may
be mutually connected by simultaneous induction. Since logic programming and
abductive logic programming under well-founded semantics are proper fragments
of our logic, our modularity results are applicable there as well.
|
cs/0501028
|
An Empirical Study of MDL Model Selection with Infinite Parametric
Complexity
|
cs.LG cs.IT math.IT
|
Parametric complexity is a central concept in MDL model selection. In
practice it often turns out to be infinite, even for quite simple models such
as the Poisson and Geometric families. In such cases, MDL model selection as
based on NML and Bayesian inference based on Jeffreys' prior cannot be used.
Several ways to resolve this problem have been proposed. We conduct experiments
to compare and evaluate their behaviour on small sample sizes.
We find surprisingly poor behaviour for the plug-in predictive code; a
restricted NML model performs quite well but it is questionable if the results
validate its theoretical motivation. The Bayesian model with the improper
Jeffreys' prior is the most dependable.
|
cs/0501029
|
Estimating Range Queries using Aggregate Data with Integrity
Constraints: a Probabilistic Approach
|
cs.DB
|
The problem of recovering (count and sum) range queries over multidimensional
data only on the basis of aggregate information on such data is addressed. This
problem can be formalized as follows. Suppose that a transformation T producing
a summary from a multidimensional data set is used. Now, given a data set D, a
summary S=T(D) and a range query r on D, the problem consists of studying r by
modelling it as a random variable defined over the sample space of all the data
sets D' such that T(D) = S. The study of such a random variable, done by the
definition of its probability distribution and the computation of its mean
value and variance, represents a well-founded, theoretical probabilistic
approach for estimating the query only on the basis of the available
information (that is the summary S) without assumptions on original data.
|
cs/0501031
|
From truth to computability II
|
cs.LO cs.AI math.LO
|
Computability logic is a formal theory of computational tasks and resources.
Formulas in it represent interactive computational problems, and "truth" is
understood as algorithmic solvability. Interactive computational problems, in
turn, are defined as a certain sort of games between a machine and its
environment, with logical operators standing for operations on such games.
Within the ambitious program of finding axiomatizations for incrementally rich
fragments of this semantically introduced logic, the earlier article "From
truth to computability I" proved soundness and completeness for system CL3,
whose language has the so called parallel connectives (including negation),
choice connectives, choice quantifiers, and blind quantifiers. The present
paper extends that result to the significantly more expressive system CL4 with
the same collection of logical operators. What makes CL4 expressive is the
presence of two sorts of atoms in its language: elementary atoms, representing
elementary computational problems (i.e. predicates, i.e. problems of zero
degree of interactivity), and general atoms, representing arbitrary
computational problems. CL4 conservatively extends CL3, with the latter being
nothing but the general-atom-free fragment of the former. Removing the blind
(classical) group of quantifiers from the language of CL4 is shown to yield a
decidable logic despite the fact that the latter is still first-order. A
comprehensive online source on computability logic can be found at
http://www.cis.upenn.edu/~giorgi/cl.html
|
cs/0501036
|
Enabling Agents to Dynamically Select Protocols for Interactions
|
cs.MA cs.SE
|
In this paper we describe a method that allows agents to dynamically select
protocols and roles when they need to execute collaborative tasks.
|
cs/0501042
|
Maintaining Consistency of Data on the Web
|
cs.DB cs.DS
|
Increasingly more data is becoming available on the Web, with estimates of
1 billion documents in 2002. Most of these documents are Web pages whose data
is considered to be in XML format, which is expected to eventually replace HTML.
A common problem in designing and maintaining a Web site is that data on a
Web page often replicates or derives from other data, the so-called base data,
that is usually not contained in the deriving or replicating page.
Consequently, replicas and derivations become inconsistent upon modifying base
data in a Web page or a relational database. For example, after assigning a
thesis to a student and modifying the Web page that describes it in detail, the
thesis is still incorrectly contained in the list of offered theses, missing
from the list of ongoing theses, and missing from the advisor's teaching record.
The thesis presents a solution by proposing a combined approach that provides
for maintaining consistency of data in Web pages that (i) replicate data in
relational databases, or (ii) replicate or derive from data in Web pages. Upon
modifying base data, the modification is immediately pushed to affected Web
pages. There, maintenance is performed incrementally by only modifying the
affected part of the page instead of re-generating the whole page from scratch.
|
cs/0501044
|
Augmented Segmentation and Visualization for Presentation Videos
|
cs.MM cs.IR
|
We investigate methods of segmenting, visualizing, and indexing presentation
videos by separately considering audio and visual data. The audio track is
segmented by speaker, and augmented with key phrases which are extracted using
an Automatic Speech Recognizer (ASR). The video track is segmented by visual
dissimilarities and augmented by representative key frames. An interactive user
interface combines a visual representation of audio, video, text, and key
frames, and allows the user to navigate a presentation video. We also explore
clustering and labeling of speaker data and present preliminary results.
|
cs/0501046
|
Thermodynamics of used punched tape: A weak and a strong equivalence
principle
|
cs.IT math.IT
|
We study the repeated use of a monotonic recording medium--such as punched
tape or photographic plate--where marks can be added at any time but never
erased. (For practical purposes, also the electromagnetic "ether" falls into
this class.) Our emphasis is on the case where the successive users act
independently and selfishly, but not maliciously; typically, the "first user"
would be a blind natural process tending to degrade the recording medium, and
the "second user" a human trying to make the most of whatever capacity is left.
To what extent is a length of used tape "equivalent"--for information
transmission purposes--to a shorter length of virgin tape? Can we characterize
a piece of used tape by an appropriate "effective length" and forget all other
details? We identify two equivalence principles. The weak principle is exact,
but only holds for a sequence of infinitesimal usage increments. The strong
principle holds for any amount of incremental usage, but is only approximate;
nonetheless, it is quite accurate even in the worst case and is virtually exact
over most of the range--becoming exact in the limit of heavily used tape.
The fact that strong equivalence does not hold exactly, but then it does
almost exactly, comes as a bit of a surprise.
|
cs/0501047
|
Impact of Channel Estimation Errors on Multiuser Detection via the
Replica Method
|
cs.IT math.IT
|
For practical wireless DS-CDMA systems, channel estimation is imperfect due
to noise and interference. In this paper, the impact of channel estimation
errors on multiuser detection (MUD) is analyzed under the framework of the
replica method. System performance is obtained in the large system limit for
optimal MUD, linear MUD and turbo MUD, and is validated by numerical results
for finite systems.
|
cs/0501048
|
Low Complexity Joint Iterative Equalization and Multiuser Detection in
Dispersive DS-CDMA Channels
|
cs.IT math.IT
|
Communications in dispersive direct-sequence code-division multiple-access
(DS-CDMA) channels suffer from intersymbol and multiple-access interference,
which can significantly impair performance. Joint maximum \textit{a posteriori}
probability (MAP) equalization and multiuser detection with error control
decoding can be used to mitigate this interference and to achieve the optimal
bit error rate. Unfortunately, such optimal detection typically requires
prohibitive computational complexity. This problem is addressed in this paper
through the development of a reduced state trellis search detection algorithm,
based on decision feedback from channel decoders. The performance of this
algorithm is analyzed in the large-system limit. This analysis and simulations
show that this low-complexity algorithm can obtain near-optimal performance
under moderate signal-to-noise ratio and attains larger system load capacity
than parallel interference cancellation.
|
cs/0501049
|
Performance Evaluation of Impulse Radio UWB Systems with Pulse-Based
Polarity Randomization
|
cs.IT math.IT
|
In this paper, the performance of a binary phase shift keyed random
time-hopping impulse radio system with pulse-based polarity randomization is
analyzed. Transmission over frequency-selective channels is considered and the
effects of inter-frame interference and multiple access interference on the
performance of a generic Rake receiver are investigated for both synchronous
and asynchronous systems. Closed form (approximate) expressions for the
probability of error that are valid for various Rake combining schemes are
derived. The asynchronous system is modelled as a chip-synchronous system with
uniformly distributed timing jitter for the transmitted pulses of interfering
users. This model allows the analytical technique developed for the synchronous
case to be extended to the asynchronous case. An approximate closed-form
expression for the probability of bit error, expressed in terms of the
autocorrelation function of the transmitted pulse, is derived for the
asynchronous case. Then, transmission over an additive white Gaussian noise
channel is studied as a special case, and the effects of multiple-access
interference are investigated for both synchronous and asynchronous systems. The
analysis shows that the chip-synchronous assumption can result in
over-estimating the error probability, and the degree of over-estimation mainly
depends on the autocorrelation function of the ultra-wideband pulse and the
signal-to-interference-plus-noise ratio of the system. Simulation studies
support the approximate analysis.
|
cs/0501050
|
Energy-Efficient Joint Estimation in Sensor Networks: Analog vs. Digital
|
cs.IT math.IT
|
Sensor networks in which energy is a limited resource so that energy
consumption must be minimized for the intended application are considered. In
this context, an energy-efficient method for the joint estimation of an unknown
analog source under a given distortion constraint is proposed. The approach is
purely analog, in which each sensor simply amplifies and forwards the
noise-corrupted analog observation to the fusion center for joint estimation.
The total transmission power across all the sensor nodes is minimized while
satisfying a distortion requirement on the joint estimate. The energy
efficiency of this analog approach is compared with previously proposed digital
approaches with and without coding. It is shown in our simulation that the
analog approach is more energy-efficient than the digital system without
coding, and in some cases outperforms the digital system with optimal coding.
|
cs/0501051
|
On the Capacity of Multiple Antenna Systems in Rician Fading
|
cs.IT math.IT
|
The effect of Rician-ness on the capacity of multiple antenna systems is
investigated under the assumption that channel state information (CSI) is
available only at the receiver. The average-power-constrained capacity of such
systems is considered under two different assumptions on the knowledge about
the fading available at the transmitter: the case in which the transmitter has
no knowledge of fading at all, and the case in which the transmitter has
knowledge of the distribution of the fading process but not the instantaneous
CSI. The exact capacity is given for the former case while capacity bounds are
derived for the latter case. A new signalling scheme is also proposed for the
latter case and it is shown that by exploiting the knowledge of Rician-ness at
the transmitter via this signalling scheme, significant capacity gain can be
achieved. The derived capacity bounds are evaluated explicitly to provide
numerical results in some representative situations.
|