| id | title | categories | abstract |
|---|---|---|---|
cs/0408031
|
There Goes the Neighborhood: Relational Algebra for Spatial Data Search
|
cs.DB
|
We explored ways of doing spatial search within a relational database: (1)
hierarchical triangular mesh (a tessellation of the sphere), (2) a zoned
bucketing system, and (3) representing areas as disjunctive-normal form
constraints. Each of these approaches has merits. They all allow efficient
point-in-region queries. A relational representation for regions allows Boolean
operations among them and allows quick tests for point-in-region,
regions-containing-point, and region-overlap. The speed of these algorithms is
much improved by a zone and multi-scale zone-pyramid scheme. The approach has
the virtue that the zone mechanism works well on B-Trees native to all SQL
systems and integrates naturally with current query optimizers - rather than
requiring a new spatial access method and concomitant query optimizer
extensions. Over the last 5 years, we have used these techniques extensively in
our work on SkyServer.sdss.org, and SkyQuery.net.
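The zone idea above is easy to sketch: bucket objects into horizontal declination bands so that a spatial query becomes a few B-tree range scans. The zone height and function names below are illustrative choices, not the SkyServer implementation.

```python
import math

ZONE_HEIGHT = 0.5  # degrees of declination per zone (illustrative choice)

def zone_id(dec):
    """Bucket a declination (-90..90 degrees) into a fixed-height zone."""
    return int(math.floor((dec + 90.0) / ZONE_HEIGHT))

def candidate_zones(dec, radius):
    """Zones that can contain any point within `radius` degrees of `dec`.
    A B-tree index on (zone_id, ra) turns this list into range scans,
    which is what lets the scheme ride on a plain SQL optimizer."""
    lo = zone_id(max(-90.0, dec - radius))
    hi = zone_id(min(90.0, dec + radius))
    return list(range(lo, hi + 1))
```

A neighborhood query then scans only the handful of zones returned, pruning by exact distance afterwards.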
|
cs/0408036
|
Consensus on Transaction Commit
|
cs.DC cs.DB
|
The distributed transaction commit problem requires reaching agreement on
whether a transaction is committed or aborted. The classic Two-Phase Commit
protocol blocks if the coordinator fails. Fault-tolerant consensus algorithms
also reach agreement, but do not block whenever any majority of the processes
are working. Running a Paxos consensus algorithm on the commit/abort decision
of each participant yields a transaction commit protocol that uses 2F +1
coordinators and makes progress if at least F +1 of them are working. In the
fault-free case, this algorithm requires one extra message delay but has the
same stable-storage write delay as Two-Phase Commit. The classic Two-Phase
Commit algorithm is obtained as the special F = 0 case of the general Paxos
Commit algorithm.
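The fault-tolerance arithmetic stated above can be made concrete in a few lines; the helper names below are ours, not the paper's:

```python
def paxos_commit_params(F):
    """For desired fault tolerance F, Paxos Commit uses 2F+1 coordinators
    (acceptors) and makes progress while any F+1 of them are working.
    F = 0 gives 1 coordinator and degenerates to classic Two-Phase Commit."""
    acceptors = 2 * F + 1
    quorum = F + 1
    return acceptors, quorum

def can_make_progress(F, working):
    """True if enough coordinators are up for the protocol to proceed."""
    acceptors, quorum = paxos_commit_params(F)
    assert 0 <= working <= acceptors
    return working >= quorum
```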
|
cs/0408037
|
Multi-dimensional Type Theory: Rules, Categories, and Combinators for
Syntax and Semantics
|
cs.CL cs.AI cs.LO
|
We investigate the possibility of modelling the syntax and semantics of
natural language by constraints, or rules, imposed by the multi-dimensional
type theory Nabla. The only multiplicity we explicitly consider is two, namely
one dimension for the syntax and one dimension for the semantics, but the
general perspective is important. For example, issues of pragmatics could be
handled as additional dimensions.
One of the main problems addressed is the rather complicated repertoire of
operations that exists besides the notion of categories in traditional Montague
grammar. For the syntax we use a categorial grammar along the lines of Lambek.
For the semantics we use so-called lexical and logical combinators inspired by
work in natural logic. Nabla provides a concise interpretation and a sequent
calculus as the basis for implementations.
|
cs/0408038
|
The Dynamics of Group Codes: Dual Abelian Group Codes and Systems
|
cs.IT math.IT
|
Fundamental results concerning the dynamics of abelian group codes
(behaviors) and their duals are developed. Duals of sequence spaces over
locally compact abelian groups may be defined via Pontryagin duality; dual
group codes are orthogonal subgroups of dual sequence spaces. The dual of a
complete code or system is finite, and the dual of a Laurent code or system is
(anti-)Laurent. If C and C^\perp are dual codes, then the state spaces of C act
as the character groups of the state spaces of C^\perp. The controllability
properties of C are the observability properties of C^\perp. In particular, C
is (strongly) controllable if and only if C^\perp is (strongly) observable, and
the controller memory of C is the observer memory of C^\perp. The controller
granules of C act as the character groups of the observer granules of C^\perp.
Examples of minimal observer-form encoder and syndrome-former constructions are
given. Finally, every observer granule of C is an "end-around" controller
granule of C.
|
cs/0408039
|
Medians and Beyond: New Aggregation Techniques for Sensor Networks
|
cs.DC cs.DB cs.DS
|
Wireless sensor networks offer the potential to span and monitor large
geographical areas inexpensively. Sensors, however, have significant power
constraints (battery life), making communication very expensive. Another
important issue in the context of sensor-based information systems is that
individual sensor readings are inherently unreliable. In order to address these
two aspects, sensor database systems like TinyDB and Cougar enable in-network
data aggregation to reduce the communication cost and improve reliability. The
existing data aggregation techniques, however, are limited to relatively simple
types of queries such as SUM, COUNT, AVG, and MIN/MAX. In this paper we propose
a data aggregation scheme that significantly extends the class of queries that
can be answered using sensor networks. These queries include (approximate)
quantiles, such as the median, the most frequent data values, such as the
consensus value, a histogram of the data distribution, as well as range
queries. In our scheme, each sensor aggregates the data it has received from
other sensors into a fixed (user specified) size message. We provide strict
theoretical guarantees on the approximation quality of the queries in terms of
the message size. We evaluate the performance of our aggregation scheme by
simulation and demonstrate its accuracy, scalability and low resource
utilization for highly variable input data sets.
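The core mechanic, merging partial aggregates into a fixed-size message, can be sketched with a much simpler summary than the paper's: evenly spaced order statistics rather than its guaranteed-error quantile summaries.

```python
def merge_summaries(a, b, k):
    """Merge two sorted fixed-size summaries received from child sensors
    and thin the result back to at most k values by keeping evenly spaced
    order statistics. A simplified stand-in for the paper's summaries,
    which carry strict approximation guarantees in terms of k."""
    merged = sorted(a + b)
    if len(merged) <= k:
        return merged
    step = len(merged) / k
    return [merged[int(i * step)] for i in range(k)]

def approx_median(summary):
    """Read an approximate median straight off the summary."""
    return summary[len(summary) // 2]
```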
|
cs/0408041
|
Fractal geometry of literature: first attempt to Shakespeare's works
|
cs.CL cs.CC
|
It is demonstrated that there is a geometrical order in the structure of
literature. Fractal geometry, a modern mathematical approach and a new
geometrical viewpoint on natural objects including both processes and
structures, was employed for the analysis of literature. As a first study, the
works of William Shakespeare were chosen as among the most important items in
western literature. By counting the number of letters used in a manuscript, it
is possible to study the whole manuscript statistically. A novel method based
on the basic assumptions of fractal geometry was proposed for calculating
fractal dimensions of literature. The results were compared with Zipf's law,
which was successfully applied to letters instead of words. Two new concepts,
Zipf's dimension and Zipf's order, were also introduced. It was found that
changes in both the fractal dimension and Zipf's dimension are similar and
depend on manuscript length. Interestingly, directly plotting the data in
semi-logarithmic and logarithmic forms also led to a power law.
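The letter-level Zipf analysis can be reproduced in outline: count letter frequencies, rank them, and fit a log-log slope. This is a generic sketch of the procedure, not the paper's exact method.

```python
import math
from collections import Counter

def letter_rank_frequency(text):
    """Rank-frequency list over letters (not words), as in the paper's
    letter-level use of Zipf's law."""
    counts = Counter(c for c in text.lower() if c.isalpha())
    return sorted(counts.values(), reverse=True)

def zipf_slope(freqs):
    """Least-squares slope of log(frequency) vs log(rank); a Zipf-like
    power law shows up as a roughly constant negative slope."""
    xs = [math.log(r) for r in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den
```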
|
cs/0408044
|
FLUX: A Logic Programming Method for Reasoning Agents
|
cs.AI
|
FLUX is a programming method for the design of agents that reason logically
about their actions and sensor information in the presence of incomplete
knowledge. The core of FLUX is a system of Constraint Handling Rules, which
enables agents to maintain an internal model of their environment by which they
control their own behavior. The general action representation formalism of the
fluent calculus provides the formal semantics for the constraint solver. FLUX
exhibits excellent computational behavior due to both a carefully restricted
expressiveness and the inference paradigm of progression.
|
cs/0408047
|
Pervasive Service Architecture for a Digital Business Ecosystem
|
cs.CE cs.NI
|
In this paper we present ideas and architectural principles upon which we are
basing the development of a distributed, open-source infrastructure that, in
turn, will support the expression of business models, the dynamic composition
of software services, and the optimisation of service chains through automatic
self-organising and evolutionary algorithms derived from biology. The target
users are small and medium-sized enterprises (SMEs). We call the collection of
the infrastructure, the software services, and the SMEs a Digital Business
Ecosystem (DBE).
|
cs/0408048
|
Journal of New Democratic Methods: An Introduction
|
cs.CY cs.LG
|
This paper describes a new breed of academic journals that use statistical
machine learning techniques to make them more democratic. In particular, not
only can anyone submit an article, but anyone can also become a reviewer.
Machine learning is used to decide which reviewers accurately represent the
views of the journal's readers and thus deserve to have their opinions carry
more weight. The paper concentrates on describing a specific experimental
prototype of a democratic journal called the Journal of New Democratic Methods
(JNDM). The paper also mentions the wider implications that machine learning
and the techniques used in the JNDM may have for representative democracy in
general.
|
cs/0408049
|
Using Stochastic Encoders to Discover Structure in Data
|
cs.NE cs.CV
|
In this paper a stochastic generalisation of the standard Linde-Buzo-Gray
(LBG) approach to vector quantiser (VQ) design is presented, in which the
encoder is implemented as the sampling of a vector of code indices from a
probability distribution derived from the input vector, and the decoder is
implemented as a superposition of reconstruction vectors. This stochastic VQ
(SVQ) is optimised using a minimum mean Euclidean reconstruction distortion
criterion, as in the LBG case. Numerical simulations are used to demonstrate
how this leads to self-organisation of the SVQ, where different stochastically
sampled code indices become associated with different input subspaces.
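A toy version of the stochastic encoder/decoder pair described above: the encoder samples a code index from a distribution derived from the input (here a softmax over squared distances, an assumed form rather than the paper's trained posterior), and the decoder superposes reconstruction vectors.

```python
import math
import random

def encode(x, codebook, beta=1.0, rng=random):
    """Stochastic encoder: sample a code index with probability decaying
    in the squared distance to each reconstruction vector."""
    d = [sum((xi - ci) ** 2 for xi, ci in zip(x, c)) for c in codebook]
    w = [math.exp(-beta * di) for di in d]
    total = sum(w)
    p = [wi / total for wi in w]
    r, acc = rng.random(), 0.0
    for i, pi in enumerate(p):
        acc += pi
        if r <= acc:
            return i
    return len(p) - 1

def decode(indices, codebook):
    """Decoder: superpose (average) the reconstruction vectors of the
    sampled indices."""
    n = len(indices)
    dim = len(codebook[0])
    return [sum(codebook[i][j] for i in indices) / n for j in range(dim)]
```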
|
cs/0408050
|
Invariant Stochastic Encoders
|
cs.NE cs.CV
|
The theory of stochastic vector quantisers (SVQ) has been extended to allow
the quantiser to develop invariances, so that only "large" degrees of freedom
in the input vector are represented in the code. This has been applied to the
problem of encoding data vectors which are a superposition of a "large" jammer
and a "small" signal, so that only the jammer is represented in the code. This
allows the jammer to be subtracted from the total input vector (i.e. the jammer
is nulled), leaving a residual that contains only the underlying signal. The
main advantage of this approach to jammer nulling is that little prior
knowledge of the jammer is assumed, because these properties are automatically
discovered by the SVQ as it is trained on examples of input vectors.
|
cs/0408051
|
Scalable XSLT Evaluation
|
cs.DB
|
XSLT is an increasingly popular language for processing XML data. It is
widely supported by application platform software. However, little optimization
effort has been made inside the current XSLT processing engines. Evaluating a
very simple XSLT program on a large XML document with a simple schema may
result in extensive usage of memory. In this paper, we present a novel notion
of \emph{Streaming Processing Model} (\emph{SPM}) to evaluate a subset of XSLT
programs on XML documents, especially large ones. With SPM, an XSLT processor
can transform an XML source document to other formats without requiring extra
memory buffers. Therefore, our approach can not only handle large source
documents, but also produce large results. We demonstrate the advantages of the
SPM approach with a performance study. Experimental results clearly confirm
that SPM typically evaluates XSLT 2 to 10 times faster than existing
approaches. Moreover, the SPM approach also features high scalability.
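The streaming idea can be illustrated with an event-driven (SAX-style) transform that emits output as parse events arrive, so memory use stays flat regardless of document size. The transformation rule here (render each `<item>` as a line of text) is only a toy stand-in for a streamable XSLT template.

```python
import io
import xml.sax

class StreamTransformer(xml.sax.ContentHandler):
    """Transform XML to text one event at a time, buffering only the
    current element's character data, never the whole tree."""
    def __init__(self, out):
        super().__init__()
        self.out = out
        self.in_item = False
        self.buf = []

    def startElement(self, name, attrs):
        if name == "item":
            self.in_item = True
            self.buf = []

    def characters(self, content):
        if self.in_item:
            self.buf.append(content)

    def endElement(self, name):
        if name == "item":
            self.out.write("".join(self.buf).strip() + "\n")
            self.in_item = False

def transform(xml_bytes):
    out = io.StringIO()
    xml.sax.parseString(xml_bytes, StreamTransformer(out))
    return out.getvalue()
```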
|
cs/0408052
|
Application of the Double Metaphone Algorithm to Amharic Orthography
|
cs.CL
|
The Metaphone algorithm applies the phonetic encoding of orthographic
sequences to simplify words prior to comparison. While Metaphone has been
highly successful for the English language, for which it was designed, it may
not be applied directly to Ethiopian languages. The paper details how the
principles of Metaphone can be applied to Ethiopic script and uses Amharic as a
case study. Match results improve as specific considerations are made for
Amharic writing practices. Results are shown to improve further when common
errors from Amharic input methods are considered.
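The Metaphone principle, comparing phonetic keys rather than spellings, can be sketched as follows. The sound-class table is a hypothetical Latin-alphabet stand-in, not the paper's Ethiopic mapping.

```python
# Toy phonetic-key matcher in the spirit of Metaphone: map letters to
# coarse sound classes, drop vowels after the first position, collapse
# repeats. The SOUND table is an illustrative assumption.
SOUND = {"b": "P", "p": "P", "f": "F", "v": "F", "s": "S", "z": "S",
         "c": "K", "k": "K", "q": "K", "d": "T", "t": "T"}

def phonetic_key(word):
    word = word.lower()
    key = []
    for i, ch in enumerate(word):
        if ch in "aeiou":
            if i == 0:
                key.append("A")  # keep only a leading vowel marker
            continue
        code = SOUND.get(ch, ch.upper())
        if not key or key[-1] != code:  # collapse adjacent duplicates
            key.append(code)
    return "".join(key)

def sounds_alike(a, b):
    """Two words match if they reduce to the same phonetic key."""
    return phonetic_key(a) == phonetic_key(b)
```

The paper's contribution is the analogous reduction for Ethiopic script, where the mapping must account for Amharic writing practices and common input-method errors.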
|
cs/0408054
|
Providing Authentic Long-term Archival Access to Complex Relational Data
|
cs.DL cs.DB
|
We discuss long-term preservation of and access to relational databases. The
focus is on national archives and science data archives which have to ingest
and integrate data from a broad spectrum of vendor-specific relational database
management systems (RDBMS). Furthermore, we present our solution SIARD which
analyzes and extracts data and data logic from almost any RDBMS. It enables, to
a reasonable level of authenticity, complete detachment of databases from their
vendor-specific environment. The user can add archival descriptive metadata
according to a customizable schema. A SIARD database archive integrates data,
data logic, technical metadata, and archival descriptive information in one
archival information package, independent of any specific software and
hardware, based upon plain text files and the standardized languages SQL and
XML. For usage purposes, a SIARD archive can be reloaded into any current or
future RDBMS which supports standard SQL. In addition, SIARD contains a client
that enables 'on demand' reload of archives into a target RDBMS, and multi-user
remote access for querying and browsing the data together with its technical
and descriptive metadata in one graphical user interface.
|
cs/0408055
|
Cauchy Annealing Schedule: An Annealing Schedule for Boltzmann Selection
Scheme in Evolutionary Algorithms
|
cs.AI
|
Boltzmann selection is an important selection mechanism in evolutionary
algorithms as it has theoretical properties which help in theoretical analysis.
However, Boltzmann selection is not used in practice because a good annealing
schedule for the `inverse temperature' parameter is lacking. In this paper we
propose a Cauchy annealing schedule for the Boltzmann selection scheme, based
on the hypothesis that selection strength should increase as the evolutionary
process goes on, while the distance between successive selection strengths
should decrease for the process to converge. To formalize these aspects, we
develop a formalism for selection mechanisms using fitness distributions and
give an appropriate measure of selection strength. We prove a result from which
we derive an annealing schedule called the Cauchy annealing schedule, and we
demonstrate the novelty of the proposed schedule using simulations in the
framework of genetic algorithms.
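Boltzmann selection, together with an annealing schedule satisfying the two requirements above (increasing strength, shrinking increments), can be sketched as follows; the 1/t increments are an illustrative choice, not necessarily the paper's Cauchy schedule.

```python
import math

def boltzmann_probs(fitnesses, beta):
    """Boltzmann selection: p_i proportional to exp(beta * f_i).
    Larger beta (inverse temperature) means stronger selection."""
    ws = [math.exp(beta * f) for f in fitnesses]
    total = sum(ws)
    return [w / total for w in ws]

def annealed_betas(beta0, steps):
    """An increasing inverse-temperature schedule whose successive
    differences shrink, matching both hypotheses in the abstract."""
    betas, beta = [], beta0
    for t in range(1, steps + 1):
        beta += 1.0 / t  # increments shrink, so beta_t is Cauchy-convergent-like
        betas.append(beta)
    return betas
```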
|
cs/0408056
|
A CHR-based Implementation of Known Arc-Consistency
|
cs.LO cs.AI
|
In classical CLP(FD) systems, domains of variables are completely known at
the beginning of the constraint propagation process. However, in systems
interacting with an external environment, acquiring the whole domains of
variables before the beginning of constraint propagation may cause waste of
computation time, or even obsolescence of the acquired data at the time of use.
For such cases, the Interactive Constraint Satisfaction Problem (ICSP) model
has been proposed as an extension of the CSP model, to make it possible to
start constraint propagation even when domains are not fully known, performing
acquisition of domain elements only when necessary, and without the need for
restarting the propagation after every acquisition.
In this paper, we show how a solver for the two-sorted CLP language defined
in previous work to express ICSPs has been implemented in the Constraint
Handling Rules (CHR) language, a declarative language particularly suitable for
high-level implementation of constraint solvers.
|
cs/0408057
|
The role of robust semantic analysis in spoken language dialogue systems
|
cs.CL cs.AI cs.HC
|
In this paper we summarize a framework for designing a grammar-based procedure
for the automatic extraction of semantic content from spoken queries. Starting
with a case study and following an approach which combines the notions of
fuzziness and robustness in sentence parsing, we show how we built practical
domain-dependent rules which can be applied whenever it is possible to
superimpose a sentence-level semantic structure on a text without relying on a
prior deep syntactic analysis. This kind of procedure can also be used
profitably as a pre-processing tool to cut out parts of the sentence that have
been recognized as having no relevance to the understanding process. For
dialogue applications where there is no need to build a complex semantic
structure (e.g. word spotting or excerpting), the presented methodology may
represent an efficient alternative to a sequential composition of deep
linguistic analysis modules. Even if the query generation problem may not seem
a critical application, it should be borne in mind that sentence processing
must be done on-line; under this constraint we cannot design our system without
caring for efficiency, so as to provide an immediate response. Another critical
issue is the overall robustness of the system. In our case study we explored
how to deal with unreliable and noisy input without asking the user for any
repetition or clarification. A similar problem arises when processing text from
informal writing such as e-mails, news and, in many cases, Web pages, which
often contain irrelevant surrounding information.
|
cs/0408058
|
Non-negative matrix factorization with sparseness constraints
|
cs.LG cs.NE
|
Non-negative matrix factorization (NMF) is a recently developed technique for
finding parts-based, linear representations of non-negative data. Although it
has successfully been applied in several applications, it does not always
result in parts-based representations. In this paper, we show how explicitly
incorporating the notion of `sparseness' improves the found decompositions.
Additionally, we provide complete MATLAB code both for standard NMF and for our
extension. Our hope is that this will further the application of these methods
to solving novel data-analysis problems.
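The notion of `sparseness' the paper constrains can be made concrete: it is measured via the ratio of the L1 and L2 norms, normalized so a vector with one non-zero entry scores 1 and a uniform vector scores 0.

```python
import math

def sparseness(x):
    """Sparseness of a non-zero vector, as defined in the paper:
    (sqrt(n) - ||x||_1 / ||x||_2) / (sqrt(n) - 1).
    Returns 1.0 for a single-spike vector, 0.0 for all-equal entries."""
    n = len(x)
    l1 = sum(abs(v) for v in x)
    l2 = math.sqrt(sum(v * v for v in x))
    return (math.sqrt(n) - l1 / l2) / (math.sqrt(n) - 1)
```

In the paper's algorithm, the NMF factors are projected after each update so their rows or columns hit a user-chosen value of this measure; the projection step itself is omitted here.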
|
cs/0408059
|
Proofing Tools Technology at Neurosoft S.A.
|
cs.CL
|
The aim of this paper is to present the R&D activities carried out at
Neurosoft S.A. regarding the development of proofing tools for Modern Greek.
Firstly, we focus on infrastructure issues that we faced during our initial
steps. Subsequently, we describe the most important insights of three proofing
tools developed by Neurosoft, i.e. the spelling checker, the hyphenator and the
thesaurus, outlining their efficiencies and inefficiencies. Finally, we discuss
some improvement ideas and give our future directions.
|
cs/0408060
|
Verbal chunk extraction in French using limited resources
|
cs.CL
|
A way of extracting French verbal chunks, inflected and infinitive, is
explored and tested on a real corpus. Declarative morphological and local
grammar rules specifying chunks and some simple contextual structures are used,
relying on limited lexical information and some simple heuristic/statistical
properties obtained from restricted corpora. The specific goals, the
architecture and formalism of the system, the linguistic information on which
it relies, and the results obtained on the corpus are presented.
|
cs/0408061
|
An electronic dictionary as a basis for NLP tools: The Greek case
|
cs.CL
|
The existence of a Dictionary in electronic form for Modern Greek (MG) is
mandatory if one is to process MG at the morphological and syntactic levels
since MG is a highly inflectional language with marked stress and a spelling
system with many characteristics carried over from Ancient Greek. Moreover,
such a tool becomes necessary if one is to create efficient and sophisticated
NLP applications with substantial linguistic backing and coverage. The present
paper will focus on the deployment of such an electronic dictionary for Modern
Greek, which was built in two phases: first it was constructed to be the basis
for a spelling correction schema and then it was reconstructed in order to
become the platform for the deployment of a wider spectrum of NLP tools.
|
cs/0408062
|
Source Coding With Distortion Side Information At The Encoder
|
cs.IT math.IT
|
We consider lossy source coding when side information affecting the
distortion measure may be available at the encoder, decoder, both, or neither.
For example, such distortion side information can model reliabilities for noisy
measurements, sensor calibration information, or perceptual effects like
masking and sensitivity to context. When the distortion side information is
statistically independent of the source, we show that in many cases (e.g., for
additive or multiplicative distortion side information) there is no penalty for
knowing the side information only at the encoder, and there is no advantage to
knowing it at the decoder. Furthermore, for quadratic distortion measures
scaled by the distortion side information, we evaluate the penalty for lack of
encoder knowledge and show that it can be arbitrarily large. In this scenario,
we also sketch transform-based quantizer constructions which efficiently
exploit encoder side information in the high-resolution limit.
|
cs/0408063
|
Analysis and Visualization of Index Words from Audio Transcripts of
Instructional Videos
|
cs.IR cs.MM
|
We introduce new techniques for extracting, analyzing, and visualizing
textual contents from instructional videos of low production quality. Using
Automatic Speech Recognition, approximate transcripts (~75% Word Error Rate)
are obtained from the originally highly compressed videos of university
courses, each comprising between 10 and 30 lectures. Text material in the form
of books or papers that accompany the course are then used to filter meaningful
phrases from the seemingly incoherent transcripts. The resulting index into the
transcripts is tied together and visualized in 3 experimental graphs that help
in understanding the overall course structure and provide a tool for localizing
certain topics for indexing. We specifically discuss a Transcript Index Map,
which graphically lays out key phrases for a course, a Textbook Chapter to
Transcript Match, and finally a Lecture Transcript Similarity graph, which
clusters semantically similar lectures. We test our methods and tools on 7 full
courses with 230 hours of video and 273 transcripts. We are able to extract up
to 98 unique key terms for a given transcript and up to 347 unique key terms
for an entire course. The accuracy of the Textbook Chapter to Transcript Match
exceeds 70% on average. The methods used can be applied to genres of video in
which there are recurrent thematic words (news, sports, meetings, etc.).
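The phrase-filtering step, keeping only transcript phrases grounded in the accompanying text, can be sketched as an n-gram vocabulary filter; this is a simplification of the paper's approach, with hypothetical names.

```python
def filter_phrases(transcript_words, textbook_vocab, n=2):
    """Keep transcript n-grams whose words all occur in the course's
    accompanying text, discarding ASR noise. A simplified stand-in for
    the paper's filtering of meaningful phrases."""
    vocab = {w.lower() for w in textbook_vocab}
    phrases = []
    for i in range(len(transcript_words) - n + 1):
        gram = [w.lower() for w in transcript_words[i:i + n]]
        if all(w in vocab for w in gram):
            phrases.append(" ".join(gram))
    return phrases
```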
|
cs/0408064
|
Proportional Conflict Redistribution Rules for Information Fusion
|
cs.AI
|
In this paper we propose five versions of a Proportional Conflict
Redistribution rule (PCR) for information fusion together with several
examples. From PCR1 to PCR2, PCR3, PCR4, PCR5 one increases the complexity of
the rules and also the exactitude of the redistribution of conflicting masses.
PCR1 restricted from the hyper-power set to the power set and without
degenerate cases gives the same result as the Weighted Average Operator (WAO)
proposed recently by J{\o}sang, Daniel and Vannoorenberghe but does not satisfy
the neutrality property of the vacuous belief assignment; this is why improved
PCR rules are proposed in this paper. PCR4 is an improvement of the minC and
Dempster's rules. The PCR rules redistribute the conflicting mass, after the
conjunctive rule has been applied, proportionally with some functions depending
on the masses assigned to their corresponding columns in the mass matrix. There
are infinitely many ways these functions (weighting factors) can be chosen,
depending on the complexity one wants to deal with in specific applications and
fusion systems. Any fusion combination rule is to some degree ad hoc.
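A PCR5-style redistribution for the special case where both sources have only singleton focal elements (so every cross-product of distinct elements is conflicting) can be sketched directly; this singleton restriction is our simplification, not the general hyper-power-set rules.

```python
def pcr5_singletons(m1, m2):
    """Combine two mass functions over singleton focal elements.
    Conjunctive part first; then each partial conflict m1(a)*m2(b),
    a != b, is split back to a and b proportionally to m1(a) and m2(b),
    as in PCR5."""
    keys = set(m1) | set(m2)
    out = {k: m1.get(k, 0.0) * m2.get(k, 0.0) for k in keys}  # conjunctive part
    for a in keys:
        for b in keys:
            if a == b:
                continue
            x, y = m1.get(a, 0.0), m2.get(b, 0.0)
            if x + y > 0:
                c = x * y  # partial conflicting mass
                out[a] += c * x / (x + y)
                out[b] += c * y / (x + y)
    return out
```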
|
cs/0408066
|
Robust Locally Testable Codes and Products of Codes
|
cs.IT cs.CC math.IT
|
We continue the investigation of locally testable codes, i.e.,
error-correcting codes for which membership of a given word in the code can be
tested probabilistically by examining it in very few locations. We give two
general results on local testability: First, motivated by the recently proposed
notion of {\em robust} probabilistically checkable proofs, we introduce the
notion of {\em robust} local testability of codes. We relate this notion to a
product of codes introduced by Tanner, and show a very simple composition lemma
for this notion. Next, we show that codes built by tensor products can be
tested robustly and somewhat locally, by applying a variant of a test and proof
technique introduced by Raz and Safra in the context of testing low-degree
multivariate polynomials (which are a special case of tensor codes).
Combining these two results gives us a generic construction of codes of
inverse polynomial rate, that are testable with poly-logarithmically many
queries. We note these locally testable tensor codes can be obtained from {\em
any} linear error correcting code with good distance. Previous results on local
testability, albeit much stronger quantitatively, rely heavily on algebraic
properties of the underlying codes.
|
cs/0408069
|
The Integration of Connectionism and First-Order Knowledge
Representation and Reasoning as a Challenge for Artificial Intelligence
|
cs.AI cs.LO cs.NE
|
Intelligent systems based on first-order logic on the one hand, and on
artificial neural networks (also called connectionist systems) on the other,
differ substantially. It would be very desirable to combine the robust neural
networking machinery with symbolic knowledge representation and reasoning
paradigms like logic programming in such a way that the strengths of either
paradigm will be retained. Current state-of-the-art research, however, fails by
far to achieve this ultimate goal. As one of the main obstacles to be overcome
we perceive the question how symbolic knowledge can be encoded by means of
connectionist systems: Satisfactory answers to this will naturally lead the way
to knowledge extraction algorithms and to integrated neural-symbolic systems.
|
cs/0409002
|
Default reasoning over domains and concept hierarchies
|
cs.AI cs.LO
|
W.C. Rounds and G.-Q. Zhang (2001) have proposed to study a form of
disjunctive logic programming generalized to algebraic domains. This system
allows reasoning with information which is hierarchically structured and forms
a (suitable) domain. We extend this framework to include reasoning with default
negation, giving rise to a new nonmonotonic reasoning framework on hierarchical
knowledge which encompasses answer set programming with extended disjunctive
logic programs. We also show that the hierarchically structured knowledge on
which programming in this paradigm can be done, arises very naturally from
formal concept analysis. Together, we obtain a default reasoning paradigm for
conceptual knowledge which is in accordance with mainstream developments in
nonmonotonic reasoning.
|
cs/0409003
|
ScheduleNanny: Using GPS to Learn the User's Significant Locations,
Travel Times and Schedule
|
cs.AI cs.CV cs.HC
|
As computing technology becomes more pervasive, personal devices such as the
PDA, cell-phone, and notebook should use context to determine how to act.
Location is one form of context that can be used in many ways. We present a
multiple-device system that collects and clusters GPS data into significant
locations. These locations are then used to determine travel times and a
probabilistic model of the user's schedule, which is used to intelligently
alert the user. We evaluate our system and suggest how it should be integrated
with a variety of applications.
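The location-clustering step can be illustrated with a greedy centroid-threshold pass over GPS fixes; the method and radius below are a simple stand-in for the system's actual clustering.

```python
import math

def cluster_points(points, radius):
    """Greedily cluster (lat, lon) fixes into 'significant locations':
    fold each point into the first cluster whose centroid is within
    `radius`, otherwise start a new cluster."""
    clusters = []  # each cluster: [sum_lat, sum_lon, count]
    for lat, lon in points:
        for c in clusters:
            clat, clon = c[0] / c[2], c[1] / c[2]
            if math.hypot(lat - clat, lon - clon) <= radius:
                c[0] += lat
                c[1] += lon
                c[2] += 1
                break
        else:
            clusters.append([lat, lon, 1])
    return [(c[0] / c[2], c[1] / c[2]) for c in clusters]
```

Travel times between the resulting centroids can then feed the probabilistic schedule model the abstract describes.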
|
cs/0409007
|
The Generalized Pignistic Transformation
|
cs.AI
|
This paper presents in detail the generalized pignistic transformation (GPT)
succinctly developed in the Dezert-Smarandache Theory (DSmT) framework as a
tool for the decision process. The GPT provides a subjective probability
measure from any generalized basic belief assignment given by any corpus of
evidence. We mainly focus our presentation on the 3D case, provide the complete
result obtained by the GPT, and validate it against probability theory.
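For reference, the classical pignistic transformation on ordinary power-set belief masses, which the GPT generalizes to the DSmT hyper-power set, looks like this:

```python
def pignistic(masses):
    """Classical pignistic transformation: each focal set A (a frozenset)
    spreads its mass m(A) evenly over its elements, so
    BetP(x) = sum over focal A containing x of m(A) / |A|."""
    betp = {}
    for focal, m in masses.items():
        for x in focal:
            betp[x] = betp.get(x, 0.0) + m / len(focal)
    return betp
```

The generalized version replaces the power set with the hyper-power set and the cardinality |A| with the DSm cardinality; that extension is the subject of the paper.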
|
cs/0409008
|
A Model for Fine-Grained Alignment of Multilingual Texts
|
cs.CL
|
While alignment of texts on the sentential level is often seen as being too
coarse, and word alignment as being too fine-grained, bi- or multilingual texts
which are aligned on a level in-between are a useful resource for many
purposes. Starting from a number of examples of non-literal translations, which
tend to make alignment difficult, we describe an alignment model which copes
with these cases by explicitly coding them. The model is based on
predicate-argument structures and thus covers the middle ground between
sentence and word alignment. The model is currently used in a recently
initiated project of a parallel English-German treebank (FuSe), which can in
principle be extended with additional languages.
|
cs/0409010
|
Distance properties of expander codes
|
cs.IT cs.DM math.IT
|
We study the minimum distance of codes defined on bipartite graphs. Weight
spectrum and the minimum distance of a random ensemble of such codes are
computed. It is shown that if the vertex codes have minimum distance $\ge 3$,
the overall code is asymptotically good, and sometimes meets the
Gilbert-Varshamov bound.
Constructive families of expander codes are presented whose minimum distance
asymptotically exceeds the product bound for all code rates between 0 and 1.
|
cs/0409011
|
Shannon meets Wiener II: On MMSE estimation in successive decoding
schemes
|
cs.IT math.IT
|
We continue to discuss why MMSE estimation arises in coding schemes that
approach the capacity of linear Gaussian channels. Here we consider schemes
that involve successive decoding, such as decision-feedback equalization or
successive cancellation.
|
cs/0409019
|
Outlier Detection by Logic Programming
|
cs.AI cs.LO
|
The development of effective knowledge discovery techniques has become a very
active research area in recent years due to its important impact in several
relevant application areas. One interesting task thereof is that of
singling out anomalous individuals from a given population, e.g., to detect
rare events in time-series analysis settings, or to identify objects whose
behavior is deviant w.r.t. a codified standard set of "social" rules. Such
exceptional individuals are usually referred to as outliers in the literature.
Recently, outlier detection has also emerged as a relevant KR&R problem. In
this paper, we formally state the concept of outliers by generalizing in
several respects an approach recently proposed in the context of default logic,
for instance, by having outliers not being restricted to single individuals
but, rather, in the more general case, to correspond to entire (sub)theories.
We do that within the context of logic programming and, mainly through
examples, we discuss its potential practical impact in applications. The
formalization we propose is a novel one and helps in shedding some light on the
real nature of outliers. Moreover, as a major contribution of this work, we
illustrate the exploitation of minimality criteria in outlier detection. The
computational complexity of outlier detection problems arising in this novel
setting is thoroughly investigated and accounted for in the paper as well.
Finally, we also propose a rewriting algorithm that transforms any outlier
detection problem into an equivalent inference problem under the stable model
semantics, thereby making outlier computation effective and realizable on top
of any stable model solver.
|
cs/0409020
|
A Generalized Disjunctive Paraconsistent Data Model for Negative and
Disjunctive Information
|
cs.DB
|
This paper presents a generalization of the disjunctive paraconsistent
relational data model in which disjunctive positive and negative information
can be represented explicitly and manipulated. There are situations where the
closed world assumption to infer negative facts is not valid or undesirable and
there is a need to represent and reason with negation explicitly. We consider
explicit disjunctive negation in the context of disjunctive databases as there
is an interesting interplay between these two types of information. The
generalized disjunctive paraconsistent relation is introduced as the main
structure in this model. The relational algebra operators are appropriately
generalized to work on generalized disjunctive paraconsistent relations, and
their correctness is established.
|
cs/0409026
|
Capacity-achieving ensembles for the binary erasure channel with bounded
complexity
|
cs.IT math.IT
|
We present two sequences of ensembles of non-systematic irregular
repeat-accumulate codes which asymptotically (as their block length tends to
infinity) achieve capacity on the binary erasure channel (BEC) with bounded
complexity per information bit. This is in contrast to all previous
constructions of capacity-achieving sequences of ensembles whose complexity
grows at least like the log of the inverse of the gap (in rate) to capacity.
The new bounded complexity result is achieved by puncturing bits, and allowing
in this way a sufficient number of state nodes in the Tanner graph representing
the codes. We also derive an information-theoretic lower bound on the decoding
complexity of randomly punctured codes on graphs. The bound holds for every
memoryless binary-input output-symmetric channel and is refined for the BEC.
|
cs/0409027
|
Bounds on the decoding complexity of punctured codes on graphs
|
cs.IT math.IT
|
We present two sequences of ensembles of non-systematic irregular
repeat-accumulate codes which asymptotically (as their block length tends to
infinity) achieve capacity on the binary erasure channel (BEC) with bounded
complexity per information bit. This is in contrast to all previous
constructions of capacity-achieving sequences of ensembles whose complexity
grows at least like the log of the inverse of the gap (in rate) to capacity.
The new bounded complexity result is achieved by puncturing bits, and allowing
in this way a sufficient number of state nodes in the Tanner graph representing
the codes. We also derive an information-theoretic lower bound on the decoding
complexity of randomly punctured codes on graphs. The bound holds for every
memoryless binary-input output-symmetric channel, and is refined for the BEC.
|
cs/0409031
|
Field Geology with a Wearable Computer: First Results of the Cyborg
Astrobiologist System
|
cs.CV astro-ph cs.RO
|
We present results from the first geological field tests of the `Cyborg
Astrobiologist', which is a wearable computer and video camcorder system that
we are using to test and train a computer-vision system towards having some of
the autonomous decision-making capabilities of a field-geologist. The Cyborg
Astrobiologist platform has thus far been used for testing and development of
these algorithms and systems: robotic acquisition of quasi-mosaics of images,
real-time image segmentation, and real-time determination of interesting points
in the image mosaics. The hardware and software systems function reliably, and
the computer-vision algorithms are adequate for the first field tests. In
addition to the proof-of-concept aspect of these field tests, the main result
of these field tests is the enumeration of those issues that we can improve in
the future, including: dealing with structural shadow and microtexture, and
also, controlling the camera's zoom lens in an intelligent manner. Nonetheless,
despite these and other technical inadequacies, this Cyborg Astrobiologist
system, consisting of a camera-equipped wearable-computer and its
computer-vision algorithms, has demonstrated its ability of finding genuinely
interesting points in real-time in the geological scenery, and then gathering
more information about these interest points in an automated manner.
|
cs/0409035
|
Parallel Computing Environments and Methods for Power Distribution
System Simulation
|
cs.DC cs.CE cs.MA cs.PF
|
The development of cost-effective high-performance parallel computing on
multi-processor supercomputers makes it attractive to port excessively
time-consuming simulation software from personal computers (PCs) to supercomputers.
The power distribution system simulator (PDSS) takes a bottom-up approach and
simulates load at the appliance level, where detailed thermal models for
appliances are used. This approach works well for a small power distribution
system consisting of a few thousand appliances. When the number of appliances
increases, the simulation uses up the PC memory and its runtime increases to a
point where the approach is no longer feasible to model a practical large power
distribution system. This paper presents an effort made to port a PC-based
power distribution system simulator to a 128-processor shared-memory
supercomputer. The paper offers an overview of the parallel computing
environment and a description of the modifications made to the PDSS model. The
performance of the PDSS running on a standalone PC and on the supercomputer is
compared. Future research directions for utilizing parallel computing in power
distribution system simulation are also addressed.
|
cs/0409040
|
Unification of Fusion Theories
|
cs.AI
|
Since no single fusion theory or rule fully satisfies all needed applications,
the author proposes a Unification of Fusion Theories and a combination of
fusion rules for solving problems/applications. For each particular application,
one selects the most appropriate model, rule(s), and algorithm of
implementation. This unification of fusion theories and rules reads like a
cooking recipe, or better, like a logical chart for a computer programmer, but
we see no other method to comprise/unify all these things. The unification
scenario presented herein, which is still in an incipient form, should be
updated periodically to incorporate new discoveries from fusion and engineering
research.
|
cs/0409042
|
A new architecture for making highly scalable applications
|
cs.HC cs.CL
|
An application is a logical image of the world on a computer. A scalable
application is an application that allows one to update that logical image at
run time. To put it in operational terms: an application is scalable if a
client can change, between time T1 and time T2,
- the logic of the application as expressed by language L;
- the structure and volume of the stored knowledge;
- the user interface of the application;
while clients working with the application at time T1 will work with the
changed application at time T2 without performing any special action between T1
and T2. In order to realize such a scalable application, a new architecture has
been developed that fully orbits around language. In order to verify the
soundness of that architecture, a program has been built. Both architecture and
program are called CommunSENS.
The main purpose of this paper is:
- to list the relevant elements of the architecture;
- to give a visual presentation of how the program and its image of the world
look;
- to give a visual presentation of how the image can be updated.
Some relevant philosophical and practical background is included in the
appendixes.
|
cs/0409044
|
Some Applications of Coding Theory in Computational Complexity
|
cs.CC cs.IT math.IT
|
Error-correcting codes and related combinatorial constructs play an important
role in several recent (and old) results in computational complexity theory. In
this paper we survey results on locally-testable and locally-decodable
error-correcting codes, and their applications to complexity theory and to
cryptography.
Locally decodable codes are error-correcting codes with sub-linear time
error-correcting algorithms. They are related to private information retrieval
(a type of cryptographic protocol), and they are used in average-case
complexity and to construct ``hard-core predicates'' for one-way permutations.
Locally testable codes are error-correcting codes with sub-linear time
error-detection algorithms, and they are the combinatorial core of
probabilistically checkable proofs.
|
cs/0409045
|
Augmenting ALC(D) (atemporal) roles and (aspatial) concrete domain with
temporal roles and a spatial concrete domain -first results
|
cs.AI cs.LO
|
We consider the well-known family ALC(D) of description logics with a
concrete domain, and provide first results on a framework obtained by
augmenting ALC(D)'s atemporal roles and aspatial concrete domain with temporal
roles and a spatial concrete domain.
|
cs/0409046
|
A TCSP-like decidable constraint language generalising existing cardinal
direction relations
|
cs.AI cs.LO
|
We define a quantitative constraint language subsuming two calculi well-known
in QSR (Qualitative Spatial Reasoning): Frank's cone-shaped and
projection-based calculi of cardinal direction relations. We show how to solve
a CSP (Constraint Satisfaction Problem) expressed in the language.
|
cs/0409047
|
An ALC(D)-based combination of temporal constraints and spatial
constraints suitable for continuous (spatial) change
|
cs.AI cs.LO
|
We present a family of spatio-temporal theories suitable for continuous
spatial change in general, and for continuous motion of spatial scenes in
particular. The family is obtained by spatio-temporalising the well-known
ALC(D) family of Description Logics (DLs) with a concrete domain D, as follows,
where TCSPs denotes "Temporal Constraint Satisfaction Problems", a well-known
constraint-based framework:
(1) temporalisation of the roles, so that they consist of TCSP constraints
(specifically, of an adaptation of TCSP constraints to interval variables); and
(2) spatialisation of the concrete domain D: the concrete domain is now
$D_x$, and is generated by a spatial Relation Algebra (RA) $x$, in the style of
the Region-Connection Calculus RCC8.
We assume durative truth (i.e., holding during a durative interval). We also
assume the homogeneity property (if a truth holds during a given interval, it
holds during all of its subintervals). Among other things, these assumptions
raise the "conflicting" problem of overlapping truths, which the work solves
with the use of a specific partition of the 13 atomic relations of Allen's
interval algebra.
|
cs/0409053
|
On the role of MMSE estimation in approaching the information-theoretic
limits of linear Gaussian channels: Shannon meets Wiener
|
cs.IT math.IT
|
We discuss why MMSE estimation arises in lattice-based schemes for
approaching the capacity of linear Gaussian channels, and comment on its
properties.
|
cs/0409056
|
Using sparse matrices and splines-based interpolation in computational
fluid dynamics simulations
|
cs.NA cs.CE physics.comp-ph
|
In this report I present a technique for the construction and fast evaluation
of a family of cubic polynomials for the analytic smoothing and graphical
rendering of particle trajectories for flows in a generic geometry. The
principal result of the work was the implementation and testing of a method for
interpolating 3D points by regular parametric curves, together with their fast
and efficient evaluation at a good rendering resolution. For this purpose, a
parallel environment based on a multiprocessor cluster architecture was used.
This work was developed for the Research and Development Department of my
company for the design of advanced customized models of industrial burners.
|
cs/0409058
|
A Sentimental Education: Sentiment Analysis Using Subjectivity
Summarization Based on Minimum Cuts
|
cs.CL
|
Sentiment analysis seeks to identify the viewpoint(s) underlying a text span;
an example application is classifying a movie review as "thumbs up" or "thumbs
down". To determine this sentiment polarity, we propose a novel
machine-learning method that applies text-categorization techniques to just the
subjective portions of the document. Extracting these portions can be
implemented using efficient techniques for finding minimum cuts in graphs; this
greatly facilitates incorporation of cross-sentence contextual constraints.
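The min-cut formulation can be sketched concretely: each sentence receives
individual weights tying it to a "subjective" source and an "objective" sink,
plus association weights linking nearby sentences, and a minimum s-t cut then
yields a labeling that respects cross-sentence constraints. A minimal sketch,
using a plain Edmonds-Karp max-flow; the graph layout follows the formulation
above, but all weights here are illustrative assumptions, not learned scores:

```python
from collections import deque

def min_cut_partition(n, src_w, sink_w, assoc):
    """Label n sentences via a min s-t cut: src_w[i]/sink_w[i] are each
    sentence's individual ties to the subjective/objective classes, and
    assoc[(i, j)] rewards giving sentences i and j the same label.
    Returns the indices ending up on the source (subjective) side."""
    S, T = n, n + 1
    size = n + 2
    cap = [[0.0] * size for _ in range(size)]
    for i in range(n):
        cap[S][i] = src_w[i]
        cap[i][T] = sink_w[i]
    for (i, j), w in assoc.items():          # undirected association edges
        cap[i][j] += w
        cap[j][i] += w

    def bfs():                                # shortest augmenting path
        parent = [-1] * size
        parent[S] = S
        q = deque([S])
        while q:
            u = q.popleft()
            for v in range(size):
                if parent[v] == -1 and cap[u][v] > 1e-12:
                    parent[v] = u
                    if v == T:
                        return parent
                    q.append(v)
        return None

    while True:                               # Edmonds-Karp main loop
        parent = bfs()
        if parent is None:
            break
        f, v = float("inf"), T                # bottleneck capacity
        while v != S:
            f, v = min(f, cap[parent[v]][v]), parent[v]
        v = T
        while v != S:                         # push flow along the path
            u = parent[v]
            cap[u][v] -= f
            cap[v][u] += f
            v = u

    seen = [False] * size                     # residual source side = cut side
    seen[S] = True
    q = deque([S])
    while q:
        u = q.popleft()
        for v in range(size):
            if not seen[v] and cap[u][v] > 1e-12:
                seen[v] = True
                q.append(v)
    return [i for i in range(n) if seen[i]]
```

With a strong association between sentences 0 and 1, sentence 1 is pulled onto
the subjective side even though its individual weights are neutral.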
|
cs/0410001
|
The Infati Data
|
cs.DB
|
The ability to perform meaningful empirical studies is of the essence in research
in spatio-temporal query processing. Such studies are often necessary to gain
detailed insight into the functional and performance characteristics of
proposals for new query processing techniques.
We present a collection of spatio-temporal data, collected during an
intelligent speed adaptation project, termed INFATI, in which some two dozen
cars equipped with GPS receivers and logging equipment took part. We describe
how the data was collected and how it was "modified" to afford the drivers some
degree of anonymity.
We also present the road network in which the cars were moving during data
collection.
The GPS data is publicly available for non-commercial purposes. It is our
hope that this resource will help the spatio-temporal research community in its
efforts to develop new and better query processing techniques.
|
cs/0410002
|
Shannon Information and Kolmogorov Complexity
|
cs.IT math.IT
|
We compare the elementary theories of Shannon information and Kolmogorov
complexity, the extent to which they have a common purpose, and where they are
fundamentally different. We discuss and relate the basic notions of both
theories: Shannon entropy versus Kolmogorov complexity, the relation of both to
universal coding, Shannon mutual information versus Kolmogorov (`algorithmic')
mutual information, probabilistic sufficient statistic versus algorithmic
sufficient statistic (related to lossy compression in the Shannon theory versus
meaningful information in the Kolmogorov theory), and rate distortion theory
versus Kolmogorov's structure function. Part of the material has appeared in
print before, scattered through various publications, but this is the first
comprehensive systematic comparison. The last mentioned relations are new.
|
cs/0410003
|
Capacity and Random-Coding Exponents for Channel Coding with Side
Information
|
cs.IT math.IT
|
Capacity formulas and random-coding exponents are derived for a generalized
family of Gel'fand-Pinsker coding problems. These exponents yield asymptotic
upper bounds on the achievable log probability of error. In our model,
information is to be reliably transmitted through a noisy channel with finite
input and output alphabets and random state sequence, and the channel is
selected by a hypothetical adversary. Partial information about the state
sequence is available to the encoder, adversary, and decoder. The design of the
transmitter is subject to a cost constraint. Two families of channels are
considered: 1) compound discrete memoryless channels (CDMC), and 2) channels
with arbitrary memory, subject to an additive cost constraint, or more
generally to a hard constraint on the conditional type of the channel output
given the input. Both problems are closely connected. The random-coding
exponent is achieved using a stacked binning scheme and a maximum penalized
mutual information decoder, which may be thought of as an empirical generalized
Maximum a Posteriori decoder. For channels with arbitrary memory, the
random-coding exponents are larger than their CDMC counterparts. Applications
of this study include watermarking, data hiding, communication in presence of
partially known interferers, and problems such as broadcast channels, all of
which involve the fundamental idea of binning.
|
cs/0410004
|
Applying Policy Iteration for Training Recurrent Neural Networks
|
cs.AI cs.LG cs.NE
|
Recurrent neural networks are often used for learning time-series data. Based
on a few assumptions we model this learning task as a minimization problem of a
nonlinear least-squares cost function. The special structure of the cost
function allows us to build a connection to reinforcement learning. We exploit
this connection and derive a convergent, policy iteration-based algorithm.
Furthermore, we argue that RNN training can be fit naturally into the
reinforcement learning framework.
|
cs/0410005
|
A dynamical model of a GRID market
|
cs.MA cond-mat.other cs.CE
|
We discuss potential market mechanisms for the GRID. A complete dynamical
model of a GRID market is defined with three types of agents. Providers,
middlemen and users exchange universal GRID computing units (GCUs) at varying
prices. Providers and middlemen have strategies aimed at maximizing profit
while users are 'satisficing' agents, and only change their behavior if the
service they receive is sufficiently poor or overpriced. Preliminary results
from a multi-agent numerical simulation of the market model show that the
distribution of price changes has a power-law tail.
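A toy version of such a market dynamic can be sketched in a few lines. The
update rules below (multiplicative price nudges and a satisficing tolerance
threshold) are purely illustrative assumptions, not the paper's model:

```python
import random

def simulate_market(steps, seed=0, tolerance=1.2):
    """Toy single-provider GRID market: the provider nudges the price of
    GRID computing units (GCUs) up or down with demand, while 'satisficing'
    users only react when the price exceeds `tolerance` times their
    reference price. Returns the simulated price series."""
    rng = random.Random(seed)
    price, reference, demand = 1.0, 1.0, 50
    prices = []
    for _ in range(steps):
        # provider strategy: raise price when demand is high, cut it when low
        price *= 1.05 if demand > 40 else 0.95
        if price > tolerance * reference:
            # users notice only sufficiently large overpricing, then defect
            demand = max(0, demand - rng.randint(5, 15))
            reference = price  # the new price level becomes the norm
        else:
            demand = min(100, demand + rng.randint(0, 5))
        prices.append(price)
    return prices
```

Even this toy loop produces bursts of corrections separated by quiet periods,
the qualitative ingredient behind heavy-tailed price-change distributions.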
|
cs/0410008
|
Source Coding with Fixed Lag Side Information
|
cs.IT math.IT
|
We consider source coding with fixed lag side information at the decoder. We
focus on the special case of perfect side information with unit lag
corresponding to source coding with feedforward (the dual of channel coding
with feedback) introduced by Pradhan. We use this duality to develop a linear
complexity algorithm which achieves the rate-distortion bound for any
memoryless finite alphabet source and distortion measure.
|
cs/0410014
|
Normal forms for Answer Sets Programming
|
cs.AI
|
Normal forms for logic programs under stable/answer set semantics are
introduced. We argue that these forms can simplify the study of program
properties, mainly consistency. The first normal form, called the {\em kernel}
of the program, is useful for studying existence and number of answer sets. A
kernel program is composed of the atoms which are undefined in the Well-founded
semantics, which are those that directly affect the existence of answer sets.
The body of rules is composed of negative literals only. Thus, the kernel form
tends to be significantly more compact than other formulations. Also, it is
possible to check consistency of kernel programs in terms of colorings of the
Extended Dependency Graph program representation which we previously developed.
The second normal form is called {\em 3-kernel.} A 3-kernel program is composed
of the atoms which are undefined in the Well-founded semantics. Rules in
3-kernel programs have at most two conditions, and each rule either belongs to
a cycle, or defines a connection between cycles. 3-kernel programs may have
positive conditions. The 3-kernel normal form is very useful for the static
analysis of program consistency, i.e., the syntactic characterization of
existence of answer sets. This result can be obtained thanks to a novel
graph-like representation of programs, called the Cycle Graph, which is
presented in the companion article \cite{Cos04b}.
|
cs/0410015
|
L1 regularization is better than L2 for learning and predicting chaotic
systems
|
cs.LG cs.AI
|
Emergent behaviors are a focus of recent research interest. It is therefore
of considerable importance to investigate what optimizations suit the learning
and prediction of chaotic systems, the putative candidates for emergence. We
have compared L1 and L2 regularizations on predicting chaotic time series using
linear recurrent neural networks. The internal representation and the weights
of the networks were optimized in a unifying framework. Computational tests on
different problems indicate considerable advantages for the L1 regularization:
It had considerably better learning time and better interpolating capabilities.
We shall argue that optimization viewed as a maximum likelihood estimation
justifies our results, because L1 regularization fits heavy-tailed
distributions -- an apparently general feature of emergent systems -- better.
|
cs/0410017
|
Automated Pattern Detection--An Algorithm for Constructing Optimally
Synchronizing Multi-Regular Language Filters
|
cs.CV cond-mat.stat-mech cs.CL cs.DS cs.IR cs.LG nlin.AO nlin.CG nlin.PS physics.comp-ph q-bio.GN
|
In the computational-mechanics structural analysis of one-dimensional
cellular automata the following automata-theoretic analogue of the
\emph{change-point problem} from time series analysis arises: \emph{Given a
string $\sigma$ and a collection $\{\mc{D}_i\}$ of finite automata, identify
the regions of $\sigma$ that belong to each $\mc{D}_i$ and, in particular, the
boundaries separating them.} We present two methods for solving this
\emph{multi-regular language filtering problem}. The first, although providing
the ideal solution, requires a stack, has a worst-case compute time that grows
quadratically in $\sigma$'s length and conditions its output at any point on
arbitrarily long windows of future input. The second method is to
algorithmically construct a transducer that approximates the first algorithm.
In contrast to the stack-based algorithm, however, the transducer requires only
a finite amount of memory, runs in linear time, and gives immediate output for
each letter read; it is, moreover, the best possible finite-state approximation
with these three features.
|
cs/0410019
|
Finite-Length Scaling and Finite-Length Shift for Low-Density
Parity-Check Codes
|
cs.IT cond-mat.dis-nn math.IT
|
Consider communication over the binary erasure channel BEC using random
low-density parity-check codes with finite-blocklength n from `standard'
ensembles. We show that large error events are conveniently described within a
scaling theory, and explain how to estimate their effect heuristically. Among
other quantities, we consider the finite-length threshold e(n), defined by
requiring a block error probability P_B = 1/2. For ensembles with minimum
variable degree larger than two, the following expression is argued to hold:
e(n) = e - e_1 n^{-2/3} + \Theta(n^{-1}), with a calculable shift parameter
e_1 > 0.
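The scaling form is easy to evaluate numerically. A minimal sketch, with
purely illustrative values of the asymptotic threshold e and shift parameter
e_1 (the actual values depend on the ensemble and are computed in the paper):

```python
def finite_length_threshold(e_inf, e1, n):
    """Leading-order finite-length threshold from the scaling form
    e(n) ~ e_inf - e1 * n**(-2/3).  e_inf and e1 are illustrative
    placeholders here, not ensemble-specific values."""
    return e_inf - e1 * n ** (-2.0 / 3.0)

# Consequence of the n^{-2/3} shift: multiplying the blocklength by 8
# divides the gap to the asymptotic threshold by 8^(2/3) = 4.
```

This makes the practical content of the shift explicit: halving the gap to
capacity-like performance costs roughly a factor 2^(3/2) in blocklength.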
|
cs/0410020
|
Adaptive Cluster Expansion (ACE): A Hierarchical Bayesian Network
|
cs.NE cs.CV
|
Using the maximum entropy method, we derive the "adaptive cluster expansion"
(ACE), which can be trained to estimate probability density functions in high
dimensional spaces. The main advantage of ACE over other Bayesian networks is
its ability to capture high order statistics after short training times, which
it achieves by making use of a hierarchical vector quantisation of the input
data. We derive a scheme for representing the state of an ACE network as a
"probability image", which allows us to identify statistically anomalous
regions in an otherwise statistically homogeneous image, for instance. Finally,
we present some probability images that we obtained after training ACE on some
Brodatz texture images - these demonstrate the ability of ACE to detect subtle
textural anomalies.
|
cs/0410022
|
RRL: A Rich Representation Language for the Description of Agent
Behaviour in NECA
|
cs.MM cs.MA
|
In this paper, we describe the Rich Representation Language (RRL) which is
used in the NECA system. The NECA system generates interactions between two or
more animated characters. The RRL is an XML compliant framework for
representing the information that is exchanged at the interfaces between the
various NECA system modules. The full XML Schemas for the RRL are available at
http://www.ai.univie.ac.at/NECA/RRL
|
cs/0410027
|
Detecting User Engagement in Everyday Conversations
|
cs.SD cs.CL cs.HC
|
This paper presents a novel application of speech emotion recognition:
estimation of the level of conversational engagement between users of a voice
communication system. We begin by using machine learning techniques, such as
the support vector machine (SVM), to classify users' emotions as expressed in
individual utterances. However, this alone fails to model the temporal and
interactive aspects of conversational engagement. We therefore propose the use
of a multilevel structure based on coupled hidden Markov models (HMM) to
estimate engagement levels in continuous natural speech. The first level is
comprised of SVM-based classifiers that recognize emotional states, which could
be (e.g.) discrete emotion types or arousal/valence levels. A high-level HMM
then uses these emotional states as input, estimating users' engagement in
conversation by decoding the internal states of the HMM. We report experimental
results obtained by applying our algorithms to the LDC Emotional Prosody and
CallFriend speech corpora.
|
cs/0410028
|
Life Above Threshold: From List Decoding to Area Theorem and MSE
|
cs.IT cond-mat.dis-nn math.IT
|
We consider communication over memoryless channels using low-density
parity-check code ensembles above the iterative (belief propagation) threshold.
What is the computational complexity of decoding (i.e., of reconstructing all
the typical input codewords for a given channel output) in this regime? We
define an algorithm accomplishing this task and analyze its typical
performance. The behavior of the new algorithm can be expressed in purely
information-theoretical terms. Its analysis provides an alternative proof of
the area theorem for the binary erasure channel. Finally, we explain how the
area theorem is generalized to arbitrary memoryless channels. We note that the
recently discovered relation between mutual information and minimal square
error is an instance of the area theorem in the setting of Gaussian channels.
|
cs/0410033
|
An In-Depth Look at Information Fusion Rules & the Unification of Fusion
Theories
|
cs.AI
|
This paper may look like a glossary of fusion rules; we also introduce new
ones, presenting their formulas and examples: Conjunctive, Disjunctive,
Exclusive Disjunctive, Mixed Conjunctive-Disjunctive rules, Conditional rule,
Dempster's, Yager's, Smets' TBM rule, Dubois-Prade's, Dezert-Smarandache
classical and hybrid rules, Murphy's average rule,
Inagaki-Lefevre-Colot-Vannoorenberghe Unified Combination rules [and, as
particular cases: Inagaki's parameterized rule, Weighting Average Operator,
minC (M. Daniel), and newly Proportional Conflict Redistribution rules
(Smarandache-Dezert) among which PCR5 is the most exact way of redistribution
of the conflicting mass to non-empty sets following the path of the conjunctive
rule], Zhang's Center Combination rule, Convolutive x-Averaging, Consensus
Operator (Josang), Cautious Rule (Smets), ?-junctions rules (Smets), etc. and
three new T-norm & T-conorm rules adjusted from fuzzy and neutrosophic sets to
information fusion (Tchamova-Smarandache). By introducing the degree of union
and the degree of inclusion with respect to the cardinality of sets (rather
than from the fuzzy-set point of view), besides that of intersection, many
fusion rules can be improved.
There are corner cases where each rule might have difficulties working or may
not get an expected result.
|
cs/0410036
|
Self-Organised Factorial Encoding of a Toroidal Manifold
|
cs.LG cs.CV
|
It is shown analytically how a neural network can be used optimally to encode
input data that is derived from a toroidal manifold. The case of a 2-layer
network is considered, where the output is assumed to be a set of discrete
neural firing events. The network objective function measures the average
Euclidean error that occurs when the network attempts to reconstruct its input
from its output. This optimisation problem is solved analytically for a
toroidal input manifold, and two types of solution are obtained: a joint
encoder in which the network acts as a soft vector quantiser, and a factorial
encoder in which the network acts as a pair of soft vector quantisers (one for
each of the circular subspaces of the torus). The factorial encoder is favoured
for small network sizes when the number of observed firing events is large.
Such self-organised factorial encoding may be used to restrict the size of
network that is required to perform a given encoding task, and will decompose
an input manifold into its constituent submanifolds.
|
cs/0410038
|
Frequent Knot Discovery
|
cs.DB
|
We explore the possibility of applying the framework of frequent pattern
mining to a class of continuous objects appearing in nature, namely knots. We
introduce the frequent knot mining problem and present a solution. The key
observation is that a database consisting of knots can be transformed into a
transactional database. This observation is based on the Prime Decomposition
Theorem of knots.
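The reduction can be sketched directly: by the Prime Decomposition Theorem,
each knot corresponds to the collection of prime knots in its connected-sum
decomposition, so a knot database becomes a transactional database over
prime-knot "items", and standard frequent itemset mining applies. A minimal
Apriori-style sketch; the knot labels in the example are hypothetical:

```python
from collections import Counter
from itertools import combinations

def frequent_itemsets(transactions, minsup):
    """Toy Apriori over knot 'transactions': each transaction is the set of
    prime knots occurring in one knot's connected-sum decomposition.
    Returns every itemset contained in at least minsup transactions,
    mapped to its support count."""
    items = sorted({x for t in transactions for x in t})
    freq, k = {}, 1
    current = [frozenset([i]) for i in items]     # level-1 candidates
    while current:
        counts = Counter()
        for t in transactions:                     # count candidate supports
            ts = set(t)
            for c in current:
                if c <= ts:
                    counts[c] += 1
        survivors = {c: m for c, m in counts.items() if m >= minsup}
        freq.update(survivors)
        # candidate generation: join surviving k-sets into (k+1)-sets
        keys = list(survivors)
        current = sorted({a | b for a, b in combinations(keys, 2)
                          if len(a | b) == k + 1}, key=sorted)
        k += 1
    return freq
```

For instance, with three knots decomposing into {trefoil, fig8},
{trefoil, fig8, cinquefoil}, and {trefoil}, the pair {trefoil, fig8} is
frequent at minimum support 2 while {cinquefoil} is not.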
|
cs/0410040
|
Two Methods for Decreasing the Computational Complexity of the MIMO ML
Decoder
|
cs.IT math.IT
|
We propose use of QR factorization with sort and Dijkstra's algorithm for
decreasing the computational complexity of the sphere decoder that is used for
ML detection of signals on the multi-antenna fading channel. QR factorization
with sort decreases the complexity of the search part of the decoder at the
cost of a small increase in the complexity of the preprocessing part.
Dijkstra's algorithm decreases the complexity of the search part of the decoder
at the cost of increased storage complexity. Computer simulations demonstrate
that the proposed methods significantly reduce the complexity of the decoder.
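The Dijkstra-based idea can be sketched as a best-first search over partial
symbol vectors: with R upper triangular after QR factorization, the partial
Euclidean distance is nondecreasing as symbols are fixed from the last
coordinate upward, so the first full-length vector popped from a priority
queue is the ML solution. This is an illustrative reconstruction of that
search principle, not the paper's exact algorithm:

```python
import heapq
import itertools

def best_first_ml_detect(R, y, constellation):
    """Best-first (Dijkstra-style) search for argmin_s ||y - R s||^2, where
    R is upper triangular (from QR of the channel matrix) and each symbol
    is drawn from `constellation`.  Because every branch adds a nonnegative
    squared residual, the first complete vector popped is ML-optimal."""
    n = len(y)
    counter = itertools.count()        # tie-breaker so heap never compares tuples of symbols
    # heap entries: (partial metric, depth, tie-break, symbols fixed so far)
    heap = [(0.0, 0, next(counter), ())]
    while heap:
        metric, depth, _, partial = heapq.heappop(heap)
        if depth == n:                 # full-length vector: ML solution found
            return list(reversed(partial)), metric
        i = n - 1 - depth              # fix symbols bottom-up through R's rows
        for s in constellation:
            # residual of row i: y[i] - R[i][i]*s - sum over already-fixed symbols
            r = y[i] - R[i][i] * s
            for k, sk in enumerate(partial):
                r -= R[i][n - 1 - k] * sk
            heapq.heappush(heap,
                           (metric + r * r, depth + 1, next(counter), partial + (s,)))
```

On a noiseless toy channel the search recovers the transmitted symbols with
zero metric, visiting only the nodes whose partial metric stays competitive.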
|
cs/0410041
|
Maximum Mutual Information of Space-Time Block Codes with Symbolwise
Decodability
|
cs.IT math.IT
|
In this paper, we analyze the performance of space-time block codes which
enable symbolwise maximum likelihood decoding. We derive an upper bound of
maximum mutual information (MMI) on space-time block codes that enable
symbolwise maximum likelihood decoding for a frequency non-selective
quasi-static fading channel. The MMI is an upper bound on how much information
one can send with vanishing error probability using the target code.
|
cs/0410042
|
Neural Architectures for Robot Intelligence
|
cs.RO cs.CV cs.HC cs.LG cs.NE q-bio.NC
|
We argue that the direct experimental approaches to elucidate the
architecture of higher brains may benefit from insights gained from exploring
the possibilities and limits of artificial control architectures for robot
systems. We present some of our recent work that has been motivated by that
view and that is centered around the study of various aspects of hand actions
since these are intimately linked with many higher cognitive abilities. As
examples, we report on the development of a modular system for the recognition
of continuous hand postures based on neural nets, the use of vision and tactile
sensing for guiding prehensile movements of a multifingered hand, and the
recognition and use of hand gestures for robot teaching.
Regarding the issue of learning, we propose to view real-world learning from
the perspective of data mining and to focus more strongly on the imitation of
observed actions instead of purely reinforcement-based exploration. As a
concrete example of such an effort we report on the status of an ongoing
project in our lab in which a robot equipped with an attention system with a
neurally inspired architecture is taught actions by using hand gestures in
conjunction with speech commands. We point out some of the lessons learnt from
this system, and discuss how systems of this kind can contribute to the study
of issues at the junction between natural and artificial cognitive systems.
|
cs/0410043
|
Strategy in Ulam's Game and Tree Code Give Error-Resistant Protocols
|
cs.DC cs.IT math.IT
|
We present a new approach to construction of protocols which are proof
against communication errors. The construction is based on a generalization of
the well known Ulam's game. We show equivalence between winning strategies in
this game and robust protocols for multi-party computation. We do not give a
complete theory; rather, we want to describe a fresh new idea. We use a tree
code defined by Schulman. The tree code is the most important part of the
interactive version of Shannon's Coding Theorem proved by Schulman. He uses a
probabilistic argument for the existence of a tree code without giving any
effective construction. We show another proof yielding a randomized
construction which, in contrast to his proof, almost surely gives a good code.
Moreover, our construction uses a much smaller alphabet.
|
cs/0410049
|
Intransitivity and Vagueness
|
cs.AI
|
There are many examples in the literature that suggest that
indistinguishability is intransitive, despite the fact that the
indistinguishability relation is typically taken to be an equivalence relation
(and thus transitive). It is shown that if the uncertainty perception and the
question of when an agent reports that two things are indistinguishable are
both carefully modeled, the problems disappear, and indistinguishability can
indeed be taken to be an equivalence relation. Moreover, this model also
suggests a logic of vagueness that seems to solve many of the problems related
to vagueness discussed in the philosophical literature. In particular, it is
shown here how the logic can handle the sorites paradox.
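The intransitivity at issue can be illustrated with the standard threshold model of indistinguishability (a generic sketch, not taken from the paper): two magnitudes are reported indistinguishable when they differ by less than a perception threshold, which yields a reflexive and symmetric but non-transitive relation.

```python
# Generic threshold model of indistinguishability (illustrative only).
# The relation is reflexive and symmetric but fails transitivity,
# which is the phenomenon the abstract sets out to resolve.
EPSILON = 1.0  # hypothetical perception threshold

def indistinguishable(x, y, eps=EPSILON):
    """Report two magnitudes as indistinguishable below the threshold."""
    return abs(x - y) < eps

a, b, c = 0.0, 0.6, 1.2
print(indistinguishable(a, b))  # True
print(indistinguishable(b, c))  # True
print(indistinguishable(a, c))  # False: transitivity fails
```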
|
cs/0410050
|
Sleeping Beauty Reconsidered: Conditioning and Reflection in
Asynchronous Systems
|
cs.AI
|
A careful analysis of conditioning in the Sleeping Beauty problem is done,
using the formal model for reasoning about knowledge and probability developed
by Halpern and Tuttle. While the Sleeping Beauty problem has been viewed as
revealing problems with conditioning in the presence of imperfect recall, the
analysis done here reveals that the problems are not so much due to imperfect
recall as to asynchrony. The implications of this analysis for van Fraassen's
Reflection Principle and Savage's Sure-Thing Principle are considered.
|
cs/0410053
|
An Extended Generalized Disjunctive Paraconsistent Data Model for
Disjunctive Information
|
cs.DB
|
This paper presents an extension of generalized disjunctive paraconsistent
relational data model in which pure disjunctive positive and negative
information as well as mixed disjunctive positive and negative information can
be represented explicitly and manipulated. We consider explicit mixed
disjunctive information in the context of disjunctive databases as there is an
interesting interplay between these two types of information. Extended
generalized disjunctive paraconsistent relation is introduced as the main
structure in this model. The relational algebra is appropriately generalized to
work on extended generalized disjunctive paraconsistent relations, and its
correctness is established.
|
cs/0410054
|
Paraconsistent Intuitionistic Fuzzy Relational Data Model
|
cs.DB
|
In this paper, we present a generalization of the relational data model based
on paraconsistent intuitionistic fuzzy sets. Our data model is capable of
manipulating incomplete as well as inconsistent information; fuzzy relations
and intuitionistic fuzzy relations can only handle incomplete information.
Associated with each relation are two membership functions: a truth-membership
function $T$, which keeps track of the extent to which we believe a tuple is in
the relation, and a false-membership function $F$, which keeps track of the
extent to which we believe it is not. A paraconsistent intuitionistic fuzzy
relation is inconsistent if there exists a tuple $a$ such that $T(a) + F(a) >
1$. In order to handle inconsistent situations, we propose an operator called
split to transform
inconsistent paraconsistent intuitionistic fuzzy relations into
pseudo-consistent paraconsistent intuitionistic fuzzy relations and do the
set-theoretic and relation-theoretic operations on them and finally use another
operator called combine to transform the result back to paraconsistent
intuitionistic fuzzy relation. For this model, we define algebraic operators
that are generalisations of the usual operators such as union, selection, join
on fuzzy relations. Our data model can underlie any database and knowledge-base
management system that deals with incomplete and inconsistent information.
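The inconsistency test the abstract states, T(a) + F(a) > 1, can be sketched in a few lines; the dict-based relation representation and the tuple values below are illustrative assumptions, not the paper's actual definitions.

```python
# Sketch of the inconsistency test from the abstract: a tuple a is
# inconsistent when T(a) + F(a) > 1. The representation and values
# are illustrative, not the paper's definitions.

def is_inconsistent(t, f):
    """True when truth- and false-membership together exceed 1."""
    return t + f > 1

relation = {
    ("alice",): (0.9, 0.3),  # 0.9 + 0.3 = 1.2 > 1 -> inconsistent
    ("bob",):   (0.6, 0.3),  # 0.6 + 0.3 = 0.9     -> consistent
}

inconsistent = {k for k, (t, f) in relation.items() if is_inconsistent(t, f)}
print(inconsistent)  # {('alice',)}
```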
|
cs/0410055
|
Mathematical knowledge management is needed
|
cs.IR
|
In this lecture I discuss some aspects of MKM, Mathematical Knowledge
Management, with particular emphasis on information storage and information
retrieval.
|
cs/0410058
|
Robust Dialogue Understanding in HERALD
|
cs.CL cs.AI cs.HC cs.MA cs.SE
|
We tackle the problem of robust dialogue processing from the perspective of
language engineering. We propose an agent-oriented architecture that gives us
a flexible way of composing robust processors. Our approach is based on
Shoham's Agent Oriented Programming (AOP) paradigm. We will show how the AOP
agent model can be enriched with special features and components that allow us
to deal with classical problems of dialogue understanding.
|
cs/0410059
|
A knowledge-based approach to semi-automatic annotation of multimedia
documents via user adaptation
|
cs.DL cs.CL cs.IR
|
Current approaches to the annotation process focus on annotation schemas,
languages for annotation, or are very application driven. In this paper it is
proposed that a more flexible architecture for annotation requires a knowledge
component to allow for flexible search and navigation of the annotated
material. In particular, it is claimed that a general approach must take into
account the needs, competencies, and goals of the producers, annotators, and
consumers of the annotated material. We propose that a user-model based
approach is, therefore, necessary.
|
cs/0410060
|
Semantic filtering by inference on domain knowledge in spoken dialogue
systems
|
cs.CL cs.AI cs.HC cs.IR
|
General natural dialogue processing requires large amounts of domain
knowledge as well as linguistic knowledge in order to ensure acceptable
coverage and understanding. There are several ways of integrating lexical
resources (e.g. dictionaries, thesauri) and knowledge bases or ontologies at
different levels of dialogue processing. We concentrate in this paper on how to
exploit domain knowledge for filtering interpretation hypotheses generated by a
robust semantic parser. We use domain knowledge to semantically constrain the
hypothesis space. Moreover, adding an inference mechanism allows us to complete
the interpretation when information is not explicitly available. Further, we
discuss briefly how this can be generalized towards a predictive natural
interactive system.
|
cs/0410061
|
An argumentative annotation schema for meeting discussions
|
cs.CL cs.DL cs.IR
|
In this article, we are interested in the annotation of transcriptions of
human-human dialogue taken from meeting records. We first propose a meeting
content model where conversational acts are interpreted with respect to their
argumentative force and their role in building the argumentative structure of
the meeting discussion. Argumentation in dialogue describes the way
participants take part in the discussion and argue their standpoints. Then, we
propose an annotation scheme based on such an argumentative dialogue model as
well as the evaluation of its adequacy. The obtained higher-level semantic
annotations are exploited in the conceptual indexing of the information
contained in meeting discussions.
|
cs/0410062
|
Automatic Keyword Extraction from Spoken Text. A Comparison of two
Lexical Resources: the EDR and WordNet
|
cs.CL cs.DL cs.IR
|
Lexical resources such as WordNet and the EDR electronic dictionary have been
used in several NLP tasks. Probably, partly due to the fact that the EDR is not
freely available, WordNet has been used far more often than the EDR. We have
used both resources on the same task in order to make a comparison possible.
The task is automatic assignment of keywords to multi-party dialogue episodes
(i.e. thematically coherent stretches of spoken text). We show that the use of
lexical resources in such a task results in slightly higher performances than
the use of a purely statistically based method.
|
cs/0410063
|
INSPIRE: Evaluation of a Smart-Home System for Infotainment Management
and Device Control
|
cs.HC cs.CL
|
This paper gives an overview of the assessment and evaluation methods which
have been used to determine the quality of the INSPIRE smart home system. The
system allows different home appliances to be controlled via speech, and
consists of speech and speaker recognition, speech understanding, dialogue
management, and speech output components. The performance of these components
is first assessed individually, and then the entire system is evaluated in an
interaction experiment with test users. Initial results of the assessment and
evaluation are given, in particular with respect to the transmission channel
impact on speech and speaker recognition, and the assessment of speech output
for different system metaphors.
|
cs/0410064
|
Intelligent Computer Numerical Control unit for machine tools
|
cs.CE
|
The paper describes a new CNC control unit for machining centres with
learning ability and automatic, intelligent generation of NC programs, based
on a neural network built into the CNC unit as a special device. The device
generates NC part programs intelligently and fully automatically, based only
on a 2D, 2.5D or 3D computer model of the prismatic part; intervention by the
operator is not needed. The neural network for milling, drilling, reaming,
threading and similar operations has learned to generate NC programs in the
learning module, which is part of an intelligent CAD/CAM system.
|
cs/0410068
|
Analyzing and Improving Performance of a Class of Anomaly-based
Intrusion Detectors
|
cs.CR cs.AI
|
Anomaly-based intrusion detection (AID) techniques are useful for detecting
novel intrusions into computing resources. One of the most successful AID
detectors proposed to date is stide, which is based on analysis of system call
sequences. In this paper, we present a detailed formal framework to analyze,
understand and improve the performance of stide and similar AID techniques.
Several important properties of stide-like detectors are established through
formal proofs, and validated by carefully conducted experiments using test
datasets. Finally, the framework is utilized to design two applications to
improve the cost and performance of stide-like detectors which are based on
sequence analysis. The first application reduces the cost of developing AID
detectors by identifying the critical sections in the training dataset, and the
second application identifies the intrusion context in the intrusive dataset,
that helps to fine-tune the detectors. Such fine-tuning in turn helps to
improve detection rate and reduce false alarm rate, thereby increasing the
effectiveness and efficiency of the intrusion detectors.
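A stide-style detector of the kind analyzed in the abstract can be sketched in a few lines: record every length-k window of system calls seen in training, then flag test windows absent from that database. The window length and toy traces below are illustrative assumptions, not the paper's experimental setup.

```python
# Minimal stide-style anomaly detector (illustrative sketch).

def windows(trace, k):
    """All contiguous length-k system-call windows of a trace."""
    return [tuple(trace[i:i + k]) for i in range(len(trace) - k + 1)]

def train(traces, k=3):
    """Database of normal windows collected from training traces."""
    db = set()
    for t in traces:
        db.update(windows(t, k))
    return db

def mismatches(db, trace, k=3):
    """Windows of the test trace never seen during training."""
    return [w for w in windows(trace, k) if w not in db]

db = train([["open", "read", "write", "close"]])
print(mismatches(db, ["open", "read", "write", "close"]))       # []
print(len(mismatches(db, ["open", "write", "read", "close"])))  # 2
```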
|
cs/0410070
|
Using image partitions in 4th Dimension
|
cs.DB
|
I have plotted an image by using mathematical functions in the database 4th
Dimension. I am going to show an alternative method to: detect which sector
has been clicked; highlight it and combine it with other sectors already
highlighted; store the graph information in an efficient way; and load and
splat image layers to reconstruct the stored graph.
|
cs/0410071
|
The Cyborg Astrobiologist: First Field Experience
|
cs.CV astro-ph cs.AI cs.CE cs.HC cs.RO cs.SE q-bio.NC
|
We present results from the first geological field tests of the `Cyborg
Astrobiologist', which is a wearable computer and video camcorder system that
we are using to test and train a computer-vision system towards having some of
the autonomous decision-making capabilities of a field-geologist and
field-astrobiologist. The Cyborg Astrobiologist platform has thus far been used
for testing and development of these algorithms and systems: robotic
acquisition of quasi-mosaics of images, real-time image segmentation, and
real-time determination of interesting points in the image mosaics. The
hardware and software systems function reliably, and the computer-vision
algorithms are adequate for the first field tests. In addition to the
proof-of-concept aspect of these field tests, the main result of these field
tests is the enumeration of those issues that we can improve in the future,
including: first, detection and accounting for shadows caused by 3D jagged
edges in the outcrop; second, reincorporation of more sophisticated
texture-analysis algorithms into the system; third, creation of hardware and
software capabilities to control the camera's zoom lens in an intelligent
manner; and fourth, development of algorithms for interpretation of complex
geological scenery. Nonetheless, despite these technical inadequacies, this
Cyborg Astrobiologist system, consisting of a camera-equipped wearable-computer
and its computer-vision algorithms, has demonstrated its ability to find
genuinely interesting points in real-time in the geological scenery, and then
gathering more information about these interest points in an automated manner.
|
cs/0410072
|
Temporal logic with predicate abstraction
|
cs.LO cs.CL
|
A predicate linear temporal logic LTL_{\lambda,=} without quantifiers but
with predicate abstraction mechanism and equality is considered. The models of
LTL_{\lambda,=} can be naturally seen as the systems of pebbles (flexible
constants) moving over the elements of some (possibly infinite) domain. This
allows one to use LTL_{\lambda,=} for the specification of dynamic systems using
some resources, such as processes using memory locations, mobile agents
occupying some sites, etc. On the other hand we show that LTL_{\lambda,=} is
not recursively axiomatizable and, therefore, fully automated verification of
LTL_{\lambda,=} specifications is not, in general, possible.
|
cs/0411003
|
Applications of LDPC Codes to the Wiretap Channel
|
cs.IT cs.CR math.IT
|
With the advent of quantum key distribution (QKD) systems, perfect (i.e.
information-theoretic) security can now be achieved for distribution of a
cryptographic key. QKD systems and similar protocols use classical
error-correcting codes for both error correction (for the honest parties to
correct errors) and privacy amplification (to make an eavesdropper fully
ignorant). From a coding perspective, a good model that corresponds to such a
setting is the wire tap channel introduced by Wyner in 1975. In this paper, we
study fundamental limits and coding methods for wire tap channels. We provide
an alternative view of the proof for secrecy capacity of wire tap channels and
show how capacity achieving codes can be used to achieve the secrecy capacity
for any wiretap channel. We also consider binary erasure channel and binary
symmetric channel special cases for the wiretap channel and propose specific
practical codes. In some cases our designs achieve the secrecy capacity and in
others the codes provide security at rates below secrecy capacity. For the
special case of a noiseless main channel and binary erasure channel, we
consider encoder and decoder design for codes achieving secrecy on the wiretap
channel; we show that it is possible to construct linear-time decodable secrecy
codes based on LDPC codes that achieve secrecy.
|
cs/0411006
|
Capacity Achieving Code Constructions for Two Classes of (d,k)
Constraints
|
cs.IT math.IT
|
In this paper, we present two low complexity algorithms that achieve capacity
for the noiseless (d,k) constrained channel when k=2d+1, or when k-d+1 is not
prime. The first algorithm, called symbol sliding, is a generalized version of
the bit flipping algorithm introduced by Aviran et al. [1]. In addition to
achieving capacity for (d,2d+1) constraints, it comes close to capacity in
other cases. The second algorithm is based on interleaving, and is a
generalized version of the bit stuffing algorithm introduced by Bender and Wolf
[2]. This method uses fewer than k-d biased bit streams to achieve capacity for
(d,k) constraints with k-d+1 not prime. In particular, the encoder for
(d,d+2^m-1) constraints, 1\le m<\infty, requires only m biased bit streams.
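For reference, the (d,k) constraint itself is easy to state in code: between consecutive 1s, a valid sequence carries at least d and at most k 0s. The checker below is a generic illustration, not the paper's symbol-sliding or interleaving encoder.

```python
# Generic checker for the (d,k) runlength constraint (illustrative;
# not the paper's encoder). Zeros before the first 1 are left
# unconstrained, as is common for such sequences.

def satisfies_dk(bits, d, k):
    run = None  # length of the 0-run since the last 1 (None before any 1)
    for b in bits:
        if b == 0:
            if run is not None:
                run += 1
                if run > k:                  # too many 0s between 1s
                    return False
        else:
            if run is not None and run < d:  # too few 0s between 1s
                return False
            run = 0
    return True

print(satisfies_dk([1, 0, 0, 1, 0, 0, 0, 1], d=2, k=3))  # True
print(satisfies_dk([1, 0, 1], d=2, k=3))                 # False
```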
|
cs/0411008
|
Intuitionistic computability logic
|
cs.LO cs.AI math.LO
|
Computability logic (CL) is a systematic formal theory of computational tasks
and resources, which, in a sense, can be seen as a semantics-based alternative
to (the syntactically introduced) linear logic. With its expressive and
flexible language, where formulas represent computational problems and "truth"
is understood as algorithmic solvability, CL potentially offers a comprehensive
logical basis for constructive applied theories and computing systems
inherently requiring constructive and computationally meaningful underlying
logics.
Among the best known constructivistic logics is Heyting's intuitionistic
calculus INT, whose language can be seen as a special fragment of that of CL.
The constructivistic philosophy of INT, however, has never really found an
intuitively convincing and mathematically strict semantical justification. CL
has good claims to provide such a justification and hence a materialization of
Kolmogorov's known thesis "INT = logic of problems". The present paper contains
a soundness proof for INT with respect to the CL semantics. A comprehensive
online source on CL is available at http://www.cis.upenn.edu/~giorgi/cl.html
|
cs/0411011
|
Capacity Analysis for Continuous Alphabet Channels with Side
Information, Part I: A General Framework
|
cs.IT math.IT
|
Capacity analysis for channels with side information at the receiver has been
an active area of interest. This problem is well investigated for the case of
finite alphabet channels. However, the results are not easily generalizable to
the case of continuous alphabet channels due to analytic difficulties inherent
with continuous alphabets. In the first part of this two-part paper, we address
an analytical framework for capacity analysis of continuous alphabet channels
with side information at the receiver. For this purpose, we establish novel
necessary and sufficient conditions for weak* continuity and strict concavity
of the mutual information. These conditions are used in investigating the
existence and uniqueness of the capacity-achieving measures. Furthermore, we
derive necessary and sufficient conditions that characterize the capacity value
and the capacity-achieving measure for continuous alphabet channels with side
information at the receiver.
|
cs/0411012
|
Capacity Analysis for Continuous Alphabet Channels with Side
Information, Part II: MIMO Channels
|
cs.IT math.IT
|
In this part, we consider the capacity analysis for wireless mobile systems
with multiple antenna architectures. We apply the results of the first part to
a commonly known baseband, discrete-time multiple antenna system where both the
transmitter and receiver know the channel's statistical law. We analyze the
capacity for additive white Gaussian noise (AWGN) channels, fading channels
with full channel state information (CSI) at the receiver, fading channels with
no CSI, and fading channels with partial CSI at the receiver. For each type of
channels, we study the capacity value as well as issues such as the existence,
uniqueness, and characterization of the capacity-achieving measures for
different types of moment constraints. The results are applicable to both
Rayleigh and Rician fading channels in the presence of arbitrary line-of-sight
and correlation profiles.
|
cs/0411014
|
Rate Distortion and Denoising of Individual Data Using Kolmogorov
complexity
|
cs.IT math.IT
|
We examine the structure of families of distortion balls from the perspective
of Kolmogorov complexity. Special attention is paid to the canonical
rate-distortion function of a source word which returns the minimal Kolmogorov
complexity of all distortion balls containing that word subject to a bound on
their cardinality. This canonical rate-distortion function is related to the
more standard algorithmic rate-distortion function for the given distortion
measure. Examples are given of list distortion, Hamming distortion, and
Euclidean distortion. The algorithmic rate-distortion function can behave
differently from Shannon's rate-distortion function. To this end, we show that
the canonical rate-distortion function can and does assume a wide class of
shapes (unlike Shannon's); we relate low algorithmic mutual information to low
Kolmogorov complexity (and consequently suggest that certain aspects of the
mutual information formulation of Shannon's rate-distortion function behave
differently than would an analogous formulation using algorithmic mutual
information); we explore the notion that low Kolmogorov complexity distortion
balls containing a given word capture the interesting properties of that word
(which is hard to formalize in Shannon's theory) and this suggests an approach
to denoising; and, finally, we show that the different behavior of the
rate-distortion curves of individual source words to some extent disappears
after averaging over the source words.
|
cs/0411015
|
Bounded Input Bounded Predefined Control Bounded Output
|
cs.AI
|
The paper is an attempt to generalize a methodology similar to the
bounded-input bounded-output method currently widely used in system stability
studies. The methodology presented earlier allows decomposition of the input
space into bounded subspaces and the definition of a bounding surface for
each subspace. It also defines a corresponding predefined control, which maps
any point of a bounded input into a desired bounded output subspace. This
methodology has been improved by providing a mechanism for quickly defining a
bounding surface. This paper presents an enhanced bounded-input
bounded-predefined-control bounded-output approach, which adds adaptability
to the control and allows a controlled system to be transferred along a
|
cs/0411016
|
Intelligent search strategies based on adaptive Constraint Handling
Rules
|
cs.AI cs.PL
|
The most advanced implementation of adaptive constraint processing with
Constraint Handling Rules (CHR) allows the application of intelligent search
strategies to solve Constraint Satisfaction Problems (CSP). This presentation
compares an improved version of conflict-directed backjumping and two variants
of dynamic backtracking with respect to chronological backtracking on some of
the AIM instances which are a benchmark set of random 3-SAT problems. A CHR
implementation of a Boolean constraint solver combined with these different
search strategies in Java is thus being compared with a CHR implementation of
the same Boolean constraint solver combined with chronological backtracking in
SICStus Prolog. This comparison shows that adding "intelligence" to the
search process may reduce the number of search steps dramatically.
Furthermore, the Java implementations of these strategies are in most cases
faster than the implementations of chronological backtracking. More specifically,
conflict-directed backjumping is even faster than the SICStus Prolog
implementation of chronological backtracking, although our Java implementation
of CHR lacks the optimisations made in the SICStus Prolog system. To appear in
Theory and Practice of Logic Programming (TPLP).
|
cs/0411018
|
Artificial Intelligence and Systems Theory: Applied to Cooperative
Robots
|
cs.RO cs.AI
|
This paper describes an approach to the design of a population of cooperative
robots based on concepts borrowed from Systems Theory and Artificial
Intelligence. The research has been developed under the SocRob project, carried
out by the Intelligent Systems Laboratory at the Institute for Systems and
Robotics - Instituto Superior Tecnico (ISR/IST) in Lisbon. The acronym of the
project stands both for "Society of Robots" and "Soccer Robots", the case study
where we are testing our population of robots. Designing soccer robots is a
very challenging problem, where the robots must act not only to shoot a ball
towards the goal, but also to detect and avoid static (walls, stopped robots)
and dynamic (moving robots) obstacles. Furthermore, they must cooperate to
defeat an opposing team. Our past and current research in soccer robotics
includes cooperative sensor fusion for world modeling, object recognition and
tracking, robot navigation, multi-robot distributed task planning and
coordination, including cooperative reinforcement learning in cooperative and
adversarial environments, and behavior-based architectures for real time task
execution of cooperating robot teams.
|
cs/0411020
|
Dynamic Modelling and Adaptive Traction Control for Mobile Robots
|
cs.RO
|
Mobile robots have received a great deal of research in recent years. A
significant amount of research has been published on many aspects of mobile
robots. Most of this research is devoted to designing and developing control
techniques for robot motion and path planning. A large number of
researchers have used kinematic models to develop motion control strategies
for mobile robots, arguing that these models are valid if the robot operates
at low speed, with low acceleration and a light load. However, dynamic
modelling of mobile robots is very important as they are designed to travel at
higher speed and perform heavy duty work. This paper presents and discusses a
new approach to developing a dynamic model and control strategy for a wheeled
mobile robot, which I model as a rigid body that rolls on two wheels and a
castor. The motion control strategy consists of two levels. The first level
deals with the dynamics of the system and is denoted the low-level
controller. The second level takes care of path planning and trajectory generation.
|
cs/0411021
|
Coevolution Based Adaptive Monte Carlo Localization (CEAMCL)
|
cs.RO
|
An adaptive Monte Carlo localization algorithm based on coevolution mechanism
of ecological species is proposed. Samples are clustered into species, each of
which represents a hypothesis of the robot's pose. Since the coevolution between
the species ensures that the multiple distinct hypotheses can be tracked
stably, the problem of premature convergence when using MCL in highly symmetric
environments can be solved. The sample size can be adjusted adaptively over
time, according to the uncertainty of the robot's pose, by using the
population growth model. In addition, by using the crossover and mutation
operators of evolutionary computation, intra-species evolution can drive the
samples to move towards the regions where the desired posterior density is
large, so a small set of samples can represent the desired density well
enough for precise localization. The new algorithm is termed
coevolution-based adaptive Monte Carlo localization (CEAMCL). Experiments
have been carried out to demonstrate the efficiency of the new localization algorithm.
|
cs/0411022
|
Topological Navigation of Simulated Robots using Occupancy Grid
|
cs.RO cs.AI
|
Previously, I presented a metric navigation method in the Webots mobile robot
simulator. The navigating Khepera-like robot builds an occupancy grid of its
environment and explores the surrounding square-shaped room with a value
iteration algorithm. Now I have created a topological navigation procedure
based on the occupancy grid process. Extending it with a skeletonization
algorithm yields a graph of important places and the routes connecting them.
I also show the significant time savings gained during the process.
|
cs/0411023
|
Design and Implementation of a General Decision-making Model in RoboCup
Simulation
|
cs.RO
|
The study of the collaboration, coordination and negotiation among different
agents in a multi-agent system (MAS) has always been among the most
challenging yet popular topics in distributed artificial intelligence
research. In this paper,
we will suggest for RoboCup simulation, a typical MAS, a general
decision-making model, rather than define a different algorithm for each tactic
(e.g. ball handling, pass, shoot and interception, etc.) in soccer games as
most RoboCup simulation teams did. The general decision-making model is based
on two critical factors in soccer games: the vertical distance to the goal line
and the visual angle for the goalpost. We have used these two parameters to
formalize the defensive and offensive decisions in RoboCup simulation, and
the results have been applied in NOVAURO (originally named UJDB), a RoboCup
simulation team of Jiangsu University. Compared with the decision-making
model of Tsinghua University, the world champion team in 2001, ours is a
universal model and is easier to implement.
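The two decision factors the abstract names can be sketched with elementary geometry. The field coordinates below (goal line at x = 52.5, goalposts at y = ±7.01, roughly as in the RoboCup 2D soccer server) are assumptions for illustration, not values from the paper.

```python
import math

# Hypothetical field layout (not from the paper): goal line at
# x = GOAL_X, goalposts at y = +/- POST_Y.
GOAL_X = 52.5
POST_Y = 7.01

def decision_factors(px, py):
    """Vertical distance to the goal line and the visual angle
    subtended by the two goalposts, seen from position (px, py)."""
    dist = GOAL_X - px
    upper = math.atan2(POST_Y - py, dist)
    lower = math.atan2(-POST_Y - py, dist)
    return dist, upper - lower

dist, angle = decision_factors(40.0, 0.0)
print(dist, math.degrees(angle))  # ~12.5 m, ~58.6 degrees
```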
|
cs/0411024
|
Space Robotics Part 2: Space-based Manipulators
|
cs.RO
|
In this second of three short papers, I introduce some of the basic concepts
of space robotics with an emphasis on some specific challenging areas of
research that are peculiar to the application of robotics to space
infrastructure development. The style of these short papers is pedagogical and
the concepts in this paper are developed from fundamental manipulator robotics.
This second paper considers the application of space manipulators to on-orbit
servicing (OOS), an application with considerable commercial potential.
I provide some background to the notion of robotic on-orbit servicing and
explore how manipulator control algorithms may be modified to accommodate space
manipulators which operate in the micro-gravity of space.
|
cs/0411025
|
Bionic Humans Using EAP as Artificial Muscles Reality and Challenges
|
cs.RO cs.AI
|
For many years, the idea of a human with bionic muscles immediately conjures
up science fiction images of a TV series superhuman character that was
implanted with bionic muscles and portrayed with strength and speed far
superior to any normal human. As fantastic as this idea may seem, recent
developments in electroactive polymers (EAP) may one day make such bionics
possible. Polymers that exhibit large displacement in response to stimulation
that is other than electrical signal were known for many years. Initially, EAP
received relatively little attention due to their limited actuation capability.
However, in the recent years, the view of the EAP materials has changed due to
the introduction of effective new materials that significantly surpassed the
capability of the widely used piezoelectric polymer, PVDF. As this technology
continues to evolve, novel mechanisms that are biologically inspired are
expected to emerge. EAP materials can potentially provide actuation with
lifelike response and more flexible configurations. While further improvements
in performance and robustness are still needed, there already have been several
reported successes. In recognition of the need for cooperation in this
multidisciplinary field, the author initiated and organized a series of
international forums that are leading to a growing number of research and
development projects and to great advances in the field. In 1999, he challenged
the worldwide science and engineering community of EAP experts to develop a
robotic arm that is actuated by artificial muscles to win a wrestling match
against a human opponent. In this paper, the field of EAP as artificial muscles
will be reviewed covering the state of the art, the challenges and the vision
for the progress in future years.
|
cs/0411026
|
A Search Relevancy Tuning Method Using Expert Results Content Evaluation
|
cs.IR
|
The article presents an online relevancy tuning method using explicit user
feedback. The author developed and tested a method for modifying word weights
based on users' evaluation of search results; a user decides whether a result
is useful after inspecting its full content. The experiment showed that the
continuously accumulated word-weight base leads to better search quality in a
specified data domain. The author also suggests future improvements to the
method.
|