| id | title | categories | abstract |
|---|---|---|---|
cs/0306125
|
Predicting Response-Function Results of Electrical/Mechanical Systems
Through Artificial Neural Network
|
cs.NE
|
In the present paper a new application of Artificial Neural Networks (ANNs)
is developed: predicting response-function results of electrical/mechanical
systems through an ANN. The method is especially useful for complex systems
whose response function cannot be found because of the complexity of the
system. The proposed approach shows how, even without knowing the response
function, its results can be predicted by applying an ANN to the system. The
steps are: (i) depending on the system, the ANN architecture and the input and
output parameters are decided; (ii) training and test data are generated from
simplified circuits, and through tactic-superposition of these for complex
circuits; (iii) the ANN is trained on the training data over many cycles; and
(iv) the test data are used to predict the response-function results. The
proposed method for response prediction is found to work satisfactorily, and
could therefore be used especially for complex systems that other methods are
unable to tackle. The application of the ANN is demonstrated here for an
electrical-circuit system, but it can be applied to other systems too.
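The four steps above can be sketched with a toy example. The network below is a minimal one-hidden-layer ANN trained by plain gradient descent to reproduce a first-order step response y = 1 - exp(-t), standing in for a simple RC-like circuit; the architecture, learning rate, and response function are illustrative assumptions, not the paper's actual setup.

```python
import math
import random

random.seed(0)

# Hypothetical stand-in for a simple system's response function
# (step response of a first-order RC-like circuit).
def response(t):
    return 1.0 - math.exp(-t)

# Step (ii): generate training data from the simplified "circuit".
data = [(t / 10.0, response(t / 10.0)) for t in range(31)]

# Step (i): a one-hidden-layer network, 1 input -> H tanh units -> 1 output.
H = 8
w1 = [random.uniform(-0.5, 0.5) for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-0.5, 0.5) for _ in range(H)]
b2 = 0.0

def predict(x):
    h = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
    return sum(w2[j] * h[j] for j in range(H)) + b2, h

def mse():
    return sum((predict(x)[0] - y) ** 2 for x, y in data) / len(data)

loss_before = mse()

# Step (iii): train over many cycles with stochastic gradient descent.
lr = 0.02
for _ in range(3000):
    for x, y in data:
        out, h = predict(x)
        g = 2.0 * (out - y)                      # dL/dout
        for j in range(H):
            gh = g * w2[j] * (1.0 - h[j] ** 2)   # backprop through tanh
            w2[j] -= lr * g * h[j]
            w1[j] -= lr * gh * x
            b1[j] -= lr * gh
        b2 -= lr * g

# Step (iv): the trained net now predicts response values for test inputs.
loss_after = mse()
print(loss_before, loss_after)
```
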
|
cs/0306126
|
Bayesian Treatment of Incomplete Discrete Data applied to Mutual
Information and Feature Selection
|
cs.LG cs.AI math.PR
|
Given the joint chances of a pair of random variables one can compute
quantities of interest, like the mutual information. The Bayesian treatment of
unknown chances involves computing, from a second order prior distribution and
the data likelihood, a posterior distribution of the chances. A common
treatment of incomplete data is to assume ignorability and determine the
chances by the expectation maximization (EM) algorithm. The two different
methods above are well established but typically separated. This paper joins
the two approaches in the case of Dirichlet priors, and derives efficient
approximations for the mean, mode and the (co)variance of the chances and the
mutual information. Furthermore, we prove the unimodality of the posterior
distribution, whence the important property of convergence of EM to the global
maximum in the chosen framework. These results are applied to the problem of
selecting features for incremental learning and naive Bayes classification. A
fast filter based on the distribution of mutual information is shown to
outperform the traditional filter based on empirical mutual information on a
number of incomplete real data sets.
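As a minimal illustration of the Bayesian treatment, the sketch below computes a first-order approximation to the posterior mean of the mutual information under a symmetric Dirichlet prior: the chances are replaced by their posterior means (counts plus prior pseudo-counts, normalised) and plugged into the mutual-information formula. The paper derives higher-order corrections and (co)variances; only the leading term is shown here, and the contingency table is invented.

```python
import math

def posterior_mean_mi(counts, prior=1.0):
    """Leading-order posterior mean of mutual information: plug-in MI
    at the posterior mean of the chances under a symmetric
    Dirichlet(prior) prior over the joint cells."""
    r, c = len(counts), len(counts[0])
    total = sum(sum(row) for row in counts) + prior * r * c
    p = [[(counts[i][j] + prior) / total for j in range(c)] for i in range(r)]
    pi = [sum(p[i][j] for j in range(c)) for i in range(r)]  # row marginals
    pj = [sum(p[i][j] for i in range(r)) for j in range(c)]  # column marginals
    return sum(p[i][j] * math.log(p[i][j] / (pi[i] * pj[j]))
               for i in range(r) for j in range(c) if p[i][j] > 0)

# Invented contingency table: a strongly correlated pair of binary variables.
mi = posterior_mean_mi([[40, 2], [3, 35]])
print(mi)
```

For a filter, features would be ranked by this posterior quantity (or by how much of its distribution exceeds a threshold) rather than by the empirical mutual information alone.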
|
cs/0306130
|
Anusaaraka: Machine Translation in Stages
|
cs.CL cs.AI
|
Fully-automatic general-purpose high-quality machine translation systems
(FGH-MT) are extremely difficult to build. In fact, there is no system in the
world for any pair of languages which qualifies to be called FGH-MT. The
reasons are not far to seek. Translation is a creative process which involves
interpretation of the given text by the translator. Translation would also vary
depending on the audience and the purpose for which it is meant. This would
explain the difficulty of building a machine translation system. Since the
machine is not capable of interpreting a general text with sufficient accuracy
automatically at present - let alone re-expressing it for a given audience - it
fails to perform as FGH-MT. FOOTNOTE{The major difficulty that the machine
faces in interpreting a given text is the lack of general world knowledge or
common sense knowledge.}
|
cs/0306135
|
Pruning Isomorphic Structural Sub-problems in Configuration
|
cs.AI
|
Configuring consists in simulating the realization of a complex product from
a catalog of component parts, using known relations between types, and picking
values for object attributes. This highly combinatorial problem in the field of
constraint programming has been addressed with a variety of approaches since
the foundational system R1 (McDermott82). An inherent difficulty in solving
configuration problems is the existence of many isomorphisms among
interpretations. We describe a formalism-independent approach to improving the
detection of isomorphisms by configurators, which does not require adapting the
problem model. To achieve this, we exploit the properties of a characteristic
subset of configuration problems, called the structural sub-problem, whose
canonical solutions can be produced or tested at a limited cost. In this paper
we present an algorithm for testing the canonicity of configurations, which can
be added as a symmetry-breaking constraint to any configurator. The cost and
efficiency of this canonicity test are given.
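The paper's own canonicity test is not reproduced here, but the idea of pruning isomorphic sub-problems can be illustrated with a generic lex-leader check: when components of the same type are interchangeable, a partial configuration is accepted only if its value sequence is sorted, so exactly one representative of each isomorphism class survives the search. The component model below is an invented toy.

```python
from itertools import product
from math import comb

def is_canonical(config):
    """Accept a configuration only if it is the lexicographically
    least member of its isomorphism class. For fully interchangeable
    slots this reduces to: the chosen values are non-decreasing."""
    return all(a <= b for a, b in zip(config, config[1:]))

# Toy model: 3 interchangeable slots, each filled with one of 3 part types.
values, slots = [0, 1, 2], 3
all_configs = list(product(values, repeat=slots))
canonical = [c for c in all_configs if is_canonical(c)]

# Without pruning: 3^3 leaves; with it: one leaf per multiset.
print(len(all_configs), len(canonical), comb(3 + 3 - 1, 3))
```

Used as a symmetry-breaking constraint, the check cuts the 27 assignments down to the 10 distinct multisets, one per isomorphism class, without touching the problem model itself.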
|
cs/0307001
|
Serving Database Information Using a Flexible Server in a Three Tier
Architecture
|
cs.DC cs.DB
|
The D0 experiment at Fermilab relies on a central Oracle database for storing
all detector calibration information. Access to this data is needed by hundreds
of physics applications distributed worldwide. In order to meet the demands of
these applications from scarce resources, we have created a distributed system
that isolates the user applications from the database facilities. This system,
known as the Database Application Network (DAN), operates as the middle tier in
a three tier architecture. A DAN server employs a hierarchical caching scheme
and database connection management facility that limits access to the database
resource. The modular design allows for caching strategies and database access
components to be determined by runtime configuration. To solve scalability
problems, a proxy database component allows for DAN servers to be arranged in a
hierarchy. Also included is an event based monitoring system that is currently
being used to collect statistics for performance analysis and problem
diagnosis. DAN servers are currently implemented as a Python multithreaded
program using CORBA for network communications and interface specification. The
requirement details, design, and implementation of DAN are discussed along with
operational experience and future plans.
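As a rough sketch of the middle-tier idea (not D0's actual implementation), the classes below chain DAN-style servers into a hierarchy: each tier answers from its own cache when it can and otherwise delegates to its parent, with the database at the root counting how often it is actually reached. Class and method names are invented for illustration.

```python
class Database:
    """Stand-in for the central calibration database (the scarce resource)."""
    def __init__(self, rows):
        self.rows = rows
        self.calls = 0          # how often the database is actually hit

    def fetch(self, key):
        self.calls += 1
        return self.rows[key]

class DanServer:
    """Middle-tier server: check the local cache, else delegate upward."""
    def __init__(self, upstream):
        self.upstream = upstream   # parent DanServer or the Database
        self.cache = {}

    def fetch(self, key):
        if key not in self.cache:
            self.cache[key] = self.upstream.fetch(key)
        return self.cache[key]

# A two-level DAN hierarchy in front of the database (proxy component).
db = Database({"calib/42": "gain=1.07"})
proxy = DanServer(db)            # top-level proxy server
leaf_a = DanServer(proxy)        # servers close to the applications
leaf_b = DanServer(proxy)

# Many client reads, through either leaf, reach the database only once.
for _ in range(100):
    leaf_a.fetch("calib/42")
    leaf_b.fetch("calib/42")

print(db.calls)
```

The proxy layer is what makes the hierarchy scale: leaf servers shield the proxy, and the proxy shields the database, so repeated worldwide reads cost one database access per key.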
|
cs/0307002
|
AWESOME: A General Multiagent Learning Algorithm that Converges in
Self-Play and Learns a Best Response Against Stationary Opponents
|
cs.GT cs.LG cs.MA
|
A satisfactory multiagent learning algorithm should, {\em at a minimum},
learn to play optimally against stationary opponents and converge to a Nash
equilibrium in self-play. The algorithm that has come closest, WoLF-IGA, has
been proven to have these two properties in 2-player 2-action repeated
games--assuming that the opponent's (mixed) strategy is observable. In this
paper we present AWESOME, the first algorithm that is guaranteed to have these
two properties in {\em all} repeated (finite) games. It requires only that the
other players' actual actions (not their strategies) can be observed at each
step. It also learns to play optimally against opponents that {\em eventually
become} stationary. The basic idea behind AWESOME ({\em Adapt When Everybody is
Stationary, Otherwise Move to Equilibrium}) is to try to adapt to the others'
strategies when they appear stationary, but otherwise to retreat to a
precomputed equilibrium strategy. The techniques used to prove the properties
of AWESOME are fundamentally different from those used for previous algorithms,
and may help in analyzing other multiagent learning algorithms also.
|
cs/0307003
|
How many candidates are needed to make elections hard to manipulate?
|
cs.GT cs.CC cs.MA
|
In multiagent settings where the agents have different preferences,
preference aggregation is a central issue. Voting is a general method for
preference aggregation, but seminal results have shown that all general voting
protocols are manipulable. One could try to avoid manipulation by using voting
protocols where determining a beneficial manipulation is hard computationally.
The complexity of manipulating realistic elections where the number of
candidates is a small constant was recently studied (Conitzer 2002), but the
emphasis was on the question of whether or not a protocol becomes hard to
manipulate for some constant number of candidates. That work, in many cases,
left open the question: How many candidates are needed to make elections hard
to manipulate? This is a crucial question when comparing the relative
manipulability of different voting protocols. In this paper we answer that
question for the voting protocols of the earlier study: plurality, Borda, STV,
Copeland, maximin, regular cup, and randomized cup. We also answer that
question for two voting protocols for which no results on the complexity of
manipulation have been derived before: veto and plurality with runoff. It turns
out that the voting protocols under study become hard to manipulate at 3
candidates, 4 candidates, 7 candidates, or never.
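The computational issue can be made concrete with a brute-force manipulator: for m candidates there are only m! ballots to try, so when m is a small constant, finding a beneficial manipulation of any polynomial-time protocol is itself polynomial. The sketch below does this for the Borda protocol on an invented 3-candidate, 3-voter profile, with ties broken alphabetically.

```python
from itertools import permutations

def borda_winner(ballots):
    """Borda count over full rankings; ties broken alphabetically."""
    cands = sorted(ballots[0])
    m = len(cands)
    score = {c: 0 for c in cands}
    for b in ballots:
        for pos, c in enumerate(b):
            score[c] += m - 1 - pos
    # cands is alphabetical and max keeps the first maximum: tie-break.
    return max(cands, key=lambda c: score[c])

# Invented profile: two sincere voters plus one would-be manipulator.
others = [("a", "b", "c"), ("c", "a", "b")]
true_pref = ("b", "c", "a")          # the manipulator's real ranking

truthful = borda_winner(others + [true_pref])

# Try all m! ballots; keep the outcome the manipulator likes best.
best = truthful
for ballot in permutations(true_pref):
    w = borda_winner(others + [ballot])
    if true_pref.index(w) < true_pref.index(best):
        best = w

print(truthful, best)
```

Here sincere voting elects the manipulator's worst candidate, while the insincere ballot c > b > a elects the second choice: a beneficial manipulation found by checking only 3! = 6 ballots, which is exactly why hardness must come from larger numbers of candidates.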
|
cs/0307004
|
State complexes for metamorphic robots
|
cs.RO cs.CG
|
A metamorphic robotic system is an aggregate of homogeneous robot units which
can individually and selectively locomote in such a way as to change the global
shape of the system. We introduce a mathematical framework for defining and
analyzing general metamorphic robots. This formal structure, combined with
ideas from geometric group theory, leads to a natural extension of a
configuration space for metamorphic robots -- the state complex -- which is
especially adapted to parallelization. We present an algorithm for optimizing
reconfiguration sequences with respect to elapsed time. A universal geometric
property of state complexes -- non-positive curvature -- is the key to proving
convergence to the globally time-optimal solution.
|
cs/0307006
|
BL-WoLF: A Framework For Loss-Bounded Learnability In Zero-Sum Games
|
cs.GT cs.LG cs.MA
|
We present BL-WoLF, a framework for learnability in repeated zero-sum games
where the cost of learning is measured by the losses the learning agent accrues
(rather than the number of rounds). The game is adversarially chosen from some
family that the learner knows. The opponent knows the game and the learner's
learning strategy. The learner tries to either not accrue losses, or to quickly
learn about the game so as to avoid future losses (this is consistent with the
Win or Learn Fast (WoLF) principle; BL stands for ``bounded loss''). Our
framework allows for both probabilistic and approximate learning. The resultant
notion of {\em BL-WoLF}-learnability can be applied to any class of games, and
allows us to measure the inherent disadvantage to a player that does not know
which game in the class it is in. We present {\em guaranteed
BL-WoLF-learnability} results for families of games with deterministic payoffs
and families of games with stochastic payoffs. We demonstrate that these
families are {\em guaranteed approximately BL-WoLF-learnable} with lower cost.
We then demonstrate families of games (both stochastic and deterministic) that
are not guaranteed BL-WoLF-learnable. We show that those families,
nevertheless, are {\em BL-WoLF-learnable}. To prove these results, we use a key
lemma which we derive.
|
cs/0307010
|
Probabilistic Reasoning as Information Compression by Multiple
Alignment, Unification and Search: An Introduction and Overview
|
cs.AI
|
This article introduces the idea that probabilistic reasoning (PR) may be
understood as "information compression by multiple alignment, unification and
search" (ICMAUS). In this context, multiple alignment has a meaning which is
similar to but distinct from its meaning in bio-informatics, while unification
means a simple merging of matching patterns, a meaning which is related to but
simpler than the meaning of that term in logic.
A software model, SP61, has been developed for the discovery and formation of
'good' multiple alignments, evaluated in terms of information compression. The
model is described in outline.
Using examples from the SP61 model, this article describes in outline how the
ICMAUS framework can model various kinds of PR including: PR in best-match
pattern recognition and information retrieval; one-step 'deductive' and
'abductive' PR; inheritance of attributes in a class hierarchy; chains of
reasoning (probabilistic decision networks and decision trees, and PR with
'rules'); geometric analogy problems; nonmonotonic reasoning and reasoning with
default values; modelling the function of a Bayesian network.
|
cs/0307011
|
Supporting Out-of-turn Interactions in a Multimodal Web Interface
|
cs.IR cs.HC
|
Multimodal interfaces are becoming increasingly important with the advent of
mobile devices, accessibility considerations, and novel software technologies
that combine diverse interaction media. This article investigates systems
support for web browsing in a multimodal interface. Specifically, we outline
the design and implementation of a software framework that integrates hyperlink
and speech modes of interaction. Instead of viewing speech as merely an
alternative interaction medium, the framework uses it to support out-of-turn
interaction, providing a flexibility of information access not possible with
hyperlinks alone. This approach enables the creation of websites that adapt to
the needs of users, yet permits the designer fine-grained control over what
interactions to support. Design methodology, implementation details, and two
case studies are presented.
|
cs/0307013
|
'Computing' as Information Compression by Multiple Alignment,
Unification and Search
|
cs.AI cs.CC
|
This paper argues that the operations of a 'Universal Turing Machine' (UTM)
and equivalent mechanisms such as the 'Post Canonical System' (PCS) - which are
widely accepted as definitions of the concept of `computing' - may be
interpreted as *information compression by multiple alignment, unification and
search* (ICMAUS).
The motivation for this interpretation is that it suggests ways in which the
UTM/PCS model may be augmented in a proposed new computing system designed to
exploit the ICMAUS principles as fully as possible. The provision of a
relatively sophisticated search mechanism in the proposed 'SP' system appears
to open the door to the integration and simplification of a range of functions
including unsupervised inductive learning, best-match pattern recognition and
information retrieval, probabilistic reasoning, planning and problem solving,
and others. Detailed consideration of how the ICMAUS principles may be applied
to these functions is outside the scope of this article, but relevant sources
are cited.
|
cs/0307014
|
Syntax, Parsing and Production of Natural Language in a Framework of
Information Compression by Multiple Alignment, Unification and Search
|
cs.AI cs.CL
|
This article introduces the idea that "information compression by multiple
alignment, unification and search" (ICMAUS) provides a framework within which
natural language syntax may be represented in a simple format and the parsing
and production of natural language may be performed in a transparent manner.
The ICMAUS concepts are embodied in a software model, SP61. The organisation
and operation of the model are described and a simple example is presented
showing how the model can achieve parsing of natural language.
Notwithstanding the apparent paradox of 'decompression by compression', the
ICMAUS framework, without any modification, can produce a sentence by decoding
a compressed code for the sentence. This is illustrated with output from the
SP61 model.
The article includes four other examples - one of the parsing of a sentence
in French and three from the domain of English auxiliary verbs. These examples
show how the ICMAUS framework and the SP61 model can accommodate 'context
sensitive' features of syntax in a relatively simple and direct manner.
|
cs/0307015
|
Architecture of an Open-Sourced, Extensible Data Warehouse Builder:
InterBase 6 Data Warehouse Builder (IB-DWB)
|
cs.DB
|
We report the development of an open-sourced data warehouse builder,
InterBase Data Warehouse Builder (IB-DWB), based on Borland InterBase 6 Open
Edition Database Server. InterBase 6 is used for its low maintenance and small
footprint. IB-DWB is designed modularly and consists of five main components -
a Data Plug Platform, a Discoverer Platform, a Multi-Dimensional Cube Builder,
and a Query Supporter - bound together by a Kernel. It is also an extensible
system, made possible by the Data Plug Platform and the Discoverer Platform.
Currently, extensions are only possible via dynamic link libraries (DLLs). The
Multi-Dimensional Cube Builder provides a basic means of data aggregation. The
architectural philosophy of IB-DWB centers on providing an extensible base
platform whose functionality is supported by expansion modules. IB-DWB is
currently hosted by sourceforge.net (Project Unix Name: ib-dwb), licensed under
the GNU General Public License, Version 2.
|
cs/0307016
|
Complexity of Determining Nonemptiness of the Core
|
cs.GT cs.CC cs.MA
|
Coalition formation is a key problem in automated negotiation among
self-interested agents, and other multiagent applications. A coalition of
agents can sometimes accomplish things that the individual agents cannot, or
can do things more efficiently. However, motivating the agents to abide by a
solution requires careful analysis: only some of the solutions are stable in
the sense that no group of agents is motivated to break off and form a new
coalition. This constraint has been studied extensively in cooperative game
theory. However, the computational questions around this constraint have
received less attention. When it comes to coalition formation among software
agents (that represent real-world parties), these questions become increasingly
explicit.
In this paper we define a concise general representation for games in
characteristic form that relies on superadditivity, and show that it allows for
efficient checking of whether a given outcome is in the core. We then show that
determining whether the core is nonempty is $\mathcal{NP}$-complete both with
and without transferable utility. We demonstrate that what makes the problem
hard in both cases is determining the collaborative possibilities (the set of
outcomes possible for the grand coalition), by showing that if these are given,
the problem becomes tractable in both cases. However, we then demonstrate that
for a hybrid version of the problem, where utility transfer is possible only
within the grand coalition, the problem remains $\mathcal{NP}$-complete even
when the collaborative possibilities are given.
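The core-membership check that the concise representation supports can be sketched directly from the definition: with transferable utility, a payoff vector x is in the core iff it distributes v(N) exactly and no coalition S has v(S) greater than its members' total payoff. The three-player characteristic function below is invented.

```python
from itertools import combinations

def in_core(v, payoff):
    """Transferable-utility core test: the payoff vector must be an
    exact split of v(N) and leave no coalition able to gain by
    breaking off on its own."""
    players = sorted(payoff)
    grand = frozenset(players)
    if abs(sum(payoff.values()) - v[grand]) > 1e-9:
        return False                      # not an exact split of v(N)
    for k in range(1, len(players) + 1):
        for coalition in combinations(players, k):
            s = frozenset(coalition)
            if v.get(s, 0.0) > sum(payoff[i] for i in coalition) + 1e-9:
                return False              # coalition s would defect
    return True

# Invented 3-player game in characteristic form (superadditive).
v = {frozenset({1}): 0.0, frozenset({2}): 0.0, frozenset({3}): 0.0,
     frozenset({1, 2}): 0.6, frozenset({1, 3}): 0.6, frozenset({2, 3}): 0.6,
     frozenset({1, 2, 3}): 1.0}

even_split = {1: 1/3, 2: 1/3, 3: 1/3}
lopsided = {1: 0.8, 2: 0.1, 3: 0.1}
print(in_core(v, even_split), in_core(v, lopsided))
```

Checking one vector is easy; the hard problem is nonemptiness. Raising each pairwise value from 0.6 to 0.8 illustrates why: summing the three pair constraints would require 2·v(N) ≥ 2.4 while v(N) = 1, so no vector at all can satisfy them and the core is empty.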
|
cs/0307017
|
Definition and Complexity of Some Basic Metareasoning Problems
|
cs.AI cs.CC
|
In most real-world settings, due to limited time or other resources, an agent
cannot perform all potentially useful deliberation and information gathering
actions. This leads to the metareasoning problem of selecting such actions.
Decision-theoretic methods for metareasoning have been studied in AI, but there
are few theoretical results on the complexity of metareasoning.
We derive hardness results for three settings which most real metareasoning
systems would have to encompass as special cases. In the first, the agent has
to decide how to allocate its deliberation time across anytime algorithms
running on different problem instances. We show this to be
$\mathcal{NP}$-complete. In the second, the agent has to (dynamically) allocate
its deliberation or information gathering resources across multiple actions
that it has to choose among. We show this to be $\mathcal{NP}$-hard even when
evaluating each individual action is extremely simple. In the third, the agent
has to (dynamically) choose a limited number of deliberation or information
gathering actions to disambiguate the state of the world. We show that this is
$\mathcal{NP}$-hard under a natural restriction, and $\mathcal{PSPACE}$-hard in
general.
|
cs/0307018
|
Universal Voting Protocol Tweaks to Make Manipulation Hard
|
cs.GT cs.CC cs.MA
|
Voting is a general method for preference aggregation in multiagent settings,
but seminal results have shown that all (nondictatorial) voting protocols are
manipulable. One could try to avoid manipulation by using voting protocols
where determining a beneficial manipulation is hard computationally. A number
of recent papers study the complexity of manipulating existing protocols. This
paper is the first work to take the next step of designing new protocols that
are especially hard to manipulate. Rather than designing these new protocols
from scratch, we instead show how to tweak existing protocols to make
manipulation hard, while leaving much of the original nature of the protocol
intact. The tweak studied consists of adding one elimination preround to the
election. Surprisingly, this extremely simple and universal tweak makes typical
protocols hard to manipulate! The protocols become NP-hard, #P-hard, or
PSPACE-hard to manipulate, depending on whether the schedule of the preround is
determined before the votes are collected, after the votes are collected, or
the scheduling and the vote collecting are interleaved, respectively. We prove
general sufficient conditions on the protocols for this tweak to introduce the
hardness, and show that the most common voting protocols satisfy those
conditions. These are the first results in voting settings where manipulation
is in a higher complexity class than NP (presuming PSPACE $\neq$ NP).
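The tweak itself can be prototyped in a few lines: pair the candidates according to a preround schedule, advance each pairwise majority winner, then run ordinary plurality among the survivors. The profile and schedule below are invented, and this is the variant with the schedule fixed before the votes are collected; note that the hardness results concern manipulating such a protocol, not running it, which stays easy.

```python
def pairwise_winner(votes, x, y):
    """Majority comparison of x and y over full rankings; a tie goes
    to the first-listed candidate."""
    x_wins = sum(1 for v in votes if v.index(x) < v.index(y))
    return x if x_wins >= len(votes) - x_wins else y

def plurality_with_preround(votes, schedule):
    """One elimination preround (fixed schedule), then plurality among
    the survivors; plurality ties broken alphabetically."""
    survivors = [pairwise_winner(votes, x, y) for x, y in schedule]
    tally = {c: 0 for c in survivors}
    for v in votes:
        top = min(survivors, key=v.index)   # voter's best-ranked survivor
        tally[top] += 1
    return max(sorted(tally), key=lambda c: tally[c])

# Invented 4-candidate profile (each ballot is a full ranking).
votes = [("a", "b", "c", "d"),
         ("b", "c", "a", "d"),
         ("c", "a", "b", "d"),
         ("d", "c", "b", "a"),
         ("c", "d", "a", "b")]

# Preround schedule fixed before the votes: a vs d, b vs c.
winner = plurality_with_preround(votes, [("a", "d"), ("b", "c")])
print(winner)
```
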
|
cs/0307025
|
Information Compression by Multiple Alignment, Unification and Search as
a Unifying Principle in Computing and Cognition
|
cs.AI
|
This article presents an overview of the idea that "information compression
by multiple alignment, unification and search" (ICMAUS) may serve as a unifying
principle in computing (including mathematics and logic) and in such aspects of
human cognition as the analysis and production of natural language, fuzzy
pattern recognition and best-match information retrieval, concept hierarchies
with inheritance of attributes, probabilistic reasoning, and unsupervised
inductive learning. The ICMAUS concepts are described together with an outline
of the SP61 software model in which the ICMAUS concepts are currently realised.
A range of examples is presented, illustrated with output from the SP61 model.
|
cs/0307028
|
Issues in Communication Game
|
cs.CL
|
As interaction between autonomous agents, communication can be analyzed in
game-theoretic terms. A meaning game is proposed to formalize the core of
intended communication, in which the sender sends a message and the receiver
attempts to infer the meaning intended by the sender. Basic issues involved in
the game of natural-language communication are discussed, such as salience,
grammaticality, common sense, and common belief, together with some
demonstration of the feasibility of a game-theoretic account of language.
|
cs/0307030
|
Parsing and Generation with Tabulation and Compilation
|
cs.CL
|
The standard tabulation techniques for logic programming presuppose a fixed
order of computation. Some data-driven control should be introduced in order to
deal with diverse contexts. The present paper describes a data-driven method of
constraint transformation with a sort of compilation that subsumes
accessibility checking and last-call optimization, which characterize standard
natural-language parsing techniques, semantic-head-driven generation, etc.
|
cs/0307031
|
Automatic Classification using Self-Organising Neural Networks in
Astrophysical Experiments
|
cs.NE astro-ph cs.AI
|
Self-Organising Maps (SOMs) are effective tools in classification problems,
and in recent years the even more powerful Dynamic Growing Neural Networks, a
variant of SOMs, have been developed. Automatic Classification (also called
clustering) is an important and difficult problem in many Astrophysical
experiments, for instance, Gamma Ray Burst classification, or gamma-hadron
separation. After a brief introduction to the classification problem, we
discuss Self-Organising Maps in section 2. Section 3 discusses various models
of growing neural networks, and finally in section 4 we discuss the research
perspectives in growing neural networks for efficient classification in
astrophysical problems.
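As a minimal, generic illustration (not the dynamic growing variant), the sketch below trains a one-dimensional Self-Organising Map in pure Python: for each sample the best-matching unit and its lattice neighbours are pulled toward the input, with the learning rate and neighbourhood radius decaying over time. Data and parameters are invented.

```python
import math
import random

random.seed(1)

# Invented 1-D data: two clusters, as in a crude two-class separation task.
data = [random.gauss(0.2, 0.05) for _ in range(50)] + \
       [random.gauss(0.8, 0.05) for _ in range(50)]

units = [0.5] * 5                       # 5 map units, deliberately clumped

def quantization_error(units):
    """Average distance from each sample to its nearest unit."""
    return sum(min(abs(x - w) for w in units) for x in data) / len(data)

err_before = quantization_error(units)

steps = 2000
for t in range(steps):
    x = random.choice(data)
    bmu = min(range(len(units)), key=lambda i: abs(x - units[i]))
    lr = 0.3 * (1.0 - t / steps)        # decaying learning rate
    radius = 2.0 * (1.0 - t / steps) + 0.5
    for i in range(len(units)):
        # Gaussian neighbourhood on the 1-D lattice around the winner.
        h = math.exp(-((i - bmu) ** 2) / (2.0 * radius ** 2))
        units[i] += lr * h * (x - units[i])   # pull unit toward the sample

err_after = quantization_error(units)
print(err_before, err_after)
```

After training, the units have organised themselves over the two clusters, so labelling units yields a simple automatic classifier; the growing variants discussed in section 3 add units where the quantization error stays high instead of fixing their number in advance.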
|
cs/0307032
|
Data Management and Mining in Astrophysical Databases
|
cs.DB astro-ph physics.data-an
|
We analyse the issues involved in the management and mining of astrophysical
data. The traditional approach to data management in the astrophysical field is
not able to keep up with the increasing size of the data gathered by modern
detectors. An essential role in astrophysical research will be assumed by
automatic tools for information extraction from large datasets, i.e. data
mining techniques, such as clustering and classification algorithms. This asks
for an approach to data management based on data warehousing, emphasizing the
efficiency and simplicity of data access; efficiency is obtained using
multidimensional access methods and simplicity is achieved by properly handling
metadata. Clustering and classification techniques, on large datasets, pose
additional requirements: computational and memory scalability with respect to
the data size, interpretability and objectivity of clustering or classification
results. In this study we address some possible solutions.
|
cs/0307037
|
Supporting Dynamic Ad hoc Collaboration Capabilities
|
cs.OH cs.AI
|
Modern HENP experiments such as CMS and ATLAS involve as many as 2000
collaborators around the world. Collaborations this large will be unable to
meet often enough to support working closely together. Many of the tools
currently available for collaboration focus on heavy-weight applications such
as videoconferencing tools. While these are important, there is a more basic
need for tools that support connecting physicists to work together on an ad hoc
or continuous basis. Tools that support the day-to-day connectivity and
underlying needs of a group of collaborators are important for providing
light-weight, non-intrusive, and flexible ways to work collaboratively. Some
example tools include messaging, file-sharing, and shared plot viewers. An
important component of the environment is a scalable underlying communication
framework. In this paper we will describe our current progress on building a
dynamic and ad hoc collaboration environment and our vision for its evolution
into a HENP collaboration environment.
|
cs/0307038
|
Manifold Learning with Geodesic Minimal Spanning Trees
|
cs.CV cs.LG
|
In the manifold learning problem one seeks to discover a smooth low
dimensional surface, i.e., a manifold embedded in a higher dimensional linear
vector space, based on a set of measured sample points on the surface. In this
paper we consider the closely related problem of estimating the manifold's
intrinsic dimension and the intrinsic entropy of the sample points.
Specifically, we view the sample points as realizations of an unknown
multivariate density supported on an unknown smooth manifold. We present a
novel geometrical probability approach, called the
geodesic-minimal-spanning-tree (GMST), to obtaining asymptotically consistent
estimates of the manifold dimension and the R\'{e}nyi $\alpha$-entropy of the
sample density on the manifold. The GMST approach is striking in its simplicity
and does not require reconstructing the manifold or estimating the multivariate
density of the samples. The GMST method simply constructs a minimal spanning
tree (MST) sequence using a geodesic edge matrix and uses the overall lengths
of the MSTs to simultaneously estimate manifold dimension and entropy. We
illustrate the GMST approach for dimension and entropy estimation of a human
face dataset.
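The dimension estimate can be illustrated in simplified form. For densely sampled manifolds, short MST edges approximate geodesic arcs, so the Euclidean MST below stands in for the geodesic MST; the estimator then uses the growth law L(n) ~ n^((d-1)/d), so the log-log slope of MST length against sample size gives d = 1/(1 - slope). Data on a unit circle (intrinsic dimension 1 in ambient dimension 2) are invented for the demo.

```python
import math
import random

random.seed(2)

def mst_length(points):
    """Prim's algorithm on the complete Euclidean graph; for dense
    manifold samples the short MST edges approximate geodesic arcs."""
    n = len(points)
    dist = lambda p, q: math.hypot(p[0] - q[0], p[1] - q[1])
    in_tree = [False] * n
    best = [float("inf")] * n
    best[0], total = 0.0, 0.0
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u], total = True, total + best[u]
        for v in range(n):
            if not in_tree[v]:
                best[v] = min(best[v], dist(points[u], points[v]))
    return total

def circle_sample(n):
    """n random points on the unit circle: a 1-D manifold in 2-D."""
    return [(math.cos(a), math.sin(a))
            for a in (random.uniform(0, 2 * math.pi) for _ in range(n))]

# Growth law L(n) ~ n^((d-1)/d): the log-log slope recovers d.
n1, n2 = 100, 400
l1, l2 = mst_length(circle_sample(n1)), mst_length(circle_sample(n2))
slope = (math.log(l2) - math.log(l1)) / (math.log(n2) - math.log(n1))
d_hat = 1.0 / (1.0 - slope)
print(d_hat)
```

For the circle the MST length barely grows with n, so the slope is near 0 and the estimate lands near the intrinsic dimension 1, not the ambient dimension 2; no manifold reconstruction or density estimate is needed.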
|
cs/0307039
|
Modeling Business
|
cs.CE
|
Business concepts are studied with a metamodel-based approach using UML
2.0. The Notation Independent Business concepts metamodel is introduced. The
approach offers a mapping between different business modeling notations, which
could be used for bridging BM tools and boosting the MDA approach.
|
cs/0307040
|
Bridging the gap between modal temporal logics and constraint-based QSR
as an ALC(D) spatio-temporalisation with weakly cyclic TBoxes
|
cs.AI cs.LO
|
The aim of this work is to provide a family of qualitative theories for
spatial change in general, and for motion of spatial scenes in particular. To
achieve this, we consider a spatio-temporalisation, MTALC(D_x), of the
well-known ALC(D) family of Description Logics (DLs) with a concrete domain. In
particular, the concrete domain D_x is generated by a qualitative spatial
Relation Algebra (RA) x. We show the important result that satisfiability of an
MTALC(D_x) concept with respect to a weakly cyclic TBox is decidable in
nondeterministic exponential time, by reducing it to the emptiness problem of a
weak alternating automaton augmented with spatial constraints, which we show to
remain decidable, although the accepting condition of a run involves, in
addition to the standard case, consistency of a potentially infinite CSP
(Constraint Satisfaction Problem). The result provides an effective
tableaux-like satisfiability procedure, which is discussed.
|
cs/0307044
|
The Linguistic DS: Linguistic Description in MPEG-7
|
cs.CL
|
MPEG-7 (Moving Picture Experts Group Phase 7) is an XML-based international
standard on semantic description of multimedia content. This document discusses
the Linguistic DS and related tools. The linguistic DS is a tool, based on the
GDA tag set (http://i-content.org/GDA/tagset.html), for semantic annotation of
linguistic data in or associated with multimedia content. The current document
text reflects `Study of FPDAM - MPEG-7 MDS Extensions' issued in March 2003,
and not most of MPEG-7 MDS, for which readers are referred to the first
version of the MPEG-7 MDS document, available from ISO (http://www.iso.org).
Without that reference, however, this document should be mostly intelligible to
those who are familiar with XML and linguistic theories. Comments are welcome
and will be considered in the standardization process.
|
cs/0307045
|
Flexible Camera Calibration Using a New Analytical Radial Undistortion
Formula with Application to Mobile Robot Localization
|
cs.CV
|
Most algorithms in 3D computer vision rely on the pinhole camera model
because of its simplicity, whereas virtually all imaging devices introduce a
certain amount of nonlinear distortion, of which radial distortion is the most
severe part. The common approach to radial distortion is polynomial
approximation, which introduces distortion-specific parameters into the camera
model and requires estimation of these distortion parameters. The task of
estimating radial distortion is to find a radial distortion model that allows
easy undistortion as well as satisfactory accuracy. This paper presents a new
radial distortion model with an easy analytical undistortion formula, which
also belongs to the polynomial approximation category. Experimental results are
presented to show that with this radial distortion model, satisfactory accuracy
is achieved. An application of the new radial distortion model is non-iterative
yellow line alignment with a calibrated camera on ODIS, a robot built in our
CSOIS.
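The paper's specific polynomial formula is not reproduced here, but the idea of an analytical undistortion can be illustrated with the one-parameter division model, which likewise admits closed-form mappings in both directions: undistortion is a direct radial rescaling, and distortion solves a quadratic in the distorted radius. The distortion coefficient below is an arbitrary choice.

```python
import math

def undistort(pt, k):
    """Division model: p_u = p_d / (1 + k * r_d^2)  (closed form)."""
    x, y = pt
    s = 1.0 + k * (x * x + y * y)
    return (x / s, y / s)

def distort(pt, k):
    """Inverse mapping: solve k*r_u*r_d^2 - r_d + r_u = 0 for r_d and
    rescale the point radially (also closed form; valid while
    1 - 4*k*r_u^2 >= 0, always true for barrel distortion k < 0)."""
    x, y = pt
    r_u = math.hypot(x, y)
    if r_u < 1e-12 or abs(k) < 1e-12:
        return pt
    r_d = (1.0 - math.sqrt(1.0 - 4.0 * k * r_u * r_u)) / (2.0 * k * r_u)
    return (x * r_d / r_u, y * r_d / r_u)

k = -0.2                      # arbitrary barrel-distortion coefficient
p = (0.3, 0.4)                # normalised image coordinates
q = distort(p, k)
roundtrip = undistort(q, k)
print(q, roundtrip)
```

Having both directions in closed form is what makes calibration and non-iterative alignment convenient: no per-point iterative root finding is needed to map between distorted and undistorted coordinates.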
|
cs/0307046
|
A New Analytical Radial Distortion Model for Camera Calibration
|
cs.CV
|
A common approach to radial distortion is polynomial approximation, which
introduces distortion-specific parameters into the camera
model and requires estimation of these distortion parameters. The task of
estimating radial distortion is to find a radial distortion model that allows
easy undistortion as well as satisfactory accuracy. This paper presents a new
radial distortion model with an easy analytical undistortion formula, which
also belongs to the polynomial approximation category. Experimental results are
presented to show that with this radial distortion model, satisfactory accuracy
is achieved.
|
cs/0307047
|
Rational Radial Distortion Models with Analytical Undistortion Formulae
|
cs.CV
|
The common approach to radial distortion is polynomial
approximation, which introduces distortion-specific parameters into the camera
model and requires estimation of these distortion parameters. The task of
estimating radial distortion is to find a radial distortion model that allows
easy undistortion as well as satisfactory accuracy. This paper presents a new
class of rational radial distortion models with easy analytical undistortion
formulae. Experimental results are presented to show that with this class of
rational radial distortion models, satisfactory and comparable accuracy is
achieved.
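As an illustrative sketch only (the abstract does not give the paper's specific rational models), the well-known "division model" is one rational radial distortion function that admits an analytical undistortion formula; the coefficient value below is made up:

```python
import math

def undistort_radius(r_d, k):
    """Division model: map a distorted radius to an undistorted one.
    r_u = r_d / (1 + k * r_d**2) -- a rational function with a closed form."""
    return r_d / (1.0 + k * r_d ** 2)

def distort_radius(r_u, k):
    """Analytical inverse of the division model: solve the quadratic
    k*r_u*r_d**2 - r_d + r_u = 0 for r_d (valid while 4*k*r_u**2 < 1)."""
    if k == 0 or r_u == 0:
        return r_u
    disc = 1.0 - 4.0 * k * r_u ** 2
    return (1.0 - math.sqrt(disc)) / (2.0 * k * r_u)

# Round trip: distorting then undistorting recovers the original radius.
k = 0.05
for r in (0.1, 0.5, 1.0):
    r_d = distort_radius(r, k)
    assert abs(undistort_radius(r_d, k) - r) < 1e-9
```

Because both directions are rational or closed-form expressions, no iterative root finding is needed, which is the practical appeal of such models.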
|
cs/0307048
|
Integrating cardinal direction relations and other orientation relations
in Qualitative Spatial Reasoning
|
cs.AI
|
We propose a calculus integrating two calculi well-known in Qualitative
Spatial Reasoning (QSR): Frank's projection-based cardinal direction calculus,
and a coarser version of Freksa's relative orientation calculus. An original
constraint propagation procedure is presented, which implements the interaction
between the two integrated calculi. The importance of taking into account the
interaction is shown with a real example providing an inconsistent knowledge
base, whose inconsistency (a) cannot be detected by reasoning separately about
each of the two components of the knowledge, just because, taken separately,
each is consistent, but (b) is detected by the proposed algorithm, thanks to
the interaction knowledge propagated from each of the two components to the
other.
|
cs/0307050
|
A ternary Relation Algebra of directed lines
|
cs.AI
|
We define a ternary Relation Algebra (RA) of relative position relations on
two-dimensional directed lines (d-lines for short). A d-line has two degrees of
freedom (DFs): a rotational DF (RDF), and a translational DF (TDF). The
representation of the RDF of a d-line will be handled by an RA of 2D
orientations, CYC_t, known in the literature. A second algebra, TA_t, which
will handle the TDF of a d-line, will be defined. The two algebras, CYC_t and
TA_t, will constitute, respectively, the rotational and the translational
components of the RA, PA_t, of relative position relations on d-lines: the PA_t
atoms will consist of those pairs <t,r> of a TA_t atom and a CYC_t atom that
are compatible. We present in detail the RA PA_t, with its converse table, its
rotation table and its composition tables. We show that a (polynomial)
constraint propagation algorithm, known in the literature, is complete for a
subset of PA_t relations including almost all of the atomic relations. We will
discuss the application scope of the RA, which includes incidence geometry, GIS
(Geographic Information Systems), shape representation, localisation in
(multi-)robot navigation, and the representation of motion prepositions in NLP
(Natural Language Processing). We then compare the RA to existing ones, such as
an algebra for reasoning about rectangles parallel to the axes of an
(orthogonal) coordinate system, a ``spatial Odyssey'' of Allen's interval
algebra, and an algebra for reasoning about 2D segments.
|
cs/0307051
|
An Analytical Piecewise Radial Distortion Model for Precision Camera
Calibration
|
cs.CV
|
The common approach to radial distortion is polynomial
approximation, which introduces distortion-specific parameters into the camera
model and requires estimation of these distortion parameters. The task of
estimating radial distortion is to find a radial distortion model that allows
easy undistortion as well as satisfactory accuracy. This paper presents a new
piecewise radial distortion model with an easy analytical undistortion formula.
The motivation for seeking a piecewise radial distortion model is that, when
manufacturing results in a low-quality camera, the nonlinear radial
distortion can be complex. Using low-order polynomials to approximate the
radial distortion might not be precise enough. On the other hand, higher-order
polynomials suffer from the inverse problem. With the new piecewise radial
distortion function, more flexibility is obtained and the radial undistortion
can be performed analytically. Experimental results are presented to show that
with this new piecewise radial distortion model, better performance is achieved
than that using a single function. Furthermore, performance comparable to the
conventional polynomial model using two coefficients can also be
accomplished.
|
cs/0307053
|
Hamevol1.0: a C++ code for differential equations based on Runge-Kutta
algorithm. An application to matter enhanced neutrino oscillation
|
cs.CE
|
We present a C++ implementation of a fifth order semi-implicit Runge-Kutta
algorithm for solving Ordinary Differential Equations. This algorithm can be
used for studying many different problems and in particular it can be applied
for computing the evolution of any system whose Hamiltonian is known. We
consider in particular the problem of calculating the neutrino oscillation
probabilities in the presence of matter interactions. The time performance and
the accuracy of this implementation are competitive with the other
analytical and numerical techniques used in the literature. The algorithm design
and the salient features of the code are presented and discussed and some
explicit examples of code application are given.
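The abstract above describes a fifth-order semi-implicit Runge-Kutta solver in C++. As a much simpler illustrative sketch (not the Hamevol code itself), here is the classical explicit fourth-order Runge-Kutta step in Python, applied to a harmonic oscillator:

```python
def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for the system y' = f(t, y),
    where y is a list of state components."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# Harmonic oscillator y'' = -y, rewritten as a first-order system.
f = lambda t, y: [y[1], -y[0]]
y, t, h = [1.0, 0.0], 0.0, 0.01
for _ in range(628):  # integrate to t ~ 2*pi
    y = rk4_step(f, t, y, h)
    t += h
# after one full period, y[0] is close to its initial value 1.0
```

The semi-implicit variant used in the paper handles stiff Hamiltonians more robustly; the step structure, however, is analogous to the explicit scheme above.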
|
cs/0307054
|
Contributions to the Development and Improvement of a Regulatory and
Pre-Regulatory Digital System for the Tools within Flexible Fabrication
Systems
|
cs.CE cs.SE
|
The paper reports the results obtained in the design and realization of a
digital system intended to assist equipment for the regulation and
pre-regulation of tools and tool holders within flexible fabrication systems
(FFS). Moreover, based on the present results, the same methodology can be
applied to assisting tools with respect to their integrity and to
wear compensation in the FFS framework.
|
cs/0307055
|
Learning Analogies and Semantic Relations
|
cs.LG cs.CL cs.IR
|
We present an algorithm for learning from unlabeled text, based on the Vector
Space Model (VSM) of information retrieval, that can solve verbal analogy
questions of the kind found in the Scholastic Aptitude Test (SAT). A verbal
analogy has the form A:B::C:D, meaning "A is to B as C is to D"; for example,
mason:stone::carpenter:wood. SAT analogy questions provide a word pair, A:B,
and the problem is to select the most analogous word pair, C:D, from a set of
five choices. The VSM algorithm correctly answers 47% of a collection of 374
college-level analogy questions (random guessing would yield 20% correct). We
motivate this research by relating it to work in cognitive science and
linguistics, and by applying it to a difficult problem in natural language
processing, determining semantic relations in noun-modifier pairs. The problem
is to classify a noun-modifier pair, such as "laser printer", according to the
semantic relation between the noun (printer) and the modifier (laser). We use a
supervised nearest-neighbour algorithm that assigns a class to a given
noun-modifier pair by finding the most analogous noun-modifier pair in the
training data. With 30 classes of semantic relations, on a collection of 600
labeled noun-modifier pairs, the learning algorithm attains an F value of 26.5%
(random guessing: 3.3%). With 5 classes of semantic relations, the F value is
43.2% (random: 20%). The performance is state-of-the-art for these challenging
problems.
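A minimal sketch of the VSM idea described above, assuming hypothetical feature vectors of joining-pattern counts for each word pair (the features and counts are made up for illustration); the stem pair's vector is compared to each choice by cosine similarity:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors given as lists of numbers."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Toy relation vectors: each A:B pair is represented by counts of joining
# patterns such as "A works B", "A cuts B" (features are hypothetical).
stem = [8, 1, 5]                    # mason:stone
choices = {
    "carpenter:wood": [7, 2, 4],
    "teacher:chalk":  [1, 0, 9],
    "bird:nest":      [0, 6, 1],
}
# Pick the choice whose relation vector is most similar to the stem's.
best = max(choices, key=lambda c: cosine(stem, choices[c]))
```

The real algorithm derives these vectors from unlabeled text frequencies rather than hand-set counts, but the selection step is the same nearest-by-cosine comparison.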
|
cs/0307056
|
From Statistical Knowledge Bases to Degrees of Belief
|
cs.AI
|
An intelligent agent will often be uncertain about various properties of its
environment, and when acting in that environment it will frequently need to
quantify its uncertainty. For example, if the agent wishes to employ the
expected-utility paradigm of decision theory to guide its actions, it will need
to assign degrees of belief (subjective probabilities) to various assertions.
Of course, these degrees of belief should not be arbitrary, but rather should
be based on the information available to the agent. This paper describes one
approach for inducing degrees of belief from very rich knowledge bases, that
can include information about particular individuals, statistical correlations,
physical laws, and default rules. We call our approach the random-worlds
method. The method is based on the principle of indifference: it treats all of
the worlds the agent considers possible as being equally likely. It is able to
integrate qualitative default reasoning with quantitative probabilistic
reasoning by providing a language in which both types of information can be
easily expressed. Our results show that a number of desiderata that arise in
direct inference (reasoning from statistical information to conclusions about
individuals) and default reasoning follow directly from the semantics of
random worlds. For example, random worlds captures important patterns of
reasoning such as specificity, inheritance, indifference to irrelevant
information, and default assumptions of independence. Furthermore, the
expressive power of the language used and the intuitive semantics of random
worlds allow the method to deal with problems that are beyond the scope of many
other non-deductive reasoning systems.
|
cs/0307060
|
Neural realisation of the SP theory: cell assemblies revisited
|
cs.AI cs.NE
|
This paper describes how the elements of the SP theory (Wolff, 2003a) may be
realised with neural structures and processes. To the extent that this is
successful, the insights that have been achieved in the SP theory - the
integration and simplification of a range of phenomena in perception and
cognition - may be incorporated in a neural view of brain function.
These proposals may be seen as a development of Hebb's (1949) concept of a
'cell assembly'. By contrast with that concept and variants of it, the version
described in this paper proposes that any one neuron can belong in one assembly
and only one assembly. A distinctive feature of the present proposals is that
any neuron or cluster of neurons within a cell assembly may serve as a proxy or
reference for another cell assembly or class of cell assemblies. This device
provides solutions to many of the problems associated with cell assemblies, it
allows information to be stored in a compressed form, and it provides a robust
mechanism by which assemblies may be connected to form hierarchies, grammars
and other kinds of knowledge structure.
Drawing on insights derived from the SP theory, the paper also describes how
unsupervised learning may be achieved with neural structures and processes.
This theory of learning overcomes weaknesses in the Hebbian concept of learning
and it is, at the same time, compatible with the observations that Hebb's
theory was designed to explain.
|
cs/0307061
|
Boundary knot method for Laplace and biharmonic problems
|
cs.CE cs.MS
|
The boundary knot method (BKM) [1] is a meshless boundary-type radial basis
function (RBF) collocation scheme, where the nonsingular general solution is
used instead of the fundamental solution to evaluate the homogeneous solution,
while the dual reciprocity method (DRM) is employed to approximate the
particular solution. Despite the fact that no nonsingular RBF
general solutions are available for Laplace and biharmonic problems, this study
shows that the method can be successfully applied to these problems. The
high-order general and fundamental solutions of the Burger and Winkler equations
are also presented here for the first time.
|
cs/0307063
|
An Alternative to RDF-Based Languages for the Representation and
Processing of Ontologies in the Semantic Web
|
cs.AI
|
This paper describes an approach to the representation and processing of
ontologies in the Semantic Web, based on the ICMAUS theory of computation and
AI. This approach has strengths that complement those of languages based on the
Resource Description Framework (RDF) such as RDF Schema and DAML+OIL. The main
benefits of the ICMAUS approach are simplicity and comprehensibility in the
representation of ontologies, an ability to cope with errors and uncertainties
in knowledge, and a versatile reasoning system with capabilities in the kinds
of probabilistic reasoning that seem to be required in the Semantic Web.
|
cs/0307064
|
Implementing an Agent Trade Server
|
cs.CE
|
An experimental server for stock trading autonomous agents is presented and
made available, together with an agent shell for swift development. The server,
written in Java, was implemented as proof-of-concept for an agent trade server
for a real financial exchange.
|
cs/0307068
|
Web Access to Cultural Heritage for the Disabled
|
cs.CY cs.HC cs.IR
|
Physical disabled access is something that most cultural institutions such as
museums consider very seriously. Indeed, there are normally legal requirements
to do so. However, online disabled access is still a relatively novel and
developing field. Many cultural organizations have not yet considered the
issues in depth and web developers are not necessarily experts either. The
interface for websites is normally tested with major browsers, but not with
specialist software like text to audio converters for the blind or against the
relevant accessibility and validation standards. We consider the current state
of the art in this area, especially with respect to aspects of particular
importance to the access of cultural heritage.
|
cs/0307069
|
A logic for reasoning about upper probabilities
|
cs.AI cs.LO
|
We present a propositional logic which can be used to reason about the
uncertainty of events, where the uncertainty is modeled by a set of probability
measures assigning an interval of probability to each event. We give a sound
and complete axiomatization for the logic, and show that the satisfiability
problem is NP-complete, no harder than satisfiability for propositional logic.
|
cs/0307070
|
Modeling Belief in Dynamic Systems, Part I: Foundations
|
cs.AI cs.LO
|
Belief change is a fundamental problem in AI: Agents constantly have to
update their beliefs to accommodate new observations. In recent years, there
has been much work on axiomatic characterizations of belief change. We claim
that a better understanding of belief change can be gained from examining
appropriate semantic models. In this paper we propose a general framework in
which to model belief change. We begin by defining belief in terms of knowledge
and plausibility: an agent believes p if he knows that p is more plausible than
its negation. We then consider some properties defining the interaction between
knowledge and plausibility, and show how these properties affect the properties
of belief. In particular, we show that by assuming two of the most natural
properties, belief becomes a KD45 operator. Finally, we add time to the
picture. This gives us a framework in which we can talk about knowledge,
plausibility (and hence belief), and time, which extends the framework of
Halpern and Fagin for modeling knowledge in multi-agent systems. We then
examine the problem of ``minimal change''. This notion can be captured by using
prior plausibilities, an analogue to prior probabilities, which can be updated
by ``conditioning''. We show by example that conditioning on a plausibility
measure can capture many scenarios of interest. In a companion paper, we show
how the two best-studied scenarios of belief change, belief revision and belief
update, fit into our framework.
|
cs/0307071
|
Modeling Belief in Dynamic Systems, Part II: Revisions and Update
|
cs.AI cs.LO
|
The study of belief change has been an active area in philosophy and AI. In
recent years two special cases of belief change, belief revision and belief
update, have been studied in detail. In a companion paper, we introduce a new
framework to model belief change. This framework combines temporal and
epistemic modalities with a notion of plausibility, allowing us to examine the
change of beliefs over time. In this paper, we show how belief revision and
belief update can be captured in our framework. This allows us to compare the
assumptions made by each method, and to better understand the principles
underlying them. In particular, it shows that Katsuno and Mendelzon's notion of
belief update depends on several strong assumptions that may limit its
applicability in artificial intelligence. Finally, our analysis allows us to
identify a notion of minimal change that underlies a broad range of belief
change operations including revision and update.
|
cs/0307072
|
Camera Calibration: a USU Implementation
|
cs.CV
|
The task of camera calibration is to estimate the intrinsic and extrinsic
parameters of a camera model. Though there are some restricted techniques to
infer the 3-D information about the scene from uncalibrated cameras, effective
camera calibration procedures will open up the possibility of using a wide
range of existing algorithms for 3-D reconstruction and recognition.
The applications of camera calibration include vision-based metrology, robust
visual platooning and visual docking of mobile robots where the depth
information is important.
|
cs/0307073
|
Search and Navigation in Relational Databases
|
cs.DB
|
We present a new application for keyword search within relational databases,
which uses a novel algorithm to solve the join discovery problem by finding
Memex-like trails through the graph of foreign key dependencies. It differs
from previous efforts in the algorithms used, in the presentation mechanism and
in the use of primary-key only database queries at query-time to maintain a
fast response for users. We present examples using the DBLP data set.
|
cs/0308001
|
Two- versus three-dimensional connectivity testing of first-order
queries to semi-algebraic sets
|
cs.LO cs.CG cs.DB
|
This paper addresses the question whether one can determine the connectivity
of a semi-algebraic set in three dimensions by testing the connectivity of a
finite number of two-dimensional ``samples'' of the set, where these samples
are defined by first-order queries. The question is answered negatively for two
classes of first-order queries: cartesian-product-free, and positive one-pass.
|
cs/0308002
|
Quantifying and Visualizing Attribute Interactions
|
cs.AI
|
Interactions are patterns between several attributes in data that cannot be
inferred from any subset of these attributes. While mutual information is a
well-established approach to evaluating the interactions between two
attributes, we surveyed its generalizations as to quantify interactions between
several attributes. We have chosen McGill's interaction information, which has
been independently rediscovered a number of times under various names in
various disciplines, because of its many intuitively appealing properties. We
apply interaction information to visually present the most important
interactions of the data. Visualization of interactions has provided insight
into the structure of data on a number of domains, identifying redundant
attributes and opportunities for constructing new features, discovering
unexpected regularities in data, and has helped during the construction of
predictive models; we illustrate the methods on numerous examples. A machine
learning method that disregards interactions may get caught in two traps:
myopia is caused by learning algorithms assuming independence in spite of
interactions, whereas fragmentation arises from assuming an interaction in
spite of independence.
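As a hedged illustration of McGill's interaction information mentioned above, using one common sign convention, I(X;Y;Z) = I(X;Y|Z) - I(X;Y) (sign conventions differ across the literature); the XOR example is a standard purely three-way interaction:

```python
import math
from itertools import product

def H(p):
    """Shannon entropy (bits) of a probability dict keyed by outcome."""
    return -sum(v * math.log2(v) for v in p.values() if v > 0)

def interaction_information(joint):
    """McGill's interaction information for three variables, from marginal
    entropies: I(X;Y;Z) = (H_XY + H_XZ + H_YZ) - (H_X + H_Y + H_Z) - H_XYZ,
    which equals I(X;Y|Z) - I(X;Y) under this sign convention."""
    def marg(idx):
        m = {}
        for k, v in joint.items():
            key = tuple(k[i] for i in idx)
            m[key] = m.get(key, 0.0) + v
        return m
    h1 = sum(H(marg((i,))) for i in range(3))
    h2 = H(marg((0, 1))) + H(marg((0, 2))) + H(marg((1, 2)))
    return h2 - h1 - H(joint)

# Z = X xor Y with uniform X, Y: no pairwise dependence, yet a 1-bit
# three-way interaction -- invisible to ordinary mutual information.
xor = {(x, y, x ^ y): 0.25 for x, y in product((0, 1), repeat=2)}
```

For three mutually independent variables the same quantity comes out zero, which is what makes it usable for detecting genuine higher-order structure.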
|
cs/0308003
|
A Family of Simplified Geometric Distortion Models for Camera
Calibration
|
cs.CV
|
The commonly used radial distortion model for camera calibration is in fact
an assumption or a restriction. In practice, camera distortion could happen in
a general geometrical manner that is not limited to the radial sense. This
paper proposes a simplified geometrical distortion modeling method by using two
different radial distortion functions in the two image axes. A family of
simplified geometric distortion models is proposed, which are either simple
polynomials or rational functions of polynomials. Analytical geometric
undistortion is possible using two of the distortion functions discussed in
this paper and their performance can be improved by applying a piecewise
fitting idea. Our experimental results show that the geometrical distortion
models always perform better than their radial distortion counterparts.
Furthermore, the proposed geometric modeling method is more appropriate for
cameras whose distortion is not perfectly radially symmetric around the center
of distortion.
|
cs/0308004
|
DPG: A Cache-Efficient Accelerator for Sorting and for Join Operators
|
cs.DB cs.DS
|
We present a new algorithm for fast record retrieval,
distribute-probe-gather, or DPG. DPG has important applications both in sorting
and in joins. Current main memory sorting algorithms split their work into
three phases: extraction of key-pointer pairs; sorting of the key-pointer
pairs; and copying of the original records into the destination array according
to the sorted key-pointer pairs. The copying in the last phase dominates today's
sorting time. Hence, the use of DPG in the third phase provides an accelerator
for existing sorting algorithms.
DPG also provides two new join methods for foreign key joins: DPG-move join
and DPG-sort join. The resulting join methods with DPG are faster because DPG
join is cache-efficient and at the same time DPG join avoids the need for
sorting or for hashing. The ideas presented for foreign key join can also be
extended to faster record pair retrieval for spatial and temporal databases.
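The three-phase scheme described above (extract key-pointer pairs, sort them, gather the records) can be sketched as follows; this is the conventional baseline whose copying phase DPG accelerates, not the distribute-probe-gather algorithm itself, and the sample records are invented:

```python
def key_pointer_sort(records, key):
    """Three-phase main-memory sort: extract key-pointer pairs, sort only
    the small pairs, then gather full records into the destination array."""
    # Phase 1: extract (key, index) pairs -- small, cache-friendly items.
    pairs = [(key(rec), i) for i, rec in enumerate(records)]
    # Phase 2: sort only the key-pointer pairs, not the wide records.
    pairs.sort()
    # Phase 3: copy the original records into the destination in sorted
    # order -- the phase that dominates sorting time for wide records.
    return [records[i] for _, i in pairs]

rows = [{"id": 3, "name": "c"}, {"id": 1, "name": "a"}, {"id": 2, "name": "b"}]
out = key_pointer_sort(rows, key=lambda r: r["id"])
```

The point of the design is that phases 1 and 2 touch only compact pairs; phase 3's random-access gather is where a cache-efficient accelerator like DPG pays off.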
|
cs/0308005
|
Disabled Access for Museum Websites
|
cs.CY cs.HC cs.IR
|
Physical disabled access is something that most museums consider very
seriously. Indeed, there are normally legal requirements to do so. However,
online disabled access is still a relatively novel field. Most museums have not
yet considered the issues in depth. The Human-Computer Interface for their
websites is normally tested with major browsers, but not with specialist
browsers or against the relevant accessibility and validation standards. We
consider the current state of the art in this area and mention an accessibility
survey of some museum websites.
|
cs/0308008
|
A Grid Based Architecture for High-Performance NLP
|
cs.DC cs.CL
|
We describe the design and early implementation of an extensible,
component-based software architecture for natural language engineering
applications which interfaces with high performance distributed computing
services. The architecture leverages existing linguistic resource description
and discovery mechanisms based on metadata descriptions, combining these in a
compatible fashion with other software definition abstractions. Within this
architecture, application design is highly flexible, allowing disparate
components to be combined to suit the overall application functionality, and
formally described independently of processing concerns. An application
specification language provides abstraction from the programming environment
and allows ease of interface with high performance computational grids via a
broker.
|
cs/0308009
|
The Generalized Riemann or Henstock Integral Underpinning Multivariate
Data Analysis: Application to Faint Structure Finding in Price Processes
|
cs.CE cs.CV
|
Practical data analysis involves many implicit or explicit assumptions about
the good behavior of the data, and excludes consideration of various
potentially pathological or limit cases. In this work, we present a new general
theory of data, and of data processing, to bypass some of these assumptions.
The new framework presented is focused on integration, and has direct
applicability to expectation, distance, correlation, and aggregation. In a case
study, we seek to reveal faint structure in financial data. Our new foundation
for data encoding and handling offers increased justification for our
conclusions.
|
cs/0308013
|
A Robust and Computational Characterisation of Peer-to-Peer Database
Systems
|
cs.DC cs.DB
|
In this paper we give a robust logical and computational characterisation of
peer-to-peer database systems. We first define a precise model-theoretic
semantics of a peer-to-peer system, which allows for local inconsistency
handling. We then characterise the general computational properties for the
problem of answering queries to such a peer-to-peer system. Finally, we devise
tight complexity bounds and distributed procedures for the problem of answering
queries in a few relevant special cases.
|
cs/0308014
|
On the expressive power of semijoin queries
|
cs.DB cs.LO
|
The semijoin algebra is the variant of the relational algebra obtained by
replacing the join operator by the semijoin operator. We provide an
Ehrenfeucht-Fraiss\'{e} game, characterizing the discerning power of the
semijoin algebra. This game gives a method for showing that queries are not
expressible in the semijoin algebra.
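A minimal sketch of the semijoin operator itself, with relations modeled as lists of dicts (the attribute names and sample data are hypothetical):

```python
def semijoin(r, s, r_attr, s_attr):
    """R semijoin S: the tuples of R that join with at least one tuple of S.
    Unlike a full join, the result keeps only R's attributes."""
    keys = {t[s_attr] for t in s}
    return [t for t in r if t[r_attr] in keys]

employees = [{"name": "ann", "dept": 1}, {"name": "bob", "dept": 3}]
depts = [{"dept": 1, "city": "oslo"}]
# Only employees whose department appears in `depts` survive; the city
# attribute of `depts` never enters the result.
survivors = semijoin(employees, depts, "dept", "dept")
```

Because the result schema never grows, the semijoin algebra cannot express queries that need attributes from both operands at once, which is the kind of limitation the Ehrenfeucht-Fraisse game in the abstract is designed to prove.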
|
cs/0308016
|
Collaborative Creation of Digital Content in Indian Languages
|
cs.CL
|
The world is passing through a major revolution called the information
revolution, in which information and knowledge are becoming available to people
in unprecedented amounts wherever and whenever they need it. Those societies
which fail to take advantage of the new technology will be left behind, just
like in the industrial revolution.
The information revolution is based on two major technologies: computers and
communication. These technologies have to be delivered in a COST EFFECTIVE
manner, and in LANGUAGES accessible to people.
One way to deliver them in cost effective manner is to make suitable
technology choices, and to allow people to access through shared resources.
This could be done through street corner shops (for computer usage, e-mail
etc.), schools, community centres and local library centres.
|
cs/0308017
|
Information Revolution
|
cs.CL
|
The world is passing through a major revolution called the information
revolution, in which information and knowledge are becoming available to people
in unprecedented amounts wherever and whenever they need it. Those societies
which fail to take advantage of the new technology will be left behind, just
like in the industrial revolution.
The information revolution is based on two major technologies: computers and
communication. These technologies have to be delivered in a COST EFFECTIVE
manner, and in LANGUAGES accessible to people.
One way to deliver them in cost effective manner is to make suitable
technology choices (discussed later), and to allow people to access through
shared resources. This could be done through street corner shops (for computer
usage, e-mail etc.), schools, community centers and local library centres.
|
cs/0308018
|
Anusaaraka: Overcoming the Language Barrier in India
|
cs.CL
|
The anusaaraka system makes text in one Indian language accessible in another
Indian language. In the anusaaraka approach, the load is so divided between man
and computer that the language load is taken by the machine, and the
interpretation of the text is left to the man. The machine presents an image of
the source text in a language close to the target language. In the image, some
constructions of the source language (which do not have equivalents) spill over
to the output. Some special notation is also devised. The user after some
training learns to read and understand the output. Because the Indian languages
are close, the learning time of the output language is short, and is expected
to be around 2 weeks.
The output can also be post-edited by a trained user to make it grammatically
correct in the target language. Style can also be changed, if necessary. Thus,
in this scenario, it can function as a human assisted translation system.
Currently, anusaarakas are being built from Telugu, Kannada, Marathi, Bengali
and Punjabi to Hindi. They can be built for all Indian languages in the near
future. Everybody must pitch in to build such systems connecting all Indian
languages, using the free software model.
|
cs/0308019
|
Language Access: An Information Based Approach
|
cs.CL
|
The anusaaraka system (a kind of machine translation system) makes text in
one Indian language accessible through another Indian language. The machine
presents an image of the source text in a language close to the target
language. In the image, some constructions of the source language (which do not
have equivalents in the target language) spill over to the output. Some special
notation is also devised.
Anusaarakas have been built from five pairs of languages: Telugu, Kannada,
Marathi, Bengali and Punjabi to Hindi. They are available for use through Email
servers.
Anusaarakas follow the principle of substitutability and reversibility of
strings produced. This implies preservation of information while going from a
source language to a target language.
For narrow subject areas, specialized modules can be built by putting subject
domain knowledge into the system, which produce good quality grammatical
output. However, it should be remembered that such modules will work only in
narrow areas, and will sometimes go wrong. In such a situation, anusaaraka
output will still remain useful.
|
cs/0308020
|
LERIL : Collaborative Effort for Creating Lexical Resources
|
cs.CL
|
The paper reports on efforts taken to create lexical resources pertaining to
Indian languages, using the collaborative model. The lexical resources being
developed are: (1) Transfer lexicon and grammar from English to several Indian
languages. (2) Dependency tree bank of annotated corpora for several Indian
languages. The dependency trees are based on the Paninian model. (3) Bilingual
dictionary of 'core meanings'.
|
cs/0308022
|
Extending Dublin Core Metadata to Support the Description and Discovery
of Language Resources
|
cs.CL cs.DL
|
As language data and associated technologies proliferate and as the language
resources community expands, it is becoming increasingly difficult to locate
and reuse existing resources. Are there any lexical resources for such-and-such
a language? What tool works with transcripts in this particular format? What is
a good format to use for linguistic data of this type? Questions like these
dominate many mailing lists, since web search engines are an unreliable way to
find language resources. This paper reports on a new digital infrastructure for
discovering language resources being developed by the Open Language Archives
Community (OLAC). At the core of OLAC is its metadata format, which is designed
to facilitate description and discovery of all kinds of language resources,
including data, tools, or advice. The paper describes OLAC metadata, its
relationship to Dublin Core metadata, and its dissemination using the metadata
harvesting protocol of the Open Archives Initiative.
|
cs/0308023
|
On the complexity of curve fitting algorithms
|
cs.CC cs.CV
|
We study a popular algorithm for fitting polynomial curves to scattered data
based on the least squares with gradient weights. We show that sometimes this
algorithm admits a substantial reduction of complexity, and, furthermore, find
precise conditions under which this is possible. It turns out that this is,
indeed, possible when one fits circles but not ellipses or hyperbolas.
|
cs/0308025
|
Controlled hierarchical filtering: Model of neocortical sensory
processing
|
cs.NE cs.AI cs.LG q-bio.NC
|
A model of sensory information processing is presented. The model assumes
that learning of internal (hidden) generative models, which can predict the
future and evaluate the precision of that prediction, is of central importance
for information extraction. Furthermore, the model makes a bridge to
goal-oriented systems and builds upon the structural similarity between the
architecture of a robust controller and that of the hippocampal entorhinal
loop. This generative control architecture is mapped to the neocortex and to
the hippocampal entorhinal loop. Implicit memory phenomena; priming and
prototype learning are emerging features of the model. Mathematical theorems
ensure stability and attractive learning properties of the architecture.
Connections to reinforcement learning are also established: both the control
network, and the network with a hidden model converge to (near) optimal policy
under suitable conditions. Falsifiable predictions are made, including ones
concerning the role of the feedback connections between neocortical areas.
|
cs/0308028
|
Finding Traitors in Secure Networks Using Byzantine Agreements
|
cs.CR cs.DC cs.GT cs.MA
|
Secure networks rely upon players to maintain security and reliability.
However not every player can be assumed to have total loyalty and one must use
methods to uncover traitors in such networks. We use the original concept of
the Byzantine Generals Problem by Lamport, and the more formal Byzantine
Agreement described by Linial, to find traitors in secure networks. By applying
general fault-tolerance methods to develop a more formal design of secure
networks we are able to uncover traitors amongst a group of players. We also
propose methods to integrate this system with insecure channels. This new
resiliency can be applied to broadcast and peer-to-peer secure communication
systems where agents may be traitors or become unreliable due to faults.
|
cs/0308030
|
Learning in Multiagent Systems: An Introduction from a Game-Theoretic
Perspective
|
cs.MA cs.AI
|
We introduce the topic of learning in multiagent systems. We first provide a
quick introduction to the field of game theory, focusing on the equilibrium
concepts of iterated dominance and Nash equilibrium. We show some of the most
relevant findings in the theory of learning in games, including theorems on
fictitious play, replicator dynamics, and evolutionary stable strategies. The
CLRI theory and n-level learning agents are introduced as attempts to apply
some of these findings to the problem of engineering multiagent systems with
learning agents. Finally, we summarize some of the remaining challenges in the
field of learning in multiagent systems.
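The fictitious-play result mentioned above can be illustrated with a minimal sketch (toy matching-pennies payoffs and parameters of our choosing, not from the paper): each player best-responds to the opponent's empirical action frequencies, which converge to the mixed Nash equilibrium (1/2, 1/2) in this zero-sum game.

```python
# Fictitious play on matching pennies -- a minimal sketch with toy payoffs.
# Player 1 (row) wants to match; player 2 (col) wants to mismatch.

def payoff1(i, j):
    return 1 if i == j else -1   # zero-sum: payoff2 = -payoff1

counts1 = [1, 1]   # empirical counts of player 1's past actions (smoothed)
counts2 = [1, 1]   # empirical counts of player 2's past actions

for t in range(20000):
    # Player 1 best-responds to player 2's empirical mix
    ev1 = [sum(payoff1(i, j) * counts2[j] for j in (0, 1)) for i in (0, 1)]
    a1 = 0 if ev1[0] >= ev1[1] else 1
    # Player 2 best-responds (maximizing -payoff1)
    ev2 = [sum(-payoff1(i, j) * counts1[i] for i in (0, 1)) for j in (0, 1)]
    a2 = 0 if ev2[0] >= ev2[1] else 1
    counts1[a1] += 1
    counts2[a2] += 1

# Empirical frequencies approach the mixed equilibrium (1/2, 1/2)
freq1 = counts1[0] / sum(counts1)
freq2 = counts2[0] / sum(counts2)
```

By Robinson's theorem, empirical frequencies of fictitious play converge in two-player zero-sum games, which is why the frequencies above settle near one half.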
|
cs/0308031
|
Artificial Neural Networks for Beginners
|
cs.NE cs.AI
|
The scope of this teaching package is to make a brief induction to Artificial
Neural Networks (ANNs) for people who have no previous knowledge of them. We
first make a brief introduction to models of networks, and then describe ANNs
in general terms. As an application, we explain the backpropagation
algorithm, since it is widely used and many other algorithms are derived from
it. The user should know algebra and the handling of functions and vectors.
Differential calculus is recommendable, but not necessary. The contents of this
package should be understood by people with a high school education. It would be
useful for people who are just curious about what ANNs are, or for people who
want to become familiar with them, so that when they study them more fully, they
will already have clear notions of ANNs. Also, people who only want to apply
the backpropagation algorithm without a detailed and formal explanation of it
will find this material useful. This work should not be seen as "Nets for
dummies", but of course it is not a treatise. Much of the formality is skipped
for the sake of simplicity. Detailed explanations and demonstrations can be
found in the referred readings. The included exercises complement the
understanding of the theory. The on-line resources are highly recommended for
extending this brief induction.
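For readers who want to see the backpropagation algorithm in miniature, here is a hedged sketch (network size, learning rate, and random seed are arbitrary choices of ours) training a 2-2-1 sigmoid network on the XOR problem.

```python
# A minimal backpropagation sketch: 2-input, 2-hidden, 1-output network on XOR.
# Hyperparameters are illustrative only.
import math, random

random.seed(0)
sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))

# W1[k] = weights of hidden unit k: [w_x0, w_x1, bias]; W2 = output weights
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
W2 = [random.uniform(-1, 1) for _ in range(3)]

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
lr = 0.5

def forward(x):
    h = [sigmoid(w[0]*x[0] + w[1]*x[1] + w[2]) for w in W1]
    o = sigmoid(W2[0]*h[0] + W2[1]*h[1] + W2[2])
    return h, o

def loss():
    return sum((forward(x)[1] - y) ** 2 for x, y in data)

initial = loss()
for epoch in range(5000):
    for x, y in data:
        h, o = forward(x)
        # deltas: error derivative times sigmoid derivative, per layer
        do = (o - y) * o * (1 - o)
        dh = [do * W2[k] * h[k] * (1 - h[k]) for k in range(2)]
        # gradient-descent updates, output layer then hidden layer
        for k in range(2):
            W2[k] -= lr * do * h[k]
        W2[2] -= lr * do
        for k in range(2):
            W1[k][0] -= lr * dh[k] * x[0]
            W1[k][1] -= lr * dh[k] * x[1]
            W1[k][2] -= lr * dh[k]
final = loss()   # squared error drops as the network learns XOR
```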
|
cs/0308032
|
Evaluation of text data mining for database curation: lessons learned
from the KDD Challenge Cup
|
cs.CL q-bio.OT
|
MOTIVATION: The biological literature is a major repository of knowledge.
Many biological databases draw much of their content from a careful curation of
this literature. However, as the volume of literature increases, the burden of
curation increases. Text mining may provide useful tools to assist in the
curation process. To date, the lack of standards has made it impossible to
determine whether text mining techniques are sufficiently mature to be useful.
RESULTS: We report on a Challenge Evaluation task that we created for the
Knowledge Discovery and Data Mining (KDD) Challenge Cup. We provided a training
corpus of 862 articles consisting of journal articles curated in FlyBase, along
with the associated lists of genes and gene products, as well as the relevant
data fields from FlyBase. For the test, we provided a corpus of 213 new
(`blind') articles; the 18 participating groups provided systems that flagged
articles for curation, based on whether the article contained experimental
evidence for gene expression products. We report on the evaluation results
and describe the techniques used by the top performing groups.
CONTACT: asy@mitre.org
KEYWORDS: text mining, evaluation, curation, genomics, data management
|
cs/0308033
|
Coherent Keyphrase Extraction via Web Mining
|
cs.LG cs.CL cs.IR
|
Keyphrases are useful for a variety of purposes, including summarizing,
indexing, labeling, categorizing, clustering, highlighting, browsing, and
searching. The task of automatic keyphrase extraction is to select keyphrases
from within the text of a given document. Automatic keyphrase extraction makes
it feasible to generate keyphrases for the huge number of documents that do not
have manually assigned keyphrases. A limitation of previous keyphrase
extraction algorithms is that the selected keyphrases are occasionally
incoherent. That is, the majority of the output keyphrases may fit together
well, but there may be a minority that appear to be outliers, with no clear
semantic relation to the majority or to each other. This paper presents
enhancements to the Kea keyphrase extraction algorithm that are designed to
increase the coherence of the extracted keyphrases. The approach is to use the
degree of statistical association among candidate keyphrases as evidence that
they may be semantically related. The statistical association is measured using
web mining. Experiments demonstrate that the enhancements improve the quality
of the extracted keyphrases. Furthermore, the enhancements are not
domain-specific: the algorithm generalizes well when it is trained on one
domain (computer science documents) and tested on another (physics documents).
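The coherence idea can be sketched as follows (a toy illustration of ours, not Kea's actual scoring; the "hit counts" are made up and stand in for web-mined statistics): candidates are greedily selected so that each new keyphrase maximizes its average PMI-style association with those already chosen.

```python
# Toy coherence-based keyphrase selection; fake hit counts stand in for
# web-mined co-occurrence statistics.
import math

hits = {"neural network": 500, "backpropagation": 200, "gradient": 300,
        "cooking": 400}
cohits = {frozenset(["neural network", "backpropagation"]): 150,
          frozenset(["neural network", "gradient"]): 120,
          frozenset(["backpropagation", "gradient"]): 90,
          frozenset(["neural network", "cooking"]): 1,
          frozenset(["backpropagation", "cooking"]): 1,
          frozenset(["gradient", "cooking"]): 2}
TOTAL = 10_000  # pretend size of the mined corpus

def pmi(a, b):
    """Pointwise mutual information estimated from the toy counts."""
    pa, pb = hits[a] / TOTAL, hits[b] / TOTAL
    pab = cohits[frozenset([a, b])] / TOTAL
    return math.log(pab / (pa * pb))

def select(candidates, k):
    """Greedily grow a coherent set: each pick maximizes mean association."""
    chosen = [max(candidates, key=lambda c: hits[c])]  # seed: most frequent
    while len(chosen) < k:
        rest = [c for c in candidates if c not in chosen]
        best = max(rest,
                   key=lambda c: sum(pmi(c, s) for s in chosen) / len(chosen))
        chosen.append(best)
    return chosen

keyphrases = select(list(hits), 3)  # the incoherent outlier "cooking" is left out
```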
|
cs/0308034
|
Fingerprint based bio-starter and bio-access
|
cs.CV
|
This paper presents a safety and security system based on fingerprint
technology. The results suggest a new scenario in which new cars can use a
fingerprint sensor integrated in the car handle to allow access, and in the
dashboard as a starter button.
|
cs/0308035
|
IS (Iris Security)
|
cs.CV
|
This paper presents a safety system based on iridology. The results suggest
a new scenario in which the security problem in supervised and unsupervised
areas can be treated with the present system and iris image recognition.
|
cs/0308037
|
Distributed and Parallel Net Imaging
|
cs.CV astro-ph cs.DC
|
A very complex vision system is developed to detect luminosity variations
connected with the discovery of new planets in the Universe. Traditional
imaging systems cannot manage such a large load, so a private network is
implemented to provide an automatic vision and decision architecture. It
carries out on-line discrimination of interesting events by using two levels
of triggers, and can manage many terabytes of data per day. The architecture
relies on a distributed parallel network of up to 256 standard workstations
running Microsoft Windows.
|
cs/0308038
|
Image Analysis in Astronomy for very large vision machine
|
cs.CV astro-ph cs.DC
|
A very complex (hardware/software) system is developed to detect luminosity
variations connected with the discovery of new planets outside the Solar
System. Traditional imaging approaches are very demanding in terms of
computing time, so the implementation of an automatic vision and decision
software architecture is presented. It performs on-line discrimination of
interesting events by using two levels of triggers. A fundamental challenge
was working with very large CCD cameras (as large as 16k*16k pixels) coupled
to very large telescopes. To this end, the architecture can use a distributed
parallel network of up to 256 standard workstations.
|
cs/0308039
|
A new approach to relevancy in Internet searching - the "Vox Populi
Algorithm"
|
cs.DS cond-mat.dis-nn cs.IR
|
In this paper we derive a new algorithm for Internet searching. The main
idea of this algorithm is to extend existing algorithms with a component that
reflects the interests of the users more closely than existing methods do. The "Vox
Populi Algorithm" (VPA) creates a feedback from the users to the content of the
search index. The information derived from the users query analysis is used to
modify the existing crawling algorithms. The VPA controls the distribution of
the resources of the crawler. Finally, we also discuss methods of suppressing
unwanted content (spam).
|
cs/0308042
|
Centralized reward system gives rise to fast and efficient work sharing
for intelligent Internet agents lacking direct communication
|
cs.IR
|
The WWW has a scale-free structure in which novel information is often
difficult to locate, and intelligent agents easily get trapped in this
structure. Here a novel method is put forth that turns these traps into
information repositories, or supplies: we populated an Internet environment
with intelligent news foragers. Foraging has an associated cost, whereas
foragers are rewarded if they detect novel, not yet discovered information.
The intelligent news foragers crawl by using the estimated long-term cumulated
reward, and also have a finite-sized memory: the list of the most promising
supplies. Foragers form an artificial-life community: the most successful ones
are allowed to multiply, while unsuccessful ones die out. The distinguishing
property of this community is that there is no direct communication among
foragers, only the centralized reward system. Still, fast division of work is
achieved.
|
cs/0309007
|
ROC Curves Within the Framework of Neural Network Assembly Memory Model:
Some Analytic Results
|
cs.AI cs.IR q-bio.NC q-bio.QM
|
On the basis of the convolutional (Hamming) version of the recent Neural
Network Assembly Memory Model (NNAMM) for an intact two-layer autoassociative
Hopfield network, optimal receiver operating characteristics (ROCs) have been
derived analytically. A method of explicitly taking into account a priori
probabilities of alternative hypotheses on the structure of information
initiating memory trace retrieval is introduced, together with modified ROCs
(mROCs, a posteriori probabilities of correct recall vs. false alarm
probability). The comparison of empirical and calculated ROCs (or mROCs)
demonstrates that they coincide quantitatively, and in this way the intensities
of cues used in appropriate experiments may be estimated. It has been found
that basic ROC properties, which are among the experimental findings
underpinning dual-process models of recognition memory, can be explained
within our one-factor NNAMM.
|
cs/0309009
|
What Is Working Memory and Mental Imagery? A Robot that Learns to
Perform Mental Computations
|
cs.AI cs.NE
|
This paper goes back to Turing (1936) and treats his machine as a cognitive
model (W,D,B), where W is an "external world" represented by memory device (the
tape divided into squares), and (D,B) is a simple robot that consists of the
sensory-motor devices, D, and the brain, B. The robot's sensory-motor devices
(the "eye", the "hand", and the "organ of speech") allow the robot to simulate
the work of any Turing machine. The robot simulates the internal states of a
Turing machine by "talking to itself." At the stage of training, the teacher
forces the robot (by acting directly on its motor centers) to perform several
examples of an algorithm with different input data presented on tape. Two
effects are achieved: 1) The robot learns to perform the shown algorithm with
any input data using the tape. 2) The robot learns to perform the algorithm
"mentally" using an "imaginary tape." The model illustrates the simplest
concept of a universal learning neurocomputer, demonstrates universality of
associative learning as the mechanism of programming, and provides a
simplified, but nontrivial neurobiologically plausible explanation of the
phenomena of working memory and mental imagery. The model is implemented as a
user-friendly program for Windows called EROBOT. The program is available at
www.brain0.com/software.html.
|
cs/0309011
|
Indexing of Tables Referencing Complex Structures
|
cs.DB
|
We introduce indexing of tables referencing complex structures such as
digraphs and spatial objects, appearing in genetics and other data intensive
analysis. The indexing is achieved by extracting dimension schemas from the
referenced structures. The schemas and their dimensionality are determined by
proper coloring algorithms and the duality between all such schemas and all
such possible proper colorings is established. This duality, in turn, provides
us with an extensive library of solutions when addressing indexing questions.
It is illustrated how to use the schemas, in connection with additional
relational database technologies, to optimize queries conditioned on the
structural information being referenced. Comparisons using bitmap indexing in
the Oracle 9.2i database, on the one hand, and multidimensional clustering in
DB2 8.1.2, on the other hand, are used to illustrate the applicability of the
indexing to different technology settings. Finally, we illustrate how the
indexing can be used to extract low dimensional schemas from a binary interval
tree in order to resolve efficiently interval and stabbing queries.
|
cs/0309012
|
Exploration of RNA Editing and Design of Robust Genetic Algorithms
|
cs.NE cs.AI nlin.AO q-bio.GN
|
This paper presents our computational methodology using Genetic Algorithms
(GA) for exploring the nature of RNA editing. These models are constructed
using several genetic editing characteristics that are gleaned from the RNA
editing system as observed in several organisms. We have expanded the
traditional Genetic Algorithm with artificial editing mechanisms as proposed by
(Rocha, 1997). The incorporation of editing mechanisms provides a means for
artificial agents with genetic descriptions to gain greater phenotypic
plasticity, which may be environmentally regulated. Our first implementations
of these ideas have shed some light on the evolutionary implications of RNA
editing. Based on these understandings, we demonstrate how to select proper RNA
editors for designing more robust GAs, and the results will show promising
applications to real-world problems. We expect that the framework proposed will
both facilitate determining the evolutionary role of RNA editing in biology,
and advance the current state of research in Genetic Algorithms.
|
cs/0309013
|
Semi-metric Behavior in Document Networks and its Application to
Recommendation Systems
|
cs.IR cond-mat.dis-nn cond-mat.stat-mech cs.AI cs.DL cs.HC cs.MA
|
Recommendation systems for different Document Networks (DN) such as the World
Wide Web (WWW) and Digital Libraries, often use distance functions extracted
from relationships among documents and keywords. For instance, documents in the
WWW are related via a hyperlink network, while documents in bibliographic
databases are related by citation and collaboration networks. Furthermore,
documents are related to keyterms. The distance functions computed from these
relations establish associative networks among items of the DN, referred to as
Distance Graphs, which allow recommendation systems to identify relevant
associations for individual users. However, modern recommendation systems need
to integrate associative data from multiple sources such as different
databases, web sites, and even other users. Thus, we are presented with a
problem of combining evidence (about associations between items) from different
sources characterized by distance functions. In this paper we describe our work
on (1) inferring relevant associations from, as well as characterizing,
semi-metric distance graphs and (2) combining evidence from different distance
graphs in a recommendation system. Regarding (1), we present the idea of
semi-metric distance graphs, and introduce ratios to measure semi-metric
behavior. We compute these ratios for several DN such as digital libraries and
web sites and show that they are useful to identify implicit associations.
Regarding (2), we describe an algorithm to combine evidence from distance
graphs that uses Evidence Sets, a set structure based on Interval Valued Fuzzy
Sets and Dempster-Shafer Theory of Evidence. This algorithm has been developed
for a recommendation system named TalkMine.
|
cs/0309015
|
Reliable and Efficient Inference of Bayesian Networks from Sparse Data
by Statistical Learning Theory
|
cs.LG
|
To learn (statistical) dependencies among random variables requires
exponentially large sample size in the number of observed random variables if
any arbitrary joint probability distribution can occur.
We consider the case that sparse data strongly suggest that the probabilities
can be described by a simple Bayesian network, i.e., by a graph with small
in-degree \Delta. Then this simple law will also explain further data with high
confidence. This is shown by calculating bounds on the VC dimension of the set
of those probability measures that correspond to simple graphs. This allows us to
select networks by structural risk minimization and gives reliability bounds on
the error of the estimated joint measure without (in contrast to a previous
paper) any prior assumptions on the set of possible joint measures.
The complexity of searching for the optimal Bayesian network of in-degree
\Delta increases only polynomially in the number of random variables for
constant \Delta, and the optimal joint measure associated with a given graph
can be found by convex optimization.
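The polynomial-complexity claim rests on a simple counting argument that can be made concrete (our illustration, not the paper's full structural-risk-minimization procedure): for each node, only the subsets of the other n-1 nodes of size at most \Delta are candidate parent sets, and there are O(n^\Delta) of them for constant \Delta.

```python
# Counting candidate parent sets for a Bayesian network node with bounded
# in-degree Delta: sum_{k<=Delta} C(n-1, k) = O(n^Delta) sets per node.
from itertools import combinations

def candidate_parent_sets(nodes, target, delta):
    """All subsets of the other nodes with size <= delta."""
    others = [v for v in nodes if v != target]
    sets = []
    for k in range(delta + 1):
        sets.extend(combinations(others, k))
    return sets

nodes = list(range(6))
sets = candidate_parent_sets(nodes, target=0, delta=2)
count = len(sets)   # C(5,0) + C(5,1) + C(5,2) = 1 + 5 + 10 = 16
```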
|
cs/0309016
|
Using Simulated Annealing to Calculate the Trembles of Trembling Hand
Perfection
|
cs.GT cs.CC cs.DS cs.LG cs.NE q-bio.PE
|
Within the literature on non-cooperative game theory, there have been a
number of attempts to propose algorithms which will compute Nash equilibria.
Rather than derive a new algorithm, this paper shows that the family of
algorithms known as Markov chain Monte Carlo (MCMC) can be used to calculate
Nash equilibria. MCMC is a type of Monte Carlo simulation that relies on Markov
chains to ensure its regularity conditions. MCMC has been widely used
throughout the statistics and optimization literature, where variants of this
algorithm are known as simulated annealing. This paper shows that there is an
interesting connection between the trembles that underlie the functioning of
this algorithm and the type of Nash refinement known as trembling hand
perfection.
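A hedged sketch of the connection (ours, not the paper's exact procedure): anneal over mixed strategies of matching pennies, minimizing the maximum gain any player obtains by deviating; this "epsilon" is zero exactly at the Nash equilibrium p = q = 1/2, and the random proposal noise plays the role of the trembles.

```python
# Simulated annealing over mixed strategies of matching pennies (toy game and
# parameters of our choosing). State: (p, q) = probabilities of Heads.
import math, random

random.seed(1)

def u1(p, q):
    """Expected payoff to player 1 (match = +1, mismatch = -1)."""
    return p*q + (1-p)*(1-q) - p*(1-q) - (1-p)*q

def epsilon(p, q):
    """Largest gain any player gets by deviating; 0 exactly at Nash."""
    gain1 = max(u1(1, q), u1(0, q)) - u1(p, q)
    gain2 = max(-u1(p, 1), -u1(p, 0)) - (-u1(p, q))
    return max(gain1, gain2)

p, q = random.random(), random.random()
best_eps, best_p, best_q = epsilon(p, q), p, q
T = 1.0
for step in range(20000):
    # propose a "tremble": a small random perturbation of both strategies
    np_ = min(1.0, max(0.0, p + random.gauss(0, 0.1)))
    nq = min(1.0, max(0.0, q + random.gauss(0, 0.1)))
    dE = epsilon(np_, nq) - epsilon(p, q)
    # accept downhill moves always; uphill moves with probability e^{-dE/T}
    if dE <= 0 or random.random() < math.exp(-dE / T):
        p, q = np_, nq
        if epsilon(p, q) < best_eps:
            best_eps, best_p, best_q = epsilon(p, q), p, q
    T *= 0.9995   # cooling schedule
# best_p, best_q end up near the Nash equilibrium (1/2, 1/2)
```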
|
cs/0309018
|
Using Propagation for Solving Complex Arithmetic Constraints
|
math.NA cs.AR cs.CC cs.NA cs.PF cs.RO
|
Solving a system of nonlinear inequalities is an important problem for which
conventional numerical analysis has no satisfactory method. With a
box-consistency algorithm one can compute a cover for the solution set to
arbitrarily close approximation. Because of difficulties in the use of
propagation for complex arithmetic expressions, box consistency is computed
with interval arithmetic. In this paper we present theorems that support a
simple modification of propagation that allows complex arithmetic expressions
to be handled efficiently. The version of box consistency that is obtained in
this way is stronger than when interval arithmetic is used.
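The flavor of propagation-based narrowing can be shown on a single constraint (a toy example of ours, not the paper's full box-consistency algorithm): for x + y = z, each variable's interval is intersected with the interval implied by the other two.

```python
# Interval narrowing for the constraint x + y == z over [lo, hi] domains.

def narrow_sum(x, y, z):
    """Shrink the domains of x, y, z so that x + y = z remains possible."""
    zx = (max(z[0], x[0] + y[0]), min(z[1], x[1] + y[1]))   # z inside x + y
    nx = (max(x[0], z[0] - y[1]), min(x[1], z[1] - y[0]))   # x inside z - y
    ny = (max(y[0], z[0] - x[1]), min(y[1], z[1] - x[0]))   # y inside z - x
    return nx, ny, zx

x, y, z = (0.0, 10.0), (2.0, 4.0), (5.0, 5.0)
for _ in range(3):            # iterate to a fixed point
    x, y, z = narrow_sum(x, y, z)
# x narrows from [0, 10] to [1, 3], since z - y = [5 - 4, 5 - 2]
```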
|
cs/0309019
|
Building a Test Collection for Speech-Driven Web Retrieval
|
cs.CL
|
This paper describes a test collection (benchmark data) for retrieval systems
driven by spoken queries. This collection was produced in the subtask of the
NTCIR-3 Web retrieval task, which was performed in a TREC-style evaluation
workshop. The search topics and document collection for the Web retrieval task
were used to produce spoken queries and language models for speech recognition,
respectively. We used this collection to evaluate the performance of our
retrieval system. Experimental results showed that (a) the use of target
documents for language modeling and (b) enhancement of the vocabulary size in
speech recognition were effective in improving the system performance.
|
cs/0309021
|
A Cross-media Retrieval System for Lecture Videos
|
cs.CL
|
We propose a cross-media lecture-on-demand system, in which users can
selectively view specific segments of lecture videos by submitting text
queries. Users can easily formulate queries by using the textbook associated
with a target lecture, even if they cannot come up with effective keywords. Our
system extracts the audio track from a target lecture video, generates a
transcription by large vocabulary continuous speech recognition, and produces a
text index. Experimental results showed that by adapting speech recognition to
the topic of the lecture, the recognition accuracy increased and the retrieval
accuracy was comparable with that obtained by human transcription.
|
cs/0309022
|
Proposed Specification of a Distributed XML-Query Network
|
cs.DC cs.IR
|
W3C's XML-Query language offers a powerful instrument for information
retrieval on XML repositories. This article describes an implementation of this
retrieval in a real-world scenario. Distributed XML-Query processing reduces
the load on every single participating node to an acceptable level, and the
network allows every participant to control their own computing load.
Furthermore, XML repositories may stay with the rights holder, so every
Data-Provider can decide whether or not to process critical queries. If
Data-Providers keep redundant information, this distributed network improves
the reliability of information, with duplicates removed.
|
cs/0309025
|
Evidential Force Aggregation
|
cs.AI
|
In this paper we develop an evidential force aggregation method intended for
classification of evidential intelligence into recognized force structures. We
assume that the intelligence has already been partitioned into clusters and use
the classification method individually in each cluster. The classification is
based on a measure of fitness between template and fused intelligence that
makes it possible to handle intelligence reports with multiple nonspecific and
uncertain propositions. With this measure we can aggregate on a level-by-level
basis, starting from general intelligence to achieve a complete force structure
with recognized units on all hierarchical levels.
|
cs/0309030
|
Model-Based Debugging using Multiple Abstract Models
|
cs.SE cs.AI
|
This paper introduces an automatic debugging framework that relies on
model-based reasoning techniques to locate faults in programs. In particular,
model-based diagnosis, together with an abstract interpretation based conflict
detection mechanism is used to derive diagnoses, which correspond to possible
faults in programs. Design information and partial specifications are applied
to guide a model revision process, which allows for automatic detection and
correction of structural faults.
|
cs/0309034
|
Measuring Praise and Criticism: Inference of Semantic Orientation from
Association
|
cs.CL cs.IR cs.LG
|
The evaluative character of a word is called its semantic orientation.
Positive semantic orientation indicates praise (e.g., "honest", "intrepid") and
negative semantic orientation indicates criticism (e.g., "disturbing",
"superfluous"). Semantic orientation varies in both direction (positive or
negative) and degree (mild to strong). An automated system for measuring
semantic orientation would have application in text classification, text
filtering, tracking opinions in online discussions, analysis of survey
responses, and automated chat systems (chatbots). This paper introduces a
method for inferring the semantic orientation of a word from its statistical
association with a set of positive and negative paradigm words. Two instances
of this approach are evaluated, based on two different statistical measures of
word association: pointwise mutual information (PMI) and latent semantic
analysis (LSA). The method is experimentally tested with 3,596 words (including
adjectives, adverbs, nouns, and verbs) that have been manually labeled positive
(1,614 words) and negative (1,982 words). The method attains an accuracy of
82.8% on the full test set, but the accuracy rises above 95% when the algorithm
is allowed to abstain from classifying mild words.
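The PMI instance of the method can be sketched with made-up counts standing in for corpus statistics (the paper estimates these from real data; our numbers and paradigm words are toy).

```python
# SO-PMI sketch: semantic orientation as association with positive paradigm
# words minus association with negative ones. All counts below are invented.
import math

N = 1_000_000   # pretend corpus size
count = {"honest": 500, "superfluous": 300, "good": 2000, "bad": 1800}
cooccur = {("honest", "good"): 60, ("honest", "bad"): 5,
           ("superfluous", "good"): 4, ("superfluous", "bad"): 50}

def pmi(w, p):
    """Pointwise mutual information between word w and paradigm word p."""
    return math.log((cooccur[(w, p)] / N) / ((count[w] / N) * (count[p] / N)))

def orientation(word, pos=("good",), neg=("bad",)):
    # positive association minus negative association
    return sum(pmi(word, p) for p in pos) - sum(pmi(word, n) for n in neg)

so_honest = orientation("honest")        # positive: praise
so_superf = orientation("superfluous")   # negative: criticism
```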
|
cs/0309035
|
Combining Independent Modules to Solve Multiple-choice Synonym and
Analogy Problems
|
cs.CL cs.IR cs.LG
|
Existing statistical approaches to natural language problems are very coarse
approximations to the true complexity of language processing. As such, no
single technique will be best for all problem instances. Many researchers are
examining ensemble methods that combine the output of successful, separately
developed modules to create more accurate solutions. This paper examines three
merging rules for combining probability distributions: the well known mixture
rule, the logarithmic rule, and a novel product rule. These rules were applied
with state-of-the-art results to two problems commonly used to assess human
mastery of lexical semantics -- synonym questions and analogy questions. All
three merging rules result in ensembles that are more accurate than any of
their component modules. The differences among the three rules are not
statistically significant, but it is suggestive that the popular mixture rule
is not the best rule for either of the two problems.
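The three merging rules can be written down directly (toy distributions of ours; the paper also considers per-module weights, which we fix to uniform here).

```python
# Mixture, logarithmic, and product rules for combining probability
# distributions from independent modules, with uniform module weights.

def normalize(d):
    s = sum(d.values())
    return {k: v / s for k, v in d.items()}

def mixture(dists):
    """Arithmetic mean of the distributions (up to normalization)."""
    keys = dists[0].keys()
    return normalize({k: sum(d[k] for d in dists) for k in keys})

def product(dists):
    """Pointwise product, renormalized."""
    keys = dists[0].keys()
    p = {k: 1.0 for k in keys}
    for d in dists:
        for k in keys:
            p[k] *= d[k]
    return normalize(p)

def logarithmic(dists):
    """Geometric mean: average in log space, then renormalize."""
    n = len(dists)
    keys = dists[0].keys()
    p = {k: 1.0 for k in keys}
    for d in dists:
        for k in keys:
            p[k] *= d[k] ** (1.0 / n)
    return normalize(p)

d1 = {"a": 0.6, "b": 0.3, "c": 0.1}
d2 = {"a": 0.5, "b": 0.1, "c": 0.4}
mix, log_, prod = mixture([d1, d2]), logarithmic([d1, d2]), product([d1, d2])
```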
|
cs/0309036
|
A Neural Network Assembly Memory Model Based on an Optimal Binary Signal
Detection Theory
|
cs.AI cs.IR cs.NE q-bio.NC q-bio.QM
|
A ternary/binary data coding algorithm, and conditions under which Hopfield
networks implement optimal convolutional or Hamming decoding algorithms, have
been described. Using the introduced coding/decoding approach (an optimal
Binary Signal Detection Theory, BSDT), a Neural Network Assembly Memory Model
(NNAMM) is built. The model provides optimal (the best) basic memory
performance and demands the use of a new memory unit architecture with
two-layer Hopfield network, N-channel time gate, auxiliary reference memory,
and two nested feedback loops. NNAMM explicitly describes the time dependence
of memory trace retrieval, and offers the possibility of metamemory simulation,
generalized knowledge representation, and a distinct description of conscious
and unconscious mental processes. A model of the smallest inseparable part, or
"atom", of consciousness is also defined. The NNAMM's neurobiological
background and its applications to solving some interdisciplinary problems are
briefly discussed. BSDT could implement the "best neural code" used in nervous tissues
of animals and humans.
|
cs/0309038
|
A novel evolutionary formulation of the maximum independent set problem
|
cs.NE
|
We introduce a novel evolutionary formulation of the problem of finding a
maximum independent set of a graph. The new formulation is based on the
relationship that exists between a graph's independence number and its acyclic
orientations. It views such orientations as individuals and evolves them with
the aid of evolutionary operators that are very heavily based on the structure
of the graph and its acyclic orientations. The resulting heuristic has been
tested on some of the Second DIMACS Implementation Challenge benchmark graphs,
and has been found to be competitive when compared to several of the other
heuristics that have also been tested on those graphs.
|
cs/0309039
|
Two novel evolutionary formulations of the graph coloring problem
|
cs.NE
|
We introduce two novel evolutionary formulations of the problem of coloring
the nodes of a graph. The first formulation is based on the relationship that
exists between a graph's chromatic number and its acyclic orientations. It
views such orientations as individuals and evolves them with the aid of
evolutionary operators that are very heavily based on the structure of the
graph and its acyclic orientations. The second formulation, unlike the first
one, does not tackle one graph at a time, but rather aims at evolving a
`program' to color all graphs belonging to a class whose members all have the
same number of nodes and other common attributes. The heuristics that result
from these formulations have been tested on some of the Second DIMACS
Implementation Challenge benchmark graphs, and have been found to be
competitive when compared to the several other heuristics that have also been
tested on those graphs.
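A hedged sketch related to the first formulation (our simplification, with generic swap mutations rather than the paper's structure-aware operators): a vertex permutation induces an acyclic orientation by directing each edge from the earlier to the later endpoint, and greedy coloring along the permutation realizes the corresponding bound, so one can evolve permutations and score them by colors used.

```python
# Evolving vertex permutations for graph coloring -- a toy GA sketch.
# A permutation induces an acyclic orientation (edges point from earlier to
# later vertex); greedy coloring in that order scores the individual.
import random

random.seed(3)

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # 4-cycle plus a chord
n = 4
adj = {v: set() for v in range(n)}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

def colors_used(perm):
    """Greedy-color vertices in the given order; return number of colors."""
    color = {}
    for v in perm:
        used = {color[u] for u in adj[v] if u in color}
        color[v] = next(c for c in range(n) if c not in used)
    return max(color.values()) + 1

def mutate(perm):
    """Generic swap mutation (a simplification of structure-aware operators)."""
    p = perm[:]
    i, j = random.sample(range(n), 2)
    p[i], p[j] = p[j], p[i]
    return p

pop = [random.sample(range(n), n) for _ in range(10)]
for gen in range(30):
    pop.sort(key=colors_used)                 # elitist selection
    pop = pop[:5] + [mutate(random.choice(pop[:5])) for _ in range(5)]

best = min(pop, key=colors_used)
k = colors_used(best)   # this graph contains a triangle: chromatic number 3
```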
|
cs/0309041
|
Fast Verification of Convexity of Piecewise-linear Surfaces
|
cs.CG cs.CV
|
We show that a realization of a closed connected PL-manifold of dimension n-1
in n-dimensional Euclidean space (n>2) is the boundary of a convex polyhedron
(finite or infinite) if and only if the interior of each (n-3)-face has a
point, which has a neighborhood lying on the boundary of an n-dimensional
convex body. No initial assumptions about the topology or orientability of the
input surface are made. The theorem is derived from a refinement and
generalization of Van Heijenoort's theorem on locally convex manifolds to
spherical spaces. Our convexity criterion for PL-manifolds implies an easy
polynomial-time algorithm for checking convexity of a given PL-surface in
n-dimensional Euclidean or spherical space, n>2. The algorithm is worst case
optimal with respect to both the number of operations and the algebraic degree.
The algorithm works under significantly weaker assumptions and is easier to
implement than convexity verification algorithms suggested by Mehlhorn et al
(1996-1999), and Devillers et al. (1998). A paradigm of approximate convexity
is suggested, and a simplified algorithm of smaller degree and complexity is
proposed for approximate floating-point convexity verification.
|
cs/0309048
|
Goedel Machines: Self-Referential Universal Problem Solvers Making
Provably Optimal Self-Improvements
|
cs.LO cs.AI
|
We present the first class of mathematically rigorous, general, fully
self-referential, self-improving, optimally efficient problem solvers. Inspired
by Kurt Goedel's celebrated self-referential formulas (1931), such a problem
solver rewrites any part of its own code as soon as it has found a proof that
the rewrite is useful, where the problem-dependent utility function and the
hardware and the entire initial code are described by axioms encoded in an
initial proof searcher which is also part of the initial code. The searcher
systematically and efficiently tests computable proof techniques (programs
whose outputs are proofs) until it finds a provably useful, computable
self-rewrite. We show that such a self-rewrite is globally optimal - no local
maxima! - since the code first had to prove that it is not useful to continue
the proof search for alternative self-rewrites. Unlike previous
non-self-referential methods based on hardwired proof searchers, ours not only
boasts an optimal order of complexity but can optimally reduce any slowdowns
hidden by the O()-notation, provided the utility of such speed-ups is provable
at all.
|
cs/0309053
|
A Hierarchical Situation Calculus
|
cs.AI cs.LO
|
A situation calculus is presented that provides a solution to the frame
problem for hierarchical situations, that is, situations that have a modular
structure in which parts of the situation behave in a relatively independent
manner. This situation calculus is given in a relational, functional, and modal
logic form. Each form permits both a single level hierarchy or a multiple level
hierarchy, giving six versions of the formalism in all, and a number of
sub-versions of these. For multiple level hierarchies, it is possible to give
equations between parts of the situation to impose additional structure on the
problem. This approach is compared to others in the literature.
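The modularity idea behind the hierarchical approach can be sketched in a few lines. In this hypothetical illustration (the data layout and action encoding are assumptions, not the paper's formalism), a situation is split into independent modules, so applying an action to one module leaves the others untouched by construction, which is the hierarchical answer to the frame problem.

```python
# Hypothetical sketch: a situation as independent sub-situations (modules).
# An action names the one module it affects; all other modules persist
# unchanged, so no per-fluent frame axioms are needed for them.

def do(action, situation):
    """Apply `action = (module, update_fn)` to one part of the situation."""
    module, update = action
    new = dict(situation)                     # copy the top level
    new[module] = update(situation[module])   # change only the named module
    return new

s0 = {"kitchen": {"light": "off"}, "garage": {"door": "closed"}}
s1 = do(("kitchen", lambda m: {**m, "light": "on"}), s0)
# The garage sub-situation is carried over to s1 unchanged.
```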
|
cs/0310005
|
Using Artificial Intelligence for Model Selection
|
cs.AI q-bio.QM
|
We apply the optimization algorithm Adaptive Simulated Annealing (ASA) to the
problem of analyzing data on a large population and selecting the best model to
predict that an individual with various traits will have a particular disease.
We compare ASA with traditional forward and backward regression on computer
simulated data. We find that the traditional methods of modeling are better for
smaller data sets whereas a numerically stable ASA seems to perform better on
larger and more complicated data sets.
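The model-selection search can be sketched with plain simulated annealing over feature subsets. This is a simplified illustration of the general idea, not the ASA algorithm or the paper's setup: `score` is a hypothetical model-fit criterion (lower is better), and the cooling schedule is an assumption.

```python
import math, random

# Simplified simulated annealing over feature subsets (a sketch, not ASA).
def anneal(score, n_features, steps=2000, t0=1.0, seed=0):
    rng = random.Random(seed)
    mask = [rng.random() < 0.5 for _ in range(n_features)]
    best, best_mask = score(mask), mask[:]
    for k in range(steps):
        t = t0 / (1 + k)                         # cooling schedule
        cand = mask[:]
        cand[rng.randrange(n_features)] ^= True  # flip one feature in/out
        delta = score(cand) - score(mask)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            mask = cand                          # Metropolis acceptance
        if score(mask) < best:
            best, best_mask = score(mask), mask[:]
    return best_mask

# Toy criterion: the "true" model uses exactly features 0 and 2.
true_mask = [True, False, True, False]
score = lambda m: sum(a != b for a, b in zip(m, true_mask))
selected = anneal(score, 4)
```

ASA differs from this sketch mainly in adapting the temperature schedule per parameter, which is what makes it numerically stable on the larger data sets the abstract refers to.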
|
cs/0310006
|
The Lowell Database Research Self Assessment
|
cs.DB
|
A group of senior database researchers gathers every few years to assess the
state of database research and to point out problem areas that deserve
additional focus. This report summarizes the discussion and conclusions of the
sixth ad-hoc meeting held May 4-6, 2003 in Lowell, Mass. It observes that
information management continues to be a critical component of most complex
software systems. It recommends that database researchers increase focus on:
integration of text, data, code, and streams; fusion of information from
heterogeneous data sources; reasoning about uncertain data; unsupervised data
mining for interesting correlations; information privacy; and self-adaptation
and repair.
|
cs/0310009
|
On Interference of Signals and Generalization in Feedforward Neural
Networks
|
cs.NE
|
This paper studies how the generalization ability of neurons can be affected
by the mutual processing of different signals, using a feedforward artificial
neural network as the model. Such mutual processing can be a good model of the
patterns in a set generalized by a neural network, and may in effect improve
generalization. The paper discusses how this interference may also cause
highly random generalization, and presents adaptive activation functions as a
way of reducing that type of generalization. A test of a feedforward neural
network is performed that exhibits the discussed random generalization.
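One common form of adaptive activation function is a sigmoid with a trainable per-neuron slope; the sketch below (an illustration of the general idea, not the paper's specific construction) shows how the slope parameter controls how sharply a unit responds, which is one lever for limiting interference between signals.

```python
import math

# Sigmoid with a trainable slope `a`: larger `a` gives a sharper,
# near-binary response; smaller `a` gives a gentler transition.
def adaptive_sigmoid(x, a=1.0):
    return 1.0 / (1.0 + math.exp(-a * x))

# The same pre-activation seen through two different slopes:
soft = adaptive_sigmoid(0.5, a=1.0)    # gentle transition
sharp = adaptive_sigmoid(0.5, a=10.0)  # near-saturated response
```

During training, `a` would be updated by gradient descent alongside the weights, letting each neuron tune how decisively it separates the signals it processes.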
|
cs/0310010
|
Transient Diversity in Multi-Agent Systems
|
cs.AI cs.MA
|
Diversity is an important aspect of highly efficient multi-agent teams. We
introduce the main factors that drive a multi-agent system in either direction
along the diversity scale. A metric for diversity is described, and we
speculate on the concept of transient diversity. Finally, an experiment on
social entropy using a RoboCup simulated soccer team is presented.
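A social-entropy style diversity metric can be computed by grouping agents into behavioral classes and taking the Shannon entropy of the grouping. The sketch below illustrates the idea; the soccer-role team assignments are hypothetical, not data from the experiment.

```python
import math
from collections import Counter

# Social entropy of a team: Shannon entropy over behavioral classes.
def social_entropy(behaviors):
    counts = Counter(behaviors)
    n = len(behaviors)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

homogeneous = ["defender"] * 4                                # one class
diverse = ["defender", "defender", "midfielder", "striker"]   # three classes
h0 = social_entropy(homogeneous)  # 0.0 bits: no diversity
h1 = social_entropy(diverse)      # 1.5 bits
```

Tracking this quantity over training episodes is one way to observe the transient diversity the abstract speculates about: the team's entropy can rise and fall before settling.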
|