| id | title | categories | abstract |
|---|---|---|---|
cs/0202016
|
Linear Programming helps solving large multi-unit combinatorial auctions
|
cs.GT cs.AI
|
Previous works suggested the use of Branch and Bound techniques for finding
the optimal allocation in (multi-unit) combinatorial auctions. They remarked
that Linear Programming could provide a good upper-bound to the optimal
allocation, but they went on to use lighter and less tight upper-bound
heuristics, on the grounds that LP was too time-consuming to be used
repetitively to solve large combinatorial auctions. We present the results of
extensive experiments solving large (multi-unit) combinatorial auctions
generated according to distributions proposed by different researchers. Our
surprising conclusion is that Linear Programming is worth using. Investing
almost all of one's computing time in using LP to bound from above the value of
the optimal solution in order to prune aggressively pays off. We present a way
to save on the number of calls to the LP routine and experimental results
comparing different heuristics for choosing the bid to be considered next.
Those results show that the ordering based on the square root of the size of
the bids that was shown to be theoretically optimal in a previous paper by the
authors performs, surprisingly, better than the others in practice. Dealing
first with the bid that has the largest coefficient (typically 1) in the
optimal solution of the relaxed LP problem is also a good choice. The gap
between the
lower bound provided by greedy heuristics and the upper bound provided by LP is
typically small and pruning is therefore extensive. For most distributions,
auctions of a few hundred goods among a few thousand bids can be solved in
practice. All experiments were run on a PC under Matlab.
|
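The pruning scheme described above is easy to illustrate. The LP relaxation of the winner-determination problem always lies at or above the best integral allocation, so any branch whose LP bound falls below the current incumbent can be discarded. The following sketch is purely illustrative (a made-up three-good, single-unit instance, solved with SciPy rather than the authors' Matlab setup):

```python
# Illustrative sketch (not the authors' code): LP relaxation of a toy
# single-unit winner-determination problem, usable as a branch-and-bound
# upper bound. Bids and prices are hypothetical.
from scipy.optimize import linprog

bids = [({0, 1}, 2.0), ({1, 2}, 2.0), ({0, 2}, 2.0)]  # (goods wanted, price)
n_goods = 3

# Each good can be sold at most once: sum of x_j over bids containing it <= 1.
A_ub = [[1.0 if g in goods else 0.0 for goods, _ in bids]
        for g in range(n_goods)]
b_ub = [1.0] * n_goods
c = [-price for _, price in bids]  # linprog minimizes, so negate revenue

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, 1.0)] * len(bids))
upper_bound = -res.fun  # 3.0 here, while the best integral allocation is 2.0
```

In a branch-and-bound search this LP would be re-solved at each node with some bid variables fixed to 0 or 1, which is exactly the repeated LP cost the abstract argues is worth paying.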
cs/0202018
|
Nonmonotonic Logics and Semantics
|
cs.AI cs.LO math.LO
|
Tarski gave a general semantics for deductive reasoning: a formula a may be
deduced from a set A of formulas iff a holds in all models in which each of the
elements of A holds. A more liberal semantics has been considered: a formula a
may be deduced from a set A of formulas iff a holds in all of the "preferred"
models in which all the elements of A hold. Shoham proposed that the notion of
"preferred" models be defined by a partial ordering on the models of the
underlying language. A more general semantics is described in this paper, based
on a set of natural properties of choice functions. This semantics is here
shown to be equivalent to a semantics based on comparing the relative
"importance" of sets of models, by what amounts to a qualitative probability
measure. The consequence operations defined by the equivalent semantics are
then characterized by a weakening of Tarski's properties in which the
monotonicity requirement is replaced by three weaker conditions. Classical
propositional connectives are characterized by natural introduction-elimination
rules in a nonmonotonic setting. Even in the nonmonotonic setting, one obtains
classical propositional logic, thus showing that monotonicity is not required
to justify classical propositional connectives.
|
cs/0202019
|
Hypernets -- Good (G)news for Gnutella
|
cs.PF cs.DC cs.IR cs.NI
|
Criticism of Gnutella network scalability has rested on the bandwidth
attributes of the original interconnection topology: a Cayley tree. Trees, in
general, are known to have lower aggregate bandwidth than higher-dimensional
topologies, e.g., hypercubes, meshes, and tori. Gnutella was intended to support
thousands to millions of peers. Studies of interconnection topologies in the
literature, however, have focused on hardware implementations which are limited
by cost to a few thousand nodes. Since the Gnutella network is virtual,
hyper-topologies are relatively unfettered by such constraints. We present
performance models for several plausible hyper-topologies and compare their
query throughput up to millions of peers. The virtual hypercube and the virtual
hypertorus are shown to offer near linear scalability subject to the number of
peer TCP/IP connections that can be simultaneously kept open.
|
cs/0202020
|
The Mysterious Optimality of Naive Bayes: Estimation of the Probability
in the System of "Classifiers"
|
cs.CV cs.AI
|
Bayes classifiers are currently widely used for recognition, identification,
and knowledge discovery. Their fields of application include, for example,
image processing, medicine, and chemistry (QSAR). Yet, mysteriously, the Naive
Bayes classifier usually gives very good recognition results, and it cannot be
improved considerably by more complex Bayes classifier models. We demonstrate
here a simple proof of the optimality of the Naive Bayes classifier that can
explain this interesting fact. The derivation in the current paper is based on
arXiv:cs/0202020v1.
|
cs/0202021
|
Nonmonotonic Reasoning, Preferential Models and Cumulative Logics
|
cs.AI
|
Many systems that exhibit nonmonotonic behavior have been described and
studied already in the literature. The general notion of nonmonotonic
reasoning, though, has almost always been described only negatively, by the
property it does not enjoy, i.e. monotonicity. We study here general patterns
of nonmonotonic reasoning and try to isolate properties that could help us map
the field of nonmonotonic reasoning by reference to positive properties. We
concentrate on a number of families of nonmonotonic consequence relations,
defined in the style of Gentzen. Both proof-theoretic and semantic points of
view are developed in parallel. The former point of view was pioneered by D.
Gabbay, while the latter has been advocated by Y. Shoham. Five such families
are defined and characterized by representation theorems, relating the two
points of view. One of the families of interest, that of preferential
relations, turns out to have been studied by E. Adams. The "preferential"
models proposed here are a much stronger tool than Adams' probabilistic
semantics. The basic language used in this paper is that of propositional
logic. The extension of our results to first order predicate calculi and the
study of the computational complexity of the decision problems described in
this paper will be treated in another paper.
|
cs/0202022
|
What does a conditional knowledge base entail?
|
cs.AI
|
This paper presents a logical approach to nonmonotonic reasoning based on the
notion of a nonmonotonic consequence relation. A conditional knowledge base,
consisting of a set of conditional assertions of the type "if ... then ...",
represents the explicit defeasible knowledge an agent has about the way the
world generally behaves. We look for a plausible definition of the set of all
conditional assertions entailed by a conditional knowledge base. In a previous
paper, S. Kraus and the authors defined and studied "preferential" consequence
relations. They noticed that not all preferential relations could be considered
as reasonable inference procedures. This paper studies a more restricted class
of consequence relations, "rational" relations. It is argued that any
reasonable nonmonotonic inference procedure should define a rational relation.
It is shown that the rational relations are exactly those that may be
represented by a "ranked" preferential model, or by a (non-standard)
probabilistic model. The rational closure of a conditional knowledge base is
defined and shown to provide an attractive answer to the question of the title.
Global properties of this closure operation are proved: it is a cumulative
operation. It is also computationally tractable. This paper assumes the
underlying language is propositional.
|
cs/0202024
|
A note on Darwiche and Pearl
|
cs.AI
|
It is shown that Darwiche and Pearl's postulates imply an interesting
property, not noticed by the authors.
|
cs/0202025
|
Distance Semantics for Belief Revision
|
cs.AI
|
A vast and interesting family of natural semantics for belief revision is
defined. Suppose one is given a distance d between any two models. One may then
define the revision of a theory K by a formula a as the theory defined by the
set of all those models of a that are closest, by d, to the set of models of K.
This family is characterized by a set of rationality postulates that extends
the AGM postulates. The new postulates describe properties of iterated
revisions.
|
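One concrete member of this family is Dalal's revision operator, which takes d to be the Hamming distance between propositional valuations. The sketch below (models encoded as 0/1 tuples; the encoding is illustrative, not from the paper) keeps exactly those models of a at minimum distance to the models of K:

```python
# Toy distance-based revision with Hamming distance (Dalal's operator),
# brute-forcing over all valuations of n_atoms propositional atoms.
from itertools import product

def revise(models_K, holds_a, n_atoms):
    """Models of a that are closest, in Hamming distance, to the models of K."""
    models_a = [m for m in product((0, 1), repeat=n_atoms) if holds_a(m)]
    def dist(m):
        return min(sum(x != y for x, y in zip(m, k)) for k in models_K)
    best = min(dist(m) for m in models_a)
    return [m for m in models_a if dist(m) == best]

# Revising K = {p & q} by a = not-p retains q: the closest model is (0, 1).
result = revise([(1, 1)], lambda m: m[0] == 0, 2)
```

The iterated-revision postulates mentioned in the abstract then constrain how successive calls to such an operator interact.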
cs/0202026
|
Preferred History Semantics for Iterated Updates
|
cs.AI
|
We give a semantics to iterated update by a preference relation on possible
developments. An iterated update is a sequence of formulas, giving (incomplete)
information about successive states of the world. A development is a sequence
of models, describing a possible trajectory through time. We assume a principle
of inertia and prefer those developments that are compatible with the
information and avoid unnecessary changes. The logical properties of the
updates defined in this way are considered, and a representation result is
proved.
|
cs/0202027
|
BSML: A Binding Schema Markup Language for Data Interchange in Problem
Solving Environments (PSEs)
|
cs.CE cs.SE
|
We describe a binding schema markup language (BSML) for describing data
interchange between scientific codes. Such a facility is an important
constituent of scientific problem solving environments (PSEs). BSML is designed
to integrate with a PSE or application composition system that views model
specification and execution as a problem of managing semistructured data. The
data interchange problem is addressed by three techniques for processing
semistructured data: validation, binding, and conversion. We present BSML and
describe its application to a PSE for wireless communications system design.
|
cs/0202030
|
Generalized Qualitative Probability: Savage revisited
|
cs.GT cs.AI
|
Preferences among acts are analyzed in the style of L. Savage, but as
partially ordered. The rationality postulates considered are weaker than
Savage's on three counts. The Sure Thing Principle is derived in this setting.
The postulates are shown to lead to a characterization of generalized
qualitative probability that includes and blends both traditional qualitative
probability and the ranked structures used in logical approaches.
|
cs/0202031
|
Nonmonotonic inference operations
|
cs.AI
|
A. Tarski proposed the study of infinitary consequence operations as the
central topic of mathematical logic. He considered monotonicity to be a
property of all such operations. In this paper, we weaken the monotonicity
requirement and consider more general operations, inference operations. These
operations describe the nonmonotonic logics both humans and machines seem to be
using when inferring defeasible information from incomplete knowledge. We single
out a number of interesting families of inference operations. This study of
infinitary inference operations is inspired by the results of Kraus, Lehmann
and Magidor on finitary nonmonotonic operations, but this paper is
self-contained.
|
cs/0202032
|
Optimal Solutions for Multi-Unit Combinatorial Auctions: Branch and
Bound Heuristics
|
cs.GT cs.AI
|
Finding optimal solutions for multi-unit combinatorial auctions is a hard
problem and finding approximations to the optimal solution is also hard. We
investigate the use of Branch-and-Bound techniques: they require both a way to
bound from above the value of the best allocation and a good criterion to
decide which bids are to be tried first. Different methods for efficiently
bounding from above the value of the best allocation are considered.
Original theoretical results characterize the best approximation ratio and the
ordering criterion that achieves it. We suggest using this criterion.
|
cs/0202033
|
The logical meaning of Expansion
|
cs.AI
|
The Expansion property considered by researchers in Social Choice is shown to
correspond to a logical property of nonmonotonic consequence relations that is
the {\em pure}, i.e., not involving connectives, version of a previously known
weak rationality condition. The assumption that the union of two definable sets
of models is definable is needed for the soundness part of the result.
|
cs/0202034
|
Covariance Plasticity and Regulated Criticality
|
cs.NE cs.AI nlin.AO q-bio
|
We propose that a regulation mechanism based on Hebbian covariance plasticity
may cause the brain to operate near criticality. We analyze the effect of such
a regulation on the dynamics of a network with excitatory and inhibitory
neurons and uniform connectivity within and across the two populations. We show
that, under broad conditions, the system converges to a critical state lying at
the common boundary of three regions in parameter space; these correspond to
three modes of behavior: high activity, low activity, oscillation.
|
cs/0202035
|
Sprinkling Selections over Join DAGs for Efficient Query Optimization
|
cs.DB
|
In optimizing queries, solutions based on the AND/OR DAG can generate all
possible join orderings and selection placements before searching for the
optimal query execution strategy. But as the numbers of joins and selection
conditions increase, the space and time complexity of generating the optimal
query plan grows exponentially. In this paper, we use the join graph of a
relational database schema either to pre-compute all possible executable join
orderings and store them as a join DAG, or to extract the joins in the queries
and incrementally build a history join DAG as the queries are executed. The
selection conditions in the queries are appropriately placed in the retrieved
join DAG (or history join DAG) to generate the optimal query execution
strategy. We experimentally evaluate our query optimization technique on
TPC-D/H query sets to show its effectiveness over the AND/OR DAG query
optimization strategy.
Finally, we illustrate how our technique can be used for efficient multiple
query optimization and selection of materialized views in data warehousing
environments.
|
cs/0202037
|
Towards practical meta-querying
|
cs.DB
|
We describe a meta-querying system for databases containing queries in
addition to ordinary data. In the context of such databases, a meta-query is a
query about queries. Representing stored queries in XML, and using the standard
XML manipulation language XSLT as a sublanguage, we show that just a few
features need to be added to SQL to turn it into a fully-fledged meta-query
language. The good news is that these features can be directly supported by
extensible database technology.
|
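The core idea of meta-querying, that stored queries are themselves data to be queried, can be illustrated with plain SQL. The sketch below is a crude string-matching stand-in for the paper's much richer XML/XSLT representation, and all table and query names are made up:

```python
# Toy meta-querying sketch: store queries in a table, then query them.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stored_queries (name TEXT, body TEXT)")
conn.executemany(
    "INSERT INTO stored_queries VALUES (?, ?)",
    [("high_x", "SELECT * FROM t WHERE x > 5"),
     ("all_u", "SELECT * FROM u")],
)

# A meta-query: which stored queries refer to table t?
hits = conn.execute(
    "SELECT name FROM stored_queries WHERE body LIKE '%FROM t %'"
    " OR body LIKE '%FROM t'"
).fetchall()
```

The paper's point is that a principled version of this, with queries represented structurally in XML and manipulated via XSLT, needs only a few extensions to SQL.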
cs/0202038
|
The efficient generation of unstructured control volumes in 2D and 3D
|
cs.CG cs.CE cs.NA math.NA physics.comp-ph
|
Many problems in engineering, chemistry and physics require the
representation of solutions in complex geometries. In this paper we deal with
the problem of unstructured mesh generation for the control volume method. We
propose an algorithm based on generating spheres at the central points of the
control volumes.
|
cs/0203002
|
Another perspective on Default Reasoning
|
cs.AI
|
The lexicographic closure of any given finite set D of normal defaults is
defined. A conditional assertion "if a then b" is in this lexicographic closure
if, given the defaults D and the fact a, one would conclude b. The
lexicographic closure is essentially a rational extension of D, and of its
rational closure, defined in a previous paper. It provides a logic of normal
defaults that is different from the one proposed by R. Reiter and that is rich
enough not to require the consideration of non-normal defaults. A large number
of examples are provided to show that the lexicographic closure corresponds to
the basic intuitions behind Reiter's logic of defaults.
|
cs/0203003
|
Deductive Nonmonotonic Inference Operations: Antitonic Representations
|
cs.AI
|
We provide a characterization of those nonmonotonic inference operations C
for which C(X) may be described as the set of all logical consequences of X
together with some set of additional assumptions S(X) that depends
anti-monotonically on X (i.e., X is a subset of Y implies that S(Y) is a subset
of S(X)). The operations represented are exactly characterized in terms of
properties, most of which have been studied in Freund-Lehmann (cs.AI/0202031).
Similar characterizations of right-absorbing and cumulative operations are also
provided. For cumulative operations, our results fit in closely with those of
Freund. We then discuss extending finitary operations to infinitary operations
in a canonical way and discuss co-compactness properties. Our results provide a
satisfactory notion of pseudo-compactness, generalizing to deductive
nonmonotonic operations the notion of compactness for monotonic operations.
They also provide an alternative, more elegant and more general, proof of the
existence of an infinitary deductive extension for any finitary deductive
operation (Theorem 7.9 of Freund-Lehmann).
|
cs/0203004
|
Stereotypical Reasoning: Logical Properties
|
cs.AI
|
Stereotypical reasoning assumes that the situation at hand is one of a kind
and that it enjoys the properties generally associated with that kind of
situation. It is one of the most basic forms of nonmonotonic reasoning. A
formal model for stereotypical reasoning is proposed and the logical properties
of this form of reasoning are studied. Stereotypical reasoning is shown to be
cumulative under weak assumptions.
|
cs/0203005
|
A Framework for Compiling Preferences in Logic Programs
|
cs.AI
|
We introduce a methodology and framework for expressing general preference
information in logic programming under the answer set semantics. An ordered
logic program is an extended logic program in which rules are named by unique
terms, and in which preferences among rules are given by a set of atoms of form
s < t where s and t are names. An ordered logic program is transformed into a
second, regular, extended logic program wherein the preferences are respected,
in that the answer sets obtained in the transformed program correspond with the
preferred answer sets of the original program. Our approach allows the
specification of dynamic orderings, in which preferences can appear arbitrarily
within a program. Static orderings (in which preferences are external to a
logic program) are a trivial restriction of the general dynamic case. First, we
develop a specific approach to reasoning with preferences, wherein the
preference ordering specifies the order in which rules are to be applied. We
then demonstrate the wide range of applicability of our framework by showing
how other approaches, among them that of Brewka and Eiter, can be captured
within our framework. Since the result of each of these transformations is an
extended logic program, we can make use of existing implementations, such as
dlv and smodels. To this end, we have developed a publicly available compiler
as a front-end for these programming systems.
|
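The first approach mentioned above, where the preference ordering fixes the order in which rules are applied, can be caricatured in a few lines. This toy forward-chainer only sketches the intuition (it is not the paper's transformation into extended logic programs under answer set semantics), and the penguin example is made up:

```python
def apply_ordered(rules, facts):
    """Fire rules in preference order (most preferred first); a rule fires
    only if its positive body holds and none of its 'not'-literals has
    already been derived."""
    derived = set(facts)
    progress = True
    while progress:
        progress = False
        for head, pos, neg in rules:
            if pos <= derived and not (neg & derived) and head not in derived:
                derived.add(head)
                progress = True
                break  # restart so more-preferred rules get first chance
    return derived

# r1 (preferred): -flies <- penguin, not flies.
# r2:              flies <- bird,    not -flies.
rules = [("-flies", {"penguin"}, {"flies"}),
         ("flies", {"bird"}, {"-flies"})]
out = apply_ordered(rules, {"bird", "penguin"})
```

Because r1 is preferred, `-flies` is derived first and then blocks r2, so `flies` is never concluded; reversing the order of `rules` would give the opposite answer.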
cs/0203007
|
Two results for prioritized logic programming
|
cs.AI
|
Prioritized default reasoning has demonstrated rich expressiveness and
flexibility in knowledge representation and reasoning. However, many important
aspects of prioritized default reasoning have yet to be thoroughly explored. In
this paper, we investigate two properties of prioritized logic programs in the
context of answer set semantics. Specifically, we reveal a close relationship
between mutual defeasibility and uniqueness of the answer set for a prioritized
logic program. We then explore how the splitting technique for extended logic
programs can be extended to prioritized logic programs. We prove splitting
theorems that can be used to simplify the evaluation of a prioritized logic
program under certain conditions.
|
cs/0203010
|
On Learning by Exchanging Advice
|
cs.LG cs.MA
|
One of the main questions concerning learning in Multi-Agent Systems is:
(How) can agents benefit from mutual interaction during the learning process?
This paper describes the study of an interactive advice-exchange mechanism as a
possible way to improve agents' learning performance. The advice-exchange
technique, discussed here, uses supervised learning (backpropagation), where
reinforcement is not directly coming from the environment but is based on
advice given by peers with better performance scores (higher confidence), to
enhance the performance of a heterogeneous group of Learning Agents (LAs). The
LAs face similar problems in an environment where only reinforcement
information is available. Each LA applies a different, well-known learning
technique: Random Walk (hill-climbing), Simulated Annealing, Evolutionary
Algorithms and Q-Learning. The problem used for evaluation is a simplified
traffic-control simulation. Initial results indicate that advice-exchange can
improve learning speed, although bad advice and/or blind reliance can disturb
the learning performance.
|
cs/0203011
|
Capturing Knowledge of User Preferences: ontologies on recommender
systems
|
cs.LG cs.MA
|
Tools for filtering the World Wide Web exist, but they are hampered by the
difficulty of capturing user preferences in such a dynamic environment. We
explore the acquisition of user profiles by unobtrusive monitoring of browsing
behaviour and application of supervised machine-learning techniques coupled
with an ontological representation to extract user preferences. A multi-class
approach to paper classification is used, allowing the paper topic taxonomy to
be utilised during profile construction. The Quickstep recommender system is
presented and two empirical studies evaluate it in a real work setting,
measuring the effectiveness of using a hierarchical topic ontology compared
with an extendable flat list.
|
cs/0203012
|
Interface agents: A review of the field
|
cs.MA cs.LG
|
This paper reviews the origins of interface agents, discusses challenges that
exist within the interface agent field and presents a survey of current
attempts to find solutions to these challenges. A history of agent systems from
their birth in the 1960s to the current day is described, along with the
issues they try to address. A taxonomy of interface agent systems is presented,
and today's agent systems are categorized accordingly. Lastly, an analysis of the
machine learning and user modelling techniques used by today's agents is
presented.
|
cs/0203013
|
Representing and Aggregating Conflicting Beliefs
|
cs.AI cs.LO
|
We consider the two-fold problem of representing collective beliefs and
aggregating these beliefs. We propose modular, transitive relations for
collective beliefs. They allow us to represent conflicting opinions and they
have a clear semantics. We compare them with the quasi-transitive relations
often used in Social Choice. Then, we describe a way to construct the belief
state of an agent informed by a set of sources of varying degrees of
reliability. This construction circumvents Arrow's Impossibility Theorem in a
satisfactory manner. Finally, we give a simple set-theory-based operator for
combining the information of multiple agents. We show that this operator
satisfies the desirable invariants of idempotence, commutativity, and
associativity, and, thus, is well-behaved when iterated, and we describe a
computationally effective way of computing the resulting belief state.
|
cs/0203021
|
NetNeg: A Connectionist-Agent Integrated System for Representing Musical
Knowledge
|
cs.AI cs.MA
|
The system presented here shows the feasibility of modeling the knowledge
involved in a complex musical activity by integrating sub-symbolic and symbolic
processes. This research focuses on the question of whether there is any
advantage in integrating a neural network together with a distributed
artificial intelligence approach within the music domain. The primary purpose
of our work is to design a model that describes the different aspects a user
might be interested in considering when involved in a musical activity. The
approach we suggest in this work enables the musician to encode his knowledge,
intuitions, and aesthetic taste into different modules. The system captures
these aspects by computing and applying three distinct functions: rules, fuzzy
concepts, and learning.
As a case study, we began experimenting with first species two-part
counterpoint melodies. We have developed a hybrid system composed of a
connectionist module and an agent-based module to combine the sub-symbolic and
symbolic levels to achieve this task. The technique presented here to represent
musical knowledge constitutes a new approach for composing polyphonic music.
|
cs/0203023
|
Agent trade servers in financial exchange systems
|
cs.CE
|
New services based on the best-effort paradigm could complement the current
deterministic services of an electronic financial exchange. Four crucial
aspects of such systems would benefit from a hybrid stance: proper use of
processing resources, bandwidth management, fault tolerance, and exception
handling. We argue that a more refined view on Quality-of-Service control for
exchange systems, in which the principal ambition of upholding a fair and
orderly marketplace is left uncompromised, would benefit all interested
parties.
|
cs/0203024
|
The structure of broad topics on the Web
|
cs.IR cs.DL
|
The Web graph is a giant social network whose properties have been measured
and modeled extensively in recent years. Most such studies concentrate on the
graph structure alone, and do not consider textual properties of the nodes.
Consequently, Web communities have been characterized purely in terms of graph
structure and not on page content. We propose that a topic taxonomy such as
Yahoo! or the Open Directory provides a useful framework for understanding the
structure of content-based clusters and communities. In particular, using a
topic taxonomy and an automatic classifier, we can measure the background
distribution of broad topics on the Web, and analyze the capability of recent
random walk algorithms to draw samples which follow such distributions. In
addition, we can measure the probability that a page about one broad topic will
link to another broad topic. Extending this experiment, we can measure how
quickly topic context is lost while walking randomly on the Web graph.
Estimates of this topic mixing distance may explain why a global PageRank is
still meaningful in the context of broad queries. In general, our measurements
may prove valuable in the design of community-specific crawlers and link-based
ranking systems.
|
cs/0203027
|
The Algorithms of Updating Sequential Patterns
|
cs.DB cs.AI
|
Because the data mined in a temporal database evolves over time, many
researchers have focused on the incremental mining of frequent sequences in
temporal databases. In this paper, we propose an algorithm called IUS, using
the frequent and negative border sequences in the original database for
incremental sequence mining. To deal with the case where some data need to be
updated from the original database, we present an algorithm called DUS to
maintain sequential patterns in the updated database. We also define the
negative border sequence threshold: Min_nbd_supp to control the number of
sequences in the negative border.
|
cs/0203028
|
When to Update the sequential patterns of stream data?
|
cs.DB cs.AI
|
In this paper, we first define a difference measure between the old and new
sequential patterns of stream data, which is proved to be a distance. Then we
propose an experimental method, called TPD (Tradeoff between Performance and
Difference), to decide when to update the sequential patterns of stream data by
making a tradeoff between the performance of incremental updating algorithms
and the difference between sequential patterns. Experiments with the
incremental updating algorithm IUS on two data sets show that, generally, as
the size of the incremental window grows, the value of the speedup decreases
and the value of the difference increases. It is also shown experimentally
that the incremental ratio determined by the TPD method does not monotonically
increase or decrease but varies in a range between 20 and 30 percent for the
IUS algorithm.
|
cs/0204001
|
A steady state model for graph power laws
|
cs.DM cond-mat.dis-nn cs.SI
|
Power law distribution seems to be an important characteristic of web graphs.
Several existing web graph models generate power law graphs by adding new
vertices and non-uniform edge connectivities to existing graphs. Researchers
have conjectured that preferential connectivity and incremental growth are both
required for the power law distribution. In this paper, we propose a different
web graph model with power law distribution that does not require incremental
growth. We also provide a comparison of our model with several others in their
ability to predict web graph clustering behavior.
|
cs/0204003
|
Blind Normalization of Speech From Different Channels and Speakers
|
cs.CL
|
This paper describes representations of time-dependent signals that are
invariant under any invertible time-independent transformation of the signal
time series. Such a representation is created by rescaling the signal in a
non-linear dynamic manner that is determined by recently encountered signal
levels. This technique may make it possible to normalize signals that are
related by channel-dependent and speaker-dependent transformations, without
having to characterize the form of the signal transformations, which remain
unknown. The technique is illustrated by applying it to the time-dependent
spectra of speech that has been filtered to simulate the effects of different
channels. The experimental results show that the rescaled speech
representations are largely normalized (i.e., channel-independent), despite the
channel-dependence of the raw (unrescaled) speech.
|
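A much weaker cousin of this idea is easy to write down: replacing each sample by its rank among recently encountered signal levels yields a representation invariant under any monotone increasing transform of the signal. The paper's construction targets general invertible transforms, so the sketch below is only illustrative of the "rescaling by recent levels" ingredient:

```python
def rank_rescale(signal, window=50):
    """Replace each sample by its quantile among the last `window` samples."""
    out = []
    for i, x in enumerate(signal):
        history = signal[max(0, i - window):i + 1]
        out.append(sum(h <= x for h in history) / len(history))
    return out

# A monotone channel y = 2x + 3 leaves the rescaled representation unchanged.
s = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]
assert rank_rescale(s) == rank_rescale([2 * x + 3 for x in s])
```

Because only order relations among recent samples are used, any channel that preserves that order disappears from the representation, which is the flavor of channel-independence the experiments above measure.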
cs/0204004
|
Models and Tools for Collaborative Annotation
|
cs.CL cs.SD
|
The Annotation Graph Toolkit (AGTK) is a collection of software which
facilitates development of linguistic annotation tools. AGTK provides a
database interface which allows applications to use a database server for
persistent storage. This paper discusses various modes of collaborative
annotation and how they can be supported with tools built using AGTK and its
database interface. We describe the relational database schema and API, and
describe a version of the TableTrans tool which supports collaborative
annotation. The remainder of the paper discusses a high-level query language
for annotation graphs, along with optimizations, in support of expressive and
efficient access to the annotations held on a large central server. The paper
demonstrates that it is straightforward to support a variety of different
levels of collaborative annotation with existing AGTK-based tools, with a
minimum of additional programming effort.
|
cs/0204005
|
Creating Annotation Tools with the Annotation Graph Toolkit
|
cs.CL cs.SD
|
The Annotation Graph Toolkit is a collection of software supporting the
development of annotation tools based on the annotation graph model. The
toolkit includes application programming interfaces for manipulating annotation
graph data and for importing data from other formats. There are interfaces for
the scripting languages Tcl and Python, a database interface, specialized
graphical user interfaces for a variety of annotation tasks, and several sample
applications. This paper describes all the toolkit components for the benefit
of would-be application developers.
|
cs/0204006
|
TableTrans, MultiTrans, InterTrans and TreeTrans: Diverse Tools Built on
the Annotation Graph Toolkit
|
cs.CL cs.SD
|
Four diverse tools built on the Annotation Graph Toolkit are described. Each
tool associates linguistic codes and structures with time-series data. All are
based on the same software library and tool architecture. TableTrans is for
observational coding, using a spreadsheet whose rows are aligned to a signal.
MultiTrans is for transcribing multi-party communicative interactions recorded
using multi-channel signals. InterTrans is for creating interlinear text
aligned to audio. TreeTrans is for creating and manipulating syntactic trees.
This work demonstrates that the development of diverse tools and re-use of
software components is greatly facilitated by a common high-level application
programming interface for representing the data and managing input/output,
together with a common architecture for managing the interaction of multiple
components.
|
cs/0204007
|
An Integrated Framework for Treebanks and Multilayer Annotations
|
cs.CL
|
Treebank formats and associated software tools are proliferating rapidly,
with little consideration for interoperability. We survey a wide variety of
treebank structures and operations, and show how they can be mapped onto the
annotation graph model, leading to an integrated framework encompassing
tree and non-tree annotations alike. This development opens up new
possibilities for managing and exploiting multilayer annotations.
|
cs/0204008
|
The tip-of-the-tongue phenomenon: Irrelevant neural network localization
or disruption of its interneuron links?
|
cs.CL cs.AI q-bio.NC q-bio.QM
|
On the basis of a recently proposed three-stage quantitative neural network
model of the tip-of-the-tongue (TOT) phenomenon, the possibility of TOT states
caused by disruption of the network's interneuron links has been studied. Using
a numerical example, it was found that TOTs caused by interneuron link
disruption are (1.5 ± 0.3)×1000 times less probable than those caused by
irrelevant (incomplete) neural network localization. It was shown that the
etiology of delayed TOT states cannot be related to disruption of interneuron
links.
|
cs/0204010
|
On the Computational Complexity of Consistent Query Answers
|
cs.DB
|
We consider here the problem of obtaining reliable, consistent information
from inconsistent databases -- databases that do not have to satisfy given
integrity constraints. We use the notion of consistent query answer -- a query
answer which is true in every (minimal) repair of the database. We provide a
complete classification of the computational complexity of consistent answers
to first-order queries w.r.t. functional dependencies and denial constraints.
We show how the complexity depends on the {\em type} of the constraints
considered, their {\em number}, and the {\em size} of the query. We obtain
several new PTIME cases, using new algorithms.
|
cs/0204012
|
Exploiting Synergy Between Ontologies and Recommender Systems
|
cs.LG cs.MA
|
Recommender systems learn about user preferences over time, automatically
finding things of similar interest. This reduces the burden of creating
explicit queries. Recommender systems do, however, suffer from cold-start
problems where no initial information is available early on upon which to base
recommendations. Semantic knowledge structures, such as ontologies, can provide
valuable domain knowledge and user information. However, acquiring such
knowledge and keeping it up to date is not a trivial task and user interests
are particularly difficult to acquire and maintain. This paper investigates the
synergy between a web-based research paper recommender system and an ontology
containing information automatically extracted from departmental databases
available on the web. The ontology is used to address the recommender system's
cold-start problem. The recommender system addresses the ontology's
interest-acquisition problem. An empirical evaluation of this approach is
conducted and the performance of the integrated systems measured.
|
cs/0204019
|
Fast Universalization of Investment Strategies with Provably Good
Relative Returns
|
cs.CE cs.DS
|
A universalization of a parameterized investment strategy is an online
algorithm whose average daily performance approaches that of the strategy
operating with the optimal parameters determined offline in hindsight. We
present a general framework for universalizing investment strategies and
discuss conditions under which investment strategies are universalizable. We
present examples of common investment strategies that fit into our framework.
The examples include both trading strategies that decide positions in
individual stocks, and portfolio strategies that allocate wealth among multiple
stocks. This work extends Cover's universal portfolio work. We also discuss the
runtime efficiency of universalization algorithms. While a straightforward
implementation of our algorithms runs in time exponential in the number of
parameters, we show that the efficient universal portfolio computation
technique of Kalai and Vempala involving the sampling of log-concave functions
can be generalized to other classes of investment strategies.
|
cs/0204020
|
Seven Dimensions of Portability for Language Documentation and
Description
|
cs.CL cs.DL
|
The process of documenting and describing the world's languages is undergoing
radical transformation with the rapid uptake of new digital technologies for
capture, storage, annotation and dissemination. However, uncritical adoption of
new tools and technologies is leading to resources that are difficult to reuse
and which are less portable than the conventional printed resources they
replace. We begin by reviewing current uses of software tools and digital
technologies for language documentation and description. This sheds light on
how digital language documentation and description are created and managed,
leading to an analysis of seven portability problems under the following
headings: content, format, discovery, access, citation, preservation and
rights. After characterizing each problem we provide a series of value
statements, and this provides the framework for a broad range of best practice
recommendations.
|
cs/0204022
|
Annotation Graphs and Servers and Multi-Modal Resources: Infrastructure
for Interdisciplinary Education, Research and Development
|
cs.CL
|
Annotation graphs and annotation servers offer infrastructure to support the
analysis of human language resources in the form of time-series data such as
text, audio and video. This paper outlines areas of common need among empirical
linguists and computational linguists. After reviewing examples of data and
tools used or under development for each of several areas, it proposes a common
framework for future tool development, data annotation and resource sharing
based upon annotation graphs and servers.
|
cs/0204023
|
Computational Phonology
|
cs.CL
|
Phonology, as it is practiced, is deeply computational. Phonological analysis
is data-intensive and the resulting models are nothing other than specialized
data structures and algorithms. In the past, phonological computation -
managing data and developing analyses - was done manually with pencil and
paper. Increasingly, with the proliferation of affordable computers, IPA fonts
and drawing software, phonologists are seeking to move their computation work
online. Computational Phonology provides the theoretical and technological
framework for this migration, building on methodologies and tools from
computational linguistics. This piece consists of an apology for computational
phonology, a history, and an overview of current research.
|
cs/0204025
|
Phonology
|
cs.CL
|
Phonology is the systematic study of the sounds used in language, their
internal structure, and their composition into syllables, words and phrases.
Computational phonology is the application of formal and computational
techniques to the representation and processing of phonological information.
This chapter will present the fundamentals of descriptive phonology along with
a brief overview of computational phonology.
|
cs/0204026
|
Querying Databases of Annotated Speech
|
cs.CL cs.DB
|
Annotated speech corpora are databases consisting of signal data along with
time-aligned symbolic `transcriptions'. Such databases are typically
multidimensional, heterogeneous and dynamic. These properties present a number
of tough challenges for representation and query. The temporal nature of the
data adds an additional layer of complexity. This paper presents and harmonises
two independent efforts to model annotated speech databases, one at Macquarie
University and one at the University of Pennsylvania. Various query languages
are described, along with illustrative applications to a variety of analytical
problems. The research reported here forms a part of several ongoing projects
to develop platform-independent open-source tools for creating, browsing,
searching, querying and transforming linguistic databases, and to disseminate
large linguistic databases over the internet.
|
cs/0204027
|
Integrating selectional preferences in WordNet
|
cs.CL
|
Selectional preference learning methods have usually focused on word-to-class
relations, e.g., a verb selects as its subject a given nominal class. This
paper extends previous statistical models to class-to-class preferences, and
presents a model that learns selectional preferences for classes of verbs,
together with an algorithm to integrate the learned preferences in WordNet. The
theoretical motivation is twofold: different senses of a verb may have
different preferences, and classes of verbs may share preferences. On the
practical side, class-to-class selectional preferences can be learned from
untagged corpora (the same as word-to-class), they provide selectional
preferences for less frequent word senses via inheritance, and, more importantly,
they allow for easy integration in WordNet. The model is trained on
subject-verb and object-verb relationships extracted from a small corpus
disambiguated with WordNet senses. Examples are provided illustrating that the
theoretical motivations are well founded, and showing that the approach is
feasible. Experimental results on a word sense disambiguation task are also
provided.
|
cs/0204028
|
Decision Lists for English and Basque
|
cs.CL
|
In this paper we describe the systems we developed for the English (lexical
and all-words) and Basque tasks. They were all supervised systems based on
Yarowsky's Decision Lists. We used Semcor for training in the English all-words
task. We defined different feature sets for each language. For Basque, in order
to extract all the information from the text, we defined features that have not
been used before in the literature, using a morphological analyzer. We also
implemented systems that selected good features automatically and were able to
obtain a preset precision (85%) at the cost of coverage. The systems that
used all the features were identified as BCU-ehu-dlist-all and the systems that
selected some features as BCU-ehu-dlist-best.
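As an illustration of the Yarowsky-style decision lists these systems are based on, here is a generic sketch (not the BCU-ehu code; the smoothing constant `alpha`, the feature representation as string sets, and the assumption of at least two senses are all illustrative choices):

```python
import math
from collections import Counter

def train_decision_list(examples, alpha=0.1):
    """Yarowsky-style decision list: rank (feature, sense) rules by
    smoothed log-likelihood ratio. examples: list of (feature_set, sense).
    Assumes at least two distinct senses (so smoothed p < 1)."""
    feat_sense = Counter()
    feat = Counter()
    senses = set()
    for feats, sense in examples:
        senses.add(sense)
        for f in feats:
            feat_sense[(f, sense)] += 1
            feat[f] += 1
    rules = []
    for f in feat:
        for s in senses:
            # smoothed conditional probability of sense s given feature f
            p = (feat_sense[(f, s)] + alpha) / (feat[f] + alpha * len(senses))
            rules.append((math.log(p / (1 - p)), f, s))
    return sorted(rules, reverse=True)

def classify(rules, feats, default=None):
    """Apply the single strongest matching rule, as decision lists do."""
    for score, f, s in rules:
        if f in feats:
            return s
    return default
```

Classification deliberately uses only the first matching rule: in a decision list the strongest piece of evidence decides, rather than a combination of all features.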
|
cs/0204029
|
The Basque task: did systems perform in the upperbound?
|
cs.CL
|
In this paper we describe the Senseval 2 Basque lexical-sample task. The task
comprised 40 words (15 nouns, 15 verbs and 10 adjectives) selected from Euskal
Hiztegia, the main Basque dictionary. Most examples were taken from the
Egunkaria newspaper. The method used to hand-tag the examples produced low
inter-tagger agreement (75%) before arbitration. The four competing systems
attained results well above the most frequent baseline and the best system
scored 75% precision at 100% coverage. The paper includes an analysis of the
tagging procedure used, as well as the performance of the competing systems. In
particular, we argue that inter-tagger agreement is not a real upperbound for
the Basque WSD task.
|
cs/0204030
|
Fast Hands-free Writing by Gaze Direction
|
cs.HC cs.AI
|
We describe a method for text entry based on inverse arithmetic coding that
relies on gaze direction and which is faster and more accurate than using an
on-screen keyboard.
These benefits are derived from two innovations: the writing task is matched
to the capabilities of the eye, and a language model is used to make
predictable words and phrases easier to write.
|
cs/0204032
|
Belief Revision and Rational Inference
|
cs.AI
|
The (extended) AGM postulates for belief revision seem to deal with the
revision of a given theory K by an arbitrary formula, but not to constrain the
revisions of two different theories by the same formula. A new postulate is
proposed and compared with other similar postulates that have been proposed in
the literature. The AGM revisions that satisfy this new postulate stand in
one-to-one correspondence with the rational, consistency-preserving relations.
This correspondence is described explicitly. Two viewpoints on iterative
revisions are distinguished and discussed.
|
cs/0204038
|
Technology For Information Engineering (TIE): A New Way of Storing,
Retrieving and Analyzing Information
|
cs.DB cs.IR
|
The theoretical foundations of a new model and paradigm (called TIE) for data
storage and access are introduced. Associations between data elements are
stored in a single Matrix table, which is usually kept entirely in RAM for
quick access. The model ties together a very intuitive "guided" GUI to the
Matrix structure, allowing extremely easy complex searches through the data.
Although it is an "Associative Model" in that it stores the data associations
separately from the data itself, in contrast to other implementations of that
model, TIE guides the user to only the available information, ensuring that
every search is always fruitful. Many diverse applications of the technology
are reviewed.
|
cs/0204040
|
Self-Optimizing and Pareto-Optimal Policies in General Environments
based on Bayes-Mixtures
|
cs.AI cs.LG math.OC math.PR
|
The problem of making sequential decisions in unknown probabilistic
environments is studied. In cycle $t$ action $y_t$ results in perception $x_t$
and reward $r_t$, where all quantities in general may depend on the complete
history. The perception $x_t$ and reward $r_t$ are sampled from the (reactive)
environmental probability distribution $\mu$. This very general setting
includes, but is not limited to, (partial observable, k-th order) Markov
decision processes. Sequential decision theory tells us how to act in order to
maximize the total expected reward, called value, if $\mu$ is known.
Reinforcement learning is usually used if $\mu$ is unknown. In the Bayesian
approach one defines a mixture distribution $\xi$ as a weighted sum of
distributions $\nu\in\M$, where $\M$ is any class of distributions including
the true environment $\mu$. We show that the Bayes-optimal policy $p^\xi$ based
on the mixture $\xi$ is self-optimizing in the sense that the average value
converges asymptotically for all $\mu\in\M$ to the optimal value achieved by
the (infeasible) Bayes-optimal policy $p^\mu$ which knows $\mu$ in advance. We
show that the necessary condition that $\M$ admits self-optimizing policies at
all is also sufficient. No other structural assumptions are made on $\M$. As
an example application, we discuss ergodic Markov decision processes, which
allow for self-optimizing policies. Furthermore, we show that $p^\xi$ is
Pareto-optimal in the sense that there is no other policy yielding higher or
equal value in {\em all} environments $\nu\in\M$ and a strictly higher value in
at least one.
|
cs/0204041
|
Trust Brokerage Systems for the Internet
|
cs.CR cs.GT cs.NE
|
This thesis addresses the problem of providing trusted individuals with
confidential information about other individuals, in particular, granting
access to databases of personal records using the World-Wide Web. It proposes
an access rights management system for distributed databases which aims to
create and implement organisation structures based on the wishes of the owners
and the demands of the users of the databases. The dissertation describes how
current software components could be used to implement this system; it
re-examines the theory of collective choice to develop mechanisms for
generating hierarchies of authorities; it analyses organisational processes for
stability and develops a means of measuring the similarity of their
hierarchies.
|
cs/0204043
|
Learning from Scarce Experience
|
cs.AI cs.LG cs.NE cs.RO
|
Searching the space of policies directly for the optimal policy has been one
popular method for solving partially observable reinforcement learning
problems. Typically, with each change of the target policy, its value is
estimated from the results of following that very policy. This requires a large
number of interactions with the environment as different policies are
considered. We present a family of algorithms based on likelihood ratio
estimation that use data gathered when executing one policy (or collection of
policies) to estimate the value of a different policy. The algorithms combine
estimation and optimization stages. The former utilizes experience to build a
non-parametric representation of an optimized function. The latter performs
optimization on this estimate. We show positive empirical results and provide
the sample complexity bound.
|
cs/0204044
|
Robust Global Localization Using Clustered Particle Filtering
|
cs.RO cs.AI
|
Global mobile robot localization is the problem of determining a robot's pose
in an environment, using sensor data, when the starting position is unknown. A
family of probabilistic algorithms known as Monte Carlo Localization (MCL) is
currently among the most popular methods for solving this problem. MCL
algorithms represent a robot's belief by a set of weighted samples, which
approximate the posterior probability of where the robot is located by using a
Bayesian formulation of the localization problem. This article presents an
extension to the MCL algorithm, which addresses its problems when localizing in
highly symmetrical environments; a situation where MCL is often unable to
correctly track equally probable poses for the robot. The problem arises from
the fact that sample sets in MCL often become impoverished when samples are
generated according to their posterior likelihood. Our approach incorporates
the idea of clusters of samples and modifies the proposal distribution
considering the probability mass of those clusters. Experimental results are
presented that show that this new extension to the MCL algorithm successfully
localizes in symmetric environments where ordinary MCL often fails.
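For context, the base MCL cycle that the clustered extension modifies is a predict-weight-resample loop. Below is a minimal one-dimensional sketch of that base cycle only; the clustering of samples and the modified proposal distribution described in the article are not shown, and all noise parameters are illustrative:

```python
import random
import math

def mcl_step(particles, control, measurement, motion_noise, sense_sigma, rng):
    """One predict-weight-resample cycle of a basic 1-D Monte Carlo
    localization filter (the base algorithm a clustered extension builds on)."""
    # predict: apply the motion model with additive Gaussian noise
    moved = [p + control + rng.gauss(0, motion_noise) for p in particles]
    # weight: Gaussian measurement likelihood for each sample
    w = [math.exp(-0.5 * ((m - measurement) / sense_sigma) ** 2) for m in moved]
    total = sum(w) or 1.0
    w = [x / total for x in w]
    # resample: draw a new set proportionally to the posterior weights
    return rng.choices(moved, weights=w, k=len(moved))
```

The impoverishment problem the article addresses arises in the resampling line: low-weight but still plausible samples (e.g. a symmetric twin pose) tend to die out, which is what cluster-aware proposal distributions are designed to prevent.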
|
cs/0204046
|
Optimal Aggregation Algorithms for Middleware
|
cs.DB cs.DS
|
Let D be a database of N objects where each object has m fields. The objects
are given in m sorted lists (where the ith list is sorted according to the ith
field). Our goal is to find the top k objects according to a monotone
aggregation function t, while minimizing access to the lists. The problem
arises in several contexts. In particular Fagin (JCSS 1999) considered it for
the purpose of aggregating information in a multimedia database system.
We are interested in instance optimality, i.e. that our algorithm will be as
good as any other (correct) algorithm on any instance. We provide and analyze
several instance optimal algorithms for the task, with various access costs and
models.
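A central instance-optimal algorithm in this setting is the Threshold Algorithm (TA). The sketch below assumes every object appears in every list, simulates random access with dictionaries, and uses `sum` as the monotone aggregation function; it is an illustration of the idea, not the paper's analysis:

```python
import heapq

def threshold_algorithm(sorted_lists, k, agg=sum):
    """Top-k objects under a monotone aggregation t over m sorted lists.

    sorted_lists: m lists of (object_id, grade), each sorted by grade
    descending. Assumes every object occurs in every list; random access
    is simulated with per-list dictionaries."""
    m = len(sorted_lists)
    grade = [dict(lst) for lst in sorted_lists]  # random access by object id
    seen = {}  # object_id -> aggregated grade
    for depth in range(max(len(lst) for lst in sorted_lists)):
        last = []  # grade at current depth in each list
        for i in range(m):
            row = sorted_lists[i][min(depth, len(sorted_lists[i]) - 1)]
            last.append(row[1])
            obj = row[0]
            if obj not in seen:  # do random accesses for a newly seen object
                seen[obj] = agg(grade[j][obj] for j in range(m))
        # threshold: best possible grade of any not-yet-seen object
        threshold = agg(last)
        best_k = heapq.nlargest(k, seen.items(), key=lambda kv: kv[1])
        if len(best_k) == k and best_k[-1][1] >= threshold:
            return best_k  # k objects at or above the threshold: stop early
    return heapq.nlargest(k, seen.items(), key=lambda kv: kv[1])
```

The early stop is the heart of the instance-optimality argument: once k seen objects aggregate at or above the threshold computed from the current depth, no unseen object can overtake them, so sequential access can halt.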
|
cs/0204047
|
Sampling Strategies for Mining in Data-Scarce Domains
|
cs.CE cs.AI
|
Data mining has traditionally focused on the task of drawing inferences from
large datasets. However, many scientific and engineering domains, such as fluid
dynamics and aircraft design, are characterized by scarce data, due to the
expense and complexity of associated experiments and simulations. In such
data-scarce domains, it is advantageous to focus the data collection effort on
only those regions deemed most important to support a particular data mining
objective. This paper describes a mechanism that interleaves bottom-up data
mining, to uncover multi-level structures in spatial data, with top-down
sampling, to clarify difficult decisions in the mining process. The mechanism
exploits relevant physical properties, such as continuity, correspondence, and
locality, in a unified framework. This leads to effective mining and sampling
decisions that are explainable in terms of domain knowledge and data
characteristics. This approach is demonstrated in two diverse applications --
mining pockets in spatial data, and qualitative determination of Jordan forms
of matrices.
|
cs/0204049
|
Memory-Based Shallow Parsing
|
cs.CL
|
We present memory-based learning approaches to shallow parsing and apply
these to five tasks: base noun phrase identification, arbitrary base phrase
recognition, clause detection, noun phrase parsing and full parsing. We use
feature selection techniques and system combination methods for improving the
performance of the memory-based learner. Our approach is evaluated on standard
data sets and the results are compared with that of other systems. This reveals
that our approach works well for base phrase identification while its
application towards recognizing embedded structures leaves some room for
improvement.
|
cs/0204051
|
Parrondo Strategies for Artificial Traders
|
cs.CE
|
On markets with receding prices, artificial noise traders may consider
alternatives to buy-and-hold. By simulating variations of the Parrondo
strategy, using real data from the Swedish stock market, we produce first
indications of a buy-low-sell-random Parrondo variation outperforming
buy-and-hold. Subject to our assumptions, buy-low-sell-random also outperforms
the traditional value and trend investor strategies. We measure the success of
the Parrondo variations not only through their performance compared to other
kinds of strategies, but also relative to varying levels of perfect
information, received through messages within a multi-agent system of
artificial traders.
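A toy version of the comparison can be sketched as follows. The prices, the buy-on-a-down-tick rule, and the coin-flip sell rule are simplified assumptions standing in for the paper's Parrondo variations and the Swedish market data:

```python
import random

def simulate(prices, strategy, cash=1.0, rng=None):
    """Run a single-asset strategy over a price series.

    strategy(price, prev_price, holding, rng) -> 'buy' | 'sell' | 'hold'.
    Returns the final portfolio value (cash + shares * last price)."""
    rng = rng or random.Random(0)
    shares = 0.0
    prev = prices[0]
    for p in prices[1:]:
        action = strategy(p, prev, shares > 0, rng)
        if action == 'buy' and cash > 0:
            shares, cash = cash / p, 0.0   # go all-in at price p
        elif action == 'sell' and shares > 0:
            cash, shares = shares * p, 0.0  # liquidate at price p
        prev = p
    return cash + shares * prices[-1]

def buy_and_hold(p, prev, holding, rng):
    return 'buy'  # buys at the first opportunity, then holds forever

def buy_low_sell_random(p, prev, holding, rng):
    """Illustrative Parrondo-style variation: buy on a down-tick,
    sell with probability 1/2 on each subsequent day."""
    if not holding:
        return 'buy' if p < prev else 'hold'
    return 'sell' if rng.random() < 0.5 else 'hold'
```

On a receding price series the buy-and-hold trader simply rides the decline, while the buy-low-sell-random trader repeatedly steps out of the market, which is the qualitative effect the simulations measure.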
|
cs/0204052
|
Required sample size for learning sparse Bayesian networks with many
variables
|
cs.LG math.PR
|
Learning joint probability distributions on n random variables requires
exponential sample size in the generic case. Here we consider the case that a
temporal (or causal) order of the variables is known and that the (unknown)
graph of causal dependencies has bounded in-degree Delta. Then the joint
measure is uniquely determined by the probabilities of all (2 Delta+1)-tuples.
Upper bounds on the sample size required for estimating their probabilities can
be given in terms of the VC-dimension of the set of corresponding cylinder
sets. The sample size grows less than linearly with n.
|
cs/0204053
|
Qualitative Analysis of Correspondence for Experimental Algorithmics
|
cs.AI cs.CE
|
Correspondence identifies relationships among objects via similarities among
their components; it is ubiquitous in the analysis of spatial datasets,
including images, weather maps, and computational simulations. This paper
develops a novel multi-level mechanism for qualitative analysis of
correspondence. Operators leverage domain knowledge to establish
correspondence, evaluate implications for model selection, and leverage
identified weaknesses to focus additional data collection. The utility of the
mechanism is demonstrated in two applications from experimental algorithmics --
matrix spectral portrait analysis and graphical assessment of Jordan forms of
matrices. Results show that the mechanism efficiently samples computational
experiments and successfully uncovers high-level problem properties. It
overcomes noise and data sparsity by leveraging domain knowledge to detect
mutually reinforcing interpretations of spatial data.
|
cs/0204054
|
Navigating the Small World Web by Textual Cues
|
cs.IR cs.NI
|
Can a Web crawler efficiently locate an unknown relevant page? While this
question is receiving much empirical attention due to its considerable
commercial value in the search engine community
[Cho98,Chakrabarti99,Menczer00,Menczer01], theoretical efforts to bound the
performance of focused navigation have only exploited the link structure of the
Web graph, neglecting other features [Kleinberg01,Adamic01,Kim02]. Here I
investigate the connection between linkage and a content-induced topology of
Web pages, suggesting that efficient paths can be discovered by decentralized
navigation algorithms based on textual cues.
|
cs/0204055
|
Intelligent Search of Correlated Alarms for GSM Networks with
Model-based Constraints
|
cs.NI cs.AI
|
In order to control the process of data mining and focus on the things of
interest to us, many kinds of constraints have been added into the algorithms
of data mining. However, discovering correlated alarms in an alarm database
needs deep domain constraints, because correlated alarms greatly depend on the
logical and physical architecture of the network. We therefore use the network
model as the source of constraints (the Scope, Inter-correlated and
Intra-correlated constraints) in our proposed algorithm, called SMC (Search
with Model-based Constraints). The experiments
show that the SMC algorithm with Inter-correlated or Intra-correlated
constraint is about two times faster than the algorithm with no constraints.
|
cs/0204056
|
Trading Agents for Roaming Users
|
cs.CE
|
Some roaming users need services to manipulate autonomous processes. Trading
agents running on agent trade servers are used as a case in point. We present a
solution that provides agent owners with the means to keep up their desktop
environment and to maintain their agent trade server processes, via a
briefcase service.
|
cs/0205006
|
Unsupervised discovery of morphologically related words based on
orthographic and semantic similarity
|
cs.CL
|
We present an algorithm that takes an unannotated corpus as its input, and
returns a ranked list of probable morphologically related pairs as its output.
The algorithm tries to discover morphologically related pairs by looking for
pairs that are both orthographically and semantically similar, where
orthographic similarity is measured in terms of minimum edit distance, and
semantic similarity is measured in terms of mutual information. The procedure
does not rely on a morpheme concatenation model, nor on distributional
properties of word substrings (such as affix frequency). Experiments with
German and English input give encouraging results, both in terms of precision
(proportion of good pairs found at various cutoff points of the ranked list),
and in terms of a qualitative analysis of the types of morphological patterns
discovered by the algorithm.
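The orthographic half of the similarity measure can be sketched as follows. Only the minimum-edit-distance side is shown; the mutual-information estimate of semantic similarity and the way the two scores are combined into the final ranking are omitted, and the `max_dist` cutoff is an illustrative assumption:

```python
def edit_distance(a, b):
    """Levenshtein distance via a single-row dynamic program."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            # dp[j] is the cell above, dp[j-1] the cell left, prev the diagonal
            prev, dp[j] = dp[j], min(dp[j] + 1,          # delete ca
                                     dp[j - 1] + 1,      # insert cb
                                     prev + (ca != cb))  # substitute/match
    return dp[-1]

def candidate_pairs(words, max_dist=2):
    """Rank word pairs by orthographic similarity (smaller distance first),
    keeping only pairs within max_dist edits of each other."""
    pairs = []
    ws = sorted(set(words))
    for i, a in enumerate(ws):
        for b in ws[i + 1:]:
            d = edit_distance(a, b)
            if d <= max_dist:
                pairs.append((d, a, b))
    return sorted(pairs)
```

In the full algorithm a pair such as (walk, walked) would then be promoted or demoted according to the mutual information of the two words in the corpus, which is what separates morphological relatives from accidental look-alikes like (talk, walk).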
|
cs/0205009
|
Mostly-Unsupervised Statistical Segmentation of Japanese Kanji Sequences
|
cs.CL
|
Given the lack of word delimiters in written Japanese, word segmentation is
generally considered a crucial first step in processing Japanese texts. Typical
Japanese segmentation algorithms rely either on a lexicon and syntactic
analysis or on pre-segmented data; but these are labor-intensive, and the
lexico-syntactic techniques are vulnerable to the unknown word problem. In
contrast, we introduce a novel, more robust statistical method utilizing
unsegmented training data. Despite its simplicity, the algorithm yields
performance on long kanji sequences comparable to and sometimes surpassing that
of state-of-the-art morphological analyzers over a variety of error metrics.
The algorithm also outperforms another mostly-unsupervised statistical
algorithm previously proposed for Chinese.
Additionally, we present a two-level annotation scheme for Japanese to
incorporate multiple segmentation granularities, and introduce two novel
evaluation metrics, both based on the notion of a compatible bracket, that can
account for multiple granularities simultaneously.
|
cs/0205013
|
Computing stable models: worst-case performance estimates
|
cs.LO cs.AI
|
We study algorithms for computing stable models of propositional logic
programs and derive estimates on their worst-case performance that are
asymptotically better than the trivial bound of O(m 2^n), where m is the size
of an input program and n is the number of its atoms. For instance, for
programs whose clauses consist of at most two literals (counting the head), we
design an algorithm to compute stable models that works in time O(m\times
1.44225^n). We present similar results for several broader classes of programs,
as well.
|
cs/0205014
|
Ultimate approximations in nonmonotonic knowledge representation systems
|
cs.AI
|
We study fixpoints of operators on lattices. To this end we introduce the
notion of an approximation of an operator. We order approximations by means of
a precision ordering. We show that each lattice operator O has a unique most
precise or ultimate approximation. We demonstrate that fixpoints of this
ultimate approximation provide useful insights into fixpoints of the operator
O.
We apply our theory to logic programming and introduce the ultimate
Kripke-Kleene, well-founded and stable semantics. We show that the ultimate
Kripke-Kleene and well-founded semantics are more precise than their standard
counterparts. We argue that ultimate semantics for logic programming have
attractive epistemological properties and that, while in general they are
computationally more complex than the standard semantics, for many classes of
theories, their complexity is no worse.
|
cs/0205015
|
Instabilities of Robot Motion
|
cs.RO cs.CG math.AT
|
Instabilities of robot motion are caused by topological reasons. In this
paper we find a relation between the topological properties of a configuration
space (the structure of its cohomology algebra) and the character of
instabilities, which are unavoidable in any motion planning algorithm. More
specifically, let $X$ denote the space of all admissible configurations of a
mechanical system. A {\it motion planner} is given by a splitting $X\times X =
F_1\cup F_2\cup ... \cup F_k$ (where $F_1, ..., F_k$ are pairwise disjoint
ENRs, see below) and by continuous maps $s_j: F_j \to PX,$ such that $E\circ
s_j =1_{F_j}$. Here $PX$ denotes the space of all continuous paths in $X$
(admissible motions of the system) and $E: PX\to X\times X$ denotes the map
which assigns to a path the pair of its initial -- end points. Any motion
planner determines an algorithm of motion planning for the system. In this
paper we apply methods of algebraic topology to study the minimal number of
sets $F_j$ in any motion planner in $X$. We also introduce a new notion of {\it
order of instability} of a motion planner; it describes the number of
essentially distinct motions which may occur as a result of small perturbations
of the input data. We find the minimal order of instability that motion
planners on a given configuration space $X$ may have. We study a number of
specific problems: motion of a rigid body in $\R^3$, a robot arm, motion in
$\R^3$ in the presence of obstacles, and others.
|
cs/0205016
|
From Alife Agents to a Kingdom of N Queens
|
cs.AI cs.DS cs.MA
|
This paper presents a new approach to solving N-queen problems, which
involves a model of distributed autonomous agents with artificial life (ALife)
and a method of representing N-queen constraints in an agent environment. The
distributed agents locally interact with their living environment, i.e., a
chessboard, and execute their reactive behaviors by applying their behavioral
rules for randomized motion, least-conflict position searching, and cooperating
with other agents etc. The agent-based N-queen problem solving system evolves
through selection and contest according to the rule of Survival of the Fittest,
in which some agents will die or be eaten if their moving strategies are less
efficient than others. The experimental results have shown that this system is
capable of solving large-scale N-queen problems. This paper also provides a
model of ALife agents for solving general CSPs.
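The least-conflict ingredient of the agents' behavioral rules corresponds to min-conflicts local search. The sketch below shows that ingredient alone, not the full ALife agent model with randomized motion, selection and contest; the step limit and tie-breaking policy are illustrative choices:

```python
import random

def conflicts(cols, row, col):
    """Number of queens attacking square (row, col); one queen per row,
    with cols[r] giving the column of the queen in row r."""
    return sum(1 for r, c in enumerate(cols)
               if r != row and (c == col or abs(c - col) == abs(r - row)))

def min_conflicts_nqueens(n, max_steps=100000, seed=0):
    """Min-conflicts local search for N queens: repeatedly pick a
    conflicted row and move its queen to a least-conflict column."""
    rng = random.Random(seed)
    cols = [rng.randrange(n) for _ in range(n)]
    for _ in range(max_steps):
        conflicted = [r for r in range(n) if conflicts(cols, r, cols[r]) > 0]
        if not conflicted:
            return cols  # no queen is attacked: a solution
        row = rng.choice(conflicted)
        scores = [conflicts(cols, row, c) for c in range(n)]
        best = min(scores)
        # random tie-break among least-conflict columns (may stay put)
        cols[row] = rng.choice([c for c in range(n) if scores[c] == best])
    return None
```

The agent flavor of the paper can be read into this loop: each queen is an agent reacting locally to its environment, and the random tie-breaking plays the role of the agents' randomized motion.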
|
cs/0205017
|
Ellogon: A New Text Engineering Platform
|
cs.CL
|
This paper presents Ellogon, a multi-lingual, cross-platform, general-purpose
text engineering environment. Ellogon was designed in order to aid both
researchers in natural language processing, as well as companies that produce
language engineering systems for the end-user. Ellogon provides a powerful
TIPSTER-based infrastructure for managing, storing and exchanging textual data,
embedding and managing text processing components as well as visualising
textual data and their associated linguistic information. Among its key
features are full Unicode support, an extensive multi-lingual graphical user
interface, its modular architecture and the reduced hardware requirements.
|
cs/0205019
|
Distance function wavelets - Part I: Helmholtz and convection-diffusion
transforms and series
|
cs.CE cs.NA
|
This report aims to present my research updates on distance function wavelets
(DFW) based on the fundamental solutions and the general solutions of the
Helmholtz, modified Helmholtz, and convection-diffusion equations, which
include the isotropic Helmholtz-Fourier (HF) transform and series, the
Helmholtz-Laplace (HL) transform, and the anisotropic convection-diffusion
wavelets and ridgelets. The latter is set to handle discontinuous and track
data problems. The edge effect of the HF series is addressed. Alternative
existence conditions for the DFW transforms are proposed and discussed. To
simplify and streamline the expression of the HF and HL transforms, a new
dimension-dependent function notation is introduced. The HF series is also used
to evaluate the analytical solutions of linear diffusion problems of arbitrary
dimensionality and geometry. A weakness of this report is its lack of rigorous
mathematical analysis, due to the author's limited mathematical knowledge.
|
cs/0205020
|
A quasi-RBF technique for numerical discretization of PDE's
|
cs.CE cs.CG
|
Atkinson developed a strategy which splits the solution of a PDE system into
homogeneous and particular solutions, where the former must satisfy both the
boundary conditions and the governing equation, while the latter need only
satisfy the governing equation, irrespective of geometry. Since the particular
solution can be computed regardless of boundary shape, we can use a readily
available O(NlogN) fast Fourier or orthogonal polynomial technique to evaluate
it in a regular box or sphere surrounding the physical domain. The distinction
of this study is that we approximate the homogeneous solution with nonsingular
general solution RBFs, as in the boundary knot method. The collocation method
using general solution RBFs has very high accuracy and spectral convergence
speed, and is a simple, truly meshfree approach for any complicated geometry. More
importantly, the use of nonsingular general solution avoids the controversial
artificial boundary in the method of fundamental solution due to the
singularity of fundamental solution.
|
cs/0205022
|
The Traits of the Personable
|
cs.AI cs.IR
|
Information personalization is fertile ground for application of AI
techniques. In this article I relate personalization to the ability to capture
partial information in an information-seeking interaction. The specific focus
is on personalizing interactions at web sites. Using ideas from partial
evaluation and explanation-based generalization, I present a modeling
methodology for reasoning about personalization. This approach helps identify
seven tiers of `personable traits' in web sites.
|
cs/0205025
|
Bootstrapping Structure into Language: Alignment-Based Learning
|
cs.LG cs.CL
|
This thesis introduces a new unsupervised learning framework, called
Alignment-Based Learning, which is based on the alignment of sentences and
Harris's (1951) notion of substitutability. Instances of the framework can be
applied to an untagged, unstructured corpus of natural language sentences,
resulting in a labelled, bracketed version of that corpus.
Firstly, the framework aligns all sentences in the corpus in pairs, resulting
in a partition of the sentences consisting of parts of the sentences that are
equal in both sentences and parts that are unequal. Unequal parts of sentences
can be seen as being substitutable for each other, since substituting one
unequal part for the other results in another valid sentence. The unequal parts
of the sentences are thus considered to be possible (possibly overlapping)
constituents, called hypotheses.
Secondly, the selection learning phase considers all hypotheses found by the
alignment learning phase and selects the best of these. The hypotheses are
selected based on the order in which they were found, or based on a
probabilistic function.
The framework can be extended with a grammar extraction phase. This extended
framework is called parseABL. Instead of returning a structured version of the
unstructured input corpus, like the ABL system, this system also returns a
stochastic context-free or tree substitution grammar.
Different instances of the framework have been tested on the English ATIS
corpus, the Dutch OVIS corpus and the Wall Street Journal corpus. One of the
interesting results, apart from the encouraging numerical results, is that all
instances can (and do) learn recursive structures.
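The alignment step, which splits a sentence pair into equal parts and substitutable unequal parts, can be illustrated with a small sketch (here using token-level alignment via Python's standard difflib rather than the thesis's own alignment algorithms; names are illustrative):

```python
from difflib import SequenceMatcher

def align(sent_a, sent_b):
    """Split two token sequences into equal parts and unequal, substitutable parts."""
    sm = SequenceMatcher(a=sent_a, b=sent_b, autojunk=False)
    equal_parts, hypotheses = [], []
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag == "equal":
            equal_parts.append(sent_a[i1:i2])
        else:
            # Unequal parts are substitutable for each other: constituent hypotheses.
            hypotheses.append((sent_a[i1:i2], sent_b[j1:j2]))
    return equal_parts, hypotheses
```

On the pair "show me flights from boston to dallas" / "show me flights from denver to atlanta", the unequal parts (boston, denver) and (dallas, atlanta) emerge as constituent hypotheses, as described above; the selection learning phase would then score such hypotheses.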
|
cs/0205026
|
Monads for natural language semantics
|
cs.CL cs.PL
|
Accounts of semantic phenomena often involve extending types of meanings and
revising composition rules at the same time. The concept of monads allows many
such accounts -- for intensionality, variable binding, quantification and focus
-- to be stated uniformly and compositionally.
|
cs/0205027
|
A variable-free dynamic semantics
|
cs.CL
|
I propose a variable-free treatment of dynamic semantics. By "dynamic
semantics" I mean analyses of donkey sentences ("Every farmer who owns a donkey
beats it") and other binding and anaphora phenomena in natural language where
meanings of constituents are updates to information states, for instance as
proposed by Groenendijk and Stokhof. By "variable-free" I mean denotational
semantics in which functional combinators replace variable indices and
assignment functions, for instance as advocated by Jacobson.
The new theory presented here achieves a compositional treatment of dynamic
anaphora that does not involve assignment functions, and separates the
combinatorics of variable-free semantics from the particular linguistic
phenomena it treats. Integrating variable-free semantics and dynamic semantics
gives rise to interactions that make new empirical predictions, for example
"donkey weak crossover" effects.
|
cs/0205028
|
NLTK: The Natural Language Toolkit
|
cs.CL
|
NLTK, the Natural Language Toolkit, is a suite of open source program
modules, tutorials and problem sets, providing ready-to-use computational
linguistics courseware. NLTK covers symbolic and statistical natural language
processing, and is interfaced to annotated corpora. Students augment and
replace existing components, learn structured programming by example, and
manipulate sophisticated models from the outset.
|
cs/0205034
|
Data-Collection for the Sloan Digital Sky Survey: a Network-Flow
Heuristic
|
cs.DS cs.CE
|
The goal of the Sloan Digital Sky Survey is ``to map in detail one-quarter of
the entire sky, determining the positions and absolute brightnesses of more
than 100 million celestial objects''. The survey will be performed by taking
``snapshots'' through a large telescope. Each snapshot can capture up to 600
objects from a small circle of the sky. This paper describes the design and
implementation of the algorithm that is being used to determine the snapshots
so as to minimize their number. The problem is NP-hard in general; the
algorithm described is a heuristic, based on Lagrangian relaxation and
min-cost network flow. It gets within 5-15% of a naive lower bound, whereas
using a ``uniform'' cover only gets within 25-35%.
|
cs/0205057
|
Unsupervised Discovery of Morphemes
|
cs.CL
|
We present two methods for unsupervised segmentation of words into
morpheme-like units. The model utilized is especially suited for languages with
a rich morphology, such as Finnish. The first method is based on the Minimum
Description Length (MDL) principle and works online. In the second method,
Maximum Likelihood (ML) optimization is used. The quality of the segmentations
is measured using an evaluation method that compares the segmentations produced
to an existing morphological analysis. Experiments on both Finnish and English
corpora show that the presented methods perform well compared to a current
state-of-the-art system.
|
cs/0205059
|
A Connection-Centric Survey of Recommender Systems Research
|
cs.IR cs.HC
|
Recommender systems attempt to reduce information overload and retain
customers by selecting a subset of items from a universal set based on user
preferences. While research in recommender systems grew out of information
retrieval and filtering, the topic has steadily advanced into a legitimate and
challenging research area of its own. Recommender systems have traditionally
been studied from a content-based filtering vs. collaborative design
perspective. Recommendations, however, are not delivered in a vacuum, but
rather are cast within an informal community of users and a social context.
Therefore, ultimately all recommender systems make connections among people and
thus should be surveyed from such a perspective. This viewpoint is
under-emphasized in the recommender systems literature. We therefore take a
connection-oriented viewpoint toward recommender systems research. We posit
that recommendation has an inherently social element and is ultimately intended
to connect people either directly as a result of explicit user modeling or
indirectly through the discovery of relationships implicit in extant data.
Thus, recommender systems are characterized by how they model users to bring
people together: explicitly or implicitly. Finally, user modeling and the
connection-centric viewpoint raise broadening and social issues--such as
evaluation, targeting, and privacy and trust--which we also briefly address.
|
cs/0205060
|
Optimizing Queries Using a Meta-level Database
|
cs.DB
|
Graph simulation (using graph schemata or data guides) has been successfully
proposed as a technique for adding structure to semistructured data. Design
patterns for description (such as meta-classes and homomorphisms between schema
layers), which are prominent in the object-oriented programming community,
constitute a generalization of this graph simulation approach.
In this paper, we show that description is applicable to a wide range of data
models that have some notion of object (identity), and propose to turn it into a data
model primitive much like, say, inheritance. We argue that such an extension
fills a practical need in contemporary data management. Then, we present
algebraic techniques for query optimization (using the notions of described and
description queries). Finally, in the semistructured setting, we discuss the
pruning of regular path queries (with nested conditions) using description
meta-data. In this context, our notion of meta-data extends graph schemata and
data guides with meta-level values, making it possible to boost query
performance and to reduce the redundancy of data.
|
cs/0205061
|
Aging, double helix and small world property in genetic algorithms
|
cs.NE cs.DS physics.data-an
|
Over a quarter of a century after the invention of genetic algorithms, and
after myriads of their modifications as well as successful implementations, we
still lack many essential details of a thorough analysis of their inner
workings. One such fundamental question is: how many generations do we need to
solve an optimization problem? This paper tries to answer this question, albeit
in a fuzzy way, making use of the double helix concept. As a byproduct we gain
a better understanding of the ways in which a genetic algorithm may be
fine-tuned.
|
cs/0205063
|
Distance function wavelets - Part II: Extended results and conjectures
|
cs.CE cs.CG
|
Report II is concerned with the extended results of distance function
wavelets (DFW). The fractional DFW transforms are first addressed relating to
the fractal geometry and fractional derivative, and then, the discrete
Helmholtz-Fourier transform is briefly presented. Green's second identity may
be an alternative device for developing the theoretical framework of the DFW
transform and series. The kernel solutions of the Winkler plate equation and
Burgers' equation are used to create the DFW transforms and series. Most
interestingly, it is found that the translation invariant monomial solutions of
the high-order Laplace equations can be used to make very simple harmonic
polynomial DFW series. In most cases in this study, solid mathematical analysis
is missing and the results, obtained intuitively, remain conjectures.
|
cs/0205065
|
Bootstrapping Lexical Choice via Multiple-Sequence Alignment
|
cs.CL
|
An important component of any generation system is the mapping dictionary, a
lexicon of elementary semantic expressions and corresponding natural language
realizations. Typically, labor-intensive knowledge-based methods are used to
construct the dictionary. We instead propose to acquire it automatically via a
novel multiple-pass algorithm employing multiple-sequence alignment, a
technique commonly used in bioinformatics. Crucially, our method leverages
latent information contained in multi-parallel corpora -- datasets that supply
several verbalizations of the corresponding semantics rather than just one.
We used our techniques to generate natural language versions of
computer-generated mathematical proofs, with good results on both a
per-component and overall-output basis. For example, in evaluations involving a
dozen human judges, our system produced output whose readability and
faithfulness to the semantic input rivaled that of a traditional generation
system.
|
cs/0205066
|
Effectiveness of Preference Elicitation in Combinatorial Auctions
|
cs.GT cs.MA
|
Combinatorial auctions where agents can bid on bundles of items are desirable
because they allow the agents to express complementarity and substitutability
between the items. However, expressing one's preferences can require bidding on
all bundles. Selective incremental preference elicitation by the auctioneer was
recently proposed to address this problem (Conen & Sandholm 2001), but the idea
was not evaluated. In this paper we show, experimentally and theoretically,
that automated elicitation provides a drastic benefit. In all of the
elicitation schemes under study, as the number of items for sale increases, the
amount of information elicited is a vanishing fraction of the information
collected in traditional ``direct revelation mechanisms'' where bidders reveal
all their valuation information. Most of the elicitation schemes also maintain
the benefit as the number of agents increases. We develop more effective
elicitation policies for existing query types. We also present a new query type
that takes the incremental nature of elicitation to a new level by allowing
agents to give approximate answers that are refined only on an as-needed basis.
In the process, we present methods for evaluating different types of
elicitation policies.
|
cs/0205067
|
Evaluating the Effectiveness of Ensembles of Decision Trees in
Disambiguating Senseval Lexical Samples
|
cs.CL
|
This paper presents an evaluation of an ensemble-based system that
participated in the English and Spanish lexical sample tasks of Senseval-2. The
system combines decision trees of unigrams, bigrams, and co-occurrences into a
single classifier. The analysis is extended to include the Senseval-1 data.
|
cs/0205068
|
Assessing System Agreement and Instance Difficulty in the Lexical Sample
Tasks of Senseval-2
|
cs.CL
|
This paper presents a comparative evaluation among the systems that
participated in the Spanish and English lexical sample tasks of Senseval-2. The
focus is on pairwise comparisons among systems to assess the degree to which
they agree, and on measuring the difficulty of the test instances included in
these tasks.
|
cs/0205069
|
Machine Learning with Lexical Features: The Duluth Approach to
Senseval-2
|
cs.CL
|
This paper describes the sixteen Duluth entries in the Senseval-2 comparative
exercise among word sense disambiguation systems. There were eight pairs of
Duluth systems entered in the Spanish and English lexical sample tasks. These
are all based on standard machine learning algorithms that induce classifiers
from sense-tagged training text, where the context in which ambiguous words
occur is represented by simple lexical features. These are highly portable,
robust methods that can serve as a foundation for more tailored approaches.
|
cs/0205070
|
Thumbs up? Sentiment Classification using Machine Learning Techniques
|
cs.CL cs.LG
|
We consider the problem of classifying documents not by topic, but by overall
sentiment, e.g., determining whether a review is positive or negative. Using
movie reviews as data, we find that standard machine learning techniques
definitively outperform human-produced baselines. However, the three machine
learning methods we employed (Naive Bayes, maximum entropy classification, and
support vector machines) do not perform as well on sentiment classification as
on traditional topic-based categorization. We conclude by examining factors
that make the sentiment classification problem more challenging.
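Of the three learners, Naive Bayes is the simplest to sketch. The following toy bag-of-words classifier with add-one smoothing illustrates the general technique only; it is not the paper's experimental setup, and the feature choices, data, and names are invented:

```python
import math
from collections import Counter

def train_nb(docs):
    """docs: list of (tokens, label) pairs. Returns log-priors and smoothed
    per-class word log-likelihoods (add-one smoothing)."""
    labels = sorted({label for _, label in docs})
    word_counts = {lab: Counter() for lab in labels}
    doc_counts = Counter(label for _, label in docs)
    for tokens, label in docs:
        word_counts[label].update(tokens)
    vocab = set().union(*word_counts.values())
    log_prior = {lab: math.log(doc_counts[lab] / len(docs)) for lab in labels}
    log_like = {}
    for lab in labels:
        total = sum(word_counts[lab].values())
        log_like[lab] = {
            w: math.log((word_counts[lab][w] + 1) / (total + len(vocab)))
            for w in vocab
        }
    return log_prior, log_like, vocab

def classify(tokens, log_prior, log_like, vocab):
    # Pick the class maximizing log P(class) + sum of word log-likelihoods.
    return max(
        log_prior,
        key=lambda lab: log_prior[lab]
        + sum(log_like[lab][w] for w in tokens if w in vocab),
    )
```

A unigram presence/frequency model like this is the kind of baseline feature set the paper compares against richer representations.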
|
cs/0205071
|
A Scalable Architecture for Harvest-Based Digital Libraries - The
ODU/Southampton Experiments
|
cs.DL cs.IR
|
This paper discusses the requirements of current and emerging applications
based on the Open Archives Initiative (OAI) and emphasizes the need for a
common infrastructure to support them. Inspired by HTTP proxy, cache, gateway
and web service concepts, a design for a scalable and reliable infrastructure
that aims at satisfying these requirements is presented. Moreover it is shown
how various applications can exploit the services included in the proposed
infrastructure. The paper concludes by discussing the current status of several
prototype implementations.
|
cs/0205072
|
Unsupervised Learning of Morphology without Morphemes
|
cs.CL cs.LG
|
The first morphological learner based upon the theory of Whole Word
Morphology (Ford et al., 1997) is outlined, and preliminary evaluation results
are presented. The program, Whole Word Morphologizer, takes a POS-tagged
lexicon as input, induces morphological relationships without attempting to
discover or identify morphemes, and is then able to generate new words beyond
the learning sample. The accuracy (precision) of the generated new words is as
high as 80% using the pure Whole Word theory, and 92% after a post-hoc
adjustment is added to the routine.
|
cs/0205073
|
Vote Elicitation: Complexity and Strategy-Proofness
|
cs.GT cs.CC cs.MA
|
Preference elicitation is a central problem in AI, and has received
significant attention in single-agent settings. It is also a key problem in
multiagent systems, but has received little attention here so far. In this
setting, the agents may have different preferences that often must be
aggregated using voting. This leads to interesting issues because what, if any,
information should be elicited from an agent depends on what other agents have
revealed about their preferences so far.
In this paper we study effective elicitation, and its impediments, for the
most common voting protocols. It turns out that in the Single Transferable Vote
protocol, even knowing when to terminate elicitation is NP-complete,
while this is easy for all the other protocols under study. Even for these
protocols, determining how to elicit effectively is NP-complete, even with
perfect suspicions about how the agents will vote. The exception is the
Plurality protocol where such effective elicitation is easy.
We also show that elicitation introduces additional opportunities for
strategic manipulation by the voters. We demonstrate how to curtail the space
of elicitation schemes so that no such additional strategic issues arise.
|
cs/0205074
|
Complexity Results about Nash Equilibria
|
cs.GT cs.CC cs.MA
|
Noncooperative game theory provides a normative framework for analyzing
strategic interactions. However, for the toolbox to be operational, the
solutions it defines will have to be computed. In this paper, we provide a
single reduction that 1) demonstrates NP-hardness of determining whether Nash
equilibria with certain natural properties exist, and 2) demonstrates the
#P-hardness of counting Nash equilibria (or connected sets of Nash equilibria).
We also show that 3) determining whether a pure-strategy Bayes-Nash equilibrium
exists is NP-hard, and that 4) determining whether a pure-strategy Nash
equilibrium exists in a stochastic (Markov) game is PSPACE-hard even if the
game is invisible (this remains NP-hard if the game is finite). All of our
hardness results hold even if there are only two players and the game is
symmetric.
Keywords: Nash equilibrium; game theory; computational complexity;
noncooperative game theory; normal form game; stochastic game; Markov game;
Bayes-Nash equilibrium; multiagent systems.
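For contrast with these hardness results: deciding whether any pure-strategy Nash equilibrium exists in an explicitly given two-player normal-form game is easy by brute force over strategy profiles; the hardness above concerns equilibria with additional properties, counting, Bayesian games, and stochastic games. A minimal illustrative checker (not from the paper):

```python
from itertools import product

def pure_nash_equilibria(payoff_a, payoff_b):
    """Enumerate pure-strategy Nash equilibria of a two-player normal-form game.

    payoff_a[i][j], payoff_b[i][j]: payoffs when row plays i and column plays j.
    """
    n_rows, n_cols = len(payoff_a), len(payoff_a[0])
    eqs = []
    for i, j in product(range(n_rows), range(n_cols)):
        # (i, j) is an equilibrium iff neither player gains by deviating alone.
        row_best = all(payoff_a[i][j] >= payoff_a[k][j] for k in range(n_rows))
        col_best = all(payoff_b[i][j] >= payoff_b[i][k] for k in range(n_cols))
        if row_best and col_best:
            eqs.append((i, j))
    return eqs
```

On the prisoner's dilemma this returns only mutual defection; on matching pennies it returns the empty list, since that game has only a mixed equilibrium.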
|
cs/0205075
|
Complexity of Mechanism Design
|
cs.GT cs.CC cs.MA
|
The aggregation of conflicting preferences is a central problem in multiagent
systems. The key difficulty is that the agents may report their preferences
insincerely. Mechanism design is the art of designing the rules of the game so
that the agents are motivated to report their preferences truthfully and a
(socially) desirable outcome is chosen. We propose an approach where a
mechanism is automatically created for the preference aggregation setting at
hand. This has several advantages, but the downside is that the mechanism
design optimization problem needs to be solved anew each time. Focusing on
settings where side payments are not possible, we show that the mechanism
design problem is NP-complete for deterministic mechanisms. This holds both for
dominant-strategy implementation and for Bayes-Nash implementation. We then
show that if we allow randomized mechanisms, the mechanism design problem
becomes tractable. In other words, the coordinator can tackle the computational
complexity introduced by its uncertainty about the agents' preferences by
making the agents face additional uncertainty. This comes at no loss, and in
some cases at a gain, in the (social) objective.
|
cs/0205078
|
A Spectrum of Applications of Automated Reasoning
|
cs.AI cs.LO
|
The likelihood of an automated reasoning program being of substantial
assistance for a wide spectrum of applications rests with the nature of the
options and parameters it offers on which to base needed strategies and
methodologies. This article focuses on such a spectrum, featuring W. McCune's
program OTTER, discussing widely varied successes in answering open questions,
and touching on some of the strategies and methodologies that played a key
role. The applications include finding a first proof, discovering single
axioms, locating improved axiom systems, and simplifying existing proofs. The
last application is directly pertinent to Hilbert's twenty-fourth problem,
recently found by R. Thiele: a problem concerned with proof simplification
that is extremely amenable to attack with the appropriate automated reasoning
program. The methodologies include those for seeking shorter proofs and
for finding proofs that avoid unwanted lemmas or classes of terms, a specific
option for seeking proofs with smaller equational or formula complexity, and a
different option to address the variable richness of a proof. The type of proof
one obtains with the use of OTTER is Hilbert-style axiomatic, including details
that permit one sometimes to gain new insights. We include questions still open
and challenges that merit consideration.
|
cs/0205079
|
Connectives in Quantum and other Cumulative Logics
|
cs.AI math.LO
|
Cumulative logics are studied in an abstract setting, i.e., without
connectives, very much in the spirit of Makinson's early work. A powerful
representation theorem characterizes those logics by choice functions that
satisfy a weakening of Sen's property alpha, in the spirit of the author's
"Nonmonotonic Logics and Semantics" (JLC). The representation results obtained
are surprisingly smooth: in the completeness part the choice function may be
defined on any set of worlds, not only definable sets and no
definability-preservation property is required in the soundness part. For
abstract cumulative logics, proper conjunction and negation may be defined.
Contrary to the situation studied in "Nonmonotonic Logics and Semantics" no
proper disjunction seems to be definable in general. The cumulative relations
of KLM that satisfy some weakening of the consistency preservation property all
define cumulative logics with a proper negation. Quantum Logics, as defined by
Engesser and Gabbay are such cumulative logics but the negation defined by
orthogonal complement does not provide a proper negation.
|
cs/0205080
|
Transforming the World Wide Web into a Complexity-Based Semantic Network
|
cs.NI cs.IR
|
The aim of this paper is to introduce the idea of the Semantic Web to the
Complexity community and to lay the groundwork for a project resulting in the
creation of an Internet-based semantic network of Complexity-related
information providers. Implementation of Semantic Web technology would be of
mutual benefit to both participants and users, and would confirm the
self-referencing power of the community to apply the products of its own
research to itself. We first explain the logic of the transition and discuss
important notions associated with Semantic Web technology. We then present a
brief outline of the project milestones.
|