| id | title | categories | abstract |
|---|---|---|---|
cs/0108004
|
Links tell us about lexical and semantic Web content
|
cs.IR cs.DL
|
The latest generation of Web search tools is beginning to exploit hypertext
link information to improve ranking\cite{Brin98,Kleinberg98} and
crawling\cite{Menczer00,Ben-Shaul99etal,Chakrabarti99} algorithms. The hidden
assumption behind such approaches, a correlation between the graph structure of
the Web and its content, has not been tested explicitly despite increasing
research on Web topology\cite{Lawrence98,Albert99,Adamic99,Butler00}. Here I
formalize and quantitatively validate two conjectures drawing connections from
link information to lexical and semantic Web content. The link-content
conjecture states that a page is similar to the pages that link to it, i.e.,
one can infer the lexical content of a page by looking at the pages that link
to it. I also show that lexical inferences based on link cues are quite
heterogeneous across Web communities. The link-cluster conjecture states that
pages about the same topic are clustered together, i.e., one can infer the
meaning of a page by looking at its neighbours. These results explain the
success of the newest search technologies and open the way for more dynamic and
scalable methods to locate information in a topic or user driven way.
|
cs/0108005
|
A Bit of Progress in Language Modeling
|
cs.CL
|
In the past several years, a number of different language modeling
improvements over simple trigram models have been found, including caching,
higher-order n-grams, skipping, interpolated Kneser-Ney smoothing, and
clustering. We present explorations of variations on, or of the limits of, each
of these techniques, including showing that sentence mixture models may have
more potential. While all of these techniques have been studied separately,
they have rarely been studied in combination. We find some significant
interactions, especially with smoothing and clustering techniques. We compare a
combination of all techniques together to a Katz smoothed trigram model with no
count cutoffs. We achieve perplexity reductions between 38% and 50% (1 bit of
entropy), depending on training data size, as well as a word error rate
reduction of 8.9%. Our perplexity reductions are perhaps the highest reported
compared to a fair baseline. This is the extended version of the paper; it
contains additional details and proofs, and is designed to be a good
introduction to the state of the art in language modeling.
|
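The "38% and 50% (1 bit of entropy)" figure in the abstract above follows from the relation perplexity = 2^entropy: reducing cross-entropy by one bit exactly halves perplexity. A minimal sketch of that arithmetic (function names are illustrative only):

```python
import math

def perplexity(entropy_bits):
    # Perplexity is 2 raised to the cross-entropy measured in bits.
    return 2.0 ** entropy_bits

def entropy_reduction_bits(ppl_reduction_fraction):
    # A relative perplexity reduction r corresponds to
    # log2(1 / (1 - r)) bits of entropy reduction.
    return math.log2(1.0 / (1.0 - ppl_reduction_fraction))

print(entropy_reduction_bits(0.50))  # 1.0 -- a 50% reduction is exactly 1 bit
print(entropy_reduction_bits(0.38))  # ~0.69 bits for a 38% reduction
```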
cs/0108006
|
Classes for Fast Maximum Entropy Training
|
cs.CL
|
Maximum entropy models are considered by many to be one of the most promising
avenues of language modeling research. Unfortunately, long training times make
maximum entropy research difficult. We present a novel speedup technique: we
change the form of the model to use classes. Our speedup works by creating two
maximum entropy models, the first of which predicts the class of each word, and
the second of which predicts the word itself. This factoring of the model leads
to fewer non-zero indicator functions, and faster normalization, achieving
speedups of up to a factor of 35 over one of the best previous techniques. It
also results in typically slightly lower perplexities. The same trick can be
used to speed training of other machine learning techniques, e.g. neural
networks, applied to any problem with a large number of outputs, such as
language modeling.
|
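The class-based factoring described above, P(w|h) = P(c(w)|h) * P(w|c(w),h), can be checked numerically. In the sketch below the scores and the word-to-class map are random stand-ins, not trained maximum entropy models; the point is only that the factored product still defines a proper distribution while each normalization runs over |C| items or one class's members instead of the whole vocabulary:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, n_classes = 10_000, 100
# Hypothetical word-to-class map (the paper learns classes; here it is random).
word_class = rng.integers(0, n_classes, size=vocab_size)

# Random scores standing in for the two models' log-potentials given a history h.
class_scores = rng.normal(size=n_classes)
word_scores = rng.normal(size=vocab_size)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Unfactored model: one normalization over the full vocabulary (cost ~ |V|).
p_flat = softmax(word_scores)

# Class-factored model: P(w|h) = P(c(w)|h) * P(w | c(w), h).
p_class = softmax(class_scores)
p_word_given_class = np.zeros(vocab_size)
for c in range(n_classes):
    members = word_class == c
    if members.any():
        p_word_given_class[members] = softmax(word_scores[members])
p_factored = p_class[word_class] * p_word_given_class

# The factored probabilities still sum to one over the vocabulary.
assert abs(p_factored.sum() - 1.0) < 1e-9
```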
cs/0108008
|
Using Methods of Declarative Logic Programming for Intelligent
Information Agents
|
cs.MA cs.AI
|
The search for information on the web is faced with several problems, which
arise on the one hand from the vast number of available sources, and on the
other hand from their heterogeneity. A promising approach is the use of
multi-agent systems of information agents, which cooperatively solve advanced
information-retrieval problems. This requires capabilities to address complex
tasks, such as search and assessment of sources, query planning, information
merging and fusion, dealing with incomplete information, and handling of
inconsistency. In this paper, our interest is in the role which some methods
from the field of declarative logic programming can play in the realization of
reasoning capabilities for information agents. In particular, we are interested
in how they can be used and further developed for the specific needs of this
application domain. We review some existing systems and current projects, which
address information-integration problems. We then focus on declarative
knowledge-representation methods, and review and evaluate approaches from logic
programming and nonmonotonic reasoning for information agents. We discuss
advantages and drawbacks, and point out possible extensions and open issues.
|
cs/0108009
|
Artificial Neurons with Arbitrarily Complex Internal Structures
|
cs.NE q-bio.NC
|
Artificial neurons with arbitrarily complex internal structure are
introduced. The neurons can be described in terms of a set of internal
variables, a set of activation functions which describe the time evolution of
these variables and a set of characteristic functions which control how the
neurons interact with one another. The information capacity of attractor
networks composed of these generalized neurons is shown to reach the maximum
allowed bound. A simple example taken from the domain of pattern recognition
demonstrates the increased computational power of these neurons. Furthermore, a
specific class of generalized neurons gives rise to a simple transformation
relating attractor networks of generalized neurons to standard three layer
feed-forward networks. Given this correspondence, we conjecture that the
maximum information capacity of a three layer feed-forward network is 2 bits
per weight.
|
cs/0108011
|
On Classes of Functions for which No Free Lunch Results Hold
|
cs.NE math.OC nlin.AO
|
In a recent paper it was shown that No Free Lunch results hold for any subset
F of the set of all possible functions from a finite set X to a finite set Y
iff F is closed under permutation of X. In this article, we prove that the
number of those subsets can be neglected compared to the overall number of
possible subsets. Further, we present some arguments why problem classes
relevant in practice are not likely to be closed under permutation.
|
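The permutation-closure condition quoted above is easy to state operationally: a set F of functions from a finite X to a finite Y (each represented here as a tuple of outputs indexed by X) is closed under permutation iff relabeling the inputs of any member yields another member. A small brute-force check, for illustration only:

```python
from itertools import permutations, product

def is_closed_under_permutation(F, X):
    # F: iterable of functions X -> Y, each a tuple of outputs indexed by X.
    # Returns True iff permuting the inputs of any f in F stays inside F --
    # the condition under which the sharpened NFL theorem applies.
    F = set(F)
    for f in F:
        for perm in permutations(range(len(X))):
            if tuple(f[perm[i]] for i in range(len(X))) not in F:
                return False
    return True

X = [0, 1, 2]
# The set of ALL functions from X to {0, 1} is trivially closed.
all_f = set(product((0, 1), repeat=len(X)))
print(is_closed_under_permutation(all_f, X))        # True
# A singleton containing a non-constant function is not closed.
print(is_closed_under_permutation({(0, 0, 1)}, X))  # False
```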
cs/0108013
|
Convergent Approximate Solving of First-Order Constraints by Approximate
Quantifiers
|
cs.LO cs.AI
|
Exactly solving first-order constraints (i.e., first-order formulas over a
certain predefined structure) can be a very hard, or even undecidable problem.
In continuous structures like the real numbers it is promising to compute
approximate solutions instead of exact ones. However, the quantifiers of the
first-order predicate language are an obstacle to allowing approximations to
arbitrary small error bounds. In this paper we solve the problem by modifying
the first-order language and replacing the classical quantifiers with
approximate quantifiers. These also have two additional advantages: First, they
are tunable, in the sense that they allow the user to decide on the trade-off
between precision and efficiency. Second, they introduce additional
expressivity into the first-order language by allowing reasoning over the size
of solution sets.
|
cs/0108018
|
Bipartite graph partitioning and data clustering
|
cs.IR cs.LG
|
Many data types arising from data mining applications can be modeled as
bipartite graphs, examples include terms and documents in a text corpus,
customers and purchasing items in market basket analysis and reviewers and
movies in a movie recommender system. In this paper, we propose a new data
clustering method based on partitioning the underlying bipartite graph. The
partition is constructed by minimizing a normalized sum of edge weights between
unmatched pairs of vertices of the bipartite graph. We show that an approximate
solution to the minimization problem can be obtained by computing a partial
singular value decomposition (SVD) of the associated edge weight matrix of the
bipartite graph. We point out the connection of our clustering algorithm to
correspondence analysis used in multivariate analysis. We also briefly discuss
the issue of assigning data objects to multiple clusters. In the experimental
results, we apply our clustering algorithm to the problem of document
clustering to illustrate its effectiveness and efficiency.
|
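The SVD-based relaxation sketched in this abstract can be illustrated on a toy matrix. The following is a minimal sketch assuming the usual normalized-cut-style scaling D1^{-1/2} W D2^{-1/2} and a sign split on the second singular vectors; the edge weights and the helper name are hypothetical, and a real implementation would use a partial (truncated) SVD:

```python
import numpy as np

def spectral_bipartition(W):
    """Bipartition the rows and columns of a bipartite edge-weight matrix W
    via the second singular vectors of the degree-normalized matrix."""
    d1 = W.sum(axis=1)  # row degrees (e.g., terms)
    d2 = W.sum(axis=0)  # column degrees (e.g., documents)
    Wn = W / np.sqrt(np.outer(d1, d2))  # D1^{-1/2} W D2^{-1/2}
    U, s, Vt = np.linalg.svd(Wn)
    # The second singular vector pair is the relaxed partition indicator.
    row_labels = (U[:, 1] >= 0).astype(int)
    col_labels = (Vt[1, :] >= 0).astype(int)
    return row_labels, col_labels

# Toy corpus with two nearly disconnected term/document blocks.
W = np.array([
    [3., 3., 0., 0.],
    [2., 3., 0., 1.],
    [0., 0., 3., 2.],
    [0., 1., 2., 3.],
])
rows, cols = spectral_bipartition(W)
# Rows {0,1} with columns {0,1} should form one cluster, {2,3} the other.
```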
cs/0108022
|
Portability of Syntactic Structure for Language Modeling
|
cs.CL
|
The paper presents a study on the portability of statistical syntactic
knowledge in the framework of the structured language model (SLM). We
investigate the impact of porting SLM statistics from the Wall Street Journal
(WSJ) to the Air Travel Information System (ATIS) domain. We compare this
approach to applying the Microsoft rule-based parser (NLPwin) for the ATIS data
and to using a small amount of data manually parsed at UPenn for gathering the
initial SLM statistics. Surprisingly, despite the fact that it performs modestly
in perplexity (PPL), the model initialized on WSJ parses outperforms the other
initialization methods based on in-domain annotated data, achieving a
significant 0.4% absolute and 7% relative reduction in word error rate (WER)
over a baseline system whose word error rate is 5.8%; the improvement measured
relative to the minimum WER achievable on the N-best lists we worked with is
12%.
|
cs/0108023
|
Information Extraction Using the Structured Language Model
|
cs.CL cs.IR
|
The paper presents a data-driven approach to information extraction (viewed
as template filling) using the structured language model (SLM) as a statistical
parser. The task of template filling is cast as constrained parsing using the
SLM. The model is automatically trained from a set of sentences annotated with
frame/slot labels and spans. Training proceeds in stages: first a constrained
syntactic parser is trained such that the parses on training data meet the
specified semantic spans, then the non-terminal labels are enriched to contain
semantic information and finally a constrained syntactic+semantic parser is
trained on the parse trees resulting from the previous stage. Despite the small
amount of training data used, the model is shown to outperform the slot level
accuracy of a simple semantic grammar authored manually for the MiPad ---
personal information management --- task.
|
cs/0109006
|
On Properties of Update Sequences Based on Causal Rejection
|
cs.AI
|
We consider an approach to update nonmonotonic knowledge bases represented as
extended logic programs under answer set semantics. New information is
incorporated into the current knowledge base subject to a causal rejection
principle enforcing that, in case of conflicts, more recent rules are preferred
and older rules are overridden. Such a rejection principle is also exploited in
other approaches to update logic programs, e.g., in dynamic logic programming
by Alferes et al. We give a thorough analysis of properties of our approach, to
get a better understanding of the causal rejection principle. We review
postulates for update and revision operators from the area of theory change and
nonmonotonic reasoning, and some new properties are considered as well. We then
consider refinements of our semantics which incorporate a notion of minimality
of change. As well, we investigate the relationship to other approaches,
showing that our approach is semantically equivalent to inheritance programs by
Buccafurri et al. and that it coincides with certain classes of dynamic logic
programs, for which we provide characterizations in terms of graph conditions.
Therefore, most of our results about properties of the causal rejection principle
apply to these approaches as well. Finally, we deal with computational
complexity of our approach, and outline how the update semantics and its
refinements can be implemented on top of existing logic programming engines.
|
cs/0109010
|
Anaphora and Discourse Structure
|
cs.CL
|
We argue in this paper that many common adverbial phrases generally taken to
signal a discourse relation between syntactically connected units within
discourse structure, instead work anaphorically to contribute relational
meaning, with only indirect dependence on discourse structure. This allows a
simpler discourse structure to provide scaffolding for compositional semantics,
and reveals multiple ways in which the relational meaning conveyed by adverbial
connectives can interact with that associated with discourse structure. We
conclude by sketching out a lexicalised grammar for discourse that facilitates
discourse interpretation as a product of compositional rules, anaphor
resolution and inference.
|
cs/0109013
|
Conceptual Analysis of Lexical Taxonomies: The Case of WordNet Top-Level
|
cs.CL cs.IR
|
In this paper we propose an analysis and an upgrade of WordNet's top-level
synset taxonomy. We briefly review WordNet and identify its main semantic
limitations. Some principles from a forthcoming OntoClean methodology are
applied to the ontological analysis of WordNet. A revised top-level taxonomy is
proposed, which is meant to be more conceptually rigorous, cognitively
transparent, and efficiently exploitable in several applications.
|
cs/0109014
|
Assigning Satisfaction Values to Constraints: An Algorithm to Solve
Dynamic Meta-Constraints
|
cs.PL cs.AI
|
The model of Dynamic Meta-Constraints has special activity constraints which
can activate other constraints. It also has meta-constraints which range over
other constraints. An algorithm is presented in which constraints can be
assigned one of five different satisfaction values, which leads to the
assignment of domain values to the variables in the CSP. An outline of the
model and the algorithm is presented, followed by some initial results for two
problems: a simple classic CSP and the Car Configuration Problem. The algorithm
is shown to perform few backtracks per solution, but to have overheads in the
form of historical records required for the implementation of state.
|
cs/0109015
|
Boosting Trees for Anti-Spam Email Filtering
|
cs.CL
|
This paper describes a set of comparative experiments for the problem of
automatically filtering unwanted electronic mail messages. Several variants of
the AdaBoost algorithm with confidence-rated predictions [Schapire & Singer,
99] have been applied, which differ in the complexity of the base learners
considered. Two main conclusions can be drawn from our experiments: a) The
boosting-based methods clearly outperform the baseline learning algorithms
(Naive Bayes and Induction of Decision Trees) on the PU1 corpus, achieving very
high levels of the F1 measure; b) Increasing the complexity of the base
learners makes it possible to obtain better ``high-precision'' classifiers,
which is a very important issue when misclassification costs are considered.
|
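The boosting setup above can be sketched with the simplest base learner mentioned, a one-feature threshold stump. This is a didactic AdaBoost, not the confidence-rated variant of Schapire & Singer used in the paper, and the "spam score" feature and data are invented:

```python
import numpy as np

def adaboost_stumps(X, y, n_rounds=10):
    """Minimal AdaBoost over threshold stumps on a single feature."""
    n = len(X)
    w = np.full(n, 1.0 / n)  # example weights, initially uniform
    stumps = []
    for _ in range(n_rounds):
        best = None
        for thr in np.unique(X):          # try every threshold
            for sign in (1, -1):          # and both orientations
                pred = sign * np.where(X >= thr, 1, -1)
                err = w[pred != y].sum()  # weighted training error
                if best is None or err < best[0]:
                    best = (err, thr, sign, pred)
        err, thr, sign, pred = best
        err = max(err, 1e-12)             # avoid log(0) on a perfect stump
        alpha = 0.5 * np.log((1 - err) / err)
        w *= np.exp(-alpha * y * pred)    # up-weight misclassified examples
        w /= w.sum()
        stumps.append((alpha, thr, sign))
    return stumps

def predict(stumps, X):
    score = sum(a * s * np.where(X >= t, 1, -1) for a, t, s in stumps)
    return np.where(score >= 0, 1, -1)

# Toy "spam score" feature: high values are spam (+1).
X = np.array([0.1, 0.2, 0.3, 0.6, 0.7, 0.9])
y = np.array([-1, -1, -1, 1, 1, 1])
model = adaboost_stumps(X, y)
assert (predict(model, X) == y).all()
```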
cs/0109020
|
Modelling Semantic Association and Conceptual Inheritance for Semantic
Analysis
|
cs.CL
|
Allowing users to interact across language borders is an interesting
challenge for information technology. For the purpose of a computer assisted
language learning system, we have chosen icons for representing meaning on the
input interface, since icons do not depend on a particular language. However, a
key limitation of this type of communication is the expression of articulated
ideas instead of isolated concepts. We propose a method to interpret sequences
of icons as complex messages by reconstructing the relations between concepts,
so as to build conceptual graphs able to represent meaning and to be used for
natural language sentence generation. This method is based on an electronic
dictionary containing semantic information.
|
cs/0109022
|
Interactive Timetabling
|
cs.PL cs.AI
|
Timetabling is a typical application of constraint programming whose task is
to allocate activities to slots in available resources respecting various
constraints like precedence and capacity. In this paper we present a basic
concept, a constraint model, and the solving algorithms for interactive
timetabling. Interactive timetabling combines automated timetabling (the
machine allocates the activities) with user interaction (the user can interfere
with the process of timetabling). Because the user can see how the timetabling
proceeds and can intervene in this process, we believe that such an approach is
more convenient than fully automated timetabling, which behaves like a black box. The
contribution of this paper is twofold: we present a generic model to describe
timetabling (and scheduling in general) problems and we propose an interactive
algorithm for solving such problems.
|
cs/0109023
|
Integrating Multiple Knowledge Sources for Robust Semantic Parsing
|
cs.CL cs.AI
|
This work explores a new robust approach for Semantic Parsing of unrestricted
texts. Our approach considers Semantic Parsing as a Consistent Labelling
Problem (CLP), allowing the integration of several knowledge types (syntactic
and semantic) obtained from different sources (linguistic and statistical). The
current implementation obtains 95% accuracy in model identification and 72% in
case-role filling.
|
cs/0109025
|
Dynamic Global Constraints: A First View
|
cs.PL cs.AI
|
Global constraints proved themselves to be an efficient tool for modelling
and solving large-scale real-life combinatorial problems. They encapsulate a
set of binary constraints and, using global reasoning about this set, filter
the domains of the involved variables better than arc consistency among the set of
binary constraints. Moreover, global constraints exploit semantic information
to achieve more efficient filtering than generalised consistency algorithms for
n-ary constraints. Continued expansion of constraint programming (CP) to
various application areas brings new challenges for design of global
constraints. In particular, application of CP to advanced planning and
scheduling (APS) requires dynamic additions of new variables and constraints
during the process of constraint satisfaction and, thus, it would be helpful if
the global constraints could adopt new variables. In the paper, we give a
motivation for such dynamic global constraints and we describe a dynamic
version of the well-known alldifferent constraint.
|
cs/0109029
|
Learning class-to-class selectional preferences
|
cs.CL
|
Selectional preference learning methods have usually focused on word-to-class
relations, e.g., a verb selects as its subject a given nominal class. This
paper extends previous statistical models to class-to-class preferences, and
presents a model that learns selectional preferences for classes of verbs. The
motivation is twofold: different senses of a verb may have different
preferences, and some classes of verbs can share preferences. The model is
tested on a word sense disambiguation task which uses subject-verb and
object-verb relationships extracted from a small sense-disambiguated corpus.
|
cs/0109030
|
Knowledge Sources for Word Sense Disambiguation
|
cs.CL
|
Two kinds of systems have been defined during the long history of WSD:
principled systems that define which knowledge types are useful for WSD, and
robust systems that use the information sources at hand, such as, dictionaries,
light-weight ontologies or hand-tagged corpora. This paper tries to systematize
the relation between desired knowledge types and actual information sources. We
also compare the results for a wide range of algorithms that have been
evaluated on a common test setting in our research group. We hope that this
analysis will help shift the focus from systems based on information sources
to systems based on knowledge sources. This study might also shed some light on
semi-automatic acquisition of desired knowledge types from existing resources.
|
cs/0109031
|
Enriching WordNet concepts with topic signatures
|
cs.CL
|
This paper explores the possibility of enriching the content of existing
ontologies. The overall goal is to overcome the lack of topical links among
concepts in WordNet. Each concept is to be associated to a topic signature,
i.e., a set of related words with associated weights. The signatures can be
automatically constructed from the WWW or from sense-tagged corpora. Both
approaches are compared and evaluated on a word sense disambiguation task. The
results show that it is possible to construct clean signatures from the WWW
using some filtering techniques.
|
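A topic signature of the kind described above (related words with associated weights) can be sketched with a toy log-odds weighting of a concept's contexts against a background corpus. This is only one plausible weighting, not necessarily the one used in the paper, and the example documents are invented:

```python
import math
from collections import Counter

def topic_signature(target_docs, background_docs, top_k=5):
    """Weight words by log-odds of occurring in documents about the concept
    versus a background corpus (with add-one smoothing on the background),
    and return the top_k highest-weighted words."""
    t = Counter(w for d in target_docs for w in d.split())
    b = Counter(w for d in background_docs for w in d.split())
    t_total, b_total = sum(t.values()), sum(b.values())
    scores = {
        w: math.log((t[w] / t_total) / ((b[w] + 1) / (b_total + len(b))))
        for w in t
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# Contexts for the financial sense of "bank" vs. a generic background.
target = ["bank money loan", "bank money account"]
background = ["river water bank", "tree water river"]
print(topic_signature(target, background, top_k=3))
```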
cs/0109034
|
Relevant Knowledge First - Reinforcement Learning and Forgetting in
Knowledge Based Configuration
|
cs.AI cs.LG
|
In order to solve complex configuration tasks in technical domains, various
knowledge-based methods have been developed. However, their application is
often unsuccessful due to low efficiency. One of the reasons for this is
that (parts of the) problems have to be solved again and again, instead of
being "learnt" from preceding processes. However, learning processes bring with
them the problem of conservatism, for in technical domains innovation is a
deciding factor in competition. On the other hand a certain amount of
conservatism is often desired since uncontrolled innovation as a rule is also
detrimental. This paper proposes the heuristic RKF (Relevant Knowledge First)
for making decisions in configuration processes based on the so-called
relevance of objects in a knowledge base. The underlying relevance-function has
two components, one based on reinforcement learning and the other based on
forgetting (fading). Relevance of an object increases with its successful use
and decreases with age when it is not used. RKF has been developed to speed up
the configuration process and to improve the quality of the solutions relative
to the reward value that is given by users.
|
cs/0109039
|
Testing for Mathematical Lineation in Jim Crace's "Quarantine" and T. S.
Eliot's "Four Quartets"
|
cs.CL
|
The mathematical distinction between prose and verse may be detected in
writings that are not apparently lineated, for example in T. S. Eliot's "Burnt
Norton", and Jim Crace's "Quarantine". In this paper we offer comments on
appropriate statistical methods for such work, and also on the nature of formal
innovation in these two texts. Additional remarks are made on the roots of
lineation as a metrical form, and on the prose-verse continuum.
|
cs/0109040
|
The Building of BODHI, a Bio-diversity Database System
|
cs.DB q-bio.PE
|
We have recently built a database system called BODHI, intended to store
plant bio-diversity information. It is based on an object-oriented modeling
approach and is developed completely around public-domain software. The unique
feature of BODHI is that it seamlessly integrates diverse types of data,
including taxonomic characteristics, spatial distributions, and genetic
sequences, thereby spanning the entire range from molecular to organism-level
information. A variety of sophisticated indexing strategies are incorporated to
efficiently access the various types of data, and a rule-based query processor
is employed for optimizing query execution. In this paper, we report on our
experiences in building BODHI and on its performance characteristics for a
representative set of queries.
|
cs/0109042
|
Intelligent Search of Correlated Alarms from Database containing Noise
Data
|
cs.NI cs.AI
|
Alarm correlation plays an important role in improving the service and
reliability in modern telecommunications networks. Most previous research on
alarm correlation did not consider the effect of noise data in the database.
This paper focuses on the method of discovering alarm correlation rules from a
database containing noise data. We first define two parameters, Win_freq and
Win_add, as measures of noise data, and then present the Robust_search
algorithm to solve the problem. Experiments with alarm data containing noise,
run at different settings of Win_freq and Win_add, show that the Robust_search
algorithm discovers more rules as the size of Win_add increases. We also
experimentally compare two different interestingness measures of confidence and
correlation.
|
cs/0109084
|
The Internet and Community Networks: Case Studies of Five U.S. Cities
|
cs.DB
|
This paper looks at five U.S. cities (Austin, Cleveland, Nashville, Portland,
and Washington, DC) and explores strategies being employed by community
activists and local governments to create and sustain community networking
projects. In some cities, community networking initiatives are relatively
mature, while in others they are in early or intermediate stages. The paper
looks at several factors that help explain the evolution of community networks
in cities:
1) Local government support; 2) Federal support; 3) Degree of community
activism, often reflected by public-private partnerships that help support
community networks.
In addition to these (more or less) measurable elements of local support, the
case studies enable description of the different objectives of community
networks in different cities. Several community networking projects aim to
improve the delivery of government services (e.g., Portland and Cleveland),
some have a job-training focus (e.g., Austin, Washington, DC), others are
oriented very explicitly toward community building (Nashville, DC), and others
toward neighborhood entrepreneurship (Portland and Cleveland).
The paper ties the case studies together by asking whether community
technology initiatives contribute to social capital in the cities studied.
|
cs/0109116
|
Digital Color Imaging
|
cs.CV cs.GR
|
This paper surveys current technology and research in the area of digital
color imaging. In order to establish the background and lay down terminology,
fundamental concepts of color perception and measurement are first presented
using vector-space notation and terminology. Present-day color recording and
reproduction systems are reviewed along with the common mathematical models
used for representing these devices. Algorithms for processing color images for
display and communication are surveyed, and a forecast of research trends is
attempted. An extensive bibliography is provided.
|
cs/0110003
|
The temporal calculus of conditional objects and conditional events
|
cs.AI cs.LO
|
We consider the problem of defining conditional objects (a|b), which would
allow one to regard the conditional probability Pr(a|b) as a probability of a
well-defined event rather than as a shorthand for Pr(ab)/Pr(b). The next issue
is to define boolean combinations of conditional objects, and possibly also the
operator of further conditioning. These questions have been investigated at
least since the times of George Boole, leading to a number of formalisms
proposed for conditional objects, mostly of syntactical, proof-theoretic vein.
We propose a unifying, semantical approach, in which conditional events are
(projections of) Markov chains, definable in the three-valued extension of the
past tense fragment of propositional linear time logic, or, equivalently, by
three-valued counter-free Moore machines. Thus our conditional objects are
indeed stochastic processes, one of the central notions of modern probability
theory.
Our model fulfills early ideas of Bruno de Finetti and, moreover, as we show
in a separate paper, all the previously proposed algebras of conditional events
can be isomorphically embedded in our model.
|
cs/0110004
|
Embedding conditional event algebras into temporal calculus of
conditionals
|
cs.AI cs.LO
|
In this paper we prove that all the existing conditional event algebras embed
into a three-valued extension of temporal logic of discrete past time, which
the authors of this paper have proposed in another paper as a general model of
conditional events.
First of all, we discuss the descriptive incompleteness of the CEAs. In this
direction, we show that some important notions, like independence of
conditional events, cannot be properly addressed in the framework of
conditional event algebras, while they can be precisely formulated and analyzed
in the temporal setting.
We also demonstrate that the embeddings allow one to use Markov chain
algorithms (suitable for the temporal calculus) for computing probabilities of
complex conditional expressions of the embedded conditional event algebras, and
that these algorithms can outperform those previously known.
|
cs/0110014
|
The Open Language Archives Community and Asian Language Resources
|
cs.CL cs.DL
|
The Open Language Archives Community (OLAC) is a new project to build a
worldwide system of federated language archives based on the Open Archives
Initiative and the Dublin Core Metadata Initiative. This paper aims to
disseminate the OLAC vision to the language resources community in Asia, and to
show language technologists and linguists how they can document their tools and
data in such a way that others can easily discover them. We describe OLAC and
the OLAC Metadata Set, then discuss two key issues in the Asian context:
language classification and multilingual resource classification.
|
cs/0110015
|
Richer Syntactic Dependencies for Structured Language Modeling
|
cs.CL
|
The paper investigates the use of richer syntactic dependencies in the
structured language model (SLM). We present two simple methods of enriching the
dependencies in the syntactic parse trees used for initializing the SLM. We
evaluate the impact of both methods on the perplexity (PPL) and
word error rate (WER, N-best rescoring) performance of the SLM. We show that the
new model achieves an improvement in PPL and WER over the baseline results
reported using the SLM on the UPenn Treebank and Wall Street Journal (WSJ)
corpora, respectively.
|
cs/0110020
|
Structuring Business Metadata in Data Warehouse Systems for Effective
Business Support
|
cs.DB
|
Large organizations today are being served by different types of data
processing and information systems, ranging from the operational (OLTP)
systems, data warehouse systems, to data mining and business intelligence
applications. It is important to create an integrated repository of what these
systems contain and do in order to use them collectively and effectively. The
repository contains metadata of source systems, data warehouse, and also the
business metadata. Decision support and business analysis require extensive and
in-depth understanding of business entities, tasks, rules and the environment.
The purpose of business metadata is to provide this understanding. Realizing
the importance of metadata, many standardization efforts have been initiated to
define metadata models. In trying to define integrated metadata and
information systems for a banking application, we discovered some important
limitations and inadequacies in the business metadata proposals. They relate to
providing integrated and flexible interoperability and navigation between
metadata and data, and to the important issue of systematically handling
temporal characteristics and evolution of the metadata itself.
In this paper, we study the issue of structuring business metadata so that it
can provide a context for business management and decision support when
integrated with data warehousing. We define temporal object-oriented business
metadata model, and relate it both to the technical metadata and the data
warehouse. We also define ways of accessing and navigating metadata in
conjunction with data.
|
cs/0110021
|
Alife Model of Evolutionary Emergence of Purposeful Adaptive Behavior
|
cs.NE
|
The process of evolutionary emergence of purposeful adaptive behavior is
investigated by means of computer simulations. The model proposed implies that
there is an evolving population of simple agents, which have two natural needs:
energy and reproduction. Any need is characterized quantitatively by a
corresponding motivation. Motivations determine goal-directed behavior of
agents. The model demonstrates that purposeful behavior does emerge in the
simulated evolutionary processes. Emergence of purposefulness is accompanied by
origin of a simple hierarchy in the control system of agents.
|
cs/0110023
|
Set Unification
|
cs.LO cs.AI cs.SC
|
The unification problem in algebras capable of describing sets has been
tackled, directly or indirectly, by many researchers and it finds important
applications in various research areas--e.g., deductive databases, theorem
proving, static analysis, rapid software prototyping. The various solutions
proposed are spread across a large literature. In this paper we provide a
uniform presentation of unification of sets, formalizing it at the level of set
theory. We address the problem of deciding existence of solutions at an
abstract level. This provides also the ability to classify different types of
set unification problems. Unification algorithms are uniformly proposed to
solve the unification problem in each of such classes.
The algorithms presented are partly drawn from the literature--and properly
revisited and analyzed--and partly novel proposals. In particular, we present a
new goal-driven algorithm for general ACI1 unification and a new simpler
algorithm for general (Ab)(Cl) unification.
|
cs/0110026
|
Information retrieval in Current Research Information Systems
|
cs.IR cs.DL
|
In this paper we describe the requirements for research information systems
and the problems that arise in the development of such systems. We show which
of these problems can be solved by using knowledge markup technologies. An
ontology for Research Information Systems is offered, and an architecture for
collecting research data and providing access to it is described.
|
cs/0110027
|
Part-of-Speech Tagging with Two Sequential Transducers
|
cs.CL
|
We present a method of constructing and using a cascade consisting of a left-
and a right-sequential finite-state transducer (FST), T1 and T2, for
part-of-speech (POS) disambiguation. Compared to an HMM, this FST cascade has
the advantage of significantly higher processing speed, but at the cost of
slightly lower accuracy. Applications such as Information Retrieval, where the
speed can be more important than accuracy, could benefit from this approach.
In the process of tagging, we first assign every word a unique ambiguity
class c_i that can be looked up in a lexicon encoded by a sequential FST. Every
c_i is denoted by a single symbol, e.g. [ADJ_NOUN], although it represents a
set of alternative tags that a given word can occur with. The sequence of the
c_i of all words of one sentence is the input to our FST cascade. It is mapped
by T1, from left to right, to a sequence of reduced ambiguity classes r_i.
Every r_i is denoted by a single symbol, although it represents a set of
alternative tags. Intuitively, T1 eliminates the less likely tags from c_i,
thus creating r_i. Finally, T2 maps the sequence of r_i, from right to left, to
a sequence of single POS tags t_i. Intuitively, T2 selects the most likely t_i
from every r_i.
The probabilities of all t_i, r_i, and c_i are used only at compile time, not
at run time. They do not (directly) occur in the FSTs, but are "implicitly
contained" in their structure.
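The two-pass cascade described above can be imitated with ordinary lookup tables. The following toy sketch is purely illustrative: the words, ambiguity classes, and reduction rules are invented, and plain dictionaries stand in for the paper's compiled transducers T1 and T2.

```python
# Toy simulation of the two-pass tagging cascade. All lexicon entries and
# rules below are hypothetical; the real T1/T2 are finite-state transducers.

LEXICON = {             # word -> ambiguity class c_i
    "the":  "[DET]",
    "fair": "[ADJ_NOUN]",
    "deal": "[NOUN_VERB]",
}

# T1 (applied left to right): reduce c_i given the previous reduced class
T1 = {
    ("[DET]", "[ADJ_NOUN]"):       "[ADJ_NOUN]",  # still ambiguous here
    ("[ADJ_NOUN]", "[NOUN_VERB]"): "[NOUN]",      # drop the VERB reading
}

# T2 (applied right to left): pick a single tag t_i given the following tag
T2 = {
    ("[NOUN]", None):       "NOUN",
    ("[ADJ_NOUN]", "NOUN"): "ADJ",
    ("[DET]", "ADJ"):       "DET",
}

def tag(words):
    cs = [LEXICON[w] for w in words]
    # pass 1: left to right, c_i -> r_i
    rs, prev = [], None
    for c in cs:
        rs.append(T1.get((prev, c), c))
        prev = rs[-1]
    # pass 2: right to left, r_i -> t_i
    ts, nxt = [], None
    for r in reversed(rs):
        t = T2.get((r, nxt), r.strip("[]").split("_")[0])
        ts.append(t)
        nxt = t
    return list(reversed(ts))

print(tag(["the", "fair", "deal"]))  # -> ['DET', 'ADJ', 'NOUN']
```

Note how, as in the abstract, no probabilities appear at run time: they would only shape which entries the tables contain when the transducers are compiled.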
|
cs/0110032
|
A logic-based approach to data integration
|
cs.DB cs.AI
|
An important aspect of data integration involves answering queries using
various resources rather than by accessing database relations. The process of
transforming a query from the database relations to the resources is often
referred to as query folding or answering queries using views, where the views
are the resources. We present a uniform approach that includes as special cases
much of the previous work on this subject. Our approach is logic-based using
resolution. We deal with integrity constraints, negation, and recursion also
within this framework.
|
cs/0110036
|
Efficient algorithms for decision tree cross-validation
|
cs.LG
|
Cross-validation is a useful and generally applicable technique often
employed in machine learning, including decision tree induction. An important
disadvantage of straightforward implementation of the technique is its
computational overhead. In this paper we show that, for decision trees, the
computational overhead of cross-validation can be reduced significantly by
integrating the cross-validation with the normal decision tree induction
process. We discuss how existing decision tree algorithms can be adapted to
this aim, and provide an analysis of the speedups these adaptations may yield.
The analysis is supported by experimental results.
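For contrast, the straightforward baseline whose overhead the paper targets looks as follows: k-fold cross-validation retrains the model once per fold. The sketch below uses a trivial majority-class stub in place of a real decision tree, purely to make the k retraining passes visible; none of it reflects the paper's integrated algorithm.

```python
# Straightforward k-fold cross-validation (the costly baseline): the model is
# rebuilt from scratch for each of the k folds. The "model" is a stub.

from collections import Counter

def k_folds(n, k):
    """Yield (train_idx, test_idx) index pairs for k roughly equal folds."""
    folds = [list(range(i, n, k)) for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for f in range(k) if f != i for j in folds[f]]
        yield train, test

def majority_class(labels):
    return Counter(labels).most_common(1)[0][0]

def cv_accuracy(labels, k=5):
    correct = total = 0
    for train, test in k_folds(len(labels), k):   # k full retrainings
        pred = majority_class([labels[i] for i in train])
        correct += sum(labels[i] == pred for i in test)
        total += len(test)
    return correct / total

labels = ["a"] * 8 + ["b"] * 2
print(cv_accuracy(labels, k=5))  # -> 0.8
```

The paper's contribution is, roughly, to avoid repeating the work shared between these k training passes inside the tree induction itself.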
|
cs/0110041
|
Towards Solving the Interdisciplinary Language Barrier Problem
|
cs.CY cs.CL cs.IR
|
This work aims to make it easier for a specialist in one field to find and
explore ideas from another field which may be useful in solving a new problem
arising in his practice. It presents a methodology which serves to represent
the relationships that exist between concepts, problems, and solution patterns
from different fields of human activity in the form of a graph. Our approach is
based upon generalization and specialization relationships and problem solving.
It is simple enough to be understood quite easily, and general enough to enable
coherent integration of concepts and problems from virtually any field. We have
built an implementation which uses the World Wide Web as a support to allow
navigation between graph nodes and collaborative development of the graph.
|
cs/0110044
|
EquiX--A Search and Query Language for XML
|
cs.DB
|
EquiX is a search language for XML that combines the power of querying with
the simplicity of searching. Requirements for such languages are discussed and
it is shown that EquiX meets the necessary criteria. Both a graph-based
abstract syntax and a formal concrete syntax are presented for EquiX queries.
In addition, the semantics is defined and an evaluation algorithm is presented.
The evaluation algorithm is polynomial under combined complexity.
EquiX combines pattern matching, quantification and logical expressions to
query both the data and meta-data of XML documents. The result of a query in
EquiX is a set of XML documents. A DTD describing the result documents is
derived automatically from the query.
|
cs/0110047
|
The Expresso Microarray Experiment Management System: The Functional
Genomics of Stress Responses in Loblolly Pine
|
cs.OH cs.CE q-bio.GN
|
Conception, design, and implementation of cDNA microarray experiments present
a variety of bioinformatics challenges for biologists and computational
scientists. The multiple stages of data acquisition and analysis have motivated
the design of Expresso, a system for microarray experiment management. Salient
aspects of Expresso include support for clone replication and randomized
placement; automatic gridding, extraction of expression data from each spot,
and quality monitoring; flexible methods of combining data from individual
spots into information about clones and functional categories; and the use of
inductive logic programming for higher-level data analysis and mining. The
development of Expresso is occurring in parallel with several generations of
microarray experiments aimed at elucidating genomic responses to drought stress
in loblolly pine seedlings. The current experimental design incorporates 384
pine cDNAs replicated and randomly placed in two specific microarray layouts.
We describe the design of Expresso as well as results of analysis with Expresso
that suggest the importance of molecular chaperones and membrane transport
proteins in mechanisms conferring successful adaptation to long-term drought
stress.
|
cs/0110048
|
Multivariant Branching Prediction, Reflection, and Retrospection
|
cs.CE cs.DC
|
In branching simulation, a novel approach to simulation presented in this
paper, a multiplicity of plausible scenarios are concurrently developed and
implemented. In conventional simulations of complex systems, there arise from
time to time uncertainties as to which of two or more alternatives are more
likely to be pursued by the system being simulated. Under these conditions the
simulationist makes a judicious choice of one of these alternatives and embeds
this choice in the simulation model. By contrast, in the branching approach,
two or more of such alternatives (or branches) are included in the model and
implemented for concurrent computer solution. The theoretical foundations for
branching simulation as a computational process are in the domains of
alternating Turing machines, molecular computing, and E-machines. Branching
simulations constitute the development of diagrams of scenarios representing
significant, alternative flows of events. Logical means for interpretation and
investigation of the branching simulation and prediction are provided by the
logical theories of possible worlds, which have been formalized by the
construction of logical varieties. Under certain conditions, the branching
approach can considerably enhance the efficiency of computer simulations and
provide more complete insights into the interpretation of predictions based on
simulations. As an example, the concepts developed in this paper have been
applied to a simulation task that plays an important role in radiology - the
noninvasive treatment of brain aneurysms.
|
cs/0110050
|
What is the minimal set of fragments that achieves maximal parse
accuracy?
|
cs.CL
|
We aim at finding the minimal set of fragments which achieves maximal parse
accuracy in Data Oriented Parsing. Experiments with the Penn Wall Street
Journal treebank show that counts of almost arbitrary fragments within parse
trees are important, leading to improved parse accuracy over previous models
tested on this treebank (a precision of 90.8% and a recall of 90.6%). We
isolate some dependency relations which previous models neglect but which
contribute to higher parse accuracy.
|
cs/0110051
|
Combining semantic and syntactic structure for language modeling
|
cs.CL
|
Structured language models for speech recognition have been shown to remedy
the weaknesses of n-gram models. All current structured language models are,
however, limited in that they do not take into account dependencies between
non-headwords. We show that non-headword dependencies contribute to
significantly improved word error rate, and that a data-oriented parsing model
trained on semantically and syntactically annotated data can exploit these
dependencies. This paper also contains the first DOP model trained by means of
a maximum likelihood reestimation procedure, which solves some of the
theoretical shortcomings of previous DOP models.
|
cs/0110052
|
Mragyati : A System for Keyword-based Searching in Databases
|
cs.DB
|
The web, through many search engine sites, has popularized the keyword-based
search paradigm, where a user can specify a string of keywords and expect to
retrieve relevant documents, possibly ranked by their relevance to the query.
Since a lot of information is stored in databases (and not as HTML documents),
it is important to provide a similar search paradigm for databases, where users
can query a database without knowing the database schema and database query
languages such as SQL. In this paper, we propose such a database search system,
which accepts a free-form query as a collection of keywords, translates it into
queries on the database using the database metadata, and presents query results
in a well-structured and browsable form. The system maps keywords onto the
database schema and uses inter-relationships (i.e., data semantics) among the
referred tables to generate meaningful query results. We also describe our
prototype for database search, called Mragyati. The approach proposed here is
scalable, as it does not build an in-memory graph of the entire database for
searching for relationships among the objects selected by the user's query.
|
cs/0110053
|
Machine Learning in Automated Text Categorization
|
cs.IR cs.LG
|
The automated categorization (or classification) of texts into predefined
categories has witnessed a booming interest in the last ten years, due to the
increased availability of documents in digital form and the ensuing need to
organize them. In the research community the dominant approach to this problem
is based on machine learning techniques: a general inductive process
automatically builds a classifier by learning, from a set of preclassified
documents, the characteristics of the categories. The advantages of this
approach over the knowledge engineering approach (consisting in the manual
definition of a classifier by domain experts) are a very good effectiveness,
considerable savings in terms of expert manpower, and straightforward
portability to different domains. This survey discusses the main approaches to
text categorization that fall within the machine learning paradigm. We will
discuss in detail issues pertaining to three different problems, namely
document representation, classifier construction, and classifier evaluation.
|
cs/0110055
|
Analytical solution of transient scalar wave and diffusion problems of
arbitrary dimensionality and geometry by RBF wavelet series
|
cs.NA cs.CE
|
This study applies the RBF wavelet series to the evaluation of analytical
solutions of linear time-dependent wave and diffusion problems of any
dimensionality and geometry. To the best of the author's knowledge, such
analytical solutions have never been achieved before. The RBF wavelets can be
understood as an alternative, for multidimensional problems, to the standard
Fourier series, via fundamental and general solutions of partial differential
equations. The present RBF wavelets are infinitely differentiable, compactly
supported,
orthogonal over different scales and very simple. The rigorous mathematical
proof of completeness and convergence is still missing in this study. The
present work may open a new window to numerical solution and theoretical
analysis of many other high-dimensional time-dependent PDE problems under
arbitrary geometry.
|
cs/0110057
|
Generating Multilingual Personalized Descriptions of Museum Exhibits -
The M-PIRO Project
|
cs.CL cs.AI
|
This paper provides an overall presentation of the M-PIRO project. M-PIRO is
developing technology that will allow museums to generate automatically textual
or spoken descriptions of exhibits for collections available over the Web or in
virtual reality environments. The descriptions are generated in several
languages from information in a language-independent database and small
fragments of text, and they can be tailored according to the backgrounds of the
users, their ages, and their previous interaction with the system. An authoring
tool allows museum curators to update the system's database and to control the
language and content of the resulting descriptions. Although the project is
still in progress, a Web-based demonstrator that supports English, Greek and
Italian is already available, and it is used throughout the paper to highlight
the capabilities of the emerging technology.
|
cs/0110067
|
Analysis of Investment Policy in Belarus
|
cs.CE
|
The optimal planning trajectory is analyzed on the basis of the growth model
with effectiveness. The saving-per-capital value has to be rather high
initially, with a smooth decrease in subsequent years.
|
cs/0111003
|
The Use of Classifiers in Sequential Inference
|
cs.LG cs.CL
|
We study the problem of combining the outcomes of several different
classifiers in a way that provides a coherent inference that satisfies some
constraints. In particular, we develop two general approaches for an important
subproblem-identifying phrase structure. The first is a Markovian approach that
extends standard HMMs to allow the use of a rich observation structure and of
general classifiers to model state-observation dependencies. The second is an
extension of constraint satisfaction formalisms. We develop efficient
combination algorithms under both models and study them experimentally in the
context of shallow parsing.
|
cs/0111004
|
The Relational Database Aspects of Argonne's ATLAS Control System
|
cs.DB
|
Argonne's ATLAS (Argonne Tandem Linac Accelerator System) control system
comprises two
separate database concepts. The first is the distributed real-time database
structure provided by the commercial product Vsystem [1]. The second is a more
static relational database archiving system designed by ATLAS personnel using
Oracle Rdb [2] and Paradox [3] software. The configuration of the ATLAS
facility has presented a unique opportunity to construct a control system
relational database that is capable of storing and retrieving complete archived
tune-up configurations for the entire accelerator. This capability has been a
major factor in allowing the facility to adhere to a rigorous operating
schedule. Most recently, a Web-based operator interface to the control system's
Oracle Rdb database has been installed. This paper explains the history of the
ATLAS database systems, how they interact with each other, the design of the
new Web-based operator interface, and future plans.
|
cs/0111006
|
Proliferation of SDDS Support for Various Platforms and Languages
|
cs.DB
|
Since Self-Describing Data Sets (SDDS) were first introduced, the source code
has been ported to many different operating systems and various languages. SDDS
is now available in C, Tcl, Java, Fortran, and Python. All of these versions
are supported on Solaris, Linux, and Windows. The C version of SDDS is also
supported on VxWorks. With the recent addition of the Java port, SDDS can now
be deployed on virtually any operating system. Due to this proliferation, SDDS
files serve to link not only a collection of C programs, but programs and
scripts in many languages on different operating systems. The platform
independent binary feature of SDDS also facilitates portability among operating
systems. This paper presents an overview of various benefits of SDDS platform
interoperability.
|
cs/0111007
|
Explaining Scenarios for Information Personalization
|
cs.HC cs.IR
|
Personalization customizes information access. The PIPE ("Personalization is
Partial Evaluation") modeling methodology represents interaction with an
information space as a program. The program is then specialized to a user's
known interests or information seeking activity by the technique of partial
evaluation. In this paper, we elaborate PIPE by considering requirements
analysis in the personalization lifecycle. We investigate the use of scenarios
as a means of identifying and analyzing personalization requirements. As our
first result, we show how designing a PIPE representation can be cast as a
search within a space of PIPE models, organized along a partial order. This
allows us to view the design of a personalization system, itself, as
specialized interpretation of an information space. We then exploit the
underlying equivalence of explanation-based generalization (EBG) and partial
evaluation to realize high-level goals and needs identified in scenarios; in
particular, we specialize (personalize) an information space based on the
explanation of a user scenario in that information space, just as EBG
specializes a theory based on the explanation of an example in that theory. In
this approach, personalization becomes the transformation of information spaces
to support the explanation of usage scenarios. An example application is
described.
|
cs/0111012
|
Intelligent Anticipated Exploration of Web Sites
|
cs.AI cs.IR
|
In this paper we describe a web search agent, called Global Search Agent
(hereafter GSA for short). GSA integrates and enhances several search
techniques in order to achieve significant improvements in the user-perceived
quality of delivered information as compared to usual web search engines. GSA
features intelligent merging of relevant documents from different search
engines, anticipated selective exploration and evaluation of links from the
current result set, automated derivation of refined queries based on user
relevance feedback. System architecture as well as experimental accounts are
also illustrated.
|
cs/0111015
|
The SDSS SkyServer, Public Access to the Sloan Digital Sky Server Data
|
cs.DL cs.DB
|
The SkyServer provides Internet access to the public Sloan Digital Sky Survey
(SDSS) data for both astronomers and for science education. This paper
describes the SkyServer goals and architecture. It also describes our
experience operating the SkyServer on the Internet. The SDSS data is public and
well-documented so it makes a good test platform for research on database
algorithms and performance.
|
cs/0111018
|
Data Acquisition and Database Management System for Samsung
Superconductor Test Facility
|
cs.DB cs.AI
|
In order to fulfill the test requirement of KSTAR (Korea Superconducting
Tokamak Advanced Research) superconducting magnet system, a large scale
superconducting magnet and conductor test facility, SSTF (Samsung
Superconductor Test Facility), has been constructed at Samsung Advanced
Institute of Technology. The computer system for SSTF DAC (Data Acquisition and
Control) is based on UNIX, and VxWorks is used as the real-time OS of the VME
system. EPICS (Experimental Physics and Industrial Control System) is
used for the communication between IOC server and client. A database program
has been developed for the efficient management of measured data and a Linux
workstation with PENTIUM-4 CPU is used for the database server. In this paper,
the current status of SSTF DAC system, the database management system and
recent test results are presented.
|
cs/0111038
|
Arc consistency for soft constraints
|
cs.AI cs.CC cs.DS
|
The notion of arc consistency plays a central role in constraint
satisfaction. It is known that the notion of local consistency can be extended
to constraint optimisation problems defined by soft constraint frameworks based
on an idempotent cost combination operator. This excludes non-idempotent
operators such as +, which define problems that are very important in practical
applications such as Max-CSP, where the aim is to minimize the number of
violated constraints. In this paper, we show that using a weak additional axiom
satisfied by most existing soft constraints proposals, it is possible to define
a notion of soft arc consistency that extends the classical notion of arc
consistency, even in the case of non-idempotent cost combination
operators. A polynomial time algorithm for enforcing this soft arc consistency
exists and its space and time complexities are identical to that of enforcing
arc consistency in CSPs when the cost combination operator is strictly
monotonic (for example Max-CSP). A directional version of arc consistency is
potentially even stronger than the non-directional version, since it allows non
local propagation of penalties. We demonstrate the utility of directional arc
consistency by showing that it not only solves soft constraint problems on
trees, but that it also implies a form of local optimality, which we call arc
irreducibility.
|
cs/0111051
|
Predicting RNA Secondary Structures with Arbitrary Pseudoknots by
Maximizing the Number of Stacking Pairs
|
cs.CE cs.DS q-bio
|
The paper investigates the computational problem of predicting RNA secondary
structures. The general belief is that allowing pseudoknots makes the problem
hard. Existing polynomial-time algorithms are heuristic algorithms with no
performance guarantee and can only handle limited types of pseudoknots. In this
paper we initiate the study of predicting RNA secondary structures with a
maximum number of stacking pairs while allowing arbitrary pseudoknots. We
obtain two approximation algorithms with worst-case approximation ratios of 1/2
and 1/3 for planar and general secondary structures, respectively. For an RNA
sequence of $n$ bases, the approximation algorithm for planar secondary
structures runs in $O(n^3)$ time while that for the general case runs in linear
time. Furthermore, we prove that allowing pseudoknots makes it NP-hard to
maximize the number of stacking pairs in a planar secondary structure. This
result is in contrast with the recent NP-hard results on pseudoknots which are
based on optimizing some general and complicated energy functions.
|
cs/0111054
|
The similarity metric
|
cs.CC cond-mat.stat-mech cs.CE cs.CV math.CO math.MG math.ST physics.data-an q-bio.GN stat.TH
|
A new class of distances appropriate for measuring similarity relations
between sequences, say one type of similarity per distance, is studied. We
propose a new ``normalized information distance'', based on the noncomputable
notion of Kolmogorov complexity, and show that it is in this class and it
minorizes every computable distance in the class (that is, it is universal in
that it discovers all computable similarities). We demonstrate that it is a
metric and call it the {\em similarity metric}. This theory forms the
foundation for a new practical tool. To evidence generality and robustness we
give two distinctive applications in widely divergent areas using standard
compression programs like gzip and GenCompress. First, we compare whole
mitochondrial genomes and infer their evolutionary history. This results in a
first completely automatic computed whole mitochondrial phylogeny tree.
Secondly, we fully automatically compute the language tree of 52 different
languages.
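The practical tool the abstract alludes to is commonly approximated by replacing the noncomputable Kolmogorov complexity C with the output length of a real compressor, giving the normalized compression distance NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)). A minimal sketch using Python's zlib in place of gzip or GenCompress:

```python
# Normalized compression distance: approximate the normalized information
# distance with a real compressor (zlib here, standing in for gzip).
import zlib

def C(data: bytes) -> int:
    """Compressed length, used as a stand-in for Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    cx, cy, cxy = C(x), C(y), C(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"the quick brown fox jumps over the lazy dog " * 20
b = b"the quick brown fox jumps over the lazy dog " * 20
c = bytes(range(256)) * 4  # unrelated byte pattern

print(ncd(a, b))  # near 0: identical sequences
print(ncd(a, c))  # much larger: unrelated sequences
```

With a good compressor the value stays roughly in [0, 1], small for similar sequences and close to 1 for unrelated ones, which is what makes the phylogeny and language-tree experiments possible.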
|
cs/0111058
|
Bayesian Logic Programs
|
cs.AI cs.LO
|
Bayesian networks provide an elegant formalism for representing and reasoning
about uncertainty using probability theory. They are a probabilistic extension
of propositional logic and, hence, inherit some of its limitations, such as the
difficulty of representing objects and relations. We introduce a generalization
of Bayesian networks, called Bayesian
logic programs, to overcome these limitations. In order to represent objects
and relations it combines Bayesian networks with definite clause logic by
establishing a one-to-one mapping between ground atoms and random variables. We
show that Bayesian logic programs combine the advantages of both definite
clause logic and Bayesian networks. This includes the separation of
quantitative and qualitative aspects of the model. Furthermore, Bayesian logic
programs generalize both Bayesian networks as well as logic programs. So, many
ideas developed
|
cs/0111060
|
Gradient-based Reinforcement Planning in Policy-Search Methods
|
cs.AI
|
We introduce a learning method called ``gradient-based reinforcement
planning'' (GREP). Unlike traditional DP methods that improve their policy
backwards in time, GREP is a gradient-based method that plans ahead and
improves its policy before it actually acts in the environment. We derive
formulas for the exact policy gradient that maximizes the expected future
reward and confirm our ideas with numerical experiments.
|
cs/0111063
|
New RBF collocation methods and kernel RBF with applications
|
cs.NA cs.CE
|
A few novel radial basis function (RBF) discretization schemes for partial
differential equations are developed in this study. For boundary-type methods,
we derive the indirect and direct symmetric boundary knot methods. Based on the
multiple reciprocity principle, the boundary particle method is introduced for
general inhomogeneous problems without using inner nodes. For domain-type
schemes, by using the Green integral we develop a novel Hermite RBF scheme
called the modified Kansa method, which significantly reduces calculation
errors at close-to-boundary nodes. To avoid the Gibbs phenomenon, we present the
least-squares RBF collocation scheme. Finally, five types of the kernel RBF are
also briefly presented.
|
cs/0111064
|
A procedure for unsupervised lexicon learning
|
cs.CL
|
We describe an incremental unsupervised procedure to learn words from
transcribed continuous speech. The algorithm is based on a conservative and
traditional statistical model, and results of empirical tests show that it is
competitive with other algorithms that have been proposed recently for this
task.
|
cs/0111065
|
A Statistical Model for Word Discovery in Transcribed Speech
|
cs.CL
|
A statistical model for segmentation and word discovery in continuous speech
is presented. An incremental unsupervised learning algorithm to infer word
boundaries based on this model is described. Results of empirical tests showing
that the algorithm is competitive with other models that have been used for
similar tasks are also presented.
|
cs/0112003
|
Using a Support-Vector Machine for Japanese-to-English Translation of
Tense, Aspect, and Modality
|
cs.CL
|
This paper describes experiments carried out using a variety of
machine-learning methods, including the k-nearest neighborhood method that was
used in a previous study, for the translation of tense, aspect, and modality.
It was found that the support-vector machine method was the most precise of all
the methods tested.
|
cs/0112004
|
Part of Speech Tagging in Thai Language Using Support Vector Machine
|
cs.CL
|
The elastic-input neuro tagger and hybrid tagger, combined with a neural
network and Brill's error-driven learning, have already been proposed for the
purpose of constructing a practical tagger using as little training data as
possible. When a small Thai corpus is used for training, these taggers have
tagging accuracies of 94.4% and 95.5% (accounting only for the ambiguous words
in terms of the part of speech), respectively. In this study, in order to
construct more accurate taggers we developed new tagging methods using three
machine learning methods: the decision-list, maximum entropy, and support
vector machine methods. We then performed tagging experiments by using these
methods. Our results showed that the support vector machine method has the best
precision (96.1%), and that it is capable of improving the accuracy of tagging
in the Thai language. Finally, we theoretically examined all these methods and
discussed how the improvements were achieved.
|
cs/0112005
|
Universal Model for Paraphrasing -- Using Transformation Based on a
Defined Criteria --
|
cs.CL
|
This paper describes a universal model for paraphrasing that transforms
according to defined criteria. We showed that by using different criteria we
could construct different kinds of paraphrasing systems including one for
answering questions, one for compressing sentences, one for polishing up, and
one for transforming written language to spoken language.
|
cs/0112006
|
A Logic Programming Approach to Knowledge-State Planning: Semantics and
Complexity
|
cs.AI cs.LO
|
We propose a new declarative planning language, called K, which is based on
principles and methods of logic programming. In this language, transitions
between states of knowledge can be described, rather than transitions between
completely described states of the world, which makes the language well-suited
for planning under incomplete knowledge. Furthermore, it enables the use of
default principles in the planning process by supporting negation as failure.
Nonetheless, K also supports the representation of transitions between states
of the world (i.e., states of complete knowledge) as a special case, which
shows that the language is very flexible. As we demonstrate on particular
examples, the use of knowledge states may allow for a natural and compact
problem representation. We then provide a thorough analysis of the
computational complexity of K, and consider different planning problems,
including standard planning and secure planning (also known as conformant
planning) problems. We show that these problems have different complexities
under various restrictions, ranging from NP to NEXPTIME in the propositional
case. Our results form the theoretical basis for the DLV^K system, which
implements the language K on top of the DLV logic programming system.
|
cs/0112007
|
A Tight Upper Bound on the Number of Candidate Patterns
|
cs.DB cs.AI
|
In the context of mining for frequent patterns using the standard levelwise
algorithm, the following question arises: given the current level and the
current set of frequent patterns, what is the maximal number of candidate
patterns that can be generated on the next level? We answer this question by
providing a tight upper bound, derived from a combinatorial result from the
sixties by Kruskal and Katona. Our result is useful to reduce the number of
database scans.
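For illustration, a minimal sketch of how such a bound can be evaluated: write the number m of frequent k-patterns in its k-cascade (combinatorial) representation and sum the "shifted" binomials, in the spirit of the Kruskal-Katona theorem. Function names are ours, and this follows the standard textbook statement rather than necessarily the paper's exact formulation:

```python
from math import comb

def cascade(m, k):
    """Greedy k-cascade representation of m:
    m = C(a_k, k) + C(a_{k-1}, k-1) + ... with a_k > a_{k-1} > ..."""
    parts = []
    i = k
    while m > 0 and i >= 1:
        a = i
        while comb(a + 1, i) <= m:
            a += 1
        parts.append((a, i))
        m -= comb(a, i)
        i -= 1
    return parts

def max_candidates(m, k):
    """Upper bound on the number of candidate (k+1)-patterns that can be
    generated from m frequent k-patterns in a levelwise algorithm."""
    return sum(comb(a, i + 1) for a, i in cascade(m, k))
```

For example, six frequent 2-itemsets yield at most max_candidates(6, 2) = 4 candidate 3-itemsets (all triples over four items), so a scan of the next level can be skipped or sized accordingly.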
|
cs/0112008
|
Representation of Uncertainty for Limit Processes
|
cs.AI cs.NA
|
Many mathematical models utilize limit processes. Continuous functions and
the calculus, differential equations and topology, all are based on limits and
continuity. However, when we perform measurements and computations, we can
achieve only approximate results. In some cases, this discrepancy between
theoretical schemes and practical actions drastically changes the outcomes of
research and decision-making, resulting in uncertainty of knowledge. In the
paper, a mathematical approach to this kind of uncertainty, which emerges in
computation and measurement, is suggested on the basis of the concept of a fuzzy
limit. A mathematical technique is developed for differential models with
uncertainty. To take into account the intrinsic uncertainty of a model, it is
suggested to use fuzzy derivatives instead of conventional derivatives of
functions in this model.
|
cs/0112009
|
DNA Self-Assembly For Constructing 3D Boxes
|
cs.CC cs.CE
|
We propose a mathematical model of DNA self-assembly using 2D tiles to form
3D nanostructures. This is the first work to combine studies in self-assembly
and nanotechnology in 3D, just as Rothemund and Winfree did in the 2D case. Our
model is a more precise superset of their Tile Assembly Model that facilitates
building scalable 3D molecules. Under our model, we present algorithms to build
a hollow cube, which is intuitively one of the simplest 3D structures to
construct. We also introduce five basic measures of complexity to analyze these
algorithms. Our model and algorithmic techniques are applicable to more complex
2D and 3D nanostructures.
|
cs/0112010
|
A Straightforward Approach to Morphological Analysis and Synthesis
|
cs.CL cs.DS
|
In this paper we present a lexicon-based approach to the problem of
morphological processing. Full-form words, lemmas and grammatical tags are
interconnected in a DAWG. Thus, the process of analysis/synthesis is reduced to
a search in the graph, which is very fast and can be performed even if several
pieces of information are missing from the input. The contents of the DAWG are
updated using an on-line incremental process. The proposed approach is language
independent and it does not utilize any morphophonetic rules or any other
special linguistic information.
|
cs/0112011
|
Interactive Constrained Association Rule Mining
|
cs.DB cs.AI
|
We investigate ways to support interactive mining sessions, in the setting of
association rule mining. In such sessions, users specify conditions (queries)
on the associations to be generated. Our approach is a combination of the
integration of querying conditions inside the mining phase, and the incremental
querying of already generated associations. We present several concrete
algorithms and compare their performance.
|
cs/0112013
|
A Data Mining Framework for Optimal Product Selection in Retail
Supermarket Data: The Generalized PROFSET Model
|
cs.DB cs.AI
|
In recent years, data mining researchers have developed efficient association
rule algorithms for retail market basket analysis. Still, retailers often
complain about how to adopt association rules to optimize concrete retail
marketing-mix decisions. It is in this context that, in a previous paper, the
authors have introduced a product selection model called PROFSET. This model
selects the most interesting products from a product assortment based on their
cross-selling potential, given some retailer-defined constraints. However, this
model suffered from an important deficiency: it could not deal effectively with
supermarket data, and no provisions were taken to include retail category
management principles. Therefore, in this paper, the authors present an
important generalization of the existing model in order to make it suitable for
supermarket data as well, and to enable retailers to add category restrictions
to the model. Experiments on real world data obtained from a Belgian
supermarket chain produce very promising results and demonstrate the
effectiveness of the generalized PROFSET model.
|
cs/0112015
|
Rational Competitive Analysis
|
cs.AI
|
Much work in computer science has adopted competitive analysis as a tool for
decision making under uncertainty. In this work we extend competitive analysis
to the context of multi-agent systems. Unlike classical competitive analysis
where the behavior of an agent's environment is taken to be arbitrary, we
consider the case where an agent's environment consists of other agents. These
agents will usually obey some (minimal) rationality constraints. This leads to
the definition of rational competitive analysis. We introduce the concept of
rational competitive analysis, and initiate the study of competitive analysis
for multi-agent systems. We also discuss the application of rational
competitive analysis to the context of bidding games, as well as to the
classical one-way trading problem.
|
cs/0112018
|
Fast Context-Free Grammar Parsing Requires Fast Boolean Matrix
Multiplication
|
cs.CL cs.DS
|
In 1975, Valiant showed that Boolean matrix multiplication can be used for
parsing context-free grammars (CFGs), yielding the asymptotically fastest
(although not practical) CFG parsing algorithm known. We prove a dual result:
any CFG parser with time complexity $O(g n^{3 - \epsilon})$, where $g$ is the
size of the grammar and $n$ is the length of the input string, can be
efficiently converted into an algorithm to multiply $m \times m$ Boolean
matrices in time $O(m^{3 - \epsilon/3})$.
Given that practical, substantially sub-cubic Boolean matrix multiplication
algorithms have been quite difficult to find, we thus explain why there has
been little progress in developing practical, substantially sub-cubic general
CFG parsers. In proving this result, we also develop a formalization of the
notion of parsing.
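For context, the cubic-time algorithm this result is dual to is CYK-style dynamic programming over a grammar in Chomsky normal form. A minimal recognizer, using our own toy encoding of productions (a unary production is a 1-tuple holding a terminal, a binary one a 2-tuple of nonterminals):

```python
def cyk(word, grammar, start="S"):
    """CYK recognizer: O(g * n^3) for grammar size g and input length n."""
    n = len(word)
    if n == 0:
        return False
    # table[i][j]: set of nonterminals deriving word[i..j]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, ch in enumerate(word):
        for lhs, prods in grammar.items():
            if (ch,) in prods:
                table[i][i].add(lhs)
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):  # split point
                for lhs, prods in grammar.items():
                    for p in prods:
                        if len(p) == 2 and p[0] in table[i][k] \
                                and p[1] in table[k + 1][j]:
                            table[i][j].add(lhs)
    return start in table[0][n - 1]
```

Valiant's construction replaces the inner split-point loop with Boolean matrix products over these table sets; the theorem above shows the dependence also runs the other way.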
|
cs/0112019
|
Distribution of Mutual Information
|
cs.AI cs.IT math.IT math.ST stat.TH
|
The mutual information of two random variables i and j with joint
probabilities t_ij is commonly used in learning Bayesian nets as well as in
many other fields. The chances t_ij are usually estimated by the empirical
sampling frequency n_ij/n leading to a point estimate I(n_ij/n) for the mutual
information. To answer questions like "is I(n_ij/n) consistent with zero?" or
"what is the probability that the true mutual information is much larger than
the point estimate?" one has to go beyond the point estimate. In the Bayesian
framework one can answer these questions by utilizing a (second order) prior
distribution p(t) comprising prior information about t. From the prior p(t) one
can compute the posterior p(t|n), from which the distribution p(I|n) of the
mutual information can be calculated. We derive reliable and quickly computable
approximations for p(I|n). We concentrate on the mean, variance, skewness, and
kurtosis, and non-informative priors. For the mean we also give an exact
expression. Numerical issues and the range of validity are discussed.
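The analytic approximations described here have a straightforward Monte Carlo counterpart: draw joint tables from the Dirichlet posterior and evaluate the plug-in mutual information on each draw. A sketch under a uniform Dirichlet prior (the paper's own results are exact or analytic approximations, not sampling):

```python
import numpy as np

def mutual_information(t):
    """Plug-in mutual information I(t) of a joint probability table t_ij."""
    t = np.asarray(t, dtype=float)
    outer = t.sum(axis=1, keepdims=True) @ t.sum(axis=0, keepdims=True)
    mask = t > 0
    return float(np.sum(t[mask] * np.log(t[mask] / outer[mask])))

def posterior_mi_samples(counts, draws=1000, prior=1.0, seed=0):
    """Monte Carlo draws from p(I|n): sample joint tables t from the
    Dirichlet posterior over the counts n_ij and evaluate I on each."""
    counts = np.asarray(counts, dtype=float)
    rng = np.random.default_rng(seed)
    tables = rng.dirichlet(counts.ravel() + prior, size=draws)
    return np.array([mutual_information(t.reshape(counts.shape))
                     for t in tables])
```

The spread of posterior_mi_samples around the point estimate I(n_ij/n) answers questions such as "is the estimate consistent with zero?" at the cost of sampling rather than closed-form moments.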
|
cs/0201002
|
Incremental Construction of Compact Acyclic NFAs
|
cs.DS cs.CL
|
This paper presents and analyzes an incremental algorithm for the
construction of Acyclic Non-deterministic Finite-state Automata (NFA). Automata
of this type are quite useful in computational linguistics, especially for
storing lexicons. The proposed algorithm produces compact NFAs, i.e. NFAs that
do not contain equivalent states. Unlike in the case of Deterministic
Finite-state Automata (DFA), this property is not sufficient to ensure
minimality, but still the
resulting NFAs are considerably smaller than the minimal DFAs for the same
languages.
|
cs/0201005
|
Sharpening Occam's Razor
|
cs.LG cond-mat.dis-nn cs.AI cs.CC math.PR physics.data-an
|
We provide a new representation-independent formulation of Occam's razor
theorem, based on Kolmogorov complexity. This new formulation allows us to:
(i) Obtain better sample complexity than both length-based and VC-based
versions of Occam's razor theorem, in many applications.
(ii) Achieve a sharper reverse of Occam's razor theorem than previous work.
Specifically, we weaken the assumptions made in an earlier publication, and
extend the reverse to superpolynomial running times.
|
cs/0201008
|
Using Tree Automata and Regular Expressions to Manipulate Hierarchically
Structured Data
|
cs.CL cs.DS
|
Information, stored or transmitted in digital form, is often structured.
Individual data records are usually represented as hierarchies of their
elements. Together, records form larger structures. Information processing
applications have to take account of this structuring, which assigns different
semantics to different data elements or records. The big variety of structural
schemata in use today often requires much flexibility from applications--for
example, to process information coming from different sources. To ensure
application interoperability, translators are needed that can convert one
structure into another. This paper puts forward a formal data model aimed at
supporting hierarchical data processing in a simple and flexible way. The model
is based on and extends results of two classical theories, studying finite
string and tree automata. The concept of finite automata and regular languages
is applied to the case of arbitrarily structured tree-like hierarchical data
records, represented as "structured strings." These automata are compared with
classical string and tree automata; the model is shown to be a superset of the
classical models. Regular grammars and expressions over structured strings are
introduced. Regular expression matching and substitution have been widely used
for efficient unstructured text processing; the model described here brings the
power of this proven technique to applications that deal with information
trees. A simple generic alternative is offered to replace today's specialised
ad-hoc approaches. The model unifies structural and content transformations,
providing applications with a single data type. An example scenario of how to
build applications based on this theory is discussed. Further research
directions are outlined.
|
cs/0201009
|
The performance of the batch learner algorithm
|
cs.LG cs.DM
|
We analyze completely the convergence speed of the \emph{batch learning
algorithm}, and compare its speed to that of the memoryless learning algorithm
and of learning with memory. We show that the batch learning algorithm is never
worse than the memoryless learning algorithm (at least asymptotically). Its
performance \emph{vis-a-vis} learning with full memory is less clearcut, and
depends on certain probabilistic assumptions.
|
cs/0201013
|
Computing Preferred Answer Sets by Meta-Interpretation in Answer Set
Programming
|
cs.LO cs.AI
|
Recently, Answer Set Programming (ASP) has been attracting interest as a new
paradigm for problem solving. An important aspect which needs to be supported
is the handling of preferences between rules, for which several approaches have
been presented. In this paper, we consider the problem of implementing
preference handling approaches by means of meta-interpreters in Answer Set
Programming. In particular, we consider the preferred answer set approaches by
Brewka and Eiter, by Delgrande, Schaub and Tompits, and by Wang, Zhou and Lin.
We present suitable meta-interpreters for these semantics using DLV, which is
an efficient engine for ASP. Moreover, we also present a meta-interpreter for
the weakly preferred answer set approach by Brewka and Eiter, which uses the
weak constraint feature of DLV as a tool for expressing and solving an
underlying optimization problem. We also consider advanced meta-interpreters,
which make use of graph-based characterizations and often allow for more
efficient computations. Our approach shows the suitability of ASP in general
and of DLV in particular for fast prototyping. This can be fruitfully exploited
for experimenting with new languages and knowledge-representation formalisms.
|
cs/0201014
|
The Dynamics of AdaBoost Weights Tells You What's Hard to Classify
|
cs.LG cs.DS
|
The dynamical evolution of weights in the AdaBoost algorithm contains useful
information about the role that the associated data points play in the building
of the AdaBoost model. In particular, the dynamics induces a bipartition of the
data set into two (easy/hard) classes. Easy points have little influence on the
making of the model, while the varying relevance of hard points can be gauged
in terms of an entropy value associated with their evolution. Smooth
approximations of entropy highlight regions where classification is most
uncertain. Promising results are obtained when methods proposed are applied in
the Optimal Sampling framework.
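As a toy reading of this idea (our own construction, not the paper's estimator): run AdaBoost with 1-D threshold stumps, record every point's weight trajectory, and score each point by the Shannon entropy of its normalised trajectory. Easy points shed weight quickly (low entropy), while hard points keep attracting it (entropy near log of the number of rounds):

```python
import numpy as np

def adaboost_weight_entropy(X, y, rounds=30):
    """AdaBoost on a 1-D feature with exhaustive threshold stumps.
    Returns, per point, the entropy of its weight trajectory."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    history = []
    for _ in range(rounds):
        # pick the stump (threshold t, sign s) with minimal weighted error
        best = None
        for t in np.unique(X):
            for s in (1, -1):
                pred = np.where(X >= t, s, -s)
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, pred)
        err, pred = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        w = w * np.exp(-alpha * y * pred)
        w /= w.sum()
        history.append(w.copy())
    H = np.array(history)          # shape: rounds x points
    traj = H / H.sum(axis=0)       # normalise each point's trajectory
    return -(traj * np.log(traj + 1e-12)).sum(axis=0)
```

On a small set with one mislabelled point, that point's entropy dominates the consistently classified ones.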
|
cs/0201016
|
A computer scientist looks at game theory
|
cs.GT cs.DC cs.MA
|
I consider issues in distributed computation that should be of relevance to
game theory. In particular, I focus on (a) representing knowledge and
uncertainty, (b) dealing with failures, and (c) specification of mechanisms.
|
cs/0201017
|
Collusion in Unrepeated, First-Price Auctions with an Uncertain Number
of Participants
|
cs.GT cs.AI
|
We consider the question of whether collusion among bidders (a "bidding
ring") can be supported in equilibrium of unrepeated first-price auctions.
Unlike previous work on the topic such as that by McAfee and McMillan [1992]
and Marshall and Marx [2007], we do not assume that non-colluding agents have
perfect knowledge about the number of colluding agents whose bids are
suppressed by the bidding ring, and indeed even allow for the existence of
multiple cartels. Furthermore, while we treat the association of bidders with
bidding rings as exogenous, we allow bidders to make strategic decisions about
whether to join bidding rings when invited. We identify a bidding ring protocol
that results in an efficient allocation in Bayes-Nash equilibrium, under which
non-colluding agents bid straightforwardly, and colluding agents join bidding
rings when invited and truthfully declare their valuations to the ring center.
We show that bidding rings benefit ring centers and all agents, both members
and non-members of bidding rings, at the auctioneer's expense. The techniques
we introduce in this paper may also be useful for reasoning about other
problems in which agents have asymmetric information about a setting.
|
cs/0201019
|
Structure from Motion: Theoretical Foundations of a Novel Approach Using
Custom Built Invariants
|
cs.CV math.DG
|
We rephrase the problem of 3D reconstruction from images in terms of
intersections of projections of orbits of custom built Lie groups actions. We
then use an algorithmic method based on moving frames "a la Fels-Olver" to
obtain a fundamental set of invariants of these groups actions. The invariants
are used to define a set of equations to be solved by the points of the 3D
object, providing a new technique for recovering 3D structure from motion.
|
cs/0201020
|
A Modal Logic Framework for Multi-agent Belief Fusion
|
cs.AI cs.LO
|
This paper is aimed at providing a uniform framework for reasoning about
beliefs of multiple agents and their fusion. In the first part of the paper, we
develop logics for reasoning about cautiously merged beliefs of agents with
different degrees of reliability. The logics are obtained by combining the
multi-agent epistemic logic and multi-source reasoning systems. Every ordering
for the reliability of the agents is represented by a modal operator, so we can
reason with the merged results under different situations. The fusion is
cautious in the sense that if an agent's belief is in conflict with those of
higher priorities, then his belief is completely discarded from the merged
result. We consider two strategies for the cautious merging of beliefs. In the
first one, if inconsistency occurs at some level, then all beliefs at the lower
levels are discarded simultaneously, so it is called level cutting strategy.
For the second one, only the level at which the inconsistency occurs is
skipped, so it is called level skipping strategy. The formal semantics and
axiomatic systems for these two strategies are presented. In the second part,
we extend the logics both syntactically and semantically to cover some more
sophisticated belief fusion and revision operators. While most existing
approaches treat belief fusion operators as meta-level constructs, these
operators are directly incorporated into our object logic language. Thus it is
possible to reason not only with the merged results but also about the fusion
process in our logics. The relationship of our extended logics with the
conditional logics of belief revision is also discussed.
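In propositional toy form, the two cautious strategies differ only in what happens after a conflicting level. This is our own illustrative encoding (a literal is either p or ('not', p)), not the paper's modal semantics:

```python
def merge_beliefs(levels, strategy="cut"):
    """Cautiously merge belief sets ordered from most to least reliable.
    A level 'conflicts' if adding it makes the pool inconsistent (i.e. the
    pool would contain both p and ('not', p)). 'cut' discards everything
    from the first conflicting level down; 'skip' drops only that level."""
    def consistent(s):
        return not any(("not", p) in s for p in s
                       if not isinstance(p, tuple))
    merged = set()
    for beliefs in levels:
        candidate = merged | beliefs
        if consistent(candidate):
            merged = candidate
        elif strategy == "cut":
            break
        # strategy == "skip": drop this level, continue with the next
    return merged
```

With levels [{p}, {not p, q}, {r}], level cutting keeps only {p}, while level skipping keeps {p, r}: the conflicting middle level is discarded either way, but the strategies disagree on the lower levels.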
|
cs/0201021
|
Learning to Play Games in Extensive Form by Valuation
|
cs.LG cs.GT
|
A valuation for a player in a game in extensive form is an assignment of
numeric values to the player's moves. The valuation reflects the desirability
of the moves. We assume a myopic player, who chooses a move with the highest
valuation. Valuations can also be revised, and hopefully improved, after each
play of the game. Here, a very simple valuation revision is considered, in
which the moves made in a play are assigned the payoff obtained in the play. We
show that by adopting such a learning process a player who has a winning
strategy in a win-lose game can almost surely guarantee a win in a repeated
game. When a player has more than two payoffs, a more elaborate learning
procedure is required. We consider one that associates with each move the
average payoff in the rounds in which this move was made. When all players
adopt this learning procedure, with some perturbations, then, with probability
1, strategies that are close to subgame perfect equilibrium are played after
some time. A single player who adopts this procedure can guarantee only her
individually rational payoff.
|
cs/0201022
|
A theory of experiment
|
cs.AI
|
This article aims at clarifying the language and practice of scientific
experiment, mainly by hooking observability on calculability.
|
cs/0201024
|
Design of statistical quality control procedures using genetic
algorithms
|
cs.NE
|
In general, we cannot use algebraic or enumerative methods to optimize a
quality control (QC) procedure so as to detect the critical random and
systematic analytical errors with stated probabilities while keeping the
probability of false rejection at a minimum.
as they do not require knowledge of the objective function to be optimized and
search through large parameter spaces quickly. To explore the application of
GAs in statistical QC, we have developed an interactive GAs based computer
program that designs a novel near optimal QC procedure, given an analytical
process. The program uses the deterministic crowding algorithm. An illustrative
application of the program suggests that it has the potential to design QC
procedures that are significantly better than 45 alternative ones that are used
in the clinical laboratories.
|
cs/0201026
|
An Empirical Model for Volatility of Returns and Option Pricing
|
cs.CE
|
In a seminal paper in 1973, Black and Scholes argued how expected
distributions of stock prices can be used to price options. Their model assumed
a directed random motion for the returns and consequently a lognormal
distribution of asset prices after a finite time. We point out two problems
with their formulation. First, we show that the option valuation is not
uniquely determined; in particular, strategies based on the delta-hedge and
CAPM (Capital Asset Pricing Model) are shown to provide different valuations of
an option. Second, asset returns are known not to be Gaussian distributed.
Empirically, distributions of returns are seen to be much better approximated
by an exponential distribution. This exponential distribution of asset prices
can be used to develop a new pricing model for options that is shown to provide
valuations that agree very well with those used by traders. We show how the
Fokker-Planck formulation of fluctuations (i.e., the dynamics of the
distribution) can be modified to provide an exponential distribution for
returns. We also show how a singular volatility can be used to go smoothly from
exponential to Gaussian returns and thereby illustrate why exponential returns
cannot be reached perturbatively starting from Gaussian ones, and explain how
the theory of 'stochastic volatility' can be obtained from our model by making
a bad approximation. Finally, we show how to calculate put and call prices for
a stretched exponential density.
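As a flavour of the pricing step only, here is a hedged sketch: a symmetric Laplace (exponential) density for the log-return with an assumed scale b, priced by plain quadrature, with no risk-neutral drift correction. The paper itself derives closed forms and fits the density to market data:

```python
import numpy as np

def call_price_laplace(S0, K, r, T, b):
    """Discounted expected payoff of a European call when the log-return x
    over [0, T] has density exp(-|x|/b) / (2b). Illustrative only."""
    x = np.linspace(-12 * b, 12 * b, 240001)   # truncate far into the tails
    dx = x[1] - x[0]
    pdf = np.exp(-np.abs(x) / b) / (2 * b)
    payoff = np.maximum(S0 * np.exp(x) - K, 0.0)
    return float(np.exp(-r * T) * np.sum(payoff * pdf) * dx)
```

The fat exponential tails make at-the-money options noticeably more expensive than under a Gaussian of comparable width, which is the qualitative point of replacing the lognormal assumption.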
|
cs/0202001
|
The Deductive Database System LDL++
|
cs.DB cs.AI
|
This paper describes the LDL++ system and the research advances that have
enabled its design and development. We begin by discussing the new nonmonotonic
and nondeterministic constructs that extend the functionality of the LDL++
language, while preserving its model-theoretic and fixpoint semantics. Then, we
describe the execution model and the open architecture designed to support
these new constructs and to facilitate the integration with existing DBMSs and
applications. Finally, we describe the lessons learned by using LDL++ on
various tested applications, such as middleware and data mining.
|
cs/0202004
|
A Qualitative Dynamical Modelling Approach to Capital Accumulation in
Unregulated Fisheries
|
cs.AI cs.CE
|
Capital accumulation has been a major issue in fisheries economics over the
last two decades, whereby the interaction of the fish and capital stocks were
of particular interest. Because bio-economic systems are intrinsically complex,
previous efforts in this field have relied on a variety of simplifying
assumptions. The model presented here relaxes some of these simplifications.
Problems of tractability are surmounted by using the methodology of qualitative
differential equations (QDE). The theory of QDEs takes into account that
scientific knowledge about particular fisheries is usually limited, and
facilitates an analysis of the global dynamics of systems with more than two
ordinary differential equations. The model is able to trace the evolution of
capital and fish stock in good agreement with observed patterns, and shows that
over-capitalization is unavoidable in unregulated fisheries.
|
cs/0202005
|
Secure History Preservation Through Timeline Entanglement
|
cs.DC cs.CR cs.DB cs.DS
|
A secure timeline is a tamper-evident historic record of the states through
which a system goes throughout its operational history. Secure timelines can
help us reason about the temporal ordering of system states in a provable
manner. We extend secure timelines to encompass multiple, mutually distrustful
services, using timeline entanglement. Timeline entanglement associates
disparate timelines maintained at independent systems, by linking undeniably
the past of one timeline to the future of another. Timeline entanglement is a
sound method to map a time step in the history of one service onto the timeline
of another, and helps clients of entangled services to get persistent temporal
proofs for services rendered that survive the demise or non-cooperation of the
originating service. In this paper we present the design and implementation of
Timeweave, our service development framework for timeline entanglement based on
two novel disk-based authenticated data structures. We evaluate Timeweave's
performance characteristics and show that it can be efficiently deployed in a
loosely-coupled distributed system of a few hundred services with overhead of
roughly 2-8% of the processing resources of a PC-grade system.
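The core mechanism can be caricatured with hash chains. This is a sketch: the SHA-256 chaining and the Timeline/entangle names are ours, not Timeweave's disk-based authenticated data structures:

```python
import hashlib

def link(prev: bytes, event: bytes) -> bytes:
    """One tamper-evident step: the new head commits to all prior history."""
    return hashlib.sha256(prev + b"|" + event).digest()

class Timeline:
    """Toy hash-chained timeline for a single service."""
    def __init__(self, name: str):
        self.name = name.encode()
        self.head = link(b"genesis", self.name)

    def append(self, event: bytes) -> bytes:
        self.head = link(self.head, event)
        return self.head

    def entangle(self, other: "Timeline") -> bytes:
        # commit the other service's current head into our own future,
        # undeniably linking its past to our subsequent states
        return self.append(other.name + b":" + other.head)
```

Because every head is a commitment to the whole history, an entangled head from service B can later prove a time step of service A even if A disappears or stops cooperating.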
|
cs/0202007
|
Steady State Resource Allocation Analysis of the Stochastic Diffusion
Search
|
cs.AI cs.NE
|
This article presents the long-term behaviour analysis of Stochastic
Diffusion Search (SDS), a distributed agent-based system for best-fit pattern
matching. SDS operates by allocating simple agents into different regions of
the search space. Agents independently pose hypotheses about the presence of
the pattern in the search space and its potential distortion. Assuming a
compositional structure of the hypotheses about pattern matching, agents
perform inference on the basis of partial evidence from the hypothesised
solution.
Agents posing mutually consistent hypotheses about the pattern support each
other and inhibit agents with inconsistent hypotheses. This results in the
emergence of a stable agent population identifying the desired solution.
Positive feedback via diffusion of information between the agents significantly
contributes to the speed with which the solution population is formed. The
formulation of the SDS model in terms of interacting Markov Chains enables its
characterisation in terms of the allocation of agents, or computational
resources. The analysis characterises the stationary probability distribution
of the activity of agents, which leads to the characterisation of the solution
population in terms of its similarity to the target pattern.
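A minimal SDS for best-fit string search makes the test/diffusion cycle concrete. This is a toy illustration of the model analysed here (parameter values and names are ours): each agent hypothesises a start position, tests one randomly chosen pattern component, and, if inactive, polls a random agent, copying its hypothesis when that agent is active:

```python
import random
from collections import Counter

def sds(text, pattern, n_agents=50, iters=300, seed=0):
    """Stochastic Diffusion Search locating `pattern` in `text`.
    Returns the final distribution of agent hypotheses."""
    rng = random.Random(seed)
    m = len(text) - len(pattern)
    hyp = [rng.randint(0, m) for _ in range(n_agents)]
    active = [False] * n_agents
    for _ in range(iters):
        # test phase: each agent checks a single random component
        for a in range(n_agents):
            k = rng.randrange(len(pattern))
            active[a] = text[hyp[a] + k] == pattern[k]
        # diffusion phase: inactive agents poll a random agent
        for a in range(n_agents):
            if not active[a]:
                b = rng.randrange(n_agents)
                hyp[a] = hyp[b] if active[b] else rng.randint(0, m)
    return Counter(hyp)
```

With an exact match present, positive feedback makes the agent population collapse onto its position, which is the stable cluster whose stationary distribution the paper characterises.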
|
cs/0202009
|
Non-negative sparse coding
|
cs.NE cs.CV
|
Non-negative sparse coding is a method for decomposing multivariate data into
non-negative sparse components. In this paper we briefly describe the
motivation behind this type of data representation and its relation to standard
sparse coding and non-negative matrix factorization. We then give a simple yet
efficient multiplicative algorithm for finding the optimal values of the hidden
components. In addition, we show how the basis vectors can be learned from the
observed data. Simulations demonstrate the effectiveness of the proposed
method.
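A minimal sketch of a multiplicative update of this kind, solving only for the hidden components S with the basis A held fixed, under the objective 0.5*||X - A S||^2 + lam * sum(S). Treat it as illustrative; the full method also learns the basis vectors:

```python
import numpy as np

def nn_sparse_code(X, A, lam=0.1, iters=200, seed=0):
    """Multiplicative updates keeping S non-negative throughout:
    S <- S * (A^T X) / (A^T A S + lam)."""
    rng = np.random.default_rng(seed)
    S = rng.random((A.shape[1], X.shape[1]))   # positive initialisation
    for _ in range(iters):
        S *= (A.T @ X) / (A.T @ A @ S + lam + 1e-12)
    return S
```

Because the update is a ratio of non-negative terms, S stays non-negative without any projection step, and the penalty lam pushes small coefficients toward zero, yielding the sparse part of the decomposition.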
|
cs/0202012
|
Logic program specialisation through partial deduction: Control issues
|
cs.PL cs.AI
|
Program specialisation aims at improving the overall performance of programs
by performing source to source transformations. A common approach within
functional and logic programming, known respectively as partial evaluation and
partial deduction, is to exploit partial knowledge about the input. It is
achieved through a well-automated application of parts of the
Burstall-Darlington unfold/fold transformation framework. The main challenge in
developing systems is to design automatic control that ensures correctness,
efficiency, and termination. This survey and tutorial presents the main
developments in controlling partial deduction over the past 10 years and
analyses their respective merits and shortcomings. It ends with an assessment
of current achievements and sketches some remaining research challenges.
|
cs/0202013
|
The SDSS SkyServer: Public Access to the Sloan Digital Sky Server Data
|
cs.DL cs.DB
|
The SkyServer provides Internet access to the public Sloan Digital Sky
Survey (SDSS) data for both astronomers and science education. This paper
describes the SkyServer goals and architecture. It also describes our
experience operating the SkyServer on the Internet. The SDSS data is public and
well-documented so it makes a good test platform for research on database
algorithms and performance.
|
cs/0202014
|
Data Mining the SDSS SkyServer Database
|
cs.DB cs.DL
|
An earlier paper (Szalay et al., "Designing and Mining MultiTerabyte
Astronomy Archives: The Sloan Digital Sky Survey," ACM SIGMOD 2000) described
the Sloan Digital Sky Survey's (SDSS) data management needs by defining twenty
database queries and twelve data visualization tasks that a good data
management system should support. We built a database and interfaces to support
both the query load and also a website for ad-hoc access. This paper reports on
the database design, describes the data loading pipeline, and reports on the
query implementation and performance. The queries typically translated to a
single SQL statement. Most queries run in less than 20 seconds, allowing
scientists to interactively explore the database. This paper is an in-depth
tour of those queries. Readers should first have studied the companion overview
paper, Szalay et al., "The SDSS SkyServer, Public Access to the Sloan Digital
Sky Server Data," ACM SIGMOD 2002.
|