| id | title | categories | abstract |
|---|---|---|---|
cs/0310011
|
Re-Finding Found Things: An Exploratory Study of How Users Re-Find
Information
|
cs.HC cs.IR
|
The problem of how people find information is studied extensively; however,
the problem of how people organize, re-use, and re-find information that they
have found is not as well understood. Recently, several projects have conducted
in-situ studies to explore how people re-find and re-use information. Here, we
present results and observations from a controlled, laboratory study of
re-finding information found on the web.
Our study was conducted as a collaborative exercise with pairs of
participants. One participant acted as a retriever, helping the other
participant re-find information by telephone. This design allowed us to gain
insight into the strategies that users employed to re-find information, and
into how domain artifacts and contextual information were used to aid the
re-finding process. We also introduced the ability for users to add their own
explicit artifacts by annotating the web pages they viewed.
We observe that re-finding often occurs as a two stage, iterative process in
which users first attempt to locate an information source (search), and once
found, begin a process to find the specific information being sought (browse).
Our findings are consistent with research on waypoints; orienteering approaches
to re-finding; and navigation of electronic spaces. Furthermore, we observed
that annotations were utilized extensively, indicating that explicitly added
context by the user can play an important role in re-finding.
|
cs/0310012
|
A Formal Comparison of Visual Web Wrapper Generators
|
cs.DB
|
We study the core fragment of the Elog wrapping language used in the Lixto
system (a visual wrapper generator) and formally compare Elog to other wrapping
languages proposed in the literature.
|
cs/0310013
|
WebTeach in practice: the entrance test to the Engineering faculty in
Florence
|
cs.HC cs.IR
|
We present the WebTeach project, comprising a web interface to a database for
test management, a wiki site for distributing teaching material and hosting
student forums, and a suite for generating multiple-choice mathematics quizzes
with automatic processing of the answer forms. The system has been tested at
scale on the entrance test to the Engineering Faculty of the University of
Florence, Italy.
|
cs/0310014
|
Effective XML Representation for Spoken Language in Organisations
|
cs.CL
|
Spoken language can provide insights into organisational processes;
unfortunately, the transcription and coding stages are very time-consuming and
expensive. The concept of partial transcription and coding is
proposed in which spoken language is indexed prior to any subsequent
processing. The functional linguistic theory of texture is used to describe the
effects of partial transcription on observational records. The standard used to
encode transcript context and metadata is called CHAT, but a previous XML
schema developed to implement it contains design assumptions that make it
difficult to support partial transcription, for example. This paper describes a
more effective XML schema that overcomes many of these problems and is intended
for use in applications that support the rapid development of spoken language
deliverables.
|
cs/0310018
|
The Study of the Application of a Keywords-based Chatbot System on the
Teaching of Foreign Languages
|
cs.CY cs.CL
|
This paper reports the findings of a study conducted on the application of an
on-line human-computer dialog system with natural language (chatbot) on the
teaching of foreign languages. A keywords-based human-computer dialog system
makes it possible for the user to chat with the computer in a natural language,
i.e., in English or, to some extent, in German. An experiment was therefore
conducted in which this online system served as a chat partner for users
learning the foreign languages, and the dialogs between the users and the
chatbot were collected. Findings indicate that the dialogs between human and
computer are mostly very short, because the users find the computer's responses
largely repetitive and irrelevant to the topic and context, and conclude that
the program does not understand the language at all. An analysis of the keyword
and pattern-matching mechanism used in this chatbot leads to the conclusion
that this kind of system cannot serve as a teaching-assistant program in
foreign language learning.
|
cs/0310021
|
Fuzzy Relational Modeling of Cost and Affordability for Advanced
Technology Manufacturing Environment
|
cs.CE cs.AI math.OC
|
Relational representation of knowledge makes it possible to perform all the
computations and decision making in a uniform relational way by means of
special relational compositions called triangle and square products. In this
paper some applications in manufacturing related to cost analysis are
described. Testing fuzzy relational structures for various relational
properties allows us to discover dependencies, hierarchies, similarities, and
equivalences of the attributes characterizing technological processes and
manufactured artifacts in their relationship to costs and performance.
A brief overview of mathematical aspects of BK-relational products is given
in Appendix 1 together with further references in the literature.
|
cs/0310023
|
Application of Kullback-Leibler Metric to Speech Recognition
|
cs.AI
|
This article discusses the application of the Kullback-Leibler divergence to the
recognition of speech signals and suggests three algorithms implementing this
divergence criterion: correlation algorithm, spectral algorithm and filter
algorithm. Discussion covers an approach to the problem of speech variability
and is illustrated with the results of experimental modeling of speech signals.
The article gives a number of recommendations on the choice of appropriate
model parameters and provides a comparison to some other methods of speech
recognition.
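The correlation, spectral and filter algorithms themselves are not given in the abstract, but the underlying divergence criterion is easy to sketch. A minimal illustration (the symmetrisation step and the toy templates below are our assumptions, not the article's):

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(p || q) between two normalised spectra."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def symmetric_kl(p, q):
    """Symmetrised divergence, usable as a distance-like recognition score."""
    return kl_divergence(p, q) + kl_divergence(q, p)

def normalise(spectrum):
    total = sum(spectrum)
    return [s / total for s in spectrum]

# Toy example: match an observed spectrum against two reference templates.
observed = normalise([4.0, 2.0, 1.0, 0.5])
template_a = normalise([4.1, 1.9, 1.1, 0.4])
template_b = normalise([0.5, 1.0, 2.0, 4.0])
best = min(("a", template_a), ("b", template_b),
           key=lambda t: symmetric_kl(observed, t[1]))[0]
```

The recognizer simply picks the template with the smallest divergence from the observed spectrum.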
|
cs/0310028
|
Providing Diversity in K-Nearest Neighbor Query Results
|
cs.DB
|
Given a point query Q in multi-dimensional space, K-Nearest Neighbor (KNN)
queries return the K closest answers in the database with respect to Q,
according to a given distance metric. In this scenario, a majority of the
answers may be very similar to one another, especially when the data has
clusters. For a variety of applications, such homogeneous result sets may not
add value to the user. In this paper, we consider the problem of providing
diversity in the results of KNN queries, that is, to produce the closest result
set such that each answer is sufficiently different from the rest. We first
propose a user-tunable definition of diversity, and then present an algorithm,
called MOTLEY, for producing a diverse result set as per this definition.
Through a detailed experimental evaluation on real and synthetic data, we show
that MOTLEY can produce diverse result sets by reading only a small fraction of
the tuples in the database. Further, it imposes no additional overhead on the
evaluation of traditional KNN queries, thereby providing a seamless interface
between diversity and distance.
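The "closest yet mutually different" criterion can be sketched with a greedy in-memory variant (an assumption on our part: the actual MOTLEY algorithm works over the stored database, reading only a fraction of tuples; the function below only illustrates the diversity criterion, with a user-tunable minimum distance between answers):

```python
import math

def dist(a, b):
    return math.dist(a, b)

def diverse_knn(query, points, k, min_div):
    """Greedy sketch: scan points in order of distance to the query and keep
    an answer only if it is at least min_div away from every answer kept so
    far. An in-memory stand-in, not the paper's database algorithm."""
    result = []
    for p in sorted(points, key=lambda p: dist(query, p)):
        if all(dist(p, r) >= min_div for r in result):
            result.append(p)
            if len(result) == k:
                break
    return result

query = (0.0, 0.0)
points = [(0.1, 0.1), (0.12, 0.1), (0.11, 0.09),   # a tight cluster
          (1.0, 0.0), (0.0, 1.2)]
answers = diverse_knn(query, points, k=3, min_div=0.5)
```

With `min_div = 0`, the function degenerates to an ordinary KNN query, matching the abstract's claim of a seamless interface between diversity and distance.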
|
cs/0310035
|
Supporting Exploratory Queries in Database Centric Web Applications
|
cs.DB
|
Users of database-centric Web applications, especially in the e-commerce
domain, often resort to exploratory ``trial-and-error'' queries since the
underlying data space is huge and unfamiliar, and there are several
alternatives for search attributes in this space. For example, scouting for
cheap airfares typically involves posing multiple queries, varying flight
times, dates, and airport locations. Exploratory queries are problematic from
the perspective of both the user and the server. For the database server,
exploratory queries cause a drastic reduction in effective throughput, since
much of the processing is duplicated in each successive query. For the client,
they result in a marked increase in response times, especially when accessing the service
through wireless channels.
In this paper, we investigate the design of automated techniques to minimize
the need for repetitive exploratory queries. Specifically, we present SAUNA, a
server-side query relaxation algorithm that, given the user's initial range
query and a desired cardinality for the answer set, produces a relaxed query
that is expected to contain the required number of answers. The algorithm
incorporates a range-query-specific distance metric that is weighted to produce
relaxed queries of a desired shape (e.g. aspect ratio preserving), and utilizes
multi-dimensional histograms for query size estimation. A detailed performance
evaluation of SAUNA over a variety of multi-dimensional data sets indicates
that its relaxed queries can significantly reduce the costs associated with
exploratory query processing.
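A minimal sketch of the relaxation loop, under the assumption that a histogram-based size estimator is available (here replaced by a direct count over sample tuples); the aspect-ratio-preserving expansion follows the abstract's description, but the step size and stopping rule are our assumptions:

```python
def estimate_cardinality(box, data):
    """Stand-in for SAUNA's multi-dimensional histogram estimator: here we
    simply count sample tuples falling inside the box."""
    return sum(all(lo <= x <= hi for x, (lo, hi) in zip(t, box)) for t in data)

def relax_query(box, target, data, step=0.05, max_rounds=1000):
    """Expand each range by the same relative amount (aspect-ratio preserving)
    until the estimated answer count reaches the desired cardinality."""
    box = [list(r) for r in box]
    for _ in range(max_rounds):
        if estimate_cardinality(box, data) >= target:
            return [tuple(r) for r in box]
        for r in box:
            width = r[1] - r[0]
            r[0] -= step * width / 2
            r[1] += step * width / 2
    return [tuple(r) for r in box]

data = [(i / 10, i / 10) for i in range(100)]   # sample tuples on a diagonal
initial = [(4.0, 4.2), (4.0, 4.2)]              # too narrow: very few answers
relaxed = relax_query(initial, target=10, data=data)
```

The relaxed query is issued once in place of the user's repeated trial-and-error queries.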
|
cs/0310038
|
On Addressing Efficiency Concerns in Privacy Preserving Data Mining
|
cs.DB
|
Data mining services require accurate input data for their results to be
meaningful, but privacy concerns may influence users to provide spurious
information. To encourage users to provide correct inputs, we recently proposed
a data distortion scheme for association rule mining that simultaneously
provides both privacy to the user and accuracy in the mining results. However,
mining the distorted database can be orders of magnitude more time-consuming
than mining the original database. In this paper, we address this issue
and demonstrate that by (a) generalizing the distortion process to perform
symbol-specific distortion, (b) appropriately choosing the distortion
parameters, and (c) applying a variety of optimizations in the reconstruction
process, runtime efficiencies that are well within an order of magnitude of
undistorted mining can be achieved.
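The distortion-plus-reconstruction idea can be sketched as plain randomised response with a single distortion parameter (an assumption for brevity: the paper's scheme is symbol-specific, with a different parameter per item):

```python
import random

def distort(rows, keep_prob):
    """Flip each boolean item with probability 1 - keep_prob (randomised
    response). Symbol-specific distortion would use a different keep_prob
    per item; one shared probability is used here for brevity."""
    return [[b if random.random() < keep_prob else 1 - b for b in row]
            for row in rows]

def reconstruct_support(distorted, item, keep_prob):
    """Invert the distortion in expectation:
    observed = true*p + (1 - true)*(1 - p)  =>  true = (observed + p - 1) / (2p - 1)."""
    observed = sum(row[item] for row in distorted) / len(distorted)
    return (observed + keep_prob - 1) / (2 * keep_prob - 1)

random.seed(0)
rows = [[1, 0] for _ in range(20000)] + [[0, 0] for _ in range(20000)]
noisy = distort(rows, keep_prob=0.9)
est = reconstruct_support(noisy, item=0, keep_prob=0.9)   # true support is 0.5
```

The reconstruction step is where the paper's optimizations apply: done naively over all itemsets, it dominates the mining time.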
|
cs/0310041
|
A Dynamic Programming Algorithm for the Segmentation of Greek Texts
|
cs.CL cs.DL
|
In this paper we introduce a dynamic programming algorithm to perform linear
text segmentation by global minimization of a segmentation cost function which
consists of: (a) within-segment word similarity and (b) prior information about
segment length. The evaluation of the segmentation accuracy of the algorithm on
a text collection consisting of Greek texts showed that the algorithm achieves
high segmentation accuracy and appears to be innovative and promising.
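The two-term cost function lends itself to a textbook dynamic programme. A sketch, with an assumed quadratic length penalty standing in for the paper's length prior and a pairwise similarity function standing in for within-segment word similarity:

```python
def segment(sim, n, mean_len, lam=1.0):
    """Dynamic programme: best[j] = min over i < j of best[i] + cost(i, j),
    where cost(i, j) combines within-segment dissimilarity with a quadratic
    penalty on deviation from the preferred segment length."""
    def cost(i, j):
        dissim = sum(1.0 - sim(a, b) for a in range(i, j) for b in range(a + 1, j))
        return dissim + lam * (j - i - mean_len) ** 2
    best = [0.0] + [float("inf")] * n
    back = [0] * (n + 1)
    for j in range(1, n + 1):
        for i in range(j):
            c = best[i] + cost(i, j)
            if c < best[j]:
                best[j], back[j] = c, i
    bounds, j = [], n
    while j > 0:               # recover segment boundaries by backtracking
        bounds.append(j)
        j = back[j]
    return sorted(bounds)

# Toy "text": sentences 0-3 share topic A, sentences 4-7 share topic B.
topic = [0, 0, 0, 0, 1, 1, 1, 1]
sim = lambda a, b: 1.0 if topic[a] == topic[b] else 0.0
boundaries = segment(sim, n=8, mean_len=4, lam=0.1)
```

Global minimization over all segmentations distinguishes this approach from greedy boundary-by-boundary methods.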
|
cs/0310043
|
Value-at-Risk and Expected Shortfall for Quadratic portfolio of
securities with mixture of elliptic Distributed Risk Factors
|
cs.CE math.CA
|
Generally, in the financial literature, the notion of quadratic VaR is
implicitly confused with the Delta-Gamma VaR, because most authors have dealt
with portfolios that contain derivative instruments.
In this paper, we propose to estimate the Value-at-Risk of a quadratic
portfolio of securities (i.e., equities) without the Delta and Gamma Greeks,
when the joint log-returns follow a multivariate elliptic distribution. We
reduce the estimation of the quadratic VaR of such a portfolio to the
resolution of a one-dimensional integral equation. To illustrate our method, we
give special attention to the mixture-of-normals and mixture-of-Student-t
distributions. For a given VaR, when the joint risk factors follow an elliptic
distribution, we show how to estimate the Expected Shortfall.
|
cs/0310044
|
The Algebra of Utility Inference
|
cs.AI
|
Richard Cox [1] set the axiomatic foundations of probable inference and the
algebra of propositions. He showed that consistency within these axioms
requires certain rules for updating belief. In this paper we use the analogy
between probability and utility introduced in [2] to propose an axiomatic
foundation for utility inference and the algebra of preferences. We show that
consistency within these axioms requires certain rules for updating preference.
We discuss a class of utility functions that stems from the axioms of utility
inference and show that this class is the basic building block for any general
multiattribute utility function. We use this class of utility functions
together with the algebra of preferences to construct utility functions
represented by logical operations on the attributes.
|
cs/0310045
|
An information theory for preferences
|
cs.AI
|
Recent literature in the last Maximum Entropy workshop introduced an analogy
between cumulative probability distributions and normalized utility functions.
Based on this analogy, a utility density function can be defined as the
derivative of a normalized utility function. A utility density function is
non-negative and integrates to unity. These two properties form the basis of a
correspondence between utility and probability. A natural application of this
analogy is a maximum entropy principle to assign maximum entropy utility
values. Maximum entropy utility interprets many of the common utility functions
based on the preference information needed for their assignment, and helps
assign utility values based on partial preference information. This paper
reviews maximum entropy utility and introduces further results that stem from
the duality between probability and utility.
|
cs/0310047
|
Abductive Logic Programs with Penalization: Semantics, Complexity and
Implementation
|
cs.AI
|
Abduction, first proposed in the setting of classical logics, has been
studied with growing interest in the logic programming area during the last
years.
In this paper we study abduction with penalization in the logic programming
framework. This form of abductive reasoning, which has not been previously
analyzed in logic programming, turns out to represent several relevant
problems, including optimization problems, very naturally. We define a formal
model for abduction with penalization over logic programs, which extends the
abductive framework proposed by Kakas and Mancarella. We address knowledge
representation issues, encoding a number of problems in our abductive
framework. In particular, we consider some relevant problems, taken from
different domains, ranging from optimization theory to diagnosis and planning;
their encodings turn out to be simple and elegant in our formalism. We
thoroughly analyze the computational complexity of the main problems arising in
the context of abduction with penalization from logic programs. Finally, we
implement a system supporting the proposed abductive framework on top of the
DLV engine. To this end, we design a translation from abduction problems with
penalties into logic programs with weak constraints. We prove that this
approach is sound and complete.
|
cs/0310048
|
Managing Evolving Business Workflows through the Capture of Descriptive
Information
|
cs.SE cs.DB
|
Business systems these days need to be agile to address the needs of a
changing world. In particular the discipline of Enterprise Application
Integration requires business process management to be highly reconfigurable
with the ability to support dynamic workflows, inter-application integration
and process reconfiguration. Basing EAI systems on a model-resident or
so-called description-driven approach enables aspects of flexibility,
distribution, system evolution and integration to be addressed in a
domain-independent manner. Such a system called CRISTAL is described in this
paper with particular emphasis on its application to EAI problem domains. A
practical example of the CRISTAL technology in the domain of manufacturing
systems, called Agilium, is described to demonstrate the principles of
model-driven system evolution and integration. The approach is compared to
other model-driven development approaches such as the Model-Driven Architecture
of the OMG and so-called Adaptive Object Models.
|
cs/0310050
|
Feedforward Neural Networks with Diffused Nonlinear Weight Functions
|
cs.NE
|
In this paper, feedforward neural networks are presented that have nonlinear
weight functions based on look-up tables, specially smoothed by a
regularization called diffusion. The idea behind this type of network is the
hypothesis that a greater number of adaptive parameters per weight function
might reduce the total number of weight functions needed to solve a given
problem. If the computational complexity of propagation through a single such
weight function is kept low, the introduced neural networks might then be
relatively fast.
A number of tests are performed, showing that the presented neural networks
may indeed perform better in some cases than classic neural networks and a
number of other learning machines.
|
cs/0310058
|
Application Architecture for Spoken Language Resources in Organisational
Settings
|
cs.CL
|
Special technologies need to be used to take advantage of, and overcome, the
challenges associated with acquiring, transforming, storing, processing, and
distributing spoken language resources in organisations. This paper introduces
an application architecture consisting of tools and supporting utilities for
indexing and transcription, and describes how these tools, together with
downstream processing and distribution systems, can be integrated into a
workflow. Two sample applications of this architecture are outlined: the
analysis of decision-making processes in organisations and the deployment of
systems development methods by designers in the field.
|
cs/0310061
|
Local-search techniques for propositional logic extended with
cardinality constraints
|
cs.AI
|
We study local-search satisfiability solvers for propositional logic extended
with cardinality atoms, that is, expressions that provide explicit ways to
model constraints on cardinalities of sets. Adding cardinality atoms to the
language of propositional logic facilitates modeling search problems and often
results in concise encodings. We propose two ``native'' local-search solvers
for theories in the extended language. We also describe techniques that reduce
the problem to standard propositional satisfiability, allowing us to use
off-the-shelf SAT solvers. We study these methods experimentally. Our general
finding is that native solvers designed specifically for the extended language
perform better than indirect methods relying on SAT solvers.
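A "native" local-search step for a cardinality atom lower <= |{true atoms}| <= upper can be sketched WalkSAT-style (the flip heuristic and noise parameter below are our assumptions, not the solvers proposed in the paper):

```python
import random

def violated(constraint, assign):
    lo, hi, atoms = constraint
    true_count = sum(assign[a] for a in atoms)
    return not (lo <= true_count <= hi)

def local_search(constraints, variables, max_flips=10000, noise=0.3, seed=0):
    """WalkSAT-style sketch: pick a violated cardinality constraint, then flip
    either a random atom in it (noise step) or the atom whose flip leaves the
    fewest violated constraints (greedy step)."""
    rng = random.Random(seed)
    assign = {v: rng.random() < 0.5 for v in variables}
    for _ in range(max_flips):
        unsat = [c for c in constraints if violated(c, assign)]
        if not unsat:
            return assign
        _, _, atoms = rng.choice(unsat)
        if rng.random() < noise:
            flip = rng.choice(atoms)
        else:
            def broken_after(a):
                assign[a] = not assign[a]
                n = sum(violated(c, assign) for c in constraints)
                assign[a] = not assign[a]
                return n
            flip = min(atoms, key=broken_after)
        assign[flip] = not assign[flip]
    return None

# Exactly 2 of {a,b,c,d} true, and at least 1 of {a,b} true.
constraints = [(2, 2, ["a", "b", "c", "d"]), (1, 2, ["a", "b"])]
model = local_search(constraints, ["a", "b", "c", "d"])
```

Handling the cardinality atom directly, rather than compiling it to clauses, is what the abstract means by a "native" solver.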
|
cs/0310062
|
WSAT(cc) - a fast local-search ASP solver
|
cs.AI
|
We describe WSAT(cc), a local-search solver for computing models of theories
in the language of propositional logic extended by cardinality atoms. WSAT(cc)
is a processing back-end for the logic PS+, a recently proposed formalism for
answer-set programming.
|
cs/0311001
|
Modeling State in Software Debugging of VHDL-RTL Designs -- A
Model-Based Diagnosis Approach
|
cs.AI cs.SE
|
In this paper we outline an approach of applying model-based diagnosis to the
field of automatic software debugging of hardware designs. We present our
value-level model for debugging VHDL-RTL designs and show how to localize the
erroneous component responsible for an observed misbehavior. Furthermore, we
discuss an extension of our model that supports the debugging of sequential
circuits, not only at a given point in time, but also allows for considering
the temporal behavior of VHDL-RTL designs. The introduced model is capable of
handling state inherently present in every sequential circuit. The principal
applicability of the new model is outlined briefly and we use industrial-sized
real world examples from the ISCAS'85 benchmark suite to discuss the
scalability of our approach.
|
cs/0311003
|
Enhancing a Search Algorithm to Perform Intelligent Backtracking
|
cs.AI cs.LO
|
This paper illustrates how a Prolog program, using chronological backtracking
to find a solution in some search space, can be enhanced to perform intelligent
backtracking. The enhancement crucially relies on the impurity of Prolog that
allows a program to store information when a dead end is reached. To illustrate
the technique, a simple search program is enhanced.
To appear in Theory and Practice of Logic Programming.
Keywords: intelligent backtracking, dependency-directed backtracking,
backjumping, conflict-directed backjumping, nogood sets, look-back.
|
cs/0311004
|
Utility-Probability Duality
|
cs.AI
|
This paper presents duality between probability distributions and utility
functions.
|
cs/0311007
|
Parametric Connectives in Disjunctive Logic Programming
|
cs.AI
|
Disjunctive Logic Programming (\DLP) is an advanced formalism for Knowledge
Representation and Reasoning (KRR). \DLP is very expressive in a precise
mathematical sense: it allows one to express every property of finite structures
that is decidable in the complexity class $\SigmaP{2}$ ($\NP^{\NP}$).
Importantly, the \DLP encodings are often simple and natural.
In this paper, we single out some limitations of \DLP for KRR: it cannot
naturally express problems where the size of the disjunction is not known ``a
priori'' (as in N-Coloring) but is instead part of the input. To overcome these
limitations, we further enhance the knowledge modelling abilities of \DLP, by
extending this language by {\em Parametric Connectives (OR and AND)}. These
connectives allow us to represent compactly the disjunction/conjunction of a
set of atoms having a given property. We formally define the semantics of the
new language, named $DLP^{\bigvee,\bigwedge}$ and we show the usefulness of the
new constructs on relevant knowledge-based problems. We address implementation
issues and discuss related works.
|
cs/0311008
|
A Parameterised Hierarchy of Argumentation Semantics for Extended Logic
Programming and its Application to the Well-founded Semantics
|
cs.LO cs.AI
|
Argumentation has proved a useful tool in defining formal semantics for
assumption-based reasoning by viewing a proof as a process in which proponents
and opponents attack each other's arguments by undercuts (attacks on an
argument's premise) and rebuts (attacks on an argument's conclusion). In this
paper, we formulate a variety of notions of attack for extended logic programs
from combinations of undercuts and rebuts and define a general hierarchy of
argumentation semantics parameterised by the notions of attack chosen by
proponent and opponent. We prove the equivalence and subset relationships
between the semantics and examine some essential properties concerning
consistency and the coherence principle, which relates default negation and
explicit negation. Most significantly, we place existing semantics put forward
in the literature in our hierarchy and identify a particular argumentation
semantics for which we prove equivalence to the paraconsistent well-founded
semantics with explicit negation, WFSX$_p$. Finally, we present a general proof
theory, based on dialogue trees, and show that it is sound and complete with
respect to the argumentation semantics.
|
cs/0311011
|
On an explicit finite difference method for fractional diffusion
equations
|
cs.NA cond-mat.stat-mech cs.CE physics.comp-ph
|
A numerical method to solve the fractional diffusion equation, which could
also be easily extended to many other fractional dynamics equations, is
considered. These fractional equations have been proposed in order to describe
anomalous transport characterized by non-Markovian kinetics and the breakdown
of Fick's law. In this paper we combine the forward time centered space (FTCS)
method, well known for the numerical integration of ordinary diffusion
equations, with the Grunwald-Letnikov definition of the fractional derivative
operator to obtain an explicit fractional FTCS scheme for solving the
fractional diffusion equation. The resulting method is amenable to a stability
analysis a la von Neumann. We show that the analytical stability bounds are in
excellent agreement with numerical tests. Comparisons between exact analytical
solutions and numerical predictions are made.
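A sketch of a Grunwald-Letnikov-based explicit scheme, under the assumption that the equation is taken in the Riemann-Liouville time-fractional form du/dt = K d^(1-gamma)/dt^(1-gamma) d2u/dx2; the discretisation shown reduces to classical FTCS for gamma = 1 but is not necessarily the paper's exact scheme:

```python
def gl_weights(alpha, n):
    """Grunwald-Letnikov coefficients w_k = (-1)^k C(alpha, k), via the
    recursion w_0 = 1, w_k = (1 - (alpha + 1)/k) * w_{k-1}."""
    w = [1.0]
    for k in range(1, n):
        w.append((1.0 - (alpha + 1.0) / k) * w[-1])
    return w

def fractional_ftcs(u0, gamma, K, dt, dx, steps):
    """Explicit FTCS-type update with a memory term over the full history,
    using GL weights of order 1 - gamma; boundary values are held at zero."""
    S = K * dt**gamma / dx**2
    w = gl_weights(1.0 - gamma, steps + 1)
    history = [list(u0)]
    for m in range(steps):
        u = history[-1]
        new = list(u)
        for j in range(1, len(u) - 1):
            frac = sum(w[k] * (history[m - k][j + 1] - 2 * history[m - k][j]
                               + history[m - k][j - 1])
                       for k in range(m + 1))
            new[j] = u[j] + S * frac
        history.append(new)
    return history[-1]

u0 = [0.0, 0.0, 1.0, 0.0, 0.0]          # initial pulse, absorbing ends
u_classical = fractional_ftcs(u0, gamma=1.0, K=1.0, dt=0.1, dx=1.0, steps=5)
```

The non-Markovian kinetics show up directly in the code: each time step sums over the entire history, weighted by the GL coefficients.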
|
cs/0311012
|
A rigorous definition of axial lines: ridges on isovist fields
|
cs.CV cs.CG
|
We suggest that 'axial lines', defined by Hillier and Hanson (1984) as lines
of uninterrupted movement within urban streetscapes or buildings, appear as
ridges in isovist fields (Benedikt, 1979). These are formed from the maximum
diametric lengths of the individual isovists, sometimes called viewsheds, that
make up these fields (Batty and Rana, 2004). We present an image processing
technique for the identification of lines from ridges, discuss current
strengths and weaknesses of the method, and show how it can be implemented
easily and effectively.
|
cs/0311014
|
Optimality of Universal Bayesian Sequence Prediction for General Loss
and Alphabet
|
cs.LG cs.AI math.PR
|
Various optimality properties of universal sequence predictors based on
Bayes-mixtures in general, and Solomonoff's prediction scheme in particular,
will be studied. The probability of observing $x_t$ at time $t$, given past
observations $x_1...x_{t-1}$ can be computed with the chain rule if the true
generating distribution $\mu$ of the sequences $x_1x_2x_3...$ is known. If
$\mu$ is unknown, but known to belong to a countable or continuous class $\M$,
one can base one's prediction on the Bayes-mixture $\xi$ defined as a
$w_\nu$-weighted sum or integral of distributions $\nu\in\M$. The cumulative
expected loss of the Bayes-optimal universal prediction scheme based on $\xi$
is shown to be close to the loss of the Bayes-optimal, but infeasible
prediction scheme based on $\mu$. We show that the bounds are tight and that no
other predictor can lead to significantly smaller bounds. Furthermore, for
various performance measures, we show Pareto-optimality of $\xi$ and give an
Occam's razor argument that the choice $w_\nu\sim 2^{-K(\nu)}$ for the weights
is optimal, where $K(\nu)$ is the length of the shortest program describing
$\nu$. The results are applied to games of chance, defined as a sequence of
bets, observations, and rewards. The prediction schemes (and bounds) are
compared to the popular predictors based on expert advice. Extensions to
infinite alphabets, partial, delayed and probabilistic prediction,
classification, and more active systems are briefly discussed.
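The Bayes-mixture xi itself is easy to illustrate for a small countable class (here Bernoulli models with uniform prior weights, an assumption made purely for the example):

```python
def bayes_mixture_predictor(thetas, weights):
    """Sketch of a Bayes-mixture predictor over a countable class of Bernoulli
    models: xi(x_t = 1 | x_<t) is the posterior-weighted average of the model
    predictions; thetas/weights play the roles of nu in M and w_nu."""
    weights = list(weights)

    def predict_and_update(bit=None):
        total = sum(weights)
        prob_one = sum(w * th for w, th in zip(weights, thetas)) / total
        if bit is not None:          # posterior update: w_nu <- w_nu * nu(bit)
            for i, th in enumerate(thetas):
                weights[i] *= th if bit == 1 else (1.0 - th)
        return prob_one
    return predict_and_update

thetas = [0.1, 0.5, 0.9]
predict = bayes_mixture_predictor(thetas, [1 / 3] * 3)
p_start = predict()                  # prior predictive probability of a 1
for _ in range(50):                  # observe a long run of ones
    predict(1)
p_after = predict()
```

After enough observations the mixture concentrates on the model closest to the true generating distribution, which is the mechanism behind the loss bounds in the abstract.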
|
cs/0311015
|
Make search become the internal function of Internet
|
cs.IR cs.DL cs.NI
|
The Domain Resource Integrated System (DRIS) is introduced in this paper. DRIS
is a distributed information retrieval system that aims to solve problems such
as poor coverage and long update intervals in current web search systems. The
most distinctive characteristic of DRIS is that it is a public, open system
that acts as an internal component of the Internet rather than the product of
a single company. An implementation of DRIS is also presented.
|
cs/0311019
|
Replay Debugging of Complex Real-Time Systems: Experiences from Two
Industrial Case Studies
|
cs.RO
|
Deterministic replay is a method for allowing complex multitasking real-time
systems to be debugged using standard interactive debuggers. Even though
several replay techniques have been proposed for parallel, multi-tasking and
real-time systems, the solutions have so far lingered on a prototype academic
level, with very little results to show from actual state-of-the-practice
commercial applications. This paper describes a major deterministic replay
debugging case study performed on a full-scale industrial robot control system,
as well as a minor replay instrumentation case study performed on a military
aircraft radar system. In this article, we will show that replay debugging is
feasible in complex multi-million lines of code software projects running on
top of off-the-shelf real-time operating systems. Furthermore, we will discuss
how replay debugging can be introduced in existing systems without
impracticable analysis efforts. In addition, we will present benchmarking
results from both studies, indicating that the instrumentation overhead is
acceptable and affordable.
|
cs/0311024
|
Logic-Based Specification Languages for Intelligent Software Agents
|
cs.AI
|
The research field of Agent-Oriented Software Engineering (AOSE) aims to find
abstractions, languages, methodologies and toolkits for modeling, verifying,
validating and prototyping complex applications conceptualized as Multiagent
Systems (MASs). A very lively research sub-field studies how formal methods can
be used for AOSE. This paper presents a detailed survey of six logic-based
executable agent specification languages that have been chosen for their
potential to be integrated in our ARPEGGIO project, an open framework for
specifying and prototyping a MAS. The six languages are ConGoLog, Agent-0, the
IMPACT agent programming language, DyLog, Concurrent METATEM and Ehhf. For each
executable language, the logic foundations are described and an example of use
is shown. A comparison of the six languages and a survey of similar approaches
complete the paper, together with considerations of the advantages of using
logic-based languages in MAS modeling and prototyping.
|
cs/0311026
|
Great Expectations. Part I: On the Customizability of Generalized
Expected Utility
|
cs.AI
|
We propose a generalization of expected utility that we call generalized EU
(GEU), where a decision maker's beliefs are represented by plausibility
measures, and the decision maker's tastes are represented by general (i.e., not
necessarily real-valued) utility functions. We show that every agent,
``rational'' or not, can be modeled as a GEU maximizer. We then show that we
can customize GEU by selectively imposing just the constraints we want. In
particular, we show how each of Savage's postulates corresponds to constraints
on GEU.
|
cs/0311027
|
Great Expectations. Part II: Generalized Expected Utility as a Universal
Decision Rule
|
cs.AI
|
Many different rules for decision making have been introduced in the
literature. We show that a notion of generalized expected utility proposed in
Part I of this paper is a universal decision rule, in the sense that it can
represent essentially all other decision rules.
|
cs/0311028
|
Using Counterfactuals in Knowledge-Based Programming
|
cs.DC cs.AI
|
This paper adds counterfactuals to the framework of knowledge-based programs
of Fagin, Halpern, Moses, and Vardi. The use of counterfactuals is illustrated
by designing a protocol in which an agent stops sending messages once it knows
that it is safe to do so. Such behavior is difficult to capture in the original
framework because it involves reasoning about counterfactual executions,
including ones that are not consistent with the protocol. Attempts to formalize
these notions without counterfactuals are shown to lead to rather
counterintuitive behavior.
|
cs/0311029
|
Staging Transformations for Multimodal Web Interaction Management
|
cs.IR cs.PL
|
Multimodal interfaces are becoming increasingly ubiquitous with the advent of
mobile devices, accessibility considerations, and novel software technologies
that combine diverse interaction media. In addition to improving access and
delivery capabilities, such interfaces enable flexible and personalized dialogs
with websites, much like a conversation between humans. In this paper, we
present a software framework for multimodal web interaction management that
supports mixed-initiative dialogs between users and websites. A
mixed-initiative dialog is one where the user and the website take turns
changing the flow of interaction. The framework supports the functional
specification and realization of such dialogs using staging transformations --
a theory for representing and reasoning about dialogs based on partial input.
It supports multiple interaction interfaces, and offers sessioning, caching,
and co-ordination functions through the use of an interaction manager. Two case
studies are presented to illustrate the promise of this approach.
|
cs/0311031
|
Towards an Intelligent Database System Founded on the SP Theory of
Computing and Cognition
|
cs.DB cs.AI
|
The SP theory of computing and cognition, described in previous publications,
is an attractive model for intelligent databases because it provides a simple
but versatile format for different kinds of knowledge, it has capabilities in
artificial intelligence, and it can also function like established database
models when that is required.
This paper describes how the SP model can emulate other models used in
database applications and compares the SP model with those other models. The
artificial intelligence capabilities of the SP model are reviewed and its
relationship with other artificial intelligence systems is described. Also
considered are ways in which current prototypes may be translated into an
'industrial strength' working system.
|
cs/0311033
|
The Rank-Frequency Analysis for the Functional Style Corpora in the
Ukrainian Language
|
cs.CL
|
We use the rank-frequency analysis for the estimation of Kernel Vocabulary
size within specific corpora of Ukrainian. The extrapolation of high-rank
behaviour is utilized for estimation of the total vocabulary size.
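The extrapolation step can be illustrated with a minimal sketch (an illustrative reconstruction, not the paper's exact procedure): fit Zipf's law f(r) ≈ C r^(-s) by least squares in log-log space, then take the total vocabulary size to be the rank at which the predicted frequency falls to one, i.e. r = C^(1/s).

```python
from math import log, exp

def zipf_fit(freqs):
    """Least-squares fit of log f = log C - s log r to observed
    rank-frequency data (freqs sorted in descending order)."""
    xs = [log(r) for r in range(1, len(freqs) + 1)]
    ys = [log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    s = -sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    logC = my + s * mx
    return exp(logC), s

def vocabulary_size(freqs):
    """Extrapolate to the largest rank r with predicted frequency >= 1,
    i.e. r = C ** (1/s)."""
    C, s = zipf_fit(freqs)
    return int(round(C ** (1.0 / s)))
```

On exactly Zipfian data with C = 1000 and s = 1, the fit recovers both parameters and the extrapolated vocabulary size is 1000.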
|
cs/0311036
|
Measuring the Functional Load of Phonological Contrasts
|
cs.CL
|
Frequency counts are a measure of how much use a language makes of a
linguistic unit, such as a phoneme or word. However, what is often important is
not the units themselves, but the contrasts between them. A measure is
therefore needed for how much use a language makes of a contrast, i.e. the
functional load (FL) of the contrast. We generalize previous work in
linguistics and speech recognition and propose a family of measures for the FL
of several phonological contrasts, including phonemic oppositions, distinctive
features, suprasegmentals, and phonological rules. We then test it for
robustness to changes of corpora. Finally, we provide examples in Cantonese,
Dutch, English, German and Mandarin, in the context of historical linguistics,
language acquisition and speech recognition. More information can be found at
http://dinoj.info/research/fload
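One concrete instance of such a measure (a sketch under the assumption of a unigram approximation over phoneme tokens; the paper's family of measures is more general) takes the FL of a phonemic opposition to be the relative loss of corpus entropy when the contrast is neutralized:

```python
from collections import Counter
from math import log2

def entropy(counts):
    """Shannon entropy (bits) of a frequency distribution."""
    total = sum(counts.values())
    return -sum(c / total * log2(c / total) for c in counts.values())

def functional_load(corpus, a, b):
    """Relative entropy loss when the contrast between units a and b
    is neutralized (every b relabelled as a)."""
    counts = Counter(corpus)
    merged = Counter()
    for unit, c in counts.items():
        merged[a if unit == b else unit] += c
    h = entropy(counts)
    return (h - entropy(merged)) / h

# toy corpus of phoneme tokens
corpus = list("pppbbttd")
```

In the toy corpus the p/b opposition distinguishes more tokens than t/d, so it carries a higher functional load.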
|
cs/0311038
|
XPath-Logic and XPathLog: A Logic-Programming Style XML Data
Manipulation Language
|
cs.DB
|
We define XPathLog as a Datalog-style extension of XPath. XPathLog provides a
clear, declarative language for querying and manipulating XML, whose main
application perspective is XML data integration. In our characterization, the
formal semantics is defined with respect to an edge-labeled, graph-based model
which covers the XML data model. We give a complete, logic-based characterization of
XML data and the main language concept for XML, XPath. XPath-Logic extends the
XPath language with variable bindings and embeds it into first-order logic.
XPathLog is then the Horn fragment of XPath-Logic, providing a Datalog-style,
rule-based language for querying and manipulating XML data. The model-theoretic
semantics of XPath-Logic serves as the basis of XPathLog as a logic-programming
language, and an equivalent answer-set semantics for evaluating XPathLog
queries is also given. In contrast to other approaches, the XPath syntax and
semantics is also used as a declarative specification of how the database
should be updated: when used in rule heads, XPath filters are interpreted as
specifications of elements and properties which should be added to the
database.
|
cs/0311041
|
S-ToPSS: Semantic Toronto Publish/Subscribe System
|
cs.DC cs.DB
|
The increase in the amount of data on the Internet has led to the development
of a new generation of applications based on selective information
dissemination, where data is distributed only to interested clients. Such
applications require a new middleware architecture that can efficiently match
user interests with available information. Middleware that can satisfy this
requirement include event-based architectures such as publish-subscribe
systems. In this demonstration paper we address the problem of semantic
matching. We investigate how current publish/subscribe systems can be extended
with semantic capabilities. Our main contribution is the development and
validation (through demonstration) of a semantic pub/sub system prototype
S-ToPSS (Semantic Toronto Publish/Subscribe System).
|
cs/0311042
|
Toward Attribute Efficient Learning Algorithms
|
cs.LG
|
We make progress on two important problems regarding attribute efficient
learnability.
First, we give an algorithm for learning decision lists of length $k$ over
$n$ variables using $2^{\tilde{O}(k^{1/3})} \log n$ examples and time
$n^{\tilde{O}(k^{1/3})}$. This is the first algorithm for learning decision
lists that has both subexponential sample complexity and subexponential running
time in the relevant parameters. Our approach establishes a relationship
between attribute efficient learning and polynomial threshold functions and is
based on a new construction of low degree, low weight polynomial threshold
functions for decision lists. For a wide range of parameters our construction
matches a 1994 lower bound due to Beigel for the ODDMAXBIT predicate and gives
an essentially optimal tradeoff between polynomial threshold function degree
and weight.
Second, we give an algorithm for learning an unknown parity function on $k$
out of $n$ variables using $O(n^{1-1/k})$ examples in time polynomial in $n$.
For $k=o(\log n)$ this yields a polynomial time algorithm with sample
complexity $o(n)$. This is the first polynomial time algorithm for learning
parity on a superconstant number of variables with sublinear sample complexity.
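For reference, the classical noiseless parity learner that this result improves on, Gaussian elimination over GF(2), can be sketched as follows (this baseline needs on the order of n examples; the paper's contribution is reducing the sample complexity to O(n^{1-1/k})):

```python
def learn_parity(examples):
    """Recover a parity function consistent with labelled examples via
    Gaussian elimination over GF(2). Each example is (x, y), with x a
    0/1 list and y the parity of the hidden subset of bits of x."""
    rows = [list(x) + [y] for x, y in examples]
    n = len(rows[0]) - 1
    pivot_row = 0
    pivots = []
    for col in range(n):
        # find a row with a 1 in this column to use as the pivot
        r = next((i for i in range(pivot_row, len(rows)) if rows[i][col]), None)
        if r is None:
            continue
        rows[pivot_row], rows[r] = rows[r], rows[pivot_row]
        # eliminate this column from every other row (reduced echelon form)
        for i in range(len(rows)):
            if i != pivot_row and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[pivot_row])]
        pivots.append(col)
        pivot_row += 1
    # read off one consistent solution, free variables set to 0
    w = [0] * n
    for row, col in zip(rows, pivots):
        w[col] = row[-1]
    return w
```

With all 16 examples over 4 bits labelled by the parity of bits 0 and 2, the system has full rank and the hidden parity is recovered exactly.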
|
cs/0311045
|
Unsupervised Grammar Induction in a Framework of Information Compression
by Multiple Alignment, Unification and Search
|
cs.AI
|
This paper describes a novel approach to grammar induction that has been
developed within a framework designed to integrate learning with other aspects
of computing, AI, mathematics and logic. This framework, called "information
compression by multiple alignment, unification and search" (ICMAUS), is founded
on principles of Minimum Length Encoding pioneered by Solomonoff and others.
Most of the paper describes SP70, a computer model of the ICMAUS framework that
incorporates processes for unsupervised learning of grammars. An example is
presented to show how the model can infer a plausible grammar from appropriate
input. Limitations of the current model and how they may be overcome are
briefly discussed.
|
cs/0311047
|
I know what you mean: semantic issues in Internet-scale
publish/subscribe systems
|
cs.DC cs.DB
|
In recent years, the amount of information on the Internet has increased
exponentially, generating great interest in selective information dissemination
systems. The publish/subscribe paradigm is particularly suited for designing
systems that route information and requests according to their content
throughout a wide-area network of brokers. Current publish/subscribe systems
use limited syntax-based content routing, but since publishers and subscribers
are anonymous and decoupled in time, space and location, often across wide-area
network boundaries, they do not necessarily speak the same language.
Consequently, adding semantics to current publish/subscribe systems is
important. In this paper we identify and examine the issues in developing
semantic-based content routing for publish/subscribe broker networks.
|
cs/0311048
|
Turning CARTwheels: An Alternating Algorithm for Mining Redescriptions
|
cs.CE cs.AI
|
We present an unusual algorithm involving classification trees where two
trees are grown in opposite directions so that they are matched at their
leaves. This approach finds application in a new data mining task we formulate,
called "redescription mining". A redescription is a shift-of-vocabulary, or a
different way of communicating information about a given subset of data; the
goal of redescription mining is to find subsets of data that afford multiple
descriptions. We highlight the importance of this problem in domains such as
bioinformatics, which exhibit an underlying richness and diversity of data
descriptors (e.g., genes can be studied in a variety of ways). Our approach
helps integrate multiple forms of characterizing datasets, situates the
knowledge gained from one dataset in the context of others, and harnesses
high-level abstractions for uncovering cryptic and subtle features of data.
Algorithm design decisions, implementation details, and experimental results
are presented.
|
cs/0311050
|
Data mining and Privacy in Public Sector using Intelligent Agents
(discussion paper)
|
cs.CY cs.AI cs.IR cs.MA
|
The public sector comprises government agencies, ministries, education
institutions, health providers and other types of government, commercial and
not-for-profit organisations. Unlike commercial enterprises, this environment
is highly heterogeneous in all aspects. This forms a complex network which is
not always optimised. A lack of optimisation and communication hinders
information sharing between the network nodes, limiting the flow of information.
Another limiting aspect is privacy of personal information and security of
operations of some nodes or segments of the network. Attempts to reorganise the
network or improve communications to make more information available for
sharing and analysis may be hindered or completely halted by public concerns
over privacy, political agendas, social and technological barriers. This paper
discusses a technical solution for information sharing while addressing the
privacy concerns with no need for reorganisation of the existing public sector
infrastructure. The solution is based on imposing an additional layer of
Intelligent Software Agents and Knowledge Bases for data mining and analysis.
|
cs/0311051
|
Integrating existing cone-shaped and projection-based cardinal direction
relations and a TCSP-like decidable generalisation
|
cs.AI
|
We consider the integration of existing cone-shaped and projection-based
calculi of cardinal direction relations, well-known in QSR. The more general,
integrating language we consider is based on convex constraints of the
qualitative form $r(x,y)$, $r$ being a cone-shaped or projection-based cardinal
direction atomic relation, or of the quantitative form $(\alpha ,\beta)(x,y)$,
with $\alpha ,\beta\in [0,2\pi)$ and $(\beta -\alpha)\in [0,\pi ]$: the meaning
of the quantitative constraint, in particular, is that point $x$ belongs to the
(convex) cone-shaped area rooted at $y$, and bounded by angles $\alpha$ and
$\beta$. The general form of a constraint is a disjunction of the form
$[r_1\vee...\vee r_{n_1}\vee (\alpha_1,\beta_1)\vee...\vee (\alpha
_{n_2},\beta_{n_2})](x,y)$, with $r_i(x,y)$, $i=1... n_1$, and $(\alpha
_i,\beta_i)(x,y)$, $i=1... n_2$, being convex constraints as described above:
the meaning of such a general constraint is that, for some $i=1... n_1$,
$r_i(x,y)$ holds, or, for some $i=1... n_2$, $(\alpha_i,\beta_i)(x,y)$ holds. A
conjunction of such general constraints is a $\tcsp$-like CSP, which we will
refer to as an $\scsp$ (Spatial Constraint Satisfaction Problem). An effective
solution search algorithm for an $\scsp$ will be described, which uses (1)
constraint propagation, based on a composition operation to be defined, as the
filtering method during the search, and (2) the Simplex algorithm, guaranteeing
completeness, at the leaves of the search tree. The approach is particularly
suited for large-scale high-level vision, such as, e.g., satellite-like
surveillance of a geographic area.
|
cs/0311052
|
A Situation Calculus-based Approach To Model Ubiquitous Information
Services
|
cs.AI cs.HC
|
This paper presents an augmented situation calculus-based approach to modeling
the autonomous computing paradigm in ubiquitous information services. To make
the calculus practical for commercial development and better able to support
the autonomous paradigm imposed by ubiquitous information services, we make
several improvements to Reiter's standard situation calculus. First, we explore
the inherent relationship between fluents and evolution: since not all fluents
contribute to a system's evolution, and some fluents can be derived from
others, we define those fluents that are sufficient and necessary to determine
evolutionary potential as decisive fluents, and prove that their successor
states with respect to deterministic complex actions satisfy the Markov
property. Then, within the resulting calculus framework, we introduce a
validity theory to model autonomous services with application-specific validity
requirements, including validity fluents to axiomatize validity requirements;
heuristic alternative service choices ranging from complete acceptance through
partial acceptance to complete rejection; and a validity-ensured policy that
composes such alternative service choices into organic, autonomously
computable services. Our
approach is demonstrated by a ubiquitous calendaring service, ACS, throughout
the paper.
|
cs/0312003
|
Hybrid LQG-Neural Controller for Inverted Pendulum System
|
cs.NE cs.LG
|
The paper presents a hybrid system controller, incorporating a neural and an
LQG controller. The neural controller has been optimized by genetic algorithms
directly on the inverted pendulum system. The failure-free optimization
process yielded a relatively small region of asymptotic stability for the
neural controller, concentrated around the regulation point. The presented
hybrid controller combines benefits of a genetically optimized neural
controller and an LQG controller in a single system controller. High quality of
the regulation process is achieved through utilization of the neural
controller, while stability of the system during transient processes and a wide
range of operation are assured through application of the LQG controller. The
hybrid controller has been validated by applying it to a simulation model of an
inherently unstable system of inverted pendulum.
|
cs/0312004
|
Improving spam filtering by combining Naive Bayes with simple k-nearest
neighbor searches
|
cs.LG
|
Using naive Bayes for email classification has become very popular within the
last few months. Such filters are quite easy to implement and very efficient. In this
paper we want to present empirical results of email classification using a
combination of naive Bayes and k-nearest neighbor searches. Using this
technique we show that the accuracy of a Bayes filter can be improved slightly
for a high number of features and significantly for a small number of features.
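The abstract does not spell out the combination scheme; one plausible sketch (the names and the tie-breaking rule below are illustrative assumptions, not the paper's method) lets a k-nearest-neighbor vote take over from naive Bayes whenever the Bayes decision margin is small:

```python
from math import log

def train_nb(X, y):
    """Per-class log-priors and Laplace-smoothed per-feature Bernoulli
    probabilities for binary feature vectors."""
    model = {}
    for c in set(y):
        Xc = [x for x, yc in zip(X, y) if yc == c]
        prior = log(len(Xc) / len(X))
        probs = [(1 + sum(x[j] for x in Xc)) / (2 + len(Xc))
                 for j in range(len(X[0]))]
        model[c] = (prior, probs)
    return model

def nb_scores(model, x):
    """Log-posterior score of x under each class."""
    return {c: prior + sum(log(p) if xi else log(1 - p)
                           for xi, p in zip(x, probs))
            for c, (prior, probs) in model.items()}

def hybrid_predict(model, X, y, x, k=3, margin=1.0):
    """Naive Bayes decision, deferred to a k-NN majority vote (Hamming
    distance) when the log-posterior margin between the top two classes
    is small."""
    s = nb_scores(model, x)
    (c1, s1), (c2, s2) = sorted(s.items(), key=lambda t: -t[1])[:2]
    if s1 - s2 >= margin:
        return c1
    nearest = sorted(zip(X, y),
                     key=lambda t: sum(a != b for a, b in zip(t[0], x)))[:k]
    votes = [yc for _, yc in nearest]
    return max(set(votes), key=votes.count)
```

Raising `margin` shifts more of the decisions from the Bayes filter to the k-NN vote, which is one way to trade the two components off for small feature sets.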
|
cs/0312008
|
Embedding Web-based Statistical Translation Models in Cross-Language
Information Retrieval
|
cs.CL cs.IR
|
Although more and more language pairs are covered by machine translation
services, there are still many pairs that lack translation resources.
Cross-language information retrieval (CLIR) is an application which needs
translation functionality of a relatively low level of sophistication since
current models for information retrieval (IR) are still based on
bag-of-words representations. The Web provides a vast resource for the automatic construction
of parallel corpora which can be used to train statistical translation models
automatically. The resulting translation models can be embedded in several ways
in a retrieval model. In this paper, we will investigate the problem of
automatically mining parallel texts from the Web and different ways of
integrating the translation models within the retrieval process. Our
experiments on standard test collections for CLIR show that the Web-based
translation models can surpass commercial MT systems in CLIR tasks. These
results open the perspective of constructing a fully automatic query
translation device for CLIR at a very low cost.
|
cs/0312009
|
Failure-Free Genetic Algorithm Optimization of a System Controller Using
SAFE/LEARNING Controllers in Tandem
|
cs.NE cs.LG
|
The paper presents a method for failure-free genetic algorithm optimization
of a system controller. Genetic algorithms are a powerful tool that
facilitates producing near-optimal system controllers. Applied to such
computational intelligence methods as neural networks or fuzzy logic, genetic
algorithms can combine the non-linear mapping capabilities of the latter with
learning the system behavior directly, that is, without a prior model. At the
same time, genetic algorithms routinely produce solutions that lead to the
failure of the controlled system. Such solutions are generally unacceptable for
applications where safe operation must be guaranteed. We present here a method
of design, which allows failure-free application of genetic algorithms through
utilization of SAFE and LEARNING controllers in tandem, where the SAFE
controller recovers the system from dangerous states while the LEARNING
controller learns its behavior. The method has been validated by applying it to
an inherently unstable system of inverted pendulum.
|
cs/0312016
|
Taking the Initiative with Extempore: Exploring Out-of-Turn Interactions
with Websites
|
cs.HC cs.IR
|
We present the first study to explore the use of out-of-turn interaction in
websites. Out-of-turn interaction is a technique which empowers the user to
supply unsolicited information while browsing. This approach helps flexibly
bridge any mental mismatch between the user and the website, in a manner
fundamentally different from faceted browsing and site-specific search tools.
We built a user interface (Extempore) which accepts out-of-turn input via voice
or text; and employed it in a US congressional website, to determine if users
utilize out-of-turn interaction for information-finding tasks, and their
rationale for doing so. The results indicate that users are adept at discerning
when out-of-turn interaction is necessary in a particular task, and actively
interleaved it with browsing. However, users found cascading information across
information-finding subtasks challenging. Therefore, this work not only
improves our understanding of out-of-turn interaction, but also suggests
further opportunities to enrich browsing experiences for users.
|
cs/0312018
|
Mapping Subsets of Scholarly Information
|
cs.IR cs.LG
|
We illustrate the use of machine learning techniques to analyze, structure,
maintain, and evolve a large online corpus of academic literature. An emerging
field of research can be identified as part of an existing corpus, permitting
the implementation of a more coherent community structure for its
practitioners.
|
cs/0312020
|
Modeling Object Oriented Constraint Programs in Z
|
cs.AI
|
Object oriented constraint programs (OOCPs) emerge as a leading evolution of
constraint programming and artificial intelligence, first applied to a range of
industrial applications called configuration problems. The rich variety of
technical approaches to solving configuration problems (CLP(FD), CC(FD), DCSP,
Terminological systems, constraint programs with set variables ...) is a source
of difficulty. No universally accepted formal language exists for communicating
about OOCPs, which makes the comparison of systems difficult. We present here a
Z-based specification of OOCPs which avoids the pitfall of hidden object
semantics. The object system is part of the specification, and captures all of
the most advanced notions from the object oriented modeling standard UML. The
paper illustrates these issues and the conciseness and precision of Z by the
specification of a working OOCP that solves a historical AI problem: parsing
a context free grammar. Being written in Z, an OOCP specification also supports
formal proofs. The whole builds the foundation of an adaptive and evolving
framework for communicating about constrained object models and programs.
|
cs/0312024
|
Evolution: Google vs. DRIS
|
cs.DL cs.IR cs.NI
|
This paper presents a new search system that builds an information retrieval
infrastructure for the Internet. Most search engine companies today are mainly
concerned with how to profit from business users through advertisement and
ranking prominence, with little regard for what their real customers
experience. A few web search engines are worth billions of dollars at the cost
of inconvenience to most Internet users, rather than through the high quality
of their search service. When we must endure bothersome advertisements amid
poor results and have no alternative, the Internet as a public good will
surely be undermined. If the current Internet cannot fully ensure our right to
know, it may need sound improvements or a revolution.
|
cs/0312025
|
Soft Constraint Programming to Analysing Security Protocols
|
cs.CR cs.AI
|
Security protocols stipulate how the remote principals of a computer network
should interact in order to obtain specific security goals. The crucial goals
of confidentiality and authentication may be achieved in various forms, each of
different strength. Using soft (rather than crisp) constraints, we develop a
uniform formal notion for the two goals. They are no longer formalised as mere
yes/no properties as in the existing literature, but gain an extra parameter,
the security level. For example, different messages can enjoy different levels
of confidentiality, or a principal can achieve different levels of
authentication with different principals.
The goals are formalised within a general framework for protocol analysis
that is amenable to mechanisation by model checking. Following the application
of the framework to analysing the asymmetric Needham-Schroeder protocol, we
have recently discovered a new attack on that protocol as a form of retaliation
by principals who have been attacked previously. Having commented on that
attack, we then demonstrate the framework on a larger, widely deployed
protocol consisting of three phases: Kerberos.
|
cs/0312026
|
Speedup of Logic Programs by Binarization and Partial Deduction
|
cs.PL cs.AI
|
Binary logic programs can be obtained from ordinary logic programs by a
binarizing transformation. In most cases, binary programs obtained this way are
less efficient than the original programs. (Demoen, 1992) showed an interesting
example of a logic program whose computational behaviour was improved when it
was transformed to a binary program and then specialized by partial deduction.
The class of B-stratifiable logic programs is defined. It is shown that for
every B-stratifiable logic program, binarization and subsequent partial
deduction produce a binary program which does not contain variables for
continuations introduced by binarization. Such programs usually have a better
computational behaviour than the original ones. Both binarization and partial
deduction can be easily automated. A comparison with other related approaches
to program transformation is given.
|
cs/0312028
|
Minimal founded semantics for disjunctive logic programs and deductive
databases
|
cs.LO cs.AI
|
In this paper, we propose a variant of stable model semantics for disjunctive
logic programming and deductive databases. The semantics, called minimal
founded, generalizes stable model semantics for normal (i.e. non-disjunctive)
programs but differs from disjunctive stable model semantics (the extension of
stable model semantics for disjunctive programs). Compared with disjunctive
stable model semantics, minimal founded semantics seems to be more intuitive,
it gives meaning to programs which are meaningless under stable model semantics
and is no harder to compute. More specifically, minimal founded semantics
differs from stable model semantics only for disjunctive programs having
constraint rules or rules working as constraints. We study the expressive power
of the semantics and show that for general disjunctive datalog programs it has
the same power as disjunctive stable model semantics.
|
cs/0312029
|
Strong Equivalence Made Easy: Nested Expressions and Weight Constraints
|
cs.LO cs.AI
|
Logic programs P and Q are strongly equivalent if, given any program R,
programs P union R and Q union R are equivalent (that is, have the same answer
sets). Strong equivalence is convenient for the study of equivalent
transformations of logic programs: one can prove that a local change is correct
without considering the whole program. Lifschitz, Pearce and Valverde showed
that Heyting's logic of here-and-there can be used to characterize strong
equivalence for logic programs with nested expressions (which subsume the
better-known extended disjunctive programs). This note considers a simpler,
more direct characterization of strong equivalence for such programs, and shows
that it can also be applied without modification to the weight constraint
programs of Niemela and Simons. Thus, this characterization of strong
equivalence is convenient for the study of equivalent transformations of logic
programs written in the input languages of answer set programming systems dlv
and smodels. The note concludes with a brief discussion of results that can be
used to automate reasoning about strong equivalence, including a novel encoding
that reduces the problem of deciding the strong equivalence of a pair of weight
constraint programs to that of deciding the inconsistency of a weight
constraint program.
|
cs/0312033
|
Using sensors in the web crawling process
|
cs.IR cs.DL
|
This paper offers a short description of an Internet information field
monitoring system, which places a special module-sensor on the side of the
Web-server to detect changes in information resources and subsequently
reindexes only the resources signaled by the corresponding sensor. Concise
results of simulation studies and of an attempt to implement the proposed
"sensors" concept are provided.
|
cs/0312036
|
What Causes a System to Satisfy a Specification?
|
cs.LO cs.AI
|
Even when a system is proven to be correct with respect to a specification,
there is still a question of how complete the specification is, and whether it
really covers all the behaviors of the system. Coverage metrics attempt to
check which parts of a system are actually relevant for the verification
process to succeed. Recent work on coverage in model checking suggests several
coverage metrics and algorithms for finding parts of the system that are not
covered by the specification. The work has already proven to be effective in
practice, detecting design errors that escape early verification efforts in
industrial settings. In this paper, we relate a formal definition of causality
given by Halpern and Pearl [2001] to coverage. We show that it gives
significant insight into unresolved issues regarding the definition of coverage
and leads to potentially useful extensions of coverage. In particular, we
introduce the notion of responsibility, which assigns to components of a system
a quantitative measure of their relevance to the satisfaction of the
specification.
|
cs/0312037
|
Characterizing and Reasoning about Probabilistic and Non-Probabilistic
Expectation
|
cs.AI cs.LO
|
Expectation is a central notion in probability theory. The notion of
expectation also makes sense for other notions of uncertainty. We introduce a
propositional logic for reasoning about expectation, where the semantics
depends on the underlying representation of uncertainty. We give sound and
complete axiomatizations for the logic in the case that the underlying
representation is (a) probability, (b) sets of probability measures, (c) belief
functions, and (d) possibility measures. We show that this logic is more
expressive than the corresponding logic for reasoning about likelihood in the
case of sets of probability measures, but equi-expressive in the case of
probability, belief, and possibility. Finally, we show that satisfiability for
these logics is NP-complete, no harder than satisfiability for propositional
logic.
|
cs/0312038
|
Responsibility and blame: a structural-model approach
|
cs.AI cs.LO
|
Causality is typically treated as an all-or-nothing concept; either A is a cause
of B or it is not. We extend the definition of causality introduced by Halpern
and Pearl [2001] to take into account the degree of responsibility of A for B.
For example, if someone wins an election 11--0, then each person who votes for
him is less responsible for the victory than if he had won 6--5. We then define
a notion of degree of blame, which takes into account an agent's epistemic
state. Roughly speaking, the degree of blame of A for B is the expected degree
of responsibility of A for B, taken over the epistemic state of an agent.
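The voting example can be made concrete under one reading of the definition (an assumption on our part: degree of responsibility 1/(k+1), where k is the minimal number of other votes that must switch sides before the vote in question becomes critical), for elections with an odd total number of votes:

```python
from fractions import Fraction

def voter_responsibility(m, n):
    """Degree of responsibility of one winning-side voter for an m-n win:
    1/(k+1), where k = (m-n-1)//2 votes must switch from winner to loser
    before this voter's single vote becomes critical (margin of 1).
    Assumes m > n and an odd total, so no ties arise."""
    assert m > n and (m + n) % 2 == 1
    k = (m - n - 1) // 2
    return Fraction(1, k + 1)

print(voter_responsibility(11, 0))  # 1/6: five other votes must switch first
print(voter_responsibility(6, 5))   # 1: the vote is already critical
```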
|
cs/0312040
|
Diagnostic reasoning with A-Prolog
|
cs.AI
|
In this paper we suggest an architecture for a software agent which operates
a physical device and is capable of making observations and of testing and
repairing the device's components. We present simplified definitions of the
notions of symptom, candidate diagnosis, and diagnosis which are based on the
theory of action language ${\cal AL}$. The definitions allow one to give a
simple account of the agent's behavior in which many of the agent's tasks are
reduced to computing stable models of logic programs.
|
cs/0312041
|
Greedy Algorithms in Datalog
|
cs.DB cs.AI
|
In the design of algorithms, the greedy paradigm provides a powerful tool for
solving efficiently classical computational problems, within the framework of
procedural languages. However, expressing these algorithms within the
declarative framework of logic-based languages has proven a difficult research
challenge. In this paper, we extend the framework of Datalog-like languages to
obtain simple declarative formulations for such problems, and propose effective
implementation techniques to ensure computational complexities comparable to
those of procedural formulations. These advances are achieved through the use
of the "choice" construct, extended with preference annotations to effect the
selection of alternative stable-models and nondeterministic fixpoints. We show
that, with suitable storage structures, the differential fixpoint computation
of our programs matches the complexity of procedural algorithms in classical
search and optimization problems.
|
cs/0312042
|
Declarative Semantics for Active Rules
|
cs.DB
|
In this paper we analyze declarative deterministic and non-deterministic
semantics for active rules. In particular we consider several (partial) stable
model semantics, previously defined for deductive rules, such as well-founded,
max deterministic, unique total stable model, total stable model, and maximal
stable model semantics. The semantics of an active program AP is given by first
rewriting it into a deductive program P, then computing a model M defining the
declarative semantics of P and, finally, applying `consistent' updates
contained in M to the source database. The framework we propose permits a
natural integration of deductive and active rules and can also be applied to
queries with function symbols or to queries over infinite databases.
|
cs/0312043
|
On A Theory of Probabilistic Deductive Databases
|
cs.DB
|
We propose a framework for modeling uncertainty where both belief and doubt
can be given independent, first-class status. We adopt probability theory as
the mathematical formalism for manipulating uncertainty. An agent can express
the uncertainty in her knowledge about a piece of information in the form of a
confidence level, consisting of a pair of intervals of probability, one for
each of her belief and doubt. The space of confidence levels naturally leads to
the notion of a trilattice, similar in spirit to Fitting's bilattices.
Intuitively, the points in such a trilattice can be ordered according to truth,
information, or precision. We develop a framework for probabilistic deductive
databases by associating confidence levels with the facts and rules of a
classical deductive database. While the trilattice structure offers a variety
of choices for defining the semantics of probabilistic deductive databases, our
choice of semantics is based on the truth-ordering, which we find to be closest
to the classical framework for deductive databases. In addition to proposing a
declarative semantics based on valuations and an equivalent semantics based on
fixpoint theory, we also propose a proof procedure and prove it sound and
complete. We show that while classical Datalog query programs have a polynomial
time data complexity, certain query programs in the probabilistic deductive
database framework do not even terminate on some input databases. We identify a
large natural class of query programs of practical interest in our framework,
and show that programs in this class possess polynomial time data complexity,
i.e., not only do they terminate on every input database, they are guaranteed
to do so in a number of steps polynomial in the input database size.
|
cs/0312044
|
Clustering by compression
|
cs.CV cond-mat.stat-mech cs.AI physics.data-an q-bio.GN q-bio.QM
|
We present a new method for clustering based on compression. The method
doesn't use subject-specific features or background knowledge, and works as
follows: First, we determine a universal similarity distance, the normalized
compression distance or NCD, computed from the lengths of compressed data files
(singly and in pairwise concatenation). Second, we apply a hierarchical
clustering method. The NCD is universal in that it is not restricted to a
specific application area, and works across application area boundaries. A
theoretical precursor, the normalized information distance, co-developed by one
of the authors, is provably optimal but uses the non-computable notion of
Kolmogorov complexity. We propose precise notions of similarity metric and
normal compressor, and show that the NCD based on a normal compressor is a similarity
metric that approximates universality. To extract a hierarchy of clusters from
the distance matrix, we determine a dendrogram (binary tree) by a new quartet
method and a fast heuristic to implement it. The method is implemented and
available as public software, and is robust under choice of different
compressors. To substantiate our claims of universality and robustness, we
report evidence of successful application in areas as diverse as genomics,
virology, languages, literature, music, handwritten digits, astronomy, and
combinations of objects from completely different domains, using statistical,
dictionary, and block sorting compressors. In genomics we presented new
evidence for major questions in Mammalian evolution, based on
whole-mitochondrial genomic analysis: the Eutherian orders and the Marsupionta
hypothesis against the Theria hypothesis.
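The NCD described above has a simple closed form, NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C(.) is the length of the compressed input. A minimal sketch of the idea, using zlib as a stand-in compressor and toy strings (both are illustrative assumptions, not the authors' setup):

```python
import zlib

def clen(data: bytes) -> int:
    # Compressed length under zlib, standing in for a "normal compressor".
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    # NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))
    cx, cy, cxy = clen(x), clen(y), clen(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"the quick brown fox jumps over the lazy dog " * 20
b = b"the quick brown fox jumps over the lazy cat " * 20
c = bytes(range(256)) * 4  # unrelated, hard-to-compress data

# Similar inputs should come out closer than unrelated ones:
# ncd(a, b) is noticeably smaller than ncd(a, c).
```

Pairwise NCDs fill a distance matrix, which a hierarchical method (the paper's quartet method, or any off-the-shelf linkage) then turns into a dendrogram.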
|
cs/0312045
|
Weight Constraints as Nested Expressions
|
cs.AI
|
We compare two recent extensions of the answer set (stable model) semantics
of logic programs. One of them, due to Lifschitz, Tang and Turner, allows the
bodies and heads of rules to contain nested expressions. The other, due to
Niemela and Simons, uses weight constraints. We show that there is a simple,
modular translation from the language of weight constraints into the language
of nested expressions that preserves the program's answer sets. Nested
expressions can be eliminated from the result of this translation in favor of
additional atoms. The translation makes it possible to compute answer sets for
some programs with weight constraints using satisfiability solvers, and to
prove the strong equivalence of programs with weight constraints using the
logic of here-and-there.
|
cs/0312046
|
On the Abductive or Deductive Nature of Database Schema Validation and
Update Processing Problems
|
cs.DB cs.LO
|
We show that database schema validation and update processing problems such
as view updating, materialized view maintenance, integrity constraint checking,
integrity constraint maintenance or condition monitoring can be classified as
problems of either abductive or deductive nature, according to the reasoning
paradigm that inherently suits them. This is done by performing abductive and
deductive reasoning on the event rules [Oli91], a set of rules that define the
difference between consecutive database states. In this way, we show that it is
possible to provide methods able to deal with all these problems as a whole. We
also show how some existing general deductive and abductive procedures may be
used to reason on the event rules. In this way, we show that these procedures
can deal with all database schema validation and update processing problems
considered in this paper.
|
cs/0312047
|
Mapping weblog communities
|
cs.NE
|
Websites of a particular class form increasingly complex networks, and new
tools are needed to map and understand them. One way of making such a network
understandable is to map it: a map highlights which members of the community have
similar interests, and reveals the underlying social network. In this paper, we
will map a network of websites using Kohonen's self-organizing map (SOM), a
neural-net like method generally used for clustering and visualization of
complex data sets. The set of websites considered has been the Blogalia weblog
hosting site (based at http://www.blogalia.com/), a thriving community of
around 200 members, created in January 2002. In this paper we show how SOM
discovers interesting community features, its relation with other
community-discovering algorithms, and the way it highlights the set of
communities formed over the network.
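A self-organizing map takes only a few lines of code. The sketch below trains a 1-D map of 10 units on two synthetic 2-D clusters; the toy data is an assumption for illustration, not the Blogalia link structure, which would first need a feature-vector encoding of each weblog:

```python
import math
import random

random.seed(0)

# Toy stand-in for weblog feature vectors: two well-separated 2-D clusters.
data = ([(random.gauss(0, 0.1), random.gauss(0, 0.1)) for _ in range(50)]
        + [(random.gauss(3, 0.1), random.gauss(3, 0.1)) for _ in range(50)])

# A 1-D map of 10 units, each holding a 2-D weight vector.
units = [(random.uniform(0, 3), random.uniform(0, 3)) for _ in range(10)]

def bmu(x):
    # Best-matching unit: index of the weight vector closest to x.
    return min(range(len(units)),
               key=lambda i: (units[i][0] - x[0]) ** 2 + (units[i][1] - x[1]) ** 2)

T = 2000
for t in range(T):
    x = random.choice(data)
    winner = bmu(x)
    lr = 0.5 * (1 - t / T)                 # decaying learning rate
    sigma = max(1e-3, 3.0 * (1 - t / T))   # shrinking neighbourhood radius
    for i, w in enumerate(units):
        h = math.exp(-((i - winner) ** 2) / (2 * sigma ** 2))  # neighbourhood kernel
        units[i] = (w[0] + lr * h * (x[0] - w[0]),
                    w[1] + lr * h * (x[1] - w[1]))
```

After training, inputs with similar feature vectors map to nearby units, which is what lets a SOM reveal community structure when the inputs are websites.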
|
cs/0312048
|
Representation Dependence in Probabilistic Inference
|
cs.AI cs.LO
|
Non-deductive reasoning systems are often {\em representation dependent}:
representing the same situation in two different ways may cause such a system
to return two different answers. Some have viewed this as a significant
problem. For example, the principle of maximum entropy has been subjected to
much criticism due to its representation dependence. There has, however, been
almost no work investigating representation dependence. In this paper, we
formalize this notion and show that it is not a problem specific to maximum
entropy. In fact, we show that any representation-independent probabilistic
inference procedure that ignores irrelevant information is essentially
entailment, in a precise sense. Moreover, we show that representation
independence is incompatible with even a weak default assumption of
independence. We then show that invariance under a restricted class of
representation changes can form a reasonable compromise between representation
independence and other desiderata, and provide a construction of a family of
inference procedures that provides such restricted representation independence,
using relative entropy.
|
cs/0312050
|
A Flexible Pragmatics-driven Language Generator for Animated Agents
|
cs.CL cs.MM
|
This paper describes the NECA MNLG; a fully implemented Multimodal Natural
Language Generation module. The MNLG is deployed as part of the NECA system
which generates dialogues between animated agents. The generation module
supports the seamless integration of full grammar rules, templates and canned
text. The generator takes input which allows for the specification of
syntactic, semantic and pragmatic constraints on the output.
|
cs/0312051
|
Towards Automated Generation of Scripted Dialogue: Some Time-Honoured
Strategies
|
cs.CL cs.AI
|
The main aim of this paper is to introduce automated generation of scripted
dialogue as a worthwhile topic of investigation. In particular the fact that
scripted dialogue involves two layers of communication, i.e., uni-directional
communication between the author and the audience of a scripted dialogue and
bi-directional pretended communication between the characters featuring in the
dialogue, is argued to raise some interesting issues. Our hope is that the
combined study of the two layers will forge links between research in text
generation and dialogue processing. The paper presents a first attempt at
creating such links by studying three types of strategies for the automated
generation of scripted dialogue. The strategies are derived from examples of
human-authored and naturally occurring dialogue.
|
cs/0312052
|
Dialogue as Discourse: Controlling Global Properties of Scripted
Dialogue
|
cs.CL cs.AI
|
This paper explains why scripted dialogue shares some crucial properties with
discourse. In particular, when scripted dialogues are generated by a Natural
Language Generation system, the generator can apply revision strategies that
cannot normally be used when the dialogue results from an interaction between
autonomous agents (i.e., when the dialogue is not scripted). The paper explains
that the relevant revision operators are best applied at the level of a
dialogue plan and discusses how the generator may decide when to apply a given
revision operator.
|
cs/0312053
|
On the Expressibility of Stable Logic Programming
|
cs.AI
|
(We apologize for pidgin LaTeX) Schlipf \cite{sch91} proved that Stable Logic
Programming (SLP) solves all $\mathit{NP}$ decision problems. We extend
Schlipf's result to prove that SLP solves all search problems in the class
$\mathit{NP}$. Moreover, we do this in a uniform way as defined in \cite{mt99}.
Specifically, we show that there is a single $\mathrm{DATALOG}^{\neg}$ program
$P_{\mathit{Trg}}$ such that given any Turing machine $M$, any polynomial $p$
with non-negative integer coefficients and any input $\sigma$ of size $n$ over
a fixed alphabet $\Sigma$, there is an extensional database
$\mathit{edb}_{M,p,\sigma}$ such that there is a one-to-one correspondence
between the stable models of $\mathit{edb}_{M,p,\sigma} \cup P_{\mathit{Trg}}$
and the accepting computations of the machine $M$ that reach the final state in
at most $p(n)$ steps. Moreover, $\mathit{edb}_{M,p,\sigma}$ can be computed in
polynomial time from $p$, $\sigma$ and the description of $M$ and the decoding
of such accepting computations from its corresponding stable model of
$\mathit{edb}_{M,p,\sigma} \cup P_{\mathit{Trg}}$ can be computed in linear
time. A similar statement holds for Default Logic with respect to
$\Sigma_2^\mathrm{P}$-search problems\footnote{The proof of this result
involves additional technical complications and will be a subject of another
publication.}.
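For concreteness, stable models are easy to check by brute force on small programs via the Gelfond-Lifschitz reduct. The two-rule program below is a hypothetical example, unrelated to the paper's $P_{\mathit{Trg}}$ construction:

```python
from itertools import chain, combinations

# Hypothetical two-rule program:  p :- not q.   q :- not p.
# Each rule is (head, positive_body, negative_body).
program = [("p", [], ["q"]), ("q", [], ["p"])]
atoms = {"p", "q"}

def least_model(definite_rules):
    # Least model of a negation-free program by naive fixpoint iteration.
    m, changed = set(), True
    while changed:
        changed = False
        for head, pos in definite_rules:
            if head not in m and all(a in m for a in pos):
                m.add(head)
                changed = True
    return m

def is_stable(candidate):
    # Gelfond-Lifschitz: delete rules whose negative body is violated by the
    # candidate, drop remaining negative literals, then compare least models.
    reduct = [(h, pos) for h, pos, neg in program
              if not any(a in candidate for a in neg)]
    return least_model(reduct) == candidate

def powerset(s):
    s = sorted(s)
    return (set(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1)))

stable = [m for m in powerset(atoms) if is_stable(m)]
# The classic even negative loop has exactly two stable models, {p} and {q}.
```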
|
cs/0312057
|
Abduction in Well-Founded Semantics and Generalized Stable Models
|
cs.LO cs.AI
|
Abductive logic programming offers a formalism to declaratively express and
solve problems in areas such as diagnosis, planning, belief revision and
hypothetical reasoning. Tabled logic programming offers a computational
mechanism that provides a level of declarativity superior to that of Prolog,
and which has supported successful applications in fields such as parsing,
program analysis, and model checking. In this paper we show how to use tabled
logic programming to evaluate queries to abductive frameworks with integrity
constraints when these frameworks contain both default and explicit negation.
The result is the ability to compute abduction over well-founded semantics with
explicit negation and answer sets. Our approach consists of a transformation
and an evaluation method. The transformation adjoins to each objective literal
$O$ in a program, an objective literal $not(O)$ along with rules that ensure
that $not(O)$ will be true if and only if $O$ is false. We call the resulting
program a {\em dual} program. The evaluation method, \wfsmeth, then operates on
the dual program. \wfsmeth{} is sound and complete for evaluating queries to
abductive frameworks whose entailment method is based on either the
well-founded semantics with explicit negation, or on answer sets. Further,
\wfsmeth{} is asymptotically as efficient as any known method for either class
of problems. In addition, when abduction is not desired, \wfsmeth{} operating
on a dual program provides a novel tabling method for evaluating queries to
ground extended programs whose complexity and termination properties are
similar to those of the best tabling methods for the well-founded semantics. A
publicly available meta-interpreter has been developed for \wfsmeth{} using the
XSB system.
|
cs/0312058
|
Acquiring Lexical Paraphrases from a Single Corpus
|
cs.CL cs.AI cs.IR cs.LG
|
This paper studies the potential of identifying lexical paraphrases within a
single corpus, focusing on the extraction of verb paraphrases. Most previous
approaches detect individual paraphrase instances within a pair (or set) of
comparable corpora, each of them containing roughly the same information, and
rely on the substantial level of correspondence of such corpora. We present a
novel method that successfully detects isolated paraphrase instances within a
single corpus without relying on any a priori structure or information. A
comparison suggests that an instance-based approach may be combined with a
vector based approach in order to assess better the paraphrase likelihood for
many verb pairs.
|
cs/0312059
|
Polyhierarchical Classifications Induced by Criteria Polyhierarchies,
and Taxonomy Algebra
|
cs.AI cs.IR
|
A new approach to the construction of general persistent polyhierarchical
classifications is proposed. It is based on implicit description of category
polyhierarchy by a generating polyhierarchy of classification criteria.
Similarly to existing approaches, the classification categories are defined by
logical functions encoded by attributive expressions. However, the generating
hierarchy explicitly predefines domains of criteria applicability, and the
semantics of relations between categories is invariant to changes in the
universe composition, to extensions of the variety of criteria, and to
increases in their cardinalities. The generating polyhierarchy is an independent, compact,
portable, and re-usable information structure serving as a template
classification. It can be associated with one or more particular sets of
objects, included in more general classifications as a standard component, or
used as a prototype for more comprehensive classifications. The approach
dramatically simplifies development and unplanned modifications of persistent
hierarchical classifications compared with tree, DAG, and faceted schemes. It
can be efficiently implemented in common DBMSs, while considerably reducing the
amount of computer resources required for storage, maintenance, and use of
complex polyhierarchies.
|
cs/0312060
|
Part-of-Speech Tagging with Minimal Lexicalization
|
cs.CL cs.LG
|
We use a Dynamic Bayesian Network to represent compactly a variety of
sublexical and contextual features relevant to Part-of-Speech (PoS) tagging.
The outcome is a flexible tagger (LegoTag) with state-of-the-art performance
(3.6% error on a benchmark corpus). We explore the effect of eliminating
redundancy and radically reducing the size of feature vocabularies. We find
that a small but linguistically motivated set of suffixes results in improved
cross-corpora generalization. We also show that a minimal lexicon limited to
function words is sufficient to ensure reasonable performance.
|
cs/0401004
|
Cyborg Systems as Platforms for Computer-Vision Algorithm-Development
for Astrobiology
|
cs.CV astro-ph cs.AI
|
Employing the allegorical imagery from the film "The Matrix", we motivate and
discuss our `Cyborg Astrobiologist' research program. In this research program,
we are using a wearable computer and video camcorder in order to test and train
a computer-vision system to be a field-geologist and field-astrobiologist.
|
cs/0401005
|
About Unitary Rating Score Constructing
|
cs.LG
|
We propose pooling test points from different subjects and from different
aspects of the same subject in order to obtain a unitary rating score, via a
nonlinear transformation of indicator points in accordance with Zipf's
distribution. We propose using the well-studied distribution of the
Intelligence Quotient (IQ) as the reference distribution for the latent
variable "progress in studies".
|
cs/0401009
|
Unifying Computing and Cognition: The SP Theory and its Applications
|
cs.AI
|
This book develops the conjecture that all kinds of information processing in
computers and in brains may usefully be understood as "information compression
by multiple alignment, unification and search". This "SP theory", which has
been under development since 1987, provides a unified view of such things as
the workings of a universal Turing machine, the nature of 'knowledge', the
interpretation and production of natural language, pattern recognition and
best-match information retrieval, several kinds of probabilistic reasoning,
planning and problem solving, unsupervised learning, and a range of concepts in
mathematics and logic. The theory also provides a basis for the design of an
'SP' computer with several potential advantages compared with traditional
digital computers.
|
cs/0401014
|
Nested Intervals with Farey Fractions
|
cs.DB
|
Relational Databases are universally conceived as an advance over their
predecessors Network and Hierarchical models. Superior in every querying
respect, they turned out to be surprisingly incomplete when modeling transitive
dependencies. Almost every couple of months a question how to model a tree in
the database surfaces at comp.database.theory newsgroup. This article completes
a series of articles exploring Nested Intervals Model. Previous articles
introduced tree encoding with Binary Rational Numbers. However, binary encoding
grows exponentially, both in breadth and in depth. In this article, we'll
leverage Farey fractions in order to overcome this problem. We'll also
demonstrate that our implementation scales to a tree with 1M nodes.
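One way such a Farey-style encoding might look (a sketch of the general idea, not necessarily the article's exact scheme): each node owns an interval of fractions, children subdivide it by repeated mediants, and ancestry reduces to interval containment.

```python
from fractions import Fraction

def mediant(a: Fraction, b: Fraction) -> Fraction:
    # The mediant of p1/q1 and p2/q2 is (p1+p2)/(q1+q2); it always lies
    # strictly between the two fractions, so child intervals nest cleanly.
    return Fraction(a.numerator + b.numerator, a.denominator + b.denominator)

def child(lo: Fraction, hi: Fraction, k: int):
    # Interval of the k-th child (1-based) of the node owning (lo, hi):
    # successive mediants toward lo carve out the sibling sub-intervals.
    left, right = mediant(lo, hi), hi
    for _ in range(k - 1):
        left, right = mediant(lo, left), left
    return left, right

def encode(path):
    # Map a materialized path such as [1, 2] to its nested interval.
    lo, hi = Fraction(0), Fraction(1)
    for k in path:
        lo, hi = child(lo, hi, k)
    return lo, hi

def is_ancestor(a, b):
    # Strict ancestry is strict interval containment on the left endpoint.
    return a[0] < b[0] and b[1] <= a[1]

# encode([1]) == (1/2, 1); encode([1, 2]) == (3/5, 2/3), nested inside it.
```

Note how the k-th child of the root comes out as (1/(k+1), 1/k): denominators grow only linearly with sibling position, which is the Farey property that avoids the exponential breadth growth of binary rational encodings.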
|
cs/0401015
|
Query Answering in Peer-to-Peer Data Exchange Systems
|
cs.DB cs.LO
|
The problem of answering queries posed to a peer who is a member of a
peer-to-peer data exchange system is studied. The answers have to be consistent
with respect to both the local semantic constraints and the data exchange constraints
with other peers; and must also respect certain trust relationships between
peers. A semantics for peer consistent answers under exchange constraints and
trust relationships is introduced and some techniques for obtaining those
answers are presented.
|
cs/0401017
|
Better Foreground Segmentation Through Graph Cuts
|
cs.CV
|
For many tracking and surveillance applications, background subtraction
provides an effective means of segmenting objects moving in front of a static
background. Researchers have traditionally used combinations of morphological
operations to remove the noise inherent in the background-subtracted result.
Such techniques can effectively isolate foreground objects, but tend to lose
fidelity around the borders of the segmentation, especially for noisy input.
This paper explores the use of a minimum graph cut algorithm to segment the
foreground, resulting in qualitatively and quantitatively cleaner
segmentations. Experiments on both artificial and real data show that the
graph-based method reduces the error around segmented foreground objects. A
MATLAB code implementation is available at
http://www.cs.smith.edu/~nhowe/research/code/#fgseg
|
cs/0401018
|
Factor Temporal Prognosis of Tick-Borne Encephalitis Foci Functioning on
the South of Russian Far East
|
cs.CV
|
A method of temporal factor prognosis of TE (tick-borne encephalitis)
infection has been developed. The high precision of the prognosis results for a
number of geographical regions of Primorsky Krai has been achieved. The method
can be applied not only to epidemiological research but also to other domains.
|
cs/0401020
|
Presynaptic modulation as fast synaptic switching: state-dependent
modulation of task performance
|
cs.NE q-bio.NC
|
Neuromodulatory receptors in presynaptic position have the ability to
suppress synaptic transmission for seconds to minutes when fully engaged. This
effectively alters the synaptic strength of a connection. Much work on
neuromodulation has rested on the assumption that these effects are uniform at
every neuron. However, there is considerable evidence to suggest that
presynaptic regulation may be in effect synapse-specific. This would define a
second "weight modulation" matrix, which reflects presynaptic receptor efficacy
at a given site. Here we explore functional consequences of this hypothesis. By
analyzing and comparing the weight matrices of networks trained on different
aspects of a task, we identify the potential for a low complexity "modulation
matrix", which allows to switch between differently trained subtasks while
retaining general performance characteristics for the task. This means that a
given network can adapt itself to different task demands by regulating its
release of neuromodulators. Specifically, we suggest that (a) a network can
provide optimized responses for related classification tasks without the need
to train entirely separate networks and (b) a network can blend a "memory mode"
which aims at reproducing memorized patterns and a "novelty mode" which aims to
facilitate classification of new patterns. We relate this work to the known
effects of neuromodulators on brain-state dependent processing.
|
cs/0401025
|
Running C++ models under the Swarm environment
|
cs.MA
|
Objective-C is still the language of choice if users want to run their
simulation efficiently under the Swarm environment since the Swarm environment
itself was written in Objective-C. The language is fast, object-oriented, and
easy to learn. However, it is less well known and less expressive than C++, and
it lacks support for many important C++ features (e.g. OpenMP for
high-performance computing applications). In this paper, we present a methodology and
software tools that we have developed for auto generating an Objective-C object
template (and all the necessary interfacing functions) from a given C++ model,
utilising the Classdesc's object description technology, so that the C++ model
can both be run and accessed under the Objective-C and C++ environments. We
also present a methodology for modifying an existing Swarm application to make
part of the model (eg. the heatbug's step method) run under the C++
environment.
|
cs/0401026
|
EcoLab: Agent Based Modeling for C++ programmers
|
cs.MA
|
\EcoLab{} is an agent based modeling system for C++ programmers, strongly
influenced by the design of Swarm. This paper is just a brief outline of
\EcoLab's features, more details can be found in other published articles,
documentation and source code from the \EcoLab{} website.
|
cs/0402001
|
Mobile Re-Finding of Web Information Using a Voice Interface
|
cs.HC cs.IR
|
Mobile access to information is a considerable problem for many users,
especially to information found on the Web. In this paper, we explore how a
voice-controlled service, accessible by telephone, could support mobile users'
needs for refinding specific information previously found on the Web. We
outline challenges in creating such a service and describe architectural and
user interface issues discovered in an exploratory prototype we built called
WebContext.
We also present the results of a study, motivated by our experience with
WebContext, to explore what people remember about information that they are
trying to refind and how they express information refinding requests in a
collaborative conversation. As part of the study, we examine how
end-user-created Web page annotations can be used to help support mobile
information re-finding. We observed the use of URLs, page titles, and
descriptions of page contents to help identify waypoints in the search process.
Furthermore, we observed that the annotations were utilized extensively,
indicating that explicitly added context by the user can play an important role
in re-finding.
|
cs/0402003
|
Semantic Optimization of Preference Queries
|
cs.DB
|
The notion of preference is becoming more and more ubiquitous in present-day
information systems. Preferences are primarily used to filter and personalize
the information reaching the users of such systems. In database systems,
preferences are usually captured as preference relations that are used to build
preference queries. In our approach, preference queries are relational algebra
or SQL queries that contain occurrences of the winnow operator ("find the most
preferred tuples in a given relation").
We present here a number of semantic optimization techniques applicable to
preference queries. The techniques make use of integrity constraints, and make
it possible to remove redundant occurrences of the winnow operator and to apply
a more efficient algorithm for the computation of winnow. We also study the
propagation of integrity constraints in the result of the winnow. We have
identified necessary and sufficient conditions for the applicability of our
techniques, and formulated those conditions as constraint satisfiability
problems.
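The winnow operator quoted above ("find the most preferred tuples in a given relation") keeps exactly the tuples not dominated by any other tuple. A minimal sketch, where the car relation and the preference are a hypothetical illustration:

```python
def winnow(relation, prefers):
    # Keep each tuple t unless some other tuple u in the relation is
    # preferred to it (prefers(u, t) means u dominates t).
    return [t for t in relation
            if not any(prefers(u, t) for u in relation)]

# Hypothetical preference: among cars of the same make, prefer the cheaper one.
cars = [("vw", 2000), ("vw", 1500), ("bmw", 2500)]
prefer = lambda u, t: u[0] == t[0] and u[1] < t[1]

# winnow(cars, prefer) keeps ("vw", 1500) and ("bmw", 2500);
# ("vw", 2000) is dominated by the cheaper VW.
```

The paper's semantic optimizations then exploit integrity constraints, e.g. when a constraint guarantees no tuple dominates another, the winnow above is a no-op and can be removed from the query plan.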
|
cs/0402007
|
An Integrated Approach for Extraction of Objects from XML and
Transformation to Heterogeneous Object Oriented Databases
|
cs.DB cs.SE
|
CERN's (European Organization for Nuclear Research) WISDOM project uses XML
for the replication of data between different data repositories in a
heterogeneous operating system environment. For exchanging data from
Web-resident databases, the data needs to be transformed into XML and back to
the database format. Many different approaches are employed to do this
transformation. This paper addresses issues that make this job more efficient
and robust than existing approaches. It incorporates the World Wide Web
Consortium (W3C) XML Schema specification in the database-XML relationship.
Incorporation of the XML Schema exhibits significant improvements in XML
content usage and reduces the limitations of DTD-based database XML services.
Secondly, the paper explores the possibility of database-independent
transformation of data between XML and different databases. It proposes a
standard XML format that every serialized object should follow. This makes it
possible to use objects of heterogeneous databases seamlessly using XML.
|
cs/0402008
|
A Use-Case Driven Approach in Requirements Engineering : The Mammogrid
Project
|
cs.DB cs.SE
|
We report on the application of the use-case modeling technique to identify
and specify the user requirements of the MammoGrid project in an incremental
and controlled iterative approach. Modeling has been carried out in close
collaboration with clinicians and radiologists with no prior experience of use
cases. The study reveals the advantages and limitations of applying this
technique to requirements specification in the domains of breast cancer
screening and mammography research, with implications for medical imaging more
generally. In addition, this research has shown a return on investment in
use-case modeling in shorter gaps between phases of the requirements
engineering process. The qualitative result of this analysis leads us to
propose that a use-case modeling approach may result in reducing the cycle of
the requirements engineering process for medical imaging.
|
cs/0402009
|
Resolving Clinicians Queries Across a Grids Infrastructure
|
cs.DB cs.SE
|
The past decade has witnessed order of magnitude increases in computing
power, data storage capacity and network speed, giving birth to applications
which may handle large data volumes of increased complexity, distributed over
the Internet. Grids computing promises to resolve many of the difficulties in
facilitating medical image analysis to allow radiologists to collaborate
without having to co-locate. The EU-funded MammoGrid project aims to
investigate the feasibility of developing a Grid-enabled European database of
mammograms and provide an information infrastructure which federates multiple
mammogram databases. This will enable clinicians to develop new common,
collaborative and co-operative approaches to the analysis of mammographic data.
This paper focuses on one of the key requirements for large-scale distributed
mammogram analysis: resolving queries across a grid-connected federation of
images.
|
cs/0402013
|
Corollaries on the fixpoint completion: studying the stable semantics by
means of the Clark completion
|
cs.AI cs.LO
|
The fixpoint completion fix(P) of a normal logic program P is a program
transformation such that the stable models of P are exactly the models of the
Clark completion of fix(P). This is well-known and was studied by Dung and
Kanchanasut (1989). The correspondence, however, goes much further: The
Gelfond-Lifschitz operator of P coincides with the immediate consequence
operator of fix(P), as shown by Wendt (2002), and even carries over to standard
operators used for characterizing the well-founded and the Kripke-Kleene
semantics. We will apply this knowledge to the study of the stable semantics,
and this will allow us to almost effortlessly derive new results concerning
fixed-point and metric-based semantics, and neural-symbolic integration.
|
cs/0402014
|
Self-Organising Networks for Classification: developing Applications to
Science Analysis for Astroparticle Physics
|
cs.NE astro-ph cs.AI
|
Physics analysis in astroparticle experiments requires the capability of
recognizing new phenomena; in order to establish what is new, it is important
to develop tools for automatic classification, able to compare the final result
with data from different detectors. A typical example is the problem of Gamma
Ray Burst detection, classification, and possible association to known sources:
for this task, physicists will need in the coming years tools to associate data
from optical databases, from satellite experiments (EGRET, GLAST), and from
Cherenkov telescopes (MAGIC, HESS, CANGAROO, VERITAS).
|
cs/0402016
|
Perspects in astrophysical databases
|
cs.DB astro-ph
|
Astrophysics has become a domain extremely rich in scientific data. Data
mining tools are needed for information extraction from such large datasets.
This calls for an approach to data management emphasizing the efficiency and
simplicity of data access; efficiency is obtained using multidimensional access
methods and simplicity is achieved by properly handling metadata. Moreover,
clustering and classification techniques on large datasets pose additional
requirements in terms of computation and memory scalability and
interpretability of results. In this study we review some possible solutions.
|
cs/0402019
|
The Munich Rent Advisor: A Success for Logic Programming on the Internet
|
cs.AI cs.DS
|
Most cities in Germany regularly publish a booklet called the {\em
Mietspiegel}. It basically contains a verbal description of an expert system.
It allows the calculation of the estimated fair rent for a flat. By hand, one
may need a weekend to do so. With our computerized version, the {\em Munich
Rent Advisor}, the user just fills in a form in a few minutes and the rent is
calculated immediately. We also extended the functionality and applicability of
the {\em Mietspiegel} so that the user need not answer all questions on the
form. The key to computing with partial information using high-level
programming was to use constraint logic programming. We rely on the internet,
and more specifically the World Wide Web, to provide this service to a broad
user group. More than ten thousand people have used our service in the last
three years. This article describes the experiences in implementing and using
the {\em Munich Rent Advisor}. Our results suggest that logic programming with
constraints can be an important ingredient in intelligent internet systems.
|
cs/0402020
|
Geometrical Complexity of Classification Problems
|
cs.CV
|
Despite encouraging recent progress in ensemble approaches, classification
methods seem to have reached a plateau in development. Further advances depend
on a better understanding of geometrical and topological characteristics of
point sets in high-dimensional spaces, the preservation of such characteristics
under feature transformations and sampling processes, and their interaction
with geometrical models used in classifiers. We discuss an attempt to measure
such properties from data sets and relate them to classifier accuracies.
|