node_id: int64 (values 0–76.9k)
label: int64 (values 0–39)
text: string (lengths 13–124k)
neighbors: list of node ids (lengths 0–3.32k)
mask: string (4 classes)
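A minimal sketch of the record layout implied by the columns above. The field names and types come from the schema; the `Node` dataclass itself and the truncated `text` value are illustrative, not part of the dataset.

```python
from dataclasses import dataclass

@dataclass
class Node:
    node_id: int    # 0 .. 76.9k
    label: int      # integer class label, 0 .. 39
    text: str       # paper title plus abstract, 13 .. 124k characters
    neighbors: list # ids of linked nodes, 0 .. 3.32k entries
    mask: str       # data split; 4 distinct values (e.g. "Train", "Validation", "Test")

# First previewed row, reproduced as a record (text truncated):
row = Node(node_id=2180, label=2,
           text="Salticus: Guided Crawling for Personal Digital Libraries ...",
           neighbors=[2471], mask="Test")
print(row.label, row.mask)  # 2 Test
```

Each row below follows this shape: a paper's text paired with its citation neighbors and its train/validation/test assignment.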
2,180
2
Salticus: Guided Crawling for Personal Digital Libraries In this paper, we describe Salticus, a web crawler that learns from users' web browsing activity. Salticus enables users to build a personal digital library by collecting documents and generalizing over the user's choices. Keywords: personal digital library, business intelligence, web crawling, document acquisition 1.
[ 2471 ]
Test
2,181
2
The GENIA project: corpus-based knowledge acquisition and information extraction from genome research papers We present an outline of the genome information acquisition (GENIA) project for automatically extracting biochemical information from journal papers and abstracts. GENIA will be available over the Internet and is designed to aid in information extraction, retrieval and visualisation and to help reduce information overload on researchers. The vast repository of papers available online in databases such as MEDLINE is a natural environment in which to develop language engineering methods and tools and is an opportunity to show how language engineering can play a key role on the Internet. 1 Introduction In the context of the global research effort to map the human genome, the Genome Informatics Extraction project, GENIA (GENIA, 1999), aims to support such research by automatically extracting information from biochemical papers and their abstracts such as those available from MEDLINE (MEDLINE, 1999) written by domain specialists. The vast repository of research papers which are the results...
[ 1726, 2680 ]
Train
2,182
0
From the Specification of Multiagent Systems by Statecharts to Their Formal Analysis by Model Checking: Towards Safety-Critical Applications A formalism for the specification of multiagent systems should be expressive and illustrative enough to model not only the behavior of one single agent, but also the collaboration among several agents and the influences caused by external events from the environment. For this, state machines [25] seem to provide an adequate means. Furthermore, it should be easily possible to obtain an implementation for each agent automatically from this specification. Last but not least, it is desirable to be able to check whether the multiagent system satisfies some interesting properties. Therefore, the formalism should also allow for the verification or formal analysis of multiagent systems, e.g. by model checking [6]. In this paper, a framework is introduced, which allows us to express declarative aspects of multiagent systems by means of (classical) propositional logic and procedural aspects of these systems by means of state machines (statecharts). Nowadays statecharts are a well accepted means to specify dynamic behavior of software systems. They are a part of the Unified Modeling Language (UML). We describe in a rigorously formal manner, how the specification of spatial knowledge and robot interaction and its verification by model checking can be done, integrating different methods from the field of artificial intelligence such as qualitative (spatial) reasoning and the situation calculus. As example application domain, we will consider robotic soccer, see also [24, 31], which present predecessor work towards a formal logic-based approach for agents engineering.
[ 303, 1129, 1367 ]
Test
2,183
3
Exploiting Planned Disconnections in Mobile Environments We present the notion of a distributed database made up entirely of mobile components. Since disconnections will be frequent in such an environment, we develop a disconnection and reconnection procedure to allow normal processing on the connected components. We briefly discuss a protocol based on epidemic communication to support such a system while ensuring one-copy serializability. 1 Introduction Mobile computers and wireless networks are now being integrated into a variety of enterprises for different applications. The prevailing mode of operation with occasional disconnection by a single user will rapidly evolve into a situation where many if not all users are disconnecting and reconnecting in networks that are created in an ad hoc manner, e.g., a wireless network in a meeting room. This will result in mobile computers being integrated as first class entities in distributed information systems. Such mobile computers will inevitably contain data and information that will need to b...
[ 1303 ]
Validation
2,184
3
Database Replication Using Epidemic Communication. There is a growing interest in asynchronous replica management protocols in which database transactions are executed locally, and their effects are incorporated asynchronously on remote database copies. In this paper we investigate an epidemic update protocol that guarantees consistency and serializability in spite of a write-anywhere capability and conduct simulation experiments to evaluate this protocol. Our results indicate that this epidemic approach is indeed a viable alternative to eager update protocols for a distributed database environment where serializability is needed. 1 Introduction Data replication in distributed databases is an important problem that has been investigated extensively. In spite of numerous proposals, the solution to efficient access of replicated data remains elusive. Data replication has long been touted as a technique for improved performance and high reliability in distributed databases. Unfortunately, data replication has not delivered on ...
[ 1513 ]
Train
2,185
4
Interaction Techniques For Common Tasks In Immersive Virtual Environments - Design, Evaluation, And Application. Drew Kessler for help with the SVE toolkit. The Virtual Environments group at Georgia Tech. The numerous experimental subjects who volunteered their time. Dawn Bowman. TABLE OF CONTENTS: 1 Introduction; 1.1 Motivation; 1.2 Definitions; 1.3 Problem Statement; 1.4 Scope of the Research; 1.5 Hypotheses; 1.6 Contributions ...
[ 804, 1141, 1793, 2009 ]
Test
2,186
2
Twenty-One at TREC-7: Ad-hoc and Cross-language track This paper describes the official runs of the Twenty-One group for TREC-7. The Twenty-One group participated in the ad-hoc and the cross-language track and made the following accomplishments: We developed a new weighting algorithm, which outperforms the popular Cornell version of BM25 on the ad-hoc collection. For the CLIR task we developed a fuzzy matching algorithm to recover from missing translations and spelling variants of proper names. Also for CLIR we investigated translation strategies that make extensive use of information from our dictionaries by identifying preferred translations, main translations and synonym translations, by defining weights of possible translations and by experimenting with probabilistic boolean matching strategies. 1 Introduction Twenty-One is a 2 MECU project with 12 partners funded by the EU Telematics programme, sector Information Engineering. The project subtitle is "Development of a Multimedia Information Transaction and Dissemination Tool". Twenty...
[ 2112 ]
Validation
2,187
3
Creating a Customized Access Method for Blobworld We present the design and analysis of a customized access method for the content-based image retrieval system, Blobworld. Using the amdb access method analysis tool, we analyze three existing multidimensional access methods that support nearest neighbor search in the context of the Blobworld application. Based on this analysis, we propose several variants of the R-tree, tailored to address the problems the analysis revealed. We implemented the access methods we propose in the Generalized Search Trees (GiST) framework and analyzed them using amdb, a tool that enables visualization and performance analysis of access methods. We found that two of our access methods have better performance characteristics for the Blobworld application than any of the traditional multi-dimensional access methods we examined. Based on this experience, we draw conclusions for nearest neighbor access method design, and for the task of constructing custom access methods tailored to particular applications. In particular, we found that our "Top X Jagged Bites" bounding predicate performed better than all the other access methods we tested. 1
[ 2267, 2891 ]
Test
2,188
2
SavvySearch: A Meta-Search Engine that Learns which Search Engines to Query Search engines are among the most successful applications on the Web today. So many search engines have been created that it is difficult for users to know where they are, how to use them and what topics they best address. Meta-search engines reduce the user burden by dispatching queries to multiple search engines in parallel. The SavvySearch meta-search engine is designed to efficiently query other search engines by carefully selecting those search engines likely to return useful results and by responding to fluctuating load demands on the Web. SavvySearch learns to identify which search engines are most appropriate for particular queries, reasons about resource demands and represents an iterative parallel search strategy as a simple plan. 1 The Application: Meta-Search on the Web Companies, institutions and individuals must have a presence on the Web; each is vying for the attention of millions of people. Not too surprisingly then, the most successful applications on the Web to dat...
[ 124, 130, 165, 587, 1004, 1059, 1352, 1393, 1642, 1767, 1888, 2161, 2275, 2532, 2569, 2627, 2705, 2771, 2804, 2920, 3037 ]
Train
2,189
4
Translingual Visual Speech Synthesis Audio-driven facial animation is an interesting and evolving technique for human-computer interaction. Based on an incoming audio stream, a face image is animated with full lip synchronization. This requires a speech recognition system in the language in which audio is provided to get the time alignment for the phonetic sequence of the audio signal. However, building a speech recognition system is data intensive and is a very tedious and time consuming task. We present a novel scheme to implement a language independent system for audio-driven facial animation given a speech recognition system for just one language, in our case, English. The method presented here can also be used for text to audio-visual speech synthesis. 1.
[ 598 ]
Train
2,190
5
Monte Carlo Localization: Efficient Position Estimation for Mobile Robots This paper presents a new algorithm for mobile robot localization, called Monte Carlo Localization (MCL). MCL is a version of Markov localization, a family of probabilistic approaches that have recently been applied with great practical success. However, previous approaches were either computationally cumbersome (such as grid-based approaches that represent the state space by high-resolution 3D grids), or had to resort to extremely coarse-grained resolutions. Our approach is computationally efficient while retaining the ability to represent (almost) arbitrary distributions. MCL applies sampling-based methods for approximating probability distributions, in a way that places computation "where needed." The number of samples is adapted on-line, thereby invoking large sample sets only when necessary. Empirical results illustrate that MCL yields improved accuracy while requiring an order of magnitude less computation when compared to previous approaches. It is also much easier to implement...
[ 793, 1093 ]
Train
2,191
2
Transforming Paper Documents into XML Format with WISDOM++ The transformation of scanned paper documents to a form suitable for an Internet browser is a complex process that requires solutions to several problems. The application of an OCR to some parts of the document image is only one of the problems. In fact, the generation of documents in HTML format is easier when the layout structure of a page has been extracted by means of a document analysis process. The adoption of an XML format is even better, since it can facilitate the retrieval of documents in the Web. Nevertheless, an effective transformation of paper documents into this format requires further processing steps, namely document image classification and understanding. WISDOM++ is a document processing system that operates in five steps: document analysis, document classification, document understanding, text recognition with an OCR, and text transformation into HTML/XML format. The innovative aspects described in the paper are: the preprocessing algorithm, the adaptive page segmen...
[ 2490 ]
Train
2,192
0
Goal Creation in Motivated Agents. Goal creation is an important consideration for an agent that is required to behave autonomously in a real-world domain. This paper describes an agent that is directed, not by a conjunction of top level goals, but by a set of motives. The agent is motivated to create and prioritise different goals at different times as a part of an on-going activity under changing circumstances. Goals can be created both in reaction to, and in anticipation of a situation. While there has been much work on the creation of reactive goals, i.e. goals created in reaction to a situation, the issues involved in the creation of anticipatory, or proactive goals have not been considered in depth. The solution to the goal creation problem outlined here provides an agent with an effective method of creating goals both reactively and proactively, giving the agent a greater degree of autonomy. 1 Introduction The focus of planning research has principally been concerned with the creation of good plans to satisfy ...
[ 620 ]
Train
2,193
3
Maintaining Horizontally Partitioned Warehouse Views Data warehouses usually store large amounts of information, representing an integration of base data from different data sources over a long time period. Aggregate views can be stored as a set of horizontal fragments for the purposes of reducing warehouse query response time and maintenance cost. This paper proposes a scheme that efficiently maintains horizontally partitioned data warehouse views. Using the proposed scheme, only one view fragment holding the relevant subset of tuples of the view is accessed for each update. The scheme also includes an approach to reduce the refresh time for maintaining views that compute aggregate functions MIN and MAX. Keywords: Data Warehouse Applications, View Maintenance, Horizontal Partitioning, Performance Improvement. 1
[ 461 ]
Train
2,194
4
Using Dynamic Mediation to Integrate COTS Entities in a Ubiquitous Computing Environment. The original vision of ubiquitous computing [14] is about enabling people to more easily accomplish tasks through the seamless interworking of the physical environment and a computing infrastructure. A major challenge to the practical realization of this vision involves the integration of commercial-off-the-shelf (COTS) hardware and software components: consider the awkwardness of such a mundane task as exporting a textual memo written on a Palm Pilot to a Microsoft Word document. It is not enough to overcome the protocol and data format mismatches that currently impede the interoperation of these entities: for the user experience to be truly seamless, we must provide a framework for the dynamic connection of such endpoints on demand, to support the ad-hoc interactions that are an integral part of ubiquitous computing. To this end, we offer a dynamic mediation framework called Paths. A Path consists of dynamically instantiated, automatically composable operators that brid...
[ 2049 ]
Test
2,195
4
A Secure Infrastructure for Service Discovery Access in Pervasive Computing Security is paramount to the success of pervasive computing environments. The system presented in this paper provides a communications and security infrastructure that goes far in advancing the goal of anywhere - anytime computing. Our work securely enables clients to access and utilize services in heterogeneous networks. We provide a service registration and discovery mechanism implemented through a hierarchy of service management. The system is built upon a simplified Public Key Infrastructure that provides for authentication, non-repudiation, anti-playback, and access control. Smartcards are used as secure containers for digital certificates. The system is implemented in Java and we use Extensible Markup Language as the sole medium for communications and data exchange. Currently, we are solely dependent on a base set of access rights for our distributed trust model; however, we are expanding the model to include the delegation of rights based upon a predefined policy. In our proposed expansion, instead of exclusively relying on predefined access rights, we have developed a flexible representation of trust information, in Prolog, that can model permissions, obligations, entitlements, and prohibitions. In this paper, we present the implementation of our system and describe the modifications to the design that are required to further enhance distributed trust. Our implementation is applicable to any distributed service infrastructure, whether the infrastructure is wired, mobile, or ad-hoc.
[ 1882 ]
Train
2,196
1
Integration of Machine Learning and Knowledge Acquisition Introduction "Integration of Machine Learning and Knowledge Acquisition" may be a surprising title for an ECAI-94 workshop since most Machine Learning (ML) systems are dedicated to Knowledge Acquisition (KA). What, then, does integrating ML and KA mean? The answer lies in the difference between the approaches developed by what is referred to as ML and KA research. Apart from some major exceptions, such as learning apprentice tools [Mitchell et al., 1989], or libraries like the Machine Learning Toolbox [MLT, 1993], most ML algorithms were described without any characterization in terms of real application needs, in terms of what they could be effectively useful for. ML methods were applied to "real world" problems, but few general and reusable conclusions were drawn from these knowledge acquisition experiments. As ML techniques become more and more sophisticated and able to produce various forms of knowledge, the number of possible applications grows.
[ 498 ]
Train
2,197
0
Studying Robot Social Cognition Within A Developmental Psychology Framework This paper discusses two prominent theories of cognitive development and relates them to experiments in social robotics. The main difference between these theories lies in the different views on the relationship between a child and its social environment: a) the child as a solitary thinker (Piaget) and b) the child in society (Vygotsky). We discuss the implications this has on the design of socially intelligent agents, focusing on robotic agents. We argue that the framework proposed by Vygotsky provides a promising research direction in autonomous agents. We give examples of implementations in the area of social robotics which support our theoretical considerations. More specifically, we demonstrate how a teacher-learner setup can be used to teach a robot a proto-language. The same control architecture is also used for a humanoid doll robot which can interact with a human by imitation. Another experiment addresses dynamic coupling of movements between a human and a mobile robot. Here, ...
[ 757, 3112 ]
Validation
2,198
3
Data Mining on an OLTP System (Nearly) for Free This paper proposes a scheme for scheduling disk requests that takes advantage of the ability of high-level functions to operate directly at individual disk drives. We show that such a scheme makes it possible to support a Data Mining workload on an OLTP system almost for free: there is only a small impact on the throughput and response time of the existing workload. Specifically, we show that an OLTP system has the disk resources to consistently provide one third of its sequential bandwidth to a background Data Mining task with close to zero impact on OLTP throughput and response time at high transaction loads. At low transaction loads, we show much lower impact than observed in previous work. This means that a production OLTP system can be used for Data Mining tasks without the expense of a second dedicated system. Our scheme takes advantage of close interaction with the on-disk scheduler by reading blocks for the Data Mining workload as the disk head “passes over” them while satisfying demand blocks from the OLTP request stream. We show that this scheme provides a consistent level of throughput for the background workload even at very high foreground loads. Such a scheme is of most benefit in combination with an Active Disk environment that allows the background Data Mining application to also take advantage of the processing power and memory available directly on the disk drives. This research was sponsored by DARPA/ITO through ARPA Order D306, and issued
[ 3034 ]
Train
2,199
2
Genre Classification and Domain Transfer for Information Filtering The World Wide Web is a vast repository of information, but the sheer volume makes it difficult to identify useful documents. We identify document genre as an important factor in retrieving useful documents and focus on the novel document genre dimension of subjectivity.
[ 190, 349, 647, 1117, 2484 ]
Train
2,200
3
Comprehensive Hardware and Software Support for Operating Systems to Exploit MP Memory Hierarchies Abstract: High-performance multiprocessor workstations are becoming increasingly popular. Since many of the workloads running on these machines are operating-system intensive, we are interested in exploring the types of support for the operating system that the memory hierarchy of these machines should provide. In this paper, we evaluate a comprehensive set of hardware and software supports that minimize the performance losses for the operating system in a sophisticated cache hierarchy. These supports, selected from recent papers, are code layout optimization, guarded sequential instruction prefetching, instruction stream buffers, support for block operations, support for coherence activity, and software data prefetching. We evaluate these supports under a simulated environment. We show that they have a largely complementary impact and that, when combined, speed up the operating system by an average of 40 percent. Finally, a cost-performance comparison of these schemes suggests that the most cost-effective ones are code layout optimization and block operation support, while the least cost-effective one is software data prefetching. Index Terms: Cache hierarchies, shared-memory multiprocessors, architectural support for operating system, prefetching, trace-driven simulations, performance, block operations. 1
[ 1067 ]
Train
2,201
4
Adaptation and Plasticity of User Interfaces. This paper introduces the notion of plasticity, a new property of interactive systems that denotes a particular type of user interface adaptation. It also presents a generic framework inspired from the model-based approach, for supporting the development of plastic user interfaces. This framework is illustrated with simple case studies. KEYWORDS: User interface adaptation, plasticity. 1. Introduction: A Design Space for Adaptation In HCI, adaptation is modeled as two complementary system properties: adaptability and adaptivity. Adaptability is the capacity of the system to allow users to customize their system from a predefined set of parameters. Adaptivity is the capacity of the system to perform adaptation automatically without deliberate action from the user's part. Whether adaptation is performed on human requests or automatically, the design space for adaptation includes three additional orthogonal axes (see Figure 1): . The target for adaptation. This axis denotes the enti...
[ 1743 ]
Train
2,202
0
Risk Management in Concurrent Engineering in Presence of Intelligent Agents. Contents: 1 Introduction; 1.1 Objective; 1.2 Requirements; 1.3 Proposal; 1.4 Working principle; 1.5 Discussion; 2 Current Status; 2.1 Architecture; 2.2 SDMA as an agent; 2.3 Assumptions about the environment; 2.4 SDMA-Risk Version 0.1; 3 Summary; A KQML Specification for SD
[ 1107 ]
Train
2,203
4
Designing a Miniature Wearable Visual Robot In this paper we report on two methods we have developed to aid in the design of a Wearable Visual Robot --- a body mounted robot for which the main sensor is a camera. Specifically, we have first refined the analysis of sensor placement through the computation of the field of view and body motion using a 3D model of the human form. Second we have improved the design of the robot's morphology with the help of an optimization algorithm based on the Pareto front, within constraints set by the overall choice of robot kinematic chain and the need to specify obtainable actuators and sensors. The methods could be of use for the design and performance evaluation of rather different kinds of wearable robots and devices.
[ 727, 1854 ]
Train
2,204
3
A Framework for Describing Visual Interfaces to Databases In the field of HCI there exist many formalisms for analysing, describing and evaluating interactive systems. However, in developing and evaluating user interfaces to databases, we found it necessary to be able to describe presentation and interaction aspects that are catered for poorly or not at all in current formalisms. This paper presents a framework for the systematic description of data model, presentation and interaction components that together form a graphical user interface. The utility of the framework is then demonstrated by showing how it can be used to describe two existing visual query interfaces. These examples show that the framework provides a systematic method for the concise description of graphical interfaces to databases that can be used either during interface design or as a communication aid. 1 Introduction Research in user interfaces for databases is gaining momentum with many recent conferences and workshops [10, 21, 22, 23, 36]. However, many papers on datab...
[ 2551 ]
Train
2,205
3
Flexible and Scalable Cost-Based Query Planning in Mediators: A Transformational Approach The Internet provides access to a wealth of information. For any given topic or application domain there are a variety of available information sources. However, current systems, such as search engines or topic directories in the World Wide Web, offer only very limited capabilities for locating, combining, and organizing information. Mediators, systems that provide integrated access and database-like query capabilities to information distributed over heterogeneous sources, are critical to realize the full potential of meaningful access to networked information. Query planning, the task of generating a cost-efficient plan that computes a user query from the relevant information sources, is central to mediator systems. However, query planning is a computationally hard problem due to the large number of possible sources and possible orderings on the operations to process the data. Moreover, the choice of sources, data processing operations, and their ordering, strongly affects the plan c...
[ 857, 1127, 1536 ]
Validation
2,206
3
On the Equivalence of the Static and Disjunctive Well-Founded Semantics and its Computation In recent years, much work was devoted to the study of theoretical foundations of Disjunctive Logic Programming and Disjunctive Deductive Databases. While the semantics of non-disjunctive programs is fairly well understood, the declarative and computational foundations of disjunctive logic programming proved to be much more elusive and difficult. Recently, two new and promising semantics have been proposed for the class of disjunctive logic programs. The first one is the static semantics STATIC, proposed by Przymusinski, and the other is the disjunctive well-founded semantics D-WFS, proposed by Brass and Dix. Although the two semantics are based on very different ideas, both of them have been shown to share a number of natural and intuitive properties. In particular, both of them extend the well-founded semantics of normal logic programs. (Preprint submitted to Elsevier Preprint, 4 October 1999.) Nevertheless, since the static semantics employs a much richer underlying language than the ...
[ 494, 2325, 2732, 3104 ]
Validation
2,207
5
Landscapes on Spaces of Trees Combinatorial optimization problems defined on sets of phylogenetic trees are an important issue in computational biology, for instance the problem of reconstructing a phylogeny using maximum likelihood or parsimony approaches. The collection of possible phylogenetic trees is arranged as a so-called Robinson graph by means of the nearest neighbor interchange move. The coherent algebra and spectra of Robinson graphs are discussed in some detail as their knowledge is important for an understanding of the landscape structure. We consider simple model landscapes as well as landscapes arising from the maximum parsimony problem, focusing on two complementary measures of ruggedness: the amplitude spectrum arising from projecting the cost functions onto the eigenspaces of the underlying graph and the topology of local minima and their connecting saddle points.
[ 3084 ]
Test
2,208
3
On Constructing the Right Sort of CBR Implementation Case based reasoning implementations as currently constructed tend to fit three general models, characterized by implementation constraints: task-based (task alone), enterprise (integrating databases), and web-based (integrating web representations). These implementations represent the targets for automatic system construction, and it is important to understand the strengths of each, how they are built, and how one may be constructed by transforming another. This paper describes a framework that relates the three types of CBR implementation, discusses their typical strengths and weaknesses, and describes practical methods for automating the construction of new CBR systems by transforming and synthesizing existing resources. 1 Introduction CBR systems as currently constructed tend to fit three general implementation models, defined by broad implementation constraints on representation and process. Traditionally, task-based implementations have addressed system goals bas...
[ 2592 ]
Train
2,209
0
Leveled Commitment and Trust in Negotiation As agents become more autonomous, agent negotiation and motivational attitudes such as commitment and trust become more important. In this paper we consider the important choice in advanced negotiation applications whether negotiation parameters -- such as cardinality of interaction, agent attitude, and agent architectures -- are incorporated in the negotiation protocol or in the negotiation strategy. Only in the first case are parameters fixed and agents do not have to reason about them when they choose their strategy. We define a dynamic deontic logic which can also be used for the second case, because it models concepts like leveled commitment and trust. For example, it formalizes that violating commitments leads to a decrease in trustworthiness. 1 Introduction In advanced applications of multi-agent systems agents interact more frequently, deliberate more extensively, and in general act more autonomously. For example, in electronic commerce agents are allowed to negotiate ...
[ 716, 2643 ]
Train
2,210
1
Classifying Unseen Cases with Many Missing Values Handling missing attribute values is an important issue for classifier learning, since missing attribute values in either training data or test (unseen) data affect the prediction accuracy of learned classifiers. In many real KDD applications, attributes with missing values are very common. This paper studies the robustness of four recently developed committee learning techniques, including Boosting, Bagging, Sasc, and SascMB, relative to C4.5 for tolerating missing values in test data. Boosting is found to have a similar level of robustness to C4.5 for tolerating missing values in test data in terms of average error in a representative collection of natural domains under investigation. Bagging performs slightly better than Boosting, while Sasc and SascMB perform better than them in this regard, with SascMB performing best. Furthermore, we propose a novel voting weight scheme for the committee learning techniques. Although it is very simple, it can improve the robustness of all these ...
[ 191, 1871 ]
Train
2,211
4
The Conference Assistant: Combining Context-Awareness with Wearable Computing We describe the Conference Assistant, a prototype mobile, context-aware application that assists conference attendees. We discuss the strong relationship between context-awareness and wearable computing and apply this relationship in the Conference Assistant. The application uses a wide variety of context and enhances user interactions with both the environment and other users. We describe how the application is used and the context-aware architecture on which it is based. 1. Introduction In human-human interaction, a great deal of information is conveyed without explicit communication, but rather by using cues. These shared cues, or context, help to facilitate grounding between participants in an interaction [3]. We define context to be any information that can be used to characterize the situation of an entity, where an entity can be a person, place, or physical or computational object. In human-computer interaction, there is very little shared context between the human and the co...
[ 1514 ]
Train
2,212
3
Deductive Queries in ODMG Databases: the DOQL Approach The Deductive Object Query Language (DOQL) is a rule-based query language designed to provide recursion, aggregates, grouping and virtual collections in the context of an ODMG compliant object database system. This paper provides a description of the constructs supported by DOQL and the algebraic operational semantics induced by DOQL's query translation approach to implementation. The translation consists of a logical rewriting step used to normalise DOQL expressions into molecular forms, and a mapping step that transforms the canonical molecular form into algebraic expressions. The paper thus not only describes a deductive language for use with ODMG databases, but indicates how this language can be implemented using conventional query processing techniques. 1 Introduction The ODMG standard is an important step forward due to the provision of a reference architecture for object databases. This architecture encompasses an object model and type system, a set of imperative lan...
[ 2382, 2551 ]
Train
2,213
0
AT Humboldt in RoboCup-99 The paper describes the architecture and the scientific goals of the virtual soccer team "AT Humboldt 99", which is the successor of vice champion "AT Humboldt 98" from RoboCup-98 in Paris and world champion "AT Humboldt 97" from RoboCup-97 in Nagoya. Scientific goals are the development of agent-oriented techniques and learning methods.
[ 1474, 2364 ]
Test
2,214
0
Cooperative Plan Selection Through Trust Cooperation plays a fundamental role in multi-agent systems in which individual agents must interact for the overall system to function effectively.
[ 484 ]
Train
2,215
3
Conjunctive-Query Containment and Constraint Satisfaction Conjunctive-query containment is recognized as a fundamental problem in database query evaluation and optimization. At the same time, constraint satisfaction is recognized as a fundamental problem in artificial intelligence. What do conjunctive-query containment and constraint satisfaction have in common? Our main conceptual contribution in this paper is to point out that, despite their very different formulation, conjunctive-query containment and constraint satisfaction are essentially the same problem. The reason is that they can be recast as the following fundamental algebraic problem: given two finite relational structures A and B, is there a homomorphism h : A → B? As formulated above, the homomorphism problem is uniform in the sense that both relational structures A and B are part of the input. By fixing the structure B, one obtains the following non-uniform problem: given a finite relational structure A, is there a homomorphism h : A → B? In general, non-uniform tractability results do not uniformize. Thus, it is natural to ask: which tractable cases of non-uniform tractability results for constraint satisfaction and conjunctive-query containment do uniformize? Our main technical contribution in this paper is to show that several cases of tractable non-uniform constraint satisfaction problems do indeed uniformize. We exhibit three non-uniform tractability results that uniformize and, thus, give rise to polynomial-time solvable cases of constraint satisfaction and conjunctive-query containment.
[ 1318, 2272 ]
Validation
2,216
1
How to Interpret Neural Networks In Terms of Fuzzy Logic? Neural networks are a very efficient learning tool, e.g., for transforming an experience of an expert human controller into the design of an automatic controller. It is desirable to reformulate the neural network expression for the input-output function in terms most understandable to an expert controller, i.e., by using words from natural language. There are several methodologies for transforming such natural-language knowledge into a precise form; since these methodologies have to take into consideration the uncertainty (fuzziness) of natural language, they are usually called fuzzy logics. 1
[ 2782, 3067 ]
Test
2,217
1
The Maximum Clique Problem Contents: 1 Introduction; 1.1 Notations and Definitions; 2 Problem Formulations; 2.1 Integer Programming Formulations; 2.2 Continuous Formulations; 3 Computational Complexity; 4 Bounds and Estimates; 5 Exact Algorithms; 5.1 Enumerative Algorithms; 5.2 Exact Algorithms for the Unweighted Case; 5.3 Exact Algorithms for the Weighted Case; 6 Heuristics; 6.1 Sequential Greedy Heuristics; 6.2 Local Search Heuristics; 6.3 Advanced Search Heuristics; 6.3.1 Simulated annealing; 6.3.2 Neural networks ...
[ 2244, 2444, 2766 ]
Train
2,218
4
Single Display Groupware: Exploring Computer Support for Co-located Collaboration This panel will explore an interaction paradigm for colocated computer-based collaboration we term Single Display Groupware (SDG). SDG is a class of applications that support multiple simultaneous users interacting in the same room on a single shared display with multiple input devices. SDG are being used in various applications in the educational, entertainment and research communities, but many issues remain to be explored. Keywords Computer-supported cooperative work (CSCW); computer-supported collaborative learning (CSCL); computer-supported collaborative entertainment (CSCE); multiple input devices. INTRODUCTION Single Display Groupware (SDG) is a class of applications that support multiple simultaneous users interacting in a colocated environment on a single shared display with multiple input devices [4]. SDG allows users to interact more naturally and comfortably around the computer. Such applications take advantage of our human ability to interact and communicate in a face-to-...
[ 1645, 2801 ]
Train
2,219
3
Adaptive Query Processing for Internet Applications As the area of data management for the Internet has gained in popularity, recent work has focused on effectively dealing with unpredictable, dynamic data volumes and transfer rates using adaptive query processing techniques. Important requirements of the Internet domain include: (1) the ability to process XML data as it streams in from the network, in addition to working on locally stored data; (2) dynamic scheduling of operators to adjust to I/O delays and flow rates; (3) sharing and re-use of data across multiple queries, where possible; (4) the ability to output results and later update them. An equally important consideration is the high degree of variability in performance needs for different query processing domains: perhaps an ad-hoc query application should optimize for display of incomplete and partial incremental results, whereas a corporate data integration application may need the best time-to-completion and may have very strict data "freshness" guarantees. The goal of...
[ 1056, 1536, 2022, 2910 ]
Validation
2,220
3
TxnWrap: A Transactional Approach to Data Warehouse Maintenance A Data Warehouse Management System (DWMS) maintains materialized views derived from one or more information sources (ISs) under source changes. Much recent research has developed maintenance algorithms to achieve data warehouse consistency under source data updates, typically by sending additional compensation-based messages to the information source space. Given the highly dynamic nature of modern distributed environments such as the WWW, both source data and schema changes are likely to occur autonomously and even concurrently. Most previous solutions become invalid under this new requirement, causing either wrong query results returned from the IS space or even complete rejection messages. This paper proposes to tackle this problem from a different angle by rephrasing it as a global distributed transaction problem, and develops a novel solution strategy that not only handles the as-yet-unsolved problem of concurrent source schema changes but, as an added benefit, is also more efficient than previous solutions from the literature. As the foundation of our solution, we encapsulate the complete data warehouse maintenance process as a DWMS Transaction. We design a multiversion timestamp source wrapper materialization and an associated concurrency control algorithm that guarantees a consistent view of the information source space data inside each DWMS Transaction, thus removing the maintenance anomaly problem. This integrated solution, called TxnWrap, is proven to be correct and to achieve complete consistency of data warehouse maintenance even under a mixture of concurrent data updates and schema changes. TxnWrap is complementary to previous maintenance algorithms, removing their concurrency considerations. We have implemented TxnWrap and plugged it into an existing DWMS testbed...
[ 1395, 1463, 2438 ]
Validation
2,221
0
Convergence of Gradient Dynamics with a Variable Learning Rate As multiagent environments become more prevalent we need to understand how this changes the agent-based paradigm. One aspect that is heavily affected by the presence of multiple agents is learning. Traditional learning algorithms have core assumptions, such as Markovian transitions, which are violated in these environments.
[ 1758 ]
Train
2,222
1
Experiences Using Case-Based Reasoning to Predict Software Project Effort This paper explores some of the practical issues associated with the use of case-based reasoning (CBR) or estimation by analogy. We note that different research teams have reported widely differing results with this technology. Whilst we accept that underlying characteristics of the datasets being used play a major role we also argue that configuring a CBR system can also have an impact. We examine the impact of the choice of number of analogies when making predictions; we also look at different adaptation strategies. Our analysis is based on a dataset of software projects collected by a Canadian software house. Our results show that choosing analogies is important but adaptation strategy appears to be less so. These findings must be tempered, however, with the finding that it was difficult to show statistical significance for smaller datasets even when the accuracy indicators differed quite substantially. For this reason we urge some degree of caution when comparing competing predicti...
[ 2489 ]
Validation
2,223
0
A Logical Framework for Multi-Agent Systems and Joint Attitudes We present a logical framework for reasoning about multi-agent systems. This framework uses Giunchiglia et al.'s notion of a logical context to define a methodology for the modular specification of agents and systems of agents. In particular, the suggested methodology possesses important features from the paradigm of object-oriented (OO) design. We are particularly interested in the specification of agent behaviours via BDI theories---i.e., theories of belief, desire and intention. We explore various issues arising from the BDI specification of systems of agents and illustrate how our framework can be used to specify bottom-level agent behaviour via the specification of top-level intentions, or to reason about complex "emergent behaviour" by specifying the relationship between simple interacting agents. 1 Introduction The formal specification of autonomous reasoning agents has recently received much attention in the AI community, particular under the paradigm of agent-oriented progr...
[ 2364 ]
Train
2,224
2
Going beyond Mobile Agent Platforms: Component-Based Development of Mobile Agent Systems Although mobile agents are a promising programming paradigm, the actual deployment of this technology in real applications has fallen far short of what researchers were expecting. One important reason for this is the fact that in the current mobile agent frameworks it is quite difficult to develop applications without having to center them on the mobile agents and on the agent platforms. In this paper, we present a component-based framework that enables ordinary applications to use mobile agents in an easy and flexible way. By using this approach, applications can be developed using current object-oriented approaches and become capable of sending and receiving agents by the simple drag-and-drop of mobility components. The framework was implemented using the JavaBeans component model and provides integration with ActiveX, which allows applications to be written in a wide variety of programming languages. By using this framework, the development of applications that can make use of mobile agents is greatly simplified, which can contribute to a wider spreading of the mobile agent technology.
[ 589, 1539, 2099 ]
Validation
2,225
4
Real-time Analysis of Data from Many Sensors with Neural Networks Much research has been conducted that uses sensor-based modules with dedicated software to automatically distinguish the user's situation or context. The best results were obtained when powerful sensors (such as cameras or GPS systems) and/or sensor-specific algorithms (like sound analysis) were applied. A somewhat new approach is to replace the one smart sensor by many simple sensors. We argue that neural networks are ideal algorithms to analyze the data coming from these sensors and describe how we came to one specific algorithm that gives good results, by giving an overview of several requirements. Finally, wearable implementations are given to show the feasibility and benefits of this approach and its implications. 1.
[ 533, 664, 1097, 1613, 1731, 1757, 1762, 2472 ]
Train
2,226
0
Finding and Moving Constraints in Cyberspace Agent-based architectures are an effective method for constructing open, dynamic, distributed information systems. The KRAFT system exploits such an architecture, focusing on the exchange of information --- in the form of constraints and data --- among participating agents. The KRAFT approach is particularly wellsuited to solving design and configuration problems, in which constraints and data are retrieved from agents representing customers and vendors on an extranet network, transformed to a common ontology, and processed by mediator agents. This paper describes the KRAFT system, discusses the issues involved in joining a KRAFT network from the point-of-view of information providers in Cyberspace, and examines the role of autonomous and mobile agents in KRAFT. Introduction Traditional distributed database systems provide uniform and transparent access to data objects across the network. These systems, however, are focused on the utilisation of data instead of other semantic knowledg...
[ 1552, 1843 ]
Train
2,227
4
Amplifying Reality . Many novel applications take on the task of moving the personal computer away from the desktop with the approach to merge digital information with physical space and objects. These new applications have given rise to a plethora of notions and terms used to classify them. We introduce amplified reality as a concept complementary to that of augmented reality. To amplify reality is to enhance the publicly available properties of persons and physical objects, by means of using wearable or embedded computational resources. The differences between the two concepts are discussed and examples of implementations are given. The reason for introducing this term is to contribute to the terminology available to discuss already existing applications, but also to open up for a discussion of interesting design implications. Keywords: amplified reality, augmented reality, ubiquitous computing, wearable computing, embedded vs. superimposed properties, private vs. public 1 Breaking away f...
[ 1331, 2118 ]
Train
2,228
1
Towards the Use of Case Properties for Maintaining Case-Based Reasoning Systems. Because of the importance of maintenance in the realm of case-based reasoning systems, methods of maintaining case bases using case properties will be presented. The necessary notation is given, along with definitions of the properties themselves, which are correctness, consistency, incoherence, minimality, and uniqueness. Use of these properties in five experiments is explained, and the results of these experiments on three real-world case bases are given. While the prediction accuracy remains constant, the case base size is reduced by up to 69.1%. 1 Introduction The maintenance of case-based reasoning systems has become an increasingly important research topic during the last few years. For example, a workshop entitled Flexible Strategies for Maintaining Knowledge Containers [7], held at the 14th European Conference on Artificial Intelligence, gave researchers the opportunity to present and discuss new progress in this field. The development of useful and accurate quality measur...
[ 402, 1844 ]
Train
2,229
0
From Active Objects to Autonomous Agents This paper studies how to extend the concept of active objects into a structure of agents. It first discusses the requirements for autonomous agents that are not covered by simple active objects. We propose then the extension of the single behavior of an active object into a set of behaviors with a meta-behavior scheduling their activities. To make a concrete proposal based on these ideas we describe how we extended a framework of active objects, named Actalk, into a generic multi-agent platform, named DIMA. We discuss how this extension has been implemented. We finally report on one application of DIMA to simulate economic models. Keywords: active object, agent, implementation, meta-behavior, modularity, re-usability, simulation. 1 Introduction Object-oriented concurrent programming (OOCP) is the most appropriate and promising technology to implement agents. The concept of active object may be considered as the basic structure for building agents. Furthermore, the combinat...
[ 771, 990, 1834, 2341 ]
Train
2,230
2
On Current Technology for Information Filtering and User Profiling in Agent-Based Systems, Part I: A Perspective Several current techniques and methods in information filtering and profiling are surveyed, including state-of-the-art technology, various techniques currently used by large businesses, and the academic state-of-the-art projects. Given the simplicity of the techniques currently applied in the field, the further development and application of technology currently available in AI and algorithmics will yield significant improvements in both filtering and profiling results. 1 Introduction The terms "Information Filtering" and "profiling" are widely used. Here "Information Filtering" will refer to computer software systems which split (usually large) data streams into useful and not useful components and direct the useful to interested users. Of particular interest are systems which recognize their users are different and split the data stream into separate (possibly overlapping) streams which are directed at distinct users or groups of users. Typically data in the streams is c...
[ 604 ]
Train
2,231
0
Vicious Strategies for Vickrey Auctions
[ 1418, 2602 ]
Train
2,232
3
Efficient Management of Multiversion Documents by Object Referencing Traditional approaches to versioning semistructured information are edit-based, i.e., subsequent document versions are represented by using edit scripts. This paper proposes a reference-based version management scheme that preserves the logical structure of the evolving document through the use of object references. By preserving the document structure among versions the new scheme facilitates more efficient query support. In particular, we examine queries involving projections and selections on the document versions, as well as queries on the document evolution history. Moreover, we show that the proposed scheme provides an effective representation of multiversioned XML documents, both at the transport and exchange levels. In fact, with the reference-based scheme, a document's history can also be viewed and processed as yet another XML document. Furthermore, we demonstrate the effectiveness of the new scheme at the storage level. In particular, the scheme is enhanced with a usefulness-based page management policy that extends and adapts techniques used in transaction-time databases to ensure efficient clustering of information among versions. An extensive comparison of the reference-based versioning against representations used in temporal databases and persistent object managers depicts the performance advantages of the new approach. Finally it should be noted that reference-based versioning is applicable to other kinds of semistructured information (besides XML documents), and can be used to replace traditional version control schemes, such as the edit-based RCS and the timestamp-based SCCS.
[ 731, 877, 2172, 2632 ]
Train
2,233
1
ANSWER: Network Monitoring Using Object-Oriented Rules This paper describes ANSWER, the expert system responsible for monitoring AT&T's 4ESS switches. These switches are extremely important, since they handle virtually all of AT&T's long distance traffic. ANSWER is implemented in R++, a rule-based extension to the C++ object-oriented programming language, and is innovative because it employs both rule-based and object-oriented programming paradigms. The use of object technology in ANSWER has provided a principled way of modeling the 4ESS and of reasoning about failures within the 4ESS. This has resulted in an expert system that is more clearly organized, easily understood and maintainable than its predecessor, which was implemented using the rule-based paradigm alone. ANSWER has been deployed for more than a year and handles all 140 of AT&T's 4ESS switches and processes over 100,000 4ESS alarms per week. Introduction Network reliability is of critical concern to AT&T, since its reputation for network reliability has taken many years to ...
[ 1030, 2986 ]
Test
2,234
1
Three Dimensional Optimization of Supersonic Inlets This paper presents the implementation of these new design techniques and their application to a Mach 3 inlet case. The significant improvements obtained using two different optimizers are presented and compared. The results of these optimizations have been verified using a full Reynolds Averaged Navier-Stokes solver. All the following results are thoroughly analysed and placed into an industrial context. 1 Introduction
[]
Test
2,235
3
Accessing Data Integration Systems through Conceptual Schemas Data integration systems provide access to a set of heterogeneous, autonomous data sources through a so-called global, or mediated view. There is a general consensus that the best way to describe the global view is through a conceptual data model, and that there are basically two approaches for designing a data integration system. In the global-as-view approach, one defines the concepts in the global schema as views over the sources, whereas in the local-as-view approach, one characterizes the sources as views over the global schema. It is well known that processing queries in the latter approach is similar to query answering with incomplete information, and, therefore, is a complex task. On the other hand, it is a common opinion that query processing is much easier in the former approach. In this paper we show the surprising result that, when the global schema is expressed in terms of a conceptual data model, even a very simple one, query processing becomes difficult in the global-as-view approach also. We demonstrate that the problem of incomplete information arises in this case too, and we illustrate some basic techniques for effectively answering queries posed to the global schema of the data integration system. 1
[ 1252, 1612, 1643, 1801, 2594 ]
Validation
2,236
3
Knowledge Management in Heterogeneous Data Warehouse Environments This paper addresses issues related to Knowledge Management in the context of heterogeneous data warehouse environments. The traditional notion of data warehouse is evolving into a federated warehouse augmented by a knowledge repository, together with a set of processes and services to support enterprise knowledge creation, refinement, indexing, dissemination and evolution.
[ 2666, 2804 ]
Test
2,237
3
Panel: Is Generic Metadata Management Feasible? ...models, such as invert and compose? What is the role of an expression language that captures the semantics of models and mappings, not only for design but also for run-time execution? Does a generic approach offer any advantages for model manipulation areas of current interest, such as data integration and XML? If the skeptics are right that a generic approach to model management is unachievable pie-in-the-sky, are writers of metadata-driven applications doomed forever to writing special-purpose object-at-a-time code for navigating their information structures? If so, what is the leverage that the database field can offer for these problems? 2. Panelists Dr. Laura Haas, IBM Research, is working on a tool that can (semi-)automatically produce mappings between two data representations. She has been working on various aspects of data integration since starting the Garlic project in 1994.
[ 444, 1099, 1819, 2785 ]
Train
2,238
1
Soft Margins for AdaBoost Recently ensemble methods like AdaBoost have been applied successfully in many problems, while seemingly defying the problems of overfitting. AdaBoost rarely overfits in the low noise regime, however, we show that it clearly does so for higher noise levels. Central to the understanding of this fact is the margin distribution. AdaBoost can be viewed as a constrained gradient descent in an error function with respect to the margin. We find that AdaBoost asymptotically achieves a hard margin distribution, i.e. the algorithm concentrates its resources on a few hard-to-learn patterns that are interestingly very similar to Support Vectors. A hard margin is clearly a sub-optimal strategy in the noisy case, and regularization, in our case a ``mistrust'' in the data, must be introduced in the algorithm to alleviate the distortions that single difficult patterns (e.g. outliers) can cause to the margin distribution. We propose several regularization methods and generalizations of the original AdaBoost algorithm to achieve a soft margin. In particular we suggest (1) regularized AdaBoost-Reg, where the gradient descent is done directly with respect to the soft margin, and (2) regularized linear and quadratic programming (LP/QP-) AdaBoost, where the soft margin is attained by introducing slack variables. Extensive simulations demonstrate that the proposed regularized AdaBoost-type algorithms are useful and yield competitive results for noisy data.
[ 1299 ]
Train
2,239
3
Integrity Constraints and Constraint Logic Programming It is shown that constraint logic is useful for evaluation of integrity constraints in deductive databases. Integrity constraints are represented as calls to a metainterpreter for negation-as-failure implemented as a constraint solver. This procedure, called lazy negation-as-failure, yields an incremental evaluation: It starts checking the existing database and each time an update request occurs, simplified constraints are produced for checking the particular update and new constraints corresponding to specialized integrity constraints are generated for the updated database. 1 Introduction There is a relationship between integrity constraints in databases and the constraints of constraint logic programming going beyond the partial overlap of the names applied for these phenomena. Both concern conditions that should be ensured for systems of interdependent entities: the different tuples in a database, and the set of variables in a program execution state. Both relate to problems that e...
[ 1168 ]
Validation
2,240
0
QLB: A Quantified Logic for Belief . This paper describes QLB, a quantified logic of belief that is a possible extension of the modal system KD45n to predicate level. The main features of QLB are that: (i) it is allowed to quantify over the agents of belief; (ii) the belief operator can be indexed by any term of the formal language; (iii) terms are not rigid designators, but are interpreted contextually; (iv) automatic theorem proving is possible in QLB (but it is not presented in this paper). QLB is constructed as a partial logic with a monotonic semantics on ordered sets, and its semantic theorems are defined as the formulae that are sometimes true and never false. 1 Introduction Agents are complex objects: they can be modelled in terms of mental states, like knowledge, beliefs, intentions, goals, plans, etc. and they perform actions (sometimes cooperatively) in a society of other agents. In this book the reader can find contributions from different schools of thought related to agents: agent theories for specificati...
[ 1812 ]
Train
2,241
3
Generic Agent Framework for Internet Information Systems For effective Internet database services, it is essential that the information requirements of regular users can be met without the typical delays currently experienced using Internet browsers and the World Wide Web. We use cooperating agents to manage both client and server caches, thereby bringing significant performance improvements. The caching and prefetching of information is based on both user and application profiles and agents communicate to ensure the currency of client caches. According to specific application requirements, various forms of agents can be installed on the server and client sides to provide value-added services to both casual and regular users. All component agents are instantiations and/or specialisations of a generic agent. We describe how a specific Internet brokering system for engineering product data has been constructed using our general framework for the development of Internet information systems. 1 Introduction With the development of World-Wide Web ...
[ 436 ]
Train
2,242
1
Toward Scalable Learning with Non-uniform Class and Cost Distributions: A Case Study in Credit Card Fraud Detection Many factors influence the performance of a learned classifier. In this paper we study different methods of measuring performance based on a unified set of cost models and the effects of training class distribution with respect to these models. Observations from these effects help us devise a distributed multi-classifier meta-learning approach to learn in domains with skewed class distributions, non-uniform cost per error, and large amounts of data. One such domain is credit card fraud detection and our empirical results indicate that, up to a certain degree of skewed distribution, our approach can significantly reduce loss due to illegitimate transactions. Introduction Inductive learning research has been focusing on devising algorithms that generate highly accurate classifiers. Many factors contribute to the quality of the learned classifier. One factor is the class distribution in the training set. Using the same algorithm, different training class distributions can generate classi...
[ 688, 995, 999, 1590, 1944, 2479 ]
Train
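The cost-model idea in the abstract above can be made concrete with a toy sketch: under a simple (hypothetical) model where investigating a flagged transaction costs a fixed overhead and a missed fraud costs the transaction amount, the best score threshold is the one minimizing expected loss. All function names, scores, and dollar amounts below are illustrative, not taken from the paper.

```python
def expected_cost(threshold, scored_cases, overhead):
    """Total loss of a fraud-screening policy under a simple cost model:
    flagged transactions pay a fixed investigation overhead; missed frauds
    lose the full transaction amount. (Hypothetical model for illustration.)"""
    cost = 0.0
    for score, is_fraud, amount in scored_cases:
        if score >= threshold:      # flagged: pay the investigation overhead
            cost += overhead
        elif is_fraud:              # missed fraud: lose the full amount
            cost += amount
    return cost

# (score, is_fraud, amount) triples from a hypothetical classifier.
cases = [(0.9, True, 500.0), (0.6, True, 20.0),
         (0.4, False, 80.0), (0.1, False, 30.0)]

# Pick the threshold with minimum expected loss from a small candidate grid.
best = min((expected_cost(t, cases, overhead=50.0), t)
           for t in (0.0, 0.3, 0.5, 0.8, 1.0))
print(best)  # -> (70.0, 0.8)
```

Note how a non-uniform cost per error changes the decision: with a cheap overhead relative to fraud amounts, a lower threshold (flagging more) becomes optimal, which is exactly why training on the natural class distribution can be the wrong choice.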
2,243
0
Engineering Executable Agents Using Multi-Context Systems In the area of agent-based computing there are many proposals for specific system architectures, and a number of proposals for general approaches to building agents. As yet, however, there are comparatively few attempts to relate these together, and even fewer attempts to provide methodologies which relate designs to architectures and then to executable agents. This paper provides a first attempt to address this shortcoming. We propose a general method of specifying logic-based agents, which is based on the use of multi-context systems, and give examples of its use. The resulting specifications can be directly executed, and we discuss an implementation which makes this direct execution possible.
[ 438, 566, 1260, 1724, 2056 ]
Test
2,244
2
On Maximum Clique Problems In Very Large Graphs . We present an approach for clique and quasi-clique computations in very large multi-digraphs. We discuss graph decomposition schemes used to break up the problem into several pieces of manageable dimensions. A semiexternal greedy randomized adaptive search procedure (GRASP) for finding approximate solutions to the maximum clique problem and maximum quasiclique problem in very large sparse graphs is presented. We experiment with this heuristic on real data sets collected in the telecommunications industry. These graphs contain on the order of millions of vertices and edges. 1. Introduction The proliferation of massive data sets brings with it a series of special computational challenges. Many of these data sets can be modeled as very large multidigraphs M with a special set of edge attributes that represent special characteristics of the application at hand [1]. Understanding the structure of the underlying digraph D(M) is essential for storage organization and information retrieval...
[ 2217 ]
Train
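The GRASP construction phase mentioned in the abstract above can be sketched as follows: grow a clique by repeatedly picking, at random from a restricted candidate list of high-degree vertices, a vertex adjacent to everything chosen so far. This toy version (the function name and graph are hypothetical, and the local-search phase of a full GRASP is omitted) only illustrates the greedy randomized idea:

```python
import random

def greedy_randomized_clique(adj, alpha=0.5, seed=0):
    """One GRASP construction phase for maximum clique (sketch).
    adj maps each vertex to its set of neighbors."""
    rng = random.Random(seed)
    clique = []
    candidates = set(adj)          # invariant: all adjacent to current clique
    while candidates:
        # rank candidates by degree within the remaining candidate set
        deg = {v: len(adj[v] & candidates) for v in candidates}
        best, worst = max(deg.values()), min(deg.values())
        threshold = worst + alpha * (best - worst)
        rcl = [v for v in candidates if deg[v] >= threshold]
        v = rng.choice(rcl)        # randomized pick from the restricted list
        clique.append(v)
        candidates = {u for u in candidates if u in adj[v]} - {v}
    return clique

# Toy graph: vertices 1, 2, 3 form a triangle; 4 hangs off vertex 3.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(sorted(greedy_randomized_clique(adj)))
```

By construction every vertex added is adjacent to all previously chosen ones, so the result is always a clique; repeating the construction with different seeds and keeping the best result is the "multi-start" part of GRASP.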
2,245
3
A Foundation for Representing and Querying Moving Objects Spatio-temporal databases deal with geometries changing over time. The goal of our work is to provide a DBMS data model and query language capable of handling such time-dependent geometries, including those changing continuously which describe moving objects. Two fundamental abstractions are moving point and moving region, describing objects for which only the time-dependent position, or position and extent, are of interest, respectively. We propose to represent such time-dependent geometries as attribute data types with suitable operations, that is, to provide an abstract data type extension to a DBMS data model and query language. This paper presents a design of such a system of abstract data types. It turns out that besides the main types of interest, moving point and moving region, a relatively large number of auxiliary data types is needed. For example, one needs a line type to represent the projection of a moving point into the plane, or a “moving real ” to represent the time-dependent distance of two moving points. It then becomes crucial to achieve (i) orthogonality in the design of the type system, i.e., type constructors can be applied uniformly, (ii) genericity
[ 1068, 1927 ]
Test
2,246
4
Constraint Diagram Reasoning Diagrammatic human-computer interfaces are now becoming standard. In the near future, diagrammatic front-ends, such as those of UML-based CASE tools, will be required to offer a much more intelligent behavior than just editing. Yet there is very little formal support and there are almost no tools available for the construction of such environments. The present paper introduces a constraint-based formalism for the specification and implementation of complex diagrammatic environments. We start from grammar-based definitions of diagrammatic languages and show how a constraint solver for diagram recognition and interpretation can automatically be constructed from such grammars. In a second step, the capabilities of these solvers are extended by allowing formal diagrammatic systems, such as Venn Diagrams, to be axiomatised so that they can be regarded as a new constraint domain. The ultimate aim of this schema is to establish a language of type CLP(Diagram) for diagrammatic reasoni...
[ 2145 ]
Train
2,247
0
Personality in Synthetic Agents ID: A043 Personality in Synthetic Agents Rousseau, Daniel KSL, Stanford University Hayes-Roth, Barbara KSL, Stanford University Abstract Personality characterizes an individual through a set of psychological traits that influence his or her behavior. Combining visions from psychology, artificial intelligence and theater, we are studying the use of personality by intelligent, automated actors able to improvise their behavior in order to portray characters, and to interact with users in a multimedia environment We show how psychological personality traits can be exploited to produce a performance that is theatrically interesting and believable without being completely predictable. We explain how personality can influence moods and interpersonal relationships. We describe the model of a synthetic actor that takes into account those concepts to choose its behavior in a given context. In order to test our approach, we observe the performance of autonomous actors portraying waiters with di...
[ 429, 1344, 1421, 2711 ]
Train
2,248
4
Requirements for a Group Communication Service for FLARE This document explores what the requirements for a group communication service for the Framework for Location-aware Augmented Reality Environments (FLARE) are. This chapter provides an introduction to FLARE. The next chapter will explain the game rules for the first application called Quazoom that we will build using FLARE. Since network partitions are important, we first describe the game rules in the case when there are no partitions, and treat the partitioned case in a separate section
[ 41, 1752 ]
Train
2,249
0
The RETSINA MAS Infrastructure RETSINA is an implemented Multi-Agent System infrastructure that has been developed for several years and applied in many domains ranging from financial portfolio management to logistic planning. In this paper, we distill from our experience in developing MASs to clearly define a generic MAS infrastructure as the domain independent and reusable substratum that supports the agents' social interactions. In addition, we show that the MAS infrastructure imposes requirements on an individual agent if the agent is to be a member of a MAS and take advantage of various components of the MAS infrastructure. Although agents are expected to enter a MAS and seamlessly and effortlessly interact with the agents in the MAS infrastructure, the current state of the art demands agents to be programmed with the knowledge of what infrastructure they will utilize, and what are various fallback and recovery mechanisms that the infrastructure provides. By providing an abstract MAS infrastructure model and a concrete implemented instance of the model, RETSINA, we contribute towards the development of principles and practice to make the MAS infrastructure "invisible" and ubiquitous to the interacting agents.
[ 1104, 1618, 2628, 3169 ]
Train
2,250
2
Explaining Collaborative Filtering Recommendations Automated collaborative filtering (ACF) systems predict a person's affinity for items or information by connecting that person's recorded interests with the recorded interests of a community of people and sharing ratings between like-minded persons. However, current recommender systems are black boxes, providing no transparency into the working of the recommendation. Explanations provide that transparency, exposing the reasoning and data behind a recommendation. In this paper, we address explanation interfaces for ACF systems: how they should be implemented and why they should be implemented. To explore how, we present a model for explanations based on the user's conceptual model of the recommendation process. We then present experimental results demonstrating what components of an explanation are the most compelling. To address why, we present experimental evidence that shows that providing explanations can improve the acceptance of ACF systems. We also describe some ini...
[ 705 ]
Train
2,251
0
A Knowledge-based Approach for Lifelike Gesture Animation . The inclusion of additional modalities into the communicative behavior of virtual agents besides speech has moved into focus of human-computer interface researchers, as humans are more likely to consider computer-generated figures lifelike when appropriate nonverbal behaviors are displayed in addition to speech. In this paper, we propose a knowledge-based approach for the automatic generation of gesture animations for an articulated figure. It combines a formalism for the representation of spatiotemporal gesture features, methods for planning individual gestural animations with respect to form and timing, and formation of arm trajectories. Finally, enhanced methods for rendering animations from motor programs are incorporated in the execution of planned gestures. The approach is targeted to achieve a great variety of gestures as well as a higher degree of lifelikeness in synthetic agents. 1 Introduction The communicative behaviors of virtual anthropomorphic agents, widely used in human-...
[ 386, 421, 646, 2892 ]
Train
2,252
3
Model Generation without Normal Forms and Applications in Natural-Language Semantics . I present a new tableaux-based model generation method for first-order formulas without function symbols. Unlike comparable approaches, the Relational Models (RM) tableaux calculus does not require clausal input theories. I propose some applications of the RM calculus in natural-language semantics and discuss its usefulness as an inference procedure in natural-language processing. 1 Introduction Refutational methods in automated deduction prove the unsatisfiability of logical theories. For many applications, the interpretations of a theory that show its satisfiability are at least as interesting as proofs. Model generation refers to the automatic construction of such interpretations from first-order theories. In the recent years, there has been a growing interest in the automated deduction community in developing model generation methods for various application areas such as finite mathematics [25, 22], deductive databases [7], diagnosis [13, 1], and planning [19]. As a result, mode...
[ 67 ]
Train
2,253
5
Probabilistic Self-Localization for Mobile Robots Localization is a critical issue in mobile robotics. If the robot does not know where it is, it cannot effectively plan movements, locate objects, or reach goals. In this paper, we describe probabilistic self-localization techniques for mobile robots that are based on the principle of maximum-likelihood estimation. The basic method is to compare a map generated at the current robot position to a previously generated map of the environment to probabilistically maximize the agreement between the maps. This method is able to operate in both indoor and outdoor environments using either discrete features or an occupancy grid to represent the world map. The map may be generated using any method to detect features in the robot's surroundings, including vision, sonar, and laser range-finder. A global search of the pose space is performed that guarantees that the best position in a discretized pose space is found according to the probabilistic map agreement measure. In addition, fitting the likelihood function with a parameterized surface allows both subpixel localization and uncertainty estimation to be performed. The application of these techniques in several experiments is described, including experimental localization results for the Sojourner Mars rover. 1
[ 2674 ]
Test
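The map-matching idea in the abstract above, comparing a locally built map against a prior map and maximizing agreement over a discretized pose space, can be sketched for pure translations of a binary occupancy grid. This is a toy stand-in (all names and grids are hypothetical); the paper's method also handles rotation and a probabilistic, rather than exact-match, agreement measure:

```python
def localize(global_map, local_map):
    """Exhaustive pose search over translations (sketch): slide the local
    occupancy grid over the global one and return the (dx, dy) offset that
    maximizes cell agreement, as in maximum-likelihood map matching."""
    gh, lh = len(global_map), len(local_map)
    gw, lw = len(global_map[0]), len(local_map[0])
    best, best_pose = -1, None
    for dy in range(gh - lh + 1):
        for dx in range(gw - lw + 1):
            # count agreeing cells between the local map and this window
            score = sum(local_map[y][x] == global_map[y + dy][x + dx]
                        for y in range(lh) for x in range(lw))
            if score > best:
                best, best_pose = score, (dx, dy)
    return best_pose

# Toy maps: 1 = occupied, 0 = free. The local patch fits at offset (1, 1).
global_map = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 0, 0],
]
local_map = [
    [1, 1],
    [1, 0],
]
print(localize(global_map, local_map))  # -> (1, 1)
```

The exhaustive scan is what "guarantees that the best position in a discretized pose space is found"; the subpixel refinement described in the abstract would then fit a surface to the scores around this discrete maximum.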
2,254
2
Micro-Workflow: A Workflow Architecture Supporting Compositional Object-Oriented Software Development This dissertation proposes micro-workflow, a new workflow architecture that bridges the gap between the type of functionality provided by current workflow systems and the type of workflow functionality required in object-oriented applications. Micro-workflow provides a better solution when the focus is on customizing the workflow features and integrating with other systems. In this thesis I discuss how micro-workflow leverages object technology to provide workflow functionality. As an example, I present the design of an object-oriented framework which provides a reusable micro-workflow architecture and enables developers to customize it through framework-specific reuse techniques. I show how through composition, developers extend micro-workflow to support history, persistence, monitoring, manual intervention, worklists, and federated workflow. I evaluate this approach with three case studies that implement processes with different requirements
[ 415, 1181 ]
Train
2,255
2
Information Triage using Prospective Criteria : In many applications, large volumes of time-sensitive textual information require triage: rapid, approximate prioritization for subsequent action. In this paper, we explore the use of prospective indications of the importance of a time-sensitive document, for the purpose of producing better document filtering or ranking. By prospective, we mean importance that could be assessed by actions that occur in the future. For example, a news story may be assessed (retrospectively) as being important, based on events that occurred after the story appeared, such as a stock-price plummeting or the issuance of many follow-up stories. If a system could anticipate (prospectively) such occurrences, it could provide a timely indication of importance. Clearly, perfect prescience is impossible. However, sometimes there is sufficient correlation between the content of an information item and the events that occur subsequently. We describe a process for creating and evaluating approximate information-triage procedures that are based on prospective indications. Unlike many information-retrieval applications for which document labeling is a laborious, manual process, for many prospective criteria it is possible to build very large, labeled, training corpora automatically. Such corpora can be used to train text classification procedures that will predict the (prospective) importance of each document. This paper illustrates the process with two case studies, demonstrating the ability to predict whether the stock price of one or more companies mentioned in a news story will move significantly following the appearance of that story. We conclude by discussing how the comprehensibility of the learned classifiers can be critical to success. 1
[ 739, 2109, 3107 ]
Train
2,256
2
Proposal Id: P480 Proposal for MPEG-7 Image Description Scheme Name: this document, although any specific DDL selected by MPEG-7 can be used to serve the same purpose as well. We will demonstrate that new features can be easily accommodated using the hierarchical structures and entity relation structures.
[ 1048, 3031 ]
Validation
2,257
2
Creating a Semantic web Interface with Virtual Reality Novel initiatives amongst the Internet community such as Internet2 [1] and Qbone [2] are based on the use of high bandwidth and powerful computers. However the experience amongst the majority of Internet users is light-years from these emerging technologies. We describe the construction of a distributed high performance search engine, utilizing advanced threading techniques on a diskless Linux cluster. The resulting Virtual Reality scene is passed to a standard client machine for viewing. This search engine bridges the gap between the Internet of today, and the Internet of the future. Keywords: Internet Searching, High Performance VRML, Visualization. 1.
[ 505, 742, 2503 ]
Train
2,258
5
Using Guidelines to Constrain Interactive Case-Based HTN Planning This paper describes HICAP, a general purpose and interactive case-based planning architecture. HICAP is a decision support tool for planning a hierarchical course of action. It integrates a hierarchical task editor, HTE, with a conversational case-based planner, NaCoDAE/HTN. HTE maintains a task hierarchy representing guidelines that constrain the final plan. HTE also encodes the hierarchical organization responsible for these tasks. This supports bookkeeping, which is crucial for real-world large-scale planning tasks. HTE can be used to activate NaCoDAE/HTN to interactively refine user-selected guideline tasks into a concrete plan. Our application of HICAP to the task of noncombatant evacuation operations inspired its architecture. In this application, our empirical evaluation with ModSAF simulations confirms that the plans output by HICAP outperform those generated using alternative approaches on three dimensions.
[ 814 ]
Train
2,259
1
Using Error-Correcting Codes for Efficient Text Classification with a Large Number of Categories We investigate the use of Error-Correcting Output Codes (ECOC) for efficient text classification with a large number of categories and propose several extensions which improve the performance of ECOC. ECOC has been shown to perform well for classification tasks, including text classification, but it still remains an under-explored area in ensemble learning algorithms. We explore the use of error-correcting codes that are short (minimizing computational cost) but result in highly accurate classifiers for several real-world text classification problems. Our results also show that ECOC is particularly effective for highprecision classification. In addition, we develop modifications and improvements to make ECOC more accurate, such as intelligently assigning codewords to categories according to their confusability, and learning the decoding (combining the decisions of the individual classifiers) in order to adapt to different datasets. To reduce the need for labeled training data, we develop a framework for ECOC where unlabeled data can be used to improve classification accuracy. This research will impact any area where efficient classification of documents is useful such as web portals, information filtering and routing, especially in open-domain applications where the number of categories is usually very large, and new documents and categories are being constantly added, and the system needs to be very efficient.
[ 526, 2580 ]
Train
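The ECOC decoding step described in the abstract above can be sketched in a few lines: each category is assigned a binary codeword, one binary classifier is trained per bit, and a document is assigned the category whose codeword is nearest in Hamming distance to the predicted bit string. The codebook, categories, and bit predictions below are made up for illustration:

```python
def hamming(a, b):
    """Number of positions at which two bit strings differ."""
    return sum(x != y for x, y in zip(a, b))

def ecoc_predict(bit_predictions, codebook):
    """Decode: pick the class whose codeword is nearest in Hamming distance."""
    return min(codebook, key=lambda cls: hamming(bit_predictions, codebook[cls]))

# Each class gets a codeword; in a real system each bit column would be
# learned by one binary text classifier. (Hypothetical 5-bit code.)
codebook = {
    "sports":   [1, 1, 0, 0, 1],
    "politics": [0, 1, 1, 0, 0],
    "science":  [1, 0, 1, 1, 0],
}

# Suppose the five binary classifiers output these bits for a document.
# One bit (the fourth) is wrong, yet decoding still recovers the class --
# that error correction is the point of using a code with large distance.
noisy_bits = [1, 1, 0, 1, 1]
print(ecoc_predict(noisy_bits, codebook))  # -> sports
```

The extensions the abstract proposes slot into this picture: assigning codewords by class confusability changes how the codebook is built, and learned decoding replaces the plain Hamming-distance `min` with a trained combiner.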
2,260
0
Cryptographic Traces for Mobile Agents . Mobile code systems are technologies that allow applications to move their code, and possibly the corresponding state, among the nodes of a wide-area network. Code mobility is a flexible and powerful mechanism that can be exploited to build distributed applications in an Internet scale. At the same time, the ability to move code to and from remote hosts introduces serious security issues. These issues include authentication of the parties involved and protection of the hosts from malicious code. However, the most difficult task is to protect mobile code against attacks coming from hosts. This paper presents a mechanism based on execution tracing and cryptography that allows one to detect attacks against code, state, and execution flow of mobile software components. 1 Introduction Mobile code technologies are languages and systems that exploit some form of code mobility in an Internet-scale setting. In this framework, the network is populated by several loosely coupled co...
[ 169, 1205, 1665, 2969 ]
Test
2,261
4
Mixed-Initiative Interaction = Mixed Computation We show that partial evaluation can be usefully viewed as a programming model for realizing mixed-initiative functionality in interactive applications. Mixed-initiative interaction between two participants is one where the parties can take turns at any time to change and steer the flow of interaction. We concentrate on the facet of mixed-initiative referred to as `unsolicited reporting' and demonstrate how out-of-turn interactions by users can be modeled by `jumping ahead' to nested dialogs (via partial evaluation). Our approach permits the view of dialog management systems in terms of their native support for staging and simplifying interactions; we characterize three different voice-based interaction technologies using this viewpoint. In particular, we show that the built-in form interpretation algorithm (FIA) in the VoiceXML dialog management architecture is actually a (well disguised) combination of an interpreter and a partial evaluator. This work is supported in part by US National Science Foundation grants DGE-9553458 and IIS-9876167. 1 1
[ 2653 ]
Train
2,262
3
The State of Change: A Survey . Updates are a crucial component of any database programming language. Even the simplest database transactions, such as withdrawal from a bank account, require updates. Unfortunately, updates are not accounted for by the classical Horn semantics of logic programs and deductive databases, which limits their usefulness in real-world applications. As a short-term practical solution, logic programming languages have resorted to handling updates using ad hoc operators without a logical semantics. A great many works have been dedicated to developing logical theories in which the state of the underlying database can evolve with time. Many of these theories were developed with specific applications in mind, such as reasoning about actions, database transactions, program verification, etc. As a result, the different approaches have different strengths and weaknesses. In this survey, we review a number of these works, discuss their application domains, and highlight their strong and weak points...
[ 2347, 3055 ]
Train
2,263
1
A Framework for Learning Adaptation Knowledge Based on Knowledge Light Approaches In this paper, we present a framework for learning adaptation knowledge with knowledge light approaches for case-based reasoning (CBR) systems. "Knowledge light" means that these approaches use knowledge already acquired and represented inside the CBR system. Therefore, we describe the sources of knowledge inside a CBR system along with the different knowledge containers. Next we present our framework in terms of these knowledge containers. Further, we apply our framework to two very different knowledge light approaches for learning adaptation knowledge. After that we point out some issues which should be addressed during the design or the use of such algorithms for learning adaptation knowledge. From our point of view, many of these issues should be the topic of further research. Finally we close with a short discussion. 1 Introduction One of the major challenges during designing a case-based reasoning (CBR) system is the modeling of appropriate adaptation knowledge. Usually adaptati...
[ 1424 ]
Train
2,264
5
The RoboCup Physical Agent Challenge: Phase I Traditional AI research has not given due attention to the important role that physical bodies play for agents as their interactions produce complex emergent behaviors to achieve goals in the dynamic real world. The RoboCup Physical Agent Challenge provides a good testbed for studying how physical bodies play a significant role in realizing intelligent behaviors using the RoboCup framework [Kitano, et al., 95]. In order for the robots to play a soccer game reasonably well, a wide range of technologies needs to be integrated and a number of technical breakthroughs must be made. In this paper, we present three challenging tasks as the RoboCup Physical Agent Challenge Phase I: (1) moving the ball to the specified area (shooting, passing, and dribbling) with no, stationary, or moving obstacles, (2) catching the ball from an opponent or a teammate (receiving, goal-keeping, and intercepting), and (3) passing the ball between two players. The first two are concerned with single agent skills while the third one is related to a simple cooperative behavior. Motivation for these challenges and evaluation methodology are given. 1.
[ 287, 346, 864, 1630, 3017, 3173 ]
Train
2,265
0
Return from the Ant - Synthetic Ecosystems for Manufacturing Control The synthetic ecosystems approach attempts to adopt basic principles of natural ecosystems in the design of multiagent systems. Natural agent systems like insect colonies are fascinating in that they are robust, flexible, and adaptive. Made up of millions of very simple entities, these systems express a highly complex and coordinated global behavior. There are several branches in different sciences, for instance in biology, physics, economics, or in computer science, that focus on distributed systems of locally interacting entities. Their research yields a number of commonly observed characteristics. To supply engineered systems with similar characteristics this thesis proposes a set of principles that should be observed when designing synthetic ecosystems. Each principle is systematically stated and motivated, and its consequences for the manufacturing control domain are discussed. Stigmergy has shown its usefulness in the coordination of large crowds of agents in a synthetic ecosystem...
[ 2831 ]
Validation
2,266
5
Intelligent Data Analysis in Medicine Extensive amounts of knowledge and data stored in medical databases require the development of specialized tools for storing and accessing of data, data analysis, and effective use of stored knowledge and data. This paper focuses on methods and tools for intelligent data analysis, aimed at narrowing the increasing gap between data gathering and data comprehension. The paper sketches the history of research that led to the development of current intelligent data analysis techniques, discusses the need for intelligent data analysis in medicine, and proposes a classification of intelligent data analysis methods. The scope of the paper covers temporal data abstraction methods and data mining methods. A selection of methods is presented and illustrated in medical problem domains. Presently data abstraction and data mining are attracting considerable research interest. However the two technologies, in spite of the fact that they share their central objective, namely the intelligen...
[ 1391 ]
Train
2,267
3
Amdb: A Visual Access Method Development Tool The development process for access methods (AMs) in database systems is complex and tedious. Amdb is a graphical tool that facilitates the design and tuning process for height-balanced tree-structured AMs. Central to amdb's user interface is a suite of graphical views that visualize the entire search tree, paths and subtrees within the tree, and data contained in the tree. These views animate search tree operations in order to visualize the behavior of an access method. Amdb provides metrics that characterize the performance of queries, the tree structure, and the structure-shaping aspects of an AM implementation. The visualizations can be used to browse the performance metrics in the context of the tree structure. The combination of these features allows a designer to locate the sources of performance loss reported by the metrics and investigate causes for those deficiencies. 1. Introduction The recent explosion in the volume and diversity of electronically available information has ...
[ 2187 ]
Train
2,268
4
Evaluation challenges for a federation of heterogeneous information providers: The case of NASA's Earth Science Information Partnerships NASA's Earth Science Information Partnership Federation is an experiment funded to assess the ability of a group of widely heterogeneous earth science data or service providers to self organize and provide improved and cheaper access to an expanding earth science user community. As it is organizing itself, the federation is mandated to set in place an evaluation methodology and collect metrics reflecting the health and benefits of the Federation. This paper describes the challenges of organizing such a federated partnership self-evaluation and discusses the issues encountered during the metrics definition phase of the early data collection. Keyword: metrics, quantitative evaluation, qualitative evaluation, earth science 1 Introduction Besides the obvious need to evaluate any experiment to measure its positive and negative impacts, the impact of the Government Performance and Results Act (GPRA) is slowly changing the way federal projects are being conducted. Quantitative and qualitative...
[ 2520 ]
Test
2,269
1
Face Recognition Identifying a human individual from his or her face is one of the most nonintrusive modalities in biometrics. However, it is also one of the most challenging ones. This chapter discusses why it is challenging and the factors that a practitioner can take advantage of in developing a practical face recognition system. Some major existing approaches are discussed along with some algorithmic considerations. A face recognition algorithm is presented as an example along with some experimental data. Some possible future research directions are outlined at the end of the chapter. 1.1 INTRODUCTION Face recognition from images is a sub-area of the general object recognition problem. It is of particular interest in a wide variety of applications. Applications in law enforcement for mugshot identification, verification for personal identification such as driver's licenses and credit cards, gateways to limited access areas, surveillance of crowd behavior are all potential applications of a succes...
[ 3058 ]
Validation
2,270
3
Bimodal System for Interactive Indexing and Retrieval of Pathology Images The prototype of a system to assist the physicians in differential diagnosis of lymphoproliferative disorders of blood cells from digitized specimens is presented. The user selects the region of interest (ROI) in the image which is then analyzed with a fast, robust color segmenter. Queries in a database of validated cases can be formulated in terms of shape (similarity invariant Fourier descriptors), texture (multiresolution simultaneous autoregressive model), color (L u v space), and area, derived from the delineated ROI. The uncertainty of the segmentation process (obtained through a numerical method) determines the accuracy of shape description (number of Fourier harmonics). Tenfold cross-validated classification over a database of 261 color 640\Theta480 images was implemented to assess the system performance. The ground truth was obtained through immunophenotyping by flow cytometry. To provide a natural man-machine interface, most input commands are bimodal: either using t...
[ 1562 ]
Train
2,271
3
Animating Spatiotemporal Constraint Databases . Constraint databases provide a very expressive framework for spatiotemporal database applications. However, animating such databases is difficult because of the cost of constructing a graphical representation of a single snapshot of a constraint database. We present a novel approach that makes the efficient animation of constraint databases possible. The approach is based on a new construct: parametric polygon. We present an algorithm to construct the set of parametric polygons that represent a given linear constraint database. We also show how to animate objects defined by parametric polygons, analyze the computational complexity of animation, and present empirical data to demonstrate the efficiency of our approach. 1 Introduction Spatiotemporal databases have recently begun to attract broader interest [10, 12, 27]. While the temporal [4, 23, 24] and spatial [14, 28] database technologies are relatively mature, their combination is far from straightforward. In this conte...
[ 54, 1214, 1927 ]
Train
2,272
3
A Case for Dynamic View Management In this paper, we present DynaMat, a system that manages dynamic collections of materialized aggregate views in a data warehouse. At query time DynaMat utilizes a dedicated disk space for storing computed aggregates that are further engaged for answering new queries. Queries are executed independently, or can be bundled within a multi-query expression. In the latter case we present an execution mechanism that exploits dependencies among the queries and the materialized set to further optimize their execution. During updates, DynaMat reconciles the current materialized view selection and refreshes the most beneficial subset of it within a given maintenance window. We show how to derive an efficient update plan with respect to the available maintenance window, the different update policies for the views and the dependencies that exist among them. Categories and Subject Descriptors: H.2.7 [DATABASE MANAGEMENT]: Database Administration ---Data warehouse and re
[ 2215 ]
Test
2,273
3
An Approach to Classify Semi-Structured Objects . Several advanced applications, such as those dealing with the Web, need to handle data whose structure is not known a priori. Such a requirement severely limits the applicability of traditional database techniques, which are based on the fact that the structure of data (e.g. the database schema) is known before data are entered into the database. Moreover, in traditional database systems, whenever a data item (e.g. a tuple, an object, and so on) is entered, the application specifies the collection (e.g. relation, class, and so on) the data item belongs to. Collections are the basis for handling queries and indexing, and therefore a proper classification of data items in collections is crucial. In this paper, we address this issue in the context of an extended object-oriented data model. We propose an approach to classify objects, created without specifying the class they belong to, in the most appropriate class of the schema, that is, the class closest to the object state. In particular, w...
[ 650 ]
Validation
2,274
2
A Parameterized Algebra for Event Notification Services Event notification services are used in various applications such as digital libraries, stock tickers, traffic control, or facility management. However, to our knowledge, a common semantics of events in event notification services has not been defined so far. In this paper, we propose a parameterized event algebra which describes the semantics of composite events for event notification systems. The parameters serve as a basis for flexible handling of duplicates in both primitive and composite events. 1.
[ 1893, 1995 ]
Test
2,275
2
Experiences with Selecting Search Engines Using Metasearch Search engines are among the most useful and high profile resources on the Internet. The problem of finding information on the Internet has been replaced with the problem of knowing where search engines are, what they are designed to retrieve and how to use them. This paper describes and evaluates SavvySearch, a meta-search engine designed to intelligently select and interface with multiple remote search engines. The primary meta-search issue examined is the importance of carefully selecting and ranking remote search engines for user queries. We studied the efficacy of SavvySearch's incrementally acquired meta-index approach to selecting search engines by analyzing the effect of time and experience on performance. We also compared the meta-index approach to the simpler categorical approach and showed how much experience is required to surpass the simple scheme. 1 Introduction Search engines are powerful tools for assisting the otherwise unmanageable task of navigating the rapidly ex...
[ 263, 270, 587, 879, 976, 982, 1108, 1134, 1167, 1432, 1804, 1824, 2188, 2558, 2771, 3031, 3139 ]
Validation
2,276
4
Using the Resources Model in Virtual Environment Design In this paper we take a step back from the formal specification of VEs to investigate where requirements and design information are located within these environments and how it can be structured and analysed. More specifically, we are interested in considering VEs in terms of distributed cognition (DC) [5, 7, 14, 20].
[ 1585, 2649, 2731 ]
Validation
2,277
3
A Rule-based Query Language for HTML With the recent popularity of the web, an enormous amount of information is now available online. Most web documents available over the web are in HTML format and are hierarchically structured in nature. How to query such web documents based on their internal hierarchical structure becomes more and more important. In this paper, we present a rule-based language called WebQL to support effective and flexible web queries. Unlike other web query languages, WebQL is a high-level declarative query language with a logical semantics. It allows us to query web documents based on their internal hierarchical structures. It supports not only negation and recursion, but also query result restructuring in a natural way. We also describe the implementation of the system that supports the WebQL query language.
[ 2083 ]
Test
2,278
4
Context Awareness in Systems with Limited Resources Mobile embedded systems often have strong limitations regarding available resources. In this paper we propose a statistical approach which could scale down to microcontrollers with scarce resources, to model simple contexts based on raw sensor data. As a case study, two experiments are provided where statistical modeling techniques were applied to learn and recognize different contexts, based on accelerometer data. We furthermore point out applications that utilize contextual information for power savings in mobile embedded systems.
[ 1156 ]
Train
2,279
0
Enlightened Agents in TuCSoN In the network-centric computing era, applications often involve sets of autonomous, unpredictable, and possibly mobile entities interacting within open, dynamic, and possibly unreliable environments: Intelligent Environments are a typical case. The complexity of such scenarios requires novel engineering tools, providing effective support from the analysis to the deployment stage. In this paper we illustrate the impact of a general-purpose coordination infrastructure for multiagent systems -- providing a model, a run-time, and suitable deployment tools -- on the engineering of such applications. As a case study, we consider the intelligent management of lights inside a building: despite its simplicity, this problem endorses the typical challenges of this class of applications. The case study is built upon the TuCSoN coordination infrastructure, which provides engineers with both the abstractions and the run-time support for effectively managing the application complexity. I. INFRASTR...
[ 614, 1035, 1041 ]
Validation