column     type    stats
node_id    int64   0 – 76.9k
label      int64   0 – 39
text       string  lengths 13 – 124k
neighbors  list    lengths 0 – 3.32k
mask       string  4 values
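The schema above (one record per graph node) can be sketched as plain Python records. The field names follow the column listing; the helper function is hypothetical, and the two sample rows are copied from the records below with their texts truncated:

```python
# Hypothetical records mirroring the schema above: each row is one node of a
# citation graph -- an integer id, a class label, the paper's title+abstract
# as raw text, the ids of neighboring nodes, and a train/validation/test mask.
records = [
    {"node_id": 980, "label": 1,
     "text": "Automated Text Categorization Using Support Vector Machine ...",
     "neighbors": [2178], "mask": "Train"},
    {"node_id": 988, "label": 2,
     "text": "Kernel Expansions With Unlabeled Examples ...",
     "neighbors": [855, 1126, 1326, 1386, 2808, 2889, 3078],
     "mask": "Validation"},
]

def select_split(records, mask):
    """Keep only the records of one split ('Train', 'Validation', 'Test')."""
    return [r for r in records if r["mask"] == mask]

train = select_split(records, "Train")
print([r["node_id"] for r in train])  # -> [980]
```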
980
1
Automated Text Categorization Using Support Vector Machine In this paper, we study the use of support vector machine in text categorization. Unlike other machine learning techniques, it allows easy incorporation of new documents into an existing trained system. Moreover, dimension reduction, which is usually imperative, now becomes optional. Thus, SVM adapts efficiently in dynamic environments that require frequent additions to the document collection. Empirical results on the Reuters-22173 collection are also discussed. 1. Introduction The increasingly widespread use of information services made possible by the Internet and World Wide Web (WWW) has led to the so-called information overloading problem. Today, millions of online documents on every topic are easily accessible via the Internet. As the available information increases, the inability of people to assimilate and profitably utilize such large amounts of information becomes more and more apparent. Developing user-friendly, automatic tools for efficient as well as effective retrieval ...
[ 2178 ]
Train
981
3
VisualMOQL: The DISIMA Visual Query Language Multimedia data are now available to a variety of users ranging from naive to sophisticated. To make querying easy, visual query languages have been proposed. Most of these languages have a low expressive power and have their own query processors. Efforts have been made to design query languages with proper semantics to facilitate query optimization and processing in existing database systems. The majority of multimedia database systems are built on top of object or object-relational database systems with the underlying query facilities inherited. The DISIMA system is being built on top of a commercial OODBMS and we have chosen to extend the standard object-oriented query language OQL with some multimedia functionalities. The resulting language is called MOQL. This paper presents VisualMOQL, a visual query language implementing the image component of MOQL. 1 Introduction In this paper we present the visual query interface, VisualMOQL, of the DISIMA distributed image database managemen...
[ 947 ]
Train
982
2
Concept Hierarchy Based Text Database Categorization Document categorization as a technique to improve the retrieval of useful documents has been extensively investigated. One important issue in a large-scale metasearch engine is to select text databases that are likely to contain useful documents for a given query. We believe that database categorization can be a potentially effective technique for good database selection, especially in the Internet environment where short queries are usually submitted. In this paper, we propose and evaluate several database categorization algorithms. This study indicates that while some document categorization algorithms could be adopted for database categorization, algorithms that take into consideration the special characteristics of databases may be more effective. Preliminary experimental results are provided to compare the proposed database categorization algorithms. A prototype database categorization system based on one of the proposed algorithms has been developed.
[ 587, 634, 1642, 1804, 1888, 2275, 2464, 2556, 2771 ]
Train
983
4
An analysis of WebWho: How does awareness of presence affect written messages? We present preliminary results from a study of how awareness of presence affects instant messaging in a computer lab. The easily accessible web based awareness tool, WebWho, visualizes a large university computer lab, allowing students to virtually locate one another and communicate via an instant messaging system. Messages can be sent anonymously, by a conscious act of ticking a box. Cross analyses of sender location (collocated, distributed, and distant), sender status (anonymous vs. identified) and message content were made. Results show that awareness of both physical presence and virtual presence affect the messages, and that these factors affect the text differently. The students use the messaging system to support collaborative work and coordinate social activities, as well as allow for playful behavior. Keywords Instant messaging, computer-mediated communication, awareness of presence, web visualization, social coordination INTRODUCTION WebWho [15] is a lightweight, web bas...
[ 552, 1776 ]
Train
984
0
The Organisation of Sociality: A Manifesto for a New Science of MultiAgent Systems . In this paper, we pose and motivate a challenge, namely the need for a new science of multiagent systems. We propose that this new science should be grounded, theoretically on a richer conception of sociality, and methodologically on the extensive use of computational modelling for real-world applications and social simulations. Here, the steps we set forth towards meeting that challenge are mainly theoretical. In this respect, we provide a new model of multi-agent systems that reflects a fully explicated conception of cognition, both at the individual and the collective level. Finally, the mechanisms and principles underpinning the model will be examined with particular emphasis on the contributions provided by contemporary organisation theory. 1.
[ 1553, 2604 ]
Train
985
2
The RoadRunner Project: Towards Automatic Extraction of Web Data Introduction ROADRUNNER is a research project that aims at developing solutions for automatically extracting data from large HTML data sources. The target of our research are data-intensive Web sites, i.e., HTML-based sites that publish large amounts of data in a fairly complex structure. In our view, we aim at ideally seeing the data extraction process of a data-intensive Web site as a black-box taking as input the URL of an entry point to the site (e.g. the home page), and returning as output data extracted from HTML pages in the site in a structured database-like format. This paper describes the top-level software architecture of the ROADRUNNER System, which has been specifically designed to automatize the data extraction process. Several components of the system have already been implemented, and preliminary experiments show the feasibility of our ideas. Data-intensive Web sites usually share a number o
[ 906 ]
Test
986
1
A Comparative Study of Neural Network Based Feature Extraction Paradigms The projection maps and derived classification accuracies of a neural network (NN) implementation of Sammon's mapping, an auto-associative NN (AANN) and a multilayer perceptron (MLP) feature extractor are compared with those of the conventional principal component analysis (PCA). Tested on five real-world databases, the MLP provides the highest classification accuracy at the cost of deforming the data structure, whereas the linear models preserve the structure but usually with inferior accuracy.
[ 103 ]
Train
987
3
Shaping a CBR view with XML . Case Based Reasoning has found increasing application on the Internet as an assistant in Internet commerce stores and as a reasoning agent for online technical support. The strength of CBR in this area stems from its reuse of the knowledge base associated with a particular application, thus providing an ideal way to make personalised configuration or technical information available to the Internet user. Since case data may be one aspect of a company's entire corporate knowledge system, it is important to integrate case data easily within a company's IT infrastructure, using industry specific vocabulary. We suggest XML as the likely candidate to provide such integration. Some applications have already begun to use XML as a case representation language. We review these and present the idea of a standard case view in XML that can work with the vocabularies or namespaces being developed by specific industries. Earlier research has produced version 1.0 of a Case Based Mark-up ...
[ 2592 ]
Train
988
2
Kernel Expansions With Unlabeled Examples Modern classification applications necessitate supplementing the few available labeled examples with unlabeled examples to improve classification performance. We present a new tractable algorithm for exploiting unlabeled examples in discriminative classification. This is achieved essentially by expanding the input vectors into longer feature vectors via both labeled and unlabeled examples. The resulting classification method can be interpreted as a discriminative kernel density estimate and is readily trained via the EM algorithm, which in this case is both discriminative and achieves the optimal solution. We provide, in addition, a purely discriminative formulation of the estimation problem by appealing to the maximum entropy framework. We demonstrate that the proposed approach requires very few labeled examples for high classification accuracy. 1 Introduction In many modern classification problems such as text categorization, very few labeled examples are available but a...
[ 855, 1126, 1326, 1386, 2808, 2889, 3078 ]
Validation
989
4
NEXUS - Distributed Data Management Concepts for Location Aware Applications Nowadays, mobile computers like subnotebooks or personal digital assistants, as well as cellular phones can not only communicate wirelessly, but they can also determine their position via appropriate sensors like DGPS. Socalled location aware applications take advantage of this fact and structure information according to the position of their users. In order to be able to assign data to a certain location, these information systems have to refer to spatial computer models. The NEXUS project, which is supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), aims at the development of a generic infrastructure that serves as a basis for location aware applications. The central task of this platform deals with the data management.
[ 554 ]
Train
990
0
Towards Adaptive Fault Tolerance For Distributed Multi-Agent Systems This paper studies how to bring flexibility to fault-tolerant systems. Firstly, multi-agent systems are identified as a very valuable basis for reaching this goal, and reliability is also shown to be a rare and attractive feature for such systems. We then propose a framework for building applications that provide adaptive fault tolerance, and put forward the promising results obtained when testing the implementation of this framework. We conclude with drawing some perspectives of evolution of our work.
[ 1001, 2229 ]
Validation
991
4
MACK: Media lab Autonomous Conversational Kiosk In this paper, we describe an embodied conversational kiosk that builds on research in embodied conversational agents (ECAs) and on information displays in mixed reality and kiosk format in order to display spatial intelligence. ECAs leverage people’s abilities to coordinate information displayed in multiple modalities, particularly information conveyed in speech and gesture. Mixed reality depends on users ’ interactions with everyday objects that are enhanced with computational overlays. We describe an implementation, MACK (Media lab Autonomous Conversational Kiosk), an ECA who can answer questions about and give directions to the MIT Media Lab’s various research groups, projects and people. MACK uses a combination of speech, gesture, and indications on a normal paper map that users place on a table between themselves and MACK. Research issues involve users’ differential attention to hand gestures, speech and the map, and flexible architectures for Embodied Conversational Agents that allow these modalities to be fused in input and generation.
[ 68, 2331, 2486 ]
Train
992
3
Managing Intervals Efficiently in Object-Relational Databases Modern database applications show a growing demand for efficient and dynamic management of intervals, particularly for temporal and spatial data or for constraint handling. Common approaches require the augmentation of index structures which, however, is not supported by existing relational database systems. By design, the new Relational Interval Tree 1 (RI-tree) employs built-in indexes on an as-they-are basis and is easy to implement. Whereas the functionality and efficiency of the RI-tree is supported by any off-the-shelf relational DBMS, it is perfectly encapsulated by the object-relational data model. The RI-tree requires O(n/b) disk blocks of size b to store n intervals, O(log b n) I/O operations for insertion or deletion, and O(h log b n + r/b) I/Os for an intersection query producing r results. The height h of the virtual backbone tree corresponds to the current expansion and granularity of the data space but does not depend on n. As demonstrated by our ex...
[ 945 ]
Train
993
2
Error-Correcting Output Coding for Text Classification This paper applies error-correcting output coding (ECOC) to the task of document categorization. ECOC, of recent vintage in the AI literature, is a method for decomposing a multiway classification problem into many binary classification tasks, and then combining the results of the subtasks into a hypothesized solution to the original problem. There has been much recent interest in the machine learning community about algorithms which integrate "advice" from many subordinate predictors into a single classifier, and error-correcting output coding is one such technique. We provide experimental results on several real-world datasets, extracted from the Internet, which demonstrate that ECOC can offer significant improvements in accuracy over conventional classification algorithms. 1 Introduction Error-correcting output coding is a recipe for solving multi-way classification problems. It works in two stages: first, independently construct many subordinate classifiers, each responsible for r...
[ 426, 526, 612, 759, 2650 ]
Train
994
2
Conceptual Linking: Ontology-based Open Hypermedia This paper describes the attempts of the COHSE project to define and deploy a Conceptual Open Hypermedia Service. Consisting of . an ontological reasoning service which is used to represent a sophisticated conceptual model of document terms and their relationships; . a Web-based open hypermedia link service that can offer a range of different linkproviding facilities in a scalable and non-intrusive fashion; and integrated to form a conceptual hypermedia system to enable documents to be linked via metadata describing their contents and hence to improve the consistency and breadth of linking of WWW documents at retrieval time (as readers browse the documents) and authoring time (as authors create the documents). Introduction: concepts and metadata Metadata is data that describes other data to enhance its usefulness. The library catalogue or database schema are canonical examples. For our purposes, metadata falls into three broad categories: . Catalogue information: e.g. the artist ...
[ 512, 2616 ]
Train
995
1
Meta-Learning in Distributed Data Mining Systems: Issues and Approaches Data mining systems aim to discover patterns and extract useful information from facts recorded in databases. A widely adopted approach to this objective is to apply various machine learning algorithms to compute descriptive models of the available data. Here, we explore one of the main challenges in this research area, the development of techniques that scale up to large and possibly physically distributed databases. Meta-learning is a technique that seeks to compute higher-level classifiers (or classification models), called meta-classifiers, that integrate in some principled fashion multiple classifiers computed separately over different databases. This study, describes meta-learning and presents the JAM system (Java Agents for Meta-learning), an agent-based meta-learning system for large-scale data mining applications. Specifically, it identifies and addresses several important desiderata for distributed data mining systems that stem from their additional complexity co...
[ 225, 934, 943, 946, 960, 2072, 2242, 3140 ]
Train
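The meta-learning idea in the abstract above, base classifiers trained separately and combined by a higher-level rule, can be sketched in a toy form. This is not the JAM system itself; the base classifiers are invented stand-ins and the "meta" rule here is plain majority voting:

```python
# Toy meta-learning combiner: base classifiers (e.g., trained over different
# databases) each predict a label; the meta-level rule integrates the votes.
from collections import Counter

def meta_predict(base_classifiers, x):
    """Combine base-classifier predictions for input x by majority vote."""
    votes = [clf(x) for clf in base_classifiers]
    return Counter(votes).most_common(1)[0][0]

# Three hypothetical base classifiers, each with a different decision rule.
clf_a = lambda text: "spam" if "offer" in text else "ham"
clf_b = lambda text: "spam" if "free" in text else "ham"
clf_c = lambda text: "ham"

print(meta_predict([clf_a, clf_b, clf_c], "free offer inside"))  # -> spam
```

A real meta-classifier would itself be learned from the base predictions rather than fixed to voting, which is one of the desiderata the JAM work addresses.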
996
0
Investigating Interactions Between Agent Conversations and Agent Control Components Exploring agent conversation in the context of fine-grained agent coordination research has raised several intellectual questions. The major issues pertain to interactions between different agent conversations, the representations chosen for different classes of conversations, the explicit modeling of interactions between the conversations, and how to address these interactions. This paper is not so ambitious as to attempt to address these questions, only frame them in the context of quantified, scheduling-centric multi-agent coordination. research. 1 Introduction Based on a long history of work in agents and agent control components for building distributed AI and multi-agent systems, we are attempting to frame and address a set of intellectual questions pertaining to agent conversation. Interaction lies at the heart of the matter; the issue is interaction between different agent conversations, that possibly occur at different levels of abstraction, but also interaction between the m...
[ 789, 1107, 1115, 1724, 2043, 2321 ]
Train
997
0
Fast File Access for Fast Agents . Mobile agents are a powerful tool for coordinating general purpose distributed computing, where the main goal is high performance. In this paper we demonstrate how the inherent mobility of agents may be exploited to achieve fast file access, which is necessary for most general-purpose applications. We present a file system for mobile agents based exclusively on local disks of the participating workstations. The mobility of agents allows us to make all file operations local, which significantly reduces access time. We also demonstrate how code files and special system files can be handled efficiently in a localdisk -based environment. 1
[ 260 ]
Train
998
1
Applications of Machine Learning and Rule Induction An important area of application for machine learning is in automating the acquisition of knowledge bases required for expert systems. In this paper, we review the major paradigms for machine learning, including neural networks, instance-based methods, genetic learning, rule induction, and analytic approaches. We consider rule induction in greater detail and review some of its recent applications, in each case stating the problem, how rule induction was used, and the status of the resulting expert system. In closing, we identify the main stages in fielding an applied learning system and draw some lessons from successful applications. Introduction Machine learning is the study of computational methods for improving performance by mechanizing the acquisition of knowledge from experience. Expert performance requires much domainspecific knowledge, and knowledge engineering has produced hundreds of AI expert systems that are now used regularly in industry. Machine learning aims to provide ...
[ 1, 831, 999, 2603, 2914, 3130 ]
Train
999
1
A Survey of Methods for Scaling Up Inductive Algorithms . One of the defining challenges for the KDD research community is to enable inductive learning algorithms to mine very large databases. This paper summarizes, categorizes, and compares existing work on scaling up inductive algorithms. We concentrate on algorithms that build decision trees and rule sets, in order to provide focus and specific details; the issues and techniques generalize to other types of data mining. We begin with a discussion of important issues related to scaling up. We highlight similarities among scaling techniques by categorizing them into three main approaches. For each approach, we then describe, compare, and contrast the different constituent techniques, drawing on specific examples from published papers. Finally, we use the preceding analysis to suggest how to proceed when dealing with a large problem, and where to focus future research. Keywords: scaling up, inductive learning, decision trees, rule learning 1. Introduction The knowledge discovery and data...
[ 90, 225, 545, 839, 921, 998, 2242 ]
Validation
1000
4
The Structure of Object Transportation and Orientation in Human-Computer Interaction An experiment was conducted to investigate the relationship between object transportation and object orientation by the human hand in the context of humancomputer interaction (HCI). This work merges two streams of research: the structure of interactive manipulation in HCI and the natural hand prehension in human motor control. It was found that object transportation and object orientation have a parallel, interdependent structure which is generally persistent over different visual feedback conditions. The notion of concurrency and interdependence of multidimensional visuomotor control structure can provide a new framework for human-computer interface evaluation and design. Keywords Direct manipulation, input device, multi-dimensional control, visuomotor control, visual conditions, information processing, interface design, virtual reality. INTRODUCTION Object manipulation is a basic operation in humancomputer interaction (HCI). Modern computer technology advances towards affording m...
[ 562, 2567 ]
Train
1001
0
From Active Objects to Autonomous Agents This paper studies how to extend the concept of active objects into a structure of agents. It first discusses the requirements for autonomous agents that are not covered by simple active objects. We propose then the extension of the single behavior of an active object into a set of behaviors with a meta-behavior scheduling their activities. To make a concrete proposal based on these ideas we describe how we extended a framework of active objects, named Actalk, into a generic multi-agent platform, named DIMA. We discuss how this extension has been implemented. We finally report on one application of DIMA to simulate economic models. Keywords: active object, agent, implementation, meta-behavior, modularity, re-usability, simulation. 1 Introduction Object-oriented concurrent programming (OOCP) is the most appropriate and promising technology to implement agents. The concept of active object may be considered as the basic structure for building agents. Furthermore, the combinat...
[ 771, 990, 1834, 2341 ]
Validation
1002
3
Scalable Feature Selection, Classification and Signature Generation for Organizing Large Text Databases Into Hierarchical Topic Taxonomies We explore how to organize large text databases hierarchically by topic to aid better searching, browsing and filtering. Many corpora, such as internet directories, digital libraries, and patent databases are manually organized into topic hierarchies, also called taxonomies. Similar to indices for relational data, taxonomies make search and access more efficient. However, the exponential growth in the volume of on-line textual information makes it nearly impossible to maintain such taxonomic organization for large, fast-changing corpora by hand. We describe an automatic system that starts with a small sample of the corpus in which topics have been assigned by hand, and then updates the database with new documents as the corpus grows, assigning topics to these new documents with high speed and accuracy. To do this, we use techniques from statistical pattern recognition to efficiently separate the feature words, or discriminants, from the noise words at each node of the taxonomy. Usi...
[ 846 ]
Train
1003
5
OBPRM: An Obstacle-Based PRM for 3D Workspaces In this paper we consider an obstacle-based PRM
[ 437, 693, 841, 1936, 2587 ]
Train
1004
2
From Resource Discovery to Knowledge Discovery on the Internet More than 50 years ago, at a time when modern computers didn't exist yet, Vannevar Bush wrote about a multimedia digital library containing human collective knowledge and filled with "trails" linking materials of the same topic. At the end of World War II, Vannevar urged scientists to build such a knowledge store and make it useful, continuously extendable and more importantly, accessible for consultation. Today, the closest to the materialization of Vannevar's dream is the World-Wide Web hypertext and multimedia document collection. However, the ease of use and accessibility of the knowledge described by Vannevar is yet to be realized. Since the 60s, extensive research has been accomplished in the information retrieval field, and free-text search was finally adopted by many text repository systems in the late 80s. The advent of the World-Wide Web in the 90s helped text search become routine as millions of users use search engines daily to pinpoint resources on the Internet. However, r...
[ 18, 901, 2188, 2503 ]
Train
1005
1
Three Ways to Grow Designs: A Comparison of Evolved Embryogenies for a Design Problem This paper explores the use of growth processes, or embryogenies, to map genotypes to phenotypes within evolutionary systems. Following a summary of the significant features of embryogenies, the three main types of embryogenies in Evolutionary Computation are then identified and explained: external, explicit and implicit. An experimental comparison between these three different embryogenies and an evolutionary algorithm with no embryogeny is performed. The problem set to the four evolutionary systems is to evolve tessellating tiles. In order to assess the scalability of the embryogenies, the problem is increased in difficulty by enlarging the size of tiles to be evolved. The results are surprising, with the implicit embryogeny outperforming all other techniques by showing no significant increase in the size of the genotypes or decrease in accuracy of evolution as the scale of the problem is increased. 1. Introduction The use of computers to evolve solutions to problems has seen a dra...
[ 1123 ]
Train
1006
3
DrawCAD: Using Deductive Object-Relational Databases in CAD Computer-Aided Design (CAD) involves the use of computers in the various stages of engineering design. It has large volumes of data with complex structures that needs to be stored and managed efficiently and properly. In CAD, graphical objects with complex structures can be created by reusing previously created objects. The data of these objects have the references to the other objects they contain. Deductive object-relational databases can be used to compute the complete data of graphical objects that reuse other objects. This is the idea behind the development of the DrawCAD system. DrawCAD is a CAD system built on top of the Relationlog object-relational deductive database system. It facilitates the creation of graphical objects by reusing previously created objects. The DrawCAD system illustrates how CAD systems can be developed, using deductive object-relational databases to store and manage data and also perform the computations that are normally performed by the application program.
[ 178 ]
Train
1007
2
Minimization in Cooperative Response to Failing Database Queries When a query fails, it is more cooperative to identify the cause of failure, rather than just to report the empty answer set. If there is not a cause for the query's failure, it is worthwhile to report the part of the query which failed. To identify a minimal failing subquery (MFS) of the query is the best way to do this. (This MFS is not unique; there may be many of them.) Likewise, to identify a maximal succeeding subquery (MSS) can help a user to recast a new query that leads to a non-empty answer set. Database systems do not provide the functionality of these types of cooperative responses. This may be, in part, because algorithmic approaches to finding the MFSs and the MSSs to a failing query are not obvious. The search space of subqueries is large. Despite work on MFSs in the past, the algorithmic complexity of these identification problems had remained uncharted. This paper shows the complexity profile of MFS and MSS identification. It is shown that there exists a simple algorit...
[ 876 ]
Validation
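One simple strategy for finding a single MFS of the kind discussed above can be sketched as follows. The predicates and the failure oracle are hypothetical; the sketch relies on failure being upward-closed for conjunctive queries (adding a conjunct can only shrink the answer set, so every superset of a failing subquery also fails):

```python
# Greedy search for one minimal failing subquery (MFS): try to drop each
# predicate in turn; if the remainder still fails, the predicate is not
# needed for the failure and is removed permanently.
def minimal_failing_subquery(predicates, fails):
    """Return one minimal subset of `predicates` for which fails() is True."""
    assert fails(predicates), "the full query must fail"
    mfs = list(predicates)
    for p in list(mfs):
        candidate = [q for q in mfs if q != p]
        if candidate and fails(candidate):
            mfs = candidate  # p was not needed for the failure
    return mfs

# Toy oracle: the query fails iff it contains both conflicting predicates.
conflict = {"price < 10", "brand = 'Rolex'"}
def fails(subquery):
    return conflict <= set(subquery)

query = ["price < 10", "brand = 'Rolex'", "color = 'gold'"]
print(minimal_failing_subquery(query, fails))
```

As the abstract notes, an MFS is not unique; this greedy pass returns one of them, and its cost is linear in the number of predicates times the cost of one oracle call.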
1008
0
Using Problem-Solving Plans in the Recognition of Plans and Goals This article discusses the role of problem-solving plans in the interpretation of natural-language dialogues. A "problem-solving plan" is a declarative description of the steps of the process of planning and executing linguistic and domain-level actions. The article shows that an appropriate representation of these plans is the basis for modelling the cooperative behaviour of the agents taking part in a dialogue. Problem-solving plans are part of an agent architecture capable of cooperating with other agents. 1 Introduction The analysis of Natural Language plays an important role in the development of intelligent interfaces: indeed, to make a system friendly to its users, it is important to enrich it with a theory of language that describes the communicative strategies people adopt when interacting. Moreover, the basic concepts of rationality and cooperativity must be made explicit in the system. The idea is ...
[ 774 ]
Train
1009
2
Facilitating the Exchange of Explicit Knowledge Through Ontology Mappings In this paper, we give an overview of a system (CAIMAN) that can facilitate the exchange of relevant documents between geographically dispersed people in Communities of Interest. The nature of Communities of Interest prevents the creation and enforcement of a common organizational scheme for documents, to which all community members adhere. Each community member organizes her documents according to her own categorization scheme (ontology). CAIMAN exploits this personal ontology, which is essentially the perspective of a user on a domain, for information retrieval. Related documents are retrieved on a concept granularity level from a central community document repository. To find the related concepts in the queried ontology, CAIMAN performs an ontology mapping. The ontology mapping in CAIMAN is based on a novel approach, which considers the concepts in an ontology implicitly represented by the documents assigned to each concept. Using machine learning techniques for text classification, a concept in a personal ontology is mapped to a concept in a community ontology. The CAIMAN system uses this mapping to provide document publishing and retrieval services both for the community and the user. First results of the prototype system showed that this approach can be a valid alternative to existing techniques for information retrieval.
[ 3065 ]
Train
1010
5
Implementing a Knowledge Date-a-Base Knowledge-based systems are very useful, but can be difficult to design because of the complexity of the real-world knowledge they represent. This paper compares the experiences of building the same knowledge base by hand in two different systems, Otter and CLIPS. The knowledge base considered is that of people's preferences towards others, in the interests of finding a "dating match." Finally, this paper considers Horn theorems and their impact on the usefulness of knowledge systems. Introduction Because technology and automation are increasingly becoming a part of everyday life, it is beneficial to enable technology to "understand" its application area. An obvious way of doing this is to implement and embed a knowledge base in an application. However, designing a good knowledge base is not trivial. A good knowledge base needs to be general so it can be reused, complete to avoid bad models, and efficient in description and time. This paper presents the authors' experiences implement...
[ 2406 ]
Train
1011
4
Jazz: An Extensible Zoomable User Interface Graphics Toolkit in Java In this paper we investigate the use of scene graphs as a general approach for implementing two-dimensional (2D) graphical applications, and in particular Zoomable User Interfaces (ZUIs). Scene graphs are typically found in three-dimensional (3D) graphics packages such as Sun's Java3D and SGI's OpenInventor. They have not been widely adopted by 2D graphical user interface toolkits. To explore the effectiveness of scene graph techniques, we have developed Jazz, a general-purpose 2D scene graph toolkit. Jazz is implemented in Java using Java2D, and runs on all platforms that support Java 2. This paper describes Jazz and the lessons we learned using Jazz for ZUIs. It also discusses how 2D scene graphs can be applied to other application areas. Keywords Zoomable User Interfaces (ZUIs), Animation, Graphics, User Interface Management Systems (UIMS), Pad++, Jazz. INTRODUCTION Today's Graphical User Interface (GUI) toolkits contain a wide range of built-in user interface objects (also kno...
[ 466, 1219, 2504, 2510 ]
Train
1,012
0
Designing Agent-Oriented Systems by Analysing Agent Interactions . We propose a preliminary methodology for agent-oriented software engineering based on the idea of agent interaction analysis. This approach uses interactions between undetermined agents as the primary component of analysis and design. Agents as a basis for software engineering are useful because they provide a powerful and intuitive abstraction which can increase the comprehensiblity of a complex design. The paper describes a process by which the designer can derive the interactions that can occur in a system satisfying the given requirements and use them to design the structure of an agent-based system, including the identification of the agents themselves. We suggest that this approach has the flexibility necessary to provide agent-oriented designs for open and complex applications, and has value for future maintenance and extension of these systems. 1
[ 320, 340, 1297, 1759, 2309, 2343 ]
Train
1,013
0
Component Based Agent Construction . In this paper, an agent architecture is proposed that can be used to integrate pre-existing components that provide the domain dependent agent functionality. The key integrating feature of the agent is an active message board that is used for inter-component, hence intra-agent communication. The board is active because it automatically forwards messages to components, they do not have to poll the message board. It does this on the basis of message pattern functions that components place on the board using advertisement messages. These functions can contain component provided semantic tests on the content of the message, they can also communicate with any other component whilst they are being applied. In addition an agent management toolkit, called ALFA, is described which offers a set of agent management services. This toolkit consists of a number of servers for storing the code of the components and symbolic descriptions of agents regarding their component makeup. A third server use...
[]
Train
1,014
3
Searching Documents on the Intranet Searching for documents on the internet with today’s search engines, which are mainly based on words in a document, is not satisfactory. Results can be improved by also taking the content of a document into account. The Extensible Markup Language (XML) enables us to do semantic tagging and to make the structure of a document explicit. But this describes a document only at the syntactical level. A more ideal situation would be when the XML tagging is also used to define the document at the semantical level. To realize this we allow an author of a document to describe the relevant concepts by means of tags like he would design an object-oriented database schema. In our approach a user searching for a particular document is presented a graphical description of such a schema, that describes the concepts defined for the webspace of an intranet. Via this interface the user can formulate OO-like queries or navigate to relevant web pages. To realize our ideas we are building an architecture based on the concept of an index-database. A prototype is up and running.
[ 176, 356, 1794 ]
Validation
1,015
1
Economic Value of EWA Lite: A Functional Theory of Learning in Games This paper describes a theory of learning in decisions and games called EWA Lite, with only one parameter. EWA Lite predicts the time path of individual behavior in any normal-form game (given initial conditions) including new games in which behavior has never been observed.
[ 127 ]
Validation
1,016
0
CiteSeer: An Autonomous Web Agent for Automatic Retrieval and Identification of Interesting Publications Published research papers available on the World Wide Web (WWW or Web) are often poorly organized, often exist in non-text form (e.g. Postscript) documents, and increase in quantity daily. Significant amounts of time and effort are commonly needed to find interesting and relevant publications on the Web. We have developed a Web based information agent that assists the user in the process of performing a scientific literature search. Given a set of keywords, the agent uses Web search engines and heuristics to locate and download papers. The papers are parsed in order to extract information features such as the abstract and individually identified citations which are placed into an SQL database. The agent's Web interface can be used to find relevant papers in the database using keyword searches, or by navigating the links between papers formed by the citations. Links to both "citing" and "cited" publications can be followed. In addition to simple browsing and keyword searches, the agent ...
[ 95, 166, 544, 825, 836, 1269, 1357, 2032, 2968 ]
Train
1,017
4
Future Multimedia User Interfaces In this article, we examine some of the work that has been done in these two fields and explore where they are heading. First, we review their often-confusing terminology and provide a brief historical overview. Since both fields rely largely on relatively unusual, and largely immature, hardware technologies, we next provide a high-level introduction to important hardware issues. This is followed by a description of the key approaches to system architecture used by current researchers. We then build on the background provided by these sections to lay out a set of current research issues and directions for future work. Throughout, we attempt to emphasize the many ways in which virtual environments and ubiquitous computing can complement each other, creating an exciting new form of multimedia computing that is far more powerful than either approach would make possible alone.
[ 48, 706, 1083, 2086 ]
Validation
1,018
3
The SDCC Framework For Integrating Existing Algorithms for Diverse Data Warehouse Maintenance Tasks Recently proposed view maintenance algorithms tackle the problem of concurrent data updates happening at different autonomous ISs, whereas the EVE system addresses the maintenance of a data warehouse after schema changes of ISs. The concurrency of schema changes and data updates, however, still remains an unexplored problem. This paper now provides a first solution that guarantees concurrent view definition evolution and view extent maintenance of a DW defined over distributed ISs. For this problem, we introduce a framework called SDCC (Schema change and Data update Concurrency Control) system. SDCC integrates existing algorithms designed to address view maintenance subproblems, such as view extent maintenance after IS data updates, view definition evolution after IS schema changes, and view extent adaptation after view definition changes, into one system by providing protocols that enable them to correctly co-exist and collaborate. SDCC tracks any potential faulty updates of the DW ca...
[ 826, 1463, 2438, 3040 ]
Train
1,019
0
Corporate Memory Management through Agents . The CoMMA project (Corporate Memory Management through Agents) aims at developing an open, agent-based platform for the management of a corporate memory by using the most advanced results on the technical, the content, and the user interaction level. We focus here on methodologies for the set-up of multi-agent systems, requirement engineering and knowledge acquisition approaches. 1. Introduction How to improve access, share and reuse of both internal and external knowledge in a company? How to improve newcomers' learning and integration in a company? How to enhance technology monitoring in a company? Knowledge Management (KM) aims at solving such problems. Different research communities offer - partial - solutions for supporting KM. The integration of results from these different research fields seems to be a promising approach. This is the motivation of the CoMMA IST project-funded by the European Commission- which started February 2000. The main objective is to implement and ...
[ 2284, 2665 ]
Validation
1,020
3
eMediator: A Next Generation Electronic Commerce Server This paper presents eMediator, an electronic commerce server prototype that demonstrates ways in which algorithmic support and game-theoretic incentive engineering can jointly improve the efficiency of ecommerce. eAuctionHouse, the configurable auction server, includes a variety of generalized combinatorial auctions and exchanges, pricing schemes, bidding languages, mobile agents, and user support for choosing an auction type. We introduce two new logical bidding languages for combinatorial markets: the XOR bidding language and the OR-of-XORs bidding language. Unlike the traditional OR bidding language, these are fully expressive. They therefore enable the use of the Clarke-Groves pricing mechanism for motivating the bidders to bid truthfully. eAuctionHouse also supports supply/demand curve bidding. eCommitter, the leveled commitment contract optimizer, determines the optimal contract price and decommitting penalties for a variety of leveled commitment contracting mechanisms, taking into account that rational agents will decommit strategically in Nash equilibrium. It also determines the optimal decommitting strategies for any given leveled commitment contract. eExchangeHouse, the safe exchange planner, enables unenforced anonymous exchanges by dividing the exchange into chunks and sequencing those chunks to be delivered safely in alternation between the buyer and the seller.
[ 1680 ]
Validation
1,021
2
Intelligent Crawling on the World Wide Web with Arbitrary Predicates The enormous growth of the world wide web in recent years has made it important to perform resource discovery efficiently. Consequently, several new ideas have been proposed in recent years; among them a key technique is focused crawling which is able to crawl particular topical portions of the world wide web quickly without having to explore all web pages. In this paper, we propose the novel concept of intelligent crawling which actually learns characteristics of the linkage structure of the world wide web while performing the crawling. Specifically, the intelligent crawler uses the inlinking web page content, candidate URL structure, or other behaviors of the inlinking web pages or siblings in order to estimate the probability that a candidate is useful for a given crawl. This is a much more general framework than the focused crawling technique which is based on a pre-defined understanding of the topical structure of the web. The techniques discussed in this paper are applicable for crawling web pages which satisfy arbitrary user-defined predicates such as topical queries, keyword queries or any combinations of the above. Unlike focused crawling, it is not necessary to provide representative topical examples, since the crawler can learn its way into the appropriate topic. We refer to this technique as intelligent crawling because of its adaptive nature in adjusting to the web page linkage structure. The learning crawler is capable of reusing the knowledge gained in a given crawl in order to provide more efficient crawling for closely related predicates.
[ 2, 116, 825, 1512, 1838, 2459, 2610, 2716 ]
Train
1,022
0
Agent-Oriented Software Engineering Abstraction: The process of defining a simplified model of the system that emphasises some of the details or properties, while suppressing others. Organisation: The process of identifying and managing interrelationships between various problem solving components. Next, the characteristics of complex systems need to be enumerated [8]: Complexity frequently takes the form of a hierarchy. That is, a system that is composed of inter-related sub-systems, each of which is in turn hierarchic in structure, until the lowest level of elementary sub-system is reached. The precise nature of these organisational relationships varies between sub-systems, however some generic forms (such as client-server, peer, team, etc.) can be identified. These relationships are not static: they often vary over time. The choice of which components in the system are primitive is relatively arbitrary and is defined by the observer's aims and objectives. Hierarchic systems evolve more quickly than non-hiera...
[ 185, 567, 1232, 1829, 2576, 2944 ]
Validation
1,023
2
OILing the way to Machine Understandable Bioinformatics Resources The complex questions and analyses posed by biologists, as well as the diverse data resources they develop, require the fusion of evidence from different, independently developed and heterogeneous resources. The web as an enabler for interoperability has been an excellent mechanism for data publication and transportation. Successful exchange and integration of information, however, depends on a shared language for communication (a terminology) and a shared understanding of what the data means (an ontology). Without this kind of understanding, semantic heterogeneity remains a problem for both humans and machines. One means of dealing with heterogeneity in bioinformatics resources is through terminology founded upon an ontology. Bioinformatics resources tend to be rich in human readable and understandable annotation, with each resource using its own terminology. These resources are machine readable, but not machine understandable. Ontologies have a role in increasing this machine understanding, reducing the semantic heterogeneity between resources and thus promoting the flexible and reliable interoperation of bioinformatics resources. This paper describes a solution derived from the semantic web (a machine understandable WWW), the Ontology Inference Layer (OIL), as a solution for semantic bioinformatics resources. The nature of the heterogeneity problems are presented along with a description of how metadata from domain ontologies can be used to alleviate this problem. A companion paper in this issue gives an example of the development of a bioontology using OIL. Keywords: Ontology; OIL; semantics; interoperation; heterogeneity; understanding. 1
[ 338, 1730 ]
Train
1,024
3
Tractable Query Answering in Indefinite Constraint Databases: Basic Results and Applications to Querying Spatiotemporal Information . We consider the scheme of indefinite constraint databases proposed by Koubarakis. This scheme can be used to represent indefinite information arising in temporal, spatial and truly spatiotemporal applications. The main technical problem that we address in this paper is the discovery of tractable classes of databases and queries in this scheme. We start with the assumption that we have a class of constraints C with satisfiability and variable elimination problems that can be solved in PTIME. Under this assumption, we show that there are several general classes of databases and queries for which query evaluation can be done with PTIME data complexity. We then search for tractable instances of C in the area of temporal and spatial constraints. Classes of constraints with tractable satisfiability problems can be easily found in the literature. The largest class that we consider is the class of Horn disjunctive linear constraints over the rationals. Because variable eliminati...
[ 72, 228, 1068, 1602, 2156, 2488 ]
Train
1,025
5
Generating, Executing and Revising Schedules for Autonomous Robot Office Couriers Scheduling the tasks of an autonomous robot office courier and carrying out the scheduled tasks reliably and efficiently pose challenging problems for autonomous robot control. To carry out their jobs reliably and efficiently many autonomous mobile service robots acting in human working environments have to view their jobs as everyday activity: they should accomplish longterm efficiency rather than optimize problem-solving episodes. They should also exploit opportunities and avoid problems flexibly because often robots are forced to generate schedules based on partial information. We propose to implement the controller for scheduled activity by employing concurrent reactive plans that reschedule the course of action whenever necessary and while performing their actions. The plans are represented modularly and transparently to allow for easy transformation. Scheduling and schedule repair methods are implemented as plan transformation rules. Introduction To carry out their jobs reliably...
[ 738 ]
Train
1,026
2
Collaborative Maintenance In this paper we examine a classical AI problem (knowledge maintenance) and propose an innovative solution (collaborative maintenance) that has been inspired by the recommendation technique of
[ 2928 ]
Train
1,027
4
Sensing Techniques for Mobile Interaction We describe sensing techniques motivated by unique aspects of human-computer interaction with handheld devices in mobile settings. Special features of mobile interaction include changing orientation and position, changing venues, the use of computing as auxiliary to ongoing, real-world activities like talking to a colleague, and the general intimacy of use for such devices. We introduce and integrate a set of sensors into a handheld device, and demonstrate several new functionalities engendered by the sensors, such as recording memos when the device is held like a cell phone, switching between portrait and landscape display modes by holding the device in the desired orientation, automatically powering up the device when the user picks it up to start using it, and scrolling the display using tilt. We present an informal experiment, initial usability testing results, and user reactions to these techniques. Keywords Input devices, interaction techniques, sensing, contextaware...
[ 59, 414, 692, 1897, 2472 ]
Train
1,028
3
Summary of this paper. The main questions addressed in this setting deal with conditions under which it is possible to evaluate queries incrementally.
[ 2550 ]
Test
1,029
0
Self-Adaptive Operator Scheduling using the Religion-Based EA The optimal choice of the variation operators mutation and crossover and their parameters can be decisive for the performance of evolutionary algorithms (EAs). Usually the type of the operators (such as Gaussian mutation) remains the same during the entire run and the probabilistic frequency of their application is determined by a constant parameter, such as a fixed mutation rate. However, recent studies have shown that the optimal usage of a variation operator changes during the EA run. In this study, we combined the idea of self-adaptive mutation operator scheduling with the Religion-Based EA (RBEA), which is an agent model with spatially structured and variable sized subpopulations (religions). In our new model (OSRBEA), we used a selection of different operators, such that each operator type was applied within one specific subpopulation only. Our results indicate that the optimal choice of operators is problem dependent, varies during the run, and can be handled by our self-adaptive OSRBEA approach. Operator scheduling could clearly improve the performance of the already very powerful RBEA and was superior compared to a classic and other advanced EA approaches.
[ 1864 ]
Validation
1,030
5
Predicting Telecommunication Equipment Failures from Sequences of Network Alarms The computer and telecommunication industries rely heavily on knowledge-based expert systems to manage the performance of their networks. These expert systems are developed by knowledge engineers, who must first interview domain experts to extract the pertinent knowledge. This knowledge acquisition process is laborious and costly, and typically is better at capturing qualitative knowledge than quantitative knowledge. This is a liability, especially for domains like the telecommunication domain, where enormous amounts of data are readily available for analysis. Data mining holds tremendous promise for the development of expert systems for monitoring network performance since it provides a way of automatically identifying subtle, yet important, patterns in data. This case study describes a project in which a temporal data mining system called Timeweaver is used to identify faulty telecommunication equipment from logs of network alarm messages. Project Overview Managing the p...
[ 673, 2233, 2986 ]
Test
1,031
1
Dynamic on-line clustering and state extraction: An approach to symbolic learning Researchers often try to understand the representations that develop in the hidden layers of a neural network during training. Interpretation is difficult because the representations are typically highly distributed and continuous. By "continuous," we mean that if one constructed a scatter plot over the hidden unit activity space of patterns obtained in response to various inputs, examination at any scale would reveal the patterns to be broadly distributed over the space. Such continuous representations are naturally obtained if the input space and activation dynamics are continuous. Continuous representations are not always appropriate. Many task domains might benefit from discrete representations -- representations selected from a finite set of alternatives. Example domains include finite-state machine emulation, data compression, language and higher cognition (involving discrete symbol processing), and categorization. In such domains, standard neural...
[ 367 ]
Train
1,032
1
A Behavior-Based Intelligent Control Architecture with Application to Coordination of Multiple Underwater Vehicles The paper presents a behavior-based intelligent control architecture for designing controllers which, based on their observation of sensor signals, compute the discrete control actions. These control actions then serve as the "set-points" for the lower level controllers. The behavior-based approach yields an intelligent controller which is a cascade of a perceptor and a response controller. The perceptor extracts the relevant symbolic information from the incoming continuous sensor signals, which enables the execution of one of the behaviors. The response controller is a discrete event system that computes the discrete control actions by executing one of the enabled behaviors. The behavioral approach additionally yields a hierarchical two layered response controller, which provides better complexity management. The inputs from the perceptor are used to first compute the higher level activities, called behaviors, and next to compute the corresponding lower level activities, called actio...
[]
Validation
1,033
4
An Introductory Course on Visualization A Visualization course offered twice (1997/98 and 98/99) as an elective in the MSc degree on Electronics and Telecommunications at the University of Aveiro is presented. Its contents, bibliography and teaching methods are described. Some difficulties encountered during the preparation and lecturing of this course are identified. 1. Introduction Taking into consideration that Visualization is becoming very important and useful in many areas, an introductory course on Visualization seems a valid contribution to the curriculum of any post-graduation in science or technology and thus it was considered adequate as an elective course of the MSc in Electronics and Telecommunications offered at the University of Aveiro. This post-graduation program includes several courses and a thesis and aims to be a large spectrum degree encompassing mainly one of four areas of Electrical Engineering (Electronics, Telecommunications, Signal Analysis and Processing and Computer Science); this means that it ...
[]
Train
1,034
3
Towards a Model for Spatio-Temporal Schema Selection Schema versioning provides a mechanism for handling change in the structure of database systems and has been investigated widely, both in the context of static and temporal databases. With the growing interest in spatial and spatio-temporal data as well as the mechanisms for holding such data, the spatial context within which data is formatted also becomes an issue. This paper presents a generalised model that accommodates schema versioning within static, temporal, spatial and spatio-temporal relational and object-oriented databases.
[ 2935, 3100 ]
Train
1,035
0
Hive: Distributed Agents for Networking Things Hive is a distributed agents platform, a decentralized system for building applications by networking local system resources. This paper presents the architecture of Hive, concentrating on the idea of an "ecology of distributed agents" and its implementation in a practical Java based system. Hive provides ad-hoc agent interaction, ontologies of agent capabilities, mobile agents, and a graphical interface to the distributed system. We are applying Hive to the problems of networking "Things That Think," putting computation and communication in everyday places such as your shoes, your kitchen, or your own body. TTT shares the challenges and potentials of ubiquitous computing and embedded network applications. We have found that the flexibility of a distributed agents architecture is well suited for this application domain, enabling us to easily build applications and to reconfigure our systems on the fly. Hive enables us to make our environment and network more alive. This paper is dedic...
[ 560, 829, 911, 1498, 1578, 1653, 2137, 2279, 2401, 2763, 2866 ]
Train
1,036
3
Equality, Type and Word Constraints As a generalization of inclusion dependencies that are found in relational databases, word constraints have been studied for semistructured data [6] as well as for an objectoriented model [10]. In both contexts, it is assumed that each data entity has a unique identity, and two entities are equal if and only if they have the same identity. In this setting, the decidability of the implication and finite implication problems for word constraints has been established. A question left open is whether these problems are still decidable in the context of an object-oriented model M ß which supports complex values with nested structures and complex value equality. This paper provides an answer to that question. We characterize a schema in M ß in terms of a type constraint and an equality constraint, and investigate the interaction between these constraints and word constraints. We show that in the presence of equality and type constraint, the implication and finite implication problems for...
[ 1600 ]
Train
1,037
0
A Distributed Pi-Calculus with Local Areas of Communication This paper introduces a process calculus designed to capture the phenomenon of names which are known universally but always refer to local information. Our system extends the π-calculus so that a channel name can have within its scope several disjoint local areas. Such a channel name may be used for communication within an area, it may be sent between areas, but it cannot itself be used to transmit information from one area to another. Areas are arranged in a hierarchy of levels, distinguishing for example between a single application, a machine, or a whole network. We give an operational semantics for the calculus, and develop a type system that guarantees the proper use of channels within their local areas. We illustrate with models of an internet service protocol and a pair of distributed agents. 1
[ 2084 ]
Train
1,038
2
How Much More is Better? - Characterizing the Effects of Adding More IR Systems to a Combination We present the results of some expansion experiments for solving the routing, data fusion problem using TREC5 systems. The experiments address the question "How much more is better?" when combining the results of multiple information retrieval systems using a linear combination (weighted sum) model. By investigating all 2way, 3-way, 4-way and 10-way combinations of 10 IR systems on 10 queries, we show that: (1) one can expect potentially significant amounts of improvement in performance over the best system used in the combination if enough systems are used, (2) for this number of candidate systems, the point of diminishing returns is reached when around four systems are used in the combination, (3) queries generally have too few relevant documents, causing little correlation in performance between the training set and test set; thus making it difficult to get test set improvement even when multiple systems are used, and (4) if one knows the relative past performance of the candidate s...
[ 1120, 2528 ]
Test
1,039
2
Crawling the Hidden Web Current-day crawlers retrieve content only from the publicly indexable Web, i.e., the set of Web pages reachable purely by following hypertext links, ignoring search forms and pages that require authorization or prior registration. In particular, they ignore the tremendous amount of high quality content "hidden" behind search forms, in large searchable electronic databases. In this paper, we address the problem of designing a crawler capable of extracting content from this hidden Web. We introduce a generic operational model of a hidden Web crawler and describe how this model is realized in HiWE (Hidden Web Exposer), a prototype crawler built at Stanford. We introduce a new Layout-based Information Extraction Technique (LITE) and demonstrate its use in automatically extracting semantic information from search forms and response pages. We also present results from experiments conducted to test and validate our techniques. 1
[ 488, 1750, 2822 ]
Train
1,040
3
Multi-Agenten Systeme: Coalition Formation 4.4 Payoff Division Overview. Chapter 4: Contract Nets, Coalition Formation. Multi-Agenten Systeme (VU), SS 00. 4.1 General Contract Nets. How to distribute tasks? Global Market Mechanisms: implementations use a single centralized mediator; announce, bid, award cycle. Distributed Negotiation. We need the following: 1. Define a task allocation problem in precise terms. 2. Define a formal model for making bidding and awarding decisions. Definition 4.1 (Task-Allocation Problem) A task allocation problem is given by 1. a set of tasks T, 2. a set of agents A, 3. a cost function cost_i : 2^T → R ∪ {∞} (stating the costs that agent i incurs by handling some tasks), and 4. the initial allocation of tasks ⟨T_1^init, ..., T_|A|^init⟩, where T = ∪_{i∈A} T_i^init...
[ 740 ]
Validation
1,041
0
Enlightened Agents in TuCSoN In the network-centric computing era, applications often involve sets of autonomous, unpredictable, and possibly mobile entities interacting within open, dynamic, and possibly unreliable environments: Intelligent Environments are a typical case. The complexity of such scenarios requires novel engineering tools, providing effective support from the analysis to the deployment stage. In this paper we illustrate the impact of a general-purpose coordination infrastructure for multiagent systems -- providing a model, a run-time, and suitable deployment tools -- on the engineering of such applications. As a case study, we consider the intelligent management of lights inside a building: despite its simplicity, this problem endorses the typical challenges of this class of applications. The case study is built upon the TuCSoN coordination infrastructure, which provides engineers with both the abstractions and the run-time support for effectively managing the application complexity. I. INFRASTR...
[ 2279 ]
Train
1,042
4
BUILD-IT: A Planning Tool for Construction and Design It is time to go beyond the established approaches in human-computer interaction. With the Augmented Reality (AR) design strategy humans are able to behave as much as possible in a natural way: the behavior of humans in the real world with other humans and/or real-world objects. Following the fundamental constraints of this natural way of interacting we derive a set of recommendations for the next generation of user interfaces: the Natural User Interface (NUI). The concept of NUI is presented in the form of a runnable demonstrator: a computer vision-based interaction technique for a planning tool for construction and design tasks. Keywords augmented reality, digital desk, natural user interface, computer vision-based interaction
[ 511, 2290 ]
Train
1,043
3
Normal Forms for Defeasible Logic Defeasible logic is an important logic-programming based nonmonotonic reasoning formalism which has an efficient implementation. It makes use of facts, strict rules, defeasible rules, defeaters, and a superiority relation. Representation results are important because they can help the assimilation of a concept by confining attention to its critical aspects. In this paper we derive some representation results for defeasible logic. In particular we show that the superiority relation does not add to the expressive power of the logic, and can be simulated by other ingredients in a modular way. Also, facts can be simulated by strict rules. Finally we show that we cannot simplify the logic any further in a modular way: Strict rules, defeasible rules, and defeaters form a minimal set of independent ingredients in the logic. 1 Introduction Normal forms play an important role in computer science. Examples of areas where normal forms have proved fruitful include logic [10], where normal forms o...
[ 2873 ]
Validation
1,044
0
ITR: A Framework for Environment-Aware, Massively Distributed Computing physical environment in real-time, and the need to reason about emerging aggregate properties as opposed to individual component behavior. In this research we propose to develop theory, methods and tools for massively distributed, environment-aware computing (more succinctly referred to as swarm computing). The state of swarm computing today is similar to that of sequential computing in the early 1950s. Developers painstakingly produce swarm programs by designing and programming the actions of individual devices, and converge on an acceptable program through extensive simulation and experimentation. In the pre-compiler era, skeptical programmers believed that a mechanical process could not possibly produce code of comparable quality to that produced by highly skilled machine coders and that the cost of machine time is high enough to outweigh any possible savings in programmer effort. The state of swarm programming today is similar: devices are still expensive enough an
[ 1621 ]
Test
1,045
0
Human Behavior Models for Game-Theoretic Agents: Case of Crowd Tipping This paper describes an effort to integrate human behavior models from a range of ability, stress, emotion, decision theoretic, and motivation literatures into a game-theoretic framework. Our goal is to create a common mathematical framework (CMF) and a simulation environment that allows one to research and explore alternative behavior models to add realism to software agents -- e.g., human reaction times, constrained rationality, emotive states, and cultural influences. Our CMF is based on a dynamical, game-theoretic approach to evolution and equilibria in Markov chains representing states of the world that the agents can act upon. In these worlds the agents' utilities (payoffs) are derived by a deep model of cognitive appraisal of intention achievement including assessment of emotional activation/decay relative to concern ontologies, and subject to (integrated) stress and related constraints. We present the progress to date on the mathematical framework, and on an environment for editing the various elements of the cognitive appraiser, utility generators, concern ontologies, and Markov chains. We summarize a prototype of an example training game for counter-terrorism and crowd management. Future research needs are elaborated including validity issues and the gaps in the behavioral literatures that agent developers must struggle with.
[ 459, 577 ]
Validation
1,046
2
Analysis and extraction of useful information across networks of Web databases Contents: 1 Introduction; 2 Problem Statement; 3 Literature Review (3.1 Retrieving Text, 3.2 Understanding Music, 3.3 Identifying Images, 3.4 Extracting Video); 4 Work Completed and in Progress; 5 Research Plan and Time-line; A List of Published Work. 1 Introduction The World Wide Web of documents on the Internet contains a huge amount of information and resources. It has been growing at a rapid rate for nearly a decade and is now one of the main resources of information for many people. The large interest in the Web is due to the fact that it is uncontrolled and easily accessible; no single person owns it and anyone can add to it. The Web has also brought with it a lot of controversy, also due to the
[ 2132, 2503 ]
Train
1,047
3
The View Holder Approach: Utilizing Customized Materialized Views To Create Database Services Suitable For Mobile Database Applications among mobile devices (i.e., a laptop vs. a pager) and the amount of information available from today's database environments and the Internet. To this end, this dissertation presents the development of customizable view maintenance services, called the View Holder approach, whose middleware mechanism within the fixed network dynamically maintains versions of the views so as to meet the data consistency and currency requirements of a particular mobile client. In a general form, a View Holder can support a community of mobile clients with common interests. The motivation for maintaining versions is to compensate for the data changes that occurred to the materialized views that were used during disconnection as well as to reduce the cost of wireless communication. In order to maintain these views, customized view maintenance is performed at the data sources by translating the mobile machine's request into a materialization program containing a triggering
[ 253, 381, 602, 768, 800 ]
Train
1,048
2
Using Relevance Feedback In Content-based Image Metasearch We begin this article with a review of the issues in content-based visual query, then describe the current MetaSeek implementation. We present the results of experiments that evaluated the implementation in comparison to a previous version of the system and a baseline engine that randomly selects the individual search engines to query. We conclude by summarizing open issues for future research.
[ 2256, 2558, 2569, 2627 ]
Train
1,049
0
JACK Intelligent Agents - Components for Intelligent Agents in Java This paper is organised as follows. Section 2 introduces JACK Intelligent Agents, presenting the approach taken by AOS to its design and outlining its major engineering characteristics. The BDI model is discussed briefly in Section 3. Section 4 gives an outline of how to build an application with JACK Intelligent Agents. Finally, in Section 5 we discuss how the use of this framework can be beneficial to both engineers and researchers. For brevity, we will refer to JACK Intelligent Agents simply as "JACK".
[ 400, 1118, 1681, 1935, 2164, 2374 ]
Train
1,050
4
Community Search Assistant This paper describes a new software agent, the community search assistant, which recommends related searches to users of search engines. The community search assistant enables communities of users to search in a collaborative fashion. All queries submitted by the community are stored in the form of a graph. Links are made between queries that are found to be related. Users can peruse the network of related queries in an ordered way: following a path from a first cousin, to a second cousin to a third cousin, etc. to a set of search results. The first key idea behind the use of query graphs is that the determination of relatedness depends on the documents returned by the queries, not on the actual terms in the queries themselves. The second key idea is that the construction of the query graph transforms single user usage of information networks (e.g. search) into collaborative usage: all users can tap into the knowledge base of queries submitted by others. Introduction ...
[ 711 ]
Train
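The Community Search Assistant's first key idea, relating queries by the documents they return rather than by their terms, can be sketched as a Jaccard-overlap query graph. The queries, result sets, and the 0.2 threshold below are illustrative assumptions, not details from the paper.

```python
# Hypothetical queries and the document ids each one returned.
results = {
    "jaguar speed": {"d1", "d2", "d3"},
    "big cats": {"d2", "d3", "d4"},
    "jaguar car price": {"d7", "d8"},
}

def jaccard(a, b):
    return len(a & b) / len(a | b)

# Link two queries when their result sets overlap enough (threshold assumed).
THRESHOLD = 0.2
graph = {q: set() for q in results}
qs = list(results)
for i, q1 in enumerate(qs):
    for q2 in qs[i + 1:]:
        if jaccard(results[q1], results[q2]) >= THRESHOLD:
            graph[q1].add(q2)
            graph[q2].add(q1)

print(sorted(graph["jaguar speed"]))  # → ['big cats']
```

Note that "jaguar speed" and "jaguar car price" share a term but no documents, so they stay unlinked, which is exactly the result-based (rather than term-based) notion of relatedness the abstract describes.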
1,051
1
Applying Formal Concepts to Learning Systems Validation In the problem area of evaluating complex software systems, there are two distinguished areas of research, development, and application identified by the two buzzwords validation and verification, respectively. From the perspective adopted by the authors, verification is usually more formally based and, thus, can be supported by formal reasoning tools like theorem provers, for instance. The scope of verification approaches is limited by the difficulty of finding a sufficiently complete formalization to built upon. In paramount realistic problem domains, validation seems to be more appropriate, although it is less stringent in character and, therefore, validation results are often less definite. The aim of this paper is to exemplify a validation approach based on a clear and thoroughly formal theory. In this way, validation and verification should be brought closer to each other. To allow for precise and sufficiently clear results, the authors have selected the applicatio...
[ 100, 2285, 3176 ]
Test
1,052
4
Design and Implementation of Expressive Footwear As an outgrowth of our interest in dense wireless sensing and expressive applications of wearable computing, we have developed the world's most versatile human-computer interface for the foot. By dense wireless sensing, we mean the remote acquisition of many different parameters with a compact, autonomous sensor cluster. We have developed such a low-power sensor card to measure over 16 continuous quantities and transmit them wirelessly to a remote base station, updating all variables at 50 Hz. We have integrated a pair of these devices onto the feet of dancers and athletes, measuring continuous pressure at 3 points near the toe, dynamic pressure at the heel, bidirectional bend of the sole, height of each foot off conducting strips in the stage, angular rate of each foot about the vertical, angular position of each foot about the Earth's local magnetic field, as well as their tilt and low-G acceleration, 3-axis shock acceleration (from kicks and jumps), and position (via an integrated s...
[ 1220, 2404 ]
Train
1,053
5
Ontobroker: How to make the WWW Intelligent . The World Wide Web can be viewed as the largest knowledge base that has ever existed. However, its support in query answering and automated inference is very limited. We propose formalized ontologies as means to enrich web documents for representing semantic information to overcome this bottleneck. Ontologies enable informed search as well as the derivation of new knowledge that is not represented in the WWW. The paper describes a software tool called Ontobroker that provides the necessary support in realizing this idea. Basically it provides formalisms and tools for formulating queries, for defining ontologies, and for annotating HTML documents with ontological information. 1 Introduction The World Wide Web (WWW) contains huge amounts of knowledge about most subjects one can think of. HTML documents enriched by multi-media applications provide knowledge in different representations (i.e., text, graphics, animated pictures, video, sound, virtual reality, etc.). Hypertext li...
[ 579, 1510, 1847, 3035 ]
Test
1,054
0
Creatures: Entertainment Software Agents with Artificial Life We present a technical description of Creatures, a commercial home-entertainment software package. Creatures provides a simulated environment in which exist a number of synthetic agents that a user can interact with in real-time. The agents (known as "creatures") are intended as sophisticated "virtual pets". The internal architecture of the creatures is strongly inspired by animal biology. Each creature has a neural network responsible for sensory-motor coordination and behavior selection, and an "artificial biochemistry" that models a simple energy metabolism along with a "hormonal" system that interacts with the neural network to model diffuse modulation of neuronal activity and staged ontogenetic development. A biologically inspired learning mechanism allows the neural network to adapt during the lifetime of a creature. Learning includes the ability to acquire a simple verb-object language.
[ 620, 1157 ]
Test
1,055
3
Managing Time Consistency for Active Data Warehouse Environments Abstract. Real-world changes are generally discovered by computer systems only after a delay. The typical update patterns for traditional data warehouses on an overnight or even weekly basis enlarge this propagation delay until the information is available to knowledge workers. Typically, traditional data warehouses focus on summarized data (at some level) rather than detail data. For active data warehouse environments, detailed data about individual entities are also required for checking the data conditions and triggering actions. Hence, keeping data current and consistent in that context is not an easy task. In this paper we present an approach for modeling conceptual time consistency problems and introduce a data model that deals with timely delays. It helps knowledge workers find out why (or why not) an active system responded to a certain state of the data. Therefore the model enables analytical processing of detail data (enhanced by valid time) based on a knowledge state at a specified instant of time. All states that were not yet knowable to the system at that point in time are consistently ignored. 1.
[ 938, 1957, 2050 ]
Test
1,056
3
NiagaraCQ: A Scalable Continuous Query System for Internet Databases Continuous queries are persistent queries that allow users to receive new results when they become available. While continuous query systems can transform a passive web into an active environment, they need to be able to support millions of queries due to the scale of the Internet. No existing systems have achieved this level of scalability. NiagaraCQ addresses this problem by grouping continuous queries based on the observation that many web queries share similar structures. Grouped queries can share the common computation, tend to fit in memory and can reduce the I/O cost significantly. Furthermore, grouping on selection predicates can eliminate a large number of unnecessary query invocations. Our grouping technique is distinguished from previous group optimization approaches in the following ways. First, we use an incremental group optimization strategy with dynamic re-grouping. New queries are added to existing query groups, without having to regroup already installed queries. Second, we use a query-split scheme that requires minimal changes to a general-purpose query engine. Third, NiagaraCQ groups both change-based and timer-based queries in a uniform way. To ensure that NiagaraCQ is scalable, we have also employed other techniques including incremental evaluation of continuous queries, use of both pull and push models for detecting heterogeneous data source changes, and memory caching. This paper presents the design of the NiagaraCQ system and gives some experimental results on the system's performance and scalability. 1.
[ 35, 230, 1096, 1519, 2219, 2292, 2436, 3081 ]
Train
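NiagaraCQ's grouping of continuous queries on shared selection-predicate structure, with new queries joining existing groups incrementally, can be sketched as follows. The stock-quote schema, the (attribute, operator) signature, and the single-operator predicates are simplified assumptions, not the system's actual design.

```python
from collections import defaultdict

# Each continuous query is a selection: attr op constant (simplified).
# Queries with the same (attr, op) signature share one group and one scan.
groups = defaultdict(list)  # signature -> list of (query_id, constant)

def add_query(qid, attr, op, const):
    """Incremental group optimization: join an existing group, no regrouping."""
    groups[(attr, op)].append((qid, const))

def on_change(row):
    """Evaluate one data change against every group in a single shared pass."""
    fired = []
    for (attr, op), members in groups.items():
        val = row[attr]  # the shared computation: read the attribute once
        for qid, const in members:
            if op == ">" and val > const:
                fired.append(qid)
    return fired

add_query("q1", "price", ">", 100)
add_query("q2", "price", ">", 150)  # joins q1's group incrementally
print(on_change({"symbol": "X", "price": 120}))  # → ['q1']
```

A production engine would split the shared scan from the per-query constants inside the query plan itself; this sketch only shows why same-signature queries amortize the common work.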
1,057
1
CABINS: A Framework of Knowledge Acquisition and Iterative Revision for Schedule Improvement and Reactive Repair Practical scheduling problems generally require allocation of resources in the presence of a large, diverse and typically conflicting set of constraints and optimization criteria. The ill-structuredness of both the solution space and the desired objectives make scheduling problems difficult to formalize. This paper describes a case-based learning method for acquiring context-dependent user optimization preferences and tradeoffs and using them to incrementally improve schedule quality in predictive scheduling and reactive schedule management in response to unexpected execution events. The approach, implemented in the CABINS system, uses acquired user preferences to dynamically modify search control to guide schedule improvement. During iterative repair, cases are exploited for: (1) repair action selection, (2) evaluation of intermediate repair results and (3) recovery from revision failures. The method allows the system to dynamically switch between repair heuristic actions, each of whi...
[ 423 ]
Train
1,058
3
Are you ready for Yottabytes? StorHouse - Federated and Object/Relational Solution This paper describes how federated and object/relational database systems can exploit cost-effective active storage hierarchies. By active storage hierarchy we mean a database system that uses all storage media (i.e. optical, tape, and disk) to store and retrieve data and not just disk. A detailed discussion of the Atomic Data Store data warehouse concept can be found in [CB 99]. These also describe a commercial relational database product, StorHouse/Relational Manager (RM), that executes SQL queries directly against data stored in a complete storage hierarchy. This paper focuses on applications that can use, and may even require the use of, emerging federated and object/relational database technologies. Our analysis is based on two products now in development. We will refer to these as StorHouse/Fed (a federated database system that includes StorHouse/RM) and StorHouse/ORM (an Object-Relational database system). We conclude by describing candidate applications (with an emphasis on the federal sector) that can exploit the combination of costeffective active storage hierarchy with federated and/or object/relational database technology.
[ 3068 ]
Train
1,059
2
Efficient and Effective Metasearch for a Large Number of Text Databases Metasearch engines can be used to help ordinary users retrieve information from multiple local sources (text databases). In a metasearch engine, the contents of each local database are represented by a representative. Each user query is evaluated against the set of representatives of all databases in order to determine the appropriate databases to search. When the number of databases is very large, say in the order of tens of thousands or more, then a traditional metasearch engine may become inefficient as each query needs to be evaluated against too many database representatives. Furthermore, the storage requirement on the site containing the metasearch engine can be very large. In this paper, we propose to use a hierarchy of database representatives to improve the efficiency. We provide an algorithm to search the hierarchy. We show that the retrieval effectiveness of our algorithm is the same as that of evaluating the user query against all database representatives. We als...
[ 347, 587, 976, 1167, 1642, 1804, 2188, 2771, 2920 ]
Validation
1,060
2
Contextual Rules for Text Analysis In this paper we describe a rule-based formalism for the analysis and labelling of text segments. The rules are contextual rewriting rules with a restricted form of negation. They make it possible to underspecify text segments not considered relevant to a given task and to base decisions upon context. A parser for these rules is presented and consistency and completeness issues are discussed. Some results of an implementation of this parser with a set of rules oriented to the segmentation of texts into propositions are shown.
[ 2440 ]
Validation
1,061
3
Algebraic rewritings for optimizing regular path queries Rewriting queries using views is a powerful technique that has applications in query optimization, data integration, data warehousing etc. Query rewriting in relational databases is by now rather well investigated. However, in the framework of semistructured data the problem of rewriting has received much less attention. In this paper we focus on extracting as much information as possible from algebraic rewritings for the purpose of optimizing regular path queries. The cases when we can find a complete exact rewriting of a query using a set of views are very "ideal." However, there is always information available in the views, even if this information is only partial. We introduce "lower" and "possibility" partial rewritings and provide algorithms for computing them. These rewritings are algebraic in their nature, i.e. we use only the algebraic view definitions for computing the rewritings. This fact makes them a main memory product which can be used for reducing secondary memory and remote access. We give two algorithms for utilizing the partial lower and partial possibility rewritings in the context of query optimization.
[ 965 ]
Validation
1,062
0
Mobile Agents for Information Integration . The large amount of information that is spread over the Internet is an important resource for all people but also introduces some issues that must be faced. The dynamism and the uncertainty of the Internet, along with the heterogeneity of the sources of information, are the two main challenges for today's technologies. This paper proposes an approach based on mobile agents integrated in an information integration infrastructure. Mobile agents can significantly improve the design and the development of Internet applications thanks to their characteristics of autonomy and adaptability to open and distributed environments, such as the Internet. MOMIS (Mediator envirOnment for Multiple Information Sources) is an infrastructure for semi-automatic information integration that deals with the integration and query of multiple, heterogeneous information sources (relational, object, XML and semi-structured sources). The aim of this paper is to show the advantage of the introduction in the MOMIS infrastructure of intelligent and mobile software agents for the autonomous management and coordination of the integration and query processes over heterogeneous data sources. 1
[ 55, 275, 1819 ]
Validation
1,063
1
Protein Structure Prediction With Evolutionary Algorithms Evolutionary algorithms have been successfully applied to a variety of molecular structure prediction problems. In this paper we reconsider the design of genetic algorithms that have been applied to a simple protein structure prediction problem. Our analysis considers the impact of several algorithmic factors for this problem: the conformational representation, the energy formulation and the way in which infeasible conformations are penalized. Further we empirically evaluate the impact of these factors on a small set of polymer sequences. Our analysis leads to specific recommendations for both GAs as well as other heuristic methods for solving PSP on the HP model. 1 INTRODUCTION A protein is a chain of amino acid residues that folds into a specific native tertiary structure under certain physiological conditions. A protein's structure determines its biological function. Consequently, methods for solving protein structure prediction (PSP) problems are valuable tools for modern molecula...
[ 2034 ]
Train
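The HP-model protein structure prediction problem the abstract's GAs optimize can be sketched as decoding a relative-move conformation onto a 2D lattice and counting non-consecutive H-H contacts (the energy the GA minimizes, with self-intersecting conformations treated as infeasible). The sequence, the fold, and the F/L/R move encoding are illustrative assumptions.

```python
# HP-model fitness: decode a 2D lattice conformation and score H-H contacts.
# Sequence and fold are invented; F/L/R are relative moves (forward/left/right).
seq = "HPPH"     # hypothetical chain: hydrophobic (H) / polar (P) residues
moves = "FLL"    # one move per bond (len(seq) - 1); folds into a unit square

TURN = {"F": 0, "L": 1, "R": -1}
DIRS = [(1, 0), (0, 1), (-1, 0), (0, -1)]  # E, N, W, S

def decode(moves):
    pos, d = (0, 0), 0
    coords = [pos]
    for m in moves:
        d = (d + TURN[m]) % 4
        pos = (pos[0] + DIRS[d][0], pos[1] + DIRS[d][1])
        coords.append(pos)
    return coords

def energy(seq, coords):
    if len(set(coords)) != len(coords):
        return None  # infeasible: self-intersecting (a GA would penalize this)
    hh = 0
    for i in range(len(seq)):
        for j in range(i + 2, len(seq)):  # skip chain neighbours
            if seq[i] == seq[j] == "H":
                dx = abs(coords[i][0] - coords[j][0])
                dy = abs(coords[i][1] - coords[j][1])
                if dx + dy == 1:  # adjacent on the lattice
                    hh += 1
    return -hh  # lower energy = more H-H contacts

print(energy(seq, decode(moves)))  # → -1
```

The design choices the abstract analyzes (conformational representation, energy formulation, infeasibility penalties) correspond here to the move alphabet, the `energy` function, and the `None` branch respectively.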
1,064
3
Visual Exploration of Temporal Object Databases Two complementary families of users' tasks may be identified during database visualization: data browsing and data analysis. On the one hand, data browsing involves extensively exploring a subset of the database using navigational interaction techniques. Classical object database browsers provide means for navigating within a collection of objects and amongst objects by way of their relationships. In temporal object databases, these techniques are not sufficient to adequately support time-related tasks, such as studying a snapshot of a collection of objects at a given instant, or detecting changes within temporal attributes and relationships. Visual data analysis on the other hand, is dedicated to the extraction of valuable knowledge by exploiting the human visual perception capabilities. In temporal databases, examples of data analysis tasks include observing the layout of a history, detecting regularities and trends, and comparing the evolution of the values taken by two or more histories. In this paper, we identify several users' tasks related to temporal database exploration, and we propose three novel visualization techniques addressing them. The first of them is dedicated to temporal object browsing, while the two others are oriented towards the analysis of quantitative histories. All three techniques are shown to satisfy several ergonomic properties. Keywords: temporal database, object database, data browsing and analysis, visualization technique. 1
[ 1255, 2837 ]
Test
1,065
0
Towards a Layered Approach for Agent Infrastructure: The Right Tools for the Right Job It is clear by now that the take-up of agent technologies and the wide use of such technologies in open environments depends on the provision of appropriate infrastructure to support the rapid development of applications. In this paper, we argue that the elements required for the development of infrastructure span three different fields, which, nevertheless, have a great degree of overlap. Middleware technologies, mobile agent and intelligent agent research all have significant contributions to make towards a holistic approach to infrastructure development, but it is necessary to make clear distinctions between the requirements at each level and explain how they can be integrated so as to provide a clearer focus and allow the use of existing technologies. Our view of the requirements for infrastructure to support agent-based systems has been formed through experience with developing an agent implementation environment based on a formal agent framework. We argue that in order to provide support to developers, this infrastructure must address both conceptual concerns relating the different types of entities, and relationships between agent and non-agent entities in the environment, as well as more technical concerns. This paper describes the general requirements for infrastructure, the specific contributions from different areas, and our own efforts in progressing towards them. 1.
[ 210, 275 ]
Test
1,066
1
Boosting Applied to Word Sense Disambiguation . In this paper Schapire and Singer's AdaBoost.MH boosting algorithm is applied to the Word Sense Disambiguation (WSD) problem. Initial experiments on a set of 15 selected polysemous words show that the boosting approach surpasses Naive Bayes and Exemplar-based approaches, which represent state-of-the-art accuracy on supervised WSD. In order to make boosting practical for a real learning domain of thousands of words, several ways of accelerating the algorithm by reducing the feature space are studied. The best variant, which we call LazyBoosting, is tested on the largest sense-tagged corpus available containing 192,800 examples of the 191 most frequent and ambiguous English words. Again, boosting compares favourably to the other benchmark algorithms. 1 Introduction Word Sense Disambiguation (WSD) is the problem of assigning the appropriate meaning (sense) to a given word in a text or discourse. This meaning is distinguishable from other senses potentially attributable ...
[ 751, 2676 ]
Train
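The boosting loop behind this work (reweight hard examples each round, combine weak rules by weighted vote) can be sketched on a two-sense toy problem with one-feature weak learners. The binary context features and labels are invented, and this is plain two-class AdaBoost with decision stumps, a simplification of the multi-label AdaBoost.MH the paper actually uses.

```python
import math

# Toy WSD data: binary context features; label +1/-1 = one of two senses.
X = [(1, 0), (1, 1), (0, 1), (0, 0)]
y = [+1, +1, -1, -1]
w = [0.25] * 4  # uniform example weights

ensemble = []  # list of (alpha, feature_index)
for _ in range(3):
    # Weak learner: the feature whose presence best predicts the sense.
    def err(f):
        return sum(wi for wi, xi, yi in zip(w, X, y)
                   if (1 if xi[f] else -1) != yi)
    f = min(range(2), key=err)
    e = max(err(f), 1e-10)                 # avoid log(0) on a perfect rule
    alpha = 0.5 * math.log((1 - e) / e)    # weight of this weak rule
    ensemble.append((alpha, f))
    # Reweight: misclassified examples get heavier, then normalize.
    w = [wi * math.exp(-alpha * yi * (1 if xi[f] else -1))
         for wi, xi, yi in zip(w, X, y)]
    total = sum(w)
    w = [wi / total for wi in w]

def classify(x):
    score = sum(a * (1 if x[f] else -1) for a, f in ensemble)
    return +1 if score >= 0 else -1

print([classify(x) for x in X])  # → [1, 1, -1, -1]
```

The LazyBoosting idea in the abstract corresponds to evaluating `err` on only a sampled subset of candidate features each round instead of all of them.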
1,067
5
Characterizing Operating System Activity in SPECjvm98 Benchmarks : Complete system simulation to understand the influence of architecture and operating systems on application execution has been identified to be crucial for systems design. This problem is particularly interesting in the context of Java since it is not only the application that can invoke kernel services, but so does the underlying Java Virtual Machine (JVM) implementation which runs these programs. Further, the JVM style (JIT compiler or interpreter) and the manner in which the different JVM components (such as the garbage collector and class loader) are exercised, can have a significant impact on the kernel activities. To investigate these issues, this chapter uses complete system simulation of the SPECjvm98 benchmarks on the SimOS simulation platform. The execution of these benchmarks on both JIT compilers and interpreters is profiled in detail. The kernel activity of SPECjvm98 applications constitutes up to 17% of the execution time in the large dataset and up to 31% i...
[ 2200 ]
Validation
1,068
3
The DEDALE System for Complex Spatial Queries This paper presents dedale, a spatial database system intended to overcome some limitations of current systems by providing an abstract and non-specialized data model and query language for the representation and manipulation of spatial objects. dedale relies on a logical model based on linear constraints, which generalizes the constraint database model of [KKR90]. While in the classical constraint model, spatial data is always decomposed into its convex components, in dedale holes are allowed to fit the needs of practical applications. The logical representation of spatial data, although slightly more costly in memory, has the advantage of simplifying the algorithms. dedale relies on nested relations, in which all sorts of data (thematic, spatial, etc.) are stored in a uniform fashion. This new data model supports declarative query languages, which allow an intuitive and efficient manipulation of spatial objects. Their formal foundation constitutes a basis for practical query optimizati...
[ 331, 869, 1024, 2245, 2723, 2837 ]
Train
1,069
2
SimRank: A Measure of Structural-Context Similarity The problem of measuring "similarity" of objects arises in many applications, and many domain-specific measures have been developed, e.g., matching text across documents or computing overlap among item-sets. We propose a complementary approach, applicable in any domain with object-to-object relationships, that measures similarity of the structural context in which objects occur, based on their relationships with other objects. Effectively, we compute a measure that says "two objects are similar if they are related to similar objects." This general similarity measure, called SimRank, is based on a simple and intuitive graph-theoretic model. For a given domain, SimRank can be combined with other domain-specific similarity measures. We suggest techniques for efficient computation of SimRank scores, and provide experimental results on two application domains showing the computational feasibility and effectiveness of our approach.
[ 298, 471, 2984 ]
Train
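The SimRank recursion just described ("two objects are similar if they are related to similar objects") can be sketched as a fixed-point iteration over all node pairs of a small directed graph. The graph, the decay factor C = 0.8, and the iteration count are illustrative assumptions, not the paper's experimental setup.

```python
from itertools import product

# Toy directed graph: node -> list of in-neighbors (nodes pointing to it).
in_nbrs = {
    "univ": [],
    "profA": ["univ"],
    "profB": ["univ"],
    "stud": ["profA", "profB"],
}
nodes = list(in_nbrs)
C = 0.8  # decay factor (assumed)

# s[(a, b)] approximates SimRank similarity; s(a, a) = 1 by definition.
s = {(a, b): 1.0 if a == b else 0.0 for a, b in product(nodes, nodes)}

for _ in range(10):  # iterate the recursive definition toward a fixed point
    nxt = {}
    for a, b in product(nodes, nodes):
        if a == b:
            nxt[(a, b)] = 1.0
        elif in_nbrs[a] and in_nbrs[b]:
            # Average pairwise similarity of the two in-neighborhoods.
            total = sum(s[(i, j)] for i in in_nbrs[a] for j in in_nbrs[b])
            nxt[(a, b)] = C * total / (len(in_nbrs[a]) * len(in_nbrs[b]))
        else:
            nxt[(a, b)] = 0.0  # a node with no in-neighbors yields 0
    s = nxt

print(round(s[("profA", "profB")], 3))  # → 0.8 (both pointed to by "univ")
```

This naive all-pairs iteration is quadratic in nodes per step; the efficiency techniques the abstract mentions (pruning, score thresholds) exist precisely because this direct computation does not scale.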
1,070
2
The THISL Broadcast News Retrieval System This paper describes the THISL spoken document retrieval system for British and North American Broadcast News. The system is based on the ABBOT large vocabulary speech recognizer, using a recurrent network acoustic model, and a probabilistic text retrieval system. We discuss the development of a real-time British English Broadcast News system, and its integration into a spoken document retrieval system. Detailed evaluation is performed using a similar North American Broadcast News system, to take advantage of the TREC SDR evaluation methodology. We report results on this evaluation, with particular reference to the effect of query expansion and of automatic segmentation algorithms. 1. INTRODUCTION THISL is an ESPRIT Long Term Research project in the area of speech retrieval. It is concerned with the construction of a system which performs good recognition of broadcast speech from television and radio news programmes, from which it can produce multimedia indexing data. The principal obj...
[ 1135, 1372 ]
Validation
1,071
4
Syntactic Autonomy - Or Why There is no Autonomy Without Symbols and how Self-Organization Systems Might Evolve Them Two different types of agency are discussed based on dynamically coherent and incoherent couplings with an environment respectively. I propose that until a private syntax (syntactic autonomy) is discovered by dynamically coherent agents, there are no significant or interesting types of closure or autonomy. When syntactic autonomy is established, then, because of a process of description-based selected self-organization, open-ended evolution is enabled. At this stage, agents depend, in addition to dynamics, on localized, symbolic memory, thus adding a level of dynamical incoherence to their interaction with the environment. Furthermore, it is the appearance of syntactic autonomy which enables much more interesting types of closures amongst agents which share the same syntax. To investigate how we can study the emergence of syntax from dynamical systems, experiments with cellular automata leading to emergent computation to solve non-trivial tasks are discussed. RNA editing is also mentio...
[ 2102, 2497 ]
Test
1,072
3
A String-based Model for Infinite Granularities (Extended Abstract) Jef Wijsen Université de Mons-Hainaut Jef.Wijsen@umh.ac.be Abstract In the last few years, the concept of time granularity has been defined by several researchers, and a glossary of time granularity concepts has been published. These definitions often view a time granularity as a (mostly infinite) sequence of time granules. Although this view is conceptually clean, it is extremely inefficient or even practically impossible to represent a time granularity in this manner. In this paper, we present a practical formalism for the finite representation of infinite granularities. The formalism is string-based, allows symbolic reasoning, and can be extended to multiple dimensions to accommodate, for example, space. Introduction In the last few years, formalisms to represent and to reason about temporal and spatial granularity have been developed in several areas of computer science. Although several researchers have used different definitions of time granularity, they comm...
[ 474, 2131, 2460, 3105 ]
Test
1,073
2
Maximizing Coverage of Mediated Web Queries Over the Web, mediators are built on large collections of sources to provide integrated access to Web content (e.g., meta-search engines). In order to minimize the expense of visiting a large number of sources, mediators need to choose a subset of sources to contact when processing queries. As fewer sources participate in processing a mediated query, the coverage of the query goes down. In this paper, we study this trade-off and develop techniques for mediators to maximize the coverage for their queries while at the same time visiting a subset of their sources. We formalize the problem; study its complexity; propose algorithms to solve it; and analyze the theoretical performance guarantees of the algorithms. We also study the performance of our algorithms through simulation experiments. 1 Introduction Web sources often provide limited information "coverage." For instance, one type of information source is search engines, such as Lycos [27], Northern Light [29] and Yahoo [30]....
[ 1536, 2458 ]
Test
1,074
3
Cache Digests This paper presents Cache Digest, a novel protocol and optimization technique for cooperative Web caching. Cache Digest allows proxies to make information about their cache contents available to peers in a compact form. A peer uses digests to identify neighbors that are likely to have a given document. Cache Digest is a promising alternative to traditional per-request query/reply schemes such as ICP. We discuss the design ideas behind Cache Digest and its implementation in the Squid proxy cache. The performance of Cache Digest is compared to ICP using real-world Web caches operated by NLANR. Our analysis shows that Cache Digest outperforms ICP in several categories. Finally, we outline improvements to the techniques we are currently working on. 1 Introduction One of the most difficult problems in the design of Web cache hierarchies is efficiently locating objects held in neighbor caches. When a cache needs to forward a request, how does it know whether to use a sibling, a parent, or p...
[ 2518 ]
Train
1,075
3
Broadcast of Consistent Data to Read-Only Transactions from Mobile Clients In this paper, we study the data inconsistency problem in data broadcast to mobile transactions. While data items in a mobile computing system are being broadcast, update transactions may install new values for the data items. If the executions of update transactions and the broadcast of data items are interleaved without any control, the transactions generated by mobile clients, called mobile transactions, may observe inconsistent data values. In this paper, we propose a new protocol, called Update-First with Order (UFO), for concurrency control between read-only mobile transactions and update transactions. We show that although the protocol is simple, all the schedules are serializable when the UFO protocol is applied. Furthermore, the new protocol possesses many desirable properties for mobile computing systems: mobile transactions do not need to set any lock before they read a data item from the "air", and the protocol can be applied to different broadcast algorithms. Its performance has been investigated with extensive simulation experiments. The results show that the protocol can maximize the freshness of the data items provided to mobile transactions, and that the broadcast overhead is not heavy, especially when the arrival rate of the update transactions is not very high.
[ 954 ]
Test
1,076
0
A Multi-Agent Approach to Vehicle Monitoring on a Motorway. This paper describes CaseLP, a prototyping environment for MultiAgent Systems (MAS), and its adoption for the development of a distributed industrial application. CaseLP employs architecture definition, communication, logic and procedural languages to model a MAS from the top-level architecture down to the procedural behavior of each agent's instance. The executable specification which is obtained can be employed as a rapid prototype which helps in taking quick decisions on the best possible implementation solutions. Such capabilities have been applied to a distributed application of the Elsag company, in order to assess the best policies for data communication and database allocation before the concrete implementation. The application consists of remote traffic control and surveillance over service areas on an Italian motorway, employing automatic detection and car plate reading at monitored gates. CaseLP made it possible to predict data communication performance statistics under differe...
[ 239, 1157, 2432, 3066 ]
Test
1,077
0
Process- and Agent-Based Modelling Techniques for Dialogue Systems and Virtual Environments This text presents results of ongoing research aimed at a framework for developing multimodal natural language dialogue systems operating within virtual environments. The aspects of multimodality and presence in a virtual environment are chosen as the main focus of this research. It may be argued that specification techniques would form the basis of such a framework. Therefore, a general overview and evaluation is given of existing specification techniques for interactive systems, based on both the literature and previous research results. This includes the object-oriented model, process algebras, interactor models, and agent systems. Agent systems are further subdivided into intentional logics, production rule systems, agent communication languages, agent platforms, and agent architectures. A new agent system is proposed, which is based on update notification mechanisms as found in interactor models, and the `facilitator' function as found in some agent platfo...
[ 140, 2309 ]
Train
1,078
3
Probabilistic Deduction with Conditional Constraints over Basic Events We study the problem of probabilistic deduction with conditional constraints over basic events. We show that globally complete probabilistic deduction with conditional constraints over basic events is NP-hard. We then concentrate on the special case of probabilistic deduction in conditional constraint trees. We elaborate very efficient techniques for globally complete probabilistic deduction. In detail, for conditional constraint trees with point probabilities, we present a local approach to globally complete probabilistic deduction, which runs in linear time in the size of the conditional constraint trees. For conditional constraint trees with interval probabilities, we show that globally complete probabilistic deduction can be done in a global approach by solving nonlinear programs. We show how these nonlinear programs can be transformed into equivalent linear programs, which are solvable in polynomial time in the size of the conditional constraint trees. 1. Introduction Dealing wit...
[ 440, 1907, 2499, 2500 ]
Validation
1,079
0
A Mobile Agent-Based Active Network Architecture Active networks enable customization of network functionality without the lengthy standard-mediated committee processes. Most work in the literature utilizes capsules or active packets as the means to transfer code information across active networks. In this paper, we propose an active network infrastructure based on mobile agent technologies. In our prototype implementation, mobile agents are the building blocks for carrying functional customizations, and the active nodes offer software application layers, the Agent Servers, to process mobile agent-specific customizations to facilitate network functionality. Both integrated and discrete operational models of network customizations are supported. In addition, for application-specific protocol development and deployment, an abstract protocol structure and a protocol loading mechanism are presented. Furthermore, we provide an agent management/control mechanism and devise a protocol management/control mechanism. As a result, improved network functionality can be achieved. 1
[ 725 ]
Train