| node_id (int64, 0–76.9k) | label (int64, 0–39) | text (string, 13–124k chars) | neighbors (list, 0–3.32k items) | mask (string, 4 classes) |
|---|---|---|---|---|
2,280 | 3 | Implementing Clinical Practice Guidelines While Taking Account of Changing Evidence: ATHENA DSS, an Easily Modifiable Decision-Support System for Managing Hypertension in Primary Care This paper describes the ATHENA Decision Support System (DSS), which operationalizes guidelines for hypertension using the EON architecture. ATHENA DSS encourages blood pressure control and recommends guideline-concordant choice of drug therapy in relation to comorbid diseases. ATHENA DSS has an easily modifiable knowledge base that specifies eligibility criteria, risk stratification, blood pressure targets, relevant comorbid diseases, guideline-recommended drug classes for patients with comorbid disease, preferred drugs within each drug class, and clinical messages. Because evidence for best management of hypertension evolves continually, ATHENA DSS is designed to allow clinical experts to customize the knowledge base to incorporate new evidence or to reflect local interpretations of guideline ambiguities. Together with its database mediator Athenaeum, ATHENA DSS has physical and logical data independence from the legacy Computerized Patient Record System (CPRS) supplying the patient data, so it can be integrated into a variety of electronic medical record systems. | [
2698
] | Test |
2,281 | 1 | Inductive Bias in Case-Based Reasoning Systems In order to learn more about the behaviour of case-based reasoners as learning systems, we formalise a simple case-based learner as a PAC learning algorithm, using the case-based representation ⟨CB, σ⟩. We first consider a 'naive' case-based learning algorithm CB1(σH) which learns by collecting all available cases into the case-base and which calculates similarity by counting the number of features on which two problem descriptions agree. We present results concerning the consistency of this learning algorithm and give some partial results regarding its sample complexity. We are able to characterise CB1(σH) as a 'weak but general' learning algorithm. We then consider how the sample complexity of case-based learning can be reduced for specific classes of target concept by the application of inductive bias, or prior knowledge of the class of target concepts. Following recent work demonstrating how case-based learning can be improved by choosing a similarity measure appropriate to t... | [
636,
966,
3160
] | Train |
2,282 | 4 | Classroom 2000: An Experiment with the Instrumentation of a Living Educational Environment One potentially useful feature of future computing environments is the ability to capture the live experiences of the occupants and to provide that record to users for later access and review. Over the last 3 years, we have designed and extensively used a particular instrumented environment, the classroom, designed to facilitate the easy capture of the traditional lecture experience. We will describe the history of the Classroom 2000 project at Georgia Tech, and provide results of extended evaluations of the impact of automated capture on the teaching and learning experience. In addition to understanding the impact of automated capture in this educational domain, there are many important lessons to take away from this long-term, large-scale experiment with a living ubiquitous computing environment. The environment needs to address issues of scale and extensibility, and there needs to be a way to continuously evaluate the effectiveness of the environment and understand and react to the w... | [
837
] | Test |
2,283 | 2 | Mining the Link Structure of the World Wide Web The World Wide Web contains an enormous amount of information, but it can be exceedingly difficult for users to locate resources that are both high in quality and relevant to their information needs. We develop algorithms that exploit the hyperlink structure of the WWW for information discovery and categorization, the construction of high-quality resource lists, and the analysis of on-line hyperlinked communities. 1 Introduction The World Wide Web contains an enormous amount of information, but it can be exceedingly difficult for users to locate resources that are both high in quality and relevant to their information needs. There are a number of fundamental reasons for this. The Web is a hypertext corpus of enormous size --- approximately three hundred million Web pages as of this writing --- and it continues to grow at a phenomenal rate. But the variation in pages is even worse than the raw scale of the data: the set of Web pages taken as a whole has almost no unifying structure, wi... | [
255,
323,
825,
1838,
2091,
2437,
2459,
2503
] | Train |
2,284 | 5 | Embedding Knowledge in Web Documents The paper argues for the use of general and intuitive knowledge representation languages (and simpler notational variants, e.g. subsets of natural languages) for indexing the content of Web documents and representing knowledge within them. We believe that these languages have advantages over metadata languages based on the Extensible Mark-up Language (XML). Indeed, the retrieval of precise information is better supported by languages designed to represent semantic content and support logical inference, and the readability of such a language eases its exploitation, presentation and direct insertion within a document (thus also avoiding information duplication). We advocate the use of Conceptual Graphs and simpler notational variants that enhance knowledge readability. To further ease the representation process, we propose techniques allowing users to leave some knowledge terms undeclared. We also show how lexical, structural and knowledge-based techniques may be combined to retrieve or ... | [
409,
1019,
1146,
2003,
2787
] | Train |
2,285 | 1 | Solving Stabilization Problems in Case-Based Knowledge Acquisition Case-based reasoning is widely deemed an important methodology towards alleviating the bottleneck of knowledge acquisition. The key idea is to collect cases representing a human's or a system's experience directly rather than trying to construct generalizations. Episodic knowledge accumulated this way may be used flexibly for different purposes by determining similarities between formerly solved problems and current situations under investigation. But the flexibility of case-based reasoning brings with it a number of disadvantages. One crucial difficulty is that every new experience might seem worth memorizing. As a result, a case-based reasoning system may substantially suffer from collecting a huge amount of garbage without being able to separate the chaff from the wheat. This paper presents a case study in case-based learning. Some target concept has to be learned by collecting cases and tuning similarity concepts. It is extremely difficult to avoid collecting a huge amount of ... | [
38,
100,
1051,
1844,
3160,
3176
] | Train |
2,286 | 4 | Affectively tunable environments for the virtual stage (Abstract) Affective computing [8, 9] is an emerging area of human-computer interaction which studies the ways in which computer systems can recognize and work with human emotions. The word affective is an adjective used in psychology to mean "relating to emotions". Examples of affective computing are the creation of systems which respond in different ways according to the system's perception of the user's emotional state [1, 2, 3], and systems which facilitate the communication of emotion through a virtual environment [7]. One aspect of affective computing which is particularly apposite to the theatre is the ability to create environments in which the capacity for communication of emotion (what we might call the affective bandwidth) of the environment is variable [6]. Consider the follo... | [
451
] | Train |
2,287 | 2 | Optimizing the parSOM Neural Network Implementation for Data Mining with Distributed Memory Systems and Cluster Computing The self-organizing map is a prominent unsupervised neural network model which lends itself to the analysis of high-dimensional input data and data mining applications. However, the high execution times required to train the map put a limit to its application in many high-performance data analysis application domains. In this paper we discuss the parSOM implementation, a software-based parallel implementation of the self-organizing map, and its optimization for the analysis of high-dimensional input data using distributed memory systems and clusters. The original parSOM algorithm scales very well in a parallel execution environment with low communication latencies and exploits parallelism to cope with memory latencies. However, it suffers from poor scalability on distributed memory computers. We present optimizations to further decouple the subprocesses, simplify the communication model and improve the portability of the system. 1 Introduction The self-organizing map (SOM) [5] is a pr... | [
2396
] | Train |
2,288 | 2 | Modeling Score Distributions for Combining the Outputs of Search Engines In this paper the score distributions of a number of text search engines are modeled. It is shown empirically that the score distributions on a per query basis may be fitted using an exponential distribution for the set of non-relevant documents and a normal distribution for the set of relevant documents. Experiments show that this model fits TREC-3 and TREC-4 data for not only probabilistic search engines like INQUERY but also vector space search engines like SMART for English. We have also used this model to fit the output of other search engines like LSI search engines and search engines indexing other languages like Chinese. It is then shown that given a query for which relevance information is not available, a mixture model consisting of an exponential and a normal distribution can be fitted to the score distribution. These distributions can be used to map the scores of a search engine to probabilities. We also discuss how the shape of the score distributions arise given certain assumptions about word distributions in documents. We hypothesize that all 'good' text search engines operating on any language have similar characteristics. This model has many possible applications. For example, the outputs of different search engines can be combined by averaging the probabilities (optimal if the search engines are independent) or by using the probabilities to select the best engine for each query. Results show that the technique performs as well as the best current combination techniques. This material is based on work supported in part by the National Science Foundation, Library of Congress and Department of Commerce under cooperative agreement number EEC-9209623, in part by the National Science Foundation under grant numbers IRI-9619117 and IIS-9909073, in part by N... | [
435,
887
] | Test |
2,289 | 3 | Concept Based Design of Data Warehouses: The DWQ Demonstrators The ESPRIT Project DWQ (Foundations of Data Warehouse Quality) aimed at improving the quality of DW design and operation through systematic enrichment of the semantic foundations of data warehousing. Logic-based knowledge representation and reasoning techniques were developed to control accuracy, consistency, and completeness via advanced conceptual modeling techniques for source integration, data reconciliation, and multi-dimensional aggregation. This is complemented by quantitative optimization techniques for view materialization, optimizing timeliness and responsiveness without losing the semantic advantages from the conceptual approach. At the operational level, query rewriting and materialization refreshment algorithms exploit the knowledge developed at design time. The demonstration shows the interplay of these tools under a shared metadata repository, based on an example extracted from an application at Telecom Italia. 1 Overview of the Demonstration The demonstration follows ... | [
454,
1222,
2297,
2487,
2677,
2802
] | Validation |
2,290 | 4 | Designing Graspable Groupware for Co-Located Planning and Configuration Tasks This paper shows some of the vital steps in the design process of a graspable groupware system. Activity theory is the theoretical foundation for our research. Our design philosophy is based on the tradition of Augmented Reality (AR), which enriches natural communication with virtual features. Another important part of our design philosophy is the use of coinciding action and perception spaces. We developed groupware for layout planning and configuration tasks called the BUILD-IT system. This system enables users, grouped around a table, to cooperate in the design manipulation of a virtual setting, thus supporting co-located, instead of distributed, interaction (Rauterberg et al., 1997a, 1997b, 1998; Fjeld et al., 1998a). The multi-user nature of BUILD-IT overcomes a serious drawback often seen with CSCW tools, namely that they are based on single-user applications (Grudin, 1988). We believe that co-location is an indispensable factor for the early stage of a complex planning process. Input and output, however, can be prepared and further developed off-line (Fjeld et al., 1998b), using any conventional CAD system. | [
1042,
1938,
2373
] | Test |
2,291 | 2 | Distributed Query Scheduling Service: An Architecture and Its Implementation We present the systematic design and development of a distributed query scheduling service (DQS) in the context of DIOM, a distributed and interoperable query mediation system [26]. DQS consists of an extensible architecture for distributed query processing, a three-phase optimization algorithm for generating efficient query execution schedules, and a prototype implementation. Functionally, two important execution models of distributed queries, namely moving query to data or moving data to query, are supported and combined into a unified framework, allowing the data sources with limited search and filtering capabilities to be incorporated through wrappers into the distributed query scheduling process. Algorithmically, conventional optimization factors (such as join order) are considered separately from and refined by distributed system factors (such as data distribution, execution location, heterogeneous host capabilities), allowing for stepwise refinement through three optimization phases: compilation, parallelization, site selection and execution. A subset of DQS algorithms has been implemented in Java to demonstrate the practicality of the architecture and the usefulness of the distributed query scheduling algorithm in optimizing execution schedules for inter-site queries. | [
3062
] | Validation |
2,292 | 3 | Following the paths of XML Data: An algebraic framework for XML query evaluation This paper introduces an algebraic framework for expressing and evaluating queries over XML data. It presents the underlying assumptions of the framework, describes the input and output of the algebraic operators, and defines these operators and their semantics. It evaluates the framework with regard to other proposed XML query algebras. Examples show that this framework is flexible enough to capture queries expressed in Quilt, one of the dominant XML query languages. We have used this algebra in the context of an Internet query engine, in which it is used to formulate logical plans for XML-QL queries. We define equivalence rules that provide opportunities for optimization, and give example cases that point out the usefulness of these rules. 1 | [
392,
1056,
2910
] | Train |
2,293 | 3 | Modeling and Executing the Data Warehouse Refreshment Process Data warehouse refreshment is often viewed as a problem of maintaining | [
610
] | Test |
2,294 | 4 | Requirements Interaction Management Abstraction. Requirements may be distinguished based on the abstraction level of their description. A requirement may be further defined by adding new details in more specialized subrequirements. Through specialization of abstract requirements, or generalization of detailed requirements, a requirement abstraction hierarchy can be defined. Development properties. Requirements may be distinguished based on their development properties. For example, a requirement may have just been proposed. Later, it may be accepted or rejected. Representational properties. Requirements may be distinguished based on their representation. A requirement may begin as an informal sketch, then become a natural language sentence (e.g., "The system shall ..."). Finally, more formal representations, such as UML, Z, or predicate calculus, may be used to express a requir... | [
286,
2785
] | Train |
2,295 | 3 | OMS Java: Lessons Learned from Building a Multi-Tier Object Management Framework We present the object-oriented multi-tier application framework OMS Java which is independent of the underlying database management system (DBMS). We detail the storage management component and sketch which part of the framework has to be extended when introducing a new DBMS. We compare versions of OMS Java using the persistent storage engine ObjectStore PSE Pro for Java, the object-oriented DBMS Objectivity/DB, the object-relational DBMS Oracle and the proprietary DBMS Berkeley DB. 1 Introduction Most applications create data that extends the life of an application process making it necessary that application objects can be stored in and retrieved from non-volatile storage. Furthermore, looking at pure object-oriented applications, i.e. applications developed entirely using an object-oriented language environment such as Java [KA96], application objects typically refer to many other application objects resulting in complex object hierarchies. It is therefore crucial to find mechan... | [
436,
576,
2784
] | Test |
2,296 | 4 | Essential Principles for Workflow Modelling Effectiveness While the specification languages of workflow management systems focus on process execution semantics, the successful development of workflows relies on a fuller conceptualisation of business processing, including process semantics. Traditionally, the success of conceptual modelling techniques has depended largely on the adequacy of certain requirements: conceptualisation (following the Conceptualisation Principle), expressive power (following the One Hundred Principle) , comprehensibility and formal foundation. An equally important requirement, particularly with the increased conceptualisation of business aspects, is business suitability. In this paper, the focus is on the suitability of workflow modelling for a commonly encountered class of (operational) business processing, e.g. those of insurance claims, bank loans and land conveyancing. Based on a previously conducted assessment of a number of integrated techniques, the results of which are summarised in this paper, fiv... | [
248
] | Validation |
2,297 | 3 | A Temporal Description Logic for Reasoning over Conceptual Schemas and Queries This paper introduces a new logical formalism, intended for temporal conceptual modelling, as a natural combination of the well-known description logic DLR and point-based linear temporal logic with Since and Until. The expressive power of the resulting DLRUS logic is illustrated by providing a characterisation of the most important temporal conceptual modelling constructs appeared in the literature. We define a query language (where queries are non-recursive Datalog programs and atoms are complex DLRUS expressions) and investigate the problem of checking query containment under the constraints defined by DLRUS conceptual schemas---i.e., DLRUS knowledge bases---as well as the problems of schema satisfiability and logical implication. | [
108,
262,
2289,
2789
] | Test |
2,298 | 1 | Learning in Case-Based Classification Algorithms While symbolic learning approaches encode the knowledge provided by the presentation of the cases explicitly into a symbolic representation of the concept, e.g. formulas, rules, or decision trees, case-based approaches describe learned concepts implicitly by a pair (CB, d), i.e. by a set CB of cases and a distance measure d. Given the same information, symbolic as well as the case-based approach compute a classification when a new case is presented. This poses the question if there are any differences concerning the learning power of the two approaches. In this work we will study the relationship between the case base, the measure of distance, and the target concept of the learning process. To do so, we transform a simple symbolic learning algorithm (the version space algorithm) into an equivalent case-based variant. The achieved results strengthen the conjecture of the equivalence of the learning power of symbolic and case-based methods and show the interdependency between the measure... | [
1844,
3160
] | Train |
2,299 | 3 | Warehousing Workflow Data: Challenges and Opportunities Workflow management systems (WfMSs) are software platforms that allow the definition, execution, monitoring, and management of business processes. WfMSs log every event that occurs during process execution. Therefore, workflow logs include a significant amount of information that can be used to analyze process executions, understand the causes of high- and low-quality process executions, and rate the performance of internal resources and business partners. In this paper we present a packaged data warehousing solution, coupled with HP Process Manager, for collecting and analyzing workflow execution data. We first present the main challenges involved in this effort, and then detail the proposed approach. 1. | [
483
] | Train |
2,300 | 2 | Methods and Metrics for Cold-Start Recommendations We have developed a method for recommending items that combines content and collaborative data under a single probabilistic framework. We benchmark our algorithm against a naïve Bayes classifier on the cold-start problem, where we wish to recommend items that no one in the community has yet rated. We systematically explore three testing methodologies using a publicly available data set, and explain how these methods apply to specific real-world applications. We advocate heuristic recommenders when benchmarking to give competent baseline performance. We introduce a new performance metric, the CROC curve, and demonstrate empirically that the various components of our testing strategy combine to obtain deeper understanding of the performance characteristics of recommender systems. Though the emphasis of our testing is on cold-start recommending, our methods for recommending and evaluation are general. | [
1161,
1958,
2033,
2096,
2631,
2847,
2903,
3003
] | Test |
2,301 | 4 | A Visual Modality for the Augmentation of Paper In this paper we describe how we have enhanced our multimodal paper-based system, Rasa, with visual perceptual input. We briefly explain how Rasa improves upon current decision-support tools by augmenting, rather than replacing, the paper-based tools that people in command and control centers have come to rely upon. We note shortcomings in our initial approach, discuss how we have added computer-vision as another input modality in our multimodal fusion system, and characterize the advantages that it has to offer. We conclude by discussing our current limitations and the work we intend to pursue to overcome them in the future. | [
424,
2147,
2359
] | Train |
2,302 | 1 | Erratic Fudgets: A Semantic Theory for an Embedded Coordination Language The powerful abstraction mechanisms of functional programming languages provide the means to develop domain-specific programming languages within the language itself. Typically, this is realised by designing a set of combinators (higher-order reusable programs) for an application area, and by constructing individual applications by combining and coordinating individual combinators. This paper is concerned with a successful example of such an embedded programming language, namely Fudgets, a library of combinators for building graphical user interfaces in the lazy functional language Haskell. The Fudget library has been used to build a number of substantial applications, including a web browser and a proof editor interface to a proof checker for constructive type theory. This paper develops a semantic theory for the non-deterministic stream processors that are at the heart of the Fudget concept. The interaction of two features of stream processors makes the development of such a semantic theory problematic: the sharing of computation provided by the lazy evaluation mechanism of the underlying host language, and the addition of non-deterministic choice needed to handle the natural concurrency that reactive applications entail. We demonstrate that this combination of features in a higher-order functional language can be tamed to provide a tractable semantic theory and induction principles suitable for reasoning about contextual equivalence of Fudgets. | [
1274
] | Test |
2,303 | 4 | Visualisation Effectiveness Providing evaluations of visualisations is one way to demonstrate that they support a purpose and are adequate for the role claimed for them. The problem in doing so is that there is no central source of evaluation issues that one can use a subset of for this purpose. There is also very little in the way of agreement over what constitutes a good visualisation, hence the evaluation criteria differ. There are the human-computer interaction ideals, the slightly differing ones from usability engineering, those from the visualisation community, and also the need to be able to support the variable abilities of the users. Graphics, as the medium behind visualisation, may support greater bandwidth, but is also prone to more likes and dislikes than other forms of interface. The concept of visualisation effectiveness and therefore ways of evaluating visualisations provide the focus for this paper. Keywords: Visualisation, Evaluation, Usability, Understanding 1 | [
2020
] | Train |
2,304 | 2 | Discovering Internet Resources to Enrich a Structured Personal Information Space The Internet is a tremendous resource where one can find documents to enrich a personal information space. The question is: how can one find relevant documents and how can these be organized into an information space? In this paper, we describe a prototype which aims to provide the user with assistance in these two tasks. Our approach assumes the existence of an initial concept structure set up by the user. This structure may contain only rudimentary descriptions for each concept. The system's task is to find relevant documents from the Internet and to insert them in the appropriate places in the concept structure. 1. Information Management for Internet Users The amount of information available through the Internet is overwhelming; as a result, most of this information goes unnoticed or gets lost again soon after having been noticed. The problem is not new, it is just being exacerbated by two factors: a sudden growth in the number of information consumers accompanied by acceleration ... | [
2392
] | Test |
2,305 | 1 | How Statistics and Prosody can guide a Chunky Parser Introduction Following the most common architecture of spoken dialog systems as shown in Figure 1, the main task of linguistic processing is to yield a semantic representation of what the user said. [Figure 1. Typical dialog system architecture.] These semantic representations are interpreted by the dialog module according to the dialog context and the system answer will be generated accordingly. The system utterance depends on whether the system still needs certain information or if all necessary information has been given to accomplish its task. In order to know when all required information has been provided, the dialog This work was partly funded by the European Community in the framework of the SQEL--Project (Spoken Queries in European Languages), Copernicus Project No. 1634. The responsibility for the contents lies with | [
680
] | Train |
2,306 | 3 | BBQ: A Visual Interface for Integrated Browsing and Querying of XML In this paper we present BBQ (Blended Browsing and Querying), a graphic user interface for seamlessly browsing and querying XML data sources. BBQ displays the structure of multiple data sources using a paradigm that resembles drilling-down in Windows' directory structures. BBQ allows queries incorporating one or more of the sources. Queries are constructed in a query-by-example (QBE) manner, where DTDs play the role of schema. The queries are arbitrary conjunctive queries with GROUPBY, and their results can be subsequently used and refined. To support query refinement, BBQ introduces virtual result views: standalone virtual data sources that (i) are constructed by user queries, from elements in other data sources, and (ii) can be used in subsequent queries as first-class data sources themselves. Furthermore, BBQ allows users to query data sources with loose or incomplete schema, and can augment such schema with a DTD inference mechanism. | [
524
] | Train |
2,307 | 3 | Computational Logic and Machine Learning: A roadmap for Inductive Logic Programming Computational logic has already significantly influenced (symbolic) machine learning through the field of inductive logic programming (ILP) which is concerned with the induction of logic programs from examples and background knowledge. In ILP, the shift of attention from program synthesis to knowledge discovery resulted in advanced techniques that are practically applicable for discovering knowledge in relational databases. Machine learning, and ILP in particular, has the potential to influence computational logic by providing an application area full of industrially significant problems, thus providing a challenge for other techniques in computational logic. This paper gives a brief introduction to ILP, presents state-of-the-art ILP techniques for relational knowledge discovery as well as some research and organizational directions for further developments in this area. 1 Introduction Inductive logic programming (ILP) [35, 39, 29] is a research area that has its backgrounds in induct... | [
1192,
1760,
2140
] | Train |
2,308 | 3 | A systematic classification of replicated database protocols based on atomic broadcast Database replication protocols based on group communication primitives have recently emerged as a promising technology to improve database fault-tolerance and performance. Roughly speaking, this approach consists in exploiting the order and atomicity properties provided by group communication primitives or, more specifically Atomic Broadcast, to guarantee transaction properties. This paper proposes a systematic classification of non-voting database replication algorithms based on Atomic Broadcast. 1. | [
2312,
2917
] | Train |
2,309 | 0 | A Methodology for Agent-Oriented Analysis and Design . This article presents Gaia: a methodology for agent-oriented analysis and design. The Gaia methodology is both general, in that it is applicable to a wide range of multi-agent systems, and comprehensive, in that it deals with both the macro-level (societal) and the micro-level (agent) aspects of systems. Gaia is founded on the view of a multi-agent system as a computational organisation consisting of various interacting roles. We illustrate Gaia through a case study (an agent-based business process management system). 1. Introduction Progress in software engineering over the past two decades has been made through the development of increasingly powerful and natural high-level abstractions with which to model and develop complex systems. Procedural abstraction, abstract data types, and, most recently, objects and components are all examples of such abstractions. It is our belief that agents represent a similar advance in abstraction: they may be used by software developers to more n... | [
1012,
1077,
1151,
1272,
1309,
1414,
1483,
1502,
2337,
2343,
2659,
2922,
2950,
3108,
3132
] | Validation |
2,310 | 0 | The Abels Framework For Linking Distributed Simulations Using Software Agents Many simulations need access to dynamicallychanging data from other sources such as sensors or even other simulations. For example, a forest fire simulation may need data from sensors in the forest and a weather simulation at a remote site. We have developed an agentbased software framework to facilitate the dynamic exchange of data between distributed simulations and other remote data resources. The framework, called ABELS (Agent-Based Environment for Linking Simulations), allows independently designed simulations to communicate seamlessly with no a priori knowledge of the details of other simulations and data resources. This paper discusses our architecture and current implementation using Sun Microsystems' Jini technology and the D'Agents mobile agent system. This paper extends earlier work by describing the implementation of the brokering system for matching data producers and consumers and providing additional details of other components. | [
2553,
2669
] | Train |
2,311 | 4 | Development and Characterization of an Acoustic Rangefinder Localization is important to wearable and embedded applications at many levels. For example, it might be used to implement "physical" user interfaces which detect body language or the user's positioning of objects in the environment. This paper presents the initial results of a prototype implementation of an acoustic rangefinder. Unlike other published work in this area, this design uses a sliding correlator to detect the acoustic signal. This technique significantly improves performance in obstructed and noisy environments. Ongoing work towards a more compact implementation and towards new protocols for establishing coordinate systems is also described. 1 Introduction When the members of our research group are brainstorming applications for our soon-to-be-indispensable ad-hoc networked sensor technology, we often run into the same brick wall: how do we do anything that is physically motivated and context aware without localization? Virtually every application we propose degenerates t... | [
217
] | Test |
2,312 | 3 | Totally Ordered Broadcast and Multicast Algorithms: A Comprehensive Survey Total order multicast algorithms constitute an important class of problems in distributed systems, especially in the context of fault-tolerance. In short, the problem of total order multicast consists in sending messages to a set of processes, in such a way that all messages are delivered by all correct destinations in the same order. However, the huge amount of literature on the subject and the plethora of solutions proposed so far make it difficult for practitioners to select a solution adapted to their specific problem. As a result, naive solutions are often used while better solutions are ignored. This paper proposes a classification of total order multicast algorithms based on the ordering mechanism of the algorithms, and describes a set of common characteristics (e.g., assumptions, properties) with which to evaluate them. In this classification, more than fifty total order broadcast and multicast algorithms are surveyed. The presentation includes asynchronous algorithms ... | [
2308
] | Validation |
2,313 | 0 | Supporting Internet-Scale Multi-Agent Systems This paper presents a model of AgentScape from the agent perspective, that is, the location comprising the middleware and the resources are represented by a location manager agent and resource objects. Calls from an agent to the middleware are modeled by requests to the location manager agent to, for example, create an agent or move an agent. Information about resources residing at the location can be retrieved by binding to the resource objects, which are local distributed objects. These objects can be accessed only within the location they reside, not from outside the location. For development of agent applications, an application programming interface (API) and a runtime system (RTS) are provided, see Fig. 1. The default API and RTS can be extended to provide a higher-level application programming interface with, for example, a model that offers more structure and semantics to the agent application developer. Within AgentScape, management of large-scale agent systems is an important issue, includi | [
170,
555,
1223,
1369,
1618,
2364,
2593,
2843,
3089
] | Test |
2,314 | 0 | Exception Handling in Agent Systems A critical challenge to creating effective agent-based systems is allowing them to operate effectively when the operating environment is complex, dynamic, and error-prone. In this paper we will review the limitations of current "agent-local" approaches to exception handling in agent systems, and propose an alternative approach based on a shared exception handling service that is "plugged", with little or no customization, into existing agent systems. This service can be viewed as a kind of "coordination doctor"; it knows about the different ways multi-agent systems can get "sick", actively looks system-wide for symptoms of such "illnesses", and prescribes specific interventions instantiated for this particular context from a body of general treatment procedures. Agents need only implement their normative behavior plus a minimal set of interfaces. We claim that this approach offers simplified agent development as well as more effective and easier to modify exception handling behavior. T... | [
286,
424,
2359,
2408
] | Train |
2,315 | 2 | Efficient Web Search on Mobile Devices with Multi-Modal Input and Intelligent Text Summarization Ease of browsing and searching for information on mobile devices has been an area of increasing interest in the World Wide Web research community [1, 2, 3, 6, 7]. While some work has been done to enhance the usability of handwriting recognition to input queries through techniques such as automatic word suggestion [2], the use of speech as an input mechanism has not been extensively studied. This paper presents a system which combines spoken query input with automatic title summarization to ease searching for information on a mobile device. A preliminary usability study with 10 subjects indicates that spoken queries are preferred over other input methods. | [
741,
1909
] | Train |
2,316 | 1 | Face Image Retrieval Using HMMs This paper introduces a new face recognition system that can be used to index (and thus retrieve) images and videos of a database of faces. New face recognition approaches are needed because, although much progress has been made to identify faces taken from different viewpoints, we still cannot robustly identify faces under different illumination conditions, or when the facial expression changes, or when a part of the face is occluded on account of glasses or parts of clothing. When face recognition methods have worked in the past, it was only when all possible "image variations" were learned. Principal Components Analysis (PCA) and Fisher Discriminant Analysis (FDA) are well-known cases of such methods. In this paper we present a different approach to the indexing of face images. Our approach is based on identifying frontal faces and it allows reasonable variability in facial expressions, illumination conditions, and occlusions caused by eye-wear or items of clothing such as scarves. W... | [
690,
2107
] | Train |
2,317 | 1 | Classification With Sparse Grids Using Simplicial Basis Functions Recently we presented a new approach [20] to the classification problem arising in data mining. It is based on the regularization network approach but in contrast to other methods, which employ ansatz functions associated to data points, we use a grid in the usually high-dimensional feature space for the minimization process. To cope with the curse of dimensionality, we employ sparse grids [52]. Thus, only O(h_n^{-1} n^{d-1}) instead of O(h_n^{-d}) grid points and unknowns are involved. Here d denotes the dimension of the feature space and h_n = 2^{-n} gives the mesh size. We use the sparse grid combination technique [30] where the classification problem is discretized and solved on a sequence of conventional grids with uniform mesh sizes in each dimension. The sparse grid solution is then obtained by linear combination. The method computes a nonlinear classifier but scales only linearly with the number of data points and is well suited for data mining applications where the amount of data is very large, but where the dimension of the feature space is moderately high. In contrast to our former work, where d-linear functions were used, we now apply linear basis functions based on a simplicial discretization. This allows handling more dimensions, and the algorithm needs fewer operations per data point. We further extend the method to so-called anisotropic sparse grids, where now different a-priori chosen mesh sizes can be used for the discretization of each attribute. This can improve the run time of the method and the approximation results in the case of data sets with different importance of the attributes. We describe the sparse grid combination technique for the classification problem, give implementational details and discuss the complexity of the algorithm. It turns out that the method scales linearly with the number of given data points. 
Finally we report on the quality of the classifier built by our new method on data sets with up to 14 dimensions. We show that our new method achieves correctness rates which are competitive to those of the best existing methods. | [
2513
] | Train |
2,318 | 2 | Semantics-Based Information Retrieval : In this paper we investigate the use of conceptual descriptions based on description logics for content-based information retrieval and present several innovative contributions. We provide a query-by-examples retrieval framework which avoids the drawback of a sophisticated query language. We extend an existing DL to deal with spatial concepts. We provide a content-based similarity measure based on the least common subsumer which extracts conceptual similarities of examples. 1 Introduction As more and more information of various kinds becomes available for an increasing number of users, one major challenge for Computer Science is to provide efficient access and retrieval mechanisms. This is not only true for Web-based information which by its nature tends to be highly unorganized and heterogeneous, but also for dedicated databases which are designed to provide a particular service. The guiding example of this paper is a "TV-Assistant" with a database containing TV-program information. Its... | [
208,
2361
] | Train |
2,319 | 4 | Appliance Data Services: Making Steps Towards an Appliance Computing World Although digital appliances are designed to be easy to use, their users often cannot even perform simple tasks because the devices lack infrastructural support. The Appliance Data Services project seeks to explore the attributes of an appliance computing world and develop the infrastructure required to support users with digital appliances. 1 | [
2049
] | Test |
2,320 | 1 | Real-world Data is Dirty: Data Cleansing and The Merge/Purge Problem The problem of merging multiple databases of information about common entities is frequently encountered in KDD and decision support applications in large commercial and government organizations. The problem we study is often called the Merge/Purge problem and is difficult to solve both in scale and accuracy. Large repositories of data typically have numerous duplicate information entries about the same entities that are difficult to cull together without an intelligent "equational theory" that identifies equivalent items by a complex, domain-dependent matching process. We have developed a system for accomplishing this Data Cleansing task and demonstrate its use for cleansing lists of names of potential customers in a direct marketing-type application. Our results for statistically generated data are shown to be accurate and effective when processing the data multiple times using different keys for sorting on each successive pass. Combining results of individual passes using transitive c... | [
565,
1109,
1884
] | Train |
2,321 | 0 | Diagnosis as an Integral Part of Multi-Agent Adaptability Agents working under real world conditions may face an environment capable of changing rapidly from one moment to the next, either through perceived faults, unexpected interactions or adversarial intrusions. To gracefully and efficiently handle such situations, the members of a multi-agent system must be able to adapt, either by evolving internal structures and behavior or repairing or isolating those external influences believed to be malfunctioning. The first step in achieving adaptability is diagnosis - being able to accurately detect and determine the cause of a fault based on its symptoms. In this paper we examine how domain independent diagnosis plays a role in multi-agent systems, including the information required to support and produce diagnoses. Particular attention is paid to coordination based diagnosis directed by a causal model. Several examples are described in the context of an Intelligent Home environment, and the issue of diagnostic sensitivity versus efficiency is ad... | [
438,
726,
996,
1139,
1210,
1875,
2628
] | Train |
2,322 | 4 | The Role of Children in the Design of New Technology This paper suggests a framework for understanding the roles that children can play in the technology design process, particularly in regards to designing technologies that support learning. Each role, user, tester, informant, and design partner has been defined based upon a review of the literature and my lab’s own research experiences. This discussion does not suggest that any one role is appropriate for all research or development needs. Instead, by understanding this framework the reader may be able to make more informed decisions about the design processes they choose to use with children in creating new technologies. This paper will present for each role a historical overview, research and development methods, as well as the strengths, challenges, and unique contributions associated with children in the design process. | [
1433,
1511,
1610,
2640
] | Train |
2,323 | 4 | Combining Statistical Measures to Find Image Text Regions We present a method based on statistical properties of local image pixels for focussing attention on regions of text in arbitrary scenes where the text plane is not necessarily fronto-parallel to the camera. This is particularly useful for Desktop or Wearable Computing applications. The statistical measures are chosen to reveal characteristic properties of text. We combine a number of localised measures using a neural network to classify each pixel as text or non-text. We demonstrate our results on typical images. 1. Introduction To automatically enter the contents of a text document into a computer, one can place it on a flatbed scanner and use state of the art Optical Character Recognition (OCR) software to retrieve the characters. However, automatic segmentation and recognition of text in arbitrary scenes, where the text may or may not be fronto-parallel to the viewing plane, is an area of computer vision which has not been extensively researched previously. The problems involved a... | [
1861
] | Train |
2,324 | 2 | Concept Decompositions for Large Sparse Text Data using Clustering Abstract. Unlabeled document collections are becoming increasingly common and available; mining such data sets represents a major contemporary challenge. Using words as features, text documents are often represented as high-dimensional and sparse vectors–a few thousand dimensions and a sparsity of 95 to 99 % is typical. In this paper, we study a certain spherical k-means algorithm for clustering such document vectors. The algorithm outputs k disjoint clusters each with a concept vector that is the centroid of the cluster normalized to have unit Euclidean norm. As our first contribution, we empirically demonstrate that, owing to the high-dimensionality and sparsity of the text data, the clusters produced by the algorithm have a certain “fractal-like ” and “self-similar ” behavior. As our second contribution, we introduce concept decompositions to approximate the matrix of document vectors; these decompositions are obtained by taking the least-squares approximation onto the linear subspace spanned by all the concept vectors. We empirically establish that the approximation errors of the concept decompositions are close to the best possible, namely, to truncated singular value decompositions. As our third contribution, we show that the concept vectors are localized in the word space, are sparse, and tend towards orthonormality. In contrast, the singular vectors are global in the word space and are dense. Nonetheless, we observe the surprising fact that the linear subspaces spanned by the concept vectors and the leading singular vectors are quite close in the sense of small principal angles between them. In conclusion, the concept vectors produced by the spherical k-means | [
847,
901,
1234,
1452,
2029,
2179,
2580,
2839,
2989
] | Test |
2,325 | 3 | Computation of the Semantics of Autoepistemic Belief Theories Recently, one of the authors introduced a simple and yet powerful non-monotonic knowledge representation framework, called the Autoepistemic Logic of Beliefs, AEB. Theories in AEB are called autoepistemic belief theories. Every belief theory T has been shown to have the least static expansion T which is computed by iterating a natural monotonic belief closure operator \Psi T starting from T . This way, the least static expansion T of any belief theory provides its natural non-monotonic semantics which is called the static semantics. It is easy to see that if a belief theory T is finite then the construction of its least static expansion T stops after countably many iterations. However, a somewhat surprising result obtained in this paper shows that the least static expansion of any finite belief theory T is in fact obtained by means of a single iteration of the belief closure operator \Psi T (although this requires T to be of a special form, we also show that T can be always put in th... | [
226,
2206,
2732,
2798,
3104
] | Train |
2,326 | 3 | Belief Reasoning in MLS Deductive Databases It is envisaged that the application of the multilevel security (MLS) scheme will enhance flexibility and effectiveness of authorization policies in shared enterprise databases and will replace cumbersome authorization enforcement practices through complicated view definitions on a per user basis. However, as advances in this area are being made and ideas crystallized, the concomitant weaknesses of the MLS databases are also surfacing. We insist that the critical problem with the current model is that the belief at a higher security level is cluttered with irrelevant or inconsistent data as no mechanism for attenuation is supported. Critics also argue that it is imperative for MLS database users to theorize about the belief of others, perhaps at different security levels, an apparatus that is currently missing and the absence of which is seriously felt. The impetus for our current research is this need to provide an adequate framework for belief reasoning in MLS databases. We demonstrate that a prudent application of the concept of inheritance in a deductive database setting will help capture the notion of declarative belief and belief reasoning in MLS databases in an elegant way. To this end, we develop a function to compute belief in multiple modes which can be used to reason about the beliefs of other users. We strive to develop a poised and practical logical characterization of MLS databases for the first time based on the inherently difficult concept of non-monotonic inheritance. We present an extension of the acclaimed Datalog language, called the MultiLog, and show that Datalog is a special case of our language. We also suggest an implementation scheme for MultiLog as a front-end for CORAL. Key Words: MLS databases, belief assertion, reasoning, | [
2356,
3075
] | Train |
2,327 | 0 | Distributed Scheduling to Support a Call Centre: a Co-operative Multi-Agent Approach This paper introduces a multi-agent system architecture to increase the value of 24 hour a day call centre service. This system supports call centres in making appointments with clients on the basis of knowledge of employees and their schedules. Relevant activities of employees are scheduled for employees in preparation of such appointments. The multi-agent system architecture is based on principled design, using the compositional development method for multi-agent systems DESIRE. To schedule procedures in which more than one employee is involved, each employee is represented by its own personal assistant agent, and a work manager agent co-ordinates the schedules of the personal assistant agents, and clients through the call centre. The multi-agent system architecture has been applied to the banking domain, in co-operation with and partially funded by the Rabobank. 1 Introduction Over the past few years, more and more companies and organisations have become aware of the potential of a... | [
555,
2036
] | Validation |
2,328 | 1 | Face Recognition Using Evolutionary Pursuit . This paper describes a novel and adaptive dictionary method for face recognition using genetic algorithms (GAs) in determining the optimal basis for encoding human faces. In analogy to pursuit methods, our novel method is called Evolutionary Pursuit (EP), and it allows for different types of (non-orthogonal) bases. EP processes face images in a lower dimensional whitened PCA subspace. Directed but random rotations of the basis vectors in this subspace are searched by GAs where evolution is driven by a fitness function defined in terms of performance accuracy and class separation (scatter index). Accuracy indicates the extent to which learning has been successful so far, while the scatter index gives an indication of the expected fitness on future trials. As a result, our approach improves the face recognition performance compared to PCA, and shows better generalization abilities than the Fisher Linear Discriminant (FLD) based methods. 1 Introduction A successful face recognition met... | [
719,
2751
] | Train |
2,329 | 2 | A Logical View of Structured Files. Structured data stored in files can benefit from standard database technology. In particular, we show here how such data can be queried and updated using declarative database languages. We introduce the notion of structuring schema, which consists of a grammar annotated with database programs. Based on a structuring schema, a file can be viewed as a database structure, queried and updated as such. For queries, we show that almost standard database optimization techniques can be used to answer queries without having to construct the entire database. For updates, we study in depth the propagation to the file of an update specified on the database view of this file. The problem is not feasible in general and we present a number of negative results. The positive results consist of techniques that allow to propagate updates efficiently under some reasonable locality conditions on ... | [
2120,
2367
] | Train |
2,330 | 1 | Developing and Investigating Two-level Grammar Concepts For Design | [
636
] | Train |
2,331 | 5 | A Media-Independent Content Language for Integrated Text and Graphics Generation This paper describes a media-independent knowledge representation scheme, or content language, for describing the content of communicative goals and actions. The language is used within an intelligent system for automatically generating integrated text and information graphics presentations about complex, quantitative information. The language is designed to satisfy four requirements: to represent information about complex quantitative relations and aggregate properties; compositionality; to represent certain pragmatic distinctions needed for satisfying communicative goals; and to be usable as input by the media-specific generators in our system. 1 Introduction This paper describes a media-independent knowledge representation scheme, or content language, for describing the content of communicative goals and actions. The language is used within an intelligent system for automatically generating integrated text and information graphics 1 presentations about complex, quantitative infor... | [
991,
2486
] | Train |
2,332 | 4 | Coordinating Heterogeneous Work: Information and Representation in Medical Care Introduction The concept of a common information space, or CIS, has become an influential way to think about the use of shared information in collaboration. Originating in the work of Schmidt and Bannon (1992), and further explored by Bannon and Bødker (1997), it was designed to extend then-current notions about the role of technology and shared information. At the time this was originally proposed, a great deal of technical attention was being paid to the development of "shared workspace" systems (e.g. Lu and Mantei 1991; Ishii et al. 1992). These systems attempted to extend the workspaces of conventional single-user applications such as word processors and drawing tools, allowing synchronous or asynchronous collaboration across digital networks. Designing effective shared workspace systems presented a range of technical challenges concerning appropriate network protocols, synchronisation, concurrency control mechanisms, and user interface design. Still, over time con | [
2127
] | Test |
2,333 | 5 | Pattern Search Methods for Molecular Geometry Problems This paper deals with the application of pattern search methods to the numerical solution of a class of molecular geometry problems with important applications in molecular physics and chemistry. The goal is to find a configuration of a cluster or a molecule with minimum total energy. The minimization problems in this class of geometry molecular problems have no constraints and the objective function is smooth. The difficulties arise from the existence of several local minima, and especially, from the expensive function evaluation (total energy) and the possible non-availability of first-order derivatives. We introduce a pattern search approach that attempts to exploit the physical nature of the problem by using energy lowering geometrical transformations. Numerical results with a particular instance of this new class of pattern search methods and with the parallel direct search method of Dennis and Torczon are presented showing the promise of our approach. Key words. molecular ... | [
243
] | Train |
2,334 | 3 | A Case for Parameterized Views and Relational Unification In this paper, we address the issue of remedying the specter of impedance mismatch in object-relational SQL. Our approach makes it possible to remain within the current confines of relational models, yet offers the capability of defining methods by SQL's declarative means, thereby preserving all opportunities of query optimization to the fullest extent. We propose the idea of parameterized views and an extension of SQL's create view construct with an optional with parameter clause. Parameterizing enables traditional SQL views to accept input values and delay the computation of the view until invoked with a call statement. This extension empowers users with the capability of modifying the behavior of predefined procedures (views) by sending arguments and evaluating the procedure on demand. Keywords: parameterized views, declarative methods, object-relational databases, inheritance and overriding, reasoning, unification. 1 Introduction An outstanding issue in object-relational databases dem... | [
2941
] | Test |
2,335 | 2 | Rank Aggregation Revisited The rank aggregation problem is to combine many different rank orderings on the same set of candidates, or alternatives, in order to obtain a "better" ordering. Rank aggregation has been studied extensively in the context of social choice theory, where several "voting paradoxes" have been discovered. The problem | [
147,
1249,
2475,
2503,
2701
] | Test |
2,336 | 2 | Evolving User Profiles to Reduce Internet Information Overload . This paper discusses the use of Evolving Personal Agent Environments as a potential solution to the problem of information overload as experienced in habitual Web surfing. Some first experimental results on evolving user profiles using speciating hybrid GAs, the reasoning behind them and support for their potential application in mobile, wireless and location aware information devices are also presented. 1 Information Overload In everyday life, the Internet user is faced with the ever increasing problem of information overload, whether this occurs at home, at the workplace, or as will soon be happening, everywhere [1] [2]. The overwhelming information feed that computer users face leads to anxiety, strain, inefficiency and finally results in uninformed (or misinformed) and frustrated users [3] [4]. Continuously and increasingly Internet users are confronted with laborious and difficult tasks of information filtering and/or gathering, which are inherently computer-oriented processes... | [
676,
1377,
1857
] | Validation |
2,337 | 0 | Why Autonomy Makes the Agent This paper works on the premise that the position stated by Jennings et al. [17] is correct. Specifically that, amongst other things, the agent metaphor is a useful extension of the object-oriented metaphor. Object-oriented (OO) programming [29] is programming where data-abstraction is achieved by users defining their own data-structures (see figure 1), or "objects". These objects encapsulate data and methods for operating on that data; and the OO framework allows new objects to be created that inherit the properties (both data and methods) of existing objects. This allows archetypal objects to be defined and then extended by different programmers, who needn't have complete understanding of exactly how the underlying objects are implemented | [
9,
374,
1413,
2309,
2629
] | Test |
2,338 | 0 | Formal Models and Decision Procedures for Multi-Agent Systems The study of computational agents capable of rational behaviour has received a great deal of attention in recent years. A number of theoretical formalizations for such multiagent systems have been proposed. However, most of these formalizations do not have a strong semantic basis nor a sound and complete axiomatization. Hence, it has not been clear as to how these formalizations could be used in building agents in practice. This paper explores a particular type of multi-agent system, in which each agent is viewed as having the three mental attitudes of belief (B), desire (D), and intention (I). It provides a family of multi-modal branching-time BDI logics with a semantics that is grounded in traditional decision theory and a possible-worlds framework, categorizes them, provides sound and complete axiomatizations, and gives constructive tableaubased decision procedures for testing the satisfiability and validity of formulas. The computational complexity of these decision procedures is n... | [
353,
566,
712,
1260,
1501,
1616,
1951,
2457,
2598,
2972
] | Train |
2,339 | 0 | GripSee: A Gesture-controlled Robot for Object Perception and Manipulation We have designed a research platform for a perceptually guided robot, which also serves as a demonstrator for a coming generation of service robots. In order to operate semi-autonomously, these require a capacity for learning about their environment and tasks, and will have to interact directly with their human operators. Thus, they must be supplied with skills in the fields of human-computer interaction, vision, and manipulation. GripSee is able to autonomously grasp and manipulate objects on a table in front of it. The choice of object, the grip to be used, and the desired final position are indicated by an operator using hand gestures. Grasping is performed similar to human behavior: the object is first fixated, then its form, size, orientation, and position are determined, a grip is planned, and finally the object is grasped, moved to a new position, and released. As a final example for useful autonomous behavior we show how the calibration of the robot's image-to-world coordinate transform can be learned from experience, thus making detailed and unstable calibration of this important subsystem superfluous. The integration concepts developed at our institute have led to a flexible library of robot skills that can be easily recombined for a variety of useful behaviors. | [
1974
] | Train |
2,340 | 1 | Closing the Loop: Heuristics for Autonomous Discovery Autonomous discovery systems will be able to peruse very large databases more thoroughly than people can. In a companion paper [1], we describe a general framework for autonomous systems. We present and evaluate heuristics for use in this framework. Although these heuristics were designed for a prototype system, we believe they provide good initial solutions to problems encountered when implementing fully autonomous discovery systems. As such, these heuristics may be used as the starting point for future research into fully autonomous discovery systems. 1. | [
2441
] | Train |
2,341 | 5 | Steps towards C+C: a Language for Interactions. We present in this paper our reflections about the requirements of | [
1001,
2229,
2629
] | Train |
2,342 | 2 | Active Hidden Markov Models for Information Extraction Information extraction from HTML documents requires a classifier capable of assigning semantic labels to the words or word sequences to be extracted. If completely labeled documents are available for training, well-known Markov model techniques can be used to learn such classifiers. In this paper, we consider the more challenging task of learning hidden Markov models (HMMs) when only partially (sparsely) labeled documents are available for training. We first give detailed account of the task and its appropriate loss function, and show how it can be minimized given an HMM. We describe an EM style algorithm for learning HMMs from partially labeled data. We then present an active learning algorithm that selects "difficult" unlabeled tokens and asks the user to label them. We study empirically by how much active learning reduces the required data labeling effort, or increases the quality of the learned model achievable with a given amount of user effort. | [
603,
815,
2133,
2522
] | Train |
2,343 | 0 | A Methodology and Modelling Technique for Systems of BDI Agents The construction of large-scale embedded software systems demands the use of design methodologies and modelling techniques that support abstraction, inheritance, modularity, and other mechanisms for reducing complexity and preventing error. If multi-agent systems are to become widely accepted as a basis for large-scale applications, adequate agentoriented methodologies and modelling techniques will be essential. This is not just to ensure that systems are reliable, maintainable, and conformant, but to allow their design, implementation, and maintenance to be carried out by software analysts and engineers rather than researchers. In this paper we describe an agent-oriented methodology and modelling technique for systems of agents based upon the Belief-Desire-Intention (BDI) paradigm. Our models extend existing Object-Oriented (OO) models. By building upon and adapting existing, well-understood techniques, we take advantage of their maturity to produce an approach that can be easily lear... | [
210,
245,
320,
445,
620,
770,
1012,
1151,
1157,
1414,
1681,
1716,
1759,
2164,
2309
] | Train |
2,344 | 2 | Towards A Semantic Framework For Service Description The rapid development of the Internet and of distributed computing has led to a proliferation of online service providers such as digital libraries, web information sources, electronically requestable traditional services, and even software-to-software services such as those provided by persistence and event managers. This has created a need for catalogs of services, based on description languages covering both traditional and electronic services. This paper presents a classification and a domain-independent characterisation of services, which provide a foundation for their description to potential consumers. For each of the service characteristics that we consider, we identify the range of its possible values in different settings, and when applicable, we point to alternative approaches for representing these values. The idea is that by merging these individual approaches, and by mapping them into a unified notation, it is possible to design service description languages suitable for advertisement and matchmaking within specific application settings. (This work was funded by an Australian Research Council SPIRT Grant entitled "Self-describing transactions operating in an open, heterogeneous and distributed environment" involving QUT and GBST Holdings Pty Ltd.) 1. | [
297,
717,
1927
] | Train |
2,345 | 2 | Mixed Initiative Interfaces for Learning Tasks: SMARTedit Talks Back Applications of machine learning can be viewed as teacherstudent interactions in which the teacher provides training examples and the student learns a generalization of the training examples. One such application of great interest to the IUI community is adaptive user interfaces. In the traditional learning interface, the scope of teacher-student interactions consists solely of the teacher/user providing some number of training examples to the student/learner and testing the learned model on new examples. Active learning approaches go one step beyond the traditional interaction model and allow the student to propose new training examples that are then solved by the teacher. In this paper, we propose that interfaces for machine learning should even more closely resemble human teacher-student relationships. A teacher's time and attention are precious resources. An intelligent student must proactively contribute to the learning process, by reasoning about the quality of its knowledge, collaborating with the teacher, and suggesting new examples for her to solve. The paper describes a variety of rich interaction modes that enhance the learning process and presents a decision-theoretic framework, called DIAManD, for choosing the best interaction. We apply the framework to the SMARTedit programming by demonstration system and describe experimental validation and preliminary user feedback. | [
369,
2603
] | Test |
2,346 | 0 | Semantics of BDI Agents and their Environment This paper describes an approach for reasoning about the interactions of multiple agents in moderately complex environments. The semantics of Belief Desire Intention (BDI) agents has been investigated by many researchers and the gap between theoretical specification and practical design is starting to be bridged. However, the research has concentrated on single-agent semantics rather than multiagent semantics and has not emphasised the semantics of the environment and its interaction with the agent. This paper describes a class of simple BDI agents and uses a recently introduced logic of actions to provide semantics for these agents independent of the environment in which they may be embedded. The same logic is used to describe the semantics of the environment itself and the interactions between the agent and the environment. As there is no restriction on the number of agents the environment may interact with, the approach can be used to address the semantics of multiple interacting ag... | [
2364
] | Train |
2,347 | 3 | Logics for Databases and Information Systems Temporal Databases 34 3.2.2 Relational Database Histories 36 3.3 Temporal Queries 36 3.3.1 Abstract Temporal Query Languages 37 3.3.2 Expressive Power 41 3.3.3 Space-efficient Encoding of Temporal Databases 44 3.3.4 Concrete Temporal Query Languages 46 3.3.5 Evaluation of Abstract Query Languages using Compilation 47 3.3.6 SQL and Derived Temporal Query Languages 48 3.4 Temporal Integrity Constraints 53 3.4.1 Notions of constraint satisfaction 53 3.4.2 Temporal Integrity Maintenance 54 3.4.3 Temporal Constraint Checking 56 3.5 Multidimensional Time 58 3.5.1 Why Multiple Temporal Dimensions? 59 3.5.2 Abstract Query Languages for Multi-dimensional Time 59 3.5.3 Encoding of Multi-dimensional Temporal Databases 61 3.6 Beyond First-order Temporal Logic 62 3.7 Conclusion 65 References 65 4 The Role of Deontic Logic in the Specification of Information Systems 71 J.-J. Ch. Meyer, R.J. Wieringa, and F.P.M. Dignum 4.1 Introduction: Soft Constraints and Deontic Logic 72 4.1.1 Integrity Constrai... | [
876,
1534,
2262,
3055
] | Train |
2,348 | 1 | Estimation of Rényi Information Divergence via Pruned Minimal Spanning Trees In this paper we develop robust estimators of the Rényi information divergence (I-divergence) given a reference distribution and a random sample from an unknown distribution. Estimation is performed by constructing a minimal spanning tree (MST) passing through the random sample points and applying a change of measure which flattens the reference distribution. In a mixture model where the reference distribution is contaminated by an unknown noise distribution one can use these results to reject noise samples by implementing a greedy algorithm for pruning the k longest branches of the MST, resulting in a tree called the k-MST. We illustrate this procedure in the context of density discrimination and robust clustering for a planar mixture model. 1. Introduction Let X_n = {x_1, x_2, ..., x_n} denote a sample of i.i.d. data points in R^d having unknown Lebesgue multivariate density f(x_i) supported on [0, 1]^d. Define the order-α Rényi entropy of f [7]: H_α(f) = (1/(1-α)) ln ∫ ... | [
232
] | Train |
2,349 | 2 | Supporting Situated Actions in High Volume Conversational Data Situations The global conferencing system Usenet news offers an amount of articles per day that exceeds human cognitive capabilities by far although the articles are already organized in hierarchically structured discussion groups covering distinct topics. We report here on a situated information filtering system that significantly reduces the burden by supporting the user in acting situated. Interpreting the user's actions as situated actions, the approach complements current filtering and recommender approaches by completely avoiding the modeling of user interests; the user is the only instance for assigning (un-)interestingness to Usenet discussions. Keywords Situated cognition, situated actions, Usenet news, information filtering INTRODUCTION The huge and increasing amount of information available in the information age suggests to investigate new ways to support humans in gathering information that might be interesting, helpful, or necessary for them. Since the overall amount of informat... | [
427
] | Test |
2,350 | 5 | DOGMA: A GA-Based Relational Learner We describe a GA-based concept learning/theory revision system DOGMA and discuss how it can be applied to relational learning. The search for better theories in DOGMA is guided by a novel fitness function that combines the minimal description length and information gain measures. To show the efficacy of the system we compare it to other learners in three relational domains. Keywords: Relational Learning, Genetic Algorithms, Minimal Description Length 1 Introduction Genetic Algorithms (GAs) are stochastic general purpose search algorithms, that have been applied to a wide range of Machine Learning problems. They work by evolving a population of chromosomes, each of which encodes a potential solution to the problem at hand. The task of a GA is to find a highly fit chromosome through the application of different selection and perturbation operators. In this paper we consider the use of GAs in relational concept learning, i.e. in the process of learning and extracting relational classif... | [
2852
] | Train |
2,351 | 4 | Architecture and Implementation of a Java Package for Multiple Input Devices (MID) A major difficulty in writing Single Display Groupware (co-present collaborative) applications is getting input from multiple devices. We introduce MID, a Java package that addresses this problem and offers an architecture to access advanced events through Java. In this paper, we describe the features, architecture and limitations of MID. We also briefly describe an application that uses MID to get input from multiple mice: KidPad. Keywords Single Display Groupware (SDG), Computer-Supported Cooperative Work (CSCW), Multiple Input Devices (MID), Multi-Modal Input, Java, DirectInput, Windows 98, Universal Serial Bus (USB), KidPad, Jazz, Pad++. INTRODUCTION Communication, collaboration, and coordination are brought to many people's desktops thanks to groupware applications such as Lotus Notes and Microsoft Exchange, some of the leading commercial products in the field of Computer-Supported Cooperative Work (CSCW). They help people collaborate when they are not in the same place at th... | [
682,
2801
] | Train |
2,352 | 1 | Probabilistic Retrieval: New Insights and Experimental Results We present new insights on the relations between a recently introduced probabilistic formulation of the content-based retrieval problem and standard solutions. New experimental results are presented, providing evidence that probabilistic retrieval has superior performance. Finally, a unified representation for texture and color is introduced. 1 Introduction The problem of retrieving images or video from a database is naturally formulated as a problem of pattern recognition. Given a representation (or feature) space F for the entries in the database, the design of a retrieval system consists of finding a map g : F → M = {1, ..., K}, x ↦ y, from F to the set M of classes identified as useful for the retrieval operation. K, the cardinality of M, can be as large as the number of items in the database (in which case each item is a class by itself), or smaller. If the goal of the retrieval system is to minimize the probability of error, i.e. P(g(x) ≠ y), it is well known that the opt... | [
237
] | Train |
2,353 | 4 | An Introduction to Software Agents Abstraction and delegation: Agents can be made extensible and composable in ways that common iconic interface objects cannot. Because we can "communicate" with them, they can share our goals, rather than simply process our commands. They can show us how to do things and tell us what went wrong (Miller and Neches 1987). . Flexibility and opportunism: Because they can be instructed at the level of goals and strategies, agents can find ways to "work around" unforeseen problems and exploit new opportunities as they help solve problems. . Task orientation: Agents can be designed to take the context of the person's tasks and situation into account as they present information and take action. . Adaptivity: Agents can use learning algorithms to continually improve their behavior by noticing recurrent patterns of actions and events. Toward Agent-Enabled System Architectures In the future, assistant agents at the user interface and resource-managing agents behind the scenes will increas... | [
163,
1411,
1552,
1609,
1725,
2874
] | Validation |
2,354 | 1 | Tackling Multimodal Problems in Hybrid Genetic Algorithms A method is proposed to address the issue of multimodality while using hybrid genetic algorithms (GAs). The hybrid GA framework that is used is one in which a local searcher is employed during... | [
1592
] | Test |
2,355 | 0 | Rewriting Logic: Roadmap and Bibliography Machine [218]; (7) CCS and LOTOS [230,208,314,45,89,311,309,201]; (8) the calculus [316,292]; (9) concurrent objects and actors [218,220,300,302,304]; (10) the UNITY language [218]; (11) concurrent graph rewriting [223]; (12) dataflow [223]; (13) neural networks [223]; (14) real-time systems, including timed automata, timed transition systems, hybrid automata, and timed Petri nets [268,262]; and (15) the tile logic [146,147,135] model of synchronized concurrent computation [232,39,34,148]. | [
78,
1720,
2533
] | Test |
2,356 | 3 | Declarative Semantics Of Belief Queries In MLS Deductive Databases A logic based language, called MultiLog, for multi level secure relational databases has recently been proposed. It has been shown that MultiLog is capable of capturing the notion of user belief, of filtering unwanted and "useless" information in its proof theory. Additionally, it can guard against a previously unknown security breach - the so-called surprise stories. In this paper, we outline a possible approach to a declarative characterization of belief queries in MultiLog in a very informal manner. We show that for "simple programs" with belief queries, the semantics is rather straightforward. Semantics for the general Horn programs may be developed based on the understanding of the model theoretic characterization of belief queries developed in this paper. Keywords: Multi level security, belief queries, declarative semantics, completeness. Introduction In recent research, Jukic and Vrbsky [8] demonstrate that users in the relational MLS model potentially have a cluttered view... | [
2326
] | Test |
2,357 | 0 | "Plug And Test" - Software Agents In Virtual Environments James - A Java Based agent modeling environment for simulation has been developed to support the compositional construction of test beds for multi-agent systems and their execution in distributed environments. The modeling formalism of James imposes only a few constraints on the modeling of agents and facilitates a "plug and test" with pieces of agent code, which has been demonstrated in earlier work. However, even entire agents can be run in James as they are run in their run-time environment. The integration of agents as a whole is based on model templates which serve as the agents' interface and representative during the simulation run. The effort which is put into defining model templates for selected agent systems obviates the need for the single agent programmer to get acquainted with the underlying modeling and simulation formalism. Instead the agent programmer can compose the experimental frame and test the programmed agents as they are. The approach is illustrated with agents of the mobile agent system Mole. 1 | [
81,
770
] | Train |
2,358 | 0 | Using Self-Diagnosis to Adapt Organizational Structures The specific organization used by a multi-agent system is crucial for its effectiveness and efficiency. In dynamic environments, or when the objectives of the system shift, the organization must therefore be able to change as well. In this abstract we propose using a general diagnosis engine to drive this process of adaptation, using the TMS modeling language as the primary representation of organizational information. A complete version of this paper is at [1]. As the sizes of multi-agent systems grow in the number of their participants, the organization of those agents will be increasingly important. In such an environment, an organization is used to limit the range of control decisions agents must make, which is a necessary component of scalable systems. Are agents arranged in clusters, a hierarchy, a graph, or some other type of organization? Are the agents' activities or behaviors driven solely by local concerns, or do external peers or managers have direct influence as wel... | [
779,
1115,
3071
] | Test |
2,359 | 0 | The Adaptive Agent Architecture: Achieving Fault-Tolerance Using Persistent Broker Teams Brokers are used in many multi-agent systems for locating agents, for routing and sharing information, for managing the system, and for legal purposes, as independent third parties. However, these multi-agent systems can be incapacitated and rendered non-functional when the brokers become inaccessible due to failures such as machine crashes, network breakdowns, and process failures that can occur in any distributed software system. We propose that the theory of teamwork can be used to create robust brokered architectures that can recover from broker failures, and we present the Adaptive Agent Architecture (AAA) to show the feasibility of this approach. The AAA brokers form a team with a joint commitment to serve any agent that registers with the broker team as long as the agent remains registered with the team. This commitment enables the brokers to substitute for each other when needed. A multiagent system based on the AAA can continue to work despite broker failures as long... | [
438,
672,
1724,
2301,
2314,
2955
] | Validation |
2,360 | 3 | Path Constraints on Deterministic Graphs We study path constraints for the deterministic graph model [9], a variation of the semistructured data model in which data is represented as a rooted edge-labeled directed graph with deterministic edge relations. The path constraint languages considered include the class of word constraints introduced in [4], the language P_c investigated in [8], and an extension of P_c defined in terms of regular expressions. Complexity results on the implication and finite implication problems for these constraint languages are established. 1 Introduction Semistructured data is characterized as having no type constraints, irregular structure and rapidly evolving or missing schema [1, 6]. Examples of such data can be found on the World-Wide Web, in biological databases and after data integration. In particular, documents of XML (eXtensible Markup Language [5]) can also be viewed as semistructured data [10]. The unifying idea in modeling semistructured data is the representation of data as an edge-labeled, r... | [
242,
1600
] | Train |
2,361 | 5 | Foundations of Spatioterminological Reasoning with Description Logics This paper presents a method for reasoning about spatial objects and their qualitative spatial relationships. In contrast to existing work, which mainly focusses on reasoning about qualitative spatial relations alone, we integrate quantitative and qualitative information with terminological reasoning. For spatioterminological reasoning we present the description logic ALCRP(D) and define an appropriate concrete domain D for polygons. The theory is motivated as a basis for knowledge representation and query processing in the domain of deductive geographic information systems. 1 Introduction Qualitative relations play an important role in formal reasoning systems that can be part of, for instance, geographic information systems (GIS). In this context, inferences about spatial relations should not be considered in isolation but should be integrated with formal inferences about structural descriptions of domain objects (e.g. automatic consistency checking and classification) and infer... | [
143,
2077,
2318
] | Train |
2,362 | 1 | Improving the Wang and Mendel's Fuzzy Rule Learning Method by Inducing Cooperation Among Rules Nowadays, Linguistic Modeling (LM) is considered to be one of the most important areas of application for Fuzzy Logic. It is accomplished by descriptive Fuzzy Rule-Based Systems (FRBSs), whose most interesting feature is the interpolative reasoning they develop. This characteristic plays a key role in the high performance of FRBSs and is a consequence of the cooperation among the fuzzy rules involved in the FRBS. A large quantity of automatic techniques has been proposed to generate these fuzzy rules from numerical data. One of the most interesting families of techniques, due to its simplicity and quickness, is the ad hoc data-driven methods. However, their main drawback is that the cooperation among the rules is not suitably considered. To address this drawback, which prevents the obtained models from being as accurate as desired, a new approach that improves performance by obtaining more cooperative rules is introduced in this paper. Following this appro... | [
2002
] | Validation |
2,363 | 4 | Using Augmented Reality to Visualise Architecture Designs in an Outdoor Environment This paper presents the use of a wearable computer system to visualise outdoor architectural features using augmented reality. The paper examines the question - How does one visualise a design for a building, modification to a building, or extension to an existing building relative to its physical surroundings? The solution presented to this problem is to use a mobile augmented reality platform to visualise the design in spatial context of its final physical surroundings. The paper describes the mobile augmented reality platform TINMITH2 used in the investigation. The operation of the system is described through a detailed example of the system in operation. The system was used to visualise a simple extension to a building on one of the University of South Australia campuses. | [
1088
] | Test |
2,364 | 0 | BDI Agents: from Theory to Practice The study of computational agents capable of rational behaviour has received a great deal of attention in recent years. Theoretical formalizations of such agents and their implementations have proceeded in parallel with little or no connection between them. This paper explores a particular type of rational agent, a Belief-Desire-Intention (BDI) agent. The primary aim of this paper is to integrate (a) the theoretical foundations of BDI agents from both a quantitative decision-theoretic perspective and a symbolic reasoning perspective; (b) the implementations of BDI agents from an ideal theoretical perspective and a more practical perspective; and (c) the building of large-scale applications based on BDI agents. In particular, an air-traffic management application will be described from both a theoretical and an implementation perspective. Introduction The design of systems that are required to perform high-level management and control tasks in complex dynamic environments is becoming ... | [
28,
49,
93,
239,
321,
340,
476,
496,
566,
600,
615,
665,
701,
808,
851,
867,
881,
942,
1145,
1158,
1209,
1260,
1323,
1459,
1474,
1501,
1789,
1842,
1862,
1943,
2141,
2150,
2160,
2213,
2223,
2313,
2346,
2457,
2544,
2584,
2591,
2598,
2... | Test |
2,365 | 4 | Bridging Multiple User Interface Dimensions with Augmented Reality Studierstube is an experimental user interface system, which uses collaborative augmented reality to incorporate true 3D interaction into a productivity environment. This concept is extended to bridge multiple user interface dimensions by including multiple users, multiple host platforms, multiple display types, multiple concurrent applications, and a multi-context (i. e., 3D document) interface into a heterogeneous distributed environment. With this architecture, we can explore the user interface design space between pure augmented reality and the popular ubiquitous computing paradigm. We report on our design philosophy centered around the notion of contexts and locales, as well as the underlying software and hardware architecture. Contexts encapsulate a live application together with 3D (visual) and other data, while locales are used to organize geometric reference systems. By separating geometric relationships (locales) from semantic relationships (contexts), we achieve a great amou... | [
172,
468,
618,
1261,
1740,
2143
] | Train |
2,366 | 4 | Visualization Methods for Personal Photo Collections: Browsing and Searching in the PhotoFinder Software tools for personal photo collection management are proliferating, but they usually have limited searching and browsing functions. We implemented the PhotoFinder prototype to enable non-technical users of personal photo collections to search and browse easily. PhotoFinder provides a set of visual Boolean query interfaces, coupled with dynamic query and query preview features. It gives users powerful search capabilities. Using a scatter plot thumbnail display and drag-and-drop interface, PhotoFinder is designed to be easy to use for searching and browsing photos. Keywords : PhotoFinder, user interface, dynamic query, query preview, search, browsing, Boolean query, digital photo library. 1. INTRODUCTION Digital cameras, scanners and personal computers are now common. But as collections grow in size, the need to organize, search, and browse digital photos increases [1]. There are many personal photo collection management tools available either commercially or non-commercially. ... | [
2504
] | Test |
2,367 | 2 | DAG Matching Techniques for Information Retrieval on Structured Documents With the establishment of international standards for document representation like SGML, ODA, or XML, attention in Information Retrieval has shifted to representation models and query languages that make active use both of the logical structure and the contents of the documents in a document database. At the same time, representation of structure has become more and more important in other types of databases as well. Among several related approaches, Kilpeläinen's Tree Matching is one of the most expressive and intuitive formalisms for querying databases with tree-structured entities. However, in its original formulation it leaves aside most of the problems that arise in real-life applications of Information Retrieval. In this paper we extend Tree Matching to DAG Matching and suggest various techniques that should be useful when using the formalism in a practical IR system. In particular we suggest a representation of answers that can cope with the potentially huge number of entities in... | [
2329,
3027
] | Validation |
2,368 | 3 | TIP: A Temporal Extension to Informix Commercial relational database systems today provide only limited temporal support. To address the needs of applications requiring rich temporal data and queries, we have built TIP (Temporal Information Processor), a temporal extension to the Informix database system based on its DataBlade technology. Our TIP DataBlade extends Informix with a rich set of datatypes and routines that facilitate temporal modeling and querying. TIP provides both C and Java libraries for client applications to access a TIPenabled database, and provides end-users with a GUI interface for querying and browsing temporal data. 1 Introduction Our research in temporal data warehouses [9, 10] has led us to require a relational database system with full SQL as well as rich temporal support, in order to experiment with our temporal view-maintenance techniques. Most commercial relational database systems support only a DATE type (or its variants). An attribute of type DATE can be used to timestamp a tuple with... | [
10
] | Train |
2,369 | 0 | Cross-Entropy Guided Ant-like Agents Finding Cyclic Paths in Scarcely Meshed Networks Telecommunication network owners and operators have for half a century been well aware of the potential loss of revenue if a major trunk is damaged, thus dependability at high cost has been implemented. A simple, effective and common dependability scheme is 1:1 protection with 100% capacity redundancy in the network. A growing number of applications in need of dependable connections with specific requirements to bandwidth and delay have started using the internet (which only provides best effort transport) as their base communication service. In this paper we adopt the 1:1 protection scheme and incorporate it as part of a routing system applicable for internet infrastructures. 100% capacity redundancy is no longer required. A distributed stochastic path finding (routing) algorithm based on Rubinstein's Cross-Entropy method for combinatorial optimisation is presented. Early results from Monte Carlo simulations indeed indicate that the algorithm is capable of finding pairs of independent primary and backup paths satisfying specific bandwidth constraints. | [
86,
234,
1910
] | Train |
2,370 | 3 | A Formal Approach to Detecting Security Flaws in Object-Oriented Databases this paper is to show an efficient decision algorithm for detecting a security flaw under a given authorization. This problem is solvable in polynomial time in practical cases by reducing it to the congruence closure problem. This paper also mentions the problem of finding a maximal subset of a given authorization under which no security flaw exists. | [
389
] | Train |
2,371 | 1 | Building Domain-Specific Search Engines with Machine Learning Techniques Domain-specific search engines are becoming increasingly popular because they offer increased accuracy and extra features not possible with the general, Web-wide search engines. For example, www.campsearch.com allows complex queries by agegroup, size, location and cost over summer camps. Unfortunately, these domain-specific search engines are difficult and time consuming to maintain. This paper proposes the use of machine learning techniques to greatly automate the creation and maintenance of domain-specific search engines. We describe new research in reinforcement learning, text classification and information extraction that automates efficient spidering, populating topic hierarchies, and identifying informative text segments. Using these techniques, we have built a demonstration system: a search engine for computer science research papers. It already contains over 33,000 papers and is publicly available at www.cora.jprc.com. 1 Introduction As the amount of information on the World ... | [
323,
514,
603,
759,
836,
1386,
1843,
1987,
2471,
2595
] | Train |
2,372 | 2 | Breadth-First Search Crawling Yields High-Quality Pages This paper examines the average page quality over time of pages downloaded during a web crawl of 328 million unique pages. We use the connectivity-based metric PageRank to measure the quality of a page. We show that traversing the web graph in breadth-first search order is a good crawling strategy, as it tends to discover high-quality pages early on in the crawl. | [
2,
116,
507,
1512,
1750,
1815,
2471,
2503
] | Train |
2,373 | 4 | BUILD-IT: a computer vision-based interaction technique of a planning tool for construction and design It is time to go beyond the established approaches in human-computer interaction. With the Augmented Reality (AR) design strategy humans are able to behave as much as possible in a natural way: the behavior of humans in the real world with other humans and/or real world objects. Following the fundamental constraints of a natural way of interacting we derive a set of recommendations for the next generation of user interfaces: the Natural User Interface (NUI). The concept of NUI is presented in the form of a runnable demonstrator: a computer vision-based interaction technique for a planning tool for construction and design tasks. KEYWORDS augmented reality, natural user interface, computer vision-based interaction 1. | [
2290
] | Train |
2,374 | 0 | Agent-Oriented Programming in Linear Logic This thesis investigates how a linear logic programming language, such as Lygon, can be used in the implementation of agent-oriented programs. Agent-oriented programming is a recent computational framework of interest to both academic and industrial researchers. Agent methodology is being successfully utilised in designing complex (distributed) applications that require concurrency, reasoning, communication, sharing and integration of knowledge, and, of course, intelligence. On the other hand, linear logic, a logic of resource-consumption, provides the possibility to construct efficient tools for modelling updates, reasoning about the environment and implementing concurrency. Linear logic has been used as a basis for creating a number of programming languages. One of these is the logic programming language Lygon. The aim of this thesis is to investigate the possibility of implementing agents with Lygon. A number of experiments have been carried out and results analysed, which... | [
400,
1049
] | Test |
2,375 | 0 | Generating Code for Agent UML Sequence Diagrams For several years, a new category of description techniques has existed: Agent UML [10], which is based on UML. Agent UML is an extension of UML to tackle differences between agents and objects. Since this description technique is rather new, it does not supply tools or algorithms for protocol synthesis. Protocol synthesis corresponds to generating code from a formal description of a protocol: the derived program behaves like the formal description. This work presents first elements to help designers generate code for Agent UML sequence diagrams. The protocol synthesis is applied to the example of the English Auction protocol. | [
567
] | Train |
2,376 | 4 | Real-Time Input of 3D Pose and Gestures of a User's Hand and Its Applications for HCI In this paper, we introduce a method for tracking a user's hand in 3D and recognizing the hand's gesture in real-time without the use of any invasive devices attached to the hand. Our method uses multiple cameras for determining the position and orientation of a user's hand moving freely in a 3D space. In addition, the method identifies predetermined gestures in a fast and robust manner by using a neural network which has been properly trained beforehand. This paper also describes results of a user study of our proposed method and its application for several types of applications, including 3D object handling for a desktop system and 3D walk-through for a large immersive display system. 1. | [
1454,
1472
] | Train |
2,377 | 3 | Probabilistic Temporal Databases, I: Algebra Dyreson and Snodgrass have drawn attention to the fact that in many temporal database applications, there is often uncertainty present about the start time of events, the end time of events, the duration of events, etc. When the granularity of time is small (e.g. milliseconds), a statement such as "Packet p was shipped sometime during the first 5 days of January, 1998" leads to a massive amount of uncertainty: (5 × 24 × 60 × 60 × 1000) possibilities. As noted in [53], past attempts to deal with uncertainty in databases have been restricted to relatively small amounts of uncertainty in attributes. Dyreson and Snodgrass have taken an important first step towards solving this problem. In this paper, we first introduce the syntax of Temporal-Probabilistic (TP) relations and then show how they can be converted to an explicit, significantly more space-consuming form called Annotated Relations. We then present a Theoretical Annotated Temporal Algebra (TATA). Being e... | [
1550
] | Train |
2,378 | 1 | An Overview of Some Recent Developments in Bayesian Problem Solving Techniques The last five years have seen a surge in interest in the use of techniques from Bayesian decision theory to address problems in AI. Decision theory provides a normative framework for representing and reasoning about decision problems under uncertainty. Within the context of this framework, researchers in uncertainty in the AI community have been developing computational techniques for building rational agents and representations suited to engineering their knowledge bases. This special issue reviews recent research in Bayesian problem-solving techniques. The articles cover the topics of inference in Bayesian networks, decision-theoretic planning, and qualitative decision theory. Here, I provide a brief introduction to Bayesian networks and then cover applications of Bayesian problem-solving techniques, knowledge-based model construction and structured representations, and the learning of graphical probability models. The past five years or so have seen increased interest and tremendous... | [
1314
] | Test |
2,379 | 2 | Mostly-Unsupervised Statistical Segmentation of Japanese Kanji Sequences Given the lack of word delimiters in written Japanese, word segmentation is generally considered a crucial first step in processing Japanese texts. Typical Japanese segmentation algorithms rely either on a lexicon and syntactic analysis or on pre-segmented data; but these are labor-intensive, and the lexico-syntactic techniques are vulnerable to the unknown word problem. In contrast, we introduce a novel, more robust statistical method utilizing unsegmented training data. Despite its simplicity, the algorithm yields performance on long kanji sequences comparable to and sometimes surpassing that of state-of-the-art morphological analyzers over a variety of error metrics. The algorithm also outperforms another mostly-unsupervised statistical algorithm previously proposed for Chinese. Additionally, we present a two-level annotation scheme for Japanese to incorporate multiple segmentation granularities, and introduce two novel evaluation metrics, both based on the notion of a compatible bracket, that can account for multiple granularities simultaneously. | [
855
] | Validation |