node_id | label | text | neighbors | mask
|---|---|---|---|---|
2,380 | 0 | Autonomous Bidding Agents in the Trading Agent Competition Designing agents that can bid in online simultaneous auctions is a complex task. The authors describe task-specific details and strategies of agents in a trading agent competition. A natural offshoot of the growing prevalence of online auctions is the creation of autonomous bidding agents that monitor and participate in these auctions. It is straightforward to write a bidding agent to participate in an online auction for a single good, particularly when the value of that good is fixed ahead of time: the agent can bid slightly over the ask price until the auction closes or the price exceeds the value. In simultaneous auctions offering complementary and substitutable goods, however, agent deployment is a much more complex endeavor. The first trading agent competition (TAC), held in Boston, Massachusetts, on 8 July 2000, challenged participants to design a trading agent capable of bidding in online simultaneous auctions for complementary and substitutable goods. TAC was organized by a group of researchers and developers led by Michael Wellman of | [
357
] | Train |
2,381 | 3 | From Databases to Information Systems - Information Quality Makes the Difference Research and business are currently moving from centralized databases towards information systems integrating distributed and autonomous data sources. Simultaneously, it is a well-acknowledged fact that consideration of information quality - IQ-reasoning - is an important issue for large-scale integrated information systems. We show that IQ-reasoning can be the driving force of the current shift from databases to integrated information systems. In this paper, we explore the implications and consequences of this shift. All areas of answering user queries are affected - from user input, to query planning and query optimization, and finally to building the query result. The application of IQ-reasoning brings both challenges, such as new cost models for optimization, and opportunities, such as improved query planning. We highlight several emerging aspects and suggest solutions toward a pervasion of information quality in information systems. | [
1242,
1252,
2163,
3157
] | Train |
2,382 | 3 | Design and Implementation of a Deductive Query Language for ODMG Compliant Object Databases Introduction Deductive object-oriented databases (DOODs) seek to provide the combined support for the expressive modelling features available in the object-oriented data model and the powerful query language features available in deductive databases. When successfully engineered, this combination can broaden the spectrum of declarative queries that can be supported by the DBMS, and ease their implementation due to the increased functionality in the query capabilities of the resulting system. The extra leverage obtained from support for deductive functionality is relevant to building database middleware for distributed information systems [19], managing semistructured data [11], and for building decision support and knowledge discovery systems [4]. Unlike deductive relational database systems (DRDBs), which were designed and implemented based on the formal definition of the relational data model by Codd, and on the widely researched deductive query language model (language cons | [
2212
] | Test |
2,383 | 3 | The S²-Tree: An Index Structure for Subsequence Matching of Spatial Objects We present the S²-Tree, an indexing method for subsequence matching of spatial objects. The S²-Tree locates subsequences within a collection of spatial sequences, i.e., sequences made up of spatial objects, such that the subsequences match a given query pattern within a specified tolerance. Our method is based on (i) the string-searching techniques that locate substrings within a string of symbols drawn from a discrete alphabet (e.g., ASCII characters) and (ii) the spatial access methods that index (unsequenced) spatial objects. Particularly, the S²-Tree can be applied to solve problems such as subsequence matching of time-series data, where features of subsequences are often extracted and mapped into spatial objects. Moreover, it supports queries such as "what is the longest common pattern of the two time series?", which previous subsequence matching algorithms find difficult to solve efficiently. | [
1356
] | Train |
2,384 | 1 | Goal Directed Adaptive Behavior in Second-Order Neural Networks: The MAXSON family of architectures The paper presents a neural network architecture (MAXSON) based on second-order connections that can learn a multiple goal approach/avoid task using reinforcement from the environment. It also enables an agent to learn vicariously, from the successes and failures of other agents. The paper shows that MAXSON can learn certain spatial navigation tasks much faster than traditional Q-learning, as well as learn goal directed behavior, increasing the agent's chances of long-term survival. The paper shows that an extension of MAXSON (V-MAXSON) enables agents to learn vicariously, and this improves the overall survivability of the agent population. | [
129,
1324,
1494
] | Train |
2,385 | 3 | View Disassembly. We explore a new form of view rewrite called view disassembly. The objective is to rewrite views in order to "remove" certain sub-views (or unfoldings) of the view. This becomes pertinent for complex views which may be defined over other views and which may involve union. Such complex views arise necessarily in environments such as data warehousing and mediation over heterogeneous databases. View disassembly can be used for view and query optimization, preserving data security, making use of cached queries and materialized views, and view maintenance. We provide computational complexity results for view disassembly. We show that computing optimal rewrites for disassembled views is NP-hard. However, we provide good news too: an approximation algorithm with much better run-time behavior. We show a pertinent class of unfoldings whose removal always results in a simpler disassembled view than the view itself. We also show the complexity of determining when a collection... | [
1670
] | Train |
2,386 | 2 | CS 395T Large-Scale Data Mining Fall 2001 ...projects the vectors x_1, ..., x_n onto the principal components. Note that the eigenvectors of the sample covariance matrix are the left singular vectors of the matrix (1/sqrt(n-1)) [x_1, x_2, ..., x_n]. Thus PCA can be obtained from the SVD of mean-centered data. The mean centering is the important difference between PCA and SVD and can yield qualitatively different results for data sets where the mean is not equal to 0, as shown in Figure 1 (the leading singular vector may not be in the same direction as the principal component). 2 Clustering Considerations Clustering is the grouping together of similar objects. Usually, the clustering problem is posed as a | [
1201,
1959
] | Train |
2,387 | 3 | Discovery-driven Exploration of OLAP Data Cubes. Analysts predominantly use OLAP data cubes to identify regions of anomalies that may represent problem areas or new opportunities. Current OLAP systems support hypothesis-driven exploration of data cubes through operations such as drill-down, roll-up, and selection. Using these operations, an analyst navigates unaided through a huge search space, looking at a large number of values to spot exceptions. We propose a new discovery-driven exploration paradigm that mines the data for such exceptions and summarizes the exceptions at appropriate levels in advance. It then uses these exceptions to lead the analyst to interesting regions of the cube during navigation. We present the statistical foundation underlying our approach. We then discuss the computational issue of finding exceptions in the data and making the process efficient on large multidimensional databases. 1 Introduction On-Line Analytical Processing (OLAP) characterizes the operations of summarizing, consolidating, viewing, a... | [
182,
3138
] | Train |
2,388 | 5 | Layered Learning in Genetic Programming for a Cooperative Robot Soccer Problem We present an alternative to standard genetic programming (GP) that applies layered learning techniques to decompose a problem. GP is applied to subproblems sequentially, where the population in the last generation of a subproblem is used as the initial generation of the next subproblem. This method is used to evolve agents to play keep-away soccer, a subproblem of robotic soccer that requires cooperation among multiple agents in a dynamic environment. The layered learning paradigm allows GP to evolve better solutions faster than standard GP. Results show that layered learning GP outperforms standard GP, reaching a given fitness level faster and achieving a better overall fitness. Results indicate a wide area of future research with layered learning in GP. | [
1350,
2614
] | Train |
2,389 | 1 | Support Vector Machine - Reference Manual This document describes these programs. To find out more about SVMs, see the bibliography. We will not describe how SVMs work here. The first program we will describe is the paragen program, as it specifies all parameters needed for the SVM. 3 paragen When using the support vector machine for any given task, it is always necessary to specify a set of parameters. These parameters include information such as whether you are interested in pattern recognition or regression estimation, what kernel you are using, what scaling is to be done on the data, etc. paragen generates the parameter files used by the SVM program; if no file has been generated, the user will be asked for the parameters interactively. | [
63,
305,
1728
] | Validation |
2,390 | 0 | Locating Objects in Mobile Computing In current distributed systems, the notion of mobility is emerging in many forms and applications. Mobility arises naturally in wireless computing, since the location of users changes as they move. Besides mobility in wireless computing, software mobile agents are another popular form of moving objects. Locating objects, i.e., identifying their current location, is central to mobile computing. In this paper, we present a comprehensive survey of the various approaches to the problem of storing, querying, and updating the location of objects in mobile computing. The fundamental techniques underlying the proposed approaches are identified, analyzed and classified along various dimensions. Keywords: mobile computing, location management, location databases, caching, replication, moving objects, spatio-temporal databases 1 Introduction In current distributed systems, the notion of mobility is emerging in many forms and applications. Increasingly many users are not tied to a fixed... | [
3169
] | Test |
2,391 | 2 | Mining Semi-Structured Data by Path Expressions A new data model for filtering semi-structured texts is presented. | [
439,
815
] | Validation |
2,392 | 2 | Results and Challenges in Web Search Evaluation A frozen 18.5 million page snapshot of part of the Web has been created to enable and encourage meaningful and reproducible evaluation of Web search systems and techniques. This collection is being used in an evaluation framework within the Text Retrieval Conference (TREC) and will hopefully provide convincing answers to questions such as, "Can link information result in better rankings?", "Do longer queries result in better answers?", and "Do TREC systems work well on Web data?" The snapshot and associated evaluation methods are described and an invitation is extended to participate. Preliminary results are presented for an effectiveness comparison of six TREC systems working on the snapshot collection against five well-known Web search systems working over the current Web. These suggest that the standard of document rankings produced by public Web search engines is by no means state-of-the-art. 1999 Published by Elsevier Science B.V. All rights reserved. Keywords: Evaluation; Search... | [
488,
885,
1432,
1529,
1690,
1999,
2304,
2503,
2532,
2867
] | Test |
2,393 | 2 | Machine Learning based User Modeling for WWW Search The World Wide Web (WWW) offers a huge number of documents dealing with information on nearly any topic. Thus, search engines and meta search engines are currently the key to finding information. Search engines with crawler-based indexes vary in recall and offer very poor precision. Meta search engines try to overcome these shortcomings with simple methods for information extraction, information filtering and integration of heterogeneous information resources. Only a few search engines employ intelligent techniques in order to increase precision. Recently, user modeling techniques have become more and more popular, since they have proved to be a useful means for user-centered information filtering and presentation. Many personal agent-based systems for web browsing are currently being developed. It is a straightforward idea to incorporate machine learning based user modeling methods (see [17]) into web search services. We propose an abstract prototype which is being developed at the Uni... | [
1510,
3035
] | Train |
2,394 | 1 | Selecting a Fuzzy Logic Operation from the DNF-CNF Interval: How Practical Are the Resulting Operations? In classical (two-valued) logic, CNF and DNF forms of each propositional formula are equivalent to each other. In fuzzy logic, CNF and DNF forms are not equivalent, they form an interval that contains the fuzzy values of all classically equivalent propositional formulas. If we want to select a single value from this interval, then it is natural to select a linear combination of the interval's endpoints. In particular, we can do that for CNF and DNF forms of "and" and "or", thus designing natural fuzzy analogues of classical "and" and "or" operations. | [
1704
] | Validation |
2,395 | 1 | Extended Experimental Explorations Of The Necessity Of User Guidance In Case-Based Learning This is an extended report focussing on experimental results that explore the necessity of user guidance in case-based knowledge acquisition; it also covers a collection of theoretical investigations. The methodology of our approach is quite simple: We choose a well-understood area which is tailored to case-based knowledge acquisition. Furthermore, we choose a prototypical case-based learning algorithm which is obviously suitable for the problem domain under consideration. Then, we perform a number of knowledge acquisition experiments. They clearly exhibit essential limitations of knowledge acquisition from randomly chosen cases. As a consequence, we develop scenarios of user guidance. Based on these theoretical concepts, we prove a few theoretical results characterizing the power of our approach. Next, we perform a new series of more constrained experiments which support our theoretical investigations. The main experiments deal with the difficulties of learning from randomly arrange... | [
100,
967,
1844,
2171,
2882,
2890,
3160,
3176
] | Train |
2,396 | 1 | The Self-Organizing Map in Industry Analysis The Self-Organizing Map (SOM) is a powerful neural network method for the analysis and visualization of high-dimensional data. It maps nonlinear statistical relationships between high-dimensional measurement data into simple geometric relationships, usually on a two-dimensional grid. The mapping roughly preserves the most important topological and metric relationships of the original data elements and, thus, inherently clusters the data. The need for visualization and clustering occurs, for instance, in the data analysis of complex processes or systems. In various engineering applications, entire fields of industry can be investigated using SOM based methods. The data exploration tool presented in this chapter allows visualization and analysis of large data bases of industrial systems. Forest industry is the first chosen application for the tool. To illustrate the global nature of forest industry, the example case is used to cluster the pulp and paper mills of the world. | [
2287
] | Train |
2,397 | 0 | Network Processing of Mobile Agents, by Mobile Agents, for Mobile Agents. This paper presents a framework for building network protocols for migrating mobile agents over a network. The framework allows network protocols for agent migration to be naturally implemented within mobile agents and to be constructed in a hierarchy, as most data transmission protocols are. These protocols are given as mobile agents and they can transmit other mobile agents to remote hosts as first-class objects. Since they can be dynamically deployed at remote hosts by migrating the agents that carry them, these protocols can dynamically and flexibly customize network processing for agent migration according to the requirements of respective visiting agents and changes in the environments. A prototype implementation was built on a Java-based mobile agent system, and several practical protocols for agent migration were designed and implemented. The framework can make major contributions to mobile agent technology for telecommunication systems. | [
2401,
2964
] | Train |
2,398 | 0 | Coordinating Human and Computer Agents . In many application areas individuals are responsible for an agenda of tasks and face choices about the best way to locally handle each task, in what order to do tasks, and when to do them. Such decisions are often hard to make because of coordination problems: individual tasks are related to the tasks of others in complex ways, and there are many sources of uncertainty (no one has a complete view of the task structure at arbitrary levels of detail, the situation may be changing dynamically, and no one is entirely sure of the outcomes of all of their actions). The focus of this paper is the development of support tools for distributed, cooperative work by groups (collaborative teams) of human and computational agents. We will discuss the design of a set of distributed autonomous computer programs ("agents") that assist people in coordinating their activities by helping them to manage their agendas. We describe several ongoing implementations of these ideas including 1) simulated agen... | [
1107
] | Train |
2,399 | 3 | Tertiary Storage Organization for Large Multidimensional Datasets | [
651
] | Validation |
2,400 | 0 | Reasoning About Intentions in Uncertain Domains The design of autonomous agents that are situated in real world domains involves dealing with uncertainty in terms of dynamism, observability and non-determinism. These three types of uncertainty, when combined with the real-time requirements of many application domains, imply that an agent must be capable of effectively coordinating its reasoning. As such, situated belief-desire-intention (BDI) agents need an efficient intention reconsideration policy, which defines when computational resources are spent on reasoning, i.e., deliberating over intentions, and when resources are better spent on either object-level reasoning or action. This paper presents an implementation of such a policy by modelling intention reconsideration as a partially observable Markov decision process (POMDP). The motivation for a POMDP implementation of intention reconsideration is that the two processes have similar properties and functions, as we demonstrate in this paper. Our approach achieves better results than existing intention reconsideration frameworks, as is demonstrated empirically in this paper. | [
1662,
2063,
3128
] | Validation |
2,401 | 0 | Flying Emulator: Rapid Building and Testing of Networked Applications for Mobile Computers This paper presents a mobile-agent framework for building and testing mobile computing applications. When a portable computing device is moved into and attached to a new network, the proper functioning of an application running on the device often depends on the resources and services provided locally in the current network. To solve this problem, this framework provides an application-level emulator of portable computing devices. Since the emulator is constructed as a mobile agent, it can carry target applications across networks on behalf of a device, and it allows the applications to connect to local servers in its current network in the same way as if they were moved with and executed on the device itself. This paper also demonstrates the utility of this framework by describing the development of typical location-dependent applications in mobile computing settings. | [
1035,
2397,
2964
] | Train |
2,402 | 5 | A Planning Algorithm not based on Directional Search The initiative in STRIPS planning has recently been taken by work on propositional satisfiability. Best current planners, like Graphplan, and earlier planners originating in the partial-order or refinement planning community have proved in many cases to be inferior to general-purpose satisfiability algorithms in solving planning problems. However, no explanation of the success of programs like Walksat or relsat in planning has been offered. In this paper we discuss a simple planning algorithm that reconstructs the planner in the background of the SAT/CSP approach. 1 INTRODUCTION Many of the recent interesting results in AI planning did not originate in traditional planning research, but in work on algorithms for checking the satisfiability of propositional formulae. STRIPS planning problems have been used as benchmarks to test SAT algorithms based on greedy local search [Kautz and Selman, 1992; Kautz and Selman, 1996] , and new developments [Bayardo, Jr. and Schrag, 1997] of the well-... | [
770
] | Train |
2,403 | 4 | Adaptive Navigation Support in Educational Hypermedia: the Role of Student Knowledge Level and the Case for Meta-Adaptation This paper provides a brief overview of the main adaptive navigation support techniques and analyzes the results of the most representative empirical studies of these techniques. It presents evidence that different known techniques work most efficiently in different contexts. In particular, the studies summarized in the paper have provided evidence that users with different levels of knowledge of the subject may appreciate different adaptive navigation support technologies. The paper argues that more empirical studies are required to help the developers of adaptive hypermedia systems select the most relevant adaptation technologies. It also attempts to build a case for meta-adaptive hypermedia systems, i.e., systems that are able to adapt the very adaptation technology to the given user and context. | [
1387
] | Train |
2,404 | 4 | Interactive Music for Instrumented Dancing Shoes We have designed and built a pair of sneakers that each sense 16 different tactile and free-gesture parameters. These include continuous pressure at 3 points in the forward sole, dynamic pressure at the heel, bidirectional bend of the sole, height above instrumented portions of the floor, 3-axis orientation about the Earth's magnetic field, 2-axis gravitational tilt and low-G acceleration, 3-axis shock, angular rate about the vertical, and translational position via a sonar transponder. Both shoes transfer these parameters to a base station across an RF link with state updates at 50 Hz. As they are powered by a local battery, there are no tethers or wires running off the shoe. A PC monitors the data streaming off both shoes and translates it into real-time interactive music. The shoe design is introduced, and the interactive music mappings that we have developed for dance performances are discussed. 1) Introduction A trained dancer is capable of expressing highly dexterous control... | [
69,
1052,
1220
] | Train |
2,405 | 4 | Signer-independent Continuous Sign Language Recognition Based on SRN/HMM A divide-and-conquer approach is presented for signer-independent continuous Chinese Sign Language (CSL) recognition in this paper. The problem of continuous CSL recognition is divided into subproblems of isolated CSL recognition. We combine the simple recurrent network (SRN) with hidden Markov models (HMM) in this approach. The improved SRN is introduced for segmentation of continuous CSL. Outputs of the SRN are regarded as the states of the HMM, and the Lattice Viterbi algorithm is employed to search for the best word sequence in the HMM framework. Experimental results show the SRN/HMM approach has better performance than the standard HMM one. | [
1383
] | Train |
2,406 | 5 | Incremental Recompilation of Knowledge Approximating a general formula from above and below by Horn formulas (its Horn envelope and Horn core, respectively) was proposed in [22] as a form of "knowledge compilation," supporting rapid approximate reasoning; on the negative side, this scheme is static in that it supports no updates, and has certain complexity drawbacks pointed out in [17]. On the other hand, the many frameworks and schemes proposed in the literature for theory update and revision are plagued by serious complexity-theoretic impediments, even in the Horn case, as was pointed out in [6], and is further demonstrated in the present paper. More fundamentally, these schemes are not inductive, in that they may lose in a single update any positive properties of the represented sets of formulas (small size, Horn structure, etc.). In this paper we propose a new scheme, incremental recompilation, which combines Horn approximation and model-based updates; this scheme is inductive and very efficient, free of... | [
458,
1010,
1399,
2078
] | Train |
2,407 | 4 | Domain-Specific Informative and Indicative Summarization for Information Retrieval In this paper, we propose the use of multidocument summarization as a post-processing step in document retrieval. We examine the use of the summary as a replacement for the standard ranked list. The form of the summary is novel because it has both informative and indicative elements, designed to help different users perform their tasks better. Our summary uses the documents' topical structure as a backbone for its own structure, as it was deemed the most useful document feature in our study of a corpus of summaries. | [
1152
] | Validation |
2,408 | 0 | Supporting Conflict Management in Cooperative Design Teams The design of complex artifacts has increasingly become a cooperative process, with the detection and resolution of conflicts between design agents playing a central role. Effective tools for supporting the conflict management process, however, are still lacking. This paper describes a system called DCSS (the Design Collaboration Support System) developed to meet this challenge in design teams with both human and machine-based agents. Every design agent is provided with an "assistant" that provides domain-independent conflict detection, classification and resolution expertise. The design agents provide the domainspecific expertise needed to instantiate this general expertise, including the rationale for their actions, as a part of their design activities. DCSS has been used successfully to support the cooperative design of Local Area Networks by human and machine-based designers. This paper includes a description of DCSS's underlying model and implementation, examples of its operation... | [
286,
2314
] | Train |
2,409 | 0 | A Reactive-Deliberative Model of Dialogue Agency . For an agent to engage in substantive dialogues with other agents, there are several complexities which go beyond the scope of standard models of rational agency. In particular, an agent must reason about social attitudes that span more than one agent, as well as the dynamic and fallible process of plan execution. In this paper we sketch a theory of plan execution which allows the representation of failure and repair, extend the underlying agency model with social attitudes of mutual belief, obligation, and multi-agent plan execution, and describe an implemented dialogue agent which uses these notions, reacting to its environment and mental state, and deliberating and planning action only when more pressing concerns are absent. 1 Overview For autonomous agents that operate in a realm of heterogeneous agents (including human agents), an agent theory should allow many of the features of natural language dialogue. The agent communication protocols should allow flexible turn-taking and ... | [
395
] | Test |
2,410 | 1 | Multi-modal Identity Verification using Support Vector Machines (SVM) The contribution of this paper is twofold: (1) to formulate a decision fusion problem encountered in the design of a multi-modal identity verification system as a particular classification problem, and (2) to propose solving this problem with a Support Vector Machine (SVM). The multi-modal identity verification system under consideration is built of d modalities in parallel, each one delivering as output a scalar number, called a score, stating how well the claimed identity is verified. A fusion module receiving the d scores as input has to take a binary decision: accept or reject the identity. This fusion problem has been solved using Support Vector Machines. The performance of this fusion module has been evaluated and compared with other proposed methods on a multimodal database containing both vocal and visual modalities. Keywords: Decision Fusion, Support Vector Machine, Multi-Modal Identity Verification. 1 Introduction Automatic identification/verification is rapidly becoming an importa... | [
832
] | Train |
2,411 | 1 | A Neural Network Diagnosis Model without Disorder Independence Assumption. Generally, the disorders in a neural network diagnosis model are assumed to be independent of each other. In this paper, we propose a neural network model for diagnostic problem solving where the disorder independence assumption is no longer necessary. Firstly, we characterize the diagnostic tasks and the causal network which is used to represent the diagnostic problem; then we describe the neural network diagnosis model; finally, some experimental results are given. 1 Introduction Finding explanations for a given set of events is an important aspect of general intelligent behaviour. The process of finding the best explanation was defined as abduction by the philosopher C. S. Peirce [8]. Diagnosis is a typical abductive problem. For a set of manifestations (observations), the diagnostic inference is to find the most plausible faults or disorders which can explain the manifestations observed. In general, an individual fault or disorder can explain only a portion of the manifestations. Ther... | [
694
] | Train |
2,412 | 1 | Soft Computing and Fault Management Soft computing is a partnership between A.I. techniques that are tolerant of imprecision, uncertainty and partial truth, with the aim of obtaining a robust solution for complex systems. Telecommunication systems are built with extensive redundancy and complexity to ensure robustness and quality of service. Supporting this requires complex fault identification and management systems. Fault identification and management is generally handled by reducing the amount of alarm events (symptoms) presented to the operating engineer through monitoring, filtering and masking. The ultimate goal is to determine and present the actual underlying fault. Fault management is a complex task subject to uncertainty in the 'symptoms' presented, and as such is ideal for treatment by soft computing techniques. The aim of this paper is to present a soft computing approach to fault management in telecommunication systems. Two key approaches are considered: AI and soft computing rule discovery, with techniques that attempt to present fewer symptoms with greater diagnostic assistance for the more traditional rule-based system approach; and a hybrid soft computing approach which utilises a genetic algorithm to learn Bayesian belief networks (BBNs). It is also highlighted that research and development of the two target Fault Management Systems are complementary. Keywords: Network management, fault management, knowledge discovery, Bayesian belief networks, genetic algorithms, soft computing. | [
3002
] | Train |
2,413 | 3 | Efficient Goal Directed Bottom-up Evaluation of Logic Programs This paper introduces a new strategy for the efficient goal directed bottom-up evaluation of logic programs. Instead of combining a standard bottom-up evaluation strategy with a Magic-set transformation, the evaluation strategy is specialized for the application to Magic-set programs which are characterized by clause bodies with a high degree of overlapping. The approach is similar to other techniques which avoid re-computation by maintaining and reusing partial solutions to clause bodies. However, the overhead is considerably reduced as these are maintained implicitly by the underlying Prolog implementation. The technique is presented as a simple meta-interpreter for goal directed bottom-up evaluation. No Magic-set transformation is involved as the dependencies between calls and answers are expressed directly within the interpreter. The proposed technique has been implemented and shown to provide substantial speed-ups in applications of semantic based program analysis based on bottom-up... | [
1281,
2811
] | Train |
2,414 | 3 | Combining Inductive and Deductive Inference in Knowledge Management Tasks This paper indicates how different logic programming technologies can underpin an architecture for distributed knowledge management in which higher throughput in information supply is achieved by a (semi-)automated solution to the more challenging problem of knowledge creation. The paper first proposes working definitions of the notions of data, knowledge and information in purely logical terms, and then shows how existing technologies can be combined into an inference engine, referred to as a knowledge, information and data engine (KIDE), integrating inductive and deductive capabilities. The paper then briefly introduces the notion of virtual organizations and uses the set-up stage of virtual organizations to exemplify the value-adding potential of KIDEs in knowledge management contexts. | [
24
] | Validation |
2,415 | 5 | Towards Text Knowledge Engineering We introduce a methodology for automating the maintenance of domain-specific taxonomies based on natural language text understanding. A given ontology is incrementally updated as new concepts are acquired from real-world texts. The acquisition process is centered around the linguistic and conceptual "quality" of various forms of evidence underlying the generation and refinement of concept hypotheses. On the basis of the quality of evidence, concept hypotheses are ranked according to credibility and the most credible ones are selected for assimilation into the domain knowledge base. Appeared in: AAAI'98 - Proceedings of the 15th National Conference on Artificial Intelligence, July 26-30, 1998, Madison, Wisconsin (forthcoming) Towards Text Knowledge Engineering Udo Hahn & Klemens Schnattinger Computational Linguistics Group Text Knowledge Engineering Lab Freiburg University Werthmannplatz 1, D-79085 Freiburg, Germany http://www.coling.uni-freiburg.de Abstract We introduce a me... | [
2555
] | Train |
2,416 | 1 | Modeling Spatial Dependencies for Mining Geospatial Data: An Introduction Spatial data mining is a process to discover interesting, potentially useful and high utility patterns embedded in spatial databases. Efficient tools for extracting information from spatial data sets can be of importance to organizations which own, generate and manage large spatial data sets. The current approach towards solving spatial data mining problems is to use classical data mining tools after "materializing" spatial relationships. However, the key property of spatial data is that of spatial autocorrelation. Like temporal data, spatial data values are influenced by values in their immediate vicinity. Ignoring spatial autocorrelation in the modeling process leads to results which are a poor fit and unreliable. In this chapter we will first review spatial statistical techniques which explicitly model spatial autocorrelation. Second, we will propose PLUMS (Predicting Locations Using Map Similarity), a new approach for supervised spatial data mining problems. PLUMS searches the space of solutions using a map-similarity measure which is more appropriate in the context of spatial data. We will show that compared to state-of-the-art spatial statistics approaches, PLUMS achieves comparable accuracy but at a fraction of the computational cost. Furthermore, PLUMS provides a general framework for specializing other data mining techniques for mining spatial data. | [
2845
] | Train |
2,417 | 0 | Heterogeneous Active Agents, III: Polynomially Implementable Agents . In (Eiter, Subrahmanian, and Pick 1999), the authors have introduced techniques to build agents on top of arbitrary data structures, and to "agentize" new/existing programs. They provided a series of successively more sophisticated semantics for such agent systems, and showed that as these semantics become epistemically more desirable, a computational price may need to be paid. In this paper, we identify a class of agents that are called weak regular---this is done by first identifying a fragment of agent programs (Eiter, Subrahmanian, and Pick 1999) called weak regular agent programs (WRAPs for short). It is shown that WRAPs are definable via three parameters---checking for a property called "safety", checking for a property called "conflict freedom" and checking for a "deontic stratifiability" property. Algorithms for each of these are developed. A weak regular agent is then defined in terms of these concepts , and a regular agent is one that satisfies an additional "boundedness" ... | [
2081
] | Train |
2,418 | 1 | Wrapper Induction: Efficiency and Expressiveness (Extended Abstract) Recently, many systems have been built that automatically interact with Internet information resources. However, these resources are usually formatted for use by people; e.g., the relevant content is embedded in HTML pages. Wrappers are often used to extract a resource's content, but hand-coding wrappers is tedious and error-prone. We advocate wrapper induction, a technique for automatically constructing wrappers. We have identified several wrapper classes that can be learned quickly (most sites require only a handful of examples, consuming a few CPU seconds of processing), yet which are useful for handling numerous Internet resources (70% of surveyed sites can be handled by our techniques). Introduction The Internet presents a stunning variety of on-line information resources: telephone directories, retail product catalogs, weather forecasts, and many more. Recently, there has been much interest in systems (such as software agents (Etzioni & Weld 1994; Kwok & Weld 1996) or informati... | [
371,
409,
1108,
1294,
1884,
3098
] | Train |
2,419 | 2 | SpiderServer: the MetaSearch Engine of WebNaut Search engines on the Web are valuable tools for searching information according to a user's interests, whether the user is an individual or a software agent. In the present article we describe the design and the operation mode of SpiderServer, a metasearch engine used for the submission of a query followed by the retrieving of results from five popular search engines. SpiderServer is the metasearch engine of the WebNaut system but it can be easily used by any other metasearch platform. There are two files for every search engine describing the phases of query formation and filtering respectively. These files contain directions on the way a query must be modified for a specific search engine and on the methodology SpiderServer must follow in order to parse the results from the specific search engine. The ultimate goal is to construct platform independent meta-search engines, which can be easily programmed to adapt to any search engine available on the Web. | [
2558
] | Validation |
2,420 | 0 | Statistical Modeling of Human Interactions In this paper we describe a real-time computer vision and machine learning system for modeling and recognizing human behaviors in a visual surveillance task. The system is particularly concerned with detecting when interactions between people occur, and classifying the type of interaction. Examples of interesting interaction behaviors include following another person, altering one's path to meet another, and so forth. Our system combines top-down with bottom-up information in a closed feedback loop, with both components employing a statistical Bayesian approach. We propose and compare two different state-based learning architectures, namely HMMs and CHMMs, for modeling behaviors and interactions. The CHMM model is shown to work much more efficiently and accurately. Finally, a synthetic agent training system is used to develop a priori models for recognizing human behaviors and interactions. We demonstrate the ability to use these a priori models to accurately classify real human beha... | [
1110,
1301,
1556,
2933
] | Test |
2,421 | 3 | An Algebraic Approach to Static Analysis of Active Database Rules This is a preliminary release of an article accepted by ACM Transactions on Database Systems. 1. INTRODUCTION An active database system is a conventional database system extended with a facility for managing active rules (or triggers). Incorporating active rules into a conventional database system has raised considerable interest both in the scientific community and in the commercial world: A number of prototypes that incorporate active rules into relational and object-oriented database system... | [
886,
1771
] | Train |
2,422 | 1 | Bounded Explanation and Inductive Refinement For Acquiring Control Knowledge One approach to learning control knowledge from a problem solving trace consists of generating explanations for the local decisions made during the search process. | [
312,
1429
] | Train |
2,423 | 1 | Statistical Models for Text Segmentation Abstract. This paper introduces a new statistical approach to automatically partitioning text into coherent segments. The approach is based on a technique that incrementally builds an exponential model to extract features that are correlated with the presence of boundaries in labeled training text. The models use two classes of features: topicality features that use adaptive language models in a novel way to detect broad changes of topic, and cue-word features that detect occurrences of specific words, which may be domain-specific, that tend to be used near segment boundaries. Assessment of our approach on quantitative and qualitative grounds demonstrates its effectiveness in two very different domains, Wall Street Journal news articles and television broadcast news story transcripts. Quantitative results on these domains are presented using a new probabilistically motivated error metric, which combines precision and recall in a natural and flexible way. This metric is used to make a quantitative assessment of the relative contributions of the different feature types, as well as a comparison with decision trees and previously proposed text segmentation algorithms. 1. | [
2522
] | Train |
2,424 | 2 | Automatic Hierarchical E-Mail Classification Using Association Rules The explosive growth of on-line communication, in particular e-mail communication, makes it necessary to organize the information for faster and easier processing and searching. Storing e-mail messages into hierarchically organized folders, where each folder corresponds to a separate topic, has proven to be very useful. Previous approaches to this problem use Naïve Bayes- or TF-IDF-style classifiers that are based on the unrealistic term independence assumption. These methods are also context-insensitive in that the meaning of words is independent of presence/absence of other words in the same message. It was shown that text classification methods that deviate from the independence assumption and capture context achieve higher accuracy. In this thesis, we address the problem of term dependence by building an associative classifier called Classification using Cohesion and Multiple Association Rules, or COMAR in short. The problem of context capturing is addressed by looking for phrases in message corpora. Both rules and phrases are generated using an efficient FP-growth-like approach. Since the amount of rules and phrases produced can be very large, we propose two new measures, rule cohesion and phrase cohesion, that possess the anti-monotone property which allows the push of rule and phrase pruning deeply into the process of their generation. This approach to pattern pruning proves to be much more efficient than "generate-and-prune" methods. Both unstructured text attributes and semi-structured non-text attributes, such as senders and recipients, are used for the classification. The COMAR classification algorithm uses multiple rules to predict several highest probability topics for each message. Different feature selection and rule ranking methods are compared. Our studies show ... | [
478,
570,
834,
1175,
1773,
1814,
2663
] | Test |
2,425 | 4 | An Architecture for Outdoor Wearable Computers to Support Augmented Reality and Multimedia Applications This paper describes an architecture to support a hardware and software platform for research into the use of wearable computers and augmented reality in an outdoor environment. The architecture supports such devices as a GPS, compass, and head-mounted display. A prototype system was built to support novel applications by drawing graphics (such as maps, building plans, and compass bearings) on the head-mounted display, projecting information over that normally seen by the user and hence augmenting a user's perception of reality. This paper presents a set of novel augmented reality and multimedia applications operated in an outdoor environment. 1 | [
1088,
2549
] | Train |
2,426 | 4 | ICrafter: A Service Framework for Ubiquitous Computing Environments In this paper, we propose ICrafter, a framework for services and their user interfaces in a class of ubiquitous computing environments. | [
718,
1817
] | Validation |
2,427 | 2 | Term Selection for Filtering based on Distribution of Terms over Time In this article we investigate the use of time distributions in retrieval tasks. Specifically, we introduce a novel term selection method, namely Term Occurrence Uniformity (TOU), based on the hypothesis that terms which occur uniformly in time are more valuable than others. Our empirical evaluation so far has neither proved nor disproved this hypothesis. However, results are promising and suggest the need for a deeper theoretical and empirical investigation. Our current concern is filtering, but this line of research may easily be extended to other retrieval tasks which involve temporally-dependent data. 1 Introduction Information Filtering is the process of searching in large amounts of data for information which matches a user information need. The filtering task is usually described as the inverse of the traditional retrieval task. In retrieval, a one-time user request (called query) is matched to a static collection of information objects. In filtering, users issue a long-term r... | [
1826
] | Train |
2,428 | 5 | Hybrid Methods Using Evolutionary Algorithms for On-line Training A novel hybrid evolutionary approach is presented in this paper for improving the performance of neural network classifiers in slowly varying environments. For this purpose, we investigate a coupling of Differential Evolution Strategy and Stochastic Gradient Descent, using both the global search capabilities of Evolutionary Strategies and the effectiveness of on-line gradient descent. The use of Differential Evolution Strategy is related to the concept of evolution of a number of individuals from generation to generation and that of on-line gradient descent to the concept of adaptation to the environment by learning. The hybrid algorithm is tested in two real-life image processing applications. Experimental results suggest that the hybrid strategy is capable to train on-line effectively leading to networks with increased generalization capability. | [
2070,
2647
] | Train |
2,429 | 2 | Unbiased Evaluation of Retrieval Quality using Clickthrough Data This paper proposes a new method for evaluating the quality of retrieval functions. Unlike traditional methods that require relevance judgements by experts or explicit user feedback, it is based entirely on clickthrough data. This is a key advantage, since clickthrough data can be collected at very low cost and without overhead for the user. Taking an approach from experiment design, the paper proposes an experiment setup that generates unbiased feedback about the relative quality of two search results without explicit user feedback. A theoretical analysis shows that the method gives the same results as evaluation with traditional relevance judgements under mild statistical assumptions. An empirical analysis verifies that the assumptions are indeed justified and that the new method leads to conclusive results in a WWW retrieval study. | [
2774
] | Test |
2,430 | 1 | Towards Automatic Discovery of Object Categories We propose a method to learn heterogeneous models of object classes for visual recognition. The training images contain a preponderance of clutter and learning is unsupervised. Our models represent objects as probabilistic constellations of rigid parts (features). The variability within a class is represented by a joint probability density function on the shape of the constellation and the appearance of the parts. Our method automatically identifies distinctive features in the training set. The set of model parameters is then learned using expectation maximization (see the companion paper [11] for details). When trained on different, unlabeled and unsegmented views of a class of objects, each component of the mixture model can adapt to represent a subset of the views. Similarly, different component models can also "specialize" on sub-classes of an object class. Experiments on images of human heads, leaves from different species of trees, and motor-cars demonstrate that the method works... | [
405,
1863
] | Train |
2,431 | 2 | A Scalable Integrated Region-Based Image Retrieval System Statistical clustering is critical in designing scalable image retrieval systems. In this paper, we present a scalable algorithm for indexing and retrieving images based on region segmentation. The method uses statistical clustering on region features and IRM (Integrated Region Matching), a measure developed to evaluate overall similarity between images that incorporates properties of all the regions in the images by a region-matching scheme. Compared with retrieval based on individual regions, our overall similarity approach (a) reduces the influence of inaccurate segmentation, (b) helps to clarify the semantics of a particular region, and (c) enables a simple querying interface for region-based image retrieval systems. The algorithm has been implemented as a part of our experimental SIMPLIcity image retrieval system and tested on large-scale image databases of both general-purpose images and pathology slides. Experiments have demonstrated that this technique maintains the accuracy and robustness of the original system while reducing the matching time significantly. | [
2891
] | Train |
2,432 | 0 | A Logic Programming Framework for Component-Based Software Prototyping The paper presents CaseLP, a logic-based prototyping environment for specifying and verifying complex distributed applications. CaseLP provides a set of languages for modeling intelligent and interacting components (agents) at different levels of abstraction. It also furnishes tools for integrating legacy software into a prototype. The possibility of integrating, into the same executable prototype, agents which are only specified as well as already developed components can prove extremely useful in the engineering process of complex applications. In fact, the reusability of existing components can be verified before the application has been implemented, and the developer can be more confident of the correctness of the new components' specification if it has been executed and tested by means of an interaction with the existing components. Besides the aspects of integration and reuse, CaseLP also faces another fundamental issue of today's applications, namely distribution. The... | [
1076,
3066
] | Validation |
2,433 | 2 | Evaluating Strategies for Similarity Search on the Web Finding pages on the Web that are similar to a query page (Related Pages) is an important component of modern search engines. A variety of strategies have been proposed for answering Related Pages queries, but comparative evaluation by user studies is expensive, especially when large strategy spaces must be searched (e.g., when tuning parameters). We present a technique for automatically evaluating strategies using Web hierarchies, such as Open Directory, in place of user feedback. We apply this evaluation methodology to a mix of document representation strategies, including the use of text, anchor-text, and links. We discuss the relative advantages and disadvantages of the various approaches examined. Finally, we describe how to efficiently construct a similarity index out of our chosen strategies, and provide sample results from our index. | [
508,
1170,
1201,
1466,
2454,
2459,
2503,
2705
] | Train |
2,434 | 5 | A glimpse into the future of ID Cyberspace is a complex dimension of both enabling and inhibiting data flows in electronic data networks. Current generation intrusion detection (ID) systems are not technologically advanced enough to create the situational knowledge required to manage these networks. Next generation ID systems will fuse data, combining short-term sensor data with long-term knowledge databases, to create cyberspace situational awareness. This article offers a glimpse into the foggy crystal ball of future ID systems. Before diving into the technical discussion we ask the reader to keep in mind the generic model of a datagram traversing the Internet. Figure 1 illustrates an IP datagram moving in a store-and-forward environment from source to destination, routed based on a destination address with an uncertain source address, decrementing the datagram time-to-live (TTL) at every router hop [1]. The datagram is routed through major Internets and IP transit providers. There is a striking similarity between the transit of a datagram in the Internet and an airplane through airspace; future network management and air traffic control. At a very high abstract level, the concepts used to monitor objects in airspace apply to monitoring objects in networks. The Federal Aviation Administration (FAA) divides airspace management into two distinct entities. On the one hand, local controllers guide aircraft into and out of the air space surrounding an airport. Their job is to maintain awareness of the location of all aircraft in their vicinity, ensure proper separation, identify threats to aircraft, and manage the overall safety of passengers. Functionally, this is similar to the role of network controllers who must control the environment within their administrative domains.
The network administrator must ensure the proper ports are open and the information is not delayed, the collisions are kept to a minimum, and the integrity of the delivery systems is not compromised. This is naturally similar to the situational awareness required in current generation air traffic control (ATC). | [] | Train |
2,435 | 3 | How to Avoid Building DataBlades That Know the Value of Everything and the Cost of Nothing The object-relational database management system (ORDBMS) offers many potential benefits for scientific, multimedia and financial applications. However, work remains in the integration of domain-specific class libraries (data cartridges, extenders, DataBlades®) into ORDBMS query processing. A major problem is that the standard mechanisms for query selectivity estimation, taken from relational database systems, rely on properties specific to the standard data types; creation of new mechanisms remains extremely difficult because the software interfaces provided by vendors are relatively low-level. In this paper, we discuss extensions of the generalized search tree, or GiST, to support a higher-level but less type-specific approach. Specifically, we discuss the computation of selectivity estimates with confidence intervals using a variety of index-based approaches and present results from an experimental comparison of these methods with several estimators from the literature. 1. Intro... | [
1153,
2022,
2095
] | Test |
2,436 | 3 | HyperQueries: Dynamic Distributed Query Processing on the Internet In this paper we propose a new framework for dynamic distributed query processing based on so-called HyperQueries, which are essentially query evaluation sub-plans "sitting behind" hyperlinks. We illustrate the flexibility of this distributed query processing architecture in the context of B2B electronic market places. Architecting an electronic market place as a data warehouse by integrating all the data from all participating enterprises in one centralized repository incurs severe problems. Using HyperQueries, application integration is achieved via dynamic distributed query evaluation plans. The electronic market place serves as an intermediary between clients and providers, executing their sub-queries referenced via hyperlinks. The hyperlinks are embedded within data objects of the intermediary's database. Retrieving such a virtual object will automatically initiate the execution of the referenced HyperQuery in order to materialize the entire object. Thus, sensitive data remains under the full control of the data providers. 1 | [
923,
1056
] | Train |
2,437 | 2 | The Shape of the Web and Its Implications for Searching the Web With the rapid growth of the number of web pages, designing a search engine that can retrieve high quality information in response to a user query is a challenging task. Automated search engines that rely on keyword matching usually return too many low quality matches and they take a long time to run. It is argued in the literature that link-following search methods can substantially increase the search quality, provided that these methods use an accurate assumption about useful patterns in the hyperlink topology of the web. Recent work in the field has focused on detecting identifiable patterns in the web graph and exploiting this information to improve the performance of search algorithms. We survey relevant work in this area and comment on the implications of these patterns for other areas such as advertisement and marketing. | [
1201,
1838,
2283,
2459,
2503,
3016
] | Test |
2,438 | 3 | Detection and Correction of Conflicting Concurrent Data Warehouse Updates Data integration over multiple heterogeneous data sources has become increasingly important for modern applications. The integrated data is usually stored in materialized views to allow better access, performance and high availability. Materialized view must be maintained after the data sources change. In a loosely-coupled environment, such as the Data Grid, the data sources are autonomous. Hence the source updates can be concurrent and cause erroneous maintenance results. State-of-the-art maintenance strategies apply compensating queries to correct such errors, making the restricting assumption that all source schemata remain static over time. However, in such dynamic environments, the data sources may change not only their data but also their schema, query capabilities or semantics. Consequently, either the maintenance queries or compensating queries would fail. We now propose a novel solution that handles both concurrent data and schema changes. First, we analyze the concurrency between source updates and classify them into different classes of dependencies. We then propose Dyno, a two-pronged strategy composed of dependency detection and correction algorithms to handle these new classes of concurrency. Our techniques are not tied to specific maintenance algorithms nor to a particular data model. To our knowledge, this is the first comprehensive solution to the view maintenance concurrency problems in loosely-coupled environments. Our experimental results illustrate that Dyno imposes an almost negligible overhead on existing maintenance algorithms for data updates while now allowing for this extended functionality. | [
1018,
1395,
1701,
2220
] | Train |
2,439 | 2 | EquiX-A Search and Query Language for XML EquiX is a search language for XML that combines the power of querying with the simplicity of searching. Requirements for such languages are discussed and it is shown that EquiX meets the necessary criteria. Both a graph-based abstract syntax and a formal concrete syntax are presented for EquiX queries. In addition, the semantics is defined and an evaluation algorithm is presented. The evaluation algorithm is polynomial under combined complexity. EquiX combines pattern matching, quantification and logical expressions to query both the data and meta-data of XML documents. The result of a query in EquiX is a set of XML documents. A DTD describing the result documents is derived automatically from the query. 1 | [
1545
] | Train |
2,440 | 2 | A System for Extraction of Temporal Expressions from French Texts We present a system for extraction of temporal expressions from French texts. The identification of the temporal expressions is based on a context-scanning strategy (CSS) which is carried out by two complementary techniques: search for regular expressions and left-to-right and right-to-left local chart-parsing. A System for Extraction of Temporal Expressions from French Texts Paper-ID: ACL-2001-XXXX 1 Introduction The identification and the interpretation of temporal and aspectual information play an important role in text understanding. This information is encoded in the natural languages by a wide array of linguistic means ranging from grammatical (morpho-syntactic) to lexical (verbs and adverbials) or strictly syntactic phenomena (temporal anaphora (Webber, 1988) or argument structure of the verb (Verkuyl, 1972; Verkuyl, 1993)). In this paper we present a system for identification of lexical non-verbal means of expressing temporal information in French texts. The system det... | [
1060
] | Train |
2,441 | 1 | Closing the Loop: an Agenda- and Justification-Based Framework for Selecting the Next Discovery Task to Perform We propose and evaluate an agenda- and justification-based architecture for discovery systems that contains a mechanism for selecting the next task to perform. This framework has many desirable properties: (1) its use of heuristics to perform and propose tasks facilitates the use of general discovery strategies that are able to use a variety of background knowledge, (2) through the use of justifications its mechanism for selecting the next task to perform is able to reason about the appropriateness of the tasks being considered, and (3) its mechanism for selecting the next task to perform also considers the user's interests, allowing a discovery program to tailor its behavior toward them. We evaluate the extent to which both reasons and estimates of interestingness contribute to performance in the domain of protein crystallization. With both aspects contributing to task selection, a high fraction of discoveries by the HAMB prototype were judged interesting by an expert (21% interesting and novel; 45% interesting but rediscoveries). 1. | [
2340
] | Train |
2,442 | 3 | Dynamic Logic Programming In this paper we investigate updates of knowledge bases represented by logic programs. In order to represent negative information, we use generalized logic programs which allow default negation not only in rule bodies but also in their heads.We start by introducing the notion of an update $P\oplus U$ of a logic program $P$ by another logic program $U$. Subsequently, we provide a precise semantic characterization of $P\oplus U$, and study some basic properties of program updates. In particular, we show that our update programs generalize the notion of interpretation update. We then extend this notion to compositional sequences of logic programs updates $P_{1}\oplus P_{2}\oplus \dots $, defining a dynamic program update, and thereby introducing the paradigm of \emph{dynamic logic programming}. This paradigm significantly facilitates modularization of logic programming, and thus modularization of non-monotonic reasoning as a whole. Specifically, suppose that we are given a set of logic program modules, each describing a different state of our knowledge of the world. Different states may represent different time points or different sets of priorities or perhaps even different viewpoints. Consequently, program modules may contain mutually contradictory as well as overlapping information. The role of the dynamic program update is to employ the mutual relationships existing between different modules to precisely determine, at any given module composition stage, the declarative as well as the procedural semantics of the combined program resulting from the modules. | [
441,
1192
] | Train |
2,443 | 1 | Information Extraction with HMMs and Shrinkage Hidden Markov models (HMMs) are a powerful probabilistic tool for modeling time series data, and have been applied with success to many language-related tasks such as part of speech tagging, speech recognition, text segmentation and topic detection. This paper describes the application of HMMs to another language-related task, information extraction: the problem of locating textual sub-segments that answer a particular information need. In our work, the HMM state transition probabilities and word emission probabilities are learned from labeled training data. As in many machine learning problems, however, the lack of sufficient labeled training data hinders the reliability of the model. The key contribution of this paper is the use of a statistical technique called "shrinkage" that significantly improves parameter estimation of the HMM emission probabilities in the face of sparse training data. In experiments on seminar announcements and Reuters acquisitions articles, shrinkage is shown to r... | [
255,
855,
875,
1546,
2104,
3152
] | Validation |
2,444 | 1 | A Simple Heuristic Based Genetic Algorithm for the Maximum Clique Problem This paper proposes a novel heuristic based genetic algorithm (HGA) for the maximum clique problem, which consists of the combination of a simple genetic algorithm and a naive heuristic algorithm. The heuristic based genetic algorithm is tested on the so-called DIMACS benchmark graphs, with up to 4000 nodes and up to 5506380 edges, consisting of randomly generated graphs with known maximum clique and of graphs derived from various practical applications. The performance of HGA on these graphs is very satisfactory both in terms of solution quality and running time. Despite its simplicity, HGA dramatically improves on all previous approaches based on genetic algorithms we are aware of, and yields results comparable to those of more involved heuristic algorithms based on local search. This provides empirical evidence of the effectiveness of heuristic based genetic algorithms as a search technique for solving the maximum clique problem, which is competitive with respect to other (variants... | [
2217
] | Validation |
2,445 | 1 | Flexibly Instructable Agents This paper presents an approach to learning from situated, interactive tutorial instruction within an ongoing agent. Tutorial instruction is a flexible (and thus powerful) paradigm for teaching tasks because it allows an instructor to communicate whatever types of knowledge an agent might need in whatever situations might arise. To support this flexibility, however, the agent must be able to learn multiple kinds of knowledge from a broad range of instructional interactions. Our approach, called situated explanation, achieves such learning through a combination of analytic and inductive techniques. It combines a form of explanation-based learning that is situated for each instruction with a full suite of contextually guided responses to incomplete explanations. The approach is implemented in an agent called Instructo-Soar that learns hierarchies of new tasks and other domain knowledge from interactive natural language instructions. Instructo-Soar meets three key requirements of flexible instructability that distinguish it from previous systems: (1) it can take known or unknown commands at any instruction point; (2) it can handle instructions that apply to either its current situation or to a hypothetical situation specified in language (as in, for instance, conditional instructions); and (3) it can learn, from instructions, each class of knowledge it uses to perform tasks. 1. | [
142
] | Train |
2,446 | 2 | The Use of Classifiers in Sequential Inference We study the problem of combining the outcomes of several different classifiers in a way that provides a coherent inference that satisfies some constraints. In particular, we develop two general approaches for an important subproblem - identifying phrase structure. The first is a Markovian approach that extends standard HMMs to allow the use of a rich observation structure and of general classifiers to model state-observation dependencies. The second is an extension of constraint satisfaction formalisms. We develop efficient combination algorithms under both models and study them experimentally in the context of shallow parsing. 1 Introduction In many situations it is necessary to make decisions that depend on the outcomes of several different classifiers in a way that provides a coherent inference that satisfies some constraints - the sequential nature of the data or other domain specific constraints. Consider, for example, the problem of chunking natural language sentences ... | [
16,
2522,
2658,
2694,
2898
] | Test |
2,447 | 0 | A Scalable, Distributed Middleware Service Architecture to Support Mobile Internet Applications Middleware layers placed between user clients and application servers have been used to perform a variety of functions. In previous work we have used middleware to perform a new capability, application session handoff, using a single Middleware Server to provide all functionality. However, to improve the scalability of our architecture, we have designed an efficient distributed Middleware Service layer that properly maintains application session handoff semantics while being able to service a large number of clients. We show that this service layer improves the scalability of general client-to-application server interaction as well as the specific case of application session handoff. We detail protocols involved in performing handoff and analyse an implementation of the architecture that supports the use of a real medical teaching tool. From experimental results it can be seen that our Middleware Service effectively provides scalability as a response to increased workload. 1. | [
348,
659,
1496
] | Train |
2,448 | 3 | Generic Schema Matching with Cupid Schema matching is a critical step in many applications, such as XML message mapping, data warehouse loading, and schema integration. In this paper, we investigate algorithms for generic schema matching, outside of any particular data model or application. We first present a taxonomy for past solutions, showing that a rich range of techniques is available. We then propose a new algorithm, Cupid, that discovers mappings between schema elements based on their names, data types, constraints, and schema structure, using a broader set of techniques than past approaches. Some of our innovations are the integrated use of linguistic and structural matching, context-dependent matching of shared types, and a bias toward leaf structure where much of the schema content resides. After describing our algorithm, we present experimental results that compare Cupid to two other schema matching systems. | [
2004
] | Train |
2,449 | 1 | Defining and Combining Symmetric and Asymmetric Similarity Measures . In this paper, we present a framework for the definition of similarity measures using lattice-valued functions. We show their strengths (particularly for combining similarity measures). Then we investigate a particular instantiation of the framework, in which sets are used both to represent objects and to denote degrees of similarity. The paper concludes by suggesting some generalisations of the findings. 1 Introduction There are many different ways of computing the similarity of object representations. These include: -- the feature-based approach, in which objects are represented by sets of features, and similarity is based on feature commonality and difference (e.g. [13]); -- the geometric approach, in which objects are represented by points in an n- dimensional space (usually specified by sets of pairs of attributes and atomic values), and similarity is based on the inverse of the distance between objects in the space (e.g. [12]); and -- the structural approach, which uses gr... | [
631
] | Train |
2,450 | 5 | Optimal Design of Neural Nets Using Hybrid Algorithms Selection of the topology of a network and correct parameters for the learning algorithm is a tedious task for designing an optimal Artificial Neural Network (ANN), which is smaller, faster and with a better generalization performance. Genetic algorithm (GA) is an adaptive search technique based on the principles and mechanisms of natural selection and survival of the fittest from natural evolution. Simulated annealing (SA) is a global optimization algorithm that can process cost functions possessing quite arbitrary degrees of nonlinearities, discontinuities and stochasticity while statistically assuring an optimal solution. In this paper we explain how a hybrid algorithm integrating the desirable aspects of GA and SA can be applied for the optimal design of an ANN. This paper is more concerned with the understanding of current theoretical developments of Evolutionary Artificial Neural Networks (EANNs) using GAs and other heuristic procedures and how the proposed hybrid and other heuristic procedures can be combined to produce an optimal ANN. | [
2071,
2563
] | Test |
2,451 | 1 | Spatial Cognition and Neuro-Mimetic Navigation: A Model of Hippocampal Place Cell Activity . A computational model of hippocampal activity during spatial cognition and navigation tasks is presented. The spatial representation in our model of the rat hippocampus is built on-line during exploration via two processing streams. An allothetic vision-based representation is built by unsupervised Hebbian learning extracting spatio-temporal properties of the environment from visual input. An idiothetic representation is learned based on internal movement-related information provided by path integration. On the level of the hippocampus, allothetic and idiothetic representations are integrated to yield a stable representation of the environment by a population of localized overlapping CA3-CA1 place fields. The hippocampal spatial representation is used as a basis for goal-oriented spatial behavior. We focus on the neural pathway connecting the hippocampus to the nucleus accumbens. Place cells drive a population of locomotor action neurons in the nucleus accumbens. Reward-based learnin... | [
893
] | Train |
2,452 | 4 | Phidgets: Easy Development of Physical Interfaces through Physical Widgets Physical widgets or phidgets are to physical user interfaces what widgets are to graphical user interfaces. Similar to widgets, phidgets abstract and package input and output devices: they hide implementation and construction details, they expose functionality through a well-defined API, and they have an (optional) on-screen interactive interface for displaying and controlling device state. Unlike widgets, phidgets also require: a connection manager to track how devices appear on-line; a way to link a software phidget with its physical counterpart; and a simulation mode to allow the programmer to develop, debug and test a physical interface even when no physical device is present. Our evaluation shows that everyday programmers using phidgets can rapidly develop physical interfaces. | [
1082,
2574
] | Validation |
2,453 | 3 | Evaluating Top-k Queries over Web-Accessible Databases A query to a web search engine usually consists of a list of keywords, to which the search engine responds with the best or “top ” k pages for the query. This top-k query model is prevalent over multimedia collections in general, but also over plain relational data for certain applications. For example, consider a relation with information on available restaurants, including their location, price range for one diner, and overall food rating. A user who queries such a relation might simply specify the user’s location and target price range, and expect in return the best 10 restaurants in terms of some combination of proximity to the user, closeness of match to the target price range, and overall food rating. Processing top-k queries efficiently is challenging for a number of reasons. One critical such reason is that, in many web applications, the relation attributes might not be available other than through external web-accessible form interfaces, which we will have to query repeatedly for a potentially large set of candidate objects. In this article, we study how to process top-k queries efficiently in this setting, where the attributes for which users specify target values might be handled by external, autonomous sources with a variety of access interfaces. We present a sequential algorithm for processing such queries, but observe that any sequential top-k query processing strategy is bound to require unnecessarily long query processing times, since web accesses exhibit high and variable latency. Fortunately, web sources can be probed in parallel, and each source can typically process concurrent requests, although sources may impose some restrictions on the type and number of probes that they are willing to accept. 
We adapt our sequential query processing technique and introduce an efficient algorithm that maximizes source-access parallelism to minimize query response time, while satisfying source-access constraints. We evaluate | [
923,
1249
] | Test |
2,454 | 2 | Theseus: Categorization by Context Introduction The traditional approach to document categorization is categorization by content, since information for categorizing a document is extracted from the document itself. In a hypertext environment like the Web, the structure of documents and the link topology can be exploited to perform what we call categorization by context [Attardi 98]: the context surrounding a link in an HTML document is used for categorizing the document referred by the link. Categorization by context is capable of dealing also with multimedia material, since it does not rely on the ability to analyze the content of documents. Categorization by context leverages on the categorization activity implicitly performed when someone places or refers to a document on the Web. By focusing the analysis to the documents used by a group of people, one can build a catalogue tuned to the need of that group. Categorization by context is based on the following assumptions: 1 | [
2433,
2459,
2662,
2984
] | Test |
2,455 | 2 | Concept Network: A Structure for Context Sensitive Document Representation In this paper we propose a directed acyclic graphical structure called concept network (CNW) for context sensitive document representation for use in information filtering. Nodes of CNW represent concepts and links represent the relationships between the concepts. A concept can either be a phrase or a topic of discourse, or a mode of discourse. An important feature of the CNW based scheme (CNWBS) is context filters [Murthy and Keerthi, 1999] which are employed on the links of the graph to enable context sensitive analysis and representation of documents. Context filters are intended to filter the noise in the inputs of the concepts based on the context of appearance of their inputs. The CNWBS automatically finds the paragraphs related to all concepts in the document. It also provides good comprehensibility in representation; allows sharing of CNW among a group of users; and, reduces the credit-assignment problem during its construction. This representation scheme is used for... | [
835,
2477
] | Train |
2,456 | 4 | Multimodal Interactions with Agents in Virtual Worlds Introduction The World Wide Web allows interactions and transactions through Web pages using speech and language, either by inanimate or live agents, image interpretation and generation, and, of course, the more traditional ways of presenting explicitly predefined information as text, tables, figures, pictures, audio, animation and video. In a task- or domain-oriented way of interaction, current technology allows the recognition and interpretation of rather natural speech and language in dialogues. However, rather than the current two-dimensional web pages, many interesting parts of the Web will become three-dimensional, allowing the building of virtual worlds inhabited by user and task agents, with which the user can interact using different types of modalities, including speech and language interpretation and generation. Agents can work on behalf of users; hence, human-computer interaction will make use of `indirect management', rather than interacting through direct manipulation of data | [
2462
] | Test |
2,457 | 0 | A Logic of BDI Agents with Procedural Knowledge In this paper, we present a new logic for specifying the behaviour of multi-agent systems. In this logic, agents are viewed as BDI systems, in that their state is characterised in terms of beliefs, desires, and intentions: the semantics of the BDI component of the logic are based on the well-known system of Rao and Georgeff. In addition, agents have available to them a library of plans, representing their `know-how': procedural knowledge about how to achieve their intentions. These plans are, in effect, programs that specify how a group of agents can work in parallel to achieve certain ends. The logic provides a rich set of constructs for describing the structure and execution of plans. Some properties of the logic are investigated (in particular, those relating to plans), and some comments on future work are presented. 1 Introduction There is currently much international interest in computer systems that go under the banner of intelligent agents [17]. Crudely, an intelligent agent i... | [
2338,
2364
] | Test |
2,458 | 3 | Partial Answers for Unavailable Data Sources Abstract. Many heterogeneous database system products and prototypes exist today; they will soon be deployed in a wide variety of environments. Most existing systems suffer from an Achilles' heel: they ungracefully fail in presence of unavailable data sources. If some data sources are unavailable when accessed, these systems either silently ignore them or generate an error. This behavior is improper in environments where there is a non-negligible probability that data sources cannot be accessed (e.g., Internet). In case some data sources cannot be accessed when processing a query, the complete answer to this query cannot be computed; some work can however be done with the data sources that are available. In this paper, we propose a novel approach where, in presence of unavailable data sources, the answer to a query is a partial answer. A partial answer is a representation of the work that has been done in case the complete answer to a query cannot be computed, and of the work that remains to be done in order to obtain this complete answer. The use of a partial answer is twofold. First, it contains an incremental query that allows to obtain the complete answer without redoing the work that has already been done. Second, the application program can extract information from a partial answer through the use of a secondary query, which we call a parachute query. In this paper, we present a framework for partial answers and we propose three algorithms for the evaluation of queries in presence of unavailable sources, the construction of incremental queries and the evaluation of parachute queries. 1 | [
923,
1073
] | Train |
2,459 | 2 | Automatic Resource list Compilation by Analyzing Hyperlink Structure and Associated Text We describe the design, prototyping and evaluation of ARC, a system for automatically compiling a list of authoritative web resources on any (sufficiently broad) topic. The goal of ARC is to compile resource lists similar to those provided by Yahoo! or Infoseek. The fundamental difference is that these services construct lists either manually or through a combination of human and automated effort, while ARC operates fully automatically. We describe the evaluation of ARC, Yahoo!, and Infoseek resource lists by a panel of human users. This evaluation suggests that the resources found by ARC frequently fare almost as well as, and sometimes better than, lists of resources that are manually compiled or classified into a topic. We also provide examples of ARC resource lists for the reader to examine. | [
2,
95,
124,
166,
370,
507,
538,
843,
1021,
1091,
1108,
1201,
1296,
1307,
1315,
1357,
1547,
1838,
1976,
2038,
2091,
2283,
2433,
2437,
2454,
2471,
2532,
2565,
2610,
2705,
2716,
2971,
2989,
3090,
3131
] | Train |
2,460 | 3 | Efficiently Supporting Temporal Granularities Abstract: Granularity is an integral feature of temporal data. For instance, a person's age is commonly given to the granularity of years and the time of their next airline flight to the granularity of minutes. A granularity creates a discrete image, in terms of granules, of a (possibly continuous) time-line. We present a formal model for granularity in temporal operations that is integrated with temporal indeterminacy, or "don't know when" information. We also minimally extend the syntax and semantics of SQL-92 to support mixed granularities. This support rests on two operations, scale and cast, that move times between granularities, e.g., from days to months. We demonstrate that our solution is practical by showing how granularities can be specified in a modular fashion, and by outlining a time- and space-efficient implementation. The implementation uses several optimization strategies to mitigate the expense of accommodating multiple granularities. Index Terms: Calendar, granularity, indeterminacy, SQL-92, temporal database, TSQL2. 1 | [
1072
] | Test |
2,461 | 5 | Towards a Comprehensive Topic Hierarchy for News To date, a comprehensive, Yahoo-like hierarchy of topics has yet to be offered for the domain of news. The Yahoo approach of managing such a hierarchy --- hiring editorial staff to read documents and correctly assign them to topics --- is simply not practical in the domain of news. Far too many stories are written and made available online every day. While many Machine Learning methods exist for organising documents into topics, these methods typically require a large number of labelled training examples before performing accurately. When managing a large and ever-changing topic hierarchy, it is unlikely that there would be enough time to provide many examples per topic. For this reason, it would be useful to identify extra information within the domain of news that could be harnessed to minimise the number of labelled examples required to achieve reasonable accuracy. To this end, the notion of a semi-labelled document is introduced. These documents, which are partially labelled by th... | [
739,
1446,
2100,
2781
] | Train |
2,462 | 4 | Jacob project - Documentation The Jacob software system has been built as part of the Jacob project, which is a pilot project of the Virtual Reality Valley Twente initiative. The Jacob project investigates the application of virtual reality techniques and involves the design and construction of an animated agent in a 3-dimensional virtual environment. The project focuses on software engineering aspects, multimodal interaction, and the use of agent technology. In the current version of the Jacob system, an agent called Jacob teaches the user the Towers of Hanoi game; interaction takes place through natural language and manipulations of objects in the virtual environment. The purpose of this report is to provide information needed for further development of the Jacob system. It describes a number of technical details, including the file and directory structure, the software architecture, the event handling mechanism, and the integration of the dialogue management system. 3 Table of Contents 1 Introduction ........... | [
2456
] | Test |
2,463 | 1 | Learning Nonlinear Dynamical Systems using an EM Algorithm The Expectation Maximization (EM) algorithm is an iterative procedure for maximum likelihood parameter estimation from data sets with missing or hidden variables[2]. It has been applied to system identification in linear stochastic state-space models, where the state variables are hidden from the observer and both the state and the parameters of the model have to be estimated simultaneously [9]. We present a generalization of the EM algorithm for parameter estimation in nonlinear dynamical systems. The "expectation" step makes use of Extended Kalman Smoothing to estimate the state, while the "maximization" step re-estimates the parameters using these uncertain state estimates. In general, the nonlinear maximization step is difficult because it requires integrating out the uncertainty in the states. However, if Gaussian radial basis function (RBF) approximators are used to model the nonlinearities, the integrals become tractable and the maximization step can be solved via systems of linear equations. | [
799,
1345
] | Validation |
2,464 | 2 | Experiences with Selecting Search Engines using Meta-Search Search engines are among the most useful and high profile resources on the Internet. The problem of finding information on the Internet has been replaced with the problem of knowing where search engines are, what they are designed to retrieve and how to use them. This paper describes and evaluates SavvySearch, a meta-search engine designed to intelligently select and interface with multiple remote search engines. The primary meta-search issue examined is the importance of carefully selecting and ranking remote search engines for user queries. We studied the efficacy of SavvySearch's incrementally acquired meta-index approach to selecting search engines by analyzing the effect of time and experience on performance. We also compared the meta-index approach to the simpler categorical approach and showed how much experience is required to surpass the simple scheme. 1 Introduction Search engines are powerful tools for assisting the otherwise unmanageable task of navigating the rapidly ex... | [
263,
879,
976,
982,
1108,
1134,
1167,
1432,
1804,
2558,
2771,
3139
] | Train |
2,465 | 0 | A Contract Decommitment Protocol for Automated Negotiation in Time Variant Environments Negotiation is a fundamental mechanism in distributed multi-agent systems. Since negotiation is a time-consuming process, in many scenarios agents have to take into account the passage of time and to react to uncertain events. The possibility to decommit from a contract is considered a powerful technique to manage this aspect. This paper considers interactions among self-interested and autonomous agents, each with their own utility function, and focuses on incomplete information. We define a negotiation model based on asynchronous message passing in which the negotiation doesn't end when an agreement is reached but when the consequences of the contract have happened -- i.e. the action is done. In this model the agent utility functions are time dependent. We present an extension of the contract net protocol that implements the model. | [
1291
] | Validation |
2,466 | 1 | Selective Sampling With Redundant Views Selective sampling, a form of active learning, reduces the cost of labeling training data by asking only for the labels of the most informative unlabeled examples. We introduce a novel approach to selective sampling which we call co-testing. Co-testing can be applied to problems with redundant views (i.e., problems with multiple disjoint sets of attributes that can be used for learning). We analyze the most general algorithm in the co-testing family, naive co-testing, which can be used with virtually any type of learner. Naive co-testing simply selects at random an example on which the existing views disagree. We applied our algorithm to a variety of domains, including three real-world problems: wrapper induction, Web page classification, and discourse tree parsing. The empirical results show that besides reducing the number of labeled examples, naive co-testing may also boost the classification accuracy. Introduction In order to learn a classifier, supervised learn... | [
529,
714,
1294,
3098
] | Train |
2,467 | 3 | A First-Order Approach to Unsupervised Learning . This paper deals with learning first-order logic rules from data lacking an explicit classification predicate. Consequently, the learned rules are not restricted to predicate definitions as in supervised Inductive Logic Programming. First-order logic offers the ability to deal with structured, multi-relational knowledge. Possible applications include first-order knowledge discovery, induction of integrity constraints in databases, multiple predicate learning, and learning mixed theories of predicate definitions and integrity constraints. One of the contributions of our work is a heuristic measure of confirmation, trading off satisfaction and novelty of the rule. The approach has been implemented in the Tertius system. The system performs an optimal best-first search, finding the k most confirmed hypotheses. It can be tuned to many different domains by setting its parameters, and it can deal either with individual-based representations as in propositional learning or with general logi... | [
2111,
2140
] | Validation |
2,468 | 2 | Webmining: Learning from the World Wide Web : Automated analysis of the world wide web is a new challenging area relevant in many applications, e.g., retrieval, navigation and organization of information, automated information assistants, and e-commerce. This paper discusses the use of unsupervised and supervised learning methods for user behavior modeling and content-based segmentation and classification of web pages. The modeling is based on independent component analysis and hierarchical probabilistic clustering techniques. Keywords: Webmining, unsupervised learning, hierarchical probabilistic clustering 1. | [
878,
890
] | Validation |
2,469 | 1 | Domain-Specific Knowledge Acquisition For Conceptual Sentence Analysis The availability of on-line corpora is rapidly changing the field of natural language processing (NLP) from one dominated by theoretical models of often very specific linguistic phenomena to one guided by computational models that simultaneously account for a wide variety of phenomena that occur in real-world text. Thus far, among the best-performing and most robust systems for reading and summarizing large amounts of real-world text are knowledge-based natural language systems. These systems rely heavily on domain-specific, handcrafted knowledge to handle the myriad syntactic, semantic, and pragmatic ambiguities that pervade virtually all aspects of sentence analysis. Not surprisingly, however, generating this knowledge for new domains is ti... | [
300,
2641,
2676
] | Train |
2,470 | 0 | An Instructor's Assistant for Team-Training in Dynamic Multi-agent Virtual Worlds . The training of teams in highly dynamic, multi-agent virtual worlds places a heavy demand on an instructor. We address the instructor's problem with the PuppetMaster. The PuppetMaster manages a network of monitors that report on the activities in the simulation in order to provide the instructor with an interpretation and situation-specific analysis of student behavior. The approach used to model student teams is to structure the state space into an abstract situation-based model of behavior that supports interpretation in the face of missing information about agent's actions and goals. 1 Introduction Teams of people operating in highly dynamic, multi-agent environments must learn to deal with rapid and unpredictable turns of events. Simulation-based training environments inhabited by synthetic agents can be effective in providing realistic but safe settings in which to develop skills these environments require (e.g., [14]). To faithfully capture the unpredictable multi-agent... | [
674,
1840
] | Train |
2,471 | 2 | Background Readings for Collection Synthesis | [
471,
488,
608,
630,
901,
1201,
1547,
1567,
1838,
1966,
1977,
2180,
2371,
2372,
2459,
2503,
2705,
3170
] | Train |
2,472 | 4 | Advanced Interaction in Context . Mobile information appliances are increasingly used in numerous different situations and locations, setting new requirements to their interaction methods. When the user's situation, place or activity changes, the functionality of the device should adapt to these changes. In this work we propose a layered real-time architecture for this kind of context-aware adaptation based on redundant collections of low-level sensors. Two kinds of sensors are distinguished: physical and logical sensors, which give cues from environment parameters and host information. A prototype board that consists of eight sensors was built for experimentation. The contexts are derived from cues using real-time recognition software, which was constructed after experiments with Kohonen's Self-Organizing Maps and its variants. A personal digital assistant (PDA) and a mobile phone were used with the prototype to demonstrate situational awareness. On the PDA font size and backlight were changed depending... | [
217,
664,
1027,
1097,
1227,
1634,
1825,
2225,
2959
] | Train |
2,473 | 4 | Location-aware information delivery with comMotion This paper appears in the HUC 2000 Proceedings, pp.157-171, Springer-Verlag | [
500,
1992
] | Train |
2,474 | 1 | Boosting Image Retrieval We present an approach for image retrieval using a very large number of highly selective features and efficient online learning. Our approach is predicated on the assumption that each image is generated by a sparse set of visual "causes" and that images which are visually similar share causes. We propose a mechanism for computing a very large number of highly selective features which capture some aspects of this causal structure (in our implementation there are over 45,000 highly selective features). At query time a user selects a few example images, and a technique known as "boosting" is used to learn a classification function in this feature space. By construction, the boosting procedure learns a simple classifier which only relies on 20 of the features. As a result a very large database of images can be scanned rapidly, perhaps a million images per second. Finally we will describe a set of experiments performed using our retrieval system on a database of 3000 images. 1. Introductio... | [
1525
] | Train |
2,475 | 2 | An Efficient Boosting Algorithm for Combining Preferences We study the problem of learning to accurately rank a set of objects by combining a given collection of ranking or preference functions. This problem of combining preferences arises in several applications, such as that of combining the results of different search engines, or the "collaborative-filtering" problem of ranking movies for a user based on the movie rankings provided by other users. In this work, we begin by presenting a formal framework for this general problem. We then describe and analyze an efficient algorithm called RankBoost for combining preferences based on the boosting approach to machine learning. We give theoretical results describing the algorithm's behavior both on the training data, and on new test data not seen during training. We also describe an efficient implementation of the algorithm for a particular restricted but common case. We next discuss two experiments we carried out to assess the performance of RankBoost. In the first experiment, we used the algorithm to combine different web search strategies, each of which is a query expansion for a given domain. The second experiment is a collaborative-filtering task for making movie recommendations. | [
323,
1357,
1577,
2335,
2631,
2774,
2847,
3004
] | Test |
2,476 | 5 | Application of Moving Objects and Spatiotemporal Reasoning In order to predict future variations of moving objects whose general attributes, locations, and regions of spatial objects are changed over time, spatiotemporal data, domain knowledge, and spatiotemporal operations are required to process together with temporal and spatial attributes of data. However, conventional researches on temporal and spatial reasoning cannot be applied directly to the inference using moving objects, because they have been studied separately on temporal or spatial attributes of data. Therefore, in this paper, we not only define spatial objects in time domain but also propose a new type of moving objects and spatiotemporal reasoning model that has the capability of operation and inference for moving objects. The proposed model is made up of spatiotemporal database, GIS tool, and inference engine for application of spatiotemporal reasoning using moving objects and they execute operations and inferences for moving objects. Finally, to show the applicability of the proposed model, a proper domain is established for the battlefield analysis system to support commander's decision making in the army operational situation and it is experimented with this domain. | [
1927,
2156
] | Validation |
2,477 | 2 | Context Filters for Document-Based Information Filtering In this paper we propose a keyPhrase-sense disambiguation methodology called "context filters" for use in keyPhrase based information filtering systems. A context filter finds whether an input keyPhrase has occurred in the required context. Context filters consider various factors of ambiguity. Some of these factors are special to information filtering and they are handled in a structured fashion. The proposed context filters are very comprehensible. Context filters consider varieties of contexts which are not considered in existing word-sense disambiguation methods but these are all needed for information filtering. The ideas on context filters that we report in this paper form important elements of an Instructible Information Filtering Agent that we are developing. 1. Introduction Information filtering is the process of separating out irrelevant documents from relevant ones. Its importance has motivated several researchers to develop software agents such as SIFT, InfoScan, iAgent, ... | [
835,
2455
] | Train |
2,478 | 3 | Sangam: Modeling Transformations For Integrating Now and Tomorrow Today many application engineers struggle to not only publish their relational, object or ascii file data on the Web but to also integrate information from diverse sources, often inventing and reinventing a suite of hard-wired integration tools. A model management system that supports the specification and manipulation of not only data models and schemata, but also mappings between the different models in a generic manner has the promise of solving these issues. However, support for modeling and managing such mappings as objects remains an unsolved challenge. In our work, we propose a powerful middleware tool that successfully tackles this challenge. For this, we propose a graph-theoretic framework that allows users to explicitly model mappings between different data models as well as re-structuring within one data model. Our map metamodel is based on a set of re-usable mapping constructs that can in principle be applied on any data model described in our framework. In our work, we have tested these operators for XML and relational model mappings. Using the description of maps at the model level, mappings between specific application schemas and transformations of associated application data can be automated by our framework. Our framework guarantees the correctness of the map, of the generated transformation code, of the output data model, and of the generated application schemas, based on the correctness criteria for the map metamodel. In this paper, we also introduce the model management system that we are developing to realize our proposed map modeling theory. With Sangam we show not only the feasibility of our approach but also demonstrate the re-usability and the ease of end-to-end development of modeling strategies. To further illustrate our ide... | [
1099
] | Train |
2,479 | 1 | Pruning Classifiers in a Distributed Meta-Learning System JAM is a powerful and portable agent-based distributed data mining system that employs meta-learning techniques to integrate a number of independent classifiers (concepts) derived in parallel from independent and (possibly) inherently distributed databases. Although metalearning promotes scalability and accuracy in a simple and straightforward manner, brute force meta-learning techniques can result in large, inefficient and sometimes inaccurate meta-classifier hierarchies. In this paper we explore several techniques for evaluating classifiers and we demonstrate that meta-learning combined with certain pruning methods can achieve similar or even better performance results in a much more cost effective manner. Keywords: classifier evaluation, pruning, metrics, distributed mining, meta-learning. This research is supported by the Intrusion Detection Program (BAA9603) from DARPA (F30602-96-1-0311), NSF (IRI-96-32225 and CDA-96-25374) and NYSSTF (423115-445). y Supported in part by IBM... | [
2242
] | Validation |