| node_id (int64) | label (int64) | text (string) | neighbors (list) | mask (string) |
|---|---|---|---|---|
1,380 | 0 | Uncertain Knowledge Representation and Communicative Behavior in Coordinated Defense This paper reports on results we obtained on communication among artificial and human agents interacting in a simulated air defense domain. In our research, we postulate that the artificial agents use a decision-theoretic method to select optimal communicative acts, given the characteristics of the particular situation. Thus, the agents we implemented compute the expected utilities of various alternative communicative acts, and execute the best one. The agents use a probabilistic frame-based knowledge formalism to represent the uncertain information they have about the domain and about the other agents present. We build on our earlier work that uses the Recursive Modeling Method (RMM) for coordination, and apply RMM to rational communication in an anti-air defense domain. In this domain, distributed units coordinate and communicate to defend a specified territory from a number of attacking missiles. We measure the benefits of rational communication by showing the improvement in the qua... | [
411,
2825
] | Validation |
1,381 | 3 | An Automatic Closed-Loop Methodology for Generating Character Groundtruth for Scanned Documents Abstract—Character groundtruth for real, scanned document images is crucial for evaluating the performance of OCR systems, training OCR algorithms, and validating document degradation models. Unfortunately, manual collection of accurate groundtruth for characters in a real (scanned) document image is not practical because (i) accuracy in delineating groundtruth character bounding boxes is not high enough, (ii) it is extremely laborious and time consuming, and (iii) the manual labor required for this task is prohibitively expensive. In this paper we describe a closed-loop methodology for collecting very accurate groundtruth for scanned documents. We first create ideal documents using a typesetting language. Next we create the groundtruth for the ideal document. The ideal document is then printed, photocopied and then scanned. A registration algorithm estimates the global geometric transformation and then performs a robust local bitmap match to register the ideal document image to the scanned document image. Finally, groundtruth associated with the ideal document image is transformed using the estimated geometric transformation to create the groundtruth for the scanned document image. This methodology is very general and can be used for creating groundtruth for documents typeset in any language, layout, font, and style. We have demonstrated the method by generating groundtruth for English, Hindi, and FAX document images. The cost of creating groundtruth using our methodology is minimal. If character, word or zone groundtruth is available for any real document, the registration algorithm can be used to generate the corresponding groundtruth for a rescanned version of the document. Index Terms—Automatic real groundtruth, document image analysis, OCR, performance evaluation, image registration, geometric transformations, image warping. | [
501
] | Train |
1,382 | 4 | Active User Interfaces For Building Decision-Theoretic Systems Knowledge elicitation/acquisition continues to be a bottleneck to constructing decision-theoretic systems. Methodologies and techniques for incremental elicitation/acquisition of knowledge, especially under uncertainty, in support of users' current goals are desirable. This paper presents PESKI, a probabilistic expert system development environment. PESKI provides users with a highly interactive and integrated suite of intelligent knowledge engineering tools for decision-theoretic systems. From knowledge acquisition, data mining, and verification and validation to a distributed inference engine for querying knowledge, PESKI is based on the concept of active user interfaces -- actuators to the human-machine interface. PESKI uses a number of techniques to reduce the inherent complexity of developing a cohesive, real-world knowledge-based system. This is accomplished by providing multiple communication modes for human-computer interaction and the use of a knowledge representation endowed with the ability to detect problems with the knowledge acquired and alert the user to these possible problems. We discuss PESKI's use of these intelligent assistants to help users with the acquisition of knowledge, especially in the presence of uncertainty. | [
337,
1413,
1637,
1810,
2738
] | Validation |
1,383 | 4 | Toward Scalability in ASL Recognition: Breaking Down Signs into Phonemes In this paper we present a novel approach to continuous, whole-sentence ASL recognition that uses phonemes instead of whole signs as the basic units. Our approach is based on a sequential phonological model of ASL. According to this model the ASL signs can be broken into movements and holds, which are both considered phonemes. This model does away with the distinction between whole signs and epenthesis movements that we made in previous work [13]. Instead, epenthesis movements are just like the other movements that constitute the signs. We subsequently train Hidden Markov Models (HMMs) to recognize the phonemes, instead of whole signs and epenthesis movements that we recognized previously [13]. Because the number of phonemes is limited, HMM-based training and recognition of the ASL signal becomes computationally more tractable and has the potential to lead to the recognition of large-scale vocabularies. We experimented with a 22 word vocabulary, and we achieved similar recognition r... | [
1239,
1969,
2405
] | Train |
1,384 | 1 | A Generalized Approach to Handling Parameter Interdependencies in Probabilistic Modeling and Reinforcement Learning Optimization Algorithms This paper generalizes our research on parameter interdependencies in reinforcement learning algorithms for optimization problem solving. This generalization expands the work to a larger class of search algorithms that use explicit search statistics to form feasible solutions. Our results suggest that genetic algorithms can both enrich and benefit from probabilistic modeling, reinforcement learning, ant colony optimization or other similar algorithms using values to encode preferences for parameter assignments. The approach is shown to be effective on both the Asymmetric Traveling Salesman and the Quadratic Assignment Problems. Introduction There has been a recent upsurge of interest in a family of search algorithms that store past experience not only as the best solutions generated, but also abstract representations of the decision processes employed. This interest is provoked by a number of factors. First, since solution memory is always limited, search algorithms which store only b... | [
1848
] | Train |
1,385 | 1 | G-Algorithm for Extraction of Robust Decision Rules-Children's Postoperative Intra-Atrial Arrhythmia Case Study Clinical medicine is facing a challenge of knowledge discovery from the growing volume of data. In this paper, a data mining algorithm (G-algorithm) is proposed for extraction of robust rules that can be used in clinical practice for better understanding and prevention of unwanted medical events. The G-algorithm is applied to the data set obtained for children born with a malformation of the heart (univentricular heart). As the result of the Fontan surgical procedure, designed to palliate the children, 10%--35% of patients postoperatively develop an arrhythmia known as the intra-atrial reentrant tachycardia. There is an obvious need to identify the children that may develop the tachycardia before the surgery is performed. Prior attempts to identify such children with statistical techniques have been unrewarding. The G-algorithm discussed in this paper shows that there exists an unambiguous relationship between measurable features and the tachycardia. The data set used in this study shows that, for 78.08% of infants, the occurrence of tachycardia can be accurately predicted. The authors' prior computational experience with diverse medical data sets indicates that the percentage of accurate predictions may become even higher if data on additional features is collected for a larger data set. | [
2655
] | Train |
1,386 | 3 | Text Classification from Labeled and Unlabeled Documents using EM This paper shows that the accuracy of learned text classifiers can be improved by augmenting a small number of labeled training documents with a large pool of unlabeled documents. This is important because in many text classification problems obtaining training labels is expensive, while large quantities of unlabeled documents are readily available. We introduce an algorithm for learning from labeled and unlabeled documents based on the combination of Expectation-Maximization (EM) and a naive Bayes classifier. The algorithm first trains a classifier using the available labeled documents, and probabilistically labels the unlabeled documents. It then trains a new classifier using the labels for all the documents, and iterates to convergence. This basic EM procedure works well when the data conform to the generative assumptions of the model. However these assumptions are often violated in practice, and poor performance can result. We present two extensions to the algorithm that improve classification accuracy under these conditions: (1) a weighting factor to modulate the contribution of the unlabeled data, and (2) the use of multiple mixture components per class. Experimental results, obtained using text from three different real-world tasks, show that the use of unlabeled data reduces classification error by up to 30%. | [
368,
526,
759,
815,
836,
855,
866,
878,
888,
988,
1159,
1366,
2055,
2176,
2371,
2676,
2682,
3119
] | Train |
1,387 | 3 | Socratenon and its Application to the Learning of Italian Language ...students (the product was developed through a cooperation between universities in Salerno and Belgrade). This paper presents the basic elements of the Socratenon application and implementation philosophy, and discusses its possibilities in the general language-learning environment. It describes three different experiments and explains the lessons learned. The stress is on the statistical analysis of the success of those who used our Web-based product and those who relied on the classical approaches. 1 Introduction The rapid growth of the Internet as a medium and of Internet technologies has led to the point where education can be detached from humans and books as the only possessors of knowledge. From the early days, the Internet has been exploited in educational institutions for the dissemination of research results, and knowledge in general. The first shapes were unpolished and required a lot of attention from the users. Such sources of knowledge also included wrestling with several resources of | [
2403
] | Validation |
1,388 | 1 | On Representation and Approximation of Operations in Boolean Algebras Several universal approximation and universal representation results are known for non-Boolean multi-valued logics such as fuzzy logics. In this paper, we show that similar results can be proven for multi-valued Boolean logics as well. Introduction. The study of Boolean algebras started when G. Boole [1] considered the simplest Boolean algebra -- the two-valued algebra B(2) = {0, 1} (= {false, true}). In this algebra, the operations ∪, ∩, and negation a′ (¬a) have the direct logical meaning of "or", "and", and "not". It is known that in this Boolean algebra, an arbitrary operation, i.e., an arbitrary function B × ... × B → B, can be represented as a superposition of these three basic logical operations: e.g., the implication a → b can be represented as b ∪ ¬a, etc. Logic is still one of the areas of application of Boolean algebras, but, starting from the classical Kolmogorov's monograph [4], Boolean algebras -- namely, algebras of events -- became an important tool i... | [
1636
] | Validation |
1,389 | 1 | The Omnipresence of Case-Based Reasoning in Science and Application A surprisingly large number of research disciplines have contributed towards the development of knowledge on lazy problem solving, which is characterized by its storage of ground cases and its demand-driven response to queries. Case-based reasoning (CBR) is an alternative, increasingly popular approach for designing expert systems that implements this approach. This paper lists pointers to some contributions in some related disciplines that offer insights for CBR research. We then outline a small number of Navy applications based on this approach that demonstrate its breadth of applicability. Finally, we list a few successful and failed attempts to apply CBR, and list some predictions on the future roles of CBR in applications. 1 Case-Based Reasoning Case-based reasoning (CBR) is a multi-disciplinary subject that focuses on the reuse of experiences (i.e., cases). It is difficult to find consensus on more detailed definitions of CBR because it means different things to different groups of people. For example, consider its interpretation by the following three groups: Cognitive Scientists: CBR is a plausible high-level model for cognitive processing (Kolodner, | [
670,
909,
1737,
2079,
2641,
2823
] | Train |
1,390 | 0 | Mobile Agent Organizations Mobile agents are a useful paradigm -- and not only a useful technology -- for the development of complex Internet applications. However, the effective development of mobile agent applications requires suitable models and infrastructures. This paper proposes an organizational approach to the high-level design of mobile agent applications. The idea is to model the Internet as a multiplicity of local and active organizational contexts, intended as the places where the coordination activities of application agents occur and are ruled. The paper discusses the advantages and the generality of such an approach, also with the help of a case study in the area of tourist assistance. | [
275
] | Train |
1,391 | 5 | AI in Medicine on its way from knowledge-intensive to data-intensive systems The last 20 years of research and development in the field of artificial intelligence in medicine show a path from knowledge-intensive systems, which try to capture the essential knowledge of experts in a knowledge-based system, to the data-intensive systems available today. Nowadays enormous amounts of information are accessible electronically. Large data sets are collected by continuously monitoring the physiological parameters of patients. Knowledge-based systems are needed to make use of all these available data and to help us cope with the information explosion. In addition, temporal data analysis and intelligent information visualization can help us get a summarized view of the change over time of clinical parameters. Integrating AIM modules into the daily-routine software environment of our care providers gives us a great chance for maintaining and improving quality of care. 1 AIM: a partial view of its scope and potential This paper gives a personalized view of research and develop... | [
2266,
2502
] | Train |
1,392 | 2 | Supporting Distributed Cooperative Work in CAGIS This paper describes how the CAGIS environment can be used to manage work-processes, cooperative processes, and how to share and control information in a distributed, heterogeneous environment. We have used a conference organising process as a scenario and applied our CAGIS environment on this process. The CAGIS environment consists of three main parts: a document management system, a process management system, and a transaction management system. Keywords: Web-based software engineering, Internet computing: JAVA, XML, Intelligent agent software, Database systems, Document modelling, Process modelling, Transaction modelling. 1 Introduction After the introduction of the Internet, more and more projects are taking place in heterogeneous environments where both people, information and working processes are distributed. Work is often dynamic and cooperative and involves multiple actors with different kinds of needs. In these settings there is a need to help people coordinate their ... | [
1164
] | Train |
1,393 | 2 | Intelligent Anticipated Exploration of Web Sites In this paper we describe a web search agent, called Global Search Agent (hereafter GSA for short). GSA integrates and enhances several search techniques in order to achieve significant improvements in the user-perceived quality of delivered information as compared to usual web search engines. GSA features intelligent merging of relevant documents from different search engines, anticipated selective exploration and evaluation of links from the current result set, and automated derivation of refined queries based on user relevance feedback. The system architecture as well as experimental accounts are also illustrated. | [
471,
906,
2188,
2661
] | Train |
1,394 | 2 | Web Page Classification Using Spatial Information Extracting and processing information from web pages is an important task in many areas like constructing search engines, information retrieval, and data mining from the Web. A common approach in the extraction process is to represent a page as a "bag of words" and then to perform additional processing on such a flat representation. In this paper we propose a new, hierarchical representation that includes browser screen coordinates for every HTML object in a page. Such spatial information allows the definition of heuristics for the recognition of common page areas such as the header, left and right menu, footer and center of a page. We show a preliminary experiment where our heuristics are able to correctly recognize objects in 73% of cases. Finally, we show that a Naive Bayes classifier, taking into account the proposed representation, clearly outperforms the same classifier using only information about the content of documents. | [
1987,
2654
] | Train |
1,395 | 3 | PVM: Parallel View Maintenance Under Concurrent Data Updates of Distributed Sources Data warehouses (DW) are built by gathering information from distributed information sources (ISs) and integrating it into one customized repository. In recent years, work has begun to address the problem of view maintenance of DWs under concurrent data updates of different ISs. Popular solutions such as ECA and Strobe achieve such concurrent maintenance however with the requirement of quiescence of the ISs. More recently, the SWEEP solution releases this quiescence requirement using a local compensation strategy that now processes all update messages in a sequential manner. To optimize upon this sequential processing, we have developed a parallel view maintenance algorithm, called PVM, that incorporates all benefits of previous maintenance approaches while offering improved performance due to parallelism. In order to perform parallel view maintenance, we have identified two critical issues: (1) detecting maintenance-concurrent data updates in a parallel mode, and (2) correcting the problem that the DW commit order may not correspond to the DW update processing order due to parallel maintenance handling. In this work, we provide solutions to both issues. Given a modular component-based system architecture, we insert a middle-layer timestamp assignment module for detecting maintenance-concurrent data updates without requiring any global clock synchronization. In addition, we introduce the negative counter concept as a simple yet elegant solution to solve the problem of variant orders of committing effects of data updates to the DW. We have proven the correctness of PVM to guarantee that our strategy indeed generates the correct final DW state. We have implemented both SWEEP and PVM in our EVE data warehousing system. Our performance study demonstrates ... | [
1463,
2220,
2438
] | Train |
1,396 | 3 | Performance and Memory-Access Characterization of Data Mining Applications This paper characterizes the performance and memoryaccess behavior of a decision tree induction program, a previously unstudied application used in data mining and knowledge discovery in databases. Performance is studied via RSIM, an execution driven simulator, for three uniprocessor models that exploit Instruction Level Parallelism (ILP) to varying degrees. Several properties of the program are noted. Outof -order dispatch and multiple-issue provide a significant performance advantage: 50%--250% improvement in IPC for out-of-order versus in-order, and 5%--120% improvement in IPC for four-way issue versus singleissue. Multiple-issue provides a greater performance improvement for larger L2 cache sizes, when the program is limited by CPU performance; out-of-order dispatch provides a greater performance improvement for smaller L2 cache sizes. The program has a very small instruction footprint: an 8-kB L1 instruction cache is sufficient to bring the instruction miss rate below 0.1%. A smal... | [
1694
] | Train |
1,397 | 4 | A Learning Agent for Wireless News Access We describe a user interface for wireless information devices, specifically designed to facilitate learning about users' individual interests in daily news stories. User feedback is collected unobtrusively to form the basis for a content-based machine learning algorithm. As a result, the described system can adapt to users' individual interests, reduce the amount of information that needs to be transmitted, and help users access relevant information with minimal effort. Keywords Wireless, intelligent information access, news, user modeling, machine learning. 1. INTRODUCTION Driven by the explosive growth of information available on the Internet, intelligent information access has become a central research area in computer science. The 20 th century is commonly characterized as "The Information Age", and the sheer amount of information readily available today has created novel challenges. Numerous intelligent information agents -- software tools that provide personalized assistanc... | [
2041
] | Train |
1,398 | 2 | Testing Access To External Information Sources in a Mediator Environment This paper discusses the testing of communication in the increasingly important class of distributed information systems that are based on a mediator architecture. A mediator integrates existing information sources into a new application. In order to answer complex queries, the mediator splits them up into subqueries which it sends to the information sources, and it combines the replies to answer the original query. Since the information sources are usually remote, autonomous systems, the access to them can be erroneous, most notably when the information source is subject to modifications. Such errors may result in incorrect behaviour of the whole system. This paper addresses the problem of deciding whether an information source as part of a mediator system was successfully queried or not. The primary contribution is a formal framework for the general information access testing problem. Besides proposing several solutions, it is investigated what the most important quality measures of such testing methods are. Moreover, the practical usability of the presented approaches is demonstrated on a real-world application using Web-based information sources. Several empirical experiments are conducted to compare the presented methods with previous work in the field. | [
371,
529,
2074
] | Validation |
1,399 | 5 | The Complexity of Revising Logic Programs A rule-based program will return a set of answers to each query. An impure program, which includes the Prolog cut "!" and "not(·)" operators, can return different answers if its rules are re-ordered. There are also many reasoning systems that return only the first answer found for each query; these first answers, too, depend on the rule order, even in pure rule-based systems. A theory revision algorithm, seeking a revised rule-base whose expected accuracy, over the distribution of queries, is optimal, should therefore consider modifying the order of the rules. This paper first shows that a polynomial number of training "labeled queries" (each a query paired with its correct answer) provides the distribution information necessary to identify the optimal ordering. It then proves, however, that the task of determining which ordering is optimal, once given this distributional information, is intractable even in trivial situations; e.g., even if each query is an atomic literal, we are... | [
2406
] | Validation |
1,400 | 1 | From Regularization Operators to Support Vector Kernels We derive the correspondence between regularization operators used in Regularization Networks and Hilbert Schmidt Kernels appearing in Support Vector Machines. More specifically, we prove that the Green's Functions associated with regularization operators are suitable Support Vector Kernels with equivalent regularization properties. As a by-product we show that a large number of Radial Basis Functions, namely conditionally positive definite functions, may be used as Support Vector kernels. 1 INTRODUCTION Support Vector (SV) Machines for pattern recognition, regression estimation and operator inversion exploit the idea of transforming into a high dimensional feature space where they perform a linear algorithm. Instead of evaluating this map explicitly, one uses Hilbert Schmidt Kernels k(x, y) which correspond to dot products of the mapped data in the high dimensional space, i.e. k(x, y) = (Φ(x) · Φ(y)) (1) with Φ : Rⁿ → F denoting the map into feature space. Mostly, this m... | [
1122,
2177
] | Test |
1,401 | 4 | Mining High-Speed Data Streams Many organizations today have more than very large databases; they have databases that grow without limit at a rate of several million records per day. Mining these continuous data streams brings unique opportunities, but also new challenges. This paper describes and evaluates VFDT, an anytime system that builds decision trees using constant memory and constant time per example. VFDT can incorporate tens of thousands of examples per second using off-the-shelf hardware. It uses Hoeffding bounds to guarantee that its output is asymptotically nearly identical to that of a conventional learner. We study VFDT's properties and demonstrate its utility through an extensive set of experiments on synthetic data. We apply VFDT to mining the continuous stream of Web access data from the whole University of Washington main campus. | [
545,
1820
] | Validation |
1,402 | 1 | A Hybrid Architecture for Situated Learning of Reactive Sequential Decision Making In developing autonomous agents, one usually emphasizes only (situated) procedural knowledge, ignoring more explicit declarative knowledge. On the other hand, in developing symbolic reasoning models, one usually emphasizes only declarative knowledge, ignoring procedural knowledge. In contrast, we have developed a learning model Clarion, which is a hybrid connectionist model consisting of both localist and distributed representations, based on the two-level approach proposed in Sun (1995). Clarion learns and utilizes both procedural and declarative knowledge, tapping into the synergy of the two types of processes, and enables an agent to learn in situated contexts and generalize resulting knowledge to different scenarios. It unifies connectionist, reinforcement, and symbolic learning in a synergistic way, to perform on-line, bottom-up learning. This summary paper presents one version of the architecture and some results of the experiments. Key Words: hybrid models, sequential decision ... | [
1603
] | Train |
1,403 | 1 | GTM: The Generative Topographic Mapping Latent variable models represent the probability density of data in a space of several dimensions in terms of a smaller number of latent, or hidden, variables. A familiar example is factor analysis, which is based on a linear transformation between the latent space and the data space. In this paper we introduce a form of non-linear latent variable model called the Generative Topographic Mapping, for which the parameters of the model can be determined using the EM algorithm. GTM provides a principled alternative to the widely used Self-Organizing Map (SOM) of Kohonen (1982), and overcomes most of the significant limitations of the SOM. We demonstrate the performance of the GTM algorithm on a toy problem and on simulated data from flow diagnostics for a multi-phase oil pipeline. 1 Introduction Many data sets exhibit significant correlations between the variables. One way to capture such structure is to model the distribution of the data in term... | [
110,
299,
1243,
1445,
1673
] | Validation |
1,404 | 1 | Learning Visual Landmarks for Pose Estimation We present an approach to vision-based mobile robot localization, even without an a-priori pose estimate. This is accomplished by learning a set of visual features called image-domain landmarks. The landmark learning mechanism is designed to be applicable to a wide range of environments. Each landmark is detected as a local extremum of a measure of uniqueness and represented by an appearance-based encoding. Localization is performed using a method that matches observed landmarks to learned prototypes and generates independent position estimates for each match. The independent estimates are then combined to obtain a final position estimate, with an associated uncertainty. Quantitative experimental evidence is presented that demonstrates that accurate pose estimates can be obtained, despite changes to the environment. | [
540,
850
] | Train |
1,405 | 2 | Concept Indexing - A Fast Dimensionality Reduction Algorithm with Applications to Document Retrieval & Categorization In recent years, we have seen a tremendous growth in the volume of text documents available on the Internet, digital libraries, news sources, and company-wide intranets. This has led to an increased interest in developing methods that can efficiently categorize and retrieve relevant information. Retrieval techniques based on dimensionality reduction, such as Latent Semantic Indexing (LSI), have been shown to improve the quality of the information being retrieved by capturing the latent meaning of the words present in the documents. Unfortunately, the high computational requirements of LSI and its inability to compute an effective dimensionality reduction in a supervised setting limit its applicability. In this paper we present a fast dimensionality reduction algorithm, called concept indexing (CI), that is equally effective for unsupervised and supervised dimensionality reduction. CI computes a k-dimensional representation of a collection of documents by first clustering the documents into k groups, and then using the centroid vectors of the clusters to derive the axes of the reduced k-dimensional space. Experimental results show that the dimensionality reduction computed by CI achieves comparable retrieval performance to that obtained using LSI, while requiring an order of magnitude less time. Moreover, when CI is used to compute the dimensionality reduction in a supervised setting, it greatly improves the performance of traditional classification algorithms such as C4.5 and kNN. 1 | [
291,
495,
847,
1234,
1360,
1452,
1726,
2580,
2625,
2680
] | Validation |
1,406 | 2 | Data Visualization, Indexing and Mining Engine - A Parallel Computing Architecture for Information Processing Over the Internet [fragment of the paper's table of contents] 3.2 Spatial Representation Of the System's Networks; 3.3 Display And Interaction Mechanisms; 3.3.1 Overviews, orientation, and network abstraction; 3.3.2 Other Navigation And Orientation Tools; 3.3.3 Head-tracked Stereoscopic Display; 4 Web Access of Geographic Information System Data; 4.1 Functions Provided by GIS2WEB; 4.2 Structure; 5 ParaCrawler --- Parallel Web Searching With Machine Learning; 5.1 A Machine Learning Approach Towards Web Searching; 6 DUSIE --- Interactive Content-based Web Structuring; 6.1 Creating t... | [
1517 ] | Test |
1,407 | 3 | The Gateway System: Uniform Web Based Access to Remote Resources Exploiting our experience developing the WebFlow system, we designed the Gateway system to provide seamless and secure access to computational resources at ASC MSRC. The Gateway follows our commodity components strategy, and it is implemented as a modern three-tier system. Tier 1 is a highlevel front end for visual programming, steering, run-time data analysis and visualization that is built on top of the Web and OO commodity standards. Distributed object-based, scalable, and reusable Web server and Object broker middleware forms Tier 2. Back-end services comprise Tier 3. In particular, access to high-performance computational resources is provided by implementing the emerging standard for metacomputing API. 1. Introduction The last few years have seen the growing power and capability of commodity computing and communication technologies largely driven by commercial distributed information systems. All of them can be abstracted to a three-tier model with largely independent clients c... | [
2136 ] | Validation |
1,408 | 3 | A Possible World Semantics for Disjunctive Databases We investigate the fundamental problem of when a ground atom in a disjunctive database is assumed false. There are basically two different approaches for inferring negative information for disjunctive databases; they are Minker's Generalized Closed World Assumption (GCWA) and Ross and Topor's Disjunctive Database Rule (DDR). A problem with the GCWA is that disjunctive clauses are sometimes interpreted exclusively, even when they are intended for inclusive interpretation. On the other hand, the DDR always interprets disjunctive clauses inclusively. We argue that neither approach is satisfactory. Whether a disjunctive clause is interpreted exclusively or inclusively should be specified explicitly. Negative information should then be inferred according to the stated intent of the disjunctive clauses. A database semantics called PWS is proposed to solve the aforementioned problem. We also show that for propositional databases with no negative clauses, the problem of determining... | [
876 ] | Validation |
1,409 | 0 | Extending Agent UML Protocol Diagrams This paper presents some new features that we propose. Several features deal with reliability, such as triggering actions and exception handling. These new features are in fact proposed due to our work in electronic commerce and in supply chain management [16]. As much as possible, we apply these new features to the needs of the supply chain management example | [
567, 2129, 2747, 2944 ] | Train |
1,410 | 3 | Query Processing for Moving Objects with Space-Time Grid Storage Model With growing popularity of mobile computing devices and wireless communications, managing dynamically changing information about moving objects is becoming feasible. In this paper, we implement a system that manages such information and propose two new algorithms. One is an efficient range query algorithm with a filtering step which efficiently determines if a polyline corresponding to the trajectory of a moving object intersects with a given range. The other is a nearest neighbor query algorithm. We study the performance of the system, which shows that despite the filtering step, for moderately large ranges, the range query algorithm we propose outperforms the algorithm without filtering. | [
1256, 1615 ] | Validation |
1,411 | 0 | Automated Discovery of Concise Predictive Rules for Intrusion Detection This paper details an essential component of a multi-agent distributed knowledge network system for intrusion detection. We describe a distributed intrusion detection architecture, complete with a data warehouse and mobile and stationary agents for distributed problem-solving to facilitate building, monitoring, and analyzing global, spatio-temporal views of intrusions on large distributed systems. An agent for the intrusion detection system, which uses a machine learning approach to automated discovery of concise rules from system call traces, is described. We use a feature vector representation to describe the system calls executed by privileged processes. The feature vectors are labeled as good or bad depending on whether or not they were executed during an observed attack. A rule learning algorithm is then used to induce rules that can be used to monitor the system and detect potential intrusions. We study the performance of the rule learning algorithm on this task with an... | [
428, 1285, 2353, 2482, 2833 ] | Train |
1,412 | 1 | Reinforcement Learning for a Vision Based Mobile Robot Reinforcement learning systems improve behaviour based on scalar rewards from a critic. In this work vision based behaviours, servoing and wandering, are learned through a Q-learning method which handles continuous states and actions. There is no requirement for camera calibration, an actuator model, or a knowledgeable teacher. Learning through observing the actions of other behaviours improves learning speed. Experiments were performed on a mobile robot using a real-time vision system. 1 Introduction Collision free wandering and visual servoing are building blocks for purposeful robot behaviours such as foraging, target pursuit and landmark based navigation. Visual servoing consists of moving some part of a robot to a desired position using visual feedback [15]. Wandering is an environment exploration behaviour [6]. In this work we demonstrate real-time learning of wandering and servoing on a real robot. Learning eliminates the calibration process and leads to flexible behaviour.... | [
134 ] | Train |
1,413 | 4 | Using Explicit Requirements and Metrics for Interface Agent User Model Correction The complexity of current computer systems and software warrants research into methods to decrease the cognitive load on users. Determining how to get the right information into the right form with the right tool at the right time has become a monumental task --- one necessitating intelligent interface agents with the ability to predict the users' needs or intent. An accurate user model is considered necessary for effective prediction of user intent. Methods for maintaining accurate user models are the main thrust of this paper. We describe an approach for dynamically correcting an interface agent's user model based on utility theory. We explicitly take into account an agent's requirements and metrics for measuring the agent's effectiveness of meeting those requirements. Using these requirements and metrics, we develop a requirements utility function that determines when a user model should be corrected and how. We present a correction model based on a multi-agent bidding... | [
171, 1382, 1810, 2337, 2738 ] | Train |
1,414 | 0 | Multiagent Systems Engineering: A Methodology And Language for Designing Agent Systems This paper overviews MaSE and provides a high-level introduction to one critical component used within MaSE, the Agent Modeling Language. Details on the Agent Definition Language and detailed agent design are left for a future paper. | [
139, 473, 1297, 1693, 1716, 1759, 2309, 2343, 2948, 3018, 3108 ] | Train |
1,415 | 2 | Evaluating Database Selection Techniques: A Testbed and Experiment We describe a testbed for database selection techniques and an experiment conducted using this testbed. The testbed is a decomposition of the TREC/TIPSTER data that allows analysis of the data along multiple dimensions, including collection-based and temporal-based analysis. We characterize the subcollections in this testbed in terms of number of documents, queries against which the documents have been evaluated for relevance, and distribution of relevant documents. We then present initial results from a study conducted using this testbed that examines the effectiveness of the gGlOSS approach to database selection. The databases from our testbed were ranked using the gGlOSS techniques and compared to the gGlOSS Ideal(l) baseline and a baseline derived from TREC relevance judgements. We have examined the degree to which several gGlOSS estimate functions approximate these baselines. Our initial results confirm that the gGlOSS estimators are excellent predictors of the Ideal(l) ranks but... | [
521, 1167, 1265, 1432, 2716, 2771, 2775 ] | Test |
1,416 | 5 | Integrated Document Caching and Prefetching in Storage Hierarchies Based on Markov-Chain Predictions Large multimedia document archives may hold a major fraction of their data in tertiary storage libraries for cost reasons. This paper develops an integrated approach to the vertical data migration between the tertiary, secondary, and primary storage in that it reconciles speculative prefetching, to mask the high latency of the tertiary storage, with the replacement policy of the document caches at the secondary and primary storage level, and also considers the interaction of these policies with the tertiary and secondary storage request scheduling. The integrated migration policy is based on a continuous-time Markov chain model for predicting the expected number of accesses to a document within a specified time horizon. Prefetching is initiated only if that expectation is higher than those of the documents that need to be dropped from secondary storage to free up the necessary space. In addition, the possible resource contention at the tertiary and secondary storage is tak... | [
1850, 2037 ] | Test |
1,417 | 5 | On the Non-Linear Optimization of Projective Motion Using Minimal Parameters I address the problem of optimizing projective motion over a minimal set of parameters. Most of the existing works overparameterize the problem. | [
794, 1347 ] | Test |
1,418 | 0 | Antisocial Bidding in Repeated Vickrey Auctions In recent years auctions have become more and more important in the field of multiagent systems as useful mechanisms for resource allocation and task assignment. In many cases the Vickrey (second-price sealed-bid) auction is used as a protocol that prescribes how the individual agents have to interact in order to come to an agreement. The main reasons for choosing the Vickrey auction are the low bandwidth and time consumption due to just one round of bidding and the existence of a dominant bidding strategy under certain conditions. We show that the Vickrey auction, despite its theoretical benefits, is inappropriate if "antisocial" agents participate in the auction process. More specifically, an antisocial attitude for economic agents that makes reducing the profit of competitors their main goal besides maximizing their own profit is introduced. Under this novel condition, agents need to deviate from the dominant truth-telling strategy. This report presents a strategy for bidders... | [
827, 1561, 2231 ] | Train |
1,419 | 1 | A Computational Model of Word Learning from Multimodal Sensory Input How do infants segment continuous streams of speech to discover words of their language? Current theories emphasize the role of acoustic evidence in discovering word boundaries (Cutler 1991; Brent 1999; de Marcken 1996; Friederici & Wessels 1993; see also Bolinger & Gertsman 1957). To test an alternate hypothesis, we recorded natural infant-directed speech from caregivers engaged in play with their pre-linguistic infants centered around common objects. We also recorded the visual context in which the speech occurred by capturing images of these objects. We analyzed the data using two computational models, one of which processed only acoustic recordings, and a second model which integrated acoustic and visual input. The models were implemented using standard speech and vision processing techniques enabling the models to process sensory data. We show that using visual context in conjunction with spoken input dramatically improves learning when compared with using acoustic evidence alone.... | [
1986 ] | Validation |
1,420 | 1 | Focused Web Crawling: A Generic Framework for Specifying the User Interest and for Adaptive Crawling Strategies Compared to the standard web search engines, focused crawlers yield good recall as well as good precision by restricting themselves to a limited domain. In this paper, we do not introduce another focused crawler, but we introduce a generic framework for focused crawling consisting of two major components: (1) Specification of the user interest and measuring the resulting relevance of a given web page. The proposed method of specifying the user interest by a formula combining atomic topics significantly improves the expressive power of the user. (2) Crawling strategy. Ordering the links at the crawl frontier is a challenging task since pages of a low relevance may be on a path to highly relevant pages. Thus, tunneling may be necessary. The explicit specification of the user interest allows us to define topic-specific strategies for tunneling. Our system Ariadne is a prototype implementation of the proposed framework. An experimental evaluation of different crawling strategies demonstrates the performance gain obtained by focusing a crawl and by dynamically adapting the focus. 1 | [
471, 1676, 1987, 3077 ] | Train |
1,421 | 0 | Mixed Initiative in Interactions between Software Agents We have been working during the past several years on techniques for modeling the way that software agents can take and release the initiative while interacting together. We are interested in building multiagent systems composed of software agents that can interact with human users in sophisticated ways which are analogous to human conversations. In this paper, we describe two projects we have worked on: a multiagent approach for simulating conversations between software agents, and the Virtual Theater. 1. Introduction The need for software agents that assist users in achieving various tasks, collaborate with them, entertain them, or even act on their behalf is getting greater. Software agents are computer systems that exploit their own knowledge bases, have their own goals and their own capabilities, perform actions, and interact with other agents as well as with people. Autonomy is an essential characteristic of such agents, which they express when they take the initiative. Agents t... | [
2247 ] | Test |
1,422 | 3 | Capturing and Querying Multiple Aspects of Semistructured Data Motivated to a large extent by the substantial and growing prominence of the World-Wide Web and the potential benefits that may be obtained by applying database concepts and techniques to web data management, new data models and query languages have emerged that contend with web data. These models organize data in graphs where nodes denote objects or values and edges are labeled with single words or phrases. Nodes are described by the labels of the paths that lead to them, and these descriptions serve as the basis for querying. This paper proposes an extensible framework for capturing and querying meta-data properties in a semistructured data model. Properties such as temporal aspects of data, prices associated with data access, quality ratings associated with the data, and access restrictions on the data are considered. Specifically, the paper defines an extensible data model and an accompanying query language that provides new facilities for matching, slicing, col... | [
1545 ] | Train |
1,423 | 0 | A Multi-Agent Architecture for an Intelligent Website in Insurance In this paper a multi-agent architecture for intelligent Websites is presented and applied in insurance. The architecture has been designed and implemented using the compositional development method for multi-agent systems DESIRE. The agents within this architecture are based on a generic broker agent model. It is shown how it can be exploited to design an intelligent Website for insurance, developed in co-operation with the software company Ordina Utopics and an insurance company. 1 Introduction An analysis of most current business Websites from the perspectives of marketing and customer relations suggests that Websites should become more active and personalised, just as in the nonvirtual case where contacts are based on human servants. Intelligent agents provide the possibility to reflect at least a number of aspects of the nonvirtual situation in a simulated form and, in addition, enable the use of new opportunities for one-to-one marketing, integrated in the Website. The generic a... | [
555, 2875, 2902 ] | Train |
1,424 | 1 | Case-Based and Symbolic Classification Algorithms Contrary to symbolic learning approaches, which represent a learned concept explicitly, case-based approaches describe concepts implicitly by a pair (CB;sim), i.e. by a measure of similarity sim and a set CB of cases. This poses the question if there are any differences concerning the learning power of the two approaches. In this article we will study the relationship between the case base, the measure of similarity, and the target concept of the learning process. To do so, we transform a simple symbolic learning algorithm (the version space algorithm) into an equivalent case-based variant. The achieved results strengthen the hypothesis of the equivalence of the learning power of symbolic and case-based methods and show the interdependency between the measure used by a case-based algorithm and the target concept. 1 Introduction In this article we want to compare the learning power of two important learning paradigms -- the symbolic and the case-based approach [1, 4]. As a first step ... | [
419, 593, 1284, 1844, 2263 ] | Validation |
1,425 | 0 | Co-X: Defining what Agents Do Together Discussions of agent interactions frequently characterize behavior as coherent, collaborative, cooperative, competitive, or coordinated. We propose a series of formal distinctions among these terms and several others. We argue that all of these are specializations of the more foundational category of correlation, which can be measured by the joint information of a system. We also propose congruence as a category orthogonal to the others, reflecting the degree to which correlation and its specializations satisfy user requirements. Then we explore the degree to which lack of correlation can arise purposefully, and show the need to use formal stochasticity in cases where such lack of correlation is truly necessary (such as in stochastic search). Keywords Coordination, correlation, competition, contention, cooperation, congruence, communication, command, constraint, construction, conversation, stigmergy, agent interaction 1. | [
1774, 3177 ] | Train |
1,426 | 1 | On-Line Analytical Mining of Association Rules With wide applications of computers and automated data collection tools, massive amounts of data have been continuously collected and stored in databases, which creates an imminent need and great opportunities for mining interesting knowledge from data. Association rule mining is one kind of data mining techniques which discovers strong association or correlation relationships among data. The discovered rules may help market basket or cross-sales analysis, decision making, and business management. In this thesis, we propose and develop an interesting association rule mining approach, called on-line analytical mining of association rules, which integrates the recently developed OLAP (on-line analytical processing) technology with some efficient association mining methods. It leads to flexible, multi-dimensional, multi-level association rule mining with high performance. Several algorithms are developed based on this approach for mining various kinds of associations in multi-dimensional ... | [
1956, 3043 ] | Train |
1,427 | 4 | Rivalry and Interference with a Head Mounted Display Perceptual factors that affect monocular, transparent (a.k.a. "see-thru") head mounted displays include binocular rivalry, visual interference, and depth of focus. | [
1659, 2549 ] | Test |
1,428 | 0 | Automatic Landmark Selection for Navigation with Panoramic Vision The use of visual landmarks for robot navigation is a promising field. It is apparent that the success of navigating by visual landmarks depends on the landmarks chosen. This paper reviews a monocular camera system proposed by [Bianco and Zelinsky, 1999] which automatically selects landmarks and uses them for localisation and navigation tasks. The monocular system's landmark selection policy results in a limited area in which the robot can successfully localise itself. A new landmark navigation system which uses a panoramic vision system to overcome this restriction is proposed and the first steps taken in the development of the new system are reported. 1 Introduction Visual navigation is one of the key problems in making successful autonomous robots. Vision as a sensor is the richest source of information about a mobile agent's environment and as such contains information vital to solving navigation problems. One limit to visual navigation is the narrow field of view offered by norma... | [
648, 2152 ] | Train |
1,429 | 1 | Incremental Learning of Control Knowledge For Nonlinear Problem Solving In this paper we advocate a learning method where deductive and inductive strategies are combined to efficiently learn control knowledge. The approach consists of initially bounding the explanation to a predetermined set of problem solving features. Since there is no proof that the set is sufficient to capture the correct and complete explanation for the decisions, the control rules acquired are then refined, if and when applied incorrectly to new examples. The method is especially significant as it applies directly to nonlinear problem solving, where the search space is complete. We present hamlet, a system where we implemented this learning method, within the context of the prodigy architecture. hamlet learns control rules for individual decisions corresponding to new learning opportunities offered by the nonlinear problem solver that go beyond the linear one. These opportunities involve, among other issues, completeness, quality of plans, and opportunistic decision making. Finally, we show empirical results illustrating hamlet's learning performance. | [
312, 1631, 2422 ] | Train |
1,430 | 3 | Model Generation for Natural-Language Semantic Analysis . Semantic analysis refers to the analysis of semantic representations by inference on the basis of semantic information and world knowledge. I present some potential applications of model generators and model generation theorem provers in the construction and analysis of natural-language semantics where both the process of model generation and the computed models are valuable sources of information. I discuss Bry and Torge's hyper-resolution tableaux calculus EP as an approach to model generation for natural-language semantic analysis. 1 Introduction One goal of modern natural-language semantics has been to capture the conditions under which a sentence can be uttered truthfully. In logic-based semantic formalisms such as Discourse Representation Theory [17] or Montague Grammar [22], the representations constructed for natural-language sentences are logical formulas whose truth-conditions are described by their models. The construction of semantic representations requires, amongst oth... | [
67 ] | Train |
1,431 | 2 | AutoDoc: A Search and Navigation Tool for Web-Based Program Documentation We present a search and navigation tool for use with automatically generated program documentation, which builds trails in the information space. Three user interfaces are suggested, which show the web pages in context, and hence better explain the structure of the code. | [
2106 ] | Train |
1,432 | 2 | Server Selection on the World Wide Web We evaluate server selection methods in a Web environment, modeling a digital library which makes use of existing Web search servers rather than building its own index. The evaluation framework portrays the Web realistically in several ways. Its search servers index real Web documents, are of various sizes, cover different topic areas and employ different retrieval methods. Selection is based on statistics extracted from the results of probe queries submitted to each server. We evaluate published selection methods and a new method for enhancing selection based on expected search server effectiveness. Results show CORI to be the most effective of three published selection methods. CORI selection steadily degrades with fewer probe queries, causing a drop in early precision of as much as 0.05 (one relevant document out of 20). Modifying CORI selection based on an estimation of expected effectiveness disappointingly yields no significant improvement in effectiveness. However, modifying COR... | [
521, 1108, 1415, 2275, 2392, 2464, 2556, 2712 ] | Train |
1,433 | 4 | Single Display Groupware: A Model for Co-present Collaboration We introduce a model for supporting collaborative work between people that are physically close to each other. We call this model Single Display Groupware (SDG). In this paper, we describe this model, comparing it to more traditional remote collaboration. We describe the requirements that SDG places on computer technology, and our understanding of the benefits and costs of SDG systems. Finally, we describe a prototype SDG system that we built and the results of a usability test we ran with 60 elementary school children. Keywords CSCW, Single Display Groupware, children, educational applications, input devices, Pad++, KidPad. INTRODUCTION In the early 1970's, researchers at Xerox PARC created an atmosphere in which they lived and worked with technology of the future. When the world's first personal computer, the Alto, was invented, it had only a single keyboard and mouse. This fundamental design legacy has carried through to nearly all modern computer systems. Although networks have... | [
682, 1610, 2322, 2681, 2801 ] | Test |
1,434 | 4 | Tracking in Unprepared Environments for Augmented Reality Systems Many Augmented Reality applications require accurate tracking. Existing tracking techniques require prepared environments to ensure accurate results. This paper motivates the need to pursue Augmented Reality tracking techniques that work in unprepared environments, where users are not allowed to modify the real environment, such as in outdoor applications. Accurate tracking in such situations is difficult, requiring hybrid approaches. This paper summarizes two 3DOF results: a real-time system with a compass -- inertial hybrid, and a non-real-time system fusing optical and inertial inputs. We then describe the preliminary results of 5- and 6-DOF tracking methods run in simulation. Future work and limitations are described. | [
639, 2758 ] | Train |
1,435 | 4 | A Software Model and Specification Language for Non-WIMP User Interfaces We present a software model and language for describing and programming the fine-grained aspects of interaction in a non-WIMP user interface, such as a virtual environment. Our approach is based on our view that the essence of a non-WIMP dialogue is a set of continuous relationships---most of which are temporary. The model combines a data-flow or constraint-like component for the continuous relationships with an event-based component for discrete interactions, which can enable or disable individual continuous relationships. To demonstrate our approach, we present the PMIW user interface management system for non-WIMP interactions, a set of examples running under it, a visual editor for our user interface description language, and a discussion of our implementation and our restricted use of constraints for a performance-driven interactive situation. Our goal is to provide a model and language that captures the formal structure of non-WIMP interactions in the way that various previous te... | [
482, 804 ] | Train |
1,436 | 2 | Proverb: The Probabilistic Cruciverbalist We attacked the problem of solving crossword puzzles by computer: given a set of clues and a crossword grid, try to maximize the number of words correctly filled in. After an analysis of a large collection of puzzles, we decided to use an open architecture in which independent programs specialize in solving specific types of clues, drawing on ideas from information retrieval, database search, and machine learning. Each expert module generates a (possibly empty) candidate list for each clue, and the lists are merged together and placed into the grid by a centralized solver. We used a probabilistic representation throughout the system as a common interchange language between subsystems and to drive the search for an optimal solution. Proverb, the complete system, averages 95.3% words correct and 98.1% letters correct in under 15 minutes per puzzle on a sample of 370 puzzles taken from the New York Times and several other puzzle sources. This corresponds to missing roughly 3 words or 4 l... | [] | Test |
1,437 | 3 | Alternative Representations and Abstractions for Moving Sensors Databases Moving sensors refers to an emerging class of data intensive applications that impacts disciplines such as communication, health-care, scientific applications, etc. These applications consist of a fixed number of sensors that move and produce streams of data as a function of time. They may require the system to match these streams against stored streams to retrieve relevant data (patterns). With communication, for example, a speech-impaired individual might utilize a haptic glove that translates hand signs into written (spoken) words. The glove consists of sensors for different finger joints. These sensors report their location and values as a function of time, producing streams of data. These streams are matched against a repository of spatio-temporal streams to retrieve the corresponding English character or word. The contributions of this study are twofold. First, it introduces a framework to store and retrieve "moving sensors" data. The framework advocates physical data independence and software-reuse. Second, we investigate alternative representations for storage and retrieval of data in support of query processing. We quantify the tradeoff associated with these alternatives using empirical data from RoboCup soccer matches. | [
331, 969, 1239, 1927 ] | Test |
1,438 | 1 | A Case-Based Reasoning Approach to Learning Control The paper describes an application of a case-based reasoning system TA3, to a control task in robotics. It differs from previous methods in its approach to relevancy-based retrieval, which allows for greater flexibility and for improved accuracy in performance. Most successful robotic manipulators are precise, fast, smooth and have high repeatability. Their major drawback is their tendency to have only a limited repertoire of movements that can only be extended by costly inverse kinematic calculations or direct teaching. Our approach also starts with a limited repertoire of movements, but can incrementally add new positions and thus allows for higher flexibility. The proposed architecture is experimentally evaluated on two real world domains, albeit simple, and the results are compared to other machine learning algorithms applied to the same problem. Keywords: Learning control systems, case-based reasoning, flexible manufacturing, inverse kinematic task This research was supported by... | [
227, 623, 631, 966, 2581, 2878 ] | Validation |
1,439 | 2 | Personalized Information Management for Web Intelligence Web intelligence can be defined as the process of scanning and tracking information on the World Wide Web so as to gain competitive advantages. This paper describes a system known as Flexible Organizer for Competitive Intelligence (FOCI) that transforms raw URLs returned by internet search engines into personalized information portfolios. FOCI builds information portfolios by gathering and organizing on-line information according to a user's needs and preferences. Through a novel method called User-Configurable Clustering, a user can personalize his/her portfolios in terms of the content as well as the information structure. The personalized portfolios can then be used to track new information and organize them into appropriate folders accordingly. We show a sample session of FOCI which illustrates how a user may create and personalize an information portfolio according to his/her preferences and how the system discovers novel information groupings while organizing familiar information according to the userdefined themes. | [] | Train |
1,440 | 2 | Statistical Phrases in Automated Text Categorization In this work we investigate the usefulness of n-grams for document indexing in text categorization (TC). We call n-gram a set t_k of n word stems, and we say that t_k occurs in a document d_j when a sequence of words appears in d_j that, after stop word removal and stemming, consists exactly of the n stems in t_k, in some order. Previous research has investigated the use of n-grams (or some variant of them) in the context of specific learning algorithms, and thus has not obtained general answers on their usefulness for TC. In this work we investigate the usefulness of n-grams in TC independently of any specific learning algorithm. We do so by applying feature selection to the pool of all #-grams (# # n), and checking how many n-grams score high enough to be selected in the top # #-grams. We report the results of our experiments, using several feature selection functions and varying values of #, performed on the Reuters-21578 standard TC benchmark. We also report results of making actual use of the selected n-grams in the context of a linear classifier induced by means of the Rocchio method. | [
739,
846,
1446,
1564,
1698
] | Train |
1,441 | 3 | An Analytical Study of Object Identifier Indexing The object identifier index of an object-oriented database system is typically 20% of the size of the database itself, and for large databases, only a small part of the index fits in main memory. To avoid index retrievals becoming a bottleneck, efficient buffering strategies are needed to minimize the number of disk accesses. In this report, we develop analytical cost models which we use to find optimal sizes of index page buffer and index entry cache, for different memory sizes, index sizes, and access patterns. Because existing buffer hit estimation models are not applicable for index page buffering in the case of tree-based indexes, we have also developed an analytical model for index page buffer performance. The cost gain from using the results in this report is typically in the order of 200-300%. Thus, the results should be valuable in optimizers and tools for configuration and tuning of object-oriented database systems. 1 Introduction In a large OODB with logical object i... | [
802,
1121,
1187,
1624,
1803
] | Train |
1,442 | 3 | Extracting Semistructured Data - Lessons Learnt The Yellow Pages Assistant (YPA) is a natural language dialogue system which guides a user through a dialogue in order to retrieve addresses from the Yellow Pages. Part of the work in this project is concerned with the construction of a Backend, i.e. the database extracted from the raw input text that is needed for the online access of the addresses. Here we discuss some aspects involved in this task as well as report on experiences which might be interesting for other projects as well. | [
370,
586
] | Train |
1,443 | 3 | Reasoning With Inconsistency in Structured Text Reasoning with inconsistency involves some compromise on classical logic. There is a range of proposals for logics (called paraconsistent logics) for reasoning with inconsistency each with pros and cons. Selecting an appropriate paraconsistent logic for an application depends on the requirements of the application. Here we review paraconsistent logics for the potentially significant application area of technology for structured text. Structured text is a general concept that is implicit in a variety of approaches to handling information. Syntactically, an item of structured text is a number of grammatically simple phrases together with a semantic label for each phrase. Items of structured text may be nested within larger items of structured text. The semantic labels in a structured text are meant to parameterize a stereotypical situation, and so a particular item of structured text is an instance of that stereotypical situation. Much information is potentially available as st... | [
274,
1632
] | Train |
1,444 | 2 | Parsing As Information Compression By Multiple Alignment, Unification And Search: SP52 This article presents and discusses examples illustrating aspects of the proposition, described in the accompanying article (Wolff, 1998), that parsing may be understood as information compression by multiple alignment, unification and search (ICMAUS). The later examples show that the multiple alignment framework as described in the accompanying article has expressive power which is comparable with other `context sensitive' systems used to represent the syntax of natural languages. In all the examples, the SP52 model, described in the accompanying article, is capable of finding an alignment which is intuitively `correct' and assigning to it a `compression score' which is higher than for any other alignment. The congruence which has been found between this range of alignments produced by a system which is dedicated to information compression and what is judged to be `correct' in terms of linguistic intuition lends support to the hypothesis that linguistic intuition is itself a product o... | [] | Validation |
1,445 | 2 | Document Classification with Unsupervised Artificial Neural Networks Text collections may be regarded as an almost perfect application arena for unsupervised neural networks. This is because many operations computers have to perform on text documents are classification tasks based on noisy patterns. In particular we rely on self-organizing maps which produce a map of the document space after their training process. From geography, however, it is known that maps are not always the best way to represent information spaces. For most applications it is better to provide a hierarchical view of the underlying data collection in form of an atlas where, starting from a map representing the complete data collection, different regions are shown at finer levels of granularity. Using an atlas, the user can easily "zoom" into regions of particular interest while still having general maps for overall orientation. We show that a similar display can be obtained by using hierarchical feature maps to represent the contents of a document archive. These neural networks have a layered architecture where each layer consists of a number of individual self-organizing maps. By this, the contents of the text archive may be represented at arbitrary detail while still having the general maps available for global orientation. | [
721,
1403
] | Test |
1,446 | 1 | Boosting and Rocchio Applied to Text Filtering We discuss two learning algorithms for text filtering: modified Rocchio and a boosting algorithm called AdaBoost. We show how both algorithms can be adapted to maximize any general utility matrix that associates cost (or gain) for each pair of machine prediction and correct label. We first show that AdaBoost significantly outperforms another highly effective text filtering algorithm. We then compare AdaBoost and Rocchio over three large text filtering tasks. Overall both algorithms are comparable and are quite effective. AdaBoost produces better classifiers than Rocchio when the training collection contains a very large number of relevant documents. However, on these tasks, Rocchio runs much faster than AdaBoost. 1 Introduction With the explosion in the amount of information available electronically, information filtering systems that automatically send articles of potential interest to a user are becoming increasingly important. If users indicate their interests to a filtering system... | [
322,
887,
1215,
1440,
1525,
1577,
2461,
2676,
2690,
3107
] | Train |
1,447 | 1 | An FFT-Based Algorithm for Multichannel Blind Deconvolution A new update equation for the general multichannel blind deconvolution (MCBD) of a convolved mixture of source signals is derived. It is based on the update equation for blind source separation (BSS), which has been shown to be an alternative interpretation [1] of the natural gradient applied to the minimization of some mutual information criterion [2]. Computational complexity is held at a minimum by carrying out the separation/equalization task in the frequency domain. The algorithm is compared to similar known blind algorithms and its validity is demonstrated by simulations of real-world acoustic filters. In order to assess the performance of the algorithm, performance measures for multichannel blind deconvolution of signals are given in the paper. 1. INTRODUCTION Blind separation, blind deconvolution and the combination of both, multichannel blind deconvolution, are tasks that have to be carried out in an increasing number of applications, particularly in acoustics and communicati... | [
1921
] | Test |
1,448 | 1 | Algorithm-Directed Exploration for Model-Based Reinforcement Learning in Factored MDPs One of the central challenges in reinforcement learning is to balance the exploration/exploitation tradeoff while scaling up to large problems. Although model-based reinforcement learning has been less prominent than value-based methods in addressing these challenges, recent progress has generated renewed interest in pursuing modelbased approaches: Theoretical work on the exploration /exploitation tradeoff has yielded provably sound model-based algorithms such as E^3 and Rmax, while work on factored MDP representations has yielded model-based algorithms that can scale up to large problems. Recently the benefits of both achievements have been combined in the Factored E^3 algorithm of Kearns and Koller. In this paper, we address a significant shortcoming of Factored E^3: namely that it requires an oracle planner that cannot be feasibly implemented. We propose an alternative approach that uses a practical approximate planner, approximate linear programming, that maintains desirable properties. Further, we develop an exploration strategy that is targeted toward improving the performance of the linear programming algorithm, rather than an oracle planner. This leads to a simple exploration strategy that visits states relevant to tightening the LP solution, and achieves sample efficiency logarithmic in the size of the problem description. Our experimental results show that the targeted approach performs better than using approximate planning for implementing either Factored E^3 or Factored Rmax. | [
157,
1816
] | Train |
1,449 | 3 | Representing School Timetabling in a Disjunctive Logic Programming Language In this paper, we show how school timetabling problems with preferences originating from didactical, organisational and personal considerations can be represented in a highly declarative and natural way, using an extension of disjunctive datalog by strong and weak (integrity) constraints. 1 Introduction Almost all people have come across school timetabling during their lives. For a long time almost all school timetables were created manually, a timeconsuming task, which often yielded suboptimal schedules. In the last thirty years, however, systems have been designed which automate timetable creation. Timetabling in general is the problem of finding suitable combinations of two or more types of resources which have to be at the same place during several discrete periods of time and which have to satisfy various additional constraints. Some of these constraints are strict while some are not (the latter express preferences or desiderata). In the case of school timetabling problems these... | [
907,
3085
] | Train |
1,450 | 3 | The Isomorphism Between a Class of Place Transition Nets and a Multi-Plane State Machine Agent Model Recently we introduced a multi-plane state machine model of an agent, released an implementation of the model, and designed several applications of the agent framework. In this paper we address the translation from the Petri net language to the Blueprint language used for agent description as well as the translation from Blueprint to Petri Nets. The simulation of a class of Place Transition Nets is part of an effort to create an agent-based workflow management system. Contents: 1 Introduction; 2 A Multi-Plane State Machine Agent Model; 2.1 Bond Core; 2.2 Bond Services; 2.3 Bond Agents; 2.4 Using Planes to Implement Facets of Behavior; 3 Simulating a Class of Place-Transition Nets on the Bond ... | [
3020
] | Validation |
1,451 | 1 | A Learning Mobile Robot: Theory, Simulation and Practice . This paper presents an implementation of the SINS multi-strategy learning controller for mobile robot navigation. This controller uses low-level reactive control that is modulated on-line by a learning system based on case-based reasoning and reinforcement learning. The case-based reasoning part captures regularities in the environment. The reinforcement learning part gradually improves the acquired knowledge. Evaluation of the controller is presented in a real and in a simulated mobile robot. 1 Introduction How to specify behaviour in a robot has come a long way since the low-level languages of assembly robotics (Lozano-Perez, 1982). The classical AI approach to control proved too slow and too fragile for the real world but showed that representations of the environment, however difficult to maintain, produce interesting behaviour. In nouvelle AI, e.g. (Brooks, 1985; Brooks, 1991a; Brooks, 1991b), agents merely react to the current environmental situation posed, limited ... | [
2573,
3015
] | Train |
1,452 | 2 | A Probabilistic Model for Dimensionality Reduction in Information Retrieval and Filtering Dimension reduction methods, such as Latent Semantic Indexing (LSI), when applied to semantic spaces built upon text collections, improve information retrieval, information filtering and word sense disambiguation. A new dual probability model based on similarity concepts is introduced to explain the observed success. Semantic associations can be quantitatively characterized by their statistical significance, the likelihood. Semantic dimensions containing redundant and noisy information can be separated out and should be ignored because their contribution to the overall statistical significance is negative, giving rise to LSI: LSI is the optimal solution of the model. The peak in likelihood curve indicates the existence of an intrinsic semantic dimension. The importance of LSI dimensions follows the Zipf-distribution, indicating that LSI dimensions represent latent concepts. Document frequency of words follow the Zipf distribution, and the number of distinct words follows log-normal distribution. Experiments on four standard document collections both confirm and illustrate the results and concepts presented here. | [
222,
1405,
2324
] | Train |
1,453 | 3 | Indexing and Querying XML Data for Regular Path Expressions With the advent of XML as a standard for data representation and exchange on the Internet, storing and querying XML data becomes more and more important. Several XML query languages have been proposed, and the common feature of the languages is the use of regular path expressions to query XML data. This poses a new challenge concerning indexing and searching XML data, because conventional approaches based on tree traversals may not meet the processing requirements under heavy access requests. In this paper, we propose a new system for indexing and storing XML data based on a numbering scheme for elements. This numbering scheme quickly determines the ancestor-descendant relationship between elements in the hierarchy of XML data. We also propose several algorithms for processing regular path expressions, namely, (1) EE-Join for searching paths from an element to another, (2) EA-Join for scanning sorted elements and attributes to find element-attribute pairs, and (3) KC-Join for finding Kleene-Closure on repeated paths or elements. The EE-Join algorithm is highly effective particularly for searching paths that are very long or whose lengths are unknown. Experimental results from our prototype system implementation show that the proposed algorithms can process XML queries with regular path expressions by up to an or- # This work was sponsored in part by National Science Foundation CAREER Award (IIS-9876037) and Research Infrastructure program EIA-0080123. The authors assume all responsibility for the contents of the paper. Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the VLDB copyright notice and the title of the publication and its... | [
661,
781,
930,
1086,
2910,
2966,
3006,
3141
] | Train |
1,454 | 4 | Visual Interpretation of Hand Gestures for Human-Computer Interaction: A Review The use of hand gestures provides an attractive alternative to cumbersome interface devices for human-computer interaction (HCI). In particular, visual interpretation of hand gestures can help in achieving the ease and naturalness desired for HCI. We survey the literature on vision-based hand gesture recognition within the context of its role in HCI. The number of approaches to video-based hand gesture recognition has grown in recent years. Thus, the need for systematization and analysis of different aspects of gestural interaction has developed. We discuss a complete model of hand gestures that possesses both spatial and dynamic properties of human hand gestures and can accommodate for all their natural types. Two classes of models that have been employed for interpretation of hand gestures for HCI are considered. The first utilizes 3D models of the human hand, while the second relies on the appearance of the human hand in the image. Investigation of model parameters and analysis feat... | [
76,
120,
258,
386,
506,
696,
1080,
1259,
1651,
1778,
2001,
2114,
2376,
2493,
2893,
3158
] | Train |
1,455 | 2 | RAW: A Relational Algebra for the Web The main idea underlying the paper is to extend the relational algebra such that it becomes possible to process queries against the World-Wide Web. These extensions are minor in that we tried to keep them at the domain level. In addition to the known domains (int, bool, float, string), we introduce three new domains to deal with URLs, html-documents or fragments thereof, and path expressions. Over these domains we define several functions that are accessible from the algebra within the subscripts of the relational operators. The approach allows us to reuse the operators of the relational algebra without major modifications. Indeed, the only extension necessary is the introduction of a map operator. Further, two modifications to the scan and the indexscan are necessary. Finally, the indexscan, which has the functionality of a typical meta-search engine, is capable of computing a unified rank based on the tuple order provided by the underlying search engines. 1 Introduction The Web [2] w... | [
1120,
1843,
2864
] | Train |
1,456 | 2 | About Knowledge Discovery in Texts: A Definition and an Example This paper claims that Knowledge Discovery in texts (KDT) is a new scientific topic, stemming from Knowledge Discovery in DataBases (KDD), both of them relying on a rather specific definition of what "knowledge" is (it has to be, as it is often said, understandable and useful, and even surprising). We shall rapidly explore this definition of knowledge, and apply it to a definition of what KDT is. We shall then illustrate on a detailed example what kind of new results KDT can bring, showing their interest and that they have little to do with the results of the long existing Natural Language Processing (NLP) research field. | [
516
] | Train |
1,457 | 2 | Reactive Tuple Spaces for Mobile Agent Coordination Mobile active computational entities introduce peculiar problems in the coordination of distributed application components. The paper surveys several coordination models for mobile agent applications and outlines the advantages of uncoupled coordination models based on reactive blackboards. On this base, the paper presents the design and the implementation of the MARS system, a coordination tool for Java-based mobile agents. MARS defines Linda-like tuple spaces that can be programmed to react with specific actions to the accesses made by mobile agents. Keywords: Mobile Agents, Coordination, Reactive Tuple Spaces, Java, WWW Information Retrieval 1. Introduction Traditional distributed applications are designed as a set of processes statically assigned to given execution environments and cooperating in a (mostly) network-unaware fashion [Adl95]. The mobile agent paradigm, instead, defines applications composed by network-aware entities (agents) capable of changing their execution env... | [
686,
2626
] | Test |
1,458 | 5 | Dynamic Logic Programming In this paper we investigate updates of knowledge bases represented by logic programs. In order to represent negative information, we use generalized logic programs which allow default negation not only in rule bodies but also in their heads. We start by introducing the notion of an update $P\oplus U$ of a logic program $P$ by another logic program $U$. Subsequently, we provide a precise semantic characterization of $P\oplus U$, and study some basic properties of program updates. In particular, we show that our update programs generalize the notion of interpretation update. We then extend this notion to compositional sequences of logic programs updates $P_{1}\oplus P_{2}\oplus \dots $, defining a dynamic program update, and thereby introducing the paradigm of \emph{dynamic logic programming}. This paradigm significantly facilitates modularization of logic programming, and thus modularization of non-monotonic reasoning as a whole. Specifically, suppose that we are given a set of logic program modules, each describing a different state of our knowledge of the world. Different states may represent different time points or different sets of priorities or perhaps even different viewpoints. Consequently, program modules may contain mutually contradictory as well as overlapping information. The role of the dynamic program update is to employ the mutual relationships existing between different modules to precisely determine, at any given module composition stage, the declarative as well as the procedural semantics of the combined program resulting from the modules. | [
441,
1192
] | Train |
1,459 | 0 | The BOID Architecture - Conflicts Between Beliefs, Obligations, Intentions and Desires In this paper we introduce the so-called Beliefs-Obligations-Intentions-Desires or BOID architecture. It contains feedback loops to consider all effects of actions before committing to them, and mechanisms to resolve conflicts between the outputs of its four components. Agent types such as realistic or social agents correspond to specific types of conflict resolution embedded in the BOID architecture. | [
716,
2364,
3177
] | Train |
1,460 | 3 | Extraction of Semantic XML DTDs from Texts Using Data Mining Techniques Although composed of unstructured texts, documents contained in textual archives such as public announcements, patient records and annual reports to shareholders often share an inherent though undocumented structure. In order to facilitate efficient, structure-based search in archives and to enable information integration of text collections with related data sources, this inherent structure should be made explicit as detailed as possible. Inferring a semantic and structured XML document type definition (DTD) for an archive and subsequently transforming the corresponding texts into XML documents is a successful method to achieve this objective. The main contribution of this paper is a new method to derive structured XML DTDs in order to extend previously derived flat DTDs. We use the DIAsDEM framework to derive a preliminary, unstructured XML DTD whose components are supported by a large number of documents. However, all XML tags contained in this preliminary DTD cannot a priori be assumed to be mandatory. Additionally, there is no fixed order of XML tags and automatically tagging an archive using a derived DTD always implicates tagging errors. Hence, we introduce the notion of probabilistic XML DTDs whose components are assigned probabilities of being semantically and structurally correct. Our method for establishing a probabilistic XML DTD is based on discovering associations between, resp. frequent sequences of XML tags. Keywords semantic annotation, XML, DTD derivation, knowledge discovery, data mining, clustering | [
1229,
1684
] | Train |
1,461 | 1 | Greedy Function Approximation: A Gradient Boosting Machine Function approximation is viewed from the perspective of numerical optimization in function space, rather than parameter space. A connection is made between stagewise additive expansions and steepest--descent minimization. A general gradient--descent "boosting" paradigm is developed for additive expansions based on any fitting criterion. Specific algorithms are presented for least--squares, least--absolute--deviation, and Huber--M loss functions for regression, and multi--class logistic likelihood for classification. Special enhancements are derived for the particular case where the individual additive components are decision trees, and tools for interpreting such "TreeBoost" models are presented. Gradient boosting of decision trees produces competitive, highly robust, interpretable procedures for regression and classification, especially appropriate for mining less than clean data. Connections between this approach and the boosting methods of Freund and Schapire 1996, and Friedman, Has... | [
545
] | Train |
1,462 | 2 | Computationally Private Information Retrieval with Polylogarithmic Communication We present a single-database computationally private information retrieval scheme with polylogarithmic communication complexity. Our construction is based on a new, but reasonable intractability assumption, which we call the Φ-Hiding Assumption (ΦHA): essentially the difficulty of deciding whether a small prime divides φ(m), where m is a composite integer of unknown factorization. Keywords: Integer factorization, Euler's function, Φ-hiding assumption, private information retrieval. 1 Introduction Private information retrieval. The notion of private information retrieval (PIR for short) was introduced by Chor, Goldreich, Kushilevitz and Sudan [CGKS95] and has already received a lot of attention. The study of PIR is motivated by the growing concern about the user's privacy when querying a large commercial database. (The problem was independently studied by Cooper and Birman [CB95] to implement an anonymous messaging service for mobile users.) Ideally, the PIR problem consists... | [
2708
] | Train |
1,463 | 3 | DyDa: Dynamic Data Warehouse Maintenance in a Fully Concurrent Environment Data warehouse is an emerging technology to support high-level decision making by gathering data from several distributed information sources into one repository. In dynamic environments, data warehouses must be maintained in order to stay consistent with the underlying sources. Recently proposed view maintenance algorithms tackle the problem of data warehouse maintenance under concurrent source data updates, while view synchronization handles non-concurrent source schema changes. However, the concurrency between interleaved schema changes and data updates still remains an unexplored problem. In this paper, we propose a solution framework called DyDa that successfully addresses this problem. The DyDa framework detects concurrent schema changes by a broken query scheme and conflicting concurrent data updates by a local timestamp scheme. A fundamental idea of the DyDa framework is the development of a two-layered architecture that separates the concerns for concurrent data updates and concurrent schema changes handling without imposing any restrictions on the source update transactions. At the lower level of the framework, it employs a local compensation algorithm to handle concurrent data updates, and a metadata name mapping strategy to handle concurrent source rename operations. At the higher level, it addresses the problem of concurrent source drop operations. For the latter problem, we design a strategy for the detection and correction of such concurrency and find an executable plan for the affected updates. We then develop a new view adaptation algorithm, called Batch-VA, for execution of such plan to incrementally adapt the view. Put together, these algorithms are the first to provide a complete solution to data warehouse management in a fully concurrent environment.... | [
826,
1018,
1395,
1701,
2220
] | Train |
1,464 | 4 | MIND-WARPING: Towards Creating a Compelling Collaborative Augmented Reality Game Computer gaming offers a unique test-bed and market for advanced concepts in computer science, such as Human Computer Interaction (HCI), computer-supported collaborative work (CSCW), intelligent agents, graphics, and sensing technology. In addition, computer gaming is especially wellsuited for explorations in the relatively young fields of wearable computing and augmented reality (AR). This paper presents a developing multi-player augmented reality game, patterned as a cross between a martial arts fighting game and an agent controller, as implemented using the Wearable Augmented Reality for Personal, Intelligent, and Networked Gaming (WARPING) system. Through interactions based on gesture, voice, and head movement input and audio and graphical output, the WARPING system demonstrates how computer vision techniques can be exploited for advanced, intelligent interfaces. Keywords Augmented reality, wearable computing, computer vision 1. INTRODUCTION: WHY GAMES? Computer gaming provides... | [
241,
727,
2679
] | Train |
1,465 | 2 | User Modelling as an Aid for Human Web This paper explores how user modelling can work as an aid for human assistants in a user support system for web sites. Information about the user can facilitate for the assistant the tailoring of the consultation to the individual needs of the user. Such information can be represented and structured in a user model made available for the assistant. A user modelling approach has been implemented and deployed in a real web environment as part of a user support system. Following the deployment we have analysed consultation dialogue logs and answers to a questionnaire for participating assistants. The initial results show that assistants consider user modelling to be helpful and that consultation dialogues can be an important source for user model data collection. | [
1470
] | Train |
1,466 | 2 | Using Common Hypertext Links to Identify the Best Phrasal Description of Target Web Documents This paper describes previous work which studied and compared the distribution of words in web documents with the distribution of words in "normal" flat texts. Based on the findings from this study it is suggested that the traditional IR techniques cannot be used for web search purposes the same way they are used for "normal" text collections, e.g. news articles. Then, based on these same findings, I will describe a new document description model which exploits valuable anchor text information provided on the web that is ignored by the traditional techniques. The problem Amitay (1997) has found, through a corpus analysis of 1000 web pages, that the lexical distribution in documents which were written especially for the web (home pages) is significantly different from the lexical distribution observed in a corpus of "normal" English language (the British National Corpus - 100,000,000 words). For example, in the web documents collection there were some HTML files which contained no v... | [
124,
2433,
2503,
2705
] | Train |
1,467 | 1 | Speeding up Relational Reinforcement Learning Through the Use of an Incremental First Order Decision Tree Learner Relational reinforcement learning (RRL) is a learning technique that combines standard reinforcement learning with inductive logic programming to enable the learning system to exploit structural knowledge about the application domain. | [
352,
833
] | Test |
1,468 | 3 | Evaluating Functional Joins Along Nested Reference Sets in Object-Relational and Object-Oriented Databases Previous work on functional joins was constrained in two ways: (1) all approaches we know assume references being implemented as physical object identifiers (OIDs) and (2) most approaches are, in addition, limited to single-valued reference attributes. Both are severe limitations since most object-relational and all object-oriented database systems do support nested reference sets and many object systems do implement references as location-independent (logical) OIDs. In this work, we develop a new functional join algorithm that can be used for any realization form for OIDs (physical or logical) and is particularly geared towards supporting functional joins along nested reference sets. The algorithm can be applied to evaluate joins along arbitrarily long path expressions which may include one or more reference sets. The new algorithm generalizes previously proposed partition-based pointer joins by repeatedly applying partitioning with interleaved re-merging before evaluating the next fu... | [
280
] | Train |
1,469 | 4 | Towards Visually Mediated Interaction using Appearance-Based Models . This paper reports initial research on supporting Visually Mediated Interaction (VMI) by developing generic expression models and person-specific and generic gesture models for the control of active cameras. We investigate the recognition of both head pose and expression using simple generalisation of trained generic models using Radial Basis Function (RBF) networks. Then we go on to describe a time-delay variant (TDRBF) of the network and evaluate its performance on recognising simple pointing and waving hand gestures in image sequences. Experimental results are presented that show that high levels of performance in gesture recognition can be obtained using these techniques, both for particular individuals and across a set of individuals. Characteristic visual evidence can be automatically selected and used even to recognise individuals from their gestures, depending on the task demands. 1 Introduction This paper reports initial research on supporting Visually Mediated ... | [
1963
] | Validation |
1,470 | 2 | Collection and Exploitation of Expert Knowledge in Web Assistant Systems Recent research and commercial developments have highlighted the importance of human involvement in user support for web information systems. In our earlier work a web assistant system has been introduced, which is a hybrid support system with human web assistants and computer-based support. An important issue with web assistant systems is how to make optimal use of these support resources. We use a knowledge management approach with frequently asked questions for a question-answering system that acts as a question filter for the human assistants. Knowledge is continuously collected from the assistants and exploited to augment the question-answering capabilities. Our system has been deployed and evaluated by an analysis of conversation logs and questionnaires for users and assistants. The results show that our approach is feasible and useful. Lessons learned are summarised in a set of recommendations. | [
1465,
3039
] | Test |
1,471 | 2 | Searchable Words on the Web In designing data structures for text databases, it is valuable to know how many different words are likely to be encountered in a particular collection. For example, vocabulary accumulation is central to index construction for text database systems; it is useful to be able to estimate the space requirements and performance characteristics of the main-memory data structures used for this task. However, it is not clear how many distinct words will be found in a text collection or whether new words will continue to appear after inspecting large volumes of data. We propose practical definitions of a word, and investigate new word occurrences under these models in a large text collection. We inspected around two billion word occurrences in 45 gigabytes of world-wide web documents, and found just over 9.74 million different words in 5.5 million documents; overall, 1 word in 200 was new. We observe that new words continue to occur, even in very large data sets, and that choosing stricter definitions of what constitutes a word has only limited impact on the number of new words found. | [
2630
] | Validation |
1,472 | 4 | Perceptual User Interfaces For some time, graphical user interfaces (GUIs) have been the dominant platform for human computer interaction. The GUI-based style of interaction has made computers simpler and easier to use, especially for office productivity applications where computers are used as tools to accomplish specific tasks. However, as the way we use computers changes and computing becomes more pervasive and ubiquitous, GUIs will not easily support the range of interactions necessary to meet users ’ needs. In order to accommodate a wider range of scenarios, tasks, users, and preferences, we need to move toward interfaces that are natural, intuitive, adaptive, and unobtrusive. The aim of a new focus in HCI, called Perceptual User Interfaces (PUIs), is to make human-computer interaction more like how people interact with each other and with the world. This paper describes the emerging PUI field and then reports on three PUI-motivated projects: computer vision-based techniques to visually perceive relevant information about the user. 1. | [
1327,
2376
] | Test |
1,473 | 1 | A Comparison between ATNoSFERES and XCSM In this paper we present ATNoSFERES, a new framework based on an indirect encoding Genetic Algorithm which builds finite-state automata controllers able to deal with perceptual aliasing. We compare it with XCSM, a memory-based extension of the most studied Learning Classifier System, XCS, through a benchmark experiment. We then discuss the assets and drawbacks of ATNoSFERES in the context of that comparison. | [
1293
] | Validation |
1,474 | 5 | CBR for Dynamic Situation Assessment in an Agent-Oriented Setting . In this paper, we describe an approach of using case-based reasoning in an agent-oriented setting. CBR is used in a real-time environment to select actions of soccer players based on previously collected experiences encoded as cases. Keywords: Case-based reasoning, artificial soccer, agent-oriented programming. 1 Introduction In recent years, agent-oriented technologies have caught much attention both in research and commercial areas. To provide a testbed for developing, evaluating, and testing various agent architectures, RoboCup Federation started the Robot World Cup Initiative (RoboCup), which is an attempt to foster AI and intelligent robotics research by providing a somewhat standardized problem where wide range of technologies from AI and robotics can be integrated and examined. In particular, during the annual RoboCup championships different teams utilizing different models and methods compete with each other in the domain of soccer playing. Key issues in this domain are that... | [
2213,
2364
] | Train |
1,475 | 0 | Toward Team-Oriented Programming The promise of agent-based systems is leading towards the development of autonomous, heterogeneous agents, designed by a variety of research/industrial groups and distributed over a variety of platforms and environments. | [
1724
] | Train |
1,476 | 0 | Design & Specification of Dynamic, Mobile, and Reconfigurable Multiagent Systems Multiagent Systems use the power of collaborative software agents to solve complex distributed problems. There are many Agent-Oriented Software Engineering (AOSE) methodologies available to assist system designers to create multiagent systems. However, none of these methodologies can specify agents with dynamic properties such as cloning, mobility or agent instantiation. This thesis starts the process to bridge the gap between AOSE methodologies and dynamic agent platforms by incorporating mobility into the current Multiagent Systems Engineering (MaSE) methodology. Mobility was specified within all components composing a mobile agent class. An agent component was also created that integrated the behavior of the components within an agent class and was transformed to handle most of the move responsibilities for a mobile agent. Those agent component and component mobility transformations were integrated into agentTool as a proof-of-concept and a demonstration system built on the mobility specifications was implemented for execution on the Carolina mobile agent platform. 1 DESIGN & SPECIFICATION OF DYNAMIC, MOBILE, AND RECONFIGURABLE MULTIAGENT SYSTEMS I. | [
1759
] | Train |
1,477 | 1 | The Chromatic Structure of Natural Scenes We applied Independent Component Analysis (ICA) to hyperspectral images in order to learn an efficient representation of color in natural scenes. In the spectra of single pixels, the algorithm found basis functions that had broadband spectra, as well as basis functions that were similar to natural reflectance spectra. When applied to small image patches, the algorithm found basis functions that were achromatic and others with overall chromatic variation along lines in color space, indicating color opponency. The directions of opponency were not strictly orthogonal. Comparison with Principal Component Analysis (PCA) on the basis of statistical measures such as average mutual information, kurtosis and entropy, shows that the ICA transformation results in much sparser coefficients and gives higher coding efficiency. Our findings suggest that non-orthogonal opponent encoding of photoreceptor signals leads to higher coding efficiency, and that ICA may be used to reveal the underlying stati... | [
2881
] | Train |
1,478 | 2 | Using Unlabeled Data to Improve Text Classification One key difficulty with text classification learning algorithms is that they require many hand-labeled examples to learn accurately. This dissertation demonstrates that supervised learning algorithms that use a small number of labeled examples and many inexpensive unlabeled examples can create high-accuracy text classifiers. By assuming that documents are created by a parametric generative model, Expectation-Maximization (EM) finds local maximum a posteriori models and classifiers from all the data -- labeled and unlabeled. These generative models do not capture all the intricacies of text; however on some domains this technique substantially improves classification accuracy, especially when labeled data are sparse. Two problems arise from this basic approach. First, unlabeled data can hurt performance in domains where the generative modeling assumptions are too strongly violated. In this case the assumptions can be made more representative in two ways: by modeling sub-topic class structure, and by modeling super-topic hierarchical class relationships. By doing so, model probability and classification accuracy come into correspondence, allowing unlabeled data to improve classification performance. The second problem is that even with a representative model, the improvements given by unlabeled data do not sufficiently compensate for a paucity of labeled data. Here, limited labeled data provide EM initializations that lead to low-probability models. Performance can be significantly improved by using active learning to select high-quality initializations, and by using alternatives to EM that avoid low-probability local maxima. | [
603,
815,
866,
1290,
1366,
2033,
2096,
2100,
2749,
3115,
3119
] | Test |
1,479 | 1 | Style Machines We approach the problem of stylistic motion synthesis by learning motion patterns from a highly varied set of motion capture sequences. Each sequence may have a distinct choreography, performed in a distinct style. Learning identifies common choreographic elements across sequences, the different styles in which each element is performed, and a small number of stylistic degrees of freedom which span the many variations in the dataset. The learned model can synthesize novel motion data in any interpolation or extrapolation of styles. For example, it can convert novice ballet motions into the more graceful modern dance of an expert. The model can also be driven by video, by scripts, or even by noise to generate new choreography and synthesize virtual motion-capture in many styles. In Proceedings of SIGGRAPH 2000, July 23-28, 2000. New Orleans, Louisiana, USA. This work may not be copied or reproduced in whole or in part for any commercial purpose. Permission to copy in whole o... | [
110,
401
] | Train |