node_id: int64 (values 0 – 76.9k)
label: int64 (values 0 – 39)
text: string (lengths 13 – 124k)
neighbors: list (lengths 0 – 3.32k)
mask: string (4 classes)
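The schema above can be sketched as a simple record type. This is a minimal illustration only: the field names and value ranges come from the schema, but the `Node` class, the abridged sample rows, and the split filter are assumptions made for illustration, not part of the dataset.

```python
from dataclasses import dataclass

@dataclass
class Node:
    node_id: int     # int64, roughly 0 .. 76.9k
    label: int       # int64 class index, 0 .. 39
    text: str        # title + abstract, 13 .. ~124k characters
    neighbors: list  # linked node ids, 0 .. ~3.32k entries per row
    mask: str        # split name; 4 classes (Train/Validation/Test appear in this excerpt)

# Abridged sample rows modeled on the records below (texts truncated).
rows = [
    Node(1880, 1, "Coordinated Reinforcement Learning ...", [22, 215, 1487, 1775], "Train"),
    Node(1883, 2, "Searching the Web ...", [488, 1108, 1321, 1755, 2503, 2558], "Test"),
    Node(1885, 0, "Bisimulation Congruences in Safe Ambients ...", [], "Train"),
]

# Typical use: select the training split for a node-classification task.
train_ids = [n.node_id for n in rows if n.mask == "Train"]
print(train_ids)  # [1880, 1885]
```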
1,880
1
Coordinated Reinforcement Learning We present several new algorithms for multiagent reinforcement learning. A common feature of these algorithms is a parameterized, structured representation of a policy or value function. This structure is leveraged in an approach we call coordinated reinforcement learning, by which agents coordinate both their action selection activities and their parameter updates. Within the limits of our parametric representations, the agents will determine a jointly optimal action without explicitly considering every possible action in their exponentially large joint action space. Our methods differ from many previous reinforcement learning approaches to multiagent coordination in that structured communication and coordination between agents appears at the core of both the learning algorithm and the execution architecture. Our experimental results, comparing our approach to other RL methods, illustrate both the quality of the policies obtained and the additional benefits of coordination.
[ 22, 215, 1487, 1775 ]
Train
1,881
5
An Experimental Comparison of Localization Methods Localization is the process of updating the pose of a robot in an environment, based on sensor readings. In this experimental study, we compare two recent methods for localization of indoor mobile robots: Markov localization, which uses a probability distribution across a grid of robot poses; and scan matching, which uses Kalman filtering techniques based on matching sensor scans. Both these techniques are dense matching methods, that is, they match dense sets of environment features to an a priori map. To arrive at results for a range of situations, we utilize several different types of environments, and add noise to both the dead-reckoning and the sensors. Analysis shows that, roughly, the scan-matching techniques are more efficient and accurate, but Markov localization is better able to cope with large amounts of noise. These results suggest hybrid methods that are efficient, accurate and robust to noise. 1. Introduction To carry out tasks, such as delivering objects, an indoor ro...
[ 557, 1093, 2999 ]
Train
1,882
4
A Highly Adaptable Infrastructure for Service Discovery and Management in Ubiquitous Computing In an age where wirelessly networked appliances and devices are becoming commonplace, there is a necessity for providing a standard interface to them that is easily accessible by any mobile user. The design outlined in this paper provides an infrastructure and communication protocol for presenting services to heterogeneous mobile clients in a physical space via short-range wireless links. This system uses a Communication Manager to communicate with the client devices. The Communication Manager can be modified easily to work with any type of communication medium, including TCP/IP, Infrared, CDPD and Bluetooth. All the components in our model use a language based on Extensible Markup Language (XML), giving it a uniform and easily adaptable interface. We explain our trade-offs in implementation, and through experiments we show that the design is feasible and that it indeed provides a flexible structure for providing services. Centaurus defines a uniform infrastructure for heterogeneous services, both hardware and software, to be made available to diverse mobile users within a confined space.
[ 138, 2195 ]
Validation
1,883
2
Searching the Web: General and Scientific Information Access The World Wide Web is revolutionizing the way people access information, and has opened up new possibilities in areas such as digital libraries, general and scientific information dissemination and retrieval, education, commerce, entertainment, government, and health care. The amount of publicly available information on the Web is increasing rapidly [1]. The Web is a gigantic digital library, a searchable 15 billion word encyclopedia [2]. It has stimulated research and development in information retrieval and dissemination, and fostered search engines such as AltaVista. These new developments are not limited to the Web, and can enhance access to virtually all forms of digital libraries. The revolution the Web has brought to information access is not so much due to the availability of information (huge amounts of information have long been available in libraries and elsewhere), but rather the increased efficiency of accessing information, which can make previously impractical tasks practical. There are many avenues for improvement in the efficiency of accessing information on the Web, for example, in the areas of locating and organizing information. This article discusses general and scientific information access on the Web, and many of our comments are applicable to digital libraries in general. The effectiveness of Web search engines is discussed, including results that show that the major search engines cover only a fraction of the “publicly indexable Web” (the part of the Web which is considered for indexing by the major engines, which excludes pages hidden behind search forms, pages with authorization requirements, etc.). Current research into improved searching of the Web is discussed, including new techniques for ranking the relevance of results, and new techniques in metasearch that can improve the efficiency and effectiveness of Web search.
The amount of scientific information and the number of electronic journals on the Internet continues to increase. Researchers are increasingly making their work available online. This article also discusses the creation of digital libraries of the scientific literature, incorporating autonomous citation indexing. The autonomous creation of citation indices
[ 488, 1108, 1321, 1755, 2503, 2558 ]
Test
1,884
3
Potter's Wheel: An Interactive Data Cleaning System Cleaning data of errors in structure and content is important for data warehousing and integration. Current solutions for data cleaning involve many iterations of data "auditing" to find errors, and long-running transformations to fix them. Users need to endure long waits, and often write complex transformation scripts. We present Potter's Wheel, an interactive data cleaning system that tightly integrates transformation and discrepancy detection. Users gradually build transformations to clean the data by adding or undoing transforms on a spreadsheet-like interface; the effect of a transform is shown at once on records visible on screen. These transforms are specified either through simple graphical operations, or by showing the desired effects on example data values. In the background, Potter's Wheel automatically infers structures for data values in terms of user-defined domains, and accordingly checks for constraint violations. Thus users can gradually build a transformation as discrepancies are found, and clean the data without writing complex programs or enduring long delays. 1
[ 2320, 2418, 3099 ]
Test
1,885
0
Bisimulation Congruences in Safe Ambients We study a variant of Levi and Sangiorgi's Safe Ambients (SA) enriched with passwords (SAP). In SAP, by managing passwords, for example generating new ones and distributing them selectively, an ambient may now program who may migrate into its computation space, and when. Moreover, in SAP an ambient may provide different services depending on the passwords exhibited by its incoming clients. We give an LTS-based operational semantics for SAP and a labelled-bisimulation-based equivalence which is proved to coincide with barbed congruence. Our notion of bisimulation is used to prove a set of algebraic laws which are subsequently exploited to prove more significant examples.
[]
Train
1,886
1
The Use Of Artificial Intelligence To Improve The Numerical Optimization Of Complex Engineering Designs Gradient-based numerical optimization of complex engineering designs promises to produce better designs rapidly. However, such methods generally assume that the objective function and constraint functions are continuous, smooth, and defined everywhere. Unfortunately, realistic simulators tend to violate these assumptions. We present several artificial intelligence-based techniques for improving the numerical optimization of complex engineering designs in the presence of such pathologies in the simulators. We have tested the resulting system in several realistic engineering domains, and have found that using our techniques can greatly decrease the cost of design space search, and can also increase the quality of the resulting designs.
[ 144, 2125 ]
Test
1,887
5
The Bayes Net Toolbox for MATLAB The Bayes Net Toolbox (BNT) is an open-source Matlab package for directed graphical models. BNT supports many kinds of nodes (probability distributions), exact and approximate inference, parameter and structure learning, and static and dynamic models. BNT is widely used in teaching and research: the web page has received over 28,000 hits since May 2000. In this paper, we discuss a broad spectrum of issues related to graphical models (directed and undirected), and describe, at a high-level, how BNT was designed to cope with them all. We also compare BNT to other software packages for graphical models, and to the nascent OpenBayes effort.
[ 73, 1345 ]
Train
1,888
2
A Statistical Method for Estimating the Usefulness of Text Databases Searching for desired data on the Internet is one of the most common ways the Internet is used. No single search engine is capable of searching all data on the Internet. The approach that provides an interface for invoking multiple search engines for each user query has the potential to satisfy more users. When the number of search engines under the interface is large, invoking all search engines for each query is often not cost-effective, because it creates unnecessary network traffic by sending the query to a large number of useless search engines, and searching these useless search engines wastes local resources. The problem can be overcome if the usefulness of every search engine with respect to each query can be predicted. In this paper, we present a statistical method to estimate the usefulness of a search engine for any given query. For a given query, the usefulness of a search engine in this paper is defined to be a combination of the number of documents in the search engine that are sufficiently similar to the query and the average similarity of these documents. Experimental results indicate that our estimation method is much more accurate than existing methods.
[ 263, 488, 521, 587, 976, 982, 1120, 1134, 1167, 1642, 1804, 2188, 2771, 2920, 3139 ]
Train
1,889
5
An Integrated Vision Sensor for the Computation of Optical Flow Singular Points A robust, integrative algorithm is presented for computing the position of the focus of expansion or axis of rotation (the singular point) in optical flow fields such as those generated by self-motion. Measurements are shown of a fully parallel CMOS analog VLSI motion sensor array which computes the direction of local motion (sign of optical flow) at each pixel and can directly implement this algorithm. The flow field singular point is computed in real time with a power consumption of less than 2 mW . Computation of the singular point for more general flow fields requires measures of field expansion and rotation, which it is shown can also be computed in real-time hardware, again using only the sign of the optical flow field. These measures, along with the location of the singular point, provide robust real-time self-motion information for the visual guidance of a moving platform such as a robot. 1 INTRODUCTION Visually guided navigation of autonomous vehicles requires robust measures...
[ 2157 ]
Train
1,890
1
Rotation Invariant Neural Network-Based Face Detection In this paper, we present a neural network-based face detection system. Unlike similar systems which are limited to detecting upright, frontal faces, this system detects faces at any degree of rotation in the image plane. The system employs multiple networks; the first is a "router" network which processes each input window to determine its orientation and then uses this information to prepare the window for one or more "detector" networks. We present the training methods for both types of networks. We also perform sensitivity analysis on the networks, and present empirical results on a large test set. Finally, we present preliminary results for detecting faces which are rotated out of the image plane, such as profiles and semi-profiles. This work was partially supported by grants from Hewlett-Packard Corporation, Siemens Corporate Research, Inc., the Department of the Army, Army Research Office under grant number DAAH04-94-G-0006, and by the Office of Naval Research under grant number...
[ 410, 953, 1106, 1554, 1969, 2764 ]
Train
1,891
2
Natural-Sounding Speech Synthesis Using Variable-Length Units The goal of this work was to develop a speech synthesis system which concatenates variable-length units to create natural-sounding speech. Our initial work in this area showed that by careful design of system responses to ensure consistent intonation contours, natural-sounding speech synthesis was achievable with word- and phrase-level concatenation. In order to extend the flexibility of this framework, we focused on the problem of generating novel words from a corpus of sub-word units. The design of the sub-word units was motivated by perceptual studies that investigated where speech could be spliced with minimal audible distortion and what contextual constraints were necessary to maintain in order to produce natural-sounding speech. The sub-word corpus is searched during synthesis using a Viterbi search which selects a sequence of units based on how well they individually match the input specification and on how well they sound as an ensemble. This concatenative speech synthesis syste...
[ 2931 ]
Train
1,892
4
Adapting Web Information to Disabled and Elderly Users: Substantial research and standardization efforts already exist to make it easier for people with physical impairments to perceive and interact with web pages. This paper describes work aimed at tailoring the content of web pages to the needs of different users, including elderly people and users with vision and motor impairments. The AVANTI system and related efforts in the AVANTI project will be discussed and experiences reported. 1. Introduction The World Wide Web is currently the most frequently visited electronic resource and is likely to become the access ramp to the electronic information highway of the next millennium. Web access should therefore ideally be available to everyone in order not to create yet another informational, and hence economical and social, disparity in society. Special efforts must be put into making the access to the web available to those who so far have been at a disadvantage, including people with disabilities and elderly people who until recently...
[ 766, 1084 ]
Train
1,893
3
Event Composition in Time-dependent Distributed Systems Many interesting application systems, ranging from workflow management and CSCW to air traffic control, are eventdriven and time-dependent and must interact with heterogeneous components in the real world. Event services are used to glue together distributed components. They assume a virtual global time base to trigger actions and to order events. The notion of a global time that is provided by synchronized local clocks in distributed systems has a fundamental impact on the semantics of event-driven systems, especially the composition of events. The well studied 2g-precedence model, which assumes that the granularity of global time-base g can be derived from a priori known and bounded precision of local clocks may not be suitable for the Internet where the accuracy and external synchronization of local clocks is best effort and cannot be guaranteed because of large transmission delay variations and phases of disconnection. In this paper we introduce a mechanism based on...
[ 1995, 2274, 2657 ]
Test
1,894
3
Incremental Computation and Maintenance of Temporal Aggregates Abstract. We consider the problems of computing aggregation queries in temporal databases and of maintaining materialized temporal aggregate views efficiently. The latter problem is particularly challenging since a single data update can cause aggregate results to change over the entire time line. We introduce a new index structure called the SB-tree, which incorporates features from both segment-trees and B-trees. SB-trees support fast lookup of aggregate results based on time and can be maintained efficiently when the data change. We extend the basic SB-tree index to handle cumulative (also called moving-window) aggregates, considering separately cases when the window size is or is not fixed in advance. For materialized aggregate views in a temporal database or warehouse, we propose building and maintaining SB-tree indices instead of the views themselves.
[ 791, 1661, 2095 ]
Train
1,895
2
General Query Expansion Techniques For Spoken Document Retrieval This paper presents some developments in query expansion and document representation of our Spoken Document Retrieval (SDR) system since the 1998 Text REtrieval Conference (TREC-7). We have shown that a modification of the document representation combining several techniques for query expansion can improve Average Precision by 17 % relative to a system similar to that which we presented at TREC-7 [1]. These new experiments have also confirmed that the degradation of Average Precision due to a Word Error Rate (WER) of 25 % is relatively small (around 2 % relative). We hope to repeat these experiments when larger document collections become available to evaluate the scalability of these techniques. 1.
[ 20, 1372, 1792 ]
Train
1,896
1
Integrating Boosting and Stochastic Attribute Selection Committees for Further Improving the Performance of Decision Tree Learning Techniques for constructing classifier committees including Boosting and Bagging have demonstrated great success, especially Boosting for decision tree learning. This type of technique generates several classifiers to form a committee by repeated application of a single base learning algorithm. The committee members vote to decide the final classification. Boosting and Bagging create different classifiers by modifying the distribution of the training set. SASC (Stochastic Attribute Selection Committees) uses an alternative approach to generating classifier committees by stochastic manipulation of the set of attributes considered at each node during tree induction, but keeping the distribution of the training set unchanged. In this paper, we propose a method for improving the performance of Boosting. This technique combines Boosting and SASC. It builds classifier committees by manipulating both the distribution of the training set and the set of attributes available during induction. In...
[ 191, 1871 ]
Train
1,897
4
Implicit Human Computer Interaction Through Context In this paper the term implicit human computer interaction is defined. It is discussed how the availability of processing power and advanced sensing technology can enable a shift in HCI from explicit interaction, such as direct manipulation GUIs, towards a more implicit interaction based on situational context. In the paper an algorithm that is based on a number of questions to identify applications that can facilitate implicit interaction is given. An XML-based language to describe implicit HCI is proposed. The language uses contextual variables that can be grouped using different types of semantics as well as actions that are called by triggers. The term perception is discussed and four basic approaches are identified that are useful when building context-aware applications. Providing two examples, a wearable context awareness component and a sensor-board, it is shown how sensor-based perception can be implemented. It is also discussed how situational context can be exploited to im...
[ 1027, 1825, 3087 ]
Train
1,898
1
Symbol Grounding: A New Look At An Old Idea Symbols should be grounded, as has been argued before. But we insist that they should be grounded not only in subsymbolic activities, but also in the interaction between the agent and the world. The point is that concepts are not formed in isolation (from the world), in abstraction, or "objectively". They are formed in relation to the experience of agents, through their perceptual/motor apparatuses, in their world and linked to their goals and actions. In this paper, we will take a detailed look at this relatively old issue, using a new perspective, aided by our new work of computational cognitive model development. To further our understanding, we also go back in time to link up with earlier philosophical theories related to this issue. The result is an account that extends from computational mechanisms to philosophical abstractions.
[ 1190, 1603 ]
Test
1,899
0
Model Checking Multi-Agent Systems with MABLE MABLE is a language for the design and automatic verification of multi-agent systems. MABLE is essentially a conventional imperative programming language, enriched by constructs from the agent-oriented programming paradigm. A MABLE system contains a number of agents, programmed using the MABLE imperative programming language. Agents in MABLE have a mental state consisting of beliefs, desires and intentions. Agents communicate using request and inform performatives, in the style of the FIPA agent communication language. MABLE systems may be augmented by the addition of formal claims about the system, expressed using a quantified, linear temporal belief-desire-intention logic. MABLE has been fully implemented, and makes use of the SPIN model checker to automatically verify the truth or falsity of claims.
[ 400, 1616, 1785 ]
Test
1,900
4
What Do We Want From a Wearable User Interface? This document outlines the author's ultimate goal and suggests one technology that can be exploited by applications writers to make them fit neatly into future application integration frameworks.
[ 746 ]
Train
1,901
2
Using Category-Based Collaborative Filtering in the Active Webmuseum Collaborative filtering is an important technology for creating user-adapting Web sites. In general the efforts of improving filtering algorithms and using the predictions for the presentation of filtered objects are decoupled. Therefore, common measures (or metrics) for evaluating collaborative filtering (recommender) systems focus mainly on the prediction algorithm. It is hard to relate the classic measurements to actual user satisfaction, because the way the user interacts with the recommendations, which is determined by their representation, influences the benefits for the user. We propose an abstract access paradigm, which can be applied to the design of filtering systems, and at the same time formalizes the access to filtering results via multi-corridors (based on content-based categories). This leads to new measures which better relate to user satisfaction. We use these measures to evaluate the use of various kinds of multi-corridors for our prototype user-adapting Web site, the Active WebMuseum.
[ 2017 ]
Test
1,902
0
Secure Mobile Code: The JavaSeal experiment Mobile agents are programs that move between sites during execution to benefit from the services and information present at each site. To gain wide acceptance, strong security guarantees must be given: sites must be protected from malicious agents and agents must be protected from each other. Software based protection is widely viewed as the most efficient way of enforcing agent security. In the first part of the paper, we review programming language support for security. This review also helps to highlight weaknesses in the Java security model. In the second part of the paper, we make good on the lessons learned in the review to design a security architecture for the JavaSeal agent platform. 1 Introduction Wide area networks such as the Internet hold the promise of a brave new wired world of global computing. The hope is to have distributed applications that scale to the size of the Internet and seamlessly provide access to massive amounts of information and value added services. Bu...
[ 560, 695, 2027 ]
Train
1,903
2
Data-Driven Generation of Decision Trees for Motif-Based Assignment of Protein Sequences to Functional Families This paper describes an approach to data-driven discovery of sequence motif-based models in the form of decision trees for assigning
[ 551, 1808 ]
Test
1,904
1
Automated Facial Expression Recognition Based on FACS Action Units Automated recognition of facial expression is an important addition to computer vision research because of its relevance to the study of psychological phenomena and the development of human-computer interaction (HCI). We developed a computer vision system that automatically recognizes individual action units or action unit combinations in the upper face using Hidden Markov Models (HMMs). Our approach to facial expression recognition is based on the Facial Action Coding System (FACS), which separates expressions into upper and lower face action. In this paper, we use three approaches to extract facial expression information: (1) facial feature point tracking, (2) dense flow tracking with principal component analysis (PCA), and (3) high gradient component detection (i.e., furrow detection). The recognition results of the upper face expressions using feature point tracking, dense flow tracking, and high gradient component detection are 85%, 93%, and 85%, respectively. 1. Introduction Fa...
[ 1268 ]
Validation
1,905
4
Using Extreme Programming for Knowledge Transfer This paper presents the application of eXtreme Programming to the software agents research and development group at TRLabs Regina. The group had difficulties maintaining its identity due to a very rapid turnover and lack of strategic polarization. The application of eXtreme Programming resulted in a complete reorientation of the development culture, which now forms a continuous substrate in every individual.
[ 50 ]
Train
1,906
1
Monotonic and Residuated Logic Programs In this paper we define the rather general framework of Monotonic Logic Programs, where the main results of (definite) logic programming are validly extrapolated. Whenever defining new logic programming extensions, we can thus turn our attention to the stipulation and study of its intuitive algebraic properties within the very general setting. Then, the existence of a minimum model and of a monotonic immediate consequences operator is guaranteed, and they are related as in classical logic programming. Afterwards we study the more restricted class of residuated logic programs which is able to capture several quite distinct logic programming semantics. Namely: Generalized Annotated Logic Programs, Fuzzy Logic Programming, Hybrid Probabilistic Logic Programs, and Possibilistic Logic Programming. We provide the embedding of possibilistic logic programming.
[ 1991 ]
Test
1,907
3
Probabilistic Logic Programming with Conditional Constraints . We introduce a new approach to probabilistic logic programming in which probabilities are defined over a set of possible worlds. More precisely, classical program clauses are extended by a subinterval of [0; 1] that describes a range for the conditional probability of the head of a clause given its body. We then analyze the complexity of selected probabilistic logic programming tasks. It turns out that probabilistic logic programming is computationally more complex than classical logic programming. More precisely, the tractability of special cases of classical logic programming generally does not carry over to the corresponding special cases of probabilistic logic programming. Moreover, we also draw a precise picture of the complexity of deciding and computing tight logical consequences in probabilistic reasoning with conditional constraints in general. We then present linear optimization techniques for deciding satisfiability and computing tight logical consequences of probabilistic...
[ 440, 510, 1078, 1568 ]
Test
1,908
1
Learning Feed-Forward and Recurrent Fuzzy Systems: A Genetic Approach In this paper we present a new learning method for rule-based feed-forward and recurrent fuzzy systems. Recurrent fuzzy systems have hidden fuzzy variables and can approximate the temporal relation embedded in dynamic processes of unknown order. The learning method is universal, i.e., it selects the optimal width and position of Gaussian-like membership functions, and it selects a minimal set of fuzzy rules as well as the structure of the rules. A Genetic Algorithm is used to estimate Fuzzy Systems with low complexity and a minimal rule base. Optimization of the "entropy" of a fuzzy rule base leads to a minimal number of rules, of membership functions and of sub-premises together with an optimal input/output behavior. Most of the resulting Fuzzy Systems are comparable to systems designed by an expert but offer better performance. The approach is compared to others by a standard benchmark (a system identification process). Different results for feed-forward and first-order recurrent Fuzzy Systems with symmetric and non-symmetric membership functions are presented. Key words: Fuzzy logic controller, recurrent fuzzy systems, genetic algorithm, entropy of fuzzy rule, machine learning, dynamic processes.
[ 2013 ]
Train
1,909
4
Seeing the Whole in Parts: Text Summarization for Web Browsing on Handheld Devices We introduce five methods for summarizing parts of Web pages on handheld devices, such as personal digital assistants (PDAs), or cellular phones. Each Web page is broken into text units that can each be hidden, partially displayed, made fully visible, or summarized. The methods accomplish summarization by different means. One method extracts significant keywords from the text units, another attempts to find each text unit's most significant sentence to act as a summary for the unit. We use information retrieval techniques, which we adapt to the World-Wide Web context. We tested the relative performance of our five methods by asking human subjects to accomplish single-page information search tasks using each method. We found that the combination of keywords and single-sentence summaries provides significant improvements in access times and number of pen actions, as compared to other schemes.
[ 741, 1170, 2315 ]
Validation
1,910
0
Using the Cross-Entropy Method to Guide/Govern Mobile Agent's Path Finding in Networks The problem of finding paths in networks is general and many-faceted, with a wide range of engineering applications in communication networks. Finding the optimal path or combination of paths usually leads to NP-hard combinatorial optimization problems. A recent and promising method, the cross-entropy method proposed by Rubinstein, manages to produce optimal solutions to such problems in polynomial time. However, this algorithm is centralized and batch-oriented. In this paper we show how the cross-entropy method can be reformulated to govern the behaviour of multiple mobile agents which act independently and asynchronously of each other. The new algorithm is evaluated on a set of well-known Travelling Salesman Problems. A simulator, based on the Network Simulator package, has been implemented which provides realistic simulation environments. Results show good performance and stable convergence towards near-optimal solutions of the problems tested.
[ 86, 234, 2369 ]
Test
1,911
1
Qualitative Velocity and Ball Interception Abstract. In many approaches for qualitative spatial reasoning, navigation of an agent in a more or less static environment is considered (e.g. in the double-cross calculus [12]). However, in general, real environments are dynamic, which means that both the agent itself and also other objects and agents in the environment may move. Thus, in order to perform spatial reasoning, not only (qualitative) distance and orientation information is needed (as e.g. in [1]), but also information about (relative) velocity of objects (see e.g. [2]). Therefore, we will introduce concepts for qualitative and relative velocity: (quick) to left, neutral, (quick) to right. We investigate the usefulness of this approach in a case study, namely ball interception of simulated soccer agents in the RoboCup [10]. We compare a numerical approach where the interception point is computed exactly, a strategy based on reinforcement learning, a method with qualitative velocities developed in this paper, and the naïve method where the agent simply goes directly to the actual ball position. Key words: cognitive robotics; multiagent systems; spatial reasoning.
[ 1367 ]
Train
1,912
2
Searching the World Wide Web in Low-Connectivity Communities The Internet has the potential to deliver information to communities around the world that have no other information resources. High telephone and ISP fees, in combination with low-bandwidth connections, make it unaffordable for many people to browse the Web online. We are developing the TEK system to enable users to search the Web using only email. TEK stands for "Time Equals Knowledge," since the user exchanges time (waiting for email) for knowledge. The system contains three components: 1) the client, which provides a graphical interface for the end user, 2) the server, which performs the searches from MIT, and 3) a reliable email-based communication protocol between the client and the server. The TEK search engine differs from others in that it is designed to return low-bandwidth results, which are achieved by special filtering, analysis, and compression on the server side. We believe that TEK will bring Web resources to people who otherwise would not be able to afford them.
[ 2503 ]
Test
1,913
3
On Decidability of Boundedness Property for Regular Path Queries The paper studies the evaluation of regular path queries on semi-structured data, i.e. path queries of the form: find all objects reachable by paths whose labels form a word in r, where r is a regular expression. We use local information expressed in the form of path constraints in the optimization of path expression queries. These constraints are of the form r ⊆ w, where r is a regular language and w is a word.
[ 1600 ]
Test
1,914
1
An Overview of the Tatami Project This paper describes the Tatami project at UCSD, which is developing a system to support distributed cooperative software development over the web, and in particular, the validation of concurrent distributed software. The main components of our current prototype are a proof assistant, a generator for documentation websites, a database, an equational proof engine, and a communication protocol to support distributed cooperative work. We believe behavioral specification and verification are important for software development, and for this purpose we use first order hidden logic with equational atoms. The paper also briefly describes some novel user interface design methods that have been developed and applied in the project.
[ 1761 ]
Train
1,915
1
Constructive Theory Refinement in Knowledge Based Neural Networks Knowledge based artificial neural networks offer an approach for connectionist theory refinement. We present an algorithm for refining and extending the domain theory incorporated in a knowledge based neural network using constructive neural network learning algorithms. The initial domain theory, comprising propositional rules, is translated into a knowledge based network of threshold logic units (TLU). The domain theory is modified by dynamically adding neurons to the existing network. A constructive neural network learning algorithm is used to add and train these additional neurons using a sequence of labeled examples. We propose a novel hybrid constructive learning algorithm based on the Tiling and Pyramid constructive learning algorithms that allows the knowledge based neural network to handle patterns with continuous valued attributes. Results of experiments on two non-trivial tasks (the ribosome binding site prediction and the financial advisor) show that our algorithm compares favo...
[ 1808, 2046, 2126 ]
Train
1,916
1
The Evolutionary Unfolding of Complexity We analyze the population dynamics of a broad class of fitness functions that exhibit epochal evolution -- a dynamical behavior, commonly observed in both natural and artificial evolutionary processes, in which long periods of stasis in an evolving population are punctuated by sudden bursts of change. Our approach -- statistical dynamics -- combines methods from both statistical mechanics and dynamical systems theory in a way that offers an alternative to current "landscape" models of evolutionary optimization. We describe the population dynamics on the macroscopic level of fitness classes or phenotype subbasins, while averaging out the genotypic variation that is consistent with a macroscopic state. Metastability in epochal evolution occurs solely at the macroscopic level of the fitness distribution. While a balance between selection and mutation maintains a quasistationary distribution of fitness, individuals diffuse randomly through selectively neutral subbasins in genotype space. Sudden innovations occur when, through this diffusion, a genotypic portal is discovered that connects to a new subbasin of higher fitness genotypes. In this way, we identify innovations with the unfolding and stabilization of a new dimension in the macroscopic state space. The architectural view of subbasins and portals in genotype space clarifies how frozen accidents and the resulting phenotypic constraints guide the evolution to higher complexity.
[]
Train
1,917
4
Consumer Eye Movement Patterns on Yellow Pages Advertising Process tracing data help understand how yellow pages advertisement characteristics influence consumer information processing behavior. A laboratory experiment collected eye movement data while consumers chose businesses from phone directories. Consumers scan listings in alphabetic order. Their scan is not exhaustive. As a result, some ads are never seen. Consumers noticed over 93% of the quarter page display ads but only 26% of the plain listings. Consumers perceived color ads before ads without color, noticed more color ads than non-color ads and viewed color ads 21% longer than equivalent ads without color. Users viewed 42% more bold listings than plain listings. Consumers spent 54% more time viewing ads they end up choosing which demonstrates the importance of attention on subsequent choice behavior. 1 INTRODUCTION In 1992, yellow pages directories were a $9.4 billion dollar information services business that reached 98% of American households (Mangel 1992). It is the fourth larg...
[ 1639 ]
Test
1,918
4
Easily Adding Animations to Interfaces Using Constraints Adding animation to interfaces is a very difficult task with today's toolkits, even though there are many situations in which it would be useful and effective. The Amulet toolkit contains a new form of animation constraint that allows animations to be added to interfaces extremely easily without changing the logic of the application or the graphical objects themselves. An animation constraint detects changes to the value of the slot to which it is attached, and causes the slot to instead take on a series of values interpolated between the original and new values. The advantage over previous approaches is that animation constraints provide significantly better modularity and reuse. The programmer has independent control over the graphics to be animated, the start and end values of the animation, the path through value space, and the timing of the animation. Animations can be attached to any object, even existing widgets from the toolkit, and any type of value can be animated: scalars, ...
[ 1853, 2672 ]
Validation
1,919
3
Progress Report on the Disjunctive Deductive Database System . dlv is a deductive database system, based on disjunctive logic programming, which offers front-ends to several advanced KR formalisms. The system has been developed since the end of 1996 at Technische Universitat Wien in an ongoing project funded by the Austrian Science Funds (FWF). Recent comparisons have shown that dlv is nowadays a state-of-the-art implementation of disjunctive logic programming. A major strength of dlv is its advanced knowledge modelling features. Its kernel language extends disjunctive logic programming by strong negation (a la Gelfond and Lifschitz) and integrity constraints; furthermore, front-ends for the database language SQL3 and for diagnostic reasoning are available. Suitable interfaces allow dlv users to utilize base relations which are stored in external commercial database systems. This paper provides an overview of the dlv system and describes recent advances in its implementation. In particular, the recent implementation of incremental techniques fo...
[ 862, 907 ]
Validation
1,920
3
Optimization of Constrained Frequent Set Queries with 2-variable Constraints Currently, there is tremendous interest in providing ad-hoc mining capabilities in database management systems. As a first step towards this goal, in [15] we proposed an architecture for supporting constraint-based, human-centered, exploratory mining of various kinds of rules including associations, introduced the notion of constrained frequent set queries (CFQs), and developed effective pruning optimizations for CFQs with 1-variable (1-var) constraints. While 1-var constraints are useful for constraining the antecedent and consequent separately, many natural examples of CFQs illustrate the need for constraining the antecedent and consequent jointly, for which 2-variable (2-var) constraints are indispensable. Developing pruning optimizations for CFQs with 2-var constraints is the subject of this paper. But this is a difficult problem because: (i) in 2var constraints, both variables keep changing and, unlike 1-var constraints, there is no fixed target for pruning; (ii) as we show, "conv...
[ 2105 ]
Train
1,921
1
A Unifying Information-theoretic Framework for Independent Component Analysis We show that different theories recently proposed for Independent Component Analysis (ICA) lead to the same iterative learning algorithm for blind separation of mixed independent sources. We review those theories and suggest that information theory can be used to unify several lines of research. Pearlmutter and Parra (1996) and Cardoso (1997) showed that the infomax approach of Bell and Sejnowski (1995) and the maximum likelihood estimation approach are equivalent. We show that negentropy maximization also has equivalent properties and therefore all three approaches yield the same learning rule for a fixed nonlinearity. Girolami and Fyfe (1997a) have shown that the nonlinear Principal Component Analysis (PCA) algorithm of Karhunen and Joutsensalo (1994) and Oja (1997) can also be viewed from information-theoretic principles since it minimizes the sum of squares of the fourth-order marginal cumulants and therefore approximately minimizes the mutual information (Comon, 1994). Lambert (19...
[ 1447, 1997 ]
Test
1,922
0
Extending Multi-Agent Cooperation by Overhearing Much cooperation among humans happens following a common pattern: by chance or deliberately, a person overhears a conversation between two or more parties and steps in to help, for instance by suggesting answers to questions, by volunteering to perform actions, by making observations or adding information. We describe an abstract architecture to support a similar pattern in societies of artificial agents. Our architecture involves pairs of so-called service agents (or services) engaged in some tasks, and an unlimited number of suggestive agents (or suggesters). The latter have an understanding of the work behaviors of the former through a publicly available model, and are able to observe the messages they exchange. Depending on their own objectives, the understanding they have available, and the observed communication, the suggesters try to cooperate with the services, by initiating assisting actions, and by sending suggestions to the services. These in effect may induce a change in the services' behavior. To test our architecture, we developed an experimental, multi-agent Web site. The system has been implemented by using a BDI toolkit, JACK Intelligent Agents. Keywords: autonomous agents, multiagent systems, AI architectures, distributed AI. 1
[ 132, 198, 615, 1113, 1616, 1951 ]
Test
1,923
0
Training for Teamwork Teams train by practicing basic interaction skills in order to develop a shared mental model between team members. A computer-based training system must have a model of teamwork to train a new team member to act as part of a team. This model of teamwork can be used to identify weaknesses in team interaction skills in the trainee. Existing multi-agent models for teamwork are limited in their ability to support proactive information exchange among teammates. To address this issue, we have developed and implemented a multi-agent architecture called CAST that simulates teamwork and supports proactive information exchange in a dynamic environment. To this we are adding a tutoring system that monitors, evaluates, and corrects the trainee in the performance of teamwork skills. Keywords: ITS, teamwork, multi-agent, collaboration, HCI
[ 109, 808, 1614, 1840 ]
Validation
1,924
3
A Probabilistic Framework for Matching Temporal Trajectories: Condensation-Based Recognition of Gestures and Expressions . The recognition of human gestures and facial expressions in image sequences is an important and challenging problem that enables a host of human-computer interaction applications. This paper describes a framework for incremental recognition of human motion that extends the "Condensation" algorithm proposed by Isard and Blake (ECCV'96). Human motions are modeled as temporal trajectories of some estimated parameters over time. The Condensation algorithm uses random sampling techniques to incrementally match the trajectory models to the multi-variate input data. The recognition framework is demonstrated with two examples. The first example involves an augmented office whiteboard with which a user can make simple hand gestures to grab regions of the board, print them, save them, etc. The second example illustrates the recognition of human facial expressions using the estimated parameters of a learned model of mouth motion. 1 Introduction Motion is intimately tied with our behavior; we m...
[ 1679 ]
Train
1,925
0
Fully Embodied Conversational Avatars: Making Communicative Behaviors Autonomous : Although avatars may resemble communicative interface agents, they have for the most part not profited from recent research into autonomous embodied conversational systems. In particular, even though avatars function within conversational environments (for example, chat or games), and even though they often resemble humans (with a head, hands, and a body) they are incapable of representing the kinds of knowledge that humans have about how to use the body during communication. Humans, however, do make extensive use of the visual channel for interaction management where many subtle and even involuntary cues are read from stance, gaze and gesture. We argue that the modeling and animation of such fundamental behavior is crucial for the credibility and effectiveness of the virtual interaction in chat. By treating the avatar as a communicative agent, we propose a method to automate the animation of important communicative behavior, deriving from work in conversation and discourse theory. B...
[ 1273 ]
Train
1,926
3
Partial Models of Extended Generalized Logic Programs . In recent years there has been an increasing interest in extensions of the logic programming paradigm beyond the class of normal logic programs motivated by the need for a satisfactory representation and processing of knowledge. An important problem in this area is to find an adequate declarative semantics for logic programs. In the present paper a general preference criterion is proposed that selects the `intended' partial models of extended generalized logic programs which is a conservative extension of the stationary semantics for normal logic programs of [13], [15] and generalizes the WFSX-semantics of [12]. The presented preference criterion defines a partial model of an extended generalized logic program as intended if it is generated by a stationary chain. The GWFSX-semantics is defined by the set-theoretical intersection of all stationary generated models, and thus generalizes the results from [9] and [1]. 1 Introduction Declarative semantics provides a mathem...
[]
Validation
1,927
3
Spatio-Temporal Data Types: An Approach to Modeling and Querying Moving Objects in Databases Spatio-temporal databases deal with geometries changing over time. In general, geometries cannot only change in discrete steps, but continuously, and we are talking about moving objects. If only the position in space of an object is relevant, then moving point is a basic abstraction; if also the extent is of interest, then the moving region abstraction captures moving as well as growing or shrinking regions. We propose a new line of research where moving points and moving regions are viewed as three-dimensional (2D space + time) or higher-dimensional entities whose structure and behavior is captured by modeling them as abstract data types. Such types can be integrated as base (attribute) data types into relational, object-oriented, or other DBMS data models; they can be implemented as data blades, cartridges, etc. for extensible DBMSs. We expect these spatio-temporal data types to play a similarly fundamental role for spatio-temporal databases as spatial data types have played for spatial databases. The paper explains the approach and discusses several fundamental issues and questions related to it that need to be clarified before delving into specific designs of spatio-temporal algebras.
[ 125, 292, 331, 1437, 2245, 2271, 2344, 2476, 2509 ]
Test
1,928
2
Active Learning For Automatic Speech Recognition State-of-the-art speech recognition systems are trained using transcribed utterances, preparation of which is labor intensive and time-consuming. In this paper, we describe a new method for reducing the transcription effort for training in automatic speech recognition (ASR). Active learning aims at reducing the number of training examples to be labeled by automatically processing the unlabeled examples, and then selecting the most informative ones with respect to a given cost function for a human to label. We automatically estimate a confidence score for each word of the utterance, exploiting the lattice output of a speech recognizer, which was trained on a small set of transcribed data. We compute utterance confidence scores based on these word confidence scores, then selectively sample the utterances to be transcribed using the utterance confidence scores. In our experiments, we show that we reduce the amount of labeled data needed for a given word accuracy by 27%.
[ 714 ]
Train
1,929
3
Processing of Spatiotemporal Queries in Image Databases Overlapping Linear Quadtrees is a structure suitable for storing consecutive raster images according to transaction time (a database of evolving images). This structure saves considerable space without sacrificing time performance in accessing every single image. Moreover, it can be used for answering efficiently window queries for a number of consecutive images (spatio-temporal queries). In this paper, we present three such temporal window queries: strict containment, border intersect and cover. Besides, based on a method of producing synthetic pairs of evolving images (random images with specified aggregation) we present empirical results on the I/O performance of these queries.
[ 1744 ]
Test
1,930
3
A Probabilistic Approach to Navigation in Hypertext One of the main unsolved problems confronting Hypertext is the navigation problem, namely the problem of having to know where you are in the database graph representing the structure of a Hypertext database, and knowing how to get to some other place you are searching for in the database graph. Previously we formalised a Hypertext database in terms of a directed graph whose nodes represent pages of information. The notion of a trail, which is a path in the database graph describing some logical association amongst the pages in the trail, is central to our model. We defined a Hypertext Query Language, HQL, over Hypertext databases and showed that in general the navigation problem, i.e. the problem of finding a trail that satisfies a HQL query (technically known as the model checking problem), is NPcomplete. Herein we present a preliminary investigation of using a probabilistic approach in order to enhance the efficiency of model checking. The flavour of our investigation is that if we h...
[ 2106, 2545 ]
Train
1,931
0
Vulnerability Testing of Software System Using Fault Injection We describe an approach for testing a software system for possible security flaws. Traditionally, security testing is done using penetration analysis and formal methods. Based on the observation that most security flaws are triggered due to a flawed interaction with the environment, we view the security testing problem as the problem of testing for the fault-tolerance properties of a software system. We consider each environment perturbation as a fault and the resulting security compromise a failure in the toleration of such faults. Our approach is based on the well known technique of fault-injection. Environment faults are injected into the system under test and system behavior observed. The failure to tolerate faults is an indicator of a potential security flaw in the system. An Environment-Application Interaction (EAI) fault model is proposed. EAI allows us to decide what faults to inject. Based on EAI, we present a security-flaw classification scheme. This scheme was used to classify 142 security flaws in a vulnerability database. This classification revealed that 91% of the security flaws in the database are covered by the EAI model.
[ 1258 ]
Train
1,932
4
From PETS to Storykit: Creating New Technology with an Intergenerational Design Team Working with children as our design partners, our intergenerational design team at the University of Maryland has been developing both new design methodologies and new storytelling technology for children. In this paper, we focus on two recent results of our efforts: PETS, a robotic storyteller, and Storykit, a construction kit of low-tech and high-tech components to build immersive StoryRooms. We then describe some lessons we learned. Introduction Over the past two years, our intergenerational design team at the University of Maryland has been developing new design methodologies to create new storytelling technology for children. This team is made up of six adult researchers from computer science, education, art, and engineering, and seven children, ages 7 to 11, from local elementary schools. These children stay with us long term, at least one year. The adults are undergraduate students, graduate students, and faculty from art, education, engineering, and computer sc...
[ 2640 ]
Test
1,933
4
A Technological Related Discussion on the Potential of Change in Education, Learning and Training This is a poster prepared based on the presentation material for a paper with the same name presented at the conference. The goal is to discuss the supporting role of Information & Communication Technology (ICT) in education activities and to put in context the impact that CSCW systems can have both in Open and Distance Learning and in general education, learning and training. The NetLab concept is presented and used to support the paper's positions, and serves as the base to propose a roadmap to a virtual university setting. 1. PRESENTATION CONTEXT At the end of the century, education is changing. In particular, the high number of students who miss presence classes and display a lack of interest in attending most of the subjects in their higher education is already a common
[ 854 ]
Validation
1,934
3
Design and Implementation of Bitmap Indices for Scientific Data Bitmap indices are efficient multi-dimensional index data structures for handling complex ad-hoc queries in read-mostly environments. They have been implemented in several commercial database systems but are only well suited for discrete attribute values which are very common in typical business applications. However, many scientific applications usually operate on floating point numbers and cannot take advantage of the optimisation techniques offered by current database solutions. We thus present a novel algorithm called GenericRangeEval for processing one-sided range queries over floating point values. In addition we present a cost model for predicting the performance of bitmap indices for high-dimensional search spaces. We verify our analytical results by a detailed experimental study and show that the presented bitmap evaluation algorithm scales well also for high-dimensional search spaces requiring only a fairly small index. Because of its simple arithmetic structure, the cost model could easily be integrated into a query optimiser for deciding whether the current multi-dimensional query shall be answered by means of a bitmap index or better by sequentially scanning the data values, without using an index at all.
[ 610, 2506 ]
Validation
1,935
0
Declarative Procedural Goals in Intelligent Agent Systems An important concept for intelligent agent systems is goals. Goals have two aspects: declarative (a description of the state sought), and procedural (a set of plans for achieving the goal). A declarative view of goals is necessary in order to reason about important properties of goals, while a procedural view of goals is necessary to ensure that goals can be achieved efficiently in dynamic environments. In this paper we propose a framework for goals which integrates both views. We discuss the requisite properties of goals and the link between the declarative and procedural aspects, then derive a formal semantics which has these properties. We present a high-level plan notation with goals and give its formal semantics. We then show how the use of declarative information permits reasoning (such as the detection and resolution of conflicts) to be performed on goals. 1
[ 1049, 1501 ]
Train
1,936
5
Providing Haptic 'Hints' to Automatic Motion Planners In this paper, we investigate methods for enabling a human operator and an automatic motion planner to cooperatively solve a motion planning query. Our work is motivated by our experience that automatic motion planners sometimes fail due to the difficulty of discovering `critical' configurations of the robot that are often naturally apparent to a human observer. Our goal is to develop techniques by which the automatic planner can utilize (easily generated) user-input, and determine `natural' ways to inform the user of the progress made by the motion planner. We show that simple randomized techniques inspired by probabilistic roadmap methods are quite useful for transforming approximate, user-generated paths into collision-free paths, and describe an iterative transformation method which enables one to transform a solution for an easier version of the problem into a solution for the original problem. We also show that simple visualization techniques can provide meaningful representatio...
[ 841, 1003 ]
Validation
1,937
2
A Textual Case-Based Reasoning Framework for Knowledge Management Applications Knowledge management (KM) systems manipulate organizational knowledge by storing and redistributing corporate memories that are acquired from the organization's members. In this paper, we introduce a textual case-based reasoning (TCBR) framework for KM systems that manipulates organizational knowledge embedded in artifacts (e.g., best practices, alerts, lessons learned). The TCBR approach acquires knowledge from human users (via knowledge elicitation) and from text documents (via knowledge extraction) using template-based information extraction methods, a subset of natural language, and a domain ontology. Organizational knowledge is stored in a case base and is distributed in the context of targeted processes (i.e., within external distribution systems). The knowledge artifacts in the case base have to be translated into the format of the external distribution systems. A domain ontology supports knowledge elicitation and extraction, storage of knowledge artifacts in a case base, and artifact translation.
[ 2915 ]
Train
1,938
4
A Gesture Based Interaction Technique for a Planning Tool for Construction and Design In this article we wish to show a method that goes beyond the established approaches of human-computer interaction. We first bring a serious critique of traditional interface types, showing their major drawbacks and limitations. Promising alternatives are offered by Virtual (or: immersive) Reality (VR) and by Augmented Reality (AR). The AR design strategy enables humans to behave in a nearly natural way. Natural interaction means human actions in the real world with other humans and/or with real world objects. Guided by the basic constraints of natural interaction, we derive a set of recommendations for the next generation of user interfaces: the Natural User Interface (NUI). Our approach to NUIs is discussed in the form of a general framework followed by a prototype. The prototype tool builds on video-based interaction and supports construction and plant layout. A first empirical evaluation is briefly presented. 1 Introduction The introduction of computers in the work place has had ...
[ 511, 691, 2290 ]
Validation
1,939
3
dlv -- An Overview the Intelligent Grounding, the Model Generator and the Model Checker. All of these modules perform a modular evaluation of their input according to various dependency graphs as defined in [5, 2] and try to detect and efficiently handle special (syntactic) subclasses, which in general yields a tremendous speedup. Supported by FWF (Austrian Science Funds) under the project P11580-MAT "A Query System for Disjunctive Deductive Databases". Please address correspondence to this author. The Intelligent Grounding takes an input program, whose facts can also be stored in the tables of external relational databases, and efficiently generates a subset of the program instantiation that has exactly the same stable models as the full program, but is much smaller in general. (For stratified programs, for example, the Grounding already computes the single stable model.) Then the Model Generator is run on the (ground) output of the Intelligent Grounding. It generates one candidate for a stable model a
[ 907, 3085 ]
Train
1,940
3
Proceedings of the 6th International Workshop on Deductive Databases and Logic . . . The integration of concepts from logic and deduction into databases and knowledge bases has created the field of deductive databases. Logic programming provides a powerful declarative language for accessing and maintaining knowledge in databases. Techniques from relational databases and automated deduction are useful for achieving efficient retrieval and reasoning in large knowledge bases. Thus, deductive databases can be used for building intelligent information systems. The contributions in this Proceedings of the Sixth International Workshop on Deductive Databases and Logic Programming DDLP'98 are grouped into four sessions: theoretical aspects, applications, Datalog extensions, and semantics, plus a demo session. Contents: Preface; Schedule of Presentations; Theoretical Aspects: Nieves R. Brisaboa, Agustin Gonzales, Hector J. Hernandez, and Jose R. Parama: Chasing programs in Datalog; Francois Bry, Norbert Eisinger, Heribert Schuetz, and Sunna Torge: SIC: Satisfiabil...
[ 2083 ]
Train
1,941
3
Electronic Books in Digital Libraries An electronic book is an application with a multimedia database of instructional resources, which include hyperlinked text, instructor's audio/video clips, slides, animation, still images, etc., as well as content-based information about these data, and metadata such as annotations, tags, and cross-referencing information. Electronic books on the Internet or on CDs today are not easy to learn from. We propose the use of a multimedia database of instructional resources in constructing and delivering multimedia lessons about topics in an electronic book. We introduce an electronic book data model containing (a) topic objects and (b) instructional resources, called instruction module objects, which are multimedia presentations possibly capturing real-life lectures of instructors. We use the notion of topic prerequisites for topics at different detail levels, to allow electronic book users to request/compose multimedia lessons about topics in the electronic book. We present automated construction of the "best" user-tailored lesson (as a multimedia presentation). 1.
[ 1338 ]
Validation
1,942
3
Justification for Inclusion Dependency Normal Form Functional dependencies (FDs) and inclusion dependencies (INDs) are the most fundamental integrity constraints that arise in practice in relational databases. In this paper we address the issue of normalisation in the presence of FDs and INDs and in particular the semantic justification for Inclusion Dependency Normal Form (IDNF), a normal form which combines Boyce-Codd normal form with the restriction on the INDs that they be noncircular and keybased. We motivate and formalise three goals of database design in the presence of FDs and INDs: non interaction between FDs and INDs, elimination of redundancy and update anomalies, and preservation of entity integrity. We show that, as for FDs, in the presence of INDs being free of redundancy is equivalent to being free of update anomalies. Then, for each of these properties we derive equivalent syntactic conditions on the database design. Individually, each of these syntactic conditions is weaker than IDNF and the restriction that an FD not ...
[ 2110 ]
Train
1,943
0
A Perspective on Software Agents Research This paper sets out, ambitiously, to present a brief reappraisal of software agents research. Evidently, software agent technology has promised much. However some five years after the word `agent' came into vogue in the popular computing press, it is perhaps time the efforts in this fledgling area are thoroughly evaluated with a view to refocusing future efforts. We do not pretend to have done this in this paper -- but we hope we have sown the first seeds towards a thorough first 5-year report of the software agents area. The paper contains some strong views not necessarily widely accepted by the agent community. 1 Introduction The main goal of this paper is to provide a brief perspective on the progress of software agents research. Though `agents' research had been going on for more than a fifteen years before, agents really became a buzzword in the popular computing press (and also within the artificial intelligence and computing communities) around 1994. During this year sev...
[ 231, 450, 911, 1309, 2364, 3108, 3133 ]
Validation
1,944
3
Neural Networks in Business: Techniques and Applications for the Operations Researcher This paper presents an overview of the different types of neural network models which are applicable when solving business problems. The history of neural networks in business is outlined, leading to a discussion of the current applications in business including data mining, as well as the current research directions. The role of neural networks as a modern operations research tool is discussed. Scope and purpose Neural networks are becoming increasingly popular in business. Many organisations are investing in neural network and data mining solutions to problems which have traditionally fallen under the responsibility of operations research. This article provides an overview for the operations research reader of the basic neural network techniques, as well as their historical and current use in business. The paper is intended as an introductory article for the remainder of this special issue on neural networks in business. © 2000 Elsevier Science Ltd. All rights reserved. Keywords: N...
[ 2242 ]
Train
1,945
3
InterBase-KB: Integrating a Knowledge Base System with a Multidatabase System for Data Warehousing This paper describes the integration of a multidatabase system and a knowledge-base system to support
[ 126, 1735 ]
Train
1,946
0
Negotiation Protocols and Dialogue Games In a dynamic and open environment negotiation protocols cannot be known beforehand. We propose a methodology for constructing flexible negotiation protocols based on joint actions and dialogue games. We view negotiation as a combination of joint actions. Simple dialogue games that consist of initiatives followed by responses function as `recipes for joint action' from which larger interactions can be constructed coherently. 1 Introduction Agent based software engineering is an active research area. One of its main challenges is to bridge theoretical models and practical applications. For example, there are many theoretical results on negotiation [11], but automated negotiation is still rare on the internet. Implementing negotiation protocols raises a number of practical questions. In particular, current negotiation systems are either closed or semi-closed. In closed and semi-closed environments, like in auctions, there is central control over the agents that can participate. In s...
[ 3032 ]
Train
1,947
2
The Order of Things: Activity-Centred Information Access This paper focuses on the representation and access of Web-based information, and how to make such a representation adapt to the activities or interests of individuals within a community of users. The heterogeneous mix of information on the Web restricts the coverage of traditional indexing techniques and so limits the power of search engines. In contrast to traditional methods, and in a way that extends collaborative filtering approaches, the path model centres representation on usage histories rather than content analysis. By putting activity at the centre of representation and not the periphery, the path model concentrates on the reader not the author and the browser not the site. We describe metrics of similarity based on the path model, and their application in a URL recommender tool and in visualising sets of URLs. Keywords: heterogeneous data, activity, indexing, collaborative filtering, information retrieval, access and visualization. 1 Introduction As Tim Berners-Lee pointe...
[ 3083 ]
Train
1,948
3
Dynamic CPU Scheduling with Imprecise Knowledge of Computation-Time : The majority of the studies conducted in scheduling real-time transactions mostly concentrate on concurrency control protocols, while overlooking the CPU as being the primary resource. Consequently, there are various techniques for scheduling the CPU in conventional time-critical systems; meanwhile, there does not seem to be any technique that is adequately designed for scheduling such a resource in Real-Time Database (RTDB) systems. In this paper, we construct an efficient CPU scheduling scheme that minimizes the preemption rate in order to reduce the frequency by which synchronization protocols must be invoked, along with their inherited performance degradation. In addition, we also introduce a new timing model upon which the newly introduced scheduler is incorporated in order to utilize the system's imprecise knowledge of computation time estimates. Keywords: CPU Scheduling, lowering-preemption, timeliness-functions, and imprecise computation estimates. 1. INTRODUCTIO...
[]
Validation
1,949
1
Optimizations of Rough Set Model . Rough set methodology is based on concept (set) approximations constructed from available background knowledge represented in information systems [14]. In many applications only partial knowledge about approximated concepts is given. Hence quite often first a parametrized family of concept approximations is built and next by tuning of the parameters the best, in a sense, approximation is chosen (see e.g. variable precision rough set model [40]) in approximation spaces. In this paper we follow this approach in generalized approximation spaces. We discuss rough set model based on approximation spaces with uncertainty functions and rough inclusions. Both elements of approximation space are parametrized and for the proper application of such model to a particular data set it is necessary to make optimization of the parameters. We discuss basic properties of the mentioned model and also strategies of parameters optimization. We also present different notions of rough relations....
[]
Test
1,950
3
Optimization of User-Defined Functions in Distributed Object-Relational DBMS Full support of parallelism in object-relational database systems (ORDBMSs) is desired. The parallelization techniques developed for relational database systems are not adequate for ORDBMS because of the introduction of complex abstract data types and operations on ordered domains. In this paper, we consider a data stream paradigm and develop a query parallelization framework that exploits characteristics of user-defined functions in a ORDBMS during query optimization. By introducing the concept of windows and abstract data type orderings, we develop a novel approach for parallelizing user-defined functions in a distributed ORDBMS environment. The implementation issues in providing query services in ordered domains are also discussed. 1 Introduction As time goes by, more and more database vendors agree that Object-Relational DBMSs (ORDBMSs) are the future [23]. Though full support of parallel ORDBMS is expected, the techniques used to parallelize relational database systems are not ad...
[ 376 ]
Test
1,951
0
Means-End Plan Recognition - Towards a Theory of Reactive Recognition This paper draws its inspiration from current work in reactive planning to guide plan recognition using "plans as recipes". The plan recognition process guided by such a library of plans is called means-end plan recognition. An extension of dynamic logic, called dynamic agent logic, is introduced to provide a formal semantics for means-end plan recognition and its counterpart, means-end plan execution. The operational semantics, given by algorithms for means-end plan recognition, are then related to the provability of formulas in the dynamic agent logic. This establishes the relative soundness and completeness of the algorithms with respect to a given library of plans. Some of the restrictive assumptions underlying means-end plan recognition are then relaxed to provide a theory of reactive recognition that allows for changes in the external world during the recognition process. Reactive recognition, when embedded with the mental attitudes of belief, desire, and intention, leads to a po...
[ 485, 654, 1113, 1922, 2338 ]
Train
1,952
2
Assessing Software Libraries by Browsing Similar Classes, Functions and Relationships Comparing and contrasting a set of software libraries is useful for reuse related activities such as selecting a library from among several candidates or porting an application from one library to another. The current state of the art in assessing libraries relies on qualitative methods. To reduce costs and/or assess a large collection of libraries, automation is necessary. Although there are tools that help a developer examine an individual library in terms of architecture, style, etc., we know of no tools that help the developer directly compare several libraries. With existing tools, the user must manually integrate the knowledge learned about each library. Automation to help developers directly compare and contrast libraries requires matching of similar components (such as classes and functions) across libraries. This is different than the traditional component retrieval problem in which components are returned that best match a user's query. Rather, we need to find those component...
[ 2895 ]
Test
1,953
3
Measuring Knowledge with Workflow Management Systems Expert knowledge is captured in the process design. In organisations knowledge becomes embedded in routines, processes, practices as well as norms and can be evaluated by decisions or actions to which it leads, for example measurable efficiencies, speed or quality gains. Knowledge develops over time, through experience that includes what we absorb from courses, books, and mentors as well as informal learning. In this paper we analyse workflow history and demonstrate that workflow management systems enable knowledge measurement. 1
[ 345 ]
Train
1,954
2
Using Probabilistic Relational Models for Collaborative Filtering Recent projects in collaborative filtering and information filtering address the task of inferring user preference relationships for products or information. The data on which these inferences are based typically consists of pairs of people and items. The items may be information sources (such as web pages or newspaper articles) or products (such as books, software, movies or CDs). We are interested in making recommendations or predictions. Traditional approaches to the problem derive from classical algorithms in statistical pattern recognition and machine learning. The majority of these approaches assume a "flat" data representation for each object, and focus on a single dyadic relationship between the objects. In this paper, we examine a richer model that allows us to reason about many different relations at the same time. We build on the recent work on probabilistic relational models (PRMs), and describe how PRMs can be applied to the task of collaborative filtering. PRMs allow us to represent uncertainty about the existence of relationships in the model and allow the properties of an object to depend probabilistically both on other properties of that object and on properties of related objects. 1
[ 411, 518 ]
Train
1,955
2
Improving the search on the Internet by using WordNet and lexical operators A vast amount of information is available on the Internet, and naturally, many information gathering tools have been developed. Search engines with different characteristics, such as Alta Vista, Lycos, Infoseek, and others are available. However, there are inherent difficulties associated with the task of retrieving information on the Internet: (1) the web information is diverse and highly unstructured, (2) the size of information is large and it grows at an exponential rate. While these two issues are profound and require long term solutions, still it is possible to develop software around the search engines to improve the quality of the information retrieved. In this paper we present a natural language interface system to a search engine. The search improvement achieved by our system is based on: (1) a query extension using WordNet and (2) the use of new lexical operators that replace the classical boolean operators used by current search engines. Several tests have been performed using the TIPSTER topics collection, provided at the 6th Text Retrieval Conference (TREC-6); the results obtained are presented and discussed.
[ 363 ]
Train
1,956
1
Efficient Mining of Partial Periodic Patterns in Time Series Database Partial periodicity search, i.e., search for partial periodic patterns in time-series databases, is an interesting data mining problem. Previous studies on periodicity search mainly consider finding full periodic patterns, where every point in time contributes (precisely or approximately) to the periodicity. However, partial periodicity is very common in practice since it is more likely that only some of the time episodes may exhibit periodic patterns. We present several algorithms for efficient mining of partial periodic patterns, by exploring some interesting properties related to partial periodicity, such as the Apriori property and the max-subpattern hit set property, and by shared mining of multiple periods. The max-subpattern hit set property is a vital new property which allows us to derive the counts of all frequent patterns from a relatively small subset of patterns existing in the time series. We show that mining partial periodicity needs only two scans over the time series database, even for mining multiple periods. The performance study shows our proposed methods are very efficient in mining long periodic patterns.
[ 1426, 1982, 3043 ]
Train
1,957
3
Temporal View Self-Maintenance . View self-maintenance refers to maintaining materialized views without accessing base data. Self-maintenance is particularly useful in data warehousing settings, where base data comes from sources that may be inaccessible. Selfmaintenance has been studied for nontemporal views, but is even more important when a warehouse stores temporal views over the history of source data, since the source history needed to perform view maintenance may no longer exist. This paper tackles the self-maintenance problem for temporal views. We show how to derive auxiliary data to be stored at the warehouse so that the warehouse views and auxiliary data can be maintained without accessing the sources. The temporal view self-maintenance problem is considerably harder than the nontemporal case because a temporal view may need to be maintained not only when source data is modified but also as time advances, and these two dimensions of change interact in subtle ways. We also seek to minimize the amount of au...
[ 1055 ]
Validation
1,958
2
Generative Models for Cold-Start Recommendations Systems for automatically recommending items (e.g., movies, products, or information) to users are becoming increasingly important in e-commerce applications, digital libraries, and other domains where personalization is highly valued. Such recommender systems typically base their suggestions on (1) collaborative data encoding which users like which items, and/or (2) content data describing item features and user demographics. Systems that rely solely on collaborative data fail when operating from a cold start---that is, when recommending items (e.g., first-run movies) that no member of the community has yet seen. We develop several generative probabilistic models that circumvent the cold-start problem by mixing content data with collaborative data in a sound statistical manner. We evaluate the algorithms using MovieLens movie ratings data, augmented with actor and director information from the Internet Movie Database. We find that maximum likelihood learning with the expectation maximization (EM) algorithm and variants tends to overfit complex models that are initialized randomly. However, by seeding parameters of the complex models with parameters learned in simpler models, we obtain greatly improved performance. We explore both methods that exploit a single type of content data (e.g., actors only) and methods that leverage multiple types of content data (e.g., both actors and directors) simultaneously.
[ 1718, 2033, 2096, 2300, 2847 ]
Validation
1,959
2
What can you do with a Web in your Pocket? The amount of information available online has grown enormously over the past decade. Fortunately, computing power, disk capacity, and network bandwidth have also increased dramatically. It is currently possible for a university research project to store and process the entire World Wide Web. Since there is a limit on how much text humans can generate, it is plausible that within a few decades one will be able to store and process all the human-generated text on the Web in a shirt pocket. The Web is a very rich and interesting data source. In this paper, we describe the Stanford WebBase, a local repository of a significant portion of the Web. Furthermore, we describe a number of recent experiments that leverage the size and the diversity of the WebBase. First, we have largely automated the process of extracting a sizable relation of books (title, author pairs) from hundreds of data sources spread across the World Wide Web using a technique we call Dual Iterative Pattern Relation Extraction. Second, we have developed a global ranking of Web pages called PageRank based on the link structure of the Web that has properties that are useful for search and navigation. Third, we have used PageRank to develop a novel search engine called Google, which also makes heavy use of anchor text. All of these experiments rely significantly on the size and diversity of the WebBase. 1
[ 1296, 2386, 2503, 2984 ]
Train
1,960
2
OilEd: a Reason-able Ontology Editor for the Semantic Web Ontologies will play a pivotal role in the "Semantic Web", where they will provide a source of precisely defined terms that can be communicated across people and applications. OilEd is an ontology editor that has an easy to use frame interface, yet at the same time allows users to exploit the full power of an expressive web ontology language (OIL). OilEd uses reasoning to support ontology design, facilitating the development of ontologies that are both more detailed and more accurate.
[ 1730, 3117 ]
Test
1,961
0
An Incremental Interpreter for High-Level Programs with Sensing Like classical planning, the execution of high-level agent programs requires a reasoner to look all the way to a final goal state before even a single action can be taken in the world. This deferral is a serious problem in practice for large programs. Furthermore, the problem is compounded in the presence of sensing actions which provide necessary information, but only after they are executed in the world. To deal with this, we propose (characterize formally in the situation calculus, and implement in Prolog) a new incremental way of interpreting such high-level programs and a new high-level language construct, which together, and without loss of generality, allow much more control to be exercised over when actions can be executed. We argue that such a scheme is the only practical way to deal with large agent programs containing both nondeterminism and sensing. Introduction In (De Giacomo, Lesperance, & Levesque 1997) it was argued that when it comes to providing high level control to...
[ 903 ]
Train
1,962
0
A Social Semantics for Agent Communication Languages . The ability to communicate is one of the salient properties of agents. Although a number of agent communication languages (ACLs) have been developed, obtaining a suitable formal semantics for ACLs remains one of the greatest challenges of multiagent systems theory. Previous semantics have largely been mentalistic in their orientation and are based solely on the beliefs and intentions of the participating agents. Such semantics are not suitable for most multiagent applications, which involve autonomous and heterogeneous agents, whose beliefs and intentions cannot be uniformly determined. Accordingly, we present a social semantics for ACLs that gives primacy to the interactions among the agents. Our semantics is based on social commitments and is developed in temporal logic. This semantics, because of its public orientation, is essential to providing a rigorous basis for multiagent protocols. 1 Introduction Interaction among agents is the distinguishing property of multia...
[ 2514 ]
Train
1,963
4
View-based Interpretation of Real-time Optical Flow for Gesture Recognition We have developed a real-time, view-based gesture recognition system. Optical flow is estimated and segmented into motion blobs. Gestures are recognized using a rule-based technique based on characteristics of the motion blobs such as relative motion and size. Parameters of the gesture (e.g., frequency) are then estimated using context specific techniques. The system has been applied to create an interactive environment for children. 1 Introduction For many applications, the use of hand and body gestures is an attractive alternative to the cumbersome interface devices for human-computer interaction. This is especially true for interacting in virtual reality environments, where the user is no longer confined to the desktop and should be able to move around freely. While special devices can be worn to achieve these goals, these can be expensive and unwieldy. There has been a recent surge in computer vision research to provide a solution that doesn't use such devices. This paper describe...
[ 1469, 1530, 1733, 1969, 2082 ]
Train
1,964
0
Hierarchical Reinforcement Learning with the MAXQ Value Function Decomposition This paper presents a new approach to hierarchical reinforcement learning based on decomposing the target Markov decision process (MDP) into a hierarchy of smaller MDPs and decomposing the value function of the target MDP into an additive combination of the value functions of the smaller MDPs. The decomposition, known as the MAXQ decomposition, has both a procedural semantics (as a subroutine hierarchy) and a declarative semantics (as a representation of the value function of a hierarchical policy). MAXQ unifies and extends previous work on hierarchical reinforcement learning by Singh, Kaelbling, and Dayan and Hinton. It is based on the assumption that the programmer can identify useful subgoals and define subtasks that achieve these subgoals. By defining such subgoals, the programmer constrains the set of policies that need to be considered during reinforcement learning. The MAXQ value function decomposition can represent the value function of any policy that is consistent with the given hierarchy. The decomposition also creates opportunities to exploit state abstractions, so that individual MDPs within the hierarchy can ignore large parts of the state space. This is important for the practical application of the
[ 157, 749, 1130, 2087, 3110, 3147 ]
Train
1,965
0
Agents That Talk Back (Sometimes): Filter Programs for Affective Communication This paper introduces a model of interaction between users and animated agents as well as inter-agent interaction that supports basic features of affective conversation. As essential requirements for animated agents' capability to engage in and exhibit affective communication we motivate reasoning about emotion and emotion expression, personality, and social role awareness. The main contribution of our paper is the discussion of so-called `filter programs' that may qualify an agent's expression of its emotional state by its personality and the social context. All of the mental concepts that determine emotion expression, such as emotional state, personality, standards, and attitudes, have associated intensities for fine-tuning the agent's reactions in user-adapted environments.
[ 1538, 1723, 2711 ]
Train
1,966
2
Integrating the Document Object Model with Hyperlinks for Enhanced Topic Distillation and Information Extraction Topic distillation is the process of finding authoritative Web pages and comprehensive "hubs" which reciprocally endorse each other and are relevant to a given query. Hyperlink-based topic distillation has been traditionally applied to a macroscopic Web model where documents are nodes in a directed graph and hyperlinks are edges. Macroscopic models miss valuable clues such as banners, navigation panels, and template-based inclusions, which are embedded in HTML pages using markup tags. Consequently, results of macroscopic distillation algorithms have been deteriorating in quality as Web pages are becoming more complex. We propose a uniform fine-grained model for the Web in which pages are represented by their tag trees (also called their Document Object Models or DOMs) and these DOM trees are interconnected by ordinary hyperlinks. Surprisingly, macroscopic distillation algorithms do not work in the fine-grained scenario. We present a new algorithm suitable for the fine-grained model. It can dis-aggregate hubs into coherent regions by segmenting their DOM trees. Mutual endorsement between hubs and authorities involves these regions, rather than single nodes representing complete hubs. Anecdotes and measurements using a 28-query, 366000-document benchmark suite, used in earlier topic distillation research, reveal two benefits from the new algorithm: distillation quality improves, and a by-product of distillation is the ability to extract relevant snippets from hubs which are only partially relevant to the query.
[ 904, 2471, 2610 ]
Test
1,967
3
Online Reconfiguration in Replicated Databases Based on Group Communication Use of group communication to support replication in database systems has proven to be an attractive alternative to traditional replica control schemes, and various replica control protocols have been developed that use the ordering and reliability semantics of group communication primitives to simplify database system design and to improve performance. Although current solutions are able to mask site failures effectively, they are unable to cope with recovery of failed sites, merging of partitions, or joining of new sites. This paper addresses this important issue and proposes efficient solutions for online system reconfiguration providing new sites with a current state of the database without interrupting transaction processing in the rest of the system. We present various alternatives that can match the needs of different operating environments. We analyze the implications of long and complex reconfigurations on applications such as replicated databases, and argue that their development may be greatly simplified by extended forms of group communications.
[ 519, 2917, 2919 ]
Train
1,968
3
Software Tools Software is growing ever-more complex and new software processes, methods and products put greater demands on software engineers than ever before. The support of appropriate software tools is essential for developers to maximise their ability to effectively and efficiently deliver quality software products. This article surveys current practice in the software tools area, along with recent and expected near-future trends in software tools development. We provide a summary of tool applications during the software lifecycle, but focus on particular aspects of software tools that have changed in recent years and are likely to change in the near future as tools continue to evolve. These include the internal structure of tools, provision of multiple view interfaces, tool integration techniques, collaborative work support and the increasing use of automated assistance within tools. We hope this article will both inform software engineering practitioners of current research trends, and tool researchers of the relevant state-of-the-art in commercial tools and various likely future research trends in tools development.
[ 1645, 3042 ]
Train
1,969
4
3D Hand Pose Reconstruction Using Specialized Mappings A system for recovering 3D hand pose from monocular color sequences is proposed. The system employs a non-linear supervised learning framework, the specialized mappings architecture (SMA), to map image features to likely 3D hand poses. The SMA's fundamental components are a set of specialized forward mapping functions, and a single feedback matching function. The forward functions are estimated directly from training data, which in our case are examples of hand joint configurations and their corresponding visual features. The joint angle data in the training set is obtained via a CyberGlove, a glove with 22 sensors that monitor the angular motions of the palm and fingers. In training, the visual features are generated using a computer graphics module that renders the hand from arbitrary viewpoints given the 22 joint angles. The viewpoint is encoded by two real values, therefore 24 real values represent a hand pose. We test our system both on synthetic sequences and on sequences taken with a color camera. The system automatically detects and tracks both hands of the user, calculates the appropriate features, and estimates the 3D hand joint angles and viewpoint from those features. Results are encouraging given the complexity of the task.
[ 368, 727, 1383, 1890, 1963, 2148 ]
Test
1,970
5
Accurate and Fast Proximity Queries Between Polyhedra Using Convex Surface Decomposition The need to perform fast and accurate proximity queries arises frequently in physically-based modeling, simulation, animation, real-time interaction within a virtual environment, and game dynamics. The set of proximity queries include intersection detection, tolerance verification, exact and approximate minimum distance computation, and (disjoint) contact determination. Specialized data structures and algorithms have often been designed to perform each type of query separately. We present a unified approach to perform any of these queries seamlessly for general, rigid polyhedral objects with boundary representations which are orientable 2-manifolds. The proposed method involves a hierarchical data structure built upon a surface decomposition of the models. Furthermore, the incremental query algorithm takes advantage of coherence between successive frames. It has been applied to complex benchmarks and compares very favorably with earlier algorithms and systems. 1.
[]
Train
1,971
1
Hierarchical Discriminant Regression The main motivation of this paper is to propose a new classification and regression method for challenging high dimensional data. The proposed new technique casts classification problems (class labels as output) and regression problems (numeric values as output) into a unified regression problem. This unified view enables classification problems to use numeric information in the output space that is available for regression problems but is traditionally not readily available for classification problems -- distance metric among clustered class labels for coarse and fine classifications. A doubly clustered subspace-based hierarchical discriminating regression (HDR) method is proposed in this work. The major characteristics include: (1) Clustering is performed in both output space and input space at each internal node and thus the term "doubly clustered." Clustering in the output space provides virtual labels for computing clusters in the input space. (2) Discriminants in the input spa...
[]
Validation
1,972
1
Hunting moving targets: an extension to Bayesian methods in multimedia databases It has been widely recognised that the difference between the level of abstraction of the formulation of a query (by example) and that of the desired result (usually an image with certain semantics) calls for the use of learning methods that try to bridge this gap. Cox et al. have proposed a Bayesian method to learn the user's preferences during each query. Cox et al.'s system, PicHunter, is designed for optimal performance when the user is searching for a fixed target image. The performance of the system was evaluated using target testing, which ranks systems according to the number of interaction steps required to find the target, leading to simple, easily reproducible experiments. There are some aspects of image retrieval, however, which are not captured by this measure. In particular, the possibility of query drift (i.e. a moving target) is completely ignored. The algorithm proposed by Cox et al. does not cope well with a change of target at a late query stage, because it is ...
[ 1092 ]
Train
1,973
2
Similarity Measures With complex multimedia data, we see the emergence of database systems in which the fundamental operation is similarity assessment. Before database issues can be addressed, it is necessary to give a definition of similarity as an operation. In this paper we develop a similarity measure, based on fuzzy logic, that exhibits several features that match experimental findings in humans. The model is dubbed Fuzzy Feature Contrast (FFC) and is an extension of Tversky's Feature Contrast model to a more general domain. We show how the FFC model can be used to model similarity assessment from fuzzy judgment of properties, and we address the use of fuzzy measures to deal with dependencies among the properties. 1 Introduction Comparing two images, or an image and a model, is the fundamental operation for many Visual Information Retrieval systems. In most systems of interest, a simple pixel-by-pixel comparison won't do: the difference that we determine must bear some correlation with the p...
[ 657, 1119, 1159 ]
Test
1,974
1
A Novel Sensor for Dynamic Tactile Information We present a novel tactile sensor, which is useful for dextrous grasping with a simple robot gripper. The novel part consists of an array of capacitive sensors, which couple to the object by means of little brushes of fibers. These sensor elements are very sensitive (with a threshold of about 5 mN) but robust enough not to be damaged during grasping. They yield two types of dynamical tactile information corresponding roughly to two types of tactile sensors in the human skin. The complete sensor consists of a foil-based static force sensor, which yields the total force and the center of the two-dimensional force distribution and is surrounded by an array of the dynamical sensor elements. One such sensor has been mounted on each of the two gripper jaws of our humanoid robot and equipped with the necessary read-out electronics and a CAN bus interface. As first applications we describe experiments to evaluate the quality of a grip using the sensor measurements and a utility that allows to ...
[ 2339 ]
Validation
1,975
1
Language Identification From Prosody Without Explicit Features Most current language identification (LID) systems make little or no use of prosodic information, despite the importance of prosody in LID by humans. The greatest obstacle has been that of finding an appropriate feature set which captures linguistically relevant prosodic information. The only system to attempt LID entirely on the basis of prosodic variables uses a set of over 200 features which are selected and combined in a task-specific manner [12]. We apply a novel recurrent neural network model to the task of pairwise discrimination among languages. Network inputs are limited to delta-F 0 and the first difference of the band limited amplitude envelope. Initial results are based on all pairwise combinations of English, German, Japanese, Mandarin and Spanish, with 90 speakers per language. Keywords: Language identification, Recurrent neural networks, prosody 1. PROSODY AND LANGUAGE IDENTIFICATION Most current approaches to automatic language identification use some form of segment re...
[ 316 ]
Validation
1,976
2
World Wide Web Information Retrieval Using Web Connectivity Information Gathering, processing and distributing information from the World Wide Web will be a vital technology for the next century. Web search techniques have played a critical role in the development of information systems. Due to the diverse nature of web documents, traditional search techniques must be improved. Hyperlink structure based methods have proved to be powerful ways of exploring the relationships between web documents. In this project, a prototype web search engine was developed to exploit the link structure of web documents, based on the use of the Companion algorithm. The prototype consists of a web spider, local database, and search software. The system was written using the Java programming language. Our spider crawls and downloads web pages using Lynx, then saves the hyperlinks into an Oracle database. JDBC is used to implement the database processing. Search software builds a vicinity graph for the query URL and returns the most related pages after calculating the hub and authority weights. Finally, HTML web pages provide user interfaces and communicate with CGI using the Perl language. ACKNOWLEDGMENTS The author would like to express thanks to all of the members of his M.S. committee for their useful comments on the thesis, assistance in scheduling the defense date and kind help during the final defense period. The author would like to express his deepest appreciation to Dr. Wen-Chen Hu, his thesis mentor, for the depth of the training and the appropriate guidance he has provided. The author would also like to acknowledge the Department of Computer Science and Software Engineering of Auburn University for financial support. Finally, thanks especially go to the author's wife Qifang, his son, Alex, and his father and mother for their support and love. ...
[ 1201, 1838, 1983, 2459 ]
Test
1,977
2
Greenstone: A Comprehensive Open-Source Digital Library Software System This paper describes the Greenstone digital library software, a comprehensive, open-source system for the construction and presentation of information collections. Collections built with Greenstone offer effective full-text searching and metadata-based browsing facilities that are attractive and easy to use. Moreover, they are easily maintainable and can be augmented and rebuilt entirely automatically. The system is extensible: software “plugins ” accommodate different document and metadata types.
[ 507, 2471 ]
Test
1,978
0
An Architecture For Web Agents In this paper we propose an extended BDI architecture for web agents. The architecture is general: it covers 2D web agents with text-based interfaces for retrieval services as well as 3D web agents, such as avatar-embodied guides that help visitors navigate in virtual environments. Furthermore, we define the sensor/effector primitives of web agents, and show how these different types of web agents can be implemented, based on the general architecture.
[ 1340, 1504 ]
Train
1,979
1
A Process-Oriented Heuristic for Model Selection Current methods to avoid overfitting are either data-oriented (using separate data for validation) or representation-oriented (penalizing complexity in the model). This paper proposes process-oriented evaluation, where a model's expected generalization error is computed as a function of the search process that led to it. The paper develops the necessary theoretical framework, and applies it to one type of learning: rule induction. A process-oriented version of the CN2 rule learner is empirically compared with the default CN2. The process-oriented version is more accurate in a large majority of the datasets, with high significance, and also produces simpler models. Experiments in artificial domains suggest that processoriented evaluation is particularly useful in high-dimensional domains. 1 INTRODUCTION Overfitting avoidance is often considered the central problem of machine learning (e.g., (Cheeseman & Oldford, 1994)). If a learner is sufficiently powerful, it must guard against selec...
[ 2175 ]
Train