| node_id (int64, 0–76.9k) | label (int64, 0–39) | text (string, 13–124k chars) | neighbors (list, 0–3.32k entries) | mask (string, 4 classes) |
|---|---|---|---|---|
280 | 3 | Functional Join Processing. Inter-object references are one of the key concepts of object-relational and object-oriented database systems. In this work, we investigate alternative techniques to implement inter-object references and make the best use of them in query processing, i.e., in evaluating functional joins. We will give a comprehensive overview and performance evaluation of all known techniques for simple (single-valued) as well as multi-valued functional joins. Furthermore, we will describe special order-preserving functional-join techniques that are particularly attractive for decision support queries that require ordered results. While most of the presentation of this paper is focused on object-relational and object-oriented database systems, some of the results can also be applied to plain relational databases because index nested-loop joins along key/foreign-key relationships, as they are frequently found in relational databases, are just one particular way to execute a functional join. Key words: O... | [
1468 ] | Train |
281 | 5 | Managing Robot Autonomy and Interactivity Using Motives and Visual Communication An autonomous mobile robot operating in everyday life conditions will have to face a huge variety of situations and to interact with other agents (living or artificial). Such a robot needs flexible and robust methods for managing its goals and for adapting its control mechanisms to face the contingencies of the world. It also needs to communicate with others in order to get useful information about the world. This paper describes an approach based on a general architecture and on internal variables called `motives' to manage the goals of an autonomous robot. These variables are also used as a basis for communication using a visual communication system. Experiments using a vision- and sonar-based Pioneer I robot, equipped with a visual signaling device, are presented. 1 Introduction Designing an autonomous mobile robot to operate in unmodified environments, i.e., environments that have not been specifically engineered for the robot, is a very challenging problem. Dynamic and unpredic... | [
963, 1622, 2828, 3094 ] | Train |
282 | 3 | Approximate Query Translation across Heterogeneous Information Sources In this paper we present a mechanism for approximately translating Boolean query constraints across heterogeneous information sources. Achieving the best translation is challenging because sources support different constraints for formulating queries, and often these constraints cannot be precisely translated. For instance, a query [score ≥ 8] might be "perfectly" translated as [rating ≥ 0.8] at some site, but can only be approximated as [grade = A] at another. Unlike other work, our general framework adopts a customizable "closeness" metric for the translation that combines both precision and recall. Our results show that for query translation we need to handle interdependencies among both query conjuncts and disjuncts. As the basis, we identify the essential requirements of a rule system for users to encode the mappings for atomic semantic units. Our algorithm then translates complex queries by rewriting them in terms of the semantic units. We show that, un... | [
62, 2712 ] | Test |
283 | 4 | The Cub-e, a Novel Virtual 3D Display Device We have designed, and are in the process of building, a visualisation device, the Cub-e. The Cub-e consists of six TFT screens, arranged in a perspex cube, with a StrongARM processor and batteries inside. It is a multipurpose device with applications including teleconferencing, interaction with virtual worlds, and games. 1 | [
21 ] | Validation |
284 | 2 | Automatic Text Representation, Classification and Labeling in European Law The huge text archives and retrieval systems of legal information have not yet achieved the representation in the well-known subject-oriented structure of legal commentaries. Content-based classification and text analysis remains a high priority research topic. In the joint KONTERM, SOM and LabelSOM projects, learning techniques of neural networks are used to achieve similar high compression rates of classification and analysis like in manual legal indexing. The produced maps of legal text corpora cluster related documents in units that are described with automatically selected descriptors. Extensive tests with text corpora in European case law have shown the feasibility of this approach. Classification and labeling proved very helpful for legal research. The Growing Hierarchical Self-Organizing Map represents very interesting generalities and specialties of legal text corpora. The segmentation into document parts improved very much the quality of labeling. The next challenge would be a change from tf×idf vector representation to a modified vector representation taking into account thesauri or ontologies considering learned properties of legal text corpora. | [
133 ] | Test |
285 | 3 | Concurrency Control and Recovery in Transactional Process Management The classical theory of transaction management contains two different aspects, namely concurrency control and recovery, which ensure serializability and atomicity of transaction executions, respectively. Although concurrency control and recovery are not independent of each other, the criteria for these two aspects were developed orthogonally and as a result, in most cases these criteria are incompatible with each other. Recently a unified theory of concurrency control and recovery for databases with read and write operations has been introduced in [SWY93, AVA + 94] that allows reasoning about serializability and atomicity within the same framework. In [SWY93, AVA + 94] a class of schedules was introduced (called prefix reducible), which guarantees both serializability and atomicity in a failure prone environment with read/write operations. Several protocols were developed to generate such schedules by a database concurrency control mechanism. We present here a unified transaction ... | [
1343, 1534 ] | Train |
286 | 0 | Supporting Conflict Resolution in Cooperative Design Systems Complex modern-day artifacts are designed cooperatively by groups of experts, each with their own areas of expertise. The interaction of such experts inevitably involves conflict. This paper presents an implemented computational model, based on studies of human cooperative design, for supporting the resolution of such conflicts. This model is based centrally on the insights that general conflict resolution expertise exists separately from domain-level design expertise, and that this expertise can be instantiated in the context of particular conflicts into specific advice for resolving those conflicts. Conflict resolution expertise consists of a taxonomy of design conflict classes in addition to associated general advice suitable for resolving conflicts in these classes. The abstract nature of conflict resolution expertise makes it applicable to a wide variety of design domains. This paper describes this conflict resolution model and provides examples of its operation from an implemente... | [
858, 1332, 1378, 1724, 2294, 2314, 2408 ] | Train |
287 | 5 | CMUnited-97: RoboCup-97 Small-Robot World Champion Team Robotic soccer is a challenging research domain which involves multiple agents that need to collaborate in an adversarial environment to achieve specific objectives. In this paper, we describe CMUnited, the team of small robotic agents that we developed to enter the RoboCup-97 competition. We designed and built the robotic agents, devised the appropriate vision algorithm, and developed and implemented algorithms for strategic collaboration between the robots in an uncertain and dynamic environment. The robots can organize themselves in formations, hold specific roles, and pursue their goals. In game situations, they have demonstrated their collaborative behaviors on multiple occasions. We present an overview of the vision processing algorithm which successfully tracks multiple moving objects and predicts trajectories. The paper then focuses on the agent behaviors ranging from low-level individual behaviors to coordinated, strategic team behaviors. CMUnited won the RoboCup-97 small-robot competition at IJCAI-97 in Nagoya, Japan. | [
346, 962, 1573, 1630, 2264, 2492, 3173 ] | Train |
288 | 2 | Computing Iceberg Queries Efficiently Many applications compute aggregate functions (such as COUNT, SUM) over an attribute (or set of attributes) to find aggregate values above some specified threshold. We call such queries iceberg queries because the number of above-threshold results is often very small (the tip of an iceberg), relative to the large amount of input data (the iceberg). Such iceberg queries are common in many applications, including data warehousing, information-retrieval, market basket analysis in data mining, clustering and copy detection. We propose efficient algorithms to evaluate iceberg queries using very little memory and significantly fewer passes over data, as compared to current techniques that use sorting or hashing. We present an experimental case study using over three gigabytes of Web data to illustrate the savings obtained by our algorithms. 1 Introduction In this paper we develop efficient execution strategies for an important class of queries that we call iceberg queries. An iceberg query... | [
2746 ] | Validation |
289 | 2 | Assessing Software Libraries by Browsing Similar Classes, Functions, and Relationships Comparing and contrasting a set of software libraries is useful for reuse related activities such as selecting a library from among several candidates or porting an application from one library to another. The current state of the art in assessing libraries relies on qualitative methods. To reduce costs and/or assess a large collection of libraries, automation is necessary. Although there are tools that help a developer examine an individual library in terms of architecture, style, etc., we know of no tools that help the developer directly compare several libraries. With existing tools, the user must manually integrate the knowledge learned about each library. Automation to help developers directly compare and contrast libraries requires matching of similar components (such as classes and functions) across libraries. This is different than the traditional component retrieval problem in which components are returned that best match a user's query. Rather, we need to find those component... | [
2895 ] | Validation |
290 | 4 | Self-Organization in Ad Hoc Sensor Networks: An Empirical Study Research in classifying and recognizing complex concepts has been directing its focus increasingly on distributed sensing using a large amount of sensors. The colossal amount of sensor data often obstructs traditional algorithms in centralized approaches, where all sensor data is directed to one central location to be processed. Spreading the processing of sensor data over the network seems to be a promising option, but distributed algorithms are harder to inspect and evaluate. Using self-sufficient sensor boards with short-range wireless communication capabilities, we are exploring approaches to achieve an emerging distributed perception of the sensed environment in realtime through clustering. Experiments in both simulation and real-world platforms indicate that this is a valid methodology, being especially promising for computation on many units with limited resources. | [
1731 ] | Test |
291 | 1 | Clustering Large Datasets in Arbitrary Metric Spaces Clustering partitions a collection of objects into groups called clusters, such that similar objects fall into the same group. Similarity between objects is defined by a distance function satisfying the triangle inequality; this distance function along with the collection of objects describes a distance space. In a distance space, the only operation possible on data objects is the computation of distance between them. All scalable algorithms in the literature assume a special type of distance space, namely a k-dimensional vector space, which allows vector operations on objects. We present two scalable algorithms designed for clustering very large datasets in distance spaces. Our first algorithm BUBBLE is, to our knowledge, the first scalable clustering algorithm for data in a distance space. Our second algorithm BUBBLE-FM improves upon BUBBLE by reducing the number of calls to the distance function, which may be computationally very expensive. Both algorithms make only a single scan ov... | [
1405, 2971 ] | Test |
292 | 3 | Manipulating Interpolated Data is Easier than You Thought Data defined by interpolation is frequently found in new applications involving geographical entities, moving objects, or spatiotemporal data. These data lead to potentially infinite collections of items, (e.g., the elevation of any point in a map), whose definitions are based on the association of a collection of samples with an interpolation function. The naive manipulation of the data through direct access to both the samples and the interpolation functions leads to cumbersome or inaccurate queries. It is desirable to hide the samples and the interpolation functions from the logical level, while their manipulation is performed automatically. We propose to model such data using infinite relations (e.g., the map with elevation yields an infinite ternary relation) which can be manipulated through standard relational query languages (e.g., SQL), with no mention of the interpolated definition. The clear separation between logical and physical levels ensures the accu... | [
54, 331, 1927 ] | Train |
293 | 4 | Developing a Context-aware Electronic Tourist Guide: Some Issues and Experiences In this paper, we describe our experiences of developing and evaluating GUIDE, an intelligent electronic tourist guide. The GUIDE system has been built to overcome many of the limitations of the traditional information and navigation tools available to city visitors. For example, group-based tours are inherently inflexible with fixed starting times and fixed durations and (like most guidebooks) are constrained by the need to satisfy the interests of the majority rather than the specific interests of individuals. Following a period of requirements capture, involving experts in the field of tourism, we developed and installed a system for use by visitors to Lancaster. The system combines mobile computing technologies with a wireless infrastructure to present city visitors with information tailored to both their personal and environmental contexts. In this paper we present an evaluation of GUIDE, focusing on the quality of the visitors' experience when using the system. Keywords Mobile c... | [
796, 2590, 3137 ] | Train |
294 | 5 | Reasoning with Concrete Domains Description logics are knowledge representation and reasoning formalisms which represent conceptual knowledge on an abstract logical level. Concrete domains are a theoretically well-founded approach to the integration of description logic reasoning with reasoning about concrete objects such as numbers, time intervals or spatial regions. In this paper, the complexity of combined reasoning with description logics and concrete domains is investigated. We extend ALC(D), which is the basic description logic for reasoning with concrete domains, by the operators "feature agreement" and "feature disagreement". For the extended logic, called ALCF(D), an algorithm for deciding the ABox consistency problem is devised. The strategy employed by this algorithm is vital for the efficient implementation of reasoners for description logics incorporating concrete domains. Based on the algorithm, it is proved that the standard reasoning problems for both logics ALC(D) and ALCF(D) are PSpace-co... | [
1484, 2077, 3036 ] | Train |
295 | 3 | Highly Concurrent Shared Storage. Shared storage arrays enable thousands of storage devices to be shared and directly accessed by end hosts over switched system-area networks, promising databases and filesystems highly scalable, reliable storage. In such systems, hosts perform access tasks (read and write) and management tasks (migration and reconstruction of data on failed devices). Each task translates into multiple phases of low-level device I/Os, so that concurrent host tasks can span multiple shared devices and access overlapping ranges, potentially leading to inconsistencies for redundancy codes and for data read by end hosts. Highly scalable concurrency control and recovery protocols are required to coordinate on-line storage management and access tasks. While expressing storage-level tasks as ACID transactions ensures proper concurrency control and recovery, such an approach imposes high performance overhead, results in replication of work and does not exploit the available knowledge about storage le... | [
104 ] | Validation |
296 | 2 | Collaborative Learning for Recommender Systems Recommender systems use ratings from users on items such as movies and music for the purpose of predicting the user preferences on items that have not been rated. Predictions are normally done by using the ratings of other users of the system, by learning the user preference as a function of the features of the items or by a combination of both these methods. In this paper, we pose the problem as one of collaboratively learning of preference functions by multiple users of the recommender system. We study several mixture models for this task. We show, via theoretical analyses and experiments on a movie rating database, how the models can be designed to overcome common problems in recommender systems including the new user problem, the recurring startup problem, the sparse rating problem and the scaling problem. 1. | [
220 ] | Train |
297 | 5 | Dynamic Service Matchmaking Among Agents in Open Information Environments Introduction The amount of services and deployed software agents in the most famous offspring of the Internet, the World Wide Web, is exponentially increasing. In addition, the Internet is an open environment, where information sources, communication links and agents themselves may appear and disappear unpredictably. Thus, an effective, automated search and selection of relevant services or agents is essential for human users and agents as well. We distinguish three general agent categories in the Cyberspace: service providers, service requesters, and middle agents. Service providers provide some type of service, such as finding information, or performing some particular domain-specific problem solving. Requester agents need provider agents to perform some service for them. Agents that help locate others are called middle agents [2]. Matchmaking is the process of finding an appropriate provider for a requester thr | [
70, 952, 1580, 1818, 2344, 2514 ] | Train |
298 | 2 | Scaling Personalized Web Search Recent web search techniques augment traditional text matching with a global notion of “importance” based on the linkage structure of the web, such as in Google’s PageRank algorithm. For more refined searches, this global notion of importance can be specialized to create personalized views of importance—for example, importance scores can be biased according to a user-specified set of initially-interesting pages. Computing and storing all possible personalized views in advance is impractical, as is computing personalized views at query time, since the computation of each view requires an iterative computation over the web graph. We present new graph-theoretical results, and a new technique based on these results, that encode personalized views as partial vectors. Partial vectors are shared across multiple personalized views, and their computation and storage costs scale well with the number of views. Our approach enables incremental computation, so that the construction of personalized views from partial vectors is practical at query time. We present efficient dynamic programming algorithms for computing partial vectors, an algorithm for constructing personalized views from partial vectors, and experimental results demonstrating the effectiveness and scalability of our techniques. 1 | [
538, 1069, 1170, 2984 ] | Test |
299 | 2 | Regularized Principal Manifolds . Many settings of unsupervised learning can be viewed as quantization problems --- the minimization of the expected quantization error subject to some restrictions. This allows the use of tools such as regularization from the theory of (supervised) risk minimization for unsupervised settings. This setting turns out to be closely related to principal curves, the generative topographic map, and robust coding. We explore this connection in two ways: 1) we propose an algorithm for finding principal manifolds that can be regularized in a variety of ways. 2) We derive uniform convergence bounds and hence bounds on the learning rates of the algorithm. In particular, we give bounds on the covering numbers which allows us to obtain nearly optimal learning rates for certain types of regularization operators. Experimental results demonstrate the feasibility of the approach. Keywords: Regularization, Uniform Convergence, Kernels, Entropy Numbers, Principal Curves, Clustering, Generative Topograph... | [
1403 ] | Train |
300 | 1 | Integrating Case-Based Learning and Cognitive Biases for Machine Learning of Natural Language This paper shows that psychological constraints on human information processing can be used effectively to guide feature set selection for case-based learning of linguistic knowledge. Given as input a baseline case representation for a natural language learning task, our algorithm selects the relevant cognitive biases for the task and then automatically modifies the representation in response to those biases by changing, deleting, and weighting features appropriately. We apply the cognitive bias approach to feature set selection to four natural language learning problems and show that performance of the casebased learning algorithm improves significantly when relevant cognitive biases are incorporated into the baseline instance representation. We argue that the cognitive bias approach offers new possibilities for case-based learning of natural language: it simplifies the process of instance representation design and, in theory, obviates the need for separate instance represent... | [
2469, 2823 ] | Validation |
301 | 2 | A Study of Approaches to Hypertext Categorization Hypertext poses new research challenges for text classification. Hyperlinks, HTML tags, category labels distributed over linked documents, and meta data extracted from related Web sites all provide rich information for classifying hypertext documents. How to appropriately represent that information and automatically learn statistical patterns for solving hypertext classification problems is an open question. This paper seeks a principled approach to providing the answers. Specifically, we define five hypertext regularities which may (or may not) hold in a particular application domain, and whose presence (or absence) may significantly influence the optimal design of a classifier. Using three hypertext datasets and three well-known learning algorithms (Naive Bayes, Nearest Neighbor, and First Order Inductive Learner), we examine these regularities in different domains, and compare alternative ways to exploit them. Our results show that the identification of hypertext regularities in the data and the selection of appropriate representations for hypertext in particular domains are crucial, but seldom obvious, in real-world problems. We find that adding the words in the linked neighborhood to the page having those links (both inlinks and outlinks) was helpful for all our classifiers on one data set, but more harmful than helpful for two out of the three classifiers on the remaining datasets. We also observed that extracting meta data from related Web sites was extremely useful for improving classification accuracy in some of those domains. Finally, the relative performance of the classifiers being tested provided insights into their strengths and limitations for solving classification problems involving diverse and often noisy Web pages. | [
322, 471, 605, 815, 2178, 2961 ] | Validation |
302 | 2 | Patterns of Search: Analyzing and Modeling Web Query Refinement . We discuss the construction of probabilistic models centering on temporal patterns of query refinement. Our analyses are derived from a large corpus of Web search queries extracted from server logs recorded by a popular Internet search service. We frame the modeling task in terms of pursuing an understanding of probabilistic relationships among temporal patterns of activity, informational goals, and classes of query refinement. We construct Bayesian networks that predict search behavior, with a focus on the progression of queries over time. We review a methodology for abstracting and tagging user queries. After presenting key statistics on query length, query frequency, and informational goals, we describe user models that capture the dynamics of query refinement. 1 Introduction The evolution of the World Wide Web has provided rich opportunities for gathering and analyzing anonymous log data generated by user interactions with network-based services. Web-based search engine... | [
1525, 1541, 2793 ] | Train |
303 | 1 | OBDD-based Universal Planning for Synchronized Agents in Non-Deterministic Domains Recently model checking representation and search techniques were shown to be efficiently applicable to planning, in particular to non-deterministic planning. Such planning approaches use Ordered Binary Decision Diagrams (obdds) to encode a planning domain as a non-deterministic finite automaton and then apply fast algorithms from model checking to search for a solution. obdds can effectively scale and can provide universal plans for complex planning domains. We are particularly interested in addressing the complexities arising in non-deterministic, multi-agent domains. In this article, we present umop, a new universal obdd-based planning framework for non-deterministic, multi-agent domains. We introduce a new planning domain description language, NADL, to specify non-deterministic, multi-agent domains. The language contributes the explicit definition of controllable agents and uncontrollable environment agents. We describe the syntax and semantics of NADL and show how to bu... | [
2182, 3146 ] | Train |
304 | 0 | Análisis Dinámico De Las Creencias El modelo de los mundos posibles y su semántica de Kripke asociada dan una semántica intuitiva para las lógicas doxásticas, pero parecen llevar inevitablemente a modelizar agentes lógicamente omniscientes y razonadores perfectos. Estos problemas son evitables, si los mundos posibles dejan de considerarse como descripciones completas y consistentes del mundo real. Adoptando una definición sintáctica de mundos posibles, en este artículo se sugiere cómo se pueden analizar las creencias de una forma puramente lógica, usando el método de los tableros analíticos (con ciertas modificaciones). Este análisis constituye un primer paso hacia la modelización de la investigación racional. Palabras clave: lógicas de creencias y de conocimiento, omnisciencia lógica, mundos posibles, relación de accesibilidad, tableros analíticos Temas: representación del conocimiento, razonamiento 1 Introducción Las lógicas de creencias y de conocimiento ([HALP92], [FAGI95]) son herramienta... | [
1689 ] | Train |
305 | 1 | Comparison of Learning Approaches to Appearance-based 3D Object Recognition with and without cluttered background We re-evaluate the application of Support Vector Machines (SVM) to appearance-based 3D object recognition, by comparing it to two other learning approaches: the system developed at Columbia University ("Columbia") and a simple image matching system using a nearest neighbor classifier ("NNC"). In a first set of experiments, we compare correct recognition rates of the segmented 3D object images of the COIL database. We show that the performance of the simple "NNC" system compares to the more elaborated "Columbia" and "SVM" systems. Only when the experimental setting is more demanding, i.e. when we reduce the number of views during the training phase, some difference in performance can be observed. In a second set of experiments, we consider the more realistic task of 3D object recognition with cluttered background. Also in this case, we obtain that the performance of the three systems are comparable. Only with the recently proposed black/white background training scheme ("BW") applied t... | [
2389 ] | Train |
306 | 1 | Distributed Case-Based Learning Multi-agent systems exploiting case based reasoning techniques have to deal with the problem of retrieving episodes that are themselves distributed across a set of agents. From a Gestalt perspective, a good overall case may not be the one derived from the summation of best subcases. In this paper we deal with issues involved in learning and exploiting the learned knowledge in multiagent case-based systems. Introduction Case Based Reasoning (CBR) has been attracting much attention recently as a paradigm with a wide variety of applications [Kolodner 93]. In this paper, we discuss issues pertaining to cooperative retrieval and composition of a case in which subcases are distributed across different agents in a multi-agent system. A multi-agent system comprises a group of intelligent agents working towards a set of common global goals or separate individual goals that may interact. In such a system, each of the agents may not be individually capable of achieving the global goal and/or ... | [
1378 ] | Validation |
307 | 3 | Optional Locking Integrated with Operational Transformation in Distributed Real-Time Group Editors Locking is a standard technique in traditional distributed computing and database systems to ensure data integrity by prohibiting concurrent conflicting updates on shared data objects. Operational transformation is an innovative technique invented by groupware research for consistency maintenance in real-time group editors. In this paper, we will examine and explore the complementary roles of locking and operational transformation in consistency maintenance. A novel optional locking scheme is proposed and integrated with operational transformation to maintain both generic and context-specific consistency in a distributed, interactive, and collaborative environment. The integrated optional locking and operational transformation technique is fully distributed, highly responsive, non-blocking, and capable of avoiding locking overhead in the most common case of collaborative editing. Keywords: Locking, operational transformation, consistency maintenance, group editors, groupware, distribute... | [
31 ] | Train |
308 | 4 | Rooms, Protocols, and Nets: Metaphors for Computer Supported Cooperative Learning of Distributed Groups : We discuss an integrative design for computer supported cooperative learning (CSCL) environments. Three common problems of CSCL are addressed: How to achieve social orientation and group awareness, how to coordinate goal-directed interaction, and how to construct a shared knowledge base. With respect to each problem, we propose a guiding metaphor which links theoretical, technical, and usability requirements. If appropriately implemented, each metaphor resolves one problem: Virtual rooms support social orientation, learning protocols guide interactions aimed at knowledge acquisition, and learning nets represent socially shared knowledge. Theoretically, the metaphor of virtual rooms originates in work on virtual spaces in human computer interaction, learning protocols are related to speech act theory, and learning nets are based on models of knowledge representation. A prototype system implementing the virtual room metaphor is presented. We argue that by further integrating these thre... | [
99
] | Test |
309 | 3 | An Algebraic Compression Framework for Query Results Decision-support applications in emerging environments require that SQL query results or intermediate results be shipped to clients for further analysis and presentation. These clients may use low bandwidth connections or have severe storage restrictions. Consequently, there is a need to compress the results of a query for efficient transfer and client-side access. This paper explores a variety of techniques that address this issue. Instead of using a fixed method, we choose a combination of compression methods that use statistical and semantic information of the query results to enhance the effect of compression. To represent such a combination, we present a framework of "compression plans" formed by composing primitive compression operators. We also present optimization algorithms that enumerate valid compression plans and choose an optimal plan. Our experiments show that our techniques achieve significant performance improvement over standard compression tools like WinZip. 1. Intro... | [
1521
] | Test |
310 | 4 | Learning and Tracking Cyclic Human Motion We present methods for learning and tracking human motion in video. We estimate a statistical model of typical activities from a large set of 3D periodic human motion data by segmenting these data automatically into "cycles". Then the mean and the principal components of the cycles are computed using a new algorithm that accounts for missing information and enforces smooth transitions between cycles. The learned temporal model provides a prior probability distribution over human motions that can be used in a Bayesian framework for tracking human subjects in complex monocular video sequences and recovering their 3D motion. | [
531
] | Validation |
311 | 4 | Does Zooming Improve Image Browsing? We describe an image retrieval system we built based on a Zoomable User Interface (ZUI). We also discuss the design, results and analysis of a controlled experiment we performed on the browsing aspects of the system. The experiment resulted in a statistically significant difference in the interaction between number of images (25, 75, 225) and style of browser (2D, ZUI, 3D). The 2D and ZUI browser systems performed equally, and both performed better than the 3D systems. The image browsers tested during the experiment include Cerious Software's Thumbs Plus, TriVista Technology's Simple LandScape and Photo GoRound, and our Zoomable Image Browser based on Pad++. Keywords Evaluation, controlled experiment, image browsers, retrieval systems, real-time computer graphics, Zoomable User Interfaces (ZUIs), multiscale interfaces, Pad++. INTRODUCTION In the past two decades, with the emergence of faster computers, the declining cost of memory, the popularity of digital cameras, online archives... | [
900,
2504
] | Train |
312 | 1 | Learning Strategy Knowledge Incrementally Modern industrial processes require advanced computer tools that should adapt to the user requirements and to the tasks being solved. Strategy learning consists of automating the acquisition of patterns of actions used while solving particular tasks. Current intelligent strategy learning systems acquire operational knowledge to improve the efficiency of a particular problem solver. However, these strategy learning tools should also provide a way of achieving low-cost solutions according to user-specific criteria. In this paper, we present a learning system, hamlet, which is integrated in a planning architecture, prodigy, and acquires control knowledge to guide prodigy to efficiently produce cost-effective plans. hamlet learns from planning episodes, by explaining why the correct decisions were made, and later refines the learned strategy knowledge to make it incrementally correct with experience. | [
1429,
2422
] | Train |
313 | 0 | Modeling Sociality In The BDI Framework . We present a conceptual model for how the social nature of agents impacts upon their individual mental states. Roles and social relationships provide an abstraction upon which we develop the notion of social mental shaping . 1 Introduction Belief-Desire-Intention (BDI) architectures for deliberative agents are based on the physical symbol system assumption that agents maintain and reason about internal representations of their world [2]. However, while such architectures conceptualise individual intentionality and behaviour, they say nothing about the social aspects of agents being situated in a multi-agent system. The main reason for this limitation is that mental attitudes are taken to be internal to a particular agent (or team) and are modeled as a relation between the agent (or a team) and a proposition. The purpose of this paper is, therefore, to extend BDI models in order to investigate the problem of how the social nature of agents can impact upon their individual mental ... | [
484,
1358,
2921
] | Test |
314 | 3 | Clause Aggregation Using Linguistic Knowledge By combining multiple clauses into one single sentence, a text generation system can express the same amount of information in fewer words and at the same time, produce a great variety of complex constructions. In this paper, we describe hypotactic and paratactic operators for generating complex sentences from clause-sized semantic representations. These two types of operators are portable and reusable because they are based on general resources such as the lexicon and the grammar. 1 Introduction An expression is more concise than another expression if it conveys the same amount of information in fewer words. Complex sentences generated by combining clauses are more concise than corresponding simple sentences because multiple references to the recurring entities are removed. For example, clauses like "Jones is a patient" and "Jones has hypertension" can be combined into a more concise sentence "Jones is a hypertensive patient." To illustrate the common occurrence of such repeated enti... | [
1111,
2931
] | Validation |
315 | 5 | Hybrid Heuristics for Optimal Design of Artificial Neural Networks Designing the architecture and correct parameters for the learning algorithm is a tedious task for modeling an optimal Artificial Neural Network (ANN), which is smaller, faster and with a better generalization performance. In this paper we explain how a hybrid algorithm integrating Genetic algorithm (GA), Simulated Annealing (SA) and other heuristic procedures can be applied for the optimal design of an ANN. This paper is more concerned with the understanding of current theoretical developments of Evolutionary Artificial Neural Networks (EANNs) using GAs and how the proposed hybrid heuristic procedures can be combined to produce an optimal ANN. The proposed meta-heuristic can be regarded as a general framework for adaptive systems, that is, systems that can change their connection weights, architectures and learning rules according to different environments without human intervention. | [
2071
] | Test |
316 | 1 | Automatic Discrimination Among Languages Based on Prosody Alone The development of methods for the automatic identification of languages is motivated both by speech-based applications intended for use in a multi-lingual environment, and by theoretical questions of cross-linguistic variation and similarity. We evaluate the potential utility of two prosodic variables, F0 and amplitude envelope modulation, in a pairwise language discrimination task. Discrimination is done using a novel neural network which can successfully attend to temporal information at a range of timescales. Both variables are found to be useful in discriminating among languages, and confusion patterns, in general, reflect traditional intonational and rhythmic language classes. The methods employed allow empirical determination of prosodic similarity across languages. | [
1975
] | Test |
317 | 1 | Class Representation and Image Retrieval with Non-Metric Distances One of the key problems in appearance-based vision is understanding how to use a set of labeled images to classify new images. Classification systems that can model human performance, or that use robust image matching methods, often make use of similarity judgments that are non-metric; but when the triangle inequality is not obeyed, most existing pattern recognition techniques are not applicable. We note that exemplar-based (or nearest-neighbor) methods can be applied naturally when using a wide class of non-metric similarity functions. The key issue, however, is to find methods for choosing good representatives of a class that accurately characterize it. We show that existing condensing techniques for finding class representatives are ill-suited to deal with non-metric dataspaces. We then focus on developing techniques for solving this problem, emphasizing two points: First, we show that the distance between two images is not a good measure of how well one image can represent another in non-metric spaces. Instead, we use the vector correlation between the distances from each image to other previously seen images. Second, we show that in non-metric spaces, boundary points are less significant for capturing the structure of a class | [
2564
] | Train |
318 | 5 | Ontology-Related Services in Agent-Based Distributed Information Infrastructures Ontologies are an emerging paradigm to support declarativity, interoperability, and intelligent services in many areas, such as Agent--based Computation, Distributed Information Systems, and Expert Systems. In the context of designing a scalable, agent-based middleware for the realization of distributed Organizational Memories (OM), we examine the question what ontology--related services must be provided as middleware components. To this end, we discuss three basic dimensions of information that have fundamental impact on the usefulness of ontologies for OMs, namely formality, stability, and sharing scope of information. A short discussion of techniques which are suited to find a balance in each of these dimensions leads to a characterization of roles of ontology--related actors in the OM scenario. We describe the several roles with respect to their goals, knowledge, competencies, rights, and obligations. These actor classes and the related competencies are candidates to define agent types, speech acts, and standard services in the envisioned OM middleware. 1. | [
1572
] | Train |
319 | 4 | Incremental Code Mobility with XML We demonstrate how XML and related technologies can be used for code mobility at any granularity, thus overcoming the restrictions of existing approaches. By not fixing a particular granularity for mobile code, we enable complete programs as well as individual lines of code to be sent across the network. We define the concept of incremental code mobility as the ability to migrate and add, remove, or replace code fragments (i.e., increments) in a remote program. The combination of fine-grained and incremental mobility achieves a previously unavailable degree of flexibility. We examine the application of incremental and fine-grained code mobility to a variety of domains, including user interface management, application management on mobile thin clients, for example PDAs, and management of distributed documents. Keywords Incremental Code Mobility, XML Technologies 1 INTRODUCTION The increasing popularity of Java and the spread of Webbased technologies are contributing to a growing int... | [
2710
] | Train |
320 | 0 | Organisational Abstractions for the Analysis and Design of Multi-Agent Systems Abstract. The architecture of a multi-agent system can naturally be viewed as a computational organisation. For this reason, we believe organisational abstractions should play a central role in the analysis and design of such systems. To this end, the concepts of agent roles and role models are increasingly being used to specify and design multi-agent systems. However, this is not the full picture. In this paper we introduce three additional organisational concepts — organisational rules, organisational structures, and organisational patterns — that we believe are necessary for the complete specification of computational organisations. We view the introduction of these concepts as a step towards a comprehensive methodology for agent-oriented systems. 1 | [
1012,
1297,
2343
] | Train |
321 | 0 | Dynamic Reconfiguration in Collaborative Problem Solving In this article we will describe our research efforts in coping with a trade-off that can often be found in the control and optimization of today's business processes. Though centralized control may achieve near-to-optimum results in optimizing the system behavior, there are usually social, technical and security restrictions on applying centralized control. Distributed control on the other hand may cope with these restrictions but also entails sub-optimality and communicational overhead. Our concept of composable agents tries to allow a dynamic and fluent transition between globalization and localization in business process control by adapting to the current real-world system structure. We are currently evaluating this concept in the framework of patient flow control at Charité Berlin. Introduction Research in Distributed Artificial Intelligence (DAI, (Bond & Gasser 1988)) has been traditionally divided into Distributed Problem Solving (DPS) and Multi Agent Systems (MAS). However, r... | [
49,
2364
] | Train |
322 | 2 | Combining Multiple Learning Strategies for Effective Cross Validation Parameter tuning through cross-validation becomes very difficult when the validation set contains no or only a few examples of the classes in the evaluation set. We address this open challenge by using a combination of classifiers with different performance characteristics to effectively reduce the performance variance on average of the overall system across all classes, including those not seen before. This approach allows us to tune the combination system on available but less-representative validation data and obtain smaller performance degradation of this system on the evaluation data than using a single-method classifier alone. We tested this approach by applying k-Nearest Neighbor, Rocchio and Language Modeling classifiers and their combination to the event tracking problem in the Topic Detection and Tracking (TDT) domain, where new classes (events) are created constantly over time, and representative validation sets for new classes are often difficult to ob... | [
88,
301,
739,
1446,
2961
] | Train |
323 | 2 | Web Metasearch as Belief Aggregation Web metasearch requires a mechanism for combining rank-ordered lists of ratings returned by multiple search engines in response to a given user query. We view this as being analogous to the need for combining degrees of belief in probabilistic and uncertain reasoning in artificial intelligence. This paper describes a practical method for performing web metasearch based on a novel transformation-based theory of belief aggregation. The consensus ratings produced by this method take into account the item ratings/rankings output by individual search engines as well as the user's preferences. Copyright © 2000, American Association for Artificial Intelligence (www.aaai.org). All rights reserved. Introduction Web search engines (WSE) use tools ranging from simple text-based search to more sophisticated methods that attempt to understand the intended meanings of both queries and data items. There has been much work in this area in recent years. The link structure of the web has... | [
147,
488,
1180,
2283,
2371,
2475,
2569
] | Train |
324 | 4 | FOLEYAUTOMATIC: Physically-based Sound Effects for Interactive Simulation and Animation Animations for which sound effects were automatically added by our system, demonstrated in the accompanying video. (a) A real wok in which a pebble is thrown; the pebble rattles around the wok and comes to rest after wobbling. (b) A simulation of a pebble thrown in wok, with all sound effects automatically generated. (c) A ball rolling back and forth on a ribbed surface. (d) Interaction with a sonified object. We describe algorithms for real-time synthesis of realistic sound effects for interactive simulations (e.g., games) and animation. These sound effects are produced automatically, from 3D models using dynamic simulation and user interaction. We develop algorithms that are efficient, physicallybased, and can be controlled by users in natural ways. We develop effective techniques for producing high quality continuous contact sounds from dynamic simulations running at video rates which are slow relative to audio synthesis. We accomplish this using modal models driven by contact forces modeled at audio rates, which are much higher than the graphics frame rate. The contact forces can be computed from simulations or can be custom designed. We demonstrate the effectiveness with complex realistic simulations. | [
2519
] | Validation |
325 | 1 | Analysis of Approximate Nearest Neighbor Searching with Clustered Point Sets In this paper we study the performance of two other splitting methods, and compare them against the kd-tree splitting method. The first, called sliding-midpoint, is a splitting method that was introduced by Mount and Arya in the ANN library for approximate nearest neighbor searching [30]. This method was introduced into the library in order to better handle highly clustered data sets. We know of no analysis (empirical or theoretical) of this method. This method was designed as a simple technique for addressing one of the most serious flaws in the standard kd-tree splitting method. The flaw is that when the data points are highly clustered in low dimensional subspaces, then the standard kd-tree splitting method may produce highly elongated cells, and these can lead to slow query times. This splitting method starts with a simple midpoint split of the longest side of the cell, but if this split results in either subcell containing no data points, it translates (or "slides") the splitting plane in the direction of the points until hitting the first data point. In Section 3.1 we describe this splitting method and analyze some of its properties. The second splitting method, called minimum-ambiguity, is a query-based technique. The tree is given not only the data points, but also a collection of sample query points, called the training points. The algorithm applies a greedy heuristic to build the tree in an attempt to minimize the expected query time on the training points. We model query processing as the problem of eliminating data points from consideration as the possible candidates for the nearest neighbor. Given a collection of query points, we can model any stage of the nearest neighbor algorithm as a bipartite graph, called the candidate graph, whose vertices correspond t... | [
53
] | Train |
326 | 0 | Agents Supporting Information Integration: The Miks Framework During past years we have developed the MOMIS (Mediator envirOnment for Multiple Information Sources) system for the integration of data from structured and semi-structured data sources. In this paper we propose a new system, MIKS (Mediator agent for Integration of Knowledge Sources), which enriches the MOMIS architecture exploiting the intelligent and mobile agent features. 1. Motivation The web explosion, both at Internet and intranet level, has transformed the electronic information system from single isolated node to an entry point into a worldwide network of information exchange and business transactions. One of the main challenges for the designers of the e-commerce infrastructures is the information sharing, retrieving data located in different sources thus obtaining an integrated view to overcome any contradiction or redundancy. During past years we have developed the MOMIS (Mediator envirOnment for Multiple Information Sources) system for the integration of data from struc... | [
1481,
1819
] | Train |
327 | 3 | Relational Learning Techniques for Natural Language Information Extraction The recent growth of online information available in the form of natural language documents creates a greater need for computing systems with the ability to process those documents to simplify access to the information. One type of processing appropriate for many tasks is information extraction, a type of text skimming that retrieves specific types of information from text. Although information extraction systems have existed for two decades, these systems have generally been built by hand and contain domain specific information, making them difficult to port to other domains. A few researchers have begun to apply machine learning to information extraction tasks, but most of this work has involved applying learning to pieces of a much larger system. This paper presents a novel rule representation specific to natural language and a learning system, Rapier, which learns information extraction rules. Rapier takes pairs of documents and filled templates indicating the information to be ext... | [
786,
1171,
1287,
2068,
3098
] | Test |
328 | 3 | <bigwig> -- A language for developing interactive Web services <bigwig> is a high-level programming language and a compiler for developing interactive Web services. The overall goal of the language design is to remove many of the obstacles that face current developers of Web services in order to lower cost while increasing functionality and reliability. The compiler translates programs into a conglomerate of lower-level standard technologies such as CGI-scripts, HTML, JavaScript, and HTTP Authentication. This paper describes the major facets of the language design and the techniques used in their implementation, and compares the design with alternative Web service technologies. | [
448,
2085
] | Train |
329 | 3 | Implementation of a Linear Tabling Mechanism Delaying-based tabling mechanisms, such as the one adopted in XSB, are nonlinear in the sense that the computation state of delayed calls has to be preserved. | [
145
] | Test |
330 | 4 | Key Instructions: Solving the Code Location Problem for Optimized Code There are many difficulties to be overcome in the process of designing and implementing a debugger for optimized code. One of the first problems facing the designer of such a debugger is determining how to accurately map between locations in the source program and locations in the corresponding optimized binary. The solution to this problem is critical for many aspects of debugger design, from setting breakpoints, to implementing single-stepping, to reporting error locations. Previous approaches to debugging optimized code have presented many different techniques for solving this location mapping problem (commonly known as the code location problem). These techniques are often very complex and sometimes incomplete. Identifying key instructions allows for a simple yet formal way of mapping between locations in the source program and the optimized target program. In this paper we present the concept of key instructions. We give a formal definition of key instructions and present algorit... | [
2809
] | Test |
331 | 3 | A Data Model and Data Structures for Moving Objects Databases We consider spatio-temporal databases supporting spatial objects with continuously changing position and extent, termed moving objects databases. We formally define a data model for such databases that includes complex evolving spatial structures such as line networks or multi-component regions with holes. The data model is given as a collection of data types and operations which can be plugged as attribute types into any DBMS data model (e.g. relational, or object-oriented) to obtain a complete model and query language. A particular novel concept is the sliced representation which represents a temporal development as a set of units, where unit types for spatial and other data types represent certain "simple" functions of time. We also show how the model can be mapped into concrete physical data structures in a DBMS environment. 1 Introduction A wide and increasing range of database applications has to deal with spatial objects whose position and/or extent changes over time... | [
54,
292,
1068,
1437,
1927,
2900
] | Train |
332 | 4 | Time Series Segmentation for Context Recognition in Mobile Devices Recognizing the context of use is important in making mobile devices as simple to use as possible. Finding out what the user's situation is can help the device and underlying service in providing an adaptive and personalized user interface. The device can infer parts of the context of the user from sensor data: the mobile device can include sensors for acceleration, noise level, luminosity, humidity, etc. In this paper we consider context recognition by unsupervised segmentation of time series produced by sensors. Dynamic programming can be used to find segments that minimize the intra-segment variances. While this method produces optimal solutions, it is too slow for long sequences of data. We present and analyze randomized variations of the algorithm. One of them, Global Iterative Replacement or GIR, gives approximately optimal results in a fraction of the time required by dynamic programming. We demonstrate the use of time series segmentation in context recognition for mobile phone applications. 1 | [
1820
] | Validation |
333 | 2 | Stable Algorithms for Link Analysis The Kleinberg HITS and the Google PageRank algorithms are eigenvector methods for identifying "authoritative" or "influential" articles, given hyperlink or citation information. That such algorithms should give reliable or consistent answers is surely a desideratum, and in [10], we analyzed when they can be expected to give stable rankings under small perturbations to the linkage patterns. In this paper, we extend the analysis and show how it gives insight into ways of designing stable link analysis methods. This in turn motivates two new algorithms, whose performance we study empirically using citation data and web hyperlink data. 1. | [
222,
1838,
2984
] | Train |
334 | 2 | Memory Hierarchies as a Metaphor for Academic Library Collections Research libraries and their collections are a cornerstone of the academic tradition, representing 2000 years of development of the Western Civilization; they make written history widely accessible at low cost. Computer memories are a range of physical devices used for storing digital information that have undergone much formal study over 40 years and are well understood. This paper draws parallels between the organisation of research collections and computer memories, in particular examining their hierarchical structure, and examines the implication for digital libraries. | [
1993
] | Test |
335 | 1 | Autonomous Evolution of Gaits with the Sony Quadruped Robot A trend in robotics is towards legged robots. One of the issues with legged robots is the development of gaits. Typically gaits are developed manually. In this paper we report our results of autonomous evolution of dynamic gaits for the Sony Quadruped Robot. Fitness is determined using the robot's digital camera and infrared sensors. Using this system we evolve faster dynamic gaits than previously manually developed. 1 INTRODUCTION In this paper we present an implementation of an autonomous evolutionary algorithm (EA) for developing locomotion gaits. All processing is handled by the robot's onboard computer and individuals are evaluated using the robot's sensors. Our implementation successfully evolves trot and pace gaits for our robot for which the pace gait significantly outperforms previous hand-developed gaits. In addition to achieving our desired goal of automatically developing gaits these results show that EAs can be used on real robots to evolve non-trivial behaviors. A method... | [
2607
] | Train |
336 | 1 | Infinite-Horizon Policy-Gradient Estimation Gradient-based approaches to direct policy search in reinforcement learning have received much recent attention as a means to solve problems of partial observability and to avoid some of the problems associated with policy degradation in value-function methods. In this paper we introduce GPOMDP, a simulation-based algorithm for generating a biased estimate of the gradient of the average reward in Partially Observable Markov Decision Processes (POMDPs) controlled by parameterized stochastic policies. A similar algorithm was proposed by Kimura, Yamamura, and Kobayashi (1995). The algorithm's chief advantages are that it requires storage of only twice the number of policy parameters, uses one free parameter 2 [0; 1) (which has a natural interpretation in terms of bias-variance trade-off), and requires no knowledge of the underlying state. We prove convergence of GPOMDP, and show how the correct choice of the parameter is related to the mixing time of the controlled POMDP. We briefly describe extensions of GPOMDP to controlled Markov chains, continuous state, observation and control spaces, multiple-agents, higher-order derivatives, and a version for training stochastic policies with internal states. In a companion paper (Baxter, Bartlett, & Weaver, 2001) we show how the gradient estimates generated by GPOMDP can be used in both a traditional stochastic gradient algorithm and a conjugate-gradient procedure to find local optima of the average reward. | [
3012
] | Test |
337 | 5 | Identifying and Handling Structural Incompleteness for Validation of Probabilistic Knowledge-Bases The PESKI (Probabilities, Expert Systems, Knowledge, and Inference) system attempts to address some of the problems in expert system design through the use of the Bayesian Knowledge Base (BKB) representation. | [
1382
] | Train |
338 | 2 | Enabling knowledge representation on the Web by extending RDF Schema Recently, there has been a wide interest in using ontologies on the Web. As a basis for this, RDF Schema (RDFS) provides means to define vocabulary, structure and constraints for expressing metadata about Web resources. However, formal semantics are not provided, and the expressivity of it is not enough for full-fledged ontological modeling and reasoning. In this paper, we will show how RDFS can be extended in such a way that a full knowledge representation (KR) language can be expressed in it, thus enriching it with the required additional expressivity and the semantics of this language. We do this by describing the ontology language OIL as an extension of RDFS. An important advantage of our approach is a maximal backward compatibility with RDFS: any meta-data in OIL format can still be partially interpreted by any RDFS-only-processor. The OIL extension of RDFS has been carefully engineered so that such a partial interpretation of OIL meta-data is still correct under the intended semantics of RDFS: simply ignoring the OIL specific portions of an OIL document yields a correct RDF(S) document whose intended RDFS semantics is precisely a subset of the semantics of the full OIL statements. In this way, our approach ensures maximal sharing of meta-data on the Web: even partial interpretation of meta-data by less semantically aware processors will yield a correct partial interpretation of the meta-data. We conclude that our method of extending is equally applicable to other KR formalisms. | [
356,
409,
1023
] | Train |
339 | 1 | A Comparison of Feature Combination Strategies for Saliency-Based Visual Attention Systems Bottom-up or saliency-based visual attention allows primates to detect non-specific conspicuous targets in cluttered scenes. A classical metaphor, derived from electrophysiological and psychophysical studies, describes attention as a rapidly shiftable "spotlight". The model described here reproduces the attentional scanpaths of this spotlight: Simple multi-scale "feature maps" detect local spatial discontinuities in intensity, color, orientation or optical flow, and are combined into a unique "master" or "saliency" map. The saliency map is sequentially scanned, in order of decreasing saliency, by the focus of attention. We study the problem of combining feature maps, from different visual modalities and with unrelated dynamic ranges (such as color and motion), into a unique saliency map. Four combination strategies are compared using three databases of natural color images: (1) Simple normalized summation, (2) linear combination with learned weights, (3) global non-linear normalization... | [
447
] | Train |
340 | 3 | Issues in Agent-Oriented Software Engineering In this paper, I will discuss the conceptual foundation of agent-oriented software development by relating the fundamental elements of the agent-oriented view to those of other, well established programming paradigms, especially the object-oriented approach. Furthermore, I will motivate the concept of autonomy as the basic property of the agent-oriented school and discuss the development history of programming paradigms that lead to this perspective on software systems. The paper will be concluded by an outlook on how the new paradigm can change the way we think about software systems. | [
1012,
2364,
3108
] | Train |
341 | 3 | Developments in Spatio-Temporal Query Languages In contrast to the field view of spatial data that basically views spatial data as a mapping from points into some features, the object view clusters points by features and their values into spatial objects of type point, line, or region. When embedding these objects into a data model, such as the relational model, an additional clustering according to conceptually identified objects takes place. For example, we could define a relation City(name: string,center: point,area: region) that combines different features for cities in one relation. An important aspect of this kind of modeling is that clustering happens on two different levels: (i) points are grouped into spatial objects like regions and (ii) different attributes/features are grouped into a perceived object. When talking about data modeling there is no reason why this grouping should be limited to two levels. For example, we can consider storing regions of different population densities for each city in an attribute density: num → region. Although then the relation is not in first normal form anymore, we can “recover” the first normal form by encapsulating the function num → region in an abstract data type. The important aspect is that all the required operations on such a type as well as on regions and other complex types can be defined to a large degree independently from the data model. 1 The most important point about the preceding discussion is the way in which complex types can be easily | [
72,
228,
1081,
2488,
2509
] | Train |
342 | 1 | A Methodology to Improve Ad Hoc Data-Driven Linguistic Rule Learning Methods by Inducing Cooperation Among Rules Within the Linguistic Modeling field (one of the most important applications of Fuzzy Rule-Based Systems), a family of efficient and simple methods guided by covering criteria of the data in the example set, called "ad hoc data-driven methods", has been proposed in the literature in the last few years. Their high performance, in addition to their quickness and easy understanding, have made them very suitable for learning tasks. In this paper we are going to perform a double task, analyzing these kinds of learning methods and introducing a new methodology to significantly improve their accuracy while keeping their descriptive power unaltered. On the one hand, a taxonomy of ad hoc data-driven learning methods based on the way in which the available data is used to guide the learning will be made. In this sense, we will distinguish between two approaches: the example-based and the fuzzy-grid-based one. Whilst in the former each rule is obtained from a specific example, in the latter the e... | [
2002
] | Train |
343 | 1 | Appearance-Based Obstacle Detection with Monocular Color Vision This paper presents a new vision-based obstacle detection method for mobile robots. Each individual image pixel is classified as belonging either to an obstacle or the ground based on its color appearance. The method uses a single passive color camera, performs in real-time, and provides a binary obstacle image at high resolution. The system is easily trained by simply driving the robot through its environment. In the adaptive mode, the system keeps learning the appearance of the ground during operation. The system has been tested successfully in a variety of environments, indoors as well as outdoors. 1. Introduction Obstacle detection is an important task for many mobile robot applications. Most mobile robots rely on range data for obstacle detection. Popular sensors for range-based obstacle detection systems include ultrasonic sensors, laser rangefinders, radar, stereo vision, optical flow, and depth from focus. Because these sensors measure the distances from obstacles t... | [
2850,
2998
] | Train |
344 | 3 | OMS/Java: Model Extensibility of OODBMS for Advanced Application Domains. We show how model extensibility of object-oriented data management systems can be achieved through the combination of a high-level core object data model and an architecture designed with model extensibility in mind. The resulting system, OMS/Java, is both a general data management system and a framework for the development of advanced database application systems. All aspects of the core model (constructs, query language and constraints) can easily be generalised to support, for example, the management of temporal, spatial and versioned data. Specifically, we show how the framework was used to extend the core system to a temporal object-oriented database management system. 1 Introduction Extensibility has often been considered a purely architectural issue in database management systems (DBMS). In the 1980s, there was an increase in the various forms of DBMS that appeared, many of which were tailored to specific application domains such as Geographical Information Systems or ... | [
436
] | Train |
345 | 3 | A Holistic Process Performance Analysis through a Performance Data Warehouse This paper describes how a performance data warehouse can be used to facilitate business process improvement that is based on holistic performance measurement. The feasibility study shows how management and analysis of performance data can be facilitated by a data warehouse approach. It is argued that many of the shortcomings of traditional measurement systems can be overcome with this performance data warehouse approach. | [
1953
] | Train |
346 | 5 | A Layered Approach to Learning Client Behaviors in the RoboCup Soccer Server In the past few years, Multiagent Systems (MAS) has emerged as an active subfield of Artificial Intelligence (AI). Because of the inherent complexity of MAS, there is much interest in using Machine Learning (ML) techniques to help build multiagent systems. Robotic soccer is a particularly good domain for studying MAS and Multiagent Learning. Our approach to using ML as a tool for building Soccer Server clients involves layering increasingly complex learned behaviors. In this article, we describe two levels of learned behaviors. First, the clients learn a low-level individual skill that allows them to control the ball effectively. Then, using this learned skill, they learn a higher-level skill that involves multiple players. For both skills, we describe the learning method in detail and report on our extensive empirical testing. We also verify empirically that the learned skills are applicable to game situations. 1 Introduction In the past few years, Multiagent Systems (MAS) has emerge... | [
49,
287,
864,
962,
1523,
2264,
2492,
2614,
2620,
3173
] | Validation |
347 | 2 | A Meta-search Method Reinforced by Cluster Descriptors A meta-search engine acts as an agent for the participant search engines. It receives queries from users and redirects them to one or more of the participant search engines for processing. A meta-search engine incorporating many participant search engines is better than a single global search engine in terms of the number of pages indexed and the freshness of the indexes. The meta-search engine stores descriptive data (i.e., descriptors) about the index maintained by each participant search engine so that it can estimate the relevance of each search engine when a query is received. The ability for the meta-search engine to select the most relevant search engines determines the quality of the final result. To facilitate the selection process, the document space covered by each search engine must be described not only concisely but also precisely. Existing methods tend to focus on the conciseness of the descriptors by keeping a descriptor for a search engine's entire index. This paper proposes to cluster a search engine's document space into clusters and keep a descriptor for each cluster. We show that cluster descriptors can provide a finer and more accurate representation of the document space, and hence enable the meta-search engine to improve the selection of relevant search engines. Two cluster-based search engine selection scenarios (i.e., independent and high-correlation) are discussed in this paper. Experiments verify that the cluster-based search engine selection can effectively identify the most relevant search engines and improve the quality of the search results consequently. | [
976,
1059,
1265,
2503,
2920,
3139
] | Test |
348 | 4 | A Security Architecture for Application Session Handoff Ubiquitous computing across a variety of wired and wireless connections still lacks an effective security architecture. In our research work, we address the specific issue of designing and building a security architecture for Application Session Handoff, a functionality which we envision will be a key component enabling ubiquitous computing. Our architecture incorporates a number of proven approaches into the new context of ubiquitous computing. We employ the Bell-LaPadula and capability models to realise access control and adopt Public Key Infrastructure (PKI)-based approaches to provide efficient and authenticated end-to-end security. To demonstrate the effectiveness of our design, we implemented an application enabled with this security architecture and showed that it incurred low latency. | [
659,
2447
] | Train |
349 | 2 | Iterative Information Retrieval Using Fast Clustering and Usage-Specific Genres This paper describes how collection-specific, empirically defined, stylistics-based genre prediction can be brought together with rapid topical clustering to build an interactive information retrieval interface with multi-dimensional presentation of search results. The prototype presented addresses two specific problems of information retrieval: how to enrich the information seeking dialog by encouraging and supporting iterative refinement of queries, and how to enrich the document representation past the shallow semantics allowed by term frequencies. Searching For More Than Words Today's tools for searching information in a document database are based on term occurrence in texts. The searcher enters a number of terms and a number of documents where those terms or closely related terms appear comparatively frequently are retrieved and presented by the system in list form. This method works well up to a point. It is intuitively understandable, and for competent users and well e... | [
647,
2199
] | Train |
350 | 0 | Agent-mediated Electronic Commerce: Scientific and Technological roadmap. this report is that a big part of Internet users have already sampled buying over the web (f.i. 40% in the UK), and a significant part qualify themselves as regular shoppers (f.i. 10% in the UK). Again, important differences between countries may be detected with respect to the expenses produced. For instance, Finland spent 20 times more than Spain on a per capita basis. The forecasts for European buying of goods and services for the year 2002 suggest that the current 5.2 million shoppers will increase to 28.8 million, and the European revenues from the current EUR3,032 million to EUR57,210 million. Finally, a significant increase in the number of European executives that believe in the future of electronic commerce has been observed (33% in 1999, up from 23% in 1998) | [
1260,
2498,
3156
] | Test |
351 | 1 | Modified Gath-Geva Fuzzy Clustering for Identification of Takagi-Sugeno Fuzzy Models The construction of interpretable Takagi-Sugeno (TS) fuzzy models by means of clustering is addressed. First, it is shown how the antecedent fuzzy sets and the corresponding consequent parameters of the TS model can be derived from clusters obtained by the Gath-Geva algorithm. To preserve the partitioning of the antecedent space, linearly transformed input variables can be used in the model. This may, however, complicate the interpretation of the rules. To form an easily interpretable model that does not use the transformed input variables, a new clustering algorithm is proposed, based on the Expectation Maximization (EM) identification of Gaussian mixture models. This new technique is applied to two well-known benchmark problems: the MPG (miles per gallon) prediction and a simulated second-order nonlinear process. The obtained results are compared with results from the literature. | [
1984
] | Train |
352 | 3 | Executing Query Packs in ILP Inductive logic programming systems usually send large numbers of queries to a database. The lattice structure from which these queries are typically selected causes many of these queries to be highly similar. As a consequence, independent execution of all queries may involve a lot of redundant computation. We propose a mechanism for executing a hierarchically structured set of queries (a "query pack") through which a lot of redundancy in the computation is removed. We have incorporated our query pack execution mechanism in the ILP systems Tilde and Warmr by implementing a new Prolog engine ilProlog which provides support for pack execution at a lower level. Experimental results demonstrate significant efficiency gains. Our query pack execution mechanism is very general in nature and could be incorporated in most other ILP systems, with similar efficiency improvements to be expected. | [
1467,
3163
] | Train |
353 | 0 | Resolution-Based Proof for Multi-Modal Temporal Logics of Knowledge Temporal logics of knowledge are useful in order to specify complex systems in which agents are both dynamic and have information about their surroundings. We present a resolution method for propositional temporal logic combined with multi-modal S5 and illustrate its use on examples. This paper corrects a previous proposal for resolution in multi-modal temporal logics of knowledge. Keywords: temporal and modal logics, non-classical resolution, theorem-proving 1 Introduction Combinations of logics have been useful for specifying and reasoning about complex situations, for example multi-agent systems [21, 24], accident analysis [15], and security protocols [18]. For example, logics to formalise multi-agent systems often incorporate a dynamic component representing change over time; an informational component to capture the agent's knowledge or beliefs; and a motivational component for notions such as goals, wishes, desires or intentions. Often temporal or dynamic logic is used for... | [
183,
543,
712,
2059,
2338
] | Validation |
354 | 1 | G-Prop-II: Global Optimization of Multilayer Perceptrons using GAs A general problem in model selection is to obtain the right parameters that make a model fit observed data. For a Multilayer Perceptron (MLP) trained with Backpropagation (BP), this means finding appropriate layer size and initial weights. This paper proposes a method (G-Prop, genetic backpropagation) that attempts to solve that problem by combining a genetic algorithm (GA) and BP to train MLPs with a single hidden layer. The GA selects the initial weights and changes the number of neurons in the hidden layer through the application of specific genetic operators. G-Prop combines the advantages of the global search performed by the GA over the MLP parameter space and the local search of the BP algorithm. The application of the G-Prop algorithm to several real-world and benchmark problems shows that MLPs evolved using G-Prop are smaller and achieve a higher level of generalization than other perceptron training algorithms, such as QuickPropagation or RPROP, and other evolutive algorithms, s... | [
958,
2126
] | Validation |
355 | 1 | Advances in Analogy-Based Learning: False Friends and Exceptional Items in Pronunciation By Paradigm-Driven Analogy When looked at from a multilingual perspective, grapheme-to-phoneme conversion is a challenging task, fraught with most of the classical NLP "vexed questions": bottle-neck problem of data acquisition, pervasiveness of exceptions, difficulty to state range and order of rule application, proper treatment of context-sensitive phenomena and long-distance dependencies, and so on. The hand-crafting of transcription rules by a human expert is onerous and time-consuming, and yet, for some European languages, still stops short of a level of correctness and accuracy acceptable for practical applications. We illustrate here a self-learning multilingual system for analogy-based pronunciation which was tested on Italian, English and French, and whose performances are assessed against the output of both statistically and rule-based transcribers. The general point is made that analogy-based self-learning techniques are no longer just psycholinguistically-plausible models, but competitive tools, combining the advantages of using language-independent, self-learning, tractable algorithms, with the welcome bonus of being more reliable for applications than traditional text-to-speech systems. | [
2494
] | Train |
356 | 2 | Ontobroker: The Very High Idea The World Wide Web (WWW) is currently one of the most important electronic information sources. However, its query interfaces and the provided reasoning services are rather limited. Ontobroker consists of a number of languages and tools that enhance query access and inference service of the WWW. The technique is based on the use of ontologies. Ontologies are applied to annotate web documents and to provide query access and inference service that deal with the semantics of the presented information. In consequence, intelligent brokering services for web documents can be achieved without having to change the semiformal nature of web documents. Introduction The World Wide Web (WWW) contains huge amounts of knowledge about almost all subjects you can think of. HTML documents enriched by multi-media applications provide knowledge in different representations (i.e., text, graphics, animated pictures, video, sound, virtual reality, etc.). Hypertext links between web documents represent r... | [
338,
1014,
2670,
2817,
2990,
3099
] | Validation |
357 | 0 | Market Protocols for Decentralized Supply Chain Formation In order to effectively respond to changing market conditions, business partners must be able to rapidly form supply chains. This thesis approaches the problem of automating supply chain formation (the process of determining the participants in a supply chain, who will exchange what with whom, and the terms of the exchanges) within an economic framework. In this thesis, supply chain formation is formalized as task dependency networks. This model captures subtask decomposition in the presence of resource contention, two important and challenging aspects of supply chain formation. In order to form supply chains in a decentralized fashion, price systems provide an economic framework for guiding the decisions of self-interested agents. In competitive price equilibrium, agents choose optimal allocations with respect to prices, and outcomes are optimal overall. Approximate competitive equilibria yield approximately optimal allocations. Different market protocols are proposed for agents to negotiate the allocation of resources to form supply chains. In the presence of resource contention, these protocols produce better solutions than the greedy protocols common in the artificial intelligence and multiagent systems literature. The first protocol proposed is based on distributed, progressive, price-based auctions, and is analyzed with non-strategic agent bidding policies. The protocol often converges to high-value supply chains, and when competitive equilibria exist, typically to approximate competitive equilibria. However, complementarities in agent production technologies can cause the protocol to wastefully allocate inputs to agents that do not produce their out... | [
2380
] | Train |
358 | 3 | A Theorem Prover-Based Analysis Tool for Object-Oriented Databases We present a theorem-prover based analysis tool for object-oriented database systems with integrity constraints. Object-oriented database specifications are mapped to higher-order logic (HOL). This allows us to reason about the semantics of database operations using a mechanical theorem prover such as Isabelle or PVS. The tool can be used to verify various semantics requirements of the schema (such as transaction safety, compensation, and commutativity) to support the advanced transaction models used in workflow and cooperative work. We give an example of method safety analysis for the generic structure editing operations of a cooperative authoring system. 1 Introduction Object-oriented specification methodologies and object-oriented programming have become increasingly important in the past ten years. Not surprisingly, this has recently led to an interest in object-oriented program verification in the theorem prover community, mainly using higher-order logic (HOL). Several dif... | [
2539
] | Train |
359 | 4 | The PicSOM Retrieval System: Description and Evaluations We have developed an experimental system called PicSOM for retrieving images similar to a given set of reference images in large unannotated image databases. The technique is based on a hierarchical variant of the Self-Organizing Map (SOM) called the Tree Structured Self-Organizing Map (TS-SOM). Given a set of reference images, PicSOM is able to retrieve another set of images which are most similar to the given ones. Each TS-SOM is formed using a different image feature representation like color, texture, or shape. A new technique introduced in PicSOM facilitates automatic combination of the responses from multiple TS-SOMs and their hierarchical levels. This mechanism adapts to the user's preferences in selecting which images resemble each other. In this paper, a brief description of the system and a set of methods applicable to evaluating retrieval performance of image retrieval applications are presented. 1 Introduction Content-based image retrieval (CBIR) has been a subject... | [
2683
] | Train |
360 | 5 | Embodied Evolution: Embodying an Evolutionary Algorithm in a Population of Robots We introduce Embodied Evolution (EE) as a methodology for the automatic design of robotic controllers. EE is an evolutionary robotics (ER) technique that avoids the pitfalls of the simulate-and-transfer method, allows the speed-up of evaluation time by utilizing parallelism, and is particularly suited to future work on multi-agent behaviors. In EE, an evolutionary algorithm is distributed amongst and embodied within a population of physical robots that reproduce with one another while situated in the task environment. We have built a population of eight robots and successfully implemented our first experiments. The controllers evolved by EE compare favorably to hand-designed solutions for a simple task. We detail our methodology, report our initial results, and discuss the application of EE to more advanced and distributed robotics tasks. 1. Introduction Our work is inspired by the following vision. A large number of robots freely interact with each other in a shared environment, atte... | [
1764
] | Validation |
361 | 0 | Dynamic Agents In this paper, we shall explain how dynamic behaviors are obtained and utilized through automatic action installation, and inter-agent communication. We also describe intra-agent communication between the carrier-part and the action part of a dynamic agent, and between different actions carried by the same dynamic agent. We shall also discuss three triggering mechanisms for dynamic behavior modification: planning-based, request-driven, and event-based. | [] | Train |
362 | 3 | CHIME: Customizable Hyperlink Insertion and Maintenance Engine for Software Engineering Environments Source code browsing is an important part of program comprehension. Browsers expose semantic and syntactic relationships (such as between object references and definitions) in GUI-accessible forms. These relationships are derived using tools which perform static analysis on the original software documents. Implementing such browsers is tricky. Program comprehension strategies vary, and it is necessary to provide the right browsing support. Analysis tools to derive the relevant crossreference relationships are often difficult to build. Tools to browse distributed documents require extensive coding for the GUI, as well as for data communications. Therefore, there are powerful motivations for using existing static analysis tools in conjunction with WWW technology to implement browsers for distributed software projects. The chime framework provides a flexible, customizable platform for inserting HTML links into software documents using information generated by existing software analysis tools. Using the chime specification language, and a simple, retargetable database interface, it is possible to quickly incorporate a range of different link insertion tools for software documents, into an existing, legacy software development environment. This enables tool builders to offer customized browsing support with a well-known GUI. This paper describes the chime architecture, and describes our experience with several re-targeting efforts of this system. 1 | [
2829
] | Train |
363 | 5 | Word Sense Disambiguation based on Semantic Density This paper presents a Word Sense Disambiguation method based on the idea of semantic density between words. The disambiguation is done in the context of WordNet. The Internet is used as a raw corpora to provide statistical information for word associations. A metric is introduced and used to measure the semantic density and to rank all possible combinations of the senses of two words. This method provides a precision of 58% in indicating the correct sense for both words at the same time. The precision increases as we consider more choices: 70% for top two ranked and 73% for top three ranked. 1 Introduction Word Sense Disambiguation (WSD) is an open problem in Natural Language Processing. Its solution impacts other tasks such as discourse, reference resolution, coherence, inference and others. WSD methods can be broadly classified into three types: 1. WSD that make use of the information provided by machine readable dictionaries (Cowie et al.1992), (Miller et al.1994), (Agirre and Rig... | [
977,
1955
] | Validation |
364 | 2 | The Open Archives Initiative: Building a low-barrier interoperability framework The Open Archives Initiative (OAI) develops and promotes interoperability solutions that aim to facilitate the efficient dissemination of content. The roots of the OAI lie in the E-Print community. Over the last year its focus has been extended to include all content providers. This paper describes the recent history of the OAI – its origins in promoting E-Prints, the broadening of its focus, the details of its technical standard for metadata harvesting, the applications of this standard, and future plans. Categories and Subject Descriptors | [
2830
] | Train |
365 | 4 | Perceptual Interfaces For Information Interaction: Joint Processing Of Audio And Visual Information For Human-Computer Interaction We are exploiting the human perceptual principle of sensory integration (the joint use of audio and visual information) to improve the recognition of human activity (speech recognition, speech event detection and speaker change), intent (intent to speak) and human identity (speaker recognition), particularly in the presence of acoustic degradation due to noise and channel. In this paper, we present experimental results in a variety of contexts that demonstrate the benefit of joint audio-visual processing. | [
1593
] | Train |
366 | 4 | A Feasible Low-Power Augmented-Reality Terminal This paper studies the requirements for a truly wearable augmented-reality (AR) terminal. The requirements translate into a generic hardware architecture consisting of programmable modules communicating through a central interconnect. Careful selection of low-power components shows that it is feasible to construct an AR terminal that weighs about 2 kg and roughly dissipates 26 W. With stateof -the-art batteries and a 50% average resource utilization, the terminal can operate for about 10 hours. 1. Introduction The goal of ubiquitous computing is to have computers act as "human assistants" that support us instantly. Computers should move out of our awareness instead of being at the center of our attention [14]. For ubiquitous computing to become reality we need two important technologies to mature: wireless communication (wearability) and augmented reality (user-interface). Wireless communication is obviously required to obtain services provided by an arbitrary computer regardless the ... | [
497
] | Validation |
367 | 1 | Learning-Based Vision and Its Application to Autonomous Indoor Navigation By Shaoyun Chen Adaptation is critical to autonomous navigation of mobile robots. Many adaptive mechanisms have been implemented, ranging from simple color thresholding to complicated learning with artificial neural networks (ANN). The major focus of this thesis lies in machine learning for vision-based navigation. Two well known vision-based navigation systems are ALVINN and ROBIN developed by Carnegie-Mellon University and University of Maryland, respectively. ALVINN uses a two-layer feedforward neural network while ROBIN relies on a radial basis function network (RBFN). Although current ANN-based methods have achieved great success in vision-based navigation, they have two major disadvantages: (1) Local minimum problem: The training of either multilayer perceptron or radial basis function network can get stuck at poor local minima. (2) The flexibility problem: After the system has been trained in certain r... | [
530,
1031
] | Train |
368 | 2 | View-independent Recognition of Hand Postures Since human hand is highly articulated and deformable, hand posture recognition is a challenging example in the research of view-independent object recognition. Due to the difficulties of the modelbased approach, the appearance-based learning approach is promising to handle large variation in visual inputs. However, the generalization of many proposed supervised learning methods to this problem often suffers from the insufficiency of labeled training data. This paper describes an approach to alleviate this difficulty by adding a large unlabeled training set. Combining supervised and unsupervised learning paradigms, a novel and powerful learning approach, the Discriminant-EM (D-EM) algorithm, is proposed in this paper to handle the case of small labeled training set. Experiments show that D-EM outperforms many other learning methods. Based on this approach, we implement a gesture interface to recognize a set o... | [
506,
1386,
1969,
2889,
2893
] | Validation |
369 | 1 | Programming by Demonstration: An Inductive Learning Formulation Although Programming by Demonstration (PBD) has the potential to improve the productivity of unsophisticated users, previous PBD systems have used brittle, heuristic, domain-specific approaches to execution-trace generalization. In this paper we define two application-independent methods for performing generalization that are based on well-understood machine learning technology. TGen vs uses version-space generalization, and TGen foil is based on the FOIL inductive logic programming algorithm. We analyze each method both theoretically and empirically, arguing that TGen vs has lower sample complexity, but TGen foil can learn a much more interesting class of programs. Keywords Programming by Demonstration, machine learning, inductive logic programming, version spaces INTRODUCTION Computer users are largely unable to customize mass-produced applications to fit their individual tasks. This problem of end-user customization has been addressed in several ways. Adjustable preference... | [
2345
] | Train |
370 | 2 | Exploiting Structure for Intelligent Web Search Together with the rapidly growing amount of online data, there is an immense need for intelligent search engines that access a restricted amount of data, as found in intranets or other limited domains. This sort of search engine must go beyond simple keyword indexing/matching, and it must also be easily adaptable to new domains without huge costs. This paper presents a mechanism that addresses both of these points: first of all, the internal document structure is used to extract concepts which impose a directory-like structure on the documents, similar to those found in classified directories. Furthermore, this is done in an efficient way which is largely language independent and does not make assumptions about the document structure. | [
58,
586,
1442,
1559,
2459,
2503,
2535,
2796
] | Train |
371 | 2 | Accurately and Reliably Extracting Data from the Web: A Machine Learning Approach A critical problem in developing information agents for the Web is accessing data that is formatted for human use. We have developed a set of tools for extracting data from web sites and transforming it into a structured data format, such as XML. The resulting data can then be used to build new applications without having to deal with unstructured data. The advantages of our wrapping technology over previous work are the ability to learn highly accurate extraction rules, to verify the wrapper to ensure that the correct data continues to be extracted, and to automatically adapt to changes in the sites from which the data is being extracted. 1 Introduction There is a tremendous amount of information available on the Web, but much of this information is not in a form that can be easily used by other applications. There are hopes that XML will solve this problem, but XML is not yet in widespread use and even in the best case it will only address the problem within application domains... | [
529,
714,
1398,
2074,
2418,
3099
] | Validation |
372 | 4 | The CLEF 2003 Interactive Track The CLEF 2003 Interactive Track (iCLEF) was the third year of a shared experiment design to compare strategies for cross-language search assistance. Two kinds of experiments were performed: a) experiments in Cross-Language Document Selection, where the user task is to scan a ranked list of documents written in a foreign language, selecting those which seem relevant to a given query. The aim here is to compare different translation strategies for an "indicative" purpose; and b) Full Cross-Language Search experiments, where the user task is to maximize the number of relevant documents that can be found in a foreign-language collection with the help of an end-to-end cross-language search system. Participating teams could choose to focus on any aspects of the search task (e.g., query formulation, query translation and/or relevance feedback). This paper describes the shared experiment design and briefly summarizes the experiments run by the five teams that participated. | [
1721
] | Train |
373 | 3 | Rotational Polygon Containment and Minimum Enclosure An algorithm and a robust floating point implementation is given for rotational polygon containment: given polygons P1, P2, P3, ..., Pk and a container polygon C, find rotations and translations for the k polygons that place them into the container without overlapping. A version of the algorithm and implementation also solves rotational minimum enclosure: given a class C of container polygons, find a container C in C of minimum area for which containment has a solution. The minimum enclosure is approximate: it bounds the minimum area between (1-epsilon)A and A. Experiments indicate that finding the minimum enclosure is practical for k = 2, 3 but not larger unless optimality is sacrificed or angle ranges are limited (although these solutions can still be useful). Important applications of these algorithms to industrial problems are discussed. The paper also gives practical algorithms and numerical techniques for robustly calculating polygon set intersection, Minkowski sum, and range in... | [
85
] | Validation |
374 | 0 | Disseminating Mobile Agents for Distributed Information Filtering An often claimed benefit of mobile agent technology is the reduction of communication cost. Especially the area of information filtering has been proposed for the application of mobile filter agents. However, an effective coordination of agents, which takes into account the current network conditions, is difficult to achieve. This contribution analyses the situation that data distributed among various remote data servers has to be examined with mobile filter agents. We present an approach for coordinating the agents' employment, which minimizes communication costs. Validation studies on the possible cost savings for various constellations show that savings up to 90% can be achieved in the face of actual Internet conditions. 1. Introduction An often claimed benefit of mobile agent technology is the reduction of communication cost, either for decreasing an application's latency or for reducing the load on a network. This prediction has been made especially for scenarios of information... | [
2337
] | Test |
375 | 1 | A Quantification Of Distance-Bias Between Evaluation Metrics In Classification This paper provides a characterization of bias for evaluation metrics in classification (e.g., Information Gain, Gini, χ², etc.). Our characterization provides a uniform representation for all traditional evaluation metrics. Such representation leads naturally to a measure for the distance between the bias of two evaluation metrics. We give a practical value to our measure by observing if the distance between the bias of two evaluation metrics correlates with differences in predictive accuracy when we compare two versions of the same learning algorithm that differ in the evaluation metric only. Experiments on real-world domains show how the expectations on accuracy differences generated by the distance-bias measure correlate with actual differences when the learning algorithm is simple (e.g., search for the best single-feature or the best single-rule). The correlation, however, weakens with more complex algorithms (e.g., learning decision trees). Our results sh... | [
2834
] | Validation |
376 | 3 | On Reconfiguring Query Execution Plans in Distributed Object-Relational DBMS Massive database sizes and growing demands for decision support and data mining result in long-running queries in extensible Object-Relational DBMS, particularly in decision support and data warehousing analysis applications. Parallelization of query evaluation is often required for acceptable performance. Yet queries are frequently processed suboptimally due to (1) only coarse or inaccurate estimates of the query characteristics and database statistics available prior to query evaluation; (2) changes in system configuration and resource availability during query evaluation. In a distributed environment, dynamically reconfiguring query execution plans (QEPs), which adapts QEPs to the environment as well as the query characteristics, is a promising means to significantly improve query evaluation performance. Based on an operator classification, we propose an algorithm to coordinate the steps in a reconfiguration and introduce alternatives for execution context checkpointing and restorin... | [
1536,
1950
] | Train |
377 | 1 | Reasoning within Fuzzy Description Logics Description Logics (DLs) are suitable, well-known, logics for managing structured knowledge. They allow reasoning about individuals and well defined concepts, i.e. sets of individuals with common properties. The experience in using DLs in applications has shown that in many cases we would like to extend their capabilities. In particular, their use in the context of Multimedia Information Retrieval (MIR) leads to the convincement that such DLs should allow the treatment of the inherent imprecision in multimedia object content representation and retrieval. In this paper we will present a fuzzy extension of ALC, combining Zadeh's fuzzy logic with a classical DL. In particular, concepts become fuzzy and, thus, reasoning about imprecise concepts is supported. We will define its syntax, its semantics, describe its properties and present a constraint propagation calculus for reasoning in it. | [
671,
1484
] | Validation |
378 | 2 | Feature Selection in Web Applications By ROC Inflections and Powerset Pruning A basic problem of information processing is selecting enough features to ensure that events are accurately represented for classification problems, while simultaneously minimizing storage and processing of irrelevant or marginally important features. To address this problem, feature selection procedures perform a search through the feature power set to find the smallest subset meeting performance requirements. Major restrictions of existing procedures are that they typically explicitly or implicitly assume a fixed operating point, and make limited use of the statistical structure of the feature power set. We present a method that combines the Neyman-Pearson design procedure on finite data, with the directed set structure of the Receiver Operating Curves on the feature subsets, to determine the maximal size of the feature subsets that can be ranked in a given problem. The search can then be restricted to the smaller subsets, resulting in significant reductions in computational complexity. Optimizing the overall Receiver Operating Curve also allows for end users to select different operating points and cost functions to optimize. The algorithm also produces a natural method of Boolean representation of the minimal feature combinations that best describe the data near a given operating point. These representations are especially appropriate when describing data using common text-related features useful on the web, such as thresholded TFIDF data. We show how to use these results to perform automatic Boolean query modification generation for distributed databases, such as niche metasearch engines. | [
1321,
2569,
3037,
3144
] | Validation |
379 | 3 | The XML Benchmark Project With standardization efforts of a query language for XML documents drawing to a close, researchers and users increasingly focus their attention on the database technology that has to deliver on the new challenges that the sheer amount of XML documents produced by applications pose to data management: validation, performance evaluation and optimization of XML query processors are the upcoming issues. Following a long tradition in database research, the XML Store Benchmark Project provides a framework to assess an XML database's abilities to cope with a broad spectrum of different queries, typically posed in real-world application scenarios. The benchmark is intended to help both implementors and users to compare XML databases independent of their own, specific application scenario. To this end, the benchmark offers a set of queries each of which is intended to challenge a particular primitive of the query processor or storage engine. The overall workload we propose consists of a scalable document database and a concise, yet comprehensive set of queries, which covers the major aspects of query processing. The queries' challenges range from stressing the textual character of the document to data analysis queries, but also include typical ad-hoc queries. We complement our research with results obtained from running the benchmark on our XML database platform. They are intended to give a first baseline, illustrating the state of the art. | [
29,
2503,
2910,
3118
] | Validation |
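Each row above follows the schema node_id | label | text | neighbors | mask. A minimal sketch of how such rows could be collected into an adjacency structure is shown below; `parse_rows` and the toy rows are illustrative assumptions, not part of the dataset's official tooling.

```python
# Sketch: turn (node_id, label, text, neighbors, mask) tuples into a
# node dict plus a directed citation edge list. Names here are
# hypothetical, not from the dataset's loader.

def parse_rows(rows):
    """Build {node_id: record} and a directed edge list from schema rows."""
    graph, edges = {}, []
    for node_id, label, text, neighbors, mask in rows:
        graph[node_id] = {
            "label": label,                  # class id, e.g. 0..39
            "text": text,                    # paper title + abstract
            "neighbors": list(neighbors),    # cited node ids
            "mask": mask,                    # Train / Validation / Test split
        }
        edges.extend((node_id, n) for n in neighbors)
    return graph, edges

# Toy rows mirroring the table's schema (values are illustrative).
rows = [
    (370, 2, "Exploiting Structure for Intelligent Web Search ...", [58, 586, 1442], "Train"),
    (372, 4, "The CLEF 2003 Interactive Track ...", [1721], "Train"),
]
graph, edges = parse_rows(rows)
print(len(graph), len(edges))  # -> 2 4
```

The neighbors column only lists outgoing citations, so an undirected graph would additionally need each edge mirrored.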