Dataset schema (one record per node):
  node_id:   int64 (range 0 to 76.9k)
  label:     int64 (range 0 to 39)
  text:      string (lengths 13 to 124k)
  neighbors: list (lengths 0 to 3.32k)
  mask:      string (4 classes)

node_id: 1680
label: 0
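Read back from the listing, each record pairs a paper's title-plus-abstract text with its citation neighbors and a split assignment. A minimal sketch of that record shape in Python (the `Node` class and its parsing are illustrative, not shipped with the dataset; the example values are transcribed from record 1685 below):

```python
from dataclasses import dataclass, field

# Illustrative record type mirroring the schema above; not part of the dataset.
@dataclass
class Node:
    node_id: int                # 0 .. ~76.9k
    label: int                  # class id, 0 .. 39
    text: str                   # paper title + abstract
    neighbors: list[int] = field(default_factory=list)  # linked node ids
    mask: str = "Train"         # split name (one of 4 classes)

# One record transcribed from the listing:
example = Node(
    node_id=1685,
    label=0,
    text="From Tuple Spaces to Tuple Centres ...",
    neighbors=[2844],
    mask="Validation",
)

# Bucket node ids by split, as a node-classification pipeline might:
splits: dict[str, list[int]] = {}
for node in [example]:
    splits.setdefault(node.mask, []).append(node.node_id)
print(splits)  # {'Validation': [1685]}
```

Grouping by `mask` in this way is how training, validation and test nodes would typically be separated downstream.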
An Agent-Based Framework for Financial Transactions Ever-changing Internet technologies are creating revolutions in the way people interact with each other. In particular, business interactions are rapidly transforming and evolving toward more dynamic and automated solutions. Among various on-line commercial activities, financial services represent a fundamental component for developing and supplying many other e-businesses. The main idea behind e-finance is to provide support by deploying software instruments that enable the automation of many B2B and B2C transactions. The current degree of automation and personalisation of on-line financial services is still very limited: Web interfaces or ad hoc tools still require a lot of human interaction. Agent technology seems to be one of the most promising approaches for evolving toward more flexible and dynamic solutions. Autonomous, intelligent, social and self-interested software entities would act on behalf of final end-users and/or business operators without the need for direct human involvement. This paper describes an agent-based system supporting automated business transactions. The aim is to evaluate the main potential and the major limits of supplying financial services by deploying agents in a software environment. Keywords: agent-based interactions, e-finance, electronic payments, ontology. 1.
neighbors: [1020, 1346]
mask: Train

node_id: 1681
label: 0
The Tropos Software Development Methodology: Processes, Models And Diagrams Abstract. Tropos is a novel agent-oriented software development methodology founded on two key features: (i) the notions of agent, goal, plan and various other knowledge level concepts are fundamental primitives used uniformly throughout the software development process; and (ii) a crucial role is assigned to requirements analysis and specification when the system-to-be is analyzed with respect to its intended environment. This paper provides a (first) detailed account of the Tropos methodology. In particular, we describe the basic concepts on which Tropos is founded and the types of models one builds out of them. We also specify the analysis process through which design flows from external to system actors through a goal analysis and delegation. In addition, we provide an abstract syntax for Tropos diagrams and other linguistic constructs. 1
neighbors: [1049, 1289, 2343]
mask: Validation

node_id: 1682
label: 0
Scalability Issues for Query Routing Service Discovery In this paper, we discuss the relationship between mediatorbased systems for service discovery in multi-agent systems, and the technique of query routing used for resource discovery in distributed information systems. We then construct a model of the query routing task which we use to examine the complexity and scalability characteristics of a number of commonly encountered architectures for resource or service discovery.
neighbors: [1856]
mask: Validation

node_id: 1683
label: 2
Building a XML-based Corporate Memory. This paper emphasizes the interest of the XML meta-language for corporate knowledge management. Taking into account the advantages of the World Wide Web and of ontologies for knowledge management, we present OSIRIX, a tool enabling enterprise-ontology-guided search in XML documents that may constitute a part of a corporate memory. Keywords: XML, World Wide Web, knowledge management, document-based corporate memory, enterprise ontology, information retrieval. 1 Introduction Extending the definitions proposed by [28] [20], we consider a corporate memory as an explicit, disembodied, persistent representation of knowledge and information in an organization, in order to facilitate its access and reuse by members of the organization, for their tasks. We consider its building as relying on the following steps [11]: (1) Detection of needs in corporate memory, (2) Construction of the corporate memory, (3) Diffusion of the corporate memory, (4) Use of the corporate memory, (5) Evaluation of...
neighbors: [1226, 2665]
mask: Validation

node_id: 1684
label: 1
Using Information Extraction to Aid the Discovery of Prediction Rules from Text Text mining and Information Extraction (IE) are both topics of significant recent interest. Text mining concerns applying data mining, a.k.a. knowledge discovery from databases (KDD), techniques to unstructured text. Information extraction (IE) is a form of shallow text understanding that locates specific pieces of data in natural language documents, transforming unstructured text into a structured database. This paper describes a system called DiscoTEX that combines IE and KDD methods to perform a text mining task, discovering prediction rules from natural-language corpora. An initial version of DiscoTEX is constructed by integrating an IE module based on Rapier and a rule-learning module, Ripper. We present encouraging results on applying these techniques to a corpus of computer job postings from an Internet newsgroup.
neighbors: [1460, 2068, 2750]
mask: Train

node_id: 1685
label: 0
From Tuple Spaces to Tuple Centres A tuple centre is a Linda-like tuple space whose behaviour can be programmed by means of transactional reactions to the standard communication events. This paper defines the notion of tuple centre, and shows the impact of its adoption as a coordination medium for a distributed multi-agent system on both the system design and the overall system efficiency.
neighbors: [2844]
mask: Validation

node_id: 1686
label: 3
Cache Management in CORBA Distributed Object Systems For many distributed data intensive applications, the default remote invocation of CORBA objects by clients is not acceptable because of performance degradation. Caching enables clients to invoke operations locally on distributed objects instead of fetching them from remote servers. This paper addresses the design and implementation of a specific caching approach for CORBA-based systems. We propose a new removal algorithm which uses a doubly linked structure and a hash table for eviction. We also present a new variation of optimistic two-phase locking for consistency control, which does not require a lock at the client side by using a per-process caching design. With the experiments we have performed, we demonstrate that the proposed caching approach provides an important performance gain: the caching with half buffer saves up to 45% of access time and the caching with full buffer saves up to 50% of access time. The Common Object Request Broker Architecture (CORBA) provides several adv...
neighbors: [609]
mask: Validation

node_id: 1687
label: 4
Battery-aware Static Scheduling for Distributed Real-time Embedded Systems This paper addresses battery-aware static scheduling in battery-powered distributed real-time embedded systems. As suggested by previous work, reducing the discharge current level and shaping its distribution are essential for extending the battery lifespan. We propose two battery-aware static scheduling schemes. The first one optimizes the discharge power profile in order to maximize the utilization of the battery capacity. The second one targets distributed systems composed of voltage-scalable processing elements (PEs). It performs variable-voltage scheduling via efficient slack time re-allocation, which helps reduce the average discharge power consumption as well as flatten the discharge power profile. Both schemes guarantee the hard real-time constraints and precedence relationships in the real-time distributed embedded system specification. Based on previous work, we develop a battery lifespan evaluation metric which is aware of the shape of the discharge power profile. Our experimental results show that the battery lifespan can be increased by up to 29% by optimizing the discharge power profile alone. Our variable-voltage scheme increases the battery lifespan by up to 76% over the non-voltage-scalable scheme and by up to 56% over the variable-voltage scheme without slack-time reallocation. 1.
neighbors: [2483]
mask: Train

node_id: 1688
label: 4
MRML: An Extensible Communication Protocol for Interoperability and Benchmarking of Multimedia Information Retrieval Systems While in the area of relational databases interoperability is ensured by common communication protocols (e.g. ODBC/JDBC using SQL), Content Based Image Retrieval Systems (CBIRS) and other multimedia retrieval systems are lacking both a common query language and a common communication protocol. Besides its obvious short term convenience, interoperability of systems is crucial for the exchange and analysis of user data. In this paper, we present and describe an extensible XML-based query markup language, called MRML (Multimedia Retrieval Markup Language). MRML is primarily designed so as to ensure interoperability between different content-based multimedia retrieval systems. Further, MRML allows researchers to preserve their freedom in extending their system as needed. MRML encapsulates multimedia queries in a way that enables multimedia (MM) query languages, MM content descriptions, MM query engines, and MM user interfaces to grow independently from each other, reaching a maximum of in...
neighbors: [270]
mask: Train

node_id: 1689
label: 0
Limited Logical Belief Analysis . The process of rational inquiry can be defined as the evolution of a rational agent's belief set as a consequence of its internal inference procedures and its interaction with the environment. These beliefs can be modelled in a formal way using doxastic logics. The possible worlds model and its associated Kripke semantics provide an intuitive semantics for these logics, but they seem to commit us to model agents that are logically omniscient and perfect reasoners. These problems can be avoided with a syntactic view of possible worlds, defining them as arbitrary sets of sentences in a propositional doxastic logic. In this paper this syntactic view of possible worlds is taken, and a dynamic analysis of the agent's beliefs is suggested in order to model the process of rational inquiry in which the agent is permanently engaged. One component of this analysis, the logical one, is summarily described. This dimension of analysis is performed using a modified version of the analytic tableaux...
neighbors: [304, 1981]
mask: Train

node_id: 1690
label: 2
Exploiting Redundancy in Question Answering Our goal is to automatically answer brief factual questions of the form "When was the Battle of Hastings?" or "Who wrote The Wind in the Willows?". Since the answer to nearly any such question can now be found somewhere on the Web, the problem reduces to finding potential answers in large volumes of data and validating their accuracy. We apply a method for arbitrary passage retrieval to the first half of the problem and demonstrate that answer redundancy can be used to address the second half. The success of our approach depends on the idea that the volume of available Web data is large enough to supply the answer to most factual questions multiple times and in multiple contexts. A query is generated from a question and this query is used to select short passages that may contain the answer from a large collection of Web data. These passages are analyzed to identify candidate answers. The frequency of these candidates within the passages is used to "vote" for the most likely answer. The ...
neighbors: [1095, 2392, 2503]
mask: Validation

node_id: 1691
label: 3
Active Database Systems , Exception, Clock, External} Granularity ⊆ {Member, Subset, Set} Type ⊆ {Primitive, Composite} Operators ⊆ {or, and, seq, closure, times, not} Consumption mode ⊆ {Recent, Chronicle, Cumulative, Continuous} Role ∈ {Mandatory, Optional, None} Condition Role ∈ {Mandatory, Optional, None} Context ⊆ {DB_T, Bind_E, DB_E, DB_C} Action Options ⊆ {Structure Operation, Behavior Invocation, Update-Rules, Abort, Inform, External, Do Instead} Context ⊆ {DB_T, Bind_E, Bind_C, DB_E, DB_C, DB_A} --- behavior invocation, in which case the event is raised by the execution of some user-defined operation (e.g. the message display is sent to an object of type widget). It is common for event languages to allow events to be raised before or after an operation has been executed. --- transaction, in which case the event is raised by transaction commands (e.g. abort, commit, begin-transaction) --- abstract or user-defined, in which case a programming mechanism is used that allows an appli...
neighbors: [975]
mask: Train

node_id: 1692
label: 4
Exploiting Context to Make Delivered Information Relevant To Tasks and Users Building truly "context-aware" environments presents a greater challenge than using data transmitted by ubiquitous computing devices: it requires shared understanding between humans and their computational environments. This essay articulates some specific problems that can be addressed by representing context. It explores the unique possibilities of design environments that model and represent domains, tasks, design guidelines, solutions and their rationale, and the larger context of such environments embedded in the physical world. Context in design is not a fixed entity sensed by devices, but it is emerging and it is unbounded. Context-aware environments must address these challenges to be more supportive to all stakeholders who design and evolve complex design artifacts.
neighbors: [1876]
mask: Validation

node_id: 1693
label: 0
Design Issues for Mixed-Initiative Agent Systems This paper addresses the effect of mixed-initiative systems on multiagent systems design. A mixed-initiative system is one in which humans interact directly with software agents in a collaborative approach to problem solving. There are two main levels at which multiagent systems are designed: the domain level and the individual agent level. At the domain level, there are few unique challenges to mixedinitiative system design. However, at the individual agent level, the agent itself must be designed to interact with the human and the agent system, integrating the two into a single system. Introduction Much of the current research related to intelligent agents has focused on the capabilities and structure of individual agents. However, in order to solve complex problems, these agents must work cooperatively with other agents in a heterogeneous environment. This is the domain of Multiagent Systems. In multiagent systems, we are interested in the coordinated behavior of a system of indiv...
neighbors: [1414]
mask: Train

node_id: 1694
label: 3
Characterization and Parallelization of Decision Tree Induction This paper examines the performance and memory-access behavior of the C4.5 decision tree induction program, a representative example of data mining applications, for both uniprocessor and parallel implementations. The goals of this paper are to characterize C4.5, in particular its memory hierarchy usage, and to decrease the runtime of C4.5 by algorithmic improvement and parallelization. Performance is studied via RSIM, an execution driven simulator, for three uniprocessor models that exploit instruction level parallelism to varying degrees. This paper makes the following four contributions. First it presents a complete characterization of the C4.5 decision tree induction program. It was found that, with the exception of the input dataset, the working set fits into an 8-kB data cache; the instruction working set also fits into an 8-kB instruction cache. For datasets smaller than the L2 cache, performance is limited by accesses to main memory. It was also found that multiple-issue can...
neighbors: [1396]
mask: Train

node_id: 1695
label: 0
Reconciling Co-Operative Planning and Automated Co-Ordination in Multiagent Systems In the context of cooperative work, the coordination of activities is provided essentially by two opposite classes of approaches, based on the notions of situated action and planning. Given the complexity and dynamism of current cooperative scenarios, the need for models supporting both approaches has emerged. A similar situation can be recognised in multiagent system coordination, where two classes of approaches are clearly distinguishable, with properties similar to the situated action and planning cases. The first, defined in the literature as subjective coordination, accounts for realising coordination exclusively by means of the skills and the situated actions of the individual agents, exploiting their intelligence and communication capability. Among other properties, these approaches promote autonomy, flexibility and intelligence in coordination processes. The second, defined in the literature as objective coordination, accounts for realising coordination by exploiting coordination media, which mediate agent interactions and govern them according to laws which reflect social goals and norms. Among other properties, these approaches promote automation, efficiency, and prescriptiveness in coordination processes. In this work, we stress the importance of supporting both approaches in the same coordination context, and propose a conceptual framework, derived from Activity Theory, in which both approaches effectively co-exist and work at different collaborative levels, exploiting both the flexibility and intelligence of the subjective approaches and the prescription and automation of the objective ones.
neighbors: [2127]
mask: Train

node_id: 1696
label: 2
Application of ART2 Networks and Self-Organizing Maps to Collaborative Filtering Since the World Wide Web has become widespread, more and more applications exist that are suitable for the application of social information filtering techniques. In collaborative filtering, preferences of a user are estimated through mining data available about the whole user population, implicitly exploiting analogies between users that show similar characteristics.
neighbors: [611, 2503]
mask: Test

node_id: 1697
label: 0
Winner Determination in Combinatorial Auction Generalizations Combinatorial markets where bids can be submitted on bundles of items can be economically desirable coordination mechanisms in multiagent systems where the items exhibit complementarity and substitutability. There has been a surge of recent research on winner determination in combinatorial auctions. In this paper we study a wider range of combinatorial market designs: auctions, reverse auctions, and exchanges, with one or multiple units of each item, with and without free disposal. We first theoretically characterize the complexity. The most interesting results are that reverse auctions with free disposal can be approximated, and in all of the cases without free disposal, even finding a feasible solution is NP-complete. We then ran experiments on known benchmarks as well as ones which we introduced, to study the complexity of the market variants in practice. Cases with free disposal tended to be easier than ones without. On many distributions, reverse auctions with free disposal were easier than auctions with free disposal -- as the approximability would suggest -- but interestingly, on one of the most realistic distributions they were harder. Single-unit exchanges were easy, but multi-unit exchanges were extremely hard.
neighbors: [1337, 1878, 2707]
mask: Validation

node_id: 1698
label: 1
Feature Reduction for Neural Network Based Text Categorization In a text categorization model using an artificial neural network as the text classifier, scalability is poor if the neural network is trained using the raw feature space, since textual data has a very high-dimensional feature space. We proposed and compared four dimensionality reduction techniques to reduce the feature space into an input space of much lower dimension for the neural network classifier. To test the effectiveness of the proposed model, experiments were conducted using a subset of the Reuters-22173 test collection for text categorization. The results showed that the proposed model was able to achieve high categorization effectiveness as measured by precision and recall. Among the four dimensionality reduction techniques proposed, Principal Component Analysis was found to be the most effective in reducing the dimensionality of the feature space. 1. Introduction Text categorization is the classification of text documents into a set of one or more categories. In this paper, ...
neighbors: [1440]
mask: Train

node_id: 1699
label: 4
Face-to-Face With Your Assistant - Realization Issues of Animated User Interface Agents for Home Appliances With the introduction of software agents and assistants, the concept of so-called social user interfaces evolved, incorporating natural language interaction, context awareness and anthropomorphic representations of visuals, scales, and degrees of freedom for interactions. Today's challenge is to build a suitable visualization architecture for anthropomorphic conversational user interfaces, and to design for the believable and appropriate inclusion of human attributes (such as emotions) in a face-to-face interaction. Integrated approaches to these tasks are presented here. 1
neighbors: [181]
mask: Validation

node_id: 1700
label: 3
Constraint-based Processing of Multiway Spatial Joins . A multiway spatial join combines information found in three or more spatial relations with respect to some spatial predicates. Motivated by their close correspondence with constraint satisfaction problems (CSPs), we show how multiway spatial joins can be processed by systematic search algorithms traditionally used for CSPs. This paper describes two different strategies, window reduction and synchronous traversal, that take advantage of underlying spatial indexes to effectively prune the search space. In addition, we provide cost models and optimization methods that combine the two strategies to compute more efficient execution plans. Finally, we evaluate the efficiency of the proposed techniques and the accuracy of the cost models through extensive experimentation with several query and data combinations. Key Words. Spatial Databases, Spatial Joins, Constraint Satisfaction, R-trees 1. INTRODUCTION Spatial DBMSs and GISs store large amounts of multi-dimensional data, such as points...
neighbors: [2687]
mask: Validation

node_id: 1701
label: 3
Materialized View Selection and Maintenance Using Multi-Query Optimization Materialized views have been found to be very effective at speeding up queries, and are increasingly being supported by commercial databases and data warehouse systems. However, whereas the amount of data entering a warehouse and the number of materialized views are rapidly increasing, the time window available for maintaining materialized views is shrinking. These trends necessitate efficient techniques for the maintenance of materialized views. In this paper, we show how to find an efficient plan for the maintenance of a set of materialized views, by exploiting common subexpressions between different view maintenance expressions. In particular, we show how to efficiently select (a) expressions and indices that can be effectively shared, by transient materialization; (b) additional expressions and indices for permanent materialization; and (c) the best maintenance plan -- incremental or recomputation -- for each view. These three decisions are highly interdependent, and the choice of...
neighbors: [1182, 1463, 2438]
mask: Test

node_id: 1702
label: 4
Multi-faceted Insight through Interoperable Visual Information Analysis Paradigms To gain insight and understanding of complex information collections, users must be able to visualize and explore many facets of the information. This paper presents several novel visual methods from an information analyst's perspective. We present a sample scenario, using the various methods to gain a variety of insights from a large information collection. We conclude that no single paradigm or visual method is sufficient for many analytical tasks. Often a suite of integrated methods offers a better analytic environment in today's emerging culture of information overload and rapidly changing issues. We also conclude that the interactions among these visual paradigms are equally as important as, if not more important than, the paradigms themselves. Keywords: information visualization, user scenario, information analysis, document analysis 1 Introduction Information analysts face significant challenges dealing with the growing amount of information available and how to gain needed i...
neighbors: [1990]
mask: Train

node_id: 1703
label: 4
Realising the Full Potential of Workflow Modelling: A Practical Perspective In a broad sense, a workflow provides a partial or complete automation of a process at a level above traditional implementation platforms. Much of the emphasis of workflow specifications, as deployed by workflow management systems, focuses on process execution semantics. This includes a process's pre- and post-conditions and the sequence, repetition, choice, parallelism and synchronisation which characterise inter-process triggering. The successful development of workflows requires not only this but, in general, a fuller conceptualisation of a domain's processing, including process semantics. Given the diversity of business processing configurations, which, in the absence of a complete theoretical foundation, follow experience and intuition, the characterisations - indeed the cognition - of workflows still appear coarse. In this regard, a wellspring of modelling techniques, paradigms and informal-formal method extensions which address the analysis of organisational processing structures ...
neighbors: [248]
mask: Train

node_id: 1704
label: 5
A Realistic (Non-Associative) Logic And a Possible Explanation of the 7±2 Law When we know the subjective probabilities (degrees of belief) p1 and p2 of two statements S1 and S2, and we have no information about the relationship between these statements, then the probability of S1 & S2 can take any value from the interval [max(p1 + p2 - 1, 0), min(p1, p2)]. If we must select a single number from this interval, the natural idea is to take its midpoint. The corresponding "and" operation p1 & p2 := (1/2) * (max(p1 + p2 - 1, 0) + min(p1, p2)) is not associative. However, since the largest possible non-associativity degree |(a & b) & c - a & (b & c)| is equal to 1/9, this non-associativity is negligible if the realistic "granular" degrees of belief have granules of width 1/9. This may explain why humans are most comfortable with 9 items to choose from (the famous "7 plus minus 2" law). We also show that the use of interval computations can simplify the (rather complicated) proofs. 1 In Expert Systems, We Need Estimates for the Degree of...
neighbors: [2394]
mask: Train

node_id: 1705
label: 3
Managing Data Quality in Cooperative Information Systems (Extended Abstract) Massimo Mecella 1, Monica Scannapieco 1,2, Antonino Virgillito 1, Roberto Baldoni 1, Tiziana Catarci 1, and Carlo Batini 3. 1 Università di Roma "La Sapienza", Dipartimento di Informatica e Sistemistica, {mecella, monscan, virgi, baldoni, catarci}@dis.uniroma1.it; 2 Consiglio Nazionale delle Ricerche, Istituto di Analisi dei Sistemi ed Informatica (IASI-CNR); 3 Università di Milano "Bicocca", Dipartimento di Informatica, Sistemistica e Comunicazione, batini@disco.unimib.it. Abstract. Current approaches to the development of cooperative information systems are based on services to be offered by cooperating organizations, and on the opportunity of building coordinators and brokers on top of such services. The quality of data exchanged and provided by different services hampers such approaches, as data of low quality can spread all over the cooperative system. At the same time, improvement can be based on comparing data, correcting them and disseminating high quality data. In this paper, a service-based framework for managing data quality in cooperative information systems is presented. An XML-based model for data and quality data is proposed, and the design of a broker for data, which selects the best available data from different services, is presented. Such a broker also supports the improvement of data based on feedbacks to source services.
neighbors: [1252, 3153]
mask: Train

node_id: 1706
label: 3
Learning Comprehensible Descriptions of Multivariate Time Series Supervised classification is one of the most active areas of machine learning research. Most work has focused on classification in static domains, where an instantaneous snapshot of attributes is meaningful. In many domains, attributes are not static; in fact, it is the way they vary temporally that can make classification possible. Examples of such domains include speech recognition, gesture recognition and electrocardiograph classification. While it is possible to use ad-hoc, domain-specific techniques for "flattening" the time series to a learner-friendly representation, this fails to take into account both the special problems and special heuristics applicable to temporal data and often results in unreadable concept descriptions. Though traditional time series techniques can sometimes produce accurate classifiers, few can provide comprehensible descriptions. We propose a general architecture for classification and description of multivariate time series. It employs event primitives to ana...
neighbors: [420, 844, 2792]
mask: Train

node_id: 1707
label: 1
Dimensionality Reduction by Random Mapping: Fast Similarity Computation for Clustering When the data vectors are high-dimensional it is computationally infeasible to use data analysis or pattern recognition algorithms which repeatedly compute similarities or distances in the original data space. It is therefore necessary to reduce the dimensionality before, for example, clustering the data. If the dimensionality is very high, like in the WEBSOM method which organizes textual document collections on a Self-Organizing Map, then even the commonly used dimensionality reduction methods like the principal component analysis may be too costly. It will be demonstrated that the document classification accuracy obtained after the dimensionality has been reduced using a random mapping method will be almost as good as the original accuracy if the final dimensionality is sufficiently large (about 100 out of 6000). In fact, it can be shown that the inner product (similarity) between the mapped vectors follows closely the inner product of the original vectors.
neighbors: [399]
mask: Train

node_id: 1708
label: 5
Visual Looming as a range sensor for mobile robots This paper describes and evaluates visual looming as a method for monocular range estimation. The looming algorithm is based on the relationship between displacements of the observer relative to an object, and the resulting change in the size of the object's image on the focal plane of the camera. Though the looming algorithm has been described in detail in prior reports, its usefulness for inexpensive, robust ranging has not been realized widely. In this paper we analyze visual looming as a visual range sensor for autonomous mobile robots. Systematic experiments with a Pioneer 1 mobile robot show that visual looming can be used to extract ranging information much as with sonar. The accuracy of the looming algorithm is found to be significantly more robust than sonar when the object whose distance is being measured is slanted relative to the robot's line of sight. On the other hand, sonar is better suited for objects that cannot be visually segmented from their background, or objects ...
neighbors: [1652]
mask: Train

node_id: 1709
label: 5
Vision-Based Speaker Detection Using Bayesian Networks The development of user interfaces based on vision and speech requires the solution of a challenging statistical inference problem: The intentions and actions of multiple individuals must be inferred from noisy and ambiguous data. We argue that Bayesian network models are an attractive statistical framework for cue fusion in these applications. Bayes nets combine a natural mechanism for expressing contextual information with efficient algorithms for learning and inference. We illustrate these points through the development of a Bayes net model for detecting when a user is speaking. The model combines four simple vision sensors: face detection, skin color, skin texture, and mouth motion. We present some promising experimental results. 1 Introduction Human-centered user-interfaces based on vision and speech present challenging sensing problems in which multiple sources of information must be combined to infer the user's actions and intentions. Statistical inference techniques therefore...
neighbors: [73, 564]
mask: Test

node_id: 1710
label: 0
Trust and Partial Typing in Open Systems of Mobile Agents We present a partially-typed semantics for Dπ, a distributed π-calculus. The semantics is designed for mobile agents in open distributed systems in which some sites may harbor malicious intentions. Nonetheless, the semantics guarantees traditional type-safety properties at good locations by using a mixture of static and dynamic type-checking. We show how the semantics can be extended to allow trust between sites, improving performance and expressiveness without compromising type-safety. 1 Introduction In [13] we presented a type system for controlling the use of resources in a distributed system, or network. The type system guarantees two properties: (i) resource access is always safe, e.g. integer resources are always accessed with integers and string resources are always accessed with strings, and (ii) resource access is always authorized, i.e. resources may only be accessed by agents that have been granted permission to do so. While these properties are desirable, they are prop...
[]
Train
1,711
2
Incremental Document Clustering for Web Page Classification Introduction We consider document clustering for Web pages. Traditionally, the document classification task is carried out manually. In order to assign a document to an appropriate class, people would analyze the contents of the document first. Therefore a large amount of human effort would be required. There has been some research work conducted on automatic text classification. One approach is to learn the text classifiers by using the machine learning techniques. However, these algorithms are based on a set of positive and negative training examples for learning the text classifiers. The quality of the resulting classifiers highly depends on the fitness of the training examples. There are many terms and classes in the World Wide Web (or just the Web), and many new terms and concepts are created everyday. It is quite impossible to have domain experts to identify training examples to learn a classifier for each text class in the above manner. In order to make the document cl
[ 1312 ]
Train
1,712
4
Instant Messaging and Awareness of Presence in WebWho This is a study of how awareness of presence affects the content of instant messages via an awareness tool, WebWho. The awareness tool is an easily accessible web-based system that visualises a large university computer lab. The instant messaging system is one of the functions of the tool, which allows students to virtually locate one another and to communicate via the instant messaging system. As WebWho is accessible through any web browser, it requires no programming skills or special software. It may also be used from outside the computer lab by students located elsewhere.
[ 552, 1776 ]
Validation
1,713
4
Augmented Reality Tracking in Natural Environments Tracking, or camera pose determination, is the main technical challenge in creating augmented realities. Constraining the degree to which the environment may be altered to support tracking heightens the challenge. This paper describes several years of work at the USC Computer Graphics and Immersive Technologies (CGIT) laboratory to develop self-contained, minimally intrusive tracking systems for use in both indoor and outdoor settings. These hybrid-technology tracking systems combine vision and inertial sensing with research in fiducial design, feature detection, motion estimation, recursive filters, and pragmatic engineering to satisfy realistic application requirements.
[ 469, 1088, 1654 ]
Test
1,714
3
An Object Oriented Multidimensional Data Model for OLAP Online Analytical Processing (OLAP) data is frequently organized in the form of multidimensional data cubes each of which is used to examine a set of data values, called measures, associated with multiple dimensions and their multiple levels. In this paper, we first propose a conceptual multidimensional data model, which is able to represent and capture natural hierarchical relationships among members within a dimension as well as the relationships between dimension members and measure data values. Hereafter, dimensions and data cubes with their operators are formally introduced. Afterward, we use UML (Unified Modeling Language) to model the conceptual multidimensional model in the context of object oriented databases. 1. Introduction Data warehouses and OLAP are essential elements of decision support [5]; they enable business decision makers to creatively approach, analyze and understand business problems [16]. While data warehouses are built to store very large amounts of integrat...
[ 539 ]
Train
1,715
2
Supporting Image Search on the Web While pages on the Web contain more and more multimedia information, such as images, videos and audio, today search engines are mostly based on textual information. There is an emerging need of a new generation of search engines that try to exploit the full multimedia information present on the Web. The approach presented in this paper is based on a multimedia model intended to describe the various multimedia components, their structure and their relationships with a pre-defined taxonomy of concepts, in order to support the information retrieval process. A prototype of an image search engine, based on this approach, is presented as a first step in this direction, and results are discussed. This research has been funded by the EC ESPRIT Long Term Research program, project no. 9141, HERMES (Foundations of High Performance Multimedia Information Management Systems). 1. INTRODUCTION The wide use of the World-Wide Web (WWW) across Internet is making of vital importance the proble...
[ 2662 ]
Validation
1,716
0
Applying A New Multidimensional Framework To The Evaluation Of Multiagent System Methodologies Because of the great interest in using multiagent systems (MAS) in a wide variety of applications in recent years, agent-oriented methodologies and related modeling techniques have become a priority for the development of large-scale agent-based systems. The work we present here belongs to the disciplines of Software Engineering and Distributed Artificial Intelligence. More specifically, we are interested in software engineering aspects involved in the development of multiagent systems (MAS). Several methodologies have been proposed for the development of MAS. For the most part, these methodologies remain incomplete: they are either an extension of object-oriented methodologies or an extension of knowledge-based methodologies. In addition, too little effort has gone into the standardization of MAS methodologies, platforms and environments. It seems obvious, therefore, that software engineering aspects of the development of MAS still remain an open field. The success of the agent paradigm requires systematic methodologies for the specification, analysis and design of "non toy" MAS applications. We here present a framework called MUCCMAS, which stands for MUltidimensional framework of Criteria for the Comparison of MAS methodologies, that enabled us to make a comparative analysis of existing main MAS methodologies. The overall results of this comparative analysis are presented here. Results from this work have also led us to propose a unification scheme, much in the same spirit as that of UML, for MAS methodologies. Our goal is to make MAS design and development more systematic and to contribute to the standardisation of MAS methodologies and platforms.
[ 1414, 2343 ]
Train
1,717
1
Needles in a Haystack: Plan Recognition in Large Spatial Domains Involving Multiple Agents While plan recognition research has been applied to a wide variety of problems, it has largely made identical assumptions about the number of agents participating in the plan, the observability of the plan execution process, and the scale of the domain. We describe a method for plan recognition in a real-world domain involving large numbers of agents performing spatial maneuvers in concert under conditions of limited observability. These assumptions differ radically from those traditionally made in plan recognition and produce a problem which combines aspects of the fields of plan recognition, pattern recognition, and object tracking. We describe our initial solution which borrows and builds upon research from each of these areas, employing a pattern-directed approach to recognize individual movements and generalizing these to produce inferences of large-scale behavior. Introduction Plan recognition, the problem of inferring goals, intentions, or future actions given observations of ...
[ 672 ]
Train
1,718
2
Probabilistic Latent Semantic Analysis Probabilistic Latent Semantic Analysis is a novel statistical technique for the analysis of two-mode and co-occurrence data, which has applications in information retrieval and filtering, natural language processing, machine learning from text, and in related areas. Compared to standard Latent Semantic Analysis which stems from linear algebra and performs a Singular Value Decomposition of co-occurrence tables, the proposed method is based on a mixture decomposition derived from a latent class model. This results in a more principled approach which has a solid foundation in statistics. In order to avoid overfitting, we propose a widely applicable generalization of maximum likelihood model fitting by tempered EM. Our approach yields substantial and consistent improvements over Latent Semantic Analysis in a number of experiments.
[ 949, 1126, 1958, 2631 ]
Test
1,719
4
Tinmith-Metro: New Outdoor Techniques for Creating City Models with an Augmented Reality Wearable Computer This paper presents new techniques for capturing and viewing on site 3D graphical models for large outdoor objects. Using an augmented reality wearable computer, we have developed a software system, known as Tinmith-Metro. Tinmith-Metro allows users to control a 3D constructive solid geometry modeller for building graphical objects of large physical artefacts, for example buildings, in the physical world. The 3D modeller is driven by a new user interface known as Tinmith-Hand, which allows the user to control the modeller using a set of pinch gloves and hand tracking. These techniques allow users to supply their AR renderers with models that would previously have to be captured with manual, time-consuming, and/or expensive methods.
[ 2549 ]
Train
1,720
3
Component-based Algebraic Specification and Verification in CafeOBJ We present a formal method for component-based system specification and verification which is based on the new algebraic specification language CafeOBJ, which is a modern successor of OBJ incorporating several new developments in algebraic specification theory and practice. We first give an overview of the main features of CafeOBJ, including its logical foundations, and then we focus on the behavioural specification paradigm in CafeOBJ, surveying the object-oriented CafeOBJ specification and verification methodology based on behavioural abstraction. The last part of this paper further focuses on a component-based behavioural specification and verification methodology which features high reusability of both specification code and verification proof scores. This methodology constitutes the basis for an industrial strength formal method around CafeOBJ. 1 Introduction CafeOBJ (whose definition is given by [7]) is a modern successor of the OBJ language incorporating several ne...
[ 2355 ]
Train
1,721
4
Evaluating Interactive Cross-Language Information Retrieval: Document selection The problem of finding documents that are written in a language that the searcher cannot read is perhaps the most challenging application of Cross-Language Information Retrieval (CLIR) technology. The first Cross-Language Evaluation Forum (CLEF) provided an excellent venue for assessing the performance of automated CLIR techniques, but little is known about how searchers and systems might interact to achieve better cross-language search results than automated systems alone can provide. This paper explores the question of how interactive approaches to CLIR might be evaluated, suggesting an initial focus on evaluation of interactive document selection. Important evaluation issues are identified, the structure of an interactive CLEF evaluation is proposed, and the key research communities that could be brought together by such an evaluation are introduced. 1 Introduction Cross-language information retrieval (CLIR) has somewhat uncharitably been referred to as "the problem ...
[ 372 ]
Test
1,722
0
Distributed Reflective Architectures The autonomy of a system can be defined as its capability to recover from unforeseen difficulties without any user intervention. This thesis proposal addresses a small part of this problem, namely the detection of anomalies within a system's own operation by the system itself. It is a response to a challenge presented by immune systems which can distinguish between "self" and "nonself", i.e. they can recognise a "foreign" pattern (due to a virus or bacterium) as different from those associated with the organism itself, even if the pattern was not previously encountered. The aim is to apply this requirement to an artificial system, where "nonself" may be any form of deliberate intrusion or random anomalous behaviour due to a fault. When designing reflective architectures or self-diagnostic systems, it is simpler to rely on a single coordination mechanism to make the system work as intended. However, such a coordination mechanism cannot be inspected or repaired by the system i...
[ 903 ]
Validation
1,723
0
Émile: Marshalling Passions in Training and Education Emotional reasoning can be an important contribution to automated tutoring and training systems. This paper describes Émile, a model of emotional reasoning that builds upon existing approaches and significantly generalizes and extends their capabilities. The main contribution is to show how an explicit planning model allows a more general treatment of several stages of the reasoning process. The model supports educational applications by allowing agents to appraise the emotional significance of events as they relate to students' (or their own) plans and goals, model and predict the emotional state of others, and alter behavior accordingly. 1 INTRODUCTION Emotional computers may seem an oxymoron but recent years have seen a flurry of computational accounts of emotion in a variety of applications. This paper describes Émile, a model of emotional reasoning that extends and significantly generalizes prior work. Émile illustrates how an explicit planning model supports a more general treatme...
[ 1965, 2021, 2536, 2992 ]
Train
1,724
0
Controlling Cooperative Problem Solving in Industrial Multi-Agent Systems using Joint Intentions One reason why Distributed AI (DAI) technology has been deployed in relatively few real-size applications is that it lacks a clear and implementable model of cooperative problem solving which specifies how agents should operate and interact in complex, dynamic and unpredictable environments. As a consequence of the experience gained whilst building a number of DAI systems for industrial applications, a new principled model of cooperation has been developed. This model, called Joint Responsibility, has the notion of joint intentions at its core. It specifies pre-conditions which must be attained before collaboration can commence and prescribes how individuals should behave both when joint activity is progressing satisfactorily and also when it runs into difficulty. The theoretical model has been used to guide the implementation of a general-purpose cooperation framework and the qualitative and quantitative benefits of this implementation have been assessed through a series of comparativ...
[ 266, 286, 424, 484, 542, 672, 717, 996, 1260, 1358, 1475, 1604, 1614, 1616, 2016, 2056, 2243, 2359, 2737, 2921, 3032, 3173 ]
Test
1,725
0
Towards the Standardization of Multi-Agent Systems Architectures: An Overview This article briefly describes these groups' efforts toward the standardization of multi-agent systems architectures, and sketches early works to define a multi-agent systems architecture at the University of Calgary. However, the main objective of this article is to give the reader a basic overview of the background and terminology in this exciting area of research.
[ 2353 ]
Validation
1,726
2
Distributional Clustering of Words for Text Classification This paper applies Distributional Clustering (Pereira et al. 1993) to document classification. The approach clusters words into groups based on the distribution of class labels associated with each word. Thus, unlike some other unsupervised dimensionality-reduction techniques, such as Latent Semantic Indexing, we are able to compress the feature space much more aggressively, while still maintaining high document classification accuracy. Experimental results obtained on three real-world data sets show that we can reduce the feature dimensionality by three orders of magnitude and lose only 2% accuracy---significantly better than Latent Semantic Indexing (Deerwester et al. 1990), class-based clustering (Brown et al. 1992), feature selection by mutual information (Yang and Pederson 1997), or Markov-blanket-based feature selection (Koller and Sahami 1996). We also show that less aggressive clustering sometimes results in improved classification accuracy over classification without clusteri...
[ 430, 487, 514, 520, 630, 732, 759, 1405, 2181, 2560, 2780, 2839 ]
Train
1,727
1
A Hybrid Approach to Profile Creation and Intrusion Detection Anomaly detection involves characterizing the behaviors of individuals or systems and recognizing behavior that is outside the norm. This paper describes some preliminary results concerning the robustness and generalization capabilities of machine learning methods in creating user profiles based on the selection and subsequent classification of command line arguments. We base our method on the belief that legitimate users can be classified into categories based on the percentage of commands they use in a specified period. The hybrid approach we employ begins with the application of expert rules to reduce the dimensionality of the data, followed by an initial clustering of the data and subsequent refinement of the cluster locations using a competitive network called Learning Vector Quantization. Since Learning Vector Quantization is a nearest-neighbor classifier, any new record presented to the network that lies outside a specified distance is classified as a masquerader. Thus, this system does not require anomalous records to be included in the training set. 1.
[ 1240 ]
Test
1,728
1
View-Based 3d Object Recognition With Support Vector Machines Support Vector Machines have demonstrated excellent results in pattern recognition tasks and 3D object recognition. In this contribution, we confirm some of the results in 3D object recognition and compare them to other object recognition systems. We use different pixel-level representations to perform the experiments, while we extend the setting to the more challenging and practical case when only a limited number of views of the object are presented during training. We report high correct classification rates on unseen views, especially considering that no domain knowledge is included in the proposed system. Finally, we suggest an active learning algorithm to reduce further the required number of training views. INTRODUCTION Humans are able to recognize everyday 3D objects when shown previously only one - or at most a few - views of the object. In contrast, artificial systems must either be shown many views of an object (e.g. [8]) or a lot of knowledge of object structure must ...
[ 2389 ]
Train
1,729
4
Systematic Output Modification in a 2D User Interface Toolkit In this paper we present a simple but general set of techniques for modifying output in a 2D user interface toolkit. We use a combination of simple subclassing, wrapping, and collusion between parent and output objects to produce arbitrary sets of composable output transformations. The techniques described here allow rich output effects to be added to most, if not all, existing interactors in an application, without the knowledge of the interactors themselves. This paper explains how the approach works, discusses a number of example effects that have been built, and describes how the techniques presented here could be extended to work with other toolkits. We address issues of input by examining a number of extensions to the toolkit input subsystem to accommodate transformed graphical output. Our approach uses a set of "hooks" to undo output transformations when input is to be dispatched. KEYWORDS: User Interface Toolkits, Output, Rendering, Interactors, Drawing Effects. INTRODUCTION ...
[ 1645 ]
Train
1,730
5
OKBC: A Programmatic Foundation for Knowledge Base Interoperability The technology for building large knowledge bases (KBs) is yet to witness a breakthrough so that a KB can be constructed by the assembly of prefabricated knowledge components. Knowledge components include both pieces of domain knowledge (for example, theories of economics or fault diagnosis) and KB tools (for example, editors and theorem provers). Most of the current KB development tools can only manipulate knowledge residing in the knowledge representation system (KRS) for which the tools were originally developed. Open Knowledge Base Connectivity (OKBC) is an application programming interface for accessing KRSs, and was developed to enable the construction of reusable KB tools. OKBC improves upon its predecessor, the Generic Frame Protocol (GFP), in several significant ways. OKBC can be used with a much larger range of systems because its knowledge model supports an assertional view of a KRS. OKBC provides an explicit treatment of entities that are not frames, and it has a much better way of controlling inference and specifying default values. OKBC can be used on practically any platform because it supports network transparency and has implementations for multiple programming languages. In this paper, we discuss technical design issues faced in the development of OKBC, highlight how OKBC improves upon GFP, and report on practical experiences in using it.
[ 1023, 1960, 2706 ]
Train
1,731
4
Multi-Sensor Context Aware Clothing Inspired by perception in biological systems, distribution of a massive amount of simple sensing devices is gaining more support in detection applications. A focus on fusion of sensor signals instead of strong analysis algorithms, and a scheme to distribute sensors, results in new issues. Especially in wearable computing, where sensor data continuously changes, and clothing provides an ideal supporting structure for simple sensors, this approach may prove to be favourable. Experiments with a body-distributed sensor system investigate the influence of two factors that affect classification of what has been sensed: an increase in sensors enhances recognition, while adding new classes or contexts depreciates the results. Finally, a wearable computing related scenario is discussed that exploits the presence of many sensors.
[ 290, 664, 2225, 2959 ]
Validation
1,732
1
Extended Compact Genetic Algorithm in C++ This report tells you how to download, compile, and run the extended compact genetic algorithm (ECGA) described in Harik's paper (Harik, 1999). It also explains how to modify the objective function that comes with the distribution of the code. The source is written in C++ but a knowledge of the C programming language is sufficient to modify the objective function so that you can try the ECGA on your own problems. 2 How to download the code
[ 1244, 1644 ]
Train
1,733
4
Moving from GUIs to PUIs For some time, graphical user interfaces (GUIs) have been the dominant platform for human computer interaction. The GUI-based style of interaction has made computers simpler and easier to use, especially for office productivity applications. However, as the way we use computers changes and computing becomes more pervasive and ubiquitous, GUIs will not easily support the range of interactions necessary to meet users' needs. In order to accommodate a wider range of scenarios, tasks, users, and preferences, we need to move toward interfaces that are natural, intuitive, adaptive, and unobtrusive. The aim of a new focus in HCI, called Perceptual User Interfaces (PUIs), is to make human-computer interaction more like how people interact with each other and with the world. This paper describes the emerging PUI field and then reports on three PUI-motivated components: computer vision-based techniques to visually perceive relevant information about the user. 1. Introduction Recent research i...
[ 1963 ]
Train
1,734
4
AdApt - a multimodal conversational dialogue system in an apartment domain A general overview of the AdApt project and the research that is performed within the project is presented. In this project various aspects of human-computer interaction in multimodal conversational dialogue systems are investigated. The project will also include studies on the integration of user/system/dialogue dependent speech recognition and multimodal speech synthesis. A domain in which multimodal interaction is highly useful has been chosen, namely, finding available apartments in Stockholm. A Wizard-of-Oz data collection within this domain is also described. 1.
[ 2987 ]
Train
1,735
3
Array-Based Evaluation of Multi-Dimensional Queries in Object-Relational Database Systems Since multi-dimensional arrays are a natural data structure for supporting multi-dimensional queries, and object-relational database systems support multi-dimensional array ADTs, it is natural to ask if a multi-dimensional array-based ADT can be used to improve O/R DBMS performance on multi-dimensional queries. As an initial step toward answering this question, we have implemented a multi-dimensional array in the Paradise Object-Relational DBMS. In this paper we describe the implementation of this compressed-array ADT, and explore its performance for queries including star-join consolidations and selections. We show that in many cases the array ADT can provide significantly higher performance than can be obtained by applying techniques such as bitmap indices and star-join algorithms to relational tables. 1 Introduction Multi-dimensional data analysis has been around for at least twenty years [OR95], but has recently taken the spotlight in the context of OLAP (On-Line Analytical Process...
[ 1945 ]
Test
1,736
5
Ontologies Description and Applications The word "ontology" has gained considerable popularity within the AI community. Ontology is usually viewed as a high-level description consisting of concepts that organize the upper parts of the knowledge base.
[ 1146, 2670 ]
Train
1,737
1
Improving Minority Class Prediction Using Case-Specific Feature Weights This paper addresses the problem of handling skewed class distributions within the case-based learning (CBL) framework. We first present as a baseline an information-gain-weighted CBL algorithm and apply it to three data sets from natural language processing (NLP) with skewed class distributions. Although overall performance of the baseline CBL algorithm is good, we show that the algorithm exhibits poor performance on minority class instances. We then present two CBL algorithms designed to improve the performance of minority class predictions. Each variation creates test-case-specific feature weights by first observing the path taken by the test case in a decision tree created for the learning task, and then using path-specific information gain values to create an appropriate weight vector for use during case retrieval. When applied to the NLP data sets, the algorithms are shown to significantly increase the accuracy of minority class predictions while maintaining or improving overall classification accuracy. 1
[ 744, 1389, 2117, 2641, 2823 ]
Validation
1,738
0
Multi-Agent Support for Internet-Scale Grid Management Internet-scale computational grids are emerging from various research projects. Most notably are the US National Technology Grid and the European Data Grid projects. One specific problem in realizing wide-area distributed computing environments as proposed in these projects, is effective management of the vast amount of resources that are made available within the grid environment. This paper proposes an agent-based approach to resource management in grid environments, and describes an agent infrastructure that could be integrated with the grid middleware layer. This agent infrastructure provides support for mobile agents that is scalable in the number of agents and the number of resources.
[ 725 ]
Train
1,739
4
Using Dynamic Configuration to Manage A Scalable Multimedia Distribution System Multimedia applications and interfaces will radically change the way computer systems will look in the coming years. Radio and TV broadcasting will assume a digital format and their distribution networks will be integrated to the Internet. Existing hardware and software infrastructures, however, are unable to provide all the scalability, flexibility, and quality of service that these applications require. We present a framework for building scalable and flexible multimedia distribution systems that greatly improves the possibilities for the provision of quality of service in large-scale networks. We show how to use architectural-awareness, mobile agents, and a CORBA-based framework to support dynamic (re)configuration, efficient code distribution, and fault-tolerance. This approach can be applied not only for multimedia distribution, but also for any QoS-sensitive distributed application.
[ 138, 1305 ]
Train
1,740
4
Design of a Component-Based Augmented Reality Framework We propose a new approach to building augmented reality (AR) systems using a component-based software framework. This has advantages for all parties involved with AR systems. Our proposed framework consists of reusable distributed services for key subproblems of AR, the middleware to combine them, and an extensible software architecture.
[ 2113, 2365 ]
Train
1,741
1
Learning Complex Patterns for Document Categorization Knowledge-based approaches to document categorization make use of well elaborated and powerful pattern languages for manual writing of classification rules. Although such classification patterns have proven useful in many practical applications, algorithms for learning classifiers from examples mostly rely on much simpler representations of classification knowledge. In this paper, we describe a learning algorithm which employs a pattern language similar to languages used for manual rule editing. We focus on the learning of three specific constructs of this pattern language, namely phrases, tolerance matches of words and substring matches of words. Introduction Manually writing document categorization rules is labor intensive and requires much expertise. This caused a growing research interest in learning systems for document categorization. Most of these systems transform pre-classified example documents into a propositional attribute-value representation. Simple attributes indicate w...
[ 1572 ]
Train
1,742
4
Designing Synchronous User Interface for Collaborative Applications Synchronous User interface is a medium where all objects being shared on it can be viewed regardless of geographical location and its users can interact with each other in real-time. Designing such an interface for users working collaboratively requires dealing with a number of issues. Herein, our concern lies in the design of the control component of Human-Computer Interaction (HCI) and corresponding User Interface (UI) software that implements it. We make use of our approach to interactive system development based on the MPX - Mapping from PAN (Protagonist Action Notation) into Xchart (eXtended Statechart) - and illustrate it by presenting the case study of a collaborative application. Keywords: PAN, MPX, HCI design, UI software design. 1 INTRODUCTION To survive, human beings need to organize themselves into a society. Differently from other animals that are able to live separately in a reasonable manner, human beings are endowed with physical and cognitive abilities ne...
[ 709 ]
Train
1,743
4
An XML-based runtime user interface description language for mobile computing devices In a time where mobile computing devices and embedded systems gain importance, too much time is spent to reinventing user interfaces for each new device. To enhance future extensibility and reusability of systems and their user interfaces we propose a runtime user interface description language, which can cope with constraints found in embedded systems and mobile computing devices. XML seems to be a suitable tool to do this, when combined with Java. Following the evolution of Java towards XML, it is logical to introduce the concept applied to mobile computing devices and embedded systems. 1
[ 2201 ]
Test
1,744
3
Overlapping B+-trees: an Implementation of a Transaction Time Access Method A new variation of Overlapping B+-trees is presented, which provides efficient indexing of transaction time and keys in a two dimensional key-time space. Modification operations (i.e. insertions, deletions and updates) are allowed at the current version, whereas queries are allowed to any temporal version, i.e. either in the current or in past versions. Using this structure, snapshot and range-timeslice queries can be answered optimally. However, the fundamental objective of the proposed method is to deliver efficient performance in the case of a general pure-key query (i.e. "history of a key"). The trade-off is a small increase in time cost for version operations and storage requirements.
[ 1929 ]
Train
1,745
1
Ant Colony Control for Autonomous Decentralized Shop Floor Routing In this paper, we introduce a new approach to autonomous decentralized shop floor routing. Our system, which we call Ant Colony Control (AC 2 ), applies the analogy of a colony of ants foraging for food to the problem of dynamic shop floor routing. In this system, artificial ants use only indirect communication to make all shop routing decisions by altering and reacting to their dynamically changing common environment through the use of simulated pheromone trails. For simple factory layouts, we show that the emergent behavior of the colony is comparable to using the optimal routing strategy. Furthermore, as the complexity of the factory layout is increased, we show that the adaptive behavior of AC 2 evolves local decision making policies that lead to near-optimal solutions from the standpoint of global performance. 1. Introduction The factory is a complex dynamical environment often plagued by unexpected events. Machines may break down. An unexpected urgent job may suddenly be re...
[ 86, 2065 ]
Train
1,746
2
Domain-Specific Keyphrase Extraction Keyphrases are an important means of document summarization, clustering, and topic search. Only a small minority of documents have author-assigned keyphrases, and manually assigning keyphrases to existing documents is very laborious. Therefore it is highly desirable to automate the keyphrase extraction process. This paper shows that a simple procedure for keyphrase extraction based on the naive Bayes learning scheme performs comparably to the state of the art. It goes on to explain how this procedure's performance can be boosted by automatically tailoring the extraction process to the particular document collection at hand. Results on a large collection of technical reports in computer science show that the quality of the extracted keyphrases improves significantly when domain-specific information is exploited. 1 Introduction Keyphrases give a high-level description of a document's contents that is intended to make it easy for prospective readers to decide whether or no...
[ 1567, 1608, 1647 ]
Train
1,747
4
The Design of History Mechanisms and their Use in Collaborative Educational Simulations Reviewing past events has been useful in many domains. Videotapes and flight data recorders provide invaluable technological help to sports coaches or aviation engineers. Similarly, providing learners with a readable recording of their actions may help them monitor their behavior, reflect on their progress, and experiment with revisions of their experiences. It may also facilitate active collaboration among dispersed learning communities. Learning histories can help students and professionals make more effective use of digital library searching, word processing tasks, computer-assisted design tools, electronic performance support systems, and web navigation. This paper describes the design space and discusses the challenges of implementing learning histories. It presents guidelines for creating effective implementations, and the design tradeoffs between sparse and dense history records. The paper also presents a first implementation of learning histories for a simulation-based engineer...
[ 1769 ]
Train
1,748
4
Between Information and Communication: Middle Spaces in Computer Media for Learning In this paper, we identify two categories of media that are common in computer-supported collaborative learning and software in general: communication media, and information media. These two types of media map easily on to two types of social activities in which learning is grounded: dialogue and monologue. Drawing on literature in learning theory, we suggest the need for interfaces that help students transition from dialogue to monologue and back again. This "middle space" between communication and information interfaces is illustrated with several examples from CSCL. We advocate filling in this middle space with software and activities that transcend some of the traditional design tradeoffs associated with information and communication interfaces. Keywords: Collaboration, Interaction & Design Tradeoffs Introduction: computers, communication & learning Computer mediated acts of communication are becoming more commonplace in today's classroom. Like all media, particular computer techn...
[ 2158 ]
Train
1,749
0
Some Thoughts on Transiently Shared Dataspaces Transiently Shared Dataspaces (TSD), recently introduced in the coordination infrastructure Lime, represent an emerging technology to enable the use of dataspaces for the coordination of mobile agents: data-sharing is allowed among agents running on hosts belonging to the same federation (i.e., currently connected). In this paper we present a formalization of the TSD technology in order to provide a framework for reasoning about systems based on this infrastructure. In particular, we concentrate on alternative design choices related to data migration, host connectivity, and reactive programming. 1
[ 580 ]
Test
1,750
3
Design and Implementation of a High-Performance Distributed Web Crawler Broad web search engines as well as many more specialized search tools rely on web crawlers to acquire large collections of pages for indexing and analysis. Such a web crawler may interact with millions of hosts over a period of weeks or months, and thus issues of robustness, flexibility, and manageability are of major importance. In addition, I/O performance, network resources, and OS limits must be taken into account in order to achieve high performance at a reasonable cost. In this paper, we describe the design and implementation of a distributed web crawler that runs on a network of workstations. The crawler scales to (at least) several hundred pages per second, is resilient against system crashes and other events, and can be adapted to various crawling applications. We present the software architecture of the system, discuss the performance bottlenecks, and describe efficient techniques for achieving high performance. We also report preliminary experimental results based on a crawl of million pages on million hosts. Work supported by NSF CAREER Award NSF CCR-0093400, Intel Corporation, and the New York State Center for Advanced Technology in Telecommunications (CATT) at Polytechnic University, and by equipment grants from Intel Corporation and Sun Microsystems. 1 1
[ 573, 806, 1039, 1170, 1987, 2372, 2503 ]
Train
1,751
0
Network Management by Knowledge Distribution using Mobile Agents d to each other. Functionality provided by the different agents and managers is located at given places in the system. Such a system may have difficulties coping with a highly dynamic network environment. By introducing mechanisms for moving program code, a more flexible system can be designed. A specific part (e.g. an agent) having a certain set of abilities can be moved or move itself closer to (or even into) the NE in need of management. Bandwidth can be saved and "expert agents" can be utilised. Autonomous moving program code is known as mobile agents [3]. 3 Collective behavior When designing agents for complex problem solving, you can either create a few highly advanced agents [2] or try to divide and distribute the problem solving task between a number of less complex agents. Mother nature can give several examples of successful implementations based on the latter strategy. An ant colony is such an example where a high number of less inte
[ 2831 ]
Train
1,752
4
Towards Group Communication for Mobile Participants Group communication will undoubtedly be a useful paradigm for many applications of wireless networking in which reliability and timeliness are requirements. Moreover, location awareness is clearly central to mobile applications such as traffic management and smart spaces. In this paper, we introduce our definition of proximity groups in which group membership depends on location and then discuss some requirements for a group membership management service suitable for proximity groups. We describe a novel approach to efficient coverage estimation, giving applications feedback on the proportion of the area of interest covered by a proximity group, and also discuss our approach to partition anticipation.
[ 2248, 2529 ]
Train
1,753
2
Creating and Evaluating Multi-Document Sentence Extract Summaries This paper discusses passage extraction approaches to multi-document summarization that use available information about the document set as a whole and the relationships between the documents to build on single document summarization methodology. Multi-document summarization differs from single in that the issues of compression, speed, redundancy and passage selection are critical in the formation of useful summaries, as well as the user's goals in creating the summary. Our approach addresses these issues by using domain-independent techniques based mainly on fast, statistical processing, a metric for reducing redundancy and maximizing diversity in the selected passages, and a modular framework to allow easy parameterization for different genres, corpora characteristics and user requirements. We examined how humans create multi-document summaries as well as the characteristics of such summaries and use these summaries to evaluate the performance of various multi-document summarization algorithms. 1.
[ 788 ]
Train
1,754
2
Privacy Interfaces for Information Management this article, we propose a set of guidelines for designing privacy interfaces that facilitate the creation, inspection, modification, and monitoring of privacy policies. These guidelines are based on our experience with COLLABCLIO---a system that supports automated sharing of Web browsing histories. COLLABCLIO stores a person's browsing history and makes it searchable by content, keyword, and other attributes. A typical COLLABCLIO query might be: "Show me all the pages Tessa has visited in the .edu domain that contain the phrase `direct manipulation.' " Since a COLLABCLIO user can make queries regarding the browsing history of other users, there are obvious privacy concerns.
[ 1248 ]
Train
1,755
2
Finding Code on the World Wide Web: A Preliminary Investigation To find out what kind of design structures programmers really use, we need to examine a wide variety of programs. Unfortunately most program source code is proprietary and is unavailable for analysis. The World Wide Web (Web) potentially can provide a rich source of programs for study. The freely available code on the Web, if in sufficient quality and quantity, can provide a window into software design as it is practiced today. In a preliminary study of source code availability on the Web, we estimate that 4% of URLs contain object-oriented source code, and 9% of URLs contain executable code --- either binary or class files. This represents an enormous resource for program analysis. We can, with some risk of inaccuracy, conservatively project our sampling results to the entire Web. Our estimate is that the Web contains at least 3.4 million files containing either Java, C++, or Perl source code, 20.3 million files containing C source code, and 8.7 million files containing executable code. Keywords: Design, source code analysis, World Wide Web estimation, code on the World Wide Web. 1.
[ 488, 1591, 1883 ]
Validation
1,756
2
Lower Bounds for High Dimensional Nearest Neighbor Search and Related Problems In spite of extensive and continuing research, for various geometric search problems (such as nearest neighbor search), the best algorithms known have performance that degrades exponentially in the dimension. This phenomenon is sometimes called the curse of dimensionality. Recent results [33, 32, 35] show that in some sense it is possible to avoid the curse of dimensionality for the approximate nearest neighbor search problem. But must the exact nearest neighbor search problem suffer this curse? We provide some evidence in support of the curse. Specifically we investigate the exact nearest neighbor search problem and the related problem of exact partial match within the asymmetric communication model first used by Miltersen [38] to study data structure problems. We derive non-trivial asymptotic lower bounds for the exact problem that stand in contrast to known algorithms for approximate nearest neighbor search. Department of Computer Science, University of Toronto. Part of this work ...
[ 53 ]
Validation
1,757
4
Finding Location Using Omnidirectional Video on a Wearable Computing Platform In this paper we present a framework for a navigation system in an indoor environment using only omnidirectional video. Within a Bayesian framework we seek the appropriate place and image from the training data to describe what we currently see and infer a location. The posterior distribution over the state space conditioned on image similarity is typically not Gaussian. The distribution is represented using sampling and the location is predicted and verified over time using the Condensation algorithm. The system does not require complicated feature detection, but uses a simple metric between two images. Even with low resolution input, the system may achieve accurate results with respect to the training data when given favorable initial conditions. 1. Introduction and Previous Work Recognizing location is a difficult but often essential part of identifying a wearable computer user's context. Location sensing may be used to provide mobility aids for the blind [13], spatially-based not...
[ 172, 468, 664, 1110, 1186, 1556, 1598, 2225, 2656, 2850 ]
Test
1,758
1
Rational and Convergent Learning in Stochastic Games This paper investigates the problem of policy learning in multiagent environments using the stochastic game framework, which we briefly overview. We introduce two properties as desirable for a learning agent when in the presence of other learning agents, namely rationality and convergence. We examine existing reinforcement learning algorithms according to these two properties and notice that they fail to simultaneously meet both criteria. We then contribute a new learning algorithm, WoLF policy hillclimbing, that is based on a simple principle: "learn quickly while losing, slowly while winning." The algorithm is proven to be rational and we present empirical results for a number of stochastic games showing the algorithm converges. 1
[ 240, 2221 ]
Train
1,759
0
An Overview of the Multiagent Systems Engineering Methodology . To solve complex problems, agents work cooperatively with other agents in heterogeneous environments. We are interested in coordinating the local behavior of individual agents to provide an appropriate system-level behavior. The use of intelligent agents provides an even greater amount of flexibility to the ability and configuration of the system itself. With these new intricacies, software development is becoming increasingly difficult. Therefore, it is critical that our processes for building the inherently complex distributed software that must run in this environment be adequate for the task. This paper introduces a methodology for designing these systems of interacting agents. 1.
[ 139, 1012, 1172, 1297, 1414, 1476, 1768, 2025, 2150, 2343, 3108 ]
Test
1,760
3
Abduction And Induction. Essays On Their Relation And Integration. Bibliography Charles S. Peirce Society, 22(2):145--164. Anderson, D. (1987). Creativity and the Philosophy of C.S. Peirce, volume 27 of Philosophy Library. Martinus Nijhoff. Andreasen, T. and Christiansen, H. (1996). Counterfactual exceptions in deductive database queries. In Proceedings of the Twelfth European Conference on Artificial Intelligence, pages 340--344. Andreasen, T. and Christiansen, H. (1998). A practical approach to hypothetical database queries. In Freitag, B., Decker, H., Kifer, M., and Voronkov, A., editors, Transactions and Change in Logic Databases, volume 1472 of Lecture Notes in Computer Science, Berlin. Springer-Verlag. Angluin, D. (1980). Inductive inference of formal languages from positive data. Information and Control, 45:117--135. Angluin, D., Frazier, M., and Pitt, L. (1992). Learning conjunctions of Horn clauses.
[ 872, 2307 ]
Train
1,761
4
Social and Semiotic Analyses for Theorem Prover User Interface Design : We describe an approach to user interface design based on ideas from cognitive science, social science, especially the theory of stories, and a new area tentatively called algebraic semiotics. Social analysis helps to identify coherent classes of users and their requirements, and suggests some ways to make proofs more understandable, while algebraic semiotics, which combines semiotics with algebraic specification, provides a rigorous theory of interface functionality and quality. We apply these techniques to designing user interfaces for a distributed cooperative theorem proving system, whose main component is a website generation and proof assistance tool called Kumo. This interface integrates formal proving, proof browsing, animation, informal explanation, and online background tutorials. Experience with using the interface is reported, and some conclusions are drawn. 1 Introduction Recent large advances in performance have made it arguable that the most pressing open problems in ...
[ 1781, 1914 ]
Train
1,762
1
Continuous Categories For a Mobile Robot Autonomous agents make frequent use of knowledge in the form of categories --- categories of objects, human gestures, web pages, and so on. This paper describes a way for agents to learn such categories for themselves through interaction with the environment. In particular, the learning algorithm transforms raw sensor readings into clusters of time series that have predictive value to the agent. We address several issues related to the use of an uninterpreted sensory apparatus and show specific examples where a Pioneer 1 mobile robot interacts with objects in a cluttered laboratory setting. Introduction "There is nothing more basic than categorization to our thought, perception, action, and speech" (Lakoff 1987). For autonomous agents, categories often appear as abstractions of raw sensor readings that provide a means for recognizing circumstances and predicting effects of actions. For example, such categories play an important role for a mobile robot that navigates around obstacles ...
[ 1622, 2225, 2854 ]
Train
1,763
1
Fuzzy Finite-state Automata Can Be Deterministically Encoded into Recurrent Neural Networks There has been an increased interest in combining fuzzy systems with neural networks because fuzzy neural systems merge the advantages of both paradigms. On the one hand, parameters in fuzzy systems have clear physical meanings and rule-based and linguistic information can be incorporated into adaptive fuzzy systems in a systematic way. On the other hand, there exist powerful algorithms for training various neural network models. However, most of the proposed combined architectures are only able to process static input-output relationships, i.e. they are not able to process temporal input sequences of arbitrary length. Fuzzy finite-state automata (FFAs) can model dynamical processes whose current state depends on the current input and previous states. Unlike in the case of deterministic finite-state automata (DFAs), FFAs are not in one particular state, rather each state is occupied to some degree defined by a membership function. Based on previous work on encoding DFAs in discrete-tim...
[ 265 ]
Train
1,764
1
Embodied Evolution: A Response to Challenges in Evolutionary Robotics We introduce Embodied Evolution (EE), a new methodology for conducting evolutionary robotics (ER). Embodied evolution uses a population of physical robots that evolve by reproducing with one another in the task environment. EE addresses several issues identified by researchers in the evolutionary robotics community as problematic for the development of ER. We review results from our first experiments and discuss the advantages and limitations of the EE methodology.
[ 360, 2066 ]
Test
1,765
1
Efficient Learning of Reactive Robot Behaviors with a Neural-Q Learning Approach The purpose of this paper is to propose a Neural-Q_learning approach designed for online learning of simple and reactive robot behaviors. In this approach, the Q_function is generalized by a multi-layer neural network allowing the use of continuous states and actions. The algorithm uses a database of the most recent learning samples to accelerate and guarantee the convergence. Each Neural-Q_learning function represents an independent, reactive and adaptive behavior which maps sensorial states to robot control actions. A group of these behaviors constitutes a reactive control scheme designed to fulfill simple missions. The paper centers on the description of the Neural-Q_learning based behaviors showing their performance with an autonomous underwater vehicle (AUV) in a target following mission. Simulated experiments demonstrate the convergence and stability of the learning system, pointing out its suitability for online robot learning. Advantages and limitations are discussed.
[ 470, 961, 1571 ]
Train
1,766
3
Routing Through Networks with Hierarchical Topology Aggregation Abstract In the future, global networks will consist of a hierarchy of subnetworks called domains. For reasons of both scalability and security, domains will not reveal details of their internal structure to outside nodes. Instead, these domains will advertise only a summary, or aggregated view, of their internal structure, e.g., as proposed by the ATM PNNI standard. This work compares, by simulation, the performance of several different aggregation schemes in terms of network throughput (the fraction of attempted connections that are realized), and network control load (the average number of crankbacks per realized connection.) Our main results are: • Minimum spanning tree is a good aggregation scheme; • Exponential link cost functions perform better than min-hop routing; • Our suggested logarithmic update scheme, which determines when re-aggregation should be computed, can significantly reduce the computational overhead due to re-aggregation with a negligible decrease in performance. 1.
[ 899 ]
Validation
1,767
2
The Overview of Web Search Engines The World Wide Web allows people to share information globally. The amount of information grows without bound. In order to extract information that we are interested in, we need a tool to search the Web. The tool is called a search engine. This survey covers different components of the search engine and how the search engine really works. It provides a background understanding of information retrieval. It discusses different search engines that are commercially available. It investigates how the search engines find information in the Web and how they rank its pages to the given query. Also, the paper provides guidelines for users on how to use search engines.
[ 471, 1120, 1321, 2188, 2503 ]
Test
1,768
0
Agent Oriented Specification for Patient-Scheduling Systems in Hospitals Introduction Patient-scheduling in hospitals is a complex task which requires new computational methods, e.g. market mechanisms and enhanced support by software agents. These demands are addressed by the MedPAge-Project (Medical Path Agents) which covers the development of a multi-agent system for which an agent oriented specification will be presented. Firstly, based on field studies in five German hospitals, the hospital domain is analysed (cf. [9]). In this domain analysis, a generic hospital structure is derived and the relevant co-ordination objects for patient-scheduling are identified. Secondly, hospital specific scheduling problems are discussed. On the foundation of this domain analysis, the architecture of the MedPAge multi-agent-system is developed, taking current agent-oriented methodologies into account. The agents, consisting of an individual schedule and utility function, are modeled and the co-ordination mechanism, determining the agent interactions, is described. F
[ 1759 ]
Validation
1,769
4
Search History for User Support in Information-Seeking Interfaces The research overview described focuses on the design of search history displays to support information seeking (IS). It examines users' IS activities, current and potential use of histories, and building on this theoretical framework, assesses prototype interfaces that integrate these histories into search systems. Preliminary results described indicate search history use in coordinated work, mental model building, and end user IS strategies. Searchers create and use external records of their actions and the corresponding results by writing/typing notes, using copy and paste functions, and making printouts. Recording user actions and results in computerized systems automates this process, and enables the creation of search history displays that support users in their IS. Existing systems provide search history capabilities, however these often do not offer enough flexibility for users. Legal information has been selected as the domain for the research. Keywords History, Information-...
[ 955, 1747 ]
Train
1,770
4
UMLi: The unified modeling language for interactive applications User interfaces (UIs) are essential components of most software systems, and significantly affect the effectiveness of installed applications. In addition, UIs often represent a significant proportion of the code delivered by a development activity. However, despite this, there are no modelling languages and tools that support contract elaboration between UI developers and application developers. The Unified Modeling Language (UML) has been widely accepted by application developers, but not so much by UI designers. For this reason, this paper introduces the notation of the Unified Modelling Language for Interactive Applications (UMLi), that extends UML, to provide greater support for UI design. UI elements elicited in use cases and their scenarios can be used during the design of activities and UI presentations. A diagram notation for modelling user interface presentations is introduced. Activity diagram notation is extended to describe collaboration between interaction and domain objects. Further, a case study using UMLi notation and method is presented.
[ 96, 2851 ]
Train
1,771
3
An Event-Condition-Action Language for XML XML repositories are now a widespread means for storing and exchanging information on the Web. As these repositories become increasingly used in dynamic applications such as e-commerce, there is a rapidly growing need for a mechanism to incorporate reactive functionality in an XML setting. Event-condition-action (ECA) rules are a technology from active databases and are a natural method for supporting such functionality. ECA rules can be used for activities such as automatically enforcing document constraints, maintaining repository statistics, and facilitating publish/subscribe applications. An important question associated with the use of ECA rules is how to statically predict their run-time behaviour. In this paper, we define a language for ECA rules on XML repositories. We then investigate methods for analysing the behaviour of a set of ECA rules, a task which has added complexity in this XML setting compared with conventional active databases. Keywords: Event-condition-action rules, XML, XML repositories, reactive functionality, rule analysis. 1
[ 886, 1519, 1575, 1667, 2421 ]
Train
1,772
4
The Smart Floor: A Mechanism for Natural User Identification and Tracking We have created a system for identifying people based on their footstep force profiles and have tested its accuracy against a large pool of footstep data. This floor system may be used to identify users transparently in their everyday living and working environments. We have created user footstep models based on footstep profile features and have been able to achieve a recognition rate of 93%. We have also shown that the effect of footwear is negligible on recognition accuracy. Keywords Interaction technology, ubiquitous computing, user identification, biometrics, novel input. INTRODUCTION In the Smart Floor project, we have created and validated a system for biometric user identification based on footstep profiles. We have outfitted a floor tile with force measuring sensors and are using the data gathered as users walk over the tile to identify them. We rely on the uniqueness of footstep profiles within a small group of people to provide recognition accuracy similar to other biome...
[ 1854, 2012, 3159 ]
Train
1,773
2
Scalable Association-based Text Classification The Naïve Bayes (NB) classifier has long been considered a core methodology in text classification mainly due to its simplicity and computational efficiency. There is an increasing need however for methods that can achieve higher classification accuracy while maintaining the ability to process large document collections. In this paper we examine text categorization methods from a perspective that considers the tradeoff between accuracy and scalability to large data sets and large feature sizes. We start from the observation that Support Vector Machines, one of the best text categorization methods, cannot scale up to handle the large document collections involved in many real-world problems. We then consider bayesian extensions to NB that achieve higher accuracy by relaxing its strong independence assumptions. Our experimental results show that LB, an association-based lazy classifier can achieve a good tradeoff between high classification accuracy and scalability to large document collections...
[ 2424 ]
Validation
1,774
0
ERIM's Approach to Fine-Grained Agents Traditional software agents, an extension of Artificial Intelligence, seek human-level intelligence in each agent. For over 15 years, inspired by Artificial Life, ERIM has been devising architectures in which useful intelligence emerges at the system level from interactions of fine-grained agents. We have applied such architectures to a wide variety of domains, including business, industrial, and military. This white paper outlines three major principles that characterize our approach. For each we discuss what the principle is, why it is important, and how it works in practical implementations.
[ 755, 933, 1425 ]
Validation
1,775
1
Distributed Value Functions Many interesting problems, such as power grids, network switches, and traffic flow, that are candidates for solving with reinforcement learning (RL), also have properties that make distributed solutions desirable. We propose an algorithm for distributed reinforcement learning based on distributing the representation of the value function across nodes. Each node in the system only has the ability to sense state locally, choose actions locally, and receive reward locally (the goal of the system is to maximize the sum of the rewards over all nodes and over all time). However each node is allowed to give its neighbors the current estimate of its value function for the states it passes through. We present a value function learning rule, using that information, that allows each node to learn a value function that is an estimate of a weighted sum of future rewards for all the nodes in the network. With this representation, each node can choose actions to improve the performance of the overall...
[ 1880 ]
Train
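The update rule sketched in the abstract above — each node moving its estimate toward its local reward plus a weighted sum of its neighbors' current estimates — can be illustrated on a toy network. The three-node chain, weights, and learning rate below are assumptions for illustration, not the paper's experimental setup:

```python
ALPHA, GAMMA = 0.1, 0.9  # learning rate and discount (assumed values)

def distributed_td_step(values, rewards, neighbours, weights):
    """One synchronous update: every node targets its own reward plus a
    discounted, weighted sum of its neighbours' current value estimates."""
    new_values = {}
    for node, v in values.items():
        neigh_sum = sum(weights[node][n] * values[n] for n in neighbours[node])
        target = rewards[node] + GAMMA * neigh_sum
        new_values[node] = v + ALPHA * (target - v)
    return new_values

# three nodes in a line, 0 -- 1 -- 2; each node also weights itself
neighbours = {0: [0, 1], 1: [0, 1, 2], 2: [1, 2]}
weights = {0: {0: 0.5, 1: 0.5},
           1: {0: 1/3, 1: 1/3, 2: 1/3},
           2: {1: 0.5, 2: 0.5}}
rewards = {0: 0.0, 1: 0.0, 2: 1.0}  # only node 2 receives reward
values = {0: 0.0, 1: 0.0, 2: 0.0}

for _ in range(200):
    values = distributed_td_step(values, rewards, neighbours, weights)
```

After repeated updates, reward information at node 2 propagates through the shared estimates, so the unrewarded nodes 0 and 1 still learn positive values — the cooperative effect the abstract describes.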
1,776
4
WebWho: Support for Student Awareness and Coordination The system presented in this paper, WebWho, is a lightweight, value-adding service that relies on readily available server status information, which is refined and visualized in a way that is easily accessible to individuals from any workstation with a web browser.
[ 983, 1712, 2064 ]
Validation
1,777
3
Nonmonotonic Reasoning In LDL++ Deductive database systems have made major advances on efficient support for nonmonotonic reasoning. A first generation of deductive database systems supported the notion of stratification for programs with negation and set aggregates. Stratification is simple to understand and efficient to implement but it is too restrictive; therefore, a second generation of systems seeks efficient support for more powerful semantics based on notions such as well-founded models and stable models. In this respect, a particularly powerful set of constructs is provided by the recently enhanced LDL++ system that supports (i) monotonic user-defined aggregates, (ii) XY-stratified programs, and (iii) the nondeterministic choice constructs under stable model semantics. This integrated set of primitives supports a terse formulation and efficient implementation for complex computations, such as greedy algorithms and data mining functions, yielding levels of expressive power unmatched by other deductive...
[ 2144, 2746 ]
Train
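The stratification notion in the abstract above means negation may only refer to predicates whose fixpoint was fully computed in a lower stratum. A toy two-stratum illustration in Python rather than LDL++ syntax; the edge relation and predicate names are assumptions for illustration:

```python
# Stratum 1: the positive fixpoint reachable(x) over an edge relation.
# Stratum 2: unreachable(x) negates reachable(x) — safe, because it only
# reads the stratum-1 result after that fixpoint is already fixed.
edges = {("a", "b"), ("b", "c")}
nodes = {"a", "b", "c", "d"}

def reachable(src):
    """Least fixpoint of: reachable(src); reachable(y) :- reachable(x), edge(x, y)."""
    seen, frontier = {src}, {src}
    while frontier:
        frontier = {y for (x, y) in edges if x in frontier} - seen
        seen |= frontier
    return seen

r = reachable("a")        # stratum 1: {"a", "b", "c"}
unreachable = nodes - r   # stratum 2: negation over the fixed lower stratum
```

The well-founded and stable-model semantics mentioned in the abstract exist precisely to give meaning to programs where negation and recursion interleave and this clean layering is impossible.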
1,778
4
Three-dimensional PC: toward novel forms of human-computer interaction The ongoing integration of IT systems is offering computer users a wide range of new (networked) services and access to an explosively growing host of multimedia information. Consequently, apart from the currently established computer tasks, the access and exchange of information is generally considered the driving computer application of the next decade. Major advances are required in the development of the human-computer interface, in order to enable end-users with the most varied backgrounds to master the added functionalities of their computers with ease, and to make the use of computers in general an enjoyable experience. Research efforts in computer science are concentrated on user interfaces that support the highly evolved human perception and interaction capabilities better than today's 2D graphic user interfaces with a mouse and keyboard. Thanks to the boosting computing power of general purpose computers, highly interactive interfaces are becoming feasible supporting a...
[ 34, 103, 790, 1454, 1491, 2820 ]
Train
1,779
3
Overlapping Linear Quadtrees and Spatio-Temporal Query Processing Indexing in spatio-temporal databases using the technique of overlapping is investigated. Overlapping has been previously applied in various access methods to combine consecutive structure instances into a single structure, without storing identical sub-structures. In this way, space is saved without sacrificing time performance. A new access method, overlapping linear quadtrees, is introduced. This structure is able to store consecutive historical raster images, a database of evolving images. Moreover, it can be used to support query processing in such a database. Five such spatio-temporal queries, along with the respective algorithms that take advantage of the properties of the new structure, are introduced. The new access method was implemented and extensive experimental studies for space efficiency and query processing performance were conducted. A number of results of these experiments are presented. As far as space is concerned, these results indicate that, in the case of similar consecutive images, considerable storage is saved in comparison to independent linear quadtrees. In the case of query processing, the results indicate that the proposed algorithmic approaches outperform the respective straightforward algorithms, in most cases. The region data sets used in experiments were real images of meteorological satellite views and synthetic random images with specified aggregation
[ 2718, 2939 ]
Train
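A linear quadtree replaces pointer-based tree nodes with sorted locational codes, and overlapping stores a new image version by reusing everything unchanged from the previous one. A minimal sketch of both ideas, using Morton (Z-order) interleaving as the locational code; the encoding details and the delta-dictionary representation are illustrative assumptions, not the paper's exact structures:

```python
def morton_code(x, y, bits=8):
    """Interleave the bits of (x, y) into one Z-order key — the kind of
    locational code a linear quadtree keeps in sorted order."""
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)       # x bits at even positions
        code |= ((y >> i) & 1) << (2 * i + 1)   # y bits at odd positions
    return code

def overlap_versions(prev, curr):
    """Store a new image version as only the codes whose value changed,
    mirroring how overlapping avoids storing identical sub-structures."""
    return {code: val for code, val in curr.items() if prev.get(code) != val}

# two consecutive raster versions keyed by locational code
v1 = {morton_code(0, 0): "land", morton_code(1, 0): "sea"}
v2 = {morton_code(0, 0): "land", morton_code(1, 0): "cloud"}
delta = overlap_versions(v1, v2)  # only the changed cell is stored
```

When consecutive images are similar — the case highlighted in the abstract's experiments — the delta stays small, which is exactly where overlapping saves space over independent linear quadtrees.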