Columns:
- node_id (int64): 0 to 76.9k
- label (int64): 0 to 39
- text (string): lengths 13 to 124k
- neighbors (list): lengths 0 to 3.32k
- mask (string): 4 classes
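The schema above describes one record per graph node: an integer id and class label, the document text, a list of neighbor ids (citation links), and a Train/Validation/Test split marker. As a minimal sketch, the rows could be assembled into an adjacency list and grouped by split as below; the dict-per-row layout and the abbreviated sample rows are stand-ins for illustration, not the dataset's actual storage format.

```python
from collections import defaultdict

# Abbreviated stand-ins for three rows from this chunk (texts truncated).
rows = [
    {"node_id": 1180, "label": 1, "text": "Human Performance on Clustering Web Pages ...",
     "neighbors": [323, 2532, 2971, 3077], "mask": "Train"},
    {"node_id": 1186, "label": 4, "text": "Realtime Personal Positioning System ...",
     "neighbors": [664, 1598, 1757, 2656, 2799], "mask": "Validation"},
    {"node_id": 1187, "label": 3, "text": "Efficient Use of Signatures ...",
     "neighbors": [1441, 1803], "mask": "Test"},
]

# Build an undirected adjacency list from the citation links, and
# collect node ids by their Train/Validation/Test split marker.
adjacency = defaultdict(set)
splits = defaultdict(list)
for row in rows:
    for nbr in row["neighbors"]:
        adjacency[row["node_id"]].add(nbr)
        adjacency[nbr].add(row["node_id"])  # treat citation links as symmetric
    splits[row["mask"]].append(row["node_id"])

print(sorted(splits))        # ['Test', 'Train', 'Validation']
print(len(adjacency[1180]))  # 4
```

Treating the links as symmetric is a common simplification for citation graphs; a directed variant would add only the forward edge.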
1,180
1
Human Performance on Clustering Web Pages: A Preliminary Study With the increase in information on the World Wide Web, it has become difficult to quickly find desired information without using multiple queries or using a topic-specific search engine. One way to help in the search is by grouping HTML pages together that appear in some way to be related. In order to better understand this task, we performed an initial study of human clustering of web pages, in the hope that it would provide some insight into the difficulty of automating this task. Our results show that subjects did not cluster identically; in fact, on average, any two subjects had little similarity in their web-page clusters. We also found that subjects generally created rather small clusters, and those with access only to URLs created fewer clusters than those with access to the full text of each web page. Generally the overlap of documents between clusters for any given subject increased when given the full text, as did the percentage of documents clustered. When analyzing individual subjects, we found that each had different behavior across queries, in terms of overlap, cluster size, and number of clusters. These results provide a sobering note on any quest for a single clearly correct clustering method for web pages.
[ 323, 2532, 2971, 3077 ]
Train
1,181
3
Integrating Light-Weight Workflow Management Systems within Existing Business Environments Workflow management systems support the efficient, largely automated execution of business processes. However, using a workflow management system typically requires implementing the application's control flow exclusively by the workflow management system. This approach is powerful if the control flow is specified and implemented from scratch, but it has severe drawbacks if a workflow management system is to be integrated within environments with existing solutions for implementing control flow. Usually, the existing solutions are too complex to be substituted by the workflow management system at once. Hence, the workflow management system must support an incremental integration, i.e. the reuse of existing implementations of control flow as well as their incremental substitution. Extending the workflow management system's functionality according to future application needs, e.g. by worklist and history management, must also be possible. In particular, at the beginning of...
[ 677, 2254 ]
Train
1,182
3
Efficient and Extensible Algorithms for Multi Query Optimization Complex queries are becoming commonplace, with the growing use of decision support systems. These complex queries often have a lot of common sub-expressions, either within a single query, or across multiple such queries run as a batch. Multi-query optimization aims at exploiting common subexpressions to reduce evaluation cost. Multi-query optimization has hitherto been viewed as impractical, since earlier algorithms were exhaustive and explored a doubly exponential search space. In this paper we demonstrate that multi-query optimization using heuristics is practical, and provides significant benefits. We propose three cost-based heuristic algorithms: Volcano-SH and Volcano-RU, which are based on simple modifications to the Volcano search strategy, and a greedy heuristic. Our greedy heuristic incorporates novel optimizations that improve efficiency greatly. Our algorithms are designed to be easily added to existing optimizers. We present a performance study comparing the algo...
[ 1701, 2139, 2993 ]
Train
1,183
3
Query Optimization in Kess - An Ontology-Based KBMS This paper presents an approach for the implementation of query optimization techniques in Kess (the Knowledge Enhanced SQL Server). Kess is a Knowledge Base Management System (KBMS) that uses a semantic ontology-based data model. We have classified our query optimization techniques in two different categories: 1. semantic-based and 2. data access path related. These techniques use a compiler optimization approach to simplify query predicates and use caching to optimize memory hierarchy performance for the evaluation of ontology-related predicates. This work also presents the results of the implementation of such optimizations in Kess in the form of a performance analysis. 1 Introduction This paper presents an approach for the implementation of query optimization techniques in Kess (the Knowledge Enhanced SQL Server). Its main novel feature, when compared to conventional SQL servers, is the capability to store knowledge and to use it in a manner that is analogous to deducti...
[ 2139, 2746 ]
Train
1,184
3
Efficiently Ordering Query Plans for Data Integration interface to a multitude of data sources. Given a user query formulated in this interface, the system translates it into a set of query plans. Each plan is a query formulated over the data sources, and specifies a way to access sources and combine data to answer the user query.
[ 1536, 2022 ]
Train
1,185
3
Computing Extended Abduction Through Transaction Programs In this paper, we propose a computational mechanism for extended abduction. When a background theory is written in a normal logic program, we introduce its transaction program for computing extended abduction. A transaction program is a set of non-deterministic production rules that declaratively specify addition and deletion of abductive hypotheses. Abductive explanations are then computed by the fixpoint of a transaction program using a bottom-up model generation procedure. The correctness of the proposed procedure is shown for the class of acyclic covered abductive logic programs. In the context of deductive databases, a transaction program provides a declarative specification of database update. Keywords: abduction, nonmonotonic reasoning, database update 1. Introduction 1.1. Motivation and background Abduction is inference to the best explanation, and has recently been recognized as a very important form of reasoning in both AI [2
[ 872 ]
Train
1,186
4
Realtime Personal Positioning System for Wearable Computers Context awareness is an important functionality for wearable computers. In particular, the computer should know where the person is in the environment. This paper proposes an image sequence matching technique for the recognition of locations and previously visited places. As in single word recognition in speech recognition, a dynamic programming algorithm is proposed for the calculation of the similarity of different locations. The system runs on a stand-alone wearable computer such as a Libretto PC. Using a training sequence, a dictionary of locations is created automatically. These locations are then recognized by the system in realtime using a hat-mounted camera. 1. Introduction Obtaining user location is one of the important functions for wearable computers in two applications. One is automatic self-summary, and the other is context-aware user interface. In self-summary, the user is wearing a small camera and a small computer, capturing and recording every event of his/her daily ...
[ 664, 1598, 1757, 2656, 2799 ]
Validation
1,187
3
Efficient Use of Signatures in Object-Oriented Database Systems . Signatures are bit strings, which are generated by applying some hash function on some or all of the attributes of an object. The signatures of the objects can be stored separately from the objects themselves, and can later be used to filter out candidate objects during perfect match queries. In an object-oriented database system (OODB) using logical OIDs, an object identifier index (OIDX) is needed to map from logical OID to the physical location of the object. In this report we show how the signatures can be stored in the OIDX, and used to reduce the average object access cost in a system. We also extend this approach to transaction time temporal OODBs (TOODB), where this approach is even more beneficial, because maintaining signatures comes virtually for free. We develop a cost model that we use to analyze the performance of the proposed approaches, and this analysis shows that substantial gain can be achieved. Keywords: Signatures, object-oriented database systems, temporal objec...
[ 1441, 1803 ]
Test
1,188
3
Form-Based Proxy Caching for Database-Backed Web Sites Web caching proxy servers are essential for improving web performance and scalability, and recent research has focused on making proxy caching work for database-backed web sites. In this paper, we explore a new proxy caching framework that exploits the query semantics of HTML forms. We identify two common classes of form-based queries from real-world database-backed web sites, namely, keyword-based queries and function-embedded queries. Using typical examples of these queries, we study two representative caching schemes within our framework: (i) traditional passive query caching, and (ii) active query caching, in which the proxy cache can service a request by evaluating a query over the contents of the cache. Results from our experimental implementation show that our form-based proxy is a general and flexible approach that efficiently enables active caching schemes for database-backed web sites. Furthermore, handling query containment at the proxy yields significant performance advantages over passive query caching, but extending the power of the active cache to do full semantic caching appears to be less generally effective.
[]
Train
1,189
2
Evaluation of Item-Based Top-N Recommendation Algorithms The explosive growth of the world-wide-web and the emergence of e-commerce has led to the development of recommender systems---a personalized information filtering technology used to identify a set of N items that will be of interest to a certain user. User-based Collaborative filtering is the most successful technology for building recommender systems to date, and is extensively used in many commercial recommender systems. Unfortunately, the computational complexity of these methods grows linearly with the number of customers that in typical commercial applications can grow to be several millions. To address these scalability concerns item-based recommendation techniques have been developed that analyze the user-item matrix to identify relations between the different items, and use these relations to compute the list of recommendations. In this paper we present one such class of item-based recommendation algorithms that first determine the similarities between the various ite...
[ 1161, 3003 ]
Validation
1,190
5
Some Experiments with a Hybrid Model for Learning Sequential Decision Making To deal with sequential decision tasks, we present a learning model Clarion, which is a hybrid connectionist model consisting of both localist and distributed representations, based on the two-level approach proposed in Sun (1995). The model learns and utilizes procedural and declarative knowledge, tapping into the synergy of the two types of processes. It unifies neural, reinforcement, and symbolic methods to perform on-line, bottom-up learning. Experiments in various situations are reported that shed light on the working of the model. 1 Introduction This paper presents a hybrid model that unifies neural, symbolic, and reinforcement learning into an integrated architecture. It addresses the following three issues: (1) It deals with concurrent on-line learning: It allows a situated agent to learn continuously from on-going experience in the world, without the use of preconstructed data sets or preconceived concepts. (2) The model learns not only low-level procedural skills but also hi...
[ 812, 1898, 2997 ]
Train
1,191
1
Genetic and Evolutionary Algorithms in the Real World Introduction Since 1992, I have made regular trips to Japan to give talks about genetic algorithms (GAs)---search procedures based on the mechanics of natural selection and genetics. Back during my first visit, the use of genetic and evolutionary algorithms (GEAs) was restricted to a relatively small cadre of devoted specialists. Today, Japanese researchers and practitioners are ably advancing the state of GEA art and application across a broad front. Around the globe, from traditional and cutting-edge optimization in engineering and operations research to such non-traditional areas as drug design, financial prediction, data mining, and the composition of poetry and music, GEAs are grabbing attention and solving problems across a broad spectrum of human endeavor. Of course, science and technology go through fads and fashions much like those of apparel, food, and toys, and many practitioners---in Japan and elsewhere---are wondering whether GEAs, like so many methods that have c
[ 1830, 1848 ]
Train
1,192
0
Computational Logic and Multi-Agent Systems: a Roadmap Agent-based computing is an emerging computing paradigm that has proved extremely successful in dealing with a number of problems arising from new technological developments and applications. In this paper we report the role of computational logic in modeling intelligent agents, by analysing existing agent theories, agent-oriented programming languages and applications, as well as identifying challenges and promising directions for future research. 1 Introduction In the past ten years the field of agent-based computing has emerged and greatly expanded, due to new technological developments such as ever faster and cheaper computers, fast and reliable interconnections between them as well as the emergence of the world wide web. These developments have at the same time opened new application areas, such as electronic commerce, and posed new problems, such as that of integrating great quantities of information and building complex software, embedding legacy code. The establishment o...
[ 223, 622, 916, 1143, 1325, 1458, 1617, 1625, 2144, 2307, 2442, 2605, 3104 ]
Validation
1,193
2
The Decor Toolbox For Workflow-Embedded Organizational Memory Access : We briefly motivate the idea of business-process oriented knowledge management (BPOKM) and sketch the basic approaches to achieve this goal. Then we describe the DECOR (Delivery of context-sensitive organisational knowledge) project which develops, tests, and consolidates new methods and tools for BPOKM. DECOR builds upon the KnowMore framework (Abecker et al 1998; Abecker et al 2000) for organizational memories (OM), but tries to overcome some limitations of this approach. In the DECOR project, three end-user environments serve as test-beds for validation and iterative improvement of innovative approaches to build: (1) knowledge archives organised around formal representations of business processes to facilitate navigation and access, (2) active information delivery services which - in collaboration with a workflow tool to support weakly-structured knowledge-intensive work - offer the user in a context-sensitive manner helpful information from the knowledge archive, and (3...
[ 1226 ]
Train
1,194
2
A Reinforcement Learning Agent for Personalized Information Filtering This paper describes a method for learning user's interests in the Web-based personalized information filtering system called WAIR. The proposed method analyzes user's reactions to the presented documents and learns from them the profiles for the individual users. Reinforcement learning is used to adapt the term weights in the user profile so that user's preferences are best represented. In contrast to conventional relevance feedback methods which require explicit user feedbacks, our approach learns user preferences implicitly from direct observations of user behaviors during interaction. Field tests have been made which involved 7 users reading a total of 7,700 HTML documents during 4 weeks. The proposed method showed superior performance in personalized information filtering compared to the existing relevance feedback methods.
[ 2645, 2912 ]
Validation
1,195
3
Empirically Evaluating an Adaptable Spoken Dialogue System Recent technological advances have made it possible to build real-time, interactive spoken dialogue systems for a wide variety of applications. However, when users do not respect the limitations of such systems, performance typically degrades. Although users differ with respect to their knowledge of system limitations, and although different dialogue strategies make system limitations more apparent to users, most current systems do not try to improve performance by adapting dialogue behavior to individual users. This paper presents an empirical evaluation of TOOT, an adaptable spoken dialogue system for retrieving train schedules on the web. We conduct an experiment in which 20 users carry out 4 tasks with both adaptable and non-adaptable versions of TOOT, resulting in a corpus of 80 dialogues. The values for a wide range of evaluation measures are then extracted from this corpus. Our results show that adaptable TOOT generally outperforms non-adaptable TOOT, and that the utility of adaptation depends on TOOT's initial dialogue strategies.
[]
Train
1,196
3
QoS Management in Web-based Real-Time Data Services The demand for real-time data services has been increasing recently. Many e-commerce applications and web-based information services are becoming very sophisticated in their data needs. They span the spectrum from low level status data, e.g., stock prices, to high level aggregated data, e.g., recommended selling/buying point. In these applications, it is desired to process user requests within their deadlines using fresh data, which reflect the current market status. Current web-based data services are poor at processing user requests in a timely manner using fresh data. To address this problem, we present a framework for guaranteed real-time data services in unpredictable environments. We also introduce a possible application of our approach in the distributed environment. 1
[ 2060 ]
Train
1,197
1
Simulating the Evolution of 2D Pattern Recognition on the CAM-Brain Machine, an Evolvable Hardware Tool for Building a 75 Million Neuron Artificial Brain This paper presents some simulation results of the evolution of 2D visual pattern recognizers to be implemented very shortly on real hardware, namely the "CAM-Brain Machine" (CBM), an FPGA based piece of evolvable hardware which implements a genetic algorithm (GA) to evolve a 3D cellular automata (CA) based neural network circuit module, of approximately 1,000 neurons, in about a second, i.e. a complete run of a GA, with 10,000s of circuit growths and performance evaluations. Up to 65,000 of these modules, each of which is evolved with a humanly specified function, can be downloaded into a large RAM space, and interconnected according to humanly specified artificial brain architectures. This RAM, containing an artificial brain with up to 75 million neurons, is then updated by the CBM at a rate of 130 billion CA cells per second. Such speeds will enable real time control of robots and hopefully the birth of a new research field that we call "brain building". The first such artif...
[ 2767 ]
Train
1,198
5
A New Research Tool for Intrinsic Hardware Evolution . The study of intrinsic hardware evolution relies heavily on commercial FPGA devices which can be configured in real time to produce physical electronic circuits. Use of these devices presents certain drawbacks to the researcher desirous of studying fundamental principles underlying hardware evolution, since he has no control over the architecture or type of basic configurable element. Furthermore, analysis of evolved circuits is difficult as only external pins of FPGAs are accessible to test equipment. After discussing current issues arising in intrinsic hardware evolution, this paper presents a new test platform designed specifically to tackle them, together with experimental results exemplifying its use. The results include the first circuits to be evolved intrinsically at the transistor level. 1 Introduction In recent years, evolutionary algorithms (EAs) have been applied to the design of electronic circuitry with significant results being attained using both computer simulations...
[ 3171 ]
Train
1,199
1
Genetic Algorithms Based Systems For Conceptual Engineering Design In this paper we try to integrate methods of preferences and scenarios with Genetic Algorithms used to perform multi-objective optimisation. The goal is to make a system that will be able to work together with the designer during the conceptual design phase, where interaction and designer knowledge are sometimes more important than accuracy. [Figure: optimisation module, constraint handling module, fuzzy rules handling module]
[ 2824 ]
Train
1,200
3
TEMPOS: A Temporal Database Model Seamlessly Extending ODMG This paper presents Tempos, a set of models and languages intended to seamlessly extend the ODMG object database standard with temporal functionalities. The proposed models exploit object-oriented technology to meet some important, yet traditionally neglected design criteria, related to legacy code migration and representation independence. Tempos has been fully formalized both at the syntactical and the semantical level and implemented on top of the O2 DBMS. Its suitability in regard to applications' requirements has been validated through concrete case studies from various contexts. Keywords: temporal databases, temporal data models, temporal query languages, time representation, upward compatibility, object-oriented databases, ODMG. Résumé (translated from French): This document presents Tempos: a set of models and languages designed to extend the ODMG object database standard with temporal functionalities. The models described exploit the possibilities of the tech...
[ 131, 1255, 2606 ]
Train
1,201
2
Finding Related Pages in the World Wide Web When using traditional search engines, users have to formulate queries to describe their information need. This paper discusses a different approach to web searching where the input to the search process is not a set of query terms, but instead is the URL of a page, and the output is a set of related web pages. A related web page is one that addresses the same topic as the original page. For example, www.washingtonpost.com is a page related to www.nytimes.com, since both are online newspapers. We describe two algorithms to identify related web pages. These algorithms use only the connectivity information in the web (i.e., the links between pages) and not the content of pages or usage information. We have implemented both algorithms and measured their runtime performance. To evaluate the effectiveness of our algorithms, we performed a user study comparing our algorithms with Netscape's "What's Related" service [12]. Our study showed that the precision at 10 for our two algorithms is 73% better and 51% better than that of Netscape, despite the fact that Netscape uses both content and usage pattern information in addition to connectivity information.
[ 124, 507, 702, 843, 1307, 1815, 1838, 1976, 1983, 2106, 2386, 2433, 2437, 2459, 2471, 2503, 2565, 2705, 3090 ]
Train
1,202
2
A Search Engine for 3D Models As the number of 3D models available on the Web grows, there is an increasing need for a search engine to help people find them. Unfortunately, traditional text-based search techniques are not always effective for 3D data. In this paper, we investigate new shape-based search methods. The key challenges are to develop query methods simple enough for novice users and matching algorithms robust enough to work for arbitrary polygonal models. We present a web-based search engine system that supports queries based on 3D sketches, 2D sketches, 3D models, and/or text keywords. For the shape-based queries, we have developed a new matching algorithm that uses spherical harmonics to compute discriminating similarity measures without requiring repair of model degeneracies or alignment of orientations. It provides 46--245% better performance than related shape matching methods during precision-recall experiments, and it is fast enough to return query results from a repository of 20,000 models in under a second. The net result is a growing interactive index of 3D models available on the Web (i.e., a Google for 3D models).
[ 2503, 2523 ]
Train
1,203
1
Redundancy and Inconsistency Detection in Large and Semi-structured Case Bases With the dramatic proliferation of case based reasoning systems in commercial applications, many case bases are now becoming legacy systems. They represent a significant portion of an organization's assets, but they are large and difficult to maintain. One of the contributing factors is that these case bases are often large and yet unstructured or semi-structured; they are represented in natural language text. Adding to the complexity is the fact that the case bases are often authored and updated by different people from a variety of knowledge sources, making it highly likely for a case base to contain redundant and inconsistent knowledge. In this paper, we present methods and a system for maintaining large and semi-structured case bases. We focus on two difficult problems in case-base maintenance: redundancy and inconsistency detection. These two problems are particularly pervasive when one deals with a semi-structured case base. We will discuss both algorithms and a system for solvi...
[ 1844 ]
Test
1,204
4
Verifying Sequential Consistency on Shared-Memory Multiprocessors by Model Checking The memory model of a shared-memory multiprocessor is a contract between the designer and programmer of the multiprocessor. The sequential consistency memory model specifies a total order among the memory (read and write) events performed at each processor. A trace of a memory system satisfies sequential consistency if there exists a total order of all memory events in the trace that is both consistent with the total order at each processor and has the property that every read event to a location returns the value of the last write to that location. Descriptions of shared-memory systems are typically parameterized by the number of processors, the number of memory locations, and the number of data values. It has been shown that even for finite parameter values, verifying sequential consistency on general shared-memory systems is undecidable. We observe that, in practice, shared-memory systems satisfy the properties of causality and data independence. Causality is the property that values of read events flow from values of write events. Data independence is the property that all traces can be generated by renaming data values from traces where the written values are distinct from each other. If a causal and data independent system also has the property that the logical order of write events to each location is identical to their temporal order, then sequential consistency can be verified algorithmically. Specifically, we present a model checking algorithm to verify sequential consistency on such systems for a finite number of processors and memory locations and an arbitrary number of data values. 1
[]
Train
1,205
0
Secure Communication for Secure Agent-Based Electronic Commerce Applications Although electronic commerce is a relatively new concept, it has already become a normal aspect of our daily life. The software agent technology is also relatively new. In the area of electronic commerce, software agents could be used for example to search for the lowest prices and the best services, to buy goods on behalf of a user, etc. These applications involve a number of security issues, such as communications security, system security, and application security, that have to be solved. This paper describes how communications security is added to a lightweight agent framework. Secure agent-based electronic commerce applications require communications security services. Adding these services is a first basis and an important enabler for the framework in order to be used for secure electronic commerce applications.
[ 2260 ]
Validation
1,206
1
Evolving Rule-Based Trading Systems In this study, a market trading rulebase is optimised using genetic programming (GP). The rulebase is comprised of simple relationships between technical indicators, and generates signals to buy, sell short, and remain inactive. The methodology is applied to prediction of the Standard & Poor's composite index (02-Jan-1990 to 18-Oct-2001). Two potential market systems are inferred: a simple system using few rules and nodes, and a more complex system. Results are compared with a benchmark buy-and-hold strategy. Neither trading system was found capable of consistently outperforming this benchmark. More complicated rulebases, in addition to being difficult to understand, are susceptible to overfitting.
[ 2743 ]
Validation
1,207
4
Towards a Network File System for Roaming Users Pervasive computing aims at offering access to user's data, anytime, anywhere, in a transparent manner. However, realizing such a vision necessitates several improvements in the way servers and user's terminals interact. Most notably, client terminals should not tightly rely on an information server which can be temporarily unavailable in a mobile situation. They should rather exploit all information servers available in a given context through loose coupling with both centralized servers and groups of terminals, in a serverless manner. In this position paper, we present the design rationale of a network file system that implements transparent adaptive file access according to the users' specific situations (e.g. device in use, network connectivity, etc).
[ 638, 1505 ]
Train
1,208
5
A Probabilistic Framework for Memory-Based Reasoning In this paper, we propose a probabilistic framework for Memory-Based Reasoning (MBR). The framework allows us to clarify the technical merits and limitations of several recently published MBR methods and to design new variants. The proposed computational framework consists of three components: a specification language to define an adaptive notion of relevant context for a query; mechanisms for retrieving this context; and local learning procedures that are used to induce the desired action from this context. Based on the framework we derive several analytical and empirical results that shed light on MBR algorithms. We introduce the notion of an MBR transform, and discuss its utility for learning algorithms. We also provide several perspectives on memory-based reasoning from a multi-disciplinary point of view. 1 Introduction Reasoning can be broadly defined as the task of deciding what action to perform in a particular state or in response to a given query. Actions can range from admit...
[ 209 ]
Train
1,209
0
Gongeroos'99 Team This article presents Gongeroos'99 approach to the Robocup simulator league challenge.
[ 2364 ]
Train
1,210
0
Toward Generalized Organizationally Contexted Agent Control Generalized domain-independent approaches to agent control enable control components to be used for a wide variety of applications. This abstraction from the domain context implies that contextual behavior is not possible or that it requires violation of the domain-independent objective. We discuss how context is used in the generalized framework and our current focus on the addition of organizational context in agent control. 1 Introduction From the vantage point of a long history of research in agents and agent control components for building distributed AI and multi-agent systems, we have focused our recent efforts on approaching agent control from a generalized domainindependent perspective. In implementation terms, the objective is to develop a set of agent control components that can be bundled with domain problem solvers or legacy applications to create agents that can meet real-time deadlines (and real-resource constraints) and coordinate activities with other agents. This pap...
[ 1107, 2321, 3071 ]
Test
1,211
2
Managing Change on the Web Increasingly, digital libraries are being defined that collect pointers to World-Wide Web based resources rather than hold the resources themselves. Maintaining these collections is challenging due to distributed document ownership and high fluidity. Typically a collection's maintainer has to assess the relevance of changes with little system aid. In this paper, we describe the Walden's Paths Path Manager, which assists a maintainer in discovering when relevant changes occur to linked resources. The approach and system design was informed by a study of how humans perceive changes of Web pages. The study indicated that structural changes are key in determining the overall change and that presentation changes are considered irrelevant. Categories and Subject Descriptors I.3.7 [Digital Libraries]: User issues; H.5.4 [Hypertext/Hypermedia]: Other (maintenance) General Terms Algorithms, Management, Design, Reliability, Experimentation, Human Factors, Verification. Keywords Walden's Paths, Path Maintenance. 1.
[ 585 ]
Train
1,212
0
Knowledge Base Support For Design And Synthesis Of Multiagent Systems
[ 1480, 2025 ]
Train
1,213
5
Tumor Detection in Colonoscopic Images Using Hybrid Methods for on-Line Neural Network Training In this paper the effectiveness of a new Hybrid Evolutionary Algorithm in on-line Neural Network training for tumor detection is investigated. To this end, a Lamarck-inspired combination of Evolutionary Algorithms and Stochastic Gradient Descent is proposed. The Evolutionary Algorithm works on the termination point of the Stochastic Gradient Descent. Thus, the method consists of a Stochastic Gradient Descent-based on-line training stage and an Evolutionary Algorithm-based retraining stage. On-line training is considered eminently suitable for large (or even redundant) training sets and/or networks; it also helps escape local minima and provides a more natural approach for learning nonstationary tasks. Furthermore, the notion of retraining aids the hybrid method to exhibit reliable and stable performance, and increases the generalization capability of the trained neural network. Experimental results suggest that the proposed hybrid strategy is capable of training on-line, efficiently and effectively. Here, an artificial neural network architecture has been successfully used for detecting abnormalities in colonoscopic video images.
[ 2070, 2647 ]
Train
1,214
3
External Memory Algorithms and Data Structures Data sets in large applications are often too massive to fit completely inside the computer's internal memory. The resulting input/output communication (or I/O) between fast internal memory and slower external memory (such as disks) can be a major performance bottleneck. In this paper, we survey the state of the art in the design and analysis of external memory algorithms and data structures (which are sometimes referred to as "EM" or "I/O" or "out-of-core" algorithms and data structures). EM algorithms and data structures are often designed and analyzed using the parallel disk model (PDM). The three machine-independent measures of performance in PDM are the number of I/O operations, the CPU time, and the amount of disk space. PDM allows for multiple disks (or disk arrays) and parallel CPUs, and it can be generalized to handle tertiary storage and hierarchical memory. We discuss several important paradigms for how to solve batched and online problems efficiently in external memory. Programming tools and environments are available for simplifying the programming task. The TPIE system (Transparent Parallel I/O programming Environment) is both easy to use and efficient in terms of execution speed. We report on some experiments using TPIE in the domain of spatial databases. The newly developed EM algorithms and data structures that incorporate the paradigms we discuss are significantly faster than methods currently used in practice.
[ 2271, 2516, 3059 ]
Train
1,215
2
Using Text Classifiers for Numerical Classification Consider a supervised learning problem in which examples contain both numerical- and text-valued features. To use traditional feature-vector-based learning methods, one could treat the presence or absence of a word as a Boolean feature and use these binary-valued features together with the numerical features. However, the use of a text-classification system on this is a bit more problematic --- in the most straight-forward approach each number would be considered a distinct token and treated as a word. This paper presents an alternative approach for the use of text classification methods for supervised learning problems with numerical-valued features in which the numerical features are converted into bag-of-words features, thereby making them directly usable by text classification methods. We show that even on purely numerical-valued data the results of text-classification on the derived text-like representation outperforms the more naive numbers-as-tokens representation and, more importantly, is competitive with mature numerical classification methods such as C4.5 and Ripper. 1
[ 739, 1446, 1564 ]
Train
1,216
0
Hierarchical Agent Interface for Animation Asynchronous, Hierarchical Agents (AHAs) provide a vertically structured multilevel abstraction hierarchy. In this paper, we argue that this multilevel hierarchy is a convenient way to create a human-agent interface at multiple levels of abstraction. In this way, the agent has several layers of specification (input) and visualization (output) which facilitates users with problem solving, because such an interface parallels the hierarchical and iterative nature of human creative thought processes. The AHA interface presents an intuitive, intimate interface which supports interactions on a scale from direct manipulation to delegation, depending on the user's choice. Another feature of this interface is its two modes of interaction: direct device interaction (mouse clicking) and interpretive, command line or scripting mode. This way, agents can be "forced" to perform certain activities via mouse clicks (direct control), or they can be programmed via scripts on the fly. We present example...
[ 273, 782 ]
Train
1,217
3
Answering queries using views with arithmetic comparisons We consider the problem of answering queries using views, where queries and views are conjunctive queries with arithmetic comparisons (CQACs) over dense orders. Previous work only considered limited variants of this problem, without giving a complete solution. We have developed a novel algorithm to obtain maximally-contained rewritings (MCRs) for queries having left (or right) semi-interval-comparison predicates. For semi-interval queries, we show that the language of finite unions of CQAC rewritings is not sufficient to find a maximally-contained solution, and identify cases where datalog is sufficient. Finally, we show that it is decidable to obtain equivalent rewritings for CQAC queries.
[ 2080 ]
Train
1,218
4
Partial Replication in the Vesta Software Repository The Vesta repository is a special-purpose replicated file system, developed as part of the Vesta software configuration management system. One of the major goals of Vesta is to make all software builds reproducible. To this end, the repository provides an append-only name space; new names can be inserted, but once a name exists, its meaning cannot change. More concretely, all files and some designated directories are immutable, while the remaining directories are appendable, allowing new names to be defined but not allowing existing names to be redefined. The data stored
[ 1860 ]
Train
1,219
3
Visual Specification of Queries for Finding Patterns in Time-Series Data Widespread interest in discovering features and trends in time-series has generated a need for tools that support interactive exploration. This paper introduces timeboxes: a powerful graphical, direct-manipulation metaphor for the specification of queries over time-series datasets. Our TimeFinder implementation of timeboxes supports interactive formulation and modification of queries, thus speeding the process of exploring time-series data sets and guiding data mining. TimeFinder includes windows for timebox queries, individual time-series, and details-on-demand. Other features include drag-and-drop support for query-by-example and graphical envelopes for displaying the extent of the entire data set and result set from a given query. Extensions involving increased expressive power and general temporal data sets are discussed.
[ 1011 ]
Test
1,220
4
FootNotes: Personal Reflections on the Development of Instrumented Dance Shoes and their Musical Applications This paper describes experiences in designing and developing an extremely versatile, multimodal sensor interface built entirely into a pair of shoes. I discuss the system design, trace its motivations and goals, then describe its applications in interactive music for dance performance, summarizing lessons learned and future possibilities. ____________________________________ 1) The Inspiration Although the idea of instrumenting shoes for interactive music performance had crossed my mind before, the moment at which I decided to pursue this project can be traced to a demonstration that I attended with my Media Lab colleague Tod Machover in November of 1996. We were visiting some of our research sponsors and colleagues at a Yamaha development laboratory in the Shinjuku section of Tokyo, where they showed us the latest version of their Miburi musical controller [1]. The Miburi is an electronic vest, with bend sensors at various joints to monitor dynamic articulation and a pair of ha...
[ 1052, 2404 ]
Test
1,221
0
Hierarchical Optimization of Policy-Coupled Semi-Markov Decision Processes One general strategy for approximately solving large Markov decision processes is "divide-and-conquer": the original problem is decomposed into sub-problems which interact with each other, but yet can be solved independently by taking into account the nature of the interaction. In this paper we focus on a class of "policy-coupled" semi-Markov decision processes (SMDPs), which arise in many nonstationary real-world multi-agent tasks, such as manufacturing and robotics. The nature of the interaction among sub-problems (agents) is more subtle than that studied previously: the components of a sub-SMDP, namely the available states and actions, transition probabilities and rewards, depend on the policies used in solving the "neighboring" sub-SMDPs. This "strongly-coupled" interaction among subproblems causes the approach of solving each sub-SMDP in parallel to fail. We present a novel approach whereby many variants of each sub-SMDP are solved, explicitly taking into account the different mod...
[ 1130, 3012, 3147 ]
Test
1,222
3
Reasoning over Conceptual Schemas and Queries in Temporal Databases This paper introduces a new logical formalism, intended for temporal conceptual modelling, as a natural combination of the well-known description logic DLR and point-based linear temporal logic with Since and Until. The expressive power of the resulting DLRUS logic is illustrated by providing a systematic formalisation of the most important temporal entity-relationship data models appeared in the literature. We define a query language (where queries are nonrecursive Datalog programs and atoms are complex DLRUS expressions) and investigate the problem of checking query containment under the constraints defined by DLRUS conceptual schemas, as well as the problems of schema satisfiability and logical implication. Although it is shown that reasoning in full DLRUS is undecidable, we identify the decidable (in a sense, maximal) fragment DLR US by allowing applications of temporal operators to formulas and entities only (but not to relation expressions). We obtain the following hierarchy of complexity results: (a) reasoning in DLR US with atomic formulas is EXPTIME-complete, (b) satisfiability and logical implication of arbitrary DLR US formulas is EXPSPACE-complete, and (c) the problem of checking query containment of non-recursive Datalog queries under DLR US constraints is decidable in 2EXPTIME.
[ 108, 262, 2289, 2789, 2802 ]
Train
1,223
0
Automated Servicing of Agents Agents need to be able to adapt to changes in their environment. One way to achieve this, is to service agents when needed. A separate servicing facility, an agent factory, is capable of automatically modifying agents. This paper discusses the feasibility of automated servicing.
[ 2313, 2593 ]
Train
1,224
3
Fine-Granularity Signature Caching in Object Database Systems In many of the emerging application areas for database systems, data is viewed as a collection of objects, the access pattern is navigational, and a large fraction of the accesses are perfect match accesses/queries on one or more words in text strings in the objects in these databases. A typical example of such an application area is XML/Web storage. In order to reduce the object access cost, signature files can be used. However, traditional signature file maintenance is costly; a low update rate and high query selectivity are needed to make the maintenance and use of signatures beneficial. In this report, we present the SigCache approach. Instead of storing the signatures in separate signature files, the signatures are stored together with their objects. In addition, the most frequently accessed signatures are stored in a main memory signature cache (SigCache). Because the signatures are much smaller than the objects, the increase in update cost is not s...
[ 416 ]
Validation
1,225
0
XML Dataspaces for Mobile Agent Coordination This paper presents XMARS, a programmable coordination architecture for Internet applications based on mobile agents. In XMARS, agents coordinate -- both with each other and with their current execution environment -- through programmable XML dataspaces, accessed by agents in a Linda-like fashion. This suits very well the characteristics of the Internet environment: on the one hand, it offers all the advantages of XML in terms of interoperability and standard representation of information; on the other hand, it enforces open and uncoupled interactions, as required by the dynamicity of the environment and by the mobility of the application components. In addition, coordination in XMARS is made more flexible and secure by the capability of programming the behaviour of the coordination media in reaction to the agents' accesses. An application example related to the management of on-line academic courses shows the suitability and the effectiveness of the XMARS architecture. Ke...
[ 580, 2693 ]
Test
1,226
3
Knowledge Management through Ontologies Most enterprises agree that knowledge is an essential asset for success and survival on an increasingly competitive and global market. This awareness is one of the main reasons for the exponential growth of knowledge management in the past decade. Our approach to knowledge management is based on ontologies, and makes knowledge assets intelligently accessible to people in organizations. Most company-vital knowledge resides in the heads of people, and thus successful knowledge management considers not only technical aspects, but also social ones. In this paper, we describe an approach to intelligent knowledge management that explicitly takes into account the social issues involved. The proof of concept is given by a large-scale initiative involving knowledge management of a virtual organization. 1 Introduction According to Information Week (Angus et al., 1998) "the business problem that knowledge management is designed to solve is that knowledge acquired through experience doesn't ge...
[ 759, 1193, 1683, 1821 ]
Test
1,227
4
Teaching Context to Applications Enhancing applications by adding one or more sensors is not new. Incorporating machinelearning techniques to fuse the data from the sensors into a high-level context description is less obvious. This paper describes an architecture that combines a hierarchy of self-organizing networks and a Markov chain to enable on-line context recognition. Depending on both user and application, the user can teach a context description to the system whenever he or she likes to, as long as the behavior of the sensors is different enough. Finally, consequences and complications of this new approach are discussed. 1
[ 1097, 2472 ]
Train
1,228
3
Path Constraints on Semistructured and Structured Data We present a class of path constraints of interest in connection with both structured and semi-structured databases, and investigate their associated implication problems. These path constraints are capable of expressing natural integrity constraints that are not only a fundamental part of the semantics of the data, but are also important in query optimization. We show that in semistructured databases, despite the simple syntax of the constraints, their associated implication problem is r.e. complete and finite implication problem is co-r.e. complete. However, we establish the decidability of the implication problems for several fragments of the path constraint language, and demonstrate that these fragments suffice to express important semantic information such as inverse relationships and local database constraints commonly found in object-oriented databases. We also show that in the presence of types, the analysis of path constraint implication becomes more delicate. We demonstrate so...
[ 1318, 3141 ]
Train
1,229
3
Discovering Structural Association of Semistructured Data Many semistructured objects are similarly, though not identically, structured. We study the problem of discovering "typical" substructures of a collection of semistructured objects. The discovered structures can serve the following purposes: (a) the "table-of-contents" for gaining general information of a source, (b) a road map for browsing and querying information sources, (c) a basis for clustering documents, (d) partial schemas for providing standard database access methods, (e) user/customer's interests and browsing patterns. The discovery task is impacted by structural features of semistructured data in a non-trivial way and traditional data mining frameworks are inapplicable. We define this discovery problem and propose a solution. 1 Introduction 1.1 Motivation Many on-line documents, such as HTML, Latex, BibTex, SGML files and those found in digital libraries, are semistructured. Semistructured data arises when the source does not impose a rigid structure (such as the ...
[ 904, 1460 ]
Test
1,230
3
Indexing Techniques for Continuously Evolving Phenomena The management of spatial, temporal, and spatiotemporal data is becoming increasingly important in a wide range of applications. This ongoing Ph.D. project focuses on applications where spatial or temporal aspects of objects are continuously changing and there is a need for indexing techniques that "track" the changing data, even in-between explicit updates. In spatiotemporal applications, there is a need to record and efficiently query the history, the current state, and the predicted future behavior of continuously moving objects, such as vehicles, mobile telephones, and people. Likewise, in temporal applications and spatiotemporal applications with discrete change, time intervals may be naturally related to the current time, which continuously progresses. The paper outlines the research agenda of the Ph.D. project and describes briefly two access methods developed so far in this project. 1 Introduction Recent years have shown both an increase in the amounts of ...
[ 228, 2488 ]
Train
1,231
0
Agent Technology in Communications Systems: An Overview Telecommunications infrastructures are a natural application domain for the distributed Software Agent paradigm. The authors clarify the potential application of software agent technology in legacy and future communications systems, and provide an overview of publicly available research on software agents as used for network management. The authors focus on the so called "stationary intelligent agent" type of software agent, although the paper also reviews the reasons why mobile agents have made an impact in this domain. The authors' objective is to describe some of the intricacies of using the software agent approach in the management of communications systems. The paper is in four main sections. The first section provides a brief introduction to software agent technology. The second section considers general problems of network management and the reasons why software agents may provide a suitable solution. The third section reviews some selected research on agents in a telec...
[]
Train
1,232
0
Determining When to Use an Agent-Oriented Software Engineering Paradigm. With the emergence of agent-oriented software engineering techniques, software engineers have a new way of conceptualizing complex distributed software requirements. To help determine the most appropriate software engineering methodology, a set of defining criteria is required. In this paper, we describe our approach to determining these criteria, as well as a technique to assist software engineers with the selection of a software engineering methodology based on those criteria. 1
[ 1022, 1158 ]
Train
1,233
5
Identifying Distinctive Subsequences in Multivariate Time Series by Clustering Most time series comparison algorithms attempt to discover what the members of a set of time series have in common. We investigate a different problem, determining what distinguishes time series in that set from other time series obtained from the same source. In both cases the goal is to identify shared patterns, though in the latter case those patterns must be distinctive as well. An efficient incremental algorithm for identifying distinctive subsequences in multivariate, real-valued time series is described and evaluated with data from two very different sources: the response of a set of bandpass filters to human speech and the sensors of a mobile robot. 1 Introduction Given two or more sequences of discrete tokens, a dynamic programming algorithm exists for finding the longest common subsequence they share (Cormen, Leiserson, & Rivest 1990). This basic algorithm has been adapted in various ways to find patterns shared by real-valued time series as well (Kruskall & Sankoff 1983). ...
[ 2792 ]
Train
1,234
2
Document Categorization and Query Generation on the World Wide Web Using WebACE We present WebACE, an agent for exploring and categorizing documents on the World Wide Web based on a user profile. The heart of the agent is an unsupervised categorization of a set of documents, combined with a process for generating new queries that is used to search for new related documents and for filtering the resulting documents to extract the ones most closely related to the starting set. The document categories are not given a priori. We present the overall architecture and describe two novel algorithms which provide significant improvement over Hierarchical Agglomeration Clustering and AutoClass algorithms and form the basis for the query generation and search component of the agent. We report on the results of our experiments comparing these new algorithms with more traditional clustering algorithms and we show that our algorithms are fast and scalable. y Authors are listed alphabetically. 1 Introduction The World Wide Web is a vast resource of information and services t...
[ 630, 847, 1361, 1405, 2179, 2324, 2780, 2839 ]
Train
1,235
2
Contextualizing the Information Space in Federated Digital Libraries Rapid growth in the volume of documents, their diversity, and terminological variations render federated digital libraries increasingly difficult to manage. Suitable abstraction mechanisms are required to construct meaningful and scalable document clusters, forming a cross-digital library information space for browsing and semantic searching. This paper addresses the above issues, proposes a distributed semantic framework that achieves a logical partitioning of the information space according to topic areas, and provides facilities to contextualize and landscape the available document sets in subject-specific categories.
[]
Train
1,236
3
The Hyper System: Knowledge Reformation for Efficient First-order Hypothetical Reasoning . We present the Hyper system that implements a new approach to knowledge compilation, where function-free first-order acyclic Horn theories are transformed to propositional logic. The compilation method integrates techniques from deductive databases (relevance reasoning) and theory transformation via unfold/fold transformations, to obtain a compact propositional representation. The transformed theory is more compact than the ground version of the original theory in terms of significantly less and mostly shorter clauses. This form of compilation, called knowledge (base) reformation, is important since the most efficient reasoning methods are defined for propositional theories, while knowledge is most naturally expressed in a first-order language. In particular, we will show that knowledge reformation allows low-order polynomial time inference to find a near-optimal solution in cost-based first-order hypothetical reasoning (or `abduction') problems. We will also present ex...
[ 458 ]
Train
1,237
3
Practical Lineage Tracing in Data Warehouses We consider the view data lineage problem in a warehousing environment: For a given data item in a materialized warehouse view, we want to identify the set of source data items that produced the view item. We formalize the problem and present a lineage tracing algorithm for relational views with aggregation. Based on our tracing algorithm, we propose a number of schemes for storing auxiliary views that enable consistent and efficient lineage tracing in a multisource data warehouse. We report on a performance study of the various schemes, identifying which schemes perform best in which settings. Based on our results, we have implemented a lineage tracing package in the WHIPS data warehousing system prototype at Stanford. With this package, users can select view tuples of interest, then efficiently "drill down" to examine the source data that produced them. 1 Introduction Data warehousing systems collect data from multiple distributed sources, integrate the information as materialized v...
[ 77, 148 ]
Train
1,238
3
Enabling Technologies for Interoperability We present a new approach, which proposes to minimize the numerous problems existing in order to have fully interoperable GIS. We discuss the existence of these heterogeneity problems and the fact that they must be solved to achieve interoperability. These problems are addressed on three levels: the syntactic, structural and semantic level. In addition, we identify the needs for an approach performing semantic translation for interoperability and introduce a uniform description of contexts. Furthermore, we discuss a conceptual architecture Buster (Bremen University Semantic Translation for Enhanced Retrieval) which can provide intelligent information integration based on a reclassification of information entities in a new context. Lastly, we demonstrate our theories by sketching a real life scenario.
[ 1801, 2990 ]
Train
1,239
4
Adapting Hidden Markov Models for ASL Recognition by Using Three-dimensional Computer Vision Methods We present an approach to continuous American Sign Language (ASL) recognition, which uses as input three-dimensional data of arm motions. We use computer vision methods for three-dimensional object shape and motion parameter extraction and an Ascension Technologies Flock of Birds interchangeably to obtain accurate three-dimensional movement parameters of ASL sentences, selected from a 53-sign vocabulary and a widely varied sentence structure. These parameters are used as features for Hidden Markov Models (HMMs). To address coarticulation effects and improve our recognition results, we experimented with two different approaches. The first consists of training context-dependent HMMs and is inspired by speech recognition systems. The second consists of modeling transient movements between signs and is inspired by the characteristics of ASL phonology. Our experiments verified that the second approach yields better recognition results. 1. INTRODUCTION Sign language and gesture recognition h...
[ 1383, 1437 ]
Train
1,240
5
Experience with EMERALD to Date After summarizing the EMERALD architecture and the evolutionary process from which EMERALD has evolved, this paper focuses on our experience to date in designing, implementing, and applying EMERALD to various types of anomalies and misuse. The discussion addresses the fundamental importance of good software engineering practice and the importance of the system architecture -- in attaining detectability, interoperability, general applicability, and future evolvability. It also considers the importance of correlation among distributed and hierarchical instances of EMERALD, and needs for additional detection and analysis components. 1. Introduction EMERALD (Event Monitoring Enabling Responses to Anomalous Live Disturbances) [6, 8, 9] is an environment for anomaly and misuse detection and subsequent analysis of the behavior of systems and networks. EMERALD is being developed under DARPA/ITO Contract number F30602-96-C-0294 and applied under DARPA/ISO Contract number F30602-98-C-0059. EMER...
[ 503, 649, 1285, 1335, 1727, 2951 ]
Train
1,241
2
Selecting Text Spans for Document Summaries: Heuristics and Metrics Human-quality text summarization systems are difficult to design, and even more difficult to evaluate, in part because documents can differ along several dimensions, such as length, writing style and lexical usage. Nevertheless, certain cues can often help suggest the selection of sentences for inclusion in a summary. This paper presents an analysis of news-article summaries generated by sentence extraction. Sentences are ranked for potential inclusion in the summary using a weighted combination of linguistic features -- derived from an analysis of news-wire summaries. This paper evaluates the relative effectiveness of these features. In order to do so, we discuss the construction of a large corpus of extraction-based summaries, and characterize the underlying degree of difficulty of summarization at different compression levels on articles in this corpus. Results on our feature set are presented after normalization by this degree of difficulty.
[ 788, 1152, 1640 ]
Train
1,242
2
Assessment Methods for Information Quality Criteria Information quality (IQ) is one of the most important aspects of information integration on the Internet. Many projects realize and address this fact by gathering and classifying IQ criteria. Hardly ever do the projects address the immense difficulty of assessing scores for the criteria. This task must precede any usage of criteria for qualifying and integrating information. After reviewing previous attempts to classify IQ criteria, in this paper we also classify criteria, but in a new, assessment-oriented way. We identify three sources for IQ scores and thus, three IQ criterion classes, each with different general assessment possibilities. Additionally, for each criterion we give detailed assessment methods. Finally, we consider confidence measures for these methods. Confidence expresses the accuracy, lastingness, and credibility of the individual assessment methods. 1 Introduction Low information quality is one of the most pressing problems for consumers of information that is di...
[ 1252, 2381 ]
Train
1,243
1
Neural Networks for Speech Processing this article. Currently (1998), successful use of NNs for speech processing is mainly limited to
[ 1403 ]
Train
1,244
5
Cluster Optimization Using Extended Compact Genetic Algorithm This study presents an efficient atomic cluster optimization algorithm that utilizes a hybrid extended compact genetic algorithm along with an efficiency enhancement technique called seeding. Empirical results indicate that the population size and total number of function evaluations scale up with the cluster size as O(n^0.83) and O(n^2.45) respectively. The results also indicate that the proposed algorithm is not only very reliable in predicting lowest energy structures, but also has a better scale up of number of function evaluations with the cluster size.
[ 1644, 1732 ]
Train
1,245
1
Treating Constraints As Objectives For Single-Objective Evolutionary Optimization This paper presents a new approach to handle constraints using evolutionary algorithms. The new technique treats constraints as objectives, and uses a multiobjective optimization approach to solve the re-stated single-objective optimization problem. The new approach is compared against other numerical and evolutionary optimization techniques in several engineering optimization problems with different kinds of constraints. The results obtained show that the new approach can consistently outperform the other techniques using relatively small sub-populations, and without a significant sacrifice in terms of performance.
[ 559 ]
Test
1,246
0
RoboSoc a System for Developing RoboCup Agents for Educational Use This report describes RoboSoc, a system for developing RoboCup agents designed especially, but not only, for educational use. RoboSoc is designed to be as general, open, and easy to use as possible and to encourage and simplify the modification, extension and sharing of RoboCup agents, and parts of them. To do this I assumed four requirements from the user: she wants the best possible data, use as much time as possible for the decision making, rather act on incomplete information than not act at all, and she wants to manipulate the objects found in the soccer environment. RoboSoc consists of three parts: a library of basic objects and utilities used by the rest of the system, a basic system handling the interactions with the soccer server and the timing of the agent, and a framework for world modeling and decision support. The framework defines three concepts, views, predicates and skills. The views are specialized information processing units responsible for a specific part of the wo...
[ 962, 3173 ]
Train
1,247
2
Automatic Text Detection and Tracking in Digital Video Text which appears in a scene or is graphically added to video can provide an important supplemental source of index information as well as clues for decoding the video's structure and for classification. In this paper we present algorithms for detecting and tracking text in digital video. Our system implements a scalespace feature extractor that feeds an artificial neural processor to detect text blocks. Our text tracking scheme consists of two modules: an SSD (Sum of Squared Difference)-based module to find the initial position and a contour-based module to refine the position. Experiments conducted with a variety of video sources show that our scheme can detect and track text robustly. Keywords Text Detection, Text Tracking, Video Indexing, Digital Libraries, Neural Network I. Introduction The continued proliferation of large amounts of digital video has increased demand for true content based indexing and retrieval systems. Traditionally, content has been indexed primaril...
[ 17, 2018, 2938 ]
Test
1,248
4
Collections - Adapting The Display of Personal Objects for Different Audiences Although current networked systems and online applications provide new opportunities for displaying and sharing personal information, they do not account for the underlying social contexts that frame such interactions. Existing categorization and management mechanisms for digital content have been designed to focus on the data they handle without much regard for the social circumstances within which their content is shared. As we share large collections of personal information over mediated environments, our tools need to account for the social scenarios that surround our interactions. This thesis presents Collections: an application for the management of digital pictures according to their intended audiences. The goal is to create a graphical interface that supports the creation of fairly complex privacy decisions concerning the display of digital photographs. Simple graphics are used to enable the collector to create a wide range of audience arrangements for her digital pho...
[ 1754 ]
Train
1,249
1
Optimal Aggregation Algorithms for Middleware Assume that each object in a database has m grades, or scores, one for each of m attributes. For example, an object can have a color grade, that tells how red it is, and a shape grade, that tells how round it is. For each attribute, there is a sorted list, which lists each object and its grade under that attribute, sorted by grade (highest grade first). Each object is assigned an overall grade, that is obtained by combining the attribute grades using a fixed monotone aggregation function, or combining rule, such as min or average. To determine the top k objects, that is, k objects with the highest overall grades, the naive algorithm must access every object in the database, to find its grade under each attribute. Fagin has given an algorithm ("Fagin's Algorithm", or FA) that is much more efficient. For some monotone aggregation functions, FA is optimal with high probability in the worst case. We analyze an elegant and remarkably simple algorithm ("the threshold algorithm", or TA) that is optimal in a much stronger sense than FA. We show that TA is essentially optimal, not just for some monotone aggregation functions, but for all of them, and not just in a high-probability worst-case sense, but over every database. Unlike FA, which requires large buffers (whose size may grow unboundedly as the database size grows), TA requires only a small, constant-size buffer. TA allows early stopping, which yields, in a precise sense, an approximate version of the top k answers. We distinguish two types of access: sorted access (where the middleware system obtains the grade of an object in some sorted list by proceeding through the list sequentially from the top), and random access (where the middleware system requests the grade of an object in a list, and obtains it in one step). We consider the scenarios where ra...
[ 1128, 2335, 2453 ]
Train
1,250
3
Active Disks: Programming Model, Algorithms and Evaluation Several application and technology trends indicate that it might be both profitable and feasible to move computation closer to the data that it processes. In this paper, we evaluate Active Disk architectures which integrate significant processing power and memory into a disk drive and allow application-specific code to be downloaded and executed on the data that is being read from (written to) disk. The key idea is to offload bulk of the processing to the disk-resident processors and to use the host processor primarily for coordination, scheduling and combination of results from individual disks. To program Active Disks, we propose a stream-based programming model which allows disklets to be executed efficiently and safely. Simulation results for a suite of six algorithms from three application domains (commercial data warehouses, image processing and satellite data processing) indicate that for these algorithms, Active Disks outperform conventional-disk architectures. 1 Introduction Severa...
[ 3068 ]
Test
1,251
4
Graspable interfaces: Establishing design principles PhD Research Plan for Morten Fjeld. Topic: Design of Tangible User Interfaces
[ 1336 ]
Train
1,252
3
An Extensible Framework for Data Cleaning Data integration solutions dealing with large amounts of data have been strongly required in the last few years. Besides the traditional data integration problems (e.g. schema integration, local to global schema mappings), three additional data problems have to be dealt with: (1) the absence of universal keys across different databases that is known as the object identity problem, (2) the existence of keyboard errors in the data, and (3) the presence of inconsistencies in data coming from multiple sources. Dealing with these problems is globally called the data cleaning process. In this work, we propose a framework which offers the fundamental services required by this process: data transformation, duplicate elimination and multi-table matching. These services are implemented using a set of purposely designed macro-operators. Moreover, we propose an SQL extension for specifying each of the macro-operators. One important feature of the framework is the ability of explicitly includ...
[ 1242, 1643, 1705, 2163, 2235, 2381, 2842 ]
Train
1,253
5
Circumventing Dynamic Modeling: Evaluation of the Error-State Kalman Filter applied to Mobile Robot Localization The mobile robot localization problem is treated as a two-stage iterative estimation process. The attitude is estimated first and is then available for position estimation. The indirect (error state) form of the Kalman filter is developed for attitude estimation when applying gyro modeling. The main benefit of this choice is that complex dynamic modeling of the mobile robot and its interaction with the environment is avoided. The filter optimally combines the attitude rate information from the gyro and the absolute orientation measurements. The proposed implementation is independent of the structure of the vehicle or the morphology of the ground. The method can easily be transfered to another mobile platform provided it carries an equivalent set of sensors. The 2D case is studied in detail first. Results of extending the approach to the 3D case are presented. In both cases the results demonstrate the efficacy of the proposed method. 1 Introduction On July 4th 1997, the Mars Pathfinde...
[ 1174 ]
Test
1,254
3
Adaptable Query Optimization and Evaluation in Temporal Middleware Time-referenced data are pervasive in most real-world databases. Recent advances in temporal query languages show that such database applications may benefit substantially from built-in temporal support in the DBMS. To achieve this, temporal query optimization and evaluation mechanisms must be provided, either within the DBMS proper or as a source level translation from temporal queries to conventional SQL. This paper proposes a new approach: using a middleware component on top of a conventional DBMS. This component accepts temporal SQL statements and produces a corresponding query plan consisting of algebraic as well as regular SQL parts. The algebraic parts are processed by the middleware, while the SQL parts are processed by the DBMS. The middleware uses performance feedback from the DBMS to adapt its partitioning of subsequent queries into middleware and DBMS parts. The paper describes the architecture and implementation of the temporal middleware component, termed TANGO, which is based on the Volcano extensible query optimizer and the XXL query processing library. Experiments with the system demonstrate the utility of the middleware's internal processing capability and its cost-based mechanism for apportioning the processing between the middleware and the underlying DBMS. Index terms: temporal databases, query processing and optimization, cost-based optimization, middleware 1
[ 607 ]
Train
1,255
3
A representation-independent temporal extension of ODMG's Object Query Language TEMPOS is a set of models providing a framework for extending database systems with temporal functionalities. Based on this framework, an extension of the ODMG's object database standard has been defined. This extension includes a hierarchy of abstract datatypes for managing temporal values and histories, as well as temporal extensions of ODMG's object model, schema definition language and query language. This paper focuses on the latter, namely TEMPOQL. With respect to related proposals, the main originality of TEMPOQL is that it allows to manipulate histories regardless of their representations, by composition of functional constructs. Thereby, the abstraction principle of object-orientation is fulfilled, and the functional nature of OQL is enforced. In fact, TEMPOQL goes further in preserving OQL's structure, by generalizing most standard OQL constructs to deal with histories. The overall proposal has been fully formalized both at the syntactical and the semantical level and impleme...
[ 131, 1064, 1200, 2606, 2686, 3142 ]
Test
1,256
3
Novel Approaches to the Indexing of Moving Object Trajectories The domain of spatiotemporal applications is a treasure trove of new types of data and queries. However, work in this area is guided by related research from the spatial and temporal domains, so far, with little attention towards the true nature of spatiotemporal phenomena. In this work, the focus is on a spatiotemporal sub-domain, namely the trajectories of moving point objects. We present new types of spatiotemporal queries, as well as algorithms to process those. Further, we introduce two access methods for this kind of data, namely the Spatio-Temporal R-tree (STR-tree) and the Trajectory-Bundle tree (TB-tree). The former is an R-tree based access method that considers the trajectory identity in the index as well, while the latter is a hybrid structure, which preserves trajectories as well as allows for R-tree typical range search in the data. We present performance studies that compare the two indices with the R-tree (appropriately modified, for a fair comparison) under a varying set of spatiotemporal queries, and we provide guidelines for a successful choice among them.
[ 72, 1410, 2806 ]
Test
1,257
2
An XML-based Multimedia Middleware for Mobile Online Auctions Pervasive Internet services today promise to provide users with a quick and convenient access to a variety of commercial applications. However, due to unsuitable architectures and poor performance user acceptance is still low. To be a major success mobile services have to provide device-adapted content and advanced value-added Web services. Innovative enabling technologies like XML and wireless communication may for the first time provide a facility to interact with online applications anytime anywhere. We present a prototype implementing an efficient multimedia middleware approach towards ubiquitous value-added services using an auction house as a sample application. Advanced multi-feature retrieval technologies are combined with enhanced content delivery to show the impact of modern enterprise information systems on today's e-commerce applications. Keywords: mobile commerce, online auctions, middleware architectures, pervasive Internet technology, multimedia database appli...
[ 1128, 2865 ]
Train
1,258
0
Categorization of Software Errors that led to Security Breaches A set of errors known to have led to security breaches in computer systems was analyzed. The analysis led to a categorization of these errors. After examining several proposed schemes for the categorization of software errors a new scheme was developed and used. This scheme classifies errors by their cause, the nature of their impact, and the type of change, or fix, made to remove the error. The errors considered in this work are found in a database maintained by the COAST laboratory. The categorization is the first step in the investigation of the effectiveness of various measures of code coverage in revealing software errors that might lead to security breaches. 1 Introduction We report the outcome of an effort to categorize errors in software that are known to have led to security breaches. The set of errors used in this study came from a database of errors developed in the COAST laboratory [10]. Several existing schemes for the categorization of software errors were evaluated for ...
[ 1931 ]
Test
1,259
4
Feedback From Video For Virtual Reality Navigation Important preconditions for wide acceptance of virtual reality systems include their comfort, ease and naturalness to use. Most existing trackers suffer from discomfort-related issues. For example, body-based trackers (such as hand controllers, joysticks or helmet attachments) restrict spontaneity and naturalness of motion, whereas ground-based devices (e.g., hand controllers) limit the workspace by literally binding an operator to the ground. Controls have similar problems. This paper describes using real-time video with registered depth information (from a commercially available camera) for virtual reality navigation. A camera-based setup can replace cumbersome trackers. The method includes selective depth processing for increased speed, and a robust skin-color segmentation for handling illumination variations.
[ 1454 ]
Test
1,260
0
Agents That Reason and Negotiate By Arguing The need for negotiation in multi-agent systems stems from the requirement for agents to solve the problems posed by their interdependence upon one another. Negotiation provides a solution to these problems by giving the agents the means to resolve their conflicting objectives, correct inconsistencies in their knowledge of other agents' world views, and coordinate a joint approach to domain tasks which benefits all the agents concerned. We propose a framework, based upon a system of argumentation, which permits agents to negotiate in order to establish acceptable ways of solving problems. The framework provides a formal model of argumentation-based reasoning and negotiation, details a design philosophy which ensures a clear link between the formal model and its practical instantiation, and describes a case study of this relationship for a particular class of architectures (namely those for belief-desire-intention agents). 1 Introduction An increasing number of software app...
[ 0, 350, 484, 566, 818, 1358, 1724, 2056, 2243, 2338, 2364, 2584, 2921, 3032, 3038 ]
Train
1,261
4
Mobile Collaborative Augmented Reality The combination of mobile computing and collaborative Augmented Reality into a single system makes the power of computer enhanced interaction and communication in the real world accessible anytime and everywhere. This paper describes our work to build a mobile collaborative Augmented Reality system that supports true stereoscopic 3D graphics, a pen and pad interface and direct interaction with virtual objects. The system is assembled from off-the-shelf hardware components and serves as a basic test bed for user interface experiments related to computer supported collaborative work in Augmented Reality. A mobile platform implementing the described features and collaboration between mobile and stationary users are demonstrated.
[ 172, 468, 2365, 2549 ]
Train
1,262
1
How Developmental Psychology and Robotics Complement Each Other This paper presents two complementary ideas relating the study of human development and the construction of intelligent artifacts. First, the use of developmental models will be a critical requirement in the construction of robotic systems that can acquire a large repertoire of motor, perceptual, and cognitive capabilities. Second, robotic systems can be used as a test-bed for evaluating models of human development much in the same way that simulation studies are currently used to evaluate cognitive models. To further explore these ideas, two examples from the author's own work will be presented: the use of developmental models of hand-eye coordination to simplify the task of learning to reach for a visual target and the use of a humanoid robot to evaluate models of normal and abnormal social skill development. Introduction Research on human development and research on the construction of intelligent artifacts can and should be complementary. Studies of human developm...
[ 637 ]
Train
1,263
1
Learning to Recognize 3D Objects A learning account for the problem of object recognition is developed within the PAC (Probably Approximately Correct) model of learnability. The key assumption underlying this work is that objects can be recognized (or, discriminated) using simple representations in terms of "syntactically" simple relations over the raw image. Although the potential number of these simple relations could be huge, only a few of them are actually present in each observed image and a fairly small number of those observed is relevant to discriminating an object. We show that these properties can be exploited to yield an efficient learning approach in terms of sample and computational complexity, within the PAC model. No assumptions are needed on the distribution of the observed objects and the learning performance is quantified relative to its past experience. Most importantly, the success of learning an object representation is naturally tied to the ability to represent it as a function of some in...
[ 816, 2860, 2898 ]
Validation
1,264
1
Parameter Learning of Logic Programs for Symbolic-statistical Modeling We propose a logical/mathematical framework for statistical parameter learning of parameterized logic programs, i.e. definite clause programs containing probabilistic facts with a parameterized distribution. It extends the traditional least Herbrand model semantics in logic programming to distribution semantics, possible world semantics with a probability distribution which is unconditionally applicable to arbitrary logic programs including ones for HMMs, PCFGs and Bayesian networks. We also propose a new EM algorithm, the graphical EM algorithm, that runs for a class of parameterized logic programs representing sequential decision processes where each decision is exclusive and independent. It runs on a new data structure called support graphs describing the logical relationship between observations and their explanations, and learns parameters by computing inside and outside probability generalized for logic programs. The complexity analysis shows that when combined with OLDT search for all explanations for observations, the graphical EM algorithm, despite its generality, has the same time complexity as existing EM algorithms, i.e. the Baum-Welch algorithm for HMMs, the Inside-Outside algorithm for PCFGs, and the one for singly connected Bayesian networks that have been developed independently in each research field. Learning experiments with PCFGs using two corpora of moderate size indicate that the graphical EM algorithm can significantly outperform the Inside-Outside algorithm. 1.
[ 82 ]
Validation
1,265
3
The Impact of Database Selection on Distributed Searching Abstract The proliferation of online information resources increases the importance of effective and efficient distributed searching. Distributed searching is cast in three parts – database selection, query processing, and results merging. In this paper we examine the effect of database selection on retrieval performance. We look at retrieval performance in three different distributed retrieval testbeds and distill some general results. First we find that good database selection can result in better retrieval effectiveness than can be achieved in a centralized database. Second we find that good performance can be achieved when only a few sites are selected and that the performance generally increases as more sites are selected. Finally we find that when database selection is employed, it is not necessary to maintain collection wide information (CWI), e.g. global idf. Local information can be used to achieve superior performance. This means that distributed systems can be engineered with more autonomy and less cooperation. This work suggests that improvements in database selection can lead to broader improvements in retrieval performance, even in centralized (i.e. single database) systems. Given a centralized database and a good selection mechanism, retrieval performance can be improved by decomposing that database conceptually and employing a selection step. 1
[ 347, 1415 ]
Test
1,266
4
Improving the Registration Precision by Visual Horizon Silhouette Matching A system for enhancing the situational awareness in an outdoor scenario by Augmented Reality (AR) techniques can utilize visual clues for improving registration precision. If visible, terrain silhouettes provide unique features to be matched with digital elevation map (DEM) data. The best match of a visually extracted silhouette with the DEM silhouette provides camera/observer orientation (elevation and azimuth). We have developed such a registration system which runs on a PC (200 MHz) and is being ported to a wearable AR system. 1 Introduction and Context 1.1 Augmented Reality In the past years, augmented reality (AR) has gained significant attention due to rapid progress in several key areas (wearable computing, virtual reality rendering)[2]. AR technology provides means of intuitive information presentation for enhancing the situational awareness and perception by exploiting the natural and familiar human interaction modalities with the environment. Completely immersive A...
[ 1654 ]
Validation
1,267
1
Structure Identification of Fuzzy Classifiers For complex and high-dimensional problems, data-driven identification of classifiers has to deal with structural issues like the selection of the relevant features and effective initial partition of the input domain. Therefore, the identification of fuzzy classifiers is a challenging topic. Decision-tree (DT) generation algorithms are effective in feature selection and extraction of crisp classification rules, hence they can be used for the initialization of fuzzy systems. Because fuzzy classifiers have much more flexible decision boundaries than DTs, fuzzy models can be more parsimonious than DTs. Hence, to obtain a compact, easily interpretable and transparent classification system, a new structure identification algorithm is proposed, where genetic algorithm (GA) based parameter optimization of the DT initialized fuzzy sets is combined with similarity based rule base simplification algorithms. The performance of the approach is studied on a specially designed artificial data set. An application to the Cancer classification problem is also shown.
[ 735, 1984, 3026 ]
Train
1,268
5
Expert System for Automatic Analysis of Facial Expressions This paper discusses our expert system called Integrated System for Facial Expression Recognition (ISFER), which performs recognition and emotional classification of human facial expression from a still full-face image. The system consists of two major parts. The first one is the ISFER Workbench, which forms a framework for hybrid facial feature detection. Multiple feature detection techniques are applied in parallel. The redundant information is used to define unambiguous face geometry containing no missing or highly inaccurate data. The second part of the system is its inference engine called HERCULES, which converts low level face geometry into high level facial actions, and then this into highest level weighted emotion labels.
[ 1904 ]
Train
1,269
2
A System For Automatic Personalized Tracking of Scientific Literature on the Web We introduce a system as part of the CiteSeer digital library project for automatic tracking of scientific literature that is relevant to a user’s research interests. Unlike previous systems that use simple keyword matching, CiteSeer is able to track and recommend topically relevant papers even when keyword based query profiles fail. This is made possible through the use of a heterogenous profile to represent user interests. These profiles include several representations, including content based relatedness measures. The CiteSeer tracking system is well integrated into the search and browsing facilities of CiteSeer, and provides the user with great flexibility in tuning a profile to better match his or her interests. The software for this system is available, and a sample database is online as a public service.
[ 95, 166, 676, 1016, 1357 ]
Train
1,270
3
A Virtual Document Interpreter for Reuse of Information . The importance of reuse of information is well recognised for electronic publishing. However, it is rarely achieved satisfactorily because of the complexity of the task: integrating different formats, handling updates of information, addressing document author's need for intuitiveness and simplicity, etc. An approach which addresses these problems is to dynamically generate and update documents through a descriptive definition of virtual documents. In this paper we present a document interpreter that allows gathering information from multiple sources, and combining it dynamically to produce a virtual document. Two strengths of our approach are: the generic information objects that we use, which enables access to distributed, heterogeneous data sources; and the interpreter's evaluation strategy, which permits a minimum of re-evaluation of the information objects from the data sources. Keywords: : Virtual Documents, Information Reuse, Active Documents, Document synthesis. 1. Introducti...
[ 1821, 2978 ]
Train
1,271
3
Reclustering of HEP Data in Object-Oriented Databases The Large Hadron Collider (LHC), built at CERN, will enter operation in 2005. The experiments at the LHC will generate some 5 PB of data per year, which are stored in an ODBMS. A good object clustering on the disk drives will be critical to achieve the high data throughput required by future analysis scenarios. This paper presents a new reclustering algorithm for HEP data that maximizes the read transfer rate for objects contained in multiple overlapping collections. It works by decomposing the stored objects into a number of chunks and rearranging them by means of heuristics solving the traveling salesman problem with Hamming distance. Furthermore, experimental results of a prototype are presented. Keywords: object-oriented databases, scientific databases, object clustering, query optimisation 1 Introduction The ATLAS experiment [1] at CERN, due to take data in the year 2005, will store approximately 1 PB (10^15 bytes) of data per year. Data taking is expected to last 15 or more yea...
[ 2506, 3124 ]
Train
1,272
0
Towards UML-based Analysis and Design of Multi-Agent Systems The visual modeling facilities of the UML do not provide sufficient means to support the design of multi-agent systems. In this paper, we are investigating the development phases of requirements analysis, design, and code generation for multi agent systems. In the requirements analysis phase, we are using extended use case diagrams to identify agents and their relationship to the environment. In the design phase, we are using stereotyped class and object diagrams to model different agent types and their related goals and strategies. While these diagrams define the static agent system architecture, dynamic agent behavior is modeled in statecharts with respect to the BDI 1 agent approach. Concerning code generation, we show how the used diagrams can be taken to generate code for CASA, our executable agent specification language that is integrated into an existing multi-agent framework. 1
[ 231, 2309, 3133 ]
Train
1,273
0
The Evaluation of Microplanning and Surface Realization in the Generation of Multimodal Acts of Communication In this paper, we describe an application domain which requires the computational simulation of human-human communication in which one of the interlocutors has an expressive communication disorder. The importance and evaluation of a process, called here microplanning and surface realization, for such communicative agents is discussed and a related exploratory study is described. 1
[ 1925 ]
Train
1,274
1
Improvement in a Lazy Context: An Operational Theory for Call-By-Need The standard implementation technique for lazy functional languages is call-by-need, which ensures that an argument to a function in any given call is evaluated at most once. A significant problem with call-by-need is that it is difficult --- even for compiler writers --- to predict the effects of program transformations. The traditional theories for lazy functional languages are based on call-by-name models, and offer no help in determining which transformations do indeed optimize a program. In this article we present an operational theory for callby -need, based upon an improvement ordering on programs: M is improved by N if in all program-contexts C, when C[M ] terminates then C[N ] terminates at least as cheaply. We show that this improvement relation satisfies a "context lemma", and supports a rich inequational theory, subsuming the call-by-need lambda calculi of Ariola et al. [AFM + 95]. The reduction-based call-by-need calculi are inadequate as a theory of lazy-program tran...
[ 2302 ]
Train
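The call-by-need abstract above hinges on an argument being evaluated at most once, with the result shared across all later uses. A minimal illustrative sketch of that sharing discipline in Python (for illustration only; the paper itself works in a lambda-calculus setting, and this `Thunk` class is an assumption, not the authors' formalism):

```python
class Thunk:
    """Call-by-need suspension: the computation runs at most once,
    and its result is shared by every subsequent force()."""

    def __init__(self, compute):
        self._compute = compute   # zero-argument callable, not yet run
        self._forced = False
        self._value = None

    def force(self):
        if not self._forced:
            self._value = self._compute()  # evaluate exactly once
            self._compute = None           # drop the closure so it can be GC'd
            self._forced = True
        return self._value
```

Forcing the thunk twice returns the same value while running the suspended computation only once, which is precisely the "evaluated at most once" property the abstract refers to.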
1,275
0
What Sort Of Control System Is Able To Have A Personality? This paper outlines a design-based methodology for the study of mind as a part of the broad discipline of Artificial Intelligence. Within that framework some architectural requirements for human-like minds are discussed, and some preliminary suggestions made regarding mechanisms underlying motivation, emotions, and personality. A brief description is given of the 'Nursemaid' or 'Minder' scenario being used at the University of Birmingham as a framework for research on these problems. It may be possible later to combine some of these ideas with work on synthetic agents inhabiting virtual reality environments. 1 Introduction: Personality belongs to a whole agent. Most work in AI addresses only cognitive aspects of the design of intelligent agents, e.g. vision and other forms of perception, planning, problem solving, the learning of concepts and generalisations, natural language processing, motor control etc. Only a tiny subset of AI research has been concerned with motivation and emotion...
[ 606 ]
Train
1,276
4
Maintaining the Illusion of Interacting Within a 3D Virtual Space It is widely thought, to a greater or lesser degree, that a sense of presence may be induced in users of new and emerging media technologies, such as the Internet, digital television and cinema (supporting interaction), teleconferencing and 3D virtual reality systems. In this paper, it is argued that presence presupposes that participants are absorbed in the illusion of interacting within the visual spaces created by these media. That is, prior to any possible inducement of presence, participants need to be absorbed in the illusion conveyed by the media. Without this, participants' attention is broken and the illusion is lost. Hence, the potential to induce presence in participants ceases. To encourage participants to lose sight of the means of representation and be drawn into the illusion conveyed by these media, this paper proposes the development of design principles to enhance participants' experience. In an attempt to inform such design principles, this paper focuses on another artificial although highly successful visual medium - film. By way of example, this paper concentrates on one medium, virtual reality, and proposes design principles that attempt to maintain the illusion of interacting within 3D virtual space. This aims to provide a platform, through a resourceful blend of hardware and software Virtual Reality (VR) enabling technologies, on which to support a well designed virtual environment and hence from which the inducement of presence in participants may develop.
[ 1585, 1870 ]
Train
1,277
3
Proactive Detection of Distributed Denial of Service Attacks using MIB Traffic Variables - A Feasibility Study In this paper we propose a methodology for utilizing Network Management Systems for the early detection of Distributed Denial of Service (DDoS) Attacks. Although there are quite a large number of events that are prior to an attack (e.g. suspicious logons, start of processes, addition of new files, sudden shifts in traffic, etc.), in this work we depend solely on information from MIB (Management Information Base) Traffic Variables collected from the systems participating in the Attack. Three types of DDoS attacks were effected on a Research Test Bed, and MIB variables were recorded. Using these datasets, we show how there are indeed MIB-based precursors of DDoS attacks that render it possible to detect them before the Target is shut down. Most importantly, we describe how the relevant MI... (This work was supported by the Air Force Research Laboratory (Rome, NY - USA) under contract F30602-00-C-0126 to Scientific Systems Company, and by Aprisma's University Fellowship Program 1999/2000.)
[ 896 ]
Train
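The DDoS abstract above reports that MIB traffic counters contain attack precursors, but does not spell out a detector in the text shown here. As a hedged illustration only (the paper's actual statistical method is not reproduced; the counter name `ipInReceives`, the window size, and the z-score threshold are all assumptions), a simple baseline-deviation check over per-interval counter deltas might look like:

```python
import statistics

def ddos_precursor_alerts(series, window=10, threshold=3.0):
    """Flag indices where a MIB traffic counter delta deviates sharply
    from its recent baseline (a plain z-score detector, not the paper's
    method).

    series: per-interval deltas of a counter such as ipInReceives.
    Returns indices whose value exceeds mean + threshold * stdev of the
    preceding `window` samples.
    """
    alerts = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mu = statistics.fmean(baseline)
        sigma = statistics.pstdev(baseline)
        if sigma > 0 and series[i] > mu + threshold * sigma:
            alerts.append(i)
    return alerts
```

On a steady baseline around 100 packets per interval, a sudden jump to 500 is flagged immediately, which is the kind of "sudden shift in traffic" the abstract mentions as a precursor signal.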
1,278
3
Logical Update Queries as Open Nested Transactions The rule-based update language ULTRA has been designed for the specification of complex database updates in a modular fashion. The logical semantics of update goals is based on update request sets, which correspond to deferred basic updates in the database. The declarative character of the logical semantics leaves much freedom for various evaluation strategies, among them a top-down resolution, which can be naturally mapped onto a system of nested transactions. In this paper, we extend this operational model as follows: not only the basic operations are performed and committed independently from the top-level transaction, but also complex operations defined by update rules. This leads to an open nested transaction hierarchy, which allows us to exploit the semantic properties of complex operations to gain more concurrency. On the other hand, high-level compensation is necessary and meta information must be provided by the programmer. We present the key elements of this combi...
[]
Train
1,279
1
Naive Bayes and Exemplar-Based Approaches to Word Sense Disambiguation Revisited This paper describes an experimental comparison between two standard supervised learning methods, namely Naive Bayes and Exemplar-based classification, on the Word Sense Disambiguation (WSD) problem. The aim of the work is twofold. Firstly, it attempts to clarify some confusing information in the related literature about the comparison between the two methods. In doing so, several directions have been explored, including: testing several modifications of the basic learning algorithms and varying the feature space. Secondly, an improvement of both algorithms is proposed, in order to deal with large attribute sets. This modification, which basically consists in using only the positive information appearing in the examples, greatly improves the efficiency of the methods, with no loss in accuracy. The experiments have been performed on the largest sense-tagged corpus available containing the most frequent and ambiguous English words. Results show that the Exemplar-based approach to WSD is generally superior to the Bayesian approach, especially when a specific metric for dealing with symbolic attributes is used.
[ 751, 2513, 2676 ]
Train
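The last abstract's efficiency idea, scoring each sense using only the features that actually occur in the example ("positive information"), can be sketched for the Naive Bayes side as follows. This is a hypothetical minimal implementation, not the authors' code: the bag-of-words context features, Laplace smoothing, and the toy senses in the usage note are all assumptions.

```python
import math
from collections import Counter, defaultdict

class NaiveBayesWSD:
    """Multinomial Naive Bayes for word sense disambiguation.

    predict() iterates only over the context words present in the
    example (the "positive information" simplification), so scoring
    cost is independent of the full attribute-set size.
    """

    def __init__(self, alpha=1.0):
        self.alpha = alpha                    # Laplace smoothing
        self.sense_counts = Counter()         # sense -> #training examples
        self.word_counts = defaultdict(Counter)  # sense -> word -> count
        self.vocab = set()

    def train(self, examples):
        # examples: iterable of (context_words, sense) pairs
        for words, sense in examples:
            self.sense_counts[sense] += 1
            for w in words:
                self.word_counts[sense][w] += 1
                self.vocab.add(w)

    def predict(self, words):
        total = sum(self.sense_counts.values())
        v = len(self.vocab)
        best, best_score = None, float("-inf")
        for sense, n in self.sense_counts.items():
            score = math.log(n / total)       # log prior
            denom = sum(self.word_counts[sense].values()) + self.alpha * v
            for w in words:                   # positive features only
                score += math.log((self.word_counts[sense][w] + self.alpha) / denom)
            if score > best_score:
                best, best_score = sense, score
        return best
```

Trained on a few toy contexts for an ambiguous word like "bank", the classifier picks the sense whose training contexts best overlap the observed words.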