"Global information systems have the potential of providing decision makers with timely spatial information about earth systems. This information will come from diverse sources, including field monitoring, remotely sensed imagery, and environmental models. Of the three, the latter has the greatest potential of providing regional and global scale information on the behavior of environmental systems, which may be vital for setting multi-governmental policy and for making decisions that are critical to quality of life. However, environmental models have limited protocols for quality control and standardization. They tend to have weak or poorly defined semantics, and so their output is often difficult to interpret outside the very limited range of applications for which they are designed. This paper considers this issue with respect to spatially distributed environmental models. A method of measuring the semantic proximity between components of large, integrated models is presented, along with an example illustrating its application. It is concluded that many of the issues associated with weak model semantics can be resolved with the addition of self-evaluating logic and context-based tools that present the semantic weaknesses to the end-user." "Many commercial database systems use some form of statistics, typically histograms, to summarize the contents of relations and permit efficient estimation of required quantities. While there has been considerable work done on identifying good histograms for the estimation of query-result sizes, little attention has been paid to the estimation of the data distribution of the result, which is of importance in query optimization. In this paper, we prove that the optimal histogram for estimating the size of the result of a join operator is optimal for estimating its data distribution as well.
We also study the effectiveness of these optimal histograms in the context of an important application that requires estimates for the data distribution of a query result: load-balancing for parallel hybrid hash joins. We derive a cost formula to capture the effect of data skew in both the input and output relations on the load and use the optimal histograms to estimate this cost most accurately. We have developed and implemented a load-balancing algorithm using these histograms on a simulator for the Gamma parallel database system. The experiments establish the superiority of this approach compared to earlier ones in handling all kinds and levels of skew while incurring negligible overhead." "This chapter describes an efficient method for maintaining materialized views with non-distributive aggregate functions, even in the presence of super-aggregates. Incremental view maintenance is an extremely important aspect of modern database management systems. It enables the fast execution of complex queries without sacrificing the freshness of the data. However, the maintenance of views defined with non-distributive aggregate functions has not been sufficiently explored. Incremental refresh has been studied in depth only for a subset of the aggregate functions. Materialized views, or automatic summary tables (ASTs), are increasingly being used to facilitate the analysis of the large amounts of data being collected in relational databases. The use of ASTs can significantly reduce the execution time of a query, often by orders of magnitude, which is particularly significant for databases with sizes in the terabyte to petabyte range, whose queries are designed by business intelligence tools or decision support systems. Such queries tend to be extremely complex, involving a large number of join and grouping operations."
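To make the incremental-refresh idea in the preceding abstract concrete, here is a minimal sketch (my own illustration, not taken from the chapter): MAX is a simple example of an aggregate that is not distributive over deletions, so a materialized MAX-per-group view can keep a value-frequency table as auxiliary data, letting it absorb both inserts and deletes without rescanning the base table. The class name `MaxView` is hypothetical.

```python
from collections import Counter, defaultdict

class MaxView:
    """Toy materialized view maintaining MAX(value) per group incrementally.

    Inserts are easy for MAX, but deleting the current maximum would
    normally force a rescan of the base table; the per-group value-frequency
    Counter is the auxiliary data that avoids that rescan.
    """
    def __init__(self):
        self.freq = defaultdict(Counter)  # group -> {value: multiplicity}

    def insert(self, group, value):
        self.freq[group][value] += 1

    def delete(self, group, value):
        counts = self.freq[group]
        counts[value] -= 1
        if counts[value] <= 0:
            del counts[value]             # value no longer present in group

    def max(self, group):
        counts = self.freq[group]
        return max(counts) if counts else None
```

For example, after inserting 5, 9, 9 into group "a" and then deleting both 9s, the view's maximum for "a" falls back to 5 without touching any base table.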
"Although many of the problems that must be solved by an object-oriented database system are similar to problems solved by relational systems, there are also significant problems that are unique. In particular, an object query can include a path expression to traverse a number of related collections. The order of collection traversals given by the path expression may not be the most efficient for processing the query. This creates a critical problem for the object query optimizer: selecting an algorithm to process the query based on direct navigation or various combinations of joins. This paper studies the different algorithms for processing path expressions with predicates, including depth-first navigation and forward and reverse joins. Using a cost model, it then compares their performance in different cases, according to memory size, selectivity of predicates, fan-out between collections, etc. It also presents a heuristic-based algorithm to find profitable n-ary operators for traversing collections, thus reducing the search space of query plans to process a query with a qualified path expression. An implementation based on the O2 system demonstrates the validity of the results." "The spatial join operation is benchmarked using variants of well-known spatial data structures such as the R-tree, R*-tree, R+-tree, and the PMR quadtree. The focus is on a spatial join with spatial output because the result of the spatial join frequently serves as input to subsequent spatial operations (i.e., a cascaded spatial join as would be common in a spatial spreadsheet). Thus, in addition to the time required to perform the spatial join itself (whose output is not always required to be spatial), the time to build the spatial data structure also plays an important role in the benchmark. The studied quantities are the time to build the data structure and the time to do the spatial join in an application domain consisting of planar line segment data.
Experiments reveal that spatial data structures based on a disjoint decomposition of space and bounding boxes (i.e., the R+-tree and the PMR quadtree with bounding boxes) outperform the other structures that are based upon" "Searching a database of 3D-volume objects for objects which are similar to a given 3D search object is an important problem which arises in a number of database applications, for example, in medicine and CAD. In this paper, we present a new geometry-based solution to the problem of searching for similar 3D-volume objects. The problem is motivated by a real application in the medical domain where volume similarity is used as a basis for surgery decisions. Our solution for an efficient similarity search on large databases of 3D-volume objects is based on a new geometric index structure. The basic idea of our new approach is to use the concept of hierarchical approximations of the 3D objects to speed up the search process. We formally show the correctness of our new approach and introduce two instantiations of our general idea, which are based on cuboid and octree approximations. We finally provide a performance evaluation of our new index structure revealing significant performance improvements over existing approaches." "Authors and publishers who wish their publications to be considered for review in Computational Linguistics should send a copy to the book review editor, Graeme Hirst, Department of Computer Science, University of Toronto, Toronto, Canada M5S 3G4. All relevant books received will be listed, but not all can be reviewed. Technical reports (other than dissertations) will not be listed or reviewed. Authors should be aware that some publishers will not send books for review (even when instructed to do so); authors wishing to inquire as to whether their book has been received for review may contact the book review editor."
"The explosion in complex multimedia content makes it crucial for database systems to support such data efficiently. This paper argues that the ""black-box"" ADTs used in current object-relational systems inhibit their performance, thereby limiting their use in emerging applications. Instead, the next generation of object-relational database systems should be based on enhanced abstract data type (E-ADT) technology. An E-ADT can expose the semantics of its methods to the database system, thereby permitting advanced query optimizations. Fundamental architectural changes are required to build a database system with E-ADTs; the added functionality should not compromise the modularity of data types and the extensibility of the type system. The implementation issues have been explored through the development of E-ADTs in Predator. Initial performance results demonstrate order-of-magnitude performance improvements." "DART '96 was held in conjunction with the Conference on Information and Knowledge Management (CIKM) on Nov 15th in Baltimore. Its goal was to provide a forum for researchers and practitioners involved in integrating concepts and technologies from active and real-time databases to discuss the state of the art and chart a course of action. To this end, nine speakers from academia, industry, and research laboratories were invited to provide a perspective on the theory and practice underlying active real-time databases. In addition, some selected papers were presented briefly to complement the invited speakers' talks. The second half of the workshop was devoted to discussions aimed at identifying the problems that still need to be addressed in the contexts of the diverse target applications." "Object-Relational DBMSs have been receiving a great deal of attention from industry analysts and the press as the next generation of database management systems. The motivation for a next-generation DBMS is driven by the reality of shortened business cycles.
This dynamic environment demands fast, cost-effective time-to-market of new or modified business processes, services, and products. To support this important business need, the next-generation DBMS must: (1) leverage the large investments made in existing relational technology, both in data and skill set; (2) take advantage of the flexibility, productivity, and performance benefits of OO modeling; and (3) integrate robust DBMS services for production quality systems. The objective of this article is to provide a brief overview of UniSQL's commercial object-relational database management system." "In this work, we devise and evaluate control strategies for combining two potentially powerful buffer management techniques in object bases: (1) buffer pool segmentation with segment-specific replacement criteria and (2) dual buffering consisting of copying objects from pages into object buffers. We distinguish two dimensions for exerting control on the buffer pool: (1) the copying time determines when objects are copied from their memory-resident home page and (2) the relocation time determines the occasion on which a (copied) object is transferred back into its home page. Along both dimensions, we distinguish an eager and a lazy strategy. Our extensive experimental results indicate that lazy object copying combined with an eager relocation strategy is almost always superior and significantly outperforms page-based buffering in most applications. In the Eighties, object-oriented database systems emerged as the potential next-generation database technology." "Streams are continuous data feeds generated by such sources as sensors, satellites, and stock feeds. Monitoring applications track data from numerous streams, filtering them for signs of abnormal activity, and processing them for purposes of filtering," "Active databases and real-time databases have been important areas of research in the recent past.
It has been recognized that many benefits can be gained by integrating active and real-time database technologies. However, there has not been much work done in the area of transaction processing in active real-time databases. This paper deals with an important aspect of transaction processing in active real-time databases, namely the problem of assigning priorities to transactions. In these systems, time-constrained transactions trigger other transactions during their execution. We present three policies for assigning priorities to parent, immediate, and deferred transactions executing on a multiprocessor system and then evaluate the policies through simulation. The policies use different amounts of semantic information about transactions to assign the priorities. The simulator has been validated against the results of earlier published studies." "Many applications require the management of spatial data. Clustering large spatial databases is an important problem which tries to find the densely populated regions in the feature space to be used in data mining, knowledge discovery, or efficient information retrieval. A good clustering approach should be efficient and detect clusters of arbitrary shape. It must be insensitive to outliers (noise) and the order of input data. We propose WaveCluster, a novel clustering approach based on wavelet transforms, which satisfies all the above requirements. Using the multi-resolution property of wavelet transforms, we can effectively identify arbitrary-shape clusters at different degrees of accuracy. We also demonstrate that WaveCluster is highly efficient in terms of time complexity. Experimental results on very large data sets are presented which show the efficiency and effectiveness of the proposed approach compared to other recent clustering methods." "In this paper we present a mechanism for approximately translating Boolean query constraints across heterogeneous information sources.
Achieving the best translation is challenging because sources support different constraints for formulating queries, and often these constraints cannot be precisely translated. For instance, a query [score > 8] might be ""perfectly"" translated as [rating > 0.8] at some site, but can only be approximated as [grade = A] at another. Unlike other work, our general framework adopts a customizable ""closeness"" metric for the translation that combines both precision and recall. Our results show that for query translation we need to handle interdependencies among both query conjuncts as well as disjuncts. As the basis, we identify the essential requirements of a rule system for users to encode the mappings for atomic semantic units. Our algorithm then translates complex queries by rewriting them in terms of the semantic units. We show that, under practical assumptions, our algorithm generates the best approximate translations with respect to the closeness metric of choice. We also present a case study to show how our technique may be applied in practice." "Classification is an important data mining problem. Although classification is a well-studied problem, most of the current classification algorithms require that all or a portion of the entire dataset remain permanently in memory. This limits their suitability for mining over large databases. We present a new decision-tree-based classification algorithm, called SPRINT, that removes all of the memory restrictions, and is fast and scalable. The algorithm has also been designed to be easily parallelized, allowing many processors to work together to build a single consistent model. This parallelization, also presented here, exhibits excellent scalability as well. The combination of these characteristics makes the proposed algorithm an ideal tool for data mining." "Text documents often contain valuable structured data that is hidden in regular English sentences.
This data is best exploited if available as a relational table that we could use for answering precise queries or for running data mining tasks. We explore a technique for extracting such tables from document collections that requires only a handful of training examples from users. These examples are used to generate extraction patterns, which in turn result in new tuples being extracted from the document collection. We build on this idea and present our Snowball system. Snowball introduces novel strategies for generating patterns and extracting tuples from plain-text documents. At each iteration of the extraction process, Snowball evaluates the quality of these patterns and tuples without human intervention, and keeps only the most reliable ones for the next iteration." "Decision support applications involve complex queries on very large databases. Since response times should be small, query optimization is critical. Users typically view the data as multidimensional data cubes. Each cell of the data cube is a view consisting of an aggregation of interest, like total sales. The values of many of these cells are dependent on the values of other cells in the data cube. A common and powerful query optimization technique is to materialize some or all of these cells rather than compute them from raw data each time. Commercial systems differ mainly in their approach to materializing the data cube. In this paper, we investigate the issue of which cells (views) to materialize when it is too expensive to materialize all views. A lattice framework is used to express dependencies among views. We present greedy algorithms that work off this lattice and determine a good set of views to materialize. The greedy algorithm performs within a small constant factor of optimal under a variety of models.
We then consider the most common case of the hypercube lattice and examine the choice of materialized views for hypercubes in detail, giving some good tradeoffs between the space used and the average time to answer a query." "Commercial applications usually rely on precompiled parameterized procedures to interact with a database. Unfortunately, executing a procedure with a set of parameters different from those used at compilation time may be arbitrarily suboptimal. Parametric query optimization (PQO) attempts to solve this problem by exhaustively determining the optimal plans at each point of the parameter space at compile time. However, PQO is likely not cost-effective if the query is executed infrequently or if it is executed with values only within a subset of the parameter space. In this paper, we propose instead to progressively explore the parameter space and build a parametric plan during several executions of the same query. We introduce algorithms that, as parametric plans are populated, are able to frequently bypass the optimizer but still execute optimal or near-optimal plans. Index Terms: parametric query optimization, adaptive optimization, selectivity estimation." "Data warehousing systems integrate information from operational data sources into a central repository to enable analysis and mining of the integrated information. During the integration process, source data typically undergoes a series of transformations, which may vary from simple algebraic operations or aggregations to complex ""data cleansing"" procedures. In a warehousing environment, the data lineage problem is that of tracing warehouse data items back to the original source items from which they were derived. We formally define the lineage tracing problem in the presence of general data warehouse transformations, and we present algorithms for lineage tracing in this environment.
Our tracing procedures take advantage of known structure or properties of transformations when present, but also work in the absence of such information. Our results can be used as the basis for a lineage tracing tool in a general warehousing setting, and can also guide the design of data warehouses that enable efficient lineage tracing." "With the use of data warehousing and online analytical processing (OLAP) for decision support applications, new security issues arise. The goal of this paper is to introduce an OLAP security design methodology, pointing out fields that require further research work. We present possible access control requirements categorized by their complexity. OLAP security mechanisms and their implementations in commercial systems are presented and checked for their suitability to address the requirements." "We present a random walk as an efficient and accurate approach to approximating certain aggregate queries about web pages. Our method uses a novel random walk to produce an almost uniformly distributed sample of web pages. The walk traverses a dynamically built regular undirected graph. Queries we have estimated using this method include the coverage of search engines, the proportion of pages belonging to .com and other domains, and the average size of web pages. Strong experimental evidence suggests that our walk produces accurate results quickly using very limited resources." " In addition, it constructs a special node Authors() and connects it to all pages corresponding to ""Author""s. The output graph is called SiteGraph. One way to write this in StruQL is: input DataGraph where Root(x); x -> * -> y; y -> l -> z; l in {""Paper"", ""TechReport"", ""Title"", ""Abstract"", ""Author""} create Authors(); Page(y); Page(z) link Page(y) -> l -> Page(z) where x -> * -> y1; y1 -> ""Author"" -> z1 link Authors() -> ""Author"" -> Page(z1) output SiteGraph In order to integrate information from several sources, we allow multiple input graphs.
When multiple input graphs are present, every occurrence of a collection needs to be preceded by a graph name. For clarity of presentation, however, we focus on queries with only one input graph. Intermixing the where, create, and link clauses makes the query easier to read. This is nothing more than syntactic convenience, since the meaning is the same as that of the query in which all clauses are joined together: input DataGraph where Root" " Several formal models for database access control have been proposed. However, little attention has been paid to temporal issues like authorizations with limited validity or obtained by deductive reasoning with temporal constraints. We present an access control model in which authorizations contain periodic temporal intervals of validity. An authorization is automatically granted in the time intervals specified by a periodic expression and revoked when such intervals expire. Deductive temporal rules with periodicity and order constraints are provided to derive new authorizations based on the presence or absence of other authorizations in specific periods of time. We prove the uniqueness of the set of implicit authorizations derivable at a given instant from the explicit ones, and we propose an algorithm to compute the global set of valid authorizations." " Data mining is computationally expensive. Since the benefits of data mining results are unpredictable, organizations may not be willing to buy new hardware for that purpose. We will present a system that enables data mining applications to run in parallel on networks of workstations in a fault-tolerant manner. We will describe our parallelization of a combinatorial pattern discovery algorithm and a classification tree algorithm. We will demonstrate the effectiveness of our system with two real applications: discovering active motifs in protein sequences and predicting foreign exchange rate movement."
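The periodic-authorization abstract above can be made concrete with a toy sketch (purely illustrative; the names `PeriodicAuth` and `check`, and the weekday/hour encoding of a periodic expression, are my assumptions, not the paper's model): an authorization is valid exactly during the intervals its periodic expression generates, here simplified to "these weekdays, between these hours".

```python
from datetime import datetime

class PeriodicAuth:
    """Toy periodic authorization: the grant is valid on the given weekdays
    (0 = Monday ... 6 = Sunday) between start_hour and end_hour."""
    def __init__(self, subject, obj, right, weekdays, start_hour, end_hour):
        self.subject, self.obj, self.right = subject, obj, right
        self.weekdays = set(weekdays)
        self.start_hour, self.end_hour = start_hour, end_hour

    def valid_at(self, t: datetime) -> bool:
        # Valid iff t falls inside one of the periodic intervals.
        return (t.weekday() in self.weekdays
                and self.start_hour <= t.hour < self.end_hour)

def check(auths, subject, obj, right, t):
    """Access is allowed iff some matching authorization is valid at time t."""
    return any(a.subject == subject and a.obj == obj and a.right == right
               and a.valid_at(t) for a in auths)
```

For instance, a grant for weekdays 0-4 between hours 9 and 17 admits a Monday-10:00 access and rejects the same request on a Saturday; in the paper's richer model, such grants can also be derived by deductive rules rather than stated explicitly.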
" A very promising idea for fast searching in traditional and multimedia databases is to map objects into points in k-d space, using k feature-extraction functions provided by a domain expert [Jag91]. Thus, we can subsequently use highly fine-tuned spatial access methods (SAMs) to answer several types of queries, including the 'Query By Example' type (which translates to a range query); the 'all pairs' query (which translates to a spatial join [BKSS94]); the nearest-neighbor or best-match query, etc. However, designing feature-extraction functions can be hard. It is relatively easier for a domain expert to assess the similarity/distance of two objects. Given only the distance information, though, it is not obvious how to map objects into points. This is exactly the topic of this paper. We describe a fast algorithm to map objects into points in some k-dimensional space (k is user-defined), such that the dissimilarities are preserved." " Database systems nowadays have an increasingly important role in the knowledge-based society, in which computers have penetrated all fields of activity and the Internet continues to develop worldwide. In the current informatics context, the development of applications with databases is the work of specialists. Using databases, accessing a database from various applications, and related concepts have become accessible to all categories of IT users. This paper aims to summarize the curricular area regarding the fundamental database systems issues which are necessary in order to train specialists in economic informatics higher education. Database systems integrate and interfere with several informatics technologies and are therefore more difficult to understand and use. Thus, students should already know a set of minimum, mandatory concepts and their practical implementation: computer systems, programming techniques, programming languages, data structures.
The article also presents current trends in the evolution of database systems, in the context of economic informatics." " Web caching proxy servers are essential for improving web performance and scalability, and recent research has focused on making proxy caching work for database-backed web sites. In this paper, we explore a new proxy caching framework that exploits the query semantics of HTML forms. We identify two common classes of form-based queries from real-world database-backed web sites, namely, keyword-based queries and function-embedded queries. Using typical examples of these queries, we study two representative caching schemes within our framework: (i) traditional passive query caching, and (ii) active query caching, in which the proxy cache can service a request by evaluating a query over the contents of the cache. Results from our experimental implementation show that our form-based proxy is a general and flexible approach that efficiently enables active caching schemes for database-backed web sites. Furthermore, handling query containment at the proxy yields significant performance advantages over passive query caching, but extending the power of the active cache to do full semantic caching appears to be less generally effective." " Structured data stored in files can benefit from standard database technology. In particular, we show here how such data can be queried and updated using declarative database languages. We introduce the notion of a structuring schema, which consists of a grammar annotated with database programs. Based on a structuring schema, a file can be viewed as a database structure, and queried and updated as such. For queries, we show that almost standard database optimization techniques can be used to answer queries without having to construct the entire database. For updates, we study in depth the propagation to the file of an update specified on the database view of this file.
The problem is infeasible in general, and we present a number of negative results. The positive results consist of techniques that allow updates to be propagated efficiently under some reasonable locality conditions on the structuring schemas." " The challenge of peer-to-peer computing goes beyond simple file sharing. In the DBGlobe project, we view the multitude of peers carrying data and services as a superdatabase. Our goal is to develop a data management system for modeling, indexing and querying data hosted by such massively distributed, autonomous and possibly mobile peers. We employ a service-oriented approach, in that data are encapsulated in services. Direct querying of data is also supported by an XML-based query language. In this paper, we present our research results along the following topics: (a) infrastructure support, including mobile peers and the creation of context-dependent communities, (b) metadata management for services and peers, including location-dependent data, (c) filters for efficiently routing path queries on hierarchical data, and (d) querying using the AXML language that incorporates service calls inside XML documents." "Because of the Internet, we believe that in the long run there will be alternative providers for all of these three resources for any given application. Data providers will bring more and more data and more and more different kinds of data to the net. Likewise, function providers will develop new methods to process and work with the data; e.g., function providers might develop new algorithms to compress data or to produce thumbnails out of large images and try to sell these on the Internet.
It is also conceivable that some people allow other people to use spare cycles of their idle machines in the Internet (as in the Condor system of the University of Wisconsin) or that some companies (cycle providers) even specialize in selling computing time to businesses that occasionally need to carry out very complex operations for which regular hardware is not sufficient." "Conduct of scientific and engineering research is becoming critically dependent on effective management of scientific and engineering data and technical information. The rapid advances in scientific instrumentation, computer and communication technologies enable scientists to collect, generate, process, and share unprecedented volumes of data. For example, the Earth Observing System Data and Information System (EOSDIS) has the task of managing the data from NASA's Earth science research satellites and field measurement programs, and other data essential for the interpretation of these measurements in support of global change research. Apart from being able to handle a stream of 1 terabyte of data daily by the year 2000, EOSDIS will also need to provide transparent access to heterogeneous data held in the archives of several US government agencies, organizations and countries. A single graphical user interface employing the Global Change Master Directory needs to help users locate data sets of interest among massive and diverse data sets, or find the appropriate data analysis tools, regardless of their location. Another major international effort in the area of human genome research faces some similar, as well as unique, issues due to the complexity of the genome data, special querying requirements and much more heterogeneous collections of data." "Recently there has been an increasing interest in supporting bulk operations on multidimensional index structures. Bulk loading refers to the process of creating an initial index structure for a presumably very large data set.
In this paper, we present a generic algorithm for bulk loading which is applicable to a broad class of index structures. Our approach differs completely from previous ones for the following reasons. First, sorting multidimensional data according to a predefined global ordering is completely avoided. Instead, our approach is based on the standard routines for splitting and merging pages which are already fully implemented in the corresponding index structure. Second, in contrast to inserting records one by one, our approach is based on the idea of inserting multiple records simultaneously. As an example, we demonstrate in this paper how to apply our technique to the R-tree family. For R-trees, we show that the I/O performance of our generic algorithm meets the lower bound of external sorting. Empirical results demonstrate that performance improvements are also achieved in practice without sacrificing query performance." "To speed up clustering algorithms, data summarization methods have been proposed, which first summarize the data set by computing suitable representative objects. Then, a clustering algorithm is applied to these representatives only, and a clustering structure for the whole data set is derived, based on the result for the representatives. Most previous methods are, however, limited in their application domain. They are in general based on sufficient statistics such as the linear sum of a set of points, which assumes that the data is from a vector space. On the other hand, in many important applications, the data is from a metric non-vector space, and only distances between objects can be exploited to construct effective data summarizations. In this paper, we develop a new data summarization method based only on distance information that can be applied directly to non-vector data.
An extensive performance evaluation shows that our method is very effective in finding the hierarchical clustering structure of non-vector data using only a very small number of data summarizations, thus resulting in a large reduction of runtime while trading only very little clustering quality." "Detecting and extracting modifications from information sources is an integral part of data warehousing. For unsophisticated sources, in practice it is often necessary to infer modifications by periodically comparing snapshots of data from the source. Although this snapshot differential problem is closely related to traditional joins and outerjoins, there are significant differences, which lead to simple new algorithms. In particular, we present algorithms that perform (possibly lossy) compression of records. We also present a window algorithm that works very well if the snapshots are not ""very different."" The algorithms are studied via analysis and an implementation of two of them; the results illustrate the potential gains achievable with the new algorithms." This paper describes a model that integrates the execution of triggers with the evaluation of declarative constraints in SQL database systems. This model achieves full compatibility with the 1992 international standard for SQL (SQL92). It preserves the set semantics for declarative constraint evaluation while allowing the execution of powerful procedural triggers. It was implemented in DB2 for common servers and was recently accepted as the model for the emerging SQL standard (SQL3). "Applications in which plain text coexists with structured data are pervasive. Commercial relational database management systems (RDBMSs) generally provide querying capabilities for text attributes that incorporate state-of-the-art information retrieval (IR) relevance ranking strategies, but this search functionality requires that queries specify the exact column or columns against which a given list of keywords is to be matched."
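The snapshot differential problem described above (inferring modifications by comparing two snapshots of a source) can be stated concretely. The following is a minimal in-memory sketch in Python; the function name and the keyed-dictionary data layout are assumptions for illustration, and the paper's actual algorithms additionally apply (possibly lossy) record compression and a window scheme to bound memory:

```python
def snapshot_diff(old, new):
    """Compute the differential between two snapshots of a source.

    Each snapshot maps a record key to its value. Returns the
    (inserts, deletes, updates) needed to turn `old` into `new`.
    A hypothetical in-memory sketch only: the real algorithms also
    compress records and bound memory with a sliding window.
    """
    inserts = {k: v for k, v in new.items() if k not in old}
    deletes = {k: v for k, v in old.items() if k not in new}
    updates = {k: (old[k], v) for k, v in new.items()
               if k in old and old[k] != v}
    return inserts, deletes, updates
```

Keying by record identifier makes the comparison a hash lookup per record rather than a full outerjoin.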
"Real-world entities are inherently spatially and temporally referenced, and database applications increasingly exploit databases that record the past, present, and anticipated future locations of entities, e.g., the residences of customers obtained by the geo-coding of addresses. Indices that efficiently support queries on the spatio-temporal extents of such entities are needed. However, past indexing research has progressed in largely separate spatial and temporal streams. Adding time dimensions to spatial indices, as if time were a spatial dimension, neither supports nor exploits the special properties of time. On the other hand, temporal indices are generally not amenable to extension with spatial dimensions. This paper proposes the first efficient and versatile index for a general class of spatio-temporal data: the discretely changing spatial aspect of an object may be a point or may have an extent; both transaction time and valid time are supported; and a generalized notion of the current time, now, is accommodated for both temporal dimensions. The index is based on the R*-tree and provides means of prioritizing space versus time, which enables it to adapt to spatially and temporally restrictive queries. Performance experiments are reported that evaluate pertinent aspects of the index." "This paper presents a novel strategy for temporal coalescing. Temporal coalescing merges the temporal extents of value-equivalent tuples. A temporal extent is usually coalesced offline and stored since coalescing is an expensive operation. But the temporal extent of a tuple with now, times at different granularities, or incomplete times cannot be determined until query evaluation. This paper presents a strategy to partially coalesce temporal extents by identifying regions that are potentially covered. The covered regions can be used to evaluate temporal predicates and constructors on the coalesced extent. Our strategy uses standard relational database technology.
We quantify the cost using the Oracle DBMS." "The paper presents a systematic review of the relative efficacy of traditional listing and the USPS address list as sampling frames for national probability samples of households. NORC and ISR collaborated to compare these two national area-probability sampling frames for household surveys. We conducted this comparison in an ongoing survey operation which combines the current wave of the HRS with the first wave of NSHAP. Since 2000, survey samplers have been exploring the potential of the USPS address lists to serve as a sampling frame for probability samples from the general population. We report the relative coverage properties of the two frames, as well as predictors of the coverage and performance of the USPS frame. The research provides insight into the coverage and cost/benefit trade-offs that researchers can expect from traditionally listed frames and USPS address databases." "Because the truth is that relational database management systems aren't very good at what they were supposed to do: help us get answers from our data, unless we ignore a good deal of relational dogma and use a new approach to data modeling when we're building a decision support database. The relationally correct data modeling everyone is taught in school is only useful for achieving high performance in on-line transaction processing. The resulting model fragments the data into many tables of relatively equal width and depth to speed transactions along. But using that model with a real-world decision support system almost guarantees failure. Take the case of a mid-size durable goods manufacturer, one of the smokestacks on the horizon. After a year's effort, IS staffers had a database design, had loaded several hundred megabytes of data from 6 million invoices into a multiprocessor Teradata relational database machine, and users had begun to try to use the system. But there were 50 tables in their fairly typical schema."
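The temporal coalescing operation described earlier (merging the temporal extents of value-equivalent tuples) can be sketched in a few lines. This is a minimal in-memory illustration; the function name and the half-open-period representation are assumptions, not the paper's partial-coalescing strategy, which works inside the relational engine:

```python
from collections import defaultdict

def coalesce(tuples):
    """Coalesce value-equivalent tuples with overlapping or adjacent periods.

    `tuples` is a list of (value, start, end) with half-open periods
    [start, end). Returns one maximal period per run of value-equivalent
    tuples. A hypothetical sketch of the coalescing operation itself,
    not of the paper's partial-coalescing strategy.
    """
    by_value = defaultdict(list)
    for v, s, e in tuples:
        by_value[v].append((s, e))
    out = []
    for v, periods in by_value.items():
        periods.sort()
        cs, ce = periods[0]
        for s, e in periods[1:]:
            if s <= ce:              # overlapping or adjacent: extend
                ce = max(ce, e)
            else:                    # gap: emit the finished period
                out.append((v, cs, ce))
                cs, ce = s, e
        out.append((v, cs, ce))
    return out
```

The sort-then-merge pass is what makes offline coalescing expensive on large relations, which motivates the paper's lazy, query-time alternative.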
"With recent advances in storage and network technology it is now possible to provide movie on demand (MOD) service, eliminating the inflexibility inherent in today's broadcast cable systems. A MOD server is a computer system that stores movies in compressed digital form and provides support for different portions of compressed movie data to be accessed and transmitted concurrently. In this paper, we present a low-cost storage architecture for a MOD server that relies principally on disks. The high bandwidths of disks in conjunction with a clever strategy for striping movies on them are utilized in order to enable simultaneous access and transmission of ""certain"" different portions of a movie. We also present a wide range of schemes for implementing VCR-like functions." "Database management systems (DBMS) store and manage large sets of shared data whereas application programs perform the data processing tasks, e.g., to run the business of a company. Often, these programs are written in various programming languages (PLs) embodying different type systems. Thus, DBMSs should be ""multi-lingual"" to serve the application requests. This is typically achieved by providing a DBMS and its database language (DBL), like SQL2, with its own type system. To access the database (DB), a DBL/PL-coupling called database API (DB-API or API for short) is required." "Clustering is the process of grouping a set of objects into classes of similar objects. Although definitions of similarity vary from one clustering model to another, in most of these models the concept of similarity is based on distances, e.g., Euclidean distance or cosine distance. In other words, similar objects are required to have close values on at least a set of dimensions. In this paper, we explore a more general type of similarity. Under the pCluster model we propose, two objects are similar if they exhibit a coherent pattern on a subset of dimensions.
For instance, in DNA microarray analysis, the expression levels of two genes may rise and fall synchronously in response to a set of environmental stimuli. Although the magnitude of their expression levels may not be close, the patterns they exhibit can be very much alike. Discovery of such clusters of genes is essential in revealing significant connections in gene regulatory networks. E-commerce applications, such as collaborative filtering, can also benefit from the new model, which captures not only the closeness of values of certain leading indicators but also the closeness of (purchasing, browsing, etc.) patterns exhibited by the customers." "MOCHA is a novel database middleware system designed to interconnect data sources distributed over a wide area network. MOCHA is built around the notion that the middleware for a large-scale distributed environment should be self-extensible. This means that new application-specific data types and query operators needed for query processing are deployed to remote sites in an automatic fashion by the middleware system itself. In MOCHA, this is realized by shipping Java classes implementing these types or operators to the remote sites, where they can be used to manipulate the data of interest. All these Java classes are first stored in one or more code repositories from which MOCHA later retrieves and deploys them on a ""need-to-do"" basis. A major goal behind this idea of automatic code deployment is to fulfill the need for application-specific processing components at remote sites that do not provide them. MOCHA capitalizes on its ability to automatically deploy code to provide an efficient query processing service. By shipping code for query operators, MOCHA can produce efficient plans that place the execution of powerful data-reducing operators (filters) on the data sources.
Examples of such operators are aggregates, predicates and data mining operators, which return a much smaller abstraction of the original data. In contrast, data-inflating operators that produce results larger than their arguments are evaluated near the client." "Though the query is posted in keywords, the returned results contain exactly the information that the user is querying for, which may not be explicitly specified in the input query. The required information is often not contained in the Web pages whose URLs are returned by a search engine. FACT is capable of navigating in the neighborhood of these pages to find those that really contain the queried segments. The system does not require prior knowledge about users such as user profiles or preprocessing of Web pages such as wrapper generation." "Regular readers of this column will have become familiar with database language SQL -- indeed, most readers are already familiar with it. We have also discussed the fact that the SQL standard is being published in multiple parts and have even discussed one of those parts in some detail [1]. Another standard, based on SQL and its structured user-defined types [2], has been developed and published by the International Organization for Standardization (ISO). This standard, like SQL, is divided into multiple parts (more independent than the parts of SQL, in fact). Some parts of this other standard, known as SQL/MM, have already been published and are currently in revision, while others are still in preparation for initial publication. In this issue, we introduce SQL/MM and review each of its parts, necessarily at a high level." "Adapt/X Harness is an information integration system, platform, and tool set that provides integrated and seamless access to heterogeneous and distributed information in a networked environment (LAN, WAN, Intranets, and the global Internet).
It allows cost-effective access, keyword and attribute queries, navigation, and linking and operations on these information resources using popular Internet browsers without requiring any translation, transfer, or rehosting of the original information resource. Information resources such as text and multimedia documents, files of various types, software applications, relational databases, email messages, and references can all be ""registered"" with Adapt/X Harness and are organized into collections. Information consumers can then use a standard WWW browser to search for the required information with specific (keyword or attribute) queries or they can browse through a ""repository"" looking for items of interest." XML data is likely to be widely used as a data exchange format but users also need to store and query XML data. The purpose of this panel is to explore whether and how to best provide this functionality. "Data warehousing and on-line analytical processing (OLAP) are essential elements of decision support, which has increasingly become a focus of the database industry. Many commercial products and services are now available, and all of the principal database management system vendors now have offerings in these areas. Decision support places some rather different requirements on database technology compared to traditional on-line transaction processing applications. This paper provides an overview of data warehousing and OLAP technologies, with an emphasis on their new requirements. We describe back end tools for extracting, cleaning and loading data into a data warehouse; multidimensional data models typical of OLAP; front end client tools for querying and data analysis; server extensions for efficient query processing; and tools for metadata management and for managing the warehouse.
In addition to surveying the state of the art, this paper also identifies some promising research issues, some of which are related to problems that the database research community has worked on for years, while others are only just beginning to be addressed." "Multimedia information systems have emerged as an essential component of many application domains ranging from library information systems to entertainment technology. However, most implementations of these systems cannot support the continuous display of multimedia objects and suffer from frequent disruptions and delays termed hiccups. This is due to the low I/O bandwidth of the current disk technology, the high bandwidth requirement of multimedia objects, and the large size of these objects that almost always requires them to be disk resident. One approach to resolve this limitation is to decluster a multimedia object across multiple disk drives in order to employ the aggregate bandwidth of several disks to support the continuous retrieval (and display) of objects." "Sensor networking technologies have developed very rapidly in the last ten years. In many situations, high-quality multimedia streams may be required to provide detailed information about the hot spots in a large-scale network. With the limited capabilities of sensor nodes and sensor networks, it is very difficult to support multimedia streams in the current sensor network structure. In this paper, we propose to enhance the sensor network by deploying a limited number of mobile ""swarms"". The swarm nodes have much higher capabilities than the sensor nodes in terms of both hardware functionalities and networking capabilities. The mobile swarms can be directed to the hot spots in the sensor network to provide detailed information about the intended area. With the help of mobile swarms, high-quality multimedia streams can be supported in the large-scale sensor network without excessive cost.
The wireless backbone network for connecting different swarms and the routing schemes for supporting such a unified architecture are also discussed and verified via simulations." Some problems connected with the handling of null values in SQL are discussed. A definition of sure answers to SQL queries is proposed which takes care of the "no information" meaning of null values in SQL. An algorithm is presented for modifying SQL queries such that answers are not changed for databases without null values but sure answers are obtained for arbitrary databases with standard SQL semantics. "Materialization is a useful abstraction pattern that can be identified in many application settings. Intuitively, materialization is the relationship between a class of categories (e.g., models of cars) and a class of more concrete objects (e.g., individual cars). This paper gives a quasi-formal semantic definition of materialization in terms of the usual is-a and is-of abstractions, and of a class/metaclass correspondence. New and powerful inheritance mechanisms are associated with materialization. Examples, properties, and extensions of materialization are also presented. Providing materialization as an abstraction mechanism for conceptual modeling enhances expressiveness by a controlled introduction of classification at the application level." "The emergence of dynamic page generation is primarily driven by the need to deliver customized and personalized Web pages. Dynamic scripting technologies allow Web sites to assemble pages ""on the fly"" based on various run-time parameters (e.g., form-based parameters) in an attempt to tailor content to each individual user. Web developers have a wide variety of choices in dynamic scripting languages, e.g., Java Server Pages (JSP) and servlets from Sun; Active Server Pages (ASP) from Microsoft.
A major disadvantage of dynamic scripting technologies, however, is that they reduce Web and application server scalability because of the additional load placed on the Web/application server. In addition to pure script execution overhead, there are several other types of delay associated with generating dynamic pages." "Semantic data modelling is the established method for the requirements definition and the conceptual specification of application systems. In large projects and especially in enterprise data models the cost of creating a data model amounts to a large proportion of the overall cost. On the other hand there is a general pressure to reduce the cost of data modelling for application systems to harness the skyrocketing costs of data processing in a company. The standard textbook modelling process calls for the modelling of single entities to represent simple facts and combining these into a model in a bottom-up fashion: 'An entity is a concept, person, thing" "Over the past years several works have proposed access control models for XML data where only read-access rights over non-recursive DTDs are considered. A small number of works have studied the access rights for updates. In this paper, we present a general model for specifying access control on XML data in the presence of the update operations of the W3C XQuery Update Facility. Our approach for enforcing such update specifications is based on the notion of query rewriting. A major issue is that query rewriting for recursive DTDs is still an open problem. We show that this limitation can be avoided using only the expressive power of the standard XPath, and we propose a linear algorithm to rewrite each update operation defined over an arbitrary DTD (recursive or not) into a safe one in order to be evaluated only over the XML data which can be updated by the user.
This paper represents the first effort towards secure XML updating in the presence of arbitrary DTDs (recursive or not) and a rich fragment of XPath." "The Information Management Group at Dublin City University has research themes such as digital multimedia, interoperable systems and database engineering. In the area of digital multimedia, a collaboration with our School of Electronic Engineering has formed the Centre for Digital Video Processing, a university-designated research centre whose aim is to research, develop and evaluate content-based operations on digital video information. To achieve this goal, the range of expertise in this centre covers the complete gamut from image analysis and feature extraction through to video search engine technology and interfaces to video browsing. The Interoperable Systems Group has research interests in federated databases and interoperability, object modelling and database engineering. This report describes the research activities of the major groupings within the Information Management community in Dublin City University." "RasDaMan is a universal (i.e., domain-independent) array DBMS for multidimensional arrays of arbitrary size and structure. A declarative, SQL-based array query language offers flexible retrieval and manipulation. Efficient server-based query evaluation is enabled by an intelligent optimizer and a streamlined storage architecture based on flexible array tiling and compression." "The proliferation of mobile and pervasive computing devices has brought energy constraints into the limelight, together with performance considerations. Energy-conscious design is important at all levels of the system architecture, and the software has a key role to play in conserving the battery energy on these devices.
With the increasing popularity of spatial database applications, and their anticipated deployment on mobile devices (such as road atlases and GPS-based applications), it is critical to examine the energy implications of spatial data storage and access methods for memory-resident datasets." "Microsoft SQL Server was successful for many years for transaction processing and decision support workloads with neither merge join nor hash join, relying entirely on nested loops and index nested loops join. How much difference do additional join algorithms really make, and how much system performance do they actually add? In a pure OLTP workload that requires only record-to-record navigation, intuition agrees that index nested loops join is sufficient. For a DSS workload, however, the question is much more complex. To answer this question, we have analyzed TPC-D query performance using an internal build of SQL Server with merge join and hash join enabled and disabled. It shows that merge join and hash join are both required to achieve the best performance for decision support workloads." "A star schema is very popular for modeling data warehouses and data marts. Therefore, it is important that a database system which is used for implementing such a data warehouse or data mart is able to efficiently handle operations on such a schema. In this paper we will describe how one of these operations, the join operation --- probably the most important operation --- is implemented in the IBM Informix Extended Parallel Server (XPS)." "Smartcards are the most secure portable computing device today. They have been used successfully in applications involving money, and proprietary and personal data (such as banking, healthcare, insurance, etc.). As smartcards get more powerful (with 32-bit CPU and more than 1 MB of stable memory in the next versions) and become multi-application, the need for database management arises.
However, smartcards have severe hardware limitations (very slow write, very little RAM, constrained stable memory, no autonomy, etc.) which make traditional database technology irrelevant. The major problem is scaling down database techniques so they perform well under these limitations." "With the increasing importance of XML, LDAP directories, and text-based information sources on the Internet, there is an ever-greater need to evaluate queries involving (sub)string matching. In many cases, matches need to be on multiple attributes/dimensions, with correlations between the multiple dimensions. Effective query optimization in this context requires good selectivity estimates. In this paper, we use pruned count-suffix trees (PSTs) as the basic data structure for substring selectivity estimation. For the 1-D problem, we present a novel technique called MO (Maximal Overlap)." "The main characteristics of the language are its descriptiveness, its capability to map between schemas written in the relational, object-oriented, ER, or EXPRESS data model, and its facilities for specifying user-defined update operations on the view that are to be propagated to the data sources. Finally, we briefly discuss how this mapping information is employed to convert queries formulated with respect to the integrated view into database operations over the heterogeneous data sources." "There are also problems in applying values obtained from the study populations to economic models, and in predicting health state values in those who avoid a fracture. The review recommends a set of health state values as part of a ""reference case"" for use in economic models. Due to the paucity of good-quality estimates in this area, further recommendations are made regarding the design of future studies to collect HSVs relevant to economic models." "To begin with, which RM do we mean? There are several lines of RM and each has had its own evolution.
The original line was originated by E. F. Codd, who developed during the Seventies what he later named RM/V1. In 1979 he proposed a new model, RM/T, which meant a huge change from the original RM approach. In the Eighties, sensing that plain people did not keep up with him" "Historically, there has been little overlap between the database and networking research communities; they operate on very different levels and focus on very different issues. While this strict separation of concerns has lasted for many years, in this talk I will argue that the gap has recently narrowed to the point where the two fields now have much to say to each other." "Our design enables self-starting distributed queries that jump directly to the lowest common ancestor of the query result, dramatically reducing query response times. We present a novel query-evaluate-gather technique (using XSLT) for detecting (1) which data in a local database fragment is part of the query result, and (2) how to gather the missing parts. We define partitioning and cache invariants that ensure that even partial matches on cached data are exploited and that correct answers are returned, despite our dynamic query-driven caching. Experimental results demonstrate that our techniques dramatically increase query throughputs and decrease query response times in wide area sensor databases." "Query processing is one of the most critical issues in Object-Oriented DBMSs. Extensible optimizers with efficient search strategies require a cost model to select the most efficient execution plans. In this paper we propose and partially validate a generic cost model for Object-Oriented DBMSs. The storage model and its access methods support clustered and nested collections, links, and path indexes. Queries may involve complex predicates with qualified path expressions. We propose a method for estimating the number of block accesses to clustered collections and a parameterized execution model for evaluating predicates.
We estimate the costs of path expression traversals in different cases of physical clustering of the supporting collections. The model is validated through experiments with the O2 DBMS." "Data warehouses are used to collect and analyze data from remote sources. The data collected often originate from transactional information and can become very large. This paper presents a framework for incrementally removing warehouse data (without a need to fully recompute), offering two choices. One is to expunge data, in which case the result is as if the data had never existed. The second is to expire data, in which case views defined over the data are not necessarily affected. Using the framework, a user or administrator can specify what data to expire or expunge, what auxiliary data is to be kept for facilitating incremental view maintenance, what type of updates are expected from external sources, and how the system should compensate when data is expired or other parameters changed. We present algorithms for the various expiration and compensation actions, and we show how our framework can be implemented on top of a conventional RDBMS." "Three topic areas relevant to the database community are identified. First is user-centered information analysis environments for correlation and manipulation of multimedia and complex information resources based on semantic content, visualizing complex and abstract information spaces, value-based filtering, and search, retrieval, and manipulation of multimedia and complex documents. Second is scalable, secure, and interoperable information repositories supporting a wide range of information resources and services. Issues to be addressed include: registration and security of information resources and services, access control and rights management, automatic classification and federation, and distributed service quality assurance facilities. Third is the intelligent integration of information."
"In this paper, we present and evaluate alternative techniques to effect the use of location-independent identifiers in distributed database systems. Location-independent identifiers are important to take full advantage of migration and replication as they allow accessing objects without visiting the servers that created the objects. We will show how a distributed index structure can be used for this purpose, we will present a simple, yet effective replication strategy for the nodes of the index, and we will present alternative strategies to traverse the index in order to dereference identifiers (i.e., find a copy of an object given its identifier). Furthermore, we will discuss the results of performance experiments that show some tradeoffs of the proposed replication and traversal strategies and compare our techniques to an approach that uses location-dependent identifiers like many systems today." We capture such queries in our definition of preference queries that use a weight function over a relation's attributes to derive a score for each tuple. Database systems cannot efficiently produce the top results of a preference query because they need to evaluate the weight function over all tuples of the relation. PREFER answers preference queries efficiently by using materialized views that have been pre-processed and stored. "This is my first issue as associate editor of software reviews for The American Statistician. In this column, I will introduce myself, comment on the types of software reviews that can be published in this section of The American Statistician, and encourage others in the profession to consider taking on the task of reviewing statistical software packages."
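The preference queries that PREFER accelerates can be made concrete. The following is a minimal Python sketch of the naive full-scan evaluation that the abstract describes as inefficient; the function name and the tuples-as-dictionaries layout are assumptions, and PREFER itself avoids this scan by using materialized ranked views:

```python
def preference_topk(tuples, weights, k):
    """Naive top-k evaluation of a preference query.

    Scores every tuple with a linear weight function over its
    attributes and keeps the k highest-scoring tuples. This is the
    full-scan baseline; PREFER answers the same query from
    pre-processed materialized views instead.
    """
    def score(t):
        return sum(w * t[attr] for attr, w in weights.items())
    return sorted(tuples, key=score, reverse=True)[:k]
```

For example, with weights {'price': -1, 'rating': 2} the score trades off low price against high rating, and only the k best tuples are returned.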
"We have developed an XML repository management system, called Rainbow, designed to exploit relational database technology to manage XML data based on a flexible mapping strategy [2]. As shown in Figure 1, the Rainbow system is composed of three sub-systems, i.e., a loading manager, a mapping manager, and an XML query engine built on top of a relational database. Rainbow first loads an XML Schema into the relational database via the loading query provided by the loading manager. Loading queries are expressed in XQuery. Then the mapping manager provides an extraction view query generated from the loading query." "A major challenge still facing the designers and implementors of database programming languages (DBPLs) is that of query optimisation. In the paper we first give the syntax of our archetypal DBPL and briefly discuss its semantics. We then define a small but powerful algebra of operators over the set data type, provide some key equivalences for expressions in these operators, and list transformation principles for optimising expressions." "This article gives methods for statically analyzing sets of active database rules to determine if the rules are (1) guaranteed to terminate, (2) guaranteed to produce a unique final database state, and (3) guaranteed to produce a unique stream of observable actions. If the analysis determines that one of these properties is not guaranteed, it isolates the rules responsible for the problem and determines criteria that, if satisfied, guarantee the property. The analysis methods are presented in the context of the Starburst Rule System." "The ORES TDBMS will support the efficient and user-friendly representation and manipulation of temporal knowledge and it will be developed as an extension of the relational database management system INGRES. The ORES project will result in a general purpose TDBMS, the development of which is based on a practical and yet theoretically sound approach.
More specifically, the overall objectives of the ORES project are: i) to develop a formal foundation for temporal representation and reasoning, ii) to develop a temporal query language that will be upwards consistent with SQL2, iii) to develop models, techniques and tools for user-friendly and effective definition, manipulation and validation of temporal database applications, and iv) to evaluate the ORES environment using a hospital case study." "We report the finding of ""triply magic"" conditions (the doubly magic frequency-intensity conditions of an optical dipole trap plus the magic magnetic field) for the microwave transitions of optically trapped alkali-metal atoms. The differential light shift (DLS) induced by a degenerate two-photon process is adopted to compensate a DLS associated with the one-photon process. Thus, doubly magic conditions for the intensity and frequency of the optical trap beam can be found. Moreover, the DLS decouples from the magnetic field in a linearly polarized optical dipole trap, so that the magic condition of the magnetic field can be applied independently." "Multimedia applications demand specific support from database management systems due to the characteristics of multimedia data and their interactive usage. This includes integrated support for high-volume and time-dependent (continuous) data types like audio and video. One critical issue is to provide handling of continuous data streams, including buffer management, as needed for multimedia presentations. Buffer management strategies for continuous data have to consider specific requirements, such as preserving the continuity of presentations and allowing their immediate continuation after frequent user interactions, through appropriate use of buffer resources. Existing buffer management strategies do not sufficiently support the handling of continuous data streams in highly interactive multimedia presentations." 
"Hardware developments allow wonderful reliability and essentially limitless capabilities in storage, networks, memory, and processing power. Costs have dropped dramatically. PCs are becoming ubiquitous. The features and scalability of DBMS software have advanced to the point where most commercial systems can solve virtually all OLTP and DSS requirements. The Internet and application software packages allow rapid deployment and facilitate a broad range of solutions." In this paper we present a second enhancement: a single operator that lets the analyst get summarized reasons for drops or increases observed at an aggregated level. This eliminates the need to manually drill down for such reasons. We develop an information-theoretic formulation for expressing the reasons that is compact and easy to interpret. We design a dynamic programming algorithm that requires only one pass of the data, improving significantly over our initial greedy algorithm that required multiple passes. "This paper studies workfile disk management for concurrent mergesorts in a multiprocessor database system. Specifically, we examine the impacts of workfile disk allocation and data striping on the average mergesort response time. Concurrent mergesorts in a multiprocessor system can create severe I/O interference in which a large number of sequential write requests are continuously issued to the same workfile disk and block other read requests for a long period of time. We examine through detailed simulations a logical partitioning approach to workfile disk management and evaluate the effectiveness of data striping." The current paper outlines a number of important changes that face the database community and presents an agenda for how some of these challenges can be met. This database agenda is currently being addressed in the Enterprise Group at Microsoft Corporation. 
The paper concludes with a scenario for 2001 which reflects the Microsoft vision of "Information at your fingertips." "With increasing global exposure, today's enterprises must react quickly to changes, rapidly develop new services and products, and at the same time improve productivity and quality and reduce cost. Business process re-engineering and workflow automation to coordinate activities throughout the enterprise are recognized as important emerging technologies to support these requirements. Rosy estimates of a multi-billion dollar marketplace for workflow software have resulted in significant commercial activities in the area, with nearly a hundred products now claiming to support workflow automation. While many help to automate document- and image-driven office applications" "After having grown briskly in the last several years, Internet services have not only entered the mainstream of society but also moved into areas where a plain best-effort service model is no longer adequate. This phenomenon is well illustrated by two major thrust areas: electronic commerce, where poor performance or unavailability could be very expensive, and streaming media services (including voice over IP), where quality of service is a fundamental requirement. These areas have brought to light a host of performance, availability and architectural issues that must be resolved effectively in order to prevent widespread customer dissatisfaction that could adversely affect the long-term growth of online services. In particular, the unresponsiveness and apparent failures of e-commerce servers, and the consequent loss in revenue for the e-commerce industry, are well noted. Similarly, the difficulties in providing adequate quality of service have stunted the proliferation of streaming media services on the Internet." 
"The problem of answering queries using views is to find efficient methods of answering a query using a set of previously materialized views over the database, rather than accessing the database relations. The problem has received significant attention because of its relevance to a wide variety of data management problems, such as data integration, query optimization, and the maintenance of physical data independence. To date, the performance of proposed algorithms has received very little attention, and in particular, their scale-up in the presence of a large number of views is unknown. We first analyze two previous algorithms, the bucket algorithm and the inverse-rules algorithm, and show their deficiencies. We then describe MiniCon, a novel algorithm for finding the maximally-contained rewriting of a conjunctive query using a set of conjunctive views. We present the first experimental study of algorithms for answering queries using views." "When Alex Labrinidis asked me to write this essay, I initially balked. I was loath to speak for academics worldwide, or even just those in SIGMOD. But I then realized that I could speak from personal experience. So these random musings will be of necessity entirely subjective, highly individualistic, and unrepresentative: attributes that a scholar normally attempts to vigorously avoid in his writing. I'm definitely not a ""typical"" academic (I don't know such an animal), but I can speak with some authority as to what motivates me. As another caveat, I make few comparisons with alternatives such as working in a research lab or as a developer. I won't even attempt to speak for them. The final caveat (distrust all commentaries that start with caveats, but perhaps more so those that don't!) is that my assumed audience comprises students who are considering such a profession. 
Current academics will find some of my observations trite or may disagree loudly, as academics are wont to do (see below)." "Set-valued attributes are a concise and natural way to model complex data sets. Modern object-relational systems support set-valued attributes and allow various query capabilities on them. In this paper we initiate a formal study of indexing techniques for set-valued attributes based on similarity, for suitably defined notions of similarity between sets. Such techniques are necessary in modern applications such as recommendations through collaborative filtering and automated advertising. Our techniques are probabilistic and approximate in nature. As a design principle we create structures that make use of well-known and widely used data structuring techniques, as a means to ease integration with existing infrastructure." "Support vector machines (SVMs) have shown superb performance for text classification tasks. They are accurate, robust, and quick to apply to test instances. Their only potential drawback is their training time and memory requirement. For n training instances held in memory, the best-known SVM implementations take time proportional to n^a, where a is typically between 1.8 and 2.1. SVMs have been trained on data sets with several thousand instances, but Web directories today contain millions of instances that are valuable for mapping billions of Web pages into Yahoo!-like directories. We present SIMPL, a nearly linear-time classification algorithm that mimics the strengths of SVMs while avoiding the training bottleneck." "Over the past decade, there has been a lot of work in developing middleware for integrating and automating enterprise business processes. Today, with the growth in e-commerce and the blurring of enterprise boundaries, there is renewed interest in business process coordination, especially for inter-organizational processes. 
This paper provides a historical perspective on technologies for intra- and inter-enterprise business processes, reviews the state of the art, and exposes some open research issues. We include a discussion of process-based coordination and event/rule-based coordination, and corresponding products and standards activities. We provide an overview of the rather extensive work that has been done on advanced transaction models for business processes, and of the fledgling area of business process intelligence." "In this paper, I will examine a field of science in which this is emphatically not the case: the field of biodiversity science. Crane's model works best in physics, where there is no assumption that information collected in the early nineteenth century will still be of interest to the current generation of field theorists. There is the assumption [6, for example] that new theories will reorder knowledge in the domain effectively and efficiently; and since Kuhn [5] most would accept that a major paradigm change in, say, the understanding of 'gravity' renders previous work on inclined planes literally incommensurable, not to mention technical improvements making the older work too imprecise. Astronomers trawl back further in time, seeking traces of supernovae in ancient manuscripts, but sporadically; they are just as likely to look at monastery records as at Tycho Brahe's original data. Biodiversity information is fundamentally historical in three different ways." "Many aspects of time-based media (complex data encoding, compression, ""quality factors,"" timing) appear problematic from a data modeling standpoint. This paper proposes timed streams as the basic abstraction for modeling time-based media. Several media-independent structuring mechanisms are introduced and a data model is presented which, rather than leaving the interpretation of multimedia data to applications, addresses the complex organization and relationships present in multimedia." 
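The timed-stream abstraction in the last abstract above can be made concrete: a stream is a sequence of timestamped elements whose payload may belong to any medium, and structuring operations work on the timing alone. The representation below (integer millisecond timestamps and a clip operation) is an illustrative assumption, not the paper's exact model:

```python
# Sketch of a timed stream: a sequence of timestamped elements whose
# payload may be any medium (video frames, audio blocks, captions).
from dataclasses import dataclass
from typing import Any

@dataclass
class TimedElement:
    start: int     # presentation start time, in milliseconds
    duration: int  # in milliseconds
    payload: Any   # medium-specific data, opaque to the model

def clip(stream, t0, t1):
    """Media-independent selection: elements active in [t0, t1)."""
    return [e for e in stream if e.start < t1 and e.start + e.duration > t0]

# A 3-second, 25 fps video stream: one frame every 40 ms.
video = [TimedElement(i * 40, 40, f"frame{i}") for i in range(75)]
second_two = clip(video, 1000, 2000)   # the 25 frames of the second second
```

Because `clip` only inspects timing, the same operation applies unchanged to an audio stream or a caption stream, which is the point of a media-independent structuring mechanism.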
"In the bottom-up evaluation of logic programs and recursively defined views on databases, all generated facts are usually assumed to be stored until the end of the evaluation. Discarding facts during the evaluation, however, can considerably improve the efficiency of the evaluation: the space needed to evaluate the program, the I/O costs, the costs of maintaining and accessing indices, and the cost of eliminating duplicates may all be reduced. Given an evaluation method that is sound, complete, and does not repeat derivation steps, we consider how facts can be discarded during the evaluation without compromising these properties. We show that every such space optimization method has three components: the first ensures soundness and completeness, the second avoids redundancy (i.e., repetition of derivations), and the third reduces ""fact lifetimes"" (i.e., the time period for which each fact must be retained during evaluation). For the first two components we present new techniques based on bounding the number of derivations and uses of facts and on using monotonicity constraints, and for the third component we provide novel synchronization techniques." "Many societal applications, for example, in domains such as health care, land use, disaster management, and environmental monitoring, increasingly rely on geographical information for their decision making. With the emergence of the World Wide Web this information is typically located in multiple, distributed, diverse, and autonomously maintained systems. Therefore, strategic decision making in these societal applications relies on the ability to enrich the semantics associated with geographical information in order to support a wide variety of tasks including data integration, interoperability, knowledge reuse, knowledge acquisition, knowledge management, spatial reasoning, and many others." 
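The bottom-up evaluation abstract above presupposes an evaluation method that is sound, complete, and does not repeat derivation steps. Semi-naive evaluation of transitive closure is the textbook example of such a method; this minimal sketch (not the paper's own algorithm) makes visible the set `total`, which holds every derived fact until the end and is exactly the space cost the paper's optimizations aim to reduce:

```python
# Semi-naive bottom-up evaluation of transitive closure: each iteration
# joins only the newest facts (delta) with the base relation, so no
# derivation step is repeated. `total` retains every fact for duplicate
# elimination; the abstract's techniques bound "fact lifetimes" so
# entries like these could be discarded before the evaluation ends.
def transitive_closure(edges):
    total = set(edges)   # all facts derived so far (kept for dedup)
    delta = set(edges)   # facts that are new in the last iteration
    while delta:
        derived = {(a, c) for (a, b) in delta for (b2, c) in edges if b == b2}
        delta = derived - total   # duplicate elimination
        total |= delta
    return total

paths = transitive_closure({(1, 2), (2, 3), (3, 4)})
# paths holds the six reachable pairs, including (1, 4)
```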
"The present article will highlight, through an analysis of the media's treatment of the legislative reform process in Hong Kong, the political issues at stake in this ban, and in particular the grey areas of the public debate. It tries to break with the dichotomy of ""for"" or ""against"" that is often typical of debates on the extinction of these emblematic mammals. In this press review I undertake a detailed analysis of local newspaper articles, essentially those of the English-language press. Of the 41 articles examined, I selected 21 on the basis of their relevance to legislative reform in Hong Kong and the diversity of their content. Two articles from the Chinese-language local press (selected from 28 articles), as well as six articles from the mainland's English-language press (selected from 47 articles), serve to underscore this analysis. These articles were published between 2015 and July 2018, that is, from the announcement of the reform until its initial implementation. This article will refer to the timeline of the reform with respect to several key moments and questions that require particular attention." "This paper proposes and evaluates Prefetching B+-Trees (pB+-Trees), which use prefetching to accelerate two important operations on B+-Tree indices: searches and range scans. To accelerate searches, pB+-Trees use prefetching to effectively create wider nodes than the natural data transfer size: e.g., eight cache lines or disk pages instead of one. These wider nodes reduce the height of the B+-Tree, thereby decreasing the number of expensive misses when going from parent to child without significantly increasing the cost of fetching a given node. Our results show that this technique speeds up search and update times by a factor of 1.21-1.5 for main-memory B+-Trees. In addition, it outperforms and is complementary to ""Cache-Sensitive B+-Trees."" To accelerate range scans, pB+-Trees provide arrays of pointers to their leaf nodes. 
These allow the pB+-Tree to prefetch arbitrarily far ahead, even for nonclustered indices, thereby hiding the normally expensive cache misses associated with traversing the leaves within the range. Our results show that this technique yields over a sixfold speedup on range scans of 1000+ keys." "We will share with readers some good news on the NSF and Defense budgets, and report on several interesting new programs at DARPA and NSF." "A major challenge still facing the designers and implementors of database programming languages (DBPLs) is that of query optimisation. We investigate algebraic query optimisation techniques for DBPLs in the context of a purely declarative functional language that supports sets as first-class objects. Since the language is computationally complete, issues such as non-termination of expressions and construction of infinite data structures can be investigated, whilst its declarative nature allows the issue of side effects to be avoided and a richer set of equivalences to be developed. The support of a set bulk data type enables much prior work on the optimisation of relational languages to be utilised. Finally, the language has a well-defined semantics which permits us to reason formally about the properties of expressions, such as their equivalence with other expressions and their termination." "We present an optimization method and algorithm designed for three objectives: physical data independence, semantic optimization, and generalized tableau minimization. The method relies on generalized forms of chase and ""backchase"" with constraints (dependencies). By using dictionaries (finite functions) in physical schemas we can capture with constraints useful access structures such as indexes, materialized views, source capabilities, access support relations, gmaps, etc. 
The search space for query plans is defined and enumerated in a novel manner: the chase phase rewrites the original query into a ""universal"" plan that integrates all the access structures and alternative pathways that are allowed by applicable constraints. Then, the backchase phase produces optimal plans by eliminating various combinations of redundancies, again according to constraints. This method is applicable (sound) to a large class of queries, physical access structures, and semantic constraints." "Query processing in data integration occurs over network-bound, autonomous data sources. This requires extensions to traditional optimization and execution techniques for three reasons: there is an absence of quality statistics about the data, data transfer rates are unpredictable and bursty, and slow or unavailable data sources can often be replaced by overlapping or mirrored sources. This paper presents the Tukwila data integration system, designed to support adaptivity at its core using a two-pronged approach. Interleaved planning and execution with partial optimization allows Tukwila to quickly recover from decisions based on inaccurate estimates. During execution, Tukwila uses adaptive query operators such as the double pipelined hash join, which produces answers quickly, and the dynamic collector, which robustly and efficiently computes unions across overlapping data sources." "The European Commission opens in its 7th IST call for proposals an action line for Semantic Web Technologies. It builds on ideas that have been looming for many years but have received their greatest push when the World Wide Web Consortium set up an interest group on that theme. The Semantic Web aims to make content machine-understandable in order to automate a wide range of new tasks within the context of heterogeneous and distributed systems. 
The action line centres on four aspects: formalisation of the semantics, derivation of attributes, intelligent filtering and information visualisation. The Web is currently a mighty collection of flashy data but difficult to exploit. Adding semantics to content and ensuring their interoperability will turn it into an efficient knowledge source." "Parallel implementations based on OpenMP or MapReduce also adopt the pruning policy and do not solve the problem thoroughly. In this context, taking into account features of document datasets, we propose 2Step-SSJ, which solves the document similarity self-join in a CUDA environment on GPUs. 2Step-SSJ performs the similarity self-join in two steps, i.e., similarity computing on the inverted list and similarity computing on the forward list, which strikes a compromise between memory access and dot-product computation. The experimental results show that 2Step-SSJ can solve the problem much faster than existing methods on three benchmark text corpora, achieving a speedup of 2x-23x against the state-of-the-art parallel algorithm in general, while keeping a relatively stable running time with different values of the threshold." "In this article, we first describe the XFilter and YFilter approaches and present results of a detailed performance comparison of structure matching for these algorithms as well as a hybrid approach. The results show that the path sharing employed by YFilter can provide order-of-magnitude performance benefits. We then propose two alternative techniques for extending YFilter's shared structure matching with support for value-based predicates, and compare the performance of these two techniques. The results of this latter study demonstrate some key differences between shared XML filtering and traditional database query processing. 
Finally, we describe how the YFilter approach is extended to handle more complicated queries containing nested path expressions." "In this paper, we propose the first space-efficient algorithmic solution for estimating the cardinality of full-fledged set expressions over general update streams. Our estimation algorithms are probabilistic in nature and rely on a novel, hash-based synopsis data structure, termed the ""2-level hash sketch"". We demonstrate how our 2-level hash sketch synopses can be used to provide low-error, high-confidence estimates for the cardinality of set expressions (including operators such as set union, intersection, and difference) over continuous update streams, using only small space and small processing time per update. Furthermore, our estimators never require rescanning or resampling of past stream items, regardless of the number of deletions in the stream. We also present lower bounds for the problem, demonstrating that the space usage of our estimation algorithms is within small factors of the optimal. Preliminary experimental results verify the effectiveness of our approach." "Many database applications make extensive use of bitmap indexing schemes. In this paper, we study how to improve the efficiency of these indexing schemes by proposing new compression schemes for the bitmaps. Most compression schemes are designed primarily to achieve good compression. During query processing they can be orders of magnitude slower than their uncompressed counterparts. The new schemes are designed to bridge this performance gap by trading some compression effectiveness for improved operation speed." "To reduce storage costs, the sparse prefix sums technique exploits sparsity in the data and avoids materializing prefix sums for empty rows and columns in the data grid; instead, look-up tables are used to preserve constant query time. 
Sparse prefix sums are the first approach to achieve O(1) query time with sub-linear storage costs for range-sum queries over sparse low-dimensional arrays. A thorough experimental evaluation shows that the approach works very well in practice. On the tested real-world data sets the storage costs are reduced by an order of magnitude with only a small overhead in query time, thus preserving microsecond-fast query answering." "The Flexible Authorization Framework (FAF) defined by Jajodia et al. [2001] provides a policy-neutral framework for specifying access control policies that is expressive enough to specify many known access control policies. Although the original formulation of FAF indicated how rules could be added to or deleted from a FAF specification, it did not address the removal of access permissions from users. We present two options for removing permissions in FAF and provide details on the option that is representation-independent." "The Web is based on a browsing paradigm that makes it difficult to retrieve and integrate data from multiple sites. Today, the only way to achieve this integration is by building specialized applications, which are time-consuming to develop and difficult to maintain. We are addressing this problem by creating the technology and tools for rapidly constructing information mediators that extract, query, and integrate data from web sources. The resulting system, called Ariadne, makes it feasible to rapidly build information mediators that access existing web sources." "In this article, we review pairwise spatial join algorithms and show how they can be combined for multiple inputs. In addition, we explore the application of synchronous traversal (ST), a methodology that processes synchronously all inputs without producing intermediate results. Then, we integrate the two approaches in an engine that includes ST and pairwise algorithms, using dynamic programming to determine the optimal execution plan. 
The results show that, in most cases, multiway spatial joins are best processed by combining ST with pairwise methods. Finally, we study the optimization of very large queries by employing randomized search algorithms." "Users typically view the data as multidimensional data cubes. Each cell of the data cube is a view consisting of an aggregation of interest, like total sales. The values of many of these cells are dependent on the values of other cells in the data cube. A common and powerful query optimization technique is to materialize some or all of these cells rather than compute them from raw data each time. Commercial systems differ mainly in their approach to materializing the data cube. In this paper, we investigate the issue of which cells (views) to materialize when it is too expensive to materialize all views. A lattice framework is used to express dependencies among views. We present greedy algorithms that work off this lattice and determine a good set of views to materialize. The greedy algorithm performs within a small constant factor of optimal under a variety of models. We then consider the most common case of the hypercube lattice and examine the choice of materialized views for hypercubes in detail, giving some good tradeoffs between the space used and the average time to answer a query." "SQL3, the name given to the new draft of the SQL standard that is likely to become an international standard replacing SQL92 in 1996 or 1997, contains several object-oriented extensions. When defining such extensions, X3H2 (the American Committee responsible for the specification of SQL3) and DBL (the International Committee for the same purpose) have had to make (and are still making) some of the same decisions made by the designers of other object-oriented languages. Among these decisions is the one described by Zdonik and Maier. 
Zdonik and Maier observed that it is not possible to combine more than three of the following four features in a single language." "In September 1999 SQLJ Part 1 was adopted as NCITS 331.1-1999 and it is now available for purchase from NCITS. It is worth mentioning that this specification is extremely approachable, with a lengthy tutorial section that introduces its more normative elements. Sybase brought SQLJ Part 1 to the SQLJ group in early 1997. Phil Shaw, of Sybase, has acted as editor of this document throughout its development. SQLJ Part 1 allows Java classes, contained in Jar files, to be brought into a DBMS. Methods in these classes may then be used as the implementation of SQL stored procedures and stored functions (together referred to as stored routines). Given how these methods are used, we'll provide a brief introduction to SQL routines before we discuss the features of SQLJ Part 1." "We study a set of linear transformations on the Fourier series representation of a sequence that can be used as the basis for similarity queries on time-series data. We show that our set of transformations is rich enough to formulate operations such as moving average and time warping. We present a query processing algorithm that uses the underlying R-tree index of a multidimensional data set to answer similarity queries efficiently. Our experiments show that the performance of this algorithm is competitive with that of processing ordinary (exact match) queries using the index, and much faster than sequential scanning. We relate our transformations to the general framework for similarity queries of Jagadish et al." "We have developed a web-based architecture and user interface for fast storage, searching and retrieval of large, distributed files resulting from scientific simulations. 
We demonstrate that the new DATALINK type defined in the draft SQL Management of External Data Standard can help to overcome problems associated with limited bandwidth when trying to archive large files using the web. We also show that separating the user interface specification from the user interface processing can provide a number of advantages. We provide a tool to generate automatically a default user interface specification, in the form of an XML document, for a given database. This facilitates deployment of our system by users with little web or database development experience. The XML document can be customised to change the appearance of the interface." "Web-based data sources, particularly in the Life Sciences, grow in diversity and volume. Most of the data collections are equipped with common document search, hyperlink and retrieval utilities. However, users' wishes often exceed simple document-oriented inquiries. With respect to complex scientific issues it becomes imperative to aid knowledge gain from huge, interdependent, and thus hard-to-comprehend data collections more efficiently. In particular, data categories that constitute relationships between two or more items require potent set-oriented content management, visualization and navigation utilities. Moreover, strategies are needed to discover correlations within and between data sets of independent origin." "The use of social media in advocacy, and particularly transnational advocacy, raises concerns of privacy and security for those conducting the advocacy and their contacts on social media. This chapter presents high-level summaries of cases of social media in advocacy and activism from the perspectives of information warfare and information security. From an analysis of these, the impact and relationships of social media in transnational advocacy and information security are discussed. 
Whilst online advocacy can be considered to be a form of information warfare aligned to a Cyber Macht theory, it can be argued that social media advocacy negatively impacts information security, as it encourages various actors to actively attempt to breach security." "In a temporal OODB, an OID index (OIDX) is needed to map from OID to the physical location of the object. In a transaction-time temporal OODB, the OIDX should also index the object versions. In this case, the index entries, which we call object descriptors (ODs), also include the commit timestamp of the transaction that created the object version. The OIDX in a non-temporal OODB only needs to be updated when an object is created, but in a temporal OODB, the OIDX has to be updated every time an object is updated. This has previously been shown to be a potential bottleneck, and in this paper, we present the Persistent Cache (PCache), a novel approach which reduces the index update and lookup costs in temporal OODBs." "We examine the estimation of selectivities for range and spatial join queries in real spatial databases. As we have shown earlier, real point sets: (a) violate consistently the ""uniformity"" and ""independence"" assumptions, (b) can often be described as ""fractals"", with non-integer (fractal) dimension. In this paper we show that, among the infinite family of fractal dimensions, the so-called ""Correlation Dimension"" D2 is the one that we need to predict the selectivity of spatial joins. The main contribution is that, for all the real and synthetic point sets we tried, the average number of neighbors for a given point of the point set follows a power law, with D2 as the exponent. This immediately solves the selectivity estimation for spatial joins, as well as for ""biased"" range queries (i.e., queries whose centers prefer areas of high point density)." 
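The spatial-selectivity abstract above states that the average number of neighbors within radius r follows a power law whose exponent is the correlation dimension D2. A small, illustrative experiment (synthetic uniform 2-d points and a brute-force pair count, not the paper's estimation machinery) recovers an exponent close to 2, as expected for a non-fractal uniform set:

```python
# Estimate the exponent of the neighbor-count power law C(r) ~ r^D2 by
# comparing the average neighbor count at two radii on a log-log scale.
# For uniformly scattered 2-d points, the exponent should be near 2.
import math
import random

def avg_neighbors(points, r):
    """Average number of other points within distance r of each point."""
    rsq, n = r * r, len(points)
    total = sum((px - qx) ** 2 + (py - qy) ** 2 <= rsq
                for i, (px, py) in enumerate(points)
                for j, (qx, qy) in enumerate(points) if i != j)
    return total / n

random.seed(42)
points = [(random.random(), random.random()) for _ in range(500)]
small, large = 0.05, 0.2
d2 = (math.log(avg_neighbors(points, large) / avg_neighbors(points, small))
      / math.log(large / small))
# d2 comes out slightly under 2 because of boundary effects
```

A point set concentrated on a curve would instead give an exponent near 1, which is how the fractal dimension distinguishes real, skewed point sets from the uniformity assumption the abstract criticizes.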
"A key aspect of interoperation among data-intensive systems involves the mediation of metadata and ontologies across database boundaries. One way to achieve such mediation between a local database and a remote database is to fold remote metadata into the local metadata, thereby creating a common platform through which information sharing and exchange becomes possible. Schema implantation and semantic evolution, our approach to the metadata folding problem, is a partial database integration scheme in which remote and local (meta)data are integrated in a stepwise manner over time." "In this paper, a new probe-based distributed deadlock detection algorithm is proposed. It is an enhanced version of the algorithm originally proposed by Chandy's et al.. The new algorithm has proven to be error free and suffers very little performance degradation from the additional deadlock detection overhead. The algorithm has been compared with the modified probe-based and timeout methods. It is found that under high data contention, it has the best performance. Results also indicate that the rate of probe initiation is significantly reduced in the new algorithm" "In late 2000, work was completed on yet another part of the SQL standard, to which we introduced our readers in an earlier edition of this column.Although SQL database systems manage an enormous amount of data, it certainly has no monopoly on that task. Tremendous amounts of data remain in ordinary operating system files, in network and hierarchical databases, and in other repositories. The need to query and manipulate that data alongside SQL data continues to grow. Database system vendors have developed many approaches to providing such integrated access.In this (partly guested) article, SQL's new part, Management of External Data (SQL/MED), is explored to give readers a better notion of just how applications can use standard SQL to concurrently access their SQL data and their non-SQL data." 
"An order-dependent query is one whose result (interpreted as a multiset) changes if the order of the input records is changed. In a stock-quotes database, for instance, retrieving all quotes concerning a given stock for a given day does not depend on order, because the collection of quotes does not depend on order. By contrast, finding a stock's five-price moving-average in a trades table gives a result that depends on the order of the table. Query languages based on the relational data model can handle order-dependent queries only through add-ons. SQL:1999, for instance, has a new ""window"" mechanism which can sort data in limited parts of a query. Add-ons make order-dependent queries di_cult to write and to optimize. In this paper we show that order can be a natural property of the underlying data model and algebra. We introduce a new query language and algebra, called AQuery, that supports order from-the-ground-up. New order-related query transformations arise in this setting. We show by experiment that this framework - language plus optimization techniques - brings orders-of-magnitude improvement over SQL:1999 systems on many natural order-dependent queries" "The enhanced pay-per-view (EPPV) model for providing continuous-media-on-demand (CMOD) services associates with each continuous media clip a display frequency that depends on the clip¡¯s popularity. The aim is to increase the number of clients that can be serviced concurrently beyond the capacity limitations of available resources, while guaranteeing a constraint on the response time. This is achieved by sharing periodic continuous media streams among multiple clients. In this paper, we provide a comprehensive study of the resource scheduling problems associated with supporting EPPV for continuous media clips with (possibly) different display rates, frequencies, and lengths. 
Our main objective is to maximize the amount of disk bandwidth that is effectively scheduled under the given data layout and storage constraints. This formulation gives rise to NP-hard combinatorial optimization problems that fall within the realm of hard real-time scheduling theory. Given the intractability of the problems, we propose novel heuristic solutions with polynomial-time complexity. Preliminary results from an experimental evaluation of the proposed schemes are also presented." "Data representing virtual environments (VEs) is getting increasingly large in order to better simulate real scenes. This poses interesting challenges to organize, store, and render the data for interactive navigation in VEs, or real-time virtual walkthrough. A large VE usually consists of thousands of 3D objects, each of which can be represented by hundreds of polygons, and may take thousands of megabytes of storage space. The amount of data is so large that it is impossible to store all of it in main memory. Even for memory-resident models, the graphics pipeline can become a bottleneck quickly with a large amount of data and slow down the rendering to an unacceptable frame rate for the walkthrough." "The random data perturbation (RDP) method of preserving the privacy of individual records in a statistical database is discussed. In particular, it is shown that if confidential attributes are allowed as query-defining variables, severe biases may result in responses to queries. It is also shown that even if query definition through confidential variables is not allowed, biases can still occur in responses to queries such as those involving proportions or counts. In either case, serious distortions may occur in user statistical analyses. A modified version of RDP is presented, in the form of a query adjustment procedure and specialized perturbation structure which will produce unbiased results."
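The five-price moving average used earlier to illustrate order dependence can be made concrete with a short sketch (the function name and window handling are illustrative, not AQuery syntax). Note that even the multiset of results changes when the input order changes, which is exactly the property that plain relational selection lacks:

```python
def moving_average(prices, window=5):
    """Order-dependent query: k-price moving average over a trades sequence.
    Early positions use however many prices are available so far."""
    out = []
    for i in range(len(prices)):
        lo = max(0, i - window + 1)
        out.append(sum(prices[lo:i + 1]) / (i - lo + 1))
    return out

avg = moving_average([10, 20, 30, 40], window=2)
# Reordering the input changes the result even as a multiset,
# so no order-insensitive operator tree can compute this query.
avg_reordered = moving_average([40, 30, 20, 10], window=2)
```

In SQL:1999 the same query needs a window clause (`AVG(price) OVER (ORDER BY trade_time ROWS 4 PRECEDING)`); AQuery's point is that order is instead a property of the algebra itself.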
"The Web today consists exclusively of HTML documents designed for the human eye. While many of them are generated automatically by applications, it is difficult for other applbcations to read and process them. This may soon change, due to a series of new standards frorn the World Wide Web Consortium centered around XML (Extensible Markup Language). XML is designed to express the document content, while HTML expresses its presentation. In short, XML is a data exchange format, easily understood by applications. It enables data exchange on the Web, both intra-enterprise, across platforms (intranet), and inter-enterprise (internet). The focus of the Web shifts from document management to data management, and topics like queries, views, data warehouses, mediators, which were the domain of databases, become of interest to the Web." "Loading data is one of the most critical operations in any data warehouse, yet it is also the most neglected by the database vendors. Data must be loaded into a warehouse in a fixed batch window, typically overnight. During this period, we need to take maximum advantage of the machine resources to load data as efficiently as possible. A data warehouse can be on line for up to 20 hours of a day, which can leave only a window of 4 hours to complete the load. The Red Brick loader can validate, load and index at up to 12GB of data per hour on an SMP system." "One of the main difficulties in supporting global applications over a number of localized databases and migrating legacy information systems to modern computing environment is to cope with the heterogeneities of these systems. In this paper, we present a novel flexible architecture (called HODFA) to dynamically connect such localized heterogeneous databases in forming a homogenized federated database system and to support the process of transforming a collection of heterogeneous information systems onto a homogeneous environment. 
We further develop an incremental methodology of homogenization in the context of our HODFA framework, which can facilitate different degrees of homogenization in a stepwise manner, so that existing applications will not be affected during the process of homogenization." "As our reliance on computers and computerized data has increased, we have come to expect more from our computers. We no longer expect our computers to act as large expensive calculators that merely spit out bills and paychecks. We now, additionally, expect our systems to rapidly access and interactively present us with large volumes of accurate data. In fact, our expectations have changed so much, in the past decade, that we no longer focus on what our systems are but rather on what they do. We no longer refer to our systems as computer systems but rather information systems. With these new expectations have come new responsibilities for the information systems professional. We can no longer concern ourselves merely with keeping our systems up and running. We now need to concern ourselves with subjective concepts such as response time and throughput. With current expectations being what they are, performance tuning has become vitally important." "Unfortunately, this will be my last influential papers column. I've been editor for about five years now (how time flies!) and have enjoyed it immensely. I've always found it rewarding to step back and look at why we do the research we do, and this column makes a big contribution to the process of self-examination. Further, I feel that there's a strong need for ways to publicly and explicitly highlight "quality" in papers. Criticism is easy, and is the more common experience given the amount of reviewing (and being reviewed) we typically engage in. I look forward to seeing this column in future issues." "An electronic dictionary system (EDS) is developed with object-oriented database techniques based on ObjectStore.
The EDS is composed of two parts: the Database Building Program (DBP) and the Database Querying Program (DQP). DBP reads in a dictionary encoded in SGML tags, and builds a database composed of a collection of trees that hold dictionary entries, and several lists which contain items of various lexical categories. With the text exchangeability introduced by SGML, DBP is able to accommodate dictionaries of different languages with different structures, after easy modification of a configuration file. The tree model, the Category Lists, and an optimization procedure enable DQP to quickly accomplish complicated queries, including context requirements, via simple SQL-like syntax and straightforward search methods. Results show that, compared with a relational database, DQP enjoys much higher speed and flexibility." "This tutorial presents the latest developments in the area of Java and Relational Databases. The material is based on the SQLJ consortium effort whose goal is to leverage Java technology for SQL processing. The SQLJ effort is driven by major industry vendors such as Oracle, Sybase, Tandem, JavaSoft, IBM, Informix and others. The SQLJ specifications describe Embedded SQL in Java, Java Stored Procedures, Java UDFs and Java Data Types." "In this paper, we ask if the traditional relational query acceleration techniques of summary tables and covering indexes have analogs for branching path expression queries over tree- or graph-structured XML data. Our answer is yes --- the forward-and-backward index already proposed in the literature can be viewed as a structure analogous to a summary table or covering index. We also show that it is the smallest such index that covers all branching path expression queries. While this index is very general, our experiments show that it can be so large in practice as to offer little performance improvement over evaluating queries directly on the data."
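The "covering" role that the XML-indexing abstract above assigns to structural summaries can be illustrated with a toy label-path index: map each root-to-node label path to the nodes it reaches, so simple (non-branching) path queries are answered from the summary alone, without touching the data. This is only the simplest member of the family; a true forward-and-backward index also refines nodes by their incoming and outgoing path structure (bisimulation), which is omitted here:

```python
from collections import defaultdict

def build_path_index(tree):
    """Map each root-to-node label path to the ids of nodes reached by it.
    `tree` is a (label, node_id, children) triple.  A query like
    /book/author is then a single dictionary lookup on the summary."""
    index = defaultdict(list)

    def walk(node, path):
        label, node_id, children = node
        path = path + (label,)
        index[path].append(node_id)
        for child in children:
            walk(child, path)

    walk(tree, ())
    return index

# A tiny document: <book><author/><author/><title/></book>
doc = ('book', 1, [('author', 2, []),
                   ('author', 3, []),
                   ('title', 4, [])])
idx = build_path_index(doc)
```

The abstract's negative finding is about exactly this trade-off: once branching predicates must also be covered, the refined summary can grow close to the size of the data itself.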
"One important step in integrating heterogeneous databases is matching equivalent attributes: Determining which fields in two databases refer to the same data. In semantic integration, attributes are compared in a pairwise fashion to determine their equivalence. Automation is critical to integration as the volume of data or the number of databases to be integrated increase. Semiut ¡°discovers¡± how to match equivalent attributes from information that can be automatically extracted from databases; as opposed to requiring human lmowledge to predefine what makes attributes equivalent." "This article serves three purposes. First of all, to introduce dbjobs, the database of database jobs, and also describe its functionality and architecture. Secondly, to present statistics for the dbgrads system, after 18 months of continuous operation. Finally, to describe exciting future projects for SIGMOD Online." "The experimental results show that distributed commit processing can have considerably more influence than distributed data processing on the throughput performance and that the choice of commit protocol clearly affects the magnitude of this influence. Among the protocols evaluated, the new optimistic commit protocol provides the best transaction throughput performance for a variety of workloads and system configurations. In fact, OPT's peak throughput is often close to the upper bound on achievable performance. Even more interestingly, a three-phase (i.e., non-blocking) version of OPT provides better peak throughput performance than all of the standard two-phase (i.e., blocking protocols evaluated in our study" Several negative results are proved about the ability to type-check queries in the only existing proposed standard for object-oriented databases. The first of these negative results is that it is not possible to type-check OQL queries in the type system underlying the ODMG object model and its definition language ODL. 
The second negative result is that OQL queries cannot be type-checked in the type system of the Java binding of the ODMG standard either. A solution proposed in this paper is to extend the ODMG object model with explicit support for parametric polymorphism (universal type quantification). These results show that Java cannot be a viable database programming language unless extended with parametric polymorphism. "In the mid-1980s, Chris Date's "12 rules" for distributed database systems included replication. Replication makes transparent the problems of remote access delays and the management of data redundancy. The commercial market for distributed database features has been slowly building over the years, beginning with simple remote access gateways. Today, replication appears to deliver on the 1980s ideal, with a robust asynchronous infrastructure. Current commercial technology, though, continues to fall short of that ideal." "He is no longer among us. The Italian Surgical Community has lost one of its best sons, because of this tragedy that is hitting our civilized world. A world-renowned surgeon, Prof. Valerio Di Carlo started his career in the Emergency Surgery Department of the Policlinico in Milan under the mentorship of Prof. Vittorio Staudacher. In 1978, he became Professor of Surgery at the Vita-Salute University of San Raffaele and was responsible for the General Surgery Department from 1980 to 2010." "Query optimization is done by making a graph of the query and moving predicates around in the graph so that they will be applied early in the optimized query generated from the graph. Predicates are first propagated up from child nodes of the graph to parent nodes and then down into different child nodes. After the predicates have been moved, redundant predicates are detected and removed. Predicates are moved through aggregation operations and new predicates are deduced from aggregation operations and from functional dependencies.
The optimization is not dependent on join order and works where nodes of the graph cannot be merged." "The Strudel system applies concepts from database management systems to the process of building Web sites. Strudel's key idea is separating the management of the site's data, the creation and management of the site's structure, and the visual presentation of the site's pages. First, the site builder creates a uniform model of all data available at the site. Second, the builder uses this model to declaratively define the Web site's structure by applying a "site-definition query" to the underlying data. The result of evaluating this query is a "site graph", which represents both the site's content and structure. Third, the builder specifies the visual presentation of pages in Strudel's HTML-template language. The data model underlying Strudel is a semi-structured model of labeled directed graphs. We describe Strudel's key characteristics, report on our experiences using Strudel, and present the technical problems that arose from our experience." "Time-referenced data are pervasive in most real-world databases. Recent advances in temporal query languages show that such database applications may benefit substantially from built-in temporal support in the DBMS. To achieve this, temporal query optimization and evaluation mechanisms must be provided, either within the DBMS proper or as a source-level translation from temporal queries to conventional SQL. This paper proposes a new approach: using a middleware component on top of a conventional DBMS. This component accepts temporal SQL statements and produces a corresponding query plan consisting of algebraic as well as regular SQL parts. The algebraic parts are processed by the middleware, while the SQL parts are processed by the DBMS. The middleware uses performance feedback from the DBMS to adapt its partitioning of subsequent queries into middleware and DBMS parts.
The paper describes the architecture and implementation of the temporal middleware component, termed TANGO, which is based on the Volcano extensible query optimizer and the XXL query processing library." "BIRCH incrementally and dynamically clusters incoming multi-dimensional metric data points to try to produce the best quality clustering with the available resources (i.e., available memory and time constraints). BIRCH can typically find a good clustering with a single scan of the data, and improve the quality further with a few additional scans. BIRCH is also the first clustering algorithm proposed in the database area to handle "noise" (data points that are not part of the underlying pattern) effectively." "Time-parameterized queries (TP queries for short) retrieve (i) the actual result at the time that the query is issued, (ii) the validity period of the result given the current motion of the query and the database objects, and (iii) the change that causes the expiration of the result. Due to the highly dynamic nature of several spatio-temporal applications, TP queries are important both as standalone methods, as well as building blocks of more complex operations. However, little work has been done towards their efficient processing. In this paper, we propose a general framework that covers time-parameterized variations of the most common spatial queries, namely window queries, k-nearest neighbors and spatial joins. In particular, each of these TP queries is reduced to nearest neighbor search where the distance functions are defined according to the query type. This reduction allows the application and extension of well-known branch and bound techniques to the current problem." "One of the important problems in data mining is discovering association rules from databases of transactions where each transaction consists of a set of items.
The most time consuming operation in this discovery process is the computation of the frequency of the occurrences of interesting subsets of items (called candidates) in the database of transactions. To prune the exponentially large space of candidates, most existing algorithms consider only those candidates that have a user-defined minimum support. Even with the pruning, the task of finding all association rules requires a lot of computation power and time. Parallel computers offer a potential solution to the computation requirement of this task, provided efficient and scalable parallel algorithms can be designed. In this paper, we present two new parallel algorithms for mining association rules." Rules in active database systems can be very difficult to program due to the unstructured and unpredictable nature of rule processing. We provide static analysis techniques for predicting whether a given rule set is guaranteed to terminate and whether rule execution is confluent (guaranteed to have a unique final state). Our methods are based on previous techniques for analyzing rules in active database systems. We improve considerably on the previous techniques by providing analysis criteria that are much less conservative: our methods often determine that a rule set will terminate or is confluent when previous methods could not make this determination. "This paper presents the parallel enhancements which allowed the port of the Teradata Database from TOS, a proprietary operating system, to an SVR4 Unix system. It gives an architectural overview of how the Teradata Database solves the main VLDB problems: performance and reliability. We will present the transition from the Database Computer DBC/1012 nodes (Interface Processors, IFPs, and Access Module Processors, AMPs) to the virtual processors (vprocs), which run concurrently in a collection of SMP nodes. We also present the Parallel Database Environment (PDE) add-on package to Unix that makes this possible.
We will discuss the results of our performance enhancement work and the directions for the future." "Dwarf is a highly compressed structure for computing, storing, and querying data cubes. Dwarf identifies prefix and suffix structural redundancies and factors them out by coalescing their store. Prefix redundancy is high on dense areas of cubes but suffix redundancy is significantly higher for sparse areas. Putting the two together fuses the exponential sizes of high-dimensional full cubes into a dramatically condensed data structure. The elimination of suffix redundancy brings an equally dramatic reduction in the computation of the cube, because recomputation of the redundant suffixes is avoided." "Imagine that you are a "knowledge worker" in the coming millennium. That means you must synthesize information and make decisions such as "Which benefits plan to use?" "What do the regulations say about this course of action?" "How does my job fit into the corporate business plan?" "What should I be careful about when I approach this client?" or even "How does this program work?" If the dream of digital libraries is to bring you all material relevant to your task, you may find yourself drowning before long. Reading is harder than talking to people who know the relevant documents and can tell you what you're interested in. That is what many current knowledge workers do, giving rise to professions such as insurance consultant, lawyer, benefits specialist, and so on. Imagine by contrast that the documents you retrieve could be tailored precisely to your needs. That is, imagine that the document might ask you questions and produce a document filtered and organized according to those you have answered." "In this paper, we present an efficient method to do online reorganization of sparsely-populated B+-trees. It reorganizes the leaves first, compacting groups of leaves with the same parent in short operations.
After compacting, optionally, the new leaves may swap locations or be moved into empty pages so that they are in key order on the disk. After the leaves are reorganized, the method shrinks the tree by making a copy of the upper part of the tree while leaving the leaves in place. A new concurrency method is introduced so that only a minimum number of pages are locked during reorganization. During leaf reorganization, forward recovery is used to save all work already done while maintaining consistency after system crashes. A heuristic algorithm is developed to reduce the number of swaps needed during leaf reorganization, so that better concurrency and easier recovery can be achieved. The switch-over from the old B+-tree to the new B+-tree is described in detail for the first time." "In this paper, we investigate which problems exist in very large real databases and describe which mechanisms are provided by Informix Extended Parallel Server (XPS) for dealing with these problems. Currently the largest customer XPS database contains 27 TB of data. A database server that has to handle such an amount of data has to provide mechanisms that allow adequate performance to be achieved and ease usability. We will present mechanisms which address both of these issues and illustrate them with examples from real customer systems." "In existing relational database systems, processing of group-by and computation of aggregate functions are always postponed until all joins are performed. In this paper, we present transformations that make it possible to push the group-by operation past one or more joins and can potentially reduce the cost of processing a query significantly. Therefore, the placement of group-by should be decided based on cost estimation. We explain how traditional System-R style optimizers can be modified by incorporating the greedy conservative heuristic that we developed.
We prove that applications of the greedy conservative heuristic produce plans that are better (or no worse) than the plans generated by a traditional optimizer. Our experimental study shows that the extent of improvement in the quality of plans is significant with only a modest increase in optimization cost. Our technique also applies to optimization of Select Distinct queries by pushing down duplicate elimination in a cost-based fashion." "We describe SCC-kS, a Speculative Concurrency Control (SCC) algorithm that allows a DBMS to efficiently use the extra computing resources available in the system to increase the likelihood of timely commitment of transactions. Using SCC-kS, up to k shadow transactions execute speculatively on behalf of a given uncommitted transaction so as to protect against the hazards of blockages and restarts. SCC-kS allows the system to scale the level of speculation that each transaction is allowed to perform, thus providing a straightforward mechanism for trading resources for timeliness. Also, we describe SCC-DC, a value-cognizant SCC protocol that utilizes deadline and criticalness information to improve timeliness through the controlled deferment of transaction commitments. We present simulation results that quantify the performance gains of our protocols compared to other widely used concurrency control protocols for real-time databases." "Data Mining: Practical Machine Learning Tools and Techniques offers a thorough grounding in machine learning concepts as well as practical advice on applying machine learning tools and techniques in real-world data mining situations. This highly anticipated third edition of the most acclaimed work on data mining and machine learning will teach you everything you need to know about preparing inputs, interpreting outputs, evaluating results, and the algorithmic methods at the heart of successful data mining.
Thorough updates reflect the technical changes and modernizations that have taken place in the field since the last edition, including new material on Data Transformations, Ensemble Learning, Massive Data Sets, Multi-instance Learning, plus a new version of the popular Weka machine learning software developed by the authors." "The Eighth International Workshop on Knowledge Representation Meets Databases (KRDB) was held at the Pontificia Università Urbaniana, in Rome, right after VLDB 2001. KRDB was initiated in 1994 to provide an opportunity for researchers and practitioners from the two areas to exchange ideas and results. This year's focus was on Modeling, Querying and Managing Semistructured Data. The one-day program included ten research papers, one invited talk, and a panel. Eight of the accepted papers addressed various topics related to representation of information and reasoning in XML, one was on data integration and one on transaction processing." "Document sources are available everywhere, both within the internal networks of organizations and on the Internet. Even individual organizations use search engines from different vendors to index their internal document collections. These search engines are typically incompatible in that they support different query models and interfaces, they do not return enough information with the query results for adequate merging of the results, and finally, in that they do not export metadata about the collections that they index (e.g., to assist in resource discovery). This paper describes STARTS, an emerging protocol for Internet retrieval and search that facilitates the task of querying multiple document sources. STARTS has been developed in a unique way. It is not a standard, but a group effort coordinated by Stanford's Digital Library project, and involving over 11 companies and organizations.
The objective of this paper is not only to give an overview of the STARTS protocol proposal, but also to discuss the process that led to its definition." "To avoid this problem, we introduce a new organization of the directory which uses a split algorithm minimizing overlap and additionally utilizes the concept of supernodes. The basic idea of overlap-minimizing splits and supernodes is to keep the directory as hierarchical as possible, and at the same time to avoid splits in the directory that would result in high overlap." "The duplicate elimination problem of detecting multiple tuples, which describe the same real world entity, is an important data cleaning problem. Previous domain-independent solutions to this problem relied on standard textual similarity functions (e.g., edit distance, cosine metric) between multi-attribute tuples. However, such approaches result in large numbers of false positives if we want to identify domain-specific abbreviations and conventions. In this paper, we develop an algorithm for eliminating duplicates in dimensional tables in a data warehouse, which are usually associated with hierarchies. We exploit hierarchies to develop a high quality, scalable duplicate elimination algorithm, and evaluate it on real datasets from an operational data warehouse." "This chapter discusses how to manage one's personal databases, or database striptease. The database management problem has been entering everyone's life. To realize its presence it suffices to sit down and tally the electronic data sources crucial for survival in modern society. As long as data sources are independent, devices are never replaced, nor do new devices enter the realm of existence, one will survive easily in the digital jungle. However, life runs a different course. Each time one meets a new person, one may have to synchronize several databases with that person's address information.
The limitations of the human brain in coping with information overload call for better support to "remember" where, what, and when information has been accumulated in the fabric of data sources making up the environment. Buying a new PDA surely means a re-organization and possibly retyping the content of the database." "Today's Internet-based businesses need a level of interoperability which will allow trading partners to seamlessly and dynamically come together and do business without ad hoc and proprietary integrations. Such a level of interoperability involves being able to find potential business partners, discovering their services and business processes, and conducting business "on the fly". This process of dynamic interoperation is only possible through standard B2B frameworks. Indeed a number of B2B electronic commerce standard frameworks have emerged recently. Although most of these standards are overlapping and competing, each with its own strengths and weaknesses, a closer investigation reveals that they can be used in a manner to complement one another. In this paper we describe such an implementation where an ebXML infrastructure is developed by exploiting the Universal Description, Discovery and Integration (UDDI) registries and RosettaNet Partner Interface Processes (PIPs)." "Although research on temporal database systems has been active for about 20 years, implementations have not appeared until recently. This is one reason why current commercial database systems provide only limited temporal functionality. This paper summarizes the current state of the art of temporal database implementations. Rather than being very specific about each system, we have attempted to provide an indication of the functionality together with pointers to additional information. It is hoped that this leads to more efforts pushing the implementation of temporal database systems in the near future."
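The textual-similarity baseline named in the duplicate-elimination abstract above (edit distance between tuple fields) can be sketched briefly. This is the pairwise approach that the chapter improves on by exploiting dimension hierarchies; the hierarchy-aware algorithm itself is not reproduced here, and the threshold value is illustrative:

```python
def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def find_duplicates(names, max_dist=2):
    """Pairwise comparison with a distance threshold: flags string pairs
    that likely describe the same real-world entity."""
    return [(a, b) for i, a in enumerate(names)
                   for b in names[i + 1:]
                   if edit_distance(a.lower(), b.lower()) <= max_dist]

dupes = find_duplicates(["IBM Corp", "IBM Corp.", "Oracle"])
```

The abstract's criticism applies directly: a fixed threshold like this cannot recognize domain-specific abbreviations ("Intl." vs "International"), which is where the false positives and negatives come from.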
Query answers are ranked using extended information-retrieval techniques and are generated in an order similar to the ranking. Advanced indexing techniques were developed to facilitate efficient implementation of XSEarch. The performance of the different techniques as well as the recall and the precision were measured experimentally. "In this paper, we present OPOSSUM, a flexible, customizable, and extensible schema management system. Working within the established paradigm of schema editing through direct manipulation, OPOSSUM employs several novel techniques to offer the following capabilities: enhancement of schema visualizations with user-specific information; exploration of schemas through choice of visual representations; and creation of new visual representation styles when existing ones prove unsatisfactory. We discuss the architecture of the system and the methodology that guided its development, and illustrate its most important features through examples of how it has been used. OPOSSUM is operational and is in use by three groups of experimental scientists on the University of Wisconsin campus as a tool for experiment and database design." "Processes are increasingly being used to make complex application logic explicit. Programming using processes has significant advantages but it poses a difficult problem from the system point of view in that the interactions between processes cannot be controlled using conventional techniques. In terms of recovery, the steps of a process are different from operations within a transaction. Each one has its own termination semantics and there are dependencies among the different steps. Regarding concurrency control, the flow of control of a process is more complex than in a flat transaction. A process may, for example, partially roll back its execution or may follow one of several alternatives. In this article, we deal with the problem of atomicity and isolation in the context of processes. 
We propose a unified model for concurrency control and recovery for processes and show how this model can be implemented in practice, thereby providing a complete framework for developing middleware applications using processes." "Recovery can be extended to new domains at reduced logging cost by exploiting "logical" log operations. During recovery, a logical log operation may read data values from any recoverable object, not solely from values on the log or from the updated object. Hence, we needn't log these values, a substantial saving. In [8], we developed a redo recovery theory that deals with general log operations and proved that the stable database remains recoverable when it is explained in terms of an installation graph. This graph was used to derive a write graph that determines a flush order for cached objects that ensures that the database remains recoverable. In this paper, we introduce a refined write graph that permits more flexible cache management that flushes smaller sets of objects." "The project focuses on the field of Technical Information Systems, where there is a need for tools supporting modeling of complex objects. Designers in this field usually use incremental design or step-by-step prototyping, because this seems to be best suited for users coping with complexity and uncertainty about their own needs or requirements. The IMPRESS DDT aims at supporting the database design part of this process." "Recent demands for querying big data have revealed various shortcomings of traditional database systems. This, in turn, has led to the emergence of a new kind of query mode: the approximate query. Online aggregation is a sample-based technology for approximate querying. It has become indispensable in today's era of information explosion. Online aggregation continuously gives an approximate result with some error estimation (usually a confidence interval) until all data are processed."
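The online-aggregation loop described in the last abstract above can be sketched as a running mean with a normal-approximation confidence interval. This is a minimal illustration only: the function name, the 95% z-value, and the use of Welford's single-pass variance update are assumptions standing in for the sampling machinery of a real engine.

```python
import math
import random

def online_aggregate(stream, z=1.96):
    """Yield (n, running mean, CI half-width) after each sampled record,
    using Welford's single-pass mean/variance update."""
    n, mean, m2 = 0, 0.0, 0.0
    for x in stream:
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)
        if n >= 2:
            # Normal-approximation confidence interval: z * s / sqrt(n)
            half_width = z * math.sqrt(m2 / (n - 1) / n)
            yield n, mean, half_width

random.seed(0)
data = [random.gauss(100.0, 15.0) for _ in range(10_000)]
estimates = list(online_aggregate(iter(data)))
n, mean, hw = estimates[-1]
```

The interval shrinks as more records are processed, which is exactly the behavior the abstract describes: the user sees a continuously refined estimate rather than waiting for the full scan.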
"Our results show that the policies are effective at achieving user-specified levels of I/O operations and database garbage percentage. We also investigate the sensitivity of the policies over a range of object connectivities. The evaluation demonstrates that semi-automatic, self-adaptive policies are a practical means for flexibly controlling the garbage collection rate." A single-pass computing and visualization engine will be demonstrated that allows one to interactively analyze VLDBs that contain tens of millions of multivariate records. The engine allows one to compute millions of different quantities from a single pass over the records. Each computation can be performed for the entire multivariate domain and for all subdomains that can be obtained by constraining one or more discrete variables to span any subset of their values and one or more continuous variables to span any subset of their predefined bins. Each new computation takes less than one second irrespective of the number of records. "Specifically, we use state-of-the-art concepts from morphology, namely the 'pattern spectrum' of a shape, to map each shape to a point in n-dimensional space. Following this, we organize the n-d points in an R-tree.
We show that the L∞ (= max) norm in the n-d space lower-bounds the actual distance. This guarantees no false dismissals for range queries. In addition, we present a nearest neighbor algorithm that also guarantees no false dismissals." "This text is a guide to the foundations of method engineering, a developing field concerned with the definition of techniques for designing software systems. The approach is based on metamodeling, the construction of a model about a collection of other models. The book applies the metamodeling approach in five case studies, each describing a solution to a problem in a specific domain. Suitable for classroom use, the book is also useful as a reference for practitioners. The book first presents the theoretical basis of metamodeling for method engineering, discussing information modeling, the potential of metamodeling for software systems development, and the introduction of the metamodeling tool ConceptBase." "Camps rightly focuses on certain important facets of the evolution of relational theory since 1969, in particular our ideas about what our revered originator, E.F. Codd, chose to call domains. Camps's tone at times suggests that we (in the relational camp) have been guilty of waging war over issues on which we have subsequently recanted, too late. I think this is an exaggeration, and that we could make a reasonable counter-claim to the effect that all of the clarifications we have been able to suggest, after very careful study, over those many years, are compatible with what we said before. I would not strongly object if Camps retorted that that, too, is something of an exaggeration, but obviously I think my way of expressing it is closer to the truth" "This paper describes the external forces that motivate financial institutions to collect, aggregate, analyze, and mine data so that it can be transformed into information, one of a financial institution's most valuable assets.
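The no-false-dismissal guarantee in the shape-indexing abstract above rests on a lower-bounding distance: filtering with a cheap bound can never discard a true match. A minimal sketch of that filter-and-refine pattern, under stated assumptions: a linear scan stands in for the R-tree, and Euclidean distance stands in for the actual shape distance (the L∞ norm provably lower-bounds L2, so the same guarantee holds).

```python
import math
import random

def l_inf(p, q):
    """Cheap lower-bounding distance (max per-coordinate difference)."""
    return max(abs(a - b) for a, b in zip(p, q))

def l2(p, q):
    """The 'actual' distance being lower-bounded in this sketch."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def range_query(points, q, eps):
    # Filter: since l_inf(p, q) <= l2(p, q), discarding points with
    # l_inf > eps cannot dismiss any true match (no false dismissals).
    candidates = [p for p in points if l_inf(p, q) <= eps]
    # Refine: check the actual distance on the surviving candidates only.
    return [p for p in candidates if l2(p, q) <= eps]

random.seed(1)
pts = [tuple(random.uniform(0, 1) for _ in range(4)) for _ in range(500)]
query = (0.5, 0.5, 0.5, 0.5)
exact = [p for p in pts if l2(p, query) <= 0.3]
filtered = range_query(pts, query, 0.3)
```

The filter step may admit false positives (removed in the refine step) but, by the lower-bounding property, never a false dismissal, which is the guarantee the abstract states.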
In this paper we refer to this strategic information asset as "information currency." In general, we describe the state of banking and the rapid global changes that affect financial institutions. We analyze how Bank of America (BofA) created and employed its information currency using the Teradata Relational Database Management System (Teradata RDBMS). The Teradata RDBMS manages a very large data warehouse (NCR Scalable Data Warehouse) for BofA using an NCR WorldMark 5100M MPP (Massively Parallel Processing) platform." "Tree patterns form a natural basis to query tree-structured data such as XML and LDAP. To improve the efficiency of tree pattern matching, it is essential to quickly identify and eliminate redundant nodes in the pattern. In this paper, we study tree pattern minimization both in the absence and in the presence of integrity constraints (ICs) on the underlying tree-structured database. In the absence of ICs, we develop a polynomial-time query minimization algorithm called CIM, whose efficiency stems from two key properties: (i) a node cannot be redundant unless its children are; and (ii) the order of elimination of redundant nodes is immaterial. When ICs are considered for minimization, we develop a technique for query minimization based on three fundamental operations: augmentation (an adaptation of the well-known chase procedure), minimization (based on homomorphism techniques), and reduction. We show the surprising result that the algorithm, referred to as ACIM, obtained by first augmenting the tree pattern using ICs, and then applying CIM, always finds the unique minimal equivalent query. While ACIM is polynomial time, it can be expensive in practice because of its inherent non-locality. We then present a fast algorithm, CDM, that identifies and eliminates local redundancies due to ICs, based on propagating "information labels" up the tree pattern. CDM can be applied prior to ACIM for improving the minimization efficiency.
We complement our analytical results with an experimental study that shows the effectiveness of our tree pattern minimization techniques." "For reasons of simplicity and communication efficiency, a number of existing object-oriented database management systems are based on page server architectures; data pages are their minimum unit of transfer and client caching. Despite their efficiency, page servers are often criticized as being too restrictive when it comes to concurrency, as existing systems use pages as the minimum locking unit as well. In this paper we show how to support object-level locking in a page server context. Several approaches are described, including an adaptive granularity approach that uses page-level locking for most pages but switches to object-level locking when finer-grained sharing is demanded. We study the performance of these approaches, comparing them to both a pure page server and a pure object server. For the range of workloads that we have examined, our results indicate that a page server is clearly preferable to an object server. Moreover, the adaptive page server is shown to provide very good performance, generally outperforming the pure page server, the pure object server, and the other alternatives as well." "Successful companies organise and run their business activities in an efficient manner. Core activities are completed on time and within specified resource constraints. However, to stay competitive in today's markets, companies need to continually improve their efficiency: business activities need to be completed more quickly, at higher quality, and at lower cost. To this end, there is an increasing awareness of the benefits and potential competitive advantage that well-designed business process management systems can provide.
In this paper we argue the case for an agent-based approach: showing how agent technology can improve efficiency by ensuring that business activities are better scheduled, executed, monitored, and coordinated." "In this continuation of Esther Duflo's in-depth research from 1998-2008 on the impact of female leaders in India, the goal was to measure whether regions with a female Chief Minister (head of state) have seen an increase in education investment compared to regions where men have remained dominant in leadership roles. To do this, six regions in India with female Chief Ministers and six with male Chief Ministers are analyzed for comparison." "There is an increasing demand for systems that can automatically analyze images and extract semantically meaningful information. IRIS, an Integrated Retinal Information System, has been developed to provide medical professionals easy and unified access to the screening, trend and progression of diabetic-related eye diseases in a diabetic patient database. This paper shows how mining techniques can be used to accurately extract features in the retinal images. In particular, we apply a classification approach to determine the conditions for tortuosity in retinal blood vessels." "Substantive changes in the business environment, and aggressive initiatives in business process reengineering, are driving corresponding changes in the information technology architectures of large enterprises. Those changes are enabled by the convergence of a long list of maturing new technologies. As one of its many implications, the new IT architecture demands revised assumptions about the design and deployment of databases. This paper reviews the components of the architectural shift now in process, and offers strategic planning assumptions for database professionals." "Information integration provides a competitive advantage to businesses and is fundamental to on demand computing.
It is a strategic area of investment for software companies today. The goal is to provide a unified view of data regardless of differences in data format, data location, and access interfaces; to dynamically manage data placement to match availability, currency, and performance requirements; and to provide autonomic features that reduce the burden on IT staffs of managing complex data architectures. This paper describes the motivation for integrating information for on demand computing, explains its requirements, and illustrates its value through usage scenarios. As shown in the paper, there is still a tremendous amount of research, engineering, and development work needed to make the full information integration vision a reality, and it is expected that software companies will continue to invest heavily in aggressively pursuing the information integration vision." "This article presents a database programming language, Thémis, which supports subtyping and class hierarchies, and allows for the definition of integrity constraints in a global and declarative way. We first describe the salient features of the language: types, names, classes, integrity constraints (including methods), and transactions. The inclusion of methods into integrity constraints increases the declarative power of these constraints. Indeed, the information needed to define a constraint is not always stored in the database through attributes, but is sometimes computed or derived data. Then, we address the problem of efficiently checking constraints." "Together with techniques developed for relational databases, this basis in logic means that deductive databases are capable of handling large amounts of information as well as performing reasoning based on that information. There are many application areas for deductive database technology. One area is that of decision support systems.
In particular, the exploitation of an organization's resources requires not only sufficient information about the current and future status of the resources themselves, but also a way of reasoning effectively about plans for the future. The present generation of decision support systems is severely deficient when it comes to reasoning about future plans. Deductive database technology is an appropriate solution to this problem. Another fruitful application area is that of expert systems. There are many computing applications in which there are large amounts of information, from which the important facts may be distilled by a simple yet tedious analysis. For example, medical analysis and monitoring can generate a large amount of data, and an error can have disastrous consequences." "We demonstrate the COUGAR System, a new distributed data management infrastructure that scales with the growth of sensor interconnectivity and computational power on the sensors over the next decades. Our system resides directly on the sensor nodes and creates the abstraction of a single processing node without centralizing data or computation." The aerospace industry poses significant challenges to information management unlike any other industry. Data management challenges arising from different segments of the aerospace business are identified through illustrative scenarios. These examples and challenges could provide focus and stimulus to further research in information management. "In the last few years, workflow management has become a hot topic in the research community and, especially, in the commercial arena. Workflow management is multidisciplinary in nature, encompassing many aspects of computing: database management, distributed client-server systems, transaction management, mobile computing, business process reengineering, integration of legacy and new applications, and heterogeneity of hardware and software. Many academic and industrial research projects are underway.
Numerous successful products have been released. Standardization efforts are in progress under the auspices of the Workflow Management Coalition." "Visual information, especially videos, plays an increasing role in our society for both work and entertainment as more sources become available to the user. Set-top boxes are poised to give home users access to videos that come not only from TV channels and personal recordings, but also from the Internet in the form of downloaded and streaming videos of various types. Current approaches such as Electronic Program Guides and video search engines search for video assets of one type or from one source. The capability to conveniently search through many types of video assets from a large number of video sources with easy-to-use user profiles cannot be found anywhere yet. VideoAnywhere has developed such a capability in the form of an extensible architecture as well as a specific implementation using the latest in Internet programming (Java, agents, XML, etc.) and applicable standards." "After a system crash, databases recover to the last committed transaction, but applications usually either crash or cannot continue. The purpose of Phoenix is to enable application state to persist across system crashes, transparent to the application program. This simplifies application programming, reduces operational costs, masks failures from users, and increases application availability, which is critical in many scenarios, e.g., e-commerce. Within the Phoenix project, we have explored how to provide application recovery efficiently and transparently via redo logging. This paper describes the conceptual framework for the Phoenix project, and the software infrastructure that we are building." "Clustering is an unsupervised process since there are no predefined classes and no examples that would indicate grouping properties in the data set.
The majority of the clustering algorithms behave differently depending on the features of the data set and the initial assumptions for defining groups. Therefore, in most applications the resulting clustering scheme requires some sort of evaluation as regards its validity. Evaluating and assessing the results of a clustering algorithm is the main subject of cluster validity. In this paper we present a review of clustering validity approaches and methods. More specifically, Part I of the paper discusses the cluster validity approaches based on external and internal criteria." "The end of the Cold War has brought significant changes for GM Hughes Electronics, one of the world's leading satellite and defense electronics companies. Their response to the loss of defense revenue was to marry their satellite communications expertise with the rapidly expanding entertainment industry to produce DIRECTV, the first all-digital direct broadcast satellite (DBS) service in the United States. For years, customers in rural areas have used large, unsightly satellite dishes to receive television programming. The cost, size, and complexity of these systems have limited their appeal. Hughes has now launched two geosynchronous satellites that use higher-powered transmitters to send streams of compressed digital data to 18-inch antennas that can be mounted inconspicuously. A specialized video processor decompresses the signal and displays it on the consumer's television with better-than-broadcast quality audio and video. The programming offered includes a number of cable-like television broadcast services, music, and scores of offerings of pay-per-view movies, sports, and special events." "This paper presents an approach that preserves the semi-atomicity (a weaker form of atomicity) of flexible transactions, allowing local sites to autonomously maintain serializability and recoverability.
We offer a fundamental characterization of the flexible transaction model and precisely define semi-atomicity. We investigate the commit dependencies among the subtransactions of a flexible transaction. These dependencies are used to control the commitment order of the subtransactions. We next identify those restrictions that must be placed upon a flexible transaction to ensure the maintenance of its semi-atomicity. As atomicity is a restrictive criterion, semi-atomicity enlarges the class of executable global transactions." "The distinctions among the protocols in terms of performance are significant. For example, at an offered load where 70%-80% of transactions were aborted under the global locking protocol, only 10% of transactions were aborted under the protocols based on the replication graph. The results of the study suggest that protocols based on a replication graph offer practical techniques for replica management. However, the study also shows that performance deteriorates rapidly and dramatically when transaction throughput reaches a saturation point." "The publish/subscribe paradigm is a simple-to-use interaction model that consists of information providers, who publish events to the system, and of information consumers, who subscribe to events of interest within the system. The publish/subscribe system ensures the timely notification of subscribers upon event occurrence. Events can be seen as data items (e.g., tuples, columns, or tables) in the relational database model, and subscriptions closely resemble database queries. From this point of view, publish/subscribe systems solve a problem inverse to database query processing (that is, evaluate an event on a set of subscriptions to identify the matching ones).
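The inverse-query matching just described (evaluate one event against many subscriptions, rather than one query against many tuples) can be sketched minimally. The attribute-operator-constant subscription format and all names here are illustrative assumptions, not the representation of any particular system.

```python
import operator

# A subscription is a conjunction of (attribute, operator, constant) constraints.
OPS = {"<=": operator.le, ">=": operator.ge, "==": operator.eq}

def matches(event, subscription):
    """True if the event satisfies every constraint in the subscription."""
    return all(attr in event and OPS[op](event[attr], const)
               for attr, op, const in subscription)

def match_all(event, subscriptions):
    """Evaluate one event against all subscriptions (the inverse of a query)."""
    return [sid for sid, sub in subscriptions.items() if matches(event, sub)]

# Apartment-style subscriptions, as in the abstract's example.
subs = {
    "cheap_2br": [("price", "<=", 1500), ("bedrooms", ">=", 2)],
    "downtown":  [("location", "==", "downtown")],
}
event = {"price": 1400, "bedrooms": 2, "location": "uptown"}
```

Here `match_all(event, subs)` notifies only the `cheap_2br` subscriber; a production system would index the subscriptions instead of scanning them, but the matching semantics are the same.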
Information dissemination services are often "add-ons" to auction sites, shopping sites, or information services (for example, news, sports, traffic) that allow a subscriber to express interest in certain events and consequently be notified upon the occurrence of the event. For instance, a site offering apartments (rental or sale) may allow a user to submit a detailed subscription constraining location, size, price, and nearby attractions of the ideal apartment." "In this article we introduce the first index structure, called the QIC-M-tree, that can process user-defined queries in generic metric spaces, that is, where the only information about indexed objects is their relative distances. The QIC-M-tree is a metric access method that can deal with several distinct distances at a time: (1) a query (user-defined) distance, (2) an index distance (used to build the tree), and (3) a comparison (approximate) distance (used to quickly discard from the search uninteresting parts of the tree). We develop an analytical cost model that accurately characterizes the performance of the QIC-M-tree and validate this model through extensive experimentation on real metric data sets. In particular, our analysis is able to predict the best evaluation strategy (i.e., which distances to use) under a variety of configurations, by properly taking into account relevant factors such as the distribution of distances, the cost of computing distances, and the actual index structure." "Materialized views and view maintenance are important for data warehouses, retailing, banking, and billing applications. We consider two related view maintenance problems: 1) how to maintain views after the base tables have already been modified, and 2) how to minimize the time for which the view is inaccessible during maintenance. Typically, a view is maintained immediately, as a part of the transaction that updates the base tables.
Immediate maintenance imposes a significant overhead on update transactions that cannot be tolerated in many applications. In contrast, deferred maintenance allows a view to become inconsistent with its definition. A refresh operation is used to reestablish consistency. We present new algorithms to incrementally refresh a view during deferred maintenance." "We use conventional hardware for servers and clients and examine bottlenecks and optimization options systematically, in order to reduce jitter and increase the maximum number of clients that the system can support. We show that the diversity of client performance characteristics can be taken into account, so that all clients are well supported for delay-sensitive retrieval in a heterogeneous environment. We also show that their characteristics can be exploited to maximize server throughput under server memory constraints." "Data exchange formats were originally devised for moving data between programs and between groups of researchers in a platform-independent file format. They are mostly self-describing, containing data element definitions along with the base data, though in some cases they involve a standardized external data dictionary. DXS allow exchange of data structures between programs, not just byte streams. They tend not to support a behavioral component as part of the interchange format, though some have assertions and derived data elements.
They are typically implemented as a procedure library that is linked with an application, along with some stand-alone utilities. An interesting phenomenon is that DXS are being used for data management, although they were originally intended for data exchange. Research groups keep their data in files using one of these formats, and reprogram their tools or write adaptors to use data in that format. The self-description means that, as with a database management system, there is no longer a dependence on a particular application program in order to be able to read and decode the data. This use of DXS for data storage is acknowledged in the tools that are appearing, such as DX-file browsers and cataloging facilities." "This paper takes the next logical step: It considers the use of timestamping for capturing transaction and valid time in the context of transactions. The paper initially identifies and analyzes several problems with straightforward timestamping, then proceeds to propose a variety of techniques aimed at solving these problems. Timestamping the results of a transaction with the commit time of the transaction is a promising approach. The paper studies how this timestamping may be done using a spectrum of techniques. While many database facts are valid until now, the current time, this value is absent from the existing temporal types. Techniques that address this problem using different substitute values are presented. Using a stratum architecture, the performance of the different proposed techniques is studied. Although querying and modifying time-varying data is accompanied by a number of subtle problems, we present a comprehensive approach that provides application programmers with simple, consistent, and efficient support for modifying bitemporal databases in the context of user transactions."
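The commit-time timestamping idea in the abstract above (a transaction's writes cannot be stamped until the transaction commits, so they are buffered and stamped together) can be sketched as follows. The class and method names are hypothetical, and a real system would also manage valid time and the "until now" end point the abstract discusses.

```python
import itertools

class TimestampedStore:
    """Toy store: buffer a transaction's writes, stamp all of them
    with the transaction's commit time at commit."""

    def __init__(self):
        self.rows = []                  # (key, value, transaction_time)
        self._clock = itertools.count(1)

    def begin(self):
        return []                       # a transaction is just a write buffer

    def write(self, tx, key, value):
        tx.append((key, value))         # note: no timestamp is assigned here

    def commit(self, tx):
        commit_time = next(self._clock)
        for key, value in tx:           # every write gets the same commit time
            self.rows.append((key, value, commit_time))
        return commit_time

store = TimestampedStore()
t1 = store.begin()
store.write(t1, "a", 1)
store.write(t1, "b", 2)
ct = store.commit(t1)
```

The point of the buffering is visible in `write`: the timestamp is unknown while the transaction is active, so straightforward stamping at write time would be incorrect.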
"SilkRoute composes the application query with the public-view query, translates the result into SQL, executes this on the relational engine, and assembles the resulting tuple streams into an XML document. This work makes some key contributions to XML query processing. First, it describes an algorithm that translates an XQuery expression into SQL. The translation depends on a query representation that separates the structure of the output XML document from the computation that produces the document's content. The second contribution addresses the optimization problem of how to decompose an XML view over a relational database into an optimal set of SQL queries. We define formally the optimization problem, describe the search space, and propose a greedy, cost-based optimization algorithm, which obtains its cost estimates from the relational engine. Experiments confirm that the algorithm produces queries that are nearly optimal." "Motivated by this, we propose a new index structure called the TPR*-tree, which takes into account the unique features of dynamic objects through a set of improved construction algorithms. In addition, we provide cost models that determine the optimal performance achievable by any data-partition spatio-temporal access method. Using experimental comparison, we illustrate that the TPR*-tree is nearly optimal and significantly outperforms the TPR-tree under all conditions." "This issue presents you with three workshop reports organized by Brian Cooper, the new associate editor for the workshop reports and technical notes. The first report summarizes the events and discussions at the EDBT summer school on XML and Databases, contributed by Riccardo Torlone and Paolo Atzeni. The second report, by Ioana Manolescu and Yannis Papakonstantinou, gives an overview of the workshop on XQuery Implementation, Experience, and Perspectives, which was held this year in Paris, France, in cooperation with the ACM SIGMOD conference.
The third workshop report follows." "My term as the Editor of the SIGMOD RECORD is ending." "We present a novel framework and a tool (ToMAS) for automatically adapting mappings as schemas evolve. Our approach considers not only local changes to a schema, but also changes that may affect and transform many components of a schema. We consider a comprehensive class of mappings for relational and XML schemas with choice types and (nested) constraints. Our algorithm detects mappings affected by a structural or constraint change and generates all the rewritings that are consistent with the semantics of the mapped schemas. Our approach explicitly models mapping choices made by a user and maintains these choices, whenever possible, as the schemas and mappings evolve. We describe an implementation of a mapping management and adaptation tool based on these ideas and compare it with a mapping generation tool." "To support efficient similarity searches in an NDDS, we propose a new dynamic indexing technique, called the ND-tree. The key idea is to extend the relevant geometric concepts as well as some indexing strategies used in CDSs to NDDSs. Efficient algorithms for ND-tree construction are presented. Our experimental results on synthetic and genomic sequence data demonstrate that the performance of the ND-tree is significantly better than that of the linear scan and M-tree in high-dimensional NDDSs." "For many years, TIBCO (the Information Bus Company) has pioneered the use of Publish/Subscribe, a form of push technology, to build flexible, real-time, loosely-coupled distributed applications. Today, Publish/Subscribe is used by 300 of the world's largest financial institutions, deployed in 6 of the top 10 semiconductor manufacturers' factory floors, utilized in the implementation of large-scale Internet services like Yahoo, Intuit, and ETrade, and chosen by many of the world's leading corporations as the enterprise infrastructure for integrating disparate applications.
In this paper, we will: contrast the Publish/Subscribe event-driven interaction paradigm with the traditional demand-driven request-reply interaction paradigm" "Query optimizers nowadays draw upon many sources of information about the database to optimize queries. They employ runtime statistics in cost-based estimation of query plans. They employ integrity constraints in the query rewrite process. Primary and foreign key constraints have long played a role in the optimizer, both for rewrite opportunities and for providing more accurate cost predictions. More recently, other types of integrity constraints are being exploited by optimizers in commercial systems, for which certain semantic query optimization techniques have now been implemented." "The main components of MIND are a global query processor, a global transaction manager, a schema integrator, interfaces to supported database systems, and a graphical user interface. In MIND, all local databases are encapsulated in a generic database object with a well-defined single interface. This approach hides the differences between local databases from the rest of the system. The integration of export schemas is currently performed manually by using an object definition language (ODL) which is based on OMG's interface definition language. The DBA builds the integrated schema as a view over export schemas. The functionalities of ODL allow selection and restructuring of schema elements from existing local schemas. The MIND global query optimizer aims at maximizing the parallel execution of the intersite joins of the global subqueries. Through the MIND global transaction manager, serializable execution of global transactions is provided." "There are a variety of main-memory access structures, such as segment trees and quad trees, whose properties, such as good worst-case behaviour, make them attractive for database applications.
Unfortunately, these structures are typically 'long and skinny', whereas disk data structures must be 'short and fat' (that is, have a high fanout and low height) in order to minimize I/O. We consider how to cluster the nodes (that is, map the nodes to disk pages) of main-memory access structures such that although a path may traverse many nodes, it only traverses a few disk pages. The number of disk pages traversed in a path is called the external path length. We address several versions of the clustering problem. We present a clustering algorithm for tree structures that generates optimal worst-case external path length mappings; we also show how to make it dynamic, to support updates. We extend the algorithm to generate mappings that minimize the average weighted external path lengths. We also show that some other clustering problems, such as finding optimal external path lengths for DAG structures" "This paper describes an advanced development program to create a medical information system called the National Medical Knowledge Bank (NMKB). This five-year program is sponsored in part by a grant from the National Institute of Standards and Technology Advanced Technology Program. The goals of the program, covering computer-assisted diagnosis, medical training, remote consultation, and medical records storage, are defined. The web-based architecture of the medical knowledge bank is presented, including the Teradata Multimedia Services, an object/relational database which serves as the central data repository for medical data stored in multiple data types. Also described are the applications of physician support, including case-based reasoning and image analysis for determining case similarity; virtual medical conferences; and initial/continuing medical education." "In this special issue on metadata management, we present new work on creating, gathering, managing, and understanding metadata.
The work in this issue highlights the reality that the lack of metadata, and of effective techniques for managing it, is currently one of the biggest challenges to meaningful use and sharing of the wealth (or should we say glut) of data available today." "The overall theme of the FQAS conferences is innovative query systems that are aimed at providing easy, flexible, and intuitive access to information. Such systems are intended to facilitate retrieval from information repositories such as databases, libraries, and the World Wide Web. These repositories are typically equipped with standard query systems, which are often inadequate, and the focus of FQAS is the development of query systems that are more expressive, informative, cooperative, and productive." "This paper proposes a system for personalization of web portals. A specific implementation is discussed in reference to a web portal containing a news feed service. Techniques are proposed for effective categorization, management, and personalization of news feeds obtained from a live news wire service. The process consists of two steps: first, manual input is required to build the domain knowledge, which could be site-specific; then, the automated component uses this domain knowledge in order to perform the personalization, categorization, and presentation. Effective schemes for advertising are proposed, where the targeting is done using both the information about the user and the content of the web page on which the advertising icon appears. Automated techniques for identifying sudden variations in news patterns are described; these may be used for supporting news alerts. A description of a version of this software for our customer web site is provided." "The growing amount of XML-encoded data exchanged over the Internet increases the importance of XML-based publish-subscribe (pub-sub) and content-based routing systems.
The input in such systems typically consists of a stream of XML documents and a set of user subscriptions expressed as XML queries. The pub-sub system then filters the published documents and passes them to the subscribers. Pub-sub systems are characterized by very high input rates; therefore, processing time is critical." "The growth of the geomatics industry is stunted by the difficulty of obtaining and transforming suitable spatial data. This paper describes a remedy: the Open Geospatial Datastore Interface (OGDI), which permits application software to access a variety of spatial data products. The discussion compares the OGDI approach to other standards efforts and describes the characteristics and use of OGDI, which is in the public domain." "In this issue of Leaven, we explore the theme of local church ministry by honoring the legacy of Paul and Kay Watson. The following reflections and essays are written by those who bear appreciative witness to the faithful service of this Christian couple. Paul and Kay have dedicated their time, love, and spiritual gifts for the last three decades to the Cole Mill Road congregation in Durham, North Carolina. And through their missionary travels and a host of teaching opportunities, their influence has been felt by those far beyond their home church." "In this paper, we propose novel techniques for performing SVD-based dimensionality reduction in dynamic databases. When the data distribution changes considerably so as to degrade query precision, we recompute the SVD transform and incorporate it in the existing index structure. For recomputing the SVD transform, we propose a novel technique that uses aggregate data from the existing index rather than the entire data. This technique reduces the SVD-computation time without compromising query precision. We then explore efficient ways to incorporate the recomputed SVD transform in the existing index structure.
These techniques reduce the computation time by a factor of 20 in experiments on color and texture image vectors. The error due to approximate computation of SVD is less than 10%." "''Push Technology'' stands for the ability to transfer information as a reaction to an event occurrence. This demonstration proposal describes Amit, a middleware framework that resolves a major problem in this area: the gap that exists between events that are reported by various channels and the actual cases in which the user needs to react, here called reactive situations. These situations are compositions of events or other situations (for example, ""when at least four events of the same type occurred""), content filtering on events (for example, ""only events that relate to IBM stocks""), or both (""when at least four purchases of more than 50,000 shares have been performed on IBM stocks in a single week""). This paper describes the generic application development tool and the middleware architecture and framework, and describes the demo." "Oracle Corporation, the world's second largest software company, is the leading supplier of software for enterprise information management. The company has two major businesses: one providing the lowest-cost information technology infrastructure, and another offering business and competitive advantage through high-value applications. Oracle is one of the first software companies to implement its model of enterprise software management through network computing, and is the first major software company to make full-featured products available electronically on the Internet. Oracle is the only company capable of implementing end-to-end enterprise IT infrastructure and applications solutions on a global scale."
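The SVD-based dimensionality reduction discussed above can be sketched briefly. This is not the paper's aggregate-based recomputation technique; it is only a minimal illustration, assuming NumPy, of how a rank-k SVD transform is fitted from data and then used to project vectors into the reduced space (the function names are hypothetical):

```python
import numpy as np

def fit_svd_transform(data, k):
    """Fit a rank-k SVD transform from mean-centered data rows."""
    mean = data.mean(axis=0)
    centered = data - mean
    # The top-k right singular vectors form the projection basis.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:k]

def project(points, mean, basis):
    """Map points into the reduced k-dimensional space."""
    return (points - mean) @ basis.T

# Illustrative use: 8-D feature vectors reduced to 2-D.
rng = np.random.default_rng(0)
data = rng.normal(size=(200, 8))
mean, basis = fit_svd_transform(data, k=2)
reduced = project(data, mean, basis)
```

In the dynamic setting the abstract describes, `fit_svd_transform` would be re-run (from aggregate index data rather than raw data) whenever query precision degrades.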
"Our algorithm has the following characteristics: (1) It requires only one pass over the data; (2) It is deterministic; (3) It produces good lower and upper bounds of the true values of the quantiles; (4) It requires no a priori knowledge of the distribution of the data set; (5) It has a scalable parallel formulation; (6) Extra time and memory for computing additional quantiles (beyond the first one) are constant per quantile. We present experimental results on the IBM SP-2. The experimental results show that the algorithm is indeed robust and does not depend on the distribution of the data sets." "The chief editor's ethics are the requirements put forward by authors, editors, and readers within an ethical system based on identifying the values of the scholarly publishing environment, in order to maintain moral attitudes and behavior in academic exchanges." "We present a novel framework for mapping between any combination of XML and relational schemas, in which a high-level, user-specified mapping is translated into semantically meaningful queries that transform source data into the target representation. Our approach works in two phases. In the first phase, the high-level mapping, expressed as a set of inter-schema correspondences, is converted into a set of mappings that capture the design choices made in the source and target schemas (including their hierarchical organization as well as their nested referential constraints). The second phase translates these mappings into queries over the source schemas that produce data satisfying the constraints and structure of the target schema, and preserving the semantic relationships of the source. Non-null target values may need to be invented in this process. The mapping algorithm is complete in that it produces all mappings that are consistent with the schema constraints. We have implemented the translation algorithm in Clio, a schema mapping tool, and present our experience using Clio on several real schemas."
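The idea of deterministic one-pass quantile bounds, as in the abstract above, can be illustrated with a much simpler scheme than the paper's algorithm: counting values into equi-width buckets over a known range, then bounding the q-quantile by the edges of the bucket containing it. This sketch (names and parameters are illustrative, not the paper's) shares properties (1)-(4): one pass, deterministic, bounded, distribution-independent.

```python
def quantile_bounds(stream, lo, hi, num_buckets, q):
    """One pass over `stream`: count values into equi-width buckets over
    [lo, hi), then return (lower, upper) bounds for the q-quantile given
    by the edges of the bucket that contains its rank."""
    width = (hi - lo) / num_buckets
    counts = [0] * num_buckets
    n = 0
    for x in stream:                       # single pass over the data
        i = min(int((x - lo) / width), num_buckets - 1)
        counts[i] += 1
        n += 1
    rank = max(1, int(q * n))              # target rank of the quantile
    seen = 0
    for i, c in enumerate(counts):
        seen += c
        if seen >= rank:                   # quantile falls in bucket i
            return lo + i * width, lo + (i + 1) * width
    return hi, hi

lb, ub = quantile_bounds(range(1000), 0, 1000, 10, 0.5)
```

Tighter bounds follow from more buckets; the paper's contribution lies in achieving good bounds without a priori knowledge of the value range or distribution.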
"Repositories manage metadata. Metadata describes complex artifacts that are the subject of formal design activities, such as business processes, application interfaces, database (DB) schemas, engineering drawings, software configurations, and document libraries. Demand for repositories is growing, fueled by enterprise re-engineering, integrated CASE, and data warehousing." "A predicate might be ""perfectly"" translated as [rating > 0.8] at some site, but can only be approximated as [grade = A] at another. Unlike other work, our general framework adopts a customizable ""closeness"" metric for the translation that combines both precision and recall. Our results show that for query translation we need to handle interdependencies among both query conjuncts as well as disjuncts. As the basis, we identify the essential requirements of a rule system for users to encode the mappings for atomic semantic units. Our algorithm then translates complex queries by rewriting them in terms of the semantic units." "There is a wide array of programming languages available to express user-defined logic. The principal advantage of such an approach is that the workflow logic is kept directly where the workflow data resides, resulting in a more efficient, simpler, and more compact system design. It also aids with the embedding of database-centric workflow into a larger framework application, since a DBMS is part of all enterprise applications. Finally, I discuss the advantages and disadvantages of this conceptual approach, and show how additional common workflow features can be added to the current architecture of the Informix Media360 workflow component." "This paper proposes a different approach, motivated by integrating large numbers of data sources on the Internet.
On this ""deep Web,"" we observe two distinguishing characteristics that offer a new view for considering schema matching: First, as the Web scales, there are ample sources that provide structured information in the same domains (e.g., books and automobiles). Second, while sources proliferate, their aggregate schema vocabulary tends to converge at a relatively small size. Motivated by these observations, we propose a new paradigm, statistical schema matching: Unlike traditional approaches using pairwise-attribute correspondence, we take a holistic approach to match all input schemas by finding an underlying generative schema model. We propose a general statistical framework MGS for such hidden model discovery, which consists of hypothesis modeling, generation, and selection." "To provide high accessibility of continuous-media (CM) data, CM servers generally stripe data across multiple disks. Currently, the most widely used striping scheme for CM data is round-robin permutation (RRP). Unfortunately, when RRP is applied to variable-bit-rate (VBR) CM data, load imbalance across multiple disks occurs, thereby reducing overall system performance. In this paper, the performance of a VBR CM server with RRP is analyzed. In addition, we propose an efficient striping scheme called constant time permutation (CTP), which takes the VBR characteristic into account and obtains a more balanced load than RRP. Analytic models of both RRP and CTP are presented, and the models are verified via trace-driven simulations. Analysis and simulation results show that CTP can substantially increase the number of clients supported, though it might introduce a few seconds/minutes of initial delay." "To address this problem, we developed Cache Investment - a novel approach for integrating query optimization and data placement that looks beyond the performance of a single query. 
Cache Investment sometimes intentionally generates a ""suboptimal"" plan for a particular query in the interest of effecting a better data placement for subsequent queries. Cache Investment can be integrated into a distributed database system without changing the internals of the query optimizer. In this paper, we propose Cache Investment mechanisms and policies and analyze their performance. The analysis uses results from both an implementation on the SHORE storage manager and a detailed simulation model. Our results show that Cache Investment can significantly improve the overall performance of a system and demonstrate the trade-offs among various alternative policies." "The latest, and the subject of this review, is a set of course notes published by the Mineralogical Association of Canada, compiled to accompany a short course on PGE exploration held at the recent PGE Symposium in Oulu, Finland." "Hello Everyone, I hope you all enjoyed your summer as our thoughts now turn to fall and all the wonders it brings. In this issue, I am responding to several assisted living (AL) nursing concerns we have received regarding advance directives (ADs)." "Continuous queries over data streams may suffer from blocking operations and/or unbounded wait, which may delay answers until some relevant input arrives through the data stream. These delays may render answers, when they arrive, obsolete to users who sometimes have to make decisions with no help whatsoever. Therefore, it can be useful to provide hypothetical answers - ""given the current information, it is possible that X will become true at time t"" - instead of no information at all." "As the object-oriented model becomes the trend of database technology, there is a need to convert relational to object-oriented database systems to improve productivity and flexibility. The changeover includes schema translation, data conversion, and program conversion.
This paper describes a methodology for integrating schema translation and data conversion. Schema translation involves semantic reconstruction and the mapping of a relational schema into an object-oriented schema. Data conversion involves unloading tuples of relations into sequential files and reloading them into object-oriented class files. The methodology preserves the constraints of the relational database by mapping the equivalent data dependencies." "In this paper, we introduce a new type of integrity constraint, which we call a statistical constraint, and discuss its applicability to enhancing database correctness. Statistical constraints manifest embedded relationships among current attribute values in the database and are characterized by their probabilistic nature. They can be used to detect potential errors not easily detected by the conventional constraints. Methods for extracting statistical constraints from a relation and enforcement of such constraints are described. Preliminary performance evaluation of enforcing statistical constraints on a real-life database is also presented." "While the number of database management systems (DBMSs) increases and the various DBMSs get more and more complex, no uniform method for DBMS construction exists. As a result, developers are forced to start more or less from scratch again for every desired system, resulting in a waste of time, effort, and cost. Hence, the database community is challenged with the development of an appropriate method, i.e. the time-saving application of engineering principles (e.g., reuse). Problems related to a construction method are described, as well as approaches towards solutions." "In this paper, we study efficient methods for computing iceberg cubes with some popularly used complex measures, such as average, and develop a methodology that adopts a weaker but anti-monotonic condition for testing and pruning search space.
In particular, for efficient computation of iceberg cubes with the average measure, we propose a top-k average pruning method and extend two previously studied methods, Apriori and BUC, to Top-k Apriori and Top-k BUC. To further improve the performance, an interesting hypertree structure, called H-tree, is designed and a new iceberg cubing method, called Top-k H-Cubing, is developed. Our performance study shows that Top-k BUC and Top-k H-Cubing are two promising candidates for scalable computation, and Top-k H-Cubing has better performance in most cases." "Queries navigate semistructured data via path expressions, and can be accelerated using an index. Our solution encodes paths as strings, and inserts those strings into a special index that is highly optimized for long and complex keys. We describe the Index Fabric, an indexing structure that provides the efficiency and flexibility we need. We discuss how ""raw paths"" are used to optimize ad hoc queries over semistructured data, and how ""refined paths"" optimize specific access paths. Although we can use knowledge about the queries and structure of the data to create refined paths, no such knowledge is needed for raw paths." "We propose a mathematical formulation for the notion of optimal projective cluster, starting from natural requirements on the density of points in subspaces. This allows us to develop a Monte Carlo algorithm for iteratively computing projective clusters. We prove that the computed clusters are good with high probability. We implemented a modified version of the algorithm, using heuristics to speed up computation. Our extensive experiments show that our method is significantly more accurate than previous approaches. In particular, we use our techniques to build a classifier for detecting rotated human faces in cluttered images." "Databases, the center of today's information systems, are becoming more and more important judging by the huge volume of business they generate. 
In fact, database-related material is included in a variety of curricula proposed by international organizations and prestigious universities. However, a systemized database body of knowledge (DBBOK), analogous to other works in Software Engineering (SWEBOK) or in Project Management (PMBOK), is needed. In this paper, we propose a first draft of this DBBOK based on degree programs from a variety of universities, the most relevant international curricula, and the contents of the latest editions of the principal books on databases." "Recent work in data integration has shown the importance of statistical information about the coverage and overlap of sources for efficient query processing. Despite this recognition, there are no effective approaches for learning the needed statistics. In this paper we present StatMiner, a system for estimating the coverage and overlap statistics while keeping the needed statistics tightly under control. StatMiner uses a hierarchical classification of the queries, and threshold-based variants of familiar data mining techniques to dynamically decide the level of resolution at which to learn the statistics. We will demonstrate the major functionalities of StatMiner and the effectiveness of the learned statistics in BibFinder, a publicly available computer science bibliography mediator we developed. The sources that BibFinder integrates are autonomous and can have uncontrolled coverage and overlap. An important focus in BibFinder was thus to mine coverage and overlap statistics about these sources and to exploit them to improve query processing." "The tutorial ""Software as a Service: ASP and ASP aggregation"" will give an introduction and overview of the concept of ""renting"" access to software to customers (subscribers). Application service providers (ASPs) are enterprises hosting one or more applications and providing access to subscribers over the Internet by means of browser technology.
Furthermore, the underlying technologies that enable application hosting are discussed. The concept of ASP aggregation is introduced to provide a single access point and a single sign-on capability to subscribers subscribing to more than one hosted application at more than one ASP." "DBMSs are widely used and successfully applied to a huge range of applications. However, the transaction paradigm, which (with variations like nested transactions and workflow systems) is the basis for durability and correctness, is poorly suited to many modern applications. Transactions do not scale well and behave poorly at high concurrency levels and in distributed systems." "The summer school tutorial program on New Frontiers in Data Mining was presented under the auspices of DIMACS, the Center for Theoretical Computer Science at Rutgers University. It took place from August 13, 2001 to August 17, 2001 at the DIMACS center at Rutgers. Dimitrios Gunopulos and Nick Koudas were the organizers of the tutorial, which included a large list of invited speakers and presenters. The National Science Foundation provided funding for this event. Fred Roberts, the DIMACS director, provided guidance during the course of this work." "Recently, technological advances have resulted in the wide availability of commercial products offering near-line, robot-based, tertiary storage libraries. Thus, such libraries have become a crucial component of modern large-scale storage servers, given the very large storage requirements of modern applications. Although the subject of optimal data placement (ODP) strategies has received considerable attention for other storage devices (such as magnetic and optical disks and disk arrays), the issue of optimal data placement in tertiary libraries has been neglected. The latter issue is more critical since tertiary storage remains three orders of magnitude slower than secondary storage. In this paper, we address this issue by deriving such optimal placement algorithms."
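The data-placement problem for tertiary libraries described above can be conveyed by a deliberately simplified sketch: assign the most frequently accessed objects to the cheapest (fastest-to-reach) slots, and score a placement by its expected per-access cost. This greedy rule is only an illustration of the frequency-based placement intuition, not the optimal algorithms the paper derives; all names and cost values below are hypothetical.

```python
def place_by_frequency(objects, slot_costs):
    """Greedy placement sketch.

    objects: dict mapping object name -> access probability.
    slot_costs: list of per-access costs, one per media slot.
    Returns (placement, expected_cost), where placement maps each
    object to a slot index and expected_cost = sum(p_i * cost_of_slot).
    """
    # Hottest objects first; cheapest slots first.
    ranked = sorted(objects.items(), key=lambda kv: kv[1], reverse=True)
    slots = sorted(range(len(slot_costs)), key=lambda s: slot_costs[s])
    placement = {name: slots[i] for i, (name, _) in enumerate(ranked)}
    expected = sum(p * slot_costs[placement[n]] for n, p in objects.items())
    return placement, expected

placement, cost = place_by_frequency(
    {"a": 0.6, "b": 0.3, "c": 0.1},  # access probabilities
    [5, 1, 3],                        # per-slot access costs
)
```

For a robotic library, the slot cost would model exchange and seek times, which is what makes tertiary placement so much more sensitive than disk placement.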
"In this paper, we present a unified framework that can enforce multiple access control policies within a single system. The framework is based on a language through which users can specify security policies to be enforced on specific accesses. The language allows the specification of both positive and negative authorizations and incorporates notions of authorization derivation, conflict resolution, and decision strategies. Different strategies may be applied to different users, groups, objects, or roles, based on the needs of the security policy. The overall result is a flexible and powerful, yet simple, framework that can easily capture many of the traditional access control policies as well as protection requirements that exist in real-world applications but are seldom supported by existing systems. The major advantage of our approach is that it can be used to specify different access control policies that can all coexist in the same system and be enforced by the same security server." "This paper presents an algorithm, called ARIES/CSA (Algorithm for Recovery and Isolation Exploiting Semantics for Client-Server Architectures), for performing recovery correctly in client-server (CS) architectures. In CS, the server manages the disk version of the database. The clients, after obtaining database pages from the server, cache them in their buffer pools. Clients perform their updates on the cached pages and produce log records. The log records are buffered locally in virtual storage and later sent to the single log at the server. ARIES/CSA supports write-ahead logging (WAL), fine-granularity (e.g., record) locking, partial rollbacks, and flexible buffer management policies like steal and no-force. It does not require that the clocks on the clients and the server be synchronized. Checkpointing by the server and the clients allows for flexible and easier recovery."
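The interplay of positive and negative authorizations with a conflict-resolution strategy, as in the access control framework above, can be shown with a toy evaluator. This is not the paper's policy language; it is a minimal sketch, under the assumed "denials take precedence" strategy combined with a closed policy (no explicit grant means deny), and the tuple format is invented for illustration.

```python
def decide(auths, subject, obj, action, strategy="denials-take-precedence"):
    """Evaluate explicit authorizations for one access request.

    auths: list of (subject, object, action, sign) tuples, where sign is
    "+" for a positive authorization and "-" for a negative one.
    """
    signs = {sign for s, o, a, sign in auths
             if (s, o, a) == (subject, obj, action)}
    if strategy == "denials-take-precedence":
        if "-" in signs:
            return False          # any explicit denial wins over grants
        return "+" in signs       # closed policy: no grant means deny
    raise ValueError("unknown strategy")

auths = [
    ("alice", "file1", "read", "+"),
    ("alice", "file1", "write", "+"),
    ("alice", "file1", "write", "-"),  # denial conflicts with the grant
]
```

A real system in the framework's spirit would add derivation rules (e.g., group membership) and would let different objects or roles select different strategies.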
"Relational database systems do not effectively support complex queries containing quantifiers (quantified queries), which are increasingly becoming important in decision support applications. Generalized quantifiers provide an effective way of expressing such queries naturally. In this paper, we consider the problem of processing quantified queries within the generalized quantifier framework. We demonstrate that current relational systems are ill-equipped, both at the language and at the query processing level, to deal with such queries. We also provide insights into the intrinsic difficulties associated with processing such queries. We then describe the implementation of a quantified query processor, Q2P, that is based on multidimensional and boolean matrix structures. We provide results of performance experiments run on Q2P that demonstrate superior performance on quantified queries. Our results indicate that it is feasible to augment relational systems with query subsystems like Q2P for significant performance benefits for quantified queries in decision support applications." "Database support for multidimensional arrays is an area of growing importance; a variety of high-volume applications such as spatio-temporal data management and statistics/OLAP are becoming the focus of academic and market interest. RasDaMan is a domain-independent array database management system with server-based query optimization and evaluation. The system is fully operational and being used in international projects. We will demonstrate spatio-temporal retrieval using the rView visual query client. Examples will encompass 1-D time series, 2-D images, and 3-D and 4-D voxel data. The real-life data sets used stem from life sciences, geo sciences, numerical simulation, and climate research." "The primary session of the workshop took place the morning of the first day. In this session, each of the participants had up to 10 min to deliver a brief message, using just one slide.
Researchers were asked to answer the question: 'In your view, what is the most urgent, unsolved question/issue in verbal lie detection?' Similarly, practitioners were asked: 'As a practitioner, what question/issue do you wish verbal lie detection research would address?' The issues raised served as the basis for the discussions that were held throughout the workshop. The current paper first presents the urgent, unsolved issues raised by the workshop group members in the main session, followed by a message to researchers in the field, designed to deliver the insights, decisions, and conclusions resulting from the discussions." "Unfortunately, this will be my last influential papers column. I've been editor for about five years now (how time flies!) and have enjoyed it immensely. I've always found it rewarding to step back and look at why we do the research we do, and this column makes a big contribution to the process of self-examination. Further, I feel that there's a strong need for ways to publicly and explicitly highlight ""quality"" in papers. Criticism is easy, and is the more common experience given the amount of reviewing (and being reviewed) we typically engage in. I look forward to seeing this column in future issues." "Many applications compute aggregate functions (such as COUNT, SUM) over an attribute (or set of attributes) to find aggregate values above some specified threshold. We call such queries iceberg queries because the number of above-threshold results is often very small (the tip of an iceberg), relative to the large amount of input data (the iceberg). Such iceberg queries are common in many applications, including data warehousing, information retrieval, market basket analysis in data mining, clustering, and copy detection. We propose efficient algorithms to evaluate iceberg queries using very little memory and significantly fewer passes over data, as compared to current techniques that use sorting or hashing.
We present an experimental case study using over three gigabytes of Web data to illustrate the savings obtained by our algorithms." "The goal of the Paradise project is to apply object-oriented and parallel database technology to the task of implementing a parallel GIS system capable of managing extremely large (multi-terabyte) data sets such as those that will be produced by the upcoming NASA EOSDIS project [Car92]. The project is focusing its resources on algorithms, processing, and storage techniques, and not on making new contributions to the data modeling, query language, or user interface domains." "Soon, the world will need far more truly large databases than any of us ever imagined; yet, ironically, without a lot of care, VLDBs as we know them today may be left along the wayside. The way in which we think about, design, and build enormous databases will have to completely change if we are to participate in this revolution. By now everybody, including database people, realizes that the computer world is going through not one but two revolutionary changes. First, of course, there is the whole impact of personal computers, commodity hardware, and ever-increasing speed and capacity. Second, there is the impact of the Internet. Put them together, and the impact is explosive. This paper deals with the particular impact of this changing world scene on the concepts behind, and implementation of, very large databases. By the time we're done, the meanings of ""very"", ""large"", and ""database"" will be pretty different from what they are today." "In this paper we present a method for automatically segmenting unformatted text records into structured elements. Several useful data sources today are human-generated as continuous text, whereas convenient usage requires the data to be organized as structured records.
A prime motivation is the warehouse address cleaning problem of transforming dirty addresses stored in large corporate databases as a single text field into subfields like "City" and "Street". Existing tools rely on hand-tuned, domain-specific rule-based systems." "A matrix comprising a water-insoluble beta-1,3-glucan gel in the shape of beads with diameters within the range of about 5 to 1000 μ is prepared by, for example, dispersing an alkaline aqueous solution of a water-soluble beta-1,3-glucan in a water-immiscible organic solvent, and adding an organic acid to the resultant dispersion. The matrix is useful as carrier materials for immobilized enzymes, affinity chromatography, gel filtration, ion exchange and other applications." "Together with bit-sliced addition, this permits us to solve a common basic problem of text retrieval: given an object-relational table T of rows representing documents, with a collection type column K representing keyword terms, we demonstrate an efficient algorithm to find k documents that share the largest number of terms with some query list Q of terms. A great deal of published work on such problems exists in the Information Retrieval (IR) field. The algorithm we introduce, which we call Bit-Sliced Term-Matching, or BSTM, uses an approach comparable in performance to the most efficient known IR algorithm, a major improvement on current DBMS text searching algorithms, with the advantage that it uses only indexing we propose for native database operations." "Those of us who were able to come to our Annual Meeting in Cancun had a most enjoyable time. Thanks to Laurie Levinson, Janet Szydlo, and their Program Committee, the scientific sessions were stimulating and productive ones.
Our three English guests, Mark Solms, Peter Fonagy, and Mary Target, presented informative and challenging papers, and both Peter Blos's discussion of Solms' paper and the ensuing discussion groups raised many important issues about the mind-body relationship in psychosomatic conditions in children." "HYPERQUERY is a hypertext query language for object-oriented pictorial database systems. First, we discuss object calculus based on term rewriting. Then, example queries are used to illustrate language facilities. This query language has been designed with a flavor similar to QBE as a highly nonprocedural and conversational language for the object-oriented pictorial database management system OISDBS." "This paper presents BUCKY, a new benchmark for object-relational database systems. BUCKY is a query-oriented benchmark that tests many of the key features offered by object-relational systems, including row types and inheritance, references and path expressions, sets of atomic values and of references, methods and late binding, and user-defined abstract data types and their methods. To test the maturity of object-relational technology relative to relational technology, we provide both an object-relational version of BUCKY and a relational equivalent thereof (i.e., a relational BUCKY simulation). Finally, we briefly discuss the initial performance results and lessons that resulted from applying BUCKY to one of the early object-relational database system products." "Based on this probabilistic model, we develop three types of utility-theoretic prefetching algorithms that anticipate how users will interact with the presentation. These prefetching algorithms allow efficient visualization of the query results in accordance with the underlying specification. We have built a prototype system that incorporates these algorithms. We report on the results of experiments conducted on top of this implementation."
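The utility-theoretic prefetching idea in the abstract above (anticipate likely interactions and fetch the most valuable items first) can be sketched as a greedy ranking by expected benefit per unit of cache space. A minimal sketch, assuming an access probability, size, and fetch cost per candidate; the names and the utility form are illustrative, not the paper's actual algorithms:

```python
# Rank candidates by expected utility per unit of cache space:
# utility(item) = P(access) * fetch_cost_saved, normalized by size.
# All names here are illustrative assumptions, not the paper's API.

def choose_prefetch(candidates, budget):
    """candidates: (item, prob, size, fetch_cost) tuples; returns items to prefetch."""
    ranked = sorted(candidates, key=lambda c: c[1] * c[3] / c[2], reverse=True)
    chosen, used = [], 0
    for item, prob, size, cost in ranked:
        if used + size <= budget:     # greedily fill the prefetch budget
            chosen.append(item)
            used += size
    return chosen

# Example: three result pages with access probabilities and sizes.
plan = choose_prefetch([("page_a", 0.9, 2, 10),
                        ("page_b", 0.5, 1, 10),
                        ("page_c", 0.1, 1, 10)], budget=3)
```

The greedy fill is a simplification; a real prefetcher would re-rank as the user's predicted interaction path evolves.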
"The problem of data cleaning, which consists of removing inconsistencies and errors from original data sets, is well known in the area of decision support systems and data warehouses. However, for non-conventional applications, such as the migration of largely unstructured data into structured form, or the integration of heterogeneous scientific data sets in interdisciplinary fields (e.g., in environmental science), existing ETL (Extraction Transformation Loading) and data cleaning tools for writing data cleaning programs are insufficient. The main challenge is the design of a data flow graph that effectively generates clean data and can perform efficiently on large sets of input data. The difficulty comes from (i) a lack of clear separation between the logical specification of data transformations and their physical implementation and (ii) the lack of explanation of cleaning results and of user interaction facilities to tune a data cleaning program." "Browsing ANd Keyword Searching (BANKS) enables almost effortless Web publishing of relational and eXtensible Markup Language (XML) data that would otherwise remain (at least partially) invisible to the Web. Relational databases store large amounts of data that are queried using structured query languages. A user needs to know the underlying schema and the query language in order to make meaningful ad hoc queries on the data. This is a substantial barrier for casual users, such as users of Web-based information systems. HTML forms can be provided for predefined queries. A university Website may provide a form interface to search for faculty and students. Searching for departments would require yet another form, as would a search for courses offered. However, creating an interface for each such task is laborious, and is also confusing to users since they must first expend effort finding which form to use.
Keyword search can provide a very simple and easy-to-use mechanism for casual users to get information from databases." "Garbage collection is important in object-oriented databases to free the programmer from explicitly deallocating memory. In this paper, we present a garbage collection algorithm, called Transactional Cyclic Reference Counting (TCRC), for object-oriented databases. The algorithm is based on a variant of a reference-counting algorithm proposed for functional programming languages. The algorithm keeps track of auxiliary reference count information to detect and collect cyclic garbage. The algorithm works correctly in the presence of concurrently running transactions, and system failures. It does not obtain any long-term locks, thereby minimizing interference with transaction processing. It uses recovery subsystem logs to detect pointer updates; thus, existing code need not be rewritten." "To overcome the verbosity problem, research on compressors for XML data has been conducted. However, some XML compressors do not support querying compressed data, while other XML compressors which support querying compressed data blindly encode tags and data values using predefined encoding methods. Thus, the query performance on compressed XML data is degraded. In this paper, we propose XPRESS, an XML compressor which supports direct and efficient evaluation of queries on compressed XML data. XPRESS adopts a novel encoding method, called reverse arithmetic encoding, which is intended for encoding label paths of XML data, and applies diverse encoding methods depending on the types of data values." "While the desire to support fast, ad hoc query processing for large data warehouses has motivated the recent introduction of many new indexing structures, with a few notable exceptions (namely, the LSM-Tree and the Stepped Merge Method), little attention has been given to developing new indexing schemes that allow fast insertions.
Since additions to a large warehouse may number in the millions per day, indices that require a disk seek (or even a significant fraction of a seek) per insertion are not acceptable. In this paper, we offer an alternative to the B+-tree called the Y-tree for indexing huge warehouses having frequent insertions. The Y-tree is a new indexing structure supporting both point and range queries over a single attribute, with retrieval performance comparable to the B+-tree. For processing insertions, however, the Y-tree may exhibit a speedup of 100 times over batched insertions into a B+-tree." "trees that minimize the computation and communication costs of parallel execution. We address the problem of finding parallel plans for SQL queries using the two-phase approach of join ordering and query rewrite (JOQR) followed by parallelization. We focus on the JOQR phase and develop optimization algorithms that account for communication as well as computation costs. Using a model based on representing the partitioning of data as a color, we devise an efficient algorithm for the problem of choosing the partitioning attributes in a query tree so as to minimize total cost. We extend our model and algorithm to incorporate the interaction of data partitioning with conventional optimization choices such as access methods and strategies for computing operators. Our algorithms apply to queries that include operators such as grouping, aggregation, intersection and set difference in addition to joins." "Four degradation patterns were determined objectively through the clustering approach, each of which showed similar trends and characteristics. Based on the distribution of clusters, interval 2 was determined to have the worst overall condition. The LSTM network was used for performance prediction in each cluster. Compared with the traditional multilayer neural network, the LSTM also exhibited good prediction effectiveness. 
The predicted data agree well with the observed data, with a correlation coefficient R2 of 88.4%." "We propose a solution that eliminates the need for such a trusted authority. The solution builds a centralized privacy-preserving index in conjunction with a distributed access-control enforcing search protocol. Two alternative methods to build the centralized index are proposed, allowing trade-offs between efficiency and security. The new index provides strong and quantifiable privacy guarantees that hold even if the entire index is made public. Experiments on a real-life dataset validate the performance of the scheme. The appeal of our solution is twofold: (a) content providers maintain complete control in defining access groups and ensuring compliance, and (b) system implementors retain tunable knobs to balance privacy and efficiency concerns for their particular domains." "To address these limitations of the current restructuring technology, we have proposed the SERF framework, which aims at providing a rich environment for doing complex user-defined transformations flexibly, easily and correctly. The goal of our work is to increase the usability and utility of the SERF framework and its applicability to restructuring problems beyond OODB evolution. Towards that end, we provide reusable transformations via the notion of SERF Templates that can be packaged into libraries, thereby increasing the portability of these transformations. We also now have a first cut at providing an assurance of consistency for the users of this system, and a semantic optimizer that provides some performance improvements via enhanced query optimization techniques, with emphasis on the restructuring primitives. In this demo we give an overview of the SERF framework, its current status and the enhancements that are planned for the future. We also present an example of the application of SERF to a domain other than schema evolution, namely, Web restructuring."
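The SERF abstract above packages restructurings as reusable, parameterized templates. A toy sketch of that idea, assuming a plain dict-based schema representation (an illustrative stand-in only; SERF templates are written over OODB schemas, not Python dicts):

```python
# Toy "transformation template": a reusable, parameterized schema change.
# The schema here is a plain dict {class_name: [attribute, ...]} -- an
# illustrative stand-in for an OODB schema, not SERF's actual model.

def merge_classes(schema, a, b, merged):
    """Template: replace classes a and b with one class holding both attribute sets."""
    schema = {k: list(v) for k, v in schema.items()}   # leave the input untouched
    attrs_a, attrs_b = schema.pop(a), schema.pop(b)
    schema[merged] = attrs_a + [x for x in attrs_b if x not in attrs_a]
    return schema

# The same template applies to any schema, which is the point of packaging
# such transformations into libraries:
before = {"Student": ["name", "gpa"], "Employee": ["name", "salary"]}
after = merge_classes(before, "Student", "Employee", "Person")
```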
"The article proposes a scalable protocol for replication management in large-scale replicated systems. The protocol organizes sites and data replicas into a tree-structured, hierarchical cluster architecture. The basic idea of the protocol is to accomplish the complex task of updating replicated data with a very large number of replicas by a set of related but independently committed transactions. Each transaction is responsible for updating replicas in exactly one cluster and invoking additional transactions for member clusters. Primary copies (one from each cluster) are updated by a cross-cluster transaction. Then each cluster is independently updated by a separate transaction. This decoupled update propagation process results in possible multiple views of replicated data in a cluster. Compared to other replicated data management protocols, the proposed protocol has several unique advantages." "Building such a database system requires fundamental changes in the architecture of the query processing engine; we present the system-level interfaces of PREDATOR that support E-ADTs, and describe the internal design details." "The contents of many valuable web-accessible databases are only accessible through search interfaces and are hence invisible to traditional web "crawlers." Recent studies have estimated the size of this "hidden web" to be 500 billion pages, while the size of the "crawlable" web is only an estimated two billion pages. Recently, commercial web sites have started to manually organize web-accessible databases into Yahoo!-like hierarchical classification schemes. In this paper, we introduce a method for automating this classification process by using a small number of query probes. To classify a database, our algorithm does not retrieve or inspect any documents or pages from the database, but rather just exploits the number of matches that each query probe generates at the database in question.
We have conducted an extensive experimental evaluation of our technique over collections of real documents, including over one hundred web-accessible databases." "We present an approach to database interoperation that exploits the semantic information provided by integrity constraints defined on the component databases. We identify two roles of integrity constraints in database interoperation. First, a set of integrity constraints describing valid states of the integrated view can be derived from the constraints defined on the underlying databases. Moreover, local integrity constraints can be used as a semantic check on the validity of the specification of the integrated view. We illustrate our ideas in the context of an instance-based database interoperation paradigm, where objects rather than classes are the unit of integration. We introduce the notions of objectivity and subjectivity as an indication of whether a constraint is valid beyond the context of a specific database, and demonstrate the impact of these notions." "A new access method, called M-tree, is proposed to organize and search large data sets from a generic "metric space", i.e. where object proximity is only defined by a distance function satisfying the positivity, symmetry, and triangle inequality postulates. We detail algorithms for insertion of objects and split management, which keep the M-tree always balanced; several heuristic split alternatives are considered and experimentally evaluated. Algorithms for similarity (range and k-nearest neighbors) queries are also described. Results from extensive experimentation with a prototype system are reported, considering as the performance criteria the number of page I/Os and the number of distance computations. The results demonstrate that the M-tree indeed extends the domain of applicability beyond the traditional vector spaces, performs reasonably well in high-dimensional data spaces, and scales well with growing files."
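The distance-function postulates listed in the M-tree abstract above are what make pruning possible: by the triangle inequality, |d(q, p) - d(p, o)| is a lower bound on d(q, o), so many distance computations can be skipped. A minimal sketch of pivot-based pruning for a range query (illustrative; not the M-tree's actual node layout or algorithms):

```python
# Pivot-based pruning for a metric range query. If
# |d(q, pivot) - d(pivot, o)| > radius, the triangle inequality
# guarantees d(q, o) > radius, so o is rejected without computing d(q, o).

def range_query(objects, dist, pivot, q, radius):
    """Return objects within `radius` of q, pruning with one pivot."""
    d_qp = dist(q, pivot)
    results = []
    for o in objects:
        d_po = dist(pivot, o)        # in an index this would be precomputed
        if abs(d_qp - d_po) > radius:
            continue                 # pruned: cannot be within the radius
        if dist(q, o) <= radius:     # verify with an actual distance
            results.append(o)
    return results

# Any metric works; here, plain numbers under absolute difference.
hits = range_query([1, 2, 3, 10], lambda a, b: abs(a - b), pivot=0, q=2, radius=1)
```

In the M-tree itself, the pivots are routing objects in internal nodes and the precomputed distances are stored in the node entries, so whole subtrees are pruned at once.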
"The PMG offers the user one of the possible path-methods and the user verifies from his knowledge of the intended purpose of the request whether that path-method is the desired one. If the path method is rejected, then the user can utilize his now increased knowledge about the database to request (with additional parameters given) another offer from the PMG. The PMG is based on access weights attached to the connections between classes and precomputed access relevance between every pair of classes of the OODB. Specific rules for access weight assignment and algorithms for computing access relevance appeared in our previous papers [MGPF92, MGPF93, MGPF96]. In this paper, we present a variety of traversal algorithms based on access weights and precomputed access relevance. Experiments identify some of these algorithms as very successful in generating most desired path-methods. The PMG system utilizes these successful algorithms and is thus an efficient tool for aiding the user with the difficult task of querying and updating a large OODB." "Metadata has been identified as a key success factor in data warehouse projects. It captures all kinds of information necessary to analyse, design, build, use, and interpret the data warehouse contents. In order to spread the use of metadata, enable the interoperability between repositories, and tool integration within data warehousing architectures, a standard for metadata representation and exchange is needed. This paper considers two standards and compares them according to specific areas of interest within data warehousing. Despite their incontestable similarities, there are significant differences between the two standards which would make their unification difficult." "Language comprehension and production are generally assumed to use the same representations, but resumption poses a problem for this view: This structure is regularly produced, but judged highly unacceptable. 
Production-based solutions to this paradox explain resumption in terms of processing pressures, whereas the Facilitation Hypothesis suggests resumption is produced to help listeners comprehend." "In this paper, we propose a bulk-algebra, TIX, and describe how it can be used as a basis for integrating information retrieval techniques into a standard pipelined database query evaluation engine." "In more detail, we use a sliding window over the data sequence and extract its features; the result is a trail in feature space. We propose an efficient and effective algorithm to divide such trails into sub-trails, which are subsequently represented by their Minimum Bounding Rectangles (MBRs). We develop new evaluation strategies essential to obtaining good performance, including a stack-based TermJoin algorithm for efficiently scoring composite elements. We report results from an extensive experimental evaluation, which show, among other things, that the new TermJoin access method outperforms a direct implementation of the same functionality using standard operators by a large factor." An increasing number of database applications demands high availability combined with online scalability and soft real-time transaction response. This means that scaling must be done online and non-blocking. This paper extends the primary/hot standby approach to high availability with online scaling operations. The challenges are to do this without degrading the response time and throughput of transactions and to support high availability throughout the scaling period. We measure the impact of online scaling on response time and throughput using different scheduling schemes. We also show some of the recovery problems that appear in this setting. "Computing multidimensional aggregates in high dimensions is a performance bottleneck for many OLAP applications.
Obtaining the exact answer to an aggregation query can be prohibitively expensive in terms of time and/or storage space in a data warehouse environment. It is advantageous to have fast, approximate answers to OLAP aggregation queries." "The increasing usage of Web services on the Internet has led to much interest in the area of service discovery. Web service discovery is the process of locating a machine-processable description of a web service that may be previously unknown and that meets certain functional criteria. In industry, many applications are built by calling different web services available on the Internet. These applications are highly dependent on discovering correct and efficient web services. This paper gives an overview of the web service discovery process from a multidimensional view." "We present algorithms for performing backup and recovery of the DBMS data in a coordinated fashion with the files on the file servers. Our algorithms for coordinated backup and recovery have been implemented in the IBM DB2/DataLinks product. We also propose an efficient solution to the problem of maintaining consistency between the content of a file and the associated meta-data stored in the DBMS from a reader's point of view without holding long duration locks on meta-data tables. In the model, an object is directly accessed and edited in-place through normal file system APIs using a reference obtained via an SQL query on the database. To relate file modifications to meta-data updates, the user issues an update through the DBMS, and commits both file and meta-data updates together." "The aim of this workshop was to bring together researchers from academia and industry as well as practitioners to discuss views on how to integrate semantics into current geographic information systems, and how this will benefit the end users. The workshop was organized in a way that strongly stimulated interaction among the participants.
Each of the 32 submissions was reviewed by three or four experts from the authors' discipline or a closely related one. I would like to sincerely thank the Program Committee and the additional experts for their excellent, careful reviewing of the papers: they made a strong contribution to the quality of the workshop." "Mainstream database management systems are designed for general use. Various compromises have been made to satisfy the most common users and the largest markets. One application that has been mostly ignored is the network equipment built for telco operators. The equipment used in the telco industry has requirements differing from traditional database applications with respect to availability and real-time performance." "This chapter presents a generic technique called progressive merge join (PMJ) that eliminates the blocking behavior of sort-based join algorithms. The basic idea behind PMJ is to have the join produce results as early as the external mergesort generates initial runs. Many state-of-the-art join techniques require the input relations to be almost fully sorted before the actual join processing starts. Thus, these techniques start producing first results only after a considerable time has passed. This blocking behavior is a serious problem when subsequent operators have to stop processing in order to wait for the first results of the join. Furthermore, this behavior is not acceptable if the result of the join is visualized and/or requires user interaction. These are typical scenarios for data mining applications. The off-time of existing techniques even increases with growing problem sizes." We describe the implementation of the magic-sets transformation in the Starburst extensible relational database system. To our knowledge this is the first implementation of the magic-sets transformation in a relational database system.
The Starburst implementation has many novel features that make it especially interesting to database practitioners (in addition to database researchers). "In traditional software systems, significant attention is devoted to keeping modules well separated and coherent with respect to functionality, thus ensuring that changes in the system are localized to a handful of modules. Reuse is seen as the key method in reaching that goal. Ontology-based systems on the Semantic Web are just a special class of software systems, so the same principles apply. In this article, we present an integrated framework for managing multiple and distributed ontologies on the Semantic Web." "Outlier detection is an integral component of statistical modelling and estimation. For high-dimensional data, classical methods based on the Mahalanobis distance are usually not applicable. We propose an outlier detection procedure that replaces the classical minimum covariance determinant estimator with a high-breakdown minimum diagonal product estimator. The cut-off value is obtained from the asymptotic distribution of the distance, which enables us to control the Type I error and deliver robust outlier detection. Simulation studies show that the proposed method behaves well for high-dimensional data." "The results show that there are both performance and energy trade-offs between the indexing schemes for the different queries. The nature of the query also plays an important role in determining the energy-performance trade-offs. Further, technological trends and architectural enhancements are influencing factors on the relative behavior of the index structures. The work involved in the query has a bearing on how and where (on a mobile client and/or on a server) it should be performed for performance and energy savings."
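The classical Mahalanobis-distance approach that the outlier-detection abstract above takes as its baseline can be sketched in a few lines. Assumptions: two-dimensional data and a cutoff at the 0.975 chi-square quantile with 2 degrees of freedom (about 7.378); this is the non-robust baseline the abstract starts from, not the proposed minimum diagonal product estimator:

```python
import numpy as np

# Classical (non-robust) flagging by squared Mahalanobis distance.
# 7.378 is the 0.975 chi-square quantile with 2 degrees of freedom,
# matching the 2-column data below; adjust the cutoff for other dimensions.

def mahalanobis_outliers(X, cutoff=7.378):
    mu = X.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(X, rowvar=False))
    diff = X - mu
    d2 = np.einsum("ij,jk,ik->i", diff, inv_cov, diff)  # squared distances
    return d2 > cutoff

# 20 inliers on a small grid plus one gross outlier:
X = np.array([[(i % 5) * 0.1, (i // 5) * 0.1] for i in range(20)] + [[10.0, 10.0]])
flags = mahalanobis_outliers(X)
```

Because the outlier itself inflates the estimated mean and covariance, this classical version breaks down when several outliers mask one another, which is exactly what the high-breakdown estimator in the abstract addresses.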
"A novel execution model for rule application in active databases is developed and applied to the problem of updating derived data in a database represented using a semantic, object-based database model. The execution model is based on the use of "limited ambiguity rules" (LARs), which permit disjunction in rule actions. The execution model essentially performs a breadth-first exploration of alternative extensions of a user-requested update. Given an object-based database schema, both integrity constraints and specifications of derived classes and attributes are compiled into a family of limited ambiguity rules. A theoretical analysis shows that the approach is sound: the execution model returns all valid "completions" of a user-requested update, or terminates with an appropriate error notification. The complexity of the approach in connection with derived data update is considered." "This chapter reveals that the loader and compressor convert XML documents into a compressed, yet queryable format. The compressed repository stores the compressed documents and provides access methods to this compressed data, and a set of compression-specific utilities that enable, e.g., the comparison of two compressed values. The query processor optimizes and evaluates XQuery queries over the compressed documents. Its complete set of physical operators allows for efficient evaluation over the compressed repository. The chapter motivates the choice of the storage structures for compressed XML, and of the compression algorithms employed. It describes the XQueC query processor, its set of physical operators, and outlines its optimization algorithm." "Overall performance can be improved by algorithms that enable operations to adjust their memory usage at run time in response to the actual size of their inputs and fluctuations in total memory demand. Sorting is a frequent operation in database systems.
It is used not only to produce sorted output, but also in many sort-based algorithms, such as grouping with aggregation, duplicate removal, sort-merge join and set operations. Sorting can also improve the efficiency of algorithms like nested-loop joins and row retrieval via an index. This paper concentrates on dynamic memory adjustment for sorting, but the same approach can be applied to other memory-intensive operations." "The FileNet Integrated Document Management (IDM) products consist of a family of client applications and Imaging and Electronic Document Management (EDM) services. These services provide robust facilities for document creation, update, and deletion along with document search capabilities. Document properties are stored in an underlying relational database (RDBMS); document content is stored in files or in a specialized optical disk hierarchical storage manager. FileNet Corporation's Visual WorkFlo® and Ensemble® workflow products can be utilized in conjunction with FileNet's IDM technologies to automate production and ad hoc business processes, respectively." "Computers running database management applications often manage large amounts of data. Typically, the price of the I/O subsystem is a considerable portion of the computing hardware. Fierce price competition demands every possible savings. Lossless data compression methods, when appropriately integrated with the DBMS, yield significant savings. Roughly speaking, a slight increase in CPU cycles is more than offset by savings in the I/O subsystem. Various design issues arise in the use of data compression in the DBMS, from the choice of algorithm, statistics collection, hardware- versus software-based compression, location of the compression function in the overall computer system architecture, unit of compression, and update in place, to the application of logging to compressed data."
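The trade-off described in the compression abstract above (a slight increase in CPU cycles against I/O savings) is easy to observe with any lossless codec. An illustrative sketch using Python's zlib on repetitive, tuple-like data; the data and the codec choice are assumptions, not tied to any particular DBMS design:

```python
import zlib

# A "page" of repetitive, tuple-like data, as a stand-in for DBMS rows.
page = b"".join(b"order_id=%06d,status=SHIPPED;" % i for i in range(100))

compressed = zlib.compress(page, 6)        # a few extra CPU cycles here...
assert zlib.decompress(compressed) == page  # ...but the round trip is lossless
ratio = len(compressed) / len(page)
print(f"raw={len(page)}B compressed={len(compressed)}B ratio={ratio:.2f}")
```

Fewer bytes stored means fewer pages read and written for the same logical data, which is where the I/O-subsystem savings come from.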
"We represent the occurrence time of an event with a set of possible instants, delimiting when the event might have occurred, and a probability distribution over that set. We also describe query language constructs to retrieve information in the presence of indeterminacy. These constructs enable users to specify their credibility in the underlying data and their plausibility in the relationships among that data. A denotational semantics for SQL's select statement with optional credibility and plausibility constructs is given. We show that this semantics is reliable, in that it never produces incorrect information; is maximal, in that if it were extended to be more informative, the results may not be reliable; and reduces to the previous semantics when there is no indeterminacy. Although the extended data model and query language provide needed modeling capabilities, these extensions appear initially to carry a significant execution cost. A contribution of this paper is to demonstrate that our approach is useful and practical." "There is a new emerging world of web services. In this world, services will be combined in innovative ways to form elaborate services out of building blocks of other services. This is predicated on having a common ground of vocabulary and communication protocols operating in a secured environment. Currently, massive standardization efforts [UDDI, WSDL, ebXML, RosettaNet] are aiming at achieving this common ground. We explore possible architectures for deploying computerized traders' internal services. This encompasses both the structure and the functionalities of "traders" and "services" and the form in which these functionalities could be realized in actual implementations." "In a recent paper, we proposed adding a STOP AFTER clause to SQL to permit the cardinality of a query result to be explicitly limited by query writers and query tools.
We demonstrated the usefulness of having this clause, showed how to extend a traditional cost-based query optimizer to accommodate it, and demonstrated via DB2-based simulations that large performance gains are possible when STOP AFTER queries are explicitly supported by the database engine. In this paper, we present several new strategies for efficiently processing STOP AFTER queries. These strategies, based largely on the use of range partitioning techniques, offer significant additional savings for handling STOP AFTER queries that yield sizeable result sets. We describe classes of queries where such savings would indeed arise and present experimental measurements that show the benefits and tradeoffs associated with the new processing strategies." "Data replication has recently become a topic of increased interest among customers. Several database vendors provide products that perform data replication. The capabilities of these products and the customer problems they solve vary widely. This talk starts by identifying some of the dimensions of the replication solution space, including latency, concurrency, logical and physical units of replication, network link requirements, heterogeneity, replica topology, replica transparency, and data transformation requirements. Digital Equipment Corporation provides three products that allow customers to replicate data. The distributed, two-phase commit products allow customers to program and coordinate replicated updates. DEC Reliable Transaction Router provides an OLTP environment with transactional data replication. Transactions succeed in the face of site and network failures." "This paper proposes a cache-conscious version of the R-tree called the CR-tree. To pack more entries in a node, the CR-tree compresses MBR keys, which occupy almost 80% of index data in the two-dimensional case.
It first represents the coordinates of an MBR key relative to the lower left corner of its parent MBR to eliminate the leading 0's from the relative coordinate representation. Then, it quantizes the relative coordinates with a fixed number of bits to further cut off the trailing less significant bits. Consequently, the CR-tree becomes significantly wider and smaller than the ordinary R-tree." "The customization of the Web graph to find the confidence of a Web page in the community graph and to calculate the page rank of the pages in the graph is described. The confidence factor of a page plays a significant role in page ranking. The Web can be efficiently traversed in search of pages related to a specific topic, if viewed from the search-topic perspective. Customizing the graph to the graph of pages with only relevant features removes the citations from the irrelevant pages and hence uses only the relevant weight for calculating page rank." "This paper discusses the design and implementation of SEQ, a database system with support for sequence data. SEQ models a sequence as an ordered collection of records, and supports a declarative sequence query language based on an algebra of query operators, thereby permitting algebraic query optimization and evaluation. SEQ has been built as a component of the PREDATOR database system, which provides support for relational and other kinds of complex data as well. The design required a data model that could describe a wide variety of sequence data, and a query algebra that could be used to represent queries over sequences." "Data warehousing is the latest "hot topic" in the industry. With market projections of $8 billion by the year 2000, vendors of all flavors are claiming the suitability and superiority of their products for this market segment. This has led to a great deal of confusion, with terms such as OLAP, ROLAP, MDDB, decision support systems (DSS) and data warehousing being defined, re-defined, and sometimes even used interchangeably."
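The CR-tree's two-step MBR compression (relative coordinates, then quantization) can be illustrated as follows. The helper names and the outward-rounding detail are assumptions of this sketch, not the paper's exact key layout; rounding outward keeps the compressed key conservative, so it always contains the original rectangle:

```python
import math

def quantize_mbr(mbr, parent, bits=8):
    """CR-tree-style MBR compression sketch: express each corner relative to
    the parent MBR's lower-left corner, then keep only `bits` bits per
    coordinate, rounding outward so containment is preserved."""
    (xlo, ylo), (xhi, yhi) = mbr
    (pxlo, pylo), (pxhi, pyhi) = parent
    levels = (1 << bits) - 1
    def q(v, lo, hi, up):
        t = (v - lo) / (hi - lo) * levels     # relative position, scaled to the grid
        return math.ceil(t) if up else math.floor(t)
    return ((q(xlo, pxlo, pxhi, False), q(ylo, pylo, pyhi, False)),
            (q(xhi, pxlo, pxhi, True),  q(yhi, pylo, pyhi, True)))

def dequantize_mbr(qmbr, parent, bits=8):
    """Map a quantized MBR back to (conservative) real coordinates."""
    (qxlo, qylo), (qxhi, qyhi) = qmbr
    (pxlo, pylo), (pxhi, pyhi) = parent
    levels = (1 << bits) - 1
    dx = lambda n: pxlo + n / levels * (pxhi - pxlo)
    dy = lambda n: pylo + n / levels * (pyhi - pylo)
    return ((dx(qxlo), dy(qylo)), (dx(qxhi), dy(qyhi)))
```

With 8 bits per coordinate, a two-dimensional MBR shrinks from four floats to four bytes, which is how the tree gets wider per node.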
"In this paper, we introduce a multi-agent system architecture and an implemented prototype for a software component market-place. We emphasize the ontological perspective by discussing ontology modeling for the component market-place, UML extensions for ontology modeling, and the idea of ontology transfer, which enables the multi-agent system to adapt itself to dynamically changing ontologies." "Opening a series of concrete works to follow, this vision paper identifies, motivates, and abstracts the problem of model management. It proposes to support "models" and their "mappings" as first-class constructs, with high-level algebraic operations to manipulate them. In the winter of 2000, I was a starting faculty member at UIUC, and this paper inspired me immensely at the time when I had to create a research agenda of my own. I have always been interested in information integration, on various topics like query translation and data mapping. The area was exciting to me, as it was full of "real-world" problems. However, it was also not hard to see that these problems seemed inherently messy and their solutions inherently heuristic. Probably because many problems remained unsolved, most research works were only able to address separate topics, without a clear context of an overall application." "This article investigates the minimization problem for a wide fragment of XPath (namely XP[?]), where the use of the most common operators (child, descendant, wildcard and branching) is allowed with some syntactic restrictions. The examined fragment consists of expressions which have not been specifically studied in the relational setting before: neither are they mere conjunctive queries (as the combination of "//" and "*" enables an implicit form of disjunction to be expressed) nor do they coincide with disjunctive ones (as the latter are more expressive). Three main contributions are provided.
The "global minimality" property is shown to hold: the minimization of a given XPath expression can be accomplished by removing pieces of the expression, without having to re-formulate it (as for "general" disjunctive queries). Then, the complexity of the minimization problem is characterized, showing that it is the same as that of the containment problem. Finally, specific forms of XPath expressions are identified which can be minimized in polynomial time." "Performance needs of many database applications dictate that the entire database be stored in main memory. The Dali system is a main memory storage manager designed to provide the persistence, availability and safety guarantees one typically expects from a disk-resident database, while at the same time providing very high performance by virtue of being tuned to support in-memory data. Dali follows the philosophy of treating all data, including system data, uniformly as database files that can be memory mapped and directly accessed/updated by user processes. Direct access provides high performance; slower, but more secure, access is also provided through the use of a server process." "This paper describes alternative methods for data access that are available to developers using the Java™ platform and related technologies to create a new generation of enterprise applications. The paper highlights industry trends and describes Java technologies that are responsible for a new paradigm in data access. Java technology represents a new level of portability, scalability, and ease-of-use for applications that require data access." "To ease schema evolution, we propose to support exceptions to the behavioral consistency rules without sacrificing type safety. The basic idea is to detect unsafe statements in a method code at compile-time and check them at run-time. The run-time check is performed by a specific clause that is automatically inserted around unsafe statements.
This check clause warns the programmer of the safety problem and lets them provide exception-handling code. Schema updates can therefore be performed with only minor changes to the code of methods." "In the past decade, advances in the speed of commodity CPUs have far out-paced advances in memory latency. Main-memory access is therefore increasingly a performance bottleneck for many computer applications, including database systems. In this article, we use a simple scan test to show the severe impact of this bottleneck. The insights gained are translated into guidelines for database architecture, in terms of both data structures and algorithms. We discuss how vertically fragmented data structures optimize cache performance on sequential data access. We then focus on equi-join, typically a random-access operation, and introduce radix algorithms for partitioned hash-join. The performance of these algorithms is quantified using a detailed analytical model that incorporates memory access cost. Experiments that validate this model were performed on the Monet database system. We obtained exact statistics on events such as TLB misses and L1 and L2 cache misses by using hardware performance counters found in modern CPUs. Using our cost model, we show how the carefully tuned memory access pattern of our radix algorithms makes them perform well, which is confirmed by experimental results." "The design of a distributed deductive database system differs from the design of conventional non-distributed deductive database systems in that it requires design of distribution of both the database and the rulebase. In this paper, we address the rule allocation problem. We consider minimisation of data communication cost during rule execution as a primary basis for rule allocation. The rule allocation problem can be stated in terms of a directed acyclic graph, where nodes represent rules or relations, and edges represent either dependencies between rules or usage of relations by rules.
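The radix partitioned hash-join mentioned in the Monet abstract can be sketched in miniature: partition both inputs on a few low-order bits of the key's hash so that each co-partition's hash table fits in cache, then hash-join co-partitions only. This is an illustrative Python rendering, not the authors' implementation:

```python
def radix_hash_join(R, S, key_r, key_s, radix_bits=4):
    """Partition both inputs on the low `radix_bits` bits of the join key's
    hash, then hash-join matching partitions; only co-partitions can match,
    and each partition is small enough to stay cache-resident."""
    n = 1 << radix_bits
    mask = n - 1
    parts_r = [[] for _ in range(n)]
    parts_s = [[] for _ in range(n)]
    for r in R:
        parts_r[hash(key_r(r)) & mask].append(r)
    for s in S:
        parts_s[hash(key_s(s)) & mask].append(s)
    out = []
    for pr, ps in zip(parts_r, parts_s):
        table = {}                         # per-partition build side
        for r in pr:
            table.setdefault(key_r(r), []).append(r)
        for s in ps:                       # per-partition probe side
            for r in table.get(key_s(s), []):
                out.append((r, s))
    return out
```

The cache benefit does not show up in Python, of course; the point is the structure: two clustering passes followed by many small, local joins instead of one large, random-access one.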
The arcs are given weights representing the volume of data that needs to flow between the connected nodes. We show that the rule allocation problem is NP-complete. Next, we propose a heuristic for non-replicated allocation based on successively combining, for placement at the same site, adjacent nodes that are connected by the highest-weight edge, and study its performance vis-à-vis the enumerative algorithm for optimal allocation." "The present paper introduces techniques that solve this problem. Experience with a working prototype optimizer demonstrates (i) that the additional optimization and start-up overhead of dynamic plans compared to static plans is dominated by their advantage at run-time, (ii) that dynamic plans are as robust as the "brute-force" remedy of run-time optimization, i.e., dynamic plans maintain their optimality even if parameters change between compile-time and run-time, and (iii) that the start-up overhead of dynamic plans is significantly less than the time required for complete optimization at run-time. In other words, our proposed techniques are superior to both techniques considered to date, namely compile-time optimization into a single static plan as well as run-time optimization. Finally, we believe that the concepts and technology described can be transferred to commercial query optimizers in order to improve the performance of embedded queries with host variables in the query predicate and to adapt to run-time system loads unpredictable at compile time." "We present an approach to searching genetic DNA sequences using an adaptation of the suffix tree data structure deployed on the general purpose persistent Java platform, PJama. Our implementation technique is novel, in that it allows us to build suffix trees on disk for arbitrarily large sequences, for instance for the longest human chromosome consisting of 263 million letters. We propose to use such indexes as an alternative to the current practice of serial scanning.
We describe our tree creation algorithm, analyse the performance of our index, and discuss the interplay of the data structure with object store architectures. Early measurements are presented." "In contrast, our method has been optimized based on the special properties of high-dimensional spaces and therefore provides a near-optimal distribution of the data items among the disks. The basic idea of our data declustering technique is to assign the buckets corresponding to different quadrants of the data space to different disks. We show that our technique - in contrast to other declustering methods - guarantees that all buckets corresponding to neighboring quadrants are assigned to different disks. We evaluate our method using large amounts of real data (up to 40 MBytes) and compare it with the best known data declustering method, the Hilbert curve." "Analysis of expected and experimental results of various join algorithms shows that a combination of the optimal nested block and optimal GRACE hash join algorithms usually provides the greatest cost benefit, unless the relation size is a small multiple of the memory size. Algorithms to quickly determine a buffer allocation producing the minimal cost for each of these algorithms are presented. When the relation size is a small multiple of the amount of main memory available (typically up to three to six times), the hybrid hash join algorithm is preferable." "We study various kinds of operations in a database context, and show how the inner loop of the operations can be accelerated using SIMD instructions. The use of SIMD instructions has two immediate performance benefits: it allows a degree of parallelism, so that many operands can be processed at once, and it often leads to the elimination of conditional branch instructions, reducing branch mispredictions. We consider the most important database operations, including sequential scans, aggregation, index operations, and joins.
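The branch-elimination benefit claimed in the SIMD abstract can be demonstrated even without SIMD intrinsics: a comparison can produce a 0/1 mask that is multiplied into the aggregate instead of guarding it with an if-statement. A Python sketch of the idea (real SIMD code would apply the same predicated arithmetic across all lanes of a vector register):

```python
def branch_free_filter_sum(values, lo, hi):
    """Sum the values in [lo, hi] without a data-dependent branch: the
    comparison yields a 0/1 mask that is multiplied in, mirroring how SIMD
    code replaces conditional branches with predicated arithmetic."""
    total = 0
    for v in values:
        mask = (lo <= v) & (v <= hi)   # bool arithmetic: 1 if in range, else 0
        total += mask * v              # no if-statement, no branch to mispredict
    return total
```

In a compiled, vectorized setting this pattern processes many operands per instruction and never stalls on a misprediction, which is precisely the two benefits the abstract names.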
We present techniques for implementing these using SIMD instructions. We show that there are significant benefits in redesigning traditional query processing algorithms so that they can make better use of SIMD technology." "In this paper, we describe an approximation technique that reduces the storage cost of the cube without incurring the run-time cost of lazy evaluation. The idea is to provide an incomplete description of the cube and a method of estimating the missing entries with a certain level of accuracy. The description, of course, should take a fraction of the space of the full cube, and the estimation procedure should be faster than computing the data from the underlying relations. Since cubes are used to support data analysis and analysts are rarely interested in the precise values of the aggregates (but rather in trends), providing approximate answers is, in most cases, a satisfactory compromise." "Database management is one of the main areas of research of the School of Computer Science at The University of Oklahoma (OU). The objective of the database research team at OU (OUDB) is to help solve the many issues and challenges facing the database research community, especially with respect to emerging technology. Currently, many projects are being conducted in the following areas: real-time databases, object-oriented databases, mobile databases, multimedia databases, data mining and data warehouses. These projects have been funded by federal and state agencies as well as private industries, such as the National Science Foundation, the U.S. Department of Education, the Oklahoma State Department of Environmental Quality, and Objectivity, Inc." "We follow the stack-based approach to query languages, which is a new formal and intellectual paradigm for integrating querying and programming for object-oriented databases.
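The approximate-cube abstract leaves the estimation procedure open; one common low-space scheme, shown here purely as an illustration (not necessarily that paper's method), stores only the marginal totals of a 2-D cube and estimates a missing cell under an independence assumption:

```python
def cube_marginals(cells):
    """Summarize a 2-D cube {(a, b): measure} by its row and column
    marginals plus the grand total - a small fraction of the full cube."""
    row, col, total = {}, {}, 0
    for (a, b), v in cells.items():
        row[a] = row.get(a, 0) + v
        col[b] = col.get(b, 0) + v
        total += v
    return row, col, total

def estimate_cell(row, col, total, a, b):
    """Estimate the (a, b) cell assuming the two dimensions are independent:
    cell ~ row_total * col_total / grand_total."""
    return row.get(a, 0) * col.get(b, 0) / total if total else 0.0
```

The estimate is exact when the dimensions really are independent and degrades gracefully otherwise, which matches the abstract's point that analysts chasing trends can tolerate approximate aggregates.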
Queries are considered generalized programming expressions which may be used within macroscopic imperative statements, such as creating, updating, inserting, and deleting data objects. Queries may also be used as procedures' parameters, as well as determine the output from functional procedures (SQL-like views). The semantics, including generalized query operators (selection, projection, navigation, join, quantifiers, etc.), is defined in terms of operations on two stacks. The environment stack deals with scope control and name binding." "The purpose of a textual database is to store textual documents. These documents have not only textual contents, but also structure. Many traditional text database systems have focused only on querying by contents or by structure. Recently, a number of models integrating both types of queries have appeared. We argue in favor of that integration, and focus our attention on these recent models, covering a representative sampling of the proposals in the field. We pay special attention to the tradeoffs between expressiveness and efficiency, showing the compromises taken by the models. We argue in favor of achieving a good compromise, since being weak in either of these two aspects makes the model useless for many applications." "Recent work in query optimization has addressed the issue of placing expensive predicates in a query plan. In this paper we explore the predicate placement options considered in the Montage DBMS, presenting a family of algorithms that form successively more complex and effective optimization solutions. Through analysis and performance measurements of Montage SQL queries, we classify queries and highlight the simplest solution that will optimize each class correctly. We demonstrate limitations of previously published algorithms, and discuss the challenges and feasibility of implementing the various algorithms in a commercial-grade system."
"In this paper, we explore an approach of interleaving a bushy execution tree with hash filters to improve the execution of multi-join queries. Similar to semi-joins in distributed query processing, hash filters can be applied to eliminate non-matching tuples from joining relations before the execution of a join, thus reducing the join cost. Note that hash filters built in different execution stages of a bushy tree can have different costs and effects. The effect of hash filters is evaluated first. Then, an efficient scheme to determine an effective sequence of hash filters for a bushy execution tree is developed, where hash filters are built and applied based on the join sequence specified in the bushy tree so that not only is the reduction effect optimized but also the associated cost is minimized. Various schemes using hash filters are implemented and evaluated via simulation." "The MultiView Project is an ongoing 5-year NSF-funded effort at the University of Michigan to develop and apply object-oriented view technology to address the needs of recently emerging applications, such as data warehousing and workflow management systems, that require the sharing, virtual restructuring, and caching of data [5]. Through MultiView, users can dynamically create and modify virtual classes and schemata at any time." "In this paper, we present two algorithms for deriving optimal and near-optimal vertical class partitioning schemes. The cost-driven algorithm provides the optimal vertical class partitioning schemes by enumerating, exhaustively, all the schemes and calculating the number of disk accesses required to execute a given set of applications. For this, a cost model for executing a set of methods in an OODB system is developed.
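The hash filters described in the bushy-tree abstract act like semi-join reducers: a compact bitmap built from one relation's join keys drops non-matching tuples from the other side before the join is executed. A toy Python sketch (a one-word filter; a real system would size `nbits` to the data):

```python
def build_hash_filter(keys, nbits=64):
    """Build a small bit filter over the join keys of one relation."""
    bits = 0
    for k in keys:
        bits |= 1 << (hash(k) % nbits)
    return bits

def apply_hash_filter(rows, key, bits, nbits=64):
    """Drop tuples whose key cannot match the other relation: the filter
    may keep a non-matching tuple (false positive) but never drops a match."""
    return [r for r in rows if (bits >> (hash(key(r)) % nbits)) & 1]
```

Because building the filter is one cheap pass and probing is one bit test per tuple, the reduction usually pays for itself whenever the join has low selectivity, which is the cost/effect tradeoff the abstract optimizes across the stages of the bushy tree.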
Since exhaustive enumeration is costly and only works for classes with a small number of instance variables, a hill-climbing heuristic algorithm (HCHA) is developed, which takes the solution provided by the affinity-based algorithm and improves it, thereby further reducing the total number of disk accesses incurred." "In this paper, we develop the core of a formal data model for network directories, and propose a sequence of efficiently computable query languages with increasing expressive power. The directory data model can naturally represent rich forms of heterogeneity exhibited in the real world. Answers to queries expressible in our query languages can exhibit the same kinds of heterogeneity. We present external memory algorithms for the evaluation of queries posed in our directory query languages, and prove the efficiency of each algorithm in terms of its I/O complexity. Our data model and query languages share the flexibility and utility of the recent proposals for semi-structured data models, while at the same time effectively addressing the specific needs of network directory applications, which we demonstrate by means of a representative real-life example." "In this paper we describe the architecture and interface of KODA, a production-strength database kernel. KODA is unique in the industry in its ability to support two different data models, viz. Oracle Rdb (a relational database system) and Oracle CODASYL DBMS (a CODASYL database system). Our experience in designing and implementing KODA demonstrates the feasibility of implementing multiple data models on top of a common kernel, the benefits of leveraging performance and feature enhancements for multiple products, the benefits of maintaining a common code base, and the ease of migration and interoperability between the two products without customers having to re-learn common kernel utilities like backups, data file organization and analysis tools."
"When building a database, it is mandatory to design a friendly interface, which allows the final user to easily access the data of interest. Very often, such an interface exploits the power of visualization and direct manipulation mechanisms. However, it is not sufficient to associate "any" visual representation to a database: the visual representation should be carefully chosen to effectively convey all and only the database information content. We are currently working on a general theory (see ) for establishing the adequacy of a visual representation, once the database characteristics are specified, and we are developing a system, called DARE (Drawing Adequate REpresentations), which implements such a theory." "Data streams are a new class of data that is becoming pervasively important in a wide range of applications, ranging from sensor networks and environmental monitoring to finance. In this article, we propose a novel framework for the online diagnosis of the evolution of multidimensional streaming data that incorporates Recursive Wavelet Density Estimators into the context of Velocity Density Estimation. In the proposed framework, changes in streaming data are characterized by the use of local and global evolution coefficients. In addition, for the analysis of changes in the correlation structure of the data, we propose a recursive implementation of the Pearson correlation coefficient using exponential discounting." "In this paper, we introduce a formalism for expressing and reasoning about order properties: ordering and grouping constraints that hold of physical representations of relations. In so doing, we can reason about how a relation is ordered or grouped, both in terms of primary and secondary orders.
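A recursive, exponentially discounted Pearson correlation of the kind proposed in the streaming abstract can be sketched with discounted running moments; the exact recursion below is an assumption of this illustration, not necessarily the authors' formulation:

```python
import math

class DiscountedPearson:
    """Recursive Pearson correlation with exponential discounting: each new
    observation down-weights older ones by a factor `lam` per step, so the
    estimate tracks the current correlation structure of the stream."""
    def __init__(self, lam=0.98):
        self.lam = lam
        self.mx = self.my = self.mxy = self.mxx = self.myy = 0.0
        self.w = 0.0   # total discounted weight, keeps early estimates sane

    def update(self, x, y):
        self.w = self.lam * self.w + 1.0
        # discounted running averages of x, y and their products
        self.mx  += (x - self.mx) / self.w
        self.my  += (y - self.my) / self.w
        self.mxy += (x * y - self.mxy) / self.w
        self.mxx += (x * x - self.mxx) / self.w
        self.myy += (y * y - self.myy) / self.w

    def correlation(self):
        cov = self.mxy - self.mx * self.my
        vx = self.mxx - self.mx ** 2
        vy = self.myy - self.my ** 2
        return cov / math.sqrt(vx * vy) if vx > 0 and vy > 0 else 0.0
```

Each update is O(1) in time and space, which is what makes the estimator usable on unbounded streams.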
After formally defining order properties, we introduce a plan refinement algorithm that infers order properties for intermediate and final query results on the basis of those known to hold of query inputs, and then exploits these inferences to avoid unnecessary sorting and grouping." "Multimedia data mining is the mining of high-level multimedia information and knowledge from large multimedia databases. A multimedia data mining system prototype, MultiMediaMiner, has been designed and developed. It includes the construction of a multimedia data cube which facilitates multiple dimensional analysis of multimedia data, primarily based on visual content, and the mining of multiple kinds of knowledge, including summarization, comparison, classification, association, and clustering." "Bitmaps are popular indexes for data warehouse (DW) applications and most database management systems offer them today. This paper proposes query optimization strategies for selections using bitmaps. Both continuous and discrete selection criteria are considered. Query optimization strategies are categorized into static and dynamic. Static optimization strategies discussed are the optimal design of bitmaps, and algorithms based on tree and logical reduction. The dynamic optimization discussed is the approach of inclusion and exclusion for both bit-sliced indexes and encoded bitmap indexes." "In this column, we review these three books." "I first came across the AMS paper when I started getting interested in the data-streaming area, in the spring of 2001. Reading this paper was a real eye-opener for me. It was just amazing to see how simple randomization ideas and basic probabilistic tools (like the Chebyshev inequality and the Chernoff bound) can come together to provide elegant, space-efficient randomized approximation algorithms for estimation problems that, at first glance, would seem impossible to solve. 
The second-moment method described in the AMS paper is essentially the father of all "sketch-based" techniques for data-stream management." "The World-Wide Web (WWW) is an ever growing, distributed, non-administered, global information resource. It resides on the worldwide computer network and allows access to heterogeneous information: text, image, video, sound and graphic data. Currently, this wealth of information is difficult to mine. One can either manually, slowly and tediously navigate through the WWW or utilize indexes and libraries which are built by automatic search engines (called knowbots or robots). We have designed and are now implementing a high-level SQL-like language to support effective and flexible query processing, which addresses the structure and content of WWW nodes and their varied sorts of data. Query results are intuitively presented and continuously maintained when desired. The language itself integrates new utilities and existing Unix tools (e.g. grep, awk)." "We will consider both academic and non-academic positions, with special attention to some of the trickier points of academic job searches. In addressing these questions, we will draw on our combined personal experience of two-body job hunts as recent as 2003 and as long ago as 1987, along with information gleaned from the two-body job searches of friends and colleagues. At the end of the article, we give references for further reading." "In the past two years, Nimble Technology has developed a product for this market. Spawned from over a person-decade of data integration research, the product has been deployed at several Fortune-500 beta-customer sites. This abstract reports on the key challenges we faced in the design of our product and highlights some issues we think require more attention from the research community." "We consider the execution of multi-join queries in a hierarchical parallel system, i.e., a shared-nothing system whose nodes are shared-memory multiprocessors.
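The AMS second-moment method is short enough to show in full: each counter accumulates a random ±1 sign per distinct item, the square of a counter is an unbiased estimate of F2, and taking a median over independent counters controls the variance. A simple Python sketch (using per-item cached signs instead of the paper's 4-wise independent hash families):

```python
import random
import statistics

def ams_f2_estimate(stream, counters=9, seed=0):
    """AMS second-moment (F2) sketch: each counter keeps a running sum of
    random +/-1 signs over the stream; squaring a counter gives an unbiased
    F2 estimate, and the median over counters sharpens it."""
    rng = random.Random(seed)
    signs = [dict() for _ in range(counters)]  # lazily drawn sign per item
    z = [0] * counters
    for item in stream:
        for j in range(counters):
            s = signs[j].setdefault(item, rng.choice((-1, 1)))
            z[j] += s
    return statistics.median(v * v for v in z)
```

The cached-sign dictionaries make this sketch use memory proportional to the number of distinct items; the paper's hash-family construction is what brings the space down to logarithmic, which is the part that made the result so striking.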
In this context, load balancing must be addressed at two levels: locally, among the processors of each shared-memory node, and globally, among all nodes. In this paper, we propose a dynamic execution model that maximizes local load balancing within shared-memory nodes and minimizes the need for load sharing across nodes. This is obtained by allowing each processor to execute any operator that can be processed locally, thereby taking full advantage of inter- and intra-operator parallelism. We conducted a performance evaluation using an implementation on a 72-processor KSR1 computer." "A data warehouse is a redundant collection of data replicated from several possibly distributed and loosely coupled source databases, organized to answer OLAP queries. Relational views are used both as a specification technique and as an execution plan for the derivation of the warehouse data. In this position paper, we summarize the versatility of relational views and their potential." "Data Grids are being built across the world as the next generation data handling systems to manage peta-bytes of inter-organizational data and storage space. A data grid (datagrid) is a logical name space consisting of storage resources and digital entities that is created by the cooperation of autonomous organizations and its users based on the coordination of local and global policies. Data Grid Management Systems (DGMSs) provide services for the confluence of organizations and the management of inter-organizational data and resources in the datagrid. The objective of the tutorial is to provide an introduction to the opportunities and challenges of this emerging technology." "Given the complexity of many queries over a Data Warehouse (DW), it is interesting to precompute and store in the DW the answer sets of some demanding operations, so called materialized views.
In this paper, we present an algorithm, including its experimental evaluation, which allows the materialization of several views simultaneously without losing sight of processing costs for queries using these materialized views." "The French government launched in 2000 a public debate about the conservation of data and electronic documents. Due to the widespread use of Internet and extranet technologies, especially with electronic document exchange, we have had to adapt the policy of document conservation to this new challenge: "avoid the lack of memory in public administration". A guide-book was produced. This guide-book is an important step to introduce a reflection about conservation and to give some guidelines." "To achieve these goals, Rainbow allows the user to configure and program the distributed environment, transactions, and transaction management protocols, and to observe local as well as global executions (history and measured behavior and performance). Rainbow also lends itself as an open system that can be easily changed and extended by students and researchers." "We present a framework for designing, in a declarative and flexible way, efficient migration programs, and an ongoing implementation of a migration tool called RelOO whose targets are any ODBC-compliant system on the relational side and the O2 system on the object side. The framework consists of (i) a declarative language to specify database transformations from relations to objects, but also physical properties on the object database (clustering and sorting), and (ii) an algebra-based program rewriting technique which optimizes the migration processing time while taking into account physical properties and transaction decomposition." "In order to cope with the dynamic scenario of fast-changing business requirements, enterprises have embraced web technologies to manage their business processes.
However, the ability to integrate business processes like procurement, customer relationship management, finance, human resources and manufacturing in a typical supply chain on the web is a challenging task. Today's virtual enterprises need to integrate different workflows within and across enterprises efficiently so as to provide seamless services." "This book, Databases and Transaction Processing, constitutes a standard database textbook for advanced undergraduate and graduate courses, albeit with a somewhat different focus compared to the established books. As the subtitle An Application-Oriented Approach indicates, the authors put an emphasis on teaching the systematic usage of database systems, rather than concentrating on an in-depth coverage of implementation techniques for building database systems. The rationale for this focus is that many more students will be implementing applications than will actually be implementing database systems." "Here we propose the PeerOLAP architecture for supporting On-Line Analytical Processing queries. A large number of low-end clients, each containing a cache with the most useful results, are connected through an arbitrary P2P network. If a query cannot be answered locally (i.e. by using the cache contents of the computer where it is issued), it is propagated through the network until a peer that has cached the answer is found. An answer may also be constructed from partial results from many peers. Thus PeerOLAP acts as a large distributed cache, which amplifies the benefits of traditional client-side caching. The system is fully distributed and can reconfigure itself on-the-fly in order to decrease the query cost for the observed workload." "This paper briefly describes an approach to business-to-business Internet commerce following an n-suppliers:m-customers scenario in order to provide for a virtual market place on the Internet.
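PeerOLAP's query propagation can be sketched as a bounded flood over the peer overlay: answer from the local cache when possible, otherwise forward to neighbors until some peer holds the cached answer. A toy Python rendering (the hop limit and the dictionary-based overlay are assumptions of this sketch, not PeerOLAP's actual protocol):

```python
from collections import deque

def peer_lookup(start, query, neighbors, cache, max_hops=3):
    """Propagate `query` breadth-first through a P2P overlay, returning the
    first peer holding a cached answer, or (None, None) if none is reachable
    within `max_hops` hops (the real system would then hit the warehouse)."""
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        peer, hops = frontier.popleft()
        if query in cache.get(peer, {}):
            return peer, cache[peer][query]
        if hops < max_hops:
            for nb in neighbors.get(peer, ()):
                if nb not in seen:
                    seen.add(nb)
                    frontier.append((nb, hops + 1))
    return None, None
```

The hop limit is what keeps the distributed cache cheap: a miss costs a bounded number of messages rather than a network-wide broadcast.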
The RMP project, partially funded by the European Commission, realizes such an e-commerce system in the domain of small and medium-sized companies in rural areas. The RMP system provided for Internet trading makes heavy use of database technology and has been implemented in Java." "The emergence and growing popularity of Internet-based electronic market-places, in their various forms, has raised the challenge to explore genericity in market design. In this paper we present a domain-specific software architecture that delineates the abstract components of a generic market and specifies control and data-flow constraints between them, and a framework that allows convenient pluggability of components that implement specific market policies. The framework was realized in the GEM system. GEM provides infrastructure services that allow market designers to focus solely on market issues. In addition, it allows dynamic (re)configuration of components." "Data warehouses support the analysis of historical data. This often involves aggregation over a period of time. Furthermore, data is typically incorporated in the warehouse in the increasing order of a time attribute, e.g., date of sale or time of a temperature measurement. In this paper we propose a framework to take advantage of this append-only nature of updates due to a time attribute. The framework allows us to integrate large amounts of new data into the warehouse and generate historical summaries efficiently. Query and update costs are virtually independent of the extent of the data set in the time dimension, making our framework an attractive aggregation approach for append-only data streams." "Archiving is important for scientific data, where it is necessary to record all past versions of a database in order to verify findings based upon a specific version. Much scientific data is held in a hierarchical format and has a key structure that provides a canonical identification for each element of the hierarchy.
In this article, we exploit these properties to develop an archiving technique that is both efficient in its use of space and preserves the continuity of elements through versions of the database, something that is not provided by traditional minimum-edit-distance diff approaches. The approach also uses timestamps. All versions of the data are merged into one hierarchy where an element appearing in multiple versions is stored only once along with a timestamp. By identifying the semantic continuity of elements and merging them into one data structure, our technique is capable of providing meaningful change descriptions. The archive also allows us to easily answer certain temporal queries, such as retrieval of any specific version from the archive and finding the history of an element." "Successful companies organise and run their business activities in an efficient manner. Core activities are completed on time and within specified resource constraints. However, to stay competitive in today's markets, companies need to continually improve their efficiency: business activities need to be completed more quickly, to higher quality and at lower cost. To this end, there is an increasing awareness of the benefits and potential competitive advantage that well designed business process management systems can provide. In this paper we argue the case for an agent-based approach, showing how agent technology can improve efficiency by ensuring that business activities are better scheduled, executed, monitored, and coordinated." "XQuery is the XML query language currently under development in the World Wide Web Consortium (W3C). XQuery specifications have been published in a series of W3C working drafts, and several reference implementations of the language are already available on the Web. If successful, XQuery has the potential to be one of the most important new computer languages to be introduced in several years. 
This tutorial will provide an overview of the syntax and semantics of XQuery, as well as insight into the principles that guided the design of the language." "In this paper, we describe an architecture for an open marketplace exploiting the workflow technology and the currently emerging data exchange and metadata representation standards on the Web. In this market architecture electronic commerce is realized through the adaptable workflow templates provided by the marketplace to its users. Having workflow templates for electronic commerce processes results in a component-based architecture where components can be agents (both buying and selling) as well as existing applications invoked by the workflows. Other advantages provided by the workflow technology are forward recovery, detailed logging of the processes through a workflow history manager, and the ability to specify data and control flow among the workflow components." Model-driven engineering technologies offer a promising approach to address the inability of third-generation languages to alleviate the complexity of platforms and express domain concepts effectively. "Beginning in 1989, an ad-hoc collection of senior DBMS researchers has gathered periodically to perform a "group grope", i.e., an assessment of the state of the art in DBMS research as well as a prediction concerning what problems and problem areas deserve additional focus. The fifth ad-hoc meeting was held May 4-6, 2003 in Lowell, MA. A report on the meeting is in preparation, and this panel discussion will summarize the upcoming document and discuss its conclusions." "We are living in an exciting time in the field of clinical research. There are changes occurring that will alter how medical care is provided, and it is the work of clinical research professionals that will lead these changes. 
In this column, the Chair of ACRP's Association Board of Trustees considers the impact of genetics in medicine and its application through precision medicine." "MOTIVATION: A large number of useful databases are currently accessible over the Web and within corporate networks. In addition to being frequently updated, this collection of databases tends to be highly dynamic: new databases appear often, and databases (just like Web sites) also disappear. In this environment, the goal of providing flexible, timely and declarative query access over all these databases remains elusive." "IP network operators collect aggregate traffic statistics on network interfaces via the Simple Network Management Protocol (SNMP). This is part of routine network operations for most ISPs; it involves a large infrastructure with multiple network management stations polling information from all the network elements and collating a real-time data feed. This demo will present a tool that manages the live SNMP data feed on a fully operational large ISP at industry scale. The tool primarily serves to study correlations in the network traffic, by providing a rich mix of ad-hoc querying based on a user-friendly correlation interface, as well as canned queries based on the expertise of network operators with field experience." "In recent years, the exponential growth of computer networks has created an incredibly large offer of products and services on the net. Such a huge amount of information makes it impossible for a single person to analyze all existing offers of a product on the net and decide which of them best fits her requirements. This problem is solved with intelligent trade agents (ITA), which are programs that have the ability to roam a network, collect business-related data and use them to make decisions to buy goods on their owners' behalf. 
Known ITA systems do not provide anonymity in transactions, require an on-line trusted third party and implicitly assume that the user trusts the ITA. We present a new scheme for an intelligent untrusted trade agent system allowing anonymous electronic transactions with an off-line trusted third party." "This article describes a novel way of combining data mining techniques on Internet data in order to discover actionable marketing intelligence in electronic commerce scenarios. The data that is considered not only covers various types of server and web meta information, but also marketing data and knowledge. Furthermore, heterogeneity resolution thereof and Internet- and electronic commerce-specific pre-processing activities are embedded. A generic web log data hypercube is formally defined and schematic designs for analytical and predictive activities are given. From these materialised views, various online analytical web usage data mining techniques are shown, which include marketing expertise as domain knowledge and are specifically designed for electronic commerce purposes." "Query optimization is a computationally intensive process, especially for the complex queries that are typical in current data warehousing and mining applications. The inherent overheads of query optimization are compounded by the fact that a new query is typically optimized afresh, providing no opportunity to amortize these overheads over prior optimizations. While current commercial query optimizers do provide facilities for reusing execution plans generated for earlier queries (e.g., "stored outlines" in Oracle 9i), the query matching is extremely restrictive: only if the incoming query has a close textual resemblance to one of the stored queries is the associated plan reused to execute the new query." "ROLEX is a research system for closely coupled XML-relational interoperation [2]. 
Whereas typical XML-based applications interoperate with existing relational databases via a "shred-and-publish" approach, the ROLEX system seeks to provide direct access to relational data via XML interfaces at the speed of cached XML data. To achieve this, ROLEX is integrated tightly with both the DBMS and the application through a standard interface supported by most XML parsers, the Document Object Model (DOM). Thus, in general, an application need not be modified to be used with ROLEX." "VideoAnywhere has developed such a capability in the form of an extensible architecture as well as a specific implementation using the latest in Internet programming (Java, agents, XML, etc.) and applicable standards. It automatically extracts and manages an extensible set of metadata of major types of videos that can be queried using either attribute-based or keyword-based search. It also provides user profiling that can be combined with the query processing for filtering. A user-friendly interface provides management of all system functions and capabilities. VideoAnywhere can also be used as a video search engine for the Web, and a servlet-based version has also been implemented." "This is a beautifully simple paper that I feel encompasses many ideas that keep reappearing in different guises every decade or so! The paper proposes the replication of a dictionary (basically a set of key and value pairs) to all relevant sites in a distributed system. Updates and deletes are propagated in a lazy manner through the system as sites communicate with each other using a simple notion of a log. Queries are answered based on the local copy. The notion of correctness is based on the causal dependency between operations. A simple data structure keeps track of "who knows what", and is used for reducing the propagated information as well as for garbage collection." "The database community has been researching problems in similarity query for time series databases for many years. 
The techniques developed in the area might shed light on the query by humming problem. In this demo, we treat both the melodies in the music databases and the user humming input as time series. Such an approach allows us to integrate many database indexing techniques into a query by humming system, improving the quality of such systems over the traditional (contour) string databases approach. We design special searching techniques that are invariant to shifting, time scaling and local time warping. This makes the system robust and allows more flexible user humming input." "We have extended Rainbow, our existing XML data management system, as shown in Figure 1. Rainbow accepts an XQuery query or an update request in an extended XQuery syntax from the user. The XQuery is parsed into an algebraic representation, called XML Algebra Tree (XAT). The XAT is then optimized by the global query optimizer using algebraic rewrite rules. We have introduced a separate phase of XAT cleanup which includes the XAT table schema cleanup and cutting of unnecessary XML operators. This optimization often significantly improves the query performance. The optimized XAT is then executed by the query manager." "The Brazilian Symposium on Database Systems (SBBD) is a traditional conference in Brazil, and is sponsored by the Brazilian Computer Society. SBBD's technical program includes the following activities: presentation of peer-reviewed full technical papers, invited talks, tutorials (both invited and selected from submissions), discussion panels and presentation of tools." "XML has become ubiquitous, and XML data has to be managed in databases. The current industry standard is to map XML data into relational tables and store this information in a relational database. Such mappings create both expressive power problems and performance problems. In the TIMBER project we are exploring the issues involved in storing XML in native format. 
We believe that the key intellectual contribution of this system is a comprehensive set-at-a-time query processing ability in a native XML store, with all the standard components of relational query processing, including algebraic rewriting and a cost-based optimizer." The OASIS Prototype is under development at Dublin City University in Ireland. We describe a multi-database architecture which uses the ODMG model as a canonical model and describe an extension for construction of virtual schemas within the multidatabase system. The OMG model is used to provide a standard distribution layer for data from local databases. This takes the form of CORBA objects representing export schemas from separate data sources. "Companies are using the Web for information dissemination, sparking interest in models and efficient mechanisms for controlled access to information. In this context, securing XML documents is important. Much of the work on XML access control to date has studied models for the specification of XML access control policies, focusing on issues such as granularity of access and conflict resolution. However, there has been little work on enforcement of access control policies for queries. A naive two-step solution to secure query evaluation is to first compute query results, and then use access control policies to filter the results. Consider the XML database of an online seller, which has information on books and customers." ""The Senior Therapist's Grandiosity: Clinical and Ethical Consequences of Merging Multiple Roles" is the second paper by Robert S. Pepper, C.S.W., Ph.D. to appear in this Journal on this very important topic in the contemporary practice of psychotherapy. He notes that some senior therapists engage in multiple roles with grandiosity and other unresolved narcissistic pathology, doing harm to their patients, while violating professional and ethical codes of conduct." 
"This paper describes issues and solutions related to the creation of a product information database in the enterprise, and using this database as a foundation for deploying an electronic catalog. Today, product information is typically managed in document composition systems and communicated on paper. In the new wired world, these processes are undergoing fundamental changes to cope with the time-to-market pressure and the need for accurate, complete, and structured presentation of product information. Electronic catalogs are the answer." "With the popularity of XML, it is increasingly common to find data in the XML format. This highlights an important question: given an XML document S and a DTD D, how does one extract data from S and construct another XML document T such that T conforms to the fixed D? Let us refer to this as DTD-conforming XML to XML transformation. The need for this is evident in, e.g., data exchange: enterprises exchange their XML documents with respect to a certain predefined DTD. Although a number of XML query languages (e.g., XQuery, XSLT) are currently being used to transform XML data, they cannot guarantee DTD conformance. Type inference and (static) checking for XML transformations are too expensive to be used in practice; worse, they provide no guidance for how to specify a DTD-conforming XML to XML transformation. In response to the need we have developed TREX (TRansformation Engine for XML), a middleware system for DTD-conforming XML to XML transformations." "A long-term water quality study has been initiated by the Korean Ministry of Environment (MOE), the G-7 Project, in cooperation with two national research institutes, a university research team and a consulting firm. This study includes the development of computer software for a total water quality management system, the so-called ISWQM (Integrated System of Water Quality Management). 
ISWQM includes four major components: a GIS database; two artificial intelligence (AI) based expert systems to estimate pollutant loadings and to provide cost-effective wastewater treatment systems for small and medium-sized urban areas; and computer programs to integrate the database and expert systems." "The proliferation and affordability of smart sensors such as webcams, microphones, etc., have created opportunities for exciting new classes of distributed services. While such sensors are inexpensive and easy to deploy across a wide area, realizing useful services requires addressing a number of challenges, such as preventing transfer of large data feeds across the network, efficiently discovering relevant data among the distributed collection of sensors and delivering it to interested participants, and efficiently handling static meta-data information, live readings from sensor feeds, and historical data." "For several years now, you've been hearing and reading about an emerging standard that everybody has been calling SQL3. Intended as a major enhancement of the current second-generation SQL standard, commonly called SQL-92 because of the year it was published, SQL3 was originally planned to be issued in about 1996... but things didn't go as planned." "In this paper, we describe the system architecture and its underlying technology, and report on our ongoing implementation effort, which leverages the PostgreSQL open source code base. We also discuss open issues and our research agenda." "This paper considers this issue with respect to spatially distributed environmental models. A method of measuring the semantic proximity between components of large, integrated models is presented, along with an example illustrating its application. It is concluded that many of the issues associated with weak model semantics can be resolved with the addition of self-evaluating logic and context-based tools that present the semantic weaknesses to the end-user." 
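The preceding abstract does not reproduce the paper's actual proximity measure. As a minimal sketch of what such a measure could look like (an illustrative assumption, not the authors' method), semantic proximity between two model components might be scored as the Jaccard overlap of the descriptive terms attached to each component:

```python
def semantic_proximity(terms_a, terms_b):
    """Jaccard similarity between the descriptive term sets of two model
    components: |intersection| / |union|, in [0, 1].
    Hypothetical measure for illustration only."""
    a, b = set(terms_a), set(terms_b)
    if not a and not b:
        return 1.0  # two undocumented components are vacuously identical
    return len(a & b) / len(a | b)

# Invented example components from a hydrological model
runoff = {"water", "flow", "surface", "daily"}
infiltration = {"water", "flow", "soil", "hourly"}
print(semantic_proximity(runoff, infiltration))  # 2 shared terms of 6 total
```

A low score between components that are wired together in an integrated model would flag exactly the kind of weak, poorly matched semantics the abstract warns about.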
"Phrase matching is a common IR technique to search text and identify relevant documents in a document collection. Phrase matching in XML presents new challenges as text may be interleaved with arbitrary markup, thwarting search techniques that require strict contiguity or close proximity of keywords. We present a technique for phrase matching in XML that permits dynamic specification of both the phrase to be matched and the markup to be ignored. We develop an effective algorithm for our technique that utilizes inverted indices on phrase words and XML tags." The STREAM project at Stanford is developing a general-purpose system for processing continuous queries over multiple continuous data streams and stored relations. It is designed to handle high-volume and bursty data streams with large numbers of complex continuous queries. We describe the status of the system as of early 2003 and outline our ongoing research directions. "This paper reports on the principles underlying the semantic and pedagogic interoperability mechanisms built in the European Knowledge Pool System, developed by the European research project ARIADNE. This system, which is the central feature of ARIADNE, consists of a distributed repository of pedagogical documents (or learning objects) of diverse granularity, origin, content, type, language, etc., which are stored in view of their use (and reuse) in telematics-based training or teaching curricula." "It's my pleasure to announce that SIGMOD is financially strong. Our conference is the best in the field technically. This research excellence has contributed to financial health, enabling the conference to continue to generate more than enough revenue to cover its expenses. SIGMOD '98 took in approximately $25K more than it cost to produce." "MTCache is a prototype mid-tier database caching solution for SQL Server that achieves this transparency. It builds on SQL Server's support for materialized views, distributed queries and replication. 
We describe MTCache and report experimental results on the TPC-W benchmark. The experiments show that a significant part of the query workload can be offloaded to cache servers, resulting in greatly improved scale-out on the read-dominated workloads of the benchmark. Replication overhead was small, with an average replication delay of less than two seconds." "Internet, Web and distributed computing infrastructures continue to gain in popularity as a means of communication for organizations, groups and individuals alike. In such an environment, characterized by large distributed, autonomous, diverse, and dynamic information sources, access to relevant and accurate information is becoming increasingly complex. This complexity is exacerbated by the evolving system, semantic and structural heterogeneity of these potentially global, cross-disciplinary, multicultural and rich-media technologies. Clearly, solutions to these challenges require addressing directly a variety of interoperability issues." "A wealth of information is hidden within unstructured text. This information is often best utilized in structured or relational form, which is suited for sophisticated query processing, for integration with relational databases, and for data mining. For example, newspaper and e-mail archives contain information that could be useful to analysts and government agencies. Information extraction systems produce a structured representation of the information that is "buried" in text documents. Unfortunately, processing each document is computationally expensive, and is not feasible for large text databases or for the web. With many database sizes exceeding millions of documents, processing time is becoming a bottleneck for exploiting information extraction technology." "In this demonstration, we present a prototype peer-to-peer (P2P) application called PeerDB that provides database capabilities. 
This system has been developed at the National University of Singapore in collaboration with Fudan University, and is being enhanced with more features and applications. The concept behind PeerDB is similar to the analogy of publishing personal web sites, except that it is now applied to personal databases." "The goal of this paper is to describe the MOMIS (Mediator envirOnment for Multiple Information Sources) approach to the integration and query of multiple, heterogeneous information sources, containing structured and semistructured data. MOMIS has been conceived as a joint collaboration between the Universities of Milano and Modena in the framework of the INTERDATA national research project, aiming at providing methods and tools for data management in Internet-based information systems." "GridDB is an innovative solution built within Toshiba to solve these complex problems faced by its numerous customers. GridDB's design is founded on a versatile data store that is optimized for IoT, highly scalable, tuned for high performance, and highly reliable." "Obtaining high quality information has in recent years become a challenging task, as data should be gathered and filtered from a large, open and frequently changing network of distributed data sources, with blurred semantics, and no central control over the data sources' structure and availability. This technical challenge is highlighted by the advent of the World Wide Web, its current status and its evolving nature. This state of affairs creates a need for technologies for building information services, capable of providing high quality information obtained from distributed, heterogeneous, and autonomous data sources on an as-needed basis." "Semantic interoperability is a growing challenge in the United States Department of Defense (DoD). 
In this paper, we describe the basis of an infrastructure for the reconciliation of relevant, but semantically heterogeneous attribute values. Three types of information are described which can be used to infer the context of attributes, making explicit hidden semantic conflicts and making it possible to adjust values appropriately. Through an extended example, we show how an automated integration agent can derive the transformations necessary to perform four tasks in a simple semantic reconciliation." Database queries often take the form of correlated SQL queries. Correlation refers to the use of values from the outer query block to compute the inner subquery. This is a convenient paradigm for SQL programmers and closely mimics a function invocation paradigm in a typical computer programming language. Queries with correlated subqueries are also often created by SQL generators that translate queries from application domain-specific languages into SQL. Another significant class of queries that uses this correlated subquery form involves temporal databases queried with SQL. Performance of these queries is an important consideration, particularly in large databases. "Data warehouses form the essential infrastructure for many data analysis tasks. A core operation in a data warehouse is the construction of a data cube, which can be viewed as a multi-level, multi-dimensional database with aggregate data at multiple granularity." We deal with the issue of combining dozens of classifiers into a better one. Our first contribution is the introduction of the notion of communities of classifiers. We build a complete graph with one node per classifier and edges weighted by a measure of similarity between connected classifiers. The resulting community structure is uncovered from this graph using the state-of-the-art Louvain algorithm. Our second contribution is a hierarchical fusion approach driven by these communities. 
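The classifier-communities pipeline just described can be illustrated end to end. The sketch below is a simplified stand-in, not the paper's implementation: it assumes prediction agreement as the similarity measure and substitutes connected components of a thresholded similarity graph for the Louvain algorithm, then fuses by majority vote within each community and again across community votes:

```python
from itertools import combinations
from collections import Counter

def agreement(p, q):
    # Fraction of samples on which two classifiers emit the same label.
    return sum(a == b for a, b in zip(p, q)) / len(p)

def communities(preds, threshold=0.7):
    # Stand-in for Louvain: connected components of the graph whose edges
    # are classifier pairs with agreement >= threshold (union-find).
    names = list(preds)
    parent = {n: n for n in names}
    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]  # path halving
            n = parent[n]
        return n
    for a, b in combinations(names, 2):
        if agreement(preds[a], preds[b]) >= threshold:
            parent[find(a)] = find(b)
    groups = {}
    for n in names:
        groups.setdefault(find(n), []).append(n)
    return list(groups.values())

def fuse(preds, comms):
    # Hierarchical fusion: majority vote inside each community,
    # then a majority vote over the community-level votes.
    n_samples = len(next(iter(preds.values())))
    final = []
    for i in range(n_samples):
        community_votes = [
            Counter(preds[m][i] for m in c).most_common(1)[0][0] for c in comms
        ]
        final.append(Counter(community_votes).most_common(1)[0][0])
    return final
```

With `preds = {"c1": [1, 1, 0, 1], "c2": [1, 1, 0, 0], "c3": [0, 1, 0, 1]}`, classifiers c1/c2 and c1/c3 each agree on 3 of 4 samples, so all three land in one community and the fused prediction is the per-sample majority.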
"With these new expectations have come new responsibilities for the information systems professional. We can no longer concern ourselves merely with keeping our systems up and running. We now need to concern ourselves with subjective concepts such as response time and throughput. With current expectations what they are, performance tuning has become vitally important." "The amount of services and deployed software agents in the most famous offspring of the Internet, the World Wide Web, is exponentially increasing. In addition, the Internet is an open environment, where information sources, communication links and agents themselves may appear and disappear unpredictably. Thus, an effective, automated search and selection of relevant services or agents is essential for human users and agents as well. We distinguish three general agent categories in cyberspace: service providers, service requesters, and middle agents." "As our reliance on computers and computerized data has increased, we have come to expect more from our computers. We no longer expect our computers to act as large expensive calculators that merely spit out bills and paychecks. We now, additionally, expect our systems to rapidly access and interactively present us with large volumes of accurate data. In fact, our expectations have changed so much, in the past decade, that we no longer focus on what our systems are but rather on what they do." "Rapid growth in the volume of documents, their diversity, and terminological variations render federated digital libraries increasingly difficult to manage. Suitable abstraction mechanisms are required to construct meaningful and scalable document clusters, forming a cross-digital library information space for browsing and semantic searching. 
This paper addresses the above issues, proposes a distributed semantic framework that achieves a logical partitioning of the information space according to topic areas, and provides facilities to contextualize and landscape the available document sets in subject-specific categories." "Given the complexity of many queries over a Data Warehouse (DW), it is interesting to precompute and store in the DW the answer sets of some demanding operations, the so-called materialized views. In this paper, we present an algorithm, including its experimental evaluation, which allows the materialization of several views simultaneously without losing sight of processing costs for queries using these materialized views." "In view of this, we propose shifting from a cardinality-based approach to a rate-based approach, and give an optimization framework that aims at maximizing the output rate of query evaluation plans. This approach can be applied to cases where the cardinality-based approach cannot be used. It may also be useful for cases where cardinalities are known, because by focusing on rates we are able not only to optimize the time at which the last result tuple appears, but also to optimize for the number of answers computed at any specified time after the query evaluation commences. We present a preliminary validation of our rate-based optimization framework on a prototype XML query engine, though it is generic enough to be used in other database contexts." "Welcome to SIGMOD 2002! We think you will find both the conference and the setting to be invigorating. The natural timeless beauty of British Columbia will provide a fitting counterpoint to the dynamism of our field, in which large scale, high performance, and ever more intelligent database systems are being conceived and deployed. This dynamism is reflected in our (extreme) keynote presentations, tutorials, research papers, demonstrations, industrial papers, and product presentations. 
The only unfortunate side of our program is that the five parallel session structure may prevent you from hearing every talk in which you are interested." "The InfoSleuth Project at MCC has developed a distributed agent architecture that addresses the need for semantic interoperability among information sources and analytical tools within diverse application domains. InfoSleuth is being used as a significant component of the Environmental Data Exchange Network (EDEN). The current EDEN pilot demonstration enables integrated access via web browser to environmental information resources provided by offices of these agencies located in several states. At the application level, InfoSleuth provides for semantic interchange among users by allowing an application developer to express the concepts and relationships of the application domain in high-level terms that are then translated into the low-level types of database schemas or semantic analyses of text and image resources." "A point data retrieval algorithm for the HG-tree is introduced which reduces the number of nodes accessed. The HG-tree is a multidimensional indexing tree designed for point data, and it is a simple modification of the Hilbert R-tree for indexing spatial data. The HG-tree data search method mainly makes use of the Hilbert index values to search for exact data, instead of using conventional point search methods as used in most of the R-tree papers. The use of Hilbert curve values and MBRs can reduce the spatial cover of an MBR." "Clustering of large databases is an important research area with a large variety of applications in the database context. Missing in most of the research efforts are means for guiding the clustering process and understanding the results, which is especially important if the data under consideration is high dimensional and has not been collected for the purpose of being analyzed. 
Visualization technology may help to solve this problem since it allows an effective support of different clustering paradigms and provides means for a visual inspection of the results." "The integrated design of the Web interface and of data content gives several advantages in the design of data-intensive Web sites. The main objectives of this design process are (a) associating the Web with a high-level description of its content, that can be used for querying, evolution, and maintenance; (b) providing multiple views of the same data; (c) separating the definition of information content from Web page composition, navigation, and presentation, which should be defined independently and autonomously; (d) storing the meta-information collected during the design process within a repository used for the dynamic generation of Web pages; (e) collecting information about the Web site usage, obtained both statically (user registration) and dynamically (user tracking); (f) supporting selective access to information based on users' requirements and needs; (g) using business rules to improve the generation of effective Web pages and to present each user with personalised views of the Web site." "Currently gene expression data are being produced at a phenomenal rate. The general objective is to try to gain a better understanding of the functions of cellular tissues. In particular, one specific goal is to relate gene expression to cancer diagnosis, prognosis and treatment. However, a key obstacle is that the lack of suitable analysis tools impedes the use of the data, making it difficult for cancer researchers to perform analysis efficiently and effectively." "The chair of ACRP's Association Board of Trustees recounts how he left manufacturing and research and development a decade ago to take a position as a clinical research administrator. It was a move that placed him in a role he knew little about, having not been engaged in clinical research beforehand." 
"There are a number of database systems available free of charge for the research community, with complete access to the source code. Some of these systems result from completed research projects, others have been developed outside the research community. How can the database community best take advantage of these publicly available systems? The most widely used open-source database is MySQL. Their objective is to become the 'best and most used database in the world'. Can they do it without the database research community?" "Heavily used in both academic and corporate R&D settings, ACM Transactions on Database Systems (TODS) is a key publication for computer scientists working in data abstraction, data modeling, and designing data management systems. Topics include storage and retrieval, transaction management, distributed and federated databases, semantics of data, intelligent databases, and operations and algorithms relating to these areas." "From recent conferences, position papers, special journal issues, and informal hallway discussions, we have witnessed a quickly rising tide of interest among the database research community in querying and processing data streams. Numerous modern applications operate over data that arrives in the form of rapid, continuous streams, and in many of these applications simply diverting the streams in their entirety into a conventional DBMS and querying them in a conventional fashion is infeasible in terms of performance and functionality. Example applications include sensor networks, Web tracking, financial monitoring, telecommunications, network monitoring and traffic engineering, and manufacturing processes." "Currently, many projects are being conducted in the following areas: real-time databases, object-oriented databases, mobile databases, multimedia databases, data mining and data warehouses. 
These projects have been funded by federal and state agencies as well as private industry, including the National Science Foundation, the U.S. Department of Education, the Oklahoma State Department of Environmental Quality, and Objectivity, Inc." "Operators of large networks and providers of network services need to monitor and analyze the network traffic flowing through their systems. Monitoring requirements range from the long term (e.g., monitoring link utilizations, computing traffic matrices) to the ad-hoc (e.g., detecting network intrusions, debugging performance problems). Many of the applications are complex (e.g., reconstruct TCP/IP sessions), query layer-7 data (e.g., find streaming media connections), operate over huge volumes of data (Gigabit and higher speed links), and have real-time reporting requirements (e.g., to raise performance or intrusion alerts). We have found that existing network monitoring technologies have severe limitations. One option is to use TCPdump to monitor a network port and a user-level application program to process the data. While this approach is very flexible, it is not fast enough to handle gigabit speeds on inexpensive equipment." This paper aims at classifying existing approaches which can be used to query heterogeneous data sources. We consider one of these approaches, the mediated query approach, in more detail and provide a classification framework for it as well. "In this demo, we show how database-style declarative queries can be executed over data streaming from sensor networks. Our demo consists of two major components: a set of Berkeley TinyOS battery-powered, wireless sensor ""motes"" (see Figure 1) that produce and process data, and a desktop-based query processor which parses queries, distributes them over motes, and collects and displays answers. Specifically, we allow conference attendees standing at our query processing workstation to query a number of motes distributed throughout the demo space." 
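The sensor-network demo above evaluates declarative aggregate queries over readings arriving from motes. A rough illustration of that idea, assuming invented mote readings and a toy SELECT-AVG-WHERE shape (not the actual TinyOS or demo code):

```python
# Hypothetical stand-in for readings streamed from sensor motes.
readings = [
    {"mote": 1, "temp": 21.5, "light": 310},
    {"mote": 2, "temp": 23.0, "light": 290},
    {"mote": 3, "temp": 22.0, "light": 305},
]

def run_query(rows, select, where=lambda r: True):
    """Evaluate a tiny 'SELECT AVG(select) WHERE ...' query over readings."""
    matched = [r[select] for r in rows if where(r)]
    return sum(matched) / len(matched) if matched else None

# Analogue of: SELECT AVG(temp) FROM sensors WHERE light > 300
avg_temp = run_query(readings, "temp", where=lambda r: r["light"] > 300)
```

In the real system the query is parsed once at the workstation and pushed down to the motes; this sketch only shows the relational semantics being applied to sensor data.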
"Internet search engines have popularized keyword-based search. While relational database systems offer powerful structured query languages such as SQL, there is no support for keyword search over databases. The simplicity of keyword search as a querying paradigm offers compelling value for data exploration. Specifically, keyword search does not require a priori knowledge of the schema. This is significant because much of the information in a corporation is increasingly being made available on its intranet. However, it is unrealistic to expect users who would browse and query such information to have detailed knowledge of the schema of available databases. Therefore, just as keyword search and classification hierarchies complement each other for document search, keyword search over databases can be effective." "We are pleased to announce an excellent technical program for the 6th International Conference on Pervasive Computing and Communications. The program covers a broad cross section of topics in pervasive computing and communications. This year, 160 papers were submitted for consideration to the program committee. The selection process was therefore highly competitive, and the result is a program of high-quality papers." "Welcome to IPDPS 2004 in Santa Fe. This year's program includes 17 workshops with a total of 306 papers. Many of the workshops have grown steadily in strength and are now operating with parallel sessions or on multiple days. We are pleased to welcome one new workshop this year, in the area of High Performance Grid Computing. As always, we are looking for new workshop proposals for the next IPDPS." "Data cubes enable fast online analysis of large data repositories, which is attractive in many applications. 
Although several cube-based OLAP products are available, users may still encounter challenges in effectiveness and efficiency when exploring large data cubes, due to the huge computation space as well as the huge observation space in a data cube. CubeExplorer is an integrated environment for online exploration of data cubes. It integrates our newly developed techniques on iceberg cube computation, cube-based feature extraction, and gradient analysis, and makes cube exploration effective and efficient. In this demo, we will show the features of CubeExplorer, especially its power and flexibility in exploring and mining large databases." Content placement algorithm: An ACDN must decide which applications to deploy where and when. Content placement is solved trivially in traditional CDNs by cache replacement algorithms. We propose a multi-resolution transmission mechanism that allows various organizational units of a web document to be transferred and browsed according to the amount of information captured. We define the notion of information content for each individual organizational unit of a web document as an indication of its captured information. The concept of information content is used as a foundation for defining the notion of relative information content, which determines the transmission order of various units. Our mechanism allows a web client to explore the more content-bearing portions of a web document earlier, so as to be able to terminate browsing a possibly irrelevant document sooner. "DBCache also includes a cache initialization component that takes a backend database schema and SQL queries in the workload, and generates a middle-tier database schema for the cache." "The workshop Towards Adaptive Workflow System was organized by the authors of this report as part of the 1998 Conference on Computer Supported Collaborative Work (CSCW-98), and was held at the Westin Seattle on Saturday, November 14, 1998. 
The workshop had about 30 attendees and included invited presentations, paper presentations/discussions and a panel. This report describes the goals and topics of the workshop, presents the major activities, and summarizes some of the issues discussed." "We demonstrate the COUGAR System, a new distributed data management infrastructure that scales with the growth of sensor interconnectivity and computational power on the sensors over the next decades. Our system resides directly on the sensor nodes and creates the abstraction of a single processing node without centralizing data or computation." "This is my first issue as associate editor of software reviews for The American Statistician. In this column, I will introduce myself, comment on the types of software reviews that can be published in this section of The American Statistician, and encourage others in the profession to consider taking on the task of reviewing statistical software packages." "As the WWW becomes more and more popular and powerful, how to search information on the web in a database way becomes an important research topic. COMMIX, which is developed in the DB group at Peking University (China), is a system towards building a very large database using data from the Web for information extraction, integration and query answering. COMMIX has some innovative features, such as ontology-based wrapper generation, XML-based information integration, view-based query answering, and a QBE-style XML query interface." "Unfortunately, this will be my last influential papers column. I've been editor for about five years now (how time flies!) and have enjoyed it immensely. I've always found it rewarding to step back and look at why we do the research we do, and this column makes a big contribution to the process of self-examination. Further, I feel that there's a strong need for ways to publicly and explicitly highlight ""quality"" in papers. 
Criticism is easy, and is the more common experience given the amount of reviewing (and being reviewed) we typically engage in. I look forward to seeing this column in future issues." "In ebXML, trading parties collaborate by agreeing on the same business process with complementary roles. Therefore there is a need for standardized business processes. In this respect, exploiting the already developed expertise through RosettaNet PIPs becomes indispensable. We show how to create and use ebXML ""Binary Collaborations"" based on RosettaNet PIPs and provide a GUI tool to allow users to graphically build their ebXML business processes by combining RosettaNet PIPs. In ebXML, trading parties reveal essential information about themselves through Collaboration Protocol Profiles (CPPs)." "A database of ~250 active fault traces in the Caribbean and Central American regions has been assembled to characterize the seismic hazard and tectonics of the area, as part of the Global Earthquake Model (GEM) Foundation's Caribbean and Central American Risk Assessment (CCARA) project." "In this paper we describe a Patricia tree-based B-tree variant suitable for OLTP. In this variant, each page of the B-tree contains a local Patricia tree instead of the usual sorted array of keys. It has been implemented in iAnywhere ASA Version 8.0. Preliminary experience has shown that these indexes can provide significant space and performance benefits over existing ASA indexes." "Only a small fraction, probably less than 1%, of the $1.149 trillion spent annually on medical care is spent on health promotion. Despite the progress we have made on developing the science of health promotion, it is not recognized as a mature science by any respected scientific group." "A star schema is very popular for modeling data warehouses and data marts. 
Therefore, it is important that a database system which is used for implementing such a data warehouse or data mart is able to efficiently handle operations on such a schema. In this paper we will describe how one of these operations, the join operation --- probably the most important operation --- is implemented in the IBM Informix Extended Parallel Server (XPS)." "After the successful first International Workshop on Engineering Federated Database Systems (EFDBS'97) in Barcelona in June 1997 [CEH+ 97], the goal of this second workshop was to bring together researchers and practitioners interested in various issues in the development of federated information systems, whereby the scope has been extended to cover database and non-database information sources (the change from EFDBS to EFIS reflects this). This report provides details of the workshop content and the conclusions reached in the final discussion." "Conceptual data modeling for complex applications, such as multimedia and spatiotemporal applications, often results in large, complicated and difficult-to-comprehend diagrams. One reason for this is that these diagrams frequently involve repetition of autonomous, semantically meaningful parts that capture similar situations and characteristics. By recognizing such parts and treating them as units, it is possible to simplify the diagrams, as well as the conceptual modeling process. We propose to capture autonomous and semantically meaningful excerpts of diagrams that occur frequently as modeling patterns." "We present a comprehensive solution to the problem that has been tightly integrated with the optimizer of a commercial shared-nothing parallel database system. Our approach uses the query optimizer itself both to recommend candidate partitions for each table that will benefit each query in the workload, and to evaluate various combinations of these candidates. We compare a rank-based enumeration method with a random-based one. 
Our experimental results show that the former is more effective." "In this paper we present a novel technique for cost estimation of user-defined methods in advanced database systems. This technique is based on multi-dimensional histograms. We explain how the system collects statistics on the method that a database user defines and adds to the system. From these statistics a multi-dimensional histogram is built. Afterwards, this histogram can be used for estimating the cost of the target method whenever this method is referenced in a query." "In this paper, we propose the novel idea of hierarchical subspace sampling in order to create a reduced representation of the data. The method is naturally able to estimate the local implicit dimensionalities of each point very effectively, and thereby create a variable dimensionality reduced representation of the data. Such a technique has the advantage that it is very adaptive about adjusting its representation depending upon the behavior of the immediate locality of a data point. An interesting property of the subspace sampling technique is that unlike all other data reduction techniques, the overall efficiency of compression improves with increasing database size." "Much of the functionality required to support first class views can be generated semi-automatically, if the derivations between layers are declarative (e.g., SQL, rather than Java). We present a framework where propagation rules can be defined, allowing the flexible and incremental specification of view semantics, even by non-programmers. Finally, we describe research areas opened up by this approach." "This paper describes the Dwarf structure and the Dwarf cube construction algorithm. Further optimizations are then introduced for improving clustering and query performance. Experiments with the current implementation include comparisons on detailed measurements with real and synthetic datasets against previously published techniques. 
The comparisons show that Dwarfs by far out-perform these techniques on all counts: storage space, creation time, query response time, and updates of cubes." "In this article, we will discuss the effects of applying traditional transaction management techniques to multi-tier architectures in distributed environments. We will show the performance costs associated with distributed transactions and discuss ways by which enterprises really manage their distributed data to circumvent this performance hit. Our intent is to share our experience as an industrial customer with the database research and vendor community to create more usable and scalable designs." "Recently several important relational database tasks such as index selection, histogram tuning, approximate query processing, and statistics selection have recognized the importance of leveraging workloads. Often these tasks are presented with large workloads, i.e., a set of SQL DML statements, as input. A key factor affecting the scalability of such tasks is the size of the workload. In this paper, we present the novel problem of workload compression which helps improve the scalability of such tasks. We present a principled solution to this challenging problem. Our solution is broadly applicable to a variety of workload-driven tasks, while allowing for incorporation of task specific knowledge. We have implemented this solution and our experiments illustrate its effectiveness in the context of two workload-driven tasks: index selection and approximate query processing." "We have developed a web-based architecture and user interface for fast storage, searching and retrieval of large, distributed, files resulting from scientific simulations. We demonstrate that the new DATALINK type defined in the draft SQL Management of External Data Standard can help to overcome problems associated with limited bandwidth when trying to archive large files using the web. 
We also show that separating the user interface specification from the user interface processing can provide a number of advantages. We provide a tool to generate automatically a default user interface specification, in the form of an XML document, for a given database." "In this paper we describe solutions for two potentially problematic aspects of such a data management system: backup/recovery and data consistency. We present algorithms for performing backup and recovery of the DBMS data in a coordinated fashion with the files on the file servers. Our algorithms for coordinated backup and recovery have been implemented in the IBM DB2/DataLinks product. We also propose an efficient solution to the problem of maintaining consistency between the content of a file and the associated meta-data stored in the DBMS from a reader's point of view without holding long duration locks on meta-data tables. In the model, an object is directly accessed and edited in-place through normal file system APIs using a reference obtained via an SQL query on the database." Invited Talk I.- Some Advances in Data-Mining Techniques.- Web Exploration.- Querying Semantically Tagged Documents on the World-Wide Web.- WWW Exploration Queries.- Strategies for Filtering E-mail Messages Combining Content-Based and Sociological Filtering with User-Stereotypes.- Interactive Query Expansion in a Meta-search Engine.- Database Technology.- On the Optimization of Queries containing Regular Path Expressions. "Dual Match, recently proposed as a dual approach of FRM, improves performance significantly over FRM by exploiting the point filtering effect. However, it has the problem of having a smaller allowable window size---half that of FRM---given the minimum query length. A smaller window increases false alarms due to the window size effect. 
General Match offers the advantages of both methods: it can reduce the window size effect by using large windows like FRM and, at the same time, can exploit the point-filtering effect like Dual Match. General Match divides data sequences into generalized sliding windows (J-sliding windows) and the query sequence into generalized disjoint windows (J-disjoint windows)." "Clustering is the process of grouping a set of objects into classes of similar objects. Although definitions of similarity vary from one clustering model to another, in most of these models the concept of similarity is based on distances, e.g., Euclidean distance or cosine distance. In other words, similar objects are required to have close values on at least a set of dimensions. In this paper, we explore a more general type of similarity. Under the pCluster model we propose, two objects are similar if they exhibit a coherent pattern on a subset of dimensions. For instance, in DNA microarray analysis, the expression levels of two genes may rise and fall synchronously in response to a set of environmental stimuli. Although the magnitude of their expression levels may not be close, the patterns they exhibit can be very much alike. Discovery of such clusters of genes is essential in revealing significant connections in gene regulatory networks." This paper proposes a simple model for a timer-driven triggering and alerting system. Such a system can be used with relational and object-relational database systems. Timer-driven trigger systems have a number of advantages over traditional trigger systems that test trigger conditions and run trigger actions in response to update events. They are relatively easy to implement since they can be built using a middleware program that simply runs SQL statements against a DBMS. 
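Consistent with the claim that a timer-driven trigger system can be built as middleware that simply runs SQL against a DBMS, here is a minimal sketch using SQLite; the table, condition, and action are invented for illustration, and a real deployment would invoke the check on a timer rather than directly:

```python
import sqlite3

def check_triggers(conn, triggers, fired):
    """Run each trigger's condition query; if it returns rows, run the action.
    In a real system this function would be invoked periodically by a timer."""
    for condition_sql, action in triggers:
        rows = conn.execute(condition_sql).fetchall()
        if rows:
            action(rows, fired)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (sensor TEXT, value REAL)")
conn.execute("INSERT INTO readings VALUES ('boiler', 105.0), ('pump', 40.0)")

alerts = []
triggers = [
    # Hypothetical alerting rule: flag any reading above 100.
    ("SELECT sensor, value FROM readings WHERE value > 100",
     lambda rows, out: out.extend(f"ALERT {s}={v}" for s, v in rows)),
]
check_triggers(conn, triggers, alerts)
```

Because the middleware only issues ordinary SQL, the same loop works unchanged against any DBMS with a Python driver, which is the portability advantage the abstract points to.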
"We have developed new sort algorithms which eliminate almost all the compares, provide functional parallelism which can be exploited by multiple execution units, significantly reduce the number of passes through keys, and improve data locality. These new algorithms outperform traditional sort algorithms by a large factor. For the Datamation disk-to-disk sort benchmark (one million 100-byte records), at SIGMOD'94, Chris Nyberg et al. presented several new performance records using DEC Alpha processor-based systems." "For each algorithm, we investigate the performance effects of explicit duplicate removal and referential integrity enforcement, variants for inputs larger than memory, and parallel execution strategies. Analytical and experimental performance comparisons illustrate the substantial differences among the algorithms." "Many societal applications, for example, in domains such as health care, land use, disaster management, and environmental monitoring, increasingly rely on geographical information for their decision making. With the emergence of the World Wide Web this information is typically located in multiple, distributed, diverse, and autonomously maintained systems." "At present Web services are regarded as the revolution of the next generation of e-commerce, and their technical architecture includes UDDI, WSDL, SOAP, XML and so on. Web service discovery is one of the most important parts of the web service architecture." "In this paper, we report our success in building efficient scalable classifiers in the form of decision tables by exploring the capabilities of modern relational database management systems. In addition to high classification accuracy, the unique features of the approach include its high training speed, linear scalability, and simplicity of implementation. The novel classification approach based on grouping and counting and its implementation on top of an RDBMS is described. 
The results of experiments conducted for performance evaluation and analysis are presented." "This paper introduces algorithms to recognize the invariant part of a data flow tree, and to restructure the evaluation plan to reuse the stored intermediate result. We also propose an efficient method to teach an existing join optimizer to understand the invariant feature and thus allow it to generate better join plans in the new context. Some other related optimization techniques are also discussed. The proposed techniques were implemented within three months on an existing real commercial database system." "This demonstration illustrates how a comprehensive database reconciliation tool can provide the ability to characterize data-quality and data-reconciliation issues in complex real-world applications. Telcordia's data reconciliation and data quality analysis tool includes rapid generation of appropriate pre-processing and matching rules applied to a training set created from samples of the data. Once tuned, the appropriate rules can be applied efficiently to the complete data sets. The tool uses a modular JavaBeans-based architecture that allows for customized matching functions and iterative runs that build upon previously learned information." "In this paper, we define and examine a particular class of queries called group queries. Group queries are natural queries in many decision-support applications. The main characteristic of a group query is that it can be executed in a group-by-group fashion. In other words, the underlying relation(s) can be partitioned (based on some set of attributes) into disjoint groups, and each group can be processed separately. We give a syntactic criterion to identify these queries and prove its sufficiency." "Several alternatives to manage large XML document collections exist, ranging from file systems over relational or other database systems to specifically tailored XML base management systems. 
In this paper we give a tour of Natix, a database management system designed from scratch for storing and processing XML data. Contrary to the common belief that management of XML data is just another application for traditional databases like relational systems, we illustrate how almost every component in a database system is affected in terms of adequacy and performance." "The AQR-Toolkit divides the query routing task into two cooperating processes: query refinement and source selection. It is well known that a broadly defined query inevitably produces many false positives. Query refinement provides mechanisms to help the user formulate queries that will return more useful results and that can be processed efficiently. As a complementary process, source selection reduces false negatives by identifying and locating a set of relevant information providers from a large collection of available sources. By pruning irrelevant information sources, source selection also reduces the overhead of contacting the information servers that do not contribute to the answer of the query. The system architecture of AQR-Toolkit consists of a hierarchical network (a directed acyclic graph) with external information providers at the leaves and query routers as mediating nodes. The end-point information providers support query-based access to their documents." "We present the design of ObjectGlobe, a distributed and open query processor for Internet data sources. Today, data is published on the Internet via Web servers which have at best very localized query processing capabilities. The goal of the ObjectGlobe project is to establish an open marketplace in which data and query processing capabilities can be distributed and used by any kind of Internet application. Furthermore, ObjectGlobe integrates cycle providers (i.e., machines) which carry out query processing operators. 
The overall picture is to make it possible to execute a query with, in principle, unrelated query operators, cycle providers, and data sources." "We survey existing value-based approaches, develop a reference architecture that helps compare the approaches, and categorize the constituent algorithms. We explain the options for collecting value metadata, and for using that metadata to improve search, ranking of results, and the enhancement of information browsing. Based on our survey and analysis, we then point to several open problems." "MENTOR (""Middleware for Enterprise-Wide Workflow Management"") is a joint project of the University of the Saarland, the Union Bank of Switzerland, and ETH Zurich [1, 2, 3]. The focus of the project is on enterprise-wide workflow management. Workflows in this category may span multiple organizational units, each unit having its own workflow server, involve a variety of heterogeneous information systems, and require many thousands of clients to interact with the workflow management system (WFMS). The project aims to develop a scalable and highly available environment for the execution and monitoring of workflows, seamlessly integrated with a specification and verification environment. For the specification of workflows," "The popularity of on-line document databases has led to a new problem: finding which text databases (out of many candidate choices) are the most relevant to a user. Identifying the relevant databases for a given query is the text database discovery problem. The first part of this paper presents a practical solution based on estimating the result size of a query and a database. The method is termed GlOSS (Glossary of Servers Server). The second part of this paper evaluates the effectiveness of GlOSS based on a trace of real user queries. In addition, we analyze the storage cost of our approach." 
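A GlOSS-style database selection step can be sketched as follows: under an independence assumption, a server's result size for a conjunctive query is estimated from per-word document frequencies as |DB| * prod(df_i / |DB|). The database names and statistics below are invented for illustration:

```python
def gloss_estimate(query_words, db_size, doc_freq):
    """Estimate the result size of an AND-query against one text database,
    assuming query words occur independently across its documents."""
    est = db_size
    for w in query_words:
        est *= doc_freq.get(w, 0) / db_size
    return est

# Invented per-database statistics: (total docs, per-word document frequencies).
databases = {
    "cs-papers": (10_000, {"query": 4_000, "optimization": 2_500}),
    "news":      (50_000, {"query": 1_000, "optimization": 500}),
}

query = ["query", "optimization"]
# Rank candidate databases by their estimated result size for the query.
ranked = sorted(
    databases,
    key=lambda name: gloss_estimate(query, *databases[name]),
    reverse=True,
)
```

The estimates only need one document-frequency counter per word per database, which is why the paper can afford to keep such summaries for many servers and analyze their storage cost.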
"Over the coming years, an increasingly ubiquitous and increasingly capacious Internet will introduce new opportunities for the creation of tightly integrated databases distributed across multiple institutions. These new capabilities, along with certain techniques arising from the emerging field of computational finance, could ultimately transform a substantial portion of the world's commercial and financial activity in fundamental ways. This talk will focus on some of the most significant changes such technologies may induce in the structure of the world financial system and the mechanisms of global commerce. Consideration will be given to such topics as algorithmic trading and portfolio optimization; electronic markets, automated market-making, and the historical inevitability of computational disintermediation; and the future of electronic commerce, including the potential use of shared knowledge bases incorporating standardized representations of enormous numbers of products and services available from multiple sources." "We address the problem of ordering accesses to multiple information sources, in order to maximize the likelihood of obtaining answers as early as possible. We describe a declarative formalism for specifying these kinds of probabilistic information, and we propose algorithms for ordering the information sources. Finally, we discuss a preliminary experimental evaluation of these algorithms on the domain of bibliographic sources available on the WWW." "Relational OLAP tools and other database applications generate sequences of SQL statements that are sent to the database server as the result of a single information request provided by a user. Unfortunately, these sequences cannot be processed efficiently by current database systems because they typically optimize and process each statement in isolation. We propose a practical approach for this optimization problem, called ""coarse-grained optimization,"" complementing the conventional query optimization phase. This new approach exploits the fact that statements of a sequence are correlated since they belong to the same information request. A lightweight heuristic optimizer modifies a given statement sequence using a small set of rewrite rules." "The parameters of the continuous-time Markov chain model, the probabilities of co-accessing certain documents and the interaction times between successive accesses, are dynamically estimated and adjusted to evolving workload patterns by keeping online statistics. The integrated policy for vertical data migration has been implemented in a prototype system. The system makes profitable use of the Markov chain model also for the scheduling of volume exchanges in the tertiary storage library. Detailed simulation experiments with Web-server-like synthetic workloads indicate significant gains in terms of client response time. 
The experiments also show that the overhead of the statistical bookkeeping and the computations for the access predictions is affordable." "Over the last few years, there have been at least two dramatic changes in the way computers are used. The first has its origin in the fact that computers have become more and more connected to each other. The second was triggered by the increasing miniaturization and affordability of hardware components and power supplies, together with the development of wireless communication paths. These two trends combined have allowed the development of powerful, yet comparatively low-priced, portable computers. In spite of these changes, little attention has been given to reaching a common consensus and to the development of a strong infrastructure in this area." There appears to be a discrepancy between the research topics being pursued by the database research community and the key problems facing information systems decision makers such as Chief Information Officers (CIOs). Panelists will present their view of the key problems that would benefit from a research focus in the database research community and will discuss perceived discrepancies. 
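The continuous-time Markov chain policy summarized above rests on simple online statistics: counting which documents tend to follow which. A minimal sketch of the estimation and prediction steps, assuming a discrete access log and first-order transitions only (the model in the abstract also tracks interaction times, which this sketch omits; the function names are illustrative):

```python
from collections import Counter, defaultdict

def estimate_transitions(access_log):
    # Count successor frequencies for each document; an online
    # implementation would update these counts incrementally per access.
    counts = defaultdict(Counter)
    for prev, nxt in zip(access_log, access_log[1:]):
        counts[prev][nxt] += 1
    # Normalize counts into co-access probabilities.
    return {doc: {d: c / sum(succ.values()) for d, c in succ.items()}
            for doc, succ in counts.items()}

def predict_next(probs, current):
    # Most likely next document under the estimated chain, if any;
    # a migration policy would prefetch or keep such documents online.
    succ = probs.get(current)
    return max(succ, key=succ.get) if succ else None
```

A cache or tertiary-storage policy would then rank documents by these predicted access probabilities when deciding what to migrate or prefetch.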
"Morgan Kaufmann Publishers is the exclusive worldwide distributor for the VLDB proceedings volumes listed below (year, city, ISBN): 2000 Cairo, Egypt 1-55860-715-3 1999 Edinburgh, Scotland 1-55860-615-7 1998 New York, USA 1-55860-566-5 1997 Athens, Greece 1-55860-470-7 1996 Mumbai (Bombay), India 1-55860-382-4 1995 Zurich, Switzerland 1-55860-379-4 1994 Santiago, Chile 1-55860-153-8 1993 Dublin, Ireland 1-55860-152-X 1992 Vancouver, Canada 1-55860-151-1 1991 Barcelona, Spain 1-55860-150-3 1990 Brisbane, Australia 1-55860-149-X 1989 Amsterdam, The Netherlands 1-55860-101-5 1988 Los Angeles, USA 0-934613-75-3 1985 Stockholm, Sweden 0-934613-17-6 1984 Singapore 0-934613-16-8 1983 Florence, Italy 0-934613-15-X 1996-2000 5-year set 1-55860-719-6 ($198) 1991-2000 10-year set 1-55860-720-X ($378) 1988-2000 13-year set 1-55860-718-8 ($486)" "We show that in this case there can be multiple maximal admissible subsets and that all maximal admissible subsets can be characterized as 3-valued stable models of PRA. We show that for a given set of user requests, in the presence of referential actions of the form ON UPDATE CASCADE, the admissibility check and the computation of the subsequent database state, and (for non-admissible updates) the derivation of debugging hints all are in PTIME. Thus, full referential actions can be implemented efficiently." "Unfortunately, there is very little money to support the health promotion initiatives in this plan. Health promotion receives very little of the $17 billion NIH research budget and few health promotion procedures are covered by the $400+ billion spent annually for Medicare and Medicaid. The Office of Health Promotion and Disease Prevention has a budget so small that very few health promotion professionals ever encounter this office directly during their careers." 
"The paper discusses the LHAM concepts, including concurrency control and recovery, our full-fledged LHAM implementation, and experimental performance results based on this implementation. A detailed comparison with the TSB-tree, both analytically and based on experiments with real implementations, shows that LHAM is highly superior in terms of insert performance, while query performance is in almost all cases at least as good as for the TSB-tree; in many cases it is much better." "We describe the TIGUKAT objectbase management system, which is under development at the Laboratory for Database Systems Research at the University of Alberta. TIGUKAT has a novel object model, whose identifying characteristics include a purely behavioral semantics and a uniform approach to objects. Everything in the system, including types, classes, collections, behaviors, and functions, as well as meta-information, is a first-class object with well-defined behavior. In this way, the model abstracts everything, including traditional structural notions such as instance variables, method implementation, and schema definition, into a uniform semantics of behaviors on objects." "This paper describes the Onion technique, a special indexing structure for linear optimization queries. Linear optimization queries ask for top-N records subject to the maximization or minimization of a linearly weighted sum of record attribute values. Such queries appear in many applications employing linear models and are an effective way to summarize representative cases, such as the top-50 ranked colleges. The Onion indexing is based on a geometric property of the convex hull, which guarantees that the optimal value can always be found at one or more of its vertices. 
"WALRUS employs a novel similarity model in which each image is first decomposed into its regions and the similarity measure between a pair of images is then defined to be the fraction of the area of the two images covered by matching regions from the images. In order to extract regions for an image, WALRUS considers sliding windows of varying sizes and then clusters them based on the proximity of their signatures. An efficient dynamic programming algorithm is used to compute wavelet-based signatures for the sliding windows. Experimental results on real-life data sets corroborate the effectiveness of WALRUS's similarity model." "The education industry has a very poor record of productivity gains. In this brief article, I outline some of the ways the teaching of a college course in database systems could be made more efficient, and staff time used more productively. These ideas carry over to other programming-oriented courses, and many of them apply to any academic subject whatsoever. After proposing a number of things that could be done, I concentrate here on a system under development, called OTC (On-line Testing Center), and on its methodology of ""root questions.""" "From the standpoint of satisfying humans' information needs, current digital library (DL) systems suffer from the following two shortcomings: (i) inadequate high-level cognition support; (ii) inadequate knowledge sharing facilities. In this article, we introduce a two-layered digital library architecture to support different levels of human cognitive acts. The model moves beyond simple information searching and browsing across multiple repositories, to inquiry of knowledge about the contents of digital libraries." "In this paper we propose a shift in the intuition behind outerjoin: Instead of computing the join while also preserving its arguments, outerjoin delivers tuples that come either from the join or from the arguments. 
Queries with joins and outerjoins deliver tuples that come from one out of several joins, where a single relation is a trivial join. An advantage of this view is that, in contrast to preservation, disjunction is commutative and associative, which is a significant property for intuition, formalisms, and generation of execution plans." "This paper describes an incomplete data cube design. An incomplete data cube is modeled as a federation of cubettes. A cubette is a complete subcube within the incomplete data cube. The incomplete cube is built piecemeal by giving a concise, high-level specification of each cubette. An efficient algorithm to retrieve an aggregate value from the incomplete data cube is described. When a value cannot be retrieved because it is missing, alternatives at a lower precision that can be retrieved are identified." "Complex database queries require the use of memory-intensive operators like sort and hash-join. Those operators need memory, also referred to as SQL memory, to process their input data. For example, a sort operator uses a work area to perform the in-memory sort of a set of rows. The amount of memory allocated by these operators greatly affects their performance. However, there is only a finite amount of memory available in the system, shared by all concurrent operators. The challenge for database systems is to design a fair and efficient strategy to manage this memory." "The elapsed time for external mergesort is normally dominated by I/O time. This paper is focused on reducing I/O time during the merge phase. Three new buffering and readahead strategies are proposed, called equal buffering, extended forecasting and clustering. They exploit the fact that virtually all modern disks perform caching and sequential readahead. The latter two also collect information during run formation (the last key of each run block) which is then used to preplan reading. 
For random input data, extended forecasting and clustering were found to reduce merge time by 30% compared with traditional double buffering." "We also describe an adaptive page sampling algorithm which achieves greater efficiency by using all values in a sampled page but adjusts the amount of sampling depending on clustering of values in pages. Next, we establish that the problem of estimating the number of distinct values is provably difficult, but propose a new error metric which has a reliable estimator and can still be exploited by query optimizers to influence the choice of execution plans." "Various relation-based systems, concerned with the qualitative representation and processing of spatial knowledge, have been developed in numerous application domains. In this article, we identify the common concepts underlying qualitative spatial knowledge representation, we compare the representational properties of the different systems, and we outline the computational tasks involved in relation-based spatial information processing. We also describesymbolic spatial indexes, relation-based structures that combine several ideas in spatial knowledge representation. A symbolic spatial index is an array that preserves only a set of spatial relations among distinct objects in an image, called the modeling space; the index array discards information, such as shape and size of objects, and irrelevant spatial relations." "A goal of the Biomedical Informatics Research Network (birn) project is to develop a multi-institution information management system for Neurosciences to gain a deeper understanding of several neurological disorders. Each institution specializes in a different subdiscipline and produces a database of its experimental or computationally derived data; a mediator module performs semantic integration over the databases to enable neuroscientists to perform analyses that could not be done from any single institution¡¯s data. 
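The forecasting idea in the external-mergesort abstract above reduces to a simple rule: among the buffered run blocks, the one ending with the smallest key will be drained first, so that run's next block should be read (or prefetched) ahead of the others; knowing the last key of every run block from run formation is what allows reads to be preplanned. A sketch, assuming in-memory runs and leaving actual blocking, disk caching, and readahead aside:

```python
import heapq

def kway_merge(runs):
    # Demand-driven k-way merge of sorted runs; heapq.merge performs
    # the same key comparisons an external merge phase would.
    return list(heapq.merge(*runs))

def forecast_next_read(buffers):
    # Forecasting rule: the buffered block with the smallest last key
    # is exhausted first, so its run's next block is read next.
    live = [(buf[-1], run) for run, buf in enumerate(buffers) if buf]
    return min(live)[1] if live else None
```

Strategies like extended forecasting apply this rule ahead of time, using the recorded last keys of all run blocks, to schedule large sequential reads before the merge actually demands them.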
The overall system architecture of the BIRN system is that of a wrapper-mediator system. The information sources are various relational sources, including Oracle 9i with user-defined packages, Oracle 8i with the Spatial Data Cartridge, and databases made available over the web." "E-business sites are increasingly utilizing dynamic web pages since they enable a much wider range of interaction than static HTML pages can provide. Dynamic page generation technologies allow a Web site to generate pages at run-time, based on various parameters. Delaying content decisions until run-time affords a Web site significant flexibility in customizing page content, thereby enriching users' Web experiences. At the same time, however, dynamic page generation technologies have resulted in serious performance problems due to the increased load placed on the server-side infrastructure. Consequently, end users experience increased response times." "Data mining evolved as a collection of applicative problems and efficient solution algorithms relative to rather peculiar problems, all focused on the discovery of relevant information hidden in databases of huge dimensions. In particular, one of the most investigated topics is the discovery of association rules. This work proposes a unifying model that enables a uniform description of the problem of discovering association rules. The model provides an SQL-like operator, named MINE RULE, which is capable of expressing all the problems presented so far in the literature concerning the mining of association rules. We demonstrate the expressive power of the new operator by means of several examples, some of which are classical, while some others are fully original and correspond to novel and unusual applications. We also present the operational semantics of the operator by means of an extended relational algebra." 
"Application Servers (ASs), which have become very popular in the last few years, provide the platforms for the execution of transactional, server-side applications in the online world. ASs are the modern cousins of traditional transaction processing monitors (TPMs) like CICS. In this tutorial, I will provide an introduction to different ASs and their technologies. ASs play a central role in enabling electronic commerce in the web context. They are built on the basis of more standardized protocols and APIs than were the traditional TPMs. The emergence of Java, XML and OMG standards has played a significant role in this regard." "This paper formalizes a rule-based policy framework that includes provisions and obligations, and investigates a reasoning mechanism within this framework. A policy decision may be supported by more than one derivation, each associated with a potentially different set of provisions and obligations (called a global PO set). The reasoning mechanism can derive all the global PO sets for each specific policy decision, and facilitates the selection of the best one based on numerical weights assigned to provisions and obligations as well as on semantic relationships among them." "We highlight some of the pleasures of coding with Java, and some of the pains of coding around Java in order to obtain good performance in a data-intensive server. For those issues that were painful, we present concrete suggestions for evolving Java's interfaces to better suit serious software systems development. We believe these experiences can provide insight for other designers to avoid pitfalls we encountered and to decide if Java is a suitable platform for their system." "We investigate the optimization and evaluation of queries with universal quantification in the context of the object-oriented and object-relational data models. The queries are classified into 16 categories depending on the variables referenced in the so-called range and quantifier predicates. 
For the three most important classes we enumerate the known query evaluation plans and devise some new ones. These alternative plans are primarily based on anti-semijoin, division, generalized grouping with count aggregation, and set difference. In order to evaluate the quality of the many different evaluation plans a thorough performance analysis on some sample database configurations was carried out." "In this paper, we establish a number of optimality results for the existing encoding schemes; in particular, we prove that neither of the two known schemes is optimal for the class of two-sided range queries. We also propose a new encoding scheme and prove that it is optimal for that class. Finally, we present an experimental study comparing the performance of the new encoding scheme with that of the existing ones as well as four hybrid encoding schemes for both simple selection queries and the more general class of membership queries of the form" The results show that current hardware technology trends have significantly changed the performance tradeoffs considered in past studies. A simplistic data placement strategy based on the new results is developed and shown to perform well for a variety of workloads. "BeSS is a high performance, memory-mapped object storage manager offering distributed transaction management facilities and extensible support for persistence. 
In this paper, we present an overview of the peer-to-peer architecture of BeSS, and we discuss issues related to space management, inter-object references, database corruption, operation modes, cache replacement, and transaction management." "We present an update on the status of the Cougar Sensor Database Project, in which we are investigating a database approach to sensor networks: Clients ""program"" the sensors through queries in a high-level declarative language (such as a variant of SQL). In this paper, we give an overview of our activities on energy-efficient data dissemination and query processing. Due to space constraints, we cannot present a full menu of results; instead, we decided to only whet the reader's appetite with some problems in energy-efficient routing and in-network aggregation and some thoughts on how to approach them." "We present the information mediator prototype called Kind, recently developed as part of an integrated Neuroscience workbench project at SDSC/UCSD within the NPACI project. The broad goal of the workbench is to serve as an environment where, among other tasks, the Neuroscientist can query a mediator to retrieve information from across a number of information sources, and use the results to perform her own analysis on the data. The Kind mediator is an instance of a novel model-centered mediator architecture that extends current XML-based mediator approaches by incorporating a semantic model of an information source as an integral part of the mediation process." "An ε-approximate quantile summary of a sequence of N elements is a data structure that can answer quantile queries about the sequence to within a precision of εN. We present a new online algorithm for computing ε-approximate quantile summaries of very large data sequences. The algorithm has a worst-case space requirement of O((1/ε) log(εN)). This improves upon the previous best result of O((1/ε) log²(εN)). 
Moreover, in contrast to earlier deterministic algorithms, our algorithm does not require a priori knowledge of the length of the input sequence. Finally, the actual space bounds obtained on experimental data are significantly better than the worst case guarantees of our algorithm as well as the observed space requirements of earlier algorithms." "The democratization of ubiquitous computing, the increasing connection of corporate databases to the Internet and today's natural resort to Web hosting companies and Database Service Providers strongly emphasize the need for data confidentiality. The chapter proposes a solution called chip-secured data access (C-SDA), which allows querying encrypted data while controlling personal privileges. C-SDA is a client-based security component acting as an incorruptible mediator between a client and an encrypted database." "PointCast Inc., the inventor and leader in broadcast news via the Internet and corporate intranets, was founded in 1992 to deliver news as it happens from leading sources such as CNN, the New York Times, Wall Street Journal Interactive Edition and more, directly to a viewer's computer screen." "In response to pressures to reduce product lead times, manufacturing companies are increasingly aware of the need for some form of integration along the whole product chain. Engineering tasks must be coordinated and data exchanged between the various specialised tools. An enterprise has two main tracks of information flow, namely technical and managerial, and product data management spans both tracks. On the technical track, applications are highly specialised, supporting tasks such as product design (CAD) and the programming of numerically controlled machines (CAM). Generally, the various application systems on the technical track are referred to as CAx systems. 
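The ε-approximate quantile interface described in the summary abstract above can be illustrated offline: keep roughly every ⌈εN⌉-th element of the sorted data together with its rank, and answer quantile queries from the kept ranks. This is only a simplified stand-in for the online algorithm, which achieves the stated O((1/ε) log(εN)) space bound on a stream without presorting:

```python
def build_summary(data, eps):
    # Simplified offline summary: keep every k-th element of the sorted
    # data (k ~ eps*N), so any rank query errs by at most eps*N.
    s = sorted(data)
    n = len(s)
    k = max(1, int(eps * n))
    summary = [(s[i], i) for i in range(0, n, k)]
    if summary[-1][1] != n - 1:
        summary.append((s[-1], n - 1))  # always keep the maximum
    return summary, n

def query(summary, n, phi):
    # Return a stored value whose rank is nearest the phi-quantile rank.
    target = phi * (n - 1)
    return min(summary, key=lambda vr: abs(vr[1] - target))[0]
```

The summary holds about 1/ε entries regardless of N, which is the space-versus-precision trade the online algorithms refine.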
CAx systems may not only differ in terms of functionality but also in terms of the amount and type of data managed, the run-time environment and performance characteristics. For complete support of Computer Integrated Manufacturing (CIM), we must be able to integrate existing technical and administrative component application systems." "A number of researchers have become interested in the design of global-scale networked systems and applications. Our thesis here is that the database community's principles and technologies have an important role to play in the design of these systems. The point of departure is at the roots of database research: we generalize Codd's notion of data independence to physical environments beyond storage systems. We note analogies between the development of database indexes and the new generation of structured peer-to-peer networks. We illustrate the emergence of data independence in networks by surveying a number of recent network facilities and applications, seen through a database lens." "The CQ project at OGI, funded by DARPA, aims at developing a scalable toolkit and techniques for update monitoring and event-driven information delivery on the net. The main feature of the CQ project is a ""personalized update monitoring"" toolkit based on continual queries [3]. Compared with the pure pull (such as DBMSs, various web search engines) and pure push (such as Pointcast, Marimba, Broadcast disks) technology, the CQ project can be seen as a hybrid approach that combines the pull and push technology by supporting personalized update monitoring through a combined client-pull and server-push paradigm." "Beginning in 1989, an ad-hoc collection of senior DBMS researchers has gathered periodically to perform a ""group grope""; i.e., an assessment of the state of the art in DBMS research as well as a prediction concerning what problems and problem areas deserve additional focus. The fifth ad-hoc meeting was held May 4-6, 2003 in Lowell, MA. 
A report on the meeting is in preparation, and this panel discussion will summarize the upcoming document and discuss its conclusions." "In this demo, we show how database-style declarative queries can be executed over data streaming from sensor networks. Our demo consists of two major components: a set of Berkeley TinyOS battery-powered, wireless sensor ""motes"" (see Figure 1) that produce and process data, and a desktop-based query processor which parses queries, distributes them over motes, and collects and displays answers. Specifically, we allow conference attendees standing at our query processing workstation to query a number of motes distributed throughout the demo space." "This paper addresses the problem of finding the K closest pairs between two spatial data sets, where each set is stored in a structure belonging to the R-tree family. Five different algorithms (four recursive and one iterative) are presented for solving this problem. The case of 1 closest pair is treated as a special case. An extensive study, based on experiments performed with synthetic as well as with real point data sets, is presented. A wide range of values for the basic parameters affecting the performance of the algorithms, especially the effect of overlap between the two data sets, is explored." "In this paper, we present an original and complete methodology for supervising relational query processing in shared-nothing systems. A new control mechanism is introduced which allows the detection and the correction of optimizer estimation errors and load imbalance. We especially focus on the management of intraprocessor communication and on the overlapping of communication and computation. Performance evaluations on a hypercube and a grid interconnection machine show the efficiency and the robustness of the proposed methods." "In this research, we represent our operational environment in two distinct ways. 
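The K-closest-pairs problem in the abstract above has a straightforward quadratic reference solution, useful as a correctness oracle for the R-tree algorithms, which prune the same search space using minimum distances between bounding rectangles rather than scoring every pair:

```python
import heapq
import itertools
import math

def k_closest_pairs(P, Q, k):
    # Exhaustive reference: score every cross pair and keep the k
    # smallest distances; an R-tree algorithm must return the same set.
    pairs = ((math.dist(p, q), p, q) for p, q in itertools.product(P, Q))
    return heapq.nsmallest(k, pairs)
```

For k = 1 this degenerates to the classical closest-pair query between the two sets, the special case the paper treats separately.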
First, we characterize the underlying physical databases that serve as a foundation for the entire Westlaw search system. Second, we create a rearchitected set of logical document collections that corresponds to classes of high level organizational concepts such as jurisdiction, practice area, and document-type. Keeping the end-user in mind, we focus on performance issues relating to optimal database selection, where domain experts have provided complete pre-hoc relevance judgments for collections characterized under each of our physical and logical database models." "In this paper, we investigate algorithms for generic schema matching, outside of any particular data model or application. We first present a taxonomy for past solutions, showing that a rich range of techniques is available. We then propose a new algorithm, Cupid, that discovers mappings between schema elements based on their names, data types, constraints, and schema structure, using a broader set of techniques than past approaches. Some of our innovations are the integrated use of linguistic and structural matching, context-dependent matching of shared types, and a bias toward leaf structure where much of the schema content resides. After describing our algorithm, we present experimental results that compare Cupid to two other schema matching systems." "Here we propose a simple language UnQL for querying data organized as a rooted, edge-labeled graph. In this model, relational data may be represented as fixed-depth trees, and on such trees UnQL is equivalent to the relational algebra. The novelty of UnQL consists in its programming constructs for arbitrarily deep data and for cyclic structures. While strictly more powerful than query languages with path expressions like XSQL, UnQL can still be efficiently evaluated. We describe new optimization techniques for the deep or ""vertical"" dimension of UnQL queries. 
Furthermore, we show that known optimization techniques for operators on flat relations apply to the ""horizontal"" dimension of UnQL." "Interactive computer graphics systems for visualization of realistic-looking, three-dimensional models are useful for evaluation, design and training in virtual environments, such as those found in architectural and mechanical CAD, flight simulation, and virtual reality. Interactive visualization systems display images of a three-dimensional model on the screen of a computer workstation as seen from a simulated observer's viewpoint under control by a user. If images are rendered smoothly and quickly enough, an illusion of real-time exploration of a virtual environment can be achieved as the simulated observer moves through the model." "This article concentrates on query unnesting (also known as query decorrelation), an optimization that, even though it improves performance considerably, is not treated properly (if at all) by most OODB systems. Our framework generalizes many unnesting techniques proposed recently in the literature, and is capable of removing any form of query nesting using a very simple and efficient algorithm. The simplicity of our method is due to the use of the monoid comprehension calculus as an intermediate form for OODB queries. The monoid comprehension calculus treats operations over multiple collection types, aggregates, and quantifiers in a similar way, resulting in a uniform method of unnesting queries, regardless of their type of nesting." "XQuery is a query language for real and virtual XML documents and collections of these documents. Its development began in the second half of 1999. With roughly 3 years of work completed, it's high time that we provided an initial description of this language, and a sense of where it is in its development cycle. XQuery is being developed within W3C. Every consortium of this type has its own rules and its own ways of getting its work done. 
W3C provides visibility to the public by making available drafts of the specifications that it has under development at relatively frequent intervals. Mailing lists are established for each specification to allow the public to provide feedback on these drafts." "In this paper we examine the problem of duplicates in transformation-based enumeration. In general, different sequences of transformation rules may end up deriving the same element, and the optimizer must detect and discard these duplicate elements generated by multiple paths. We show that the usual commutativity/associativity rules for joins generate O(4^n) duplicate operators. We then propose a scheme --- within the generic transformation-based framework --- to avoid the generation of duplicates, which does achieve the O(3^n) lower bound on join enumeration. Our experiments show an improvement of up to a factor of 5 in the optimization of a query with 8 tables, when duplicates are avoided rather than detected." "This chapter shows the basic functionality of the Gem-BASE system, using the Oracle database installed at the San Diego Supercomputer Center. The database has been extended with data types for representing surfaces, which also implement all the necessary operations. Surface objects are specialized into spherical surface objects and flat surface objects. Brain researchers, neurologists and neurosurgeons acquire 3-D brain images from normal and diseased subjects in order to study the properties of brains, make comparisons and measure changes caused by factors like age and disease." "Scientific data of importance to biologists in the Human Genome Project resides not only in conventional databases, but in structured files maintained in a number of different formats (e.g. ASN.1 and ACE) as well as sequence analysis packages (e.g. BLAST and FASTA). 
These formats and packages contain a number of data types not found in conventional databases, such as lists and variants, and may be deeply nested. We present in this paper techniques for querying and transforming such data, and illustrate their use in a prototype system developed in conjunction with the Human Genome Center for Chromosome 22. We also describe optimizations performed by the system, a crucial issue for bulk data." "The nearest- or near-neighbor query problems arise in a large variety of database applications, usually in the context of similarity searching. Of late, there has been increasing interest in building search/index structures for performing similarity search over high-dimensional data, e.g., image databases, document collections, time-series databases, and genome databases. Unfortunately, all known techniques for solving this problem fall prey to the ""curse of dimensionality."" That is, the data structures scale poorly with data dimensionality; in fact, if the number of dimensions exceeds 10 to 20, searching in k-d trees and related structures involves the inspection of a large fraction of the database, thereby doing no better than brute-force linear search. It has been suggested that since the selection of features and the choice of a distance metric in typical applications is rather heuristic, determining an approximate nearest neighbor should suffice for most practical purposes. In this paper, we examine a novel scheme for approximate similarity search based on hashing." "We address the development of a normalization theory for object-oriented data models that have common features to support objects. We first provide an extension of functional dependencies to cope with the richer semantics of relationships between objects, called path dependency, local dependency, and global dependency constraints. 
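One concrete instance of hashing-based approximate similarity search of the kind discussed in the abstract above is random-hyperplane LSH: each bit of a signature records which side of a random hyperplane a vector falls on, so the probability that two vectors disagree on a bit grows with the angle between them, and close vectors tend to collide. A sketch (the function names are illustrative, not from the paper):

```python
import random

def make_lsh(dim, n_bits, seed=0):
    # Draw n_bits random Gaussian hyperplanes; a vector's signature is
    # the tuple of sides it falls on. Near-parallel vectors share most
    # bits, so signatures can bucket candidates for approximate NN.
    rng = random.Random(seed)
    planes = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]
    def signature(v):
        return tuple(int(sum(p * x for p, x in zip(plane, v)) >= 0)
                     for plane in planes)
    return signature

def matching_bits(s, t):
    # Signature agreement is a proxy for angular similarity.
    return sum(x == y for x, y in zip(s, t))
```

Unlike tree-based structures, the cost of probing a hash bucket does not blow up with dimensionality, which is what makes this family of schemes attractive past 10 to 20 dimensions.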
Using these dependency constraints, we provide normal forms for object-oriented data models based on the notions of user interpretation (user-specified dependency constraints) and object model. In contrast to conventional data models, in which a normalized object has a unique interpretation, in object-oriented data models an object may have multiple interpretations that form the model for that object. An object will then be in a normal form if and only if the user's interpretation is derivable from the model of the object. Our normalization process is by nature iterative: objects are restructured until their models reflect the user's interpretation." "Wide-area database replication technologies and the availability of data centers allow database copies to be distributed across the network. This requires a complete e-commerce website suite (i.e. edge caches, Web servers, application servers, and DBMS) to be distributed along with the database replicas. A major advantage of this approach is, like the caches, the possibility of serving dynamic content from a location close to the users, reducing network latency. However, this is achieved at the expense of additional overhead, caused by the need to invalidate dynamic content cached in the edge caches and to synchronize the database replicas in the data center." "In this paper, we propose a new technique called structural function inlining which inlines recursive functions used in a query by making good use of available type information. Based on the technique, we develop a new approach to typing and optimizing structurally recursive queries. The new approach yields a more precise result type for a query. Furthermore, it produces an optimal algebraic expression for the query with respect to the type information. When a structurally recursive query is applied to non-recursive XML data, our approach translates the query into finitely nested iterations." 
"To bridge the gap between these two extremes, we propose a new class of replication systems called TRAPP (Tradeoff in Replication Precision and Performance). TRAPP systems give each user fine-grained control over the tradeoff between precision and performance: caches store ranges that are guaranteed to bound the current data values, instead of storing stale exact values. Users supply a quantitative precision constraint along with each query. To answer a query, TRAPP systems automatically select a combination of locally cached bounds and exact master data stored remotely to deliver a bounded answer consisting of a range that is no wider than the specified precision constraint, that is guaranteed to contain the precise answer, and that is computed as quickly as possible. This paper defines the architecture of TRAPP replication systems and covers some mechanics of caching data ranges." This paper proposes an effective key management scheme to harden embedded devices against side-channel attacks. The technique leverages the bandwidth limitation of side channels and employs an effective updating mechanism to prevent the keying material from being exposed. It forces attackers to launch much more expensive and invasive attacks to tamper with embedded devices, and also has the potential to defeat unknown semi-invasive side-channel attacks. "Validation of clustering results is an important topic in the context of pattern recognition. We review approaches and systems in this context. In the first part of this paper we presented clustering validity checking approaches based on internal and external criteria. In the second, current part, we present a review of clustering validity approaches based on relative criteria. We also discuss the results of an experimental study based on widely known validity indices. Finally, the paper illustrates the issues that are under-addressed by recent approaches and proposes research directions in the field."
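As a concrete illustration of a relative validity criterion, the sketch below scores two candidate clusterings of the same points with a simplified within/between ratio (lower is better). This index is a stand-in for the well-known indices such a survey covers, and all names and data are illustrative assumptions.

```python
# A toy relative validity criterion: mean within-cluster spread divided
# by the minimum between-centroid distance (lower is better). This is a
# simplified stand-in for standard indices, not one from the survey.
import math

def centroid(pts):
    return tuple(sum(c) / len(pts) for c in zip(*pts))

def validity(clusters):
    cents = [centroid(c) for c in clusters]
    n = sum(len(c) for c in clusters)
    within = sum(math.dist(p, cent)
                 for c, cent in zip(clusters, cents) for p in c) / n
    between = min(math.dist(a, b)
                  for i, a in enumerate(cents) for b in cents[i + 1:])
    return within / between

data_good = [[(0, 0), (0, 1)], [(10, 10), (10, 11)]]   # tight, far apart
data_bad = [[(0, 0), (10, 10)], [(0, 1), (10, 11)]]    # clusters mixed up
print(validity(data_good) < validity(data_bad))  # True
```

A relative criterion is used exactly this way in practice: compute the index for each candidate clustering (e.g., for different numbers of clusters) and keep the one with the best score.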
This is a beautifully simple paper that I feel encompasses many ideas that keep reappearing in different guises every decade or so! The paper proposes the replication of a dictionary (basically a set of key-value pairs) to all relevant sites in a distributed system. Updates and deletes are propagated lazily through the system as sites communicate with each other, using a simple notion of a log. Queries are answered from the local copy. "We present a two-phase Web Query Optimizer (WQO). In a pre-optimization phase, the WQO selects one or more WSIs for a pre-plan; a pre-plan represents a space of query evaluation plans (plans) based on this choice of WSIs. The WQO uses cost-based heuristics to evaluate the choice of WSI assignment in the pre-plan and to choose a good pre-plan. The WQO then uses the pre-plan to drive the extended relational optimizer to obtain the best plan for that pre-plan. A prototype of the WQO has been developed. We compare the effectiveness of the WQO, i.e., its ability to efficiently search a large space of plans and obtain a low-cost plan, with that of a traditional optimizer." "To facilitate queries over semi-structured data, various structural summaries have been proposed. Structural summaries are derived directly from the data and serve as indices for evaluating path expressions on semi-structured or XML data. We introduce the D(k)-index, an adaptive structural summary for general graph-structured documents. Building on previous work, the 1-index and the A(k)-index, the D(k)-index is also based on the concept of bisimilarity. However, as a generalization of the 1-index and A(k)-index, the D(k)-index possesses the adaptive ability to adjust its structure according to the current query load." "In this article, we analyze several quorum types in order to better understand their behavior in practice. The results obtained challenge many of the assumptions behind quorum-based replication.
Our evaluation indicates that the conventional read-one/write-all-available approach is the best choice for a large range of applications requiring data replication. We believe this is an important result for anybody developing code for computing clusters, as the read-one/write-all-available strategy is much simpler to implement and more flexible than quorum-based approaches. In this article, we show that it is also the best choice under a number of other selection criteria." "In this article, we first introduce the syntax of Temporal-Probabilistic (TP) relations and then show how they can be converted to an explicit, significantly more space-consuming form, called annotated relations. We then present a theoretical annotated temporal algebra (TATA). Being explicit, TATA is convenient for specifying how the algebraic operations should behave, but it is impractical to use because annotated relations are overwhelmingly large. Next, we present a temporal probabilistic algebra (TPA). We show that our definition of the TP-algebra provides a correct implementation of TATA despite the fact that it operates on implicit, succinct TP-relations instead of overwhelmingly large annotated relations. Finally, we report on timings for an implementation of the TP-algebra built on top of ODBC." "This paper describes the current INFORMIX IDS/UD release (9.2, or Centaur) and compares and contrasts its functionality with the features of the SQL-99 language standard. INFORMIX and Illustra have been shipping DBMSs implementing the spirit of the SQL-99 standard for five years. In this paper, we review our experience working with ORDBMS technology, and argue that while SQL-99 is a huge improvement over SQL-92, substantial further work is necessary to make object-relational DBMSs truly useful. Specifically, we describe several interesting pieces of functionality unique to IDS/UD, and several dilemmas our customers have encountered that the standard does not address."
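Returning to the quorum-based replication abstract above: the property that makes quorum systems correct is that every read quorum must intersect every write quorum (r + w > n on n replicas). The following sketch checks this property exhaustively and contrasts read-one/write-all with majority quorums; the function names and the toy parameters are illustrative assumptions.

```python
# A small sketch contrasting read-one/write-all with majority quorums
# on n replicas. Correctness requires every read quorum to intersect
# every write quorum (r + w > n), so a read always sees the latest write.
# Names and parameters are illustrative assumptions.
from itertools import combinations

def quorums_intersect(n, r, w):
    """True iff every r-subset of n replicas meets every w-subset."""
    replicas = range(n)
    return all(set(a) & set(b)
               for a in combinations(replicas, r)
               for b in combinations(replicas, w))

n = 5
# Read-one/write-all: cheapest possible reads, writes touch every replica.
print(quorums_intersect(n, r=1, w=n))      # True
# Majority quorums: both reads and writes pay three of five replicas.
print(quorums_intersect(n, r=3, w=3))      # True
# r=2, w=3 fails: 2 + 3 = n, so disjoint quorums exist.
print(quorums_intersect(n, r=2, w=3))      # False
```

The first configuration is what read-one/write-all(-available) exploits: reads are trivially cheap, which is one reason the abstract finds it preferable for many replicated workloads.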
"We define the problem of content integration for E-Business, and show how it differs in fundamental ways from traditional issues surrounding data integration, application integration, data warehousing, and OLTP. Content integration includes catalog integration as a special case, but encompasses a broader set of applications and challenges. We explore the characteristics of content integration and the services required of any solution. In addition, we explore architectural alternatives and discuss the use of XML in this arena." "The archive will enable astronomers to explore the data interactively. Data access will be aided by multidimensional spatial and attribute indices. The data will be partitioned in many ways. Small tag objects consisting of the most popular attributes will accelerate frequent searches. Splitting the data among multiple servers will allow parallel, scalable I/O and parallel data analysis. Hashing techniques will allow efficient clustering and pair-wise comparison algorithms that should parallelize nicely. Randomly sampled subsets will allow debugging of otherwise large queries at the desktop. Central servers will operate a data pump to support sweep searches touching most of the data." "In this paper, we claim that such an approach is inadequate for rare classes, because of two problems: splintered false positives and error-prone small disjuncts. Motivated by the strengths of our two-phase design, we design various synthetic data models to identify and analyze the situations in which two state-of-the-art methods, RIPPER and C4.5 rules, either fail to learn a model or learn a very poor one. In all these situations, our two-phase approach learns a model with significantly better recall and precision levels. We also present a comparison of the three methods on a challenging real-life network intrusion detection dataset.
Our method is significantly better than, or comparable to, the best competitor in terms of achieving a better balance between recall and precision." "Clustering is one of the most important tasks performed in data mining applications. This paper presents an efficient SQL implementation of the EM algorithm to perform clustering in very large databases. Our version can effectively handle high-dimensional data, a large number of clusters and, more importantly, a very large number of data records. We present three strategies to implement EM in SQL: horizontal, vertical, and a hybrid one. We expect this work to be useful for data mining programmers and users who want to cluster large data sets inside a relational DBMS." "This chapter discusses the semantic Toronto publish/subscribe system. Middleware that can satisfy this requirement includes event-based architectures such as publish/subscribe systems. The pub/sub paradigm has recently gained significant interest in the database community for supporting information dissemination applications for which other models have turned out to be inadequate. In pub/sub systems, clients are autonomous components that exchange information by publishing events and by subscribing to the classes of events they are interested in. In these systems, publishers produce information, while subscribers consume it. A component usually generates a message when it wants the external world to know that a certain event has occurred." "In this paper, we extend this initial proposal, which only dealt with behavioural aspects, to cope with the question of representing data aspects as well. In this context, we show how the expressive process algebra LOTOS (and its toolbox CADP) can be used to tackle this issue." "The CONTROL project at U.C. Berkeley has developed technologies to provide online behavior for data-intensive applications. Using new query processing algorithms, these technologies continuously improve estimates and confidence statistics.
In addition, they react to user feedback, thereby giving the user control over the behavior of long-running operations. This demonstration displays the modifications to a database system and the resulting impact on aggregation queries, data visualization, and GUI widgets. We then compare this interactive behavior to batch-processing alternatives." This metadatabase system provides several functions for performing semantic associative search for images by using metadata representing the features of the images. These functions are realized using our proposed mathematical model of meaning. The mathematical model of meaning is extended to compute specific meanings of keywords, which are used for retrieving images unambiguously and dynamically. The main feature of this model is that the semantic associative search is performed in an orthogonal semantic space. This space is created for dynamically computing semantic equivalence or similarity between the metadata items of the images and the keywords. The DataLinks technology developed at IBM Almaden Research Center, and now available in DB2 UDB 5.2, introduces a new data type called DATALINK, which allows a database to reference and manage files stored external to the database. An external file is put under database control by "linking" the file to the database. Control over a file can also be removed by "unlinking" it. The technology provides transactional semantics with respect to linking or unlinking a file when a DATALINK value is stored or updated. "The volume of unstructured text and hypertext data far exceeds that of structured data. Text and hypertext are used for digital libraries, product catalogs, reviews, newsgroups, medical reports, customer service reports, and the like. Currently measured in billions of dollars, worldwide Internet activity is expected to reach a trillion dollars by 2002. Database researchers have kept a cautious distance from this action.
The goal of this tutorial is to expose database researchers to text and hypertext information retrieval (IR) and mining systems, and to discuss emerging issues in the overlapping areas of databases, hypertext, and data mining." "The number of services and deployed software agents in the most famous offspring of the Internet, the World Wide Web, is increasing exponentially. In addition, the Internet is an open environment, where information sources, communication links, and agents themselves may appear and disappear unpredictably. Thus, an effective, automated search for and selection of relevant services or agents is essential for human users and agents alike. We distinguish three general agent categories in cyberspace: service providers, service requesters, and middle agents. Service providers provide some type of service, such as finding information or performing some particular domain-specific problem solving. Requester agents need provider agents to perform some service for them." We provide an overview of query processing in parallel database systems and discuss several open issues in the optimization of queries for parallel machines. "Various types of computer systems are used behind the scenes in many parts of the telecommunications network to ensure its efficient and trouble-free operation. These systems are large, complex, and expensive real-time computer systems that are mission-critical and contain a database engine as a critical component. These systems share some common database issues with conventional applications, but they also exhibit rather unique characteristics that present challenging database issues." "MapInfo SpatialWare is an integrated spatial information management system implemented as a "Spatial Extender" to the IBM Universal Database (GA to be determined), a "Spatial DataBlade" on the Informix Dynamic Server with the Universal Data Option, or as a spatial server on Oracle.
It provides on-line spatial data services and improves critical business processes and operational applications. It enables the storage of spatial data in the RDBMS and its rapid retrieval using sophisticated R-Tree indexing. SpatialWare uses a robust data model, a fully scalable client/server architecture for the desktop (PC and UNIX), and an extended SQL-based spatial query language to provide a single source for creating custom spatial solutions." "We present in this paper a fully automatic content-based approach to organizing and indexing video data. Our methodology involves three steps: