Conference | Year | Title | DOI | Abstract | Accessible | Early | AuthorNames-Deduped | Award | Resources | ResourceLinks |
|---|---|---|---|---|---|---|---|---|---|---|
CHI | 1998 | Visualizing the Evolution of Web Ecologies | 10.1145/274644.274699 | Several visualizations have emerged which attempt to visualize all or part of the World Wide Web. Those visualizations, however, fail to present the dynamically changing ecology of users and documents on the Web. We present new techniques for Web Ecology and Evolution Visualization (WEEV). Disk Trees represent a discrete time slice of the Web ecology. A collection of Disk Trees forms a Time Tube, representing the evolution of the Web over longer periods of time. These visualizations are intended to aid authors and webmasters with the production and organization of content, assist Web surfers making sense of information, and help researchers understand the Web. | false | false | ["Ed Huai-hsin Chi", "James E. Pitkow", "Jock D. Mackinlay", "Peter Pirolli", "Rich Gossweiler", "Stuart K. Card"] | [] | [] | [] |
Vis | 1997 | A comparison of normal estimation schemes | 10.1109/VISUAL.1997.663848 | The task of reconstructing the derivative of a discrete function is essential for its shading and rendering as well as being widely used in image processing and analysis. We survey the possible methods for normal estimation in volume rendering and divide them into two classes based on the delivered numerical accuracy. The three members of the first class determine the normal in two steps by employing both interpolation and derivative filters. Among these is a new method which has never been realized. The members of the first class are all equally accurate. The second class has only one member and employs a continuous derivative filter obtained through the analytic derivation of an interpolation filter. We use the new method to analytically compare the accuracy of the first class with that of the second. As a result of our analysis we show that even inexpensive schemes can in fact be more accurate than high order methods. We describe the theoretical computational cost of applying the schemes in a volume rendering application and provide guidelines for helping one choose a scheme for estimating derivatives. In particular we find that the new method can be very inexpensive and can compete with the normal estimations which pre-shade and pre-classify the volume (M. Levoy, 1988). | false | false | ["Torsten Möller", "Raghu Machiraju", "Klaus Mueller 0001", "Roni Yagel"] | [] | [] | [] |
Vis | 1997 | A topology modifying progressive decimation algorithm | 10.1109/VISUAL.1997.663883 | Triangle decimation techniques reduce the number of triangles in a mesh, typically to improve interactive rendering performance or reduce data storage and transmission requirements. Most of these algorithms are designed to preserve the original topology of the mesh. Unfortunately, this characteristic is a strong limiting factor in overall reduction capability, since objects with a large number of holes or other topological constraints cannot be effectively reduced. The author presents an algorithm that yields a guaranteed reduction level, modifying topology as necessary to achieve the desired result. In addition, the algorithm is based on a fast local decimation technique, and its operations can be encoded for progressive storage, transmission, and reconstruction. He describes the new progressive decimation algorithm, introduces mesh splitting operations and shows how they can be encoded as a progressive mesh. He also demonstrates the utility of the algorithm on models ranging in size from 1,132 to 1.68 million triangles and reduction ratios of up to 200:1. | false | false | ["William J. Schroeder"] | [] | [] | [] |
Vis | 1997 | A visualization of music | 10.1109/VISUAL.1997.663931 | Currently, the most popular method of visualizing music is music notation. Through music notation, an experienced musician can gain an impression of how a particular piece of music sounds simply by looking at the notes on paper. However, most listeners are unfamiliar or uncomfortable with the complex nature of music notation. The goal of this project is to present an alternate method for visualizing music that makes use of color and 3D space. This paper describes one method of visualizing music in 3D space. The implementation of this method shows that music visualization is an effective technique, although it is certainly not the only possible method for accomplishing the task. Throughout the course of this project, several variations and alternative approaches were discussed. The final version of this project reflects the decisions that were made in order to present the best possible representation of music data. | false | false | ["Sean M. Smith", "Glen N. Williams"] | [] | [] | [] |
Vis | 1997 | Accelerated volume rendering using homogeneous region encoding | 10.1109/VISUAL.1997.663880 | Previous accelerated volume rendering techniques have used auxiliary hierarchical data structures to skip empty and homogeneous regions. Although some recent research has taken advantage of more efficient direct encoding techniques to skip empty regions, no work has been done to directly encode homogeneous but not empty regions. 3D distance transforms previously used to encode empty space can be extended to preprocess homogeneous regions as well, and these regions can be efficiently encoded and incorporated into volume ray-casting and back projection algorithms with a high degree of flexibility. | false | false | ["Jason Freund", "Kenneth R. Sloan"] | [] | [] | [] |
Vis | 1997 | An anti-aliasing technique for splatting | 10.1109/VISUAL.1997.663882 | Splatting is a popular direct volume rendering algorithm. However, the algorithm does not correctly render cases where the volume sampling rate is higher than the image sampling rate (e.g. more than one voxel maps into a pixel). This situation arises with orthographic projections of high-resolution volumes, as well as with perspective projections of volumes of any resolution. The result is potentially severe spatial and temporal aliasing artifacts. Some volume ray-casting algorithms avoid these artifacts by employing reconstruction kernels which vary in width as the rays diverge. Unlike ray-casting algorithms, existing splatting algorithms do not have an equivalent mechanism for avoiding these artifacts. The authors propose such a mechanism, which delivers high-quality splatted images and has the potential for a very efficient hardware implementation. | false | false | ["J. Edward Swan II", "Klaus Mueller 0001", "Torsten Möller", "Naeem Shareef", "Roger Crawfis", "Roni Yagel"] | ["BP"] | [] | [] |
Vis | 1997 | An interactive cerebral blood vessel exploration system | 10.1109/VISUAL.1997.663917 | An interactive cerebral blood vessel exploration system is described. It has been designed on the basis of neurosurgeons' requirements in order to assist them in the diagnosis of vascular pathologies. The system is based on the construction of a symbolic model of the vascular tree, with automatic identification and labelling of vessel bifurcations, aneurysms and stenoses. It provides several types of visualization: individual MRA (magnetic resonance angiography) slices, MIP (maximum intensity projection), shaded rendering, symbolic schemes and surface reconstruction. | false | false | ["Anna Puig", "Dani Tost", "Isabel Navazo"] | [] | [] | [] |
Vis | 1997 | Application-controlled demand paging for out-of-core visualization | 10.1109/VISUAL.1997.663888 | In the area of scientific visualization, input data sets are often very large. In visualization of computational fluid dynamics (CFD) in particular, input data sets today can surpass 100 Gbytes, and are expected to scale with the ability of supercomputers to generate them. Some visualization tools already partition large data sets into segments, and load appropriate segments as they are needed. However, this does not remove the problem for two reasons: 1) there are data sets for which even the individual segments are too large for the largest graphics workstations, 2) many practitioners do not have access to workstations with the memory capacity required to load even a segment, especially since the state-of-the-art visualization tools tend to be developed by researchers with much more powerful machines. When the size of the data that must be accessed is larger than the size of memory, some form of virtual memory is simply required. This may be by segmentation, paging, or by paged segments. The authors demonstrate that complete reliance on operating system virtual memory for out-of-core visualization leads to egregious performance. They then describe a paged segment system that they have implemented, and explore the principles of memory management that can be employed by the application for out-of-core visualization. They show that application control over some of these can significantly improve performance. They show that sparse traversal can be exploited by loading only those data actually required. | false | false | ["Michael Cox", "David Ellsworth"] | [] | [] | [] |
Vis | 1997 | Architectural walkthroughs using portal textures | 10.1109/VISUAL.1997.663903 | This paper outlines a method to dynamically replace portals with textures in a cell-partitioned model. The rendering complexity is reduced to the geometry of the current cell thus increasing interactive performance. A portal is a generalization of windows and doors. It connects two adjacent cells (or rooms). Each portal of the current cell that is some distance away from the viewpoint is rendered as a texture. The portal texture (smoothly) returns to geometry when the viewpoint gets close to the portal. This way all portal sequences (not too close to the viewpoint) have a depth complexity of one. The size of each texture and distance at which the transition occurs is configurable for each portal. | false | false | ["Daniel G. Aliaga", "Anselmo Lastra"] | [] | [] | [] |
Vis | 1997 | Auralization of streamline vorticity in computational fluid dynamics data | 10.1109/VISUAL.1997.663856 | Presents a new method for auralization of the vorticity of a streamline in a vector field. This technique involves using a composite tone formed by superimposing sine waves of various amplitudes whose frequency and amplitude vary in such a way as to give the perception that the resulting sound increases or decreases endlessly in pitch without ever extending beyond the listener's range of audible frequencies. Continuous clockwise or counterclockwise rotations of a streamline resulting from vorticity can then be displayed aurally as an apparently continuous increase or decrease in pitch. | false | false | ["Christopher R. Volpe", "Ephraim P. Glinert"] | [] | [] | [] |
Vis | 1997 | Brushing techniques for exploring volume datasets | 10.1109/VISUAL.1997.663914 | Describes several visualization techniques based on the notion of multi-resolution brushing to browse large 3D volume datasets. Our software is implemented using public-domain libraries, and is designed to run on average-equipped desktop computers such as a Linux machine with 32 MBytes of memory. Empirically, our system allows scientists to obtain information from a large dataset with over 8.3 million numbers in interactive time. We show that very large scientific volume datasets can be accessed and utilized without expensive hardware and software. | false | false | ["Pak Chung Wong", "R. Daniel Bergeron"] | [] | [] | [] |
Vis | 1997 | Building and traversing a surface at variable resolution | 10.1109/VISUAL.1997.663865 | The authors consider the multi-triangulation, a general model for representing surfaces at variable resolution based on triangle meshes. They analyse characteristics of the model that make it effective for supporting basic operations such as extraction of a surface approximation, and point location. An interruptible algorithm for extracting a representation at a resolution variable over the surface is presented. Different heuristics for building the model are considered and compared. Results on both the construction and the extraction algorithm are presented. | false | false | ["Leila De Floriani", "Paola Magillo", "Enrico Puppo"] | [] | [] | [] |
Vis | 1997 | Case study: efficient visualization of physical and structural properties in crash-worthiness simulations | 10.1109/VISUAL.1997.663928 | Numerical finite element simulations of the behaviour of a car body in frontal, side or rear impact collision scenarios have become increasingly complex as well as reliable and precise. They are well-established as a standard evaluation tool in the automotive development process. Both the increased complexity and the advances in computer graphics technology have resulted in the need for new visualization techniques to facilitate the analysis of the immense amount of data originating from such scientific engineering computations. Expanding the effectiveness of traditional post-processing techniques is one key to achieve shorter design cycles and faster time-to-market. In this paper, we describe how the extensive use of texture mapping and new visualization mappings like force tubing can considerably enhance the post-processing of structural and physical properties of car components in crash simulations. We show that, using these techniques, both the calculation costs and the rendering costs are reduced, and the quality of the visualization is improved. | false | false | ["Sven Kuschfeldt", "Thomas Ertl", "Michael Holzner"] | [] | [] | [] |
Vis | 1997 | Case study: visualizing customer segmentations produced by self organizing maps | 10.1109/VISUAL.1997.663922 | We describe a set of visualization programs developed for understanding segmentations of customer records produced by a self organizing map (SOM) algorithm. A SOM produces segments of similar customer records that can then be used as the basis of a marketing campaign. Since the characteristics that each segment will have in common are not specified a priori, visualization is essential to understanding the segment to design specific marketing strategies. Two different styles of visualizations were found to be useful for the two types of observers of the data. Abstract overviews of the entire segmentation were designed for analysts applying the SOM algorithm. Detailed scatterplots of individual records were designed for communicating the results to decision makers specifying marketing strategy. | false | false | ["Holly E. Rushmeier", "Richard D. Lawrence", "George S. Almási"] | [] | [] | [] |
Vis | 1997 | Case study: wildfire visualization | 10.1109/VISUAL.1997.663919 | The ability to forecast the progress of crisis events would significantly reduce human suffering and loss of life, the destruction of property and expenditures for assessment and recovery. Los Alamos National Laboratory has established a scientific thrust in crisis forecasting to address this national challenge. In the initial phase of this project, scientists at Los Alamos are developing computer models to predict the spread of a wildfire. Visualization of the results of the wildfire simulation will be used by scientists to assess the quality of the simulation and eventually by fire personnel as a visual forecast of the wildfire's evolution. The fire personnel and scientists want the visualization to look as realistic as possible without compromising scientific accuracy. This paper describes how the visualization was created, analyzes the tools and approach that were used, and suggests directions for future work and research. | false | false | ["James P. Ahrens", "Patrick S. McCormick", "James Bossert", "Jon Reisner", "Judith Winterkamp"] | [] | [] | [] |
Vis | 1997 | CAVEvis: distributed real-time visualization of time-varying scalar and vector fields using the CAVE virtual reality theater | 10.1109/VISUAL.1997.663896 | The paper discusses CAVEvis and a related set of tools for the interactive visualization and exploration of large sets of time-varying scalar and vector fields using the CAVE virtual reality environment. Since visualization of large data sets can be very time-consuming in both computation and rendering time, the task is distributed over multiple machines, each of which is specialized for some aspect of the visualization process. All modules must run asynchronously to maintain the highest level of interactivity. A model of distributed visualization is introduced that addresses important issues related to the management of time-dependent data, module synchronization, and interactivity bottlenecks. | false | false | ["Vijendra Jaswal"] | [] | [] | [] |
Vis | 1997 | Collaborative augmented reality: exploring dynamical systems | 10.1109/VISUAL.1997.663921 | We present collaborative scientific visualization in STUDIERSTUBE. STUDIERSTUBE is an augmented reality system that has several advantages over conventional desktop and other virtual reality environments, including true stereoscopy, 3D-interaction, individual viewpoints and customized views for multiple users, unhindered natural collaboration and low cost. We demonstrate the application of this concept for the interaction of multiple users and illustrate it with several visualizations of dynamical systems in DynSys3D, a visualization system running on top of AVS. | false | false | ["Anton L. Fuhrmann", "Helwig Löffelmann", "Dieter Schmalstieg"] | [] | [] | [] |
Vis | 1997 | Collaborative visualization | 10.1109/VISUAL.1997.663890 | Current visualization systems are designed around a single user model, making it awkward for large research teams to collectively analyse large data sets. The paper shows how the popular data flow approach to visualization can be extended to allow multiple users to collaborate, each running their own visualization pipeline but with the opportunity to connect in data generated by a colleague. Thus collaborative visualizations are 'programmed' in exactly the same 'plug-and-play' style as is now customary for single-user mode. The paper describes a system architecture that can act as a basis for the collaborative extension of any data flow visualization system, and the ideas are demonstrated through a particular implementation in terms of IRIS Explorer. | false | false | ["Jason D. Wood", "Helen Wright", "Ken Brodlie"] | [] | [] | [] |
Vis | 1997 | Collision detection for volumetric objects | 10.1109/VISUAL.1997.663851 | We propose a probability model for the handling of complicated interactions between volumetric objects. In our model each volume is associated with a "probability map" that assigns a "surface crossing" probability to each space point according to local volume properties. The interaction between two volumes is then described by finding the intersecting regions between the volumes, and calculating the "collision probabilities" at each intersecting point from the surface crossing probabilities. To enable fast and efficient calculations, we introduce the concept of a distance map and develop two hierarchical collision detection algorithms, taking advantage of the uniform structure of volumetric datasets. | false | false | ["Taosong He", "Arie E. Kaufman"] | [] | [] | [] |
Vis | 1997 | Computing the separating surface for segmented data | 10.1109/VISUAL.1997.663887 | An algorithm for computing a triangulated surface which separates a collection of data points that have been segmented into a number of different classes is presented. The problem generalizes the concept of an isosurface which separates data points that have been segmented into only two classes: those for which data function values are above the threshold and those which are below the threshold value. The algorithm is very simple, easy to implement and applies without limit to the number of classes. | false | false | ["Gregory M. Nielson", "Richard Franke"] | [] | [] | [] |
Vis | 1997 | Constrained 3D navigation with 2D controllers | 10.1109/VISUAL.1997.663876 | Navigation through 3D spaces is required in many interactive graphics and virtual reality applications. The authors consider the subclass of situations in which a 2D device such as a mouse controls smooth movements among viewpoints for a "through the screen" display of a 3D world. Frequently, there is a poor match between the goal of such a navigation activity, the control device, and the skills of the average user. They propose a unified mathematical framework for incorporating context-dependent constraints into the generalized viewpoint generation problem. These designer-supplied constraint modes provide a middle ground between the triviality of a single camera animation path and the confusing excess freedom of common unconstrained control paradigms. They illustrate the approach with a variety of examples, including terrain models, interior architectural spaces, and complex molecules. | false | false | ["Andrew J. Hanson", "Eric A. Wernert"] | [] | [] | [] |
Vis | 1997 | Controlled simplification of genus for polygonal models | 10.1109/VISUAL.1997.663909 | Genus-reducing simplifications are important in constructing multiresolution hierarchies for level-of-detail-based rendering, especially for datasets that have several relatively small holes, tunnels, and cavities. We present a genus-reducing simplification approach that is complementary to the existing work on genus-preserving simplifications. We propose a simplification framework in which genus-reducing and genus-preserving simplifications alternate to yield much better multiresolution hierarchies than would have been possible by using either one of them. In our approach we first identify the holes and the concavities by extending the concept of α-hulls to polygonal meshes under the L∞ distance metric and then generate valid triangulations to fill them. | false | false | ["Jihad El-Sana", "Amitabh Varshney"] | [] | [] | [] |
Vis | 1997 | Determination of unknown particle charges in a thunder cloud based upon detected electric field vectors | 10.1109/VISUAL.1997.663926 | Climatological data about thunderstorms is traditionally collected by balloons or planes traveling through the storm along straight tracks. Such data lends itself to simple 2D representations. The data described in this paper was gathered by a sail plane spiraling in an updraft within a thundercloud. The more complex organization of data samples demands more complex representation methods. This paper describes a system developed using the Visualization Toolkit (VTK) to explore such data. The data consists of several scalar values and a set of vector values associated with positional data on the measuring devices. The goal of this visualization is to explore the location of point charges suggested by the electromagnetic field vectors and determine if any correlation exists between the point charge location and standard cloud microstructure scalar measurements such as temperature. There are several problems associated with visualizing this rather unique set of data. They stem from the fact that the data is a sparse spiraling sample of scalars and vectors. The system allows the track of the plane to be displayed as a line, a tube or a ribbon; scalar values can be displayed as transparent isosurfaces; and the vector data as an arrow plot along that track, given a color that is constant, based on orientation or related to the value of a scalar. Any combination of methods can be used to display the data. A single primitive can be overloaded in many ways, or several different variables can all be displayed simultaneously. | false | false | ["Dan Drake", "Thomas Simpson", "Larry Smithmier", "Penny Rheingans"] | [] | [] | [] |
Vis | 1997 | Displaying data in multidimensional relevance space with 2D visualization maps | 10.1109/VISUAL.1997.663868 | The paper introduces a tool for visualizing a multidimensional relevance space. Abstractly, the information to be displayed consists of a large number of objects, a set of features that are likely to be of interest to the user, and some function that measures the relevance level of every object to the various features. The goal is to provide the user with a concise and comprehensible visualization of that information. For the type of applications concentrated on, the exact relevance measures of the objects are not significant. This enables accuracy to be traded for a clearer display. The idea is to "flatten" the multidimensionality of the feature space into a 2D "relevance map", capturing the inter-relations among the features, without causing too many ambiguous interpretations of the results. To better reflect the nature of the data and to resolve the ambiguity the authors refine the given set of features and introduce the notion of composed features. The layout of the map is then obtained by grading it according to a set of rules and using a simulated annealing algorithm which optimizes the layout with respect to these rules. The technique proposed has been implemented and tested, in the context of visualizing the result of a Web search, in the RMAP (Relevance Map) prototype system. | false | false | ["Jackie Assa", "Daniel Cohen-Or", "Tova Milo"] | [] | [] | [] |
Vis | 1997 | DNA visual and analytic data mining | 10.1109/VISUAL.1997.663916 | Describes data exploration techniques designed to classify DNA sequences. Several visualization and data mining techniques were used to validate and attempt to discover new methods for distinguishing coding DNA sequences (exons) from non-coding DNA sequences (introns). The goal of the data mining was to see whether some other, possibly non-linear combination of the fundamental position-dependent DNA nucleotide frequency values could be a better predictor than the AMI (average mutual information). We tried many different classification techniques including rule-based classifiers and neural networks. We also used visualization of both the original data and the results of the data mining to help verify patterns and to understand the distinction between the different types of data and classifications. In particular, the visualization helped us develop refinements to neural network classifiers, which have accuracies as high as any known method. Finally, we discuss the interactions between visualization and data mining and suggest an integrated approach. | false | false | ["Patrick Hoffman", "Georges G. Grinstein", "Kenneth A. Marx", "Ivo Grosse", "Eugene Stanley"] | [] | [] | [] |
Vis | 1997 | Dynamic color mapping of bivariate qualitative data | 10.1109/VISUAL.1997.663874 | Color is widely and reliably used to display the value of a single scalar variable. It is more rarely, and far less reliably, used to display multivariate data. Dynamic control over the parameters of the color mapping results in a more effective environment for the exploration of multivariate spatial distributions. The paper describes an empirical study comparing the effectiveness of static versus dynamic representations for the exploration of qualitative aspects of bivariate distributions. In this experiment, subjects made judgments about the correspondence of the shape, location, and magnitude of two patterns under conditions with varying amounts of random noise. Subjects made significantly more correct judgments (p<0.001) about feature shape and relative positions using the dynamic representation, on average forty-five percent more. The differences between static and dynamic representations were greater in the presence of noise. | false | false | ["Penny Rheingans"] | [] | [] | [] |
Vis | 1997 | Dynamic smooth subdivision surfaces for data visualization | 10.1109/VISUAL.1997.663905 | Recursive subdivision schemes have been extensively used in computer graphics and scientific visualization for modeling smooth surfaces of arbitrary topology. Recursive subdivision generates a visually pleasing smooth surface in the limit from an initial user-specified polygonal mesh through the repeated application of a fixed set of subdivision rules. In this paper, we present a new dynamic surface model based on the Catmull-Clark (1978) subdivision scheme, which is a very popular method to model complicated objects of arbitrary genus because of many of its nice properties. Our new dynamic surface model inherits the attractive properties of the Catmull-Clark subdivision scheme as well as that of the physics-based modeling paradigm. This new model provides a direct and intuitive means of manipulating geometric shapes, a fast, robust and hierarchical approach for recovering complex geometric shapes from range and volume data using very few degrees of freedom (control vertices). We provide an analytic formulation and introduce the physical quantities required to develop the dynamic subdivision surface model which can be interactively deformed by applying synthesized forces in real time. The governing dynamic differential equation is derived using Lagrangian mechanics and a finite element discretization. Our experiments demonstrate that this new dynamic model has a promising future in computer graphics, geometric shape design and scientific visualization. | false | false | ["Chhandomay Mandal", "Hong Qin", "Baba C. Vemuri"] | [] | [] | [] |
Vis | 1997 | Efficient subdivision of finite-element datasets into consistent tetrahedra | 10.1109/VISUAL.1997.663885 | The paper discusses the problem of subdividing unstructured mesh topologies containing hexahedra, prisms, pyramids and tetrahedra into a consistent set of only tetrahedra, while preserving the overall mesh topology. Efficient algorithms for volume rendering, iso-contouring and particle advection exist for mesh topologies comprised solely of tetrahedra. General finite-element simulations however, consist mainly of hexahedra, and possibly prisms, pyramids and tetrahedra. Arbitrary subdivision of these mesh topologies into tetrahedra can lead to discontinuous behaviour across element faces. This will show up as visible artifacts in the iso-contouring and volume rendering algorithms, and lead to impossible face adjacency graphs for many algorithms. The authors present various properties of tetrahedral subdivisions, and an algorithm SOP determining a consistent subdivision containing a minimal set of tetrahedra. | false | false | ["Guy Albertelli", "Roger Crawfis"] | [] | [] | [] |
Vis | 1997 | Extracting feature lines from 3D unstructured grids | 10.1109/VISUAL.1997.663894 | The paper discusses techniques for extracting feature lines from three-dimensional unstructured grids. The twin objectives are to facilitate the interactive manipulation of these typically very large and dense meshes, and to clarify the visualization of the solution data that accompanies them. The authors describe the perceptual importance of specific viewpoint-dependent and view-independent features, discuss the relative advantages and disadvantages of several alternative algorithms for identifying these features (taking into consideration both local and global criteria), and demonstrate the results of these methods on a variety of different data sets. | false | false | ["Kwan-Liu Ma", "Victoria Interrante"] | [] | [] | [] |
Vis | 1997 | exVis: developing a wind tunnel data visualization tool | 10.1109/VISUAL.1997.663911 | Software has been developed to apply visualization techniques to aeronautics data collected during wind tunnel experiments. Interaction between the software developers and the aeroscientists has been crucial in making the software. The interaction has also been important in building the scientists' confidence in the use of interactive, computer-mediated analysis tools. | false | false | ["Samuel P. Uselton"] | [] | [] | [] |
Vis | 1,997 | Fast oriented line integral convolution for vector field visualization via the Internet | 10.1109/VISUAL.1997.663897 | Oriented line integral convolution (OLIC) illustrates flow fields by convolving a sparse texture with an anisotropic convolution kernel. The kernel is aligned to the underlying flow of the vector field. OLIC shows not only the direction of the flow but also its orientation. The paper presents fast rendering of oriented line integral convolution (FROLIC), which is approximately two orders of magnitude faster than OLIC. Costly convolution operations as done in OLIC are replaced in FROLIC by approximating a streamlet through a set of disks with varying intensity. The issue of overlapping streamlets is discussed. Two efficient animation techniques for animating FROLIC images are described. FROLIC has been implemented as a Java applet. This allows researchers from various disciplines (typically with inhomogeneous hardware environments) to conveniently explore and investigate analytically defined 2D vector fields. | false | false | [
"Rainer Wegenkittl",
"M. Eduard Gröller"
] | [] | [] | [] |
Vis | 1,997 | GADGET: goal-oriented application design guidance for modular visualization environments | 10.1109/VISUAL.1997.663889 | Modular visualization environments (MVEs) have recently been regarded as the de facto standard for scientific data visualization, mainly due to adoption of the visual programming style, reusability, and extendability. However, since scientists and engineers, the principal users of MVEs, are not always familiar with how to map numerical data to proper graphical primitives, the set of built-in modules is not fully used to construct necessary application networks. Therefore, a certain mechanism needs to be incorporated into MVEs, which makes use of heuristics and expertise of visualization specialists (visineers), and which supports the user in designing his/her applications with MVEs. Wehrend's goal-oriented taxonomy of visualization techniques is adopted as the basic philosophy to develop a system, called GADGET, for application design guidance for MVEs. The GADGET system interactively helps the user design appropriate applications according to the specific visualization goals, temporal efficiency versus accuracy requirements, and such properties as dimension and mesh type of a given target dataset. Also the GADGET system is capable of assisting the user in customizing a prototype modular network for his/her desired applications by showing execution examples involving datasets of the same type. The paper provides an overview of the GADGET guidance mechanism and system architecture, with an emphasis on its knowledge base design. Sample data visualization problems are used to demonstrate the usefulness of the GADGET system. | false | false | [
"Issei Fujishiro",
"Yuriko Takeshima",
"Yoshihiko Ichikawa",
"Kyoko Nakamura"
] | [] | [] | [] |
Vis | 1,997 | Haar wavelets over triangular domains with applications to multiresolution models for flow over a sphere | 10.1109/VISUAL.1997.663871 | Some new piecewise constant wavelets defined over nested triangulated domains are presented and applied to the problem of multiresolution analysis of flow over a spherical domain. These new, nearly orthogonal wavelets have advantages over the existing weaker biorthogonal wavelets. In the planar case of uniform areas, the wavelets converge to one of two fully orthogonal Haar wavelets. These new, fully orthogonal wavelets are proven to be the only possible wavelets of this type. | false | false | [
"Gregory M. Nielson",
"Il-Hong Jung",
"Junwon Sung"
] | [] | [] | [] |
Vis | 1,997 | I/O optimal isosurface extraction | 10.1109/VISUAL.1997.663895 | The authors give I/O-optimal techniques for the extraction of isosurfaces from volumetric data, by a novel application of the I/O-optimal interval tree of Arge and Vitter (1996). The main idea is to preprocess the data set once and for all to build an efficient search structure in disk, and then each time one wants to extract an isosurface, they perform an output-sensitive query on the search structure to retrieve only those active cells that are intersected by the isosurface. During the query operation, only two blocks of main memory space are needed, and only those active cells are brought into the main memory, plus some negligible overhead of disk accesses. This implies that one can efficiently visualize very large data sets on workstations with just enough main memory to hold the isosurfaces themselves. The implementation is delicate but not complicated. They give the first implementation of the I/O-optimal interval tree, and also implement their methods as an I/O filter for Vtk's isosurface extraction for the case of unstructured grids. They show that, in practice, the algorithms improve the performance of isosurface extraction by speeding up the active-cell searching process so that it is no longer a bottleneck. Moreover, this search time is independent of the main memory available. The practical efficiency of the techniques reflects their theoretical optimality. | false | false | [
"Yi-Jen Chiang",
"Cláudio T. Silva"
] | [] | [] | [] |
Vis | 1,997 | Image synthesis from a sparse set of views | 10.1109/VISUAL.1997.663892 | The authors present an image synthesis methodology and a system built around it. Given a sparse set of photographs taken from unknown viewpoints, the system generates images from new, different viewpoints with correct perspective, and handles occlusion. It achieves this without requiring any knowledge about the 3D structure of the scene nor the intrinsic camera parameters. The photo-realistic rendering process is polygon based and can be potentially implemented as real time texture mapping. The system is robust to noise by taking advantage of duplicate information from multiple views. They present results on several example scenes. | false | false | [
"Qian Chen 0022",
"Gérard G. Medioni"
] | [] | [] | [] |
Vis | 1,997 | Information Exploration Shootout Project And Benchmark Data Sets: Evaluating How Visualization Does In Analyzing Real-World Data Analysis Problems | 10.1109/VISUAL.1997.663933 | The convergence of database systems, knowledge discovery, and visualization will have a fundamental impact on our capability to mine information and knowledge from very large databases. An enhanced data exploration capability has immediate applications to business and commerce, decision systems, information management, the intelligence community, and communication in the form of both on-line services and the World Wide Web. It holds the promise of major cost savings and increased revenues. Data mining now draws from fields including databases, statistics, information technology, data visualization, and artificial intelligence, especially knowledge-based systems. However, there is a clear sense that, to achieve the next increase in knowledge exploitation, these data exploration approaches must work together. | false | false | [
"Georges G. Grinstein",
"Sharon J. Laskowski",
"Graham J. Wills",
"Bernice E. Rogowitz"
] | [] | [] | [] |
Vis | 1,997 | Instructional software for visualizing optical phenomena | 10.1109/VISUAL.1997.663918 | We describe a multidisciplinary effort for creating interactive 3D graphical modules for visualizing optical phenomena. These modules are designed for use in an upper-level undergraduate course. The modules are developed in Open Inventor, which allows them to run under both Unix and Windows. The work is significant in that it applies contemporary interactive 3D visualization techniques to instructional courseware, which represents a considerable advance compared to the current state of the practice. | false | false | [
"David C. Banks",
"John T. Foley",
"Kiril Vidimce",
"Ming-Hoe Kiu"
] | [] | [] | [] |
Vis | 1,997 | Integrated volume compression and visualization | 10.1109/VISUAL.1997.663900 | Volumetric data sets require enormous storage capacity even at moderate resolution levels. The excessive storage demands not only stress the capacity of the underlying storage and communications systems, but also seriously limit the speed of volume rendering due to data movement and manipulation. A novel volumetric data visualization scheme is proposed and implemented in this work that renders 2D images directly from compressed 3D data sets. The novelty of this algorithm is that rendering is performed on the compressed representation of the volumetric data without pre-decompression. As a result, the overheads associated with both data movement and rendering processing are significantly reduced. The proposed algorithm generalizes previously proposed whole-volume frequency-domain rendering schemes by first dividing the 3D data set into subcubes, transforming each subcube to a frequency-domain representation, and applying the Fourier projection theorem to produce the projected 2D images according to given viewing angles. Compared to the whole-volume approach, the subcube-based scheme not only achieves higher compression efficiency by exploiting local coherency, but also improves the quality of resultant rendering images because it approximates the occlusion effect on a subcube by subcube basis. | false | false | [
"Tzi-cker Chiueh",
"Chuan-Kai Yang",
"Taosong He",
"Hanspeter Pfister",
"Arie E. Kaufman"
] | [] | [] | [] |
Vis | 1,997 | Interactive visualization of aircraft and power generation engines | 10.1109/VISUAL.1997.663927 | Presents a system for interactively visualizing large polygonal environments such as those produced by CAD systems during the design of aircraft and power generation engines. Our method combines view frustum culling with level-of-detail modeling to create a visualization system that supports part motion and has the ability to view arbitrary sets of data. To avoid long system start-up delays due to data loading, we have implemented our system using a dynamic loading strategy. This also allows us to interactively visualize more data than could fit in memory at one time. | false | false | [
"Lisa Sobierajski Avila",
"William J. Schroeder"
] | [] | [] | [] |
Vis | 1,997 | Interactive volume rendering for virtual colonoscopy | 10.1109/VISUAL.1997.663915 | 3D virtual colonoscopy has recently been proposed as a non-invasive alternative procedure for the visualization of the human colon. Surface rendering is sufficient for implementing such a procedure to obtain an overview of the interior surface of the colon at interactive rendering speeds. Unfortunately, physicians can not use it to explore tissues beneath the surface to differentiate between benign and malignant structures. In this paper, we present a direct volume rendering approach based on perspective ray casting, as a supplement to the surface navigation. To accelerate the rendering speed, surface-assistant techniques are used to adapt the resampling rates by skipping the empty space inside the colon. In addition, a parallel version of the algorithm has been implemented on a shared-memory multiprocessing architecture. Experiments have been conducted on both simulation and patient data sets. | false | false | [
"Suya You",
"Lichan Hong",
"Ming Wan",
"Kittiboon Junyaprasert",
"Arie E. Kaufman",
"Shigeru Muraki",
"Yong Zhou 0001",
"Mark Wax",
"Zhengrong Liang"
] | [] | [] | [] |
Vis | 1,997 | Interval volume tetrahedrization | 10.1109/VISUAL.1997.663886 | The interval volume is a generalization of the isosurface commonly associated with the marching cubes algorithm. Based upon samples at the locations of a 3D rectilinear grid, the algorithm produces a triangular approximation to the surface defined by F(x,y,z)=c. The interval volume is defined by α ≤ F(x,y,z) ≤ β. The authors describe an algorithm for computing a tetrahedrization of a polyhedral approximation to the interval volume. | false | false | [
"Gregory M. Nielson",
"Junwon Sung"
] | [] | [] | [] |
Vis | 1,997 | Isosurface extraction using particle systems | 10.1109/VISUAL.1997.663930 | Presents a new approach to isosurface extraction from volume data using particle systems. Particle behavior is dynamic and can be based on laws of physics or artificial rules. For isosurface extraction, we program particles to be attracted towards a specific surface value while simultaneously repelling adjacent particles. The repulsive forces are based on the curvature of the surface at that location. A birth-death process results in a denser concentration of particles in areas of high curvature and sparser populations in areas of lower curvature. The overall level of detail is controlled through a scaling factor that increases or decreases the repulsive forces of the particles. Once particles reach equilibrium, their locations are used as vertices in generating a triangular mesh of the surface. The advantages of our approach include: vertex densities are based on surface features rather than on the sampling rate of the volume; a single scaling factor simplifies level-of-detail control; and meshing is efficient because it uses neighbor information that has already been generated during the force calculations. | false | false | [
"Patricia Crossno",
"Edward Angel"
] | [] | [] | [] |
Vis | 1,997 | Multiresolution compression and reconstruction | 10.1109/VISUAL.1997.663901 | The paper presents a framework for multiresolution compression and geometric reconstruction of arbitrarily dimensioned data designed for distributed applications. Although restricted to uniformly sampled data, the versatile approach enables the handling of a large variety of real world elements. Examples include nonparametric, parametric and implicit lines, surfaces or volumes, all of which are common to large scale data sets. The framework is based on two fundamental steps: compression is carried out by a remote server and generates a bit-stream transmitted over the underlying network. Geometric reconstruction is performed by the local client and renders a piecewise linear approximation of the data. More precisely, the compression scheme consists of a newly developed pipeline starting from an initial B-spline wavelet precoding. The fundamental properties of wavelets allow progressive transmission and interactive control of the compression gain by means of global and local oracles. In particular the authors discuss the problem of oracles in semiorthogonal settings and propose sophisticated oracles to remove unimportant coefficients. In addition, geometric constraints such as boundary lines can be compressed in a lossless manner and are incorporated into the resulting bit-stream. The reconstruction pipeline performs a piecewise adaptive linear approximation of data using a fast and easy to use point removal strategy which works with any subsequent triangulation technique. | false | false | [
"Oliver G. Staadt",
"Markus H. Gross",
"Roger Weber"
] | [] | [] | [] |
Vis | 1,997 | Multiresolution tetrahedral framework for visualizing regular volume data | 10.1109/VISUAL.1997.663869 | The authors present a multiresolution framework, called Multi-Tetra framework, that approximates volume data with different levels-of-detail tetrahedra. The framework is generated through a recursive subdivision of the volume data and is represented by binary trees. Instead of using a certain level of the Multi-Tetra framework for approximation, an error-based model (EBM) is generated by recursively fusing a sequence of tetrahedra from different levels of the Multi-Tetra framework. The EBM significantly reduces the number of voxels required to model an object, while preserving the original topology. The approach provides continuous distribution of rendered intensity or generated isosurfaces along boundaries of different levels-of-detail thus solving the crack problem. The model supports typical rendering approaches, such as marching cubes, direct volume projection, and splatting. Experimental results demonstrate the strengths of the approach. | false | false | [
"Yong Zhou 0001",
"Baoquan Chen",
"Arie E. Kaufman"
] | [] | [] | [] |
Vis | 1,997 | Multivariate visualization using metric scaling | 10.1109/VISUAL.1997.663866 | The authors present an efficient visualization approach to support multivariate data exploration through a simple but effective low dimensional data overview based on metric scaling. A multivariate dataset is first transformed into a set of dissimilarities between all pairs of data records. A graph configuration algorithm based on principal components is then used to determine the display coordinates of the data records in the low dimensional data overview. This overview provides a graphical summary of the multivariate data with reduced data dimensions, reduced data size, and additional data semantics. It can be used to enhance multidimensional data brushing, or to arrange the layout of other conventional multivariate visualization techniques. Real life data is used to demonstrate the approach. | false | false | [
"Pak Chung Wong",
"R. Daniel Bergeron"
] | [] | [] | [] |
Vis | 1,997 | Optimized geometry compression for real-time rendering | 10.1109/VISUAL.1997.663902 | Most existing visualization applications use 3D geometry as their basic rendering primitive. As users demand more complex data sets, the memory requirements for retrieving and storing large 3D models are becoming excessive. In addition, the current 3D rendering hardware is facing a large memory bus bandwidth bottleneck at the processor to graphics pipeline interface. Rendering 1 million triangles with 24 bytes per triangle at 30 Hz requires as much as 720 MB/sec memory bus bandwidth. This transfer rate is well beyond the current low-cost graphics systems. A solution is to compress the static 3D geometry as an off-line pre-process. Then, only the compressed geometry needs to be stored in main memory and sent down to the graphics pipeline for real-time decompression and rendering. The author presents several new techniques for compression of 3D geometry that produce 2 to 3 times better compression ratios than existing methods. They first introduce several algorithms for the efficient encoding of the original geometry as generalized triangle meshes. This encoding allows most of the mesh vertices to be reused when forming new triangles. Their second contribution allows various parts of a geometric model to be compressed with different precision depending on the level of details present. Together, the meshifying algorithms and the variable compression method achieve compression ratios of 30 and 37 to one over ASCII encoded formats and 10 and 15 to one over binary encoded triangle strips. The experimental results show a dramatically lowered memory bandwidth required for real-time visualization of complex data sets. | false | false | [
"Mike M. Chow"
] | [] | [] | [] |
Vis | 1,997 | Pearls found on the way to the ideal interface for scanned probe microscopes | 10.1109/VISUAL.1997.663923 | Since 1991, our team of computer scientists, chemists and physicists have worked together to develop an advanced, virtual-environment interface to scanned-probe microscopes. The interface has provided insights and useful capabilities well beyond those of the traditional interface. This paper lists the particular visualization and control techniques that have enabled actual scientific discovery, including specific examples of insight gained using each technique. This information can help scientists determine which features are likely to be useful in their particular application, and which would be just sugar coating. It can also guide computer scientists to suggest the appropriate type of interface to help solve a particular problem. We have found benefit in advanced rendering with natural viewpoint control (but not always), from semi-automatic control techniques, from force feedback during manipulation, and from storing/replaying data for an entire experiment. These benefits come when the system is well-integrated into the existing tool and allows export of the data to standard visualization packages. | false | false | [
"Russell M. Taylor II",
"Jun Chen",
"Shoji Okimoto",
"Noel Llopis-Artime",
"Vernon L. Chi",
"Frederick P. Brooks Jr.",
"Michael R. Falvo",
"Scott Paulson",
"Pichet Thiansathaporn",
"David Glick",
"Sean Washburn",
"Richard Superfine"
] | [] | [] | [] |
Vis | 1,997 | Perceptual Measures For Effective Visualizations | 10.1109/VISUAL.1997.663934 | How do we measure the effectiveness of visualizations? Clearly any metric has to be based on perceptual models, since we are measuring how a display is perceived and interpreted by a human being. Can we build useful metrics to evaluate the value of image content? Can we build metrics for user interaction that can feed back into our visualization systems to improve their effectiveness? Is it impossible to have real metrics for visualization? Are rules of thumb all we have? Can better rules be developed for effectiveness? We will consider imagery used both for photorealistic visualization and scientific visualization. Metrics for static images and dynamic displays will be discussed. The goal of this panel is to promote discussion of research or development that is needed for improving the measurement of visualization effectiveness. We also hope to promote debate on whether general measurements are possible, or whether all visualization is case specific. | false | false | [
"Holly E. Rushmeier",
"Harrison H. Barrett",
"Penny Rheingans",
"Samuel P. Uselton",
"Andrew Watson"
] | [] | [] | [] |
Vis | 1,997 | Principal stream surfaces | 10.1109/VISUAL.1997.663859 | The use of stream surfaces and streamlines is well established in vector visualization. However, the proper placement of starting points is critical for these constructs to clearly illustrate the flow topology. In this paper, we present the principal stream surface algorithm, which automatically generates stream surfaces that properly depict the topology of an irrotational flow. For each velocity point in the fluid field, we construct the normal to the principal stream surface through the point. The set of all such normal vectors is used to construct the principal stream function, which is a scalar field describing the direction of velocity in the fluid field. Volume rendering can then be used to visualize the principal stream function, which is directly related to the flow topology. Thus, topology in a fluid field can be easily modeled and rendered. | false | false | [
"Wenli Cai",
"Pheng-Ann Heng"
] | [] | [] | [] |
Vis | 1,997 | Repairing CAD models | 10.1109/VISUAL.1997.663904 | We describe an algorithm for repairing polyhedral CAD models that have errors in their B-REP. Errors like cracks, degeneracies, duplication, holes and overlaps are usually introduced in solid models due to imprecise arithmetic, model transformations, designer errors, programming bugs, etc. Such errors often hamper further processing such as finite element analysis, radiosity computation and rapid prototyping. Our fault-repair algorithm converts an unordered collection of polygons to a shared-vertex representation to help eliminate errors. This is done by choosing, for each polygon edge, the most appropriate edge to unify it with. The two edges are then geometrically merged into one, by moving vertices. At the end of this process, each polygon edge is either coincident with another or is a boundary edge for a polygonal hole or a dangling wall and may be appropriately repaired. Finally, in order to allow user-inspection of the automatic corrections, we produce a visualization of the repair and let the user mark the corrections that conflict with the original design intent. A second iteration of the correction algorithm then produces a repair that is commensurate with the intent. Thus, by involving the users in a feedback loop, we are able to refine the correction to their satisfaction. | false | false | [
"Gill Barequet",
"Subodh Kumar 0001"
] | [] | [] | [] |
Vis | 1,997 | ROAMing terrain: Real-time Optimally Adapting Meshes | 10.1109/VISUAL.1997.663860 | Terrain visualization is a difficult problem for applications requiring accurate images of large datasets at high frame rates, such as flight simulation and ground-based aircraft testing using synthetic sensor simulation. On current graphics hardware, the problem is to maintain dynamic, view-dependent triangle meshes and texture maps that produce good images at the required frame rate. We present an algorithm for constructing triangle meshes that optimizes flexible view-dependent error metrics, produces guaranteed error bounds, achieves specified triangle counts directly and uses frame-to-frame coherence to operate at high frame rates for thousands of triangles per frame. Our method, dubbed Real-time Optimally Adapting Meshes (ROAM), uses two priority queues to drive split and merge operations that maintain continuous triangulations built from pre-processed bintree triangles. We introduce two additional performance optimizations: incremental triangle stripping and priority-computation deferral lists. ROAM's execution time is proportional to the number of triangle changes per frame, which is typically a few percent of the output mesh size; hence ROAM's performance is insensitive to the resolution and extent of the input terrain. Dynamic terrain and simple vertex morphing are supported. | false | false | [
"Mark A. Duchaineau",
"Murray Wolinsky",
"David E. Sigeti",
"Mark C. Miller",
"Charles Aldrich",
"Mark B. Mineev-Weinstein"
] | [] | [] | [] |
Vis | 1,997 | Simplifying polygonal models using successive mappings | 10.1109/VISUAL.1997.663908 | We present the use of mapping functions to automatically generate levels of detail with known error bounds for polygonal models. We develop a piece-wise linear mapping function for each simplification operation and use this function to measure deviation of the new surface from both the previous level of detail and from the original surface. In addition, we use the mapping function to compute appropriate texture coordinates if the original map has texture coordinates at its vertices. Our overall algorithm uses edge collapse operations. We present rigorous procedures for the generation of local planar projections as well as for the selection of a new vertex position for the edge collapse operation. As compared to earlier methods, our algorithm is able to compute tight error bounds on surface deviation and produce an entire continuum of levels of detail with mappings between them. We demonstrate the effectiveness of our algorithm on several models: a Ford Bronco consisting of over 300 parts and 70,000 triangles, a textured lion model consisting of 49 parts and 86,000 triangles, and a textured, wrinkled torus consisting of 79,000 triangles. | false | false | [
"Jonathan D. Cohen 0001",
"Dinesh Manocha",
"Marc Olano"
] | [] | [] | [] |
Vis | 1,997 | Singularities in nonuniform tensor fields | 10.1109/VISUAL.1997.663857 | Studies the topology of 2nd-order symmetric tensor fields. Degenerate points are basic constituents of tensor fields. From the set of degenerate points, an experienced researcher can reconstruct a whole tensor field. We address the conditions for the existence of degenerate points and, based on these conditions, we predict the distribution of degenerate points inside the field. Every tensor can be decomposed into a deviator and an isotropic tensor. A deviator determines the properties of a tensor field, while the isotropic part provides a uniform bias. Deviators can be 3D or locally 2D. The triple-degenerate points of a tensor field are associated with the singular points of its deviator and the double-degenerate points of a tensor field have singular local 2D deviators. This provides insights into the similarity of topological structure between 1st-order (or vectors) and 2nd-order tensors. Control functions are in charge of the occurrences of a singularity of a deviator. These singularities can further be linked to important physical properties of the underlying physical phenomena. For a deformation tensor in a stationary flow, the singularities of its deviator actually represent the area of the vortex core in the field; for a stress tensor, the singularities represent the area with no stress; for a Newtonian flow, compressible flow and incompressible flow as well as stress and deformation tensors share similar topological features due to the similarity of their deviators; for a viscous flow, removing the large, isotropic pressure contribution dramatically enhances the anisotropy due to viscosity. | false | false | [
"Yingmei Lavin",
"Yuval Levy",
"Lambertus Hesselink"
] | [] | [] | [] |
Vis | 1,997 | Smooth hierarchical surface triangulations | 10.1109/VISUAL.1997.663906 | Presents a new method to produce a hierarchical set of triangle meshes that can be used to blend different levels of detail in a smooth fashion. The algorithm produces a sequence of meshes ℳ₀, ℳ₁, ℳ₂, …, ℳₙ, where each mesh ℳᵢ can be transformed to mesh ℳᵢ₊₁ through a set of triangle-collapse operations. For each triangle, a function is generated that approximates the underlying surface in the area of the triangle, and this function serves as a basis for assigning a weight to the triangle in the ordering operation, and for supplying the point to which the triangles are collapsed. This technique allows us to view a triangulated surface model at varying levels of detail while insuring that the simplified mesh approximates the original surface well. | false | false | [
"Tran S. Gieng",
"Bernd Hamann",
"Kenneth I. Joy",
"Gregory L. Schussman",
"Issac J. Trotts"
] | [] | [] | [] |
Vis | 1,997 | Strategies for effectively visualizing 3D flow with volume LIC | 10.1109/VISUAL.1997.663912 | This paper discusses strategies for effectively portraying 3D flow using volume line integral convolution. Issues include defining an appropriate input texture, clarifying the distinct identities and relative depths of the advected texture elements, and selectively highlighting regions of interest in both the input and output volumes. Apart from offering insights into the greater potential of 3D LIC as a method for effectively representing flow in a volume, a principal contribution of this work is the suggestion of a technique for generating and rendering 3D visibility-impeding "halos" that can help to intuitively indicate the presence of depth discontinuities between contiguous elements in a projection and thereby clarify the 3D spatial organization of elements in the flow. The proposed techniques are applied to the visualization of a hot, supersonic, laminar jet exiting into a colder, subsonic coflow. | false | false | [
"Victoria Interrante",
"Chester Grosch"
] | [] | [] | [] |
Vis | 1,997 | Terascale Visualization: Approaches, Pitfalls And Issues | 10.1109/VISUAL.1997.663932 | Author(s): Cox, M. B.; Crawfis, Roger; Hamann, Bernd; Hanson, C.; Miller, Mark | Editor(s): Yagel, R.; Hagen, Hans | Abstract: Massively parallel supercomputers are once again quickly outpacing our ability to organize, manage and understand the prodigious amounts of data they generate. Graphics technology and algorithms have greatly aided in analyzing the modest datasets of years past, but rarely with enough interactivity to squelch the end-user's exploratory questions. Will computer graphics and scientific visualization, or even computational science, proceed as status quo, or are new paradigm shifts needed? What is the architecture of tomorrow's high-end visualization systems? How much data can we even expect to pull off of these massively parallel machines? What are the new computer graphics technologies that can aid in terascale visualization? This panel, leveraging the panelists' past experience and their current knowledge of the field, will provide visions (or dilemmas) for what the next stage or stages of scientific visualization and data management will look like. | false | false | [
"Carol L. Hunter",
"Roger Crawfis",
"Michael Cox",
"Bernd Hamann",
"Charles D. Hansen",
"Mark C. Miller"
] | [] | [] | [] |
Vis | 1,997 | The contour spectrum | 10.1109/VISUAL.1997.663875 | The authors introduce the contour spectrum, a user interface component that improves qualitative user interaction and provides real-time exact quantification in the visualization of isocontours. The contour spectrum is a signature consisting of a variety of scalar data and contour attributes, computed over the range of scalar values ω ∈ R. They explore the use of surface area, volume, and gradient integral of the contour that are shown to be univariate B-spline functions of the scalar value ω for multi-dimensional unstructured triangular grids. These quantitative properties are calculated in real-time and presented to the user as a collection of signature graphs (plots of functions of ω) to assist in selecting relevant isovalues ω₀ for informative visualization. For time-varying data, these quantitative properties can also be computed over time, and displayed using a 2D interface, giving the user an overview of the time-varying function, and allowing interaction in both isovalue and time step. The effectiveness of the current system and potential extensions are discussed. | false | false | [
"Chandrajit L. Bajaj",
"Valerio Pascucci",
"Daniel Schikore"
] | [] | [] | [] |
Vis | 1,997 | The motion map: efficient computation of steady flow animations | 10.1109/VISUAL.1997.663899 | The paper presents a new approach for animating 2D steady flow fields. It is based on an original data structure called the motion map. The motion map contains not only a dense representation of the flow field but also all the motion information required to animate the flow. An important feature of this method is that it allows, in a natural way, cyclical variable-speed animations. As far as efficiency is concerned, the advantage of this method is that computing the motion map does not take more time than computing a single still image of the flow and the motion map has to be computed only once. Another advantage is that the memory requirements for a cyclical animation of an arbitrary number of frames amounts to the memory cost of a single still image. | false | false | [
"Bruno Jobard",
"Wilfrid Lefer"
] | [] | [] | [] |
Vis | 1,997 | The multilevel finite element method for adaptive mesh optimization and visualization of volume data | 10.1109/VISUAL.1997.663907 | Multilevel representations and mesh reduction techniques have been used for accelerating the processing and the rendering of large datasets representing scalar- or vector-valued functions defined on complex 2D or 3D meshes. We present a method based on finite element approximations which combines these two approaches in a new and unique way that is conceptually simple and theoretically sound. The main idea is to consider mesh reduction as an approximation problem in appropriate finite element spaces. Starting with a very coarse triangulation of the functional domain, a hierarchy of highly non-uniform tetrahedral (or triangular in 2D) meshes is generated adaptively by local refinement. This process is driven by controlling the local error of the piecewise linear finite element approximation of the function on each mesh element. A reliable and efficient computation of the global approximation error and a multilevel preconditioned conjugate gradient solver are the key components of the implementation. In order to analyze the properties and advantages of the adaptively generated tetrahedral meshes, we implemented two volume visualization algorithms: an iso-surface extractor and a ray-caster. Both algorithms, while conceptually simple, show significant speedups over conventional methods delivering comparable rendering quality from adaptively compressed datasets. | false | false | [
"Roberto Grosso",
"Christoph Lürig",
"Thomas Ertl"
] | [] | [] | [] |
Vis | 1,997 | The VSBUFFER: visibility ordering of unstructured volume primitives by polygon drawing | 10.1109/VISUAL.1997.663853 | Different techniques have been proposed for rendering volumetric scalar data sets. Usually these approaches focus on orthogonal cartesian grids, but in recent years research has also concentrated on arbitrary structured or even unstructured topologies. In particular, direct volume rendering of these data types is numerically complex and mostly requires sorting the whole database. We present a new approach to direct rendering of convex, voluminous polyhedra on arbitrary grid topologies, which efficiently uses hardware-assisted polygon drawing to support the sorting procedure. The key idea of this technique lies in a two-pass rendering approach. First, the volume primitives are drawn in polygon mode to obtain their cross sections in the VSBUFFER orthogonal to the viewing plane. Second, this buffer is traversed in front-to-back order and the volume integration is performed. Thus, the complexity of the sorting procedure is reduced. Furthermore, any connectivity information can be completely neglected, which allows for the rendering of arbitrarily scattered, convex polyhedra. | false | false | [
"Rüdiger Westermann",
"Thomas Ertl"
] | [] | [] | [] |
Vis | 1,997 | Towards efficient visualization support for single-block and multi-block datasets | 10.1109/VISUAL.1997.663913 | Large simulation grids and multi-grid configurations impose many constraints on commercial visualization software. When available RAM is limited and graphics primitives are numbered in millions, alternative techniques for data access and processing are necessary. In this case study, we present our contributions to a visualization environment based on the AVS/Express software. We demonstrate how the efficient visualization of large datasets relies upon several forms of resource sharing, and alternate and efficient data access techniques. | false | false | [
"Jean-Marie Favre"
] | [] | [] | [] |
Vis | 1,997 | Two-phase perspective ray casting for interactive volume navigation | 10.1109/VISUAL.1997.663878 | Volume navigation is the interactive exploration of volume data sets by "flying" the view point through the data, producing a volume rendered view at each frame. The authors present an inexpensive perspective volume navigation method designed to run on a PC platform with accelerated 3D graphics hardware. They compute perspective projections at each frame, allow trilinear interpolation of sample points, and render both gray scale and RGB volumes by volumetric compositing. The implementation handles arbitrarily large volumes, by dynamically swapping data within the local depth-limited frustum into main memory as the viewpoint moves through the volume. They describe a new ray casting algorithm that takes advantage of the coherence inherent in adjacent frames to generate a sequence of approximate animated frames much faster than they could be computed individually. They also take advantage of the 3D graphics acceleration hardware to offload much of the alpha blending and resampling from the CPU. | false | false | [
"Martin L. Brady",
"Kenneth K. Jung",
"H. T. Nguyen",
"Thinh P. Q. Nguyen"
] | [] | [] | [] |
Vis | 1,997 | UFLIC: a line integral convolution algorithm for visualizing unsteady flows | 10.1109/VISUAL.1997.663898 | The paper presents an algorithm, UFLIC (Unsteady Flow LIC), to visualize vector data in unsteady flow fields. Using line integral convolution (LIC) as the underlying method, a new convolution algorithm is proposed that can effectively trace the flow's global features over time. The new algorithm consists of a time-accurate value depositing scheme and a successive feedforward method. The value depositing scheme accurately models the flow advection, and the successive feedforward method maintains the coherence between animation frames. The new algorithm can produce time-accurate, highly coherent flow animations to highlight global features in unsteady flow fields. CFD scientists, for the first time, are able to visualize unsteady surface flows using the algorithm. | false | false | [
"Han-Wei Shen",
"David L. Kao"
] | [] | [] | [] |
Vis | 1,997 | Viewing IGES files through VRML | 10.1109/VISUAL.1997.663924 | This paper describes our experiences with using the Virtual Reality Modeling Language (VRML) to view files in the Initial Graphics Exchange Specification (IGES) format using a Java-based translator from IGES to VRML and HTML (Hypertext Markup Language). The paper examines the conversion problems between IGES and VRML and presents some results of the process. | false | false | [
"Jed Marti"
] | [] | [] | [] |
Vis | 1,997 | Virtualized reality: constructing time-varying virtual worlds from real world events | 10.1109/VISUAL.1997.663893 | Virtualized reality is a modeling technique that constructs full 3D virtual representations of dynamic events from multiple video streams. Image-based stereo is used to compute a range image corresponding to each intensity image in each video stream. Each range and intensity image pair encodes the scene structure and appearance of the scene visible to the camera at that moment, and is therefore called a visible surface model (VSM). A single time instant of the dynamic event can be modeled as a collection of VSMs from different viewpoints, and the full event can be modeled as a sequence of static scenes: the 3D equivalent of video. Alternatively, the collection of VSMs at a single time can be fused into a global 3D surface model, thus creating a traditional virtual representation out of real world events. Global modeling has the added benefit of eliminating the need to hand-edit the range images to correct errors made in stereo, a drawback of previous techniques. Like image-based rendering models, these virtual representations can be used to synthesize nearly any view of the virtualized event. For this reason, the paper includes a detailed comparison of existing view synthesis techniques with the authors' own approach. In the virtualized representations, however, scene structure is explicitly represented and therefore easily manipulated, for example by adding virtual objects to (or removing virtualized objects from) the model without interfering with the real event. Virtualized reality, then, is a platform not only for image-based rendering but also for 3D scene manipulation. | false | false | [
"Peter Rander",
"P. J. Narayanan",
"Takeo Kanade"
] | [] | [] | [] |
Vis | 1,997 | Visualization of geometric algorithms in an electronic classroom | 10.1109/VISUAL.1997.663920 | This paper investigates the visualization and animation of geometric computing in a distributed electronic classroom. We show how focusing in a well-defined domain makes it possible to develop a compact system that is accessible to even naive users. We present a conceptual model and a system, GASP-II (Geometric Animation System, Princeton, II), that realizes this model in the geometric domain. The system allows the presentation and interactive exploration of 3D geometric algorithms over a network. | false | false | [
"Maria Shneerson",
"Ayellet Tal"
] | [] | [] | [] |
Vis | 1,997 | Visualization of height field data with physical models and texture photomapping | 10.1109/VISUAL.1997.663862 | The paper discusses a unique way to visualize height field data: the use of solid fabricated parts with a photomapped texture to display scalar information. In this process, the data in a height field are turned into a 3D solid representation through solid freeform fabrication techniques, in this case laminated object manufacturing. Next, that object is used as a 3D "photographic plate" to allow a texture image representing scalar data to be permanently mapped onto it. The paper discusses this process and how it can be used in different visualization situations. | false | false | [
"Dru Clark",
"Michael J. Bailey"
] | [] | [] | [] |
Vis | 1,997 | Visualization of higher order singularities in vector fields | 10.1109/VISUAL.1997.663858 | Presents an algorithm for the visualization of vector field topology based on Clifford algebra. It allows the detection of higher-order singularities. This is accomplished by first analysing the possible critical points and then choosing a suitable polynomial approximation, because conventional methods based on piecewise linear or bilinear approximation do not allow higher-order critical points and destroy the topology in such cases. The algorithm is still very fast, because of using linear approximation outside the areas with several critical points. | false | false | [
"Gerik Scheuermann",
"Hans Hagen",
"Heinz Krüger",
"Martin Menzel",
"Alyn P. Rockwood"
] | [] | [] | [] |
Vis | 1,997 | Visualization of large terrains in resource-limited computing environments | 10.1109/VISUAL.1997.663863 | The authors describe a software system supporting interactive visualization of large terrains in a resource-limited environment, i.e. a low-end client computer accessing a large terrain database server through a low-bandwidth network. By "large", they mean that the size of the terrain database is orders of magnitude larger than the computer RAM. Superior performance is achieved by manipulating both geometric and texture data at a continuum of resolutions, and, at any given moment, using the best resolution dictated by the CPU and bandwidth constraints. The geometry is maintained as a Delaunay triangulation of a dynamic subset of the terrain data points, and the texture compressed by a progressive wavelet scheme. A careful blend of algorithmic techniques enables the system to achieve superior rendering performance on a low-end computer by optimizing the number of polygons and texture pixels sent to the graphics pipeline. It guarantees a frame rate depending only on the size and quality of the rendered image, independent of the viewing parameters and scene database size. An efficient paging scheme minimizes data I/O, thus enabling the use of the system in a low-bandwidth client/server data-streaming scenario, such as on the Internet. | false | false | [
"Boris Rabinovich",
"Craig Gotsman"
] | [] | [] | [] |
Vis | 1,997 | Visualization of plant growth | 10.1109/VISUAL.1997.663925 | The measurement, analysis and visualization of plant growth is of primary interest to plant biologists. We are developing software tools to support such investigations. There are two parts in this investigation, namely growth visualization of (i) a plant root and (ii) a plant stem. For both domains, the input data is a stream of images taken by cameras. The tools being developed make it possible to measure various time-varying quantities, such as differential growth. For both domains, the plant is modeled by using flexible templates to represent non-rigid motions. | false | false | [
"Jeremy J. Loomis",
"Xiuwen Liu",
"Zhaohua Ding",
"Kikuo Fujimura",
"Michael L. Evans",
"Hideo Ishikawa"
] | [] | [] | [] |
Vis | 1,997 | Visualization of rotation fields | 10.1109/VISUAL.1997.663929 | We define a rotation field by extending the notion of a vector field to rotations. A vector field has a vector as a value at each point of its domain; a rotation field has a rotation as a value at each point of its domain. Rotation fields result from mapping the orientation error of tracking systems. We build upon previous methods for the visualization of vector fields, tensor fields and rotations at a point, to visualize a rotation field resulting from calibration of a commonly-used magnetic tracking system. | false | false | [
"Mark A. Livingston"
] | [] | [] | [] |
Vis | 1,997 | Visualizing the behaviour of higher dimensional dynamical systems | 10.1109/VISUAL.1997.663867 | In recent years scientific visualization has been driven by the need to visualize high-dimensional data sets within high-dimensional spaces. However, most visualization methods are designed to show only some statistical features of the data set. The paper deals with the visualization of trajectories of high-dimensional dynamical systems which form an L_n^n data set of a smooth n-dimensional flow. Three methods that are based on the idea of parallel coordinates are presented and discussed. Visualizations done with these new methods are shown and an interactive visualization tool for the exploration of high-dimensional dynamical systems is proposed. | false | false | [
"Rainer Wegenkittl",
"Helwig Löffelmann",
"M. Eduard Gröller"
] | [] | [] | [] |
Vis | 1,997 | VizWiz: a Java applet for interactive 3D scientific visualization on the Web | 10.1109/VISUAL.1997.663891 | VizWiz is a Java applet that provides basic interactive scientific visualization functionality, such as isosurfaces, cutting planes, and elevation plots, for 2D and 3D datasets that can be loaded into the applet by the user via the applet's Web server. VizWiz is unique in that it is a completely platform independent scientific visualization tool, and is usable over the Web, without being manually downloaded or installed. Its 3D graphics are implemented using only the Java AWT API, making them portable across all Java supporting platforms. The paper describes the implementation of VizWiz, including design tradeoffs. Graphics performance figures are provided for a number of different platforms. A solution to the problem of uploading user data files into a Java applet, working around security limitations, is demonstrated. The lessons learned from this project are discussed. | false | false | [
"Cherilyn Michaels",
"Michael J. Bailey"
] | [] | [] | [] |
Vis | 1,997 | Volume rendering of abdominal aortic aneurysms | 10.1109/VISUAL.1997.663855 | One well known application area of volume rendering is the reconstruction and visualization of output from medical scanners like computed tomography (CT). 2D greyscale slices produced by these scanners can be reconstructed and displayed onscreen as a 3D model. Volume visualization of medical images must address two important issues. First, it is difficult to segment medical scans into individual materials based only on intensity values. Second, although greyscale images are the normal method for displaying medical volumes, these types of images are not necessarily appropriate for highlighting regions of interest within the volume. Studies of the human visual system have shown that individual intensity values are difficult to detect in a greyscale image. In these situations colour is a more effective visual feature. We addressed both problems during the visualization of CT scans of abdominal aortic aneurysms. We have developed a classification method that empirically segments regions of interest in each of the 2D slices. We use a perceptual colour selection technique to identify each region of interest in both the 2D slices and the 3D reconstructed volumes. The result is a colourized volume that the radiologists are using to rapidly and accurately identify the locations and spatial interactions of different materials from their scans. Our technique is being used in an experimental post operative environment to help to evaluate the results of surgery designed to prevent the rupture of the aneurysm. In the future, we hope to use the technique during the planning of placement of support grafts prior to the actual operation. | false | false | [
"Roger C. Tam",
"Christopher G. Healey",
"Borys Flak",
"Peter Cahoon"
] | [] | [] | [] |
Vis | 1,997 | Vortex identification-applications in aerodynamics: a case study | 10.1109/VISUAL.1997.663910 | An eigenvector method for vortex identification has been applied to recent numerical and experimental studies in external flow aerodynamics. It is shown to be an effective way to extract and visualize features such as vortex cores, spiral vortex breakdowns, vortex bursting, and vortex diffusion. Several problems are reported and illustrated. These include: disjointed line segments, detecting non-vortical flow features, and vortex core displacement. Future research and applications are discussed, such as using vortex cores to guide automatic grid refinement. | false | false | [
"David N. Kenwright",
"Robert Haimes"
] | [
"BCS"
] | [] | [] |
Vis | 1,997 | Wavelet-based multiresolutional representation of computational field simulation datasets | 10.1109/VISUAL.1997.663872 | The paper addresses multiresolutional representation of datasets arising from a computational field simulation. The approach determines the regions of interest, breaks the volume into variable size blocks to localize the information, and then codes each block using a wavelet transform. The blocks are then ranked by visual information content so that the most informative wavelet coefficients can be embedded in a bit stream for progressive transmission or access. The technique is demonstrated on a widely-used computational field simulation dataset. | false | false | [
"Zhifan Zhu",
"Raghu Machiraju",
"Bryan Fry",
"Robert J. Moorhead II"
] | [] | [] | [] |
InfoVis | 1,997 | A spreadsheet approach to information visualization | 10.1109/INFVIS.1997.636761 | In information visualization, as the volume and complexity of the data increases, researchers require more powerful visualization tools that enable them to more effectively explore multidimensional datasets. We discuss the general utility of a novel visualization spreadsheet framework. Just as a numerical spreadsheet enables exploration of numbers, a visualization spreadsheet enables exploration of visual forms of information. We show that the spreadsheet approach facilitates certain information visualization tasks that are more difficult using other approaches. Unlike traditional spreadsheets, which store only simple data elements and formulas in each cell, a visualization spreadsheet cell can hold an entire complex data set, selection criteria, viewing specifications, and other information needed for a full-fledged information visualization. Similarly, inter-cell operations are far more complex, stretching beyond simple arithmetic and string operations to encompass a range of domain-specific operators. We have built two prototype systems that illustrate some of these research issues. The underlying approach in our work allows domain experts to define new data types and data operations, and enables visualization experts to incorporate new visualizations, viewing parameters, and view operations. | false | false | [
"Ed H. Chi",
"Phillip Barry",
"John Riedl",
"Joseph A. Konstan"
] | [] | [] | [] |
InfoVis | 1,997 | Adaptive information visualization based on the user's multiple viewpoints - interactive 3D visualization of the WWW | 10.1109/INFVIS.1997.636778 | We introduce the adaptive information visualization method for hypermedia and the WWW based on the user's multiple viewpoints. We propose two graphical interfaces, the CVI and the RF-Cone. The CVI is the interface for interactive viewpoint selection. We can select a viewpoint reflecting our interests by using the CVI. According to the given viewpoint, the RF-Cone adaptively organizes the 3D representation of the hypermedia so that we can understand the semantic and structural relationship of the hypermedia and easily retrieve the information. Combining these methods, we have developed the WWW visualization system which can provide highly efficient navigation. | false | false | [
"Teruhiko Teraoka",
"Minoru Maruyama"
] | [] | [] | [] |
InfoVis | 1,997 | Cacti: a front end for program visualization | 10.1109/INFVIS.1997.636785 | We describe a system that allows the user to rapidly construct program visualizations over a variety of data sources. Such a system is a necessary foundation for using visualization as an aid to software understanding. The system supports an arbitrary set of data sources so that information from both static and dynamic analysis can be combined to offer meaningful software visualizations. It provides the user with a visual universal-relation front end that supports the definition of queries over multiple data sources without knowledge of the structure or contents of the sources. It uses a flexible back end with a range of different visualizations, most geared to the efficient display of large amounts of data. The result is a high-quality, easy-to-define program visualization that can address specific problems and hence is useful for software understanding. The overall system is flexible and extensible in that both the underlying data model and the set of visualizations are defined in resource files. | false | false | [
"Steven P. Reiss"
] | [] | [] | [] |
InfoVis | 1,997 | Coordinating declarative queries with a direct manipulation data exploration environment | 10.1109/INFVIS.1997.636788 | Interactive visualization techniques allow data exploration to be a continuous process, rather than a discrete sequence of queries and results as in traditional database systems. However limitations in expressive power of current visualization systems force users to go outside the system and form a new dataset in order to perform certain operations, such as those involving the relationship among multiple objects. Further, there is no support for integrating data from the new dataset into previous visualizations, so users must recreate them. Visage's information centric paradigm provides an architectural hook for linking data across multiple queries, removing this overhead. This paper describes the addition to Visage of a visual query language, called VQE, which allows users to express more complicated queries than in previous interactive visualization systems. Visualizations can be created from queries and vice versa. When either is updated, the other changes to maintain consistency. | false | false | [
"Mark Derthick",
"Steven F. Roth",
"John Kolojejchick"
] | [] | [] | [] |
InfoVis | 1,997 | Design and evaluation of incremental data structures and algorithms for dynamic query interfaces | 10.1109/INFVIS.1997.636790 | A dynamic query interface (DQI) is a database access mechanism that provides continuous real-time feedback to the user during query formulation. Previous work shows that DQIs are elegant and powerful interfaces to small databases. Unfortunately, when applied to large databases, previous DQI algorithms slow to a crawl. We present a new incremental approach to DQI algorithms and display updates that work well with large databases, both in theory and in practice. | false | false | [
"Egemen Tanin",
"Richard Beigel",
"Ben Shneiderman"
] | [] | [] | [] |
InfoVis | 1,997 | Domesticating Bead: adapting an information visualization system to a financial institution | 10.1109/INFVIS.1997.636789 | The Bead visualization system employs a fast algorithm for laying out high-dimensional data in a low-dimensional space, and a number of features added to 3D visualizations to improve imageability. We describe recent work on both aspects of the system, in particular a generalization of the data types laid out and the implementation of imageability features in a 2D visualization tool. The variety of data analyzed in a financial institution such as UBS, and the ubiquity of spreadsheets as a medium for analysis, led us to extend our layout tools to handle data in a generic spreadsheet format. We describe the metrics of similarity used for this data type, and give examples of layouts of sets of records of financial trades. Conservatism and scepticism with regard to 3D visualization, along with the lack of functionality of widely available 3D web browsers, led to the development of a 2D visualization tool with refinements of a number of our imageability features. | false | false | [
"Dominique Brodbeck",
"Matthew Chalmers",
"Aran Lunzer",
"Pamela Cotture"
] | [] | [] | [] |
InfoVis | 1,997 | H3: laying out large directed graphs in 3D hyperbolic space | 10.1109/INFVIS.1997.636718 | We present the H3 layout technique for drawing large directed graphs as node-link diagrams in 3D hyperbolic space. We can lay out much larger structures than can be handled using traditional techniques for drawing general graphs because we assume a hierarchical nature of the data. We impose a hierarchy on the graph by using domain-specific knowledge to find an appropriate spanning tree. Links which are not part of the spanning tree do not influence the layout but can be selectively drawn by user request. The volume of hyperbolic 3-space increases exponentially, as opposed to the familiar geometric increase of euclidean 3-space. We exploit this exponential amount of room by computing the layout according to the hyperbolic metric. We optimize the cone tree layout algorithm for 3D hyperbolic space by placing children on a hemisphere around the cone mouth instead of on its perimeter. Hyperbolic navigation affords a Focus+Context view of the structure with minimal visual clutter. We have successfully laid out hierarchies of over 20,000 nodes. Our implementation accommodates navigation through graphs too large to be rendered interactively by allowing the user to explicitly prune or expand subtrees. | false | false | [
"Tamara Munzner"
] | [] | [] | [] |
InfoVis | 1,997 | Managing multiple focal levels in Table Lens | 10.1109/INFVIS.1997.636787 | The Table Lens, a focus+context visualization for large data tables, allows users to see 100 times as many data values as a spreadsheet in the same screen space in a manner that enables an extremely immediate form of exploratory data analysis. In the original Table Lens design, data are shown in the context area using graphical representations in a single pixel row. Scaling up the Table Lens technique beyond approximately 500 cases (rows) by 40 variables (columns) requires not showing every value individually and thus raises challenges for preserving the exploratory and navigational ease and power of the original design. We describe two design enhancements for introducing regions of less than a pixel row for each data value and discuss the issues raised by each. | false | false | [
"Tichomir Tenev",
"Ramana Rao"
] | [] | [] | [] |
InfoVis | 1,997 | Managing software with new visual representations | 10.1109/INFVIS.1997.636782 | Managing large projects is a very challenging task requiring the tracking and scheduling of many resources. Although new technologies have made it possible to automatically collect data on project resources, it is very difficult to access this data because of its size and lack of structure. We present three novel glyphs for simplifying this process and apply them to visualizing statistics from a multi-million line software project. These glyphs address four important needs in project management: viewing time dependent data; managing large data volumes; dealing with diverse data types; and correspondence of data to real-world concepts. | false | false | [
"Mei C. Chuah",
"Stephen G. Eick"
] | [] | [] | [] |
InfoVis | 1,997 | Metrics for effective information visualization | 10.1109/INFVIS.1997.636794 | Metrics for information visualization will help designers create and evaluate 3D information visualizations. Based on experience from 60+ 3D information visualizations, the metrics we propose are: number of data points and data density; number of dimensions and cognitive overhead; occlusion percentage; and reference context and percentage of identifiable points. | false | false | [
"Richard Brath"
] | [] | [] | [] |
InfoVis | 1,997 | Multidimensional detective | 10.1109/INFVIS.1997.636793 | The display of multivariate datasets in parallel coordinates transforms the search for relations among the variables into a 2-D pattern recognition problem. This is the basis for the application to visual data mining. The knowledge discovery process together with some general guidelines are illustrated on a dataset from the production of a VLSI chip. The special strength of parallel coordinates is in modeling relations. As an example, a simplified economic model is constructed with data from various economic sectors of a real country. The visual model shows the interrelationships and dependencies between the sectors, circumstances where there is competition for the same resource, and feasible economic policies. Interactively, the model can be used to do trade-off analyses, discover sensitivities, do approximate optimization, monitor (as in a process) and provide decision support. | false | false | [
"Alfred Inselberg"
] | [] | [] | [] |
InfoVis | 1,997 | Nonlinear magnification fields | 10.1109/INFVIS.1997.636786 | We introduce nonlinear magnification fields as an abstract representation of nonlinear magnification, providing methods for converting transformation routines to magnification fields and vice-versa. This new representation provides ease of manipulation and power of expression. By removing the restrictions of explicit foci and allowing precise specification of magnification values, we can achieve magnification effects which were not previously possible. Of particular interest are techniques we introduce for expressing complex and subtle magnification effects through magnification brushing, and allowing intrinsic properties of the data being visualized to create data-driven magnifications. | false | false | [
"Alan Keahey",
"Edward L. Robertson"
] | [] | [] | [] |
InfoVis | 1,997 | On integrating visualization techniques for effective software exploration | 10.1109/INFVIS.1997.636784 | This paper describes the SHriMP visualization technique for seamlessly exploring software structure and browsing source code, with a focus on effectively assisting hybrid program comprehension strategies. The technique integrates both pan+zoom and fisheye-view visualization approaches for exploring a nested graph view of software structure. The fisheye-view approach handles multiple focal points, which are necessary when examining several subsystems and their mutual interconnections. Source code is presented by embedding code fragments within the nodes of the nested graph. Finer connections among these fragments are represented by a network that is navigated using a hypertext link-following metaphor. SHriMP combines this hypertext metaphor with animated panning and zooming motions over the nested graph to provide continuous orientation and contextual cues for the user. The SHriMP tool is being evaluated in several user studies. Observations of users performing program understanding tasks with the tool are discussed. | false | false | [
"Margaret-Anne D. Storey",
"Kenny Wong",
"F. David Fracchia",
"Hausi A. Müller"
] | [] | [] | [] |
InfoVis | 1,997 | The structure of the information visualization design space | 10.1109/INFVIS.1997.636792 | Research on information visualization has reached the point where a number of successful point designs have been proposed and a variety of techniques have been discovered. It is now appropriate to describe and analyze portions of the design space so as to understand the differences among designs and to suggest new possibilities. This paper proposes an organization of the information visualization literature and illustrates it with a series of examples. The result is a framework for designing new visualizations and augmenting existing designs. | false | false | [
"Stuart K. Card",
"Jock D. Mackinlay"
] | [
"TT"
] | [] | [] |
InfoVis | 1,997 | Visualizing information on a sphere | 10.1109/INFVIS.1997.636759 | We describe a method for the visualization of information units on spherical domains which is employed in the banking industry for risk analysis, stock prediction and other tasks. The system is based on a quantification of the similarity of related objects that governs the parameters of a mass-spring system. Unlike existing approaches we initialize all information units onto the inner surface of two concentric spheres and attach them with springs to the outer sphere. Since the spring stiffnesses correspond to the computed similarity measures, the system converges into an energy minimum which reveals multidimensional relations and adjacencies in terms of spatial neighborhoods. Depending on the application scenario our approach supports different topological arrangements of related objects. In order to cope with large data sets we propose a blobby clustering mechanism that enables encapsulation of similar objects by implicit shapes. In addition, we implemented various interaction techniques allowing semantic analysis of the underlying data sets. Our prototype system IVORY is written in Java, and its versatility is illustrated by an example from financial service providers. | false | false | [
"Markus H. Gross",
"Thomas C. Sprenger",
"J. Finger"
] | [] | [] | [] |
InfoVis | 1,997 | Volume rendering for relational data | 10.1109/INFVIS.1997.636791 | A method for efficiently volume rendering dense scatterplots of relational data is described. Plotting difficulties that arise from large numbers of data points, categorical variables, interaction with non-axis dimensions, and unknown values, are addressed by this method. The domain of the plot is voxelized using binning and then volume rendering. Since a table is used as the underlying data structure, no storage is wasted on regions with no data. The opacity of each voxel is a function of the number of data points in a corresponding bin. A voxel's color is derived by averaging the value of one of the variables for all the data points that fall in a bin. Other variables in the data may be mapped to external query sliders. A dragger object permits a user to select regions inside the volume. | false | false | [
"Barry G. Becker"
] | [] | [] | [] |
CHI | 1,997 | Balancing Usability and Learning in an Interface | 10.1145/258549.258995 | Creating educational software forces a difficult tradeoff. The software must be easy for the students to use, yet not so simple that the parts that students are to learn from are done for them by the computer. DEVICE (Dynamic Environment for Visualization of Chemical Engineering) is a learning environment aimed at allowing chemical engineering students to model chemical engineering problems, then execute those problems as simulations. In the design of DEVICE, we have attempted to use student tasks to focus attention on the most important parts of the problem without overwhelming students with extraneous detail. | false | false | [
"Noel Rappin",
"Mark Guzdial",
"Matthew J. Realff",
"Pete Ludovice"
] | [] | [] | [] |
CHI | 1,997 | Bringing Treasures to the Surface: Iterative Design for the Library of Congress National Digital Library Program | 10.1145/258549.259009 | The Human-Computer Interaction Lab worked with a team for the Library of Congress (LC) to develop and test interface designs for LC's National Digital Library Program. Three iterations are described and illustrate the progression of the project toward a compact design that minimizes scrolling and jumping and anchors users in a screen space that tightly couples search and results. Issues and resolutions are discussed for each iteration and reflect the challenges of incomplete metadata, data visualization, and the rapidly changing web environment. | false | false | [
"Catherine Plaisant",
"Gary Marchionini",
"Tom Bruns",
"Anita Komlodi",
"Laura Campbell"
] | [] | [] | [] |
CHI | 1,997 | Characterizing Interactive Externalizations | 10.1145/258549.258803 | This paper seeks to characterize the space of techniques that exist for interactive externalisations (visualizations). A selection of visualizations are classified with respect to: the types of data represented, the nature of the visible feedback displayed and the forms of interactivity used. Such characterization provides a method for evaluating potential designs and comparing different tools. | false | false | [
"Lisa Tweedie"
] | [] | [] | [] |
CHI | 1,997 | Putting Visualization to Work: ProgramFinder for Youth Placement | 10.1145/258549.259003 | The Human-Computer Interaction Laboratory (HCIL) and the Maryland Department of Juvenile Justice (DJJ) have been working together to develop the ProgramFinder, a tool for choosing programs for a troubled youth from drug rehabilitation centers to secure residential facilities. The seemingly straightforward journey of the ProgramFinder from an existing user interface technique to a product design required the development of five different prototypes which involved user interface design, prototype implementation, and selecting search criteria. While HCIL’s effort focused primarily on design and implementation, DJJ’s attribute selection process was the most time consuming and difficult task. We also found that a direct link to DJJ’s workflow was needed in the prototypes to generate the necessary “buy-in”. This paper analyzes the interaction between the efforts of HCIL and DJJ and the amount of “buy-in” by DJJ staff and management. Lessons learned are presented for developers. | false | false | [
"Jason B. Ellis",
"Anne Rose",
"Catherine Plaisant"
] | [] | [] | [] |
Vis | 1,996 | A 3D Contextual Shading Method for Visualization of Diecasting Defects | 10.1109/VISUAL.1996.568143 | In many mechanical design-related activities, the visualization tool needs to convey not only the shape of the objects, but also their interior problem regions. Due to the binary nature of these models, existing shading models often fall short of supporting a realistic display. In this case study, we present several new contextual shading methods that we originally developed for our design visualization tools. The results are then compared with gray-scale shading applied to a gray-level version of the binary object. The comparison shows that our method can be applied to any binary object and yields promising results. | false | false | [
"Shao-Chiung Lu",
"Alec B. Rebello",
"D. H. Cui",
"Roni Yagel",
"Richard Allen Miller",
"Gary L. Kinzel"
] | [] | [] | [] |
Vis | 1,996 | A fast Gibbs sampler for synthesizing constrained fractals | 10.1109/VISUAL.1996.567598 | It is well known that the spatial frequency spectra of membrane and thin-plate splines exhibit self-affine characteristics and hence behave as fractals. This behavior was exploited in generating the constrained fractal surfaces in the work of Szeliski and Terzopoulos (1989), which were generated by using a Gibbs sampler algorithm. The algorithm involves locally perturbing a constrained spline surface with white noise until the spline surface reaches an equilibrium state. In this paper, we introduce a very fast generalized Gibbs sampler that combines two novel techniques, namely a preconditioning technique in a wavelet basis for constraining the splines and a perturbation scheme in which, unlike the traditional Gibbs sampler, all sites (surface nodes) that do not share a common neighbor are updated simultaneously. In addition, we demonstrate the capability to generate arbitrary-order fractal surfaces without resorting to blending techniques. Using this fast Gibbs sampler algorithm, we demonstrate the synthesis of realistic terrain models from sparse elevation data. | false | false | [
"Baba C. Vemuri",
"Chhandomay Mandal"
] | [] | [] | [] |
Vis | 1,996 | A haptic interaction method for volume visualization | 10.1109/VISUAL.1996.568108 | Volume visualization techniques typically provide support for visual exploration of data; however, additional information can be conveyed by allowing a user to see as well as feel virtual objects. We present a haptic interaction method that is suitable for both volume visualization and modeling applications. Point contact forces are computed directly from the volume data and are consistent with the isosurface and volume rendering methods, providing a strong correspondence between visual and haptic feedback. Virtual tools are simulated by applying three-dimensional filters to some properties of the data within the extent of the tool, and interactive visual feedback rates are obtained by using an accelerated ray casting method. This haptic interaction method was implemented using a PHANToM haptic interface. | false | false | [
"Ricardo S. Avila",
"Lisa M. Sobierajski"
] | [] | [] | [] |
Vis | 1,996 | A linear iteration time layout algorithm for visualising high-dimensional data | 10.1109/VISUAL.1996.567787 | A technique is presented for the layout of high dimensional data in a low dimensional space. This technique builds upon the force based methods that have been used previously to make visualisations of various types of data such as bibliographies and sets of software modules. The canonical force based model, related to solutions of the N body problem, has a computational complexity of O(N^2) per iteration. The paper presents a stochastically based algorithm of linear complexity per iteration which produces good layouts, has low overhead, and is easy to implement. Its performance and accuracy are discussed, in particular with regard to the data to which it is applied. Experience with application to bibliographic and time series data, which may have a dimensionality in the tens of thousands, is described. | false | false | [
"Matthew Chalmers"
] | [] | [] | [] |