Conference | Year | Title | DOI | Abstract | Accessible | Early | AuthorNames-Deduped | Award | Resources | ResourceLinks |
|---|---|---|---|---|---|---|---|---|---|---|
EuroVis | 2002 | Locating Closed Streamlines in 3D Vector Fields | 10.2312/VisSym/VisSym02/227-232 | The analysis and visualization of flows is a central problem in visualization. Topology based methods have gained increasing interest in recent years. This article describes a method for the detection of closed streamlines in 3D flows. It is based on a special treatment of cases where a streamline reenters a cell to prevent infinite cycling during streamline calculation. The algorithm checks for possible exits of a loop of crossed faces and detects structurally stable closed streamlines. These global features are not detected by conventional topology and feature detection algorithms. | false | false | ["Thomas Wischgoll", "Gerik Scheuermann"] | [] | [] | [] |
EuroVis | 2002 | Octreemizer: A Hierarchical Approach for Interactive Roaming Through Very Large Volumes | 10.2312/VisSym/VisSym02/053-060 | We have developed a hierarchical paging scheme for handling very large volumetric data sets at interactive frame rates. Our system trades texture resolution for speed and uses effective prediction strategies. We have tested our approach for datasets up to 16GB in size and show that it works well with less than 500MB of main memory cache for 64MB of 3D-texture memory. Our approach makes it feasible to deal with these volumes on desktop machines. | false | false | ["John Plate", "Michael Tirtasana", "Rhadamés Carmona", "Bernd Fröhlich 0001"] | [] | [] | [] |
EuroVis | 2002 | Parallel and Out-of-core View-dependent Isocontour Visualization Using Random Data Distribution | 10.2312/VisSym/VisSym02/009-018 | In this paper we describe a parallel and out-of-core view-dependent isocontour visualization algorithm that efficiently extracts and renders the visible portions of an isosurface from large datasets. The algorithm first creates an occlusion map using ray-casting and nearest neighbors. With the occlusion map constructed, the visible portion of the isosurface is extracted and rendered. All steps are in a single pass with minimal communication overhead. The overall workload is well balanced among parallel processors using random data distribution. Volumetric datasets are statically partitioned onto the local disks of each processor and loaded only when necessary. This out-of-core feature allows it to handle scalably large datasets. We additionally demonstrate significant speedup of the view-dependent isocontour visualization on a commodity off-the-shelf PC cluster. | false | false | ["Xiaoyu Zhang", "Chandrajit Bajaj", "Vijaya Ramachandran"] | [] | [] | [] |
EuroVis | 2002 | Secondary Task Display Attributes - Optimizing Visualizations for Cognitive Task Suitability and Interference Avoidance | 10.2312/VisSym/VisSym02/165-171 | We found that established display design guidelines for focal images cannot be extended to images displayed as a secondary task in a dual-task situation. This paper describes an experiment that determines a new ordering guideline for secondary task image attributes according to human cognitive ability to extract information. The imperative for alternate guidelines is based on the difference in an image's ability to convey meaning, which decreases when moved from a focal to a secondary task situation. Secondary task attribute ordering varies with the level of degradation in the primary task. Furthermore, attribute effectiveness may be particular to types of visual operations relating to cognitive tasks. | false | false | ["Christa M. Chewar", "D. Scott McCrickard", "Ali Ndiwalana", "Chris North 0001", "Jon Pryor", "David Tessendorf"] | [] | [] | [] |
EuroVis | 2002 | Shear-Warp Deluxe: The Shear-Warp Algorithm Revisited | 10.2312/VisSym/VisSym02/095-104 | Despite continued advances in volume rendering technology, the Shear-Warp algorithm, although conceived as early as 1994, still remains the world's fastest purely software-based volume rendering algorithm. The impressive speed of near double-digit framerates for moderately sized datasets, however, does come at the price of reduced image quality and memory consumption. In this paper, we present the implementation and impact of certain measures that seek to address these shortcomings. Specifically, we investigate the effects of: (i) post-interpolated classification and shading, (ii) matched volume sampling on zoom, (iii) the interpolation of intermediate slices to reduce inter-slice aliasing, and (iv) the re-use of encoded RLE runs for more than one major viewing direction to preserve memory. We also study a new variation of the shear-warp algorithm that operates on body-centered cubic grids. We find that the reduction of the number of voxels that this grid affords translates into direct savings in rendering times, with minimal degradation in image quality. | false | false | ["Jon Sweeney", "Klaus Mueller 0001"] | [] | [] | [] |
EuroVis | 2002 | Speech and Gesture Multimodal Control of a Whole Earth 3D Visualization Environment | 10.2312/VisSym/VisSym02/195-200 | A growing body of research shows several advantages to multimodal interfaces including increased expressiveness, flexibility, and user freedom. This paper investigates the design of such an interface that integrates speech and hand gestures. The interface has the additional property of operating relative to the user and can be used while the user is in motion or standing at a distance from the computer display. The paper then describes an implementation of the multimodal interface for a whole Earth 3D visualization which presents navigation interface challenges due to the large magnitude of scale and extended spaces that are available. The characteristics of the multimodal interface are examined, such as speed, recognizability of gestures, ease and accuracy of use, and learnability under likely conditions of use. This implementation shows that such a multimodal interface can be effective in a real environment and sets some parameters for the design and use of such interfaces. | false | false | ["David M. Krum", "Olugbenga Omoteso", "William Ribarsky", "Thad Starner", "Larry F. Hodges"] | [] | [] | [] |
EuroVis | 2002 | Statistical Computation of Salient ISO-Values | 10.2312/VisSym/VisSym02/019-024 | Detection of the salient iso-values in a volume dataset is often the first step towards its exploration. A trial-and-error approach is often used; new semi-automatic techniques either make assumptions about their data [4] or present multiple criteria for analysis. Determining whether a dataset satisfies an algorithm's assumptions, or which criteria to use in an analysis, are both non-trivial tasks. The use of a dataset's statistical signatures, local higher order moments (LHOMs), to characterize its salient iso-values was presented in [10]. In this paper we propose a computational algorithm that uses LHOMs for expedient estimation of salient iso-values. As LHOMs are model independent statistical signatures our algorithm does not impose any assumptions on the data. Further, the algorithm has a single criterion for characterization of the salient iso-values, and the search for this criterion is easily automated. Examples from medical and computational domains are used to demonstrate the effectiveness of the proposed algorithm. | false | false | ["Shivaraj Tenginakai", "Raghu Machiraju"] | [] | [] | [] |
EuroVis | 2002 | Useful Properties of Semantic Depth of Field for Better F+C Visualization | 10.2312/VisSym/VisSym02/205-210 | This paper presents the results of a thorough user study that was performed to assess some features and the general usefulness of Semantic Depth of Field (SDOF). Based on these results, concrete hints are given on how SDOF can be used for visualization. SDOF was found to be a very effective means for guiding the viewer's attention and for giving him or her a quick overview of a data set. It can also very quickly be perceived, and therefore provides an efficient visual channel. Semantic Depth of Field is a focus+context (F+C) technique that uses blur to point the user to the most relevant objects. It was inspired by the depth of field (DOF) effect in photography, which serves a very similar purpose. | false | false | ["Robert Kosara", "Silvia Miksch", "Helwig Hauser"] | [] | [] | [] |
EuroVis | 2002 | View-Dependent Multiresolution Splatting of Non-Uniform Data | 10.2312/VisSym/VisSym02/125-132 | This paper develops an approach for the splat-based visualization of large scale, non-uniform data. A hierarchical structure is generated that permits detailed treatment at the leaf nodes of the non-uniform distribution. A set of levels of detail (LODs) are generated based on the levels of the hierarchy. These yield two metrics, one in terms of the spatial extent of the bounding box containing the splat and one in terms of the variation of the scalar field over this box. The former yields a view-dependent choice of LODs while the latter yields a view-independent LOD based on the field variation. To show the utility of this general approach it is applied to a set of application data for a whole earth environment and some test data. Performance results are given. | false | false | ["Justin Jang", "William Ribarsky", "Christopher D. Shaw", "Nickolas Faust"] | [] | [] | [] |
EuroVis | 2002 | Viewpoint Entropy: A New Tool for Obtaining Good Views of Molecules | 10.2312/VisSym/VisSym02/183-188 | The computation of good viewpoints is important in several fields: computer graphics, removal of degeneracies in computational geometry, robotics, graph drawing, etc. However, in areas such as computer graphics there is no consensus on what a good viewpoint means and, consequently, each author uses his or her own definition according to the requirements of the application. In this paper we present a formal measure strongly based on Information Theory, viewpoint entropy, that can be applied to certain problems of Computer Graphics such as automatic exploration of objects or scenes and Scene Understanding. We also define a new measure, the orthogonal frustum entropy, in order to fulfill the requirements needed to visualize molecules. We design an algorithm that makes use of graphics hardware to accelerate computation, and whose complexity depends mainly on the number of views we want to analyze. Computation of good views of molecules is useful for molecular scientists, a field which includes practitioners from Crystallography, Chemistry, and Biology. | false | false | ["Pere-Pau Vázquez", "Miquel Feixas", "Mateu Sbert", "Antoni Llobet"] | [] | [] | [] |
EuroVis | 2002 | Visualization of Bibliographic Networks with a Reshaped Landscape Metaphor | 10.2312/VisSym/VisSym02/159-164 | We describe a novel approach to visualize bibliographic networks that facilitates the simultaneous identification of clusters (e.g., topic areas) and prominent entities (e.g., surveys or landmark papers). While employing the landscape metaphor proposed in several earlier works, we introduce new means to determine relevant parameters of the landscape. Moreover, we are able to compute prominent entities, clustering of entities, and the landscape's surface in a surprisingly simple and uniform way. The effectiveness of our network visualizations is illustrated on data from the graph drawing literature. | false | false | ["Ulrik Brandes", "Thomas Willhalm"] | [] | [] | [] |
EuroVis | 2002 | Visualization of Large Web Access Data Sets | 10.2312/VisSym/VisSym02/201-204 | Many real-world e-service applications require analyzing large volumes of transaction data to extract web access information. This paper describes Web Access Visualization (WAV), a system that visually associates the affinities and relationships of clients and URLs for large volumes of web transaction data. To date, many practical research projects have shown the usefulness of a physics-based mass-spring technique to layout data items with close relationships onto a graph. The WAV system: (1) maps transaction data items (clients, URLs) and their relationships to vertices, edges, and positions on a 3D spherical surface; (2) encapsulates a physics-based engine in a visual data analysis platform; and (3) employs various content sensitive visual techniques - linked multiple views, layered drill-down, and fade in/out - for interactive data analysis. We have applied this system to a web application to analyze web access patterns and trends. The web service quality has greatly benefited from using the information provided by WAV. | false | false | ["Ming C. Hao", "Pankaj K. Garg", "Umeshwar Dayal", "Vijay Machiraju", "Daniel Cotting"] | [] | [] | [] |
EuroVis | 2002 | Visualizing and Investigating Multidimensional Functions | 10.2312/VisSym/VisSym02/173-182 | This paper addresses the problem of visualizing multidimensional scalar functions. These functions are often encountered in fields such as Engineering, Mathematics, and Physics to understand and model complex phenomena. We propose a novel method based on the dimension reduction philosophy called HyperCell. Its basic concept is to represent the function by means of dynamic orthogonal low (1D, 2D, or 3D) dimensional subspaces, called Cells. Firstly the user defines an N-dimensional region of interest, in which the data can be visualized. Then the user interactively creates cells by selecting up to three dimensions from the function domain. A cell can be visualized using a standard visualization algorithm such as isosurfacing or volume rendering. The analysis of the function is done by investigating several cells, which can be sampled simultaneously at different regions of interest in N-Space. The HyperCell method allows several useful operations to help the exploratory process such as navigation and rotation in N-Space, and brushing. | false | false | ["Selan R. dos Santos", "Ken W. Brodlie"] | [] | [] | [] |
EuroVis | 2002 | Volume Rendering Multivariate Data to Visualize Meteorological Simulations: A Case Study | 10.2312/VisSym/VisSym02/189-194 | High resolution computational weather models are becoming increasingly complex. However, the analysis of these models has not benefited from recent advancements in volume visualization. This case study applies the ideas and techniques from multi-dimensional transfer function based volume rendering to the multivariate weather simulations. The specific goal of identifying frontal zones is addressed. By combining temperature and humidity as a multivariate field, the frontal zones are more readily identified thereby assisting the meteorologists in their analysis tasks. | false | false | ["Joe Kniss", "Charles D. Hansen"] | [] | [] | [] |
EuroVis | 2002 | Vortex Tracking in Scale-Space | 10.2312/VisSym/VisSym02/233-240 | Scale-space techniques have become popular in computer vision for their capability to access the multiscale information inherently contained in images. We show that the field of flow visualization can benefit from these techniques, too, yielding more coherent features and sorting out numerical artifacts as well as irrelevant large-scale features. We describe an implementation of scale-space computation using finite elements and show that performance is sufficient for computing a scale-space of time-dependent CFD data. Feature tracking, if available, allows the information provided by scale-space to be processed not just visually but also algorithmically. We present a technique for extending a class of feature extraction schemes by an additional dimension, resulting in an efficient solution of the tracking problem. | false | false | ["Dirk Bauer", "Ronald Peikert"] | [] | [] | [] |
CHI | 2002 | Automating CPM-GOMS | 10.1145/503376.503404 | CPM-GOMS is a modeling method that combines the task decomposition of a GOMS analysis with a model of human resource usage at the level of cognitive, perceptual, and motor operations. CPM-GOMS models have made accurate predictions about skilled user behavior in routine tasks, but developing such models is tedious and error-prone. We describe a process for automatically generating CPM-GOMS models from a hierarchical task decomposition expressed in a cognitive modeling tool called Apex. Resource scheduling in Apex automates the difficult task of interleaving the cognitive, perceptual, and motor resources underlying common task operators (e.g. mouse move-and-click). Apex's UI automatically generates PERT charts, which allow modelers to visualize a model's complex parallel behavior. Because interleaving and visualization is now automated, it is feasible to construct arbitrarily long sequences of behavior. To demonstrate the process, we present a model of automated teller interactions in Apex and discuss implications for user modeling. | false | false | ["Bonnie E. John", "Alonso H. Vera", "Michael Matessa", "Michael Freed", "Roger W. Remington"] | [] | [] | [] |
CHI | 2002 | Keeping things in context: a comparative evaluation of focus plus context screens, overviews, and zooming | 10.1145/503376.503423 | Users working with documents that are too large and detailed to fit on the user's screen (e.g. chip designs) have the choice between zooming or applying appropriate visualization techniques. In this paper, we present a comparison of three such techniques. The first, focus plus context screens, are wall-size low-resolution displays with an embedded high-resolution display region. This technique is compared with overview plus detail and zooming/panning. We interviewed fourteen visual surveillance and design professionals from different areas (graphic design, chip design, air traffic control, etc.) in order to create a representative sample of tasks to be used in two experimental comparison studies. In the first experiment, subjects using focus plus context screens to extract information from large static documents completed the two experimental tasks on average 21% and 36% faster than when they used the other interfaces. In the second experiment, focus plus context screens allowed subjects to reduce their error rate in a driving simulation to less than one third of the error rate of the competing overview plus detail setup. | false | false | ["Patrick Baudisch", "Nathaniel Good", "Victoria Bellotti", "Pamela K. Schraedley"] | [] | [] | [] |
CHI | 2002 | Polyarchy visualization: visualizing multiple intersecting hierarchies | 10.1145/503376.503452 | We describe a new information structure composed of multiple intersecting hierarchies, which we call Polyarchies. Visualizing polyarchies enables use of novel views for discovery of relationships which are very difficult using existing hierarchy visualization tools. This paper will describe the visualization design and system architecture challenges as well as our current solutions. A Mid-Tier Cache architecture is used as a "polyarchy server" which supports a novel web-based polyarchy visualization technique, called Visual Pivot. A series of five user studies guided iterative design of Visual Pivot. | false | false | ["George G. Robertson", "Kim Cameron", "Mary Czerwinski", "Daniel C. Robbins"] | [] | [] | [] |
Vis | 2001 | 4D space-time techniques: a medical imaging case study | 10.1109/VISUAL.2001.964554 | We present the problem of visualizing time-varying medical data. Two medical imaging modalities are compared: MRI and dynamic SPECT. For each modality, we examine several derived scalar and vector quantities such as the change in intensity over time, the spatial gradient, and the change of the gradient over time. We compare several methods for presenting the data, including isosurfaces, direct volume rendering, and vector visualization using glyphs. These techniques may provide more information and context than methods currently used in practice; thus it is easier to discover temporal changes and abnormalities in a data set. | false | false | ["Melanie Tory", "Niklas Röber", "Torsten Möller", "Anna Celler", "M. Stella Atkins"] | [] | [] | [] |
Vis | 2001 | A case study on interactive exploration and guidance aids for visualizing historical data | 10.1109/VISUAL.2001.964557 | In this paper, we address the problem of historical data visualization. We describe the data acquisition, preparation, and visualization. Since the data contain four dimensions, the standard 3D exploration techniques have to be extended or appropriately adapted in order to enable interactive exploration. We discuss in detail two interaction concepts: (1) navigation with one fixed dimension, and (2) quasi 4D navigation allowing the user to simultaneously explore the four-dimensional space. In addition, we also present a picture-in-picture display mode, enabling the user to interactively view the data, while "flying with" a particular event, tracking its motion in time and space. Finally, we present a technique for guided exploration and animation generation, allowing for a vivid gain of insight into the historical data. | false | false | ["Stanislav L. Stoev", "Wolfgang Straßer"] | [] | [] | [] |
Vis | 2001 | A complete distance field representation | 10.1109/VISUAL.2001.964518 | Distance fields are an important volume representation. A high quality distance field facilitates accurate surface characterization and gradient estimation. However, due to Nyquist's law, no existing volumetric methods based on the linear sampling theory can fully capture surface details, such as corners and edges, in 3D space. We propose a novel complete distance field representation (CDFR) that does not rely on Nyquist's sampling theory. To accomplish this, we construct a volume where each voxel has a complete description of all portions of surface that affect the local distance field. For any desired distance, we are able to extract a surface contour in true Euclidean distance, at any level of accuracy, from the same CDFR representation. Such point-based iso-distance contours have faithful per-point gradients and can be interactively visualized using splatting, providing per-point shaded image quality. We also demonstrate applying CDFR to a cutting edge design for manufacturing application involving high-complexity parts at unprecedented accuracy using only commonly available computational resources. | false | false | ["Jian Huang", "Yan Li", "Roger Crawfis", "Shao-Chiung Lu", "Shuh-Yuan Liou"] | [] | [] | [] |
Vis | 2001 | A memory insensitive technique for large model simplification | 10.1109/VISUAL.2001.964502 | The authors propose three simple, but significant improvements to the OoCS (Out-of-Core Simplification) algorithm of P. Lindstrom (2000) which increase the quality of approximations and extend the applicability of the algorithm to an even larger class of computer systems. The original OoCS algorithm has memory complexity that depends on the size of the output mesh, but no dependency on the size of the input mesh. That is, it can be used to simplify meshes of arbitrarily large size, but the complexity of the output mesh is limited by the amount of memory available. Our first contribution is a version of OoCS that removes the dependency of having enough memory to hold (even) the simplified mesh. With our new algorithm, the whole process is made essentially independent of the available memory on the host computer. Our new technique uses disk instead of main memory, but it is carefully designed to avoid costly random accesses. Our two other contributions improve the quality of the approximations generated by OoCS. We propose a scheme for preserving surface boundaries which does not use connectivity information, and a scheme for constraining the position of the "representative vertex" of a grid cell to an optimal position inside the cell. | false | false | ["Peter Lindstrom 0001", "Cláudio T. Silva"] | [] | [] | [] |
Vis | 2001 | A simple algorithm for surface denoising | 10.1109/VISUAL.2001.964500 | We present a simple denoising technique for geometric data represented as a semiregular mesh, based on locally adaptive Wiener filtering. The degree of denoising is controlled by a single parameter (an estimate of the relative noise level) and the time required for denoising is independent of the magnitude of the estimate. The performance of the algorithm is sufficiently fast to allow interactive local denoising. | false | false | ["Jianbo Peng", "Vasily Strela", "Denis Zorin"] | [] | [] | [] |
Vis | 2001 | A tetrahedra-based stream surface algorithm | 10.1109/VISUAL.2001.964506 | This paper presents a new algorithm for the calculation of stream surfaces for tetrahedral grids. It propagates the surface through the tetrahedra, one at a time, calculating the intersections with the tetrahedral faces. The method allows us to incorporate topological information from the cells, e.g. critical points. The calculations are based on barycentric coordinates, since this simplifies the theory and the algorithm. The stream surfaces are ruled surfaces inside each cell, and their construction starts with line segments on the faces. Our method supports the analysis of velocity fields resulting from computational fluid dynamics (CFD) simulations. | false | false | ["Gerik Scheuermann", "Tom Bobach", "Hans Hagen", "Karim Mahrous", "Bernd Hamann", "Kenneth I. Joy", "Wolfgang Kollmann"] | [] | [] | [] |
Vis | 2001 | A virtual environment for simulated rat dissection: a case study of visualization for astronaut training | 10.1109/VISUAL.2001.964564 | Animal dissection for the scientific examination of organ subsystems is a delicate procedure. Performing this procedure under the complex environment of microgravity presents additional challenges because of the limited training opportunities available that can recreate the altered gravity environment. Traditional astronaut crew training often occurs several months in advance of experimentation, provides limited realism, and involves complicated logistics. We have developed an interactive virtual environment that can simulate several common tasks performed during animal dissection. In this paper, we describe the imaging modality used to reconstruct the rat, provide an overview of the simulation environment and briefly discuss some of the techniques used to manipulate the virtual rat. | false | false | ["Kevin Montgomery", "Cynthia Bruyns", "Simon Wildermuth"] | [] | [] | [] |
Vis | 2001 | Accelerated volume ray-casting using texture mapping | 10.1109/VISUAL.2001.964521 | Acceleration techniques for volume ray-casting are primarily based on pre-computed data structures that allow one to efficiently traverse empty or homogeneous regions. In order to display volume data that successively undergoes color lookups, however, the data structures have to be re-built continuously. In this paper we propose a technique that circumvents this drawback using hardware accelerated texture mapping. In a first rendering pass we employ graphics hardware to interactively determine for each ray where the material is hit. In a second pass ray-casting is performed, but ray traversal starts right in front of the previously determined regions. The algorithm enables interactive classification and it considerably accelerates the view dependent display of selected materials and surfaces from volume data. In contrast to other techniques that are solely based on texture mapping our approach requires less memory and accurately performs the composition of material contributions along the ray. | false | false | ["Rüdiger Westermann", "Bernd Sevenich"] | [] | [] | [] |
Vis | 2001 | An immersive virtual environment for DT-MRI volume visualization applications: a case study | 10.1109/VISUAL.2001.964545 | We describe a virtual reality environment for visualizing tensor-valued volumetric datasets acquired with diffusion tensor magnetic resonance imaging (DT-MRI). We have prototyped a virtual environment that displays geometric representations of the volumetric second-order diffusion tensor data and are developing interaction and visualization techniques for two application areas: studying changes in white-matter structures after gamma-knife capsulotomy and pre-operative planning for brain tumor surgery. Our feedback shows that compared to desktop displays, our system helps the user better interpret the large and complex geometric models, and facilitates communication among a group of users. | false | false | ["Song Zhang 0004", "Çagatay Demiralp", "Daniel F. Keefe", "M. DaSilva", "David H. Laidlaw", "Benjamin D. Greenberg", "Peter J. Basser", "Carlo Pierpaoli", "E. A. Chiocca", "Thomas S. Deisboeck"] | [] | [] | [] |
Vis | 2001 | Approximate shading for the re-illumination of synthetic images | 10.1109/VISUAL.2001.964535 | Presents a method to estimate illumination dependent properties in image synthesis prior to rendering. A preprocessing step is described in which a linear image basis is developed and a lighting-independent formulation defined. A reflection function, similar to hemispherical reflectance, approximates normal Lambertian shading. Intensity errors resulting from this approximation are reduced by use of a polynomial gamma correction function and scaling to a normalized display range. This produces images that are similar to normal Lambertian shading without employing the maximum (max) function. For a single object view, images can then be expressed in a linear form so that lighting direction can be factored out. During normal rendering, image quantities for arbitrary light directions can be found without rendering. This method is demonstrated for estimating image intensity and level-of-detail error prior to rendering an object. | false | false | ["Randy K. Scoggins", "Raghu Machiraju", "Robert J. Moorhead II"] | [] | [] | [] |
Vis | 2001 | Archaeological Data Visualization in VR: Analysis of Lamp Finds at the Great Temple of Petra, a Case Study | 10.1109/VISUAL.2001.964560 | Presents the results of an evaluation of the ARCHAVE (ARCHAeological Virtual Environment) system, an immersive virtual reality (VR) environment for archaeological research. ARCHAVE is implemented in a Cave. The evaluation studied researchers analyzing lamp and coin finds throughout the excavation trenches at the Petra Great Temple site in Jordan. Experienced archaeologists used our system to study excavation data, confirming existing hypotheses and postulating new theories they had not been able to discover without the system. ARCHAVE provided access to the excavation database, and researchers were able to examine the data in the context of a life-size representation of the present-day architectural ruins of the temple. They also had access to a miniature model for site-wide analysis. Because users quickly became comfortable with the interface, they concentrated their efforts on examining the data being retrieved and displayed. The immersive VR visualization of the recovered information gave them the opportunity to explore it in a new and dynamic way and, in several cases, enabled them to make discoveries that opened new lines of investigation about the excavation. | false | false | ["Daniel Acevedo Feliz", "Eileen Vote", "David H. Laidlaw", "Martha Sharp Joukowsky"] | [] | [] | [] |
Vis | 2001 | Attribute preserving dataset simplification | 10.1109/VISUAL.2001.964501 | The paper describes a novel application of feature preserving mesh simplification to the problem of managing large, multidimensional datasets during scientific visualization. To allow this, we view a scientific dataset as a triangulated mesh of data elements, where the attributes embedded in each element form a set of properties arrayed across the surface of the mesh. Existing simplification techniques were not designed to address the high dimensionality that exists in these types of datasets. In addition, vertex operations that relocate, insert, or remove data elements may need to be modified or restricted. Principal component analysis provides an algorithm-independent method for compressing a dataset's dimensionality during simplification. Vertex locking forces certain data elements to maintain their spatial locations; this technique is also used to guarantee a minimum density in the simplified dataset. The result is a visualization that significantly reduces the number of data elements to display, while at the same time ensuring that high-variance regions of potential interest remain intact. We apply our techniques to a number of well-known feature preserving algorithms, and demonstrate their applicability in a real-world context by simplifying a multidimensional weather dataset. Our results show a significant improvement in execution time with only a small reduction in accuracy; even when the dataset was simplified to 10% of its original size, average per attribute error was less than 1%. | false | false | ["Jason D. Walter", "Christopher G. Healey"] | [] | [] | [] |
Vis | 2,001 | Case study on real-time visualization of virtual Tubingen on commodity PC hardware | 10.1109/VISUAL.2001.964544 | For psychophysical studies in spatial cognition a virtual model of the picturesque old town of Tubingen has been constructed. In order to perform psychophysical experiments in highly realistic virtual environments the model is based on high quality texture maps adding up to several hundreds of MBytes. To accomplish the required real-time frame updates, view frustum and occlusion culling without visibility pre-processing, levels of detail, and texture compression are applied in an interleaved manner. Shared memory communication and a standard PC with two commodity graphics cards are used to enable the powerful combination of those techniques because this combination is not yet available on a single graphics card. | false | false | [
"Michael Meißner",
"Jasmina Orman",
"Stephan J. Braun"
] | [] | [] | [] |
Vis | 2,001 | Case study: an environment for understanding protein simulations using game graphics | 10.1109/VISUAL.2001.964547 | We describe a visualization system designed for interactive study of proteins in the field of computational biology. Our system incorporates multiple, custom, three-dimensional and two-dimensional linked views of the proteins. We take advantage of modern commodity graphics cards, which are typically designed for games rather than scientific visualization applications, to provide instantaneous linking between views and three-dimensional interactivity on standard personal computers. Furthermore, we anticipate the usefulness of game techniques such as bump maps and skinning for scientific applications. | false | false | [
"Donna L. Gresh",
"Frank Suits",
"Yuk Yin Sham"
] | [] | [] | [] |
Vis | 2,001 | Case study: application of feature tracking to analysis of autoignition simulation data | 10.1109/VISUAL.2001.964551 | The focus of this paper is to evaluate the usefulness of some basic feature tracking algorithms as analysis tools for combustion datasets by application to a dataset modeling autoignition. Features defined as areas of high intermediate concentrations were examined to explore the initial phases in the autoignition process. | false | false | [
"Wendy S. Koegler"
] | [] | [] | [] |
Vis | 2,001 | Case study: interacting with cortical flat maps of the human brain | 10.1109/VISUAL.2001.964553 | The complex geometry of the human brain contains many folds and fissures, making it impossible to view the entire surface at once. Since most of the cortical activity occurs on these folds, it is desirable to be able to view the entire surface of the brain in a single view. This can be achieved using quasi-conformal flat maps of the cortical surface. Computational and visualization tools are now needed to be able to interact with these flat maps of the brain to gain information about spatial and functional relationships that might not otherwise be apparent. Such information can contribute to earlier diagnostic tools for diseases and improved treatment. Our group is developing visualization and analysis tools that will help elucidate new information about the human brain through the interaction between a cortical surface and its corresponding quasiconformal flat map. | false | false | [
"Monica K. Hurdal",
"Kevin W. Kurtz",
"David C. Banks"
] | [] | [] | [] |
Vis | 2,001 | Case study: medical Web service for the automatic 3D documentation for neuroradiological diagnosis | 10.1109/VISUAL.2001.964542 | The case study presents a medical Web service for the automatic analysis of CTA (computer tomography angiography) datasets. It aims at the detection and evaluation of intracranial aneurysms which are malformations of cerebral blood vessels. To obtain a standardized 3D visualization, digital videos are automatically generated. The time-consuming video production caused by the manual delineation of structures, software based volume rendering, and the interactive definition of an optimized camera path is considerably improved with a fully automatic strategy. Therefore, a previously suggested approach (C. Rezk-Salama, 2000) is applied which uses an optimized transfer function as a template and automatically adapts it to an individual dataset. Furthermore, we introduce hardware-accelerated morphologic filtering in order to detect the location of mid-size and giant aneurysms. The actual generation of the video is finally integrated into a hardware accelerated off-screen rendering process based on 3D texture mapping, ensuring fast visualization of high quality. Overall, clinical routine can be considerably assisted by providing a Web based service combining automatic detection and standardized visualization. | false | false | [
"Sabine Iserhardt-Bauer",
"Peter Hastreiter",
"Thomas Ertl",
"K. Eberhardt",
"Bernd Tomandl"
] | [] | [] | [] |
Vis | 2,001 | Case study: reconstruction, visualization and quantification of neuronal fiber pathways | 10.1109/VISUAL.2001.964549 | It is of significant interest for neurological studies to determine and visualize neuronal fiber pathways in the human brain. By exploiting the capability of diffusion tensor magnetic resonance imaging to detect local orientations of neuronal fibers, we have developed a system of algorithms to reconstruct, visualize and quantify neuronal fiber pathways in vivo. Illustrative results show that the system is a promising tool for visual analysis of fiber connectivity and quantitative studies of neuronal fibers. | false | false | [
"Zhaohua Ding",
"John C. Gore",
"Adam W. Anderson"
] | [] | [] | [] |
Vis | 2,001 | Case study: visual debugging of cluster hardware | 10.1109/VISUAL.2001.964543 | This paper presents a novel use of visualization applied to debugging the Cplant/sup TM/ cluster hardware at Sandia National Laboratories. As commodity cluster systems grow in popularity and grow in size, tracking component failures within the hardware will become more and more difficult. We have developed a tool that facilitates visual debugging of errors within the switches and cables connecting the processors. Combining an abstract system model with color-coding for both error and job information enables failing components to be identified. | false | false | [
"Patricia Crossno",
"Rena A. Haynes"
] | [] | [] | [] |
Vis | 2,001 | Case study: visualization of particle track data | 10.1109/VISUAL.2001.964552 | The Relativistic Heavy Ion Collider (RHIC) experiment at the Brookhaven National Lab is designed to study how the universe came into being. It is believed that after the Big Bang, the universe expanded and cooled, consisting of a soup of quarks, gluons, electrons and neutrinos. As the temperature lowered, electrons combined with protons and formed neutral atoms. Later, clouds of atoms contracted into stars. In this paper, we describe how techniques of volume rendering and information visualization are used to visualize the large particle track data set generated from this high energy physics experiment. The system, called TrackVis, is based on our earlier work of VolVis - Volume Visualization software. Example images of real particle collision data are shown, which are helpful to physicists in investigating the behavior of strongly interacting matter at high energy density. | false | false | [
"Xiaoming Wei",
"Arie E. Kaufman",
"Timothy J. Hallman"
] | [] | [] | [] |
Vis | 2,001 | Cell-projection of cyclic meshes | 10.1109/VISUAL.2001.964514 | We present the first algorithm that employs hardware-accelerated cell-projection for direct volume rendering of cyclic meshes, i.e., meshes with visibility cycles. The visibility sorting of a cyclic mesh is performed by an extended topological sorting, which computes and isolates visibility cycles. Measured sorting times are comparable to previously published algorithms, which are, however, restricted to acyclic meshes. In practice, our algorithm is also useful for acyclic meshes as numerical instabilities can lead to false visibility cycles. Our method includes a simple, hardware-assisted algorithm based on image compositing that renders visibility cycles correctly. For tetrahedral meshes this algorithm allows us to render each tetrahedral cell (whether it is part of a cycle or not) by hardware-accelerated cell-projection. In its basic form our method applies only to convex cyclic meshes; however, we present an exact and a simpler but inexact extension of our method for nonconvex meshes. | false | false | [
"Martin Kraus",
"Thomas Ertl"
] | [] | [] | [] |
Vis | 2,001 | Chromatin decondensation: case study of tracking features in confocal data | 10.1109/VISUAL.2001.964546 | In this case study we discuss an interactive feature tracking system and its use for the analysis of chromatin decondensation. Features are described as points in a multidimensional attribute space. Distances between points are used as a measure for feature correspondence. Users can interactively experiment with the correspondence measure in order to gain insight in chromatin movement. In addition, by defining time as an attribute, tracking problems related to noisy confocal data can be circumvented. | false | false | [
"Wim C. de Leeuw",
"Robert van Liere"
] | [] | [] | [] |
Vis | 2,001 | Circular incident edge lists: a data structure for rendering complex unstructured grids | 10.1109/VISUAL.2001.964511 | We present the circular incident edge lists (CIEL), a new data structure and a high-performance algorithm for generating a series of iso-surfaces in a highly unstructured grid. Slicing-based volume rendering is also considered. The CIEL data structure represents all the combinatorial information of the grid, making it possible to optimize the classical propagation from local minima paradigm. The usual geometric structures are replaced by a more efficient combinatorial structure. An active edges list is maintained, and iteratively propagated from an iso-surface to the next one in a very efficient way. The intersected cells incident to each active edge are retrieved, and the intersection polygons are generated by circulating around their facets. This latter feature enables arbitrary irregular cells to be treated, such as those encountered in certain computational fluid dynamics (CFD) simulations. Since the CIEL data structure solely depends on the connections between the cells, it is possible to take into account dynamic changes in the geometry of the mesh and in property values, which only requires the sorted extrema list to be updated. Experiments have shown that our approach is significantly faster than classical methods. The major drawback of our method is its memory consumption, higher than most classical methods. However, experimental results show that it stays within a practical range. | false | false | [
"Bruno Lévy 0001",
"Guillaume Caumon",
"Stéphane Conreaux",
"Xavier Cavin"
] | [] | [] | [] |
Vis | 2,001 | Compressing large polygonal models | 10.1109/VISUAL.2001.964532 | Presents an algorithm that uses partitioning and gluing to compress large triangular meshes which are too complex to fit in main memory. The algorithm is based largely on the existing mesh compression algorithms, most of which require an 'in-core' representation of the input mesh. Our solution is to partition the mesh into smaller submeshes and compress these submeshes separately using existing mesh compression techniques. Since a direct partition of the input mesh is out of the question, instead we partition a simplified mesh and use the partition on the simplified model to obtain a partition on the original model. In order to recover the full connectivity, we present a simple scheme for encoding/decoding the resulting boundary structure from the mesh partition. When compressing large models with few singular vertices, a negligible portion of the compressed output is devoted to gluing information. On desktop computers, we have run experiments on models with millions of vertices, which could not be compressed using standard compression software packages, and have observed compression ratios as high as 17 to 1 using our technique. | false | false | [
"Jeffrey Ho",
"Kuang-chih Lee",
"David J. Kriegman"
] | [] | [] | [] |
Vis | 2,001 | Computed tomography angiography: a case study of peripheral vessel investigation | 10.1109/VISUAL.2001.964555 | This paper deals with vessel exploration based on computed tomography angiography. Large image sequences of the lower extremities are investigated in a clinical environment. Two different approaches for peripheral vessel diagnosis dealing with stenosis and calcification detection are introduced. The paper presents an automated vessel-tracking tool for curved planar reformation. An interactive segmentation tool for bone removal is proposed. | false | false | [
"Armin Kanitsar",
"Rainer Wegenkittl",
"Petr Felkel",
"Dominik Fleischmann",
"Dominique Sandner",
"M. Eduard Gröller"
] | [] | [] | [] |
Vis | 2,001 | Connectivity shapes | 10.1109/VISUAL.2001.964504 | We describe a method to visualize the connectivity graph of a mesh using a natural embedding in 3D space. This uses a 3D shape representation that is based solely on mesh connectivity: the connectivity shape. Given a connectivity, we define its natural geometry as a smooth embedding in space with uniform edge lengths and describe efficient techniques to compute it. Our main contribution is to demonstrate that a surprising amount of geometric information is implicit in the connectivity. We also show how to generate connectivity shapes that approximate given 3D shapes. Potential applications of connectivity shapes to modeling and mesh coding are described. | false | false | [
"Martin Isenburg",
"Stefan Gumhold",
"Craig Gotsman"
] | [] | [] | [] |
Vis | 2,001 | Continuous topology simplification of planar vector fields | 10.1109/VISUAL.2001.964507 | Vector fields can present complex structural behavior, especially in turbulent computational fluid dynamics. The topological analysis of these data sets reduces the information, but one is usually still left with too many details for interpretation. In this paper, we present a simplification approach that removes pairs of critical points from the data set, based on relevance measures. In contrast to earlier methods, no grid changes are necessary, since the whole method uses small local changes of the vector values defining the vector field. An interpretation in terms of bifurcations underlines the continuous, natural flavor of the algorithm. | false | false | [
"Xavier Tricoche",
"Gerik Scheuermann",
"Hans Hagen"
] | [] | [] | [] |
Vis | 2,001 | Distance-field based skeletons for virtual navigation | 10.1109/VISUAL.2001.964517 | We present a generic method for rapid flight planning, virtual navigation and effective camera control in a volumetric environment. Directly derived from an accurate distance from boundary (DFB) field, our automatic path planning algorithm rapidly generates centered flight paths, a skeleton, in the navigable region of the virtual environment. Based on precomputed flight paths and the DFB field, our dual-mode physically based camera control model supports a smooth, safe, and sticking-free virtual navigation with six degrees of freedom. By using these techniques, combined with accelerated volume rendering, we have successfully developed a real-time virtual colonoscopy system on low-cost PCs and confirmed the high speed, high accuracy and robustness of our techniques on more than 40 patient datasets. | false | false | [
"Ming Wan",
"Frank Dachille",
"Arie E. Kaufman"
] | [] | [] | [] |
Vis | 2,001 | Dynamic Shadow Removal from Front Projection Displays | 10.1109/VISUAL.2001.964509 | Front-projection display environments suffer from a fundamental problem: users and other objects in the environment can easily and inadvertently block projectors, creating shadows on the displayed image. We introduce a technique that detects and corrects transient shadows in a multi-projector display. Our approach is to minimize the difference between predicted (generated) and observed (camera) images by continuous modification of the projected image values for each display device. We speculate that the general predictive monitoring framework introduced here is capable of addressing more general radiometric consistency problems. Using an automatically-derived relative position of cameras and projectors in the display environment and a straightforward color correction scheme, the system renders an expected image for each camera location. Cameras observe the displayed image, which is compared with the expected image to detect shadowed regions. These regions are transformed to the appropriate projector frames, where corresponding pixel values are increased. In display regions where more than one projector contributes to the image, shadow regions are eliminated. We demonstrate an implementation of the technique in a multiprojector system. | false | false | [
"Christopher O. Jaynes",
"Stephen B. Webb",
"R. Matt Steele",
"Michael S. Brown",
"W. Brent Seales"
] | [] | [] | [] |
Vis | 2,001 | Efficient Adaptive Simplification of Massive Meshes | 10.1109/VISUAL.2001.964503 | The growing availability of massive polygonal models, and the inability of most existing visualization tools to work with such data, has created a pressing need for memory-efficient methods capable of simplifying very large meshes. In this paper, we present a method for performing adaptive simplification of polygonal meshes that are too large to fit in-core. Our algorithm performs two passes over an input mesh. In the first pass, the model is quantized using a uniform grid, and surface information is accumulated in the form of quadrics and dual quadrics. This sampling is then used to construct a BSP-tree in which the partitioning planes are determined by the dual quadrics. In the final pass, the original vertices are clustered using the BSP-tree, yielding an adaptive approximation of the original mesh. The BSP-tree describes a natural simplification hierarchy, making it possible to generate a progressive transmission and construct level-of-detail representations. In this way, the algorithm provides some of the features associated with more expensive edge contraction methods while maintaining greater computational efficiency. In addition to performing adaptive simplification, our algorithm exhibits output-sensitive memory requirements and allows fine control over the size of the simplified mesh. | false | false | [
"Eric Shaffer",
"Michael Garland"
] | [] | [] | [] |
Vis | 2,001 | Enridged Contour Maps | 10.1109/VISUAL.2001.964495 | The visualization of scalar functions of two variables is a classic and ubiquitous application. We present a new method to visualize such data. The method is based on a nonlinear mapping of the function to a height field, followed by visualization as a shaded mountain landscape. The method is easy to implement and efficient, and leads to intriguing and insightful images: The visualization is enriched by adding ridges. Three types of applications are discussed: visualization of iso-levels, clusters (multivariate data visualization), and dense contours (flow visualization). | false | false | [
"Jarke J. van Wijk",
"Alexandru C. Telea"
] | [] | [] | [] |
Vis | 2,001 | EWA volume splatting | 10.1109/VISUAL.2001.964490 | In this paper we present a novel framework for direct volume rendering using a splatting approach based on elliptical Gaussian kernels. To avoid aliasing artifacts, we introduce the concept of a resampling filter combining a reconstruction with a low-pass kernel. Because of the similarity to Heckbert's EWA (elliptical weighted average) filter for texture mapping we call our technique EWA volume splatting. It provides high image quality without aliasing artifacts or excessive blurring even with non-spherical kernels. Hence it is suitable for regular, rectilinear, and irregular volume data sets. Moreover, our framework introduces a novel approach to compute the footprint function. It facilitates efficient perspective projection of arbitrary elliptical kernels at very little additional cost. Finally, we show that EWA volume reconstruction kernels can be reduced to surface reconstruction kernels. This makes our splat primitive universal in reconstructing surface and volume data. | false | false | [
"Matthias Zwicker",
"Hanspeter Pfister",
"Jeroen van Baar",
"Markus H. Gross"
] | [] | [] | [] |
Vis | 2,001 | Fast detection of meaningful isosurfaces for volume data visualization | 10.1109/VISUAL.2001.964515 | Automatic detection of meaningful isosurfaces is important for producing informative visualizations of volume data, especially when no information about the data origin and imaging protocol is available. We propose a computationally efficient method for the automated detection of intensity transitions in volume data. In this approach, the dominant transitions correspond to clear maxima in cumulative Laplacian-weighted gray value histograms. Only one pass through the data volume is required to compute the histogram. Several other features which may be useful for exploration of data of unknown origin can be efficiently computed in a similar manner. The detected intensity transitions can be used for setting of visualization parameters for surface rendering, as well as for direct volume rendering of 3D datasets. When using surface rendering, the detected dominant intensity transition values correspond to the optimal surface isovalues for extraction of boundaries of the objects of interest. In direct volume rendering, such transitions are important for generation of the transfer functions, which are used to assign visualization properties to data voxels and determine the appearance of the rendered image. The proposed method is illustrated by examples with synthetic data as well as real biomedical datasets. | false | false | [
"Vladimir Pekar",
"Rafael Wiemker",
"Daniel Bystrov"
] | [] | [] | [] |
Vis | 2,001 | Fast extraction of adaptive multiresolution meshes with guaranteed properties from volumetric data | 10.1109/VISUAL.2001.964524 | We present a new algorithm for extracting adaptive multiresolution triangle meshes from volume datasets. The algorithm guarantees that the topological genus of the generated mesh is the same as the genus of the surface embedded in the volume dataset at all levels of detail. In addition to this "hard constraint" on the genus of the mesh, the user can choose to specify some number of soft geometric constraints, such as triangle aspect ratio, minimum or maximum total number of vertices, minimum and/or maximum triangle edge lengths, maximum magnitude of various error metrics per triangle or vertex, including maximum curvature (area) error, maximum distance to the surface, and others. The mesh extraction process is fully automatic and does not require manual adjusting of parameters to produce the desired results as long as the user does not specify incompatible constraints. The algorithm robustly handles special topological cases, such as trimmed surfaces (intersections of the surface with the volume boundary), and manifolds with multiple disconnected components (several closed surfaces embedded in the same volume dataset). The meshes may self-intersect at coarse resolutions. However, the self-intersections are corrected automatically as the resolution of the meshes increases. We show several examples of meshes extracted from complex volume datasets. | false | false | [
"Marcel Gavriliu",
"Joel Carranza",
"David E. Breen",
"Alan H. Barr"
] | [] | [] | [] |
Vis | 2,001 | Fitting Subdivision Surfaces | 10.1109/VISUAL.2001.964527 | We introduce a new algorithm for fitting a Catmull-Clark subdivision surface to a given shape within a prescribed tolerance, based on the method of quasi-interpolation. The fitting algorithm is fast, local and scales well since it does not require the solution of linear systems. Its convergence rate is optimal for regular meshes and our experiments show that it behaves very well for irregular meshes. We demonstrate the power and versatility of our method with examples from interactive modeling, surface fitting, and scientific visualization. | false | false | [
"Nathan Litke",
"Adi Levin",
"Peter Schröder"
] | [] | [] | [] |
Vis | 2,001 | Graphical strategies to convey functional relationships in the human brain: a case study | 10.1109/VISUAL.2001.964556 | Brain imaging methods used in experimental brain research such as Positron Emission Tomography (PET) and Functional Magnetic Resonance (fMRI) require the analysis of large amounts of data. Exploratory statistical methods can be used to generate new hypotheses and to provide a reliable measure of a given effect. Typically, researchers report their findings by listing those regions which show significant statistical activity in a group of subjects under some experimental condition or task. A number of methods create statistical parametric maps (SPMs) of the brain on a voxel-basis. In our approach statistics are computed not on individual voxels but on predefined anatomical regions-of-interest (ROIs). A correlation coefficient is used to quantify similarity in response for various regions during an experimental setting. Since the functional inter-relationships can become rather complex and spatially widespread, they are best understood in the context of the underlying 3-D brain anatomy. However, despite the power of the 3-D model, the relative location of ROIs in 3-D can be obscured due to the inherent problem of presenting 3-D spatial information on a 2-D screen. In order to address this problem, we have explored a number of visualization techniques to aid the brain researcher in exploring the spatial relationships of brain activity. In this paper we present a novel 3-D interface that allows the interactive exploration of correlation datasets. | false | false | [
"Tomihisa Welsh",
"Klaus Mueller 0001",
"Wei Zhu 0008",
"Nora D. Volkow",
"Jeffrey Meade"
] | [] | [] | [] |
Vis | 2,001 | Hardware-software-balanced resampling for the interactive visualization of unstructured grids | 10.1109/VISUAL.2001.964512 | In this paper we address the problem of interactively resampling unstructured grids. Three algorithms are presented. They all allow adaptive resampling of an unstructured grid on a multiresolution hierarchy of arbitrarily sized cartesian grids according to a varying element size. Two of the algorithms presented take advantage of hardware accelerated polygon rendering and 2D texture mapping. In exploiting new features of modern PC graphics adapters, the first algorithm tries to significantly minimize the number of polygons to be rendered. Reducing rasterization requirements is the main goal of the second algorithm, which distributes the computational workload differently between the main processor and the graphics chip. By comparing them to a new pure software approach, an optimal software-hardware balance is studied. We end up with a hybrid approach which greatly improves the performance of hardware assisted resampling by involving the main processor to a higher degree and thus enabling resampling at nearly interactive rates. | false | false | [
"Manfred Weiler",
"Thomas Ertl"
] | [] | [] | [] |
Vis | 2,001 | Hybrid simplification: combining multi-resolution polygon and point rendering | 10.1109/VISUAL.2001.964491 | Multi-resolution hierarchies of polygons and more recently of points are familiar and useful tools for achieving interactive rendering rates. We present an algorithm for tightly integrating the two into a single hierarchical data structure. The trade-off between rendering portions of a model with points or with polygons is made automatically. Our approach to this problem is to apply a bottom-up simplification process involving not only polygon simplification operations, but point replacement and point simplification operations as well. Given one or more surface meshes, our algorithm produces a hybrid hierarchy comprising both polygon and point primitives. This hierarchy may be optimized according to the relative performance characteristics of these primitive types on the intended rendering platform. We also provide a range of aggressiveness for performing point replacement operations. The most conservative approach produces a hierarchy that is better than a purely polygonal hierarchy in some places, and roughly equal in others. A less conservative approach can trade reduced complexity at the far viewing ranges for some increased complexity at the near viewing ranges. We demonstrate our approach on a number of input models, achieving primitive counts that are 1.3 to 4.7 times smaller than those of triangle-only simplification. | false | false | [
"Jonathan D. Cohen 0001",
"Daniel G. Aliaga",
"Weiqiang Zhang"
] | [] | [] | [] |
Vis | 2,001 | Integrating occlusion culling with view-dependent rendering | 10.1109/VISUAL.2001.964534 | We present an approach that integrates occlusion culling within the view-dependent rendering framework. View-dependent rendering provides the ability to change level of detail over the surface seamlessly and smoothly in real-time. The exclusive use of view-parameters to perform level-of-detail selection causes even occluded regions to be rendered in high level of detail. To overcome this serious drawback we have integrated occlusion culling into the level selection mechanism. Because computing exact visibility is expensive and it is currently not possible to perform this computation in real time, we use a visibility estimation technique instead. Our approach reduces dramatically the resolution at occluded regions. | false | false | [
"Jihad El-Sana",
"Neta Sokolovsky",
"Cláudio T. Silva"
] | [] | [] | [] |
Vis | 2,001 | Interactive volume rendering using multi-dimensional transfer functions and direct manipulation widgets | 10.1109/VISUAL.2001.964519 | Most direct volume renderings produced today employ one-dimensional transfer functions, which assign color and opacity to the volume based solely on the single scalar quantity which comprises the dataset. Though they have not received widespread attention, multi-dimensional transfer functions are a very effective way to extract specific material boundaries and convey subtle surface properties. However, identifying good transfer functions is difficult enough in one dimension, let alone two or three dimensions. This paper demonstrates an important class of three-dimensional transfer functions for scalar data (based on data value, gradient magnitude, and a second directional derivative), and describes a set of direct manipulation widgets which make specifying such transfer functions intuitive and convenient. We also describe how to use modern graphics hardware to interactively render with multi-dimensional transfer functions. The transfer functions, widgets, and hardware combine to form a powerful system for interactive volume exploration. | false | false | [
"Joe Michael Kniss",
"Gordon L. Kindlmann",
"Charles D. Hansen"
] | [
"BP"
] | [] | [] |
Vis | 2,001 | Lagrangian-Eulerian Advection for Unsteady Flow Visualization | 10.1109/VISUAL.2001.964493 | In this paper, we propose a new technique to visualize dense representations of time-dependent vector fields based on a Lagrangian-Eulerian Advection (LEA) scheme. The algorithm produces animations with high spatio-temporal correlation at interactive rates. With this technique, every still frame depicts the instantaneous structure of the flow, whereas an animated sequence of frames reveals the motion a dense collection of particles would take when released into the flow. The simplicity of both the resulting data structures and the implementation suggest that LEA could become a useful component of any scientific visualization toolkit concerned with the display of unsteady flows. | false | false | [
"Bruno Jobard",
"Gordon Erlebacher",
"M. Yousuff Hussaini"
] | [] | [] | [] |
Vis | 2,001 | Multiresolution feature extraction for unstructured meshes | 10.1109/VISUAL.2001.964523 | We present a framework to extract mesh features from unstructured two-manifold surfaces. Our method computes a collection of piecewise linear curves describing the salient features of surfaces, such as edges and ridge lines. We extend these basic techniques to a multiresolution setting which improves the quality of the results and accelerates the extraction process. The extraction process is semi-automatic, that is, the user is required to input a few control parameters and to select the operators to be applied to the input surface. Our mesh feature extraction algorithm can be used as a preprocessor for a variety of applications in geometric modeling including mesh fairing, subdivision and simplification. | false | false | [
"Andreas Hubeli",
"Markus H. Gross"
] | [] | [] | [] |
Vis | 2,001 | Nonlinear virtual colon unfolding | 10.1109/VISUAL.2001.964540 | The majority of virtual endoscopy techniques tries to simulate a real endoscopy. A real endoscopy does not always give the optimal information due to the physical limitations it is subject to. In this paper, we deal with the unfolding of the surface of the colon as a possible visualization technique for diagnosis and polyp detection. A new two-step technique is presented which deals with the problems of double appearance of polyps and nonuniform sampling that other colon unfolding techniques suffer from. In the first step, a distance map from a central path induces nonlinear rays for unambiguous parameterization of the surface. The second step compensates for locally varying distortions of the unfolded surface. A technique similar to magnification fields in information visualization is hereby applied. The technique produces a single view of a complete, virtually dissected colon. | false | false | [
"Anna Vilanova",
"Rainer Wegenkittl",
"Andreas König 0002",
"M. Eduard Gröller"
] | [] | [] | [] |
Vis | 2,001 | Nonmanifold subdivision | 10.1109/VISUAL.2001.964528 | Commonly-used subdivision schemes require manifold control meshes and produce manifold surfaces. However, it is often necessary to model nonmanifold surfaces, such as several surface patches meeting at a common boundary. In this paper, we describe a subdivision algorithm that makes it possible to model nonmanifold surfaces. Any triangle mesh, subject only to the restriction that no two vertices of any triangle coincide, can serve as an input to the algorithm. Resulting surfaces consist of collections of manifold patches joined along nonmanifold curves and vertices. If desired, constraints may be imposed on the tangent planes of manifold patches sharing a curve or a vertex. The algorithm is an extension of a well-known Loop subdivision scheme, and uses techniques developed for piecewise smooth surfaces. | false | false | [
"Lexing Ying",
"Denis Zorin"
] | [] | [] | [] |
Vis | 2,001 | Normal bounds for subdivision-surface interference detection | 10.1109/VISUAL.2001.964529 | Subdivision surfaces are an attractive representation when modeling arbitrary-topology free-form surfaces and show great promise for applications in engineering design and computer animation. Interference detection is a critical tool in many of these applications. In this paper, we derive normal bounds for subdivision surfaces and use these to develop an efficient algorithm for (self-) interference detection. | false | false | [
"Eitan Grinspun",
"Peter Schröder"
] | [] | [] | [] |
Vis | 2,001 | Optimal regular volume sampling | 10.1109/VISUAL.2001.964498 | The classification of volumetric data sets as well as their rendering algorithms are typically based on the representation of the underlying grid. Grid structures based on a Cartesian lattice are the de-facto standard for regular representations of volumetric data. In this paper we introduce a more general concept of regular grids for the representation of volumetric data. We demonstrate that a specific type of regular lattice-the so-called body-centered cubic-is able to represent the same data set as a Cartesian grid to the same accuracy but with 29.3% fewer samples. This speeds up traditional volume rendering algorithms by the same ratio, which we demonstrate by adopting a splatting implementation for these new lattices. We investigate different filtering methods required for computing the normals on this lattice. The lattice representation results also in lossless compression ratios that are better than previously reported. Although other regular grid structures achieve the same sample efficiency, the body-centered cubic is particularly easy to use. The only assumption necessary is that the underlying volume is isotropic and band-limited-an assumption that is valid for most practical data sets. | false | false | [
"Thomas Theußl",
"Torsten Möller",
"M. Eduard Gröller"
] | [] | [] | [] |
Vis | 2,001 | PingTV: A Case Study in Visual Network Monitoring | 10.1109/VISUAL.2001.964541 | PingTV generates a logical map of a network that is used as an overlay on a physical geographical image of the location from the user perspective (buildings, floors within buildings, etc.). PingTV is used at Illinois State University as a visualization tool to communicate real-time network conditions to the university community via a dedicated channel on the campus cable TV system. Colored symbols allow students and staff to discern high-congestion "rush hours" and understand why their specific Internet connectivity is "broken" from the wide range of potential causes. Lessons learned include the use of color to visually convey confidence intervals using color shading and the visualization of cyclical network traffic patterns. Our implementation is general and flexible with potential for application for other domains. | false | false | [
"Alexander Gubin",
"William Yurcik",
"Larry Brumbaugh"
] | [] | [] | [] |
Vis | 2,001 | PixelFlex: a reconfigurable multi-projector display system | 10.1109/VISUAL.2001.964508 | This paper presents PixelFlex - a spatially reconfigurable multi-projector display system. The PixelFlex system is composed of ceiling-mounted projectors, each with computer-controlled pan, tilt, zoom and focus; and a camera for closed-loop calibration. Working collectively, these controllable projectors function as a single logical display capable of being easily modified into a variety of spatial formats of differing pixel density, size and shape. New layouts are automatically calibrated within minutes to generate the accurate warping and blending functions needed to produce seamless imagery across planar display surfaces, thus giving the user the flexibility to quickly create, save and restore multiple screen configurations. Overall, PixelFlex provides a new level of automatic reconfigurability and usage, departing from the static, one-size-fits-all design of traditional large-format displays. As a front-projection system, PixelFlex can be installed in most environments with space constraints and requires little or no post-installation mechanical maintenance because of the closed-loop calibration. | false | false | [
"Ruigang Yang",
"David Gotz",
"Justin Hensley",
"Herman Towles",
"Michael S. Brown"
] | [] | [] | [] |
Vis | 2,001 | Point set surfaces | 10.1109/VISUAL.2001.964489 | We advocate the use of point sets to represent shapes. We provide a definition of a smooth manifold surface from a set of points close to the original surface. The definition is based on local maps from differential geometry, which are approximated by the method of moving least squares (MLS). We present tools to increase or decrease the density of the points, thus, allowing an adjustment of the spacing among the points to control the fidelity of the representation. To display the point set surface, we introduce a novel point rendering technique. The idea is to evaluate the local maps according to the image resolution. This results in high quality shading effects and smooth silhouettes at interactive frame rates. | false | false | [
"Marc Alexa",
"Johannes Behr",
"Daniel Cohen-Or",
"Shachar Fleishman",
"David Levin",
"Cláudio T. Silva"
] | [] | [] | [] |
Vis | 2,001 | POP: A Hybrid Point and Polygon Rendering System for Large Data | 10.1109/VISUAL.2001.964492 | We introduce a simple but effective extension to the existing pure point rendering systems. Rather than using only points, we use both points and polygons to represent and render large mesh models. We start from triangles as leaf nodes and build up a hierarchical tree structure with intermediate nodes as points. During the rendering, the system determines whether to use a point (of a certain intermediate level node) or a triangle (of a leaf node) for display depending on the screen contribution of each node. While points are used to speedup the rendering of distant objects, triangles are used to ensure the quality of close objects. Our method can accelerate the rendering of large models, compromising little in image quality. | false | false | [
"Baoquan Chen",
"Minh Xuan Nguyen"
] | [] | [] | [] |
Vis | 2,001 | Quantitative comparative evaluation of 2D vector field visualization methods | 10.1109/VISUAL.2001.964505 | Presents results from a user study that compared six visualization methods for 2D vector data. Two methods used different distributions of short arrows, two used different distributions of integral curves, one used wedges located to suggest flow lines, and the final one was line-integral convolution (LIC). We defined three simple but representative tasks for users to perform using visualizations from each method: (1) locating all critical points in an image, (2) identifying critical point types, and (3) advecting a particle. The results show different strengths and weaknesses for each method. We found that users performed better with methods that: (1) showed the sign of vectors within the vector field, (2) visually represented integral curves, and (3) visually represented the locations of critical points. These results provide quantitative support for some of the anecdotal evidence concerning visualization methods. The tasks and testing framework also provide a basis for comparing other visualization methods, for creating more effective methods and for defining additional tasks to further understand tradeoffs among methods. They may also be useful for evaluating 2D vectors on 2D surfaces embedded in 3D and for defining analogous tasks for 3D visualization methods. | false | false | [
"David H. Laidlaw",
"Robert M. Kirby",
"J. Scott Davidson",
"Timothy S. Miller",
"Marco da Silva",
"William H. Warren",
"Michael J. Tarr"
] | [] | [] | [] |
Vis | 2,001 | Real-time decompression and visualization of animated volume data | 10.1109/VISUAL.2001.964531 | Interactive exploration of animated volume data is required by many applications, but the huge amount of computational time and storage space needed for rendering does not yet allow the visualization of animated volumes. In this paper, we introduce an algorithm running at interactive frame rates using 3D wavelet transforms that allows for any wavelet, motion compensation techniques and various encoding schemes of the resulting wavelet coefficients to be used. We analyze different families and orders of wavelets for compression ratio and the introduced error. We use a quantization that has been optimized for the visual impression of the reconstructed volume, independent of the viewing algorithm. This enables us to achieve very high compression ratios while still being able to reconstruct the volume with as few visual artifacts as possible. A further improvement of the compression ratio has been achieved by applying a motion compensation scheme to exploit temporal coherency. Using these schemes, we are able to decompress each volume of our animation at interactive frame rates, while visualizing these decompressed volumes on a single PC. We also present a number of improved visualization algorithms for high-quality display using OpenGL hardware running at interactive frame rates on a standard PC. | false | false | [
"Stefan Guthe",
"Wolfgang Straßer"
] | [] | [] | [] |
Vis | 2,001 | RTVR-a flexible Java library for interactive volume rendering | 10.1109/VISUAL.2001.964522 | This paper presents several distinguishing design features of RTVR-a Java-based library for real-time volume rendering. We describe, how the careful design of data structures, which in our case are based on voxel enumeration, and an intelligent use of lookup tables enable interactive volume rendering even on low-end PC hardware. By assigning voxels to distinct objects within the volume and by using an individual setup and combination of look-up tables for each object, object-aware rendering is performed: different transfer functions, shading models, and also compositing modes can be mixed within a single scene to depict each object in the most appropriate way, while still providing rendering results in real-time. While providing frame rates similar to volume visualization using 3D consumer hardware, the approach utilized by RTVR offers much more flexibility and extensibility due to its pure software nature. Furthermore, due to the memory-efficiency of the data representation and the implementation in Java, RTVR can be used to provide volume viewing facilities over low-bandwidth networks, with almost full control over rendering and visualization mapping parameters (clipping, shading, compositing, transfer function) for the user. This paper also addresses specific problems which arise by the use of Java for interactive visualization. | false | false | [
"Lukas Mroz",
"Helwig Hauser"
] | [] | [] | [] |
Vis | 2,001 | Salient iso-surface detection with model-independent statistical signatures | 10.1109/VISUAL.2001.964516 | Volume graphics has not been accepted for widespread use. One of the inhibiting reasons is the lack of general methods for data-analysis and simple interfaces for data exploration. A trial-and-error iterative procedure is often used to select a desirable transfer function or mine the dataset for salient iso-values. New semi-automatic methods that are also data-centric have shown much promise. However, general and robust methods are still needed for data-exploration and analysis. In this paper, we propose general model-independent statistical methods based on central moments of data. Using these techniques we show how salient iso-surfaces at material boundaries can be determined. We provide examples from the medical and computational domain to demonstrate the effectiveness of our methods. | false | false | [
"Shivaraj Tenginakai",
"Jinho Lee",
"Raghu Machiraju"
] | [] | [] | [] |
Vis | 2,001 | Semi-immersive space mission design and visualization: case study of the "Terrestrial Planet Finder" mission | 10.1109/VISUAL.2001.964562 | The paper addresses visualization issues of the Terrestrial Planet Finder Mission (C.A. Beichman et al., 1999). The goal of this mission is to search for chemical signatures of life in distant solar systems using five satellites flying in formation to simulate a large telescope. To design and visually verify such a delicate mission, one has to analyze and interact with many different 3D spacecraft trajectories, which is often difficult in 2D. We employ a novel trajectory design approach using invariant manifold theory, which is best understood and utilized in an immersive setting. The visualization also addresses multi-scale issues related to the vast differences in distance, velocity, and time at different phases of the mission. Additionally, the parameterization and coordinate frames used for numerical simulations may not be suitable for direct visualization. Relative motion presents a more serious problem where the patterns of the trajectories can only be viewed in particular rotating frames. Some of these problems are greatly relieved by using interactive, animated stereo 3D visualization in a semi-immersive environment such as a Responsive Workbench. Others were solved using standard techniques such as a stratified approach with multiple windows to address the multiscale issues, re-parameterizations of trajectories and associated 2D manifolds and relative motion of the camera to "evoke" the desired patterns. | false | false | [
"Ken Museth",
"Alan H. Barr",
"Martin W. Lo"
] | [] | [] | [] |
Vis | 2,001 | Simplicial subdivisions and sampling artifacts | 10.1109/VISUAL.2001.964499 | We review several schemes for dividing cubical cells into simplices (tetrahedra) in 3-D for interpolating from sampled data to R/sup 3/ or for computing isosurfaces by barycentric interpolation. We present test data that reveal the geometric artifacts that these subdivision schemes generate, and discuss how these artifacts relate to the filter kernels that correspond to the subdivision schemes. | false | false | [
"Hamish A. Carr",
"Torsten Möller",
"Jack Snoeyink"
] | [] | [] | [] |
Vis | 2,001 | Smooth approximation and rendering of large scattered data sets | 10.1109/VISUAL.2001.964530 | Presents an efficient method to automatically compute a smooth approximation of large functional scattered data sets given over arbitrarily shaped planar domains. Our approach is based on the construction of a C/sup 1/-continuous bivariate cubic spline and our method offers optimal approximation order. Both local variation and nonuniform distribution of the data are taken into account by using local polynomial least squares approximations of varying degree. Since we only need to solve small linear systems and no triangulation of the scattered data points is required, the overall complexity of the algorithm is linear in the total number of points. Numerical examples dealing with several real-world scattered data sets with up to millions of points demonstrate the efficiency of our method. The resulting spline surface is of high visual quality and can be efficiently evaluated for rendering and modeling. In our implementation we achieve real-time frame rates for typical fly-through sequences and interactive frame rates for recomputing and rendering a locally modified spline surface. | false | false | [
"Jörg Haber",
"Frank Zeilfelder",
"Oleg Davydov",
"Hans-Peter Seidel"
] | [] | [] | [] |
Vis | 2,001 | Surgical simulator for hysteroscopy: a case study of visualization in surgical training | 10.1109/VISUAL.2001.964548 | Computer-based surgical simulation promises to provide a broader scope of clinical training through the introduction of anatomic variation, simulation of untoward events, and collection of performance data. We present a haptically-enabled surgical simulator for the most common techniques in diagnostic and operative hysteroscopy-cervical dilation, endometrial resection and ablation, and lesion excision. Engineering tradeoffs in developing a real-time, haptic-rate simulator are discussed. | false | false | [
"Kevin Montgomery",
"Wm. LeRoy Heinrichs",
"Cynthia Bruyns",
"Simon Wildermuth",
"Christopher J. Hasser",
"Stephanie Ozenne",
"David Bailey"
] | [] | [] | [] |
Vis | 2,001 | Texture Hardware Assisted Rendering of Time-Varying Volume Data | 10.1109/VISUAL.2001.964520 | In this paper we present a hardware-assisted rendering technique coupled with a compression scheme for the interactive visual exploration of time-varying scalar volume data. A palette-based decoding technique and an adaptive bit allocation scheme are developed to fully utilize the texturing capability of a commodity 3-D graphics card. Using a single PC equipped with a modest amount of memory, a texture capable graphics card, and an inexpensive disk array, we are able to render hundreds of time steps of regularly gridded volume data (up to 45 millions voxels each time step) at interactive rates, permitting the visual exploration of large scientific data sets in both the temporal and spatial domain. | false | false | [
"Eric B. Lum",
"Kwan-Liu Ma",
"John P. Clyne"
] | [] | [] | [] |
Vis | 2,001 | The "Which Blair project": a quick visual method for evaluating perceptual color maps | 10.1109/VISUAL.2001.964510 | We have developed a fast, perceptual method for selecting color scales for data visualization that takes advantage of our sensitivity to luminance variations in human faces. To do so, we conducted experiments in which we mapped various color scales onto the intensity values of a digitized photograph of a face and asked observers to rate each image. We found a very strong correlation between the perceived naturalness of the images and the degree to which the underlying color scales increased monotonically in luminance. Color scales that did not include a monotonically increasing luminance component produced no positive rating scores. Since color scales with monotonic luminance profiles are widely recommended for visualizing continuous scalar data, a purely visual technique for identifying such color scales could be very useful, especially in situations where color calibration is not integrated into the visualization environment, such as over the Internet. | false | false | [
"Bernice E. Rogowitz",
"Alan D. Kalvin"
] | [] | [] | [] |
Vis | 2,001 | The MetVR case study: meteorological visualization in an immersive virtual environment | 10.1109/VISUAL.2001.964559 | Traditional methods for displaying weather products are generally two-dimensional (2D) plots or just text format. It is hard for forecasters to get the entire picture of the atmosphere using these methods. The problems apparent in 2D with comparing and correlating multiple layers are overcome simply by adding a dimension. This is important because pertinent features in the data sets may lie in multiple layers and span several time steps. However, simply using a three-dimensional (3D) approach is not enough. The capacity for analysis of small-scale, but important, features in 2D is lost when transitioning to 3D. We propose that 3D's advantages can be incorporated with 2D's small-scale analysis by using an immersive virtual environment. In this case study, we evaluate our current standing with the project: have we met our goals, and how should we proceed from this point? To evaluate our application, we invited meteorologists to use the application to explore a data set. Then we presented our goals and asked which ones had we met, from a meteorologist's perspective. The results qualitatively reflected that our application was effective and further research would be worthwhile. | false | false | [
"Sean Ziegeler",
"Robert J. Moorhead II",
"Paul J. Croft",
"Duanjun Lu"
] | [] | [] | [] |
Vis | 2,001 | The perspective shear-warp algorithm in a virtual environment | 10.1109/VISUAL.2001.964513 | Since the original paper of Lacroute and Levoy (1994), where the shear-warp factorization was also shown for perspective projections, a lot of work has been carried out using the shear-warp factorization with parallel projections. However, none of it has proved or improved the algorithm for the perspective projection. Also in Lacroute's Volpack library, the perspective shear-warp volume rendering algorithm is missing. This paper reports on an implementation of the perspective shear-warp algorithm, which includes enhancements for its application in immersive virtual environments. Furthermore, a mathematical proof for the correctness of the permutation of projection and warp is provided, so far a basic assumption of the shear-warp perspective projection. | false | false | [
"Jürgen P. Schulze",
"Roland Niemeier",
"Ulrich Lang 0002"
] | [] | [] | [] |
Vis | 2,001 | Transport and anisotropic diffusion in time-dependent flow visualization | 10.1109/VISUAL.2001.964494 | The visualization of time-dependent flow is an important and challenging topic in scientific visualization. Its aim is to represent transport phenomena governed by time-dependent vector fields in an intuitively understandable way, using images and animations. Here we pick up the recently presented anisotropic diffusion method, expand and generalize it to allow a multiscale visualization of long-term, complex transport problems. Instead of streamline type patterns generated by the original method now streakline patterns are generated and advected. This process obeys a nonlinear transport diffusion equation with typically dominant transport. Starting from some noisy initial image, the diffusion actually generates and enhances patterns which are then transported in the direction of the flow field. Simultaneously the image is again sharpened in the direction orthogonal to the flow field. A careful adjustment of the model's parameters is derived to balance diffusion and transport effects in a reasonable way. Properties of the method can be discussed for the continuous model, which is solved by an efficient upwind finite element discretization. As characteristic for the class of multiscale image processing methods, we can in advance select a suitable scale for representing the flow field. | false | false | [
"David Bürkle",
"Tobias Preußer",
"Martin Rumpf"
] | [] | [] | [] |
Vis | 2,001 | Undersampling and oversampling in sample based shape modeling | 10.1109/VISUAL.2001.964497 | Shape modeling is an integral part of many visualization problems. Recent advances in scanning technology and a number of surface reconstruction algorithms have opened up a new paradigm for modeling shapes from samples. Many of the problems currently faced in this modeling paradigm can be traced back to two anomalies in sampling, namely undersampling and oversampling. Boundaries, non-smoothness and small features create undersampling problems, whereas oversampling leads to too many triangles. We use Voronoi cell geometry as a unified guide to detect undersampling and oversampling. We apply these detections in surface reconstruction and model simplification. Guarantees of the algorithms can be proved. We show the success of the algorithms empirically on a number of interesting data sets. | false | false | [
"Tamal K. Dey",
"Joachim Giesen",
"Samrat Goswami",
"James Hudson",
"Rephael Wenger",
"Wulue Zhao"
] | [] | [] | [] |
Vis | 2,001 | User-centric viewpoint computation for haptic exploration and manipulation | 10.1109/VISUAL.2001.964526 | We present several techniques for user-centric viewing of the virtual objects or datasets under haptic exploration and manipulation. Depending on the type of tasks performed by the user, our algorithms compute automatic placement of the user viewpoint to navigate through the scene, to display the near-optimal views, and to reposition the viewpoint for haptic visualization. This is accomplished by conjecturing the user's intent based on the user's actions, the object geometry, and intra- and inter-object occlusion relationships. These algorithms have been implemented and interfaced with both a 3-DOF and a 6-DOF PHANToM arms. We demonstrate their application on haptic exploration and visualization of a complex structure, as well as multiresolution modeling and 3D painting with a haptic interface. | false | false | [
"Miguel A. Otaduy",
"Ming C. Lin"
] | [] | [] | [] |
Vis | 2,001 | Variational classification for visualization of 3D ultrasound data | 10.1109/VISUAL.2001.964539 | We present a new technique for visualizing surfaces from 3D ultrasound data. 3D ultrasound datasets are typically fuzzy, contain a substantial amount of noise and speckle, and suffer from several other problems that make extraction of continuous and smooth surfaces extremely difficult. We propose a novel opacity classification algorithm for 3D ultrasound datasets, based on the variational principle. More specifically, we compute a volumetric opacity function that optimally satisfies a set of simultaneous requirements. One requirement makes the function attain nonzero values only in the vicinity of a user-specified value, resulting in soft shells of finite, approximately constant thickness around isosurfaces in the volume. Other requirements are designed to make the function smoother and less sensitive to noise and speckle. The computed opacity function lends itself well to explicit geometric surface extraction, as well as to direct volume rendering at interactive rates. We also describe a new splatting algorithm that is particularly well suited for displaying soft opacity shells. Several examples and comparisons are included to illustrate our approach and demonstrate its effectiveness on real 3D ultrasound datasets. | false | false | [
"Raanan Fattal",
"Dani Lischinski"
] | [] | [] | [] |
Vis | 2,001 | Virtual Temporal Bone Dissection: A Case Study | 10.1109/VISUAL.2001.964561 | The Temporal Bone Dissection Simulator is an ongoing research project for the construction of a synthetic environment suitable for virtual dissection of human temporal bone and related anatomy. Funded by the National Institute on Deafness and Other Communication Disorders (NIDCD), the primary goal of this project is to provide a safe, robust, and cost-effective virtual environment for learning the anatomy and surgical procedures associated with the temporal bone. Direct volume visualization has been indispensable for the necessary level of realism and interactivity that is vital to the success of this project. This work is being conducted by the Ohio Supercomputer Center in conjunction with the Department of Otolaryngology at the Ohio State University, and NIDCD. | false | false | [
"Jason Bryan",
"Don Stredney",
"Gregory J. Wiet",
"Dennis Sessanna"
] | [] | [] | [] |
Vis | 2,001 | Visualization and interaction techniques for the exploration of vascular structures | 10.1109/VISUAL.2001.964538 | We describe a pipeline of image processing steps for deriving symbolic models of vascular structures from radiological data which reflect the branching pattern and diameter of vessels. For the visualization of these symbolic models, concatenated truncated cones are smoothly blended at branching points. We put emphasis on the quality of the visualizations which is achieved by anti-aliasing operations in different stages of the visualization. The methods presented are referred to as HQVV (high quality vessel visualization). Scalable techniques are provided to explore vascular structures of different orders of magnitude. The hierarchy as well as the diameter of the branches of vascular systems are used to restrict visualizations to relevant subtrees and to emphasize parts of vascular systems. Our research is inspired by clear visualizations in textbooks and is targeted toward medical education and therapy planning. We describe the application of vessel visualization techniques for liver surgery planning. For this application it is crucial to recognize the morphology and branching pattern of vascular systems as well as the basic spatial relations between vessels and other anatomic structures. | false | false | [
"Horst K. Hahn",
"Bernhard Preim",
"Dirk Selle",
"Heinz-Otto Peitgen"
] | [] | [] | [] |
Vis | 2,001 | Visualization of large terrains made easy | 10.1109/VISUAL.2001.964533 | We present an elegant and simple to implement framework for performing out-of-core visualization and view-dependent refinement of large terrain surfaces. Contrary to the trend of increasingly elaborate algorithms for large-scale terrain visualization, our algorithms and data structures have been designed with the primary goal of simplicity and efficiency of implementation. Our approach to managing large terrain data also departs from more conventional strategies based on data tiling. Rather than emphasizing how to segment and efficiently bring data in and out of memory, we focus on the manner in which the data is laid out to achieve good memory coherency for data accesses made in a top-down (coarse-to-fine) refinement of the terrain. We present and compare the results of using several different data indexing schemes, and propose a simple to compute index that yields substantial improvements in locality and speed over more commonly used data layouts. Our second contribution is a new and simple, yet easy to generalize method for view-dependent refinement. Similar to several published methods in this area, we use longest edge bisection in a top-down traversal of the mesh hierarchy to produce a continuous surface with subdivision connectivity. In tandem with the refinement, we perform view frustum culling and triangle stripping. These three components are done together in a single pass over the mesh. We show how this framework supports virtually any error metric, while still being highly memory and compute efficient. | false | false | [
"Peter Lindstrom 0001",
"Valerio Pascucci"
] | [] | [] | [] |
Vis | 2,001 | Visualization of Sports using Motion Trajectories: Providing Insights into Performance, Style, and Strategy | 10.1109/VISUAL.2001.964496 | Remote experience of sporting events has thus far been limited mostly to watching video and the scores and statistics associated with the sport. However, a fast-developing trend is the use of visualization techniques to give new insights into performance, style, and strategy of the players. Automated techniques can extract accurate information from video about player performance that not even the most skilled observer is able to discern. When presented as static images or as a three-dimensional virtual replay, this information makes viewing a game an entirely new and exciting experience. This paper presents one such sports visualization system called LucentVision, which has been developed for the sport of tennis. LucentVision uses real-time video analysis to obtain motion trajectories of players and the ball, and offers a rich set of visualization options based on this trajectory data. The system has been used extensively in the broadcast of international tennis tournaments, both on television and the Internet. | false | false | [
"Gopal Sarma Pingali",
"Agata Opalach",
"Yves Jean",
"Ingrid Carlbom"
] | [] | [] | [] |
Vis | 2,001 | Visualizing 2D probability distributions from EOS satellite image-derived data sets: a case study | 10.1109/VISUAL.2001.964550 | Maps of biophysical and geophysical variables using Earth Observing System (EOS) satellite image data are an important component of Earth science. These maps have a single value derived at every grid cell and standard techniques are used to visualize them. Current tools fall short, however, when it is necessary to describe a distribution of values at each grid cell. Distributions may represent a frequency of occurrence over time, frequency of occurrence from multiple runs of an ensemble forecast or possible values from an uncertainty model. We identify these "distribution data sets" and present a case study to visualize such 2D distributions. Distribution data sets are different from multivariate data sets in the sense that the values are for a single variable instead of multiple variables. Data for this case study consists of multiple realizations of percent forest cover, generated using a geostatistical technique that combines ground measurements and satellite imagery to model uncertainty about forest cover. We present two general approaches for analyzing and visualizing such data sets. The first is a pixel-wise analysis of the probability density functions for the 2D image while the second is an analysis of features identified within the image. Such pixel-wise and feature-wise views will give Earth scientists a more complete understanding of distribution data sets. See www.cse.ucsc.edu/research/avis/nasa for additional information. | false | false | [
"David L. Kao",
"Jennifer L. Dungan",
"Alex T. Pang"
] | [] | [] | [] |
Vis | 2,001 | Volume Rendering of Fine Details Within Medical Data | 10.1109/VISUAL.2001.964537 | Presents a method concerning the volume rendering of fine details, such as blood vessels and nerves, from medical data. The realistic and efficient visualization of such structures is often of great medical interest, and conventional rendering techniques do not always deal with them adequately. Our method uses preprocessing to reconstruct fine details that are difficult to segment and label. It detects the presence of fine geometrical structures, such as cracks or cylinders that suggest the existence of, for example, blood vessels or nerves; the subsequent volume rendering then displays fine geometrical objects that lie on a surface. The method can also show structures within the volume, using a special "integration sampling" scheme to portray reconstructed volume texture, such as that exhibited by muscle fibers. By combining the surface structure and volume texture in the rendering, realistic results can be produced; examples are provided. | false | false | [
"Feng Dong 0005",
"Gordon Clapworthy",
"Mel Krokos"
] | [] | [] | [] |
Vis | 2,001 | Wavelet representation of contour sets | 10.1109/VISUAL.2001.964525 | We present a new wavelet compression and multiresolution modeling approach for sets of contours (level sets). In contrast to previous wavelet schemes, our algorithm creates a parametrization of a scalar field induced by its contours and compactly stores this parametrization rather than function values sampled on a regular grid. Our representation is based on hierarchical polygon meshes with subdivision connectivity whose vertices are transformed into wavelet coefficients. From this sparse set of coefficients, every set of contours can be efficiently reconstructed at multiple levels of resolution. When applying lossy compression, introducing high quantization errors, our method preserves contour topology, in contrast to compression methods applied to the corresponding field function. We provide numerical results for scalar fields defined on planar domains. Our approach generalizes to volumetric domains, time-varying contours, and level sets of vector fields. | false | false | [
"Martin Hering-Bertram",
"Daniel E. Laney",
"Mark A. Duchaineau",
"Charles D. Hansen",
"Bernd Hamann",
"Kenneth I. Joy"
] | [] | [] | [] |
Vis | 2,001 | Wind Tunnel Data Fusion and Immersive Visualization: A Case Study | 10.1109/VISUAL.2001.964563 | This case study describes the process of fusing the data from several wind tunnel experiments into a single coherent visualization. Each experiment was conducted independently and was designed to explore different flow features around airplane landing gear. In the past, it would have been very difficult to correlate results from the different experiments. However, with a single 3-D visualization representing the fusion of the three experiments, significant insight into the composite flowfield was observed that would have been extremely difficult to obtain by studying its component parts. The results are even more compelling when viewed in an immersive environment. | false | false | [
"Kurt Severance",
"Paul Brewster",
"Barry Lazos",
"Daniel F. Keefe"
] | [] | [] | [] |
InfoVis | 2,001 | 2D vs 3D, implications on spatial memory | 10.1109/INFVIS.2001.963291 | Since the introduction of graphical user interfaces (GUI) and two-dimensional (2D) displays, the concept of space has entered the information technology (IT) domain. Interactions with computers were re-encoded in terms of fidelity to the interactions with real environment and consequently in terms of fitness to cognitive and spatial abilities. A further step in this direction was the creation of three-dimensional (3D) displays which have amplified the fidelity of digital representations. However, there are no systematic results evaluating the extent to which 3D displays better support cognitive spatial abilities. The aim of this research is to empirically investigate spatial memory performance across different instances of 2D and 3D displays. Two experiments were performed. The displays used in the experimental situation represented hierarchical information structures. The results of the test show that the 3D display does improve performances in the designed spatial memory task. | false | false | [
"Monica Tavanti",
"Mats Lind"
] | [] | [] | [] |
InfoVis | 2,001 | A comparison of 2-D visualizations of hierarchies | 10.1109/INFVIS.2001.963290 | This paper describes two experiments that compare four two-dimensional visualizations of hierarchies: organization chart, icicle plot, treemap, and tree ring. The visualizations are evaluated in the context of decision tree analyses prevalent in data mining applications. The results suggest that either the tree ring or icicle plot is equivalent to the organization chart. | false | false | [
"S. Todd Barlow",
"Padraic Neville"
] | [] | [] | [] |
InfoVis | 2,001 | An empirical comparison of three commercial information visualization systems | 10.1109/INFVIS.2001.963289 | An empirical comparison of three commercial information visualization systems on three different databases is presented. The systems use different paradigms for visualizing data. Tasks were selected to be "ecologically relevant", i.e. meaningful and interesting in the respective domains. Users of one system turned out to solve problems significantly faster than users of the other two, while users of another system would supply significantly more correct answers. Reasons for these results and general observations about the studied systems are discussed. | false | false | [
"Alfred Kobsa"
] | [] | [] | [] |
InfoVis | 2,001 | Animated exploration of dynamic graphs with radial layout | 10.1109/INFVIS.2001.963279 | We describe a new animation technique for supporting interactive exploration of a graph. We use the well-known radial tree layout method, in which the view is determined by the selection of a focus node. Our main contribution is a method for animating the transition to a new layout when a new focus node is selected. In order to keep the transition easy to follow, the animation linearly interpolates the polar coordinates of the nodes, while enforcing ordering and orientation constraints. We apply this technique to visualizations of social networks and of the Gnutella file-sharing network, and discuss the results from our informal usability tests. | false | false | [
"Ka-Ping Yee",
"Danyel Fisher",
"Rachna Dhamija",
"Marti A. Hearst"
] | [] | [] | [] |
InfoVis | 2,001 | Botanical visualization of huge hierarchies | 10.1109/INFVIS.2001.963285 | A new method for the visualization of huge hierarchical data structures is presented. The method is based on the observation that we can easily see the branches, leaves and their arrangement in a botanical tree, despite the large number of elements. The strand model of Holton is used to convert an abstract tree into a geometric model. Non-leaf nodes are mapped to branches and child nodes to sub-branches. A naive application of this model leads to unsatisfactory results, hence it is tailored to suit our purposes better. Continuing branches are emphasized, long branches are contracted, and sets of leaves are shown as fruit. The method is applied to the visualization of directory structures. The elements, directories and files, as well as their relations can easily be extracted, thereby showing that the use of methods from botanical modeling can be effective for information visualization. | false | false | [
"Ernst Kleiberg",
"Huub van de Wetering",
"Jarke J. van Wijk"
] | [] | [] | [] |
InfoVis | 2,001 | Case study: design and assessment of an enhanced geographic information system for exploration of multivariate health statistics | 10.1109/INFVIS.2001.963294 | An implementation of an interactive parallel coordinate plot linked with the ArcView® geographic information system (GIS) is presented. The integrated geographic visualization system was created for the exploratory analysis of mortality data from specific cancers as they relate, specifically spatially, to other mortality causes and to demographic and socioeconomic risk factors. The linked and interactive parallel coordinate plot was tested with and compared to a similarly interactive and linked scatterplot in usability assessments designed to assess each representation's relative effectiveness for exploration of these data sets. Evidence from these studies suggests that multivariate, spatial, and/or time series exploration is enhanced through the use of the parallel coordinate plot linked to maps. | false | false | [
"Robert M. Edsall",
"Alan M. MacEachren",
"Linda Pickle"
] | [] | [] | [] |
InfoVis | 2,001 | Case study: e-commerce clickstream visualization | 10.1109/INFVIS.2001.963293 | We have developed an interactive, scalable visualization tool for analyzing the behavior of users of a web site. Our system not only shows site topology and traffic flow, but by segmenting site traffic data based on user attributes, including demographic data and purchase history, we can present a more complete picture of web site usage. This can lead to a more focussed analysis that allows direct comparison between user segments, and ultimately a deeper understanding of how users interact with a site. The tool is designed for real world use, and we present a usage study of the tool by analyzing the data of a failed "dot-com". | false | false | [
"Jeffrey Brainerd",
"Barry G. Becker"
] | [] | [] | [] |
InfoVis | 2,001 | Case study: visualization for decision tree analysis in data mining | 10.1109/INFVIS.2001.963292 | Decision trees are one of the most popular methods of data mining. Decision trees partition large amounts of data into smaller segments by applying a series of rules. Creating and evaluating decision trees benefits greatly from visualization of the trees and diagnostic measures of their effectiveness. This paper describes an application, EMTree Results Viewer, that supports decision tree analysis through the visualization of model results and diagnosis. The functionality of the application and the visualization techniques are revealed through an example of churn analysis in the telecommunications industry. | false | false | [
"S. Todd Barlow",
"Padraic Neville"
] | [] | [] | [] |