Conference | Year | Title | DOI | Abstract | Accessible | Early | AuthorNames-Deduped | Award | Resources | ResourceLinks |
|---|---|---|---|---|---|---|---|---|---|---|
CHI | 2008 | Rendering navigation and information space with honeycomb™ | 10.1145/1357054.1357333 | The growing amount of available information poses challenges not only in the process of information retrieval. The usability of the rendered search process and results can be increased by appropriate visualization techniques or new interaction paradigms, or both. In this article we present the HoneyComb™ paradigm, an information visualization style that aims to render and manage large quantities of information items. We describe the design objectives and the prototype of HC™. Finally, we present a short evaluation of the HC™ paradigm in the context of search and browsing. | false | false | ["Sebastian Ryszard Kruk", "Bill McDaniel"] | [] | [] | [] |
CHI | 2008 | Sesame: informing user security decisions with system visualization | 10.1145/1357054.1357217 | Non-expert users face a dilemma when making security decisions. Their security often cannot be fully automated for them, yet they generally lack both the motivation and technical knowledge to make informed security decisions on their own. To help users with this dilemma, we present a novel security user interface called Sesame. Sesame uses a concrete, spatial extension of the desktop metaphor to provide users with the security-related, visualized system-level information they need to make more informed decisions. It also provides users with actionable controls to affect a system's security state. Sesame graphically facilitates users' comprehension in making these decisions, and in doing so helps to lower the bar for motivating them to participate in the security of their system. In a controlled study, users with Sesame were found to make fewer errors than a control group which suggests that our novel security interface is a viable alternative approach to helping users with their dilemma. | false | false | ["Jennifer Stoll", "Craig S. Tashman", "W. Keith Edwards", "Kyle Spafford"] | [] | [] | [] |
CHI | 2008 | Supporting the analytical reasoning process in information visualization | 10.1145/1357054.1357247 | This paper presents a new information visualization framework that supports the analytical reasoning process. It consists of three views - a data view, a knowledge view and a navigation view. The data view offers interactive information visualization tools. The knowledge view enables the analyst to record analysis artifacts such as findings, hypotheses and so on. The navigation view provides an overview of the exploration process by capturing the visualization states automatically. An analysis artifact recorded in the knowledge view can be linked to a visualization state in the navigation view. The analyst can revisit a visualization state from both the navigation and knowledge views to review the analysis and reuse it to look for alternate views. The whole analysis process can be saved along with the synthesized information. We present a user study and discuss the perceived usefulness of a prototype based on this framework that we have developed. | false | false | ["Yedendra Babu Shrinivasan", "Jarke J. van Wijk"] | [] | [] | [] |
CHI | 2008 | Wedge: clutter-free visualization of off-screen locations | 10.1145/1357054.1357179 | To overcome display limitations of small-screen devices, researchers have proposed techniques that point users to objects located off-screen. Arrow-based techniques such as City Lights convey only direction. Halo conveys direction and distance, but is susceptible to clutter resulting from overlapping halos. We present Wedge, a visualization technique that conveys direction and distance, yet avoids overlap and clutter. Wedge represents each off-screen location using an acute isosceles triangle: the tip coincides with the off-screen locations, and the two corners are located on-screen. A wedge conveys location awareness primarily by means of its two legs pointing towards the target. Wedges avoid overlap programmatically by repelling each other, causing them to rotate until overlap is resolved. As a result, wedges can be applied to numbers and configurations of targets that would lead to clutter if visualized using halos. We report on a user study comparing Wedge and Halo for three off-screen tasks. Participants were significantly more accurate when using Wedge than when using Halo. | false | false | ["Sean Gustafson", "Patrick Baudisch", "Carl Gutwin", "Pourang Irani"] | [] | [] | [] |
CHI | 2008 | Your place or mine?: visualization as a community component | 10.1145/1357054.1357102 | Many Eyes is a web site that provides collaborative visualization services, allowing users to upload data sets, visualize them, and comment on each other's visualizations. This paper describes a first interview-based study of Many Eyes users, which sheds light on user motivation for creating public visualizations. Users talked about data for many reasons, from scientific research to political advocacy to hobbies. One consistent theme across these different scenarios is the use of visualizations in communication and collaborative practices. Collaboration and conversation, however, often took place outside the site, leaving no traces on Many Eyes itself. In other words, despite spurring significant social activity, Many Eyes is not so much an online community as a "community component" which users insert into pre-existing online social systems. | false | false | ["Catalina M. Danis", "Fernanda B. Viégas", "Martin Wattenberg", "Jesse Kriss"] | [] | [] | [] |
Vis | 2007 | A Flexible Multi-Volume Shader Framework for Arbitrarily Intersecting Multi-Resolution Datasets | 10.1109/TVCG.2007.70534 | We present a powerful framework for 3D-texture-based rendering of multiple arbitrarily intersecting volumetric datasets. Each volume is represented by a multi-resolution octree-based structure and we use out-of-core techniques to support extremely large volumes. Users define a set of convex polyhedral volume lenses, which may be associated with one or more volumetric datasets. The volumes or the lenses can be interactively moved around while the region inside each lens is rendered using interactively defined multi-volume shaders. Our rendering pipeline splits each lens into multiple convex regions such that each region is homogenous and contains a fixed number of volumes. Each such region is further split by the brick boundaries of the associated octree representations. The resulting puzzle of lens fragments is sorted in front-to-back or back-to-front order using a combination of a view-dependent octree traversal and a GPU-based depth peeling technique. Our current implementation uses slice-based volume rendering and allows interactive roaming through multiple intersecting multi-gigabyte volumes. | false | false | ["John Plate", "Thorsten Holtkämper", "Bernd Fröhlich 0001"] | [] | [] | [] |
Vis | 2007 | A Unified Paradigm For Scalable Multi-Projector Displays | 10.1109/TVCG.2007.70536 | We present a general framework for the modeling and optimization of scalable multi-projector displays. Based on this framework, we derive algorithms that can robustly optimize the visual quality of an arbitrary combination of projectors without manual adjustment. When the projectors are tiled, we show that our framework automatically produces blending maps that outperform state-of-the-art projector blending methods. When all the projectors are superimposed, the framework can produce high-resolution images beyond the Nyquist resolution limits of component projectors. When a combination of tiled and superimposed projectors are deployed, the same framework harnesses the best features of both tiled and superimposed multi-projector projection paradigms. The framework creates for the first time a new unified paradigm that is agnostic to a particular configuration of projectors yet robustly optimizes for the brightness, contrast, and resolution of that configuration. In addition, we demonstrate that our algorithms support high resolution video at real-time interactive frame rates achieved on commodity graphics platforms. This work allows for inexpensive, compelling, flexible, and robust large scale visualization systems to be built and deployed very efficiently. | false | false | ["Niranjan Damera-Venkata", "Nelson L. Chang", "Jeffrey M. DiCarlo"] | [] | [] | [] |
Vis | 2007 | An Effective Illustrative Visualization Framework Based on Photic Extremum Lines (PELs) | 10.1109/TVCG.2007.70538 | Conveying shape using feature lines is an important visualization tool in visual computing. The existing feature lines (e.g., ridges, valleys, silhouettes, suggestive contours, etc.) are solely determined by local geometry properties (e.g., normals and curvatures) as well as the view position. This paper is strongly inspired by the observation in human vision and perception that a sudden change in the luminance plays a critical role to faithfully represent and recover the 3D information. In particular, we adopt the edge detection techniques in image processing for 3D shape visualization and present photic extremum lines (PELs) which emphasize significant variations of illumination over 3D surfaces. Comparing with the existing feature lines, PELs are more flexible and offer users more freedom to achieve desirable visualization effects. In addition, the user can easily control the shape visualization by changing the light position, the number of light sources, and choosing various light models. We compare PELs with the existing approaches and demonstrate that PEL is a flexible and effective tool to illustrate 3D surface and volume for visual computing. | false | false | ["Xuexiang Xie", "Ying He 0001", "Feng Tian 0006", "Seah Hock Soon", "Xianfeng Gu", "Hong Qin"] | [] | [] | [] |
Vis | 2007 | Conjoint Analysis to Measure the Perceived Quality in Volume Rendering | 10.1109/TVCG.2007.70542 | Visualization algorithms can have a large number of parameters, making the space of possible rendering results rather high-dimensional. Only a systematic analysis of the perceived quality can truly reveal the optimal setting for each such parameter. However, an exhaustive search in which all possible parameter permutations are presented to each user within a study group would be infeasible to conduct. Additional complications may result from possible parameter co-dependencies. Here, we will introduce an efficient user study design and analysis strategy that is geared to cope with this problem. The user feedback is fast and easy to obtain and does not require exhaustive parameter testing. To enable such a framework we have modified a preference measuring methodology, conjoint analysis, that originated in psychology and is now also widely used in market research. We demonstrate our framework by a study that measures the perceived quality in volume rendering within the context of large parameter spaces. | false | false | ["Joachim Giesen", "Klaus Mueller 0001", "Eva Schuberth", "Lujin Wang", "Peter Zolliker"] | [] | [] | [] |
Vis | 2007 | Construction of Simplified Boundary Surfaces from Serial-sectioned Metal Micrographs | 10.1109/TVCG.2007.70543 | We present a method for extracting boundary surfaces from segmented cross-section image data. We use a constrained Potts model to interpolate an arbitrary number of region boundaries between segmented images. This produces a segmented volume from which we extract a triangulated boundary surface using well-known marching tetrahedra methods. This surface contains staircase-like artifacts and an abundance of unnecessary triangles. We describe an approach that addresses these problems with a voxel-accurate simplification algorithm that reduces surface complexity by an order of magnitude. Our boundary interpolation and simplification methods are novel contributions to the study of surface extraction from segmented cross-sections. We have applied our method to construct polycrystal grain boundary surfaces from micrographs of a sample of the metal tantalum. | false | false | ["Scott E. Dillard", "John Bingert", "Dan Thoma", "Bernd Hamann"] | [] | [] | [] |
Vis | 2007 | Contextualized Videos: Combining Videos with Environment Models to Support Situational Understanding | 10.1109/TVCG.2007.70544 | Multiple spatially-related videos are increasingly used in security, communication, and other applications. Since it can be difficult to understand the spatial relationships between multiple videos in complex environments (e.g. to predict a person's path through a building), some visualization techniques, such as video texture projection, have been used to aid spatial understanding. In this paper, we identify and begin to characterize an overall class of visualization techniques that combine video with 3D spatial context. This set of techniques, which we call contextualized videos, forms a design palette which must be well understood so that designers can select and use appropriate techniques that address the requirements of particular spatial video tasks. In this paper, we first identify user tasks in video surveillance that are likely to benefit from contextualized videos and discuss the video, model, and navigation related dimensions of the contextualized video design space. We then describe our contextualized video testbed which allows us to explore this design space and compose various video visualizations for evaluation. Finally, we describe the results of our process to identify promising design patterns through user selection of visualization features from the design space, followed by user interviews. | false | false | ["Yi Wang", "David M. Krum", "Enylton Machado Coelho", "Doug A. Bowman"] | [] | [] | [] |
Vis | 2007 | Cores of Swirling Particle Motion in Unsteady Flows | 10.1109/TVCG.2007.70545 | In nature and in flow experiments particles form patterns of swirling motion in certain locations. Existing approaches identify these structures by considering the behavior of stream lines. However, in unsteady flows particle motion is described by path lines which generally gives different swirling patterns than stream lines. We introduce a novel mathematical characterization of swirling motion cores in unsteady flows by generalizing the approach of Sujudi/Haimes to path lines. The cores of swirling particle motion are lines sweeping over time, i.e., surfaces in the space-time domain. They occur at locations where three derived 4D vectors become coplanar. To extract them, we show how to re-formulate the problem using the parallel vectors operator. We apply our method to a number of unsteady flow fields. | false | false | ["Tino Weinkauf", "Jan Sahner", "Holger Theisel", "Hans-Christian Hege"] | [] | [] | [] |
Vis | 2007 | CoViCAD: Comprehensive Visualization of Coronary Artery Disease | 10.1109/TVCG.2007.70550 | We present novel, comprehensive visualization techniques for the diagnosis of patients with coronary artery disease using segmented cardiac MRI data. We extend an accepted medical visualization technique called the bull's eye plot by removing discontinuities, preserving the volumetric nature of the left ventricular wall and adding anatomical context. The resulting volumetric bull's eye plot can be used for the assessment of transmurality. We link these visualizations to a 3D view that presents viability information in a detailed anatomical context. We combine multiple MRI scans (whole heart anatomical data, late enhancement data) and multiple segmentations (polygonal heart model, late enhancement contours, coronary artery tree). By selectively combining different rendering techniques we obtain comprehensive yet intuitive visualizations of the various data sources. | false | false | ["Maurice Termeer", "Javier Oliván Bescós", "Marcel Breeuwer", "Anna Vilanova", "Frans A. Gerritsen", "M. Eduard Gröller"] | [] | [] | [] |
Vis | 2007 | Efficient Computation and Visualization of Coherent Structures in Fluid Flow Applications | 10.1109/TVCG.2007.70551 | The recently introduced notion of Finite-Time Lyapunov Exponent to characterize Coherent Lagrangian Structures provides a powerful framework for the visualization and analysis of complex technical flows. Its definition is simple and intuitive, and it has a deep theoretical foundation. While the application of this approach seems straightforward in theory, the associated computational cost is essentially prohibitive. Due to the Lagrangian nature of this technique, a huge number of particle paths must be computed to fill the space-time flow domain. In this paper, we propose a novel scheme for the adaptive computation of FTLE fields in two and three dimensions that significantly reduces the number of required particle paths. Furthermore, for three-dimensional flows, we show on several examples that meaningful results can be obtained by restricting the analysis to a well-chosen plane intersecting the flow domain. Finally, we examine some of the visualization aspects of FTLE-based methods and introduce several new variations that help in the analysis of specific aspects of a flow. | false | false | ["Christoph Garth", "Florian Gerhardt", "Xavier Tricoche", "Hans Hagen"] | [] | [] | [] |
Vis | 2007 | Efficient Computation of Morse-Smale Complexes for Three-dimensional Scalar Functions | 10.1109/TVCG.2007.70552 | The Morse-Smale complex is an efficient representation of the gradient behavior of a scalar function, and critical points paired by the complex identify topological features and their importance. We present an algorithm that constructs the Morse-Smale complex in a series of sweeps through the data, identifying various components of the complex in a consistent manner. All components of the complex, both geometric and topological, are computed, providing a complete decomposition of the domain. Efficiency is maintained by representing the geometry of the complex in terms of point sets. | false | false | ["Attila Gyulassy", "Vijay Natarajan", "Valerio Pascucci", "Bernd Hamann"] | [] | [] | [] |
Vis | 2007 | Efficient Surface Reconstruction using Generalized Coulomb Potentials | 10.1109/TVCG.2007.70553 | We propose a novel, geometrically adaptive method for surface reconstruction from noisy and sparse point clouds, without orientation information. The method employs a fast convection algorithm to attract the evolving surface towards the data points. The force field in which the surface is convected is based on generalized Coulomb potentials evaluated on an adaptive grid (i.e., an octree) using a fast, hierarchical algorithm. Formulating reconstruction as a convection problem in a velocity field generated by Coulomb potentials offers a number of advantages. Unlike methods which compute the distance from the data set to the implicit surface, which are sensitive to noise due to the very reliance on the distance transform, our method is highly resilient to shot noise since global, generalized Coulomb potentials can be used to disregard the presence of outliers due to noise. Coulomb potentials represent long-range interactions that consider all data points at once, and thus they convey global information which is crucial in the fitting process. Both the spatial and temporal complexities of our spatially-adaptive method are proportional to the size of the reconstructed object, which makes our method compare favorably with respect to previous approaches in terms of speed and flexibility. Experiments with sparse as well as noisy data sets show that the method is capable of delivering crisp and detailed yet smooth surfaces. | false | false | ["Andrei C. Jalba", "Jos B. T. M. Roerdink"] | [] | [] | [] |
Vis | 2007 | Efficient Visualization of Lagrangian Coherent Structures by filtered AMR Ridge Extraction | 10.1109/TVCG.2007.70554 | This paper presents a method for filtered ridge extraction based on adaptive mesh refinement. It is applicable in situations where the underlying scalar field can be refined during ridge extraction. This requirement is met by the concept of Lagrangian coherent structures which is based on trajectories started at arbitrary sampling grids that are independent of the underlying vector field. The Lagrangian coherent structures are extracted as ridges in finite Lyapunov exponent fields computed from these grids of trajectories. The method is applied to several variants of finite Lyapunov exponents, one of which is newly introduced. High computation time due to the high number of required trajectories is a main drawback when computing Lyapunov exponents of 3-dimensional vector fields. The presented method allows a substantial speed-up by avoiding the seeding of trajectories in regions where no ridges are present or do not satisfy the prescribed filter criteria such as a minimum finite Lyapunov exponent. | false | false | ["Filip Sadlo", "Ronald Peikert"] | [] | [] | [] |
Vis | 2007 | Enhancing Depth-Perception with Flexible Volumetric Halos | 10.1109/TVCG.2007.70555 | Volumetric data commonly has high depth complexity which makes it difficult to judge spatial relationships accurately. There are many different ways to enhance depth perception, such as shading, contours, and shadows. Artists and illustrators frequently employ halos for this purpose. In this technique, regions surrounding the edges of certain structures are darkened or brightened which makes it easier to judge occlusion. Based on this concept, we present a flexible method for enhancing and highlighting structures of interest using GPU-based direct volume rendering. Our approach uses an interactively defined halo transfer function to classify structures of interest based on data value, direction, and position. A feature-preserving spreading algorithm is applied to distribute seed values to neighboring locations, generating a controllably smooth field of halo intensities. These halo intensities are then mapped to colors and opacities using a halo profile function. Our method can be used to annotate features at interactive frame rates. | false | false | ["Stefan Bruckner", "M. Eduard Gröller"] | [] | [] | [] |
Vis | 2007 | Generalized Streak Lines: Analysis and Visualization of Boundary Induced Vortices | 10.1109/TVCG.2007.70557 | We present a method to extract and visualize vortices that originate from bounding walls of three-dimensional time-dependent flows. These vortices can be detected using their footprint on the boundary, which consists of critical points in the wall shear stress vector field. In order to follow these critical points and detect their transformations, affected regions of the surface are parameterized. Thus, an existing singularity tracking algorithm devised for planar settings can be applied. The trajectories of the singularities are used as a basis for seeding particles. This leads to a new type of streak line visualization, in which particles are released from a moving source. These generalized streak lines visualize the particles that are ejected from the wall. We demonstrate the usefulness of our method on several transient fluid flow datasets from computational fluid dynamics simulations. | false | false | ["Alexander Wiebel", "Xavier Tricoche", "Dominic Schneider", "Heike Leitte", "Gerik Scheuermann"] | [] | [] | [] |
Vis | 2007 | Grid With a View: Optimal Texturing for Perception of Layered Surface Shape | 10.1109/TVCG.2007.70559 | We present the results of two controlled studies comparing layered surface visualizations under various texture conditions. The task was to estimate surface normals, measured by accuracy of a hand-set surface normal probe. A single surface visualization was compared with the two-surfaces case under conditions of no texture and with projected grid textures. Variations in relative texture spacing on top and bottom surfaces were compared, as well as opacity of the top surface. Significant improvements are found for the textured cases over non-textured surfaces. Either larger or thinner top-surface textures, and lower top surface opacities are shown to give less bottom surface error. Top surface error appears to be highly resilient to changes in texture. Given the results we also present an example of how appropriate textures might be useful in volume visualization. | false | false | ["Alethea Bair", "Donald H. House"] | [] | [] | [] |
Vis | 2007 | High-Quality Multimodal Volume Rendering for Preoperative Planning of Neurosurgical Interventions | 10.1109/TVCG.2007.70560 | Surgical approaches tailored to an individual patient's anatomy and pathology have become standard in neurosurgery. Precise preoperative planning of these procedures, however, is necessary to achieve an optimal therapeutic effect. Therefore, multiple radiological imaging modalities are used prior to surgery to delineate the patient's anatomy, neurological function, and metabolic processes. Developing a three-dimensional perception of the surgical approach, however, is traditionally still done by mentally fusing multiple modalities. Concurrent 3D visualization of these datasets can, therefore, improve the planning process significantly. In this paper we introduce an application for planning of individual neurosurgical approaches with high-quality interactive multimodal volume rendering. The application consists of three main modules which allow to (1) plan the optimal skin incision and opening of the skull tailored to the underlying pathology; (2) visualize superficial brain anatomy, function and metabolism; and (3) plan the patient-specific approach for surgery of deep-seated lesions. The visualization is based on direct multi-volume raycasting on graphics hardware, where multiple volumes from different modalities can be displayed concurrently at interactive frame rates. Graphics memory limitations are avoided by performing raycasting on bricked volumes. For preprocessing tasks such as registration or segmentation, the visualization modules are integrated into a larger framework, thus supporting the entire workflow of preoperative planning. | false | false | ["Johanna Beyer", "Markus Hadwiger", "Stefan Wolfsberger", "Katja Bühler"] | ["BA"] | [] | [] |
Vis | 2007 | Illustrative Deformation for Data Exploration | 10.1109/TVCG.2007.70565 | Much of the visualization research has focused on improving the rendering quality and speed, and enhancing the perceptibility of features in the data. Recently, significant emphasis has been placed on focus+context (F+C) techniques (e.g., fisheye views and magnification lens) for data exploration in addition to viewing transformation and hierarchical navigation. However, most of the existing data exploration techniques rely on the manipulation of viewing attributes of the rendering system or optical attributes of the data objects, with users being passive viewers. In this paper, we propose a more active approach to data exploration, which attempts to mimic how we would explore data if we were able to hold it and interact with it in our hands. This involves allowing the users to physically or actively manipulate the geometry of a data object. While this approach has been traditionally used in applications, such as surgical simulation, where the original geometry of the data objects is well understood by the users, there are several challenges when this approach is generalized for applications, such as flow and information visualization, where there is no common perception as to the normal or natural geometry of a data object. We introduce a taxonomy and a set of transformations especially for illustrative deformation of general data exploration. We present combined geometric or optical illustration operators for focus+context visualization, and examine the best means for preventing the deformed context from being misperceived. We demonstrated the feasibility of this generalization with examples of flow, information and video visualization. | false | false | ["Carlos D. Correa", "Deborah Silver", "Mi Chen"] | [] | [] | [] |
Vis | 2007 | Interactive Isosurface Ray Tracing of Time-Varying Tetrahedral Volumes | 10.1109/TVCG.2007.70566 | We describe a system for interactively rendering isosurfaces of tetrahedral finite-element scalar fields using coherent ray tracing techniques on the CPU. By employing state-of-the art methods in polygonal ray tracing, namely aggressive packet/frustum traversal of a bounding volume hierarchy, we can accommodate large and time-varying unstructured data. In conjunction with this efficiency structure, we introduce a novel technique for intersecting ray packets with tetrahedral primitives. Ray tracing is flexible, allowing for dynamic changes in isovalue and time step, visualization of multiple isosurfaces, shadows, and depth-peeling transparency effects. The resulting system offers the intuitive simplicity of isosurfacing, guaranteed-correct visual results, and ultimately a scalable, dynamic and consistently interactive solution for visualizing unstructured volumes. | false | false | ["Ingo Wald", "Heiko Friedrich", "Aaron Knoll", "Charles D. Hansen"] | [] | [] | [] |
Vis | 2007 | Interactive sound rendering in complex and dynamic scenes using frustum tracing | 10.1109/TVCG.2007.70567 | We present a new approach for real-time sound rendering in complex, virtual scenes with dynamic sources and objects. Our approach combines the efficiency of interactive ray tracing with the accuracy of tracing a volumetric representation. We use a four-sided convex frustum and perform clipping and intersection tests using ray packet tracing. A simple and efficient formulation is used to compute secondary frusta and perform hierarchical traversal. We demonstrate the performance of our algorithm in an interactive system for complex environments and architectural models with tens or hundreds of thousands of triangles. Our algorithm can perform real-time simulation and rendering on a high-end PC. | false | false | ["Christian Lauterbach", "Anish Chandak", "Dinesh Manocha"] | [] | [] | [] |
Vis | 2007 | Interactive Visual Analysis of Perfusion Data | 10.1109/TVCG.2007.70569 | Perfusion data are dynamic medical image data which characterize the regional blood flow in human tissue. These data bear a great potential in medical diagnosis, since diseases can be better distinguished and detected at an earlier stage compared to static image data. The wide-spread use of perfusion data is hampered by the lack of efficient evaluation methods. For each voxel, a time-intensity curve characterizes the enhancement of a contrast agent. Parameters derived from these curves characterize the perfusion and have to be integrated for diagnosis. The diagnostic evaluation of this multi-field data is challenging and time-consuming due to its complexity. For the visual analysis of such datasets, feature-based approaches allow to reduce the amount of data and direct the user to suspicious areas. We present an interactive visual analysis approach for the evaluation of perfusion data. For this purpose, we integrate statistical methods and interactive feature specification. Correlation analysis and Principal Component Analysis (PCA) are applied for dimension reduction and to achieve a better understanding of the inter-parameter relations. Multiple, linked views facilitate the definition of features by brushing multiple dimensions. The specification result is linked to all views establishing a focus+context style of visualization in 3D. We discuss our approach with respect to clinical datasets from the three major application areas: ischemic stroke diagnosis, breast tumor diagnosis, as well as the diagnosis of the coronary heart disease (CHD). It turns out that the significance of perfusion parameters strongly depends on the individual patient, scanning parameters, and data pre-processing. | false | false | ["Steffen Oeltze-Jafra", "Helmut Doleisch", "Helwig Hauser", "Philipp Muigg", "Bernhard Preim"] | [] | [] | [] |
Vis | 2,007 | Interactive Visualization of Volumetric White Matter Connectivity in DT-MRI Using a Parallel-Hardware Hamilton-Jacobi Solver | 10.1109/TVCG.2007.70571 | In this paper we present a method to compute and visualize volumetric white matter connectivity in diffusion tensor magnetic resonance imaging (DT-MRI) using a Hamilton-Jacobi (H-J) solver on the GPU (graphics processing unit). Paths through the volume are assigned costs that are lower if they are consistent with the preferred diffusion directions. The proposed method finds a set of voxels in the DTI volume that contain paths between two regions whose costs are within a threshold of the optimal path. The result is a volumetric optimal path analysis, which is driven by clinical and scientific questions relating to the connectivity between various known anatomical regions of the brain. To solve the minimal path problem quickly, we introduce a novel numerical algorithm for solving H-J equations, which we call the fast iterative method (FIM). This algorithm is well-adapted to parallel architectures, and we present a GPU-based implementation, which runs roughly 50-100 times faster than traditional CPU-based solvers for anisotropic H-J equations. The proposed system allows users to freely change the endpoints of interesting pathways and to visualize the optimal volumetric path between them at an interactive rate. We demonstrate the proposed method on some synthetic and real DT-MRI datasets and compare the performance with existing methods. | false | false | [
"Won-Ki Jeong",
"P. Thomas Fletcher",
"Ran Tao 0011",
"Ross T. Whitaker"
] | [] | [] | [] |
Vis | 2,007 | IStar: A Raster Representation for Scalable Image and Volume Data | 10.1109/TVCG.2007.70572 | Topology has been an important tool for analyzing scalar data and flow fields in visualization. In this work, we analyze the topology of multivariate image and volume data sets with discontinuities in order to create an efficient, raster-based representation we call IStar. Specifically, the topology information is used to create a dual structure that contains nodes and connectivity information for every segmentable region in the original data set. This graph structure, along with a sampled representation of the segmented data set, is embedded into a standard raster image which can then be substantially downsampled and compressed. During rendering, the raster image is upsampled and the dual graph is used to reconstruct the original function. Unlike traditional raster approaches, our representation can preserve sharp discontinuities at any level of magnification, much like scalable vector graphics. However, because our representation is raster-based, it is well suited to the real-time rendering pipeline. We demonstrate this by reconstructing our data sets on graphics hardware at real-time rates. | false | false | [
"Joe Michael Kniss",
"Warren A. Hunt",
"Kristi Potter",
"Pradeep Sen"
] | [] | [] | [] |
Vis | 2,007 | Lattice-Based Volumetric Global Illumination | 10.1109/TVCG.2007.70573 | We describe a novel volumetric global illumination framework based on the face-centered cubic (FCC) lattice. An FCC lattice has important advantages over a Cartesian lattice. It has higher packing density in the frequency domain, which translates to better sampling efficiency. Furthermore, it has the maximal possible kissing number (equivalent to the number of nearest neighbors of each site), which provides optimal 3D angular discretization among all lattices. We employ a new two-pass (illumination and rendering) global illumination scheme on an FCC lattice. This scheme exploits the angular discretization to greatly simplify the computation in multiple scattering and to minimize illumination information storage. The GPU has been utilized to further accelerate the rendering stage. We demonstrate our new framework with participating media and volume rendering with multiple scattering, where both are significantly faster than traditional techniques with comparable quality. | false | false | [
"Feng Qiu",
"Fang Xu",
"Zhe Fan",
"Neophytos Neophytou",
"Arie E. Kaufman",
"Klaus Mueller 0001"
] | [] | [] | [] |
Vis | 2,007 | Listener-based Analysis of Surface Importance for Acoustic Metrics | 10.1109/TVCG.2007.70575 | Acoustic quality in room acoustics is measured by well defined quantities, like definition, which can be derived from simulated impulse response filters or measured values. These take into account the intensity and phase shift of multiple reflections due to a wave front emanating from a sound source. Definition (D50) and clarity (C50), for example, correspond to the fraction of the energy received in the first 50 ms to the energy received in total at a certain listener position. Unfortunately, the impulse response measured at a single point does not provide any information about the direction of reflections, and about the reflection surfaces which contribute to this measure. For the visualization of room acoustics, however, this information is very useful since it allows one to discover regions with high contribution and provides insight into the influence of all reflecting surfaces on the quality measure. We use the phonon tracing method to calculate the contribution of the reflection surfaces to the impulse response for different listener positions. This data is used to compute importance values for the geometry taking a certain acoustic metric into account. To get a visual insight into the directional aspect, we map the importance to the reflecting surfaces of the geometry. This visualization indicates which parts of the surfaces need to be changed to enhance the chosen acoustic quality measure. We apply our method to the acoustic improvement of a lecture hall by means of enhancing the overall speech comprehensibility (clarity) and evaluate the results using glyphs to visualize the clarity (C50) values at listener positions throughout the room. | false | false | [
"Frank Michel 0001",
"Eduard Deines",
"Martin Hering-Bertram",
"Christoph Garth",
"Hans Hagen"
] | [] | [] | [] |
Vis | 2,007 | LiveSync: Deformed Viewing Spheres for Knowledge-Based Navigation | 10.1109/TVCG.2007.70576 | Although real-time interactive volume rendering is available even for very large data sets, this visualization method is used quite rarely in the clinical practice. We suspect this is because it is very complicated and time consuming to adjust the parameters to achieve meaningful results. The clinician has to take care of the appropriate viewpoint, zooming, transfer function setup, clipping planes and other parameters. Because of this, most often only 2D slices of the data set are examined. Our work introduces LiveSync, a new concept to synchronize 2D slice views and volumetric views of medical data sets. Through intuitive picking actions on the slice, the users define the anatomical structures they are interested in. The 3D volumetric view is updated automatically with the goal that the users are provided with expressive result images. To achieve this live synchronization we use a minimal set of derived information without the need for segmented data sets or data-specific pre-computations. The components we consider are the picked point, slice view zoom, patient orientation, viewpoint history, local object shape and visibility. We introduce deformed viewing spheres which encode the viewpoint quality for the components. A combination of these deformed viewing spheres is used to estimate a good viewpoint. Our system provides the physician with synchronized views which help to gain deeper insight into the medical data with minimal user interaction. | false | false | [
"Peter Kohlmann",
"Stefan Bruckner",
"Armin Kanitsar",
"M. Eduard Gröller"
] | [] | [] | [] |
Vis | 2,007 | Molecular Surface Abstraction | 10.1109/TVCG.2007.70578 | In this paper we introduce a visualization technique that provides an abstracted view of the shape and spatio-physico-chemical properties of complex molecules. Unlike existing molecular viewing methods, our approach suppresses small details to facilitate rapid comprehension, yet marks the location of significant features so they remain visible. Our approach uses a combination of filters and mesh restructuring to generate a simplified representation that conveys the overall shape and spatio-physico-chemical properties (e.g. electrostatic charge). Surface markings are then used in the place of important removed details, as well as to supply additional information. These simplified representations are amenable to display using stylized rendering algorithms to further enhance comprehension. Our initial experience suggests that our approach is particularly useful in browsing collections of large molecules and in readily making comparisons between them. | false | false | [
"Gregory Cipriano",
"Michael Gleicher"
] | [] | [] | [] |
Vis | 2,007 | Moment Invariants for the Analysis of 2D Flow fields | 10.1109/TVCG.2007.70579 | We present a novel approach for analyzing two-dimensional (2D) flow field data based on the idea of invariant moments. Moment invariants have traditionally been used in computer vision applications, and we have adapted them for the purpose of interactive exploration of flow field data. The new class of moment invariants we have developed allows us to extract and visualize 2D flow patterns, invariant under translation, scaling, and rotation. With our approach one can study arbitrary flow patterns by searching a given 2D flow data set for any type of pattern as specified by a user. Further, our approach supports the computation of moments at multiple scales, facilitating fast pattern extraction and recognition. This can be done for critical point classification, but also for patterns with greater complexity. This multi-scale moment representation is also valuable for the comparative visualization of flow field data. The specific novel contributions of the work presented are the mathematical derivation of the new class of moment invariants, their analysis regarding critical point features, the efficient computation of a novel feature space representation, and based upon this the development of a fast pattern recognition algorithm for complex flow structures. | false | false | [
"Michael Schlemmer",
"Manuel Heringer",
"Florian Morr",
"Ingrid Hotz",
"Martin Hering-Bertram",
"Christoph Garth",
"Wolfgang Kollmann",
"Bernd Hamann",
"Hans Hagen"
] | [] | [] | [] |
Vis | 2,007 | Multifield Visualization Using Local Statistical Complexity | 10.1109/TVCG.2007.70615 | Modern unsteady (multi-)field visualizations require an effective reduction of the data to be displayed. From a huge amount of information the most informative parts have to be extracted. Instead of the fuzzy application dependent notion of feature, a new approach based on information theoretic concepts is introduced in this paper to detect important regions. This is accomplished by extending the concept of local statistical complexity from finite state cellular automata to discretized (multi-)fields. Thus, informative parts of the data can be highlighted in an application-independent, purely mathematical sense. The new measure can be applied to unsteady multifields on regular grids in any application domain. The ability to detect and visualize important parts is demonstrated using diffusion, flow, and weather simulations. | false | false | [
"Heike Leitte",
"Alexander Wiebel",
"Gerik Scheuermann",
"Wolfgang Kollmann"
] | [] | [] | [] |
Vis | 2,007 | Navigating in a Shape Space of Registered Models | 10.1109/TVCG.2007.70581 | New product development involves people with different backgrounds. Designers, engineers, and consumers all have different design criteria, and these criteria interact. Early concepts evolve in this kind of collaborative context, and there is a need for dynamic visualization of the interaction between design shape and other shape-related design criteria. In this paper, a morphable model is defined from simplified representations of suitably chosen real cars, providing a continuous shape space to navigate, manipulate and visualize. Physical properties and consumer-provided scores for the real cars (such as 'weight' and 'sportiness') are estimated for new designs across the shape space. This coupling allows one to manipulate the shape directly while reviewing the impact on estimated criteria, or conversely, to manipulate the criterial values of the current design to produce a new shape with more desirable attributes. | false | false | [
"Randall C. Smith",
"Richard R. Pawlicki",
"István Kókai",
"Jörg Finger",
"Thomas Vetter"
] | [] | [] | [] |
Vis | 2,007 | Querying and Creating Visualizations by Analogy | 10.1109/TVCG.2007.70584 | While there have been advances in visualization systems, particularly in multi-view visualizations and visual exploration, the process of building visualizations remains a major bottleneck in data exploration. We show that provenance metadata collected during the creation of pipelines can be reused to suggest similar content in related visualizations and guide semi-automated changes. We introduce the idea of query-by-example in the context of an ensemble of visualizations, and the use of analogies as first-class operations in a system to guide scalable interactions. We describe an implementation of these techniques in VisTrails, a publicly-available, open-source system. | false | false | [
"Carlos Scheidegger",
"Huy T. Vo",
"David Koop",
"Juliana Freire",
"Cláudio T. Silva"
] | [
"BP"
] | [] | [] |
Vis | 2,007 | Random-Accessible Compressed Triangle Meshes | 10.1109/TVCG.2007.70585 | With the exponential growth in size of geometric data, it is becoming increasingly important to make effective use of multilevel caches, limited disk storage, and bandwidth. As a result, recent work in the visualization community has focused either on designing sequential access compression schemes or on producing cache-coherent layouts of (uncompressed) meshes for random access. Unfortunately combining these two strategies is challenging as they fundamentally assume conflicting modes of data access. In this paper, we propose a novel order-preserving compression method that supports transparent random access to compressed triangle meshes. Our decompression method selectively fetches from disk, decodes, and caches in memory requested parts of a mesh. We also provide a general mesh access API for seamless mesh traversal and incidence queries. While the method imposes no particular mesh layout, it is especially suitable for cache-oblivious layouts, which minimize the number of decompression I/O requests and provide high cache utilization during access to decompressed, in-memory portions of the mesh. Moreover, the transparency of our scheme enables improved performance without the need for application code changes. We achieve compression rates on the order of 20:1 and significantly improved I/O performance due to reduced data transfer. To demonstrate the benefits of our method, we implement two common applications as benchmarks. By using cache-oblivious layouts for the input models, we observe 2-6 times overall speedup compared to using uncompressed meshes. | false | false | [
"Sung-Eui Yoon",
"Peter Lindstrom 0001"
] | [] | [] | [] |
Vis | 2,007 | Registration Techniques for Using Imperfect and Partially Calibrated Devices in Planar Multi-Projector Displays | 10.1109/TVCG.2007.70586 | Multi-projector displays today are automatically registered, both geometrically and photometrically, using cameras. Existing registration techniques assume pre-calibrated projectors and cameras that are devoid of imperfections such as lens distortion. In practice, however, these devices are usually imperfect and uncalibrated. Registration of each of these devices is often more challenging than the multi-projector display registration itself. To make tiled projection-based displays accessible to a layman user we should allow the use of uncalibrated inexpensive devices that are prone to imperfections. In this paper, we make two important advances in this direction. First, we present a new geometric registration technique that can achieve geometric alignment in the presence of severe projector lens distortion using a relatively inexpensive low-resolution camera. This is achieved via a closed-form model that relates the projectors to cameras, in planar multi-projector displays, using rational Bezier patches. This enables us to geometrically calibrate a 3000 × 2500 resolution planar multi-projector display made of a 3 × 3 array of nine severely distorted projectors using a low resolution (640 × 480) VGA camera. Second, we present a photometric self-calibration technique for a projector-camera pair. This allows us to photometrically calibrate the same display made of nine projectors using a photometrically uncalibrated camera. To the best of our knowledge, this is the first work that allows geometrically imperfect projectors and photometrically uncalibrated cameras in calibrating multi-projector displays. | false | false | [
"Ezekiel S. Bhasker",
"Ray Juang",
"Aditi Majumder"
] | [] | [] | [] |
Vis | 2,007 | Scalable Hybrid Unstructured and Structured Grid Raycasting | 10.1109/TVCG.2007.70588 | This paper presents a scalable framework for real-time raycasting of large unstructured volumes that employs a hybrid bricking approach. It adaptively combines original unstructured bricks in important (focus) regions, with structured bricks that are resampled on demand in less important (context) regions. The basis of this focus+context approach is interactive specification of a scalar degree of interest (DOI) function. Thus, rendering always considers two volumes simultaneously: a scalar data volume, and the current DOI volume. The crucial problem of visibility sorting is solved by raycasting individual bricks and compositing in visibility order from front to back. In order to minimize visual errors at the grid boundary, it is always rendered accurately, even for resampled bricks. A variety of different rendering modes can be combined, including contour enhancement. A very important property of our approach is that it supports a variety of cell types natively, i.e., it is not constrained to tetrahedral grids, even when interpolation within cells is used. Moreover, our framework can handle multi-variate data, e.g., multiple scalar channels such as temperature or pressure, as well as time-dependent data. The combination of unstructured and structured bricks with different quality characteristics such as the type of interpolation or resampling resolution in conjunction with custom texture memory management yields a very scalable system. | false | false | [
"Philipp Muigg",
"Markus Hadwiger",
"Helmut Doleisch",
"Helwig Hauser"
] | [] | [] | [] |
Vis | 2,007 | Segmentation of Three-dimensional Retinal Image Data | 10.1109/TVCG.2007.70590 | We have combined methods from volume visualization and data analysis to support better diagnosis and treatment of human retinal diseases. Many diseases can be identified by abnormalities in the thicknesses of various retinal layers captured using optical coherence tomography (OCT). We used a support vector machine (SVM) to perform semi-automatic segmentation of retinal layers for subsequent analysis including a comparison of layer thicknesses to known healthy parameters. We have extended and generalized an older SVM approach to support better performance in a clinical setting through performance enhancements and graceful handling of inherent noise in OCT data by considering statistical characteristics at multiple levels of resolution. The addition of the multi-resolution hierarchy extends the SVM to have "global awareness". A feature, such as a retinal layer, can therefore be modeled within the SVM as a combination of statistical characteristics across all levels; thus capturing high- and low-frequency information. We have compared our semi-automatically generated segmentations to manually segmented layers for verification purposes. Our main goals were to provide a tool that could (i) be used in a clinical setting; (ii) operate on noisy OCT data; and (iii) isolate individual or multiple retinal layers in both healthy and disease cases that contain structural deformities. | false | false | [
"Alfred R. Fuller",
"Robert Zawadzki",
"Stacey Choi",
"David F. Wiley",
"John S. Werner",
"Bernd Hamann"
] | [] | [] | [] |
Vis | 2,007 | Semantic Layers for Illustrative Volume Rendering | 10.1109/TVCG.2007.70591 | Direct volume rendering techniques map volumetric attributes (e.g., density, gradient magnitude, etc.) to visual styles. Commonly this mapping is specified by a transfer function. The specification of transfer functions is a complex task and requires expert knowledge about the underlying rendering technique. In the case of multiple volumetric attributes and multiple visual styles the specification of the multi-dimensional transfer function becomes more challenging and non-intuitive. We present a novel methodology for the specification of a mapping from several volumetric attributes to multiple illustrative visual styles. We introduce semantic layers that allow a domain expert to specify the mapping in the natural language of the domain. A semantic layer defines the mapping of volumetric attributes to one visual style. Volumetric attributes and visual styles are represented as fuzzy sets. The mapping is specified by rules that are evaluated with fuzzy logic arithmetic. The user specifies the fuzzy sets and the rules without special knowledge about the underlying rendering technique. Semantic layers allow for a linguistic specification of the mapping from attributes to visual styles replacing the traditional transfer function specification. | false | false | [
"Peter Rautek",
"Stefan Bruckner",
"M. Eduard Gröller"
] | [] | [] | [] |
Vis | 2,007 | Shadow-Driven 4D Haptic Visualization | 10.1109/TVCG.2007.70593 | Just as we can work with two-dimensional floor plans to communicate 3D architectural design, we can exploit reduced-dimension shadows to manipulate the higher-dimensional objects generating the shadows. In particular, by taking advantage of physically reactive 3D shadow-space controllers, we can transform the task of interacting with 4D objects to a new level of physical reality. We begin with a teaching tool that uses 2D knot diagrams to manipulate the geometry of 3D mathematical knots via their projections; our unique 2D haptic interface allows the user to become familiar with sketching, editing, exploration, and manipulation of 3D knots rendered as projected images on a 2D shadow space. By combining graphics and collision-sensing haptics, we can enhance the 2D shadow-driven editing protocol to successfully leverage 2D pen-and-paper or blackboard skills. Building on the reduced-dimension 2D editing tool for manipulating 3D shapes, we develop the natural analogy to produce a reduced-dimension 3D tool for manipulating 4D shapes. By physically modeling the correct properties of 4D surfaces, their bending forces, and their collisions in the 3D haptic controller interface, we can support full-featured physical exploration of 4D mathematical objects in a manner that is otherwise far beyond the experience accessible to human beings. As far as we are aware, this paper reports the first interactive system with force-feedback that provides "4D haptic visualization" permitting the user to model and interact with 4D cloth-like objects. | false | false | [
"Hui Zhang 0006",
"Andrew J. Hanson"
] | [] | [] | [] |
Vis | 2,007 | Similarity-Guided Streamline Placement with Error Evaluation | 10.1109/TVCG.2007.70595 | Most streamline generation algorithms either provide a particular density of streamlines across the domain or explicitly detect features, such as critical points, and follow customized rules to emphasize those features. However, the former generally includes many redundant streamlines, and the latter requires Boolean decisions on which points are features (and may thus suffer from robustness problems for real-world data). We take a new approach to adaptive streamline placement for steady vector fields in 2D and 3D. We define a metric for local similarity among streamlines and use this metric to grow streamlines from a dense set of candidate seed points. The metric considers not only Euclidean distance, but also a simple statistical measure of shape and directional similarity. Without explicit feature detection, our method produces streamlines that naturally accentuate regions of geometric interest. In conjunction with this method, we also propose a quantitative error metric for evaluating a streamline representation based on how well it preserves the information from the original vector field. This error metric reconstructs a vector field from points on the streamline representation and computes a difference of the reconstruction from the original vector field. | false | false | [
"Yuan Chen",
"Jonathan D. Cohen 0001",
"Julian Krolik"
] | [] | [] | [] |
Vis | 2,007 | Stochastic DT-MRI Connectivity Mapping on the GPU | 10.1109/TVCG.2007.70597 | We present a method for stochastic fiber tract mapping from diffusion tensor MRI (DT-MRI) implemented on graphics hardware. From the simulated fibers we compute a connectivity map that gives an indication of the probability that two points in the dataset are connected by a neuronal fiber path. A Bayesian formulation of the fiber model is given and it is shown that the inversion method can be used to construct plausible connectivity. An implementation of this fiber model on the graphics processing unit (GPU) is presented. Since the fiber paths can be stochastically generated independently of one another, the algorithm is highly parallelizable. This allows us to exploit the data-parallel nature of the GPU fragment processors. We also present a framework for the connectivity computation on the GPU. Our implementation allows the user to interactively select regions of interest and observe the evolving connectivity results during computation. Results are presented from the stochastic generation of over 250,000 fiber steps per iteration at interactive frame rates on consumer-grade graphics hardware. | false | false | [
"Tim McGraw",
"Mariappan S. Nadar"
] | [] | [] | [] |
Vis | 2,007 | Surface Extraction from Multi-Material Components for Metrology using Dual Energy CT | 10.1109/TVCG.2007.70598 | This paper describes a novel method for creating surface models of multi-material components using dual energy computed tomography (DECT). The application scenario is metrology and dimensional measurement in industrial high resolution 3D X-ray computed tomography (3DCT). Based on the dual source / dual exposure technology this method employs 3DCT scans of a high precision micro-focus and a high energy macro-focus X-ray source. The presented work makes use of the advantages of dual X-ray exposure technology in order to facilitate dimensional measurements of multi-material components with high density material within low density material. We propose a workflow which uses image fusion and local surface extraction techniques: a prefiltering step reduces noise inherent in the data. For image fusion the datasets have to be registered. In the fusion step the benefits of both scans are combined. The structure of the specimen is taken from the low precision, blurry, high energy dataset while the sharp edges are adopted and fused into the resulting image from the high precision, crisp, low energy dataset. In the final step a reliable surface model is extracted from the fused dataset using a local adaptive technique. The major contribution of this paper is the development of a specific workflow for dimensional measurements of multi-material industrial components, which takes two X-ray CT datasets with complementary strengths and weaknesses into account. The performance of the workflow is discussed using a test specimen as well as two real world industrial parts. As a result, a significant improvement in overall measurement precision, surface geometry and mean deviation to reference measurement compared to single exposure scans was facilitated. | false | false | [
"Christoph Heinzl",
"Johann Kastner",
"M. Eduard Gröller"
] | [] | [] | [] |
Vis | 2,007 | Texture-based feature tracking for effective time-varying data visualization | 10.1109/TVCG.2007.70599 | Analyzing, visualizing, and illustrating changes within time-varying volumetric data is challenging due to the dynamic changes occurring between timesteps. The changes and variations in computational fluid dynamic volumes and atmospheric 3D datasets do not follow any particular transformation. Features within the data move at different speeds and directions making the tracking and visualization of these features a difficult task. We introduce a texture-based feature tracking technique to overcome some of the current limitations found in the illustration and visualization of dynamic changes within time-varying volumetric data. Our texture-based technique tracks various features individually and then uses the tracked objects to better visualize structural changes. We show the effectiveness of our texture-based tracking technique with both synthetic and real world time-varying data. Furthermore, we highlight the specific visualization, annotation, registration, and feature isolation benefits of our technique. For instance, we show how our texture-based tracking can lead to insightful visualizations of time-varying data. Such visualizations, more than traditional visualization techniques, can assist domain scientists to explore and understand dynamic changes. | false | false | [
"Jesus J. Caban",
"Alark Joshi",
"Penny Rheingans"
] | [] | [] | [] |
Vis | 2,007 | Tile-based Level of Detail for the Parallel Age | 10.1109/TVCG.2007.70587 | Today's PCs incorporate multiple CPUs and GPUs and are easily arranged in clusters for high-performance, interactive graphics. We present an approach to parallelizing rendering with level of detail, based on hierarchical, screen-space tiles. Adapt tiles, render tiles, and machine tiles are associated with CPUs, GPUs, and PCs, respectively, to efficiently parallelize the workload with good resource utilization. Adaptive tile sizes provide load balancing while our level of detail system allows total and independent management of the load on CPUs and GPUs. We demonstrate our approach on parallel configurations consisting of both single PCs and a cluster of PCs. | false | false | [
"Krzysztof Niski",
"Jonathan D. Cohen 0001"
] | [] | [] | [] |
Vis | 2,007 | Time Dependent Processing in a Parallel Pipeline Architecture | 10.1109/TVCG.2007.70600 | Pipeline architectures provide a versatile and efficient mechanism for constructing visualizations, and they have been implemented in numerous libraries and applications over the past two decades. In addition to allowing developers and users to freely combine algorithms, visualization pipelines have proven to work well when streaming data and scale well on parallel distributed-memory computers. However, current pipeline visualization frameworks have a critical flaw: they are unable to manage time varying data. As data flows through the pipeline, each algorithm has access to only a single snapshot in time of the data. This prevents the implementation of algorithms that do any temporal processing such as particle tracing; plotting over time; or interpolation, fitting, or smoothing of time series data. As data acquisition technology improves, as simulation time-integration techniques become more complex, and as simulations save less frequently and regularly, the ability to analyze the time-behavior of data becomes more important. This paper describes a modification to the traditional pipeline architecture that allows it to accommodate temporal algorithms. Furthermore, the architecture allows temporal algorithms to be used in conjunction with algorithms expecting a single time snapshot, thus simplifying software design and allowing adoption into existing pipeline frameworks. Our architecture also continues to work well in parallel distributed-memory environments. We demonstrate our architecture by modifying the popular VTK framework and exposing the functionality to the ParaView application. We use this framework to apply time-dependent algorithms on large data with a parallel cluster computer and thereby exercise a functionality that previously did not exist. | false | false | [
"John Biddiscombe",
"Berk Geveci",
"Ken Martin",
"Kenneth Moreland",
"David C. Thompson 0001"
] | [] | [] | [] |
Vis | 2,007 | Topological Landscapes: A Terrain Metaphor for Scientific Data | 10.1109/TVCG.2007.70601 | Scientific visualization and illustration tools are designed to help people understand the structure and complexity of scientific data with images that are as informative and intuitive as possible. In this context the use of metaphors plays an important role since they make complex information easily accessible by using commonly known concepts. In this paper we propose a new metaphor, called "topological landscapes," which facilitates understanding the topological structure of scalar functions. The basic idea is to construct a terrain with the same topology as a given dataset and to display the terrain as an easily understood representation of the actual input data. In this projection from an n-dimensional scalar function to a two-dimensional (2D) model we preserve function values of critical points, the persistence (function span) of topological features, and one possible additional metric property (in our examples volume). By displaying this topologically equivalent landscape together with the original data we harness the natural human proficiency in understanding terrain topography and make complex topological information easily accessible. | false | false | [
"Gunther H. Weber",
"Peer-Timo Bremer",
"Valerio Pascucci"
] | [] | [] | [] |
Vis | 2,007 | Topological Visualization of Brain Diffusion MRI Data | 10.1109/TVCG.2007.70602 | Topological methods give concise and expressive visual representations of flow fields. The present work suggests a comparable method for the visualization of human brain diffusion MRI data. We explore existing techniques for the topological analysis of generic tensor fields, but find them inappropriate for diffusion MRI data. Thus, we propose a novel approach that considers the asymptotic behavior of a probabilistic fiber tracking method and define analogs of the basic concepts of flow topology, like critical points, basins, and faces, with interpretations in terms of brain anatomy. The resulting features are fuzzy, reflecting the uncertainty inherent in any connectivity estimate from diffusion imaging. We describe an algorithm to extract the new type of features, demonstrate its robustness under noise, and present results for two regions in a diffusion MRI dataset to illustrate that the method allows a meaningful visual analysis of probabilistic fiber tracking results. | false | false | [
"Thomas Schultz 0001",
"Holger Theisel",
"Hans-Peter Seidel"
] | [] | [] | [] |
Vis | 2,007 | Topologically Clean Distance fields | 10.1109/TVCG.2007.70603 | Analysis of the results obtained from material simulations is important in the physical sciences. Our research was motivated by the need to investigate the properties of a simulated porous solid as it is hit by a projectile. This paper describes two techniques for the generation of distance fields containing a minimal number of topological features, and we use them to identify features of the material. We focus on distance fields defined on a volumetric domain considering the distance to a given surface embedded within the domain. Topological features of the field are characterized by its critical points. Our first method begins with a distance field that is computed using a standard approach, and simplifies this field using ideas from Morse theory. We present a procedure for identifying and extracting a feature set through analysis of the MS complex, and apply it to find the invariants in the clean distance field. Our second method proceeds by advancing a front, beginning at the surface, and locally controlling the creation of new critical points. We demonstrate the value of topologically clean distance fields for the analysis of filament structures in porous solids. Our methods produce a curved skeleton representation of the filaments that helps material scientists to perform a detailed qualitative and quantitative analysis of pores, and hence infer important material properties. Furthermore, we provide a set of criteria for finding the "difference" between two skeletal structures, and use this to examine how the structure of the porous solid changes over several timesteps in the simulation of the particle impact. | false | false | [
"Attila Gyulassy",
"Mark A. Duchaineau",
"Vijay Natarajan",
"Valerio Pascucci",
"Eduardo M. Bringa",
"Andrew Higginbotham",
"Bernd Hamann"
] | [] | [] | [] |
Vis | 2,007 | Topology, Accuracy, and Quality of Isosurface Meshes Using Dynamic Particles | 10.1109/TVCG.2007.70604 | This paper describes a method for constructing isosurface triangulations of sampled, volumetric, three-dimensional scalar fields. The resulting meshes consist of triangles that are of consistently high quality, making them well suited for accurate interpolation of scalar and vector-valued quantities, as required for numerous applications in visualization and numerical simulation. The proposed method does not rely on a local construction or adjustment of triangles as is done, for instance, in advancing wavefront or adaptive refinement methods. Instead, a system of dynamic particles optimally samples an implicit function such that the particles' relative positions can produce a topologically correct Delaunay triangulation. Thus, the proposed method relies on a global placement of triangle vertices. The main contributions of the paper are the integration of dynamic particles systems with surface sampling theory and PDE-based methods for controlling the local variability of particle densities, as well as detailing a practical method that accommodates Delaunay sampling requirements to generate sparse sets of points for the production of high-quality tessellations. | false | false | [
"Miriah D. Meyer",
"Robert M. Kirby",
"Ross T. Whitaker"
] | [] | [] | [] |
Vis | 2,007 | Transform Coding for Hardware-accelerated Volume Rendering | 10.1109/TVCG.2007.70516 | Hardware-accelerated volume rendering using the GPU is now the standard approach for real-time volume rendering, although limited graphics memory can present a problem when rendering large volume data sets. Volumetric compression in which the decompression is coupled to rendering has been shown to be an effective solution to this problem; however, most existing techniques were developed in the context of software volume rendering, and all but the simplest approaches are prohibitive in a real-time hardware-accelerated volume rendering context. In this paper we present a novel block-based transform coding scheme designed specifically with real-time volume rendering in mind, such that the decompression is fast without sacrificing compression quality. This is made possible by consolidating the inverse transform with dequantization in such a way as to allow most of the reprojection to be precomputed. Furthermore, we take advantage of the freedom afforded by offline compression in order to optimize the encoding as much as possible while hiding this complexity from the decoder. In this context we develop a new block classification scheme which allows us to preserve perceptually important features in the compression. The result of this work is an asymmetric transform coding scheme that allows very large volumes to be compressed and then decompressed in real-time while rendering on the GPU. | false | false | [
"Nathaniel Fout",
"Kwan-Liu Ma"
] | [] | [] | [] |
Vis | 2,007 | Two-Level Approach to Efficient Visualization of Protein Dynamics | 10.1109/TVCG.2007.70517 | Proteins are highly flexible and large amplitude deformations of their structure, also called slow dynamics, are often decisive to their function. We present a two-level rendering approach that enables visualization of slow dynamics of large protein assemblies. Our approach is aligned with a hierarchical model of large scale molecules. Instead of constantly updating positions of large amounts of atoms, we update the position and rotation of residues, i.e., higher level building blocks of a protein. Residues are represented by one vertex only indicating its position and additional information defining the rotation. The atoms in the residues are generated on-the-fly on the GPU, exploiting the new graphics hardware geometry shader capabilities. Moreover, we represent the atoms by billboards instead of tessellated spheres. Our representation is then significantly faster and pixel precise. We demonstrate the usefulness of our new approach in the context of our collaborative bioinformatics project. | false | false | [
"Ove Daae Lampe",
"Ivan Viola",
"Nathalie Reuter",
"Helwig Hauser"
] | [] | [] | [] |
Vis | 2,007 | Uncertainty Visualization in Medical Volume Rendering Using Probabilistic Animation | 10.1109/TVCG.2007.70518 | Direct volume rendering has proved to be an effective visualization method for medical data sets and has reached wide-spread clinical use. The diagnostic exploration, in essence, corresponds to a tissue classification task, which is often complex and time-consuming. Moreover, a major problem is the lack of information on the uncertainty of the classification, which can have dramatic consequences for the diagnosis. In this paper this problem is addressed by proposing animation methods to convey uncertainty in the rendering. The foundation is a probabilistic Transfer Function model which allows for direct user interaction with the classification. The rendering is animated by sampling the probability domain over time, which results in varying appearance for uncertain regions. A particularly promising application of this technique is a "sensitivity lens" applied to focus regions in the data set. The methods have been evaluated by radiologists in a study simulating the clinical task of stenosis assessment, in which the animation technique is shown to outperform traditional rendering in terms of assessment accuracy. | false | false | [
"Claes Lundström",
"Patric Ljung",
"Anders Persson",
"Anders Ynnerman"
] | [] | [] | [] |
Vis | 2,007 | Variable Interactions in Query-Driven Visualization | 10.1109/TVCG.2007.70519 | Our ability to generate ever-larger, increasingly-complex data, has established the need for scalable methods that identify, and provide insight into, important variable trends and interactions. Query-driven methods are among the small subset of techniques that are able to address both large and highly complex datasets. This paper presents a new method that increases the utility of query-driven techniques by visually conveying statistical information about the trends that exist between variables in a query. In this method, correlation fields, created between pairs of variables, are used with the cumulative distribution functions of variables expressed in a user's query. This integrated use of cumulative distribution functions and correlation fields visually reveals, with respect to the solution space of the query, statistically important interactions between any three variables, and allows for trends between these variables to be readily identified. We demonstrate our method by analyzing interactions between variables in two flame-front simulations. | false | false | [
"Luke J. Gosink",
"John C. Anderson",
"E. Wes Bethel",
"Kenneth I. Joy"
] | [] | [] | [] |
Vis | 2,007 | Virtual Rheoscopic Fluids for Flow Visualization | 10.1109/TVCG.2007.70610 | Physics-based flow visualization techniques seek to mimic laboratory flow visualization methods with virtual analogues. In this work we describe the rendering of a virtual rheoscopic fluid to produce images with results strikingly similar to laboratory experiments with real-world rheoscopic fluids using products such as Kalliroscope. These fluid additives consist of microscopic, anisotropic particles which, when suspended in the flow, align with both the flow velocity and the local shear to produce high-quality depictions of complex flow structures. Our virtual rheoscopic fluid is produced by defining a closed-form formula for the orientation of shear layers in the flow and using this orientation to volume render the flow as a material with anisotropic reflectance and transparency. Examples are presented for natural convection, thermocapillary convection, and Taylor-Couette flow simulations. The latter agree well with photographs of experimental results of Taylor-Couette flows from the literature. | false | false | [
"William L. Barth",
"Christopher Burns"
] | [] | [] | [] |
Vis | 2,007 | Visual Analysis of the Air Pollution Problem in Hong Kong | 10.1109/TVCG.2007.70523 | We present a comprehensive system for weather data visualization. Weather data are multivariate and contain vector fields formed by wind speed and direction. Several well-established visualization techniques such as parallel coordinates and polar systems are integrated into our system. We also develop various novel methods, including circular pixel bar charts embedded into polar systems, enhanced parallel coordinates with S-shape axis, and weighted complete graphs. Our system was used to analyze the air pollution problem in Hong Kong and some interesting patterns have been found. | false | false | [
"Huamin Qu",
"Wing-Yi Chan",
"Anbang Xu",
"Kai-Lun Chung",
"Alexis Kai-Hon Lau",
"Ping Guo 0002"
] | [] | [] | [] |
Vis | 2,007 | Visual Verification and Analysis of Cluster Detection for Molecular Dynamics | 10.1109/TVCG.2007.70614 | A current research topic in molecular thermodynamics is the condensation of vapor to liquid and the investigation of this process at the molecular level. Condensation is found in many physical phenomena, e.g. the formation of atmospheric clouds or the processes inside steam turbines, where a detailed knowledge of the dynamics of condensation processes will help to optimize energy efficiency and avoid problems with droplets of macroscopic size. The key properties of these processes are the nucleation rate and the critical cluster size. For the calculation of these properties it is essential to make use of a meaningful definition of molecular clusters, which currently is a not completely resolved issue. In this paper a framework capable of interactively visualizing molecular datasets of such nucleation simulations is presented, with an emphasis on the detected molecular clusters. To check the quality of the results of the cluster detection, our framework introduces the concept of flow groups to highlight potential cluster evolution over time which is not detected by the employed algorithm. To confirm the findings of the visual analysis, we coupled the rendering view with a schematic view of the clusters' evolution. This allows to rapidly assess the quality of the molecular cluster detection algorithm and to identify locations in the simulation data in space as well as in time where the cluster detection fails. Thus, thermodynamics researchers can eliminate weaknesses in their cluster detection algorithms. Several examples for the effective and efficient usage of our tool are presented. | false | false | [
"Sebastian Grottel",
"Guido Reina",
"Jadran Vrabec",
"Thomas Ertl"
] | [] | [] | [] |
Vis | 2,007 | Visualization of Cosmological Particle-Based Datasets | 10.1109/TVCG.2007.70526 | We describe our visualization process for a particle-based simulation of the formation of the first stars and their impact on cosmic history. The dataset consists of several hundred time-steps of point simulation data, with each time-step containing approximately two million point particles. For each time-step, we interpolate the point data onto a regular grid using a method taken from the radiance estimate of photon mapping [21]. We import the resulting regular grid representation into ParaView [24], with which we extract isosurfaces across multiple variables. Our images provide insights into the evolution of the early universe, tracing the cosmic transition from an initially homogeneous state to one of increasing complexity. Specifically, our visualizations capture the build-up of regions of ionized gas around the first stars, their evolution, and their complex interactions with the surrounding matter. These observations will guide the upcoming James Webb Space Telescope, the key astronomy mission of the next decade. | false | false | [
"Paul A. Navrátil",
"Jarrett Johnson",
"Volker Bromm"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/0708.0961v1",
"icon": "paper"
}
] |
Vis | 2,007 | Visualizing Large-Scale Uncertainty in Astrophysical Data | 10.1109/TVCG.2007.70530 | Visualization of uncertainty or error in astrophysical data is seldom available in simulations of astronomical phenomena, and yet almost all rendered attributes possess some degree of uncertainty due to observational error. Uncertainties associated with spatial location typically vary significantly with scale and thus introduce further complexity in the interpretation of a given visualization. This paper introduces effective techniques for visualizing uncertainty in large-scale virtual astrophysical environments. Building upon our previous transparently scalable visualization architecture, we develop tools that enhance the perception and comprehension of uncertainty across wide scale ranges. Our methods include a unified color-coding scheme for representing log-scale distances and percentage errors, an ellipsoid model to represent positional uncertainty, an ellipsoid envelope model to expose trajectory uncertainty, and a magic-glass design supporting the selection of ranges of log-scale distance and uncertainty parameters, as well as an overview mode and a scalable WIM tool for exposing the magnitudes of spatial context and uncertainty. | false | false | [
"Hongwei Li",
"Chi-Wing Fu",
"Yinggang Li",
"Andrew J. Hanson"
] | [] | [] | [] |
Vis | 2,007 | Visualizing Whole-Brain DTI Tractography with GPU-based Tuboids and LoD Management | 10.1109/TVCG.2007.70532 | Diffusion tensor imaging (DTI) of the human brain, coupled with tractography techniques, enable the extraction of large collections of three-dimensional tract pathways per subject. These pathways and pathway bundles represent the connectivity between different brain regions and are critical for the understanding of brain related diseases. A flexible and efficient GPU-based rendering technique for DTI tractography data is presented that addresses common performance bottlenecks and image-quality issues, allowing interactive render rates to be achieved on commodity hardware. An occlusion query-based pathway LoD management system for streamlines/streamtubes/tuboids is introduced that optimizes input geometry, vertex processing, and fragment processing loads, and helps reduce overdraw. The tuboid, a fully-shaded streamtube impostor constructed entirely on the GPU from streamline vertices, is also introduced. Unlike full streamtubes and other impostor constructs, tuboids require little to no preprocessing or extra space over the original streamline data. The supported fragment processing levels of detail range from texture-based draft shading to full raycast normal computation, Phong shading, environment mapping, and curvature-correct text labeling. The presented text labeling technique for tuboids provides adaptive, aesthetically pleasing labels that appear attached to the surface of the tubes. Furthermore, an occlusion query aggregating and scheduling scheme for tuboids is described that reduces the query overhead. Results for a tractography dataset are presented, and demonstrate that LoD-managed tuboids offer benefits over traditional streamtubes both in performance and appearance. | false | false | [
"Vid Petrovic",
"James H. Fallon",
"Falko Kuester"
] | [] | [] | [] |
VAST | 2,007 | Activity Analysis Using Spatio-Temporal Trajectory Volumes in Surveillance Applications | 10.1109/VAST.2007.4388990 | In this paper, we present a system to analyze activities and detect anomalies in a surveillance application, which exploits the intuition and experience of security and surveillance experts through an easy-to-use visual feedback loop. The multi-scale and location specific nature of behavior patterns in space and time is captured using a wavelet-based feature descriptor. The system learns the fundamental descriptions of the behavior patterns in a semi-supervised fashion by the higher order singular value decomposition of the space described by the training data. This training process is guided and refined by the users in an intuitive fashion. Anomalies are detected by projecting the test data into this multi-linear space and are visualized by the system to direct the attention of the user to potential problem spots. We tested our system on real-world surveillance data, and it satisfied the security concerns of the environment. | false | false | [
"Firdaus Janoos",
"Shantanu Singh",
"M. Okan Irfanoglu",
"Raghu Machiraju",
"Richard E. Parent"
] | [] | [] | [] |
VAST | 2,007 | Analysis Guided Visual Exploration of Multivariate Data | 10.1109/VAST.2007.4389000 | Visualization systems traditionally focus on graphical representation of information. They tend not to provide integrated analytical services that could aid users in tackling complex knowledge discovery tasks. Users' exploration in such environments is usually impeded due to several problems: 1) valuable information is hard to discover when too much data is visualized on the screen; 2) Users have to manage and organize their discoveries offline, because no systematic discovery management mechanism exists; 3) their discoveries based on visual exploration alone may lack accuracy; 4) and they have no convenient access to the important knowledge learned by other users. To tackle these problems, it has been recognized that analytical tools must be introduced into visualization systems. In this paper, we present a novel analysis-guided exploration system, called the nugget management system (NMS). It leverages the collaborative effort of human comprehensibility and machine computations to facilitate users' visual exploration processes. Specifically, NMS first extracts the valuable information (nuggets) hidden in datasets based on the interests of users. Given that similar nuggets may be re-discovered by different users, NMS consolidates the nugget candidate set by clustering based on their semantic similarity. To solve the problem of inaccurate discoveries, localized data mining techniques are applied to refine the nuggets to best represent the captured patterns in datasets. Lastly, the resulting well-organized nugget pool is used to guide users' exploration. To evaluate the effectiveness of NMS, we integrated NMS into XmdvTool, a freeware multivariate visualization system. User studies were performed to compare the users' efficiency and accuracy in finishing tasks on real datasets, with and without the help of NMS. Our user studies confirmed the effectiveness of NMS. | false | false | [
"Di Yang",
"Elke A. Rundensteiner",
"Matthew O. Ward"
] | [] | [] | [] |
VAST | 2,007 | Analyzing Large-Scale News Video Databases to Support Knowledge Visualization and Intuitive Retrieval | 10.1109/VAST.2007.4389003 | In this paper, we have developed a novel framework to enable more effective investigation of large-scale news video database via knowledge visualization. To relieve users from the burdensome exploration of well-known and uninteresting knowledge of news reports, a novel interestingness measurement for video news reports is presented to enable users to find news stories of interest at first glance and capture the relevant knowledge in large-scale video news databases efficiently. Our framework takes advantage of both automatic semantic video analysis and human intelligence by integrating with visualization techniques on semantic video retrieval systems. Our techniques on intelligent news video analysis and knowledge discovery have the capacity to enable more effective visualization and exploration of large-scale news video collections. In addition, news video visualization and exploration can provide valuable feedback to improve our techniques for intelligent news video analysis and knowledge discovery. | false | false | [
"Hangzai Luo",
"Jianping Fan 0001",
"Jing Yang 0001",
"William Ribarsky",
"Shin'ichi Satoh 0001"
] | [] | [] | [] |
VAST | 2,007 | Balancing Interactive Data Management of Massive Data with Situational Awareness through Smart Aggregation | 10.1109/VAST.2007.4388998 | Designing a visualization system capable of processing, managing, and presenting massive data sets while maximizing the user's situational awareness (SA) is a challenging, but important, research question in visual analytics. Traditional data management and interactive retrieval approaches have often focused on solving the data overload problem at the expense of the user's SA. This paper discusses various data management strategies and the strengths and limitations of each approach in providing the user with SA. A new data management strategy, coined Smart Aggregation, is presented as a powerful approach to overcome the challenges of both massive data sets and maintaining SA. By combining automatic data aggregation with user-defined controls on what, how, and when data should be aggregated, we present a visualization system that can handle massive amounts of data while affording the user with the best possible SA. This approach ensures that a system is always usable in terms of both system resources and human perceptual resources. We have implemented our Smart Aggregation approach in a visual analytics system called VIAssist (Visual Assistant for Information Assurance Analysis) to facilitate exploration, discovery, and SA in the domain of Information Assurance. | false | false | [
"Daniel R. Tesone",
"John R. Goodall"
] | [] | [] | [] |
VAST | 2,007 | C-GROUP: A Visual Analytic Tool for Pairwise Analysis of Dynamic Group Membership | 10.1109/VAST.2007.4389022 | C-GROUP is a tool for analyzing dynamic group membership in social networks over time. Unlike most network visualization tools, which show the group structure within an entire network, or the group membership for a single actor, C-GROUP allows users to focus their analysis on a pair of individuals of interest. And unlike most dynamic social network visualization tools, which focus on the addition and deletion of nodes (actors) and edges (relationships) over time, C-GROUP focuses on changing group memberships over time. C-GROUP provides users with a flexible interface for defining (and redefining) groups interactively, and allows users to view the changing group memberships for the pair over time. This helps to highlight the similarities and differences between the individuals and their evolving group memberships. C-GROUP allows users to dynamically select the time granularity of the temporal evolution and supports two novel visual representations of the evolving group memberships. This flexibility gives users alternate views that are appropriate for different network sizes and provides users with different insights into the grouping behavior. | false | false | [
"Hyunmo Kang",
"Lise Getoor",
"Lisa Singh"
] | [] | [] | [] |
VAST | 2,007 | ClusterSculptor: A Visual Analytics Tool for High-Dimensional Data | 10.1109/VAST.2007.4388999 | Cluster analysis (CA) is a powerful strategy for the exploration of high-dimensional data in the absence of a-priori hypotheses or data classification models, and the results of CA can then be used to form such models. But even though formal models and classification rules may not exist in these data exploration scenarios, domain scientists and experts generally have a vast amount of non-compiled knowledge and intuition that they can bring to bear in this effort. In CA, there are various popular mechanisms to generate the clusters, however, the results from their non-supervised deployment rarely fully agree with this expert knowledge and intuition. To this end, our paper describes a comprehensive and intuitive framework to aid scientists in the derivation of classification hierarchies in CA, using k-means as the overall clustering engine, but allowing them to tune its parameters interactively based on a non-distorted compact visual presentation of the inherent characteristics of the data in high-dimensional space. These include cluster geometry, composition, spatial relations to neighbors, and others. In essence, we provide all the tools necessary for a high-dimensional activity we call cluster sculpting, and the evolving hierarchy can then be viewed in a space-efficient radial dendrogram. We demonstrate our system in the context of the mining and classification of a large collection of millions of data items of aerosol mass spectra, but our framework readily applies to any high-dimensional CA scenario. | false | false | [
"Eun Ju Nam",
"Yiping Han",
"Klaus Mueller 0001",
"Alla Zelenyuk",
"Dan Imre"
] | [] | [] | [] |
VAST | 2,007 | DataMeadow: A Visual Canvas for Analysis of Large-Scale Multivariate Data | 10.1109/VAST.2007.4389013 | Supporting visual analytics of multiple large-scale multidimensional datasets requires a high degree of interactivity and user control beyond the conventional challenges of visualizing such datasets. We present the DataMeadow, a visual canvas providing rich interaction for constructing visual queries using graphical set representations called DataRoses. A DataRose is essentially a starplot of selected columns in a dataset displayed as multivariate visualizations with dynamic query sliders integrated into each axis. The purpose of the DataMeadow is to allow users to create advanced visual queries by iteratively selecting and filtering into the multidimensional data. Furthermore, the canvas provides a clear history of the analysis that can be annotated to facilitate dissemination of analytical results to outsiders. Towards this end, the DataMeadow has a direct manipulation interface for selection, filtering, and creation of sets, subsets, and data dependencies using both simple and complex mouse gestures. We have evaluated our system using a qualitative expert review involving two researchers working in the area. Results from this review are favorable for our new method. | false | false | [
"Niklas Elmqvist",
"John T. Stasko",
"Philippas Tsigas"
] | [] | [] | [] |
VAST | 2,007 | Design Considerations for Collaborative Visual Analytics | 10.1109/VAST.2007.4389011 | Information visualization leverages the human visual system to support the process of sensemaking, in which information is collected, organized, and analyzed to generate knowledge and inform action. Though most research to date assumes a single-user focus on perceptual and cognitive processes, in practice, sensemaking is often a social process involving parallelization of effort, discussion, and consensus building. This suggests that to fully support sensemaking, interactive visualization should also support social interaction. However, the most appropriate collaboration mechanisms for supporting this interaction are not immediately clear. In this article, we present design considerations for asynchronous collaboration in visual analysis environments, highlighting issues of work parallelization, communication, and social organization. These considerations provide a guide for the design and evaluation of collaborative visualization systems. | false | false | [
"Jeffrey Heer",
"Maneesh Agrawala"
] | [] | [] | [] |
VAST | 2,007 | FemaRepViz: Automatic Extraction and Geo-Temporal Visualization of FEMA National Situation Updates | 10.1109/VAST.2007.4388991 | An architecture for visualizing information extracted from text documents is proposed. In conformance with this architecture, a toolkit, FemaRepViz, has been implemented to extract and visualize temporal, geospatial, and summarized information from FEMA national update reports. Preliminary tests have shown satisfactory accuracy for FEMARepViz. A central component of the architecture is an entity extractor that extracts named entities like person names, location names, temporal references, etc. FEMARepViz is based on FactXtractor, an entity-extractor that works on text documents. The information extracted using FactXtractor is processed using GeoTagger, a geographical name disambiguation tool based on a novel clustering-based disambiguation algorithm. To extract relationships among entities, we propose a machine-learning based algorithm that uses a novel stripped dependency tree kernel. We illustrate and evaluate the usefulness of our system on the FEMA National Situation Updates. Daily reports are fetched by FEMARepViz from the FEMA website, segmented into coherent sections and each section is classified into one of several known incident types. We use concept Vista, Google maps and Google earth to visualize the events extracted from the text reports and allow the user to interactively filter the topics, locations, and time-periods of interest to create a visual analytics toolkit that is useful for rapid analysis of events reported in a large set of text documents. | false | false | [
"Chi-Chun Pan",
"Prasenjit Mitra"
] | [] | [] | [] |
VAST | 2,007 | Formalizing Analytical Discourse in Visual Analytics | 10.1109/VAST.2007.4389025 | This paper presents a theory of analytical discourse and a formal model of the intentional structure of visual analytic reasoning process. Our model rests on the theory of collaborative discourse, and allows for cooperative human-machine communication in visual interactive dialogues. Using a sample discourse from a crisis management scenario, we demonstrated the utility of our theory in characterizing the discourse context and collaboration. In particular, we view analytical discourse as plans consisting of complex mental attitude towards analytical tasks and issues. Under this view, human reasoning and computational analysis become integral part of the collaborative plan that evolves through discourse. | false | false | [
"Guoray Cai"
] | [] | [] | [] |
VAST | 2,007 | From Tasks to Tools: A Field Study in Collaborative Visual Analytics | 10.1109/VAST.2007.4389028 | This poster presents an exploratory field study of a VAST 2007 contest entry. We applied cognitive task analysis (CTA), grounded theory (GT), and activity theory (AT), to analysis of field notes and interviews from participants. Our results are described in the context of activity theory and sensemaking, two theoretical perspectives that we have found to be particularly useful in understanding analytic tasks. | false | false | [
"Daniel Ha",
"Minjung Kim",
"Andrew Wade",
"William Chao",
"Kevin I.-J. Ho",
"Linda T. Kaastra",
"Brian D. Fisher",
"John Dill"
] | [] | [] | [] |
VAST | 2,007 | IMAS: The Interactive Multigenomic Analysis System | 10.1109/VAST.2007.4388997 | This paper introduces a new Visual Analysis tool named IMAS (Interactive Multigenomic Analysis System), which combines common analysis tools such as Glimmer, BLAST, and Clustal-W into a unified Visual Analytic framework. IMAS displays the primary DNA sequence being analyzed by the biologist in a highly interactive, zoomable visual display. The user may analyze the sequence in a number of ways, and visualize these analyses in a coherent, sequence aligned form, with all related analysis products grouped together. This enables the user to rapidly perform analyses of DNA sequences without the need for tedious and error-prone cutting and pasting of sequence data from text files to and from web-based databases and data analysis services, as is now common practice. | false | false | [
"Chris Shaw 0002",
"Greg A. Dasch",
"Marina E. Eremeeva"
] | [] | [] | [] |
VAST | 2,007 | InfoVis as Seen by the World Out There: 2007 in Review | 10.1109/VAST.2007.4388989 | How we as insiders see and understand InfoVis is quite different from how it is seen by most people in the world out there. Most people get only glimpses of what we do, and those glimpses rarely tell the story clearly. Think about the view of InfoVis that has been created in 2007 through marketing, blogs, and articles. This view is peppered with misperception. In this presentation, I'll take you on a tour of InfoVis' exposure in 2007: the highlights and the failures that have shaped the world's perception of our beloved and important work. The world needs what we do, but this need remains largely unsatisfied. | false | false | [
"Stephen Few"
] | [] | [] | [] |
VAST | 2,007 | Intelligence Analysis Using Titan | 10.1109/VAST.2007.4389036 | The open source Titan informatics toolkit project, which extends the visualization toolkit (VTK) to include information visualization capabilities, is being developed by Sandia National Laboratories in collaboration with Kitware. The VAST Contest provided us with an opportunity to explore various ideas for constructing an analysis tool, while allowing us to exercise our architecture in the solution of a complex problem. As amateur analysts, we found the experience both enlightening and fun. | false | false | [
"Patricia Crossno",
"Brian N. Wylie",
"Andrew T. Wilson",
"John A. Greenfield",
"Eric T. Stanton",
"Timothy M. Shead",
"Lisa G. Ice",
"Kenneth Moreland",
"Jeffrey Baumes",
"Berk Geveci"
] | [] | [] | [] |
VAST | 2,007 | Intelligent Visual Analytics Queries | 10.1109/VAST.2007.4389001 | Visualizations of large multi-dimensional data sets, occurring in scientific and commercial applications, often reveal interesting local patterns. Analysts want to identify the causes and impacts of these interesting areas, and they also want to search for similar patterns occurring elsewhere in the data set. In this paper we introduce the Intelligent Visual Analytics Query (IVQuery) concept that combines visual interaction with automated analytical methods to support analysts in discovering the special properties and relations of identified patterns. The idea of IVQuery is to interactively select focus areas in the visualization. Then, according to the characteristics of the selected areas, such as the data dimensions and records, IVQuery employs analytical methods to identify the relationships to other portions of the data set. Finally, IVQuery generates visual representations for analysts to view and refine the results. IVQuery has been applied successfully to different real-world data sets, such as data warehouse performance, product sales, and sever performance analysis, and demonstrates the benefits of this technique over traditional filtering and zooming techniques. The visual analytics query technique can be used with many different types of visual representation. In this paper we show how to use IVQuery with parallel coordinates, visual maps, and scatter plots. | false | false | [
"Ming C. Hao",
"Umeshwar Dayal",
"Daniel A. Keim",
"D. Morent",
"Jörn Schneidewind"
] | [] | [] | [] |
VAST | 2,007 | Jigsaw meets Blue Iguanodon - The VAST 2007 Contest | 10.1109/VAST.2007.4389034 | This article describes our use of the Jigsaw system in working on the VAST 2007 contest. Jigsaw provides multiple views of a document collection and the individual entities within those documents, with a particular focus on exposing connections between entities. We describe how we refined the identified entities in order to better facilitate Jigsaw's use and how the different views helped us to uncover key parts of the underlying plot. | false | false | [
"Carsten Görg",
"Zhicheng Liu",
"Neel Parekh",
"Kanupriyah Singhal",
"John T. Stasko"
] | [] | [] | [] |
VAST | 2,007 | Jigsaw: Supporting Investigative Analysis through Interactive Visualization | 10.1109/VAST.2007.4389006 | Investigative analysts who work with collections of text documents connect embedded threads of evidence in order to formulate hypotheses about plans and activities of potential interest. As the number of documents and the corresponding number of concepts and entities within the documents grow larger, sense-making processes become more and more difficult for the analysts. We have developed a visual analytic system called Jigsaw that represents documents and their entities visually in order to help analysts examine reports more efficiently and develop theories about potential actions more quickly. Jigsaw provides multiple coordinated views of document entities with a special emphasis on visually illustrating connections between entities across the different documents. | false | false | [
"John T. Stasko",
"Carsten Görg",
"Zhicheng Liu",
"Kanupriya Singhal"
] | [
"TT"
] | [] | [] |
VAST | 2,007 | LAHVA: Linked Animal-Human Health Visual Analytics | 10.1109/VAST.2007.4388993 | Coordinated animal-human health monitoring can provide an early warning system with fewer false alarms for naturally occurring disease outbreaks, as well as biological, chemical and environmental incidents. This monitoring requires the integration and analysis of multi-field, multi-scale and multi-source data sets. In order to better understand these data sets, models and measurements at different resolutions must be analyzed. To facilitate these investigations, we have created an application to provide a visual analytics framework for analyzing both human emergency room data and veterinary hospital data. Our integrated visual analytic tool links temporally varying geospatial visualization of animal and human patient health information with advanced statistical analysis of these multi-source data. Various statistical analysis techniques have been applied in conjunction with a spatio-temporal viewing window. Such an application provides researchers with the ability to visually search the data for clusters in both a statistical model view and a spatio-temporal view. Our interface provides a factor specification/filtering component to allow exploration of causal factors and spread patterns. In this paper, we will discuss the application of our linked animal-human visual analytics (LAHVA) tool to two specific case studies. The first case study is the effect of seasonal influenza and its correlation with different companion animals (e.g., cats, dogs) syndromes. Here we use data from the Indiana Network for Patient Care (INPC) and Banfield Pet Hospitals in an attempt to determine if there are correlations between respiratory syndromes representing the onset of seasonal influenza in humans and general respiratory syndromes in cats and dogs. Our second case study examines the effect of the release of industrial wastewater in a community through companion animal surveillance. | false | false | [
"Ross Maciejewski",
"Benjamin Tyner",
"Yun Jang",
"Cheng Zheng",
"Rimma V. Nehme",
"David S. Ebert",
"William S. Cleveland",
"Mourad Ouzzani",
"Shaun J. Grannis",
"Lawrence T. Glickman"
] | [] | [] | [] |
VAST | 2,007 | Literature Fingerprinting: A New Method for Visual Literary Analysis | 10.1109/VAST.2007.4389004 | In computer-based literary analysis different types of features are used to characterize a text. Usually, only a single feature value or vector is calculated for the whole text. In this paper, we combine automatic literature analysis methods with an effective visualization technique to analyze the behavior of the feature values across the text. For an interactive visual analysis, we calculate a sequence of feature values per text and present them to the user as a characteristic fingerprint. The feature values may be calculated on different hierarchy levels, allowing the analysis to be done on different resolution levels. A case study shows several successful applications of our new method to known literature problems and demonstrates the advantage of our new visual literature fingerprinting. | false | false | [
"Daniel A. Keim",
"Daniela Oelke"
] | [] | [] | [] |
VAST | 2,007 | NewsLab: Exploratory Broadcast News Video Analysis | 10.1109/VAST.2007.4389005 | In this paper, we introduce NewsLab, an exploratory visualization approach for the analysis of large scale broadcast news video collections containing many thousands of news stories over extended periods of time. A river metaphor is used to depict the thematic changes of the news over time. An interactive lens metaphor allows the playback of fine-grained video segments selected through the river overview. Multi-resolution navigation is supported via a hierarchical time structure as well as a hierarchical theme structure. Themes can be explored hierarchically according to their thematic structure, or in an unstructured fashion using various ranking criteria. A rich set of interactions such as filtering, drill-down/roll-up navigation, history animation, and keyword based search are also provided. Our case studies show how this set of tools can be used to find emerging topics in the news, compare different broadcasters, or mine the news for topics of interest. | false | false | [
"Mohammad Ghoniem",
"Dongning Luo",
"Jing Yang 0001",
"William Ribarsky"
] | [] | [] | [] |
VAST | 2,007 | Outlook for Visual Analytics Research Funding | 10.1109/VAST.2007.4389030 | Visual Analytics has become a rapidly growing field of study. It is also a field that is addressing very significant real world problems in homeland security, business analytics, emergency management, genetics and bioinformatics, investigative analysis, medical analytics, and other areas. For both these reasons, it is attracting new funding and will continue to do so in the future. Visual analytics has also become an international field, with significant research efforts in Canada, Europe, and Australia, as well as the U.S. There is significant new research funding in Canada and Germany with other efforts being discussed, including a major program sponsored by the European Union. The contributors to this panel are some of the primary thought leaders providing research funding or involved in setting up the funding apparatus. We have asked them to present their needs, funding programs, and expectations from the research community. They all come from different perspectives, different missions, and different expectations. They will present their views of the range of activity in both the U.S. and internationally and discuss what is coming. Come learn about these programs, initiatives, and plans, and how you can contribute. | false | false | [
"James J. Thomas",
"Daniel A. Keim",
"Joe Kielman",
"Larry Rosenblum"
] | [] | [] | [] |
VAST | 2,007 | Point Placement by Phylogenetic Trees and its Application to Visual Analysis of Document Collections | 10.1109/VAST.2007.4389002 | The task of building effective representations to visualize and explore collections with moderate to large number of documents is hard. It depends on the evaluation of some distance measure among texts and also on the representation of such relationships in bi-dimensional spaces. In this paper we introduce an alternative approach for building visual maps of documents based on their content similarity, through reconstruction of phylogenetic trees. The tree is capable of representing relationships that allows the user to quickly recover information detected by the similarity metric. For a variety of text collections of different natures we show that we can achieve improved exploration capability and more clear visualization of relationships amongst documents. | false | false | [
"Ana M. Cuadros",
"Fernando Vieira Paulovich",
"Rosane Minghim",
"Guilherme P. Telles"
] | [] | [] | [] |
VAST | 2,007 | Session Viewer: Visual Exploratory Analysis of Web Session Logs | 10.1109/VAST.2007.4389008 | Large-scale session log analysis typically includes statistical methods and detailed log examinations. While both methods have merits, statistical methods can miss previously unknown sub-populations in the data and detailed analyses may have selection biases. We therefore built Session Viewer, a visualization tool to facilitate and bridge between statistical and detailed analyses. Taking a multiple-coordinated view approach, Session Viewer shows multiple session populations at the Aggregate, Multiple, and Detail data levels to support different analysis styles. To bridge between the statistical and the detailed analysis levels, Session Viewer provides fluid traversal between data levels and side-by-side comparison at all data levels. We describe an analysis of a large-scale web usage study to demonstrate the use of Session Viewer, where we quantified the importance of grouping sessions based on task type. | false | false | [
"Heidi Lam",
"Daniel M. Russell",
"Diane Tang",
"Tamara Munzner"
] | [] | [] | [] |
VAST | 2,007 | Situation Awareness Tool for Global Argus | 10.1109/VAST.2007.4389023 | We present a visualization tool to enhance situation awareness for Global Argus, a system that tracks and detects indications and warnings of biological events in near real time. Because Global Argus generates massive amounts of data daily, its analysts often struggle to interpret the information. To overcome this problem, we have developed the Global Argus situation awareness tool (GASAT) using the InteleView/World Wind geographical information system. This tool allows users to visualize current and past events in a particular region, and thus to understand how events evolve over time. Combined with the other tools that we are developing, GASAT will contribute to enhanced situation awareness in the tracking and detection of biological events. | false | false | [
"Jae Choi",
"Sang-joon Lee",
"Sarah Gigitashvilli",
"James M. Wilson V"
] | [] | [] | [] |
VAST | 2,007 | Something's "Fishy" at Global Ways and Gill Breeders - Analysis with nSpace and GeoTime | 10.1109/VAST.2007.4389018 | GeoTime and nSpace are two interactive visual analytics tools that support the process of analyzing massive and complex datasets. The two tools were used to examine and interpret the 2007 VAST contest dataset. This poster paper describes how the capabilities of the tools were used to facilitate and expedite every stage of an analyst workflow. | false | false | [
"Lynn Chien",
"Annie Tat",
"William Wright"
] | [] | [] | [] |
VAST | 2,007 | Spectra transformed for model-testing and visual exploration | 10.1109/VAST.2007.4389024 | The presence of highly tangled patterns in spectra and other serial data exacerbates the difficulty of performing visual comparison between a test model for a particular pattern and the data. The use of a simple map that plants peaks in the data directly onto their corresponding position in a residual plot with respect to a chosen test model not only retrieves the advantages of dynamic regression plotting, but in practical cases also causes patterns in the data to congregate in meaningful ways with respect to more than one reference curve in the plane. The technique is demonstrated on a polyphonic music signal. | false | false | [
"Palmyra Catravas"
] | [] | [] | [] |
VAST | 2,007 | SpiralView: Towards Security Policies Assessment through Visual Correlation of Network Resources with Evolution of Alarms | 10.1109/VAST.2007.4389007 | This article presents SpiralView, a visualization tool for helping system administrators to assess network policies. The tool is meant to be a complementary support to the routine activity of network monitoring, enabling a retrospective view on the alarms generated during an extended period of time. The tool permits to reason about how alarms distribute over time and how they correlate with network resources (e.g., users, IPs, applications, etc.), supporting the analysts in understanding how the network evolves and thus in devising new security policies for the future. The spiral visualization plots alarms in time, and, coupled with interactive bar charts and a users/applications graph view, is used to present network data and perform queries. The user is able to segment the data in meaningful subsets, zoom on specific related information, and inspect for relationships between alarms, users, and applications. In designing the visualizations and their interaction, and through tests with security experts, several ameliorations over the standard techniques have been provided. | false | false | [
"Enrico Bertini",
"Patrick Hertzog",
"Denis Lalanne"
] | [] | [] | [] |
VAST | 2,007 | Stories in GeoTime | 10.1109/VAST.2007.4388992 | A story is a powerful abstraction used by intelligence analysts to conceptualize threats and understand patterns as part of the analytical process. This paper demonstrates a system that detects geo-temporal patterns and integrates story narration to increase analytic sense-making cohesion in GeoTime. The GeoTime geo-temporal event visualization tool was augmented with a story system that uses narratives, hypertext linked visualizations, visual annotations, and pattern detection to create an environment for analytic exploration and communication, thereby assisting the analyst in identifying, extracting, arranging and presenting stories within the data. The story system lets analysts operate at the story level with higher-level abstractions of data, such as behaviors and events, while staying connected to the evidence. The story system was developed and evaluated in collaboration with analysts. | false | false | [
"Ryan Eccles",
"Thomas Kapler",
"Robert Harper 0002",
"William Wright"
] | [] | [] | [] |
VAST | 2,007 | Sunfall: A Collaborative Visual Analytics System for Astrophysics | 10.1109/VAST.2007.4389026 | Computational and experimental sciences produce and collect ever-larger and complex datasets, often in large-scale, multi-institution projects. The inability to gain insight into complex scientific phenomena using current software tools is a bottleneck facing virtually all endeavors of science. In this paper, we introduce Sunfall, a collaborative visual analytics system developed for the Nearby Supernova Factory, an international astrophysics experiment and the largest data volume supernova search currently in operation. Sunfall utilizes novel interactive visualization and analysis techniques to facilitate deeper scientific insight into complex, noisy, high-dimensional, high-volume, time-critical data. The system combines novel image processing algorithms, statistical analysis, and machine learning with highly interactive visual interfaces to enable collaborative, user-driven scientific exploration of supernova image and spectral data. Sunfall is currently in operation at the Nearby Supernova Factory; it is the first visual analytics system in production use at a major astrophysics project. | false | false | [
"Cecilia R. Aragon",
"Stephen J. Bailey",
"Sarah S. Poon",
"Karl J. Runge",
"Rollin C. Thomas"
] | [] | [] | [] |
VAST | 2,007 | TextPlorer: An application supporting text analysis | 10.1109/VAST.2007.4389019 | TexPlorer is an integrated system for exploring and analyzing large amounts of text documents. The data processing modules of TexPlorer consist of named entity extraction, entity relation extraction, hierarchical clustering, and text summarization tools. Using a timeline tool, tree-view, table-view, and concept maps, TexPlorer provides an analytical interface for exploring a set of text documents from different perspectives and allows users to explore vast amount of text documents efficiently. | false | false | [
"Chi-Chun Pan",
"Anuj R. Jaiswal",
"Junyan Luo",
"Anthony C. Robinson"
] | [] | [] | [] |
VAST | 2,007 | Thin Client Visualization | 10.1109/VAST.2007.4388996 | We have developed a Web 2.0 thin client visualization framework called GeoBoost™. Our framework focuses on geospatial visualization and using scalable vector graphics (SVG), AJAX, RSS and GeoRSS we have built a complete thin client component set. Our component set provides a rich user experience that is completely browser based. It includes maps, standard business charts, graphs, and time-oriented components. The components are live, interactive, linked, and support real time collaboration. | false | false | [
"Stephen G. Eick",
"M. Andrew Eick",
"Jesse Fugitt",
"Brian Horst",
"Maxim Khailo",
"Russell A. Lankenau"
] | [] | [] | [] |
VAST | 2,007 | University of British Columbia & Simon Fraser University - The Bricolage | 10.1109/VAST.2007.4389020 | This abstract presents a bricolage approach to the 2007 VAST contest. The analytical process we used is presented across four stages of sensemaking. Several tools were used throughout our approach, and we present their strengths and weaknesses for specific aspects of the analytical process. In addition, we review the details of both individual and collaborative techniques for solving visual analytics problems. | false | false | [
"William Chao",
"Daniel Ha",
"Kevin I.-J. Ho",
"Linda T. Kaastra",
"Minjung Kim",
"Andrew Wade",
"Brian D. Fisher"
] | [] | [] | [] |
VAST | 2,007 | University of British Columbia & Simon Fraser University - The Bricolage | 10.1109/VAST.2007.4470207 | This abstract presents a bricolage approach to the 2007 VAST contest. The analytical process we used is presented across four stages of sensemaking. Several tools were used throughout our approach, and we present their strengths and weaknesses for specific aspects of the analytical process. In addition, we review the details of both individual and collaborative techniques for solving visual analytics problems. | false | false | [
"William Chao",
"Daniel Ha",
"Kevin I.-J. Ho",
"Linda T. Kaastra",
"Minjung Kim",
"Andrew Wade",
"Brian D. Fisher"
] | [] | [] | [] |
VAST | 2,007 | Us vs. Them: Understanding Social Dynamics in Wikipedia with Revert Graph Visualizations | 10.1109/VAST.2007.4389010 | Wikipedia is a wiki-based encyclopedia that has become one of the most popular collaborative on-line knowledge systems. As in any large collaborative system, as Wikipedia has grown, conflicts and coordination costs have increased dramatically. Visual analytic tools provide a mechanism for addressing these issues by enabling users to more quickly and effectively make sense of the status of a collaborative environment. In this paper we describe a model for identifying patterns of conflicts in Wikipedia articles. The model relies on users' editing history and the relationships between user edits, especially revisions that void previous edits, known as "reverts". Based on this model, we constructed Revert Graph, a tool that visualizes the overall conflict patterns between groups of users. It enables visual analysis of opinion groups and rapid interactive exploration of those relationships via detail drill-downs. We present user patterns and case studies that show the effectiveness of these techniques, and discuss how they could generalize to other systems. | false | false | [
"Bongwon Suh",
"Ed H. Chi",
"Bryan A. Pendleton",
"Aniket Kittur"
] | [] | [] | [] |
VAST | 2,007 | VAST 2007 Contest - Analysis with nSpace and GeoTime | 10.1109/VAST.2007.4389033 | GeoTime and nSpace are two interactive visual analytics tools that support the process of analyzing massive and complex datasets. The two tools were used to examine and interpret the 2007 VAST contest dataset. This paper describes how the capabilities of the tools were used to facilitate and expedite every stage of the analysis. | false | false | [
"Lynn Chien",
"Annie Tat",
"Thomas Kapler",
"Patricia Enns",
"Winniefried Kuan",
"William Wright"
] | [] | [] | [] |
VAST | 2,007 | VAST 2007 Contest - Blue Iguanodon | 10.1109/VAST.2007.4389032 | Visual analytics experts realize that one effective way to push the field forward and to develop metrics for measuring the performance of various visual analytics components is to hold an annual competition. The second visual analytics science and technology (VAST) contest was held in conjunction with the 2007 IEEE VAST symposium. In this contest participants were to use visual analytic tools to explore a large heterogeneous data collection to construct a scenario and find evidence buried in the data of illegal and terrorist activities that were occurring. A synthetic data set was made available as well as tasks. In this paper we describe some of the advances we have made from the first competition held in 2006. | false | false | [
"Georges G. Grinstein",
"Catherine Plaisant",
"Sharon J. Laskowski",
"Theresa A. O'Connell",
"Jean Scholtz",
"Mark A. Whiting"
] | [] | [] | [] |
VAST | 2,007 | VAST 2007 Contest Data Analysis Using NdCore and REGGAE | 10.1109/VAST.2007.4389038 | ATS Intelligent Discovery analyzed the VAST 2007 contest data set using two of its proprietary applications, NdCore and REGGAE (Relationship Generating Graph Analysis Engine). The paper describes these tools and how they were used to discover the contest's scenarios of wildlife law enforcement, endangered species issues, and ecoterrorism. | false | false | [
"Lynn Schwendiman",
"Jonathan McLean",
"Jonathan Larson"
] | [] | [] | [] |
VAST | 2,007 | VAST 2007 Contest Interactive Poster: Data Analysis Using NdCore and REGGAE | 10.1109/VAST.2007.4389016 | ATS intelligent discovery analyzed the VAST 2007 contest data set using two of its proprietary applications, NdCore and REGGAE (relationship generating graph analysis engine). The paper describes these tools and how they were used to discover the contest's scenarios of wildlife law enforcement, endangered species issues, and ecoterrorism. | false | false | [
"Lynn Schwendiman",
"Jonathan McLean",
"Jonathan Larson"
] | [] | [] | [] |
VAST | 2,007 | VAST 2007 Contest TexPlorer | 10.1109/VAST.2007.4389037 | TexPlorer is an integrated system for exploring and analyzing vast amount of text documents. The data processing modules of TexPlorer consist of named entity extraction, entity relation extraction, hierarchical clustering, and text summarization tools. Using time line tool, tree-view, table-view, and concept maps, TexPlorer provides visualizations from different aspects and allows analysts to explore vast amount of text documents efficiently. | false | false | [
"Chi-Chun Pan",
"Anuj R. Jaiswal",
"Junyan Luo",
"Anthony C. Robinson",
"Prasenjit Mitra",
"Alan M. MacEachren",
"Ian Turton"
] | [] | [] | [] |