| Conference | Year | Title | DOI | Abstract | Accessible | Early | AuthorNames-Deduped | Award | Resources | ResourceLinks |
|---|---|---|---|---|---|---|---|---|---|---|
| CHI | 2010 | A comparative evaluation on tree visualization methods for hierarchical structures with large fan-outs | 10.1145/1753326.1753359 | Hierarchical structures with large fan-outs are hard to browse and understand. In the conventional node-link tree visualization, the screen quickly becomes overcrowded as users open nodes that have too many child nodes to fit in one screen. To address this problem, we propose two extensions to the conventional node-link tree visualization: a list view with a scrollbar and a multi-column interface. We compared them against the conventional tree visualization interface in a user study. Results show that users are able to browse and understand the tree structure faster with the multi-column interface than the other two interfaces. Overall, they also liked the multi-column interface better than the others. | false | false | ["Hyunjoo Song", "Bo Hyoung Kim", "Bongshin Lee", "Jinwook Seo"] | [] | [] | [] |
| CHI | 2010 | Apatite: a new interface for exploring APIs | 10.1145/1753326.1753525 | We present Apatite, a new tool that aids users in learning and understanding a complex API by visualizing the common associations between its various components. Current object-oriented API documentation is usually navigated in a fixed tree structure, starting with a package and then filtering by a specific class. For large APIs, this scheme is overly restrictive, because it prevents users from locating a particular action without first knowing which class it belongs to. Apatite's design instead enables users to search across any level of an API's hierarchy. This is made possible by the introduction of a novel interaction technique that presents popular items from multiple categories simultaneously, determining their relevance by approximating the strength of their association using search engine data. The design of Apatite was refined through iterative usability testing, and it has been released publicly as a web application. | false | false | ["Daniel S. Eisenberg", "Jeffrey Stylos", "Brad A. Myers"] | [] | [] | [] |
| CHI | 2010 | Biketastic: sensing and mapping for better biking | 10.1145/1753326.1753598 | Bicycling is an affordable, environmentally friendly alternative transportation mode to motorized travel. A common task performed by bikers is to find good routes in an area, where the quality of a route is based on safety, efficiency, and enjoyment. Finding routes involves trial and error as well as exchanging information between members of a bike community. Biketastic is a platform that enriches this experimentation and route-sharing process, making it both easier and more effective. Using a mobile phone application and online map visualization, bikers are able to document and share routes, ride statistics, sensed information to infer route roughness and noisiness, and media that documents ride experience. Biketastic was designed to ensure the link between information gathering, visualization, and bicycling practices. In this paper, we present the architecture and algorithms for route data inferences and visualization. We evaluate the system based on feedback from bicyclists provided during a two-week pilot. | false | false | ["Sasank Reddy", "Katie Shilton", "Gleb Denisov", "Christian Cenizal", "Deborah Estrin", "Mani B. Srivastava"] | [] | [] | [] |
| CHI | 2010 | Crowdsourcing graphical perception: using mechanical turk to assess visualization design | 10.1145/1753326.1753357 | Understanding perception is critical to effective visualization design. With its low cost and scalability, crowdsourcing presents an attractive option for evaluating the large design space of visualizations; however, it first requires validation. In this paper, we assess the viability of Amazon's Mechanical Turk as a platform for graphical perception experiments. We replicate previous studies of spatial encoding and luminance contrast and compare our results. We also conduct new experiments on rectangular area perception (as in treemaps or cartograms) and on chart size and gridline spacing. Our results demonstrate that crowdsourced perception experiments are viable and contribute new insights for visualization design. Lastly, we report cost and performance data from our experiments and distill recommendations for the design of crowdsourced studies. | false | false | ["Jeffrey Heer", "Michael Bostock"] | [] | [] | [] |
| CHI | 2010 | InAir: sharing indoor air quality measurements and visualizations | 10.1145/1753326.1753605 | This paper describes inAir, a tool for sharing measurements and visualizations of indoor air quality within one's social network. Poor indoor air quality is difficult for humans to detect through sight and smell alone and can contribute to the development of chronic diseases. Through a four-week long study of fourteen households as six groups, we found that inAir (1) increased awareness of, and reflection on, air quality, (2) promoted behavioral changes that resulted in improved indoor air quality, and (3) demonstrated the persuasive power of sharing for furthering improvements to indoor air quality in terms of fostering new social awareness and behavior changes as well as strengthening social bonds and prompting collaborative efforts across social networks to improve human health and well-being. | false | false | ["Sunyoung Kim", "Eric Paulos"] | [] | [] | [] |
| CHI | 2010 | Individual models of color differentiation to improve interpretability of information visualization | 10.1145/1753326.1753715 | Color is commonly used to represent categories and values in many computer applications, but differentiating these colors can be difficult in many situations (e.g., for users with color vision deficiency (CVD), or in bright light). Current solutions to this problem can adapt colors based on standard simulations of CVD, but these models cover only a fraction of the ways in which color perception can vary. To improve the specificity and accuracy of these approaches, we have developed the first ever individualized model of color differentiation (ICD). The model is based on a short calibration performed by a particular user for a particular display, and so automatically covers all aspects of the user's ability to see and differentiate colors in an environment. In this paper we introduce the new model and the manner in which differentiability limits are predicted. We gathered empirical data from 16 users to assess the model's accuracy and robustness. We found that the model is highly effective at capturing individual differentiation abilities, works for users with and without CVD, can be tuned to balance accuracy and color availability, and can serve as the basis for improved color adaptation schemes. | false | false | ["David R. Flatla", "Carl Gutwin"] | [] | [] | [] |
| CHI | 2010 | Interactive optimization for steering machine classification | 10.1145/1753326.1753529 | Interest has been growing within HCI on the use of machine learning and reasoning in applications to classify such hidden states as user intentions, based on observations. HCI researchers with these interests typically have little expertise in machine learning and often employ toolkits as relatively fixed "black boxes" for generating statistical classifiers. However, attempts to tailor the performance of classifiers to specific application requirements may require a more sophisticated understanding and custom-tailoring of methods. We present ManiMatrix, a system that provides controls and visualizations that enable system builders to refine the behavior of classification systems in an intuitive manner. With ManiMatrix, users directly refine parameters of a confusion matrix via an interactive cycle of re-classification and visualization. We present the core methods and evaluate the effectiveness of the approach in a user study. Results show that users are able to quickly and effectively modify decision boundaries of classifiers to tailor the behavior of classifiers to problems at hand. | false | false | ["Ashish Kapoor", "Bongshin Lee", "Desney S. Tan", "Eric Horvitz"] | [] | [] | [] |
| CHI | 2010 | ManyNets: an interface for multiple network analysis and visualization | 10.1145/1753326.1753358 | Traditional network analysis tools support analysts in studying a single network. ManyNets offers these analysts a powerful new approach that enables them to work on multiple networks simultaneously. Several thousand networks can be presented as rows in a tabular visualization, and then inspected, sorted and filtered according to their attributes. The networks to be displayed can be obtained by subdivision of larger networks. Examples of meaningful subdivisions used by analysts include ego networks, community extraction, and time-based slices. Cell visualizations and interactive column overviews allow analysts to assess the distribution of attributes within particular sets of networks. Details, such as traditional node-link diagrams, are available on demand. We describe a case study analyzing a social network geared towards film recommendations by means of decomposition. A small usability study provides feedback on the use of the interface on a set of tasks drawn from the case study. | false | false | ["Manuel Freire 0001", "Catherine Plaisant", "Ben Shneiderman", "Jennifer Golbeck"] | [] | [] | [] |
| CHI | 2010 | pCubee: a perspective-corrected handheld cubic display | 10.1145/1753326.1753535 | In this paper, we describe the design of a personal cubic display that offers novel interaction techniques for static and dynamic 3D content. We extended one-screen Fish Tank VR by arranging five small LCD panels into a box shape that is light and compact enough to be handheld. The display uses head-coupled perspective rendering and a real-time physics simulation engine to establish an interaction metaphor of having real objects inside a physical box that a user can hold and manipulate. We evaluated our prototype as a visualization tool and as an input device by comparing it with a conventional LCD display and mouse for a 3D tree-tracing task. We found that bimanual interaction with pCubee and a mouse offered the best performance and was most preferred by users. pCubee has potential in 3D visualization and interactive applications such as games, storytelling and education, as well as viewing 3D maps, medical and architectural data. | false | false | ["Ian Stavness", "Billy Lam", "Sidney S. Fels"] | [] | [] | [] |
| CHI | 2010 | Small business applications of sourcemap: a web tool for sustainable design and supply chain transparency | 10.1145/1753326.1753465 | This paper introduces sustainable design applications for small businesses through the Life Cycle Assessment and supply chain publishing platform Sourcemap.org. This web-based tool was developed through a year-long participatory design process with five small businesses in Scotland and in New England. Sourcemap was used as a diagnostic tool for carbon accounting, design and supply chain management. It offers a number of ways to market sustainable practices through embedded and printed visualizations. Our experiences confirm the potential of web sustainability tools and social media to expand the discourse and to negotiate the diverse goals inherent in social and environmental sustainability. | false | false | ["Leonardo Bonanni", "Matthew Hockenberry", "David Zwarg", "Chris Csikszentmihályi", "Hiroshi Ishii 0001"] | [] | [] | [] |
| CHI | 2010 | The case of the disappearing Ox: a field study of mobile activity and context logging | 10.1145/1753326.1753397 | There are numerous settings where people examine, scrutinize and discuss the details of images in the course of their work. In most medical domains, scans and x-rays are used in the diagnosis of cases; in most areas of science, methods of visualization have been adopted to assist in the analysis of data; and images of different kinds are critical for many research fields in the social sciences and humanities. It is not surprising that technologies have recently been proposed to assist with the analysis and examination of images. In this paper, we consider requirements for technologies in a rather distinctive domain of research, the classics. Drawing upon an analysis of the detailed ways in which classicists work with digital images, we discuss the requirements for systems to support researchers in this domain, and also provide further considerations on the general development of image processing technologies and visualization techniques. | false | false | ["Grace de la Flor", "Paul Luff", "Marina Jirotka", "John Pybus", "Ruth Kirkham", "Annamaria Carusi"] | [] | [] | [] |
| CHI | 2010 | Timeline collaboration | 10.1145/1753326.1753404 | This paper explores timelines as a web-based tool for collaboration between citizens and municipal caseworkers. The paper takes its point of departure in a case study of planning and control of parental leave; a process that may involve surprisingly many actors. As part of the case study, a web-based timeline, CaseLine, was designed. This design crosses the boundaries between leisure and work, in ways that are different from what is often seen in current HCI. The timeline has several roles on these boundaries: It is a shared planning and visualization tool that may be used by parents and caseworkers alone or together, it serves as a contract and a sandbox, as a record and a plan, as inspiration for planning and an authoritative road, as a common information space and a fragmented exchange. Serving all these roles does not happen smoothly, and the paper discusses the challenges of such timeline interaction in, and beyond, this case. | false | false | ["Morten Bohøj", "Nikolaj Gandrup Borchorst", "Niels Olof Bouvin", "Susanne Bødker", "Pär-Ola Zander"] | [] | [] | [] |
| CHI | 2010 | UpStream: motivating water conservation with low-cost water flow sensing and persuasive displays | 10.1145/1753326.1753604 | Water is our most precious and most rapidly declining natural resource. We explore pervasive technology as an approach for promoting water conservation in public and private spaces. We hope to motivate immediate reduction in water use as well as higher-order behaviors (e.g., seeking new information) through unobtrusive low-cost water flow sensing and several persuasive displays. Early prototypes were installed at public faucets and a private (shared) shower, logging water usage first without and then with ambient displays. This pilot study led to design iterations, culminating in long-term deployment of sensors in four private showers over the course of three weeks. Sensors first logged baseline water usage without visualization. Then, two display styles, ambient and numeric, were deployed in random order, each showing individual and average water consumption. Quantitative data along with participants' feedback contrast the effectiveness of numeric displays against abstract visualization in this very important domain of water conservation and public health. | false | false | ["Stacey Kuznetsov", "Eric Paulos"] | [] | [] | [] |
| CHI | 2010 | Using text animated transitions to support navigation in document histories | 10.1145/1753326.1753427 | This article examines the benefits of using text animated transitions for navigating in the revision history of textual documents. We propose an animation technique for smoothly transitioning between different text revisions, then present the Diffamation system. Diffamation supports rapid exploration of revision histories by combining text animated transitions with simple navigation and visualization tools. We finally describe a user study showing that smooth text animation allows users to track changes in the evolution of textual documents more effectively than flipping pages. | false | false | ["Fanny Chevalier", "Pierre Dragicevic", "Anastasia Bezerianos", "Jean-Daniel Fekete"] | [] | [] | [] |
| Vis | 2009 | A Novel Interface for Interactive Exploration of DTI fibers | 10.1109/TVCG.2009.112 | Visual exploration is essential to the visualization and analysis of densely sampled 3D DTI fibers in biological specimens, due to the high geometric, spatial, and anatomical complexity of fiber tracts. Previous methods for DTI fiber visualization use zooming, color-mapping, selection, and abstraction to deliver the characteristics of the fibers. However, these schemes mainly focus on the optimization of visualization in the 3D space where cluttering and occlusion make grasping even a few thousand fibers difficult. This paper introduces a novel interaction method that augments the 3D visualization with a 2D representation containing a low-dimensional embedding of the DTI fibers. This embedding preserves the relationship between the fibers and removes the visual clutter that is inherent in 3D renderings of the fibers. This new interface allows the user to manipulate the DTI fibers as both 3D curves and 2D embedded points and easily compare or validate his or her results in both domains. The implementation of the framework is GPU based to achieve real-time interaction. The framework was applied to several tasks, and the results show that our method reduces the user's workload in recognizing 3D DTI fibers and permits quick and accurate DTI fiber selection. | false | false | ["Wei Chen 0001", "Zi'ang Ding", "Song Zhang 0004", "Anna MacKay-Brandt", "Stephen Correia", "Huamin Qu", "John Allen Crow", "David F. Tate", "Zhicheng Yan", "Qunsheng Peng 0001"] | [] | [] | [] |
| Vis | 2009 | A Physiologically-based Model for Simulation of Color Vision Deficiency | 10.1109/TVCG.2009.113 | Color vision deficiency (CVD) affects approximately 200 million people worldwide, compromising the ability of these individuals to effectively perform color and visualization-related tasks. This has a significant impact on their private and professional lives. We present a physiologically-based model for simulating color vision. Our model is based on the stage theory of human color vision and is derived from data reported in electrophysiological studies. It is the first model to consistently handle normal color vision, anomalous trichromacy, and dichromacy in a unified way. We have validated the proposed model through an experimental evaluation involving groups of color vision deficient individuals and individuals with normal color vision. Our model can provide insights and feedback on how to improve visualization experiences for individuals with CVD. It also provides a framework for testing hypotheses about some aspects of the retinal photoreceptors in color vision deficient individuals. | false | false | ["Gustavo Mello Machado", "Manuel Menezes de Oliveira Neto", "Leandro A. F. Fernandes"] | [] | [] | [] |
| Vis | 2009 | A User Study to Compare Four Uncertainty Visualization Methods for 1D and 2D Datasets | 10.1109/TVCG.2009.114 | Many techniques have been proposed to show uncertainty in data visualizations. However, very little is known about their effectiveness in conveying meaningful information. In this paper, we present a user study that evaluates the perception of uncertainty amongst four of the most commonly used techniques for visualizing uncertainty in one-dimensional and two-dimensional data. The techniques evaluated are traditional errorbars, scaled size of glyphs, color-mapping on glyphs, and color-mapping of uncertainty on the data surface. The study uses generated data that was designed to represent the systematic and random uncertainty components. Twenty-seven users performed two types of search tasks and two types of counting tasks on 1D and 2D datasets. The search tasks involved finding data points that were least or most uncertain. The counting tasks involved counting data features or uncertainty features. A 4×4 full-factorial ANOVA indicated a significant interaction between the techniques used and the type of tasks assigned for both datasets, indicating that differences in performance between the four techniques depended on the type of task performed. Several one-way ANOVAs were computed to explore the simple main effects. Bonferroni's correction was used to control for the family-wise error rate for alpha-inflation. Although we did not find a consistent order among the four techniques for all the tasks, there are several findings from the study that we think are useful for uncertainty visualization design. We found a significant difference in user performance between searching for locations of high and searching for locations of low uncertainty. Errorbars consistently underperformed throughout the experiment. Scaling the size of glyphs and color-mapping of the surface performed reasonably well. The efficiency of most of these techniques was highly dependent on the tasks performed. We believe that these findings can be used in future uncertainty visualization design. In addition, the framework developed in this user study presents a structured approach to evaluate uncertainty visualization techniques, as well as provides a basis for future research in uncertainty visualization. | false | false | ["Jibonananda Sanyal", "Song Zhang 0004", "Gargi Bhattacharya", "Philip Amburn", "Robert J. Moorhead II"] | [] | [] | [] |
| Vis | 2009 | A Visual Approach to Efficient Analysis and Quantification of Ductile Iron and Reinforced Sprayed Concrete | 10.1109/TVCG.2009.115 | This paper describes advanced volume visualization and quantification for applications in non-destructive testing (NDT), which results in novel and highly effective interactive workflows for NDT practitioners. We employ a visual approach to explore and quantify the features of interest, based on transfer functions in the parameter spaces of specific application scenarios. Examples are the orientations of fibres or the roundness of particles. The applicability and effectiveness of our approach is illustrated using two specific scenarios of high practical relevance. First, we discuss the analysis of Steel Fibre Reinforced Sprayed Concrete (SFRSpC). We investigate the orientations of the enclosed steel fibres and their distribution, depending on the concrete's application direction. This is a crucial step in assessing the material's behavior under mechanical stress, which is still in its infancy and therefore a hot topic in the building industry. The second application scenario is the designation of the microstructure of ductile cast irons with respect to the contained graphite. This corresponds to the requirements of the ISO standard 945-1, which deals with 2D metallographic samples. We illustrate how the necessary analysis steps can be carried out much more efficiently using our system for 3D volumes. Overall, we show that a visual approach with custom transfer functions in specific application domains offers significant benefits and has the potential of greatly improving and optimizing the workflows of domain scientists and engineers. | false | false | ["Laura Fritz", "Markus Hadwiger", "Georg Geier", "Gerhard Pittino", "M. Eduard Gröller"] | [] | [] | [] |
| Vis | 2009 | An interactive visualization tool for multi-channel confocal microscopy data in neurobiology research | 10.1109/TVCG.2009.118 | Confocal microscopy is widely used in neurobiology for studying the three-dimensional structure of the nervous system. Confocal image data are often multi-channel, with each channel resulting from a different fluorescent dye or fluorescent protein; one channel may have dense data, while another has sparse; and there are often structures at several spatial scales: subneuronal domains, neurons, and large groups of neurons (brain regions). Even qualitative analysis can therefore require visualization using techniques and parameters fine-tuned to a particular dataset. Despite the plethora of volume rendering techniques that have been available for many years, the techniques in standard use in neurobiological research are somewhat rudimentary, such as looking at image slices or maximal intensity projections. Thus there is a real demand from neurobiologists, and biologists in general, for a flexible visualization tool that allows interactive visualization of multi-channel confocal data, with rapid fine-tuning of parameters to reveal the three-dimensional relationships of structures of interest. Together with neurobiologists, we have designed such a tool, choosing visualization methods to suit the characteristics of confocal data and a typical biologist's workflow. We use interactive volume rendering with intuitive settings for multidimensional transfer functions, multiple render modes and multi-views for multi-channel volume data, and embedding of polygon data into volume data for rendering and editing. As an example, we apply this tool to visualize confocal microscopy datasets of the developing zebrafish visual system. | false | false | ["Yong Wan", "Hideo Otsuna", "Chi-Bin Chien", "Charles D. Hansen"] | [] | [] | [] |
| Vis | 2009 | Applying Manifold Learning to Plotting Approximate Contour Trees | 10.1109/TVCG.2009.119 | A contour tree is a powerful tool for delineating the topological evolution of isosurfaces of a single-valued function, and thus has been frequently used as a means of extracting features from volumes and their time-varying behaviors. Several sophisticated algorithms have been proposed for constructing contour trees, but they often complicate the software implementation, especially for higher-dimensional cases such as time-varying volumes. This paper presents a simple yet effective approach to plotting approximate contour trees in 3D space from a set of scattered samples embedded in the high-dimensional space. Our main idea is to take advantage of manifold learning so that we can elongate the distribution of high-dimensional data samples to embed it into a low-dimensional space while respecting the local proximity of sample points. The contribution of this paper lies in the introduction of new distance metrics to manifold learning, which allows us to reformulate existing algorithms as a variant of currently available dimensionality reduction schemes. Efficient reduction of data sizes together with segmentation capability is also developed to equip our approach with a coarse-to-fine analysis even for large-scale datasets. Examples are provided to demonstrate that our proposed scheme can successfully traverse the features of volumes and their temporal behaviors through the constructed contour trees. | false | false | ["Shigeo Takahashi", "Issei Fujishiro", "Masato Okada"] | [] | [] | [] |
| Vis | 2009 | Automatic Transfer Function Generation Using Contour Tree Controlled Residue Flow Model and Color Harmonics | 10.1109/TVCG.2009.120 | Transfer functions facilitate the volumetric data visualization by assigning optical properties to various data features and scalar values. Automation of transfer function specifications still remains a challenge in volume rendering. This paper presents an approach for automating transfer function generations by utilizing topological attributes derived from the contour tree of a volume. The contour tree acts as a visual index to volume segments, and captures associated topological attributes involved in volumetric data. A residue flow model based on Darcy's law is employed to control distributions of opacity between branches of the contour tree. Topological attributes are also used to control color selection in a perceptual color space and create harmonic color transfer functions. The generated transfer functions can depict inclusion relationship between structures and maximize opacity and color differences between them. The proposed approach allows efficient automation of transfer function generations, and allows exploration of the data to be carried out by controlling the opacity residue flow rate instead of complex low-level transfer function parameter adjustments. Experiments on various data sets demonstrate the practical use of our approach in transfer function generations. | false | false | ["Jianlong Zhou", "Masahiro Takatsuka"] | [] | [] | [] |
| Vis | 2009 | BrainGazer - Visual Queries for Neurobiology Research | 10.1109/TVCG.2009.121 | Neurobiology investigates how anatomical and physiological relationships in the nervous system mediate behavior. Molecular genetic techniques, applied to species such as the common fruit fly Drosophila melanogaster, have proven to be an important tool in this research. Large databases of transgenic specimens are being built and need to be analyzed to establish models of neural information processing. In this paper we present an approach for the exploration and analysis of neural circuits based on such a database. We have designed and implemented BrainGazer, a system which integrates visualization techniques for volume data acquired through confocal microscopy as well as annotated anatomical structures with an intuitive approach for accessing the available information. We focus on the ability to visually query the data based on semantic as well as spatial relationships. Additionally, we present visualization techniques for the concurrent depiction of neurobiological volume data and geometric objects which aim to reduce visual clutter. The described system is the result of an ongoing interdisciplinary collaboration between neurobiologists and visualization researchers. | false | false | ["Stefan Bruckner", "Veronika Soltészová", "M. Eduard Gröller", "Jirí Hladuvka", "Katja Bühler", "Jai Y. Yu", "Barry J. Dickson"] | [] | [] | [] |
| Vis | 2009 | Color Seamlessness in Multi-Projector Displays Using Constrained Gamut Morphing | 10.1109/TVCG.2009.124 | Multi-projector displays show significant spatial variation in 3D color gamut due to variation in the chromaticity gamuts across the projectors, vignetting effect of each projector and also overlap across adjacent projectors. In this paper we present a new constrained gamut morphing algorithm that removes all these variations and results in true color seamlessness across tiled multi-projector displays. Our color morphing algorithm adjusts the intensities of light from each pixel of each projector precisely to achieve a smooth morphing from one projector's gamut to the other's through the overlap region. This morphing is achieved by imposing precise constraints on the perceptual difference between the gamuts of two adjacent pixels. In addition, our gamut morphing assures a C1 continuity yielding visually pleasing appearance across the entire display. We demonstrate our method successfully on a planar and a curved display using both low and high-end projectors. Our approach is completely scalable, efficient and automatic. We also demonstrate the real-time performance of our image correction algorithm on GPUs for interactive applications. To the best of our knowledge, this is the first work that presents a scalable method with a strong foundation in perception and realizes, for the first time, a truly seamless display where the number of projectors cannot be deciphered. | false | false | ["Behzad Sajadi", "Maxim Lazarov", "Meenakshisundaram Gopi", "Aditi Majumder"] | [] | [] | [] |
| Vis | 2009 | Coloring 3D Line fields Using Boy's Real Projective Plane Immersion | 10.1109/TVCG.2009.125 | We introduce a new method for coloring 3D line fields and show results from its application in visualizing orientation in DTI brain data sets. The method uses Boy's surface, an immersion of RP2 in 3D. This coloring method is smooth and one-to-one except on a set of measure zero, the double curve of Boy's surface. | false | false | ["Çagatay Demiralp", "John F. Hughes", "David H. Laidlaw"] | [] | [] | [] |
Vis | 2,009 | Comparing 3D Vector Field Visualization Methods: A User Study | 10.1109/TVCG.2009.126 | In a user study comparing four visualization methods for three-dimensional vector data, participants used visualizations from each method to perform five simple but representative tasks: 1) determining whether a given point was a critical point, 2) determining the type of a critical point, 3) determining whether an integral curve would advect through two points, 4) determining whether swirling movement is present at a point, and 5) determining whether the vector field is moving faster at one point than another. The visualization methods were line and tube representations of integral curves with both monoscopic and stereoscopic viewing. While participants reported a preference for stereo lines, quantitative results showed performance among the tasks varied by method. Users performed all tasks better with methods that: 1) gave a clear representation with no perceived occlusion, 2) clearly visualized curve speed and direction information, and 3) provided fewer rich 3D cues (e.g., shading, polygonal arrows, overlap cues, and surface textures). These results provide quantitative support for anecdotal evidence on visualization methods. The tasks and testing framework also give a basis for comparing other visualization methods, for creating more effective methods, and for defining additional tasks to explore further the tradeoffs among the methods. | false | false | [
"Andrew S. Forsberg",
"Jian Chen 0006",
"David H. Laidlaw"
] | [] | [] | [] |
Vis | 2,009 | Continuous Parallel Coordinates | 10.1109/TVCG.2009.131 | Typical scientific data is represented on a grid with appropriate interpolation or approximation schemes, defined on a continuous domain. The visualization of such data in parallel coordinates may reveal patterns latently contained in the data and thus can improve the understanding of multidimensional relations. In this paper, we adopt the concept of continuous scatterplots for the visualization of spatially continuous input data to derive a density model for parallel coordinates. Based on the point-line duality between scatterplots and parallel coordinates, we propose a mathematical model that maps density from a continuous scatterplot to parallel coordinates and present different algorithms for both numerical and analytical computation of the resulting density field. In addition, we show how the 2-D model can be used to successively construct continuous parallel coordinates with an arbitrary number of dimensions. Since continuous parallel coordinates interpolate data values within grid cells, a scalable and dense visualization is achieved, which will be demonstrated for typical multi-variate scientific data. | false | false | [
"Julian Heinrich",
"Daniel Weiskopf"
] | [] | [] | [] |
Vis | 2,009 | Curve-Centric Volume Reformation for Comparative Visualization | 10.1109/TVCG.2009.136 | We present two visualization techniques for curve-centric volume reformation with the aim to create compelling comparative visualizations. A curve-centric volume reformation deforms a volume, with regards to a curve in space, to create a new space in which the curve evaluates to zero in two dimensions and spans its arc-length in the third. The volume surrounding the curve is deformed such that spatial neighborhood to the curve is preserved. The result of the curve-centric reformation produces images where one axis is aligned to arc-length, and thus allows researchers and practitioners to apply their arc-length parameterized data visualizations in parallel for comparison. Furthermore we show that when visualizing dense data, our technique provides an inside out projection, from the curve and out into the volume, which allows for inspection of what is around the curve. Finally we demonstrate the usefulness of our techniques in the context of two application cases. We show that existing data visualizations of arc-length parameterized data can be enhanced by using our techniques, in addition to creating a new view and perspective on volumetric data around curves. Additionally we show how volumetric data can be brought into plotting environments that allow precise readouts. In the first case we inspect streamlines in a flow field around a car, and in the second we inspect seismic volumes and well logs from drilling. | false | false | [
"Ove Daae Lampe",
"Carlos D. Correa",
"Kwan-Liu Ma",
"Helwig Hauser"
] | [] | [] | [] |
Vis | 2,009 | Decoupling Illumination from Isosurface Generation Using 4D Light Transport | 10.1109/TVCG.2009.137 | One way to provide global illumination for the scientist who performs an interactive sweep through a 3D scalar dataset is to pre-compute global illumination, resample the radiance onto a 3D grid, then use it as a 3D texture. The basic approach of repeatedly extracting isosurfaces, illuminating them, and then building a 3D illumination grid suffers from the non-uniform sampling that arises from coupling the sampling of radiance with the sampling of isosurfaces. We demonstrate how the illumination step can be decoupled from the isosurface extraction step by illuminating the entire 3D scalar function as a 3-manifold in 4-dimensional space. By reformulating light transport in a higher dimension, one can sample a 3D volume without requiring the radiance samples to aggregate along individual isosurfaces in the pre-computed illumination grid. | false | false | [
"David C. Banks",
"Kevin Beason"
] | [] | [] | [] |
Vis | 2,009 | Depth-Dependent Halos: Illustrative Rendering of Dense Line Data | 10.1109/TVCG.2009.138 | We present a technique for the illustrative rendering of 3D line data at interactive frame rates. We create depth-dependent halos around lines to emphasize tight line bundles while less structured lines are de-emphasized. Moreover, the depth-dependent halos combined with depth cueing via line width attenuation increase depth perception, extending techniques from sparse line rendering to the illustrative visualization of dense line data. We demonstrate how the technique can be used, in particular, for illustrating DTI fiber tracts but also show examples from gas and fluid flow simulations and mathematics as well as describe how the technique extends to point data. We report on an informal evaluation of the illustrative DTI fiber tract visualizations with domain experts in neurosurgery and tractography who commented positively about the results and suggested a number of directions for future work. | false | false | [
"Maarten H. Everts",
"Henk Bekker",
"Jos B. T. M. Roerdink",
"Tobias Isenberg 0001"
] | [
"TT",
"BP"
] | [] | [] |
Vis | 2,009 | Exploring 3D DTI fiber Tracts with Linked 2D Representations | 10.1109/TVCG.2009.141 | We present a visual exploration paradigm that facilitates navigation through complex fiber tracts by combining traditional 3D model viewing with lower dimensional representations. To this end, we create standard streamtube models along with two two-dimensional representations, an embedding in the plane and a hierarchical clustering tree, for a given set of fiber tracts. We then link these three representations using both interaction and color obtained by embedding fiber tracts into a perceptually uniform color space. We describe an anecdotal evaluation with neuroscientists to assess the usefulness of our method in exploring anatomical and functional structures in the brain. Expert feedback indicates that, while a standalone clinical use of the proposed method would require anatomical landmarks in the lower dimensional representations, the approach would be particularly useful in accelerating tract bundle selection. Results also suggest that combining traditional 3D model viewing with lower dimensional representations can ease navigation through the complex fiber tract models, improving exploration of the connectivity in the brain. | false | false | [
"Radu Jianu",
"Çagatay Demiralp",
"David H. Laidlaw"
] | [] | [] | [] |
Vis | 2,009 | Exploring the Millennium Run - Scalable Rendering of Large-Scale Cosmological Datasets | 10.1109/TVCG.2009.142 | In this paper we investigate scalability limitations in the visualization of large-scale particle-based cosmological simulations, and we present methods to reduce these limitations on current PC architectures. To minimize the amount of data to be streamed from disk to the graphics subsystem, we propose a visually continuous level-of-detail (LOD) particle representation based on a hierarchical quantization scheme for particle coordinates and rules for generating coarse particle distributions. Given the maximal world space error per level, our LOD selection technique guarantees a sub-pixel screen space error during rendering. A brick-based page-tree allows us to further reduce the number of disk seek operations to be performed. Additional particle quantities like density, velocity dispersion, and radius are compressed at no visible loss using vector quantization of logarithmically encoded floating point values. By fine-grain view-frustum culling and presence acceleration in a geometry shader the required geometry throughput on the GPU can be significantly reduced. We validate the quality and scalability of our method by presenting visualizations of a particle-based cosmological dark-matter simulation exceeding 10 billion elements. | false | false | [
"Roland Fraedrich",
"Jens Schneider 0002",
"Rüdiger Westermann"
] | [] | [] | [] |
Vis | 2,009 | Focus+Context Route Zooming and Information Overlay in 3D Urban Environments | 10.1109/TVCG.2009.144 | In this paper we present a novel focus+context zooming technique, which allows users to zoom into a route and its associated landmarks in a 3D urban environment from a 45-degree bird's-eye view. Through the creative utilization of the empty space in an urban environment, our technique can informatively reveal the focus region and minimize distortions to the context buildings. We first create more empty space in the 2D map by broadening the road with an adapted seam carving algorithm. A grid-based zooming technique is then used to enlarge the landmarks to reclaim the created empty space and thus reduce distortions to the other parts. Finally, an occlusion-free route visualization scheme adaptively scales the buildings occluding the route to make the route always visible to users. Our method can be conveniently integrated into Google Earth and Virtual Earth to provide seamless route zooming and help users better explore a city and plan their tours. It can also be used in other applications such as information overlay to a virtual city. | false | false | [
"Huamin Qu",
"Haomian Wang",
"Weiwei Cui",
"Yingcai Wu",
"Ming-Yuen Chan"
] | [] | [] | [] |
Vis | 2,009 | GL4D: A GPU-based Architecture for Interactive 4D Visualization | 10.1109/TVCG.2009.147 | This paper describes GL4D, an interactive system for visualizing 2-manifolds and 3-manifolds embedded in four Euclidean dimensions and illuminated by 4D light sources. It is a tetrahedron-based rendering pipeline that projects geometry into volume images, an exact parallel to the conventional triangle-based rendering pipeline for 3D graphics. Novel features include GPU-based algorithms for real-time 4D occlusion handling and transparency compositing; we thus enable a previously impossible level of quality and interactivity for exploring lit 4D objects. The 4D tetrahedrons are stored in GPU memory as vertex buffer objects, and the vertex shader is used to perform per-vertex 4D modelview transformations and 4D-to-3D projection. The geometry shader extension is utilized to slice the projected tetrahedrons and rasterize the slices into individual 2D layers of voxel fragments. Finally, the fragment shader performs per-voxel operations such as lighting and alpha blending with previously computed layers. We account for 4D voxel occlusion along the 4D-to-3D projection ray by supporting a multi-pass back-to-front fragment composition along the projection ray; to accomplish this, we exploit a new adaptation of the dual depth peeling technique to produce correct volume image data and to simultaneously render the resulting volume data using 3D transfer functions into the final 2D image. Previous CPU implementations of the rendering of 4D-embedded 3-manifolds could not perform either the 4D depth-buffered projection or manipulation of the volume-rendered image in real-time; in particular, the dual depth peeling algorithm is a novel GPU-based solution to the real-time 4D depth-buffering problem. GL4D is implemented as an integrated OpenGL-style API library, so that the underlying shader operations are as transparent as possible to the user. | false | false | [
"Alan Chu",
"Chi-Wing Fu",
"Andrew J. Hanson",
"Pheng-Ann Heng"
] | [] | [] | [] |
Vis | 2,009 | High-Quality, Semi-Analytical Volume Rendering for AMR Data | 10.1109/TVCG.2009.149 | This paper presents a pipeline for high quality volume rendering of adaptive mesh refinement (AMR) datasets. We introduce a new method allowing high quality visualization of hexahedral cells in this context; this method avoids artifacts like discontinuities in the isosurfaces. To achieve this, we choose the number and placement of sampling points over the cast rays according to the analytical properties of the reconstructed signal inside each cell. We extend our method to handle volume shading of such cells. We propose an interpolation scheme that guarantees continuity between adjacent cells of different AMR levels. We introduce an efficient hybrid CPU-GPU mesh traversal technique. We present an implementation of our AMR visualization method on current graphics hardware, and show results demonstrating both the quality and performance of our method. | false | false | [
"Stéphane Marchesin",
"Guillaume Colin de Verdière"
] | [] | [] | [] |
Vis | 2,009 | Hue-Preserving Color Blending | 10.1109/TVCG.2009.150 | We propose a new perception-guided compositing operator for color blending. The operator maintains the same rules for achromatic compositing as standard operators (such as the over operator), but it modifies the computation of the chromatic channels. Chromatic compositing aims at preserving the hue of the input colors; color continuity is achieved by reducing the saturation of colors that are to change their hue value. The main benefit of hue preservation is that color can be used for proper visual labeling, even under the constraint of transparency rendering or image overlays. Therefore, the visualization of nominal data is improved. Hue-preserving blending can be used in any existing compositing algorithm, and it is particularly useful for volume rendering. The usefulness of hue-preserving blending and its visual characteristics are shown for several examples of volume visualization. | false | false | [
"Johnson Chuang",
"Daniel Weiskopf",
"Torsten Möller"
] | [] | [] | [] |
Vis | 2,009 | Interactive Coordinated Multiple-View Visualization of Biomechanical Motion Data | 10.1109/TVCG.2009.152 | We present an interactive framework for exploring space-time and form-function relationships in experimentally collected high-resolution biomechanical data sets. These data describe complex 3D motions (e.g. chewing, walking, flying) performed by animals and humans and captured via high-speed imaging technologies, such as biplane fluoroscopy. In analyzing these 3D biomechanical motions, interactive 3D visualizations are important, in particular, for supporting spatial analysis. However, as researchers in information visualization have pointed out, 2D visualizations can also be effective tools for multi-dimensional data analysis, especially for identifying trends over time. Our approach, therefore, combines techniques from both 3D and 2D visualizations. Specifically, it utilizes a multi-view visualization strategy including a small multiples view of motion sequences, a parallel coordinates view, and detailed 3D inspection views. The resulting framework follows an overview first, zoom and filter, then details-on-demand style of analysis, and it explicitly targets a limitation of current tools, namely, supporting analysis and comparison at the level of a collection of motions rather than sequential analysis of a single or small number of motions. Scientific motion collections appropriate for this style of analysis exist in clinical work in orthopedics and physical rehabilitation, in the study of functional morphology within evolutionary biology, and in other contexts. An application is described based on a collaboration with evolutionary biologists studying the mechanics of chewing motions in pigs. Interactive exploration of data describing a collection of more than one hundred experimentally captured pig chewing cycles is described. | false | false | [
"Daniel F. Keefe",
"Marcus Ewert",
"William Ribarsky",
"Remco Chang"
] | [] | [] | [] |
Vis | 2,009 | Interactive Streak Surface Visualization on the GPU | 10.1109/TVCG.2009.154 | In this paper we present techniques for the visualization of unsteady flows using streak surfaces, which allow for the first time an adaptive integration and rendering of such surfaces in real-time. The techniques consist of two main components, which are both realized on the GPU to exploit computational and bandwidth capacities for numerical particle integration and to minimize bandwidth requirements in the rendering of the surface. In the construction stage, an adaptive surface representation is generated. Surface refinement and coarsening strategies are based on local surface properties like distortion and curvature. We compare two different methods to generate a streak surface: a) by computing a patch-based surface representation that avoids any interdependence between patches, and b) by computing a particle-based surface representation including particle connectivity, and by updating this connectivity during particle refinement and coarsening. In the rendering stage, the surface is either rendered as a set of quadrilateral surface patches using high-quality point-based approaches, or a surface triangulation is built in turn from the given particle connectivity and the resulting triangle mesh is rendered. We perform a comparative study of the proposed techniques with respect to surface quality, visual quality and performance by visualizing streak surfaces in real flows using different rendering options. | false | false | [
"Kai Bürger",
"Florian Ferstl",
"Holger Theisel",
"Rüdiger Westermann"
] | [] | [] | [] |
Vis | 2,009 | Interactive Visual Analysis of Complex Scientific Data as Families of Data Surfaces | 10.1109/TVCG.2009.155 | The widespread use of computational simulation in science and engineering provides challenging research opportunities. Multiple independent variables are considered and large and complex data are computed, especially in the case of multi-run simulation. Classical visualization techniques deal well with 2D or 3D data and also with time-dependent data. Additional independent dimensions, however, provide interesting new challenges. We present an advanced visual analysis approach that enables a thorough investigation of families of data surfaces, i.e., datasets, with respect to pairs of independent dimensions. While it is almost trivial to visualize one such data surface, the visual exploration and analysis of many such data surfaces is a grand challenge, stressing the users' perception and cognition. We propose an approach that integrates projections and aggregations of the data surfaces at different levels (one scalar aggregate per surface, a 1D profile per surface, or the surface as such). We demonstrate the necessity for a flexible visual analysis system that integrates many different (linked) views for making sense of this highly complex data. To demonstrate its usefulness, we exemplify our approach in the context of a meteorological multi-run simulation data case and in the context of the engineering domain, where our collaborators are working with the simulation of elastohydrodynamic (EHD) lubrication bearing in the automotive industry. | false | false | [
"Kresimir Matkovic",
"Denis Gracanin",
"Borislav Klarin",
"Helwig Hauser"
] | [] | [] | [] |
Vis | 2,009 | Interactive Visual Optimization and Analysis for RFID Benchmarking | 10.1109/TVCG.2009.156 | Radiofrequency identification (RFID) is a powerful automatic remote identification technique that has wide applications. To facilitate RFID deployment, an RFID benchmarking instrument called aGate has been invented to identify the strengths and weaknesses of different RFID technologies in various environments. However, the data acquired by aGate are usually complex time varying multidimensional 3D volumetric data, which are extremely challenging for engineers to analyze. In this paper, we introduce a set of visualization techniques, namely, parallel coordinate plots, orientation plots, a visual history mechanism, and a 3D spatial viewer, to help RFID engineers analyze benchmark data visually and intuitively. With the techniques, we further introduce two workflow procedures (a visual optimization procedure for finding the optimum reader antenna configuration and a visual analysis procedure for comparing the performance and identifying the flaws of RFID devices) for the RFID benchmarking, with focus on the performance analysis of the aGate system. The usefulness and usability of the system are demonstrated in the user evaluation. | false | false | [
"Yingcai Wu",
"Ka-Kei Chung",
"Huamin Qu",
"Xiaoru Yuan",
"Shing-Chi Cheung"
] | [] | [] | [] |
Vis | 2,009 | Interactive Visualization of Molecular Surface Dynamics | 10.1109/TVCG.2009.157 | Molecular dynamics simulations of proteins play a growing role in various fields such as pharmaceutical, biochemical and medical research. Accordingly, the need for high quality visualization of these protein systems rises. Highly interactive visualization techniques are especially needed for the analysis of time-dependent molecular simulations. Besides various other molecular representations, the surface representations are of high importance for these applications. So far, users had to accept a trade-off between rendering quality and performance - particularly when visualizing trajectories of time-dependent protein data. We present a new approach for visualizing the solvent excluded surface of proteins using a GPU ray casting technique and thus achieving interactive frame rates even for long protein trajectories where conventional methods based on precomputation are not applicable. Furthermore, we propose a semantic simplification of the raw protein data to reduce the visual complexity of the surface and thereby accelerate the rendering without impeding perception of the protein's basic shape. We also demonstrate the application of our solvent excluded surface method to visualize the spatial probability density for the protein atoms over the whole period of the trajectory in one frame, providing a qualitative analysis of the protein flexibility. | false | false | [
"Michael Krone",
"Katrin Bidmon",
"Thomas Ertl"
] | [
"HM"
] | [] | [] |
Vis | 2,009 | Interactive Volume Rendering of Functional Representations in Quantum Chemistry | 10.1109/TVCG.2009.158 | Simulation and computation in chemistry studies have been improved as computational power has increased over decades. Many types of chemistry simulation results are available, from atomic level bonding to volumetric representations of electron density. However, tools for the visualization of the results from quantum chemistry computations are still limited to showing atomic bonds and isosurfaces or isocontours corresponding to certain isovalues. In this work, we study the volumetric representations of the results from quantum chemistry computations, and evaluate and visualize the representations directly on the GPU without resampling the result in grid structures. Our visualization tool handles the direct evaluation of the approximated wavefunctions described as a combination of Gaussian-like primitive basis functions. For visualizations, we use a slice based volume rendering technique with a 2D transfer function, volume clipping, and illustrative rendering in order to reveal and enhance the quantum chemistry structure. Since there is no need of resampling the volume from the functional representations, two issues, data transfer and resampling resolution, can be ignored; therefore, it is possible to interactively explore a large amount of different information in the computation results. | false | false | [
"Yun Jang",
"Ugo Varetto"
] | [] | [] | [] |
Vis | 2,009 | Intrinsic Geometric Scale Space by Shape Diffusion | 10.1109/TVCG.2009.159 | This paper formalizes a novel, intrinsic geometric scale space (IGSS) of 3D surface shapes. The intrinsic geometry of a surface is diffused by means of the Ricci flow for the generation of a geometric scale space. We rigorously prove that this multiscale shape representation satisfies the axiomatic causality property. Within the theoretical framework, we further present a feature-based shape representation derived from IGSS processing, which is shown to be theoretically plausible and practically effective. By integrating the concept of scale-dependent saliency into the shape description, this representation is not only highly descriptive of the local structures, but also exhibits several desired characteristics of global shape representations, such as being compact, robust to noise and computationally efficient. We demonstrate the capabilities of our approach through salient geometric feature detection and highly discriminative matching of 3D scans. | false | false | [
"Guangyu Zou",
"Jing Hua 0001",
"Zhaoqiang Lai",
"Xianfeng Gu",
"Ming Dong 0001"
] | [] | [] | [] |
Vis | 2,009 | Isosurface Extraction and View-Dependent Filtering from Time-Varying Fields Using Persistent Time-Octree (PTOT) | 10.1109/TVCG.2009.160 | We develop a new algorithm for isosurface extraction and view-dependent filtering from large time-varying fields, by using a novel persistent time-octree (PTOT) indexing structure. Previously, the persistent octree (POT) was proposed to perform isosurface extraction and view-dependent filtering, which combines the advantages of the interval tree (for optimal searches of active cells) and of the branch-on-need octree (BONO, for view-dependent filtering), but it only works for steady-state (i.e., single time step) data. For time-varying fields, a 4D version of POT, 4D-POT, was proposed for 4D isocontour slicing, where slicing on the time domain gives all active cells in the queried timestep and isovalue. However, such slicing is not output sensitive and thus the searching is sub-optimal. Moreover, it was not known how to support view-dependent filtering in addition to time-domain slicing. In this paper, we develop a novel persistent time-octree (PTOT) indexing structure, which has the advantages of POT and performs 4D isocontour slicing on the time domain with an output-sensitive and optimal searching. In addition, when we query the same isovalue q over m consecutive time steps, there is no additional searching overhead (except for reporting the additional active cells) compared to querying just the first time step. Such searching performance for finding active cells is asymptotically optimal, with asymptotically optimal space and preprocessing time as well. Moreover, our PTOT supports view-dependent filtering in addition to time-domain slicing. 
We propose a simple and effective out-of-core scheme, where we integrate our PTOT with implicit occluders, batched occlusion queries and batched CUDA computing tasks, so that we can greatly reduce the I/O cost as well as increase the amount of data being concurrently computed on the GPU. This results in an efficient algorithm for isosurface extraction with view-dependent filtering utilizing a state-of-the-art programmable GPU for time-varying fields larger than main memory. Our experiments on datasets as large as 192 GB (with 4 GB per time step) having no more than 870 MB of memory footprint in both preprocessing and run-time phases demonstrate the efficacy of our new technique. | false | false | [
"Cong Wang",
"Yi-Jen Chiang"
] | [] | [] | [] |
Vis | 2,009 | Kd-Jump: a Path-Preserving Stackless Traversal for Faster Isosurface Raytracing on GPUs | 10.1109/TVCG.2009.161 | Stackless traversal techniques are often used to circumvent memory bottlenecks by avoiding a stack and replacing return traversal with extra computation. This paper addresses whether the stackless traversal approaches are useful on newer hardware and technology (such as CUDA). To this end, we present a novel stackless approach for implicit kd-trees, which exploits the benefits of index-based node traversal, without incurring extra node visitation. This approach, which we term Kd-Jump, enables the traversal to immediately return to the next valid node, like a stack, without incurring extra node visitation (kd-restart). Also, Kd-Jump does not require global memory (stack) at all and only requires a small matrix in fast constant-memory. We report that Kd-Jump outperforms a stack by 10 to 20% and kd-restart by 100%. We also present a Hybrid Kd-Jump, which utilizes a volume stepper for leaf testing and a run-time depth threshold to define where kd-tree traversal stops and volume-stepping occurs. By using both methods, we gain the benefits of empty space removal, fast texture-caching and real-time ability to determine the best threshold for current isosurface and view direction. | false | false | [
"David Meirion Hughes",
"Ik Soo Lim"
] | [] | [] | [] |
Vis | 2,009 | Loop surgery for volumetric meshes: Reeb graphs reduced to contour trees | 10.1109/TVCG.2009.163 | This paper introduces an efficient algorithm for computing the Reeb graph of a scalar function f defined on a volumetric mesh M in R^3. We introduce a procedure called "loop surgery" that transforms M into a mesh M' by a sequence of cuts and guarantees the Reeb graph of f(M') to be loop free. Therefore, loop surgery reduces Reeb graph computation to the simpler problem of computing a contour tree, for which well-known algorithms exist that are theoretically efficient (O(n log n)) and fast in practice. Inverse cuts reconstruct the loops removed at the beginning. The time complexity of our algorithm is that of a contour tree computation plus a loop surgery overhead, which depends on the number of handles of the mesh. Our systematic experiments confirm that for real-life data, this overhead is comparable to the computation of the contour tree, demonstrating virtually linear scalability on meshes ranging from 70 thousand to 3.5 million tetrahedra. Performance numbers show that our algorithm, although restricted to volumetric data, has an average speedup factor of 6,500 over the previous fastest techniques, handling larger and more complex data-sets. We demonstrate the versatility of our approach by extending fast topologically clean isosurface extraction to non simply-connected domains. We apply this technique in the context of pressure analysis for mechanical design. In this case, our technique produces results in a matter of seconds even for the largest meshes. For the same models, previous Reeb graph techniques do not produce a result. | false | false | [
"Julien Tierny",
"Attila Gyulassy",
"Eddie Simon",
"Valerio Pascucci"
] | [] | [] | [] |
Vis | 2,009 | Mapping High-Fidelity Volume Rendering for Medical Imaging to CPU, GPU and Many-Core Architectures | 10.1109/TVCG.2009.164 | Medical volumetric imaging requires high fidelity, high performance rendering algorithms. We motivate and analyze new volumetric rendering algorithms that are suited to modern parallel processing architectures. First, we describe the three major categories of volume rendering algorithms and confirm through an imaging scientist-guided evaluation that ray-casting is the most acceptable. We describe a thread- and data-parallel implementation of ray-casting that makes it amenable to key architectural trends of three modern commodity parallel architectures: multi-core, GPU, and an upcoming many-core Intel® architecture code-named Larrabee. We achieve more than an order of magnitude performance improvement on a number of large 3D medical datasets. We further describe a data compression scheme that significantly reduces data-transfer overhead. This allows our approach to scale well to large numbers of Larrabee cores. | false | false | [
"Mikhail Smelyanskiy",
"David R. Holmes 0001",
"Jatin Chhugani",
"Alan Larson",
"Doug Carmean",
"Dennis P. Hanson",
"Pradeep Dubey",
"Kurt Augustine",
"Daehyun Kim 0001",
"Alan Kyker",
"Victor W. Lee",
"Anthony D. Nguyen",
"Larry Seiler",
"Richard A. Robb"
] | [] | [] | [] |
Vis | 2,009 | Markerless View-Independent Registration of Multiple Distorted Projectors on Extruded Surfaces Using an Uncalibrated Camera | 10.1109/TVCG.2009.166 | In this paper, we present the first algorithm to geometrically register multiple projectors in a view-independent manner (i.e. wallpapered) on a common type of curved surface, vertically extruded surface, using an uncalibrated camera without attaching any obtrusive markers to the display screen. Further, it can also tolerate large non-linear geometric distortions in the projectors as is common when mounting short throw lenses to allow a compact set-up. Our registration achieves sub-pixel accuracy on a large number of different vertically extruded surfaces and the image correction to achieve this registration can be run in real time on the GPU. This simple markerless registration has the potential to have a large impact on easy set-up and maintenance of large curved multi-projector displays, common for visualization, edutainment, training and simulation applications. | false | false | [
"Behzad Sajadi",
"Aditi Majumder"
] | [
"HM"
] | [] | [] |
Vis | 2,009 | Multi-Scale Surface Descriptors | 10.1109/TVCG.2009.168 | Local shape descriptors compactly characterize regions of a surface, and have been applied to tasks in visualization, shape matching, and analysis. Classically, curvature has been used as a shape descriptor; however, this differential property characterizes only an infinitesimal neighborhood. In this paper, we provide shape descriptors for surface meshes designed to be multi-scale, that is, capable of characterizing regions of varying size. These descriptors capture statistically the shape of a neighborhood around a central point by fitting a quadratic surface. They therefore mimic differential curvature, are efficient to compute, and encode anisotropy. We show how simple variants of mesh operations can be used to compute the descriptors without resorting to expensive parameterizations, and additionally provide a statistical approximation for reduced computational cost. We show how these descriptors apply to a number of uses in visualization, analysis, and matching of surfaces, particularly to tasks in protein surface analysis. | false | false | [
"Gregory Cipriano",
"George N. Phillips Jr.",
"Michael Gleicher"
] | [] | [] | [] |
Vis | 2,009 | Multimodal Vessel Visualization of Mouse Aorta PET/CT Scans | 10.1109/TVCG.2009.169 | In this paper, we present a visualization system for the visual analysis of PET/CT scans of aortic arches of mice. The system has been designed in close collaboration between researchers from the areas of visualization and molecular imaging with the objective to get deeper insights into the structural and molecular processes which take place during plaque development. Understanding the development of plaques might lead to a better and earlier diagnosis of cardiovascular diseases, which are still the main cause of death in the Western world. After motivating our approach, we will briefly describe the multimodal data acquisition process before explaining the visualization techniques used. The main goal is to develop a system which supports visual comparison of the data of different species. Therefore, we have chosen a linked multi-view approach, which amongst others integrates a specialized straightened multipath curved planar reformation and a multimodal vessel flattening technique. We have applied the visualization concepts to multiple data sets, and we will present the results of this investigation. | false | false | [
"Timo Ropinski",
"Sven Hermann",
"Rainer Reich",
"Michael Schäfers 0001",
"Klaus H. Hinrichs"
] | [] | [] | [] |
Vis | 2,009 | Parameter Sensitivity Visualization for DTI Fiber Tracking | 10.1109/TVCG.2009.170 | Fiber tracking of diffusion tensor imaging (DTI) data offers a unique insight into the three-dimensional organisation of white matter structures in the living brain. However, fiber tracking algorithms require a number of user-defined input parameters that strongly affect the output results. Usually the fiber tracking parameters are set once and are then re-used for several patient datasets. However, the stability of the chosen parameters is not evaluated and a small change in the parameter values can give very different results. The user remains completely unaware of such effects. Furthermore, it is difficult to reproduce output results between different users. We propose a visualization tool that allows the user to visually explore how small variations in parameter values affect the output of fiber tracking. With this knowledge the user can not only assess the stability of commonly used parameter values but also evaluate in a more reliable way the output results between different patients. Existing tools do not provide such information. A small user evaluation of our tool has been done to show the potential of the technique. | false | false | [
"Ralph Brecheisen",
"Anna Vilanova",
"Bram Platel",
"Bart M. ter Haar Romeny"
] | [] | [] | [] |
Vis | 2,009 | Perception-Based Transparency Optimization for Direct Volume Rendering | 10.1109/TVCG.2009.172 | The semi-transparent nature of direct volume rendered images is useful to depict layered structures in a volume. However, obtaining a semi-transparent result with the layers clearly revealed is difficult and may involve tedious adjustment on opacity and other rendering parameters. Furthermore, the visual quality of layers also depends on various perceptual factors. In this paper, we propose an auto-correction method for enhancing the perceived quality of the semi-transparent layers in direct volume rendered images. We introduce a suite of new measures based on psychological principles to evaluate the perceptual quality of transparent structures in the rendered images. By optimizing rendering parameters within an adaptive and intuitive user interaction process, the quality of the images is enhanced such that specific user requirements can be met. Experimental results on various datasets demonstrate the effectiveness and robustness of our method. | false | false | [
"Ming-Yuen Chan",
"Yingcai Wu",
"Wai-Ho Mak",
"Wei Chen 0001",
"Huamin Qu"
] | [
"HM"
] | [] | [] |
Vis | 2,009 | Predictor-Corrector Schemes for Visualization of Smoothed Particle Hydrodynamics Data | 10.1109/TVCG.2009.173 | In this paper we present a method for vortex core line extraction which operates directly on the smoothed particle hydrodynamics (SPH) representation and, by this, generates smoother and more (spatially and temporally) coherent results in an efficient way. The underlying predictor-corrector scheme is general enough to be applied to other line-type features and it is extendable to the extraction of surfaces such as isosurfaces or Lagrangian coherent structures. The proposed method exploits temporal coherence to speed up computation for subsequent time steps. We show how the predictor-corrector formulation can be specialized for several variants of vortex core line definitions including two recent unsteady extensions, and we contribute a theoretical and practical comparison of these. In particular, we reveal a close relation between unsteady extensions of Fuchs et al. and Weinkauf et al. and we give a proof of the Galilean invariance of the latter. When visualizing SPH data, there is the possibility to use the same interpolation method for visualization as has been used for the simulation. This is different from the case of finite volume simulation results, where it is not possible to recover from the results the spatial interpolation that was used during the simulation. Such data are typically interpolated using the basic trilinear interpolant, and if smoothness is required, some artificial processing is added. In SPH data, however, the smoothing kernels are specified from the simulation, and they provide an exact and smooth interpolation of data or gradients at arbitrary points in the domain. | false | false | [
"Benjamin Schindler",
"Raphael Fuchs",
"John Biddiscombe",
"Ronald Peikert"
] | [] | [] | [] |
Vis | 2,009 | Quantitative Texton Sequences for Legible Bivariate Maps | 10.1109/TVCG.2009.175 | Representing bivariate scalar maps is a common but difficult visualization problem. One solution has been to use two dimensional color schemes, but the results are often hard to interpret and inaccurately read. An alternative is to use a color sequence for one variable and a texture sequence for another. This has been used, for example, in geology, but much less studied than the two dimensional color scheme, although theory suggests that it should lead to easier perceptual separation of information relating to the two variables. To make a texture sequence more clearly readable the concept of the quantitative texton sequence (QTonS) is introduced. A QTonS is defined as a sequence of small graphical elements, called textons, where each texton represents a different numerical value and sets of textons can be densely displayed to produce visually differentiable textures. An experiment was carried out to compare two bivariate color coding schemes with two schemes using QTonS for one bivariate map component and a color sequence for the other. Two different key designs were investigated (a key being a sequence of colors or textures used in obtaining quantitative values from a map). The first design used two separate keys, one for each dimension, in order to measure how accurately subjects could independently estimate the underlying scalar variables. The second key design was two dimensional and intended to measure the overall integral accuracy that could be obtained. The results show that the accuracy is substantially higher for the QTonS/color sequence schemes. A hypothesis that texture/color sequence combinations are better for independent judgments of mapped quantities was supported. A second experiment probed the limits of spatial resolution for QTonSs. | false | false | [
"Colin Ware"
] | [] | [] | [] |
Vis | 2,009 | Sampling and Visualizing Creases with Scale-Space Particles | 10.1109/TVCG.2009.177 | Particle systems have gained importance as a methodology for sampling implicit surfaces and segmented objects to improve mesh generation and shape analysis. We propose that particle systems have a significantly more general role in sampling structure from unsegmented data. We describe a particle system that computes samplings of crease features (i.e. ridges and valleys, as lines or surfaces) that effectively represent many anatomical structures in scanned medical data. Because structure naturally exists at a range of sizes relative to the image resolution, computer vision has developed the theory of scale-space, which considers an n-D image as an (n + 1)-D stack of images at different blurring levels. Our scale-space particles move through continuous four-dimensional scale-space according to spatial constraints imposed by the crease features, a particle-image energy that draws particles towards scales of maximal feature strength, and an inter-particle energy that controls sampling density in space and scale. To make scale-space practical for large three-dimensional data, we present a spline-based interpolation across scale from a small number of pre-computed blurrings at optimally selected scales. The configuration of the particle system is visualized with tensor glyphs that display information about the local Hessian of the image, and the scale of the particle. We use scale-space particles to sample the complex three-dimensional branching structure of airways in lung CT, and the major white matter structures in brain DTI. | false | false | [
"Gordon L. Kindlmann",
"Raúl San José Estépar",
"Stephen M. Smith 0001",
"Carl-Fredrik Westin"
] | [] | [] | [] |
Vis | 2,009 | Scalable and Interactive Segmentation and Visualization of Neural Processes in EM Datasets | 10.1109/TVCG.2009.178 | Recent advances in scanning technology provide high resolution EM (electron microscopy) datasets that allow neuroscientists to reconstruct complex neural connections in a nervous system. However, due to the enormous size and complexity of the resulting data, segmentation and visualization of neural processes in EM data is usually a difficult and very time-consuming task. In this paper, we present NeuroTrace, a novel EM volume segmentation and visualization system that consists of two parts: a semi-automatic multiphase level set segmentation with 3D tracking for reconstruction of neural processes, and a specialized volume rendering approach for visualization of EM volumes. It employs view-dependent on-demand filtering and evaluation of a local histogram edge metric, as well as on-the-fly interpolation and ray-casting of implicit surfaces for segmented neural structures. Both methods are implemented on the GPU for interactive performance. NeuroTrace is designed to be scalable to large datasets and data-parallel hardware architectures. A comparison of NeuroTrace with a commonly used manual EM segmentation tool shows that our interactive workflow is faster and easier to use for the reconstruction of complex neural processes. | false | false | [
"Won-Ki Jeong",
"Johanna Beyer",
"Markus Hadwiger",
"Amelio Vázquez Reina",
"Hanspeter Pfister",
"Ross T. Whitaker"
] | [] | [] | [] |
Vis | 2,009 | Stress Tensor field Visualization for Implant Planning in Orthopedics | 10.1109/TVCG.2009.184 | We demonstrate the application of advanced 3D visualization techniques to determine the optimal implant design and position in hip joint replacement planning. Our methods take as input the physiological stress distribution inside a patient's bone under load and the stress distribution inside this bone under the same load after a simulated replacement surgery. The visualization aims at showing principal stress directions and magnitudes, as well as differences in both distributions. By visualizing changes of normal and shear stresses with respect to the principal stress directions of the physiological state, a comparative analysis of the physiological stress distribution and the stress distribution with implant is provided, and the implant parameters that most closely replicate the physiological stress state in order to avoid stress shielding can be determined. Our method combines volume rendering for the visualization of stress magnitudes with the tracing of short line segments for the visualization of stress directions. To improve depth perception, transparent, shaded, and antialiased lines are rendered in correct visibility order, and they are attenuated by the volume rendering. We use a focus+context approach to visually guide the user to relevant regions in the data, and to support a detailed stress analysis in these regions while preserving spatial context information. Since all of our techniques have been realized on the GPU, they can immediately react to changes in the simulated stress tensor field and thus provide an effective means for optimal implant selection and positioning in a computational steering environment. | false | false | [
"Christian Dick",
"Joachim Georgii",
"Rainer Burgkart",
"Rüdiger Westermann"
] | [] | [] | [] |
Vis | 2,009 | Structuring Feature Space: A Non-Parametric Method for Volumetric Transfer Function Generation | 10.1109/TVCG.2009.185 | The use of multi-dimensional transfer functions for direct volume rendering has been shown to be an effective means of extracting materials and their boundaries for both scalar and multivariate data. The most common multi-dimensional transfer function consists of a two-dimensional (2D) histogram with axes representing a subset of the feature space (e.g., value vs. value gradient magnitude), with each entry in the 2D histogram being the number of voxels at a given feature space pair. Users then assign color and opacity to the voxel distributions within the given feature space through the use of interactive widgets (e.g., box, circular, triangular selection). Unfortunately, such tools lead users through a trial-and-error approach as they assess which data values within the feature space map to a given area of interest within the volumetric space. In this work, we propose the addition of non-parametric clustering within the transfer function feature space in order to extract patterns and guide transfer function generation. We apply a non-parametric kernel density estimation to group voxels of similar features within the 2D histogram. These groups are then binned and colored based on their estimated density, and the user may interactively grow and shrink the binned regions to explore feature boundaries and extract regions of interest. We also extend this scheme to temporal volumetric data in which time steps of 2D histograms are composited into a histogram volume. A three-dimensional (3D) density estimation is then applied, and users can explore regions within the feature space across time without adjusting the transfer function at each time step. Our work enables users to effectively explore the structures found within a feature space of the volume and provide a context in which the user can understand how these structures relate to their volumetric data. We provide tools for enhanced exploration and manipulation of the transfer function, and we show that the initial transfer function generation serves as a reasonable base for volumetric rendering, reducing the trial-and-error overhead typically found in transfer function design. | false | false | [
"Ross Maciejewski",
"Insoo Woo",
"Wei Chen 0001",
"David S. Ebert"
] | [] | [] | [] |
Vis | 2,009 | Supercubes: A High-Level Primitive for Diamond Hierarchies | 10.1109/TVCG.2009.186 | Volumetric datasets are often modeled using a multiresolution approach based on a nested decomposition of the domain into a polyhedral mesh. Nested tetrahedral meshes generated through the longest edge bisection rule are commonly used to decompose regular volumetric datasets since they produce highly adaptive crack-free representations. Efficient representations for such models have been achieved by clustering the set of tetrahedra sharing a common longest edge into a structure called a diamond. The alignment and orientation of the longest edge can be used to implicitly determine the geometry of a diamond and its relations to the other diamonds within the hierarchy. We introduce the supercube as a high-level primitive within such meshes that encompasses all unique types of diamonds. A supercube is a coherent set of edges corresponding to three consecutive levels of subdivision. Diamonds are uniquely characterized by the longest edge of the tetrahedra forming them and are clustered in supercubes through the association of the longest edge of a diamond with a unique edge in a supercube. Supercubes are thus a compact and highly efficient means of associating information with a subset of the vertices, edges and tetrahedra of the meshes generated through longest edge bisection. We demonstrate the effectiveness of the supercube representation when encoding multiresolution diamond hierarchies built on a subset of the points of a regular grid. We also show how supercubes can be used to efficiently extract meshes from diamond hierarchies and to reduce the storage requirements of such variable-resolution meshes. | false | false | [
"Kenneth Weiss 0001",
"Leila De Floriani"
] | [] | [] | [] |
Vis | 2,009 | The Occlusion Spectrum for Volume Classification and Visualization | 10.1109/TVCG.2009.189 | Despite the ever-growing improvements in graphics processing units and computational power, classifying 3D volume data remains a challenge. In this paper, we present a new method for classifying volume data based on the ambient occlusion of voxels. This information stems from the observation that most volumes of a certain type, e.g., CT, MRI or flow simulation, contain occlusion patterns that reveal the spatial structure of their materials or features. Furthermore, these patterns appear to emerge consistently for different data sets of the same type. We call this collection of patterns the occlusion spectrum of a dataset. We show that using this occlusion spectrum leads to better two-dimensional transfer functions that can help classify complex data sets in terms of the spatial relationships among features. In general, the ambient occlusion of a voxel can be interpreted as a weighted average of the intensities in a spherical neighborhood around the voxel. Different weighting schemes determine the ability to separate structures of interest in the occlusion spectrum. We present a general methodology for finding such a weighting. We show results of our approach in 3D imaging for different applications, including brain and breast tumor detection and the visualization of turbulent flow. | false | false | [
"Carlos D. Correa",
"Kwan-Liu Ma"
] | [] | [] | [] |
Vis | 2,009 | Time and Streak Surfaces for Flow Visualization in Large Time-Varying Data Sets | 10.1109/TVCG.2009.190 | Time and streak surfaces are ideal tools to illustrate time-varying vector fields since they directly appeal to the intuition about coherently moving particles. However, efficient generation of high-quality time and streak surfaces for complex, large and time-varying vector field data has been elusive due to the computational effort involved. In this work, we propose a novel algorithm for computing such surfaces. Our approach is based on a decoupling of surface advection and surface adaptation and yields improved efficiency over other surface tracking methods, and allows us to leverage inherent parallelization opportunities in the surface advection, resulting in more rapid parallel computation. Moreover, we obtain as a result of our algorithm the entire evolution of a time or streak surface in a compact representation, allowing for interactive, high-quality rendering, visualization and exploration of the evolving surface. Finally, we discuss a number of ways to improve surface depiction through advanced rendering and texturing, while preserving interactivity, and provide a number of examples for real-world datasets and analyze the behavior of our algorithm on them. | false | false | [
"Harinarayan Krishnan",
"Christoph Garth",
"Kenneth I. Joy"
] | [] | [] | [] |
Vis | 2,009 | Verifiable Visualization for Isosurface Extraction | 10.1109/TVCG.2009.194 | Visual representations of isosurfaces are ubiquitous in the scientific and engineering literature. In this paper, we present techniques to assess the behavior of isosurface extraction codes. Where applicable, these techniques allow us to distinguish whether anomalies in isosurface features can be attributed to the underlying physical process or to artifacts from the extraction process. Such scientific scrutiny is at the heart of verifiable visualization - subjecting visualization algorithms to the same verification process that is used in other components of the scientific pipeline. More concretely, we derive formulas for the expected order of accuracy (or convergence rate) of several isosurface features, and compare them to experimentally observed results in the selected codes. This technique is practical: in two cases, it exposed actual problems in implementations. We provide the reader with the range of responses they can expect to encounter with isosurface techniques, both under "normal operating conditions" and also under adverse conditions. Armed with this information - the results of the verification process - practitioners can judiciously select the isosurface extraction technique appropriate for their problem of interest, and have confidence in its behavior. | false | false | [
"Tiago Etiene",
"Carlos Scheidegger",
"Luis Gustavo Nonato",
"Robert M. Kirby",
"Cláudio T. Silva"
] | [] | [] | [] |
Vis | 2,009 | VisMashup: Streamlining the Creation of Custom Visualization Applications | 10.1109/TVCG.2009.195 | Visualization is essential for understanding the increasing volumes of digital data. However, the process required to create insightful visualizations is involved and time consuming. Although several visualization tools are available, including tools with sophisticated visual interfaces, they are out of reach for users who have little or no knowledge of visualization techniques and/or who do not have programming expertise. In this paper, we propose VisMashup, a new framework for streamlining the creation of customized visualization applications. Because these applications can be customized for very specific tasks, they can hide much of the complexity in a visualization specification and make it easier for users to explore visualizations by manipulating a small set of parameters. We describe the framework and how it supports the various tasks a designer needs to carry out to develop an application, from mining and exploring a set of visualization specifications (pipelines), to the creation of simplified views of the pipelines, and the automatic generation of the application and its interface. We also describe the implementation of the system and demonstrate its use in two real application scenarios. | false | false | [
"Emanuele Santos",
"Lauro Didier Lins",
"James P. Ahrens",
"Juliana Freire",
"Cláudio T. Silva"
] | [] | [] | [] |
Vis | 2,009 | Visual Exploration of Climate Variability Changes Using Wavelet Analysis | 10.1109/TVCG.2009.197 | Due to its nonlinear nature, the climate system shows quite high natural variability on different time scales, including multiyear oscillations such as the El Niño-Southern Oscillation phenomenon. Besides a shift of the mean states and of extreme values of climate variables, climate change may also change the frequency or the spatial patterns of these natural climate variations. Wavelet analysis is a well established tool to investigate variability in the frequency domain. However, due to the size and complexity of the analysis results, only a few time series are commonly analyzed concurrently. In this paper we will explore different techniques to visually assist the user in the analysis of variability and variability changes to allow for a holistic analysis of a global climate model data set consisting of several variables and extending over 250 years. Our new framework and data from the IPCC AR4 simulations with the coupled climate model ECHAM5/MPI-OM are used to explore the temporal evolution of El Niño due to climate change. | false | false | [
"Heike Leitte",
"Michael Böttinger",
"Uwe Mikolajewicz",
"Gerik Scheuermann"
] | [] | [] | [] |
Vis | 2,009 | Visual Exploration of Nasal Airflow | 10.1109/TVCG.2009.198 | Rhinologists are often faced with the challenge of assessing nasal breathing from a functional point of view to derive effective therapeutic interventions. While the complex nasal anatomy can be revealed by visual inspection and medical imaging, only vague information is available regarding the nasal airflow itself: Rhinomanometry delivers rather unspecific integral information on the pressure gradient as well as on total flow and nasal flow resistance. In this article we demonstrate how the understanding of physiological nasal breathing can be improved by simulating and visually analyzing nasal airflow, based on an anatomically correct model of the upper human respiratory tract. In particular we demonstrate how various information visualization (InfoVis) techniques, such as a highly scalable implementation of parallel coordinates, time series visualizations, as well as unstructured grid multi-volume rendering, all integrated within a multiple linked views framework, can be utilized to gain a deeper understanding of nasal breathing. Evaluation is accomplished by visual exploration of spatio-temporal airflow characteristics that include not only information on flow features but also on accompanying quantities such as temperature and humidity. To our knowledge, this is the first in-depth visual exploration of the physiological function of the nose over several simulated breathing cycles under consideration of a complete model of the nasal airways, realistic boundary conditions, and all physically relevant time-varying quantities. | false | false | [
"Stefan Zachow",
"Philipp Muigg",
"Thomas Hildebrandt",
"Helmut Doleisch",
"Hans-Christian Hege"
] | [] | [] | [] |
Vis | 2,009 | Visual Human+Machine Learning | 10.1109/TVCG.2009.199 | In this paper we describe a novel method to integrate interactive visual analysis and machine learning to support the insight generation of the user. The suggested approach combines the vast search and processing power of the computer with the superior reasoning and pattern recognition capabilities of the human user. An evolutionary search algorithm has been adapted to assist in the fuzzy logic formalization of hypotheses that aim at explaining features inside multivariate, volumetric data. Up to now, users solely rely on their knowledge and expertise when looking for explanatory theories. However, it often remains unclear whether the selected attribute ranges represent the real explanation for the feature of interest. Other selections hidden in the large number of data variables could potentially lead to similar features. Moreover, as simulation complexity grows, users are confronted with huge multidimensional data sets making it almost impossible to find meaningful hypotheses at all. We propose an interactive cycle of knowledge-based analysis and automatic hypothesis generation. Starting from initial hypotheses, created with linking and brushing, the user steers a heuristic search algorithm to look for alternative or related hypotheses. The results are analyzed in information visualization views that are linked to the volume rendering. Individual properties as well as global aggregates are visually presented to provide insight into the most relevant aspects of the generated hypotheses. This novel approach becomes computationally feasible due to a GPU implementation of the time-critical parts in the algorithm. A thorough evaluation of search times and noise sensitivity as well as a case study on data from the automotive domain substantiate the usefulness of the suggested approach. | false | false | [
"Raphael Fuchs",
"Jürgen Waser",
"M. Eduard Gröller"
] | [] | [] | [] |
Vis | 2,009 | Visualization and Exploration of Temporal Trend Relationships in Multivariate Time-Varying Data | 10.1109/TVCG.2009.200 | We present a new algorithm to explore and visualize multivariate time-varying data sets. We identify important trend relationships among the variables based on how the values of the variables change over time and how those changes are related to each other in different spatial regions and time intervals. The trend relationships can be used to describe the correlation and causal effects among the different variables. To identify the temporal trends from a local region, we design a new algorithm called SUBDTW to estimate when a trend appears and vanishes in a given time series. Based on the beginning and ending times of the trends, their temporal relationships can be modeled as a state machine representing the trend sequence. Since a scientific data set usually contains millions of data points, we propose an algorithm to extract important trend relationships in linear time complexity. We design novel user interfaces to explore the trend relationships, to visualize their temporal characteristics, and to display their spatial distributions. We use several scientific data sets to test our algorithm and demonstrate its utilities. | false | false | [
"Teng-Yok Lee",
"Han-Wei Shen"
] | [] | [] | [] |
Vis | 2,009 | Volume Illustration of Muscle from Diffusion Tensor Images | 10.1109/TVCG.2009.203 | Medical illustration has demonstrated its effectiveness to depict salient anatomical features while hiding the irrelevant details. Current solutions are ineffective for visualizing fibrous structures such as muscle, because typical datasets (CT or MRI) do not contain directional details. In this paper, we introduce a new muscle illustration approach that leverages diffusion tensor imaging (DTI) data and example-based texture synthesis techniques. Beginning with a volumetric diffusion tensor image, we reformulate it into a scalar field and an auxiliary guidance vector field to represent the structure and orientation of a muscle bundle. A muscle mask derived from the input diffusion tensor image is used to classify the muscle structure. The guidance vector field is further refined to remove noise and clarify structure. To simulate the internal appearance of the muscle, we propose a new two-dimensional example based solid texture synthesis algorithm that builds a solid texture constrained by the guidance vector field. Illustrating the constructed scalar field and solid texture efficiently highlights the global appearance of the muscle as well as the local shape and structure of the muscle fibers in an illustrative fashion. We have applied the proposed approach to five example datasets (four pig hearts and a pig leg), demonstrating plausible illustration and expressiveness. | false | false | [
"Wei Chen 0001",
"Zhicheng Yan",
"Song Zhang 0004",
"John Allen Crow",
"David S. Ebert",
"Ronald M. McLaughlin",
"Katie B. Mullins",
"Robert Cooper",
"Zi'ang Ding",
"Jun Liao"
] | [] | [] | [] |
Vis | 2,009 | Volume Ray Casting with Peak finding and Differential Sampling | 10.1109/TVCG.2009.204 | Direct volume rendering and isosurfacing are ubiquitous rendering techniques in scientific visualization, commonly employed in imaging 3D data from simulation and scan sources. Conventionally, these methods have been treated as separate modalities, necessitating different sampling strategies and rendering algorithms. In reality, an isosurface is a special case of a transfer function, namely a Dirac impulse at a given isovalue. However, artifact-free rendering of discrete isosurfaces in a volume rendering framework is an elusive goal, requiring either infinite sampling or smoothing of the transfer function. While preintegration approaches solve the most obvious deficiencies in handling sharp transfer functions, artifacts can still result, limiting classification. In this paper, we introduce a method for rendering such features by explicitly solving for isovalues within the volume rendering integral. In addition, we present a sampling strategy inspired by ray differentials that automatically matches the frequency of the image plane, resulting in fewer artifacts near the eye and better overall performance. These techniques exhibit clear advantages over standard uniform ray casting with and without preintegration, and allow for high-quality interactive volume rendering with sharp C^0 transfer functions. | false | false | [
"Aaron Knoll",
"Younis Hijazi",
"Rolf Westerteiger",
"Mathias Schott",
"Charles D. Hansen",
"Hans Hagen"
] | [] | [] | [] |
VAST | 2,009 | A framework for uncertainty-aware visual analytics | 10.1109/VAST.2009.5332611 | Visual analytics has become an important tool for gaining insight on large and complex collections of data. Numerous statistical tools and data transformations, such as projections, binning and clustering, have been coupled with visualization to help analysts understand data better and faster. However, data is inherently uncertain, due to error, noise or unreliable sources. When making decisions based on uncertain data, it is important to quantify and present to the analyst both the aggregated uncertainty of the results and the impact of the sources of that uncertainty. In this paper, we present a new framework to support uncertainty in the visual analytics process, through statistic methods such as uncertainty modeling, propagation and aggregation. We show that data transformations, such as regression, principal component analysis and k-means clustering, can be adapted to account for uncertainty. This framework leads to better visualizations that improve the decision-making process and help analysts gain insight on the analytic process itself. | false | false | [
"Carlos D. Correa",
"Yu-Hsuan Chan",
"Kwan-Liu Ma"
] | [] | [] | [] |
VAST | 2,009 | A multi-level middle-out cross-zooming approach for large graph analytics | 10.1109/VAST.2009.5333880 | This paper presents a working graph analytics model that embraces the strengths of the traditional top-down and bottom-up approaches with a resilient crossover concept to exploit the vast middle-ground information overlooked by the two extreme analytical approaches. Our graph analytics model is co-developed by users and researchers, who carefully studied the functional requirements that reflect the critical thinking and interaction pattern of a real-life intelligence analyst. To evaluate the model, we implement a system prototype, known as GreenHornet, which allows our analysts to test the theory in practice, identify the technological and usage-related gaps in the model, and then adapt the new technology in their work space. The paper describes the implementation of GreenHornet and compares its strengths and weaknesses against the other prevailing models and tools. | false | false | [
"Pak Chung Wong",
"Patrick Mackey",
"Kristin A. Cook",
"Randall M. Rohrer",
"Harlan Foote",
"Mark A. Whiting"
] | [] | [] | [] |
VAST | 2,009 | A scalable architecture for visual data exploration | 10.1109/VAST.2009.5333451 | Intelligence analysts in the areas of defense and homeland security are now faced with the difficult problem of discerning the relevant details amidst massive data stores. We propose a component-based visualization architecture that is built specifically to encourage the flexible exploration of geospatial event databases. The proposed system is designed to deploy on a variety of display layouts, from a single laptop screen to a multi-monitor tiled-display. By utilizing a combination of parallel coordinates, principal components plots, and other data views, analysts may reduce the dimensionality of a data set to its most salient features. Of particular value to our target applications are understanding correlations between data layers, both within a single view and across multiple views. Our proposed system aims to address the limited scalability associated with coordinated multiple views (CMVs) through the implementation of an efficient core application which is extensible by the end-user. | false | false | [
"Jonathan W. Decker",
"Alex Godwin",
"Mark A. Livingston",
"Denise Royle"
] | [] | [] | [] |
VAST | 2,009 | A visual analytics system for radio frequency fingerprinting-based localization | 10.1109/VAST.2009.5332596 | Radio frequency (RF) fingerprinting-based techniques for localization are a promising approach for ubiquitous positioning systems, particularly indoors. By finding unique fingerprints of RF signals received at different locations within a predefined area beforehand, whenever a similar fingerprint is subsequently seen again, the localization system will be able to infer a user's current location. However, developers of these systems face the problem of finding reliable RF fingerprints that are unique enough and adequately stable over time. We present a visual analytics system that enables developers of these localization systems to visually gain insight on whether their collected datasets and chosen fingerprint features have the necessary properties to enable a reliable RF fingerprinting-based localization system. The system was evaluated by testing and debugging an existing localization system. | false | false | [
"Yi Han 0005",
"Erich P. Stuntebeck",
"John T. Stasko",
"Gregory D. Abowd"
] | [] | [] | [] |
VAST | 2,009 | Analysis of community-contributed space- and time-referenced data (example of flickr and panoramio photos) | 10.1109/VAST.2009.5333472 | Space- and time-referenced data published on the Web by general people can be viewed in a dual way: as independent spatio-temporal events and as trajectories of people in the geographical space. These two views suppose different approaches to the analysis, which can yield different kinds of valuable knowledge about places and about people. We define possible types of analysis tasks related to the two views of the data and present several analysis methods appropriate for these tasks. The methods are suited to large amounts of the data. | false | false | [
"Gennady L. Andrienko",
"Natalia V. Andrienko",
"Peter Bak",
"Slava Kisilevich",
"Daniel A. Keim"
] | [] | [] | [] |
VAST | 2,009 | Articulate: a conversational interface for visual analytics | 10.1109/VAST.2009.5333099 | While many visualization tools exist that offer sophisticated functions for charting complex data, they still expect users to possess a high degree of expertise in wielding the tools to create an effective visualization. This poster presents Articulate, an attempt at a semi-automated visual analytic model that is guided by a conversational user interface. The goal is to relieve the user of the physical burden of having to directly craft a visualization through the manipulation of a complex user-interface, by instead being able to verbally articulate what the user wants to see, and then using natural language processing and heuristics to semi-automatically create a suitable visualization. | false | false | [
"Yiwen Sun",
"Jason Leigh",
"Andrew E. Johnson 0001",
"Dennis Chau"
] | [] | [] | [] |
VAST | 2,009 | BEADS: High dimensional data cluster visualization | 10.1109/VAST.2009.5333417 | In this poster paper, we present BEADS, a high dimensional data cluster visualization by having a 2-D representation of shape and spread of the cluster. The Cluster Division component, the Bead Shape Identification and Cluster Shape Composition form the core of the system. BEADS visualization consists of a 2-D plot, standard 2-D shapes which are used as metaphors to represent corresponding high-dimensional shapes of beads. The final resulting images convey the relative placement of beads with respect to the cluster center, the shape of the beads. We give a textual summary of the beads and their 2-D placement on the Beads plot in tabular format along with the image. | false | false | [
"Soujanya Vadapalli",
"Kamalakar Karlapalem"
] | [] | [] | [] |
VAST | 2,009 | Capturing and supporting the analysis process | 10.1109/VAST.2009.5333020 | Visual analytics tools provide powerful visual representations in order to support the sense-making process. In this process, analysts typically iterate through sequences of steps many times, varying parameters each time. Few visual analytics tools support this process well, nor do they provide support for visualizing and understanding the analysis process itself. To help analysts understand, explore, reference, and reuse their analysis process, we present a visual analytics system named CzSaw (See-Saw) that provides an editable and re-playable history navigation channel in addition to multiple visual representations of document collections and the entities within them (in a manner inspired by Jigsaw). Conventional history navigation tools range from basic undo and redo to branching timelines of user actions. In CzSaw's approach to this, first, user interactions are translated into a script language that drives the underlying scripting-driven propagation system. The latter allows analysts to edit analysis steps, and ultimately to program them. Second, on this base, we build both a history view showing progress and alternative paths, and a dependency graph showing the underlying logic of the analysis and dependency relations among the results of each step. These tools result in a visual model of the sense-making process, providing a way for analysts to visualize their analysis process, to reinterpret the problem, explore alternative paths, extract analysis patterns from existing history, and reuse them with other related analyses. | false | false | [
"Nazanin Kadivar",
"Victor Y. Chen",
"Dustin T. Dunsmuir",
"Eric Lee",
"Cheryl Z. Qian",
"John Dill",
"Chris Shaw 0002",
"Robert F. Woodbury"
] | [] | [] | [] |
VAST | 2,009 | Combining automated analysis and visualization techniques for effective exploration of high-dimensional data | 10.1109/VAST.2009.5332628 | Visual exploration of multivariate data typically requires projection onto lower-dimensional representations. The number of possible representations grows rapidly with the number of dimensions, and manual exploration quickly becomes ineffective or even unfeasible. This paper proposes automatic analysis methods to extract potentially relevant visual structures from a set of candidate visualizations. Based on features, the visualizations are ranked in accordance with a specified user task. The user is provided with a manageable number of potentially useful candidate visualizations, which can be used as a starting point for interactive data analysis. This can effectively ease the task of finding truly useful visualizations and potentially speed up the data exploration task. In this paper, we present ranking measures for class-based as well as non class-based Scatterplots and Parallel Coordinates visualizations. The proposed analysis methods are evaluated on different datasets. | false | false | [
"Andrada Tatu",
"Georgia Albuquerque",
"Martin Eisemann",
"Jörn Schneidewind",
"Holger Theisel",
"Marcus A. Magnor",
"Daniel A. Keim"
] | [] | [] | [] |
VAST | 2,009 | Combining iterative analytical reasoning and software development using the visualization language Processing | 10.1109/VAST.2009.5334463 | Processing is a very powerful visualization language which combines software concepts with principles of visual form and interaction. Artists, designers and architects use it but it is also a very effective programming language in the area of visual analytics. In the following contribution Processing is utilized in order to visually analyze data provided by IEEE VAST 2009 Mini Challenge Badge and Network Traffic. The applied process is iterative and each stage of the analytical reasoning process is accompanied by customized software development. The visual model, the process and the technical solution will be briefly introduced. | false | false | [
"Claudia Müller-Birn",
"Lukas Birn"
] | [] | [] | [] |
VAST | 2,009 | Comparing two interface tools in performing visual analytics tasks | 10.1109/VAST.2009.5333469 | In visual analytics, menu systems are commonly adopted as supporting tools because of the complex nature of data. However, it is still unknown how much the interaction implicit to the interface impacts the performance of visual analysis. To show the effectiveness of two interface tools, one a floating text-based menu (Floating Menu) and the other a more interactive iconic tool (Interactive-Icon), we evaluated the use and human performance of both tools within one highly interactive visual analytics system. We asked participants to answer similarly constructed, straightforward questions in a genomic visualization, first with one tool, and then the other. During task performance we tracked completion times, task errors, and captured coarse-grained interactive behaviors. Based on the participants' accuracy, speed, behaviors and post-task qualitative feedback, we observed that although the Interactive-Icon tool supports continuous interactions, task-oriented user evaluation did not find a significant difference between the two tools because there is a familiarity effect on the performance of solving the task questions when using the Floating Menu interface tool. | false | false | [
"Dong Hyun Jeong",
"Tera Marie Green",
"William Ribarsky",
"Remco Chang"
] | [] | [] | [] |
VAST | 2,009 | Connecting the dots in visual analysis | 10.1109/VAST.2009.5333023 | During visual analysis, users must often connect insights discovered at various points of time. This process is often called "connecting the dots." When analysts interactively explore complex datasets over multiple sessions, they may uncover a large number of findings. As a result, it is often difficult for them to recall the past insights, views and concepts that are most relevant to their current line of inquiry. This challenge is even more difficult during collaborative analysis tasks where they need to find connections between their own discoveries and insights found by others. In this paper, we describe a context-based retrieval algorithm to identify notes, views and concepts from users' past analyses that are most relevant to a view or a note based on their line of inquiry. We then describe a related notes recommendation feature that surfaces the most relevant items to the user as they work based on this algorithm. We have implemented this recommendation feature in HARVEST, a Web based visual analytic system. We evaluate the related notes recommendation feature of HARVEST through a case study and discuss the implications of our approach. | false | false | [
"Yedendra Babu Shrinivasan",
"David Gotz",
"Jie Lu"
] | [] | [] | [] |
VAST | 2,009 | Describing story evolution from dynamic information streams | 10.1109/VAST.2009.5333437 | Sources of streaming information, such as news syndicates, publish information continuously. Information portals and news aggregators list the latest information from around the world enabling information consumers to easily identify events in the past 24 hours. The volume and velocity of these streams causes information from prior days to quickly vanish despite its utility in providing an informative context for interpreting new information. Few capabilities exist to support an individual attempting to identify or understand trends and changes from streaming information over time. The burden of retaining prior information and integrating with the new is left to the skills, determination, and discipline of each individual. In this paper we present a visual analytics system for linking essential content from information streams over time into dynamic stories that develop and change over multiple days. We describe particular challenges to the analysis of streaming information and present a fundamental visual representation for showing story change and evolution over time. | false | false | [
"Stuart J. Rose",
"Scott Butner",
"Wendy Cowley",
"Michelle L. Gregory",
"Julia Walker"
] | [] | [] | [] |
VAST | 2,009 | Detecting and analyzing relationships among anomalies | 10.1109/VAST.2009.5334426 | The HRL anomaly analysis tool was developed as part of the IEEE VAST Challenge 2009. One of the tasks involved processing badge and network traffic in order to detect and identify a fictitious embassy employee suspected of leaking information. The tool is designed to assist an analyst in detecting, analyzing, and visualizing anomalies and their relationships. Two key visualizations in our submission present how we identified the suspicious traffic using network visualization and how subsequently we connected that activity to an employee using an alibi table. | false | false | [
"David Allen",
"Tsai-Ching Lu",
"David J. Huber"
] | [] | [] | [] |
VAST | 2,009 | EAKOS: VAST 2009 | 10.1109/VAST.2009.5333967 | In this article, I describe the tools and techniques used to generate competing hypotheses for the VAST 2009 Flitter mini challenge. I will describe how I approached solving the social networks and the importance of the geospatial relationships to determine that "Social Structure Form A" was the best matching social network. | false | false | [
"Lorne Leonard"
] | [] | [] | [] |
VAST | 2,009 | Evaluating visual analytics systems for investigative analysis: Deriving design principles from a case study | 10.1109/VAST.2009.5333878 | Despite the growing number of systems providing visual analytic support for investigative analysis, few empirical studies of the potential benefits of such systems have been conducted, particularly controlled, comparative evaluations. Determining how such systems foster insight and sensemaking is important for their continued growth and study, however. Furthermore, studies that identify how people use such systems and why they benefit (or not) can help inform the design of new systems in this area. We conducted an evaluation of the visual analytics system Jigsaw employed in a small investigative sensemaking exercise, and we compared its use to three other more traditional methods of analysis. Sixteen participants performed a simulated intelligence analysis task under one of the four conditions. Experimental results suggest that Jigsaw assisted participants to analyze the data and identify an embedded threat. We describe different analysis strategies used by study participants and how computational support (or the lack thereof) influenced the strategies. We then illustrate several characteristics of the sensemaking process identified in the study and provide design implications for investigative analysis tools based thereon. We conclude with recommendations for metrics and techniques for evaluating other visual analytics investigative analysis tools. | false | false | [
"Youn ah Kang",
"Carsten Görg",
"John T. Stasko"
] | [] | [] | [] |
VAST | 2,009 | Finding comparable temporal categorical records: A similarity measure with an interactive visualization | 10.1109/VAST.2009.5332595 | An increasing number of temporal categorical databases are being collected: Electronic Health Records in healthcare organizations, traffic incident logs in transportation systems, or student records in universities. Finding similar records within these large databases requires effective similarity measures that capture the searcher's intent. Many similarity measures exist for numerical time series, but temporal categorical records are different. We propose a temporal categorical similarity measure, the M&M (Match & Mismatch) measure, which is based on the concept of aligning records by sentinel events, then matching events between the target and the compared records. The M&M measure combines the time differences between pairs of events and the number of mismatches. To accommodate customization of parameters in the M&M measure and results interpretation, we implemented Similan, an interactive search and visualization tool for temporal categorical records. A usability study with 8 participants demonstrated that Similan was easy to learn and enabled them to find similar records, but users had difficulty understanding the M&M measure. The usability study feedback, led to an improved version with a continuous timeline, which was tested in a pilot study with 5 participants. | false | false | [
"Krist Wongsuphasawat",
"Ben Shneiderman"
] | [] | [] | [] |
VAST | 2,009 | finVis: Applied visual analytics for personal financial planning | 10.1109/VAST.2009.5333920 | FinVis is a visual analytics tool that allows the non-expert casual user to interpret the return, risk and correlation aspects of financial data and make personal finance decisions. This interactive exploratory tool helps the casual decision-maker quickly choose between various financial portfolio options and view possible outcomes. FinVis allows for exploration of inter-temporal data to analyze outcomes of short-term or long-term investment decisions. FinVis helps the user overcome cognitive limitations and understand the impact of correlation between financial instruments in order to reap the benefits of portfolio diversification. Because this software is accessible by non-expert users, decision-makers from the general population can benefit greatly from using FinVis in practical applications. We quantify the value of FinVis using experimental economics methods and find that subjects using the FinVis software make better financial portfolio decisions as compared to subjects using a tabular version with the same information. We also find that FinVis engages the user, which results in greater exploration of the dataset and increased learning as compared to a tabular display. Further, participants using FinVis reported increased confidence in financial decision-making and noted that they were likely to use this tool in practical application. | false | false | [
"Stephen Rudolph",
"Anya Samak",
"David S. Ebert"
] | [] | [] | [] |
VAST | 2,009 | Geovisual analytics for self-organizing network data | 10.1109/VAST.2009.5332610 | Cellular radio networks are continually growing in both node count and complexity. It therefore becomes more difficult to manage the networks and necessary to use time and cost effective automatic algorithms to organize the networks neighbor cell relations. There have been a number of attempts to develop such automatic algorithms. Network operators, however, may not trust them because they need to have an understanding of their behavior and of their reliability and performance, which is not easily perceived. This paper presents a novel Web-enabled geovisual analytics approach to exploration and understanding of self-organizing network data related to cells and neighbor cell relations. A demonstrator and case study are presented in this paper, developed in close collaboration with the Swedish telecom company Ericsson and based on large multivariate, time-varying and geospatial data provided by the company. It allows the operators to follow, interact with and analyze the evolution of a self-organizing network and enhance their understanding of how an automatic algorithm configures locally-unique physical cell identities and organizes neighbor cell relations of the network. The geovisual analytics tool is tested with a self-organizing network that is operated by the automatic neighbor relations (ANR) algorithm. The demonstrator has been tested with positive results by a group of domain experts from Ericsson and will be tested in production. | false | false | [
"Ho Van Quan",
"Tobias Åström",
"Mikael Jern"
] | [] | [] | [] |
VAST | 2,009 | Guided analysis of hurricane trends using statistical processes integrated with interactive parallel coordinates | 10.1109/VAST.2009.5332586 | This paper demonstrates the promise of augmenting interactive multivariate representations with information from statistical processes in the domain of weather data analysis. Statistical regression, correlation analysis, and descriptive statistical calculations are integrated via graphical indicators into an enhanced parallel coordinates system, called the Multidimensional Data eXplorer (MDX). These statistical indicators, which highlight significant associations in the data, are complemented with interactive visual analysis capabilities. The resulting system allows a smooth, interactive, and highly visual workflow. The system's utility is demonstrated with an extensive hurricane climate study that was conducted by a hurricane expert. In the study, the expert used a new data set of environmental weather data, composed of 28 independent variables, to predict annual hurricane activity. MDX shows the Atlantic Meridional Mode increases the explained variance of hurricane seasonal activity by 7-15% and removes less significant variables used in earlier studies. The findings and feedback from the expert (1) validate the utility of the data set for hurricane prediction, and (2) indicate that the integration of statistical processes with interactive parallel coordinates, as implemented in MDX, addresses both deficiencies in traditional weather data analysis and exhibits some of the expected benefits of visual data analysis. | false | false | [
"Chad A. Steed",
"J. Edward Swan II",
"T. J. Jankun-Kelly",
"Patrick J. Fitzpatrick"
] | [] | [] | [] |
VAST | 2,009 | Innovative filtering techniques and customized analytics tools | 10.1109/VAST.2009.5334300 | The VAST 2009 Challenge consisted of three heterogeneous synthetic data sets organized into separate mini-challenges with minimal correspondence information. The challenge task was the identification of a suspected data theft from cyber and real-world traces. The grand challenge required integrating the findings from the mini challenges into a plausible, consistent scenario. A mixture of linked, customized tools based on queryable models and rapid prototyping as well as generic analysis tools (developed in-house) helped us correctly solve all of the mini challenges. A collaborative analytic process was employed to reconstruct the scenario and to propose the correct steps for the reliable identification of the criminal organization based on activity traces of its members. | false | false | [
"Harald Bosch",
"Julian Heinrich",
"Christoph Müller 0001",
"Benjamin Höferlin",
"Guido Reina",
"Markus Höferlin",
"Michael Wörner 0001",
"Steffen Koch 0001"
] | [] | [] | [] |
VAST | 2,009 | Integrative visual analytics for suspicious behavior detection | 10.1109/VAST.2009.5334430 | In the VAST Challenge 2009 suspicious behavior had to be detected applying visual analytics to heterogeneous data, such as network traffic, social network enriched with geo-spatial attributes, and finally video surveillance data. This paper describes some of the awarded parts from our solution entry. | false | false | [
"Peter Bak",
"Christian Rohrdantz",
"Svenja Leifert",
"Christoph Granacher",
"Stefan Koch",
"Simon Butscher",
"Patrick Jungk",
"Daniel A. Keim"
] | [] | [] | [] |
VAST | 2,009 | Interactive poster: A proposal for sharing user requirements for visual analytic tools | 10.1109/VAST.2009.5333474 | Although many in the community have advocated user-centered evaluations for visual analytic environments, a significant barrier exists. The users targeted by the visual analytics community (law enforcement personnel, professional information analysts, financial analysts, health care analysts, etc.) are often inaccessible to researchers. These analysts are extremely busy and their work environments and data are often classified or at least confidential. Furthermore, their tasks often last weeks or even months. It is simply not feasible to do such long-term observations to understand their jobs. How then can we hope to gather enough information about the diverse user populations to understand their needs? Some researchers, including the author, have been successful in getting access to specific end-users. A reasonable approach, therefore, would be to find a way to share user information. This work outlines a proposal for developing a handbook of user profiles for use by researchers, developers, and evaluators. | false | false | [
"Jean Scholtz"
] | [] | [] | [] |
VAST | 2,009 | Interactive poster: Interactive multiobjective optimization - a new application area for visual analytics | 10.1109/VAST.2009.5333081 | The poster introduces interactive multiobjective optimization (IMO) as a field offering new application possibilities and challenges for visual analytics (VA), and aims at inspiring collaboration between the two fields. Our aim is to collect new ideas in order to be able to utilize VA techniques more effectively in our user interface development. Simulation-based IMO methods are developed for complex problem solving, where the expert decision maker (analyst) should be supported during the iterative process of eliciting preference information and examining the resulting output data. IMO is a subfield of multiple criteria decision making (MCDM). In simulation-based IMO, the optimization task is formulated in a mathematical model containing several conflicting objectives and constraints depending on decision variables. While using IMO methods the analyst progressively provides preference information in order to find the most satisfactory compromise between the conflicting objectives. In the poster, the implementations of two new IMO methods are used as examples to demonstrate concrete challenges of interaction design. One of them is described in this summary. | false | false | [
"Suvi Tarkkanen",
"Kaisa Miettinen",
"Jussi Hakanen"
] | [] | [] | [] |
VAST | 2,009 | Interactive visual analysis of location reporting patterns | 10.1109/VAST.2009.5333453 | Interactive visualization methods are often used to aid in the analysis of large datasets. We present a novel interactive visualization technique designed specifically for the analysis of location reporting patterns within large time-series datasets. We use a set of triangles with color coding to indicate the time between location reports. This allows reporting patterns (expected and unexpected) to be easily discerned during interactive analysis. We discuss the details of our method and describe evaluation both from expert opinion and from a user study. | false | false | [
"Derek Overby",
"John Keyser",
"Jim Wall"
] | [] | [] | [] |
VAST | 2,009 | Interactive visual clustering of large collections of trajectories | 10.1109/VAST.2009.5332584 | One of the most common operations in exploration and analysis of various kinds of data is clustering, i.e. discovery and interpretation of groups of objects having similar properties and/or behaviors. In clustering, objects are often treated as points in multi-dimensional space of properties. However, structurally complex objects, such as trajectories of moving entities and other kinds of spatio-temporal data, cannot be adequately represented in this manner. Such data require sophisticated and computationally intensive clustering algorithms, which are very hard to scale effectively to large datasets not fitting in the computer main memory. We propose an approach to extracting meaningful clusters from large databases by combining clustering and classification, which are driven by a human analyst through an interactive visual interface. | false | false | [
"Gennady L. Andrienko",
"Natalia V. Andrienko",
"Salvatore Rinzivillo",
"Mirco Nanni",
"Dino Pedreschi",
"Fosca Giannotti"
] | [] | [] | [] |
VAST | 2009 | Iterative integration of visual insights during patent search and analysis | 10.1109/VAST.2009.5333564 | Patents are an important economic factor in today's globalized markets. Therefore, the analysis of patent information has become an inevitable task for a variety of interest groups. The retrieval of relevant patent information is an integral part of almost every patent analysis scenario. Unfortunately, the complexity of patent material inhibits a straightforward retrieval of all relevant patent documents and leads to iterative, time-consuming approaches in practice. With 'PatViz', a new system for interactive analysis of patent information has been developed to leverage iterative query refinement. PatViz supports users in building complex queries visually and in exploring patent result sets interactively. Thereby, the visual query module introduces an abstraction layer that provides uniform access to different retrieval systems and relieves users of the burden of learning different complex query languages. By establishing an integrated environment it allows for interactive reintegration of insights gained from visual result set exploration into the visual query representation. We expect that the approach we have taken is also suitable to improve iterative query refinement in other Visual Analytics systems. | false | false | [
"Steffen Koch 0001",
"Harald Bosch",
"Mark Giereth",
"Thomas Ertl"
] | [
"BP"
] | [] | [] |
VAST | 2009 | LSAView: A tool for visual exploration of latent semantic modeling | 10.1109/VAST.2009.5333428 | Latent Semantic Analysis (LSA) is a commonly-used method for automated processing, modeling, and analysis of unstructured text data. One of the biggest challenges in using LSA is determining the appropriate model parameters to use for different data domains and types of analyses. Although automated methods have been developed to make rank and scaling parameter choices, these approaches often make choices with respect to noise in the data, without an understanding of how those choices impact analysis and problem solving. Further, no tools currently exist to explore the relationships between an LSA model and analysis methods. Our work focuses on how parameter choices impact analysis and problem solving. In this paper, we present LSAView, a system for interactively exploring parameter choices for LSA models. We illustrate the use of LSAView's small multiple views, linked matrix-graph views, and data views to analyze parameter selection and application in the context of graph layout and clustering. | false | false | [
"Patricia Crossno",
"Daniel M. Dunlavy",
"Timothy M. Shead"
] | [] | [] | [] |
VAST | 2009 | MassVis: Visual analysis of protein complexes using mass spectrometry | 10.1109/VAST.2009.5333895 | Protein complexes are formed when two or more proteins non-covalently interact to form a larger three dimensional structure with specific biological function. Understanding the composition of such complexes is vital to understanding cell biology at the molecular level. MassVis is a visual analysis tool designed to assist the interpretation of data from a new workflow for detecting the composition of such protein complexes in biological samples. The data generated by the laboratory workflow naturally lends itself to a scatter plot visualization. However, characteristics of this data give rise to some unique aspects not typical of a standard scatter plot. We are able to take the output from tandem mass spectrometry and render the data in such a way that it mimics more traditional two-dimensional gel techniques and at the same time reveals the correlated behavior indicative of protein complexes. By computationally measuring these correlated patterns in the data, membership in putative complexes can be inferred. User interactions are provided to support both an interactive discovery mode as well as an unsupervised clustering of likely complexes. The specific analysis tasks led us to design a unique arrangement of item selection and coordinated detail views in order to simultaneously view different aspects of the selected item. | false | false | [
"Robert Kincaid",
"Kurt Dejgaard"
] | [] | [] | [] |
VAST | 2009 | Merging visual analysis with automated reasoning: Using Prajna to solve the traffic challenge | 10.1109/VAST.2009.5332481 | The Internet traffic challenge required the development of a custom application to analyze internet traffic patterns coupled with building access records. To solve this challenge, the author applied the Prajna Project, an open-source Java toolkit designed to provide various capabilities for visualization, knowledge representation, semantic reasoning, and data fusion. By applying some of the capabilities of Prajna to this challenge, the author could quickly develop a custom application for visual analysis. The author determined that he could solve some of the analytical components of this challenge using automated reasoning techniques. Prajna includes interfaces to incorporate automated reasoners into visual applications. By blending the automated reasoning processes with visual analysis, the author could design a flexible, useful application to solve this challenge. | false | false | [
"Edward Swing"
] | [] | [] | [] |
VAST | 2009 | Model space visualization for multivariate linear trend discovery | 10.1109/VAST.2009.5333431 | Discovering and extracting linear trends and correlations in datasets is very important for analysts to understand multivariate phenomena. However, current widely used multivariate visualization techniques, such as parallel coordinates and scatterplot matrices, fail to reveal and illustrate such linear relationships intuitively, especially when more than 3 variables are involved or multiple trends coexist in the dataset. We present a novel multivariate model parameter space visualization system that helps analysts discover single and multiple linear patterns and extract subsets of data that fit a model well. Using this system, analysts are able to explore and navigate in model parameter space, interactively select and tune patterns, and refine the model for accuracy using computational techniques. We build connections between model space and data space visually, allowing analysts to employ their domain knowledge during exploration to better interpret the patterns they discover and their validity. Case studies with real datasets are used to investigate the effectiveness of the visualizations. | false | false | [
"Zhenyu Guo",
"Matthew O. Ward",
"Elke A. Rundensteiner"
] | [] | [] | [] |
VAST | 2009 | Multiple step social structure analysis with Cytoscape | 10.1109/VAST.2009.5333961 | Cytoscape is a popular open source tool for biologists to visualize interaction networks. We find that it offers most of the desired functionality for visual analytics on graph data to guide us in the identification of the underlying social structure. We demonstrate its utility in the identification of the social structure in the VAST 2009 Flitter Mini Challenge. | false | false | [
"Hao Zhou",
"Anna A. Shaverdian",
"H. V. Jagadish",
"George Michailidis"
] | [] | [] | [] |