Schema (value ranges as displayed):
- Conference: string, 6 distinct values
- Year: int64, range 1.99k–2.03k
- Title: string, length 8–187
- DOI: string, length 16–32
- Abstract: string, length 128–7.15k
- Accessible: bool, 2 classes
- Early: bool, 2 classes
- AuthorNames-Deduped: list, length 1–24
- Award: list, length 0–2
- Resources: list, length 0–5
- ResourceLinks: list, length 0–10
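The column list above is effectively a record schema, and it can be mirrored in code. A minimal sketch (the `PaperRecord` class and `parse_row` helper are illustrative names, not part of any published loader):

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class PaperRecord:
    """One row of the paper-metadata table; fields mirror the columns."""
    conference: str            # one of 6 values, e.g. "VAST", "SciVis"
    year: int
    title: str
    doi: str
    abstract: str
    accessible: bool
    early: bool
    authors: List[str]                                   # AuthorNames-Deduped, 1-24 names
    award: List[str] = field(default_factory=list)       # 0-2 entries, e.g. "HM"
    resources: List[str] = field(default_factory=list)   # 0-5 entries, e.g. "P"
    resource_links: List[Dict[str, Any]] = field(default_factory=list)  # 0-10 dicts

def parse_row(row: Dict[str, Any]) -> PaperRecord:
    """Map one raw row (column name -> value) onto the record type."""
    return PaperRecord(
        conference=row["Conference"],
        year=int(str(row["Year"]).replace(",", "")),  # viewers may render 2019 as "2,019"
        title=row["Title"],
        doi=row["DOI"],
        abstract=row["Abstract"],
        accessible=bool(row["Accessible"]),
        early=bool(row["Early"]),
        authors=list(row["AuthorNames-Deduped"]),
        award=list(row["Award"]),
        resources=list(row["Resources"]),
        resource_links=list(row["ResourceLinks"]),
    )
```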
Conference: VAST
Year: 2019
Title: Selection Bias Tracking and Detailed Subset Comparison for High-Dimensional Data
DOI: 10.1109/TVCG.2019.2934209
The collection of large, complex datasets has become common across a wide variety of domains. Visual analytics tools increasingly play a key role in exploring and answering complex questions about these large datasets. However, many visualizations are not designed to concurrently visualize the large number of dimensions present in complex datasets (e.g. tens of thousands of distinct codes in an electronic health record system). This fact, combined with the ability of many visual analytics systems to enable rapid, ad-hoc specification of groups, or cohorts, of individuals based on a small subset of visualized dimensions, leads to the possibility of introducing selection bias–when the user creates a cohort based on a specified set of dimensions, differences across many other unseen dimensions may also be introduced. These unintended side effects may result in the cohort no longer being representative of the larger population intended to be studied, which can negatively affect the validity of subsequent analyses. We present techniques for selection bias tracking and visualization that can be incorporated into high-dimensional exploratory visual analytics systems, with a focus on medical data with existing data hierarchies. These techniques include: (1) tree-based cohort provenance and visualization, including a user-specified baseline cohort that all other cohorts are compared against, and visual encoding of cohort “drift”, which indicates where selection bias may have occurred, and (2) a set of visualizations, including a novel icicle-plot based visualization, to compare in detail the per-dimension differences between the baseline and a user-specified focus cohort. These techniques are integrated into a medical temporal event sequence visual analytics tool. We present example use cases and report findings from domain expert user interviews.
Accessible: false
Early: false
AuthorNames-Deduped: [ "David Borland", "Wenyuan Wang", "Jonathan Zhang", "Joshua Shrestha", "David Gotz" ]
Award: []
Resources: [ "P" ]
ResourceLinks: [ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/1906.07625v3", "icon": "paper" } ]
Conference: VAST
Year: 2019
Title: Semantic Concept Spaces: Guided Topic Model Refinement using Word-Embedding Projections
DOI: 10.1109/TVCG.2019.2934654
We present a framework that allows users to incorporate the semantics of their domain knowledge for topic model refinement while remaining model-agnostic. Our approach enables users to (1) understand the semantic space of the model, (2) identify regions of potential conflicts and problems, and (3) readjust the semantic relation of concepts based on their understanding, directly influencing the topic modeling. These tasks are supported by an interactive visual analytics workspace that uses word-embedding projections to define concept regions which can then be refined. The user-refined concepts are independent of a particular document collection and can be transferred to related corpora. All user interactions within the concept space directly affect the semantic relations of the underlying vector space model, which, in turn, change the topic modeling. In addition to direct manipulation, our system guides the users' decision-making process through recommended interactions that point out potential improvements. This targeted refinement aims at minimizing the feedback required for an efficient human-in-the-loop process. We confirm the improvements achieved through our approach in two user studies that show topic model quality improvements through our visual knowledge externalization and learning process.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Mennatallah El-Assady", "Rebecca Kehlbeck", "Christopher Collins 0001", "Daniel A. Keim", "Oliver Deussen" ]
Award: []
Resources: [ "P" ]
ResourceLinks: [ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/1908.00475v1", "icon": "paper" } ]
Conference: VAST
Year: 2019
Title: sPortfolio: Stratified Visual Analysis of Stock Portfolios
DOI: 10.1109/TVCG.2019.2934660
Quantitative Investment, built on the solid foundation of robust financial theories, is at center stage in the investment industry today. The essence of quantitative investment is the multi-factor model, which explains the relationship between the risk and return of equities. However, the multi-factor model generates enormous quantities of factor data, through which even experienced portfolio managers find it difficult to navigate. This has led to portfolio analysis and factor research being limited by a lack of intuitive visual analytics tools. Previous portfolio visualization systems have mainly focused on the relationship between the portfolio return and stock holdings, which is insufficient for deriving actionable insights or understanding market trends. In this paper, we present sPortfolio, which, to the best of our knowledge, is the first visualization that attempts to explore the factor investment area. In particular, sPortfolio provides a holistic overview of the factor data and aims to facilitate the analysis at three different levels: a Risk-Factor level, for a general market situation analysis; a Multiple-Portfolio level, for understanding the portfolio strategies; and a Single-Portfolio level, for investigating detailed operations. The system's effectiveness and usability are demonstrated through three case studies. The system has passed its pilot study and is soon to be deployed in industry.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Xuanwu Yue", "Jiaxin Bai", "Qinhan Liu", "Yiyang Tang", "Abishek Puri", "Ke Li", "Huamin Qu" ]
Award: []
Resources: [ "P" ]
ResourceLinks: [ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/1910.05536v1", "icon": "paper" } ]
Conference: VAST
Year: 2019
Title: STBins: Visual Tracking and Comparison of Multiple Data Sequences Using Temporal Binning
DOI: 10.1109/TVCG.2019.2934289
While analyzing multiple data sequences, the following questions typically arise: how does a single sequence change over time, how do multiple sequences compare within a period, and how does such comparison change over time. This paper presents a visual technique named STBins to answer these questions. STBins is designed for visual tracking of individual data sequences and also for comparison of sequences. The latter is done by showing the similarity of sequences within temporal windows. A perception study is conducted to examine the readability of alternative visual designs based on sequence tracking and comparison tasks. Also, two case studies based on real-world datasets are presented in detail to demonstrate usage of our technique.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Ji Qi", "Vincent Bloemen", "Shihan Wang 0001", "Jarke J. van Wijk", "Huub van de Wetering" ]
Award: []
Resources: []
ResourceLinks: []
Conference: VAST
Year: 2019
Title: Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations
DOI: 10.1109/TVCG.2019.2934659
Deep learning is increasingly used in decision-making tasks. However, understanding how neural networks produce final predictions remains a fundamental challenge. Existing work on interpreting neural network predictions for images often focuses on explaining predictions for single images or neurons. As predictions are often computed from millions of weights that are optimized over millions of images, such explanations can easily miss a bigger picture. We present Summit, an interactive system that scalably and systematically summarizes and visualizes what features a deep learning model has learned and how those features interact to make predictions. Summit introduces two new scalable summarization techniques: (1) activation aggregation discovers important neurons, and (2) neuron-influence aggregation identifies relationships among such neurons. Summit combines these techniques to create the novel attribution graph that reveals and summarizes crucial neuron associations and substructures that contribute to a model's outcomes. Summit scales to large data, such as the ImageNet dataset with 1.2M images, and leverages neural network feature visualization and dataset examples to help users distill large, complex neural network models into compact, interactive visualizations. We present neural network exploration scenarios where Summit helps us discover multiple surprising insights into a prevalent, large-scale image classifier's learned representations and informs future neural network architecture design. The Summit visualization runs in modern web browsers and is open-sourced.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Fred Hohman", "Haekyu Park", "Caleb Robinson", "Polo Chau" ]
Award: []
Resources: [ "P" ]
ResourceLinks: [ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/1904.02323v3", "icon": "paper" } ]
Conference: VAST
Year: 2019
Title: Supporting Analysis of Dimensionality Reduction Results with Contrastive Learning
DOI: 10.1109/TVCG.2019.2934251
Dimensionality reduction (DR) is frequently used for analyzing and visualizing high-dimensional data as it provides a good first glance of the data. However, to interpret the DR result for gaining useful insights from the data, it would take additional analysis effort such as identifying clusters and understanding their characteristics. While there are many automatic methods (e.g., density-based clustering methods) to identify clusters, effective methods for understanding a cluster's characteristics are still lacking. A cluster can be mostly characterized by its distribution of feature values. Reviewing the original feature values is not a straightforward task when the number of features is large. To address this challenge, we present a visual analytics method that effectively highlights the essential features of a cluster in a DR result. To extract the essential features, we introduce an enhanced usage of contrastive principal component analysis (cPCA). Our method, called ccPCA (contrasting clusters in PCA), can calculate each feature's relative contribution to the contrast between one cluster and other clusters. With ccPCA, we have created an interactive system including a scalable visualization of clusters' feature contributions. We demonstrate the effectiveness of our method and system with case studies using several publicly available datasets.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Takanori Fujiwara", "Oh-Hyun Kwon", "Kwan-Liu Ma" ]
Award: [ "HM" ]
Resources: [ "P" ]
ResourceLinks: [ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/1905.03911v3", "icon": "paper" } ]
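The ccPCA method in the record above builds on contrastive PCA (cPCA), which finds directions with high variance in a target dataset but low variance in a background dataset. A minimal sketch of that cPCA building block (not the authors' ccPCA implementation; the contrast parameter `alpha` is user-chosen):

```python
import numpy as np

def cpca_directions(target: np.ndarray, background: np.ndarray,
                    alpha: float = 1.0, n_components: int = 2) -> np.ndarray:
    """Contrastive PCA: directions with high variance in `target` but low
    variance in `background`. Rows are samples, columns are features."""
    c_tg = np.cov(target, rowvar=False)
    c_bg = np.cov(background, rowvar=False)
    # Eigendecomposition of the contrast matrix C_target - alpha * C_background.
    eigvals, eigvecs = np.linalg.eigh(c_tg - alpha * c_bg)
    # eigh returns eigenvalues in ascending order; keep the largest ones.
    order = np.argsort(eigvals)[::-1][:n_components]
    return eigvecs[:, order]  # shape (n_features, n_components)
```

ccPCA extends this idea: taking one cluster as the target and the remaining clusters as the background, it scores each feature's relative contribution to the contrast.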
Conference: VAST
Year: 2019
Title: Tac-Simur: Tactic-based Simulative Visual Analytics of Table Tennis
DOI: 10.1109/TVCG.2019.2934630
Simulative analysis in competitive sports can provide prospective insights, which can help improve the performance of players in future matches. However, adequately simulating the complex competition process and effectively explaining the simulation result to domain experts are typically challenging. This work presents a design study to address these challenges in table tennis. We propose a well-established hybrid second-order Markov chain model to characterize and simulate the competition process in table tennis. Compared with existing methods, our approach is the first to support the effective simulation of tactics, which represent high-level competition strategies in table tennis. Furthermore, we introduce a visual analytics system called Tac-Simur based on the proposed model for simulative visual analytics. Tac-Simur enables users to easily navigate different players and their tactics based on their respective performance in matches to identify the player and the tactics of interest for further analysis. Then, users can utilize the system to interactively explore diverse simulation tasks and visually explain the simulation results. The effectiveness and usefulness of this work are demonstrated by two case studies, in which domain experts utilize Tac-Simur to find interesting and valuable insights. The domain experts also provide positive feedback on the usability of Tac-Simur. Our work can be extended to other similar sports such as tennis and badminton.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Jiachen Wang", "Kejian Zhao", "Dazhen Deng", "Anqi Cao", "Xiao Xie", "Zheng Zhou", "Hui Zhang 0051", "Yingcai Wu" ]
Award: []
Resources: []
ResourceLinks: []
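The second-order Markov chain at the core of Tac-Simur conditions each next state on the previous two states. A generic sketch of second-order simulation (the states and transition table here are illustrative, not the paper's fitted model):

```python
import random
from typing import Dict, List, Optional, Tuple

def simulate_second_order(
    transitions: Dict[Tuple[str, str], Dict[str, float]],
    s0: str, s1: str, n_steps: int,
    rng: Optional[random.Random] = None,
) -> List[str]:
    """Simulate a second-order Markov chain: the next state is drawn from a
    distribution conditioned on the previous *two* states."""
    rng = rng or random.Random(0)
    seq = [s0, s1]
    for _ in range(n_steps):
        dist = transitions[(seq[-2], seq[-1])]  # {next_state: probability}
        r, acc = rng.random(), 0.0
        for state, p in dist.items():
            acc += p
            if r < acc:
                break  # inverse-CDF sampling picked this state
        seq.append(state)
    return seq
```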
Conference: VAST
Year: 2019
Title: The Validity, Generalizability and Feasibility of Summative Evaluation Methods in Visual Analytics
DOI: 10.1109/TVCG.2019.2934264
Many evaluation methods have been used to assess the usefulness of Visual Analytics (VA) solutions. These methods stem from a variety of origins with different assumptions and goals, which cause confusion about their proofing capabilities. Moreover, the lack of discussion about the evaluation processes may limit our potential to develop new evaluation methods specialized for VA. In this paper, we present an analysis of evaluation methods that have been used to summatively evaluate VA solutions. We provide a survey and taxonomy of the evaluation methods that have appeared in the VAST literature in the past two years. We then analyze these methods in terms of validity and generalizability of their findings, as well as the feasibility of using them. We propose a new metric called summative quality to compare evaluation methods according to their ability to prove usefulness, and make recommendations for selecting evaluation methods based on their summative quality in the VA domain.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Mosab Khayat", "Morteza Karimzadeh", "David S. Ebert", "Arif Ghafoor" ]
Award: []
Resources: [ "P" ]
ResourceLinks: [ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/1907.13314v2", "icon": "paper" } ]
Conference: VAST
Year: 2019
Title: The What-If Tool: Interactive Probing of Machine Learning Models
DOI: 10.1109/TVCG.2019.2934619
A key challenge in developing and deploying Machine Learning (ML) systems is understanding their performance across a wide range of inputs. To address this challenge, we created the What-If Tool, an open-source application that allows practitioners to probe, visualize, and analyze ML systems, with minimal coding. The What-If Tool lets practitioners test performance in hypothetical situations, analyze the importance of different data features, and visualize model behavior across multiple models and subsets of input data. It also lets practitioners measure systems according to multiple ML fairness metrics. We describe the design of the tool, and report on real-life usage at different organizations.
Accessible: false
Early: false
AuthorNames-Deduped: [ "James Wexler", "Mahima Pushkarna", "Tolga Bolukbasi", "Martin Wattenberg", "Fernanda B. Viégas", "Jimbo Wilson" ]
Award: []
Resources: [ "P" ]
ResourceLinks: [ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/1907.04135v2", "icon": "paper" } ]
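Among the fairness metrics such a tool can report is demographic parity, the gap in positive-prediction rate across groups. A hedged sketch of that metric (an illustrative helper, not the What-If Tool's actual API):

```python
from typing import List, Sequence

def demographic_parity_gap(preds: Sequence[int], groups: Sequence[str]) -> float:
    """Demographic parity gap: the difference between the highest and lowest
    positive-prediction rate over the groups. 0.0 means perfect parity."""
    rates: List[float] = []
    for g in set(groups):
        # Positive-prediction rate restricted to members of group g.
        members = [p for p, gg in zip(preds, groups) if gg == g]
        rates.append(sum(members) / len(members))
    return max(rates) - min(rates)
```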
Conference: VAST
Year: 2019
Title: TopicSifter: Interactive Search Space Reduction through Targeted Topic Modeling
DOI: 10.1109/VAST47406.2019.8986922
Topic modeling is commonly used to analyze and understand large document collections. However, in practice, users want to focus on specific aspects or “targets” rather than the entire corpus. For example, given a large collection of documents, users may want only a smaller subset which more closely aligns with their interests, tasks, and domains. In particular, our paper focuses on large-scale document retrieval with high recall where any missed relevant documents can be critical. A simple keyword matching search is generally neither effective nor efficient as 1) it is difficult to find a list of keyword queries that can cover the documents of interest before exploring the dataset, 2) some documents may not contain the exact keywords of interest but may still be highly relevant, and 3) some words have multiple meanings, which would result in irrelevant documents included in the retrieved subset. In this paper, we present TopicSifter, a visual analytics system for interactive search space reduction. Our system utilizes targeted topic modeling based on nonnegative matrix factorization and allows users to give relevance feedback in order to refine their target and guide the topic modeling to the most relevant results.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Hannah Kim", "Dongjin Choi", "Barry L. Drake", "Alex Endert", "Haesun Park" ]
Award: []
Resources: [ "P" ]
ResourceLinks: [ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/1907.12079v1", "icon": "paper" } ]
Conference: VAST
Year: 2019
Title: Understanding the Role of Alternatives in Data Analysis Practices
DOI: 10.1109/TVCG.2019.2934593
Data workers are people who perform data analysis activities as a part of their daily work but do not formally identify as data scientists. They come from various domains and often need to explore diverse sets of hypotheses and theories, a variety of data sources, algorithms, methods, tools, and visual designs. Taken together, we call these alternatives. To better understand and characterize the role of alternatives in their analyses, we conducted semi-structured interviews with 12 data workers with different types of expertise. We conducted four types of analyses to understand 1) why data workers explore alternatives; 2) the different notions of alternatives and how they fit into the sensemaking process; 3) the high-level processes around alternatives; and 4) their strategies to generate, explore, and manage those alternatives. We find that participants' diverse levels of domain and computational expertise, experience with different tools, and collaboration within their broader context play an important role in how they explore these alternatives. These findings call out the need for more attention towards a deeper understanding of alternatives and the need for better tools to facilitate the exploration, interpretation, and management of alternatives. Drawing upon these analyses and findings, we present a framework based on participants' 1) degree of attention, 2) abstraction level, and 3) analytic processes. We show how this framework can help understand how data workers consider such alternatives in their analyses and how tool designers might create tools to better support them.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Jiali Liu", "Nadia Boukhelifa", "James R. Eagan" ]
Award: []
Resources: []
ResourceLinks: []
Conference: VAST
Year: 2019
Title: VASABI: Hierarchical User Profiles for Interactive Visual User Behaviour Analytics
DOI: 10.1109/TVCG.2019.2934609
User behaviour analytics (UBA) systems offer sophisticated models that capture users' behaviour over time with an aim to identify fraudulent activities that do not match their profiles. Motivated by the challenges in the interpretation of UBA models, this paper presents a visual analytics approach to help analysts gain a comprehensive understanding of user behaviour at multiple levels, namely individual and group level. We take a user-centred approach to design a visual analytics framework supporting the analysis of collections of users and the numerous sessions of activities they conduct within digital applications. The framework is centred around the concept of hierarchical user profiles that are built based on features derived from sessions, as well as on user tasks extracted using a topic modelling approach to summarise and stratify user behaviour. We externalise a series of analysis goals and tasks, and evaluate our methods through use cases conducted with experts. We observe that with the aid of interactive visual hierarchical user profiles, analysts are able to conduct exploratory and investigative analysis effectively, and able to understand the characteristics of user behaviour to make informed decisions whilst evaluating suspicious users and activities.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Phong H. Nguyen", "Rafael Henkin", "Siming Chen 0001", "Natalia V. Andrienko", "Gennady L. Andrienko", "Olivier Thonnard", "Cagatay Turkay" ]
Award: []
Resources: []
ResourceLinks: []
Conference: VAST
Year: 2019
Title: VASSL: A Visual Analytics Toolkit for Social Spambot Labeling
DOI: 10.1109/TVCG.2019.2934266
Social media platforms are filled with social spambots. Detecting these malicious accounts is essential, yet challenging, as they continually evolve to evade detection techniques. In this article, we present VASSL, a visual analytics system that assists in the process of detecting and labeling spambots. Our tool enhances the performance and scalability of manual labeling by providing multiple connected views and utilizing dimensionality reduction, sentiment analysis and topic modeling, enabling insights for the identification of spambots. The system allows users to select and analyze groups of accounts in an interactive manner, which enables the detection of spambots that may not be identified when examined individually. We present a user study to objectively evaluate the performance of VASSL users, as well as to capture subjective opinions about the usefulness and the ease of use of the tool.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Mosab Khayat", "Morteza Karimzadeh", "Jieqiong Zhao", "David S. Ebert" ]
Award: []
Resources: [ "P" ]
ResourceLinks: [ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/1907.13319v2", "icon": "paper" } ]
Conference: VAST
Year: 2019
Title: VIANA: Visual Interactive Annotation of Argumentation
DOI: 10.1109/VAST47406.2019.8986917
Argumentation Mining addresses the challenging tasks of identifying boundaries of argumentative text fragments and extracting their relationships. Fully automated solutions do not reach satisfactory accuracy due to their insufficient incorporation of semantics and domain knowledge. Therefore, experts currently rely on time-consuming manual annotations. In this paper, we present a visual analytics system that augments the manual annotation process by automatically suggesting which text fragments to annotate next. The accuracy of those suggestions is improved over time by incorporating linguistic knowledge and language modeling to learn a measure of argument similarity from user interactions. Based on a long-term collaboration with domain experts, we identify and model five high-level analysis tasks. We enable close reading and note-taking, annotation of arguments, argument reconstruction, extraction of argument relations, and exploration of argument graphs. To avoid context switches, we transition between all views through seamless morphing, visually anchoring all text- and graph-based layers. We evaluate our system with a two-stage expert user study based on a corpus of presidential debates. The results show that experts prefer our system over existing solutions due to the speedup provided by the automatic suggestions and the tight integration between text and graph views.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Fabian Sperrle", "Rita Sevastjanova", "Rebecca Kehlbeck", "Mennatallah El-Assady" ]
Award: []
Resources: [ "P" ]
ResourceLinks: [ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/1907.12413v1", "icon": "paper" } ]
Conference: VAST
Year: 2019
Title: Visual Analysis of High-Dimensional Event Sequence Data via Dynamic Hierarchical Aggregation
DOI: 10.1109/TVCG.2019.2934661
Temporal event data are collected across a broad range of domains, and a variety of visual analytics techniques have been developed to empower analysts working with this form of data. These techniques generally display aggregate statistics computed over sets of event sequences that share common patterns. Such techniques are often hindered, however, by the high-dimensionality of many real-world event sequence datasets which can prevent effective aggregation. A common coping strategy for this challenge is to group event types together prior to visualization, as a pre-process, so that each group can be represented within an analysis as a single event type. However, computing these event groupings as a pre-process also places significant constraints on the analysis. This paper presents a new visual analytics approach for dynamic hierarchical dimension aggregation. The approach leverages a predefined hierarchy of dimensions to computationally quantify the informativeness, with respect to a measure of interest, of alternative levels of grouping within the hierarchy at runtime. This information is then interactively visualized, enabling users to dynamically explore the hierarchy to select the most appropriate level of grouping to use at any individual step within an analysis. Key contributions include an algorithm for interactively determining the most informative set of event groupings for a specific analysis context, and a scented scatter-plus-focus visualization design with an optimization-based layout algorithm that supports interactive hierarchical exploration of alternative event type groupings. We apply these techniques to high-dimensional event sequence data from the medical domain and report findings from domain expert interviews.
Accessible: false
Early: false
AuthorNames-Deduped: [ "David Gotz", "Jonathan Zhang", "Wenyuan Wang", "Joshua Shrestha", "David Borland" ]
Award: []
Resources: [ "P" ]
ResourceLinks: [ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/1906.07617v2", "icon": "paper" } ]
Conference: VAST
Year: 2019
Title: Visual Analytics for Electromagnetic Situation Awareness in Radio Monitoring and Management
DOI: 10.1109/TVCG.2019.2934655
Traditional radio monitoring and management largely depend on radio spectrum data analysis, which requires considerable domain experience and heavy cognition effort and frequently results in incorrect signal judgment and incomprehensive situation awareness. Faced with increasingly complicated electromagnetic environments, radio supervisors urgently need additional data sources and advanced analytical technologies to enhance their situation awareness ability. This paper introduces a visual analytics approach for electromagnetic situation awareness. Guided by a detailed scenario and requirement analysis, we first propose a signal clustering method to process radio signal data and a situation assessment model to obtain qualitative and quantitative descriptions of the electromagnetic situations. We then design a two-module interface with a set of visualization views and interactions to help radio supervisors perceive and understand the electromagnetic situations by a joint analysis of radio signal data and radio spectrum data. Evaluations on real-world data sets and an interview with actual users demonstrate the effectiveness of our prototype system. Finally, we discuss the limitations of the proposed approach and provide future work directions.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Ying Zhao 0001", "Xiaobo Luo", "Xiaoru Lin", "Hairong Wang", "Xiaoyan Kui", "Fangfang Zhou", "Jinsong Wang", "Yi Chen 0007", "Wei Chen 0001" ]
Award: []
Resources: []
ResourceLinks: []
Conference: VAST
Year: 2019
Title: Visual Interaction with Deep Learning Models through Collaborative Semantic Inference
DOI: 10.1109/TVCG.2019.2934595
Automation of tasks can have critical consequences when humans lose agency over decision processes. Deep learning models are particularly susceptible since current black-box approaches lack explainable reasoning. We argue that both the visual interface and model structure of deep learning systems need to take into account interaction design. We propose a framework of collaborative semantic inference (CSI) for the co-design of interactions and models to enable visual collaboration between humans and algorithms. The approach exposes the intermediate reasoning process of models which allows semantic interactions with the visual metaphors of a problem, which means that a user can both understand and control parts of the model reasoning process. We demonstrate the feasibility of CSI with a co-designed case study of a document summarization system.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Sebastian Gehrmann", "Hendrik Strobelt", "Robert Krüger", "Hanspeter Pfister", "Alexander M. Rush" ]
Award: []
Resources: [ "P" ]
ResourceLinks: [ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/1907.10739v1", "icon": "paper" } ]
Conference: VAST
Year: 2019
Title: You can't always sketch what you want: Understanding Sensemaking in Visual Query Systems
DOI: 10.1109/TVCG.2019.2934666
Visual query systems (VQSs) empower users to interactively search for line charts with desired visual patterns, typically specified using intuitive sketch-based interfaces. Despite decades of past work on VQSs, these efforts have not translated to adoption in practice, possibly because VQSs are largely evaluated in unrealistic lab-based settings. To remedy this gap in adoption, we collaborated with experts from three diverse domains—astronomy, genetics, and material science—via a year-long user-centered design process to develop a VQS that supports their workflow and analytical needs, and evaluate how VQSs can be used in practice. Our study results reveal that ad-hoc sketch-only querying is not as commonly used as prior work suggests, since analysts are often unable to precisely express their patterns of interest. In addition, we characterize three essential sensemaking processes supported by our enhanced VQS. We discover that participants employ all three processes, but in different proportions, depending on the analytical needs in each domain. Our findings suggest that all three sensemaking processes must be integrated in order to make future VQSs useful for a wide range of analytical inquiries.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Doris Jung Lin Lee", "John Lee 0005", "Tarique Siddiqui", "Jaewoo Kim", "Karrie Karahalios", "Aditya G. Parameswaran" ]
Award: []
Resources: [ "P" ]
ResourceLinks: [ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/1710.00763v7", "icon": "paper" } ]
Conference: SciVis
Year: 2019
Title: A Structural Average of Labeled Merge Trees for Uncertainty Visualization
DOI: 10.1109/TVCG.2019.2934242
Physical phenomena in science and engineering are frequently modeled using scalar fields. In scalar field topology, graph-based topological descriptors such as merge trees, contour trees, and Reeb graphs are commonly used to characterize topological changes in the (sub)level sets of scalar fields. One of the biggest challenges and opportunities to advance topology-based visualization is to understand and incorporate uncertainty into such topological descriptors to effectively reason about their underlying data. In this paper, we study a structural average of a set of labeled merge trees and use it to encode uncertainty in data. Specifically, we compute a 1-center tree that minimizes its maximum distance to any other tree in the set under a well-defined metric called the interleaving distance. We provide heuristic strategies that compute structural averages of merge trees whose labels do not fully agree. We further provide an interactive visualization system that resembles a numerical calculator that takes as input a set of merge trees and outputs a tree as their structural average. We also highlight structural similarities between the input and the average and incorporate uncertainty information for visual exploration. We develop a novel measure of uncertainty, referred to as consistency, via a metric-space view of the input trees. Finally, we demonstrate an application of our framework through merge trees that arise from ensembles of scalar fields. Our work is the first to employ interleaving distances and consistency to study a global, mathematically rigorous, structural average of merge trees in the context of uncertainty visualization.
Accessible: false
Early: false
AuthorNames-Deduped: [ "Lin Yan", "Yusu Wang 0001", "Elizabeth Munch", "Ellen Gasparovic", "Bei Wang 0001" ]
Award: []
Resources: [ "P" ]
ResourceLinks: [ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/1908.00113v2", "icon": "paper" } ]
Conference: SciVis
Year: 2019
Title: Accelerated Monte Carlo Rendering of Finite-Time Lyapunov Exponents
DOI: 10.1109/TVCG.2019.2934313
Time-dependent fluid flows often contain numerous hyperbolic Lagrangian coherent structures, which act as transport barriers that guide the advection. The finite-time Lyapunov exponent (FTLE) is a commonly used approximation to locate these repelling or attracting structures. Especially on large numerical simulations, the FTLE ridges can become arbitrarily sharp and very complex. Thus, the discrete sampling onto a grid for a subsequent direct volume rendering is likely to miss sharp ridges in the visualization. For this reason, an unbiased Monte Carlo-based rendering approach was recently proposed that treats the FTLE field as a participating medium with single scattering. This method constructs a ground truth rendering without discretization, but it is prohibitively slow with render times on the order of days or weeks for a single image. In this paper, we accelerate the rendering process significantly, which allows us to compute video sequences of high-resolution FTLE animations in a much more reasonable time frame. For this, we follow two orthogonal approaches to improve on the rendering process: the volumetric light path integration in gradient domain and an acceleration of the transmittance estimation. We analyze the convergence and performance of the proposed method and demonstrate the approach by rendering complex FTLE fields in several 3D vector fields.
false
false
[ "Irene Baeza Rojo", "Markus H. Gross", "Tobias Günther" ]
[]
[]
[]
SciVis
2,019
Analysis of the Near-Wall Flow in a Turbine Cascade by Splat Visualization
10.1109/TVCG.2019.2934367
Turbines are essential components of jet planes and power plants. Therefore, their efficiency and service life are of central engineering interest. In the case of jet planes or thermal power plants, the heating of the turbines due to the hot gas flow is critical. Besides effective cooling, it is a major goal of engineers to minimize heat transfer between gas flow and turbine by design. Since it is known that splat events have a substantial impact on the heat transfer between flow and immersed surfaces, we adapt a splat detection and visualization method to a turbine cascade simulation in this case study. Because splat events are small phenomena, we use a direct numerical simulation resolving the turbulence in the flow as the base of our analysis. The outcome shows promising insights into splat formation and its relation to vortex structures. This may lead to better turbine design in the future.
false
false
[ "Baldwin Nsonga", "Gerik Scheuermann", "Stefan Gumhold", "Jordi Ventosa-Molina", "Denis Koschichow", "Jochen Fröhlich" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/1907.09904v1", "icon": "paper" } ]
SciVis
2,019
Artifact-Based Rendering: Harnessing Natural and Traditional Visual Media for More Expressive and Engaging 3D Visualizations
10.1109/TVCG.2019.2934260
We introduce Artifact-Based Rendering (ABR), a framework of tools, algorithms, and processes that makes it possible to produce real, data-driven 3D scientific visualizations with a visual language derived entirely from colors, lines, textures, and forms created using traditional physical media or found in nature. A theory and process for ABR is presented to address three current needs: (i) designing better visualizations by making it possible for non-programmers to rapidly design and critique many alternative data-to-visual mappings; (ii) expanding the visual vocabulary used in scientific visualizations to depict increasingly complex multivariate data; (iii) bringing a more engaging, natural, and human-relatable handcrafted aesthetic to data visualization. New tools and algorithms to support ABR include front-end applets for constructing artifact-based colormaps, optimizing 3D scanned meshes for use in data visualization, and synthesizing textures from artifacts. These are complemented by an interactive rendering engine with custom algorithms and interfaces that demonstrate multiple new visual styles for depicting point, line, surface, and volume data. A within-the-research-team design study provides early evidence of the shift in visualization design processes that ABR is believed to enable when compared to traditional scientific visualization systems. Qualitative user feedback on applications to climate science and brain imaging support the utility of ABR for scientific discovery and public communication.
false
false
[ "Seth Johnson", "Francesca Samsel", "Greg Abram", "Daniel Olson", "Andrew J. Solis", "Bridger Herman", "Phillip J. Wolfram", "Christophe Lenglet", "Daniel F. Keefe" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/1907.13178v2", "icon": "paper" } ]
SciVis
2,019
Cohort-based T-SSIM Visual Computing for Radiation Therapy Prediction and Exploration
10.1109/TVCG.2019.2934546
We describe a visual computing approach to radiation therapy (RT) planning, based on spatial similarity within a patient cohort. In radiotherapy for head and neck cancer treatment, dosage to organs at risk surrounding a tumor is a large cause of treatment toxicity. Along with the availability of patient repositories, this situation has led to clinician interest in understanding and predicting RT outcomes based on previously treated similar patients. To enable this type of analysis, we introduce a novel topology-based spatial similarity measure, T-SSIM, and a predictive algorithm based on this similarity measure. We couple the algorithm with a visual steering interface that intertwines visual encodings for the spatial data and statistical results, including a novel parallel-marker encoding that is spatially aware. We report quantitative results on a cohort of 165 patients, as well as a qualitative evaluation with domain experts in radiation oncology, data management, biostatistics, and medical imaging, who are collaborating remotely.
false
false
[ "Andrew Wentzel", "Peter Hanula", "Timothy Luciani", "Baher Elgohari", "Hesham Elhalawani", "Guadalupe Canahuate", "David M. Vock", "Clifton D. Fuller", "G. Elisabeta Marai" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/1907.05919v2", "icon": "paper" } ]
SciVis
2,019
Deadeye Visualization Revisited: Investigation of Preattentiveness and Applicability in Virtual Environments
10.1109/TVCG.2019.2934370
Visualizations rely on highlighting to attract and guide our attention. To make an object of interest stand out independently from a number of distractors, the underlying visual cue, e.g., color, has to be preattentive. In our prior work, we introduced Deadeye as an instantly recognizable highlighting technique that works by rendering the target object for one eye only. In contrast to prior approaches, Deadeye excels by not modifying any visual properties of the target. However, in the case of 2D visualizations, the method requires an additional setup to allow dichoptic presentation, which is a considerable drawback. As a follow-up to requests from the community, this paper explores Deadeye as a highlighting technique for 3D visualizations, because such stereoscopic scenarios support dichoptic presentation out of the box. Deadeye suppresses binocular disparities for the target object, so we cannot assume the applicability of our technique as a given fact. With this motivation, the paper presents quantitative evaluations of Deadeye in VR, including configurations with multiple heterogeneous distractors as an important robustness challenge. After confirming the preserved preattentiveness (all average accuracies above 90%) under such real-world conditions, we explore VR volume rendering as an example application scenario for Deadeye. We depict a possible workflow for integrating our technique, conduct an exploratory survey to demonstrate benefits and limitations, and finally provide related design implications.
false
false
[ "Andrey Krekhov", "Sebastian Cmentowski", "Andre Waschk", "Jens H. Krüger" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/1907.04702v1", "icon": "paper" } ]
SciVis
2,019
DeepOrganNet: On-the-Fly Reconstruction and Visualization of 3D / 4D Lung Models from Single-View Projections by Deep Deformation Network
10.1109/TVCG.2019.2934369
This paper introduces a deep neural network based method, i.e., DeepOrganNet, to generate and visualize fully high-fidelity 3D / 4D organ geometric models from single-view medical images with complicated backgrounds in real time. Traditional 3D / 4D medical image reconstruction requires near hundreds of projections, which incur insufferable computational time and deliver an undesirably high imaging / radiation dose to human subjects. Moreover, it always needs further laborious processes to segment or extract the accurate 3D organ models subsequently. The computational time and imaging dose can be reduced by decreasing the number of projections, but the reconstructed image quality is degraded accordingly. To our knowledge, there is no method that directly and explicitly reconstructs multiple 3D organ meshes from a single 2D medical grayscale image on the fly. Given single-view 2D medical images, e.g., 3D / 4D-CT projections or X-ray images, our end-to-end DeepOrganNet framework can efficiently and effectively reconstruct 3D / 4D lung models with a variety of geometric shapes by learning smooth deformation fields from multiple templates based on a trivariate tensor-product deformation technique, leveraging an informative latent descriptor extracted from the input 2D images. The proposed method is guaranteed to generate high-quality and high-fidelity manifold meshes for 3D / 4D lung models, whereas all current deep learning based approaches to shape reconstruction from a single image cannot. The major contributions of this work are to accurately reconstruct 3D organ shapes from a 2D single-view projection, significantly improve the procedure time to allow on-the-fly visualization, and dramatically reduce the imaging dose for human subjects. Experimental results are evaluated and compared with the traditional reconstruction method and the state-of-the-art in deep learning, using extensive 3D and 4D examples, including both synthetic phantom and real patient datasets. The efficiency of the proposed method shows that it only needs several milliseconds to generate organ meshes with 10K vertices, which has great potential to be used in real-time image-guided radiation therapy (IGRT).
false
false
[ "Yifan Wang", "Zichun Zhong", "Jing Hua 0001" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/1907.09375v1", "icon": "paper" } ]
SciVis
2,019
Dynamic Nested Tracking Graphs
10.1109/TVCG.2019.2934368
This work describes an approach for the interactive visual analysis of large-scale simulations, where numerous superlevel set components and their evolution are of primary interest. The approach first derives, at simulation runtime, a specialized Cinema database that consists of images of component groups, and topological abstractions. This database is processed by a novel graph operation-based nested tracking graph algorithm (GO-NTG) that dynamically computes NTGs for component groups based on size, overlap, persistence, and level thresholds. The resulting NTGs are in turn used in a feature-centered visual analytics framework to query specific database elements and update feature parameters, facilitating flexible post hoc analysis.
false
false
[ "Jonas Lukasczyk", "Christoph Garth", "Gunther H. Weber", "Tim Biedert", "Ross Maciejewski", "Heike Leitte" ]
[]
[]
[]
SciVis
2,019
Extraction and Visual Analysis of Potential Vorticity Banners around the Alps
10.1109/TVCG.2019.2934310
Potential vorticity is among the most important scalar quantities in atmospheric dynamics. For instance, potential vorticity plays a key role in particularly strong wind peaks in extratropical cyclones and it is able to explain the occurrence of frontal rain bands. Potential vorticity combines the key quantities of atmospheric dynamics, namely rotation and stratification. Under suitable wind conditions, elongated banners of potential vorticity appear in the lee of mountains. Their role in atmospheric dynamics has recently raised considerable interest in the meteorological community, for instance due to their influence on aviation wind hazards and maritime transport. In order to support meteorologists and climatologists in the analysis of these structures, we developed an extraction algorithm and a visual exploration framework consisting of multiple linked views. For the extraction, we apply a predictor-corrector algorithm that follows streamlines and realigns them with extremal lines of potential vorticity. Using agglomerative hierarchical clustering, we group banners from different sources based on their proximity. To visually analyze the time-dependent banner geometry, we provide interactive overviews and enable querying for detail on demand, including the analysis of different time steps, potentially correlated scalar quantities, and the wind vector field. In particular, we study the relationship between relative humidity and the banners for their potential in indicating the development of precipitation. Working with our method, the collaborating meteorologists gained a deeper understanding of the three-dimensional processes, which may spur follow-up research in the future.
false
false
[ "Robin Bader", "Michael Sprenger", "Nikolina Ban", "Stefan Rüdisühli", "Christoph Schär", "Tobias Günther" ]
[]
[]
[]
SciVis
2,019
High-throughput feature extraction for measuring attributes of deforming open-cell foams
10.1109/TVCG.2019.2934620
Metallic open-cell foams are promising structural materials with applications in multifunctional systems such as biomedical implants, energy absorbers in impact, noise mitigation, and batteries. There is a high demand for means to understand and correlate the design space of material performance metrics to the material structure in terms of attributes such as density, ligament and node properties, void sizes, and alignments. Currently, X-ray Computed Tomography (CT) scans of these materials are segmented either manually or with skeletonization approaches that may not accurately model the variety of shapes present in nodes and ligaments, especially irregularities that arise from manufacturing, image artifacts, or deterioration due to compression. In this paper, we present a new workflow for analysis of open-cell foams that combines a new density measurement to identify nodal structures, and topological approaches to identify ligament structures between them. Additionally, we provide automated measurement of foam properties. We demonstrate stable extraction of features and time-tracking in an image sequence of a foam being compressed. Our approach allows researchers to study larger and more complex foams than could previously be segmented only manually, and enables the high-throughput analysis needed to predict future foam performance.
false
false
[ "Steve Petruzza", "Attila Gyulassy", "Samuel Leventhal", "John J. Baglino", "Michael Czabaj", "Ashley D. Spear", "Valerio Pascucci" ]
[]
[]
[]
SciVis
2,019
InSituNet: Deep Image Synthesis for Parameter Space Exploration of Ensemble Simulations
10.1109/TVCG.2019.2934312
We propose InSituNet, a deep learning based surrogate model to support parameter space exploration for ensemble simulations that are visualized in situ. In situ visualization, generating visualizations at simulation time, is becoming prevalent in handling large-scale simulations because of the I/O and storage constraints. However, in situ visualization approaches limit the flexibility of post-hoc exploration because the raw simulation data are no longer available. Although multiple image-based approaches have been proposed to mitigate this limitation, those approaches lack the ability to explore the simulation parameters. Our approach allows flexible exploration of parameter space for large-scale ensemble simulations by taking advantage of the recent advances in deep learning. Specifically, we design InSituNet as a convolutional regression model to learn the mapping from the simulation and visualization parameters to the visualization results. With the trained model, users can generate new images for different simulation parameters under various visualization settings, which enables in-depth analysis of the underlying ensemble simulations. We demonstrate the effectiveness of InSituNet in combustion, cosmology, and ocean simulations through quantitative and qualitative evaluations.
false
false
[ "Wenbin He", "Junpeng Wang", "Hanqi Guo 0001", "Ko-Chih Wang", "Han-Wei Shen", "Mukund Raj", "Youssef S. G. Nashed", "Tom Peterka" ]
[ "BP" ]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/1908.00407v3", "icon": "paper" } ]
SciVis
2,019
LassoNet: Deep Lasso-Selection of 3D Point Clouds
10.1109/TVCG.2019.2934332
Selection is a fundamental task in exploratory analysis and visualization of 3D point clouds. Prior research on selection methods was based mainly on heuristics such as local point density, thus limiting their applicability to general data. Specific challenges arise from the great variability of point clouds (e.g., dense vs. sparse), viewpoints (e.g., occluded vs. non-occluded), and lassos (e.g., small vs. large). In this work, we introduce LassoNet, a new deep neural network for lasso selection of 3D point clouds, attempting to learn a latent mapping from viewpoint and lasso to point cloud regions. To achieve this, we couple user-target points with viewpoint and lasso information through 3D coordinate transforms and naive selection, and improve the method's scalability via intention filtering and farthest point sampling. A hierarchical network is trained using a dataset with over 30K lasso-selection records on two different point cloud datasets. We conduct a formal user study to compare LassoNet with two state-of-the-art lasso-selection methods. The evaluations confirm that our approach improves selection effectiveness and efficiency across different combinations of 3D point clouds, viewpoints, and lasso selections. Project Website: https://LassoNet.github.io
false
false
[ "Zhutian Chen", "Wei Zeng 0004", "Zhiguang Yang", "Lingyun Yu 0005", "Chi-Wing Fu", "Huamin Qu" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/1907.13538v3", "icon": "paper" } ]
SciVis
2,019
Multi-Scale Procedural Animations of Microtubule Dynamics Based on Measured Data
10.1109/TVCG.2019.2934612
Biologists often use computer graphics to visualize structures which, due to physical limitations, are not possible to image with a microscope. One example of such structures are microtubules, which are present in every eukaryotic cell. They are part of the cytoskeleton, maintaining the shape of the cell and playing a key role in cell division. In this paper, we propose a scientifically-accurate multi-scale procedural model of microtubule dynamics as a novel application scenario for procedural animation, which can generate visualizations of their overall shape and molecular structure, as well as animations of the dynamic behaviour of their growth and disassembly. The model spans from tens of micrometers down to atomic resolution. All aspects of the model are driven by scientific data. The advantage over a traditional, manual animation approach is that when the underlying data change, for instance due to new evidence, the model can be recreated immediately. The procedural animation concept is presented in its generic form, with several novel extensions, facilitating an easy translation to other domains with emergent multi-scale behavior.
false
false
[ "Tobias Klein", "Ivan Viola", "M. Eduard Gröller", "Peter Mindek" ]
[]
[]
[]
SciVis
2,019
Multi-Scale Topological Analysis of Asymmetric Tensor Fields on Surfaces
10.1109/TVCG.2019.2934314
Asymmetric tensor fields have found applications in many science and engineering domains, such as fluid dynamics. Recent advances in the visualization and analysis of 2D asymmetric tensor fields focus on pointwise analysis of the tensor field and effective visualization metaphors such as colors, glyphs, and hyperstreamlines. In this paper, we provide a novel multi-scale topological analysis framework for asymmetric tensor fields on surfaces. Our multi-scale framework is based on the notions of eigenvalue and eigenvector graphs. At the core of our framework are the identification of atomic operations that modify the graphs and the scale definition that guides the order in which the graphs are simplified to enable clarity and focus for the visualization of topological analysis on data of different sizes. We also provide efficient algorithms to realize these operations. Furthermore, we provide physical interpretation of these graphs. To demonstrate the utility of our system, we apply our multi-scale analysis to data in computational fluid dynamics.
false
false
[ "Fariba Khan", "Lawrence Roy", "Eugene Zhang", "Botong Qu", "Shih-Hsuan Hung", "Harry Yeh", "Robert S. Laramee", "Yue Zhang 0009" ]
[]
[]
[]
SciVis
2,019
Multiscale Visual Drilldown for the Analysis of Large Ensembles of Multi-Body Protein Complexes
10.1109/TVCG.2019.2934333
When studying multi-body protein complexes, biochemists use computational tools that can suggest hundreds or thousands of their possible spatial configurations. However, it is not feasible to experimentally verify more than only a very small subset of them. In this paper, we propose a novel multiscale visual drilldown approach that was designed in tight collaboration with proteomic experts, enabling a systematic exploration of the configuration space. Our approach takes advantage of the hierarchical structure of the data – from the whole ensemble of protein complex configurations to the individual configurations, their contact interfaces, and the interacting amino acids. Our new solution is based on interactively linked 2D and 3D views for individual hierarchy levels. At each level, we offer a set of selection and filtering operations that enable the user to narrow down the number of configurations that need to be manually scrutinized. Furthermore, we offer a dedicated filter interface, which provides the users with an overview of the applied filtering operations and enables them to examine their impact on the explored ensemble. This way, we maintain the history of the exploration process and thus enable the user to return to an earlier point of the exploration. We demonstrate the effectiveness of our approach on two case studies conducted by collaborating proteomic experts.
false
false
[ "Katarína Furmanová", "Adam Jurcík", "Barbora Kozlíková", "Helwig Hauser", "Jan Byska" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/1907.04112v1", "icon": "paper" } ]
SciVis
2,019
OpenSpace: A System for Astrographics
10.1109/TVCG.2019.2934259
Human knowledge about the cosmos is rapidly increasing as instruments and simulations generate new data supporting the formation of theory and the understanding of the vastness and complexity of the universe. OpenSpace is a software system that takes on the mission of providing an integrated view of all these sources of data and supports interactive exploration of the known universe, from the millimeter scale showing instruments on spacecraft to billions of light years when visualizing the early universe. The ambition is to support research in astronomy and space exploration, science communication at museums and in planetariums, as well as bringing exploratory astrographics to the classroom. There is a multitude of challenges that need to be met in reaching this goal, such as the data variety, multiple spatio-temporal scales, collaboration capabilities, etc. Furthermore, the system has to be flexible and modular to enable rapid prototyping and inclusion of new research results or space mission data, and thereby shorten the time from discovery to dissemination. To support the different use cases, the system has to be hardware agnostic and support a range of platforms and interaction paradigms. In this paper we describe how OpenSpace meets these challenges in an open source effort that is paving the path for the next generation of interactive astrographics.
false
false
[ "Alexander Bock 0002", "Anders Ynnerman", "Emil Axelsson", "Jonathas Costa", "Gene Payne", "Micah Acinapura", "Vivian Trakinski", "Carter Emmart", "Cláudio T. Silva", "Charles D. Hansen" ]
[]
[]
[]
SciVis
2,019
Progressive Wasserstein Barycenters of Persistence Diagrams
10.1109/TVCG.2019.2934256
This paper presents an efficient algorithm for the progressive approximation of Wasserstein barycenters of persistence diagrams, with applications to the visual analysis of ensemble data. Given a set of scalar fields, our approach enables the computation of a persistence diagram which is representative of the set, and which visually conveys the number, data ranges and saliences of the main features of interest found in the set. Such representative diagrams are obtained by computing explicitly the discrete Wasserstein barycenter of the set of persistence diagrams, a notoriously computationally intensive task. In particular, we revisit efficient algorithms for Wasserstein distance approximation [12], [51] to extend previous work on barycenter estimation [94]. We present a new fast algorithm, which progressively approximates the barycenter by iteratively increasing the computation accuracy as well as the number of persistent features in the output diagram. Such progressivity drastically improves convergence in practice and allows the design of an interruptible algorithm, capable of respecting computation time constraints. This enables the approximation of Wasserstein barycenters within interactive times. We present an application to ensemble clustering where we revisit the k-means algorithm to exploit our barycenters and compute, within execution time constraints, meaningful clusters of ensemble data along with their barycenter diagram. Extensive experiments on synthetic and real-life data sets report that our algorithm converges to barycenters that are qualitatively meaningful with regard to the applications, and quantitatively comparable to previous techniques, while offering an order of magnitude speedup when run until convergence (without time constraint). Our algorithm can be trivially parallelized to provide additional speedups in practice on standard workstations. We provide a lightweight C++ implementation of our approach that can be used to reproduce our results.
false
false
[ "Jules Vidal", "Joseph Budin", "Julien Tierny" ]
[ "HM" ]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/1907.04565v2", "icon": "paper" } ]
SciVis
2,019
Scale Trotter: Illustrative Visual Travels Across Negative Scales
10.1109/TVCG.2019.2934334
We present ScaleTrotter, a conceptual framework for an interactive, multi-scale visualization of biological mesoscale data and, specifically, genome data. ScaleTrotter allows viewers to smoothly transition from the nucleus of a cell to the atomistic composition of the DNA, while bridging several orders of magnitude in scale. The challenges in creating an interactive visualization of genome data are fundamentally different in several ways from those in other domains like astronomy that require a multi-scale representation as well. First, genome data has intertwined scale levels—the DNA is an extremely long, connected molecule that manifests itself at all scale levels. Second, elements of the DNA do not disappear as one zooms out—instead the scale levels at which they are observed group these elements differently. Third, we have detailed information and thus geometry for the entire dataset and for all scale levels, posing a challenge for interactive visual exploration. Finally, the conceptual scale levels for genome data are close in scale space, requiring us to find ways to visually embed a smaller scale into a coarser one. We address these challenges by creating a new multi-scale visualization concept. We use a scale-dependent camera model that controls the visual embedding of the scales into their respective parents, the rendering of a subset of the scale hierarchy, and the location, size, and scope of the view. In traversing the scales, ScaleTrotter is roaming between 2D and 3D visual representations that are depicted in integrated visuals. We discuss, specifically, how this form of multi-scale visualization follows from the specific characteristics of the genome data and describe its implementation. Finally, we discuss the implications of our work to the general illustrative depiction of multi-scale data.
false
false
[ "Sarkis Halladjian", "Haichao Miao", "David Kouril", "M. Eduard Gröller", "Ivan Viola", "Tobias Isenberg 0001" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/1907.12352v1", "icon": "paper" } ]
SciVis
2,019
Scale-Space Splatting: Reforming Spacetime for Cross-Scale Exploration of Integral Measures in Molecular Dynamics
10.1109/TVCG.2019.2934258
Understanding large amounts of spatiotemporal data from particle-based simulations, such as molecular dynamics, often relies on the computation and analysis of aggregate measures. These, however, by virtue of aggregation, hide structural information about the space/time localization of the studied phenomena. This leads to degenerate cases where the measures fail to capture distinct behaviour. In order to drill into these aggregate values, we propose a multi-scale visual exploration technique. Our novel representation, based on partial domain aggregation, enables the construction of a continuous scale-space for discrete datasets and the simultaneous exploration of scales in both space and time. We link these two scale-spaces in a scale-space space-time cube and model linked views as orthogonal slices through this cube, thus enabling the rapid identification of spatio-temporal patterns at multiple scales. To demonstrate the effectiveness of our approach, we showcase an advanced exploration of a protein-ligand simulation.
false
false
[ "Juraj Pálenik", "Jan Byska", "Stefan Bruckner", "Helwig Hauser" ]
[]
[]
[]
SciVis
2,019
Temporal Views of Flattened Mitral Valve Geometries
10.1109/TVCG.2019.2934337
The mitral valve, one of the four valves in the human heart, controls the blood flow between the left atrium and ventricle and may suffer from various pathologies. Malfunctioning valves can be treated by reconstructive surgeries, which have to be carefully planned and evaluated. While current research focuses on the modeling and segmentation of the valve, we base our work on existing segmentations of patient-specific mitral valves that are also time-resolved (3D+t) over the cardiac cycle. The interpretation of the data can be ambiguous, due to the complex surface of the valve and multiple time steps. We therefore propose a software prototype to analyze such 3D+t data by extracting pathophysiological parameters and presenting them via dimensionally reduced visualizations. For this, we rely on an existing algorithm to unroll the convoluted valve surface into a flattened 2D representation. In this paper, we show that the 3D+t data can be transferred to 3D or 2D representations in a way that allows the domain expert to faithfully grasp important aspects of the cardiac cycle. In this course, we not only consider common pathophysiological parameters, but also introduce new observations that are derived from landmarks within the segmentation model. Our analysis techniques were developed in collaboration with domain experts, and a survey showed that the insights have the potential to support mitral valve diagnosis and the comparison of the pre- and post-operative condition of a patient.
false
false
[ "Pepe Eulzer", "Sandy Engelhardt", "Nils Lichtenberg", "Raffaele De Simone", "Kai Lawonn" ]
[]
[]
[]
SciVis
2,019
The Effect of Data Transformations on Scalar Field Topological Analysis of High-Order FEM Solutions
10.1109/TVCG.2019.2934338
High-order finite element methods (HO-FEM) are gaining popularity in the simulation community due to their success in solving complex flow dynamics. There is an increasing need to analyze the data produced as output by these simulations. Simultaneously, topological analysis tools are emerging as powerful methods for investigating simulation data. However, most of the current approaches to topological analysis have had limited application to HO-FEM simulation data for two reasons. First, the current topological tools are designed for linear data (polynomial degree one), but the polynomial degree of the data output by these simulations is typically higher (routinely up to polynomial degree six). Second, the simulation data and derived quantities of the simulation data have discontinuities at element boundaries, and these discontinuities do not match the input requirements for the topological tools. One solution to both issues is to transform the high-order data to achieve low-order, continuous inputs for topological analysis. Nevertheless, there has been little work evaluating the possible transformation choices and their downstream effect on the topological analysis. We perform an empirical study to evaluate two commonly used data transformation methodologies along with the recently introduced L-SIAC filter for processing high-order simulation data. Our results show diverse behaviors are possible. We offer some guidance about how best to consider a pipeline of topological analysis of HO-FEM simulations with the currently available implementations of topological analysis.
false
false
[ "Ashok Jallepalli", "Joshua A. Levine", "Robert M. Kirby" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/1907.07224v1", "icon": "paper" } ]
SciVis
2019
Toward Localized Topological Data Structures: Querying the Forest for the Tree
10.1109/TVCG.2019.2934257
Topological approaches to data analysis can answer complex questions about the number, connectivity, and scale of intrinsic features in scalar data. However, the global nature of many topological structures makes their computation challenging at scale, and thus often limits the size of data that can be processed. One key quality to achieving scalability and performance on modern architectures is data locality, i.e., a process operates on data that resides in a nearby memory system, avoiding frequent jumps in data access patterns. From this perspective, topological computations are particularly challenging because the implied data structures represent features that can span the entire data set, often requiring a global traversal phase that limits their scalability. Traditionally, expensive preprocessing is considered an acceptable trade-off as it accelerates all subsequent queries. Most published use cases, however, explore only a fraction of all possible queries, most often those returning small, local features. In these cases, much of the global information is not utilized, yet computing it dominates the overall response time. We address this challenge for merge trees, one of the most commonly used topological structures. In particular, we propose an alternative representation, the merge forest, a collection of local trees corresponding to regions in a domain decomposition. Local trees are connected by a bridge set that allows us to recover any necessary global information at query time. The resulting system couples (i) a preprocessing that scales linearly in practice with (ii) fast runtime queries that provide the same functionality as traditional queries of a global merge tree. We test the scalability of our approach on a shared-memory parallel computer and demonstrate how data structure locality enables the analysis of large data with an order of magnitude performance improvement over the status quo. Furthermore, a merge forest reduces the memory overhead compared to a global merge tree and enables the processing of data sets that are an order of magnitude larger than possible with previous algorithms.
false
false
[ "Pavol Klacansky", "Attila Gyulassy", "Peer-Timo Bremer", "Valerio Pascucci" ]
[ "HM" ]
[]
[]
SciVis
2019
TSR-TVD: Temporal Super-Resolution for Time-Varying Data Analysis and Visualization
10.1109/TVCG.2019.2934255
We present TSR-TVD, a novel deep learning framework that generates temporal super-resolution (TSR) of time-varying data (TVD) using adversarial learning. TSR-TVD is the first work that applies the recurrent generative network (RGN), a combination of the recurrent neural network (RNN) and generative adversarial network (GAN), to generate temporal high-resolution volume sequences from low-resolution ones. The design of TSR-TVD includes a generator and a discriminator. The generator takes a pair of volumes as input and outputs the synthesized intermediate volume sequence through forward and backward predictions. The discriminator takes the synthesized intermediate volumes as input and produces a score indicating the realness of the volumes. Our method also handles multivariate data, where the network trained on one variable is applied to generate TSR for another variable. To demonstrate the effectiveness of TSR-TVD, we show quantitative and qualitative results with several time-varying multivariate data sets and compare our method against standard linear interpolation and solutions solely based on RNN or CNN.
false
false
[ "Jun Han 0010", "Chaoli Wang 0001" ]
[]
[]
[]
SciVis
2019
Vector Field Topology of Time-Dependent Flows in a Steady Reference Frame
10.1109/TVCG.2019.2934375
The topological analysis of unsteady vector fields remains to this day one of the largest challenges in flow visualization. We build up on recent work on vortex extraction to define a time-dependent vector field topology for 2D and 3D flows. In our work, we split the vector field into two components: a vector field in which the flow becomes steady, and the remaining ambient flow that describes the motion of topological elements (such as sinks, sources and saddles) and feature curves (vortex corelines and bifurcation lines). To this end, we expand on recent local optimization approaches by modeling spatially-varying deformations through displacement transformations from continuum mechanics. We compare and discuss the relationships with existing local and integration-based topology extraction methods, showing for instance that separatrices seeded from saddles in the optimal frame align with the integration-based streakline vector field topology. In contrast to the streakline-based approach, our method gives a complete picture of the topology for every time slice, including the steps near the temporal domain boundaries. With our work it now becomes possible to extract topological information even when only few time slices are available. We demonstrate the method in several analytical and numerically-simulated flows and discuss practical aspects, limitations and opportunities for future work.
false
false
[ "Irene Baeza Rojo", "Tobias Günther" ]
[]
[]
[]
SciVis
2019
Void-and-Cluster Sampling of Large Scattered Data and Trajectories
10.1109/TVCG.2019.2934335
We propose a data reduction technique for scattered data based on statistical sampling. Our void-and-cluster sampling technique finds a representative subset that is optimally distributed in the spatial domain with respect to the blue noise property. In addition, it can adapt to a given density function, which we use to sample regions of high complexity in the multivariate value domain more densely. Moreover, our sampling technique implicitly defines an ordering on the samples that enables progressive data loading and a continuous level-of-detail representation. We extend our technique to sample time-dependent trajectories, for example pathlines in a time interval, using an efficient and iterative approach. Furthermore, we introduce a local and continuous error measure to quantify how well a set of samples represents the original dataset. We apply this error measure during sampling to guide the number of samples that are taken. Finally, we use this error measure and other quantities to evaluate the quality, performance, and scalability of our algorithm.
false
false
[ "Tobias Rapp", "Christoph Peters 0002", "Carsten Dachsbacher" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/1907.05073v2", "icon": "paper" } ]
InfoVis
2019
A Comparative Evaluation of Animation and Small Multiples for Trend Visualization on Mobile Phones
10.1109/TVCG.2019.2934397
We compare the efficacy of animated and small multiples variants of scatterplots on mobile phones for comparing trends in multivariate datasets. Visualization is increasingly prevalent in mobile applications and mobile-first websites, yet there is little prior visualization research dedicated to small displays. In this paper, we build upon previous experimental research carried out on larger displays that assessed animated and non-animated variants of scatterplots. Incorporating similar experimental stimuli and tasks, we conducted an experiment where 96 crowdworker participants performed nine trend comparison tasks using their mobile phones. We found that those using a small multiples design consistently completed tasks in less time, albeit with slightly less confidence than those using an animated design. The accuracy results were more task-dependent, and we further interpret our results according to the characteristics of the individual tasks, with a specific focus on the trajectories of target and distractor data items in each task. We identify cases that appear to favor either animation or small multiples, providing new questions for further experimental research and implications for visualization design on mobile devices. Lastly, we provide a reflection on our evaluation methodology.
false
false
[ "Matthew Brehmer", "Bongshin Lee", "Petra Isenberg", "Eun Kyoung Choe" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/1907.03919v2", "icon": "paper" } ]
InfoVis
2019
A Comparison of Radial and Linear Charts for Visualizing Daily Patterns
10.1109/TVCG.2019.2934784
Radial charts are generally considered less effective than linear charts. Perhaps the only exception is in visualizing periodical time-dependent data, which is believed to be naturally supported by the radial layout. It has been demonstrated that the drawbacks of radial charts outweigh the benefits of this natural mapping. Visualization of daily patterns, as a special case, has not been systematically evaluated using radial charts. In contrast to yearly or weekly recurrent trends, the analysis of daily patterns on a radial chart may benefit from our trained skill in reading radial clocks that are ubiquitous in our culture. In a crowd-sourced experiment with 92 non-expert users, we evaluated the accuracy, efficiency, and subjective ratings of radial and linear charts for visualizing daily traffic accident patterns. We systematically compared juxtaposed 12-hour variants and single 24-hour variants for both layouts in four low-level tasks and one high-level interpretation task. Our results show that over all tasks, the most elementary 24-hour linear bar chart is most accurate and efficient and is also preferred by the users. This provides strong evidence for the use of linear layouts – even for visualizing periodical daily patterns.
false
false
[ "Manuela Waldner", "Alexandra Diehl", "Denis Gracanin", "Rainer Splechtna", "Claudio Delrieux", "Kresimir Matkovic" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/1907.13534v1", "icon": "paper" } ]
InfoVis
2019
A Comparison of Visualizations for Identifying Correlation over Space and Time
10.1109/TVCG.2019.2934807
Observing the relationship between two or more variables over space and time is essential in many domains. For instance, looking at the evolution of both life expectancy at birth and fertility rate across different countries gives an overview of their demographics. The choice of visual representation for such multivariate data is key to enabling analysts to extract patterns and trends. Prior work has compared geo-temporal visualization techniques for a single thematic variable that evolves over space and time, or for two variables at a specific point in time. But how effective visualization techniques are at communicating correlation between two variables that evolve over space and time remains to be investigated. We report on a study comparing three techniques that are representative of different strategies to visualize geo-temporal multivariate data: either juxtaposing all locations for a given time step, or juxtaposing all time steps for a given location; and encoding thematic attributes either using symbols overlaid on top of map features, or using visual channels of the map features themselves. Participants performed a series of tasks that required them to identify if two variables were correlated over time and if there was a pattern in their evolution. Tasks varied in granularity for both dimensions: time (all time steps, a subrange of steps, one step only) and space (all locations, locations in a subregion, one location only). Our results show that a visualization's effectiveness depends strongly on the task to be carried out. Based on these findings we present a set of design guidelines about geo-temporal visualization techniques for communicating correlation.
false
false
[ "Vanessa Peña Araya", "Emmanuel Pietriga", "Anastasia Bezerianos" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/1907.06399v2", "icon": "paper" } ]
InfoVis
2019
A Deep Generative Model for Graph Layout
10.1109/TVCG.2019.2934396
Different layouts can characterize different aspects of the same graph. Finding a “good” layout of a graph is thus an important task for graph visualization. In practice, users often visualize a graph in multiple layouts by using different methods and varying parameter settings until they find a layout that best suits the purpose of the visualization. However, this trial-and-error process is often haphazard and time-consuming. To provide users with an intuitive way to navigate the layout design space, we present a technique to systematically visualize a graph in diverse layouts using deep generative models. We design an encoder-decoder architecture to learn a model from a collection of example layouts, where the encoder represents training examples in a latent space and the decoder produces layouts from the latent space. In particular, we train the model to construct a two-dimensional latent space for users to easily explore and generate various layouts. We demonstrate our approach through quantitative and qualitative evaluations of the generated layouts. The results of our evaluations show that our model is capable of learning and generalizing abstract concepts of graph layouts, not just memorizing the training examples. In summary, this paper presents a fundamentally new approach to graph visualization where a machine learning model learns to visualize a graph from examples without manually-defined heuristics.
false
false
[ "Oh-Hyun Kwon", "Kwan-Liu Ma" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/1904.12225v7", "icon": "paper" } ]
InfoVis
2019
A Recursive Subdivision Technique for Sampling Multi-class Scatterplots
10.1109/TVCG.2019.2934541
We present a non-uniform recursive sampling technique for multi-class scatterplots, with the specific goal of faithfully presenting relative data and class densities, while preserving major outliers in the plots. Our technique is based on a customized binary kd-tree, in which leaf nodes are created by recursively subdividing the underlying multi-class density map. By backtracking, we merge leaf nodes until they encompass points of all classes for our subsequently applied outlier-aware multi-class sampling strategy. A quantitative evaluation shows that our approach can better preserve outliers and at the same time relative densities in multi-class scatterplots compared to previous approaches. Several case studies demonstrate the effectiveness of our approach in exploring complex and real world data.
false
false
[ "Xin Chen", "Tong Ge", "Jian Zhang 0070", "Baoquan Chen", "Chi-Wing Fu", "Oliver Deussen", "Yunhai Wang" ]
[]
[]
[]
InfoVis
2019
An Incremental Dimensionality Reduction Method for Visualizing Streaming Multidimensional Data
10.1109/TVCG.2019.2934433
Dimensionality reduction (DR) methods are commonly used for analyzing and visualizing multidimensional data. However, when data is a live streaming feed, conventional DR methods cannot be directly used because of their computational complexity and inability to preserve the projected data positions at previous time points. In addition, the problem becomes even more challenging when the dynamic data records have a varying number of dimensions as often found in real-world applications. This paper presents an incremental DR solution. We enhance an existing incremental PCA method in several ways to ensure its usability for visualizing streaming multidimensional data. First, we use geometric transformation and animation methods to help preserve a viewer's mental map when visualizing the incremental results. Second, to handle data dimension variants, we use an optimization method to estimate the projected data positions, and also convey the resulting uncertainty in the visualization. We demonstrate the effectiveness of our design with two case studies using real-world datasets.
false
false
[ "Takanori Fujiwara", "Jia-Kai Chou", "Shilpika", "Panpan Xu", "Ren Liu", "Kwan-Liu Ma" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/1905.04000v3", "icon": "paper" } ]
InfoVis
2019
BarcodeTree: Scalable Comparison of Multiple Hierarchies
10.1109/TVCG.2019.2934535
We propose BarcodeTree (BCT), a novel visualization technique for comparing topological structures and node attribute values of multiple trees. BCT can provide an overview of one hundred shallow and stable trees simultaneously, without aggregating individual nodes. Each BCT is shown within a single row using a style similar to a barcode, allowing trees to be stacked vertically with matching nodes aligned horizontally to ease comparison and maintain space efficiency. We design several visual cues and interactive techniques to help users understand the topological structure and compare trees. In an experiment comparing two variants of BCT with icicle plots, the results suggest that BCTs make it easier to visually compare trees by reducing the vertical distance between different trees. We also present two case studies involving a dataset of hundreds of trees to demonstrate BCT's utility.
false
false
[ "Guozheng Li 0002", "Yu Zhang 0043", "Yu Dong", "Christy Jie Liang", "Jinson Zhang", "Jinsong Wang", "Michael J. McGuffin", "Xiaoru Yuan" ]
[]
[]
[]
InfoVis
2019
Biased Average Position Estimates in Line and Bar Graphs: Underestimation, Overestimation, and Perceptual Pull
10.1109/TVCG.2019.2934400
In visual depictions of data, position (i.e., the vertical height of a line or a bar) is believed to be the most precise way to encode information compared to other encodings (e.g., hue). Not only are other encodings less precise than position, but they can also be prone to systematic biases (e.g., color category boundaries can distort perceived differences between hues). By comparison, position's high level of precision may seem to protect it from such biases. In contrast, across three empirical studies, we show that while position may be a precise form of data encoding, it can also produce systematic biases in how values are visually encoded, at least for reports of average position across a short delay. In displays with a single line or a single set of bars, reports of average positions were significantly biased, such that line positions were underestimated and bar positions were overestimated. In displays with multiple data series (i.e., multiple lines and/or sets of bars), this systematic bias still persisted. We also observed an effect of “perceptual pull”, where the average position estimate for each series was ‘pulled’ toward the other. These findings suggest that, although position may still be the most precise form of visual data encoding, it can also be systematically biased.
false
false
[ "Cindy Xiong", "Cristina R. Ceja", "Casimir J. H. Ludwig", "Steven Franconeri" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/1908.00073v1", "icon": "paper" } ]
InfoVis
2019
CerebroVis: Designing an Abstract yet Spatially Contextualized Cerebral Artery Network Visualization
10.1109/TVCG.2019.2934402
Blood circulation in the human brain is supplied through a network of cerebral arteries. If a clinician suspects a patient has a stroke or other cerebrovascular condition, they order imaging tests. Neuroradiologists visually search the resulting scans for abnormalities. Their visual search tasks correspond to the abstract network analysis tasks of browsing and path following. To assist neuroradiologists in identifying cerebral artery abnormalities, we designed CerebroVis, a novel abstract—yet spatially contextualized—cerebral artery network visualization. In this design study, we contribute a novel framing and definition of the cerebral artery system in terms of network theory and characterize neuroradiologist domain goals as abstract visualization and network analysis tasks. Through an iterative, user-centered design process we developed an abstract network layout technique which incorporates cerebral artery spatial context. The abstract visualization enables increased domain task performance over 3D geometry representations, while including spatial context helps preserve the user's mental map of the underlying geometry. We provide open source implementations of our network layout technique and prototype cerebral artery visualization tool. We demonstrate the robustness of our technique by successfully laying out 61 open source brain scans. We evaluate the effectiveness of our layout through a mixed methods study with three neuroradiologists. In a formative controlled experiment our study participants used CerebroVis and a conventional 3D visualization to examine real cerebral artery imaging data to identify a simulated intracranial artery stenosis. Participants were more accurate at identifying stenoses using CerebroVis (absolute risk difference 13%). A free copy of this paper, the evaluation stimuli and data, and source code are available at osf.io/e5sxt.
false
false
[ "Aditeya Pandey", "Harsh Shukla", "Geoffrey S. Young", "Lei Qin", "Amir A. Zamani", "Liangge Hsu", "Raymond Huang", "Cody Dunne", "Michelle Borkin" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "https://osf.io/63y5c", "icon": "paper" } ]
InfoVis
2019
Color Crafting: Automating the Construction of Designer Quality Color Ramps
10.1109/TVCG.2019.2934284
Visualizations often encode numeric data using sequential and diverging color ramps. Effective ramps use colors that are sufficiently discriminable, align well with the data, and are aesthetically pleasing. Designers rely on years of experience to create high-quality color ramps. However, it is challenging for novice visualization developers that lack this experience to craft effective ramps as most guidelines for constructing ramps are loosely defined qualitative heuristics that are often difficult to apply. Our goal is to enable visualization developers to readily create effective color encodings using a single seed color. We do this using an algorithmic approach that models designer practices by analyzing patterns in the structure of designer-crafted color ramps. We construct these models from a corpus of 222 expert-designed color ramps, and use the results to automatically generate ramps that mimic designer practices. We evaluate our approach through an empirical study comparing the outputs of our approach with designer-crafted color ramps. Our models produce ramps that support accurate and aesthetically pleasing visualizations at least as well as designer ramps and that outperform conventional mathematical approaches.
false
false
[ "Stephen Smart", "Keke Wu", "Danielle Albers Szafir" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/1908.00629v1", "icon": "paper" } ]
InfoVis
2019
Common Fate for Animated Transitions in Visualization
10.1109/TVCG.2019.2934288
The Law of Common Fate from Gestalt psychology states that visual objects moving with the same velocity along parallel trajectories will be perceived by a human observer as grouped. However, the concept of common fate is much broader than mere velocity; in this paper we explore how common fate results from coordinated changes in luminance and size. We present results from a crowdsourced graphical perception study where we asked workers to make perceptual judgments on a series of trials involving four graphical objects under the influence of conflicting static and dynamic visual factors (position, size and luminance) used in conjunction. Our results yield the following rankings for visual grouping: motion > (dynamic luminance, size, luminance); dynamic size > (dynamic luminance, position); and dynamic luminance > size. We also conducted a follow-up experiment to evaluate the three dynamic visual factors in a more ecologically valid setting, using both a Gapminder-like animated scatterplot and a thematic map of election data. The results indicate that in practice the relative grouping strengths of these factors may depend on various parameters including the visualization characteristics and the underlying data. We discuss design implications for animated transitions in data visualization.
false
false
[ "Amira Chalbi", "Jacob Ritchie", "Deok Gun Park 0001", "Jungu Choi", "Nicolas Roussel 0001", "Niklas Elmqvist", "Fanny Chevalier" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/1908.00661v1", "icon": "paper" } ]
InfoVis
2019
Construct-A-Vis: Exploring the Free-Form Visualization Processes of Children
10.1109/TVCG.2019.2934804
Building data analysis skills is part of modern elementary school curricula. Recent research has explored how to facilitate children's understanding of visual data representations through completion exercises which highlight links between concrete and abstract mappings. This approach scaffolds visualization activities by presenting a target visualization to children. But how can we engage children in more free-form visual data mapping exercises that are driven by their own mapping ideas? How can we scaffold a creative exploration of visualization techniques and mapping possibilities? We present Construct-A-Vis, a tablet-based tool designed to explore the feasibility of free-form and constructive visualization activities with elementary school children. Construct-A-Vis provides adjustable levels of scaffolding visual mapping processes. It can be used by children individually or as part of collaborative activities. Findings from a study with elementary school children using Construct-A-Vis individually and in pairs highlight the potential of this free-form constructive approach, as visible in children's diverse visualization outcomes and their critical engagement with the data and mapping processes. Based on our study findings we contribute insights into the design of free-form visualization tools for children, including the role of tool-based scaffolding mechanisms and shared interactions to guide visualization activities with children.
false
false
[ "Fearn Bishop", "Johannes Zagermann", "Ulrike Pfeil", "Gemma Sanderson", "Harald Reiterer", "Uta Hinrichs" ]
[]
[]
[]
InfoVis
2019
Criteria for Rigor in Visualization Design Study
10.1109/TVCG.2019.2934539
We develop a new perspective on research conducted through visualization design study that emphasizes design as a method of inquiry and the broad range of knowledge-contributions achieved through it as multiple, subjective, and socially constructed. From this interpretivist position we explore the nature of visualization design study and develop six criteria for rigor. We propose that rigor is established and judged according to the extent to which visualization design study research and its reporting are INFORMED, REFLEXIVE, ABUNDANT, PLAUSIBLE, RESONANT, and TRANSPARENT. This perspective and the criteria were constructed through a four-year engagement with the discourse around rigor and the nature of knowledge in social science, information systems, and design. We suggest methods from cognate disciplines that can support visualization researchers in meeting these criteria during the planning, execution, and reporting of design study. Through a series of deliberately provocative questions, we explore implications of this new perspective for design study research in visualization, concluding that as a discipline, visualization is not yet well positioned to embrace, nurture, and fully benefit from a rigorous, interpretivist approach to design study. The perspective and criteria we present are intended to stimulate dialogue and debate around the nature of visualization design study and the broader underpinnings of the discipline.
false
false
[ "Miriah D. Meyer", "Jason Dykes" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/1907.08495v2", "icon": "paper" } ]
InfoVis
2019
Critical Reflections on Visualization Authoring Systems
10.1109/TVCG.2019.2934281
An emerging generation of visualization authoring systems support expressive information visualization without textual programming. As they vary in their visualization models, system architectures, and user interfaces, it is challenging to directly compare these systems using traditional evaluative methods. Recognizing the value of contextualizing our decisions in the broader design space, we present critical reflections on three systems we developed: Lyra, Data Illustrator, and Charticulator. This paper surfaces knowledge that would have been daunting within the constituent papers of these three systems. We compare and contrast their (previously unmentioned) limitations and trade-offs between expressivity and learnability. We also reflect on common assumptions that we made during the development of our systems, thereby informing future research directions in visualization authoring systems.
false
false
[ "Arvind Satyanarayan", "Bongshin Lee", "Donghao Ren", "Jeffrey Heer", "John T. Stasko", "John Thompson 0002", "Matthew Brehmer", "Zhicheng Liu" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/1907.13568v1", "icon": "paper" } ]
InfoVis
2019
Data by Proxy — Material Traces as Autographic Visualizations
10.1109/TVCG.2019.2934788
Information visualization limits itself, per definition, to the domain of symbolic information. This paper discusses arguments why the field should also consider forms of data that are not symbolically encoded, including physical traces and material indicators. Continuing a provocation presented by Pat Hanrahan in his 2004 IEEE Vis capstone address, this paper compares physical traces to visualizations and describes the techniques and visual practices for producing, revealing, and interpreting them. By contrasting information visualization with a speculative counter model of autographic visualization, this paper examines the design principles for material data. Autographic visualization addresses limitations of information visualization, such as the inability to directly reflect the material circumstances of data generation. The comparison between the two models allows probing the epistemic assumptions behind information visualization and uncovers linkages with the rich history of scientific visualization and trace reading. The paper begins by discussing the gap between data visualizations and their corresponding phenomena and proceeds by investigating how material visualizations can bridge this gap. It contextualizes autographic visualization with paradigms such as data physicalization and indexical visualization and grounds it in the broader theoretical literature of semiotics, science and technology studies (STS), and the history of scientific representation. The main section of the paper proposes a foundational design vocabulary for autographic visualization and offers examples of how citizen scientists already use autographic principles in their displays, which seem to violate the canonical principles of information visualization but succeed at fulfilling other rhetorical purposes in evidence construction. The paper concludes with a discussion of the limitations of autographic visualization, a roadmap for the empirical investigation of trace perception, and thoughts about how information visualization and autographic visualization techniques can contribute to each other.
false
false
[ "Dietmar Offenhuber" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/1907.05454v1", "icon": "paper" } ]
InfoVis
2019
Data Changes Everything: Challenges and Opportunities in Data Visualization Design Handoff
10.1109/TVCG.2019.2934538
Complex data visualization design projects often entail collaboration between people with different visualization-related skills. For example, many teams include both designers who create new visualization designs and developers who implement the resulting visualization software. We identify gaps between data characterization tools, visualization design tools, and development platforms that pose challenges for designer-developer teams working to create new data visualizations. While it is common for commercial interaction design tools to support collaboration between designers and developers, creating data visualizations poses several unique challenges that are not supported by current tools. In particular, visualization designers must characterize and build an understanding of the underlying data, then specify layouts, data encodings, and other data-driven parameters that will be robust across many different data values. In larger teams, designers must also clearly communicate these mappings and their dependencies to developers, clients, and other collaborators. We report observations and reflections from five large multidisciplinary visualization design projects and highlight six data-specific visualization challenges for design specification and handoff. These challenges include adapting to changing data, anticipating edge cases in data, understanding technical challenges, articulating data-dependent interactions, communicating data mappings, and preserving the integrity of data mappings across iterations. Based on these observations, we identify opportunities for future tools for prototyping, testing, and communicating data-driven designs, which might contribute to more successful and collaborative data visualization design.
false
false
[ "Jagoda Walny", "Christian Frisson", "Mieka West", "Doris Kosminsky", "Søren Knudsen", "Sheelagh Carpendale", "Wesley Willett" ]
[ "BP" ]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/1908.00192v1", "icon": "paper" } ]
InfoVis
2019
Data Sampling in Multi-view and Multi-class Scatterplots via Set Cover Optimization
10.1109/TVCG.2019.2934799
We present a method for data sampling in scatterplots by jointly optimizing point selection for different views or classes. Our method uses space-filling curves (Z-order curves) that partition a point set into subsets that, when covered each by one sample, provide a sampling or coreset with good approximation guarantees in relation to the original point set. For scatterplot matrices with multiple views, different views provide different space-filling curves, leading to different partitions of the given point set. For multi-class scatterplots, the focus on either per-class distribution or global distribution provides two different partitions of the given point set that need to be considered in the selection of the coreset. For both cases, we convert the coreset selection problem into an Exact Cover Problem (ECP), and demonstrate with quantitative and qualitative evaluations that an approximate solution that solves the ECP efficiently is able to provide high-quality samplings.
false
false
[ "Ruizhen Hu", "Tingkai Sha", "Oliver van Kaick", "Oliver Deussen", "Hui Huang 0004" ]
[]
[]
[]
InfoVis
2019
DataShot: Automatic Generation of Fact Sheets from Tabular Data
10.1109/TVCG.2019.2934398
Fact sheets with vivid graphical design and intriguing statistical insights are prevalent for presenting raw data. They help audiences understand data-related facts effectively and make a deep impression. However, designing a fact sheet requires both data and design expertise and is a laborious and time-consuming process. One needs to not only understand the data in depth but also produce intricate graphical representations. To assist in the design process, we present DataShot which, to the best of our knowledge, is the first automated system that creates fact sheets automatically from tabular data. First, we conduct a qualitative analysis of 245 infographic examples to explore general infographic design space at both the sheet and element levels. We identify common infographic structures, sheet layouts, fact types, and visualization styles during the study. Based on these findings, we propose a fact sheet generation pipeline, consisting of fact extraction, fact composition, and presentation synthesis, for the auto-generation workflow. To validate our system, we present use cases with three real-world datasets. We conduct an in-lab user study to understand the usage of our system. Our evaluation results show that DataShot can efficiently generate satisfactory fact sheets to support further customization and data presentation.
false
false
[ "Yun Wang 0012", "Zhida Sun", "Haidong Zhang", "Weiwei Cui", "Ke Xu", "Xiaojuan Ma", "Dongmei Zhang 0001" ]
[]
[]
[]
InfoVis
2019
Decoding a Complex Visualization in a Science Museum – An Empirical Study
10.1109/TVCG.2019.2934401
This study describes a detailed analysis of museum visitors' decoding process as they used a visualization designed to support exploration of a large, complex dataset. Quantitative and qualitative analyses revealed that it took, on average, 43 seconds for visitors to decode enough of the visualization to see patterns and relationships in the underlying data represented, and 54 seconds to arrive at their first correct data interpretation. Furthermore, visitors decoded throughout and not only upon initial use of the visualization. The study analyzed think-aloud data to identify issues visitors had mapping the visual representations to their intended referents, examine why they occurred, and consider if and how these decoding issues were resolved. The paper also describes how multiple visual encodings both helped and hindered decoding and concludes with implications on the design and adaptation of visualizations for informal science learning venues.
false
false
[ "Joyce Ma", "Kwan-Liu Ma", "Jennifer Frazier" ]
[]
[]
[]
InfoVis
2019
DeepDrawing: A Deep Learning Approach to Graph Drawing
10.1109/TVCG.2019.2934798
Node-link diagrams are widely used to facilitate network explorations. However, when using a graph drawing technique to visualize networks, users often need to tune different algorithm-specific parameters iteratively by comparing the corresponding drawing results in order to achieve a desired visual effect. This trial and error process is often tedious and time-consuming, especially for non-expert users. Inspired by the powerful data modelling and prediction capabilities of deep learning techniques, we explore the possibility of applying deep learning techniques to graph drawing. Specifically, we propose using a graph-LSTM-based approach to directly map network structures to graph drawings. Given a set of layout examples as the training dataset, we train the proposed graph-LSTM-based model to capture their layout characteristics. Then, the trained model is used to generate graph drawings in a similar style for new networks. We evaluated the proposed approach on two special types of layouts (i.e., grid layouts and star layouts) and two general types of layouts (i.e., ForceAtlas2 and PivotMDS) in both qualitative and quantitative ways. The results provide support for the effectiveness of our approach. We also conducted a time cost assessment on the drawings of small graphs with 20 to 50 nodes. We further report the lessons we learned and discuss the limitations and future work.
false
false
[ "Yong Wang 0021", "Zhihua Jin", "Qianwen Wang", "Weiwei Cui", "Tengfei Ma 0001", "Huamin Qu" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/1907.11040v3", "icon": "paper" } ]
InfoVis
2019
Design by Immersion: A Transdisciplinary Approach to Problem-Driven Visualizations
10.1109/TVCG.2019.2934790
While previous work exists on how to conduct and disseminate insights from problem-driven visualization projects and design studies, the literature does not address how to accomplish these goals in transdisciplinary teams in ways that advance all disciplines involved. In this paper we introduce and define a new methodological paradigm we call design by immersion, which provides an alternative perspective on problem-driven visualization work. Design by immersion embeds transdisciplinary experiences at the center of the visualization process by having visualization researchers participate in the work of the target domain (or domain experts participate in visualization research). Based on our own combined experiences of working on cross-disciplinary, problem-driven visualization projects, we present six case studies that expose the opportunities that design by immersion enables, including (1) exploring new domain-inspired visualization design spaces, (2) enriching domain understanding through personal experiences, and (3) building strong transdisciplinary relationships. Furthermore, we illustrate how the process of design by immersion opens up a diverse set of design activities that can be combined in different ways depending on the type of collaboration, project, and goals. Finally, we discuss the challenges and potential pitfalls of design by immersion.
false
false
[ "Kyle Wm. Hall", "Adam James Bradley", "Uta Hinrichs", "Samuel Huron", "Jo Wood", "Christopher Collins 0001", "Sheelagh Carpendale" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/1908.00559v2", "icon": "paper" } ]
InfoVis
2019
Designing for Mobile and Immersive Visual Analytics in the Field
10.1109/TVCG.2019.2934282
Data collection and analysis in the field is critical for operations in domains such as environmental science and public safety. However, field workers currently face data- and platform-oriented issues in efficient data collection and analysis in the field, such as limited connectivity, screen space, and attentional resources. In this paper, we explore how visual analytics tools might transform field practices by more deeply integrating data into these operations. We use a design probe coupling mobile, cloud, and immersive analytics components to guide interviews with ten experts from five domains to explore how visual analytics could support data collection and analysis needs in the field. The results identify shortcomings of current approaches and target scenarios and design considerations for future field analysis systems. We embody these findings in FieldView, an extensible, open-source prototype designed to support critical use cases for situated field analysis. Our findings suggest the potential for integrating mobile and immersive technologies to enhance data's utility for various field operations and new directions for visual analytics tools to transform fieldwork.
false
false
[ "Matt Whitlock", "Keke Wu", "Danielle Albers Szafir" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/1908.00680v1", "icon": "paper" } ]
InfoVis
2019
Discriminability Tests for Visualization Effectiveness and Scalability
10.1109/TVCG.2019.2934432
The scalability of a particular visualization approach is limited by the ability for people to discern differences between plots made with different datasets. Ideally, when the data changes, the visualization changes in perceptible ways. This relation breaks down when there is a mismatch between the encoding and the character of the dataset being viewed. Unfortunately, visualizations are often designed and evaluated without fully exploring how they will respond to a wide variety of datasets. We explore the use of an image similarity measure, the Multi-Scale Structural Similarity Index (MS-SSIM), for testing the discriminability of a data visualization across a variety of datasets. MS-SSIM is able to capture the similarity of two visualizations across multiple scales, including low level granular changes and high level patterns. Significant data changes that are not captured by the MS-SSIM indicate visualizations of low discriminability and effectiveness. The measure's utility is demonstrated with two empirical studies. In the first, we compare human similarity judgments and MS-SSIM scores for a collection of scatterplots. In the second, we compute the discriminability values for a set of basic visualizations and compare them with empirical measurements of effectiveness. In both cases, the analyses show that the computational measure is able to approximate empirical results. Our approach can be used to rank competing encodings on their discriminability and to aid in selecting visualizations for a particular type of data distribution.
false
false
[ "Rafael Veras", "Christopher Collins 0001" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/1907.11358v1", "icon": "paper" } ]
InfoVis
2019
Estimating Color-Concept Associations from Image Statistics
10.1109/TVCG.2019.2934536
To interpret the meanings of colors in visualizations of categorical information, people must determine how distinct colors correspond to different concepts. This process is easier when assignments between colors and concepts in visualizations match people's expectations, making color palettes semantically interpretable. Efforts have been underway to optimize color palette design for semantic interpretability, but this requires having good estimates of human color-concept associations. Obtaining these data from humans is costly, which motivates the need for automated methods. We developed and evaluated a new method for automatically estimating color-concept associations in a way that strongly correlates with human ratings. Building on prior studies using Google Images, our approach operates directly on Google Image search results without the need for humans in the loop. Specifically, we evaluated several methods for extracting raw pixel content of the images in order to best estimate color-concept associations obtained from human ratings. The most effective method extracted colors using a combination of cylindrical sectors and color categories in color space. We demonstrate that our approach can accurately estimate average human color-concept associations for different fruits using only a small set of images. The approach also generalizes moderately well to more complicated recycling-related concepts of objects that can appear in any color.
false
false
[ "Ragini Rathore", "Zachary Leggon", "Laurent Lessard", "Karen B. Schloss" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/1908.00220v2", "icon": "paper" } ]
InfoVis
2019
Evaluating an Immersive Space-Time Cube Geovisualization for Intuitive Trajectory Data Exploration
10.1109/TVCG.2019.2934415
A Space-Time Cube enables analysts to clearly observe spatio-temporal features in movement trajectory datasets in geovisualization. However, its general usability is impacted by a lack of depth cues, a reported steep learning curve, and the requirement for efficient 3D navigation. In this work, we investigate a Space-Time Cube in the Immersive Analytics domain. Based on a review of previous work and selecting an appropriate exploration metaphor, we built a prototype environment where the cube is coupled to a virtual representation of the analyst's real desk, and zooming and panning in space and time are intuitively controlled using mid-air gestures. We compared our immersive environment to a desktop-based implementation in a user study with 20 participants across 7 tasks of varying difficulty, which targeted different user interface features. To investigate how performance is affected in the presence of clutter, we explored two scenarios with different numbers of trajectories. While the quantitative performance was similar for the majority of tasks, large differences appear when we analyze the patterns of interaction and consider subjective metrics. The immersive version of the Space-Time Cube received higher usability scores, much higher user preference, and was rated to have a lower mental workload, without causing participants discomfort in 25-minute-long VR sessions.
false
false
[ "Jorge A. Wagner Filho", "Wolfgang Stuerzlinger", "Luciana Porcher Nedel" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/1908.00580v2", "icon": "paper" } ]
InfoVis
2019
GenerativeMap: Visualization and Exploration of Dynamic Density Maps via Generative Learning Model
10.1109/TVCG.2019.2934806
The density map is widely used for data sampling, time-varying detection, ensemble representation, etc. The visualization of dynamic evolution is a challenging task when exploring spatiotemporal data. Many approaches have been proposed to explore the variation of data patterns over time, which commonly need multiple parameters and preprocessing work. Image generation is a well-known topic in deep learning, and a variety of generative models have been promoted in recent years. In this paper, we introduce a general pipeline called GenerativeMap to extract dynamics of density maps by generating interpolation information. First, a trained generative model comprises an important part of our approach, which can generate nonlinear and natural results by implementing a few parameters. Second, a visual presentation is proposed to show the density change, which is combined with the level of detail and blue noise sampling for a better visual effect. Third, for dynamic visualization of large-scale density maps, we extend this approach to show the evolution in regions of interest, which reduces cost and overcomes the drawback of the learning-based generative model. We demonstrate our method on different types of cases, and we evaluate and compare the approach from multiple aspects. The results help identify the effectiveness of our approach and confirm its applicability in different scenarios.
false
false
[ "Chen Chen", "Changbo Wang", "Xue Bai", "Peiying Zhang", "Chenhui Li" ]
[]
[]
[]
InfoVis
2019
Illusion of Causality in Visualized Data
10.1109/TVCG.2019.2934399
Students who eat breakfast more frequently tend to have a higher grade point average. From this data, many people might confidently state that a before-school breakfast program would lead to higher grades. This is a reasoning error, because correlation does not necessarily indicate causation – X and Y can be correlated without one directly causing the other. While this error is pervasive, its prevalence might be amplified or mitigated by the way that the data is presented to a viewer. Across three crowdsourced experiments, we examined whether how simple data relations are presented would mitigate this reasoning error. The first experiment tested examples similar to the breakfast-GPA relation, varying in the plausibility of the causal link. We asked participants to rate their level of agreement that the relation was correlated, which they rated appropriately as high. However, participants also expressed high agreement with a causal interpretation of the data. Levels of support for the causal interpretation were not equally strong across visualization types: causality ratings were highest for text descriptions and bar graphs, but weaker for scatter plots. But is this effect driven by bar graphs aggregating data into two groups or by the visual encoding type? We isolated data aggregation versus visual encoding type and examined their individual effect on perceived causality. Overall, different visualization designs afford different cognitive reasoning affordances across the same data. High levels of data aggregation by graphs tend to be associated with higher perceived causality in data. Participants perceived line and dot visual encodings as more causal than bar encodings. Our results demonstrate how some visualization designs trigger stronger causal links while choosing others can help mitigate unwarranted perceptions of causality.
false
false
[ "Cindy Xiong", "Joel Shapiro", "Jessica Hullman", "Steven Franconeri" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/1908.00215v1", "icon": "paper" } ]
InfoVis
2019
Improving the Robustness of Scagnostics
10.1109/TVCG.2019.2934796
In this paper, we examine the robustness of scagnostics through a series of theoretical and empirical studies. First, we investigate the sensitivity of scagnostics by employing perturbing operations on more than 60M synthetic and real-world scatterplots. We found that two scagnostic measures, Outlying and Clumpy, are overly sensitive to data binning. To understand how these measures align with human judgments of visual features, we conducted a study with 24 participants, which reveals that i) humans are not sensitive to small perturbations of the data that cause large changes in both measures, and ii) the perception of clumpiness heavily depends on per-cluster topologies and structures. Motivated by these results, we propose Robust Scagnostics (RScag) by combining adaptive binning with a hierarchy-based form of scagnostics. An analysis shows that RScag improves on the robustness of original scagnostics, aligns better with human judgments, and is equally fast as the traditional scagnostic measures.
false
false
[ "Yunhai Wang", "Zeyu Wang 0005", "Tingting Liu", "Michael Correll", "Zhanglin Cheng", "Oliver Deussen", "Michael Sedlmair" ]
[]
[]
[]
InfoVis
2019
Interactive Structure-aware Blending of Diverse Edge Bundling Visualizations
10.1109/TVCG.2019.2934805
Many edge bundling techniques (i.e., data simplification as a support for data visualization and decision making) exist, but they are not directly applicable to any kind of dataset, and their parameters are often too abstract and difficult to set up. As a result, this hinders the user's ability to create efficient aggregated visualizations. To address these issues, we investigated a novel way of handling visual aggregation with a task-driven and user-centered approach. Given a graph, our approach produces a decluttered view as follows: first, the user investigates different edge bundling results and specifies areas where certain edge bundling techniques would provide user-desired results. Second, our system then computes a smooth and structure-preserving transition between these specified areas. Lastly, the user can further fine-tune the global visualization with a direct manipulation technique to remove local ambiguity and to apply different visual deformations. In this paper, we provide details of our design rationale and implementation. Also, we show how our algorithm gives more suitable results compared to current edge bundling techniques, and in the end, we provide concrete instances of usages where the algorithm combines various edge bundling results to support diverse data exploration and visualizations.
false
false
[ "Yunhai Wang", "Mingliang Xue", "Yanyan Wang", "Xinyuan Yan", "Baoquan Chen", "Chi-Wing Fu", "Christophe Hurter" ]
[]
[]
[]
InfoVis
2019
Investigating Direct Manipulation of Graphical Encodings as a Method for User Interaction
10.1109/TVCG.2019.2934534
We investigate direct manipulation of graphical encodings as a method for interacting with visualizations. There is an increasing interest in developing visualization tools that enable users to perform operations by directly manipulating graphical encodings rather than external widgets such as checkboxes and sliders. Designers of such tools must decide which direct manipulation operations should be supported, and identify how each operation can be invoked. However, we lack empirical guidelines for how people convey their intended operations using direct manipulation of graphical encodings. We address this issue by conducting a qualitative study that examines how participants perform 15 operations using direct manipulation of standard graphical encodings. From this study, we 1) identify a list of strategies people employ to perform each operation, 2) observe commonalities in strategies across operations, and 3) derive implications to help designers leverage direct manipulation of graphical encoding as a method for user interaction.
false
false
[ "Bahador Saket", "Samuel Huron", "Charles Perin", "Alex Endert" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/1908.00679v1", "icon": "paper" } ]
InfoVis
2019
Measures of the Benefit of Direct Encoding of Data Deltas for Data Pair Relation Perception
10.1109/TVCG.2019.2934801
The power of data visualization is not to convey absolute values of individual data points, but to allow the exploration of relations (increases or decreases in a data value) among them. One approach to highlighting these relations is to explicitly encode the numeric differences (deltas) between data values. Because this approach removes the context of the individual data values, it is important to measure how much of a performance improvement it actually offers, especially across differences in encodings and tasks, to ensure that it is worth adding to a visualization design. Across 3 different tasks, we measured the increase in visual processing efficiency for judging the relations between pairs of data values, from when only the values were shown, to when the deltas between the values were explicitly encoded, across position and length visual feature encodings (and slope encodings in Experiments 1 & 2). In Experiment 1, the participant's task was to locate a pair of data values with a given relation (e.g., Find the ‘small bar to the left of a tall bar’ pair) among pairs of the opposite relation, and we measured processing efficiency from the increase in response times as the number of pairs increased. In Experiment 2, the task was to judge which of two relation types was more prevalent in a briefly presented display of 10 data pairs (e.g., Are there more ‘small bar to the left of a tall bar’ pairs or more ‘tall bar to the left of a small bar’ pairs?). In the final experiment, the task was to estimate the average delta within a briefly presented display of 6 data pairs (e.g., What is the average bar height difference across all ‘small bar to the left of a tall bar’ pairs?). Across all three experiments, visual processing of relations between data value pairs was significantly better when directly encoded as deltas rather than implicitly between individual data points, and varied substantially depending on the task (improvement ranged from 25% to 95%). Considering the ubiquity of bar charts and dot plots, relation perception for individual data values is highly inefficient, and confirms the need for alternative designs that provide not only absolute values, but also direct encoding of critical relationships between those values.
false
false
[ "Christine Nothelfer", "Steven Franconeri" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "https://osf.io/jup9a", "icon": "paper" } ]
InfoVis
2019
OntoPlot: A Novel Visualisation for Non-hierarchical Associations in Large Ontologies
10.1109/TVCG.2019.2934557
Ontologies are formal representations of concepts and complex relationships among them. They have been widely used to capture comprehensive domain knowledge in areas such as biology and medicine, where large and complex ontologies can contain hundreds of thousands of concepts. Especially due to the large size of ontologies, visualisation is useful for authoring, exploring and understanding their underlying data. Existing ontology visualisation tools generally focus on the hierarchical structure, giving much less emphasis to non-hierarchical associations. In this paper we present OntoPlot, a novel visualisation specifically designed to facilitate the exploration of all concept associations whilst still showing an ontology's large hierarchical structure. This hybrid visualisation combines icicle plots, visual compression techniques and interactivity, improving space-efficiency and reducing visual structural complexity. We conducted a user study with domain experts to evaluate the usability of OntoPlot, comparing it with the de facto ontology editor Protégé. The results confirm that OntoPlot attains our design goals for association-related tasks and is strongly favoured by domain experts.
false
false
[ "Ying Yang", "Michael Wybrow", "Yuan-Fang Li", "Tobias Czauderna", "Yongqun He" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/1908.00688v2", "icon": "paper" } ]
InfoVis
2019
P5: Portable Progressive Parallel Processing Pipelines for Interactive Data Analysis and Visualization
10.1109/TVCG.2019.2934537
We present P5, a web-based visualization toolkit that combines declarative visualization grammar and GPU computing for progressive data analysis and visualization. To interactively analyze and explore big data, progressive analytics and visualization methods have recently emerged. Progressive visualizations of incrementally refining results have the advantages of allowing users to steer the analysis process and make early decisions. P5 leverages declarative grammar for specifying visualization designs and exploits GPU computing to accelerate progressive data processing and rendering. The declarative specifications can be modified during progressive processing to create different visualizations for analyzing the intermediate results. To enable user interactions for progressive data analysis, P5 utilizes the GPU to automatically aggregate and index data based on declarative interaction specifications to facilitate effective interactive visualization. We demonstrate the effectiveness and usefulness of P5 through a variety of example applications and several performance benchmark tests.
false
false
[ "Jianping Kelvin Li", "Kwan-Liu Ma" ]
[]
[]
[]
InfoVis
2019
Pattern-Driven Navigation in 2D Multiscale Visualizations with Scalable Insets
10.1109/TVCG.2019.2934555
We present Scalable Insets, a technique for interactively exploring and navigating large numbers of annotated patterns in multiscale visualizations such as gigapixel images, matrices, or maps. Exploration of many but sparsely-distributed patterns in multiscale visualizations is challenging as visual representations change across zoom levels, context and navigational cues get lost upon zooming, and navigation is time consuming. Our technique visualizes annotated patterns too small to be identifiable at certain zoom levels using insets, i.e., magnified thumbnail views of the annotated patterns. Insets support users in searching, comparing, and contextualizing patterns while reducing the amount of navigation needed. They are dynamically placed either within the viewport or along the boundary of the viewport to offer a compromise between locality and context preservation. Annotated patterns are interactively clustered by location and type. They are visually represented as an aggregated inset to provide scalable exploration within a single viewport. In a controlled user study with 18 participants, we found that Scalable Insets can speed up visual search and improve the accuracy of pattern comparison at the cost of slower frequency estimation compared to a baseline technique. A second study with 6 experts in the field of genomics showed that Scalable Insets is easy to learn and provides first insights into how Scalable Insets can be applied in an open-ended data exploration scenario.
false
false
[ "Fritz Lekschas", "Michael Behrisch 0001", "Benjamin Bach", "Peter Kerpedjiev", "Nils Gehlenborg", "Hanspeter Pfister" ]
[]
[]
[]
InfoVis
2019
Persistent Homology Guided Force-Directed Graph Layouts
10.1109/TVCG.2019.2934802
Graphs are commonly used to encode relationships among entities, yet their abstractness makes them difficult to analyze. Node-link diagrams are popular for drawing graphs, and force-directed layouts provide a flexible method for node arrangements that use local relationships in an attempt to reveal the global shape of the graph. However, clutter and overlap of unrelated structures can lead to confusing graph visualizations. This paper leverages the persistent homology features of an undirected graph as derived information for interactive manipulation of force-directed layouts. We first discuss how to efficiently extract 0-dimensional persistent homology features from both weighted and unweighted undirected graphs. We then introduce the interactive persistence barcode used to manipulate the force-directed graph layout. In particular, the user adds and removes contracting and repulsing forces generated by the persistent homology features, eventually selecting the set of persistent homology features that most improve the layout. Finally, we demonstrate the utility of our approach across a variety of synthetic and real datasets.
false
false
[ "Ashley Suh", "Mustafa Hajij", "Bei Wang 0001", "Carlos Scheidegger", "Paul Rosen 0001" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/1712.05548v4", "icon": "paper" } ]
InfoVis
2019
RSATree: Distribution-Aware Data Representation of Large-Scale Tabular Datasets for Flexible Visual Query
10.1109/TVCG.2019.2934800
Analysts commonly investigate the data distributions derived from statistical aggregations of data that are represented by charts, such as histograms and binned scatterplots, to visualize and analyze a large-scale dataset. Aggregate queries are implicitly executed through such a process. Datasets are often extremely large; thus, response time should be accelerated by calculating predefined data cubes. However, the queries are limited to the predefined binning schema of preprocessed data cubes. This limitation hinders analysts' flexible adjustment of visual specifications to investigate the implicit patterns in the data effectively. In particular, RSATree enables arbitrary queries and flexible binning strategies by leveraging three schemes, namely, an R-tree-based space partitioning scheme to catch the data distribution, a locality-sensitive hashing technique to achieve locality-preserving random access to data items, and a summed area table scheme to support interactive query of aggregated values with a linear computational complexity. This study presents and implements a web-based visual query system that supports visual specification, query, and exploration of large-scale tabular data with user-adjustable granularities. We demonstrate the efficiency and utility of our approach by performing various experiments on real-world datasets and analyzing time and space complexity.
false
false
[ "Honghui Mei", "Wei Chen 0001", "Yating Wei", "Yuanzhe Hu", "Shuyue Zhou", "Bingru Lin", "Ying Zhao 0001", "Jiazhi Xia" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/1908.02005v2", "icon": "paper" } ]
InfoVis
2,019
Searching the Visual Style and Structure of D3 Visualizations
10.1109/TVCG.2019.2934431
We present a search engine for D3 visualizations that allows queries based on their visual style and underlying structure. To build the engine we crawl a collection of 7860 D3 visualizations from the Web and deconstruct each one to recover its data, its data-encoding marks and the encodings describing how the data is mapped to visual attributes of the marks. We also extract axes and other non-data-encoding attributes of marks (e.g., typeface, background color). Our search engine indexes this style and structure information as well as metadata about the webpage containing the chart. We show how visualization developers can search the collection to find visualizations that exhibit specific design characteristics and thereby explore the space of possible designs. We also demonstrate how researchers can use the search engine to identify commonly used visual design patterns and we perform such a demographic design analysis across our collection of D3 charts. A user study reveals that visualization developers found our style and structure based search engine to be significantly more useful and satisfying for finding different designs of D3 charts than a baseline search engine that only allows keyword search over the webpage containing a chart.
false
false
[ "Enamul Hoque", "Maneesh Agrawala" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/1907.11265v2", "icon": "paper" } ]
InfoVis
2,019
Separating the Wheat from the Chaff: Comparative Visual Cues for Transparent Diagnostics of Competing Models
10.1109/TVCG.2019.2934540
Experts in data and physical sciences have to regularly grapple with the problem of competing models. Be it analytical or physics-based models, a cross-cutting challenge for experts is to reliably diagnose which model outcomes appropriately predict or simulate real-world phenomena. Expert judgment involves reconciling information across many, and often, conflicting criteria that describe the quality of model outcomes. In this paper, through a design study with climate scientists, we develop a deeper understanding of the problem and solution space of model diagnostics, resulting in the following contributions: i) a problem and task characterization using which we map experts' model diagnostics goals to multi-way visual comparison tasks, ii) a design space of comparative visual cues for letting experts quickly understand the degree of disagreement among competing models and gauge the degree of stability of model outputs with respect to alternative criteria, and iii) design and evaluation of MyriadCues, an interactive visualization interface for exploring alternative hypotheses and insights about good and bad models by leveraging comparative visual cues. We present case studies and subjective feedback by experts, which validate how MyriadCues enables more transparent model diagnostic mechanisms, as compared to the state of the art.
false
false
[ "Aritra Dasgupta", "Hong Wang", "Nancy O'Brien", "Susannah Burrows" ]
[]
[]
[]
InfoVis
2,019
ShapeWordle: Tailoring Wordles using Shape-aware Archimedean Spirals
10.1109/TVCG.2019.2934783
We present a new technique to enable the creation of shape-bounded Wordles, which we call ShapeWordle, in which we fit words to form a given shape. To guide word placement within a shape, we extend the traditional Archimedean spirals to be shape-aware by formulating the spirals in a differential form using the distance field of the shape. To handle non-convex shapes, we introduce a multi-centric Wordle layout method that segments the shape into parts for our shape-aware spirals to adaptively fill the space and generate word placements. In addition, we offer a set of editing interactions to facilitate the creation of semantically-meaningful Wordles. Lastly, we present three evaluations: a comprehensive comparison of our results against the state-of-the-art technique (WordArt), case studies with 14 users, and a gallery to showcase the coverage of our technique.
false
false
[ "Yunhai Wang", "Xiaowei Chu", "Kaiyi Zhang", "Chen Bao", "Xiaotong Li", "Jian Zhang 0070", "Chi-Wing Fu", "Christophe Hurter", "Bongshin Lee", "Oliver Deussen" ]
[]
[]
[]
InfoVis
2,019
SmartCube: An Adaptive Data Management Architecture for the Real-Time Visualization of Spatiotemporal Datasets
10.1109/TVCG.2019.2934434
Interactive visualization and exploration of large spatiotemporal data sets is difficult without carefully-designed data pre-processing and management tools. We propose a novel architecture for spatiotemporal data management that can dynamically update itself based on user queries. Datasets are stored in a tree-like structure to support memory sharing among cuboids in a logical data-cube structure. An update mechanism creates or removes cuboids according to an analysis of the user queries, taking the memory size limitation into consideration. The data structure is dynamically optimized according to different user queries. During a query process, user queries are recorded to predict the performance gain of a new cuboid; the creation or deletion of a cuboid is determined by this predicted gain. Experimental results show that our prototype system delivers good performance on user queries over different spatiotemporal datasets, requiring only a small amount of memory while achieving performance comparable to other state-of-the-art algorithms.
false
false
[ "Can Liu 0004", "Cong Wu 0004", "Hanning Shao", "Xiaoru Yuan" ]
[]
[]
[]
InfoVis
2,019
Text-to-Viz: Automatic Generation of Infographics from Proportion-Related Natural Language Statements
10.1109/TVCG.2019.2934785
Combining data content with visual embellishments, infographics can effectively deliver messages in an engaging and memorable manner. Various authoring tools have been proposed to facilitate the creation of infographics. However, creating a professional infographic with these authoring tools is still not an easy task, requiring much time and design expertise. Therefore, these tools are generally not attractive to casual users, who are either unwilling to take time to learn the tools or lacking in proper design expertise to create a professional infographic. In this paper, we explore an alternative approach: to automatically generate infographics from natural language statements. We first conducted a preliminary study to explore the design space of infographics. Based on the preliminary study, we built a proof-of-concept system that automatically converts statements about simple proportion-related statistics to a set of infographics with pre-designed styles. Finally, we demonstrated the usability and usefulness of the system through sample results, exhibits, and expert reviews.
false
false
[ "Weiwei Cui", "Xiaoyu Zhang", "Yun Wang 0012", "He Huang", "Bei Chen", "Lei Fang 0004", "Haidong Zhang", "Jian-Guang Lou", "Dongmei Zhang 0001" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/1907.09091v1", "icon": "paper" } ]
InfoVis
2,019
The Impact of Immersion on Cluster Identification Tasks
10.1109/TVCG.2019.2934395
Recent developments in technology encourage the use of head-mounted displays (HMDs) as a medium to explore visualizations in virtual realities (VRs). VR environments (VREs) enable new, more immersive visualization design spaces compared to traditional computer screens. Previous studies in different domains, such as medicine, psychology, and geology, report a positive effect of immersion, e.g., on learning performance or phobia treatment effectiveness. Our work presented in this paper assesses the applicability of those findings to a common task from the information visualization (InfoVis) domain. We conducted a quantitative user study to investigate the impact of immersion on cluster identification tasks in scatterplot visualizations. The main experiment was carried out with 18 participants in a within-subjects setting using four different visualizations, (1) a 2D scatterplot matrix on a screen, (2) a 3D scatterplot on a screen, (3) a 3D scatterplot miniature in a VRE and (4) a fully immersive 3D scatterplot in a VRE. The four visualization design spaces vary in their level of immersion, as shown in a supplementary study. The results of our main study indicate that task performance differs between the investigated visualization design spaces in terms of accuracy, efficiency, memorability, sense of orientation, and user preference. In particular, the 2D visualization on the screen performed worse compared to the 3D visualizations with regard to the measured variables. The study shows that an increased level of immersion can be a substantial benefit in the context of 3D data and cluster detection.
false
false
[ "Matthias Kraus", "Niklas Weiler", "Daniela Oelke", "Johannes Kehrer", "Daniel A. Keim", "Johannes Fuchs 0001" ]
[]
[]
[]
InfoVis
2,019
The Perceptual Proxies of Visual Comparison
10.1109/TVCG.2019.2934786
Perceptual tasks in visualizations often involve comparisons. Of two sets of values depicted in two charts, which set had values that were the highest overall? Which had the widest range? Prior empirical work found that the performance on different visual comparison tasks (e.g., “biggest delta”, “biggest correlation”) varied widely across different combinations of marks and spatial arrangements. In this paper, we expand upon these combinations in an empirical evaluation of two new comparison tasks: the “biggest mean” and “biggest range” between two sets of values. We used a staircase procedure to titrate the difficulty of the data comparison to assess which arrangements produced the most precise comparisons for each task. We find visual comparisons of biggest mean and biggest range are supported by some chart arrangements more than others, and that this pattern is substantially different from the pattern for other tasks. To synthesize these dissonant findings, we argue that we must understand which features of a visualization are actually used by the human visual system to solve a given task. We call these perceptual proxies. For example, when comparing the means of two bar charts, the visual system might use a “Mean length” proxy that isolates the actual lengths of the bars and then constructs a true average across these lengths. Alternatively, it might use a “Hull Area” proxy that perceives an implied hull bounded by the bars of each chart and then compares the areas of these hulls. We propose a series of potential proxies across different tasks, marks, and spatial arrangements. Simple models of these proxies can be empirically evaluated for their explanatory power by matching their performance to human performance across these marks, arrangements, and tasks. We use this process to highlight candidates for perceptual proxies that might scale more broadly to explain performance in visual comparison.
false
false
[ "Nicole Jardine", "Brian D. Ondov", "Niklas Elmqvist", "Steven Franconeri" ]
[ "HM" ]
[]
[]
InfoVis
2,019
The Role of Latency and Task Complexity in Predicting Visual Search Behavior
10.1109/TVCG.2019.2934556
Latency in a visualization system is widely believed to affect user behavior in measurable ways, such as requiring the user to wait for the visualization system to respond, leading to interruption of the analytic flow. While this effect is frequently observed and widely accepted, precisely how latency affects different analysis scenarios is less well understood. In this paper, we examine the role of latency in the context of visual search, an essential task in data foraging and exploration using visualization. We conduct a series of studies on Amazon Mechanical Turk and find that under certain conditions, latency is a statistically significant predictor of visual search behavior, which is consistent with previous studies. However, our results also suggest that task type, task complexity, and other factors can modulate the effect of latency, in some cases rendering latency statistically insignificant in predicting user behavior. This suggests a more nuanced view of the role of latency than previously reported. Building on these results and the findings of prior studies, we propose design guidelines for measuring and interpreting the effects of latency when evaluating performance on visual search tasks.
false
false
[ "Leilani Battle", "R. Jordan Crouser", "Audace Nakeshimana", "Ananda Montoly", "Remco Chang", "Michael Stonebraker" ]
[]
[]
[]
InfoVis
2,019
There Is No Spoon: Evaluating Performance, Space Use, and Presence with Expert Domain Users in Immersive Analytics
10.1109/TVCG.2019.2934803
Immersive analytics turns the very space surrounding the user into a canvas for data analysis, supporting human cognitive abilities in myriad ways. We present the results of a design study, contextual inquiry, and longitudinal evaluation involving professional economists using a Virtual Reality (VR) system for multidimensional visualization to explore actual economic data. Results from our preregistered evaluation highlight the varied use of space depending on context (exploration vs. presentation), the organization of space to support work, and the impact of immersion on navigation and orientation in the 3D analysis space.
false
false
[ "Andrea Batch", "Andrew Cunningham", "Maxime Cordeil", "Niklas Elmqvist", "Tim Dwyer", "Bruce H. Thomas", "Kim Marriott" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "https://osf.io/wzqbu", "icon": "paper" } ]
InfoVis
2,019
Toward Objective Evaluation of Working Memory in Visualizations: A Case Study Using Pupillometry and a Dual-Task Paradigm
10.1109/TVCG.2019.2934286
Cognitive science has established widely used and validated procedures for evaluating working memory in numerous applied domains, but surprisingly few studies have employed these methodologies to assess claims about the impacts of visualizations on working memory. The lack of information visualization research that uses validated procedures for measuring working memory may be due, in part, to the absence of cross-domain methodological guidance tailored explicitly to the unique needs of visualization research. This paper presents a set of clear, practical, and empirically validated methods for evaluating working memory during visualization tasks and provides readers with guidance in selecting an appropriate working memory evaluation paradigm. As a case study, we illustrate multiple methods for evaluating working memory in a visual-spatial aggregation task with geospatial data. In a dual-task experimental design (simultaneous performance of several tasks compared to single-task performance), measures of task completion times and pupillometry revealed the working memory demands associated with both task difficulty and dual-tasking. Pupillometry demonstrated that participants' pupils were significantly larger when they were completing a more difficult task and when multitasking. We propose that researchers interested in the relative differences in working memory between visualizations should consider a converging methods approach, where physiological measures and behavioral measures of working memory are employed to generate a rich evaluation of visualization effort.
false
false
[ "Lace M. K. Padilla", "Spencer C. Castro", "P. Samuel Quinan", "Ian T. Ruginski", "Sarah H. Creem-Regehr" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "https://osf.io/zj6tp", "icon": "paper" } ]
InfoVis
2,019
Towards Automated Infographic Design: Deep Learning-based Auto-Extraction of Extensible Timeline
10.1109/TVCG.2019.2934810
Designers need to consider not only perceptual effectiveness but also visual styles when creating an infographic. This process can be difficult and time consuming for professional designers, not to mention non-expert users, leading to the demand for automated infographics design. As a first step, we focus on timeline infographics, which have been widely used for centuries. We contribute an end-to-end approach that automatically extracts an extensible timeline template from a bitmap image. Our approach adopts a deconstruction and reconstruction paradigm. At the deconstruction stage, we propose a multi-task deep neural network that simultaneously parses two kinds of information from a bitmap timeline: 1) the global information, i.e., the representation, scale, layout, and orientation of the timeline, and 2) the local information, i.e., the location, category, and pixels of each visual element on the timeline. At the reconstruction stage, we propose a pipeline with three techniques, i.e., Non-Maximum Merging, Redundancy Recover, and DL GrabCut, to extract an extensible template from the infographic, by utilizing the deconstruction results. To evaluate the effectiveness of our approach, we synthesize a timeline dataset (4296 images) and collect a real-world timeline dataset (393 images) from the Internet. We first report quantitative evaluation results of our approach over the two datasets. Then, we present examples of automatically extracted templates and timelines automatically generated based on these templates to qualitatively demonstrate the performance. The results confirm that our approach can effectively extract extensible templates from real-world timeline infographics.
false
false
[ "Zhutian Chen", "Yun Wang 0012", "Qianwen Wang", "Yong Wang 0021", "Huamin Qu" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/1907.13550v3", "icon": "paper" } ]
InfoVis
2,019
Uncertainty-Aware Principal Component Analysis
10.1109/TVCG.2019.2934812
We present a technique to perform dimensionality reduction on data that is subject to uncertainty. Our method is a generalization of traditional principal component analysis (PCA) to multivariate probability distributions. In comparison to non-linear methods, linear dimensionality reduction techniques have the advantage that the characteristics of such probability distributions remain intact after projection. We derive a representation of the PCA sample covariance matrix that respects potential uncertainty in each of the inputs, building the mathematical foundation of our new method: uncertainty-aware PCA. In addition to the accuracy and performance gained by our approach over sampling-based strategies, our formulation allows us to perform sensitivity analysis with regard to the uncertainty in the data. For this, we propose factor traces as a novel visualization that enables a better understanding of the influence of uncertainty on the chosen principal components. We provide multiple examples of our technique using real-world datasets. As a special case, we show how to propagate multivariate normal distributions through PCA in closed form. Furthermore, we discuss extensions and limitations of our approach.
false
false
[ "Jochen Görtler", "Thilo Spinner", "Dirk Streeb", "Daniel Weiskopf", "Oliver Deussen" ]
[]
[]
[]
InfoVis
2,019
VisTA: Integrating Machine Intelligence with Visualization to Support the Investigation of Think-Aloud Sessions
10.1109/TVCG.2019.2934797
Think-aloud protocols are widely used by user experience (UX) practitioners in usability testing to uncover issues in user interface design. It is often arduous to analyze large amounts of recorded think-aloud sessions and few UX practitioners have an opportunity to get a second perspective during their analysis due to time and resource constraints. Inspired by the recent research that shows subtle verbalization and speech patterns tend to occur when users encounter usability problems, we take the first step to design and evaluate an intelligent visual analytics tool that leverages such patterns to identify usability problem encounters and present them to UX practitioners to assist their analysis. We first conducted and recorded think-aloud sessions, and then extracted textual and acoustic features from the recordings and trained machine learning (ML) models to detect problem encounters. Next, we iteratively designed and developed a visual analytics tool, VisTA, which enables dynamic investigation of think-aloud sessions with a timeline visualization of ML predictions and input features. We conducted a between-subjects laboratory study to compare three conditions, i.e., VisTA, VisTASimple (no visualization of the ML's input features), and Baseline (no ML information at all), with 30 UX professionals. The findings show that UX professionals identified more problem encounters when using VisTA than Baseline by leveraging the problem visualization as an overview, anticipations, and anchors as well as the feature visualization as a means to understand what ML considers and omits. Our findings also provide insights into how they treated ML, dealt with (dis)agreement with ML, and reviewed the videos (i.e., play, pause, and rewind).
false
false
[ "Mingming Fan 0001", "Ke Wu", "Jian Zhao 0010", "Yue Li", "Winter Wei", "Khai N. Truong" ]
[]
[]
[]
InfoVis
2,019
Visualizing a Moving Target: A Design Study on Task Parallel Programs in the Presence of Evolving Data and Concerns
10.1109/TVCG.2019.2934285
Common pitfalls in visualization projects include lack of data availability and the domain users' needs and focus changing too rapidly for the design process to complete. While it is often prudent to avoid such projects, we argue it can be beneficial to engage them in some cases as the visualization process can help refine data collection, solving a “chicken and egg” problem of having the data and tools to analyze it. We found this to be the case in the domain of task parallel computing where such data and tooling is an open area of research. Despite these hurdles, we conducted a design study. Through a tightly-coupled iterative design process, we built Atria, a multi-view execution graph visualization to support performance analysis. Atria simplifies the initial representation of the execution graph by aggregating nodes as related to their line of code. We deployed Atria on multiple platforms, some requiring design alteration. We describe how we adapted the design study methodology to the “moving target” of both the data and the domain experts' concerns and how this movement kept both the visualization and programming project healthy. We reflect on our process and discuss what factors allow the project to be successful in the presence of changing data and user needs.
false
false
[ "Katy Williams", "Alex Bigelow", "Katherine E. Isaacs" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/1905.13135v4", "icon": "paper" } ]
InfoVis
2,019
What is Interaction for Data Visualization?
10.1109/TVCG.2019.2934283
Interaction is fundamental to data visualization, but what “interaction” means in the context of visualization is ambiguous and confusing. We argue that this confusion is due to a lack of consensual definition. To tackle this problem, we start by synthesizing an inclusive view of interaction in the visualization community – including insights from information visualization, visual analytics and scientific visualization, as well as the input of both senior and junior visualization researchers. Once this view takes shape, we look at how interaction is defined in the field of human-computer interaction (HCI). By extracting commonalities and differences between the views of interaction in visualization and in HCI, we synthesize a definition of interaction for visualization. Our definition is meant to be a thinking tool and inspire novel and bolder interaction design practices. We hope that by better understanding what interaction in visualization is and what it can be, we will enrich the quality of interaction in visualization systems and empower those who use them.
false
false
[ "Evanthia Dimara", "Charles Perin" ]
[]
[]
[]
InfoVis
2,019
Why Authors Don't Visualize Uncertainty
10.1109/TVCG.2019.2934287
Clear presentation of uncertainty is an exception rather than rule in media articles, data-driven reports, and consumer applications, despite proposed techniques for communicating sources of uncertainty in data. This work considers, Why do so many visualization authors choose not to visualize uncertainty? I contribute a detailed characterization of practices, associations, and attitudes related to uncertainty communication among visualization authors, derived from the results of surveying 90 authors who regularly create visualizations for others as part of their work, and interviewing thirteen influential visualization designers. My results highlight challenges that authors face and expose assumptions and inconsistencies in beliefs about the role of uncertainty in visualization. In particular, a clear contradiction arises between authors' acknowledgment of the value of depicting uncertainty and the norm of omitting direct depiction of uncertainty. To help explain this contradiction, I present a rhetorical model of uncertainty omission in visualization-based communication. I also adapt a formal statistical model of how viewers judge the strength of a signal in a visualization to visualization-based communication, to argue that uncertainty communication necessarily reduces degrees of freedom in viewers' statistical inferences. I conclude with recommendations for how visualization research on uncertainty communication could better serve practitioners' current needs and values while deepening understanding of assumptions that reinforce uncertainty omission.
false
false
[ "Jessica Hullman" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/1908.01697v1", "icon": "paper" } ]
InfoVis
2,019
Winglets: Visualizing Association with Uncertainty in Multi-class Scatterplots
10.1109/TVCG.2019.2934811
This work proposes Winglets, an enhancement to the classic scatterplot to better perceptually pronounce multiple classes by improving the perception of association and uncertainty of points to their related cluster. Designed as a pair of dual-sided strokes belonging to a data point, Winglets leverage the Gestalt principle of Closure to shape the perception of the form of the clusters, rather than use an explicit divisive encoding. Through a subtle design of two dominant attributes, length and orientation, Winglets enable viewers to perform a mental completion of the clusters. A controlled user study was conducted to examine the efficiency of Winglets in perceiving the cluster association and the uncertainty of certain points. The results show Winglets form a more prominent association of points into clusters and improve the perception of associating uncertainty.
false
false
[ "Min Lu 0002", "Shuaiqi Wang", "Joel Lanir", "Noa Fish", "Yang Yue", "Daniel Cohen-Or", "Hui Huang 0004" ]
[]
[]
[]
EuroVis
2,019
A framework for GPU-accelerated exploration of massive time-varying rectilinear scalar volumes
10.1111/cgf.13671
We introduce a novel flexible approach to spatiotemporal exploration of rectilinear scalar volumes. Our out‐of‐core representation, based on per‐frame levels of hierarchically tiled non‐redundant 3D grids, efficiently supports spatiotemporal random access and streaming to the GPU in compressed formats. A novel low‐bitrate codec able to store into fixed‐size pages a variable‐rate approximation based on sparse coding with learned dictionaries is exploited to meet stringent bandwidth constraint during time‐critical operations, while a near‐lossless representation is employed to support high‐quality static frame rendering. A flexible high‐speed GPU decoder and raycasting framework mixes and matches GPU kernels performing parallel object‐space and image‐space operations for seamless support, on fat and thin clients, of different exploration use cases, including animation and temporal browsing, dynamic exploration of single frames, and high‐quality snapshots generated from near‐lossless data. The quality and performance of our approach are demonstrated on large data sets with thousands of multi‐billion‐voxel frames.
false
false
[ "Fabio Marton", "Marco Agus", "Enrico Gobbetti" ]
[ "HM" ]
[]
[]
EuroVis
2,019
A Geometric Optimization Approach for the Detection and Segmentation of Multiple Aneurysms
10.1111/cgf.13699
We present a method for detecting and segmenting aneurysms in blood vessels that facilitates the assessment of risks associated with the aneurysms. The detection and analysis of aneurysms is important for medical diagnosis as aneurysms bear the risk of rupture with fatal consequences for the patient. For risk assessment and treatment planning, morphological descriptors, such as the height and width of the aneurysm, are used. Our system enables the fast detection, segmentation and analysis of single and multiple aneurysms. The method proceeds in two stages plus an optional third stage in which the user interacts with the system. First, a set of aneurysm candidate regions is created by segmenting regions of the vessels. Second, the aneurysms are detected by a classification of the candidates. The third stage allows users to adjust and correct the result of the previous stages using a brushing interface. When the segmentation of the aneurysm is complete, the corresponding ostium curves and morphological descriptors are computed and a report including the results of the analysis and renderings of the aneurysms is generated. The novelty of our approach lies in combining an analytic characterization of aneurysms and vessels to generate a list of candidate regions with a classifier trained on data to identify the aneurysms in the candidate list. The candidate generation is modeled as a global combinatorial optimization problem that is based on a local geometric characterization of aneurysms and vessels and can be efficiently solved using a graph cut algorithm. For the aneurysm classification scheme, we identified four suitable features and modeled appropriate training data. An important aspect of our approach is that the resulting system is fast enough to allow for user interaction with the global optimization by specifying additional constraints via a brushing interface.
false
false
[ "Kai Lawonn", "Monique Meuschke", "Ralph Wickenhöfer", "Bernhard Preim", "Klaus Hildebrandt" ]
[]
[]
[]
EuroVis
2,019
A Random Sampling O(n) Force-calculation Algorithm for Graph Layouts
10.1111/cgf.13724
This paper proposes a linear‐time repulsive‐force‐calculation algorithm with sub‐linear auxiliary space requirements, achieving an asymptotic improvement over the Barnes‐Hut and Fast Multipole Method force‐calculation algorithms. The algorithm, named random vertex sampling (RVS), achieves its speed by updating a random sample of vertices at each iteration, each with a random sample of repulsive forces. This paper also proposes a combination algorithm that uses RVS to derive an initial layout and then applies Barnes‐Hut to refine the layout. An evaluation of RVS and the combination algorithm compares their speed and quality on 109 graphs against a Barnes‐Hut layout algorithm. The RVS algorithm performs up to 6.1 times faster on the tested graphs while maintaining comparable layout quality. The combination algorithm also performs faster than Barnes‐Hut, but produces layouts that are more symmetric than using RVS alone. Data and code: https://osf.io/nb7m8/
false
false
[ "Robert Gove" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "https://osf.io/2vpe4", "icon": "paper" } ]
EuroVis
2,019
A Review of Guidance Approaches in Visual Data Analysis: A Multifocal Perspective
10.1111/cgf.13730
Visual data analysis can be envisioned as a collaboration of the user and the computational system with the aim of completing a given task. Pursuing an effective system-user integration, in which the system actively helps the user to reach his/her analysis goal, has been a focus of visualization research for quite some time. However, this problem is still largely unsolved. As a result, users might be overwhelmed by powerful but complex visual analysis systems, which also limits their ability to produce insightful results. In this context, guidance is a promising step towards enabling an effective mixed-initiative collaboration to promote the visual analysis. However, how guidance should be put into practice is still to be unravelled. Thus, we conducted a comprehensive literature review and provide an overview of how guidance is tackled by different approaches in visual analysis systems. We distinguish between guidance that is provided by the system to support the user, and guidance that is provided by the user to support the system. By identifying open problems, we highlight promising research directions and point to missing factors that are needed to enable the envisioned human-computer collaboration, and thus, promote a more effective visual data analysis.
false
false
[ "Davide Ceneda", "Theresia Gschwandtner", "Silvia Miksch" ]
[]
[]
[]