Conference           string   (6 distinct values)
Year                 int64    (range 1.99k–2.03k)
Title                string   (length 8–187)
DOI                  string   (length 16–32)
Abstract             string   (length 128–7.15k)
Accessible           bool     (2 classes)
Early                bool     (2 classes)
AuthorNames-Deduped  list     (length 1–24)
Award                list     (length 0–2)
Resources            list     (length 0–5)
ResourceLinks        list     (length 0–10)
Vis
2022
GenoREC: A Recommendation System for Interactive Genomics Data Visualization
10.1109/TVCG.2022.3209407
Interpretation of genomics data is critically reliant on the application of a wide range of visualization tools. A large number of visualization techniques for genomics data and different analysis tasks pose a significant challenge for analysts: which visualization technique is most likely to help them generate insights into their data? Since genomics analysts typically have limited training in data visualization, their choices are often based on trial and error or guided by technical details, such as data formats that a specific tool can load. This approach prevents them from making effective visualization choices for the many combinations of data types and analysis questions they encounter in their work. Visualization recommendation systems assist non-experts in creating data visualizations by recommending appropriate visualizations based on the data and task characteristics. However, existing visualization recommendation systems are not designed to handle domain-specific problems. To address these challenges, we designed GenoREC, a novel visualization recommendation system for genomics. GenoREC enables genomics analysts to select effective visualizations based on a description of their data and analysis tasks. Here, we present the recommendation model that uses a knowledge-based method for choosing appropriate visualizations and a web application that enables analysts to input their requirements, explore recommended visualizations, and export them for use. Furthermore, we present the results of two user studies demonstrating that GenoREC recommends visualizations that are both accepted by domain experts and suited to address the given genomics analysis problem. All supplemental materials are available at https://osf.io/y73pt/.
false
false
[ "Aditeya Pandey", "Sehi L'Yi", "Qianwen Wang", "Michelle Borkin", "Nils Gehlenborg" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "https://osf.io/rscb4", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/MK8OcbGlaCk", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/UmYxcrR1PmY", "icon": "video" } ]
Vis
2022
Geo-Storylines: Integrating Maps into Storyline Visualizations
10.1109/TVCG.2022.3209480
Storyline visualizations are a powerful way to compactly visualize how the relationships between people evolve over time. Real-world relationships often also involve space, for example the cities that two political rivals visited together or alone over the years. By default, Storyline visualizations only implicitly show geospatial co-occurrence between people (drawn as lines), by bringing their lines together. Even the few designs that do explicitly show geographic locations only do so in abstract ways (e.g., annotations) and do not communicate geospatial information, such as the direction or extent of their political campaigns. We introduce Geo-Storylines, a collection of visualization designs that integrate geospatial context into Storyline visualizations, using different strategies for compositing time and space. Our contribution is twofold. First, we present the results of a sketching workshop with 11 participants, which we used to derive a design space for integrating maps into Storylines. Second, by analyzing the strengths and weaknesses of the potential designs of the design space in terms of legibility and ability to scale to multiple relationships, we extract the three most promising: Time Glyphs, Coordinated Views, and Map Glyphs. We compare these three techniques first in a controlled study with 18 participants, under five different geospatial tasks and two maps of different complexity. We additionally collected informal feedback about their usefulness from domain experts in data journalism. Our results indicate that, as expected, detailed performance depends on the task. Nevertheless, Coordinated Views remain a highly effective and preferred technique across the board.
false
false
[ "Golina Hulstein", "Vanessa Peña Araya", "Anastasia Bezerianos" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/sPzsQqHDSGo", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/S_WISD78cfs", "icon": "video" } ]
Vis
2022
GRay: Ray Casting for Visualization and Interactive Data Exploration of Gaussian Mixture Models
10.1109/TVCG.2022.3209374
The Gaussian mixture model (GMM) describes the distribution of random variables from several different populations. GMMs have widespread applications in probability theory, statistics, machine learning for unsupervised cluster analysis and topic modeling, as well as in deep learning pipelines. So far, few efforts have been made to explore the underlying point distribution in combination with the GMMs, in particular when the data becomes high-dimensional and when the GMMs are composed of many Gaussians. We present an analysis tool comprising various GPU-based visualization techniques to explore such complex GMMs. To facilitate the exploration of high-dimensional data, we provide a novel navigation system to analyze the underlying data. Instead of projecting the data to 2D, we utilize interactive 3D views to better support users in understanding the spatial arrangements of the Gaussian distributions. The interactive system is composed of two parts: (1) raycasting-based views that visualize cluster memberships and spatial arrangements, and support the discovery of new modes, and (2) overview visualizations that enable the comparison of Gaussians with each other, as well as small multiples of different choices of basis vectors. Users are supported in their exploration with customization tools and smooth camera navigations. Our tool was developed and assessed by five domain experts, and its usefulness was evaluated with 23 participants. To demonstrate its effectiveness, we identify interesting features in several data sets.
false
false
[ "Kai Lawonn", "Monique Meuschke", "Pepe Eulzer", "Matthias Mitterreiter", "Joachim Giesen", "Tobias Günther" ]
[ "HM" ]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/Vh9iA5A-HNo", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/4KbR6BNn-ws", "icon": "video" } ]
Vis
2022
HetVis: A Visual Analysis Approach for Identifying Data Heterogeneity in Horizontal Federated Learning
10.1109/TVCG.2022.3209347
Horizontal federated learning (HFL) enables distributed clients to train a shared model while preserving their data privacy. In training high-quality HFL models, the data heterogeneity among clients is one of the major concerns. However, due to the security issue and the complexity of deep learning models, it is challenging to investigate data heterogeneity across different clients. To address this issue, based on a requirement analysis, we developed a visual analytics tool, HetVis, for participating clients to explore data heterogeneity. We identify data heterogeneity through comparing prediction behaviors of the global federated model and the stand-alone model trained with local data. Then, a context-aware clustering of the inconsistent records is performed to provide a summary of data heterogeneity. In combination with the proposed comparison techniques, we develop a novel set of visualizations to identify heterogeneity issues in HFL. We designed three case studies to illustrate how HetVis can assist client analysts in understanding different types of heterogeneity issues. Expert reviews and a comparative study demonstrate the effectiveness of HetVis.
false
false
[ "Xumeng Wang", "Wei Chen 0001", "Jiazhi Xia", "Zhen Wen", "Rongchen Zhu", "Tobias Schreck" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2208.07491v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/bboSdZ294x0", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/mN4rlnMSD7Y", "icon": "video" } ]
Vis
2022
HiTailor: Interactive Transformation and Visualization for Hierarchical Tabular Data
10.1109/TVCG.2022.3209354
Tabular visualization techniques integrate visual representations with tabular data to avoid additional cognitive load caused by splitting users' attention. However, most of the existing studies focus on simple flat tables instead of hierarchical tables, whose complex structure limits the expressiveness of visualization results and affects users' efficiency in visualization construction. We present HiTailor, a technique for presenting and exploring hierarchical tables. HiTailor constructs an abstract model, which defines row/column headings as biclustering and hierarchical structures. Based on our abstract model, we identify three pairs of operators, Swap/Transpose, ToStacked/ToLinear, Fold/Unfold, for transformations of hierarchical tables to support users' comprehensive explorations. After transformation, users can specify a cell or block of interest in hierarchical tables as a TableUnit for visualization, and HiTailor recommends other related TableUnits according to the abstract model using different mechanisms. We demonstrate the usability of the HiTailor system through a comparative study and a case study with domain experts, showing that HiTailor can present and explore hierarchical tables from different viewpoints. HiTailor is available at https://github.com/bitvis2021/HiTailor.
false
false
[ "Guozheng Li 0002", "Runfei Li", "Zicheng Wang", "Chi Harold Liu", "Min Lu 0002", "Guoren Wang" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2208.05821v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/nrSju2-lqCc", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/Eem27BmZfXs", "icon": "video" } ]
Vis
2022
How Do Viewers Synthesize Conflicting Information from Data Visualizations?
10.1109/TVCG.2022.3209467
Scientific knowledge develops through cumulative discoveries that build on, contradict, contextualize, or correct prior findings. Scientists and journalists often communicate these incremental findings to lay people through visualizations and text (e.g., the positive and negative effects of caffeine intake). Consequently, readers need to integrate diverse and contrasting evidence from multiple sources to form opinions or make decisions. However, the underlying mechanism for synthesizing information from multiple visualizations remains under-explored. To address this knowledge gap, we conducted a series of four experiments ($\mathrm{N}=1166$) in which participants synthesized empirical evidence from a pair of line charts presented sequentially. In Experiment 1, we administered a baseline condition with charts depicting no specific context where participants held no strong belief. To test for the generalizability, we introduced real-world scenarios to our visualizations in Experiment 2 and added accompanying text descriptions similar to online news articles or blog posts in Experiment 3. In all three experiments, we varied the relative direction and magnitude of line slopes within the chart pairs. We found that participants tended to weigh the positive slope more when the two charts depicted relationships in the opposite direction (e.g., one positive slope and one negative slope). Participants tended to weigh the less steep slope more when the two charts depicted relationships in the same direction (e.g., both positive). Through these experiments, we characterize participants' synthesis behaviors depending on the relationship between the information they viewed, contribute to theories describing underlying cognitive mechanisms in information synthesis, and describe design implications for data storytelling.
false
false
[ "Prateek Mantri", "Hariharan Subramonyam", "Audrey L. Michal", "Cindy Xiong" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2208.03828v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/nudl0CYXZBU", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/Un-d8rKSqrs", "icon": "video" } ]
Vis
2022
IDLat: An Importance-Driven Latent Generation Method for Scientific Data
10.1109/TVCG.2022.3209419
Deep learning based latent representations have been widely used for numerous scientific visualization applications such as isosurface similarity analysis, volume rendering, flow field synthesis, and data reduction, just to name a few. However, existing latent representations are mostly generated from raw data in an unsupervised manner, which makes it difficult to incorporate domain interest to control the size of the latent representations and the quality of the reconstructed data. In this paper, we present a novel importance-driven latent representation to facilitate domain-interest-guided scientific data visualization and analysis. We utilize spatial importance maps to represent various scientific interests and take them as the input to a feature transformation network to guide latent generation. We further reduced the latent size by a lossless entropy encoding algorithm trained together with the autoencoder, improving the storage and memory efficiency. We qualitatively and quantitatively evaluate the effectiveness and efficiency of latent representations generated by our method with data from multiple scientific visualization applications.
false
false
[ "Jingyi Shen", "Haoyu Li", "Jiayi Xu 0001", "Ayan Biswas", "Han-Wei Shen" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2208.03345v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/0rCVIDzQG6Y", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/wJkr7i_yRXg", "icon": "video" } ]
Vis
2022
In Defence of Visual Analytics Systems: Replies to Critics
10.1109/TVCG.2022.3209360
The last decade has witnessed many visual analytics (VA) systems that make successful applications to wide-ranging domains like urban analytics and explainable AI. However, their research rigor and contributions have been extensively challenged within the visualization community. We come in defence of VA systems by contributing two interview studies gathering criticisms and responses to those criticisms. First, we interview 24 researchers to collect criticisms from the review comments on their VA work. Through an iterative coding and refinement process, the interview feedback is summarized into a list of 36 common criticisms. Second, we interview 17 researchers to validate our list and collect their responses, thereby discussing implications for defending and improving the scientific values and rigor of VA systems. We highlight that the presented knowledge is deep, extensive, but also imperfect, provocative, and controversial, and thus recommend reading with an inclusive and critical eye. We hope our work can provide thoughts and foundations for conducting VA research and spark discussions that move the research field forward more rigorously and vibrantly.
false
false
[ "Aoyu Wu", "Dazhen Deng", "Furui Cheng", "Yingcai Wu", "Shixia Liu", "Huamin Qu" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2201.09772v2", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/i5bp51QSFCQ", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/KJMstdzDEeY", "icon": "video" } ]
Vis
2022
Incorporation of Human Knowledge into Data Embeddings to Improve Pattern Significance and Interpretability
10.1109/TVCG.2022.3209382
Embedding is a common technique for analyzing multi-dimensional data. However, the embedding projection cannot always form significant and interpretable visual structures that foreshadow underlying data patterns. We propose an approach that incorporates human knowledge into data embeddings to improve pattern significance and interpretability. The core idea is (1) externalizing tacit human knowledge as explicit sample labels and (2) adding a classification loss in the embedding network to encode samples' classes. The approach pulls samples of the same class with similar data features closer in the projection, leading to more compact (significant) and class-consistent (interpretable) visual structures. We give an embedding network with a customized classification loss to implement the idea and integrate the network into a visualization system to form a workflow that supports flexible class creation and pattern exploration. Patterns found on open datasets in case studies, subjects' performance in a user study, and quantitative experiment results illustrate the general usability and effectiveness of the approach.
false
false
[ "Jie Li 0006", "Chun-qi Zhou" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2209.11364v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/LUmmzKTx-rM", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/oQyBQv530RA", "icon": "video" } ]
Vis
2022
Interactive and Visual Prompt Engineering for Ad-hoc Task Adaptation with Large Language Models
10.1109/TVCG.2022.3209479
State-of-the-art neural language models can now be used to solve ad-hoc language tasks through zero-shot prompting without the need for supervised training. This approach has gained popularity in recent years, and researchers have demonstrated prompts that achieve strong accuracy on specific NLP tasks. However, finding a prompt for new tasks requires experimentation. Different prompt templates with different wording choices lead to significant accuracy differences. We present PromptIDE, a tool that allows users to experiment with prompt variations, visualize prompt performance, and iteratively optimize prompts. We developed a workflow that allows users to first focus on model feedback using small data before moving on to a large data regime that allows empirical grounding of promising prompts using quantitative measures of the task. The tool then allows easy deployment of the newly created ad-hoc models. We demonstrate the utility of PromptIDE (demo: http://prompt.vizhub.ai) and our workflow using several real-world use cases.
false
false
[ "Hendrik Strobelt", "Albert Webson", "Victor Sanh", "Benjamin Hoover", "Johanna Beyer", "Hanspeter Pfister", "Alexander M. Rush" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2208.07852v1", "icon": "paper" } ]
Vis
2022
Interactive Visual Analysis of Structure-borne Noise Data
10.1109/TVCG.2022.3209478
Numerical simulation has become omnipresent in the automotive domain, posing new challenges such as high-dimensional parameter spaces and large as well as incomplete and multi-faceted data. In this design study, we show how interactive visual exploration and analysis of high-dimensional, spectral data from noise simulation can facilitate design improvements in the context of conflicting criteria. Here, we focus on structure-borne noise, i.e., noise from vibrating mechanical parts. Detecting problematic noise sources early in the design and production process is essential for reducing a product's development costs and its time to market. In a close collaboration of visualization and automotive engineering, we designed a new, interactive approach to quickly identify and analyze critical noise sources, also contributing to an improved understanding of the analyzed system. Several carefully designed, interactive linked views enable the exploration of noises, vibrations, and harshness at multiple levels of detail, both in the frequency and spatial domain. This enables swift and smooth changes of perspective; selections in the frequency domain are immediately reflected in the spatial domain, and vice versa. Noise sources are quickly identified and shown in the context of their neighborhood, both in the frequency and spatial domain. We propose a novel drill-down view, especially tailored to noise data analysis. Split boxplots and synchronized 3D geometry views support comparison tasks. With this solution, engineers iterate over design optimizations much faster, while maintaining a good overview at each iteration. We evaluated the new approach in the automotive industry, studying noise simulation data for an internal combustion engine.
false
false
[ "Rainer Splechtna", "Denis Gracanin", "Goran Todorovic", "Stanislav Goja", "Boris Bedic", "Helwig Hauser", "Kresimir Matkovic" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2209.03083v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/3xxYS7aTSo4", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/k6SuFkvq4FY", "icon": "video" } ]
Vis
2022
Interactive Visual Cluster Analysis by Contrastive Dimensionality Reduction
10.1109/TVCG.2022.3209423
We propose a contrastive dimensionality reduction approach (CDR) for interactive visual cluster analysis. Although dimensionality reduction of high-dimensional data is widely used in visual cluster analysis in conjunction with scatterplots, there are several limitations on effective visual cluster analysis. First, it is non-trivial for an embedding to present clear visual cluster separation when keeping neighborhood structures. Second, as cluster analysis is a subjective task, user steering is required. However, it is also non-trivial to enable interactions in dimensionality reduction. To tackle these problems, we introduce contrastive learning into dimensionality reduction for high-quality embedding. We then redefine the gradient of the loss function to the negative pairs to enhance the visual cluster separation of embedding results. Based on the contrastive learning scheme, we employ link-based interactions to steer embeddings. After that, we implement a prototype visual interface that integrates the proposed algorithms and a set of visualizations. Quantitative experiments demonstrate that CDR outperforms existing techniques in terms of preserving correct neighborhood structures and improving visual cluster separation. The ablation experiment demonstrates the effectiveness of gradient redefinition. The user study verifies that CDR outperforms t-SNE and UMAP in the task of cluster identification. We also showcase two use cases on real-world datasets to present the effectiveness of link-based interactions.
false
false
[ "Jiazhi Xia", "Linquan Huang", "Weixing Lin", "Xin Zhao", "Jing Wu 0004", "Yang Chen", "Ying Zhao 0001", "Wei Chen 0001" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/jCzKrn_Sins", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/E-TnYjcNyW4", "icon": "video" } ]
Vis
2022
KiriPhys: Exploring New Data Physicalization Opportunities
10.1109/TVCG.2022.3209365
We present KiriPhys, a new type of data physicalization based on kirigami, a traditional Japanese art form that uses paper-cutting. Within the kirigami possibilities, we investigate how different aspects of cutting patterns offer opportunities for mapping data to both independent and dependent physical variables. As a first step towards understanding the data physicalization opportunities in KiriPhys, we conducted a qualitative study in which 12 participants interacted with four KiriPhys examples. Our observations of how people interact with, understand, and respond to KiriPhys suggest that KiriPhys: 1) provides new opportunities for interactive, layered data exploration, 2) introduces elastic expansion as a new sensation that can reveal data, and 3) offers data mapping possibilities while providing a pleasurable experience that stimulates curiosity and engagement.
false
false
[ "Foroozan Daneshzand", "Charles Perin", "Sheelagh Carpendale" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "https://osf.io/ra36e", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/KqPcvUkoyK4", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/nxg0FpRtrXk", "icon": "video" } ]
Vis
2022
LargeNetVis: Visual Exploration of Large Temporal Networks Based on Community Taxonomies
10.1109/TVCG.2022.3209477
Temporal (or time-evolving) networks are commonly used to model complex systems and the evolution of their components throughout time. Although these networks can be analyzed by different means, visual analytics stands out as an effective way for a pre-analysis before doing quantitative/statistical analyses to identify patterns, anomalies, and other behaviors in the data, thus leading to new insights and better decision-making. However, the large number of nodes, edges, and/or timestamps in many real-world networks may lead to polluted layouts that make the analysis inefficient or even infeasible. In this paper, we propose LargeNetVis, a web-based visual analytics system designed to assist in analyzing small and large temporal networks. It successfully achieves this goal by leveraging three taxonomies focused on network communities to guide the visual exploration process. The system is composed of four interactive visual components: the first (Taxonomy Matrix) presents a summary of the network characteristics, the second (Global View) gives an overview of the network evolution, the third (a node-link diagram) enables community- and node-level structural analysis, and the fourth (a Temporal Activity Map – TAM) shows the community- and node-level activity under a temporal perspective. We demonstrate the usefulness and effectiveness of LargeNetVis through two usage scenarios and a user study with 14 participants.
false
false
[ "Claudio D. G. Linhares", "Jean R. Ponciano", "Diogenes S. Pedro", "Luis Enrique Correa da Rocha", "Agma J. M. Traina", "Jorge Poco" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2208.04358v1", "icon": "paper" } ]
Vis
2022
Level Set Restricted Voronoi Tessellation for Large scale Spatial Statistical Analysis
10.1109/TVCG.2022.3209473
Spatial statistical analysis of multivariate volumetric data can be challenging due to scale, complexity, and occlusion. Advances in topological segmentation, feature extraction, and statistical summarization have helped overcome the challenges. This work introduces a new spatial statistical decomposition method based on level sets, connected components, and a novel variation of the restricted centroidal Voronoi tessellation that is better suited for spatial statistical decomposition and parallel efficiency. The resulting data structures organize features into a coherent nested hierarchy to support flexible and efficient out-of-core region-of-interest extraction. Next, we provide an efficient parallel implementation. Finally, an interactive visualization system based on this approach is designed and then applied to turbulent combustion data. The combined approach enables an interactive spatial statistical analysis workflow for large-scale data with a top-down approach through multiple-levels-of-detail that links phase space statistics with spatial features.
false
false
[ "Tyson Neuroth", "Martin Rieth", "Aditya Konduri", "Myoungkyu Lee", "Jacqueline Chen", "Kwan-Liu Ma" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2208.06970v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/QQN-LcMdhkY", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/-HMcxq7x_eQ", "icon": "video" } ]
Vis
2022
Lotse: A Practical Framework for Guidance in Visual Analytics
10.1109/TVCG.2022.3209393
Co-adaptive guidance aims to enable efficient human-machine collaboration in visual analytics, as proposed by multiple theoretical frameworks. This paper bridges the gap between such conceptual frameworks and practical implementation by introducing an accessible model of guidance and an accompanying guidance library, mapping theory into practice. We contribute a model of system-provided guidance based on design templates and derived strategies. We instantiate the model in a library called Lotse that allows specifying guidance strategies in definition files and generates running code from them. Lotse is the first guidance library using such an approach. It supports the creation of reusable guidance strategies to retrofit existing applications with guidance and fosters the creation of general guidance strategy patterns. We demonstrate its effectiveness through first-use case studies with VA researchers of varying guidance design expertise and find that they are able to effectively and quickly implement guidance with Lotse. Further, we analyze our framework's cognitive dimensions to evaluate its expressiveness and outline a summary of open research questions for aligning guidance practice with its intricate theory.
false
false
[ "Fabian Sperrle", "Davide Ceneda", "Mennatallah El-Assady" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2208.04434v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/AiCCyBacEcs", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/cYallUixgps", "icon": "video" } ]
Vis
2022
Measuring Effects of Spatial Visualization and Domain on Visualization Task Performance: A Comparative Study
10.1109/TVCG.2022.3209491
Understanding one's audience is foundational to creating high impact visualization designs. However, individual differences and cognitive abilities influence interactions with information visualization. Different user needs and abilities suggest that an individual's background could influence cognitive performance and interactions with visuals in a systematic way. This study builds on current research in domain-specific visualization and cognition to address if domain and spatial visualization ability combine to affect performance on information visualization tasks. We measure spatial visualization and visual task performance between those with tertiary education and professional profile in business, law & political science, and math & computer science. We conducted an online study with 90 participants using an established psychometric test to assess spatial visualization ability, and bar chart layouts rotated along Cartesian and polar coordinates to assess performance on spatially rotated data. Accuracy and response times varied with domain across chart types and task difficulty. We found that accuracy and time correlate with spatial visualization level, and education in math & computer science can indicate higher spatial visualization. Additionally, we found that motivational differences between domains could contribute to increased levels of accuracy. Our findings indicate discipline not only affects user needs and interactions with data visualization, but also cognitive traits. Our results can advance inclusive practices in visualization design and add to knowledge in domain-specific visual research that can empower designers across disciplines to create effective visualizations.
false
false
[ "Sara Tandon", "Alfie Abdul-Rahman", "Rita Borgo" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2209.04844v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/9fMhNt5XZSs", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/iQVkv1GjArs", "icon": "video" } ]
Vis
2022
MedChemLens: An Interactive Visual Tool to Support Direction Selection in Interdisciplinary Experimental Research of Medicinal Chemistry
10.1109/TVCG.2022.3209434
Interdisciplinary experimental science (e.g., medicinal chemistry) refers to the disciplines that integrate knowledge from different scientific backgrounds and involve experiments in the research process. Deciding “in what direction to proceed” is critical for the success of the research in such disciplines, since the time, money, and resource costs of the subsequent research steps depend largely on this decision. However, such a direction identification task is challenging in that researchers need to integrate information from large-scale, heterogeneous materials from all associated disciplines and summarize the related publications of which the core contributions are often showcased in diverse formats. The task also requires researchers to estimate the feasibility and potential in future experiments in the selected directions. In this work, we selected medicinal chemistry as a case and presented an interactive visual tool, MedChemLens, to assist medicinal chemists in choosing their intended directions of research. This task is also known as drug target (i.e., disease-linked proteins) selection. Given a candidate target name, MedChemLens automatically extracts the molecular features of drug compounds from chemical papers and clinical trial records, organizes them based on the drug structures, and interactively visualizes factors concerning subsequent experiments. We evaluated MedChemLens through a within-subjects study (N=16). Compared with the control condition (i.e., unrestricted online search without using our tool), participants who only used MedChemLens reported faster search, better-informed selections, higher confidence in their selections, and lower cognitive load.
false
false
[ "Chuhan Shi", "Fei Nie", "Yicheng Hu", "Yige Xu 0001", "Lei Chen 0002", "Xiaojuan Ma", "Qiong Luo 0001" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/ZiZ9i8QZLR0", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/_n2PNb6qzuM", "icon": "video" } ]
Vis
2,022
MEDLEY: Intent-based Recommendations to Support Dashboard Composition
10.1109/TVCG.2022.3209421
Despite the ever-growing popularity of dashboards across a wide range of domains, their authoring still remains a tedious and complex process. Current tools offer considerable support for creating individual visualizations but provide limited support for discovering groups of visualizations that can be collectively useful for composing analytic dashboards. To address this problem, we present Medley, a mixed-initiative interface that assists in dashboard composition by recommending dashboard collections (i.e., a logically grouped set of views and filtering widgets) that map to specific analytical intents. Users can specify dashboard intents (namely, measure analysis, change analysis, category analysis, or distribution analysis) explicitly through an input panel in the interface or implicitly by selecting data attributes and views of interest. The system recommends collections based on these analytic intents, and views and widgets can be selected to compose a variety of dashboards. Medley also provides a lightweight direct manipulation interface to configure interactions between views in a dashboard. Based on a study with 13 participants performing both targeted and open-ended tasks, we discuss how Medley's recommendations guide dashboard composition and facilitate different user workflows. Observations from the study identify potential directions for future work, including combining manual view specification with dashboard recommendations and designing natural language interfaces for dashboard authoring.
false
false
[ "Aditeya Pandey", "Arjun Srinivasan", "Vidya Setlur" ]
[ "HM" ]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2208.03175v1", "icon": "paper" } ]
Vis
2,022
MetaGlyph: Automatic Generation of Metaphoric Glyph-based Visualization
10.1109/TVCG.2022.3209447
Glyph-based visualization achieves an impressive graphic design when associated with comprehensive visual metaphors, which help audiences effectively grasp the conveyed information through revealing data semantics. However, creating such metaphoric glyph-based visualization (MGV) is not an easy task, as it requires not only a deep understanding of data but also professional design skills. This paper proposes MetaGlyph, an automatic system for generating MGVs from a spreadsheet. To develop MetaGlyph, we first conduct a qualitative analysis to understand the design of current MGVs from the perspectives of metaphor embodiment and glyph design. Based on the results, we introduce a novel framework for generating MGVs by metaphoric image selection and an MGV construction. Specifically, MetaGlyph automatically selects metaphors with corresponding images from online resources based on the input data semantics. We then integrate a Monte Carlo tree search algorithm that explores the design of an MGV by associating visual elements with data dimensions given the data importance, semantic relevance, and glyph non-overlap. The system also provides editing feedback that allows users to customize the MGVs according to their design preferences. We demonstrate the use of MetaGlyph through a set of examples, one usage scenario, and validate its effectiveness through a series of expert interviews.
false
false
[ "Lu Ying", "Xinhuan Shu", "Dazhen Deng", "Yuchen Yang", "Tan Tang", "Lingyun Yu 0001", "Yingcai Wu" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2209.05739v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/7VAaN2HhsJw", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/3Xq2cN06nos", "icon": "video" } ]
Vis
2,022
MosaicSets: Embedding Set Systems into Grid Graphs
10.1109/TVCG.2022.3209485
Visualizing sets of elements and their relations is an important research area in information visualization. In this paper, we present MosaicSets: a novel approach to create Euler-like diagrams from non-spatial set systems such that each element occupies one cell of a regular hexagonal or square grid. The main challenge is to find an assignment of the elements to the grid cells such that each set constitutes a contiguous region. As use case, we consider the research groups of a university faculty as elements, and the departments and joint research projects as sets. We aim at finding a suitable mapping between the research groups and the grid cells such that the department structure forms a base map layout. Our objectives are to optimize both the compactness of the entirety of all cells and of each set by itself. We show that computing the mapping is NP-hard. However, using integer linear programming we can solve real-world instances optimally within a few seconds. Moreover, we propose a relaxation of the contiguity requirement to visualize otherwise non-embeddable set systems. We present and discuss different rendering styles for the set overlays. Based on a case study with real-world data, our evaluation comprises quantitative measures as well as expert interviews.
false
false
[ "Peter Rottmann", "Markus Wallinger", "Annika Bonerath", "Sven Gedicke", "Martin Nöllenburg", "Jan-Henrik Haunert" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2208.07982v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/kvvDm_5661Q", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/hB8RRJeHuCA", "icon": "video" } ]
Vis
2,022
Multi-View Design Patterns and Responsive Visualization for Genomics Data
10.1109/TVCG.2022.3209398
A series of recent studies has focused on designing cross-resolution and cross-device visualizations, i.e., responsive visualization, a concept adopted from responsive web design. However, these studies mainly focused on visualizations with a single view to a small number of views, and there are still unresolved questions about how to design responsive multi-view visualizations. In this paper, we present a reusable and generalizable framework for designing responsive multi-view visualizations focused on genomics data. To gain a better understanding of existing design challenges, we review web-based genomics visualization tools in the wild. By characterizing tools based on a taxonomy of responsive designs, we find that responsiveness is rarely supported in existing tools. To distill insights from the survey results in a systematic way, we classify typical view composition patterns, such as “vertically long,” “horizontally wide,” “circular,” and “cross-shaped” compositions. We then identify their usability issues in different resolutions that stem from the composition patterns, as well as discussing approaches to address the issues and to make genomics visualizations responsive. By extending the Gosling visualization grammar to support responsive constructs, we show how these approaches can be supported. A valuable follow-up study would be taking different input modalities into account, such as mouse and touch interactions, which was not considered in our study.
false
false
[ "Sehi L'Yi", "Nils Gehlenborg" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "https://osf.io/pd7vq", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/vPc0sh_iVB0", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/dJVPjcxK_-Q", "icon": "video" } ]
Vis
2,022
Multiple Forecast Visualizations (MFVs): Trade-offs in Trust and Performance in Multiple COVID-19 Forecast Visualizations
10.1109/TVCG.2022.3209457
The prevalence of inadequate SARS-COV-2 (COVID-19) responses may indicate a lack of trust in forecasts and risk communication. However, no work has empirically tested how multiple forecast visualization choices impact trust and task-based performance. The three studies presented in this paper ($N=1299$) examine how visualization choices impact trust in COVID-19 mortality forecasts and how they influence performance in a trend prediction task. These studies focus on line charts populated with real-time COVID-19 data that varied the number and color encoding of the forecasts and the presence of best/worst-case forecasts. The studies reveal that trust in COVID-19 forecast visualizations initially increases with the number of forecasts and then plateaus after 6–9 forecasts. However, participants were most trusting of visualizations that showed less visual information, including a 95% confidence interval, single forecast, and grayscale encoded forecasts. Participants maintained high trust in intervals labeled with 50% and 25% and did not proportionally scale their trust to the indicated interval size. Despite the high trust, the 95% CI condition was the most likely to evoke predictions that did not correspond with the actual COVID-19 trend. Qualitative analysis of participants' strategies confirmed that many participants trusted both the simplistic visualizations and those with numerous forecasts. This work provides practical guides for how COVID-19 forecast visualizations influence trust, including recommendations for identifying the range where forecasts balance trade-offs between trust and task-based performance.
false
false
[ "Lace M. K. Padilla", "Racquel Fygenson", "Spencer C. Castro", "Enrico Bertini" ]
[ "BP" ]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "https://osf.io/2sq7j", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/O9IqmSVsfXI", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/6vQJYh7O3lg", "icon": "video" } ]
Vis
2,022
Multivariate Probabilistic Range Queries for Scalable Interactive 3D Visualization
10.1109/TVCG.2022.3209439
Large-scale scientific data, such as weather and climate simulations, often comprise a large number of attributes for each data sample, like temperature, pressure, humidity, and many more. Interactive visualization and analysis require filtering according to any desired combination of attributes, in particular logical AND operations, which is challenging for large data and many attributes. Many general data structures for this problem are built for and scale with a fixed number of attributes, and scalability of joint queries with arbitrary attribute subsets remains a significant problem. We propose a flexible probabilistic framework for multivariate range queries that decouples all attribute dimensions via projection, allowing any subset of attributes to be queried with full efficiency. Moreover, our approach is output-sensitive, mainly scaling with the cardinality of the query result rather than with the input data size. This is particularly important for joint attribute queries, where the query output is usually much smaller than the whole data set. Additionally, our approach can split query evaluation between user interaction and rendering, achieving much better scalability for interactive visualization than the previous state of the art. Furthermore, even when a multi-resolution strategy is used for visualization, queries are jointly evaluated at the finest data granularity, because our framework does not limit query accuracy to a fixed spatial subdivision.
false
false
[ "Amani Ageeli", "Alberto Jaspe Villanueva", "Ronell Sicat", "Florian Mannuß", "Peter Rautek", "Markus Hadwiger" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/8X8Ctt10oII", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/yrQRYjA_NTg", "icon": "video" } ]
Vis
2,022
NAS-Navigator: Visual Steering for Explainable One-Shot Deep Neural Network Synthesis
10.1109/TVCG.2022.3209361
The success of DL can be attributed to hours of parameter and architecture tuning by human experts. Neural Architecture Search (NAS) techniques aim to solve this problem by automating the search procedure for DNN architectures making it possible for non-experts to work with DNNs. Specifically, One-shot NAS techniques have recently gained popularity as they are known to reduce the search time for NAS techniques. One-Shot NAS works by training a large template network through parameter sharing which includes all the candidate NNs. This is followed by applying a procedure to rank its components through evaluating the possible candidate architectures chosen randomly. However, as these search models become increasingly powerful and diverse, they become harder to understand. Consequently, even though the search results work well, it is hard to identify search biases and control the search progression, hence a need for explainability and human-in-the-loop (HIL) One-Shot NAS. To alleviate these problems, we present NAS-Navigator, a visual analytics (VA) system aiming to solve three problems with One-Shot NAS; explainability, HIL design, and performance improvements compared to existing state-of-the-art (SOTA) techniques. NAS-Navigator gives full control of NAS back in the hands of the users while still keeping the perks of automated search, thus assisting non-expert users. Analysts can use their domain knowledge aided by cues from the interface to guide the search. Evaluation results confirm the performance of our improved One-Shot NAS algorithm is comparable to other SOTA techniques. While adding Visual Analytics (VA) using NAS-Navigator shows further improvements in search time and performance. We designed our interface in collaboration with several deep learning researchers and evaluated NAS-Navigator through a control experiment and expert interviews.
false
false
[ "Anjul Tyagi", "Cong Xie", "Klaus Mueller 0001" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2009.13008v3", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/DwCxfleJStg", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/xKcNSf9gGko", "icon": "video" } ]
Vis
2,022
No Grammar to Rule Them All: A Survey of JSON-style DSLs for Visualization
10.1109/TVCG.2022.3209460
There has been substantial growth in the use of JSON-based grammars, as well as other standard data serialization languages, to create visualizations. Each of these grammars serves a purpose: some focus on particular computational tasks (such as animation), some are concerned with certain chart types (such as maps), and some target specific data domains (such as ML). Despite the prominence of this interface form, there has been little detailed analysis of the characteristics of these languages. In this study, we survey and analyze the design and implementation of 57 JSON-style DSLs for visualization. We analyze these languages supported by a collected corpus of examples for each DSL (consisting of 4395 instances) across a variety of axes organized into concerns related to domain, conceptual model, language relationships, affordances, and general practicalities. We identify tensions throughout these areas, such as between formal and colloquial specifications, among types of users, and within the composition of languages. Through this work, we seek to support language implementers by elucidating the choices, opportunities, and tradeoffs in visualization DSL design.
false
false
[ "Andrew M. McNutt" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2207.07998v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/1GTqeZ4nKpk", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/eudQcdPhXDU", "icon": "video" } ]
Vis
2,022
OBTracker: Visual Analytics of Off-ball Movements in Basketball
10.1109/TVCG.2022.3209373
In a basketball play, players who are not in possession of the ball (i.e., off-ball players) can still effectively contribute to the team's offense, such as making a sudden move to create scoring opportunities. Analyzing the movements of off-ball players can thus facilitate the development of effective strategies for coaches. However, common basketball statistics (e.g., points and assists) primarily focus on what happens around the ball and are mostly result-oriented, making it challenging to objectively assess and fully understand the contributions of off-ball movements. To address these challenges, we collaborate closely with domain experts and summarize the multi-level requirements for off-ball movement analysis in basketball. We first establish an assessment model to quantitatively evaluate the offensive contribution of an off-ball movement considering both the position of players and the team cooperation. Based on the model, we design and develop a visual analytics system called OBTracker to support the multifaceted analysis of off-ball movements. OBTracker enables users to identify the frequency and effectiveness of off-ball movement patterns and learn the performance of different off-ball players. A tailored visualization based on the Voronoi diagram is proposed to help users interpret the contribution of off-ball movements from a temporal perspective. We conduct two case studies based on the tracking data from NBA games and demonstrate the effectiveness and usability of OBTracker through expert feedback.
false
false
[ "Yihong Wu", "Dazhen Deng", "Xiao Xie", "Moqi He", "Jie Xu", "Hongzeng Zhang", "Hui Zhang 0051", "Yingcai Wu" ]
[ "HM" ]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/AGYqHPqQ8_4", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/iKKbQScLptg", "icon": "video" } ]
Vis
2,022
On-Tube Attribute Visualization for Multivariate Trajectory Data
10.1109/TVCG.2022.3209400
Stylized tubes are an established visualization primitive for line data as encountered in many scientific fields, ranging from characteristic lines in flow fields, fiber tracks reconstructed from diffusion tensor imaging, to trajectories of moving objects as they arise from cyber-physical systems in many engineering disciplines. Typical challenges include large data set sizes demanding for efficient rendering techniques as well as a large number of attributes that cannot be mapped simultaneously to the basic visual attributes provided by a tube-based visualization. In this work, we tackle both challenges with a new on-tube visualization approach. We improve recent work on high-quality GPU ray casting of Hermite spline tubes supporting ambient occlusion and extend it by a new layered procedural texturing technique. In the proposed framework, a large number of data set attributes can be mapped simultaneously to a variety of glyphs and plots that are embedded in texture space and organized in layers. Efficient rendering with minimal data transfer is achieved by generating the glyphs procedurally and drawing them in a deferred shading pass. We integrated these techniques in a prototype visualization tool that facilitates flexible mapping of data set attributes to visual tube and glyph attributes. We studied our approach on a variety of example data from different fields and found it to provide a highly adaptable and extensible toolbox to quickly craft tailor-made tube-based trajectory visualizations.
false
false
[ "Benjamin Russig", "David Groß", "Raimund Dachselt", "Stefan Gumhold" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/no-xqx4VRQ8", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/0uB3HGtLuO4", "icon": "video" } ]
Vis
2,022
PC-Expo: A Metrics-Based Interactive Axes Reordering Method for Parallel Coordinate Displays
10.1109/TVCG.2022.3209392
Parallel coordinate plots (PCPs) have been widely used for high-dimensional (HD) data storytelling because they allow for presenting a large number of dimensions without distortions. The axes ordering in PCP presents a particular story from the data based on the user perception of PCP polylines. Existing works focus on directly optimizing for PCP axes ordering based on some common analysis tasks like clustering, neighborhood, and correlation. However, direct optimization for PCP axes based on these common properties is restrictive because it does not account for multiple properties occurring between the axes, and for local properties that occur in small regions in the data. Also, many of these techniques do not support the human-in-the-loop (HIL) paradigm, which is crucial (i) for explainability and (ii) in cases where no single reordering scheme fits the users' goals. To alleviate these problems, we present PC-Expo, a real-time visual analytics framework for all-in-one PCP line pattern detection and axes reordering. We studied the connection of line patterns in PCPs with different data analysis tasks and datasets. PC-Expo expands prior work on PCP axes reordering by developing real-time, local detection schemes for the 12 most common analysis tasks (properties). Users can choose the story they want to present with PCPs by optimizing directly over their choice of properties. These properties can be ranked, or combined using individual weights, creating a custom optimization scheme for axes reordering. Users can control the granularity at which they want to work with their detection scheme in the data, allowing exploration of local regions. PC-Expo also supports HIL axes reordering via local-property visualization, which shows the regions of granular activity for every axis pair. Local-property visualization is helpful for PCP axes reordering based on multiple properties, when no single reordering scheme fits the user goals. A comprehensive evaluation with real users and diverse datasets confirms the efficacy of PC-Expo in data storytelling with PCPs.
false
false
[ "Anjul Tyagi", "Tyler Estro", "Geoffrey H. Kuenning", "Erez Zadok", "Klaus Mueller 0001" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2208.03430v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/TjxnMIOaJo4", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/yQD5nKklvN8", "icon": "video" } ]
Vis
2,022
Photosensitive Accessibility for Interactive Data Visualizations
10.1109/TVCG.2022.3209359
Accessibility guidelines place restrictions on the use of animations and interactivity on webpages to lessen the likelihood of webpages inadvertently producing sequences with flashes, patterns, or color changes that may trigger seizures for individuals with photosensitive epilepsy. Online data visualizations often incorporate elements of animation and interactivity to create a narrative, engage users, or encourage exploration. These design guidelines have been empirically validated by perceptual studies in visualization literature, but the impact of animation and interaction in visualizations on users with photosensitivity, who may experience seizures in response to certain visual stimuli, has not been considered. We systematically gathered and tested 1,132 interactive and animated visualizations for seizure-inducing risk using established methods and found that currently available methods for determining photosensitive risk are not reliable when evaluating interactive visualizations, as risk scores varied significantly based on the individual interacting with the visualization. To address this issue, we introduce a theoretical model defining the degree of control visualization designers have over three determinants of photosensitive risk in potentially seizure-inducing sequences: the size, frequency, and color of flashing content. Using an analysis of 375 visualizations hosted on bl.ocks.org, we created a theoretical model of photosensitive risk in visualizations by arranging the photosensitive risk determinants according to the degree of control visualization authors have over whether content exceeds photosensitive accessibility thresholds. We then use this model to propose a new method of testing for photosensitive risk that focuses on elements of visualizations that are subject to greater authorial control - and are therefore more robust to variations in the individual user - producing more reliable risk assessments than existing methods when applied to interactive visualizations. A full copy of this paper and all study materials are available at https://osf.io/8kzmg/.
false
false
[ "Laura South", "Michelle Borkin" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "https://osf.io/7uyn9", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/O9PXUJM2ocQ", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/QSFUOVRGHzU", "icon": "video" } ]
Vis
2,022
PMU Tracker: A Visualization Platform for Epicentric Event Propagation Analysis in the Power Grid
10.1109/TVCG.2022.3209380
The electrical power grid is a critical infrastructure, with disruptions in transmission having severe repercussions on daily activities, across multiple sectors. To identify, prevent, and mitigate such events, power grids are being refurbished as ‘smart’ systems that include the widespread deployment of GPS-enabled phasor measurement units (PMUs). PMUs provide fast, precise, and time-synchronized measurements of voltage and current, enabling real-time wide-area monitoring and control. However, the potential benefits of PMUs, for analyzing grid events like abnormal power oscillations and load fluctuations, are hindered by the fact that these sensors produce large, concurrent volumes of noisy data. In this paper, we describe working with power grid engineers to investigate how this problem can be addressed from a visual analytics perspective. As a result, we have developed PMU Tracker, an event localization tool that supports power grid operators in visually analyzing and identifying power grid events and tracking their propagation through the power grid's network. As a part of the PMU Tracker interface, we develop a novel visualization technique which we term an epicentric cluster dendrogram, which allows operators to analyze the effects of an event as it propagates outwards from a source location. We robustly validate PMU Tracker with: (1) a usage scenario demonstrating how PMU Tracker can be used to analyze anomalous grid events, and (2) case studies with power grid operators using a real-world interconnection dataset. Our results indicate that PMU Tracker effectively supports the analysis of power grid events; we also demonstrate and discuss how PMU Tracker's visual analytics approach can be generalized to other domains composed of time-varying networks with epicentric event characteristics.
false
false
[ "Anjana Arunkumar", "Andrea Pinceti", "Lalitha Sankar", "Chris Bryan" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2209.03514v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/Q0-SJVxudh8", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/VfjyqEqoitQ", "icon": "video" } ]
Vis
2,022
Polyphony: an Interactive Transfer Learning Framework for Single-Cell Data Analysis
10.1109/TVCG.2022.3209408
Reference-based cell-type annotation can significantly reduce time and effort in single-cell analysis by transferring labels from a previously-annotated dataset to a new dataset. However, label transfer by end-to-end computational methods is challenging due to the entanglement of technical (e.g., from different sequencing batches or techniques) and biological (e.g., from different cellular microenvironments) variations, only the first of which must be removed. To address this issue, we propose Polyphony, an interactive transfer learning (ITL) framework, to complement biologists' knowledge with advanced computational methods. Polyphony is motivated and guided by domain experts' needs for a controllable, interactive, and algorithm-assisted annotation process, identified through interviews with seven biologists. We introduce anchors, i.e., analogous cell populations across datasets, as a paradigm to explain the computational process and collect user feedback for model improvement. We further design a set of visualizations and interactions to empower users to add, delete, or modify anchors, resulting in refined cell type annotations. The effectiveness of this approach is demonstrated through quantitative experiments, two hypothetical use cases, and interviews with two biologists. The results show that our anchor-based ITL method takes advantage of both human and machine intelligence in annotating massive single-cell datasets.
false
false
[ "Furui Cheng", "Mark S. Keller", "Huamin Qu", "Nils Gehlenborg", "Qianwen Wang" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "https://osf.io/b76nt", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/-_vFKtJsliQ", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/FkbFjJO8QZY", "icon": "video" } ]
Vis
2,022
Predicting User Preferences of Dimensionality Reduction Embedding Quality
10.1109/TVCG.2022.3209449
A plethora of dimensionality reduction techniques have emerged over the past decades, leaving researchers and analysts with a wide variety of choices for reducing their data, all the more so given some techniques come with additional hyper-parametrization (e.g., t-SNE, UMAP, etc.). Recent studies are showing that people often use dimensionality reduction as a black-box regardless of the specific properties the method itself preserves. Hence, evaluating and comparing 2D embeddings is usually qualitatively decided, by setting embeddings side-by-side and letting human judgment decide which embedding is the best. In this work, we propose a quantitative way of evaluating embeddings, that nonetheless places human perception at the center. We run a comparative study, where we ask people to select “good” and “misleading” views between scatterplots of low-dimensional embeddings of image datasets, simulating the way people usually select embeddings. We use the study data as labels for a set of quality metrics for a supervised machine learning model whose purpose is to discover and quantify what exactly people are looking for when deciding between embeddings. With the model as a proxy for human judgments, we use it to rank embeddings on new datasets, explain why they are relevant, and quantify the degree of subjectivity when people select preferred embeddings.
false
false
[ "Cristina Morariu", "Adrien Bibal", "René Cutura", "Benoît Frénay", "Michael Sedlmair" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/x6qjUoEUpIc", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/NGWxpprFM1A", "icon": "video" } ]
Vis
2,022
Probablement, Wahrscheinlich, Likely? A Cross-Language Study of How People Verbalize Probabilities in Icon Array Visualizations
10.1109/TVCG.2022.3209367
Visualizations today are used across a wide range of languages and cultures. Yet the extent to which language impacts how we reason about data and visualizations remains unclear. In this paper, we explore the intersection of visualization and language through a cross-language study on estimative probability tasks with icon-array visualizations. Across Arabic, English, French, German, and Mandarin, $n=50$ participants per language both chose probability expressions — e.g. likely, probable — to describe icon-array visualizations (Vis-to-Expression), and drew icon-array visualizations to match a given expression (Expression-to-Vis). Results suggest that there is no clear one-to-one mapping of probability expressions and associated visual ranges between languages. Several translated expressions fell significantly above or below the range of the corresponding English expressions. Compared to other languages, French and German respondents appear to exhibit high levels of consistency between the visualizations they drew and the words they chose. Participants across languages used similar words when describing scenarios above 80% chance, with more variance in expressions targeting mid-range and lower values. We discuss how these results suggest potential differences in the expressiveness of language as it relates to visualization interpretation and design goals, as well as practical implications for translation efforts and future studies at the intersection of languages, culture, and visualization. Experiment data, source code, and analysis scripts are available at the following repository: https://osf.io/g5d4r/.
false
false
[ "Noëlle Rakotondravony", "Yiren Ding", "Lane Harrison" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2207.09608v3", "icon": "paper" } ]
Vis
2022
PromotionLens: Inspecting Promotion Strategies of Online E-commerce via Visual Analytics
10.1109/TVCG.2022.3209440
Promotions are commonly used by e-commerce merchants to boost sales. The efficacy of different promotion strategies can help sellers adapt their offering to customer demand in order to survive and thrive. Current approaches to designing promotion strategies are either based on econometrics, which may not scale to large amounts of sales data, or are spontaneous and provide little explanation of sales volume. Moreover, accurately measuring the effects of promotion designs and making bootstrappable adjustments accordingly remains a challenge due to the incompleteness and complexity of the information describing promotion strategies and their market environments. We present PromotionLens, a visual analytics system for exploring, comparing, and modeling the impact of various promotion strategies. Our approach combines representative multivariant time-series forecasting models and well-designed visualizations to demonstrate and explain the impact of sales and promotional factors, and to support “what-if” analysis of promotions. Two case studies, expert feedback, and a qualitative user study demonstrate the efficacy of PromotionLens.
false
false
[ "Chenyang Zhang", "Xiyuan Wang", "Chuyi Zhao", "Yijing Ren", "Tianyu Zhang", "Zhenhui Peng", "Xiaomeng Fan", "Xiaojuan Ma", "Quan Li" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2208.01404v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/pWUqTkf0S74", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/pbwjVySy8O8", "icon": "video" } ]
Vis
2022
PuzzleFixer: A Visual Reassembly System for Immersive Fragments Restoration
10.1109/TVCG.2022.3209388
We present PuzzleFixer, an immersive interactive system for experts to rectify defective reassembled 3D objects. Reassembling the fragments of a broken object to restore its original state is the prerequisite of many analytical tasks such as cultural relics analysis and forensics reasoning. While existing computer-aided methods can automatically reassemble fragments, they often derive incorrect objects due to the complex and ambiguous fragment shapes. Thus, experts usually need to refine the object manually. Prior advances in immersive technologies provide benefits for realistic perception and direct interactions to visualize and interact with 3D fragments. However, few studies have investigated the reassembled object refinement. The specific challenges include: 1) the fragment combination set is too large to determine the correct matches, and 2) the geometry of the fragments is too complex to align them properly. To tackle the first challenge, PuzzleFixer leverages dimensionality reduction and clustering techniques, allowing users to review possible match categories, select the matches with reasonable shapes, and drill down to shapes to correct the corresponding faces. For the second challenge, PuzzleFixer embeds the object with node-link networks to augment the perception of match relations. Specifically, it instantly visualizes matches with graph edges and provides force feedback to facilitate the efficiency of alignment interactions. To demonstrate the effectiveness of PuzzleFixer, we conducted an expert evaluation based on two cases on real-world artifacts and collected feedback through post-study interviews. The results suggest that our system is suitable and efficient for experts to refine incorrect reassembled objects.
false
false
[ "Shuainan Ye", "Zhutian Chen", "Xiangtong Chu", "Kang Li 0005", "Juntong Luo", "Yi Li", "Guohua Geng", "Yingcai Wu" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/52xbZtb1cmo", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/q1ZT8jv7EUE", "icon": "video" } ]
Vis
2022
Quick Clusters: A GPU-Parallel Partitioning for Efficient Path Tracing of Unstructured Volumetric Grids
10.1109/TVCG.2022.3209418
We propose a simple yet effective method for clustering finite elements to improve preprocessing times and rendering performance of unstructured volumetric grids without requiring auxiliary connectivity data. Rather than building bounding volume hierarchies (BVHs) over individual elements, we sort elements along a Hilbert curve and aggregate neighboring elements together, improving BVH memory consumption by over an order of magnitude. Then to further reduce memory consumption, we cluster the mesh on the fly into sub-meshes with smaller indices using a series of efficient parallel mesh re-indexing operations. These clusters are then passed to a highly optimized ray tracing API for point containment queries and ray-cluster intersection testing. Each cluster is assigned a maximum extinction value for adaptive sampling, which we rasterize into non-overlapping view-aligned bins allocated along the ray. These maximum extinction bins are then used to guide the placement of samples along the ray during visualization, reducing the number of samples required by multiple orders of magnitude (depending on the dataset), thereby improving overall visualization interactivity. Using our approach, we improve rendering performance over a competitive baseline on the NASA Mars Lander dataset by 6× (from 1 frame per second (fps) and 1.0 M rays per second (rps) up to 6 fps and 12.4 M rps, now including volumetric shadows) while simultaneously reducing memory consumption by 3× (33 GB down to 11 GB) and avoiding any offline preprocessing steps, enabling high-quality interactive visualization on consumer graphics cards. Then by utilizing the full 48 GB of an RTX 8000, we improve the performance of Lander by 17× (1 fps up to 17 fps, 1.0 M rps up to 35.6 M rps).
false
false
[ "Nathan Morrical", "Alper Sahistan", "Ugur Güdükbay", "Ingo Wald", "Valerio Pascucci" ]
[ "HM" ]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/shi-_p-9QTE", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/Msu4ZX01FHY", "icon": "video" } ]
Vis
2022
RankAxis: Towards a Systematic Combination of Projection and Ranking in Multi-Attribute Data Exploration
10.1109/TVCG.2022.3209463
Projection and ranking are frequently used analysis techniques in multi-attribute data exploration. Both families of techniques help analysts with tasks such as identifying similarities between observations and determining ordered subgroups, and have shown good performance in multi-attribute data exploration. However, they often exhibit problems such as distorted projection layouts, obscure semantic interpretations, and non-intuitive effects produced by selecting a subset of (weighted) attributes. Moreover, few studies have attempted to combine projection and ranking into the same exploration space to complement each other's strengths and weaknesses. For this reason, we propose RankAxis, a visual analytics system that systematically combines projection and ranking to facilitate the mutual interpretation of these two techniques and jointly support multi-attribute data exploration. A real-world case study, expert feedback, and a user study demonstrate the efficacy of RankAxis.
false
false
[ "Qiangqiang Liu", "Yukun Ren", "Zhihua Zhu", "Dai Li", "Xiaojuan Ma", "Quan Li" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2208.01493v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/NnsJSS34pPI", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/DmA0g8UlNjU", "icon": "video" } ]
Vis
2022
RASIPAM: Interactive Pattern Mining of Multivariate Event Sequences in Racket Sports
10.1109/TVCG.2022.3209452
Experts in racket sports like tennis and badminton use tactical analysis to gain insight into competitors' playing styles. Many data-driven methods apply pattern mining to racket sports data — which is often recorded as multivariate event sequences — to uncover sports tactics. However, tactics obtained in this way are often inconsistent with those deduced by experts through their domain knowledge, which can be confusing to those experts. This work introduces RASIPAM, a RAcket-Sports Interactive PAttern Mining system, which allows experts to incorporate their knowledge into data mining algorithms to discover meaningful tactics interactively. RASIPAM consists of a constraint-based pattern mining algorithm that responds to the analysis demands of experts: Experts provide suggestions for finding tactics in intuitive written language, and these suggestions are translated into constraints to run the algorithm. RASIPAM further introduces a tailored visual interface that allows experts to compare the new tactics with the original ones and decide whether to apply a given adjustment. This interactive workflow iteratively progresses until experts are satisfied with all tactics. We conduct a quantitative experiment to show that our algorithm supports real-time interaction. Two case studies in tennis and in badminton respectively, each involving two domain experts, are conducted to show the effectiveness and usefulness of the system.
false
false
[ "Jiang Wu", "Dongyu Liu", "Ziyang Guo", "Yingcai Wu" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2208.00671v4", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/CL8HxPjKHuI", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/Wr956aLgcmU", "icon": "video" } ]
Vis
2022
Relaxed Dot Plots: Faithful Visualization of Samples and Their Distribution
10.1109/TVCG.2022.3209429
We introduce relaxed dot plots as an improvement of nonlinear dot plots for unit visualization. Our plots produce more faithful data representations and reduce moiré effects. Their contour is based on a customized kernel frequency estimation to match the shape of the distribution of underlying data values. Previous nonlinear layouts introduce column-centric nonlinear scaling of dot diameters for visualization of high-dynamic-range data with high peaks. We provide a mathematical approach to convert that column-centric scaling to our smooth envelope shape. This formalism allows us to use linear, root, and logarithmic scaling to find ideal dot sizes. Our method iteratively relaxes the dot layout for more correct and aesthetically pleasing results. To achieve this, we modified Lloyd's algorithm with additional constraints and heuristics. We evaluate the layouts of relaxed dot plots against a previously existing nonlinear variant and show that our algorithm produces less error regarding the underlying data while establishing the blue noise property that works against moiré effects. Further, we analyze the readability of our relaxed plots in three crowd-sourced experiments. The results indicate that our proposed technique surpasses traditional dot plots.
false
false
[ "Nils Rodrigues", "Christoph Schulz 0001", "Sören Döring", "Daniel Baumgartner", "Tim Krake", "Daniel Weiskopf" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/r4letvZDJWQ", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/7D8MsG7t3CA", "icon": "video" } ]
Vis
2022
Revealing the Semantics of Data Wrangling Scripts With Comantics
10.1109/TVCG.2022.3209470
Data workers usually seek to understand the semantics of data wrangling scripts in various scenarios, such as code debugging, reusing, and maintaining. However, the understanding is challenging for novice data workers due to the variety of programming languages, functions, and parameters. Based on the observation that differences between input and output tables highly relate to the type of data transformation, we outline a design space including 103 characteristics to describe table differences. Then, we develop Comantics, a three-step pipeline that automatically detects the semantics of data transformation scripts. The first step focuses on the detection of table differences for each line of wrangling code. Second, we incorporate a characteristic-based component and a Siamese convolutional neural network-based component for the detection of transformation types. Third, we derive the parameters of each data transformation by employing a “slot filling” strategy. We design experiments to evaluate the performance of Comantics. Further, we assess its flexibility using three example applications in different domains.
false
false
[ "Kai Xiong", "Zhongsu Luo", "Siwei Fu", "Yongheng Wang", "Mingliang Xu", "Yingcai Wu" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2209.13995v1", "icon": "paper" } ]
Vis
2022
Rigel: Transforming Tabular Data by Declarative Mapping
10.1109/TVCG.2022.3209385
We present Rigel, an interactive system for rapid transformation of tabular data. Rigel implements a new declarative mapping approach that formulates the data transformation procedure as direct mappings from data to the row, column, and cell channels of the target table. To construct such mappings, Rigel allows users to directly drag data attributes from input data to these three channels and indirectly drag or type data values in a spreadsheet, and possible mappings that do not contradict these interactions are recommended to achieve efficient and straightforward data transformation. The recommended mappings are generated by enumerating and composing data variables based on the row, column, and cell channels, thereby revealing the possibility of alternative tabular forms and facilitating open-ended exploration in many data transformation scenarios, such as designing tables for presentation. In contrast to existing systems that transform data by composing operations (like transposing and pivoting), Rigel requires less prior knowledge on these operations, and constructing tables from the channels is more efficient and results in less ambiguity than generating operation sequences as done by the traditional by-example approaches. User study results demonstrated that Rigel is significantly less demanding in terms of time and interactions and suits more scenarios compared to the state-of-the-art by-example approach. A gallery of diverse transformation cases is also presented to show the potential of Rigel's expressiveness.
false
false
[ "Ran Chen", "Di Weng", "Yanwei Huang", "Xinhuan Shu", "Jiayi Zhou", "Guodao Sun", "Yingcai Wu" ]
[]
[]
[]
Vis
2022
RISeer: Inspecting the Status and Dynamics of Regional Industrial Structure via Visual Analytics
10.1109/TVCG.2022.3209351
Restructuring the regional industrial structure (RIS) has the potential to halt economic recession and achieve revitalization. Understanding the current status and dynamics of RIS will greatly assist in studying and evaluating the current industrial structure. Previous studies have focused on qualitative and quantitative research to rationalize RIS from a macroscopic perspective. Although recent studies have traced information at the industrial enterprise level to complement existing research from a micro perspective, the ambiguity of the underlying variables contributing to the industrial sector and its composition, the dynamic nature, and the large number of multivariant features of RIS records have obscured a deep and fine-grained understanding of RIS. To this end, we propose an interactive visualization system, RISeer, which is based on interpretable machine learning models and enhanced visualizations designed to identify the evolutionary patterns of the RIS and facilitate inter-regional inspection and comparison. Two case studies confirm the effectiveness of our approach, and feedback from experts indicates that RISeer helps them to gain a fine-grained understanding of the dynamics and evolution of the RIS.
false
false
[ "Longfei Chen", "Yang Ouyang", "Haipeng Zhang", "Suting Hong", "Quan Li" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2208.00625v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/Cx81AWaK_oU", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/GwkjlVpyIuc", "icon": "video" } ]
Vis
2022
RoboHapalytics: A Robot Assisted Haptic Controller for Immersive Analytics
10.1109/TVCG.2022.3209433
Immersive environments offer new possibilities for exploring three-dimensional volumetric or abstract data. However, typical mid-air interaction offers little guidance to the user in interacting with the resulting visuals. Previous work has explored the use of haptic controls to give users tangible affordances for interacting with the data, but these controls have either been limited in their range and resolution, been spatially fixed, or required users to manually align them with the data space. We explore the use of a robot arm with hand tracking to align tangible controls under the user's fingers as they reach out to interact with data affordances. We begin with a study evaluating the effectiveness of a robot-extended slider control compared to a large fixed physical slider and a purely virtual mid-air slider. We find that the robot slider has similar accuracy to the physical slider but is significantly more accurate than mid-air interaction. Further, the robot slider can be arbitrarily reoriented, opening up many new possibilities for tangible haptic interaction with immersive visualisations. We demonstrate these possibilities through three use-cases: selection in a time-series chart; interactive slicing of CT scans; and finally exploration of a scatter plot depicting time-varying socio-economic data.
false
false
[ "Shaozhang Dai", "Jim Smiley", "Tim Dwyer", "Barrett Ens", "Lonni Besançon" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "https://osf.io/gmju6", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/e9ANz79Z8mw", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/XBApjWgHDJo", "icon": "video" } ]
Vis
2022
Roboviz: A Game-Centered Project for Information Visualization Education
10.1109/TVCG.2022.3209402
Due to their pedagogical advantages, large final projects in information visualization courses have become standard practice. Students take on a client (real or simulated), a dataset, and a vague set of goals to create a complete visualization or visual analytics product. Unfortunately, many projects suffer from ambiguous goals, over- or under-constrained client expectations, and data constraints that have students spending their time on non-visualization problems (e.g., data cleaning). These are important skills, but are often secondary course objectives, and unforeseen problems can majorly hinder students. We created an alternative for our information visualization course: Roboviz, a real-time game for students to play by building a visualization-focused interface. By designing the game mechanics around four different data types, the project allows students to create a wide array of interactive visualizations. Student teams play against their classmates with the objective to collect the most (good) robots. The flexibility of the strategies encourages variability, a range of approaches, and solving wicked design constraints. We describe the construction of this game and report on student projects over two years. We further show how the game mechanics can be extended or adapted to other game-based projects.
false
false
[ "Eytan Adar", "Elsie Lee-Robbins" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2208.04403v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/V7QcQRRmg7k", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/8SrT3nkfjn4", "icon": "video" } ]
Vis
2022
Seeing What You Believe or Believing What You See? Belief Biases Correlation Estimation
10.1109/TVCG.2022.3209405
When an analyst or scientist has a belief about how the world works, their thinking can be biased in favor of that belief. Therefore, one bedrock principle of science is to minimize that bias by testing the predictions of one's belief against objective data. But interpreting visualized data is a complex perceptual and cognitive process. Through two crowdsourced experiments, we demonstrate that supposedly objective assessments of the strength of a correlational relationship can be influenced by how strongly a viewer believes in the existence of that relationship. Participants viewed scatterplots depicting a relationship between meaningful variable pairs (e.g., number of environmental regulations and air quality) and estimated their correlations. They also estimated the correlation of the same scatterplots labeled instead with generic ‘X’ and ‘Y’ axes. In a separate section, they also reported how strongly they believed there to be a correlation between the meaningful variable pairs. Participants estimated correlations more accurately when they viewed scatterplots labeled with generic axes compared to scatterplots labeled with meaningful variable pairs. Furthermore, when viewers believed that two variables should have a strong relationship, they overestimated correlations between those variables by an r-value of about 0.1. When they believed that the variables should be unrelated, they underestimated the correlations by an r-value of about 0.1. While data visualizations are typically thought to present objective truths to the viewer, these results suggest that existing personal beliefs can bias even objective statistical values people extract from data.
false
false
[ "Cindy Xiong", "Chase Stokes", "Yea-Seul Kim", "Steven Franconeri" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2208.04436v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/7ISD9WtYChI", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/xBSZC2jj_wc", "icon": "video" } ]
Vis
2022
Self-Supervised Color-Concept Association via Image Colorization
10.1109/TVCG.2022.3209481
The interpretation of colors in visualizations is facilitated when the assignments between colors and concepts in the visualizations match human's expectations, implying that the colors can be interpreted in a semantic manner. However, manually creating a dataset of suitable associations between colors and concepts for use in visualizations is costly, as such associations would have to be collected from humans for a large variety of concepts. To address the challenge of collecting this data, we introduce a method to extract color-concept associations automatically from a set of concept images. While the state-of-the-art method extracts associations from data with supervised learning, we developed a self-supervised method based on colorization that does not require the preparation of ground truth color-concept associations. Our key insight is that a set of images of a concept should be sufficient for learning color-concept associations, since humans also learn to associate colors to concepts mainly from past visual input. Thus, we propose to use an automatic colorization method to extract statistical models of the color-concept associations that appear in concept images. Specifically, we take a colorization model pre-trained on ImageNet and fine-tune it on the set of images associated with a given concept, to predict pixel-wise probability distributions in Lab color space for the images. Then, we convert the predicted probability distributions into color ratings for a given color library and aggregate them for all the images of a concept to obtain the final color-concept associations. We evaluate our method using four different evaluation metrics and via a user study. Experiments show that, although the state-of-the-art method based on supervised learning with user-provided ratings is more effective at capturing relative associations, our self-supervised method obtains overall better results according to metrics like Earth Mover's Distance (EMD) and Entropy Difference (ED), which are closer to human perception of color distributions.
false
false
[ "Ruizhen Hu", "Ziqi Ye", "Bin Chen", "Oliver van Kaick", "Hui Huang 0004" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/iSXB800cU7w", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/AU9NoTlEh8U", "icon": "video" } ]
Vis
2,022
SizePairs: Achieving Stable and Balanced Temporal Treemaps using Hierarchical Size-based Pairing
10.1109/TVCG.2022.3209450
We present SizePairs, a new technique to create stable and balanced treemap layouts that visualize values changing over time in hierarchical data. To achieve an overall high-quality result across all time steps in terms of stability and aspect ratio, SizePairs employs a new hierarchical size-based pairing algorithm that recursively pairs two nodes that complement their size changes over time and have similar sizes. SizePairs maximizes the visual quality and stability by optimizing the splitting orientation of each internal node and flipping leaf nodes, if necessary. We also present a comprehensive comparison of SizePairs against the state-of-the-art treemaps developed for visualizing time-dependent data. SizePairs outperforms existing techniques in both visual quality and stability, while being faster than the local moves technique.
false
false
[ "Chang Han", "Jaemin Jo", "Anyi Li", "Bongshin Lee", "Oliver Deussen", "Yunhai Wang" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/UZyjwbsri6c", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/R9ntseofJx0", "icon": "video" } ]
Vis
2,022
SliceTeller: A Data Slice-Driven Approach for Machine Learning Model Validation
10.1109/TVCG.2022.3209465
Real-world machine learning applications need to be thoroughly evaluated to meet critical product requirements for model release, to ensure fairness for different groups or individuals, and to achieve a consistent performance in various scenarios. For example, in autonomous driving, an object classification model should achieve high detection rates under different conditions of weather, distance, etc. Similarly, in the financial setting, credit-scoring models must not discriminate against minority groups. These conditions or groups are called “Data Slices”. In product MLOps cycles, product developers must identify such critical data slices and adapt models to mitigate data slice problems. Discovering where models fail, understanding why they fail, and mitigating these problems, are therefore essential tasks in the MLOps life-cycle. In this paper, we present SliceTeller, a novel tool that allows users to debug, compare and improve machine learning models driven by critical data slices. SliceTeller automatically discovers problematic slices in the data and helps the user understand why models fail. More importantly, we present an efficient algorithm, SliceBoosting, to estimate trade-offs when prioritizing the optimization over certain slices. Furthermore, our system empowers model developers to compare and analyze different model versions during model iterations, allowing them to choose the model version best suitable for their applications. We evaluate our system with three use cases, including two real-world use cases of product development, to demonstrate the power of SliceTeller in the debugging and improvement of product-quality ML models.
false
false
[ "Xiaoyu Zhang", "Jorge Henrique Piazentin Ono", "Huan Song", "Liang Gou", "Kwan-Liu Ma", "Ren Liu" ]
[ "HM" ]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/NOcHFj87uWI", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/MXCaUK-RBr4", "icon": "video" } ]
Vis
2,022
sMolBoxes: Dataflow Model for Molecular Dynamics Exploration
10.1109/TVCG.2022.3209411
We present sMolBoxes, a dataflow representation for the exploration and analysis of long molecular dynamics (MD) simulations. When MD simulations reach millions of snapshots, a frame-by-frame observation is not feasible anymore. Thus, biochemists rely to a large extent only on quantitative analysis of geometric and physico-chemical properties. However, the usage of abstract methods to study inherently spatial data hinders the exploration and poses a considerable workload. sMolBoxes link quantitative analysis of a user-defined set of properties with interactive 3D visualizations. They enable visual explanations of molecular behaviors, which lead to an efficient discovery of biochemically significant parts of the MD simulation. sMolBoxes follow a node-based model for flexible definition, combination, and immediate evaluation of properties to be investigated. Progressive analytics enable fluid switching between multiple properties, which facilitates hypothesis generation. Each sMolBox provides quick insight to an observed property or function, available in more detail in the bigBox View. The case studies illustrate that even with relatively few sMolBoxes, it is possible to express complex analytical tasks, and their use in exploratory analysis is perceived as more efficient than traditional scripting-based methods.
false
false
[ "Pavol Ulbrich", "Manuela Waldner", "Katarína Furmanová", "Sérgio M. Marques", "David Bednar", "Barbora Kozlíková", "Jan Byska" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2209.11771v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/F8swLXWHwmU", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/Bu2_1uwUdUY", "icon": "video" } ]
Vis
2,022
Sporthesia: Augmenting Sports Videos Using Natural Language
10.1109/TVCG.2022.3209497
Augmented sports videos, which combine visualizations and video effects to present data in actual scenes, can communicate insights engagingly and thus have been increasingly popular for sports enthusiasts around the world. Yet, creating augmented sports videos remains a challenging task, requiring considerable time and video editing skills. On the other hand, sports insights are often communicated using natural language, such as in commentaries, oral presentations, and articles, but usually lack visual cues. Thus, this work aims to facilitate the creation of augmented sports videos by enabling analysts to directly create visualizations embedded in videos using insights expressed in natural language. To achieve this goal, we propose a three-step approach – 1) detecting visualizable entities in the text, 2) mapping these entities into visualizations, and 3) scheduling these visualizations to play with the video – and analyzed 155 sports video clips and the accompanying commentaries for accomplishing these steps. Informed by our analysis, we have designed and implemented Sporthesia, a proof-of-concept system that takes racket-based sports videos and textual commentaries as the input and outputs augmented videos. We demonstrate Sporthesia's applicability in two exemplar scenarios, i.e., authoring augmented sports videos using text and augmenting historical sports videos based on auditory comments. A technical evaluation shows that Sporthesia achieves high accuracy (F1-score of 0.9) in detecting visualizable entities in the text. An expert evaluation with eight sports analysts suggests high utility, effectiveness, and satisfaction with our language-driven authoring method and provides insights for future improvement and opportunities.
false
false
[ "Zhutian Chen", "Qisen Yang", "Xiao Xie", "Johanna Beyer", "Haijun Xia", "Yingcai Wu", "Hanspeter Pfister" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2209.03434v2", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/Uoi1FlLWk6I", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/IV42XpeiaoQ", "icon": "video" } ]
Vis
2022
Striking a Balance: Reader Takeaways and Preferences when Integrating Text and Charts
10.1109/TVCG.2022.3209383
While visualizations are an effective way to represent insights about information, they rarely stand alone. When designing a visualization, text is often added to provide additional context and guidance for the reader. However, there is little experimental evidence to guide designers as to what is the right amount of text to show within a chart, what its qualitative properties should be, and where it should be placed. Prior work also shows variation in personal preferences for charts versus textual representations. In this paper, we explore several research questions about the relative value of textual components of visualizations. 302 participants ranked univariate line charts containing varying amounts of text, ranging from no text (except for the axes) to a written paragraph with no visuals. Participants also described what information they could take away from line charts containing text with varying semantic content. We find that heavily annotated charts were not penalized. In fact, participants preferred the charts with the largest number of textual annotations over charts with fewer annotations or text alone. We also find effects of semantic content. For instance, the text that describes statistical or relational components of a chart leads to more takeaways referring to statistics or relational comparisons than text describing elemental or encoded components. Finally, we find different effects for the semantic levels based on the placement of the text on the chart; some kinds of information are best placed in the title, while others should be placed closer to the data. We compile these results into four chart design guidelines and discuss future implications for the combination of text and charts.
false
false
[ "Chase Stokes", "Vidya Setlur", "Bridget Cogley", "Arvind Satyanarayan", "Marti A. Hearst" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2208.01780v3", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/Mpf7EiAlZ08", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/H8kyOnmvXyc", "icon": "video" } ]
Vis
2022
Studying Early Decision Making with Progressive Bar Charts
10.1109/TVCG.2022.3209426
We conduct a user study to quantify and compare user performance for a value comparison task using four bar chart designs, where the bars show the mean values of data loaded progressively and updated every second (progressive bar charts). Progressive visualization divides different stages of the visualization pipeline—data loading, processing, and visualization—into iterative animated steps to limit the latency when loading large amounts of data. An animated visualization appearing quickly, unfolding, and getting more accurate with time, enables users to make early decisions. However, intermediate mean estimates are computed only on partial data and may not have time to converge to the true means, potentially misleading users and resulting in incorrect decisions. To address this issue, we propose two new designs visualizing the history of values in progressive bar charts, in addition to the use of confidence intervals. We comparatively study four progressive bar chart designs: with/without confidence intervals, and using near-history representation with/without confidence intervals, on three realistic data distributions. We evaluate user performance based on the percentage of correct answers (accuracy), response time, and user confidence. Our results show that, overall, users can make early and accurate decisions with 92% accuracy using only 18% of the data, regardless of the design. We find that our proposed bar chart design with only near-history is comparable to bar charts with only confidence intervals in performance, and the qualitative feedback we received indicates a preference for designs with history.
false
false
[ "Ameya D. Patil", "Gaëlle Richer", "Christopher Jermaine", "Dominik Moritz", "Jean-Daniel Fekete" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/ygpu92JMhA0", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/GiSmTXxoD_0", "icon": "video" } ]
Vis
2022
Supporting Expressive and Faithful Pictorial Visualization Design with Visual Style Transfer
10.1109/TVCG.2022.3209486
Pictorial visualizations portray data with figurative messages and bring the audience closer to the visualization. Previous research on pictorial visualizations has developed authoring tools or generation systems, but their methods are restricted to specific visualization types and templates. Instead, we propose to augment pictorial visualization authoring with visual style transfer, enabling a more extensible approach to visualization design. To explore this, our work presents Vistylist, a design support tool that disentangles the visual style of a source pictorial visualization from its content and transfers the visual style to one or more intended pictorial visualizations. We evaluated Vistylist through a survey of example pictorial visualizations, a controlled user study, and a series of expert interviews. The results of our evaluation indicated that Vistylist is useful for creating expressive and faithful pictorial visualizations.
false
false
[ "Yang Shi 0007", "Pei Liu", "Siji Chen", "Mengdi Sun", "Nan Cao" ]
[ "HM" ]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/8Vuf7bUTJMk", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/008mZfvIO-E", "icon": "video" } ]
Vis
2022
Tac-Trainer: A Visual Analytics System for IoT-based Racket Sports Training
10.1109/TVCG.2022.3209352
Conventional racket sports training relies heavily on coaches' knowledge and experience, leading to biases in the guidance. To solve this problem, smart wearable devices based on Internet of Things technology (IoT) have been extensively investigated to support data-driven training. Numerous studies have introduced methods to extract valuable information from the sensor data collected by IoT devices. However, the information cannot provide actionable insights for coaches due to the large data volume and high data dimensions. We propose an IoT + VA framework, Tac-Trainer, to integrate the sensor data, the information, and coaches' knowledge to facilitate racket sports training. Tac-Trainer consists of four components: device configuration, data interpretation, training optimization, and result visualization. These components collect trainees' kinematic data through IoT devices, transform the data into attributes and indicators, generate training suggestions, and provide an interactive visualization interface for exploration, respectively. We further discuss new research opportunities and challenges inspired by our work from two perspectives, VA for IoT and IoT for VA.
false
false
[ "Jiachen Wang", "Ji Ma", "Kangping Hu", "Zheng Zhou", "Hui Zhang 0051", "Xiao Xie", "Yingcai Wu" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/Da-OM9bMbC8", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/HZ-wn30pEzA", "icon": "video" } ]
Vis
2022
Taurus: Towards a Unified Force Representation and Universal Solver for Graph Layout
10.1109/TVCG.2022.3209371
Over the past few decades, a large number of graph layout techniques have been proposed for visualizing graphs from various domains. In this paper, we present a general framework, Taurus, for unifying popular techniques such as the spring-electrical model, stress model, and maxent-stress model. It is based on a unified force representation, which formulates most existing techniques as a combination of quotient-based forces that combine power functions of graph-theoretical and Euclidean distances. This representation enables us to compare the strengths and weaknesses of existing techniques, while facilitating the development of new methods. Based on this, we propose a new balanced stress model (BSM) that is able to lay out graphs with superior quality. In addition, we introduce a universal augmented stochastic gradient descent (SGD) optimizer that efficiently finds proper solutions for all layout techniques. To demonstrate the power of our framework, we conduct a comprehensive evaluation of existing techniques on a large number of synthetic and real graphs. We release an open-source package, which facilitates easy comparison of different graph layout methods for any graph input as well as effectively creating customized graph layout techniques.
false
false
[ "Mingliang Xue", "Zhi Wang", "Fahai Zhong", "Yong Wang 0021", "Mingliang Xu", "Oliver Deussen", "Yunhai Wang" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/DU4edAM92P8", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/w1yKkdkoVi8", "icon": "video" } ]
Vis
2022
Temporal Merge Tree Maps: A Topology-Based Static Visualization for Temporal Scalar Data
10.1109/TVCG.2022.3209387
Creating a static visualization for a time-dependent scalar field is a non-trivial task, yet very insightful as it shows the dynamics in one picture. Existing approaches are based on a linearization of the domain or on feature tracking. Domain linearizations use space-filling curves to place all sample points into a 1D domain, thereby breaking up individual features. Feature tracking methods explicitly respect feature continuity in space and time, but generally neglect the data context in which those features live. We present a feature-based linearization of the spatial domain that keeps features together and preserves their context by involving all data samples. We use augmented merge trees to linearize the domain and show that our linearized function has the same merge tree as the original data. A greedy optimization scheme aligns the trees over time providing temporal continuity. This leads to a static 2D visualization with one temporal dimension, and all spatial dimensions compressed into one. We compare our method against other domain linearizations as well as feature-tracking approaches, and apply it to several real-world data sets.
false
false
[ "Wiebke Köpp", "Tino Weinkauf" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/JWDj-gwVuv0", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/B8q5o0s_TaY", "icon": "video" } ]
Vis
2022
The Influence of Visual Provenance Representations on Strategies in a Collaborative Hand-off Data Analysis Scenario
10.1109/TVCG.2022.3209495
Conducting data analysis tasks rarely occurs in isolation. Especially in intelligence analysis scenarios where different experts contribute knowledge to a shared understanding, members must communicate how insights develop to establish common ground among collaborators. The use of provenance to communicate analytic sensemaking carries promise by describing the interactions and summarizing the steps taken to reach insights. Yet, no universal guidelines exist for communicating provenance in different settings. Our work focuses on the presentation of provenance information and the resulting conclusions reached and strategies used by new analysts. In an open-ended, 30-minute, textual exploration scenario, we qualitatively compare how adding different types of provenance information (specifically data coverage and interaction history) affects analysts' confidence in conclusions developed, propensity to repeat work, filtering of data, identification of relevant information, and typical investigation strategies. We see that data coverage (i.e., what was interacted with) provides provenance information without limiting individual investigation freedom. On the other hand, while interaction history (i.e., when something was interacted with) does not significantly encourage more mimicry, it does take more time to comfortably understand, as represented by less confident conclusions and less relevant information-gathering behaviors. Our results contribute empirical data towards understanding how provenance summarizations can influence analysis behaviors.
false
false
[ "Jeremy E. Block", "Shaghayegh Esmaeili", "Eric D. Ragan", "John R. Goodall", "G. David Richardson" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2208.03900v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/x-A2jxT-yJY", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/-Bf7XWXbvLk", "icon": "video" } ]
Vis
2022
The Quest for Omnioculars: Embedded Visualization for Augmenting Basketball Game Viewing Experiences
10.1109/TVCG.2022.3209353
Sports game data is becoming increasingly complex, often consisting of multivariate data such as player performance stats, historical team records, and athletes' positional tracking information. While numerous visual analytics systems have been developed for sports analysts to derive insights, few tools target fans to improve their understanding of and engagement with sports data during live games. By presenting extra data in the actual game views, embedded visualization has the potential to enhance fans' game-viewing experience. However, little is known about how to design such visualizations embedded in live games. In this work, we present a user-centered design study of developing interactive embedded visualizations for basketball fans to improve their live game-watching experiences. We first conducted a formative study to characterize basketball fans' in-game analysis behaviors and tasks. Based on our findings, we propose a design framework to inform the design of embedded visualizations based on specific data-seeking contexts. Following the design framework, we present five novel embedded visualization designs targeting five representative contexts identified by the fans, including shooting, offense, defense, player evaluation, and team comparison. We then developed Omnioculars, an interactive basketball game-viewing prototype that features the proposed embedded visualizations for fans' in-game data analysis. We evaluated Omnioculars in a simulated basketball game with basketball fans. The study results suggest that our design supports personalized in-game data analysis and enhances game understanding and engagement.
false
false
[ "Tica Lin", "Zhutian Chen", "Yalong Yang 0001", "Daniele Chiappalupi", "Johanna Beyer", "Hanspeter Pfister" ]
[ "HM" ]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2209.00202v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/1hr1t5ZjZ9g", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/6jdNmoT9fAA", "icon": "video" } ]
Vis
2022
The State of the Art in BGP Visualization Tools: A Mapping of Visualization Techniques to Cyberattack Types
10.1109/TVCG.2022.3209412
Internet routing is largely dependent on Border Gateway Protocol (BGP). However, BGP does not have any inherent authentication or integrity mechanisms that help make it secure. Effective security is challenging or infeasible to implement due to high costs, policy employment in these distributed systems, and unique routing behavior. Visualization tools provide an attractive alternative in lieu of traditional security approaches. Several BGP security visualization tools have been developed as a stop-gap in the face of ever-present BGP attacks. Even though the target users, tasks, and domain remain largely consistent across such tools, many diverse visualization designs have been proposed. The purpose of this study is to provide an initial formalization of methods and visualization techniques for BGP cybersecurity analysis. Using PRISMA guidelines, we provide a systematic review and survey of 29 BGP visualization tools with their tasks, implementation techniques, and attacks and anomalies that they were intended for. We focused on BGP visualization tools as the main inclusion criteria to best capture the visualization techniques used in this domain while excluding solely algorithmic solutions and other detection tools that do not involve user interaction or interpretation. We take the unique approach of connecting (1) the actual BGP attacks and anomalies used to validate existing tools with (2) the techniques employed to detect them. In this way, we contribute an analysis of which techniques can be used for each attack type. Furthermore, we can see the evolution of visualization solutions in this domain as new attack types are discovered. This systematic review provides the groundwork for future designers and researchers building visualization tools for providing BGP cybersecurity, including an understanding of the state-of-the-art in this space and an analysis of what techniques are appropriate for each attack type. Our novel security visualization survey methodology—connecting visualization techniques with appropriate attack types—may also assist future researchers conducting systematic reviews of security visualizations. All supplemental materials are available at https://osf.io/tupz6/.
false
false
[ "Justin Raynor", "Tarik Crnovrsanin", "Sara Di Bartolomeo", "Laura South", "David Saffo", "Cody Dunne" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "https://osf.io/pkqc9", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/EN4msZICZcg", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/HRA0KS38DSQ", "icon": "video" } ]
Vis
2022
Thirty-Two Years of IEEE VIS: Authors, Fields of Study and Citations
10.1109/TVCG.2022.3209422
The IEEE VIS Conference (VIS) recently rebranded itself as a unified conference and officially positioned itself within the discipline of Data Science. Driven by this movement, we investigated (1) who contributed to VIS, and (2) where VIS stands in the scientific world. We examined the authors and fields of study of 3,240 VIS publications in the past 32 years based on data collected from OpenAlex and IEEE Xplore, among other sources. We also examined the citation flows from referenced papers (i.e., those referenced in VIS) to VIS, and from VIS to citing papers (i.e., those citing VIS). We found that VIS has become increasingly popular and collaborative. The numbers of publications, unique authors, and participating countries have been steadily growing. Both cross-country collaborations and collaborations between educational and non-educational affiliations, namely “cross-type collaborations”, are increasing. The dominance of the US is decreasing, and authors from China are now an important part of VIS. In terms of author affiliation types, VIS is increasingly dominated by authors from universities. We found that the topics, inspirations, and influences of VIS research are limited such that (1) VIS and its referenced and citing papers largely fall into the Computer Science domain, and (2) citations flow mostly between the same set of subfields within Computer Science. Our citation analyses showed that award-winning VIS papers had higher citations. Interactive visualizations, replication data, source code, and supplementary material are available at https://32vis.hongtaoh.com and https://osf.io/zkvjm.
false
false
[ "Hongtao Hao 0002", "Yumian Cui", "Zhengxiang Wang", "Yea-Seul Kim" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2208.03772v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/myKlYDJzk-Q", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/6tZqOB_wwFk", "icon": "video" } ]
Vis
2022
Towards Natural Language-Based Visualization Authoring
10.1109/TVCG.2022.3209357
A key challenge to visualization authoring is the process of getting familiar with the complex user interfaces of authoring tools. Natural Language Interfaces (NLIs) present promising benefits due to their learnability and usability. However, supporting NLIs for authoring tools requires expertise in natural language processing, while existing NLIs are mostly designed for visual analytics workflows. In this paper, we propose an authoring-oriented NLI pipeline by introducing a structured representation of users' visualization editing intents, called editing actions, based on a formative study and an extensive survey on visualization construction tools. The editing actions are executable, and thus decouple natural language interpretation and visualization applications as an intermediate layer. We implement a deep learning-based NL interpreter to translate NL utterances into editing actions. The interpreter is reusable and extensible across authoring tools. The authoring tools only need to map the editing actions into tool-specific operations. To illustrate the usage of the NL interpreter, we implement an Excel chart editor and a proof-of-concept authoring tool, VisTalk. We conduct a user study with VisTalk to understand the usage patterns of NL-based authoring systems. Finally, we discuss observations on how users author charts with natural language, as well as implications for future research.
false
false
[ "Yun Wang 0012", "Zhitao Hou", "Leixian Shen", "Tongshuang Wu", "Jiaqi Wang", "He Huang", "Haidong Zhang", "Dongmei Zhang 0001" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2208.10947v2", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/fgCDJ2s9k-M", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/tri-4oWpAAg", "icon": "video" } ]
Vis
2022
TrafficVis: Visualizing Organized Activity and Spatio-Temporal Patterns for Detecting and Labeling Human Trafficking
10.1109/TVCG.2022.3209403
Law enforcement and domain experts can detect human trafficking (HT) in online escort websites by analyzing suspicious clusters of connected ads. How can we explain clustering results intuitively and interactively, visualizing potential evidence for experts to analyze? We present TrafficVis, the first interface for cluster-level HT detection and labeling. Developed through months of participatory design with domain experts, TrafficVis provides coordinated views in conjunction with carefully chosen backend algorithms to effectively show spatio-temporal and text patterns to a wide variety of anti-HT stakeholders. We build upon state-of-the-art text clustering algorithms by incorporating shared metadata as a signal of connected and possibly suspicious activity, then visualize the results. Domain experts can use TrafficVis to label clusters as HT or as other suspicious but non-HT activity, such as spam and scams, quickly creating labeled datasets to enable further HT research. Through domain expert feedback and a usage scenario, we demonstrate TrafficVis's efficacy. The feedback was overwhelmingly positive, with repeated high praise for the usability and explainability of our tool, the latter being vital for indicting possible criminals.
false
false
[ "Catalina Vajiac", "Polo Chau", "Andreas M. Olligschlaeger", "Rebecca Mackenzie", "Pratheeksha Nair", "Meng-Chieh Lee", "Yifei Li", "Namyong Park", "Reihaneh Rabbany", "Christos Faloutsos" ]
[ "HM" ]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/uIQ4tZDadK4", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/VTTTk0pfc8M", "icon": "video" } ]
Vis
2022
Traveler: Navigating Task Parallel Traces for Performance Analysis
10.1109/TVCG.2022.3209375
Understanding the behavior of software in execution is a key step in identifying and fixing performance issues. This is especially important in high performance computing contexts where even minor performance tweaks can translate into large savings in terms of computational resource use. To aid performance analysis, developers may collect an execution trace—a chronological log of program activity during execution. As traces represent the full history, developers can discover a wide array of possibly previously unknown performance issues, making them an important artifact for exploratory performance analysis. However, interactive trace visualization is difficult due to issues of data size and complexity of meaning. Traces represent nanosecond-level events across many parallel processes, meaning the collected data is often large and difficult to explore. The rise of asynchronous task parallel programming paradigms complicates the relation between events and their probable cause. To address these challenges, we conduct a continuing design study in collaboration with high performance computing researchers. We develop diverse and hierarchical ways to navigate and represent execution trace data in support of their trace analysis tasks. Through an iterative design process, we developed Traveler, an integrated visualization platform for task parallel traces. Traveler provides multiple linked interfaces to help navigate trace data from multiple contexts. We evaluate the utility of Traveler through feedback from users and a case study, finding that integrating multiple modes of navigation in our design supported performance analysis tasks and led to the discovery of previously unknown behavior in a distributed array library.
false
false
[ "Sayef Azad Sakin", "Alex Bigelow", "R. Tohid", "Connor Scully-Allison", "Carlos Scheidegger", "Steven R. Brandt", "Christopher Taylor", "Kevin A. Huck", "Hartmut Kaiser", "Katherine E. Isaacs" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2208.00109v3", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/hkWhOtfkLfo", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/z4pGQdqGEJ4", "icon": "video" } ]
Vis
2022
Uncertainty-Aware Multidimensional Scaling
10.1109/TVCG.2022.3209420
We present an extension of multidimensional scaling (MDS) to uncertain data, facilitating uncertainty visualization of multidimensional data. Our approach uses local projection operators that map high-dimensional random vectors to low-dimensional space to formulate a generalized stress. In this way, our generic model supports arbitrary distributions and various stress types. We use our uncertainty-aware multidimensional scaling (UAMDS) concept to derive a formulation for the case of normally distributed random vectors and a squared stress. The resulting minimization problem is numerically solved via gradient descent. We complement UAMDS by additional visualization techniques that address the sensitivity and trustworthiness of dimensionality reduction under uncertainty. With several examples, we demonstrate the usefulness of our approach and the importance of uncertainty-aware techniques.
false
false
[ "David Hägele", "Tim Krake", "Daniel Weiskopf" ]
[ "BP" ]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/edxsNZJMVAE", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/hMOzEQ1x7jI", "icon": "video" } ]
Vis
2022
Understanding Barriers to Network Exploration with Visualization: A Report from the Trenches
10.1109/TVCG.2022.3209487
This article reports on an in-depth study that investigates barriers to network exploration with visualizations. Network visualization tools are becoming increasingly popular, but little is known about how analysts plan and engage in the visual exploration of network data—which exploration strategies they employ, and how they prepare their data, define questions, and decide on visual mappings. Our study involved a series of workshops, interaction logging, and observations from a 6-week network exploration course. Our findings shed light on the stages that define analysts' approaches to network visualization and barriers experienced by some analysts during their network visualization processes. These barriers mainly appear before using a specific tool and include defining exploration goals, identifying relevant network structures and abstractions, or creating appropriate visual mappings for their network data. Our findings inform future work in visualization education and analyst-centered network visualization tool design.
false
false
[ "Mashael AlKadi", "Vanessa Serrano", "James Scott-Brown", "Catherine Plaisant", "Jean-Daniel Fekete", "Uta Hinrichs", "Benjamin Bach" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/Epml77ggwoQ", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/m5OSeJOOOzo", "icon": "video" } ]
Vis
2022
Understanding how Designers Find and Use Data Visualization Examples
10.1109/TVCG.2022.3209490
Examples are useful for inspiring ideas and facilitating implementation in visualization design. However, there is little understanding of how visualization designers use examples, and how computational tools may support such activities. In this paper, we contribute an exploratory study of current practices in incorporating visualization examples. We conducted semi-structured interviews with 15 university students and 15 professional designers. Our analysis focuses on two core design activities: searching for examples and utilizing examples. We characterize observed strategies and tools for performing these activities, as well as major challenges that hinder designers' current workflows. In addition, we identify themes that cut across these two activities: criteria for determining example usefulness, curation practices, and design fixation. Given our findings, we discuss the implications for visualization design and authoring tools and highlight critical areas for future research.
false
false
[ "Hannah K. Bako", "Xinyi Liu", "Leilani Battle", "Zhicheng Liu" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/gwx8P5rMt2s", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/yoS0dYyllqI", "icon": "video" } ]
Vis
2022
Unifying Effects of Direct and Relational Associations for Visual Communication
10.1109/TVCG.2022.3209443
People have expectations about how colors map to concepts in visualizations, and they are better at interpreting visualizations that match their expectations. Traditionally, studies on these expectations (inferred mappings) distinguished distinct factors relevant for visualizations of categorical vs. continuous information. Studies on categorical information focused on direct associations (e.g., mangos are associated with yellows) whereas studies on continuous information focused on relational associations (e.g., darker colors map to larger quantities; dark-is-more bias). We unite these two areas within a single framework of assignment inference. Assignment inference is the process by which people infer mappings between perceptual features and concepts represented in encoding systems. Observers infer globally optimal assignments by maximizing the “merit,” or “goodness,” of each possible assignment. Previous work on assignment inference focused on visualizations of categorical information. We extend this approach to visualizations of continuous data by (a) broadening the notion of merit to include relational associations and (b) developing a method for combining multiple (sometimes conflicting) sources of merit to predict people's inferred mappings. We developed and tested our model on data from experiments in which participants interpreted colormap data visualizations, representing fictitious data about environmental concepts (sunshine, shade, wild fire, ocean water, glacial ice). We found both direct and relational associations contribute independently to inferred mappings. These results can be used to optimize visualization design to facilitate visual communication.
false
false
[ "Melissa A. Schoenlein", "Johnny Campos", "Kevin J. Lande", "Laurent Lessard", "Karen B. Schloss" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2209.02782v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/U-YfpqfbUfg", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/nwkT-szaoGE", "icon": "video" } ]
Vis
2022
VACSEN: A Visualization Approach for Noise Awareness in Quantum Computing
10.1109/TVCG.2022.3209455
Quantum computing has attracted considerable public attention due to its exponential speedup over classical computing. Despite its advantages, today's quantum computers intrinsically suffer from noise and are error-prone. To guarantee the high fidelity of the execution result of a quantum algorithm, it is crucial to inform users of the noises of the used quantum computer and the compiled physical circuits. However, an intuitive and systematic way to make users aware of the quantum computing noise is still missing. In this paper, we fill the gap by proposing a novel visualization approach to achieve noise-aware quantum computing. It provides a holistic picture of the noise of quantum computing through multiple interactively coordinated views: a Computer Evolution View with a circuit-like design overviews the temporal evolution of the noises of different quantum computers, a Circuit Filtering View facilitates quick filtering of multiple compiled physical circuits for the same quantum algorithm, and a Circuit Comparison View with a coupled bar chart enables detailed comparison of the filtered compiled circuits. We extensively evaluate the performance of VACSEN through two case studies on quantum algorithms of different scales and in-depth interviews with 12 quantum computing users. The results demonstrate the effectiveness and usability of VACSEN in achieving noise-aware quantum computing.
false
false
[ "Shaolun Ruan", "Yong Wang 0021", "Weiwen Jiang", "Ying Mao", "Qiang Guan" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2207.14135v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/G3AeHSmoYYc", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/QWEnGpJAThs", "icon": "video" } ]
Vis
2022
VDL-Surrogate: A View-Dependent Latent-based Model for Parameter Space Exploration of Ensemble Simulations
10.1109/TVCG.2022.3209413
We propose VDL-Surrogate, a view-dependent neural-network-latent-based surrogate model for parameter space exploration of ensemble simulations that allows high-resolution visualizations and user-specified visual mappings. Surrogate-enabled parameter space exploration allows domain scientists to preview simulation results without having to run a large number of computationally costly simulations. Limited by computational resources, however, existing surrogate models may not produce previews with sufficient resolution for visualization and analysis. To improve the efficient use of computational resources and support high-resolution exploration, we perform ray casting from different viewpoints to collect samples and produce compact latent representations. This latent encoding process reduces the cost of surrogate model training while maintaining the output quality. In the model training stage, we select viewpoints to cover the whole viewing sphere and train corresponding VDL-Surrogate models for the selected viewpoints. In the model inference stage, we predict the latent representations at previously selected viewpoints and decode the latent representations to data space. For any given viewpoint, we make interpolations over decoded data at selected viewpoints and generate visualizations with user-specified visual mappings. We show the effectiveness and efficiency of VDL-Surrogate in cosmological and ocean simulations with quantitative and qualitative evaluations. Source code is publicly available at https://github.com/trainsn/VDL-Surrogate.
false
false
[ "Neng Shi", "Jiayi Xu 0001", "Haoyu Li", "Hanqi Guo 0001", "Jonathan Woodring", "Han-Wei Shen" ]
[ "HM" ]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2207.13091v3", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/nP3i5XfMmCQ", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/YCBLJDno87I", "icon": "video" } ]
Vis
2022
Visinity: Visual Spatial Neighborhood Analysis for Multiplexed Tissue Imaging Data
10.1109/TVCG.2022.3209378
New highly-multiplexed imaging technologies have enabled the study of tissues in unprecedented detail. These methods are increasingly being applied to understand how cancer cells and immune response change during tumor development, progression, and metastasis, as well as following treatment. Yet, existing analysis approaches focus on investigating small tissue samples on a per-cell basis, not taking into account the spatial proximity of cells, which indicates cell-cell interaction and specific biological processes in the larger cancer microenvironment. We present Visinity, a scalable visual analytics system to analyze cell interaction patterns across cohorts of whole-slide multiplexed tissue images. Our approach is based on a fast regional neighborhood computation, leveraging unsupervised learning to quantify, compare, and group cells by their surrounding cellular neighborhood. These neighborhoods can be visually analyzed in an exploratory and confirmatory workflow. Users can explore spatial patterns present across tissues through a scalable image viewer and coordinated views highlighting the neighborhood composition and spatial arrangements of cells. To verify or refine existing hypotheses, users can query for specific patterns to determine their presence and statistical significance. Findings can be interactively annotated, ranked, and compared in the form of small multiples. In two case studies with biomedical experts, we demonstrate that Visinity can identify common biological processes within a human tonsil and uncover novel white-blood cell networks and immune-tumor interactions.
false
false
[ "Simon Warchol", "Robert Krüger", "Ajit Johnson Nirmal", "Giorgio Gaglia", "Jared Jessup", "Cecily C. Ritch", "John Hoffer", "Jeremy Muhlich", "Megan L. Burger", "Tyler Jacks", "Sandro Santagata", "Peter K. Sorger", "Hanspeter Pfister" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/g3xy000UsFo", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/J3Vd6SEQrH0", "icon": "video" } ]
Vis
2022
Visual Analysis and Detection of Contrails in Aircraft Engine Simulations
10.1109/TVCG.2022.3209356
Contrails are condensation trails generated from emitted particles by aircraft engines, which perturb Earth's radiation budget. Simulation modeling is used to interpret the formation and development of contrails. These simulations are computationally intensive and rely on high-performance computing solutions, and the contrail structures are not well defined. We propose a visual computing system to assist in defining contrails and their characteristics, as well as in the analysis of parameters for computer-generated aircraft engine simulations. The back-end of our system leverages a contrail-formation criterion and clustering methods to detect contrails' shape and evolution and identify similar simulation runs. The front-end system helps analyze contrails and their parameters across multiple simulation runs. The evaluation with domain experts shows this approach successfully aids in contrail data investigation.
false
false
[ "Nafiul Nipu", "Carla Floricel", "Negar Naghashzadeh", "Roberto Paoli", "G. Elisabeta Marai" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2208.02321v2", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/hg0fEqMIwZA", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/MzQT1pGOAyw", "icon": "video" } ]
Vis
2022
Visual Analysis of Neural Architecture Spaces for Summarizing Design Principles
10.1109/TVCG.2022.3209404
Recent advances in artificial intelligence largely benefit from better neural network architectures. These architectures are a product of a costly process of trial-and-error. To ease this process, we develop ArchExplorer, a visual analysis method for understanding a neural architecture space and summarizing design principles. The key idea behind our method is to make the architecture space explainable by exploiting structural distances between architectures. We formulate the pairwise distance calculation as solving an all-pairs shortest path problem. To improve efficiency, we decompose this problem into a set of single-source shortest path problems. The time complexity is reduced from O(kn²N) to O(knN). Architectures are hierarchically clustered according to the distances between them. A circle-packing-based architecture visualization has been developed to convey both the global relationships between clusters and local neighborhoods of the architectures in each cluster. Two case studies and a post-analysis are presented to demonstrate the effectiveness of ArchExplorer in summarizing design principles and selecting better-performing architectures.
false
false
[ "Jun Yuan 0003", "Mengchen Liu", "Fengyuan Tian", "Shixia Liu" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2208.09665v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/ThkSP326kJ8", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/xvRd7LIOjLA", "icon": "video" } ]
Vis
2022
Visual Comparison of Language Model Adaptation
10.1109/TVCG.2022.3209458
Neural language models are widely used; however, their model parameters often need to be adapted to the specific domains and tasks of an application, which is time- and resource-consuming. Thus, adapters have recently been introduced as a lightweight alternative for model adaptation. They consist of a small set of task-specific parameters with a reduced training time and simple parameter composition. The simplicity of adapter training and composition comes along with new challenges, such as maintaining an overview of adapter properties and effectively comparing their produced embedding spaces. To help developers overcome these challenges, we provide a twofold contribution. First, in close collaboration with NLP researchers, we conducted a requirement analysis for an approach supporting adapter evaluation and detected, among others, the need for both intrinsic (i.e., embedding similarity-based) and extrinsic (i.e., prediction-based) explanation methods. Second, motivated by the gathered requirements, we designed a flexible visual analytics workspace that enables the comparison of adapter properties. In this paper, we discuss several design iterations and alternatives for interactive, comparative visual explanation methods. Our comparative visualizations show the differences in the adapted embedding vectors and prediction outcomes for diverse human-interpretable concepts (e.g., person names, human qualities). We evaluate our workspace through case studies and show that, for instance, an adapter trained on the language debiasing task according to context-0 (decontextualized) embeddings introduces a new type of bias where words (even gender-independent words such as countries) become more similar to female- than male pronouns. We demonstrate that these are artifacts of context-0 embeddings, and the adapter effectively eliminates the gender information from the contextualized word representations.
false
false
[ "Rita Sevastjanova", "Eren Cakmak", "Shauli Ravfogel", "Ryan Cotterell", "Mennatallah El-Assady" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2208.08176v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/xChs1wpCnmk", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/iYDyaWRhKmU", "icon": "video" } ]
Vis
2022
Visual Concept Programming: A Visual Analytics Approach to Injecting Human Intelligence at Scale
10.1109/TVCG.2022.3209466
Data-centric AI has emerged as a new research area to systematically engineer the data to land AI models for real-world applications. As a core method for data-centric AI, data programming helps experts inject domain knowledge into data and label data at scale using carefully designed labeling functions (e.g., heuristic rules, logistics). Though data programming has shown great success in the NLP domain, it is challenging to program image data because of a) the challenge to describe images using visual vocabulary without human annotations and b) lacking efficient tools for data programming of images. We present Visual Concept Programming, a first-of-its-kind visual analytics approach of using visual concepts to program image data at scale while requiring a few human efforts. Our approach is built upon three unique components. It first uses a self-supervised learning approach to learn visual representation at the pixel level and extract a dictionary of visual concepts from images without using any human annotations. The visual concepts serve as building blocks of labeling functions for experts to inject their domain knowledge. We then design interactive visualizations to explore and understand visual concepts and compose labeling functions with concepts without writing code. Finally, with the composed labeling functions, users can label the image data at scale and use the labeled data to refine the pixel-wise visual representation and concept quality. We evaluate the learned pixel-wise visual representation for the downstream task of semantic segmentation to show the effectiveness and usefulness of our approach. In addition, we demonstrate how our approach tackles real-world problems of image retrieval for autonomous driving.
false
false
[ "Md. Naimul Hoque", "Wenbin He", "Arvind Kumar Shekar", "Liang Gou", "Ren Liu" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/qRkDIc3hd88", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/Rq9T84ssE0o", "icon": "video" } ]
Vis
2022
Visualization Design Practices in a Crisis: Behind the Scenes with COVID-19 Dashboard Creators
10.1109/TVCG.2022.3209493
During the COVID-19 pandemic, a number of data visualizations were created to inform the public about the rapidly evolving crisis. Data dashboards, a form of information dissemination used during the pandemic, have facilitated this process by visualizing statistics regarding the number of COVID-19 cases over time. Prior work on COVID-19 visualizations has primarily focused on the design and evaluation of specific visualization systems from technology-centered perspectives. However, little is known about what occurs behind the scenes during the visualization creation processes, given the complex sociotechnical contexts in which they are embedded. Yet, such ecological knowledge is necessary to help characterize the nuances and trajectories of visualization design practices in the wild, as well as generate insights into how creators come to understand and approach visualization design on their own terms and for their own situated purposes. In this research, we conducted a qualitative interview study among dashboard creators from federal agencies, state health departments, mainstream news media outlets, and other organizations that created (often widely-used) COVID-19 dashboards to answer the following questions: how did visualization creators engage in COVID-19 dashboard design, and what tensions, conflicts, and challenges arose during this process? Our findings detail the trajectory of design practices—from creation to expansion, maintenance, and termination—that are shaped by the complex interplay between design goals, tools and technologies, labor, emerging crisis contexts, and public engagement. We particularly examined the tensions between designers and the general public involved in these processes. These conflicts, which often materialized due to a divergence between public demands and standing policies, centered around the type and amount of information to be visualized, how public perceptions shape and are shaped by visualization design, and the strategies utilized to deal with (potential) misinterpretations and misuse of visualizations. Our findings and lessons learned shed light on new ways of thinking in visualization design, focusing on the bundled activities that are invariably involved in human and nonhuman participation throughout the entire trajectory of design practice.
false
false
[ "Yixuan Zhang", "Yifan Sun 0002", "Joseph D. Gaggiano", "Neha Kumar", "Clio Andris", "Andrea G. Parker" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2207.12829v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/YAIVmmIm2KI", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/Ub0_LUxcVHA", "icon": "video" } ]
Vis
2022
Visualizing Ensemble Predictions of Music Mood
10.1109/TVCG.2022.3209379
Music mood classification has been a challenging problem in comparison with other music classification problems (e.g., genre, composer, or period). One solution for addressing this challenge is to use an ensemble of machine learning models. In this paper, we show that visualization techniques can effectively convey the popular prediction as well as uncertainty at different music sections along the temporal axis while enabling the analysis of individual ML models in conjunction with their application to different musical data. In addition to the traditional visual designs, such as stacked line graph, ThemeRiver, and pixel-based visualization, we introduce a new variant of ThemeRiver, called “dual-flux ThemeRiver”, which allows viewers to observe and measure the most popular prediction more easily than stacked line graph and ThemeRiver. Together with pixel-based visualization, dual-flux ThemeRiver plots can also assist in model-development workflows, in addition to annotating music using ensemble model predictions.
false
false
[ "Zelin Ye", "Min Chen 0001" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2112.07627v2", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/-nVLuVKEpe4", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/AyWvx7mJSXw", "icon": "video" } ]
Vis
2022
Visualizing the Passage of Time with Video Temporal Pyramids
10.1109/TVCG.2022.3209454
What can we learn about a scene by watching it for months or years? A video recorded over a long timespan will depict interesting phenomena at multiple timescales, but identifying and viewing them presents a challenge. The video is too long to watch in full, and some things are too slow to experience in real-time, such as glacial retreat or the gradual shift from summer to fall. Timelapse videography is a common approach to summarizing long videos and visualizing slow timescales. However, a timelapse is limited to a single chosen temporal frequency, and often appears flickery due to aliasing. Also, the length of the timelapse video is directly tied to its temporal resolution, which necessitates tradeoffs between those two facets. In this paper, we propose Video Temporal Pyramids, a technique that addresses these limitations and expands the possibilities for visualizing the passage of time. Inspired by spatial image pyramids from computer vision, we developed an algorithm that builds video pyramids in the temporal domain. Each level of a Video Temporal Pyramid visualizes a different timescale; for instance, videos from the monthly timescale are usually good for visualizing seasonal changes, while videos from the one-minute timescale are best for visualizing sunrise or the movement of clouds across the sky. To help explore the different pyramid levels, we also propose a Video Spectrogram to visualize the amount of activity across the entire pyramid, providing a holistic overview of the scene dynamics and the ability to explore and discover phenomena across time and timescales. To demonstrate our approach, we have built Video Temporal Pyramids from ten outdoor scenes, each containing months or years of data. We compare Video Temporal Pyramid layers to naive timelapse and find that our pyramids enable alias-free viewing of longer-term changes. We also demonstrate that the Video Spectrogram facilitates exploration and discovery of phenomena across pyramid levels, by enabling both overview and detail-focused perspectives.
false
false
[ "Melissa E. Swift", "Wyatt Ayers", "Sophie Pallanck", "Scott Wehrwein" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2208.11885v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/0YdE0CU2cEw", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/58dm1vK2-V4", "icon": "video" } ]
EuroVis
2022
A Flip-book of Knot Diagrams for Visualizing Surfaces in 4-Space
10.1111/cgf.14545
Just as 2D shadows of 3D curves lose structure where lines cross, 3D graphics projections of smooth 4D topological surfaces are interrupted where one surface intersects itself. They twist, turn, and fold back on themselves, leaving important but hidden features behind the surface sheets. In this paper, we propose a smart slicing tool that can read the 4D surface in its entropy map and suggest the optimal way to generate cross‐sectional images — or “slices” — of the surface to visualize its underlying 4D structure. Our visualization thinks of a 4D‐embedded surface as a collection of 3D curves stacked in time, very much like a flip‐book animation, where successive terms in the sequence differ at most by a critical change. This novel method can generate topologically meaningful visualization to depict complex and unfamiliar 4D surfaces, with the minimum number of cross‐sectional diagrams. Our approach has been successfully used to create flip‐books of diagrams to visualize a range of known 4D surfaces. In this preliminary study, our results show that the new visualization and slicing tool can help the viewers to understand and describe the complex spatial relationships and overall structures of 4D surfaces.
false
false
[ "Huan Liu", "Hui Zhang" ]
[]
[]
[]
EuroVis
2022
A Grammar-Based Approach for Applying Visualization Taxonomies to Interaction Logs
10.1111/cgf.14557
Researchers collect large amounts of user interaction data with the goal of mapping users' workflows and behaviors to their high‐level motivations, intuitions, and goals. Although the visual analytics community has proposed numerous taxonomies to facilitate this mapping process, no formal methods exist for systematically applying these existing theories to user interaction logs. This paper seeks to bridge the gap between visualization task taxonomies and interaction log data by making the taxonomies more actionable for interaction log analysis. To achieve this, we leverage structural parallels between how people express themselves through interactions and language by reformulating existing theories as regular grammars. We represent interactions as terminals within a regular grammar, similar to the role of individual words in a language, and patterns of interactions or non‐terminals as regular expressions over these terminals to capture common language patterns. To demonstrate our approach, we generate regular grammars for seven existing visualization taxonomies and develop code to apply them to three public interaction log datasets. In analyzing these regular grammars, we find that the taxonomies at the low‐level (i.e., terminals) show mixed results in expressing multiple interaction log datasets, and taxonomies at the high‐level (i.e., regular expressions) have limited expressiveness, due primarily to two challenges: inconsistencies in interaction log dataset granularity and structure, and under‐expressiveness of certain terminals. Based on our findings, we suggest new research directions for the visualization community to augment existing taxonomies, develop new ones, and build better interaction log recording processes to facilitate the data‐driven development of user behavior taxonomies.
false
false
[ "Sneha Gathani", "Shayan Monadjemi", "Alvitta Ottley", "Leilani Battle" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2201.03740v2", "icon": "paper" } ]
EuroVis
2022
A Process Model for Dashboard Onboarding
10.1111/cgf.14558
Dashboards are used ubiquitously to gain and present insights into data by means of interactive visualizations. To bridge the gap between non‐expert dashboard users and potentially complex datasets and/or visualizations, a variety of onboarding strategies are employed, including videos, narration, and interactive tutorials. We propose a process model for dashboard onboarding that formalizes and unifies such diverse onboarding strategies. Our model introduces the onboarding loop alongside the dashboard usage loop. Unpacking the onboarding loop reveals how each onboarding strategy combines selected building blocks of the dashboard with an onboarding narrative. Specific means are applied to this narration sequence for onboarding, which results in onboarding artifacts that are presented to the user via an interface. We concretize these concepts by showing how our process model can be used to describe a selection of real‐world onboarding examples. Finally, we discuss how our model can serve as an actionable blueprint for developing new onboarding systems.
false
false
[ "Vaishali Dhanoa", "Conny Walchshofer", "Andreas P. Hinterreiter", "Holger Stitz", "M. Eduard Gröller", "Marc Streit" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "https://osf.io/gux9w", "icon": "paper" } ]
EuroVis
2022
A Survey of Visualization and Analysis in High-Resolution Connectomics
10.1111/cgf.14574
The field of connectomics aims to reconstruct the wiring diagram of neurons and synapses to enable new insights into the workings of the brain. Reconstructing and analyzing the neuronal connectivity, however, relies on many individual steps, starting from high‐resolution data acquisition to automated segmentation, proofreading, interactive data exploration, and circuit analysis. All of these steps have to handle large and complex datasets and rely on or benefit from integrated visualization methods. In this state‐of‐the‐art report, we describe visualization methods that can be applied throughout the connectomics pipeline, from data acquisition to circuit analysis. We first define the different steps of the pipeline and focus on how visualization is currently integrated into these steps. We also survey open science initiatives in connectomics, including usable open‐source tools and publicly available datasets. Finally, we discuss open challenges and possible future directions of this exciting research field.
false
false
[ "Johanna Beyer", "Jakob Troidl", "Saeed Boorboor", "Markus Hadwiger", "Arie E. Kaufman", "Hanspeter Pfister" ]
[]
[]
[]
EuroVis
2022
A Typology of Guidance Tasks in Mixed-Initiative Visual Analytics Environments
10.1111/cgf.14555
Guidance has been proposed as a conceptual framework to understand how mixed‐initiative visual analytics approaches can actively support users as they solve analytical tasks. While user tasks received a fair share of attention, it is still not completely clear how they could be supported with guidance and how such support could influence the progress of the task itself. Our observation is that there is a research gap in understanding the effect of guidance on the analytical discourse, in particular, for the knowledge generation in mixed‐initiative approaches. As a consequence, guidance in a visual analytics environment is usually indistinguishable from common visualization features, making user responses challenging to predict and measure. To address these issues, we take a system perspective to propose the notion of guidance tasks and we present it as a typology closely aligned to established user task typologies. We derived the proposed typology directly from a model of guidance in the knowledge generation process and illustrate its implications for guidance design. By discussing three case studies, we show how our typology can be applied to analyze existing guidance systems. We argue that without a clear consideration of the system perspective, the analysis of tasks in mixed‐initiative approaches is incomplete. Finally, by analyzing matchings of user and guidance tasks, we describe how guidance tasks could either help the user conclude the analysis or change its course.
false
false
[ "Ignacio Pérez-Messina", "Davide Ceneda", "Mennatallah El-Assady", "Silvia Miksch", "Fabian Sperrle" ]
[]
[]
[]
EuroVis
2022
AirLens: Multi-Level Visual Exploration of Air Quality Evolution in Urban Agglomerations
10.1111/cgf.14535
The precise prevention and control of air pollution is a great challenge faced by environmental experts in recent years. Understanding the air quality evolution in the urban agglomeration is important for coordinated control of air pollution. However, the complex pollutant interactions between different cities lead to the collaborative evolution of air quality. The existing statistical and machine learning methods cannot well support the comprehensive analysis of the dynamic air quality evolution. In this study, we propose AirLens, an interactive visual analytics system that can help domain experts explore and understand the air quality evolution in the urban agglomeration from multiple levels and multiple aspects. To facilitate the cognition of the complex multivariate spatiotemporal data, we first propose a multi‐run clustering strategy with a novel glyph design for summarizing and understanding the typical pollutant patterns effectively. On this basis, the system supports the multi‐level exploration of air quality evolution, namely, the overall level, stage level and detail level. Frequent pattern mining, city community extraction and useful filters are integrated into the system for discovering significant information comprehensively. The case study and positive feedback from domain experts demonstrate the effectiveness and usability of AirLens.
false
false
[ "Dezhan Qu", "Cheng Lv", "Yiming Lin", "Huijie Zhang", "Rong Wang" ]
[]
[]
[]
EuroVis
2022
An Interactive Approach for Identifying Structure Definitions
10.1111/cgf.14543
Our ability to grasp and understand complex phenomena is essentially based on recognizing structures and relating these to each other. For example, any meteorological description of a weather condition and explanation of its evolution recurs to meteorological structures, such as convection and circulation structures, cloud fields and rain fronts. All of these are spatiotemporal structures, defined by time‐dependent patterns in the underlying fields. Typically, such a structure is defined by a verbal description that corresponds to the more or less uniform, often somewhat vague mental images of the experts. However, a precise, formal definition of the structures or, more generally, of the concepts is often desirable, e.g., to enable automated data analysis or the development of phenomenological models. Here, we present a systematic approach and an interactive tool to obtain formal definitions of spatiotemporal structures. The tool enables experts to evaluate and compare different structure definitions on the basis of data sets with time‐dependent fields that contain the respective structure. Since structure definitions are typically parameterized, an essential part is to identify parameter ranges that lead to desired structures in all time steps. In addition, it is important to allow a quantitative assessment of the resulting structures simultaneously. We demonstrate the use of the tool by applying it to two meteorological examples: finding structure definitions for vortex cores and center lines of temporally evolving tropical cyclones. Ideally, structure definitions should be objective and applicable to as many data sets as possible. However, finding such definitions, e.g., for the common atmospheric structures in meteorology, can only be a long‐term goal. The proposed procedure, together with the presented tool, is just a first systematic approach aiming at facilitating this long and arduous way.
false
false
[ "Natalia Mikula", "Tom Dörffel", "Daniel Baum", "Hans-Christian Hege" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2112.09066v1", "icon": "paper" } ]
EuroVis
2022
Barrio: Customizable Spatial Neighborhood Analysis and Comparison for Nanoscale Brain Structures
10.1111/cgf.14532
High‐resolution electron microscopy imaging allows neuroscientists to reconstruct not just entire cells but individual cell substructures (i.e., cell organelles) as well. Based on these data, scientists hope to get a better understanding of brain function and development through detailed analysis of local organelle neighborhoods. In‐depth analyses require efficient and scalable comparison of a varying number of cell organelles, ranging from two to hundreds of local spatial neighborhoods. Scientists need to be able to analyze the 3D morphologies of organelles, their spatial distributions and distances, and their spatial correlations. We have designed Barrio as a configurable framework that scientists can adjust to their preferred workflow, visualizations, and supported user interactions for their specific tasks and domain questions. Furthermore, Barrio provides a scalable comparative visualization approach for spatial neighborhoods that automatically adjusts visualizations based on the number of structures to be compared. Barrio supports small multiples of spatial 3D views as well as abstract quantitative views, and arranges them in linked and juxtaposed views. To adapt to new domain‐specific analysis scenarios, we allow the definition of individualized visualizations and their parameters for each analysis session. We present an in‐depth case study for mitochondria analysis in neuronal tissue and demonstrate the usefulness of Barrio in a qualitative user study with neuroscientists.
false
false
[ "Jakob Troidl", "Corrado Calì", "Eduard Gröller", "Hanspeter Pfister", "Markus Hadwiger", "Johanna Beyer" ]
[]
[]
[]
EuroVis
2022
Branch Decomposition-Independent Edit Distances for Merge Trees
10.1111/cgf.14547
Edit distances between merge trees of scalar fields have many applications in scientific visualization, such as ensemble analysis, feature tracking or symmetry detection. In this paper, we propose branch mappings, a novel approach to the construction of edit mappings for merge trees. Classic edit mappings match nodes or edges of two trees onto each other, and therefore have to either rely on branch decompositions of both trees or have to use auxiliary node properties to determine a matching. In contrast, branch mappings employ branch properties instead of node similarity information, and are independent of predetermined branch decompositions. Especially for topological features, which are typically based on branch properties, this allows a more intuitive distance measure which is also less susceptible to instabilities from small‐scale perturbations. For trees with 𝒪(n) nodes, we describe an 𝒪(n⁴) algorithm for computing optimal branch mappings, which is faster than the only other branch decomposition‐independent method in the literature by more than a linear factor. Furthermore, we compare the results of our method on synthetic and real‐world examples to demonstrate its practicality and utility.
false
false
[ "Florian Wetzels", "Heike Leitte", "Christoph Garth" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2204.02919v1", "icon": "paper" } ]
EuroVis
2022
Chart Question Answering: State of the Art and Future Directions
10.1111/cgf.14573
Information visualizations such as bar charts and line charts are very common for analyzing data and discovering critical insights. Often people analyze charts to answer questions that they have in mind. Answering such questions can be challenging as they often require a significant amount of perceptual and cognitive effort. Chart Question Answering (CQA) systems typically take a chart and a natural language question as input and automatically generate the answer to facilitate visual data analysis. Over the last few years, there has been a growing body of literature on the task of CQA. In this survey, we systematically review the current state‐of‐the‐art research focusing on the problem of chart question answering. We provide a taxonomy by identifying several important dimensions of the problem domain including possible inputs and outputs of the task and discuss the advantages and limitations of proposed solutions. We then summarize various evaluation techniques used in the surveyed papers. Finally, we outline the open challenges and future research opportunities related to chart question answering.
false
false
[ "Enamul Hoque", "Parsa Kavehzadeh", "Ahmed Masry" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2205.03966v2", "icon": "paper" } ]
EuroVis
2022
CorpusVis: Visual Analysis of Digital Sheet Music Collections
10.1111/cgf.14540
Manually investigating sheet music collections is challenging for music analysts due to the magnitude and complexity of underlying features, structures, and contextual information. However, applying sophisticated algorithmic methods would require advanced technical expertise that analysts do not necessarily have. Bridging this gap, we contribute CorpusVis, an interactive visual workspace, enabling scalable and multi‐faceted analysis. Our proposed visual analytics dashboard provides access to computational methods, generating varying perspectives on the same data. The proposed application uses metadata including composers, type, epoch, and low‐level features, such as pitch, melody, and rhythm. To evaluate our approach, we conducted a pair‐analytics study with nine participants. The qualitative results show that CorpusVis supports users in performing exploratory and confirmatory analysis, leading them to new insights and findings. In addition, based on three exemplary workflows, we demonstrate how to apply our approach to different tasks, such as exploring musical features or comparing composers.
false
false
[ "Matthias Miller", "Julius Rauscher", "Daniel A. Keim", "Mennatallah El-Assady" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2203.12663v1", "icon": "paper" } ]
EuroVis
2022
DanmuVis: Visualizing Danmu Content Dynamics and Associated Viewer Behaviors in Online Videos
10.1111/cgf.14552
Danmu (Danmaku) is a unique social media service in online videos, especially popular in Japan and China, for viewers to write comments while watching videos. The danmu comments are overlaid on the video screen and synchronized to the associated video time, indicating viewers' thoughts of the video clip. This paper introduces an interactive visualization system to analyze danmu comments and associated viewer behaviors in a collection of videos and enable detailed exploration of one video on demand. The watching behaviors of viewers are identified by comparing video time and post time of viewers' danmu. The system supports analyzing danmu content and viewers' behaviors against both video time and post time to gain insights into viewers' online participation and perceived experience. Our evaluations, including usage scenarios and user interviews, demonstrate the effectiveness and usability of our system.
false
false
[ "Shuai Chen 0001", "Sihang Li", "Yanda Li", "Junlin Zhu", "Juanjuan Long", "Siming Chen 0001", "Jiawan Zhang", "Xiaoru Yuan" ]
[]
[]
[]
EuroVis
2022
Effective Use of Likert Scales in Visualization Evaluations: A Systematic Review
10.1111/cgf.14521
Likert scales are often used in visualization evaluations to produce quantitative estimates of subjective attributes, such as ease of use or aesthetic appeal. However, the methods used to collect, analyze, and visualize data collected with Likert scales are inconsistent among evaluations in visualization papers. In this paper, we examine the use of Likert scales as a tool for measuring subjective response in a systematic review of 134 visualization evaluations published between 2009 and 2019. We find that papers with both objective and subjective measures do not hold the same reporting and analysis standards for both aspects of their evaluation, producing less rigorous work for the subjective qualities measured by Likert scales. Additionally, we demonstrate that many papers are inconsistent in their interpretations of Likert data as discrete or continuous and may even sacrifice statistical power by applying nonparametric tests unnecessarily. Finally, we identify instances where key details about Likert item construction with the potential to bias participant responses are omitted from evaluation methodology reporting, inhibiting the feasibility and reliability of future replication studies. We summarize recommendations from other fields for best practices with Likert data in visualization evaluations, based on the results of our survey. A full copy of this paper and all supplementary material are available at https://osf.io/exbz8/.
false
false
[ "Laura South", "David Saffo", "Olga Vitek", "Cody Dunne", "Michelle A. Borkin" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "https://osf.io/6f3zs", "icon": "paper" } ]
EuroVis
2022
Exploring Effects of Ecological Visual Analytics Interfaces on Experts' and Novices' Decision-Making Processes: A Case Study in Air Traffic Control
10.1111/cgf.14554
Operational demands in safety‐critical systems impose a risk of failure to the operators especially during urgent situations. Operators of safety‐critical systems learn to make decisions effectively throughout extensive training programs and many years of experience. In the domain of air traffic control, expensive training with high dropout rates calls for research to enhance novices' ability to detect and resolve conflicts in the airspace. While previous researchers have mostly focused on redesigning training instructions and programs, the current paper explores possible benefits of novel visual representations to improve novices' understanding of the situations as well as their decision‐making process. We conduct an experimental evaluation study testing two ecological visual analytics interfaces, developed in a previous study, as support systems to facilitate novice decision‐making. The main contribution of this paper is threefold. First, we describe the application of an ecological interface design approach to the development of two visual analytics interfaces. Second, we perform a human‐in‐the‐loop experiment with forty‐five novices within a simplified air traffic control simulation environment. Third, by performing an expert‐novice comparison we investigate the extent to which effects of the proposed interfaces can be attributed to the subjects' expertise. The results show that the proposed ecological visual analytics interfaces improved novices' understanding of the information about conflicts as well as their problem‐solving performance. Further, the results show that the beneficial effects of the proposed interfaces were more attributable to the visual representations than the users' expertise.
false
false
[ "Elmira Zohrevandi", "Carl A. L. Westin", "Katerina Vrotsou", "Jonas Lundberg" ]
[]
[]
[]
EuroVis
2022
Exploring How Visualization Design and Situatedness Evoke Compassion in the Wild
10.1111/cgf.14553
This work explores how the design and situatedness of data representations affect people's compassion with a case study concerning harassment episodes in a public place. Results contribute to advancing the understanding of how visualizations can evoke emotions and their impact on prosocial behaviors, such as helping people in need. Recent literature examined the effect of different on‐screen data representations on emotion or prosociality, but little has been done concerning visualizations shown in a public place — especially a space contextually relevant to the data — or presented through unconventional media formats such as physical marks. We conducted two in‐the‐wild studies to investigate how different factors affect people's self‐reported compassion and intention to donate. We compared three ways of presenting data about the harassment cases: (1) communicating data only verbally; (2) using a printed poster with aggregated information; and (3) using a physicalization with detailed information about each story. We found that the physicalization influenced people to donate more than only hearing about the data, but it is unclear if the same applied to the poster visualization. Also, passers‐by reported a likely small increase in compassion when they saw the physicalization instead of the poster. We also examined the role of situatedness by showing the physicalization in a site that is not contextually relevant to the data. Our results suggest that people had a similar intention to donate and levels of compassion in both places. Those findings may indicate that using specific visualization designs to support campaigns about sensitive causes (e.g., sexual harassment) can increase the emotional response of passers‐by and may motivate them to help, independently of where the data representation is shown. Finally, this work also informs on the strengths and weaknesses of using research in the wild to evaluate data visualizations in public spaces.
false
false
[ "Luiz Morais", "Nazareno Andrade", "D. Sousa" ]
[]
[]
[]
EuroVis
2022
Exploring Multivariate Event Sequences with an Interactive Similarity Builder
10.1111/cgf.14539
Similarity‐based exploration is an effective method in knowledge discovery. Faced with multivariate event sequence data (MVES), developing a satisfactory similarity measurement for a specific question is challenging because of the heterogeneity introduced by numerous attributes with different data formats, coupled with their associations. Additionally, the absence of effective validation feedback makes judging the goodness of a measurement scheme a time‐consuming and error‐prone procedure. To free analysts from tedious programming to concentrate on the exploration of MVES data, this paper introduces an interactive similarity builder, where analysts can use visual building blocks for assembling similarity measurements in a drag‐and‐drop and incremental fashion. Based on the builder, we further propose a visual analytics framework that provides multi‐granularity visual validations for measurement schemes and supports a recursive workflow for refining the focus set. We illustrate the power of our prototype through a case study and a user study with real‐world datasets. Results suggest that the system improves the efficiency of developing similarity measurements and the usefulness of exploring MVES data.
false
false
[ "Shaobin Xu", "Minghui Sun", "Zhengtai Zhang", "Hao Xue" ]
[]
[]
[]
EuroVis
2022
How accessible is my visualization? Evaluating visualization accessibility with Chartability
10.1111/cgf.14522
Novices and experts have struggled to evaluate the accessibility of data visualizations because there are no common shared guidelines across environments, platforms, and contexts in which data visualizations are authored. Between non‐specific standards bodies like WCAG, emerging research, and guidelines from specific communities of practice, it is hard to organize knowledge on how to evaluate accessible data visualizations. We present Chartability, a set of heuristics synthesized from these various sources which enables designers, developers, researchers, and auditors to evaluate data‐driven visualizations and interfaces for visual, motor, vestibular, neurological, and cognitive accessibility. In this paper, we outline our process of making a set of heuristics and accessibility principles for Chartability and highlight key features in the auditing process. Working with participants on real projects, we found that data practitioners with a novice level of accessibility skills were more confident and found auditing to be easier after using Chartability. Expert accessibility practitioners were eager to integrate Chartability into their own work. Reflecting on Chartability's development and the preliminary user evaluation, we discuss tradeoffs of open projects, working with high‐risk evaluations like auditing projects in the wild, and challenge future research projects at the intersection of visualization and accessibility to consider the broad intersections of disabilities.
false
false
[ "Frank Elavsky", "Cynthia L. Bennett", "Dominik Moritz" ]
[]
[]
[]
EuroVis
2022
Hybrid Touch/Tangible Spatial Selection in Augmented Reality
10.1111/cgf.14550
We study tangible touch tablets combined with Augmented Reality Head‐Mounted Displays (AR‐HMDs) to perform spatial 3D selections. We are primarily interested in the exploration of 3D unstructured datasets such as cloud points or volumetric datasets. AR‐HMDs immerse users by showing datasets stereoscopically, and tablets provide a set of 2D exploration tools. Because AR‐HMDs merge the visualization, interaction, and the users' physical spaces, users can also use the tablets as tangible objects in their 3D space. Nonetheless, the tablets' touch displays provide their own visualization and interaction spaces, separated from those of the AR‐HMD. This raises several research questions compared to traditional setups. In this paper, we theorize, discuss, and study different available mappings for manual spatial selections using a tangible tablet within an AR‐HMD space. We then study the use of this tablet within a 3D AR environment, compared to its use with a 2D external screen.
false
false
[ "Mickaël Sereno", "Stéphane Gosset", "Lonni Besançon", "Tobias Isenberg 0001" ]
[]
[]
[]
EuroVis
2022
HyperNP: Interactive Visual Exploration of Multidimensional Projection Hyperparameters
10.1111/cgf.14531
Projection algorithms such as t‐SNE or UMAP are useful for the visualization of high dimensional data, but depend on hyperparameters which must be tuned carefully. Unfortunately, iteratively recomputing projections to find the optimal hyperparameter values is computationally intensive and unintuitive due to the stochastic nature of such methods. In this paper we propose HyperNP, a scalable method that allows for real‐time interactive hyperparameter exploration of projection methods by training neural network approximations. A HyperNP model can be trained on a fraction of the total data instances and hyperparameter configurations that one would like to investigate and can compute projections for new data and hyperparameters at interactive speeds. HyperNP models are compact in size and fast to compute, thus allowing them to be embedded in lightweight visualization systems. We evaluate the performance of HyperNP across three datasets in terms of performance and speed. The results suggest that HyperNP models are accurate, scalable, interactive, and appropriate for use in real‐world settings.
false
false
[ "Gabriel Appleby", "Mateus Espadoto", "Rui Chen", "Samuel Goree", "Alexandru C. Telea", "Erik W. Anderson", "Remco Chang" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2106.13777v1", "icon": "paper" } ]
EuroVis
2022
Infographics Wizard: Flexible Infographics Authoring and Design Exploration
10.1111/cgf.14527
Infographics are an aesthetic visual representation of information following specific design principles of human perception. Designing infographics can be a tedious process for non‐experts and time‐consuming, even for professional designers. With the help of designers, we propose a semi‐automated infographic framework for general structured and flow‐based infographic design generation. For novice designers, our framework automatically creates and ranks infographic designs for a user‐provided text with no requirement for design input. However, expert designers can still provide custom design inputs to customize the infographics. We will also contribute an individual visual group (VG) designs dataset (in SVG), along with a 1k complete infographic image dataset with segmented VGs in this work. Evaluation results confirm that by using our framework, designers from all expertise levels can generate generic infographic designs faster than existing methods while maintaining the same quality as hand‐designed infographics templates.
false
false
[ "Anjul Tyagi", "Jian Zhao 0010", "Pushkar Patel", "Swasti Khurana", "Klaus Mueller 0001" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2204.09904v2", "icon": "paper" } ]
EuroVis
2022
Interactively Assessing Disentanglement in GANs
10.1111/cgf.14524
Generative adversarial networks (GAN) have witnessed tremendous growth in recent years, demonstrating wide applicability in many domains. However, GANs remain notoriously difficult for people to interpret, particularly for modern GANs capable of generating photo‐realistic imagery. In this work we contribute a visual analytics approach for GAN interpretability, where we focus on the analysis and visualization of GAN disentanglement. Disentanglement is concerned with the ability to control content produced by a GAN along a small number of distinct, yet semantic, factors of variation. The goal of our approach is to shed insight on GAN disentanglement, above and beyond coarse summaries, instead permitting a deeper analysis of the data distribution modeled by a GAN. Our visualization allows one to assess a single factor of variation in terms of groupings and trends in the data distribution, where our analysis seeks to relate the learned representation space of GANs with attribute‐based semantic scoring of images produced by GANs. Through use‐cases, we show that our visualization is effective in assessing disentanglement, allowing one to quickly recognize a factor of variation and its overall quality. In addition, we show how our approach can highlight potential dataset biases learned by GANs.
false
false
[ "Sangwon Jeong", "Shusen Liu 0001", "Matthew Berger" ]
[]
[]
[]
EuroVis
2022
Investigating the Role and Interplay of Narrations and Animations in Data Videos
10.1111/cgf.14560
Combining data visualizations, animations, and audio narrations, data videos can increase viewer engagement and effectively communicate data stories. Due to their increasing popularity, data videos have gained growing attention from the visualization research community. However, recent research on data videos has focused on animations, lacking an understanding of narrations. In this work, we study how data videos use narrations and animations to convey information effectively. We conduct a qualitative analysis on 426 clips with visualizations extracted from 60 data videos collected from a variety of media outlets, covering a diverse array of topics. We manually label 816 sentences with 1226 semantic labels and record the composition of 2553 animations through an open coding process. We also analyze how narrations and animations coordinate with each other by assigning links between semantic labels and animations. With 937 (76.4%) semantic labels and 2503 (98.0%) animations linked, we identify four types of narration‐animation relationships in the collected clips. Drawing from the findings, we discuss study implications and future research opportunities of data videos.
false
false
[ "Hao Cheng", "Junhong Wang", "Yun Wang 0012", "Bongshin Lee", "Haidong Zhang", "Dongmei Zhang 0001" ]
[]
[]
[]