Dataset columns:
- Conference: string (6 distinct values)
- Year: int64 (values from 1.99k to 2.03k)
- Title: string (length 8 to 187)
- DOI: string (length 16 to 32)
- Abstract: string (length 128 to 7.15k)
- Accessible: bool (2 classes)
- Early: bool (2 classes)
- AuthorNames-Deduped: list (length 1 to 24)
- Award: list (length 0 to 2)
- Resources: list (length 0 to 5)
- ResourceLinks: list (length 0 to 10)
Conference: Vis
Year: 2021
Title: Compass: Towards Better Causal Analysis of Urban Time Series
DOI: 10.1109/TVCG.2021.3114875
Abstract:
The spatial time series generated by city sensors allow us to observe urban phenomena like environmental pollution and traffic congestion at an unprecedented scale. However, recovering causal relations from these observations to explain the sources of urban phenomena remains a challenging task because these causal relations tend to be time-varying and demand proper time series partitioning for effective analyses. The prior approaches extract one causal graph given long-time observations, which cannot be directly applied to capturing, interpreting, and validating dynamic urban causality. This paper presents Compass, a novel visual analytics approach for in-depth analyses of the dynamic causality in urban time series. To develop Compass, we identify and address three challenges: detecting urban causality, interpreting dynamic causal relations, and unveiling suspicious causal relations. First, multiple causal graphs over time among urban time series are obtained with a causal detection framework extended from the Granger causality test. Then, a dynamic causal graph visualization is designed to reveal the time-varying causal relations across these causal graphs and facilitate the exploration of the graphs along the time. Finally, a tailored multi-dimensional visualization is developed to support the identification of spurious causal relations, thereby improving the reliability of causal analyses. The effectiveness of Compass is evaluated with two case studies conducted on the real-world urban datasets, including the air pollution and traffic speed datasets, and positive feedback was received from domain experts.
Accessible: false
Early: false
Authors: Zikun Deng, Di Weng, Xiao Xie, Jie Bao 0003, Yu Zheng 0004, Mingliang Xu, Wei Chen 0001, Yingcai Wu
Award: none
Resources: V
ResourceLinks: Fast Forward (https://youtu.be/XQpSA3iRs7k)
Conference: Vis
Year: 2021
Title: Context Matters: A Theory of Semantic Discriminability for Perceptual Encoding Systems
DOI: 10.1109/TVCG.2021.3114780
Abstract:
People's associations between colors and concepts influence their ability to interpret the meanings of colors in information visualizations. Previous work has suggested such effects are limited to concepts that have strong, specific associations with colors. However, although a concept may not be strongly associated with any colors, its mapping can be disambiguated in the context of other concepts in an encoding system. We articulate this view in semantic discriminability theory, a general framework for understanding conditions determining when people can infer meaning from perceptual features. Semantic discriminability is the degree to which observers can infer a unique mapping between visual features and concepts. Semantic discriminability theory posits that the capacity for semantic discriminability for a set of concepts is constrained by the difference between the feature-concept association distributions across the concepts in the set. We define formal properties of this theory and test its implications in two experiments. The results show that the capacity to produce semantically discriminable colors for sets of concepts was indeed constrained by the statistical distance between color-concept association distributions (Experiment 1). Moreover, people could interpret meanings of colors in bar graphs insofar as the colors were semantically discriminable, even for concepts previously considered “non-colorable” (Experiment 2). The results suggest that colors are more robust for visual communication than previously thought.
Accessible: false
Early: false
Authors: Kushin Mukherjee, Brian Yin, Brianne E. Sherman, Laurent Lessard, Karen B. Schloss
Award: HM
Resources: P
ResourceLinks: Paper Preprint (http://arxiv.org/pdf/2108.03685v4)
Conference: Vis
Year: 2021
Title: CoUX: Collaborative Visual Analysis of Think-Aloud Usability Test Videos for Digital Interfaces
DOI: 10.1109/TVCG.2021.3114822
Abstract:
Reviewing a think-aloud video is both time-consuming and demanding as it requires UX (user experience) professionals to attend to many behavioral signals of the user in the video. Moreover, challenges arise when multiple UX professionals need to collaborate to reduce bias and errors. We propose a collaborative visual analytics tool, CoUX, to facilitate UX evaluators collectively reviewing think-aloud usability test videos of digital interfaces. CoUX seamlessly supports usability problem identification, annotation, and discussion in an integrated environment. To ease the discovery of usability problems, CoUX visualizes a set of problem-indicators based on acoustic, textual, and visual features extracted from the video and audio of a think-aloud session with machine learning. CoUX further enables collaboration amongst UX evaluators for logging, commenting, and consolidating the discovered problems with a chatbox-like user interface. We designed CoUX based on a formative study with two UX experts and insights derived from the literature. We conducted a user study with six pairs of UX practitioners on collaborative think-aloud video analysis tasks. The results indicate that CoUX is useful and effective in facilitating both problem identification and collaborative teamwork. We provide insights into how different features of CoUX were used to support both independent analysis and collaboration. Furthermore, our work highlights opportunities to improve collaborative usability test video analysis.
Accessible: false
Early: false
Authors: Ehsan Jahangirzadeh Soure, Emily Kuang, Mingming Fan 0001, Jian Zhao 0010
Award: none
Resources: V
ResourceLinks: Fast Forward (https://youtu.be/36l1zxUS_N4)
Conference: Vis
Year: 2021
Title: COVID-view: Diagnosis of COVID-19 using Chest CT
DOI: 10.1109/TVCG.2021.3114851
Abstract:
Significant work has been done towards deep learning (DL) models for automatic lung and lesion segmentation and classification of COVID-19 on chest CT data. However, comprehensive visualization systems focused on supporting the dual visual+DL diagnosis of COVID-19 are non-existent. We present COVID-view, a visualization application specially tailored for radiologists to diagnose COVID-19 from chest CT data. The system incorporates a complete pipeline of automatic lungs segmentation, localization/isolation of lung abnormalities, followed by visualization, visual and DL analysis, and measurement/quantification tools. Our system combines the traditional 2D workflow of radiologists with newer 2D and 3D visualization techniques with DL support for a more comprehensive diagnosis. COVID-view incorporates a novel DL model for classifying the patients into positive/negative COVID-19 cases, which acts as a reading aid for the radiologist using COVID-view and provides the attention heatmap as an explainable DL for the model output. We designed and evaluated COVID-view through suggestions, close feedback and conducting case studies of real-world patient data by expert radiologists who have substantial experience diagnosing chest CT scans for COVID-19, pulmonary embolism, and other forms of lung infections. We present requirements and task analysis for the diagnosis of COVID-19 that motivate our design choices and results in a practical system which is capable of handling real-world patient cases.
Accessible: false
Early: false
Authors: Shreeraj Jadhav, Gaofeng Deng, Marlene Zawin, Arie E. Kaufman
Award: none
Resources: P, V
ResourceLinks: Paper Preprint (http://arxiv.org/pdf/2108.03799v1); Fast Forward (https://youtu.be/q9PAQZU8bb0)
Conference: Vis
Year: 2021
Title: DDLVis: Real-time Visual Query of Spatiotemporal Data Distribution via Density Dictionary Learning
DOI: 10.1109/TVCG.2021.3114762
Abstract:
Visual query of spatiotemporal data is becoming an increasingly important function in visual analytics applications. Various works have been presented for querying large spatiotemporal data in real time. However, the real-time query of spatiotemporal data distribution is still an open challenge. As spatiotemporal data become larger, methods of aggregation, storage and querying become critical. We propose a new visual query system that creates a low-memory storage component and provides real-time visual interactions of spatiotemporal data. We first present a peak-based kernel density estimation method to produce the data distribution for the spatiotemporal data. Then a novel density dictionary learning approach is proposed to compress temporal density maps and accelerate the query calculation. Moreover, various intuitive query interactions are presented to interactively gain patterns. The experimental results obtained on three datasets demonstrate that the presented system offers an effective query for visual analytics of spatiotemporal data.
Accessible: false
Early: false
Authors: Chenhui Li, George Baciu, Yunzhe Wang, Junjie Chen, Changbo Wang
Award: none
Resources: V
ResourceLinks: Fast Forward (https://youtu.be/mo92ArEk8Nk)
Conference: Vis
Year: 2021
Title: DIEL: Interactive Visualization Beyond the Here and Now
DOI: 10.1109/TVCG.2021.3114796
Abstract:
Interactive visualization design and research have primarily focused on local data and synchronous events. However, for more complex use cases (e.g., remote database access and streaming data sources), developers must grapple with distributed data and asynchronous events. Currently, constructing these use cases is difficult and time-consuming; developers are forced to operationally program low-level details like asynchronous database querying and reactive event handling. This approach is in stark contrast to modern methods for browser-based interactive visualization, which feature high-level declarative specifications. In response, we present DIEL, a declarative framework that supports asynchronous events over distributed data. As in many declarative languages, DIEL developers specify only what data they want, rather than procedural steps for how to assemble it. Uniquely, DIEL models asynchronous events (e.g., user interactions, server responses) as streams of data that are captured in event logs. To specify the state of a visualization at any time, developers write declarative queries over the data and event logs; DIEL compiles and optimizes a corresponding dataflow graph, and automatically generates necessary low-level distributed systems details. We demonstrate DIEL's performance and expressivity through example interactive visualizations that make diverse use of remote data and asynchronous events. We further evaluate DIEL's usability using the Cognitive Dimensions of Notations framework, revealing wins such as ease of change, and compromises such as premature commitments.
Accessible: false
Early: false
Authors: Yifan Wu, Remco Chang, Joseph M. Hellerstein, Arvind Satyanarayan, Eugene Wu 0002
Award: none
Resources: P, V
ResourceLinks: Paper Preprint (http://arxiv.org/pdf/1907.00062v2); Fast Forward (https://youtu.be/PB3D4RRI7Es)
Conference: Vis
Year: 2021
Title: Differentiable Direct Volume Rendering
DOI: 10.1109/TVCG.2021.3114769
Abstract:
We present a differentiable volume rendering solution that provides differentiability of all continuous parameters of the volume rendering process. This differentiable renderer is used to steer the parameters towards a setting with an optimal solution of a problem-specific objective function. We have tailored the approach to volume rendering by enforcing a constant memory footprint via analytic inversion of the blending functions. This makes it independent of the number of sampling steps through the volume and facilitates the consideration of small-scale changes. The approach forms the basis for automatic optimizations regarding external parameters of the rendering process and the volumetric density field itself. We demonstrate its use for automatic viewpoint selection using differentiable entropy as objective, and for optimizing a transfer function from rendered images of a given volume. Optimization of per-voxel densities is addressed in two different ways: First, we mimic inverse tomography and optimize a 3D density field from images using an absorption model. This simplification enables comparisons with algebraic reconstruction techniques and state-of-the-art differentiable path tracers. Second, we introduce a novel approach for tomographic reconstruction from images using an emission-absorption model with post-shading via an arbitrary transfer function.
Accessible: false
Early: false
Authors: Sebastian Weiss, Rüdiger Westermann
Award: none
Resources: P, V
ResourceLinks: Paper Preprint (http://arxiv.org/pdf/2107.12672v1); Fast Forward (https://youtu.be/5E2I8iDKVIc)
Conference: Vis
Year: 2021
Title: E-ffective: A Visual Analytic System for Exploring the Emotion and Effectiveness of Inspirational Speeches
DOI: 10.1109/TVCG.2021.3114789
Abstract:
What makes speeches effective has long been a subject for debate, and until today there is broad controversy among public speaking experts about what factors make a speech effective as well as the roles of these factors in speeches. Moreover, there is a lack of quantitative analysis methods to help understand effective speaking strategies. In this paper, we propose E-ffective, a visual analytic system allowing speaking experts and novices to analyze both the role of speech factors and their contribution in effective speeches. From interviews with domain experts and investigating existing literature, we identified important factors to consider in inspirational speeches. We obtained the generated factors from multi-modal data that were then related to effectiveness data. Our system supports rapid understanding of critical factors in inspirational speeches, including the influence of emotions by means of novel visualization methods and interaction. Two novel visualizations include E-spiral (that shows the emotional shifts in speeches in a visually compact way) and E-script (that connects speech content with key speech delivery information). In our evaluation we studied the influence of our system on experts' domain knowledge about speech factors. We further studied the usability of the system by speaking novices and experts on assisting analysis of inspirational speech effectiveness.
Accessible: false
Early: false
Authors: Kevin T. Maher, Ze-Yuan Huang, Jian-Cheng Song, Xiaoming Deng 0001, Yu-Kun Lai, CuiXia Ma, Hao Wang 0005, Yong-Jin Liu, Hongan Wang
Award: none
Resources: P, V
ResourceLinks: Paper Preprint (http://arxiv.org/pdf/2110.14908v2); Fast Forward (https://youtu.be/TyKuaEtYQm0)
Conference: Vis
Year: 2021
Title: Edge-Path Bundling: A Less Ambiguous Edge Bundling Approach
DOI: 10.1109/TVCG.2021.3114795
Abstract:
Edge bundling techniques cluster edges with similar attributes (i.e. similarity in direction and proximity) together to reduce the visual clutter. All edge bundling techniques to date implicitly or explicitly cluster groups of individual edges, or parts of them, together based on these attributes. These clusters can result in ambiguous connections that do not exist in the data. Confluent drawings of networks do not have these ambiguities, but require the layout to be computed as part of the bundling process. We devise a new bundling method, Edge-Path bundling, to simplify edge clutter while greatly reducing ambiguities compared to previous bundling techniques. Edge-Path bundling takes a layout as input and clusters each edge along a weighted, shortest path to limit its deviation from a straight line. Edge-Path bundling does not incur independent edge ambiguities typically seen in all edge bundling methods, and the level of bundling can be tuned through shortest path distances, Euclidean distances, and combinations of the two. Also, directed edge bundling naturally emerges from the model. Through metric evaluations, we demonstrate the advantages of Edge-Path bundling over other techniques.
Accessible: false
Early: false
Authors: Markus Wallinger, Daniel Archambault, David Auber, Martin Nöllenburg, Jaakko Peltonen
Award: none
Resources: P, V
ResourceLinks: Paper Preprint (http://arxiv.org/pdf/2108.05467v1); Fast Forward (https://youtu.be/3NHfrXpS2IU)
Conference: Vis
Year: 2021
Title: Effect of uncertainty visualizations on myopic loss aversion and the equity premium puzzle in retirement investment decisions
DOI: 10.1109/TVCG.2021.3114692
Abstract:
For many households, investing for retirement is one of the most significant decisions and is fraught with uncertainty. In a classic study in behavioral economics, Benartzi and Thaler (1999) found evidence using bar charts that investors exhibit myopic loss aversion in retirement decisions: Investors overly focus on the potential for short-term losses, leading them to invest less in riskier assets and miss out on higher long-term returns. Recently, advances in uncertainty visualizations have shown improvements in decision-making under uncertainty in a variety of tasks. In this paper, we conduct a controlled and incentivized crowdsourced experiment replicating Benartzi and Thaler (1999) and extending it to measure the effect of different uncertainty representations on myopic loss aversion. Consistent with the original study, we find evidence of myopic loss aversion with bar charts and find that participants make better investment decisions with longer evaluation periods. We also find that common uncertainty representations such as interval plots and bar charts achieve the highest mean expected returns while other uncertainty visualizations lead to poorer long-term performance and strong effects on the equity premium. Qualitative feedback further suggests that different uncertainty representations lead to visual reasoning heuristics that can either mitigate or encourage a focus on potential short-term losses. We discuss implications of our results on using uncertainty visualizations for retirement decisions in practice and possible extensions for future work.
Accessible: false
Early: false
Authors: Ryan Wesslen, Alireza Karduni, Douglas Markant, Wenwen Dou
Award: none
Resources: none
ResourceLinks: none
Conference: Vis
Year: 2021
Title: EVis: Visually Analyzing Environmentally Driven Events
DOI: 10.1109/TVCG.2021.3114867
Abstract:
Earth scientists are increasingly employing time series data with multiple dimensions and high temporal resolution to study the impacts of climate and environmental changes on Earth's atmosphere, biosphere, hydrosphere, and lithosphere. However, the large number of variables and varying time scales of antecedent conditions contributing to natural phenomena hinder scientists from completing more than the most basic analyses. In this paper, we present EVis (Environmental Visualization), a new visual analytics prototype to help scientists analyze and explore recurring environmental events (e.g. rock fracture, landslides, heat waves, floods) and their relationships with high dimensional time series of continuous numeric environmental variables, such as ambient temperature and precipitation. EVis provides coordinated scatterplots, heatmaps, histograms, and RadViz for foundational analyses. These features allow users to interactively examine relationships between events and one, two, three, or more environmental variables. EVis also provides a novel visual analytics approach to allowing users to discover temporally lagging relationships related to antecedent conditions between events and multiple variables, a critical task in Earth sciences. In particular, this latter approach projects multivariate time series onto trajectories in a 2D space using RadViz, and clusters the trajectories for temporal pattern discovery. Our case studies with rock cracking data and interviews with domain experts from a range of sub-disciplines within Earth sciences illustrate the extensive applicability and usefulness of EVis.
Accessible: false
Early: false
Authors: Tinghao Feng, Jing Yang 0001, Martha-Cary Eppes, Zhaocong Yang, Faye Moser
Award: none
Resources: none
ResourceLinks: none
Conference: Vis
Year: 2021
Title: Examining Effort in 1D Uncertainty Communication Using Individual Differences in Working Memory and NASA-TLX
DOI: 10.1109/TVCG.2021.3114803
Abstract:
As uncertainty visualizations for general audiences become increasingly common, designers must understand the full impact of uncertainty communication techniques on viewers' decision processes. Prior work demonstrates mixed performance outcomes with respect to how individuals make decisions using various visual and textual depictions of uncertainty. Part of the inconsistency across findings may be due to an over-reliance on task accuracy, which cannot, on its own, provide a comprehensive understanding of how uncertainty visualization techniques support reasoning processes. In this work, we advance the debate surrounding the efficacy of modern 1D uncertainty visualizations by conducting converging quantitative and qualitative analyses of both the effort and strategies used by individuals when provided with quantile dotplots, density plots, interval plots, mean plots, and textual descriptions of uncertainty. We utilize two approaches for examining effort across uncertainty communication techniques: a measure of individual differences in working-memory capacity known as an operation span (OSPAN) task and self-reports of perceived workload via the NASA-TLX. The results reveal that both visualization methods and working-memory capacity impact participants' decisions. Specifically, quantile dotplots and density plots (i.e., distributional annotations) result in more accurate judgments than interval plots, textual descriptions of uncertainty, and mean plots (i.e., summary annotations). Additionally, participants' open-ended responses suggest that individuals viewing distributional annotations are more likely to employ a strategy that explicitly incorporates uncertainty into their judgments than those viewing summary annotations. When comparing quantile dotplots to density plots, this work finds that both methods are equally effective for low-working-memory individuals. However, for individuals with high-working-memory capacity, quantile dotplots evoke more accurate responses with less perceived effort. Given these results, we advocate for the inclusion of converging behavioral and subjective workload metrics in addition to accuracy performance to further disambiguate meaningful differences among visualization techniques.
Accessible: false
Early: false
Authors: Spencer C. Castro, P. Samuel Quinan, Helia Hosseinpour, Lace M. K. Padilla
Award: none
Resources: P
ResourceLinks: Paper Preprint (https://osf.io/wpz8b)
Conference: Vis
Year: 2021
Title: Explanatory Journeys: Visualising to Understand and Explain Administrative Justice Paths of Redress
DOI: 10.1109/TVCG.2021.3114818
Abstract:
Administrative justice concerns the relationships between individuals and the state. It includes redress and complaints on decisions of a child's education, social care, licensing, planning, environment, housing and homelessness. However, if someone has a complaint or an issue, it is challenging for people to understand different possible redress paths and explore what path is suitable for their situation. Explanatory visualisation has the potential to display these paths of redress in a clear way, such that people can see, understand and explore their options. The visualisation challenge is further complicated because information is spread across many documents, laws, guidance and policies and requires judicial interpretation. Consequently, there is not a single database of paths of redress. In this work we present how we have co-designed a system to visualise administrative justice paths of redress. Simultaneously, we classify, collate and organise the underpinning data, from expert workshops, heuristic evaluation and expert critical reflection. We make four contributions: (i) an application design study of the explanatory visualisation tool (Artemus), (ii) coordinated and co-design approach to aggregating the data, (iii) two in-depth case studies in housing and education demonstrating explanatory paths of redress in administrative law, and (iv) reflections on the expert co-design process and expert data gathering and explanatory visualisation for administrative justice and law.
Accessible: false
Early: false
Authors: Jonathan Roberts 0002, Peter W. S. Butcher, Ann Sherlock, Sarah Nason
Award: HM
Resources: P, V
ResourceLinks: Paper Preprint (http://arxiv.org/pdf/2107.14013v1); Fast Forward (https://youtu.be/uwJeTbqri0M)
Conference: Vis
Year: 2021
Title: Exploring the Personal Informatics Analysis Gap: "There's a Lot of Bacon"
DOI: 10.1109/TVCG.2021.3114798
Abstract:
Personal informatics research helps people track personal data for the purposes of self-reflection and gaining self-knowledge. This field, however, has predominantly focused on the data collection and insight-generation elements of self-tracking, with less attention paid to flexible data analysis. As a result, this inattention has led to inflexible analytic pipelines that do not reflect or support the diverse ways people want to engage with their data. This paper contributes a review of personal informatics and visualization research literature to expose a gap in our knowledge for designing flexible tools that assist people engaging with and analyzing personal data in personal contexts, what we call the personal informatics analysis gap. We explore this gap through a multistage longitudinal study on how asthmatics engage with personal air quality data, and we report how participants: were motivated by broad and diverse goals; exhibited patterns in the way they explored their data; engaged with their data in playful ways; discovered new insights through serendipitous exploration; and were reluctant to use analysis tools on their own. These results present new opportunities for visual analysis research and suggest the need for fundamental shifts in how and what we design when supporting personal data analysis.
Accessible: false
Early: false
Authors: Jimmy Moore, Pascal Goffin, Jason Wiese, Miriah D. Meyer
Award: none
Resources: P, V
ResourceLinks: Paper Preprint (http://arxiv.org/pdf/2108.03761v1); Fast Forward (https://youtu.be/8U4hGTpmzXE)
Conference: Vis
Year: 2021
Title: F2-Bubbles: Faithful Bubble Set Construction and Flexible Editing
DOI: 10.1109/TVCG.2021.3114761
Abstract:
In this paper, we propose F2-Bubbles, a set overlay visualization technique that addresses overlapping artifacts and supports interactive editing with intelligent suggestions. The core of our method is a new, efficient set overlay construction algorithm that approximates the optimal set overlay by considering set elements and their non-set neighbors. Thanks to the efficiency of the algorithm, interactive editing is achieved, and with intelligent suggestions, users can easily and flexibly edit visualizations through direct manipulations with local adaptations. A quantitative comparison with state-of-the-art set visualization techniques and case studies demonstrate the effectiveness of our method and suggests that F2-Bubbles is a helpful technique for set visualization.
Accessible: false
Early: false
Authors: Yunhai Wang, Da Cheng, Zhirui Wang, Jian Zhang 0070, Liang Zhou 0001, Gaoqi He, Oliver Deussen
Award: none
Resources: V
ResourceLinks: Fast Forward (https://youtu.be/AeQuMMXHWl8)
Conference: Vis
Year: 2021
Title: FairRankVis: A Visual Analytics Framework for Exploring Algorithmic Fairness in Graph Mining Models
DOI: 10.1109/TVCG.2021.3114850
Abstract:
Graph mining is an essential component of recommender systems and search engines. Outputs of graph mining models typically provide a ranked list sorted by each item's relevance or utility. However, recent research has identified issues of algorithmic bias in such models, and new graph mining algorithms have been proposed to correct for bias. As such, algorithm developers need tools that can help them uncover potential biases in their models while also exploring the impacts of correcting for biases when employing fairness-aware algorithms. In this paper, we present FairRankVis, a visual analytics framework designed to enable the exploration of multi-class bias in graph mining algorithms. We support both group and individual fairness levels of comparison. Our framework is designed to enable model developers to compare multi-class fairness between algorithms (for example, comparing PageRank with a debiased PageRank algorithm) to assess the impacts of algorithmic debiasing with respect to group and individual fairness. We demonstrate our framework through two usage scenarios inspecting algorithmic fairness.
Accessible: false
Early: false
Authors: Tiankai Xie, Yuxin Ma, Jian Kang, Hanghang Tong, Ross Maciejewski
Award: none
Resources: V
ResourceLinks: Fast Forward (https://youtu.be/LAxI6_i3CHo)
Conference: Vis
Year: 2021
Title: Feature Curves and Surfaces of 3D Asymmetric Tensor Fields
DOI: 10.1109/TVCG.2021.3114808
Abstract:
3D asymmetric tensor fields have found many applications in science and engineering domains, such as fluid dynamics and solid mechanics. 3D asymmetric tensors can have complex eigenvalues, which makes their analysis and visualization more challenging than 3D symmetric tensors. Existing research in tensor field visualization focuses on 2D asymmetric tensor fields and 3D symmetric tensor fields. In this paper, we address the analysis and visualization of 3D asymmetric tensor fields. We introduce six topological surfaces and one topological curve, which lead to an eigenvalue space based on the tensor mode that we define. In addition, we identify several non-topological feature surfaces that are nonetheless physically important. Included in our analysis are the realizations that triple degenerate tensors are structurally stable and form curves, unlike the case for 3D symmetric tensors fields. Furthermore, there are two different ways of measuring the relative strengths of rotation and angular deformation in the tensor fields, unlike the case for 2D asymmetric tensor fields. We extract these feature surfaces using the A-patches algorithm. However, since three of our feature surfaces are quadratic, we develop a method to extract quadratic surfaces at any given accuracy. To facilitate the analysis of eigenvector fields, we visualize a hyperstreamline as a tree stem with the other two eigenvectors represented as thorns in the real domain or the dual-eigenvectors as leaves in the complex domain. To demonstrate the effectiveness of our analysis and visualization, we apply our approach to datasets from solid mechanics and fluid dynamics.
Accessible: false
Early: false
Authors: Shih-Hsuan Hung, Yue Zhang 0009, Harry Yeh, Eugene Zhang
Award: BP
Resources: P, V
ResourceLinks: Paper Preprint (http://arxiv.org/pdf/2108.02114v1); Fast Forward (https://youtu.be/kIYX3lWuIew); Prerecorded Talk (https://youtu.be/LZ5MjWtcGp8?si=DXxOiwZ8NnbGPhFt)
Conference: Vis
Year: 2021
Title: From Jam Session to Recital: Synchronous Communication and Collaboration Around Data in Organizations
DOI: 10.1109/TVCG.2021.3114760
Abstract:
Prior research on communicating with visualization has focused on public presentation and asynchronous individual consumption, such as in the domain of journalism. The visualization research community knows comparatively little about synchronous and multimodal communication around data within organizations, from team meetings to executive briefings. We conducted two qualitative interview studies with individuals who prepare and deliver presentations about data to audiences in organizations. In contrast to prior work, we did not limit our interviews to those who self-identify as data analysts or data scientists. Both studies examined aspects of speaking about data with visual aids such as charts, dashboards, and tables. One study was a retrospective examination of current practices and difficulties, from which we identified three scenarios involving presentations of data. We describe these scenarios using an analogy to musical performance: small collaborative team meetings are akin to jam session, while more structured presentations can range from semi-improvisational performances among peers to formal recitals given to executives or customers. In our second study, we grounded the discussion around three design probes, each examining a different aspect of presenting data: the progressive reveal of visualization to direct attention and advance a narrative, visualization presentation controls that are hidden from the audience's view, and the coordination of a presenter's video with interactive visualization. Our distillation of interviewees' responses surfaced twelve themes, from ways of authoring presentations to creating accessible and engaging audience experiences.
Accessible: false
Early: false
Authors: Matthew Brehmer, Robert Kosara
Award: none
Resources: P
ResourceLinks: Paper Preprint (http://arxiv.org/pdf/2107.09042v1)
Conference: Vis
Year: 2021
Title: Gender in 30 Years of IEEE Visualization
DOI: 10.1109/TVCG.2021.3114787
Abstract:
We present an exploratory analysis of gender representation among the authors, committee members, and award winners at the IEEE Visualization (VIS) conference over the last 30 years. Our goal is to provide descriptive data on which diversity discussions and efforts in the community can build. We look in particular at the gender of VIS authors as a proxy for the community at large. We consider measures of overall gender representation among authors, differences in careers, positions in author lists, and collaborations. We found that the proportion of female authors has increased from 9% in the first five years to 22% in the last five years of the conference. Over the years, we found the same representation of women in program committees and slightly more women in organizing committees. Women are less likely to appear in the last author position, but more in the middle positions. In terms of collaboration patterns, female authors tend to collaborate more than expected with other women in the community. All non-gender related data is available on https://osf.io/ydfj4/ and the gender-author matching can be accessed through https://nyu.databrary.org/volume/1301.
Accessible: false
Early: false
Authors: Natkamon Tovanich, Pierre Dragicevic, Petra Isenberg
Award: none
Resources: V
ResourceLinks: Fast Forward (https://youtu.be/00FBt95k2-s)
Conference: Vis
Year: 2021
Title: Generative Design Inspiration for Glyphs with Diatoms
DOI: 10.1109/TVCG.2021.3114792
Abstract:
We introduce Diatoms, a technique that generates design inspiration for glyphs by sampling from palettes of mark shapes, encoding channels, and glyph scaffold shapes. Diatoms allows for a degree of randomness while respecting constraints imposed by columns in a data table: their data types and domains as well as semantic associations between columns as specified by the designer. We pair this generative design process with two forms of interactive design externalization that enable comparison and critique of the design alternatives. First, we incorporate a familiar small multiples configuration in which every data point is drawn according to a single glyph design, coupled with the ability to page between alternative glyph designs. Second, we propose a small permutables design gallery, in which a single data point is drawn according to each alternative glyph design, coupled with the ability to page between data points. We demonstrate an implementation of our technique as an extension to Tableau featuring three example palettes, and to better understand how Diatoms could fit into existing design workflows, we conducted interviews and chauffeured demos with 12 designers. Finally, we reflect on our process and the designers' reactions, discussing the potential of our technique in the context of visualization authoring systems. Ultimately, our approach to glyph design and comparison can kickstart and inspire visualization design, allowing for the serendipitous discovery of shape and channel combinations that would have otherwise been overlooked.
Accessible: false
Early: false
Authors: Matthew Brehmer, Robert Kosara, Carmen Hull
Award: none
Resources: P, V
ResourceLinks: Paper Preprint (http://arxiv.org/pdf/2107.09015v1); Fast Forward (https://youtu.be/xm2Hn7ykKt4)
Conference: Vis
Year: 2021
Title: GenNI: Human-AI Collaboration for Data-Backed Text Generation
DOI: 10.1109/TVCG.2021.3114845
Abstract:
Table2Text systems generate textual output based on structured data utilizing machine learning. These systems are essential for fluent natural language interfaces in tools such as virtual assistants; however, left to generate freely these ML systems often produce misleading or unexpected outputs. GenNI (Generation Negotiation Interface) is an interactive visual system for high-level human-AI collaboration in producing descriptive text. The tool utilizes a deep learning model designed with explicit control states. These controls allow users to globally constrain model generations, without sacrificing the representation power of the deep learning models. The visual interface makes it possible for users to interact with AI systems following a Refine-Forecast paradigm to ensure that the generation system acts in a manner human users find suitable. We report multiple use cases on two experiments that improve over uncontrolled generation approaches, while at the same time providing fine-grained control. A demo and source code are available at https://genni.vizhub.ai.
Accessible: false
Early: false
Authors: Hendrik Strobelt, Jambay Kinley, Robert Krüger, Johanna Beyer, Hanspeter Pfister, Alexander M. Rush
Award: none
Resources: P
ResourceLinks: Paper Preprint (http://arxiv.org/pdf/2110.10185v1)
Conference: Vis
Year: 2021
Title: Geo-Context Aware Study of Vision-Based Autonomous Driving Models and Spatial Video Data
DOI: 10.1109/TVCG.2021.3114853
Abstract:
Vision-based deep learning (DL) methods have made great progress in learning autonomous driving models from large-scale crowd-sourced video datasets. They are trained to predict instantaneous driving behaviors from video data captured by on-vehicle cameras. In this paper, we develop a geo-context aware visualization system for the study of Autonomous Driving Model (ADM) predictions together with large-scale ADM video data. The visual study is seamlessly integrated with the geographical environment by combining DL model performance with geospatial visualization techniques. Model performance measures can be studied together with a set of geospatial attributes over map views. Users can also discover and compare prediction behaviors of multiple DL models in both city-wide and street-level analysis, together with road images and video contents. Therefore, the system provides a new visual exploration platform for DL model designers in autonomous driving. Use cases and domain expert evaluation show the utility and effectiveness of the visualization system.
Accessible: false
Early: false
Authors: Suphanut Jamonnak, Ye Zhao 0003, Xinyi Huang, Md. Amiruzzaman
Award: none
Resources: P, V
ResourceLinks: Paper Preprint (http://arxiv.org/pdf/2109.10895v1); Fast Forward (https://youtu.be/Y8bN-6G3Vko)
Conference: Vis
Year: 2021
Title: GlyphCreator: Towards Example-based Automatic Generation of Circular Glyphs
DOI: 10.1109/TVCG.2021.3114877
Abstract:
Circular glyphs are used across disparate fields to represent multidimensional data. However, although these glyphs are extremely effective, creating them is often laborious, even for those with professional design skills. This paper presents GlyphCreator, an interactive tool for the example-based generation of circular glyphs. Given an example circular glyph and multidimensional input data, GlyphCreator promptly generates a list of design candidates, any of which can be edited to satisfy the requirements of a particular representation. To develop GlyphCreator, we first derive a design space of circular glyphs by summarizing relationships between different visual elements. With this design space, we build a circular glyph dataset and develop a deep learning model for glyph parsing. The model can deconstruct a circular glyph bitmap into a series of visual elements. Next, we introduce an interface that helps users bind the input data attributes to visual elements and customize visual styles. We evaluate the parsing model through a quantitative experiment, demonstrate the use of GlyphCreator through two use scenarios, and validate its effectiveness through user interviews.
Accessible: false
Early: false
Authors: Lu Ying, Tan Tang, Yuzhe Luo, Lvkeshen Shen, Xiao Xie, Lingyun Yu 0001, Yingcai Wu
Award: none
Resources: none
ResourceLinks: none
Conference: Vis
Year: 2021
Title: Gosling: A Grammar-based Toolkit for Scalable and Interactive Genomics Data Visualization
DOI: 10.1109/TVCG.2021.3114876
Abstract:
The combination of diverse data types and analysis tasks in genomics has resulted in the development of a wide range of visualization techniques and tools. However, most existing tools are tailored to a specific problem or data type and offer limited customization, making it challenging to optimize visualizations for new analysis tasks or datasets. To address this challenge, we designed Gosling-a grammar for interactive and scalable genomics data visualization. Gosling balances expressiveness for comprehensive multi-scale genomics data visualizations with accessibility for domain scientists. Our accompanying JavaScript toolkit called Gosling.js provides scalable and interactive rendering. Gosling.js is built on top of an existing platform for web-based genomics data visualization to further simplify the visualization of common genomics data formats. We demonstrate the expressiveness of the grammar through a variety of real-world examples. Furthermore, we show how Gosling supports the design of novel genomics visualizations. An online editor and examples of Gosling.js, its source code, and documentation are available at https://gosling.js.org.
Accessible: false
Early: false
Authors: Sehi L'Yi, Qianwen Wang, Fritz Lekschas, Nils Gehlenborg
Award: none
Resources: P, V
ResourceLinks: Paper Preprint (https://osf.io/6evmb); Fast Forward (https://youtu.be/Wvyb9dus9eU)
Conference: Vis
Year: 2021
Title: Human-in-the-loop Extraction of Interpretable Concepts in Deep Learning Models
DOI: 10.1109/TVCG.2021.3114837
Abstract:
The interpretation of deep neural networks (DNNs) has become a key topic as more and more people apply them to solve various problems and make critical decisions. Concept-based explanations have recently become a popular approach for post-hoc interpretation of DNNs. However, identifying human-understandable visual concepts that affect model decisions is a challenging task that is not easily addressed with automatic approaches. We present a novel human-in-the-loop approach to generate user-defined concepts for model interpretation and diagnostics. Central to our proposal is the use of active learning, where human knowledge and feedback are combined to train a concept extractor with very little human labeling effort. We integrate this process into an interactive system, ConceptExtract. Through two case studies, we show how our approach helps analyze model behavior and extract human-friendly concepts for different machine learning tasks and datasets and how to use these concepts to understand the predictions, compare model performance and make suggestions for model refinement. Quantitative experiments show that our active learning approach can accurately extract meaningful visual concepts. More importantly, by identifying visual concepts that negatively affect model performance, we develop the corresponding data augmentation strategy that consistently improves model performance.
Accessible: false
Early: false
Authors: Zhenge Zhao, Panpan Xu, Carlos Scheidegger, Ren Liu
Award: none
Resources: P, V
ResourceLinks: Paper Preprint (http://arxiv.org/pdf/2108.03738v1); Fast Forward (https://youtu.be/OrnFfB7rCKY)
Conference: Vis
Year: 2021
Title: Improving Visualization Interpretation Using Counterfactuals
DOI: 10.1109/TVCG.2021.3114779
Abstract:
Complex, high-dimensional data is used in a wide range of domains to explore problems and make decisions. Analysis of high-dimensional data, however, is vulnerable to the hidden influence of confounding variables, especially as users apply ad hoc filtering operations to visualize only specific subsets of an entire dataset. Thus, visual data-driven analysis can mislead users and encourage mistaken assumptions about causality or the strength of relationships between features. This work introduces a novel visual approach designed to reveal the presence of confounding variables via counterfactual possibilities during visual data analysis. It is implemented in CoFact, an interactive visualization prototype that determines and visualizes counterfactual subsets to better support user exploration of feature relationships. Using publicly available datasets, we conducted a controlled user study to demonstrate the effectiveness of our approach; the results indicate that users exposed to counterfactual visualizations formed more careful judgments about feature-to-outcome relationships.
Accessible: false
Early: false
Authors: Smiti Kaul, David Borland, Nan Cao, David Gotz
Award: none
Resources: P
ResourceLinks: Paper Preprint (http://arxiv.org/pdf/2107.10309v1)
Conference: Vis
Year: 2021
Title: Interactive Data Comics
DOI: 10.1109/TVCG.2021.3114849
Abstract:
This paper investigates how to make data comics interactive. Data comics are an effective and versatile means for visual communication, leveraging the power of sequential narration and combined textual and visual content, while providing an overview of the storyline through panels assembled in expressive layouts. While a powerful static storytelling medium that works well on paper support, adding interactivity to data comics can enable non-linear storytelling, personalization, levels of details, explanations, and potentially enriched user experiences. This paper introduces a set of operations tailored to support data comics narrative goals that go beyond the traditional linear, immutable storyline curated by a story author. The goals and operations include adding and removing panels into pre-defined layouts to support branching, change of perspective, or access to detail-on-demand, as well as providing and modifying data, and interacting with data representation, to support personalization and reader-defined data focus. We propose a lightweight specification language, COMICSCRIPT, for designers to add such interactivity to static comics. To assess the viability of our authoring process, we recruited six professional illustrators, designers and data comics enthusiasts and asked them to craft an interactive comic, allowing us to understand authoring workflow and potential of our approach. We present examples of interactive comics in a gallery. This initial step towards understanding the design space of interactive comics can inform the design of creation tools and experiences for interactive storytelling.
Accessible: false
Early: false
Authors: Zezhong Wang 0001, Hugo Romat, Fanny Chevalier, Nathalie Henry Riche, David Murray-Rust, Benjamin Bach
Award: none
Resources: P, V
ResourceLinks: Paper Preprint (https://osf.io/dx7mj); Fast Forward (https://youtu.be/w8gLs-jYm04)
Conference: Vis
Year: 2021
Title: Interactive Dimensionality Reduction for Comparative Analysis
DOI: 10.1109/TVCG.2021.3114807
Abstract:
Finding the similarities and differences between groups of datasets is a fundamental analysis task. For high-dimensional data, dimensionality reduction (DR) methods are often used to find the characteristics of each group. However, existing DR methods provide limited capability and flexibility for such comparative analysis as each method is designed only for a narrow analysis target, such as identifying factors that most differentiate groups. This paper presents an interactive DR framework where we integrate our new DR method, called ULCA (unified linear comparative analysis), with an interactive visual interface. ULCA unifies two DR schemes, discriminant analysis and contrastive learning, to support various comparative analysis tasks. To provide flexibility for comparative analysis, we develop an optimization algorithm that enables analysts to interactively refine ULCA results. Additionally, the interactive visualization interface facilitates interpretation and refinement of the ULCA results. We evaluate ULCA and the optimization algorithm to show their efficiency as well as present multiple case studies using real-world datasets to demonstrate the usefulness of this framework.
Accessible: false
Early: false
Authors: Takanori Fujiwara, Xinhai Wei, Jian Zhao 0010, Kwan-Liu Ma
Award: none
Resources: P
ResourceLinks: Paper Preprint (http://arxiv.org/pdf/2106.15481v3)
Conference: Vis
Year: 2021
Title: Interactive Exploration of Physically-Observable Objective Vortices in Unsteady 2D Flow
DOI: 10.1109/TVCG.2021.3115565
Abstract:
State-of-the-art computation and visualization of vortices in unsteady fluid flow employ objective vortex criteria, which makes them independent of reference frames or observers. However, objectivity by itself, although crucial, is not sufficient to guarantee that one can identify physically-realizable observers that would perceive or detect the same vortices. Moreover, a significant challenge is that a single reference frame is often not sufficient to accurately observe multiple vortices that follow different motions. This paper presents a novel framework for the exploration and use of an interactively-chosen set of observers, of the resulting relative velocity fields, and of objective vortex structures. We show that our approach facilitates the objective detection and visualization of vortices relative to well-adapted reference frame motions, while at the same time guaranteeing that these observers are in fact physically realizable. In order to represent and manipulate observers efficiently, we make use of the low-dimensional vector space structure of the Lie algebra of physically-realizable observer motions. We illustrate that our framework facilitates the efficient choice and guided exploration of objective vortices in unsteady 2D flow, on planar as well as on spherical domains, using well-adapted reference frames.
Accessible: false
Early: false
Authors: Xingdi Zhang, Markus Hadwiger, Thomas Theußl, Peter Rautek
Award: HM
Resources: V
ResourceLinks: Fast Forward (https://youtu.be/0kdWTHGd5yQ)
Conference: Vis
Year: 2021
Title: Interactive Visual Pattern Search on Graph Data via Graph Representation Learning
DOI: 10.1109/TVCG.2021.3114857
Abstract:
Graphs are a ubiquitous data structure to model processes and relations in a wide range of domains. Examples include control-flow graphs in programs and semantic scene graphs in images. Identifying subgraph patterns in graphs is an important approach to understand their structural properties. We propose a visual analytics system GraphQ to support human-in-the-loop, example-based, subgraph pattern search in a database containing many individual graphs. To support fast, interactive queries, we use graph neural networks (GNNs) to encode a graph as fixed-length latent vector representation, and perform subgraph matching in the latent space. Due to the complexity of the problem, it is still difficult to obtain accurate one-to-one node correspondences in the matching results that are crucial for visualization and interpretation. We, therefore, propose a novel GNN for node-alignment called NeuroAlign, to facilitate easy validation and interpretation of the query results. GraphQ provides a visual query interface with a query editor and a multi-scale visualization of the results, as well as a user feedback mechanism for refining the results with additional constraints. We demonstrate GraphQ through two example usage scenarios: analyzing reusable subroutines in program workflows and semantic scene graph search in images. Quantitative experiments show that NeuroAlign achieves 19%-29% improvement in node-alignment accuracy compared to baseline GNN and provides up to 100× speedup compared to combinatorial algorithms. Our qualitative study with domain experts confirms the effectiveness for both usage scenarios.
Accessible: false
Early: false
Authors: Huan Song, Zeng Dai, Panpan Xu, Ren Liu
Award: none
Resources: P, V
ResourceLinks: Paper Preprint (http://arxiv.org/pdf/2202.09459v1); Fast Forward (https://youtu.be/6dP9SR-ZSSc)
Conference: Vis
Year: 2021
Title: IRVINE: A Design Study on Analyzing Correlation Patterns of Electrical Engines
DOI: 10.1109/TVCG.2021.3114797
Abstract:
In this design study, we present IRVINE, a Visual Analytics (VA) system, which facilitates the analysis of acoustic data to detect and understand previously unknown errors in the manufacturing of electrical engines. In serial manufacturing processes, signatures from acoustic data provide valuable information on how the relationship between multiple produced engines serves to detect and understand previously unknown errors. To analyze such signatures, IRVINE leverages interactive clustering and data labeling techniques, allowing users to analyze clusters of engines with similar signatures, drill down to groups of engines, and select an engine of interest. Furthermore, IRVINE allows to assign labels to engines and clusters and annotate the cause of an error in the acoustic raw measurement of an engine. Since labels and annotations represent valuable knowledge, they are conserved in a knowledge database to be available for other stakeholders. We contribute a design study, where we developed IRVINE in four main iterations with engineers from a company in the automotive sector. To validate IRVINE, we conducted a field study with six domain experts. Our results suggest a high usability and usefulness of IRVINE as part of the improvement of a real-world manufacturing process. Specifically, with IRVINE domain experts were able to label and annotate produced electrical engines more than 30% faster.
Accessible: false
Early: false
Authors: Joscha Eirich, Jakob Bonart, Dominik Jäckle, Michael Sedlmair, Ute Schmid, Kai Fischbach, Tobias Schreck, Jürgen Bernard
Award: BP
Resources: V
ResourceLinks: Fast Forward (https://youtu.be/EKO-fgUCF5w); Prerecorded Talk (https://youtu.be/E_u27Xltbt0?si=tW5gKHGm9Y0glGFL)
Conference: Vis
Year: 2021
Title: Joint t-SNE for Comparable Projections of Multiple High-Dimensional Datasets
DOI: 10.1109/TVCG.2021.3114765
Abstract:
We present Joint t-Stochastic Neighbor Embedding (Joint t-SNE), a technique to generate comparable projections of multiple high-dimensional datasets. Although t-SNE has been widely employed to visualize high-dimensional datasets from various domains, it is limited to projecting a single dataset. When a series of high-dimensional datasets, such as datasets changing over time, is projected independently using t-SNE, misaligned layouts are obtained. Even items with identical features across datasets are projected to different locations, making the technique unsuitable for comparison tasks. To tackle this problem, we introduce edge similarity, which captures the similarities between two adjacent time frames based on the Graphlet Frequency Distribution (GFD). We then integrate a novel loss term into the t-SNE loss function, which we call vector constraints, to preserve the vectors between projected points across the projections, allowing these points to serve as visual landmarks for direct comparisons between projections. Using synthetic datasets whose ground-truth structures are known, we show that Joint t-SNE outperforms existing techniques, including Dynamic t-SNE, in terms of local coherence error, Kullback-Leibler divergence, and neighborhood preservation. We also showcase a real-world use case to visualize and compare the activation of different layers of a neural network.
Accessible: false
Early: false
Authors: Yinqiao Wang, Lu Chen, Jaemin Jo, Yunhai Wang
Award: none
Resources: V
ResourceLinks: Fast Forward (https://youtu.be/6Gxdv-unloM)
Conference: Vis
Year: 2021
Title: KD-Box: Line-segment-based KD-tree for Interactive Exploration of Large-scale Time-Series Data
DOI: 10.1109/TVCG.2021.3114865
Abstract:
Time-series data-usually presented in the form of lines-plays an important role in many domains such as finance, meteorology, health, and urban informatics. Yet, little has been done to support interactive exploration of large-scale time-series data, which requires a clutter-free visual representation with low-latency interactions. In this paper, we contribute a novel line-segment-based KD-tree method to enable interactive analysis of many time series. Our method enables not only fast queries over time series in selected regions of interest but also a line splatting method for efficient computation of the density field and selection of representative lines. Further, we develop KD-Box, an interactive system that provides rich interactions, e.g., timebox, attribute filtering, and coordinated multiple views. We demonstrate the effectiveness of KD-Box in supporting efficient line query and density field computation through a quantitative comparison and show its usefulness for interactive visual analysis on several real-world datasets.
false
false
[ "Yue Zhao", "Yunhai Wang", "Jian Zhang 0070", "Chi-Wing Fu", "Mingliang Xu", "Dominik Moritz" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/zHiYTZUw5Co", "icon": "video" } ]
Vis
2,021
KG4Vis: A Knowledge Graph-Based Approach for Visualization Recommendation
10.1109/TVCG.2021.3114863
Visualization recommendation or automatic visualization generation can significantly lower the barriers for general users to rapidly create effective data visualizations, especially for those users without a background in data visualizations. However, existing rule-based approaches require tedious manual specifications of visualization rules by visualization experts. Other machine learning-based approaches often work like black boxes, making it difficult to understand why a specific visualization is recommended and limiting the wider adoption of these approaches. This paper fills the gap by presenting KG4Vis, a knowledge graph (KG)-based approach for visualization recommendation. It does not require manual specifications of visualization rules and can also guarantee good explainability. Specifically, we propose a framework for building knowledge graphs, consisting of three types of entities (i.e., data features, data columns and visualization design choices) and the relations between them, to model the mapping rules between data and effective visualizations. A TransE-based embedding technique is employed to learn the embeddings of both entities and relations of the knowledge graph from existing dataset-visualization pairs. Such embeddings intrinsically model the desirable visualization rules. Then, given a new dataset, effective visualizations can be inferred from the knowledge graph with semantically meaningful rules. We conducted extensive evaluations to assess the proposed approach, including quantitative comparisons, case studies and expert interviews. The results demonstrate the effectiveness of our approach.
false
false
[ "Haotian Li 0001", "Yong Wang 0021", "Songheng Zhang", "Yangqiu Song", "Huamin Qu" ]
[ "HM" ]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2107.12548v3", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/RVX1jFGNLdw", "icon": "video" } ]
Vis
2,021
Kineticharts: Augmenting Affective Expressiveness of Charts in Data Stories with Animation Design
10.1109/TVCG.2021.3114775
Data stories often seek to elicit affective feelings from viewers. However, how to design affective data stories remains under-explored. In this work, we investigate one specific design factor, animation, and present Kineticharts, an animation design scheme for creating charts that express five positive affects: joy, amusement, surprise, tenderness, and excitement. These five affects were found to be frequently communicated through animation in data stories. Regarding each affect, we designed varied kinetic motions represented by bar charts, line charts, and pie charts, resulting in 60 animated charts for the five affects. We designed Kineticharts by first conducting a need-finding study with professional practitioners from data journalism and then analyzing a corpus of affective motion graphics to identify salient kinetic patterns. We evaluated Kineticharts through two user studies. The results suggest that Kineticharts can accurately convey affects, and improve the expressiveness of data stories, as well as enhance user engagement without hindering data comprehension compared to the animation design from DataClips, an authoring tool for data videos.
false
false
[ "Xingyu Lan", "Yang Shi 0007", "Yanqiu Wu", "Xiaohan Jiao", "Nan Cao" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/e-CXSOpKuyY", "icon": "video" } ]
Vis
2,021
Knowledge Rocks: Adding Knowledge Assistance to Visualization Systems
10.1109/TVCG.2021.3114687
We present Knowledge Rocks, an implementation strategy and guideline for augmenting visualization systems into knowledge-assisted visualization systems, as defined by the KAVA model. Visualization systems become more and more sophisticated. Hence, it is increasingly important to support users with an integrated knowledge base in making constructive choices and drawing the right conclusions. We support the effective reactivation of visualization software resources by augmenting them with knowledge-assistance. To provide a general and yet supportive implementation strategy, we propose an implementation process that is based on an application-agnostic architecture. This architecture is derived from existing knowledge-assisted visualization systems and the KAVA model. Its centerpiece is an ontology that is able to automatically analyze and classify input data, linked to a database to store classified instances. We discuss design decisions and advantages of the KR framework and illustrate its broad area of application through the diverse possibilities for integrating this architecture into an existing visualization system. In addition, we provide a detailed case study by augmenting an IT-security system with knowledge-assistance facilities.
false
false
[ "Anna Pia Lohfink", "Simon Duque Antón", "Heike Leitte", "Christoph Garth" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2107.11095v2", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/4laWRkt2rC0", "icon": "video" } ]
Vis
2,021
Kori: Interactive Synthesis of Text and Charts in Data Documents
10.1109/TVCG.2021.3114802
Charts go hand in hand with text to communicate complex data and are widely adopted in news articles, online blogs, and academic papers. They provide graphical summaries of the data, while text explains the message and context. However, synthesizing information across text and charts is difficult; it requires readers to frequently shift their attention. We investigated ways to support the tight coupling of text and charts in data documents. To understand their interplay, we analyzed the design space of chart-text references through news articles and scientific papers. Informed by the analysis, we developed a mixed-initiative interface enabling users to construct interactive references between text and charts. It leverages natural language processing to automatically suggest references as well as allows users to manually construct other references effortlessly. A user study complemented with algorithmic evaluation of the system suggests that the interface provides an effective way to compose interactive data documents.
false
false
[ "Shahid Latif", "Zheng Zhou", "Yoon Kim", "Fabian Beck 0001", "Nam Wook Kim" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2108.04203v1", "icon": "paper" } ]
Vis
2,021
Learning Objectives, Insights, and Assessments: How Specification Formats Impact Design
10.1109/TVCG.2021.3114811
Despite the ubiquity of communicative visualizations, specifying communicative intent during design is ad hoc. Whether we are selecting from a set of visualizations, commissioning someone to produce them, or creating them ourselves, an effective way of specifying intent can help guide this process. Ideally, we would have a concise and shared specification language. In previous work, we have argued that communicative intents can be viewed as a learning/assessment problem (i.e., what should the reader learn and what test should they do well on). Learning-based specification formats are linked (e.g., assessments are derived from objectives) but some may more effectively specify communicative intent. Through a large-scale experiment, we studied three specification types: learning objectives, insights, and assessments. Participants, guided by one of these specifications, rated their preferences for a set of visualization designs. Then, we evaluated the set of visualization designs to assess which specification led participants to prefer the most effective visualizations. We find that while all specification types have benefits over no-specification, each format has its own advantages. Our results show that learning objective-based specifications helped participants the most in visualization selection. We also identify situations in which specifications may be insufficient and assessments are vital.
false
false
[ "Elsie Lee-Robbins", "Shiqing He", "Eytan Adar" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2108.03111v1", "icon": "paper" } ]
Vis
2,021
Left, Right, and Gender: Exploring Interaction Traces to Mitigate Human Biases
10.1109/TVCG.2021.3114862
Human biases impact the way people analyze data and make decisions. Recent work has shown that some visualization designs can better support cognitive processes and mitigate cognitive biases (i.e., errors that occur due to the use of mental “shortcuts”). In this work, we explore how visualizing a user's interaction history (i.e., which data points and attributes a user has interacted with) can be used to mitigate potential biases that drive decision making by promoting conscious reflection of one's analysis process. Given an interactive scatterplot-based visualization tool, we showed interaction history in real-time while exploring data (by coloring points in the scatterplot that the user has interacted with), and in a summative format after a decision has been made (by comparing the distribution of user interactions to the underlying distribution of the data). We conducted a series of in-lab experiments and a crowd-sourced experiment to evaluate the effectiveness of interaction history interventions toward mitigating bias. We contextualized this work in a political scenario in which participants were instructed to choose a committee of 10 fictitious politicians to review a recent bill passed in the U.S. state of Georgia banning abortion after 6 weeks, where things like gender bias or political party bias may drive one's analysis process. We demonstrate the generalizability of this approach by evaluating a second decision making scenario related to movies. Our results are inconclusive for the effectiveness of interaction history (henceforth referred to as interaction traces) toward mitigating biased decision making. However, we find some mixed support that interaction traces, particularly in a summative format, can increase awareness of potential unconscious biases.
false
false
[ "Emily Wall", "Arpit Narechania", "Adam Coscia", "Jamal Paden", "Alex Endert" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2108.03536v2", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/KBAkr9ROv5k", "icon": "video" } ]
Vis
2,021
Loon: Using Exemplars to Visualize Large-Scale Microscopy Data
10.1109/TVCG.2021.3114766
Which drug is most promising for a cancer patient? A new microscopy-based approach for measuring the mass of individual cancer cells treated with different drugs promises to answer this question in only a few hours. However, the analysis pipeline for extracting data from these images is still far from complete automation: human intervention is necessary for quality control for preprocessing steps such as segmentation, adjusting filters, removing noise, and analyzing the result. To address this workflow, we developed Loon, a visualization tool for analyzing drug screening data based on quantitative phase microscopy imaging. Loon visualizes both derived data such as growth rates and imaging data. Since the images are collected automatically at a large scale, manual inspection of images and segmentations is infeasible. However, reviewing representative samples of cells is essential, both for quality control and for data analysis. We introduce a new approach for choosing and visualizing representative exemplar cells that retain a close connection to the low-level data. By tightly integrating the derived data visualization capabilities with the novel exemplar visualization and providing selection and filtering capabilities, Loon is well suited for making decisions about which drugs are suitable for a specific patient.
false
false
[ "Devin Lange", "Eddie Polanco", "Robert Judson-Torres", "Thomas Zangle", "Alexander Lex" ]
[ "HM" ]
[ "PW", "P", "V", "C", "O" ]
[ { "name": "Project Website with Demo", "url": "https://loon.sci.utah.edu/", "icon": "project_website" }, { "name": "Paper Preprint", "url": "https://doi.org/10.31219/osf.io/dfajc", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/iRsL3WiZbhI?si=VaV8E5xhJnLmJm0c", "icon": "video" }, { "name": "Overview", "url": "https://youtu.be/Y7u3Kg3At9A?si=j4ZyDb8THSZT0NPY", "icon": "video" }, { "name": "Blog Post", "url": "https://vdl.sci.utah.edu/blog/2022/04/27/loon/", "icon": "other" }, { "name": "Supplemental Material", "url": "https://osf.io/czdbx/", "icon": "other" }, { "name": "VIS Talk", "url": "https://youtu.be/Xz5VrBXk5J0?si=E1dZ1IxyyxlH0VQG", "icon": "video" }, { "name": "Slides — PDF", "url": "https://sci.utah.edu/~vdl/papers/2021_vis_loon_talk_slides.pdf", "icon": "other" }, { "name": "Source Code", "url": "https://github.com/visdesignlab/Loon", "icon": "code" } ]
Vis
2,021
Lumos: Increasing Awareness of Analytic Behavior during Visual Data Analysis
10.1109/TVCG.2021.3114827
Visual data analysis tools provide people with the agency and flexibility to explore data using a variety of interactive functionalities. However, this flexibility may introduce potential consequences in situations where users unknowingly overemphasize or underemphasize specific subsets of the data or attribute space they are analyzing. For example, users may overemphasize specific attributes and/or their values (e.g., Gender is always encoded on the X axis), underemphasize others (e.g., Religion is never encoded), ignore a subset of the data (e.g., older people are filtered out), etc. In response, we present Lumos, a visual data analysis tool that captures and shows the interaction history with data to increase awareness of such analytic behaviors. Using in-situ (at the place of interaction) and ex-situ (in an external view) visualization techniques, Lumos provides real-time feedback to users for them to reflect on their activities. For example, Lumos highlights datapoints that have been previously examined in the same visualization (in-situ) and also overlays them on the underlying data distribution (i.e., baseline distribution) in a separate visualization (ex-situ). Through a user study with 24 participants, we investigate how Lumos helps users' data exploration and decision-making processes. We found that Lumos increases users' awareness of visual data analysis practices in real-time, promoting reflection upon and acknowledgement of their intentions and potentially influencing subsequent interactions.
false
false
[ "Arpit Narechania", "Adam Coscia", "Emily Wall", "Alex Endert" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2108.02909v2", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/KKDiLrsTlLA", "icon": "video" } ]
Vis
2,021
M2Lens: Visualizing and Explaining Multimodal Models for Sentiment Analysis
10.1109/TVCG.2021.3114794
Multimodal sentiment analysis aims to recognize people's attitudes from multiple communication channels such as verbal content (i.e., text), voice, and facial expressions. It has become a vibrant and important research topic in natural language processing. Much research focuses on modeling the complex intra- and inter-modal interactions between different communication channels. However, current multimodal models with strong performance are often deep-learning-based techniques and work like black boxes. It is not clear how models utilize multimodal information for sentiment predictions. Despite recent advances in techniques for enhancing the explainability of machine learning models, they often target unimodal scenarios (e.g., images, sentences), and little research has been done on explaining multimodal models. In this paper, we present an interactive visual analytics system, M2 Lens, to visualize and explain multimodal models for sentiment analysis. M2 Lens provides explanations on intra- and inter-modal interactions at the global, subset, and local levels. Specifically, it summarizes the influence of three typical interaction types (i.e., dominance, complement, and conflict) on the model predictions. Moreover, M2 Lens identifies frequent and influential multimodal features and supports the multi-faceted exploration of model behaviors from language, acoustic, and visual modalities. Through two case studies and expert interviews, we demonstrate our system can help users gain deep insights into the multimodal models for sentiment analysis.
false
false
[ "Xingbo Wang 0001", "Jianben He", "Zhihua Jin", "Muqiao Yang", "Yong Wang 0021", "Huamin Qu" ]
[ "HM" ]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2107.08264v4", "icon": "paper" } ]
Vis
2,021
matExplorer: Visual Exploration on Predicting Ionic Conductivity for Solid-state Electrolytes
10.1109/TVCG.2021.3114812
Lithium ion batteries (LIBs) are widely used as important energy sources for mobile phones, electric vehicles, and drones. Experts have attempted to replace liquid electrolytes with solid electrolytes that have a wider electrochemical window and higher stability due to the potential safety risks, such as electrolyte leakage, flammable solvents, poor thermal stability, and many side reactions caused by liquid electrolytes. However, finding suitable alternative materials using traditional approaches is very difficult due to the incredibly high cost of searching. Machine learning (ML)-based methods are currently introduced and used for material prediction. However, learning tools designed for domain experts to conduct intuitive performance comparison and analysis of ML models are rare. In this case, we propose an interactive visualization system for experts to select suitable ML models and understand and explore the prediction results comprehensively. Our system uses a multifaceted visualization scheme designed to support analysis from various perspectives, such as feature distribution, data similarity, model performance, and result presentation. Case studies with actual lab experiments have been conducted by the experts, and the final results confirmed the effectiveness and helpfulness of our system.
false
false
[ "Jiansu Pu", "Hui Shao", "Boyang Gao", "Zhengguo Zhu", "Yanlin Zhu", "Yunbo Rao", "Yong Xiang" ]
[]
[]
[]
Vis
2,021
Measuring and Explaining the Inter-Cluster Reliability of Multidimensional Projections
10.1109/TVCG.2021.3114833
We propose Steadiness and Cohesiveness, two novel metrics to measure the inter-cluster reliability of multidimensional projection (MDP), specifically how well the inter-cluster structures are preserved between the original high-dimensional space and the low-dimensional projection space. Measuring inter-cluster reliability is crucial as it directly affects how well inter-cluster tasks (e.g., identifying cluster relationships in the original space from a projected view) can be conducted; however, despite the importance of inter-cluster tasks, we found that previous metrics, such as Trustworthiness and Continuity, fail to measure inter-cluster reliability. Our metrics consider two aspects of inter-cluster reliability: Steadiness measures the extent to which clusters in the projected space form clusters in the original space, and Cohesiveness measures the opposite. They extract random clusters with arbitrary shapes and positions in one space and evaluate how much the clusters are stretched or dispersed in the other space. Furthermore, our metrics can quantify pointwise distortions, allowing for the visualization of inter-cluster reliability in a projection, which we call a reliability map. Through quantitative experiments, we verify that our metrics precisely capture the distortions that harm inter-cluster reliability while previous metrics have difficulty capturing the distortions. A case study also demonstrates that our metrics and the reliability map 1) support users in selecting the proper projection techniques or hyperparameters and 2) prevent misinterpretation while performing inter-cluster tasks, thus allowing an adequate identification of inter-cluster structure.
false
false
[ "Hyeon Jeon", "Hyung-Kwon Ko", "Jaemin Jo", "Youngtaek Kim", "Jinwook Seo" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2107.07859v3", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/EFaKO5hvtcw", "icon": "video" } ]
Vis
2,021
MiningVis: Visual Analytics of the Bitcoin Mining Economy
10.1109/TVCG.2021.3114821
We present a visual analytics tool, MiningVis, to explore the long-term historical evolution and dynamics of the Bitcoin mining ecosystem. Bitcoin is a cryptocurrency that attracts much attention but remains difficult to understand. Particularly important to the success, stability, and security of Bitcoin is a component of the system called “mining.” Miners are responsible for validating transactions and are incentivized to participate by the promise of a monetary reward. Mining pools have emerged as collectives of miners that ensure a more stable and predictable income. MiningVis aims to help analysts understand the evolution and dynamics of the Bitcoin mining ecosystem, including mining market statistics, multi-measure mining pool rankings, and pool hopping behavior. Each of these features can be compared to external data concerning pool characteristics and Bitcoin news. In order to assess the value of MiningVis, we conducted online interviews and insight-based user studies with Bitcoin miners. We describe research questions tackled and insights made by our participants and illustrate practical implications for visual analytics systems for Bitcoin mining.
false
false
[ "Natkamon Tovanich", "Nicolas Soulié", "Nicolas Heulot", "Petra Isenberg" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/aPh_QMZ1xfk", "icon": "video" } ]
Vis
2,021
Modeling Just Noticeable Differences in Charts
10.1109/TVCG.2021.3114874
One of the fundamental tasks in visualization is to compare two or more visual elements. However, it is often difficult to visually differentiate graphical elements encoding a small difference in value, such as the heights of similar bars in a bar chart or the angles of similar sections in a pie chart. Perceptual laws can be used in order to model when and how we perceive this difference. In this work, we model the perception of Just Noticeable Differences (JNDs), the minimum difference in visual attributes that allows faithfully comparing similar elements, in charts. Specifically, we explore the relation between JNDs and two major visual variables, the intensity of visual elements and the distance between them, and study it in three chart types: bar charts, pie charts, and bubble charts. Through an empirical study, we identify main effects on JND for distance in bar charts, intensity in pie charts, and both distance and intensity in bubble charts. By fitting a linear mixed effects model, we model JND and find that JND grows as an exponential function of these variables. We highlight several usage scenarios that make use of the JND modeling in which elements below the fitted JND are detected and enhanced with secondary visual cues for better discrimination.
false
false
[ "Min Lu 0002", "Joel Lanir", "Chufeng Wang", "Yucong Yao", "Wen Zhang", "Oliver Deussen", "Hui Huang 0004" ]
[]
[]
[]
Vis
2,021
MultiVision: Designing Analytical Dashboards with Deep Learning Based Recommendation
10.1109/TVCG.2021.3114826
We contribute a deep-learning-based method that assists in designing analytical dashboards for analyzing a data table. Given a data table, data workers usually need to experience a tedious and time-consuming process to select meaningful combinations of data columns for creating charts. This process is further complicated by the needs of creating dashboards composed of multiple views that unveil different perspectives of data. Existing automated approaches for recommending multiple-view visualizations mainly build on manually crafted design rules, producing sub-optimal or irrelevant suggestions. To address this gap, we present a deep learning approach for selecting data columns and recommending multiple charts. More importantly, we integrate the deep learning models into a mixed-initiative system. Our model could make recommendations given optional user-input selections of data columns. The model, in turn, learns from provenance data of authoring logs in an offline manner. We compare our deep learning model with existing methods for visualization recommendation and conduct a user study to evaluate the usefulness of the system.
false
false
[ "Aoyu Wu", "Yun Wang 0012", "Mengyu Zhou", "Xinyi He", "Haidong Zhang", "Huamin Qu", "Dongmei Zhang 0001" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2107.07823v1", "icon": "paper" } ]
Vis
2,021
Natural Language to Visualization by Neural Machine Translation
10.1109/TVCG.2021.3114848
Supporting the translation from natural language (NL) query to visualization (NL2VIS) can simplify the creation of data visualizations because, if successful, anyone can generate visualizations from tabular data using their natural language. The state-of-the-art NL2VIS approaches (e.g., NL4DV and FlowSense) are based on semantic parsers and heuristic algorithms, which are not end-to-end and are not designed for supporting (possibly) complex data transformations. Deep neural network powered neural machine translation models have made great strides in many machine translation tasks, which suggests that they might be viable for NL2VIS as well. In this paper, we present ncNet, a Transformer-based sequence-to-sequence model for supporting NL2VIS, with several novel visualization-aware optimizations, including using attention-forcing to optimize the learning process, and visualization-aware rendering to produce better visualization results. To enhance the capability of the machine to comprehend natural language queries, ncNet is also designed to take an optional chart template (e.g., a pie chart or a scatter plot) as an additional input, where the chart template serves as a constraint to limit what can be visualized. We conducted both a quantitative evaluation and a user study, showing that ncNet achieves good accuracy on the nvBench benchmark and is easy to use.
false
false
[ "Yuyu Luo", "Nan Tang 0001", "Guoliang Li 0001", "Jiawei Tang", "Chengliang Chai", "Xuedi Qin" ]
[]
[]
[]
Vis
2,021
NeuroCartography: Scalable Automatic Visual Summarization of Concepts in Deep Neural Networks
10.1109/TVCG.2021.3114858
Existing research on making sense of deep neural networks often focuses on neuron-level interpretation, which may not adequately capture the bigger picture of how concepts are collectively encoded by multiple neurons. We present Neurocartography, an interactive system that scalably summarizes and visualizes concepts learned by neural networks. It automatically discovers and groups neurons that detect the same concepts, and describes how such neuron groups interact to form higher-level concepts and the subsequent predictions. Neurocartography introduces two scalable summarization techniques: (1) neuron clustering groups neurons based on the semantic similarity of the concepts detected by neurons (e.g., neurons detecting “dog faces” of different breeds are grouped); and (2) neuron embedding encodes the associations between related concepts based on how often they co-occur (e.g., neurons detecting “dog face” and “dog tail” are placed closer in the embedding space). Key to our scalable techniques is the ability to efficiently compute all neuron pairs' relationships, in time linear to the number of neurons instead of quadratic time. Neurocartography scales to large data, such as the ImageNet dataset with 1.2M images. The system's tightly coordinated views integrate the scalable techniques to visualize the concepts and their relationships, projecting the concept associations to a 2D space in Neuron Projection View, and summarizing neuron clusters and their relationships in Graph View. Through a large-scale human evaluation, we demonstrate that our technique discovers neuron groups that represent coherent, human-meaningful concepts. And through usage scenarios, we describe how our approaches enable interesting and surprising discoveries, such as concept cascades of related and isolated concepts. The Neurocartography visualization runs in modern browsers and is open-sourced.
false
false
[ "Haekyu Park", "Nilaksh Das", "Rahul Duggal", "Austin P. Wright", "Omar Shaikh", "Fred Hohman", "Polo Chau" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2108.12931v1", "icon": "paper" } ]
Vis
2,021
Perception! Immersion! Empowerment! Superpowers as Inspiration for Visualization
10.1109/TVCG.2021.3114844
We explore how the lens of fictional superpowers can help characterize how visualizations empower people and provide inspiration for new visualization systems. Researchers and practitioners often tout visualizations' ability to “make the invisible visible” and to “enhance cognitive abilities.” Meanwhile, superhero comics and other modern fiction often depict characters with similarly fantastic abilities that allow them to see and interpret the world in ways that transcend traditional human perception. We investigate the intersection of these domains, and show how the language of superpowers can be used to characterize existing visualization systems and suggest opportunities for new and empowering ones. We introduce two frameworks: The first characterizes seven underlying mechanisms that form the basis for a variety of visual superpowers portrayed in fiction. The second identifies seven ways in which visualization tools and interfaces can instill a sense of empowerment in the people who use them. Building on these observations, we illustrate a diverse set of “visualization superpowers” and highlight opportunities for the visualization community to create new systems and interactions that empower new experiences with data. Material and illustrations are available under CC-BY 4.0 at osf.io/8yhfz.
false
false
[ "Wesley Willett", "Bon Adriel Aseniero", "Sheelagh Carpendale", "Pierre Dragicevic", "Yvonne Jansen", "Lora Oehlberg", "Petra Isenberg" ]
[ "BP" ]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/Y-6GdB_nVeg", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/_k51PyDj5Ag?si=Hm1knwbEHt2RSVav", "icon": "video" } ]
Vis
2,021
Probabilistic Occlusion Culling using Confidence Maps for High-Quality Rendering of Large Particle Data
10.1109/TVCG.2021.3114788
Achieving high rendering quality in the visualization of large particle data, for example from large-scale molecular dynamics simulations, requires a significant amount of sub-pixel super-sampling, due to very high numbers of particles per pixel. Although it is impossible to super-sample all particles of large-scale data at interactive rates, efficient occlusion culling can decouple the overall data size from a high effective sampling rate of visible particles. However, while the latter is essential for domain scientists to be able to see important data features, performing occlusion culling by sampling or sorting the data is usually slow or error-prone due to visibility estimates of insufficient quality. We present a novel probabilistic culling architecture for super-sampled high-quality rendering of large particle data. Occlusion is dynamically determined at the sub-pixel level, without explicit visibility sorting or data simplification. We introduce confidence maps to probabilistically estimate confidence in the visibility data gathered so far. This enables progressive, confidence-based culling, helping to avoid wrong visibility decisions. In this way, we determine particle visibility with high accuracy, although only a small part of the data set is sampled. This enables extensive super-sampling of (partially) visible particles for high rendering quality, at a fraction of the cost of sampling all particles. For real-time performance with millions of particles, we exploit novel features of recent GPU architectures to group particles into two hierarchy levels, combining fine-grained culling with high frame rates.
false
false
[ "Mohamed Ibrahim", "Peter Rautek", "Guido Reina", "Marco Agus", "Markus Hadwiger" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/zMbAIqrnRj0", "icon": "video" } ]
Vis
2,021
Professional Differences: A Comparative Study of Visualization Task Performance and Spatial Ability Across Disciplines
10.1109/TVCG.2021.3114805
Problem-driven visualization work is rooted in deeply understanding the data, actors, processes, and workflows of a target domain. However, an individual's personality traits and cognitive abilities may also influence visualization use. Diverse user needs and abilities raise natural questions for specificity in visualization design: Could individuals from different domains exhibit performance differences when using visualizations? Are any systematic variations related to their cognitive abilities? This study bridges domain-specific perspectives on visualization design with those provided by cognition and perception. We measure variations in visualization task performance across chemistry, computer science, and education, and relate these differences to variations in spatial ability. We conducted an online study with over 60 domain experts consisting of tasks related to pie charts, isocontour plots, and 3D scatterplots, and grounded by a well-documented spatial ability test. Task performance (correctness) varied with profession across more complex visualizations (isocontour plots and scatterplots), but not pie charts, a comparatively common visualization. We found that correctness correlates with spatial ability, and the professions differ in terms of spatial ability. These results indicate that domains differ not only in the specifics of their data and tasks, but also in terms of how effectively their constituent members engage with visualizations and their cognitive traits. Analyzing participants' confidence and strategy comments suggests that focusing on performance neglects important nuances, such as differing approaches to engage with even common visualizations and potential skill transference. Our findings offer a fresh perspective on discipline-specific visualization with specific recommendations to help guide visualization design that celebrates the uniqueness of the disciplines and individuals we seek to serve.
false
false
[ "Kyle Wm. Hall", "Anthony Kouroupis", "Anastasia Bezerianos", "Danielle Albers Szafir", "Christopher Collins 0001" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2108.02333v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/FIkUJnoV6Lg", "icon": "video" } ]
Vis
2,021
Propagating Visual Designs to Numerous Plots and Dashboards
10.1109/TVCG.2021.3114828
In the process of developing an infrastructure for providing visualization and visual analytics (VIS) tools to epidemiologists and modeling scientists, we encountered a technical challenge in applying a number of visual designs to numerous datasets rapidly and reliably with limited development resources. In this paper, we present a technical solution to address this challenge. Operationally, we separate the tasks of data management, visual designs, and plots and dashboard deployment in order to streamline the development workflow. Technically, we utilize: an ontology to bring datasets, visual designs, and deployable plots and dashboards under the same management framework; multi-criteria search and ranking algorithms for discovering potential datasets that match a visual design; and a purposely-designed user interface for propagating each visual design to appropriate datasets (often in the tens or hundreds) and quality-assuring the propagation before deployment. This technical solution has been used in the development of the RAMPVIS infrastructure for supporting a consortium of epidemiologists and modeling scientists through visualization.
false
false
[ "Saiful Khan", "Phong Hai Nguyen", "Alfie Abdul-Rahman", "Benjamin Bach", "Min Chen 0001", "Euan Freeman", "Cagatay Turkay" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2107.08882v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/WVsrMdvjQlk", "icon": "video" } ]
Vis
2,021
Pyramid-based Scatterplots Sampling for Progressive and Streaming Data Visualization
10.1109/TVCG.2021.3114880
We present a pyramid-based scatterplot sampling technique to avoid overplotting and enable progressive and streaming visualization of large data. Our technique is based on a multiresolution pyramid-based decomposition of the underlying density map and makes use of the density values in the pyramid to guide the sampling at each scale for preserving the relative data densities and outliers. We show that our technique is competitive in quality with state-of-the-art methods and runs faster by about an order of magnitude. Also, we have adapted it to deliver progressive and streaming data visualization by processing the data in chunks and updating the scatterplot areas with visible changes in the density map. A quantitative evaluation shows that our approach generates stable and faithful progressive samples that are comparable to the state-of-the-art method in preserving relative densities and superior to it in keeping outliers and stability when switching frames. We present two case studies that demonstrate the effectiveness of our approach for exploring large data.
false
false
[ "Xin Chen", "Jian Zhang 0070", "Chi-Wing Fu", "Jean-Daniel Fekete", "Yunhai Wang" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/_dj2w4nAqfs", "icon": "video" } ]
Vis
2,021
Rapid Labels: Point-Feature Labeling on GPU
10.1109/TVCG.2021.3114854
Labels, short textual annotations, are an important component of data visualizations, illustrations, infographics, and geographical maps. In interactive applications, the labeling method responsible for positioning the labels should not take resources away from the application itself. In other words, the labeling method should provide the result as fast as possible. In this work, we propose a greedy point-feature labeling method running on the GPU. In contrast to existing methods that position the labels sequentially, the proposed method positions several labels in parallel. Yet, we guarantee that the positioned labels will not overlap, nor will they overlap important visual features. When the proposed method is searching for the label position of a point-feature, the available label candidates are evaluated with respect to overlaps with important visual features, conflicts with label candidates of other point-features, and their ambiguity. The evaluation of each label candidate is done in constant time, independently of the number of point-features, the number of important visual features, and the resolution of the created image. Our measurements indicate that the proposed method is able to position more labels than existing greedy methods that do not evaluate conflicts between the label candidates. At the same time, the proposed method achieves a significant increase in performance. The increase in performance is mainly due to the parallelization and the efficient evaluation of label candidates.
false
false
[ "Vaclav Pavlovec", "Ladislav Cmolík" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/tVEMCigvWkw", "icon": "video" } ]
Vis
2,021
Real-Time Visual Analysis of High-Volume Social Media Posts
10.1109/TVCG.2021.3114800
Breaking news and first-hand reports often trend on social media platforms before traditional news outlets cover them. The real-time analysis of posts on such platforms can reveal valuable and timely insights for journalists, politicians, business analysts, and first responders, but the high number and diversity of new posts pose a challenge. In this work, we present an interactive system that enables the visual analysis of streaming social media data on a large scale in real-time. We propose an efficient and explainable dynamic clustering algorithm that powers a continuously updated visualization of the current thematic landscape as well as detailed visual summaries of specific topics of interest. Our parallel clustering strategy provides an adaptive stream with a digestible but diverse selection of recent posts related to relevant topics. We also integrate familiar visual metaphors that are highly interlinked for enabling both explorative and more focused monitoring tasks. Analysts can gradually increase the resolution to dive deeper into particular topics. In contrast to previous work, our system also works with non-geolocated posts and avoids extensive preprocessing such as detecting events. We evaluated our dynamic clustering algorithm and discuss several use cases that show the utility of our system.
false
false
[ "Johannes Knittel", "Steffen Koch 0001", "Tan Tang", "Wei Chen 0001", "Yingcai Wu", "Shixia Liu", "Thomas Ertl" ]
[ "HM" ]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2108.03052v1", "icon": "paper" } ]
Vis
2,021
Rethinking the Ranks of Visual Channels
10.1109/TVCG.2021.3114684
Data can be visually represented using visual channels like position, length or luminance. An existing ranking of these visual channels is based on how accurately participants could report the ratio between two depicted values. There is an assumption that this ranking should hold for different tasks and for different numbers of marks. However, there is surprisingly little existing work that tests this assumption, especially given that visually computing ratios is relatively unimportant in real-world visualizations, compared to seeing, remembering, and comparing trends and motifs, across displays that almost universally depict more than two values. To simulate the information extracted from a glance at a visualization, we instead asked participants to immediately reproduce a set of values from memory after they were shown the visualization. These values could be shown in a bar graph (position (bar)), line graph (position (line)), heat map (luminance), bubble chart (area), misaligned bar graph (length), or ‘wind map’ (angle). With a Bayesian multilevel modeling approach, we show how the rank positions of visual channels shift across different numbers of marks (2, 4 or 8) and for bias, precision, and error measures. The ranking did not hold, even for reproductions of only 2 marks, and the new probabilistic ranking was highly inconsistent for reproductions of different numbers of marks. Other factors besides channel choice had an order of magnitude more influence on performance, such as the number of values in the series (e.g., more marks led to larger errors), or the value of each mark (e.g., small values were systematically overestimated). Every visual channel was worse for displays with 8 marks than 4, consistent with established limits on visual memory. These results point to the need for a body of empirical studies that move beyond two-value ratio judgments as a baseline for reliably ranking the quality of a visual channel, including testing new tasks (detection of trends or motifs), timescales (immediate computation, or later comparison), and the number of values (from a handful, to thousands).
false
false
[ "Caitlyn M. McColeman", "Fumeng Yang", "Timothy F. Brady", "Steven Franconeri" ]
[ "HM" ]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2107.11367v1", "icon": "paper" } ]
Vis
2,021
Revisiting Dimensionality Reduction Techniques for Visual Cluster Analysis: An Empirical Study
10.1109/TVCG.2021.3114694
Dimensionality Reduction (DR) techniques can generate 2D projections and enable visual exploration of cluster structures of high-dimensional datasets. However, different DR techniques yield different patterns, which significantly affect the performance of visual cluster analysis tasks. We present the results of a user study that investigates the influence of different DR techniques on visual cluster analysis. Our study focuses on the property types of most concern, namely linearity and locality, and evaluates twelve representative DR techniques that cover these properties. Four controlled experiments were conducted to evaluate how the DR techniques facilitate the tasks of 1) cluster identification, 2) membership identification, 3) distance comparison, and 4) density comparison, respectively. We also evaluated users' subjective preferences for the DR techniques regarding the quality of projected clusters. The results show that: 1) non-linear and local techniques are preferred in cluster identification and membership identification; 2) linear techniques perform better than non-linear techniques in density comparison; 3) UMAP (Uniform Manifold Approximation and Projection) and t-SNE (t-Distributed Stochastic Neighbor Embedding) perform the best in cluster identification and membership identification; 4) NMF (Nonnegative Matrix Factorization) has competitive performance in distance comparison; 5) t-SNLE (t-Distributed Stochastic Neighbor Linear Embedding) has competitive performance in density comparison.
false
false
[ "Jiazhi Xia", "Yuchen Zhang", "Jie Song", "Yang Chen", "Yunhai Wang", "Shixia Liu" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2110.02894v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/aNjcy5KLfd8", "icon": "video" } ]
Vis
2,021
Rotate or Wrap? Interactive Visualisations of Cyclical Data on Cylindrical or Toroidal Topologies
10.1109/TVCG.2021.3114693
In this paper, we report on a study of visual representations for cyclical data and the effect of interactively wrapping a bar chart ‘around its boundaries’. Compared to linear bar charts, polar (or radial) visualisations have the advantage that cyclical data can be presented continuously without mentally bridging the visual ‘cut’ across the left and right boundaries. To investigate this hypothesis and to assess the effect the cut has on analysis performance, this paper presents results from a crowdsourced, controlled experiment with 72 participants comparing a new continuous panning technique to linear bar charts (interactive wrapping). Our results show that bar charts with interactive wrapping lead to fewer errors compared to standard bar charts or polar charts. Inspired by these results, we generalise the concept of interactive wrapping to other visualisations for cyclical or relational data. We describe a design space based on the concept of one-dimensional wrapping and two-dimensional wrapping, linked to two common 3D topologies, cylinder and torus, that can be used to metaphorically explain one- and two-dimensional wrapping. This design space suggests that interactive wrapping is widely applicable to many different data types.
false
false
[ "Kun-Ting Chen", "Tim Dwyer", "Benjamin Bach", "Kim Marriott" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/-SMB0pJ17rI", "icon": "video" } ]
Vis
2,021
Scope2Screen: Focus+Context Techniques for Pathology Tumor Assessment in Multivariate Image Data
10.1109/TVCG.2021.3114786
Inspection of tissues using a light microscope is the primary method of diagnosing many diseases, notably cancer. Highly multiplexed tissue imaging builds on this foundation, enabling the collection of up to 60 channels of molecular information plus cell and tissue morphology using antibody staining. This provides unique insight into disease biology and promises to help with the design of patient-specific therapies. However, a substantial gap remains with respect to visualizing the resulting multivariate image data and effectively supporting pathology workflows in digital environments on screen. We, therefore, developed Scope2Screen, a scalable software system for focus+context exploration and annotation of whole-slide, high-plex, tissue images. Our approach scales to analyzing 100GB images of 10^9 or more pixels per channel, containing millions of individual cells. A multidisciplinary team of visualization experts, microscopists, and pathologists identified key image exploration and annotation tasks involving finding, magnifying, quantifying, and organizing regions of interest (ROIs) in an intuitive and cohesive manner. Building on a scope-to-screen metaphor, we present interactive lensing techniques that operate at single-cell and tissue levels. Lenses are equipped with task-specific functionality and descriptive statistics, making it possible to analyze image features, cell types, and spatial arrangements (neighborhoods) across image channels and scales. A fast sliding-window search guides users to regions similar to those under the lens; these regions can be analyzed and considered either separately or as part of a larger image collection. A novel snapshot method enables linked lens configurations and image statistics to be saved, restored, and shared with these regions. We validate our designs with domain experts and apply Scope2Screen in two case studies involving lung and colorectal cancers to discover cancer-relevant image features.
false
false
[ "Jared Jessup", "Robert Krüger", "Simon Warchol", "John Hoffer", "Jeremy Muhlich", "Cecily C. Ritch", "Giorgio Gaglia", "Shannon Coy", "Yu-An Chen", "Jia-Ren Lin", "Sandro Santagata", "Peter K. Sorger", "Hanspeter Pfister" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2110.04875v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/Y2tGl2r8v6U", "icon": "video" } ]
Vis
2,021
Seek for Success: A Visualization Approach for Understanding the Dynamics of Academic Careers
10.1109/TVCG.2021.3114790
How to achieve academic career success has been a long-standing research question in social science research. With the growing availability of large-scale, well-documented academic profiles and career trajectories, scholarly interest in career success has been reinvigorated, and it has emerged as an active research domain called the Science of Science (i.e., SciSci). In this study, we adopt an innovative dynamic perspective to examine how individual and social factors influence career success over time. We propose ACSeeker, an interactive visual analytics approach to explore the potential factors of success and how the influence of multiple factors changes at different stages of academic careers. We first applied a Multi-factor Impact Analysis framework to estimate the effect of different factors on academic career success over time. We then developed a visual analytics system to understand the dynamic effects interactively. A novel timeline is designed to reveal and compare the factor impacts based on the whole population. A customized career line showing the individual career development is provided to allow a detailed inspection. To validate the effectiveness and usability of ACSeeker, we report two case studies and interviews with a social scientist and general researchers.
false
false
[ "Yifang Wang 0001", "Tai-Quan Peng", "Huihua Lu", "Haoren Wang", "Xiao Xie", "Huamin Qu", "Yingcai Wu" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2108.03381v2", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/vXNPPudEGtg", "icon": "video" } ]
Vis
2,021
Semantic Snapping for Guided Multi-View Visualization Design
10.1109/TVCG.2021.3114860
Visual information displays are typically composed of multiple visualizations that are used to facilitate an understanding of the underlying data. A common example is dashboards, which are frequently used in domains such as finance, process monitoring and business intelligence. However, users may not be aware of existing guidelines and lack expert design knowledge when composing such multi-view visualizations. In this paper, we present semantic snapping, an approach to help non-expert users design effective multi-view visualizations from sets of pre-existing views. When a particular view is placed on a canvas, it is “aligned” with the remaining views, not with respect to its geometric layout but based on aspects of the visual encoding itself, such as how data dimensions are mapped to channels. Our method uses an on-the-fly procedure to detect and suggest resolutions for conflicting, misleading, or ambiguous designs, as well as to provide suggestions for alternative presentations. With this approach, users can be guided to avoid common pitfalls encountered when composing visualizations. Our provided examples and case studies demonstrate the usefulness and validity of our approach.
false
false
[ "Yngve Sekse Kristiansen", "Laura A. Garrison", "Stefan Bruckner" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2109.08384v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/YZ95jlPuym8", "icon": "video" } ]
Vis
2,021
Sequen-C: A Multilevel Overview of Temporal Event Sequences
10.1109/TVCG.2021.3114868
Building a visual overview of temporal event sequences with an optimal level-of-detail (i.e. simplified but informative) is an ongoing challenge: expecting the user to zoom into every important aspect of the overview can lead to missing insights. We propose a technique to build a multilevel overview of event sequences, whose granularity can be transformed across sequence clusters (vertical level-of-detail) or longitudinally (horizontal level-of-detail), using hierarchical aggregation and a novel cluster data representation, Align-Score-Simplify. By default, the overview shows an optimal number of sequence clusters obtained through the average silhouette width metric; users are then able to explore alternative optimal sequence clusterings. The vertical level-of-detail of the overview changes along with the number of clusters, whilst the horizontal level-of-detail refers to the level of summarization applied to each cluster representation. The proposed technique has been implemented in a visualization system called Sequence Cluster Explorer (Sequen-C) that allows multilevel and detail-on-demand exploration through three coordinated views, and the inspection of data attributes at the cluster, unique sequence, and individual sequence level. We present two case studies using real-world datasets in the healthcare domain, CUREd and MIMIC-III, which demonstrate how the technique can aid users in obtaining a summary of common and deviating pathways and in exploring data attributes for selected patterns.
false
false
[ "Jessica Magallanes", "Tony Stone", "Paul D. Morris", "Suzanne Mason", "Steven Wood", "Maria-Cruz Villa-Uriol" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2108.03043v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/cbii6AHG9Oo", "icon": "video" } ]
Vis
2,021
Sibyl: Understanding and Addressing the Usability Challenges of Machine Learning In High-Stakes Decision Making
10.1109/TVCG.2021.3114864
Machine learning (ML) is being applied to a diverse and ever-growing set of domains. In many cases, domain experts, who often have no expertise in ML or data science, are asked to use ML predictions to make high-stakes decisions. Multiple ML usability challenges can appear as a result, such as lack of user trust in the model, inability to reconcile human-ML disagreement, and ethical concerns about oversimplification of complex problems to a single algorithm output. In this paper, we investigate the ML usability challenges that present in the domain of child welfare screening through a series of collaborations with child welfare screeners. Following the iterative design process between the ML scientists, visualization researchers, and domain experts (child screeners), we first identified four key ML challenges and honed in on one promising explainable ML technique to address them (local factor contributions). Then we implemented and evaluated our visual analytics tool, Sibyl, to increase the interpretability and interactivity of local factor contributions. The effectiveness of our tool is demonstrated by two formal user studies with 12 non-expert participants and 13 expert participants, respectively. Valuable feedback was collected, from which we composed a list of design implications as a useful guideline for researchers who aim to develop an interpretable and interactive visualization tool for ML prediction models deployed for child welfare screeners and other similar domain experts.
false
false
[ "Alexandra Zytek", "Dongyu Liu", "Rhema Vaithianathan", "Kalyan Veeramachaneni" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2103.02071v2", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/ClATgmKwVCs", "icon": "video" } ]
Vis
2,021
SightBi: Exploring Cross-View Data Relationships with Biclusters
10.1109/TVCG.2021.3114801
Multiple-view visualization (MV) has been heavily used in visual analysis tools for sensemaking of data in various domains (e.g., bioinformatics, cybersecurity and text analytics). One common task of visual analysis with multiple views is to relate data across different views. For example, to identify threats, an intelligence analyst needs to link people from a social network graph with locations on a crime-map, and then search for and read relevant documents. Currently, exploring cross-view data relationships heavily relies on view-coordination techniques (e.g., brushing and linking), which may require significant user effort on many trial-and-error attempts, such as repetitiously selecting elements in one view, and then observing and following elements highlighted in other views. To address this, we present SightBi, a visual analytics approach for supporting cross-view data relationship explorations. We discuss the design rationale of SightBi in detail, with identified user tasks regarding the use of cross-view data relationships. SightBi formalizes cross-view data relationships as biclusters, computes them from a dataset, and uses a bi-context design that highlights creating stand-alone relationship-views. This helps preserve existing views and offers an overview of cross-view data relationships to guide user exploration. Moreover, SightBi allows users to interactively manage the layout of multiple views by using newly created relationship-views. With a usage scenario, we demonstrate the usefulness of SightBi for sensemaking of cross-view data relationships.
false
false
[ "Maoyuan Sun", "Abdul Rahman Shaikh", "Hamed Alhoori", "Jian Zhao 0010" ]
[ "HM" ]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2108.01044v2", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/yF3sUH1gQBQ", "icon": "video" } ]
Vis
2,021
Simultaneous Matrix Orderings for Graph Collections
10.1109/TVCG.2021.3114773
Undirected graphs are frequently used to model phenomena that deal with interacting objects, such as social networks, brain activity and communication networks. The topology of an undirected graph $G$ can be captured by an adjacency matrix; this matrix in turn can be visualized directly to give insight into the graph structure. Which visual patterns appear in such a matrix visualization crucially depends on the ordering of its rows and columns. Formally defining the quality of an ordering and then automatically computing a high-quality ordering are both challenging problems; however, effective heuristics exist and are used in practice. Often, graphs do not exist in isolation but as part of a collection of graphs on the same set of vertices, for example, brain scans over time or of different people. To visualize such graph collections, we need a single ordering that works well for all matrices simultaneously. The current state-of-the-art solves this problem by taking a (weighted) union over all graphs and applying existing heuristics. However, this union leads to a loss of information, specifically in those parts of the graphs which are different. We propose a collection-aware approach to avoid this loss of information and apply it to two popular heuristic methods: leaf order and barycenter. The de-facto standard computational quality metrics for matrix ordering capture only block-diagonal patterns (cliques). Instead, we propose to use Moran's $I$, a spatial auto-correlation metric, which captures the full range of established patterns. Moran's $I$ refines previously proposed stress measures. Furthermore, the popular leaf order method heuristically optimizes a similar measure which further supports the use of Moran's $I$ in this context. An ordering that maximizes Moran's $I$ can be computed via solutions to the Traveling Salesperson Problem (TSP); orderings that approximate the optimal ordering can be computed more efficiently, using any of the approximation algorithms for metric TSP. We evaluated our methods for simultaneous orderings on real-world datasets using Moran's $I$ as the quality metric. Our results show that our collection-aware approach matches or improves performance compared to the union approach, depending on the similarity of the graphs in the collection. Specifically, our Moran's $I$-based collection-aware leaf order implementation consistently outperforms other implementations. Our collection-aware implementations carry no significant additional computational costs.
false
false
[ "Nathan van Beusekom", "Wouter Meulemans", "Bettina Speckmann" ]
[ "BP" ]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2109.12050v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/0BIMBxNBSgk", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/xbmXg0xDPFs?si=Olp2i2bwJrMApmNi", "icon": "video" } ]
Vis
2,021
SPEULER: Semantics-preserving Euler Diagrams
10.1109/TVCG.2021.3114834
Creating comprehensible visualizations of highly overlapping set-typed data is a challenging task due to its complexity. To facilitate insights into set connectivity and to leverage semantic relations between intersections, we propose a fast two-step layout technique for Euler diagrams that are both well-matched and well-formed. Our method conforms to established form guidelines for Euler diagrams regarding semantics, aesthetics, and readability. First, we establish an initial ordering of the data, which we then use to incrementally create a planar, connected, and monotone dual graph representation. In the next step, the graph is transformed into a circular layout that maintains the semantics and yields simple Euler diagrams with smooth curves. When the data cannot be represented by simple diagrams, our algorithm always falls back to a solution that is not well-formed but still well-matched, whereas previous methods often fail to produce expected results. We show the usefulness of our method for visualizing set-typed data using examples from text analysis and infographics. Furthermore, we discuss the characteristics of our approach and evaluate our method against state-of-the-art methods.
false
false
[ "Rebecca Kehlbeck", "Jochen Görtler", "Yunhai Wang", "Oliver Deussen" ]
[ "HM" ]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2108.03529v1", "icon": "paper" } ]
Vis
2,021
STNet: An End-to-End Generative Framework for Synthesizing Spatiotemporal Super-Resolution Volumes
10.1109/TVCG.2021.3114815
We present STNet, an end-to-end generative framework that synthesizes spatiotemporal super-resolution volumes with high fidelity for time-varying data. STNet includes two modules: a generator and a spatiotemporal discriminator. The input to the generator is two low-resolution volumes at both ends, and the output is the intermediate and the two-ending spatiotemporal super-resolution volumes. The spatiotemporal discriminator, leveraging convolutional long short-term memory, accepts a spatiotemporal super-resolution sequence as input and predicts a conditional score for each volume based on its spatial (the volume itself) and temporal (the previous volumes) information. We propose an unsupervised pre-training stage using cycle loss to improve the generalization of STNet. Once trained, STNet can generate spatiotemporal super-resolution volumes from low-resolution ones, offering scientists an option to save data storage (i.e., sparsely sampling the simulation output in both spatial and temporal dimensions). We compare STNet with the baseline bicubic+linear interpolation, two deep learning solutions ($\mathsf{SSR}+\mathsf{TSF}$, STD), and a state-of-the-art tensor compression solution (TTHRESH) to show the effectiveness of STNet.
false
false
[ "Jun Han 0010", "Hao Zheng 0006", "Danny Ziyi Chen", "Chaoli Wang 0001" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/AezFUomjfzI", "icon": "video" } ]
Vis
2,021
STRATISFIMAL LAYOUT: A modular optimization model for laying out layered node-link network visualizations
10.1109/TVCG.2021.3114756
Node-link visualizations are a familiar and powerful tool for displaying the relationships in a network. The readability of these visualizations highly depends on the spatial layout used for the nodes. In this paper, we focus on computing layered layouts, in which nodes are aligned on a set of parallel axes to better expose hierarchical or sequential relationships. Heuristic-based layouts are widely used as they scale well to larger networks and usually create readable, albeit sub-optimal, visualizations. We instead use a layout optimization model that prioritizes optimality - as compared to scalability - because an optimal solution not only represents the best attainable result, but can also serve as a baseline to evaluate the effectiveness of layout heuristics. We take an important step towards powerful and flexible network visualization by proposing Stratisfimal Layout, a modular integer-linear-programming formulation that can consider several important readability criteria simultaneously — crossing reduction, edge bendiness, and nested and multi-layer groups. The layout can be adapted to diverse use cases through its modularity. Individual features can be enabled and customized depending on the application. We provide open-source and documented implementations of the layout, both for web-based and desktop visualizations. As a proof-of-concept, we apply it to the problem of visualizing complicated SQL queries, which have features that we believe cannot be addressed by existing layout optimization models. We also include a benchmark network generator and the results of an empirical evaluation to assess the performance trade-offs of our design choices. A full version of this paper with all appendices, data, and source code is available at osf.io/qdyt9 with live examples at https://visdunneright.github.io/stratisfimal/.
false
false
[ "Sara Di Bartolomeo", "Mirek Riedewald", "Wolfgang Gatterbauer", "Cody Dunne" ]
[]
[]
[]
Vis
2,021
TacticFlow: Visual Analytics of Ever-Changing Tactics in Racket Sports
10.1109/TVCG.2021.3114832
Event sequence mining is often used to summarize patterns from hundreds of sequences but faces special challenges when handling racket sports data. In racket sports (e.g., tennis and badminton), a player hitting the ball is considered a multivariate event consisting of multiple attributes (e.g., hit technique and ball position). A rally (i.e., a series of consecutive hits beginning with one player serving the ball and ending with one player winning a point) thereby can be viewed as a multivariate event sequence. Mining frequent patterns and depicting how patterns change over time is instructive and meaningful to players who want to learn more short-term competitive strategies (i.e., tactics) that encompass multiple hits. However, players in racket sports usually change their tactics rapidly according to the opponent's reaction, resulting in ever-changing tactic progression. In this work, we introduce a tailored visualization system built on a novel multivariate sequence pattern mining algorithm to facilitate explorative identification and analysis of various tactics and tactic progression. The algorithm can mine multiple non-overlapping multivariate patterns from hundreds of sequences effectively. Based on the mined results, we propose a glyph-based Sankey diagram to visualize the ever-changing tactic progression and support interactive data exploration. Through two case studies with four domain experts in tennis and badminton, we demonstrate that our system can effectively obtain insights about tactic progression in most racket sports. We further discuss the strengths and the limitations of our system based on domain experts' feedback.
false
false
[ "Jiang Wu", "Dongyu Liu", "Ziyang Guo", "Qingyang Xu", "Yingcai Wu" ]
[]
[]
[]
Vis
2,021
THALIS: Human-Machine Analysis of Longitudinal Symptoms in Cancer Therapy
10.1109/TVCG.2021.3114810
Although cancer patients survive years after oncologic therapy, they are plagued with long-lasting or permanent residual symptoms, whose severity, rate of development, and resolution after treatment vary largely between survivors. The analysis and interpretation of symptoms is complicated by their partial co-occurrence, variability across populations and across time, and, in the case of cancers that use radiotherapy, by further symptom dependency on the tumor location and prescribed treatment. We describe THALIS, an environment for visual analysis and knowledge discovery from cancer therapy symptom data, developed in close collaboration with oncology experts. Our approach leverages unsupervised machine learning methodology over cohorts of patients, and, in conjunction with custom visual encodings and interactions, provides context for new patients based on patients with similar diagnostic features and symptom evolution. We evaluate this approach on data collected from a cohort of head and neck cancer patients. Feedback from our clinician collaborators indicates that THALIS supports knowledge discovery beyond the limits of machines or humans alone, and that it serves as a valuable tool in both the clinic and symptom research.
false
false
[ "Carla Floricel", "Nafiul Nipu", "Mikayla Biggs", "Andrew Wentzel", "Guadalupe Canahuate", "Lisanne van Dijk", "Abdallah Sherif Radwan Mohamed", "Clifton D. Fuller", "G. Elisabeta Marai" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2108.02817v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/HXuYC6NimSg", "icon": "video" } ]
Vis
2,021
The Weighted Average Illusion: Biases in Perceived Mean Position in Scatterplots
10.1109/TVCG.2021.3114783
Scatterplots can encode a third dimension by using additional channels like size or color (e.g. bubble charts). We explore a potential misinterpretation of trivariate scatterplots, which we call the weighted average illusion, where locations of larger and darker points are given more weight toward x- and y-mean estimates. This systematic bias is sensitive to a designer's choice of size or lightness ranges mapped onto the data. In this paper, we quantify this bias against varying size/lightness ranges and data correlations. We discuss possible explanations for its cause by measuring attention given to individual data points using a vision science technique called the centroid method. Our work illustrates how ensemble processing mechanisms and mental shortcuts can significantly distort visual summaries of data, and can lead to misjudgments like the demonstrated weighted average illusion.
false
false
[ "Matt-Heun Hong", "Jessica K. Witt", "Danielle Albers Szafir" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2108.03766v1", "icon": "paper" } ]
Vis
2,021
ThreadStates: State-based Visual Analysis of Disease Progression
10.1109/TVCG.2021.3114840
A growing number of longitudinal cohort studies are generating data with extensive patient observations across multiple timepoints. Such data offers promising opportunities to better understand the progression of diseases. However, these observations are usually treated as general events in existing visual analysis tools. As a result, their capabilities in modeling disease progression are not fully utilized. To fill this gap, we designed and implemented ThreadStates, an interactive visual analytics tool for the exploration of longitudinal patient cohort data. The focus of ThreadStates is to identify the states of disease progression by learning from observation data in a human-in-the-loop manner. We propose a novel Glyph Matrix design and combine it with a scatter plot to enable seamless identification, observation, and refinement of states. The disease progression patterns are then revealed in terms of state transitions using Sankey-based visualizations. We employ sequence clustering techniques to find patient groups with distinctive progression patterns, and to reveal the association between disease progression and patient-level features. The design and development were driven by a requirement analysis and iteratively refined based on feedback from domain experts over the course of a 10-month design study. Case studies and expert interviews demonstrate that ThreadStates can successfully summarize disease states, reveal disease progression, and compare patient groups.
false
false
[ "Qianwen Wang", "Tali Mazor", "Theresa Harbig", "Ethan Cerami", "Nils Gehlenborg" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "https://osf.io/vcskm", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/BuripTHkSKk", "icon": "video" } ]
Vis
2,021
TIVEE: Visual Exploration and Explanation of Badminton Tactics in Immersive Visualizations
10.1109/TVCG.2021.3114861
Tactic analysis is a major issue in badminton, as the effective use of tactics is the key to winning. A tactic in badminton is defined as a sequence of consecutive strokes. Most existing methods use statistical models to find sequential patterns of strokes and apply 2D visualizations such as glyphs and statistical charts to explore and analyze the discovered patterns. However, in badminton, spatial information like the shuttle trajectory, which is inherently 3D, is the core of a tactic. The lack of sufficient spatial awareness in 2D visualizations has largely limited tactic analysis in badminton. In this work, we collaborate with domain experts to study the tactic analysis of badminton in a 3D environment and propose an immersive visual analytics system, TIVEE, to assist users in exploring and explaining badminton tactics at multiple levels. Users can first explore various tactics from the third-person perspective using an unfolded visual presentation of stroke sequences. By selecting a tactic of interest, users can turn to the first-person perspective to perceive the detailed kinematic characteristics and explain its effects on the game result. The effectiveness and usefulness of TIVEE are demonstrated by case studies and an expert interview.
false
false
[ "Xiangtong Chu", "Xiao Xie", "Shuainan Ye", "Haolin Lu", "Hongguang Xiao", "Zeqing Yuan", "Zhutian Chen", "Hui Zhang 0051", "Yingcai Wu" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/9iJprmMHhfc", "icon": "video" } ]
Vis
2,021
Towards replacing physical testing of granular materials with a Topology-based Model
10.1109/TVCG.2021.3114819
In the study of packed granular materials, the performance of a sample (e.g., the detonation of a high-energy explosive) often correlates to measurements of a fluid flowing through it. The “effective surface area,” the surface area accessible to the airflow, is typically measured using a permeametry apparatus that relates the flow conductance to the permeable surface area via the Carman-Kozeny equation. This equation allows calculating the flow rate of a fluid flowing through the granules packed in the sample for a given pressure drop. However, Carman-Kozeny makes inherent assumptions about tunnel shapes and flow paths that may not accurately hold in situations where the particles possess a wide distribution in shapes, sizes, and aspect ratios, as is true with many powdered systems of technological and commercial interest. To address this challenge, we replicate these measurements virtually on micro-CT images of the powdered material, introducing a new Pore Network Model based on the skeleton of the Morse-Smale complex. Pores are identified as basins of the complex, their incidence encodes adjacency, and the conductivity of the capillary between them is computed from the cross-section at their interface. We build and solve a resistive network to compute an approximate laminar fluid flow through the pore structure. We provide two means of estimating flow-permeable surface area: (i) by direct computation of conductivity, and (ii) by identifying dead-ends in the flow coupled with isosurface extraction and the application of the Carman-Kozeny equation, with the aim of establishing consistency over a range of particle shapes, sizes, porosity levels, and void distribution patterns.
false
false
[ "Aniketh Venkat", "Attila Gyulassy", "Graham Kosiba", "Amitesh Maiti", "Henry Reinstein", "Richard Gee", "Peer-Timo Bremer", "Valerio Pascucci" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2109.08777v1", "icon": "paper" } ]
Vis
2,021
Towards Understanding Sensory Substitution for Accessible Visualization: An Interview Study
10.1109/TVCG.2021.3114829
For all its potential in supporting data analysis, particularly in exploratory situations, visualization also creates barriers: accessibility for blind and visually impaired individuals. Regardless of how effective a visualization is, providing equal access for blind users requires a paradigm shift for the visualization research community. To enact such a shift, it is not sufficient to treat visualization accessibility as merely another technical problem to overcome. Instead, supporting the millions of blind and visually impaired users around the world who have equally valid needs for data analysis as sighted individuals requires a respectful, equitable, and holistic approach that includes all users from the onset. In this paper, we draw on accessibility research methodologies to make inroads towards such an approach. We first identify the people who have specific insight into how blind people perceive the world: orientation and mobility (O&M) experts, who are instructors that teach blind individuals how to navigate the physical world using non-visual senses. We interview 10 O&M experts—all of them blind—to understand how best to use sensory substitution other than the visual sense for conveying spatial layouts. Finally, we investigate our qualitative findings using thematic analysis. While blind people in general tend to use both sound and touch to understand their surroundings, we focused on auditory affordances and how they can be used to make data visualizations accessible—using sonification and auralization. However, our experts recommended supporting a combination of senses—sound and touch—to make charts accessible as blind individuals may be more familiar with exploring tactile charts. We report results on both sound and touch affordances, and conclude by discussing implications for accessible visualization for blind individuals.
false
false
[ "Pramod Chundury", "Biswaksen Patnaik", "Yasmin Reyazuddin", "Christine Tang", "Jonathan Lazar", "Niklas Elmqvist" ]
[]
[]
[]
Vis
2,021
Towards Visual Explainable Active Learning for Zero-Shot Classification
10.1109/TVCG.2021.3114793
Zero-shot classification is a promising paradigm for solving classification problems in which the training classes and test classes are disjoint. Achieving this usually requires experts to externalize their domain knowledge by manually specifying a class-attribute matrix to define which classes have which attributes. Designing a suitable class-attribute matrix is the key to the subsequent procedure, but this design process is tedious and proceeds by trial and error with no guidance. This paper proposes a visual explainable active learning approach, with its design and implementation called semantic navigator, to solve the above problems. This approach promotes human-AI teaming with four actions (ask, explain, recommend, respond) in each interaction loop. The machine asks contrastive questions to guide humans through the process of thinking about attributes. A novel visualization called semantic map explains the current status of the machine. Analysts can therefore better understand why the machine misclassifies objects. Moreover, the machine recommends the labels of classes for each attribute to ease the labeling burden. Finally, humans can steer the model by modifying the labels interactively, and the machine adjusts its recommendations. The visual explainable active learning approach improves humans' efficiency in building zero-shot classification models interactively, compared with a method without guidance. We justify our results with user studies using the standard benchmarks for zero-shot classification.
false
false
[ "Shichao Jia", "Zeyu Li 0003", "Nuo Chen", "Jiawan Zhang" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2108.06730v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/uDq00Plsct4", "icon": "video" } ]
Vis
2,021
Understanding Data Visualization Design Practice
10.1109/TVCG.2021.3114959
Professional roles for data visualization designers are growing in popularity, and interest in relationships between the academic research and professional practice communities is gaining traction. However, despite the potential for knowledge sharing between these communities, we have little understanding of the ways in which practitioners design in real-world, professional settings. Inquiry in numerous design disciplines indicates that practitioners approach complex situations in ways that are fundamentally different from those of researchers. In this work, I take a practice-led approach to understanding visualization design practice on its own terms. Twenty data visualization practitioners were interviewed and asked about their design process, including the steps they take, how they make decisions, and the methods they use. Findings suggest that practitioners do not follow highly systematic processes, but instead rely on situated forms of knowing and acting in which they draw from precedent and use methods and principles that are determined appropriate in the moment. These findings have implications for how visualization researchers understand and engage with practitioners, and how educators approach the training of future data visualization designers.
false
false
[ "Paul Parsons" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2108.07855v1", "icon": "paper" } ]
Vis
2,021
Untidy Data: The Unreasonable Effectiveness of Tables
10.1109/TVCG.2021.3114830
Working with data in table form is usually considered a preparatory and tedious step in the sensemaking pipeline; a way of getting the data ready for more sophisticated visualization and analytical tools. But for many people, spreadsheets — the quintessential table tool — remain a critical part of their information ecosystem, allowing them to interact with their data in ways that are hidden or abstracted in more complex tools. This is particularly true for data workers [61], people who work with data as part of their job but do not identify as professional analysts or data scientists. We report on a qualitative study of how these workers interact with and reason about their data. Our findings show that data tables serve a broader purpose beyond data cleanup at the initial stage of a linear analytic flow: users want to see and “get their hands on” the underlying data throughout the analytics process, reshaping and augmenting it to support sensemaking. They reorganize, mark up, layer on levels of detail, and spawn alternatives within the context of the base data. These direct interactions and human-readable table representations form a rich and cognitively important part of building understanding of what the data mean and what they can do with it. We argue that interactive tables are an important visualization idiom in their own right; that the direct data interaction they afford offers a fertile design space for visual analytics; and that sense making can be enriched by more flexible human-data interaction than is currently supported in visual analytics tools.
false
false
[ "Lyn Bartram", "Michael Correll", "Melanie Tory" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2106.15005v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/XZl6DopUlDo", "icon": "video" } ]
Vis
2,021
VBridge: Connecting the Dots Between Features and Data to Explain Healthcare Models
10.1109/TVCG.2021.3114836
Machine learning (ML) is increasingly applied to Electronic Health Records (EHRs) to solve clinical prediction tasks. Although many ML models perform promisingly, issues with model transparency and interpretability limit their adoption in clinical practice. Directly using existing explainable ML techniques in clinical settings can be challenging. Through literature surveys and collaborations with six clinicians with an average of 17 years of clinical experience, we identified three key challenges, including clinicians' unfamiliarity with ML features, lack of contextual information, and the need for cohort-level evidence. Following an iterative design process, we further designed and developed VBridge, a visual analytics tool that seamlessly incorporates ML explanations into clinicians' decision-making workflow. The system includes a novel hierarchical display of contribution-based feature explanations and enriched interactions that connect the dots between ML features, explanations, and data. We demonstrated the effectiveness of VBridge through two case studies and expert interviews with four clinicians, showing that visually associating model explanations with patients' situational records can help clinicians better interpret and use model predictions when making clinical decisions. We further derived a list of design implications for developing future explainable ML tools to support clinical decision-making.
false
false
[ "Furui Cheng", "Dongyu Liu", "Fan Du", "Yanna Lin", "Alexandra Zytek", "Haomin Li", "Huamin Qu", "Kalyan Veeramachaneni" ]
[ "HM" ]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2108.02550v2", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/PnAxWRLKgFY", "icon": "video" } ]
Vis
2,021
VideoModerator: A Risk-aware Framework for Multimodal Video Moderation in E-Commerce
10.1109/TVCG.2021.3114781
Video moderation, which refers to removing deviant or explicit content from e-commerce livestreams, has become prevalent owing to the social and engaging features of such streams. However, this task is tedious and time-consuming due to the difficulties associated with watching and reviewing multimodal video content, including video frames and audio clips. To ensure effective video moderation, we propose VideoModerator, a risk-aware framework that seamlessly integrates human knowledge with machine insights. This framework incorporates a set of advanced machine learning models to extract risk-aware features from multimodal video content and discover potentially deviant videos. Moreover, this framework introduces an interactive visualization interface with three views, namely, a video view, a frame view, and an audio view. In the video view, we adopt a segmented timeline and highlight high-risk periods that may contain deviant information. In the frame view, we present a novel visual summarization method that combines risk-aware features and video context to enable quick video navigation. In the audio view, we employ a storyline-based design to provide a multi-faceted overview that can be used to explore audio content. Furthermore, we report the usage of VideoModerator through a case scenario and conduct experiments and a controlled user study to validate its effectiveness.
false
false
[ "Tan Tang", "Yanhong Wu", "Yingcai Wu", "Lingyun Yu 0001", "Yuhong Li" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2109.03479v1", "icon": "paper" } ]
Vis
2,021
VisQA: X-raying Vision and Language Reasoning in Transformers
10.1109/TVCG.2021.3114683
Visual Question Answering systems aim to answer open-ended textual questions given input images. They are a testbed for learning high-level reasoning with a primary use in HCI, for instance, assistance for the visually impaired. Recent research has shown that state-of-the-art models tend to produce answers by exploiting biases and shortcuts in the training data rather than performing the required reasoning steps, and sometimes do not even look at the input image. We present VisQA, a visual analytics tool that explores this question of reasoning vs. bias exploitation. It exposes the key element of state-of-the-art neural models — attention maps in transformers. Our working hypothesis is that reasoning steps leading to model predictions are observable from attention distributions, which are particularly useful for visualization. The design process of VisQA was motivated by well-known bias examples from the fields of deep learning and vision-language reasoning and evaluated in two ways. First, as a result of a collaboration of three fields, machine learning, vision and language reasoning, and data analytics, the work led to a better understanding of bias exploitation by neural models for VQA, which eventually had an impact on model design and training through a proposed method for transferring reasoning patterns from an oracle model. Second, we also report on the design of VisQA, and a goal-oriented evaluation of VisQA targeting the analysis of a model's decision process by multiple experts, providing evidence that it makes the inner workings of models accessible to users.
false
false
[ "Theo Jaunet", "Corentin Kervadec", "Romain Vuillemot", "Grigory Antipov", "Moez Baccouche", "Christian Wolf 0001" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2104.00926v2", "icon": "paper" } ]
Vis
2,021
Visual Analysis of Hyperproperties for Understanding Model Checking Results
10.1109/TVCG.2021.3114866
Model checkers provide algorithms for proving that a mathematical model of a system satisfies a given specification. In case of a violation, a counterexample that shows the erroneous behavior is returned. Understanding these counterexamples is challenging, especially for hyperproperty specifications, i.e., specifications that relate multiple executions of a system to each other. We aim to facilitate the visual analysis of such counterexamples through our HyperVis tool, which provides interactive visualizations of the given model, specification, and counterexample. Within an iterative and interdisciplinary design process, we developed visualization solutions that can effectively communicate the core aspects of the model checking result. Specifically, we introduce graphical representations of binary values for improving pattern recognition, color encoding for better indicating related aspects, visually enhanced textual descriptions, as well as extensive cross-view highlighting mechanisms. Further, through an underlying causal analysis of the counterexample, we are also able to identify values that contributed to the violation and use this knowledge for both improved encoding and highlighting. Finally, the analyst can modify both the specification of the hyperproperty and the system directly within HyperVis and initiate the model checking of the new version. In combination, these features notably support the analyst in understanding the error leading to the counterexample as well as iterating the provided system and specification. We ran multiple case studies with HyperVis and tested it with domain experts in qualitative feedback sessions. The participants' positive feedback confirms the considerable improvement over the manual, text-based status quo and the value of the tool for explaining hyperproperties.
false
false
[ "Tom Horak", "Norine Coenen", "Niklas Metzger 0001", "Christopher Hahn", "Tamara Flemisch", "Julián Méndez 0001", "Dennis Dimov", "Bernd Finkbeiner", "Raimund Dachselt" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2108.03698v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/Si3Hw35NWXg", "icon": "video" } ]
Vis
2,021
Visual Arrangements of Bar Charts Influence Comparisons in Viewer Takeaways
10.1109/TVCG.2021.3114823
Well-designed data visualizations can lead to more powerful and intuitive processing by a viewer. To help a viewer intuitively compare values to quickly generate key takeaways, visualization designers can manipulate how data values are arranged in a chart to afford particular comparisons. Using simple bar charts as a case study, we empirically tested the comparison affordances of four common arrangements: vertically juxtaposed, horizontally juxtaposed, overlaid, and stacked. We asked participants to type out what patterns they perceived in a chart and we coded their takeaways into types of comparisons. In a second study, we asked data visualization design experts to predict which arrangement they would use to afford each type of comparison and found both alignments and mismatches with our findings. These results provide concrete guidelines for how both human designers and automatic chart recommendation systems can make visualizations that help viewers extract the “right” takeaway.
false
false
[ "Cindy Xiong", "Vidya Setlur", "Benjamin Bach", "Eunyee Koh", "Kylie Lin", "Steven Franconeri" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2108.06370v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/uQWXEvr8Qlc", "icon": "video" } ]
Vis
2,021
Visual Evaluation for Autonomous Driving
10.1109/TVCG.2021.3114777
Autonomous driving technologies often use state-of-the-art artificial intelligence algorithms to understand the relationship between the vehicle and the external environment, to predict the changes of the environment, and then to plan and control the behaviors of the vehicle accordingly. The complexity of such technologies makes it challenging to evaluate the performance of autonomous driving systems and to find ways to improve them. The current approaches to evaluating such autonomous driving systems largely use a single score to indicate the overall performance of a system, but domain experts have difficulties in understanding how individual components or algorithms in an autonomous driving system may contribute to the score. To address this problem, we collaborate with domain experts on autonomous driving algorithms, and propose a visual evaluation method for autonomous driving. Our method considers the data generated in all components during the whole process of autonomous driving, including perception results, planning routes, prediction of obstacles, various controlling parameters, and evaluation of comfort. We develop a visual analytics workflow to integrate an evaluation mathematical model with adjustable parameters, support the evaluation of the system from the level of the overall performance to the level of detailed measures of individual components, and to show both evaluation scores and their contributing factors. Our implemented visual analytics system provides an overview evaluation score at the beginning and shows the animation of the dynamic change of the scores at each period. Experts can interactively explore the specific component at different time periods and identify related factors. With our method, domain experts not only learn about the performance of an autonomous driving system, but also identify and access the problematic parts of each component. Our visual evaluation system can be applied to the autonomous driving simulation system and used for various evaluation cases. The results of using our system in some simulation cases and the feedback from involved domain experts confirm the usefulness and efficiency of our method in helping people gain in-depth insight into autonomous driving systems.
false
false
[ "Yijie Hou", "Chengshun Wang", "Junhong Wang", "Xiangyang Xue", "Xiaolong Zhang 0001", "Jun Zhu", "Dongliang Wang", "Siming Chen 0001" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/r2DuL5JMX_0", "icon": "video" } ]
Vis
2,021
Visualization Equilibrium
10.1109/TVCG.2021.3114842
In many real-world strategic settings, people use information displays to make decisions. In these settings, an information provider chooses which information to provide to strategic agents and how to present it, and agents formulate a best response based on the information and their anticipation of how others will behave. We contribute the results of a controlled online experiment to examine how the provision and presentation of information impacts people's decisions in a congestion game. Our experiment compares how different visualization approaches for displaying this information, including bar charts and hypothetical outcome plots, and different information conditions, including where the visualized information is private versus public (i.e., available to all agents), affect decision making and welfare. We characterize the effects of visualization anticipation, referring to changes to behavior when an agent goes from alone having access to a visualization to knowing that others also have access to the visualization to guide their decisions. We also empirically identify the visualization equilibrium, i.e., the visualization for which the visualized outcome of agents' decisions matches the realized decisions of the agents who view it. We reflect on the implications of visualization equilibria and visualization anticipation for designing information displays for real-world strategic settings.
false
false
[ "Paula Kayongo", "Glenn Sun", "Jason D. Hartline", "Jessica Hullman" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2108.04953v1", "icon": "paper" } ]
Vis
2,021
Visualizing Uncertainty in Probabilistic Graphs with Network Hypothetical Outcome Plots (NetHOPs)
10.1109/TVCG.2021.3114679
Probabilistic graphs are challenging to visualize using the traditional node-link diagram. Encoding edge probability using visual variables like width or fuzziness makes it difficult for users of static network visualizations to estimate network statistics like densities, isolates, path lengths, or clustering under uncertainty. We introduce Network Hypothetical Outcome Plots (NetHOPs), a visualization technique that animates a sequence of network realizations sampled from a network distribution defined by probabilistic edges. NetHOPs employ an aggregation and anchoring algorithm used in dynamic and longitudinal graph drawing to parameterize layout stability for uncertainty estimation. We present a community matching algorithm to enable visualizing the uncertainty of cluster membership and community occurrence. We describe the results of a study in which 51 network experts used NetHOPs to complete a set of common visual analysis tasks and reported how they perceived network structures and properties subject to uncertainty. Participants' estimates fell, on average, within 11% of the ground truth statistics, suggesting NetHOPs can be a reasonable approach for enabling network analysts to reason about multiple properties under uncertainty. Participants appeared to articulate the distribution of network statistics slightly more accurately when they could manipulate the layout anchoring and the animation speed. Based on these findings, we synthesize design recommendations for developing and using animated visualizations for probabilistic networks.
false
false
[ "Dongping Zhang", "Eytan Adar", "Jessica Hullman" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2108.09870v1", "icon": "paper" } ]
Vis
2,021
VITALITY: Promoting Serendipitous Discovery of Academic Literature with Transformers & Visual Analytics
10.1109/TVCG.2021.3114820
There are a few prominent practices for conducting reviews of academic literature, including searching for specific keywords on Google Scholar or checking citations from some initial seed paper(s). These approaches serve a critical purpose for academic literature reviews, yet there remain challenges in identifying relevant literature when similar work may utilize different terminology (e.g., mixed-initiative visual analytics papers may not use the same terminology as papers on model-steering, yet the two topics are relevant to one another). In this paper, we introduce a system, VITALITY, intended to complement existing practices. In particular, VITALITY promotes serendipitous discovery of relevant literature using transformer language models, allowing users to find semantically similar papers in a word embedding space given (1) a list of input paper(s) or (2) a working abstract. VITALITY visualizes this document-level embedding space in an interactive 2-D scatterplot using dimension reduction. VITALITY also summarizes meta information about the document corpus or search query, including keywords and co-authors, and allows users to save and export papers for use in a literature review. We present qualitative findings from an evaluation of VITALITY, suggesting it can be a promising complementary technique for conducting academic literature reviews. Furthermore, we contribute data from 38 popular data visualization publication venues in VITALITY, and we provide scrapers for the open-source community to continue to grow the list of supported venues.
false
false
[ "Arpit Narechania", "Alireza Karduni", "Ryan Wesslen", "Emily Wall" ]
[]
[]
[]
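The semantic similarity underlying VITALITY's serendipitous discovery can be approximated with any transformer sentence encoder: embed abstracts, then rank them by cosine similarity to a query abstract. The library and model name below are common open-source choices assumed for illustration, not necessarily the ones used in the paper.

```python
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

abstracts = {
    "paper_a": "A visual analytics system for steering topic models interactively.",
    "paper_b": "Mixed-initiative labeling of documents with human feedback loops.",
    "paper_c": "Rendering large terrain datasets with level-of-detail meshes.",
}
query = "Interactive machine learning where analysts guide model updates."

model = SentenceTransformer("all-MiniLM-L6-v2")   # small general-purpose encoder
emb = model.encode(list(abstracts.values()))      # document-level embeddings
q_emb = model.encode([query])

scores = cosine_similarity(q_emb, emb)[0]
ranked = sorted(zip(abstracts, scores), key=lambda t: -t[1])
for paper, score in ranked:
    print(f"{paper}: {score:.3f}")
```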
Vis
2,021
VizLinter: A Linter and Fixer Framework for Data Visualization
10.1109/TVCG.2021.3114804
Despite the rising popularity of automated visualization tools, existing systems tend to provide direct results which do not always fit the input data or meet visualization requirements. Therefore, additional specification adjustments are still required in real-world use cases. However, manual adjustments are difficult since most users do not necessarily possess adequate skills or visualization knowledge. Even experienced users might create imperfect visualizations that involve chart construction errors. We present a framework, VizLinter, to help users detect flaws and rectify already-built but defective visualizations. The framework consists of two components, (1) a visualization linter, which applies well-recognized principles to inspect the legitimacy of rendered visualizations, and (2) a visualization fixer, which automatically corrects the detected violations according to the linter. We implement the framework into an online editor prototype based on Vega-Lite specifications. To further evaluate the system, we conduct an in-lab user study. The results prove its effectiveness and efficiency in identifying and fixing errors for data visualizations.
false
false
[ "Qing Chen 0001", "Fuling Sun", "Xinyue Xu", "Zui Chen", "Jiazhe Wang", "Nan Cao" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2108.10299v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/GGLacuuuUdo", "icon": "video" } ]
Vis
2,021
VizSnippets: Compressing Visualization Bundles Into Representative Previews for Browsing Visualization Collections
10.1109/TVCG.2021.3114841
Visualization collections, accessed by platforms such as Tableau Online or Power BI, are used by millions of people to share and access diverse analytical knowledge in the form of interactive visualization bundles. Result snippets, compact previews of these bundles, are presented to users to help them identify relevant content when browsing collections. Our engagement with Tableau product teams and review of existing snippet designs on five platforms showed us that current practices fail to help people judge the relevance of bundles because they include only the title and one image. Users frequently need to undertake the time-consuming endeavour of opening a bundle within its visualization system to examine its many views and dashboards. In response, we contribute the first systematic approach to visualization snippet design. We propose a framework for snippet design that addresses eight key challenges that we identify. We present a computational pipeline to compress the visual and textual content of bundles into representative previews that is adaptive to a provided pixel budget and provides high information density with multiple images and carefully chosen keywords. We also reflect on the method of visual inspection through random sampling to gain confidence in model and parameter choices.
false
false
[ "Michael Oppermann", "Tamara Munzner" ]
[ "HM" ]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/3Wrf2_kXLEg", "icon": "video" } ]
Vis
2,021
Wasserstein Distances, Geodesics and Barycenters of Merge Trees
10.1109/TVCG.2021.3114839
This paper presents a unified computational framework for the estimation of distances, geodesics and barycenters of merge trees. We extend recent work on the edit distance [104] and introduce a new metric, called the Wasserstein distance between merge trees, which is purposely designed to enable efficient computations of geodesics and barycenters. Specifically, our new distance is strictly equivalent to the $L^2$-Wasserstein distance between extremum persistence diagrams, but it is restricted to a smaller solution space, namely, the space of rooted partial isomorphisms between branch decomposition trees. This enables a simple extension of existing optimization frameworks [110] for geodesics and barycenters from persistence diagrams to merge trees. We introduce a task-based algorithm which can be generically applied to distance, geodesic, barycenter or cluster computation. The task-based nature of our approach enables further accelerations with shared-memory parallelism. Extensive experiments on public ensembles and SciVis contest benchmarks demonstrate the efficiency of our approach - with barycenter computations on the order of minutes for the largest examples - as well as its qualitative ability to generate representative barycenter merge trees, visually summarizing the features of interest found in the ensemble. We show the utility of our contributions with dedicated visualization applications: feature tracking, temporal reduction and ensemble clustering. We provide a lightweight C++ implementation that can be used to reproduce our results.
false
false
[ "Mathieu Pont", "Jules Vidal", "Julie Delon", "Julien Tierny" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2107.07789v2", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/ifVvH0uMW6k", "icon": "video" } ]
Vis
2,021
What's the Situation with Situated Visualization? A Survey and Perspectives on Situatedness
10.1109/TVCG.2021.3114835
Situated visualization is an emerging concept within visualization, in which data is visualized in situ, where it is relevant to people. The concept has gained interest from multiple research communities, including visualization, human-computer interaction (HCI) and augmented reality. This has led to a range of explorations and applications of the concept; however, this early work has focused on the operational aspect of situatedness, leading to inconsistent adoption of the concept and terminology. First, we contribute a literature survey in which we analyze 44 papers that explicitly use the term “situated visualization” to provide an overview of the research area, how it defines situated visualization, common application areas and technology used, as well as type of data and type of visualizations. Our survey shows that research on situated visualization has focused on technology-centric approaches that foreground a spatial understanding of situatedness. Second, we contribute five perspectives on situatedness (space, time, place, activity, and community) that together expand on the prevalent notion of situatedness in the corpus. We draw from six case studies and prior theoretical developments in HCI. Each perspective develops a generative way of looking at and working with situatedness in design and research. We outline future directions, including considering technology, material and aesthetics, leveraging the perspectives for design, and methods for stronger engagement with target audiences. We conclude with opportunities to consolidate situated visualization research.
false
false
[ "Nathalie Bressa", "Henrik Korsgaard", "Aurélien Tabard", "Steven Houben", "Jo Vermeulen" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/fzHGkDlVKdc", "icon": "video" } ]
Vis
2,021
Where Can We Help? A Visual Analytics Approach to Diagnosing and Improving Semantic Segmentation of Movable Objects
10.1109/TVCG.2021.3114855
Semantic segmentation is a critical component in autonomous driving and has to be thoroughly evaluated due to safety concerns. Deep neural network (DNN) based semantic segmentation models are widely used in autonomous driving. However, it is challenging to evaluate DNN-based models due to their black-box-like nature, and it is even more difficult to assess model performance for crucial objects, such as lost cargos and pedestrians, in autonomous driving applications. In this work, we propose VASS, a Visual Analytics approach to diagnosing and improving the accuracy and robustness of Semantic Segmentation models, especially for critical objects moving in various driving scenes. The key component of our approach is a context-aware spatial representation learning that extracts important spatial information of objects, such as position, size, and aspect ratio, with respect to given scene contexts. Based on this spatial representation, we first use it to create visual summarization to analyze models' performance. We then use it to guide the generation of adversarial examples to evaluate models' spatial robustness and obtain actionable insights. We demonstrate the effectiveness of VASS via two case studies of lost cargo detection and pedestrian detection in autonomous driving. For both cases, we show quantitative evaluation on the improvement of models' performance with actionable insights obtained from VASS.
false
false
[ "Wenbin He", "Lincan Zou", "Arvind Kumar Shekar", "Liang Gou", "Ren Liu" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/8vEHnlzLMes", "icon": "video" } ]
EuroVis
2,021
A Deeper Understanding of Visualization-Text Interplay in Geographic Data-driven Stories
10.1111/cgf.14309
Data‐driven stories comprise visualizations and a textual narrative. The two representations coexist and complement each other. Although existing research has explored the design strategies and structure of such stories, it remains an open research question how the two representations play together on a detailed level and how they are linked with each other. In this paper, we aim at understanding the fine‐grained interplay of text and visualizations in geographic data‐driven stories. We focus on geographic content as it often includes complex spatiotemporal data presented as versatile visualizations and rich textual descriptions. We conduct a qualitative empirical study on 22 stories collected from a variety of news media outlets; 10 of the stories report on the COVID‐19 pandemic, while the others cover diverse topics. We investigate the role of every sentence and visualization within the narrative to reveal how they reference each other and interact. Moreover, we explore the positioning and sequence of various parts of the narrative to find patterns that further consolidate the stories. Drawing from the findings, we discuss study implications with respect to best practices and possibilities for automating report generation.
false
false
[ "Shahid Latif", "Siming Chen 0001", "Fabian Beck 0001" ]
[]
[]
[]
EuroVis
2,021
A novel approach for exploring annotated data with interactive lenses
10.1111/cgf.14315
We introduce a novel approach for assisting users in exploring 2D data representations with an interactive lens. Focus‐and‐context exploration is supported by translating user actions to the joint adjustments in camera and lens parameters that ensure a good placement and sizing of the lens within the view. This general approach, implemented using standard device mappings, overcomes the limitations of current solutions, which force users to continuously switch from lens positioning and scaling to view panning and zooming. Navigation is further assisted by exploiting data annotations. In addition to traditional visual markups and information links, we associate to each annotation a lens configuration that highlights the region of interest. During interaction, an assisting controller determines the next best lens in the database based on the current view and lens parameters and the navigation history. Then, the controller interactively guides the user's lens towards the selected target and displays its annotation markup. As only one annotation markup is displayed at a time, clutter is reduced. Moreover, in addition to guidance, the navigation can also be automated to create a tour through the data. While our methods are generally applicable to general 2D visualization, we have implemented them for the exploration of stratigraphic relightable models. The capabilities of our approach are demonstrated in cultural heritage use cases. A user study has been performed in order to validate our approach.
false
false
[ "Fabio Bettio", "Moonisa Ahsan", "Fabio Marton", "Enrico Gobbetti" ]
[]
[]
[]
EuroVis
2,021
A Progressive Approach for Uncertainty Visualization in Diffusion Tensor Imaging
10.1111/cgf.14317
Diffusion Tensor Imaging (DTI) is a non‐invasive magnetic resonance imaging technique that, combined with fiber tracking algorithms, allows the characterization and visualization of white matter structures in the brain. The resulting fiber tracts are used, for example, in tumor surgery to evaluate the potential functional damage to the brain due to tumor resection. The DTI processing pipeline, from image acquisition to the final visualization, is rather complex and generates undesirable uncertainties in the final results. Most DTI visualization techniques do not provide any information regarding the presence of uncertainty. When planning surgery, a fixed safety margin around the fiber tracts is often used; however, such a margin cannot capture the local variability and distribution of the uncertainty, thereby limiting informed decision‐making. Stochastic techniques are one way to estimate uncertainty in the DTI pipeline. However, they have high computational and memory requirements that make them infeasible in a clinical setting, and the delay in visualizing the results further hinders the workflow. We propose a progressive approach that relies on a combination of wild‐bootstrapping and fiber tracking to be used within the progressive visual analytics paradigm. We present a local bootstrapping strategy, which reduces the computational and memory costs and provides fiber‐tracking results in a progressive manner. We have also implemented a progressive aggregation technique that computes the distances in the fiber ensemble during the progressive bootstrap computations. We present experiments with different scenarios to highlight the benefits of using our progressive visual analytics pipeline in a clinical workflow, along with a use case and an analysis informed by discussions with our collaborators.
false
false
[ "Faizan Siddiqui", "Thomas Höllt", "Anna Vilanova" ]
[]
[]
[]
EuroVis
2,021
A Survey of Human-Centered Evaluations in Human-Centered Machine Learning
10.1111/cgf.14329
Visual analytics systems integrate interactive visualizations and machine learning to enable expert users to solve complex analysis tasks. Applications combine techniques from various fields of research and are consequently not trivial to evaluate. The result is a lack of structure and comparability between evaluations. In this survey, we provide a comprehensive overview of evaluations in the field of human‐centered machine learning. We particularly focus on human‐related factors that influence trust, interpretability, and explainability. We analyze the evaluations presented in papers from top conferences and journals in information visualization and human‐computer interaction to provide a systematic review of their setup and findings. From this survey, we distill design dimensions for structured evaluations, identify evaluation gaps, and derive future research opportunities.
false
false
[ "Fabian Sperrle", "Mennatallah El-Assady", "Grace Guo", "Rita Borgo", "Duen Horng Chau", "Alex Endert", "Daniel A. Keim" ]
[]
[]
[]
EuroVis
2,021
A Visual Designer of Layer-wise Relevance Propagation Models
10.1111/cgf.14302
Layer‐wise Relevance Propagation (LRP) is an emerging and widely‐used method for interpreting the prediction results of convolutional neural networks (CNN). LRP developers often select and employ different relevance backpropagation rules and parameters to compute relevance scores on input images. However, there is no obvious way to define a “best” LRP model; a satisfactory model depends heavily on the images at hand and the designers' goals. We develop a visual model designer, named VisLRPDesigner, to overcome the challenges in the design and use of LRP models. Various LRP rules are unified into an integrated framework with an intuitive workflow for parameter setup. VisLRPDesigner thus allows users to interactively configure and compare LRP models. It also facilitates relevance‐based visual analysis with two important functions: relevance‐based pixel flipping and neuron ablation. Several use cases illustrate the benefits of VisLRPDesigner. The usability and limitations of the visual designer are evaluated by LRP users.
false
false
[ "Xinyi Huang", "Suphanut Jamonnak", "Ye Zhao 0003", "Tsung Heng Wu", "Wei Xu 0020" ]
[]
[]
[]
EuroVis
2,021
Accessible Visualization: Design Space, Opportunities, and Challenges
10.1111/cgf.14298
Visualizations are now widely used across disciplines to understand and communicate data. The benefit of visualizations lies in leveraging our natural visual perception. However, the sole dependency on vision can produce unintended discrimination against people with visual impairments. While the visualization field has seen enormous growth in recent years, supporting people with disabilities is much less explored. In this work, we examine approaches to support this marginalized user group, focusing on visual disabilities. We collected and analyzed papers published over the last 20 years on visualization accessibility. We mapped a design space for accessible visualization that includes seven dimensions: user group, literacy task, chart type, interaction, information granularity, sensory modality, and assistive technology. We described the current knowledge gap in light of the latest advances in visualization and presented a preliminary accessibility model by synthesizing findings from existing research. Finally, we reflected on the dimensions and discussed opportunities and challenges for future research.
false
false
[ "Nam Wook Kim", "Shakila Cherise Joyner", "Amalia Riegelhuth", "Yea-Seul Kim" ]
[]
[]
[]
EuroVis
2,021
Animated Presentation of Static Infographics with InfoMotion
10.1111/cgf.14325
By displaying visual elements logically in temporal order, animated infographics can help readers better understand the layers of information expressed in an infographic. While many techniques and tools target the quick generation of static infographics, few support animation designs. We propose InfoMotion, which automatically generates animated presentations of static infographics. We first conduct a survey to explore the design space of animated infographics. Informed by this survey, InfoMotion extracts the graphical properties of an infographic to analyze its underlying information structure; animation effects are then applied to the visual elements in temporal order to present the infographic. The generated animations can be used in data videos or presentations. We demonstrate the utility of InfoMotion with two example applications, including mixed‐initiative animation authoring and animation recommendation. To further understand the quality of the generated animations, we conduct a user study to gather subjective feedback on the animations generated by InfoMotion.
false
false
[ "Yun Wang 0012", "Yi Gao", "Ray Huang", "Weiwei Cui", "Haidong Zhang", "Dongmei Zhang 0001" ]
[]
[]
[]