Conference | Year | Title | DOI | Abstract | Accessible | Early | AuthorNames-Deduped | Award | Resources | ResourceLinks |
---|---|---|---|---|---|---|---|---|---|---|
VAST | 2020 | A Visual Analytics Framework for Contrastive Network Analysis | 10.1109/VAST50239.2020.00010 | A common network analysis task is comparison of two networks to identify unique characteristics in one network with respect to the other. For example, when comparing protein interaction networks derived from normal and cancer tissues, one essential task is to discover protein-protein interactions unique to cancer tissues. However, this task is challenging when the networks contain complex structural (and semantic) relations. To address this problem, we design ContraNA, a visual analytics framework leveraging both the power of machine learning for uncovering unique characteristics in networks and also the effectiveness of visualization for understanding such uniqueness. The basis of ContraNA is cNRL, which integrates two machine learning schemes, network representation learning (NRL) and contrastive learning (CL), to generate a low-dimensional embedding that reveals the uniqueness of one network when compared to another. ContraNA provides an interactive visualization interface to help analyze the uniqueness by relating embedding results and network structures as well as explaining the learned features by cNRL. We demonstrate the usefulness of ContraNA with two case studies using real-world datasets. We also evaluate ContraNA through a controlled user study with 12 participants on network comparison tasks. The results show that participants were able to both effectively identify unique characteristics from complex networks and interpret the results obtained from cNRL. | false | false | ["Takanori Fujiwara", "Jian Zhao 0010", "Francine Chen 0001", "Kwan-Liu Ma"] | [] | ["P"] | [{"name": "Paper Preprint", "url": "http://arxiv.org/pdf/2008.00151v2", "icon": "paper"}] |
VAST | 2020 | A Visual Analytics Framework for Explaining and Diagnosing Transfer Learning Processes | 10.1109/TVCG.2020.3028888 | Many statistical learning models hold an assumption that the training data and the future unlabeled data are drawn from the same distribution. However, this assumption is difficult to fulfill in real-world scenarios and creates barriers in reusing existing labels from similar application domains. Transfer Learning is intended to relax this assumption by modeling relationships between domains, and is often applied in deep learning applications to reduce the demand for labeled data and training time. Despite recent advances in exploring deep learning models with visual analytics tools, little work has explored the issue of explaining and diagnosing the knowledge transfer process between deep learning models. In this paper, we present a visual analytics framework for the multi-level exploration of the transfer learning processes when training deep neural networks. Our framework establishes a multi-aspect design to explain how the learned knowledge from the existing model is transferred into the new learning task when training deep neural networks. Based on a comprehensive requirement and task analysis, we employ descriptive visualization with performance measures and detailed inspections of model behaviors from the statistical, instance, feature, and model structure levels. We demonstrate our framework through two case studies on image classification by fine-tuning AlexNets to illustrate how analysts can utilize our framework. | false | false | ["Yuxin Ma", "Arlen Fan", "Jingrui He", "Arun Reddy Nelakurthi", "Ross Maciejewski"] | [] | ["P"] | [{"name": "Paper Preprint", "url": "http://arxiv.org/pdf/2009.06876v1", "icon": "paper"}] |
VAST | 2020 | A Visual Analytics Framework for Reviewing Multivariate Time-Series Data with Dimensionality Reduction | 10.1109/TVCG.2020.3028889 | Data-driven problem solving in many real-world applications involves analysis of time-dependent multivariate data, for which dimensionality reduction (DR) methods are often used to uncover the intrinsic structure and features of the data. However, DR is usually applied to a subset of data that is either single-time-point multivariate or univariate time-series, resulting in the need to manually examine and correlate the DR results out of different data subsets. When the number of dimensions is large either in terms of the number of time points or attributes, this manual task becomes too tedious and infeasible. In this paper, we present MulTiDR, a new DR framework that enables processing of time-dependent multivariate data as a whole to provide a comprehensive overview of the data. With the framework, we employ DR in two steps. When treating the instances, time points, and attributes of the data as a 3D array, the first DR step reduces the three axes of the array to two, and the second DR step visualizes the data in a lower-dimensional space. In addition, by coupling with a contrastive learning method and interactive visualizations, our framework enhances analysts' ability to interpret DR results. We demonstrate the effectiveness of our framework with four case studies using real-world datasets. | false | false | ["Takanori Fujiwara", "Shilpika", "Naohisa Sakamoto", "Jorji Nonaka", "Keiji Yamamoto", "Kwan-Liu Ma"] | [] | ["P"] | [{"name": "Paper Preprint", "url": "http://arxiv.org/pdf/2008.01645v3", "icon": "paper"}] |
VAST | 2020 | An Examination of Grouping and Spatial Organization Tasks for High-Dimensional Data Exploration | 10.1109/TVCG.2020.3028890 | How do analysts think about grouping and spatial operations? This overarching research question incorporates a number of points for investigation, including understanding how analysts begin to explore a dataset, the types of grouping/spatial structures created and the operations performed on them, the relationship between grouping and spatial structures, the decisions analysts make when exploring individual observations, and the role of external information. This work contributes the design and results of such a study, in which a group of participants is asked to organize the data contained within an unfamiliar quantitative dataset. We identify several overarching approaches taken by participants to design their organizational space, discuss the interactions performed by the participants, and propose design recommendations to improve the usability of future high-dimensional data exploration tools that make use of grouping (clustering) and spatial (dimension reduction) operations. | false | false | ["John E. Wenskovitch", "Chris North 0001"] | [] | ["P"] | [{"name": "Paper Preprint", "url": "http://arxiv.org/pdf/2008.09233v1", "icon": "paper"}] |
VAST | 2020 | Argus: Interactive a priori Power Analysis | 10.1109/TVCG.2020.3028894 | A key challenge HCI researchers face when designing a controlled experiment is choosing the appropriate number of participants, or sample size. A priori power analysis examines the relationships among multiple parameters, including the complexity associated with human participants, e.g., order and fatigue effects, to calculate the statistical power of a given experiment design. We created Argus, a tool that supports interactive exploration of statistical power: Researchers specify experiment design scenarios with varying confounds and effect sizes. Argus then simulates data and visualizes statistical power across these scenarios, which lets researchers interactively weigh various trade-offs and make informed decisions about sample size. We describe the design and implementation of Argus, a usage scenario designing a visualization experiment, and a think-aloud study. | false | false | ["Xiaoyi Wang", "Alexander Eiselmayer", "Wendy E. Mackay", "Kasper Hornbæk", "Chat Wacharamanotham"] | [] | ["P", "V"] | [{"name": "Paper Preprint", "url": "http://arxiv.org/pdf/2009.07564v1", "icon": "paper"}, {"name": "Fast Forward", "url": "https://youtu.be/gWoDjnGejGQ", "icon": "video"}] |
VAST | 2020 | Attention Flows: Analyzing and Comparing Attention Mechanisms in Language Models | 10.1109/TVCG.2020.3028976 | Advances in language modeling have led to the development of deep attention-based models that are performant across a wide variety of natural language processing (NLP) problems. These language models are typified by a pre-training process on large unlabeled text corpora and subsequently fine-tuned for specific tasks. Although considerable work has been devoted to understanding the attention mechanisms of pre-trained models, it is less understood how a model's attention mechanisms change when trained for a target NLP task. In this paper, we propose a visual analytics approach to understanding fine-tuning in attention-based language models. Our visualization, Attention Flows, is designed to support users in querying, tracing, and comparing attention within layers, across layers, and amongst attention heads in Transformer-based language models. To help users gain insight on how a classification decision is made, our design is centered on depicting classification-based attention at the deepest layer and how attention from prior layers flows throughout words in the input. Attention Flows supports the analysis of a single model, as well as the visual comparison between pre-trained and fine-tuned models via their similarities and differences. We use Attention Flows to study attention mechanisms in various sentence understanding tasks and highlight how attention evolves to address the nuances of solving these tasks. | false | false | ["Joseph F. DeRose", "Jiayao Wang", "Matthew Berger"] | [] | ["P", "V"] | [{"name": "Paper Preprint", "url": "http://arxiv.org/pdf/2009.07053v1", "icon": "paper"}, {"name": "Fast Forward", "url": "https://youtu.be/rz3BpEVpS7E", "icon": "video"}] |
VAST | 2020 | Auditing the Sensitivity of Graph-based Ranking with Visual Analytics | 10.1109/TVCG.2020.3028958 | Graph mining plays a pivotal role across a number of disciplines, and a variety of algorithms have been developed to answer who/what type questions. For example, what items shall we recommend to a given user on an e-commerce platform? The answers to such questions are typically returned in the form of a ranked list, and graph-based ranking methods are widely used in industrial information retrieval settings. However, these ranking algorithms have a variety of sensitivities, and even small changes in rank can lead to vast reductions in product sales and page hits. As such, there is a need for tools and methods that can help model developers and analysts explore the sensitivities of graph ranking algorithms with respect to perturbations within the graph structure. In this paper, we present a visual analytics framework for explaining and exploring the sensitivity of any graph-based ranking algorithm by performing perturbation-based what-if analysis. We demonstrate our framework through three case studies inspecting the sensitivity of two classic graph-based ranking algorithms (PageRank and HITS) as applied to rankings in political news media and social networks. | false | false | ["Tiankai Xie", "Yuxin Ma", "Hanghang Tong", "My T. Thai", "Ross Maciejewski"] | [] | ["P"] | [{"name": "Paper Preprint", "url": "http://arxiv.org/pdf/2009.07227v1", "icon": "paper"}] |
VAST | 2020 | Boba: Authoring and Visualizing Multiverse Analyses | 10.1109/TVCG.2020.3028985 | Multiverse analysis is an approach to data analysis in which all “reasonable” analytic decisions are evaluated in parallel and interpreted collectively, in order to foster robustness and transparency. However, specifying a multiverse is demanding because analysts must manage myriad variants from a cross-product of analytic decisions, and the results require nuanced interpretation. We contribute Boba: an integrated domain-specific language (DSL) and visual analysis system for authoring and reviewing multiverse analyses. With the Boba DSL, analysts write the shared portion of analysis code only once, alongside local variations defining alternative decisions, from which the compiler generates a multiplex of scripts representing all possible analysis paths. The Boba Visualizer provides linked views of model results and the multiverse decision space to enable rapid, systematic assessment of consequential decisions and robustness, including sampling uncertainty and model fit. We demonstrate Boba's utility through two data analysis case studies, and reflect on challenges and design opportunities for multiverse analysis software. | false | false | ["Yang Liu 0136", "Alex Kale", "Tim Althoff", "Jeffrey Heer"] | [] | ["P"] | [{"name": "Paper Preprint", "url": "http://arxiv.org/pdf/2007.05551v2", "icon": "paper"}] |
VAST | 2020 | CAVA: A Visual Analytics System for Exploratory Columnar Data Augmentation Using Knowledge Graphs | 10.1109/TVCG.2020.3030443 | Most visual analytics systems assume that all foraging for data happens before the analytics process; once analysis begins, the set of data attributes considered is fixed. Such separation of data construction from analysis precludes iteration that can enable foraging informed by the needs that arise in-situ during the analysis. The separation of the foraging loop from the data analysis tasks can limit the pace and scope of analysis. In this paper, we present CAVA, a system that integrates data curation and data augmentation with the traditional data exploration and analysis tasks, enabling information foraging in-situ during analysis. Identifying attributes to add to the dataset is difficult because it requires human knowledge to determine which available attributes will be helpful for the ensuing analytical tasks. CAVA crawls knowledge graphs to provide users with a broad set of attributes drawn from external data to choose from. Users can then specify complex operations on knowledge graphs to construct additional attributes. CAVA shows how visual analytics can help users forage for attributes by letting users visually explore the set of available data, and by serving as an interface for query construction. It also provides visualizations of the knowledge graph itself to help users understand complex joins such as multi-hop aggregations. We assess the ability of our system to enable users to perform complex data combinations without programming in a user study over two datasets. We then demonstrate the generalizability of CAVA through two additional usage scenarios. The results of the evaluation confirm that CAVA is effective in helping the user perform data foraging that leads to improved analysis outcomes, and offer evidence in support of integrating data augmentation as a part of the visual analytics pipeline. | false | false | ["Dylan Cashman", "Shenyu Xu", "Subhajit Das 0002", "Florian Heimerl", "Cong Liu", "Shah Rukh Humayoun", "Michael Gleicher", "Alex Endert", "Remco Chang"] | [] | ["P", "V"] | [{"name": "Paper Preprint", "url": "http://arxiv.org/pdf/2009.02865v1", "icon": "paper"}, {"name": "Fast Forward", "url": "https://youtu.be/mOidkJ_0_3U", "icon": "video"}] |
VAST | 2020 | CcNav: Understanding Compiler Optimizations in Binary Code | 10.1109/TVCG.2020.3030357 | Program developers spend significant time on optimizing and tuning programs. During this iterative process, they apply optimizations, analyze the resulting code, and modify the compilation until they are satisfied. Understanding what the compiler did with the code is crucial to this process but is very time-consuming and labor-intensive. Users need to navigate through thousands of lines of binary code and correlate it to source code concepts to understand the results of the compilation and to identify optimizations. We present a design study in collaboration with program developers and performance analysts. Our collaborators work with various artifacts related to the program such as binary code, source code, control flow graphs, and call graphs. Through interviews, feedback, and pair-analytics sessions, we analyzed their tasks and workflow. Based on this task analysis and through a human-centric design process, we designed a visual analytics system Compilation Navigator (CcNav) to aid exploration of the effects of compiler optimizations on the program. CcNav provides a streamlined workflow and a unified context that integrates disparate artifacts. CcNav supports consistent interactions across all the artifacts making it easy to correlate binary code with source code concepts. CcNav enables users to navigate and filter large binary code to identify and summarize optimizations such as inlining, vectorization, loop unrolling, and code hoisting. We evaluate CcNav through guided sessions and semi-structured interviews. We reflect on our design process, particularly the immersive elements, and on the transferability of design studies through our experience with a previous design study on program analysis. | false | false | ["Sabin Devkota", "Pascal Aschwanden", "Adam Kunen", "Matthew P. LeGendre", "Katherine E. Isaacs"] | [] | ["P", "V"] | [{"name": "Paper Preprint", "url": "http://arxiv.org/pdf/2009.00956v2", "icon": "paper"}, {"name": "Fast Forward", "url": "https://youtu.be/Y3yjiyfNf48", "icon": "video"}] |
VAST | 2020 | CNN Explainer: Learning Convolutional Neural Networks with Interactive Visualization | 10.1109/TVCG.2020.3030418 | Deep learning's great success motivates many practitioners and students to learn about this exciting technology. However, it is often challenging for beginners to take their first step due to the complexity of understanding and applying deep learning. We present CNN Explainer, an interactive visualization tool designed for non-experts to learn and examine convolutional neural networks (CNNs), a foundational deep learning model architecture. Our tool addresses key challenges that novices face while learning about CNNs, which we identify from interviews with instructors and a survey with past students. CNN Explainer tightly integrates a model overview that summarizes a CNN's structure, and on-demand, dynamic visual explanation views that help users understand the underlying components of CNNs. Through smooth transitions across levels of abstraction, our tool enables users to inspect the interplay between low-level mathematical operations and high-level model structures. A qualitative user study shows that CNN Explainer helps users more easily understand the inner workings of CNNs, and is engaging and enjoyable to use. We also derive design lessons from our study. Developed using modern web technologies, CNN Explainer runs locally in users' web browsers without the need for installation or specialized hardware, broadening the public's education access to modern deep learning techniques. | false | false | ["Zijie J. Wang", "Robert Turko", "Omar Shaikh", "Haekyu Park", "Nilaksh Das", "Fred Hohman", "Minsuk Kahng", "Polo Chau"] | [] | ["P", "V"] | [{"name": "Paper Preprint", "url": "http://arxiv.org/pdf/2004.15004v3", "icon": "paper"}, {"name": "Fast Forward", "url": "https://youtu.be/SlEvmkS4Rs4", "icon": "video"}] |
VAST | 2020 | CNNPruner: Pruning Convolutional Neural Networks with Visual Analytics | 10.1109/TVCG.2020.3030461 | Convolutional neural networks (CNNs) have demonstrated extraordinarily good performance in many computer vision tasks. The increasing size of CNN models, however, prevents them from being widely deployed to devices with limited computational resources, e.g., mobile/embedded devices. The emerging topic of model pruning strives to address this problem by removing less important neurons and fine-tuning the pruned networks to minimize the accuracy loss. Nevertheless, existing automated pruning solutions often rely on a numerical threshold of the pruning criteria, lacking the flexibility to optimally balance the trade-off between efficiency and accuracy. Moreover, the complicated interplay between the stages of neuron pruning and model fine-tuning makes this process opaque, and therefore becomes difficult to optimize. In this paper, we address these challenges through a visual analytics approach, named CNNPruner. It considers the importance of convolutional filters through both instability and sensitivity, and allows users to interactively create pruning plans according to a desired goal on model size or accuracy. Also, CNNPruner integrates state-of-the-art filter visualization techniques to help users understand the roles that different filters played and refine their pruning plans. Through comprehensive case studies on CNNs with real-world sizes, we validate the effectiveness of CNNPruner. | false | false | ["Guan Li", "Junpeng Wang", "Han-Wei Shen", "Kaixin Chen 0004", "Guihua Shan", "Zhonghua Lu"] | [] | ["P"] | [{"name": "Paper Preprint", "url": "http://arxiv.org/pdf/2009.09940v1", "icon": "paper"}] |
VAST | 2020 | Co-Bridges: Pair-wise Visual Connection and Comparison for Multi-item Data Streams | 10.1109/TVCG.2020.3030411 | In various domains, there are abundant streams or sequences of multi-item data of various kinds, e.g. streams of news and social media texts, sequences of genes and sports events, etc. Comparison is an important and general task in data analysis. For comparing data streams involving multiple items (e.g., words in texts, actors or action types in action sequences, visited places in itineraries, etc.), we propose Co-Bridges, a visual design involving connection and comparison techniques that reveal similarities and differences between two streams. Co-Bridges use river and bridge metaphors, where two sides of a river represent data streams, and bridges connect temporally or sequentially aligned segments of streams. Commonalities and differences between these segments in terms of involvement of various items are shown on the bridges. Interactive query tools support the selection of particular stream subsets for focused exploration. The visualization supports both qualitative (common and distinct items) and quantitative (stream volume, amount of item involvement) comparisons. We further propose Comparison-of-Comparisons, in which two or more Co-Bridges corresponding to different selections are juxtaposed. We test the applicability of the Co-Bridges in different domains, including social media text streams and sports event sequences. We perform an evaluation of the users' capability to understand and use Co-Bridges. The results confirm that Co-Bridges is effective for supporting pair-wise visual comparisons in a wide range of applications. | false | false | ["Siming Chen 0001", "Natalia V. Andrienko", "Gennady L. Andrienko", "Jie Li 0006", "Xiaoru Yuan"] | [] | [] | [] |
VAST | 2020 | Competing Models: Inferring Exploration Patterns and Information Relevance via Bayesian Model Selection | 10.1109/TVCG.2020.3030430 | Analyzing interaction data provides an opportunity to learn about users, uncover their underlying goals, and create intelligent visualization systems. The first step for intelligent response in visualizations is to enable computers to infer user goals and strategies through observing their interactions with a system. Researchers have proposed multiple techniques to model users; however, their frameworks often depend on the visualization design, interaction space, and dataset. Due to these dependencies, many techniques do not provide a general algorithmic solution to user exploration modeling. In this paper, we construct a series of models based on the dataset and pose user exploration modeling as a Bayesian model selection problem where we maintain a belief over numerous competing models that could explain user interactions. Each of these competing models represent an exploration strategy the user could adopt during a session. The goal of our technique is to make high-level and in-depth inferences about the user by observing their low-level interactions. Although our proposed idea is applicable to various probabilistic model spaces, we demonstrate a specific instance of encoding exploration patterns as competing models to infer information relevance. We validate our technique's ability to infer exploration bias, predict future interactions, and summarize an analytic session using user study datasets. Our results indicate that depending on the application, our method outperforms established baselines for bias detection and future interaction prediction. Finally, we discuss future research directions based on our proposed modeling paradigm and suggest how practitioners can use this method to build intelligent visualization systems that understand users' goals and adapt to improve the exploration process. | false | false | ["Shayan Monadjemi", "Roman Garnett", "Alvitta Ottley"] | [] | ["P"] | [{"name": "Paper Preprint", "url": "http://arxiv.org/pdf/2009.06042v2", "icon": "paper"}] |
VAST | 2020 | ConceptExplorer: Visual Analysis of Concept Drifts in Multi-source Time-series Data | 10.1109/VAST50239.2020.00006 | Time-series data is widely studied in various scenarios, such as weather forecasting, stock markets, and customer behavior analysis. To comprehensively learn about dynamic environments, it is necessary to comprehend features from multiple data sources. This paper proposes a novel visual analysis approach for detecting and analyzing concept drifts from multi-source time-series. We propose a visual detection scheme for discovering concept drifts from multi-source time-series based on prediction models. We design a drift level index to depict the dynamics, and a consistency judgment model to justify whether the concept drifts from various sources are consistent. Our integrated visual interface, ConceptExplorer, facilitates visual exploration, extraction, understanding, and comparison of concepts and concept drifts from multi-source time-series data. We conduct three case studies and expert interviews to verify the effectiveness of our approach. | false | false | ["Xumeng Wang", "Wei Chen 0001", "Jiazhi Xia", "Zexian Chen", "Dongshi Xu", "Xiangyang Wu", "Mingliang Xu", "Tobias Schreck"] | [] | ["P", "V"] | [{"name": "Paper Preprint", "url": "http://arxiv.org/pdf/2007.15272v2", "icon": "paper"}, {"name": "Fast Forward", "url": "https://youtu.be/KqB3Gy1eHvQ", "icon": "video"}] |
VAST | 2020 | DECE: Decision Explorer with Counterfactual Explanations for Machine Learning Models | 10.1109/TVCG.2020.3030342 | With machine learning models being increasingly applied to various decision-making scenarios, people have spent growing efforts to make machine learning models more transparent and explainable. Among various explanation techniques, counterfactual explanations have the advantages of being human-friendly and actionable: a counterfactual explanation tells the user how to gain the desired prediction with minimal changes to the input. Besides, counterfactual explanations can also serve as efficient probes to the models' decisions. In this work, we exploit the potential of counterfactual explanations to understand and explore the behavior of machine learning models. We design DECE, an interactive visualization system that helps understand and explore a model's decisions on individual instances and data subsets, supporting users ranging from decision-subjects to model developers. DECE supports exploratory analysis of model decisions by combining the strengths of counterfactual explanations at instance- and subgroup-levels. We also introduce a set of interactions that enable users to customize the generation of counterfactual explanations to find more actionable ones that can suit their needs. Through three use cases and an expert interview, we demonstrate the effectiveness of DECE in supporting decision exploration tasks and instance explanations. | false | false | ["Furui Cheng", "Yao Ming", "Huamin Qu"] | [] | ["P", "V"] | [{"name": "Paper Preprint", "url": "http://arxiv.org/pdf/2008.08353v1", "icon": "paper"}, {"name": "Fast Forward", "url": "https://youtu.be/wVrJ5youWNU", "icon": "video"}] |
VAST | 2020 | Diagnosing Concept Drift with Visual Analytics | 10.1109/VAST50239.2020.00007 | Concept drift is a phenomenon in which the distribution of a data stream changes over time in unforeseen ways, causing prediction models built on historical data to become inaccurate. While a variety of automated methods have been developed to identify when concept drift occurs, there is limited support for analysts who need to understand and correct their models when drift is detected. In this paper, we present a visual analytics method, DriftVis, to support model builders and analysts in the identification and correction of concept drift in streaming data. DriftVis combines a distribution-based drift detection method with a streaming scatterplot to support the analysis of drift caused by the distribution changes of data streams and to explore the impact of these changes on the model's accuracy. A quantitative experiment and two case studies on weather prediction and text classification have been conducted to demonstrate our proposed tool and illustrate how visual analytics can be used to support the detection, examination, and correction of concept drift. | false | false | ["Weikai Yang", "Zhen Li 0044", "Mengchen Liu", "Yafeng Lu", "Kelei Cao", "Ross Maciejewski", "Shixia Liu"] | [] | ["P", "V"] | [{"name": "Paper Preprint", "url": "http://arxiv.org/pdf/2007.14372v3", "icon": "paper"}, {"name": "Fast Forward", "url": "https://youtu.be/449t1pfeKq0", "icon": "video"}] |
VAST | 2020 | Evaluation of Sampling Methods for Scatterplots | 10.1109/TVCG.2020.3030432 | Given a scatterplot with tens of thousands of points or even more, a natural question is which sampling method should be used to create a small but “good” scatterplot for a better abstraction. We present the results of a user study that investigates the influence of different sampling strategies on multi-class scatterplots. The main goal of this study is to understand the capability of sampling methods in preserving the density, outliers, and overall shape of a scatterplot. To this end, we comprehensively review the literature and select seven typical sampling strategies as well as eight representative datasets. We then design four experiments to understand the performance of different strategies in maintaining: 1) region density; 2) class density; 3) outliers; and 4) overall shape in the sampling results. The results show that: 1) random sampling is preferred for preserving region density; 2) blue noise sampling and random sampling have comparable performance with the three multi-class sampling strategies in preserving class density; 3) outlier biased density based sampling, recursive subdivision based sampling, and blue noise sampling perform the best in keeping outliers; and 4) blue noise sampling outperforms the others in maintaining the overall shape of a scatterplot. | false | false | ["Jun Yuan 0003", "Shouxing Xiang", "Jiazhi Xia", "Lingyun Yu 0001", "Shixia Liu"] | [] | ["P", "V"] | [{"name": "Paper Preprint", "url": "http://arxiv.org/pdf/2007.14666v4", "icon": "paper"}, {"name": "Fast Forward", "url": "https://youtu.be/gPtAZsJKO5I", "icon": "video"}] |
VAST | 2020 | Explainable Matrix - Visualization for Global and Local Interpretability of Random Forest Classification Ensembles | 10.1109/TVCG.2020.3030354 | Over the past decades, classification models have proven to be essential machine learning tools given their potential and applicability in various domains. In these years, the goal of most researchers has been to improve quantitative metrics, notwithstanding the lack of information about models' decisions such metrics convey. This paradigm has recently shifted, and strategies beyond tables and numbers to assist in interpreting models' decisions are increasing in importance. As part of this trend, visualization techniques have been extensively used to support classification models' interpretability, with a significant focus on rule-based models. Despite the advances, the existing approaches present limitations in terms of visual scalability, and the visualization of large and complex models, such as the ones produced by the Random Forest (RF) technique, remains a challenge. In this paper, we propose Explainable Matrix (ExMatrix), a novel visualization method for RF interpretability that can handle models with massive quantities of rules. It employs a simple yet powerful matrix-like visual metaphor, where rows are rules, columns are features, and cells are rule predicates, enabling the analysis of entire models and auditing classification results. ExMatrix's applicability is confirmed via different examples, showing how it can be used in practice to promote RF models' interpretability. | false | false | ["Mário Popolin Neto", "Fernando Vieira Paulovich"] | [] | ["V"] | [{"name": "Fast Forward", "url": "https://youtu.be/qlthySP_mwA", "icon": "video"}] |
VAST | 2,020 | Githru: Visual Analytics for Understanding Software Development History Through Git Metadata Analysis | 10.1109/TVCG.2020.3030414 | Git metadata contains rich information for developers to understand the overall context of a large software development project. Thus it can help new developers, managers, and testers understand the history of development without needing to dig into a large pile of unfamiliar source code. However, the current tools for Git visualization are not adequate to analyze and explore the metadata: They focus mainly on improving the usability of Git commands instead of on helping users understand the development history. Furthermore, they do not scale for large and complex Git commit graphs, which can play an important role in understanding the overall development history. In this paper, we present Githru, an interactive visual analytics system that enables developers to effectively understand the context of development history through the interactive exploration of Git metadata. We design an interactive visual encoding idiom to represent a large Git graph in a scalable manner while preserving the topological structures in the Git graph. To enable scalable exploration of a large Git commit graph, we propose novel techniques (graph reconstruction, clustering, and Context-Preserving Squash Merge (CSM) methods) to abstract a large-scale Git commit graph. Based on these Git commit graph abstraction techniques, Githru provides an interactive summary view to help users gain an overview of the development history and a comparison view in which users can compare different clusters of commits. The efficacy of Githru has been demonstrated by case studies with domain experts using real-world, in-house datasets from a large software development team at a major international IT company. A controlled user study with 12 developers comparing Githru to previous tools also confirms the effectiveness of Githru in terms of task completion time. | false | false | [
"Youngtaek Kim",
"Jaeyoung Kim",
"Hyeon Jeon",
"Young-Ho Kim",
"Hyunjoo Song",
"Bo Hyoung Kim",
"Jinwook Seo"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2009.03115v2",
"icon": "paper"
}
] |
VAST | 2,020 | HyperTendril: Visual Analytics for User-Driven Hyperparameter Optimization of Deep Neural Networks | 10.1109/TVCG.2020.3030380 | To mitigate the pain of manually tuning hyperparameters of deep neural networks, automated machine learning (AutoML) methods have been developed to search for an optimal set of hyperparameters in large combinatorial search spaces. However, the search results of AutoML methods significantly depend on initial configurations, making it a non-trivial task to find a proper configuration. Therefore, human intervention via a visual analytic approach bears huge potential in this task. In response, we propose HyperTendril, a web-based visual analytics system that supports user-driven hyperparameter tuning processes in a model-agnostic environment. HyperTendril takes a novel approach to effectively steering hyperparameter optimization through an iterative, interactive tuning procedure that allows users to refine the search spaces and the configuration of the AutoML method based on their own insights from given results. Using HyperTendril, users can obtain insights into the complex behaviors of various hyperparameter search algorithms and diagnose their configurations. In addition, HyperTendril supports variable importance analysis to help the users refine their search spaces based on the analysis of relative importance of different hyperparameters and their interaction effects. We present the evaluation demonstrating how HyperTendril helps users steer their tuning processes via a longitudinal user study based on the analysis of interaction logs and in-depth interviews while we deploy our system in a professional industrial environment. | false | false | [
"Heungseok Park",
"Yoonsoo Nam",
"Jihoon Kim",
"Jaegul Choo"
] | [] | [
"P",
"V"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2009.02078v2",
"icon": "paper"
},
{
"name": "Fast Forward",
"url": "https://youtu.be/3nD6kXCL2xI",
"icon": "video"
}
] |
VAST | 2,020 | HypoML: Visual Analysis for Hypothesis-based Evaluation of Machine Learning Models | 10.1109/TVCG.2020.3030449 | In this paper, we present a visual analytics tool for enabling hypothesis-based evaluation of machine learning (ML) models. We describe a novel ML-testing framework that combines the traditional statistical hypothesis testing (commonly used in empirical research) with logical reasoning about the conclusions of multiple hypotheses. The framework defines a controlled configuration for testing a number of hypotheses as to whether and how some extra information about a “concept” or “feature” may benefit or hinder an ML model. Because reasoning multiple hypotheses is not always straightforward, we provide HypoML as a visual analysis tool, with which, the multi-thread testing results are first transformed to analytical results using statistical and logical inferences, and then to a visual representation for rapid observation of the conclusions and the logical flow between the testing results and hypotheses. We have applied HypoML to a number of hypothesized concepts, demonstrating the intuitive and explainable nature of the visual analysis. | false | false | [
"Qianwen Wang",
"William Alexander",
"Jack Pegg",
"Huamin Qu",
"Min Chen 0001"
] | [] | [
"P",
"V"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2002.05271v1",
"icon": "paper"
},
{
"name": "Fast Forward",
"url": "https://youtu.be/rf-amrd2Goc",
"icon": "video"
}
] |
VAST | 2,020 | iConViz: Interactive Visual Exploration of the Default Contagion Risk of Networked-Guarantee Loans | 10.1109/VAST50239.2020.00013 | Groups of enterprises can serve as guarantees for one another and form complex networks when obtaining loans from commercial banks. During economic slowdowns, corporate default may spread like a virus and lead to large-scale defaults or even systemic financial crises. To help financial regulatory authorities and banks manage the risk associated with networked loans, we identified the default contagion risk, a pivotal issue in developing preventive measures, and established iConViz, an interactive visual analysis tool that facilitates the closed-loop analysis process. A novel financial metric, the contagion effect, was formulated to quantify the infectious consequences of guarantee chains in this type of network. Based on this metric, we designed and implemented a series of novel and coordinated views that address the analysis of financial problems. Experts evaluated the system using real-world financial data. The proposed approach grants practitioners the ability to avoid previous ad hoc analysis methodologies and extend coverage of the conventional Capital Accord to the banking industry. | false | false | [
"Zhibin Niu",
"Runlin Li",
"Junqi Wu",
"Dawei Cheng",
"Jiawan Zhang"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2006.09542v3",
"icon": "paper"
}
] |
VAST | 2,020 | II-20: Intelligent and pragmatic analytic categorization of image collections | 10.1109/TVCG.2020.3030383 | In this paper, we introduce II-20 (Image Insight 2020), a multimedia analytics approach for analytic categorization of image collections. Advanced visualizations for image collections exist, but they need tight integration with a machine model to support the task of analytic categorization. Directly employing computer vision and interactive learning techniques gravitates towards search. Analytic categorization, however, is not machine classification (the difference between the two is called the pragmatic gap): a human adds/redefines/deletes categories of relevance on the fly to build insight, whereas the machine classifier is rigid and non-adaptive. Analytic categorization that truly brings the user to insight requires a flexible machine model that allows dynamic sliding on the exploration-search axis, as well as semantic interactions: a human thinks about image data mostly in semantic terms. II-20 brings three major contributions to multimedia analytics on image collections and towards closing the pragmatic gap. Firstly, a new machine model that closely follows the user's interactions and dynamically models her categories of relevance. II-20's machine model, in addition to matching and exceeding the state of the art's ability to produce relevant suggestions, allows the user to dynamically slide on the exploration-search axis without any additional input from her side. Secondly, the dynamic, 1-image-at-a-time Tetris metaphor that synergizes with the model. It allows a well-trained model to analyze the collection by itself with minimal interaction from the user and complements the classic grid metaphor. Thirdly, the fast-forward interaction, allowing the user to harness the model to quickly expand (“fast-forward”) the categories of relevance, expands the multimedia analytics semantic interaction dictionary. Automated experiments show that II-20's machine model outperforms the existing state of the art and also demonstrate the Tetris metaphor's analytic quality. User studies further confirm that II-20 is an intuitive, efficient, and effective multimedia analytics tool. | false | false | [
"Jan Zahálka",
"Marcel Worring",
"Jarke J. van Wijk"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2005.02149v3",
"icon": "paper"
}
] |
VAST | 2,020 | In Search of Patient Zero: Visual Analytics of Pathogen Transmission Pathways in Hospitals | 10.1109/TVCG.2020.3030437 | Pathogen outbreaks (i.e., outbreaks of bacteria and viruses) in hospitals can cause high mortality rates and increase costs for hospitals significantly. An outbreak is generally noticed when the number of infected patients rises above an endemic level or the usual prevalence of a pathogen in a defined population. Reconstructing transmission pathways back to the source of an outbreak - the patient zero or index patient - requires the analysis of microbiological data and patient contacts. This is often manually completed by infection control experts. We present a novel visual analytics approach to support the analysis of transmission pathways, patient contacts, the progression of the outbreak, and patient timelines during hospitalization. Infection control experts applied our solution to a real outbreak of Klebsiella pneumoniae in a large German hospital. Using our system, our experts were able to scale the analysis of transmission pathways to longer time intervals (i.e., several years of data instead of days) and across a larger number of wards. Also, the system is able to reduce the analysis time from days to hours. In our final study, feedback from twenty-five experts from seven German hospitals provides evidence that our solution brings significant benefits for analyzing outbreaks. | false | false | [
"Tom Baumgartl",
"Markus Petzold",
"Marcel Wunderlich",
"Markus Höhn",
"Daniel Archambault",
"M. Lieser",
"A. Dalpke",
"Simone Scheithauer",
"Michael Marschollek",
"Vanessa Eichel",
"Nico T. Mutters",
"Highmed Consortium",
"Tatiana von Landesberger"
] | [
"HM"
] | [
"P",
"V"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2008.09552v3",
"icon": "paper"
},
{
"name": "Fast Forward",
"url": "https://youtu.be/Y3fGnxKLFIM",
"icon": "video"
}
] |
VAST | 2,020 | InCorr: Interactive Data-Driven Correlation Panels for Digital Outcrop Analysis | 10.1109/TVCG.2020.3030409 | Geological analysis of 3D Digital Outcrop Models (DOMs) for reconstruction of ancient habitable environments is a key aspect of the upcoming ESA ExoMars 2022 Rosalind Franklin Rover and the NASA 2020 Rover Perseverance missions in seeking signs of past life on Mars. Geologists measure and interpret 3D DOMs, create sedimentary logs and combine them in ‘correlation panels’ to map the extents of key geological horizons, and build a stratigraphic model to understand their position in the ancient landscape. Currently, the creation of correlation panels is completely manual and therefore time-consuming, and inflexible. With InCorr we present a visualization solution that encompasses a 3D logging tool and an interactive data-driven correlation panel that evolves with the stratigraphic analysis. For the creation of InCorr we closely cooperated with leading planetary geologists in the form of a design study. We verify our results by recreating an existing correlation analysis with InCorr and validate our correlation panel against a manually created illustration. Further, we conducted a user-study with a wider circle of geologists. Our evaluation shows that InCorr efficiently supports the domain experts in tackling their research questions and that it has the potential to significantly impact how geologists work with digital outcrop representations in general. | false | false | [
"Thomas Ortner",
"Andreas Walch",
"Rebecca Nowak",
"Robert Barnes",
"Thomas Höllt",
"M. Eduard Gröller"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2007.11512v2",
"icon": "paper"
}
] |
VAST | 2,020 | Insight Beyond Numbers: The Impact of Qualitative Factors on Visual Data Analysis | 10.1109/TVCG.2020.3030376 | As of today, data analysis focuses primarily on the findings to be made inside the data and concentrates less on how those findings relate to the domain of investigation. Contemporary visualization as a field of research shows a strong tendency to adopt this data-centrism. Despite their decisive influence on the analysis result, qualitative aspects of the analysis process such as the structure, soundness, and complexity of the applied reasoning strategy are rarely discussed explicitly. We argue that if the purpose of visualization is the provision of domain insight rather than the depiction of data analysis results, a holistic perspective requires a qualitative component to be added to the discussion of quantitative and human factors. To support this point, we demonstrate how considerations of qualitative factors in visual analysis can be applied to obtain explanations and possible solutions for a number of practical limitations inherent to the data-centric perspective on analysis. Based on this discussion of what we call qualitative visual analysis, we develop an inside-outside principle of nested levels of context that can serve as a conceptual basis for the development of visualization systems that optimally support the emergence of insight during analysis. | false | false | [
"Benjamin Karer",
"Hans Hagen",
"Dirk J. Lehmann"
] | [] | [] | [] |
VAST | 2,020 | Integrating Prior Knowledge in Mixed-Initiative Social Network Clustering | 10.1109/TVCG.2020.3030347 | We propose a new approach-called PK-clustering-to help social scientists create meaningful clusters in social networks. Many clustering algorithms exist but most social scientists find them difficult to understand, and tools do not provide any guidance to choose algorithms, or to evaluate results taking into account the prior knowledge of the scientists. Our work introduces a new clustering approach and a visual analytics user interface that address this issue. It is based on a process that 1) captures the prior knowledge of the scientists as a set of incomplete clusters, 2) runs multiple clustering algorithms (similarly to clustering ensemble methods), 3) visualizes the results of all the algorithms ranked and summarized by how well each algorithm matches the prior knowledge, 4) evaluates the consensus between user-selected algorithms and 5) allows users to review details and iteratively update the acquired knowledge. We describe our approach using an initial functional prototype, then provide two examples of use and early feedback from social scientists. We believe our clustering approach offers a novel constructive method to iteratively build knowledge while avoiding being overly influenced by the results of often randomly selected black-box clustering algorithms. | false | false | [
"Alexis Pister",
"Paolo Buono",
"Jean-Daniel Fekete",
"Catherine Plaisant",
"Paola Valdivia"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2005.02972v2",
"icon": "paper"
}
] |
VAST | 2,020 | LineSmooth: An Analytical Framework for Evaluating the Effectiveness of Smoothing Techniques on Line Charts | 10.1109/TVCG.2020.3030421 | We present a comprehensive framework for evaluating line chart smoothing methods under a variety of visual analytics tasks. Line charts are commonly used to visualize a series of data samples. When the number of samples is large, or the data are noisy, smoothing can be applied to make the signal more apparent. However, there are a wide variety of smoothing techniques available, and the effectiveness of each depends upon both the nature of the data and the visual analytics task at hand. To date, the visualization community lacks a summary work for analyzing and classifying the various smoothing methods available. In this paper, we establish a framework, based on 8 measures of line smoothing effectiveness tied to 8 low-level visual analytics tasks. We then analyze 12 methods coming from 4 commonly used classes of line chart smoothing: rank filters, convolutional filters, frequency domain filters, and subsampling. The results show that while no method is ideal for all situations, certain methods, such as Gaussian filters and topology-based subsampling, perform well in general. Other methods, such as low-pass cutoff filters and Douglas-Peucker subsampling, perform well for specific visual analytics tasks. Almost as importantly, our framework demonstrates that several methods, including the commonly used uniform subsampling, produce low-quality results, and should, therefore, be avoided, if possible. | false | false | [
"Paul Rosen 0001",
"Ghulam Jilani Quadri"
] | [] | [
"PW",
"P",
"V",
"C"
] | [
{
"name": "Project Website with Demo",
"url": "https://usfdatavisualization.github.io/LineSmoothDemo/",
"icon": "project_website"
},
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2007.13882v2",
"icon": "paper"
},
{
"name": "Fast Forward",
"url": "https://youtu.be/KCNqQuz47Tw",
"icon": "video"
},
{
"name": "Prerecorded Talk",
"url": "https://www.youtube.com/live/LebtfJxfGGk?si=9Klicxrz12I8pYPq&t=4133",
"icon": "video"
},
{
"name": "Source Code",
"url": "https://github.com/USFDataVisualization/LineSmooth",
"icon": "code"
}
] |
VAST | 2,020 | Multiscale Snapshots: Visual Analysis of Temporal Summaries in Dynamic Graphs | 10.1109/TVCG.2020.3030398 | The overview-driven visual analysis of large-scale dynamic graphs poses a major challenge. We propose Multiscale Snapshots, a visual analytics approach to analyze temporal summaries of dynamic graphs at multiple temporal scales. First, we recursively generate temporal summaries to abstract overlapping sequences of graphs into compact snapshots. Second, we apply graph embeddings to the snapshots to learn low-dimensional representations of each sequence of graphs to speed up specific analytical tasks (e.g., similarity search). Third, we visualize the evolving data from a coarse to fine-granular snapshots to semi-automatically analyze temporal states, trends, and outliers. The approach enables us to discover similar temporal summaries (e.g., reoccurring states), reduces the temporal data to speed up automatic analysis, and to explore both structural and temporal properties of a dynamic graph. We demonstrate the usefulness of our approach by a quantitative evaluation and the application to a real-world dataset. | false | false | [
"Eren Cakmak",
"Udo Schlegel",
"Dominik Jäckle",
"Daniel A. Keim",
"Tobias Schreck"
] | [] | [
"P",
"V"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2008.08282v2",
"icon": "paper"
},
{
"name": "Fast Forward",
"url": "https://youtu.be/qqNPRLmFqDM",
"icon": "video"
}
] |
VAST | 2,020 | MultiSegVA: Using Visual Analytics to Segment Biologging Time Series on Multiple Scales | 10.1109/TVCG.2020.3030386 | Segmenting biologging time series of animals on multiple temporal scales is an essential step that requires complex techniques with careful parameterization and possibly cross-domain expertise. Yet, there is a lack of visual-interactive tools that strongly support such multi-scale segmentation. To close this gap, we present our MultiSegVA platform for interactively defining segmentation techniques and parameters on multiple temporal scales. MultiSegVA primarily contributes tailored, visual-interactive means and visual analytics paradigms for segmenting unlabeled time series on multiple scales. Further, to flexibly compose the multi-scale segmentation, the platform contributes a new visual query language that links a variety of segmentation techniques. To illustrate our approach, we present a domain-oriented set of segmentation techniques derived in collaboration with movement ecologists. We demonstrate the applicability and usefulness of MultiSegVA in two real-world use cases from movement ecology, related to behavior analysis after environment-aware segmentation, and after progressive clustering. Expert feedback from movement ecologists shows the effectiveness of tailored visual-interactive means and visual analytics paradigms at segmenting multi-scale data, enabling them to perform semantically meaningful analyses. A third use case demonstrates that MultiSegVA is generalizable to other domains. | false | false | [
"Philipp Meschenmoser",
"Juri Buchmüller",
"Daniel Seebacher",
"Martin Wikelski",
"Daniel A. Keim"
] | [] | [
"P",
"V"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2009.00548v2",
"icon": "paper"
},
{
"name": "Fast Forward",
"url": "https://youtu.be/Zqqlgv7ZaV0",
"icon": "video"
}
] |
VAST | 2,020 | Once Upon A Time In Visualization: Understanding the Use of Textual Narratives for Causality | 10.1109/TVCG.2020.3030358 | Causality visualization can help people understand temporal chains of events, such as messages sent in a distributed system, cause and effect in a historical conflict, or the interplay between political actors over time. However, as the scale and complexity of these event sequences grows, even these visualizations can become overwhelming to use. In this paper, we propose the use of textual narratives as a data-driven storytelling method to augment causality visualization. We first propose a design space for how textual narratives can be used to describe causal data. We then present results from a crowdsourced user study where participants were asked to recover causality information from two causality visualizations-causal graphs and Hasse diagrams-with and without an associated textual narrative. Finally, we describe Causeworks, a causality visualization system for understanding how specific interventions influence a causal model. The system incorporates an automatic textual narrative mechanism based on our design space. We validate Causeworks through interviews with experts who used the system for understanding complex events. | false | false | [
"Arjun Choudhry",
"Mandar Sharma",
"Pramod Chundury",
"Thomas Kapler",
"Derek W. S. Gray",
"Naren Ramakrishnan",
"Niklas Elmqvist"
] | [] | [
"P",
"V"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2009.02649v1",
"icon": "paper"
},
{
"name": "Fast Forward",
"url": "https://youtu.be/Ra5hihtc8c0",
"icon": "video"
}
] |
VAST | 2,020 | P6: A Declarative Language for Integrating Machine Learning in Visual Analytics | 10.1109/TVCG.2020.3030453 | We present P6, a declarative language for building high performance visual analytics systems through its support for specifying and integrating machine learning and interactive visualization methods. As data analysis methods based on machine learning and artificial intelligence continue to advance, a visual analytics solution can leverage these methods for better exploiting large and complex data. However, integrating machine learning methods with interactive visual analysis is challenging. Existing declarative programming libraries and toolkits for visualization lack support for coupling machine learning methods. By providing a declarative language for visual analytics, P6 can empower more developers to create visual analytics applications that combine machine learning and visualization methods for data analysis and problem solving. Through a variety of example applications, we demonstrate P6's capabilities and show the benefits of using declarative specifications to build visual analytics systems. We also identify and discuss the research opportunities and challenges for declarative visual analytics. | false | false | [
"Jianping Kelvin Li",
"Kwan-Liu Ma"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2009.01399v1",
"icon": "paper"
}
] |
VAST | 2,020 | PassVizor: Toward Better Understanding of the Dynamics of Soccer Passes | 10.1109/TVCG.2020.3030359 | In soccer, passing is the most frequent interaction between players and plays a significant role in creating scoring chances. Experts are interested in analyzing players' passing behavior to learn passing tactics, i.e., how players build up an attack with passing. Various approaches have been proposed to facilitate the analysis of passing tactics. However, the dynamic changes of a team's employed tactics over a match have not been comprehensively investigated. To address the problem, we closely collaborate with domain experts and characterize requirements to analyze the dynamic changes of a team's passing tactics. To characterize the passing tactic employed for each attack, we propose a topic-based approach that provides a high-level abstraction of complex passing behaviors. Based on the model, we propose a glyph-based design to reveal the multi-variate information of passing tactics within different phases of attacks, including player identity, spatial context, and formation. We further design and develop PassVizor, a visual analytics system, to support the comprehensive analysis of passing dynamics. With the system, users can detect the changing patterns of passing tactics and examine the detailed passing process for evaluating passing tactics. We invite experts to conduct analysis with PassVizor and demonstrate the usability of the system through an expert interview. | false | false | [
"Xiao Xie",
"Jiachen Wang",
"Hongye Liang",
"Dazhen Deng",
"Shoubin Cheng",
"Hui Zhang 0051",
"Wei Chen 0001",
"Yingcai Wu"
] | [] | [
"P",
"V"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2009.02464v1",
"icon": "paper"
},
{
"name": "Fast Forward",
"url": "https://youtu.be/Lr6yuBBrMQw",
"icon": "video"
}
] |
VAST | 2,020 | PipelineProfiler: A Visual Analytics Tool for the Exploration of AutoML Pipelines | 10.1109/TVCG.2020.3030361 | In recent years, a wide variety of automated machine learning (AutoML) methods have been proposed to generate end-to-end ML pipelines. While these techniques facilitate the creation of models, given their black-box nature, the complexity of the underlying algorithms, and the large number of pipelines they derive, they are difficult for developers to debug. It is also challenging for machine learning experts to select an AutoML system that is well suited for a given problem. In this paper, we present PipelineProfiler, an interactive visualization tool that allows the exploration and comparison of the solution space of machine learning (ML) pipelines produced by AutoML systems. PipelineProfiler is integrated with Jupyter Notebook and can be combined with common data science tools to enable a rich set of analyses of the ML pipelines, providing users a better understanding of the algorithms that generated them as well as insights into how they can be improved. We demonstrate the utility of our tool through use cases where PipelineProfiler is used to better understand and improve a real-world AutoML system. Furthermore, we validate our approach by presenting a detailed analysis of a think-aloud experiment with six data scientists who develop and evaluate AutoML tools. | false | false | [
"Jorge Henrique Piazentin Ono",
"Sonia Castelo",
"Roque Lopez",
"Enrico Bertini",
"Juliana Freire",
"Cláudio T. Silva"
] | [] | [
"P",
"V"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2005.00160v2",
"icon": "paper"
},
{
"name": "Fast Forward",
"url": "https://youtu.be/0FlwKtToYLQ",
"icon": "video"
}
] |
VAST | 2,020 | Preserving Minority Structures in Graph Sampling | 10.1109/TVCG.2020.3030428 | Sampling is a widely used graph reduction technique to accelerate graph computations and simplify graph visualizations. By comprehensively analyzing the literature on graph sampling, we assume that existing algorithms cannot effectively preserve minority structures that are rare and small in a graph but are very important in graph analysis. In this work, we initially conduct a pilot user study to investigate representative minority structures that are most appealing to human viewers. We then perform an experimental study to evaluate the performance of existing graph sampling algorithms regarding minority structure preservation. Results confirm our assumption and suggest key points for designing a new graph sampling approach named minority-centric graph sampling (MCGS). In this approach, a triangle-based algorithm and a cut-point-based algorithm are proposed to efficiently identify minority structures. A set of importance assessment criteria are designed to guide the preservation of important minority structures. Three optimization objectives are introduced into a greedy strategy to balance the preservation between minority and majority structures and suppress the generation of new minority structures. A series of experiments and case studies are conducted to evaluate the effectiveness of the proposed MCGS. | false | false | [
"Ying Zhao 0001",
"Haojin Jiang",
"Qi'an Chen",
"Yaqi Qin",
"Huixuan Xie",
"Yitao Wu",
"Shixia Liu",
"Zhiguang Zhou",
"Jiazhi Xia",
"Fangfang Zhou"
] | [
"HM"
] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2009.02498v2",
"icon": "paper"
}
] |
VAST | 2,020 | QLens: Visual Analytics of Multi-step Problem-solving Behaviors for Improving Question Design | 10.1109/TVCG.2020.3030337 | With the rapid development of online education in recent years, there has been an increasing number of learning platforms that provide students with multi-step questions to cultivate their problem-solving skills. To guarantee the high quality of such learning materials, question designers need to inspect how students' problem-solving processes unfold step by step to infer whether students' problem-solving logic matches their design intent. They also need to compare the behaviors of different groups (e.g., students from different grades) to distribute questions to students with the right level of knowledge. The availability of fine-grained interaction data, such as mouse movement trajectories from the online platforms, provides the opportunity to analyze problem-solving behaviors. However, it is still challenging to interpret, summarize, and compare the high dimensional problem-solving sequence data. In this paper, we present a visual analytics system, QLens, to help question designers inspect detailed problem-solving trajectories, compare different student groups, and distill insights for design improvements. In particular, QLens models problem-solving behavior as a hybrid state transition graph and visualizes it through a novel glyph-embedded Sankey diagram, which reflects students' problem-solving logic, engagement, and encountered difficulties. We conduct three case studies and three expert interviews to demonstrate the usefulness of QLens on real-world datasets that consist of thousands of problem-solving traces. | false | false | [
"Meng Xia",
"Reshika Palaniyappan Velumani",
"Yong Wang 0021",
"Huamin Qu",
"Xiaojuan Ma"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2009.12833v1",
"icon": "paper"
}
] |
VAST | 2,020 | Revisiting the Modifiable Areal Unit Problem in Deep Traffic Prediction with Visual Analytics | 10.1109/TVCG.2020.3030410 | Deep learning methods are being increasingly used for urban traffic prediction where spatiotemporal traffic data is aggregated into sequentially organized matrices that are then fed into convolution-based residual neural networks. However, the widely known modifiable areal unit problem within such aggregation processes can lead to perturbations in the network inputs. This issue can significantly destabilize the feature embeddings and the predictions - rendering deep networks much less useful for the experts. This paper approaches this challenge by leveraging unit visualization techniques that enable the investigation of many-to-many relationships between dynamically varied multi-scalar aggregations of urban traffic data and neural network predictions. Through regular exchanges with a domain expert, we design and develop a visual analytics solution that integrates 1) a Bivariate Map equipped with an advanced bivariate colormap to simultaneously depict input traffic and prediction errors across space, 2) a Moran's I Scatterplot that provides local indicators of spatial association analysis, and 3) a Multi-scale Attribution View that arranges non-linear dot plots in a tree layout to promote model analysis and comparison across scales. We evaluate our approach through a series of case studies involving a real-world dataset of Shenzhen taxi trips, and through interviews with domain experts. We observe that geographical scale variations have important impact on prediction performances, and interactive visual exploration of dynamically varying inputs and outputs benefit experts in the development of deep traffic prediction models. | false | false | [
"Wei Zeng 0004",
"Chengqiao Lin",
"Juncong Lin",
"Jincheng Jiang",
"Jiazhi Xia",
"Cagatay Turkay",
"Wei Chen 0001"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2007.15486v3",
"icon": "paper"
}
] |
VAST | 2,020 | Selection-Bias-Corrected Visualization via Dynamic Reweighting | 10.1109/TVCG.2020.3030455 | The collection and visual analysis of large-scale data from complex systems, such as electronic health records or clickstream data, has become increasingly common across a wide range of industries. This type of retrospective visual analysis, however, is prone to a variety of selection bias effects, especially for high-dimensional data where only a subset of dimensions is visualized at any given time. The risk of selection bias is even higher when analysts dynamically apply filters or perform grouping operations during ad hoc analyses. These bias effects threaten the validity and generalizability of insights discovered during visual analysis as the basis for decision making. Past work has focused on bias transparency, helping users understand when selection bias may have occurred. However, countering the effects of selection bias via bias mitigation is typically left for the user to accomplish as a separate process. Dynamic reweighting (DR) is a novel computational approach to selection bias mitigation that helps users craft bias-corrected visualizations. This paper describes the DR workflow, introduces key DR visualization designs, and presents statistical methods that support the DR process. Use cases from the medical domain, as well as findings from domain expert user interviews, are also reported. | false | false | [
"David Borland",
"Jonathan Zhang",
"Smiti Kaul",
"David Gotz"
] | [] | [
"P",
"V"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2007.14964v2",
"icon": "paper"
},
{
"name": "Fast Forward",
"url": "https://youtu.be/pqoQZZ07HOo",
"icon": "video"
}
] |
VAST | 2,020 | SilkViser: A Visual Explorer of Blockchain-based Cryptocurrency Transaction Data | 10.1109/VAST50239.2020.00014 | Many blockchain-based cryptocurrencies provide users with online blockchain explorers for viewing online transaction data. However, traditional blockchain explorers mostly present transaction information in textual and tabular forms. Such forms make understanding cryptocurrency transaction mechanisms difficult for novice users (NUsers). They are also insufficiently informative for experienced users (EUsers) to recognize advanced transaction information. This study introduces a new online cryptocurrency transaction data viewing tool called SilkViser. Guided by detailed scenario and requirement analyses, we create a series of appreciating visualization designs, such as paper ledger-inspired block and blockchain visualizations and ancient copper coin-inspired transaction visualizations, to help users understand cryptocurrency transaction mechanisms and recognize advanced transaction information. We also provide a set of lightweight interactions to facilitate easy and free data exploration. Moreover, a controlled user study is conducted to quantitatively evaluate the usability and effectiveness of SilkViser. Results indicate that SilkViser can satisfy the requirements of NUsers and EUsers. Our visualization designs can compensate for the inexperience of NUsers in data viewing and attract potential users to participate in cryptocurrency transactions. | false | false | [
"Zengsheng Zhong",
"Shuirun Wei",
"Yeting Xu",
"Ying Zhao 0001",
"Fangfang Zhou",
"Feng Luo",
"Ronghua Shi"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2009.02651v1",
"icon": "paper"
}
] |
VAST | 2,020 | SMAP: A Joint Dimensionality Reduction Scheme for Secure Multi-Party Visualization | 10.1109/VAST50239.2020.00015 | Nowadays, as data becomes increasingly complex and distributed, data analyses often involve several related datasets that are stored on different servers and probably owned by different stakeholders. While there is an emerging need to provide these stakeholders with a full picture of their data under a global context, conventional visual analytical methods, such as dimensionality reduction, could expose data privacy when multi-party datasets are fused into a single site to build point-level relationships. In this paper, we reformulate the conventional t-SNE method from the single-site mode into a secure distributed infrastructure. We present a secure multi-party scheme for joint t-SNE computation, which can minimize the risk of data leakage. Aggregated visualization can be optionally employed to hide disclosure of point-level relationships. We build a prototype system based on our method, SMAP, to support the organization, computation, and exploration of secure joint embedding. We demonstrate the effectiveness of our approach with three case studies, one of which is based on the deployment of our system in real-world applications. | false | false | [
"Jiazhi Xia",
"Tianxiang Chen",
"Lei Zhang",
"Wei Chen 0001",
"Yang Chen",
"Xiaolong Zhang 0001",
"Cong Xie",
"Tobias Schreck"
] | [] | [
"P",
"V"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2007.15591v1",
"icon": "paper"
},
{
"name": "Fast Forward",
"url": "https://youtu.be/ckm5b5slF7Y",
"icon": "video"
}
] |
VAST | 2,020 | StackGenVis: Alignment of Data, Algorithms, and Models for Stacking Ensemble Learning Using Performance Metrics | 10.1109/TVCG.2020.3030352 | In machine learning (ML), ensemble methods-such as bagging, boosting, and stacking-are widely-established approaches that regularly achieve top-notch predictive performance. Stacking (also called “stacked generalization”) is an ensemble method that combines heterogeneous base models, arranged in at least one layer, and then employs another metamodel to summarize the predictions of those models. Although it may be a highly-effective approach for increasing the predictive performance of ML, generating a stack of models from scratch can be a cumbersome trial-and-error process. This challenge stems from the enormous space of available solutions, with different sets of data instances and features that could be used for training, several algorithms to choose from, and instantiations of these algorithms using diverse parameters (i.e., models) that perform differently according to various metrics. In this work, we present a knowledge generation model, which supports ensemble learning with the use of visualization, and a visual analytics system for stacked generalization. Our system, StackGenVis, assists users in dynamically adapting performance metrics, managing data instances, selecting the most important features for a given data set, choosing a set of top-performant and diverse algorithms, and measuring the predictive performance. In consequence, our proposed tool helps users to decide between distinct models and to reduce the complexity of the resulting stack by removing overpromising and underperforming models. The applicability and effectiveness of StackGenVis are demonstrated with two use cases: a real-world healthcare data set and a collection of data related to sentiment/stance detection in texts. Finally, the tool has been evaluated through interviews with three ML experts. | false | false | [
"Angelos Chatzimparmpas",
"Rafael Messias Martins",
"Kostiantyn Kucher",
"Andreas Kerren"
] | [] | [
"P",
"V"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2005.01575v8",
"icon": "paper"
},
{
"name": "Fast Forward",
"url": "https://youtu.be/9lvdgPHGfsQ",
"icon": "video"
}
] |
VAST | 2,020 | STULL: Unbiased Online Sampling for Visual Exploration of Large Spatiotemporal Data | 10.1109/VAST50239.2020.00012 | Online sampling-supported visual analytics is increasingly important, as it allows users to explore large datasets with acceptable approximate answers at interactive rates. However, existing online spatiotemporal sampling techniques are often biased, as most researchers have primarily focused on reducing computational latency. Biased sampling approaches select data with unequal probabilities and produce results that do not match the exact data distribution, leading end users to incorrect interpretations. In this paper, we propose a novel approach to perform unbiased online sampling of large spatiotemporal data. The proposed approach ensures the same probability of selection to every point that qualifies the specifications of a user’s multidimensional query. To achieve unbiased sampling for accurate representative interactive visualizations, we design a novel data index and an associated sample retrieval plan. Our proposed sampling approach is suitable for a wide variety of visual analytics tasks, e.g., tasks that run aggregate queries of spatiotemporal data. Extensive experiments confirm the superiority of our approach over a state-of-the-art spatial online sampling technique, demonstrating that within the same computational time, data samples generated in our approach are at least 50% more accurate in representing the actual spatial distribution of the data and enable approximate visualizations to present closer visual appearances to the exact ones. | false | false | [
"Guizhen Wang",
"Jingjing Guo",
"MingJie Tang",
"Jose Florencio de Queiroz Neto",
"Calvin Yau",
"Anas Daghistani",
"Morteza Karimzadeh",
"Walid G. Aref",
"David S. Ebert"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2008.13028v1",
"icon": "paper"
}
] |
VAST | 2,020 | Supporting the Problem-Solving Loop: Designing Highly Interactive Optimisation Systems | 10.1109/TVCG.2020.3030364 | Efficient optimisation algorithms have become important tools for finding high-quality solutions to hard, real-world problems such as production scheduling, timetabling, or vehicle routing. These algorithms are typically “black boxes” that work on mathematical models of the problem to solve. However, many problems are difficult to fully specify, and require a “human in the loop” who collaborates with the algorithm by refining the model and guiding the search to produce acceptable solutions. Recently, the Problem-Solving Loop was introduced as a high-level model of such interactive optimisation. Here, we present and evaluate nine recommendations for the design of interactive visualisation tools supporting the Problem-Solving Loop. They range from the choice of visual representation for solutions and constraints to the use of a solution gallery to support exploration of alternate solutions. We first examined the applicability of the recommendations by investigating how well they had been supported in previous interactive optimisation tools. We then evaluated the recommendations in the context of the vehicle routing problem with time windows (VRPTW). To do so we built a sophisticated interactive visual system for solving VRPTW that was informed by the recommendations. Ten participants then used this system to solve a variety of routing problems. We report on participant comments and interaction patterns with the tool. These showed the tool was regarded as highly usable and the results generally supported the usefulness of the underlying recommendations. | false | false | [
"Jie Liu",
"Tim Dwyer",
"Guido Tack",
"Samuel Gratzl",
"Kim Marriott"
] | [] | [
"P",
"V"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2009.03163v1",
"icon": "paper"
},
{
"name": "Fast Forward",
"url": "https://youtu.be/nT55ocI-73o",
"icon": "video"
}
] |
VAST | 2,020 | TaxThemis: Interactive Mining and Exploration of Suspicious Tax Evasion Groups | 10.1109/TVCG.2020.3030370 | Tax evasion is a serious economic problem for many countries, as it can undermine the government's tax system and lead to an unfair business competition environment. Recent research has applied data analytics techniques to analyze and detect tax evasion behaviors of individual taxpayers. However, they have failed to support the analysis and exploration of the related party transaction tax evasion (RPTTE) behaviors (e.g., transfer pricing), where a group of taxpayers is involved. In this paper, we present TaxThemis, an interactive visual analytics system to help tax officers mine and explore suspicious tax evasion groups through analyzing heterogeneous tax-related data. A taxpayer network is constructed and fused with the respective trade network to detect suspicious RPTTE groups. Rich visualizations are designed to facilitate the exploration and investigation of suspicious transactions between related taxpayers with profit and topological data analysis. Specifically, we propose a calendar heatmap with a carefully-designed encoding scheme to intuitively show the evidence of transferring revenue through related party transactions. We demonstrate the usefulness and effectiveness of TaxThemis through two case studies on real-world tax-related data and interviews with domain experts. | false | false | [
"Yating Lin",
"Kamkwai Wong",
"Yong Wang 0021",
"Rong Zhang",
"Bo Dong 0001",
"Huamin Qu",
"Qinghua Zheng"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2009.03179v1",
"icon": "paper"
}
] |
VAST | 2,020 | Topology Density Map for Urban Data Visualization and Analysis | 10.1109/TVCG.2020.3030469 | Density map is an effective visualization technique for depicting the scalar field distribution in 2D space. Conventional methods for constructing density maps are mainly based on Euclidean distance, limiting their applicability in urban analysis that shall consider road network and urban traffic. In this work, we propose a new method named Topology Density Map, targeting for accurate and intuitive density maps in the context of urban environment. Based on the various constraints of road connections and traffic conditions, the method first constructs a directed acyclic graph (DAG) that propagates nonlinear scalar fields along 1D road networks. Next, the method extends the scalar fields to a 2D space by identifying key intersecting points in the DAG and calculating the scalar fields for every point, yielding a weighted Voronoi diagram like effect of space division. Two case studies demonstrate that the Topology Density Map supplies accurate information to users and provides an intuitive visualization for decision making. An interview with domain experts demonstrates the feasibility, usability, and effectiveness of our method. | false | false | [
"Zezheng Feng",
"Haotian Li 0001",
"Wei Zeng 0004",
"Shuang-Hua Yang",
"Huamin Qu"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2007.15828v4",
"icon": "paper"
}
] |
VAST | 2,020 | Towards Better Bus Networks: A Visual Analytics Approach | 10.1109/TVCG.2020.3030458 | Bus routes are typically updated every 3–5 years to meet constantly changing travel demands. However, identifying deficient bus routes and finding their optimal replacements remain challenging due to the difficulties in analyzing a complex bus network and the large solution space comprising alternative routes. Most of the automated approaches cannot produce satisfactory results in real-world settings without laborious inspection and evaluation of the candidates. The limitations observed in these approaches motivate us to collaborate with domain experts and propose a visual analytics solution for the performance analysis and incremental planning of bus routes based on an existing bus network. Developing such a solution involves three major challenges, namely, a) the in-depth analysis of complex bus route networks, b) the interactive generation of improved route candidates, and c) the effective evaluation of alternative bus routes. For challenge a, we employ an overview-to-detail approach by dividing the analysis of a complex bus network into three levels to facilitate the efficient identification of deficient routes. For challenge b, we improve a route generation model and interpret the performance of the generation with tailored visualizations. For challenge c, we incorporate a conflict resolution strategy in the progressive decision-making process to assist users in evaluating the alternative routes and finding the most optimal one. The proposed system is evaluated with two usage scenarios based on real-world data and received positive feedback from the experts. | false | false | [
"Di Weng",
"Chengbo Zheng",
"Zikun Deng",
"Mingze Ma",
"Jie Bao 0003",
"Yu Zheng 0004",
"Mingliang Xu",
"Yingcai Wu"
] | [] | [
"P",
"V"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2008.10915v3",
"icon": "paper"
},
{
"name": "Fast Forward",
"url": "https://youtu.be/DEAfK8F2dQE",
"icon": "video"
}
] |
VAST | 2,020 | Uplift: A Tangible and Immersive Tabletop System for Casual Collaborative Visual Analytics | 10.1109/TVCG.2020.3030334 | Collaborative visual analytics leverages social interaction to support data exploration and sensemaking. These processes are typically imagined as formalised, extended activities, between groups of dedicated experts, requiring expertise with sophisticated data analysis tools. However, there are many professional domains that benefit from support for short 'bursts' of data exploration between a subset of stakeholders with a diverse breadth of knowledge. Such 'casual collaborative' scenarios will require engaging features to draw users' attention, with intuitive, 'walk-up and use' interfaces. This paper presents Uplift, a novel prototype system to support 'casual collaborative visual analytics' for a campus microgrid, co-designed with local stakeholders. An elicitation workshop with key members of the building management team revealed relevant knowledge is distributed among multiple experts in their team, each using bespoke analysis tools. Uplift combines an engaging 3D model on a central tabletop display with intuitive tangible interaction, as well as augmented-reality, mid-air data visualisation, in order to support casual collaborative visual analytics for this complex domain. Evaluations with expert stakeholders from the building management and energy domains were conducted during and following our prototype development and indicate that Uplift is successful as an engaging backdrop for casual collaboration. Experts see high potential in such a system to bring together diverse knowledge holders and reveal complex interactions between structural, operational, and financial aspects of their domain. Such systems have further potential in other domains that require collaborative discussion or demonstration of models, forecasts, or cost-benefit analyses to high-level stakeholders. | false | false | [
"Barrett Ens",
"Sarah Goodwin",
"Arnaud Prouzeau",
"Fraser Anderson",
"Florence Y. Wang",
"Samuel Gratzl",
"Zac Lucarelli",
"Brendan Moyle",
"Jim Smiley",
"Tim Dwyer"
] | [] | [
"V"
] | [
{
"name": "Fast Forward",
"url": "https://youtu.be/JrH2dVuxa1I",
"icon": "video"
}
] |
VAST | 2,020 | VATLD: A Visual Analytics System to Assess, Understand and Improve Traffic Light Detection | 10.1109/TVCG.2020.3030350 | Traffic light detection is crucial for environment perception and decision-making in autonomous driving. State-of-the-art detectors are built upon deep Convolutional Neural Networks (CNNs) and have exhibited promising performance. However, one looming concern with CNN based detectors is how to thoroughly evaluate the performance of accuracy and robustness before they can be deployed to autonomous vehicles. In this work, we propose a visual analytics system, VATLD, equipped with a disentangled representation learning and semantic adversarial learning, to assess, understand, and improve the accuracy and robustness of traffic light detectors in autonomous driving applications. The disentangled representation learning extracts data semantics to augment human cognition with human-friendly visual summarization, and the semantic adversarial learning efficiently exposes interpretable robustness risks and enables minimal human interaction for actionable insights. We also demonstrate the effectiveness of various performance improvement strategies derived from actionable insights with our visual analytics system, VATLD, and illustrate some practical implications for safety-critical applications in autonomous driving. | false | false | [
"Liang Gou",
"Lincan Zou",
"Nanxiang Li",
"Michael Hofmann 0010",
"Arvind Kumar Shekar",
"Axel Wendt",
"Ren Liu"
] | [
"BP"
] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2009.12975v1",
"icon": "paper"
}
] |
VAST | 2,020 | Visilant: Visual Support for the Exploration and Analytical Process Tracking in Criminal Investigations | 10.1109/TVCG.2020.3030356 | The daily routine of criminal investigators consists of a thorough analysis of highly complex and heterogeneous data of crime cases. Such data can consist of case descriptions, testimonies, criminal networks, spatial and temporal information, and virtually any other data that is relevant for the case. Criminal investigators work under heavy time pressure to analyze the data for relationships, propose and verify several hypotheses, and derive conclusions, while the data can be incomplete or inconsistent and is changed and updated throughout the investigation, as new findings are added to the case. Based on a four-year intense collaboration with criminalists, we present a conceptual design for a visual tool supporting the investigation workflow and Visilant, a web-based tool for the exploration and analysis of criminal data guided by the proposed design. Visilant aims to support namely the exploratory part of the investigation pipeline, from case overview, through exploration and hypothesis generation, to the case presentation. Visilant tracks the reasoning process and as the data is changing, it informs investigators which hypotheses are affected by the data change and should be revised. The tool was evaluated by senior criminology experts within two sessions and their feedback is summarized in the paper. Additional supplementary material contains the technical details and exemplary case study. | false | false | [
"Kristína Zákopcanová",
"Marko Rehácek",
"Jozef Bátrna",
"Daniel Plakinger",
"Sergej Stoppel",
"Barbora Kozlíková"
] | [] | [
"P",
"V"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2009.09082v1",
"icon": "paper"
},
{
"name": "Fast Forward",
"url": "https://youtu.be/xMcE5toAoxY",
"icon": "video"
}
] |
VAST | 2,020 | Visual Abstraction of Geographical Point Data with Spatial Autocorrelations | 10.1109/VAST50239.2020.00011 | Scatterplots are always employed to visualize geographical point datasets, which often suffer from an overdraw problem due to the increase of data sizes. A variety of sampling strategies have been proposed to reduce overdraw and visual clutter with the spatial densities of points taken into account. However, informative attributes associated with the points also play significant roles in the exploration of geographical datasets. In this paper, we propose an attribute-based abstraction method to simplify the cluttered visualization of large-scale geographical points. Spatial autocorrelations are utilized to measure the attribute relationships of points in local areas, and a novel attribute-based sampling model is designed to generate a subset of points to preserve both density and attribute characteristics of original geographical points. A set of visual designs and user-friendly interactions are implemented, enabling users to capture the spatial distribution of geographical points and get deeper insights into the attribute features across local areas. Case studies and quantitative comparisons based on the real-world datasets further demonstrate the effectiveness of our method in the abstraction and exploration of large-scale geographical point datasets. | false | false | [
"Zhiguang Zhou",
"Xinlong Zhang",
"Zhendong Yang",
"Yuanyuan Chen",
"Yuhua Liu",
"Jin Wen",
"Binjie Chen",
"Ying Zhao 0001",
"Wei Chen 0001"
] | [] | [] | [] |
VAST | 2,020 | Visual Analysis of Argumentation in Essays | 10.1109/TVCG.2020.3030425 | This paper presents a visual analytics system for exploring, analyzing and comparing argument structures in essay corpora. We provide an overview of the corpus by a list of ArguLines which represent the argument units of each essay by a sequence of glyphs. Each glyph encodes the stance, the depth and the relative position of an argument unit. The overview can be ordered in various ways to reveal patterns and outliers. Subsets of essays can be selected and analyzed in detail using the Argument Unit Occurrence Tree which aggregates the argument structures using hierarchical histograms. This hierarchical view facilitates the estimation of statistics and trends concerning the progression of the argumentation in the essays. It also provides insights into the commonalities and differences between selected subsets. The text view is the necessary textual basis to verify conclusions from the other views and the annotation process. Linking the views and interaction techniques for visual filtering, studying the evolution of stance within a subset of essays and scrutinizing the order of argumentative units enable a deep analysis of essay corpora. Our expert reviews confirmed the utility of the system and revealed detailed and previously unknown information about the argumentation in our sample corpus. | false | false | [
"Dora Kiesel",
"Patrick Riehmann",
"Henning Wachsmuth",
"Benno Stein 0001",
"Bernd Fröhlich 0001"
] | [] | [
"V"
] | [
{
"name": "Fast Forward",
"url": "https://youtu.be/Nongy_HBaM8",
"icon": "video"
}
] |
VAST | 2,020 | Visual Analytics for Temporal Hypergraph Model Exploration | 10.1109/TVCG.2020.3030408 | Many processes, from gene interaction in biology to computer networks to social media, can be modeled more precisely as temporal hypergraphs than by regular graphs. This is because hypergraphs generalize graphs by extending edges to connect any number of vertices, allowing complex relationships to be described more accurately and predict their behavior over time. However, the interactive exploration and seamless refinement of such hypergraph-based prediction models still pose a major challenge. We contribute Hyper-Matrix, a novel visual analytics technique that addresses this challenge through a tight coupling between machine-learning and interactive visualizations. In particular, the technique incorporates a geometric deep learning model as a blueprint for problem-specific models while integrating visualizations for graph-based and category-based data with a novel combination of interactions for an effective user-driven exploration of hypergraph models. To eliminate demanding context switches and ensure scalability, our matrix-based visualization provides drill-down capabilities across multiple levels of semantic zoom, from an overview of model predictions down to the content. We facilitate a focused analysis of relevant connections and groups based on interactive user-steering for filtering and search tasks, a dynamically modifiable partition hierarchy, various matrix reordering techniques, and interactive model feedback. We evaluate our technique in a case study and through formative evaluation with law enforcement experts using real-world internet forum communication data. The results show that our approach surpasses existing solutions in terms of scalability and applicability, enables the incorporation of domain knowledge, and allows for fast search-space traversal. With the proposed technique, we pave the way for the visual analytics of temporal hypergraphs in a wide variety of domains. | false | false | [
"Maximilian T. Fischer",
"Devanshu Arya",
"Dirk Streeb",
"Daniel Seebacher",
"Daniel A. Keim",
"Marcel Worring"
] | [] | [
"P",
"V"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2008.07299v2",
"icon": "paper"
},
{
"name": "Fast Forward",
"url": "https://youtu.be/Z1J6RX0W2ao",
"icon": "video"
}
] |
VAST | 2,020 | Visual Analytics of Multivariate Event Sequence Data in Racquet Sports | 10.1109/VAST50239.2020.00009 | In this work, we propose a generic visual analytics framework to support tactic analysis based on data collected from racquet sports (such as tennis and badminton). The proposed approach models each rally in a game as a sequence of hits (i.e., events) until one athlete scores a point. Each hit can be described with a set of attributes, such as the positions of the ball and the techniques used to hit the ball (such as drive and volley in tennis). Thus, the mentioned sequence of hits can be viewed as a multivariate event sequence. By detecting and analyzing the multivariate subsequences that frequently occur in the rallies (namely, tactical patterns), athletes can gain insights into the playing styles adopted by their opponents, and therefore help them identify systematic weaknesses of the opponents and develop counter strategies in matches. To support such analysis effectively, we propose a steerable multivariate sequential pattern mining algorithm with adjustable weights over event attributes, such that the domain expert can obtain frequent tactical patterns according to the attributes specified by himself. We also propose a re-configurable glyph design to help users simultaneously analyze multiple attributes of the hits. The framework further supports comparative analysis of the tactical patterns, e.g., for different athletes or the same athlete playing under different conditions. By applying the framework on two datasets collected in tennis and badminton matches, we demonstrate that the system is generic and effective for tactic analysis in sports and can help identify signature techniques used by individual athletes. Finally, we discuss the strengths and limitations of the proposed approach based on the feedback from the domain experts. | false | false | [
"Jiang Wu",
"Ziyang Guo",
"Zuobin Wang",
"Qingyang Xu",
"Yingcai Wu"
] | [] | [] | [] |
VAST | 2,020 | Visual Causality Analysis of Event Sequence Data | 10.1109/TVCG.2020.3030465 | Causality is crucial to understanding the mechanisms behind complex systems and making decisions that lead to intended outcomes. Event sequence data is widely collected from many real-world processes, such as electronic health records, web clickstreams, and financial transactions, which transmit a great deal of information reflecting the causal relations among event types. Unfortunately, recovering causalities from observational event sequences is challenging, as the heterogeneous and high-dimensional event variables are often connected to rather complex underlying event excitation mechanisms that are hard to infer from limited observations. Many existing automated causal analysis techniques suffer from poor explainability and fail to include an adequate amount of human knowledge. In this paper, we introduce a visual analytics method for recovering causalities in event sequence data. We extend the Granger causality analysis algorithm on Hawkes processes to incorporate user feedback into causal model refinement. The visualization system includes an interactive causal analysis framework that supports bottom-up causal exploration, iterative causal verification and refinement, and causal comparison through a set of novel visualizations and interactions. We report two forms of evaluation: a quantitative evaluation of the model improvements resulting from the user-feedback mechanism, and a qualitative evaluation through case studies in different application domains to demonstrate the usefulness of the system. | false | false | [
"Zhuochen Jin",
"Shunan Guo",
"Nan Chen",
"Daniel Weiskopf",
"David Gotz",
"Nan Cao"
] | [] | [
"P",
"V"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2009.00219v1",
"icon": "paper"
},
{
"name": "Fast Forward",
"url": "https://youtu.be/JWhyQxA7SEg",
"icon": "video"
}
] |
VAST | 2,020 | Visual cohort comparison for spatial single-cell omics-data | 10.1109/TVCG.2020.3030336 | Spatially-resolved omics-data enable researchers to precisely distinguish cell types in tissue and explore their spatial interactions, enabling deep understanding of tissue functionality. To understand what causes or deteriorates a disease and identify related biomarkers, clinical researchers regularly perform large-scale cohort studies, requiring the comparison of such data at cellular level. In such studies, with little a-priori knowledge of what to expect in the data, explorative data analysis is a necessity. Here, we present an interactive visual analysis workflow for the comparison of cohorts of spatially-resolved omics-data. Our workflow allows the comparative analysis of two cohorts based on multiple levels-of-detail, from simple abundance of contained cell types over complex co-localization patterns to individual comparison of complete tissue images. As a result, the workflow enables the identification of cohort-differentiating features, as well as outlier samples at any stage of the workflow. During the development of the workflow, we continuously consulted with domain experts. To show the effectiveness of the workflow, we conducted multiple case studies with domain experts from different application areas and with different data modalities. | false | false | [
"Antonios Somarakis",
"Marieke E. Ijsselsteijn",
"Sietse J. Luk",
"Boyd Kenkhuis",
"Noel F. C. C. de Miranda",
"Boudewijn P. F. Lelieveldt",
"Thomas Höllt"
] | [] | [
"P",
"V"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2006.05175v2",
"icon": "paper"
},
{
"name": "Fast Forward",
"url": "https://youtu.be/xGFBiyBkm38",
"icon": "video"
}
] |
VAST | 2,020 | Visual Neural Decomposition to Explain Multivariate Data Sets | 10.1109/TVCG.2020.3030420 | Investigating relationships between variables in multi-dimensional data sets is a common task for data analysts and engineers. More specifically, it is often valuable to understand which ranges of which input variables lead to particular values of a given target variable. Unfortunately, with an increasing number of independent variables, this process may become cumbersome and time-consuming due to the many possible combinations that have to be explored. In this paper, we propose a novel approach to visualize correlations between input variables and a target output variable that scales to hundreds of variables. We developed a visual model based on neural networks that can be explored in a guided way to help analysts find and understand such correlations. First, we train a neural network to predict the target from the input variables. Then, we visualize the inner workings of the resulting model to help understand relations within the data set. We further introduce a new regularization term for the backpropagation algorithm that encourages the neural network to learn representations that are easier to interpret visually. We apply our method to artificial and real-world data sets to show its utility. | false | false | [
"Johannes Knittel",
"Andrés Lalama",
"Steffen Koch 0001",
"Thomas Ertl"
] | [] | [
"P",
"V"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2009.05502v1",
"icon": "paper"
},
{
"name": "Fast Forward",
"url": "https://youtu.be/dD9wY6R_gt0",
"icon": "video"
}
] |
VAST | 2,020 | VizCommender: Computing Text-Based Similarity in Visualization Repositories for Content-Based Recommendations | 10.1109/TVCG.2020.3030387 | Cloud-based visualization services have made visual analytics accessible to a much wider audience than ever before. Systems such as Tableau have started to amass increasingly large repositories of analytical knowledge in the form of interactive visualization workbooks. When shared, these collections can form a visual analytic knowledge base. However, as the size of a collection increases, so does the difficulty in finding relevant information. Content-based recommendation (CBR) systems could help analysts in finding and managing workbooks relevant to their interests. Toward this goal, we focus on text-based content that is representative of the subject matter of visualizations rather than the visual encodings and style. We discuss the challenges associated with creating a CBR based on visualization specifications and explore more concretely how to implement the relevance measures required using Tableau workbook specifications as the source of content data. We also demonstrate what information can be extracted from these visualization specifications and how various natural language processing techniques can be used to compute similarity between workbooks as one way to measure relevance. We report on a crowd-sourced user study to determine if our similarity measure mimics human judgement. Finally, we choose latent Dirichlet allocation (LDA) as a specific model and instantiate it in a proof-of-concept recommender tool to demonstrate the basic function of our similarity measure. | false | false | [
"Michael Oppermann",
"Robert Kincaid",
"Tamara Munzner"
] | [] | [
"P",
"V"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2008.07702v1",
"icon": "paper"
},
{
"name": "Fast Forward",
"url": "https://youtu.be/wp4CWYFAbZw",
"icon": "video"
}
] |
SciVis | 2,020 | A Fluid Flow Data Set for Machine Learning and its Application to Neural Flow Map Interpolation | 10.1109/TVCG.2020.3028947 | In recent years, deep learning has opened countless research opportunities across many different disciplines. At present, visualization is mainly applied to explore and explain neural networks. Its counterpart-the application of deep learning to visualization problems-requires us to share data more openly in order to enable more scientists to engage in data-driven research. In this paper, we construct a large fluid flow data set and apply it to a deep learning problem in scientific visualization. Parameterized by the Reynolds number, the data set contains a wide spectrum of laminar and turbulent fluid flow regimes. The full data set was simulated on a high-performance compute cluster and contains 8000 time-dependent 2D vector fields, accumulating to more than 16 TB in size. Using our public fluid data set, we trained deep convolutional neural networks in order to set a benchmark for an improved post-hoc Lagrangian fluid flow analysis. In in-situ settings, flow maps are exported and interpolated in order to assess the transport characteristics of time-dependent fluids. Using deep learning, we improve the accuracy of flow map interpolations, allowing a more precise flow analysis at a reduced memory IO footprint. | false | false | [
"Jakob Jakob",
"Markus H. Gross",
"Tobias Günther"
] | [] | [
"V"
] | [
{
"name": "Fast Forward",
"url": "https://youtu.be/6gKr7YmA0QE",
"icon": "video"
}
] |
SciVis | 2,020 | A Suggestive Interface for Untangling Mathematical Knots | 10.1109/TVCG.2020.3028893 | In this paper we present a user-friendly sketching-based suggestive interface for untangling mathematical knots with complicated structures. Rather than treating mathematical knots as if they were 3D ropes, our interface is designed to assist the user to interact with knots with the right sequence of mathematically legal moves. Our knot interface allows one to sketch and untangle knots by proposing the Reidemeister moves, and can guide the user to untangle mathematical knots to the fewest possible number of crossings by suggesting the moves needed. The system highlights parts of the knot where the Reidemeister moves are applicable, suggests the possible moves, and constrains the user's drawing to legal moves only. This ongoing suggestion is based on a Reidemeister move analyzer, that reads the evolving knot in its Gauss code and predicts the needed Reidemeister moves towards the fewest possible number of crossings. For our principal test case of mathematical knot diagrams, this for the first time permits us to visualize, analyze, and deform them in a mathematical visual interface. In addition, understanding of a fairly long mathematical deformation sequence in our interface can be aided by visual analysis and comparison over the identified “key moments” where only critical changes occur in the sequence. Our knot interface allows users to track and trace mathematical knot deformation with a significantly reduced number of visual frames containing only the Reidemeister moves being applied. All these combine to allow a much cleaner exploratory interface for us to analyze and study mathematical knots and their dynamics in topological space. | false | false | [
"Huan Liu",
"Hui Zhang 0006"
] | [] | [
"V"
] | [
{
"name": "Fast Forward",
"url": "https://youtu.be/D0ltvMJteoE",
"icon": "video"
}
] |
SciVis | 2,020 | A Testing Environment for Continuous Colormaps | 10.1109/TVCG.2020.3028955 | Many computer science disciplines (e.g., combinatorial optimization, natural language processing, and information retrieval) use standard or established test suites for evaluating algorithms. In visualization, similar approaches have been adopted in some areas (e.g., volume visualization), while user testimonies and empirical studies have been the dominant means of evaluation in most other areas, such as designing colormaps. In this paper, we propose to establish a test suite for evaluating the design of colormaps. With such a suite, the users can observe the effects when different continuous colormaps are applied to planar scalar fields that may exhibit various characteristic features, such as jumps, local extrema, ridge or valley lines, different distributions of scalar values, different gradients, different signal frequencies, different levels of noise, and so on. The suite also includes an expansible collection of real-world data sets including the most popular data for colormap testing in the visualization literature. The test suite has been integrated into a web-based application for creating continuous colormaps (https://ccctool.com/), facilitating close inter-operation between design and evaluation processes. This new facility complements traditional evaluation methods such as user testimonies and empirical studies. | false | false | [
"Pascal Nardini",
"Min Chen 0001",
"Roxana Bujack",
"Michael Böttinger",
"Gerik Scheuermann"
] | [] | [
"P",
"V"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2009.13133v1",
"icon": "paper"
},
{
"name": "Fast Forward",
"url": "https://youtu.be/QbWd1Iv7eXc",
"icon": "video"
}
] |
SciVis | 2,020 | A Visualization Framework for Multi-scale Coherent Structures in Taylor-Couette Turbulence | 10.1109/TVCG.2020.3028892 | Taylor-Couette flow (TCF) is the turbulent fluid motion created between two concentric and independently rotating cylinders. It has been heavily researched in fluid mechanics thanks to the various nonlinear dynamical phenomena that are exhibited in the flow. As many dense coherent structures overlap each other in TCF, it is challenging to isolate and visualize them, especially when the cylinder rotation ratio is changing. Previous approaches rely on 2D cross sections to study TCF due to its simplicity, which cannot provide the complete information of TCF. In the meantime, standard visualization techniques, such as volume rendering / iso-surfacing of certain attributes and the placement of integral curves/surfaces, usually produce cluttered visualization. To address this challenge and to support domain experts in the analysis of TCF, we developed a visualization framework to separate large-scale structures from the dense, small-scale structures and provide an effective visual representation of these structures. Instead of using a single physical attribute as the standard approach which cannot efficiently separate structures in different scales for TCF, we adapt the feature level-set method to combine multiple attributes and use them as a filter to separate large- and small-scale structures. To visualize these structures, we apply the iso-surface extraction on the kernel density estimate of the distance field generated from the feature level-set. The proposed methods successfully reveal 3D large-scale coherent structures of TCF with different control parameter settings, which are difficult to achieve with the conventional methods. | false | false | [
"Duong B. Nguyen",
"Rodolfo Ostilla Monico",
"Guoning Chen"
] | [] | [
"V"
] | [
{
"name": "Fast Forward",
"url": "https://youtu.be/Al7uuUlkCa4",
"icon": "video"
}
] |
SciVis | 2,020 | Advanced Rendering of Line Data with Ambient Occlusion and Transparency | 10.1109/TVCG.2020.3028954 | 3D Lines are a widespread rendering primitive for the visualization of data from research fields like fluid dynamics or fiber tractography. Global illumination effects and transparent rendering improve the perception of three-dimensional features and decrease occlusion within the data set, thus enabling better understanding of complex line data. We present an efficient approach for high quality GPU-based rendering of line data with ambient occlusion and transparency effects. Our approach builds on GPU-based raycasting of rounded cones, which are geometric primitives similar to truncated cones, but with spherical endcaps. Object space ambient occlusion is provided by an efficient voxel cone tracing approach. Our core contribution is a new fragment visibility sorting strategy that allows for interactive visualization of line data sets with millions of line segments. We improve performance further by exploiting hierarchical opacity maps. | false | false | [
"David Groß",
"Stefan Gumhold"
] | [] | [
"V"
] | [
{
"name": "Fast Forward",
"url": "https://youtu.be/nP6r-ItI8u4",
"icon": "video"
}
] |
SciVis | 2,020 | ChemVA: Interactive Visual Analysis of Chemical Compound Similarity in Virtual Screening | 10.1109/TVCG.2020.3030438 | In the modern drug discovery process, medicinal chemists deal with the complexity of analysis of large ensembles of candidate molecules. Computational tools, such as dimensionality reduction (DR) and classification, are commonly used to efficiently process the multidimensional space of features. These underlying calculations often hinder interpretability of results and prevent experts from assessing the impact of individual molecular features on the resulting representations. To provide a solution for scrutinizing such complex data, we introduce ChemVA, an interactive application for the visual exploration of large molecular ensembles and their features. Our tool consists of multiple coordinated views: Hexagonal view, Detail view, 3D view, Table view, and a newly proposed Difference view designed for the comparison of DR projections. These views display DR projections combined with biological activity, selected molecular features, and confidence scores for each of these projections. This conjunction of views allows the user to drill down through the dataset and to efficiently select candidate compounds. Our approach was evaluated on two case studies of finding structurally similar ligands with similar binding affinity to a target protein, as well as on an external qualitative evaluation. The results suggest that our system allows effective visual inspection and comparison of different high-dimensional molecular representations. Furthermore, ChemVA assists in the identification of candidate compounds while providing information on the certainty behind different molecular representations. | false | false | [
"María Virginia Sabando",
"Pavol Ulbrich",
"Matias Nicolás Selzer",
"Jan Byska",
"Jan Mican",
"Ignacio Ponzoni",
"Axel J. Soto",
"Maria Luján Ganuza",
"Barbora Kozlíková"
] | [] | [
"P",
"V"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2008.13150v1",
"icon": "paper"
},
{
"name": "Fast Forward",
"url": "https://youtu.be/vKMRGer-pAY",
"icon": "video"
}
] |
SciVis | 2,020 | Data-Driven Space-Filling Curves | 10.1109/TVCG.2020.3030473 | We propose a data-driven space-filling curve method for 2D and 3D visualization. Our flexible curve traverses the data elements in the spatial domain in a way that the resulting linearization better preserves features in space compared to existing methods. We achieve such data coherency by calculating a Hamiltonian path that approximately minimizes an objective function that describes the similarity of data values and location coherency in a neighborhood. Our extended variant even supports multiscale data via quadtrees and octrees. Our method is useful in many areas of visualization, including multivariate or comparative visualization, ensemble visualization of 2D and 3D data on regular grids, or multiscale visual analysis of particle simulations. The effectiveness of our method is evaluated with numerical comparisons to existing techniques and through examples of ensemble and multivariate datasets. | false | false | [
"Liang Zhou 0001",
"Christopher R. Johnson 0001",
"Daniel Weiskopf"
] | [] | [
"V"
] | [
{
"name": "Fast Forward",
"url": "https://youtu.be/gEujag3akYw",
"icon": "video"
}
] |
SciVis | 2,020 | Deep Volumetric Ambient Occlusion | 10.1109/TVCG.2020.3030344 | We present a novel deep learning based technique for volumetric ambient occlusion in the context of direct volume rendering. Our proposed Deep Volumetric Ambient Occlusion (DVAO) approach can predict per-voxel ambient occlusion in volumetric data sets, while considering global information provided through the transfer function. The proposed neural network only needs to be executed upon change of this global information, and thus supports real-time volume interaction. Accordingly, we demonstrate DVAO's ability to predict volumetric ambient occlusion, such that it can be applied interactively within direct volume rendering. To achieve the best possible results, we propose and analyze a variety of transfer function representations and injection strategies for deep neural networks. Based on the obtained results we also give recommendations applicable in similar volume learning scenarios. Lastly, we show that DVAO generalizes to a variety of modalities, despite being trained on computed tomography data only. | false | false | [
"Dominik Engel",
"Timo Ropinski"
] | [] | [
"P",
"V"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2008.08345v2",
"icon": "paper"
},
{
"name": "Fast Forward",
"url": "https://youtu.be/AMLlnwqGiIU",
"icon": "video"
}
] |
SciVis | 2,020 | Direct Volume Rendering with Nonparametric Models of Uncertainty | 10.1109/TVCG.2020.3030394 | We present a nonparametric statistical framework for the quantification, analysis, and propagation of data uncertainty in direct volume rendering (DVR). The state-of-the-art statistical DVR framework allows for preserving the transfer function (TF) of the ground truth function when visualizing uncertain data; however, the existing framework is restricted to parametric models of uncertainty. In this paper, we address the limitations of the existing DVR framework by extending the DVR framework for nonparametric distributions. We exploit the quantile interpolation technique to derive probability distributions representing uncertainty in viewing-ray sample intensities in closed form, which allows for accurate and efficient computation. We evaluate our proposed nonparametric statistical models through qualitative and quantitative comparisons with the mean-field and parametric statistical models, such as uniform and Gaussian, as well as Gaussian mixtures. In addition, we present an extension of the state-of-the-art rendering parametric framework to 2D TFs for improved DVR classifications. We show the applicability of our uncertainty quantification framework to ensemble, downsampled, and bivariate versions of scalar field datasets. | false | false | [
"Tushar M. Athawale",
"Bo Ma 0002",
"Elham Sakhaee",
"Christopher R. Johnson 0001",
"Alireza Entezari"
] | [] | [
"P",
"V"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2008.13576v1",
"icon": "paper"
},
{
"name": "Fast Forward",
"url": "https://youtu.be/3R2dlFj5qoc",
"icon": "video"
}
] |
SciVis | 2,020 | Efficient and Flexible Hierarchical Data Layouts for a Unified Encoding of Scalar Field Precision and Resolution | 10.1109/TVCG.2020.3030381 | To address the problem of ever-growing scientific data sizes making data movement a major hindrance to analysis, we introduce a novel encoding for scalar fields: a unified tree of resolution and precision, specifically constructed so that valid cuts correspond to sensible approximations of the original field in the precision-resolution space. Furthermore, we introduce a highly flexible encoding of such trees that forms a parameterized family of data hierarchies. We discuss how different parameter choices lead to different trade-offs in practice, and show how specific choices result in known data representation schemes such as zfp [52], idx [58], and jpeg2000 [76]. Finally, we provide system-level details and empirical evidence on how such hierarchies facilitate common approximate queries with minimal data movement and time, using real-world data sets ranging from a few gigabytes to nearly a terabyte in size. Experiments suggest that our new strategy of combining reductions in resolution and precision is competitive with state-of-the-art compression techniques with respect to data quality, while being significantly more flexible and orders of magnitude faster, and requiring significantly reduced resources. | false | false | [
"Duong Hoang",
"Brian Summa",
"Harsh Bhatia",
"Peter Lindstrom 0001",
"Pavol Klacansky",
"Will Usher 0001",
"Peer-Timo Bremer",
"Valerio Pascucci"
] | [] | [
"V"
] | [
{
"name": "Fast Forward",
"url": "https://youtu.be/4V6r8RUlrX4",
"icon": "video"
}
] |
SciVis | 2,020 | Extraction and Visualization of Poincare Map Topology for Spacecraft Trajectory Design | 10.1109/TVCG.2020.3030402 | Mission designers must study many dynamical models to plan a low-cost spacecraft trajectory that satisfies mission constraints. They routinely use Poincare maps to search for a suitable path through the interconnected web of periodic orbits and invariant manifolds found in multi-body gravitational systems. This paper is concerned with the extraction and interactive visual exploration of this structural landscape to assist spacecraft trajectory planning. We propose algorithmic solutions that address the specific challenges posed by the characterization of the topology in astrodynamics problems and allow for an effective visual analysis of the resulting information. This visualization framework is applied to the circular restricted three-body problem (CR3BP), where it reveals novel periodic orbits with their relevant invariant manifolds in a suitable format for interactive transfer selection. Representative design problems illustrate how spacecraft path planners can leverage our topology visualization to fully exploit the natural dynamics pathways for energy-efficient trajectory designs. | false | false | [
"Xavier Tricoche",
"Wayne R. Schlei",
"Kathleen C. Howell"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2009.03454v1",
"icon": "paper"
}
] |
SciVis | 2,020 | Homomorphic-Encrypted Volume Rendering | 10.1109/TVCG.2020.3030436 | Computationally demanding tasks are typically calculated in dedicated data centers, and real-time visualizations also follow this trend. Some rendering tasks, however, require the highest level of confidentiality so that no other party, besides the owner, can read or see the sensitive data. Here we present a direct volume rendering approach that performs volume rendering directly on encrypted volume data by using the homomorphic Paillier encryption algorithm. This approach ensures that the volume data and rendered image are uninterpretable to the rendering server. Our volume rendering pipeline introduces novel approaches for encrypted-data compositing, interpolation, and opacity modulation, as well as simple transfer function design, where each of these routines maintains the highest level of privacy. We present performance and memory overhead analysis that is associated with our privacy-preserving scheme. Our approach is open and secure by design, as opposed to secure through obscurity. Owners of the data only have to keep their secure key confidential to guarantee the privacy of their volume data and the rendered images. Our work is, to our knowledge, the first privacy-preserving remote volume-rendering approach that does not require that any server involved be trustworthy; even in cases when the server is compromised, no sensitive data will be leaked to a foreign party. | false | false | [
"Sebastian Mazza",
"Daniel Patel",
"Ivan Viola"
] | [] | [
"P",
"V"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2009.02122v2",
"icon": "paper"
},
{
"name": "Fast Forward",
"url": "https://youtu.be/v0eO7uXGzG4",
"icon": "video"
}
] |
SciVis | 2,020 | Improving the Usability of Virtual Reality Neuron Tracing with Topological Elements | 10.1109/TVCG.2020.3030363 | Researchers in the field of connectomics are working to reconstruct a map of neural connections in the brain in order to understand at a fundamental level how the brain processes information. Constructing this wiring diagram is done by tracing neurons through high-resolution image stacks acquired with fluorescence microscopy imaging techniques. While a large number of automatic tracing algorithms have been proposed, these frequently rely on local features in the data and fail on noisy data or ambiguous cases, requiring time-consuming manual correction. As a result, manual and semi-automatic tracing methods remain the state-of-the-art for creating accurate neuron reconstructions. We propose a new semi-automatic method that uses topological features to guide users in tracing neurons and integrate this method within a virtual reality (VR) framework previously used for manual tracing. Our approach augments both visualization and interaction with topological elements, allowing rapid understanding and tracing of complex morphologies. In our pilot study, neuroscientists demonstrated a strong preference for using our tool over prior approaches, reported less fatigue during tracing, and commended the ability to better understand possible paths and alternatives. Quantitative evaluation of the traces reveals that users' tracing speed increased, while retaining similar accuracy compared to a fully manual approach. | false | false | [
"Torin McDonald",
"Will Usher 0001",
"Nathan Morrical",
"Attila Gyulassy",
"Steve Petruzza",
"Frederick Federer",
"Alessandra Angelucci",
"Valerio Pascucci"
] | [] | [
"P",
"V"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2009.01891v1",
"icon": "paper"
},
{
"name": "Fast Forward",
"url": "https://youtu.be/8l5d6hC_K_4",
"icon": "video"
}
] |
SciVis | 2,020 | Interactive Black-Hole Visualization | 10.1109/TVCG.2020.3030452 | We present an efficient algorithm for visualizing the effect of a black hole on its distant surroundings as seen from an observer nearby in orbit. Our solution is GPU-based and builds upon a two-step approach, where we first derive an adaptive grid to map the 360-view around the observer to the distorted celestial sky, which can be directly reused for different camera orientations. Using a grid, we can rapidly trace rays back to the observer through the distorted spacetime, avoiding the heavy workload of standard tracing solutions at real-time rates. By using a novel interpolation technique we can also simulate an observer path by smoothly transitioning between multiple grids. Our approach accepts real star catalogues and environment maps of the celestial sky and generates the resulting black-hole deformations in real time. | false | false | [
"Annemieke Verbraeck",
"Elmar Eisemann"
] | [
"HM"
] | [
"V"
] | [
{
"name": "Fast Forward",
"url": "https://youtu.be/k5YB_4cCJG4",
"icon": "video"
}
] |
SciVis | 2,020 | Interactive Visual Study of Multiple Attributes Learning Model of X-Ray Scattering Images | 10.1109/TVCG.2020.3030384 | Existing interactive visualization tools for deep learning are mostly applied to the training, debugging, and refinement of neural network models working on natural images. However, visual analytics tools are lacking for the specific application of x-ray image classification with multiple structural attributes. In this paper, we present an interactive system for domain scientists to visually study the multiple attributes learning models applied to x-ray scattering images. It allows domain scientists to interactively explore this important type of scientific images in embedded spaces that are defined on the model prediction output, the actual labels, and the discovered feature space of neural networks. Users are allowed to flexibly select instance images, their clusters, and compare them regarding the specified visual representation of attributes. The exploration is guided by the manifestation of model performance related to mutual relationships among attributes, which often affect the learning accuracy and effectiveness. The system thus supports domain scientists to improve the training dataset and model, find questionable attributes labels, and identify outlier images or spurious data clusters. Case studies and scientists' feedback demonstrate its functionalities and usefulness. | false | false | [
"Xinyi Huang",
"Suphanut Jamonnak",
"Ye Zhao 0003",
"Boyu Wang 0001",
"Minh Hoai",
"Kevin G. Yager",
"Wei Xu 0020"
] | [] | [
"P",
"V"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2009.02256v1",
"icon": "paper"
},
{
"name": "Fast Forward",
"url": "https://youtu.be/GKaNHZPqM6I",
"icon": "video"
}
] |
SciVis | 2,020 | Interactive Visualization of Atmospheric Effects for Celestial Bodies | 10.1109/TVCG.2020.3030333 | We present an atmospheric model tailored for the interactive visualization of planetary surfaces. As the exploration of the solar system is progressing with increasingly accurate missions and instruments, the faithful visualization of planetary environments is gaining increasing interest in space research, mission planning, and science communication and education. Atmospheric effects are crucial in data analysis and to provide contextual information for planetary data. Our model correctly accounts for the non-linear path of the light inside the atmosphere (in Earth's case), the light absorption effects by molecules and dust particles, such as the ozone layer and the Martian dust, and a wavelength-dependent phase function for Mie scattering. The model focuses on interactivity, versatility, and customization, and a comprehensive set of interactive controls make it possible to adapt its appearance dynamically. We demonstrate our results using Earth and Mars as examples. However, it can be readily adapted for the exploration of other atmospheres found on, for example, exoplanets. For Earth's atmosphere, we visually compare our results with pictures taken from the International Space Station and against the CIE clear sky model. The Martian atmosphere is reproduced based on available scientific data, feedback from domain experts, and is compared to images taken by the Curiosity rover. The work presented here has been implemented in the OpenSpace system, which enables interactive parameter setting and real-time feedback visualization targeting presentations in a wide range of environments, from immersive dome theaters to virtual reality headsets. | false | false | [
"Jonathas Costa",
"Alexander Bock 0002",
"Carter Emmart",
"Charles D. Hansen",
"Anders Ynnerman",
"Cláudio T. Silva"
] | [] | [
"P",
"V"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2010.03534v1",
"icon": "paper"
},
{
"name": "Fast Forward",
"url": "https://youtu.be/e-JPG3Ki2f4",
"icon": "video"
}
] |
SciVis | 2,020 | IsoTrotter: Visually Guided Empirical Modelling of Atmospheric Convection | 10.1109/TVCG.2020.3030389 | Empirical models, fitted to data from observations, are often used in natural sciences to describe physical behaviour and support discoveries. However, with more complex models, the regression of parameters quickly becomes insufficient, requiring a visual parameter space analysis to understand and optimize the models. In this work, we present a design study for building a model describing atmospheric convection. We present a mixed-initiative approach to visually guided modelling, integrating an interactive visual parameter space analysis with partial automatic parameter optimization. Our approach includes a new, semi-automatic technique called IsoTrotting, where we optimize the procedure by navigating along isocontours of the model. We evaluate the model with unique observational data of atmospheric convection based on flight trajectories of paragliders. | false | false | [
"Juraj Pálenik",
"Thomas Spengler",
"Helwig Hauser"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2008.10301v1",
"icon": "paper"
}
] |
SciVis | 2,020 | Localized Topological Simplification of Scalar Data | 10.1109/TVCG.2020.3030353 | This paper describes a localized algorithm for the topological simplification of scalar data, an essential pre-processing step of topological data analysis (TDA). Given a scalar field $f$ and a selection of extrema to preserve, the proposed localized topological simplification (LTS) derives a function g that is close to $f$ and only exhibits the selected set of extrema. Specifically, sub- and superlevel set components associated with undesired extrema are first locally flattened and then correctly embedded into the global scalar field, such that these regions are guaranteed, from a combinatorial perspective, to no longer contain any undesired extrema. In contrast to previous global approaches, LTS only and independently processes regions of the domain that actually need to be simplified, which already results in a noticeable speedup. Moreover, due to the localized nature of the algorithm, LTS can utilize shared-memory parallelism to simplify regions simultaneously with a high parallel efficiency (70%). Hence, LTS significantly improves interactivity for the exploration of simplification parameters and their effect on subsequent topological analysis. For such exploration tasks, LTS brings the overall execution time of a plethora of TDA pipelines from minutes down to seconds, with an average observed speedup over state-of-the-art techniques of up to $\times 36$. Furthermore, in the special case where preserved extrema are selected based on topological persistence, an adapted version of LTS partially computes the persistence diagram and simultaneously simplifies features below a predefined persistence threshold. The effectiveness of LTS, its parallel efficiency, and its resulting benefits for TDA are demonstrated on several simulated and acquired datasets from different application domains, including physics, chemistry, and biomedical imaging. | false | false | [
"Jonas Lukasczyk",
"Christoph Garth",
"Ross Maciejewski",
"Julien Tierny"
] | [] | [
"P",
"V"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2009.00083v1",
"icon": "paper"
},
{
"name": "Fast Forward",
"url": "https://youtu.be/bP6zVaBXIlA",
"icon": "video"
}
] |
SciVis | 2,020 | Mode Surfaces of Symmetric Tensor Fields: Topological Analysis and Seamless Extraction | 10.1109/TVCG.2020.3030431 | Mode surfaces are the generalization of degenerate curves and neutral surfaces, which constitute 3D symmetric tensor field topology. Efficient analysis and visualization of mode surfaces can provide additional insight into not only degenerate curves and neutral surfaces, but also how these features transition into each other. Moreover, the geometry and topology of mode surfaces can help domain scientists better understand the tensor fields in their applications. Existing mode surface extraction methods can miss features in the surfaces. Moreover, the mode surfaces extracted from neighboring cells have gaps, which make their subsequent analysis difficult. In this paper, we provide novel analysis on the topological structures of mode surfaces, including a common parameterization of all mode surfaces of a tensor field using 2D asymmetric tensors. This allows us to not only better understand the structures in mode surfaces and their interactions with degenerate curves and neutral surfaces, but also develop an efficient algorithm to seamlessly extract mode surfaces, including neutral surfaces. The seamless mode surfaces enable efficient analysis of their geometric structures, such as the principal curvature directions. We apply our analysis and visualization to a number of solid mechanics data sets. | false | false | [
"Botong Qu",
"Lawrence Roy",
"Yue Zhang 0009",
"Eugene Zhang"
] | [] | [
"P",
"V"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2009.04601v1",
"icon": "paper"
},
{
"name": "Fast Forward",
"url": "https://youtu.be/Q12NMVybRUs",
"icon": "video"
}
] |
SciVis | 2,020 | Modeling in the Time of COVID-19: Statistical and Rule-based Mesoscale Models | 10.1109/TVCG.2020.3030415 | We present a new technique for the rapid modeling and construction of scientifically accurate mesoscale biological models. The resulting 3D models are based on a few 2D microscopy scans and the latest knowledge available about the biological entity, represented as a set of geometric relationships. Our new visual-programming technique is based on statistical and rule-based modeling approaches that are rapid to author, fast to construct, and easy to revise. From a few 2D microscopy scans, we determine the statistical properties of various structural aspects, such as the outer membrane shape, the spatial properties, and the distribution characteristics of the macromolecular elements on the membrane. This information is utilized in the construction of the 3D model. Once all the imaging evidence is incorporated into the model, additional information can be incorporated by interactively defining the rules that spatially characterize the rest of the biological entity, such as mutual interactions among macromolecules, and their distances and orientations relative to other structures. These rules are defined through an intuitive 3D interactive visualization as a visual-programming feedback loop. We demonstrate the applicability of our approach on a use case of the modeling procedure of the SARS-CoV-2 virion ultrastructure. This atomistic model, which we present here, can steer biological research to new promising directions in our efforts to fight the spread of the virus. | false | false | [
"Ngan V. T. Nguyen",
"Ondrej Strnad",
"Tobias Klein",
"Deng Luo",
"Ruwayda Alharbi",
"Peter Wonka",
"Martina Maritan",
"Peter Mindek",
"Ludovic Autin",
"David S. Goodsell",
"Ivan Viola"
] | [] | [
"P",
"V"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2005.01804v1",
"icon": "paper"
},
{
"name": "Fast Forward",
"url": "https://youtu.be/g0Ks7a-tITQ",
"icon": "video"
}
] |
SciVis | 2,020 | Objective Observer-Relative Flow Visualization in Curved Spaces for Unsteady 2D Geophysical Flows | 10.1109/TVCG.2020.3030454 | Computing and visualizing features in fluid flow often depends on the observer, or reference frame, relative to which the input velocity field is given. A desired property of feature detectors is therefore that they are objective, meaning independent of the input reference frame. However, the standard definition of objectivity is only given for Euclidean domains and cannot be applied in curved spaces. We build on methods from mathematical physics and Riemannian geometry to generalize objectivity to curved spaces, using the powerful notion of symmetry groups as the basis for definition. From this, we develop a general mathematical framework for the objective computation of observer fields for curved spaces, relative to which other computed measures become objective. An important property of our framework is that it works intrinsically in 2D, instead of in the 3D ambient space. This enables a direct generalization of the 2D computation via optimization of observer fields in flat space to curved domains, without having to perform optimization in 3D. We specifically develop the case of unsteady 2D geophysical flows given on spheres, such as the Earth. Our observer fields in curved spaces then enable objective feature computation as well as the visualization of the time evolution of scalar and vector fields, such that the automatically computed reference frames follow moving structures like vortices in a way that makes them appear to be steady. | false | false | [
"Peter Rautek",
"Matej Mlejnek",
"Johanna Beyer",
"Jakob Troidl",
"Hanspeter Pfister",
"Thomas Theußl",
"Markus Hadwiger"
] | [
"BP"
] | [] | [] |
SciVis | 2,020 | Polyphorm: Structural Analysis of Cosmological Datasets via Interactive Physarum Polycephalum Visualization | 10.1109/TVCG.2020.3030407 | This paper introduces Polyphorm, an interactive visualization and model fitting tool that provides a novel approach for investigating cosmological datasets. Through a fast computational simulation method inspired by the behavior of Physarum polycephalum, an unicellular slime mold organism that efficiently forages for nutrients, astrophysicists are able to extrapolate from sparse datasets, such as galaxy maps archived in the Sloan Digital Sky Survey, and then use these extrapolations to inform analyses of a wide range of other data, such as spectroscopic observations captured by the Hubble Space Telescope. Researchers can interactively update the simulation by adjusting model parameters, and then investigate the resulting visual output to form hypotheses about the data. We describe details of Polyphorm's simulation model and its interaction and visualization modalities, and we evaluate Polyphorm through three scientific use cases that demonstrate the effectiveness of our approach. | false | false | [
"Oskar Elek",
"Joseph N. Burchett",
"J. Xavier Prochaska",
"Angus G. Forbes"
] | [] | [
"P",
"V"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2009.02441v1",
"icon": "paper"
},
{
"name": "Fast Forward",
"url": "https://youtu.be/V_JivWRFJKA",
"icon": "video"
}
] |
SciVis | 2,020 | Ray Tracing Structured AMR Data Using ExaBricks | 10.1109/TVCG.2020.3030470 | Structured Adaptive Mesh Refinement (Structured AMR) enables simulations to adapt the domain resolution to save computation and storage, and has become one of the dominant data representations used by scientific simulations; however, efficiently rendering such data remains a challenge. We present an efficient approach for volume- and iso-surface ray tracing of Structured AMR data on GPU-equipped workstations, using a combination of two different data structures. Together, these data structures allow a ray tracing based renderer to quickly determine which segments along the ray need to be integrated and at what frequency, while also providing quick access to all data values required for a smooth sample reconstruction kernel. Our method makes use of the RTX ray tracing hardware for surface rendering, ray marching, space skipping, and adaptive sampling; and allows for interactive changes to the transfer function and implicit iso-surfacing thresholds. We demonstrate that our method achieves high performance with little memory overhead, enabling interactive high quality rendering of complex AMR data sets on individual GPU workstations. | false | false | [
"Ingo Wald",
"Stefan Zellmann",
"Will Usher 0001",
"Nathan Morrical",
"Ulrich Lang 0002",
"Valerio Pascucci"
] | [] | [
"P",
"V"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2009.03076v1",
"icon": "paper"
},
{
"name": "Fast Forward",
"url": "https://youtu.be/WdieKs0m-gY",
"icon": "video"
}
] |
SciVis | 2,020 | Sea of Genes: A Reflection on Visualising Metagenomic Data for Museums | 10.1109/TVCG.2020.3030412 | We examine the process of designing an exhibit to communicate scientific findings from a complex dataset and unfamiliar domain to the public in a science museum. Our exhibit sought to communicate new lessons based on scientific findings from the domain of metagenomics. This multi-user exhibit had three goals: (1) to inform the public about microbial communities and their daily cycles; (2) to link microbes' activity to the concept of gene expression; (3) and to highlight scientists' use of gene expression data to understand the role of microbes. To address these three goals, we derived visualization designs with three corresponding stories, each corresponding to a goal. We present three successive rounds of design and evaluation of our attempts to convey these goals. We could successfully present one story but had limited success with our second and third goals. This work presents a detailed account of an attempt to explain tightly coupled relationships through storytelling and animation in a multi-user, informal learning environment to a public with varying prior knowledge on the domain and identify lessons for future design. | false | false | [
"Keshav Dasu",
"Kwan-Liu Ma",
"Joyce Ma",
"Jennifer Frazier"
] | [] | [
"V"
] | [
{
"name": "Fast Forward",
"url": "https://youtu.be/r8cFMh0Ze8I",
"icon": "video"
}
] |
SciVis | 2,020 | The Mixture Graph-A Data Structure for Compressing, Rendering, and Querying Segmentation Histograms | 10.1109/TVCG.2020.3030451 | In this paper, we present a novel data structure, called the Mixture Graph. This data structure allows us to compress, render, and query segmentation histograms. Such histograms arise when building a mipmap of a volume containing segmentation IDs. Each voxel in the histogram mipmap contains a convex combination (mixture) of segmentation IDs. Each mixture represents the distribution of IDs in the respective voxel's children. Our method factorizes these mixtures into a series of linear interpolations between exactly two segmentation IDs. The result is represented as a directed acyclic graph (DAG) whose nodes are topologically ordered. Pruning replicate nodes in the tree followed by compression allows us to store the resulting data structure efficiently. During rendering, transfer functions are propagated from sources (leaves) through the DAG to allow for efficient, pre-filtered rendering at interactive frame rates. Assembly of histogram contributions across the footprint of a given volume allows us to efficiently query partial histograms, achieving up to 178× speed-up over naive parallelized range queries. Additionally, we apply the Mixture Graph to compute correctly pre-filtered volume lighting and to interactively explore segments based on shape, geometry, and orientation using multi-dimensional transfer functions. | false | false | [
"Khaled A. Al-Thelaya",
"Marco Agus",
"Jens Schneider 0002"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2009.02702v1",
"icon": "paper"
}
] |
SciVis | 2,020 | TopoMap: A 0-dimensional Homology Preserving Projection of High-Dimensional Data | 10.1109/TVCG.2020.3030441 | Multidimensional Projection is a fundamental tool for high-dimensional data analytics and visualization. With very few exceptions, projection techniques are designed to map data from a high-dimensional space to a visual space so as to preserve some dissimilarity (similarity) measure, such as the Euclidean distance for example. In fact, although adopting distinct mathematical formulations designed to favor different aspects of the data, most multidimensional projection methods strive to preserve dissimilarity measures that encapsulate geometric properties such as distances or the proximity relation between data objects. However, geometric relations are not the only interesting property to be preserved in a projection. For instance, the analysis of particular structures such as clusters and outliers could be more reliably performed if the mapping process gives some guarantee as to topological invariants such as connected components and loops. This paper introduces TopoMap, a novel projection technique which provides topological guarantees during the mapping process. In particular, the proposed method performs the mapping from a high-dimensional space to a visual space, while preserving the 0-dimensional persistence diagram of the Rips filtration of the high-dimensional data, ensuring that the filtrations generate the same connected components when applied to the original as well as projected data. The presented case studies show that the topological guarantee provided by TopoMap not only brings confidence to the visual analytic process but also can be used to assist in the assessment of other projection methods. | false | false | [
"Harish Doraiswamy",
"Julien Tierny",
"Paulo J. S. Silva",
"Luis Gustavo Nonato",
"Cláudio T. Silva"
] | [] | [
"P",
"V"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2009.01512v1",
"icon": "paper"
},
{
"name": "Fast Forward",
"url": "https://youtu.be/2HeB4z_bsOA",
"icon": "video"
}
] |
SciVis | 2,020 | Uncertainty in Continuous Scatterplots, Continuous Parallel Coordinates, and Fibers | 10.1109/TVCG.2020.3030466 | In this paper, we introduce uncertainty to continuous scatterplots and continuous parallel coordinates. We derive respective models, validate them with sampling-based brute-force schemes, and present acceleration strategies for their computation. At the same time, we show that our approach lends itself as well for introducing uncertainty into the definition of fibers in bivariate data. Finally, we demonstrate the properties and the utility of our approach using specifically designed synthetic cases and simulated data. | false | false | [
"Boyan Zheng",
"Filip Sadlo"
] | [] | [
"V"
] | [
{
"name": "Fast Forward",
"url": "https://youtu.be/fcOvkYSbExY",
"icon": "video"
}
] |
SciVis | 2,020 | Uncertainty-Oriented Ensemble Data Visualization and Exploration using Variable Spatial Spreading | 10.1109/TVCG.2020.3030377 | As an important method of handling potential uncertainties in numerical simulations, ensemble simulation has been widely applied in many disciplines. Visualization is a promising and powerful ensemble simulation analysis method. However, conventional visualization methods mainly aim at data simplification and highlighting important information based on domain expertise instead of providing a flexible data exploration and intervention mechanism. Trial-and-error procedures have to be repeatedly conducted by such approaches. To resolve this issue, we propose a new perspective of ensemble data analysis using the attribute variable dimension as the primary analysis dimension. Particularly, we propose a variable uncertainty calculation method based on variable spatial spreading. Based on this method, we design an interactive ensemble analysis framework that provides a flexible interactive exploration of the ensemble data. Particularly, the proposed spreading curve view, the region stability heat map view, and the temporal analysis view, together with the commonly used 2D map view, jointly support uncertainty distribution perception, region selection, and temporal analysis, as well as other analysis requirements. We verify our approach by analyzing a real-world ensemble simulation dataset. Feedback collected from domain experts confirms the efficacy of our framework. | false | false | [
"Mingdong Zhang",
"Li Chen",
"Quan Li",
"Xiaoru Yuan",
"Junhai Yong"
] | [] | [
"P",
"V"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2011.01497v1",
"icon": "paper"
},
{
"name": "Fast Forward",
"url": "https://youtu.be/nP0jC8QnHHY",
"icon": "video"
}
] |
SciVis | 2,020 | V2V: A Deep Learning Approach to Variable-to-Variable Selection and Translation for Multivariate Time-Varying Data | 10.1109/TVCG.2020.3030346 | We present V2V, a novel deep learning framework, as a general-purpose solution to the variable-to-variable (V2V) selection and translation problem for multivariate time-varying data (MTVD) analysis and visualization. V2V leverages a representation learning algorithm to identify transferable variables and utilizes Kullback-Leibler divergence to determine the source and target variables. It then uses a generative adversarial network (GAN) to learn the mapping from the source variable to the target variable via the adversarial, volumetric, and feature losses. V2V takes the pairs of time steps of the source and target variable as input for training. Once trained, it can infer unseen time steps of the target variable given the corresponding time steps of the source variable. Several multivariate time-varying data sets of different characteristics are used to demonstrate the effectiveness of V2V, both quantitatively and qualitatively. We compare V2V against histogram matching and two other deep learning solutions (Pix2Pix and CycleGAN). | false | false | [
"Jun Han 0010",
"Hao Zheng 0006",
"Yunhao Xing",
"Danny Ziyi Chen",
"Chaoli Wang 0001"
] | [] | [
"V"
] | [
{
"name": "Fast Forward",
"url": "https://youtu.be/OsX3v4aUONE",
"icon": "video"
}
] |
SciVis | 2,020 | VC-Net: Deep Volume-Composition Networks for Segmentation and Visualization of Highly Sparse and Noisy Image Data | 10.1109/TVCG.2020.3030374 | The fundamental motivation of the proposed work is to present a new visualization-guided computing paradigm to combine direct 3D volume processing and volume rendered clues for effective 3D exploration. For example, extracting and visualizing microstructures in-vivo have been a long-standing challenging problem. However, due to the high sparseness and noisiness in cerebrovasculature data as well as highly complex geometry and topology variations of micro vessels, it is still extremely challenging to extract the complete 3D vessel structure and visualize it in 3D with high fidelity. In this paper, we present an end-to-end deep learning method, VC-Net, for robust extraction of 3D microvascular structure through embedding the image composition, generated by maximum intensity projection (MIP), into the 3D volumetric image learning process to enhance the overall performance. The core novelty is to automatically leverage the volume visualization technique (e.g., MIP - a volume rendering scheme for 3D volume images) to enhance the 3D data exploration at the deep learning level. The MIP embedding features can enhance the local vessel signal (through canceling out the noise) and adapt to the geometric variability and scalability of vessels, which is of great importance in microvascular tracking. A multi-stream convolutional neural network (CNN) framework is proposed to effectively learn the 3D volume and 2D MIP feature vectors, respectively, and then explore their inter-dependencies in a joint volume-composition embedding space by unprojecting the 2D feature vectors into the 3D volume embedding space. It is noted that the proposed framework can better capture the small/micro vessels and improve the vessel connectivity. 
To our knowledge, this is the first time that a deep learning framework is proposed to construct a joint convolutional embedding space, where the computed vessel probabilities from volume rendering based 2D projection and 3D volume can be explored and integrated synergistically. Experimental results are evaluated and compared with the traditional 3D vessel segmentation methods and the state-of-the-art in deep learning, by using extensive public and real patient (micro-)cerebrovascular image datasets. The application of this accurate segmentation and visualization of sparse and complicated 3D microvascular structure facilitated by our method demonstrates the potential in a powerful MR arteriogram and venogram diagnosis of vascular disease. | false | false | [
"Yifan Wang",
"Guoli Yan",
"Haikuan Zhu",
"Sagar Buch",
"Ying Wang 0060",
"E. Mark Haacke",
"Jing Hua 0001",
"Zichun Zhong"
] | [] | [
"P",
"V"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2009.06184v1",
"icon": "paper"
},
{
"name": "Fast Forward",
"url": "https://youtu.be/aF7PxeHloDs",
"icon": "video"
}
] |
SciVis | 2,020 | Visual Analysis of Large Multivariate Scattered Data using Clustering and Probabilistic Summaries | 10.1109/TVCG.2020.3030379 | Rapidly growing data sizes of scientific simulations pose significant challenges for interactive visualization and analysis techniques. In this work, we propose a compact probabilistic representation to interactively visualize large scattered datasets. In contrast to previous approaches that represent blocks of volumetric data using probability distributions, we model clusters of arbitrarily structured multivariate data. In detail, we discuss how to efficiently represent and store a high-dimensional distribution for each cluster. We observe that it suffices to consider low-dimensional marginal distributions for two or three data dimensions at a time to employ common visual analysis techniques. Based on this observation, we represent high-dimensional distributions by combinations of low-dimensional Gaussian mixture models. We discuss the application of common interactive visual analysis techniques to this representation. In particular, we investigate several frequency-based views, such as density plots in 1D and 2D, density-based parallel coordinates, and a time histogram. We visualize the uncertainty introduced by the representation, discuss a level-of-detail mechanism, and explicitly visualize outliers. Furthermore, we propose a spatial visualization by splatting anisotropic 3D Gaussians for which we derive a closed-form solution. Lastly, we describe the application of brushing and linking to this clustered representation. Our evaluation on several large, real-world datasets demonstrates the scaling of our approach. | false | false | [
"Tobias Rapp",
"Christoph Peters 0002",
"Carsten Dachsbacher"
] | [] | [
"P",
"V"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2008.09544v2",
"icon": "paper"
},
{
"name": "Fast Forward",
"url": "https://youtu.be/qbXFFvO9Y1M",
"icon": "video"
}
] |
SciVis | 2,020 | Visualization of Human Spine Biomechanics for Spinal Surgery | 10.1109/TVCG.2020.3030388 | We propose a visualization application, designed for the exploration of human spine simulation data. Our goal is to support research in biomechanical spine simulation and advance efforts to implement simulation-backed analysis in surgical applications. Biomechanical simulation is a state-of-the-art technique for analyzing load distributions of spinal structures. Through the inclusion of patient-specific data, such simulations may facilitate personalized treatment and customized surgical interventions. Difficulties in spine modelling and simulation can be partly attributed to poor result representation, which may also be a hindrance when introducing such techniques into a clinical environment. Comparisons of measurements across multiple similar anatomical structures and the integration of temporal data make commonly available diagrams and charts insufficient for an intuitive and systematic display of results. Therefore, we facilitate methods such as multiple coordinated views, abstraction and focus and context to display simulation outcomes in a dedicated tool. By linking the result data with patient-specific anatomy, we make relevant parameters tangible for clinicians. Furthermore, we introduce new concepts to show the directions of impact force vectors, which were not accessible before. We integrated our toolset into a spine segmentation and simulation pipeline and evaluated our methods with both surgeons and biomechanical researchers. When comparing our methods against standard representations that are currently in use, we found increases in accuracy and speed in data exploration tasks. In a qualitative review, domain experts deemed the tool highly useful when dealing with simulation result data, which typically combines time-dependent patient movement and the resulting force distributions on spinal structures. | false | false | [
"Pepe Eulzer",
"Sabine Bauer 0001",
"Francis Kilian",
"Kai Lawonn"
] | [] | [
"P",
"V"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2009.11148v1",
"icon": "paper"
},
{
"name": "Fast Forward",
"url": "https://youtu.be/PvH4suYfU-o",
"icon": "video"
}
] |
InfoVis | 2,020 | A Bayesian cognition approach for belief updating of correlation judgement through uncertainty visualizations | 10.1109/TVCG.2020.3029412 | Understanding correlation judgement is important to designing effective visualizations of bivariate data. Prior work on correlation perception has not considered how factors including prior beliefs and uncertainty representation impact such judgements. The present work focuses on the impact of uncertainty communication when judging bivariate visualizations. Specifically, we model how users update their beliefs about variable relationships after seeing a scatterplot with and without uncertainty representation. To model and evaluate the belief updating, we present three studies. Study 1 focuses on a proposed “Line + Cone” visual elicitation method for capturing users' beliefs in an accurate and intuitive fashion. The findings reveal that our proposed method of belief solicitation reduces complexity and accurately captures the users' uncertainty about a range of bivariate relationships. Study 2 leverages the “Line + Cone” elicitation method to measure belief updating on the relationship between different sets of variables when seeing correlation visualization with and without uncertainty representation. We compare changes in users' beliefs to the predictions of Bayesian cognitive models which provide normative benchmarks for how users should update their prior beliefs about a relationship in light of observed data. The findings from Study 2 revealed that one of the visualization conditions with uncertainty communication led to users being slightly more confident about their judgement compared to visualization without uncertainty information. Study 3 builds on findings from Study 2 and explores differences in belief update when the bivariate visualization is congruent or incongruent with users' prior belief. 
Our results highlight the effects of incorporating uncertainty representation, and the potential of measuring belief updating on correlation judgement with Bayesian cognitive models. | false | false | [
"Alireza Karduni",
"Douglas Markant",
"Ryan Wesslen",
"Wenwen Dou"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2008.00058v1",
"icon": "paper"
}
] |
InfoVis | 2,020 | A Design Space of Vision Science Methods for Visualization Research | 10.1109/TVCG.2020.3029413 | A growing number of efforts aim to understand what people see when using a visualization. These efforts provide scientific grounding to complement design intuitions, leading to more effective visualization practice. However, published visualization research currently reflects a limited set of available methods for understanding how people process visualized data. Alternative methods from vision science offer a rich suite of tools for understanding visualizations, but no curated collection of these methods exists in either perception or visualization research. We introduce a design space of experimental methods for empirically investigating the perceptual processes involved with viewing data visualizations to ultimately inform visualization design guidelines. This paper provides a shared lexicon for facilitating experimental visualization research. We discuss popular experimental paradigms, adjustment types, response types, and dependent measures used in vision science research, rooting each in visualization examples. We then discuss the advantages and limitations of each technique. Researchers can use this design space to create innovative studies and progress scientific understanding of design choices and evaluations in visualization. We highlight a history of collaborative success between visualization and vision science research and advocate for a deeper relationship between the two fields that can elaborate on and extend the methodological design space for understanding visualization and vision. | false | false | [
"Madison A. Elliott",
"Christine Nothelfer",
"Cindy Xiong",
"Danielle Albers Szafir"
] | [
"HM"
] | [
"P",
"V"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2009.06855v1",
"icon": "paper"
},
{
"name": "Fast Forward",
"url": "https://youtu.be/v6bJwsHxRLY",
"icon": "video"
}
] |
InfoVis | 2,020 | A Generic Framework and Library for Exploration of Small Multiples through Interactive Piling | 10.1109/TVCG.2020.3028948 | Small multiples are miniature representations of visual information used generically across many domains. Handling large numbers of small multiples imposes challenges on many analytic tasks like inspection, comparison, navigation, or annotation. To address these challenges, we developed a framework and implemented a library called PILING.JS for designing interactive piling interfaces. Based on the piling metaphor, such interfaces afford flexible organization, exploration, and comparison of large numbers of small multiples by interactively aggregating visual objects into piles. Based on a systematic analysis of previous work, we present a structured design space to guide the design of visual piling interfaces. To enable designers to efficiently build their own visual piling interfaces, PILING.JS provides a declarative interface to avoid having to write low-level code and implements common aspects of the design space. An accompanying GUI additionally supports the dynamic configuration of the piling interface. We demonstrate the expressiveness of PILING.JS with examples from machine learning, immunofluorescence microscopy, genomics, and public health. | false | false | [
"Fritz Lekschas",
"Xinyi Zhou 0005",
"Wei Chen 0001",
"Nils Gehlenborg",
"Benjamin Bach",
"Hanspeter Pfister"
] | [
"HM"
] | [
"P",
"V"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2005.00595v2",
"icon": "paper"
},
{
"name": "Fast Forward",
"url": "https://youtu.be/mojW-9Mc2qs",
"icon": "video"
}
] |
InfoVis | 2,020 | A Simple Pipeline for Coherent Grid Maps | 10.1109/TVCG.2020.3028953 | Grid maps are spatial arrangements of simple tiles (often squares or hexagons), each of which represents a spatial element. They are an established, effective way to show complex data per spatial element, using visual encodings within each tile ranging from simple coloring to nested small-multiples visualizations. An effective grid map is coherent with the underlying geographic space: the tiles maintain the contiguity, neighborhoods and identifiability of the corresponding spatial elements, while the grid map as a whole maintains the global shape of the input. Of particular importance are salient local features of the global shape which need to be represented by tiles assigned to the appropriate spatial elements. State-of-the-art techniques can adequately deal only with simple cases, such as close-to-uniform spatial distributions or global shapes that have few characteristic features. We introduce a simple fully-automated 3-step pipeline for computing coherent grid maps. Each step is a well-studied problem: shape decomposition based on salient features, tile-based Mosaic Cartograms, and point-set matching. Our pipeline is a seamless composition of existing techniques for these problems and results in high-quality grid maps. We provide an implementation, demonstrate the efficacy of our approach on various complex datasets, and compare it to the state-of-the-art. | false | false | [
"Wouter Meulemans",
"Max Sondag",
"Bettina Speckmann"
] | [
"HM"
] | [
"V"
] | [
{
"name": "Fast Forward",
"url": "https://youtu.be/ve7qn0VDyUg",
"icon": "video"
}
] |
InfoVis | 2,020 | A Structured Review of Data Management Technology for Interactive Visualization and Analysis | 10.1109/TVCG.2020.3028891 | In the last two decades, interactive visualization and analysis have become a central tool in data-driven decision making. Concurrently to the contributions in data visualization, research in data management has produced technology that directly benefits interactive analysis. Here, we contribute a systematic review of 30 years of work in this adjacent field, and highlight techniques and principles we believe to be underappreciated in visualization work. We structure our review along two axes. First, we use task taxonomies from the visualization literature to structure the space of interactions in usual systems. Second, we created a categorization of data management work that strikes a balance between specificity and generality. Concretely, we contribute a characterization of 131 research papers along these two axes. We find that several notions in data management venues fit interactive visualization systems well: materialized views, approximate query processing, user modeling and query prediction, multi-query optimization, lineage techniques, and indexing techniques. In addition, we find a preponderance of work in materialized views and approximate query processing, most targeting a limited subset of the interaction tasks in the taxonomy we used. This suggests natural avenues of future research both in visualization and data management. Our categorization both changes how we visualization researchers design and build our systems, and highlights where future work is necessary. | false | false | [
"Leilani Battle",
"Carlos Scheidegger"
] | [] | [] | [] |
InfoVis | 2,020 | A Survey of Text Alignment Visualization | 10.1109/TVCG.2020.3028975 | Text alignment is one of the fundamental techniques in text-related domains like natural language processing, computational linguistics, and digital humanities. It compares two or more texts with each other, aiming to find similar textual patterns or to estimate in general how different or similar the texts are. Visualizing alignment results is an essential task, because it helps researchers get a comprehensive overview of individual findings and the overall pattern structure. Different approaches have been developed to visualize and help make sense of these patterns depending on text size, alignment methods, and, most importantly, the underlying research tasks demanding alignment. On the basis of those tasks, we review existing text alignment visualization approaches and discuss their advantages and drawbacks. We finally derive design implications and shed light on related future challenges. | false | false | [
"Tariq Yousef",
"Stefan Jänicke"
] | [] | [
"V"
] | [
{
"name": "Fast Forward",
"url": "https://youtu.be/iM3vrZRYAfE",
"icon": "video"
}
] |
InfoVis | 2,020 | Bayesian-Assisted Inference from Visualized Data | 10.1109/TVCG.2020.3028984 | A Bayesian view of data interpretation suggests that a visualization user should update their existing beliefs about a parameter's value in accordance with the amount of information about the parameter value captured by the new observations. Extending recent work applying Bayesian models to understand and evaluate belief updating from visualizations, we show how the predictions of Bayesian inference can be used to guide more rational belief updating. We design a Bayesian inference-assisted uncertainty analogy that numerically relates uncertainty in observed data to the user's subjective uncertainty, and a posterior visualization that prescribes how a user should update their beliefs given their prior beliefs and the observed data. In a pre-registered experiment on 4,800 people, we find that when a newly observed data sample is relatively small (N=158), both techniques reliably improve people's Bayesian updating on average compared to the current best practice of visualizing uncertainty in the observed data. For large data samples (N=5208), where people's updated beliefs tend to deviate more strongly from the prescriptions of a Bayesian model, we find evidence that the effectiveness of the two forms of Bayesian assistance may depend on people's proclivity toward trusting the source of the data. We discuss how our results provide insight into individual processes of belief updating and subjective uncertainty, and how understanding these aspects of interpretation paves the way for more sophisticated interactive visualizations for analysis and communication. | false | false | [
"Yea-Seul Kim",
"Paula Kayongo",
"Madeleine Grunde-McLaughlin",
"Jessica Hullman"
] | [] | [] | [] |
InfoVis | 2,020 | Calliope: Automatic Visual Data Story Generation from a Spreadsheet | 10.1109/TVCG.2020.3030403 | Visual data stories shown in the form of narrative visualizations, such as a poster or a data video, are frequently used in data-oriented storytelling to facilitate the understanding and memorization of the story content. Although useful, technical barriers, such as data analysis, visualization, and scripting, make the generation of a visual data story difficult. Existing authoring tools rely on users' skills and experiences, which are usually inefficient and still difficult. In this paper, we introduce a novel visual data story generating system, Calliope, which creates visual data stories from an input spreadsheet through an automatic process and facilitates the easy revision of the generated story based on an online story editor. Particularly, Calliope incorporates a new logic-oriented Monte Carlo tree search algorithm that explores the data space given by the input spreadsheet to progressively generate story pieces (i.e., data facts) and organize them in a logical order. The importance of data facts is measured based on information theory, and each data fact is visualized in a chart and captioned by an automatically generated description. We evaluate the proposed technique through three example stories, two controlled experiments, and a series of interviews with 10 domain experts. Our evaluation shows that Calliope is beneficial to efficient visual data story generation. | false | false | [
"Danqing Shi",
"Xinyue Xu",
"Fuling Sun",
"Yang Shi 0007",
"Nan Cao"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2010.09975v1",
"icon": "paper"
}
] |
InfoVis | 2,020 | Cartographic Relief Shading with Neural Networks | 10.1109/TVCG.2020.3030456 | Shaded relief is an effective method for visualising terrain on topographic maps, especially when the direction of illumination is adapted locally to emphasise individual terrain features. However, digital shading algorithms are unable to fully match the expressiveness of hand-crafted masterpieces, which are created through a laborious process by highly specialised cartographers. We replicate hand-drawn relief shading using U-Net neural networks. The deep neural networks are trained with manual shaded relief images of the Swiss topographic map series and terrain models of the same area. The networks generate shaded relief that closely resembles hand-drawn shaded relief art. The networks learn essential design principles from manual relief shading, such as removing unnecessary terrain details, locally adjusting the illumination direction to accentuate individual terrain features, and varying brightness to emphasise larger landforms. Neural network shadings are generated from digital elevation models in a few seconds, and a study with 18 relief shading experts found that they are of high quality. | false | false | [
"Bernhard Jenny",
"Magnus Heitzler",
"Dilpreet Singh",
"Marianna Farmakis-Serebryakova",
"Jeffery Chieh Liu",
"Lorenz Hurni"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2010.01256v1",
"icon": "paper"
}
] |
InfoVis | 2,020 | Chartem: Reviving Chart Images with Data Embedding | 10.1109/TVCG.2020.3030351 | In practice, charts are widely stored as bitmap images. Although easily consumed by humans, they are not convenient for other uses. For example, changing the chart style or type or a data value in a chart image practically requires creating a completely new chart, which is often a time-consuming and error-prone process. To assist these tasks, many approaches have been proposed to automatically extract information from chart images with computer vision and machine learning techniques. Although they have achieved promising preliminary results, there are still a lot of challenges to overcome in terms of robustness and accuracy. In this paper, we propose a novel alternative approach called Chartem to address this issue directly from the root. Specifically, we design a data-embedding schema to encode a significant amount of information into the background of a chart image without interfering with human perception of the chart. The embedded information, when extracted from the image, can enable a variety of visualization applications to reuse or repurpose chart images. To evaluate the effectiveness of Chartem, we conduct a user study and performance experiments on Chartem embedding and extraction algorithms. We further present several prototype applications to demonstrate the utility of Chartem. | false | false | [
"Jiayun Fu",
"Bin Zhu",
"Weiwei Cui",
"Song Ge",
"Yun Wang 0012",
"Haidong Zhang",
"He Huang",
"Yuanyuan Tang",
"Dongmei Zhang 0001",
"Xiaojing Ma 0002"
] | [] | [] | [] |