Dataset schema (each record below lists these fields in this order):

Column               Type    Values / Range
Conference           string  6 distinct values
Year                 int64   1.99k – 2.03k
Title                string  8 – 187 characters
DOI                  string  16 – 32 characters
Abstract             string  128 – 7.15k characters
Accessible           bool    2 classes
Early                bool    2 classes
AuthorNames-Deduped  list    1 – 24 items
Award                list    0 – 2 items
Resources            list    0 – 5 items
ResourceLinks        list    0 – 10 items
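As a reading aid, here is a minimal sketch of the record shape in Python. The type names (PaperRecord, ResourceLink), the underscore spelling of the hyphenated column, and the helper function are illustrative assumptions rather than part of the dataset; the award and resource codes in the comments are simply the values observed in the rows below.

```python
from typing import List, TypedDict

class ResourceLink(TypedDict):
    # One entry of the ResourceLinks column, as seen in the rows below.
    name: str   # e.g. "Paper Preprint", "Fast Forward", "Source Code"
    url: str    # arXiv, YouTube, or project URL
    icon: str   # "paper", "video", "code", or "project_website"

class PaperRecord(TypedDict):
    # One row of the dataset; the hyphenated column name is spelled with
    # an underscore here because Python identifiers cannot contain "-".
    Conference: str                  # one of 6 venues, e.g. "Vis", "CHI"
    Year: int
    Title: str
    DOI: str
    Abstract: str
    Accessible: bool
    Early: bool
    AuthorNames_Deduped: List[str]   # column "AuthorNames-Deduped"
    Award: List[str]                 # codes observed below: "BP", "HM"
    Resources: List[str]             # codes observed below: "P", "V", "PW", "C"
    ResourceLinks: List[ResourceLink]

def preprint_urls(records: List[PaperRecord]) -> List[str]:
    """Collect all paper-preprint URLs from a list of records."""
    return [link["url"]
            for record in records
            for link in record["ResourceLinks"]
            if link["icon"] == "paper"]
```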
CHI
2024
WAVE: Anticipatory Movement Visualization for VR Dancing
10.1145/3613904.3642145
Dance games are one of the most popular game genres in Virtual Reality (VR), and active dance communities have emerged on social VR platforms such as VRChat. However, effective instruction of dancing in VR or through other computerized means remains an unsolved human-computer interaction problem. Existing approaches either only instruct movements partially, abstracting away nuances, or require learning and memorizing symbolic notation. In contrast, we investigate how realistic, full-body movements designed by a professional choreographer can be instructed on the fly, without prior learning or memorization. Towards this end, we describe the design and evaluation of WAVE, a novel anticipatory movement visualization technique where the user joins a group of dancers performing the choreography with different time offsets, similar to spectators making waves in sports events. In our user study (N=36), the participants more accurately followed a choreography using WAVE, compared to following a single model dancer.
false
false
[ "Markus Laattala", "Roosa Piitulainen", "Nadia M. Ady", "Monica Tamariz", "Perttu Hämäläinen" ]
[]
[]
[]
CHI
2024
When the Body Became Data: Historical Data Cultures and Anatomical Illustration
10.1145/3613904.3642056
With changing attitudes around knowledge, medicine, art, and technology, the human body has become a source of information and, ultimately, shareable and analyzable data. Centuries of illustrations and visualizations of the body occur within particular historical, social, and political contexts. These contexts are enmeshed in different so-called data cultures: ways that data, knowledge, and information are conceptualized and collected, structured and shared. In this work, we explore how information about the body was collected as well as the circulation, impact, and persuasive force of the resulting images. We show how mindfulness of data cultural influences remains crucial for today's designers, researchers, and consumers of visualizations. We conclude with a call for the field to reflect on how visualizations are not timeless and contextless mirrors on objective data, but as much a product of our time and place as the visualizations of the past.
false
false
[ "Michael Correll", "Laura A. Garrison" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2402.05014v1", "icon": "paper" } ]
Vis
2023
2D, 2.5D, or 3D? An Exploratory Study on Multilayer Network Visualisations in Virtual Reality
10.1109/TVCG.2023.3327402
Relational information between different types of entities is often modelled by a multilayer network (MLN) – a network with subnetworks represented by layers. The layers of an MLN can be arranged in different ways in a visual representation; however, the impact of the arrangement on the readability of the network is an open question. Therefore, we studied this impact for several commonly occurring tasks related to MLN analysis. Additionally, layer arrangements with a dimensionality beyond 2D, which are common in this scenario, motivate the use of stereoscopic displays. We ran a human subject study utilising a Virtual Reality headset to evaluate 2D, 2.5D, and 3D layer arrangements. The study employs six analysis tasks that cover the spectrum of an MLN task taxonomy, from path finding and pattern identification to comparisons between and across layers. We found no clear overall winner. However, we explore the task-to-arrangement space and derive empirically based recommendations on the effective use of 2D, 2.5D, and 3D layer arrangements for MLNs.
false
false
[ "Stefan P. Feyer", "Bruno Pinaud", "Stephen G. Kobourov", "Nicolas Brich", "Michael Krone", "Andreas Kerren", "Michael Behrisch 0001", "Falk Schreiber", "Karsten Klein 0001" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2307.10674v2", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/76CcqIms1KM", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/0kP3NhrRoSk", "icon": "video" } ]
Vis
2023
A Comparative Study of the Perceptual Sensitivity of Topological Visualizations to Feature Variations
10.1109/TVCG.2023.3326592
Color maps are a commonly used visualization technique in which data are mapped to optical properties, e.g., color or opacity. Color maps, however, do not explicitly convey structures (e.g., positions and scale of features) within data. Topology-based visualizations reveal and explicitly communicate structures underlying data. Although our understanding of what types of features are captured by topological visualizations is good, our understanding of people's perception of those features is not. This paper evaluates the sensitivity of topology-based isocontour, Reeb graph, and persistence diagram visualizations compared to a reference color map visualization for synthetically generated scalar fields on 2-manifold triangular meshes embedded in 3D. In particular, we built and ran a human-subject study that evaluated the perception of data features characterized by Gaussian signals and measured how effectively each visualization technique portrays variations of data features arising from the position and amplitude variation of a mixture of Gaussians. For positional feature variations, the results showed that only the Reeb graph visualization had high sensitivity. For amplitude feature variations, persistence diagrams and color maps demonstrated the highest sensitivity, whereas isocontours showed only weak sensitivity. These results take an important step toward understanding which topology-based tools are best for various data and task scenarios and their effectiveness in conveying topological variations as compared to conventional color mapping.
false
false
[ "Tushar M. Athawale", "Bryan Triana", "Tanmay Kotha", "Dave Pugmire", "Paul Rosen 0001" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2307.08795v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/4IHuBESGvJs", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/e3NtxX19Lwc", "icon": "video" } ]
Vis
2023
A Comparative Visual Analytics Framework for Evaluating Evolutionary Processes in Multi-Objective Optimization
10.1109/TVCG.2023.3326921
Evolutionary multi-objective optimization (EMO) algorithms have been demonstrated to be effective in solving multi-criteria decision-making problems. In real-world applications, analysts often employ several algorithms concurrently and compare their solution sets to gain insight into the characteristics of different algorithms and explore a broader range of feasible solutions. However, EMO algorithms are typically treated as black boxes, leading to difficulties in performing detailed analysis and comparisons between the internal evolutionary processes. Inspired by the successful application of visual analytics tools in explainable AI, we argue that interactive visualization can significantly enhance the comparative analysis between multiple EMO algorithms. In this paper, we present a visual analytics framework that enables the exploration and comparison of evolutionary processes in EMO algorithms. Guided by a literature review and expert interviews, the proposed framework addresses various analytical tasks and establishes a multi-faceted visualization design to support the comparative analysis of intermediate generations in the evolution as well as solution sets. We demonstrate the effectiveness of our framework through case studies on benchmarking and real-world multi-objective optimization problems to elucidate how analysts can leverage our framework to inspect and compare diverse algorithms.
false
false
[ "Yansong Huang", "Zherui Zhang", "Ao Jiao", "Yuxin Ma", "Ran Cheng" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2308.05640v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/CuJOVQaVWrA", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/D5EyyPiax-A", "icon": "video" } ]
Vis
2023
A Computational Design Pipeline to Fabricate Sensing Network Physicalizations
10.1109/TVCG.2023.3327198
Interaction is critical for data analysis and sensemaking. However, designing interactive physicalizations is challenging as it requires cross-disciplinary knowledge in visualization, fabrication, and electronics. Interactive physicalizations are typically produced in an unstructured manner, resulting in unique solutions for a specific dataset, problem, or interaction that cannot be easily extended or adapted to new scenarios or future physicalizations. To mitigate these challenges, we introduce a computational design pipeline to 3D print network physicalizations with integrated sensing capabilities. Networks are ubiquitous, yet their complex geometry also requires significant engineering considerations to provide intuitive, effective interactions for exploration. Using our pipeline, designers can readily produce network physicalizations supporting selection—the most critical atomic operation for interaction—by touch through capacitive sensing and computational inference. Our computational design pipeline introduces a new design paradigm by concurrently integrating the form and interactivity of a physicalization into one cohesive fabrication workflow. We evaluate our approach using (i) computational evaluations, (ii) three usage scenarios focusing on general visualization tasks, and (iii) expert interviews. The design paradigm introduced by our pipeline can lower barriers to physicalization research, creation, and adoption.
false
false
[ "S. Sandra Bae", "Takanori Fujiwara", "Anders Ynnerman", "Ellen Yi-Luen Do", "Michael L. Rivera", "Danielle Albers Szafir" ]
[ "HM" ]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2308.04714v2", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/9fexUfzRIjs", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/xhGKUf2PCYw", "icon": "video" } ]
Vis
2023
A General Framework for Progressive Data Compression and Retrieval
10.1109/TVCG.2023.3327186
In scientific simulations, observations, and experiments, the transfer of data to and from disk and across networks has become a major bottleneck for data analysis and visualization. Compression techniques have been employed to tackle this challenge, but traditional lossy methods often demand conservative error tolerances to meet the numerical accuracy requirements of both anticipated and unknown data analysis tasks. Progressive data compression and retrieval has emerged as a promising solution, where each analysis task dictates its own accuracy needs. However, few analysis algorithms inherently support progressive data processing, and adapting compression techniques, file formats, client/server frameworks, and APIs to support progressivity can be challenging. This paper presents a framework that enables progressive-precision data queries for any data compressor or numerical representation. Our strategy hinges on a multi-component representation that successively reduces the error between the original and compressed field, allowing each field in the progressive sequence to be expressed as a partial sum of components. We have implemented this approach with four established scientific data compressors and assessed its effectiveness using real-world data sets from the SDRBench collection. The results show that our framework competes in accuracy with the standalone compressors it is based upon. Additionally, (de)compression time is proportional to the number of components requested by the user. Finally, our framework allows for fully lossless compression using lossy compressors when a sufficient number of components are employed.
false
false
[ "Victor Antonio Paludetto Magri", "Peter Lindstrom 0001" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2308.11759v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/RWsdS44wpm4", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/MASGoL1NZMM", "icon": "video" } ]
Vis
2023
A Heuristic Approach for Dual Expert/End-User Evaluation of Guidance in Visual Analytics
10.1109/TVCG.2023.3327152
Guidance can support users during the exploration and analysis of complex data. Previous research focused on characterizing the theoretical aspects of guidance in visual analytics and implementing guidance in different scenarios. However, the evaluation of guidance-enhanced visual analytics solutions remains an open research question. We tackle this question by introducing and validating a practical evaluation methodology for guidance in visual analytics. We identify eight quality criteria to be fulfilled and collect expert feedback on their validity. To facilitate actual evaluation studies, we derive two sets of heuristics. The first set targets heuristic evaluations conducted by expert evaluators. The second set facilitates end-user studies where participants actually use a guidance-enhanced system. By following such a dual approach, the different quality criteria of guidance can be examined from two different perspectives, enhancing the overall value of evaluation studies. To test the practical utility of our methodology, we employ it in two studies to gain insight into the quality of two guidance-enhanced visual analytics solutions, one being a work-in-progress research prototype, and the other being a publicly available visualization recommender system. Based on these two evaluations, we derive good practices for conducting evaluations of guidance in visual analytics and identify pitfalls to be avoided during such studies.
false
false
[ "Davide Ceneda", "Christopher Collins 0001", "Mennatallah El-Assady", "Silvia Miksch", "Christian Tominski", "Alessio Arleo" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2308.13052v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/P0LmNH6Sl34", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/e7Hu3C8Mj7M", "icon": "video" } ]
Vis
2023
A Local Iterative Approach for the Extraction of 2D Manifolds from Strongly Curved and Folded Thin-Layer Structures
10.1109/TVCG.2023.3327403
Ridge surfaces represent important features for the analysis of 3-dimensional (3D) datasets in diverse applications and are often derived from varying underlying data including flow fields, geological fault data, and point data, but they can also be present in the original scalar images acquired using a plethora of imaging techniques. Our work is motivated by the analysis of image data acquired using micro-computed tomography (µCT) of ancient, rolled and folded thin-layer structures such as papyrus, parchment, and paper as well as silver and lead sheets. From these documents we know that they are 2-dimensional (2D) in nature. Hence, we are particularly interested in reconstructing 2D manifolds that approximate the document's structure. The image data from which we want to reconstruct the 2D manifolds are often very noisy and represent folded, densely-layered structures with many artifacts, such as ruptures or layer splitting and merging. Previous ridge-surface extraction methods fail to extract the desired 2D manifold for such challenging data. We have therefore developed a novel method to extract 2D manifolds. The proposed method uses a local fast marching scheme in combination with a separation of the region covered by fast marching into two sub-regions. The 2D manifold of interest is then extracted as the surface separating the two sub-regions. The local scheme can be applied for both automatic propagation as well as interactive analysis. We demonstrate the applicability and robustness of our method on both artificial data as well as real-world data including folded silver and papyrus sheets.
false
false
[ "Nicolas Klenert", "Verena Lepper", "Daniel Baum" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2308.07070v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/nJWJrZd0KFI", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/B10fjAOfFMo", "icon": "video" } ]
Vis
2023
A Parallel Framework for Streaming Dimensionality Reduction
10.1109/TVCG.2023.3326515
The visualization of streaming high-dimensional data often needs to consider the speed of dimensionality reduction algorithms, the quality of visualized data patterns, and the stability of view graphs that usually change over time with new data. Existing methods of streaming high-dimensional data visualization primarily line up essential modules in a serial manner and often face challenges in satisfying all these design considerations. In this research, we propose a novel parallel framework for streaming high-dimensional data visualization to achieve high data processing speed, high quality in data patterns, and good stability in visual presentations. This framework arranges all essential modules in parallel to mitigate the delays caused by module waiting in serial setups. In addition, to facilitate the parallel pipeline, we redesign these modules with a parametric non-linear embedding method for new data embedding, an incremental learning method for online embedding function updating, and a hybrid strategy for optimized embedding updating. We also improve the coordination mechanism among these modules. Our experiments show that our method has advantages in embedding speed, quality, and stability over other existing methods to visualize streaming high-dimensional data.
false
false
[ "Jiazhi Xia", "Linquan Huang", "Yiping Sun", "Zhiwei Deng", "Xiaolong Luke Zhang", "Minfeng Zhu" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/lHQAMgYuyvA", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/L39askTrxh8", "icon": "video" } ]
Vis
2023
A Task-Parallel Approach for Localized Topological Data Structures
10.1109/TVCG.2023.3327182
Unstructured meshes are characterized by data points irregularly distributed in the Euclidean space. Due to the irregular nature of these data, computing connectivity information between the mesh elements requires much more time and memory than on uniformly distributed data. To lower storage costs, dynamic data structures have been proposed. These data structures compute connectivity information on the fly and discard it when no longer needed. However, on-the-fly computation slows down algorithms and results in a negative impact on the time performance. To address this issue, we propose a new task-parallel approach to proactively compute mesh connectivity. Unlike previous approaches implementing data-parallel models, where all threads run the same type of instructions, our task-parallel approach allows threads to run different functions. Specifically, some threads run the algorithm of choice while other threads compute connectivity information before they are actually needed. The approach was implemented in the new Accelerated Clustered TOPOlogical (ACTOPO) data structure, which can support any processing algorithm requiring mesh connectivity information. Our experiments show that ACTOPO combines the benefits of state-of-the-art memory-efficient (TTK CompactTriangulation) and time-efficient (TTK ExplicitTriangulation) topological data structures. It occupies a similar amount of memory as TTK CompactTriangulation while providing up to 5x speedup. Moreover, it achieves comparable time performance as TTK ExplicitTriangulation while using only half of the memory space.
false
false
[ "Guoxi Liu", "Federico Iuricich" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2308.15638v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/DX1Pi4V7iQw", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/K7q5fiqmJq8", "icon": "video" } ]
Vis
2023
A Unified Interactive Model Evaluation for Classification, Object Detection, and Instance Segmentation in Computer Vision
10.1109/TVCG.2023.3326588
Existing model evaluation tools mainly focus on evaluating classification models, leaving a gap in evaluating more complex models, such as object detection. In this paper, we develop an open-source visual analysis tool, Uni-Evaluator, to support a unified model evaluation for classification, object detection, and instance segmentation in computer vision. The key idea behind our method is to formulate both discrete and continuous predictions in different tasks as unified probability distributions. Based on these distributions, we develop 1) a matrix-based visualization to provide an overview of model performance; 2) a table visualization to identify the problematic data subsets where the model performs poorly; 3) a grid visualization to display the samples of interest. These visualizations work together to facilitate the model evaluation from a global overview to individual samples. Two case studies demonstrate the effectiveness of Uni-Evaluator in evaluating model performance and making informed improvements.
false
false
[ "Changjian Chen", "Yukai Guo", "Fengyuan Tian", "Shilong Liu", "Weikai Yang", "Zhaowei Wang", "Jing Wu 0004", "Hang Su", "Hanspeter Pfister", "Shixia Liu" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2308.05168v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/4JK2zn0LYdI", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/zM0FXNzq6PM", "icon": "video" } ]
Vis
2023
Action-Evaluator: A Visualization Approach for Player Action Evaluation in Soccer
10.1109/TVCG.2023.3326524
In soccer, player action evaluation provides a fine-grained method to analyze player performance and plays an important role in improving winning chances in future matches. However, previous studies on action evaluation only provide a score for each action, and hardly support inspecting and comparing player actions integrated with complex match context information such as team tactics and player locations. In this work, we collaborate with soccer analysts and coaches to characterize the domain problems of evaluating player performance based on action scores. We design a tailored visualization of soccer player actions that places the action choice together with the tactic it belongs to as well as the player locations in the same view. Based on the design, we introduce a visual analytics system, Action-Evaluator, to facilitate a comprehensive player action evaluation through player navigation, action investigation, and action explanation. With the system, analysts can find players to be analyzed efficiently, learn how they performed under various match situations, and obtain valuable insights to improve their action choices. The usefulness and effectiveness of this work are demonstrated by two case studies on a real-world dataset and an expert interview.
false
false
[ "Anqi Cao", "Xiao Xie", "Mingxu Zhou", "Hui Zhang 0051", "Mingliang Xu", "Yingcai Wu" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/FMmI8pz3e5M", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/gfn4zJemyaQ", "icon": "video" } ]
Vis
2023
Adaptive Assessment of Visualization Literacy
10.1109/TVCG.2023.3327165
Visualization literacy is an essential skill for accurately interpreting data to inform critical decisions. Consequently, it is vital to understand the evolution of this ability and devise targeted interventions to enhance it, requiring concise and repeatable assessments of visualization literacy for individuals. However, current assessments, such as the Visualization Literacy Assessment Test (VLAT), are time-consuming due to their fixed, lengthy format. To address this limitation, we develop two streamlined computerized adaptive tests (CATs) for visualization literacy, A-VLAT and A-CALVI, which measure the same set of skills as their original versions in half the number of questions. Specifically, we (1) employ item response theory (IRT) and non-psychometric constraints to construct adaptive versions of the assessments, (2) finalize the configurations of adaptation through simulation, (3) refine the composition of test items of A-CALVI via a qualitative study, and (4) demonstrate the test-retest reliability (ICC: 0.98 and 0.98) and convergent validity (correlation: 0.81 and 0.66) of both CATs via four online studies. We discuss practical recommendations for using our CATs and opportunities for further customization to leverage the full potential of adaptive assessments. All supplemental materials are available at https://osf.io/a6258/.
false
false
[ "Yuan Cui", "Lily W. Ge", "Yiren Ding", "Fumeng Yang", "Lane Harrison", "Matthew Kay 0001" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2308.14147v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/fKJifGleSYY", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/4JWm4X8JQQQ", "icon": "video" } ]
Vis
2023
Adaptively Placed Multi-Grid Scene Representation Networks for Large-Scale Data Visualization
10.1109/TVCG.2023.3327194
Scene representation networks (SRNs) have been recently proposed for compression and visualization of scientific data. However, state-of-the-art SRNs do not adapt the allocation of available network parameters to the complex features found in scientific data, leading to a loss in reconstruction quality. We address this shortcoming with an adaptively placed multi-grid SRN (APMGSRN) and propose a domain decomposition training and inference technique for accelerated parallel training on multi-GPU systems. We also release an open-source neural volume rendering application that allows plug-and-play rendering with any PyTorch-based SRN. Our proposed APMGSRN architecture uses multiple spatially adaptive feature grids that learn where to be placed within the domain to dynamically allocate more neural network resources where error is high in the volume, improving state-of-the-art reconstruction accuracy of SRNs for scientific data without requiring expensive octree refining, pruning, and traversal like previous adaptive models. In our domain decomposition approach for representing large-scale data, we train a set of APMGSRNs in parallel on separate bricks of the volume to reduce training time while avoiding the overhead necessary for an out-of-core solution for volumes too large to fit in GPU memory. After training, the lightweight SRNs are used for real-time neural volume rendering in our open-source renderer, where arbitrary view angles and transfer functions can be explored. A copy of this paper, all code, all models used in our experiments, and all supplemental materials and videos are available at https://github.com/skywolf829/APMGSRN.
false
false
[ "Skylar W. Wurster", "Tianyu Xiong", "Han-Wei Shen", "Hanqi Guo 0001", "Tom Peterka" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2308.02494v2", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/CM2h8KUgsMg", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/DNV-hfpiBLI", "icon": "video" } ]
Vis
2023
Affective Visualization Design: Leveraging the Emotional Impact of Data
10.1109/TVCG.2023.3327385
In recent years, more and more researchers have reflected on the undervaluation of emotion in data visualization and highlighted the importance of considering human emotion in visualization design. Meanwhile, an increasing number of studies have been conducted to explore emotion-related factors. However, so far, this research area is still in its early stages and faces a set of challenges, such as the unclear definition of key concepts, the insufficient justification of why emotion is important in visualization design, and the lack of characterization of the design space of affective visualization design. To address these challenges, first, we conducted a literature review and identified three research lines that examined both emotion and data visualization. We clarified the differences between these research lines and kept 109 papers that studied or discussed how data visualization communicates and influences emotion. Then, we coded the 109 papers in terms of how they justified the legitimacy of considering emotion in visualization design (i.e., why emotion is important) and identified five argumentative perspectives. Based on these papers, we also identified 61 projects that practiced affective visualization design. We coded these design projects in three dimensions, including design fields (where), design tasks (what), and design methods (how), to explore the design space of affective visualization design.
false
false
[ "Xingyu Lan", "Yanqiu Wu", "Nan Cao 0001" ]
[ "BP" ]
[ "PW", "P", "V" ]
[ { "name": "Project Website", "url": "https://affectivevis.github.io/", "icon": "project_website" }, { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2308.02831v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/Q7KBPKW85qc", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/cPMVHC2cAWE", "icon": "video" } ]
Vis
2023
Are We Closing the Loop Yet? Gaps in the Generalizability of VIS4ML Research
10.1109/TVCG.2023.3326591
Visualization for machine learning (VIS4ML) research aims to help experts apply their prior knowledge to develop, understand, and improve the performance of machine learning models. In conceiving VIS4ML systems, researchers characterize the nature of human knowledge to support human-in-the-loop tasks, design interactive visualizations to make ML components interpretable and elicit knowledge, and evaluate the effectiveness of human-model interchange. We survey recent VIS4ML papers to assess the generalizability of research contributions and claims in enabling human-in-the-loop ML. Our results show potential gaps between the current scope of VIS4ML research and aspirations for its use in practice. We find that while papers motivate that VIS4ML systems are applicable beyond the specific conditions studied, conclusions are often overfitted to non-representative scenarios, are based on interactions with a small set of ML experts and well-understood datasets, fail to acknowledge crucial dependencies, and hinge on decisions that lack justification. We discuss approaches to close the gap between aspirations and research claims and suggest documentation practices to report generality constraints that better acknowledge the exploratory nature of VIS4ML research.
false
false
[ "Hariharan Subramonyam", "Jessica Hullman" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2308.06290v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/oyxGvCFuobc", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/fn5zAUom2Mk", "icon": "video" } ]
Vis
2023
ARGUS: Visualization of AI-Assisted Task Guidance in AR
10.1109/TVCG.2023.3327396
The concept of augmented reality (AR) assistants has captured the human imagination for decades, becoming a staple of modern science fiction. To pursue this goal, it is necessary to develop artificial intelligence (AI)-based methods that simultaneously perceive the 3D environment, reason about physical tasks, and model the performer, all in real-time. Within this framework, a wide variety of sensors are needed to generate data across different modalities, such as audio, video, depth, speech, and time-of-flight. The required sensors are typically part of the AR headset, providing performer sensing and interaction through visual, audio, and haptic feedback. AI assistants not only record the performer as they perform activities, but also require machine learning (ML) models to understand and assist the performer as they interact with the physical world. Therefore, developing such assistants is a challenging task. We propose ARGUS, a visual analytics system to support the development of intelligent AR assistants. Our system was designed as part of a multi-year-long collaboration between visualization researchers and ML and AR experts. This co-design process has led to advances in the visualization of ML in AR. Our system allows for online visualization of object, action, and step detection as well as offline analysis of previously recorded AR sessions. It visualizes not only the multimodal sensor data streams but also the output of the ML models. This allows developers to gain insights into the performer activities as well as the ML models, helping them troubleshoot, improve, and fine-tune the components of the AR assistant.
false
false
[ "Sonia Castelo", "João Rulff", "Erin McGowan", "Bea Steers", "Guande Wu", "Shaoyu Chen", "Irán R. Román", "Roque Lopez", "Ethan Brewer", "Chen Zhao", "Jing Qian", "Kyunghyun Cho", "He He 0001", "Qi Sun", "Huy T. Vo", "Juan Pablo Bello", "Michael Krone", "Cláudio T. Silva" ]
[ "HM" ]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2308.06246v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/QIPtDJ57SK4", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/qBDonJbkDjQ", "icon": "video" } ]
Vis
2023
AttentionViz: A Global View of Transformer Attention
10.1109/TVCG.2023.3327163
Transformer models are revolutionizing machine learning, but their inner workings remain mysterious. In this work, we present a new visualization technique designed to help researchers understand the self-attention mechanism in transformers that allows these models to learn rich, contextual relationships between elements of a sequence. The main idea behind our method is to visualize a joint embedding of the query and key vectors used by transformer models to compute attention. Unlike previous attention visualization techniques, our approach enables the analysis of global patterns across multiple input sequences. We create an interactive visualization tool, AttentionViz (demo: http://attentionviz.com), based on these joint query-key embeddings, and use it to study attention mechanisms in both language and vision transformers. We demonstrate the utility of our approach in improving model understanding and offering new insights about query-key interactions through several application scenarios and expert feedback.
false
false
[ "Catherine Yeh", "Yida Chen", "Aoyu Wu", "Cynthia Chen", "Fernanda B. Viégas", "Martin Wattenberg" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2305.03210v2", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/uVoPKrRy3ik", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/YBxRfWTFb3U", "icon": "video" } ]
Vis
2023
Attribute-Aware RBFs: Interactive Visualization of Time Series Particle Volumes Using RT Core Range Queries
10.1109/TVCG.2023.3327366
Smoothed-particle hydrodynamics (SPH) is a mesh-free method used to simulate volumetric media in fluids, astrophysics, and solid mechanics. Visualizing these simulations is problematic because these datasets often contain millions, if not billions, of particles carrying physical attributes and moving over time. Radial basis functions (RBFs) are used to model particles, and overlapping particles are interpolated to reconstruct a high-quality volumetric field; however, this interpolation process is expensive and makes interactive visualization difficult. Existing RBF interpolation schemes do not account for color-mapped attributes and are instead constrained to visualizing just the density field. To address these challenges, we exploit ray tracing cores in modern GPU architectures to accelerate scalar field reconstruction. We use a novel RBF interpolation scheme to integrate per-particle colors and densities, and leverage GPU-parallel tree construction and refitting to quickly update the tree as the simulation animates over time or when the user manipulates particle radii. We also propose a Hilbert reordering scheme to cluster particles together at the leaves of the tree to reduce tree memory consumption. Finally, we reduce the noise of volumetric shadows by adopting a spatiotemporal blue noise sampling scheme. Our method can provide a more detailed and interactive view of these large, volumetric, time-series particle datasets than traditional methods, leading to new insights into these physics simulations.
false
false
[ "Nate Morrical", "Stefan Zellmann", "Alper Sahistan", "Patrick C. Shriwise", "Valerio Pascucci" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/Tf56UeFyPSI", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/iIDCZbLPzRo", "icon": "video" } ]
Vis
2023
Average Estimates in Line Graphs Are Biased Toward Areas of Higher Variability
10.1109/TVCG.2023.3326589
We investigate variability overweighting, a previously undocumented bias in line graphs, where estimates of average value are biased toward areas of higher variability in that line. We found this effect across two preregistered experiments with 140 and 420 participants. These experiments also show that the bias is reduced when using a dot encoding of the same series. We can model the bias with the average of the data series and the average of the points drawn along the line. This bias might arise because higher variability leads to stronger weighting in the average calculation, either due to the longer line segments (even though those segments contain the same number of data values) or line segments with higher variability being otherwise more visually salient. Understanding and predicting this bias is important for visualization design guidelines, recommendation systems, and tool builders, as the bias can adversely affect estimates of averages and trends.
false
false
[ "Dominik Moritz", "Lace M. K. Padilla", "Francis Nguyen", "Steven L. Franconeri" ]
[ "HM" ]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2308.03903v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/yhMcIVE-hDE", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/XTA-DNCITtA", "icon": "video" } ]
Vis
2023
Calliope-Net: Automatic Generation of Graph Data Facts via Annotated Node-Link Diagrams
10.1109/TVCG.2023.3326925
Graph or network data are widely studied in both data mining and visualization communities to review the relationship among different entities and groups. The data facts derived from graph visual analysis are important to help understand the social structures of complex data, especially for data journalism. However, it is challenging for data journalists to discover graph data facts and manually organize correlated facts around a meaningful topic due to the complexity of graph data and the difficulty of interpreting graph narratives. Therefore, we present an automatic graph facts generation system, Calliope-Net, which consists of a fact discovery module, a fact organization module, and a visualization module. It creates annotated node-link diagrams with facts automatically discovered and organized from network data. A novel layout algorithm is designed to present meaningful and visually appealing annotated graphs. We evaluate the proposed system with two case studies and an in-lab user study. The results show that Calliope-Net can benefit users in discovering and understanding graph data facts with visually pleasing annotated visualizations.
false
false
[ "Qing Chen 0001", "Nan Chen", "Wei Shuai", "Guande Wu", "Zhe Xu 0007", "Hanghang Tong", "Nan Cao 0001" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2308.06441v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/CI8z4kdshpo", "icon": "video" } ]
Vis
2023
Causality-Based Visual Analysis of Questionnaire Responses
10.1109/TVCG.2023.3327376
As the final stage of questionnaire analysis, causal reasoning is the key to turning responses into valuable insights and actionable items for decision-makers. During the questionnaire analysis, classical statistical methods (e.g., Differences-in-Differences) have been widely exploited to evaluate causality between questions. However, due to the huge search space and complex causal structure in data, causal reasoning is still extremely challenging and time-consuming, and often conducted in a trial-and-error manner. On the other hand, existing visual methods of causal reasoning face the challenge of bringing scalability and expert knowledge together and can hardly be used in the questionnaire scenario. In this work, we present a systematic solution to help analysts effectively and efficiently explore questionnaire data and derive causality. Based on the association mining algorithm, we mine question combinations with potential inner causality and help analysts interactively explore the causal sub-graph of each question combination. Furthermore, leveraging the requirements collected from the experts, we built a visualization tool and conducted a comparative study with the state-of-the-art system to show the usability and efficiency of our system.
false
false
[ "Renzhong Li", "Weiwei Cui", "Tianqi Song", "Xiao Xie", "Rui Ding 0001", "Yun Wang 0012", "Haidong Zhang", "Hong Zhou 0004", "Yingcai Wu" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/W4wfW9-WWLw", "icon": "video" } ]
Vis
2023
Challenges and Opportunities in Data Visualization Education: A Call to Action
10.1109/TVCG.2023.3327378
This paper is a call to action for research and discussion on data visualization education. As visualization evolves and spreads through our professional and personal lives, we need to understand how to support and empower a broad and diverse community of learners in visualization. Data Visualization is a diverse and dynamic discipline that combines knowledge from different fields, is tailored to suit diverse audiences and contexts, and frequently incorporates tacit knowledge. This complex nature leads to a series of interrelated challenges for data visualization education. Driven by a lack of consolidated knowledge, overview, and orientation for visualization education, the 21 authors of this paper—educators and researchers in data visualization—identify and describe 19 challenges informed by our collective practical experience. We organize these challenges around seven themes: People, Goals & Assessment, Environment, Motivation, Methods, Materials, and Change. Across these themes, we formulate 43 research questions to address these challenges. As part of our call to action, we then conclude with 5 cross-cutting opportunities and respective action items: embrace DIVERSITY+INCLUSION, build COMMUNITIES, conduct RESEARCH, act AGILE, and relish RESPONSIBILITY. We aim to inspire researchers, educators and learners to drive visualization education forward and discuss why, how, who and where we educate, as we learn to use visualization to address challenges across many scales and many domains in a rapidly changing world: viseducationchallenges.github.io.
false
false
[ "Benjamin Bach", "Mandy Keck", "Fateme Rajabiyazdi", "Tatiana Losev", "Isabel Meirelles", "Jason Dykes", "Robert S. Laramee", "Mashael AlKadi", "Christina Stoiber", "Samuel Huron", "Charles Perin", "Luiz Morais", "Wolfgang Aigner", "Doris Kosminsky", "Magdalena Boucher", "Søren Knudsen", "Areti Manataki", "Jan Aerts", "Uta Hinrichs", "Jonathan C. Roberts", "Sheelagh Carpendale" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2308.07703v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/_xNwKJV4w2M", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/MtsLr5b9pDM", "icon": "video" } ]
Vis
2023
Character-Oriented Design for Visual Data Storytelling
10.1109/TVCG.2023.3326578
When telling a data story, an author has an intention they seek to convey to an audience. This intention can be of many forms such as to persuade, to educate, to inform, or even to entertain. In addition to expressing their intention, the story plot must balance being consumable and enjoyable while preserving scientific integrity. In data stories, numerous methods have been identified for constructing and presenting a plot. However, there is an opportunity to expand how we think and create the visual elements that present the story. Stories are brought to life by characters; often they are what make a story captivating, enjoyable, memorable, and facilitate following the plot until the end. Through the analysis of 160 existing data stories, we systematically investigate and identify distinguishable features of characters in data stories, and we illustrate how they feed into the broader concept of “character-oriented design”. We identify the roles and visual representations data characters assume as well as the types of relationships these roles have with one another. We identify characteristics of antagonists as well as define conflict in data stories. We find the need for an identifiable central character that the audience latches on to in order to follow the narrative and identify their visual representations. We then illustrate “character-oriented design” by showing how to develop data characters with common data story plots. With this work, we present a framework for data characters derived from our analysis; we then offer our extension to the data storytelling process using character-oriented design. To access our supplemental materials please visit https://chaorientdesignds.github.io/.
false
false
[ "Keshav Dasu", "Yun-Hsin Kuo", "Kwan-Liu Ma" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2308.07557v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/cV6Oro-gfBg", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/om-urGmmd5o", "icon": "video" } ]
Vis
2023
CLAMS: A Cluster Ambiguity Measure for Estimating Perceptual Variability in Visual Clustering
10.1109/TVCG.2023.3327201
Visual clustering is a common perceptual task in scatterplots that supports diverse analytics tasks (e.g., cluster identification). However, even with the same scatterplot, the ways of perceiving clusters (i.e., conducting visual clustering) can differ due to the differences among individuals and ambiguous cluster boundaries. Although such perceptual variability casts doubt on the reliability of data analysis based on visual clustering, we lack a systematic way to efficiently assess this variability. In this research, we study perceptual variability in conducting visual clustering, which we call Cluster Ambiguity. To this end, we introduce CLAMS, a data-driven visual quality measure for automatically predicting cluster ambiguity in monochrome scatterplots. We first conduct a qualitative study to identify key factors that affect the visual separation of clusters (e.g., proximity or size difference between clusters). Based on study findings, we deploy a regression module that estimates the human-judged separability of two clusters. Then, CLAMS predicts cluster ambiguity by analyzing the aggregated results of all pairwise separability between clusters that are generated by the module. CLAMS outperforms widely-used clustering techniques in predicting ground truth cluster ambiguity. Meanwhile, CLAMS exhibits performance on par with human annotators. We conclude our work by presenting two applications for optimizing and benchmarking data mining techniques using CLAMS. The interactive demo of CLAMS is available at clusterambiguity.dev.
false
false
[ "Hyeon Jeon", "Ghulam Jilani Quadri", "Hyunwook Lee", "Paul Rosen 0001", "Danielle Albers Szafir", "Jinwook Seo" ]
[ "HM" ]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2308.00284v2", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/Jd3naMKyScU", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/-Lx8iElkiDg", "icon": "video" } ]
Vis
2023
Class-Constrained t-SNE: Combining Data Features and Class Probabilities
10.1109/TVCG.2023.3326600
Data features and class probabilities are two main perspectives when, e.g., evaluating model results and identifying problematic items. Class probabilities represent the likelihood that each instance belongs to a particular class, which can be produced by probabilistic classifiers or even human labeling with uncertainty. Since both perspectives are multi-dimensional data, dimensionality reduction (DR) techniques are commonly used to extract informative characteristics from them. However, existing methods either focus solely on the data feature perspective or rely on class probability estimates to guide the DR process. In contrast to previous work where separate views are linked to conduct the analysis, we propose a novel approach, class-constrained t-SNE, that combines data features and class probabilities in the same DR result. Specifically, we combine them by balancing two corresponding components in a cost function to optimize the positions of data points and iconic representation of classes – class landmarks. Furthermore, an interactive user-adjustable parameter balances these two components so that users can focus on the weighted perspectives of interest and also empowers a smooth visual transition between varying perspectives to preserve the mental map. We illustrate its application potential in model evaluation and visual-interactive labeling. A comparative analysis is performed to evaluate the DR results.
false
false
[ "Linhao Meng", "Stef van den Elzen", "Nicola Pezzotti", "Anna Vilanova" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2308.13837v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/91nAgNhpHIg", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/O2xRwsUpIc0", "icon": "video" } ]
Vis
2023
Classes are not Clusters: Improving Label-Based Evaluation of Dimensionality Reduction
10.1109/TVCG.2023.3327187
A common way to evaluate the reliability of dimensionality reduction (DR) embeddings is to quantify how well labeled classes form compact, mutually separated clusters in the embeddings. This approach is based on the assumption that the classes stay as clear clusters in the original high-dimensional space. However, in reality, this assumption can be violated; a single class can be fragmented into multiple separated clusters, and multiple classes can be merged into a single cluster. We thus cannot always assure the credibility of the evaluation using class labels. In this paper, we introduce two novel quality measures—Label-Trustworthiness and Label-Continuity (Label-T&C)—advancing the process of DR evaluation based on class labels. Instead of assuming that classes are well-clustered in the original space, Label-T&C work by (1) estimating the extent to which classes form clusters in the original and embedded spaces and (2) evaluating the difference between the two. A quantitative evaluation showed that Label-T&C outperform widely used DR evaluation measures (e.g., Trustworthiness and Continuity, Kullback-Leibler divergence) in terms of the accuracy in assessing how well DR embeddings preserve the cluster structure, and are also scalable. Moreover, we present case studies demonstrating that Label-T&C can be successfully used for revealing the intrinsic characteristics of DR techniques and their hyperparameters.
false
false
[ "Hyeon Jeon", "Yun-Hsin Kuo", "Michaël Aupetit 0001", "Kwan-Liu Ma", "Jinwook Seo" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2308.00278v2", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/2z0zTS7lSMo", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/_lIlinnyHkA", "icon": "video" } ]
Vis
2023
Cluster-Aware Grid Layout
10.1109/TVCG.2023.3326934
Grid visualizations are widely used in many applications to visually explain a set of data and their proximity relationships. However, existing layout methods face difficulties when dealing with the inherent cluster structures within the data. To address this issue, we propose a cluster-aware grid layout method that aims to better preserve cluster structures by simultaneously considering proximity, compactness, and convexity in the optimization process. Our method utilizes a hybrid optimization strategy that consists of two phases. The global phase aims to balance proximity and compactness within each cluster, while the local phase ensures the convexity of cluster shapes. We evaluate the proposed grid layout method through a series of quantitative experiments and two use cases, demonstrating its effectiveness in preserving cluster structures and facilitating analysis tasks.
false
false
[ "Yuxing Zhou", "Weikai Yang", "Jiashu Chen", "Changjian Chen", "Zhiyang Shen", "Xiaonan Luo", "Lingyun Yu 0001", "Shixia Liu" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2308.03651v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/UTG1UIbfOVc", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/j3RC42_HyeU", "icon": "video" } ]
Vis
2023
CommonsenseVIS: Visualizing and Understanding Commonsense Reasoning Capabilities of Natural Language Models
10.1109/TVCG.2023.3327153
Recently, large pretrained language models have achieved compelling performance on commonsense benchmarks. Nevertheless, it is unclear what commonsense knowledge the models learn and whether they solely exploit spurious patterns. Feature attributions are popular explainability techniques that identify important input concepts for model outputs. However, commonsense knowledge tends to be implicit and rarely explicitly presented in inputs. These methods cannot infer models' implicit reasoning over mentioned concepts. We present CommonsenseVIS, a visual explanatory system that utilizes external commonsense knowledge bases to contextualize model behavior for commonsense question-answering. Specifically, we extract relevant commonsense knowledge in inputs as references to align model behavior with human knowledge. Our system features multi-level visualization and interactive model probing and editing for different concepts and their underlying relations. Through a user study, we show that CommonsenseVIS helps NLP experts conduct a systematic and scalable visual analysis of models' relational reasoning over concepts in different situations.
false
false
[ "Xingbo Wang 0001", "Renfei Huang", "Zhihua Jin", "Tianqing Fang", "Huamin Qu" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2307.12382v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/Ko6di7FUGVc", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/Jvsw0VE3Bko", "icon": "video" } ]
Vis
2023
Data Formulator: AI-Powered Concept-Driven Visualization Authoring
10.1109/TVCG.2023.3326585
With most modern visualization tools, authors need to transform their data into tidy formats to create visualizations they want. Because this requires experience with programming or separate data processing tools, data transformation remains a barrier in visualization authoring. To address this challenge, we present a new visualization paradigm, concept binding, that separates high-level visualization intents and low-level data transformation steps, leveraging an AI agent. We realize this paradigm in Data Formulator, an interactive visualization authoring tool. With Data Formulator, authors first define data concepts they plan to visualize using natural language or examples, and then bind them to visual channels. Data Formulator then dispatches its AI agent to automatically transform the input data to surface these concepts and generate desired visualizations. When presenting the results (transformed table and output visualizations) from the AI agent, Data Formulator provides feedback to help authors inspect and understand them. A user study with 10 participants shows that participants could learn and use Data Formulator to create visualizations that involve challenging data transformations, and presents interesting future research directions.
false
false
[ "Chenglong Wang", "John Thompson 0002", "Bongshin Lee" ]
[ "HM" ]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2309.10094v2", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/gc2fOZ3E-1c", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/2CmYytcpoPg", "icon": "video" } ]
Vis
2,023
Data Navigator: An Accessibility-Centered Data Navigation Toolkit
10.1109/TVCG.2023.3327393
Making data visualizations accessible for people with disabilities remains a significant challenge in current practitioner efforts. Existing visualizations often lack an underlying navigable structure, fail to engage necessary input modalities, and rely heavily on visual-only rendering practices. These limitations exclude people with disabilities, especially users of assistive technologies. To address these challenges, we present Data Navigator: a system built on a dynamic graph structure, enabling developers to construct navigable lists, trees, graphs, and flows as well as spatial, diagrammatic, and geographic relations. Data Navigator supports a wide range of input modalities: screen reader, keyboard, speech, gesture detection, and even fabricated assistive devices. We present 3 case examples with Data Navigator, demonstrating we can provide accessible navigation structures on top of raster images, integrate with existing toolkits at scale, and rapidly develop novel prototypes. Data Navigator is a step towards making accessible data visualizations easier to design and implement.
false
false
[ "Frank Elavsky", "Lucas Nadolskis", "Dominik Moritz" ]
[]
[ "P", "V", "C" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2308.08475v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/z-J6kjYRohA", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/mxTeCOPT6Vo", "icon": "video" }, { "name": "Source Code", "url": "https://github.com/cmudig/data-navigator", "icon": "code" } ]
Vis
2,023
Data Player: Automatic Generation of Data Videos with Narration-Animation Interplay
10.1109/TVCG.2023.3327197
Data visualizations and narratives are often integrated to convey data stories effectively. Among various data storytelling formats, data videos have been garnering increasing attention. These videos provide an intuitive interpretation of data charts while vividly articulating the underlying data insights. However, the production of data videos demands a diverse set of professional skills and considerable manual labor, including understanding narratives, linking visual elements with narration segments, designing and crafting animations, recording audio narrations, and synchronizing audio with visual animations. To simplify this process, our paper introduces a novel method, referred to as Data Player, capable of automatically generating dynamic data videos with narration-animation interplay. This approach lowers the technical barriers associated with creating data videos rich in narration. To enable narration-animation interplay, Data Player constructs references between visualizations and text input. Specifically, it first extracts data into tables from the visualizations. Subsequently, it utilizes large language models to form semantic connections between text and visuals. Finally, Data Player encodes animation design knowledge as computational low-level constraints, allowing for the recommendation of suitable animation presets that align with the audio narration produced by text-to-speech technologies. We assessed Data Player's efficacy through an example gallery, a user study, and expert interviews. The evaluation results demonstrated that Data Player can generate high-quality data videos that are comparable to human-composed ones.
false
false
[ "Leixian Shen", "Yizhi Zhang", "Haidong Zhang", "Yun Wang 0012" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2308.04703v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/WkHHY7haJYI", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/wrtAbOrr3m8", "icon": "video" } ]
Vis
2,023
Data Type Agnostic Visual Sensitivity Analysis
10.1109/TVCG.2023.3327203
Modern science and industry rely on computational models for simulation, prediction, and data analysis. Spatial blind source separation (SBSS) is a model used to analyze spatial data. Designed explicitly for spatial data analysis, it is superior to popular non-spatial methods, like PCA. However, a challenge to its practical use is setting two complex tuning parameters, which requires parameter space analysis. In this paper, we focus on sensitivity analysis (SA). SBSS parameters and outputs are spatial data, which makes SA difficult as few SA approaches in the literature assume such complex data on both sides of the model. Based on the requirements in our design study with statistics experts, we developed a visual analytics prototype for data type agnostic visual sensitivity analysis that fits SBSS and other contexts. The main advantage of our approach is that it requires only dissimilarity measures for parameter settings and outputs (Fig. 1). We evaluated the prototype heuristically with visualization experts and through interviews with two SBSS experts. In addition, we show the transferability of our approach by applying it to microclimate simulations. Study participants could confirm suspected and known parameter-output relations, find surprising associations, and identify parameter subspaces to examine in the future. During our design study and evaluation, we identified challenging future research opportunities.
false
false
[ "Nikolaus Piccolotto", "Markus Bögl", "Christoph Muehlmann", "Klaus Nordhausen", "Peter Filzmoser", "Johanna Schmidt", "Silvia Miksch" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2309.03580v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/uM-t5wrikFs", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/qlz1lzLI48Y", "icon": "video" } ]
Vis
2,023
Dataopsy: Scalable and Fluid Visual Exploration using Aggregate Query Sculpting
10.1109/TVCG.2023.3326594
We present aggregate query sculpting (AQS), a faceted visual query technique for large-scale multidimensional data. As a “born scalable” query technique, AQS starts visualization with a single visual mark representing an aggregation of the entire dataset. The user can then progressively explore the dataset through a sequence of operations abbreviated as $\mathbb{P}^{6}$: pivot (facet an aggregate based on an attribute), partition (lay out a facet in space), peek (see inside a subset using an aggregate visual representation), pile (merge two or more subsets), project (extract a subset into a new substrate), and prune (discard an aggregate not currently of interest). We validate AQS with Dataopsy, a prototype implementation of AQS that has been designed for fluid interaction on desktop and touch-based mobile devices. We demonstrate AQS and Dataopsy using two case studies and three application examples.
false
false
[ "Md. Naimul Hoque", "Niklas Elmqvist" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2308.02764v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/QQRfR7cqtLk", "icon": "video" } ]
Vis
2,023
Dead or Alive: Continuous Data Profiling for Interactive Data Science
10.1109/TVCG.2023.3327367
Profiling data by plotting distributions and analyzing summary statistics is a critical step throughout data analysis. Currently, this process is manual and tedious since analysts must write extra code to examine their data after every transformation. This inefficiency may lead to data scientists profiling their data infrequently, rather than after each transformation, making it easy for them to miss important errors or insights. We propose continuous data profiling as a process that allows analysts to immediately see interactive visual summaries of their data throughout their data analysis to facilitate fast and thorough analysis. Our system, AutoProfiler, presents three ways to support continuous data profiling: (1) it automatically displays data distributions and summary statistics to facilitate data comprehension; (2) it is live, so visualizations are always accessible and update automatically as the data updates; (3) it supports follow up analysis and documentation by authoring code for the user in the notebook. In a user study with 16 participants, we evaluate two versions of our system that integrate different levels of automation: both automatically show data profiles and facilitate code authoring, however, one version updates reactively (“live”) and the other updates only on demand (“dead”). We find that both tools, dead or alive, facilitate insight discovery with 91% of user-generated insights originating from the tools rather than manual profiling code written by users. Participants found live updates intuitive and felt it helped them verify their transformations while those with on-demand profiles liked the ability to look at past visualizations. We also present a longitudinal case study on how AutoProfiler helped domain scientists find serendipitous insights about their data through automatic, live data profiles. Our results have implications for the design of future tools that offer automated data analysis support.
false
false
[ "Will Epperson", "Vaishnavi Gorantla", "Dominik Moritz", "Adam Perer" ]
[ "HM" ]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2308.03964v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/qtSt39Z3tIk", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/eMphMxjoIUA", "icon": "video" } ]
Vis
2,023
Design Characterization for Black-and-White Textures in Visualization
10.1109/TVCG.2023.3326941
We investigate the use of 2D black-and-white textures for the visualization of categorical data and contribute a summary of texture attributes, and the results of three experiments that elicited design strategies as well as aesthetic and effectiveness measures. Black-and-white textures are useful, for instance, as a visual channel for categorical data on low-color displays, in 2D/3D print, to achieve the aesthetic of historic visualizations, or to retain the color hue channel for other visual mappings. We specifically study how to use what we call geometric and iconic textures. Geometric textures use patterns of repeated abstract geometric shapes, while iconic textures use repeated icons that may stand for data categories. We parameterized both types of textures and developed a tool for designers to create textures on simple charts by adjusting texture parameters. 30 visualization experts used our tool and designed 66 textured bar charts, pie charts, and maps. We then had 150 participants rate these designs for aesthetics. Finally, with the top-rated geometric and iconic textures, our perceptual assessment experiment with 150 participants revealed that textured charts perform about equally well as non-textured charts, and that there are some differences depending on the type of chart.
false
false
[ "Tingying He", "Yuanyang Zhong", "Petra Isenberg", "Tobias Isenberg 0001" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2307.10089v3", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/YSZAgpoddtA", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/5nZ_v7C8xog", "icon": "video" } ]
Vis
2,023
Design Patterns for Situated Visualization in Augmented Reality
10.1109/TVCG.2023.3327398
Situated visualization has become an increasingly popular research area in the visualization community, fueled by advancements in augmented reality (AR) technology and immersive analytics. Visualizing data in spatial proximity to their physical referents affords new design opportunities and considerations not present in traditional visualization, which researchers are now beginning to explore. However, the AR research community has an extensive history of designing graphics that are displayed in highly physical contexts. In this work, we leverage the richness of AR research and apply it to situated visualization. We derive design patterns which summarize common approaches of visualizing data in situ. The design patterns are based on a survey of 293 papers published in the AR and visualization communities, as well as our own expertise. We discuss design dimensions that help to describe both our patterns and previous work in the literature. This discussion is accompanied by several guidelines which explain how to apply the patterns given the constraints imposed by the real world. We conclude by discussing future research directions that will help establish a complete understanding of the design of situated visualization, including the role of interactivity, tasks, and workflows.
false
false
[ "Benjamin Lee", "Michael Sedlmair", "Dieter Schmalstieg" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2307.09157v2", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/Ya5dInAbZE0", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/hgf7X_uKG6g", "icon": "video" } ]
Vis
2,023
Designing for Ambiguity in Visual Analytics: Lessons from Risk Assessment and Prediction
10.1109/TVCG.2023.3326571
Ambiguity is pervasive in the complex sensemaking domains of risk assessment and prediction, but there remains little research on how to design visual analytics tools to accommodate it. We report on findings from a qualitative study based on a conceptual framework of sensemaking processes to investigate how both new visual analytics designs and existing tools, primarily data tables, support the cognitive work demanded in avalanche forecasting. While both systems yielded similar analytic outcomes, we observed differences in ambiguous sensemaking and the analytic actions either afforded. Our findings challenge conventional visualization design guidance in both perceptual and interaction design, highlighting the need for data interfaces that encourage reflection, provoke alternative interpretations, and support the inherently ambiguous nature of sensemaking in this critical application. We review how different visual and interactive forms support or impede analytic processes and introduce “gisting” as a significant yet unexplored analytic action for visual analytics research. We conclude with design implications for enabling ambiguity in visual analytics tools to scaffold sensemaking in risk assessment.
false
false
[ "Stanislaw Nowak", "Lyn Bartram" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/btRIn_f3kKE", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/SBB4WR305xk", "icon": "video" } ]
Vis
2,023
Differentiable Design Galleries: A Differentiable Approach to Explore the Design Space of Transfer Functions
10.1109/TVCG.2023.3327371
The transfer function is crucial for direct volume rendering (DVR) to create an informative visual representation of volumetric data. However, manually adjusting the transfer function to achieve the desired DVR result can be time-consuming and unintuitive. In this paper, we propose Differentiable Design Galleries, an image-based transfer function design approach to help users explore the design space of transfer functions by taking advantage of the recent advances in deep learning and differentiable rendering. Specifically, we leverage neural rendering to learn a latent design space, which is a continuous manifold representing various types of implicit transfer functions. We further provide a set of interactive tools to support intuitive query, navigation, and modification to obtain the target design, which is represented as a neural-rendered design exemplar. The explicit transfer function can be reconstructed from the target design with a differentiable direct volume renderer. Experimental results on real volumetric data demonstrate the effectiveness of our method.
false
false
[ "Bo Pan", "Jiaying Lu", "Haoxuan Li", "Weifeng Chen 0003", "Yiyao Wang", "Minfeng Zhu", "Chenhao Yu", "Wei Chen 0001" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/U2tRFLO5jtk", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/oErDe_2ScnU", "icon": "video" } ]
Vis
2,023
DIVI: Dynamically Interactive Visualization
10.1109/TVCG.2023.3327172
Dynamically Interactive Visualization (DIVI) is a novel approach for orchestrating interactions within and across static visualizations. DIVI deconstructs Scalable Vector Graphics charts at runtime to infer content and coordinate user input, decoupling interaction from specification logic. This decoupling allows interactions to extend and compose freely across different tools, chart types, and analysis goals. DIVI exploits positional relations of marks to detect chart components such as axes and legends, reconstruct scales and view encodings, and infer data fields. DIVI then enumerates candidate transformations across inferred data to perform linking between views. To support dynamic interaction without prior specification, we introduce a taxonomy that formalizes the space of standard interactions by chart element, interaction type, and input event. We demonstrate DIVI's usefulness for rapid data exploration and analysis through a usability study with 13 participants and a diverse gallery of dynamically interactive visualizations, including single chart, multi-view, and cross-tool configurations.
false
false
[ "Luke S. Snyder", "Jeffrey Heer" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2310.17814v2", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/VKOqZ2iihnE", "icon": "video" } ]
Vis
2,023
Dr. KID: Direct Remeshing and K-Set Isometric Decomposition for Scalable Physicalization of Organic Shapes
10.1109/TVCG.2023.3326595
Dr. KID is an algorithm that uses isometric decomposition for the physicalization of potato-shaped organic models in a puzzle fashion. The algorithm begins with creating a simple, regular triangular surface mesh of organic shapes, followed by iterative K-means clustering and remeshing. For clustering, we need a similarity measure between triangles (segments), which is defined as a distance function. The distance function maps each triangle's shape to a single point in the virtual 3D space. Thus, the distance between the triangles indicates their degree of dissimilarity. K-means clustering uses this distance and sorts segments into $k$ classes. After this, remeshing is applied to minimize the distance between triangles within the same cluster by making their shapes identical. Clustering and remeshing are repeated until the distance between triangles in the same cluster reaches an acceptable threshold. We adopt a curvature-aware strategy to determine the surface thickness and finalize puzzle pieces for 3D printing. Identical hinges and holes are created for assembling the puzzle components. For smoother outcomes, we use triangle subdivision along with curvature-aware clustering, generating curved triangular patches for 3D printing. Our algorithm was evaluated using various models, and the 3D-printed results were analyzed. Findings indicate that our algorithm performs reliably on target organic shapes with minimal loss of input geometry.
false
false
[ "Dawar Khan", "Ciril Bohak", "Ivan Viola" ]
[ "HM" ]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2304.02941v2", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/Kv9V69zSsgg", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/8v156eZrfTQ", "icon": "video" } ]
Vis
2,023
Dupo: A Mixed-Initiative Authoring Tool for Responsive Visualization
10.1109/TVCG.2023.3326583
Designing responsive visualizations for various screen types can be tedious as authors must manage multiple chart versions across design iterations. Automated approaches for responsive visualization must take into account the user's need for agency in exploring possible design ideas and applying customizations based on their own goals. We design and implement Dupo, a mixed-initiative approach to creating responsive visualizations that combines the agency afforded by a manual interface with automation provided by a recommender system. Given an initial design, users can browse automated design suggestions for a different screen type and make edits to a chosen design, thereby supporting quick prototyping and customizability. Dupo employs a two-step recommender pipeline that first suggests significant design changes (Exploration) followed by more subtle changes (Alteration). We evaluated Dupo with six expert responsive visualization authors. While creating responsive versions of a source design in Dupo, participants could reason about different design suggestions without having to manually prototype them, and thus avoid prematurely fixating on a particular design. This process led participants to create designs that they were satisfied with but which they had previously overlooked.
false
false
[ "Hyeok Kim", "Ryan A. Rossi", "Jessica Hullman", "Jane Hoffswell" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2308.05136v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/kQr6UJQF40g", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/GYIqimSJwiY", "icon": "video" } ]
Vis
2,023
Eleven Years of Gender Data Visualization: A Step Towards More Inclusive Gender Representation
10.1109/TVCG.2023.3327369
We present an analysis of the representation of gender as a data dimension in data visualizations and propose a set of considerations around visual variables and annotations for gender-related data. Gender is a common demographic dimension of data collected from study or survey participants, passengers, or customers, as well as across academic studies, especially in certain disciplines like sociology. Our work contributes to multiple ongoing discussions on the ethical implications of data visualizations. By choosing specific data, visual variables, and text labels, visualization designers may, inadvertently or not, perpetuate stereotypes and biases. Here, our goal is to start an evolving discussion on how to represent data on gender in data visualizations and raise awareness of the subtleties of choosing visual variables and words in gender visualizations. In order to ground this discussion, we collected and coded gender visualizations and their captions from five different scientific communities (Biology, Politics, Social Studies, Visualisation, and Human-Computer Interaction), in addition to images from Tableau Public and the Information Is Beautiful awards showcase. Overall we found that representation types are community-specific, color hue is the dominant visual channel for gender data, and nonconforming gender is under-represented. We end our paper with a discussion of considerations for gender visualization derived from our coding and the literature and recommendations for large data collection bodies. A free copy of this paper and all supplemental materials are available at https://osf.io/v9ams/.
false
false
[ "Florent Cabric", "Margrét Vilborg Bjarnadóttir", "Meng Ling", "Guðbjörg Linda Rafnsdóttir", "Petra Isenberg" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2308.14415v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/nVE_sjdWUok", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/lqImDkBWpe8", "icon": "video" } ]
Vis
2,023
Embellishments Revisited: Perceptions of Embellished Visualisations Through the Viewer's Lens
10.1109/TVCG.2023.3326914
Embellishments are features commonly used in everyday visualisations which are demonstrated to enhance assimilation and memorability. Despite their popularity, little is known about their impact on enticing readers to explore visualisations. To address this gap, we conducted 18 interviews with a diverse group of participants who were consumers of news media but non-experts in visualisation and design. Participants were shown ten embellished and plain visualisations collected from the news and asked to rank them based on enticement and ease of understanding. Extending prior work, our interview results suggest that visualisations with multiple embellishment types might make a visualisation appear more enticing. An important finding from our study is that the widespread use of certain embellishments in the media might have made them part of visualisation conventions, making a visualisation appear more objective but less enticing. Based on these findings, we ran a follow-up online user study showing participants variations of the visualisations with multiple embellishments to isolate each embellishment type and investigate its effect. We found that variations with salient embellishments were perceived as more enticing. We argue that, to unpack the concept of embellishments, we must consider two factors: embellishment saliency and editorial styles. Our study contributes concept and design considerations to the literature concerned with visualisation design for non-experts in visualisation and design.
false
false
[ "Muna Alebri", "Enrico Costanza", "Georgia Panagiotidou", "Duncan P. Brumby" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/-DSGK_yrsL0", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/zKUENG1MSzA", "icon": "video" } ]
Vis
2,023
EmphasisChecker: A Tool for Guiding Chart and Caption Emphasis
10.1109/TVCG.2023.3327150
Recent work has shown that when both the chart and caption emphasize the same aspects of the data, readers tend to remember the doubly-emphasized features as takeaways; when there is a mismatch, readers rely on the chart to form takeaways and can miss information in the caption text. Through a survey of 280 chart-caption pairs in real-world sources (e.g., news media, poll reports, government reports, academic articles, and Tableau Public), we find that captions often do not emphasize the same information in practice, which could limit how effectively readers take away the authors' intended messages. Motivated by the survey findings, we present EmphasisChecker, an interactive tool that highlights visually prominent chart features as well as the features emphasized by the caption text along with any mismatches in the emphasis. The tool implements a time-series prominent feature detector based on the Ramer-Douglas-Peucker algorithm and a text reference extractor that identifies time references and data descriptions in the caption and matches them with chart data. This information enables authors to compare features emphasized by these two modalities, quickly see mismatches, and make necessary revisions. A user study confirms that our tool is both useful and easy to use when authoring charts and captions.
false
false
[ "Daehyun Kim 0005", "Seulgi Choi", "Juho Kim", "Vidya Setlur", "Maneesh Agrawala" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2307.13858v2", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/uk-gt_dGXDI", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/moe_-ROvDsc", "icon": "video" } ]
Vis
2,023
Enthusiastic and Grounded, Avoidant and Cautious: Understanding Public Receptivity to Data and Visualizations
10.1109/TVCG.2023.3326917
Despite an abundance of open data initiatives aimed to inform and empower “general” audiences, we still know little about the ways people outside of traditional data analysis communities experience and engage with public data and visualizations. To investigate this gap, we present results from an in-depth qualitative interview study with 19 participants from diverse ethnic, occupational, and demographic backgrounds. Our findings characterize a set of lived experiences with open data and visualizations in the domain of energy consumption, production, and transmission. This work exposes information receptivity — an individual's transient state of willingness or openness to receive information — as a blind spot for the data visualization community, complementary to but distinct from previous notions of data visualization literacy and engagement. We observed four clusters of receptivity responses to data- and visualization-based rhetoric: Information-Avoidant, Data-Cautious, Data-Enthusiastic, and Domain-Grounded. Based on our findings, we highlight research opportunities for the visualization community. This exploratory work identifies the existence of diverse receptivity responses, highlighting the need to consider audiences with varying levels of openness to new information. Our findings also suggest new approaches for improving the accessibility and inclusivity of open data and visualization initiatives targeted at broad audiences. A free copy of this paper and all supplemental materials are available at https://OSF.IO/MPQ32.
false
false
[ "Helen Ai He", "Jagoda Walny", "Sonja Thoma", "Sheelagh Carpendale", "Wesley Willett" ]
[ "HM" ]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "https://osf.io/mpq32", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/fwOwzoapPMw", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/QvGw-9SIL54", "icon": "video" } ]
Vis
2,023
EVM: Incorporating Model Checking into Exploratory Visual Analysis
10.1109/TVCG.2023.3326516
Visual analytics (VA) tools support data exploration by helping analysts quickly and iteratively generate views of data which reveal interesting patterns. However, these tools seldom enable explicit checks of the resulting interpretations of data—e.g., whether patterns can be accounted for by a model that implies a particular structure in the relationships between variables. We present EVM, a data exploration tool that enables users to express and check provisional interpretations of data in the form of statistical models. EVM integrates support for visualization-based model checks by rendering distributions of model predictions alongside user-generated views of data. In a user study with data scientists practicing in the private and public sector, we evaluate how model checks influence analysts' thinking during data exploration. Our analysis characterizes how participants use model checks to scrutinize expectations about the data generating process and surfaces further opportunities to scaffold model exploration in VA tools.
false
false
[ "Alex Kale", "Ziyang Guo", "Xiaoli Qiao", "Jeffrey Heer", "Jessica Hullman" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2308.13024v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/fGtd0CzXm0w", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/MYJ-1sKgxbI", "icon": "video" } ]
Vis
2,023
Explore Your Network in Minutes: A Rapid Prototyping Toolkit for Understanding Neural Networks with Visual Analytics
10.1109/TVCG.2023.3326575
Neural networks attract significant attention in almost every field due to their widespread applications in various tasks. However, developers often struggle with debugging due to the black-box nature of neural networks. Visual analytics provides an intuitive way for developers to understand the hidden states and underlying complex transformations in neural networks. Existing visual analytics tools for neural networks have been demonstrated to be effective in providing useful hints for debugging certain network architectures. However, these approaches are often architecture-specific with strong assumptions of how the network should be understood. This limits their use when the network architecture or the exploration goal changes. In this paper, we present a general model and a programming toolkit, Neural Network Visualization Builder (NNVisBuilder), for prototyping visual analytics systems to understand neural networks. NNVisBuilder covers the common data transformation and interaction model involved in existing tools for exploring neural networks. It enables developers to customize a visual analytics interface for answering their specific questions about networks. NNVisBuilder is compatible with PyTorch so that developers can integrate the visualization code into their learning code seamlessly. We demonstrate its applicability by reproducing several existing visual analytics systems for networks with NNVisBuilder. The source code and some example cases can be found at https://github.com/sysuvis/NVB.
false
false
[ "Shaoxuan Lai", "Wanna Luan", "Jun Tao 0002" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/tZFpiomSRTE", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/WcAWOeqiRfo", "icon": "video" } ]
Vis
2,023
Extract and Characterize Hairpin Vortices in Turbulent Flows
10.1109/TVCG.2023.3326603
Hairpin vortices are one of the most important vortical structures in turbulent flows. Extracting and characterizing hairpin vortices provides useful insight into many behaviors in turbulent flows. However, hairpin vortices have complex configurations and might be entangled with other vortices, making their extraction difficult. In this work, we introduce a framework to extract and separate hairpin vortices in shear-driven turbulent flows for their study. Our method first extracts general vortical regions with a region-growing strategy based on certain vortex criteria (e.g., $\lambda_{2}$) and then separates those vortices with the help of progressive extraction of $\lambda_{2}$ iso-surfaces in a top-down fashion. This leads to a hierarchical tree representing the spatial proximity and merging relation of vortices. After separating individual vortices, their shape and orientation information is extracted. Candidate hairpin vortices are identified based on their shape and orientation information as well as their physical characteristics. An interactive visualization system is developed to aid the exploration, classification, and analysis of hairpin vortices based on their geometric and physical attributes. We also present additional use cases of the proposed system for the analysis and study of general vortices in other types of flows.
false
false
[ "Adeel Zafar", "Di Yang", "Guoning Chen" ]
[ "HM" ]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2308.06283v3", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/Gemap7uU6_o", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/TpJUFmReA-M", "icon": "video" } ]
Vis
2,023
ExTreeM: Scalable Augmented Merge Tree Computation via Extremum Graphs
10.1109/TVCG.2023.3326526
Over the last decade merge trees have been proven to support a plethora of visualization and analysis tasks since they effectively abstract complex datasets. This paper describes the ExTreeM-Algorithm: A scalable algorithm for the computation of merge trees via extremum graphs. The core idea of ExTreeM is to first derive the extremum graph $\mathcal{G}$ of an input scalar field $f$ defined on a cell complex $\mathcal{K}$, and subsequently compute the unaugmented merge tree of $f$ on $\mathcal{G}$ instead of $\mathcal{K}$; the two merge trees are equivalent. Any merge tree algorithm can be carried out significantly faster on $\mathcal{G}$, since $\mathcal{K}$ in general contains substantially more cells than $\mathcal{G}$. To further speed up computation, ExTreeM includes a tailored procedure to derive merge trees of extremum graphs. The computation of the fully augmented merge tree, i.e., a merge tree domain segmentation of $\mathcal{K}$, can then be performed in an optional post-processing step. All steps of ExTreeM consist of procedures with high parallel efficiency, and we provide a formal proof of its correctness. Our experiments, performed on publicly available datasets, report a speedup of up to one order of magnitude over the state-of-the-art algorithms included in the TTK and VTK-m software libraries, while also requiring significantly less memory and exhibiting excellent scaling behavior.
false
false
[ "Jonas Lukasczyk", "Michael Will", "Florian Wetzels", "Gunther H. Weber", "Christoph Garth" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/w9bKW5O_Jf0", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/AVHlMbYJnNk", "icon": "video" } ]
Vis
2,023
Fast Compressed Segmentation Volumes for Scientific Visualization
10.1109/TVCG.2023.3326573
Voxel-based segmentation volumes often store a large number of labels and voxels, and the resulting amount of data can make storage, transfer, and interactive visualization difficult. We present a lossless compression technique which addresses these challenges. It processes individual small bricks of a segmentation volume and compactly encodes the labelled regions and their boundaries by an iterative refinement scheme. The result for each brick is a list of labels, and a sequence of operations to reconstruct the brick which is further compressed using rANS-entropy coding. As the relative frequencies of operations are very similar across bricks, the entropy coding can use global frequency tables for an entire data set which enables efficient and effective parallel (de)compression. Our technique achieves high throughput (up to gigabytes per second both for compression and decompression) and strong compression ratios of about 1% to 3% of the original data set size while being applicable to GPU-based rendering. We evaluate our method for various data sets from different fields and demonstrate GPU-based volume visualization with on-the-fly decompression, level-of-detail rendering (with optional on-demand streaming of detail coefficients to the GPU), and a caching strategy for decompressed bricks for further performance improvement.
false
false
[ "Max Piochowiak", "Carsten Dachsbacher" ]
[ "BP" ]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2308.16619v2", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/2BN2Lktbu4U", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/pZfV6o7V7SU", "icon": "video" } ]
Vis
2,023
From Information to Choice: A Critical Inquiry Into Visualization Tools for Decision Making
10.1109/TVCG.2023.3326593
In the face of complex decisions, people often engage in a three-stage process that spans (1) exploring and analyzing pertinent information (intelligence); (2) generating and exploring alternative options (design); and, ultimately, (3) selecting the optimal decision by evaluating discerning criteria (choice). We can fairly assume that all good visualizations aid in the “intelligence” stage by enabling data exploration and analysis. Yet, to what degree and how do visualization systems currently support the other decision making stages, namely “design” and “choice”? To further explore this question, we conducted a comprehensive review of decision-focused visualization tools by examining publications in major visualization journals and conferences, including VIS, EuroVis, and CHI, spanning all available years. We employed a deductive coding method and in-depth analysis to assess whether and how visualization tools support design and choice. Specifically, we examined each visualization tool by (i) its degree of visibility for displaying decision alternatives, criteria, and preferences, and (ii) its degree of flexibility for offering means to manipulate the decision alternatives, criteria, and preferences with interactions such as adding, modifying, changing mapping, and filtering. Our review highlights the opportunities and challenges that decision-focused visualization tools face in realizing their full potential to support all stages of the decision making process. It reveals a surprising scarcity of tools that support all stages, and while most tools excel in offering visibility for decision criteria and alternatives, the degree of flexibility to manipulate these elements is often limited, and the lack of tools that accommodate decision preferences and their elicitation is notable. Based on our findings, to better support the choice stage, future research could explore enhancing flexibility levels and variety, exploring novel visualization paradigms, increasing algorithmic support, and ensuring that this automation is user-controlled via the enhanced flexibility levels. Our curated list of the 88 surveyed visualization tools is available in the OSF link (https://osf.io/nrasz/?view_only=b92a90a34ae241449b5f2cd33383bfcb).
false
false
[ "Emre Oral", "Ria Chawla", "Michel Wijkstra", "Narges Mahyar", "Evanthia Dimara" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2307.08326v2", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/X9M-YXH3vrI", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/tgE2x-2mq78", "icon": "video" } ]
Vis
2,023
From Shock to Shift: Data Visualization for Constructive Climate Journalism
10.1109/TVCG.2023.3327185
We present a multi-dimensional, multi-level, and multi-channel approach to data visualization for the purpose of constructive climate journalism. Data visualization has assumed a central role in environmental journalism and is often used in data stories to convey the dramatic consequences of climate change and other ecological crises. However, the emphasis on the catastrophic impacts of climate change tends to induce feelings of fear, anxiety, and apathy in readers. Climate mitigation, adaptation, and protection—all highly urgent in the face of the climate crisis—are at risk of being overlooked. These topics are more difficult to communicate as they are hard to convey on varying levels of locality, involve multiple interconnected sectors, and need to be mediated across various channels from the printed newspaper to social media platforms. So far, there has been little research on data visualization to enhance affective engagement with data about climate protection as part of solution-oriented reporting of climate change. With this research we characterize the unique challenges of constructive climate journalism for data visualization and share findings from a research and design study in collaboration with a national newspaper in Germany. Using the affordances and aesthetics of travel postcards, we present Klimakarten, a data journalism project on the progress of climate protection at multiple spatial scales (from national to local), across five key sectors (agriculture, buildings, energy, mobility, and waste), and for print and online use. The findings from quantitative and qualitative analysis of reader feedback confirm our overall approach and suggest implications for future work.
false
false
[ "Francesca Morini", "Anna Eschenbacher", "Johanna Hartmann", "Marian Dörk" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/Q_G6d5enbZI", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/35q-7hUec0Y", "icon": "video" } ]
Vis
2,023
FSLens: A Visual Analytics Approach to Evaluating and Optimizing the Spatial Layout of Fire Stations
10.1109/TVCG.2023.3327077
The provision of fire services plays a vital role in ensuring the safety of residents' lives and property. The spatial layout of fire stations is closely linked to the efficiency of fire rescue operations. Traditional approaches have primarily relied on mathematical planning models to generate appropriate layouts by summarizing relevant evaluation criteria. However, this optimization process presents significant challenges due to the extensive decision space, inherent conflicts among criteria, and decision-makers' preferences. To address these challenges, we propose FSLens, an interactive visual analytics system that enables in-depth evaluation and rational optimization of fire station layout. Our approach integrates fire records and correlation features to reveal fire occurrence patterns and influencing factors using spatiotemporal sequence forecasting. We design an interactive visualization method to explore areas within the city that are potentially under-resourced for fire service based on the fire distribution and existing fire station layout. Moreover, we develop a collaborative human-computer multi-criteria decision model that generates multiple candidate solutions for optimizing firefighting resources within these areas. We simulate and compare the impact of different solutions on the original layout through well-designed visualizations, providing decision-makers with the most satisfactory solution. We demonstrate the effectiveness of our approach through one case study with real-world datasets. The feedback from domain experts indicates that our system helps them to better identify and improve potential gaps in the current fire station layout.
false
false
[ "Longfei Chen", "He Wang", "Yang Ouyang", "Yang Zhou", "Naiyu Wang", "Quan Li" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2307.12227v2", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/W0zFHKKAPs0", "icon": "video" } ]
Vis
2,023
GeoExplainer: A Visual Analytics Framework for Spatial Modeling Contextualization and Report Generation
10.1109/TVCG.2023.3327359
Geographic regression models of various descriptions are often applied to identify patterns and anomalies in the determinants of spatially distributed observations. These types of analyses focus on answering why questions about underlying spatial phenomena, e.g., why is crime higher in this locale, why do children in one school district outperform those in another, etc.? Answers to these questions require explanations of the model structure, the choice of parameters, and contextualization of the findings with respect to their geographic context. This is particularly true for local forms of regression models which are focused on the role of locational context in determining human behavior. In this paper, we present GeoExplainer, a visual analytics framework designed to support analysts in creating explanative documentation that summarizes and contextualizes their spatial analyses. As analysts create their spatial models, our framework flags potential issues with model parameter selections, utilizes template-based text generation to summarize model outputs, and links with external knowledge repositories to provide annotations that help to explain the model results. As analysts explore the model results, all visualizations and annotations can be captured in an interactive report generation widget. We demonstrate our framework using a case study modeling the determinants of voting in the 2016 US Presidential Election.
false
false
[ "Fan Lei", "Yuxin Ma", "A. Stewart Fotheringham", "Elizabeth A. Mack", "Ziqi Li", "Mehak Sachdeva", "Sarah Bardin", "Ross Maciejewski" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2308.13588v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/mkM05Rwi8Pw", "icon": "video" } ]
Vis
2,023
ggdist: Visualizations of Distributions and Uncertainty in the Grammar of Graphics
10.1109/TVCG.2023.3327195
The grammar of graphics is ubiquitous, providing the foundation for a variety of popular visualization tools and toolkits. Yet support for uncertainty visualization in the grammar of graphics—beyond simple variations of error bars, uncertainty bands, and density plots—remains rudimentary. Research in uncertainty visualization has developed a rich variety of improved uncertainty visualizations, most of which are difficult to create in existing grammar of graphics implementations. ggdist, an extension to the popular ggplot2 grammar of graphics toolkit, is an attempt to rectify this situation. ggdist unifies a variety of uncertainty visualization types through the lens of distributional visualization, allowing functions of distributions to be mapped directly to visual channels (aesthetics), making it straightforward to express a variety of (sometimes weird!) uncertainty visualization types. This distributional lens also offers a way to unify Bayesian and frequentist uncertainty visualization by formalizing the latter with the help of confidence distributions. In this paper, I offer a description of this uncertainty visualization paradigm and lessons learned from its development and adoption: ggdist has existed in some form for about six years (originally as part of the tidybayes R package for post-processing Bayesian models), and it has evolved substantially over that time, with several rewrites and API re-organizations as it changed in response to user feedback and expanded to cover increasing varieties of uncertainty visualization types. Ultimately, given the huge expressive power of the grammar of graphics and the popularity of tools built on it, I hope a catalog of my experience with ggdist will provide a catalyst for further improvements to formalizations and implementations of uncertainty visualization in grammar of graphics ecosystems. A free copy of this paper is available at https://osf.io/2gsz6. All supplemental materials are available at https://github.com/mjskay/ggdist-paper and are archived on Zenodo at doi:10.5281/zenodo.7770984.
false
false
[ "Matthew Kay 0001" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "https://osf.io/2gsz6", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/j_0vSP7HldQ", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/htJORACnb54", "icon": "video" } ]
Vis
2,023
Global Topology of 3D Symmetric Tensor Fields
10.1109/TVCG.2023.3326933
There have been recent advances in the analysis and visualization of 3D symmetric tensor fields, with a focus on the robust extraction of tensor field topology. However, topological features such as degenerate curves and neutral surfaces do not live in isolation. Instead, they intriguingly interact with each other. In this paper, we introduce the notion of topological graph for 3D symmetric tensor fields to facilitate global topological analysis of such fields. The nodes of the graph include degenerate curves and regions bounded by neutral surfaces in the domain. The edges in the graph denote the adjacency information between the regions and degenerate curves. In addition, we observe that a degenerate curve can be a loop and even a knot and that two degenerate curves (whether in the same region or not) can form a link. We provide a definition and theoretical analysis of individual degenerate curves in order to help understand why knots and links may occur. Moreover, we differentiate between wedges and trisectors, thus making the analysis more detailed about degenerate curves. We incorporate this information into the topological graph. Such a graph can not only reveal the global structure in a 3D symmetric tensor field but also allow two symmetric tensor fields to be compared. We demonstrate our approach by applying it to solid mechanics and material science data sets.
false
false
[ "Shih-Hsuan Hung", "Yue Zhang 0009", "Eugene Zhang" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2309.01863v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/T_uen4YkRxg", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/tXiELTWc-lA", "icon": "video" } ]
Vis
2,023
Guaranteed Visibility in Scatterplots with Tolerance
10.1109/TVCG.2023.3326596
In 2D visualizations, visibility of every datum's representation is crucial to ease the completion of visual tasks. Such a guarantee is barely respected in complex visualizations, mainly because of overdraws between datum representations that hide parts of the information (e.g., outliers). The literature proposes various Layout Adjustment algorithms to improve the readability of visualizations that suffer from this issue. Manipulating the data in high-dimensional, geometric or visual space, they rely on different strategies with their own strengths and weaknesses. Moreover, most of these algorithms are computationally expensive as they search for an exact solution in the geometric space and do not scale well to large datasets. This article proposes GIST, a layout adjustment algorithm that aims at optimizing three criteria: (i) node visibility guarantee (at least 1 pixel), (ii) node size maximization, and (iii) the original layout preservation. This is achieved by combining a search for the maximum node size that enables drawing all the data points without overlaps, with a limited budget of movements (i.e., limiting the distortions of the original layout). The method rests on the idea that two data representations need not be strictly non-overlapping to guarantee their visibility in visual space. Our algorithm therefore uses a tolerance in the geometric space to determine the overlaps between pairs of data. The tolerance is optimized such that the approximation computed in the geometric space can lead to visualization without noticeable overdraw after the data rendering rasterization. In addition, such an approximation helps to ease the algorithm's convergence as it reduces the number of constraints to resolve, enabling it to handle large datasets. We demonstrate the effectiveness of our approach by comparing its results to those of state-of-the-art methods on several large datasets.
false
false
[ "Loann Giovannangeli", "Frédéric Lalanne", "Romain Giot", "Romain Bourqui" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/StvLgjDdixU", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/4rXA8Nn2M70", "icon": "video" } ]
Vis
2,023
Guided Visual Analytics for Image Selection in Time and Space
10.1109/TVCG.2023.3326572
Unexploded Ordnance (UXO) detection, the identification of remnant active bombs buried underground from archival aerial images, implies a complex workflow involving decision-making at each stage. An essential phase in UXO detection is the task of image selection, where a small subset of images must be chosen from archives to reconstruct an area of interest (AOI) and identify craters. The selected image set must comply with good spatial and temporal coverage over the AOI, particularly in the temporal vicinity of recorded aerial attacks, and do so with minimal images for resource optimization. This paper presents a guidance-enhanced visual analytics prototype to select images for UXO detection. In close collaboration with domain experts, our design process involved analyzing user tasks, eliciting expert knowledge, modeling quality metrics, and choosing appropriate guidance. We report on a user study with two real-world scenarios of image selection performed with and without guidance. Our solution was well-received and deemed highly usable. Through the lens of our task-based design and developed quality measures, we observed guidance-driven changes in user behavior and improved quality of analysis results. An expert evaluation of the study allowed us to improve our guidance-enhanced prototype further and discuss new possibilities for user-adaptive guidance.
false
false
[ "Ignacio Pérez-Messina", "Davide Ceneda", "Silvia Miksch" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/3P6-uz4Cun8", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/HzGQ6yczMPI", "icon": "video" } ]
Vis
2,023
Handling Non-Visible Referents in Situated Visualizations
10.1109/TVCG.2023.3327361
Situated visualizations are a type of visualization where data is presented next to its physical referent (i.e., the physical object, space, or person it refers to), often using augmented-reality displays. While situated visualizations can be beneficial in various contexts and have received research attention, they are typically designed with the assumption that the physical referent is visible. However, in practice, a physical referent may be obscured by another object, such as a wall, or may be outside the user's visual field. In this paper, we propose a conceptual framework and a design space to help researchers and user interface designers handle non-visible referents in situated visualizations. We first provide an overview of techniques proposed in the past for dealing with non-visible objects in the areas of 3D user interfaces, 3D visualization, and mixed reality. From this overview, we derive a design space that applies to situated visualizations and employ it to examine various trade-offs, challenges, and opportunities for future research in this area.
false
false
[ "Ambre Assor", "Arnaud Prouzeau", "Martin Hachet", "Pierre Dragicevic" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/tvE8z_llpqg", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/5GViRMjaUoU", "icon": "video" } ]
Vis
2,023
HealthPrism: A Visual Analytics System for Exploring Children's Physical and Mental Health Profiles with Multimodal Data
10.1109/TVCG.2023.3326943
The correlation between children's personal and family characteristics (e.g., demographics and socioeconomic status) and their physical and mental health status has been extensively studied across various research domains, such as public health, medicine, and data science. Such studies can provide insights into the underlying factors affecting children's health and aid in the development of targeted interventions to improve their health outcomes. However, with the availability of multiple data sources, including context data (i.e., the background information of children) and motion data (i.e., sensor data measuring activities of children), new challenges have arisen due to the large-scale, heterogeneous, and multimodal nature of the data. Existing statistical hypothesis-based and learning model-based approaches have been inadequate for comprehensively analyzing the complex correlation between multimodal features and multi-dimensional health outcomes due to the limited information revealed. In this work, we first distill a set of design requirements from multiple levels by conducting a literature review and iteratively interviewing 11 experts from multiple domains (e.g., public health and medicine). Then, we propose HealthPrism, an interactive visual analytics system for assisting researchers in exploring the importance and influence of various context and motion features on children's health status from multi-level perspectives. Within HealthPrism, a multimodal learning model with a gate mechanism is proposed for health profiling and cross-modality feature importance comparison. A set of visualization components is designed for experts to explore and understand multimodal data freely. We demonstrate the effectiveness and usability of HealthPrism through quantitative evaluation of the model performance, case studies, and expert interviews in associated domains.
false
false
[ "Zhihan Jiang", "Handi Chen", "Rui Zhou", "Jing Deng", "Xinchen Zhang", "Running Zhao", "Cong Xie", "Yifang Wang 0001", "Edith C. H. Ngai" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2307.12242v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/C95-YcpnIr8", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/sYpgwC54wcU", "icon": "video" } ]
Vis
2,023
Heuristics for Supporting Cooperative Dashboard Design
10.1109/TVCG.2023.3327158
Dashboards are no longer mere static displays of metrics; through functionality such as interaction and storytelling, they have evolved to support analytic and communicative goals like monitoring and reporting. Existing dashboard design guidelines, however, are often unable to account for this expanded scope as they largely focus on best practices for visual design. In contrast, we frame dashboard design as facilitating an analytical conversation: a cooperative, interactive experience where a user may interact with, reason about, or freely query the underlying data. By drawing on established principles of conversational flow and communication, we define the concept of a cooperative dashboard as one that enables a fruitful and productive analytical conversation, and derive a set of 39 dashboard design heuristics to support effective analytical conversations. To assess the utility of this framing, we asked 52 computer science and engineering graduate students to apply our heuristics to critique and design dashboards as part of an ungraded, opt-in homework assignment. Feedback from participants demonstrates that our heuristics surface new reasons dashboards may fail, and encourage a more fluid, supportive, and responsive style of dashboard design. Our approach suggests several compelling directions for future work, including dashboard authoring tools that better anticipate conversational turn-taking, repair, and refinement and extending cooperative principles to other analytical workflows.
false
false
[ "Vidya Setlur", "Michael Correll", "Arvind Satyanarayan", "Melanie Tory" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2308.04514v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/ESoh9DNeFgs", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/zDR8CbsQznM", "icon": "video" } ]
Vis
2,023
HoopInSight: Analyzing and Comparing Basketball Shooting Performance Through Visualization
10.1109/TVCG.2023.3326910
Data visualization has the power to revolutionize sports. For example, the rise of shot maps has changed basketball strategy by visually illustrating where “good/bad” shots are taken from. As a result, professional basketball teams today take shots from very different positions on the court than they did 20 years ago. Although the shot map has transformed many facets of the game, there is still much room for improvement to support richer and more complex analytical tasks. More specifically, we believe that the lack of sufficient interactivity to support various analytical queries and the inability to visually compare differences across situations are significant limitations of current shot maps. To address these limitations and showcase new possibilities, we designed and developed HoopInSight, an interactive visualization system that centers around a novel spatial comparison visual technique, enhancing the capabilities of shot maps in basketball analytics. This article presents the system, with a focus on our proposed visual technique and its accompanying interactions, all designed to promote comparison of two different scenarios. Furthermore, we provide reflections on and a discussion of relevant issues, including considerations for designing spatial comparison techniques, the scalability and transferability of this approach, and the benefits and pitfalls of designing as domain experts.
false
false
[ "Yu Fu", "John T. Stasko" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/LnZdCpbHWt4", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/lSxRinkaVH4", "icon": "video" } ]
Vis
2,023
Image or Information? Examining the Nature and Impact of Visualization Perceptual Classification
10.1109/TVCG.2023.3326919
How do people internalize visualizations: as images or information? In this study, we investigate the nature of internalization for visualizations (i.e., how the mind encodes visualizations in memory) and how memory encoding affects its retrieval. This exploratory work examines the influence of various design elements on a user's perception of a chart. Specifically, which design elements lead to perceptions of visualization as an image (aims to provide visual references, evoke emotions, express creativity, and inspire philosophic thought) or as information (aims to present complex data, information, or ideas concisely and promote analytical thinking)? Understanding how design elements contribute to viewers perceiving a visualization more as an image or information will help designers decide which elements to include to achieve their communication goals. For this study, we annotated 500 visualizations and analyzed the responses of 250 online participants, who rated the visualizations on a bilinear scale as ‘image’ or ‘information.’ We then conducted an in-person study ($n = 101$) using a free recall task to examine how the image/information ratings and design elements impacted memory. The results revealed several interesting findings: Image-rated visualizations were perceived as more aesthetically ‘appealing,’ ‘enjoyable,’ and ‘pleasing.’ Information-rated visualizations were perceived as less ‘difficult to understand’ and more aesthetically ‘likable’ and ‘nice,’ though participants expressed higher ‘positive’ sentiment when viewing image-rated visualizations and felt less ‘guided to a conclusion.’ The presence of axes and text annotations heavily influenced the likelihood of participants rating the visualization as ‘information.’ We also found different response patterns among older participants. Importantly, we show that visualizations internalized as ‘images’ are less effective in conveying trends and messages, though they elicit a more positive emotional judgment, while ‘informative’ visualizations exhibit annotation-focused recall and elicit a more positive design judgment. We discuss the implications of this dissociation between aesthetic pleasure and perceived ease of use in visualization design.
false
false
[ "Anjana Arunkumar", "Lace M. K. Padilla", "Gi-Yeul Bae", "Chris Bryan" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2307.10571v2", "icon": "paper" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/XaOcp1Cdb04", "icon": "video" } ]
Vis
2,023
InkSight: Leveraging Sketch Interaction for Documenting Chart Findings in Computational Notebooks
10.1109/TVCG.2023.3327170
Computational notebooks have become increasingly popular for exploratory data analysis due to their ability to support data exploration and explanation within a single document. Effective documentation for explaining chart findings during the exploration process is essential as it helps recall and share data analysis. However, documenting chart findings remains a challenge due to its time-consuming and tedious nature. While existing automatic methods alleviate some of the burden on users, they often fail to cater to users' specific interests. In response to these limitations, we present InkSight, a mixed-initiative computational notebook plugin that generates finding documentation based on the user's intent. InkSight allows users to express their intent in specific data subsets by intuitively sketching atop visualizations. To facilitate this, we designed two types of sketches, i.e., open-path and closed-path sketches. Upon receiving a user's sketch, InkSight identifies the sketch type and corresponding selected data items. Subsequently, it filters data fact types based on the sketch and selected data items before employing existing automatic data fact recommendation algorithms to infer data facts. Using large language models (GPT-3.5), InkSight converts data facts into effective natural language documentation. Users can conveniently fine-tune the generated documentation within InkSight. A user study with 12 participants demonstrated the usability and effectiveness of InkSight in expressing user intent and facilitating chart finding documentation.
false
false
[ "Yanna Lin", "Haotian Li 0001", "Leni Yang", "Aoyu Wu", "Huamin Qu" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2307.07922v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/pNxS5x-zt5A", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/9aUrbFWiaTk", "icon": "video" } ]
Vis
2,023
InnovationInsights: A Visual Analytics Approach for Understanding the Dual Frontiers of Science and Technology
10.1109/TVCG.2023.3327387
Science has long been viewed as a key driver of economic growth and rising standards of living. Knowledge about how scientific advances support marketplace inventions is therefore essential for understanding the role of science in propelling real-world applications and technological progress. The increasing availability of large-scale datasets tracing scientific publications and patented inventions and the complex interactions among them offers us new opportunities to explore the evolving dual frontiers of science and technology at an unprecedented level of scale and detail. However, we lack suitable visual analytics approaches to analyze such complex interactions effectively. Here we introduce InnovationInsights, an interactive visual analysis system for researchers, research institutions, and policymakers to explore the complex linkages between science and technology, and to identify critical innovations, inventors, and potential partners. The system first identifies important associations between scientific papers and patented inventions through a set of statistical measures introduced by our experts from the field of the Science of Science. A series of visualization views are then used to present these associations in the data context. In particular, we introduce the Interplay Graph to visualize patterns and insights derived from the data, helping users effectively navigate citation relationships between papers and patents. This visualization thereby helps them identify the origins of technical inventions and the impact of scientific research. We evaluate the system through two case studies with experts followed by expert interviews. We further engage a premier research institution to test-run the system, helping its institution leaders to extract new insights for innovation. Through both the case studies and the engagement project, we find that our system not only meets our original design goals, allowing users to better identify the sources of technical inventions and to understand the broad impact of scientific research, but also goes beyond these purposes to enable an array of new applications for researchers and research institutions, ranging from identifying untapped innovation potential within an institution to forging new collaboration opportunities between science and industry.
false
false
[ "Yifang Wang 0001", "Yifan Qian", "Xiaoyu Qi", "Nan Cao 0001", "Dashun Wang" ]
[ "HM" ]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2308.02933v2", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/a-pCg9jrhFg", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/06ZsvuyjtjA", "icon": "video" } ]
Vis
2,023
Interactive Design and Optics-Based Visualization of Arbitrary Non-Euclidean Kaleidoscopic Orbifolds
10.1109/TVCG.2023.3326927
Orbifolds are a modern mathematical concept that arises in the research of hyperbolic geometry with applications in computer graphics and visualization. In this paper, we make use of rooms with mirrors as the visual metaphor for orbifolds. Given any arbitrary two-dimensional kaleidoscopic orbifold, we provide an algorithm to construct a Euclidean, spherical, or hyperbolic polygon to match the orbifold. This polygon is then used to create a room for which the polygon serves as the floor and the ceiling. With our system that implements Möbius transformations, the user can interactively edit the scene and see the reflections of the edited objects. To correctly visualize non-Euclidean orbifolds, we adapt the rendering algorithms to account for the geodesics in these spaces, which light rays follow. Our interactive orbifold design system allows the user to create arbitrary two-dimensional kaleidoscopic orbifolds. In addition, our mirror-based orbifold visualization approach has the potential to help users gain insight into the orbifold, including its orbifold notation as well as its universal cover, which can also be the spherical or hyperbolic space.
false
false
[ "Jinta Zheng", "Eugene Zhang", "Yue Zhang 0009" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2309.01853v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/HZauUsDoKao", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/C0cjlLlOM_0", "icon": "video" } ]
Vis
2,023
InvVis: Large-Scale Data Embedding for Invertible Visualization
10.1109/TVCG.2023.3326597
We present InvVis, a new approach for invertible visualization, i.e., reconstructing or further modifying a visualization from an image. InvVis allows the embedding of a significant amount of data, such as chart data, chart information, source code, etc., into visualization images. The encoded image is perceptually indistinguishable from the original one. We propose a new method to efficiently express chart data in the form of images, enabling large-capacity data embedding. We also outline a model based on an invertible neural network to achieve high-quality data concealing and revealing. We explore and implement a variety of application scenarios of InvVis. Additionally, we conduct a series of evaluation experiments to assess our method from multiple perspectives, including data embedding quality, data restoration accuracy, data encoding capacity, etc. The results of our experiments demonstrate the great potential of InvVis in invertible visualization.
false
false
[ "Huayuan Ye", "Chenhui Li", "Yang Li", "Changbo Wang" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2307.16176v3", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/O6gAlsCxrWU", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/SalRGyajgds", "icon": "video" } ]
Vis
2,023
Knowledge Graphs in Practice: Characterizing their Users, Challenges, and Visualization Opportunities
10.1109/TVCG.2023.3326904
This study presents insights from interviews with nineteen Knowledge Graph (KG) practitioners who work in both enterprise and academic settings on a wide variety of use cases. Through this study, we identify critical challenges experienced by KG practitioners when creating, exploring, and analyzing KGs that could be alleviated through visualization design. Our findings reveal three major personas among KG practitioners – KG Builders, Analysts, and Consumers – each of whom have their own distinct expertise and needs. We discover that KG Builders would benefit from schema enforcers, while KG Analysts need customizable query builders that provide interim query results. For KG Consumers, we identify a lack of efficacy for node-link diagrams, and the need for tailored domain-specific visualizations to promote KG adoption and comprehension. Lastly, we find that implementing KGs effectively in practice requires both technical and social solutions that are not addressed with current tools, technologies, and collaborative workflows. From the analysis of our interviews, we distill several visualization research directions to improve KG usability, including knowledge cards that balance digestibility and discoverability, timeline views to track temporal changes, interfaces that support organic discovery, and semantic explanations for AI and machine learning predictions.
false
false
[ "Harry X. Li", "Gabriel Appleby", "Camelia Daniela Brumar", "Remco Chang", "Ashley Suh 0001" ]
[]
[ "P" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2304.01311v3", "icon": "paper" } ]
Vis
2,023
Large-Scale Evaluation of Topic Models and Dimensionality Reduction Methods for 2D Text Spatialization
10.1109/TVCG.2023.3326569
Topic models are a class of unsupervised learning algorithms for detecting the semantic structure within a text corpus. Together with a subsequent dimensionality reduction algorithm, topic models can be used for deriving spatializations for text corpora as two-dimensional scatter plots, reflecting semantic similarity between the documents and supporting corpus analysis. Although the choice of the topic model, the dimensionality reduction, and their underlying hyperparameters significantly impact the resulting layout, it is unknown which particular combinations result in high-quality layouts with respect to accuracy and perception metrics. To investigate the effectiveness of topic models and dimensionality reduction methods for the spatialization of corpora as two-dimensional scatter plots (or as a basis for landscape-type visualizations), we present a large-scale, benchmark-based computational evaluation. Our evaluation consists of (1) a set of corpora, (2) a set of layout algorithms that are combinations of topic models and dimensionality reductions, and (3) quality metrics for quantifying the resulting layout. The corpora are given as document-term matrices, and each document is assigned to a thematic class. The chosen metrics quantify the preservation of local and global properties and the perceptual effectiveness of the two-dimensional scatter plots. By evaluating the benchmark on a computing cluster, we derived a multivariate dataset with over 45,000 individual layouts and corresponding quality metrics. Based on the results, we propose guidelines for the effective design of text spatializations that are based on topic models and dimensionality reductions. As a main result, we show that interpretable topic models are beneficial for capturing the structure of text corpora. We furthermore recommend the use of t-SNE as a subsequent dimensionality reduction.
false
false
[ "Daniel Atzberger", "Tim Cech", "Matthias Trapp 0001", "Rico Richter", "Willy Scheibel", "Jürgen Döllner", "Tobias Schreck" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2307.11770v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/gLMIy-ea8qU", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/E2EacKmpdhI", "icon": "video" } ]
Vis
2,023
Let the Chart Spark: Embedding Semantic Context into Chart with Text-to-Image Generative Model
10.1109/TVCG.2023.3326913
Pictorial visualization seamlessly integrates data and semantic context into visual representation, conveying complex information in an engaging and informative manner. Extensive studies have been devoted to developing authoring tools to simplify the creation of pictorial visualizations. However, mainstream works follow a retrieving-and-editing pipeline that heavily relies on retrieved visual elements from a dedicated corpus, often compromising data integrity. Text-guided generation methods are emerging, but may have limited applicability due to their predefined entities. In this work, we propose ChartSpark, a novel system that embeds semantic context into charts based on text-to-image generative models. ChartSpark generates pictorial visualizations conditioned on both the semantic context conveyed in textual inputs and the data information embedded in plain charts. The method is generic for both foreground and background pictorial generation, satisfying the design practices identified from empirical research into existing pictorial visualizations. We further develop an interactive visual interface that integrates a text analyzer, editing module, and evaluation module to enable users to generate, modify, and assess pictorial visualizations. We experimentally demonstrate the usability of our tool, and conclude with a discussion of the potential of using text-to-image generative models combined with an interactive interface for visualization design.
false
false
[ "Shishi Xiao", "Suizi Huang", "Yue Lin", "Yilin Ye", "Wei Zeng" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2304.14630v2", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/CiwtQhI1o5E", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/OXno64ZJhoE", "icon": "video" } ]
Vis
2,023
Leveraging Historical Medical Records as a Proxy via Multimodal Modeling and Visualization to Enrich Medical Diagnostic Learning
10.1109/TVCG.2023.3326929
Simulation-based Medical Education (SBME) has been developed as a cost-effective means of enhancing the diagnostic skills of novice physicians and interns, thereby mitigating the need for resource-intensive mentor-apprentice training. However, feedback provided in most SBME is often directed towards improving the operational proficiency of learners, rather than providing summative medical diagnoses that result from experience and time. Additionally, the multimodal nature of medical data during diagnosis poses significant challenges for interns and novice physicians, including the tendency to overlook or over-rely on data from certain modalities, and difficulties in comprehending potential associations between modalities. To address these challenges, we present DiagnosisAssistant, a visual analytics system that leverages historical medical records as a proxy for multimodal modeling and visualization to enhance the learning experience of interns and novice physicians. The system employs elaborately designed visualizations to explore different modality data, offer diagnostic interpretive hints based on the constructed model, and enable comparative analyses of specific patients. Our approach is validated through two case studies and expert interviews, demonstrating its effectiveness in enhancing medical training.
false
false
[ "Yang Ouyang", "Yuchen Wu", "He Wang", "Chenyang Zhang", "Furui Cheng", "Chang Jiang", "Lixia Jin", "Yuanwu Cao", "Quan Li" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2307.12199v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/Kpvq91vRi4Q", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/RAOZsUGql9E", "icon": "video" } ]
Vis
2,023
LiberRoad: Probing into the Journey of Chinese Classics Through Visual Analytics
10.1109/TVCG.2023.3326944
Books acted as a crucial carrier of cultural dissemination in ancient times. This work involves joint efforts between visualization and humanities researchers, aiming at building a holistic view of the cultural exchange and integration between China and Japan brought about by the overseas circulation of Chinese classics. Book circulation data consist of uncertain spatiotemporal trajectories, with multiple dimensions, and movement across hierarchical spaces forms a compound network. LiberRoad visualizes the circulation of books collected in the Imperial Household Agency of Japan, and can be generalized to other book movement data. The LiberRoad system enables a smooth transition between three views (Location Graph, map, and timeline) according to the desired perspectives (spatial or temporal), as well as flexible filtering and selection. The Location Graph is a novel uncertainty-aware visualization method that employs improved circle packing to represent spatial hierarchy. The map view intuitively shows the overall circulation by clustering and allows zooming into a single book's trajectory with lenses magnifying local movements. The timeline view ranks dynamically in response to user interaction to facilitate the discovery of temporal events. The evaluation and feedback from the expert users demonstrate that LiberRoad is helpful in revealing movement patterns and comparing circulation characteristics of different times and spaces.
false
false
[ "Yuhan Guo", "Yuchu Luo", "Keer Lu", "Linfang Li", "Haizheng Yang", "Xiaoru Yuan" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/rblrPx4OGPU", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/h7Hxc8765ag", "icon": "video" } ]
Vis
2,023
LiveRetro: Visual Analytics for Strategic Retrospect in Livestream E-Commerce
10.1109/TVCG.2023.3326911
Livestream e-commerce integrates live streaming and online shopping, allowing viewers to make purchases while watching. However, effective marketing strategies remain a challenge due to limited empirical research and subjective biases from the absence of quantitative data. Current tools fail to capture the interdependence between live performances and feedback. This study identified computational features, formulated design requirements, and developed LiveRetro, an interactive visual analytics system. It enables comprehensive retrospective analysis of livestream e-commerce for streamers, viewers, and merchandise. LiveRetro employs enhanced visualization and time-series forecasting models to align performance features and feedback, identifying influences at channel, merchandise, feature, and segment levels. Through case studies and expert interviews, the system provides deep insights into the relationship between live performance and streaming statistics, enabling efficient strategic analysis from multiple perspectives.
false
false
[ "Yuchen Wu", "Yuansong Xu", "Shenghan Gao", "Xingbo Wang 0001", "Wenkai Song", "Zhiheng Nie", "Xiaomeng Fan", "Quan Li" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2307.12213v2", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/BroIyMJvTmc", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/G_ofhrgeANw", "icon": "video" } ]
Vis
2,023
ManiVault: A Flexible and Extensible Visual Analytics Framework for High-Dimensional Data
10.1109/TVCG.2023.3326582
Exploration and analysis of high-dimensional data are important tasks in many fields that produce large and complex data, like the financial sector, systems biology, or cultural heritage. Tailor-made visual analytics software is developed for each specific application, limiting their applicability in other fields. However, as diverse as these fields are, their characteristics and requirements for data analysis are conceptually similar. Many applications share abstract tasks and data types and are often constructed with similar building blocks. Developing such applications, even when based mostly on existing building blocks, requires significant engineering efforts. We developed ManiVault, a flexible and extensible open-source visual analytics framework for analyzing high-dimensional data. The primary objective of ManiVault is to facilitate rapid prototyping of visual analytics workflows for visualization software developers and practitioners alike. ManiVault is built using a plugin-based architecture that offers easy extensibility. While our architecture deliberately keeps plugins self-contained, to guarantee maximum flexibility and re-usability, we have designed and implemented a messaging API for tight integration and linking of modules to support common visual analytics design patterns. We provide several visualization and analytics plugins, and ManiVault's API makes the integration of new plugins easy for developers. ManiVault facilitates the distribution of visualization and analysis pipelines and results for practitioners through saving and reproducing complete application states. As such, ManiVault can be used as a communication tool among researchers to discuss workflows and results. A copy of this paper and all supplemental material is available at osf.io/9k6jw, and source code at github.com/ManiVaultStudio.
false
false
[ "Alexander Vieth", "Thomas Kroes", "Julian Thijssen", "Baldur van Lew", "Jeroen Eggermont", "Soumyadeep Basu", "Elmar Eisemann", "Anna Vilanova", "Thomas Höllt", "Boudewijn P. F. Lelieveldt" ]
[ "HM" ]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2308.01751v2", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/KFUyjH8dBsk", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/zR0CAUMdGEM", "icon": "video" } ]
Vis
2,023
Marjorie: Visualizing Type 1 Diabetes Data to Support Pattern Exploration
10.1109/TVCG.2023.3326936
In this work we propose Marjorie, a visual analytics approach to address the challenge of analyzing patients' diabetes data during brief regular appointments with their diabetologists. Designed in consultation with diabetologists, Marjorie uses a combination of visual and algorithmic methods to support the exploration of patterns in the data. Patterns of interest include seasonal variations of the glucose profiles, and non-periodic patterns such as fluctuations around mealtimes or periods of hypoglycemia (i.e., glucose levels below the normal range). We introduce a unique representation of glucose data based on modified horizon graphs and hierarchical clustering of adjacent carbohydrate or insulin entries. Semantic zooming allows the exploration of patterns on different levels of temporal detail. We evaluated our solution in a case study, which demonstrated Marjorie's potential to provide valuable insights into therapy parameters and unfavorable eating habits, among others. The study results and informal feedback collected from target users suggest that Marjorie effectively supports patients and diabetologists in the joint exploration of patterns in diabetes data, potentially enabling more informed treatment decisions. A free copy of this paper and all supplemental materials are available at https://osf.io/34t8c/.
false
false
[ "Anna Scimone", "Klaus Eckelt", "Marc Streit", "Andreas P. Hinterreiter" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "https://osf.io/caj2n", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/zOfOEm4NU3g", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/MYspNQH2fi8", "icon": "video" } ]
Vis
2,023
Merge Tree Geodesics and Barycenters with Path Mappings
10.1109/TVCG.2023.3326601
Comparative visualization of scalar fields is often facilitated using similarity measures such as edit distances. In this paper, we describe a novel approach for similarity analysis of scalar fields that combines two recently introduced techniques: Wasserstein geodesics/barycenters as well as path mappings, a branch decomposition-independent edit distance. Effectively, we are able to leverage the reduced susceptibility of path mappings to small perturbations in the data when compared with the original Wasserstein distance. Our approach therefore exhibits superior performance and quality in typical tasks such as ensemble summarization, ensemble clustering, and temporal reduction of time series, while retaining practically feasible runtimes. Beyond studying theoretical properties of our approach and discussing implementation aspects, we describe a number of case studies that provide empirical insights into its utility for comparative visualization, and demonstrate the advantages of our method in both synthetic and real-world scenarios. We supply a C++ implementation that can be used to reproduce our results.
false
false
[ "Florian Wetzels", "Mathieu Pont", "Julien Tierny", "Christoph Garth" ]
[ "HM" ]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2308.03672v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/vgeBm-pPER0", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/bybxAFqWovA", "icon": "video" } ]
Vis
2,023
MeTACAST: Target- and Context-Aware Spatial Selection in VR
10.1109/TVCG.2023.3326517
We propose three novel spatial data selection techniques for particle data in VR visualization environments. They are designed to be target- and context-aware and be suitable for a wide range of data features and complex scenarios. Each technique is designed to be adjusted to particular selection intents: the selection of consecutive dense regions, the selection of filament-like structures, and the selection of clusters—with all of them facilitating post-selection threshold adjustment. These techniques allow users to precisely select those regions of space for further exploration—with simple and approximate 3D pointing, brushing, or drawing input—using flexible point- or path-based input and without being limited by 3D occlusions, non-homogeneous feature density, or complex data shapes. These new techniques are evaluated in a controlled experiment and compared with the Baseline method, a region-based 3D painting selection. Our results indicate that our techniques are effective in handling a wide range of scenarios and allow users to select data based on their comprehension of crucial features. Furthermore, we analyze the attributes, requirements, and strategies of our spatial selection methods and compare them with existing state-of-the-art selection methods to handle diverse data features and situations. Based on this analysis we provide guidelines for choosing the most suitable 3D spatial selection techniques based on the interaction environment, the given data characteristics, or the need for interactive post-selection threshold adjustment.
false
false
[ "Lixiang Zhao", "Tobias Isenberg 0001", "Fuqi Xie", "Hai-Ning Liang", "Lingyun Yu 0001" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2308.03616v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/rPIlDZSqKs4", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/cVDWVGbMTc8", "icon": "video" } ]
Vis
2,023
Metrics-Based Evaluation and Comparison of Visualization Notations
10.1109/TVCG.2023.3326907
A visualization notation is a recurring pattern of symbols used to author specifications of visualizations, from data transformation to visual mapping. Programmatic notations use symbols defined by grammars or domain-specific languages (e.g. ggplot2, dplyr, Vega-Lite) or libraries (e.g. Matplotlib, Pandas). Designers and prospective users of grammars and libraries often evaluate visualization notations by inspecting galleries of examples. While such collections demonstrate usage and expressiveness, their construction and evaluation are usually ad hoc, making comparisons of different notations difficult. More rarely, experts analyze notations via usability heuristics, such as the Cognitive Dimensions of Notations framework. These analyses, akin to structured close readings of text, can reveal design deficiencies, but place a burden on the expert to simultaneously consider many facets of often complex systems. To alleviate these issues, we introduce a metrics-based approach to usability evaluation and comparison of notations in which metrics are computed for a gallery of examples across a suite of notations. While applicable to any visualization domain, we explore the utility of our approach via a case study considering statistical graphics that explores 40 visualizations across 9 widely used notations. We facilitate the computation of appropriate metrics and analysis via a new tool called NotaScope. We gathered feedback via interviews with authors or maintainers of prominent charting libraries ($n=6$). We find that this approach is a promising way to formalize, externalize, and extend evaluations and comparisons of visualization notations.
false
false
[ "Nicolas Kruchten", "Andrew M. McNutt", "Michael J. McGuffin" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2308.16353v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/pwkYONAD3e0", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/ZAFdNC4SSr8", "icon": "video" } ]
Vis
2,023
MolSieve: A Progressive Visual Analytics System for Molecular Dynamics Simulations
10.1109/TVCG.2023.3326584
Molecular Dynamics (MD) simulations are ubiquitous in cutting-edge physico-chemical research. They provide critical insights into how a physical system evolves over time given a model of interatomic interactions. Understanding a system's evolution is key to selecting the best candidates for new drugs, materials for manufacturing, and countless other practical applications. With today's technology, these simulations can encompass millions of unit transitions between discrete molecular structures, spanning up to several milliseconds of real time. Attempting to perform a brute-force analysis with datasets of this size is not only computationally impractical, but would not shed light on the physically-relevant features of the data. Moreover, there is a need to analyze simulation ensembles in order to compare similar processes in differing environments. These problems call for an approach that is analytically transparent, computationally efficient, and flexible enough to handle the variety found in materials-based research. In order to address these problems, we introduce MolSieve, a progressive visual analytics system that enables the comparison of multiple long-duration simulations. Using MolSieve, analysts are able to quickly identify and compare regions of interest within immense simulations through its combination of control charts, data-reduction techniques, and highly informative visual components. A simple programming interface is provided which allows experts to fit MolSieve to their needs. To demonstrate the efficacy of our approach, we present two case studies of MolSieve and report on findings from domain collaborators.
false
false
[ "Rostyslav Hnatyshyn", "Jieqiong Zhao", "Danny Perez", "James P. Ahrens", "Ross Maciejewski" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2308.11724v2", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/9tz_7aRFP5o", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/VAxMeWR-1IA", "icon": "video" } ]
Vis
2,023
Mosaic: An Architecture for Scalable & Interoperable Data Views
10.1109/TVCG.2023.3327189
Mosaic is an architecture for greater scalability, extensibility, and interoperability of interactive data views. Mosaic decouples data processing from specification logic: clients publish their data needs as declarative queries that are then managed and automatically optimized by a coordinator that proxies access to a scalable data store. Mosaic generalizes Vega-Lite's selection abstraction to enable rich integration and linking across visualizations and components such as menus, text search, and tables. We demonstrate Mosaic's expressiveness, extensibility, and interoperability through examples that compose diverse visualization, interaction, and optimization techniques—many constructed using vgplot, a grammar of interactive graphics in which graphical marks act as Mosaic clients. To evaluate scalability, we present benchmark studies with order-of-magnitude performance improvements over existing web-based visualization systems—enabling flexible, real-time visual exploration of billion+ record datasets. We conclude by discussing Mosaic's potential as an open platform that bridges visualization languages, scalable visualization, and interactive data systems more broadly.
false
false
[ "Jeffrey Heer", "Dominik Moritz" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/4VPQmScA4Fg", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/txIvM1dA3EM", "icon": "video" } ]
Vis
2,023
My Model is Unfair, Do People Even Care? Visual Design Affects Trust and Perceived Bias in Machine Learning
10.1109/TVCG.2023.3327192
Machine learning technology has become ubiquitous, but, unfortunately, often exhibits bias. As a consequence, disparate stakeholders need to interact with and make informed decisions about using machine learning models in everyday systems. Visualization technology can support stakeholders in understanding and evaluating trade-offs between, for example, accuracy and fairness of models. This paper aims to empirically answer “Can visualization design choices affect a stakeholder's perception of model bias, trust in a model, and willingness to adopt a model?” Through a series of controlled, crowd-sourced experiments with more than 1,500 participants, we identify a set of strategies people follow in deciding which models to trust. Our results show that men and women prioritize fairness and performance differently and that visual design choices significantly affect that prioritization. For example, women trust fairer models more often than men do, participants value fairness more when it is explained using text than as a bar chart, and being explicitly told a model is biased has a bigger impact than showing past biased performance. We test the generalizability of our results by comparing the effect of multiple textual and visual design choices and offer potential explanations of the cognitive mechanisms behind the difference in fairness perception and trust. Our research guides design considerations to support future work developing visualization systems for machine learning.
false
false
[ "Aimen Gaba", "Zhanna Kaufman", "Jason Cheung", "Marie Shvakel", "Kyle Wm. Hall", "Yuriy Brun", "Cindy Xiong Bearfield" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2308.03299v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/CsugZupQSX0", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/sjnUF7NZL14", "icon": "video" } ]
Vis
2,023
Mystique: Deconstructing SVG Charts for Layout Reuse
10.1109/TVCG.2023.3327354
To facilitate the reuse of existing charts, previous research has examined how to obtain a semantic understanding of a chart by deconstructing its visual representation into reusable components, such as encodings. However, existing deconstruction approaches primarily focus on chart styles, handling only basic layouts. In this paper, we investigate how to deconstruct chart layouts, focusing on rectangle-based ones, as they cover not only 17 chart types but also advanced layouts (e.g., small multiples, nested layouts). We develop an interactive tool, called Mystique, adopting a mixed-initiative approach to extract the axes and legend, and deconstruct a chart's layout into four semantic components: mark groups, spatial relationships, data encodings, and graphical constraints. Mystique employs a wizard interface that guides chart authors through a series of steps to specify how the deconstructed components map to their own data. On 150 rectangle-based SVG charts, Mystique achieves above 85% accuracy for axis and legend extraction and 96% accuracy for layout deconstruction. In a chart reproduction study, participants could easily reuse existing charts on new datasets. We discuss the current limitations of Mystique and future research directions.
false
false
[ "Chen Chen", "Bongshin Lee", "Yunhai Wang", "Yunjeong Chang", "Zhicheng Liu" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2307.13567v2", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/Wd8PGfKSsSM", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/gP8a-DNLhN0", "icon": "video" } ]
Vis
2,023
NL2Color: Refining Color Palettes for Charts with Natural Language
10.1109/TVCG.2023.3326522
Choice of color is critical to creating effective charts with an engaging, enjoyable, and informative reading experience. However, designing a good color palette for a chart is a challenging task for novice users who lack related design expertise. For example, they often find it difficult to articulate their abstract intentions and translate these intentions into effective editing actions to achieve a desired outcome. In this work, we present NL2Color, a tool that allows novice users to refine chart color palettes using natural language expressions of their desired outcomes. We first collected and categorized a dataset of 131 triplets, each consisting of an original color palette of a chart, an editing intent, and a new color palette designed by human experts according to the intent. Our tool employs a large language model (LLM) to substitute the colors in original palettes and produce new color palettes by selecting some of the triplets as few-shot prompts. To evaluate our tool, we conducted a comprehensive two-stage evaluation, including a crowd-sourcing study ($\mathrm{N}=71$) and a within-subjects user study ($\mathrm{N}=12$). The results indicate that the quality of the color palettes revised by NL2Color has no significantly large difference from those designed by human experts. The participants who used NL2Color obtained revised color palettes to their satisfaction in a shorter period and with less effort.
false
false
[ "Chuhan Shi", "Weiwei Cui", "Chengzhong Liu", "Chengbo Zheng", "Haidong Zhang", "Qiong Luo 0001", "Xiaojuan Ma" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/DdahFCNJnWY", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/mnYexe0jdAI", "icon": "video" } ]
Vis
2,023
OldVisOnline: Curating a Dataset of Historical Visualizations
10.1109/TVCG.2023.3326908
With the increasing adoption of digitization, more and more historical visualizations created hundreds of years ago are accessible in digital libraries online. This provides a unique opportunity for visualization and history research. However, there is no large-scale digital collection dedicated to historical visualizations; the visualizations are scattered across various collections, which hinders retrieval. In this study, we curate the first large-scale dataset dedicated to historical visualizations. Our dataset comprises 13K historical visualization images with corresponding processed metadata from seven digital libraries. In curating the dataset, we propose a workflow to scrape and process heterogeneous metadata. We develop a semi-automatic labeling approach to distinguish visualizations from other artifacts. Our dataset can be accessed with OldVisOnline, a system we have built to browse and label historical visualizations. We discuss our vision of usage scenarios and research opportunities with our dataset, such as textual criticism for historical visualizations. Drawing upon our experience, we summarize recommendations for future efforts to improve our dataset.
false
false
[ "Yu Zhang 0043", "Ruike Jiang", "Liwenhan Xie", "Yuheng Zhao", "Can Liu 0004", "Tianhong Ding", "Siming Chen 0001", "Xiaoru Yuan" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2308.16053v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/qU9cXKv0Dzk", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/ZyKijYzleR4", "icon": "video" } ]
Vis
2,023
OW-Adapter: Human-Assisted Open-World Object Detection with a Few Examples
10.1109/TVCG.2023.3326577
Open-world object detection (OWOD) is an emerging computer vision problem that involves not only the identification of predefined object classes, as general object detectors do, but also the simultaneous detection of new unknown objects. Recently, several end-to-end deep learning models have been proposed to address the OWOD problem. However, these approaches face several challenges: a) significant changes in both network architecture and training procedure are required; b) they are trained from scratch and thus cannot leverage existing pre-trained general detectors; c) costly annotations for all unknown classes are needed. To overcome these challenges, we present a visual analytic framework called OW-Adapter. It acts as an adaptor to enable pre-trained general object detectors to handle the OWOD problem. Specifically, OW-Adapter is designed to identify, summarize, and annotate unknown examples with minimal human effort. Moreover, we introduce a lightweight classifier to learn newly annotated unknown classes and plug the classifier into pre-trained general detectors to detect unknown objects. We demonstrate the effectiveness of our framework through two case studies of different domains, including common object recognition and autonomous driving. The studies show that a simple yet powerful adaptor can extend the capability of pre-trained general detectors to detect unknown objects and improve the performance on known classes simultaneously.
false
false
[ "Suphanut Jamonnak", "Jiajing Guo", "Wenbin He", "Liang Gou", "Liu Ren" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/QNub6PYMp1k", "icon": "video" } ]
Vis
2,023
Perception of Line Attributes for Visualization
10.1109/TVCG.2023.3326523
Line attributes such as width and dashing are commonly used to encode information. However, many questions on the perception of line attributes remain, such as how many levels of attribute variation can be distinguished or which line attributes are the preferred choices for which tasks. We conducted three studies to develop guidelines for using stylized lines to encode scalar data. In our first study, participants drew stylized lines to encode uncertainty information. Uncertainty is usually visualized alongside other data. Therefore, alternative visual channels are important for the visualization of uncertainty. Additionally, uncertainty—e.g., in weather forecasts—is a familiar topic to most people. Thus, we picked it for our visualization scenarios in study 1. We used the results of our study to determine the most common line attributes for drawing uncertainty: Dashing, luminance, wave amplitude, and width. While those line attributes were especially common for drawing uncertainty, they are also commonly used in other areas. In studies 2 and 3, we investigated the discriminability of the line attributes determined in study 1. Studies 2 and 3 did not require specific application areas; thus, their results apply to visualizing any scalar data in line attributes. We evaluated the just-noticeable differences (JND) and derived recommendations for perceptually distinct line levels. We found that participants could discriminate considerably more levels for the line attribute width than for wave amplitude, dashing, or luminance.
false
false
[ "Anna Sterzik", "Nils Lichtenberg", "Jana Wilms", "Michael Krone", "Douglas W. Cunningham", "Kai Lawonn" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2308.04150v3", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/O-mD127e7nY", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/BwN9YtwhkcM", "icon": "video" } ]
Vis
2,023
Perceptually Uniform Construction of Illustrative Textures
10.1109/TVCG.2023.3326574
Illustrative textures, such as stippling or hatching, were predominantly used as an alternative to conventional Phong rendering. Recently, the potential of encoding information on surfaces or maps using different densities has also been recognized. This has the significant advantage that additional color can be used as another visual channel and the illustrative textures can then be overlaid. Effectively, it is thus possible to display multiple pieces of information, such as two different scalar fields, on surfaces simultaneously. In previous work, these textures were manually generated and the choice of density was determined without empirical grounding. Here, we first want to determine and understand the perceptual space of illustrative textures. We chose a succession of simplices with increasing dimensions as primitives for our textures: Dots, lines, and triangles. Thus, we explore the texture types of stippling, hatching, and triangles. We create a range of textures by sampling the density space uniformly. Then, we conduct three perceptual studies in which the participants performed pairwise comparisons for each texture type. We use multidimensional scaling (MDS) to analyze the perceptual spaces per category. The perception of stippling and triangles seems relatively similar. Both are adequately described by a 1D manifold in 2D space. The perceptual space of hatching consists of two main clusters: Crosshatched textures, and textures with only one hatching direction. However, the perception of hatching textures with only one hatching direction is similar to the perception of stippling and triangles. Based on our findings, we construct perceptually uniform illustrative textures. Afterwards, we provide concrete application examples for the constructed textures.
false
false
[ "Anna Sterzik", "Monique Meuschke", "Douglas W. Cunningham", "Kai Lawonn" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2308.03644v3", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/CGu5KzFO7-w", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/pWsjetlz5pQ", "icon": "video" } ]
Vis
2,023
Photon Field Networks for Dynamic Real-Time Volumetric Global Illumination
10.1109/TVCG.2023.3327107
Volume data is commonly found in many scientific disciplines, like medicine, physics, and biology. Experts rely on robust scientific visualization techniques to extract valuable insights from the data. Recent years have shown path tracing to be the preferred approach for volumetric rendering, given its high levels of realism. However, real-time volumetric path tracing often suffers from stochastic noise and long convergence times, limiting interactive exploration. In this paper, we present a novel method to enable real-time global illumination for volume data visualization. We develop Photon Field Networks—a phase-function-aware, multi-light neural representation of indirect volumetric global illumination. The fields are trained on multi-phase photon caches that we compute a priori. Training can be done within seconds, after which the fields can be used in various rendering tasks. To showcase their potential, we develop a custom neural path tracer, with which our photon fields achieve interactive framerates even on large datasets. We conduct in-depth evaluations of the method's performance, including visual quality, stochastic noise, inference and rendering speeds, and accuracy regarding illumination and phase function awareness. Results are compared to ray marching, path tracing and photon mapping. Our findings show that Photon Field Networks can faithfully represent indirect global illumination within the boundaries of the trained phase spectrum while exhibiting less stochastic noise and rendering at a significantly faster rate than traditional methods.
false
false
[ "David Bauer", "Qi Wu 0015", "Kwan-Liu Ma" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2304.07338v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/Q_c9v8F6BwM", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/CSWbuK0rwoc", "icon": "video" } ]
Vis
2,023
Polarizing Political Polls: How Visualization Design Choices Can Shape Public Opinion and Increase Political Polarization
10.1109/TVCG.2023.3326512
While we typically focus on data visualization as a tool for facilitating cognitive tasks (e.g., learning facts, making decisions), we know relatively little about its second-order impacts on our opinions, attitudes, and values. For example, could design or framing choices interact with viewers' social cognitive biases in ways that promote political polarization? When reporting on U.S. attitudes toward public policies, it is popular to highlight the gap between Democrats and Republicans (e.g., with blue vs. red connected dot plots). But these charts may encourage social-normative conformity, influencing viewers' attitudes to match the divided opinions shown in the visualization. We conducted three experiments examining visualization framing in the context of social conformity and polarization. Crowdworkers viewed charts showing simulated polling results for public policy proposals. We varied framing (aggregating data as non-partisan “All US Adults,” or partisan “Democrat” / “Republican”) and the visualized groups' support levels. Participants then reported their own support for each policy. We found that participants' attitudes were significantly biased toward the group attitudes shown in the stimuli, and that this bias can increase inter-party attitude divergence. These results demonstrate that data visualizations can induce social conformity and accelerate political polarization. Choosing to visualize partisan divisions can divide us further.
false
false
[ "Eli Holder", "Cindy Xiong Bearfield" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2309.00690v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/RmrWjeji25Y", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/_Ux-vx35Iu0", "icon": "video" } ]
Vis
2,023
PromptMagician: Interactive Prompt Engineering for Text-to-Image Creation
10.1109/TVCG.2023.3327168
Generative text-to-image models have gained great popularity among the public for their powerful capability to generate high-quality images based on natural language prompts. However, developing effective prompts for desired images can be challenging due to the complexity and ambiguity of natural language. This research proposes PromptMagician, a visual analysis system that helps users explore the image results and refine the input prompts. The backbone of our system is a prompt recommendation model that takes user prompts as input, retrieves similar prompt-image pairs from DiffusionDB, and identifies special (important and relevant) prompt keywords. To facilitate interactive prompt refinement, PromptMagician introduces a multi-level visualization for the cross-modal embedding of the retrieved images and recommended keywords, and supports users in specifying multiple criteria for personalized exploration. Two usage scenarios, a user study, and expert interviews demonstrate the effectiveness and usability of our system, suggesting it facilitates prompt engineering and improves the creativity support of the generative text-to-image model.
false
false
[ "Yingchaojie Feng", "Xingbo Wang 0001", "Kamkwai Wong", "Sijia Wang", "Yuhong Lu", "Minfeng Zhu", "Baicheng Wang", "Wei Chen 0001" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2307.09036v2", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/zO6bF0BPSQQ", "icon": "video" } ]
Vis
2,023
ProWis: A Visual Approach for Building, Managing, and Analyzing Weather Simulation Ensembles at Runtime
10.1109/TVCG.2023.3326514
Weather forecasting is essential for decision-making and is usually performed using numerical modeling. Numerical weather models, in turn, are complex tools that require specialized training and laborious setup and are challenging even for weather experts. Moreover, weather simulations are data-intensive computations and may take hours to days to complete. When the simulation is finished, the experts face challenges analyzing its outputs, a large mass of spatiotemporal and multivariate data. From the simulation setup to the analysis of results, working with weather simulations involves several manual and error-prone steps. The complexity of the problem grows considerably when the experts must deal with ensembles of simulations, a frequent task in their daily duties. To tackle these challenges, we propose ProWis: an interactive and provenance-oriented system to help weather experts build, manage, and analyze simulation ensembles at runtime. Our system follows a human-in-the-loop approach to enable the exploration of multiple atmospheric variables and weather scenarios. ProWis was built in close collaboration with weather experts, and we demonstrate its effectiveness by presenting two case studies of rainfall events in Brazil.
false
false
[ "Carolina Veiga Ferreira de Souza", "Suzanna Maria Bonnet", "Daniel de Oliveira 0001", "Márcio Cataldi", "Fabio Miranda 0001", "Marcos Lage" ]
[ "HM" ]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2308.05019v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/tW6WzsnZXE8", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/wVpLwvVA5XE", "icon": "video" } ]
Vis
2,023
PSRFlow: Probabilistic Super Resolution with Flow-Based Models for Scientific Data
10.1109/TVCG.2023.3327171
Although many deep-learning-based super-resolution approaches have been proposed in recent years, few can quantify the errors and uncertainties of their super-resolved results, because no ground truth is available at inference time. For scientific visualization applications, however, conveying the uncertainties of the results to scientists is crucial to avoid generating misleading or incorrect information. In this paper, we propose PSRFlow, a novel normalizing flow-based generative model for scientific data super-resolution that incorporates uncertainty quantification into the super-resolution process. PSRFlow learns the conditional distribution of the high-resolution data based on the low-resolution counterpart. By sampling from a Gaussian latent space that captures the missing information in the high-resolution data, one can generate different plausible super-resolution outputs. The efficient sampling in the Gaussian latent space allows our model to perform uncertainty quantification for the super-resolved results. During model training, we augment the training data with samples across various scales to make the model adaptable to data of different scales, achieving flexible super-resolution for a given input. Our results demonstrate superior performance and robust uncertainty quantification compared with existing methods such as interpolation and GAN-based super-resolution networks.
false
false
[ "Jingyi Shen", "Han-Wei Shen" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2308.04605v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/_7X25uEUyz0", "icon": "video" } ]
Vis
2,023
QEVIS: Multi-Grained Visualization of Distributed Query Execution
10.1109/TVCG.2023.3326930
Distributed query processing systems such as Apache Hive and Spark are widely used in many organizations for large-scale data analytics. Analyzing and understanding the query execution process of these systems are daily routines for engineers and crucial for identifying performance problems, optimizing system configurations, and rectifying errors. However, existing visualization tools for distributed query execution are insufficient because (i) most of them (if not all) do not provide fine-grained visualization (i.e., at the atomic task level), which can be crucial for understanding query performance and reasoning about the underlying execution anomalies, and (ii) they do not support proper linkages between system status and query execution, which makes it difficult to identify the causes of execution problems. To tackle these limitations, we propose QEVIS, which visualizes the distributed query execution process with multiple views that focus on different granularities and complement each other. Specifically, we first devise a query logical plan layout algorithm to visualize the overall query execution progress compactly and clearly. We then propose two novel scoring methods to summarize the anomaly degrees of the jobs and machines during query execution, and visualize the anomaly scores intuitively, which allows users to easily identify the components that are worth paying attention to. Moreover, we devise a scatter plot-based task view to show a massive number of atomic tasks, where task distribution patterns are informative for execution problems. We also equip QEVIS with a suite of auxiliary views and interaction methods to support easy and effective cross-view exploration, which makes it convenient to track the causes of execution problems. QEVIS has been used in the production environment of our industry partner, and we present three use cases from real-world applications and a user interview to demonstrate its effectiveness. QEVIS is open-source at https://github.com/DBGroup-SUSTech/QEVIS.
false
false
[ "Qiaomu Shen", "Zhengxin You", "Xiao Yan 0002", "Chaozu Zhang", "Ke Xu", "Dan Zeng 0002", "Jianbin Qin", "Bo Tang 0016" ]
[]
[ "V" ]
[ { "name": "Fast Forward", "url": "https://youtu.be/pg5fqxrqgFc", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/fYXKIJmLs0g", "icon": "video" } ]
Vis
2,023
Quantivine: A Visualization Approach for Large-Scale Quantum Circuit Representation and Analysis
10.1109/TVCG.2023.3327148
Quantum computing is a rapidly evolving field that enables exponential speed-ups over classical algorithms for certain problems. At the heart of this revolutionary technology are quantum circuits, which serve as vital tools for implementing, analyzing, and optimizing quantum algorithms. Recent advancements in quantum computing and the increasing capability of quantum devices have led to the development of more complex quantum circuits. However, traditional quantum circuit diagrams suffer from scalability and readability issues, which limit the efficiency of analysis and optimization processes. In this research, we propose a novel visualization approach for large-scale quantum circuits by adopting semantic analysis to facilitate the comprehension of quantum circuits. We first exploit meta-data and semantic information extracted from the underlying code of quantum circuits to create component segmentations and pattern abstractions, allowing for easier wrangling of massive circuit diagrams. We then develop Quantivine, an interactive system for exploring and understanding quantum circuits. A series of novel circuit visualizations is designed to uncover contextual details such as qubit provenance, parallelism, and entanglement. The effectiveness of Quantivine is demonstrated through two usage scenarios of quantum circuits with up to 100 qubits and a formal user evaluation with quantum experts. A free copy of this paper and all supplemental materials are available at https://osf.io/2m9yh/?view_only=0aa1618c97244f5093cd7ce15f1431f9.
false
false
[ "Zhen Wen", "Yihan Liu", "Siwei Tan", "Jieyi Chen", "Minfeng Zhu", "Dongming Han", "Jianwei Yin", "Mingliang Xu", "Wei Chen 0001" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2307.08969v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/jRw1l1VCzjY", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/Zpy-YQEnPS4", "icon": "video" } ]
Vis
2,023
Radial Icicle Tree (RIT): Node Separation and Area Constancy
10.1109/TVCG.2023.3327178
Icicles and sunbursts are two commonly used visual representations of trees. While icicle trees can map data values faithfully to rectangles of different sizes, often some rectangles are too narrow to be noticed easily. When an icicle tree is transformed into a sunburst tree, the width of each rectangle becomes the length of an annular sector that is usually longer than the original width. While sunburst trees alleviate the problem of narrow rectangles in icicle trees, they no longer maintain the consistency of size encoding. At different tree depths, nodes of the same data values are displayed as annular sectors of different sizes in a sunburst tree, though they are represented by rectangles of the same size in an icicle tree. Furthermore, two nodes from different subtrees could sometimes appear as a single node in both icicle trees and sunburst trees. In this paper, we propose a new visual representation, referred to as radial icicle tree (RIT), which transforms the rectangular bounding box of an icicle tree into a circle, circular sector, or annular sector while introducing gaps between nodes and maintaining area constancy for nodes of the same size. We applied the new visual design to several datasets. Both the analytical design process and the user-centered evaluation have confirmed that this new design improves on icicle and sunburst trees without introducing notable drawbacks.
false
false
[ "Yuanzhe Jin", "Tim J. A. de Jong", "Martijn Tennekes", "Min Chen 0001" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2307.10481v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/hBAaiCz18EI", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/Qg6gOYXhW_Q", "icon": "video" } ]
Vis
2,023
Reclaiming the Horizon: Novel Visualization Designs for Time-Series Data with Large Value Ranges
10.1109/TVCG.2023.3326576
We introduce two novel visualization designs to support practitioners in performing identification and discrimination tasks on large value ranges (i.e., several orders of magnitude) in time-series data: (1) The order of magnitude horizon graph, which extends the classic horizon graph; and (2) the order of magnitude line chart, which adapts the log-line chart. These new visualization designs visualize large value ranges by explicitly splitting the mantissa $m$ and exponent $e$ of a value $v=m\cdot 10^{e}$. We evaluate our novel designs against the most relevant state-of-the-art visualizations in an empirical user study. It focuses on four main tasks commonly employed in the analysis of time-series data with large value ranges: identification, discrimination, estimation, and trend detection. For each task, we analyze error, confidence, and response time. The new order of magnitude horizon graph performs better than or equal to all other designs in identification, discrimination, and estimation tasks. Only for trend detection tasks did the more traditional horizon graphs perform better. Our results are domain-independent, requiring only time-series data with large value ranges.
false
false
[ "Daniel Braun 0010", "Rita Borgo", "Max Sondag", "Tatiana von Landesberger" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2307.10278v2", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/kqx5KXyM84w", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/ATQJQr5dGpQ", "icon": "video" } ]
Vis
2,023
Reducing Ambiguities in Line-Based Density Plots by Image-Space Colorization
10.1109/TVCG.2023.3327149
Line-based density plots are used to reduce visual clutter in line charts with a multitude of individual lines. However, these traditional density plots are often perceived ambiguously, which obstructs the user's identification of underlying trends in complex datasets. Thus, we propose a novel image-space coloring method for line-based density plots that enhances their interpretability. Our method employs color not only to visually communicate data density but also to highlight similar regions in the plot, allowing users to easily identify and distinguish trends. We achieve this by performing hierarchical clustering based on the lines passing through each region and mapping the identified clusters to the hue circle using circular MDS. Additionally, we propose a heuristic approach to assign each line to its most probable cluster, enabling users to analyze both density and individual lines. We motivate our method with a small-scale user study, demonstrate its effectiveness on synthetic and real-world datasets, and provide an interactive online tool for generating colored line-based density plots.
false
false
[ "Yumeng Xue", "Patrick Paetzold", "Rebecca Kehlbeck", "Bin Chen", "Kin Chung Kwan", "Yunhai Wang", "Oliver Deussen" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2307.10447v2", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/-9BPVyeWbbs", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/-B164EgzQKs", "icon": "video" } ]
Vis
2,023
Residency Octree: A Hybrid Approach for Scalable Web-Based Multi-Volume Rendering
10.1109/TVCG.2023.3327193
We present a hybrid multi-volume rendering approach based on a novel Residency Octree that combines the advantages of out-of-core volume rendering using page tables with those of standard octrees. Octree approaches work by performing hierarchical tree traversal. However, in octree volume rendering, tree traversal and the selection of data resolution are intrinsically coupled. This makes fine-grained empty-space skipping costly. Page tables, on the other hand, allow access to any cached brick from any resolution. However, they do not offer a clear and efficient strategy for substituting missing high-resolution data with lower-resolution data. We enable flexible mixed-resolution out-of-core multi-volume rendering by decoupling the cache residency of multi-resolution data from a resolution-independent spatial subdivision determined by the tree. Instead of one-to-one node-to-brick correspondences, each residency octree node is mapped to a set of bricks from different resolution levels. This makes it possible to efficiently and adaptively choose and mix resolutions, adapt sampling rates, and compensate for cache misses. At the same time, residency octrees support fine-grained empty-space skipping, independent of the data subdivision used for caching. Finally, to facilitate collaboration and outreach, and to eliminate local data storage, our implementation is a web-based, pure client-side renderer using WebGPU and WebAssembly. Our method is faster than prior approaches and efficient for many data channels with a flexible and adaptive choice of data resolution.
false
false
[ "Lukas Herzberger", "Markus Hadwiger", "Robert Krüger", "Peter K. Sorger", "Hanspeter Pfister", "Eduard Gröller", "Johanna Beyer" ]
[]
[ "P", "V" ]
[ { "name": "Paper Preprint", "url": "http://arxiv.org/pdf/2309.04393v1", "icon": "paper" }, { "name": "Fast Forward", "url": "https://youtu.be/WtaUcnuG-cY", "icon": "video" }, { "name": "Prerecorded Talk", "url": "https://youtu.be/JCqZaePlqmw", "icon": "video" } ]