Conference | Year | Title | DOI | Abstract | Accessible | Early | AuthorNames-Deduped | Award | Resources | ResourceLinks |
|---|---|---|---|---|---|---|---|---|---|---|
Vis | 2,024 | DeLVE into Earth's Past: A Visualization-Based Exhibit Deployed Across Multiple Museum Contexts | 10.1109/TVCG.2024.3456174 | While previous work has found success in deploying visualizations as museum exhibits, it has not investigated whether museum context impacts visitor behaviour with these exhibits. We present an interactive Deep-time Literacy Visualization Exhibit (DeLVE) to help museum visitors understand deep time (lengths of extremely long geological processes) by improving proportional reasoning skills through comparison of different time periods. DeLVE uses a new visualization idiom, Connected Multi-Tier Ranges, to visualize curated datasets of past events across multiple scales of time, relating extreme scales with concrete scales that have more familiar magnitudes and units. Museum staff at three separate museums approved the deployment of DeLVE as a digital kiosk, and devoted time to curating a unique dataset in each of them. We collect data from two sources, an observational study and system trace logs. We discuss the importance of context: similar museum exhibits in different contexts were received very differently by visitors. We additionally discuss differences in our process from Sedlmair et al.'s design study methodology which is focused on design studies triggered by connection with collaborators rather than the discovery of a concept to communicate. Supplemental materials are available at: https://osf.io/z53dq/ | true | true | [
"Mara Solen",
"Nigar Sultana",
"Laura A. Lukes",
"Tamara Munzner"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2404.01488",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/z53dq/",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1063.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/LoSRYmcllmY",
"icon": "video"
}
] |
Vis | 2,024 | DG Comics: Semi-Automatically Authoring Graph Comics for Dynamic Graphs | 10.1109/TVCG.2024.3456340 | Comics are an effective method for sequential data-driven storytelling, especially for dynamic graphs—graphs whose vertices and edges change over time. However, manually creating such comics is currently time-consuming, complex, and error-prone. In this paper, we propose DG COMICS, a novel comic authoring tool for dynamic graphs that allows users to semi-automatically build and annotate comics. The tool uses a newly developed hierarchical clustering algorithm to segment consecutive snapshots of dynamic graphs while preserving their chronological order. It also presents rich information on both individuals and communities extracted from dynamic graphs in multiple views, where users can explore dynamic graphs and choose what to tell in comics. For evaluation, we provide an example and report the results of a user study and an expert review. | false | true | [
"Joohee Kim",
"Hyunwook Lee",
"Duc M. Nguyen",
"Minjeong Shin",
"Bum Chul Kwon",
"Sungahn Ko",
"Niklas Elmqvist"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2408.04874",
"icon": "paper"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1425.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/qzU1QLDM4zs",
"icon": "video"
}
] |
Vis | 2,024 | DiffFit: Visually-Guided Differentiable Fitting of Molecule Structures to a Cryo-EM Map | 10.1109/TVCG.2024.3456404 | We introduce DiffFit, a differentiable algorithm for fitting protein atomistic structures into an experimental reconstructed Cryo-Electron Microscopy (cryo-EM) volume map. In structural biology, this process is necessary to semi-automatically composite large mesoscale models of complex protein assemblies and complete cellular structures that are based on measured cryo-EM data. The current approaches require manual fitting in three dimensions to start, resulting in approximately aligned structures followed by an automated fine-tuning of the alignment. The DiffFit approach enables domain scientists to fit new structures automatically and visualize the results for inspection and interactive revision. The fitting begins with differentiable three-dimensional (3D) rigid transformations of the protein atom coordinates followed by sampling the density values at the atom coordinates from the target cryo-EM volume. To ensure a meaningful correlation between the sampled densities and the protein structure, we proposed a novel loss function based on a multi-resolution volume-array approach and the exploitation of the negative space. This loss function serves as a critical metric for assessing the fitting quality, ensuring the fitting accuracy and an improved visualization of the results. We assessed the placement quality of DiffFit with several large, realistic datasets and found it to be superior to that of previous methods. We further evaluated our method in two use cases: automating the integration of known composite structures into larger protein complexes and facilitating the fitting of predicted protein domains into volume densities to aid researchers in identifying unknown proteins. We implemented our algorithm as an open-source plugin (github.com/nanovis/DiffFit) in ChimeraX, a leading visualization software in the field. All supplemental materials are available at osf.io/5tx4q. | false | true | [
"Deng Luo",
"Zainab Alsuwaykit",
"Dawar Khan",
"Ondřej Strnad",
"Tobias Isenberg",
"Ivan Viola"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2404.02465",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/5tx4q/",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1533.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/ptmTip8km8k",
"icon": "video"
}
] |
Vis | 2,024 | DimBridge: Interactive Explanation of Visual Patterns in Dimensionality Reductions with Predicate Logic | 10.1109/TVCG.2024.3456391 | Dimensionality reduction techniques are widely used for visualizing high-dimensional data. However, support for interpreting patterns of dimension reduction results in the context of the original data space is often insufficient. Consequently, users may struggle to extract insights from the projections. In this paper, we introduce DimBridge, a visual analytics tool that allows users to interact with visual patterns in a projection and retrieve corresponding data patterns. DimBridge supports several interactions, allowing users to perform various analyses, from contrasting multiple clusters to explaining complex latent structures. Leveraging first-order predicate logic, DimBridge identifies subspaces in the original dimensions relevant to a queried pattern and provides an interface for users to visualize and interact with them. We demonstrate how DimBridge can help users overcome the challenges associated with interpreting visual patterns in projections | false | true | [
"Brian Montambault",
"Gabriel Appleby",
"Jen Rogers",
"Camelia D. Brumar",
"Mingwei Li",
"Remco Chang"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2404.07386v2",
"icon": "paper"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1568.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/tH3ik7KCn0A",
"icon": "video"
}
] |
Vis | 2,024 | Discursive Patinas: Anchoring Discussions in Data Visualizations | 10.1109/TVCG.2024.3456334 | This paper presents discursive patinas, a technique to visualize discussions onto data visualizations, inspired by how people leave traces in the physical world. While data visualizations are widely discussed in online communities and social media, comments tend to be displayed separately from the visualization and we lack ways to relate these discussions back to the content of the visualization, e.g., to situate comments, explain visual patterns, or question assumptions. In our visualization annotation interface, users can designate areas within the visualization. Discursive patinas are made of overlaid visual marks (anchors), attached to textual comments with category labels, likes, and replies. By coloring and styling the anchors, a meta visualization emerges, showing what and where people comment and annotate the visualization. These patinas show regions of heavy discussions, recent commenting activity, and the distribution of questions, suggestions, or personal stories. We ran workshops with 90 students, domain experts, and visualization researchers to study how people use anchors to discuss visualizations and how patinas influence people's understanding of the discussion. Our results show that discursive patinas improve the ability to navigate discussions and guide people to comments that help understand, contextualize, or scrutinize the visualization. We discuss the potential of anchors and patinas to support discursive engagements, including critical readings of visualizations, design feedback, and feminist approaches to data visualization | true | true | [
"Tobias Kauer",
"Derya Akbaba",
"Marian Dörk",
"Benjamin Bach"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2407.17994",
"icon": "paper"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1394.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/zBwtliqYULc",
"icon": "video"
}
] |
Vis | 2,024 | Distributed Augmentation, Hypersweeps, and Branch Decomposition of Contour Trees for Scientific Exploration | 10.1109/TVCG.2024.3456322 | Contour trees describe the topology of level sets in scalar fields and are widely used in topological data analysis and visualization. A main challenge of utilizing contour trees for large-scale scientific data is their computation at scale using high-performance computing. To address this challenge, recent work has introduced distributed hierarchical contour trees for distributed computation and storage of contour trees. However, effective use of these distributed structures in analysis and visualization requires subsequent computation of geometric properties and branch decomposition to support contour extraction and exploration. In this work, we introduce distributed algorithms for augmentation, hypersweeps, and branch decomposition that enable parallel computation of geometric properties, and support the use of distributed contour trees as query structures for scientific exploration. We evaluate the parallel performance of these algorithms and apply them to identify and extract important contours for scientific visualization. | false | true | [
"Mingzhe Li",
"Hamish Carr",
"Oliver Rübel",
"Bei Wang",
"Gunther H Weber"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2408.04836v2",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://gitlab.kitware.com/vtk/vtk-m",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1705.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/_RvXzzJfjFA",
"icon": "video"
}
] |
Vis | 2,024 | DITTO: A Visual Digital Twin for Interventions and Temporal Treatment Outcomes in Head and Neck Cancer | 10.1109/TVCG.2024.3456160 | Digital twin models are of high interest to Head and Neck Cancer (HNC) oncologists, who have to navigate a series of complex treatment decisions that weigh the efficacy of tumor control against toxicity and mortality risks. Evaluating individual risk profiles necessitates a deeper understanding of the interplay between different factors such as patient health, spatial tumor location and spread, and risk of subsequent toxicities that can not be adequately captured through simple heuristics. To support clinicians in better understanding tradeoffs when deciding on treatment courses, we developed DITTO, a digital-twin and visual computing system that allows clinicians to analyze detailed risk profiles for each patient, and decide on a treatment plan. DITTO relies on a sequential Deep Reinforcement Learning digital twin (DT) to deliver personalized risk of both long-term and short-term disease outcome and toxicity risk for HNC patients. Based on a participatory collaborative design alongside oncologists, we also implement several visual explainability methods to promote clinical trust and encourage healthy skepticism when using our system. We evaluate the efficacy of DITTO through quantitative evaluation of performance and case studies with qualitative feedback. Finally, we discuss design lessons for developing clinical visual XAI applications for clinical end users. | false | true | [
"Andrew Wentzel",
"Serageldin Attia",
"Xinhua Zhang",
"Guadalupe Canahuate",
"Clifton David Fuller",
"G. Elisabeta Marai"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://doi.org/10.48550/arXiv.2407.13107",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/qhu7f/",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1059.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/4AmQkVSrVdE",
"icon": "video"
}
] |
Vis | 2,024 | Does This Have a Particular Meaning? Interactive Pattern Explanation for Network Visualizations | 10.1109/TVCG.2024.3456192 | This paper presents an interactive technique to explain visual patterns in network visualizations to analysts who do not understand these visualizations and who are learning to read them. Learning a visualization requires mastering its visual grammar and decoding information presented through visual marks, graphical encodings, and spatial configurations. To help people learn network visualization designs and extract meaningful information, we introduce the concept of interactive pattern explanation that allows viewers to select an arbitrary area in a visualization, then automatically mines the underlying data patterns, and explains both visual and data patterns present in the viewer's selection. In a qualitative and a quantitative user study with a total of 32 participants, we compare interactive pattern explanations to textual-only and visual-only (cheatsheets) explanations. Our results show that interactive explanations increase learning of i) unfamiliar visualizations, ii) patterns in network science, and iii) the respective network terminology | false | true | [
"Xinhuan Shu",
"Alexis Pister",
"Junxiu Tang",
"Fanny Chevalier",
"Benjamin Bach"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/pdf/2408.01272",
"icon": "paper"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1185.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/XYAcTewN_E8",
"icon": "video"
}
] |
Vis | 2,024 | DracoGPT: Extracting Visualization Design Preferences from Large Language Models | 10.1109/TVCG.2024.3456350 | Trained on vast corpora, Large Language Models (LLMs) have the potential to encode visualization design knowledge and best practices. However, if they fail to do so, they might provide unreliable visualization recommendations. What visualization design preferences, then, have LLMs learned? We contribute DracoGPT, a method for extracting, modeling, and assessing visualization design preferences from LLMs. To assess varied tasks, we develop two pipelines—DracoGPT-Rank and DracoGPT-Recommend—to model LLMs prompted to either rank or recommend visual encoding specifications. We use Draco as a shared knowledge base in which to represent LLM design preferences and compare them to best practices from empirical research. We demonstrate that DracoGPT can accurately model the preferences expressed by LLMs, enabling analysis in terms of Draco design constraints. Across a suite of backing LLMs, we find that DracoGPT-Rank and DracoGPT-Recommend moderately agree with each other, but both substantially diverge from guidelines drawn from human subjects experiments. Future work can build on our approach to expand Draco's knowledge base to model a richer set of preferences and to provide a robust and cost-effective stand-in for LLMs. | false | true | [
"Huichen Will Wang",
"Mitchell L. Gordon",
"Leilani Battle",
"Jeffrey Heer"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2408.06845v2",
"icon": "paper"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1472.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/Y-lg3iu3-o4",
"icon": "video"
}
] |
Vis | 2,024 | Dynamic Color Assignment for Hierarchical Data | 10.1109/TVCG.2024.3456386 | Assigning discriminable and harmonic colors to samples according to their class labels and spatial distribution can generate attractive visualizations and facilitate data exploration. However, as the number of classes increases, it is challenging to generate a high-quality color assignment result that accommodates all classes simultaneously. A practical solution is to organize classes into a hierarchy and then dynamically assign colors during exploration. However, existing color assignment methods fall short in generating high-quality color assignment results and dynamically aligning them with hierarchical structures. To address this issue, we develop a dynamic color assignment method for hierarchical data, which is formulated as a multi-objective optimization problem. This method simultaneously considers color discriminability, color harmony, and spatial distribution at each hierarchical level. By using the colors of parent classes to guide the color assignment of their child classes, our method further promotes both consistency and clarity across hierarchical levels. We demonstrate the effectiveness of our method in generating dynamic color assignment results with quantitative experiments and a user study. | false | true | [
"Jiashu Chen",
"Weikai Yang",
"Zelin Jia",
"Lanxi Xiao",
"Shixia Liu"
] | [
"HM"
] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2407.14742",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/e4b5u/?view_only=68cc67c194c443b498bd2545ef551faa",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1595.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/RjtAd4XmMsU",
"icon": "video"
}
] |
Vis | 2,024 | Entanglements for Visualization: Changing Research Outcomes through Feminist Theory | 10.1109/TVCG.2024.3456171 | A growing body of work draws on feminist thinking to challenge assumptions about how people engage with and use visualizations. This work draws on feminist values, driving design and research guidelines that account for the influences of power and neglect. This prior work is largely prescriptive, however, forgoing articulation of how feminist theories of knowledge — or feminist epistemology — can alter research design and outcomes. At the core of our work is an engagement with feminist epistemology, drawing attention to how a new framework for how we know what we know enabled us to overcome intellectual tensions in our research. Specifically, we focus on the theoretical concept of entanglement, central to recent feminist scholarship, and contribute: a history of entanglement in the broader scope of feminist theory; an articulation of the main points of entanglement theory for a visualization context; and a case study of research outcomes as evidence of the potential of feminist epistemology to impact visualization research. This work answers a call in the community to embrace a broader set of theoretical and epistemic foundations and provides a starting point for bringing feminist theories into visualization research. | true | true | [
"Derya Akbaba",
"Lauren Klein",
"Miriah Meyer"
] | [
"BP"
] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://osf.io/preprints/osf/rw35g",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/ubrdy/",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1077.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/x-XyV4J73t4",
"icon": "video"
}
] |
Vis | 2,024 | Evaluating and extending speedup techniques for optimal crossing minimization in layered graph drawings | 10.1109/TVCG.2024.3456349 | A layered graph is an important category of graph in which every node is assigned to a layer, and layers are drawn as parallel or radial lines. They are commonly used to display temporal data or hierarchical graphs. Previous research has demonstrated that minimizing edge crossings is the most important criterion to consider when looking to improve the readability of such graphs. While heuristic approaches exist for crossing minimization, we are interested in optimal approaches to the problem that prioritize human readability over computational scalability. We aim to improve the usefulness and applicability of such optimal methods by understanding and improving their scalability to larger graphs. This paper categorizes and evaluates the state-of-the-art linear programming formulations for exact crossing minimization and describes nine new and existing techniques that could plausibly accelerate the optimization algorithm. Through a computational evaluation, we explore each technique's effect on calculation time and how the techniques assist or inhibit one another, allowing researchers and practitioners to adapt them to the characteristics of their graphs. Our best-performing techniques yielded a median improvement of 2.5–17× depending on the solver used, giving us the capability to create optimal layouts faster and for larger graphs. We provide an open-source implementation of our methodology in Python, where users can pick which combination of techniques to enable according to their use case. A free copy of this paper and all supplemental materials, datasets used, and source code are available at https://osf.io/5vq79. | true | true | [
"Connor Wilson",
"Eduardo Puerta",
"Tarik Crnovrsanin",
"Sara Di Bartolomeo",
"Cody Dunne"
] | [] | [
"V",
"O"
] | [
{
"name": "Supplemental Material",
"url": "https://osf.io/5vq79",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1874.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/wIQnahaRsKk",
"icon": "video"
}
] |
Vis | 2,024 | Evaluating Force-based Haptics for Immersive Tangible Interactions with Surface Visualizations | 10.1109/TVCG.2024.3456316 | Haptic feedback provides an essential sensory stimulus crucial for interaction and analyzing three-dimensional spatio-temporal phenomena on surface visualizations. Given its ability to provide enhanced spatial perception and scene maneuverability, virtual reality (VR) catalyzes haptic interactions on surface visualizations. Various interaction modes, encompassing both mid-air and on-surface interactions—with or without the application of assisting force stimuli—have been explored using haptic force feedback devices. In this paper, we evaluate the use of on-surface and assisted on-surface haptic modes of interaction compared to a no-haptic interaction mode. A force-based haptic stylus is used for all three modalities; the on-surface mode uses collision based forces, whereas the assisted on-surface mode is accompanied by an additional snapping force. We conducted a within-subjects user study involving fundamental interaction tasks performed on surface visualizations. Keeping a consistent visual design across all three modes, our study incorporates tasks that require the localization of the highest, lowest, and random points on surfaces; and tasks that focus on brushing curves on surfaces with varying complexity and occlusion levels. Our findings show that participants took almost the same time to brush curves using all the interaction modes. They could draw smoother curves using the on-surface interaction modes compared to the no-haptic mode. However, the assisted on-surface mode provided better accuracy than the on-surface mode. The on-surface mode was slower in point localization, but the accuracy depended on the visual cues and occlusions associated with the tasks. Finally, we discuss participant feedback on using haptic force feedback as a tangible input modality and share takeaways to aid the design of haptics-based tangible interactions for surface visualizations. | false | true | [
"Hamza Afzaal",
"Usman Alim"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://doi.org/10.48550/arXiv.2408.04031",
"icon": "paper"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1500.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/jJowp-dAYp8",
"icon": "video"
}
] |
Vis | 2,024 | Fast Comparative Analysis of Merge Trees Using Locality Sensitive Hashing | 10.1109/TVCG.2024.3456383 | Scalar field comparison is a fundamental task in scientific visualization. In topological data analysis, we compare topological descriptors of scalar fields—such as persistence diagrams and merge trees—because they provide succinct and robust abstract representations. Several similarity measures for topological descriptors seem to be both asymptotically and practically efficient with polynomial time algorithms, but they do not scale well when handling large-scale, time-varying scientific data and ensembles. In this paper, we propose a new framework to facilitate the comparative analysis of merge trees, inspired by tools from locality sensitive hashing (LSH). LSH hashes similar objects into the same hash buckets with high probability. We propose two new similarity measures for merge trees that can be computed via LSH, using new extensions to Recursive MinHash and subpath signature, respectively. Our similarity measures are extremely efficient to compute and closely resemble the results of existing measures such as merge tree edit distance or geometric interleaving distance. Our experiments demonstrate the utility of our LSH framework in applications such as shape matching, clustering, key event detection, and ensemble summarization. | false | true | [
"Weiran Lyu",
"Raghavendra Sridharamurthy",
"Jeff M. Phillips",
"Bei Wang"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2409.08519v1",
"icon": "paper"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1803.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/77lGpXvrG0k",
"icon": "video"
}
] |
Vis | 2,024 | Ferry: Toward Better Understanding of Input/Output Space for Data Wrangling Scripts | 10.1109/TVCG.2024.3456328 | Understanding the input and output of data wrangling scripts is crucial for various tasks like debugging code and onboarding new data. However, existing research on script understanding primarily focuses on revealing the process of data transformations, lacking the ability to analyze the potential scope, i.e., the space of script inputs and outputs. Meanwhile, constructing input/output space during script analysis is challenging, as the wrangling scripts could be semantically complex and diverse, and the association between different data objects is intricate. To facilitate data workers in understanding the input and output space of wrangling scripts, we summarize ten types of constraints to express table space and build a mapping between data transformations and these constraints to guide the construction of the input/output for individual transformations. Then, we propose a constraint generation model for integrating table constraints across multiple transformations. Based on the model, we develop Ferry, an interactive system that extracts and visualizes the data constraints describing the input and output space of data wrangling scripts, thereby enabling users to grasp the high-level semantics of complex scripts and locate the origins of faulty data transformations. Besides, Ferry provides example input and output data to assist users in interpreting the extracted constraints and checking and resolving the conflicts between these constraints and any uploaded dataset. Ferry's effectiveness and usability are evaluated through two usage scenarios and two case studies, including understanding, debugging, and checking both single and multiple scripts, with and without executable data. Furthermore, an illustrative application is presented to demonstrate Ferry's flexibility. | false | true | [
"Zhongsu Luo",
"Kai Xiong",
"Jiajun Zhu",
"Ran Chen",
"Xinhuan Shu",
"Di Weng",
"Yingcai Wu"
] | [] | [
"V",
"O"
] | [
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1730.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/C0yhkKGlj7k",
"icon": "video"
}
] |
Vis | 2,024 | Fine-Tuned Large Language Model for Visualization System: A Study on Self-Regulated Learning in Education | 10.1109/TVCG.2024.3456145 | Large Language Models (LLMs) have shown great potential in intelligent visualization systems, especially for domain-specific applications. Integrating LLMs into visualization systems presents challenges, and we categorize these challenges into three alignments: domain problems with LLMs, visualization with LLMs, and interaction with LLMs. To achieve these alignments, we propose a framework and outline a workflow to guide the application of fine-tuned LLMs to enhance visual interactions for domain-specific tasks. These alignment challenges are critical in education because of the need for an intelligent visualization system to support beginners' self-regulated learning. Therefore, we apply the framework to education and introduce Tailor-Mind, an interactive visualization system designed to facilitate self-regulated learning for artificial intelligence beginners. Drawing on insights from a preliminary study, we identify self-regulated learning tasks and fine-tuning objectives to guide visualization design and tuning data construction. Our focus on aligning visualization with fine-tuned LLM makes Tailor-Mind more like a personalized tutor. Tailor-Mind also supports interactive recommendations to help beginners better achieve their learning goals. Model performance evaluations and user studies confirm that Tailor-Mind improves the self-regulated learning experience, effectively validating the proposed framework. | false | true | [
"Lin Gao",
"Jing Lu",
"Zekai Shao",
"Ziyue Lin",
"Shengbin Yue",
"Chiokit Ieong",
"Yi Sun",
"Rory Zauner",
"Zhongyu Wei",
"Siming Chen"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2407.20570",
"icon": "paper"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1096.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/KR_r6ARzzx0",
"icon": "video"
}
] |
Vis | 2,024 | FPCS: Feature Preserving Compensated Sampling of Streaming Time Series Data | 10.1109/TVCG.2024.3456375 | Data visualization aids in making data analysis more intuitive and in-depth, with widespread applications in fields such as biology, finance, and medicine. For massive and continuously growing streaming time series data, these data are typically visualized in the form of line charts, but the data transmission puts significant pressure on the network, leading to visualization lag or even failure to render completely. This paper proposes a universal sampling algorithm FPCS, which retains feature points from continuously received streaming time series data, compensates for the frequent fluctuating feature points, and aims to achieve efficient visualization. This algorithm bridges the gap in sampling for streaming time series data. The algorithm has several advantages: (1) It optimizes the sampling results by compensating for fewer feature points, retaining the visualization features of the original data very well, ensuring high-quality sampled data; (2) The execution time is the shortest compared to similar existing algorithms; (3) It has an almost negligible space overhead; (4) The data sampling process does not depend on the overall data; (5) This algorithm can be applied to infinite streaming data and finite static data | false | true | [
"Hongyan Li",
"Bo Yang",
"Yansong Chua"
] | [] | [
"V",
"O"
] | [
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1363.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/TAD1E6fAMHU",
"icon": "video"
}
] |
Vis | 2,024 | From Instruction to Insight: Exploring the Functional and Semantic Roles of Text in Interactive Dashboards | 10.1109/TVCG.2024.3456601 | There is increased interest in understanding the interplay between text and visuals in the field of data visualization. However, this attention has predominantly been on the use of text in standalone visualizations (such as text annotation overlays) or augmenting text stories supported by a series of independent views. In this paper, we shift from the traditional focus on single-chart annotations to characterize the nuanced but crucial communication role of text in the complex environment of interactive dashboards. Through a survey and analysis of 190 dashboards in the wild, plus 13 expert interview sessions with experienced dashboard authors, we highlight the distinctive nature of text as an integral component of the dashboard experience, while delving into the categories, semantic levels, and functional roles of text, and exploring how these text elements are coalesced by dashboard authors to guide and inform dashboard users. Our contributions are threefold. First, we distill qualitative and quantitative findings from our studies to characterize current practices of text use in dashboards, including a categorization of text-based components and design patterns. Second, we leverage current practices and existing literature to propose, discuss, and validate recommended practices for text in dashboards, embodied as a set of 12 heuristics that underscore the semantic and functional role of text in offering navigational cues, contextualizing data insights, supporting reading order, among other concerns. Third, we reflect on our findings to identify gaps and propose opportunities for data visualization researchers to push the boundaries on text usage for dashboards, from authoring support and interactivity to text generation and content personalization. Our research underscores the significance of elevating text as a first-class citizen in data visualization, and the need to support the inclusion of textual components and their interactive affordances in dashboard design. | true | true | [
"Nicole Sultanum",
"Vidya Setlur"
] | [
"HM"
] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2407.14451",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/49zp5/?view_only=cafb29af267d4b50a379050695c39712",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1060.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/OZmdwGmz1BI",
"icon": "video"
}
] |
Vis | 2,024 | Graph Transformer for Label Placement | 10.1109/TVCG.2024.3456141 | Placing text labels is a common way to explain key elements in a given scene. Given a graphic input and original label information, how to place labels to meet both geometric and aesthetic requirements is an open challenging problem. Geometry-wise, traditional rule-driven solutions struggle to capture the complex interactions between labels, let alone consider graphical/appearance content. In terms of aesthetics, training/evaluation data ideally require nontrivial effort and expertise in design, thus resulting in a lack of decent datasets for learning-based methods. To address the above challenges, we formulate the task with a graph representation, where nodes correspond to labels and edges to interactions between labels, and treat label placement as a node position prediction problem. With this novel representation, we design a Label Placement Graph Transformer (LPGT) to predict label positions. Specifically, edge-level attention, conditioned on node representations, is introduced to reveal potential relationships between labels. To integrate graphic/image information, we design a feature aligning strategy that extracts deep features for nodes and edges efficiently. Next, to address the dataset issue, we collect commercial illustrations with professionally designed label layouts from household appliance manuals, and annotate them with useful information to create a novel dataset named the Appliance Manual Illustration Labels (AMIL) dataset. In the thorough evaluation on AMIL, our LPGT solution achieves promising label placement performance compared with popular baselines. Our algorithm and dataset are available at https://github.com/JingweiQu/LPGT. | false | true | [
"Jingwei Qu",
"Pingshun Zhang",
"Enyu Che",
"Yinan Chen",
"Haibin Ling"
] | [] | [
"V",
"O"
] | [
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1218.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/CrX4jHVmDfU",
"icon": "video"
}
] |
Vis | 2,024 | HiRegEx: Interactive Visual Query and Exploration of Multivariate Hierarchical Data | 10.1109/TVCG.2024.3456389 | When using exploratory visual analysis to examine multivariate hierarchical data, users often need to query data to narrow down the scope of analysis. However, formulating effective query expressions remains a challenge for multivariate hierarchical data, particularly when datasets become very large. To address this issue, we develop a declarative grammar, HiRegEx (Hierarchical data Regular Expression), for querying and exploring multivariate hierarchical data. Rooted in the extended multi-level task topology framework for tree visualizations (e-MLTT), HiRegEx delineates three query targets (node, path, and subtree) and two aspects for querying these targets (features and positions), and uses operators developed based on classical regular expressions for query construction. Based on the HiRegEx grammar, we develop an exploratory framework for querying and exploring multivariate hierarchical data and integrate it into the TreeQueryER prototype system. The exploratory framework includes three major components: top-down pattern specification, bottom-up data-driven inquiry, and context-creation data overview. We validate the expressiveness of HiRegEx with the tasks from the e-MLTT framework and showcase the utility and effectiveness of TreeQueryER system through a case study involving expert users in the analysis of a citation tree dataset | false | true | [
"Guozheng Li",
"haotian mi",
"Chi Harold Liu",
"Takayuki Itoh",
"Guoren Wang"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2408.06601v1",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://github.com/bitvis2021/HiRegEx",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1831.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/7q67dSgbZCI",
"icon": "video"
}
] |
Vis | 2,024 | How Aligned are Human Chart Takeaways and LLM Predictions? A Case Study on Bar Charts with Varying Layouts | 10.1109/TVCG.2024.3456378 | Large Language Models (LLMs) have been adopted for a variety of visualizations tasks, but how far are we from perceptually aware LLMs that can predict human takeaways? Graphical perception literature has shown that human chart takeaways are sensitive to visualization design choices, such as spatial layouts. In this work, we examine the extent to which LLMs exhibit such sensitivity when generating takeaways, using bar charts with varying spatial layouts as a case study. We conducted three experiments and tested four common bar chart layouts: vertically juxtaposed, horizontally juxtaposed, overlaid, and stacked. In Experiment 1, we identified the optimal configurations to generate meaningful chart takeaways by testing four LLMs, two temperature settings, nine chart specifications, and two prompting strategies. We found that even state-of-the-art LLMs struggled to generate semantically diverse and factually accurate takeaways. In Experiment 2, we used the optimal configurations to generate 30 chart takeaways each for eight visualizations across four layouts and two datasets in both zero-shot and one-shot settings. Compared to human takeaways, we found that the takeaways LLMs generated often did not match the types of comparisons made by humans. In Experiment 3, we examined the effect of chart context and data on LLM takeaways. We found that LLMs, unlike humans, exhibited variation in takeaway comparison types for different bar charts using the same bar layout. Overall, our case study evaluates the ability of LLMs to emulate human interpretations of data and points to challenges and opportunities in using LLMs to predict human chart takeaways. | false | true | [
"Huichen Will Wang",
"Jane Hoffswell",
"Sao Myat Thazin Thane",
"Victor S. Bursztyn",
"Cindy Xiong Bearfield"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2408.06837v2",
"icon": "paper"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1544.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/L_tj96AoLnI",
"icon": "video"
}
] |
Vis | 2,024 | How Good (Or Bad) Are LLMs at Detecting Misleading Visualizations? | 10.1109/TVCG.2024.3456333 | In this study, we address the growing issue of misleading charts, a prevalent problem that undermines the integrity of information dissemination. Misleading charts can distort the viewer's perception of data, leading to misinterpretations and decisions based on false information. The development of effective automatic detection methods for misleading charts is an urgent field of research. The recent advancement of multimodal Large Language Models (LLMs) has introduced a promising direction for addressing this challenge. We explored the capabilities of these models in analyzing complex charts and assessing the impact of different prompting strategies on the models' analyses. We utilized a dataset of misleading charts collected from the internet by prior research and crafted nine distinct prompts, ranging from simple to complex, to test the ability of four different multimodal LLMs in detecting over 21 different chart issues. Through three experiments–from initial exploration to detailed analysis–we progressively gained insights into how to effectively prompt LLMs to identify misleading charts and developed strategies to address the scalability challenges encountered as we expanded our detection range from the initial five issues to 21 issues in the final experiment. Our findings reveal that multimodal LLMs possess a strong capability for chart comprehension and critical thinking in data interpretation. There is significant potential in employing multimodal LLMs to counter misleading information by supporting critical thinking and enhancing visualization literacy. This study demonstrates the applicability of LLMs in addressing the pressing concern of misleading charts | false | true | [
"Leo Yu-Ho Lo",
"Huamin Qu"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2407.17291",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/vx526",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1318.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/LYcwSpyRxR8",
"icon": "video"
}
] |
Vis | 2,024 | HuBar: A Visual Analytics Tool to Explore Human Behavior based on fNIRS in AR Guidance Systems | 10.1109/TVCG.2024.3456388 | The concept of an intelligent augmented reality (AR) assistant has significant, wide-ranging applications, with potential uses in medicine, military, and mechanics domains. Such an assistant must be able to perceive the environment and actions, reason about the environment state in relation to a given task, and seamlessly interact with the task performer. These interactions typically involve an AR headset equipped with sensors which capture video, audio, and haptic feedback. Previous works have sought to facilitate the development of intelligent AR assistants by visualizing these sensor data streams in conjunction with the assistant's perception and reasoning model outputs. However, existing visual analytics systems do not focus on user modeling or include biometric data, and are only capable of visualizing a single task session for a single performer at a time. Moreover, they typically assume a task involves linear progression from one step to the next. We propose a visual analytics system that allows users to compare performance during multiple task sessions, focusing on non-linear tasks where different step sequences can lead to success. In particular, we design visualizations for understanding user behavior through functional near-infrared spectroscopy (fNIRS) data as a proxy for perception, attention, and memory as well as corresponding motion data (acceleration, angular velocity, and gaze). We distill these insights into embedding representations that allow users to easily select groups of sessions with similar behaviors. We provide two case studies that demonstrate how to use these visualizations to gain insights about task performance using data collected during helicopter copilot training tasks. Finally, we evaluate our approach by conducting an in-depth examination of a think-aloud experiment with five domain experts. | false | true | [
"Sonia Castelo Quispe",
"João Rulff",
"Parikshit Solunke",
"Erin McGowan",
"Guande Wu",
"Iran Roman",
"Roque Lopez",
"Bea Steers",
"Qi Sun",
"Juan Pablo Bello",
"Bradley S Feest",
"Michael Middleton",
"Ryan McKendrick",
"Claudio Silva"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/pdf/2407.12260v1",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://github.com/VIDA-NYU/HuBar",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1833.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/AaX3LMAAkL4",
"icon": "video"
}
] |
Vis | 2,024 | Impact of Vertical Scaling on Normal Probability Density Function Plots | 10.1109/TVCG.2024.3456396 | Probability density function (PDF) curves are among the few charts on a Cartesian coordinate system that are commonly presented without y-axes. This design decision may be due to the lack of relevance of vertical scaling in normal PDFs. In fact, as long as two normal PDFs have the same means and standard deviations (SDs), they can be scaled to occupy different amounts of vertical space while still remaining statistically identical. Because unfixed PDF height increases as SD decreases, visualization designers may find themselves tempted to vertically shrink low-SD PDFs to avoid occlusion or save white space in their figures. Although irregular vertical scaling has been explored in bar and line charts, the visualization community has yet to investigate how this visual manipulation may affect reader comparisons of PDFs. In this paper, we present two preregistered experiments (n = 600, n = 401) that systematically demonstrate that vertical scaling can lead to misinterpretations of PDFs. We also test visual interventions to mitigate misinterpretation. In some contexts, we find including a y-axis can help reduce this effect. Overall, we find that keeping vertical scaling consistent, and therefore maintaining equal pixel areas under PDF curves, results in the highest likelihood of accurate comparisons. Our findings provide insights into the impact of vertical scaling on PDFs, and reveal the complicated nature of proportional area comparisons. | true | true | [
"Racquel Fygenson",
"Lace M. Padilla"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://osf.io/w3dgq",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/7k5un/",
"icon": "other"
},
{
"name": "Experiment 1",
"url": "https://osf.io/eu2th",
"icon": "other"
},
{
"name": "Experiment 2",
"url": "https://osf.io/uxb7n",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1638.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/nHx017A7OcI",
"icon": "video"
}
] |
Vis | 2,024 | Improved Visual Saliency of Graph Clusters with Orderable Node-Link Layouts | 10.1109/TVCG.2024.3456167 | Graphs are often used to model relationships between entities. The identification and visualization of clusters in graphs enable insight discovery in many application areas, such as life sciences and social sciences. Force-directed graph layouts promote the visual saliency of clusters, as they bring adjacent nodes closer together, and push non-adjacent nodes apart. At the same time, matrices can effectively show clusters when a suitable row/column ordering is applied, but are less appealing to untrained users because they do not provide an intuitive node-link metaphor. It is thus worth exploring layouts combining the strengths of the node-link metaphor and node ordering. In this work, we study the impact of node ordering on the visual saliency of clusters in orderable node-link diagrams, namely radial diagrams, arc diagrams and symmetric arc diagrams. Through a crowdsourced controlled experiment, we show that users can count clusters consistently more accurately, and to a large extent faster, with orderable node-link diagrams than with three state-of-the-art force-directed layout algorithms, i.e., ‘Linlog’, ‘Backbone’ and ‘sfdp’. The measured advantage is greater in case of low cluster separability and/or low compactness. | false | true | [
"Nora Al-Naami",
"Nicolas Medoc",
"Matteo Magnani",
"Mohammad Ghoniem"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://hal.science/hal-04668352",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/kc3dg/",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1214.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/8QT8_S2C0fs",
"icon": "video"
}
] |
Vis | 2,024 | Interactive Design-of-Experiments: Optimizing a Cooling System | 10.1109/TVCG.2024.3456356 | The optimization of cooling systems is important in many cases, for example for cabin and battery cooling in electric cars. Such an optimization is governed by multiple, conflicting objectives and it is performed across a multi-dimensional parameter space. The extent of the parameter space, the complexity of the non-linear model of the system, as well as the time needed per simulation run and factors that are not modeled in the simulation necessitate an iterative, semi-automatic approach. We present an interactive visual optimization approach, where the user works with a p-h diagram to steer an iterative, guided optimization process. A deep learning (DL) model provides estimates for parameters, given a target characterization of the system, while numerical simulation is used to compute system characteristics for an ensemble of parameter sets. Since the DL model only serves as an approximation of the inverse of the cooling system and since target characteristics can be chosen according to different, competing objectives, an iterative optimization process is realized, developing multiple sets of intermediate solutions, which are visually related to each other. The standard p-h diagram, integrated interactively in this approach, is complemented by a dual, also interactive visual representation of additional expressive measures representing the system characteristics. We show how the known four-points semantic of the p-h diagram meaningfully transfers to the dual data representation. When evaluating this approach in the automotive domain, we found that our solution helped with the overall comprehension of the cooling system and that it led to a faster convergence during optimization. | false | true | [
"Rainer Splechtna",
"Majid Behravan",
"Mario Jelovic",
"Denis Gracanin",
"Helwig Hauser",
"Kresimir Matkovic"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2408.12607",
"icon": "paper"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1805.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/zGpaBxAqkHw",
"icon": "video"
}
] |
Vis | 2,024 | KNOWNET: Guided Health Information Seeking from LLMs via Knowledge Graph Integration | 10.1109/TVCG.2024.3456364 | The increasing reliance on Large Language Models (LLMs) for health information seeking can pose severe risks due to the potential for misinformation and the complexity of these topics. This paper introduces KNOWNET, a visualization system that integrates LLMs with Knowledge Graphs (KG) to provide enhanced accuracy and structured exploration. Specifically, for enhanced accuracy, KNOWNET extracts triples (e.g., entities and their relations) from LLM outputs and maps them into the validated information and supported evidence in external KGs. For structured exploration, KNOWNET provides next-step recommendations based on the neighborhood of the currently explored entities in KGs, aiming to guide a comprehensive understanding without overlooking critical aspects. To enable reasoning with both the structured data in KGs and the unstructured outputs from LLMs, KNOWNET conceptualizes the understanding of a subject as the gradual construction of graph visualization. A progressive graph visualization is introduced to monitor past inquiries, and bridge the current query with the exploration history and next-step recommendations. We demonstrate the effectiveness of our system via use cases and expert interviews. | false | true | [
"Youfu Yan",
"Yu Hou",
"Yongkang Xiao",
"Rui Zhang",
"Qianwen Wang"
] | [
"HM"
] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2407.13598",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://visual-intelligence-umn.github.io/KNOWNET/",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1503.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/_eV967qYScs",
"icon": "video"
}
] |
Vis | 2,024 | Learnable and Expressive Visualization Authoring through Blended Interfaces | 10.1109/TVCG.2024.3456598 | A wide range of visualization authoring interfaces enable the creation of highly customized visualizations. However, prioritizing expressiveness often impedes the learnability of the authoring interface. The diversity of users, such as varying computational skills and prior experiences in user interfaces, makes it even more challenging for a single authoring interface to satisfy the needs of a broad audience. In this paper, we introduce a framework to balance learnability and expressivity in a visualization authoring system. Adopting insights from learnability studies, such as multimodal interaction and visualization literacy, we explore the design space of blending multiple visualization authoring interfaces for supporting authoring tasks in a complementary and flexible manner. To evaluate the effectiveness of blending interfaces, we implemented a proof-of-concept system, Blace, that combines four common visualization authoring interfaces—template-based, shelf configuration, natural language, and code editor—that are tightly linked to one another to help users easily relate unfamiliar interfaces to more familiar ones. Using the system, we conducted a user study with 12 domain experts who regularly visualize genomics data as part of their analysis workflow. Participants with varied visualization and programming backgrounds were able to successfully reproduce unfamiliar visualization examples without a guided tutorial in the study. Feedback from a post-study qualitative questionnaire further suggests that blending interfaces enabled participants to learn the system easily and assisted them in confidently editing unfamiliar visualization grammar in the code editor, enabling expressive customization. Reflecting on our study results and the design of our system, we discuss the different interaction patterns that we identified and design implications for blending visualization authoring interfaces. | true | true | [
"Sehi L'Yi",
"Astrid van den Brandt",
"Etowah Adams",
"Huyen N. Nguyen",
"Nils Gehlenborg"
] | [
"HM"
] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://osf.io/preprints/osf/pjcn4",
"icon": "paper"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1504.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/IL0N2WMISlg",
"icon": "video"
}
] |
Vis | 2,024 | LLM Comparator: Interactive Analysis of Side-by-Side Evaluation of Large Language Models | 10.1109/TVCG.2024.3456354 | Evaluating large language models (LLMs) presents unique challenges. While automatic side-by-side evaluation, also known as LLM-as-a-judge, has become a promising solution, model developers and researchers face difficulties with scalability and interpretability when analyzing these evaluation outcomes. To address these challenges, we introduce LLM Comparator, a new visual analytics tool designed for side-by-side evaluations of LLMs. This tool provides analytical workflows that help users understand when and why one LLM outperforms or underperforms another, and how their responses differ. Through close collaboration with practitioners developing LLMs at Google, we have iteratively designed, developed, and refined the tool. Qualitative feedback from these users highlights that the tool facilitates in-depth analysis of individual examples while enabling users to visually overview and flexibly slice data. This empowers users to identify undesirable patterns, formulate hypotheses about model behavior, and gain insights for model improvement. LLM Comparator has been integrated into Google's LLM evaluation platforms and open-sourced | false | true | [
"Minsuk Kahng",
"Ian Tenney",
"Mahima Pushkarna",
"Michael Xieyang Liu",
"James Wexler",
"Emily Reif",
"Krystal Kallarackal",
"Minsuk Chang",
"Michael Terry",
"Lucas Dixon"
] | [] | [
"V",
"O"
] | [
{
"name": "Supplemental Material",
"url": "https://github.com/PAIR-code/llm-comparator",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1326.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/DVHN9srNTkk",
"icon": "video"
}
] |
Vis | 2,024 | Localized Evaluation for Constructing Discrete Vector Fields | 10.1109/TVCG.2024.3456355 | Topological abstractions offer a method to summarize the behavior of vector fields, but computing them robustly can be challenging due to numerical precision issues. One alternative is to represent the vector field using a discrete approach, which constructs a collection of pairs of simplices in the input mesh that satisfies criteria introduced by Forman's discrete Morse theory. While numerous approaches exist to compute pairs in the restricted case of the gradient of a scalar field, state-of-the-art algorithms for the general case of vector fields require expensive optimization procedures. This paper introduces a fast, novel approach for pairing simplices of two-dimensional, triangulated vector fields that do not vary in time. The key insight of our approach is that we can employ a local evaluation, inspired by the approach used to construct a discrete gradient field, where every simplex in a mesh is considered by no more than one of its vertices. Specifically, we observe that for any edge in the input mesh, we can uniquely assign an outward direction of flow. We can further expand this consistent notion of outward flow at each vertex, which corresponds to the concept of a downhill flow in the case of scalar fields. Working with outward flow enables a linear-time algorithm that processes the (outward) neighborhoods of each vertex one-by-one, similar to the approach used for scalar fields. We couple our approach to constructing discrete vector fields with a method to extract, simplify, and visualize topological features. Empirical results on analytic and simulation data demonstrate drastic improvements in running time, produce features similar to the current state-of-the-art, and show the application of simplification to large, complex flows | false | true | [
"Tanner Finken",
"Julien Tierny",
"Joshua A Levine"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/abs/2408.04769",
"icon": "paper"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1494.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/OzB9wNzCmRc",
"icon": "video"
}
] |
Vis | 2,024 | Loops: Leveraging Provenance and Visualization to Support Exploratory Data Analysis in Notebooks | 10.1109/TVCG.2024.3456186 | Exploratory data science is an iterative process of obtaining, cleaning, profiling, analyzing, and interpreting data. This cyclical way of working creates challenges within the linear structure of computational notebooks, leading to issues with code quality, recall, and reproducibility. To remedy this, we present Loops, a set of visual support techniques for iterative and exploratory data analysis in computational notebooks. Loops leverages provenance information to visualize the impact of changes made within a notebook. In visualizations of the notebook provenance, we trace the evolution of the notebook over time and highlight differences between versions. Loops visualizes the provenance of code, markdown, tables, visualizations, and images and their respective differences. Analysts can explore these differences in detail in a separate view. Loops not only makes the analysis process transparent but also supports analysts in their data science work by showing the effects of changes and facilitating comparison of multiple versions. We demonstrate our approach's utility and potential impact in two use cases and feedback from notebook users from various backgrounds. This paper and all supplemental materials are available at https://osf.io/79eyn. | false | true | [
"Klaus Eckelt",
"Kiran Gadhave",
"Alexander Lex",
"Marc Streit"
] | [] | [
"PW",
"P",
"V",
"C",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://osf.io/preprints/osf/79eyn",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/hxuak/",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1251.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/2l7HgOd2NIY",
"icon": "video"
},
{
"name": "Project Website",
"url": "https://jku-vds-lab.at/publications/2024_loops/",
"icon": "project_website"
},
{
"name": "Overview Video",
"url": "https://youtu.be/jCUwLm5wfNo",
"icon": "video"
},
{
"name": "Source Code",
"url": "https://github.com/jku-vds-lab/loops",
"icon": "code"
},
{
"name": "Live Demo",
"url": "https://mybinder.org/v2/gh/jku-vds-lab/loops/main?labpath=notebooks/Use%20Case%201.ipynb",
"icon": "project_website"
}
] |
Vis | 2,024 | Manipulable Semantic Components: a Computational Representation of Data Visualization Scenes | 10.1109/TVCG.2024.3456296 | Various data visualization applications such as reverse engineering and interactive authoring require a vocabulary that describes the structure of visualization scenes and the procedure to manipulate them. A few scene abstractions have been proposed, but they are restricted to specific applications for a limited set of visualization types. A unified and expressive model of data visualization scenes for different applications has been missing. To fill this gap, we present Manipulable Semantic Components (MSC), a computational representation of data visualization scenes, to support applications in scene understanding and augmentation. MSC consists of two parts: a unified object model describing the structure of a visualization scene in terms of semantic components, and a set of operations to generate and modify the scene components. We demonstrate the benefits of MSC in three applications: visualization authoring, visualization deconstruction and reuse, and animation specification. | false | true | [
"Zhicheng Liu",
"Chen Chen",
"John Hooker"
] | [
"HM"
] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2408.04798v1",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://mascot-vis.github.io/",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1416.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/4IYhlRFnM64",
"icon": "video"
}
] |
Vis | 2,024 | Mind Drifts, Data Shifts: Utilizing Mind Wandering to Track the Evolution of User Experience with Data Visualizations | 10.1109/TVCG.2024.3456344 | User experience in data visualization is typically assessed through post-viewing self-reports, but these overlook the dynamic cognitive processes during interaction. This study explores the use of mind wandering – a phenomenon where attention spontaneously shifts from a primary task to internal, task-related thoughts or unrelated distractions – as a dynamic measure during visualization exploration. Participants reported mind wandering while viewing visualizations from a pre-labeled visualization database and then provided quantitative ratings of trust, engagement, and design quality, along with qualitative descriptions and short-term/long-term recall assessments. Results show that mind wandering negatively affects short-term visualization recall and various post-viewing measures, particularly for visualizations with little text annotation. Further, the type of mind wandering impacts engagement and emotional response. Mind wandering also functions as an intermediate process linking visualization design elements to post-viewing measures, influencing how viewers engage with and interpret visual information over time. Overall, this research underscores the importance of incorporating mind wandering as a dynamic measure in visualization design and evaluation, offering novel avenues for enhancing user engagement and comprehension. | true | true | [
"Anjana Arunkumar",
"Lace M. Padilla",
"Chris Bryan"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2408.03576",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/h5awt/",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1726.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/WuNz1VKzPLY",
"icon": "video"
}
] |
Vis | 2,024 | Mixing Linters with GUIs: A Color Palette Design Probe | 10.1109/TVCG.2024.3456317 | Visualization linters are end-user facing evaluators that automatically identify potential chart issues. These spell-checker like systems offer a blend of interpretability and customization that is not found in other forms of automated assistance. However, existing linters do not model context and have primarily targeted users who do not need assistance, resulting in obvious—even annoying—advice. We investigate these issues within the domain of color palette design, which serves as a microcosm of visualization design concerns. We contribute a GUI-based color palette linter as a design probe that covers perception, accessibility, context, and other design criteria, and use it to explore visual explanations, integrated fixes, and user defined linting rules. Through a formative interview study and theory-driven analysis, we find that linters can be meaningfully integrated into graphical contexts thereby addressing many of their core issues. We discuss implications for integrating linters into visualization tools, developing improved assertion languages, and supporting end-user tunable advice—all laying the groundwork for more effective visualization linters in any context. | true | true | [
"Andrew M McNutt",
"Maureen Stone",
"Jeffrey Heer"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2407.21285",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/geauf",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1290.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/CY7ycxWmLkw",
"icon": "video"
}
] |
Vis | 2,024 | ModalChorus: Visual Probing and Alignment of Multi-modal Embeddings via Modal Fusion Map | 10.1109/TVCG.2024.3456387 | Multi-modal embeddings form the foundation for vision-language models, such as CLIP embeddings, the most widely used text-image embeddings. However, these embeddings are vulnerable to subtle misalignment of cross-modal features, resulting in decreased model performance and diminished generalization. To address this problem, we design ModalChorus, an interactive system for visual probing and alignment of multi-modal embeddings. ModalChorus primarily offers a two-stage process: 1) embedding probing with Modal Fusion Map (MFM), a novel parametric dimensionality reduction method that integrates both metric and nonmetric objectives to enhance modality fusion; and 2) embedding alignment that allows users to interactively articulate intentions for both point-set and set-set alignments. Quantitative and qualitative comparisons for CLIP embeddings with existing dimensionality reduction (e.g., t-SNE and MDS) and data fusion (e.g., data context map) methods demonstrate the advantages of MFM in showcasing cross-modal features over common vision-language datasets. Case studies reveal that ModalChorus can facilitate intuitive discovery of misalignment and efficient re-alignment in scenarios ranging from zero-shot classification to cross-modal retrieval and generation | false | true | [
"Yilin Ye",
"Shishi Xiao",
"Xingchen Zeng",
"Wei Zeng"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2407.12315",
"icon": "paper"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1603.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/oJrEG0FkEYw",
"icon": "video"
}
] |
Vis | 2,024 | Motion-Based Visual Encoding Can Improve Performance on Perceptual Tasks with Dynamic Time Series | 10.1109/TVCG.2024.3456405 | Dynamic data visualizations can convey large amounts of information over time, such as using motion to depict changes in data values for multiple entities. Such dynamic displays put a demand on our visual processing capacities, yet our perception of motion is limited. Several techniques have been shown to improve the processing of dynamic displays. Staging the animation to sequentially show steps in a transition and tracing object movement by displaying trajectory histories can improve processing by reducing the cognitive load. In this paper, we examine the effectiveness of staging and tracing in dynamic displays. We showed participants animated line charts depicting the movements of lines and asked them to identify the line with the highest mean and variance. We manipulated the animation to display the lines with or without staging, tracing and history, and compared the results to a static chart as a control. Results showed that tracing and staging are preferred by participants, and improve their performance in mean and variance tasks respectively. They also preferred display times three times shorter when staging was used. Also, encoding animation speed with mean and variance in congruent tasks is associated with higher accuracy. These findings help inform real-world best practices for building dynamic displays | false | true | [
"Songwen Hu",
"Ouxun Jiang",
"Jeffrey Riedmiller",
"Cindy Xiong Bearfield"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2408.04799v1",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/8c95v/",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1325.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/pY3yFbMe5RE",
"icon": "video"
}
] |
Vis | 2,024 | MSz: An Efficient Parallel Algorithm for Correcting Morse-Smale Segmentations in Error-Bounded Lossy Compressors | 10.1109/TVCG.2024.3456337 | This research explores a novel paradigm for preserving topological segmentations in existing error-bounded lossy compressors. Today's lossy compressors rarely consider preserving topologies such as Morse-Smale complexes, and the discrepancies in topology between original and decompressed datasets could potentially result in erroneous interpretations or even incorrect scientific conclusions. In this paper, we focus on preserving Morse-Smale segmentations in 2D/3D piecewise linear scalar fields, targeting the precise reconstruction of minimum/maximum labels induced by the integral line of each vertex. The key is to derive a series of edits during compression time. These edits are applied to the decompressed data, leading to an accurate reconstruction of segmentations while keeping the error within the prescribed error bound. To this end, we develop a workflow to fix extrema and integral lines alternately until convergence within finite iterations. We accelerate each workflow component with shared-memory/GPU parallelism to make the performance practical for coupling with compressors. We demonstrate use cases with fluid dynamics, ocean, and cosmology application datasets, achieving significant acceleration with an NVIDIA A100 GPU | true | true | [
"Yuxiao Li",
"Xin Liang",
"Bei Wang",
"Yongfeng Qiu",
"Lin Yan",
"Hanqi Guo"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2406.09423",
"icon": "paper"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1793.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/TRMO8YUuSSs",
"icon": "video"
}
] |
Vis | 2,024 | Objective Lagrangian Vortex Cores and their Visual Representations | 10.1109/TVCG.2024.3456384 | The numerical extraction of vortex cores from time-dependent fluid flow attracted much attention over the past decades. A commonly agreed upon vortex definition remained elusive since a proper vortex core needs to satisfy two hard constraints: it must be objective and Lagrangian. Recent methods on objectivization met the first but not the second constraint, since there was no formal guarantee that the resulting vortex coreline is indeed a pathline of the fluid flow. In this paper, we propose the first vortex core definition that is both objective and Lagrangian. Our approach restricts observer motions to follow along pathlines, which reduces the degrees of freedom: we only need to optimize for an observer rotation that makes the observed flow as steady as possible. This optimization succeeds along Lagrangian vortex corelines and will result in a non-zero time-partial everywhere else. By performing this optimization at each point of a spatial grid, we obtain a residual scalar field, which we call vortex deviation error. The local minima on the grid serve as seed points for a gradient descent optimization that delivers sub-voxel accurate corelines. The visualization of both 2D and 3D vortex cores is based on the separation of the movement of the vortex core and the swirling flow behavior around it. While the vortex core is represented by a pathline, the swirling motion around it is visualized by streamlines in the correct frame. We demonstrate the utility of the approach on several 2D and 3D time-dependent vector fields. | false | true | [
"Tobias Günther",
"Holger Theisel"
] | [] | [
"V",
"O"
] | [
{
"name": "Supplemental Material",
"url": "https://doi.org/10.5281/zenodo.12750719",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1574.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/uzDwMGgfoLE",
"icon": "video"
}
] |
Vis | 2,024 | ParamsDrag: Interactive Parameter Space Exploration via Image-Space Dragging | 10.1109/TVCG.2024.3456338 | Numerical simulation serves as a cornerstone in scientific modeling, yet the process of fine-tuning simulation parameters poses significant challenges. Conventionally, parameter adjustment relies on extensive numerical simulations, data analysis, and expert insights, resulting in substantial computational costs and low efficiency. The emergence of deep learning in recent years has provided promising avenues for more efficient exploration of parameter spaces. However, existing approaches often lack intuitive methods for precise parameter adjustment and optimization. To tackle these challenges, we introduce ParamsDrag, a model that facilitates parameter space exploration through direct interaction with visualizations. Inspired by DragGAN, our ParamsDrag model operates in three steps. First, the generative component of ParamsDrag generates visualizations based on the input simulation parameters. Second, by directly dragging structure-related features in the visualizations, users can intuitively understand the controlling effect of different parameters. Third, with the understanding from the earlier step, users can steer ParamsDrag to produce dynamic visual outcomes. Through experiments conducted on real-world simulations and comparisons with state-of-the-art deep learning-based approaches, we demonstrate the efficacy of our solution | true | true | [
"Guan Li",
"Yang Liu",
"Guihua Shan",
"Shiyu Cheng",
"Weiqun Cao",
"Junpeng Wang",
"Ko-Chih Wang"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2407.14100",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://github.com/YangL-04-20/ParamsDrag",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1427.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/qD2sZpl6UHU",
"icon": "video"
}
] |
Vis | 2,024 | ParetoTracker: Understanding Population Dynamics in Multi-objective Evolutionary Algorithms through Visual Analytics | 10.1109/TVCG.2024.3456142 | Multi-objective evolutionary algorithms (MOEAs) have emerged as powerful tools for solving complex optimization problems characterized by multiple, often conflicting, objectives. While advancements have been made in computational efficiency as well as diversity and convergence of solutions, a critical challenge persists: the internal evolutionary mechanisms are opaque to human users. Drawing upon the successes of explainable AI in explaining complex algorithms and models, we argue that the need to understand the underlying evolutionary operators and population dynamics within MOEAs aligns well with a visual analytics paradigm. This paper introduces ParetoTracker, a visual analytics framework designed to support the comprehension and inspection of population dynamics in the evolutionary processes of MOEAs. Informed by preliminary literature review and expert interviews, the framework establishes a multi-level analysis scheme, which caters to user engagement and exploration ranging from examining overall trends in performance metrics to conducting fine-grained inspections of evolutionary operations. In contrast to conventional practices that require manual plotting of solutions for each generation, ParetoTracker facilitates the examination of temporal trends and dynamics across consecutive generations in an integrated visual interface. The effectiveness of the framework is demonstrated through case studies and expert interviews focused on widely adopted benchmark optimization problems | false | true | [
"Zherui Zhang",
"Fan Yang",
"Ran Cheng",
"Yuxin Ma"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2408.04539",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://github.com/VIS-SUSTech/ParetoTracker",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1179.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/iExTSj-IaHc",
"icon": "video"
}
] |
Vis | 2,024 | Path-based Design Model for Constructing and Exploring Alternative Visualisations | 10.1109/TVCG.2024.3456323 | We present a path-based design model and system for designing and creating visualisations. Our model represents a systematic approach to constructing visual representations of data or concepts following a predefined sequence of steps. The initial step involves outlining the overall appearance of the visualisation by creating a skeleton structure, referred to as a flowpath. Subsequently, we specify objects, visual marks, properties, and appearance, storing them in a gene. Lastly, we map data onto the flowpath, ensuring suitable morphisms. Alternative designs are created by exchanging values in the gene. For example, designs that share similar traits are created by making small incremental changes to the gene. Our design methodology fosters the generation of diverse creative concepts, space-filling visualisations, and traditional formats like bar charts, circular plots and pie charts. Through our implementation we showcase the model in action. As an example application, we integrate the output visualisations onto a smartwatch and visualisation dashboards. In this article we (1) introduce, define and explain the path model and discuss possibilities for its use, (2) present our implementation, results, and evaluation, and (3) demonstrate and evaluate an application of its use on a mobile watch. | true | true | [
"James R Jackson",
"Panagiotis D. Ritsos",
"Peter W. S. Butcher",
"Jonathan C Roberts"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2408.03681",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://jamesjacko.github.io/genii/",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1613.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/4GP7AtRD2y4",
"icon": "video"
}
] |
Vis | 2,024 | PhenoFlow: A Human-LLM Driven Visual Analytics System for Exploring Large and Complex Stroke Datasets | 10.1109/TVCG.2024.3456215 | Acute stroke demands prompt diagnosis and treatment to achieve optimal patient outcomes. However, the intricate and irregular nature of clinical data associated with acute stroke, particularly blood pressure (BP) measurements, presents substantial obstacles to effective visual analytics and decision-making. Through a year-long collaboration with experienced neurologists, we developed PhenoFlow, a visual analytics system that leverages the collaboration between human and Large Language Models (LLMs) to analyze the extensive and complex data of acute ischemic stroke patients. PhenoFlow pioneers an innovative workflow, where the LLM serves as a data wrangler while neurologists explore and supervise the output using visualizations and natural language interactions. This approach enables neurologists to focus more on decision-making with reduced cognitive load. To protect sensitive patient information, PhenoFlow only utilizes metadata to make inferences and synthesize executable codes, without accessing raw patient data. This ensures that the results are both reproducible and interpretable while maintaining patient privacy. The system incorporates a slice-and-wrap design that employs temporal folding to create an overlaid circular visualization. Combined with a linear bar graph, this design aids in exploring meaningful patterns within irregularly measured BP data. Through case studies, PhenoFlow has demonstrated its capability to support iterative analysis of extensive clinical datasets, reducing cognitive load and enabling neurologists to make well-informed decisions. Grounded in long-term collaboration with domain experts, our research demonstrates the potential of utilizing LLMs to tackle current challenges in data-driven clinical decision-making for acute ischemic stroke patients | false | true | [
"Jaeyoung Kim",
"Sihyeon Lee",
"Hyeon Jeon",
"Keon-Joo Lee",
"Bohyoung Kim",
"HEE JOON",
"Jinwook Seo"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2407.16329",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/q6yc4/",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1121.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/K9vSYLsemPM",
"icon": "video"
}
] |
Vis | 2,024 | Practices and Strategies in Responsive Thematic Map Design: A Report from Design Workshops with Experts | 10.1109/TVCG.2024.3456352 | This paper discusses challenges and design strategies in responsive design for thematic maps in information visualization. Thematic maps pose a number of unique challenges for responsiveness, such as inflexible aspect ratios that do not easily adapt to varying screen dimensions, or densely clustered visual elements in urban areas becoming illegible at smaller scales. However, design guidance on how to best address these issues is currently lacking. We conducted design sessions with eight professional designers and developers of web-based thematic maps for information visualization. Participants were asked to redesign a given map for various screen sizes and aspect ratios and to describe their reasoning for when and how they adapted the design. We report general observations of practitioners' motivations, decision-making processes, and personal design frameworks. We then derive seven challenges commonly encountered in responsive maps, and 17 strategies to address them, such as repositioning elements, segmenting the map, or using alternative visualizations. We compile these challenges and strategies into an illustrated cheat sheet targeted at anyone designing or learning to design responsive maps. The cheat sheet is available online: responsive-vis.github.io/map-cheat-sheet | false | true | [
"Sarah Schöttler",
"Uta Hinrichs",
"Benjamin Bach"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2407.20735",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://responsive-vis.github.io/map-cheat-sheet/",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1393.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/mGAIwYY0AN4",
"icon": "video"
}
] |
Vis | 2,024 | Precise Embodied Data Selection with Haptic Feedback while Retaining Room-Scale Visualisation Context | 10.1109/TVCG.2024.3456399 | Room-scale immersive data visualisations provide viewers a wide-scale overview of a large dataset, but to interact precisely with individual data points they typically have to navigate to change their point of view. In traditional screen-based visualisations, focus-and-context techniques allow visualisation users to keep a full dataset in view while making detailed selections. Such techniques have been studied extensively on desktop to allow precise selection within large data sets, but they have not been explored in immersive 3D modalities. In this paper we develop a novel immersive focus-and-context technique based on a "magic portal" metaphor adapted specifically for data visualisation scenarios. An extendable-hand interaction technique is used to place a portal close to the region of interest. The other end of the portal then opens comfortably within the user's physical reach such that they can reach through to precisely select individual data points. Through a controlled study with 12 participants, we find strong evidence that portals reduce overshoots in selection and overall hand trajectory length, reducing arm and shoulder fatigue compared to ranged interaction without the portal. The portals also enable us to use a robot arm to provide haptic feedback for data within the limited volume of the portal region. In a second study with another 12 participants we found that haptics provided a positive experience (qualitative feedback) but did not significantly reduce fatigue. We demonstrate applications for portal-based selection through two use-case scenarios. | true | true | [
"Shaozhang Dai",
"Yi Li",
"Barrett Ens",
"Lonni Besançon",
"Tim Dwyer"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://osf.io/preprints/osf/6c7za?view_only=",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/afmwx/",
"icon": "other"
},
{
"name": "Preregistration",
"url": "https://osf.io/jkxsp",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1699.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/hJ1I_66AuK0",
"icon": "video"
}
] |
Vis | 2,024 | PREVis: Perceived Readability Evaluation for Visualizations | 10.1109/TVCG.2024.3456318 | We developed and validated an instrument to measure the perceived readability in data visualization: PREVis. Researchers and practitioners can easily use this instrument as part of their evaluations to compare the perceived readability of different visual data representations. Our instrument can complement results from controlled experiments on user task performance or provide additional data during in-depth qualitative work such as design iterations when developing a new technique. Although readability is recognized as an essential quality of data visualizations, so far there has not been a unified definition of the construct in the context of visual representations. As a result, researchers often lack guidance for determining how to ask people to rate their perceived readability of a visualization. To address this issue, we engaged in a rigorous process to develop the first validated instrument targeted at the subjective readability of visual data representations. Our final instrument consists of 11 items across 4 dimensions: understandability, layout clarity, readability of data values, and readability of data patterns. We provide the questionnaire as a document with implementation guidelines on osf.io/9cg8j. Beyond this instrument, we contribute a discussion of how researchers have previously assessed visualization readability, and an analysis of the factors underlying perceived readability in visual data representations | false | true | [
"Anne-Flore Cabouat",
"Tingying He",
"Petra Isenberg",
"Tobias Isenberg"
] | [
"HM"
] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2407.14908",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/9cg8j",
"icon": "other"
},
{
"name": "Preregistration",
"url": "https://osf.io/q4sdm",
"icon": "other"
},
{
"name": "Preregistration",
"url": "https://osf.io/4dcav",
"icon": "other"
},
{
"name": "Preregistration",
"url": "https://osf.io/d9nmu",
"icon": "other"
},
{
"name": "Preregistration",
"url": "https://osf.io/yex32",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1275.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/SmrTAspA0PM",
"icon": "video"
}
] |
Vis | 2,024 | Promises and Pitfalls: Using Large Language Models to Generate Visualization Items | 10.1109/TVCG.2024.3456309 | Visualization items—factual questions about visualizations that ask viewers to accomplish visualization tasks—are regularly used in the field of information visualization as educational and evaluative materials. For example, researchers of visualization literacy require large, diverse banks of items to conduct studies where the same skill is measured repeatedly on the same participants. Yet, generating a large number of high-quality, diverse items requires significant time and expertise. To address the critical need for a large number of diverse visualization items in education and research, this paper investigates the potential for large language models (LLMs) to automate the generation of multiple-choice visualization items. Through an iterative design process, we develop the VILA (Visualization Items Generated by Large LAnguage Models) pipeline, for efficiently generating visualization items that measure people's ability to accomplish visualization tasks. We use the VILA pipeline to generate 1,404 candidate items across 12 chart types and 13 visualization tasks. In collaboration with 11 visualization experts, we develop an evaluation rulebook which we then use to rate the quality of all candidate items. The result is the VILA bank of ∼1,100 items. From this evaluation, we also identify and classify current limitations of the VILA pipeline, and discuss the role of human oversight in ensuring quality. In addition, we demonstrate an application of our work by creating a visualization literacy test, VILA-VLAT, which measures people's ability to complete a diverse set of tasks on various types of visualizations; comparing it to the existing VLAT, VILA-VLAT shows moderate to high convergent validity (R = 0.70). Lastly, we discuss the application areas of the VILA pipeline and the VILA bank and provide practical recommendations for their use. All supplemental materials are available at https://osf.io/ysrhq/. | false | true | [
"Yuan Cui",
"Lily W. Ge",
"Yiren Ding",
"Lane Harrison",
"Fumeng Yang",
"Matthew Kay"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://osf.io/ysrhq/",
"icon": "paper"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1422.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/dA4Z80m5Rzs",
"icon": "video"
}
] |
Vis | 2,024 | ProvenanceWidgets: A Library of UI Control Elements to Track and Dynamically Overlay Analytic Provenance | 10.1109/TVCG.2024.3456144 | We present ProvenanceWidgets, a Javascript library of UI control elements such as radio buttons, checkboxes, and dropdowns to track and dynamically overlay a user's analytic provenance. These in situ overlays not only save screen space but also minimize the amount of time and effort needed to access the same information from elsewhere in the UI. In this paper, we discuss how we design modular UI control elements to track how often and how recently a user interacts with them and design visual overlays showing an aggregated summary as well as a detailed temporal history. We demonstrate the capability of ProvenanceWidgets by recreating three prior widget libraries: (1) Scented Widgets, (2) Phosphor objects, and (3) Dynamic Query Widgets. We also evaluated its expressiveness and conducted case studies with visualization developers to evaluate its effectiveness. We find that ProvenanceWidgets enables developers to implement custom provenance-tracking applications effectively. ProvenanceWidgets is available as open-source software at https://github.com/ProvenanceWidgets to help application developers build custom provenance-based systems. | true | true | [
"Arpit Narechania",
"Kaustubh Odak",
"Mennatallah El-Assady",
"Alex Endert"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/abs/2407.17431",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://github.com/ProvenanceWidgets/Supplemental-Material",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1204.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/Ed1cZDTTFd0",
"icon": "video"
}
] |
Vis | 2,024 | PUREsuggest: Citation-based Literature Search and Visual Exploration with Keyword-controlled Rankings | 10.1109/TVCG.2024.3456199 | Citations allow quickly identifying related research. If multiple publications are selected as seeds, specific suggestions for related literature can be made based on the number of incoming and outgoing citation links to this selection. Interactively adding recommended publications to the selection refines the next suggestion and incrementally builds a relevant collection of publications. Following this approach, the paper presents a search and foraging approach, PUREsuggest, which combines citation-based suggestions with augmented visualizations of the citation network. The focus and novelty of the approach is, first, the transparency of how the rankings are explained visually and, second, that the process can be steered through user-defined keywords, which reflect topics of interest. The system can be used to build new literature collections, to update and assess existing ones, as well as to use the collected literature for identifying relevant experts in the field. We evaluated the recommendation approach through simulated sessions and performed a user study investigating search strategies and usage patterns supported by the interface. | true | true | [
"Fabian Beck"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2408.02508",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/94ebr/",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1128.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/obWhz2SJuzg",
"icon": "video"
}
] |
Vis | 2,024 | Quality Metrics and Reordering Strategies for Revealing Patterns in BioFabric Visualizations | 10.1109/TVCG.2024.3456312 | Visualizing relational data is crucial for understanding complex connections between entities in social networks, political affiliations, or biological interactions. Well-known representations like node-link diagrams and adjacency matrices offer valuable insights, but their effectiveness relies on the ability to identify patterns in the underlying topological structure. Reordering strategies and layout algorithms play a vital role in the visualization process since the arrangement of nodes, edges, or cells influences the visibility of these patterns. The BioFabric visualization combines elements of node-link diagrams and adjacency matrices, leveraging the strengths of both, the visual clarity of node-link diagrams and the tabular organization of adjacency matrices. A unique characteristic of BioFabric is the possibility to reorder nodes and edges separately. This raises the question of which combination of layout algorithms best reveals certain patterns. In this paper, we discuss patterns and anti-patterns in BioFabric, such as staircases or escalators, relate them to already established patterns, and propose metrics to evaluate their quality. Based on these quality metrics, we compared combinations of well-established reordering techniques applied to BioFabric with a well-known benchmark data set. Our experiments indicate that the edge order has a stronger influence on revealing patterns than the node layout. The results show that the best combination for revealing staircases is a barycentric node layout, together with an edge order based on node indices and length. Our research contributes a first building block for many promising future research directions, which we also share and discuss. A free copy of this paper and all supplemental materials are available at https://osf.io/9mt8r/?view_only=b70dfbe550e3404f83059afdc60184c6. | false | true | [
"Johannes Fuchs",
"Alexander Frings",
"Maria-Viktoria Heinle",
"Daniel Keim",
"Sara Di Bartolomeo"
] | [] | [
"V",
"O"
] | [
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1809.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/z5Loo1vtnXg",
"icon": "video"
}
] |
Vis | 2,024 | Quantifying Emotional Responses to Immutable Data Characteristics and Designer Choices in Data Visualizations | 10.1109/TVCG.2024.3456361 | Emotion is an important factor to consider when designing visualizations as it can impact the amount of trust viewers place in a visualization, how well they can retrieve information and understand the underlying data, and how much they engage with or connect to a visualization. We conducted five crowdsourced experiments to quantify the effects of color, chart type, data trend, data variability and data density on emotion (measured through self-reported arousal and valence). Results from our experiments show that there are multiple design elements which influence the emotion induced by a visualization and, more surprisingly, that certain data characteristics influence the emotion of viewers even when the data has no meaning. In light of these findings, we offer guidelines on how to use color, scale, and chart type to counterbalance and emphasize the emotional impact of immutable data characteristics. | false | true | [
"Carter Blair",
"Xiyao Wang",
"Charles Perin"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://doi.org/10.48550/arXiv.2407.18427",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/ywjs4/",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1291.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/Hht8iAtJ40w",
"icon": "video"
}
] |
Vis | 2,024 | Rapid and Precise Topological Comparison with Merge Tree Neural Networks | 10.1109/TVCG.2024.3456395 | Merge trees are a valuable tool in the scientific visualization of scalar fields; however, current methods for merge tree comparisons are computationally expensive, primarily due to the exhaustive matching between tree nodes. To address this challenge, we introduce the Merge Tree Neural Network (MTNN), a learned neural network model designed for merge tree comparison. The MTNN enables rapid and high-quality similarity computation. We first demonstrate how to train graph neural networks, which emerged as effective encoders for graphs, in order to produce embeddings of merge trees in vector spaces for efficient similarity comparison. Next, we formulate the novel MTNN model that further improves the similarity comparisons by integrating the tree and node embeddings with a new topological attention mechanism. We demonstrate the effectiveness of our model on real-world data in different domains and examine our model's generalizability across various datasets. Our experimental analysis demonstrates our approach's superiority in accuracy and efficiency. In particular, we speed up the prior state-of-the-art by more than 100× on the benchmark datasets while maintaining an error rate below 0.1%. | true | true | [
"Yu Qin",
"Brittany Terese Fasy",
"Carola Wenk",
"Brian Summa"
] | [
"BP"
] | [
"P",
"V",
"C",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2404.05879",
"icon": "paper"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1880.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/5x_3_xJ0xKc",
"icon": "video"
},
{
"name": "Source Code and Data",
"url": "https://osf.io/2n8dy/",
"icon": "code"
}
] |
Vis | 2,024 | Regularized Multi-Decoder Ensemble for an Error-Aware Scene Representation Network | 10.1109/TVCG.2024.3456357 | Feature grid Scene Representation Networks (SRNs) have been applied to scientific data as compact functional surrogates for analysis and visualization. As SRNs are black-box lossy data representations, assessing the prediction quality is critical for scientific visualization applications to ensure that scientists can trust the information being visualized. Currently, existing architectures do not support inference time reconstruction quality assessment, as coordinate-level errors cannot be evaluated in the absence of ground truth data. By employing the uncertain neural network architecture in feature grid SRNs, we obtain prediction variances during inference time to facilitate confidence-aware data reconstruction. Specifically, we propose a parameter-efficient multi-decoder SRN (MDSRN) architecture consisting of a shared feature grid with multiple lightweight multi-layer perceptron decoders. MDSRN can generate a set of plausible predictions for a given input coordinate to compute the mean as the prediction of the multi-decoder ensemble and the variance as a confidence score. The coordinate-level variance can be rendered along with the data to inform the reconstruction quality, or be integrated into uncertainty-aware volume visualization algorithms. To prevent the misalignment between the quantified variance and the prediction quality, we propose a novel variance regularization loss for ensemble learning that promotes the Regularized multi-decoder SRN (RMDSRN) to obtain a more reliable variance that correlates closely to the true model error. We comprehensively evaluate the quality of variance quantification and data reconstruction of Monte Carlo Dropout (MCD), Mean Field Variational Inference (MFVI), Deep Ensemble (DE), and Predicting Variance (PV) in comparison with our proposed MDSRN and RMDSRN applied to state-of-the-art feature grid SRNs across diverse scalar field datasets. We demonstrate that RMDSRN attains the most accurate data reconstruction and competitive variance-error correlation among uncertain SRNs under the same neural network parameter budgets. Furthermore, we present an adaptation of uncertainty-aware volume rendering and shed light on the potential of incorporating uncertain predictions in improving the quality of volume rendering for uncertain SRNs. Through ablation studies on the regularization strength and decoder count, we show that MDSRN and RMDSRN are expected to perform sufficiently well with a default configuration without requiring customized hyperparameter settings for different datasets. | false | true | [
"Tianyu Xiong",
"Skylar Wolfgang Wurster",
"Hanqi Guo",
"Tom Peterka",
"Han-Wei Shen"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2407.19082v2",
"icon": "paper"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1866.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/Kx3B9acBnOw",
"icon": "video"
}
] |
Vis | 2,024 | Revealing Interaction Dynamics: Multi-Level Visual Exploration of User Strategies with an Interactive Digital Environment | 10.1109/TVCG.2024.3456187 | We present a visual analytics approach for multi-level visual exploration of users' interaction strategies in an interactive digital environment. The use of interactive touchscreen exhibits in informal learning environments, such as museums and science centers, often incorporates frameworks that classify learning processes, such as Bloom's taxonomy, to achieve better user engagement and knowledge transfer. To analyze user behavior within these digital environments, interaction logs are recorded to capture diverse exploration strategies. However, analysis of such logs is challenging, especially in terms of coupling interactions and cognitive learning processes, and existing work within learning and educational contexts remains limited. To address these gaps, we develop a visual analytics approach for analyzing interaction logs that supports exploration at the individual user level and multi-user comparison. The approach utilizes algorithmic methods to identify similarities in users' interactions and reveal their exploration strategies. We motivate and illustrate our approach through an application scenario, using event sequences derived from interaction log data in an experimental study conducted with science center visitors from diverse backgrounds and demographics. The study involves 14 users completing tasks of increasing complexity, designed to stimulate different levels of cognitive learning processes. We implement our approach in an interactive visual analytics prototype system, named VISID, and together with domain experts, discover a set of task-solving exploration strategies, such as "cascading" and "nested-loop", which reflect different levels of learning processes from Bloom's taxonomy. Finally, we discuss the generalizability and scalability of the presented system and the need for further research with data acquired in the wild. | true | true | [
"Peilin Yu",
"Aida Nordman",
"Marta M. Koc-Januchta",
"Konrad J Schönborn",
"Lonni Besançon",
"Katerina Vrotsou"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://doi.org/10.31219/osf.io/4yc8s",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/wnz32/",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1026.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/H9JJoBZBGNk",
"icon": "video"
}
] |
Vis | 2,024 | Shape It Up: An Empirically Grounded Approach for Designing Shape Palettes | 10.1109/TVCG.2024.3456385 | Shape is commonly used to distinguish between categories in multi-class scatterplots. However, existing guidelines for choosing effective shape palettes rely largely on intuition and do not consider how these needs may change as the number of categories increases. Unlike color, shapes can not be represented by a numerical space, making it difficult to propose general guidelines or design heuristics for using shape effectively. This paper presents a series of four experiments evaluating the efficiency of 39 shapes across three tasks: relative mean judgment tasks, expert preference, and correlation estimation. Our results show that conventional means for reasoning about shapes, such as filled versus unfilled, are insufficient to inform effective palette design. Further, even expert palettes vary significantly in their use of shape and corresponding effectiveness. To support effective shape palette design, we developed a model based on pairwise relations between shapes in our experiments and the number of shapes required for a given design. We embed this model in a palette design tool to give designers agency over shape selection while incorporating empirical elements of perceptual performance captured in our study. Our model advances understanding of shape perception in visualization contexts and provides practical design guidelines that can help improve categorical data encodings. | false | true | [
"Chin Tseng",
"Arran Zeyu Wang",
"Ghulam Jilani Quadri",
"Danielle Albers Szafir"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2408.16079v1",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/5k47c/?view_only=52e6b52f69b84ceab8c8c1b897083fc3",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1836.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/SSB0MEkju-s",
"icon": "video"
}
] |
Vis | 2,024 | SimpleSets: Capturing Categorical Point Patterns with Simple Shapes | 10.1109/TVCG.2024.3456168 | Points of interest on a map such as restaurants, hotels, or subway stations, give rise to categorical point data: data that have a fixed location and one or more categorical attributes. Consequently, recent years have seen various set visualization approaches that visually connect points of the same category to support users in understanding the spatial distribution of categories. Existing methods use complex and often highly irregular shapes to connect points of the same category, leading to high cognitive load for the user. In this paper we introduce SimpleSets, which uses simple shapes to enclose categorical point patterns, thereby providing a clean overview of the data distribution. SimpleSets is designed to visualize sets of points with a single categorical attribute; as a result, the point patterns enclosed by SimpleSets form a partition of the data. We give formal definitions of point patterns that correspond to simple shapes and describe an algorithm that partitions categorical points into few such patterns. Our second contribution is a rendering algorithm that transforms a given partition into a clean set of shapes resulting in an aesthetically pleasing set visualization. Our algorithm pays particular attention to resolving intersections between nearby shapes in a consistent manner. We compare SimpleSets to the state-of-the-art set visualizations using standard datasets from the literature | false | true | [
"Steven van den Broek",
"Wouter Meulemans",
"Bettina Speckmann"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2407.14433",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://doi.org/10.5281/zenodo.12784670",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1153.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/vZk9Sm6PIIo",
"icon": "video"
}
] |
Vis | 2,024 | SLInterpreter: An Exploratory and Iterative Human-AI Collaborative System for GNN-based Synthetic Lethal Prediction | 10.1109/TVCG.2024.3456325 | Synthetic Lethal (SL) relationships, though rare among the vast array of gene combinations, hold substantial promise for targeted cancer therapy. Despite advancements in AI model accuracy, there is still a significant need among domain experts for interpretive paths and mechanism explorations that align better with domain-specific knowledge, particularly due to the high costs of experimentation. To address this gap, we propose an iterative Human-AI collaborative framework with two key components: 1) Human-Engaged Knowledge Graph Refinement based on Metapath Strategies, which leverages insights from interpretive paths and domain expertise to refine the knowledge graph through metapath strategies with appropriate granularity. 2) Cross-Granularity SL Interpretation Enhancement and Mechanism Analysis, which aids experts in organizing and comparing predictions and interpretive paths across different granularities, uncovering new SL relationships, enhancing result interpretation, and elucidating potential mechanisms inferred by Graph Neural Network (GNN) models. These components cyclically optimize model predictions and mechanism explorations, enhancing expert involvement and intervention to build trust. Facilitated by SLInterpreter, this framework ensures that newly generated interpretive paths increasingly align with domain knowledge and adhere more closely to real-world biological principles through iterative Human-AI collaboration. We evaluate the framework's efficacy through a case study and expert interviews. | false | true | [
"Haoran Jiang",
"Shaohan Shi",
"Shuhao Zhang",
"Jie Zheng",
"Quan Li"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2407.14770",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://github.com/jianghr-shanghaitech/SLInterpreter-Demo",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1368.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/eaCCRPrMxk8",
"icon": "video"
}
] |
Vis | 2,024 | Smartboard: Visual Exploration of Team Tactics with LLM Agent | 10.1109/TVCG.2024.3456200 | Tactics play an important role in team sports by guiding how players interact on the field. Both sports fans and experts have a demand for analyzing sports tactics. Existing approaches allow users to visually perceive the multivariate tactical effects. However, these approaches require users to experience a complex reasoning process to connect the multiple interactions within each tactic to the final tactical effect. In this work, we collaborate with basketball experts and propose a progressive approach to help users gain a deeper understanding of how each tactic works and customize tactics on demand. Users can progressively sketch on a tactic board, and a coach agent will simulate the possible actions in each step and present the simulation to users with facet visualizations. We develop an extensible framework that integrates large language models (LLMs) and visualizations to help users communicate with the coach agent with multimodal inputs. Based on the framework, we design and develop Smartboard, an agent-based interactive visualization system for fine-grained tactical analysis, especially for play design. Smartboard provides users with a structured process of setup, simulation, and evolution, allowing for iterative exploration of tactics based on specific personalized scenarios. We conduct case studies based on real-world basketball datasets to demonstrate the effectiveness and usefulness of our system. | false | true | [
"Ziao Liu",
"Xiao Xie",
"Moqi He",
"Wenshuo Zhao",
"Yihong Wu",
"Liqi Cheng",
"Hui Zhang",
"Yingcai Wu"
] | [] | [
"V",
"O"
] | [
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1099.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/LQ89KZHc_uY",
"icon": "video"
}
] |
Vis | 2,024 | SpatialTouch: Exploring Spatial Data Visualizations in Cross-reality | 10.1109/TVCG.2024.3456368 | We propose and study a novel cross-reality environment that seamlessly integrates a monoscopic 2D surface (an interactive screen with touch and pen input) with a stereoscopic 3D space (an augmented reality HMD) to jointly host spatial data visualizations. This innovative approach combines the best of two conventional methods of displaying and manipulating spatial 3D data, enabling users to fluidly explore diverse visual forms using tailored interaction techniques. Providing such effective 3D data exploration techniques is pivotal for conveying its intricate spatial structures—often at multiple spatial or semantic scales—across various application domains and requiring diverse visual representations for effective visualization. To understand user reactions to our new environment, we began with an elicitation user study, in which we captured their responses and interactions. We observed that users adapted their interaction approaches based on perceived visual representations, with natural transitions in spatial awareness and actions while navigating across the physical surface. Our findings then informed the development of a design space for spatial data exploration in cross-reality. We thus developed cross-reality environments tailored to three distinct domains: for 3D molecular structure data, for 3D point cloud data, and for 3D anatomical data. In particular, we designed interaction techniques that account for the inherent features of interactions in both spaces, facilitating various forms of interaction, including mid-air gestures, touch interactions, pen interactions, and combinations thereof, to enhance the users' sense of presence and engagement. We assessed the usability of our environment with biologists, focusing on its use for domain research. In addition, we evaluated our interaction transition designs with virtual and mixed-reality experts to gather further insights. As a result, we provide our design suggestions for the cross-reality environment, emphasizing the interaction with diverse visual representations and seamless interaction transitions between 2D and 3D spaces. | false | true | [
"Lixiang Zhao",
"Tobias Isenberg",
"Fuqi Xie",
"Hai-Ning Liang",
"Lingyun Yu"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2407.14833",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/avxr9",
"icon": "other"
},
{
"name": "Preregistration",
"url": "https://osf.io/wmycx",
"icon": "other"
},
{
"name": "Preregistration",
"url": "https://osf.io/3aty8",
"icon": "other"
},
{
"name": "Preregistration",
"url": "https://osf.io/u3q8f",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1626.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/C-F1zT-UgsE",
"icon": "video"
}
] |
Vis | 2,024 | Sportify: Question Answering with Embedded Visualizations and Personified Narratives for Sports Video | 10.1109/TVCG.2024.3456332 | As basketball's popularity surges, fans often find themselves confused and overwhelmed by the rapid game pace and complexity. Basketball tactics, involving a complex series of actions, require substantial knowledge to be fully understood. This complexity leads to a need for additional information and explanation, which can distract fans from the game. To tackle these challenges, we present Sportify, a Visual Question Answering system that integrates narratives and embedded visualization for demystifying basketball tactical questions, aiding fans in understanding various game aspects. We propose three novel action visualizations (i.e., Pass, Cut, and Screen) to demonstrate critical action sequences. To explain the reasoning and logic behind players' actions, we leverage a large-language model (LLM) to generate narratives. We adopt a storytelling approach for complex scenarios from both first and third-person perspectives, integrating action visualizations. We evaluated Sportify with basketball fans to investigate its impact on the understanding of tactics, and how different personal perspectives of narratives impact the understanding of complex tactics with action visualizations. Our evaluation with basketball fans demonstrates Sportify's capability to deepen tactical insights and amplify the viewing experience. Furthermore, third-person narration assists people in getting in-depth game explanations while first-person narration enhances fans' game engagement. | true | true | [
"Chunggi Lee",
"Tica Lin",
"Hanspeter Pfister",
"Chen Zhu-Tian"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2408.05123v1",
"icon": "paper"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1351.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/IZil979U9UQ",
"icon": "video"
}
] |
Vis | 2,024 | SpreadLine: Visualizing Egocentric Dynamic Influence | 10.1109/TVCG.2024.3456373 | Egocentric networks, often visualized as node-link diagrams, portray the complex relationship (link) dynamics between an entity (node) and others. However, common analytics tasks are multifaceted, encompassing interactions among four key aspects: strength, function, structure, and content. Current node-link visualization designs may fall short, focusing narrowly on certain aspects and neglecting the holistic, dynamic nature of egocentric networks. To bridge this gap, we introduce SpreadLine, a novel visualization framework designed to enable the visual exploration of egocentric networks from these four aspects at the microscopic level. Leveraging the intuitive appeal of storyline visualizations, SpreadLine adopts a storyline-based design to represent entities and their evolving relationships. We further encode essential topological information in the layout and condense the contextual information in a metro map metaphor, allowing for a more engaging and effective way to explore temporal and attribute-based information. To guide our work, with a thorough review of pertinent literature, we have distilled a task taxonomy that addresses the analytical needs specific to egocentric network exploration. Acknowledging the diverse analytical requirements of users, SpreadLine offers customizable encodings to enable users to tailor the framework for their tasks. We demonstrate the efficacy and general applicability of SpreadLine through three diverse real-world case studies (disease surveillance, social media trends, and academic career evolution) and a usability study. | true | true | [
"Yun-Hsin Kuo",
"Dongyu Liu",
"Kwan-Liu Ma"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2408.08992v3",
"icon": "paper"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1483.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/N4HpqmtLsDc",
"icon": "video"
}
] |
Vis | 2,024 | Structure-Aware Simplification for Hypergraph Visualization | 10.1109/TVCG.2024.3456367 | Hypergraphs provide a natural way to represent polyadic relationships in network data. For large hypergraphs, it is often difficult to visually detect structures within the data. Recently, a scalable polygon-based visualization approach was developed allowing hypergraphs with thousands of hyperedges to be simplified and examined at different levels of detail. However, this approach is not guaranteed to eliminate all of the visual clutter caused by unavoidable overlaps. Furthermore, meaningful structures can be lost at simplified scales, making their interpretation unreliable. In this paper, we define hypergraph structures using the bipartite graph representation, allowing us to decompose the hypergraph into a union of structures including topological blocks, bridges, and branches, and to identify exactly where unavoidable overlaps must occur. We also introduce a set of topology-preserving and topology-altering atomic operations, enabling the preservation of important structures while reducing unavoidable overlaps to improve visual clarity and interpretability in simplified scales. We demonstrate our approach in several real-world applications. | false | true | [
"Peter D Oliver",
"Eugene Zhang",
"Yue Zhang"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2407.19621",
"icon": "paper"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1746.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/kP6irewadAE",
"icon": "video"
}
] |
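The Structure-Aware Simplification abstract above defines hypergraph structures via the bipartite graph representation. As a hedged illustration of that representation only (toy hyperedges invented for the example; this is not the paper's simplification algorithm), a small networkx sketch:

```python
# Sketch: encode a toy hypergraph as its bipartite (vertex, hyperedge) graph.
# The hyperedges below are made up for illustration; this is only the
# representation the paper builds on, not its decomposition or operations.
import networkx as nx

hyperedges = {
    "e1": {"a", "b", "c"},
    "e2": {"c", "d"},
    "e3": {"d", "e", "f"},
}

B = nx.Graph()
B.add_nodes_from(hyperedges.keys(), part="hyperedge")
vertices = set().union(*hyperedges.values())
B.add_nodes_from(vertices, part="vertex")
for e, members in hyperedges.items():
    B.add_edges_from((e, v) for v in members)

# Blocks, bridges, and branches are defined on this bipartite graph; as a rough
# hint of where it decomposes, list its articulation (cut) points.
print("articulation points:", list(nx.articulation_points(B)))
```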
Vis | 2,024 | StuGPTViz: A Visual Analytics Approach to Understand Student-ChatGPT Interactions | 10.1109/TVCG.2024.3456363 | The integration of Large Language Models (LLMs), especially ChatGPT, into education is poised to revolutionize students' learning experiences by introducing innovative conversational learning methodologies. To empower students to fully leverage the capabilities of ChatGPT in educational scenarios, understanding students' interaction patterns with ChatGPT is crucial for instructors. However, this endeavor is challenging due to the absence of datasets focused on student-ChatGPT conversations and the complexities in identifying and analyzing the evolutional interaction patterns within conversations. To address these challenges, we collected conversational data from 48 students interacting with ChatGPT in a master's level data visualization course over one semester. We then developed a coding scheme, grounded in the literature on cognitive levels and thematic analysis, to categorize students' interaction patterns with ChatGPT. Furthermore, we present a visual analytics system, StuGPTViz, that tracks and compares temporal patterns in student prompts and the quality of ChatGPT's responses at multiple scales, revealing significant pedagogical insights for instructors. We validated the system's effectiveness through expert interviews with six data visualization instructors and three case studies. The results confirmed StuGPTViz's capacity to enhance educators' insights into the pedagogical value of ChatGPT. We also discussed the potential research opportunities of applying visual analytics in education and developing AI-driven personalized learning solutions. | true | true | [
"Zixin Chen",
"Jiachen Wang",
"Meng Xia",
"Kento Shigyo",
"Dingdong Liu",
"Rong Zhang",
"Huamin Qu"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2407.12423",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://github.com/CinderD/StuGPTViz_Supplemental",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1329.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/r4bxhQuXqIM",
"icon": "video"
}
] |
Vis | 2,024 | StyleRF-VolVis: Style Transfer of Neural Radiance Fields for Expressive Volume Visualization | 10.1109/TVCG.2024.3456342 | In volume visualization, visualization synthesis has attracted much attention due to its ability to generate novel visualizations without following the conventional rendering pipeline. However, existing solutions based on generative adversarial networks often require many training images and take significant training time. Still, issues with quality, consistency, and flexibility persist. This paper introduces StyleRF-VolVis, an innovative style transfer framework for expressive volume visualization (VolVis) via neural radiance field (NeRF). The expressiveness of StyleRF-VolVis is upheld by its ability to accurately separate the underlying scene geometry (i.e., content) and color appearance (i.e., style), conveniently modify color, opacity, and lighting of the original rendering while maintaining visual content consistency across the views, and effectively transfer arbitrary styles from reference images to the reconstructed 3D scene. To achieve these, we design a base NeRF model for scene geometry extraction, a palette color network to classify regions of the radiance field for photorealistic editing, and an unrestricted color network to lift the color palette constraint via knowledge distillation for non-photorealistic editing. We demonstrate the superior quality, consistency, and flexibility of StyleRF-VolVis by experimenting with various volume rendering scenes and reference images and comparing StyleRF-VolVis against other image-based (AdaIN), video-based (ReReVST), and NeRF-based (ARF and SNeRF) style rendering solutions. | false | true | [
"Kaiyuan Tang",
"Chaoli Wang"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2408.00150",
"icon": "paper"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1391.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/TTUmK5WKV_w",
"icon": "video"
}
] |
Vis | 2,024 | SurroFlow: A Flow-Based Surrogate Model for Parameter Space Exploration and Uncertainty Quantification | 10.1109/TVCG.2024.3456372 | Existing deep learning-based surrogate models facilitate efficient data generation, but fall short in uncertainty quantification, efficient parameter space exploration, and reverse prediction. In our work, we introduce SurroFlow, a novel normalizing flow-based surrogate model, to learn the invertible transformation between simulation parameters and simulation outputs. The model not only allows accurate predictions of simulation outcomes for a given simulation parameter but also supports uncertainty quantification in the data generation process. Additionally, it enables efficient simulation parameter recommendation and exploration. We integrate SurroFlow and a genetic algorithm as the backend of a visual interface to support effective user-guided ensemble simulation exploration and visualization. Our framework significantly reduces the computational costs while enhancing the reliability and exploration capabilities of scientific surrogate models. | false | true | [
"JINGYI SHEN",
"Yuhan Duan",
"Han-Wei Shen"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2407.12884",
"icon": "paper"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1599.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/htK9ytzwcDM",
"icon": "video"
}
] |
Vis | 2,024 | Talk to the Wall: The Role of Speech Interaction in Collaborative Visual Analytics | 10.1109/TVCG.2024.3456335 | We present the results of an exploratory study on how pairs interact with speech commands and touch gestures on a wall-sized display during a collaborative sensemaking task. Previous work has shown that speech commands, alone or in combination with other input modalities, can support visual data exploration by individuals. However, it is still unknown whether and how speech commands can be used in collaboration, and for what tasks. To answer these questions, we developed a functioning prototype that we used as a technology probe. We conducted an in-depth exploratory study with 10 participant pairs to analyze their interaction choices, the interplay between the input modalities, and their collaboration. While touch was the most used modality, we found that participants preferred speech commands for global operations, used them for distant interaction, and that speech interaction contributed to the awareness of the partner's actions. Furthermore, the likelihood of using speech commands during collaboration was related to the personality trait of agreeableness. Regarding collaboration styles, participants interacted with speech equally often whether they were in loosely or closely coupled collaboration. While the partners stood closer to each other during close collaboration, they did not distance themselves to use speech commands. From our findings, we derive and contribute a set of design considerations for collaborative and multimodal interactive data analysis systems. All supplemental materials are available at https://osf.io/8gpv2 | true | true | [
"Gabriela Molina León",
"Anastasia Bezerianos",
"Olivier Gladin",
"Petra Isenberg"
] | [
"HM"
] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2408.03813",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/8gpv2/",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1302.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/-xq224J5umc",
"icon": "video"
}
] |
Vis | 2,024 | Team-Scouter: Simulative Visual Analytics of Soccer Player Scouting | 10.1109/TVCG.2024.3456216 | In soccer, player scouting aims to find players suitable for a team to increase the winning chance in future matches. To scout suitable players, coaches and analysts need to consider whether the players will perform well in a new team, which is hard to learn directly from their historical performances. Match simulation methods have been introduced to scout players by estimating their expected contributions to a new team. However, they usually focus on the simulation of match results and hardly support interactive analysis to navigate potential target players and compare them in fine-grained simulated behaviors. In this work, we propose a visual analytics method to assist soccer player scouting based on match simulation. We construct a two-level match simulation framework for estimating both match results and player behaviors when a player comes to a new team. Based on the framework, we develop a visual analytics system, Team-Scouter, to facilitate the simulation-based soccer player scouting process through player navigation, comparison, and investigation. With our system, coaches and analysts can find potential players suitable for the team and compare them on historical and expected performances. For an in-depth investigation of the players' expected performances, the system provides a visual comparison between the simulated behaviors of the player and the actual ones. The usefulness and effectiveness of the system are demonstrated by two case studies on a real-world dataset and an expert interview. | false | true | [
"Anqi Cao",
"Xiao Xie",
"Runjin Zhang",
"Yuxin Tian",
"Mu Fan",
"Hui Zhang",
"Yingcai Wu"
] | [] | [
"V",
"O"
] | [
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1031.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/p07D01bK_fs",
"icon": "video"
}
] |
Vis | 2,024 | Telling Data Stories with the Hero's Journey: Design Guidance for Creating Data Videos | 10.1109/TVCG.2024.3456330 | Data videos are becoming an increasingly popular data storytelling form that integrates visuals and audio. In recent years, researchers have explored many narrative structures for effective and attractive data storytelling. Meanwhile, the Hero's Journey provides a classic narrative framework specific to the hero's story that has been adopted by various mediums, and there are ongoing discussions about applying it to data stories. However, there is so far little systematic and practical guidance on how to create a data video for a specific story type like the Hero's Journey, or on how to manipulate its sound and visual designs simultaneously. To fill this gap, we first identified 48 data videos that follow the Hero's Journey among 109 high-quality data videos. Then, we examined how existing practices apply the Hero's Journey when creating data videos, coding the 48 data videos in terms of narrative stages, sound design, and visual design according to the Hero's Journey structure. Based on our findings, we propose a design space that provides practical guidance on narrative, visual, and sound design for the different narrative segments of the Hero's Journey (i.e., Departure, Initiation, Return) in data video creation. To validate the proposed design space, we conducted a user study in which 20 participants designed data videos with and without our design space guidance, and their results were evaluated by two experts. Results show that our design space provides useful and practical guidance that helps data storytellers effectively create data videos with the Hero's Journey. | false | true | [
"Zheng Wei",
"Huamin Qu",
"Xian Xu"
] | [] | [
"V",
"O"
] | [
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1333.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/IXwVnOl8OAo",
"icon": "video"
}
] |
Vis | 2,024 | The Backstory to "Swaying the Public": A Design Chronicle of Election Forecast Visualizations | 10.1109/TVCG.2024.3456366 | A year ago, we submitted an IEEE VIS paper entitled "Swaying the Public? Impacts of Election Forecast Visualizations on Emotion, Trust, and Intention in the 2022 U.S. Midterms" [50], which was later bestowed with the honor of a best paper award. Yet, studying such a complex phenomenon required us to explore many more design paths than we could count, and certainly more than we could document in a single paper. This paper, then, is the unwritten prequel—the backstory. It chronicles our journey from a simple idea—to study visualizations for election forecasts—through obstacles such as developing meaningfully different, easy-to-understand forecast visualizations, crafting professional-looking forecasts, and grappling with how to study perceptions of the forecasts before, during, and after the 2022 U.S. midterm elections. This journey yielded a rich set of original knowledge. We formalized a design space for two-party election forecasts, navigating through dimensions like data transformations, visual channels, and types of animated narratives. Through qualitative evaluation of ten representative prototypes with 13 participants, we then identified six core insights into the interpretation of uncertainty visualizations in a U.S. election context. These insights informed our revisions to remove ambiguity in our visual encodings and to prepare a professional-looking forecasting website. As part of this story, we also distilled challenges faced and design lessons learned to inform both designers and practitioners. Ultimately, we hope our methodical approach could inspire others in the community to tackle the hard problems inherent to designing and evaluating visualizations for the general public. | true | true | [
"Fumeng Yang",
"Mandi Cai",
"Chloe Rose Mortenson",
"Hoda Fakhari",
"Ayse Deniz Lokmanoglu",
"Nicholas Diakopoulos",
"Erik Nisbet",
"Matthew Kay"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://doi.org/10.31219/osf.io/927vy",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://www.doi.org/10.17605/osf.io/ygq2v",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1488.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/haLpw_OzpFw",
"icon": "video"
}
] |
Vis | 2,024 | The Effect of Visual Aids on Reading Numeric Data Tables | 10.1109/TVCG.2024.3456403 | Data tables are one of the most common ways in which people encounter data. Although mostly built with text and numbers, data tables have a spatial layout and often exhibit visual elements meant to facilitate their reading. Surprisingly, there is an empirical knowledge gap on how people read tables and how different visual aids affect people's reading of tables. In this work, we seek to address this vacuum through a controlled study. We asked participants to repeatedly perform four different tasks with four table representation conditions (plain tables, tables with zebra striping, tables with cell background color encoding cell value, and tables with in-cell bars with lengths encoding cell value). We analyzed completion time, error rate, gaze-tracking data, mouse movement, and participant preferences. We found that color and bar encodings help for finding maximum values. For a more complex task (comparison of proportional differences) color and bar helped less than zebra striping. We also characterize typical human behavior for the four tasks. These findings inform the design of tables and research directions for improving presentation of data in tabular form. | false | true | [
"YongFeng Ji",
"Charles Perin",
"Miguel A Nacenta"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://doi.org/10.31219/osf.io/2t3sc",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/jfg3h/?view_only=f064cff189c4440299a3c3b10ddab232",
"icon": "other"
},
{
"name": "Preregistration",
"url": "https://osf.io/b67xu?view_only=b9cc56507fc54ae399d0f468d53474ed",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1288.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/U-KVskuEvz8",
"icon": "video"
}
] |
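The table-reading study above compares plain tables, zebra striping, cell background color, and in-cell bars. As a hedged sketch of the latter two encodings using pandas' built-in Styler (illustrative only; the study used purpose-built stimuli, not pandas, and the gradient call needs matplotlib installed):

```python
# Sketch of two of the studied table encodings using pandas' Styler
# (illustrative; not the study's stimuli or analysis code).
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
df = pd.DataFrame(rng.integers(0, 100, size=(8, 4)),
                  columns=["Q1", "Q2", "Q3", "Q4"])

color_encoded = df.style.background_gradient(cmap="Blues")  # cell background encodes value
bar_encoded = df.style.bar(color="#5fba7d")                  # in-cell bar length encodes value

# In a notebook, displaying `color_encoded` or `bar_encoded` renders the styled
# table; .to_html() exports the same styling for a browser.
html = bar_encoded.to_html()
```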
Vis | 2,024 | The Language of Infographics: Toward Understanding Conceptual Metaphor Use in Scientific Storytelling | 10.1109/TVCG.2024.3456327 | We apply an approach from cognitive linguistics by mapping Conceptual Metaphor Theory (CMT) to the visualization domain to address patterns of visual conceptual metaphors that are often used in science infographics. Metaphors play an essential part in visual communication and are frequently employed to explain complex concepts. However, their use is often based on intuition, rather than following a formal process. At present, we lack tools and language for understanding and describing metaphor use in visualization to the extent where taxonomy and grammar could guide the creation of visual components, e.g., infographics. Our classification of the visual conceptual mappings within scientific representations is based on the breakdown of visual components in existing scientific infographics. We demonstrate the development of this mapping through a detailed analysis of data collected from four domains (biomedicine, climate, space, and anthropology) that represent a diverse range of visual conceptual metaphors used in the visual communication of science. This work allows us to identify patterns of visual conceptual metaphor use within the domains, resolve ambiguities about why specific conceptual metaphors are used, and develop a better overall understanding of visual metaphor use in scientific infographics. Our analysis shows that ontological and orientational conceptual metaphors are the most widely applied to translate complex scientific concepts. To support our findings we developed a visual exploratory tool based on the collected database that places the individual infographics on a spatio-temporal scale and illustrates the breakdown of visual conceptual metaphors. | false | true | [
"Hana Pokojná",
"Tobias Isenberg",
"Stefan Bruckner",
"Barbora Kozlikova",
"Laura Garrison"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/abs/2407.13416",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/8xrjm/",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1316.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/vydQsSgBECk",
"icon": "video"
}
] |
Vis | 2,024 | TopoMap++: A faster and more space efficient technique to compute projections with topological guarantees | 10.1109/TVCG.2024.3456365 | High-dimensional data, characterized by many features, can be difficult to visualize effectively. Dimensionality reduction techniques, such as PCA, UMAP, and t-SNE, address this challenge by projecting the data into a lower-dimensional space while preserving important relationships. TopoMap is another technique that excels at preserving the underlying structure of the data, leading to interpretable visualizations. In particular, TopoMap maps the high-dimensional data into a visual space, guaranteeing that the 0-dimensional persistence diagram of the Rips filtration of the visual space matches the one from the high-dimensional data. However, the original TopoMap algorithm can be slow and its layout can be too sparse for large and complex datasets. In this paper, we propose three improvements to TopoMap: 1) a more space-efficient layout, 2) a significantly faster implementation, and 3) a novel TreeMap-based representation that makes use of the topological hierarchy to aid the exploration of the projections. These advancements make TopoMap, now referred to as TopoMap++, a more powerful tool for visualizing high-dimensional data which we demonstrate through different use case scenarios. | false | true | [
"Vitoria Guardieiro",
"Felipe Inagaki de Oliveira",
"Harish Doraiswamy",
"Luis Gustavo Nonato",
"Claudio Silva"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2409.07257v1",
"icon": "paper"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1632.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/RHAnJMEbOOQ",
"icon": "video"
}
] |
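TopoMap++ above preserves the 0-dimensional persistence diagram of the Rips filtration. For intuition only (this is not the TopoMap++ layout algorithm), the 0-dimensional persistence of a finite point cloud can be read off its Euclidean minimum spanning tree: every feature is born at scale 0, and each MST edge length is the scale at which two components merge. A minimal scipy sketch under that assumption:

```python
# Sketch: 0-dimensional persistence "deaths" of a Rips filtration equal the
# edge lengths of the Euclidean minimum spanning tree (all births are at 0).
# Illustrative only; TopoMap++ additionally computes a 2D layout that preserves
# this diagram, which is not reproduced here.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(2)
points = rng.normal(size=(50, 8))          # toy high-dimensional point cloud

dists = squareform(pdist(points))          # pairwise Euclidean distances
mst = minimum_spanning_tree(dists)         # MST over the complete distance graph
deaths = np.sort(mst.data)                 # merge scales = 0-dim death times

print("first persistence pairs (birth=0, death):",
      [(0.0, float(d)) for d in deaths[:5]])
```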
Vis | 2,024 | Touching the Ground: Evaluating the Effectiveness of Data Physicalizations for Spatial Data Analysis Tasks | 10.1109/TVCG.2024.3456377 | Inspired by recent advances in digital fabrication, artists and scientists have demonstrated that physical data encodings (i.e., data physicalizations) can increase engagement with data, foster collaboration, and in some cases, improve data legibility and analysis relative to digital alternatives. However, prior empirical studies have only investigated abstract data encoded in physical form (e.g., laser cut bar charts) and not continuously sampled spatial data fields relevant to climate and medical science (e.g., heights, temperatures, densities, and velocities sampled on a spatial grid). This paper presents the design and results of the first study to characterize human performance in 3D spatial data analysis tasks across analogous physical and digital visualizations. Participants analyzed continuous spatial elevation data with three visualization modalities: (1) 2D digital visualization; (2) perspective-tracked, stereoscopic "fishtank" virtual reality; and (3) 3D printed data physicalization. Their tasks included tracing paths downhill, looking up spatial locations and comparing their relative heights, and identifying and reporting the minimum and maximum heights within certain spatial regions. As hypothesized, in most cases, participants performed the tasks just as well or better in the physical modality (based on time and error metrics). Additional results include an analysis of open-ended feedback from participants and discussion of implications for further research on the value of data physicalization. All data and supplemental materials are available at https://osf.io/7xdq4/. | true | true | [
"Bridger Herman",
"Cullen D. Jackson",
"Daniel F. Keefe"
] | [
"HM"
] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://osf.io/preprints/osf/z4s9d",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/7xdq4/",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1137.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/84IvcxzBg7U",
"icon": "video"
}
] |
Vis | 2,024 | Towards Dataset-scale and Feature-oriented Evaluation of Text Summarization in Large Language Model Prompts | 10.1109/TVCG.2024.3456398 | Recent advancements in Large Language Models (LLMs) and Prompt Engineering have made chatbot customization more accessible, significantly reducing barriers to tasks that previously required programming skills. However, prompt evaluation, especially at the dataset scale, remains complex due to the need to assess prompts across thousands of test instances within a dataset. Our study, based on a comprehensive literature review and pilot study, summarized five critical challenges in prompt evaluation. In response, we introduce a feature-oriented workflow for systematic prompt evaluation. In the context of text summarization, our workflow advocates evaluation with summary characteristics (feature metrics) such as complexity, formality, or naturalness, instead of using traditional quality metrics like ROUGE. This design choice enables a more user-friendly evaluation of prompts, as it guides users in sorting through the ambiguity inherent in natural language. To support this workflow, we introduce Awesum, a visual analytics system that facilitates identifying optimal prompt refinements for text summarization through interactive visualizations, featuring a novel Prompt Comparator design that employs a BubbleSet-inspired design enhanced by dimensionality reduction techniques. We evaluated the effectiveness and general applicability of the system with practitioners from various domains and found that (1) our design helps overcome the learning curve for non-technical people to conduct a systematic evaluation of summarization prompts, and (2) our feature-oriented workflow has the potential to generalize to other NLG and image-generation tasks. For future work, we advocate moving towards feature-oriented evaluation of LLM prompts and discuss unsolved challenges in terms of human-agent interaction. | false | true | [
"Sam Yu-Te Lee",
"Aryaman Bahukhandi",
"Dongyu Liu",
"Kwan-Liu Ma"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2407.12192",
"icon": "paper"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1474.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/H4QzA6XFPFs",
"icon": "video"
}
] |
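The Awesum entry above advocates scoring summaries with feature metrics (e.g., complexity, formality) rather than quality metrics like ROUGE. As a loose sketch of the feature-oriented idea, here is a tiny Python example computing a few simple surface features over candidate summaries; these particular features are stand-ins chosen for illustration, not the paper's metrics:

```python
# Sketch: dataset-scale, feature-oriented scoring of candidate summaries.
# The three features below (length, lexical diversity, mean word length) are
# simple stand-ins for richer feature metrics such as complexity or formality.
from statistics import mean

def summary_features(text: str) -> dict:
    words = text.split()
    return {
        "n_words": len(words),
        "type_token_ratio": len(set(w.lower() for w in words)) / max(len(words), 1),
        "mean_word_len": mean(len(w) for w in words) if words else 0.0,
    }

summaries = [
    "The study finds that the drug reduces symptoms in most patients.",
    "Across a large randomized cohort, symptom reduction was statistically significant.",
]

for s in summaries:
    print(summary_features(s))
```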
Vis | 2,024 | Towards Enhancing Low Vision Usability of Data Charts on Smartphones | 10.1109/TVCG.2024.3456348 | The importance of data charts is self-evident, given their ability to express complex data in a simple format that facilitates quick and easy comparisons, analysis, and consumption. However, the inherent visual nature of the charts creates barriers for people with visual impairments to reap the associated benefits to the same extent as their sighted peers. While extant research has predominantly focused on understanding and addressing these barriers for blind screen reader users, the needs of low-vision screen magnifier users have been largely overlooked. In an interview study, almost all low-vision participants stated that it was challenging to interact with data charts on small screen devices such as smartphones and tablets, even though they could technically "see" the chart content. They ascribed these challenges mainly to the magnification-induced loss of visual context that connected data points with each other and also with chart annotations, e.g., axis values. In this paper, we present a method that addresses this problem by automatically transforming charts that are typically non-interactive images into personalizable interactive charts which allow selective viewing of desired data points and preserve visual context as much as possible under screen enlargement. We evaluated our method in a usability study with 26 low-vision participants, who all performed a set of representative chart-related tasks under different study conditions. In the study, we observed that our method significantly improved the usability of charts over both the status quo screen magnifier and a state-of-the-art space compaction-based solution. | true | true | [
"Yash Prakash",
"Pathan Aseef Khan",
"Akshay Kolgar Nayak",
"Sampath Jayarathna",
"Hae-Na Lee",
"Vikas Ashok"
] | [] | [
"V",
"O"
] | [
{
"name": "Supplemental Material",
"url": "https://github.com/accessodu/GraphLite.git",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1917.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/2R4conY9Pfw",
"icon": "video"
}
] |
Vis | 2,024 | Trust Your Gut: Comparing Human and Machine Inference from Noisy Visualizations | 10.1109/TVCG.2024.3456182 | People commonly utilize visualizations not only to examine a given dataset, but also to draw generalizable conclusions about the underlying models or phenomena. Prior research has compared human visual inference to that of an optimal Bayesian agent, with deviations from rational analysis viewed as problematic. However, human reliance on non-normative heuristics may prove advantageous in certain circumstances. We investigate scenarios where human intuition might surpass idealized statistical rationality. In two experiments, we examine individuals' accuracy in characterizing the parameters of known data-generating models from bivariate visualizations. Our findings indicate that, although participants generally exhibited lower accuracy compared to statistical models, they frequently outperformed Bayesian agents, particularly when faced with extreme samples. Participants appeared to rely on their internal models to filter out noisy visualizations, thus improving their resilience against spurious data. However, participants displayed overconfidence and struggled with uncertainty estimation. They also exhibited higher variance than statistical machines. Our findings suggest that analyst gut reactions to visualizations may provide an advantage, even when departing from rationality. These results carry implications for designing visual analytics tools, offering new perspectives on how to integrate statistical models and analyst intuition for improved inference and decision-making. The data and materials for this paper are available at https://osf.io/qmfv6 | false | true | [
"Ratanond Koonchanok",
"Michael E. Papka",
"Khairi Reda"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2407.16871",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/qmfv6",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1256.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/bj8YXso5ly0",
"icon": "video"
}
] |
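The Trust Your Gut entry above compares human inference from noisy bivariate visualizations against an optimal Bayesian agent characterizing a known data-generating model. As a hedged sketch of what such a reference agent can look like (a grid posterior over the correlation of a bivariate normal with a uniform prior; illustrative only, not the paper's exact agent or stimuli):

```python
# Sketch: an "ideal Bayesian agent" estimating the correlation of a bivariate
# normal model from one small sample, the kind of reference human judgments
# can be compared against. Illustrative only.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(5)
true_rho = 0.6
sample = rng.multivariate_normal([0, 0], [[1, true_rho], [true_rho, 1]], size=30)

rhos = np.linspace(-0.99, 0.99, 199)                 # grid over the parameter
log_post = np.array([
    multivariate_normal(mean=[0, 0], cov=[[1, r], [r, 1]]).logpdf(sample).sum()
    for r in rhos
])                                                   # log-likelihood + flat prior
post = np.exp(log_post - log_post.max())
post /= post.sum()

print("posterior mean rho:", float(np.sum(rhos * post)))
```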
Vis | 2,024 | Uncertainty Visualization of Critical Points of 2D Scalar Fields for Parametric and Nonparametric Probabilistic Models | 10.1109/TVCG.2024.3456393 | This paper presents a novel end-to-end framework for closed-form computation and visualization of critical point uncertainty in 2D uncertain scalar fields. Critical points are fundamental topological descriptors used in the visualization and analysis of scalar fields. The uncertainty inherent in data (e.g., observational and experimental data, approximations in simulations, and compression), however, creates uncertainty regarding critical point positions. Uncertainty in critical point positions, therefore, cannot be ignored, given their impact on downstream data analysis tasks. In this work, we study uncertainty in critical points as a function of uncertainty in data modeled with probability distributions. Although Monte Carlo (MC) sampling techniques have been used in prior studies to quantify critical point uncertainty, they are often expensive and are infrequently used in production-quality visualization software. We, therefore, propose a new end-to-end framework to address these challenges that comprises a threefold contribution. First, we derive the critical point uncertainty in closed form, which is more accurate and efficient than the conventional MC sampling methods. Specifically, we provide the closed-form and semianalytical (a mix of closed-form and MC methods) solutions for parametric (e.g., uniform, Epanechnikov) and nonparametric models (e.g., histograms) with finite support. Second, we accelerate critical point probability computations using a parallel implementation with the VTK-m library, which is platform portable. Finally, we demonstrate the integration of our implementation with the ParaView software system to demonstrate near-real-time results for real datasets. | false | true | [
"Tushar M. Athawale",
"Zhe Wang",
"David Pugmire",
"Kenneth Moreland",
"Qian Gong",
"Scott Klasky",
"Chris R. Johnson",
"Paul Rosen"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2407.18015v1",
"icon": "paper"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1277.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/kaB1IpYiCCU",
"icon": "video"
}
] |
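The critical-point uncertainty paper above derives closed-form probabilities that grid points are critical under per-vertex noise models, replacing the Monte Carlo sampling used in prior work. As a hedged sketch of that MC baseline idea only (a single interior vertex of a toy 3x3 patch, independent uniform noise, 4-connected neighborhood; all values invented):

```python
# Sketch: Monte Carlo estimate of the probability that the center vertex of a
# 3x3 patch of an uncertain 2D scalar field is a local minimum, assuming
# independent uniform noise per vertex. The paper replaces exactly this kind of
# sampling with closed-form / semi-analytical solutions.
import numpy as np

rng = np.random.default_rng(3)
mean_patch = np.array([[0.9, 0.8, 1.0],
                       [0.7, 0.5, 0.9],     # center vertex has the lowest mean
                       [1.0, 0.6, 0.8]])
half_width = 0.3                            # uniform noise in [mean - w, mean + w]
n_samples = 100_000

samples = rng.uniform(mean_patch - half_width, mean_patch + half_width,
                      size=(n_samples, 3, 3))
center = samples[:, 1, 1]
neighbors = np.stack([samples[:, 0, 1], samples[:, 2, 1],
                      samples[:, 1, 0], samples[:, 1, 2]], axis=1)
p_min = np.mean(np.all(center[:, None] < neighbors, axis=1))
print(f"P(center is a local minimum) ~ {p_min:.3f}")
```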
Vis | 2,024 | Uncertainty-Aware Deep Neural Representations for Visual Analysis of Vector Field Data | 10.1109/TVCG.2024.3456360 | The widespread use of Deep Neural Networks (DNNs) has recently resulted in their application to challenging scientific visualization tasks. While advanced DNNs demonstrate impressive generalization abilities, understanding factors like prediction quality, confidence, robustness, and uncertainty is crucial. These insights aid application scientists in making informed decisions. However, DNNs lack inherent mechanisms to measure prediction uncertainty, prompting the creation of distinct frameworks for constructing robust uncertainty-aware models tailored to various visualization tasks. In this work, we develop uncertainty-aware implicit neural representations to model steady-state vector fields effectively. We comprehensively evaluate the efficacy of two principled deep uncertainty estimation techniques: (1) Deep Ensemble and (2) Monte Carlo Dropout, aimed at enabling uncertainty-informed visual analysis of features within steady vector field data. Our detailed exploration using several vector data sets indicates that uncertainty-aware models generate informative visualization results of vector field features. Furthermore, incorporating prediction uncertainty improves the resilience and interpretability of our DNN model, rendering it applicable for the analysis of non-trivial vector field data sets. | true | true | [
"Atul Kumar",
"Siddharth Garg",
"Soumya Dutta"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://doi.org/10.48550/arXiv.2407.16119",
"icon": "paper"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1708.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/vEf-mNcR5M0",
"icon": "video"
}
] |
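The uncertainty-aware neural representation entry above evaluates Deep Ensembles and Monte Carlo Dropout for vector-field modeling. As a hedged sketch of the MC Dropout idea only (a generic PyTorch coordinate network invented for illustration, not the paper's architecture or training setup): keep dropout active at inference and aggregate repeated stochastic forward passes into a predictive mean and an uncertainty estimate.

```python
# Sketch of Monte Carlo Dropout for an implicit neural representation mapping
# 3D positions to 3D vectors. Generic architecture for illustration only.
import torch
import torch.nn as nn

class DropoutINR(nn.Module):
    def __init__(self, hidden=128, p=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(hidden, 3),                      # 3D vector output
        )

    def forward(self, coords):
        return self.net(coords)

@torch.no_grad()
def mc_dropout_predict(model, coords, n_samples=32):
    model.train()                                      # keep dropout stochastic at inference
    preds = torch.stack([model(coords) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)         # prediction, per-point uncertainty

model = DropoutINR()
coords = torch.rand(1024, 3)
mean_vec, std_vec = mc_dropout_predict(model, coords)
print(mean_vec.shape, std_vec.shape)                   # torch.Size([1024, 3]) twice
```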
Vis | 2,024 | Understanding Visualization Authoring Techniques for Genomics Data in the Context of Personas and Tasks | 10.1109/TVCG.2024.3456298 | Genomics experts rely on visualization to extract and share insights from complex and large-scale datasets. Beyond off-the-shelf tools for data exploration, there is an increasing need for platforms that aid experts in authoring customized visualizations for both exploration and communication of insights. A variety of interactive techniques have been proposed for authoring data visualizations, such as template editing, shelf configuration, natural language input, and code editors. However, it remains unclear how genomics experts create visualizations and which techniques best support their visualization tasks and needs. To address this gap, we conducted two user studies with genomics researchers: (1) semi-structured interviews (n = 20) to identify the tasks, user contexts, and current visualization authoring techniques and (2) an exploratory study (n = 13) using visual probes to elicit users' intents and desired techniques when creating visualizations. Our contributions include (1) a characterization of how visualization authoring is currently utilized in genomics visualization, identifying limitations and benefits in light of common criteria for authoring tools, and (2) generalizable design implications for genomics visualization authoring tools based on our findings on task- and user-specific usefulness of authoring techniques. All supplemental materials are available at https://osf.io/bdj4v/. | false | true | [
"Astrid van den Brandt",
"Sehi L'Yi",
"Huyen N. Nguyen",
"Anna Vilanova",
"Nils Gehlenborg"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://osf.io/preprints/osf/6f42j",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/bdj4v/",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1342.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/Tw14XEoGMAk",
"icon": "video"
}
] |
Vis | 2,024 | UnDRground Tubes: Exploring Spatial Data With Multidimensional Projections and Set Visualization | 10.1109/TVCG.2024.3456314 | In various scientific and industrial domains, analyzing multivariate spatial data, i.e., vectors associated with spatial locations, is common practice. To analyze those datasets, analysts may turn to methods such as Spatial Blind Source Separation (SBSS). Designed explicitly for spatial data analysis, SBSS finds latent components in the dataset and is superior to popular non-spatial methods, like PCA. However, when analysts try different tuning parameter settings, the amount of latent components complicates analytical tasks. Based on our years-long collaboration with SBSS researchers, we propose a visualization approach to tackle this challenge. The main component is UnDRground Tubes (UT), a general-purpose idiom combining ideas from set visualization and multidimensional projections. We describe the UT visualization pipeline and integrate UT into an interactive multiple-view system. We demonstrate its effectiveness through interviews with SBSS experts, a qualitative evaluation with visualization experts, and computational experiments. SBSS experts were excited about our approach. They saw many benefits for their work and potential applications for geostatistical data analysis more generally. UT was also well received by visualization experts. Our benchmarks show that UT projections and its heuristics are appropriate. | false | true | [
"Nikolaus Piccolotto",
"Markus Wallinger",
"Silvia Miksch",
"Markus Bögl"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://osf.io/zgphx",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/c7yga/",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1272.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/JAizrYjsDB8",
"icon": "video"
}
] |
Vis | 2,024 | Unmasking Dunning-Kruger Effect in Visual Reasoning & Judgment | 10.1109/TVCG.2024.3456326 | The Dunning-Kruger Effect (DKE) is a metacognitive phenomenon where low-skilled individuals tend to overestimate their competence while high-skilled individuals tend to underestimate their competence. This effect has been observed in a number of domains including humor, grammar, and logic. In this paper, we explore if and how DKE manifests in visual reasoning and judgment tasks. Across two online user studies involving (1) a sliding puzzle game and (2) a scatterplot-based categorization task, we demonstrate that individuals are susceptible to DKE in visual reasoning and judgment tasks: those who performed best underestimated their performance, while bottom performers overestimated their performance. In addition, we contribute novel analyses that correlate susceptibility of DKE with personality traits and user interactions. Our findings pave the way for novel modes of bias detection via interaction patterns and establish promising directions towards interventions tailored to an individual's personality traits. All materials and analyses are in supplemental materials: https://github.com/CAV-Lab/DKE_supplemental.git. | false | true | [
"Mengyu Chen",
"Yijun Liu",
"Emily Wall"
] | [] | [
"V",
"O"
] | [
{
"name": "Supplemental Material",
"url": "https://github.com/CAV-Lab/DKE_supplemental.git",
"icon": "other"
},
{
"name": "Study 1",
"url": "https://osf.io/hqp6w?view_only=0a072aec326e475b88905fd6e17a807f",
"icon": "other"
},
{
"name": "Study 2",
"url": "https://aspredicted.org/blind.php?x=LF1_LQH",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1202.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/-paNXRpqH1E",
"icon": "video"
}
] |
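The Dunning-Kruger entry above compares self-estimated and actual performance in visual reasoning tasks. As a hedged sketch of the classic DKE analysis pattern (bin participants by actual-score quartile, then compare mean perceived vs. actual percentile per bin), using synthetic data rather than the study's data or code:

```python
# Sketch of the classic Dunning-Kruger analysis on synthetic data: bottom
# performers typically overestimate, top performers underestimate.
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
n = 200
actual = rng.normal(50, 15, n)                               # task performance score
# Self-estimates regress toward the middle, the typical DKE pattern.
estimated_pct = np.clip(0.4 * (actual - 50) + 55 + rng.normal(0, 10, n), 0, 100)

df = pd.DataFrame({
    "actual_pct": pd.Series(actual).rank(pct=True) * 100,    # actual percentile
    "estimated_pct": estimated_pct,                          # self-estimated percentile
})
df["quartile"] = pd.qcut(df["actual_pct"], 4, labels=["Q1", "Q2", "Q3", "Q4"])

print(df.groupby("quartile", observed=True)[["actual_pct", "estimated_pct"]]
        .mean().round(1))
```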
Vis | 2,024 | Unveiling How Examples Shape Visualization Design Outcomes | 10.1109/TVCG.2024.3456407 | Visualization designers (e.g., journalists or data analysts) often rely on examples to explore the space of possible designs, yet we have little insight into how examples shape data visualization design outcomes. While the effects of examples have been studied in other disciplines, such as web design or engineering, the results are not readily applicable to visualization due to inconsistencies in findings and challenges unique to visualization design. Towards bridging this gap, we conduct an exploratory experiment involving 32 data visualization designers focusing on the influence of five factors (timing, quantity, diversity, data topic similarity, and data schema similarity) on objectively measurable design outcomes (e.g., numbers of designs and idea transfers). Our quantitative analysis shows that when examples are introduced after initial brainstorming, designers curate examples with topics less similar to the dataset they are working on and produce more designs with a high variation in visualization components. Also, designers copy more ideas from examples with higher data schema similarities. Our qualitative analysis of participants' thought processes provides insights into why designers incorporate examples into their designs, revealing potential factors that have not been previously investigated. Finally, we discuss how our results inform how designers may use examples during design ideation as well as future research on quantifying designs and supporting example-based visualization design. All supplemental materials are available in our OSF repo. | true | true | [
"Hannah K. Bako",
"Xinyi Liu",
"Grace Ko",
"Hyemi Song",
"Leilani Battle",
"Zhicheng Liu"
] | [] | [
"V",
"O"
] | [
{
"name": "Supplemental Material",
"url": "https://osf.io/sbp2k/wiki/home/",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1414.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/6Nh--7IK6fw",
"icon": "video"
}
] |
Vis | 2,024 | User Experience of Visualizations in Motion: A Case Study and Design Considerations | 10.1109/TVCG.2024.3456319 | We present a systematic review, an empirical study, and a first set of considerations for designing visualizations in motion, derived from a concrete scenario in which these visualizations were used to support a primary task. In practice, when viewers are confronted with embedded visualizations, they often have to focus on a primary task and can only quickly glance at a visualization showing rich, often dynamically updated, information. As such, the visualizations must be designed so as not to distract from the primary task, while at the same time being readable and useful for aiding the primary task. For example, in games, players who are engaged in a battle have to look at their enemies but also read the remaining health of their own game character from the health bar over their character's head. Many trade-offs are possible in the design of embedded visualizations in such dynamic scenarios, which we explore in-depth in this paper with a focus on user experience. We use video games as an example of an application context with a rich existing set of visualizations in motion. We begin our work with a systematic review of in-game visualizations in motion. Next, we conduct an empirical user study to investigate how different embedded visualizations in motion designs impact user experience. We conclude with a set of considerations and trade-offs for designing visualizations in motion more broadly as derived from what we learned about video games. All supplemental materials of this paper are available at osf.io/3v8wm/. | true | true | [
"Lijie Yao",
"Federica Bucchieri",
"Victoria McArthur",
"Anastasia Bezerianos",
"Petra Isenberg"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/abs/2408.01991",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/3v8wm/",
"icon": "other"
},
{
"name": "Preregistration",
"url": "https://osf.io/h4cks",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1451.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/X9GOtQyXfx8",
"icon": "video"
}
] |
Vis | 2,024 | VADIS: A Visual Analytics Pipeline for Dynamic Document Representation and Information-Seeking | 10.1109/TVCG.2024.3456339 | In the biomedical domain, visualizing the document embeddings of an extensive corpus has been widely used in information-seeking tasks. However, three key challenges with existing visualizations make it difficult for clinicians to find information efficiently. First, the document embeddings used in these visualizations are generated statically by pretrained language models, which cannot adapt to the user's evolving interest. Second, existing document visualization techniques cannot effectively display how the documents are relevant to users' interest, making it difficult for users to identify the most pertinent information. Third, existing embedding generation and visualization processes suffer from a lack of interpretability, making it difficult to understand, trust and use the result for decision-making. In this paper, we present a novel visual analytics pipeline for user-driven document representation and iterative information seeking (VADIS). VADIS introduces a prompt-based attention model (PAM) that generates dynamic document embedding and document relevance adjusted to the user's query. To effectively visualize these two pieces of information, we design a new document map that leverages a circular grid layout to display documents based on both their relevance to the query and the semantic similarity. Additionally, to improve the interpretability, we introduce a corpus-level attention visualization method to improve the user's understanding of the model focus and to enable the users to identify potential oversight. This visualization, in turn, empowers users to refine, update and introduce new queries, thereby facilitating a dynamic and iterative information-seeking experience. We evaluated VADIS quantitatively and qualitatively on a real-world dataset of biomedical research papers to demonstrate its effectiveness. | false | true | [
"Rui Qiu",
"Yamei Tu",
"Po-Yin Yen",
"Han-Wei Shen"
] | [
"BP"
] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2504.05697v1",
"icon": "paper"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1802.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/iafjQjWEHIY",
"icon": "video"
}
] |
Vis | 2,024 | VisEval: A Benchmark for Data Visualization in the Era of Large Language Models | 10.1109/TVCG.2024.3456320 | Translating natural language to visualization (NL2VIS) has shown great promise for visual data analysis, but it remains a challenging task that requires multiple low-level implementations, such as natural language processing and visualization design. Recent advancements in pre-trained large language models (LLMs) are opening new avenues for generating visualizations from natural language. However, the lack of a comprehensive and reliable benchmark hinders our understanding of LLMs' capabilities in visualization generation. In this paper, we address this gap by proposing a new NL2VIS benchmark called VisEval. Firstly, we introduce a high-quality and large-scale dataset. This dataset includes 2,524 representative queries covering 146 databases, paired with accurately labeled ground truths. Secondly, we advocate for a comprehensive automated evaluation methodology covering multiple dimensions, including validity, legality, and readability. By systematically scanning for potential issues with a number of heterogeneous checkers, VisEval provides reliable and trustworthy evaluation outcomes. We run VisEval on a series of state-of-the-art LLMs. Our evaluation reveals prevalent challenges and delivers essential insights for future advancements. | false | true | [
"Nan Chen",
"Yuge Zhang",
"Jiahang Xu",
"Kan Ren",
"Yuqing Yang"
] | [
"BP"
] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2407.00981",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://github.com/microsoft/VisEval",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1332.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/lKkg-pUufh8",
"icon": "video"
}
] |
Vis | 2,024 | Visual Analysis of Multi-outcome Causal Graphs | 10.1109/TVCG.2024.3456346 | We introduce a visual analysis method for multiple causal graphs with different outcome variables, namely, multi-outcome causal graphs. Multi-outcome causal graphs are important in healthcare for understanding multimorbidity and comorbidity. To support the visual analysis, we collaborated with medical experts to devise two comparative visualization techniques at different stages of the analysis process. First, a progressive visualization method is proposed for comparing multiple state-of-the-art causal discovery algorithms. The method can handle mixed-type datasets comprising both continuous and categorical variables and assist in the creation of a fine-tuned causal graph of a single outcome. Second, a comparative graph layout technique and specialized visual encodings are devised for the quick comparison of multiple causal graphs. In our visual analysis approach, analysts start by building individual causal graphs for each outcome variable, and then, multi-outcome causal graphs are generated and visualized with our comparative technique for analyzing differences and commonalities of these causal graphs. Evaluation includes quantitative measurements on benchmark datasets, a case study with a medical expert, and expert user studies with real-world health research data. | true | true | [
"Mengjie Fan",
"Jinlu Yu",
"Daniel Weiskopf",
"Nan Cao",
"Huaiyu Wang",
"Liang Zhou"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2408.02679",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://github.com/mengjiefan/multi_outcome/tree/vis_rev_sub",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1693.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/bu5PgW9Q6Kg",
"icon": "video"
}
] |
Vis | 2,024 | Visual Support for the Loop Grafting Workflow on Proteins | 10.1109/TVCG.2024.3456401 | In understanding and redesigning the function of proteins in modern biochemistry, protein engineers are increasingly focusing on exploring regions in proteins called loops. Analyzing various characteristics of these regions helps the experts design the transfer of the desired function from one protein to another. This process is denoted as loop grafting. We designed a set of interactive visualizations that provide experts with visual support through all the loop grafting pipeline steps. The workflow is divided into several phases, reflecting the steps of the pipeline. Each phase is supported by a specific set of abstracted 2D visual representations of proteins and their loops that are interactively linked with the 3D View of proteins. By sequentially passing through the individual phases, the user shapes the list of loops that are potential candidates for loop grafting. Finally, the actual in-silico insertion of the loop candidates from one protein to the other is performed, and the results are visually presented to the user. In this way, the fully computational rational design of proteins and their loops results in newly designed protein structures that can be further assembled and tested through in-vitro experiments. We showcase the contribution of our visual support design on a real case scenario changing the enantiomer selectivity of the engineered enzyme. Moreover, we provide the readers with the experts' feedback. | false | true | [
"Filip Opálený",
"Pavol Ulbrich",
"Joan Planas-Iglesias",
"Jan Byška",
"Jan Štourač",
"David Bednář",
"Katarína Furmanová",
"Barbora Kozlikova"
] | [
"HM"
] | [
"PW",
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2407.20054",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://gitlab.fi.muni.cz/visitlab/loopgrafter-frontend-1.2",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1597.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/TjB6UTqQMHc",
"icon": "video"
},
{
"name": "Project Website with Demo",
"url": "https://loschmidt.chemi.muni.cz/loopgrafter",
"icon": "project_website"
}
] |
Vis | 2,024 | Visualization Atlases: Explaining and Exploring Complex Topics through Data, Visualization, and Narration | 10.1109/TVCG.2024.3456311 | This paper defines, analyzes, and discusses the emerging genre of visualization atlases. We currently witness an increase in web-based, data-driven initiatives that call themselves "atlases" while explaining complex, contemporary issues through data and visualizations: climate change, sustainability, AI, or cultural discoveries. To understand this emerging genre and inform their design, study, and authoring support, we conducted a systematic analysis of 33 visualization atlases and semi-structured interviews with eight visualization atlas creators. Based on our results, we contribute (1) a definition of a visualization atlas as a compendium of (web) pages aimed at explaining and supporting exploration of data about a dedicated topic through data, visualizations and narration. (2) a set of design patterns of 8 design dimensions, (3) insights into the atlas creation from interviews and (4) the definition of 5 visualization atlas genres. We found that visualization atlases are unique in the way they combine i) exploratory visualization, ii) narrative elements from data-driven storytelling and iii) structured navigation mechanisms. They target a wide range of audiences with different levels of domain knowledge, acting as tools for study, communication, and discovery. We conclude with a discussion of current design practices and emerging questions around the ethics and potential real-world impact of visualization atlases, aimed to inform the design and study of visualization atlases. | false | true | [
"Jinrui Wang",
"Xinhuan Shu",
"Benjamin Bach",
"Uta Hinrichs"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2408.07483v1",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://vis-atlas.github.io",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1446.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/S5Pi7FB5Eek",
"icon": "video"
}
] |
Vis | 2,024 | Visualizing Temporal Topic Embeddings with a Compass | 10.1109/TVCG.2024.3456143 | Dynamic topic modeling is useful for discovering the development and change in latent topics over time. However, present methodology relies on algorithms that separate document and word representations. This prevents the creation of a meaningful embedding space where changes in word usage and documents can be directly analyzed in a temporal context. This paper proposes an expansion of the compass-aligned temporal Word2Vec methodology into dynamic topic modeling. Such a method allows for the direct comparison of word and document embeddings across time in dynamic topics. This enables the creation of visualizations that incorporate temporal word embeddings within the context of documents into topic visualizations. In experiments against the current state-of-the-art, our proposed method demonstrates overall competitive performance in topic relevancy and diversity across temporal datasets of varying size. Simultaneously, it provides insightful visualizations focused on temporal word embeddings while maintaining the insights provided by global topic evolution, advancing our understanding of how topics evolve over time. | false | true | [
"Daniel Palamarchuk",
"Lemara Williams",
"Brian Mayer",
"Thomas Danielson",
"Rebecca Faust",
"Larry M Deschaine PhD",
"Chris North"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2409.10649v2",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://github.com/danilka4/ttec",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1032.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/49ktTLyplJc",
"icon": "video"
}
] |
Vis | 2,024 | VMC: A Grammar for Visualizing Statistical Model Checks | 10.1109/TVCG.2024.3456402 | Visualizations play a critical role in validating and improving statistical models. However, the design space of model check visualizations is not well understood, making it difficult for authors to explore and specify effective graphical model checks. VMC defines a model check visualization using four components: (1) samples of distributions of checkable quantities generated from the model, including predictive distributions for new data and distributions of model parameters; (2) transformations on observed data to facilitate comparison; (3) visual representations of distributions; and (4) layouts to facilitate comparing model samples and observed data. We contribute an implementation of VMC as an R package. We validate VMC by reproducing a set of canonical model check examples, and show how using VMC to generate model checks reduces the edit distance between visualizations relative to existing visualization toolkits. The findings of an interview study with three expert modelers who used VMC highlight challenges and opportunities for encouraging exploration of correct, effective model check visualizations. | false | true | [
"Ziyang Guo",
"Alex Kale",
"Matthew Kay",
"Jessica Hullman"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2408.16702v1",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://mucollective.github.io/vmc/",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1309.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/OqNLDTwT7DY",
"icon": "video"
}
] |
Vis | 2,024 | What Can Interactive Visualization do for Participatory Budgeting in Chicago? | 10.1109/TVCG.2024.3456343 | Participatory budgeting (PB) is a democratic approach to allocating municipal spending that has been adopted in many places in recent years, including in Chicago. Current PB voting resembles a ballot where residents are asked which municipal projects, such as school improvements and road repairs, to fund with a limited budget. In this work, we ask how interactive visualization can benefit PB by conducting a design probe-based interview study (N = 13) with policy workers and academics with expertise in PB, urban planning, and civic HCI. Our probe explores how graphical elicitation of voter preferences and a dashboard of voting statistics can be incorporated into a realistic PB tool. Through qualitative analysis, we find that visualization creates opportunities for city government to set expectations about budget constraints while also granting their constituents greater freedom to articulate a wider range of preferences. However, using visualization to provide transparency about PB requires efforts to mitigate potential access barriers and mistrust. We call for more visualization professionals to help build civic capacity by working in and studying political systems. | true | true | [
"Alex Kale",
"Danni Liu",
"Maria Gabriela Ayala",
"Harper Schwab",
"Andrew M McNutt"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/abs/2407.20103",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/tn6m2/",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1281.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/Uwwba1Z9EbE",
"icon": "video"
}
] |
Vis | 2,024 | What University Students Learn In Visualization Classes | 10.1109/TVCG.2024.3456291 | As a step towards improving visualization literacy, this work investigates how students approach reading visualizations differently after taking a university-level visualization course. We asked students to verbally walk through their process of making sense of unfamiliar visualizations, and conducted a qualitative analysis of these walkthroughs. Our qualitative analysis found that after taking a visualization course, students engaged with visualizations in more sophisticated ways: they were more likely to exhibit design empathy by thinking critically about the tradeoffs behind why a chart was designed in a particular way, and were better able to deconstruct a chart to make sense of it. We also gave students a quantitative assessment of visualization literacy and found no evidence of scores improving after the class, likely because the test we used focused on a different set of skills than those emphasized in visualization classes. While current measurement instruments for visualization literacy are useful, we propose developing standardized assessments for additional aspects of visualization literacy, such as deconstruction and design empathy. We also suggest that these additional aspects could be incorporated more explicitly in visualization courses. | false | true | [
"Maryam Hedayati",
"Matthew Kay"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://osf.io/preprints/osf/kg3am",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://osf.io/w5pum/",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1738.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/j5kScTwQeNk",
"icon": "video"
}
] |
Vis | 2,024 | When Refreshable Tactile Displays Meet Conversational Agents: Investigating Accessible Data Presentation and Analysis with Touch and Speech | 10.1109/TVCG.2024.3456358 | Despite the recent surge of research efforts to make data visualizations accessible to people who are blind or have low vision (BLV), how to support BLV people's data analysis remains an important and challenging question. As refreshable tactile displays (RTDs) become cheaper and conversational agents continue to improve, their combination provides a promising approach to support BLV people's interactive data exploration and analysis. To understand how BLV people would use and react to a system combining an RTD with a conversational agent, we conducted a Wizard-of-Oz study with 11 BLV participants, where they interacted with line charts, bar charts, and isarithmic maps. Our analysis of participants' interactions led to the identification of nine distinct patterns. We also learned that the choice of modalities depended on the type of task and prior experience with tactile graphics, and that participants strongly preferred the combination of RTD and speech to a single modality. In addition, participants with more tactile experience described how tactile images facilitated a deeper engagement with the data and supported independent interpretation. Our findings will inform the design of interfaces for such interactive mixed-modality systems. | true | true | [
"Samuel Reinders",
"Matthew Butler",
"Ingrid Zukerman",
"Bongshin Lee",
"Lizhen Qu",
"Kim Marriott"
] | [
"HM"
] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2408.04806",
"icon": "paper"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1522.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/Xw469H8JWP4",
"icon": "video"
}
] |
Vis | 2,024 | Who Let the Guards Out: Visual Support for Patrolling Games | 10.1109/TVCG.2024.3456306 | Effective security patrol management is critical for ensuring safety in diverse environments such as art galleries, airports, and factories. The behavior of patrols in these situations can be modeled by patrolling games. They simulate the behavior of the patrol and adversary in the building, which is modeled as a graph of interconnected nodes representing rooms. The designers of algorithms solving the game face the problem of analyzing complex graph layouts with temporal dependencies. Therefore, appropriate visual support is crucial for them to work effectively. In this paper, we present a novel tool that helps the designers of patrolling games explore the outcomes of the proposed algorithms and approaches, evaluate their success rate, and propose modifications that can improve their solutions. Our tool offers an intuitive and interactive interface, featuring a detailed exploration of patrol routes and probabilities of taking them, simulation of patrols, and other requested features. In close collaboration with experts in designing patrolling games, we conducted three case studies demonstrating the usage and usefulness of our tool. The prototype of the tool, along with exemplary datasets, is available at https://gitlab.fi.muni.cz/formela/strategy-vizualizer. | false | true | [
"Matěj Lang",
"Adam Štěpánek",
"Róbert Zvara",
"Vojtěch Řehák",
"Barbora Kozlikova"
] | [] | [
"P",
"V",
"O"
] | [
{
"name": "Paper Preprint",
"url": "https://arxiv.org/abs/2407.18705",
"icon": "paper"
},
{
"name": "Supplemental Material",
"url": "https://gitlab.fi.muni.cz/formela/strategy-vizualizer",
"icon": "other"
},
{
"name": "IEEE VIS Conference Page",
"url": "https://ieeevis.org/year/2024/program/paper_v-full-1571.html",
"icon": "other"
},
{
"name": "Fast Forward Video",
"url": "https://youtu.be/BgFsC5T5ILM",
"icon": "video"
}
] |
EuroVis | 2,024 | A Prediction-Traversal Approach for Compressing Scientific Data on Unstructured Meshes with Bounded Error | 10.1111/cgf.15097 | We explore an error-bounded lossy compression approach for reducing scientific data associated with 2D/3D unstructured meshes. While existing lossy compressors offer a high compression ratio with bounded error for regular grid data, methodologies tailored for unstructured mesh data are lacking; for example, one can compress nodal data as 1D arrays, neglecting the spatial coherency of the mesh nodes. Inspired by the SZ compressor, which predicts and quantizes values in a multidimensional array, we dynamically reorganize nodal data into sequences. Each sequence starts with a seed cell; based on a predefined traversal order, the next cell is added to the sequence if the current cell can predict and quantize the nodal data in the next cell with the given error bound. As a result, one can efficiently compress the quantized nodal data in each sequence until all mesh nodes are traversed. This paper also introduces a suite of novel error metrics, namely continuous mean squared error (CMSE) and continuous peak signal-to-noise ratio (CPSNR), to assess compression results for unstructured mesh data. The continuous error metrics are defined by integrating the error function on all cells, providing objective statistics across nonuniformly distributed nodes/cells in the mesh. We evaluate our methods with several scientific simulations ranging from ocean-climate models to computational fluid dynamics simulations, using both traditional and continuous error metrics. We demonstrate superior compression ratios and quality compared to existing lossy compressors. | false | false | [
"Congrong Ren",
"Xin Liang 0001",
"Hanqi Guo 0001"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2312.06080v2",
"icon": "paper"
}
] |
EuroVis | 2,024 | A Systematic Literature Review of User Evaluation in Immersive Analytics | 10.1111/cgf.15111 | User evaluation is a common and useful tool for systematically generating knowledge and validating novel approaches in the domain of Immersive Analytics. Since this research domain centres around users, user evaluation is of extraordinary relevance. Additionally, Immersive Analytics is an interdisciplinary field of research where different communities bring in their own methodologies. It is vital to investigate and synchronise these different approaches with the long-term goal to reach a shared evaluation framework. While there have been several studies focusing on Immersive Analytics as a whole or on certain aspects of the domain, this is the first systematic review of the state of evaluation methodology in Immersive Analytics. The main objective of this systematic literature review is to illustrate methodologies and research areas that are still underrepresented in user studies by identifying current practice in user evaluation in the domain of Immersive Analytics in coherence with the PRISMA protocol. (see https://www.acm.org/publications/class-2012) | false | false | [
"Judith Friedl-Knirsch",
"Fabian Pointecker",
"S. Pfistermüller",
"Christian Stach",
"Christoph Anthes",
"Daniel Roth 0001"
] | [] | [] | [] |
EuroVis | 2,024 | An Experimental Evaluation of Viewpoint-Based 3D Graph Drawing | 10.1111/cgf.15077 | Node-link diagrams are a widely used metaphor for creating visualizations of relational data. Most frequently, such techniques address creating 2D graph drawings, which are easy to use on computer screens and in print. In contrast, 3D node-link graph visualizations are far less used, as they have many known limitations and comparatively few well-understood advantages. A key issue here is that such 3D visualizations require users to select suitable viewpoints. We address this limitation by studying the ability of layout techniques to produce high-quality views of 3D graph drawings. For this, we perform a thorough experimental evaluation, comparing 3D graph drawings, rendered from a covering sampling of all viewpoints, with their 2D counterparts across various state-of-the-art node-link drawing algorithms, graph families, and quality metrics. Our results show that, depending on the graph family, 3D node-link diagrams can contain many viewpoints that yield 2D visualizations of higher quality than those created by directly using 2D node-link diagrams. This not only sheds light on the potential of 3D node-link diagrams but also gives a simple approach to produce high-quality 2D node-link diagrams. | false | false | [
"Simon van Wageningen",
"Tamara Mchedlidze",
"Alexandru C. Telea"
] | [] | [] | [] |
EuroVis | 2,024 | Antarstick: Extracting Snow Height From Time-Lapse Photography | 10.1111/cgf.15088 | The evolution and accumulation of snow cover are among the most important characteristics influencing Antarctica's climate and biotopes. The changes in Antarctica are also substantially impacting global climate change. Therefore, detailed monitoring of snow evolution is key to understanding such changes. One way to conduct this monitoring is by installing trail cameras in a particular region and then processing the captured information. This option is affordable but has some drawbacks: a fully automatic solution for extracting snow height from these images is not feasible, so the process still requires human intervention to manually correct inaccurately extracted values. In this paper, we present Antarstick, a tool that visually guides the user to potentially wrong values extracted from poor-quality images and supports their interactive correction. This tool allows for much quicker and semi-automated processing of snow height from time-lapse photography. | false | false | [
"Matej Lang",
"Radoslav Mráz",
"Marek Trtík",
"Sergej Stoppel",
"Jan Byska",
"Barbora Kozlíková"
] | [] | [] | [] |
EuroVis | 2,024 | AutoVizuA11y: A Tool to Automate Screen Reader Accessibility in Charts | 10.1111/cgf.15099 | Charts remain widely inaccessible on the web for users of assistive technologies like screen readers. This is, in part, due to data visualization experts still lacking the experience, knowledge, and time to consistently implement accessible charts. As a result, screen reader users are prevented from accessing information and are forced to resort to tabular alternatives (if available), limiting the insights that they can gather. We worked with both groups to develop AutoVizuA11y, a tool that automates the addition of accessible features to web-based charts. It generates human-like descriptions of the data using a large language model, calculates statistical insights from the data, and provides keyboard navigation between multiple charts and underlying elements. Fifteen screen reader users interacted with charts made accessible with AutoVizuA11y in a usability test, thirteen of whom praised the tool for its intuitive design, short learning curve, and rich information. On average, they took 66 seconds to complete each of the eight analytical tasks presented and achieved a success rate of 89%. Through a SUS questionnaire, the participants gave AutoVizuA11y an "Excellent" score of 83.5/100 points. We also gathered feedback from two data visualization experts who used the tool. They praised the tool's availability, ease of use, and functionality, and provided feedback on adding AutoVizuA11y support for other technologies in the future. | false | false | [
"Diogo Duarte",
"Rita Costa",
"Pedro Bizarro",
"Carlos Duarte"
] | [] | [] | [] |
EuroVis | 2,024 | AVA: Towards Autonomous Visualization Agents through Visual Perception-Driven Decision-Making | 10.1111/cgf.15093 | With recent advances in multi-modal foundation models, the previously text-only large language models (LLM) have evolved to incorporate visual input, opening up unprecedented opportunities for various applications in visualization. Compared to existing work on LLM-based visualization that generates and controls visualizations with textual input and output only, the proposed approach explores the utilization of the visual processing ability of multi-modal LLMs to develop Autonomous Visualization Agents (AVAs) that can evaluate the generated visualization and iterate on the result to accomplish user-defined objectives expressed through natural language. We propose the first framework for the design of AVAs and present several usage scenarios intended to demonstrate the general applicability of the proposed paradigm. Our preliminary exploration and proof-of-concept agents suggest that this approach can be widely applicable whenever the choices of appropriate visualization parameters require the interpretation of previous visual output. Our study indicates that AVAs represent a general paradigm for designing intelligent visualization systems that can achieve high-level visualization goals, paving the way for developing expert-level visualization agents in the future. | false | false | [
"Shusen Liu 0001",
"Haichao Miao",
"Zhimin Li",
"Matthew L. Olson",
"Valerio Pascucci",
"Peer-Timo Bremer"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2312.04494v1",
"icon": "paper"
}
] |
EuroVis | 2,024 | Beyond ExaBricks: GPU Volume Path Tracing of AMR Data | 10.1111/cgf.15095 | Adaptive Mesh Refinement (AMR) is becoming a prevalent data representation for HPC, and thus also for scientific visualization. AMR data is usually cell centric (which imposes numerous challenges), complex, and generally hard to render. Recent work on GPU-accelerated AMR rendering has made much progress towards real-time volume and isosurface rendering of such data, but so far this work has focused exclusively on ray marching, with simple lighting models and without scattering events or global illumination. True high-quality rendering requires a modified approach that is able to trace arbitrary incoherent paths; but this may not be a perfect fit for the types of data structures recently developed for ray marching. In this paper, we describe a novel approach to high-quality path tracing of complex AMR data, with a specific focus on analyzing and comparing different data structures and algorithms to achieve this goal. | false | false | [
"Stefan Zellmann",
"Qi Wu 0015",
"Alper Sahistan",
"Kwan-Liu Ma",
"Ingo Wald"
] | [] | [
"P"
] | [
{
"name": "Paper Preprint",
"url": "http://arxiv.org/pdf/2211.09997v2",
"icon": "paper"
}
] |